Creation parameters for a game entity.
#include "EntityCreateParams.hpp"
The constructor.
This method is used on the client to create entities with the ID sent from the server, not the automatically created ID that would otherwise (normally) be used.
Returns the "forced" ID that is to be used for the new entity, or
UINT_MAX if the normal ID should be used.
The world in which the entity should be created.
What is the fastest way to find unused enum members?
Commenting values out one by one won't work because I have almost 700 members and want to trim off a few unused ones.
I am not aware of any compiler warning, but you could possibly try with
splint static analyzer tool. According to its documentation (emphasis mine):
Splint detects constants, functions, parameters, variables, types, enumerator members, and structure or union fields that are declared but never used.
I checked, and it works as intended. Here is some example code:
#include <stdio.h>

enum Month { JAN, FEB, MAR };

int main()
{
    enum Month m1 = JAN;
    printf("%d\n", m1);
}
By running the
splint command, you will obtain the following messages:
main.c:3:19: Enum member FEB not used
  A member of an enum type is never used. (Use -enummemuse to inhibit warning)
main.c:3:24: Enum member MAR not used
Can QSqlDatabase use MySQL?
Hi,
I am very much impressed with the SQL classes and support in Qt. However, I usually work with MySQL Community Edition. Is there any support for that database? I don't see a driver for it...
Thanks,
Juan Dent
- blaisesegbeaya
Qt supports the MySQL Community Edition.
Add this to your .pro file:
QT += sql # that will allow the SQL module to be loaded
In your application do the following
#include <QSqlDatabase>
QSqlDatabase mdb; // instantiate a QSqlDatabase class variable
int someFunction()
{
mdb = QSqlDatabase::addDatabase("QMYSQL", "MyConnection"); // load the MySQL driver
mdb.setHostName(serverAddress);
mdb.setUserName(userName);
mdb.setPassword(passWord);
mdb.setDatabaseName(yourDefaultDatabaseName);
return mdb.open();
}
I make no attempt here to check for errors. Kindly read the documentation.
Once the database is open, you can use the high-level classes of Qt. You will see that the query classes take a QSqlDatabase as a parameter.
Hope I helped.
- SGaist Lifetime Qt Champion
Hi,
In addition to what @blaisesegbeaya wrote. Here you can find the complete list of SQL drivers supported by Qt.
Note that you need to have the MySQL client libraries installed. Depending on the version of them you have, you may have to rebuild the driver.
The first session in our statistical learning with Python series will briefly touch on some of the core components of Python’s scientific computing stack that we will use extensively later in the course. We will not only introduce two important libraries for data wrangling, numpy and pandas, but also show how to create plots using matplotlib. Please note that this is not a thorough introduction to these libraries; instead, we would like to point out what basic functionality they provide and how they differ from their counterparts in R.
But before we get into the details we will briefly describe how to setup a Python environment and what packages you need to install in order to run the code examples in this notebook.
Requirements
Python environment
We strongly recommend that you use a bundled Python distribution such as Anaconda. We have assembled a quick installation guide for Mac, Linux, and Windows in a previous blog post.
To run the R examples in this code you also need:
rpy2>=2.3.1
You can find instructions on how to install
rpy2 here. If you have a working R environment on your machine, the following command should install
rpy2:
$ pip install -U rpy2
To test if
rpy2 was installed correctly run:
$ python -m 'rpy2.tests'
If you run on Anaconda and it complains that it is missing
libreadline.so, please install the following conda package:
$ conda install python=2.7.5=2
IPython
IPython is an interactive computing environment for Python. It is a great tool for interactive data analysis and programming in general. Amongst other things it features a web-based notebook server that supports code, documentation, inline plots, and much more. In fact, all blog posts in this series will be written using IPython notebooks with the advantage that you can simply download it from here and either run it locally or view it on nbviewer.
Manipulating Data
The goal of this session is to get familiar with the basics of how to work with data in Python. The basic data containers that are used to manipulate data in Python are n-dimensional arrays that act either as vectors, matrices, or tensors.
In contrast to statistical computing environments like R, the fundamental data structures for data analysis in Python are not built into the computing environment but are available via dedicated 3rd party libraries. These libraries are called
numpy and
pandas.
Numpy
Numpy is the lingua-franca in the Python scientific computing ecosystem. It basically provides an n-dimensional array object that holds elements of a specific
dtype (e.g.
numpy.float64 or
numpy.int32). Most packages that we will discuss in this series will directly operate on arrays. Numpy also provides common operations on arrays such as element-wise arithmetic, indexing/slicing, and basic linear algebra (dot product, matrix decompositions, …).
Below we show some basic working with numpy arrays:
from __future__ import division  # always use floating point division

import numpy as np  # convention, use alias ``np``

# a one dimensional array
x = np.array([2, 7, 5])
print 'x:', x  # print x

# a sequence starting from 4 to 12 with a step size of 3
y = np.arange(4, 12, 3)
print 'y:', y

# element-wise operations on arrays
print 'x + y:', x + y
print 'x / y:', x / y
print 'x ^ y:', x ** y  # python uses ** for exponentiation
x: [2 7 5]
y: [ 4  7 10]
x + y: [ 6 14 15]
x / y: [ 0.5  1.   0.5]
x ^ y: [     16  823543 9765625]
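The basic linear algebra support mentioned above can be sketched like this (a minimal example with made-up values, not from the original post):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
v = np.array([1, 1])

# dot product (here: matrix-vector multiplication)
Av = np.dot(A, v)  # -> array([3, 7])

# a simple decomposition: the eigenvalues of A
# (their sum equals the trace of A, their product its determinant)
eigvals = np.linalg.eigvals(A)
```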
If you need any help on operations such as
np.arange you can access its documentation by either typing
help(np.arange) or, if you use IPython, by writing a
'?' after the command:
np.arange?.
You can index and slice an array using square brackets
[]. To slice an array, numpy uses Python’s slicing syntax
x[start:end:step] where step is the step size which is optional. If you omit
start or
end it will use the beginning or end, respectively. Python uses exclusive semantics meaning that the element with position
end is not included in the result. Indexing can be done either by position or by using a boolean mask:
print x[1]  # second element of x
print x[1:3]  # slice of x that includes second and third elements
print
print x[-2]  # indexing using negative indices - starts from -1
print x[-np.array([1, 2])]  # fancy indexing using index array
print
print x[np.array([False, True, True])]  # indexing using boolean mask
7
[7 5]

7
[5 7]

[7 5]
For two or more dimensional arrays we just add slicing/indexing arguments, to select the whole dimension you can simply put a colon (
:).
# reshape sequence to 2d array (=matrix) where rows hold contiguous sequences
# then transpose so that columns hold contiguous sub sequences
z_temp = np.arange(1, 13).reshape((3, 4))
print "z_temp"
print z_temp
print

# transpose
z = z_temp.T
print "z = z_temp.T (transpose of z_temp)"
print z
print

# slicing along two dimensions
a = z[2:4, 1:3]
print "a = z[2:4, 1:3]"
print a
print

# slicing along 2nd dimension
b = z[:, 1:3]
print "b = z[:, 1:3]"
print b
print

# first column, returns 1d array
c = z[:, 0]
print "c = z[:, 0]"
print c  # one dimensional
print

# first column but return 2d array (remember: exclusive semantics)
cc = z[:, 0:1]
print "cc = z[:, 0:1]"
print cc  # two dimensional; column vector
z_temp
[[ 1  2  3  4]
 [ 5  6  7  8]
 [ 9 10 11 12]]

z = z_temp.T (transpose of z_temp)
[[ 1  5  9]
 [ 2  6 10]
 [ 3  7 11]
 [ 4  8 12]]

a = z[2:4, 1:3]
[[ 7 11]
 [ 8 12]]

b = z[:, 1:3]
[[ 5  9]
 [ 6 10]
 [ 7 11]
 [ 8 12]]

c = z[:, 0]
[1 2 3 4]

cc = z[:, 0:1]
[[1]
 [2]
 [3]
 [4]]
To get information on the dimensionality and shape of an array you will find the following methods useful:
print z.shape  # number of elements along each axis (=dimension)
print z.ndim  # number of dimensions
print z[:, 0].ndim  # return first column as 1d array
(4, 3)
2
1
In numpy, slicing will return a new array that is basically a view on the original array, thus, it doesn’t require copying any memory. Indexing (in numpy often called fancy indexing), on the other hand, always copies the underlying memory.
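This difference is easy to verify with a minimal sketch (not from the original post): writing through a slice modifies the original array, while writing to a fancy-indexed result does not.

```python
import numpy as np

arr = np.arange(6)

view = arr[1:4]   # slicing returns a view on the same memory
view[0] = 99      # ...so this write shows up in arr: arr[1] is now 99

copy = arr[np.array([0, 2])]  # fancy indexing copies the data
copy[0] = -1                  # ...so arr[0] is still 0
```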
Differences between R and Python
R differentiates between vectors and matrices whereas in numpy both are unified by the n-dimensional
numpy.ndarray class. There are a number of crucial differences in how indexing and slicing are handled in Python vs. R.
Note that the examples below require the Python package
rpy2 to be installed.
# allows execution of R code in IPython
try:
    %load_ext rmagic
except ImportError:
    print "Please install rpy2 to run the R/Python comparison code examples"
The rmagic extension is already loaded. To reload it, use: %reload_ext rmagic
Python uses 0-based indexing whereas indices in R start from 1:
x = np.arange(5)  # arange has excl semantics
x[0]
0
%%R
# tells IPython that the following lines will be R code
x <- seq(0, 4)  # seq has incl semantics
print(x[1])
[1] 0
Python uses exclusive semantics for slicing whereas R uses inclusive semantics:
x[0:2]  # doesn't include index 2
array([0, 1])
%%R
x <- seq(0, 4)  # seq has incl semantics
print(x[1:2])  # includes index 2
[1] 0 1
Negative indices have different semantics: in Python they are used to index from the end on an array whereas in R they are used to drop positions:
x[-2] # second element from the end
3
%%R
x <- seq(0, 4)  # seq has incl semantics
print(x[-2])  # drop 2nd position, ie 1
[1] 0 2 3 4
If you index on a specific position of a matrix both R and Python will return a vector (ie. array with one less dimension). If you want to retain the dimensionality, R supports a
drop=FALSE argument whereas in Python you have to use slicing instead:
X = np.arange(4).reshape((2, 2)).T  # 2d array
X[0:1, :]  # still 2d array - slice selects one element
array([[0, 2]])
%%R
X = matrix(seq(0, 3), 2, 2)
print(X[1, , drop=FALSE])  # use drop=FALSE
     [,1] [,2]
[1,]    0    2
Pandas
Like
numpy,
pandas provides a key data structure: the
pandas.DataFrame; as can be inferred from the name it behaves very much like an R data frame. Pandas data frames address three deficiencies of arrays:
- they hold heterogeneous data; each column can have its own
numpy.dtype,
- the axes of a DataFrame are labeled with column names and row indices,
- and they account for missing values, which is not directly supported by arrays.
Data frames are extremely useful for data munging. They provide a large range of operations such as filter, join, and group-by aggregation.
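As a quick taste of what filtering and group-by aggregation look like (a made-up miniature frame, not the dataset used below):

```python
import pandas as pd

df = pd.DataFrame({'cylinders': [4, 4, 8, 8],
                   'mpg': [30.0, 28.0, 16.0, 14.0]})

# filter: keep only the rows that satisfy a boolean condition
economical = df[df['mpg'] > 20]

# group-by aggregation: mean mpg per number of cylinders
mean_mpg = df.groupby('cylinders')['mpg'].mean()
# cylinders=4 -> 29.0, cylinders=8 -> 15.0
```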
Below we briefly show some of the core functionality of pandas data frames using some sample data from the website of the book “Introduction to Statistical Learning”:
import pandas as pd  # convention, alias ``pd``

# Load car dataset
auto = pd.read_csv("")
auto.head()  # print the first lines
One of the first things you should do when you work with a new dataset is to look at some summary statistics such as mean, min, max, the number of missing values and quantiles. For this, pandas provides the convenience method
pd.DataFrame.describe:
auto.describe()
You can use the dot
. or bracket
[] notation to access columns of the dataset. To add new columns you have to use the bracket
[] notation:
mpg = auto.mpg  # get mpg column
weight = auto['weight']  # get weight column
auto['mpg_per_weight'] = mpg / weight
print auto[['mpg', 'weight', 'mpg_per_weight']].head()
   mpg  weight  mpg_per_weight
0   18    3504        0.005137
1   15    3693        0.004062
2   18    3436        0.005239
3   16    3433        0.004661
4   17    3449        0.004929
Columns and rows of a data frame are labeled, to access/manipulate the labels either use
pd.DataFrame.columns or
pd.DataFrame.index:
print(auto.columns)
print(auto.index[:10])
Index([u'mpg', u'cylinders', u'displacement', u'horsepower', u'weight', u'acceleration', u'year', u'origin', u'name', u'mpg_per_weight'], dtype=object)
Int64Index([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], dtype=int64)
Indexing and slicing work similarly to numpy arrays, except that you can also use column and row labels instead of positions:
auto.ix[0:5, ['weight', 'mpg']] # select the first 5 rows and two columns weight and mpg
For more information on
pandas please consult the excellent online documentation or the references at the end of this post.
Differences between R and Python
The major difference between the data frame in R and pandas from a user’s point of view is that pandas uses an object-oriented interface (ie methods) whereas R uses a functional interface:
auto.head(2)
# this command pushes the pandas.DataFrame auto to R-land %Rpush auto
%%R
auto = data.frame(auto)
print(head(auto, 2))
  mpg cylinders displacement horsepower weight acceleration year origin
0  18         8          307        130   3504         12.0   70      1
1  15         8          350        165   3693         11.5   70      1
                       name mpg_per_weight
0 chevrolet chevelle malibu    0.005136986
1         buick skylark 320    0.004061738
Below is a table that shows some of the methods that the pandas DataFrame provides and the corresponding functions in R:
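A few of the usual correspondences, shown here as a small hedged sketch on a toy frame (pandas methods on the left, the analogous R function in the comments):

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3], 'b': [4.0, 5.0, 6.0]})

df.head(2)     # R: head(df, 2)
df.describe()  # R: summary(df)
df.shape       # R: dim(df)
len(df)        # R: nrow(df)
df.columns     # R: colnames(df)
```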
Plotting using Matplotlib
Like R, there are several different options for creating statistical graphics in Python, including Chaco and Bokeh, but the most common plotting library is Matplotlib. Here is a quick introduction to creating graphics in Python similar to those created with the base R functions.
%pylab inline
import pandas as pd
import matplotlib.pyplot as plt

data = np.random.randn(500)  # array of 500 random numbers
Populating the interactive namespace from numpy and matplotlib
To make a histogram, you can use the
hist() command
plt.hist(data)
plt.ylabel("Counts")
plt.title("The Gaussian Distribution")
<matplotlib.text.Text at 0x5e46d50>
Like R, you can specify various options to change the plotting behavior. For example, to make a histogram of frequency rather than of raw counts you pass the argument
normed=True
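A sketch of such a call is shown below. Note that recent matplotlib releases renamed the argument to density=True; the keyword below targets newer versions, while normed=True applies to the matplotlib of the post's time.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this also runs headless
import matplotlib.pyplot as plt

data = np.random.randn(500)

# density=True (formerly normed=True) rescales the bars so that the
# total area under the histogram is one, i.e. it shows frequencies
counts, bins, _ = plt.hist(data, bins=20, density=True)
area = (counts * np.diff(bins)).sum()  # sums to ~1.0
```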
You can also easily make a scatter plot
x = np.random.randn(50)
y = np.random.randn(50)
plt.plot(x, y, 'bo')  # b for blue, o for circles
plt.xlabel("x")
plt.ylabel("y")
plt.title("A scatterplot")
<matplotlib.text.Text at 0x5e5a890>
Matplotlib supports Matlab-style plotting commands, where you can quickly specify color (b for blue, r for red, k for black etc.) and a symbol for the plotting character (
'-' for solid lines,
'--' for dashed lines,
'*' for stars, …)
s = np.arange(11)
plt.plot(s, s ** 2, 'r--')
[<matplotlib.lines.Line2D at 0x687bd50>]
There is also a scatter command that creates scatterplots
plt.scatter(x, y)
<matplotlib.collections.PathCollection at 0x667a390>
Boxplots are very useful for comparing two distributions
plt.boxplot([x, y])  # Pass a list of two arrays to plot them side-by-side
plt.title("Two box plots, side-by-side")
<matplotlib.text.Text at 0x6155410>
Matplotlib and Pandas
Pandas provides a convenience interface to matplotlib; you can create plots using the
pd.DataFrame.plot() method
# create a scatterplot of weight vs "miles per gallon"
auto.plot(x='weight', y='mpg', style='bo')
plt.title("Scatterplot of weight and mpg")

# create a histogram of "miles per gallon"
plt.figure()
auto.hist('mpg')
plt.title("Histogram of mpg (miles per gallon)")
<matplotlib.text.Text at 0x64e8190>
<matplotlib.figure.Figure at 0x6890c10>
from pandas.tools.plotting import scatter_matrix

_ = scatter_matrix(auto[['mpg', 'cylinders', 'displacement']], figsize=(14, 10))
Matplotlib has a rich set of features to manipulate and style statistical graphics. Over the next few weeks we will cover many of them to help you make charts that you find visually appealing, but for now this should be enough to get you up and running in Python.
Further Reading
Python Scientific Lecture Notes
For a more in-depth discussion of the Python scientific computing ecosystem we strongly recommend the Python Scientific Lecture Notes. The lecture notes contain lots of code examples from applied science such as signal processing, image processing, and machine learning.
Wes McKinney, the original author of pandas, wrote a great book on using Python for data analysis. It is not only the primary reference for pandas but also features a concise yet profound introduction to Python, numpy, and matplotlib.
This post was written by Peter Prettenhofer and Mark Steadman. Please post any feedback.
This post was inspired by the StatLearning MOOC by Stanford.
I must not be drinking enough coffee. I am accustomed to thinking of
them as rt-vbr and nrt-vbr so when I saw vbr I went brain dead, sorry.
:)
-----Original Message-----
From: Brian S. Julin [mailto:bri@...]
Sent: Tuesday, March 26, 2002 1:23 PM
To: Bob Balsover
Subject: Re: [Linux-ATM-General] VBR support
On Tue, 26 Mar 2002, Bob Balsover wrote:
> Is there such a thing as VBR? If so how does it differ from the other
> types?
There are in fact two types of VBR defined in ATM -- VBR-nrt
and VBR-rt. The difference between them is that VBR-rt
polices against jitter like CBR does, so it is used for example
for H.321 videoconferencing whereas VBR-nrt is usually used for things
like coloring packet networks where jitter doesn't matter.
Surf the library at atmforum.org for more information.
--
Brian
From: "Mr. James W. Laferriere" <babydr@...>
Date: Tue, 26 Mar 2002 18:27:12 -0500 (EST)
Hello Dave , I have attached the linux-atm list so they can see
what you have found . Hth , JimL
If the people doing ATM maintenance aren't:
1) Sending me fixes/patches
2) At least lightly skimming linux-net and linux-kernel for
ATM reports.
They need to start doing so.
On? :-))))
> -
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@...
> More majordomo info at
> Please read the FAQ at
>
+------------------------------------------------------------------+
| James W. Laferriere | System Techniques | Give me VMS |
| Network Engineer | P.O. Box 854 | Give me Linux |
| babydr@... | Coudersport PA 16915 | only on AXP |
+------------------------------------------------------------------+
Is there such a thing as VBR? If so how does it differ from the other
types?
--__--__--
Message: 3
From: "Antonio Medina Ortega" <bermejal@...>
To: linux-atm-general@...
Date: Tue, 26 Mar 2002 20:01:24 +0100
Subject: [Linux-ATM-General] VBR support
Hi everybody !!!
I would like to know what the status of implementing VBR support in ATM on Linux is. Is it available yet?
Thanks in advance.
Best regards
Antonio
Looks ok to me.
> static inline struct br2684_dev *BRPRIV(const struct net_device *net_dev)
> {
>- return (struct br2684_dev *) ((char *) (net_dev) -
>- (unsigned long) (&((struct br2684_dev *) 0)->net_dev));
>+ return list_entry(net_dev, struct br2684_dev, net_dev);
> }
Although list_entry does the same thing it's inappropriate here. People
will be confused.
Max
Sorry for the Off Topic query, but I'm trying to isolate and fix a
production problem before the politics get any worse.
We've got an Origin using a Fore/Marconi PCA-200E PCI OC-3 interface.
We need to bring up both LANE and CLIP instances on the same physical
interface.
Has anyone had experience doing this? Is it easy, difficult?
I'm trying to make some things work before too many more tenured egos
can aim at my troops...
TIA,
Gerry
--
Gerry Creager
AATLT/Network Engineering
Texas A&M University
979.458.4020 (Phone) 979.847.8578 (FAX)
I would think that would normally be a reasonably simple
thing to enable/disable in your device driver of choice.
It appears to be so in the Interphase case at least.
Mike
> Fabrice Arnal wrote:
>
> Hello,
>
> I would like to know if an existing current ATM adapter device
> supports the corrupted data delivery option of AAL-5 (though not
> recommended).
>
> Any idea ?
>
> Thanks
>
>
>
[thread starts at:]
The following patch applies on top of Maksim's one. It:
- adds a missing include (hunk #1);
- turns a cast fiesta into the equivalent list_entry (hunk #2);
- unregisters net_device once it isn't needed for bridging (hunk #3);
- compiles and stands a kill-br2684ctl-before-ifconfig-down test: nas0
disappeared as it should.
Without the patch an oops happens, see:
<URL:>
--- net/atm/br2684.c.orig Sat Mar 23 21:21:41 2002
+++ net/atm/br2684.c Sun Mar 24 21:42:30 2002
@@ -15,6 +15,7 @@ Author: Marcell GAL, 2000, XDSL Ltd, Hun
#include <linux/etherdevice.h>
#include <net/arp.h>
#include <linux/rtnetlink.h>
+#include <linux/ip.h>
#include <linux/atmbr2684.h>
#include "ipcommon.h"
@@ -97,8 +98,7 @@ static LIST_HEAD(br2684_devs);
static inline struct br2684_dev *BRPRIV(const struct net_device *net_dev)
{
- return (struct br2684_dev *) ((char *) (net_dev) -
- (unsigned long) (&((struct br2684_dev *) 0)->net_dev));
+ return list_entry(net_dev, struct br2684_dev, net_dev);
}
static inline struct br2684_dev *list_entry_brdev(const struct list_head *le)
@@ -410,6 +410,13 @@ static void br2684_push(struct atm_vcc *
if (skb == NULL) { /* skb==NULL means VCC is being destroyed */
br2684_close_vcc(brvcc);
+ if (list_empty(&brdev->brvccs)) {
+ read_lock(&devs_lock);
+ list_del(&brdev->br2684_devs);
+ read_unlock(&devs_lock);
+ unregister_netdev(&brdev->net_dev);
+ kfree(brdev);
+ }
return;
}
--
Ueimor
Since you did such a fine job of describing the problem,
I can actually tell what the trouble is and how to fix
it even though I'm not currently using that driver.
The problem begins with the fact that <all> ioctls are
vectored to the ioctl entry point of the atmdev that is registered
on the interface in question. In this case, all
are vectored to ia_ioctl(). Right near the
beginning of that ia_ioctl() you will see:
if (!dev->phy->ioctl) return(-EINVAL);
In "theory" the dev->phy pointer was set in suni_init()
to point to the suni ioctl handler.
However, in reality, if you look around line 2528 in iphase.c
you will see that suni_init is NOT called if the phy is
25mbps, DS3, or E3 (as yours is).
Thus phy = 0 at the time of the call and the attempt to
evaluate dev->phy->ioctl causes the seg fault shown below.
(eax is holding what should be phy).
> Code; c8842bee <[iphase]ia_ioctl+2a/52c> <=====
> 0: 83 78 04 00 cmpl $0x0,0x4(%eax) <=====
The obvious simple solution is to insert
if (!dev->phy) return(-EINVAL)
(or I suppose you could also add a phy driver for your interface)
Mike
murble <murble-atmbits@...> :
[nice oops report]
Can you reproduce it with 2.4.19-preX ?
--
Ueimor
Hi,
[1.] One line summary of the problem:
Interphase x531 driver oopses when sonetdiag is run.
[2.] Full description of the problem/report:
suni.o oops when ioctl(4, SONET_GETSTAT) is called
from drivers/atm/suni.c
suni_ioctl(struct atm_dev *dev,unsigned int cmd,void *arg)
...
case SONET_GETSTATZ:
case SONET_GETSTAT:
return fetch_stats(dev,(struct sonet_stats *) arg,
cmd == SONET_GETSTATZ);
When the sonetdiag program from the atm-0.79
[3.] Keywords (i.e., modules, networking, kernel):
modules, atm, networking, sonetdiag
[4.] Kernel version (from /proc/version):
Linux version 2.4.19-pre2-ac2 (root@...) (gcc version 2.95.4 (Debian prerelease)) #1 Tue Mar 5 12:52:36 GMT 2002
[5.] Output of Oops.. message (if applicable) with symbolic information
resolved (see Documentation/oops-tracing.txt)
ksymoops 2.4.3 on i686 2.4.19-pre2-ac2. Options used
-V (default)
-k /proc/ksyms (default)
-l /proc/modules (default)
-o /lib/modules/2.4.19-pre2-ac2/ (default)
-m /boot/System.map-2.4.19-pre2-ac2 .
Unable to handle kernel NULL pointer dereference at virtual address 00000004
c8842bee
*pde = 00000000
Oops: 0000
CPU: 0
EIP: 0010:[<c8842bee>] Not tainted
Using defaults from ksymoops -t elf32-i386 -a i386
EFLAGS: 00010292
eax: 00000000 ebx: bffffd3c ecx: c7fd2b60 edx: 80246110
esi: bffffd3c edi: c5453e04 ebp: c7fd2b60 esp: c54fd6f0
ds: 0018 es: 0018 ss: 0018
Process sonetdiag (pid: 481, stackpage=c54fd000)
Stack: bffffd3c 00000024 c5453e04 c7fd2b60 40169444 4016943c 40169434 4016942c
40169424 4016941c 40169414 4016940c 40169404 40169466 40169464 40169462
40169460 4016945e 4016945c 4016945a 40169458 40169456 40169454 00000030
Call Trace: [<c0199327>] [<c0198632>] [<c0199327>] [<c019c936>] [<c0192fd7>]
[<c0193037>] [<c0193397>] [<c01119b9>] [<c01230f9>] [<c012acae>] [<c012edde>]
[<c0120d63>] [<c0120e4e>] [<c0110c0f>] [<c0110af8>] [<c0200a81>] [<c01212e5>]
[<c0121852>] [<c0106ca4>] [<c01ff6ce>] [<c0106ca4>] [<c01ff6ce>] [<c01469bc>]
[<c01479d2>] [<c01470a4>] [<c0122fc3>] [<c01242f4>] [<c012edde>] [<c0120d63>]
[<c0120e4e>] [<c0110c0f>] [<c01fc580>] [<c0122019>] [<c01b8825>] [<c013cb89>]
[<c0106ca4>] [<c0106bb3>]
Code: 83 78 04 00 75 0c b8 ea ff ff ff e9 e6 04 00 00 89 f6 56 52
>>EIP; c8842bee <[iphase]ia_ioctl+2a/52c> <=====
Trace; c0199326 <piix_dmaproc+22/2c>
Trace; c0198632 <ide_dmaproc+1b2/278>
Trace; c0199326 <piix_dmaproc+22/2c>
Trace; c019c936 <do_rw_disk+2de/530>
Trace; c0192fd6 <start_request+146/220>
Trace; c0193036 <start_request+1a6/220>
Trace; c0193396 <ide_do_request+286/2d4>
Trace; c01119b8 <schedule+1f8/20c>
Trace; c01230f8 <__lock_page+cc/d8>
Trace; c012acae <__alloc_pages+5e/2b0>10af8 <do_page_fault+0/42c>
Trace; c0200a80 <rb_insert_color+50/c4>
Trace; c01212e4 <__vma_link+60/b0>
Trace; c0121852 <do_mmap_pgoff+422/4f8>
Trace; c0106ca4 <error_code+34/3c>
Trace; c01ff6ce <clear_user+2e/3c>
Trace; c0106ca4 <error_code+34/3c>
Trace; c01ff6ce <clear_user+2e/3c>
Trace; c01469bc <padzero+1c/20>
Trace; c01479d2 <load_elf_binary+92e/a80>
Trace; c01470a4 <load_elf_binary+0/a80>
Trace; c0122fc2 <___wait_on_page+aa/b4>
Trace; c01242f4 <filemap_nopage+bc/248>fc580 <atm_ioctl+bc4/c40>
Trace; c0122018 <do_munmap+234/244>
Trace; c01b8824 <sock_ioctl+20/28>
Trace; c013cb88 <sys_ioctl+25c/274>
Trace; c0106ca4 <error_code+34/3c>
Trace; c0106bb2 <system_call+32/38>
Code; c8842bee <[iphase]ia_ioctl+2a/52c>
00000000 <_EIP>:
Code; c8842bee <[iphase]ia_ioctl+2a/52c> <=====
0: 83 78 04 00 cmpl $0x0,0x4(%eax) <=====
Code; c8842bf2 <[iphase]ia_ioctl+2e/52c>
4: 75 0c jne 12 <_EIP+0x12> c8842c00 <[iphase]ia_ioctl+3c/52c>
Code; c8842bf4 <[iphase]ia_ioctl+30/52c>
6: b8 ea ff ff ff mov $0xffffffea,%eax
Code; c8842bf8 <[iphase]ia_ioctl+34/52c>
b: e9 e6 04 00 00 jmp 4f6 <_EIP+0x4f6> c88430e4 <[iphase]ia_ioctl+520/52c>
Code; c8842bfe <[iphase]ia_ioctl+3a/52c>
10: 89 f6 mov %esi,%esi
Code; c8842c00 <[iphase]ia_ioctl+3c/52c>
12: 56 push %esi
Code; c8842c00 <[iphase]ia_ioctl+3c/52c>
13: 52 push %edx
1 warning issued. Results may not be reliable.
[6.] A small shell script or example program which triggers the
problem (if possible)
just running sonetdiag works when the driver is loaded
[7.] Environment
[7.1.] Software (add the output of the ver_linux script here)
Linux sansys4 2.4.19-pre2-ac2 #1 Tue Mar 5 12:52:36 GMT 2002 i686 unknown
Gnu C 2.95.4
Gnu make 3.79.1
util-linux 2.11n
mount 2.11n
modutils 2.4.13
e2fsprogs 1.26
Linux C Library 2.2.5
Dynamic linker (ldd) 2.2.5
Procps 2.0.7
Net-tools 1.60
Console-tools 0.2.3
Sh-utils 2.0.11
Modules Loaded nfs lockd sunrpc 3c59x rtc
[7.2.] Processor information (from /proc/cpuinfo):
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 7
model name : Pentium III (Katmai)
stepping : 3
cpu MHz : 498.47892.87
[7.3.] Module information (from /proc/modules):
nfs 69660 1 (autoclean)
lockd 46656 1 (autoclean) [nfs]
sunrpc 57428 1 (autoclean) [nfs lockd]
3c59x 24840 1
rtc 5528 0 (autoclean)
normally iphase and suni are loaded, but as explained above this
card has been removed and i can't access it for a few days.
[7.4.] Loaded driver and hardware information (/proc/ioports, /proc/iomem)
[7.5.] PCI information ('lspci -vvv' as root)
00:00.0 Host bridge: Intel Corp. 440BX/ZX - 82443BX/ZX Host bridge (rev 03)
Region 0: Memory at f4000000 (32-bit, prefetchable) [size=64M]
Capabilities: [a0] AGP version 1.0
Status: RQ=31 SBA+ 64bit- FW- Rate=x1,x2
Command: RQ=0 SBA- AGP- 64bit- FW- Rate=<none>
00:01.0 PCI bridge: Intel Corp. 440BX/ZX - 82443BX/ZX AGP bridge (rev 03)
Bus: primary=00, secondary=01, subordinate=01, sec-latency=64
I/O behind bridge: 0000e000-0000efff
Memory behind bridge: fc000000-feffffff
Prefetchable memory behind bridge: f9000000-f9ffffff
BridgeCtl: Parity- SERR- NoISA- VGA+ MAbort- >Reset- FastB2B+
00:07.0 ISA bridge: Intel Corp. 82371AB PIIX4 ISA
00:07.1 IDE interface: Intel Corp. 82371AB PIIX4 IDE (rev 01) (prog-if 80 [Master]) 4: I/O ports at ffa0 [size=16]
00:07.2 USB Controller: Intel Corp. 82371AB PIIX4 USB (rev 01) (prog-if 00 [UH D routed to IRQ 11
Region 4: I/O ports at dce0 [size=32]
00:07.3 Bridge: Intel Corp. 82371AB PIIX4 ACPI ? routed to IRQ 9
00:11.0 Ethernet controller: 3Com Corporation 3c905B 100BaseTX [Cyclone] (rev 24)
Subsystem: Dell Computer Corporation 3C905B Fast Etherlink XL 10 (2500ns min, 2500ns max), cache line size 08
Interrupt: pin A routed to IRQ 11
Region 0: I/O ports at dc00 [size=128]
Region 1: Memory at ff000000 (32-bit, non-prefetchable) [size=128]
Expansion ROM at fb000000 [disabled] [size=128-
01:00.0 VGA compatible controller: ATI Technologies Inc 3D Rage Pro AGP 1X/2X (rev 5c) (prog-if 00 [VGA])
Subsystem: Dell Computer Corporation Optiplex GX1 Onboard 9
Region 0: Memory at fd000000 (32-bit, non-prefetchable) [size=16M]
Region 1: I/O ports at ec00 [size=256]
Region 2: Memory at fcfff000 (32-bit, non-prefetchable) [size=4K]
Expansion ROM at <unassigned> [disabled] [size=128K]
Capabilities: [50] AGP version 1.0
Status: RQ=255 SBA+ 64bit- FW- Rate=x1,x2
Command: RQ=0 SBA- AGP- 64bit- FW- Rate=<none>
[7.6.] SCSI information (from /proc/scsi/scsi)
[7.7.] Other information that might be relevant to the problem
(please look in /proc and include all information that you
think to be relevant):
It seems to work
[X.] Other notes, patches, fixes, workarounds:
straces sonetdiag shows
socket(0x8 /* AF_ATMPVC? */, SOCK_DGRAM, 0) = 4
ioctl(4, SONET_GETSTAT <unfinished ...>
+++ killed by SIGSEGV +++
at this point the oops has occured.
The drivers work ok even after the kernel has oops and general
ATM with Classical IP / ATM seems to be working fine.
I have the same problem with misc other 2.4.recent > 17
kernels on various PC hardware.
murble
Hi Sudheendra!
I don't know Q.2630.2 yet. But playing with AAL2 I simply did the changes
below to the QOS structure...
I personally think that the min/max UUI values are not necessary. But some
people liked them. It allows different applications to use the same AAL2
path. But it adds some stupid dependencies.
The big problem doing such a patch is that library and kernel have to use
the same header file...
Regards,
Stephan
--------------------------------
Changes on linux/atm.h:
union atm_aal_specific {
struct {
unsigned char cid;
unsigned char pdu_type;
unsigned char min_uui;
unsigned char max_uui;
} aal2;
};
struct atm_qos {
struct atm_trafprm txtp; /* parameters in TX direction */
struct atm_trafprm rxtp __ATM_API_ALIGN; /* parameters in RX direction */
unsigned char aal __ATM_API_ALIGN;
union atm_aal_specific aal_specific __ATM_API_ALIGN; /* aal specific parameters (AAL2, ...) */
};
> -----Original Message-----
> From: linux-atm-general-admin@...
> [mailto:linux-atm-general-admin@...]On Behalf Of
> Sudheendra S
> Sent: Friday, 22 March 2002 06:04
> To: ATM_LIST
> Subject: [Linux-ATM-General] Query on AAL type 2 signalling protocol
>
>
>
Hello,
Here is a solution using RFC1483/2684 routed protocols with the CLIP
implementation and a PVC. A simple script:
#!/bin/sh
# For RFC1483/2684 routed protocols
IP_ADDRESS=217.126.2.10
NETMASK=255.255.255.192
GATEWAY=217.126.2.2
VPI=8
VCI=32
# Load ATMARP protocol
echo Loading atmarpd... &&
atmarpd -b -l syslog -m &&
sleep 3s &&
# Create interface
echo Creating interface... &&
atmarp -c atm0 &&
# Activate interface
echo Activating interface... &&
ifconfig atm0 $IP_ADDRESS netmask $NETMASK up &&
sleep 3s &&
# Create a PVC, link IP_ADDRESS with INTERFACE, VPI, VCI
echo Creating PVC... &&
atmarp -s $GATEWAY 0.$VPI.$VCI &&
# Add default route
echo Adding default route... &&
route add default gw $GATEWAY
Regards,
Josep
3Com ADSL Modem USB driver for Linux (examples using libusb with C and
Kylix)
Hi,
I spent almost the entire Monday with a funky ATM problem, and didn't
find any info on web, mailing list archives or Usenet. We are using a
Linux machine (running Debian GNU/Linux with kernel 2.4.18, atm-tools
0.79 and a Fore 200E card). Here is the syslog output from the kernel
module loading:
|Mar 18 16:20:08 elena kernel: fore200e: device PCA-200E found at 0xec000000, IRQ 10
|Mar 18 16:20:08 elena kernel: fore200e: device PCA-200E-0 self-test passed
|Mar 18 16:20:08 elena kernel: fore200e: device PCA-200E-0 firmware started
|Mar 18 16:20:08 elena kernel: fore200e: device PCA-200E-0 initialized
|Mar 18 16:20:08 elena kernel: fore200e: device PCA-200E-0, rev. A, S/N: 48400, ESI: 00:20:48:04:bd:10
|Mar 18 16:20:08 elena kernel: fore200e: IRQ 10 reserved for device PCA-200E-0
For our application (termination of bandwidth limited PVCs connecting
DSL customers), we need UBR with a specified peak cell rate. However,
when I try to program this for the PVC (using the command atmarp -s
10.33.66.62 0.0.33 qos ubr:max_pcr=144kbps), the card seems to ignore
the pcr, "blasting away" at link speed. This causes the ATM switch on
the link (which is not under our control) to drop cells, making the IP
link nearly unusable.
This doesn't seem to apply for CBR, though. cbr:max_pcr=1Mbit yields a
throughput of 60 KByte/s (way too slow for a 1 Mbit link), and
cbr:max_pcr=2Mbit yields about 180 Kbyte/s (still too slow for the
allocated bandwidth, and oddly not just double the 1 Mbit throughput). Is the
Fore 200E hardware capable of doing UBR with a specified peak cell rate? Do
you have any idea what I might be doing wrong?
The fore200e driver in linux 2.4.18 has a CVS tag from March 2001
"davem". Do you have any idea who davem might be?
Any hints will be greatly appreciated. Thanks.
[Please reply to me direct; majordomo@... bounces weirdly so I
can't subscribe]
Hi,
I'm attempting to get an Alcatel Speedtouch USB/ADSL modem working under
Linux. I've successfully got the USB side working, and Alcatel's management
program is running. I've built the 2.4.7 kernel with the pppoatm-1.gz
patches
and options as suggested by
I've also built ppp.2.4.0b2.tar.gz with these patches applied:
pppoatm.pppd.vs.2.4.0b2+240600.diff.gz
pppd.patch.240600
(I had to manually shuffle some ATM header files to get it to build).
When I run pppd, I get this in /var/log/messages:
Mar 18 17:00:55 localhost pppd[1191]: Plugin
/usr/lib/pppd/plugins/pppoatm.so loaded.
Mar 18 17:00:55 localhost pppd[1191]: PPPoATM plugin_init
Mar 18 17:00:55 localhost pppd[1191]: PPPoATM setdevname_pppoatm
Mar 18 17:00:55 localhost pppd[1191]: PPPoATM setdevname_pppoatm - SUCCESS
Mar 18 17:00:55 localhost pppd[1192]: pppd 2.4.0b1 started by root, uid 0
Mar 18 17:00:55 localhost pppd[1192]: connect(0.38): No such device
Mar 18 17:00:55 localhost pppd[1192]: Exit.
I believe that on working configurations, the line following "pppd 2.4.0b1
started by root..." is usually something like "using interface ppp0". I have
/dev/ppp but not /dev/ppp0 (and I have no clue what the significance of the
trailing zero is).
This is my /etc/ppp/options:
lock
defaultroute
noipdefault
noauth
passive
asyncmap 0
lcp-echo-interval 2
lcp-echo-failure 7
user rob_office@...
name rob_office@...
debug
plugin /usr/lib/pppd/plugins/pppoatm.so
0.38
I am pretty damn sure that 0.38 is the correct specification for the UK.
Can anyone help?
Thanks,
Paul
This is what I got from majordomo@...
-------------------
The original message was received at Wed, 20 Mar 2002 18:36:05 +0100 (MET)
from root@... [212.1.137.122]
----- The following addresses had permanent fatal errors -----
"|/home/mjdomo/wrapper majordomo"
(expanded from: <majordomo@...>)
----- Transcript of session follows -----
sh: /home/mjdomo/wrapper: not found
554 "|/home/mjdomo/wrapper majordomo"... unknown mailer error 1
Hello,
I would like to know whether any current ATM adapter device supports
the corrupted-data delivery option of AAL-5 (though it is not recommended).
Any idea ?
Thanks
> Hi,
> I have a PC with an ATM card (Efficient Network 155).
> My goal is: to connect the ATM card directly to a BAS and establish a
> PPPoE over ATM link between the PC and the BAS (in order to simulate user
> connections without the ADSL cloud).
>
> Is this already done?
> Which software may be used?
>
> Thanks for you help.
> Thierry
I recommend to get a recent kernel (2.4.18).
You need to patch the kernel to get rfc1483/rfc2684 bridged support
(br2684-against2.4). This gives you the br2684.o module.
You also need to install atm-2.4.0 (from sourceforge).
When this is done, the kernel has to be recompiled with atm and rfc2684
support.
You will also need brctl to control the br2684.o module.
For PPPoE, I have used Roaring Penguin PPPoE (rp-pppoe).
Frode
Linux ATM group:
I need some help with the ATM device drivers.
I would like to be able to install the ATM drivers as a loadable module but
I'm not sure how to do this. (I would like the ability to install the
driver as a module for future upgrades.. right now it's built-into the
kernel).
I've read up on lsmod, insmod and modprobe and also have defined an alias in
/etc/modules.config. I've not been able to locate where the insmod or
modprobe command is installed. What start-up script does the command belong
in and what is the best way to install the ATM drivers as modules?
The eepro100 drivers are modules but they are automatically loaded. I've
not been able to locate where these are loaded to use that as an example.
Any help would be much appreciated.
regards
jd wilson
Hi,
Is there any atm mailing list for general discussion on ATM.
regards,
Padma
Hi,
af-pnni-0055.00 refers to draft Q.934 for the NNI state machine.
But there is no such document from ITU-T. Can anybody
tell me where I can find the NNI state machine info?
Thanking you in advance.
regards,
Padma
I have an alternative 5575 driver that I
derived from the Solaris unified 5515/5575
driver that Iphase posted. This is a completely
different code base from the Mona/Peter
Linux 5575 driver that is included in the
standard Linux distributions and is also
available from Iphase. In general I found
the Solaris driver to be a good deal more
reader-friendly than the Mona/Peter driver.
However, now that I have added my porting touches
unbiased observers may find them equally
unfriendly.
Anyhow I have been using this version both in
my office machine and in the file server of
my research group for almost a year now with
good results. It compiles fine on RH 7.0 and
7.1 (haven't tested 7.2) and with all 2.4 kernels
that I have tried. It also includes some basic
diagnostic tools that allow one to see if the
end system and switch are talking at all.
However, it is module only and UBR only (5575
hw support for ABR had some "features" that
require ugly driver workarounds). It is available
via e-mail request to me.
----
As far as configuring LANE goes, I recommend a
step by step approach.
1 - Is ilmi working
2 - Is atmsigd working
3 - Is zeppelin working
Alice Flower's M.S. research paper on my web page
provides an overview for newcomers.
Mike
I have had significant problems compiling the Interphase driver, and
then making it work in RH 7.0-7.2. I am not completely sure if the
driver is not right, or if there's something in the RH glibc/gcc
configurations that's unhappy.
I've abandoned the Interphase cards and returned to Marconi LE-155's,
PCA-200's and HE-series cards, which have performed flawlessly for me
using both LANE and CLIP.
--
Gerry Creager -- gerry@...
Network Engineering
Academy for Advanced Telecommunications and Learning Technologies
Texas A&M University 979.458.4020 (Phone) -- 979.847.8578 (Fax) | https://sourceforge.net/p/linux-atm/mailman/linux-atm-general/?viewmonth=200203 | CC-MAIN-2017-51 | en | refinedweb |
cxxtools::Hdostream Class Reference
Hexdumper as an output stream.
#include <cxxtools/hdstream.h>
Detailed Description
Hexdumper as an output stream.
Data written to an Hdostream is passed as a hexdump to the given sink.
The documentation for this class was generated from the following file:
- include/cxxtools/hdstream.h | http://www.tntnet.org/apidoc_stable/html/classcxxtools_1_1Hdostream.html | CC-MAIN-2017-51 | en | refinedweb |
import java.util.Arrays;

public class Solution {
    public int lengthOfLIS(int[] nums) {
        int N = 0, idx, x;
        for (int i = 0; i < nums.length; i++) {
            x = nums[i];
            if (N < 1 || x > nums[N - 1]) {
                nums[N++] = x;
            } else if ((idx = Arrays.binarySearch(nums, 0, N, x)) < 0) {
                nums[-(idx + 1)] = x;
            }
        }
        return N;
    }
}
Re-use the array given as input for storing the tails.
As this statement is executed only if the idx variable is negative (meaning there is no such tail in the DP part of the array at the moment), we use -(idx + 1) to convert it to the insertion position.
If we already have this number in the array, Arrays.binarySearch() will return a positive index (or 0) and we don't need to update anything. But if it is not in the array, then we update it.
To avoid any confusion, look at the documentation of Arrays.binarySearch(int[], int, int, int) (the return-value section).
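As a quick standalone illustration of that return-value convention (array contents here are invented):

```java
import java.util.Arrays;

public class BinarySearchDemo {
    public static void main(String[] args) {
        int[] tails = {2, 5, 7, 0};  // first N = 3 entries are the sorted "tails"
        int idx = Arrays.binarySearch(tails, 0, 3, 6);
        // 6 is absent; its insertion point is 2, so binarySearch returns -(2 + 1).
        if (idx != -3) throw new AssertionError("idx = " + idx);
        tails[-(idx + 1)] = 6;       // overwrite the first tail >= 6, as the solution does
        if (tails[2] != 6) throw new AssertionError();
        System.out.println(Arrays.toString(tails)); // [2, 5, 6, 0]
    }
}
```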
Note that for an input of 10,9,2,5,3,7,101,18,1,5,6,7,8,9,10,11 the code
outputs 9, which is in fact correct: 2,3,5,6,7,8,9,10,11 is an increasing subsequence of length 9.
Bjorn Sundberg wrote ..
> Thanks Graham for your quick response. It's 2 am and my head is a bit slow.
> But is the idea to let Apache do the digest authentication, that is, Apache
> takes care of matching the username against the password supplied, in the
> authenhandler()?
If you use AuthDigestFile to specify a user/password file that Apache can
itself use, the authenhandler() isn't even required. As you probably know,
you can find more details of how to set up Apache at:
Given that Apache will handle all aspects of authorisation, all that needs
to be done now is to work around the problem in mod_python.publisher
that prevents it being used in a directory authenticated using digest
authentication.
I was putting that workaround in authenhandler(), but probably shouldn't
have suggested it as it has probably confused the issue. What has to be
done though is to hook in a bit of code somehow before the handler
for mod_python.publisher. This could be done in an earlier processing
phase or as a content handler just prior to mod_python.publisher is
triggered. I would suggest the latter.
To do that, where you currently have:
PythonHandler mod_python.publisher
change it to:
PythonHandler my_digest_workaround::_delete_authorization_header
PythonHandler mod_python.publisher
When you specify two handlers like this, mod_python will execute each in
turn. Thus, by adding a _delete_authorization_header() method to a module
my_digest_workaround we can hook in some code to run before
mod_python.publisher. The content of my_digest_workaround would thus be:
from mod_python import apache

def _delete_authorization_header(req):
    if req.headers_in.has_key("Authorization"):
        del req.headers_in["Authorization"]
    return apache.OK
The my_digest_workaround module could be put in the same directory as the
.htaccess file, or, if using global Apache configuration, in the root directory
of where your published files are kept. I explicitly called the handler
_delete_authorization_header(), with a leading underscore, so that it
will not be found if someone addressed a URL for publisher at it directly.
End result is that the workaround gets called first and it removes the
problem header and then publisher gets executed and your function
called.
Graham
> Björn S
>
> On 11/23/05, Graham Dumpleton <grahamd at dscpl.com.au> wrote:
> >
> > Graham Dumpleton wrote ..
> > > Bjorn Sundberg wrote ..
> > > > Is there a way do use http digest authentication?
> > >
> > > No. HTTP digest authentication and mod_python.publisher are currently
> > > incompatible. See:
> > >
> > >
> > >
> > > It is actually a simple fix, but wasn't done for mod_python 3.2.
> > >
> > > Even if fixed, the HTTP digest authentication has to be done by Apache,
> > > it cannot be done by mod_python.publisher when using __auth__ etc.
> > > The fix is merely to stop mod_python.publisher barfing when it is being
> > > done by Apache.
> >
> > Actually, as usual there is nearly always a way to fudge things. You
> could
> > still use Apache HTTP digest authentication (managed by Apache) and
> > still use mod_python.publisher by having an authenhandler() which
> > deleted the "Authorization" header so that mod_python.publisher didn't
> > find it and therefore didn't barf.
> >
> > def authenhandler(req):
> >
> >     if req.headers_in.has_key("Authorization"):
> >         del req.headers_in["Authorization"]
> >
> >     ... etc.
> >
> > I haven't tried this, but it should work.
> >
> > Graham
> > | http://modpython.org/pipermail/mod_python/2005-November/019583.html | CC-MAIN-2017-51 | en | refinedweb |
Example 5: Getting Image Orientation and Bounding Box Coordinates
Applications that use Amazon Rekognition commonly need to display the images that are detected by Amazon Rekognition operations and the boxes around detected faces. To display an image correctly in your application, you need to know the image's orientation and possibly correct it. For some .jpg files, the image's orientation is contained in the image's Exchangeable image file format (Exif) metadata. For other .jpg files and all .png files, Amazon Rekognition operations return the estimated orientation.
To display a box around a face, you need the coordinates for the face's bounding box and, if the box isn't oriented correctly, you might need to adjust those coordinates. Amazon Rekognition face detection operations return bounding box coordinates for each detected face.
The following Amazon Rekognition operations return information for correcting an image's orientation and bounding box coordinates:
-
-
DetectLabels (returns only information to correct image orientation)
-
-
This example shows how to get the following information for your code:
The estimated orientation of an image (if there is no orientation information in Exif metadata)
The bounding box coordinates for the faces detected in an image
Translated bounding box coordinates for bounding boxes that are affected by estimated image orientation
Use the information in this example to ensure that your images are oriented correctly and that bounding boxes are displayed in the correct location in your application.
Because the code used to rotate and display images and bounding boxes depends on the language and environment that you use, we don't explain how to display images and bounding boxes in your code or how to get orientation information from Exif metadata.
Finding an Image's Orientation
To display an image correctly in your application, you might need to rotate it. The following image is oriented to 0 degrees and is displayed correctly.
However, the following image is rotated 90 degrees counterclockwise. To display it correctly, you need to find the orientation of the image and use that information in your code to rotate the image to 0 degrees.
Some images in .jpg format contain orientation information in Exif metadata. If
the value of the
OrientationCorrection field is
null in
the operation's response, the Exif metadata for the image contains the orientation.
In the Exif metadata, you can find the image's orientation in the
orientation field. Although Amazon Rekognition identifies the presence of
image orientation information in Exif metadata, it does not provide access to it.
To
access the Exif metadata in an image, use a third-party library or write your own
code. For more information, see Exif
Version 2.31.
Images in .png format do not have Exif metadata. For .jpg images that don't have
Exif metadata and for all .png images, Amazon Rekognition operations return an estimated
orientation for the image in the
OrientationCorrection field. Estimated
orientation is measured counterclockwise and in increments of 90 degrees. For
example, Amazon Rekognition returns ROTATE_0 for an image that is oriented to 0 degrees
and
ROTATE_90 for an image that is rotated 90 degrees counterclockwise.
Note
The
CompareFaces operation returns the source image orientation
in the
SourceImageOrientationCorrection field and the target image
orientation in the
TargetImageOrientationCorrection field.
When you know an image's orientation, you can write code to rotate and correctly display it.
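As one possible sketch (plain Java 2D, independent of the Rekognition SDK): since the text above says ROTATE_90 means the image is rotated 90 degrees counterclockwise, correcting it means rotating 90 degrees clockwise. The helper and the transform idiom below are illustrative, not part of any Amazon API:

```java
import java.awt.geom.AffineTransform;
import java.awt.image.AffineTransformOp;
import java.awt.image.BufferedImage;

public class RotateDemo {
    // Rotate an image 90 degrees clockwise -- one way to undo the
    // ROTATE_90 (counterclockwise) estimate described above.
    static BufferedImage rotate90Clockwise(BufferedImage src) {
        int w = src.getWidth(), h = src.getHeight();
        BufferedImage dst = new BufferedImage(h, w, src.getType());
        AffineTransform tx = new AffineTransform();
        tx.translate(h, 0);                    // shift back into the frame
        tx.rotate(Math.toRadians(90));         // clockwise in screen coordinates
        new AffineTransformOp(tx, AffineTransformOp.TYPE_NEAREST_NEIGHBOR)
            .filter(src, dst);
        return dst;
    }

    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(4, 2, BufferedImage.TYPE_INT_RGB);
        BufferedImage out = rotate90Clockwise(img);
        // Width and height swap after a quarter turn.
        if (out.getWidth() != 2 || out.getHeight() != 4) throw new AssertionError();
        System.out.println("ok");
    }
}
```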
Displaying Bounding Boxes
The Amazon Rekognition operations that analyze faces in an image also return the coordinates of the bounding boxes that surround the faces. For more information, see BoundingBox.
To display a bounding box around a face similar to the box shown in the following image in your application, use the bounding box coordinates in your code. The bounding box coordinates returned by an operation reflect the image's orientation. If you have to rotate the image to display it correctly, you might need to translate the bounding box coordinates.
Displaying Bounding Boxes When Orientation Information Is Not Present in Exif Metadata
If an image doesn't have Exif metadata, or if the
orientation
field in the Exif metadata is not populated, Amazon Rekognition operations return
the
following:
An estimated orientation for the image
The bounding box coordinates oriented to the estimated orientation
If you need to rotate the image to display it correctly, you also need to rotate the bounding box.
For example, the following image is oriented at 90 degrees counterclockwise and shows a bounding box around the face. The bounding box is displayed using the coordinates for the estimated orientation returned from an Amazon Rekognition operation.
When you rotate the image to 0 degrees orientation, you also need to rotate the bounding box by translating the bounding box coordinates. For example, the following image has been rotated to 0 degrees from 90 degrees counterclockwise. The bounding box coordinates have not yet been translated, so the bounding box is displayed in the wrong position.
To rotate and display bounding boxes when orientation isn't present in Exif metadata
Call an Amazon Rekognition operation providing an input image with at least one face and with no Exif metadata orientation. For an example, see Exercise 2: Detect Faces (API).
Note the estimated orientation returned in the response's
OrientationCorrection field.
Rotate the image to 0 degrees orientation by using the estimated orientation you noted in step 2 in your code.
Translate the top and left bounding box coordinates to 0 degrees orientation and convert them to pixel points on the image in your code. Use the formula in the following list that matches the estimated orientation you noted in step 2.
Note the following definitions:
ROTATE_(n) is the estimated image orientation returned by an Amazon Rekognition operation.
<face> represents information about the face that is returned by an Amazon Rekognition operation. For example, the FaceDetail data type that the DetectFaces operation returns contains bounding box information for faces detected in the source image.
image.width and
image.height are pixel values for the width and height of the source image.
The bounding box coordinates are a value between 0 and 1 relative to the image size. For example, for an image with 0 degree orientation, a
BoundingBox.left value of 0.9 puts the left coordinate close to the right side of the image. To display the box, translate the bounding box coordinate values to pixel points on the image and rotate them to 0 degrees, as shown in each of the following formulas. For more information, see BoundingBox.
- ROTATE_0
left = image.width*BoundingBox.Left
top = image.height*BoundingBox.Top
- ROTATE_90
left = image.height * (1 - (<face>.BoundingBox.Top + <face>.BoundingBox.Height))
top = image.width * <face>.BoundingBox.Left
- ROTATE_180
left = image.width - (image.width*(<face>.BoundingBox.Left+<face>.BoundingBox.Width))
top = image.height * (1 - (<face>.BoundingBox.Top + <face>.BoundingBox.Height))
- ROTATE_270
left = image.height * BoundingBox.top
top = image.width * (1 - BoundingBox.Left - BoundingBox.Width)
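To sanity-check the ROTATE_90 case above, here is a small standalone calculation using invented sample values (a 400x300 image and a made-up relative face box):

```java
public class Rotate90BoxDemo {
    public static void main(String[] args) {
        // Invented sample: 400 x 300 (width x height) image, relative face box.
        double imageWidth = 400, imageHeight = 300;
        double boxLeft = 0.5, boxTop = 0.25, boxHeight = 0.25;
        // ROTATE_90 translation, exactly as in the formulas above:
        double left = imageHeight * (1 - (boxTop + boxHeight)); // 300 * 0.5 = 150
        double top  = imageWidth * boxLeft;                     // 400 * 0.5 = 200
        if (left != 150.0 || top != 200.0) throw new AssertionError();
        System.out.println("left=" + left + ", top=" + top);
    }
}
```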
Using the following formulas, calculate the bounding box's width and height as pixel ranges on the image in your code.
The width and height of a bounding box is returned in the
BoundingBox.Width and
BoundingBox.Height fields. The width and height values range between 0 and 1 relative to the image size.
image.width and
image.height are pixel values for the width and height of the source image.
-
box width = image.width * (<face>.BoundingBox.Width)
box height = image.height * (<face>.BoundingBox.Height)
Display the bounding box on the rotated image by using the values calculated in steps 4 and 5.
Displaying Bounding Boxes When Orientation Information is Present in Exif Metadata
If an image's orientation is included in Exif metadata, Amazon Rekognition operations do the following:
Return null in the orientation correction field in the operation's response. To rotate the image, use the orientation provided in the Exif metadata in your code.
Return bounding box coordinates already oriented to 0 degrees. To show the bounding box in the correct position, use the coordinates that were returned. You do not need to translate them.
Example: Getting Image Orientation and Bounding Box Coordinates For an Image
The following example shows how to use the AWS SDK for Java to get the estimated
orientation of an image and to translate bounding box coordinates for faces detected
by the
DetectFaces operation.
The example loads an image from the local file system, calls the
DetectFaces operation, determines the height and width of the
image, and calculates the bounding box coordinates of the face for the rotated
image. The example does not show how to process orientation information that is
stored in Exif metadata.
To use this code, replace
input.jpg with the name and path of an
image that is stored locally in either .png or .jpg format.
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import java.nio.ByteBuffer;
import java.util.List;
import javax.imageio.ImageIO;
import com.amazonaws.AmazonClientException;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.rekognition.AmazonRekognition;
import com.amazonaws.services.rekognition.AmazonRekognitionClientBuilder;
import com.amazonaws.services.rekognition.model.AgeRange;
import com.amazonaws.services.rekognition.model.AmazonRekognitionException;
import com.amazonaws.services.rekognition.model.Attribute;
import com.amazonaws.services.rekognition.model.BoundingBox;
import com.amazonaws.services.rekognition.model.DetectFacesRequest;
import com.amazonaws.services.rekognition.model.DetectFacesResult;
import com.amazonaws.services.rekognition.model.FaceDetail;
import com.amazonaws.services.rekognition.model.Image;
import com.amazonaws.util.IOUtils;

public class RotateImage {

    public static void main(String[] args) throws Exception {
        String photo = "input.jpg";

        // Get Rekognition client
        AWSCredentials credentials = null;
        try {
            credentials = new ProfileCredentialsProvider("AdminUser").getCredentials();
        } catch (Exception e) {
            throw new AmazonClientException("Cannot load the credentials: ", e);
        }

        AmazonRekognition amazonRekognition = AmazonRekognitionClientBuilder
            .standard()
            .withRegion(Regions.US_WEST_2)
            .withCredentials(new AWSStaticCredentialsProvider(credentials))
            .build();

        // Load image
        ByteBuffer imageBytes = null;
        BufferedImage image = null;
        try (InputStream inputStream = new FileInputStream(new File(photo))) {
            imageBytes = ByteBuffer.wrap(IOUtils.toByteArray(inputStream));
        } catch (Exception e) {
            System.out.println("Failed to load file " + photo);
            System.exit(1);
        }

        // Get image width and height
        InputStream imageBytesStream = new ByteArrayInputStream(imageBytes.array());
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        image = ImageIO.read(imageBytesStream);
        ImageIO.write(image, "jpg", baos);

        int height = image.getHeight();
        int width = image.getWidth();

        System.out.println("Image Information:");
        System.out.println(photo);
        System.out.println("Image Height: " + Integer.toString(height));
        System.out.println("Image Width: " + Integer.toString(width));

        // Call detect faces and show face age and placement
        try {
            DetectFacesRequest request = new DetectFacesRequest()
                .withImage(new Image()
                    .withBytes(imageBytes))
                .withAttributes(Attribute.ALL);

            DetectFacesResult result = amazonRekognition.detectFaces(request);
            System.out.println("Orientation: " + result.getOrientationCorrection() + "\n");

            List<FaceDetail> faceDetails = result.getFaceDetails();
            for (FaceDetail face : faceDetails) {
                System.out.println("Face:");
                ShowBoundingBoxPositions(height, width, face.getBoundingBox(),
                        result.getOrientationCorrection());

                AgeRange ageRange = face.getAgeRange();
                System.out.println("The detected face is estimated to be between "
                        + ageRange.getLow().toString() + " and "
                        + ageRange.getHigh().toString() + " years old.");
                System.out.println();
            }
        } catch (AmazonRekognitionException e) {
            e.printStackTrace();
        }
    }

    public static void ShowBoundingBoxPositions(int imageHeight, int imageWidth,
            BoundingBox box, String rotation) {
        float left = 0;
        float top = 0;

        if (rotation == null) {
            System.out.println("No estimated orientation. Check Exif data.");
            return;
        }

        // Calculate face position based on image orientation.
        switch (rotation) {
            case "ROTATE_0":
                left = imageWidth * box.getLeft();
                top = imageHeight * box.getTop();
                break;
            case "ROTATE_90":
                left = imageHeight * (1 - (box.getTop() + box.getHeight()));
                top = imageWidth * box.getLeft();
                break;
            case "ROTATE_180":
                left = imageWidth - (imageWidth * (box.getLeft() + box.getWidth()));
                top = imageHeight * (1 - (box.getTop() + box.getHeight()));
                break;
            case "ROTATE_270":
                left = imageHeight * box.getTop();
                top = imageWidth * (1 - box.getLeft() - box.getWidth());
                break;
            default:
                System.out.println("No estimated orientation information. Check Exif data.");
                return;
        }

        // Display face location information.
        System.out.println("Left: " + String.valueOf((int) left));
        System.out.println("Top: " + String.valueOf((int) top));
        System.out.println("Face Width: " + String.valueOf((int) (imageWidth * box.getWidth())));
        System.out.println("Face Height: " + String.valueOf((int) (imageHeight * box.getHeight())));
    }
}
Acid is a fully asynchronous static site generator designed to make working with content services simple. It is designed for use with Webpack and the excellent Marko templating language.
Acid allows for plugins to be written and imported to connect to any service that provides an API. These plugins can add routes in your static site as well as supply custom Marko tags that can fetch additional content mid-render.
Acid allows for simple data access through two main methods. The first is through the plugin system and the second is custom Marko tags.
Acid plugins can provide a set of resolvers that handle requests for a given path. Each resolver is able to determine its own context item and provide the path to a template required for rendering it. As an example, you could configure a blog resolver to handle any requests under
/blog, fetch the content from the Wordpress REST API then render the page using a template called
post.marko.
See more in the Writing Acid Plugins section.
As Marko was designed from the ground up with asynchronous rendering in mind, it's possible to import or write your own tags that can fetch data when you need it. As a simple example, suppose you were rendering a blog post and wanted to include details about the author. Assuming the author's ID is available at
data.context.authorId, you could use a custom Marko tag called
author-by-id to create a local variable called
author like so:
<author-by-id>
    <img class="author-image" src="${author.imageUrl}"/>
    <p class="author-name">${author.name}</p>
</author-by-id>
This allows for increased flexibility in your templates as you can decide what data you need, when you need it. More information about writing your own Marko tags can be found in the Marko documentation.
The recommended way to use Acid is through webpack-plugin-acid. View the project for documentation on its usage.
A config file can be provided at acid.config.js and will be loaded automatically if detected. Following is an example config using the acid-plugin-static plugin.
var acidPluginStatic = require('acid-plugin-static');

module.exports = {
    plugins: [
        acidPluginStatic({
            templateDir: './src/templates/static',
            generateListing: false
        })
    ]
};
If you wish to use Acid outside of a Webpack environment, you may create an instance by calling the
create method. This method returns a promise for the actual Acid instance as
create will also ensure routes are registered which may be an asynchronous task.
import Acid from 'ameeno-acid';

Acid.create().then(acid => {
    let myAcid = acid; // this is the acid instance
});
Acid plugins can provide both routes to be added to the static site and/or configuration data for any custom Marko tags provided by the plugin. Plugins can be added either through the
acid.config.js file or by invoking the
addPlugin(config) method on the acid object.
If you intend to write your own plugin, the format of the plugin object follows.
Acid plugins must return a configuration object in the form:
{
    name: '...',     // required
    resolver: {...}  // a single resolver object
}
or:
{
    name: '...',              // required
    resolvers: [{...}, {...}] // multiple resolver objects
}
Resolver objects do not need to be supplied if the purpose of the configuration is to simply attach custom properties that can be accessed from the module's custom tags.
If the modules is intended to handle a route (or routes) in your site, it must provide one or more resolver objects. These take the form:
{
    resolveRoutes: ...,   // required
    resolveContext: ...,  // optional
    resolveTemplate: ...  // required
}
or:
{
    resolveRoutes: ..., // required
    handleRequest: ...  // required
}
A description of each method is provided below.
resolveRoutes()
This method is invoked by acid to map any routes that should be made available to the site. The acceptable return values are:
- Array
- Function returning an Array
- Function returning a Promise for an Array
handleRequest(path)
If this method is supplied, Marko template rendering will be bypassed and this method will be invoked to generate the content for any routes returned from
resolveRoutes. The acceptable return values are:
- Object - Function returning an Object - Function returning a Promise for an Object
resolveContext(path)
If
handleRequest is not supplied, this method will be invoked against each route returned from
resolveRoutes. The result of this function will be made available to the template render via:
${data.context}
and
${out.global.context}
The acceptable return values are:
- Anything - Function returning anything - Function returning a Promise for anything
If this method is not supplied, no context will be provided to the render.
resolveTemplate(path, context)
If
handleRequest is not supplied, this method will be invoked against each route return from
resolveRoutes. This function will be provided both the path and the resolved context in order to select an appropriate template. The acceptable return values are:
- Path to a Marko template - Function returning a path to a Marko template
The entire configuration, including any custom keys will be attached to Marko's global object during render. The
name value returned by the plugin will be the key used on Acid's
config object. A config in the form:
{ name: 'example', customString: 'my custom key' customObject: {key: 'value'} }
will have its properties available in marko templates via:
${out.global.config.example.customString} ${out.global.config.example.customObject}
Plugins that only provide Marko tags and don't need any custom configuration (such as API keys) do not need to be added to Acid's configuration. The modules simply needs to be installed with
npm and the tags will be resolved using Marko's own tag resolution.
Documentation for the Marko templating language can be found at markojs.com. | https://www.npmjs.com/package/ameeno-acid | CC-MAIN-2017-34 | en | refinedweb |
i need to create a class named rational, and i need to have the integer variables denominator and numerator in private. The i need to provide a constructor that enables an object of the class to be initialized when it is declared. then it says the constructor should contain default values in case no initializers are provided and should store the fraction in reduced form. so 2/4 should be stored in the object as 1 in the numerator, and 2 in the denominator. heres what i have done so far, im really confused.
#include <iostream> class rational { public: private: int numerator; int denominator; }; // end of rational class int main() { return 0; } | https://www.daniweb.com/programming/software-development/threads/18213/c-classes | CC-MAIN-2017-34 | en | refinedweb |
""" Hi i intended to list the built-in function using this function. It seems however to give me the results corresponding with a dictionary. Why is that ? """"
def showbuiltins(): """ saves the list of built in functions to a file you chose """ blist = dir(__builtins__) print blist import asl drawer, file = asl.FileRequest('Select a file where to save the list of built-ins', 'ram:BI_list', '', '') file = drawer + file print file bfile = open(file,"w") for item in blist: bfile.write(item + "\n") bfile.close() | https://www.daniweb.com/programming/software-development/threads/303513/list-built-in-functions-to-file | CC-MAIN-2017-34 | en | refinedweb |
Summary
Converts one or more multipatch features into a collection of COLLADA (.dae) files and referenced texture image files in an output folder. The inputs can be a layer or a feature class.
Usage
COLLADA files are an XML representation of a 3D object that can reference additional image files that act as textures draped onto the 3D geometry. This means that exporting a multipatch feature to COLLADA can result in the creation of several files—a .dae file containing the XML representation of the 3D object and one or more image files (for example, a .jpg or .png file) that contain the textures.
This tool creates one COLLADA representation for each multipatch feature that it exports. The tool uses a field value from each feature—by default, this is the Object ID—to define the output file names. This allows easier identification of which feature was exported to which COLLADA file and also provides the methodology for defining unique names when exporting multiple features to the same directory. Texture files are stored in the same directory as the COLLADA file. To minimize the total export file size, textures that are used in multiple COLLADA files—such as a repeated brick or window texture—are only exported once and then referenced by the applicable DAE files.
This tool automatically overwrites any existing COLLADA files with the same file name. When this happens, a warning message is given stating which files were overwritten with a new file during the export process. A GP message is also generated for any features that fail to export—for example, if the output location is read-only, or the disk is full.
To ensure a new COLLADA file is created for all the exported multipatch features, set the destination directory to an empty or new folder and choose a file name field that is unique for each feature. Exporting two features with the same attribute value will result in the second exported feature overwriting the COLLADA file of the first.
When iteratively updating a multipatch feature by exporting it to COLLADA and making changes outside of ArcGIS, export the feature to the same location each time. This will keep a single file on disk for that feature, representing the most up-to-date state of the 3D object.
If the exported multipatch is in a projected coordinate system, such as a building stored in a UTM zone, then a KML file containing the coordinates as WGS84 will also be created in the output folder. Note that this process will not use a datum transformation, which may result in positional discrepancies when viewing the KML.
Syntax
MultipatchToCollada_conversion (in_features, output_folder, {prepend_source}, {field_name})
Code sample
MultipatchToCollada example 1 (Python window)
The following sample demonstrates the use of this tool in the Python window.
import arcpy from arcpy import env env.workspace = "C:/data" arcpy.MultipatchToCollada_conversion("Sample.gdb/Buildings", "C:/COLLADA", "PREPEND_SOURCE_NAME", "BldName")
MultipatchToCollada example 2 (stand-alone script)
The following sample demonstrates the use of this tool in a stand-alone Python script.
'''********************************************************************* Name: Convert Multipatch To Collada Description: Converts multipatch features in an input workspace to a Collada model. *********************************************************************''' # Import system modules import arcpy # Script variables inWorkspace = arcpy.GetParameterAsText(0) # Set environment settings arcpy.env.workspace = inWorkspace # Create list of feature classes in workspace fcList = arcpy.ListFeatureClasses() # Determine if the list contained any feature classes if fcList: # Iterate through each feature class for fc in fcList: # Describe the feature class desc = arcpy.Describe(fc) # Determine if feature class is a multipatch if desc.shapeType is 'MultiPatch': # Ensure unique name for output folder outDir = arcpy.CreateUniqueName('collada_dir') # Specify that collada file is prefixed by source name prepend = 'PREPEND_SOURCE_NAME' # Specify the feature attribute used to name Collada files fldName = 'OID' #Execute MultipatchToCollada arcpy.MultipatchToCollada(fc, outDir, prepend, fldName) else: print('There are no feature classes in {0}.'.format(inWorkspace))
Environments
This tool does not use any geoprocessing environments.
Licensing information
- ArcGIS Desktop Basic: Yes
- ArcGIS Desktop Standard: Yes
- ArcGIS Desktop Advanced: Yes | http://pro.arcgis.com/en/pro-app/tool-reference/conversion/multipatch-to-collada.htm | CC-MAIN-2017-34 | en | refinedweb |
I'm caching some information from a file and I want to be able to check periodically if the file's content has been modified so that I can read the file again to get the new content if needed.
That's why I'm wondering if there is a way to get a file's last modified time in C++.
There is no language-specific way to do this: however the OS provides the required functionality.
If you're using a unix system, the the
stat call is what you need. There is an equivalent
_stat call provided for windows under Visual Studio. So here is code that would work for both:
#include <sys/types.h> #include <sys/stat.h> #ifndef WIN32 #include <unistd.h> #endif #ifdef WIN32 #define stat _stat #endif auto filename = "/path/to/file"; struct stat result; if(stat(filename.c_str(), &result)==0) { auto mod_time = result.st_mtim; ... } | https://codedump.io/share/9NQuFkeVLdNY/1/c-how-to-check-the-last-modified-time-of-a-file | CC-MAIN-2017-34 | en | refinedweb |
I have a very general question about calculating time complexity(Big O notation). when people say that the worst time complexity for QuickSort is O(n^2) (picking the first element of the array to be the pivot every time, and array is inversely sorted), which operation do they account for to get O(n^2)? Do people count the comparisons made by the if/else statements? Or do they only count the total number of swaps it makes? Generally how do you know which "steps" to count to calculate Big O notation.
I know this is a very basic question but I've read almost all the articles on google but still haven't figured it out
Worst cases of Quick Sort
Worst case of Quick Sort is when array is inversely sorted, sorted normally and all elements are equal.
Understand Big-Oh
Having said that, let us first understand what Big-Oh of something means.
When we have only and asymptotic upper bound, we use O-notation. For a given function g(n), we denote by O(g(n)) the set of functions,
O(g(n)) = { f(n) : there exist positive c and no,
such that 0<= f(n) <= cg(n) for all n >= no}
How do we calculate Big-Oh?
Big-Oh basically means how program's complexity increases with the input size.
Here is the code:
import java.util.*; class QuickSort { static int partition(int A[],int p,int r) { int x = A[r]; int i=p-1; for(int j=p;j<=r-1;j++) { if(A[j]<=x) { i++; int t = A[i]; A[i] = A[j]; A[j] = t; } } int temp = A[i+1]; A[i+1] = A[r]; A[r] = temp; return i+1; } static void quickSort(int A[],int p,int r) { if(p<r) { int q = partition(A,p,r); quickSort(A,p,q-1); quickSort(A,q+1,r); } } public static void main(String[] args) { int A[] = {5,9,2,7,6,3,8,4,1,0}; quickSort(A,0,9); Arrays.stream(A).forEach(System.out::println); } }
Take into consideration the following statements:
int x = A[r]; int i=p-1; if(A[j]<=x) { i++; int t = A[i]; A[i] = A[j]; A[j] = t; } int temp = A[i+1]; A[i+1] = A[r]; A[r] = temp; return i+1; if(p<r) { int q = partition(A,p,r); quickSort(A,p,q-1); quickSort(A,q+1,r); }
Assuming each statements take a constant time c. Let's calculate how many times each block is calculated.
The first block is executed 2c times. The second block is executed 5c times. The thirst block is executed 4c times.
We write this as O(1) which implies the number of times statement is executed same number of times even when size of input varies. all 2c, 5c and 4c all are O(1).
But, when we add the loop over second block
for(int j=p;j<=r-1;j++) { if(A[j]<=x) { i++; int t = A[i]; A[i] = A[j]; A[j] = t; } }
It runs for n times (assuming r-p is equal to n, size of the input) i.e., nO(1) times i.e., O(n). But this doesn't run n times all the time. Hence, we have average case.
We now established that the partition runs O(n) or O(log n). The last block definetly runs in O(n). Hence the entire complexity is either O(n2) or O(nlog n). | https://codedump.io/share/9huBgWs4XOfm/1/time-complexityjava-quicksort | CC-MAIN-2017-34 | en | refinedweb |
Requirements¶
Let's get our tutorial environment setup. Most of the setup work is in standard Python development practices (install Python, make an isolated environment, and setup packaging tools.)
Note
Pyramid encourages standard Python development practices with packaging tools, virtual environments, logging, and so on. There are many variations, implementations, and opinions across the Python community. For consistency, ease of documentation maintenance, and to minimize confusion, the Pyramid documentation has adopted specific conventions.
This Quick Tutorial is based on:
-.
Windows and Mac OS X users can download and run an installer.
Windows users should also install the Python for Windows extensions. Carefully read the
README.txt file at the end of the list of builds, and follow its
directions. Make sure you get the proper 32- or 64-bit build and Python
version.
Linux users can either use their package manager to install Python 3.3 or may build Python 3.3 from source. that stores
isolated Python environments (virtualenvs) and specific project files
and repositories.
Warning
The current state of isolated Python environments using
pyvenv on Windows is suboptimal in comparison to Mac and Linux. See for a discussion of the issue
and PEP 453 for a proposed
resolution.
pyvenv is a tool to create isolated Python 3.3 environments, each
with its own Python binary and independent set of installed Python
packages in its site directories. Let's create one, using the location
we just specified in the environment variable.
# Mac and Linux $ pyvenv $VENV # Windows c:\> c:\Python33\python -m venv %VENV%
See also
See also Python 3's
venv module,
Python 2's virtualenv
package,
Installing Pyramid on a Windows System.8" # Windows c:\> %VENV%\Scripts\easy_install "pyramid==1.5.8"
Our Python virtual environment now has the Pyramid software available.
You can optionally install some of the extra Python packages used during this tutorial:
# Mac and Linux $ $VENV/bin/easy_install nose webtest deform sqlalchemy \ pyramid_chameleon pyramid_debugtoolbar waitress \ pyramid_tm zope.sqlalchemy # Windows c:\> %VENV%\Scripts\easy_install nose webtest deform sqlalchemy pyramid_chameleon pyramid_debugtoolbar waitress pyramid_tm zope.sqlalchemy
Note
Why
easy_install and not
pip? Pyramid encourages use of namespace
packages, for which
pip's support is less-than-optimal. Also, Pyramid's
dependencies use some optional C extensions for performance: with
easy_install, Windows users can get these extensions without needing
a C compiler (
pip does not support installing binary Windows
distributions, except for
wheels, which are not yet available for
all dependencies).
See also
See also Installing Pyramid on a UNIX System. For instructions to set up your Python environment for development using Windows or Python 2, see Pyramid's Before You Install.
See also Python 3's
venv module, the setuptools
installation instructions,
and easy_install help. | https://docs.pylonsproject.org/projects/pyramid/en/1.5-branch/quick_tutorial/requirements.html | CC-MAIN-2018-05 | en | refinedweb |
- Windows
- 'wm geometry . +0+0' will move the main toplevel so that it nestles into the top-left corner of the screen, with the left border and titlebar completely visible.
- MacOS X
- 'wm geometry . +0+0' will move the main toplevel so that it nestles into the top-left corner of the screen, with the left border completely visible. The titlebar is also within the screen, but it (and possibly a few pixel rows of contents) is completely obscured by the menu bar.
- X11
- 'wm geometry . +0+0' will move the main toplevel so that its contents are nestled into the top-left corner of the screen, but with the left border and titlebar completely offscreen and invisible.
proc decoration + menubar (if it exists) thickness set titleMenubarThickness [expr {$contentsTop - $decorationTop}] return [list $titleMenubarThickness $decorationThickness] }Is only useful on MacOS X/Windows (where it returns the thickness of the titlebar/menubar and the thickness of the left window border). On X11, it simply returns 0 0.Difference 3: geometry of withdraw windowsOn immediate creation:
toplevel .tt ; wm withdraw .tt ; wm geometry .ttwill return 1x1+0+0 on all platforms, but:
toplevel .tt ; wm withdraw .tt ; update; wm geometry .ttwill return 1x1+0+0 on X11, 200x200+198+261 (or something similar) on Windows, and 1x1+45+85 (or something similar) on OS X. Similarly, winfo height .tt will return (of course) 1 on x11 and 200 on Windows.A constant here is that winfo reqheight .tt will return 200 (or equivalent) on all platforms. So there is at least a workaround for this difference in behaviour....
[laterne] - 2015-06-19 08:09:45The regular expression in Difference 2 should be{^([0-9]+)x([0-9]+)([+-])(\-?[0-9]+)([+-])(\-?[0-9]+)$}to handle negative positions and orientations | http://wiki.tcl.tk/11502 | CC-MAIN-2018-05 | en | refinedweb |
JBoss.org Community Documentation
This developers guide is intended for experienced developers that want to get the full flexibility out of jBPM. The features described in this developers guide are currently not supported. Use at your own risk.
Chapter 2, Incubation explains the features that are intended to move to the userguide eventually and become part of the supported offering. Do note that incubation features are not yet considered stable (i.e. there could be major syntax or implementation changes in next versions).

Chapter 3, BPMN 2.0 shows how the BPMN 2.0 process language can be used with jBPM.

Chapter 5, The Process Virtual Machine through Chapter 9, Advanced graph execution explain the core of jBPM, the process virtual machine (PVM), and how activities and event listeners can be built for it.
Chapter 10, Configuration through Chapter 18, Signavio web modeler explain advanced usage of the jBPM framework. (See Jira issue JBPM-2556 and vote for it if you want to let us know that this issue has priority for you.)
The versions of the libraries.
Timer duedates and reminders are specified with the following syntax:
[<Base Date> {+|-}] quantity [business] {second | seconds | minute | minutes | hour | hours | day | days | week | weeks | month | months | year | years}
Where Base Date is an optional base date specified as an EL expression, and quantity is a positive integer.
Note: 'business' is not supported when subtracting from a base date!
The base date can be specified as any Java Expression Language expression that resolves to a Java Date or Calendar object. Referencing variables of other object types, even a String in a date format like '2036-02-12', will throw a JbpmException.
NOTE: This baseDate is supported on the duedate and repeat attributes of all places where timers can be used, but also on the reminder of a task:

<reminder name="hitBoss" duedate="#{payRaiseDay} + 3 days" repeat="#{iritationFactor}" />
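Conceptually, a duedate such as "#{payRaiseDay} + 3 days" first resolves the base-date EL expression to a java.util.Date and then adds the quantity to it. Ignoring the business-hours logic, the underlying date arithmetic can be sketched in plain Java like this (an illustration only, not the engine's actual parser):

```java
import java.util.Calendar;
import java.util.Date;
import java.util.GregorianCalendar;

public class DueDateArithmetic {

  // Adds a duration like "3 days" to a resolved base date, mirroring
  // what a duedate expression computes when 'business' is not used.
  public static Date add(Date baseDate, int quantity, int calendarField) {
    Calendar calendar = new GregorianCalendar();
    calendar.setTime(baseDate);
    calendar.add(calendarField, quantity);
    return calendar.getTime();
  }

  public static void main(String[] args) {
    // Suppose the process variable payRaiseDay resolved to March 1st, 2010.
    Date payRaiseDay = new GregorianCalendar(2010, Calendar.MARCH, 1).getTime();
    Date reminderDue = add(payRaiseDay, 3, Calendar.DAY_OF_MONTH);
    System.out.println(reminderDue); // March 4th, 2010
  }
}
```

Note that, as stated above, the engine itself rejects resulting due dates that lie in the past.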
Remember that the following example, a subtraction combined with 'business', is not supported and will throw an exception, as will any resulting due date that lies in the past:

<reminder name="toGoOrNotToGo" duedate="#{goLive} - 3 business days"/>
If the default business calendar implementation is sufficient for you, you can simply adjust the timings in the xml configuration as shown above.
If the default implementation doesn't cover your use cases, you can easily write your own implementation by implementing the org.jbpm.pvm.internal.cal.BusinessCalendar interface. For example, an implementation could look like this (the "my next birthday" duration string is just an illustration):

public class CustomBusinessCalendar implements BusinessCalendar {

  public Date add(Date date, String duration) {
    if ("my next birthday".equals(duration)) {
      GregorianCalendar gregorianCalendar = new GregorianCalendar();
      gregorianCalendar.set(Calendar.MONTH, Calendar.JULY);
      gregorianCalendar.set(Calendar.DAY_OF_MONTH, 21);
      return gregorianCalendar.getTime();
    }
    return null;
  }
}
To configure the jBPM engine to use this custom business calendar, just add the following line to your jbpm.cfg.xml:
<process-engine-context>
  <object class="org.jbpm.test.custombusinesscalendarimpl.CustomBusinessCalendar" />
</process-engine-context>
Take a look at the org.jbpm.test.custombusinesscalendarimpl.CustomBusinessCalendarImplTest for more information.
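Stripped of the engine wiring, a custom business calendar is essentially a mapping from a duration string to a computed Date. The self-contained class below illustrates that idea; the duration string and dates are made up for the example, and a real implementation must of course implement the org.jbpm.pvm.internal.cal.BusinessCalendar interface instead:

```java
import java.util.Calendar;
import java.util.Date;
import java.util.GregorianCalendar;

public class BirthdayCalendar {

  // Resolves a symbolic duration string relative to a base date.
  // A real jBPM business calendar would do this inside the
  // BusinessCalendar interface methods.
  public static Date add(Date base, String duration) {
    if ("my next birthday".equals(duration)) {
      GregorianCalendar calendar = new GregorianCalendar();
      calendar.setTime(base);
      calendar.set(Calendar.MONTH, Calendar.JULY);
      calendar.set(Calendar.DAY_OF_MONTH, 21);
      if (calendar.getTime().before(base)) {
        calendar.add(Calendar.YEAR, 1); // birthday already passed this year
      }
      return calendar.getTime();
    }
    throw new IllegalArgumentException("unknown duration: " + duration);
  }

  public static void main(String[] args) {
    Date base = new GregorianCalendar(2010, Calendar.MARCH, 1).getTime();
    System.out.println(add(base, "my next birthday")); // July 21st, 2010
  }
}
```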
The example
org.jbpm.examples.timer.transition.
The foreach activity allows multiple group names, whereas the quorum, bound to the join's multiplicity attribute, indicates how many tasks must be completed before execution leaves the join activity.
<process name="ForEach" xmlns="">
  <start g="28,61,48,48" name="start1">
    <transition to="foreach1"/>
  </start>
  <foreach var="department" in="#{departments}" g="111,60,48,48" name="foreach1">
    <transition to="Collect reports"/>
  </foreach>
  <task candidate-
    <transition to="join1"/>
  </task>
  <join g="343,59,48,48" multiplicity="#{quorum}" name="join1">
    <transition to="end1"/>
  </join>
  <end g="433,60,48,48" name="end1"/>
</process>
When using foreach, the corresponding join must have the multiplicity attribute set. Without it, join continues execution based on its incoming transitions. In the preceding example, join has a single incoming transition. If multiplicity is not specified, the first execution that reaches the join activity will cause the parent execution to leave the join.
Here is how to initialize the iterative process variables.
Map<String, Object> variables = new HashMap<String, Object>();
variables.put("departments", new String[] { "sales-dept", "hr-dept", "finance-dept" });
variables.put("quorum", 3);

ProcessInstance processInstance = executionService.startProcessInstanceByKey("ForEach", variables);
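The join's multiplicity can be pictured as a simple counter: each completing branch increments it, and the parent execution moves past the join as soon as the counter reaches the quorum. A minimal plain-Java sketch of that idea (not engine code):

```java
public class JoinQuorum {

  private final int multiplicity;
  private int completed;

  public JoinQuorum(int multiplicity) {
    this.multiplicity = multiplicity;
  }

  // Called when one branch (e.g. one department's task) completes.
  // Returns true once the join is satisfied and execution may proceed.
  public boolean branchCompleted() {
    completed++;
    return completed >= multiplicity;
  }

  public static void main(String[] args) {
    JoinQuorum join = new JoinQuorum(3); // quorum of 3, as in the example
    System.out.println(join.branchCompleted()); // false
    System.out.println(join.branchCompleted()); // false
    System.out.println(join.branchCompleted()); // true
  }
}
```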
The purpose of the java activity in general is to invoke a method on a Java object. In the example below it is used to invoke a method on an EJB. Consider the following session bean:
package org.jbpm.test.enterprise.stateless.bean;

import javax.ejb.Stateless;

@Stateless
public class CalculatorBean implements CalculatorRemote, CalculatorLocal {

  public Integer add(Integer x, Integer y) {
    return x + y;
  }

  public Integer subtract(Integer x, Integer y) {
    return x - y;
  }
}
and the following process definition:
<process name="EJB">
  <start>
    <transition to="calculate" />
  </start>
  <java name="calculate" ejb-jndi-name="CalculatorBean/local" method="add" var="answer">
    <arg><int value="25"/></arg>
    <arg><int value="38"/></arg>
    <transition to="wait" />
  </java>
  <state name="wait" />
</process>
As you can expect, the execution of this node will invoke the add method of the ejb that is known under the jndi name CalculatorBean/local. The result will be stored in the variable answer. This is illustrated in the following test snippet.
public void testEjbInvocation() throws Exception {
  String executionId = executionService
      .startProcessInstanceByKey("EJB")
      .getProcessInstance()
      .getId();
  assertEquals(63, executionService.getVariable(executionId, "answer"));
}
The assign activity retrieves a value and assigns it to a target location.
Every form of from can be combined with any form of to. The listing below simply copies one variable to another (the variable names are illustrative):

<assign name='copy' from-var='person' to-var='personCopy'>
  <transition to='wait' />
</assign>
The next example shows an expression value being assigned to a variable (the expression and variable name are illustrative):

<assign name='resolve' from-expr='#{person.name}' to-var='name'>
  <transition to='wait' />
</assign>
Our last example presents a value constructed by a descriptor being assigned to an expression location (the target expression is illustrative):

<assign name='resolve' to-expr='#{address.street}'>
  <from><string value='gasthuisstraat' /></from>
  <transition to='wait' />
</assign>
The rules deployer is a convenience integration between jBPM and Drools. It creates a KnowledgeBase based on all .drl files that are included in a business archive deployment. That KnowledgeBase is then stored in the repository cache, so one KnowledgeBase is maintained in memory, in the process-engine-context. Activities like the rules decision leverage this KnowledgeBase.
A rules-decision is an automatic activity that will select a single outgoing transition based on the evaluation of rules.
Rules for a rules-decision are to be deployed as part of the business archive. Those rules can use all process variables as globals in rule definitions. The rules-decision activity will use a stateless knowledge session on the knowledge base; the execution arriving in the rules-decision is executed on that stateless Drools knowledge session.
Let's look at the next example to see how that works in practice. We'll start with the RulesDecision process:
<process name="RulesDecision">
  <start>
    <transition to="isImportant" />
  </start>
  <rules-decision name="isImportant">
    <transition name="dunno" to="analyseManually" />
    <transition name="important" to="processWithPriority" />
    <transition name="irrelevant" to="processWhenResourcesAvailable" />
  </rules-decision>
  <state name="analyseManually" />
  <state name="processWithPriority" />
  <state name="processWhenResourcesAvailable" />
</process>
The following isImportant.drl is included in the business archive deployment:
global java.lang.Integer amount;
global java.lang.String product;
global org.jbpm.jpdl.internal.rules.Outcome outcome;

rule "LessThen3IsIrrelevant"
  when
    eval(amount < 3)
  then
    outcome.set("irrelevant");
end

rule "MoreThen24IsImportant"
  when
    eval(amount > 24)
  then
    outcome.set("important");
end

rule "TwelveTempranillosIsImportant"
  when
    eval(product == "Tempranillo")
    eval(amount > 12)
  then
    outcome.set("important");
end
First you see that amount and product are defined as globals. Those will be resolved by the rules-decision to the process variables with those respective names. outcome is a special global variable that is used in a rule's consequence to indicate the transition to take. If no outcome is specified by the rules, the default transition will be taken.
So let's start a new process instance and set two variables, product and amount, with the respective values shoe and 32:
Map<String, Object> variables = new HashMap<String, Object>();
variables.put("amount", 32);
variables.put("product", "shoe");

ProcessInstance processInstance = executionService.startProcessInstanceByKey("RulesDecision", variables);
After the method that starts the process instance returns, the process instance will have arrived in the activity processWithPriority.
In similar style, a new RulesDecision process instance with 2 missiles will go to the activity processWhenResourcesAvailable.
A RulesDecision process instance with 15 shoes will go to the activity analyseManually.
And a RulesDecision process instance with 13 Tempranillos will go to the activity processWithPriority, since the TwelveTempranillosIsImportant rule fires.
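The routing implied by isImportant.drl can be restated in plain Java to make these scenarios easy to verify at a glance. This is only a restatement of the rule logic for illustration; the actual decision is taken by Drools:

```java
public class ImportanceRules {

  // Mirrors isImportant.drl: returns the transition name the
  // rules-decision would take, or "dunno" (the default transition)
  // when no rule sets an outcome.
  public static String outcome(int amount, String product) {
    if (amount < 3) {
      return "irrelevant";
    }
    if (amount > 24) {
      return "important";
    }
    if ("Tempranillo".equals(product) && amount > 12) {
      return "important";
    }
    return "dunno"; // no outcome set: default transition
  }

  public static void main(String[] args) {
    System.out.println(outcome(32, "shoe"));        // important
    System.out.println(outcome(2, "missile"));      // irrelevant
    System.out.println(outcome(15, "shoe"));        // dunno
    System.out.println(outcome(13, "Tempranillo")); // important
  }
}
```

Here "dunno" stands for the default (first) transition, which is taken when no rule sets an outcome.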
A rules activity is an automatic activity that will create a stateful knowledge session, feed a number of facts into it and fire all rules. The idea is that the rules will update or create process variables that will be used later in the process. Facts can be specified as sub elements of the rules activity. If a rules activity has one outgoing transition, then that one is taken automatically. But multiple outgoing transitions can be specified with conditions on them, just like with the decision activity when using conditions.
For example:
<process name="Rules">
  <start>
    <transition to="evaluateStatus"/>
  </start>
  <rules name="evaluateStatus">
    <fact var="room" />
    <transition to="checkForFires"/>
  </rules>
  <decision name="checkForFires">
    <transition to="getFireExtinguisher">
      <condition expr="#{room.onFire}" />
    </transition>
    <transition to="goToPub"/>
  </decision>
  <state name="getFireExtinguisher"/>
  <state name="goToPub"/>
</process>
The process first checks with rules if the room is on fire. The Room class looks like this:
public class Room implements Serializable {

  int temperature = 21;
  boolean smoke = false;
  boolean isOnFire = false;

  public Room(int temperature, boolean smoke) {
    this.temperature = temperature;
    this.smoke = smoke;
  }

  ...getters and setters...
}
The following rules are deployed in the same business archive:
rule "CheckRoomOnFire"
  when
    room : org.jbpm.examples.rules.Room( temperature > 30, smoke == true )
  then
    room.setOnFire( true );
end
So when a new Rules process instance is started like this:
Map<String, Object> variables = new HashMap<String, Object>();
variables.put("room", new Room(350, true));

ProcessInstance processInstance = executionService.startProcessInstanceByKey("Rules", variables);
Then the process will end up in the activity getFireExtinguisher.
And when the process is started with a Room(21, false), it will end up in the activity goToPub.
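The effect of the CheckRoomOnFire rule on the Room bean can be restated in plain Java. The nested Room class below is a local copy of the document's bean, with getter and setter names assumed from the #{room.onFire} expression and the rule consequence:

```java
public class RoomFireCheck {

  // Local copy of the document's Room bean (accessor names assumed).
  public static class Room {
    int temperature;
    boolean smoke;
    boolean onFire;

    public Room(int temperature, boolean smoke) {
      this.temperature = temperature;
      this.smoke = smoke;
    }

    public int getTemperature() { return temperature; }
    public boolean isSmoke() { return smoke; }
    public boolean isOnFire() { return onFire; }
    public void setOnFire(boolean onFire) { this.onFire = onFire; }
  }

  // Plain-Java equivalent of the CheckRoomOnFire rule: a room hotter
  // than 30 degrees with smoke is set on fire.
  public static boolean checkRoomOnFire(Room room) {
    if (room.getTemperature() > 30 && room.isSmoke()) {
      room.setOnFire(true);
    }
    return room.isOnFire();
  }

  public static void main(String[] args) {
    System.out.println(checkRoomOnFire(new Room(350, true))); // true -> getFireExtinguisher
    System.out.println(checkRoomOnFire(new Room(21, false))); // false -> goToPub
  }
}
```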
Disclaimer: this activity is not yet stable. Two aspects will be revisited in following releases:

transacted="false" in our enterprise QA run processes (used for docs here). And that is why we use the method jmsConsumeMessageFromQueue("java:JmsXA", "queue/jbpm-test-queue", 1000, false, Session.AUTO_ACKNOWLEDGE); in our test cases that run on JBoss (also used in docs here).
The jms activity provides users with convenience for sending JMS messages. At this moment the sending of three different types of JMS messages is possible: text, object and map. Specifying message properties is not yet supported.
There are three types of JMS messages that you can send to the destination: text, object and map. The connection-factory and destination attributes are mandatory and respectively contain the names of the connection factory and destination (queue or topic) that will be used to look up the corresponding objects in JNDI. The lookup is done like this:
InitialContext initialContext = new InitialContext();
Destination destination = (Destination) initialContext.lookup(destinationName);
Object connectionFactory = initialContext.lookup(connectionFactoryName);
The jms activity will use the JMS queue APIs if the destination is an instanceof Queue, and analogously for topics. It will use the XA JMS APIs if the connectionFactory is an instanceof XAConnectionFactory, and analogously for plain ConnectionFactorys.
So if you're running inside an appserver, then the new InitialContext() will see the queues and topics configured in the appserver.
When you're using the JMS mocking in standalone test mode, the queues and topics that you created with JbpmTestCase.jmsCreateQueue and JbpmTestCase.jmsCreateTopic will be available.
When you're running as a remote application client, then you have to specify the jndi environment with system properties.
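For example, for a JBoss AS naming provider these would typically be the standard javax.naming system properties (equivalently placed in a jndi.properties file on the classpath); the host and port below are placeholders for your environment:

```properties
java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory
java.naming.provider.url=jnp://localhost:1099
java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces
```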
Exactly one of the elements text, object or map is mandatory. The presence of this element determines the kind of message that will be sent to the queue obtained in the lookup mentioned above. This message will be a TextMessage, ObjectMessage or MapMessage respectively.
In the following subsections the different types of supported messages will be explained. The process used is similar in all three cases. The graphical representation of the process is shown below.
Apart from configuring a real JMS provider and making sure it is available in JNDI, a jms activity can also be tested with a mock JMS provider. That can make it easier to perform scenario testing of your process. The following test helper methods are based solely on plain JMS APIs and hence work in a standalone environment as well as in an appserver environment:
For example, after the process execution has executed the jms activity, messages can be asserted like this:
MapMessage mapMessage = (MapMessage) jmsConsumeMessageFromQueue("java:/JmsXA", "queue/ProductQueue");
assertEquals("shampoo", mapMessage.getString("product"));
The following jms helper methods are based on mockrunner and hence they only work in a standalone environment:
(we're collaborating with mockrunner people to have these methods also work in an appserver environment)
For example, a queue can be created and removed in the setup and tearDown methods of a test like this:
protected void setUp() throws Exception {
  super.setUp();
  jmsCreateQueue("java:/JmsXA", "queue/ProductQueue");
}

protected void tearDown() throws Exception {
  jmsRemoveQueue("java:/JmsXA", "queue/ProductQueue");
  super.tearDown();
}
The first possibility of sending JMS messages is to use text as its payload. In this case a JMS
TextMessage will be created and sent to the specified destination.
Consider the following process definition:
<process name="JmsQueueText">
  <start>
    <transition to="send message"/>
  </start>
  <jms name="send message" connection-factory="java:JmsXA" destination="queue/jbpm-test-queue">
    <text>This is the body</text>
    <transition to="wait"/>
  </jms>
  <state name="wait"/>
</process>
As you may expect and as is shown in the following test case starting this process will cause the JMS node to send a message to the queue with the name "queue/jbpm-test-queue". The factory used to create a connection to connect to this queue is named "java:JmsXA". The payload of the message is the text string "This is the body".
executionService.startProcessInstanceByKey("JmsQueueText");

TextMessage textMessage = (TextMessage) jmsConsumeMessageFromQueue(
    "java:JmsXA", "queue/jbpm-test-queue", 1000, false, Session.AUTO_ACKNOWLEDGE);
assertEquals("This is the body", textMessage.getText());
The relevant code is shown above in boldface. The rest of the method is boilerplate code needed to setup a message consumer. We will omit this code in the subsequent examples.
The second possibility is to use a serializable object as the payload of the message. In this case a
JMS
ObjectMessage will be created and sent to the specified destination.
Consider the following process definition:
<process name="JmsQueueObject">
  <start>
    <transition to="send message"/>
  </start>
  <jms name="send message" connection-factory="java:JmsXA" destination="queue/jbpm-test-queue">
    <object expr="${object}"/>
    <transition to="wait"/>
  </jms>
  <state name="wait"/>
</process>
As in the previous case a message will be sent to the queue with the name "queue/jbpm-test-queue".
Also again a factory used to create a connection to connect to this queue is named "java:JmsXA".
But in this case the payload of the message is the serializable object that is obtained by
evaluating the expression specified by the
expr attribute. This is illustrated in the test case below.
Map<String, Object> variables = new HashMap<String, Object>();
variables.put("object", "this is the object");
executionService.startProcessInstanceByKey("JmsQueueObject", variables);

ObjectMessage objectMessage = (ObjectMessage) jmsConsumeMessageFromQueue(
    "java:JmsXA", "queue/jbpm-test-queue", 1000, false, Session.AUTO_ACKNOWLEDGE);
assertEquals("this is the object", objectMessage.getObject());
In this third possibility the payload is constituted by the key-value entries of a map. This time a
JMS
MapMessage will be created and sent to the specified destination.
Consider the following process definition:
<process name="JmsQueueMap">
  <start>
    <transition to="send message"/>
  </start>
  <jms name="send message" connection-factory="java:JmsXA" destination="queue/jbpm-test-queue">
    <map>
      <entry>
        <key><string value="x"/></key>
        <value><string value="foo"/></value>
      </entry>
    </map>
    <transition to="wait"/>
  </jms>
  <state name="wait"/>
</process>
Again a message will be sent to the queue with the name "queue/jbpm-test-queue" and the factory used to create a connection to this queue is named "java:JmsXA". In this case the payload of the message consists of the specified key-value pairs of the map. This is illustrated in the test case below.
executionService.startProcessInstanceByKey("JmsQueueMap");

MapMessage mapMessage = (MapMessage) jmsConsumeMessageFromQueue(
    "java:JmsXA", "queue/jbpm-test-queue", 1000, false, Session.AUTO_ACKNOWLEDGE);
assertTrue(mapMessage.itemExists("x"));
assertEquals("foo", mapMessage.getObject("x"));
The history session, which can be added to the transaction-context in the jBPM configuration will add a default history event listener to the process engine. This default history session will write the information in the history events to the history tables in the database.
The history session chain construct allows custom history event listeners to be defined. These custom history sessions will be called when a history event is fired. Multiple custom implementations are possible, as follows:
<transaction-context>
  <history-sessions>
    <object class="org.jbpm.test.historysessionchain.MyProcessStartListener" />
    <object class="org.jbpm.test.historysessionchain.MyProcessEndListener" />
  </history-sessions>
</transaction-context>
The custom history sessions must be on the classpath when the jBPM configuration is parsed and they must implement the HistorySession interface.
public class MyProcessStartListener implements HistorySession {

  public void process(HistoryEvent historyEvent) {
    if (historyEvent instanceof ProcessInstanceCreate) {
      ...
    }
  }
}
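The dispatch pattern behind such a chain is easy to mimic in plain Java. The sketch below is purely illustrative (the names HistoryListenerChain, register and fire are not jBPM API): every registered listener receives every fired event and filters on the event type, just like MyProcessStartListener filters on ProcessInstanceCreate.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of a history-session chain: each registered listener is
// invoked for every fired event, mirroring how jBPM calls each configured
// history session. Names here are illustrative only.
public class HistoryListenerChain {

    public interface Listener { void process(String historyEvent); }

    private final List<Listener> listeners = new ArrayList<>();
    private final List<String> log = new ArrayList<>();

    public void register(Listener l) { listeners.add(l); }

    public void fire(String event) {
        for (Listener l : listeners) l.process(event);
    }

    public static List<String> demo() {
        HistoryListenerChain chain = new HistoryListenerChain();
        // One listener only reacts to "start" events, another to "end" events.
        chain.register(e -> { if (e.equals("start")) chain.log.add("start-handled"); });
        chain.register(e -> { if (e.equals("end")) chain.log.add("end-handled"); });
        chain.fire("start");
        chain.fire("end");
        return chain.log;
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```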
If you want to add the default history session implementation to your configuration, add the following lines to the transaction-context section:
<history-sessions>
  <object class="org.jbpm.pvm.internal.history.HistorySessionImpl" />
</history-sessions>

If you are using the jbpm.default.cfg.xml import in your configuration, this default history session implementation is already configured as above.
By default, jBPM starts new process instances with the most recently deployed version of the process definition; it is also possible to start new process instances using a specified older version if needed. Existing running process instances always keep running following the definition they were started in. But what if a customer or some piece of legislation mandates that this behaviour is not sufficient? We could, for example, think of a situation where process instances no longer make sense once a new definition is deployed; in this case these instances should be ended. In another situation it might be necessary that all (or even some particular) instances are migrated and moved to the newly deployed definition. jBPM contains a tool that supports exactly these use cases.
Before delving into the details of the instance migration tool, we have to warn the reader. Though we did a reasonable attempt at trying to understand the different use cases, there are certainly a number of situations that are not yet covered. For now we have concentrated on the limited case where the nodes that are involved in the migration are states. The goal is to expand this support to other nodes (e.g. human tasks) in the future. We welcome any feedback around these use cases very eagerly.
For all the examples that follow, we will start from the same original process definition:
<process name="foobar">
  <start>
    <transition to="a"/>
  </start>
  <state name="a">
    <transition to="b"/>
  </state>
  <state name="b">
    <transition to="c"/>
  </state>
  <state name="c">
    <transition to="end"/>
  </state>
  <end name="end"/>
</process>
The first obvious use case that we wanted to cover is where a new version of a process is deployed for which one of the following statements is true:
This use case might be useful if for instance event handler names change between versions, if new processing has to be inserted or if new paths of execution have to be added. Consider the following modification of the above process definition to indicate that running instances have to be migrated:
<process name="foobar">
  <start>
    <transition to="a"/>
  </start>
  <state name="a">
    <transition to="b"/>
  </state>
  <state name="b">
    <transition to="c"/>
  </state>
  <state name="c">
    <transition to="end"/>
  </state>
  <end name="end"/>
  <migrate-instances/>
</process>
When this second process is deployed, the running instances of the previous version - and only of the previous version - will be migrated to the new version. We'll explain later what to do if you want more than only the instances of the previous version to be migrated. Assume that when deploying the second version there would be one process instance in the state "a" and one process instance in the state "b". The following snippet in a unit test would be valid:
ProcessDefinition pd1 = deployProcessDefinition("foobar", originalVersion);
ProcessInstance pi1 = startAndSignal(pd1, "a");
ProcessInstance pi2 = startAndSignal(pd1, "b");

ProcessDefinition pd2 = deployProcessDefinition("foobar", versionWithSimpleMigration);

pi1 = executionService.findProcessInstanceById(pi1.getId());
pi2 = executionService.findProcessInstanceById(pi2.getId());

assertEquals(pd2.getId(), pi1.getProcessDefinitionId());
assertEquals(pd2.getId(), pi2.getProcessDefinitionId());

assertEquals(pi1, pi1.findActiveExecutionIn("a"));
assertEquals(pi2, pi2.findActiveExecutionIn("b"));
The second use case is when the instances of the previous process definition have to be ended. The way to indicate this would be to add the action attribute to the migrate-instances tag with the value of "end".
<process name="foobar">
  ...
  <migrate-instances action="end" />
</process>
If we take the situation from above with one process in state "a" and another in state "b" the two processes would be ended as can be seen in the following snippet:
ProcessDefinition pd1 = deployProcessDefinition("foobar", originalVersion);
ProcessInstance pi1 = startAndSignal(pd1, "a");
ProcessInstance pi2 = startAndSignal(pd1, "b");

ProcessDefinition pd2 = deployProcessDefinition("foobar", versionWithSimpleAbortion);

pi1 = executionService.findProcessInstanceById(pi1.getId());
pi2 = executionService.findProcessInstanceById(pi2.getId());

assertNull(pi1);
assertNull(pi2);
So we've shown you how instances of the previously deployed version - and only that one - can be either migrated or ended. But what if you want to perform these actions on process instances of other, earlier deployed versions? This can be done by making use of the versions attribute of the migrate-instances tag. This attribute lets you specify a range of versions that need to be migrated (or ended). Consider the following process definition:
<process name="foobar">
  <start>
    <transition to="a"/>
  </start>
  <state name="a">
    <transition to="b"/>
  </state>
  <state name="b">
    <transition to="c"/>
  </state>
  <state name="c">
    <transition to="end"/>
  </state>
  <end name="end"/>
  <migrate-instances
</process>
Imagine a situation where we would deploy the original process definition 4 times in a row and for each deployment start a process instance that waits in state "a". Then we deploy the above version of the process definition with instance migration. The result will be that instance 2 and instance 3 will be migrated while instance 1 and instance 4 will keep running following their original definition. This is shown in the snippet below:
ProcessDefinition pd1 = deployProcessDefinition("foobar", originalVersion);
ProcessInstance pi1 = startAndSignal(pd1, "a");

ProcessDefinition pd2 = deployProcessDefinition("foobar", originalVersion);
ProcessInstance pi2 = startAndSignal(pd2, "a");

ProcessDefinition pd3 = deployProcessDefinition("foobar", originalVersion);
ProcessInstance pi3 = startAndSignal(pd3, "a");

ProcessDefinition pd4 = deployProcessDefinition("foobar", originalVersion);
ProcessInstance pi4 = startAndSignal(pd4, "a");

ProcessDefinition pd5 = deployProcessDefinition("foobar", versionWithAbsoluteVersionRange);

pi1 = executionService.findProcessInstanceById(pi1.getId());
pi2 = executionService.findProcessInstanceById(pi2.getId());
pi3 = executionService.findProcessInstanceById(pi3.getId());
pi4 = executionService.findProcessInstanceById(pi4.getId());

assertEquals(pd1.getId(), pi1.getProcessDefinitionId());
assertEquals(pd5.getId(), pi2.getProcessDefinitionId());
assertEquals(pd5.getId(), pi3.getProcessDefinitionId());
assertEquals(pd4.getId(), pi4.getProcessDefinitionId());
A number of variants exist for the versions attribute. The example above uses an absolute version range. It is also possible to use an expression of the form x-n to indicate a version number relative to the last deployed version. So if you want to only migrate instances from the last two versions you could use the following expression for the versions attribute:
<process name="foobar">
  ...
  <migrate-instances
</process>
You can also mix and match the absolute and the relative specifications. E.g. if you would like to migrate all the instances of all the versions to the newly deployed version you can use the following:
<process name="foobar">
  ...
  <migrate-instances
</process>
And for this last example you can also use the "*" wildcard notation:
<process name="foobar">
  ...
  <migrate-instances
</process>
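To make the semantics of the versions attribute concrete, here is a small standalone sketch that resolves a version expression against the latest deployed version. The exact jBPM grammar is only partially visible above, so the forms assumed here (absolute n..m, relative x-n, wildcard *) and the class VersionRange are assumptions drawn from the surrounding prose, not the real parser.

```java
// Sketch of how a versions range such as "2..3", "x-2..x" or "*" could be
// resolved against the latest deployed version. This mimics the semantics
// described above; it is not the jBPM implementation.
public class VersionRange {

    // Returns {first, last} as absolute version numbers.
    public static int[] resolve(String expr, int lastVersion) {
        if (expr.equals("*")) return new int[] { 1, lastVersion };
        String[] parts = expr.split("\\.\\.");
        return new int[] { resolveSide(parts[0], lastVersion),
                           resolveSide(parts[1], lastVersion) };
    }

    private static int resolveSide(String side, int lastVersion) {
        side = side.trim();
        if (side.equals("x")) return lastVersion;               // "x" = latest version
        if (side.startsWith("x-"))                              // "x-n" = relative to latest
            return lastVersion - Integer.parseInt(side.substring(2));
        return Integer.parseInt(side);                          // absolute version number
    }

    public static void main(String[] args) {
        int[] r = resolve("x-2..x", 5); // with 5 deployed versions this covers 3..5
        System.out.println(r[0] + ".." + r[1]);
    }
}
```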
In some cases users will want to map nodes from the previously deployed process definition to nodes of the newly deployed process definition. This could be the case when in the newly deployed process definition some nodes are deleted or have been replaced by nodes with a different name. To support this use case it is possible to specify so-called activity-mapping elements. These elements have two attributes: the activity name in the old process definition and the activity name in the new process definition. Consider the following process definition:
<process name="foobar">
  <start>
    <transition to="a"/>
  </start>
  <state name="a">
    <transition to="b"/>
  </state>
  <state name="b">
    <transition to="c"/>
  </state>
  <state name="c">
    <transition to="d"/>
  </state>
  <state name="d">
    <transition to="end"/>
  </state>
  <end name="end"/>
  <migrate-instances>
    <activity-mapping
    <activity-mapping
  </migrate-instances>
</process>
Deploying this process will put all the instances of the previously deployed process that are waiting in the state "b" into the state "a" of the newly deployed process. Likewise all instances of the previously deployed process definition that are waiting in state "c" will be placed in the state "d". The following piece of code illustrates this:
ProcessDefinition pd1 = deployProcessDefinition("foobar", originalVersion);
ProcessInstance pi1 = startAndSignal(pd1, "a");
ProcessInstance pi2 = startAndSignal(pd1, "b");
ProcessInstance pi3 = startAndSignal(pd1, "c");

ProcessDefinition pd2 = deployProcessDefinition("foobar", versionWithCorrectMappings);

pi1 = executionService.findProcessInstanceById(pi1.getId());
pi2 = executionService.findProcessInstanceById(pi2.getId());
pi3 = executionService.findProcessInstanceById(pi3.getId());

assertEquals(pd2.getId(), pi1.getProcessDefinitionId());
assertEquals(pd2.getId(), pi2.getProcessDefinitionId());
assertEquals(pd2.getId(), pi3.getProcessDefinitionId());

assertEquals(pi1, pi1.findActiveExecutionIn("a"));
assertEquals(pi2, pi2.findActiveExecutionIn("a"));
assertEquals(pi3, pi3.findActiveExecutionIn("d"));

pi1 = executionService.signalExecutionById(pi1.getId());
pi2 = executionService.signalExecutionById(pi2.getId());
pi2 = executionService.signalExecutionById(pi2.getId());
pi3 = executionService.signalExecutionById(pi3.getId());

assertEquals(pi1, pi1.findActiveExecutionIn("b"));
assertEquals(pi2, pi2.findActiveExecutionIn("c"));
assertTrue(pi3.isEnded());
We already mentioned that there are a lot of use cases that we probably didn't think of or for which there was not enough time to build support. Exactly for this reason we provide the concept of a migration handler. This is a user defined piece of code that implements the interface "org.jbpm.pvm.internal.migration.MigrationHandler":
public interface MigrationHandler {

  void migrateInstance(
      ProcessDefinition newProcessDefinition,
      ProcessInstance processInstance,
      MigrationDescriptor migrationDescriptor);
}
This migration handler can be specified in the process definition xml and will be executed for each process instance that has to be migrated. Experienced users can use this to do all kinds of bookkeeping they need to do when migrating (or ending) process instances. To perform this bookkeeping, it gets of course a handle to the process instance that needs to be migrated, but also to the new process definition and to a so called migration descriptor that contains among others the migration mapping. Take e.g. the following example:
<process name="foobar">
  ...
  <migrate-instances>
    <migration-handler
  </migrate-instances>
</process>
In this case the specified migration handler will be executed for each process instance that needs to be migrated AFTER the default migration has been done. If the attribute action is set to "end" the migration handler will be called BEFORE the process instance is ended. If more than one migration handler is specified, they will be executed one after another.
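The invocation order described above can be summarized in a small standalone sketch; MigrationOrder and its string trace are illustrative, not jBPM classes.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the invocation order described above: for a migration the default
// migration runs first and the handlers afterwards; for action="end" the
// handlers run before the instance is ended. Names are illustrative.
public class MigrationOrder {

    public static List<String> run(String action, List<String> handlers) {
        List<String> trace = new ArrayList<>();
        if (action.equals("end")) {
            trace.addAll(handlers);          // handlers BEFORE ending the instance
            trace.add("end-instance");
        } else {
            trace.add("default-migration");  // default migration happens first
            trace.addAll(handlers);          // then each handler, one after another
        }
        return trace;
    }

    public static void main(String[] args) {
        System.out.println(run("migrate", List.of("h1", "h2")));
        System.out.println(run("end", List.of("h1")));
    }
}
```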
As indicated in the section "User code" in the Users guide, instantiated user objects are by default cached as part of the process definition, so a single user object will be used to serve all requests. You therefore have to make sure that your user code does not change the member fields of the user object. In that case it is safe for your user object to be used by all threads/requests. Such objects are also called stateless user objects.
In case you do have a need for stateful user objects, you can specify the parameter cache="disabled" on the definition of the user code. In that case a new user object will be instantiated for every usage.
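The contrast between the default cached behaviour and cache="disabled" can be sketched as follows; UserObjectCache is a hypothetical helper, not the actual jBPM implementation.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Sketch of cached (default) versus cache="disabled" user objects: cached
// means one shared instance serves all requests, disabled means a fresh
// instance per usage. Purely illustrative.
public class UserObjectCache {

    private final Map<String, Object> cache = new HashMap<>();

    public Object get(String name, boolean cacheEnabled, Supplier<Object> factory) {
        if (!cacheEnabled) return factory.get();                  // new object every usage
        return cache.computeIfAbsent(name, k -> factory.get());  // shared singleton
    }

    public static void main(String[] args) {
        UserObjectCache c = new UserObjectCache();
        Object a = c.get("myActivity", true, Object::new);
        Object b = c.get("myActivity", true, Object::new);
        System.out.println(a == b); // cached: same instance
        Object x = c.get("myActivity", false, Object::new);
        System.out.println(a == x); // disabled: a different instance each time
    }
}
```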
In your project you might have user domain objects like e.g. an Order or Claim object in your project mapped with hibernate. This section explains how to combine updates to user domain objects with jBPM operations in a single transaction.
Here's an example of a user command:
public class MyUserCommand implements Command<Void> {

  public Void execute(Environment environment) throws Exception {
    // your user domain objects

    // an example jBPM operation
    ExecutionService executionService = environment.get(ExecutionService.class);
    executionService.signalExecutionById(executionId);

    return null;
  }
}
Then such commands can be executed by the ProcessEngine:
processEngine.execute(new MyUserCommand());
The Business Process Modeling Notation (BPMN) is a standard for the graphical notation of business process models. The standard is maintained by the Object Management Group (OMG).
Basically, the BPMN 2 specification is in its finalization phase and is scheduled to be finished soon. Whereas earlier BPMN versions only defined the graphical notation, BPMN 2 also defines execution semantics for that notation.
The jBPM BPMN2 implementation was started in close collaboration with the community in August 2009, after releasing jBPM 4.0. Later it was decided that the first release (i.e. documented and QA'd) to incorporate parts of the BPMN2 specification would be jBPM 4.
Users who are already familiar with jBPM will find that many BPMN2 constructs have a natural counterpart in jPDL; throughout this chapter the JPDL equivalents are mentioned where they exist.
One of the first questions that might, rightfully, come to mind is why BPMN2 is being implemented while there is jPDL. Both are languages for defining executable business processes on top of the jBPM engine.
The BPMN2 specification defines a very rich language for modeling and executing business processes. However, this also means that it is quite hard to get an overview of what's possible with BPMN2. To simplify this situation, we've decided to categorize the BPMN2 constructs into three 'levels'. The separation itself is primarily based on the book 'BPMN Method and Style' by Bruce Silver, the training material of Dr. Jim Arlow, the article 'How much BPMN do you need', and our own experience.
We define three categories of BPMN2 constructs:
Enabling BPMN 2.0 in your application is extremely simple: just add the following line to the jbpm.cfg.xml file.
<import resource="jbpm.bpmn.cfg.xml" />
This import will enable BPMN 2.0 process deployment by installing a BPMN 2.0 deployer in the Process Engine. Do note that a Process Engine can cope with both JPDL and BPMN 2.0 processes. This means that in your application, some processes can be JPDL and others can be BPMN 2.0.
Process definitions are distinguished by the process engine based on the extension of the definition file. For BPMN 2.0, use the *.bpmn.xml extension (whereas JPDL uses the *.jpdl.xml extension).
The examples that are shipped with the distribution also contain examples for every construct that is discussed in the following sections. Look for example BPMN 2.0 processes and test cases in the org.jbpm.examples.bpmn.* package .
See the userguide, chapter 2 (Installation), for a walkthrough on how to import the examples. Look for the section 'Importing the Examples'.
The root of a BPMN 2.0 XML process file is the definitions element. As the name states, its subelements contain the actual definitions of the business process(es). Every process child element can have an id (required) and a name (optional). An empty business process in BPMN 2.0 looks as follows. Also note that it is handy to have the BPMN2.xsd on the classpath, to enable XML completion.
<definitions id="myProcesses" xmlns:

  <process id="My business process" name="myBusinessProcess">
    ...
  </process>

</definitions>
If a name is defined for the process element, it is used as the key for that process (i.e. starting a process can be done by calling executionService.startProcessInstanceByKey("myBusinessProcess")). If no name is defined, the id will be used as key. So having only an id defined will allow starting a process instance using that id. Basically, name and key are equivalent in usage, for example when searching process definitions. Note that for a key the same rules apply as with JPDL: whitespace and non alpha-numeric characters are replaced by an underscore.
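The key-derivation rule just described is simple enough to express directly; keyFromName below is an illustrative helper, not a jBPM API method.

```java
// The rule quoted above ("whitespace and non alpha-numeric characters are
// replaced by an underscore") expressed as a one-liner. Illustrative only.
public class ProcessKeys {

    public static String keyFromName(String name) {
        // every character that is not a letter or a digit becomes '_'
        return name.replaceAll("[^a-zA-Z0-9]", "_");
    }

    public static void main(String[] args) {
        System.out.println(keyFromName("My business process")); // My_business_process
    }
}
```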
Together with activities and gateways, events are used in practically every business process. Events allow process modelers to describe business processes in a very natural way, such as 'This process starts when I receive a customer order', 'If the task is not finished in 2 days, terminate the process' or 'When I receive a cancel e-mail while the process is running, handle the e-mail using this sub-process'. Notice that typical businesses always work in a very event-driven way. People are not hard-coded sequential creatures; they tend to react to things that happen in their environment (i.e. events). In the BPMN specification, a great number of event types are described, to cover the range of possible things that might occur in the context of a business.
A start event indicates the start of a process (or a subprocess). Graphically, it is visualized as a circle with (possibly) a small icon inside. The icon specifies the actual type of event that will trigger the process instance creation.
The 'none start event' is drawn as a circle without an icon inside, which means that the trigger is unknown or unspecified. The start activity of JPDL basically has the same semantics. Process instances whose process definition has a 'none start event' are created using the typical API calls on the executionService.
A none start event is defined as follows. An id is required, a name is optional.
<startEvent id="start" name="myStart" />
An end event indicates the end of an execution path in a process instance. Graphically, it is visualized as a circle with a thick border with (possibly) a small icon inside. The icon specifies the type of signal that is thrown when the end is reached.
The 'none end event' is drawn as a circle with thick border with no icon inside, which means that no signal is thrown when the execution reaches the event. The end activity in JPDL has the same semantics as the none end event.
A none end event is defined as follows. An id is required, a name is optional.
<endEvent id="end" name="myEnd" />
The following example shows a process with only a none start and end event:
The corresponding executable XML for this process looks like this (omitting the definitions root element for clarity)
<process id="noneStartEndEvent" name="BPMN2 Example none start and end event">

  <startEvent id="start" />

  <sequenceFlow id="flow1" name="fromStartToEnd"
                sourceRef="start" targetRef="end" />

  <endEvent id="end" name="End" />

</process>
A process instance can now be created by calling one of the startProcessInstanceXXX operations.
ProcessInstance processInstance = executionService.startProcessInstanceByKey("noneStartEndEvent");
The difference between a 'terminate' and a 'none' end event lies in how a path of execution (a 'token' in BPMN 2.0 terminology) is treated. The 'terminate' end event ends the complete process instance, whereas the 'none' end event only ends the current path of execution. Neither of them throws anything when the end event is reached.
A terminate end event is defined as follows. An id is required, a name is optional.
<endEvent id="terminateEnd" name="myTerminateEnd">
  <terminateEventDefinition/>
</endEvent>
A terminate end event is depicted as an end event (circle with thick border), with a full circle as icon inside. In the following example, completing the 'task1' will end the process instance, while completing the 'task2' will only end the path of execution which enters the end event, leaving the task1 open.
See the examples shipped with the jBPM distribution for the unit test and XML counterpart of this business process.
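The contrast between the two end events can be illustrated with a tiny token model; the EndEvents class below is purely illustrative and not part of any jBPM API.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch contrasting the two end events: a none end event removes only the
// path of execution that reached it, while a terminate end event ends the
// whole process instance. Illustrative token model only.
public class EndEvents {

    private final Set<String> activeExecutions = new HashSet<>();
    private boolean ended;

    public EndEvents(String... executions) {
        for (String e : executions) activeExecutions.add(e);
    }

    public void noneEnd(String execution) {
        activeExecutions.remove(execution);            // only this path ends
        if (activeExecutions.isEmpty()) ended = true;  // instance ends when no paths remain
    }

    public void terminateEnd() {
        activeExecutions.clear();                      // all paths are cancelled at once
        ended = true;
    }

    public boolean isEnded() { return ended; }
    public int activeCount() { return activeExecutions.size(); }

    public static void main(String[] args) {
        EndEvents pi = new EndEvents("task1", "task2");
        pi.noneEnd("task2");
        System.out.println(pi.isEnded() + " " + pi.activeCount()); // false 1: task1 stays open
        pi.terminateEnd();
        System.out.println(pi.isEnded());                          // true: whole instance ended
    }
}
```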
A sequence flow is the connection between events, activities and gateways shown as a solid line with an arrow in a BPMN diagram (JPDL equivalent is the transition). Each sequence flow has exactly one source and exactly one target reference, that contains the id of an activity, event or gateway.
<sequenceFlow id="myFlow" name="My Flow" sourceRef="sourceId" targetRef="targetId" />
An important difference with JPDL is the behaviour of multiple outgoing sequence flows. In JPDL, only one transition is selected as outgoing transition, unless the activity is a fork (or a custom activity with fork behaviour). However, in BPMN, the standard behaviour of multiple outgoing sequence flow is to split the incoming token ('execution' in jBPM terminology) into a collection of tokens, one for each outgoing sequence flow. In the following situation, after completing the first task, there will be three tasks activated instead of one.
To prevent a certain sequence flow from being taken, one has to add a condition to the sequence flow. At runtime, that sequence flow will only be taken when its condition evaluates to true.
To put a condition on a sequence flow, add a conditionExpression element to the sequence flow. Conditions are to be put between ${}.
<sequenceFlow id=....>
  <conditionExpression xsi:type="tFormalExpression">${amount >= 500}</conditionExpression>
</sequenceFlow>
Note that it is currently necessary to add the xsi:type="tFormalExpression" to the conditionExpression. A conditional sequence flow is visualized as a mini diamond shape at the beginning of the sequence flow. Keep in mind that conditions can always be defined on sequence flow, but some constructs will not interpret them (e.g. the parallel gateway).
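The default token behaviour for outgoing sequence flow, including conditions, can be sketched in a few lines of standalone Java; the Flow model below is illustrative, not jBPM's internal representation.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the default BPMN outgoing-flow behaviour described above: every
// outgoing sequence flow whose condition is absent or evaluates to true gets
// its own token. Illustrative only.
public class OutgoingFlows {

    // A sequence flow with an optional condition (null means unconditional).
    public static class Flow {
        public final String target;
        public final Boolean condition;
        public Flow(String target, Boolean condition) {
            this.target = target;
            this.condition = condition;
        }
    }

    public static List<String> takenTargets(List<Flow> outgoing) {
        List<String> targets = new ArrayList<>();
        for (Flow f : outgoing) {
            // unconditional flows always get a token; conditional ones only when true
            if (f.condition == null || f.condition) targets.add(f.target);
        }
        return targets;
    }

    public static void main(String[] args) {
        List<Flow> flows = List.of(
            new Flow("task1", null),           // no condition: always taken
            new Flow("task2", Boolean.TRUE),   // condition true: taken
            new Flow("task3", Boolean.FALSE)); // condition false: skipped
        System.out.println(takenTargets(flows)); // [task1, task2]
    }
}
```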
Activities (such as the user task) and gateways (such as the exclusive gateway) can have a default sequence flow. This default sequence flow is taken only when all the other outgoing sequence flow from an activity or gateway have a condition that evaluate to false. A default sequence flow is graphically visualized as a sequence flow with a 'slash marker'.
The default sequence flow is specified by filling in the 'default' attribute of the activity or gateway.
Also note that an expression on a default sequence flow is ignored.
A gateway in BPMN is used to control the flow through the process. More specifically, when a token (the BPMN 2.0 conceptual notion of an execution) arrives in a gateway, it can be merged or split depending on the gateway type.
Gateways are depicted as a diamond shape, with an icon inside specifying the type (exclusive, inclusive, etc.).
On every gateway type, the attribute gatewayDirection can be set. The following values are possible:
Take for example the following example: a parallel gateway that has as gatewayDirection 'converging', will have a join behaviour.
<parallelGateway id="myJoin" name="My synchronizing join" gatewayDirection="converging" />
Note: the 'gatewayDirection' attribute is optional according to the specification. This means that we cannot rely on this attribute at runtime to know which type of behaviour a certain gateway has (for example, whether a parallel gateway has joining or forking behaviour). However, the 'gatewayDirection' attribute is used at parsing time as a constraint check for the incoming/outgoing sequence flow. So using this attribute will lower the chance of errors when referencing sequence flow, but it is not required.
An exclusive gateway represents an exclusive decision in the process. Exactly one outgoing sequence flow will be taken, depending on the conditions defined on the sequence flow.
The corresponding JPDL construct with the same semantics is the decision activity. The full technical name of the exclusive gateway is the 'exclusive data-based gateway', but it is also often called the XOR gateway. The XOR gateway is depicted as a diamond with an 'X' icon inside. An empty diamond without an icon also signifies an exclusive gateway.
The following diagram shows the usage of an exclusive gateway: depending on the value of the amount variable, one of the three outgoing sequence flow out of the exclusive gateway is chosen.
The corresponding executable XML of this process looks as follows. Note that the conditions are defined on the sequence flow. The exclusive gateway will select the single sequence flow for which its condition evaluates to true. If multiple conditions evaluate to true, the first one encountered will be taken (a log message will indicate this situation).
<process id="exclusiveGateway" name="BPMN2 Example exclusive gateway">

  <startEvent id="start" />

  <sequenceFlow id="flow1" name="fromStartToExclusiveGateway"
                sourceRef="start" targetRef="decideBasedOnAmountGateway" />

  <exclusiveGateway id="decideBasedOnAmountGateway" name="decideBasedOnAmount" />

  <sequenceFlow id="flow2" name="fromGatewayToEndNotEnough"
                sourceRef="decideBasedOnAmountGateway" targetRef="endNotEnough">
    <conditionExpression xsi:type="tFormalExpression">
      ${amount < 100}
    </conditionExpression>
  </sequenceFlow>

  <sequenceFlow id="flow3" name="fromGatewayToEnEnough"
                sourceRef="decideBasedOnAmountGateway" targetRef="endEnough">
    <conditionExpression xsi:type="tFormalExpression">
      ${amount <= 500 && amount >= 100}
    </conditionExpression>
  </sequenceFlow>

  <sequenceFlow id="flow4" name="fromGatewayToMoreThanEnough"
                sourceRef="decideBasedOnAmountGateway" targetRef="endMoreThanEnough">
    <conditionExpression xsi:type="tFormalExpression">
      ${amount > 500}
    </conditionExpression>
  </sequenceFlow>

  <endEvent id="endNotEnough" name="not enough" />
  <endEvent id="endEnough" name="enough" />
  <endEvent id="endMoreThanEnough" name="more than enough" />

</process>
This process needs a variable such that the expression can be evaluated at runtime. Variables can be provided when starting the process instance (similar to JPDL):
Map<String, Object> vars = new HashMap<String, Object>();
vars.put("amount", amount);
ProcessInstance processInstance = executionService.startProcessInstanceByKey("exclusiveGateway", vars);
The exclusive gateway requires that all outgoing sequence flow have conditions defined on them. An exception to this rule is the default sequence flow. Use the default attribute to reference an existing id of a sequence flow. This sequence flow will be taken when the conditions on the other outgoing sequence flow all evaluate to false.
<exclusiveGateway id="decision" name="decideBasedOnAmountAndBankType" default="myFlow"/>

<sequenceFlow id="myFlow" name="fromGatewayToStandard"
              sourceRef="decision" targetRef="standard">
</sequenceFlow>
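The selection logic of the exclusive gateway (the first condition that evaluates to true wins, with the default flow as fallback) can be sketched as follows; ExclusiveGateway here is an illustrative standalone class, not jBPM's implementation.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of exclusive-gateway selection: conditions are checked in order, the
// first sequence flow whose condition is true wins, and the default flow is
// taken only when every condition evaluated to false. Illustrative only.
public class ExclusiveGateway {

    public static String select(Map<String, Boolean> conditionsByFlow, String defaultFlow) {
        for (Map.Entry<String, Boolean> e : conditionsByFlow.entrySet()) {
            if (e.getValue()) return e.getKey(); // first true condition wins
        }
        return defaultFlow; // all conditions false: fall back to the default flow
    }

    public static void main(String[] args) {
        Map<String, Boolean> flows = new LinkedHashMap<>();
        int amount = 50;
        flows.put("endNotEnough", amount < 100);
        flows.put("endEnough", amount >= 100 && amount <= 500);
        flows.put("endMoreThanEnough", amount > 500);
        System.out.println(select(flows, "myFlow")); // endNotEnough
    }
}
```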
An exclusive gateway can have both converging and diverging functionality. The logic is easy to grasp: for every execution that arrives at the gateway, one outgoing sequence flow is selected to continue the flow. The following diagram is completely legal in BPMN 2.0 (omitting names and conditions for clarity).
A parallel gateway is used either to split the flow into parallel outgoing sequence flows (diverging) or to synchronize incoming sequence flows (converging).
A parallel gateway is defined as follows:
<parallelGateway id="myParallelGateway" name="My Parallel Gateway" />
Note that the 'gatewayDirection' attribute can be used to catch modeling errors at parsing time (see above).
The following diagram shows how a parallel gateway can be used. After process start, both the 'prepare shipment' and 'bill customer' user tasks will be active. The parallel gateway is depicted as a diamond shape with a plus icon inside, both for the splitting and joining behaviour.
The XML counterpart of this diagram looks as follows:
<process id="parallelGateway" name="BPMN2 example parallel gateway">

  <startEvent id="Start" />

  <sequenceFlow id="flow1" name="fromStartToSplit"
                sourceRef="Start" targetRef="parallelGatewaySplit" />

  <parallelGateway id="parallelGatewaySplit" name="Split" gatewayDirection="diverging"/>

  <sequenceFlow id="flow2a" name="Leg 1"
                sourceRef="parallelGatewaySplit" targetRef="prepareShipment" />
  <userTask id="prepareShipment" name="Prepare shipment" implementation="other" />
  <sequenceFlow id="flow2b" name="fromPrepareShipmentToJoin"
                sourceRef="prepareShipment" targetRef="parallelGatewayJoin" />

  <sequenceFlow id="flow3a" name="Leg 2"
                sourceRef="parallelGatewaySplit" targetRef="billCustomer" />
  <userTask id="billCustomer" name="Bill customer" implementation="other" />
  <sequenceFlow id="flow3b" name="fromLeg2ToJoin"
                sourceRef="billCustomer" targetRef="parallelGatewayJoin" />

  <parallelGateway id="parallelGatewayJoin" name="Join" gatewayDirection="converging"/>

  <sequenceFlow id="flow4" sourceRef="parallelGatewayJoin" targetRef="End">
  </sequenceFlow>

  <endEvent id="End" name="End" />

</process>
A parallel gateway (as is the case for any gateway) can have both splitting and merging behaviour. The following diagram is completely legal BPMN 2.0. After process start, both task A and B will be active. When both A and B are completed, tasks C, D and E will be active.
An inclusive gateway - also called an OR-gateway - is used to 'conditionally' split or merge sequence flow. It basically behaves as a parallel gateway, but it also takes in account conditions on the outgoing sequence flow (split behaviour) and calculates if there are executions left that could reach the gateway (merge behaviour).
The inclusive gateway is depicted as a typical gateway shape with a circle inside (referring to 'OR' semantics). Unlike the exclusive gateway, all condition expressions are evaluated (diverging or 'split' behaviour). For every expression that evaluates to true, a new child execution is created. Sequence flow without a condition will always be taken (ie. a child execution will always be created in that case).
A converging inclusive gateway ('merge' behaviour) has a somewhat more difficult execution logic. When an execution (Token in BPMN 2.0 terminology) arrives at the merging inclusive gateway, the following is checked (quoting the specification literally):
"For each empty incoming sequence flow, there is no Token in the graph anywhere upstream of this sequence flow, i.e., there is no directed path (formed by Sequence Flow) from a Token to this sequence flow unless a) the path visits the inclusive gateway or b) the path visits a node that has a directed path to a non-empty incoming sequence flow of the inclusive gateway."
In more simple words: when an execution arrives at the gateway, all active executions are checked on whether they can reach the inclusive gateway, by only taking the sequence flow into account (note: conditions are not evaluated!). When the inclusive gateway is used, it is usually used in a pair of splitting/merging inclusive gateways. In those cases, the execution behaviour is easy enough to grasp by just looking at the model.
Of course, it is not hard to imagine situations where the executions are split and merged in complex combinations using a variety of constructs including the inclusive gateway. In those cases, it could very well be that the actual execution behaviour is not what the modeler expects. So be careful when using the inclusive gateway and keep in mind that it is often the best practice to use inclusive gateways just in pairs.
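The upstream check described above can be sketched as a simple graph reachability test. The following is a hypothetical simplification (it ignores the specification's refinements about paths through the gateway itself, and all names are invented): the merging gateway may only fire when no other active execution can still reach it by following sequence flow.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of the converging inclusive-gateway check: the gateway
// fires only when no other active execution can still reach it by following
// sequence flow (conditions are deliberately not evaluated).
class InclusiveMergeSketch {

    // flows maps an activity id to the ids of activities it can flow to
    public static boolean canReach(Map<String, List<String>> flows, String from, String gateway) {
        Set<String> visited = new HashSet<>();
        Deque<String> work = new ArrayDeque<>();
        work.push(from);
        while (!work.isEmpty()) {
            String current = work.pop();
            if (current.equals(gateway)) {
                return true;
            }
            if (visited.add(current)) {
                for (String next : flows.getOrDefault(current, List.of())) {
                    work.push(next);
                }
            }
        }
        return false;
    }

    public static boolean gatewayMayFire(Map<String, List<String>> flows,
                                         Set<String> otherActiveExecutions, String gateway) {
        // wait as long as any other active execution is still upstream of the gateway
        return otherActiveExecutions.stream().noneMatch(a -> canReach(flows, a, gateway));
    }
}
```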
The following diagram shows how the inclusive gateway can be used. (example taken from "BPMN method and style" by Bruce Silver)
We can distinguish the following cases:
No matter how many tasks are active after going through the inclusive gateway, the converging inclusive gateway on the right will wait until all outgoing sequence flow of the inclusive gateway on the left have reached the merging gateway (sometimes only one, sometimes two). Take a look at org.jbpm.examples.bpmn.gateway.inclusive.InclusiveGatewayTest to see how this example reflects in a unit test.
The XML version of the example above looks as follows:
<process id="inclusiveGateway" name="BPMN2 Example inclusive gateway">
  <startEvent id="start" />
  <sequenceFlow id="flow1" sourceRef="start" targetRef="inclusiveGatewaySplit" />
  <inclusiveGateway id="inclusiveGatewaySplit" default="flow3"/>
  <sequenceFlow id="flow2" sourceRef="inclusiveGatewaySplit" targetRef="largeDeposit">
    <conditionExpression xsi:type="tFormalExpression">${cash > 10000}</conditionExpression>
  </sequenceFlow>
  <sequenceFlow id="flow3" sourceRef="inclusiveGatewaySplit" targetRef="standardDeposit" />
  <sequenceFlow id="flow4" sourceRef="inclusiveGatewaySplit" targetRef="foreignDeposit">
    <conditionExpression xsi:type="tFormalExpression">${bank == 'foreign'}</conditionExpression>
  </sequenceFlow>
  <userTask id="largeDeposit" name="Large deposit" />
  <sequenceFlow id="flow5" sourceRef="largeDeposit" targetRef="inclusiveGatewayMerge" />
  <userTask id="standardDeposit" name="Standard deposit" />
  <sequenceFlow id="flow6" sourceRef="standardDeposit" targetRef="inclusiveGatewayMerge" />
  <userTask id="foreignDeposit" name="Foreign deposit" />
  <sequenceFlow id="flow7" sourceRef="foreignDeposit" targetRef="inclusiveGatewayMerge" />
  <inclusiveGateway id="inclusiveGatewayMerge" />
  <sequenceFlow id="flow8" sourceRef="inclusiveGatewayMerge" targetRef="theEnd" />
  <endEvent id="theEnd" />
</process>
As with any gateway type, the inclusive gateway type can have both merging and splitting behaviour. In that case, the inclusive gateway will first wait until all executions have arrived, before splitting again for every sequence flow that has a condition that evaluates to true (or doesn't have a condition).
A task represents work that needs to be done by an external entity, such as a human actor or an automated service.
It's important to note that the BPMN semantics of a 'task' differ from the JPDL semantics. In JPDL, the concept 'task' is always used in the context of a human actor doing some type of work. When the process engine encounters a task in JPDL, it will create a task in some human actor's task list and it will behave as a wait state. In BPMN 2.0 however, there are several task types, some indicating a wait state (eg. the User Task) and some indicating an automatic activity (eg. the Service Task). So take good care not to confuse the meaning of the task concept when switching languages.
Tasks are depicted by a rounded rectangle, typically containing a text inside. The type of the task (user task, service task, script task, etc.) is shown as a little icon on the left top corner of the rectangle. Depending on the task type, the engine will execute different functionality.
A User task is the typical 'human task' that is found in practically every workflow or BPM software out there. When process execution reaches such a user task, a new human task is created in the task list of a given user.
The main difference with a manual task (which also signifies work for a human actor) is that the task is known to the process engine. The engine can track the completion, assignee, time, etc which is not the case for a manual task.
A user task is depicted as a rounded rectangle with a small user icon in the top left corner.
A user task is defined as follows in the BPMN 2.0 XML:
<userTask id="myTask" name="My task" />
According to the specification, multiple implementations are possible (Webservice, WS-humantask, etc.), as stated by using the implementation attribute. Currently, only the standard jBPM task mechanism is available, so there is no point (yet) in defining the 'implementation' attribute.
The BPMN 2.0 specification contains quite a few ways of assigning user tasks to user(s), group(s), role(s), etc. The current BPMN 2.0 jBPM implementation allows tasks to be assigned using a resourceAssignmentExpression, combined with the humanPerformer or potentialOwner construct. It is to be expected that this area will evolve in future releases.
A potentialOwner is used when you want to make a certain user, group, role, etc. a candidate for a certain task. Take the following example. Here the candidate group for the task 'My task' will be the 'management' group. Also note that a resource must be defined outside the process, such that the task assignment can reference the resource. In fact, any activity can reference one or more resource elements. Currently, only defining this resource is enough (since it is a required attribute by the spec), but this will be enhanced in a later release (eg. resources can have runtime parameters).
<resource id="manager" name="manager" />

<process ...>
  ...
  <userTask id="myTask" name="My task">
    <potentialOwner resourceRef="manager" jbpm:type="group">
      <resourceAssignmentExpression>
        <formalExpression>management</formalExpression>
      </resourceAssignmentExpression>
    </potentialOwner>
  </userTask>
Note that we are using a specific extension here (jbpm:type="group"), to define this is a group assignment. If this attribute is removed, the group semantics will be used as default (which would be ok in this example). Now suppose that Peter and Mary are a member of the management group (here using the default identity service):
identityService.createGroup("management");

identityService.createUser("peter", "Peter", "Pan");
identityService.createMembership("peter", "management");

identityService.createUser("mary", "Mary", "Littlelamb");
identityService.createMembership("mary", "management");
Then both peter and mary can look in their task list for this task (code snippet from the example unit test):
// Peter and Mary are both part of management, so they both should see the task
List<Task> tasks = taskService.findGroupTasks("peter");
assertEquals(1, tasks.size());
tasks = taskService.findGroupTasks("mary");
assertEquals(1, tasks.size());

// Mary claims the task
Task task = tasks.get(0);
taskService.takeTask(task.getId(), "mary");
assertNull(taskService.createTaskQuery().candidate("peter").uniqueResult());

taskService.completeTask(task.getId());
assertProcessInstanceEnded(processInstance);
When the assignment should be done to a candidate user, just use the jbpm:type="user" attribute.
<userTask id="myTask" name="My User task">
  <potentialOwner resourceRef="employee" jbpm:type="user">
    <resourceAssignmentExpression>
      <formalExpression>peter</formalExpression>
    </resourceAssignmentExpression>
  </potentialOwner>
</userTask>
In this example, peter will be able to find the task since he's a candidate user for the task.
List<Task> tasks = taskService.createTaskQuery().candidate("peter").list();
A human performer is used when you want to assign a task directly to a certain user, group, role, etc. The way to do this looks very much like that of the potential owner.
<resource id="employee" name="employee" />

<process ...>
  ...
  <userTask id="myTask" name="My User task">
    <humanPerformer resourceRef="employee">
      <resourceAssignmentExpression>
        <formalExpression>mary</formalExpression>
      </resourceAssignmentExpression>
    </humanPerformer>
  </userTask>
In this example, the task will be directly assigned to Mary. She can now find the task in her task list:
List<Task> tasks = taskService.findPersonalTasks("mary");
Since the task assignment is done through the use of a formalExpression, it's also possible to define expressions that are evaluated at runtime. The expression itself needs to be put inside ${}, as usual in jBPM. For example, if a process variable 'user' is defined, it can be used inside an expression. More complex expressions are of course possible.
<userTask id="myTask" name="My User task">
  <humanPerformer resourceRef="employee">
    <resourceAssignmentExpression>
      <formalExpression>${user}</formalExpression>
    </resourceAssignmentExpression>
  </humanPerformer>
</userTask>
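As an illustration of what happens at runtime, here is a hypothetical sketch of resolving a simple ${variable} reference against the process variables. This is an invented simplification for illustration only: jBPM actually uses a full EL engine, which also supports method calls and nested properties.

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch: substitute simple ${name} references in an
// assignment expression with the corresponding process variable values.
class AssignmentExpressionSketch {

    private static final Pattern EXPRESSION = Pattern.compile("\\$\\{(\\w+)}");

    public static String resolve(String formalExpression, Map<String, Object> variables) {
        Matcher m = EXPRESSION.matcher(formalExpression);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            Object value = variables.get(m.group(1));  // look up the process variable
            m.appendReplacement(out, Matcher.quoteReplacement(String.valueOf(value)));
        }
        m.appendTail(out);
        return out.toString();
    }
}
```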
Note that it is not needed to use the 'jbpm:type' on a humanPerformer element, since only direct user assignments can be done. If a task needs to be assigned to a role or group, use the potentialOwner with a group type (when you assign a task to a group, all members of that group will always be candidate users for that group - hence the usage of potentialOwner).
A Service Task is an automatic activity that calls some sort of service, such as a web service, Java service, etc. Currently, only Java service invocations are supported by the jBPM engine, but Web service invocations are planned for a future release.
Defining a service task requires quite a few lines of XML (the BPEL influence is certainly visible here). Of course, in the near future, we expect that tooling will simplify this area a lot. A service task is defined as follows:
<serviceTask id="MyServiceTask" name="My service task" implementation="Other" operationRef="myOperation" />
The service task has a required id and an optional name. The implementation attribute is used to indicate what the type of the invoked service is. Possible values are WebService, Other or Unspecified. Since we've only implemented the Java invocation, only the Other choice will do something useful for the moment.
The service task will invoke a certain operation that is referenced by the operationRef attribute using the id of an operation. Such an operation is part of an interface as shown below. Every operation has at least one input message and at most one output message.
<interface id="myInterface" name="org.jbpm.MyJavaService">
  <operation id="myOperation" name="myMethod">
    <inMessageRef>inputMessage</inMessageRef>
    <outMessageRef>outputMessage</outMessageRef>
  </operation>
</interface>
For a Java service, the name of the interface is used to specify the fully qualified classname of the Java class. The name of the operation is then used to specify the name of the method that must be called. The input/output message that represent the parameters/return value of the Java method are defined as follows:
<message id="inputMessage" name="input message" structureRef="myItemDefinition1" />
Several elements in BPMN are so-called 'item-aware', including this message construct. This means that they are involved in storing or reading items during process execution. The data structure to hold these elements is specified using a reference to an ItemDefinition. In this context, the message specifies its data structure by referencing an Itemdefinition in the structureRef attribute.
<itemDefinition id="myItemDefinition1" >
  <jbpm:arg>
    <jbpm:object ... />
  </jbpm:arg>
</itemDefinition>

<itemDefinition id="myItemDefinition2">
  <jbpm:var ... />
</itemDefinition>
Do note that this is not fully standard BPMN 2.0 as by the specification (hence the 'jbpm' prefix). In fact, according to the specification, the ItemDefinition shouldn't contain more than a data structure definition. The actual mapping between input parameters, with a certain data structure, is done in the ioSpecification section of the serviceTask. However, the current jBPM BPMN 2.0 implementation hasn't implemented that construct yet. So, this means that the current usage as described above, will probably change in the near future.
Important note: Interfaces, ItemDefinitions and messages are defined outside a <process>. See the example ServiceTaskTest for a concrete process and unit test.
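Conceptually, a Java service resolved this way can be dispatched with plain reflection: the interface name is looked up as a class and the operation name as a method. The following sketch is a hypothetical simplification for illustration (invented class name, no real jBPM API), matching methods only by name and parameter count.

```java
import java.lang.reflect.Method;

// Hypothetical sketch of Java service-task dispatch: the interface name is
// treated as a fully qualified class name and the operation name as a method
// name, resolved and invoked via reflection.
class JavaServiceInvokerSketch {

    public static Object invoke(String className, String methodName, Object target, Object... args) {
        try {
            Class<?> type = Class.forName(className);
            for (Method method : type.getMethods()) {
                // naive overload resolution: match by name and parameter count
                if (method.getName().equals(methodName) && method.getParameterCount() == args.length) {
                    return method.invoke(target, args);  // return value becomes the output message
                }
            }
            throw new NoSuchMethodException(className + "#" + methodName);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }
}
```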
A script task is an automatic activity for which the process engine will execute a script when the task is reached. The script task is used as follows:
<scriptTask id="scriptTask" name="Script Task" scriptLanguage="bsh">
  <script><![CDATA[
    for(int i=0; i < input.length; i++){
      System.out.println(input[i] + " x 2 = " + (input[i]*2));
    }
  ]]></script>
</scriptTask>
The script task, besides the required id and the optional name, allows for specifying a scriptLanguage and a script. Since we're using JSR-223 ('Scripting for the Java platform'), changing the script language involves little more than changing the scriptLanguage attribute.
The XML above is visualized as follows (adding a none start and end event).
As shown in the example, process variables can be used inside the scripts. We can now start a process instance for this example process, while also supplying some random input variables:
Map<String, Object> variables = new HashMap<String, Object>();
Integer[] values = { 11, 23, 56, 980, 67543, 8762524 };
variables.put("input", values);

executionService.startProcessInstanceByKey("scriptTaskExample", variables);
In the output console, we can now see the script being executed:
11 x 2 = 22 23 x 2 = 46 56 x 2 = 112 980 x 2 = 1960 67543 x 2 = 135086 8762524 x 2 = 17525048
A manual task is a task that is performed by an external actor, but without the aid of a BPM system or a service that is invoked. In the real world, examples are plenty: the installation of a telephone system, sending a letter using regular mail, calling a customer by phone, etc.
<manualTask id="myManualTask" name="Call customer" />
The purpose of the manual task is more documentation/modeling-wise, as it has no meaning for execution on a process engine. As such, the process engine will simply pass through a manual task when it encounters one.
A receive task is a task that waits for the arrival of an external message. Besides the obvious use case involving webservices, the specification is liberal in what to do in other environments. The web service use case is not yet implemented, but the receive task can already be used in a Java environment.
The receive task is depicted as a rounded rectangle (= task shape) with a little envelope in the left top corner.
In a Java environment, the receive task without any other attribute filled in besides an id and (optionally) a name, behaves as a wait state. To introduce a wait state in your business process, just add the following line:
<receiveTask id="receiveTask" name="wait" />
Process execution will wait in such a receive task. The process can then be continued using the familiar jBPM signal methods. Note that this will probably change in the future, since a 'signal' has a completely different meaning in BPMN 2.0.
Execution execution = processInstance.findActiveExecutionIn("receiveTask");
executionService.signalExecutionById(execution.getId());
Subprocesses are in the first place a way of making a process "hierarchical", meaning that a modeller can create several 'levels' of detail. The top level view then explains the high-level way of doing things, while the lowest level focusses on the nitty gritty details.
Take for example the following diagram. In this model, only the high level steps are shown. The actual implementation of the "Check credit" step is hidden behind a collapsed subprocess, which may be the perfect level of detail to discuss business processes with end-users.
The second major use case for sub-processes is that the sub-process "container" acts as a scope for events. When an event is fired from within the sub-process, the catch events on the boundary of the sub-process will be the first to receive this event.
A sub-process that is defined within a top-level process is called an embeddable sub-process. All process data that is available in the parent process is also available in the sub-process. The following diagram shows the expanded version of the model above.
The XML counterpart of this model looks as follows:
<process id="embeddedSubprocess">
  <startEvent id="theStart" />
  <sequenceFlow id="flow1" sourceRef="theStart" targetRef="receiveOrder" />
  <receiveTask name="Receive order" id="receiveOrder" />
  <sequenceFlow id="flow2" sourceRef="receiveOrder" targetRef="checkCreditSubProcess" />
  <subProcess id="checkCreditSubProcess" name="Credit check">
    ...
  </subProcess>
  <sequenceFlow id="flow9" sourceRef="checkCreditSubProcess" targetRef="theEnd" />
  <endEvent id="theEnd" />
</process>
Note that inside the sub-process, events, activities and tasks are defined as if it were a top-level process (hence the "..." within the XML example above). Sub-processes are only allowed to have a none start event.
Conceptually an embedded sub-process works as follows: when an execution arrives at the sub-process, a child execution is created. The child execution can then later create other (sub-)child executions, for example when a parallel gateway is used within the sub-process. The sub-process however, is only completed when no executions are active anymore within the sub-process. In that case, the parent execution is taken for further continuation of the process.
For example, in the following diagram, the "Third task" will only be reached after both the "First task" and the "Second task" are completed. Completing one of the tasks in the sub-process, will not trigger the continuation of the sub-process, since one execution is still active within the sub-process.
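The completion rule can be sketched with a simple counter (a hypothetical simplification; jBPM tracks real child executions rather than a counter): every path started in the sub-process increments the count, every ended path decrements it, and the parent execution only continues when the count reaches zero.

```java
// Hypothetical sketch of embedded sub-process completion: the sub-process
// scope counts active child executions and only continues the parent
// execution once the last child has ended.
class SubProcessScopeSketch {

    private int activeChildren;
    private boolean parentResumed;

    public void startChild() {
        activeChildren++;
    }

    public void endChild() {
        if (--activeChildren == 0) {
            parentResumed = true;   // take the parent execution to leave the sub-process
        }
    }

    public boolean isParentResumed() {
        return parentResumed;
    }
}
```

In the diagram above, ending either "First task" or "Second task" alone leaves one child active, so "Third task" is only reached after both have completed.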
Sub-processes can have multiple start events. In that case, multiple parallel paths will exist within the sub-process. The rules for sub-process completion are unchanged: the sub-process will only be left when all the executions of the parallel paths are ended.
Nested sub-processes are also possible. This way, the process can be divided into several levels of detail. There is no limitation on the levels of nesting.
Implementation note: According to the BPMN 2 specification, an activity without outgoing sequence flow implicitly ends the current execution. However, currently it is necessary for correct functioning to explicitly use an end event within the sub-process to end a certain path. This will be enhanced in the future to be specification-compliant.
A timer start event is used to indicate that a process should be started when a given time condition is met. This could be a specific point in time (eg. October 10th, 2010 at 5am), but also and more typically a recurring time (eg. every Friday at midnight).
A timer start event is visualized as a circle with the clock icon inside.
Using a Timer Start event is done by adding a timerEventDefinition element below the startEvent element:
<startEvent name="Every Monday morning" id="myStart">
  <timerEventDefinition/>
</startEvent>
Following time definitions are possible:
Fixed date:

<startEvent id="myStartEvent" >
  <timerEventDefinition>
    <timeDate>10/10/2099 00:00:00</timeDate>
  </timerEventDefinition>
</startEvent>

Note that using a fixed due date makes the process only usable a single time. After the process instance is created, the timer start event will never fire again.
Duration expression:
quantity [business] {second | seconds | minute | minutes | hour | hours | day | days | week | weeks | month | months | year | years}
This is completely similar to a timer duration definition in JPDL. Note that the BPMN2 start timer event also understands "business time". This allows for example to define a "business day" as an interval from 9am to 5pm. This way, the time from 5pm to 9am will not be taken into account when the time at which the event will fire is calculated. Please refer to the JPDL userguide for more details on how this business calendar can be customized. The following example shows a timer start event that will start a new process instance every five hours.
<startEvent id="myStartEvent" >
  <timerEventDefinition>
    <timeCycle>5 hours</timeCycle>
  </timerEventDefinition>
</startEvent>
Cron expression: although duration expressions already cover a great deal of recurring time definitions, sometimes they are not easy to use. When for example a process instance should be started every Friday at 23:00, cron expressions allow a more natural way of defining such recurring occurrences.
The following example shows a timer start event that will start a new process instance every Friday at 23:00.
<startEvent id="myStartEvent" >
  <timerEventDefinition>
    <timeCycle>0 0 23 ? * FRI</timeCycle>
  </timerEventDefinition>
</startEvent>
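To illustrate what the expression 0 0 23 ? * FRI resolves to, the following sketch computes the next "Friday at 23:00" occurrence with plain java.time. This is a hypothetical illustration only; jBPM's scheduler does the real cron parsing and handles the full expression syntax.

```java
import java.time.DayOfWeek;
import java.time.LocalDateTime;
import java.time.LocalTime;
import java.time.temporal.TemporalAdjusters;

// Hypothetical sketch: the next point in time matching "0 0 23 ? * FRI",
// i.e. the next Friday at 23:00 strictly after the given reference time.
class NextFireTimeSketch {

    public static LocalDateTime nextFridayAt23(LocalDateTime now) {
        LocalDateTime candidate = now.with(LocalTime.of(23, 0));
        if (candidate.getDayOfWeek() != DayOfWeek.FRIDAY || !candidate.isAfter(now)) {
            // today is not Friday, or today's 23:00 already passed: jump to next Friday
            candidate = candidate.with(TemporalAdjusters.next(DayOfWeek.FRIDAY));
        }
        return candidate;
    }
}
```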
The timer start event implementation in jBPM also has the following features:
An intermediate event is used to model something that happens during a process (ie. after the process has started and before it is ended). Intermediate events are visualized as a circle with a double border, with an icon indicating the event type within the circle.
There are several intermediate event types, such as a timer event, signal event, escalation event, etc. Intermediate events can be either throwing or catching:
An intermediate timer event is used to represent a delay in the process. Straightforward use cases are for example polling of data, executing heavy logic only at night when nobody is working, etc.
Note that an intermediate timer can only be used as a catch event (throwing a timer event makes no sense). The following diagram shows how the intermediate timer event is visualized.
Defining an intermediate timer event is done in XML as follows:
<intermediateCatchEvent id="myTimer" name="Wait for an hour">
  <timerEventDefinition>
    <timeCycle>1 hour</timeCycle>
  </timerEventDefinition>
</intermediateCatchEvent>
There are two ways to specify the delay, using either a timeCycle or a timeDate. In the example above, a timeCycle is used.
Following delay definitions are possible (similar to those for a Timer Start Event).
Fixed date:

<intermediateCatchEvent id="myTimer" >
  <timerEventDefinition>
    <timeDate>10/10/2099 00:00:00</timeDate>
  </timerEventDefinition>
</intermediateCatchEvent>
Duration expression:
quantity [business] {second | seconds | minute | minutes | hour | hours | day | days | week | weeks | month | months | year | years}
This is completely similar to a timer duration definition in JPDL. Note that the BPMN2 intermediate timer event also understands "business time". This allows for example to define a "business day" as an interval from 9am to 5pm. Timers that are started at 4pm with a duration of 2 hours, will fire at 10am the next business day. Please refer to the JPDL userguide for more details on how this business calendar can be customized.
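The business-time arithmetic from the example above (a two-hour timer started at 4pm fires at 10am the next business day) can be sketched as follows, assuming a fixed 9am-5pm business day and ignoring weekends for brevity. This is a hypothetical simplification of the real, configurable business calendar.

```java
import java.time.Duration;
import java.time.LocalDateTime;
import java.time.LocalTime;

// Hypothetical sketch of "business time" arithmetic: hours outside the
// 9am-5pm business day do not count towards the timer's duration.
class BusinessTimeSketch {

    static final LocalTime OPEN = LocalTime.of(9, 0);
    static final LocalTime CLOSE = LocalTime.of(17, 0);

    public static LocalDateTime add(LocalDateTime start, Duration duration) {
        LocalDateTime current = start;
        long remainingMinutes = duration.toMinutes();
        while (remainingMinutes > 0) {
            if (current.toLocalTime().isBefore(OPEN)) {
                current = current.with(OPEN);                  // wait for opening time
            } else if (!current.toLocalTime().isBefore(CLOSE)) {
                current = current.plusDays(1).with(OPEN);      // roll over to the next business day
            } else {
                long minutesLeftToday = Duration.between(current.toLocalTime(), CLOSE).toMinutes();
                long step = Math.min(remainingMinutes, minutesLeftToday);
                current = current.plusMinutes(step);
                remainingMinutes -= step;
            }
        }
        return current;
    }
}
```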
<intermediateCatchEvent id="intermediateTimer" >
  <timerEventDefinition>
    <timeCycle>5 hours</timeCycle>
  </timerEventDefinition>
</intermediateCatchEvent>
Cron expression: although duration expressions already cover a great deal of delay definitions, sometimes they are not easy to use. When for example the process should be delayed until Friday at 23:00 such that the processing can be executed during the weekend, duration expressions are hard to use (you need something like "#{calculated_value} seconds", where you need to calculate the value first).
Cron expressions allow to define delays in a way many people know (since cron expressions are used to define scheduled tasks on Unix machines). Note that a cron expression typically is used to define repetition. In this context however, the first point in time at which the cron expression is satisfied is used to set the due date of the timer event (so there is no repetition). The following example shows how an intermediate timer event can be specified to continue the process the next Friday at 23:00.
<intermediateCatchEvent id="intermediateTimer" >
  <timerEventDefinition>
    <timeCycle>0 0 23 ? * FRI</timeCycle>
  </timerEventDefinition>
</intermediateCatchEvent>
Prerequisites: to run the example, we assume that a working jBPM console has been installed on your JBoss server. If not, please run the 'demo.setup.jboss' install script first.
The business process we're implementing looks as follows: based on the outcome of the verification (a marker on a sequence flow means there is a conditional expression on that sequence flow), a rejection message is sent or the process ends. Do note that in fact we've used a shortcut here: instead of putting expressions on the outgoing sequence flow of the 'verify request' task, we could have used an exclusive gateway after the user task to control the flow through the process. Also note that since we haven't implemented swimlanes yet (probably in the next release), it's difficult to actually see who does what in the business process.
The XML version of this process looks as follows:
<process id="vacationRequestProcess" name="BPMN2 Example process using task forms">

  <startEvent id="start" />

  <sequenceFlow id="flow1" name="fromStartToRequestVacation" sourceRef="start" targetRef="requestVacation" />

  <userTask id="requestVacation" name="Request Vacation" implementation="other">
    <potentialOwner resourceRef="user" jbpm:type="user">
      <resourceAssignmentExpression>
        <formalExpression>user</formalExpression>
      </resourceAssignmentExpression>
    </potentialOwner>
    <rendering id="requestForm">
      <jbpm:form>org/jbpm/examples/bpmn/usertask/taskform/request_vacation.ftl</jbpm:form>
    </rendering>
  </userTask>

  ...

  <userTask id="verifyRequest" name="Verify Request" implementation="other">
    <potentialOwner resourceRef="user" jbpm:type="user">
      <resourceAssignmentExpression>
        <formalExpression>manager</formalExpression>
      </resourceAssignmentExpression>
    </potentialOwner>
    <rendering id="verifyForm">
      <jbpm:form>org/jbpm/examples/bpmn/usertask/taskform/verify_request.ftl</jbpm:form>
    </rendering>
  </userTask>

  ...

</process>
Note: this example is already installed when you've used the demo setup. Also note that we're using a Script Task here, to quickly write something as output instead of sending a real message (the diagram is showing a Service Task). Also note that we've taken some shortcuts here regarding task assignment (will be fixed in the next release).
The constructs used in this implementation are all covered in the previous section. Also note that we're using the taskform functionality here, which is a custom jBPM extension for the rendering element of a User task.
<userTask id="verifyRequest" name="Verify Request" implementation="other">
  <potentialOwner resourceRef="user" jbpm:type="user">
    <resourceAssignmentExpression>
      <formalExpression>user</formalExpression>
    </resourceAssignmentExpression>
  </potentialOwner>
  <rendering id="verifyForm">
    <jbpm:form>org/jbpm/examples/bpmn/usertask/taskform/verify_request.ftl</jbpm:form>
  </rendering>
</userTask>
The mechanism regarding task forms for BPMN 2.0 is completely equivalent to that of JPDL. The form itself is a Freemarker template file that needs to be included in the deployment. For example, the 'verify_request.ftl' form looks as follows.
...
  <input type="submit" name="verificationResult" value="OK">
  <input type="submit" name="verificationResult" value="Not OK">
</form>
</body>
</html>
Note that process variables can be accessed using the ${my_process_variable} construct. Also note that named input controls (eg. input field, submit button) can be used to define new process variables. For example, the text input of the following field will be stored as the process variable 'reason':
<input type="textarea" name="reason"/>
Note that there are two submit buttons (which makes sense if you look at the 'OK' and 'Not OK' sequence flows going out of the 'verify request' task). By pressing one of these buttons, the process variable 'verificationResult' will be stored. It can then be used to evaluate the outgoing sequence flow:
<sequenceFlow id="flow3" name="fromVerifyRequestToEnd" sourceRef="verifyRequest" targetRef="theEnd">
  <conditionExpression xsi:type="tFormalExpression">
    ${verificationResult == 'OK'}
  </conditionExpression>
</sequenceFlow>
The process can now be deployed. You can use the ant deploy task for this (see examples), or you can point your jBPM configuration to the database of the console. To deploy your process programmatically, you need to add the task forms to your deployment:
NewDeployment deployment = repositoryService.createDeployment();
deployment.addResourceFromClasspath("org/jbpm/examples/bpmn/usertask/taskform/vacationrequest.bpmn.xml");
deployment.addResourceFromClasspath("org/jbpm/examples/bpmn/usertask/taskform/request_vacation.ftl");
deployment.addResourceFromClasspath("org/jbpm/examples/bpmn/usertask/taskform/verify_request.ftl");
deployment.deploy();
You can now embed (or run on a standalone server) this business process, by using the familiar jBPM API operations. For example, process instances can now be started using the key (ie. the process id for BPMN 2.0):
ProcessInstance pi = executionService.startProcessInstanceByKey("vacationRequestProcess");
Or tasks list can be retrieved:
Task requestTask = taskService.createTaskQuery().candidate("peter").uniqueResult();
When deploying to the jBPM console database, you should see our new business process popping up.
After you start a new process, a new task should be available in the employee's tasklist. When clicking on 'view', the task form will be displayed, requesting to fill in some variables for further use in the process.
After task completion, the manager will find a new verification task in his task list. He can now accept or reject the vacation request, based on the input of the employee.
Since the database schema remains unchanged when we added BPMN 2.0 on top of the jBPM PVM, all existing reports can be applied to our new BPMN 2.0 processes.
Mail producers are responsible for creating email messages within jBPM. Producers implement the org.jbpm.pvm.internal.email.spi.MailProducer interface.
A default mail producer is available out of the box to address typical email needs; it builds messages from mail templates, whose sections can reference local files or expressions.
Note that every section of the template is amenable to expression evaluation.
For complex emails or custom generation of attachments, see custom mail producers. Refer to the JavaMail documentation for details on the supported session properties, and to the Pattern API for more information about the allowable regular expressions in address filters.
<jbpm-configuration>
  <transaction-context>
    <mail-session>
      <mail-server>
        <address-filter>
          <include>.+@example.com</include>
        </address-filter>
        <session-properties>
          <property name="mail.smtp.host" value="internal.smtp.example.com" />
          <property name="mail.from" value="noreply@example.com" />
        </session-properties>
      </mail-server>
      <mail-server>
        <address-filter>
          <exclude>.+@example.com</exclude>
        </address-filter>
        <session-properties>
          <property name="mail.smtp.host" value="external.smtp.example.com" />
          <property name="mail.from" value="noreply@example.com" />
        </session-properties>
      </mail-server>
    </mail-session>
  </transaction-context>
</jbpm-configuration>

It is possible to create custom mail producers to address the specific requirements of an organization. To do so, create a class that implements the org.jbpm.pvm.internal.email.spi.MailProducer interface. Method produce takes an Execution and returns a collection of Messages to be sent through the MailSession.
The underpinning of customized mail production is the ability to instantiate a class that implements the MailProducer interface. In the simplest scenario, the class will extend the default mail producer and make a small addition such as adding more recipients. The following process snippet shows a mail activity with a custom mail producer.
>
The Java code for the
AuditMailProducer comes next.
public class AuditMailProducer extends MailProducerImpl { private String auditGroup; public String getAuditGroup() { return auditGroup; } public void setAuditGroup(String auditGroup) { this.auditGroup = auditGroup; } @Override protected void fillRecipients(Execution execution, Message email) throws MessagingException { // add recipients from template super.fillRecipients(execution, email); // load audit group from database EnvironmentImpl environment = EnvironmentImpl.getCurrent(); IdentitySession identitySession = environment.get(IdentitySession.class); Group group = identitySession.findGroupById(auditGroup); // send a blind carbon copy of every message to the audit group AddressResolver addressResolver = environment.get(AddressResolver.class); email.addRecipients(RecipientType.BCC, addressResolver.resolveAddresses(group)); } }
MailProducerImpl exposes a
template
property. To access a mail template, the mail producer descriptor references
the
MailTemplateRegistry object and invokes its
getTemplate method. This method takes one string
parameter, the name of the desired template.
AuditMailProducer adds an extra property,
the identifier of the group that will receive blind carbon copies of the
outgoing emails. The audit mail producer overrides the default
fillRecipients implementation to add the
extra BCC recipients. maps).
The easiest way to integrate Spring with jBPM is to import the jbpm.tx.spring.cfg.xml in your jbpm.cfg.xml file:
<import resource="jbpm.tx.spring.cfg.xml" />
This configuration uses the single transaction manager which is defined in the Spring configuration. Start from the content of this file if you need to tweak the jBPM-Spring integration configuration.
If you start from an existing configuration, replace the standard-transaction-interceptor with the spring-transaction-interceptor. The hibernate session needs the attribute current=”true”, depending if you are using the 'current Session' strategy in Spring. Also, the <transaction/> must be removed from the transaction-context if you want the transactions to be handled by Spring only. This forces jBPM to search for the current session, which will then be provided by Spring.
<process-engine-context> <command-service> <spring-transaction-interceptor /> ... </command-service> ... </process-engine-context> <transaction-context> ... <hibernate-session </transaction-context>
The spring-transaction-interceptor will look by default for a PlatformTransactionManager implementation by doing a search by type on the defined beans. In the case of multiple transaction managers, it is possible to specifically define the name of the transaction manager that must be used by the interceptor:
<spring-transaction-interceptor
The Spring integration provides a special context, which is added to the set of contexts where the jBPM engine will look for beans. Using this SpringContext, it is now possible to retrieve beans from the Spring Application Context. The jBPM process engine can be configured in a Spring applicationContext.xml as follows:
<bean id="springHelper" class="org.jbpm.pvm.internal.processengine.SpringHelper"> <property name="jbpmCfg" value="org/jbpm/spring/jbpm.cfg.xml"></property> </bean> <bean id="processEngine" factory-
Note that the jbpmCfg property for the SpringHelper is optional. If a default jbpm.cfg.xml exists on the classpath (ie not in some package), this line can be removed.
The jBPM services can also be defined in the Spring applicationContext, as following:
.
Since version 4.1, jBPM ships with a completely open-source web-based BPMN modeling tool called 'Signavio'. This Signavio web modeler is the result of a close collaboration between the JBoss jBPM team, the company also named 'Signavio' and the Hasso Plattner Instut (HPI) in Germany. Signavio is based on the web-based modeling tool Oryx, which was developed in open-source by HPI. Both HPI and Signavio have comitted themselves to continue investing in Oryx and Signavio. More information about the initiative can be found here.
Using the Signavio web-based BPMN modeler, it is possible to let business
analyst model the business processes through their browser. The file format
which is used to store the BPMN processes is actually jPDL. This means that
the resulting processes can directly be imported into the Eclipse GPD and vice-versa. The process
files will be stored on the hard disk, in
$jbpm_home/signavio-repository if you've used the default
installation scripts.
NOTE: The web-based BPMN modeling tool which ships with jBPM is 100% open-source (MIT-licence). The company Signavio also offers commercial versions of the same modeling tool, enhanced with additional features. Do note that new features, beneficial for the jBPM project, always will be comitted in the open-source repository of the modeling tool.
There are several ways of installing Signavio into your web container:
$jbpm_home/install
$jbpm_home/install
$jbpm_home/install/src/signavio/jbpmeditor.warto your web container
Most of the Signavio configuration parameters can be changed in the
web.xml file, which you can find in
jbpmeditor.war/WEB-INF/.
The only parameters which is of real importance is the
fileSystemRootDirectory
parameter. The value of this parameter must point to an existing folder
on your hard disk and indicates where the processes must be stored:
</context-param> <context-param> <description>Filesystem directory that is used to store models</description> <param-name>fileSystemRootDirectory</param-name><param-value>/home/jbarrez/dev/temp/jbpm-4.4-SNAPSHOT/signavio-repository</param-value> </context-param>
If you use the installation scripts provided in
$jbpm_home/install,
this parameter is automatically set to
$jbpm_home/signavio-repository
during installation. | http://docs.jboss.org/jbpm/v4/devguide/html_single/ | CC-MAIN-2018-05 | en | refinedweb |
by Jeffrey Kantor (jeff at nd.edu). The latest version of this notebook is available at.
Process models usually exhibit some form of non-linearity due to the multiplication of an extensive flowrate and an intensive thermodynamic state variable, chemical kinetics, or various types of transport phenomenon. Near a steady-state, however, an approximate linear models often provide a 'good-enough' dynamical model for control design and analysis. Here we show how a linear approximation is constructed using a Taylor series expansion.
As we will learn later in this course, linear models are amenable to control design and analysis. For process systems that will be operated near a steady-state, linear models provide a useful framework for control design.
We start with a process model that consists of a single first-order ordinary differential equation of the form
$$\frac{dh}{dt}=f(h,q)$$
where $h$ is the state variable and $q$ is an input variable. Choose a nominal value for the process input $q$, we'll call this nominal value $\bar{q}$. The following procedure will produce a linear approximate valid near the steady-state.
Find the steady-state value of $h$ by solving
$$0=f(\bar{h},\bar{q})$$
where $\bar{q}$ is a nominal (i.e, typical, or desired value) of a manipulated process variable.
\begin{eqnarray*} x & = & h-\bar{h}\\ u & = & q-\bar{q} \end{eqnarray*}
Compute the first terms in the Taylor series and evaluate at steady-state. The higher-order terms are not needed provided the lower-order terms are non-zero and the deviations are small. $$ f(\bar{h}+x,\bar{q}+u)\approx f(\bar{h},\bar{q})+\left.\frac{\partial f}{\partial h}\right|_{\bar{h},\bar{q}}x+\left.\frac{\partial f}{\partial q}\right|_{\bar{h},\bar{q}}u+\cdots $$
The linear approximation is $$ \frac{dx}{dt}=\underbrace{\left.\frac{\partial f}{\partial h}\right|_{\bar{h},\bar{q}}}_{a}x+\underbrace{\left.\frac{\partial f}{\partial q}\right|_{\bar{h},\bar{q}}}_{b}u $$
A simple model for the liquid height in a gravity-drained tank with cross-sectional area $A$ is
$$A\frac{dh}{dt}=q_{in}-C\sqrt{h}$$
where $q_{in}$ is a volumetric inflow and $C$ is a constant associated with the drain. This is a non-linear process model that can be written
$$\frac{dh}{dt}=f(h,q_{in})$$
where
$$f(h,q_{in})=\frac{1}{A}\left(q_{in}-C\sqrt{h}\right)$$
Given a nominal inlet flowrate $\bar{q}_{in}$, the steady state value of $h$, that is $\bar{h}$, is found by solving the steady state equation
$$0=f(\bar{h},\bar{q}_{in})=\frac{1}{A}\left(\bar{q}_{in}-C\sqrt{\bar{h}}\right)$$
which gives
$$\bar{h}=\frac{\bar{q}_{in}^{2}}{C^{2}}$$
It's interesting to note the steady-state height of the liquid in a gravity-drained tank is proportional to the square of the nominal flowrate. A 50\% increase in flowrate more than doubles the liquid height.
Let $x$ and $u$ represent the deviations from steady-state
\begin{eqnarray*} x & = & h-\bar{h}\\ u & = & q-\bar{q} \end{eqnarray*}
Then
\begin{eqnarray*} \frac{d(\bar{h}+x)}{dt} & = & \frac{1}{A}\left(\bar{q}_{in}+u-C\sqrt{\bar{h}+x}\right) \end{eqnarray*}
The Taylor series expansion
$$f(\bar{h}+x,\bar{q}_{in}+u)\approx f(\bar{h},\bar{q}_{in})+\left.\frac{\partial f}{\partial h}\right|_{\bar{h},\bar{q}}x+\left.\frac{\partial f}{\partial q}\right|_{\bar{h},\bar{q}}u+\frac{1}{2}\left.\frac{\partial^{2}f}{\partial h^{2}}\right|_{\bar{h}}x^{2}\cdots$$
For this example
$$\frac{dx}{dt}=\left(-\frac{C}{2A\sqrt{\bar{h}}}\right)x+\left(\frac{1}{A}\right)u$$
An alternative form of the model is found by substituting the solution for $\bar{h}$. While these have somewhat different analytical expressions for a given application they will yield identical numerical results.
$$\frac{dx}{dt}=\left(-\frac{C^{2}}{2A\bar{q}_{in}}\right)x+\left(\frac{1}{A}\right)u$$
How well does this approximation work?
This question can be answered by comparing the results of two simulations. In the first case the simulation consists of integrating
$$A\frac{dh}{dt}=q_{in}-C\sqrt{h}$$
as shown in the graph below.
where $A=1$, $C=2$, initial condition $h(0)=0$, and a constant input $q_{in}=\bar{q}_{in}=1$. For these parameter values, the approximate linear model for the deviation from steady-state is given by
\begin{eqnarray*} \frac{dx}{dt} & = & \left(-\frac{C^{2}}{2A\bar{q}_{in}}\right)x+\left(\frac{1}{A}\right)u\\ \\ & = & -2\,x+u \end{eqnarray*}
In terms of deviations from steady state, the input $u=q_{in}-\bar{q}_{in}=0$ and the initial conditionis $x(0)=h(0)-\bar{h}=-\bar{h}$. Plotting $h(t)$ and $x(t)+\bar{h}$ on the same axis produces the results shown in Figure \ref{fig:LinearApproximation}.
#Simulation of a Gravity-Drained Tank %matplotlib inline import matplotlib.pyplot as plt import numpy as np from scipy.integrate import odeint # parameter values qin = 1 A = 1 C = 2 # steady state hbar = (qin/C)**2 # nonlinear simulation def hdot(h,t): return (qin - C*np.sqrt(h))/A t = np.linspace(0,5) h = odeint(hdot,[0],t) # linear approximation a = -C**2/2/A/qin b = 1/A def xdot(x,t): return a*x + b*u u = 0 x = odeint(xdot,[-hbar],t) # visualization plt.plot(t,h) plt.plot(t,x+hbar) plt.legend(['Nonlinear','Linear Approximation'],loc='lower right') plt.xlabel('Time [min]') plt.ylabel('Height [m]') plt.title('Gravity-drained tank: Nonlinear model vs. linear approximation') plt.grid()
1. Suppose you have a tank with an cross-sectional area of 1 square meter, a steady inlet flow $\bar{q}_{in}=10$ liters/min, and observe a liquid height of 0.5 meters. What is the constant $C$? What is the characteristic time constant for the tank?
2. You have an elementary reaction
$$2{\cal A}\longrightarrow Products$$
carried out under isothermal conditions in a stirred tank of volume $V$. The reaction rate is given by the expression
$$R_{A}=k_{A}C_{A}^{2}$$
and the inlet concentration to the tank is $C_{A,in}$ at flowrate $q$. Construct a linear approximation for the dynamics of this process in the neighborhood of a steady state. | http://nbviewer.jupyter.org/github/jckantor/CBE30338/blob/master/notebooks/Linear%20Approximation%20of%20a%20Process%20Model%20using%20Taylor%20Series.ipynb | CC-MAIN-2018-05 | en | refinedweb |
For simple programs, you can easily write your own Perl routines and subroutines. As the tasks to which you apply Perl become more difficult, however, sometimes you'll find yourself thinking, "someone must have done this already." You are probably more right than you imagine.
For most common tasks, other people have already written the code. Moreover, they've placed it either in the standard Perl distribution or in the freely downloadable CPAN archive. To use this existing code (and save yourself some time), you'll have to understand how to make use of a Perl library. This task was briefly discussed in Chapter 18, CGI Programming.
One advantage in using modules from the standard distribution is that you can then share your program with others without having to take any special steps. This statement is true because the same standard library is available to Perl programs almost everywhere.
You'll save yourself time in the long run if you get to know the standard library. No one benefits from reinventing the wheel. You should be aware, however, that the library contains a wide range of material. While some modules may be extremely helpful, others may be completely irrelevant to your needs. For example, some modules are useful only if you are creating extensions to Perl.
To read the documentation for a standard module, use the perldoc program (if you have the standard distribution), or perhaps your web browser on HTML versions of the documentation. If all else fails, just look in the module itself; the documentation is contained within each module in pod format. To locate the module on your system, try executing this Perl program from the command line:
perl -e "print \"@INC\n\""
You should find the module in one of the directories listed by this command.
Before we list the standard modules, let's untangle some terminology:
A package is a simple namespace management device, which allows two different parts of a Perl program to each have a (different) variable named
$fred. These namespaces are managed with the
package declaration, described in Chapter 5 of Programming Perl. used to.
A module is a library that conforms to specific conventions, allowing the library routines to be brought into your program with the
use directive at compile time. Module filenames end in .pm, because the
use directive insists on that convention. Chapter 5 of Programming Perl describes Perl modules in greater detail.
An extension is a combination of a module written in Perl and a library written in C (or C++). On Win32 systems, these extensions are implemented as dynamic-link libraries and have a .pll file extension. Extension modules are used just like modules - with the
use directive at compile time. The case is important here: it doesn't necessarily need to match the filename that the package is stored in, but should match the case used in the package declaration.
A pragma is a module that affects the compilation phase of your program as well as the execution phase. Think of it as something that contains hints to the compiler. Unlike other modules, pragmas often (but not always) limit the scope of their effects to the innermost enclosing block of your program (that is, the block enclosing the pragma invocation). The names of pragmas are by convention all lowercase. | http://doc.novsu.ac.ru/oreilly/perl/learn32/appb_01.htm | CC-MAIN-2018-05 | en | refinedweb |
Via this blog we have discussed the fundamentals of Exchange Autodiscover, and also issues around the Set-AutodiscoverVirtualDirectory cmdlet.
At this point the message should be out there with regards to how Outlook functions internally and externally to locate Autodiscover and the difference that having the workstation domain joined makes. Lync on the other hand is a different beastie!
Both the Outlook Client and the Lync client want to get to the Exchange Autodiscover endpoint, but they differ in how to get to Sesame Street. **
Same But Different
At one of my recent engagements the customer experienced a situation around Lync 2010 and Exchange 2010 integration. Exchange was successfully upgraded to Exchange 2010, and OCS was still in use. When piloting Lync 2010 and the Lync 2010 client they noted errors in the Lync client. There were a couple of reasons for this. The required configuration on the load balancer was not in place, and the device’s firmware was not at the required build level.
When we investigated what Lync and Exchange Autodiscover were doing, we noted that Lync was not locating the Exchange Autodiscover endpoint. Hmm. That’s a bit strange, innit? Outlook was running perfectly, and all the domain joined clients were always able to located Autodiscover by querying for the SCP. The Lync client on the other hand does not leverage SCP when locating Exchange Autodiscover.
Dave Howe’s whitepaper Understanding and Troubleshooting Microsoft Exchange Server Integration discusses this in more detail and is a great read! The one line that distils the important message is:
Unlike Outlook, which uses an SCP object to locate the Autodiscover URL, UC clients and devices will only use the DNS-based discovery method.
There is also a flow diagram in the whitepaper showing the DNS records used.
Note that nowhere in Dave’s article does he change or view the properties of the Autodiscover virtual directory. The same is also true in Prerequisites for Integrating Microsoft Lync Server 2013 and Microsoft Exchange Server 2013.
There are some differences between Exchange 2007 and 2010 with regards to how the requests get serviced. Exchange 2007 only does POX (Plain Old Xml) whereas newer Exchange does SOAP (Simple Object Access Protocol) in addition. Lync can leverage SOAP, Outlook kicks it old School with POX.
Letting Lync Play Nicely With Exchange Autodiscover
The customer above had deployed Exchange, but had not created any internal DNS records for Autodiscover.domain.com. Technically this was not needed for their Exchange + Outlook design, as they have an environment with HA load balancers and multiple CAS servers behind each load balancer. Their Autodiscover namespace had been set as the load balancer FQDN. As such the FQDN Autodoscover.domain.com was not on any of the Exchange CAS Certificates. And as mentioned in the busting Autodiscover myth post on Set-AutodiscoverVirtualDirectory their Autodiscover URI was previously configured by running:
Set-ClientAccessServer –AutoDiscoverServiceInternalUri “”
In order to change this they:
- Request and install new certificates that included the Autodiscover.domain.com namespace
- Update the service bindings on Exchange to use the new certificate
- Update the configuration on the load balancers
- Create internal DNS entries for the Autodiscover.domain.com namespace
- Test
- Update build documentation
- Update DR documentation
Cheers,
Rhoderick
** - That 8 foot tall yellow bird still freaks me out!!
>>>
HI
Great article.
Where multiple sip domains and associated smtp domains exist does this pose a challenge. For instance exchange 2010 and lync 2010 may support multiple sip and smtp domains like @a.com and @b.com. In this example do you need multiple autodiscover a records for each lync sip domain and multiople virtual directories on exchange along with the other items you mention like certifcate entries etc.
That's my understanding. There is an option to use DNS SRV records, but there is currently a known issue with this for the Lync 2013 client
office.microsoft.com/…/lync-2013-known-issues-HA102919641.aspx
I haven't check to see a fix for that was checked into the last update bundle.
Cheers,
Rhoderick
Hi Rhoderick
Thanks for the rapid reply.
I think the EMS powershell set command to set the url and virtual diorectories on exchange only allow you to set one external and internal virtual directory etc – thats my understanding but i could be wrong.
When you mention the alternative using SRV records are you proposing this method to reduce the complexity over using the autodiscover A record method. For instance for each sip domain you have an SRV record in the case of a.com the SRV record _autodiscover._tcp.a.com which has a target of mail.a.com and in the case of b.com the SRV record _autodiscover._tcp.b.com which has a target of mail.a.com. This means that there is an SRV record per domain but it points to the same target mail.a.com thereby reducing the number of virtual directories and SAN names in certificates etc. I am not sure if the SRV standard supports targets outside of its zone but certainly Windows DNS manager allows targets to be set outside of the SRV zone so it may be possible and supported?
Many Thanks
That’s correct – in Exchange we set a single URL onto the various directories. Using ADSI edit to fnangle and add extral URLs onto them is not supported.
For Exchange we can do this with SRV redirect, but this requires an update to the Lync 2013 client. See link above.
We can target a machine out of the zone, and it would be up to the app to decide if that is OK. For example my fried Ed wrote this post here:
social.technet.microsoft.com/…/6818.exchange-2010-multi-tenant-autodiscover-service.aspx
In this example, the Autodiscover service does the following when the client tries to contact the Autodiscover service:
1. Autodiscover posts to
testorg1.org/…/Autodiscover.xml. This fails.
2. Autodiscover posts to
autodiscover.testorg1.org/…/Autodiscover.xml. This fails.
3. Autodiscover posts to
autodiscover.testorg1.org/…/Autodiscover.xml This fails.
4. Autodiscover performs the following redirect check using the looking for SRV record:
5. Autodiscover uses DNS SRV lookup for _autodiscover._tcp.testorg1.com, and then "mail.contoso.com" is returned.
6. Outlook asks permission from the user to continue with Autodiscover to post to
mail.contoso.com/…/autodiscover.xml.
7. Autodiscover’s POST request is successfully posted to
mail.contoso.com/…/autodiscover.xml.
Cheers,
Rhoderick
Hi Rhoderick,
Thanks for the Excellent Post!!!
Based on the above customer scenario, Per you description post you added the Autodiscover.domain.com DNS entry , did the AutodiscoverInernalURI which was set to use the Loadbalancer FQDN changed to the autodiscover.domain.com, if so then shall we point the Autodiscover.domain.com DNS entry to the same Loadbalaner IP and when we do this do we achieve the same experience as it was before for the client connection to get loadbalanced as the SCP is getting updated to use the autodiscover.domain.com and this eventually points the LB and the client experience is same and with this change we also full fill the Lync requirement to work better with Exchange.
Review and correct me if the above said configurations are correct or any changes needed for my better understanding.
Hi Shakthi
In my customer’s case they were using a different namespace for Autodiscover in each of the many AD sites. For example:
NA-Mail.contoso.com
EUR-Mail.contoso.com
APAC-Mail.contoso.com
All of those names were on the relevant certificate. Since Autodiscover.contoso.com was not on the cert, they had to re-issue the cert with that additional name and enable that on the LB device.
As to pointing to the same LB VIP or to use a separate one that is going to depend on the LB that you have. Are there any issues/challenges with doing that. I don’t know, that’s up to the LB.
We continued to have the same client experience, as domain joined Outlook clients on the internal network look for SCP first. Only if the SCP lookups fail will they drop to DNS.
Cheers,
Rhoderick
Hi Rhoderick,
Is there a technical reason why Lync cannot use SCP? It would be good to have the option at least, in the case where the DNS zone for the primary email is not something easy to update, and the customer domain is all internal with domain joined devices.
Cheers,
David
Hi David,
I think about the simplest denominator — simple phones and other UC devices. They are not domain joined and resort to using DNS to locate AutoD.
If customer has internal only DNS zone, that would be a challenge for internet located devices. What sort of situations do you encounter where you are unable to update the zone for the primary SMTP namespace? That sounds interesting – anything you can share on that?
Cheers,
Rhoderick
Thanks for the update!!!
I have a small query i have multiple SCP records in my environment and have set autodiscover internal uri to all them as and also have DNS entry created for autodiscover.domain.com which points to a LB device and has all the CAS servers added
to the LB and set the site scope accordingly and mine is a Hybrid Exchange environment and with respect to onpremise environment the client selects one SCP and makes the connection without issues but for the outlook client with a cloud user who are domain
joined I can see multiple SCP lookup performed and failed ( as it would) with each of the available SCP in the site before it makes appropriate connection to cloud and my concern is why there are many scp lookup performed can’t it pickup just one SCP and once
its get to know it fails and fall back to the DNS and then do the rest of the redirection . I need to know is this the default behavior of the domain joined cloud user outlook. I do have a similar requirement for Lync like the above customer and also need
to know will these SCP failures cause any issues with Lync EWS or its just the DNS entry which Lync requires to talk to Exchange via EWS which I already made available for my environment and pointed to the LB which contains the CAS servers. Kindly clarify
on the same
is there a reason why lync fails to access DNS after DNS is well configured?
ShakthiRavi — that is a great question and one that I’m digging into for other reasons as well. I do all of this blogging etc. in my own time so I never get to do as much as I really want to 🙁
Cheers,
Rhoderick
Ronald – is this referring to an A record or SRV please?
Cheers,
Rhoderick
Hi Rohderick,
well, you are correct. The SCP problem with Lync is that it doesn’t make sense for Lync querying SCP, if e.g. Exchange is not installed. So DNS is enough and DNS is also the choice for Lync finding all required resources.
I would be happy if you could also check my blog article here, Thanks so much
Thomas
Hi Thomas,
I did take a peek – the AutoD URLS are present in the Exchange 2013 schema, and are not listed on Exchange 2013 TechNet page. Your post has that reversed.
I’d also suggest removing the internal and external URLs from the set-autodiscovervirtualdirectory command. Setting this to -ExternalURL ‘‘
-InternalURL ‘‘ is also a tad confusing and we do not want customers to confuse the AutoD namespace with the EWS namespace.
Cheers,
Rhoderick
Rhoderick,
could you please comment if there are still issues with use of Exchange autodiscover DNS SRV records for Lync (client or server)?
Thanks,
Markus
Just as a clarification to my previous post: I’m referring to Exchange 2013 and Lync 2013 (server and client) with all available CUs/updates.
Hi Markus,
Are you past that build?
Cheers,
Rhoderick
Thanks, Rhoderick, for he quick pointer to this KB article. This shall help us to verify client installations.
Regards,
Markus
Rhoderick, you said, "What sort of situations do you encounter where you are unable to update the zone for the primary SMTP namespace? That sounds interesting – anything you can share on that?"
I work for a subsidiary of a very large corporation, where we all have the same email domain, user@domain.com, and the main corporate office manages the inbound/outbound email flow of 20 or so separate Forests and Exchange environments using AD FS GalSync and
email aliases like user@child.forest.domain.com. They also control the domain.com DNS namespace. We set our SIP address to the email address for simplicity. So when Lync (or non-domain joined Outlook clients) try to find the autodiscover they go to autdiscover.domain.com,
when they really need to go to autodiscover.forest.domain.com, or the dns namespace of whatever the child is. So the end result is those clients can never find the right autodiscover FQDN using DNS or srv records because they use the email address to look
that up, instead of looking at the dns namespace the client host is hitting.
MO
Hi Rhoderick,
For Exchange we can do this with SRV redirect, but this requires an update to the Lync 2013 client. See link above.
Where can I find this update? | https://blogs.technet.microsoft.com/rmilne/2013/09/11/exchange-autodiscover-lync/ | CC-MAIN-2018-05 | en | refinedweb |
This page uses content from Wikipedia and is licensed under CC BY-SA.
In computing, a linear-feedback shift register (LFSR) is a shift register whose input bit is a linear function of its previous state.
The most commonly used linear function of single bits is exclusive-or (XOR). Thus, an LFSR is most often a shift register whose input bit is driven by the XOR of some bits of the overall shift register value. The initial value of the LFSR is called the seed, and because the operation of the register is deterministic, the stream of values produced by the register is completely determined by its current (or previous) state. Likewise, because the register has a finite number of possible states, it must eventually enter a repeating cycle. However, an LFSR with a well-chosen feedback function can produce a sequence of bits that appears random and has a very long cycle.[1]
The bit positions that affect the next state are called the taps. In the diagram the taps are [16, 14, 13, 11].
The sequence of numbers generated by an LFSR or its XNOR counterpart can be considered a binary numeral system just as valid as Gray code or the natural binary code.
The arrangement of taps for feedback in an LFSR can be expressed in finite field arithmetic as a polynomial mod 2. This means that the coefficients of the polynomial must be 1s or 0s. This is called the feedback polynomial or reciprocal characteristic polynomial. For example, if the taps are at the 16th, 14th, 13th and 11th bits (as shown), the feedback polynomial is

x^16 + x^14 + x^13 + x^11 + 1
The "one" in the polynomial does not correspond to a tap – it corresponds to the input to the first bit (i.e. x0, which is equivalent to 1). The powers of the terms represent the tapped bits, counting from the left. The first and last bits are always connected as an input and output tap respectively.
The LFSR is maximal-length if and only if the corresponding feedback polynomial is primitive. This means that the following conditions are necessary (but not sufficient):

- The number of taps is even.
- The set of taps is setwise co-prime; i.e., there must be no divisor other than 1 common to all taps.
Tables of primitive polynomials from which maximum-length LFSRs can be constructed are given below and in the references.
There can be more than one maximum-length tap sequence for a given LFSR length. Also, once one maximum-length tap sequence has been found, another automatically follows. If the tap sequence in an n-bit LFSR is [n, A, B, C, 0], where the 0 corresponds to the x0 = 1 term, then the corresponding "mirror" sequence is [n, n − C, n − B, n − A, 0]. So the tap sequence [32, 7, 3, 2, 0] has as its counterpart [32, 30, 29, 25, 0]. Both give a maximum-length sequence.
An example in C is below:
#include <stdint.h>

unsigned lfsr_fib(void)
{
    uint16_t start_state = 0xACE1u;  /* Any nonzero start state will work. */
    uint16_t lfsr = start_state;
    uint16_t bit;                    /* Must be 16-bit to allow bit << 15 later */
    unsigned period = 0;

    do
    {   /* taps: 16 14 13 11; feedback polynomial: x^16 + x^14 + x^13 + x^11 + 1 */
        bit = ((lfsr >> 0) ^ (lfsr >> 2) ^ (lfsr >> 3) ^ (lfsr >> 5)) & 1u;
        lfsr = (lfsr >> 1) | (bit << 15);
        ++period;
    } while (lfsr != start_state);

    return period;
}
This LFSR configuration is also known as standard, many-to-one or external XOR gates. The alternative Galois configuration is described in the next section.
Named after the French mathematician Évariste Galois, an LFSR in Galois configuration, which is also known as modular, internal XORs, or one-to-many LFSR, is an alternate structure that can generate the same output stream as a conventional LFSR (but offset in time).[3] In the Galois configuration, when the system is clocked, bits that are not taps are shifted one position to the right unchanged. The taps, on the other hand, are XORed with the output bit before they are stored in the next position. The new output bit is the next input bit.
To generate the same output stream, the order of the taps is the counterpart (see above) of the order for the conventional LFSR, otherwise the stream will be in reverse. Note that the internal state of the LFSR is not necessarily the same. The Galois register shown has the same output stream as the Fibonacci register in the first section. A time offset exists between the streams, so a different startpoint will be needed to get the same output each cycle.
Below is a C code example for the 16-bit maximal-period Galois LFSR example in the figure:
#include <stdint.h>

int main(void)
{
    uint16_t start_state = 0xACE1u;  /* Any nonzero start state will work. */
    uint16_t lfsr = start_state;
    unsigned period = 0;

    do
    {
        unsigned lsb = lfsr & 1;     /* Get LSB (i.e., the output bit). */
        lfsr >>= 1;                  /* Shift register */
        if (lsb)                     /* If the output bit is 1, apply toggle mask. */
            lfsr ^= 0xB400u;
        ++period;
    } while (lfsr != start_state);

    return 0;
}
Note that
if (lsb) lfsr ^= 0xB400u;
can also be written as
lfsr ^= (-lsb) & 0xB400u;
which may produce more efficient code on some compilers.
Binary Galois LFSRs like the ones shown above can be generalized to any q-ary alphabet {0, 1, ..., q − 1} (e.g., for binary, q = 2, and the alphabet is simply {0, 1}). In this case, the exclusive-or component is generalized to addition modulo-q (note that XOR is addition modulo 2), and the feedback bit (output bit) is multiplied (modulo-q) by a q-ary value, which is constant for each specific tap point. Note that this is also a generalization of the binary case, where the feedback is multiplied by either 0 (no feedback, i.e., no tap) or 1 (feedback is present). Given an appropriate tap configuration, such LFSRs can be used to generate Galois fields for arbitrary prime values of q.
The following table lists maximal-length polynomials for shift-register lengths up to 19. Note that more than one maximal-length polynomial may exist for any given shift-register length. A list of alternative maximal-length polynomials for shift-register lengths 4–32 (beyond which it becomes unfeasible to store or transfer them) can be found here: [].

The repeating sequence of states of an LFSR allows it to be used as a clock divider or as a counter when a non-binary sequence is acceptable, as is often the case where computer index or framing locations need to be machine-readable.[4] The table of primitive polynomials shows how LFSRs can be arranged in Fibonacci or Galois form to give maximal periods. One can obtain any other period by adding to an LFSR that has a longer period some logic that shortens the sequence by skipping some states.

LFSRs have long been used as pseudo-random number generators for use in stream ciphers, due to the ease of construction from simple electromechanical or electronic circuits, long periods, and very uniformly distributed output streams. However, an LFSR is a linear system, leading to fairly easy cryptanalysis. For example, given a stretch of known plaintext and corresponding ciphertext, an attacker can intercept and recover a stretch of LFSR output stream used in the system described, and from that stretch of the output stream can construct an LFSR of minimal size that simulates the intended receiver by using the Berlekamp–Massey algorithm. This LFSR can then be fed the intercepted stretch of output stream to recover the remaining plaintext.
Three general methods are employed to reduce this problem in LFSR-based stream ciphers:

Non-linear combination of several bits from the LFSR state;
Non-linear combination of the output bits of two or more LFSRs (see also: shrinking generator);
Irregular clocking of the LFSR, as in the alternating step generator.[6][7]
The linear feedback shift register has a strong relationship to linear congruential generators.[8]
LFSRs are used in circuit testing for test-pattern generation (for exhaustive testing, pseudo-random testing or pseudo-exhaustive testing) and for signature analysis.
Complete LFSRs are commonly used as pattern generators for exhaustive testing, since they cover all possible inputs for an n-input circuit. Maximal-length LFSRs and weighted LFSRs are widely used as pseudo-random test-pattern generators for pseudo-random test applications.
In built-in self-test (BIST) techniques, storing all the circuit outputs on chip is not possible, but the circuit output can be compressed to form a signature that will later be compared to the golden signature (of the good circuit) to detect faults. Since this compression is lossy, there is always a possibility that a faulty output also generates the same signature as the golden signature and the faults cannot be detected. This condition is called error masking or aliasing. BIST is accomplished with a multiple-input signature register (MISR or MSR), which is a type of LFSR. A standard LFSR has a single XOR or XNOR gate, where the input of the gate is connected to several "taps" and the output is connected to the input of the first flip-flop. A MISR has the same structure, but the input to every flip-flop is fed through an XOR/XNOR gate. For example, a 4-bit MISR has a 4-bit parallel output and a 4-bit parallel input. The input of the first flip-flop is XOR/XNORd with parallel input bit zero and the "taps". Every other flip-flop input is XOR/XNORd with the preceding flip-flop output and the corresponding parallel input bit. Consequently, the next state of the MISR depends on the last several states as opposed to just the current state. Therefore, a MISR will always generate the same golden signature given that the input sequence is the same every time.
To prevent short repeating sequences (e.g., runs of 0s or 1s) from forming spectral lines that may complicate symbol tracking at the receiver or interfere with other transmissions, the data bit sequence is combined with the output of a linear-feedback register before modulation and transmission. This scrambling is removed at the receiver after demodulation. When the LFSR runs at the same bit rate as the transmitted symbol stream, this technique is referred to as scrambling. When the LFSR runs considerably faster than the symbol stream, the LFSR-generated bit sequence is called chipping code. The chipping code is combined with the data using exclusive or before transmitting using binary phase-shift keying or a similar modulation method. The resulting signal has a higher bandwidth than the data, and therefore this is a method of spread-spectrum communication. When used only for the spread-spectrum property, this technique is called direct-sequence spread spectrum; when used to distinguish several signals transmitted in the same channel at the same time and frequency, it is called code division multiple access.
Neither scheme should be confused with encryption or encipherment; scrambling and spreading with LFSRs do not protect the information from eavesdropping. They are instead used to produce equivalent streams that possess convenient engineering properties to allow robust and efficient modulation and demodulation.
Digital broadcasting systems that use linear-feedback registers:

ATSC Standards (digital TV transmission system – North America)
DAB (Digital Audio Broadcasting system – for radio)
DVB-T (digital TV transmission system – Europe, Australia, parts of Asia)
NICAM (digital audio system for television)
Other digital communications systems using LFSRs:

INTELSAT business service (IBS)
Intermediate data rate (IDR)
SDI (Serial Digital Interface transmission)
Data transfer over PSTN (according to the ITU-T V-series recommendations)
CDMA (Code Division Multiple Access) cellular telephony
100BASE-T2 "fast" Ethernet (scrambles bits using an LFSR)
1000BASE-T Ethernet, the most common form of Gigabit Ethernet (scrambles bits using an LFSR)
PCI Express
SATA
USB 3.0
Bluetooth Low Energy Link Layer (whitening)
LFSRs are also used in radio jamming systems to generate pseudo-random noise to raise the noise floor of a target communication system.
The German time signal DCF77, in addition to amplitude keying, employs phase-shift keying driven by a 9-stage LFSR to increase the accuracy of received time and the robustness of the data stream in the presence of noise.[10] | https://readtiger.com/wkp/en/Linear-feedback_shift_register | CC-MAIN-2018-05 | en | refinedweb |
/*
 * CompositionEdgeRenderer.java
 *
 * Created on October 30, 2005, 12:45 PM
 *
 * To change this template, choose Tools | Template Manager
 * and open the template in the editor.
 */

package org.netbeans.modules.xml.nbprefuse.render;

import java.awt.Polygon;
import prefuse.render.EdgeRenderer;

/**
 *
 * @author Jeri Lockhart
 */
public class CompositionEdgeRenderer extends EdgeRenderer {

    protected static final Polygon COMPOSITION_ARROW_HEAD =
        new Polygon(new int[] {0, -4, 0, 4, 0}, new int[] {0, -8, -16, -8, 0}, 5);

    /** Creates a new instance of CompositionEdgeRenderer */
    public CompositionEdgeRenderer() {
        super();
        m_arrowHead = COMPOSITION_ARROW_HEAD;
    }
}
I am seriously so bummed out right now, and I feel like a complete idiot. I have been sitting here for seriously 7 hours now trying to figure out how to do this program, and the book I have for the class has been of no help whatsoever. I posted my problem in C++ and had someone tell me that this is a C Program, so after a minute I looked at my course website and the book and realized that the reason the book makes no sense at all is that it is C++ Primer Plus, instead of C Primer Plus. I am so upset because I have been working on this assignment for so long, and it's due at midnight tonight, and I feel like I just set myself back really far. If anyone would be willing to help me I would appreciate it so much. I've understood every other thing I've learned in C so far (printf, scanf, validation, calculations, etc.) but I can't figure out how to write these functions, and now I don't even have the right book to look in. I know the assignment isn't too difficult because all of the assignments before this have been fairly easy for me. I just really need some help!
The program is to track the amount and price of raffle tickets sold, and then add to a grand total. The price per raffle ticket has to be equal to or greater than $5.00, amount of tickets has to be equal to or greater than 0.
I've planned the program out to hopefully look like this at the end--
WELCOME TO THE RAFFLE!
Enter price per raffle ticket (must be at least $5.00):
Enter number of raffle tickets (must be at least 0):
Number of tickets ---------->
Price per ticket ------------->
Total amount due ---------->
Would you like to make another raffle ticket purchase? (y/n)
/* If y, loop back to beginning. If n, proceed to next. */
Total cost of all tickets ---->
/*==============================================*/
Can someone please help me with this? I know it's not that difficult but I'm totally lost and so exhausted! Here is the code I've written so far:
/* ================================================================== */
/* >  Program:      Exam1.c                                         < */
/* >                                                                < */
/* >  Description:  printf, scanf, loop                             < */
/* >                                                                < */
/* >  Name           Date      Comment                              < */
/* >  -------------  --------  ------------------------------      < */
/* >  Janelle Wood   2/27/08   Author                               < */
/* ================================================================== */

#include <stdio.h>
#include <stdlib.h>

void heading(void);
void error(double min, double max);
double getfloat(char prompt[], double min);
double grandtotal(double tickettotal);
double total(double ticketquan, double ticketcost);
void show(double ticketquan, double ticketcost, double tickettotal, double grandtotal);
int contyn(char msg[]);

int main()
{
    double ticketcost, ticketquan, tickettotal, grandtotal, costmin;
    int quanmin;
    char choice;

    do {
        heading();
        ticketquan = getfloat("quantity", quanmin);
        ticketcost = getfloat("price", costmin);
        tickettotal = total(ticketquan, ticketcost);
        grandtotal = grandtotal(tickettotal);
        show(ticketquan, ticketcost, tickettotal, grandtotal);
        choice = contyn("\nWould you like to purchase more raffle tickets?");
    } while (choice == 'y' || choice == 'Y');

    return 0;
}

/* ================================================================ */
void heading()
{
    system("cls"); // Linux/Unix use: system("clear");
    printf("WELCOME TO THE RAFFLE!");
    return;
}

/* ================================================================ */
double getfloat(char item[], double min)
{
    int err;
    double ticketcost;

    do {
        printf("\nEnter a ticket %s greater than or equal to %.0lf: ", item, min);
        scanf("%lf%*c", &ticketcost);
        err = (ticketcost < min);
        if (err)
            error(min, max);
    } while (err);

    return (ticketcost);
}

/* ================================================================ */
void error(double min, double max)
{
    printf("\nOut of range, try %.0lf to %.0lf, press any key ", min, max);
    return;
}

/* ================================================================ */
double total(double ticketnum, double ticketcost)
{
    return (ticketnum * ticketcost);
}

/* ================================================================ */
Hello All,
I am trying to import test data using hdbtable.
For this I created an hdbtable, one CSV file consisting of 40k records, and an hdbti file with the following content.
import = [
  {
    hdbtable = "x.template.data::Machine";
    file = "x.template.data:Machine.csv";
    header = false;
    delimField = ",";
    delimEnclosing = "\"";
  }
];
Now my issue is: when I try to import all 40k records, the database table is updated with all 40k records, but the rows are not inserted in the same order as in my CSV file.
Could anybody please help me to resolve my issue.
I think this is wrongly tagged with 'SAP HANA Vora' but should go to Hana.
How to add slider in blynk to control temperature setpoint
Flash the DHT sketch that works OK and keep the same Blynk token etc.
Paste Serial Monitor and sketch that gives correct temperature readings.
here are the code that are working
#include <BlynkSimpleEsp8266.h>
#include <DHT.h>

int LED = 5;

// You should get Auth Token in the Blynk App.
// Go to the Project Settings (nut icon).
char auth[] = "";

// Your WiFi credentials.
// Set password to "" for open networks.
char ssid[] = "";
char pass[] = "";

#define DHTPIN 0 // D3

// Uncomment whatever type you're using!
#define DHTTYPE DHT11 // DHT 11

DHT dht(DHTPIN, DHTTYPE);
BlynkTimer timer;

void sendSensor()
{
  float h = dht.readHumidity();
  float t = dht.readTemperature(); // or dht.readTemperature(true) for Fahrenheit

  if (isnan(h) || isnan(t)) {
    Serial.println("Failed to read from DHT sensor!");
    return;
  }
  Blynk.virtualWrite(V5, h);
  Blynk.virtualWrite(V6, t);
}

void setup()
{
  Serial.begin(9600);
  Blynk.begin(auth, ssid, pass);
  dht.begin();

  // Setup a function to be called every second
  timer.setInterval(1000L, sendSensor);
}

void loop()
{
  Blynk.run();
  timer.run();
}
This is the serial monitor output — although it says "failed to read", I can get readings normally in the Blynk app.
I don’t know how Blynk can have accurate data if the sensor is not being read correctly.
Do you have a 1K resistor between pins 1 and 2 on the DHT11?
I have another sketch that can display the readings on the serial monitor, but it doesn't connect to Blynk.
OK paste that for me to check it over but obviously we will need the Blynk stuff in the sketch to move forward.
Here is the code:
// Example testing sketch for various DHT humidity/temperature sensors
// Written by ladyada, public domain

#include "DHT.h"

#define DHTPIN 0     // what digital pin we're connected to

// Uncomment whatever type you're using!
#define DHTTYPE DHT11   // DHT 11

// Initialize DHT sensor.
// Note that older versions of this library took an optional third parameter to
// tweak the timings for faster processors. This parameter is no longer needed
// as the current DHT reading algorithm adjusts itself to work on faster procs.
DHT dht(DHTPIN, DHTTYPE);

void setup() {
  Serial.begin(9600);
  Serial.println("DHTxx test!");
  dht.begin();
}

void loop() {
  // Wait a few seconds between measurements.
  delay(2000);

  float h = dht.readHumidity();
  float t = dht.readTemperature();

  if (isnan(h) || isnan(t)) {
    Serial.println("Failed to read from DHT sensor!");
    return;
  }

  Serial.print("Humidity: ");
  Serial.print(h);
  Serial.print(" %\t");
  Serial.print("Temperature: ");
  Serial.print(t);
  Serial.println(" *C ");
}
Change the delay(2000) to delay(5000). I would like to see a full page without errors before trying to move forward.
2000 ms should be OK, but you have an issue somewhere.
Do you just have the one DHT11?
Try this loop() which just reads temperature:
void loop() {
  // Wait 5 seconds between measurements.
  delay(5000);

  // Read temperature as Celsius (the default)
  float t = dht.readTemperature();

  // Check if any reads failed and exit early (to try again).
  if (isnan(t)) {
    Serial.println("Failed to read temperature!");
    return;
  }

  Serial.print("Temperature: ");
  Serial.print(t);
  Serial.println(" *C ");
}
So the 2000ms looks to be a problem and therefore the interval of 1000ms you had in the Blynk sketch will certainly be a problem.
You can probably switch back to the Blynk sketch but ensure the interval is 5000ms (or higher).
You probably only need temperature so you could cut out the humidity reading. | https://community.blynk.cc/t/how-to-add-slider-in-blynk-to-control-temperature-setpoint/32521?page=3 | CC-MAIN-2019-04 | en | refinedweb |
Learn how to use CLI schematics, effects, Auth0, and more to secure your NgRx application.
TL;DR: In this article, we’ll get a quick refresher on NgRx basics and get up to speed on more features of the NgRx ecosystem. We'll then walk through how to add Auth0 authentication to an NgRx app. You can access the finished code for this tutorial on the ngrx-auth GitHub repository.
NgRx: From Humble Beginnings to a New Ecosystem
I remember when Rob Wormald, an Angular Core Team member, first wrote ngrx/store at the end of 2015. It was a tiny library implementing Redux in Angular (store.ts was a single file with 66 lines of code!). Fast forward a few years and NgRx is much more than a simple library — it’s an entire ecosystem with over 120 contributors! Learning to think in the Redux pattern, while also keeping up with the various structures and pieces of NgRx, can be a challenge at first. Don’t worry, though — we’ve got your back.
In this article, we’ll do a quick refresher on NgRx concepts and learn about what has changed in its ecosystem in the last year. Then, we’ll look at how authentication works in NgRx and how to add Auth0 to an NgRx application.
Let’s get started!
Basics of NgRx - A Recap
While we have a great tutorial on managing state with ngrx/store and implementing Auth0, a lot has changed with NgRx since then. Let’s do a quick recap of what hasn't changed — the fundamental concepts of NgRx and the Redux pattern.
State is an immutable data structure describing the current status of parts of your application. This could be in the UI or in the business logic. State can represent a range of different things, from whether to show a navigation menu to whether a user is logged in.
You can access that state through the store. You can think of the store as the main brain of your application. It’s the hub of anything that needs to change. Since we’re using Angular, the store is implemented using an RxJS subject, which basically means it’s both an observable and an observer. The store is an observable of state, which you can subscribe to, but what is it observing?
The store is an observer of actions. Actions are information payloads that describe state changes. You send (or dispatch) actions to the store, and that’s the only way the store receives data or instructions on how to update your application’s state. An action has a type (like “Add Book”) and an optional payload (like an object containing the title and description for the book Return of the King).
Actions are the bedrock of an NgRx application, so it’s important to follow best practices with them. Here are some pro tips on actions from Mike Ryan’s ng-conf talk Good Action Hygiene:
- Write your actions first. This lets you plan out the different use cases of your application before you write too much code.
- Don’t reuse actions or use action sub-types. Instead, focus on clarity over brevity. Be specific with your actions and use descriptive action types. Use actions like
[Login Page] Load User Profileand
[Feature Page] Load User Profile. You can then use the switch statement in your reducer to modify the state in the same way for multiple actions.
- Remember: good actions are actions you can read after a year and tell where they are being dispatched.
Once the store receives an action to change the state, pure functions called reducers use those actions along with the previous state to compute the new state.
If you’re feeling lost on the basics of the Redux pattern, one of my favorite introductory resources is Dan Abramov’s Egghead course on getting started with Redux.
Going Further with NgRx
With the basics covered, let’s look at some additional concepts in NgRx we'll need for this tutorial.
Selectors
Selectors are pure functions that are used to get simple and complex pieces of state. They act as queries to your store and can make your life a lot easier. Instead of having to hold onto different filtered versions of data in your store, you can just use selectors.
For example, to get your current user, you’d set up a selector by first exporting this function:
export const selectAuthUser = (state: AuthState) => state.user;
You typically put these pure functions in the same file as your feature’s reducer functions and then register both the feature selectors and individual state piece selectors in the index.ts file where you register your reducers with NgRx. For example:
// state.index.ts

// Feature selectors get registered first.
export const selectAuthState = createFeatureSelector<fromAuth.State>('auth');

// Then you can use that feature selector to select a piece of state.
export const selectAuthUser = createSelector(
  selectAuthState,
  fromAuth.selectUser, // the pure function that returns the user from our auth state
);
You can then easily use that selector in your component or your route guards:
this.user = this.store.pipe(select(fromAuth.selectAuthUser));
To learn more, check out this talk from ng-conf 2018 on selectors by David East and Todd Motto.
Effects
Effects (used with the library @ngrx/effects) are where you connect actions to side effects or external requests. Effects can listen for actions, then perform a side effect. That effect can then (optionally) dispatch a new action to the store. For example, if you need to load data from an API, you might dispatch a LOAD_DATA action. That action could trigger an effect that calls your API and then dispatches either a LOAD_DATA_SUCCESS or LOAD_DATA_FAILURE action to handle the result. Here’s how that looks:
@Effect()
loadData$ = this.actions$.ofType(LOAD_DATA).pipe(
  switchMap(() => {
    return this.apiService.getData().pipe(
      map(data => new LOAD_DATA_SUCCESS(data)),
      catchError(error => of(new LOAD_DATA_FAILURE(error)))
    );
  })
);
You can see that the effect is listening for the LOAD_DATA action, and then calls the ApiService to get the data and return a new action.
Using effects ensures that your reducers aren’t containing too many implementation details and that your state isn’t full of temporary clutter. For more information, check out this tutorial from Todd Motto on using effects.
NgRx Schematics
One of the most common complaints about NgRx is the amount of repetitive code that it takes to set up an application and add new features (never say "boilerplate" around NgRx Core Team member Brandon Roberts!). Part of this is due to the NgRx philosophy to focus on clarity over brevity, and to front-load the work of app architecture so that future feature additions are easy and straightforward. At the same time, there can be a lot of repetition in the code you write — adding a new reducer, set of actions, effects, and selectors for each new feature. Luckily, Angular CLI schematics are here to help.
Schematics for the Angular CLI are blueprints to generate new code quickly and are a huge time-saver. NgRx has its own set of schematics that you can take advantage of when writing your application. These schematics follow best practices and cut out lots of repetitive tasks. They also automatically register your feature states, wire up your effects, and integrate into NgModules. Awesome!
There are NgRx schematics for:
- Action
- Container
- Effect
- Entity
- Feature (generate an action, reducer, and an effect and automatically wire them up to a module!)
- Reducer
- Store
To get started with schematics, first install them in your project:
npm install @ngrx/schematics --save-dev
Then, set the NgRx schematics as your defaults in your CLI configuration:
ng config cli.defaultCollection @ngrx/schematics
Once that’s done, you can simply run CLI commands to generate new NgRx items. For example, to generate the initial state for your application, you can run:
ng generate store --name State --root -m app
This will generate a store for you at the root of your application and register it in your AppModule. Pretty cool!
For more information on NgRx schematics, check out the repository readme and this excellent ng-conf talk by Vitalii Bobrov.
Adding Authentication to an NgRx Project
We’re caught up on the latest and greatest with NgRx, so let’s learn how to implement authentication in NgRx.
Authentication is a perfect example of shared state in an application. Authentication affects everything from being able to access client-side routes, get data from private API endpoints, and even what UI a user might see. It can be incredibly frustrating to keep track of authentication state in all these places. Luckily, this is exactly the kind of problem NgRx solves. Let’s walk through the steps. (We’ll be following the best practices Brandon Roberts describes in his ng-conf 2018 talk Authentication in NgRx.)
Our Sample App
We’re going to use a heavily simplified version of the official ngrx/platform book collection app to learn the basics of authentication in NgRx. To save time, most of the app is done, but we’ll be adding Auth0 to protect our book collection through an Auth0 login page.
You can access the code for this tutorial at the Auth0 Blog repository. To get started with the application, you'll need to clone the app, install the dependencies, and check out the "Starting point" commit in order to follow along. You can also run the application using the Angular CLI. You can do all of that with these commands:
git clone
cd ngrx-auth
git checkout 23c1b25
npm install
ng serve
You can now visit to see the running application.
We'll also be taking advantage of NgRx schematics to help us set up authentication in this app, so I've added them to the project and set them as the default.
Our app currently has the book library (found in the books folder), as well as some basic setup of NgRx already done. We also have the very beginnings of the AuthModule. The UserHomeComponent just lets us go to the book collection. This is what we'll protect with authentication.
The first phase of this task will be the basic scaffolding of the authentication state, reducer, and selectors. Then, we’ll add Auth0 to the application, create our UI components, and finally set up effects to make it all work.
Two Quick Asides
Before we get started, I need to make a couple of side notes.
First, because this article is focused on how to set up authentication in an NgRx app, we're going to abstract away and over-simplify things that aren't specific to NgRx. For example, we won't discuss tying the book collection to a specific user or setting up authentication on the back end. Check out our tutorial on Angular authentication to learn more about those things. Instead, you'll learn the mechanics of adding the necessary state setup, adding actions and reducers to handle the flow of logging in, and using effects to retrieve and process a token.
Second, almost all NgRx tutorials need a caveat. In the real world, if you had an application this simple, you most likely wouldn't use NgRx. NgRx is extremely powerful, but the setup and learning curve involved prevent it from being a fit for every application, especially super simple ones. However, it's tough to learn these concepts using a large and complex sample application. My goal in this article is to focus only on teaching the core concepts of NgRx authentication so that you can apply them to your own larger application. For more on this subject, see Dan Abramov's iconic article You Might Not Need Redux.
Define Global Authentication State
The first step to adding authentication to an NgRx app is to define the piece of your state related to authentication. Do you need to keep track of a user profile, or whether there is a token stored in memory? Often the presence of a token or profile is enough to derive a “logged in” state. In any case, this is what you’ll want to attach to your main state.
We’re going to keep it very simple for this example and just detect the presence of a valid token. If we have a valid token, we'll toggle a global piece of state called
isLoggedIn.
To get started, let’s use a schematic to create a reducer in the main state folder:
ng g reducer state/auth --no-spec
This is where we’ll define our authentication state, reducer, and selector functions. We can go ahead and update the state interface and initial state like this:
// src/app/state/auth.reducer.ts
// ...
// Leave everything else and update the following:

export interface State {
  isLoggedIn: boolean;
}

export const initialState: State = {
  isLoggedIn: false
};

// ...
We'll come back to the reducer in this file once we've defined our actions in the next section.
Next, import everything from the auth reducer file into the index.ts file. That way we can add the authentication state to the overall application state and the authentication reducer to our action reducer map. The complete file will look like this right now:
// src/app/state/index.ts
import {
  ActionReducerMap,
  createFeatureSelector,
  createSelector,
  MetaReducer
} from '@ngrx/store';
import { environment } from '../../environments/environment';
import * as fromAuth from './auth.reducer';

export interface State {
  auth: fromAuth.State;
}

export const reducers: ActionReducerMap<State> = {
  auth: fromAuth.reducer
};

export const metaReducers: MetaReducer<State>[] = !environment.production
  ? []
  : [];
In a more complex application, you’ll need to think through the state of other pages that use authentication in addition to your overall authentication state. For example, it’s common for login pages to need pending or error states. In our example, we’ll be using Auth0’s login page, which will redirect our application to a login screen to authenticate, then back to the application. Because of this, we won’t need to keep track of our login page state in this example.
Define Authentication Actions
Before we write our authentication reducer, let’s take Mike Ryan’s advice and write our actions first. Let’s walk through the flow of authentication in our example:
- User clicks the “Log In” button to trigger the Auth0 login screen.
- After going through the Auth0 authentication process, the user will be redirected to a callback route in our application.
- That CallbackComponent will need to trigger an action to parse the hash fragment and set the user session. (Hint: this is where we’re altering our application state.)
- Once that’s done, the user should be redirected to a home route.
- If the user refreshes while logged in, the authentication state should persist.
I can identify several actions here:
- Login — to trigger the Auth0 login screen.
- LoginComplete — to handle the Auth0 callback.
- LoginSuccess — to update our authentication state isLoggedIn to true and navigate to the home route.
- LoginFailure — to handle errors.
- CheckLogin - to see if the user is still logged in on the server and update the state accordingly.
Notice that only the LoginSuccess action will modify our application state, which means that one will need to be in our reducers. The rest of these actions will use effects.
The logout process is similar:
- User clicks “Log Out” button to trigger a confirmation dialog.
- If the user clicks “Cancel,” the dialog will close.
- If the user clicks “Okay,” we’ll trigger the Auth0 logout process.
- Once logged out, Auth0 will redirect the user back to the application, which should default to the login route when not authenticated.
Can you think of what actions we’ll need? I spotted these:
- Logout — to trigger the logout confirmation dialog
- LogoutCancelled — to close the logout dialog.
- LogoutConfirmed — to tell Auth0 to log out and redirect home.
We’ll use the LogoutConfirmed action to reset our authentication state in a reducer in addition to telling Auth0 to log out. The rest will be handled with effects.
Add the Auth NgRx Feature
We've defined our actions and identified which will be handled by the reducer and which will be handled with effects. We'll need to add some new files to the AuthModule and wire them up to our main application. Luckily, we can use schematics to make this much easier for us:
ng g feature auth/auth --group --no-spec --module auth
Let's break this command down to understand what's happening:
- g is short for generate in the CLI
- feature is the NgRx schematic that generates actions, effects, and reducers for a new feature
- auth/auth tells the CLI to put these files under the auth folder and name them auth
- group is an NgRx schematic option that groups all the files under respective folders (e.g. actions go in an actions folder)
- no-spec skips generation of test files
- Finally, module auth wires up the feature in the AuthModule

We should end up with actions, effects, and reducers folders inside of src/app/auth with corresponding TypeScript files prefixed with auth (e.g. auth.actions.ts).

We actually won't be using the module-specific reducers, so we can delete that file and folder (src/app/auth/reducers) and the references to the auth reducer in auth.module.ts. We'll need to delete the import of the reducer on line 9 and the reducer registration from the imports array on line 22:
// src/app/auth/auth.module.ts
import * as fromAuth from './reducers/auth.reducer'; // remove this line
// ...
StoreModule.forFeature('auth', fromAuth.reducer), // remove this line
// ...
It's helpful to stop and re-run `ng serve` to clear out any errors at this stage.
Adding the Auth Actions
Let's create the actions we defined earlier in our newly created `auth/actions/auth.actions.ts` file. We can delete the generated code, but we’ll follow the same pattern in our code: creating an enum with the action type strings, defining each action and optional payload, and finally defining an `AuthActions` type that we can use throughout the application. The finished result will look like this:
// src/app/auth/actions/auth.actions.ts
// Finished:
import { Action } from '@ngrx/store';

export enum AuthActionTypes {
  Login = '[Login Page] Login',
  LoginComplete = '[Login Page] Login Complete',
  LoginSuccess = '[Auth API] Login Success',
  LoginFailure = '[Auth API] Login Failure',
  CheckLogin = '[Auth] Check Login',
  Logout = '[Auth] Confirm Logout',
  LogoutCancelled = '[Auth] Logout Cancelled',
  LogoutConfirmed = '[Auth] Logout Confirmed'
}

export class Login implements Action {
  readonly type = AuthActionTypes.Login;
}

export class LoginComplete implements Action {
  readonly type = AuthActionTypes.LoginComplete;
}

export class LoginSuccess implements Action {
  readonly type = AuthActionTypes.LoginSuccess;
}

export class LoginFailure implements Action {
  readonly type = AuthActionTypes.LoginFailure;
  constructor(public payload: any) {}
}

export class CheckLogin implements Action {
  readonly type = AuthActionTypes.CheckLogin;
}

export class Logout implements Action {
  readonly type = AuthActionTypes.Logout;
}

export class LogoutConfirmed implements Action {
  readonly type = AuthActionTypes.LogoutConfirmed;
}

export class LogoutCancelled implements Action {
  readonly type = AuthActionTypes.LogoutCancelled;
}

export type AuthActions =
  | Login
  | LoginComplete
  | LoginSuccess
  | LoginFailure
  | CheckLogin
  | Logout
  | LogoutCancelled
  | LogoutConfirmed;
You can see that, in our example, we’re only using a payload for the `LoginFailure` action to pass in an error message. If we were using a user profile, we’d need to define a payload in `LoginComplete` in order to handle it in the reducer. Instead, we'll just be handling the token through an effect and an `AuthService` we'll create later.
Do you notice how thinking through the actions in a feature also helps us identify our use cases? This keeps us from cluttering our reducers and application state because we shift most of the heavy lifting to effects.
Quick aside: if you'd like your application to continue to build at this step, you'll need to comment out the generated code in `auth.effects.ts`, since it now references the deleted schematic-generated action. Comment out lines 8 and 9:
// src/app/auth/effects/auth.effects.ts
// ...
// @Effect()
// loadFoos$ = this.actions$.pipe(ofType(AuthActionTypes.LoadAuths));
// ...
Define Authentication Reducer
Now let’s circle back to the authentication reducer (`state/auth.reducer.ts`). Since we’ve figured out what our actions are, we know that we’ll only need to change the state of our application when the login is successful and when the logout dialog is confirmed.
First, import the `AuthActions` and `AuthActionTypes` at the top of the file so we can use them in our reducer:
import { AuthActions, AuthActionTypes } from '@app/auth/actions/auth.actions';
Next, replace the current reducer function with this:
// src/app/state/auth.reducer.ts
export function reducer(state = initialState, action: AuthActions): State {
  switch (action.type) {
    case AuthActionTypes.LoginSuccess:
      return { ...state, isLoggedIn: true };
    case AuthActionTypes.LogoutConfirmed:
      return initialState; // the initial state has isLoggedIn set to false
    default:
      return state;
  }
}
Notice that the `LoginSuccess` case toggles the global `isLoggedIn` state to `true`, while the `LogoutConfirmed` case returns us to our initial state, where `isLoggedIn` is `false`.
We’ll handle the rest of our actions using effects later on.
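A quick way to convince yourself that the reducer is pure: the spread operator returns a new object and leaves the previous state untouched. Here's a minimal standalone sketch of that behavior using plain objects and string action types (this is an illustration, not the app's actual reducer file):

```javascript
// Standalone sketch of the reducer's immutability -- illustration only.
const initialState = { isLoggedIn: false };

function reducer(state = initialState, action) {
  switch (action.type) {
    case '[Auth API] Login Success':
      // Spread produces a brand-new object; `state` is never mutated.
      return { ...state, isLoggedIn: true };
    case '[Auth] Logout Confirmed':
      return initialState;
    default:
      return state;
  }
}

const loggedIn = reducer(initialState, { type: '[Auth API] Login Success' });
console.log(loggedIn.isLoggedIn);     // true
console.log(initialState.isLoggedIn); // false -- original state untouched
```

This referential separation is what lets NgRx (and change detection) compare states cheaply by reference.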
Define Authentication Selector
We've defined our `isLoggedIn` state, actions related to our login process, and a reducer to modify our application state. But how do we access the status of `isLoggedIn` throughout our application? For example, we’ll need to know whether we're authenticated in a route guard to control access to the `home` and `books` routes.
This is exactly what selectors are for. To create a selector, we’ll first define the pure function in `state/auth.reducer.ts` and then register the selector in `state/index.ts`. Underneath our reducer function, we can add this pure function:
// src/app/state/auth.reducer.ts
// ...all previous code remains the same

export const selectIsLoggedIn = (state: State) => state.isLoggedIn;
We can then define our selectors at the bottom of `state/index.ts`:
// src/app/state/index.ts
// ...all previous code remains the same

export const selectAuthState = createFeatureSelector<fromAuth.State>('auth');

export const selectIsLoggedIn = createSelector(
  selectAuthState,
  fromAuth.selectIsLoggedIn
);
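Under the hood, `createSelector` memoizes: the projector only re-runs when its input slice changes by reference. A rough standalone sketch of that idea (this is a simplification for illustration, not NgRx's actual implementation):

```javascript
// Simplified memoized selector -- illustration only, not NgRx's real code.
function createSelector(selectSlice, project) {
  let lastSlice;
  let lastResult;
  return state => {
    const slice = selectSlice(state);
    if (slice !== lastSlice) { // reference check, like NgRx
      lastSlice = slice;
      lastResult = project(slice);
    }
    return lastResult;
  };
}

const selectAuthState = state => state.auth;

let computations = 0;
const selectIsLoggedIn = createSelector(selectAuthState, auth => {
  computations++;
  return auth.isLoggedIn;
});

const state = { auth: { isLoggedIn: true } };
console.log(selectIsLoggedIn(state)); // true
console.log(selectIsLoggedIn(state)); // true (cached -- projector ran once)
console.log(computations);            // 1
```

This is why our immutable reducer matters: selectors can rely on cheap reference checks instead of deep comparisons.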
We’ve now got all the basic scaffolding set up for an authenticated state. Let's start setting up our authentication process.
Adding Authentication with Auth0
In this section, we're going to set up Auth0, create an Angular authentication service, and wire everything up using NgRx Effects. The Auth0 log in screen will look like this:
The first thing you'll need to do is sign up for an Auth0 account to manage authentication. You can sign up for a free Auth0 account here. (If you've already got an account, great! You can simply log in to Auth0.)
Set Up an Application
Once you've got your account, you can set up an application to use with our NgRx project. We'll only be setting up a Single Page Application (SPA) in Auth0 since we're using the Google Books API as our back end.
Here's how to set that up:
- Go to your Auth0 Applications and click the "Create Application" button.
- Name your new app, select "Single Page Web Applications," and click the "Create" button. You can skip the Quick Start and click on Settings.
- In the Settings for your new Auth0 app, add the Allowed Callback URLs. (We're using `localhost:4200` since it's the default port for the Angular CLI `serve` command.)
- Add both Allowed Web Origins and Allowed Logout URLs.
- Click the "Save Changes" button.
- Copy down your Domain and Client ID. We'll use them in just a minute.
- If you'd like, you can set up some social connections. You can then enable them for your app in the Application options under the Connections tab. The example shown in the screenshot above utilizes username/password database, Facebook, Google, and Twitter.
Install auth0.js and Set Up Environment Config
Now that we've got the SPA authentication set up, we need to add the JavaScript SDK that allows us to easily interact with Auth0. We can install that with this command:
npm install auth0-js --save
We'll use this library in just a bit when we create our authentication service. We can now set up our environment variables using the client ID and domain we copied from our Auth0 application settings. The Angular CLI makes this very easy. Open up `src/environments/environment.ts` and add the `auth` section below:
// src/environments/environment.ts
export const environment = {
  production: false,
  auth: {
    clientID: 'YOUR-AUTH0-CLIENT-ID',
    domain: 'YOUR-AUTH0-DOMAIN', // e.g., you.auth0.com
    redirect: '',
    scope: 'openid profile email'
  }
};
Note: if we were using an API identifier, we would add an `audience` property to this configuration. We're leaving it out here since we're using the Google Books API.
This file stores the authentication configuration variables so we can re-use them throughout our application by importing them. Be sure to update `YOUR-AUTH0-CLIENT-ID` and `YOUR-AUTH0-DOMAIN` with your own information from your Auth0 application settings.
Add Authentication Service
We're now ready to set up the main engine of authentication for our application: the authentication service. The authentication service is where we’ll handle interaction with the Auth0 library. It will be responsible for anything related to getting the token from Auth0, but won’t dispatch any actions to the store.
To generate the service using the CLI, run:
ng g service auth/services/auth --no-spec
We first need to import the `auth0-js` library, our environment configuration, and `bindNodeCallback` from RxJS:
// src/app/auth/services/auth.service.ts
// Add these to the imports
import { Injectable } from '@angular/core';
import { bindNodeCallback } from 'rxjs';
import * as auth0 from 'auth0-js';
import { environment } from '../../../environments/environment';
// ...below code remains the same
Next, we need to set some properties on our class. We'll need an Auth0 configuration property, a flag we'll use for setting a property in local storage to persist our authentication on refresh, and some URLs for navigation. We'll also add a property for setting the token expiration date. Add these before the class `constructor`:
// src/app/auth/services/auth.service.ts
// ...previous code remains the same
// Add properties above the constructor
private _Auth0 = new auth0.WebAuth({
  clientID: environment.auth.clientID,
  domain: environment.auth.domain,
  responseType: 'token',
  redirectUri: environment.auth.redirect,
  scope: environment.auth.scope
});
// Track whether or not to renew token
private _authFlag = 'isLoggedIn';
// Authentication navigation
private _onAuthSuccessUrl = '/home';
private _onAuthFailureUrl = '/login';
private _logoutUrl = '';
private _expiresAt: number;
// ...below code remains the same
We're setting the different URLs here in the service in case multiple places in the application need to perform this redirection.
Next, we'll need to set up two observables using the Auth0 library that we can access with our NgRx effects later on. The Auth0 library uses Node-style callback functions. Luckily, we can use `bindNodeCallback`, built into RxJS, to transform these functions into observables. Add these lines after the properties we just created but before the constructor:
// src/app/auth/services/auth.service.ts
// ...above code unchanged
// Create observable of Auth0 parseHash method to gather auth results
parseHash$ = bindNodeCallback(this._Auth0.parseHash.bind(this._Auth0));
// Create observable of Auth0 checkSession method to verify authorization server session
checkSession$ = bindNodeCallback(this._Auth0.checkSession.bind(this._Auth0));
// ...below code unchanged
The first observable we've created uses Auth0's `parseHash` function, which will parse the access token from the window hash fragment. We'll use this during `LoginComplete`. The second observable is for the `checkSession` function, which we'll use when the app refreshes during `CheckLogin`. You'll notice that both of these have a little bit of JavaScript magic tacked onto them: we need to `bind` the Auth0 object to both of these functions for them to work correctly. This is pretty common when using `bindNodeCallback`.
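To see why the `.bind(this._Auth0)` call is necessary, here's a tiny standalone sketch: a method passed around as a bare function loses its `this` receiver, and binding restores it (the `Greeter` class here is purely illustrative):

```javascript
// Why a detached method needs .bind(): it loses its `this` receiver.
class Greeter {
  constructor(name) {
    this.name = name;
  }
  greet(cb) {
    // Node-style callback signature: (error, result)
    cb(null, `Hello, ${this.name}`);
  }
}

const g = new Greeter('Auth0');

// Passing `g.greet` around detaches it from `g`; calling it would throw
// because `this.name` is undefined inside the function body.
const detached = g.greet;

// Binding restores the receiver -- the same trick the service applies
// before handing the method to bindNodeCallback:
const bound = g.greet.bind(g);
bound((err, msg) => console.log(msg)); // "Hello, Auth0"
```

Without the bind, `parseHash` and `checkSession` would fail at runtime because Auth0's methods reference internal state on the `WebAuth` instance.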
We'll now need functions to handle logging in, setting the authentication, and logging out, as well as public getters for our URLs and for whether we're logged in. We'll use a flag in local storage to track that the application is authenticated; that way we can check for that flag and call `checkSession` to update our state. Update the service like so after the constructor:
// src/app/auth/services/auth.service.ts
// ...previous code remains the same
// Add these functions after the constructor
get authSuccessUrl(): string {
  return this._onAuthSuccessUrl;
}

get authFailureUrl(): string {
  return this._onAuthFailureUrl;
}

get authenticated(): boolean {
  return JSON.parse(localStorage.getItem(this._authFlag));
}

resetAuthFlag() {
  localStorage.removeItem(this._authFlag);
}

login() {
  this._Auth0.authorize();
}

setAuth(authResult) {
  this._expiresAt = authResult.expiresIn * 1000 + Date.now();
  // Set flag in local storage stating this app is logged in
  localStorage.setItem(this._authFlag, JSON.stringify(true));
}

logout() {
  // Set flag in local storage stating this app is logged out
  localStorage.setItem(this._authFlag, JSON.stringify(false));
  // Auth0 logout
  this._Auth0.logout({
    returnTo: this._logoutUrl,
    clientID: environment.auth.clientID
  });
}
// ...below code remains the same
Auth0 is handling logging in and logging out, and we have a `setAuth` method to set the local storage flag. If we needed to return the token for use in the store, we'd do that here. We're going to handle much of the flow of our authentication using effects. Before we set those up, though, we're going to need some components.
Build Out the Authentication UI
We've got the authentication service set up, but now we need to build out our UI. We'll need components for logging in, a callback component for Auth0 to redirect to, a logout dialog, and a logout button on our user home screen. We'll also need to add some routing, and we'll want to add a route guard to lock down our `home` route and redirect users back to the `login` route if no token is found.
Log In Components
We need to create a `login` route with a simple form that contains a button to log in. This button will dispatch the `Login` action. We'll set up an effect in just a bit that will call the authentication service.
First, create a container component called `login-page`:
ng g c auth/components/login-page -m auth --no-spec
Replace the boilerplate with this:
// src/app/auth/components/login-page.component.ts
import { Component, OnInit } from '@angular/core';
import { Store } from '@ngrx/store';
import * as fromStore from '@app/state';
import { Login } from '@app/auth/actions/auth.actions';

@Component({
  selector: 'abl-login-page',
  template: `
    <abl-login-form (submitted)="onLogin($event)"></abl-login-form>
  `,
  styles: [
    `
      :host {
        display: flex;
        flex-direction: column;
        align-items: center;
        padding: 128px 12px 12px 12px;
      }
      abl-login-form {
        width: 100%;
        min-width: 250px;
        max-width: 300px;
      }
    `
  ]
})
export class LoginPageComponent implements OnInit {
  constructor(private store: Store<fromStore.State>) {}

  ngOnInit() {}

  onLogin() {
    this.store.dispatch(new Login());
  }
}
Notice that we'll be passing the `onLogin` function into our form, which will dispatch our action.
If you see an error when running the application right now, it's because we haven't yet created the `abl-login-form` component referenced in the template. Let's create that component now:
ng g c auth/components/login-form -m auth --no-spec
Now replace the contents of that file with this:
// src/app/auth/components/login-form.component.ts
import { Component, OnInit, EventEmitter, Output } from '@angular/core';
import { FormGroup } from '@angular/forms';

@Component({
  selector: 'abl-login-form',
  template: `
    <mat-card>
      <mat-card-title> Welcome </mat-card-title>
      <mat-card-content>
        <form [formGroup]="loginForm" (ngSubmit)="submit()">
          <div class="loginButtons">
            <button type="submit" mat-button>Log In</button>
          </div>
        </form>
      </mat-card-content>
    </mat-card>
  `,
  styles: [
    `
      :host {
        width: 100%;
      }
      form {
        display: flex;
        flex-direction: column;
        width: 100%;
      }
      mat-card-title,
      mat-card-content {
        display: flex;
        justify-content: center;
      }
      mat-form-field {
        width: 100%;
        margin-bottom: 8px;
      }
      .loginButtons {
        display: flex;
        flex-direction: row;
        justify-content: flex-end;
      }
    `
  ]
})
export class LoginFormComponent implements OnInit {
  @Output() submitted = new EventEmitter<any>();
  loginForm = new FormGroup({});

  ngOnInit() {}

  submit() {
    this.submitted.emit();
  }
}
Finally, in our `AuthRoutingModule` (`auth/auth-routing.module.ts`), import `LoginPageComponent` at the top of the file and add a new route to the routes array, above the `home` route:
// src/app/auth/auth-routing.module.ts
// Add to imports at the top
import { LoginPageComponent } from '@app/auth/components/login-page.component';
// ...
// Add to routes array above the home route
{ path: 'login', component: LoginPageComponent },
// ...below code remains the same
We should be able to build the application, navigate to the `login` route, and see the new form.
Of course, nothing will happen when we click the button, because we don't have any effects listening for the `Login` action yet. Let's finish building our UI and then come back to that.
Add Callback Component
Once Auth0 successfully logs us in, it will redirect back to our application's `callback` route, which we'll add in this section. First, let's build the `CallbackComponent` and have it dispatch a `LoginComplete` action.
First, generate the component:
ng g c auth/components/callback -m auth --no-spec
This component will just display a loading screen and dispatch `LoginComplete` using `ngOnInit`. Replace the contents of the generated file with this code:
// src/app/auth/components/callback.component.ts
import { Component, OnInit } from '@angular/core';
import { Store } from '@ngrx/store';
import * as fromStore from '@app/state';
import { LoginComplete } from '@app/auth/actions/auth.actions';

@Component({
  selector: 'abl-callback',
  template: `
    <p>Loading...</p>
  `,
  styles: []
})
export class CallbackComponent implements OnInit {
  constructor(private store: Store<fromStore.State>) {}

  ngOnInit() {
    this.store.dispatch(new LoginComplete());
  }
}
And, finally, import the new `CallbackComponent` into `auth/auth-routing.module.ts` and add a new route after the `login` route:
// src/app/auth/auth-routing.module.ts
// Add to imports at the top
import { CallbackComponent } from '@app/auth/components/callback.component';
// ...
// Add to routes array after the login route
{ path: 'callback', component: CallbackComponent },
// ...below code remains the same
Once again, if we build and run the application, we're now able to navigate to the `callback` route and see the new component (which, once again, will do nothing yet).
Log Out Buttons
For logging out of the application, we'll need a confirmation dialog, as well as log out buttons on the user home and books page components.
Let's add the buttons first. In `auth/components/user-home.component.ts`, add a button in the template under the book collection button. The completed template will look like this:
// src/app/auth/components/user-home.component.ts
<div>
  <h3>Welcome Home!</h3>
  <button mat-raised-button color="primary" (click)="goToBooks()">See my book collection</button>
  <button mat-raised-button color="warn" (click)="logout()">Log Out</button>
</div>
Then, add these imports at the top of the file:
// src/app/auth/components/user-home.component.ts
import { Store } from '@ngrx/store';
import * as fromStore from '@app/state';
import { Logout } from '@app/auth/actions/auth.actions';
// ...below code remains the same
This will let us add the store to our constructor:
// src/app/auth/components/user-home.component.ts
// ...above code unchanged
constructor(private store: Store<fromStore.State>, private router: Router) {}
// ...below code unchanged
With that done, we can now add a `logout` function to the component that will dispatch the `Logout` action:
// src/app/auth/components/user-home.component.ts
// add anywhere after the constructor
logout() {
  this.store.dispatch(new Logout());
}
// ...other code unchanged
We can do something similar with the `BooksPageComponent` so the user can log out from their book collection. In `books/components/books-page.component.ts`, add the following block underneath the `mat-card-title` tag:
// src/app/books/components/books-page.component.ts
<mat-card-actions>
  <button mat-raised-button color="warn" (click)="logout()">Logout</button>
</mat-card-actions>
Here's what the completed template for the `BooksPageComponent` will look like:
// src/app/books/components/books-page.component.ts
<mat-card>
  <mat-card-title>My Collection</mat-card-title>
  <mat-card-actions>
    <button mat-raised-button color="warn" (click)="logout()">Logout</button>
  </mat-card-actions>
</mat-card>
<abl-book-preview-list [books]="books$ | async"></abl-book-preview-list>
Next, add the `Logout` action to the imports:
// src/app/books/components/books-page.component.ts
import { Logout } from '@app/auth/actions/auth.actions';
// ...remaining code unchanged
And, finally, add a `logout` function to dispatch the `Logout` action from the button:
// src/app/books/components/books-page.component.ts
// add anywhere after the constructor
logout() {
  this.store.dispatch(new Logout());
}
// ...remaining code unchanged
And that's it! Now we just need to add the logout confirmation.
Log Out Prompt
We're going to use Angular Material to pop up a confirmation when the user clicks log out.
To generate the component, run:
ng g c auth/components/logout-prompt -m auth --no-spec
Then, replace the contents of the file with this:
// src/app/auth/components/logout-prompt.component.ts
import { Component } from '@angular/core';
import { MatDialogRef } from '@angular/material';

@Component({
  selector: 'abl-logout-prompt',
  template: `
    <h3 mat-dialog-title>Log Out</h3>
    <mat-dialog-content> Are you sure you want to log out? </mat-dialog-content>
    <mat-dialog-actions>
      <button mat-button (click)="cancel()">No</button>
      <button mat-button (click)="confirm()">Yes</button>
    </mat-dialog-actions>
  `,
  styles: [
    `
      :host {
        display: block;
        width: 100%;
        max-width: 300px;
      }
      mat-dialog-actions {
        display: flex;
        justify-content: flex-end;
      }
      [mat-button] {
        padding: 0;
      }
    `
  ]
})
export class LogoutPromptComponent {
  constructor(private ref: MatDialogRef<LogoutPromptComponent>) {}

  cancel() {
    this.ref.close(false);
  }

  confirm() {
    this.ref.close(true);
  }
}
You're probably seeing an error in the console at this point. That's because there is one thing that the CLI didn't do for us: we need to create an `entryComponents` array in the `NgModule` decorator of `AuthModule` and add the `LogoutPromptComponent` to it.
Entry components are components loaded imperatively through code instead of declaratively through a template. The `LogoutPromptComponent` doesn't get called through a template; it gets loaded by Angular Material when the user clicks the "Log Out" button.
Just add this after the `declarations` array in `auth/auth.module.ts` (don't forget a comma!):
// src/app/auth/auth.module.ts
// ... imports
@NgModule({
  imports: [ ... ],
  declarations: [ ... ],
  entryComponents: [LogoutPromptComponent]
})
// ...other code unchanged
We'll create an effect for `Logout` to open the prompt, listen for the response, and dispatch either `LogoutCancelled` or `LogoutConfirmed` when we wire everything up in just a bit.
Add Route Guard
We've added our login and logout components, but we want to ensure that a visitor to our site can only access the `home` route if they are logged in. Otherwise, we want to redirect them to our new `login` route. We can accomplish this with a `CanActivate` route guard.
To create the route guard, run this command:
ng g guard auth/services/auth --no-spec
This will create `auth/services/auth.guard.ts`. Replace the contents of this file with the following:
// src/app/auth/services/auth.guard.ts
import { Injectable } from '@angular/core';
import { CanActivate, Router } from '@angular/router';
import { of } from 'rxjs';
import { mergeMap, map, take } from 'rxjs/operators';
import { Store } from '@ngrx/store';
import { AuthService } from '@app/auth/services/auth.service';
import * as fromStore from '@app/state';

@Injectable({ providedIn: 'root' })
export class AuthGuard implements CanActivate {
  constructor(
    private authService: AuthService,
    private store: Store<fromStore.State>,
    private router: Router
  ) {}

  canActivate() {
    return this.checkStoreAuthentication().pipe(
      mergeMap(storeAuth => {
        if (storeAuth) {
          return of(true);
        }
        return this.checkApiAuthentication();
      }),
      map(storeOrApiAuth => {
        if (!storeOrApiAuth) {
          this.router.navigate(['/login']);
          return false;
        }
        return true;
      })
    );
  }

  checkStoreAuthentication() {
    return this.store.select(fromStore.selectIsLoggedIn).pipe(take(1));
  }

  checkApiAuthentication() {
    return of(this.authService.authenticated);
  }
}
Let's break down what's happening here. When this guard runs, we first call `checkStoreAuthentication`, which uses the selector we created to get `isLoggedIn` from our global state. If the store doesn't report a logged-in state, we fall back to `checkApiAuthentication`, which checks the `authenticated` flag on our `AuthService` (backed by local storage). If either check passes, we return true and allow the route to load. Otherwise, we redirect the user to the `login` route.
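The decision flow above can be sketched with plain functions, stripping away the observable plumbing. This is an illustration of the guard's logic only; the names (`canActivateSketch`, `redirect`) are hypothetical and the real guard chains `mergeMap` and `map` over observables:

```javascript
// Plain-function sketch of the guard's decision flow -- illustration only.
function canActivateSketch(storeLoggedIn, apiLoggedIn, redirect) {
  // Prefer the store's answer; fall back to the AuthService/local-storage check.
  const allowed = storeLoggedIn || apiLoggedIn;
  if (!allowed) {
    redirect('/login'); // same navigation the real guard performs
  }
  return allowed;
}

let redirectedTo = null;
console.log(canActivateSketch(false, false, url => (redirectedTo = url))); // false
console.log(redirectedTo); // '/login'
console.log(canActivateSketch(true, false, () => {})); // true, no redirect
```

Keeping the store check first means a route change during an active session never touches local storage at all.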
We'll want to add this route guard to both the `home` route (in our `AuthModule`) and our `books` route (specifically, the `forChild` call in `BooksModule`).
In `auth/auth-routing.module.ts`, add the guard to the imports:
// src/app/auth/auth-routing.module.ts import { AuthGuard } from './services/auth.guard';
Then, modify the `home` route to the following:
{ path: 'home', component: UserHomeComponent, canActivate: [AuthGuard] }
Similarly, import the `AuthGuard` at the top of `books/books.module.ts`:
// src/app/books/books.module.ts
import { AuthGuard } from '@app/auth/services/auth.guard';
// ...all other code unchanged
Then, modify `RouterModule.forChild` to this:
// src/app/books/books.module.ts
// ...above code unchanged
RouterModule.forChild([
  { path: '', component: BooksPageComponent, canActivate: [AuthGuard] }
]),
// ...below code unchanged
We did it! If we run `ng serve`, we should no longer be able to access the `home` or `books` routes. Instead, we should be redirected to the `login` route.
Note that we haven't added any sort of 404 redirecting here. To do that, we'd want to add a wildcard route and a `PageNotFoundComponent` to redirect to. You can read more about wildcard routes in the Angular docs.
Checking Authentication on App Load
We just have one last UI piece to add. We need to dispatch the `CheckLogin` action when the application loads so that we can retrieve the token from the server if the user is logged in. The best place to do this is in the `AppComponent` (`src/app/app.component.ts`).
Our steps in `AppComponent` will be identical to what we did in the `CallbackComponent` — the only difference is the action we will dispatch. In `app.component.ts`, we'll first need to add `OnInit` to our `@angular/core` imports. We'll also need to import the `Store` from NgRx, our app `State`, and our `CheckLogin` action. Our complete imports in the file will look like this:
// src/app/app.component.ts
import { Component, OnInit } from '@angular/core';
import { Store } from '@ngrx/store';
import * as fromStore from '@app/state';
import { CheckLogin } from '@app/auth/actions/auth.actions';
// ...below code unchanged
Then, in the component class, implement the `OnInit` interface and create the `ngOnInit` method:
// src/app/app.component.ts
// ...imports and template defined above (unchanged)
// The class definition should look like this:
export class AppComponent implements OnInit {
  constructor() {}

  ngOnInit() {}
}
Lastly, we'll need to inject the `Store` into the constructor and dispatch the `CheckLogin` action inside of `ngOnInit`. The finished class definition should look like this:
// src/app/app.component.ts
// ...imports and template defined above (unchanged)
// The class definition should look like this:
export class AppComponent implements OnInit {
  constructor(private store: Store<fromStore.State>) {}

  ngOnInit() {
    this.store.dispatch(new CheckLogin());
  }
}
You won't notice anything different when you run the application, because we'll be calling `checkSession` on the `AuthService` from an effect. So, let's put it all together by creating effects that will control logging in and out, in addition to checking for persisted authentication.
Controlling the Authentication Flow with Effects
Alright, friends, we're ready for the final piece of this puzzle. We're going to add effects to handle our authentication flow. Effects allow us to initiate side effects as a result of actions dispatched in a central and predictable location. This way, if we ever need to universally change the behavior of an action's side effect, we can do so quickly without repeating ourselves.
Add Imports and Update Constructor
All of our effects will go in `auth/effects/auth.effects.ts`, and the CLI has already connected them to our application through the `AuthModule`. All we need to do is fill in our effects.
Before we do that, be sure that all of these imports are at the top of the file:
// src/app/auth/effects/auth.effects.ts
import { Injectable } from '@angular/core';
import { Router } from '@angular/router';
import { Actions, Effect } from '@ngrx/effects';
import { tap, exhaustMap, map, catchError } from 'rxjs/operators';
import { MatDialog } from '@angular/material';
import * as fromAuth from '../actions/auth.actions';
import { LogoutPromptComponent } from '@app/auth/components/logout-prompt.component';
import { AuthService } from '@app/auth/services/auth.service';
import { of, empty } from 'rxjs';
If you commented out the boilerplate `loadFoos$` effect that the CLI generated earlier, go ahead and delete it:
// src/app/auth/effects/auth.effects.ts
// ...
// Delete both of these lines.
// @Effect()
// loadFoos$ = this.actions$.pipe(ofType(AuthActionTypes.LoadAuths));
// ...
Next, update the constructor so that we're injecting the router, the `AuthService`, and `MatDialog` (from Angular Material):
// src/app/auth/effects/auth.effects.ts
constructor(
  private actions$: Actions,
  private authService: AuthService,
  private router: Router,
  private dialogService: MatDialog
) {}
We'll use all of these in our effects.
Add Log In Effects
Let's add our log in effects first.
Add the following to our class before the constructor (this is a convention with effects):
// src/app/auth/effects/auth.effects.ts
// Add before the constructor
@Effect({ dispatch: false })
login$ = this.actions$
  .ofType<fromAuth.Login>(fromAuth.AuthActionTypes.Login)
  .pipe(
    tap(() => {
      return this.authService.login();
    })
  );

@Effect()
loginComplete$ = this.actions$
  .ofType<fromAuth.LoginComplete>(fromAuth.AuthActionTypes.LoginComplete)
  .pipe(
    exhaustMap(() => {
      return this.authService.parseHash$().pipe(
        map((authResult: any) => {
          if (authResult && authResult.accessToken) {
            this.authService.setAuth(authResult);
            window.location.hash = '';
            return new fromAuth.LoginSuccess();
          }
        }),
        catchError(error => of(new fromAuth.LoginFailure(error)))
      );
    })
  );

@Effect({ dispatch: false })
loginRedirect$ = this.actions$
  .ofType<fromAuth.LoginSuccess>(fromAuth.AuthActionTypes.LoginSuccess)
  .pipe(
    tap(() => {
      this.router.navigate([this.authService.authSuccessUrl]);
    })
  );

@Effect({ dispatch: false })
loginErrorRedirect$ = this.actions$
  .ofType<fromAuth.LoginFailure>(fromAuth.AuthActionTypes.LoginFailure)
  .pipe(
    map(action => action.payload),
    tap((err: any) => {
      if (err.error_description) {
        console.error(`Error: ${err.error_description}`);
      } else {
        console.error(`Error: ${JSON.stringify(err)}`);
      }
      this.router.navigate([this.authService.authFailureUrl]);
    })
  );
// ...below code unchanged
Let's break down what's happening in each of these.
- Login — calls the `login` method on `AuthService`, which triggers Auth0. Does not dispatch an action.
- Login Complete — calls `parseHash$` on `AuthService`, which returns an observable of the parsed hash. If there's a token, this effect calls `setAuth`, clears the hash from the window location, and then dispatches the `LoginSuccess` action. If there's not a token, the effect dispatches the `LoginFailure` action with the error as its payload.
- Login Redirect — This effect happens when `LoginSuccess` is dispatched. It redirects the user to `home` (using the `authSuccessUrl` property on the `AuthService`) and does not dispatch a new action.
- Login Error Redirect — This effect happens when `LoginFailure` is dispatched. It redirects the user back to `login` (using the `authFailureUrl` property on the `AuthService`) and does not dispatch a new action.
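Notice that the effects that kick off asynchronous work use `exhaustMap`. The reason: while one inner task is in flight, new trigger events are ignored, which is exactly what you want for a login flow (double-clicking "Log In" shouldn't start two authorization round trips). Here's a rough promise-based sketch of that behavior; it's an illustration only, since the real RxJS operator works on Observables:

```javascript
// Rough sketch of exhaustMap's "ignore while busy" behavior -- illustration
// only; the real RxJS operator maps events to inner Observables.
function exhaustMapLike(handler) {
  let busy = false;
  return async event => {
    if (busy) return 'ignored'; // a new event arrived mid-flight: drop it
    busy = true;
    try {
      return await handler(event);
    } finally {
      busy = false; // ready for the next event once the inner task completes
    }
  };
}

const login = exhaustMapLike(async () => 'logged in');
const first = login('click 1');  // starts the login work
const second = login('click 2'); // fired while busy -> ignored

Promise.all([first, second]).then(([a, b]) => {
  console.log(a); // 'logged in'
  console.log(b); // 'ignored'
});
```

By contrast, `switchMap` would cancel the in-flight login and start over, and `mergeMap` would run both, so `exhaustMap` is the right fit here.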
If we run the application with `ng serve`, we should now be able to successfully log in to our application using Auth0! We'll see a login screen similar to this:
On your first login, you'll need to click the "Sign Up" button to create a user. Alternatively, you can manually create users from the Auth0 dashboard. In either case, you'll receive an email asking you to verify your email address. Your first login to the application will also trigger an "Authorize App" screen requesting permission. Just click the green button to continue. Once you're all signed up and logged in, you'll be redirected to the `home` route, where you can click the button to view the book collection. Of course, we can't log out yet, so let's add the effects for that now.
Note: You can also use Google to sign up and log in. However, be aware that you will need to generate a Google Client ID and Client Secret in order to complete this tutorial.
Persist Authentication on Refresh
This is great, but if we refresh the application, we'll lose the access token. Because we already set up the flag in local storage and the route guard is checking for that flag, our application will appear to be logged in on refresh. However, we haven't called `checkSession` yet through Auth0, so we'll no longer have the token. Let's add the effect for `CheckLogin` to fix that.
```typescript
// auth/effects/auth.effects.ts
// add below login effects:

// ...above code unchanged
@Effect()
checkLogin$ = this.actions$
  .ofType<fromAuth.CheckLogin>(fromAuth.AuthActionTypes.CheckLogin)
  .pipe(
    exhaustMap(() => {
      if (this.authService.authenticated) {
        return this.authService.checkSession$({}).pipe(
          map((authResult: any) => {
            if (authResult && authResult.accessToken) {
              this.authService.setAuth(authResult);
              return new fromAuth.LoginSuccess();
            }
          }),
          catchError(error => {
            this.authService.resetAuthFlag();
            return of(new fromAuth.LoginFailure({ error }));
          })
        );
      } else {
        return empty();
      }
    })
  );
// ...below code unchanged
```
When `CheckLogin` is dispatched, this effect will call `checkSession` on the `AuthService`, which, like `parseHash`, returns token data. If there's token data, the effect will call the `setAuth` method and dispatch the `LoginSuccess` action. If there's an error, the effect will dispatch `LoginFailure`. Those actions will work the same way as with logging in: navigating to `home` on success or `login` on failure.
Let's check if it works. Run `ng serve`, navigate to the app, and log in. Once we're back at the `home` route, refresh the page with our browser. We should not be redirected back to the `login` route, but remain on the `home` route. Awesome!
Here's a challenge for you: how might you update this feature so that you can persist the previous route, too? That way a user would land back on `books` when refreshing.
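One way to tackle this challenge (a sketch only; the storage key and helper names here are assumptions, not code from this tutorial): have the route guard record the attempted URL before redirecting to `login`, and have the Login Redirect effect consume that URL instead of always navigating to `home`.

```typescript
// Minimal storage interface so the sketch is framework-free; in the app you
// would pass in window.localStorage, which satisfies this shape.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
  removeItem(key: string): void;
}

const REDIRECT_KEY = 'redirectUrl'; // assumed storage key

// Called by the route guard (with state.url) before redirecting to /login.
export function rememberRoute(url: string, store: KVStore): void {
  store.setItem(REDIRECT_KEY, url);
}

// Called by the Login Redirect effect in place of a hard-coded authSuccessUrl.
export function consumeRememberedRoute(store: KVStore): string | null {
  const url = store.getItem(REDIRECT_KEY);
  store.removeItem(REDIRECT_KEY); // one-shot, so later logins fall back to the default
  return url;
}
```

The effect would then navigate to `consumeRememberedRoute(localStorage) || this.authService.authSuccessUrl`.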
Add Log Out Effects
Let's add our final two effects to finish off this application.
```typescript
// auth/effects/auth.effects.ts
// Add under the login effects

// ...above code unchanged
@Effect()
logoutConfirmation$ = this.actions$
  .ofType<fromAuth.Logout>(fromAuth.AuthActionTypes.Logout)
  .pipe(
    exhaustMap(() =>
      this.dialogService
        .open(LogoutPromptComponent)
        .afterClosed()
        .pipe(
          map(confirmed => {
            if (confirmed) {
              return new fromAuth.LogoutConfirmed();
            } else {
              return new fromAuth.LogoutCancelled();
            }
          })
        )
    )
  );

@Effect({ dispatch: false })
logout$ = this.actions$
  .ofType<fromAuth.LogoutConfirmed>(fromAuth.AuthActionTypes.LogoutConfirmed)
  .pipe(tap(() => this.authService.logout()));
// ...below code unchanged
```
Here's what's happening here:
- Logout Confirmation — This effect will display the log out confirmation dialog. It will then process the result by dispatching either the `LogoutConfirmed` or `LogoutCancelled` actions.
- Logout — This effect happens after `LogoutConfirmed` has been dispatched. It will call the `logout` function on the `AuthService`, which tells Auth0 to log us out and redirect back home. This effect does not dispatch another action.
Running `ng serve` again should now allow us to log in, view the book collection, and log out. Be sure to check if we can also log out from the `home` route!
Remember, you can access the finished code for this tutorial here.
Review and Where to Go From Here
Congratulations on making it to the end! We've covered a lot of ground here, like:
- Using @ngrx/schematics to quickly generate new code
- Defining global authentication state
- Using selectors to access authentication state
- Setting up Auth0 to handle your authentication
- Using effects to handle the login and logout flows
- Persisting authentication on refresh
We've spent time laying the foundation of a basic authentication flow in a very simple application. However, we could easily apply this same setup to a much more complex application. Scaling the setup is very minor, and adding new pieces of state, new actions, or new side effects would be relatively easy. We've got all the building blocks in place.
For example, let's say you needed to add the access token to outgoing HTTP requests as an `Authorization` header. You've already got what you need to get this working quickly. You could add `tokenData` to the authentication state and create a selector for it. Then, add the token data as the payload of the `LoginSuccess` action and update the effects that use it. Once that's all set up, you could then create an HTTP interceptor to select the token data from the store and add it to outgoing requests.
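The header-attaching core of such an interceptor can be sketched without Angular's types (the `RequestLike` shape and `withAuthHeader` name are illustrative assumptions; a real implementation would implement `HttpInterceptor` and use `req.clone({ setHeaders: ... })` with the token selected from the store):

```typescript
// Simplified stand-in for Angular's HttpRequest, just enough for the sketch.
interface RequestLike {
  url: string;
  headers: Record<string, string>;
}

// Returns a copy of the request with the bearer token attached. Requests are
// treated as immutable, mirroring how HttpRequest.clone works in Angular.
export function withAuthHeader(req: RequestLike, accessToken: string | null): RequestLike {
  if (!accessToken) {
    return req; // no token in the store yet: pass the request through unchanged
  }
  return {
    ...req,
    headers: { ...req.headers, Authorization: `Bearer ${accessToken}` },
  };
}
```

Inside a real interceptor, this function would be applied to each request before handing it to `next.handle`.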
Now that the foundation of the authentication flow has been built, the sky is the limit for how you want to extend it. My goal for this tutorial was to keep it simple while helping you understand some new, fairly complex concepts. I hope you can take this knowledge and use it in the real world — let me know how it goes! | https://auth0.com/blog/ngrx-authentication-tutorial/ | CC-MAIN-2019-04 | en | refinedweb |
Could it be that my Arduino IDE is wrongly configured?
How to add a slider in Blynk to control a temperature setpoint
In the IDE under tools set erase flash to “All Flash Contents” and flash a blank sketch. Then change the settings back to erase “Only Sketch” and flash my latest sketch again.
I checked all my sensors and they are working with the DHTtester code. I wonder what the problem could be.
You shouldn’t have to change either. I think without Blynk your DHTtester sketch runs OK.
At the moment I can’t work out why adding Blynk is a problem.
Do any Blynkers have a DHT11 that they can test a sketch with?
Sounds to me like there’s some newbie mistake happening here.
I’m reading this on an iPhone (sat in a very nice, and hot, beach bar in Thailand by the way
) so following the ins-and-outs of the thread is tricky. Has the OP posted his DHT test code, and does it use the same DHT pins, library etc as is being used in the other code?
Also, if this is lashed-up on a breadboard then there’s every chance that there are some dodgy connections in there somewhere.
A more forensic approach to the fault-finding process would no doubt reveal the problem, but that’s not really possible with the OP’s obvious lack of experience and language issues.
Pete.
Yes, I'm actually a very fresh newbie in programming. I'm sure the connections are right. BTW, I'm not using a breadboard, just straight jumper-wire connections. I already downloaded the libraries for Blynk and the DHT sensor. What do you mean about the library being used in the other code?
Still warmer than Blighty though.
Currently 31° and Factor 50 here!
Pete.
@PeteKnight don’t you travel with a WeMos and a DHT in your case?
After changing all the jumper wires to new ones and using the code below, it is functioning correctly! Unfortunately the serial monitor is still showing "Failed to read from DHT sensor".
I am planning to add an LED widget as a notification when the relay module is on, and a Button widget to operate the relay manually.
```cpp
#include <SPI.h>
#include <ESP8266WiFi.h>
#include <BlynkSimpleEsp8266.h>
#include <DHT.h>

// You should get Auth Token in the Blynk App.
// Go to the Project Settings (nut icon).
char auth[] = "xxx";

// Your WiFi credentials.
// Set password to "" for open networks.
char ssid[] = "xxx";
char pass[] = "xxx";

#define DHTPIN 0       // D3

// Uncomment whatever type you're using!
#define DHTTYPE DHT11  // DHT 11

DHT dht(DHTPIN, DHTTYPE);
BlynkTimer timer;

float setpoint = 0;
bool relaystatuschanged = false;
const int relayPin = 5; // D1

BLYNK_WRITE(V0) // slider widget
{
  setpoint = param.asFloat();
  relaystatuschanged = true;
}

void sendSensor()
{
  float h = dht.readHumidity();
  float t = dht.readTemperature();

  if (isnan(h) || isnan(t)) {
    Serial.println("Failed to read from DHT sensor!");
    return;
  }
  Blynk.virtualWrite(V5, h);
  Blynk.virtualWrite(V6, t);

  if ((setpoint < t) && (relaystatuschanged == true)) {
    digitalWrite(relayPin, 0); // assuming relay is active HIGH
    relaystatuschanged = false;
  }
  if ((setpoint > t) && (relaystatuschanged == true)) {
    digitalWrite(relayPin, 1); // assuming relay is active HIGH
    relaystatuschanged = false;
  }
}

void setup()
{
  Serial.begin(9600);
  Blynk.begin(auth, ssid, pass);
  dht.begin();
  pinMode(relayPin, OUTPUT);

  // Setup a function to be called every five seconds
  timer.setInterval(5000L, sendSensor);
}

void loop()
{
  Blynk.run();
  timer.run();
}
```
United States General Accounting Office

GAO

September 1992

Report to Congressional Requesters

SECURITIES INVESTOR PROTECTION

The Regulatory Framework Has Minimized SIPC's Losses

GAO/GGD-92-109

United States General Accounting Office
Washington, D.C. 20548
General Government Division

B-248152

September 28, 1992

The Honorable Donald W. Riegle, Jr.
Chairman, Committee on Banking, Housing, and Urban Affairs
United States Senate

The Honorable John D. Dingell
Chairman, Subcommittee on Oversight and Investigations
Committee on Energy and Commerce
House of Representatives

This report responds to your requests that we review the operations and solvency of the Securities Investor Protection Corporation (SIPC). It discusses how the regulators' success in protecting customers depends upon the quality of regulatory oversight of the securities industry. We also provide recommendations to improve Securities and Exchange Commission (SEC) and SIPC disclosures to customers and SEC's oversight of SIPC's operations.

We will send copies of this report to the Chairman, SIPC; the Chairman, SEC; appropriate congressional committees and subcommittees; and other interested parties. We will also make copies available to others upon request.

This report was prepared under the direction of Craig A. Simmons, Director, Financial Institutions and Markets Issues, who may be reached on (202) 275-8678 if there are any questions concerning the contents of this report. Other major contributors to this report are listed in appendix IV.

Richard L. Fogel
Assistant Comptroller General
Executive Summary
Purpose
Congress created the Securities Investor Protection Corporation (SIPC) in 1970 after a large number of customers lost money when they were unable to obtain possession of their cash and securities from failed broker-dealers. SIPC was established to promote public confidence in the nation's securities markets by guaranteeing the return of property to small investors if securities firms fail or go out of business. SIPC is a member-financed, private nonprofit corporation with statutory authority to borrow up to $1 billion from the U.S. Treasury. This report responds to requests by the Senate Banking Committee and the House Energy and Commerce Subcommittee on Oversight and Investigations that GAO report on several issues, including (1) the exposure and adequacy of the SIPC fund, (2) the effectiveness of SIPC's liquidation oversight efforts, and (3) the disclosure of SIPC protections to customers.
Background
The law that created SIPC also required the Securities and Exchange Commission (SEC) to strengthen customer protection and increase investor confidence in the securities markets by increasing the financial responsibility of broker-dealers. Pursuant to this mandate, SEC developed a framework for customer protection based on two key rules: (1) the customer protection rule and (2) the net capital rule. These rules respectively require broker-dealers that carry customer accounts to (1) keep customer cash and securities separate from those of the company itself and (2) maintain sufficient liquid assets to protect customer interests if the firm ceases doing business. In essence, SIPC is a back-up line of protection to be called upon generally in the event of fraud or breakdown of the other regulatory protections.

Except for certain specialized broker-dealers, all securities broker-dealers registered with SEC are required to be members of SIPC. Other types of financial firms that are involved in the purchase or sale of securities products, such as open-end investment companies and certain types of investment advisory firms, are not permitted to be SIPC members. As of December 31, 1991, SIPC had 8,153 members. Of this number, only 954 are authorized to receive and hold customer property. The rest either trade exclusively for their own accounts or act as agents in the purchase or sale of securities to the public. SEC and SIPC officials estimate that over $1 trillion of customer property is held by SIPC members.

SIPC is not designed to keep securities firms from failing or, as in the case of deposit insurance for banks, to shield customers from changes in the
Page 2
GAO/GGD-92-109 Securities Investor Protection
market value of their investment. Rather, SIPC has the limited purpose of ensuring that when securities firms fail or otherwise go out of business, customers will receive the cash and securities they own up to the SIPC limits of $500,000 per customer, of which $100,000 may be used to satisfy claims for cash. Thus, the risks to the taxpayer inherent in SIPC are less than those associated with the deposit insurance system.

SEC and self-regulatory organizations, such as the New York Stock Exchange, are responsible for enforcing the net capital and customer protection rules. However, if a firm is in danger of failing and customer accounts are at risk, SIPC may initiate liquidation proceedings. SEC and industry participants do not expect that SIPC's back-up role in liquidating firms should be needed very often, which both reduces SIPC's exposure to loss and minimizes potential adverse market impacts. SIPC liquidation proceedings can be quite complex, and it can take weeks or longer before customers receive the bulk of their property.

In the 20 years since its inception, SIPC has been called on to liquidate 228 firms, most of which have involved fewer than 1,000 customers. The revenues available to the SIPC fund have been sufficient to meet all liquidation and administrative expenses, which totaled $236 million. As of December 31, 1991, the accrued balance of the fund stood at $653 million, the highest level ever. After conducting a review of its funding needs, SIPC adopted a policy to increase its reserves to $1 billion by 1997. SIPC and SEC officials believe that reserves of this level, augmented by bank lines of credit of $1 billion and also by a $1 billion line of credit at the U.S. Treasury, will be more than sufficient to fulfill its back-up role in protecting against the loss of customer property.
Results in Brief
The regulatory framework within which SIPC operates has thus far been successful in protecting customers while at the same time limiting SIPC's losses. However, complacency regarding SIPC's continuing ability to be successful is not warranted because securities markets have grown more complex and the SIPC liquidation of a large firm could be very disruptive to the financial system. The central conclusion of this report — that SIPC's funding requirements and market stability depend on the quality of regulatory oversight of the industry — underscores the need for SEC and self-regulatory organizations to be diligent in their oversight of the industry and their enforcement of the net capital and customer protection rules.
No objective basis exists for setting the right level for SIPC reserves, but GAO believes that efforts to plan for the SIPC fund's future needs by increasing SIPC's reserves represent a responsible approach to dealing with the fund's potential exposure. However, in view of the industry's dynamic nature, SIPC and SEC must make periodic assessments of the fund to adjust funding plans to changing SIPC needs. In particular, measures to strengthen the fund must be taken immediately if there is evidence that the customer protection and net capital rules are losing effectiveness.

While SIPC generally has received favorable comments from securities regulators and industry officials on its handling of past liquidations, it could do more to prepare for the potential liquidation of a large firm. SIPC's readiness to respond quickly by having the information and automated systems necessary to carry out a liquidation is important for the timely settlement of customer claims. The impact upon public confidence in the securities markets may be important in the liquidation of a large firm with thousands of customers.

SIPC and SEC could provide the public with more complete information about the nature of SIPC coverage. Certain SEC-registered firms that are not SIPC members, including some investment advisers, may act as intermediaries in the purchase and sale of securities to the public and have temporary access to customer funds. These firms are not required to disclose the fact that they are not SIPC members, even though their customers are subject to the risks of loss and misappropriation of their funds and securities. Better disclosure is needed so that customers can make informed investment decisions.
GAO's Analysis
Strong Enforcement Is the Key to Continued Success in Protecting Customers
To date, SIPC's role in providing back-up protection for customers' cash and securities has worked well. The securities industry has faced many difficult challenges since SIPC's inception, such as major volatility in the stock markets and numerous broker-dealer failures (including two of the largest securities firms within the past 3 years). Since 1971, more than 20,000 broker-dealers have failed or ceased operations, but SIPC has initiated liquidation proceedings for only 228 — about 1 percent — of these firms. (See p. 22.)
Most firms involved in SIPC liquidations failed due to fraudulent activities. Within the last 5 years, 26 of 39 SIPC liquidations have involved failures due to fraud by firms that were acting as intermediaries between customers and firms authorized to hold customer accounts. Most firms that cease operations do not require a SIPC liquidation because they do not carry customer accounts, customer accounts are fully protected, or they and/or the regulators have made alternative arrangements to protect the customer accounts. (See pp. 29-31.)

In the future, SIPC losses can remain modest if SEC and self-regulatory organizations continue to successfully oversee the securities industry. But complacency is not warranted, and securities markets could be significantly disrupted if the enforcement of the net capital and customer protection rules proved insufficient to prevent a SIPC liquidation of a large securities firm. In that instance, customers of the firm could experience delays in obtaining access to their funds. In addition, the development of new products and the increasing risks associated with the activities of many of the larger securities firms pose special challenges to the regulators. (See pp. 36-39.)
SIPC Has Addressed Its Funding Needs
There is no scientific basis for determining what SIPC's level of funding should be because the greatest risk the fund faces — a breakdown of the effectiveness of the net capital and customer protection rules — cannot be foreseen. However, given the growing complexity and riskiness of securities markets, GAO believes that SIPC officials have acted responsibly in adopting a financial plan that would increase fund reserves to $1 billion by 1997. While GAO cannot conclude that this level of funding will be adequate, $1 billion should be more than sufficient to deal with cases of fraud at smaller firms, and it probably can finance the liquidation of one of the largest securities firms. The $1 billion fund may not, however, be sufficient to finance worst-case situations such as massive fraud at a major firm or the unlikely simultaneous failures of several of the largest broker-dealers. Periodic SIPC and SEC assessments must account for factors such as the size of the largest broker-dealer and any signs that regulatory enforcement of the net capital or customer protection rules has deteriorated. (See pp. 40-46.)
Improve SIPC Preparation for Liquidating a Large Firm

SIPC liquidations may involve delays and can expose customers to declines in the market value of their securities. To minimize delays, in the early 1980s a SIPC task force and SEC recommended that SIPC prepare for
potential liquidations of large firms. However, SIPC continues to make only limited preparations for the potential liquidations of large troubled firms. SIPC believes it is unlikely it will ever be called on to liquidate a large firm and cites its record of success as demonstrating its ability to liquidate any firm. (See pp. 54-57.)

GAO has no reason to question the way SIPC has conducted liquidations. However, those liquidations have all been of relatively small firms. GAO is concerned that lack of preparation and planning may limit SIPC's ability to ensure the prompt return of customer property in the event it was called on to liquidate a large, complex firm. SIPC could have been better prepared to conduct the liquidation of a large firm that could have become a liquidation in 1989. In addition, SIPC has not analyzed automation options and may be limited in its ability to ensure that the trustee of a major liquidation would be able to acquire a timely and cost-effective automation system. Working with SEC, SIPC should improve its capabilities in these areas. (See pp. 57-61.)
Improve Disclosure to Customers

SIPC-member broker-dealers are required to display a SIPC symbol to notify their customers that they are SIPC members. They are also encouraged to provide customers with a brochure that explains SIPC protection. GAO believes that this brochure could be modified to clarify areas of confusion that have been raised by customers — for example, that customers of firms that fail or go out of business have only 6 months to file a claim. (See pp. 65-67.)
However, the greatest opportunity for customer confusion arises from SEC-registered firms that act as intermediaries in the purchase and sale of securities products to customers. These firms include some SIPC-exempt broker-dealers and certain types of investment advisory firms. These firms may have temporary access to customer property but are not required to disclose that they are not SIPC members. Some customers have purchased securities from nonmember intermediaries that were affiliated or associated with SIPC firms and were not protected by SIPC when the intermediary firm failed. Customers of these intermediary firms risk loss of their property by fraud and mismanagement. GAO believes that customers should receive information on the SIPC status of SEC-registered intermediary firms that have access to customer funds and securities so that they can make informed investment decisions. (See pp. 67-72.)
Recommendations
The chairmen of SIPC and SEC should periodically review the adequacy of SIPC's funding arrangements (see p. 58). The chairmen should also work with self-regulatory organizations to improve SIPC's access to the information and automated systems necessary to carry out a liquidation of a large firm on as timely a basis as possible. In addition, the SEC Chairman should periodically review SIPC operations to ensure that SIPC liquidations are timely and cost effective (see p. 62).

Finally, the chairmen of SIPC and SEC, within their respective jurisdictions, should review and, as necessary, improve disclosure information and regulations to ensure that customers are adequately informed about the SIPC status of SEC-registered financial firms that serve as intermediaries in customer purchases of securities and have access to customer property (see p. 72).
Agency Comments
SEC and SIPC provided written comments on a draft of this report (see apps. II and III). SEC and SIPC agreed with GAO's assessment of the condition of the SIPC fund and with GAO's recommendation for periodic evaluation of the fund's adequacy. SEC also agreed with GAO's recommendations to improve its oversight of SIPC's operations and to consider some expansion of SEC disclosure regulations. SIPC agreed with GAO's recommendation to improve SIPC disclosures to customers. SEC and SIPC did not believe that problems exist in obtaining information or acquiring automated liquidation systems, but they agreed to review their policies and consider GAO's recommendations in these areas.
Contents
Executive Summary  2

Chapter 1: Introduction  12
    Background  12
    SIPC Reserves Are Increasing  16
    Objectives, Scope, and Methodology  19
    Agency Comments  21

Chapter 2: The Regulatory Framework Is Critical to Minimizing SIPC's Exposure  22
    Few SIPC Liquidations Needed to Protect Customers  22
    How the Regulators Have Protected Customers While Minimizing SIPC's Exposure to Losses  24
    Most SIPC-Liquidated Firms Failed Due to Fraud  29
    SIPA Liquidation Procedures Involve Delays  32
    A Major SIPC Liquidation Could Damage Public Confidence  35
    Conclusions  39

Chapter 3: SIPC's Responsible Approach for Meeting Future Financial Demands  40
    SIPC Funding Needs Are Tied to the Risk of a Breakdown in the Regulatory System  40
    SIPC's Plan Seems Reasonable to Fulfill Back-Up Role  42
    If SIPC's Funding Needs Increase, Assessment Burden Issues Could Arise  46
    Alternatives or Supplements to SIPC's Financial Structure  51
    Conclusions  52
    Recommendations  53
    Agency Comments and Our Evaluation  53

Chapter 4: SIPC Can Better Prepare for Potential Liquidations  54
    SIPC Has Not Made Special Preparations for Liquidating a Large Firm  54
    Measures to Enhance SIPC's Ability to Liquidate a Large Firm on a Timely Basis  57
    More Effective Oversight by SEC Is Needed  61
    Conclusions  62
    Recommendations  62
    Agency Comments and Our Evaluation  62

Chapter 5: Discrepancies in Disclosing Customer Protections  65
    Disclosure Requirements for SIPC Members  65
    Differences in Customer Disclosure Need to Be Addressed  67
    SEC Should Address Differences in Customer Protection  71
    Conclusions  72
    Recommendations  72
    Agency Comments and Our Evaluation  73

Appendixes
    Appendix I: SEC Customer Protection and Net Capital Rules  76
    Appendix II: Comments From the Securities Investor Protection Corporation  86
    Appendix III: Comments From the Securities and Exchange Commission  92
    Appendix IV: Major Contributors to This Report  101

Tables
    Table 1.1: SIPC's Cumulative Expenses for the Years 1971-1991  17
    Table 2.1: SIPC Membership and Liquidations, 1971-1991  23
    Table 2.2: Most Expensive SIPC Liquidations as of December 31, 1991  30
    Table 2.3: SIPA Liquidation Proceedings  33
    Table 2.4: SIPC Bulk Transfers, 1978-1991  34
    Table 2.5: Major Securities Firms and Largest SIPC Liquidations  37
    Table 3.1: Most Expensive SIPC Liquidations as of December 31, 1991  41
    Table 3.2: Largest SIPC Liquidations as Measured by Customer Claims Paid as of December 31, 1991  42
    Table 3.3: History of SIPC Assessment Rates  47
    Table 3.4: SIPC Assessments, Industry Income, and Revenue, 1983-1991  49
    Table 4.1: Operational Information Recommended by a 1985 SIPC-SEC Committee to Help Ensure the Timely Liquidation of a Large Broker-Dealer
    Table I.1: Credits Component of the Reserve Formula Calculation  79
    Table I.2: Debits Component of the Reserve Formula Calculation  81
    Table I.3: Reserve Calculation  82
    Table I.4: Alternative Net Capital Calculation  85

Figures
    Figure 1.1: SIPC Accrued Fund Balance, 1971-1991  18
    Figure 3.1: SIPC Revenue and Expenses, 1971-1991  47
    Figure 3.2: SIPC Assessments as a Percentage of Securities Industry Pretax Income, 1983-1991  50
Abbreviations
DTC     Depository Trust Corporation
FDIC    Federal Deposit Insurance Corporation
FDR     Fitzgerald, DeArman, and Roberts, Inc.
FOCUS   Financial and Operational Combined Uniform Single (Report)
NASD    National Association of Securities Dealers, Inc.
NYSE    New York Stock Exchange
OCC     Options Clearing Corporation
SEC     Securities and Exchange Commission
SIPA    Securities Investor Protection Act
SIPC    Securities Investor Protection Corporation
SRO     self-regulatory organization
WBP     Waddell Benefit Plans, Inc.
Chapter 1
Introduction
This report was prepared in response to requests from the chairmen of the Senate Banking Committee and the House Energy and Commerce Subcommittee on Oversight and Investigations that we review the effectiveness of the Securities Investor Protection Corporation (SIPC). SIPC, a private, nonprofit membership corporation established by Congress in 1970, provides certain financial protections to the customers of failed broker-dealers. As requested, this report assesses several issues, including the exposure and adequacy of the member-financed SIPC fund, the effectiveness of SIPC's liquidation efforts, and the disclosure of SIPC protections to customers.
Background
The Securities Investor Protection Act of 1970 (SIPA), which created SIPC, was passed to address a specific issue within the securities industry: how to ensure that customers recover cash and securities from broker-dealers that fail or cease operations and cannot meet their obligations to customers. To address this issue, SIPA authorized the Securities and Exchange Commission (SEC) to promulgate financial responsibility rules designed to strengthen broker-dealer operations and minimize SIPC's exposure. The rules require broker-dealers to (1) maintain sufficient liquid assets to satisfy customer and creditor claims and (2) safeguard customer cash and securities. SIPC serves a back-up role and is generally called upon to compensate the customers of firms that fail due to fraud and cannot meet their obligations to customers.[1]

When a troubled firm cannot fulfill its obligations to customers, SIPC initiates liquidation proceedings in federal district court. The court appoints an independent trustee or, in certain small cases, SIPC itself to liquidate the firm if the court agrees that customers face losses. After the case is moved to federal bankruptcy court, SIPC oversees the liquidation proceedings, advises the trustee, and advances payments from its fund if needed to protect customers. Customers of a firm in liquidation receive all securities registered in their name and a pro rata share of the firm's remaining customer cash and securities. Customers with remaining claims for securities and cash may each receive up to $500,000 from the SIPC fund, of which no more than $100,000 can be used to protect claims for cash. SIPC coverage applies to most securities — notes, stocks, bonds, debentures, certificates of deposit, and options on securities — and cash deposited to purchase securities. However, SIPC coverage does not include,
[1] The regulators require operating firms to maintain blanket fidelity bonds to protect customers against the fraudulent misappropriation of their property.
among other things, any unregistered investment contracts, currency, commodity or related contracts, or futures contracts.

Congress enacted SIPA in response to what is often referred to as the securities industry's "back-office crisis" of the late 1960s, which was brought on by unexpectedly high trading volume. This crisis was followed by a sharp decline in stock prices, which resulted in hundreds of broker-dealers merging, failing, or going out of business. During that period, some firms used customer property for proprietary activities, and procedures broke down for the possession, custody, location, and delivery of securities belonging to customers. The breakdown resulted in customer losses exceeding $100 million because failed firms did not have their customers' property on hand. The industry attempted to compensate customers through voluntary trust funds financed by assessments on broker-dealers. However, industry officials, SEC, and Congress subsequently agreed that the trust funds were inadequate,[2] and that an alternative — SIPC — was needed to better protect customers and maintain public confidence.
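The per-customer coverage limits described above imply a simple calculation of the SIPC advance for any given claim. The following sketch is illustrative only (the figures, the function, and the order in which the pro rata recovery is applied are hypothetical assumptions, not drawn from this report); it assumes the pro rata recovery is applied to the securities claim first.

```typescript
// Illustrative arithmetic for the SIPA distribution rules described above:
// a customer first receives a pro rata share of the firm's remaining customer
// property; SIPC may then advance up to $500,000 per customer against the
// shortfall, of which no more than $100,000 may satisfy claims for cash.

interface Claim {
  securitiesClaim: number; // dollar value of securities owed to the customer
  cashClaim: number;       // cash owed to the customer
}

export function sipcAdvance(claim: Claim, proRataRecovery: number): number {
  const total = claim.securitiesClaim + claim.cashClaim;
  const shortfall = Math.max(total - proRataRecovery, 0);
  // Simplifying assumption: pro rata recovery satisfies securities first,
  // so any remaining cash shortfall is capped by the original cash claim.
  const cashShortfall = Math.min(shortfall, claim.cashClaim);
  const coveredCash = Math.min(cashShortfall, 100_000);       // $100,000 cash sub-limit
  const coveredSecurities = shortfall - cashShortfall;
  return Math.min(coveredCash + coveredSecurities, 500_000);  // $500,000 overall limit
}
```

For example, a hypothetical customer owed $600,000 in securities and $150,000 in cash who recovers $200,000 pro rata has a $550,000 shortfall; the cash portion is capped at $100,000, so the advance is capped at the full $500,000 limit.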
SIPC's Structure and Membership
SIPA defines SIPC's structure and identifies the types of broker-dealers that are required to be SIPC members. Under SIPA, SIPC has a board of seven directors that includes government and industry representatives and determines policies and oversees operations.[3] Among other duties, the board has the obligation to examine the condition of the SIPC fund and ensure that it has sufficient money to meet anticipated liquidation expenses. SIPC has one office located in Washington, D.C., and employs 32 staff members. SIPC spent about $5.1 million in 1991 to pay salaries, travel, and other operating expenses.

SIPA authorizes SEC to oversee SIPC and ensure that SIPC fulfills its responsibilities under the act. For example, SIPC must submit all proposed rules to SEC for review and approval. SEC's oversight responsibilities for SIPC are generally similar to SEC's oversight responsibilities for the self-regulatory organizations (SRO), the national exchanges such as the New York Stock Exchange (NYSE) and the National Association of
'The trust funds failed for the following reasons: (1) the size of the funds was inadequate, (2) the exchanges disbursed money from the funds on a voluntary basis, and (3) the funds did not protect customers of firms that were not members of the exchanges.
'The president of the United States appoints five of the directors, subject to Senate approval. Two of these appointees — the chairman and the vice-chairman — must be from the general public; the other three represent the securities industry. The secretary of the Treasury and the Federal Reserve Board appoint officers of their respective organizations to serve as the sixth and seventh directors.
Securities Dealers, Inc. (NASD). SROs, whose boards are elected by their members, are private corporations that examine broker-dealers, monitor their compliance with the securities laws and regulations, and, along with SEC, notify SIPC when a broker-dealer experiences financial problems.

With certain exceptions discussed below, all firms registered as broker-dealers under section 15(b) of the Securities Exchange Act of 1934 are required to become SIPC members regardless of whether they hold customer accounts or property. As of December 31, 1991, SIPC had 8,153 members. Of this total, only 954 (12 percent) were carrying firms that had met the SEC requirements for holding customer property or accounts.' The other 7,199 SIPC members (88 percent) were either (1) introducing firms, which serve as agents between the customers and the carrying firms and handle customer property for limited periods,' or (2) firms that trade solely for their own accounts on the national securities exchanges.

Data were not available to determine the total amount of customer property that is protected by SIPC. SEC does not routinely collect data on the amount of fully paid customer securities held by broker-dealers that would make up the bulk of SIPC's potential exposure. However, SEC and SIPC officials estimated that broker-dealers hold over $1 trillion of SIPC-protected customer property based on data from the 20 largest broker-dealers.

SIPA excludes broker-dealers whose principal business, as determined by SIPC subject to SEC review, is conducted outside the United States, its possessions, and territories. A SIPC official said that SIPC reviews applications for exclusion on a case-by-case basis.
Moreover, SIPA excludes broker-dealers whose business consists exclusively of (1) distributing shares of registered open-end investment companies (mutual funds) or unit investment trusts, (2) selling variable annuities, (3) providing insurance, or (4) rendering investment advisory services to registered investment companies or insurance company separate accounts.
'These carrying firms also include clearing firms that hold customer property for a limited period solely to settle trades.
'For example, the introducing firm may send a customer's check to the clearing firm as payment for executing a trade.
'SEC officials stated that information on the amount of SIPC-protected customer property is not collected for several reasons: (1) the value of customer securities is marked-to-market and changes continuously; (2) gathering this information would be expensive and require significant computer capability, which would be especially difficult for small firms; and (3) SEC has not needed the data for regulatory purposes.
SIPC Has Back-Up Customer Protection Role
Congress established SIPC as one part of a broader regulatory framework to protect the customers of U.S. broker-dealers. Congress also required SEC to issue financial responsibility rules designed to improve the operations of broker-dealers and prevent the types of abuses that occurred during the 1960s back-office crisis. The two key financial responsibility rules are the customer protection rule and the net capital rule. In 1972, SEC issued a customer protection rule (rule 15c3-3) that requires firms to safeguard customer cash and securities and forbids their use in proprietary activities. In 1975, SEC strengthened its net capital rule (rule 15c3-1). The net capital rule requires firms to have sufficient liquid assets on hand to satisfy liabilities, including customer and creditor claims.

SROs and SEC are responsible for monitoring broker-dealer compliance with the customer protection and net capital rules and for closely monitoring the activities of financially troubled firms. Generally, the regulators are able to arrange the transfer of all customer accounts at troubled firms to other firms or to return customer property directly to customers if the troubled firms are in compliance with the SEC rules. A SIPC liquidation becomes necessary if customer cash and securities are missing or if the SRO feels that there is not enough money to self-liquidate.
SIPC Protections Differ From Deposit Insurance Protections
SIPC's protections differ fundamentally from federal deposit insurance protections for bank and thrift depositors, which are administered by the Federal Deposit Insurance Corporation (FDIC).' SIPC does not protect investors from declines in the market value of their securities. The major risk that SIPC faces, therefore, is that broker-dealers will lose or steal customer cash and securities and violate the customer protection or net capital rules. By contrast, FDIC protects the par value of deposits and accrued interest payments up to $100,000.'

Suppose that a customer purchased one share of XYZ Corporation for $100 through a broker-dealer, and the firm held the security. The market value of the share then declined to $50. If the broker-dealer failed and the share was missing, SIPC would advance $50 so that the trustee could purchase one share of XYZ Corporation. SIPC would not protect the customer against the share's $50 market loss. By contrast, FDIC would pay an individual with
'The other deposit insurer is the National Credit Union Administration, which protects the customers of credit unions.
'Customers receive similar protection from both FDIC and SIPC for cash claims of up to $100,000.
a $100 deposit the full $100 if the bank failed, even if the assets of the bank were worth 50 percent of their book value.

Another difference is that SEC's customer protection rule prevents broker-dealers from using customers' securities and funds for proprietary purposes. By contrast, the essence of banking is that banks use insured deposits to make loans and other investments. Consequently, by guaranteeing the par value of deposits, FDIC protects depositors not only against the disappearance of deposits due to bookkeeping errors or fraud but also against bad investment decisions by such banks. It is much riskier for the government to protect depositors against the consequences of bad investments, as FDIC does, than only against missing property, as SIPC does.

There is also a difference in the amount of customer property that is protected. SIPC protects customer losses of up to $500,000 after all customer funds and securities have been distributed on a pro rata basis from the failed firm's separate account that includes all customer property. This means that a customer with a claim for $5 million of stock who received $4.5 million of their stock from the pro rata distribution would then receive an additional $500,000 worth of securities from SIPC. Creditors of the failed securities firm cannot claim assets from the firm's customer property account. By contrast, bank depositors are assured of recovering their deposits only up to the $100,000 limit; if they had any deposits exceeding $100,000, in many cases they are required to join all other creditors for a pro rata share of the remaining failed bank assets.

Finally, SIPC and FDIC protections differ in that the customers of broker-dealers liquidated by SIPC trustees are likely to wait longer to receive compensation than are insured bank depositors. Under SIPA, customers frequently must file claims with the trustee before receiving their property.
Although trustees and SIPC have the authority to arrange bulk transfers of customer accounts to acquiring firms to speed up the process, such transfers are not always possible if the firm failed due to fraud, if it kept inaccurate books and records, or if its accounts were of poor quality. Moreover, a bulk transfer can take weeks or longer to arrange. In contrast, FDIC frequently transfers the insured deposits of failed banks to other banks over a weekend.
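The pro rata distribution and the advance limits described above can be sketched in a short calculation. This is an illustrative model only; the dollar limits come from the text, while the function name and scenario figures are hypothetical.

```python
ADVANCE_LIMIT = 500_000   # maximum SIPC advance per customer
CASH_SUBLIMIT = 100_000   # at most this much of the advance may cover cash claims

def sipc_advance(securities_shortfall, cash_shortfall):
    """Advance SIPC would pay after the pro rata distribution.

    Shortfalls are what the customer is still owed once the failed
    firm's pooled customer property has been shared pro rata.
    """
    cash_part = min(cash_shortfall, CASH_SUBLIMIT)
    return min(securities_shortfall + cash_part, ADVANCE_LIMIT)

# The report's example: a $5 million securities claim with $4.5 million
# returned pro rata leaves a $500,000 shortfall, covered in full.
print(sipc_advance(500_000, 0))    # 500000
# A $150,000 cash shortfall alone is protected only up to $100,000.
print(sipc_advance(0, 150_000))    # 100000
```

Note how the cash sub-limit binds before the overall limit: a customer owed only cash can never receive more than $100,000 from the fund.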
SIPC Reserves Are Increasing
Between 1971 and 1991, SIPC initiated liquidation proceedings against 228 failed firms. As of December 31, 1991, SIPC trustees had completed 183 of the 228 liquidation proceedings. The 183 completed liquidations had an
average of about 930 customer accounts and cost SIPC about $500,000 per liquidation in customer protection and administrative expenses. At year-end 1991, the other 45 liquidation proceedings remained open because trustees were still processing claims or litigating matters, such as civil actions against former firm officials.' As of December 31, 1991, SIPC's cumulative operational expenses totaled $63 million and liquidation expenses for closed cases and open proceedings totaled $236 million. Of the $236 million, SIPC used $175 million to satisfy customer claims for missing cash and securities and $61 million to pay administrative costs, such as trustees' fees and litigation expenses. (See table 1.1.)
Table 1.1: SIPC's Cumulative Expenses for the Years 1971-1991

Type of expense                 Total expense
SIPC operations                   $62,575,788
Liquidation expenses
  Administrative costs             61,032,655
  Customer claims                 174,834,104
Total                            $298,442,547

Source: SIPC.
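The figures in table 1.1 can be cross-checked against the totals quoted in the text; the snippet below is a simple arithmetic verification using only the numbers reported above.

```python
# Consistency check of SIPC's cumulative expenses, 1971-1991.
operations = 62_575_788        # SIPC operating expenses
administrative = 61_032_655    # trustees' fees, litigation, etc.
customer_claims = 174_834_104  # advances for missing cash and securities

liquidation_expenses = administrative + customer_claims
total = operations + liquidation_expenses
print(f"liquidation expenses: ${liquidation_expenses:,}")  # $235,866,759
print(f"total:                ${total:,}")                 # $298,442,547
```

The liquidation subtotal rounds to the $236 million cited in the text, and the grand total matches the table exactly.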
To acquire the cash necessary to pay liquidation expenses and maintain a reserve fund, SIPC levies assessments on the revenues of member firms and also earns interest on the invested fund balance. When SIPC was first established, the assessment was 0.5 percent of each member's gross revenues from the securities business.' Rates fluctuated from that time depending on the level of expenses, and for several years the assessment was nominal. Following the stock market crash of 1987, the SIPC board decided to increase the assessment rate to 0.1875 percent of gross revenues. In 1990, SIPC assessments amounted to $73 million, based upon industry gross revenues of $39 billion. Because of the assessment increases, interest income, and low liquidation expenses, SIPC's accrued fund balance has increased significantly in recent
'Litigation matters were still pending in 37 of the 45 cases. In those 37 cases, the trustees had already satisfied all customer claims.
'Gross revenues, as specified in SIPA, include fees and other income from various categories of the securities business but do not include revenues received by a broker-dealer in connection with the distribution of shares of a registered open-end investment company or unit investment trust, from the sale of variable annuities, or from insurance business. In 1990, gross revenues were about 64 percent of total industry revenues.
years (see fig. 1.1)." As of December 31, 1991, the accrued balance of SIPC's fund was $653 million, its highest level since SIPC's inception and an 87-percent increase over the fund balance at year-end 1987. SIPC also maintained a $500 million line of credit with a consortium of U.S. banks at year-end 1991. In addition, SIPC has the authority to borrow — through SEC — up to $1 billion from the U.S. Treasury.
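As a rough consistency check on the assessment figures quoted earlier, the reported 1990 assessments and gross revenues imply an effective rate of roughly 0.19 percent; the snippet below simply derives that rate from the reported numbers.

```python
# Back-of-envelope check: $73 million assessed on $39 billion of
# 1990 gross revenues implies an effective rate near 0.19 percent,
# far above the nominal rates charged for several earlier years.
assessments_1990 = 73_000_000
gross_revenues_1990 = 39_000_000_000

implied_rate = assessments_1990 / gross_revenues_1990
print(f"implied assessment rate: {implied_rate:.4%}")
```

This is only a sanity check on the report's own figures; the actual assessment base and rate schedule are defined by SIPA and SIPC bylaws.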
Figure 1.1: SIPC Accrued Fund Balance, 1971-1991 (dollars in millions)

Source: SIPC.
In 1991, the SIPC board reviewed the adequacy of the fund size and bank borrowing authority in light of potential liquidation expenses. Based on the review, the board decided to build the fund at a 10-percent annual rate with a goal of $1 billion by 1997. To accomplish this goal, the board set the assessment rate at 0.065 percent of each firm's net operating revenues; this action resulted in assessment revenue in 1991 of $39 million — a $34 million
"The SIPC i'und, as defined by SIPA, consists of cash and amounts invested in U.S. government or agency securities while the accrued fund balance represents SIPC's assets minus funds needed to complete ongoing liquidations.
reduction from the amount collected in the previous year." In 1991, the fund increased by $47 million due to interest revenue. The board also decided to raise SIPC's bank line of credit to $1 billion beginning in 1992. Over the next 4 years, $250 million of credit will come due annually and may be renewed. The line of credit was arranged with a consortium of banks and cannot be canceled by the banks, but the banks could decline to renew as each portion of the line comes up for renewal.
Objectives, Scope, and Methodology
We received separate requests from the chairmen of the Senate Banking Committee and the House Energy and Commerce Subcommittee on Oversight and Investigations to assess several issues, including (1) the exposure and adequacy of the SIPC fund, (2) the feasibility of supplemental funding mechanisms such as private insurance, (3) the effectiveness of SIPC's liquidation efforts, and (4) the disclosure of SIPC protections to customers. We were also asked to determine whether SIPC needs the authority to examine the books and records of its members and to take enforcement actions.

To gain a basic understanding about how SIPC and the securities regulatory framework protect customers, we reviewed SIPA and its legislative history, SEC's net capital and customer protection rules, and SIPC bylaws and internal documents. We also reviewed our previous reports on the securities industry.

During our review, we determined that no quantifiable measure exists to assess the exposure of the SIPC fund and the adequacy of its reserves (such as the ratio of reserves to insured deposits, which FDIC uses to assess the exposure of the Bank Insurance Fund). As a result, we based our conclusions about the SIPC fund's ability to protect customers and maintain public confidence in the markets on such factors as SIPC's past expenses, current trends in the securities industry, the regulators' enforcement of the net capital and customer protection rules, and SIPC's policies and procedures.

We reviewed the principal studies used by the SIPC board in making its judgments: a report prepared by the Deloitte and Touche accounting firm and a report on SIPC's assessment policies prepared for SIPC by a task force
"Net operating revenue-based assessments allow broker-dealers to deduct all interest expense from securities business revenue. Broker-dealers also have the option of continuing to deduct 40 percent of interest revenue from margin accounts.
of regulatory and industry officials.' We did not independently duplicate the methodology of these studies, but we assessed the reasonableness of the studies and the board's decisions in light of the risk characteristics of the industry, the history of SIPC liquidations, the effectiveness of the regulatory structure, and recent developments within the industry. We discussed the reports and SIPC fund issues with senior SIPC officials, SEC officials in the Division of Market Regulation and the New York Regional Office, officials at NYSE and the NASD, officials at the Federal Reserve Board and the Federal Reserve Bank of New York, and an official at the Department of the Treasury. We also interviewed the individuals who wrote the Deloitte and Touche report and industry representatives to ascertain their views on the adequacy of the SIPC fund.

We did not conduct a comprehensive review of the efficiency of SIPC's liquidation proceedings; rather, we focused on SIPC's preparations for liquidations that could affect the timeliness of customers' ability to access their accounts. We also looked at SEC's oversight efforts and reviewed a 1986 SEC letter to the SIPC chairman reporting on SEC's review of SIPC's operations, which is the only written evaluation SEC has issued on SIPC's operations. We discussed SIPC's annual financial audits with its independent auditor, Ernst and Young.

We also contacted the trustees of four large SIPC liquidations (as measured by SIPC expenditures and number of customer claims paid). We interviewed the trustees of the two most expensive liquidations to date — Bell and Beckwith, and Bevill, Bresler & Schulman, Inc. In addition, we interviewed the trustee who liquidated the largest firm, Blinder Robinson and Co., Inc. (as measured by the number — 61,000 — of customer claims paid) and contacted the trustee who liquidated Fitzgerald, DeArman, and Roberts, Inc. (FDR) in the largest bulk transfer to date (80,000 accounts).
Moreover, we discussed with senior SIPC officials their efforts to prepare for the liquidations of two large firms that could have become SIPC liquidations — Thomson McKinnon Securities Inc. and Drexel Burnham Lambert Incorporated. We reviewed SIPC bylaws and SEC regulations to determine the requirements for SEC-registered firms to disclose their SIPC status. We also reviewed SIPC and SEC customer correspondence and litigation relating to customer protection issues to assess customer concerns in this area. We did our work between May 1991 and May 1992 in accordance with generally accepted government auditing standards.
"See The Securities Investor Protection Co oration: S ecial Stud of the SIPC Fund and Fundin u irements, e oitte and ouche, t o r , I . see ort an omme a t ions of the n a o rce on Assessments, presented to the SIPC Boar o s i r e ctors p t e m r , I l .
Agency Comments
SIPC and SEC provided written comments on a draft of this report. Relevant portions of their comments are presented and evaluated at the end of chapters 3 through 5. The comments are reprinted in their entirety as appendixes II and III. They also provided technical comments on the draft, which were incorporated as appropriate.
Chapter 2
The Regulatory Framework Is Critical to Minimizing SIPC's Exposure
As we pointed out in chapter 1, the regulatory framework — including the net capital and customer protection rules — serves as the primary means of customer protection while SIPC serves in a back-up role. Since Congress passed SIPA in 1970, the regulatory framework has successfully limited the number of firms that have become SIPC liquidations. The firms that SIPC has liquidated failed primarily because their owners committed fraud and misappropriated customer cash and securities. Given the relative success of the regulatory framework, which relies largely on SEC and the SROs to prevent SIPC liquidations, we do not believe that SIPC needs the authority to examine its members. However, SEC and the SROs must continue to enforce existing rules to ensure that SIPC can fulfill its back-up role and maintain public confidence in the securities industry. The regulators' ability to protect SIPC in the future could prove challenging due to the continued consolidation of the industry and increased risk-taking by major firms.
Few SIPC Liquidations Needed to Protect Customers
The U.S. securities industry consists of thousands of broker-dealers, many of which are small and not allowed to hold customer property. The regulatory framework and the restrictions on the holding of customer property ensure that hundreds of broker-dealers can fail or cease doing business each year without becoming SIPC liquidations. As table 2.1 indicates, 20,344 SIPC members went out of business or failed between 1971 and 1991, but only 228 (about 1 percent) became SIPC liquidations. Moreover, the number of SIPC liquidations begun annually has declined since the early 1970s. Between 1971 and 1973, SIPC initiated an average of 31 liquidations a year. Since 1976, SIPC has initiated an average of seven liquidations a year.
Table 2.1: SIPC Membership and Liquidations, 1971-1991

Year     SIPC members    Non-SIPC terminations'    SIPC liquidations
1971            3,994                       0b                    24
1972            3,756                     669                     40
1973            3,974                     622                     30
1974            4,238                     551                     15
1975            4,372                     631
1976            5,168                     219
1977            5,412                     637
1978            5,670                     663
1979            5,985                     637
1980            6,469                     635
1981            7,176                     741
1982            8,082                     706
1983            9,260                     666
1984           10,338                   1,176
1985           11,004                   1,059
1986           11,305                   1,354
1987           12,076                   1,033
1988           12,022                   1,430
1989           11,284                   1,791
1990            9,958                   2,279
1991            8,153                   2,845
Total                                  20,344                    228

'Number of terminations listed in SIPC's annual reports minus the number of SIPC liquidations.
bSIPC did not report on membership terminations in 1971.
Source: SIPC annual reports, 1971-1991.
Many of the 20,344 firms that went out of business without SIPC involvement were introducing firms or firms that trade solely for their own accounts on national securities exchanges and do not hold customer property. In the absence of fraud, introducing firms can fail, disband, or cease doing business without becoming SIPC liquidations. However, SIPC protection is extended to these firms because fraudulent activities — such as theft of money or securities — could result in customer losses. The partners of a small firm who trade solely for their own account may decide to sell the firm's proprietary securities and cease doing business. A SIPC official also said that SIPC's membership may fluctuate because individuals
tend to form broker-dealer firms during market upturns, as in the early to mid-1980s. Many firms may later cease doing business or fail when market downturns occur, as happened after 1987.

According to a SIPC official, the SIPC liquidation caseload peaked in the early 1970s because many firms still suffered operational and financial problems associated with the "back-office crisis" discussed earlier. The number of SIPC liquidations has declined since 1976 as a result of the introduction of the customer protection rule, the strengthening of the net capital rule, and improved supervision by the regulators. Moreover, before financially troubled firms actually fail, the regulators frequently arrange for the transfer of their customer accounts to acquiring firms. For example, between 1980 and 1990, NYSE and SEC arranged account transfers for 21 of the 25 NYSE members that went out of business under financial duress and protected about 2.7 million customer accounts. SIPC liquidated the other four firms, which, combined, had about 112,500 customer accounts.' In its 20-year history, SIPC has paid about 329,000 customer claims.
How the Regulators Have Protected Customers While Minimizing SIPC's Exposure to Losses
The customers of broker-dealers that fail or go out of business without becoming SIPC liquidations generally can continue trading in their accounts without any delays or disruptions if their accounts are transferred to other firms or if their property is returned. The regulatory foundations of this customer protection are the net capital and customer protection rules. The regulators routinely monitor broker-dealer compliance with the rules and place financially troubled firms under intensive supervisory scrutiny. The regulators may also arrange for the transfer of the accounts of troubled firms to acquiring firms via computer.
Net Capital Rule
The net capital rule requires each broker-dealer to maintain a minimum level of liquid capital sufficient to satisfy its liabilities — the claims of customers, creditors, and counterparties. Net capital is similar to equity capital in that it is based on an analysis of each broker-dealer's balance sheet assets and liabilities. Unlike equity capital, however, only liquid assets — such as cash, proprietary securities that are readily marketable, and receivables collateralized by readily marketable securities — can be counted in the net capital calculation. Assets that are not considered liquid include furniture, the value of exchange seats, and unsecured receivables.
'The four firms were John Muir & Co., Bell and Beckwith, Hanover Square Securities Group, and H.B. Shaine & Co., Inc.
The proprietary securities that qualify for inclusion in the net capital calculation must be carried at their current market value. Even after securities positions are marked to reflect market value, the net capital rule offers further protection by requiring broker-dealers to deduct a certain percentage of the market value of all proprietary security positions from the capital of the firm. These deductions — or "haircuts" — are intended to reflect the actual liquidity of the broker-dealers' proprietary securities by providing a cushion for possible future losses in liquidating the positions. For example, debt obligations of the U.S. government receive a haircut depending on their time to maturity: from a 0-percent haircut for obligations with less than 3 months to maturity to a 6-percent haircut for obligations with 25 years or more to maturity. Haircuts for more risky assets can be much higher.

SEC also allows broker-dealers to include subordinated liabilities that meet the rule's requirements in the net capital calculation. In order to count toward net capital, these subordinated liabilities must be subordinated to the claims of all present and future creditors, including customers, and must be approved for inclusion as net capital by the broker-dealer's SRO. The subordinated liabilities may not be repaid if the repayment would reduce the broker-dealer's net capital level below a level specified by the rule, and the liabilities must have an initial term of 1 year or more.

The minimum amount of net capital required varies from broker-dealer to broker-dealer, depending upon the activities in which the firm engages. Because they hold customer property, carrying firms have higher minimum capital requirements than introducing firms. In addition, the regulators have established "early-warning" levels of net capital that exceed the minimum requirement.
As discussed below, the SROs notify SEC and place restrictions on firms whose capital falls to the early warning levels. They also begin consultations with the ailing broker-dealer to formulate a recovery plan. Should the plan fail, the regulators may try to arrange a transfer of the customer accounts to one or more healthy broker-dealers. As soon as the net capital falls below the minimum level, the firm is closed. Closing a broker-dealer before insolvency either makes the firm a viable merger candidate (because there is residual value left in the firm) or allows the broker-dealer's customers to be fully compensated when the firm is liquidated.
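The haircut mechanics described above can be sketched as follows. The 0-percent and 6-percent figures for short- and long-maturity government obligations come from the text; the intermediate 3-percent bracket is a placeholder assumption, not the actual rule 15c3-1 schedule.

```python
# Sketch of maturity-based haircuts on U.S. government obligations.
# Brackets: (upper bound in years, haircut in basis points).
HAIRCUT_SCHEDULE = [
    (0.25, 0),             # under 3 months: no haircut (from the text)
    (25, 300),             # intermediate bracket: placeholder assumption
    (float("inf"), 600),   # 25 years or more: 6 percent (from the text)
]

def haircut_bp(years_to_maturity):
    """Haircut, in basis points, for a given time to maturity."""
    for upper, bp in HAIRCUT_SCHEDULE:
        if years_to_maturity < upper:
            return bp
    return HAIRCUT_SCHEDULE[-1][1]

def counted_value(market_value, years_to_maturity):
    """Dollars of a position that count toward net capital."""
    return market_value * (10_000 - haircut_bp(years_to_maturity)) // 10_000

# Three $1 million Treasury positions of increasing maturity:
positions = [(1_000_000, 0.1), (1_000_000, 10), (1_000_000, 30)]
print(sum(counted_value(mv, yrs) for mv, yrs in positions))  # 2910000
```

The cushion is visible directly: the 30-year position contributes only $940,000 of its $1 million market value to net capital.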
Customer Protection Rule
The customer protection rule (rule 15c3-3) applies to carrying firms because they hold customer securities and cash. The rule requires the firms to have possession or control of customers' securities. As a result, the rule minimizes the need for SIPC liquidations because financially troubled firms can return customer property or send it to acquiring firms under the supervision of the regulators.

The customer protection rule has two provisions. The first provision requires broker-dealers to maintain possession or control' of customers' fully paid and excess-margin securities.' This requirement prevents broker-dealers from using customer property to finance proprietary activities because fully paid and excess-margin securities must be in possession or control locations. The rule also forces the broker-dealer to maintain a system capable of tracking fully paid and excess-margin securities daily.
The second provision of the customer protection rule involves customer cash kept at broker-dealers for the purchase of securities. When customer cash — the amount the firm owes customers — exceeds the amount customers owe the firm, the broker-dealer must keep the difference in a special reserve bank account. The amount of the difference is calculated weekly using the reserve account formula specified in the customer protection rule.' The rule assumes that all margin loans will be collected because they are collateralized by the securities in customer margin accounts. A sharp and sudden decline in the market value of this collateral would render the loans unsecured; hence, these loans are required to be overcollateralized.
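The weekly reserve computation described above can be sketched as a simple comparison of credits and debits. The real rule 15c3-3 formula has many line items; this is a deliberately simplified model with hypothetical figures.

```python
# Simplified sketch of the special reserve bank account computation:
# when credits (what the firm owes customers) exceed debits (what
# customers owe the firm), the excess must be on deposit.

def required_reserve_deposit(customer_credits, customer_debits):
    """Minimum balance for the special reserve bank account."""
    return max(0, customer_credits - customer_debits)

# Firm owes customers $10 million; customers owe the firm $7 million
# in collateralized margin loans -> $3 million must be on deposit.
print(required_reserve_deposit(10_000_000, 7_000_000))  # 3000000
# If debits exceed credits, no deposit is required.
print(required_reserve_deposit(5_000_000, 7_000_000))   # 0
```

The one-sided `max(0, ...)` mirrors the rule's asymmetry: excess customer cash must be locked up, but excess customer debits free no customer property for proprietary use.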
'The customer protection rule specifies the locations in which a security will be considered in possession or control of the broker-dealer. These include securities that are held at a clearing corporation or depository, free of any lien; carried in a Special Omnibus Account under Federal Reserve Board Regulation T with instructions for segregation; a bona fide item of transfer of up to 40 days; in the custody of foreign banks or depositories approved by SEC; in a custodian bank; in transit between offices of the broker-dealer; or held by certain subsidiaries of the broker-dealer.
'Excess-margin securities in a customer account are margin securities with a market value in excess of 140 percent of the account debit balance (the amount the customer owes the firm). For example, assume that a firm has a customer account with 100,000 shares of stock and that each share has a $10 market value, for a total account value of $1,000,000. The customer pays for $900,000 worth of stock and purchases the remaining $100,000 worth on margin from the broker-dealer. Applying the 140 percent to the $100,000 owed by the customer results in $140,000 worth of margin securities that the broker-dealer can use as collateral on the original $100,000 loan. To calculate the excess-margin securities in the account, subtract $140,000 from the market value of $1,000,000. The broker-dealer must have $860,000 worth of excess-margin securities in its possession or control.
'See appendix I for a more detailed explanation of the reserve formula and the customer protection rule.
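The excess-margin footnote above can be verified with a short calculation; the 140-percent factor comes from the text, and the function below is illustrative.

```python
# Check of the excess-margin worked example: margin securities valued
# beyond 140 percent of the customer's debit balance must remain in
# the firm's possession or control.

def excess_margin(market_value, debit_balance):
    collateral = debit_balance * 140 // 100   # 140% of what the customer owes
    return max(0, market_value - collateral)

# $1,000,000 account with $100,000 bought on margin:
print(excess_margin(1_000_000, 100_000))   # 860000
```

This reproduces the footnote's $860,000 figure: $1,000,000 minus the $140,000 of securities the firm may hold as loan collateral.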
Broker-dealers are subject to initial margin requirements set by Federal Reserve Regulation T and SRO regulations that must be met before a customer may effect new securities transactions and commitments. In addition, maintenance margin requirements are set by the SROs and broker-dealers. The requirements specify how much equity each customer must have in an account when securities are purchased and how much equity must be maintained in that account. For example, the NYSE requirement for securities held long (owned by a customer but held by a brokerage firm) in a margin account is currently set at 26 percent of the current market value of the securities in the account.

With these customer protection rules in place and properly enforced, customers are assured that their cash, up to the $100,000 SIPA limit, is readily available and can be quickly returned. These rules also facilitate the unwinding of a failed firm through a self-liquidation, with oversight by the regulators, without the need for SIPC's involvement.

While the customer protection rule significantly limits SIPC's exposure, it does not completely eliminate it. The rule includes provisions that are intended to minimize the compliance burden yet could potentially result in SIPC losses. For example, broker-dealers are required to make the cash reserve deposit calculation only once a week, on Friday, and to make the actual bank deposit the following Tuesday. Therefore, if a firm received large customer cash deposits on a Wednesday and became a SIPC liquidation on Thursday, it might not have sufficient cash in the reserve bank account to pay customer claims. SIPC might have to reimburse the customers for the cash deposits if the deposits could not be recovered from the firm's estate. Also, a broker-dealer is considered to be in compliance with the rule and in control of customer securities when the securities are in transfer between branch offices. A liquidation expert told us that this provision has been used by small, financially troubled broker-dealers to fraudulently disguise the fact that they do not have the required control over their customers' property.
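The maintenance margin arithmetic described above can be sketched in a few lines. This is an illustrative simplification, not the NYSE rule text; the 26 percent rate is the figure cited above, and the account balances are hypothetical.

```python
# Illustrative sketch of a maintenance margin check on a long margin
# account. The 26% maintenance rate is the figure cited in the text;
# the account balances below are hypothetical.

def maintenance_margin_check(market_value, debit_balance, maintenance_rate=0.26):
    """Return (equity, required_equity, deficiency) for a long margin account.

    equity          = market value of the securities minus the margin loan
    required_equity = maintenance_rate * market value
    deficiency      = additional equity the customer must deposit (0 if none)
    """
    equity = market_value - debit_balance
    required_equity = maintenance_rate * market_value
    deficiency = max(0.0, required_equity - equity)
    return equity, required_equity, deficiency

# A customer holds $100,000 of stock financed in part by an $80,000 margin loan.
equity, required, shortfall = maintenance_margin_check(100_000, 80_000)
print(equity, required, shortfall)  # 20000 26000.0 6000.0
```

A firm whose customers carry deficiencies like this must collect more margin. As the text notes, properly enforced margin rules are part of what lets a failed firm's accounts be unwound without SIPC advances.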
Regulators Monitor Compliance With Rules on a Routine Basis

SEC and SROs have established inspection schedules and procedures to routinely monitor broker-dealer compliance with the net capital and customer protection rules:

• The two largest SROs, NYSE and NASD, inspect their carrying members annually. During each exam, the examiners calculate the firm's net capital and assess the quality and accuracy of the automated systems it uses to maintain possession and control of customer fully paid and excess-margin securities.

Page 27
GAO/GGD-92-109 Securities Investor Protection

• SEC annually examines about 6 percent of the broker-dealers that the SROs have previously examined to ensure broker-dealer compliance with the securities laws and to evaluate and provide feedback to the SROs on the quality of their examination programs. Once every 2 years, SEC also examines the 20 largest broker-dealers that carry customer accounts.
• SEC requires broker-dealers to notify the regulators when their capital falls to certain levels above the minimum requirement and again if it falls below the minimum requirement.
• SEC requires carrying firms to submit financial and operational data monthly and requires introducing firms to report quarterly. The financial data include a computation of each firm's net capital and the amount in its reserve bank deposit account.
• SEC requires each broker-dealer to have its financial statements audited each year and to file the audited statements with the regulators.

The regulators' policy is to place firms with financial or operational problems under more intensive supervisory scrutiny than that outlined above. Evidence of such financial or operational problems includes net capital levels that (1) decline to early warning levels that exceed the parameters or (2) lead to consecutive monthly losses. When such problems are detected, the regulators may require the firm to provide daily financial statements and restrict its activities, such as its ability to increase its asset size. The regulators may also begin to solicit other firms to acquire the troubled firm's customer accounts. If the troubled firm continues to deteriorate, the regulators may arrange for the transfer of its customer accounts to an acquiring firm or firms.

The regulators' ongoing monitoring and supervisory efforts are critical to minimizing SIPC's potential exposure. Regulators told us that they pay especially close attention to financially troubled broker-dealers.
In an attempt to stay in business, financially troubled broker-dealers may be forced to alter their behavior in such a way as to increase SIPC's exposure if the firm fails and becomes a SIPC liquidation. For example, NYSE officials said that a financially troubled broker-dealer may be tempted to violate the customer protection rule by using fully paid customer securities as collateral in order to increase its short-term borrowings. This situation may arise if creditors have cut off their unsecured loans needed for liquidity purposes. If this broker-dealer does not recover and becomes a SIPC liquidation, SIPC may need to make advances to recover the customer property serving as collateral for these additional loans. To keep track of this sort of activity, SEC and the SROs frequently require troubled broker-dealers to report their daily bank and stock loan activity.

See Securities Regulation: Customer Protection Rule Oversight Procedures Appear Adequate (GAO/GGD-…, Nov. 1, 1991).
Most SIPC-Liquidated Firms Failed Due to Fraud

Although the regulatory framework has successfully protected millions of customers without the need for SIPC liquidations, SIPC has had to liquidate 228 firms. SIPC officials estimate that fraud, which can prove difficult for the regulators to detect, was involved in more than half of the 228 liquidations and accounted for about 81 percent of SIPC's $236 million in liquidation expenditures as of December 31, 1991. The fraudulent schemes have involved not only officials of carrying firms who illegally violated the customer protection and net capital rules but also officials at introducing firms who stole customer property that should have been sent to the carrying firms for the customers. Between 1986 and 1991, introducing firm failures accounted for 26 of SIPC's 39 liquidations. Other factors that have caused SIPC liquidations include poor management and market conditions.

Ordinarily, the regulators have time to transfer out a troubled firm's accounts because broker-dealer financial positions tend to deteriorate over a period of months or years. However, the regulators may not discover fraud until the principals of the firm have already depleted its capital or misappropriated customer assets. For example, in the most expensive liquidation, Bell and Beckwith (see table 2.2), a senior firm official managed to "borrow" $32 million from the firm's margin accounts over a 5-year period without being detected by the regulators. As collateral for the loan, the official pledged stock in a Japanese corporation, which he valued at nearly $280 million; its real worth was approximately $5,000. When the fraud was discovered, SIPC initiated liquidation proceedings to protect customers.
The official spent time in federal prison, and the SIPC trustee, in conjunction with the Bevill, Bresler & Schulman, Inc., trustee, agreed to a $10 million settlement with the firm's auditors.
Table 2.2: Most Expensive SIPC Liquidations as of December 31, 1991

Bell & Beckwith
  SIPC expenses: $31,722,352
  Cause of failure: Firm official stole about $32 million from the firm by grossly inflating the value of collateral for the margin loan.

Bevill, Bresler & Schulman, Inc. (BBS)
  SIPC expenses: $26,395,628
  Cause of failure: BBS officials funded the losses of its affiliates. The losses continued to mount and resulted in failures of BBS and several affiliates.

Stix & Co., Inc.
  SIPC expenses: $16,990,497
  Cause of failure: Firm officials wrongfully diverted about $14 million from the firm by creating fictitious margin accounts. Officials used the funds to purchase real estate.

Joseph Sebag, Incorporated
  SIPC expenses: $11,351,787
  Cause of failure: Firm officials allegedly purchased shares without customers' permission and caused share prices to artificially increase. When share prices collapsed, Sebag failed because it had a substantial ownership position in the shares.

Government Securities Corp.
  SIPC expenses: $8,109,953
  Cause of failure: Firm officials allegedly set up fraudulent "managed accounts" for certain customers. Rather than executing trades, firm officials used customer funds for their own benefit.

Total SIPC expenses: $94,570,217

Source: SIPC.
Fraudulent sales practices may also increase financial and regulatory pressures on a firm and force it into a SIPC liquidation. For example, Blinder Robinson, the largest liquidation as measured by customer claims paid (61,000), became a SIPC liquidation in July 1990 when its owner tried to put the penny stock firm into a federal bankruptcy proceeding without the knowledge of SEC and SIPC. At the time, Blinder Robinson was under serious regulatory and financial pressure because SEC had been investigating the firm's sales practices for almost a decade and a Denver businessman had won a substantial legal judgment against the firm. According to the SIPC trustee, Blinder Robinson's owner filed for bankruptcy so the firm could avoid its legal obligations. However, SIPC filed liquidation proceedings against the firm because its customers were at risk, and the courts have agreed with SIPC.

To date, fraud at a major broker-dealer involving the possession or control requirement or the cash reserve calculation has not adversely affected SIPC. While fraud and questionable management practices have contributed to the demise of major broker-dealers, such as E.F. Hutton and Drexel, the regulators have had time to arrange the transfer of customer accounts without the need for SIPC liquidations.

Penny stock firms specialize in selling the low-priced securities of highly speculative companies.

NASD had first informed SIPC about Blinder Robinson's deteriorating position in August 1988.
Other Factors That Have Caused SIPC Liquidations

Poor management and market conditions may also cause firms to fail with minimal warning and become a SIPC liquidation. For example, H.B. Shaine and Company, Inc., failed during the October 1987 market crash because management did not properly oversee the firm's options department. Certain customers engaged in risky options trading, which proved profitable while the market increased during the mid-1980s. However, when the market plunged on October 19, 1987, the Options Clearing Corporation (OCC) issued very high margin calls to the firm. Shaine officials could not collect sufficient margin payments from their options customers, and the firm had insufficient capital to pay the margin calls, so it was closed and turned over to SIPC. The trustee anticipates that the Shaine liquidation ultimately will impose minimal costs on SIPC because the firm had most customer property on hand and the administrative expenses will be recovered from the firm's estate. Of the approximately 30 broker-dealer firms that failed as a result of the October 1987 stock market break, only Shaine required a SIPC liquidation.

A SIPC official also said that SIPC has initiated liquidation proceedings to protect the customers of firms no longer in business. When a SIPC member broker-dealer chooses to cease operations, it should file a form with SEC, and its withdrawal from registration becomes effective 60 days after the filing. SEC checks the form to see whether the firm owes any property to customers. If any amounts are owed, SEC asks the SROs to ensure that all customer property is returned. SEC should then notify SIPC of the firm's withdrawal date, which starts a 180-day countdown. During the next 180 days, SIPC must protect any customers who come forward with valid claims for cash or securities. Under SIPA, SIPC cannot initiate liquidation proceedings after the 180-day period has passed. SIPC correspondence files indicated that several customers have lost cash and securities because they filed claims after the 180-day deadline.

In E.F. Hutton's case, the firm merged with Shearson Lehman Brothers in 1988. In Drexel's case, the failure of the holding company due to fraud, the resulting settlement, and the concentration in high-yield securities impaired the broker-dealer's ability to trade and ultimately forced the broker-dealer into bankruptcy.
SIPA Liquidation Procedures Involve Delays

Customers benefit if the regulators can arrange to protect customer accounts without the need for SIPC liquidations because the customers generally do not lose access to their investments. However, if a SIPC liquidation becomes necessary, SIPC and the trustees must comply with SIPA procedures (see table 2.3), such as freezing all customer accounts. The period of time during which customers are denied access to their accounts depends upon whether the trustee pays claims account by account via the mail or arranges a bulk transfer of customer accounts to acquiring firms. According to SIPC officials, bulk transfers often permit customers to trade in their accounts within days or weeks of the liquidation's commencement, although the process can take longer. Payment of claims account by account can take months. For example, when FDR failed in 1988, the trustee used the bulk transfer authority to satisfy about 25,000 (80 percent) of 30,000 claims within 3 months of the liquidation's commencement. By contrast, the trustee of Blinder Robinson had to pay about 61,000 customer claims on an account-by-account basis. The trustee had paid out about half of the claims 6 months after the start of the liquidation, and the entire process took about a year.

When the liquidation process denies customers access to their accounts for extended periods, they can be exposed to declines in the market value of their securities. The market risks facing customers were exemplified by the failure of John Muir & Co. in August 1981. An NYSE member, Muir had approximately 16,000 customer accounts. While the SIPC trustee arranged the transfer of about 8,000 accounts within 10 days of the liquidation's commencement and another 4,700 accounts within 3 months, it took 7 months or more to satisfy the remaining accounts, primarily because of disputes over how much the customers owed Muir. The delay adversely affected many of the Muir customers, who were denied access to their accounts.

For example, one customer who owned $500,000 worth of stock at the start of the Muir liquidation received shares worth about $350,000 from the trustee 14 months later.
Table 2.3: SIPA Liquidation Proceedings

Step 1: Regulators notify SIPC about troubled firm.
  SEC and the SROs have the responsibility to examine SIPC members. Under SIPA 5(a), regulators must notify SIPC when a firm is in or approaching financial difficulty, such as substantially declining net capital levels.

Step 2: SIPC initiates liquidation proceedings.
  SIPC may initiate liquidation proceedings in federal district court if customers are at risk.

Step 3: Court appoints trustee to liquidate firm.
  If the court agrees with SIPC, it may appoint an independent trustee and counsel to liquidate the firm. The trustee may hire legal staff, and then the case is removed to federal bankruptcy court.

Step 4: Accounts are frozen and trustee completes "housekeeping" tasks.
  The trustee secures firm offices and customer and creditor accounts, hires liquidation staff, locates customer property, and begins the notification process.

Step 5: Customers file claims with trustee.
  Customers have 6 months to file a claim. The trustee's staff and SIPC officials review claims to ensure accuracy. Customers can appeal the trustee's decision on claims to the bankruptcy judge.

Step 6: Trustee distributes customer property up to SIPA limits.
  The trustee distributes customers' name securities and approves claims up to SIPA limits of $500,000 ($100,000 cash) per customer. SIPC makes advances to cover missing cash and securities.

Source: SIPC.
Payment of claims account by account can be time consuming because it is a labor-intensive process, particularly for large firms. For example, a SIPC official said the bulk transfer of a major firm's accounts may involve several employees, and an official involved in the Blinder Robinson liquidation said that paying claims account by account required 26 employees during the initial stages of the liquidation. After the staff and SIPC officials had reviewed and approved each customer claim, the staff had to send instructions to the Depository Trust Corporation (DTC) in New York, where Blinder kept most securities, to deliver the appropriate securities via the mail to the liquidation site in Englewood, CO. The staff opened the package from DTC to ensure that it contained the appropriate number of securities and that the securities were registered to their proper owners. Only then did the staff send the securities to the customers via registered mail.

Bulk transfers can expedite the payment process because customer accounts are transferred via computer to acquiring firms before the trustee reviews customer claim forms. However, trustees and SIPC arranged bulk transfers for only 18 of the 99 liquidations commenced between 1978 and
1991. (See table 2.4.) A SIPC official said the high incidence of fraud (more than 50 percent) among SIPC liquidations accounts for the low number of bulk transfers. In such cases, the trustee and SIPC staff cannot rely on the books and records of the firm, so they review each customer claim to ensure accuracy. Another reason for the low number of bulk transfers is that some failed broker-dealers specialized in securities (such as penny stocks) that qualified acquiring firms found unattractive. The Blinder Robinson trustee said he did not attempt a bulk transfer because (1) firms experienced in handling numerous customer accounts expressed no interest in Blinder's customer accounts, which primarily contained penny stocks, and (2) firms that did express interest lacked adequate financial and operational controls to accept the accounts without endangering their own survival.
Table 2.4: SIPC Bulk Transfers, 1978-1991

Firm                                     Filing date   Number of claims paid
Mr. Discount Stockbrokers, Inc.          6/30/80       541
Gallagher, Boylan, & Cook, Inc.          3/17/81       1,363
John Muir & Co.                          8/16/81       16,000
Stix & Co., Inc.                         11/5/81       4,205
Bell & Beckwith                          2/5/83        6,523
Gibralco, Inc.                           6/21/83       713
California Municipal Investors           1/31/84       1,500
Southeast Securities of Florida, Inc.    1/31/84       11,658
M.V. Securities, Inc.                    3/14/84       1,338
June Jones Co.                           6/4/84        1,079
First Interwest Securities Corp.         6/7/84        6,140
Coastal Securities                       5/3/85        331
Bevill, Bresler & Schulman, Inc.         4/8/85        3,601
Donald Sheldon & Co.                     7/30/85       2,362
Cusack, Light & Co., Inc.                6/25/86       256
Norbay Securities Inc.                   10/14/86      9,103
H.B. Shaine & Co., Inc.                  10/20/87      4,372
Fitzgerald, DeArman, & Roberts, Inc.     6/28/88       30,376
Total                                                  101,461

Source: SIPC.
A Major SIPC Liquidation Could Damage Public Confidence

Because SIPC liquidations can involve delays, the liquidation of a major broker-dealer could damage public confidence in the securities industry. Under such a worst-case scenario, hundreds of thousands of customers could be temporarily denied access to their property and exposed to market risks. Although the regulatory framework discussed earlier has been successful in preventing such an occurrence, the regulators and SIPC cannot afford to become complacent. This is all the more true because large firms are continuing to engage in riskier activities than in the past. To prevent large broker-dealers from becoming SIPC liquidations in the future, the regulators must continue to vigorously enforce the net capital and customer protection rules and other applicable securities laws and regulations.
Potential Impacts on Market Stability
The SIPC liquidation of a major broker-dealer may affect only the customers of the failed firm. However, it is possible that the impact of a large SIPC liquidation could adversely affect the stability of securities firms and markets more generally. This spillover effect could occur if customers of other broker-dealers became worried about what would happen if their broker-dealer got into financial difficulty. In such an event, large numbers of customers could be motivated to move their accounts from one broker-dealer to another to avoid the possibility of having their funds tied up for some indefinite period of time. Or customers might get out of securities investments altogether, for example, by selling investments and depositing money in a bank. Both types of adjustments could be destabilizing to the normal operation of the securities markets, but the latter situation of actually selling securities could be highly disruptive because it could result in rapid declines in the prices of many types of securities.

SEC and SIPC officials told us that the destabilizing effects associated with a large broker-dealer liquidation could be contained. The regulators and SIPC believe they could arrange to transfer the customer accounts of a large failed firm to acquiring firms within weeks. Unlike penny stock brokers such as Blinder Robinson, the officials said, the customers of large broker-dealers tend to hold highly liquid assets such as government bonds and blue chip stocks in their accounts. Other large broker-dealers find such customer accounts attractive and could generally be expected to bid on and acquire the accounts within a relatively short time.
Incentives Foster Efforts to Avoid Major SIPC Liquidations
Given the potentially adverse consequences of a major broker-dealer liquidation, incentives exist to avoid such an event. Regulators, creditors, and customers of failed securities firms all have incentives to avoid the unpleasant aspects of SIPC liquidations: their length, their cost, and the provisions in SIPA regarding creditor and counterparty relationships with the failed broker-dealer.

• Regulators (SEC, SROs, and the Federal Reserve) want to avoid very large SIPC liquidations because such liquidations can cause significant delays for counterparties of the failed firms and can disrupt the smooth functioning of the financial markets.
• Creditors want to prevent a SIPC liquidation because their share of the failed broker-dealer's assets would decrease in the event of a SIPC liquidation, where SIPC has a priority claim on the assets of the firm to pay the administrative costs of the liquidation.
• Customers prefer that their firm not go into a SIPC liquidation because they could lose access to their property for an extended period of time and, consequently, be exposed to market risk.
• Banks having loans and other arrangements with a failed broker-dealer want to avoid a SIPC liquidation because they lose the ability to call their loans or unwind transactions for a period of time determined by the court. This exposes them to market risk and reduces their flexibility.
• Other securities firms with noncustomer claims against troubled firms would like to avoid SIPC liquidations because they, like creditors, could only settle claims from the general estate, which would be diminished by administrative expenses, and the completion of any other nonopen financial arrangements, like those involving banks, would be delayed.

The strength of these incentives, in tandem with the regulatory framework, can be very important. As we pointed out earlier, most firms that have slipped through the regulatory framework and become SIPC liquidations were small and failed as a result of fraud. As table 2.5 indicates, the five largest SIPC liquidations in terms of customers are dwarfed by the five largest broker-dealers. At year-end 1990, the Securities Industry Association, an industry trade group, reported that it had 60 members with 100,000 or more customer accounts.
Table 2.5: Major Securities Firms and Largest SIPC Liquidations

Major securities firms by number of customer accounts:
Merrill Lynch & Co.                       7,900,000
Shearson Lehman Brothers, Inc.            4,000,000
Prudential Securities, Inc.               2,700,000
Dean Witter Reynolds, Inc.                2,500,000
Paine Webber Group, Inc.                  1,700,000

Largest SIPC liquidations by number of customer claims paid:
Blinder Robinson, Inc.                    61,334
Weis Securities, Inc.                     32,000
Fitzgerald, DeArman, and Roberts, Inc.    30,376
John Muir & Co.                           16,000
OTC Net, Inc.                             14,107

Sources: 1991-1992 Securities Industry Yearbook and SIPC.
Regulators and SIPC Must Avoid Complacency
While the incentives and the regulatory framework have been successful in preventing major SIPC liquidations to date, SEC officials and SIPC cannot afford to become complacent. During our review, SEC officials told us that two large firms, Thomson McKinnon and Drexel, could have become SIPC liquidations. In fact, in July 1989 SIPC's general counsel flew to New York to prepare to initiate liquidation proceedings against Thomson McKinnon, which had about 500,000 customer accounts. Fortunately, SIPC did not have to liquidate Thomson McKinnon because NYSE and SEC officials arranged the transfer of the firm's customer accounts to Prudential-Bache Securities Inc. Moreover, in 1990 four major broker-dealers received capital contributions from their parent firms: the First Boston Corporation; Shearson Lehman Hutton Inc.; Prudential-Bache Securities Inc.; and Kidder, Peabody & Co. Incorporated.
Looking forward, there is no cause for complacency because changes in the securities industry are making the regulators' job of monitoring broker-dealer net capital and protecting customers more difficult. Continuing a trend that began about 10 years ago, broker-dealers are relying on riskier activities for more of their revenue than in previous decades. Moreover, many of these activities are new and technically sophisticated, and the risks involved may not be well understood. The structures of broker-dealers and broker-dealer holding companies are also changing and becoming more complicated. Increasingly, broker-dealer holding companies are moving very risky activities out of the registered and regulated broker-dealers and into unregulated affiliates. Although these affiliates are separate, their activities, and financial difficulties, could affect the financial health of the broker-dealer (a SIPC member). These changes may reduce the amount of time the regulators have to protect customers of a financially troubled broker-dealer, making it more difficult to protect customers without SIPC involvement.

Thomson McKinnon had been experiencing financial problems since the 1987 stock market crash. In 1989, the firm entered into merger negotiations with Prudential-Bache. On July 14, 1989, the merger negotiations broke down temporarily in a dispute over Thomson's financial exposure. The negotiations later resumed and Prudential-Bache acquired Thomson's customer accounts and retail branch network.
While the riskiness of broker-dealers and their affiliates has continued to increase, SEC's ability to oversee the securities industry and thereby protect SIPC was enhanced by the passage of the Market Reform Act of 1990 (P.L. 101-432, 104 Stat. 963). This act, passed in the wake of the Drexel bankruptcy, authorized SEC to collect information from registered broker-dealers and government securities dealers about the activities and financial condition of their holding companies and unregulated affiliates. SEC has issued proposed rules under the act that would require firms to maintain and preserve records on financial activities that might affect the broker-dealer. SEC officials plan to use this information to assess the risks presented to these regulated broker-dealers by the activities and financial condition of their affiliated organizations.
No Expansion of SIPC's Role Warranted

We were asked to look into whether SIPC should have the authority to examine the books and records of its members to fulfill its customer protection role. Given the relative success of the regulatory framework to date in preventing SIPC liquidations, we do not believe there is any evidence to warrant such an expansion of SIPC's authority. Several practical problems also are associated with such proposals. SIPC, with 32 staff members, does not have the resources to ensure that its members comply with securities laws and regulations. Giving SIPC regulatory authority to monitor its 8,153 members would, therefore, require a large increase in SIPC's staff and impose additional costs on the securities industry. The benefits of such an expansion are questionable because it would (1) duplicate the work of SEC and the SROs and (2) prove counterproductive if it weakened the accountability SEC and the SROs now have for monitoring securities firms and enforcing the net capital and customer protection rules. SEC and the SROs should continue to serve as the first line of defense for customers. SIPC should also maintain its back-up role within the regulatory framework. However, although SIPC does not need expanded regulatory authority, it can better prepare for potential liquidations (see ch. 4).

See our report Securities Markets: Assessing the Need to Regulate Additional Financial Activities of U.S. Securities Firms (GAO/GGD-…, Apr. 1992).
Conclusions
The regulatory framework has successfully limited the number and size of SIPC liquidations. Most of the firms that slipped through the regulatory framework and became SIPC liquidations failed because of fraud. When a SIPC liquidation becomes necessary, customers may be denied access to their accounts for extended periods. The delays expose customers to market risk, and if a major broker-dealer becomes a SIPC liquidation, public confidence in the securities industry could be damaged. In recent years, several large broker-dealers have experienced financial difficulties that could have resulted in SIPC liquidations. As a result, the regulators and SIPC cannot afford to become complacent about the possibility of a major firm becoming a SIPC liquidation. They must work to avoid such an outcome and be prepared to respond effectively if it should occur.
Chapter 3: SIPC's Responsible Approach for Meeting Future Financial Demands
In 1991, the SIPC board implemented a new strategy for building the SIPC customer protection fund. The board set a goal of $1 billion for fund resources (cash and investments in government securities) to be met by 1997. The board also changed its assessment strategy. The new strategy calls for consistent fund growth of 10 percent annually, with assessment rates varying as needed to achieve the target. If SIPC expenses remain in line with past experience, assessments will be lower than they have been for the last 2 years. In November 1991, SEC approved the board's proposed changes to the SIPC bylaws, and SIPC implemented the plan.

Given SIPC's back-up role in securities industry customer protection, we believe that the board's strategy represents a responsible approach to anticipating funding demands that may be placed on SIPC in the future. The plan provides resources well above what SIPC would need if its future demands are similar to those of its past. Furthermore, SIPC's resources should enhance the credibility of protection afforded to customers from the failure of a very large firm (something SIPC has never experienced) if such a firm should end up in a SIPC liquidation. However, the reasonableness of this strategy depends entirely on the continued success of the securities industry's regulatory framework in shielding SIPC from losses. Given the changing nature of the securities industry, the SIPC board and SEC will have to continue to assess the adequacy of the fund.
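The board's 10-percent growth target can be illustrated with a short compound-growth sketch. The starting balance below is an assumption for illustration only (the report does not state the 1991 fund balance at this point); with a starting balance of roughly $600 million, 10 percent annual growth crosses the $1 billion goal in 1997.

```python
# Sketch of the board's funding arithmetic: grow the fund 10 percent a
# year until it reaches the $1 billion goal. The 1991 starting balance
# is an assumed figure for illustration, not a number from the report.

def fund_trajectory(start_balance, growth_rate=0.10,
                    goal=1_000_000_000, start_year=1991):
    """Yield (year, balance) from start_year until the goal is reached."""
    year, balance = start_year, start_balance
    yield year, balance
    while balance < goal:
        year += 1
        balance *= 1 + growth_rate
        yield year, balance

for year, balance in fund_trajectory(start_balance=600_000_000):
    print(year, round(balance / 1e6, 1))  # balance in $ millions
```

Because the growth target is fixed, the implied assessment in any year is whatever amount, net of expenses and investment income, keeps the fund on this 10-percent path, which is the flexibility the board's strategy describes.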
SIPC Funding Needs Are Tied to the Risk of a Breakdown in the Regulatory System
One characteristic of the sipc Fund that makes assessing its adequacy very difficult is that fund liquidation expense is not correlated with any traditional measure of financial exposure for financial institutions, such as credit risk or the amount of insured property. Instead, its adequacy is most dependent on the industry's compliance with sEc and SRo rules, particularly the sEc customer protection and net capital rules. The probability of such compliance, or noncompliance, is not quantifiable. If the risk of broker-dealer activities was a good predictor of siPc expenses, we would expect to find either that sipc liquidations increased sharply during economic downturns in the securities industry' or that most of the broker-dealers ending up in a sipc liquidation were engaged in very risky activities. However, we found that neither case represents reality.
1. The riskier a broker-dealer's activities, the more sensitive that broker-dealer is to economic downturns, poor decisions, or even bad luck, and the more likely the broker-dealer is to fail.
Page 40
GAO/GGD-92-109 Securities Investor Protection
Chapter 3 SIPC's Responsible Approach for Meeting Future Financial Demands
SIPC endured a period of securities industry recession from 1987 through 1990 without an appreciable increase in the number of SIPC liquidations.2 Moreover, a significant percentage of broker-dealers that have been turned over to SIPC did not engage in particularly risky activities. As we explained in chapter 2, 26 of the 39 broker-dealers turned over to SIPC since 1986 (67 percent) were introducing firms engaged in very low risk lines of business. If the amount of SIPC-protected property was correlated to SIPC losses, we would expect that the largest liquidations would be the most costly. However, this has not been the case. Tables 3.1 and 3.2 show that the size of a broker-dealer (as measured by the amount of customer property or the number of customers) is not correlated with the cost to SIPC. Returning $190 million worth of property to customers of John Muir, Inc., resulted in no cost to SIPC, while returning about $106 million to Bell and Beckwith customers cost SIPC nearly $32 million. Blinder Robinson, listed in table 3.2, had more customers than all five firms listed in table 3.1, yet this liquidation was much less expensive than that of Bell and Beckwith.
Table 3.1: Most Expensive SIPC Liquidations as of December 31, 1991
Dollars in millions

Firm                               SIPC advances   Customer claims paid   Customer property returned
Bell & Beckwith                    $31.7           6,523                  $105.7
Bevill, Bresler & Schulman, Inc.   26.4            3,601                  417.5
Stix & Co., Inc.                   17.0            4,205                  51.2
Joseph Sebag, Inc.                 11.4            3,640                  33.9
Government Securities Corp.        8.1             2,403                  40.8
Total                              $94.6           20,372                 $649.1

Source: SIPC.
2. See table 2.1, where SEC turned four broker-dealers over to SIPC for liquidation in 1987, five in 1988, six in 1989, and eight in 1990.
Table 3.2: Largest SIPC Liquidations as Measured by Customer Claims Paid as of December 31, 1991
Dollars in millions

Firm                                     SIPC advances   Customer claims paid   Customer property returned
Blinder Robinson, Inc.                   $6.2            61,334                 $25.8
Weis Securities, Inc.                    3.4             32,000                 187.2
Fitzgerald, DeArman, and Roberts, Inc.   5.6             30,376                 137.0
John Muir & Co.                          0.0             16,000                 190.4
OTC Net, Inc.                            -0.4            14,107                 17.4
Total                                    $14.8           153,817                $557.8

Source: SIPC.
As has been discussed, the regulatory framework established in the last 20 years to protect customers of broker-dealers has helped to limit SIPC liquidations to a little over 1 percent of all broker-dealer closures. With the SIPC fund currently equaling more than twice SIPC's cumulative liquidation expenses from 1971 through 1991, it appears that SIPC is in a good position to continue its past performance with these small broker-dealers. Thus, based on the historical record alone, SIPC resources would seem to be adequate. There is, however, no reason to assume that the future will be like the past. Therefore, SIPC must consider its funding needs in relation to the possibility of a breakdown in securities industry compliance with the net capital and customer protection rules.
SIPC's Plan Seems Reasonable to Fulfill Back-Up Role
In 1989, the board initiated a substantial reevaluation of its funding and assessment strategies. While the board believed that the regulatory framework, backed up by the SIPC fund, was adequate to protect customers, it recognized that the securities industry had changed dramatically since SIPC's inception. The industry had consolidated, with fewer firms doing a greater share of the business. The primary source of industry revenue had also changed from commissions to more risky lines of business such as trading, mergers and acquisitions, and merchant banking. Moreover, the stock market crash in 1987 and the recent demise of several of the largest broker-dealers in the industry (including Thomson McKinnon and Drexel), as well as the savings and loan and banking crises, attracted a great deal of attention and caused a significant decrease in public confidence in financial institutions.
The board first approached the question of how to adapt to changes in the securities industry and how to confront sagging customer confidence by evaluating the adequacy of the fund. To help in this process, the board commissioned a study of the fund's size as well as alternatives to supplement the existing fund, which comprises cash and government securities and is supplemented by commercial bank lines of credit.3 The study considered SIPC's responsibilities, its resources, and the effect of changes in the risks taken by broker-dealers on SIPC's future funding requirements, given the current regulatory framework. The study also explored plausible scenarios that might place the SIPC fund under considerable strain, such as the failure of the largest broker-dealer in the industry, and whether or not SIPC resources were sufficient to withstand this sort of stress. Finally, the study considered alternative forms of customer protection that could be used to supplement the current cash fund. The board decided that building a cash fund to an amount sufficient to liquidate the largest broker-dealer4 in the industry would be an effective way to demonstrate SIPC's capacity to protect customers. According to the fund adequacy study, $1.24 billion was the largest amount likely to be needed to liquidate the largest broker-dealer. Of this amount, roughly 60 percent would represent temporary liquidity requirements and would be recovered by SIPC in the course of the liquidation. The largest cost component of such a liquidation was assumed to be temporary advances required to retrieve customer property pledged as collateral for bank loans or involved in stock loans.5 If such an event were to occur, SIPC, with a $1 billion fund together with a $1 billion commercial bank line of credit and the $1 billion Treasury line of credit, would have the resources necessary to meet this responsibility.
3. See Deloitte & Touche's Special Study of the SIPC Fund.
4. Deloitte & Touche estimated how much it would cost SIPC to liquidate the largest broker-dealer in the industry at the time of the study. The study based this estimate only partially on SIPC liquidation experience because SIPC has never liquidated a broker-dealer that had more than 61,000 customers. By comparison, the largest broker-dealer in the industry had more than 6 million customer accounts at the time of the study.
5. When customers purchase securities on margin, they must pay at least half of the purchase price, and the broker-dealer may borrow the remaining half from a bank or another broker-dealer. The lending institution will demand that the customer's broker-dealer pledge securities that exceed the value of the loan as collateral. Due to the excess collateralization requirement, SIPC can pay off the loan, recover the customer margin securities, and still recover its advance in full.
Building the SIPC fund to $1 billion would better enable it to meet such a contingency, although it would have to draw on its commercial bank line of credit to meet all the liquidity needs. SEC officials told us that the $1.24 billion estimate is highly conservative because it assumes a substantial breakdown in compliance with the customer protection rule. By incorporating the $1.24 billion conclusion of the fund adequacy study into SIPC's new fund goal, the board decided that a significant cumulative event, such as SIPC being asked to liquidate two or more major broker-dealers within a short time, was improbable because of the securities industry's regulatory and capital structure. In assessing the reasonableness of SIPC's financial plans, we concluded that there is no methodology that SIPC could follow that would provide a completely reliable estimate of the amount of money SIPC might need in the future. SIPC has had no experience with a large liquidation, and the evidence from smaller liquidations is that the cash outlay and net cost aspects depend greatly on the particular circumstances of the firm. SIPC's estimate, therefore, must be judgmental. We have not tried to develop our own independent estimate of SIPC's funding needs. As explained in the following paragraphs, however, we believe that SIPC's strategy represents a responsible approach to planning for future financial needs. We base our conclusions on several factors. In general, the plan does not assume that the future will be like the past, and it anticipates the possibility that SIPC may have to liquidate a large firm. Furthermore, in the absence of recognized measures of fund adequacy, the concept of using a worst-case scenario to look at potential funding needs makes sense, although this approach is limited by the assumptions made and by the uncertainty of future developments.
While the simultaneous liquidations of several large broker-dealers, which could wipe out the SIPC fund, cannot be ruled out in an uncertain world, in assessing the adequacy of SIPC's plans it is appropriate to bear in mind the back-up role that has been laid out for SIPC. In such an event, SEC and all the other key financial agencies of the federal government, including the Federal Reserve and the Department of the Treasury, would be involved in attempting to manage what would clearly be a crisis situation. Even a market break the size of the one in 1987, which potentially could have caused many SIPC liquidations, placed no unusual demands on SIPC. Since that time, regulators' ability to contain the damage that market breaks may
have on broker-dealers has been strengthened through "circuit-breaker" provisions and through improvements in communication and coordination among the agencies.6
Looking more directly at the $1.24 billion estimate of the amount of cash needed to liquidate the largest securities firm, the SIPC funding requirement is conservative with respect to some of its assumptions. It assumes, for example, that the failed broker-dealer's capital would be depleted to the point that its required reserves would be exhausted and that the trustee would not recover any portion of the broker-dealer's partially secured and unsecured receivables. Largely because of these assumptions, officials of SIPC and the SEC and representatives of the securities industry told us that SIPC's funding estimates were on the conservative side. However, we cannot definitively conclude that the $1.24 billion estimate is overstated. The study SIPC used assumed that the books and records of a large failed broker-dealer would accurately reflect the firm's accounts and that the broker-dealer would be in compliance with the possession or control component of the customer protection rule. In view of the prevalence of fraud in past smaller SIPC liquidations, we believe that the possibility of fraud or of a serious breakdown of internal controls cannot be ruled out, even though SEC contends that these controls are monitored more closely in larger broker-dealers. Furthermore, the largest firms in the industry are likely to continue to grow, so the amount of money that might be needed in 1997 could be higher than the $1.24 billion estimated in 1990. We commend the board for taking a forward-looking approach to planning the SIPC fund strategy. However, in view of the dynamic nature of the industry, it is essential that the board, together with SEC in its oversight role, assess the fund periodically to adjust the funding plans to changing SIPC needs.
Among other factors, the periodic assessments of the fund's adequacy must focus on the size of the largest broker-dealers, evidence of increased risk-taking within the industry, trends with respect to the amount of customer property, and any signs that regulatory enforcement has deteriorated. To a large degree, the new fund strategy builds in the opportunity for such periodic assessment; on an annual basis, the SIPC board must estimate its liquidation expenses and determine the revenues needed to build the fund
6. In 1989, SEC approved new exchange and NASD rules that require temporary trading halts of 1 or 2 hours if the Dow Jones Industrial Average falls more than 250 points or more than 400 points, respectively, in a single day.
at a 10-percent annual rate and decide whether to renew 25 percent of its commercial bank line of credit.
If SIPC's Funding Needs Increase, Assessment Burden Issues Could Arise
In the past, the SIPC assessment burden in most years has been quite low: less than 0.1 percent of total securities industry revenue and not more than 2 percent of the industry's pretax income. The burden was greatest in 1990, when SIPC was building its resources and industry profits were down. The assessments in that year represented approximately 10 percent of pretax income. The plan SIPC has adopted will enable it to reach the $1 billion goal by 1997 with low assessments if liquidation expenses remain low (as has been the case in the last several years). Total estimated 1991 assessments were $39 million, a 47-percent decrease from the 1990 assessments of $73 million. However, if SIPC liquidation expenses increase significantly and SIPC needs to recapitalize its fund, SIPC may have to address both the total assessment burden and the distribution of the assessment burden.
Assessment History
When SIPC was created, SIPA required each SIPC member firm to contribute 0.125 percent (one-eighth of 1 percent) of its gross revenues for that year to start the customer protection fund. Until recently, the board retained the gross revenue base throughout SIPC's history for the assessments needed to maintain fund viability, believing that it was the most equitable distribution of the assessment burden.7 Also in the past, the board attempted to match assessment rate increases with declines in the fund balance, so that years of high SIPC expenses were followed by periods of higher assessments. Figure 3.1 shows how SIPC revenues and expenses have varied. In 1973 and 1981, expenses were high; consequently, the board increased revenue to cover the high expenses by increasing assessments in the years that followed. Table 3.3 shows the various assessment rates for each year.
7. See SIPC assessable gross revenue definition in chapter 1.
Figure 3.1: SIPC Revenue and Expenses, 1971-1991
[Line chart, in dollars in millions, plotting SIPC revenue (solid line) and expenses (dashed line) annually from 1971 through 1991.]
Source: SIPC.
Table 3.3: History of SIPC Assessment Rates

Period                          Rate
January 1971 - December 1977    0.5% of gross revenues
January 1978 - June 1978        0.25% of gross revenues
July 1978 - December 1978       0.0%
January 1979 - December 1982    $25 flat fee
January 1983 - March 1986       0.25% of gross revenues
April 1986 - December 1988      $100 flat fee
January 1989 - December 1990    0.19% of gross revenues
January 1991 - present          0.065% of net operating revenues

Source: SIPC.
SIPC's New Assessment Base
In 1991, the board created a task force to examine the assessment strategy, and the task force concluded that steady fund growth, regardless of liquidation expense, was preferable to the previous reactive strategy. The board also directed the task force to examine the way SIPC assesses
member firms to build the fund. The task force examined a variety of assessment strategies that would appear to be more closely correlated with actual SIPC losses to make the assessments risk- or exposure-based. However, the task force did not find a material relationship between either risk or exposure and SIPC losses. For example, as noted in tables 3.1 and 3.2, no correlation could be found between the level of securities and cash balances at failed broker-dealers and actual SIPC liquidation costs. Also, the riskiness of failed broker-dealers' activities did not translate into SIPC losses as long as the failed broker-dealers complied with SEC and SRO regulations. The board adopted the task force's recommendation that revenue remains the best base for assessments but that the existing gross revenue assessment base should be changed to net operating revenue.8 While the change to a net operating revenue assessment base did not tie assessments any closer to fund risk or exposure, it did address the concerns of some SIPC members, especially some of the larger broker-dealers, about the treatment of interest expense in the previous assessment base. The SIPC task force on assessments reported that the increased emphasis on activities that involve interest expense made gross revenues an inappropriate basis for assessments. Interest expense at NYSE member firms increased from 21 percent of gross revenues in 1980 to 42 percent in 1990.9 Many large broker-dealers complained that a broker-dealer's gross revenues could increase dramatically, and with it the SIPC assessment, with a rise in interest rates. Such an interest rate increase would cause little or no economic change for the broker-dealer because interest expense would also increase. The change to a net operating revenue base eliminated this problem by basing the assessment on the difference (spread) between interest revenue and interest expense.
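The mechanics can be illustrated with a toy calculation. The firm and its revenue figures below are hypothetical, and the 0.065-percent rate is the 1991 rate shown in table 3.3. The point is that a doubling of interest rates inflates a gross revenue base, and therefore the assessment, even when the interest-rate spread, and so the firm's economics, changes little; a net operating base moves only with the spread.

```python
# Hypothetical firm: SIPC assessment under a gross revenue base versus
# the net operating revenue base (interest expense deducted).
RATE = 0.00065  # 0.065%, the 1991 rate from table 3.3

def assessment(base_revenue):
    return RATE * base_revenue

def bases(commissions, int_revenue, int_expense):
    gross = commissions + int_revenue
    net_operating = gross - int_expense
    return gross, net_operating

# Before a rate rise: $5 million interest spread.
g1, n1 = bases(50e6, 40e6, 35e6)
# After interest rates double: revenue and expense both double, $10M spread.
g2, n2 = bases(50e6, 80e6, 70e6)

print(round(assessment(g1)), round(assessment(g2)))  # 58500 84500: gross base jumps
print(round(assessment(n1)), round(assessment(n2)))  # 35750 39000: net base tracks only the spread
```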
In the event of a significant downturn in the health of the fund, SIPC may not be able to meet the 10-percent annual fund growth goal. Although SIPC assessments will increase if the fund experiences losses, it may not be able to achieve the annual growth goal because there is a cap on the total amount of assessments that may be collected in any 1 year. With this cap,
8. The board also maintained an alternative assessment base that SIPC members may choose, of gross revenues less 40 percent of margin interest earned on customers' securities accounts. The SIPC task force on assessments recommended that this option be made available in an attempt to distribute the assessment burden equitably between firms that actively engage in trading and interest-rate spread transactions and firms that rely on their retail operations for income.
9. NYSE member broker-dealers were responsible for approximately 80 percent of SIPC's total assessment revenue before the assessment change.
assessments collected in 1 year may not exceed the equivalent of 0.5 percent of gross revenues. Moreover, if the fund falls below $150 million (approximately 22 percent of its current level), the assessment base reverts back to gross revenues. The gross revenue base would shift more of the assessment burden to firms with relatively higher gross revenues, usually larger broker-dealers. Industry criticism of the proposed changes was minimal, largely because the overall effect of the change for the near term was lower assessments for most broker-dealers in the industry. As long as the securities industry regulators vigorously enforce the net capital and customer protection rules, the incentives limiting SIPC's exposure remain, and SIPC's investments in U.S. government securities continue to generate considerable interest income, SIPC expects assessments to remain low in the near term.
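The cap and the fallback described above act as two separate conditions. A minimal sketch follows; the function and variable names are mine, the revenue figures are hypothetical, and in practice the board would reset the rate each year rather than hold it fixed.

```python
# Sketch of the two constraints described above: assessments in any one
# year may not exceed 0.5% of gross revenues, and if the fund falls below
# $150 million the assessment base reverts to gross revenues.

def annual_assessment(gross_rev, net_op_rev, rate, fund_balance):
    base = gross_rev if fund_balance < 150e6 else net_op_rev
    return min(rate * base, 0.005 * gross_rev)

# Healthy fund: the net operating revenue base applies.
print(round(annual_assessment(100e6, 60e6, 0.00065, 900e6)))  # 39000
# Depleted fund: the base reverts to gross revenues.
print(round(annual_assessment(100e6, 60e6, 0.00065, 100e6)))  # 65000
```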
Assessment Burden Issues Can Arise If SIPC Assessments Increase
The effect of the change in the assessment base should be small as long as the assessment rate remains at or near its current low level. However, in the event that a significant increase in assessments is required to meet the fund growth goal, the issue of assessment burden, for both the entire industry as well as individual broker-dealers within the industry, may require reevaluation. While SIPC assessments have generally been small compared to industry income, 1990 SIPC assessments represented a significant percentage of industry income. Table 3.4 compares SIPC assessments to securities industry pretax income and total revenues for 1983 to 1991.
Table 3.4: SIPC Assessments, Industry Income, and Revenue, 1983-1991
Dollars in millions

Year   Assessment revenue   Pretax income   Total revenue
1983   $36.8                $5,206.8        $36,904.1
1984   52.3                 2,856.6         39,607.1
1985   71.0                 6,502.4         49,844.3
1986   23.1                 8,301.2         64,423.8
1987   1.0                  3,209.9         66,104.4
1988   1.0                  3,477.3         66,100.4
1989   66.0                 2,822.9         76,864.0
1990   73.0                 737.2           72,087.8
1991   39.0                 7,600.0         76,900.0

Sources: SEC and SIPC.
Figure 3.2 shows the burden of assessments on the securities industry in terms of a percentage of pretax income.
Figure 3.2: SIPC Assessments as a Percentage of Securities Industry Pretax Income, 1983-1991
[Bar chart showing, for each year from 1983 through 1991, SIPC assessments as a percentage of industry pretax income, on a scale of 0 to 10 percent.]
Sources: SIPC and SEC.
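The burden percentages behind figure 3.2 follow directly from table 3.4. A quick check (values in millions of dollars, taken from the table) confirms the roughly 10-percent figure for 1990 cited earlier:

```python
# Assessment burden as a percentage of securities industry pretax income,
# using figures from table 3.4 (dollars in millions).
def burden_pct(assessments, pretax_income):
    return 100.0 * assessments / pretax_income

print(round(burden_pct(73.0, 737.2), 1))   # 1990: 9.9, about 10 percent
print(round(burden_pct(36.8, 5206.8), 1))  # 1983: 0.7
```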
A high SIPC assessment rate, combined with the change to net operating revenue-based assessments, may have a more profound effect on the distribution of the assessment burden among SIPC members than it does on the industry as a whole. By focusing SIPC assessments on net operating revenue, SIPC shifted some of the assessment burden from broker-dealers that are actively engaged in trading and interest rate spread transactions to broker-dealers that are primarily dependent on their retail brokerage business for income. Under the new assessment structure, broker-dealers are allowed to deduct interest expense (from debt-financed activities) from SIPC assessable revenues. Generally, only large broker-dealers have a significant amount of deductible interest expense. SIPC is also continuing the broker-dealers' option of choosing to deduct 40 percent of margin interest revenue (this
deduction existed prior to the change in the assessment strategy).10 The change to the new assessment structure would shift the assessment burden further toward broker-dealers that have primarily a retail business. The issue of assessment equity has come up in the past and raised the following concerns: Large broker-dealers claim they have carried too much of the burden because small broker-dealers usually become SIPC liquidations. Small broker-dealers claim that large broker-dealers pose a more significant threat to the fund and are in a better position to carry the assessment burden. Broker-dealers with few or no customers claim that they receive little benefit from SIPC and consequently should not be forced to pay assessments at the same rate as broker-dealers with more customers. However, as we discussed earlier, the impact of changing the assessment structure on both the total assessment burden and the distribution of the assessment burden among individual broker-dealers depends upon the assessment rate. As long as the rate remains low, questions concerning the equity of the assessment structure should not demand a great deal of attention. If rates rise significantly as a result of high liquidation expenses, the SIPC board may need to revisit the issue.
Alternatives or Supplements to SIPC's Financial Structure
We were also asked to look at the role of alternatives such as private insurance to supplement SIPC coverage. The Deloitte & Touche study of the SIPC fund and the SIPC task force on assessments also addressed alternative or supplemental ways to provide protection to securities investors. The task force concluded that a customer protection fund comprising cash and short-term government securities, like the current fund, is the best protection for customers and the best way to maintain public confidence in the securities industry. We agree that a cash fund is superior to private insurance, letters of credit, and lines of credit in terms of providing a basic level of customer protection and public confidence. Historical experience with private insurance plans, like the excess customer protection insurance coverage carried by many major broker-dealers, has shown that coverage frequently cannot be obtained when it is needed most. For example, private insurance coverage for
10. The SIPC task force on assessments proposed eliminating the option once SIPC reaches its $1 billion fund goal.
customers with account values above SIPC coverage limits was not renewed at either Drexel or Thomson McKinnon before their closing. Although we believe that private insurance cannot adequately provide the basic customer protection currently provided by the SIPC fund, supplemental private insurers, through the pricing of their products, can provide valuable information concerning the health of the institutions they insure. In this way, private insurance fulfills a monitoring function that supplements the activities of the regulators. The Federal Deposit Insurance Corporation Improvement Act of 1991 (P.L. No. 102-242, 105 Stat. 2236) requires a study of the feasibility of a similar option for a private reinsurance system covering depository institutions. Bank lines of credit, like SIPC's current line of credit, or bank letters of credit may be appropriate to serve as a supplement but are not appropriate to replace the current cash fund. Lines of credit can be written so that they will be honored under almost all circumstances, but the cost of such a line might be prohibitive. Banks also have the option of not renewing lines or letters of credit when they expire, and they may choose not to renew SIPC's credit when SIPC would need it most, during periods of significant losses.
Conclusions
The SIPC board's new fund strategy appears to be responsible, given SIPC's back-up role in customer protection and the regulatory framework that exists in the securities industry today. With this regulatory structure in place and diligent supervisory and enforcement efforts, it is reasonable to assume that only a small percentage of broker-dealer closures will be turned over to SIPC for liquidation, and SIPC has the resources necessary to liquidate these firms. SIPC currently has more than twice the money available to protect customers than it has spent in its entire 20-year history to meet similar obligations. However, the reasonableness of this strategy depends entirely on the continued success of the securities industry's regulatory framework in shielding SIPC from losses. SIPC has a responsibility to regularly review its funding needs and take measures to strengthen the fund if there is evidence of any declining effectiveness of the customer protection and net capital rules. Further, in view of the importance of the regulatory protections, SEC in its oversight capacity should also regularly review the adequacy of SIPC's funding strategy.
Recommendations
We recommend that the SIPC Chairman periodically review the adequacy of SIPC's funding arrangements, taking into account any changes in the principal risk factors affecting the fund's exposure to loss. We also recommend that the SEC Chairman review the adequacy of funding plans developed by SIPC.
Agency Comments and Our Evaluation
SIPC and SEC officials provided comments on our assessment of the adequacy of the SIPC fund. Both SIPC and SEC agreed with our assessment that SIPC acted responsibly in planning for the SIPC fund's future needs. SIPC also agreed that a cash fund is superior to private insurance, letters of credit, and lines of credit. SIPC did not comment on our recommendations, but SEC agreed that the adequacy of the SIPC fund should be reviewed periodically. SEC stated that it and SIPC have reviewed the adequacy of the fund and will continue to do so.
Chapter 4
SIPC Can Better Prepare for Potential Liquidations
SIPC has never had to liquidate a large securities firm, and SIPC and SEC officials believe it unlikely that they will ever have to. However, should SIPC be called on to liquidate a large firm, the complexities of such a liquidation could impede the timely resolution of customer claims. Such delays, in turn, could damage public confidence in the securities industry. We believe that there are reasonable steps SEC and SIPC officials can take that would better enable them to liquidate a large firm in a timely manner should the need arise.
SIPC Has Not Made Special Preparations for Liquidating a Large Firm
The persons we contacted in the course of our review (industry officials, liquidation trustees and others involved in liquidations, and regulatory officials) generally gave SIPC high marks for its ability to conduct liquidations. Although a detailed review of the efficiency of SIPC liquidations was outside the scope of our review, we found no reason to question this assessment of SIPC's liquidation activities. However, these liquidations have all been of relatively small firms. Successful performance in the past does not, therefore, necessarily mean that SIPC is adequately prepared to move quickly to take on the liquidation of one of the largest firms. SIPC's largest liquidation to date has involved the processing of about 61,000 claims; the five largest broker-dealers each has more than 1 million customer accounts.
A decade ago, SIPC recognized the need to address the problems associated with the potential liquidation of larger firms by establishing a task force to look into the topic of how to handle large liquidations. The task force, composed of SIPC, SRO, and industry officials, was initiated in 1981 to study ways to ensure the timely return of customer property in the event a large firm with more than 100,000 customers became a SIPC liquidation. The task force was prompted by SIPC's liquidation of several relatively large firms in 1981, the largest of which, Muir, had about 16,000 accounts. The task force reported in 1981 that there were 11 securities firms carrying over 100,000 active customer accounts. (In 1990, over 50 securities firms had more than 100,000 customer accounts.) The 1982 task force report stated that the failure of a major broker-dealer would pose substantial challenges to SIPC and its normal liquidation procedures. The report stressed the problems that could confront the trustee and SIPC's efforts to promptly satisfy customer claims in a large liquidation. For example, the trustee, SIPC, and the regulators would generally try to arrange a bulk transfer and avoid the need to
process customer claims.1 However, as we pointed out in chapter 2, this process may not always be possible due to the quality of the firm's accounts and the reliability of its books and records. Bulk transfers could also prove time consuming, particularly if SIPC had to simultaneously negotiate transfer agreements with one or more acquiring firms. Acquiring firms would want to ensure that the accounts meet internal credit standards, and extensive computer programming efforts may be required to ensure that no errors occur in the transfer process. To minimize these and other potential problems, the task force recommended that SIPC develop a plan for large liquidations. The task force suggested that SIPC work with industry officials to negotiate agreements needed to ensure the timely liquidation of a large broker-dealer. For example, SIPC could negotiate standby agreements on data processing services.2 Since the task force report, SIPC officials have not attempted to strengthen their planning processes or make special preparations for a large broker-dealer liquidation. In response to the task force recommendation, a committee composed of SIPC and SEC officials developed a list of operational information that should be available from a large debtor broker-dealer at the beginning of a liquidation. (See table 4.1.) However, no action was taken to implement the recommendation.
1Amendments to SIPA in 1978 allowed SIPC to pay claims directly in some small cases and bulk transfer customer accounts to other acquiring firms.
2The task force also recommended that SIPC be given the authority to operate large troubled firms so that customers could continue to trade in their accounts and thereby avoid market losses. SIPC and SEC officials said the recommendation was not adopted because it would involve rescuing failed firms and that no other brokers would be willing to serve as counterparties to a bankrupt firm.
Table 4.1: Operational Information Recommended by a 1982 SIPC-SEC Committee to Help Ensure the Timely Liquidation of a Large Broker-Dealer
1. Current list of branch offices
2. Location of leases for branch offices
3. Location of equipment leases and other executory contracts
4. List of banks or financial institutions with funds or securities on deposit and banks with outstanding loans (both customer and firm)
5. Location of vaults and other secure locations
6. Location and description of computer databases and services used
7. Location of mail drops, e.g., post office boxes and other depositories
8. Chart of interlocking corporate relationships between the broker-dealer and its affiliates
9. List of key personnel
10. Accurate count of active customer accounts
Source: SIPC.
Senior SIPC officials said that they do not see the need to implement the 1982 task force recommendation or take other special measures to develop a plan for large liquidations. SEC officials agreed and said it would be impossible to develop a single plan that would be applicable to all troubled firms. We believe that the views of SIPC and SEC do not take seriously enough the problems that would result were SIPC to have to conduct the liquidation of a large firm. In support of their position, SIPC and SEC officials said it is unlikely that they will have to liquidate a large firm. They pointed out that over the past decade the regulators have demonstrated the ability to protect the customers of such firms without SIPC involvement. However, as we noted in chapter 3, when we discussed SIPC's financing needs, the regulators and SIPC officials cannot afford to become complacent about the possibility of a large broker-dealer ending up in a SIPC liquidation. SEC officials told us that the financially troubled Thomson McKinnon and Drexel firms could have become SIPC liquidations, and in 1990 four major broker-dealers had to be recapitalized by their parent companies. Another reason SIPC and SEC officials say that special preparations for large liquidations are not needed is that SIPC can readily adapt the procedures developed for smaller firms to the liquidation of larger ones. They point out that larger firms are more likely than smaller ones to have well-functioning computerized information systems that are the key to being able to move quickly to protect customer accounts.
We agree that the experience SIPC has gained in liquidating firms over the years has certainly enabled it to develop and improve its ability to liquidate firms. Throughout the past 20 years, SIPC has continued to upgrade its procedures and its automated liquidation system to improve its ability to conduct timely liquidations. While we appreciate and support SIPC's ongoing improvement efforts, we also believe that SIPC would be in a better position to protect customers if it were to take reasonable steps, in coordination with SEC, to prepare for the contingency of a large firm liquidation. In view of the risks to market stability that may accompany the failure of a large firm, we think it reasonable for SIPC to do everything possible to be able to protect customers should it be called on to conduct such a liquidation. Experience from the last several years, reviewed in the next section, suggests strongly that additional measures can be taken to help customers of large firms gain access to their property as quickly as possible should any such firm fail.
Measures to Enhance SIPC's Ability to Liquidate a Large Firm on a Timely Basis
Operational Information Could Be Collected Sooner
The ability of SIPC and trustees to satisfy customer claims in a timely fashion can be directly related to actions taken within the first hectic days of a liquidation's commencement. By better planning with regard to obtaining information about failing firms and securing automation support, SIPC can increase the chances that a large liquidation can proceed without delay, should such a liquidation prove necessary.
In the early stages of a liquidation, the trustee — with SIPC advisement — must simultaneously gain control of the failed firm's headquarters and branch offices, freeze all customer accounts and creditor claims against the firm, identify the location and availability of customer cash and securities, and determine the feasibility of arranging a bulk transfer. SIPC officials also (1) advise the trustees on the hiring of key liquidation staff such as accounting firms and (2) review and approve customer claim forms with the liquidation staff.3 We found that SIPC officials have generally received high marks from trustees and other individuals involved in SIPC liquidations for the guidance and assistance they provided in the conduct of liquidations. For example, the Blinder Robinson trustee said that SIPC provided excellent legal advice, which he used to defend against challenges to his authority by the former
3SIPC had not established specific documents that customers must file to support their claims. Instead, customers are encouraged to submit the ordinary documentation broker-dealers normally provide, such as monthly statements, purchase and sale confirmations, and canceled checks.
owner of the firm, and the FDR trustee praised SIPC's valuable legal and technical assistance. However, we also learned from SIPC staff and trustees of complications they experienced in acquiring the necessary information and automated liquidation systems during past liquidations. We believe such complications are indicative of the types of problems in obtaining operational information that, should they occur in the liquidation of much larger firms, could potentially result in significant delays for a large number of customers. For example, the trustee for Blinder Robinson, which had been on the 5(a)4 list 11 months prior to its failure, said his staff did not trust the accuracy of the information the firm provided about the locations of its branch offices. In addition, the staff did not locate certain of the firm's bank accounts until 10 months after the start of the liquidation. Similarly, the trustee of the FDR liquidation also experienced problems gathering all the information he needed. For example, it took two employees 4 to 6 weeks to find all the firm's branch office leases. Also, the trustee estimated that it took three staff members between 60 and 75 days to examine each of the firm's customer accounts to determine whether it was active before a bulk transfer could be arranged. In the above examples, the trustees had some difficulty in obtaining operational information concerning the location of offices and accounts. In each instance, the trustees did not believe that these complications had interfered with the timely processing of customer claims. But even if they did result in delays, relatively few customers were affected, and there was no potential for adverse impact on confidence in securities markets in general. For a large liquidation the stakes would be higher.
We therefore believe it wise to initiate procedures to be sure that SIPC has as much operational information as possible before it would actually have to undertake the liquidation of a large firm. The potential impact that lack of operational information of the type referred to in table 4.1 could have on the liquidation of a major dealer can be illustrated by the events surrounding the failure of Thomson McKinnon in 1989. Although SIPC did not have to initiate liquidation proceedings for Thomson McKinnon, SIPC officials had made few preparations when they were informed of its imminent demise. Thomson McKinnon had about
4Under SIPA section 5(a), the regulators must notify SIPC about broker-dealers that are in or approaching financial difficulty.
600,000 customer accounts, 170,000 more customers than SIPC had protected in its 20-year history. As discussed earlier, NYSE and SEC arranged for the transfer of Thomson McKinnon's accounts to Prudential-Bache, but SEC officials said the firm could have become a SIPC liquidation when the merger negotiations broke down temporarily. NYSE first warned SIPC that Thomson McKinnon was experiencing financial problems in May 1989. But information SIPC received about Thomson McKinnon was primarily financial information, such as the firm's quarterly financial reports, which identify its net capital level and aggregate data on the value of bank loans secured by customer margin securities. At the request of SEC, SIPC's general counsel went to New York on Friday, July 14, 1989, to prepare to initiate liquidation proceedings, possibly as early as the following week. After SEC notified SIPC, the staff began intensive efforts to collect operational information about the firm, such as the location of branch offices, and plan for the liquidation. We believe that the regulators should provide SIPC with operational information needed to liquidate troubled firms so that SIPC can begin preparations before firms fail. With such information, SIPC officials could assess, on a case-by-case basis, the impact that a liquidation would have on customers days or weeks in advance and make plans to return customers' property as quickly as possible. SEC officials said that requiring the regulators and troubled firms to provide the information in updated form to SIPC would impose unnecessary administrative burdens, particularly as they try to protect customers without SIPC involvement. However, we question how great a burden such a requirement would impose on SIPC, the regulators, and troubled firms. As SEC officials and the Blinder Robinson and FDR trustees told us, much of the information is already collected by the regulators and available at the start of the liquidation process.
Furthermore, if the regulators are attempting to protect customers by transferring accounts to another firm, they would need virtually all of this information. The burden of being certain that SIPC has as much operational information as possible before it has to undertake a liquidation could be minimized if the requirement is limited to 5(a) referrals (perhaps only exceeding a certain size) and other troubled firms at the discretion of the regulators.5 For example, the regulators may decide that SIPC should take
5Between 1988 and 1991, SIPC received 63 new 5(a) referrals, of which 18 (29 percent) became SIPC liquidations.
precautionary steps and plan for the liquidation of a large firm, with numerous customers and a nationwide branch office network, whose capital has fallen to the early warning levels but was not on the 5(a) list. SIPC had advance warning via the 5(a) list of 67 percent of the firms that were liquidated between 1988 and 1991. SIPC, SEC, and the SROs should work together to identify operational information that SIPC will need to plan for potential liquidations and that the regulators and troubled firms can reasonably be expected to provide.
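The referral statistics quoted in this chapter (63 section 5(a) referrals received between 1988 and 1991, 18 of which became SIPC liquidations) can be checked with a short calculation; the counts come from the report itself, and the rounding convention is our assumption:

```python
# Referral counts quoted in the report's footnote for 1988-1991.
referrals = 63      # new section 5(a) referrals received by SIPC
liquidations = 18   # referrals that became SIPC liquidations

# 18 / 63 = 28.57..., which the report rounds to "29 percent."
share = 100 * liquidations / referrals
print(f"{share:.1f}% of 5(a) referrals became SIPC liquidations")
```

Rounded to the nearest whole percent, this matches the 29 percent figure cited in the footnote.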
SIPC Has Not Addressed Cost-Effective Automation System Options
Another important tool used by SIPC to promptly respond to the demands of any liquidation is an automated liquidation system. An automated liquidation system is the computer software program or programs that help trustees organize liquidations and pay customer claims promptly. SIPC developed its own automated system in 1985 and has periodically modified the system to upgrade software and hardware capability. The system is designed for typical-sized liquidations and to be used either alone or in conjunction with modifications to the failed broker-dealer's system. We support SIPC's efforts to develop a system that meets the needs of typical liquidations and SIPC's policy of acquiring the most cost-effective automated liquidation systems. However, it is not clear what automated system SIPC would use in situations where either its own system or the failed broker-dealer's automated systems could not be readily adapted to meet the liquidation's needs. SIPC's system has not been used in a liquidation involving more than 30,000 customer claims. Although SIPC officials have stated that their system could be modified to handle liquidations of any size, they also recognize that it may not be cost effective to modify their system for a large liquidation. To date, SIPC has relied primarily on one supplier to meet its automated liquidation system needs for liquidations where SIPC's system cannot be used. When Blinder Robinson failed, SIPC advised the trustee to use that supplier's system even though the trustee had made arrangements to use another supplier. To the extent that it is relying on one supplier, SIPC is incurring a management risk that could delay efforts to return customer property. For example, the system may be unavailable in an emergency, or it may cost more than other competitive systems.
We are concerned about SIPC's ability to acquire the most cost-effective automated system in a timely manner because it has not analyzed various data processing options or compared cost data to determine
system costs and capabilities. While SIPC officials stated that all major public accounting firms are capable of meeting their automation needs, they had only had experience with one firm, and they did not have cost data from any other firms. If the process of analyzing system comparisons does not take place until a SIPC liquidation is initiated, unnecessary delays could result in acquiring the automated system that is key to the processing of customer claims. We believe that SIPC would be in a better position to ensure that trustees acquire cost-effective automated liquidation systems on a timely basis by systematically analyzing automated system options and developing plans to meet diverse requirements of potential liquidations.
More Effective Oversight by SEC Is Needed
As the federal agency responsible for overseeing the securities industry, SEC has a vital interest in the protection of customers and the continued stability of the securities markets. SEC also has the responsibility for overseeing SIPC's operations, and in many cases would itself have to take action to ensure that SIPC can fulfill its responsibilities in the best possible manner. For example, SEC would have to issue any rule that would require the SROs to provide operational information to SIPC about troubled firms. In the past, SEC has carried out its oversight responsibilities by participating in SIPC task forces, reviewing monthly and annual reports on SIPC's expenditures, investigating customer complaints about SIPC, and meeting with SIPC staff regarding liquidation issues. Moreover, SEC officials said the director of SEC's Division of Market Regulation began attending SIPC board meetings at the invitation of SIPC beginning in 1991. While such contacts between SEC and SIPC are important, we question whether SEC has paid sufficient attention to its SIPC oversight responsibilities. In particular, SEC has not taken steps to ensure that SIPC develops plans to liquidate large troubled firms as the 1982 task force recommended. Additionally, according to SEC and SIPC officials, SEC has evaluated SIPC's operations only once, in 1985. Although SEC found at that time that SIPC was doing a good job selecting trustees and overseeing the liquidation process, it also identified actions that would speed the payment of customer claims, such as the development of an automated liquidation system. However, SEC never followed up on the 1985 evaluation to determine if SIPC's automation program met SIPC's various liquidation requirements. Without more active oversight efforts by SEC, investors and Congress cannot be assured that SIPC has fully implemented proposals designed to strengthen its operations.
Conclusions
SIPC has a responsibility to ensure that trustees liquidate failed firms efficiently so that customers are protected against unnecessary market losses and risks to the financial system are minimized. SEC and trustees have complimented SIPC's guidance and assistance in past liquidations. However, we believe that SIPC can enhance its ability to protect customers by improving its preparations for liquidations of large troubled firms. Specifically, SIPC should (1) collect information needed to liquidate troubled firms sooner and (2) assess the cost effectiveness of various automation options to ensure the timely acquisition of an automated liquidation system. Also, additional oversight by SEC could help ensure that SIPC was as prepared as possible for responding to the demands that would result from the liquidation of a large firm. Unless SIPC and SEC address these concerns, SIPC may not be in a position to manage liquidations efficiently and protect customers from unnecessary market losses resulting from delays in the liquidation process.
Recommendations
We recommend that the chairmen of SIPC and SEC work with the SROs to plan for the timely liquidation of a large broker-dealer by improving the timeliness of information provided to SIPC by the regulators that is needed to liquidate a troubled firm. We further recommend that the Chairman of SIPC, in coordination with the SEC Chairman, systematically determine SIPC's automation needs for various sized liquidations and develop appropriate plans and procedures to ensure that trustees will promptly acquire cost-effective automated liquidation systems. Finally, we recommend that the SEC Chairman periodically review SIPC's operations and its efforts to ensure timely and cost-effective liquidations.
Agency Comments and Our Evaluation
SIPC and SEC officials commented on our recommendations to improve SIPC's preparations for liquidations. On our first recommendation to improve information collection, both agencies questioned the need for additional information. However, they both agreed to thoroughly review this matter. In commenting on our second recommendation to improve SIPC's automation program, SIPC stated that they continuously review their own automation system, determine their automation needs at the inception of a liquidation proceeding, and can make any necessary modifications without delaying the liquidation proceedings. Both agencies agreed to again review SIPC's system and consider all of our comments. Finally, SEC agreed with our third recommendation to initiate periodic reviews of SIPC operations and has taken steps to begin such a review.
SIPC and SEC responded to our report by stating that there are no indications of any problems with the information currently collected on financially troubled firms and that the evidence cited in the report is anecdotal. They also expressed concern that our report did not recognize SIPC's efforts to develop and upgrade its own automated liquidation system or that SIPC's decision to use its automated system, either alone or in conjunction with possible modifications to the debtor's automated system, is based on SIPC's assessment of the most cost-effective solution. The responses by SEC and SIPC officials did not address our main concern, which is the issue of improving SIPC's preparations for the liquidation of a large broker-dealer. Instead, the comments suggested that we were criticizing SIPC for the way it had conducted liquidations and for alleged deficiencies in the automation system it has developed. We modified the text and recommendations in chapter 4 to emphasize that our focus was on preparing for potential large liquidations because, as we noted in the draft, our main concern was with market stability. We did not assess the quality of specific SIPC liquidations or features of SIPC's automated liquidation system. Our recommendations to improve SIPC's preparations for large liquidations are prompted by a concern we share with previous SIPC and SEC chairmen that SIPC's ordinary liquidation procedures may not be sufficient to liquidate a large broker-dealer on a timely basis. SIPC's ability to promptly process customers' claims is critical to maintaining public confidence and stability in the financial system. Our recommendations focus on the two areas where timeliness could be key in a large liquidation: (1) the collection of information needed in the liquidation and (2) the acquisition of an automated liquidation system.
In the first area, a SIPC task force, as well as some trustees experienced in SIPC liquidations, suggested that it would be useful to have specific operational information available to plan for a liquidation. We believe that SIPC and SEC officials could work together with the SROs to ensure that the collection of this information is not unduly burdensome for the regulators or troubled firms.
In the automation area, we agree that SIPC deserves credit for developing an automated system to meet its typical liquidation needs and have noted in the report that SIPC has made periodic improvements to the system. We also agree with SIPC's policy of integrating the most cost-effective automated data processing solutions into the assistance and support it
provides to trustees appointed under SIPA. We do not question SIPC's ability in any case to use its own system or modify other systems. However, SIPC has not collected any cost data or compared various automation options. Therefore, we are concerned about whether these determinations can be accomplished without delay to the liquidation proceedings, particularly in situations involving large liquidations and liquidations where SIPC's own system could not be used. SIPC would be in a better position to make both timely and cost-effective decisions if it analyzed cost data for various automation options to plan for potential liquidations.
Discrepancies in Disclosing Customer Protections
Disclosure is an important feature of securities regulation so that customers have adequate information to make informed investment decisions and retain confidence in financial institutions and markets. Within the scope of this review, we were asked to discuss several aspects of disclosure related to SIPC membership. Specifically, we were asked to determine what information is provided to customers about SIPC's coverage and whether customers are informed of whether they are dealing with a SIPC member. SIPC members are required to disclose their SIPC status to their customers. For the most part this disclosure seems adequate, although we found some areas where improvements could be made. There is, however, no requirement for non-SIPC members regulated by SEC to disclose their lack of membership in SIPC. In certain situations, customers have been confused because some nonmember firms are involved in similar securities activities as member firms, such as the purchase and sale of securities to customers. In addition, customers could be harmed because they may be subjected to undisclosed risks of loss or misappropriation of their funds or securities. If these nonmember firms were required to disclose that they were not SIPC members, investors would be better informed about the relevance of SIPC coverage to their investment decisions. We, therefore, believe that SEC should require nonmember firms that serve as intermediaries in customers' purchases of securities and have temporary access to customer funds to disclose their SIPC status.
Disclosure Requirements for SIPC Members
SIPC requires its members to inform customers about their SIPC status. SIPC members generally must display the SIPC logo in their principal and branch offices and in most advertising. These firms may also refer to SIPC in other material such as statements of account. SIPC may, however, prevent members from displaying the logo when it would be misleading — for example, if the firm's principal business was in products such as commodity options that are not covered by SIPC. Disclosure regarding some of the features of SIPC coverage is also important. Even if a broker-dealer is a SIPC member, customers do not have SIPC protection for products not covered by SIPC. Furthermore, customers of failed firms lose SIPC protection if they do not submit their claims within 6 months after SIPC or the trustee publishes notification of a SIPC direct payment procedure or liquidation.1 SIPC has developed an official
1SIPC may use a direct payment procedure rather than a formal court-supervised liquidation to resolve small firm liquidations if each customer claim is within the limits of SIPC protection and the claims of all customers total less than $250,000.
Chapter 5 Discrepancies in Disclosing Customer Protections
brochure to explain SIPC coverage and provides the brochure to its members for voluntary distribution to their customers. SIPC's brochure generally explains what SIPC does and does not cover.2 SIPC officials do not believe that customers of SIPC-member firms have a significant information problem relating to SIPC's coverage because most customers purchase typical securities products that are clearly covered by SIPC. Nevertheless, questions concerning customers' eligibility for SIPC coverage have been raised in correspondence and litigation relating to SIPC liquidations where some customers found out too late — after their firm failed or after the deadline for filing claims had passed — that they were not entitled to SIPC protection. Some of these customers had transacted business with an affiliate of the broker-dealer that was not a SIPC member. Others had not submitted their claim forms by the designated deadline. SIPC's official brochure was recently revised to address potential customer confusion regarding the SIPC status of SIPC-member affiliates. The SIPC brochure now advises customers that some affiliates of SIPC members may not be SIPC members and that they should make checks payable only to SIPC members. We agree that the SIPC brochure provides a useful mechanism for including or clarifying information that customers may need to know. In addition to SIPC's recent changes, we believe that SIPC should consider revising other areas of the brochure to address potential confusion. One area that SIPC should review involves specifically explaining the 6-month deadline for filing a claim in order to be eligible for SIPC protection. The brochure currently states only that customers should file their claims promptly within the time limits set forth in the notice and in accordance with the instructions to the claim form; no deadline is mentioned.
This issue was raised in some customers' letters to SIPC when customer claims were denied because they were not filed within the designated time frame. If customers do not receive a notice from the trustee or see the newspaper notifications for a SIPC liquidation or direct payment procedure and, therefore, do not file within the 6-month period, they will not be protected by SIPC.
2SIPC's official brochure lists the securities that SIPC covers when purchased from a SIPC-member firm as notes, stocks, bonds, debentures, and certificates of deposit. Also, SIPC protects shares of mutual funds, publicly registered investment contracts or certificates of participation or interest in any profit-sharing agreement or in any oil, gas, or mineral royalty or lease. Finally, warrants or rights to purchase, sell, or subscribe to the securities mentioned above and to any other instrument commonly referred to as a security are protected under SIPA. On the other hand, the brochure explains that SIPC does not protect some securities-related products such as unregistered investment contracts; gold, silver, and other commodities; and commodity contracts or options.
Another area the brochure does not address is what customers should do if their broker-dealer fails or goes out of business but does not go through a SIPC liquidation. This information is important because 99 percent of broker-dealers that are liquidated or go out of business do so without going through SIPC. Customers should know that they still need to check to ensure that all of their securities and cash have been returned or transferred to another firm. If they find that this has not been done, they must notify SIPC or the regulators within 180 days after the firm's registration is withdrawn so that SIPC may consider whether to initiate formal liquidation or direct payment proceedings. They also should check on the status of their firm if regular statements about their accounts are not received.
Differences in Customer Disclosure Need to Be Addressed
As mentioned, there are no requirements for firms in the securities industry that are registered with SEC but not members of SIPC to disclose this fact to their customers. This lack of disclosure often poses no problem because many such firms do not have access to customer funds. However, there are situations in which this lack of disclosure could harm investors. These situations involve nonmember firms that serve as intermediaries in customers' purchases and sales of securities and may temporarily have access to customer funds. Should they fail or go out of business, these firms could expose customers to loss. If SIPC nonmembers with access to customer funds were required to disclose their SIPC status, there would be greater assurance that investors would be informed about the relevance of SIPC coverage to their investment decisions. SIPC and SEC officials did not know the extent to which customers may have difficulties because of the differences in protections provided by nonmember firms. We agree that extensive evidence is hard to come by. However, the potential harm to investors is demonstrated by evidence that customers of SIPC nonmembers that have access to customer property are exposed to the same type of fraud that has been prevalent in SIPC-liquidated firms and some customers have had problems with SIPC nonmember firms that are affiliated or associated with member firms. The spirit of the securities laws dating back to 1933 emphasizes the need to provide investors with information necessary or appropriate for their protection so that they can make informed decisions. In our judgment, for customers to be fully informed about the risks and differences in
Page 67
GAO/GGD-92-109 Securities Investor Protection
Chapter 5 Discrepancies in Disclosing Customer Protections
protection associated with different types of financial firms, disclosure of the SIPC status of nonmember firms that serve an intermediary role with customers and have access to customer property should be required.
Customers May Face Similar Risks From Members and Nonmembers
Information about the SIPC status of financial firms is important to customers when they face similar risks, but different protections, in purchasing similar types of products. Although most of the SEC-registered financial firms that are not SIPC members do not hold customer accounts, some types of firms play an intermediary role by accepting funds from customers for the purchase of securities products and, accordingly, have discretionary access to customer accounts. These intermediary firms are subject to the risks of misappropriating or losing customer funds. Nonmember intermediary firms include SIPC-exempt broker-dealers and certain types of investment advisory firms. According to SIPC data as of year-end 1991, about 440 SEC-registered broker-dealers were excluded from SIPC membership. Broker-dealers that are not SIPC members include those whose business is involved exclusively in the following areas: selling shares of mutual funds or unit investment trusts, selling variable annuities or insurance, providing investment advisory services to registered investment companies or insurance company separate accounts, transacting business as a government securities specialist dealer, or transacting principal business outside the United States and its territories and possessions. Investment advisory firms are required under the Investment Advisers Act of 1940 to register with SEC. These firms may be involved in a variety of services, such as supervising individual clients' portfolios, participating in the purchase or sale of financial products, providing investment advice and developing financial plans for clients, and publishing market reports for subscribers. An SEC official estimated that about half of the approximately 17,500 investment advisory firms are involved in the purchase and sale of securities products to customers and may temporarily have access to customer funds.
The other investment advisory firms provide primarily advisory or information services and do not serve an intermediary role or handle customer property. When these firms
3For further discussion of the SIPC exclusion of government securities specialist dealers, see our report, U.S. Government Securities: More Transaction Information and Investor Protection Measures Are Needed.
register with SEC, they must specify whether they will have custody or access to client accounts and identify any material relationship with a broker-dealer. In a previous review of investment advisers, we found, although precise figures were unavailable, some examples where investment advisory firms had misappropriated client funds. The importance of customers' choices between SIPC-member and nonmember firms may be illustrated by the purchase of a common securities product, mutual fund shares. Customers may purchase fund shares directly from mutual fund investment companies, which would not involve an intermediary from another firm. However, customers may also use an intermediary firm in their purchase of mutual fund shares. Customers may select different types of intermediaries, including SIPC-member broker-dealers, nonmember broker-dealers, investment advisers, and other types of financial firms not registered with SEC. In these cases, the customer deals with a sales agent or intermediary who directs the customer funds to the mutual fund where the customers' shares are held. In 1991, SIPC-member broker-dealers earned about $4.2 billion in revenues from the sale of mutual fund shares while the revenues for nonmember broker-dealers were about $600 million. In cases where customers are dealing with intermediaries, only customers of SIPC-member firms would be protected by SIPC if the firm holding their securities failed and required a SIPC liquidation. However, the potential for fraud exists in all intermediary situations. SIPC officials noted that in the last 5 years, 26 of 39 SIPC liquidations involved failures resulting from fraud on the part of introducing firms that did not retain customer accounts. In addition, during 1991 SIPC liquidated a broker-dealer involved primarily in selling mutual funds that failed due to the fraudulent misappropriation of about $1.8 million in customer funds.
In these cases, the firms did not hold onto customer money or establish customer accounts. These firms failed due to fraud resulting primarily from agents misappropriating customer funds instead of passing them on to either the mutual fund sponsor or other broker-dealers.
'See our report Investment Advisers: Current Level of Oversight Puts Investors at Risk. Customers may also purchase mutual fund shares from banks and other depository institutions. However, we have limited the scope of our review in this report to those financial firms that are registered with SEC.
Nonmember Affiliates and Associates of SIPC-Member Broker-Dealers
One problem area regarding the SIPC status of nonmember intermediary firms involves those firms that are affiliated with (formally tied within the same financial holding company) or associated with (having a material relationship with) a SIPC-member broker-dealer. Here, in addition to the underlying risk of misappropriated funds, there is the additional complication of confusion regarding a possible tie to a SIPC member. One of the major changes over the last 2 decades within the financial industry has been the emergence of large holding company structures headed by a parent company and comprising many (sometimes hundreds of) affiliated insured and uninsured companies involved in diversified activities. In several highly publicized incidents, customers lost money because they unknowingly purchased uninsured products from uninsured affiliates of insured depository firms. One such example involved the customers of the failed Lincoln Savings and Loan. Some Lincoln customers purchased uninsured bonds of the parent holding company in the lobby of the savings and loan. Similar problems have occurred with customers of SIPC nonmember financial firms that were affiliated with SIPC-member broker-dealers. Under financial holding company structures, some firms may be allowed to sell securities products to customers but must have a broker-dealer execute securities trades and hold the customer accounts. SEC officials acknowledged that most of the problems that they are aware of relate to how the SIPC logo is displayed at SIPC-member broker-dealers that share common space with nonmember affiliates. One example of the confusion over nonmember affiliates was addressed in a recent SIPC lawsuit involving the liquidation of a SIPC-member broker-dealer, Waddell Jenmar Securities, Inc., in North Carolina. In this case, SIPC conceded that several customers were defrauded by Guilford T.
Waddell, the president of both the SEC-registered broker-dealer and a nonmember investment advisory firm, Waddell Benefit Plans, Inc. (WBP), which administered pension plans. However, SIPC protection depended on whether these customers had been customers of the SIPC-member broker-dealer. Some customers instructed Mr. Waddell to purchase stocks with funds from their pension fund accounts, which were held by WBP. Mr. Waddell never purchased the stocks and misappropriated customer funds. When SIPC liquidated the broker-dealer beginning in April 1989, several
'We reported on customer problems relating to the insured status of financial products offered by banks in our report Deposit Insurance: A Strategy for Reform (GAO/GGD-91-26, Mar. 4, 1991). In re Waddell Jenmar Securities, Inc., 126 Bankr. 936 (E.D.N.C. 1991).
customers disagreed with the trustee's refusal to honor their claims and appealed to the U.S. Bankruptcy Court. The judge decided in May 1991 that the claimants of the pension plan fund were not eligible for SIPC protection because they were not customers of the broker-dealer. Instead, the court held that the claimants' missing funds and securities were held
by WBP.
Similar confusion has been raised in customer correspondence concerning nonmember firms that were associated, rather than formally affiliated, with a SIPC member. Often, these firms also transact business directly with customers and transfer customer funds to an associated broker-dealer that executes the securities transactions or holds the customer accounts. Some customers wrote to SIPC seeking clarification of these firms' SIPC status in situations involving investment advisory firms associated with SIPC-member broker-dealers. One such customer inquiry asked if SIPC protected the funds and securities invested with a nonmember financial planning firm and held by a SIPC-member broker-dealer. In this situation, if the financial planning firm failed and had not delivered the customer funds to a member broker-dealer, the customer would not be eligible for SIPC protection. But if the customer funds or securities were held in an account with a member broker-dealer, the customer property would have SIPC protection.
SEC Should Address Differences in Customer Protection
Differences in customer protection and differences in the disclosure of customer protection are two distinct and important issues. This review does not address the former issue. We believe that the disclosure differences among SEC-registered firms that transact securities-related business with customers and have access to customer funds need to be addressed so that customers can make more informed investment decisions. At a minimum, customers should know whether an SEC-registered firm that is subject to the risks of losing or misappropriating customer property is a member of SIPC. Congress has considered several legislative proposals that would require affiliates of SIPC-member broker-dealers to disclose their nonmember status. Another option is for SEC to address discrepancies in its regulatory disclosure requirements for registered firms that serve as intermediaries with customers and have access to customer funds or securities. SEC officials said that they would prefer to address the disclosure issue by amending their regulations rather than by amending SIPA. They are considering revising their regulations to require affiliates of
broker-dealers, and possibly nonmember broker-dealers, to disclose that they are not SIPC members, but they do not know when the proposal would be issued for comment. If SEC is to take the lead on this issue, it should identify and require those SEC-registered firms that serve as intermediaries in selling securities products to customers and have access to customer funds or securities to disclose that they are not SIPC members. We recognize that other financial firms outside SEC's jurisdiction also sell securities and securities-related products but are not required to disclose their SIPC status. This report is limited to financial firms under SEC's jurisdiction. In a previous study, we recommended that it would be appropriate for Congress to address the issue of uniform disclosure of federally insured and uninsured products.
Conclusions
In today's financial markets, customers may receive different protection for similar securities-related products, depending on the type of firm from which they purchase the product. Only SIPC-member firms are required to inform their customers of their SIPC status. Some confusion has occurred over the protections available to customers, particularly those involving financial firms affiliated or associated with SIPC-member broker-dealers. Customers should have adequate information about the SIPC status of financial firms that serve as intermediaries in selling securities products so that they can make more informed investment decisions. SIPC and SEC can improve the information available to customers by addressing the current discrepancies in the disclosure requirements among those SEC-registered firms that serve as intermediaries with customers and have access to customer funds and securities.
Recommendations
We recommend that the SIPC Chairman review and revise, as necessary, SIPC's official brochure to better inform customers of what they should do if their securities firm fails or otherwise goes out of business and to specify the amount of time that customers have to respond in order to qualify for SIPC protection.
'The SEC-registered firms that are serving an intermediary role and that should be required to disclose their non-SIPC status would include those SIPC-exempt broker-dealers that assist customers in buying and selling securities much as introducing broker-dealers do. Also included should be those investment advisory firms that manage discretionary or nondiscretionary accounts. These firms have temporary custody of customer property and are subject to the risks of losing or misappropriating customer property. See GAO/GGD-91-26, p. 143.
We also recommend that SEC revise its regulations to require SEC-registered financial firms that serve in an intermediary role with customers and have access to customer funds or securities to disclose to their customers that they are not SIPC members.
Agency Comments and Our Evaluation
SIPC and SEC commented on our recommendations to provide customers with better information on SIPC liquidation proceedings and to require certain additional securities firms to disclose their SIPC status to customers. Both SIPC and SEC generally agreed to address our concerns. Although we do not agree with SIPC's comments that our report concludes that there are no substantial gaps in disclosure to customers about the SIPC program, SIPC has agreed to clarify its official SIPC brochure as soon as feasible. SEC's comments indicate that while views within SEC differed regarding disclosure of SIPC status to customers, SEC is considering expanding disclosure requirements for some of the financial firms that serve in an intermediary role with customers. SEC's letter stated that its Division of Market Regulation is considering recommending a rule that would require disclosure of the absence of SIPC coverage on the part of (1) non-SIPC firms that are affiliated with registered broker-dealers and that have similar names or use the same personnel or office space and (2) non-SIPC registered broker-dealers. We support this effort, although if enacted it would leave a third category of firms — firms that are neither broker-dealers nor affiliates of broker-dealers but that serve in an intermediary capacity — without a SIPC disclosure requirement. Additional efforts will still be needed to ensure that all SEC-registered firms make adequate disclosure regarding SIPC coverage in the event of misappropriation of customer funds. Although SEC's Division of Investment Management believes there is some merit in our concern about the possibility of investor confusion, they do not believe that additional disclosure is necessary for two reasons.
First, because investment advisers are excluded from SIPC membership, "there is no more reason to require investment advisers with custody of client funds or securities to disclose their non-SIPC status than there is reason to require investment advisers to disclose that they are not members of the Federal Deposit Insurance Corporation." Second, if investment advisers were required to disclose their non-SIPC status, they run the risk that customers will have the false impression that the funds and securities they manage or hold have less protection than other financial firms outside
SEC's jurisdiction (e.g., banks and futures commission merchants) that also sell securities and securities-related products and are not required to disclose their SIPC status. In response to the first reason cited by SEC's Division of Investment Management, we do not believe that the analogy to the FDIC status of investment advisory firms is valid. Investment advisory firms are not involved in transactions involving deposits, but certain investment advisory firms are involved in the purchase and sale of securities products — sometimes through an affiliation with a SIPC-member broker-dealer. Also, customers could be confused because investment advisory firms may be involved in many types of securities-related activities — including, as SEC's letter points out, having temporary custody of customer property. Officials in both SEC divisions agreed that there is a possibility of customer confusion about a firm's SIPC status, particularly with firms that are affiliated with SIPC-member broker-dealers. For this reason, we believe that it is important to inform customers of the SIPC status of firms with whom they transact securities-related business. In response to the second reason, we focused our recommendations in this report on actions within SEC's jurisdiction, which includes only SEC-registered firms. While we cannot say whether customers will think that SIPC nonmember firms required to disclose have less protection than nonmember firms that are not required to disclose, we believe it is important that customers have better information to make more informed investment decisions. We also recognized in this report that some other financial firms (e.g., banks) involved in the purchase and sale of securities products are not required to disclose their SIPC status. This report notes that in a previous study we recommended that it would be appropriate for Congress to address the issue of uniform disclosure of federally insured and uninsured products.
Appendix I
SEC Customer Protection and Net Capital Rules
The Securities and Exchange Commission's (SEC) customer protection rule (15c3-3) and uniform net capital rule (15c3-1) form the foundation of the securities industry's customer protection framework. The net capital rule is designed to protect securities customers by requiring that broker-dealers have sufficient liquid resources on hand or in their control at all times to promptly satisfy customer claims. The customer protection rule is designed to ensure that customer property in the custody of broker-dealers is adequately safeguarded. In the Securities Investor Protection Act of 1970 (SIPA), Congress directed SEC to promulgate rules and regulations necessary to provide financial responsibility safeguards including, but not limited to, the acceptance of custody and use of customer securities and free credit balances. SEC rule 15c3-3, restricting the use of customer property, was a result of this congressional directive. According to SEC, rule 15c3-3 attempts to

• ensure that customers' funds held by broker-dealers and cash that is realized through lending, hypothecation,' and other permissible uses of customer securities are used to service customers or are deposited in a segregated account for the exclusive benefit of customers;
• require broker-dealers to promptly obtain possession or control of all fully paid and excess-margin securities carried by the broker-dealers for customers;
• separate the brokerage operation of the firm's business from that of its firm activities, such as underwriting and trading;
• require broker-dealers to maintain more current records, including the daily determination of the location of customer property (for possession or control purposes) and the periodic calculation of the cash reserve;
• motivate the securities industry to process transactions more expeditiously;
• inhibit the unwarranted expansion of broker-dealer business activities through the use of customer funds;
• augment SEC's broad program of broker-dealer responsibility; and
• facilitate the liquidations of insolvent broker-dealers and protect customer assets in the event of a Securities Investor Protection Corporation (SIPC) liquidation.

Rule 15c3-3 has two requirements: (1) broker-dealers must maintain possession or control of all customer fully paid and excess-margin
Customer Protection Rule Restricts Broker-Dealer Use of Customer Property
'Pledging customer securities as collateral for a loan.
securities, and (2) broker-dealers must segregate all customer credit balances and cash obtained through the use of customer property that has not been used to finance transactions of other customers.
Part 1: Possession or Control
SEC's requirement that broker-dealers maintain possession or control of customer fully paid and excess-margin securities substantially limits broker-dealers' abilities to use customer securities. Rule 15c3-3 requires broker-dealers to determine, each business day, the number of customer fully paid and excess-margin securities in their possession or control and the number of fully paid and excess-margin securities that are not in the broker-dealer's possession or control. Should a broker-dealer determine that fewer securities are in possession or control than are required, rule 15c3-3 specifies time frames by which these securities must be placed in possession or control. For example, securities that are subject to a bank loan must be returned to the possession or control of the broker-dealer
within 2 days. Securities that are on loan to another financial institution
must be returned to possession or control within 5 days of the determination. Once a broker-dealer obtains possession or control of customer fully paid or excess-margin securities, the broker-dealer must thereafter maintain possession or control of those securities. Rule 15c3-3 also specifies where a security must be located to be considered "in possession or control" of the broker-dealer. "Possession" of securities means the securities are physically located at the broker-dealer. "Control" locations are a clearing corporation or depository, free of any lien; a Special Omnibus Account under Federal Reserve Board Regulation T with instructions for segregation; a bona fide item of transfer of up to 40 days; foreign banks or depositories approved by SEC; a custodian bank; in transit between offices of the broker-dealer or held by a guaranteed corporate subsidiary of the broker-dealer; in the possession of a majority-controlled subsidiary of the broker-dealer; or in any other location designated by SEC, such as in transit from any control location for no more than 5 business days.
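The daily determination and the two retrieval deadlines described above can be sketched in a few lines of Python. This is an illustrative sketch only: the function and category names are ours, and rule 15c3-3 itself covers many more cases than the two deadlines the text mentions.

```python
# Illustrative sketch (not part of the rule's text) of the daily
# possession-or-control determination under rule 15c3-3.

# Business days allowed to retrieve customer securities found outside
# possession or control, for the two cases cited in the text.
RETRIEVAL_DEADLINES_DAYS = {
    "bank_loan": 2,   # customer securities pledged as collateral for a bank loan
    "stock_loan": 5,  # securities on loan to another financial institution
}

def daily_determination(required_qty: int, held_qty: int,
                        deficit_source: str) -> tuple[int, int]:
    """Return (share deficit, business days allowed to cure it)."""
    deficit = max(0, required_qty - held_qty)
    days_allowed = RETRIEVAL_DEADLINES_DAYS[deficit_source] if deficit else 0
    return deficit, days_allowed

# 1,000 fully paid shares required but only 900 in control, with the
# shortfall out on a bank loan: 100 shares must be retrieved within 2 days.
```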
'Excess-margin securities in a customer account are those securities with a market value greater than 140 percent of the customer's debit balance (the amount the customer owes the broker-dealer for the purchase of the securities).
'Securities that have been pledged to a bank as collateral.
'Federal Reserve System Regulation T (12 C.F.R. 220) regulates the extension of credit by and to broker-dealers. For the purposes of SEC rule 15c3-3, it deals primarily with broker-dealer margin accounts.
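The excess-margin definition in the footnote above (market value greater than 140 percent of the customer's debit balance) reduces to simple arithmetic. A minimal sketch, with a function name of our own choosing:

```python
def excess_margin_value(long_market_value: float, debit_balance: float) -> float:
    """Market value of customer securities above 140 percent of the debit
    balance; this excess-margin portion must be kept in the broker-dealer's
    possession or control."""
    # 140 percent written as * 140 / 100 to keep the arithmetic exact
    # for whole-dollar inputs.
    return max(0.0, long_market_value - debit_balance * 140 / 100)

# $100,000 of securities against a $50,000 margin loan: 140 percent of
# the debit is $70,000, so $30,000 of the securities are excess margin.
```

The $70,000 figure matches the worked example at the end of this appendix, where the broker-dealer pledges exactly 140 percent of customer A's $50,000 debit as collateral.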
Part 2: Segregation of Customer Cash and the Reserve Formula
The second requirement of rule 15c3-3 dictates how broker-dealers may use customer cash credit balances and cash obtained from the permitted uses of customer securities, including from the pledging of customer margin securities. Essentially, the customer protection rule restricts the use of customer cash or margin securities to activities directly related to financing customer securities purchases. The rule requires a broker-dealer to periodically (weekly for most broker-dealers) compute the amount of funds obtained from customers or through the use of customer securities (credits) and compare it to the total amount it has extended to finance customer transactions (debits). If credits exceed debits, the broker-dealer must have on deposit in an account for the exclusive benefit of customers at least an equal amount of cash or cash-equivalent securities. For most broker-dealers, the calculation must be made every Friday, and any required deposit must be made by the following Tuesday. Tables I.1, I.2, and I.3 show samples of the individual components of the cash reserve portion of rule 15c3-3 as they appear in the routine Financial and Operational Combined Uniform Single (FOCUS) reports submitted by broker-dealers to SEC. First, we will explain the numbered items as they relate to SIPC, and then we will use the items to demonstrate how the reserve calculation works. The numbered items in table I.1 make up the credits portion of the reserve calculation. These accounts generally represent accounts payable by the broker-dealer to customers and money borrowed by the broker-dealer using customer property as collateral. Item 1 is the amount of cash in customer accounts that SIPC would be required to return to customers in a liquidation. Items 2 and 3 show the amount of customer property pledged as collateral for bank loans or involved in stock loans.
Generally, the securities involved in these transactions come from customer margin accounts and are used to secure the customers' margin loans. Customers may also volunteer their fully paid securities for use in stock loans if the broker-dealer provides the customer with liquid collateral; however, when they do so they forfeit the SIPC protection covering those securities. These items also show the amount SIPC may need to advance to recover customer
'Rule 15c3-3 requires that broker-dealers maintain a bank account that is separate from any other account of the broker-dealer and specified as a "Special Reserve Bank Account for the Exclusive Benefit of Customers" (reserve account). The broker-dealer must also obtain written notification from the bank that all cash or qualified securities within the reserve account are being held for the exclusive benefit of customers; cannot be used directly or indirectly as security for any loan to the broker-dealer by the bank; and shall be subject to no right, charge, security interest, lien, or claim of any kind in favor of the bank or any person claiming through the bank.
property pledged as collateral at banks or involved in stock loans with
other broker-dealers.
Table I.1: Credits Component of the Reserve Formula Calculation

Credit balances                                                  Week 1         Week 2
1.  Free credit balances and other credit balances in
    customers' security accounts.                                $10,000,000    $10,000,000
2.  Monies borrowed collateralized by securities carried
    for the accounts of customers.                               3,000,000      3,000,000 + 50,000
3.  Monies payable against customers' securities loaned.         5,000,000      5,000,000
4.  Customers' securities failed to receive.                     4,000,000      4,000,000
5.  Credit balances in firm accounts which are
    attributable to principal sales to customers.                4,000,000      4,000,000
6.  Market value of stock dividends, stock splits, and
    similar distributions receivable outstanding over 30
    calendar days.                                               1,000,000      1,000,000
7.  Market value of short security count differences over
    30 calendar days old.                                        2,000,000      2,000,000
8.  Market value of short securities and credits (not to
    be offset by longs or by debits) in suspense accounts
    over 30 calendar days.                                       500,000        500,000
9.  Market value of securities which are in transfer in
    excess of 40 calendar days and have not been
    confirmed to be in transfer by the transfer agent or
    the issuer during the 40 days.
10. Other (list)                                                 1,000,000      1,000,000
11. Total credits                                                $30,500,000    $30,550,000

Source: SEC FOCUS Report.
The numbered items in table I.2 make up the debits portion of the reserve calculation. These accounts generally represent transactions that the broker-dealer has financed for customers; item 18 is analogous to the broker-dealer's loss reserve for the loans made to customers. The loans to customers aggregated in these accounts are secured by customer property. If at some point the market value of the customer property securing the debit falls sufficiently to make the debit unsecured or partially secured, the unsecured portion of that account is taken out of the reserve
'See Molinari and Kibler, Broker-Dealers' Financial Responsibility under the Uniform Net Capital Rule — A Case for Liquidity, Geo. L.J.
calculation, given a haircut, and charged against the net capital of the broker-dealer. In a SIPC liquidation, the customer has the option to either pay the remaining debit balance or allow the trustee to liquidate securities in that customer's account to pay the balance. If the debit balance in the account is greater than the value of the securities in the account, the trustee usually liquidates the securities and attempts to recover the remaining debit balance. The Federal Reserve and the self-regulatory organizations (SROs) set initial margin account requirements that must be met before a customer may effect new securities transactions and commitments. In addition, the SROs and broker-dealers set maintenance margin requirements to limit the likelihood that margin loans to customers will become unsecured. These requirements specify how much equity each customer must have in an account when securities are purchased and how much equity must be maintained in that account. For example, the New York Stock Exchange (NYSE) requires that customers of its member firms maintain at least 25 percent equity for all equity securities long in an account. This means that the customer must maintain equity of at least 25 percent of the current market value of the securities in the account. The equity balance of a margin account is calculated by subtracting the current market value of all securities short and the amount of the customer's debit balance (the amount the customer owes the broker-dealer for the purchase of the securities) from the current market value of the securities held long in the account plus the amount of any credit balance.
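The equity calculation in the last two sentences, together with the NYSE 25-percent maintenance test, can be sketched as follows. This is a simplified illustration for accounts holding long equity positions (function names are ours; actual SRO maintenance rules treat short positions and other products separately):

```python
def account_equity(long_value: float, credit_balance: float,
                   short_value: float, debit_balance: float) -> float:
    """Equity as defined above: long market value plus any credit balance,
    minus short market value plus the customer's debit balance."""
    return (long_value + credit_balance) - (short_value + debit_balance)

def meets_nyse_maintenance(long_value: float, credit_balance: float,
                           short_value: float, debit_balance: float) -> bool:
    """Simplified 25-percent maintenance test against long market value."""
    equity = account_equity(long_value, credit_balance,
                            short_value, debit_balance)
    return equity >= long_value * 25 / 100

# Customer A's week 2 position from the example below: $100,000 long
# against a $50,000 debit gives $50,000 equity, or 50 percent.
```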
Table I.2: Debits Component of the Reserve Formula Calculation

Debit balances                                                   Week 1         Week 2
12. Debit balances in customers' cash and margin accounts
    excluding unsecured accounts and accounts doubtful of
    collection net of deductions pursuant to rule 15c3-3.        $10,000,000    $10,000,000 + 50,000
13. Securities borrowed to effectuate short sales by
    customers and securities borrowed to make delivery on
    customers' securities failed to deliver.                     1,000,000      1,000,000
14. Failed to deliver of customers' securities not older
    than 30 calendar days.                                       4,000,000      4,000,000
15. Margin required and on deposit with the Options
    Clearing Corporation for all option contracts written
    or purchased in customer accounts.                           2,000,000      2,000,000
16. Other (list)
17. Aggregate debit items.                                       17,000,000     17,050,000
18. Less 3 percent (for alternative net capital
    requirement calculation method only).                        (510,000)      (511,500)
19. Total debits                                                 $16,490,000    $16,538,500

Source: SEC FOCUS Report.
The numbered items in table I.3 show how the aggregate credit and debit items come together to determine the required segregated reserve. If aggregate credits are greater than aggregate debits, the broker-dealer must ensure that it has sufficient funds in its reserve account to cover the difference. If debits are greater than credits, no reserve is required.
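The comparison described above reduces to a one-line rule. A minimal sketch follows; the function name is illustrative, and the week-1 credit total is the figure implied by the excess reported in table I.3.

```python
def required_reserve(total_credits, total_debits):
    """Rule 15c3-3 reserve deposit as described above: the excess,
    if any, of aggregate credit items over aggregate debit items
    must be on deposit in the special reserve bank account."""
    return max(0, total_credits - total_debits)

# Week 1 of the example: total debits of $16,490,000 (table I.2, item 19)
# against total credits of $30,500,000 (implied by the $14,010,000
# excess reported in table I.3, item 21).
print(required_reserve(30_500_000, 16_490_000))  # 14010000
print(required_reserve(16_490_000, 30_500_000))  # 0 -- debits exceed credits
```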
Appendix I SEC Cnstomer Protection and Net Capital Rnles
Table I.3: Reserve Calculation

Reserve computation                                         Week 1        Week 2
20. Excess of total debits over total credits (line 19
    less line 11)
21. Excess of total credits over total debits (line 11
    less line 19)                                           $14,010,000   $14,011,500
22. If computation permitted on a monthly basis, enter
    105 percent of excess of total credits over total
    debits
23. Amount held on deposit in "Reserve Bank
    Account(s)", including value of qualified
    securities, at end of reporting period                   14,000,000    14,010,000
24. Amount of deposit (or withdrawal), including value
    of qualified securities                                      10,000         1,500
25. New amount in Reserve Bank Account(s) after adding
    deposit or subtracting withdrawal, including value
    of qualified securities                                 $14,010,000   $14,011,500
26. Date of deposit                                              1-7-92       1-14-92

Source: SEC FOCUS Report.
To demonstrate how the reserve formula works with regard to customer credit balances and margin accounts, we prepared this example. The column labeled "Week 1" in tables I.1, I.2, and I.3 shows the account balances of a hypothetical broker-dealer. During week 2, customer A purchased $100,000 worth of securities on margin by paying $50,000. The broker-dealer borrowed $50,000 from a bank, using $70,000 of customer A's securities as collateral. Item 2 in table I.1 records the use of customer securities for the bank loan, and item 12 in table I.2 records the $50,000 that customer A borrowed (debit) to buy the securities. Item 11 shows total credits increasing by $50,000 in week 2. Item 17 shows aggregate debits also increasing $50,000; however, total debits only increased by $48,500, reflecting the 3-percent charge from item 18. The effect of customer A's transaction is also reflected in the broker-dealer's cash reserve requirement in table I.3, item 21.
• Had the broker-dealer chosen to fund customer A's margin account purchase with free credit cash from other customers, the credit balances shown in table I.1 would not change from week 1 to week 2. The debit balances shown in table I.2 would reflect the $50,000 increase in item 12, increasing total debits. The required reserve in this second example would
decrease by $48,500, and the broker-dealer would be allowed to withdraw that amount from the reserve account. These examples show that broker-dealers must either segregate customer cash in a reserve account (example 1) or use it to lend to other customers (example 2).
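The week-to-week arithmetic in the example can be replayed as a short sketch. The dollar figures are the report's; the function name is illustrative.

```python
def total_debits(aggregate_debits, alternative_method=True):
    """Table I.2, items 17-19: aggregate debit items less the
    3-percent charge, which applies only when the broker-dealer
    uses the alternative net capital requirement."""
    charge = aggregate_debits * 3 // 100 if alternative_method else 0
    return aggregate_debits - charge

week1 = total_debits(17_000_000)   # 16,490,000
week2 = total_debits(17_050_000)   # 16,538,500 after customer A's purchase
print(week2 - week1)               # 48500 -- total debits rise only $48,500
# Credits rose by the full $50,000, so the required reserve grows $1,500.
print(50_000 - (week2 - week1))    # 1500
```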
Net Capital Rule Stresses Liquidity
In 1975, SEC established the uniform net capital rule (a modification of rule 15c3-1) as the basic capital rule for broker-dealers, which is applicable to all SIPC members.1 This rule was designed to make sure that broker-dealers maintain sufficient liquid assets to cover their liabilities. In order to comply with rule 15c3-1, the broker-dealer must first compute its net capital: the net worth plus subordinated debt less nonallowable assets and deductions that take into account risk in the broker-dealer's securities and commodities positions. Second, the broker-dealer determines its net capital requirement in one of two ways: (1) the basic method, where aggregate indebtedness cannot exceed 15 times net capital, or (2) the alternative method, where net capital must be at least 2 percent of aggregate debits from the cash reserve calculation of rule 15c3-3.
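The two tests just described can be expressed side by side; a sketch with illustrative names and figures:

```python
def basic_method_ok(net_capital, aggregate_indebtedness):
    """Basic method: aggregate indebtedness may not exceed
    15 times net capital."""
    return aggregate_indebtedness <= 15 * net_capital

def alternative_method_ok(net_capital, aggregate_debits):
    """Alternative method: net capital must be at least 2 percent
    of the aggregate debits from the rule 15c3-3 computation."""
    return net_capital * 100 >= 2 * aggregate_debits

# A firm with $1,000,000 in net capital could carry up to
# $15,000,000 of aggregate indebtedness under the basic method...
print(basic_method_ok(1_000_000, 15_000_000))        # True
print(basic_method_ok(1_000_000, 16_000_000))        # False
# ...and could support up to $50,000,000 of aggregate debits
# under the alternative method.
print(alternative_method_ok(1_000_000, 50_000_000))  # True
print(alternative_method_ok(1_000_000, 50_000_001))  # False
```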
Computing Net Capital
The process of calculating a broker-dealer's net capital is really a process of separating its liquid and nonliquid assets. For the purposes of calculating net capital, only assets that are readily convertible into cash, on the broker-dealer's initiative, count in the capital computation. For example, fixed assets (such as furniture and exchange seats) as well as unsecured receivables (such as unsecured customer debits, described in the previous section) cannot be included as allowable assets in the net capital calculation. The process of computing net capital also involves computing the market value of broker-dealer assets and accounting for the price volatility of broker-dealer securities. The net capital rule applies a discount (haircut) to proprietary securities according to their risk characteristics, i.e., price volatility. For example, debt obligations of the U.S. government receive a haircut depending on their time to maturity, from a 0-percent haircut for obligations with less than 3 months to maturity to a 6-percent haircut for obligations with more than 25 years to maturity.
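The maturity-based haircut schedule can be sketched as a lookup. Only the two endpoints come from the text (0 percent under 3 months, 6 percent over 25 years); the interior break-points below are made-up placeholders, not the actual graduated schedule in rule 15c3-1.

```python
def treasury_haircut_pct(years_to_maturity):
    """Illustrative haircut (in percent) for U.S. government debt.
    The endpoints follow the text above; the middle rungs are
    placeholders, not the real rule 15c3-1 schedule."""
    if years_to_maturity < 0.25:
        return 0   # under 3 months: no haircut (from the text)
    if years_to_maturity < 5:
        return 2   # placeholder rung
    if years_to_maturity < 25:
        return 4   # placeholder rung
    return 6       # more than 25 years (from the text)

def allowable_value(market_value, years_to_maturity):
    """Market value net of the haircut, as counted toward net capital."""
    return market_value * (100 - treasury_haircut_pct(years_to_maturity)) // 100

print(allowable_value(1_000_000, 0.1))  # 1000000 -- no haircut
print(allowable_value(1_000_000, 30))   # 940000 -- 6 percent haircut
```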
1. This rule also applies to SEC-registered broker-dealers that are not SIPC members, but SEC has the authority to exempt some SIPC nonmember firms from the rule.
Basic Net Capital Requirement
Calculating a broker-dealer's required capital, using the basic method, involves calculating the broker-dealer's aggregate indebtedness. Generally, aggregate indebtedness means the total liabilities of the broker-dealer, including some collateralized liabilities and liabilities subordinated to the claims of other creditors or customers. For broker-dealers that choose to use the basic net capital requirement, the minimum dollar net capital requirement for broker-dealers engaging in the general securities business, which involves customers, is $25,000. For broker-dealers that generally do not carry customer accounts (introducing brokers), the minimum capital requirement is $5,000. SEC has proposed that these minimum net capital standards be raised to $250,000 for broker-dealers that hold customer property. Clearing firms that do not hold customer property and introducing firms that routinely receive customer property would be required to hold at least $100,000 in net capital. Introducing broker-dealers that do not routinely receive customer property would be required to hold at least $50,000 in net capital.
Alternative Net Capital Requirement
SEC offered broker-dealers an alternative to the basic net capital requirement that is based on the broker-dealers' responsibilities to customers rather than aggregate indebtedness. This requirement option (most commonly used by large broker-dealers), in conjunction with rule 15c3-3, is designed to ensure that sufficient liquid capital exists to return all property to customers, repay all creditors, and have a sufficient amount of capital remaining to pay for a liquidation if the broker-dealer fails. The broker-dealer's ability to return customer property is addressed by rule 15c3-3. The repayment of creditors and the payment of the broker-dealer's liquidation expenses is addressed by the 2-percent net capital requirement and the deductions from net worth for illiquid assets and risk in securities and commodities positions. SEC believed the alternative requirement would promote customer protection and still allow broker-dealers to allocate capital as they see fit by
• acting as an effective early warning device to provide reasonable assurance against loss of customer property,
• avoiding inefficient and costly misallocations of capital in the securities industry,
• eliminating competitive restraints on the securities industry in its interaction with other diversified financial institutions,
• making the capital structures of broker-dealers more understandable to suppliers of capital to the public, and
• providing some reasonable and finite limitation on broker-dealer expansion.

Broker-dealers using the alternative capital requirement must hold at least $100,000 in capital. The new minimum standards proposed by SEC would also apply to broker-dealers using the alternative method. Generally, broker-dealers maintain capital levels far in excess of the minimum requirement; this amount is recorded in item 28 of table I.4. Table I.4 shows the items included in the alternative capital requirement calculation.
Table I.4: Alternative Net Capital Calculation

Computation of alternative net capital requirement
22. Two percent of combined aggregate debit items as shown in Formula for Reserve Requirements (rule 15c3-3), prepared as of the date of the net capital computation, including both brokers or dealers and consolidated subsidiaries' debits.
23. Minimum dollar net capital requirement of reporting broker or dealer and minimum net capital requirement of subsidiaries.
24. Net capital requirement (greater of line 22 or line 23).
25. Excess net capital (total net capital less line 24).
26. Percentage of net capital to aggregate debits.
27. Percentage of net capital after anticipated capital withdrawals to aggregate debits.
28. Net capital in excess of the greater of 5 percent of aggregate debit items or $120,000.
Source: SEC FOCUS Report.
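Items 22 through 28 of the table chain together as follows. The $100,000 minimum and the sample net capital figure are illustrative, as are the function names.

```python
def alternative_requirement(aggregate_debits, minimum_dollar=100_000):
    """Item 24: the greater of 2 percent of aggregate debit items
    (item 22) and the minimum dollar requirement (item 23)."""
    return max(aggregate_debits * 2 // 100, minimum_dollar)

def excess_net_capital(net_capital, aggregate_debits, minimum_dollar=100_000):
    """Item 25: total net capital less the item 24 requirement."""
    return net_capital - alternative_requirement(aggregate_debits, minimum_dollar)

def early_warning_excess(net_capital, aggregate_debits):
    """Item 28 as described above: net capital in excess of the
    greater of 5 percent of aggregate debit items or $120,000."""
    return net_capital - max(aggregate_debits * 5 // 100, 120_000)

# With the week-1 aggregate debits of $17,000,000 from table I.2 and
# a hypothetical $2,000,000 of net capital:
print(alternative_requirement(17_000_000))          # 340000
print(excess_net_capital(2_000_000, 17_000_000))    # 1660000
print(early_warning_excess(2_000_000, 17_000_000))  # 1150000
```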
Appendix II
Comments From the Securities Investor Protection Corporation

SECURITIES INVESTOR PROTECTION CORPORATION
805 FIFTEENTH STREET, N.W., SUITE 800
WASHINGTON, D.C. 20005-2207
(202) 371-8300

OFFICE OF THE GENERAL COUNSEL
June 22, 1992
Richard L. Fogel
Assistant Comptroller General
General Accounting Office
Washington, D.C. 20548
Dear Mr. Fogel:
Now on p. 2.
We are pleased to have this opportunity to offer the comments of the Securities Investor Protection Corporation ("SIPC") on the GAO Draft Report on SIPC. As the Executive Summary of your report notes, the Congressional committees asked GAO to report on three principal issues: "1) the exposure and adequacy of the SIPC fund, 2) the effectiveness of SIPC's liquidation oversight efforts, and 3) the disclosure of SIPC protections to customers." Draft Report Executive Summary ("ES") at 1. We are pleased to note that in each of these areas the GAO report gives SIPC a vote of confidence. Indeed, it follows from the report's principal findings that the program of investor protection enacted in the Securities Investor Protection Act of 1970 ("SIPA") has been a major success.
SIPC's role is viewed in the proper perspective as an element in a statutorily mandated program to promote investor confidence by upgrading broker-dealer financial responsibility and by providing protection to customers of failed broker-dealers. The report reflects that the Securities and Exchange Commission ("SEC"), under authority granted in SIPA, has devised, promulgated, and, together with the self-regulatory organizations, enforced effective financial responsibility rules for SIPC members which have sharply curtailed the need for investor protection through SIPC-financed customer protection proceedings.
Set forth below are our comments on the matters covered by the report, including our responses to some comments with which we disagree. We have submitted a separate memorandum alerting GAO to a few technical problems we find in the Draft Report. SIPC will not in this letter offer comments on those parts of the report which deal with the SEC or the SEC's role in the SIPC program.
Richard L. Fogel Page 2 June 22, 1992
Now on p. 19.
Now on p. 44. Now on p. 44. Now on p. 5. Now on p. 42.
The proper measure of SIPC's exposure and the adequacy of SIPC's resources appear to be the main reasons that GAO was asked to do the study. We note that you have "determined that no quantifiable measure exists to judge the exposure of the SIPC fund and the adequacy of its reserves...." (Draft Report at 1.16) and that "[i]n assessing the reasonableness of SIPC's financial plans, [you] concluded that there is no methodology that SIPC could follow which would provide a completely reliable estimate of the amount of money SIPC might need in the future.... SIPC's estimate, therefore, must be judgmental." Draft Report at 3.9. In light of your determinations, with which we fully concur, we are gratified that GAO has stated its belief "that SIPC's strategy represents a responsible approach for planning for future needs" (Draft Report at 3.9); that "SIPC officials have acted responsibly in adopting a financial plan that would increase Fund reserves to $1 billion by 1997" (ES at 9); and that "based strictly on the historical record, SIPC resources would seem to be adequate." Draft Report at 3.5.
The report suggests that the principal threat to the continued effectiveness of the SIPC program is the possibility that SIPC and the regulators might become complacent. As a theoretical statement, we cannot disagree, but in fact the report itself shows no reason to believe that either SIPC or the regulators are becoming complacent. Indeed, the recent decisions of the SIPC Board with regard to the fund size and the line of credit demonstrate just the opposite.
Now on p. 51.
We concur with the report's position that "a cash fund is superior to private insurance, letters of credit, or lines of credit in terms of providing a basic level of customer protection and public confidence." Draft Report at 3.23. We believe lines of credit, however, are a useful supplement to a cash fund.
Now on p. 22. Now on p. 38.
The report states that GAO does "not believe that SIPC needs the authority to individually examine its members" (Draft Report at 2.1), and concludes that providing SIPC with investigative or regulatory authority is not warranted. Draft Report at 2.34. This fully accords with SIPC's long standing views on this subject. The reasons given in the report are the same reasons SIPC has articulated in the past. One additional reason, not mentioned in the report, is our perception that, by divorcing the regulatory function from the customer protection function, the authority and responsibility of the regulator and the protector are both enhanced and clarified.
We, of course, are pleased the report concludes that "SIPC's role in providing back-up protection for customers' cash and securities has worked well."
Now on p. 4. Now on p. 54. Now on p. 54.
(ES at 7); that "the assistance SIPC provides trustees during liquidations has received high marks..." (Draft Report at 4.1); and that the GAO has "no reason to question the quality of the assistance SIPC provides after a liquidation begins...." Draft Report at 4.2.
Now on p. 65.
The report concludes that there are no substantial gaps in disclosure to customers about the SIPC program, noting that "[f]or the most part this disclosure seems adequate..." Draft Report at 5.1. The report does recommend that disclosure as to the time limits for filing claims in a SIPC liquidation and the time limits on SIPC's jurisdiction to initiate liquidation proceedings be enhanced. We note that the published notices of liquidation proceedings set forth the time limits for customers to file claims and that the notice, claim forms, and instructions mailed to customers set forth these time limits in large, bold-faced type. Nevertheless, we recognize the merit in GAO's comments, and we will revise the question and answer booklet with a view toward implementing your suggestions as soon as feasible.
The report expresses concern that SIPC does not take adequate steps to gather operational information on firms which may be liquidated prior to the initiation of a liquidation proceeding. The evidence cited in the report for the need for this information is anecdotal. There is no indication whatsoever that the problems discussed were more than an inconvenience or that these matters delayed the processing or satisfaction of customer claims.
SIPC does, of course, receive information on members in financial difficulty from the regulators and frequently requests as much information as it can in order to make its independent determination of the need for SIPC protection and to select a trustee and counsel with adequate experience and resources to meet the needs of the undertaking. We will, however, thoroughly review this matter with the SEC and the SROs.
SIPC has pursued a policy of integrating the most cost effective automated data processing solutions into the assistance and support SIPC provides to trustees appointed under SIPA. SIPC has achieved some important successes, including the conception, definition, and creation of the first and only automated liquidation system designed for use in the liquidation of broker-dealers under SIPA. The GAO Report, however, expresses concerns about SIPC's efforts and preparedness in this area. For the reasons set forth below, we believe those
concerns are based on erroneous assumptions and a misunderstanding of the liquidation process.
The GAO Report states, "[w]e believe it is critical that SIPC review its automation practices and develop policies which ensure that trustees acquire capable automated liquidation systems on a timely basis." Draft Report at 4.12. The report defines automated liquidation systems as "computer software programs that help trustees organize liquidations and pay customer claims promptly." Draft Report at 4.12. The report reflects concern that "SIPC has not assessed current practices to ensure that the trustees of large liquidations acquire automated systems." Draft Report at 4.15. The report states that in cases too large for SIPC's own system "SIPC relies primarily on one supplier that has developed a system SIPC officials believe exceeds the capabilities of others on the market." Draft Report at 4.14. The report observes that SIPC's reliance on "one supplier" incurs the risk that "the system may be unavailable in an emergency or may cost more than other competitive systems." Draft Report at 4.15. The difficulty with the Draft Report's position is that, except for what SIPC has developed, there is no off-the-shelf "automated liquidation system" for stockbroker liquidations and there is, therefore, no "supplier" of such systems.
Now on pp. 60-61. Now on p. 60. Now on pp. 60-61. Now on p. 60. Now on p. 60.
At the inception of a liquidation proceeding, the SIPC staff reviews the automated data processing capabilities of the debtor, with a view toward determining whether to use SIPC's system alone; SIPC's system in conjunction with the debtor's existing data processing capability; or the debtor's capability, modified for the needs of the liquidation. This determination and any modifications necessary can be accomplished without delay to the liquidation proceeding. The trustee and SIPC select a public accounting firm which is best qualified to supply the accounting services required for that liquidation, which includes the automated data processing expertise needed for the unique requirements of a SIPA liquidation. In SIPC's view, all major public accounting firms are capable, in terms of experience and data processing expertise, of supplying those services. SIPC and the trustee engage the public accounting firm judged the best positioned to meet requirements of that liquidation at the lowest cost. The firm selected may well be one with a track record in SIPA proceedings and one which has developed relevant experience and expertise. SIPC's "automated liquidation system" was planned to interface with a debtor's own computer system. As stated by Charles Cash of KPMG Peat Marwick, SIPC's consultant on its data processing requirements, "The system was not to be a replacement for the broker dealer's own back office accounting system. We designed the system to support the liquidation process with enough back office functionality to handle routine needs. For larger, more complex liquidations, the broker dealer's existing system can be used to meet back office needs." (Exhibit A, Cash Ltr. June 15, 1992, at 2. Hereinafter "the Cash Letter.") (Emphasis in original.)
SIPC's software package can 1) generate a broad variety of reports needed by a trustee and SIPC, 2) provide an automated capability of claims
matching and sorting according to the results of the match, and 3) assist in the satisfaction of customer claims. It is a complex and highly sophisticated system. (Attached as Exhibit B is a copy of the table of contents of the user's manual.) The SIPC software package provides the only existing automated capability for matching customer claims with the debtor's records and reporting on the results of the match.
SIPC's system was designed for cases of the magnitude most frequently encountered by SIPC although it can now be used in cases larger than originally contemplated.1/ It is employed in all cases where its use will be most efficient and cost-effective, for example, in cases in which the debtor broker-dealer does not have an existing, staffed computer system which can be adapted to meet the special requirements of a SIPC liquidation.
The SIPC software package is continuously reviewed and critiqued. You can be confident that we will again review our automation system with all of the GAO comments in mind. If the addition of a capability is considered feasible and cost-effective, it will be added. "Each new user requirement and new technological development is reviewed in terms of other alternatives available, cost and potential use on other liquidations." Cash Letter at 3. See also Cash Letter at 4.
We believe it is important to call attention to that part of the report which correctly notes the significant differences between the obligations of SIPC member broker-dealers to their customers and the obligations of banks to their depositors. An example would be the report's conclusion that the "risks to the taxpayer inherent in SIPC are thus less than those associated with the deposit insurance system." ES at 3. Broker-dealers hold securities and cash entrusted to them by investors and are prohibited, except in a very limited manner, from using the securities or cash in their own business. Banks must use their resources, including insured deposits, to generate the income necessary for profits, operating expenses, and interest to depositors.
1/ "While it was not initially capable of handling 50,000 to 60,000 customer claims, this is not the case today. If we were to add high performance workstations and faster printing devices to the network, the system could handle substantially more than 50,000 to 60,000 customer claims. The advances in microcomputer technology, networks, high performance systems, and high speed printers make it almost impossible to place a practical limit on its ability to handle a large number of claims. These additions are easily added on an as needed basis and only involve a nominal cost. To suggest that the system will only 'handle the small number of claims that SIPC trustees typically liquidate' does not reflect its true capability." Cash Letter at 3. (Emphasis added.)
In the case of SIPC members, then, the risk of loss and the possibility of gain through appreciation or loss in value of securities is that of the investor. Banks, however, are obligated to depositors for principal and interest on deposits but the risk of nonperformance of the bank's portfolio of assets is the bank's. Thus, the SIPC member broker-dealer's financial condition is not threatened by the vagaries of the economy in the same manner as is a bank's.
The report's descriptions of and conclusions as to SIPC depict a successful program. The costs of SIPC's operations to the taxpayer have been zero. We believe we have taken all reasonable steps to ensure that continues. SIPC has met all its obligations in an environment of major changes in the industry, has absorbed losses of customer property resulting from massive frauds and, in short, has been equal to all the challenges it has faced. Although the SIPC fund is at its highest level in history, the report correctly notes the assessment burden has been low. SIPC has taken responsible measures to ensure the financial strength required to continue to meet its obligations and, as the report notes, assessments should remain low. It would seem fair to conclude that SIPC has achieved its objectives in a cost-effective manner and the success of the undertaking makes it a fine example of industry and government cooperation.

Very truly yours,
James G. Stearns
Chairman

JGS:ved
Enclosures
Appendix III
Comments From the Securities and Exchange Commission

UNITED STATES
SECURITIES AND EXCHANGE COMMISSION
WASHINGTON, D.C. 20549

DIVISION OF MARKET REGULATION
July 21, 1992
Mr. Richard L. Fogel
Assistant Comptroller General
General Government Division
United States General Accounting Office
Washington, D.C. 20548

Dear Mr. Fogel:
I am writing in response to your letter of June 1, 1992, to Chairman Breeden requesting our comments on the General Accounting Office's ("GAO's") draft report (the "Report").
We concur with the Report's central conclusion that the Securities Investor Protection Corporation ("SIPC") has been successful in protecting customers against losses. We are pleased to note that the Report also concludes that the Securities and Exchange Commission ("Commission") and the self-regulatory organizations ("SROs") have effectively enforced their financial responsibility rules and thus have minimized losses to SIPC. The Commission's promulgation and enforcement of Rules 15c3-1 and 15c3-3 under the Securities Exchange Act of 1934 (the "Act"), 17 CFR §§ 240.15c3-1 and 15c3-3, are noted for their particular importance in preventing such losses. With regard to protection of the investing public, the Report accurately relates that SIPC serves in a backup role to the regulatory activities of the Commission and the SROs. Additionally, the Report correctly describes the means by which the Commission and the SROs ensure that broker-dealers comply with their rules.
The Commission and SROs monitor compliance by, among other things: conducting routine examinations of broker-dealers; requiring firms whose capital falls below early-warning levels to notify the Commission and the SROs; requiring broker-dealers to prepare and file financial reports on a monthly and quarterly basis; and requiring firms to undergo annual audits by independent public accountants. To summarize, the Report describes a successful program of investor protection. SIPC's financial resources are at an all-time high, no taxpayer funds have ever been used, and SIPC's funding strategy represents a responsible approach for dealing with the SIPC fund's (the "Fund's") potential exposure.
Mr. Richard L. Fogel
Page 2
In the Report, the GAO offers five recommendations regarding the Commission's oversight responsibilities with respect to SIPC. In response to GAO's recommendations, we have reassessed the adequacy of earlier initiatives which sought to address the same concerns expressed in the Report.
See p. 53.
We agree that the adequacy of the SIPC fund should be reviewed periodically. SIPC and the Commission have done so, and we will continue to do so. During the last 7 years, SIPC has commissioned two task forces and Deloitte & Touche to review the adequacy of the SIPC fund and funding arrangements. The Commission staff has participated on these task forces. We have discussed the adequacy of the SIPC fund with the SIPC Board of Directors and SIPC staff. The adequacy of the SIPC fund is a matter of concern to us at all times.
The task forces were composed of representatives from the securities industry, SIPC, and the government.
The Deloitte & Touche study used a very conservative "worst case analysis" which we believe substantially overstates the SIPC advances likely required in liquidating a large broker-dealer. The Report suggests that massive fraud at a major firm or the simultaneous failures of several of the largest broker-dealers could result in losses to SIPC of over $1 billion. Fraud on such an enormous scale, while theoretically possible, is highly unlikely. In small firms, fraud has resulted in misappropriation of a significant proportion of customer assets held by a broker-dealer. However, the proportion of customer assets misappropriated in smaller firms cannot be used to reasonably estimate possible losses in larger firms. Larger firms have active internal surveillance and compliance departments that would most likely uncover such fraud well before it could jeopardize large amounts of customer assets. In addition, the Commission and the SROs have significantly more frequent inspection schedules and reporting requirements for larger firms as a means of preventing such fraudulent activity. (continued...)
Appendix III Comments From the Securities and Exchange Commhmion
Mr.
Page 3
R i c h a r d L . Fog s l
See p. 62.
S pecif i c a l l y , t h e R e p or t r e c ommends t ha t t h e r e g u l a t o r s p rovid e S I PC wit h t h e f o l l o w i n g i n f o r m a t i o n i n a d v a nc e o f l i q u i d a t i o n : ( 1 ) a l i st of br a n c h o f f i ce s ; ( 2) t he l oc a t i o n o f l eases fo r b r a n c h o f f i c e s ; ( 3 ) t h e l o c a t i o n o f e q u i p ment l e a s e s a nd other e x e c u t or y c o n t r a c t s ; ( 4 ) a l i st of ban k s o r f i n a n c i a l i nst i t u t i o n s w i t h f u n d s o r s e c u r i ti e s o n d e p o s i t ; ( 5 ) l oc a t i o n o f vault s a n d ot h e r s e c u re l o c a t i o n s ; ( 6 ) l oc a t i o n a n d d e s c r i p t i o n of computer d a ta b a s es a nd se r v i c e s us e d; ( 7 ) l oc a t i o n of m a i l dropsj ( 8 ) a char t of i nt er l o cki n g c o r p o r a t e r e l at i ons h i p s b etween th e b r o k e r - d e a l e r a n d i t s af f i l i at e s ; ( 9 ) a l i st of k ey personnel ; a n d ( 1 0 ) a n a c c u r a t e i d e a o f t he n u m ber o f act i v e c ustomer a c c ou nt s . Current l y , t he C ommissi o n ' s reg u l a t i o n s r e q u i r e br o k e r d ealers t o p r e p a r e a n d p r e s e rv e i n a n a c c e s s i b l e p l a c e a c onsiderabl e a mount o f i nf o r m a t i o n r e l a t i n g t o t he i r b us i n e s s . When a SI P C member's f i na n c i a l co n d i t i o n m a y w a r r a n t S I P C i nt e r v e n t i o n , t h e C ommissi o n a n d SRO st a f f s i m medi a t e l y b e g i n t o c oll e c t d a t a a n d d o c umentat i o n t h a t c o u l d b e u s e d i n l i qu i d a t i o n proceedi n gs . T hi s i n f o r m a t i o n i s sh a r e d w i t h S I P C a s s o o n as i t
is obt a i n ed.
T he Report i m p l i e s t h a t t he s a t i sf a c t i o n o f cu s t o mer c l a i m s m ay be del a yed b y a l a c k o f r e a d i l y a c c e s s i b l e d o c umentat i o n . Indeed, r e l u c t a n t , u n c o o per a t i v e ow n ers or ma n agers — who may have been i nv o l ve d i n f r a u d or w r o n g doing — are unl i k e l y t o p r ov i d e
(... cont i n usd) Also, g i v e n t h e o p e r a t i o n o f t he C o mmissi o n ' s a n d S ROs' r egulat or y p r o g r am, s i m u l t a n e ous f a i l u r e s o f s e v e r a l o f t he l argest b r o k e r - d e a l e r s r e q u i r i n g S I P C i n t e r v e n t i o n a r e h i g h l y u nli k e l y .
F inal l y , b e c a us e o f t h e s t r o n g r e g u l a t o r y p r o g r a m , t h e C ommission an d t h e S ROs have b een a bl s t o w i n d d o w n t h e o perat i o n s o f m an y b r o k e r - d e a l e r s e x p e r i e n c i n g d i f f i cu l t y w i t h o u t the need fo r S I P C i n t e r v e n t i o n . Lar ge b r o k e r - d e a l e r s s u c h a s Drexel B u r n ham Lambert , I n c . a n d T h omson McKinnon Secur i t i e s I nc . h ave been wound down i n t h i s f as h i o n . The i n f o r m a t i o n t h a t a b r o k e r - d e a l e r m us t m a i n t a i n i s li s t e d in Ru l e s 17 a - 3 a n d 1 7 a - 4 u n d e r t h e Ac t , 17 CF R II 2 40 . 1 7 a 3 and 1 7 a - 4 .
Page 84
GAO/GGD-88-108 Securities Investor Protection
hppendix III Cowwents Frow the Securities and Exchange Cowwission
M r. R i c h ar d Page 4
L. F ogel
import an t i n f o r m a t i o n . Th e R e p o r t , how e v e r, doe s n o t i d ent i f y a ny cases i n w h i c h t h e a b s e nce o f t h e a b o v e i n f o r m a t i o n a c t u a l l y i mpeded th e s a t i s f a c t i o n o f cu s t o mer c l a i m s . Ne v e r t h e l e s s , w e w il l r ev i e w t h i s r e c o mmendatio n w i t h S I P C an d t h e S ROs i n a n effort t o i m r o ve t h e i n f o r m ati o n g a t h e r i n g a n d d i s t r i bu t i o n p process,
See p. 62.
The Report e x p r e s s es c o n cern t h a t S I P C 's a u t o m ati o n pract i c e s may b e i n a d equate , p a r t i c u l a r l y w i t h re g a r d t o t he s ystem's a b i l i t y t o h a n d l e l i q u i d a t i o n o f a m a ) o r b r o k e r - d e a l e r . T he Report s u g g e st s t h a t t he C ommissi on , i n i t s ov er s i g h t capaci t y , s h o u l d i d e n t i f y a n d c o r r e c t s h o r t c o m i ng s w i t h t h e c urrent S I P C l i q u i d a t i o n s y s t e m , d e t e r m i n e S I P C' s a u t o mat i o n needs wit h re g a r d to l i q ui da t i o n of f i r ms o f va r i ou s s i z e s , a n d e nsure t h a t S I p C t r us t e e s p r o m p t l y a c q u i r e e f f i c i e n t aut o m a t e d li q u i d a t i o n sy s t e m s.
W e have pr e v i o u s l y e x p r e s sed t h e s e s ame concern s t o S I P C , a nd SIPC, i n o u r v i e w , h a s a d e q uat e l y r e s p o n ded . I n 19 8 5 , w e r ecommended to S I P C t h a t i t exp e d i t e a u t o m a t i o n o f i t s li q u i d a t i o n pr o c e s s . SI PC r e t a i n e d K P MG Peat Ma rw i c k a s consult a n t s t o d e v e l o p a n a u t o mated l i q u i d a t i o n s y s t e m . Bo th t he Commission a n d S I P C s t a f f s a n t i ci p a t e d t h a t au t o m a t i n g t h e l i q u i d a t i o n p r o c e s s w o ul d p r o v i d e g r e a t e r u n i f o r m i t y i n li q u i d a t i o n p r o c e edi ngs and e xpedit e s a t i s f a c t i o n o f c u s t o mer clai ms.
K PMG Peat Marwic k d e s i g n e d S I P C' s a u t o mate d l i qu i d a t i o n s ystem t o i n t e r f a c e w i t h b ro k e r - d e a l e r s ' ex i st i ng c o mput e r systems. T h e s y s t e m was desi g ned t o a l l o w S I P C qu i c k l y t o : m atch and s o r t cu s t o mer c l a i m s f o r , a n d a l i qui da t i n g b r o k e r dealer ' s r e c o r d s of , ca s h a n d se c u r i t i es ; g en e r a t e r e p o r t s t hat the SIPC t r u s t e e i s r eq u i r e d t o c o m p l e t e ; a n d o t h e r w i s e meet inf ormati on pr o c e s s i n g r e q u i r e m ents i n b r o k e r - d e a l e r l iquidat i o ns . KP M G Peat Mar w i c k a l s o d e s i g n e d t h e s y s t e m t o b e user- f r i e n d l y a n d n o t t o re qu i r e SI P C t o m a i n t a i n a l a r g e s t a f f s olel y t o o p e r at e t h e c o mputer s .
In i t i a l l y , SI P C a n d K PMG Peat Ma r w i c k de c i d e d t h a t t he system shoul d b e a b l e t o pr oc e s s t h e t yp e s o f ca se s m o s t f requent l y e n c o u n t e r e d b y S I PC — those c a s e s w i t h a p p r o x i m a t e l y 10,000 t o 1 5 , 0 0 0 c u s t o mer c l a i m s . The sof t w a r e w a s a t f i r st operable on l y o n a si n g l e I B M pe r s o nal c o m pute r. T he sy s t e m h a s
Page 95
GAO/GGD-92-109 Securities Investor Protection
hpyendlx HI Comments From the Securities and Exchange Commission
Mr. R i c h a r d L . Fo g e l Page 5
b een progres s i v el y u p g r a ded and now may be i n c o r p o r a t e d i n t o a network w i t h m u l t i p l e wo r k s t a t i o n s . Al t h ou g h t h e R e p o rt i ndicate s t h a t S I P C' s a u t omatio n s y s t em i s c u r r e n t l y c a p a bl e o n l y o f l i q u i d a t i n g a f i r m w i t h f e w e r t h a n 6 0 , 0 0 0 c u s t omers , i n a let t e r t o S I P C f r o m a r ep r e s e n t a t i v e o f KP M G Peat M arw ic k , t he represent a t i v e s t a t e d t h a t i t i s a l m o s t imp o s s i b l e t o p l ac e a pract i c a l n u m eri c al l i mi t on t h e sy s t e m 's a b i l i t y t o h an d l e claims . T h e o n l y pr a c t i c a l l i mi t a t i o n o n t h e a u t o m ati o n s y s t e m rel a t e s t o c o mputer ha r d w a r e. I f a la r ge b r ok e r - d e a l e r m u s t b e li q u i d a t e d , SI P C c a n p r o m ptly a c q u i r e o r r an t t he h ar d w a r e n ecessary t o c o mpl et e t h e l i qu i d a t i p n , o r i t c an u s e t h e b r o k e r dealer' s e x i s t i n g co m puter sy s t e m s.
N otwith s t a n d i n g o u r p r e s en t a s s e ssment o f S I P C ' s a u t o mat i o n program, we have r e s o l v e d t o co n s i d e r t h i s ma t t e r f u r t he r . The C ommission i s cu r r e n t l y un d e r t a k i n g a n i ns p e c t i o n o f SI P C ' s operat i o n s . Th i s i nsp e c t i o n s h o u l d b e c o mpl e t e d d u r i n g t h e l ast q uarte r o f 199 2 . SI PC ' s a u t o m a t i o n s y s t e m i s o n e o f t h e a rea s t hat w i l l be ex a m i n e d b y t h e C ommiss i o n s t a f f . Upon c o m p l e t i o n o f t h e i n s p e c t i o n , w e w i l l t ake s u c h a c t i o n a s a p p e a r s a ppropri a t e .
See p. 62.
The commissio n i s e n g a ge d i n c o n s t a n t o v e r s i g h t o f SI P C '• acti v i t i e s . Com m i s s i o n s t a f f m e mber s h o l d qu a r t e r l y m e e t i n g s w it h S I P C s t a f f m e mber s t o di sc u s s m a t t e r s t h a t co n c e r n o r r equir e t h e a t t e n t i o n o f t h e C ommissi on . I n t h e co u r s e o f d a y t o-day op er a t i o n s , t h e t w o s t a f f s c o mmunicat e r e g u l a r l y b y t el ephone . T h e D i r e c t o r o f t he C o mmissi o n ' s D i v i s i o n o f M a r k e t R egulat i o n at t en d s t h e m e e t i n g s o f S I P C ' s B o ar d o f D ir e c t o r s . B ylaws passed by S I P C' s B o ar d o f D i r e c t o r s m us t b e s u b mi t t e d t o t he Commissi o n b e f o r e t h e y t a k e e f f e c t . SI PC ' s r ul es m u s t be approved b y t h e C o m missi o n . The Co m miss i o n r e c e i v e s m o n t h l y reports fr o m S I P C c o ncern i n g t h e s t at u s o f t he Fu n d a n d c u r r e n t
I n f a c t , a r ep r e s e n t a t i v e o f K P MG Peat Mar wic k h a s s t a t e d t hat t h e r e i s n o p r a c t i c a l l i m it on t he n u mber o f cl ai m s t h a t c a n be processed under t h e e x i s t i n g s y s t e m . ~ Let t er f r om J a m es G Stearns t o Ri c h a r d L. Fo g e l (J u n e 2 2 , 19 9 2 ) ( K PMG Peat M arw ic k ' s l e t t e r t o SI P C r e s p o n d i n g t o t h e G A O' s d r a f t r epo r t i s at t ac h e d a s an appendi x t o M r . S t e a r n s ' l et t er ) . T he Di v i s i o n o f M a r k e t R e g u l a t i o n i s r es p o n s i b l e f o r , am o n g o ther t h i n g s , r e g u l a ti n g t h e a c ti v i t i e s of br ok e r - d e a l e r s .
Page 96
GAD/GGD-92-109 Securities Investor Protection
hppendlx IH Comments From the Securities and Exchange Commission
M r. R i c h a r d Page 6
L. F ogel
l i q u i d a t i o n s . SI PC s u b m i t s a f t e r t he e n d o f e a c h c a l e n d a r y e a r a n annual r e p o r t t o t he C o mmiss i o n t h a t i ncl u d e s i n d e p endent l y a udit e d f i na n c i a l s t at e m e n t s . Th i s r ep o r t i s f or wa r d e d t o C ongress w i t h s u c h c omment a s t h e C ommissio n d e ems approp r i a t e . Personnel a t t he C ommissi o n ' s re g i o n a l o f f i ce s a s s i s t a s n e e d e d i n SIPC l i q u i d a t i o n s .
Members of t h e C o mmissio n ' s s t a f f . m o n i t o r SI P C o p e r a t i o n s i n o ther w a ys . I n 199 1 , a n A s s o c i a t e D i r e c t o r o f t h e D iv i s i o n o f M arket Regul a t i o n s e r v e d o n a S I P C-appoi n t e d t a s k f o r c e f o r me d t o analyse an d make r e c ommendati on s o n S I P C a s s e ssments . Th i s committe e r e c ommended, an d S I P C' s B o ar d o f Di r ect o r s i mp l e m ente d , a pro g r a m u n der w h i c h S I P C i n t e n d s t o bu i l d t he Fu n d t o $1 b i l l i o n . Th i s yea r , C o mmissio n s t a f f m e mbers a r e p a r t i ci p a ti n g in a s u b c o mmitt e e o f t h e Ma r k e t T r a n s a c t i o n s A d v i s o r y C o mmitt e e t hat w i l l m a k e r e c ommendations r e g a r d i n g p r o c edures t o b e f ol l o wed i n t h e e v e n t t h a t a f i r m re g i s t e r e d as b o t h a b r ok e r d ealer a n d a f u t u r e s c o mmiss i o n me r c h an t m us t b e l i qui d a t e d . I n a d d i t i o n , t he C ommissi o n p e r f o r med a n i n s p e c t i o n o f S IPC's o p e r a t i o n s i n 19 8 5 . As pr ev i o u s l y m e n t i o n e d , a n o t h e r i nspec t i o n i s u n d e r w ay . As not e d i n t he R e p o r t , h o w e v er , w e h a v e not e s t a b l i s h e d a p e r i o d i c i ns p e c t i o n s c h e d ul e d e s i g n a t i n g f i xe d d ates o n w h i c h t h e C ommissio n i s t o i nsp e c t S I P C ' s o p e r a t i o n s . W e agree wi t h t h e r e c ommendatio n t h a t s u c h a s c h e d ul e s h o ul d b e e stabl i s h e d , a n d w e w i l l i n sp e c t S I P C a c c o r d i n g t o a s e t sc h e d u l e i n t h e f u t u r e . W w i l l d et e r m i n e t h e a p p r o p r i a t e t i m e t a b l e a f t er e e valuat i n g t h e r e s u l t s o f o u r c u r r e n t i ns p e c t i o n .
See p. 73.
T he Repor t n o t e s t h a t u n d e r t h e S e c u r it i es I nv e s t o r Prote c t i o n A c t o f 19 7 0 ( " S I P A " ) a n d S I P C ' s b y l a w s , S I P C members
T he Marke t T r a n s a c t i o n s A d v i s o r y C o mmi t t e e w a s f o r m e d p ursuan t t o t he M a r k e t R e f o r m A c t of 199 0 , P u b . L . N o. 101 - 4 3 2 l 1 04 St a t . 963 ( 19 9 0 ) .
S IPC members i n c l u d e a l l r eg i st e r e d b r o k e r - d e a l e r s o t h e r t han ( 1 ) t h o s e whose p r i n c i p a l b u s i n e s s , i n t he d e t e r m i n a t i o n o f SIPC, i s co n d u c t e d o u t s i d e t h e U n i t e d S t a t e s ' ( 2) t hos e w h o s e b usiness c o n s i s t s e x c l u s i v e l y o f di st r i b ut i o n o f sha r e s o f regist e r e d o pen end i n v e stment companies or un i t i nve s t m ent ( conti n u e d . . . )
Page 97
GAO/GGD-92-109 Securities Investor Protection
hppendlx III Comments From the Securities and Exchange Commission
M r. R i c h a r d
Page 7
L . Fo g e l
must i n f o r m cu s t o m e rs o f t he i r me m b e rs h i p i n SI pC , wh i l e non S IPC i i r m s t h a t a r e r eg i s t e r e d w i t h t he C o mmis s i o n n e e d n o t dis c l o s e t h e i r no n - membershi p i n S IP C . Th e R e p o r t r ec o mmends that t h e C ommissio n d r a f t a r u l e r eq u i r i n g r e g is t e x e d i n v g s t ment adviser s a n d o t h e x' Commissio n - r e g i s t e r e d " i n t e r m e d i a r i e s " t hat have cus t od y o f cl i e n t f u n d s t o d i sc l o s e t o cl i en t s t h a t t h e y ar e n ot S I P C members .
I n t h e G A o' s v i e w , t h e r at i on a l e f o r su c h a r eq u i r e ment i s t wo- f o l d . F i r st , t he s ec u r i t i es ac t i v i t i es o f t hes e n o n - S I p C int e r medi a r i e s s u b ] e c t t h e i r cus t o m er s t o t h e s a me ri s k s o f l oss or mi s a p p r o p r i a t i o n as d o SI P C m embers. Seco n d , f o r adv i s e r s an d other n o n - S I P C i n t e r m e d i a r i .e s t h a t a r e af f i l i at e d or a sso c i a t e d with S I p c b r o k e r - d e a l e r s , t h e r e i s t h e ad d it i on a l r i sk t h at i nvest or s w i l l be con f u s e d a s t o w h e t h e r o r n o t f un d s h e l d b y t h e adviser o r i nt e r m e d i a r y a r e p r o t e c t e d b y S I p C . Acc o r d i n g t o t h e Report , i f t he s e n o n - member f i r m s we re r e q u i r e d t o d i scl o s e t h a t they we re n o t SI P C m embers, i n v e s t o r s w o u l d b e b e t t e r i n f or m d e a bout t h e s c o p e o f S IP C ' s c o v e r a g e a n d a b ou t i t s x el ev a n c e t o thei r i n v e s t m e nt d e c i s i o n s . Th e re qu i r e d n o n - m embership di s c l o s u r e w o u l d a l s o di m in i s h t he pot e n t i a l f o r c on f us i o n a ri s i n g f r o m a f f i l i at i ons or as s o c i a t i o n s b e t w e e n S I P C a n d n o n
S IPC f i r m s .
(... cont i n ued) t xust s , t h e sa l e o f va r i ab l e a n n u i t i e s, t he b u s i n e s s o f i nsurance , o r t he b u s i n e s s o f r en d e r i n g i n v e s t ment a d v i s o r y
s ervices t o o n e o r m or e r e g i s t e r e d i n v e stment companies or insurance company separat e a c c ounts ; a n d ( 3 ) b r o k e r - d e a l e r s w h ose s ecuri t i e s b u s i n e s s i s l i mi t e d t o U . S . G o v e xnment s e c u r i ti e s a n d who are re g i s t e r e d w i t h t h e C o mmission u n d ex a p r o v i s i o n of l aw
which d oe s n o t re qu i r e SI P C m embership . At s e v e r a l p l a c e s i n t h e Re p o r t , G A Gs u g gests t h a t i nvest ment a d v i s e r s a r e " i nt er m e d i a r i e s " b e c a u s e t h e y s e l l secur i t i e s p r o d u c t s t o cu s t o m e rs . ~ , ~ , Repor t at 5 . 9 . This i s a n i nc o r r e c t st a t e m e nt . I nve s t m en t a d v i s e r s d o n o t se l l secux'it i e s p r o d u c t s t o t he i r cus t o m e rs . The R e p or t i s a ccu r a t e , h owever, w he n i t st at es t h a t i nv e s t m en t a d v i s o r y f i r m s m a y "manage di s c r e t i o n a r y or no n - d i s c r e t i on a r y ac c o u n ts . . . and h ave t emporar y ' c u s t o d y ' o f cu s t o me r p r o p e r t y . . . . " Repor t a t
5 . 15 , n .7 .
SBS p. 72.
Page 98
GAOIGGD-92-109 Securities Investor Protection
hppendlx III Comments From the Securities and Exchange Commlmion
M r. R i c h a r d Page 8
L. F ogel
Regarding i n t e r medi a r i e s r e g i s t e r e d as i n v e s t g e nt a d v i s e r s , t he Commissi o n ' s D i v i s i o n o f I nv e s t m ent Management h as c ommunicate d t o u s t h a t i t d oes n o t b el i e v e t h a t i t i s nec e s s a r y o r appr o p r i a t e t o r e q u i r e i n v e s t ment a d v i s e r s w i t h c u s t o d y o f cl i e n t f u n d s t o d i sc l o s e th e i r non - m embership i n SI P C . Und e r t he S IPA, i n v e s t ment a d v i s e r s a r e e x c l u d e d f r o m S I P C membershi p . Consequently , t h e r e i s n o m or e r e a son t o r e q u i r e i n v e s t ment advis er s wi t h cu s t o d y o f c l i ent f und s o r sec u r i t i es t o d i sc l ose thei r n o n - S I P C s t a t u s t h a n t h e r e i s r ea s o n t o r eq u i r e i nv e s t m e nt advis er s t o d i sc l o s e t h a t t h e y a r e n o t m e mbers of th e F ed e r a l D eposit I n s u r a nce Corpor a t i o n .
The Di v i s i o n o f I nv e s t m ent M anagement c ommented t h a t , a s t he R eport r e c o g n i z e s , o t h e r f i nan c i a l f i r ms o u ts i d e t h e C o mmi.ani o n ' s ) ur i s d i c t i o n a l s o s e l l secu r i t i e s a n d s e c u r i t i e s- r e l a t e d p r o d u c t s
w ithout b e i n g r e q u i r e d t o d i s c l o s e t h e i r S I P C-membership s t a t u s
hanks and f u t u r e co m miss i o n m e r c h a n t s . To r e qui r e r egi s t e r e d i n v e s t ment a d v i s e r s w i t h cus t o d y o f cl i en t f un d s t o d is c l o s e t h e i r no n - membershi p i n S I P C t h u s r u n s t h e r i sk o f creat i n g t h e f a l s e i m p r e s s i o n t h a t f un d s a n d s e c u r i t i e s t h a t t h e y m anage or h o l d a r e a f f o r d e d l e s s p r o t e c t i o n t h a n f u n d s a n d
securi t i e s h e ( d b y f i na n c i a l f i r ms o u t s i d e t h e C ommission ' s j ur i s d i c t i o n . '
Regarding i n t e r m edi a r i e s r e g i s t e r e d a s b r o k e r - d e a l e r s , t he D ivi s i o n o f M a r ke t R e g ul a t i o n i s c o n s i d e r i n g r e c ommending t o t h e c ommission a r u l e t h a t a d d r e s s e s s ome o f t h e i ss u e s r a i s e d b y GAO. Th e r u l e u n d e r c o n s i d e r a t i o n w o u l d re q u i r e di s c l o s u r e i n t hose i n s t a n ce s wh er e c u s t o mer c o n f u s i o n c o n c e r n i n g S I P C p rot e c t i o n ma y r e s u l t (~ , when a n on - S I P C a f f i l i at e h a s a simi l a r n a me , a n d t h e s a me p e r s o n ne l a n d of f i c es , as a S I PC m ember). T h e r u l e m a y a l s o a d d r e s s d i s c l o s u r e r e q u i r e ment s f o r
n on-SIPC, r e g i s t e r e d b r o k e r - d e a l e r s .
The Di v i s i o n o f I nv e s t m en t M a n agement i s r e sp o n s i b l e f o r , a mong othe r t h i n g s , r e g u l a t i n g t h e a c t i v i t i es o f r e gi st e re d i nvestment ad v i s e r s .
See p. 72.
The Div i s i o n o f I nv e s t ment Management b e l i e v e s t h e r e i s some meri t i n G A O' s c o n t e n t i o n t h a t th er e i s a pos s i b i l i t y of investo r c o n t u s i o n c o n c er n i n g t h e a v a i l a b i l i t y oi' S I P C p r o t e c t i o n f or r e g i s t e r e d i n v e s t ment a d v i s e r s t h a t a r e a f f i l i at e d w i t h S IP C b roker - d e a l e r s . As di sc u s s e d b e l o w , t h e D i v i si o n o f M a r k e t Regulat i o n i s c o n s i d e r i n g r u l e makin g t h a t ad d r e s s es t h i s i ssu e , and th e D i v i s i o n o f I nv e s t ment Management w i l l ass i s t t h at D ivi s i o n .
Page 99
GAO/GGD-92-109 Securities Investor Protection
hypemlls Rl Comments From the Secnrltles and Exchange Comndssion
Mr. R i c h a rd L . F o g e l page 9
We apprecia t e t h e o p p o r t u n i t y t o c om ment o n t h e d r af t report . w e wou l d b e h a p p y t o m e e t w i t h t h e GA O s t a f f at y our c onvenience t o d i s c u s s ou r c omments f u r t h e r . I i you h a v e a n y questi on s r e g a r d i n g t h i s l et t e r , p l ea s e f e e l f r e e t o t el e p h on e me
at ( 2 02 ) 2 7 2- 3000, or if y ou h a v e a ny q u est i o n s r e g a r d i n g regist e r e d i n v e stment adv i s e xs , p l e a s e c o nt ac t Gene Gohlke, A ssociat e D i x e ct ox , D i v i s i o n o f I n v e s t ment Management, a t ( 2 0 2 )
2 72 2 0 4 3 .
S incere l y ,
Willi a m H . Heyman
D irec t o r
Page 100
GAD/GGD-92-109 Securities Investor Protection
A endix IV
Major Contributors to This Report
Division, Washington, D.C. Office of the General Counsel, Washington, D.C.
General Government
Stephen C. Swaim, Assistant Director Teresa Anderson, Evaluator-In-Charge Wesley Phillips, Evaluator Mark Ulanowicz, Evaluator Rosemary Healey, Attorney-Advisor
(2SSS25)
Page 101
GAD/GGD-92-109 Securities Investor Protection
()rder ing I tif ormat i«in
The first. c«ipy of each ('AO report and testimony is free. A«l«liti«ntal ««ipi«s are $2 each. Orders sh«iuld be sent to the fo l l o w ing a«ldress, arr«impanie«l by a check or money order made out t«i the Yiuperin ten«lent. of I)«icuments, when ttecessary. Orders for 100 or m«ire r«iliies t«i b«maile«l to a single address are discounted 2,"i per«ent.. 1I.S. (r'«neral Accounting Office
P.(). 11«ix 601,"i
(~ait.h«rsliurg, MD 20877 ()r«l«rs may als«i be placed by calling (202) 27,"i-(i241,
United Stat es General Acr.ount,ing ()A]r e Washingto», I).C. 2054N Ofheial 13usin<",ss
First-Class Mail Postage & Fe es Paid
GAO
Permit No . G100
Penalty for Private Ipse f:300 | https://www.scribd.com/document/70857055/GAO-Audit-SIPC-1992 | CC-MAIN-2019-04 | en | refinedweb |
ISLAMIC RESEARCH AND TRAINING INSTITUTE
RISK MANAGEMENT
AN ANALYSIS OF ISSUES IN
ISLAMIC FINANCIAL INDUSTRY
TARIQULLAH KHAN
HABIB AHMED
OCCASIONAL PAPER
NO. 5
JEDDAH - SAUDI ARABIA
1422H (2001)
© Islamic Research and Training Institute, 2001
Islamic Development Bank
King Fahd National Library Cataloging-in-Publication data
Khan, Tariqullah
Risk Management: An Analysis of Issues in Islamic Financial Industry
Tariqullah Khan and Habib Ahmed - Jeddah
183 P; 17 x 24 cm
ISBN: 9960-32-109-6
Risk Management - Religious Aspects
Business Enterprises; Finance - Religious Aspects
I- Ahmed, Habib (j.a.) II-Title
658.155 dc 2113/22
Legal Deposit No. 2113/22
ISBN: 9960-32-109-6
______________________________________________________________________________
The findings, interpretations and conclusions expressed in this paper are entirely those of the
authors. They do not necessarily represent the views of the Islamic Development Bank, its member
countries, and the Islamic Research and Training Institute.
References and citations from this study are allowed but must be properly acknowledged.
First Edition
1422H (2001)
CONTENTS
Pages
ACKNOWLEDGEMENTS 7
FOREWORD 9
GLOSSARY 11
ABBREVIATIONS 15
EXECUTIVE SUMMARY 17
I. Introduction 21
1.1 Unique nature of Islamic banking risks 21
1.2 Systemic importance of Islamic banks 22
1.3 Objectives of the paper 23
1.4 Outline of the paper 24
II. Risk Management: Basic Concepts and Techniques 25
2.1 Introduction 25
2.2 Risks faced by financial institutions 27
2.3 Risk management: background and evolution 28
2.4 Risk management: the process and system 30
2.4.1 Establishing appropriate risk management environment and sound policies and procedures 30
2.4.2 Maintaining an appropriate risk measurement, mitigating, and monitoring process 31
2.4.3 Adequate internal controls 32
2.5 Management processes of specific risks 32
2.5.1 Credit risk management 32
2.5.2 Interest rate risk management 34
2.5.3 Liquidity risk management 36
2.5.4 Operational risk management 38
2.6 Risk management and mitigation techniques 39
2.6.1 GAP analysis 39
2.6.2 Duration-gap analysis
2.6.3 Value at risk (VaR) 40
2.6.4 Risk adjusted rate of return (RAROC) 41
2.6.5 Securitization 42
2.6.6 Derivatives 44
2.6.6.1 Interest-rate swaps 46
2.6.6.2 Credit derivatives 47
2.7 Islamic financial institutions: nature and risks 49
2.7.1 Nature of risks faced by Islamic banks 49
2.7.2 Unique counterparty risks of Islamic modes of finance 51
2.7.2.1 Murābaḥah financing 53
2.7.2.2 Salam financing 54
2.7.2.3 Istiṣnā‘ financing 54
2.7.2.4 Mushārakah-Muḍārabah (M-M) financing 55
III. Risk Management: A Survey of Islamic Financial Institutions 56
3.1 Introduction 59
3.2 Risk perceptions 59
3.2.1 Overall risks faced by Islamic financial institutions 60
3.2.2 Risks in different modes of financing 61
3.2.3 Additional issues regarding risks faced by Islamic financial institutions 62
3.3 Risk management system and process 65
3.3.1 Establishing appropriate risk management environment and sound policies and procedures 66
3.3.2 Maintaining an appropriate risk measurement, mitigating, and monitoring process 66
3.3.3 Adequate internal controls 67
3.4 Other issues and concerns 71
3.5 Risk management in Islamic financial institutions: an assessment 72
IV. Risk Management: Regulatory Perspectives 75
4.1 Economic rationale of regulatory control on bank risks 77
4.1.1 Controlling systemic risks 77
4.1.2 Enhancing the public’s confidence in markets 77
4.1.3 Controlling the risk of moral hazard 79
4.2 Instruments of regulation and supervision 81
4.2.1 Regulating risk capital: current standards and new proposals 82
4.2.1.1 Regulatory capital for credit risk: present standards 82
4.2.1.2 Reforming regulatory capital for credit risk: the Proposed New Basel Accord 83
4.2.1.3 Treatment of credit risk under the Proposed New Accord 84
4.2.1.4 Regulatory treatment of market risk 86
4.2.1.5 Banking book interest rate risk 91
4.2.1.6 Treatment of securitization risk 92
4.2.1.7 Treatment of operational risks 93
4.2.2 Effective supervision 93
4.2.3 Risk disclosures: enhancing transparency about the future 94
4.3 Regulation and supervision of Islamic banks 98
4.3.1 Relevance of the international standards for Islamic banks 102
4.3.2 The present state of Islamic banking supervision 102
4.3.3 Unique systemic risk of Islamic banking 103
4.3.3.1 Preventing risk transmission 105
4.3.3.2 Preventing the transmission of risks to demand deposits 106
4.3.3.3 Other systemic considerations 109
V. Risk Management: Fiqh Related Challenges 112
5.1 Introduction 113
5.1.1 Attitude towards risk 113
5.1.2 Financial risk tolerance 113
5.2 Credit risks 115
5.2.1 Importance of expected loss calculation 116
5.2.2 Credit risk mitigation techniques 116
5.2.2.1 Loan loss reserves 117
5.2.2.2 Collateral 117
5.2.2.3 On-balance sheet netting 118
5.2.2.4 Guarantees 121
5.2.2.5 Credit derivatives and securitization 121
5.2.2.6 Contractual risk mitigation 123
5.2.2.7 Internal ratings 125
5.2.2.8 RAROC 126
5.2.2.9 Computerized Models 128
5.3 Market risks 129
5.3.1 Business challenges of Islamic banks: a general observation 129
5.3.2 Composition of overall market risks 129
5.3.3 Challenges of benchmark rate risk management 131
5.3.3.1 Two-step contracts and GAP analysis 132
5.3.3.2 Floating rate contracts 133
5.3.3.3 Permissibility of swaps 135
5.3.3 Challenges of managing commodity and equity price risks 135
5.3.3.1 Salam and commodity futures 137
5.3.3.2 Bay‘ al-Tawrīd with Khiyār al-sharṭ 138
5.3.3.3 Parallel contracts 139
5.3.4 Equity price risks and the use of Bay‘ al-‘arboon 140
5.3.5 Challenges of managing foreign exchange risk 143
5.3.5.1 Avoid transaction risks 143
5.3.5.2 Netting 144
5.3.5.3 Swap of liabilities 144
5.3.5.4 Deposit swap 144
5.3.5.5 Currency forwards and futures 145
5.3.5.6 Synthetic forward 145
5.3.5.7 Immunization 146
5.4 Liquidity risk 146
VI. Conclusions 146
6.1 The environment 151
6.2 Risks faced by the Islamic financial institutions 151
6.3 Risks management techniques 152
6.4 Risk perception and management in Islamic banks 152
6.5 Regulatory concerns with risks management 152
6.6 Instruments of risk-based regulation 153
6.7 Risk-based regulation and supervision of Islamic banks 154
6.8 Risk management: Sharī‘ah-based challenges 154
VII. Policy Implications 155
7.1 Management responsibility 157
7.2 Risk reports 157
7.3 Internal ratings 157
7.4 Risk disclosures 158
7.5 Supporting institutions and facilities 158
7.6 Participation in the process of developing the international standards 158
7.7 Research and training 159
Appendix 1: List of financial institutions included in the study 161
Appendix 2: Samples of risk reports 163
Appendix 3: Questionnaire 167
Bibliography 177
ACKNOWLEDGEMENTS
A number of people have contributed comments, suggestions, and
input at different stages of the writing of this paper. We would like to thank the
members of the Policy Committee of Islamic Development Bank (IDB) and our
colleagues of the Islamic Research and Training Institute (IRTI) who gave
valuable suggestions on the proposal of the paper.
The study surveyed risk management issues in Islamic financial
institutions. We are grateful to all Islamic banks that have responded to our
questionnaires. These banks include AlBaraka Bank Bangladesh Limited,
ALBaraka Turkish Finance House, Turkey, Meezan Investment Bank Limited,
Pakistan, Badr Forte Bank, Russia, Islami Bank Bangladesh Limited, Kuwait
Turkish Evkaf Finans House, Turkey, and Tadamon Islamic Bank, Sudan. We
also visited several Islamic financial institutions in four countries to
interview the officials and collect information on risk management issues in
these institutions. We gratefully acknowledge their cooperation and assistance in
providing us with all the relevant information. Among the banks and officials
who deserve special mention are the following:
ABC Islamic Bank, Bahrain: Mr. Hassan A. Alaali (Executive Director).
Abu Dhabi Islamic Bank, UAE: Abdul-Rahman Abdul-Malik (Chief
Executive), Badaruzzaman H. A. Ahmed (Vice President, Internal Audit
Dept.), Ken Baldwin (Manager, ALM), Ahmed Masood (Manager
Strategic Planning) and Asghar Ijaz (Manager Special Projects).
AlBaraka Islamic Bank, Bahrain: Abdul Kader Kazi (Senior Manager,
International Banking).
Bahrain Islamic Bank, Bahrain: Abdulla AbolFatih (General Manager),
Abdulla Ismail Mohd. Ali (Credit Manager), Jawaad Ameeri (Credit
Department Supervisor), and Adnan Abdulla Al-Bassam (Internal Audit
Department Manager).
Bahrain Monetary Authority, Bahrain: Anwar Khalifa al Sadah (Director,
Financial Institutions Supervision Directorate).
Bank Islam Malaysia Berhad, Kuala Lumpur: Abdul Razak Yaakub
(Head, Risk Management Department).
Citi Islamic Investment Bank, Bahrain: Aref A. Kooheji (Vice President,
Global Islamic Finance).
Dubai Islamic Bank, UAE: Buti Khalifa Bin Darwish (General Manager).
Faisal Islamic Bank Egypt, Cairo: Tag ElDin A. H. Sadek (Manager,
Foreign Department).
First Islamic Investment Bank, Bahrain: Alan Barsley and Shahzad Iqbal.
Investors Bank, Bahrain: Yash Parasnis (Head of Risk Management).
Islamic Development Bank, Jeddah, Saudi Arabia.
Shamil Bank of Bahrain, Bahrain: Dr Saad S. AlMartaan (Chief
Executive) and Ghulam Jeelani (Assistant General Manager, Risk
Management).
Earlier drafts of this paper were presented at the IRTI seminar and IDB’s
Policy Committee meeting. We would like to thank Mabid Ali al-Jarhi, Boualem
Bendjilali, M. Umer Chapra, Hussien Kamel Fahmy, Munawar Iqbal, and M.
Fahim Khan from IRTI and the members of the Bank’s Policy Committee for
their valuable comments and suggestions. We are also grateful to Sami
Hammoud, Consulting Expert on Islamic Banking and Finance, Zamir Iqbal,
World Bank, Professor Mervyn K. Lewis, University of South Australia, and
David Marston, International Monetary Fund (IMF), who as external referees
gave thoughtful comments and suggestions on the paper. We are also thankful to
Mr. Syed Qamar Ahmad for proofreading the final version of the paper.
The comments and suggestions of these scholars were helpful in revising
the paper. The views expressed in the paper, however, do not reflect their views,
nor of IRTI or IDB. Views expressed and any remaining errors in the final draft
are those of the authors only.
Jumada' II 29, 1422H TARIQULLAH KHAN
September 17, 2001 HABIB AHMED
FOREWORD
The Islamic financial industry has been growing continuously since the
first institutions started operating during the early Seventies. At present, most
Islamic financial services are being provided in almost all parts of the world by
different financial institutions. Standards for financial reporting, accounting and
auditing have already been put in place. Progress is being made in establishing
an Islamic capital and inter-bank money market, an Islamic rating agency and an
Islamic financial services supervisory board. These developments imply that the
Islamic financial industry has become systemically important for the
international financial system.
Due to its special treatment of different risks, asset-based nature and the
strong concerns of clients for Islamic values, the concept of Islamic finance
contains inherent features that enhance market discipline and financial stability.
However, due to the relatively new microstructures of the Islamic modes of
finance and the unique risk characteristics of liabilities and assets, the Islamic
financial industry also poses a number of systemic risks. Research studies can be
instrumental in strengthening its stabilizing features and in mitigating the
potential sources of instability. As a result, the stability of financial markets can
be enhanced along with attaining the objective of growth. This is important for
the industry's sustained growth and its contribution to the stability and efficiency
of the international financial markets.
With such a background, the Board of Executive Directors of the
Islamic Development Bank (IDB), asked IRTI to conduct a research dealing
with risk management issues of the Islamic financial industry. As a result,
Tariqullah Khan and Habib Ahmed - researchers at the Institute have prepared
the present paper. Indeed, the subject is very important and the authors have
attempted to undertake a comprehensive stock taking and analysis of some of the
relevant issues. Standard setters, Sharī‘ah scholars, policy makers, practitioners,
academia and researchers may find the work relevant. It is hoped that the study
will be instrumental in motivating more research in this important area.
Mabid Ali Al-Jarhi
Director, IRTI
GLOSSARY
(ARABIC TERMS USED IN THE PAPER)
al-Khirāju bi al-Ḍamān and al-Ghunmu bi al-Ghurm : These are the two
fundamental axioms of Islamic finance, implying that entitlement to
return from an asset is intrinsically related to the liability of loss
of that asset.
‘Arboon, bay‘ al- : A sale contract in which a small part of the price is paid
as earnest money (a down payment); the object and the remaining price
are exchanged at a future date. If the buyer rescinds the contract, he
forgoes the earnest money as compensation to the seller for the delay
in the sale.
Band al-Iḥsān : Beneficence clause in a Salam contract used in the Sudan. It
is aimed at compensating the party to the contract that is adversely
affected by changes in prices between the contract date and the
delivery date.
Band al-Jazāa : Penalty clause in an Istiṣnā‘ contract to ensure contract
enforceability.
Bay‘ : Stands for sale and has been used here as a prefix in
referring to the different sale-based modes of Islamic
finance, like Murābaḥah, Ijārah, Istiṣnā‘ and Salam.
Fiqh : Refers to the whole corpus of Islamic jurisprudence. In
contrast with conventional law, Fiqh covers all aspects of
life, religious, political, social or economic. In addition to
religious observances like prayer, fasting, Zakāh and
pilgrimage, it also covers family law, inheritance, social
and economic rights and obligations, commercial law,
criminal law, constitutional law and international
relations, including war. The whole corpus of Fiqh is
based primarily on interpretations of the Qur’an and the
Sunnah and secondarily on Ijmā‘ (consensus) and Ijtihād
(individual judgement). While the Qur’an and the Sunnah
are immutable, Fiqhī verdicts may change due to
changing circumstances.
Gharar : Uncertainty of outcome caused by ambiguous conditions
in contracts of deferred exchange.
Ijārah, bay‘ al- : Sale of usufructs (operating lease).
Istiṣnā‘, bay‘ al- : Refers to a contract whereby a manufacturer (contractor)
agrees to produce (build) and deliver a certain good or
premise at a given price on a given date in the future.
This is an exception to the general Sharī‘ah ruling which
does not allow a person to sell what he does not own and
possess. As against Salam, the price here need not be
paid in advance. It may be paid in installments in line
with the preferences of the parties, or partly at the front
end with the balance paid later as agreed.
Ju‘ālah : Service contract, performing a given task for a fee.
Khiyār al-Sharṭ : The option to rescind a sale contract based on conditions
stipulated by one of the parties; if those conditions are not
fulfilled, that party may rescind the contract.
Muḍārabah : An agreement between two or more persons whereby one
or more of them provide finance, while the others provide
entrepreneurship and management to carry on any
business venture whether trade, industry or service, with
the objective of earning profits. The profit is shared by
them in an agreed proportion. The loss is borne only by
the financiers in proportion to their share in total capital.
The entrepreneur’s loss lies in not getting any reward for
his/her services.
Murābaḥah, bay‘ al- : Sale at a specified profit margin. The term is, however,
now used to refer to a sale agreement whereby the seller
purchases the goods desired by the buyer and sells them
at an agreed marked-up price, the payment being settled
within an agreed time frame, either in installments or
lump sum. The seller bears the risk for the goods until
they have been delivered to the buyer. Murābaḥah is also
referred to as bay‘ al-mu‘ajjal.
Mushārakah : An Islamic financing technique whereby all the partners
share in equity as well as management. The profits can be
distributed among them in accordance with agreed ratios.
However, losses must be shared according to the share in
equity.
Qarḍ Ḥasan : A loan extended without interest or profit sharing.
Rahn : Collateral.
Ribā : Literally means increase or addition, and refers to the
‘premium’ that must be paid by the borrower to the
lender along with the principal amount as a condition for
the loan or an extension in its maturity. It is regarded by
a predominant majority of Muslims to be equivalent to
interest.
Salam, bay‘ al- : Sale in which payment is made in advance by the buyer
and the delivery of goods is deferred by the seller. This
is also, like Istiṣnā‘, an exception to the general Sharī‘ah
ruling that you cannot sell what you do not own and
possess.
Sharī‘ah : Refers to the divine guidance as given by the Qur’an and
the Sunnah and embodies all aspects of the Islamic faith,
including beliefs and practices.
Tawrīd, bay‘ : Contractual sale in which known quality and amount of
al- an object is supplied by a supplier for a known price to be
paid on an agreed upon periodic schedule.
Wakālah : Agency - appointment of someone else to render a work
on behalf of the principal for a fee.
ABBREVIATIONS
AAOIFI Accounting & Auditing Organization for Islamic Financial
Institutions
BCBS: Basel Committee for Banking Supervision
BIA: Basic Indicator Approach
BIS: Bank for International Settlements
BMA: Bahrain Monetary Agency
CA: Capital Arbitrage
CAMELS: Capital, Assets, Management, Earnings, Liquidity, and
Sensitivity to risk
CAPM: Capital Asset Pricing Model
DGAP: Duration Gap
EAD: Exposure at Default
EL: Expected Loss
FIs: Financial Institutions
FTSE: Financial Times Stock Exchange
G10: Group of Ten
GDP: Gross Domestic Product
HSBC: Hong Kong Shanghai Banking Corporation
IAIB: International Association of Islamic Banks
IAIS: International Association of Insurance Supervisors
IASC: International Accounting Standards Committee
IASs: International Accounting Standards
IDB: Islamic Development Bank
IMA: Internal Management Approach
IMF: International Monetary Fund
IOSCO: International Organization of Securities Commissioners
IRB: Internal Rating Based
IRTI: Islamic Research and Training Institute
ISDA: International Swap & Derivatives’ Association
JFFC: Joint Forum on Financial Conglomerates
LDA: Loss Distribution Approach
LGD: Loss Given Default
LIBOR: London Inter-bank Offered Rate
LLR: Lender of Last Resort
LR: Leverage Ratio
LTCM: Long Term Capital Management
MCM: Murābaḥah Clearing Market
MDB: Multilateral Development Banks
M-M: Muḍārabah-Mushārakah
MOF: Maturity of Facility
ODS: Object Deferred Sale
OECD: Organization for Economic Co-operation and Development
OIC: Organization of Islamic Conference
OTC: Over the Counter
PD: Probability of Default
PLS: Profit-and-Loss Sharing
PPFs: Principal Protected Funds
RAROC: Risk Adjusted Rate of Return on Capital
RSA: Rate Sensitive Assets
RSL: Rate Sensitive Liabilities
RWA: Risk Weighted Assets
SA: Standard Approach
SPV: Special Purpose Vehicle
UAE: United Arab Emirates
UL: Unexpected Loss
VaR: Value at Risk
WL: Worst Loss
EXECUTIVE SUMMARY
The Islamic financial industry has come a long way during its short history.
The future of these institutions, however, will depend on how they cope with the
rapidly changing financial world shaped by globalization and the information
technology revolution.
Studying risk management issues of the Islamic financial industry is an
important but complex subject. The present paper discusses and analyzes a
number of issues concerning the subject. First, it presents an overview of the
concepts of risks and risk management techniques and standards as these exist in
the financial industry. Second, the unique risks of the Islamic financial services
industry and the perceptions of Islamic banks about these risks are surveyed
through a questionnaire and analyzed. Third, the main regulatory concerns with
respect to risks and their treatment are reviewed, with a view to drawing some
lessons for Islamic banks. Fourth, a number of Sharī‘ah-related challenges
concerning risk management are identified and discussed. Finally, conclusions
and policy implications are summarized.
The study concludes that financial market liberalization is associated
with an increase in risks and financial instability. Risk management processes
and techniques enable financial institutions to control undesirable risks and to
take benefit of the business opportunities created by the desirable ones. These
processes are of important concern for regulators and supervisors as these
determine the overall efficiency and stability of the financial systems.
The study shows that the Islamic financial institutions face two types of
risks. The first type of risks they have in common with traditional banks as
financial intermediaries, such as credit risk, market risk, liquidity risk and
operational risk. However, due to Sharī‘ah compliance the nature of these risks
changes. The second type comprises new and unique risks that the Islamic banks
face as a result of their unique asset and liability structures. Consequently the
processes and techniques of risk identification and management available to the
Islamic banks could be of two types – standard techniques which are not in
conflict with the Islamic principles of finance and techniques which are new or
adapted keeping in view their special requirements.
Due to their unique nature, the Islamic institutions need to develop more
rigorous risk identification and management systems. The paper identifies a
number of policy implications the implementation of which can be instrumental
in promoting a risk management culture in the Islamic financial industry.
i. The management of all banks needs to create a risk management
environment by clearly identifying the risk objectives and strategies of
the institution and by establishing systems that can identify, measure,
monitor, and manage various risk exposures. To ensure the effectiveness
of the risk management process, Islamic banks also need to establish a
proficient internal control system.
ii. Risk reporting is extremely important for the development of an efficient
risk management system. The risk management systems in Islamic
banks can be substantially improved by allocating resources for
preparing a number of periodic risk reports such as capital at risk
reports, credit risk reports, operational risk reports, liquidity risk reports
and market risk reports.
iii. An Internal Rating System (IRS) is highly relevant for the Islamic
banks. At initial stages of its introduction the IRS may be seen as a risk
based inventory of individual assets of a bank. Such systems have
proved highly effective in filling the gaps in risk management systems
hence in enhancing external rating of institutions. This contributes to
cutting the cost of funds. Internal rating systems are also very relevant
for the Islamic modes of finance. Most Islamic banks already use some
form of internal ratings. However, these systems need to be strengthened
in all Islamic banks.
iv. Risk-based management information, internal and external audit, and
asset inventory systems can greatly enhance risk management systems
and processes.
v. Substantial risks faced by the Islamic banks can be reduced if a number
of supporting institutions and facilities are provided. These include a
lender of last resort facility, deposit protection system, liquidity
management system, legal reforms to facilitate Islamic banking and
dispute settlement, uniform Sharī‘ah standards, adoption of AAOIFI
standards and establishing a supervisory board for the industry.
vi. The Islamic financial industry, being a part of the global financial
markets, is affected by international standards. It is thus imperative
for the Islamic financial institutions to follow up the standard-setting
process and to respond to the consultative documents distributed in this
regard by the standard setters on a regular basis.
vii. Risk management systems strengthen financial institutions. Therefore,
risk management needs to be assigned a priority in research and training
programs.
I
INTRODUCTION
Islamic financial institutions were established three decades ago as an
alternative to conventional financial institutions mainly to provide Sharī‘ah
compatible investment, financing, and trading opportunities. During its short
history, the growth in the nascent banking industry has been impressive. One of
the main functions of financial institutions is to effectively manage risks that
arise in financial transactions. To provide financial services at low risk,
conventional financial institutions have developed different contracts, processes,
instruments, and institutions to mitigate risks. The future of the Islamic financial
industry will depend to a large extent on how these institutions will manage
different risks arising from their operations.
1.1 UNIQUE NATURE OF ISLAMIC BANKING RISKS
A distinction between theoretical formulations and actual practices of
Islamic banking can be observed. Theoretically, it has been an aspiration of
Islamic economists that on the liability side, Islamic banks shall have only
investment deposits. On the asset side, these funds would be channeled through
profit sharing contracts. Under such a system, any shock on the asset side shall
be absorbed by the risk sharing nature of investment deposits. In this manner,
Islamic banking offers a more stable alternative to the traditional banking
system. The nature of systemic risks of such a system would be similar to the
risks inherent in mutual funds.
The focus of this study is on the actual practices of Islamic banks. The
practice of Islamic banking, however, is different from the theoretical
aspirations. On the assets side, investments can be undertaken using profit
sharing modes of financing (Muḍārabah and Mushārakah) and fixed-income
modes of financing like Murābaḥah (cost-plus or mark-up sale), installment sale
(medium/long-term Murābaḥah), Istiṣnā‘/Salam (object-deferred sale or pre-paid
sale) and Ijārah (leasing). The funds are provided only for such business
activities which are Sharī‘ah compatible. On the liability side, deposits can be
made either in current accounts or in investment accounts. The former is
considered in Islamic banks as Qarḍ Ḥasan (interest-free loan) or Amānah
(trust). These have to be fully returned to depositors on demand. Investment
depositors are rewarded on the basis of profit and loss sharing (PLS) method and
these deposits share the business risks of the banking operations. Using profit
sharing principle to reward depositors is a unique feature of Islamic banks. This
feature along with the different modes of financing and the Sharī‘ah compliant
set of business activities change the nature of risks that Islamic banks face.
1.2 SYSTEMIC IMPORTANCE OF ISLAMIC BANKS
The Islamic financial services industry comprises Islamic commercial
and investment banks, windows of conventional banks offering Islamic financial
services, mutual and index funds, leasing and Muḍārabah companies, and
Islamic insurance companies. The present paper specifically deals with the risks
facing the Islamic commercial and investment banks.
Since its beginning in the early 1970s, the growth of the Islamic
financial industry has been robust. While some countries have introduced
Islamic financial services along with conventional ones, three countries (Iran,
Pakistan, and Sudan) have opted for comprehensive reforms with the objective
of transforming their financial systems to an Islamic one. According to the
International Association of Islamic Banks (IAIB), the number of Islamic
financial institutions stood at 176 at the end of 1997.1 These financial institutions
had a combined capital of US$ 7.3 billion and assets and liabilities worth US$
147.7 billion. In 1997, Islamic banks managed funds worth US$ 112.6 billion
making a net profit of US$ 1.2 billion. The historical data on these financial
variables is given in Table 1.1.
Table 1.1
Size of Islamic Financial Institutions
Some Financial Highlights (Amount in US$ Million)
Year   No. of Banks   Combined Capital   Combined Assets   Combined Funds Managed   Combined Net Profits
1993       100             2,309.3           53,815.3             41,587.3                  N.A.
1994       133             4,954.0          154,566.9             70,044.2                 809.1
1995       144             6,307.8          166,053.2             77,515.8               1,245.5
1996       166             7,271.0          137,132.5            101,162.9               1,683.6
1997       176             7,333.1          147,685.0            112,589.8               1,218.2
Source: Directory of Islamic Banks and Financial Institutions-1997, The International Association
of Islamic Banks, Jeddah.
1. The data collected by the IAIB is available only up to 1997 and the IAIB is no
longer operational.
During the short history of their existence, Islamic banks have performed
reasonably well. A recent study on the performance of Islamic banks shows that
these institutions are well capitalized, profitable and stable.2 Furthermore, this
paper indicates that Islamic banks have not only grown at a faster rate than their
conventional counterparts, but have also outperformed them in other criteria. On
the average, Islamic banks have larger capital-asset ratios and have used their
resources better than conventional banks. Furthermore, the Islamic institutions
have yielded profitability ratios that are superior to those of traditional banks.
Linear and exponential forecasts from the data in Table 1.1 place the
capital of Islamic financial institutions between US$ 13 billion and US$ 23.5
billion in 2002.3 The corresponding projections for assets held by these
institutions are US$ 198.6 billion and US$ 272.7 billion respectively in the
same year. Given the performance and the potential market of
Islamic financial services, the Islamic banking sector has grown at a fast pace
and rapidly gained a global dimension. This is witnessed by the involvement of
many multinational financial institutions like ANZ Grindlays, Chase Manhattan,
Citicorp, Commerzbank AG, HSBC, and Morgan Stanley Dean Witter & Co. in
Islamic products. Major index providers like Dow Jones and FTSE have
introduced Islamic indices.
1.3 OBJECTIVES OF THE PAPER
While Islamic banks, being commercial enterprises, would be more
concerned with asset growth and profitability, regulators would prefer the
banks to be stable, with growth as a secondary concern. Due to the
unprecedented developments in the areas of computing, information and
mathematical finance, the financial services markets have become extremely
complex. Moreover, cross-segment mergers, acquisitions, and financial
consolidation have blurred the risks of various segments of the industry.
Given this complexity, dynamism, and transformation in the financial
sector, several questions can be raised related to Islamic banks.
How do the Islamic banks perceive their own risks and these various
developments? How do regulators expect to respond to the new risks inherent in
Islamic banks? What possible Sharī‘ah-compatible risk management instruments
2. For details, see Iqbal (2000).
3. While linear projections are estimated assuming a constant growth rate, the
exponential forecasts are optimistic figures estimated by assuming exponential
growth. Note that given the small number of observations, these forecasts should
be taken as indicative only.
are available at present? What are the prospects of developing new
instruments in the future? What are the implications of all this for the
competitiveness of Islamic banks? How will the stability of the Islamic financial
institutions be affected? The objective of the present paper is to address
some of these questions. Specifically the paper aims at the following:
i. Presenting an overview of the concepts of risks and risk management
techniques and standards as these exist in the financial industry.
ii. Discussing the unique risks of the Islamic financial services industry and
the perceptions of Islamic banks about these risks.
iii. Reviewing the main regulatory concerns with respect to risks and their
treatment, with a view to drawing some lessons for Islamic banks.
iv. Discussing and analyzing the Sharī‘ah-related challenges concerning
risk management in the Islamic financial services industry; and
v. Presenting policy implications for developing a risk management culture
in Islamic banks.
1.4 OUTLINE OF THE PAPER
In section two, we discuss the basic concepts of risks and their
management as practiced in the conventional financial sector. This section also
provides details of the various processes to manage different risks. The section
ends with identifying the nature of risks found in Islamic financial institutions
and instruments. Section three reports results from a survey on risk management
issues in Islamic financial institutions. The survey covered 17 Islamic
institutions from 10 different countries. The results cover the perspectives of
Islamic bankers towards different risks, the process of risk management in these
institutions, and some other aspects related to Islamic financial institutions.
Section four discusses the risk management aspects from the regulatory
viewpoints. Based on, among others, the proposals of the Basel Committee, the
section touches on the regulatory aspects for Islamic financial institutions.
Among others, it covers issues related to capital requirements in Islamic
financial institutions and different approaches to manage various risks. Section
five covers some Fiqhī issues related to risk management. Besides pointing
out the Sharī‘ah viewpoints on different techniques and instruments used
for risk mitigation, proposals are made for developing new techniques. Some
suggestions are also put forward for Sharī‘ah experts to deliberate on.
The last section concludes the paper and presents policy implications for
developing risk management culture in Islamic banks.
II
RISK MANAGEMENT:
BASIC CONCEPTS AND TECHNIQUES
In this section we discuss the basic risk concepts and issues related to
risk management. After defining and identifying different risks, we describe the
risk management process. The risk management process is a comprehensive system
that includes creating an appropriate risk management environment; maintaining
an efficient risk measurement, mitigation, and monitoring process; and
establishing an adequate internal control arrangement. After outlining the basic
idea of the risk management process and system, we discuss the main elements
of the management process for specific risks. The latter part of the section
examines the risks involved in Islamic financial institutions. We review the
nature of traditional risks for Islamic financial institutions and point out some
specific risks that Islamic banks face. We then discuss the risks inherent in
different Islamic modes of financing.
2.1. INTRODUCTION
Risk arises when there is a possibility of more than one outcome and the
ultimate outcome is unknown. Risk can be defined as the variability or volatility
of unexpected outcomes.4 It is usually measured by the standard deviation of
historic outcomes. Though all businesses face uncertainty, financial institutions
face some special kinds of risks given their nature of activities. The objective of
financial institutions is to maximize profit and shareholder value-added by
providing different financial services mainly by managing risks.
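As a minimal illustration of the definition above, risk as the standard deviation of historic outcomes can be computed directly; the monthly return series below is hypothetical.

```python
# Risk measured as the standard deviation (volatility) of historic outcomes,
# as in the definition above. The return series is hypothetical.
from statistics import mean, stdev

returns = [0.021, -0.013, 0.034, 0.008, -0.027, 0.015, 0.002, -0.009]

expected = mean(returns)     # average historic outcome
volatility = stdev(returns)  # sample standard deviation = the risk measure

print(f"mean return: {expected:.4f}, volatility: {volatility:.4f}")
```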
There are different ways in which risks are classified. One way is to
distinguish between business risk and financial risks. Business risk arises from
the nature of a firm’s business. It relates to factors affecting the product market.
Financial risk arises from possible losses in financial markets due to movements
in financial variables (Jorion and Khoury 1996, p. 2). It is usually associated
with leverage with the risk that obligations and liabilities cannot be met with
current assets (Gleason 2000, p. 21).
4. This definition is from Jorion and Khoury (1996, p. 2).
Another way of decomposing risk is between systematic and
unsystematic components. While systematic risk is associated with the overall
market or the economy, unsystematic risk is linked to a specific asset or firm.
While the asset-specific unsystematic risk can be mitigated in a large diversified
portfolio, the systematic risk is nondiversifiable. Parts of systematic risk,
however, can be reduced through the risk mitigation and transferring techniques.
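The diversification argument can be sketched numerically. In this hypothetical setup every asset carries the same common (systematic) factor plus independent specific noise, so the variance of an equal-weighted portfolio falls toward the systematic floor as the number of assets grows.

```python
# Sketch: asset-specific (unsystematic) risk diversifies away in a large
# equal-weighted portfolio, while the common (systematic) component does not.
# The variance figures are hypothetical.
sys_var = 0.04    # variance of the common market factor (nondiversifiable)
idio_var = 0.09   # variance of each asset's independent specific component

def portfolio_var(n):
    """Variance of an equal-weighted portfolio of n assets that each load
    one-for-one on the market factor plus independent specific noise."""
    return sys_var + idio_var / n

for n in (1, 10, 100, 1000):
    print(f"{n:>5} assets: portfolio variance = {portfolio_var(n):.4f}")
# As n grows, variance approaches the nondiversifiable floor sys_var.
```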
To understand the underlying principle of risk management, we use
Oldfield and Santomero's (1997) classification of risks. Accordingly, financial
institutions face the following three types of risks: risks that can be eliminated,
those that can be transferred to others, and the risks that can be managed by the
institution. Financial intermediaries avoid certain risks through simple business
practices and do not take up activities that impose undue risks upon them. The
practice of financial institutions is to take up activities in which risks can be
efficiently managed and to shift the risks that can be transferred.
Risk avoidance techniques would include the standardization of all
business-related activities and processes, construction of diversified portfolio,
and implementation of an incentive-compatible scheme with accountability of
actions. Some risk that banks face can be reduced or eliminated by transferring
or selling these in well-defined markets. Risk transferring techniques include,
among others, use of derivatives for hedging, selling or buying of financial
claims, changing borrowing terms, etc.
There are, however, some risks that cannot be eliminated or transferred
and must be absorbed by the banks. The first kind is absorbed due to the
complexity of the risk and the difficulty of separating it from the asset. The
second kind is accepted by financial institutions because these risks are central
to their business: the banks specialize in dealing with them and are rewarded
accordingly. Examples of these risks are the credit risk inherent in
banking-book activities and the market risks in the trading-book activities of banks.
There is a difference between risk measurement and risk management.
While risk measurement deals with quantification of risk exposures, risk
management refers to “the overall process that a financial institution follows to
define a business strategy, to identify the risks to which it is exposed, to quantify
those risks, and to understand and control the nature of risks it faces” (Cumming
and Hirtle 2001, p. 3). Before we discuss the risk management process and
measurement techniques, we give an overview of the risks faced by financial
institutions and the evolution of risk management.
2.2. RISKS FACED BY FINANCIAL INSTITUTIONS
The risks that banks face can be divided into financial and non-financial
ones.5 Financial risk can be further partitioned into market risk and credit risk.
Non-financial risks, among others, include operational risk, regulatory risk, and
legal risk. The nature of some of these risks is discussed below.
Market Risk is the risk originating in instruments and assets traded in
well-defined markets. Market risks can result from macro and micro sources.
Systematic market risk results from the overall movement of prices and policies in
the economy. Unsystematic market risk arises when the price of a specific
asset or instrument changes due to events linked to that instrument or asset.
Volatility of prices in various markets gives different kinds of market risks. Thus
market risk can be classified as equity price risk, interest rate risk, currency risk,
and commodity price risk. As a result, market risk can occur in both banking and
trading books of banks. While all of these risks are important, interest rate risk is
one of the major risks that banks have to worry about. The nature of this risk is
briefly explained below.
Interest Rate Risk is the exposure of a bank’s financial condition to
movements in interest rates. Interest rate risk can arise from different sources.
Repricing risk arises due to timing differences in the maturity and repricing of
assets, liabilities and off-balance sheet items. Even with similar repricing
characteristics, basis risk may arise if the adjustment of rates on assets and
liabilities are not perfectly correlated. Yield curve risk is the uncertainty in
income due to changes in the yield curve. Finally instruments with call and put
options can introduce additional risks.
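One standard way to quantify the repricing risk described above is gap analysis over rate-sensitive assets (RSA) and rate-sensitive liabilities (RSL), two of the terms listed among the abbreviations. The sketch below uses hypothetical balance-sheet figures.

```python
# Repricing-gap sketch: the change in net interest income (NII) from a
# parallel rate move is approximately (RSA - RSL) * delta_r over the
# repricing horizon. The balance-sheet figures are hypothetical.
rsa = 480.0  # rate-sensitive assets (US$ million) repricing within 1 year
rsl = 620.0  # rate-sensitive liabilities (US$ million) repricing within 1 year
gap = rsa - rsl  # negative gap: liabilities reprice before assets

for delta_r in (0.01, -0.01):  # +/- 100 basis points
    delta_nii = gap * delta_r
    print(f"rate move {delta_r:+.2%}: change in NII = US$ {delta_nii:+.1f} million")
# With a negative gap, rising rates reduce net interest income.
```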
5. This classification of risk is taken from Gleason (2000).
Credit Risk is the risk that a counterparty will fail to meet its obligations
fully and on time in accordance with the agreed terms. This risk can occur in the
banking and trading books of the bank. In the banking book, loan credit risk
arises when counterparty fails to meet its loan obligations fully in the stipulated
time. This risk is associated with the quality of assets and the probability of
default. Due to this risk, there is uncertainty of net-income and market value of
equity arising from non-payment and delayed payment of principal and interest.
Similarly, trading book credit risk arises due to a borrower’s inability or
unwillingness to discharge contractual obligations in trading contracts. This can
result in settlement risk when one party to a deal pays money or delivers assets
before receiving its own assets or cash, thereby, exposing it to potential loss.
Settlement risk in financial institutions particularly arises in foreign-exchange
transactions. While a part of the credit risk is diversifiable, it cannot be
eliminated completely.
Liquidity Risk arises due to insufficient liquidity for normal operating
requirements, reducing a bank’s ability to meet its liabilities when they fall due.
This risk may result from either difficulties in obtaining cash at reasonable cost
from borrowings (funding or financing liquidity risk) or from the sale of assets
(asset liquidity risk). One aspect of asset-liability management in the banking
business is to minimize liquidity risk. While funding risk can be controlled by
proper planning of cash-flow needs and seeking newer sources of funds to
finance cash shortfalls, asset liquidity risk can be mitigated by diversification of
assets and setting limits on certain illiquid products.
Operational Risk is not a well-defined concept and may arise from
human and technical errors or accidents. It is the risk of direct or indirect loss
resulting from inadequate or failed internal processes, people, and technology, or
from external events. While people risk may arise due to incompetence and
fraud, technology risk may result from telecommunications system and program
failures. Process risk may occur for various reasons, including errors in model
specifications, inaccurate transaction execution, and violation of operational
control limits.6 Due to problems arising from inaccurate processing, record
keeping, system failures, compliance with regulations, etc., there is a possibility
that operating costs might differ from what is expected, affecting net income
adversely.
6. For a list of different sources of operational risk see Crouhy et al. (2001, p. 487).
Legal Risk relates to the risk of unenforceability of financial contracts.
It concerns statutes, legislation, and regulations that affect the fulfillment of
contracts and transactions. This risk can be external in nature (like regulations
affecting certain kinds of business activities) or internal, related to the bank’s
management or employees (like fraud, violations of laws and regulations, etc.).
Legal risk can be considered part of operational risk (BCBS, 2001a).
Regulatory risk arises from changes in the regulatory framework of the country.
2.3. RISK MANAGEMENT: BACKGROUND AND EVOLUTION
Though business activities have always been exposed to risks, the
formal study of managing risk started only in the latter half of the last century.
Markowitz’s (1959) seminal paper first indicated that portfolio selection was a
problem of maximizing expected return and minimizing risk. A higher
expected return of a portfolio (measured by the mean) can result only from
taking more risks. Thus, the investor’s problem was to find the optimal risk-return
combination. His analysis also pointed out the systematic and unsystematic
components of risk. While the unsystematic component can be mitigated by
diversification of assets, the systematic component has to be borne by the
investor. Markowitz’s approach, however, faced operational problems when a
large number of assets are involved.
Sharpe’s (1964) Capital Asset Pricing Model (CAPM) introduced the
concepts of systematic and residual risks. Advances in this model include
Single-Factor Models of Risk that estimate the beta of an asset. While residual
(firm-specific) risk can be diversified, beta measures the sensitivity of the
portfolio to business cycles (an aggregate index). The dependence of CAPM on
a single index to explain the risks inherent in assets is, however, too simplistic.
Arbitrage Pricing Theory, proposed by Ross (1976), suggests that
multiple factors affect the expected return of an asset. The implication of the
Multiple Factor Model is that total risk is the sum of the various factor-related
risks and residual risk. Thus, multiple risk premia can be associated with an
asset, given the respective factor-specific betas. Though the Multiple Factor
Model is widely accepted, there is, however, no consensus regarding the factors
that affect the risk of an asset or the way it is estimated. There are three
approaches in which this model can be implemented. While the Fundamental
Factors model estimates the factor-specific risk premia taking the respective
factor-specific betas as given, the Macroeconomic model takes the risk premia
as given and estimates the factor-specific betas. Statistical models attempt to
determine both the risk premia and betas simultaneously.
Modern risk management processes and strategies have adopted features
of the above-mentioned theories and developed many tools to analyze risk. An
important element of the management of risk is to understand the risk-return
trade-off. Investors can expect a higher rate of return only by increasing the
risks they take. As the objective of financial institutions is to increase the net
income of shareholders, managing the risks created in achieving this becomes
an important function of these institutions. They do this by efficiently
diversifying the unsystematic risks and by reducing and transferring the
systematic risk.
There are two broad approaches to quantifying the risk exposures facing
financial institutions. One way is to measure risks in a segmented way (e.g.,
GAP analysis to measure interest rate risk and Value at Risk to assess market
risks). The other approach is to measure risk exposure in a consolidated way by
assessing the overall firm-level risk (e.g., the risk-adjusted rate of return,
RAROC, for firm-level aggregate risk).7
2.4. RISK MANAGEMENT: THE PROCESS AND SYSTEM
Though the main elements of risk management include identifying,
measuring, monitoring, and managing various risk exposures,8 these cannot be
effectively implemented unless there is a broader process and system in place.
The overall risk management process should be comprehensive, embodying all
departments/sections of the institution so as to create a risk management culture.
It should be pointed out that the specific risk management process of an
individual financial institution depends on the nature of its activities and the size
and sophistication of the institution. The risk management system outlined here
can serve as a standard for banks to follow. A comprehensive risk management
system should encompass the following three components.9 We outline the basic
concept of the risk management process and system in this section.
7. For a discussion on adopting consolidated risk management from the supervisors’ and the banks’ perspectives see Cumming and Hirtle (2001).
8. See Jorion (2001, p. 3) for a discussion.
9. These three components are derived from BCBS’s recommendations for managing specific risks. See BCBS (1999 and 2001b).
2.4.1. Establishing Appropriate Risk Management Environment and Sound
Policies and Procedures
This stage deals with the overall objectives and strategy of the bank
towards risk and its management policies. The board of directors is responsible
for outlining the overall objectives, policies, and strategies of risk management
for any financial institution. The overall risk objectives should be communicated
throughout the institution. Besides approving the overall risk policies of the
bank, the board of directors should ensure that management takes the necessary
actions to identify, measure, monitor, and control these risks. The board should
periodically be informed of, and review through regular reports, the status of the
different risks the bank is facing.
Senior management is responsible for implementing these broad
specifications approved by the board. To do so, management should establish
the policies and procedures that the institution will use to manage risk. These
include maintaining a risk management review process, appropriate limits on
risk taking, adequate systems of risk measurement, a comprehensive reporting
system, and effective internal controls. Procedures should include appropriate
approval processes, limits, and mechanisms designed to ensure that the bank’s
risk management objectives are achieved. Banks should clearly identify the
individuals and/or committees responsible for risk management and define the
lines of authority and responsibility. Care should be taken that there is adequate
separation of the duties of risk measurement, monitoring, and control functions.
Furthermore, clear rules and standards of participation should be
provided regarding position limits, exposures to counterparties, credit and
concentration. Investment guidelines and strategies should be followed to limit
the risks involved in different activities. These guidelines should cover the
structure of assets in terms of concentration and maturity, asset-liability
mismatching, hedging, securitization, etc.
2.4.2. Maintaining an Appropriate Risk Measurement, Mitigating, and
Monitoring Process
Banks must have regular management information systems for
measuring, monitoring, controlling, and reporting different risk exposures. Steps
that need to be taken for risk measurement and monitoring purposes are
establishing standards for the categorization and review of risks and consistent
evaluation and rating of exposures. Frequent standardized risk and audit reports
within the institution are also important. The actions needed in this regard are
creating standards and inventories of risk-based assets and regularly producing
risk management reports and audit reports. The bank can also use external
sources to assess risk, by using either credit ratings or supervisory risk
assessment criteria like CAMELS.
Risks that banks take on must be monitored and managed efficiently.
Banks should conduct stress testing to see the effects on the portfolio of
different potential future changes. The areas a bank should examine are the
effects of a downturn in the industry or economy and of market risk events on
default rates and on the liquidity conditions of the bank. Stress testing should be
designed to identify the conditions under which a bank’s positions would be
vulnerable and the possible responses to such situations. Banks should have
contingency plans that can be implemented under different scenarios.
2.4.3. Adequate Internal Controls
Banks should have internal controls to ensure that all policies are
adhered to. An effective system of internal control includes an adequate process
for identifying and evaluating different kinds of risks, with sufficient
information systems to support it. The system should also establish policies and
procedures whose adherence is continually reviewed. These may include
conducting periodic internal audits of different processes and producing regular
independent reports and evaluations to identify areas of weakness. An important
part of internal control is to ensure that the duties of those who measure,
monitor, and control risks are separated.
Finally, an incentive and accountability structure that is compatible with
reduced risk taking on the part of employees is also an important element in
reducing overall risk. A prerequisite of these incentive-based contracts is
accurate reporting of the bank’s exposures and internal control system. An
efficient incentive-compatible structure would limit individual positions to
acceptable levels and encourage decision makers to manage risks in a manner
that is consistent with the bank’s goals and objectives.
2.5. MANAGEMENT PROCESSES OF SPECIFIC RISKS
As mentioned above, the total risk of an asset can be assigned to
different sources. Given the general guidelines of the risk management process
above, in this section we give details of risk management processes for the
specific risks faced by banks.
2.5.1. Credit Risk Management10
The board of directors should outline the overall credit risk strategies by
indicating the bank’s willingness to grant credit by sector, geographical
location, maturity, and profitability. In doing so, it should recognize the goals of
credit quality, earnings, and growth, and the risk-reward tradeoff for its
activities. The credit risk strategy should be communicated throughout the
institution.
The senior management of the bank should be responsible for
implementing the credit risk strategy approved by the board of directors. This
would include developing written procedures that reflect the overall strategy and
ensuring its implementation. The procedures should include policies to identify,
measure, monitor, and control credit risk. Care has to be given to diversification
of the portfolio by setting exposure limits on single counterparties, groups of
connected counterparties, industries, economic sectors, geographical regions,
and individual products. Banks can use stress testing in setting limits and in
monitoring by considering business cycles and interest rate and other market
movements. Banks engaged in international credit also need to assess the
respective country risk.
Banks should have a system for the ongoing administration of their
various credit risk-bearing portfolios. Proper credit administration by a bank
would include efficient and effective operations related to monitoring
documentation, contractual requirements, legal covenants, collateral, etc.,
accurate and timely reporting to management, and compliance with management
policies and procedures and applicable rules and regulations.
Banks must operate under sound, well-defined credit-granting criteria
to enable a comprehensive assessment of the true risk of the borrower or
counterparty and to minimize the adverse selection problem. Banks need
information on many factors regarding the counterparty to whom they want to
grant credit. These include, among others, the purpose of the credit and the
source of repayment, the risk profile of the borrower and its sensitivity to
economic and market developments, the borrower’s repayment history and
current capacity to repay, the enforceability of the collateral or guarantees, etc.
Banks should have a clear and formal evaluation and approval process for new credits
10. This section is based on the credit risk management process discussed in BCBS (1999).
and extension of existing credits. Each credit proposal should be subject to
careful analysis by a credit analyst so that information can be generated for
internal evaluation and rating. This can be used for appropriate judgements
about the acceptability of the credit.
Granting credit involves accepting risks as well as producing profits.
Credit should be priced so that it appropriately reflects the inherent risks of the
counterparty and the embedded costs. In considering a potential credit, the
bank needs to establish provisions for expected losses and hold adequate capital
to absorb unexpected losses. Banks can use collateral and guarantees to help
mitigate the risks inherent in individual transactions. Note, however, that
collateral cannot be a substitute for a comprehensive assessment of a borrower,
and the strength of the borrower’s repayment capacity should be given prime
importance.
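The split between provisions (for expected losses) and capital (for unexpected losses) can be illustrated numerically. The text does not give a formula; the sketch below uses the standard expected-loss decomposition EL = PD × LGD × EAD as an illustration, and every loan and parameter value in it is hypothetical.

```python
# Illustrative sketch only: EL = PD x LGD x EAD is a standard decomposition,
# not taken from the text; all loans and parameter values are hypothetical.

loans = [
    # (exposure at default, probability of default, loss given default)
    (1_000_000, 0.02, 0.45),
    (500_000, 0.05, 0.60),
    (250_000, 0.01, 0.40),
]

# Provision sized to the sum of per-loan expected losses.
expected_loss = sum(ead * pd * lgd for ead, pd, lgd in loans)
print(f"Provision for expected loss: {expected_loss:,.0f}")
```

Losses beyond this provisioned amount are the "unexpected" part that, as the text notes, adequate capital is meant to absorb.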
Banks should identify and manage the credit risk inherent in all of their
assets and activities by carefully reviewing the risk characteristics of the asset or
activity. Special care is needed particularly when the bank embarks on new
activities and assets. In this regard, adequate procedures and controls need to be
in place to identify the risks in the new asset or activity. Banks must have
analytical techniques and information systems to measure credit risk in all on-
and off-balance sheet activities. The system should be able to provide
information on sensitivities and concentrations in the credit portfolio. Banks can
manage portfolio issues related to credit through loan sales, credit derivatives,
securitization, and involvement in secondary loan markets.
Banks must have a system for monitoring individual credits, including
determining the adequacy of provisions and reserves. An effective monitoring
system would provide the bank with, among other things, the current financial
condition of the counterparty. The system would be able to monitor projected
cash flows and the value of the collateral to identify and classify potential credit
problems. While monitoring the overall composition and quality of the
portfolio, a bank should take care not only about concentrations with respect to
counterparties’ activities but also about maturity.
Banks should develop internal risk rating systems to manage credit risk.
A well-structured internal rating system can differentiate the degree of credit risk
in a bank’s different credit exposures by categorizing credits into various
gradations of risk. Internal risk ratings are an important tool in monitoring and
controlling credit risk, as periodic ratings enable banks to determine the overall
characteristics of the credit portfolio and indicate any deterioration in credit
risk. Deteriorating credits can then be subject to additional monitoring and
supervision.
A bank should have independent, ongoing credit reporting to the board of
directors and senior management to ensure that the bank’s risk exposures are
maintained within the parameters set by prudential standards and internal limits.
Banks should have internal controls to ensure that credit policies are adhered to.
These may include conducting periodic internal audits of the credit risk
processes to identify areas of weakness in the credit administration process.
Once problem credits are identified, banks should have a clear policy and
system for managing them. Banks should have effective workout programs to
manage the risk in their portfolios.
2.5.2. Interest Rate Risk Management11
The board of directors should approve the overall objectives, broad
strategies, and policies that govern the interest rate risk of a bank. Besides
approving the bank’s overall policies regarding interest rate risk, the board of
directors should ensure that management takes the necessary actions to identify,
measure, monitor, and control these risks. The board should periodically be
informed of, and review through reports, the status of the interest rate risk the
bank is facing.
Senior management must ensure that the bank follows policies and
procedures that enable the management of interest rate risk. These include
maintaining an interest rate risk management review process, appropriate limits
on risk taking, adequate systems of risk measurement, a comprehensive interest
rate risk reporting system, and effective internal controls. Banks should be able
to identify the individuals and/or committees responsible for interest rate risk
management and define the line of authority and responsibility.
Banks should have clearly defined policies and procedures for limiting
and controlling interest rate risk by delineating responsibility and accountability
over interest rate risk management decisions and defining authorized
instruments, hedging strategies and position taking opportunities. Interest rate
risk in new products should be identified by carefully scrutinizing the maturity,
11. This section is based on the interest rate risk management process discussed in BCBS (2001).
repricing or repayment terms of an instrument. The board should approve new
hedging or risk management strategies before these are implemented.
Banks should have a management information system for measuring,
monitoring, controlling, and reporting interest rate exposures. Banks should
have interest rate risk management systems that assess the effects of rate
changes on both earnings and economic value. These measurement systems
should be able to utilize generally accepted financial concepts and risk
management techniques to assess all interest rate risk associated with a bank’s
assets, liabilities, and off-balance sheet positions. Some of the techniques for
measuring a bank’s interest rate risk exposure are GAP analysis, duration, and
simulation. Stress tests can be undertaken to examine the effects of changes in
the interest rate, changes in the slope of the yield curve, changes in the volatility
of market rates, etc. Banks should consider “worst case” scenarios and ensure
that appropriate contingency plans are available to tackle these situations.
Banks must establish and enforce a system of interest rate risk limits and
risk taking guidelines that can achieve the goal of keeping the risk exposure
within some self-imposed parameters over a range of possible changes in interest
rates. An appropriate limit system enables the control and monitoring of interest
rate risk against predetermined tolerance factors. Any violation of limits should
be made known to senior management for appropriate action.
Interest rate reports for the board should include summaries of the
bank’s aggregate exposures, compliance with policies and limits, results of stress
tests, summaries of reviews of interest rate risk policies and procedures, and
findings of internal and external auditors. Interest rate risk reports should be
detailed enough to enable senior management to assess the sensitivity of the
institution to changes in market conditions and other risk factors.
Banks should have an adequate system of internal controls to ensure the
integrity of their interest rate risk management process and to promote effective
and efficient operations, reliable financial and regulatory reporting, and
compliance with relevant laws, regulations, and institutional policies. An
effective system of internal control for interest rate risk includes an adequate
process for identifying and evaluating risk, with sufficient information systems
to support it. The system should also establish policies and procedures whose
adherence is continually reviewed. These periodic reviews would cover not only
the quantity of interest rate risk but also the quality of interest rate risk
management. Care should be taken that there is adequate separation of the
duties of risk measurement, monitoring, and control functions.
2.5.3. Liquidity Risk Management12
As banks deal with other people’s money, which can be withdrawn,
managing liquidity is one of the most important functions of a bank. Senior
management and the board of directors should make sure that the bank’s
priorities and objectives for liquidity management are clear. Senior management
should ensure that liquidity risk is effectively managed by establishing
appropriate policies and procedures. A bank must have an adequate information
system to measure, monitor, control, and report liquidity risk. Regular reports on
liquidity should be provided to the board of directors and senior management.
These reports should include, among other things, the liquidity positions over
particular time horizons.
The essence of the liquidity management problem arises from the fact
that there is a trade-off between liquidity and profitability and a mismatch
between the demand for and supply of liquid assets. While a bank has no control
over the sources of funds (deposits), it can control the use of funds. As such, a
bank’s liquidity position is given priority in allocating funds. Given the
opportunity cost of liquid funds, banks should make all profitable investments
only after ensuring sufficient liquidity. Most banks now keep protective reserves
on top of planned reserves. While the planned reserves are derived from either
regulatory requirements or forecasts, the amount of the protective reserves
depends on management’s attitude towards liquidity risk.
Liquidity management decisions have to be undertaken by considering
all service areas and departments of the bank. The liquidity manager must keep
track of and coordinate the activities of all departments that raise and use funds
in the bank. Decisions regarding the bank’s liquidity needs must be analyzed
continuously to avoid both liquidity surpluses and deficits. In particular, the
liquidity manager should know in advance when large transactions (credits,
deposits, withdrawals) will take place in order to plan effectively for the
resulting liquidity surpluses or deficits.
12. The discussion on Liquidity Risk Management is derived from BCBS (2000).
A bank should establish a process of measuring and monitoring net
funding requirements by assessing the bank’s cash inflows and outflows. The
bank’s off-balance sheet commitments should also be considered. It is also
important to assess the future funding needs of the bank. An important element
of liquidity risk management is to estimate a bank’s liquidity needs. Several
approaches have been developed to estimate the liquidity requirements of banks.
These include the sources and uses of funds approach, the structure of funds
approach, and the liquidity indicator approach.13 A maturity ladder is a useful
device to compare cash inflows and outflows for different time periods. The
deficit or surplus of net cash flows is a good indicator of liquidity shortfalls and
excesses at different points in time.
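The maturity ladder described above can be sketched as a simple tabulation: net funding gap (inflows minus outflows) per time band, plus a cumulative running position. All time bands and cash-flow figures below are hypothetical.

```python
# Sketch of a maturity ladder: net and cumulative funding gaps per time
# band, as described in the text. All cash-flow figures are hypothetical.

bands = ["1 week", "1 month", "3 months", "6 months"]
inflows = [120, 300, 450, 600]    # expected cash inflows in each band
outflows = [150, 280, 500, 550]   # expected cash outflows in each band

cumulative = 0
for band, cash_in, cash_out in zip(bands, inflows, outflows):
    net = cash_in - cash_out      # surplus (+) or shortfall (-) in this band
    cumulative += net             # running position across successive bands
    print(f"{band:>9}: net {net:+4d}, cumulative {cumulative:+4d}")
```

A negative cumulative figure in an early band is exactly the kind of prospective shortfall the contingency funding plans discussed below are meant to cover.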
Unexpected cash flows can arise from other sources as well. As more
and more banks engage in off-balance sheet activities, banks should also
examine the cash flows on this account. For example, contingent liabilities used
in these accounts (like financial guarantees and options) can represent
substantial sources of outflows of funds. After identifying the liquidity
requirements, a series of worst-case scenarios can be analyzed to estimate both
possible bank-specific shocks and economy-wide shocks. The bank should have
contingency funding plans for handling liquidity needs during these crises.
Possible responses to these shocks would include the speed with which assets
can be liquidated and the sources of funds that the bank can use in the crisis. If
the bank deals in foreign currencies, it should have a measurement, monitoring,
and control system for liquidity in its active currencies.
Banks should have adequate internal controls over the liquidity risk
management process as part of the overall system of internal control. An
effective system would create a strong control environment and have an
adequate process for identifying and evaluating liquidity risk. It should have an
adequate information system that can produce regular independent reports and
evaluations to review adherence to established policies and procedures. The
internal audit function should also periodically review the liquidity management
process to identify any problems or weaknesses for appropriate action by
management.
13. For a discussion on these methods see Rose (1999).
2.5.4. Operational Risk Management14
The board of directors and senior management should develop the
overall policies and strategies for managing operational risk. As operational risk
can arise due to failures in people, processes, and technology, management of
this risk is more complex. Senior management needs to establish the desired
standards of risk management and clear guidelines for practices that would
reduce operational risks. In doing so, care needs to be taken to include people,
process, and technology risks that can arise in the institution.
Given the different sources from which operational risk can arise, a
common standard for its identification and management needs to be developed.
Care needs to be taken to tackle operational risk arising in different
departments/organizational units due to people, processes, and technology. As
such, a wide variety of guidelines and rules have to be spelled out. To do so,
management should develop an ‘operational risk catalogue’ in which business
process maps for each business/department of the institution are outlined. For
example, the business process for dealing with a client or investor should be laid
out. This catalogue will not only identify and assess operational risk but can also
be used by management and auditors for transparency.
Given the complexity of operational risk, it is difficult to quantify.
Most operational risk measurement techniques are simple and experimental.
Banks, however, can gather information on different risks from reports and
plans that are published within the institution (like audit reports, regulatory
reports, management reports, business plans, operations plans, error rates, etc.).
A careful review of these documents can reveal gaps that represent potential
risks. The data from the reports can then be categorized into internal and
external factors and converted into likelihoods of potential loss to the
institution. A part of the operational risk can also be hedged. Tools for risk
assessment, monitoring, and management include periodic reviews, stress
testing, and the allocation of an appropriate amount of economic capital.
As there are various sources of operational risk, it needs to be handled in
different ways. In particular, risk originating from people needs effective
management, monitoring, and controls. These include establishing adequate
operating procedures. One important element in controlling operational risk is
to have a clear separation of responsibilities and to have contingency plans. Another
14. This part is based on BCBS (1998) and Crouhy et al. (2001, Chapter 13).
significant element is to make sure that reporting systems are consistent, secure,
and independent of the business. Internal auditors play an important role in
mitigating operational risk.
2.6. RISK MANAGEMENT AND MITIGATION TECHNIQUES
Many risk measurement and mitigation techniques have evolved in
recent times. Some of these techniques are used to mitigate specific risks while
others are meant to deal with overall risk of a firm. In this section we outline
some contemporary techniques used by well-established financial institutions.
2.6.1. GAP Analysis
GAP analysis is an interest rate risk management tool based on the
balance sheet. It focuses on the potential variability of net interest income over
specific time intervals. In this method, a maturity/repricing schedule that
distributes interest-sensitive assets, liabilities, and off-balance sheet positions
into time bands according to their maturity (if fixed rate) or the time remaining
to their next repricing (if floating rate) is prepared. These schedules are then
used to generate indicators of the interest rate sensitivity of both earnings and
economic value to changing interest rates.
GAP models focus on managing net interest income over different time
intervals. After choosing the time intervals, assets and liabilities are grouped
into these time buckets according to maturity (for fixed rates) or the first
possible repricing time (for flexible rates). The assets and liabilities that can be
repriced are called rate-sensitive assets (RSAs) and rate-sensitive liabilities
(RSLs) respectively, and GAP equals the difference between the former and the
latter. Thus, for a time interval, GAP is given by

GAP = RSAs – RSLs    (2.1)
Note that GAP analysis is based on the assumption of repricing of
balance sheet items calculated according to book value terms. The information
on GAP gives the management an idea about the effects on net-income due to
changes in the interest rate. For example, if the GAP is positive, then the rate
sensitive assets exceed liabilities. The implication is that an increase in future
interest rate would increase the net interest income as the change in interest
income is greater than the change in interest expenses. Similarly, a positive
GAP and a decline in the interest rate would reduce the net interest income. The
banks can opt to hedge against such undesirable interest rate changes by using
interest rate swaps (outlined in Section 2.6.6.1).
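The effect of a rate move on net interest income implied by equation 2.1 can be sketched as a small computation; the bucket figures and the one-percentage-point rate rise below are hypothetical, not taken from the text:

```python
def gap_analysis(rsa, rsl, rate_change):
    """Repricing GAP (equation 2.1) and the implied change in net interest income."""
    gap = rsa - rsl                # GAP = RSAs - RSLs
    delta_nii = gap * rate_change  # first-order effect on net interest income
    return gap, delta_nii

# Hypothetical time bucket: 120 of rate sensitive assets, 100 of rate
# sensitive liabilities, and a 1 percentage point rise in rates.
gap, delta_nii = gap_analysis(rsa=120.0, rsl=100.0, rate_change=0.01)
print(gap, round(delta_nii, 4))  # 20.0 0.2
```

With a positive GAP, the rise in rates increases net interest income, as the text describes; a negative GAP would reverse the sign.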
2.6.2. Duration-GAP Analysis
The duration model is another measure of interest rate risk, used for managing net interest income, derived by taking into consideration all individual cash inflows and outflows. Duration is a value- and time-weighted measure of the maturity of all cash flows and represents the average time needed to recover the invested funds.
The standard formula for calculation of duration D is given by,

D = [ Σ_{t=1}^{n} CF_t × t × (1+i)^(-t) ] / [ Σ_{t=1}^{n} CF_t × (1+i)^(-t) ]   (2.2)

where CF_t is the value of the cash flow at time t (the number of periods until the cash flow from the instrument is received), and i is the instrument's yield to
maturity. Duration analysis compares the changes in the market value of the assets relative to that of the liabilities. Average durations of assets and liabilities are estimated by summing the duration of each individual asset/liability multiplied by its share in total assets/liabilities. A change in the interest rate affects the market
value through the discounting factor (1+i)^(-t). Note that the discounted market
value of an instrument with a longer duration will be affected relatively more
due to changes in the interest rate. Duration analysis, as such, can be viewed as
the elasticity of the market value of an instrument with respect to interest rate.
Duration gap (DGAP) reflects the differences in the timing of asset and
liability cash flows and given by,
DGAP = DA - u DL (2.3)
where DA is the average duration of the assets, DL is the average duration of
liabilities, and u is the liabilities/assets ratio. Note that a relatively larger u
implies higher leverage. A positive DGAP implies the duration of assets is
greater than that of liabilities. When interest rates on assets and liabilities increase by comparable amounts, the market value of assets decreases more than that of liabilities, resulting in a decrease in the market value of equity and in expected net-interest income. Similarly, a decline in the interest rate decreases the market value of equity when the DGAP is negative. Banks can use DGAP analysis to immunize portfolios against interest rate risk by keeping DGAP close to zero.
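Equations 2.2 and 2.3 can be sketched in a few lines of code; the three-year bond cash flows, the liability duration, and the leverage ratio below are illustrative assumptions, not figures from the text:

```python
def duration(cash_flows, i):
    """Duration per equation 2.2: present-value- and time-weighted maturity.

    cash_flows is a list of (t, CF_t) pairs; i is the yield to maturity.
    """
    pv = sum(cf * (1 + i) ** -t for t, cf in cash_flows)
    weighted = sum(cf * t * (1 + i) ** -t for t, cf in cash_flows)
    return weighted / pv

def duration_gap(d_assets, d_liabilities, liabilities, assets):
    """DGAP = DA - u * DL (equation 2.3), with u the liabilities/assets ratio."""
    return d_assets - (liabilities / assets) * d_liabilities

# Illustrative 3-year bond: coupon 5 per year, principal 100, yield 5 percent.
bond = [(1, 5.0), (2, 5.0), (3, 105.0)]
d_a = duration(bond, 0.05)
print(round(d_a, 3))  # 2.859

# With a hypothetical average liability duration of 1.5 and u = 90/100,
# the DGAP is positive: equity value falls when rates rise.
dgap = duration_gap(d_a, 1.5, liabilities=90.0, assets=100.0)
print(round(dgap, 3))  # 1.509
```

Keeping the computed DGAP near zero, for example by lengthening liability duration, is the immunization strategy described above.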
2.6.3. Value at Risk (VaR)15
Value at Risk (VaR) is one of the newer risk management tools. VaR measures the worst expected loss over a given time horizon at a given confidence level under normal market conditions.
VaR has many variations and can be estimated in different ways. We outline the
underlying concept of VaR and the method of estimating it below.
Assume that an amount A0 is invested at a rate of return r, so that after a year the value of the portfolio is A = A0(1+r). The expected rate of return of the portfolio is µ with standard deviation σ. VaR answers the question of how much the portfolio can lose in a certain time period t (e.g., a month). To compute this, we construct the probability distribution of the returns r. We then choose a confidence level c (say 95 percent). VaR tells us the loss (A*) that will not be exceeded in c percent of cases in the given period t. In other words, we want to find the loss that has a probability of 1−c percent of occurring in the time period t. Note that there is a rate of return r* corresponding to A*. Depending on the basis of comparison, VaR can be estimated in an absolute and a relative sense. Absolute VaR is the loss relative to zero and relative VaR is the loss compared to the mean µ. The basic idea of estimating VaR is shown in Figure 2.1 below.
A simpler parametric method can be used to estimate VaR by converting
the general distribution into a standard normal distribution. This method is not
only easier to use but also gives more accurate results in some cases. To use the
parametric method to estimate VaR, the general distribution of the rates of return is converted into a normal distribution in the following way:

−α = (−r* − µ)/σ   (2.4)
Note that α represents the standard normal distribution equivalent loss
corresponding to confidence level of 1-c of the general distribution (i.e., r*).
Thus, in a normal distribution, α would be 1.65 (or 2.33) for a confidence level
c=95 (or c= 99 percent). Expressing time period T in years (so that one month
would be 1/12), the absolute and relative VaRs using the parametric method are
then given as
15. For an extensive discussion on VaR, see Jorion (2001).
VaR(zero) = A0(ασ√T − µT)   (2.5)

and

VaR(mean) = A0ασ√T   (2.6)
respectively. Say, for a monthly series the VaR (zero) is estimated to be ‘y’ at 95
percent confidence level. This means that under normal market conditions, the
most the portfolio can lose over a month is an amount of y with a probability of
95 percent (see Box 1 for an example).
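Equations 2.5 and 2.6 can be sketched as a short computation, using the same figures as the worked example in Box 1:

```python
import math

def parametric_var(a0, mu, sigma, alpha, t_years):
    """Absolute and relative VaR per equations 2.5 and 2.6."""
    var_mean = a0 * alpha * sigma * math.sqrt(t_years)                   # relative to the mean
    var_zero = a0 * (alpha * sigma * math.sqrt(t_years) - mu * t_years)  # relative to zero
    return var_zero, var_mean

# Box 1 figures: SR 100 million, mu = 5%, sigma = 12%, alpha = 2.33 (c = 99%), one month.
var_zero, var_mean = parametric_var(100.0, 0.05, 0.12, 2.33, 1 / 12)
print(round(var_mean, 2), round(var_zero, 2))  # 8.07 7.65
```

The two outputs reproduce the SR 8.07 million relative VaR and SR 7.65 million absolute VaR derived in Box 1.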
Figure 2.1
Basic Concept of Value at Risk
[Figure: the distribution of monthly returns r (%), with the cutoff return r* in the left tail (5 percent loss probability), the mean µ, and the corresponding VAR(0) and VAR(µ) measures marked.]
2.6.4. Risk Adjusted Rate of Return (RAROC)
Risk adjusted rate of return (RAROC), developed by Bankers Trust in
the late 1970s, quantifies risk by considering the tradeoff of risk and reward in
different assets and activities. By the end of the 1990s, RAROC was considered
a leading edge methodology to measure performance and a best practice
standard by financial institutions. RAROC analysis shows how
much economic capital different products and businesses need and determines
the total return on capital of a firm. Though RAROC can be used to estimate the
capital requirements for market, credit and operational risks, it is used as an
integrated risk management tool.16
Figure 2.2
Estimation of Risk Capital for RAROC
[Figure: the distribution of losses, with the expected loss (EL) covered by the loan loss provision, the worst case loss (WL) at the chosen confidence level, risk capital covering the span between EL and WL, and a 5 percent tail beyond WL.]
From a loss distribution over a given horizon (say a year) expected
losses (EL) can be estimated as average losses of the previous years. Worst case
loss (WL) is the maximum potential loss. The worst case loss is estimated at a
given level of confidence, c (e.g., 95 or 99 percent). The unexpected loss (UL) is
the difference between the worst case and expected loss (i.e., UL=WL-EL). Note
that while the expected loss is included as costs (as loan loss provision) when
determining the returns, the unexpected losses arising from random shocks
require capital to absorb them. The unexpected or worst case loss is estimated at a given level of confidence, c, as it is too costly for an organization to hold capital against all potential losses. If the confidence level is 95 percent then there is a
probability of 5 percent that actual losses will exceed the economic capital. The
part of the loss that is not covered by the confidence level is the catastrophic risk
that the firm faces and can be insured. Estimation of risk capital from a loss
distribution function is shown in Figure 2.2. RAROC is determined as,
16. For a discussion of the use of RAROC to determine capital for market, credit and operational risks, see Crouhy, et.al. (2000, pp. 543-48).
RAROC = Risk-adjusted Return / Risk Capital,
where risk-adjusted return equals total revenues less expenses and expected
losses (EL), and risk capital is that reserved to cover the unexpected loss given
the confidence level. While the expected loss is factored in the return (as loan
loss provision), the unexpected loss is equivalent to the capital required to
absorb the loss. A RAROC of x percent on a particular asset means that the
annual expected rate of return of x on the equity is required to support this asset
in the portfolio. Note that RAROC can be used as a tool of capital allocation by
estimating the expected loss ex ante, and used for performance evaluation by
utilizing realized losses ex post.
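The RAROC calculation above can be sketched as a short computation, again using the Box 1 figures:

```python
def raroc(total_revenue, total_cost, expected_loss, worst_case_loss):
    """RAROC = risk-adjusted return / risk capital.

    The risk-adjusted return nets out costs and expected losses; risk
    capital equals the unexpected loss (worst case loss minus expected loss).
    """
    risk_adjusted_return = total_revenue - total_cost - expected_loss
    risk_capital = worst_case_loss - expected_loss
    return risk_adjusted_return / risk_capital

# Box 1 figures (SR million): revenue 48.4, cost 33, EL 5, worst case loss 45.
r = raroc(48.4, 33.0, 5.0, 45.0)
print(round(100 * r, 1))  # 26.0
```

The result reproduces the 26 percent RAROC derived step by step in Box 1.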
2.6.5. Securitization
Securitization is a procedure studied under the rubric of structured finance or credit linked notes.17 Securitization of a bank's assets and loans is a device for raising new funds and reducing the bank's risk exposures. The bank pools
a group of income-earning assets (like mortgages) and sells securities against
these in the open market, thereby transforming illiquid assets into tradable asset-
backed securities. As the returns from these securities depend on the cash flows
of the underlying assets, the burden of repayment is transferred from the
originator to these pooled assets. The structure of the securitization process is shown in Figure 2.3 below. The bank, the originator of the securities, packages
its assets into pools of similar assets. These assets are passed on to a special
purpose vehicle (SPV) or issuer of securities. Note that the SPV is a separate
entity than the originator so that the viability of the bank does not affect the
credit status of assets in the pool. The issued securities are sold to the investors.
A trustee ensures that the SPV fulfills all aspects of the transaction and provides
all services. These include transfer of assets to the pool, fulfilling guarantees and
collateral requirements in case of default. The trustee also collects and transfers
the cash flows generated from the pooled assets to the investors.
17. For a detailed discussion of structured finance and credit linked notes related to securitization see Caouette et.al (1998, Chapter 23) and Das (2000, Chapter 4) respectively.
BOX 1:
Examples of Estimating VaR and RAROC
Estimating VaR: An Example
Assume an investment portfolio marked to market is valued at SR 100 million, with an expected rate of return of 5 percent and standard deviation of 12 percent. We want to estimate the VaR for a holding period of one month at a 99 percent confidence level. Using the symbols in the text, this information can be written as follows:
A0=100 million, µ = 5 percent, σ = 12 percent, c=99, α=2.33, and T=1/12.
Note that a 99 percent confidence level yields α=2.33 in a normal distribution. Given
the above we can estimate the two variants of VaR as:
VaR(mean) = A0ασ√T = 100 × 2.33 × 0.12 × (1/12)^0.5 = 8.07

and,

VaR(zero) = A0(ασ√T − µT) = 100 × [2.33 × 0.12 × (1/12)^0.5 − 0.05 × (1/12)] = 8.07 − 0.42 = 7.65
The result in the relative sense (i.e. relative to mean) implies that under normal
conditions there is a 99 percent chance that the loss of the portfolio will not exceed SR
8.07 million over a month. In the absolute sense (i.e. relative to zero) this amount is
SR 7.65 million.
Estimating RAROC: An Example
Assume that a bank has funds of SR 500 million, of which SR 460 million are deposits
and the remaining SR 40 million equity (Step 2 below shows how this amount is
determined). Say the bank pays an interest rate of 5 percent to the depositors. As
capital is used for unexpected losses, it is invested in risk-free asset (like government
bonds) that has a return of 6 percent. The institution invests its remaining liability in
projects that yields an expected return of 10 percent. The average loss per annum is
estimated at SR 5 million with the worst case loss of SR 45 million at 95 percent
confidence level. The annual operating costs of the bank are SR 10 million. Given
this information, we can estimate the RAROC for the portfolio in the following steps.
1. Estimate Risk Adjusted Return (= Total Revenue – Total Cost – Expected Loss)
Total Revenue = Income from Investment + Income from Bonds
=460 × 0.10 + 40 × 0.06 = 46 + 2.4 = 48.4
Total Cost = Payment to Deposits + Operating Costs
=460 × 0.05 + 10 = 23 + 10 = 33
Expected Loss = 5
Risk Adjusted Return = 48.4 − 33 − 5 = 10.4
2. Estimate Risk Capital (=Worst Case Loss – Expected Loss)
= 45 – 5 = 40
3. Estimate RAROC (=Risk Adjusted Return/Risk Capital) ×100
=10.4/40 ×100 = 26 percent
A RAROC of 26 percent means that the portfolio has an expected rate of return on
equity of 26 percent.
Figure 2.3
Securitization Process
[Figure: the bank (originator) packages its assets into a pool of assets; the pool passes to a special purpose vehicle (SPV), the issuer, which sells asset-backed securities to investors; a trustee oversees the transaction.]
By pooling assets through securitization, a bank can diversify its credit
risk exposure and reduce the need to monitor each individual asset’s payment
stream. Securitization also can be used to mitigate interest rate risk as a bank can
harmonize the maturity of the assets to that of the liabilities by investing in a
wide range of available securities. The process of securitization enables banks to
transfer risky assets from its balance sheet to its trading book.
2.6.6. Derivatives
In recent years derivatives have been increasingly taking an important
role not only as instruments to mitigate risks but also as sources of income
generation. A derivative is an instrument whose value depends on the value of
something else. The major categories of derivatives are futures, options, and
swap contracts.18 Futures are forward contracts of standardized amounts that are
traded in organized markets. Like futures, options are financial contracts of
standardized amounts that give buyers (sellers) the right to buy (sell) without
any obligation to do so. A swap involves an agreement between two or more parties to exchange sets of cash flows in the future according to predetermined specifications.
18. For a discussion on derivatives see Hull (1995) and Kolb (1997).
Recent years have witnessed the explosion of the use of derivatives. To
understand the size of the derivatives in some perspective, we compare it with
the global GDP. In 1999, when the world GDP stood at USD 29.99 trillion, the
notional amount of global over-the-counter derivatives was USD 88.2 trillion. Of these, USD 60.09 trillion (or around 68 percent) were interest rate derivatives. Interest rate swaps accounted for USD 43.94 trillion, or 73 percent of the interest rate contracts and around 50 percent of the total notional value of
derivatives.19 In this section we briefly outline the structure of two derivatives
that have relevance to risk management in banking.
2.6.6.1. Interest-Rate Swaps
As mentioned above, interest rate swaps constitute almost half of the
notional value of all derivatives. Interest rate swaps are used to mitigate the
interest rate risk. Though interest rate swaps can take different types, we outline
formats of two basic ones here.
The simplest interest rate swap (plain vanilla) involves two
counterparties, one having an initial position in a fixed debt instrument and the
other in a floating rate obligation. To understand why the two counterparties
would be interested to swap their interest payments, assume that counterparty A
is a financial institution that has to pay a floating interest on its liability (say it
pays LIBOR+1 percent on its deposits). The counterparty, however, is locked in
an asset that pays a fixed rate of interest for a certain number of years (say 10
percent on a 5-year mortgage). An increase in LIBOR can affect the income of
the financial institution adversely. The counterparty B with the floating rate asset
of LIBOR+3 percent is exposed to interest rate risk and wants to eliminate it. By
swapping the interest payments on their assets, the counterparties can immunize
their earnings from movements in the interest rate. Note that at the end of the
contract period, only the net difference of the interest payments takes place
between the counterparties, as the principal involved on both sides of a swap is
usually the same amount. The structure of an interest rate swap is shown in
Figure 2.4.
19. Data on world income is taken from World Development Indicators (2001) and on derivatives from BCBS (2001c).
Figure 2.4
Interest Rate Swap
[Figure: Counterparty A and Counterparty B exchange payments; the fixed interest of 10% flows one way and the floating interest of LIBOR+3% flows the other.]
The other example of interest rate swap we provide is the one where
parties raise funds at different rates. The swap is beneficial to parties even if one
party can raise funds at higher rates than the other for different types of funds.
The underlying concept of this swap contract is similar to that of theory of
comparative advantage of trade. The objective of the swap is to exchange the
costs of raising funds on the basis of comparative advantages. Table 2.1 shows
an example. We observe that party B can raise both short and long-term funds at
lower rates than party A. Party A, however, can raise short-term funds 0.50
percent cheaper than long-term funds and party B can raise long-term funds 0.25
percent cheaper than short-term funds. Say due to the asset structures, party A
needs long-term funds and party B short-term. Party B can raise long-term funds 2.5 percent (11.5% − 9%) cheaper than party A. Party B can then charge party A its own cost of raising short-term funds, 9.25 percent, less 0.25 percent (i.e., 9 percent). In this way B saves 0.25 percent on the cost of the funds of its own choice.
Party A also saves 0.25 percent: it raises short-term funds at 11 percent (9.25% + 1.75%) and pays 0.25 percent to party B (adding up to 11.25 percent), instead of paying 11.50 percent to raise long-term funds on its own. Both parties end up with a net financial gain while paying in a manner consistent with their own asset and liability structures. Thus the principle of a swap is similar to that of free trade on the basis of comparative advantage. Since swaps are arranged in trillions of US dollars in real life, they are a practical manifestation of the theory of gains from comparative advantage under free trade.
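The savings described above can be checked with simple arithmetic; the 0.25 percent side payment follows the arrangement described in the text:

```python
# Party A needs long-term funds; raising them directly costs 11.50 percent.
# Instead it raises short-term funds at the benchmark plus its spread and
# pays a 0.25 percent fee to party B, as described in the text.
a_direct_cost = 11.50
a_swap_cost = 9.25 + 1.75 + 0.25   # 11.25 percent all-in
a_saving = a_direct_cost - a_swap_cost

# Party B needs short-term funds; raising them directly costs the 9.25
# percent benchmark. Through the swap it effectively pays 0.25 percent less.
b_saving = 9.25 - (9.25 - 0.25)

print(a_saving, b_saving)  # 0.25 0.25
```

Each party keeps 0.25 percent, confirming that both gain by funding through the maturity in which it holds the comparative advantage.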
Table 2.1
Comparative Advantages in Fund Raising

                 Cost of raising         Cost of raising            Cost difference
                 long-term fixed-rate    short-term floating-rate
                 funds                   funds
Party A          11.50%                  Benchmark rate             Can raise short-term funds
                                         (9.25%) + 1.75%            0.50% cheaper than long-term funds
Party B          9%                      Benchmark rate,            Can raise long-term funds
                                         i.e., 9.25%                0.25% cheaper than short-term funds
B competitive    2.5%                    1.75%
in both by
2.6.6.2. Credit Derivatives
Credit derivatives are instruments used to trade credit risk. Credit
derivatives may take different forms such as swaps, options, and credit linked
notes.20 The basic model involves the banks finding a counterparty that assumes
the credit risk for a fee, while the bank itself retains the asset on its book. We
outline the nature of a simple credit swap here. The purpose of the derivative is
to provide default protection to the bank (risk seller) and compensation to the
risk buyer for taking up the bank’s credit risk. By paying a premium, the default
risk of an asset is swapped for the promise of a full or partial payment if the
asset defaults. A credit derivative can be designed to cover any part of the credit risk exposure, such as the amount, maturity, etc. The structure of a credit swap is illustrated in Figure 2.6.
Figure 2.6
Credit Swap
[Figure: the bank (risk seller) pays a fixed premium to the risk buyer, who makes a payment to the bank in case of default.]
2.7. ISLAMIC FINANCIAL INSTITUTIONS: NATURE AND RISKS
20. For a discussion on different types of credit derivatives, see Caouette et.al (1998, pp. 307-309) and Crouhy et.al (2001, pp. 448-61).
In order to understand the risks that Islamic financial institutions face,
we first briefly discuss the nature of these institutions. To put the discussion in perspective, we outline the types of conventional institutions. Financial intermediaries are broadly classified as depositary institutions, investment intermediaries, and contractual intermediaries. Commercial banks, forming the bulk of the depositary institutions, specialize in intermediation, obtaining most of their loanable funds from deposits of the public. Investment intermediaries offer liquid securities to the public for long-term investment. Investment intermediaries are mutuals, with customers being the owners who receive income in the form of dividends and capital gains. Investment intermediaries typically invest in secondary markets and, as such, give investors opportunities to hold securities of private and public institutions. Contractual intermediaries constitute insurance firms and pension funds.21
Iqbal et.al. (1998) distinguish two models of Islamic banks based on the structure of the assets.22 The first is the two-tier Muḍārabah model that replaces interest by profit-sharing (PS) modes on both the liability and asset sides of the bank. In particular, in this model all assets are financed by PS modes of financing (Muḍārabah). This model of Islamic banking will also take up the role of an investment intermediary, rather than being a commercial bank only (Chapra 1985, p. 154). The second model of Islamic banking is the one-tier Muḍārabah with multiple investment tools. This model evolved because Islamic banks faced practical and operational problems in using profit-sharing modes of financing on the asset side. As a result, they opted for fixed-income modes of financing. As mentioned earlier, fixed-income instruments include Murābaḥah (cost-plus or mark-up sale), installment sale (medium/long-term Murābaḥah), Istiṣnā‘/Salam (object deferred sale or pre-paid sale) and Ijārah.23
21. Depending on the regulatory framework of a specific country, financial institutions may perform different functions. For example, universal banks are consolidated institutions providing different financial services that may include intermediation, investment management, insurance, brokerage, and holding equity of non-financial firms (Heffernan 1996). A simple case of a universal bank is one in which the liability side is the same as that of commercial banks, but the asset side differs. While the assets of commercial banks are in the form of loans only, universal banks can hold equity along with loans. By holding equity positions, universal banks can essentially get involved in the decision making and management of the firm.

22. Iqbal, et.al. (1998) mention three models, the third one being the case where Islamic banks work as agents (wakeel), managing funds on behalf of clients on the basis of a fixed commission.

23. For a discussion on these modes of financing see Ahmad (1993), Kahf and Khan (1992), and Khan (1991).
Islamic banking offers financial services by complying with the religious prohibition of Ribā. Ribā is a return (interest) charged in a loan (Qarḍ Ḥasan) contract. This religious injunction has sharpened the differences between current accounts (interest-free loans taken by the owners of the Islamic bank) and investment deposits (Muḍārabah). While the Mushārakah contract characterizes the equity owners, deposits take the form of Muḍārabah contracts.24
24. One difference between Mushārakah and Muḍārabah is that while in the former case the financier also has a role in the management of the project, it does not in the latter case.
2.7.1. Nature of Risks faced by Islamic Banks
Credit Risk: Credit risk would take the form of settlement/payment risk arising when one party to a deal pays money (e.g., in a Salam or Istiṣnā‘ contract) or delivers assets (e.g., in a Murābaḥah contract) before receiving its own assets or cash, thereby exposing it to potential loss. In the case of profit-sharing modes of financing (like Muḍārabah and Mushārakah), the credit risk is the nonpayment of the bank's share by the entrepreneur when it becomes due. Since Murābaḥah contracts are trading contracts, credit risk arises in the form of counterparty risk due to the nonperformance of a trading partner.

Liquidity Risk: Since interest-based borrowing is prohibited by the Sharī‘ah, Islamic banks cannot borrow funds to meet liquidity requirements in case of need. Furthermore, the Sharī‘ah does not allow the sale of debt, other than at its face value. Thus, to raise funds by selling debt-based assets is not an option for Islamic financial institutions.
Legal Risk: Since there are no standardized forms of contracts for various financial instruments, Islamic banks prepare these according to their understanding of the Sharī‘ah, the local laws, and their needs and concerns. This lack of standardized contracts, along with the fact that there are no litigation systems to resolve problems associated with the enforceability of contracts by the counterparty, increases the legal risks associated with Islamic contractual agreements.
Withdrawal Risk: A variable rate of return on saving/investment
deposits introduces uncertainty regarding the real value of deposits. Asset
preservation in terms of minimizing the risk of loss due to a lower rate of return
may be an important factor in depositors' withdrawal decisions. From the bank’s
perspective, this introduces a ‘withdrawal risk’ that is linked to the lower rate of
return relative to other financial institutions.
Fiduciary Risk: A lower rate of return than the market rate also
introduces fiduciary risk, when depositors/investors interpret a low rate of return
as breaching of investment contract or mismanagement of funds by the bank
(AAOIFI 1999). Fiduciary risk can be caused by breach of contract by the
Islamic bank. For example, the bank may not be able to fully comply with the
Sharī‘ah requirements of various contracts. While the justification for the Islamic banks' business is compliance with the Sharī‘ah, an inability to do so, or not doing so willfully, can cause a serious confidence problem and deposit withdrawal.
Displaced Commercial Risk: This risk is the transfer of the risk
associated with deposits to equity holders. This arises when under commercial
pressure banks forgo a part of profit to pay the depositors to prevent withdrawals
due to a lower return (AAOIFI 1999). In such cases, the equity holders will need to apportion part of their own share in profits to the investment depositors.
2.7.2. Unique Counterparty Risks of Islamic Modes of Finance
In this section we discuss some of the risks inherent in some Islamic
modes of financing.
2.7.2.1. Murābaḥah Financing
Murābaḥah is the most predominantly used Islamic financial contract. If the contract is standardized, its risk characteristics can be made comparable to those of interest-based financing. Based on this similarity in risk characteristics, Murābaḥah is approved as an acceptable mode of finance in a number of regulatory jurisdictions. However, such a standardized contract may not be acceptable to all Fiqh scholars. Moreover, as the contract stands at present, there is a lack of complete uniformity in Fiqh viewpoints. The different viewpoints can be a source of counterparty risk in an atmosphere of ineffective litigation.
The main point in this regard stems from the fact that financial Murābaḥah is a contemporary contract. It has been designed by combining a number of different contracts. There is a complete consensus among all Fiqh scholars that this new contract has been approved as a form of deferred trading. The condition of its validity is that the bank must buy (become owner of) the good and after that transfer the ownership right to the client. The order placed by the client is not a sale contract but merely a promise to buy. According to the OIC Fiqh Academy Resolution, a promise can be binding on one party only. The OIC Fiqh Academy, AAOIFI, and most Islamic banks treat the promise to buy as binding on the client. Some other scholars, however, are of the opinion that the promise is not binding on the client; the client, even after placing an order and paying the commitment fee, can rescind the contract. The most important counterparty risk specific to Murābaḥah arises due to this unsettled nature of the contract, which can pose litigation problems.
Another potential problem in a sale contract like Murābaḥah is late payment by the counterparty, as Islamic banks cannot, in principle, charge anything in excess of the agreed upon price. Nonpayment of dues within the stipulated time by the counterparty implies a loss to the bank.
2.7.2.2 Salam Financing
There are at least two important counterparty risks in Salam. A brief
discussion of these risks is provided here.
i. The counterparty risks range from failure to supply on time, or at all, to failure to supply the quality of goods contractually agreed. Since Salam is an agriculture-based contract, the counterparty risks may be due to factors beyond the normal credit quality of the client. For example, the credit quality of the client may be very good, but the supply may not come as contractually agreed due to natural calamities. Since agriculture is exposed to catastrophic risks, the counterparty risks are expected to be more than normal in Salam.
ii. Salam contracts are neither exchange traded nor traded over the counter. They are written between two parties to a contract. Thus all Salam contracts end up in physical delivery and ownership of commodities. These commodities require storage, exposing the banks to storage costs and related price risk. Such costs and risks are unique to Islamic banks.
2.7.2.3 Istiṣnā‘ Financing
While extending Istiṣnā‘ finance, the bank exposes its capital to a number of specific counterparty risks. These include, for example:
i. The counterparty risks under Istiṣnā‘ faced by the bank from the supplier's side are similar to the risks mentioned under Salam. There could be a contract failure regarding quality and time of delivery. However, the object of Istiṣnā‘ is more in the control of the counterparty and less exposed to natural calamities as compared to the object of Salam. Therefore, it can be expected that the counterparty risk of the sub-contractor of Istiṣnā‘, although substantially high, is less severe than that of Salam.
ii. The default risk on the buyer’s side is of the general nature, namely,
failure in paying fully on time.
iii. If the Istiṣnā‘ contract is considered optional and not binding, as the fulfillment of conditions under certain Fiqhī jurisdictions may require, there is a counterparty risk, as the supplier maintains the option to rescind the contract.
iv. If, like the client in the Murābaḥah contract, the client in the Istiṣnā‘ contract is given the option to rescind the contract and decline acceptance at the time of delivery, the bank will be exposed to additional risks.
These risks exist because an Islamic bank, while entering into an Istiṣnā‘ contract, assumes the role of a builder, a constructor, a manufacturer and a supplier. Since the bank does not specialize in these trades, it relies on subcontractors.
2.7.2.4. Mushārakah-Muḍārabah (M-M) Financing
Many academic and policy oriented writings consider that the allocation of funds by Islamic banks on the basis of Mushārakah and Muḍārabah is preferable to the fixed return modes of Murābaḥah, leasing and Istiṣnā‘. But in practice the Islamic banks' use of the M-M modes is minimal. This is considered to be due to the very high credit risk involved.25
The credit risk is expected to be high under the M-M modes because there is no collateral requirement, there is a high level of moral hazard and adverse selection, and banks' existing competencies in project evaluation and related techniques are limited. Institutional arrangements such as tax treatment, accounting and auditing systems, and the regulatory framework are all not in favor of a larger use of these modes by banks.
One possible way to reduce the risks in profit-sharing modes of financing is for Islamic banks to function as universal banks. Universal banks can hold equity along with loans. In the case of Islamic banks this would imply financing using the Mushārakah mode. Before investing in projects on this basis, however, the bank needs to do a thorough feasibility study. By holding equity positions, universal banks can essentially get involved in the decision making and management of the firm. As a result, the bank will be able to monitor the use of funds by the project more closely and reduce the moral hazard problem.
Some economists, however, argue that banks, by not opting for these modes, are actually not benefiting from portfolio diversification and hence taking
25. Credit risk in the context of these modes is similar to the common notion of not receiving the funds back on time or in full.
more risks rather than avoiding risks. Moreover, the use of M-M modes on both sides of the banks' balance sheets would actually enhance systemic stability, as any shock on the asset side would be matched by an absorption of the shock on the deposit side. It is also argued that incentive-compatible contracts can be formulated that reduce the effects of moral hazard and adverse selection. However, these arguments ignore the fact that banks basically specialize in optimizing credit portfolios, not in optimizing credit and equity portfolios jointly. Furthermore, since the Islamic banks' use of current accounts on the liability side is very high, shocks on the asset side cannot be absorbed by these accounts on the liability side. Hence a greater use of M-M on the asset side could actually cause systemic instability, given the large current accounts utilized by the Islamic banks.
III
RISK MANAGEMENT:
A SURVEY OF ISLAMIC FINANCIAL
INSTITUTIONS
3.1. INTRODUCTION
This chapter examines different aspects of risk management issues in
Islamic financial institutions (FIs). Results from a survey based on
questionnaires and field level interviews with Islamic bankers are reported.
Questionnaires were sent to 68 Islamic financial institutions in 28 countries and
the authors visited Bahrain, Egypt, Malaysia and the UAE to discuss issues
related to risk management with the officials of the Islamic financial institutions.
A total of 17 questionnaires were received from 10 countries. The financial
institutions that responded and are included in the study are given in Appendix 1.
Before discussing risk management related issues, we report averages of
some basic balance sheet data in Table 3.1. The average value of assets of 15
Islamic financial institutions stands at US$ 494.2 million with a capital of US$
73.4 million.26 The average capital/asset ratio of these institutions stands at 32.5
percent. This ratio is relatively large due to the inclusion of investment banks
that have a higher capital/asset ratio. The lower section of the table shows the
term structure of assets. A large percentage of the assets (68.8 percent) of the
financial institutions have short-term maturity (of less than a year), 9.8 percent
have a maturity of between 1 and 3 years, and the remaining 21.4 percent are
invested in assets that mature after three years.
Table 3.1
Basic Balance Sheet Data (1999-2000)
                                          Number of      Average
                                          Observations
Assets (Million US$)                      15             494.2
Capital (Million US$)                     15             73.4
Capital/Asset Ratio (Percentage)          15             32.5
Maturity of Assets
Less than 1 year (Percentage of Assets)   12             68.8
1-3 years (a)                             12             9.8
More than 3 years                         12             21.4
(a) One financial institution reports the term structure for 1-5 years.
26 The data for the IDB was not included in these estimations as, given its size and nature, it
would bias the results.
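The gap between the reported average capital/asset ratio (32.5 percent) and the ratio implied by the average capital over average assets (73.4/494.2, about 14.9 percent) reflects averaging over ratios rather than over totals. The sketch below illustrates the distinction with hypothetical bank data (the figures are made up, not taken from the survey):

```python
# Illustrative only: three hypothetical institutions (not the survey data)
# showing why the mean capital/asset ratio can far exceed the ratio implied by
# mean capital over mean assets. Small, highly capitalized investment banks
# pull the average of ratios up.
banks = [
    {"assets": 1200.0, "capital": 90.0},  # large commercial bank (7.5%)
    {"assets": 250.0, "capital": 80.0},   # mid-size bank (32%)
    {"assets": 40.0, "capital": 28.0},    # small investment bank (70%)
]

mean_ratio = sum(b["capital"] / b["assets"] for b in banks) / len(banks)
mean_capital = sum(b["capital"] for b in banks) / len(banks)
mean_assets = sum(b["assets"] for b in banks) / len(banks)

print(f"average of ratios : {mean_ratio:.1%}")  # 36.5%
print(f"ratio of averages : {mean_capital / mean_assets:.1%}")
```

This is consistent with the remark above that the inclusion of highly capitalized investment banks raises the average ratio.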
The questions related to risk management issues in the questionnaire can
be broadly divided into two types. The first set of questions relates to
perceptions of the bankers related to different issues. They were asked to
identify the severity of different problems that their institutions faced on a scale
of 1 to 5, with 1 indicating “Not Serious” and 5 implying “Critically Serious”.
We report the average ranks of the available responses. Note that these rankings
are indicative of the relative risk perceptions of the bankers and do not mean
anything in the absolute sense. The second type of questions had either
affirmative/negative answers or are marked with an × if applicable. In these
cases we report the number of institutions in our sample that gave affirmative
answers. The remaining answers were either negative or left
unanswered. One possible reason for abstention is that the question might not
have been relevant to the institution. For example, FIs not engaged in certain
modes of financing (like Salam and diminishing Mushārakah) may not have
responded to questions related to these instruments. Similarly, banks that are
operating only in the domestic economy are not exposed to exchange rate or
country risks and may have ignored questions related to these issues. For some
questions, however, multiple answers are possible. In these cases, the
percentages of affirmative responses may add up to more than 100.
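The average ranks reported in the tables that follow can be computed as a simple mean over the non-missing responses. A minimal sketch (the response vector is hypothetical, not the actual survey data):

```python
# Sketch of how the average ranks reported below are computed: responses on a
# 1-to-5 scale, with None marking institutions that left the question blank.
# The response vector is hypothetical, not the actual survey data.
def average_rank(responses):
    valid = [r for r in responses if r is not None]
    return len(valid), sum(valid) / len(valid)

responses = [3, 4, 2, None, 5, 3, None, 2, 3, 4]
n, avg = average_rank(responses)
print(f"{n} relevant responses, average rank {avg:.2f}")  # 8 responses, 3.25
```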
The results from the survey are discussed in three sections. The first
section examines the risk perceptions of the Islamic financial institutions. Given
the different nature of Islamic banks, the risks faced by these institutions are
identified and ranked according to their severity. The second section scrutinizes
different aspects of the risk management system and process in Islamic financial
institutions. To do so, we divide discussion into the three constituents of risk
management process outlined in Chapter 2. The third section discusses some
other issues related to risk management in Islamic financial institutions.
3.2. RISK PERCEPTIONS
The nature of Islamic banks is different from that of conventional
interest-based banks, mainly due to the profit-sharing features and modes of
financing used. This changes the kinds of risks that these institutions face. In this
section, we report some perspectives of Islamic bankers regarding the risks that
their institutions face.
3.2.1. Overall Risks Faced by Islamic Financial Institutions
Table 3.2 reports the average rankings of different kinds of risks faced
by Islamic FIs. Note that the rank has a range of 1 to 5, the former indicating
“Not Serious” and the latter implying “Critically Serious”. It appears that Islamic
bankers rank the mark-up (interest rate) risk as the most critical risk they face
(3.07), followed by operational risk (2.92) and liquidity risk (2.81). While credit
risk is the risk that most FIs deal with, they do not rank it as severe as
these risks (2.71). Among the risks listed, Islamic FIs consider market risk to be
the least risky (2.50).
The reasons for considering mark-up risk the most serious may be that
Islamic debt contracts (like Murābaḥah) cannot be repriced and banks cannot use
swaps to transfer this risk. Operational risk may have been ranked high because,
given the new nature of Islamic banking, a lot of the issues related to operations
still need to be instituted. These include training of employees, creating
computer programs and legal documents, etc. Liquidity risk is also ranked higher
than credit risk due to the lack of money market instruments to manage liquidity.
One reason for a relatively low credit risk may be that with the asset- or
commodity-based financing that most Islamic FIs use, this risk is minimized as the
asset/commodity serves as collateral.
Table 3.2
Risk Perception-Overall Risks Faced by Islamic Financial Institutions
Number of Relevant Average Rank*
Responses
Credit Risk 14 2.71
Mark-up Risk 15 3.07
Liquidity Risk 16 2.81
Market Risk 10 2.50
Operational Risk 13 2.92
*The rank has a scale of 1 to 5, with 1 indicating ‘Not Serious’ and 5 denoting ‘Critically
Serious’.
Market risk, incurred on instruments like commodities and equities
traded in well-functioning markets, appears to be the least risky. This risk,
arising from movements in the prices of goods/securities, is usually a part of the
trading book of a bank. On the banking book, conventional banks trade in bonds to keep
a part of their assets in liquid money-market instruments. As the majority of the
Sharī‘ah scholars forbid the sale of debt, trading in bonds is almost nonexistent in
Islamic FIs.27 Islamic banks, however, can trade in commodities and asset-backed
securities. The latter securities are scant, leaving only trading in
commodities as a source of market risk for Islamic FIs. As not too
many banks are involved in commodity trading, this may be a reason for the low
ranking of market risk by Islamic FIs.
3.2.2. Risks in Different Modes of Financing
Table 3.3 reports the Islamic bankers’ viewpoints on different kinds of
risks inherent in various Islamic modes of financing. These results are
discussed below.
Table 3.3
Risk Perception-Risks in Different Modes of Financing*
                        Credit      Mark-up     Liquidity   Operational
                        Risk        Risk        Risk        Risk
Murābaḥah               2.56 (16)   2.87 (15)   2.67 (15)   2.93 (14)
Muḍārabah               3.25 (12)   3.00 (11)   2.46 (13)   3.08 (12)
Mushārakah              3.69 (13)   3.40 (10)   2.92 (12)   3.18 (11)
Ijārah                  2.64 (14)   2.92 (12)   3.10 (10)   2.90 (10)
Istiṣnā‘                3.13 (8)    3.57 (7)    3.00 (6)    3.29 (7)
Salam                   3.20 (5)    3.50 (4)    3.20 (5)    3.25 (4)
Diminishing Mushārakah  3.33 (6)    3.40 (5)    3.33 (6)    3.40 (5)
Note: The numbers in parentheses indicate the number of respondents.
*The rank has a scale of 1 to 5, with 1 indicating ‘Not Serious’ and 5 denoting ‘Critically
Serious’.
Credit Risk
Credit risk appears to be the least serious in Murābaḥah (2.56) and the most
serious in Mushārakah (3.69), followed by diminishing Mushārakah (3.33) and Muḍārabah
27 A form of debt-based bonds exists in Malaysia.
(3.25). It appears that profit-sharing modes of financing are perceived by the
bankers to have higher credit risk. Note that credit risk in the case of profit-sharing
modes of financing arises if the counterparties do not pay the banks their due
profit-share. Furthermore, this amount is not known to banks ex ante. Ijārah
ranks second (2.64) after Murābaḥah as having the least credit risk. Like the
Murābaḥah contract, the Ijārah contract gives the banks a relatively certain income,
and the ownership of the leased asset remains with the bank. Istiṣnā‘ and Salam,
ranked at 3.13 and 3.20 respectively, are relatively more risky. These product-
deferred modes of financing are perceived to be riskier than price-deferred sale
(Murābaḥah). This may be because the value of the product (and hence the return)
at the end of the contract period is uncertain. There are chances that the
counterparty may not be able to deliver the goods on time. This may happen for
different reasons, like natural disasters (for commodities in a Salam contract) and
production failure (for products in an Istiṣnā‘ contract). Even if the good is
delivered, there can be uncertainty regarding the price of the good upon delivery,
affecting the rate of return.
The results on credit risk give some insight into the composition of
instruments in Islamic banks. As noted earlier, Islamic banks’ assets are
concentrated in fixed-income assets (like Murābaḥah and Ijārah). The results
from the survey indicate that one explanation for this concentration may be that
these instruments are perceived as having the least credit risk among the Islamic
modes of financing. As banks’ business is to take up and manage credit risks,
Islamic banks do not opt for other profit-sharing modes of financing (like
Muḍārabah and Mushārakah) as they regard these instruments to be relatively
more risky.
Mark-up Risk
Table 3.3 shows that mark-up risk ranked highest in terms of severity in
the product-deferred contracts of Istiṣnā‘ (3.57) and Salam (3.50), followed by
the profit-sharing modes of financing, Mushārakah and diminishing Mushārakah
(both ranked at 3.4), and Muḍārabah (3.0).28 Murābaḥah is considered to have the
least mark-up risk (2.87), followed by Ijārah (2.92). Mark-up (interest rate) risk
tends to be higher in long-term instruments with fixed rates. One reason for the
higher concern about mark-up risk in Istiṣnā‘ may be that these instruments are
28 Mark-up risk can appear in profit-sharing modes of financing like Muḍārabah and
Mushārakah as the profit-sharing ratio depends, among other things, on a benchmark rate like
LIBOR. For a discussion on determining the profit-sharing ratio, see Ahmed (2002).
usually of a long-term nature. This is particularly true for real estate projects. The
contracts are tied to a certain mark-up rate, and changes in the interest rate
expose these contracts to risks. Murābaḥah shows the least risk as this mode of
financing is usually short-term. After Murābaḥah, Ijārah is perceived to have
relatively less mark-up risk. Even though Ijārah contracts may be long-term,
the return (rent) on these contracts can be adjusted to reflect market conditions.
Among the profit-sharing modes of financing, the Islamic bankers rank
Mushārakah and diminishing Mushārakah relatively higher as these are usually
longer-term engagements. Muḍārabah is usually used for short-term financing
and has a lower mark-up risk than these two instruments.
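The link between contract maturity and mark-up risk can be illustrated with a simple present-value calculation (an illustrative sketch, not a method described in the survey): a fixed claim due further in the future loses more value when the benchmark rate rises.

```python
# Illustrative sketch (not from the text): present value of a fixed receivable
# of 100 due in T years, discounted at the benchmark rate r. A one-point rise
# in the benchmark erodes the value of a 5-year fixed claim far more than a
# 1-year one, which is why long-term Istisna' contracts rank high on mark-up
# risk while short-term Murabahah ranks low.
def pv(amount, r, years):
    return amount / (1 + r) ** years

for years in (1, 5):
    before = pv(100, 0.05, years)  # benchmark at 5 percent
    after = pv(100, 0.06, years)   # benchmark rises to 6 percent
    loss = (before - after) / before
    print(f"{years}-year claim loses {loss:.2%} of its value")
```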
Liquidity Risk
Liquidity risk of instruments will be smaller if the assets can be sold in
the markets and/or have short-term maturity. The bankers consider Muḍārabah
to have the least liquidity risk (2.46), followed by Murābaḥah (2.67). Note that
both of these instruments are usually used for short-term financing. Other
instruments are perceived as relatively more risky, with diminishing
Mushārakah showing the highest liquidity risk (with a rank of 3.33) and the
product-deferred instruments of Salam and Istiṣnā‘ closely following at 3.2 and
3.0 respectively. Ijārah is also perceived to have a relatively high liquidity risk
(3.1).
Operational Risk
As mentioned above, operational risk can arise from different sources.
Some aspects relevant to operational risk in Islamic banks are the legal risk
involved in contracts, the understanding of the modes of financing by
employees, and producing computer programs and legal documents for different
instruments. The rankings showing the operational risk for different
instruments reflect these concerns. It appears that operational risk is
lower in the fixed-income assets of Murābaḥah and Ijārah (2.93 and 2.9
respectively) and among the highest in the product-deferred sale contracts of Salam
and Istiṣnā‘ (3.25 and 3.29). The profit-sharing modes of financing, Muḍārabah
and Mushārakah, follow closely with ranks of 3.08 and 3.18 respectively.
Operational risk is highest in diminishing Mushārakah (3.4). The relatively
high rankings of these instruments indicate that banks find the contracts
complex and difficult to implement.
3.2.3. Additional Issues regarding Risks faced by Islamic Financial
Institutions
Table 3.4 shows the Islamic bankers’ viewpoints on some specific risk-related
issues facing Islamic FIs. Given that Islamic banking is a
relatively new industry, the Islamic bankers are of the view that there is a lack of
understanding of the risks involved in Islamic modes of financing. They rank the
gravity of this problem at 3.82. As the rates of return on deposits in Islamic
banks are based on profit sharing, this imposes certain risks on the liability side
of the balance sheet. Even though returns on deposits can vary, Islamic banks are
under pressure to give the depositors a return similar to that of other banks. They
rank this concern at 3.64. This factor is important as a lower return than that
given by other banks leads to two additional risks. First, the withdrawal risk that
can result from a lower rate of return is considered serious, as shown by its
ranking of 3.64. The banks also regard fiduciary risk, in which the depositors
blame the bank for a lower rate of return, as serious, with a rank of 3.21.
Table 3.4
Risk Perception-Additional Issues regarding Risks faced by Islamic
Financial Institutions
No. of Average
Relevant Rank*
Responses
1. Lack of understanding of risks involved in Islamic 17 3.82
modes of financing.
2. The rate of return on deposits has to be similar to 14 3.64
that offered by other banks.
3. Withdrawal Risk: A low rate of return on deposits 14 3.64
will lead to withdrawal of funds
4. Fiduciary Risk: Depositors would hold the bank 14 3.21
responsible for a lower rate of return on deposits
*The rank has a scale of 1 to 5, with 1 indicating ‘Not Serious’ and 5 denoting ‘Critically
Serious’.
Note that most of the rankings in Table 3.4 that Islamic bankers assign
to specific risks faced by their institutions are higher than all the rankings of the
traditional risks that financial institutions face (as reported in Table 3.2). To
have some indication of this, we compare the averages of these specific risks
faced by Islamic banks (Table 3.4) with those of the traditional risks (Table 3.2).
The average for the former is 3.58 while it is 2.80 for the latter. Thus, Islamic
banks not only face some risks that are different from those of conventional banks, but
there is a feeling that these risks are more serious and not well understood. This
calls for more research on risk-related issues in Islamic FIs to understand and
manage these risks.
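The comparison of averages can be checked directly from the ranks published in Tables 3.2 and 3.4:

```python
# Recomputing the two averages compared in the text from the published ranks.
traditional = [2.71, 3.07, 2.81, 2.50, 2.92]  # Table 3.2 (overall risks)
specific = [3.82, 3.64, 3.64, 3.21]           # Table 3.4 (specific issues)

avg_specific = sum(specific) / len(specific)           # 3.58 when rounded
avg_traditional = sum(traditional) / len(traditional)  # 2.80 when rounded
print(f"specific: {avg_specific:.2f}, traditional: {avg_traditional:.2f}")
```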
Islamic financial institutions have also identified other risks that they
face. At the government level, these include legal aspects and taxes (e.g., taxes
on interest, leases, Murābaḥah profit, and services). At the central bank level,
additional risks include those arising from central bank regulations and
legislation, and the absence of an Islamic window for borrowing in times of need.
Other risks pointed out are those arising from Sharī‘ah compliance, non-availability
of short-term funds, foreign exchange, natural disasters, specific industries, the
domestic economy and politics, and the international financial and market environment.
3.3. RISK MANAGEMENT SYSTEM AND PROCESS
As discussed in Chapter 2, the system and process of risk management has three
main constituents. We discuss the risk management practices of Islamic financial
institutions under these three heads below. As mentioned above, we report the
affirmative answers given to different questions by the institutions in the sample.
3.3.1. Establishing Appropriate Risk Management Environment and Sound
Policies and Procedures
Table 3.5 reports some aspects of establishing a risk management
environment. While 13 institutions (76.5 percent) have a
formal risk management system in place, 16 (94.1 percent) have a
section/committee responsible for identifying, monitoring and controlling
various risks. The same number of institutions (16) have internal guidelines,
rules and concrete procedures related to risk management. In the sample, 13
banks (76.5 percent) have a clear policy of promoting asset quality and 14 (82.4
percent) have guidelines that are used for loan approvals. Only 12 banks
(70.6 percent) determine the mark-up rates on loans by taking account of the
loan grading or the risks of the counterparty.
Table 3.5
Establishing an Appropriate Risk Management Environment, Policies and
Procedures
No. of Percentage
Affirmative of Total
Responses
1. Do you have a formal system of Risk 13 76.5
Management in place in your organization?
2. Is there a section/committee responsible for 16 94.1
identifying, monitoring, and controlling various
risks?
3. Does the bank have internal guidelines/rules and 16 94.1
concrete procedures with respect to the risk
management system?
4. Is there a clear policy promoting asset quality? 13 76.5
5. Has the bank adopted and utilized guidelines for a 14 82.4
loan approval system?
6. Are mark-up rates on loans set taking account of 12 70.6
the loan grading?
3.3.2. Maintaining an Appropriate Risk Measurement, Mitigating, and
Monitoring Process
Table 3.6 shows the number of affirmative responses to some issues
related to the risk measurement and mitigation process. A relatively small number of
Islamic banks in the sample (41.2 percent) have a computerized support system
to estimate the variability of earnings for risk management purposes. The main
risk faced by banks is credit risk. To mitigate this risk, the majority of the banks
(94.1 percent) have credit limits for individual counterparties and 13 institutions
(76.5 percent) have a system for managing problem loans. Most banks have a
policy of diversifying investments across sectors and industries (88.2 percent
and 82.4 percent respectively). A smaller number of banks (64.7 percent)
diversify their investments across countries. This lower number may reflect the
fact that some banks operate only domestically. To measure and manage
liquidity risk, 12 institutions (70.6 percent) compile a maturity ladder chart to
monitor cash flows and gaps. To measure benchmark or interest rate risk, a small
fraction of the institutions (only 29.4 percent) use simulation analysis. Around
three-quarters of the banks (76.5 percent) have a regular reporting system on risk
management for senior management.
Table 3.6
Maintaining an Appropriate Risk Measuring, Mitigating, and Monitoring
Process
No. of % of
Affirmative Total
Responses
1. Is there a computerized support system for 7 41.2
estimating the variability of earnings and risk
management?
2. Are credit limits for individual counterparty set 16 94.1
and are these strictly monitored?
3. Does the bank have a policy of diversifying investments across:
a) Different countries 11 64.7
b) Different sectors (like manufacturing, trading 15 88.2
etc.)
c) Different Industries (like airlines, retail, etc.) 14 82.4
4 Does the bank have in place a system for 13 76.5
managing problem loans?
7. Does the bank regularly (e.g. weekly) compile a 12 70.6
maturity ladder chart according to settlement date
and monitor cash position gaps?
8. Does the bank regularly conduct simulation 5 29.4
analysis and measure benchmark (interest) rate
risk sensitivity?
9. Does the bank have in place a regular reporting 13 76.5
system regarding risk management for senior
officers and management?
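The maturity ladder chart mentioned in item 7 can be sketched as follows: expected cash inflows and outflows are bucketed by settlement date, and a negative cumulative gap flags a period in which the bank would need liquidity (all figures below are hypothetical):

```python
# Sketch of a maturity ladder: expected inflows and outflows are bucketed by
# settlement date; a negative cumulative gap flags a period in which the bank
# would need liquidity. All figures are hypothetical.
buckets = ["0-7d", "8-30d", "1-3m", "3-12m"]
inflows = [120.0, 80.0, 150.0, 200.0]
outflows = [100.0, 130.0, 110.0, 160.0]

cumulative = 0.0
for bucket, inflow, outflow in zip(buckets, inflows, outflows):
    gap = inflow - outflow
    cumulative += gap
    flag = "  <-- funding needed" if cumulative < 0 else ""
    print(f"{bucket:>6}: gap {gap:+7.1f}, cumulative {cumulative:+7.1f}{flag}")
```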
Table 3.7 shows the different risk reports that the banks in the sample
produce. Note that a few institutions have indicated that they may not have
separate risk reports as indicated in the table, but may prepare report(s) that may
include information on some of these risks. The table shows that the most widely
used report in the Islamic banks is the liquidity risk report, with 13 banks (76.5
percent) producing it, followed by the credit risk report (70.6 percent).
Operational risk reports are the least used, with only 3 institutions (17.6 percent)
producing them. Few institutions produce interest rate risk reports (23.5 percent) and
aggregate market risk reports (29.4 percent). While 11 Islamic banks in the sample
(64.7 percent) produce a capital at risk report and 10 banks (58.8 percent) produce
commodities and equities position risk reports, relatively fewer institutions
prepare foreign exchange and country risk reports (41.2 percent and 35.3 percent
respectively). One reason for these low numbers may be that some banks
operate domestically only and as such are not exposed to either foreign exchange
or country risks.
Table 3.7
Maintaining an Appropriate Risk Measuring, Mitigating, and Monitoring
Process-Risk Reports
No. of Percentage
Affirmative of Total
Responses
1. Capital at Risk Report 11 64.7
2. Credit Risk Report 12 70.6
3. Aggregate Market Risk Report 5 29.4
4. Interest Rate Risk Report 4 23.5
5. Liquidity Risk Report 13 76.5
6. Foreign Exchange Risk Report 7 41.2
7. Commodities & Equities Position Risk Report 10 58.8
8. Operational Risk Report 3 17.6
9. Country Risk Report 6 35.3
Some financial institutions produce other specific risk reports not
included in Table 3.7 above. These include Compliance Risk Report, Bad and
Doubtful Receivables Report, Monthly Progress Report, Defaulting Cases
Report, and Related Party Exposure Report.
Table 3.8 exhibits different risk measuring and mitigation techniques
used by Islamic banks. There may be a variety of formats in which these
techniques can be used, ranging from very simple analysis to sophisticated
models. The most common risk measuring and managing technique is the credit
rating of prospective investors, used by 76.5 percent of the institutions in the
sample. Around 65 percent of the institutions use an internal rating system for these
ratings.29 Maturity matching analysis to mitigate liquidity risk is used by 10
institutions (58.8 percent). While more than half of the institutions (52.9 percent)
estimate worst case scenarios, 47.1 percent of them use duration analysis to
estimate interest rate risk and the risk adjusted rate of return on capital (RAROC) to
determine overall risk. Seven banks (41.2 percent) in the sample use
Earnings at Risk and Value at Risk measures. Only 29.4 percent of the
banks use simulation techniques to assess different risks.
29 The internal rating system is used by large commercial banks to determine the economic
capital they should hold as insurance against losses. BIS (2001) is trying to introduce the
internal rating system to determine the capital requirements of banks in its new standards
(see Section 4). The internal rating system which the Islamic banks have reported using
can be considered as a simple listing of assets in accordance with their quality, particularly
for provisioning loan loss reserves.
Table 3.8
Maintaining an Appropriate Risk Measuring, Mitigating, and Monitoring
Process-Measuring and Management Techniques
No. of Affirmative Percentage
Responses of Total
1. Credit Ratings of Prospective Investors 13 76.5
2. Gap Analysis 5 29.4
3. Duration Analysis 8 47.1
4. Maturity Matching Analysis 10 58.8
5. Earnings at Risk 7 41.2
6. Value at Risk 7 41.2
7. Simulation techniques 5 29.4
8. Estimates of Worst Case scenarios 9 52.9
9. Risk Adjusted Rate of Return on Capital 8 47.1
(RAROC)
10. Internal Rating System 11 64.7
The banks indicate the use of some other techniques not listed in Table
3.8 above. These include analysis of collateral, of the sector and market exposure
of debtors, and of lending risk, as well as measuring the effect of the price of a
particular commodity (like oil) and of the global stock market on the borrower.
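One of the Table 3.8 techniques, Value at Risk, can be sketched by historical simulation. This is one common convention, not necessarily the one used by the surveyed banks, and the return series below is hypothetical:

```python
# Sketch of Value at Risk by historical simulation on a hypothetical series of
# daily portfolio returns: the 95% one-day VaR is a loss exceeded on only
# about 5% of past days (one simple percentile convention among several).
def historical_var(returns, confidence=0.95):
    losses = sorted((-r for r in returns), reverse=True)  # largest loss first
    k = int((1 - confidence) * len(losses))               # tail index
    return losses[k]

daily_returns = [0.002, -0.011, 0.004, -0.006, 0.009, -0.015,
                 0.001, -0.003, 0.007, -0.020, 0.005, -0.001,
                 0.003, -0.008, 0.006, -0.004, 0.002, -0.012,
                 0.008, -0.002]
var_95 = historical_var(daily_returns)
print(f"95% one-day VaR: {var_95:.1%} of portfolio value")
```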
Table 3.9 focuses on the monitoring aspects of risk management. Note
that there can be more than one answer to the questions, so the sum of the
percentages (given in parentheses) may exceed 100.30 Almost 70 percent of the
banks reappraise collateral regularly and 29.4 percent of them do so
occasionally. A large percentage of the banks (82.4 percent) also confirm a
guarantor’s intention to guarantee loans regularly. One institution reviews such
guarantees occasionally. Of the institutions engaged in international investments, 8
(47.1 percent) review country ratings regularly, 3 (17.7 percent) do so
occasionally, and 1 bank does not review these ratings at all. Note that a specific
question on reserve provisions for losses was omitted from the questionnaire.
Though most Islamic banks have excess reserves, the information on RAROC
indicates that about half of these institutions estimate risk capital to account for
unexpected losses.
30 Five institutions had more than one answer. A bank can have more than one answer as
it may take different approaches depending on the asset type and the tenure of the contract.
While a significant number of banks (76.5 percent) use international
accounting standards, only 64.7 percent of them use AAOIFI standards. Five
institutions report using other accounting standards, mainly national ones. The
frequency of assessing profit and loss positions is daily for 7 (41.2 percent)
institutions, weekly for 4 (23.5 percent) banks and monthly for almost 70
percent of the banks.
Table 3.9
Maintaining an Appropriate Risk Measuring, Mitigating, and Monitoring
Process- Risk Monitoring
Regularly Occasionally Never
1. Does the bank periodically 12 5
reappraise collateral (asset)? (70.6%) (29.4%)
2. Does the bank confirm a 14 1
guarantor’s intention to guarantee (82.4%) (5.9%)
loans with a signed document?
3. If loans are international, does the 8 3 1
bank regularly review country (47.1%) (17.7%) (5.9%)
ratings?
4. Does the bank monitor the 12 2
borrower’s business performance (70.6%) (11.8%)
after loan extension?
International AAOIFI Other
5. Does the accounting standards 13 11 5
used by the bank comply with the (76.5%) (64.7%) (29.4%)
following standards?
Daily Weekly Monthly
6. Positions and Profits/Losses are 7 4 12
assessed? (41.2%) (23.5%) (70.6%)
3.3.3. Adequate Internal Controls
Table 3.10 points out some aspects of internal controls that Islamic FIs
have in place. Eleven banks (64.7 percent) indicate that they have some form of
internal control system in place that can promptly identify risks arising from
changes in the environment. The same number of banks have countermeasures
and contingency plans against disasters and accidents. A large percentage (82.4
percent) of the banks have separated the duties of those who generate risks from those
who manage and control risks. Thirteen banks (76.5 percent) indicate that
the internal auditor reviews and verifies the risk management systems, guidelines,
and risk reports. As many as 94.1 percent of these institutions have backups of
software and data files.
Table 3.10
Adequate Internal Controls
No. of Percentage
Affirmative of Total
Responses
1. Does the bank have in place an internal 11 64.7
control system capable of swiftly dealing with
newly recognized risks arising from changes
in environment, etc.?
2. Is there a separation of duties between those 14 82.4
who generate risks and those who manage
and control risks?
3. Does the bank have countermeasures 11 64.7
(contingency plans) against disasters and
accidents?
4. Is the Internal Auditor responsible to review 13 76.5
and verify the risk management systems,
guidelines, and risk reports?
5. Does the bank have backups of software and 16 94.1
data files?
3.4. OTHER ISSUES AND CONCERNS
In recent times there has been an exponential growth in the use of
derivatives by conventional financial institutions for both risk mitigation and
income generating purposes. There are, however, reservations regarding the use of
derivatives from the Sharī‘ah perspective. As such, with the exception of a few,
most Islamic financial institutions do not use derivatives. This is revealed in
Tables 3.11 and 3.12. Table 3.11 shows the institutions using derivatives for
hedging (risk mitigation) purposes and Table 3.12 points out the number of
banks using these instruments for income generating purposes. These tables
show that while there is only one case of the use of forward contracts for income
generating purposes, there are several cases of the use of derivatives for risk
mitigation purposes. Specifically, there are three cases of currency forwards, and
one case each of commodity forwards, currency swaps, commodity swaps, and
mark-up swaps. The case of the mark-up swap (or profit rate swap) is interesting.
Table 3.11
Use of Derivatives for Hedging (Risk management) Purposes (No. of
Institutions)
Forwards Futures Options Swaps
Currency 3 - - 1
Commodity 1 - - 1
Equity - - - -
Mark-up Rate - - - 1
Table 3.12
Use of Derivatives for Income Generation Purposes (No. of Institutions)
Forwards Futures Options Swaps
Currency 1 - - -
Commodity - - - -
Equity - - - -
Interest Rate - - - -
Table 3.13
Lack of Instruments/Institutions related to Risk Management
No. of Relevant Average
Responses Rank*
1. Short-term Islamic financial assets that can be 15 3.87
sold in secondary markets
2. Islamic money markets to borrow funds in case 16 4.13
of need.
3. Inability to use derivatives for hedging. 14 3.93
4. Inability to re-price fixed return assets (like 16 3.06
Murābaḥah) when the benchmark rate changes.
5. Lack of legal system to deal with defaulters. 15 4.07
6. Lack of regulatory framework for Islamic banks. 15 3.8
*The rank has a scale of 1 to 5, with 1 indicating ‘Not Serious’ and 5 denoting ‘Critically
Serious’.
Table 3.13 sheds light on the seriousness of some constraints that
Islamic financial institutions face in managing risks. The first two concerns
relate to the lack of financial instruments/institutions for liquidity risk
management. Lack of Islamic financial assets that can be bought/sold in
secondary markets is ranked at a high of 3.87 and the absence of Islamic money
markets to borrow funds in case of need at 4.13. The banks rank inability to use
derivatives to transfer risks at 3.93. Among the concerns listed in the table,
inability to reprice assets is considered the least serious (ranked at 3.06). This
may be due to the fact that most of the assets that Islamic banks use have
short-term maturity, so interest rate risk is relatively small. The bankers,
however, have worries about legal and regulatory risks. These are ranked at 4.07
and 3.8 respectively. Note that these constraints identified by Islamic banks are
ranked much higher than the traditional risks (like credit risk, interest rate risk,
etc. listed in Table 3.2) that these institutions face.
Table 3.14 reports the responses of Islamic banks on some issues related
to their operations. Ten banks (58.8 percent) in the sample are actively engaged
in research to develop Islamic-compatible risk management instruments and
techniques. When a new risk management product or scheme is introduced, a
significant number of Islamic banks (76.5 percent) get clearance from the
Sharī‘ah board. Only three banks (17.7 percent) have used securitization to raise
funds and transfer risks. A relatively small number of banks (41.2 percent) have
a reserve that is used to increase the profit share of depositors in low-performing
years. This is done mainly to mitigate the withdrawal and fiduciary risks that
Islamic banks face. Note that investment banks and the only development bank
(IDB) do not have depositors in the traditional sense, so this question does not
apply to them.
Table 3.14
Other Issues related to Islamic Financial Institutions
                                                        No. of Affirmative    Percentage
                                                        Responses             of Total
1. Is your bank actively engaged in research to
   develop Islamic-compatible risk management
   instruments and techniques?                          10                    58.8
2. When a new risk management product or scheme
   is introduced, does the bank get clearance from
   the Sharī‘ah Board?                                  13                    76.5
3. Does the bank use securitization to raise funds
   for specific investments/projects?                   3                     17.7
4. Do you have a reserve that is used to increase
   the profit share (rate of return) of depositors
   in low-performing periods?                           7                     41.2
5. Is your bank of the view that the Basel Committee
   standards should be equally applicable to
   Islamic banks?                                       10                    58.8
6. Is your organization of the view that
   supervisors/regulators are able to assess the
   true risks inherent in Islamic banks?                9                     52.9
7. Does your organization consider that the risks
   of investment depositors and current accounts
   shall not mix?                                       9                     52.9
The final set of questions in Table 3.14 relates to the regulatory aspects
of Islamic banks. Only 9 banks (52.9 percent) are of the view that
supervisors/regulators are able to assess the risks inherent in Islamic banks and
10 (58.8 percent) of them conclude that the Basel Committee standards should
be equally applicable to Islamic banks. Nearly half of the banks (52.9 percent)
believe that the risks of investment deposits and current accounts should not mix.
The views of Islamic financial institutions regarding capital requirements are
shown in Table 3.15. As can be seen, opinions differ. Seven banks
(41.2 percent) are of the view that the capital requirements for Islamic banks
should be less than for conventional banks, 6 banks (35.3 percent) think they
should be equal to those of their conventional counterparts, and only three
institutions (17.7 percent) believe they should be more.
Table 3.15
Capital Requirement in Islamic Banks compared to Conventional Banks
                                                   Less        Same        More
Do you think that the capital requirements
for Islamic banks as compared to                   7           6           3
conventional banks should be                       (41.2%)     (35.3%)     (17.7%)
3.5. RISK MANAGEMENT IN ISLAMIC FINANCIAL INSTITUTIONS:
AN ASSESSMENT
The above analysis has touched on different aspects of risk management
in Islamic FIs. It first identifies the severity of different risks and then examines
the risk management process in Islamic banks. Among the traditional risks
facing Islamic banks, mark-up risk is ranked the highest, followed by operational
risk. The results show that Islamic financial institutions face some risks that are
different from those faced by conventional financial institutions, and the banks
consider some of these risks more serious than the conventional ones.
Profit-sharing modes of financing (diminishing Mushārakah, Mushārakah and
Muḍārabah) and product-deferred sales (Salam and Istiṣnā‘) are considered
riskier than Murābaḥah and Ijārah. Other risks arise in Islamic banks because
they pay depositors a share of profit that is not fixed ex ante. Islamic banks are
under pressure to offer returns similar to those of other institutions, as they
believe that depositors will hold the bank responsible for a lower rate of return
and may withdraw their funds.
In order to get an overall assessment of the risk management system of
Islamic FIs, we report averages of the three constituents of this process. The
average score represents the sum of affirmative answers as a percentage of the
total possible answers in each component. For example, the average score for
“Establishing an Appropriate Risk Management Environment, Policies, and
Procedures” (Table 3.5) is 82.4 percent. We arrive at this number by taking the
sum of all affirmative answers given by financial institutions in Table 3.5 (i.e.
84) as a percentage of all possible affirmative answers (i.e. 17×6=102). The
corresponding figures for “Maintaining an Appropriate Risk Measuring,
Mitigating, and Monitoring Process” (Table 3.6) and “Adequate Internal
Controls” (Table 3.10) are 69.3 percent and 76 percent respectively.
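The averaging arithmetic described above is simple enough to verify directly. Below is an illustrative sketch; the function name is our own, and the counts are those quoted in the text for Table 3.5 (84 affirmative answers from 17 banks over 6 questions):

```python
def average_score(affirmative_answers, num_banks, num_questions):
    """Affirmative answers as a percentage of all possible affirmative answers."""
    possible_answers = num_banks * num_questions
    return 100.0 * affirmative_answers / possible_answers

# Table 3.5: 84 affirmative answers out of 17 x 6 = 102 possible -> 82.4 percent
print(round(average_score(84, num_banks=17, num_questions=6), 1))  # 82.4
```

The same function reproduces the 69.3 percent and 76 percent figures when fed the corresponding counts from Tables 3.6 and 3.10.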
These figures indicate that Islamic banks have been able to establish
better risk management policies and procedures (82.4 percent) than measuring,
mitigating, and monitoring risks (69.3 percent), with internal controls
somewhere in the middle (76 percent). We note two points from these results.
First, the overall averages are relatively high. One reason for this may be an
upward bias in the sample: we believe that banks with relatively better risk
management systems were more likely to respond to the questionnaire, giving
these higher averages. Second, the relative percentages
indicate that Islamic financial institutions have to upgrade their measuring,
mitigating, and monitoring process followed by internal controls to improve
their risk management system.
The results also point out that the lack of certain instruments (such as
short-term financial assets and derivatives) and of money markets hampers risk
management in Islamic financial institutions. There is a need for research in
these areas to develop instruments, and markets for them, that are compatible
with Sharī‘ah. At the government level, the legal system and regulatory
framework of the Islamic financial system need to be understood and appropriate
policies undertaken to cater to the needs of Islamic banks.
It should be noted that the views expressed in this chapter are those of
Islamic bankers. As pointed out in the Introduction, the bankers’ view of risks
and their management will differ from the perspectives of the regulators and the
members of the Sharī‘ah board. Given their different objectives, regulators and
Sharī‘ah experts may take a more conservative approach towards risk and its
management. These perspectives are discussed in the following chapters.
IV
RISK MANAGEMENT:
REGULATORY PERSPECTIVES
4.1 ECONOMIC RATIONALE OF REGULATORY CONTROL ON
BANK RISKS
Banks generate assets by using depositors’ funds. Since the rate of
return on a bank’s equity depends on the volume of assets accumulated, banks
have a natural inclination to mix a small amount of their own equity with as
much of depositors’ money as possible. Hence banks’ assets exceed their equity
several times over. If assets are far larger than equity, even a small loss on
assets could be enough to wipe out a bank’s equity, causing it to collapse with
losses to depositors. As a result of contagion effects and disruptions in the
payments and settlement processes, the collapse of even a small bank can be a
source of major systemic instability. Islamic banks are no exception to this
systemic phenomenon. Liberalization, electronic banking, clearance and
settlement processes, the availability of diverse financial assets, financial
consolidation and the emergence of highly leveraged institutions have added to
the fragility of financial systems. The primary concern of regulatory standards
and supervisory oversight is to ensure systemic stability, protect the interests of
depositors and enhance the public’s confidence in the financial intermediation
system. However, due to the rapidly changing nature of financial markets,
regulatory and supervisory standard setting appears to always remain a “work
in progress”. In this section we discuss regulatory and supervisory concerns with
risk management at the level of individual banks. We also present an overview
of recent supervisory trends in aligning bank capital with asset risks and the
implications of this for Islamic banks.
4.1.1. Controlling Systemic Risks
Systemic risk is the probability that the failure of even a small bank
could have a contagion effect and disrupt the whole payments system. This
could lead to a financial crisis, a decline in the value of assets in place,
impairment of the economy’s capacity for growth, unemployment, lower
economic welfare and even social and political instability.
For a number of reasons, banks are the only institutions having such significance
for systemic stability.
i. Banks are not only business firms but are also agents of the payments,
clearance and settlement system.
ii. Banks are highly leveraged and exposed to financial risks and instability.
iii. Regulatory interference is not always perfect. In particular, deposit
protection schemes and lender of last resort facilities create moral hazard
on the part of both banks and depositors.
iv. Financial liberalization, the technological and computing revolution, and
electronic banking, clearing and settlement systems have enabled
banking to cross geographic boundaries and regulatory jurisdictions.
v. The extent of mergers, financial consolidation and cross-segment
activities – banks writing insurance contracts, insurance companies
undertaking investment activities, investment banks mobilizing
deposits, etc. – is on the rise, leading to a mixing of the risks of these
different segments. The systemic importance of a bank is different
from that of an investment firm or an insurance company. The failure of
a bank creates severe contagion effects due to the disruption of the
payments and settlement processes; by comparison, the failure of an
insurance or investment firm has a more isolated effect on the firm
itself. Moreover, insurance firms and investment banks are not covered
by lender of last resort or deposit insurance schemes; hence they do not
face moral hazard and adverse selection problems as such. Furthermore,
the nature of the liabilities and assets of banks and these other firms is
different. Cross-segment activities blur all these functional differences
and mix the different types of risks, making regulation and supervision
more important.31
vi. An important source of systemic risk is the relationship of banks with
highly leveraged firms. Banks are not only highly leveraged themselves,
but are also a source of creating leverage. Leverage increases financial
risks and creates financial instability. Banks, themselves highly
leveraged, can be severely destabilized if they keep large exposures to
other highly leveraged firms. Therefore banks need to be aware of the
risks and risk management systems of their counterparties.32

31 Banks are prone to “runs” for a number of reasons: (i) banks owe depositors and other
creditors fixed obligations irrespective of the quality of their assets (this feature, however,
does not exist in Islamic banks); (ii) the value of bank assets is not known to depositors –
bank runs are thus a matter of psychology and confidence rather than a true assessment of
banks’ asset values; (iii) depositors are paid on a first-come, first-served basis, which
encourages a run at the first sign of problems; and (iv) banks are highly interconnected
through the payments and settlement process – and depositors know that. See Llewellyn
(1999).
vii. Banks undertake large amounts of off-balance sheet activities. Due
particularly to the increase in securitization and derivative activities,
these off-balance sheet activities have grown disproportionately.
Although useful, they are a source of disguised leverage for banks.
4.1.2 Enhancing the Public’s Confidence in Markets
The efficiency of financial markets depends on the confidence of the
public in financial intermediaries, which in turn depends on the integrity of
these institutions. Public confidence in financial institutions strengthens the
financial intermediation system, and society as a whole benefits in terms of
financial efficiency and stability. Some of the benefits of financial
intermediation which need to be strengthened by the regulatory process are
given here:
i. Due to economies of scale, specialization and technical expertise,
financial intermediaries are better suited than individual savers to
evaluate the risks of counterparties. Financial intermediation thus
reduces information costs, moral hazard and adverse selection, and
consequently the cost of finance. If the public loses confidence in the
financial intermediation system, it could withdraw from it; as a result,
the cost of funds in the economy would increase, leading to an
inefficient allocation of resources.
32 A classic real-world case of a relatively small firm potentially causing a meltdown of
global financial markets occurred in September 1998, when Long-Term Capital Management
(LTCM), a US hedge fund with capital of $4.8 billion and assets of $200 billion, was
rescued at the instigation of regulators. The collapse of LTCM could have caused serious
systemic instability. This incident triggered a series of regulatory guidelines and standards
regarding banks’ relationships with highly leveraged firms and counterparty risk
management. See, for example, Report of the (US) President’s Working Group on Hedge
Funds, Leverage and the Lessons of Long-Term Capital Management (1999); BCBS, Sound
Practices for Banks' Interaction with Highly Leveraged Institutions (1999). All BCBS
publications can be accessed at:.
ii. Financial intermediaries reduce a number of mismatches between the
preferences and needs of savers and investors: maturity mismatches,
fund-size mismatches and liquidity mismatches. A confidence problem
could widen these mismatches, increasing the frictions in the process of
resource allocation.
iii. Financial intermediaries are much more capable of assessing the risks of
alternative investment opportunities than individual savers. A
confidence problem prevents this comparative advantage from being
utilized.
iv. Efficiency in processing the transactions of the payments system is
extremely important for reducing transaction costs. Electronic systems
have made the process even more critical for the competitive efficiency
of an economy. Lack of public confidence in financial institutions can
impair the utilization of payments facilities, leaving the economy
inefficient compared to its competitors.
In order to enhance the public’s confidence in the financial
intermediation system, the interests of depositors and other users of financial
services need to be protected. Depositors of banks in particular, and users of
financial services in general, are not in a position to protect their own interests
in the way shareholders of banks and other firms can. There are a number of
reasons for this that require regulatory and supervisory oversight.
i. Depositors and other customers of the financial services industry are
numerous and often have short-term relationships with banks and other
financial institutions. As a group and individually, they are not able to
monitor the activities of financial institutions, which always involve
complex long-term contracts.
ii. Financial institutions play an important fiduciary role. A financial
contract may be of a particular nature at the time it is sold to the client,
but may later be changed due to genuine needs or merely due to moral
hazard on the part of the institution. Customers cannot effectively
monitor the enforcement of contracts in their own best interest over
time.
iii. Customer protection has become even more important with the new
regime of e-banking, rising trends in money laundering and other acts of
deceit by some elements.
For these and other reasons, the regulatory and supervisory authorities
have to safeguard and protect the interests of customers. Without such
protection and safeguards, the integrity of markets cannot be ensured and the
confidence of customers in financial institutions cannot be strengthened. As a
result, inefficiency, systemic instability and financial crises can grip the
markets, affecting economic development and welfare adversely.
4.1.3 Controlling the Risk of Moral Hazard
Some of the policies and safety nets introduced by regulatory authorities
to protect the integrity of markets, to safeguard the interests of depositors and to
avoid systemic risks often become the source of moral hazard on the part of
depositors as well as banks. Regulation and supervision are also required to
safeguard these safety net arrangements.
i. The lender of last resort (LLR) facility of central banks aims at
preventing bank runs by providing a liquidity facility to banks in times
of crisis. Many studies indicate that since central banks stand ready to
rescue banks – particularly banks which are “too big to fail” – banks
behave imprudently. In addition to regulatory oversight, it is often
recommended that the LLR facility be provided at a very high cost and
that the private sector participate in overcoming any financial crisis by
taking direct responsibility for financial losses.
ii. Deposit insurance schemes aim at protecting depositors in case of bank
failure. Since depositors have nothing to lose once deposits are insured,
banks undertake risky activities: the rate of interest on deposits being
fixed, if the risky operations succeed, all the high returns accrue to the
owners of the banks, and if they fail, deposits are protected anyway.
Since deposits are protected, depositors also have no incentive to
monitor the activities of banks. A number of studies have accordingly
found that financial instability is high in countries where deposits are
fully protected.33 Effective regulatory and supervisory oversight is thus
required to prevent, or at least minimize, the adverse consequences of
the safety net schemes put in place by the public authorities.

33 See Demirguc and Enrica (2000).
4.2 INSTRUMENTS OF REGULATION AND SUPERVISION
The regulation of financial institutions is generally classified into
conduct-of-business regulation and prudential regulation. The former is
required to protect the interests of customers. These interests can be protected
by requiring banks to put a certain minimum amount of their own capital at
stake and by ensuring timely disclosure of accurate information. Establishing a
satisfactory level of competence and integrity in supplying banking services,
maintaining a level playing field for competition and ensuring the production
and supply of fair financial contracts and products are also primary
requirements in this regard. To achieve these objectives of conduct-of-business
regulation, uniform sets of standards, rules and guidelines are required.
Prudential regulation is directed at systemic safety by ensuring the soundness of
individual financial institutions through the application of the set standards and
guidelines across institutions.
4.2.1 Regulating Risk Capital: Current Standards and New Proposals
Bank capital is the most effective source of protection against risks. It is
also an effective means of regulation, because capital standards can be enforced
uniformly across institutions and jurisdictions. In general, capital refers to
shareholders’ equity. Capital is required to support the risks of assets and for its
stabilizing and confidence-building role, particularly against the eventuality of
a crisis and, indeed, when one arises. Traditionally, the adequacy of capital in a
banking firm is gauged by the capital/asset ratio, i.e., the leverage ratio (LR).
The LR does not cover the relative risks of different assets. In addition, it does
not take into account the stabilizing role of long-term funds, which, due to their
long-term maturity as compared to deposits, have the potential to relieve the
pressure on shareholders’ equity in case of a crisis. Therefore, the 1988 Basel
Capital Accord34 introduced the concept of risk weights for assets and
distinguishes between tier-1 and tier-2 capital35. The Accord requires that
internationally active banks in G-10 countries maintain at least a 3% leverage
ratio, at least 4% tier-1 capital against risk-weighted assets (RWAs) and at least
8% total capital (tier-1 plus tier-2) against RWAs36. In this section we briefly
overview the salient features of the existing and proposed regulatory capital
standards.
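The three minimums just listed can be checked mechanically. The following is a minimal sketch with illustrative figures (the function name is ours, and the capping of tier-2 capital at the tier-1 amount reflects our reading of footnote 35):

```python
def meets_1988_minimums(tier1, tier2, total_assets, rwa):
    """Check the 1988 Accord minimums for an internationally active bank.

    Tier-2 capital may not exceed 50% of total capital, i.e. it counts
    only up to the tier-1 amount (our reading of footnote 35).
    """
    total_capital = tier1 + min(tier2, tier1)
    return (tier1 / total_assets >= 0.03      # at least 3% leverage ratio
            and tier1 / rwa >= 0.04           # at least 4% tier-1 against RWAs
            and total_capital / rwa >= 0.08)  # at least 8% total capital against RWAs

# Illustrative bank: $6m tier-1, $3m tier-2, $150m assets, $100m RWAs
print(meets_1988_minimums(6.0, 3.0, 150.0, 100.0))  # True
```

In this illustrative case the leverage ratio is 4%, the tier-1 ratio 6% and the total capital ratio 9%, so all three minimums are met.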
4.2.1.1 Regulatory Capital for Credit Risk: Present Standards
Credit risk is so important for banks, and from the regulators’
perspective, that the 1988 Capital Accord requires capital only against the
credit risk of banks’ on-balance sheet and off-balance sheet assets. Banks are in
the business of borrowing money to lend. As a result of lending, receivables
from clients make up an overwhelming part of their total assets. The quality of
these assets therefore depends on timely and full repayment by the clients. A
failure to do so, i.e. default, is always probable, depending on the credit quality
of the client. The primary concern of regulators is, therefore, that banks should
be aware of their credit risk and maintain a minimum level of capital to
overcome any instability caused by the default of a client. The total assets of a
bank are put into five risk categories (0%, 10%, 20%, 50% and 100%). The
summary composition of each risk bucket for on-balance sheet items is given in
Table 4.1.37

34 The Basel Committee on Banking Supervision – an international standard-setting body –
was established by the central bank governors of the Group of Ten countries at the end of
1974. The Committee's members now come from Belgium, Canada, France, Germany, Italy,
Japan, Luxembourg, the Netherlands, Spain, Sweden, Switzerland, the United Kingdom and
the United States. In 1988, the Committee decided to introduce a capital measurement system
commonly referred to as the Basel Capital Accord. This system provided for the
implementation of a credit risk measurement framework with the aim of establishing a
minimum capital standard of 8% of total risk-weighted assets by the end of 1992. In 1996 the
Accord was amended to also require capital for market risks. The Accord is expected to
remain effective until 2005, when the New Accord is expected to be implemented.

35 The capital standards differentiate between tier-1 capital or core capital (pure capital or
basic equity), tier-2 capital or supplementary capital, tier-3 capital recognized by the 1996
amendment, and the leverage ratio, in the following form. A. Supervisors shall ensure that
tier-1 (core) capital, i.e., a) basic equity + b) disclosed reserves from post-tax bank earnings,
minus a) goodwill and b) investment in subsidiaries, shall not fall short of 50% of a bank's
total capital. B. Supervisors shall also ensure that tier-2 (supplementary) capital, i.e., a)
undisclosed reserves + b) revaluation reserves + c) general loan loss reserves + d) hybrid
debt instruments + e) subordinated term debt of 5 years’ maturity (maximum limit 50% of
tier-1 capital), shall not exceed 50% of a bank's total capital. C. In some countries
subordinated debt having a maturity of less than 5 years is classified as tier-3 capital in
accordance with the 1996 amendment of the Accord covering market risks.

36 These standards are also maintained in the proposed New Basel Accord; see the
discussion below.
The total capital requirement for on-balance sheet assets is reached by
first putting all assets into their respective buckets and deriving the RWAs of
each bucket. For example, assets in the 0% risk-weight category are free of
default risk and need no capital for their protection. Assets in the 100%
risk-weight category are very risky, and all such assets need a minimum of 4%
tier-1 and 8% total capital protection. If the assets in this category are $100
million, a minimum of $8 million ($100m × 0.08) of total capital is required for
this category of assets. In the second step, the required capital for all categories
is added up to arrive at the minimum capital requirement for the on-balance
sheet items.
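The two-step calculation just described can be sketched as follows; the bucket amounts below are illustrative, not drawn from the text:

```python
# Step 1: derive risk-weighted assets (RWAs) per 1988 Accord bucket.
# Keys are risk weights in percent; values are asset amounts in $ millions.
assets_by_weight = {0: 50.0, 20: 100.0, 50: 60.0, 100: 100.0}

rwa = sum(weight * amount for weight, amount in assets_by_weight.items()) / 100

# Step 2: apply the minimum ratios to the summed RWAs.
min_tier1_capital = 0.04 * rwa  # 4% tier-1 against RWAs
min_total_capital = 0.08 * rwa  # 8% total capital against RWAs

print(rwa, min_total_capital)  # 150.0 12.0
```

So a $310 million book concentrated as above carries $150 million of RWAs and requires at least $12 million of total capital.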
Table 4.1
Summary of Risk Capital Weights by Broad On-Balance-Sheet Asset Categories
Risk Weights (%)     Asset Category
0                    Cash and gold bullion; claims on OECD governments such
                     as Treasury bonds; or insured residential mortgages.
0, 10, 20, or 50     Claims on national public sector entities, excluding the
(at national         central government, and loans guaranteed by them.
discretion)
20                   Claims on OECD banks and OECD public sector entities,
                     such as securities issued by US government agencies or
                     claims on municipalities; claims on multilateral banks or
                     claims guaranteed by them.
50                   Loans fully secured by mortgage on residential property.
100                  All other claims, such as corporate bonds and less-developed
                     country debt, claims on non-OECD banks, equity, real estate,
                     premises, plant and equipment.
For non-derivative off-balance sheet exposures, a credit conversion
system and a set of risk weights are provided. Using these guidelines, the
off-balance sheet exposures are converted into their on-balance sheet
equivalents and the capital requirement is determined. The capital requirement
for off-balance sheet derivative positions is calculated separately, again using
the standards set for this purpose. The overall credit risk capital requirement
according to the 1988 Accord is the sum total of the on-balance sheet and
off-balance sheet capital requirements.

37 For explanations and details see BCBS (1988).
4.2.1.2 Reforming Regulatory Capital for Credit Risk: The Proposed New
Basel Accord
Although the 1988 Accord was meant for application in the G10 and
other OECD countries, it has become a standard benchmark for determining the
capital adequacy of banks worldwide. For the first time, it provided a systematic
framework for aligning bank capital with the risks of banks’ assets. A number of
studies confirm that since the introduction of the Accord, bank capital has been
strengthened in almost all countries. However, for a number of reasons, the 1988
Accord is under review and will be replaced by the proposed New Accord
during 2005.38 Some of the reasons which have prompted the review of the
Accord are given here.
i. The Accord was meant for internationally active banks of the G10 and
other OECD countries, but non-G10 and non-OECD countries have also
embraced it, and it has assumed the position of an international
benchmark for measuring the capital adequacy of banks. From the
non-OECD country perspective, it is hence desirable to adjust it so that
it can fulfil the needs of developing countries as well.
ii. Default risk also depends on the maturity of the facility, longer-maturity
assets being riskier than shorter-maturity ones. Regulatory capital
requirements that give a lower risk weight to short-maturity assets could
therefore have encouraged flows of such capital at the expense of more
stable longer-term capital. This consideration needs to be built into the
regulatory standards.
iii. At the time of its adoption, the Accord was truly revolutionary in
aligning capital with the relative risks of assets. During the past decade
a number of new risks have arisen, and new methods of risk
management have been devised and put into practice. There has been
unprecedented advancement in computing and information. E-banking
and other information- and technology-intensive services have bypassed
regulatory jurisdictions. Rapid consolidation has taken place in the
financial services industry. All these changes need to be taken into
account in measuring the real capital adequacy requirement of banks.

38 The Basel Committee issued a consultative document on the New Accord in June 1999.
After consultations, the documents of the proposed New Accord were launched in January
2001. The Committee initially planned to finalize agreement on the Accord during 2001,
with implementation from 2004, but the response to the invitation for consultations was so
overwhelming that the Basel Committee now plans to finalize agreement on the document
during 2002, with the agreement expected to become effective from 2005. The New Accord
comprises three pillars, namely capital adequacy, the supervisory review process and market
discipline.
iv. The Accord has also created capital arbitrage opportunities, particularly
by encouraging off-balance sheet and trading activities. The merits of
these activities have been tremendous; however, they have provided
opportunities for “capital arbitrage” (CA) and “cherry picking”.
Through securitization, good-quality assets have been taken off banks’
balance sheets and sold to raise additional funds, without the
corresponding liabilities being removed from the balance sheets. As a
result, additional funds are raised with the same amount of capital,
reducing the overall quality of assets and making the banks riskier.
v. The proposed New Accord, by covering these and other pertinent
considerations, aims at aligning bank capital and risk management
systems more strongly. It seeks to encourage and give incentives for
risk management systems by keeping the capital requirement under the
internal ratings-based (IRB) approaches lower than under the
standardized approach. It also aims at enhancing disclosures about risk
management systems and other important information so that market
discipline can be strengthened. The proposed Accord further aims at
making bank supervision more risk-based and dynamic.
4.2.1.3 Treatment of Credit Risk under the Proposed New Accord
The consultative document for the proposed New Accord offers three
approaches to determining risk-weighted capital for credit risk, namely the
standardized approach, the foundation IRB approach and the advanced IRB
approach. The objective of offering alternative approaches is to encourage a
risk management culture in banks by requiring less regulatory capital from
banks that have put standard risk management systems in place. The risk
management systems of banks that opt for the IRB approaches will be verified
by supervisors. Depending on the supervisory risk assessment, banks can
graduate from the standardized approach to the foundation IRB approach and
from there to the advanced IRB approach, benefiting from the regulatory capital
relief offered.
Treatment of Credit Risk under the Standardized Approach
The main proposal is to replace the risk-weighting method of the 1988
Accord with a risk weighting of assets based on the ratings of external credit
assessment agencies, according to the risk weights given in Table 4.2.
Table 4.2
External credit assessment based risk weighting system
Assessment39
Claims on            AAA to   A+ to   BBB+ to   BB+ to   Below   Not
                     AA-      A-      BBB-      B-       B-      rated
Sovereigns           0%       20%     50%       100%     150%    100%
Banks – Option 1¹    20%      50%     100%      100%     150%    100%
Banks – Option 2²
  Long-term          20%      50%     50%       100%     150%    50%
  Short-term³        20%      20%     20%       50%      150%    20%
Corporations         20%      100%    100%      100%     150%    100%
1 Risk weighting based on the risk weighting of the sovereign in which the bank is incorporated.
2 Risk weighting based on the assessment of the individual bank.
3 Claims on banks of a short original maturity, for example less than six months.
Source: Taken from BCBS 2001 (The New Basel Accord).
This risk weighting system implies, for example, that if the sovereign
counterparty of an asset worth $100 million is rated AAA to AA-, the asset is
treated as free of default risk and requires no capital. But if the rating of the
sovereign is BB+ to B-, the asset carries a 100% risk weight, so the bank must
hold a minimum of 4% ($4 million) tier-1 capital and 8% ($8 million) total
capital against it. If the sovereign counterparty is rated below B-, the $100
million asset is treated, for capital purposes, as $150 million, and the total
capital requirement is 8% of $150 million.
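The arithmetic of this example can be sketched in a few lines of Python. The rating-to-weight map is the sovereign row of Table 4.2, and the 4% and 8% ratios are the Accord's tier-1 and total capital minima; the function and dictionary names are ours, introduced for illustration:

```python
# Sketch of the standardized-approach capital arithmetic for a
# sovereign exposure, using the sovereign row of Table 4.2.
SOVEREIGN_WEIGHTS = {
    "AAA to AA-": 0.00, "A+ to A-": 0.20, "BBB+ to BBB-": 0.50,
    "BB+ to B-": 1.00, "Below B-": 1.50, "Not rated": 1.00,
}

def required_capital(exposure, rating_band):
    """Return (minimum tier-1, minimum total) capital for a sovereign claim."""
    rwa = exposure * SOVEREIGN_WEIGHTS[rating_band]  # risk-weighted assets
    return 0.04 * rwa, 0.08 * rwa                    # 4% tier-1, 8% total

# The $100 million asset from the text:
print(required_capital(100e6, "AAA to AA-"))  # no capital needed
print(required_capital(100e6, "BB+ to B-"))   # $4m tier-1, $8m total
print(required_capital(100e6, "Below B-"))    # treated as $150m of RWA
```

The same pattern extends to the bank and corporate rows of the table by swapping in the corresponding weight map.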
Collateral, guarantees, credit derivatives and netting arrangements are the
most important instruments for mitigating credit risk. Based on the quality of
these, supervisors can give relief in the capital requirements under certain
conditions, subject to the satisfactory use of standard risk management
techniques and systems by banks. These are treated uniformly in the
standardized approach and in the foundation internal ratings approach.

39. Risk weights for claims secured by residential property are 50%, and for
commercial real estate 100%. Claims on multilateral development banks take a
case-by-case approach, based on a minimum of 0% for AAA to AA- institutions
with a very strong shareholder structure and paid-in and callable capital. For
risk weights of off-balance sheet items, the risk weights of the 1995 modified
Accord are maintained, with some modifications for the maturity of the facility.
i. Collateral is the most important of the four techniques for controlling
credit risk. Cash, debt securities, equities, mutual fund units and gold can
be used as collateral. The actual strength of collateral depends on the loss
in collateral value due to various risks; the estimate of this loss is
called a 'haircut'. Normally, the haircut on treasury bills is 0%; if the
collateral is equity, the haircut is 30%; if the collateral is an asset in
default, the haircut is 100%, a total loss. The proposed New Accord
therefore provides for supervisory relief in risk capital allocation
subject to the quality (haircut) of the collateral. A methodology for
determining haircuts under the various approaches is provided in the
New Accord.
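The haircut mechanics can be sketched as follows, using the haircut figures quoted above (0% for treasury bills, 30% for equity, 100% for a defaulted asset). The category and function names are ours, not the Accord's:

```python
# Haircut-adjusted value of collateral: only the post-haircut value
# counts toward reducing the exposure that needs capital coverage.
HAIRCUTS = {"treasury_bills": 0.00, "equity": 0.30, "defaulted_asset": 1.00}

def secured_exposure(exposure, collateral_value, collateral_type):
    """Exposure left uncovered after applying the collateral haircut."""
    usable = collateral_value * (1.0 - HAIRCUTS[collateral_type])
    return max(exposure - usable, 0.0)

# A $100 loan against $100 of equity: the 30% haircut leaves $70 of
# usable collateral, so $30 of the exposure remains effectively unsecured.
print(secured_exposure(100.0, 100.0, "equity"))
```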
ii. In addition to collateral, on-balance sheet netting, credit derivatives and
guarantees are recognized by the Accord as credit risk mitigants eligible
for supervisory relief in risk capital allocation. These are, however,
subject to a number of conditions, to the existence of risk management
systems and disclosures, and to other details given in the Accord
document.
Treatment of Credit Risk under the IRB Approach
An internal rating system, in its simplest form, can be considered an
inventory of all the assets of a bank, kept with a view to the future value of
these assets. In this way an IRB system maps all the assets of a bank according
to the risk characteristics of each asset. All banks have some system of internal
ratings in place for provisioning loan loss reserves, but an increasing number of
banks are putting in place formal IRB systems, often based on computerized
models. Internal rating systems can be instrumental in filling gaps in the
existing risk management systems of a bank. They are therefore expected to
enhance the risk assessment of an institution by external credit assessment
agencies as well as by supervisory risk assessment, leading to lower capital
requirements and a reduced cost of funds.
The IRB approach to credit risk management has a number of
advantages. First, it makes the regulatory capital requirement more risk
sensitive: riskier banks will need more capital, less risky ones less capital. The
IRB approach is expected to be effective in this regard. Second, the IRB
approach is expected to provide incentives for risk management systems. As an
incentive for banks to develop their own internal systems of risk management,
the New Accord recognizes internal ratings for credit risk capital allocation.
The Accord offers two alternative internal ratings approaches, namely a
foundation approach and an advanced approach.
The foundation approach is suitable for less sophisticated institutions,
and the advanced one is open for use by sophisticated institutions. Under both
approaches, the exposures of an institution are classified into corporations,
banks, sovereigns, retail, project finance and equity. These exposures are
specifically defined in the foundation and advanced approaches, but both
approaches are based on five key concepts as determinants of credit risk:
probability of default (PD), loss given default (LGD), exposure at default
(EAD), maturity of facility (MOF) and granularity. Each is briefly described
here.
i. Probability of Default (PD): The PD of a client is the measure of the
credit risk faced by the bank. The work of rating agencies provides
vital information about the PDs of counterparties. The results of S&P's
default studies (40) provide several factual findings about the
historical characteristics of PDs. First, the higher the rating, the lower
the probability of default; lower ratings always correspond to higher
default rates. Second, the lower the initial rating of a party, the sooner
the party faces default. A company initially rated B defaults, on
average, within 3.6 years; an AA company defaults within 5 years of
the initial rating. A company downgraded to CCC defaults within an
average period of less than 6 months. Third, higher ratings are
longer-lived: a company rated AAA has a 90.3% chance of still being
rated AAA a year later, while this chance is 84.3% for an initial BBB
rating and 53.2% for a CCC. Thus, ratings provide reliable and
systematic information about credit risk. Financial institutions can
measure their credit risk by calculating PD information and
maintaining it over time. In all approaches, individual banks must
calculate their own PDs for the corporate, bank and sovereign
categories. In addition, bank supervisors will calculate the PDs of the
clients of individual banks in order to verify the accuracy of the PDs
provided by the banks.

40. See Standard & Poor's (2001).
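The transition figures quoted from the S&P studies (a AAA name keeping its rating with 90.3% probability over one year, and so on) are typically put to work by chaining a one-year rating transition matrix over several years. A small sketch follows; the 3-state matrix is illustrative and made up for the example, not S&P's data:

```python
# Illustrative 3-state one-year rating transition matrix
# (rows: current state; columns: state one year later).
STATES = ["A", "B", "D"]          # D = default (absorbing state)
P = [
    [0.95, 0.04, 0.01],           # A -> A, B, D
    [0.05, 0.85, 0.10],           # B -> A, B, D
    [0.00, 0.00, 1.00],           # once in default, stay in default
]

def step(dist, P):
    """One year of migration: multiply a state distribution by P."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

def cumulative_pd(start, years):
    """Probability of having defaulted within `years`, starting from `start`."""
    dist = [1.0 if s == start else 0.0 for s in STATES]
    for _ in range(years):
        dist = step(dist, P)
    return dist[STATES.index("D")]

# Lower initial ratings default sooner, as the S&P studies report:
print(cumulative_pd("A", 5), cumulative_pd("B", 5))
```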
ii. Loss Given Default (LGD): The LGD is a measure of the dollar value
of the loss to the portfolio given a particular default. The PD is specific
to a given borrower; the LGD is specific to a given credit facility.
Together, the PD and the LGD give a better measure of credit risk.
Some banks may not be able to calculate the LGDs of their facilities
reliably, while others may be able to do so. After reviewing individual
banks' risk management systems, supervisors have the discretion to
decide whether to allow banks to use their own LGD calculations or to
assign supervisory LGD values to their facilities. Banks that qualify
for the use of their own LGD calculations will graduate to the
advanced IRB approach, and banks that are required to use
supervisory-assigned LGDs will be placed in the foundation IRB
approach. Under the foundation IRB approach, supervisors will decide
the LGDs of various facilities, with a supervisory benchmark of 50%
LGD for an unsecured facility and a 75% LGD value for subordinated
exposures (Table 5.1 provides the regulatory risk weights given the
benchmark LGD of 50%). For secured, collateralized transactions,
supervisors will decide the LGD values using the collateral haircut
standards set under the standardized approach to capital allocation for
credit risk. Under the advanced IRB approach, banks will be allowed
to use their own LGD estimates for various facilities in allocating
capital for credit risk. Banks are expected to use scientific and
verifiable processes for LGD calculation across facilities, collateral,
borrowers and exposures. Supervisors have the discretion to decline
banks the use of their own LGDs, i.e. to require them to follow the
foundation IRB approach.
iii. Exposure at Default (EAD): Like the LGD, the EAD is facility
specific. It is the measure of the total exposure of the facility at the
time of default. If the commitment is, for example, for a $100 facility
to be utilized over two years, drawn in four equal amounts, and the
default happens at the end of the first year, the EAD is $50. Indeed, the
default event will also have an impact on future exposures for the
remaining $50 of the facility. As with the LGD, under the foundation
IRB approach supervisors will calculate EADs for individual banks
using set supervisory rules, while under the advanced IRB approach
banks are entitled to calculate their own EAD values for various
facilities. The qualitative characteristics of the system are the same as
described under the LGD.
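The facility-level quantities combine in the standard expected-loss identity EL = PD x LGD x EAD. A sketch using the supervisory 50% benchmark LGD and the $50 EAD from the drawdown example above; the 2% PD is an assumption for illustration:

```python
def expected_loss(pd, lgd, ead):
    """Expected loss of a facility: PD x LGD x EAD."""
    return pd * lgd * ead

# Foundation-IRB style inputs: the supervisory 50% LGD benchmark for an
# unsecured facility, the $50 EAD from the drawdown example, assumed 2% PD.
el = expected_loss(pd=0.02, lgd=0.50, ead=50.0)
print(el)  # 0.5 dollars of expected loss on the facility
```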
iv. Maturity of Facility (MOF): The MOF is an important determinant of
credit risk. As the S&P default studies above show, a longer-maturity
facility has a higher probability of default for all rating classes. Banks
are required to provide complete information about the maturity of
their facilities.
v. Granularity: Granularity is the measure of single-borrower
concentration in the bank's credit portfolio. The more the credit
portfolio is spread among borrowers, the more the non-systematic
risks of the borrowers are diversified, and the lower the credit risk and
the capital requirement. The benchmark granularity is the market
average: granularity above the benchmark will require more capital,
and below the benchmark less capital. This evaluation of granularity is
required to distinguish the credit risk of each facility from that of
every other facility, so that capital can be allocated to each facility
differently. The IRB approach requires that the risk of each facility be
measured separately. The bank's credit portfolio should not be
exposed too much to the non-systematic risk of any one borrower
through loan concentrations.
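Single-borrower concentration of the kind granularity captures is often summarized with a Herfindahl-type index of exposure shares. This is a common proxy for concentration, not the Accord's exact granularity formula:

```python
def herfindahl(exposures):
    """Herfindahl index of a portfolio: sum of squared exposure shares.
    Equals 1/n for n equal exposures (well spread) and 1.0 for a
    portfolio concentrated in a single borrower."""
    total = sum(exposures)
    return sum((e / total) ** 2 for e in exposures)

granular = [10.0] * 10          # ten equal borrowers
lumpy    = [91.0] + [1.0] * 9   # one borrower dominates the book

print(herfindahl(granular))  # low index: diversified, less capital needed
print(herfindahl(lumpy))     # high index: concentrated, more capital needed
```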
4.2.1.4 Regulatory Treatment of Market Risk
As discussed above, market risks include the interest rate, commodity
price, exchange rate and equity price risks faced by banks' asset portfolios as a
result of their trading positions. As mentioned earlier, the original 1988 Basel
Accord does not require capital for these risks. The risks were brought under
the regulatory umbrella by the 1996 amendment to the Accord, which became
effective in 1998. The amendment introduces two approaches (41) to the
regulatory assessment of market risks:
i) the standardized approach; and
ii) the internal ratings-based approach.

41. The amendment in fact brought three fundamental changes to the original
Accord: market risks were brought under the regulatory capital requirements;
tier-3 capital was introduced to cover market risks; and two approaches, the
standardized and internal ratings approaches, were introduced.
The choice of an approach is at the supervisor's discretion, based on a
review and understanding of the risk management systems and processes
existing in banks. Supervisors may also encourage banks to use both
approaches simultaneously. Capital requirements for banks in the first
category are meant to be higher than for the second category. The objective of
these alternative approaches is to introduce an effective incentive system for
better risk management, by setting lower capital requirements for those opting
for internal ratings and relatively higher capital requirements for the
standardized approach. In fact, this incentive system proved to be successful
and triggered a revolutionary enhancement of the risk management culture in
banks within a short period of time. Impressed by the benefits of the
alternative approaches, as discussed in the case of credit risk, the New Accord
suggests adopting the internal ratings approach for credit risk as well. The New
Accord is therefore, in some sense, an extension of the approaches of the 1996
amendment to cover credit risks. In other words, as far as market risks are
concerned, the 1996 amendments to the 1988 Accord will continue beyond
2005 with minor modifications.
In the standardized approach, the capital charge for each market risk is
first determined separately, following standardized methods for each risk.
These capital charges are then added to determine the total capital
requirement. Interest rate risk is subdivided into specific and general risk.
Specific capital charges are designed to capture the risk underlying any net
position due to the non-systematic risks of the counterparty, and are hence
specific to positions in individual instruments. General risk refers to the risk of
loss arising from changes in market interest rates. Banks may choose between
two methods, the "maturity ladder" and "duration" methods, for allocating risk
weights. The underlying principle of the more commonly practiced maturity
ladder approach is that longer maturities require higher risk weights and
shorter maturities lower risk weights. Because of this principle, it is alleged
that the regulatory framework is biased against long-maturity, systemically
stable sources of funding and favors short-maturity, unstable sources of
funding. As a result, the system might have contributed to the flow of
short-term funds and the resultant financial instability. The internal ratings
approach is essentially based on the value at risk technique, as briefly
discussed in section two of this paper.
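The internal models approach rests on a value-at-risk calculation. A minimal parametric (variance-covariance) sketch for a single position follows; the 99% confidence level and 10-day horizon mirror the 1996 amendment's parameters, while the position size and daily volatility are assumptions for illustration:

```python
from math import sqrt

def parametric_var(position, daily_vol, horizon_days=10, z=2.33):
    """One-sided parametric VaR: z * sigma * sqrt(horizon) * position.
    z = 2.33 corresponds roughly to a 99% confidence level."""
    return z * daily_vol * sqrt(horizon_days) * position

# A $10m position with an assumed 1% daily volatility:
var = parametric_var(10e6, 0.01)
print(round(var))  # roughly $737,000 of 10-day 99% VaR
```

Under the 1996 amendment, the regulatory charge is based on such a VaR figure scaled by a supervisory multiplication factor of at least three.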
4.2.1.5 Banking Book Interest Rate Risk
The banking book (42) interest rate risk refers to the income or asset
value loss due to a change in market rates of interest. This is recognized to be
an important risk which warrants an allocation of capital. However, the risk
varies greatly from bank to bank, so it is not possible to set uniform standards
for capital allocation. The New Accord therefore leaves the allocation of
capital for this risk to the discretion of bank supervisors under pillar 2 of the
Accord, which gives the framework for the supervisory review process.
Supervisors are in particular required to be attentive to the problem of
"outlier" banks: banks whose interest rate risk could lead to a decline in asset
value equal to 20% or more of their tier-1 and tier-2 capital. Supervisors also
need to carefully assess and review each bank's internal risk assessment and
management systems.
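The "outlier" criterion can be written down directly: flag a bank if the estimated decline in economic value under an interest-rate shock reaches 20% of tier-1 plus tier-2 capital. The duration-style loss estimate below is a common simplification assumed for illustration, not a formula mandated by the Accord, and the input figures are made up:

```python
def is_outlier_bank(assets, duration_gap, rate_shock, tier1, tier2):
    """Flag a bank whose economic value would fall by 20% or more of
    tier-1 + tier-2 capital under the given interest-rate shock.
    The loss is approximated as duration_gap * rate_shock * assets."""
    ev_decline = duration_gap * rate_shock * assets
    return ev_decline >= 0.20 * (tier1 + tier2)

# Assumed figures: $1bn of assets, a duration gap of 2 years, a 200
# basis point shock, and $150m of combined tier-1 and tier-2 capital.
print(is_outlier_bank(1e9, 2.0, 0.02, 100e6, 50e6))  # $40m loss vs $30m: outlier
```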
4.2.1.6 Treatment of Securitization Risk
Since securitization takes assets off the balance sheet of a bank and puts
them on the balance sheet of a special purpose vehicle, as discussed in section
two, its risks need to be regulated at the balance sheets of both entities. The
1988 Accord is widely known to have caused capital arbitrage by giving
capital relief to securitized assets, first by ignoring market risks and afterwards
by assigning lower risk weights to trading book positions. The New Accord,
while appreciating the benefits of securitization, tries to minimize capital
arbitrage by trying to ensure that:
i. the securitizing (originator) bank achieves a "clean break", meaning that:
a) the asset transfer must be through a legal and transparent sale, and b)
the bank shall not hold any control over the securitized assets;
ii. if the originator is obliged to provide a first credit loss enhancement, it
must do so by deducting from its capital; and
iii. if the originator is obliged to provide a second credit loss enhancement,
the position is to be treated as a direct credit substitute.

42. The banking book in this case also covers all those positions which for any
reason cannot be sold.

4.2.1.7 Treatment of Operational Risks
Operational risks are considered to be important in banking organizations.
However, it is only in the New Accord that a specific capital charge is proposed
to cover operational risk. Alternative methodologies are suggested for measuring
this risk:
a) the Basic Indicator Approach (BIA);
b) the Standardized Approach (SA);
c) the Internal Measurement Approach (IMA); and
d) the Loss Distribution Approach (LDA).
This menu of approaches is given in order of the level of sophistication
of a bank, starting from a simple bank using the BIA up to the most advanced
banks opting for the IMA or LDA in the future. Under the BIA, banks will be
required to maintain capital for operational risk equal to a fixed percentage of
gross income set by supervisors. Under the SA, banks' activities will be divided
into business lines, and capital charges will be set as beta fractions for each line,
as given in Table 4.3. The same set of business lines and beta fractions is
further refined in the IMA, with supervisors adding indicators such as exposure
indicators and loss event probability given the expected loss. Suitable
approaches will be assigned to banks by supervisors after reviewing their
preferences as well as the state of the risk management processes existing in the
banks.
Table 4.3
Proposed operational risk indicators in the New Accord

Business Units       Business Lines           Indicator                      Capital factor
Investment banking   Corporate finance        Gross income                   β1
                     Trading and sales        Gross income (or VaR)          β2
Banking              Retail banking           Annual average assets          β3
                     Commercial banking       Annual average assets          β4
                     Payment and settlement   Annual settlement throughput   β5
Others               Retail brokerage         Gross income                   β6
                     Asset management         Total funds under management   β7

Source: The New Basel Accord Document
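The BIA and SA charges described above reduce to simple products and sums. In the sketch below the alpha and beta values are placeholders, since the Accord leaves their calibration to supervisors:

```python
# Basic Indicator Approach: a single fixed fraction (alpha) of gross
# income, with alpha set by supervisors (0.15 is a placeholder).
def bia_charge(gross_income, alpha=0.15):
    return alpha * gross_income

# Standardized Approach: each business line's indicator is multiplied by
# its beta factor (Table 4.3), and the products are summed across lines.
def sa_charge(lines):
    """lines: iterable of (indicator_value, beta) pairs, one per line."""
    return sum(indicator * beta for indicator, beta in lines)

print(bia_charge(200e6))                          # alpha * gross income
print(sa_charge([(120e6, 0.18), (80e6, 0.12)]))   # sum over business lines
```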
4.2.2 Effective Supervision
Effective bank supervision is key to achieving financial efficiency and
stability. The objectives of bank supervision can be summarized in a few
sentences.
i. The key objective of supervision is to maintain stability and confidence
in the financial system, thereby reducing the risk of loss to depositors
and other creditors.
ii. Supervisors should encourage and pursue market discipline by
encouraging good corporate governance (through an appropriate
structure and set of responsibilities for a bank’s board of directors and
senior management) and enhancing market transparency and
surveillance.
iii. In order to carry out its tasks effectively, a supervisor must have
operational independence, the means and powers to gather information
both on site and off site, and the authority to enforce its decisions.
iv. Supervisors must understand the nature of the business undertaken by
banks and ensure to the extent possible that the risks incurred by banks
are being adequately managed.
v. Effective banking supervision requires that the risk profile of individual
banks be assessed and supervisory resources allocated accordingly.
vi. Supervisors must ensure that banks have resources appropriate to
undertake risks, including adequate capital, sound management, and
effective control systems and accounting records; and
vii. Close cooperation with other supervisors is essential, particularly where
the operations of banking organizations cross national boundaries.43
Effective supervision of banks ensures that banks function safely and
soundly, so that the financial system can attain the full confidence of savers and
investors. This enables the removal of the constraints imposed by the
self-financing system and increases the monetization of transactions. An
increased level of savings, efficiently channeled into investment, ensures
economic development and welfare. Supervisory systems are expected to
depend on the socio-political and legal frameworks prevailing in different
countries; hence there cannot be a single standardized supervisory system to be
followed in all jurisdictions. Different countries use different methods and
approaches to assess bank risks. These approaches are, however, converging on
one important point: risk-based, formal and systematic supervision shall
gradually be adopted in order to make supervision effective.

43. Core Principles document, pp. 8-9.

The approaches to supervisory risk assessment used in different
countries can be grouped into four:
a) supervisory bank rating systems (such as CAMELS);
b) financial ratio and peer group analysis systems;
c) comprehensive bank risk assessment systems; and
d) statistical models.
The generic features of each approach are summarized in Table 4.4. (44)
Table 4.4
Salient Features of Bank Supervisory Risk Assessment Systems

Features assessed: (1) assessment of current financial condition;
(2) forecasting of future financial condition; (3) use of quantitative analysis
and statistical procedures; (4) inclusion of qualitative assessments;
(5) specific focus on risk categories; (6) links with formal supervisory action.

Approaches to bank supervision               (1)   (2)   (3)   (4)   (5)   (6)
Supervisory ratings: on-site                 ***   *     *     ***   *     ***
Supervisory ratings: off-site                ***   *     **    **    **    *
Financial ratio and peer group analysis      ***   *     ***   *     **    *
Comprehensive bank risk assessment systems   ***   **    **    **    ***   ***
Statistical models                           **    ***   ***   *     **    *

* Not significant   ** significant   *** very significant
Despite the prevalence of different approaches to supervision in
different jurisdictions, a broadly acceptable framework of core principles for
effective supervision can be equally relevant across different countries. Such
principles provide a widely recognized benchmark for effective supervision,
provide for recognition of the minimum preconditions for effective
supervision, define the supervisory role in identifying risks and mitigating
them, and increase cooperation between supervisors of different countries to
enhance consolidated supervision. For these and other reasons, the BCBS
issued the Core Principles for Effective Banking Supervision document in
1997. The main features of these core principles are highlighted in Table 4.5.

44. See Sahajwala and Bergh (2000).
Table 4.5
Core Principles and Assessment Methodology of Banking Supervision

Principle 1 (Preconditions for Effective Banking Supervision)
  Coverage*: existence of sound economic policies, public infrastructure,
  market discipline, procedures for effective resolution of problems, sound
  public safety nets.
  Assessment of compliance**: Are the roles and duties of different agencies
  clearly defined? Is there coordination of activities? Is there a suitable legal
  framework for banking? Are supervisors empowered?

Principles 2-5 (Licensing and Structure)
  Coverage*: permissible activities to be licensed to banks, powers of
  licensing authorities, methods and procedures of licensing owners, plans to
  operate and manage risks, competence and integrity of senior management,
  financial matters including required capital, other approvals, transfer of
  control, major acquisitions or investments by banks.
  Assessment of compliance**: Is the term "bank" clearly defined in law? Are
  banking activities clearly defined in law? Are licensing authorities
  competent, honest and well informed? Are they empowered to block any
  subsequent ownership control, activity changes, mergers, etc.? Are they in
  full contact with such authorities in other jurisdictions?

Principles 6-15 (Prudential Regulations and Requirements)
  Coverage*: adequacy of risk-based capital, credit risk management, asset
  quality assessment, large exposures and risk concentration, connected
  lending, country and transfer risks, market risks, other risks (interest rate,
  liquidity, operational risk), internal control systems.
  Assessment of compliance**: Are the authorities empowered to set
  regulatory capital requirements and to implement these fully? Have they the
  required rules and regulations in place? Have they the required technical
  expertise to evaluate the risks existing in banks? Are they empowered to
  take prompt corrective measures?

Principles 16-20 (Methods of Ongoing Banking Supervision)
  Coverage*: supervisory risk assessment systems (off-site surveillance and
  on-site inspection), external audit reports, consolidated supervision.
  Assessment of compliance**: Have they the conceptual and technical
  expertise for supervisory risk assessment? Do they have the resources for
  on-site inspection? Do they have the information for off-site supervision?

Principle 21 (Information Requirements)
  Coverage*: information disclosure, accounting standards, periodicity and
  accuracy of reports, confidentiality of information.
  Assessment of compliance**: Are accounting and auditing systems in
  place? Are banks using valuation methods which are reliable? Is the
  required information properly disclosed? Is confidentiality kept?

Principle 22 (Formal Powers of Supervisors)
  Coverage*: prompt corrective measures, liquidation procedures.
  Assessment of compliance**: Are supervisors equipped with the power and
  resources for prompt corrective measures? Are laws in place to enforce
  liquidation?

Principles 23-25 (Cross-Border Banking)
  Coverage*: responsibilities of home and host country supervisors.
  Assessment of compliance**: Is consolidated supervision in place? Is there
  cooperation with supervisors from outside?

* This information is extracted from the Core Principles (1997) document of the BCBS.
** This information is based on the Core Principles Methodology (1999) document of the BCBS.
If the main objective of the Core Principles of effective banking
supervision is enhancing financial stability, technical assessment of
compliance with these principles can provide useful insight for increasing the
effectiveness of various policies. A recent study conducted by IMF staff
concludes that indicators of credit risk and bank soundness are primarily
influenced by macroeconomic and macro-prudential factors, and that the direct
influence of compliance with the Core Principles is insignificant in this regard.
The study suggests that compliance could indirectly influence risk through a
transmission mechanism affecting the macroeconomic variables (45).
However, it may be noted that the existence of sound macroeconomic policies
and conditions is considered one of the important preconditions for effective
banking supervision.

45. See Sundararajan, Marston and Basu (2001).

4.2.3 Risk Disclosures: Enhancing Transparency about the Future
The market mechanism functions efficiently with complete information.
Information cannot be considered complete unless it is transparent and timely.
There are many channels for disclosing such information to clients,
shareholders, debtors, supervisors and regulators, and above all to the market.
These channels include annual reports, supervisory and regulatory review
reports, external credit assessment reports whenever available, periodic
regulatory reports, market intelligence reports, stock market information and
debt market information. This set of information provides a critical input to
investors for allocating their investments in accordance with their appetite for
risk-adjusted returns.
Transparency reduces moral hazard and adverse selection, enhances the
efficiency and integrity of markets, and strengthens market discipline. Market
discipline is strengthened not only by the timely availability of appropriate
information about the risk level of a firm, but also by information about the
firm's risk management processes. Hence, disclosure of information is effective
only if (a) it provides information about the risk of the firm and (b) it provides
information about the risk management processes of the firm.
The traditional channels of information have been effective in providing
information about the levels of risk faced by a firm in the past, as accounting
standards can largely cover these risks. However, for a number of reasons, it is
difficult to set standards for disclosure of future risks and risk management
processes across firms, across market segments and over time. Some of these
factors are: (46)
a. Risk management technologies are changing rapidly due to innovations,
making it difficult to set rigid standards.
b. The financial services industry is itself changing fast, and financial
conglomerates are emerging, blurring the differences between the risks of
the various segments of the industry: insurance companies, investment
banks, commercial banks, etc.
c. Financial instruments are also changing fast due to the financial
engineering process, making standardized valuation of these instruments
almost impossible.
d. Due to e-banking, a totally new scenario has developed, particularly in
relation to banks' control over their own banking infrastructure.
Internet-based banking networks surpass regulatory jurisdictions, the
infrastructure providers practically control all the e-banking information,
and above all, the technology changes very fast.
e. There are borrower incentives for not disclosing full information. These
incentives range from hiding information from competitors and tax
evasion to conflicts of interest among shareholders and providers of
funds.
These factors are so strong that a recent survey of risk disclosure by
leading international banks found that the quality of disclosures provided
in annual reports about risk management practices falls far short of
expectations. The survey recommends a standardized framework for the
disclosure of risks to enhance the comparability of systems across firms.
Disclosures can be improved in almost all areas, but much more
improvement is needed in non-trading activities as well as in credit risk
in trading activities. Disclosures also need improvement in the use of
models, internal rating systems and safety procedures for using
computers. (47)
Since "one size fits all" standards are not possible due to the fast pace of
innovation, the management of financial institutions can be most effective in
integrating risk management systems with their annual reports. This requires
the evolution and adoption of:
i. risk-based accounting systems;
ii. risk-based auditing systems;
iii. risk-based management information systems; and
iv. risk-based inventories of all assets of banks.

46. See Gibson, Rajna, "Rethinking the Quality of Risk Management Disclosure
Practices", http://newrisk.ifci.ch/146360.html
47. See IFCI-Arthur Andersen, "Risk Disclosure Survey",
http://newrisk.ifci.ch/ifci-AASurvey.html
The common goal of these processes is to disclose information about the
risks that the firm is expected to face in the future, in addition to the traditional
information, which relates to past risks. Once these processes are developed,
annual reports will not only provide information about the risks faced by
financial institutions in the past but will also disclose sufficient information
about the institutions' risk management processes and their risks in the future.
Disclosures about the risk in an institution and the risk management
processes adopted by the institution are so important that international
regulatory standard setters have produced several reports and guidelines on the
subject. (48) Keeping in view the increasingly cross-segment nature of
activities in the financial industry, and the risks which such activities are
bound to pose, regulators of the various sectors need to enhance coordination
of their activities to improve disclosure and strengthen market discipline. With
this consideration in view, a Multidisciplinary Working Group on Enhanced
Disclosure was established in June 1999 jointly by the BCBS, IOSCO, IAIS
and the Committee on the Global Financial System of the G-10 central banks.
The report of the Working Group was released on April 26, 2001.
The report makes it clear that there are two complementary types of
disclosure: disclosure about the risks of the institution, as given in the
traditional statistical information provided in annual reports about the current
health of the institution, and disclosure about the risk management processes
of the institution. The second type of disclosure, which is the subject matter of
the report, is classified into three groups.

48. These reports include: BCBS (1999a), Sound Practices for Loan Accounting
and Disclosure; BCBS (1999b), Best Practices for Credit Risk Disclosures;
BCBS (1998), Enhancing Bank Transparency; Euro-Currency Standing
Committee (1994), Public Disclosure of Market and Credit Risks by Financial
Intermediaries; BCBS and IOSCO (1999), Recommendations for Public
Disclosure of Trading and Derivatives Activities of Banks and Securities Firms;
BCBS (1997), Core Principles of Effective Banking Supervision; BCBS (1999),
the New Basel Accord (Pillar 3, Market Discipline). Most of these resources can
also be accessed online.
a. A specific minimum level of disclosure must be part of the
traditional periodic reports provided by the institution to its
shareholders, investors, creditors and counterparties.
b. Some disclosures could be useful, but their costs and benefits are
yet to be settled.
c. Certain statistical information could be disclosed to fill the gaps in
disclosures of risk management systems. Again, this type of
information needs to be studied further before being made part of
the disclosure requirements.
The study concludes that, in order to make disclosures transparent and
supportive of enhanced market discipline:
a. there should be a balance between quantitative and qualitative disclosures;
b. disclosures should basically aim at reflecting the firm's own true risk, and
to achieve this, comparability (with other firms) may at times be
sacrificed; and
c. appropriate disclosure of the risk management system can be achieved by
providing information about intra-period risk exposures instead of the
traditional system of period-end data.
The report also recommends that the international standard setters
consider enhancing guidelines on disclosures on risk concentration, credit
risk mitigation and the evolution of overall risk management systems in financial
institutions. These recommendations increase the role of supervisors in
enhancing such disclosures in the framework of risk-based supervision49.
To achieve financial stability, disclosure requirements for banks can be
effective only if other agents participating in the economic and financial system
also comply with the respective standards. The set of international standards
cover a wide range of important areas such as monetary and financial policy
49 For more details see, Working Group (2001), Multidisciplinary Working Group on
Enhanced Risk Disclosures; Final Report to BCBS, CGFS, IOSCO and IAIS.
transparency, fiscal policy transparency, data dissemination, accounting,
auditing, payments settlements, market integrity etc50.
4.3 REGULATION AND SUPERVISION OF ISLAMIC BANKS
There could be no disagreement on the statement that the risk
management systems in the Islamic banks shall meet the required international
standards. However, as mentioned above a number of risks faced by the Islamic
banks are different as compared to the risks of traditional banks. Therefore,
some international standards meant for traditional banks may not be relevant for
the Islamic banks due to their different nature. Hence the effective supervision of
Islamic banks requires the study of the risks of Islamic banks and formulating
suitable guidelines for the effective supervisory oversight of Islamic banks.
Chapra and Khan (2000) undertake a survey of regulation and supervision of
Islamic banks. Some pertinent conclusions of that study are presented here.
4.3.1 Relevance of the International Standards for Islamic Banks
a. The Core Principles document sets pre-conditions for effective
banking supervision. In addition to these pre-conditions, there are
also a number of other pre-conditions specific for effective Islamic
banking supervision. One set of these preconditions has to be
fulfilled by bank regulators and supervisors. These include
providing a leveled playing field for competition, licensing
facilities, lender of last resort facility acceptable to the mandate of
Islamic banks, proper legal framework, proper Sharī‘ah
supervision, etc. The other set of preconditions has to be met by
the Islamic banks themselves. These include development of inter-
bank market and instruments, resolution of a number of unresolved
Fiqh related issues, development of proper internal control and risk
management systems, etc.
b. As far as the Core Principles for effective banking supervision and
the disclosure and transparency requirements are concerned, these
are equally relevant for the Islamic banks. Due to the risk sharing
50 For international standards see, Financial Stability Forum, International Standards and
Codes to Strengthen Financial Systems, (). In
addition, the Accounting and Auditing Organization for the Islamic Financial Institutions
(AAOIFI) needs to be mentioned specially as it is the sole standard setter for the Islamic
financial industry.
nature of Islamic banks, these banks need even more effective
systems of supervision and transparency.
c. The difficulty in applying the international standards to Islamic
banks lies in applying capital adequacy standards. First, due to the
risk sharing nature of their modes of finance, Islamic banks need
more, not less, capital as compared to traditional banks. Second,
there is a need to separate the capital of current and investment
accounts. Third, the need to adapt the international standards for
the Islamic banks has prompted efforts towards the establishment of
the Islamic Financial Services Supervisory Board.
Finally, the supervisory risk assessment systems like CAMELS51
are equally relevant for Islamic banks and these can be adapted
without difficulty.
d. A number of advantages of the IRB approach discussed in the
previous section are relevant for the Islamic banks. First, the
approach allows mapping the risk profile of each asset
individually. Since the Islamic modes of finance are diverse, the
IRB approach suits these modes more than the standardized
approach. Second, the IRB approach aligns the actual risk
exposure of banks with their capital requirements. This is
consistent with the nature of Islamic banks. Third, the IRB
approach is expected to encourage and motivate banks to develop a
risk management culture and thereby reduce the risks in the
banking industry and enhance stability and efficiency. Fourth, it is
expected to generate reliable data and information and enhance
transparency and market discipline. Fifth, it will use external credit
assessment as benchmark, and thus truly integrate internal and
external information to generate more reliable data. This is
important because external credit assessment may not have the full
set of reliable information that an internal-ratings system can have,
and internal-rating systems may lack the objectivity of external
ratings. This information, used in harmony with incentives for risk
51 The CAMELS rating system refers to capital adequacy, asset quality, management
quality, earnings, liquidity, and sensitivity to market risks (in some countries also systems
for internal controls).
management, will be instrumental in controlling moral hazard and
capital arbitrage.
4.3.2 The Present State of Islamic Banking Supervision
Most Islamic banks are located in the member countries of the IDB. The
study mentioned above identifies a number of issues regarding the present state
of Islamic banking supervision.
a. A growing number of these countries are in the process of adopting and
effectively implementing the international standards, namely, Core
Principles, minimum risk-weighted capital requirements and the
international accounting standards. In applying the risk-weighting
methodologies to Islamic banks, there are difficulties reported due to the
diverse nature of the Islamic modes of finance. Compliance with the
standards set by the Accounting & Auditing Organization for Islamic
Financial Institutions (AAOIFI), has not yet fully materialized. Only
Bahrain and Sudan have so far adopted these standards.
b. Some countries including Iran, Pakistan and Sudan are undertaking
financial sector reform programs. Strengthening the capital of banks is
an important part of these programs. Since most Islamic banks are very
small, some countries have announced a program of mandatory merger
of Islamic banks to strengthen their capital base.
c. An increasing number of countries where Islamic banks are located
are putting in place both Off-site and On-site supervisory
systems. The famous On-site supervisory risk assessment system,
namely, CAMELS is also being used in some countries. Islamic banks
are generally being supervised within the framework of the prevailing
international commercial banking supervisory systems. In some
countries special laws have been introduced to facilitate Islamic
banking, while in others no such laws exist. Islamic banking operations
in the latter group of countries are performed under the guidelines issued
by their respective central banks.
d. In almost all those countries where Islamic banks are operating,
commercial banking functions are segregated from securities and
insurance businesses, and distinct authorities are assigned the
supervisory task. Malaysia is the only exception, where banks and
insurance companies are supervised by the central bank. However, the
global trend is inclined towards the concept of universal banking with
emphasis on supervision by a single mega-supervisor. Moreover,
commercial banks in these countries are supervised by central banks.
However, the emerging trend in the world is to segregate the monetary
policy framework of macroeconomic management from the
microeconomic considerations of bank soundness. As a result of this
segregation, banking supervision is being separated from monetary
policy and being assigned to a specialized authority. In cases where
different supervisory authorities specialize in supervising different
banking and non-banking financial institutions, the need for cooperation
and coordination between these authorities increases.
e. In some countries conventional banks are allowed to open Islamic
windows, while in other countries this is not allowed.
f. Most private banks have their own Sharī‘ah supervisory boards.
However, in Malaysia, Pakistan and Sudan the central banks have a
central Sharī‘ah board. In Pakistan the Council of Islamic Ideology and
the Federal Shariat Court are empowered to review all laws in the light
of the Sharī‘ah. The Federal Shariat Court has declared interest to be a
form of Ribā.
g. A number of characteristics of Islamic banks require that the existing
international standards be properly adapted for their effective
application to Islamic banking supervision. The risk sharing nature of
investment deposits, the risks of various Islamic products, the
availability of risk management instruments, the presence of institutional
support such as lender of last resort facility and deposit protection are
some of the most important among these factors.
4.3.3 Unique Systemic Risk of Islamic Banking
Transmission and mixing of risks between different segments of the
financial services industry can be a source of improper identification of risks and
lack of their effective mitigation. Each segment of the financial services industry
is specialized in specific types of risks. For example, the insurance industry in
general deals with risks, which are long-term in nature. Banks on the other
hand are good in managing short-term risks. The banking book of a bank
constitutes those risks, which profile the risk appetite of its depositors. The
trading book and fund management activities cater for the risk preferences of
investors. Hence specialization by financial institutions in different types of risks
enhances efficiency in identifying, pricing and mitigating the various risks.
Cross-segment transmission of risks can thus cause the mixing of these risks,
trigger a conflict in the risk profiles of the various users of financial
services and weaken customers' confidence in the overall financial
intermediation system. This could cause macroeconomic
inefficiency as well as systemic instability. Therefore, most regulatory regimes
attempt to block such transmission of risks either by preventing cross-segment
activities of different institutions or by requiring separate capital and other
firewalls between the risks of activities of a bank in different sectors.
4.3.3.1 Preventing Risk Transmission
The raison d'être of Islamic banking is conducting business practices
consistent with the religious prohibition of Ribā. Ribā is a return (interest)
charged in a loan (Qarḍ) contract. This religious injunction has sharpened the
differences between current accounts (interest free loans taken by owners of the
Islamic bank) and investment deposits (Muḍārabah funds). In the former case,
the repayment on demand of the principal amount is guaranteed without any
return. In case of investment deposits, neither the principal nor a return is
guaranteed. The owners of current accounts do not share with the bank in its
risks. Whereas, the owners of investment accounts participate in the risks and
share in the bank's profits on a pro rata basis. The contracts of Qarḍ and
Muḍārabah are thus the fundamental pillars of Islamic banking and their
characteristics must fully be protected for the preservation of the uniqueness of
Islamic banks.
In all Islamic banks a sizeable proportion of funds under management is
comprised of current accounts. In some Islamic banks these accounts constitute
more than 75% of total funds under management. Thus current accounts are the
strength of Islamic banks as these are a vital source of their free money. The use
of funds of investment deposits (Muḍārabah money) alongside such a huge
proportion of borrowed money is unprecedented in the history of the Islamic
financial system. It poses at least two important challenges to Islamic banks,
namely the challenge of systemic risk and the challenge of barriers to market
entry.
The current account holders need to be fully protected against the
business risks of the bank. The investment account holders need to fully
participate in the business risks of the bank. But the current accounts are
guaranteed only theoretically in the sense that in case of a confidence problem,
the Islamic bank is not in a position to return all the accounts on demand. The
more an Islamic bank relies on these funds the more serious this systemic
problem is. This means that in case of crisis, the risks of the assets of investment
deposits will be borne by the current account holders. Since most Islamic banks
operate in jurisdictions where deposit insurance and lender of last resort facilities
are not available, this systemic risk is serious in nature.
Even though investment deposits are theoretically assumed to share the
business risks of the bank to the extent that investment deposits finance the
businesses, these deposits are also not immune to the systemic risks posed by the
current accounts. Current accounts tend to increase the leverage of the Islamic
banks, their financial risks and hence adversely affect their overall stability.
Thus in a crisis situation, the risks of one type of deposits cannot be separated
from the risks of the other type of deposits. This is not only systemically
unstable but also against the basic premises of Qarḍ and Muḍārabah – the two
pillars of the unique nature of Islamic banking. A number of suggestions are
made to prevent the confidence problem, which may arise due to the
transmission of risks between the two accounts.
a. Some researchers suggest a 100% reserve requirement for current
accounts. As mentioned earlier current accounts are a vital source of
strength of Islamic banks. The drastic measure of 100% reserve
requirement will no doubt enhance systemic stability, but it imposes an
unreasonable cost on the Islamic banks due to which they may not be
able to even survive in the competitive markets.
b. The BMA has introduced prudential rules whereby it is mandatory to
disclose all assets financed by current accounts separately and all assets
financed by investment deposits separately.
c. In some regulatory jurisdictions the reserve requirements for current
accounts are much higher as compared to investment deposits.
d. Some other regulatory regimes combine the requirements as mentioned
in the second and third cases.
e. The AAOIFI has suggested a more elaborate and systematic procedure
to tackle the subject. The AAOIFI scheme is worth discussing in more
detail.
The AAOIFI’s main concern has been to develop accounting, auditing
and income recognition standards for the Islamic financial institutions so that
transparency and disclosures can be enhanced in these institutions, which is an
Islamic requirement for conducting fair and honest business. In the process of
developing the standards, AAOIFI found that most Islamic banks are reporting
their investment deposits as off-balance sheet items. After a thorough technical
analysis, AAOIFI reached two crucial conclusions.
a. There is a need to differentiate two types of investment deposits; those
restricted to a specific use and general-purpose unrestricted deposits.
The magnitude of the first type of deposits is very small as compared to
the second type of deposits. While the Islamic banks can continue
keeping the first type of deposits off-balance sheet, the second type of
deposits shall be kept on-balance sheet. In all our discussion, investment
deposits imply this type of deposits.
b. The bank while managing investment deposits must face fiduciary and
displaced commercial risks. Fiduciary risk can be caused by breach of
contract by the Islamic bank. For example, the bank may not be able to
fully comply with the Sharī‘ah requirements of various contracts. While,
the justification for Islamic banking is the compliance with the Sharī‘ah,
an inability to do so or not doing so willfully can cause a serious
confidence problem and deposit withdrawal. Displaced commercial risk arises
when, under commercial pressure to pay returns competitive with market rates,
the bank's owners will need to apportion part of their own share in profits to
the investment depositors.
The AAOIFI thus suggests that the Islamic bank’s capital shall bear the
risks of all assets financed by current accounts and capital. In addition,
the capital shall also bear the risks of 50% of assets financed by the
investment deposits. The risks of the remaining half of the assets
financed by the investment deposits shall be borne by the investment
depositors.
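The AAOIFI apportionment described above can be illustrated with a simple numerical sketch. The balance-sheet figures below are purely hypothetical, and the 50% share is a parameter rather than a fixed rule:

```python
def capital_risk_base(capital, current_accounts, investment_deposits,
                      id_share=0.5):
    """Asset base whose risks the bank's own capital bears under the
    AAOIFI suggestion: all assets financed by capital and current
    accounts, plus a share (50% here) of the assets financed by
    unrestricted investment deposits."""
    return capital + current_accounts + id_share * investment_deposits

# Illustrative balance sheet: 100 capital, 500 current accounts,
# 400 unrestricted investment deposits (total assets 1000).
base = capital_risk_base(100, 500, 400)
print(base)         # 800.0 -> capital bears the risks of 800 of the 1000 in assets
print(1000 - base)  # 200.0 -> borne by the investment depositors
```

Raising `id_share` towards 1.0 shifts more risk onto capital, which is the direction of the reservation expressed by Chapra and Khan below.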
The results of our survey reported in section three of the paper show that
the risk of withdrawal is in fact a nightmare for the managers of Islamic banks.
This risk is indeed more serious in Islamic banks as compared to conventional
banks. This is because, neither the principal nor a return is guaranteed in Islamic
banks’ investment deposits unlike the deposits of conventional banks. Although
the nature of Islamic banks’ investment deposits does introduce market
discipline, it also causes a potential confidence problem as compared to
traditional bank deposits. Therefore, Chapra and Khan (2000) show reservations
about the AAOIFI suggestion to make capital responsible for the risks of only
50% of assets financed by investment deposits, as this will weaken the capital of
Islamic banks. They argue that due to the confidence problem mentioned above
Islamic banks in fact need more capital as compared to conventional
banks. A stronger capital base coupled with the market discipline introduced by
the nature of investment deposits can indeed make Islamic banks more stable
and efficient.
4.3.3.2 Preventing the Transmission of Risks to Demand Deposits
The main concern of AAOIFI namely, the prevention of the
transmission of risks of investment deposits to current accounts is of a
fundamental nature. To strengthen this concern, Chapra and Khan (2000)
suggest for the consideration of standard setters that the capital requirement for
demand deposits must be completely separated from the capital requirement for
investment deposits. Islamic banks can thus have two alternatives with respect to
capital adequacy requirements as given in Figure 4.1. The first alternative would
be to keep demand deposits in the banking book and investment deposits in the
trading book with separate capital adequacy requirements for the two books. The
second alternative would be to pool investment deposits into a securities
subsidiary of the bank with separate capital adequacy requirement. There could
be other subsidiaries of an Islamic bank, but all with separate capital
requirements. These alternatives are expected to introduce a number of benefits
over the existing systems.
a. These will align the capital requirements of the two different deposits to
their respective risks. Demand deposits, being a source of free money,
are the main source of leverage of the Islamic banks. These depositors
therefore, need more protection as compared to investment account
holders. Therefore, the capital as well as statutory reserve requirements
must be substantially higher for demand deposits.
Figure 4.1
Proposed Capital Adequacy Alternatives for Islamic Banks

THE EXISTING SYSTEM
    Bank: Capital; Current Accounts; Investment Accounts
    (one pool with a single capital requirement)

PROPOSED ALTERNATIVE - 1
    Bank
        Banking Book: Capital; Current Accounts
        Trading Book: Capital; Investment Accounts (Mutual Fund)
    (separate capital requirements for the two books)

PROPOSED ALTERNATIVE - 2
    Bank: Capital; Current Accounts
    Investment Subsidiary: Capital; Investment Accounts (Mutual Fund)
    Other Subsidiaries: Capital
    (separate capital requirement for each subsidiary)
b. As far as the risk-return tradeoff is concerned, investment deposits and
mutual funds are not much different. However, mutual funds are
considered to be more transparent, liquid, and efficient in the allocation
of returns to risks. Therefore, several policy-oriented writings, court
judgments and research works have called for establishing mutual funds
of various types. Capital requirements provide a strong incentive to
establish mutual funds. There is also evidence on this in many regulatory
jurisdictions. In these regimes, regulatory capital has played an
important role in creating incentives for securitization by requiring lower
capital for trading activities as compared to banking book activities of
financial institutions. As a result, the size of banking book activities of
banks has declined sharply over time, and that of trading book activities
has widened.52 This incentive effect of regulatory capital can be
replicated in Islamic banks so that the investment accounts of these
banks get gradually transformed into mutual funds. The relatively lower
capital adequacy requirement on investment accounts (mutual funds) can
provide a strong incentive to Islamic banks to develop mutual funds,
enhance PLS financing and ensure efficient risk sharing, market
discipline and transparency in the distribution of returns.
c. As mentioned above, the uniqueness of Islamic banking lies in the fact
that the owners of Islamic banks raise demand deposits as interest-free
loans (Qarḍ) and investment deposits as Muḍārabah funds. Unless the
transmission of any risks between the two types of deposits is
completely prevented, this unique characteristic of Islamic banking
cannot be preserved. In this regard, separate capital adequacy standards
will serve the firewalls and safety net requirements of major regulatory
and supervisory jurisdictions around the world. Furthermore, these
alternatives will also help eliminate the difficulty of treating investment
accounts while applying the international capital adequacy standards. In
addition, segregation of the depository function of Islamic banks from
their investment function will make these banks more credible and
acceptable under almost all jurisdictions, thus enhancing the growth of
Islamic finance. This will enhance the acceptability of Islamic banking in
most regulatory regimes and will remove barriers to market entry.
52 See for example, European Commission (1999), and Dale (1996).
4.3.3.3 Other Systemic Considerations
a. Transmission of risks between permissible and impermissible
income is a serious systemic risk in conventional banks offering Islamic
banking windows. This risk can be controlled if the Islamic banking
windows of these banks are brought under separate capital.
b. Establishment of specialized subsidiaries with separate capital can also
enhance the level of diversification of the business of Islamic banks.
Such a diversification can contribute to proper control of their business
risks. However, it is also prudent to ensure consolidated supervision of
the Islamic banks like traditional banks.
c. The unique risks of Islamic modes of finance, the nonexistence of
financial instruments, restrictions on sale of debts, and other special
features of Islamic banking force the Islamic banks to maintain a high
level of liquidity. This naturally affects their income adversely. As a
result, the withdrawal risk is strengthened. In this manner, unless these
risk factors are properly managed, they can culminate into a serious
systemic instability.
V
RISK MANAGEMENT:
FIQH RELATED CHALLENGES
5.1 INTRODUCTION
The discussion of the previous sections shows that Islamic banks can be
expected to face two types of risks – risks that are similar to the risks faced by
traditional financial intermediaries and risks that are unique due to their
compliance with the Sharī‘ah. Consequently the techniques of risk identification
and management available to the Islamic banks could be of two types. The first
type comprises standard techniques, such as risk reporting, internal and
external audit, GAP analysis, RAROC, internal rating, etc., which are consistent
with the Islamic principles of finance. The second type consists of techniques
that either need to be developed or adapted keeping in view the requirements for
Sharī‘ah compliance. Hence the discussion of risk management techniques vis-
à-vis Islamic banking is a challenging one. In a study like this, these challenges
can neither be identified fully, nor can these be resolved even partially. The
objective of this section is to initiate a discussion on some aspects of the unique
risks faced by Islamic banks with a view to highlight the challenges and
prospects of mitigating these within the framework of the Islamic principles of
finance. However, at the outset, we briefly discuss the attitude of Islamic
scholars towards risk.
5.1.1 Attitude towards Risk
Risk is assigned significant importance in Islamic finance. The two
foundational Fiqhī axioms of Islamic finance, namely, a) al-kharāju bi al-ḍamān
and b) al-ghunmu bi al-ghurm, are in fact risk-based. Together the two axioms
can be described to mean that entitlement to the returns from an asset is
intrinsically related to the responsibility of loss of that asset53. Interest-based
financial contracts separate entitlement to return from the responsibility of loss
by protecting both the principal amounts of a loan as well as a fixed return on it.
Hence, these contracts transfer the risks of loans to the borrower while the lender
retains the ownership of the funds. Islamic finance prohibits the separation of
53 See, Kahf and Khan (1992) for an elaborate discussion of this premise.
entitlement to return from the responsibility for ownership. By doing so risk
transferring is discouraged and risk sharing is encouraged.
This does not however mean that the individual’s attitude towards risk is
subjected to any rigid rules. Due to their natural inclinations, some individuals
may like to take more risks than others do and others may like to avoid risk at
all. The universal principle of risk aversion – that the expected return from an
investment depends on the level of the investment's risk, with higher risks
warranting the expectation of higher returns and lower risks the expectation of
lower returns – is also accepted by Islamic scholars.
However, the rule of non-separation of entitlement to returns from the
ownership risks led Islamic economists to theorize that most needs of an Islamic
economy would be met by the risk sharing arrangements; leaving no role for
debt finance to play54. Hence within the framework of an interest-free (profit and
loss sharing – PLS) economy the effect of leverage on asset growth and the
resultant financial risks were ignored in the initial theoretical literature. If a bank
is financed only by risk sharing, the dollar value of its assets will be equal to the
dollar value of its equity; a dollar of equity capital will bear the burden of risks
of a dollar in assets. For a 100% equity-based firm, this risk can be referred to as
its normal business risk. As soon as the firm inducts debt finance, the dollar
value of its assets starts exceeding the dollar value of its equity by the amount of
the debt finance inducted. In this case, a dollar of the equity capital of the firm
faces the risks of assets in excess of a dollar. This excess burden of risks faced
by the equity capital is due to the debt-financed assets and this can be referred
to as the firm's financial risk. The theoretical literature characterizes the Islamic
economy as mainly PLS-based; hence it ignores the fundamental difference
between the two types of risks and its implications for stability of Islamic
financial institutions.
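The distinction between normal business risk and financial risk drawn above can be sketched with a toy calculation. The balance-sheet figures are hypothetical and the ratio is only an illustration of the argument, not a formula from the literature:

```python
def risk_per_dollar_of_equity(equity, debt):
    """Dollars of asset risk borne by each dollar of equity capital:
    total assets (equity + debt) divided by equity."""
    assets = equity + debt  # balance-sheet identity
    return assets / equity

# 100% equity-financed firm: a dollar of equity bears the risk of
# exactly one dollar of assets (normal business risk only).
print(risk_per_dollar_of_equity(equity=100, debt=0))    # 1.0

# Firm that also inducts debt finance (e.g. demand deposits taken as
# interest-free loans): each dollar of equity now bears 4 dollars of
# asset risk, of which 3 dollars is financial risk.
print(risk_per_dollar_of_equity(equity=100, debt=300))  # 4.0
```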
It is in the nature of the banking business that assets exceed bank capital
several times. The Islamic banks are not an exception to this general rule
particularly due to their utilization of demand deposits for financing assets.
Islamic scholars agree that under such a condition, banks working on behalf of
depositors need to be very cautious about risks55.
54 See, e.g. Siddiqi (1983).
55 See, Zarqa (1999).
From this brief discussion we can derive two important conclusions
regarding the attitude of Islamic scholars towards risk. First, liabilities and
returns of an asset cannot be separated from each other. Indeed, this condition
has a far-reaching implication for all Islamic financial contracts. Second,
common people do not like risk; banks working on their behalf must be cautious
and avoid excessive risk taking.
5.1.2 Financial Risk Tolerance
Is it desirable for the Islamic banks to carry the same level of financial
risks as their peer group conventional banks carry? Or should one expect that
due to the nature of Islamic modes of finance, the Islamic banks should be
exposed to more risks as compared to conventional banks?
It is hard to bring practice and theory together for an answer to these
questions. From a practical perspective, banks should eliminate their financial
risks if possible. For example, without credit risks they will not be required to
apportion part of their current income in loan loss reserves. They can use their
capital more efficiently to accumulate assets, and maximize their rates of return
on equity. This can enable the Islamic banks to pay higher returns to the
investment deposit holders who take more risks as compared to the depositors of
traditional banks. Hence the Islamic banks can maintain their competitive
efficiency. Therefore, the existence of financial risk is an undesirable cost for the
Islamic banks, exactly in the same manner as it is undesirable for the
conventional banks. If the Islamic banks have to carry the same level of financial
risks as their peer group traditional banks, they need to simplify and refine
the Islamic modes of finance to make the risk profile of these modes exactly at
par with the risk profile of interest-based conventional credits.
However, from the theoretical perspective, in this regard, the challenge
is that as a result of such an oversimplification and refinement the Islamic modes
of finance can lose their Islamic characteristics and hence their raison d’être.
Thus from the perspective of the mandate of the Islamic banks, such a
refinement and simplification may not be possible. This is because all Islamic
modes of finance are based on undertaking real transactions and banks are
expected to take a certain degree of ownership risks in order to justify a return
on finance. To the extent of the existence of this inevitable level of additional
risks in the Islamic banks as compared to conventional banks, the Islamic banks
will need to keep additional capital and develop more rigorous internal control
and risk management techniques.
5.2 CREDIT RISKS
Credit risk is the most important risk faced by banks, because defaults
can also trigger liquidity, interest rate, downgrade and other risks. Therefore, the
level of a bank's credit risk adversely affects the quality of its assets in place. Do
the Islamic banks face more credit risks as compared to conventional banks or
less? A preliminary answer to this question depends on a number of factors, such
as:
a. General credit risk characteristics of Islamic financing,
b. Counterparty risk characteristics of specific Islamic modes of finance,
c. Accuracy of expected credit loss calculation, and
d. Availability of risk mitigating techniques.
The first two points have been discussed in sections two and three of the
paper. Here we discuss the last two points in more detail.
5.2.1 Importance of Expected Loss Calculation
The process of credit risk mitigation involves estimating and minimizing
expected credit losses. Calculating expected credit losses requires estimates of
the probability of default, the maturity of the facility, the loss given default, the
exposure at default, and the sensitivity of the asset's value to systematic and
non-systematic risks. Expected loss calculation is relatively easy for simple and
homogeneous contracts, and harder for complex and heterogeneous ones. Since
Islamic financial contracts are relatively complex as compared to interest-based
credit, accurate calculation of expected losses is correspondingly more
challenging for them. The lack of consensus on dealing with a defaulter, the
illiquid nature of debts, etc., add to the complexity of this matter.
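The expected loss figure at the heart of this process is the product of three of the ingredients just named. A minimal sketch follows; the function name and the illustrative exposure and default probability are assumptions, while the 50% LGD is the supervisory benchmark used in Table 5.1 below:

```python
def expected_loss(pd_default: float, lgd: float, ead: float) -> float:
    """Expected credit loss = probability of default x loss given default
    x exposure at default."""
    return pd_default * lgd * ead

# Hypothetical $1m exposure with a 1% probability of default and the
# 50% supervisory benchmark LGD assumed in Table 5.1:
loss = expected_loss(pd_default=0.01, lgd=0.50, ead=1_000_000)
print(round(loss, 2))  # 5000.0
```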
This challenge can be overcome by adopting the foundation IRB
approach as suggested in section four of this paper. Although our survey results
in section three reveal that most Islamic banks who responded to the
questionnaire already use some form of internal rating system, it is early for the
Islamic banks to qualify for the IRB approach for regulatory capital allocation.
Nevertheless, the presence of some form of internal ratings in these banks
implies that they can enhance their systems with the objective of gradually
qualifying for the IRB approach. If that happens, these banks will be expected to
initially follow the supervisory benchmark LGD and the risk weights given in
Table 5.1.56 Gradually, the banks can develop their own systems of calculating
the LGD and can graduate to the advanced IRB approach.
Table 5.1
Benchmark Regulatory Risk Weights (Hypothetical, LGD 50%)

Probability of Default (%)   Corporate Exposures   Retail Exposures
0.03                          14                     6
0.05                          19                     9
0.1                           29                    14
0.2                           45                    21
0.4                           70                    34
0.5                           81                    40
0.7                          100                    50
1                            125                    64
2                            192                   104
3                            246                   137
5                            331                   195
10                           482                   310
15                           588                   401
20                           625                   479
30                             -                   605

Source: The New Basel Accord
5.2.2 Credit Risk Mitigation Techniques
A number of standard systems, methods, and procedures for credit risk
mitigation are also relevant for the Islamic banks. In addition, there is also a
need to keep in view the unique situation of these banks. A number of the
standard systems and some additional considerations are discussed here in
relation to the credit risk management of Islamic banks.
5.2.2.1 Loan Loss Reserves
56. It needs to be emphasized that at this stage the New Basel Accord is only a proposal, but it
is expected that the IRB approach will remain an important part of the final document.
Sufficient loan loss reserves offer protection against expected credit
losses. The effectiveness of these reserves depends on the credibility of the
systems in place for calculating the expected losses. Recent developments in
credit risk management techniques have enabled large traditional banks to
identify their expected losses accurately. The Islamic banks are also required to
maintain mandatory loan loss reserves, subject to the regulatory requirements of
different jurisdictions. However, as discussed above, the Islamic modes of
finance are diverse and heterogeneous as compared to interest-based credit, and
they therefore require more rigorous and credible systems for expected loss
calculation.
Furthermore, for comparability of the risks of different institutions there is also a
need for uniform standards for loss recognition across modes of finance,
financial institutions and regulatory jurisdictions. AAOIFI Standard No. 1
provides the basis for income and loss recognition for the Islamic modes of
finance. However, except for a few institutions, banks and regulatory
organizations do not apply these standards.
In addition to the mandatory reserves some Islamic banks have established
investment protection reserves. The Jordan Islamic Bank has pioneered the
establishment of these reserves. The reserves are established with the
contributions of investment depositors and bank owners. The reserves are aimed
at providing protection to capital as well as investment deposits against any risk
of loss including default. However, investment deposit holders are not
permanent owners of the bank. Therefore, contributions to the reserve by old
depositors can be a net transfer of funds to new depositors and to the bank
capital. In this manner these reserves cannot ensure justice between old and new
depositors and between depositors and bank owners. This problem can be
overcome by allowing depositors to withdraw their contributions at the time
of final withdrawal of deposits. However, such a facility will not be able to
provide protection in the case of a crisis.
5.2.2.2 Collateral
Collateral is one of the most important forms of security against credit
loss. Islamic banks use collateral to secure finance, because al-rahn (an asset
taken as security for a deferred obligation) is allowed in the Sharī‘ah. According
to the principles of Islamic finance, a debt due from a third party, perishable
commodities, and anything not protected by Islamic law as an asset, such as
interest-based financial instruments, are not eligible for use as collateral. On the
other hand, cash, tangible assets, gold, silver and other precious commodities,
shares in equities, etc., and debt due from the finance provider to the finance
user are eligible as collateral. We discuss a number of general characteristics of
the collateral available in the Islamic financial industry.
a) As discussed in section four, in the proposed New Basel Accord some
   types of collateral are given regulatory capital relief depending on the
   quality of the collateral and subject to the standardized regulatory
   haircuts given in Table 5.2. These standards show that cash and treasury
   bills are the most valuable collateral and can be given very high
   regulatory capital relief. Suppose two clients each offer collateral worth
   $100: US treasury bills of one-year maturity and main index equities,
   respectively. The haircut for the first collateral is 0.5% (its value after
   the haircut is $99.5); for the second, the haircut is 20% (collateral value
   $80). The capital requirement in the first case will therefore be lower
   than in the second. The Islamic banks, not being able to take the first
   type of collateral, will be considered more risky.
b) There may be some assets which, from the Islamic banks' point of view,
   are good collateral and deserve regulatory capital relief. For example, a
   carefully selected asset financed by the Islamic bank may be at least as
   good collateral as a 5-year bond issued by a BBB-rated corporate entity.
   Since the Islamic bank's asset is not on the list of eligible collateral, it is
   subject to a 100% haircut; the BBB bond in this case is subject to only a
   12% haircut. For the purpose of regulatory capital relief, the Islamic
   bank's asset is worth nothing, whereas the bond is worth $88 (taking the
   collateral value as $100).
c) Due to restrictions on the sale of debts, there are no liquid Islamic debt
   instruments. Yet it is precisely their liquid nature that makes debt
   instruments like treasury bills generally good collateral. Such assets are
   not available for the clients of Islamic banks to offer.
d) The Islamic banks have limited recourse to the assets they finance,
   whereas conventional banks can have unlimited recourse to the assets of
   their clients. On a stand-alone basis, a particular asset financed by the
   Islamic bank may depreciate fast even while the firm's assets in general
   gain value. Thus, due to its limited-recourse nature, the quality of the
   Islamic banks' collateral may in fact be lower than that of the collateral
   of peer-group conventional banks. Moreover, the value of limited-
   recourse collateral is normally highly correlated with the exposure of the
   credit: if the credit goes bad, the collateral value depreciates too. Good
   quality collateral should not have such a characteristic. Furthermore, if
   stand-alone collateral depreciates faster than the firm's other assets,
   there is an incentive to default.
Table 5.2
Standard Supervisory Haircuts for Collateral, in % of Collateral Value

Issue rating for     Residual Maturity      Sovereigns   Banks/Corporates
debt securities
AAA/AA               ≤ 1 year               0.5          1
                     > 1 year, ≤ 5 years    2            4
                     > 5 years              4            8
A/BBB                ≤ 1 year               1            2
                     > 1 year, ≤ 5 years    3            6
                     > 5 years              6            12
BB                   ≤ 1 year               20
                     > 1 year, ≤ 5 years    20
                     > 5 years              20
Main index equities                         20
Other equities listed on a recognized exchange   30
Cash                                        0
Gold                                        15
Surcharge for foreign exchange risk         8

Source: The New Basel Accord
e) The legal systems in the jurisdictions in which Islamic banks operate do
   not support the qualitative aspects of good collateral: in most cases it is
   very difficult to obtain control of the asset and convert it into liquidity
   without high cost. This situation is made worse by the fact that the
   supporting institutional infrastructure for Islamic banking is still in the
   early stages of development. There are no uniform standards to
   recognize a default event, to treat a default event when it happens, or to
   litigate disputes.
This discussion shows that due to a number of reasons the collateral
available to the Islamic banking industry in general is not eligible for regulatory
capital relief under the proposed international standards. This may be due to the
fact that the Islamic banks are not represented in the standard setting bodies.
They can however, carefully study the consultation documents distributed by the
standard setters and present their own points of view like other institutions.
Furthermore, the industry-wide general quality of collateral depends on a
number of institutional characteristics of the environment as well as the products
offered by the industry. An improvement in the institutional infrastructures and a
refinement of the Islamic banking products can be instrumental in enhancing the
collateral quality and reducing credit risks.
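The haircut arithmetic in point (a) above can be sketched briefly. The function name is an assumption; the haircut figures are the ones Table 5.2 gives for one-year AAA sovereign paper and for main index equities:

```python
def value_after_haircut(market_value: float, haircut_percent: float) -> float:
    """Collateral value recognized for capital relief after a supervisory haircut."""
    return market_value * (1 - haircut_percent / 100)

# The two $100 collaterals from point (a):
print(value_after_haircut(100, 0.5))  # 99.5 (one-year AAA sovereign paper)
print(value_after_haircut(100, 20))   # 80.0 (main index equities)
```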
5.2.2.3 On-Balance Sheet Netting
On-balance-sheet netting means offsetting mutual gross financial
obligations and accounting only for the net positions. For example, bank A owes
bank B $2 million resulting from a previous transaction, while, independently,
bank B owes A $2.2 million. In a netting arrangement, the $2 million mutual
obligations cancel each other out, so that only the net amount of $0.2 million is
settled by B. Several considerations enter into this arrangement, including the
maturities of the two obligations and the currencies and financial instruments
involved. The netting process may therefore include discounting, selling and
swapping the gross obligations.
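The bilateral case above reduces to a one-line offset. A sketch, with the function name assumed and the amounts taken from the example:

```python
def net_obligation(a_owes_b: float, b_owes_a: float) -> tuple:
    """Return (paying party, net amount) after bilateral on-balance-sheet netting."""
    net = b_owes_a - a_owes_b
    return ("B", net) if net >= 0 else ("A", -net)

# The example above, in $ millions: A owes B 2.0, B owes A 2.2.
payer, amount = net_obligation(a_owes_b=2.0, b_owes_a=2.2)
print(payer, round(amount, 1))  # B 0.2
```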
Carefully prepared, netting overcomes credit risk exposures between the
two parties. With the participation of a third party acting as a clearinghouse for
the obligations, the arrangement becomes a powerful risk-mitigating technique.
Regulators recognize that role but also supervise the netting activities of banks.
The Islamic banks have so far not designed any such mechanism. It can be
considered an important area for future cooperation between Islamic banks,
particularly if the market for two-step contracts, as discussed in this section,
expands, since banks will then have more mutual obligations.
5.2.2.4 Guarantees
Guarantees supplement collateral in improving the quality of credit.
Commercial guarantees are extremely important tools for controlling credit risk
in conventional banks. Banks whose clients can provide good commercial
guarantees, and who can fulfill other requirements, can qualify for regulatory
capital relief under the proposed New Basel Accord. Although some Islamic
banks also use commercial guarantees, the general Fiqh understanding goes
against their use: in accordance with Fiqh, only a third party can provide a
guarantee, as a benevolent act, and any service charge may cover only actual
expenses. Due to this lack of consensus, the tool is not effectively used in the
Islamic banking industry.
Multilateral Development Banks (MDBs) enjoy special status in the
jurisdiction of their respective member countries. This status has a particular
privilege during times of financial crisis in a member country. Financial crisis
exposes banks’ credit to a country or an institution in its jurisdiction to serious
credit risks. In terms of their foreign exchange reserves some countries are
always in a crisis-like situation. Credit exposure to entities in such jurisdictions
is always risky. This also has an implication for the borrower's cost of capital in
terms of foreign exchange: the cost of raising finance in local currency is always
lower than the cost of raising it from outside.
Participation in an MDB-led syndication provides an automatic guarantee
to a participating commercial bank against the risks mentioned above. It thus
enhances the finance user's credit quality, often to the extent that the cost
differential between borrowing at home and borrowing abroad is almost
eliminated. This implies that by participating in MDB-led syndication schemes,
commercial banks can mobilize funds in foreign currency at the cost of
mobilizing funds in local currency. The syndication generally takes the
form of Figure 5.1.57

Figure 5.1
Fund Flows in an MDB-led Syndication
[Diagram: funds flow from the MDB's own resources and from commercial
co-financiers into the MDB-led facility, which channels them to the users of
funds.]

57. See, for example, Hussain (2000), Standard & Poor's (2001), Asian Development
Bank (2001).
Based on these arguments, Hussain (2000) suggests that the IDB should
play a more active role in providing syndicated facilities by strengthening its
existing facilities. By benefiting from the IDB's preferred status in the member
countries, the participating Islamic banks would be able to mitigate their foreign
exchange as well as country risk exposures substantially.
5.2.2.5 Credit Derivatives and Securitization
As discussed in section two, credit derivatives separate the underlying risk
of a credit from the credit itself and sell it to investors whose individual risk
profiles make the default risk attractive to them. This mechanism has become so
effective that under certain conditions it is expected to fully protect banks
against credit risks, and the use of credit derivatives as a risk-mitigating
instrument is rising rapidly.
          At present, however, the Islamic banks are not using any equivalent
of credit derivatives. The development of comparable instruments depends on
the permissibility of the sale of debts, which is prohibited by near-consensus,
Malaysia being the exception. In addition to the Malaysian practice, there are a
number of proposals under discussion to overcome the sale-of-debt issue.
a. Some studies call for making a distinction between a fully secured debt
and an unsecured debt. It is argued that external credit assessment makes
the quality of a debt transparent. Moreover, credit valuation techniques
have improved drastically. Furthermore, all Islamic debt financing is
asset-based and secured. In view of these developments, restrictions on the
sale of debt may be reconsidered (Chapra and Khan 2000).
b. Some scholars suggest that although the sale of a debt as such is not
   possible, the owner of a debt can appoint a debt collector.58 For example,
   if the due debt is $5 million and the owner estimates that $0.5 million
   may be lost to default, the owner can offer the debt collector some
   amount less than this estimated loss, say $0.4 million. The arrangement
   would be organized on the basis of Wakālah (agency contract) or
   Ju‘ālah (service contract). There seems to be no Fiqhī objection to this.

58. See Al-Jarhi and Iqbal (forthcoming).
c. Debt can be used as a price to buy real assets. Suppose bank A owes
   debts worth $1m to bank B, due after 2 years. Meanwhile, bank B needs
   liquidity to buy real assets worth $1m from a supplier C on a deferred
   basis for 2 years. In this case, subject to the acceptance of C, the
   payments for bank B's installment purchase can be made by bank A.
   For the installment sale from C to B, C will charge a Murābaḥah profit
   of, say, 5%. This profit can be adjusted in two ways. First, upon mutual
   agreement, the supplier may supply goods worth $0.95 million to bank
   B while receiving $1m cash from bank A in 2 years. Second, C may
   receive $1m from A and $0.05m directly from B. The implication is
   important: B receives assets worth $1m now instead of receiving $1m
   after 2 years, but after paying 5%. In net terms, B receives $0.95m
   today for $1m after 2 years. The arrangement thus facilitates a Fiqh-
   compatible discount facility. The flows of funds and goods in the first
   case are given in Figure 5.2.
Figure 5.2
Sale of Debt for Real Assets
[Diagram: Bank A owes bank B $1m due in 2 years; bank B buys goods worth
$0.95m from supplier C on 2-year credit; supplier C receives the $1m from
bank A in 2 years.]
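The discount implicit in the arrangement above can be annualized. A sketch, using the figures from the example and standard compound-interest arithmetic (the variable names are assumptions):

```python
# B in effect receives $0.95m of real assets today in exchange for the
# $1m debt due in 2 years. The implied annualized discount rate r solves
# present_value * (1 + r)**years = face_value.
present_value = 0.95   # $ millions, value of goods received now
face_value = 1.0       # $ millions, debt given up
years = 2
implied_annual_rate = (face_value / present_value) ** (1 / years) - 1
print(f"{implied_annual_rate:.4%}")  # about 2.6% per annum
```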
The example cited above is based on the permissibility of using debts to
buy goods, services and other real assets. This permission can be extended
further to design quasi-debt (equity) financial instruments by embedding
convertibility options. For instance, in writing an Islamic debt contract, the user
of funds can inscribe a non-detachable option such that, at the financier's
discretion, the receivables can be used to buy real assets or shares from the
beneficiary. This option in effect changes the nature of the collateral from
limited recourse to full recourse, since it can be exercised at the will of the
financier. In this manner it enhances the quality of the credit facility by reducing
its risk. The potential of these instruments increases in the framework of two-
step contracts. However, the Islamic banks at present do not write such
instruments.
5.2.2.6 Contractual Risk Mitigation
Gharar (uncertainty of outcome caused by ambiguous conditions in
contracts of deferred exchange) may be mild and unavoidable, but it may also
be excessive, causing injustices, contract failures and defaults. Appropriate
contractual agreements between counterparties work as risk control techniques,
and a number of examples can be cited.
a) Price fluctuations after a Salam contract is signed can act as a
   disincentive to fulfilling contractual obligations. If the price of, say,
   wheat appreciates substantially after the contract is signed and the price
   received in advance, the wheat grower has an incentive to default. The
   risk can be minimized by a clause in the contract recording the parties'
   agreement that a certain level of price fluctuation will be acceptable, but
   that beyond that point the gaining party shall compensate the party
   adversely affected by the price movement. In Sudan, such a contractual
   arrangement, known as Band al-Iḥsān (beneficence clause), has become
   a regular feature of the Salam contract.
b) In Istiṣnā‘, contract enforceability becomes a problem, particularly with
   respect to fulfilling qualitative specifications. To overcome such
   counterparty risks, Fiqh scholars have allowed Band al-Jazā' (penalty
   clause).
c) Again in Istiṣnā‘ financing, disbursement of funds can be agreed on a
   staggered basis tied to the phases of construction, instead of being
   lumped at the beginning of the construction work. This can reduce the
   bank's credit exposure considerably by aligning payments with the
   progress of the work.
d) In Murābaḥah, to overcome the counterparty risks arising from the non-
   binding nature of the contract, up-front payment of a substantial
   commitment fee has become a permanent feature of the contract.
e) In several contracts, as an incentive for timely repayment, a rebate on
   the remaining amount of mark-up is given.
f) In the absence of a formal litigation system, dispute settlement is one of
   the serious risk factors in Islamic banking. To overcome such risks, the
   counterparties can contractually agree on a process to be followed if
   disputes become inevitable. This is particularly significant with respect
   to the settlement of defaults, as interest-based debt rescheduling is not
   possible.
g) To avoid default by the client in taking possession of the ordered goods,
   it can be proposed that the contract be binding on the client but not on
   the bank. This suggestion assumes that the bank will honor the contract
   and supply the goods as agreed even though the contract is not binding
   on it. An alternative proposal is to establish a Murābaḥah clearing
   market (MCM) to settle cases that may not be cleared due to the non-
   binding nature of the Murābaḥah contract.
h) Since the Murābaḥah contract is approved on the condition that the
   bank takes possession of the asset, at least theoretically the bank holds
   the asset for some time. Islamic banks all but eliminate this holding
   period by appointing the client as the bank's agent to buy the asset.
   Nevertheless, the raison d'être of approving the contract is the bank's
   responsibility for the ownership risk, and capital therefore needs to be
   allocated for this risk.
All these features of contracts serve to mitigate counterparty default
risks. Similar features can enhance the credit quality of contracts in different
circumstances, and it is desirable to make maximum use of such features
wherever new contracts are written.
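The Band al-Iḥsān settlement described in point (a) can be sketched numerically. Everything here is an assumption for illustration: the 10% band width, the function name, and the linear compensation rule for movements beyond the band.

```python
def band_al_ihsan_settlement(contract_price: float, price_at_delivery: float,
                             band_percent: float = 10.0) -> float:
    """Compensation under a hypothetical beneficence clause: price movements
    within the agreed band are absorbed by the parties; beyond it, the gaining
    party compensates the other for the excess movement (negative = paid by
    the seller)."""
    band = contract_price * band_percent / 100
    move = price_at_delivery - contract_price
    if move > band:       # price rose beyond the band: buyer compensates seller
        return move - band
    if move < -band:      # price fell beyond the band: seller compensates buyer
        return move + band
    return 0.0

print(band_al_ihsan_settlement(100, 125))  # 15.0 (25 rise, 10 absorbed by the band)
print(band_al_ihsan_settlement(100, 95))   # 0.0 (within the band)
```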
5.2.2.7 Internal Ratings
All banks undertake some form of internal evaluation and rating of their
assets and clients, particularly for maintaining the regulatory loan loss
provisions. These systems differ with the sophistication of the bank. Some
banks have recently developed formal internal rating systems for the client
and/or the facility. As discussed above, in a general sense an internal rating
system can be described as a risk-based inventory of a bank's individual assets.
These systems identify the credit risks faced by the bank on an asset-by-
asset basis in a systematic and planned manner, instead of looking at risk on a
whole-portfolio basis. The asset-by-asset coverage makes the approach
particularly relevant for banks whose asset structures are less homogeneous.
The Islamic modes of finance are diverse and have different risk characteristics.
For example, a credit facility extended to a BBB-rated client on the basis of
Murābaḥah, Istiṣnā‘, leasing or Salam will carry a different risk exposure in
each case. Risk exposure also differs across clients: two clients may both be
rated BBB, yet due to the different natures of their businesses, the same mode
can expose the bank to different risks. In addition, different maturities have
different implications for risk across modes and across clients. Given this
diversity of the Islamic modes of finance, it is appropriate for the Islamic banks
to measure the risk of each asset separately, and developing a system of internal
ratings can be instrumental in doing this.
Various banks use different systems. Establishing a basic internal rating
system requires two basic pieces of information: the maturity of the facility and
the credit quality of the client. The maturity of the facility is known in all cases
of funding. The credit quality of the client can be assessed by various means: the
client may have a previous record with the bank, may be rated by rating
agencies, and must have audited reports. The general reputation of the client and
the type of collateral provided can also be helpful. Putting together these and
other relevant pieces of information wherever available, bank staff can
judgmentally assess the client's credit quality.
Once this information is available, each client can be assigned an
expected probability of default. With the maturity of each facility and the
expected default probability of each client in hand, the first step is to map this
information together as in Table 5.3.59 The second step is to assign a benchmark
credit risk weight. In Table 5.3 this benchmark credit risk weight (100%)
corresponds to a probability of default of 0.17%-0.25% for a facility with a
maturity of 3 years. With the same probability of default, the credit risk weight
for a facility of 2 years' maturity is 20% less than the benchmark, and for a 4
years' maturity, 18% more than the benchmark.

59. The table is based on ISDA (2000).
Table 5.3
Hypothetical Internal Rating Index Relative to a 3-Year Asset with
Default Probability of 0.17%-0.25% (= 100)

Probability      ≤0.5  0.5-1  1-2  2-3  3-4  4-5  5-6  6-7  7-8  8-9  >9
of default (%)   Yrs   Yrs    Yrs  Yrs  Yrs  Yrs  Yrs  Yrs  Yrs  Yrs  Yrs
0.00-0.025        6     8     12   17   21   25   28   32   36   40   43
0.025-0.035       9    12     17   23   29   35   40   46   51   56   60
0.165-0.255      48    69     80  100  118  134  149  164  178  191  203
0.255-0.405      72    86    108  130  150  168  186  202  216  230  241
Most Islamic banks are technically capable of initiating some form of
internal credit risk weighting of all their assets separately. In the medium to
long run these could evolve into more sophisticated systems. Initiating such a
system can be instrumental in filling the gaps in the risk management system
and hence in enhancing these banks' ratings by the regulatory authorities and
external credit assessment agencies.60
          At this stage, it is too early for the Islamic banks to qualify for even the
foundation IRB approach for regulatory capital allocation. One needs, however,
to re-emphasize that the IRB approach is more consistent with the nature of the
Islamic modes of finance. It is against this background that the Islamic banks
need to initiate programs for developing systems of internal rating. Regulatory
authorities will recognize these systems only if they are found to be robust.
5.2.2.8 RAROC
RAROC is used for allocating capital among different classes of assets
and across business units by examining their associated risk-return factors. An
application of RAROC in Islamic finance would be to assign capital to the
various modes of financing. Islamic financial instruments have different risk
profiles; for example, Murābaḥah is considered less risky than profit-sharing
modes of financing like Muḍārabah and Mushārakah. Using historical data on
the different modes of financing, one can estimate the expected loss and the
maximum loss at a certain level of confidence over a given period for each
financial instrument. This information can then be used to assign risk capital to
the different modes of financing.

60. Chapra and Khan (2000) recommend that the Islamic banks adopt this system. Bank
Negara Malaysia (2001) calls for the BCBS to make this the primary approach for regulation.
The concept of RAROC can also be used to determine the rate of return
or profit rate on different instruments ex ante by equating their RAROCs:

RAROC_i = RAROC_j, i.e.,
(Risk-Adjusted Return)_i / (Risk Capital)_i = (Risk-Adjusted Return)_j / (Risk Capital)_j,

where i and j represent different modes of financing (e.g., Muḍārabah and
Mushārakah respectively). Thus, if instrument j is more risky (i.e., has a larger
denominator), the financial institution can ask for a higher return to equate the
RAROC of instrument j with that of instrument i.
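The equalization can be sketched numerically. The unit figures below are hypothetical, chosen only to illustrate the mechanics:

```python
def raroc(risk_adjusted_return: float, risk_capital: float) -> float:
    """Risk-adjusted return on capital."""
    return risk_adjusted_return / risk_capital

# Suppose a Mudarabah facility earns 1.2 units on 10 units of risk capital
# (RAROC = 0.12). A riskier Musharakah facility tying up 15 units of risk
# capital must then earn 0.12 * 15 = 1.8 units for the two RAROCs to match.
benchmark = raroc(1.2, 10)
required_return = benchmark * 15
print(round(required_return, 2))  # 1.8
```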
5.2.2.9 Computerized Models
Owing to revolutionary developments in mathematical and
computational finance and in the use of computers, banks increasingly use
computerized models of risk management. These models are refined versions of
internal rating systems: whereas internal ratings can rest on qualitative
judgment, the models are based on quantitative data. A number of credit risk
management models are now available in the market, such as KMV,
CreditMetrics, CreditPortfolioView and CreditRisk+. In the future these models
will become ever more important for risk management. There is therefore a need
for the Islamic banks to adopt planned and conscious strategies for developing
advanced systems wherever feasible.
5.3 MARKET RISKS
As mentioned before, market risks comprise interest rate risks,
exchange rate risks, and commodity and equity price risks. These are briefly
discussed here from the perspective of Islamic banks.
5.3.1 Business Challenges of Islamic Banks: A General Observation
It is generally accepted that the non-availability of financial derivatives
to Islamic banks is a major hindrance to their managing market risks as
conventional banks do. The direct competitors of Islamic banks are, however,
the Islamic banking windows of conventional banks. Due to religious
restrictions, the Islamic banks obviously cannot enter the conventional banking
market, but the conventional banks offer Islamic products alongside their own.
Competition no doubt enhances efficiency, and a level playing field is a
prerequisite for a healthy competitive environment. In this regard, a level
playing field for competition between Islamic and conventional banks cannot
be ensured without a complete separation of the risks of the Islamic products
from the risks of the conventional banks' other operations. There are a number
of difficulties in separating these risks effectively.
As discussed earlier, regulators have been trying to bring as many risks
as possible under the cover of capital. Since capital is the ultimate protection
against risks, it is prudent for banks to manage risks at the group level. In
particular, derivatives for hedging purposes are used to control the risks of the
banking organization at the group level rather than separately for the activities
of different units. This implies that the positions of an Islamic banking unit can
be left open so as to comply with the requirements of the Sharī‘ah supervisors,
while at the group level the bank may leave no position open, hedging it with
interest-rate derivatives. As a result, the use of derivatives for hedging group-
level positions is beyond the reach of the Sharī‘ah supervisors of an Islamic
window in a conventional bank.
In addition to supervisors, owners, credit assessment agencies and
depositors can be expected to influence the activities of banks. Unlike the
owners of Islamic banks, the owners of most conventional banks cannot be
expected to offer Islamic products out of their own religious beliefs; these
products are offered as pure business decisions. External rating agencies
likewise rate banks only on the basis of their financial soundness, not their
religious commitment.
Clients, directly or through the Sharī‘ah boards, can also be expected to
affect the decisions of banks. The prime concern of Islamic depositors is to
avoid any mixing of permissible and impermissible incomes. The clients of the
Islamic banking windows of conventional banks are mostly on the asset side. In
most countries where Islamic banking windows are allowed, mutual funds are
offered as an alternative to investment deposits, so Islamic depositors in such
systems will keep only current accounts. Since current account holders are not
entitled to any income, they have no incentive to monitor the management or
the income-earning activities of the banks.
Thus, there is no effective mechanism to prevent the conventional banks
from using derivatives to manage the risks of their Islamic products. As a
result, Islamic banks competing with the Islamic banking windows of
conventional banks are at a serious competitive disadvantage as far as the use
of derivatives is concerned. This poses to the Islamic banks the most serious
business risk: that of competing on a playing field which is not level. As
discussed in section four, this playing field can effectively be leveled only if
separate capital is required for the Islamic banking operations of a conventional
bank.
There is a need to distinguish the environment as described above from
an environment where all operations of banks could be subject to the principles
of Islamic finance. If the entire banking system is brought under Islamic
principles, the nature of this risk will change. In the ongoing dialogue in
Pakistan about the introduction of a comprehensive Islamic banking system, the
apprehension of local banks has been that as a result of introducing PLS
deposits, there would be a quick migration of funds from the weaker (local)
banks to the stronger (foreign) banks. This could prompt the collapse of local
banks. The apprehension in fact highlights the potential market discipline which
the withdrawal risk of Islamic banking can introduce if the system is applied
economy-wide. In countries where banks are mostly in the public sector, the
subject of market discipline is, however, not of much relevance.
5.3.2 Composition of Overall Market Risks
The above discussion necessitates some analysis of the nature of
important risks for which derivatives are used and the types of most dominant
derivatives. There is no statistical information, which can tell us exactly about
the proportion of each of the various risks in the total global financial risk.
However, since derivatives are primarily used for the mitigation of risks, we can
use the data on derivative markets to gauge the actual intensity of the various
risks in the financial markets.
By the end of December 2000, the total notional amount of the outstanding
volumes of OTC-traded derivative contracts was US dollars 64.6 trillion for
interest rate derivatives and 15.6 trillion for FX derivatives. This makes the
interest rate contracts 78% of the total notional amount of derivatives, and FX
contracts 19% of the total. The remaining 2% of the total were in equity-linked
derivatives and 1% in commodity-based derivatives. A further narrowing down
of the composition of the most important derivative market, namely interest
rate derivatives, into its different components shows that 75% of these are in
swaps, 15% in options and 10% in forward rate agreements.61
From this information we can conclude that interest rate risk and foreign
exchange risk are the most important risks. The event of credit default, in
addition to creating an immediate liquidity problem, magnifies the risk of the
bank through two more channels. As a result of the delay in repayment, the
effect of changes in the market prices will be adverse for the net income of the
bank. Furthermore, as a result of the default the firm will be downgraded, again
having a negative implication for the bank’s net income. Conventional banks
effectively use this decomposition of credit risk in mitigating the risks through
credit derivatives. Credit derivatives are not available to Islamic banks.
Moreover, due to default, the Islamic banks cannot reschedule the debt on the
basis of the mark-up rate. Hence these banks are also more exposed to default
triggered interest-rate risks as compared to their conventional counterparts62.
5.3.3 Challenges of Benchmark Rate Risk Management
Among interest rate derivatives, swaps are the most dominant contracts.
Swaps play a dual cost-reduction role. On the one hand, these
contracts enable financial institutions to utilize their comparative advantages in
fund raising and exchanging the liabilities according to their needs. Thus, the
contract minimizes the funding costs of participating institutions. On the other
hand, they are used as effective hedging instruments to cut the costs of
undesirable risks. Thus, the effective utilization of swaps undisputedly enhances
competitive efficiency. Since swaps are primarily interest-based contracts, these
have not attracted the attention of Islamic scholars.
Although Islamic banks do not undertake interest-based transactions,
they nevertheless use the London Interbank Offered Rate (LIBOR) as a
benchmark in their transactions. Thus, the effects of interest rate changes can be
transmitted to Islamic banks indirectly through this benchmark. In case of a
61 For the year 2000 as a whole, the turnover of exchange-traded derivatives was
recorded as US dollars 383 trillion (interest rate 339 trillion, equity 41 trillion, and
currency 2.6 trillion).

62 The interrelationship between credit and market risk is an important area of current
research, but there are still no reliable measures of this relationship.
change in the LIBOR, the Islamic banks could face this risk in the sense of
paying more profit to future depositors while receiving less income from the
users of long-term funds. Hence it is only prudent to consider that the assets of
Islamic banks are exposed to the risk of a change in the LIBOR.
Chapra and Khan (2000) argue that the nature of investment deposits on
the liability side of the Islamic banks adds an additional dimension to this risk.
Profit rates to be paid to Muḍārabah depositors by the Islamic bank will have to
respond to the changes in the market rate of markup. However, as profit rates
earned on assets reflect the markup rates of the previous period, these cannot be
raised. In other words, any increase in new earnings has to be shared with
depositors, but it cannot be re-adjusted on the assets side by re-pricing the
receivables at higher rates, particularly due to restrictions on the sale of debts.
The implication is that the net Murābaḥah income of the Islamic bank is
exposed to the mark-up price risk. Some techniques to mitigate the Murābaḥah
(mark-up) price risk are discussed below.
5.3.3.1 Two-step Contracts and GAP Analysis
One of the most common and reliable tools to manage interest rate risk
is the technique of GAP analysis,63 as discussed in section two. The GAP
analysis technique is used to measure the net income and its sensitivity with
respect to a benchmark. Risk management tools then aim, ideally, at making the
net income immune to any changes in the benchmark rate, i.e., a target net
income is achieved whatever the market benchmark may be. If such an objective
is achieved, an increase in the benchmark will not pose any risks to the targeted
net income. The cash flows of the bank remain stable at a planned level, ensuring
stability of net income.
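The repricing-gap measure underlying this technique can be sketched in a few
lines of Python; the balance-sheet figures below are purely hypothetical and are
used only to illustrate the calculation.

```python
# Illustrative repricing-gap calculation (hypothetical figures).
# GAP = rate-sensitive assets (RSA) - rate-sensitive liabilities (RSL);
# the change in net income for a benchmark move is approximately GAP * delta_r.

def repricing_gap(rsa, rsl):
    """Repricing gap for a given maturity bucket."""
    return rsa - rsl

def net_income_sensitivity(gap, delta_rate):
    """Approximate change in net income for a change in the benchmark rate."""
    return gap * delta_rate

rsa = 80.0   # re-price-able assets (e.g. floating-rate contracts), $ million
rsl = 120.0  # re-price-able liabilities (e.g. investment deposits), $ million
gap = repricing_gap(rsa, rsl)             # -40.0: a liability-sensitive position
print(net_income_sensitivity(gap, 0.01))  # a 1% benchmark rise lowers income
```

A bank that is liability-sensitive in this sense loses net income when the
benchmark rises, which is why the text stresses the re-price-ability of assets.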
The effectiveness of interest rate risk management depends on the re-
price-ability of assets and liabilities. As far as the Islamic banks are concerned,
investment deposits are perfectly re-price-able as the expected rate of return
increases and decreases depending on the market rate of return. On the other
hand, most of the assets of Islamic banks are perfectly non-re-price-able due to
restrictions on sale of debts. The effectiveness of a GAP management strategy
for the Islamic banks requires flexibility from the two extremes on both liability
and assets sides. On the asset side the Islamic banks’ managers need to have
more re-price-able assets. The list of probable financial instruments as given in
Table 5.4 is expected to make the asset side of Islamic banks more liquid in the
future.

63 See Koch (1995).
The re-price-ability of instruments on the liability side should be in the
control of asset and liability managers; however, the re-price-ability of
investment deposits is not in their control. This goal is always very difficult to achieve.
However, the availability of more options is always expected to be helpful. One
such option is available to Islamic banks in the form of two-step contracts.
In a two-step contract, the Islamic bank can play the role of a guarantor
in facilitating funds to the users. Since a guarantee cannot be provided as a
commercial activity, in a two-step contract it is provided through the Islamic
bank’s participation in the funding process as an actual buyer. In the existing
Murābaḥah contracts the bank makes an up-front payment to the supplier on
behalf of the client. In the two-step contract the bank will have two Murābaḥah
contracts: as a supplier with the client, and as a buyer with the actual supplier
(see Figure 5.3). The bank will hence not make an up-front payment to the actual
supplier. The two-step Murābaḥah contract will have a number of implications
for the banks.
Figure 5.3
Two-Step Contracts

[Diagram: the client is linked to the bank by a Client–Bank Murābaḥah
contract, and the bank is linked to the supplier by a Bank–Supplier
Murābaḥah contract.]
a. It can serve as a source of funds. In a longer maturity contract,
such funds can be considered as tier-2 capital of the bank, based on
the criteria allocated to such capital by the Basel Accord.
b. The contracts will enhance the banks' resources under
management. This will have both good and adverse implications.
The adverse implications will arise from the increased amount of
financial risks. If these risks are managed properly, the contracts
can prove to be instrumental in enhancing the net income and
hence competitiveness of Islamic banks.
c. This will enhance the liquidity position of Islamic banks. Although
liquidity is not the immediate problem of Islamic banks, the
availability of liquidity always enhances stability.
d. It will provide flexibility in liability management by offering
different maturities of liabilities. The banks can match the maturities
of their liabilities and assets more efficiently.
e. The banks will actually guarantee the re-payment of the funds by
the clients. Hence the guarantee is provided in a more acceptable and
transparent manner.
f. The concept of the two-step contracts is not restricted to
Murābaḥah; it is equally applicable to Istiṣnā‘, leasing and Salam.
g. Finally, the new contracts would be an addition to the available
financial instruments thus paving the way for developing further
instruments.
5.3.3.2 Floating Rate Contracts
Fixed rate contracts, such as long maturity installment sales, are normally
exposed to more risks as compared to floating rate contracts, such as operating
leases. In order to avoid such risks, therefore, floating rate leases can be
preferred. However, leases will expose the bank to equipment price risk, as
discussed below.
5.3.3.3 Permissibility of Swaps
As mentioned above, the basic economic rationale of a swap contract is
to cooperate in minimizing the cost of funds by reducing the borrowing cost and
by reducing the cost of undesirable risks for both parties. In this manner a swap
is a win-win contract for both participating parties. To start with, it is obvious
that one cannot expect any Fiqhī objections to such a cooperative strategy. It is
the process of implementing the swap contract which may not be permissible.
In particular, as all swaps are interest-based, there is no possibility for the Islamic
banks to use such contracts. To design Sharī‘ah-compatible swaps the following
conditions need to be fulfilled.
a. There is a party whose credit rating is low because it holds long maturity
(illiquid) assets and short maturity liabilities. Because of the shorter
maturity of liabilities, this party faces uncertainties in the short run.
Since its assets are long-term it needs to borrow long-term, but its long-
term borrowing cost is high due to its low rating. Due to the non-availability
of such cheaper and long-term funds, it actually borrows short-term at a
higher cost.
b. There is another party whose liabilities are long-term but whose assets
are more liquid; as a result, its credit rating is very good. It has no
uncertainties in the short term but has uncertainties in the long run at the
time of the maturity of its liabilities. It can borrow long-term at a cheaper
cost, but its borrowing preference is for short-term funds to match its
asset-liability maturity. These two scenarios simplify the real world situation.
Obviously, the Fiqh cannot have an objection to these quite natural
situations.
c. There exists a fixed-income financial instrument, which is used for
raising long-term funds and there also exists a floating income financial
instrument, which is used for raising short-term funds.
The third prerequisite for having an acceptable swap depends on the
availability of suitable Sharī‘ah-compatible instruments. At present, there are
no fixed or floating income Islamic financial instruments available in the
secondary markets. However, ideas have been substantially conceptualized and
efforts are underway to make institutional arrangements for making such
instruments available. Once such instruments become available, the Islamic
banks will be empowered with one of the most powerful tools of market risk
management, namely, swaps.
As discussed in section two, the objective of a swap is to exchange the
costs of raising funds on the basis of comparative advantages. It has been shown
there that by means of a swap both parties end up with a net financial gain as
well as payments consistent with their own asset and liability structures. Thus the
argument for a swap is exactly the same as the argument for free international
trade on the basis of comparative advantages. Since swaps are arranged in
trillions of US dollars in real life, they are the practical manifestation of the
theory of gains from comparative advantages under free trade.
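The gains-from-comparative-advantage argument can be illustrated numerically.
The borrowing rates below are hypothetical and are not taken from the text;
they only show how the total gain available for sharing is computed.

```python
# Hypothetical comparative-advantage swap gain (illustrative rates).
# Party A: fixed 10.00%, floating LIBOR + 1.00%  (prefers fixed-rate funding)
# Party B: fixed  8.00%, floating LIBOR + 0.25%  (prefers floating-rate funding)
a_fixed, a_float_spread = 0.1000, 0.0100
b_fixed, b_float_spread = 0.0800, 0.0025

# B has an absolute advantage in both markets but a comparative advantage in
# fixed-rate borrowing. The total gain available to share between the parties
# is the fixed-rate spread minus the floating-rate spread.
total_gain = (a_fixed - b_fixed) - (a_float_spread - b_float_spread)
print(round(total_gain, 4))  # 0.0125, i.e. 125 basis points to split
```

Each party borrows in the market where it has the comparative advantage and
exchanges payments, splitting the 125 basis points between them.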
The last question in evaluating a swap contract from the Islamic finance
perspective is: is it permissible for two parties to pay the funding costs of each
other? As shown, a swap is basically a win-win contract. Both parties are better
off and hence there could not be any objection from the Sharī‘ah point of view.
Thus we conclude that there is a great need for swap contracts. There apparently
are no Sharī‘ah restrictions on developing swap contracts as such. The
limitation is the availability of Sharī‘ah-compatible financial instruments
suitable for use in swap contracts.
5.3.4 Challenges of Managing Commodity and Equity Price Risks
In general, fluctuations in the prices of commodities and equities are not
of any serious concern for bank asset-liability managers. The statistical
information on derivatives provided earlier confirms this fact. However, banks
may consider investment in commodities particularly gold and equities as a
source of income for their clients and themselves. Banks’ exposures in this
regard are normally marginal and are included in the trading books. There are,
however, some special considerations involved in conceptualizing these risks,
particularly commodity price risks, in Islamic banks. First we briefly discuss
these considerations. After that the challenges of controlling the risks are
discussed.
While conceptualizing commodity price risks in Islamic banking there is
a need to clarify a number of points.
a. The Murābaḥah price risk and commodity price risk must be clearly
distinguished. In Islamic banking there could be a misconception about
the treatment of mark-up price risk as commodity price risk. The basis of
the mark-up price risk is LIBOR. Furthermore, it arises as a result of the
financing, not the trading, process. Therefore, in our understanding it
should conceptually be treated as an equivalent of interest (benchmark)
rate risk, as discussed in the previous section.
b. In contrast to mark-up price risk, commodity price risk arises as a result
of the bank holding commodities for some reason. Some good examples
of such reasons are: a) the Islamic bank developing an inventory of
commodities for selling; b) developing inventories as a result of Salam
financing; c) gold and real estate holdings; and d) holding of equipment,
particularly for operating leases. There is always a possibility that in
leases ending with ownership, the benchmark rate risk and equipment
price risk may not be properly identified and separated. This could be one
of the reasons due to which the Fiqh scholars do not recommend such leases.
c. Commodity price risk arises as a result of ownership of a real
commodity or asset. Mark-up price risk arises as a result of holding a
financial claim, which could be the result of deferred trading. Therefore,
under leasing, the equipment itself is exposed to commodity price risk
and the overdue rentals are exposed to interest-rate risks. Similarly, if
the lease contract is a long one and the rental is fixed, not floating, the
contract faces interest rate risk. Thus, a fixed rental based on a long
operating lease is exposed to dual-edged risks – commodity and mark-up
price. In order to avoid such risks, banks would prefer leases ending with
ownership, possibly with the price fixed in the beginning and the rentals
periodically re-priced. Such a lease would actually be an installment sale
contract based on a floating rate mark-up. In this case, the banks can actually
minimize their exposure to both the mark-up and commodity price risks.
Our survey results reveal that there is a tendency among Islamic banks
to prefer such contracts. However, neither such a lease nor such an
installment sale is compatible with the Sharī‘ah.
Thus we can conclude that the Murābaḥah and Istiṣnā‘ transactions are
exposed to Murābaḥah (mark-up price) or benchmark rate risk, and Salam and
leases are exposed to both Murābaḥah price and commodity price risks. Due to
Salam and operating leases, the commodity price risk exposure of Islamic banks
is expected to be higher as compared to their peer-group conventional banks. We
discuss here some of the techniques which may be useful in managing
commodity and equipment price risks.
5.3.4.1 Salam and Commodity Futures
Futures contracts enable their users to lock in future prices at their own
expectations. For example, a wheat grower typically faces price risk – a
deviation of actual future wheat prices from the expected future wheat prices. A
farmer whose wheat will be ready for market in six months’ time may expect its
price to be some amount per bushel; after six months the price may actually
turn out to be more or less than that. If the farmer dislikes the uncertainty
related to the future prices of wheat, he simply has to find a future buyer on the
basis of Salam who would pay him the expected price per bushel now. If the deal
is reached, the farmer has removed the uncertainty by selling the wheat at the
price of his own expectations. Removal of the future wheat price risk enables the
farmer to project his business forecasts more accurately, particularly if he has to
pay significant amounts of debt.
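The farmer's hedge can be expressed as a simple payoff comparison; the bushel
quantity and prices below are illustrative assumptions, not figures from the text.

```python
# Payoff sketch for the wheat farmer's Salam hedge (illustrative numbers).

def revenue(bushels, realized_price, salam_price=None):
    """Revenue with or without a Salam sale fixed in advance at salam_price."""
    price = salam_price if salam_price is not None else realized_price
    return bushels * price

bushels = 10_000
for realized in (4.0, 5.0, 6.0):          # possible market prices in six months
    unhedged = revenue(bushels, realized)
    hedged = revenue(bushels, realized, salam_price=5.0)
    print(realized, unhedged, hedged)     # hedged revenue is 50,000 regardless
```

The Salam sale fixes revenue at the expected price, removing the uncertainty at
the cost of foregoing any gain if prices turn out higher.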
The potential of futures contracts in risk management and control is
tremendous. Conventional banks manage risks by utilizing commodity forwards
and futures contracts. In these contracts, unlike Salam, payment of the price of
the commodity is postponed to a future date. In the traditional Fiqh, postponing
both the price and the object of sale is not allowed. Therefore, the Islamic banks
at present do not utilize commodity futures contracts on a large scale.
Nevertheless, by virtue of a number of Fiqh Resolutions, conventions, and new
research, the scope for commodity futures is widening in Islamic financing.64 In
the future these contracts may prove to be instrumental in managing the risks of
commodities.
5.3.4.2 Bai‘ al-Tawrīd with Khiyār al-Sharṭ
As mentioned above, the objection of Islamic scholars to delaying both
the price and the object of sale is softening. An important reason for this has
been the need, and sometimes the inevitability, of such transactions in real life.
The classical example given of Bai‘ al-Tawrīd as a long-term contract is the
supply of milk by the milkman. At the time of signing the contract, the milk
buyer and the milkman (the two parties) agree on the quantity of milk to be
delivered daily, the duration of the contract, the time of delivery and the price.
The milk is not present at the time of the contract and the price is mostly paid
periodically, normally on a monthly basis. Public utilities provide a modern
example of the case. Public utilities are consumed and the bill is paid when it
comes at a future date. In this way, both the price and the service are not present
in the beginning. There are numerous other examples from real life where the
postponement of the two actually enhances efficiency and convenience and
sometimes the postponement becomes in fact inevitable.
The example of the milkman provides an important basis for extending
the postponement of both the price and the object to banking. Any type of
64 The OIC Fiqh Academy in its Resolution # 65/3/7 has resolved that in Istiṣnā‘ both the
price and its object of sale can be delayed. Istiṣnā‘ is the most dynamic mode of Islamic
finance. Thus, such delayed-payment Istiṣnā‘ contracts are expected to increase in volume.
The Fiqh Academy has also resolved on the acceptability of ‘arboon. Since in ‘arboon most
of the price is delayed and the object of sale is also delayed, this also falls within the
framework of the definition of a forward sale. Islamic banks are using a special variant of
‘arboon in which the client pays a small part of the price up-front and payment of the price
and the object of sale are both delayed. For all Resolutions of the Academy see IRTI-
OICFA (2000). Bai‘ al-tawrīd (continuous supply-purchase relationship with known but
deferred price and object of sale) is a popular contract among the Muslims of our time,
particularly in public procurements. Finally, some prominent Islamic banks are already
using currency forwards and futures.
Islamic banking contract with a predetermined price, quantity and long duration
and in which the price and object are both postponed will have an analogy to the
example. In such contracts, both parties are exposed to price risk. The
risk is that, immediately after contracting at a fixed price and a fixed quantity,
the two parties may experience a noticeable change in the market price of the
commodity. If the market price declines, the buyer will be at a loss by continuing
with the contract. If the market price rises, the seller will lose by continuing with
the contract. Thus, in such contracts of continuous supply-purchase, a Khiyār al-
Sharṭ (option of condition) for rescinding the contract will make the contract
more just and will reduce the risk for both parties. In Figure 5.4, if the agreed
price is P0, and the two parties are uncertain about the future market prices, they
can mutually determine the upper and lower boundaries for the acceptable
movements in market prices. Beyond these boundaries they can agree to rescind
the contracts.65
Figure 5.4
Khiyār al-Sharṭ with respect to future prices

[Diagram: the price of the contract lies between two bands of future market
prices – an upper ceiling of market price appreciation beyond which the
supplier can rescind the contract, and a lower ceiling beyond which the buyer
can rescind the contract.]
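The rescission band described above can be sketched directly; the contract
price and ceilings below are hypothetical values chosen only for illustration.

```python
# Sketch of a Khiyar al-Shart (option of condition) band, assuming the two
# parties agree on the contract price p0 and on upper/lower rescission ceilings.

def rescission_right(market_price, upper, lower):
    """Return which party (if any) may rescind under the agreed condition."""
    if market_price > upper:
        return "supplier may rescind"   # price rose past the supplier's ceiling
    if market_price < lower:
        return "buyer may rescind"      # price fell past the buyer's ceiling
    return "contract continues"

p0 = 100.0  # agreed contract price; band set around it by mutual consent
print(rescission_right(118.0, upper=115.0, lower=85.0))  # supplier may rescind
print(rescission_right( 92.0, upper=115.0, lower=85.0))  # contract continues
```

Within the band both parties continue to perform; beyond it, the disadvantaged
party holds the agreed option to rescind, which caps the price risk on each side.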
65 For a more detailed discussion, see Obaidullah (1998).

5.3.4.3 Parallel Contracts
Price risk can either be due to transitory changes in prices of specific
commodities and non-financial assets or due to a change in the general price
level or inflation. Inflation poses risk to the real values of debts (receivables),
which are generated as a result of Murābaḥah transactions. However, as a result
of inflation it is expected that the prices of the real goods and commodities
which the banks acquire as a result of Salam transactions will appreciate. This
divergent movement of asset values created as a result of Murābaḥah and Salam
has the potential to mitigate the price risks underlying these transactions.
Although permanent shifts in asset prices cannot be hedged against, the
composition of receivable assets on the balance sheet can be systematically
adjusted in such a way that the adverse impact of inflation is reduced, as
explained in Figure 5.5 (A).
Suppose that an Islamic bank has sold assets worth $100 on a Murābaḥah
basis for six months; it can fully hedge against inflation by buying $100 worth
on a Salam basis. If, for example, 10% of the value of the previous assets is
wiped out by inflation, its Salam-based receivables can become more valuable by
the same percentage. Moreover, as far as the Salam is concerned, it can be fully
hedged by the bank by adopting an equivalent parallel Salam contract as a
supplier. Figure 5.5 (B) also explains this possibility.
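The arithmetic of this parallel-contract hedge, using the text's own $100 and
10% figures, can be checked in a few lines:

```python
# Numeric sketch of the parallel-contract hedge from the text: $100 of
# Murabahah receivables (nominal debt) offset by $100 of Salam receivables
# (claims on real goods), under 10% inflation.

inflation = 0.10
murabahah_receivable = 100.0   # fixed nominal debt: real value falls
salam_receivable = 100.0       # claim on real goods: value rises with inflation

real_value_debt = murabahah_receivable * (1 - inflation)   # 90.0
value_real_assets = salam_receivable * (1 + inflation)     # 110.0
print(round(real_value_debt + value_real_assets, 2))       # 200.0: hedged
```

The 10% loss on the debt side is offset by the 10% gain on the real-asset side,
so the inflation-adjusted value of total receivables is unchanged.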
Figure 5.5 (A)
Parallel Contracts: Implications for Inflation Risk Mitigation

[Diagram, two panels. Panel A – the trader as a seller: a price-deferred sale
creates receivables in the form of debts, whose value falls by 10 under 10%
inflation. Panel B – the trader as a buyer: an object-deferred purchase creates
receivables in the form of real assets, whose value rises by 10.]

Figure 5.5 (B)
Managing Receivables as a Hedge Against Inflation

[Diagram, two panels. $100 of price-deferred sales (Panel A) and $100 of
object-deferred purchases (Panel B) give total receivables of $200 unadjusted
for inflation; after adjustment the values become $90 and $110 respectively,
leaving the inflation-adjusted value of total receivables at $200.]
5.3.5 Equity Price Risks and the use of Bay‘ al-‘arboon
Options are another powerful risk management instrument. However, a
Resolution of the OIC Fiqh Academy prohibits the trading in options. Therefore,
the scope for the utilization of options by the Islamic banks as a risk
management tool is limited at present. Nevertheless, some Islamic funds have
successfully utilized ‘arboon (down payment with an option to rescind the contract by
foregoing the payment as a penalty) to minimize portfolio risks in what are now
popularly known in the Islamic financial markets as principal protected funds
(PPFs).
The PPF arrangement roughly works in the following manner: 97% of
the total funds raised are invested in low risk (low return) but liquid Murābaḥah
transactions. The remaining 3% of the funds are used as a down payment for
‘arboon to purchase common stock at a future date. If the future price of the
stock increases as expected by the fund manager, the ‘arboon is utilized by
liquidating the Murābaḥah transactions. Otherwise the ‘arboon lapses, incurring
a 3% cost on the funds. This cost is however covered by the return on the
Murābaḥah transactions. Thus, the principal of the fund is fully protected. In
this way, ‘arboon is utilized effectively in protecting investors against undesirable
downside risks of investing in stocks while at the same time keeping an
opportunity for gains from favorable market conditions.
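A simplified payoff sketch of the 97%/3% structure follows. The Murābaḥah
return and stock gain are illustrative assumptions, and the mechanics of
completing the purchase are deliberately simplified.

```python
# Principal-protected fund sketch, following the text's 97%/3% split.
# Figures (murabahah return, stock gain) are illustrative assumptions only.

def ppf_value(fund, murabahah_return, exercise, stock_gain=0.0):
    """End-of-period value: 97% in murabahah, 3% paid as an 'arboon."""
    principal_leg = fund * 0.97 * (1 + murabahah_return)
    if exercise:
        # prices rose: the purchase is completed, so the 3% down payment is
        # recovered in the stock position and the gain on the stock accrues
        equity_leg = fund * 0.03 + fund * stock_gain
    else:
        # prices fell: the 'arboon lapses, costing 3% of the fund
        equity_leg = 0.0
    return principal_leg + equity_leg

fund = 100.0
# A murabahah return of about 3.1% on the 97% leg covers the 3% 'arboon if
# it lapses, so the principal is preserved in the downside scenario:
print(round(ppf_value(fund, 0.031, exercise=False), 2))
```

The downside value stays at roughly the original principal, while the upside
scenario adds the stock gain on top, which is the protection the text describes.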
5.3.6 Challenges of Managing Foreign Exchange Risk
Foreign exchange risk can be classified into economic risk, transaction
risk and translation risk. Economic risk is the risk of losing relative
competitiveness due to changes in relative exchange rates. For instance, an
appreciation of local currency will increase the relative price of exported goods
directly as well as indirectly. There is no better hedge against such a risk than
having subsidiaries in countries of significant markets. This is a crucial
matter for non-financial firms, but financial firms cannot ignore it either. In
fact, the dominant Islamic financing groups have such subsidiaries in important
markets.
Translation risk occurs only in the accounting sense. Suppose the
subsidiary of a bank operating in another country makes a 13% profit during a
year. If the currency of these earnings depreciates by 10% during the period
against the home country currency, the earnings translated into the home
country currency increase by only about 3%. Hence this risk does not affect the
value of assets in place.
Transaction risks originate from the nature of the bank’s deferred
delivery transactions. Typically the implication of transaction risk is similar to
that of transitory changes in commodity prices. The currency in which
receivables are due (or assets in general are held) may depreciate in the future
and the currency in which payables (or liabilities in general) are held may
appreciate, thus posing a risk to the overall value of the firm.
This risk could have adverse and severe consequences for a business.
Therefore, it must be minimized by various techniques. Any remaining risk must
be hedged by using currency futures and forwards. Some of the possible
methods of reducing currency transaction risks are briefly discussed here.
5.3.6.1 Avoid Transaction Risks
On the banking book, the most sensible method to avoid transaction risk
is to avoid undertaking transactions which require dealing in unstable
currencies. However, this is not always possible, as by rigidly following this
strategy market share can be lost. Banks have to decide carefully to find an
optimal trade-off between market shares and possible transaction risks.
5.3.6.2 Netting
On-balance sheet netting is another method used to minimize risk
exposure to the net amount of the receivables from and payables to a
counterparty. Netting is more suitable for payments between two subsidiaries of
a company. With non-subsidiary counterparties, the currency positions of
receivables and payables can generally be matched so that the mutual exposures
are minimized.
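The effect of netting on the amount that remains to be hedged can be shown
with a small table of hypothetical exposures:

```python
# On-balance-sheet netting sketch: the exposure shrinks to the net of
# receivables and payables per counterparty and currency (hypothetical
# figures, in $ million).

exposures = [
    # (counterparty, currency, receivable, payable)
    ("A", "USD", 5.0, 3.5),
    ("A", "EUR", 2.0, 2.0),
    ("B", "USD", 1.0, 4.0),
]
for cpty, ccy, recv, pay in exposures:
    gross = recv + pay        # what would need hedging without netting
    net = recv - pay          # what remains to be hedged after netting
    print(cpty, ccy, f"gross={gross}", f"net={net}")
```

Only the net positions (1.5, 0.0, -3.0) remain exposed to exchange-rate
movements, a fraction of the gross amounts.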
5.3.6.3 Swap of Liabilities
Exchange of liabilities can also minimize exposure to foreign exchange
risk. For instance, a Turkish company needs to import rice from Pakistan, and a
Pakistani company needs to import steel from Turkey. The two parties can
mutually agree to buy the commodities for each other, bypassing the currency
markets. If the dollar amount of the two commodities is the same, this
arrangement can eliminate transaction risk for both parties. If the ratings of the
two companies are better in their own home countries as compared to the other
country, this swap will also save them some of the cost of finance.
5.3.6.4 Deposit Swap
Islamic banks have been using the technique of deposit swaps. In this
method, two banks, in accordance with their own expected risk exposures, agree
to maintain mutual deposits of two currencies at an agreed exchange rate for an
agreed period of time. For example, a Saudi bank will open a six-month account
of SR 50m in a counterpart Bangladeshi bank. The Bangladeshi bank will open
an account for the TK equivalent of the SR deposit in the Saudi bank for the
same period. The SR/TK exchange rate will be mutually agreed and will be
effective for the deposit period. After the six months both banks will withdraw
their deposits. In this way the exposures of the two banks to the values of the
currencies involved are minimized according to their own perceptions.
There are at least two Sharī‘ah objections to this contract. The exchange
rate cannot be any rate except the spot rate. In this case the rate is fixed for a
period during which there could be a number of spot rates, not only one. The
exchange of deposits is also questionable. These deposits are supposed to be
current accounts, which are treated as Qarḍ. There cannot be mutual Qarḍ.
Further, Qarḍ in two different currencies cannot be exchanged.
5.3.6.5 Currency Forwards and Futures
Forwards and futures are the most effective instruments of hedging
against currency risks. Most Islamic banks who have significant exposures to the
FX risk do use currency forwards and futures for hedging purposes as required
by regulators. However, all Fiqh scholars unanimously agree that such contracts
are not allowed in the Sharī‘ah. Keeping this apparent contradiction in view and
the tremendous difference between the stability of the present and past markets,
Chapra and Khan (2000) make a suggestion to the Fiqh scholars to review their
position and allow the Islamic banks to use these contracts for hedging. Such a
change in position will remove the contradiction between the practices of
Islamic banks and the existing Fiqhī positions on one hand and will empower the
Islamic banks on the other hand. Furthermore, it may be noted that hedging is
not an income earning activity. Since Ribā is a source of income and hedging
does not generate income, there is no question of involvement of Ribā. On the
other hand, hedging actually reduces Gharar. It is important to note that this is a
personal opinion of the two writers. The consensus among Fiqh scholars is that
currency futures and forwards are another form of Ribā which have been
prohibited by the Sharī‘ah.
5.3.5.6 Synthetic Forward
As an alternative to currency forwards and futures, Iqbal (2000)
proposes a synthetic currency forward contract. The purpose is to design a
currency forward without using currency forward contracts. The synthetic
forward can be designed given a number of conditions. These conditions are the
need for hedging against foreign exchange rate risk, and the existence of a local
Murābaḥah investment and a foreign Murābaḥah investment of equal tenure. The
existence of a known rate of return on the Murābaḥah-based investments is
another condition. Finally, the foreign bank, which invests in the dollar-based
Murābaḥah, must be willing to collaborate with the local bank, which invests in
the local currency-based Murābaḥah. The dollar amount of the two investments
must be the same.66
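A common simplification of this idea is that the two known same-tenor returns imply a forward rate, in the spirit of covered parity. The sketch below illustrates only that arithmetic, not the full arrangement of Iqbal (2000); all rates are hypothetical:

```python
# Implied (synthetic) forward rate from two same-tenor Murabahah returns.
# All figures are hypothetical; this is a simplified sketch of the idea,
# not the full arrangement described in Iqbal (2000).

def synthetic_forward_rate(spot, r_local, r_foreign):
    """Local-currency/USD forward implied by covered parity, using
    the known Murabahah returns in each currency for the tenure."""
    return spot * (1 + r_local) / (1 + r_foreign)

spot = 60.0          # e.g. Rs per dollar today (hypothetical)
r_local = 0.04       # 6-month local-currency Murabahah return (hypothetical)
r_usd = 0.02         # 6-month dollar Murabahah return (hypothetical)

fwd = synthetic_forward_rate(spot, r_local, r_usd)
print(f"implied 6-month forward: {fwd:.2f} per dollar")
```

The local bank effectively locks in this rate by pairing the two investments, without ever entering a conventional forward contract.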
5.3.5.7 Immunization
Once the net exposure is minimized, the possibility exists that the
remaining exposure can be hedged. Suppose an Islamic bank has to pay $1
million in three months' time for a contract which it signed when the exchange
rate was Rs.60/$. The risk is that after three months the dollar will have
appreciated compared to the initial exchange rate. The bank can protect against
this risk by raising a three-month PLS deposit in rupees for the value of $1m and
buying $1m with this amount at the spot rate. These dollars can then be kept in a
dollar account for three months. After the three months, at the time of making
the payment, the PLS deposit will mature and the bank can share the earnings
on the dollar deposit with the rupee deposit holders. Thus, the dollar exchange
rate risk for the three-month period is fully hedged by the bank.
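The cash flows of this immunization can be checked with a short calculation; the Rs.60/$ rate and the $1m payable are from the example, while the maturity spot rates are hypothetical scenarios:

```python
# Immunization of a known $1m payable due in three months.
# Rs.60/$ and $1m are from the text; the maturity spot rates are hypothetical.
PAYABLE_USD = 1_000_000
SPOT_TODAY = 60.0                          # Rs per dollar at contract signing

rupees_raised = PAYABLE_USD * SPOT_TODAY   # 3-month PLS deposit, Rs 60m
dollars_held = rupees_raised / SPOT_TODAY  # $1m bought spot and held on deposit

for spot_at_maturity in (55.0, 60.0, 70.0):
    # Unhedged cost: buy $1m at whatever the spot rate turns out to be.
    unhedged_cost = PAYABLE_USD * spot_at_maturity
    # Hedged cost: the dollars are already in hand; cost was fixed at Rs 60m.
    hedged_cost = rupees_raised
    print(f"spot {spot_at_maturity}: unhedged Rs {unhedged_cost:,.0f}, "
          f"hedged Rs {hedged_cost:,.0f}")
```

Under the hedge the rupee cost is fixed at signing, whatever the spot rate does over the three months.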
5.4 LIQUIDITY RISK
As mentioned earlier, liquidity risk is the variation in a bank's net
income due to the bank's inability to raise capital at a reasonable cost either by
selling its assets in place (asset liquidity problem) or by borrowing through
issuing new financial instruments (funding liquidity problem). All other risks of
a bank culminate in a liquidity crunch before bringing a problem bank down.
Operationally, a bank fails when its cash inflows from repayments of credits,
66. For details of the arrangement, see Iqbal (2000).
sale of assets in place and mobilization of additional funds fall short of its
mandatory cash outflows: deposit withdrawals, operating expenses, and debt
obligations.
A recent study commissioned by the Bahrain Monetary Agency (BMA
2001) shows that in general Islamic banks are facing a phenomenon of excess
liquidity. The total assets of the Islamic banks in the sample were 13.6 billion
dollars, of which 6.3 billion dollars were found to be in liquid assets. For Islamic
financial institutions with combined assets of 40 billion dollars, the liquid assets
are calculated to be 18.61 billion dollars. The study shows that the peer-group
conventional counterparts of the Islamic banks in the sample, on average, keep
46.5% less liquidity than the Islamic institutions.
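The liquidity ratios implied by these figures can be verified directly (the amounts are as cited from BMA 2001):

```python
# Liquidity ratios implied by the BMA (2001) figures cited in the text:
# (liquid assets, total assets) in billions of dollars.
banks_sample = {
    "Islamic banks (sample)": (6.3, 13.6),
    "Islamic financial institutions": (18.61, 40.0),
}

for name, (liquid, total) in banks_sample.items():
    ratio = liquid / total
    print(f"{name}: liquid assets are {ratio:.1%} of total assets")
```

Both groups hold roughly 46% of their assets in liquid form, which underlines the scale of the excess-liquidity problem described above.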
On average, conventional banks are expected to maintain the bare
minimum liquidity that satisfies regulatory requirements. The liquidity
position of Islamic banks is much in excess of the regulatory requirements. This
means that these liquid funds are either not earning any return at all or are
earning a return much lower than the market rates. Thus the excess liquidity
position of the Islamic banks generates a serious business risk for them, as it
adversely affects the rates of return they offer compared to their conventional
competitors. Furthermore, in most cases these banks rely largely on current
accounts, which are a more stable source of free liquidity.
However, for a number of reasons, Islamic banks are prone to face
serious liquidity risks.
a. There is a Fiqh restriction on the securitization of the existing assets of
Islamic banks, which are predominantly debt in nature. Thus, the assets
of Islamic banks are less liquid than the assets of conventional banks.
b. Due to the slow development of financial instruments, Islamic banks are
also not able to raise funds quickly from the markets. This problem
becomes more serious given that there is no inter-bank money market
among Islamic banks.
Table 5.4
List of Islamic Financial Instruments

Declining participation certificates: Redeemable Mushārakah certificates designed by the IFC for providing funds to the Modaraba companies in Pakistan.

Islamic Deposit Certificates: Based on the Muḍārabah principle; the proceeds of these certificates are meant for general-purpose utilization by the issuing institution.

Installment sale debt certificates: Proposed to finance big-ticket purchases by making a pool of smaller contributions. The certificate represents the principal amount invested plus the Murābaḥah income. These are issued mostly in Malaysia as Islamic debt certificates.

Islamic Investment certificates: Similar to Islamic deposit certificates, but the proceeds are meant to be utilized in a specific project.

Istiṣnā‘ debt certificates: Like the installment sale debt certificate, this certificate represents the investors’ principal amount invested in the Istiṣnā‘ project plus the Murābaḥah income, and is proposed for financing infrastructure projects.

Leasing certificates: Represent ownership of usufructs leased out for a fixed rental income. Since usufructs are marketable, this certificate can be bought and sold.

Muḍārabah certificates: Represent ownership in the beneficiary company without a voting right; issued so far by several institutions.

Muqāraḍah certificates: A hybrid between the Muḍārabah certificate and declining participation certificates, to be issued by the government for the development of public utility projects. A Muqāraḍah certificate law was enacted in Jordan during the early Eighties, but these certificates were never issued.

Mushārakah certificates: Common stocks of companies doing Sharī‘ah-compatible business. The government in Iran issues these certificates for financing infrastructure projects. In Sudan these are issued as an instrument of monetary policy.

National participation certificates: Proposed by the IMF staff as an instrument for mobilizing resources for the public sector. These proposed instruments are based on the concept of the Mushārakah certificates issued in Iran, and are assumed to represent an ownership title in the public sector assets of a country.

Property income certificates: A Muḍārabah income note with a secure stream of income from an ownership in a property, without a voting right.

Participation term certificates: Issued by the Bankers’ Equity Pakistan in the Eighties; these had some common characteristics of declining Mushārakah and Muqāraḍah certificates.

Rent sharing certificates: The holder shares in the rental income of the asset against which the certificate has been issued.

Revenue sharing certificates: Issued in Turkey for re-financing the privatized infrastructure projects.

Salam certificates: The holder claims commodities, goods and services on a specified future date against the payments the holder has made.

Two-step contracts (Leasing, Murābaḥah, Istiṣnā‘, Salam): In these contracts the bank pays the suppliers in installments and creates a fixed liability in its balance sheet instead of paying up-front to the suppliers.

Hybrid certificates: Allow the holder of any of the debt certificates to exchange the certificate for other assets of the issuing entity, or of any other entity, subject to the offer prescribed on the hybrid certificate.
c. The specific objective of Lender of Last Resort (LLR) facilities is to
provide emergency liquidity to banks whenever needed. The existing
LLR facilities are based on interest; therefore, Islamic banks cannot
benefit from them, and
d. Due to the non-existence of a liquidity problem at present, these banks
do not have formal liquidity management systems in place. Hence there
is a large potential to develop financial instruments (see Table 5.4 for a
list of potential instruments) and markets which can utilize the excess
liquidity of the Islamic banks for income earning. The project on the
Islamic Capital Markets sponsored by the IDB, BMA and Bank Negara
Malaysia is expected to formally launch a facility for tapping this
potential.
VI
CONCLUSIONS
The previous sections of this paper covered a number of important areas
concerning risk management issues in the Islamic financial industry. The
introductory section covered, among other things, the systemic importance of
the Islamic financial industry. An overview of the various concepts of risks and the
industry standards of risk management techniques were discussed in section two
along with the discussion of some of the unique risks of the Islamic modes of
finance. The perceptions of the Islamic banks about various risks were surveyed
through a questionnaire and analyzed in section three. In section four the
emerging regulatory concerns with risk management have been discussed and
some conclusions were drawn for the Islamic banking supervision. The Sharī‘ah
related challenges concerning risk management were analyzed in section five. In
the present section we summarize the main conclusions of the paper.
6.1 The Environment
The Islamic financial industry has come a long way during its short history.
The future of these institutions, however, will depend on how they cope with the
rapidly changing financial world. With globalization and the information
technology revolution, financial institutions are expanding the range of their
activities. As a result, the general premise of universal banking is becoming
more dominant.
6.2 Risks Faced by the Islamic Financial Institutions
The risks faced by the Islamic financial institutions can be classified into
two categories – risks which they have in common with traditional banks as
financial intermediaries and risks which are unique due to their compliance with
the Sharī‘ah. The majority of the risks faced by conventional financial institutions
such as credit risk, market risk, liquidity risk, operational risk, etc., are also
faced by the Islamic financial institutions. However, the magnitudes of some of
these risks are different for Islamic banks due to their compliance with the
Sharī‘ah. In addition to these risks commonly faced by traditional institutions,
the Islamic institutions face other unique risks. These unique risks stem from the
different characteristics of the assets and the liabilities. Other than the risks that
conventional banks face, the profit-sharing feature of Islamic banking introduces
some additional risks. In particular, paying the investment deposits a share of the
bank’s profits introduces withdrawal risk, fiduciary risk, and displaced
commercial risks. In addition, the various Islamic modes of finance have their
own unique risk characteristics. Thus, the nature of some risks that Islamic
financial institutions face is different from their conventional counterparts.
6.3 Risks Management Techniques
Consequent on the common or unique nature of risks faced by the
Islamic financial institutions, the techniques of risk identification and
management available to these institutions are of two types. The standard
traditional techniques that are not in conflict with the Islamic principles of
finance are equally applicable to the Islamic financial institutions. Some of these
are, for example, GAP analysis and maturity matching, internal rating systems,
risk reports, and RAROC. In addition there is a need for adapting the traditional
tools or developing new techniques that must be consistent with the Sharī‘ah
requirements. Similarly, the processes, internal control systems, internal and
external audits as used by the conventional institutions are equally applicable to
Islamic financial institutions. There is, however, a need to develop these
procedures and processes further by the Islamic financial institutions to tackle
the additional unique risks of the industry.
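As an illustration of one of the standard techniques named above, RAROC divides the risk-adjusted return of an activity by the economic capital it consumes. A minimal sketch with entirely hypothetical figures:

```python
# RAROC sketch: risk-adjusted return on (economic) capital.
# All inputs are hypothetical illustrations, not figures from the study.

def raroc(revenue, funding_cost, operating_cost, expected_loss, economic_capital):
    """Risk-adjusted net return divided by the economic capital set aside
    to absorb unexpected losses on the activity."""
    risk_adjusted_return = revenue - funding_cost - operating_cost - expected_loss
    return risk_adjusted_return / economic_capital

r = raroc(revenue=12.0, funding_cost=5.0, operating_cost=2.0,
          expected_loss=1.5, economic_capital=25.0)
print(f"RAROC = {r:.1%}")  # an activity is attractive if RAROC exceeds the hurdle rate
```

The same calculation applies to a Murābaḥah book or an Ijārah portfolio once expected losses and economic capital can be estimated, which is why the technique carries over to Islamic institutions.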
6.4 Risk Perception and Management in Islamic Banks
Results from a survey of 17 Islamic institutions from 10 different
countries reveal the bankers’ perspectives on different risks and issues related
to the risk management process in Islamic financial institutions. The results
confirm that Islamic financial institutions face some risks arising from profit-
sharing investment deposits that are different from those faced by conventional
financial institutions. The bankers consider these unique risks more serious than
the conventional risks faced by financial institutions. The Islamic banks feel that
the returns given on investment deposits should be similar to that given by other
institutions. They strongly believe that depositors will hold the bank
responsible for a lower rate of return, which may cause a withdrawal of funds.
Furthermore, the survey shows that the Islamic bankers judge the profit-sharing
modes of financing (diminishing Mushārakah, Mushārakah and Muḍārabah)
and product-deferred sales (Salam and Istiṣnā‘) to be riskier than Murābaḥah
and Ijārah.
We found the overall risk management processes in Islamic financial
institutions to be satisfactory. We apprehend, however, that this may be because
the banks that have relatively better risk management systems have responded to
the questionnaires. The results on the risk management process show that while
Islamic banks have established a relatively good risk management environment,
the measuring, mitigating and monitoring processes and internal controls need
to be upgraded further.
The survey also identifies the problems that Islamic financial institutions
face in managing risks. These include lack of instruments (like short-term
financial assets and derivatives) and money markets. At the regulatory level, the
financial institutions apprehend that the legal system and regulatory framework
is not supportive to them. The results indicate that the growth of Islamic
financial industry will to a large extent depend on how bankers, regulators, and
Sharī‘ah scholars understand the inherent risks arising in these institutions and
take appropriate policies to cater to these needs. This calls for more research in
these areas to develop risk management instruments and procedures that are
compatible with Sharī‘ah.
6.5 Regulatory Concerns with Risk Management
The primary concern of regulatory and supervisory oversight standards
is to a) ensure systemic stability, b) protect the interests of depositors and c)
enhance the public’s confidence in the financial intermediation system. The
Islamic banks cannot be an exception to these public policy considerations.
Due to the new risks introduced by the Islamic banks, it is expected that
regulatory and supervisory concerns will increase with the expansion of the
Islamic products.
6.6 Instruments of Risk-based Regulation
The three instruments of risk-based regulation, namely minimum capital
requirements, supervisory review and disclosure-based market discipline,
constitute the three pillars of the New Basel Accord, which primarily aims at
developing a risk management culture in financial institutions by providing
capital incentives for good systems and processes. By issuing a consultative
document and inviting comments on the
document all financial institutions are provided an opportunity to participate in
the process of setting these standards. The Islamic banks must participate in the
process so that the standards can cater for their special needs.
6.7 Risk-based Regulation and Supervision of Islamic Banks
Adopting international standards for the regulation and supervision of
Islamic banks will increase the acceptance of these institutions in the
international markets and hence will prove to be instrumental in making the
institutions more competitive. Some standards could be applied without any
difficulty. However, there is a difficulty in applying in particular the risk
weighting standards to Islamic banks due to the different nature of the Islamic
modes of finance. This limitation is overcome by the internal ratings-based
approach of the New Basel Accord. It is too early for the Islamic banks to qualify
for using internal ratings for regulatory capital allocation. However, by opting
for this approach in the future, the Islamic banks will not only be able to comply
with the international standards but will also be developing risk management
systems suitable for the Islamic modes of finance. Moreover, the nature of the
Islamic banks’ current and investment accounts creates a unique systemic risk,
namely, the transmission of risks of one account to the other. In the Islamic
banking windows of traditional banks, this systemic risk could be in the form of
risk transmission between permissible and impermissible sources of income.
These two systemic risks can be prevented by requiring separate capital for the
current and investment accounts of an Islamic bank as well as for traditional and
Islamic banking activities of a conventional bank.
6.8 Risk Management: Sharī‘ah-based Challenges
Risk management is an ignored area of research in Islamic finance. The
present paper is one of the few written in this area so far. Therefore, a number
of challenges are still being confronted in this area. These challenges stem from
different sources. First, a number of risk management techniques are not
available to Islamic financial institutions due to requirements for the Sharī‘ah
compliance. In particular, these are credit derivatives, swaps, derivatives for
market risk management, commercial guarantees, money market instruments,
commercial insurance, etc. Due to a lack of research, efficient alternatives to
these techniques have not been explored. Second, there are a number of Sharī‘ah
positions which affect the risk management processes directly. Some of these are
lack of effective means to deal with willful default, prohibition of sale of debts
and prohibition of currency forwards and futures. Third, lack of standardization
of Islamic financial contracts is also an important source of the challenges in this
regard.
A number of ideas have been discussed and analyzed in the paper which
can be considered to constitute an agenda for further research and deliberations
by researchers, practitioners and Sharī‘ah scholars. For their practical relevance
the ideas discussed in the paper must attain the consensus of the Sharī‘ah
scholars. There is a great need to enhance the process of consensus formation on
a priority basis so that the Islamic financial institutions can develop Sharī‘ah
compliant risk management systems as early as possible.
VII
POLICY IMPLICATIONS
Based on what has been reported in this study, a number of policy
implications can be suggested for the development of a risk management culture
in Islamic financial institutions. Some of these are mentioned here.
7.1 Management Responsibility
A risk management culture in Islamic banks can be introduced by
involving all the departments/sections in the risk management process discussed.
In particular, the Board of Directors can create the risk management
environment by clearly identifying the risk objectives and strategies. The
management needs to implement these policies efficiently by establishing
systems that can identify, measure, monitor, and manage various risk exposures.
To ensure the effectiveness of the risk management process, Islamic banks also
need to establish a proficient internal control system.
7.2 Risk Reports
Risk reporting is extremely important for the development of an
efficient risk management system. We consider that the risk management
systems in Islamic banks can be substantially improved by allocating resources
to preparing the following periodic risk reports. The sketches of some of the
reports are given in Appendix-2.
a. Capital at Risk Report
b. Credit Risk Report
c. Aggregate Market Risk Report
d. Interest Rate Risk Report
e. Liquidity Risk Report
f. Foreign Exchange Risk Report
g. Commodities and Equities Position Risk Report
h. Operational Risk Report
i. Country Risk Report
7.3 Internal Ratings
At initial stages of its introduction an internal rating system may be seen
as a risk-based inventory of individual assets of a bank. Such systems have
proved highly effective in filling the gaps in risk management systems, hence
enhancing the external rating of the concerned institutions. This contributes to
cutting the cost of funds. Internal rating systems are also very relevant for the
Islamic modes of finance. Most Islamic banks already use some form of internal
ratings. However, these systems need to be strengthened in all Islamic banks.
7.4 Risk Disclosures
Disclosures about risk management systems are extremely important for
strengthening the systems. Introducing a number of risk-based systems as given
here can enhance risk disclosures.
a. Risk-based Management Information System
b. Risk-based Internal Audit Systems
c. Risk-based Accounting Systems and
d. Risk-based Asset Inventory System
7.5 Supporting Institutions and Facilities
The risks existing in the Islamic financial industry can be reduced to a
great extent by establishing a number of Sharī‘ah compatible supporting
institutions and facilities such as:
a. Lender of last resort facility,
b. Deposit protection system,
c. Liquidity management system,
d. Legal reforms to facilitate Islamic banking and dispute settlement etc.,
e. Uniform Sharī‘ah standards,
f. Adoption of AAOIFI standards, and
g. Establishing a supervisory board for the industry.
7.6 Participation in the Process of Developing the International Standards
The Islamic financial industry, being a part of the global financial
markets, is affected by the international standards. In fact, compliance with these
standards wherever relevant and feasible is expected to enhance the endorsement
of the Islamic financial institutions by the international standard setters. This in
turn is expected to enhance the growth and stability of the industry. It is thus
imperative for the Islamic financial institutions to follow up the process of
standard setting and to respond to the consultative documents distributed in this
regard by the standard setters on a regular basis.
7.7 Research and Training
Risk management systems strengthen financial institutions. Therefore,
risk management needs to be assigned as a priority area of research and training
programs. Given the nascent nature of the Islamic financial industry there is a
need to develop Sharī‘ah compatible risk management techniques and organize
training programs to disseminate these among the Islamic banks. In the present
research we have made an attempt to cover a number of issues. These and other
issues can constitute an agenda for future research and training in the area. The
training programs need to be designed for Sharī‘ah supervisors, regulators and
managers of the Islamic financial institutions.
APPENDIX 1: LIST OF FINANCIAL
INSTITUTIONS INCLUDED IN THE STUDY
Name Country Type
ABC Islamic Bank Bahrain Offshore
Abu Dhabi Islamic Bank U.A.E. Commercial, Investment,
Retail, and Foreign
exchange dealers
Al-Ameen Islamic Financial & India Non-Banking Finance
Investment Corporation Company
AlBaraka Bank Bangladesh Limited Bangladesh Commercial
AlBaraka Islamic Bank Bahrain Commercial, Offshore, and
Investment
Al-Meezan Investment Bank Pakistan Investment
Limited
Badr Forte Bank Russia Commercial
Bahrain Islamic Bank Bahrain Commercial
Bank Islam Malaysia Berhad Malaysia Commercial
Citi Islamic Investment Bank Bahrain Investment
First Islamic Investment Bank Bahrain Investment
Investors Bank Bahrain Investment
Islami Bank Bangladesh Limited Bangladesh Commercial
Islamic Development Bank Saudi Arabia Development
Kuwait Turkish Evkaf Finans House Turkey Commercial, Investment,
and Foreign Exchange
dealers
Shamil Bank of Bahrain E.C. Bahrain Commercial and Offshore
Tadamon Islamic Bank Sudan Commercial, Investment,
and Foreign exchange
dealers
Note: We could not include two banks in the study for the reasons mentioned here. Faisal
Islamic Bank Egypt, because the questionnaire was received at a very late stage of
the study; Al-Barakah Turkish Finance House, because of data gaps.
APPENDIX 2:
SAMPLES OF RISK REPORTS
We outline some sample risk reports used by financial institutions here.67 While
the actual reports can be complicated, the examples given here represent the basic format
of these reports.
1. Bank Level Credit Quality
In this report, the bank ranks all its receivables (different types of consumer and
commercial loans, leases, etc.) and contingent claims (like unused commitments,
stand-by letters of credit, commercial letters of credit, swaps, etc.) according to an internal risk
rating criterion. The averages of these risk ratings give an indication of the quality of
loans and overall portfolio. Note that internal risk ratings for various institutions are
based on different scales. The example below uses a five-point scale.
A Sample of Bank Level Credit Quality Report
Risk Rating
Receivables/Commitments 1 2 3 4 5 Total
Receivables
-Consumer Loans
-Commercial Loans
-Leases
.
.
Contingent Exposures
- Unused Commitments
- Standby Letters of Credit
- Swaps
.
.
Total
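A sketch of how the summary statistic behind such a report, the exposure-weighted average rating, can be computed; the book below is hypothetical, on the five-point scale of the sample:

```python
# Exposure-weighted average internal rating on the five-point scale used
# in the sample report. All exposure amounts are hypothetical.
exposures = {  # category -> {internal rating: outstanding amount}
    "Consumer Loans":   {1: 40, 2: 80, 3: 30},
    "Commercial Loans": {2: 60, 3: 90, 4: 20},
    "Leases":           {1: 10, 3: 25, 5: 5},
}

def weighted_average_rating(book):
    """Average rating across the book, weighted by outstanding amounts."""
    total = sum(sum(b.values()) for b in book.values())
    weighted = sum(r * amt for b in book.values() for r, amt in b.items())
    return weighted / total

print(f"portfolio average rating: {weighted_average_rating(exposures):.2f}")
```

The same calculation per row and per column fills in the "Total" cells of the report above.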
2. Credit Risk Exposure by Industry Sectors
In this report, all assets are categorized according to different industries and the
exposure to these industries is examined by taking some external rating agency’s
ranking of these industries (like Moody’s). This report not only gives an idea of the
concentration of the investments and commitments in different industries, but also
identifies the risks involved in these categories.
A Sample of Credit Exposure by Industry Groups Report
Industry Group Outstanding Commitments
% of Term to Risk % of Term to Risk
Total Maturity Rating Total Maturity Rating
Automobile
Banking
Beverage and Food
67. The basic formats of the risk reports are adapted from Santomero (1997).
3. Net Interest Margin Simulation
In this report the effect of changes in the interest rate on the net income is
summarized. The effect of interest changes on net interest income (i.e., interest income
on assets-interest payable on liabilities) is estimated for different scenarios. A limit that
represents the maximum acceptable change in net income is also indicated.
A Sample of Interest Rate Simulation Report
Rate Scenario
Unchanged +100 +200 Limit -200 Limit
bps bps bps
12 Month Net Interest
Income
Total Earning Assets
-Change in Net Interest
Income
-%Net Interest Income
Portfolio Equity
Market Value
Change in Market Value
% Shareholder’s Equity
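The core arithmetic of the simulation, the change in net interest income for a parallel rate shift checked against a limit, can be sketched as follows (the balance sheet figures and limit are hypothetical):

```python
# Net interest margin simulation: change in 12-month net interest income
# under parallel rate shifts. All balance sheet figures are hypothetical.
RATE_SENSITIVE_ASSETS = 800.0       # assets repriced within 12 months
RATE_SENSITIVE_LIABILITIES = 600.0  # liabilities repriced within 12 months
LIMIT = 3.0                         # max acceptable change in net income

gap = RATE_SENSITIVE_ASSETS - RATE_SENSITIVE_LIABILITIES

for shift_bps in (-200, -100, 0, 100, 200):
    delta_nii = gap * shift_bps / 10_000   # GAP times the rate change
    flag = "BREACH" if abs(delta_nii) > LIMIT else "ok"
    print(f"{shift_bps:+5d} bps: change in net interest income "
          f"{delta_nii:+.2f} ({flag})")
```

Each scenario column of the sample report corresponds to one iteration of this loop, with the limit column flagging unacceptable outcomes.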
4. GAP Report
The GAP report estimates interest rate risk by distributing assets and liabilities
in time bands according to their maturity for fixed rate assets and first possible repricing
time for flexible rate assets. For each time bucket the GAP is calculated as the difference
between assets and liabilities. If the financial institution uses interest rate swaps, these
are factored in to find the Adjusted GAP.
A Sample of GAP Report
0-3 >3-6 >6-12 >1-5 Non- Total
Months Months Months Years Market
Assets
-Commercial
Loans
-Consumer Loans
-Lease financing
.
.
Total Assets
Liabilities
-Interest-bearing
Checking
Deposits
-Savings Deposits
-Savings
Certificates
.
.
Total Liabilities
GAP before Interest-rate Derivatives
Interest-rate
Swaps
Adjusted GAP
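The per-bucket calculation behind this report can be sketched as follows (all figures are hypothetical):

```python
# Maturity-bucket GAP report: GAP = assets - liabilities per time band,
# then adjusted for interest-rate swap positions. Figures are hypothetical.
buckets = ["0-3m", ">3-6m", ">6-12m", ">1-5y"]
assets      = {"0-3m": 300, ">3-6m": 150, ">6-12m": 200, ">1-5y": 350}
liabilities = {"0-3m": 400, ">3-6m": 120, ">6-12m": 180, ">1-5y": 200}
swaps       = {"0-3m": +50, ">3-6m":   0, ">6-12m": -20, ">1-5y":   0}

for b in buckets:
    gap = assets[b] - liabilities[b]        # repricing mismatch in the band
    adjusted = gap + swaps[b]               # after derivative positions
    print(f"{b:>7}: GAP {gap:+d}, adjusted GAP {adjusted:+d}")
```

A positive adjusted GAP in a band means net income in that band rises with rates; a negative one means it falls.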
5. Duration Analysis
Duration analysis compares the changes in the market values of assets and
liabilities resulting from changes in the interest rate. The formula for calculating duration is given in
Chapter 2 (Section 6). It is the time-weighted measure of cash flows representing the
average time needed to recover the invested funds. After the duration of assets and
liabilities are estimated the Duration-GAP can be calculated.
Sample of Duration Analysis Report
On-Balance Sheet
Balance Rate Effective
Duration Years
Assets
Variable Rate Assets
.
Fixed Rate Assets
.
Total Assets
Liabilities
Variable Rate Liabilities
.
Fixed Rate Liabilities
.
.
Total Liabilities
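The duration measure referred to above can be sketched as follows; the cash flows, yield, liability duration and leverage are all hypothetical:

```python
# Macaulay duration: time-weighted present value of cash flows divided by
# total present value. All inputs below are hypothetical.

def macaulay_duration(cash_flows, annual_yield):
    """cash_flows: list of (time_in_years, amount) pairs."""
    pv_total = sum(cf / (1 + annual_yield) ** t for t, cf in cash_flows)
    pv_weighted = sum(t * cf / (1 + annual_yield) ** t for t, cf in cash_flows)
    return pv_weighted / pv_total

# A 3-year instrument paying 5 per year plus 100 at maturity, at a 5% yield.
bond = [(1, 5.0), (2, 5.0), (3, 105.0)]
d_assets = macaulay_duration(bond, 0.05)
d_liabilities = 1.2                       # assumed duration of liabilities
leverage = 0.9                            # assumed liabilities / assets
duration_gap = d_assets - leverage * d_liabilities
print(f"asset duration {d_assets:.2f} years, duration GAP {duration_gap:.2f}")
```

Once the durations of assets and liabilities are estimated in this way, the Duration-GAP in the report follows from the same subtraction.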
6. Operational Risk Report
Operational risk may arise from different sources and is difficult to measure.
The risk management unit of a financial institution can, however, use judgements
regarding different kinds of operational risks based on all the available information. The
list of sources used to gather information to measure operational risk is given in Table
below. From this information, the risk management unit can classify the different
sources of operational risks as low, medium, and high. A sample of a typical operational
and strategic risk report is shown below.68
Sample of Operating Risk Report
Category Risk Profilea
1. People Risk
- Incompetence
- Fraud
2. Process Risk
1st. Model Risk
-Model/methodology error
2nd. Transaction Error
68. These reports are adapted from Crouhy et al. (2001, Chapter 13).
- Execution Error
- Booking Error
- Settlement Error
- Documentation/Contract Risk
3rd. Operational Control Risk
- Exceeding Limits
- Security Risk
- Volume Risk
3. Technology Risk
- System Failure
- Programming Error
- Telecommunication Failure
Total Operational Risk
Strategic Risk
- Political Risk
- Taxation Risk
- Regulatory Risk
Total Strategic Risk
a- Risk Profile can be Low, Medium, and High.
Sources of Information used as input to Measure Operational Risk
Assessing Likelihood of Occurrence Assessing Severity
- Audit Reports - Management interviews
- Regulatory Reports - Loss history
- Management Reports
- Business plans
- Budget plans
- Operational plans
- Business Recovery plan
- Expert Opinion
APPENDIX – 3
QUESTIONNAIRE
ISLAMIC DEVELOPMENT BANK
ISLAMIC RESEARCH & TRAINING INSTITUTE
PROJECT ON “A SURVEY OF RISK MANAGEMENT ISSUES IN ISLAMIC
FINANCIAL INDUSTRY”
Questionnaire for Islamic Banking and Financial Institutions
I. GENERAL
1. Name and location of the Bank:__________________________
2. Year of Establishment: _________________________________
3. Respondent’s Name: __________________
Position:_____________
4. Number of Branches:
5. Number of Employees__________
6. Legal Status of the Bank:
Public limited Company ________Private limited company ___________
Partnership ____________ Other (please specify) __________
7. How many shareholders do you have at present? ____________
8. What is the largest percentage share of a single share-holder? ____________
9. Name of the Chief Executive: ________________________________
10. Names of the Sharī‘ah Board:
11. Nature of Activities: (Please mark the appropriate boxes with a check mark)
Commercial Banking [ ]    Investment Banking [ ]
Offshore Banking [ ]    Foreign Exchange dealers [ ]
Investment (including funds) [ ]    Stock Brokers [ ]
Insurance [ ]    Others (please specify) [ ]
II. FINANCIAL INFORMATION
1. Most Recent Basic Balance Sheet Figures: Year ______________
Local Currency US Dollars
Total Assets
Total Liabilities
Equity (Capital)
2. Term Structure of the Assets (% Distribution)
Less than 6 Months % 6-12 months %
12-36 months % More than 36 Months %
3. Profit-Sharing ratio between the bank and the depositors
Depositors Share on:
Investment Accounts of Savings Accounts of
a) Less than 6 months ________ _______
b) 6-12 months ________ _______
c) 12-24 months ________ _______
d) More than 24 months ________ _______
4. Geographical Distribution of Investment
Year  Domestic  Total Foreign  US, Canada & Europe  Japan  Middle-East  Asia & Africa  Other
1996
1997
1998
1999
2000
5. Details of Default
Year | Total No. of Default Cases | Total Value of Default | No. of Litigation Cases | Cost of Litigation | Average Time Lost in Litigation
1996
1997
1998
1999
2000
6. Comparative Rates of Return (in percentage) for Depositors and Equity
Holders
1996 1997 1998 1999 2000
Your Bank’s Equity Holders (dividend)
Deposits of your bank (average)
Deposits of competing Islamic Banks
(average)
Deposits of Competing Commercial
Banks (average)
NOTE: Information on questions 6, 7, and 8 can be skipped if you can provide us with the Annual Reports of the latest two years (1999 and 2000).
7. Structure of the Assets (US$ Millions)
Year | Total | Reserves & Cash in Vaults | Debts due | Deposits with other banks | Securities | Physical Assets | Others
1996
1997
1998
1999
2000
8. Capital and Structure of the Liabilities (US$ Millions)
Year | Total Capital | Total Liabilities | Total Deposits | Investment Accounts | Saving Accounts | Current Accounts | Deposits of Other Banks
1996
1997
1998
1999
2000
9. Modes of Finance:
Year | Total Financial Operations | Murābaḥah/Install. Sale | Mushārakah | Muḍārabah | Leasing | Istiṣnā‘/Salam | Others
1996
1997
1998
1999
2000
III. ISSUES IN RISK MANAGEMENT: A SURVEY
1. Severity of Various Types of Risks in Different Financial Instruments
(Can you please rank the seriousness of the following overall and instrument-specific risks in the tables below. Please mark the appropriate boxes with ✓)
Credit Risk
(The risk that counterparty will fail to meet its obligations timely and fully in accordance with agreed terms)
Scale: 1 (Not Serious) to 5 (Critically Serious); N.A.
1. Overall Risk
2. Murābaḥah
3. Muḍārabah
4. Mushārakah
5. Ijārah
6. Istiṣnā‘
7. Salam
8. Diminishing Mushārakah
Markup (Benchmark) Rate Risk
(The risk arising from changes in the level of market interest rate or benchmark rate)
Scale: 1 (Not Serious) to 5 (Critically Serious); N.A.
1. Overall Risk
2. Murābaḥah
3. Muḍārabah
4. Mushārakah
5. Ijārah
6. Istiṣnā‘
7. Salam
8. Diminishing Mushārakah
Liquidity Risk
(The risk of insufficient liquidity in meeting normal operating requirements and taking growth opportunities)
Scale: 1 (Not Serious) to 5 (Critically Serious); N.A.
1. Overall Risk
2. Murābaḥah/Bai-Muajjal
3. Muḍārabah
4. Mushārakah
5. Ijārah
6. Istiṣnā‘
7. Salam
8. Diminishing Mushārakah
Market Risk
(Risk incurred on instruments traded in well-traded markets, e.g., commodities and equities)
Scale: 1 (Not Serious) to 5 (Critically Serious); N.A.
1. Overall Risk
2. Murābaḥah
3. Muḍārabah
4. Mushārakah
5. Ijārah
6. Istiṣnā‘
7. Salam
8. Diminishing Mushārakah
Operational Risk
(The risk of losses from inadequate or failed internal processes, people, or systems)
Scale: 1 (Not Serious) to 5 (Critically Serious); N.A.
1. Overall Risk
2. Murābaḥah
3. Muḍārabah
4. Mushārakah
5. Ijārah
6. Istiṣnā‘
7. Salam
8. Diminishing Mushārakah
Please list below any other risks that you think affect your institution:
Other Risks (Scale: 1 (Not Serious) to 5 (Critically Serious); N.A.)
1.
2.
3.
4.
5.
2. A Survey of Issues related to Islamic Banking
(Please mark the appropriate boxes with ✓)
How would you rate the following issues according to their seriousness?
Scale: 1 (Not Serious) to 5 (Critically Serious); N.A.
1. A low rate of return on deposits will lead to withdrawal of funds?
2. Depositors would hold the bank responsible for a lower rate of return on deposits?
3. The rate of return on deposits has to be similar to that offered by other banks.
4. Lack of short-term Islamic financial assets that can be sold in secondary markets.
5. Lack of Islamic money markets to borrow funds in case of need.
6. Inability to re-price fixed return assets (like Murābaḥah) when the benchmark rate changes.
7. Inability to use derivatives for hedging.
8. Lack of legal system to deal with defaulters.
9. Lack of regulatory framework for Islamic Banks.
10. Lack of understanding of risks involved in Islamic modes of financing.
3. Please Rank the top ten (10) risks faced by your organization in order of severity
1. _______________________________ 6. _______________________________
2. _______________________________ 7. _______________________________
3. _______________________________ 8. _______________________________
4. _______________________________ 9. _______________________________
5. _______________________________ 10. ______________________________
ISSUES IN RISK MANAGEMENT: GENERAL
Please mark the appropriate boxes with ✓.
YES NO
1. Do you have a formal system of Risk Management
in place in your organization?
2. Is there a section/committee responsible for
identifying, monitoring, and controlling various risks?
3. Does the bank have internal guidelines/rules and concrete
procedures with respect to the risk management system?
4. Does the bank have in place an internal control
system capable of swiftly dealing with newly
recognized risks arising from changes in environment,
etc.?
5. Does the bank have in place a regular reporting
system regarding risk management for senior officers
and management?
6. Is the Internal Auditor responsible to review and verify the risk management systems, guidelines, and risk reports?
7. Does the bank have countermeasures (contingency
plans) against disasters and accidents?
8. Does your organization consider that the risks of
investment depositors and current accounts shall not
mix?
9. Is your bank of the view that the Basel Committee
standards should be equally applicable to Islamic
banks?
10. Is your organization of the view that
supervisors/regulators are able to assess the true risks
inherent in Islamic banks?
11. Positions and profit/losses are assessed:
Every Business Day [ ]   Weekly [ ]   Monthly [ ]
12. Would you prefer repricing of leased assets?
Periodically (e.g., monthly) [ ]   Continuously (benchmark rate + markup) [ ]
13. Do you think that the capital requirements for Islamic banks as compared to conventional banks should be:
More [ ]   Same [ ]   Less [ ]
ISSUES IN RISK MANAGEMENT: MEASUREMENT AND MITIGATION
Please mark the appropriate boxes with ✓.
YES NO
1. Is there a computerized support system for estimating the
variability of earnings and risk management?
2. Is there a clear policy promoting asset quality?
3. Does the bank have in place a support system for assessing
borrowers’ credit standing quantitatively?
4. Has the bank adopted and utilized guidelines for a loan approval
system?
5. Are credit limits for individual counterparty set and are these
strictly monitored?
6. Are mark-up rates on loans set taking account of the loan
grading?
7. Does the bank have in place a system for managing problem
loans?
8. Does the bank regularly (e.g. weekly) compile a maturity ladder
chart according to settlement date and monitor cash position
gaps?
9. Does the bank regularly conduct simulation analysis and
measure bench-mark (interest) rate risk sensitivity?
10. Does the bank have backups of software and data files?
11. Does the bank use securitization to raise funds for specific
investments/projects?
12. When a new risk management product or scheme is introduced,
does the bank get clearance from the Sharī‘ah Board?
13. Is your bank actively engaged in research to develop Islamic
compatible Risk Management instruments and techniques?
14. Is there a separation of duties between those who generate risks
and those who manage and control risks?
15. Do you have a reserve that is used to increase the profit share (rate of return) of depositors in low-performing periods?
16. Does the bank produce the following reports at regular intervals?
YES NO
One) Capital at Risk Report
Two) Credit Risk Report
Three) Aggregate Market Risk Report
Four) Interest Rate Risk Report
Five) Liquidity Risk Report
Six) Foreign Exchange Risk Report
Seven) Commodities & Equities Position Risk Report
Eight) Operational Risk Report
Nine) Country Risk Report
Ten) Other Risk Reports (Please Specify)
17. Do you use any of the following procedures/methods to analyze risks?
YES NO
One) Credit Ratings of prospective investors
Two) Gap Analysis
Three) Duration Analysis
Four) Maturity Matching Analysis
Five) Earnings at Risk
Six) Value at Risk
Seven) Simulation techniques
Eight) Estimates of Worst Case scenarios
Nine) Risk Adjusted Rate of Return on Capital
(RAROC)
Ten) Internal Rating System
Eleven) Other (Please Specify)
YES NO
18. Does the bank have a policy of diversifying investments across:
(a) Different countries?
(b) Different sectors (like manufacturing, trading etc.)
(c) Different Industries (like airlines, retail, etc.)
19. Do the accounting standards used by the bank comply with
a) International standards?
b) AAOIFI standards?
c) Other (please specify)
___________________________________________
Regularly Occasionally Never
20. Does the bank periodically
reappraise collateral (asset)?
21. Does the bank confirm a guarantor’s
intention to guarantee loans with a
signed document?
22. If loans are international, does the
bank regularly review country
ratings?
23. To keep the rate of return in line
with other banks, do you transfer
profit from shareholders to
depositors?
24. Does the bank monitor the
borrower’s business performance
after loan extension?
25. Does the bank engage in the spot market and any of the following derivatives for
hedging (risk management) purposes? (Please mark the appropriate boxes with ✓)
Spot Forwards Futures Options Swaps None
Currency
Commodity
Equity
Interest Rate
26. Does the bank engage in the spot market and any of the following derivatives for
income generation? (Please mark the appropriate boxes with ✓)
Spot Forwards Futures Options Swaps None
Currency
Commodity
Equity
Interest Rate
27. We will appreciate if you could share with us any Islamic compatible Risk
Management instruments and techniques that your institution uses.
BIBLIOGRAPHY
AAOIFI (1999a), “Statement on the Purpose and Calculation of the Capital Adequacy
Ratio for Islamic Banks”, Bahrain: Accounting and Auditing Organization for
Islamic Financial Institutions
AAOIFI-Accounting and Auditing Organization for the Islamic Financial Institutions
(1999), Accounting, Auditing and Governance Standards for Islamic Financial
Institutions, Bahrain: Accounting and Auditing Organization for Islamic Financial
Institutions
Ahmad, Ausaf (1993), Contemporary Practices of Islamic Financing Techniques,
Research Paper No. 20, Islamic Research and Training Institute, Islamic
Development Bank, Jeddah.
Ahmed, Habib (2002), "Incentive Compatible Profit-Sharing Contracts: A Theoretical
Treatment" in Iqbal, Munawar and Llewellyn, David T (editors), Islamic Banking
and Finance: New Perspectives in Profit Sharing and Risk, Edward Elgar
Publishers, (forthcoming).
Al-Gari, M. Ali (2001), “Credit Risks in Islamic Banks' Finance”, IRTI-HIBFS Seminar
on Regulation and Supervision of Islamic Banks: Current Status and Prospective
Development held in Khartoum April 2001.
Al-Jarhi, Mabid Ali and Iqbal, Munawar (forthcoming), Islamic Banking: FAQs,
Jeddah: IRTI Occasional Paper # 4
Al-Jarhi, Mabid Ali, (2001), “Enhancing Corporate Governance in Islamic Financial
Institutions”, paper presented to the IRTI-AAOIFI Conference on Transparency,
Governance and Risk Management in Islamic Financial Institutions, held in Beirut
Lebanon, March 2001.
Al-Omar, Fouad (2000), “Supervision, Regulation and Adaptation of Islamic Banks to
the Best Standards: The Way Forward”, paper presented to the Conference on
Islamic Banking Supervision, Bahrain: AAOIFI February 18-19, 2000.
Al-Sabah, Sheikh Salem Abdul Aziz (1999), “Regulatory Framework of Islamic
Banking in the State of Kuwait”, keynote address delivered at the CBK-IMF
Seminar on Design and Regulation of Islamic Financial Instruments, Kuwait,
October 25-26, 1997.
Al-Sadah, Anwar Khalifa (2000), “Liquidity Management in Islamic Banks”, paper
presented to the conference on Islamic Banking Supervision, Bahrain, AAOIFI,
February 2000.
Arnold, Glen (1998), Corporate Financial Management, London: FT
Asian Development Bank (2001), “Comments on the January 21st Draft of the New
Basel Accord”,
Bahrain Monetary Agency (2001), “A Feasibility Study for a Liquidity Management
Center”, Manama Ernest & Young Consultants.
Bank Negara Malaysia (2000), “Supervision and Examination of Islamic Banking
Operations in Malaysia”, paper prepared for a Seminar on Islamic Financial Industry
to be held in Alexandria, Egypt during November 2000.
Bank Negara Malaysia (2001), “Comments on the New Capital Accord Consultative
Paper 2001”,
BCBS (1988), International Convergence of Capital Measurement and Capital
Standards (Basel: Basel Committee on Banking Supervision).
BCBS (1997), Core Principles for Effective Banking Supervision, (Basel: Basel
Committee on Banking Supervision).
BCBS (1998), Framework for Internal Control Systems in Banking Organizations
(Basel: Basel Committee on Banking Supervision).
BCBS (1998), Operational Risk Management, Basel Committee on Banking
Supervision, September 1998.
BCBS (1999), Principles for the Management of Credit Risk, Consultative Paper, Basel
Committee on Banking Supervision, July 1999.
BCBS (1999a), Performance of Models-Based Capital Charges for Market Risk (Basel:
Basel Committee on Banking Supervision).
BCBS (1999b), Credit Risk Disclosure (Basel: Basel Committee on Banking
Supervision).
BCBS (1999c), Principles for the Management of Credit Risk, (Basel: Basel Committee
on Banking Supervision).
BCBS (1999d), Credit Risk Modeling: Current Practices and Applications (Basel:
Basel Committee on Banking Supervision).
BCBS (2000), Sound Practices for Managing Liquidity in Banking Organizations
(Basel: Basel Committee on Banking Supervision).
BCBS (2001), The New Capital Accord, (Basel: Basel Committee on Banking
Supervision).
BCBS (2001a), Operational Risk, (Basel: Basel Committee on Banking Supervision).
BCBS (2001b), Principles for the Management and Supervision of Interest Rate Risk,
(Basel: Basel Committee on Banking Supervision).
BCBS (2001c), Slowdown of the global OTC derivatives market in the second half of
2000, Press Release, Bank of International Settlements, 16 May, 2001.
Beckers, Stan (1999), “A Survey of Risk Management Theory and Practice”, in Carol
Alexander (Editor), Risk Management and Analysis, Volume 1: Measurement and
Modeling Financial Risk, John Wiley and Sons, West Sussex.
Bonte, Rudi (1999), Supervisory Lessons to be Drawn From the Asian Crisis (Basel:
Bank for International Settlement).
Caouette, John B., Edward I. Altman, and Paul Narayan (1998), Managing Credit Risk:
The Next Great Financial Challenge, John Wiley and Sons, Inc., New York.
Chapra, M. Umer and Khan, Tariqullah (2000), Regulation and Supervision of Islamic
Banks, Jeddah: IRTI
Chapra, M. Umer, “Strengthening Islamic Banks” Paper presented to the Seminar on
IRTI-HIBFS Seminar on Regulation and Supervision of Islamic Banks: Current
Status and Prospective Development, held in Khartoum April 2001
Charles, F. (2000), “Core Principles of Effective Banking Supervision”, presented to the
Conference on Islamic Banking Supervision, Bahrain: AAOIFI.
Cordewener, Karl (2000), “Risk-Weights, On-and Off-balance Sheet” (Lecture
delivered at the 12th International Banking Supervisory Seminar held in the BIS
during April 28 – May 6).
Council of Islamic Ideology, Pakistan (1981), The Elimination of Interest from the
Economy of Pakistan (Islamabad: Council of Islamic Ideology).
Crouhy, Michel, Dan Galai, and Robert Mark (2001), Risk Management, McGraw Hill,
New York.
Cumming, Christine and Beverly J. Hirtle (2001), “The Challenges of Risk
Management in Diversified Financial Companies”, Economic Policy Review,
Federal Reserve Bank of New York, 7, 1-17.
Cunningham, Andrew (1999), “Moody’s Special Comment Analyzing the
Creditworthiness of Islamic Financial Institutions”, Moody, Booklet
Dale, Richard (1996), Risk & Regulation in Global Securities Markets (New York: John
Wiley & Sons).
Dale, Richard (2000), “Comparative Models of Banking Supervision”, paper presented
to the Conference on Islamic Banking Supervision, Bahrain: AAOIFI, February.
Das, Satyajit (Editor) (2000), Credit Derivatives and Credit Linked Notes, 2nd Edition,
John Wiley and Sons, Singapore.
Demirguc, Asli and Enrica Detragiache (2000), “Does Deposit Insurance Increase
Banking System Stability?” Washington DC, International Monetary Fund, Working
Paper.
El-Mekkawy, Zahra (2000), “Internal Ratings – Credit Risk Models”, Lecture delivered
at the 12th International Banking Supervisory Seminar held in the BIS during April
28 – May 6, 2000.
Errico, Luca and Farahbaksh, Mitra (1998), “Issues in Prudential Regulations and
Supervision of Islamic Banks”, Washington D.C., the IMF, WP/98/30.
European Commission (1999), “A Review of Regulatory Capital Requirements for EU
Credit Institutions and Investment Firms – Consultation Document”, # MARKT/
1123/99-EN See, EC Website
Financial Stability Institute of the BIS (2000), Proceedings of the 12th International
Banking Supervisory Seminar held in the BIS during April 28 – May 6.
Garcia, Gillian (1999), “Deposit Insurance: A Survey of Actual and Best Practices in
1998”, IMF Working Paper 99/54 (Washington: IMF).
Gleason, James T. (2000), Risk: The New Management Imperative in Finance,
Bloomberg Press, Princeton, New Jersey.
Gunay, Rifat (2000), “Institutional Framework for Special Finance Houses Operations
in Turkey”, paper prepared for a Seminar on Islamic Financial Industry to be held in
Alexandria, Egypt during November.
Hassan, Kabir (2001), “Identification and Management of Market Risks for Islamic
Bank” IRTI-HIBFS Seminar on Regulation and Supervision of Islamic Banks:
Current Status and Prospective Development held in Khartoum April 2001.
Heffernan, Shelagh (1996), Modern Banking in Theory and Practice, John Wiley and
Sons, Chichester.
Hull, John C. (1995), Introduction to Futures and Options Markets, Prentice Hall, Inc.,
Upper Saddle River, New Jersey.
Hussain, Tan Sri Abdul Rashid (2000), “Resource Mobilization from Capital Markets
for Financing Development in the IDB Member Countries”, Jeddah: Eleventh IDB
Annual Symposium.
Idris, Rustam Mohammad (2000), “Experience of Central Banks in Supervising Islamic
Banks – the Experience of Malaysia”, paper prepared for a Seminar on Islamic
Financial Industry to be held in Alexandria, Egypt during November.
Institute of Islamic Banking and Insurance (1995), Encyclopedia of Islamic Banking and
Insurance (London: Institute of Islamic Banking and Insurance).
International Association of Islamic Banks (1997), Directory of Islamic Banks and
Financial Institutions (Jeddah: International Association of Islamic Banks).
Iqbal, Munawar (2000), “Islamic and Conventional Banking in the Nineties: A
Comparative Study”, paper presented at the Fourth International Conference on
Islamic Economics and Banking, Loughborough University, U.K., August 13-15,
2000.
Iqbal, Munawar, Ausaf Ahmad, and Tariqullah Khan (1998), Challenges facing Islamic
Banking, Occasional Paper No. 1, Islamic Research and Training Institute, Islamic
Development Bank, Jeddah.
Iqbal, Zamir (2000), “Risk and Risk Management in Islamic Finance”, paper presented
to the Conference on Islamic Financial Services Industry in the 21st Century,
Alexandria University, October 2000.
Iqbal, Zamir (2001), “Financial engineering in Islamic finance” mimeograph
Iqbal, Zamir (2001), “Scope of Off-balance sheet transactions in Islamic finance”
mimeograph
Iqbal, Zubair and Mirakhor, Abbas, Islamic Banking, IMF Occasional Paper No. 49
(Washington: International Monetary Fund).
IRTI-OICFA (2000) Resolutions and Recommendations of the Council of Fiqh Academy
of the Organization of Islamic Conference, Jeddah: IRTI and OIC Fiqh Academy.
ISDA (2000), A New Capital Adequacy Framework, Comments on a Consultative
Paper Issued by the Basel Committee on Banking and Supervision in June 1999
(London: International Swaps and Derivatives Association).
Jackson, Patricia (1999), Capital Requirements and Bank Behavior: The Impact of the
Basel Accord (Basel: Bank for International Settlements).
Jones, D. (2000), “Emerging Problems with the Accord Regulatory Capital Arbitrage
and Related Issues”, Journal of Banking and Finance, Vol.24, pp.35-58.
Jorion, Philippe (2001), Value at Risk: The New Benchmark for Managing Financial
Risk, McGraw Hill, New York.
Jorion, Philippe and Sarkis J. Khoury (1996), Financial Risk Management: Domestic and
International Dimensions, Blackwell Publishers, Cambridge, Massachusetts.
Kahf, Monzer (1996) “Distribution of Profits in Islamic Banks” Studies in Islamic
Economics, Vol. 4, No.1.
Kahf, Monzer and Tariqullah Khan (1992), Principles of Islamic Financing, Research
Paper No. 16, Islamic Research and Training Institute, Islamic Development Bank,
Jeddah.
Khan, M. Fahim (1991), Comparative Economics of Some Islamic Financing
Techniques, Research Paper No. 12, Islamic Research and Training Institute, Islamic
Development Bank, Jeddah.
Khan, M. Fahim (1995), Essays in Islamic Economics, Leicester: The Islamic Foundation.
Khan, M. Fahim (1995), Islamic Futures and their Markets with special reference to
developing rural financial markets, Jeddah: IRTI.
Khan, Mansour ur Rahman (2000), “Experience of Central Banks in Supervising Islamic
Banks – Pakistan’s Experience”, paper prepared for a Seminar on Islamic Financial
Industry to be held in Alexandria, Egypt during November.
Khan, Mohsin S. (1986), "Islamic Interest-Free Banking", IMF Staff Papers, pp. 1-27.
Kolb, Robert W. (1997), Futures Options and Swaps, Blackwell Publishers, Malden
MA.
Llewellyn, David T. (2000), The Economic Rationale for Financial Regulation
(London: Financial Services Authority, Occasional Paper Series – 1).
Llewellyn, David T. (2001), “A Regulatory Regime for Conventional and Islamic
Banks”, paper presented to IRTI-HIBFS Seminar on Regulation and Supervision of
Islamic Banks: Current Status and Prospective Development, held in Khartoum
April 2001.
Markowitz, H. (1959), Portfolio Selection: Efficient Diversification of Investment, John
Wiley and Sons, New York.
Marshall, Christopher, (2001), Measuring and Managing Operational Risks in
Financial Institutions, Singapore: John Wiley
Merton, Robert C. (1995) “Financial Innovation and the Management and Regulation of
Financial Institutions” Journal of Banking & Finance.
Mills, Paul S. and John R. Presley (1999), Islamic Finance: Theory and Practice
(London: Macmillan).
Mirakhor, Abbas and Mohsin S. Khan, eds., (1987), Theoretical Studies In Islamic
Banking And Finance (Texas: The Institute for Research and Islamic Studies).
Mishkin, Frederic S. (1995), The Economics of Money, Banking and Financial Markets
(New York, N.Y.: HarperCollins, 4th ed.).
Nakata, Yoshinori (2000), “Concepts, Role and Definition of Capital, Scope of
Application”, Lecture delivered at the 12th International Banking Supervisory
Seminar held in the BIS during April 28 – May 6.
Obaidullah, Mohammad (1998) “Financial Engineering with Islamic Options”, Islamic
Economic Studies, 6(1).
Obaidullah, Mohammad (1998), “Capital Adequacy Norms for Islamic Financial
Institutions”, Islamic Economic Studies, Vol. 5 Nos. 1&2
Prevost, Johanne, “The Core Principles for Effective Banking Supervision” (2000),
Lecture delivered at the 12th International Banking Supervisory Seminar held in the
BIS during April 28 – May 6, 2000.
Raskopf, Ronald (2000), “Market Risk – From the Standard Approach to Modelling
Techniques”, Lecture delivered at the 12th International Banking Supervisory
Seminar held in the BIS during April 28 – May 6.
Reilly, Fran K. and Keith C. Brown (1997), Investment Analysis and Portfolio
Management, Fifth Edition, The Dryden Press, Orlando.
Rose, Peter S. (1999), Commercial Bank Management, New York: McGraw-Hill.
Ross, S. (1976), “The Arbitrage Theory of Capital Asset Pricing”, Journal of Economic
Theory, 13, 341-60.
Sahajwala, Ranjana and Bergh, Paul Van den (2000), “Supervisory Risk Assessment
and Early Warning Systems”, Basel: BIS, BCBS Working Paper No. 4.
Santomero, Anthony M. (1997), “Commercial Bank Risk Management: An Analysis of
the Process”, Journal of Financial Services Research, 12, 83-115.
Scholtens, Bert and Dick van Wensveen (2000), "A Critique on the Theory of Financial
Intermediation," Journal of Banking and Finance, 24, 1243-51.
Sebton, Emmanuelle (2000), “ISDA Perspective on the New Accord”, Lecture delivered
at the 12th International Banking Supervisory Seminar held in the BIS during April
28 – May 6.
Sharpe, W. (1964), “Capital Asset Prices: A Theory of Market Equilibrium”, Journal of
Finance, (September), 425-42.
Siddiqi, M. N. (1983), Issues in Islamic Banking (Leicester: The Islamic Foundation).
Standard & Poor's (2001), "Features - Corporate Defaults: Will Things Get Worse Before They Get Better?", CreditWeek, January 31, 2001.
Sundararajan, V., David Marston, and Ghiath Shabsigh, (1998), “Monetary Operations
and Government Debt Management under Islamic Banking”, WP/98/144.
(Washington, DC: IMF, September).
Sundararajan, V., David Marston and Ritu Basu (2001), "Financial System Standards
and Financial Stability: The Case of Basel Core Principles”, IMF Working Paper
No. 01/62
Usmani, Muhammad Taqi, An Introduction to Islamic Finance (Karachi: Idaratul Ma‘arif, 1998).
Vogel, Frank E. and Samuel L Hayes, (1998), Islamic Law and Finance: Religion, Risk
and Return (The Hague: Kluwer Law International).
White, William R (2000), What Have We Learned From Recent Financial Crises And
Policy Responses? (Basel: Bank for International Settlement).
Yasseri, A (2000), “Modalities of Central Bank Supervision as Practiced in Iran”, paper
prepared for a Seminar on Islamic Financial Industry to be held in Alexandria, Egypt
during November.
Yoghourtdjian, Sarkis (2000), “The Process of Supervision: On, and Off-site
Supervision and Bank Rating Systems” Lecture delivered at the 12th International
Banking Supervisory Seminar held in the BIS during April 28 – May 6, 2000.
Yousuf, Shaheed Yousuf, (2001), “Liquidity Management Issues Pertaining to
Regulation of Islamic Banks”, IRTI-HIBFS Seminar on Regulation and Supervision
of Islamic Banks: Current Status and Prospective Development, held in Khartoum
April 2001.
Zarqa, M. Anas (1999), “Comments” on the paper of Sami Suweliem on “Gharar”
presented to the International Conference on Islamic Economics towards the 21st
Century, held in IIUM, KL Malaysia during August
Zarqa, M. Anas, and M. Ali El-Gari, (1991), “Al-Ta‘wīd ‘an Darar al Mumāmatālah fī
al-Dayn bayn al-Fiqh wa al-Iqtisād” Journal of King Abdul Aziz University: Islamic
Economics, p.25-57, See also the comments by M. Zaki al-Barr and Habib al-Kaf on
pp.61-4.
Zubair, Mohammad Ahmed (1999), “Global Financial Architecture: Policy Options for
Resolving Issues Relating to Regulatory Framework of Islamic Financial
Institutions”, Jeddah: IDB Working paper.
Zuberbuhler, Daniel (2000), “The Financial Industry in the 21st Century: Introduction”,
Proceedings of the 11th International Conference of Bank Supervisors (Basel: BIS,
September 2000).
skd - a lightweight socket daemon
=================================

skd is a small daemon which binds to a udp, tcp or unix-domain socket, waits for connections and runs a specified program to handle them. It is ideal as a secure, efficient replacement for traditional inetd as well as being an easy-to-use tool for non-privileged users wanting to run their own network services. Datagram and stream sockets are available in both the internet and unix namespaces, each with the expected inetd behaviour.

Building and installing
-----------------------

Unpack the source tar.gz file and change to the unpacked directory. Run 'make', then 'make install' to install in /bin. Alternatively, you can set DESTDIR and/or BINDIR to install in a different location, or strip and copy the compiled skd binary into the correct place manually.

skd was developed on GNU/Linux and FreeBSD, but should be reasonably portable. In particular, it is expected to compile on most modern unix platforms. Please report any problems or bugs to Chris Webb <chris@arachsys.com>.

Usage
-----

Usage: skd [OPTIONS] PROG [ARGS]...

Options:
  -i [INTERFACE:]PORT  bind a listening socket in the internet namespace
  -l PATH, -x PATH     bind a listening socket in the local unix namespace
  -s                   create a stream socket (default socket style)
  -d                   create a datagram socket instead of a stream socket
  -t [INTERFACE:]PORT  create a TCP socket: equivalent to -s -i
  -u [INTERFACE:]PORT  create a UDP socket: equivalent to -d -i
  -b BACKLOG           set the listen backlog for a stream socket
  -c MAXCLIENTS        set the maximum number of concurrent connections accepted by a stream socket (default is unlimited)
  -n                   set TCP_NODELAY to disable Nagle's algorithm for TCP stream connections
  -v                   report information about every connection accepted or initial datagram received to stderr or the log
  -B                   fork, establish new session id, redirect stdin and stdout to/from /dev/null if they are attached to a terminal, and run as a daemon in the background
  -L TAG[:FAC.PRI]     start a logger subprocess, redirecting stderr to the system log with tag TAG, facility FAC and priority PRI (defaulting to daemon.notice if unspecified)
  -L >LOGFILE          redirect stderr to create and write to LOGFILE
  -L >>LOGFILE         redirect stderr to append to LOGFILE
  -P PIDFILE           write pid to PIDFILE (after forking if used with -B)
  -U                   after binding the socket, drop privileges to those specified by $UID and $GID, and if $ROOT is set, chroot into that directory

When a stream socket is specified, listen on it and accept all incoming connections, executing the given program in a child process with stdin and stdout attached to the connection socket. Do not wait for the child to exit before accepting another connection on the listening socket.

When a datagram socket is specified, wait for an initial datagram to arrive before launching the given program with stdin and stdout attached to the listening socket. Until this program exits, don't attempt to check for more datagrams or spawn another child.

If none of -i, -l, -u is used, no socket is bound and the given program is executed immediately, after any background, logging, pidfile and privilege dropping actions have been completed. This allows use of these facilities for standalone and non-network services.

Examples
--------

A unix domain echo server running in the foreground, reporting connections to stderr:

  skd -vl /dev/cat.sock cat

An motd server running in the background with a pidfile /var/run/motd.pid, reporting connections to syslog with tag 'testsrv', facility 'daemon' and priority 'info':

  skd -vt 3000 -BP /var/run/motd.pid -L testsrv:daemon.info \
    cat /etc/motd

Uwe Ohse's uscheduled running in the background, logging errors from stderr to syslog:

  skd -BL uschedule:daemon.notice -- uscheduled -d /var/lib/uschedule

The last example demonstrates how skd can be useful as a daemontools replacement in a more standard unix environment. I use it to daemonise uschedule, dnscache and tinydns with logs sent to syslog.

Copying
-------

skd was written by Chris Webb <chris@arachsys.com> and is distributed as Free Software under the terms of the MIT license in COPYING.
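Because skd attaches the accepted connection to the handler's stdin and stdout, a handler needs no socket code at all and can be written in any language. As an illustrative sketch (the script name and port below are invented for this example, not taken from the README), a small Python line-reversing handler:

```python
#!/usr/bin/env python3
"""Minimal skd/inetd-style stream handler: stdin and stdout are the
accepted connection, so the handler contains no socket code at all."""
import sys


def handle_line(line: str) -> str:
    # Reverse the payload of each request line; purely illustrative logic.
    return line.rstrip("\n")[::-1] + "\n"


def serve(infile=sys.stdin, outfile=sys.stdout) -> None:
    # skd keeps the connection open until we exit, so loop until EOF.
    for line in infile:
        outfile.write(handle_line(line))
        outfile.flush()  # reply promptly instead of waiting on buffering


if __name__ == "__main__":
    serve()
```

Run in the foreground with something like `skd -vt 7000 -- ./reverse.py` and poke it with `nc localhost 7000`; under `-d`/`-u` the same program would instead read the initial datagram from the bound socket on stdin.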
[deal.II] Memory loss in system solver
Dear community I have written the simple code below for solving a system using PETSc, having defined Vector incremental_displacement; Vector accumulated_displacement; in the class LargeStrainMechanicalProblem_OneField. It turns out that this code produces a memory loss, quite significant
Re: [deal.II] KDTree implementation error
KDTree needs nanoflann to be available. Did you compile deal.II with nanoflann enabled? Check in the summary.log if DEAL_II_WITH_NANOFLANN is ON. RTree, on the other hand, does not require nanoflann, as it is included with boost (and it is faster than nanoflann). L. > On 24 Jul 2020, at
Re: [deal.II] KDTree implementation error
Dear Heena, here is a snippet to achieve what you want: #include namespace bgi = boost::geometry::index; … Point<2> p0; const auto tree = pack_rtree(tria.get_vertices()); for (const auto : tree | bgi::adaptors::queried(bgi::nearest(p0, 3))) // do something with p
Re: [deal.II] Cannot find local compiled petsc library
Yuesun, Apparently, CMake was able to find the file petscvariables, but not the include directories or the library. Can you search for "libpetsc.so" yourself? Our CMake find module tries to find this library in {PETSC_DIR}/lib or {PETSC_DIR}/lib64. See if you can adjust PETSC_DIR accordingly.
Re: [deal.II] Accessing nodal values of a FEM solution
On Thu, Jul 23, 2020 at 5:14 PM Daniel Arndt wrote: > > You can do similarly, > > Quadrature q(fe.get_unit_support_points()); > FEValues fe_values (..., q, update_q_points); > for (const auto& cell) > ... > points = fe_values.get_quadrature_points(); >
Re: [deal.II] KDTree implementation error
Dear Luca, Thank you very much. It now works in both ways. Thanks for advice. Regards, Heena On Fri, Jul 24, 2020 at 12:31 PM luca.heltai wrote: > KDTree needs nanoflann to be available. Did you compile deal.II with > nanoflann exnabled? Check in the summary.log if
[deal.II] Re: Memory loss in system solver
Dear community, if I am not mistaking my analysis, it turned out that the memory loss is caused by this call: BiCG.solve (this->system_matrix, distributed_incremental_displacement, this->system_rhs, preconditioner); because if I turn it off the top command shows no change in the RES at all.
Re: [deal.II] Cannot find local compiled petsc library
Dear Daniel, Thank you for the instruction! I gave the architecture directory, which is a sub-directory : /home/yjin6/petsc/arch-linux-c-debug. It returns message like this: ***
Re: [deal.II] Re: Memory loss in system solver
Alberto, Have you tried running valgrind (in parallel) on your code? Admittedly, I expect quite a bit of false-positives from the MPI library but it should still help. Best, Daniel Am Fr., 24. Juli 2020 um 12:07 Uhr schrieb Alberto Salvadori < alberto.salvad...@unibs.it>: > Dear community, > >
Re: [deal.II] Cannot find local compiled petsc library
Dear all, This problem has been solved. I copied the petscversion.h file to the arch/include folder therefore cmake found all petsc files and finished compilation. Best regards On Fri, Jul 24, 2020 at 3:17 PM yuesu jin wrote: > Dear Daniel, > Thank you for the instruction! I gave the
Re: [deal.II] Accessing nodal values of a FEM solution
On 7/23/20 12:07 PM, Xuefeng Li wrote: Well, the above function calculates the gradients of a finite element at the quadrature points of a cell, not at the nodal points of a cell. Such a need arises in the following situation. for ( x in vector_of_nodal_points ) v(x) = g(x, u(x), grad
Re: [deal.II] Location for Boundary Condition Application
On 7/23/20 11:47 AM, Daniel Arndt wrote: McKenzie, I'm interested in applying a non-homogeneous Dirichlet boundary condition to a specific edge. However, I'm unsure how to identify or specify a particular edge or face to add the boundary condition to. Could you help clear this
Re: [deal.II] Memory loss in system solver
On 7/24/20 3:32 AM, Alberto Salvadori wrote: It turns out that this code produces a memory loss, quite significant since I am solving my system thousands of times, eventually inducing the run to fail. I am not sure what is causing this issue and how to solve it, maybe more experienced users
Re: [deal.II] Re: KDTree implementation error
Dear Luca, I am using 9.2 version and the implementation I try to follow from your presentation at SISSA 2018 but it gives me error. Following are the lines I added to step-1. I want to implement K nearest neighbor. I will work on your suggestion. *#include * | https://www.mail-archive.com/search?l=dealii@googlegroups.com&q=date:20200724 | CC-MAIN-2020-45 | en | refinedweb |
Mobile web view
Wanted to show your awesome mobile app to others without the hassle of making them install it? Thanks to Flutter web (currently in beta), we have you covered.
Installation
Make sure that you are on the beta channel and that web support is enabled.
Add in your pubspec.yaml
dependencies:
  mobile_web_view: ^0.1.2
Then import it in your main
import 'package:mobile_web_view/mobile_web_view.dart';
Wrap MobileWebView around your initial route
home: MobileWebView(
  statusBarIconColor: Colors.white,
  content: Text("MobileWebView"),
  child: MyHomePage(title: 'Flutter Demo Home Page'),
),
Want to contribute?
Help is always welcome: check our CONTRIBUTING.md and don't forget to add yourself to CONTRIBUTORS.md.
isw_mobile_sdk 1.0.0-5.6
A new flutter plugin project.
isw_mobile_sdk
This library aids in processing payment through the following channels
- [x] Card
- [x] Verve Wallet
- [x] QR Code
- [X] USSD
Getting started

There are three steps you have to complete to set up the SDK and perform transactions:
- Install the SDK as a dependency
- Configure the SDK with Merchant Information
- Initiate payment with customer details
Installation
To install the sdk add the following to your dependencies map in pubspec.yaml
dependencies:
  #.... others

  # add the dependency for sdk
  isw_mobile_sdk: '<latest-version>'
Configuration
You would also need to configure the project with your merchant credentials.
import 'dart:async';

import 'package:isw_mobile_sdk/isw_mobile_sdk.dart';

class _MyAppState extends State<MyApp> {
  @override
  void initState() {
    super.initState();
    initSdk();
  }

  // Messages to the SDK are asynchronous, so we initialize in an async method.
  Future<void> initSdk() async {
    // Messages may fail, so we use a try/catch on PlatformException.
    try {
      String merchantId = "your-merchant-id",
          merchantCode = "your-merchant-code",
          merchantSecret = "your-merchant-secret",
          currencyCode = "currency-code"; // e.g. 566 for NGN

      var config = new IswSdkConfig(
          merchantId, merchantSecret, merchantCode, currencyCode);

      // initialize the sdk
      await IswMobileSdk.initialize(config);
      // initialize with environment, default is Environment.TEST
      // IswMobileSdk.initialize(config, Environment.SANDBOX);
    } on PlatformException {}
  }
}
Once the SDK has been initialized, you can then perform transactions.
Performing Transactions
You can perform a transaction, once the SDK is configured, by providing the payment info and payment callbacks, like so:
Future<void> pay(int amount) async {
  var customerId = "<customer-id>",
      customerName = "<customer-name>",
      customerEmail = "<customer.email@domain.com>",
      customerMobile = "<customer-phone>",
      // generate a unique random reference for each transaction
      reference = "<your-unique-ref>";

  // initialize amount: expressed in the lowest
  // denomination (e.g. kobo): "N500.00" -> 50000
  int amountInKobo = amount * 100;

  // create payment info
  var iswPaymentInfo = new IswPaymentInfo(customerId, customerName,
      customerEmail, customerMobile, reference, amountInKobo);

  // trigger payment
  var result = await IswMobileSdk.pay(iswPaymentInfo);

  // process result
  handleResult(result);
}
Handling Result
To handle the result, all you need to do is process it in the callback methods: whenever the user cancels, the value would be null and hasValue would be false. When the transaction is complete, hasValue would be true and value would hold an instance of IswPaymentResult, an object carrying the payment result fields.
void handleResult(Optional<IswPaymentResult> result) {
  if (result.hasValue) {
    // process result
    showPaymentSuccess(result.value);
  } else {
    showPaymentError();
  }
}
And that is it: you can start processing payments in your Flutter app.
Qiskit Aer - High performance simulators for Qiskit
Qiskit Aer
Qiskit is an open-source framework for working with noisy quantum computers at the level of pulses, circuits, and algorithms.
Qiskit is made up of elements that each work together to enable quantum computing. This element is Aer, which provides high-performance quantum computing simulators with realistic noise models.
Installation
We encourage installing Qiskit via the PIP tool (a python package manager), which installs all Qiskit elements, including this one.
pip install qiskit
PIP will handle all dependencies automatically for us and you will always install the latest (and well-tested) version.
To install from source, follow the instructions in the contribution guidelines.
Installing GPU support
In order to install and run the GPU supported simulators, you need CUDA® 10.1 or newer previously installed. CUDA® itself requires a set of specific GPU drivers. Please follow the CUDA® installation procedure on the NVIDIA® website.
If you want to install our GPU supported simulators, you have to install this other package:
pip install qiskit-aer-gpu
This will overwrite your current qiskit-aer package installation, giving you the same functionality found in the canonical qiskit-aer package, plus the ability to run the GPU supported simulators: statevector, density matrix, and unitary.
Simulating your first quantum program with Qiskit Aer
Now that you have Qiskit Aer installed, you can start simulating quantum circuits with noise. Here is a basic example:
$ python
import qiskit
from qiskit import IBMQ
from qiskit.providers.aer import QasmSimulator

# Generate 3-qubit GHZ state
circ = qiskit.QuantumCircuit(3, 3)
circ.h(0)
circ.cx(0, 1)
circ.cx(1, 2)
circ.measure([0, 1, 2], [0, 1, 2])

# Construct an ideal simulator
sim = QasmSimulator()

# Perform an ideal simulation
result_ideal = qiskit.execute(circ, sim).result()
counts_ideal = result_ideal.get_counts(0)
print('Counts(ideal):', counts_ideal)
# Counts(ideal): {'000': 493, '111': 531}

# Construct a noisy simulator backend from an IBMQ backend
# This simulator backend will be automatically configured
# using the device configuration and noise model
provider = IBMQ.load_account()
vigo_backend = provider.get_backend('ibmq_vigo')
vigo_sim = QasmSimulator.from_backend(vigo_backend)

# Perform noisy simulation
result_noise = qiskit.execute(circ, vigo_sim).result()
counts_noise = result_noise.get_counts(0)
print('Counts(noise):', counts_noise)
# Counts(noise): {'000': 492, '001': 6, '010': 8, '011': 14, '100': 3, '101': 14, '110': 18, '111': 469}
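As a quick framework-free sanity check of the ideal result above, a few lines of plain Python (no Qiskit required) can reproduce the GHZ statevector and confirm that only the 000 and 111 outcomes carry weight:

```python
# Tiny pure-Python statevector check of the 3-qubit GHZ circuit above.
# Qubit 0 is the least significant bit of the index, matching Qiskit's
# bit-ordering convention.
import math

def apply_h(state, q):
    """Apply a Hadamard gate to qubit q of the statevector."""
    s = 1 / math.sqrt(2)
    out = [0j] * len(state)
    for i, amp in enumerate(state):
        j = i ^ (1 << q)          # index with qubit q flipped
        if (i >> q) & 1:          # qubit q is |1>: H|1> = (|0> - |1>)/sqrt(2)
            out[j] += s * amp
            out[i] -= s * amp
        else:                     # qubit q is |0>: H|0> = (|0> + |1>)/sqrt(2)
            out[i] += s * amp
            out[j] += s * amp
    return out

def apply_cx(state, control, target):
    """Apply a CNOT gate: flip `target` wherever `control` is 1."""
    out = list(state)
    for i in range(len(state)):
        if (i >> control) & 1:
            out[i] = state[i ^ (1 << target)]
    return out

# |000> -> H(0) -> CX(0,1) -> CX(1,2), exactly as in the Qiskit snippet.
state = [0j] * 8
state[0] = 1 + 0j
state = apply_h(state, 0)
state = apply_cx(state, 0, 1)
state = apply_cx(state, 1, 2)

probs = {format(i, '03b'): abs(a) ** 2
         for i, a in enumerate(state) if abs(a) > 1e-9}
print({k: round(p, 3) for k, p in probs.items()})  # {'000': 0.5, '111': 0.5}
```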
Contribution Guidelines
If you'd like to contribute to Qiskit, please take a look at the contribution guidelines.
Next Steps
Now you're set up and ready to check out some of the other examples from our Qiskit IQX Tutorials or Qiskit Community Tutorials repositories.
Authors and Citation
Qiskit Aer. | https://pypi.org/project/qiskit-aer/ | CC-MAIN-2020-45 | en | refinedweb |
A Subscription in RxJS is a disposable resource that usually represents the execution of an Observable. It has the unsubscribe method, which lets us dispose of the resource held by the subscription when we're done.
It’s also called ‘Disposable’ in earlier versions of RxJS.
Basic Usage
A basic example of a subscription can be seen in the following code:
import { of } from "rxjs"; const observable = of(1, 2, 3); const subscription = observable.subscribe(val => console.log(val)); subscription.unsubscribe();
In the code above, the subscription returned when we call subscribe on an Observable is a Subscription. It has the unsubscribe method, which we called on the last line to clean up once we're done with the Observable.
Combining Subscriptions
We can combine subscriptions with the add method that comes with Subscription objects.
For example, if we have 2 Observables:
import { of } from "rxjs"; const observable1 = of(1, 2, 3); const observable2 = of(4, 5, 6); const subscription1 = observable1.subscribe(val => console.log(val)); const subscription2 = observable2.subscribe(val => console.log(val)); subscription1.add(subscription2);
In the code above, we have 2 Subscriptions, subscription1 and subscription2, which we joined together with the add method of subscription1. subscription1 is a parent of subscription2.

We should get 1, 2, 3, 4, 5 and 6 (each logged on its own line) as the output.
When we join Subscriptions together, we can unsubscribe from all the Subscriptions that were joined by calling unsubscribe on the first Subscription that the add method was called on.
For example, if we have:
import { interval } from "rxjs"; const observable1 = interval(400); const observable2 = interval(300); const subscription = observable1.subscribe(x => console.log(x)); const childSubscription = observable2.subscribe(x => console.log(x)); subscription.add(childSubscription);
Then calling unsubscribe on subscription will unsubscribe from all the subscriptions that are joined together.
setTimeout(() => {
  subscription.unsubscribe();
}, 1000);
Undoing Child Subscriptions
We can undo child subscriptions by using the Subscription's remove method. It takes a child Subscription as the argument.
To use it, we can write something like the following code:
import { interval } from "rxjs"; const observable1 = interval(400); const observable2 = interval(300); const subscription = observable1.subscribe(x => console.log(x)); const childSubscription = observable2.subscribe(x => console.log(x)); subscription.add(childSubscription); (async () => { await new Promise(resolve => { setTimeout(() => { subscription.remove(childSubscription); resolve(); }, 600); }); await new Promise(resolve => { setTimeout(() => { subscription.unsubscribe(); childSubscription.unsubscribe(); resolve(); }, 1200); }); })();
Once childSubscription is removed, it's no longer a child of subscription. Therefore, we have to call unsubscribe on both subscriptions separately so that we clean up both once we're done.
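This parent/child bookkeeping can be sketched as a tiny standalone model (illustrative only, not RxJS's actual implementation):

```javascript
// Minimal model of parent/child subscription semantics (illustrative only,
// not the real RxJS implementation).
class MiniSubscription {
  constructor(teardown) {
    this.teardown = teardown; // cleanup function for this subscription's resource
    this.children = [];
    this.closed = false;
  }

  // Make `child` a child of this subscription.
  add(child) {
    this.children.push(child);
  }

  // Undo `add`: `child` is no longer disposed with this subscription.
  remove(child) {
    this.children = this.children.filter(c => c !== child);
  }

  // Dispose this subscription's resource and all remaining children.
  unsubscribe() {
    if (this.closed) return;
    this.closed = true;
    if (this.teardown) this.teardown();
    for (const child of this.children) child.unsubscribe();
  }
}

const parent = new MiniSubscription(() => console.log('parent disposed'));
const childA = new MiniSubscription(() => console.log('childA disposed'));
const childB = new MiniSubscription(() => console.log('childB disposed'));

parent.add(childA);
parent.add(childB);
parent.remove(childB); // childB is detached, so it survives the parent

parent.unsubscribe();
console.log(childA.closed, childB.closed); // true false
```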
Subscriptions let us get the values emitted from an Observable. We can join multiple Subscriptions together with the add method, which takes a child Subscription as its argument. When they're joined together, we can unsubscribe from them all at once. We can remove a child Subscription from its parent with the remove method.
NEWSLETTER I TAX
CONTENTS

NEWSLETTER TAX LAW I JUNE 2017

I NATIONAL LEGISLATION
II ADMINISTRATIVE INSTRUCTIONS
III INTERNATIONAL CASE LAW
IV NATIONAL CASE LAW
V OTHER MATTERS
NEWSLETTER I TAX I 1/7
NEWSLETTER TAX LAW
I NATIONAL LEGISLATION
Ministry of Finance Ordinance no. 185/2017, of 1 June
Regulates Decree-Law 19/2017, of 14 February, which establishes an electronic system for the communication of data on travellers', and respective purchases, intending to benefit from the Value Added Tax ("VAT") exemption on purchases made in Portugal.
Office of the Secretary of State for Tax Affairs Decision no. 212/2017-XXI, of 31 May 2017, published in June 2017
Determines that the obligation to submit the Corporate Simplified Information return ("IES") can be fulfilled up to 22 July 2017, without incurring in any penalties.
Ministry of Finance Ordinance no. 191/2017, of 16 June
Approves a new version of Form 38 - Declarations of Cross-Border Transactions ("Modelo 38 - Declarações de Operações Transfronteiras"), and respective instructions, to be submitted by credit institutions, financial institutions and other institutions which render payment services to comply with the obligation of declaring money remittances and transfers made to an entity located in a country, territory or region with a more favourable tax regime.
II ADMINISTRATIVE INSTRUCTIONS
Tax and Customs Authority Binding Information concerning case no. 11 591/2017, of 31 May, published in June 2017 Framework - Foundation - Services provided in relation to childcare, such as: teaching of music, dance, theatre, swimming; school/study support and after-school activities; activities during the holidays (educational holiday camp).
Clarifies that the sale of photographs and photo albums to students and the sale of uniforms (mandatory as per the internal regulations of the institution) to students and staff, are not ancillary to the educational activity and therefore may not benefit from the exemption provided for in Article 9 (9) of the VAT Code.
It also clarifies that sports and artistic extracurricular activities, school and study support, after-school activities and activities during the holidays (i.e., educational holiday camps) may benefit from the exemption provided for in Article 9 (7) of the VAT Code.
Tax and Customs Authority Binding Information concerning case no. 11 643/2017, of 31 May, published in June 2017 Framework Non-profit association Organization and coordination of courses, studies and research in psychodrama and sociodrama
Clarifies that the rendering of services and related sales of goods by a non-profit association with a cultural and recreational purpose, for which the sole consideration are the fees paid by the members, shall be VAT exempt. However, such exemption shall not apply if other payments are made in addition to the membership fee.
Further clarifies that vocational training activities are only exempt from VAT if the provider is accredited by the Directorate-General of Employment and Labour Relations ("DGERT").
It also states that the organization of cultural congresses by a non-profit association may benefit from VAT exemption, whereas the VAT reduced rate of 6% applies to the sale of magazines.
Tax and Customs Authority Binding Information concerning case no. 11 771/2017, of 31 May, published in June 2017 Rates Economic and employers' organizations Improving the efficiency of existing irrigation systems Rural Development Program 2014/2020 Renovation of irrigation systems; Monitoring the application to PDR2020.
Clarifies that the works for the renovation of irrigation systems fall under the provision of item 4.2. (f) of List I annex to the VAT Code and are therefore subject to the reduced VAT rate of 6%.
Further clarifies that the services rendered in connection with the monitoring of applications for the Rural Development Programme 2014/2020 are taxed at the standard VAT rate of 23%.
Tax and Customs Authority Binding Information concerning case no. 11 895/2017, of 26 May, published in June 2017 Framework: Non-profit organization Foundation that organizes annual congresses, with free access to its members Financial support of the organizations by donors
Clarifies that the activity of a non-profit organization that essentially organizes events with a cultural and educational scope for the promotion of universal human values to young people benefits from the VAT exemption foreseen in Article 9 (14) of the VAT Code.
Further clarifies that sponsorships received by the same entity, which have a predominantly commercial purpose related to the rendering of advertising services, are deemed consideration for a service subject to VAT at the standard rate of 23%.
Tax and Customs Authority Tax Management Area - VAT Circular no. 30 191, of 8 June 2017
Discloses the concepts of "immovable property", "services connected with immovable property" and "supply of equipment for carrying out work on immovable property", added to Council Implementing Regulation (EU) No. 282/2011, of 15 March 2011, for the purpose of the qualification of services rendered in connection with immovable property.
Tax and Customs Authority Department of Customs Taxation Services Division of Customs Debt, Customs Value and Origins Circular no. 15 591, of 12 June 2017
Clarifies the procedures to be adopted by economic operators in the context of the Global Economic and Commercial Agreement signed between Canada and the European Union.
Further clarifies that the economic operators must take into account the content of Circular no. 15579, of 30 March 2017, of the Department of Customs Taxation Services, regarding the Registered Exporter System, the only mechanism foreseen for the issuing of proof s of preferential origin for exports from the European Union to Canada.
Tax and Customs Authority Department of Excise Tax and Vehicle Tax Services Vehicle Tax Division Circular no. 35 077, of 12 June 2017
Discloses the procedures to be adopted for the issuing of Vehicle Customs Declaration ("DAV") in electronic format and other DAV that continue to be processed at customs.
III INTERNATIONAL CASE LAW
Court of Justice of the European Union Judgment of 8 June 2017 Case C-580/15
In the Judgment in question, rendered in the context of a request for a preliminary ruling, the Court of Justice of the European Union states that Article 56 of the Treaty on the Functioning of the European Union ("TFEU") and Article 36 of the European Economic Area ("EEA") Agreement must be interpreted as precluding a national legislation which provides for a
national tax exemption system which, although applicable to savings income from deposits in banks domiciled in that Member State or in another EEA Member State, is reserved to services that comply with specific criteria, particular to that national market, and not other services which are essentially similar but do not fulfil said criteria.
Court of Justice of the European Union Judgment of 14 June 2017 Case C-26/16
In the Judgment in question, rendered in the context of a request for a preliminary ruling, the Court of Justice of the European Union states that Article 138 (2) (a) of the VAT Directive precludes national provisions from making the VAT exemption of an intra-Community supply of a new means of transport subject to the requirement that the purchaser of that means of transport is established or domiciled in the Member State of destination of that means of transport.
The Court of Justice of the European Union further clarifies that said exemption cannot be refused in the Member State of supply solely because the means of transport has been subject to a temporary registration in the Member State of destination.
The Court of Justice of the European Union also states that the above mentioned article of the VAT Directive precludes the vendor of a new means of transport, transported by the purchaser to another Member State and subject to a temporary registration in that latter State, from being required to pay the VAT at a later stage when it is not established that the temporary registration regime has ended and value added tax has or will be paid in the Member State of destination, or in the event of tax evasion by the purchaser, unless it has been established that that vendor knew or ought to have known that the transaction was part of a fraud committed by the purchaser and did not take all reasonable steps within his power to avoid his participation in that fraud.
IV NATIONAL CASE LAW
Constitutional Court Judgment no. 267/2017, of 31 May 2017, published in June 2017 Case no. 466/16
In the Judgment in question, the Constitutional Court declared unconstitutional the provision set out in Article 135 of the 2016 State Budget Law, in the part in which it determines that Article 88 (21) of the IRC Code according to which the total amount of CIT autonomous taxation assessed in a year cannot be deducted of the amount of special payment on account performed in that year applies to tax years prior to 2016 due to the interpretative nature of the norm.
Supreme Administrative Court Judgment of 7 June 2017 Case no. 01 417/16
In the Judgment in question, the Court stated that a wind turbine, which is part of a wind farm feeding electric energy into the public distribution system, cannot be considered a building for Municipal Property Tax ("IMI") purposes as it does not have an individual economic value.
South Central Administrative Court Judgment of 25 May 2017 Case no. 618/13.1BELLE
In the Judgment in question, the Court considered that, should the taxpayer timely submit a request for the provision of a guarantee with the purpose of suspending a tax enforcement procedure, the Tax and Customs Authority may not determine the cancellation of tax benefits granted to the taxpayer before it issues a decision concerning the aforementioned request, as otherwise the principle of good faith would be breached.
Administrative Arbitration Centre Tax Arbitration Court Arbitration Decision of 4 April 2017, published in June 2017 Case no. 96/2015-T
In the Arbitration Decision in question, the Tax Arbitration Court stated that, for the purposes of determining the depreciation quotas for photovoltaic panels and wind turbines (which were not legally determined at the time) one should consider solely the expected useful life of the assets in normal conditions, and not the duration of the investment plan, the profitability of the investment vis-à-vis the agreements entered into with the Portuguese State or even the duration of the surface area under contract.
The Tax Arbitration Court further stated that the expected useful life of the assets indicated by the manufacturers should be considered the maximum period, and thus the taxpayer would be allowed to determine the useful life of its assets between that reference period and half of that.
V OTHER MATTERS
Portugal signs multilateral convention to prevent base erosion and profit shifting
On 7 June 2017, at the OECD Forum in Paris, the State Secretary for Tax Affairs signed the Multilateral Convention to Implement Tax Treaty Related Measures to Prevent Base Erosion and Profit Shifting, together with 67 other countries and territories, thus simultaneously
altering approximately 1100 Agreements for the Avoidance of Double Taxation signed between the different parties.
CONTACTS
CUATRECASAS, GONÇALVES PEREIRA & ASSOCIADOS, RL Sociedade de Advogados
1. Http Deprecated, HttpClient Here to Stay
Before version 4.3, the @angular/http module was used for making HTTP requests in Angular applications. The Angular team has now deprecated Http in version 5. The HttpClient API from the @angular/common/http package that shipped in version 4.3 is now recommended for use in all apps. The HttpClient API features include:

- Typed, synchronous response body access, including support for JSON body types
- JSON as an assumed default, no longer needing to be explicitly parsed
- Interceptors, which allow middleware logic to be inserted into the pipeline
- Immutable request/response objects
- Progress events for both request upload and response download
- Post-request verification & flush based testing framework
import { HttpClientModule } from '@angular/common/http';
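One of those features, interception, is essentially middleware wrapped around each request. The sketch below illustrates the concept with hypothetical types; it is framework-free and is not Angular's actual HttpClient interceptor API:

```typescript
// Framework-free sketch of the interceptor idea (hypothetical types, not
// Angular's actual API): each interceptor sees the request plus a `next`
// handler and may transform either the request or the response.
type Req = { url: string; headers: Record<string, string> };
type Res = { status: number; body: string };
type Handler = (req: Req) => Res;
type Interceptor = (req: Req, next: Handler) => Res;

// Compose interceptors around a backend handler, outermost first.
function chain(interceptors: Interceptor[], backend: Handler): Handler {
  return interceptors.reduceRight<Handler>(
    (next, interceptor) => (req) => interceptor(req, next),
    backend
  );
}

// Example middleware: attach an auth header before forwarding the request.
const authInterceptor: Interceptor = (req, next) =>
  next({ ...req, headers: { ...req.headers, Authorization: "Bearer <token>" } });

const backend: Handler = (req) => ({
  status: 200,
  body: `served ${req.url} with ${Object.keys(req.headers).length} header(s)`,
});

const handler = chain([authInterceptor], backend);
console.log(handler({ url: "/api", headers: {} }).body);
// served /api with 1 header(s)
```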
2. Support for Multiple Export Alias in Angular 5
In Angular 5, you can now give multiple names to your components and directives while exporting. Exporting a component with multiple names can help your users migrate without breaking changes.
"Exporting a component with multiple names can help your users migrate without breaking changes."
Example Usage:
import { Component } from '@angular/core';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css'],
  exportAs: 'dashboard, logBoard'
})
export class AppComponent {
  title = 'app';
}
3. Internationalized Number, Date, and Currency Pipes
Angular 5 ships with new number, date, and currency pipes that increase standardization across browsers and eliminate the need for i18n polyfills. The pipes rely on the CLDR to provide extensive locale support and configurations for any locales you want to support. To use the old pipes, you will have to import the DeprecatedI18NPipesModule after the CommonModule.
import { NgModule } from '@angular/core';
import { CommonModule, DeprecatedI18NPipesModule } from '@angular/common';

@NgModule({
  imports: [
    CommonModule,
    // import deprecated module after
    DeprecatedI18NPipesModule
  ]
})
export class AppModule { }
4. Improved Decorator Support in Angular 5
Angular 5 now supports expression lowering in decorators for lambdas, and for the value of useValue, useFactory, and data in object literals. Furthermore, a lambda can be used instead of a named function, like so:
In Angular 5
@Component({
  provider: [{provide: 'token', useFactory: () => null}]
})
export class MyClass {}
Before Angular 5
@Component({
  provider: [{provide: 'token', useValue: calculated()}]
})
export class MyClass {}
5. Build Optimization
The Angular team focused on making Angular 5 faster, smaller and easier to use. In Angular 5, production builds created with the Angular CLI will now apply the build optimizer by default.
"In Angular 5, production builds created with the Angular CLI will now apply the build optimizer by default."
6. Angular Universal Transfer API
The Angular Universal team has added Domino to platform-server. This simply means more DOM manipulations can happen out of the box within server-side contexts.

Furthermore, two modules, ServerTransferStateModule and BrowserTransferStateModule, have been added to Angular Universal. These modules allow you to generate information as part of your rendering with platform-server and then transfer it to the client side to avoid re-generation of the same information. In summary, state is transferred from the server, which means developers do not need to make a second HTTP request once the application makes it to the client.
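The idea can be sketched without Angular: the server embeds serialized state in the rendered page, and the client reads it back instead of refetching. The helpers below are hypothetical illustrations, not the actual TransferState API:

```typescript
// Framework-free sketch of the transfer-state idea (hypothetical helpers,
// not Angular's actual ServerTransferStateModule/BrowserTransferStateModule).
type StateMap = Record<string, unknown>;

// Server side: embed the serialized state next to the rendered HTML.
// (A real implementation must also escape "</script>" inside the payload.)
function renderWithState(html: string, state: StateMap): string {
  const payload = JSON.stringify(state);
  return `${html}<script id="server-state" type="application/json">${payload}</script>`;
}

// Client side: read the embedded state instead of repeating the HTTP request.
function readState(doc: { getElementById(id: string): { textContent: string } | null }): StateMap {
  const el = doc.getElementById("server-state");
  return el ? JSON.parse(el.textContent) : {};
}

const page = renderWithState("<h1>Profile</h1>", { user: "ada" });
console.log(page.includes('"user":"ada"')); // true
```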
7. Faster Compiler in Angular 5
A lot of improvements have been made to the Angular compiler to make it faster. The Angular compiler now leverages TypeScript transforms. You can take advantage of it by running:
ng serve --aot
Angular.io was used as a case study and the compiler pipeline saved 95% of the build time when an incremental AOT build was performed on it.
Note: TypeScript transforms are a new feature introduced as part of TypeScript 2.3 that allows hooking into the standard TypeScript compilation pipeline.
8. Forms Validation in Angular 5
In Angular 5, forms now have the ability to decide when the validity and value of a field or form are updated: on blur or on submit, instead of on every input event.
Example Usage
<input name="nickName" ngModel [ngModelOptions]="{updateOn: 'blur'}">
Another Example
<form [ngFormOptions]="{updateOn: 'submit'}">
In the case of Reactive forms, you can add the option like so:
ngOnInit() {
  this.newUserForm = this.fb.group({
    userName: ['Bob', {
      updateOn: 'blur',
      validators: [Validators.required]
    }]
  });
}
9. Animations in Angular 5
In Angular 5, we have two new transition aliases, :increment and :decrement.

...
animations: [
  trigger('bannerAnimation', [
    transition(':increment', group([
      query(':enter', [
        style({ left: '100%' }),
        animate('0.5s ease-out', style('*'))
      ]),
      query(':leave', [
        animate('0.5s ease-out', style({ left: '-100%' }))
      ])
    ])),
    transition(':decrement', group([
      query(':enter', [
        style({ left: '-100%' }),
        animate('0.5s ease-out', style('*'))
      ]),
      query(':leave', [
        animate('0.5s ease-out', style({ left: '100%' }))
      ])
    ]))
  ])
]
Animation queries now support negative limits, in which case elements are matched from the end rather than from the beginning like so:
...
animations: [
  trigger('myAnimation', [
    transition('* => go', [
      query('.item', [
        style({ opacity: 0 }),
        animate('1s', style({ opacity: 1 })),
      ], { limit: -3 }),
    ]),
  ]),
]
...
10. New Router Lifecycle Events in Angular 5
Some new lifecycle events have been added to the router. The events are GuardsCheckStart, ChildActivationStart, ActivationStart, GuardsCheckEnd, ResolveStart, ResolveEnd, ActivationEnd, and ChildActivationEnd. With these events, developers can track the cycle of the router from the start of running guards through to completion of activation.
Furthermore, you can now configure the router to reload a page when it receives a request to navigate to the same URL.
providers: [
  // ...
  RouterModule.forRoot(routes, {
    onSameUrlNavigation: 'reload'
  })
]
11. Better Support for Service Workers in Angular 5
In Angular 5, we have better support for service workers via the @angular/service-worker package. The package is a conceptual derivative of the service worker package that was maintained at github.com/angular/mobile-toolkit, but has been rewritten to support use-cases across a much wider variety of applications.
Note: Right now you will have to manually integrate the package because it's not fully integrated with the CLI v1.5 yet. It is available as beta in version 1.6 of the CLI.
Deprecations and Other Updates
- NgFor has been removed as it was deprecated since v4. Use NgForOf instead. This does not impact the use of *ngFor in your templates.
- The compiler option enableLegacyTemplate is now disabled by default as the <template> element was deprecated since v4. Use <ng-template> instead. The option enableLegacyTemplate and the <template> element will both be removed in Angular v6.
- The method
ngGetContentSelectors()has been removed as it was deprecated since v4. Use
ComponentFactory.ngContentSelectorsinstead.
ReflectiveInjectoris now deprecated. Use
Injector.createas a replacement.
NgProbeTokenhas been removed from
@angular/platform-browseras it was deprecated since v4. Import it from
@angular/coreinstead.
Upgrading to Angular 5
The Angular team built a nifty tool to make upgrading as easy as possible.
Angular 5 upgrade tool
Aside: Authenticate an Angular App with Auth0
By integrating Auth0 in your Angular application, you will be able to manage user identities, including password resets, creating, provisioning, blocking, and deleting users. It requires just a few steps.
Angular 5 came loaded with new features and significant improvements. It is smaller and faster. I am proud of what the Angular team achieved with this release.
Have you switched to Angular 5 yet? What are your thoughts? Did you notice any significant improvement? Let me know in the comments section! 😊 | https://auth0.com/blog/whats-new-in-angular5/ | CC-MAIN-2020-45 | en | refinedweb |
Upgrade guide
4.4.0
Datastax Enterprise support is now available directly in the main driver. There is no longer a separate DSE driver.
For Apache Cassandra® users
The great news is that reactive execution is now available for everyone.
See the `CqlSession.executeReactive` methods.
Apart from that, the only visible change is that DSE-specific features are now exposed in the API:

- new execution methods: `CqlSession.executeGraph`, `CqlSession.executeContinuously*`. They all have default implementations so this doesn't break binary compatibility. You can just ignore them.
- new driver dependencies: Tinkerpop, ESRI, Reactive Streams. If you want to keep your classpath lean, you can exclude some dependencies when you don't use the corresponding DSE features; see the Integration>Driver dependencies section.
For Datastax Enterprise users
Adjust your Maven coordinates to use the unified artifact:
```xml
<!-- Replace: -->
<dependency>
  <groupId>com.datastax.dse</groupId>
  <artifactId>dse-java-driver-core</artifactId>
  <version>2.3.0</version>
</dependency>
<!-- By: -->
<dependency>
  <groupId>com.datastax.oss</groupId>
  <artifactId>java-driver-core</artifactId>
  <version>4.4.0</version>
</dependency>
<!-- Do the same for the other modules: query builder, mapper... -->
```
The new driver is a drop-in replacement for the DSE driver. Note however that we’ve deprecated a few DSE-specific types in favor of their OSS equivalents. They still work, so you don’t need to make the changes right away; but you will get deprecation warnings:
- `DseSession`: use `CqlSession` instead, it can now do everything that a DSE session does. This also applies to the builder:

```java
// Replace:
DseSession session = DseSession.builder().build();
// By:
CqlSession session = CqlSession.builder().build();
```

- `DseDriverConfigLoader`: the driver no longer needs DSE-specific config loaders. All the factory methods in this class now redirect to `DriverConfigLoader`. On that note, `dse-reference.conf` does not exist anymore, all the driver defaults are now in reference.conf.
- plain-text authentication: there is now a single implementation that works with both Cassandra and DSE. If you used `DseProgrammaticPlainTextAuthProvider`, replace it by `PlainTextProgrammaticAuthProvider`. Similarly, if you wrote a custom implementation by subclassing `DsePlainTextAuthProviderBase`, extend `PlainTextAuthProviderBase` instead.
- `DseLoadBalancingPolicy`: DSE-specific features (the slow replica avoidance mechanism) have been merged into `DefaultLoadBalancingPolicy`. `DseLoadBalancingPolicy` still exists for backward compatibility, but it is now identical to the default policy.
Class Loader
The default class loader used by the driver when instantiating classes by reflection changed. Unless specified by the user, the driver will now use the same class loader that was used to load the driver classes themselves, in order to ensure that implemented interfaces and implementing classes are fully compatible.
This should ensure a more streamlined experience for OSGi users, who do not need anymore to define a specific class loader to use.
However if you are developing a web application and your setup corresponds to the following
scenario, then you will now be required to explicitly define another class loader to use: if in your
application the driver jar is loaded by the web server’s system class loader (for example,
because the driver jar was placed in the “/lib” folder of the web server), then the default class
loader will be the server’s system class loader. Then if the application tries to load, say, a
custom load balancing policy declared in the web app’s “WEB-INF/lib” folder, then the default class
loader will not be able to locate that class. Instead, you must use the web app’s class loader, that
you can obtain in most web environments by calling
Thread.getContextClassLoader():
```java
CqlSession.builder()
    .addContactEndPoint(...)
    .withClassLoader(Thread.currentThread().getContextClassLoader())
    .build();
```
See the javadocs of SessionBuilder.withClassLoader for more information.
4.1.0
Object mapper
4.1.0 marks the introduction of the new object mapper in the 4.x series.
Like driver 3, it relies on annotations to configure mapped entities and queries. However, there are a few notable differences:
- it uses compile-time annotation processing instead of runtime reflection;
- the “mapper” and “accessor” concepts have been unified into a single “DAO” component, that handles both pre-defined CRUD patterns, and user-provided queries.
Refer to the mapper manual for all the details.
Internal API
`NettyOptions#afterBootstrapInitialized` is now responsible for setting socket options on driver connections (see `advanced.socket` in the configuration). If you had written a custom `NettyOptions` for 4.0, you'll have to copy over – and possibly adapt – the contents of `DefaultNettyOptions#afterBootstrapInitialized` (if you didn't override `NettyOptions`, you don't have to change anything).
4.0.0
Version 4 is major redesign of the internal architecture. As such, it is not binary compatible with previous versions. However, most of the concepts remain unchanged, and the new API will look very familiar to 2.x and 3.x users.
New Maven coordinates
The core driver is available from:
```xml
<dependency>
  <groupId>com.datastax.oss</groupId>
  <artifactId>java-driver-core</artifactId>
  <version>4.0.0</version>
</dependency>
```
Runtime requirements
The driver now requires Java 8 or above. It does not depend on Guava anymore (we still use it internally but it’s shaded).
We have dropped support for legacy protocol versions v1 and v2. As a result, the driver is compatible with:
- Apache Cassandra®: 2.1 and above;
- Datastax Enterprise: 4.7 and above.
Packages
We’ve adopted new API conventions to better organize the driver code and make it more modular. As a result, package names have changed. However most public API types have the same names; you can use the auto-import or “find class” features of your IDE to discover the new locations.
Here’s a side-by-side comparison with the legacy driver for a basic example:
```java
// Driver 3:
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.SimpleStatement;

SimpleStatement statement =
    new SimpleStatement("SELECT release_version FROM system.local");
ResultSet resultSet = session.execute(statement);
Row row = resultSet.one();
System.out.println(row.getString("release_version"));

// Driver 4:
import com.datastax.oss.driver.api.core.cql.ResultSet;
import com.datastax.oss.driver.api.core.cql.Row;
import com.datastax.oss.driver.api.core.cql.SimpleStatement;

SimpleStatement statement =
    SimpleStatement.newInstance("SELECT release_version FROM system.local");
ResultSet resultSet = session.execute(statement);
Row row = resultSet.one();
System.out.println(row.getString("release_version"));
```
Notable changes:
- the imports;
- simple statement instances are now created with the `newInstance` static factory method. This is because `SimpleStatement` is now an interface (as most public API types).
Configuration
The configuration has been completely revamped. Instead of ad-hoc configuration classes, the default mechanism is now file-based, using the Typesafe Config library. This is a better choice for most deployments, since it allows configuration changes without recompiling the client application (note that there are still programmatic setters for things that are likely to be injected dynamically, such as contact points).
The driver JAR contains a `reference.conf` file that defines the options with their defaults:

```
datastax-java-driver {
  basic.request {
    timeout = 2 seconds
    consistency = LOCAL_ONE
    page-size = 5000
  }
  // ... and many more (~10 basic options, 70 advanced ones)
}
```
You can place an `application.conf` in your application's classpath to override options selectively:

```
datastax-java-driver {
  basic.request.consistency = ONE
}
```
Options can also be overridden with system properties when launching your application:
java -Ddatastax-java-driver.basic.request.consistency=ONE MyApp
The configuration also supports execution profiles, that allow you to capture and reuse common sets of options:
```
// application.conf:
datastax-java-driver {
  profiles {
    profile1 { basic.request.consistency = QUORUM }
    profile2 { basic.request.consistency = ONE }
  }
}

// Application code:
SimpleStatement statement1 =
    SimpleStatement.newInstance("...").setExecutionProfileName("profile1");
SimpleStatement statement2 =
    SimpleStatement.newInstance("...").setExecutionProfileName("profile2");
```
The configuration can be reloaded periodically at runtime:
datastax-java-driver { basic.config-reload-interval = 5 minutes }
This is fully customizable: the configuration is exposed to the rest of the driver as an abstract
DriverConfig interface; if the default implementation doesn’t work for you, you can write your
own.
For more details, refer to the manual.
Session
Cluster does not exist anymore; the session is now the main component, initialized in a single
step:
```java
CqlSession session = CqlSession.builder().build();
session.execute("...");
```
Protocol negotiation in mixed clusters has been improved: you no longer need to force the protocol version during a rolling upgrade. The driver will detect that there are older nodes, and downgrade to the best common denominator (see JAVA-1295).
Reconnection is now possible at startup: if no contact point is reachable, the driver will retry at periodic intervals (controlled by the reconnection policy) instead of throwing an error. To turn this on, set the following configuration option:
datastax-java-driver { advanced.reconnect-on-init = true }
The session now has a built-in throttler to limit how many requests can execute concurrently. Here’s an example based on the number of requests (a rate-based variant is also available):
```
datastax-java-driver {
  advanced.throttler {
    class = ConcurrencyLimitingRequestThrottler
    max-concurrent-requests = 10000
    max-queue-size = 100000
  }
}
```
Load balancing policy
Previous driver versions came with multiple load balancing policies that could be nested into each other. In our experience, this was one of the most complicated aspects of the configuration.
In driver 4, we are taking a more opinionated approach: we provide a single default policy, with what we consider as the best practices:
- local only: we believe that failover should be handled at infrastructure level, not by application code.
- token-aware.
- optionally filtering nodes with a custom predicate.
You can still provide your own policy by implementing the `LoadBalancingPolicy` interface.
Statements
Simple, bound and batch statements are now exposed in the public API as interfaces. The internal implementations are immutable. This makes them automatically thread-safe: you don’t need to worry anymore about sharing them or reusing them between asynchronous executions.
Note that all mutating methods return a new instance, so make sure you don’t accidentally ignore their result:
```java
BoundStatement boundSelect = preparedSelect.bind();
// This doesn't work: setInt doesn't modify boundSelect in place:
boundSelect.setInt("k", key);
session.execute(boundSelect);
// Instead, reassign the statement every time:
boundSelect = boundSelect.setInt("k", key);
```
These methods are annotated with `@CheckReturnValue`. Some code analysis tools – such as ErrorProne – can check correct usage at build time, and report mistakes as compiler errors.
Unlike 3.x, the request timeout now spans the entire request. In other words, it's the maximum amount of time that `session.execute` will take, including any retry, speculative execution, etc. You can set it with `Statement.setTimeout`, or globally in the configuration with the `basic.request.timeout` option.
Prepared statements are now cached client-side: if you call `session.prepare()` twice with the same query string, it will no longer log a warning. The second call will return the same statement instance, without sending anything to the server:
```java
PreparedStatement ps1 = session.prepare("SELECT * FROM product WHERE sku = ?");
PreparedStatement ps2 = session.prepare("SELECT * FROM product WHERE sku = ?");
assert ps1 == ps2;
```
This cache takes into account all execution parameters. For example, if you prepare the same query string with different consistency levels, you will get two distinct prepared statements, each propagating its own consistency level to its bound statements:
```java
PreparedStatement ps1 = session.prepare(
    SimpleStatement.newInstance("SELECT * FROM product WHERE sku = ?")
        .setConsistencyLevel(DefaultConsistencyLevel.ONE));
PreparedStatement ps2 = session.prepare(
    SimpleStatement.newInstance("SELECT * FROM product WHERE sku = ?")
        .setConsistencyLevel(DefaultConsistencyLevel.TWO));
assert ps1 != ps2;

BoundStatement bs1 = ps1.bind();
assert bs1.getConsistencyLevel() == DefaultConsistencyLevel.ONE;
BoundStatement bs2 = ps2.bind();
assert bs2.getConsistencyLevel() == DefaultConsistencyLevel.TWO;
```
DDL statements are now debounced; see Why do DDL queries have a higher latency than driver 3? in the FAQ.
Dual result set APIs
In 3.x, both synchronous and asynchronous execution models shared a common result set implementation. This made asynchronous usage notably error-prone, because of the risk of accidentally triggering background synchronous fetches.
There are now two separate APIs: synchronous queries return a `ResultSet`; asynchronous queries return a future of `AsyncResultSet`.
`ResultSet` behaves much like its 3.x counterpart, except that background pre-fetching (`fetchMoreResults`) was deliberately removed, in order to keep this interface simple and intuitive. If you were using synchronous iterations with background pre-fetching, you should now switch to fully asynchronous iterations (see below).
`AsyncResultSet` is a simplified type that only contains the rows of the current page. When iterating asynchronously, you no longer need to stop the iteration manually: just consume all the rows in `currentPage()`, and then call `fetchNextPage` to retrieve the next page asynchronously. You will find more information about asynchronous iterations in the manual pages about asynchronous programming and paging.
CQL to Java type mappings
Since the driver now has access to Java 8 types, some of the CQL to Java type mappings have changed when it comes to temporal types such as `date` and `timestamp`:

- `getDate` has been replaced by `getLocalDate` and returns java.time.LocalDate;
- `getTime` has been replaced by `getLocalTime` and returns java.time.LocalTime instead of a `long` representing nanoseconds since midnight;
- `getTimestamp` has been replaced by `getInstant` and returns java.time.Instant instead of java.util.Date.
The corresponding setter methods were also changed to expect these new types as inputs.
Metrics
Metrics are now divided into two categories: session-wide and per-node. Each metric can be enabled or disabled individually in the configuration:
```
datastax-java-driver {
  advanced.metrics {
    // more are available, see reference.conf for the full list
    session.enabled = [ bytes-sent, bytes-received, cql-requests ]
    node.enabled = [ bytes-sent, bytes-received, pool.in-flight ]
  }
}
```
Note that unlike 3.x, JMX is not supported out of the box. You’ll need to add the dependency explicitly:
```xml
<dependency>
  <groupId>io.dropwizard.metrics</groupId>
  <artifactId>metrics-jmx</artifactId>
  <version>4.0.2</version>
</dependency>
```
Metadata
`Session.getMetadata()` is now immutable and updated atomically. The node list, schema metadata and token map exposed by a given `Metadata` instance are guaranteed to be in sync. This is convenient for analytics clients that need a consistent view of the cluster at a given point in time; for example, a keyspace in `metadata.getKeyspaces()` will always have a corresponding entry in `metadata.getTokenMap()`.
On the other hand, this means you have to call `getMetadata()` again each time you need a fresh copy; do not cache the result:
```java
Metadata metadata = session.getMetadata();
Optional<KeyspaceMetadata> ks = metadata.getKeyspace("test");
assert !ks.isPresent();

session.execute(
    "CREATE KEYSPACE IF NOT EXISTS test "
        + "WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}");

// This is still the same metadata from before the CREATE
ks = metadata.getKeyspace("test");
assert !ks.isPresent();

// You need to fetch the whole metadata again
metadata = session.getMetadata();
ks = metadata.getKeyspace("test");
assert ks.isPresent();
```
Refreshing the metadata can be CPU-intensive, in particular the token map. To help alleviate that, it can now be filtered to a subset of keyspaces. This is useful if your application connects to a shared cluster, but does not use the whole schema:
```
datastax-java-driver {
  // defaults to empty (= all keyspaces)
  advanced.metadata.schema.refreshed-keyspaces = [ "users", "products" ]
}
```
See the manual for all the details.
Query builder
The query builder is now distributed as a separate artifact:
```xml
<dependency>
  <groupId>com.datastax.oss</groupId>
  <artifactId>java-driver-query-builder</artifactId>
  <version>4.0.0</version>
</dependency>
```
It is more cleanly separated from the core driver, and only focuses on query string generation. Built queries are no longer directly executable, you need to convert them into a string or a statement:
```java
import static com.datastax.oss.driver.api.querybuilder.QueryBuilder.*;

BuildableQuery query =
    insertInto("user")
        .value("id", bindMarker())
        .value("first_name", bindMarker())
        .value("last_name", bindMarker());

String cql = query.asCql();
// INSERT INTO user (id,first_name,last_name) VALUES (?,?,?)

SimpleStatement statement =
    query.builder()
        .addNamedValue("id", 0)
        .addNamedValue("first_name", "Jane")
        .addNamedValue("last_name", "Doe")
        .build();
```
All query builder types are immutable, making them inherently thread-safe and share-safe.
The query builder has its own manual chapter, where the syntax is covered in detail.
Dedicated type for CQL identifiers
Instead of raw strings, the names of schema objects (keyspaces, tables, columns, etc.) are now wrapped in a dedicated `CqlIdentifier` type. This avoids ambiguities with regard to case sensitivity.
Pluggable request execution logic
`Session` is now a high-level abstraction capable of executing arbitrary requests. Out of the box, the driver exposes a more familiar subtype `CqlSession`, that provides familiar signatures for CQL queries (`execute(Statement)`, `prepare(String)`, etc).
However, the request execution logic is completely pluggable, and supports arbitrary request types (as long as you write the boilerplate to convert them to protocol messages).
We use that in our DSE driver to implement a reactive API and support for DSE graph. You can also take advantage of it to plug your own request types (if you're interested, take a look at `RequestProcessor` in the internal API).
Hiccup-style generation of Graphviz graphs in Clojure and ClojureScript.
Dorothy is extremely alpha and subject to radical change. Release Notes Here
Dorothy assumes you have an understanding of Graphviz and DOT. The text below describes the mechanics of Dorothy's DSL, but you'll need to refer to the Graphviz documentation for specifics on node shapes, valid attributes, etc.
The Graphviz `dot` tool executable must be on the system path to render graphs.
Dorothy is on Clojars. In Leiningen:
[dorothy "x.y.z"]
A graph consists of a vector of statements. The following sections describe the format for all the types of statements. If you're bored, skip ahead to the "Defining Graphs" section below.
A node statement defines a node in the graph. It can take two forms:
node-id
[node-id]
[node-id { attr map }]
where `node-id` is a string, number or keyword with optional trailing port and compass-point. Here are some node statement examples:
:node0 ; Define a node called "node0"
:node0:port0 ; Define a node called "node0" with port "port0"
:node0:port0:sw ; Similarly a node with southwest compass point
The node's attr map is a map of attributes for the node. For example,
[:start {:shape :Mdiamond}] ; => start [shape=Mdiamond];
Dorothy will correctly escape and quote node-ids as required by dot.
A node id can also be auto-generated with `(gen-id object)`. For example,
[(gen-id some-object) {:label (.getText some-object)}]
It allows you to use arbitrary objects as nodes.
An edge statement defines an edge in the graph. It is expressed as a vector with two or more node-ids followed optional attribute map:
[node-id0 node-id1 ... node-idN { attr map }] ; => "node-id0" -> "node-id1" -> ... -> "node-idN" [attrs ...];
In addition to node ids, an edge statement may also contain subgraphs:
[:start (subgraph [... subgraph statements ...])]
For readability, `:>` delimiters may be optionally included in an edge statement:
[:start :> :middle :> :end]
A graph attribute statement sets graph-wide attributes. It is expressed as a single map:
{:label "process #1", :style :filled, :color :lightgrey} ; => graph [label="process #1",style=filled,color=lightgrey];
Alternatively, this can be expressed with the `(graph-attrs)` function like this:
(graph-attrs {:label "process #1", :style :filled, :color :lightgrey}) ; => graph [label="process #1",style=filled,color=lightgrey];
A node attribute or edge attribute statement sets node or edge attributes respectively for all nodes and edge statements that follow. It is expressed with `(node-attrs)` and `(edge-attrs)` statements:
(node-attrs {:style :filled, :color :white}) ; => node [style=filled,color=white];
or:
(edge-attrs {:color :black}) ; => edge [color=black];
As mentioned above, a graph consists of a series of statements. These statements are passed to the `graph`, `digraph`, or `subgraph` functions. Each takes an optional set of attributes followed by a vector of statements:
```clojure
; From the Graphviz "cluster" example
(digraph [
  (subgraph :cluster_0 [
    {:style :filled, :color :lightgrey, :label "process #1"}
    (node-attrs {:style :filled, :color :white})

    [:a0 :> :a1 :> :a2 :> :a3]])

  (subgraph :cluster_1 [
    {:color :blue, :label "process #2"}
    (node-attrs {:style :filled})

    [:b0 :> :b1 :> :b2 :> :b3]])

  [:start :a0]
  [:start :b0]
  [:a1 :b3]
  [:b2 :a3]
  [:a3 :a0]
  [:a3 :end]
  [:b3 :end]

  [:start {:shape :Mdiamond}]
  [:end {:shape :Msquare}]])
```
Similarly for `(graph)` (undirected graph) and `(subgraph)`. A second form of these functions takes an initial option map, or a string or keyword id for the graph:
(graph :graph-id ...) ; => graph "graph-id" { ... }
(digraph { :id :G :strict? true } ...) ; => strict graph G { ... }
Given a graph built with the functions described above, use the `(dot)` function to generate Graphviz DOT output.

```clojure
(require '[dorothy.core :as dot])
(def g (dot/graph [ ... ]))
(dot/dot g)
"graph { ... }"
```
Dorothy currently doesn't include any facilities for rendering dot-format output to images. However, you can pull in viz.cljc or viz.js, both of which will allow you to produce png, svg, and other image formats from your dorothy-generated dot-formatted graph content.
Wanted: pull requests to implement node equivalents to the rendering functions available for Clojure/JVM in the
dorothy.jvmnamespace. link to github issue here
graphviz (Clojure/JVM)

Once you have DOT language output, you can render it as an image using the `(render)` function:
```clojure
(require '[dorothy.jvm :refer (render save! show!)])

; This produces a png as an array of bytes
(render graph {:format :png})

; This produces an SVG string
(render graph {:format :svg})

; A one-liner with a very simple 4 node digraph.
(-> (dot/digraph [ [:a :b :c] [:b :d] ])
    dot/dot
    (render {:format :svg}))
```
The dot tool executable must be on the system path
Other formats include `:gif`, etc. `(render)` returns a string or a byte array depending on whether the output format is binary or not.
Alternatively, use the `(save!)` function to write to a file or output stream.
```clojure
; A one-liner with a very simple 4 node digraph
(-> (dot/digraph [ [:a :b :c] [:b :d] ])
    dot/dot
    (save! "out.png" {:format :png}))
```
Finally, for simple tests, use the `(show!)` function to view the result in a simple Swing viewer:

```clojure
; This opens a simple Swing viewer with the graph
(show! graph)

; A one-liner with a very simple 4 node digraph
(-> (dot/digraph [ [:a :b :c] [:b :d] ])
    dot/dot
    show!)
```
which shows:
Distributed under the Eclipse Public License, the same as Clojure. | https://xscode.com/daveray/dorothy | CC-MAIN-2020-45 | en | refinedweb |
ISO/IEC JTC1 SC22 WG21 N3658 = 2013-04-18
Jonathan Wakely, cxx@kayari.org
- Revision History
- Introduction
- Rationale and examples of use
  - Piecewise construction of std::pair
  - Convert array to tuple
  - N3466 more_perfect_forwarding_async example
- Solution
  - Type of integers in the sequence
  - Non-consecutive integer sequences
  - Members of integer sequences
  - Efficiency considerations
- Impact on standard
  - 20.x Compile-time integer sequences [intseq]
  - 20.x.1 In general [intseq.general]
  - 20.x.2 Class template integer_sequence [intseq.intseq]
  - 20.x.3 Alias template make_integer_sequence [intseq.make]
- References
Revision History

N3658 Revision 1 (post-Bristol mailing)

- Renamed `xxx_seq` templates to `xxx_sequence`.
- Renamed `to_index_sequence` to `index_sequence_for`.
- Removed `int` and `unsigned` alias templates.
- Removed `next` and `append`; changed `size` to a function.
- Renamed `type` typedef to `value_type`.
N3493 Initial paper (Spring 2013 mid-term mailing)

Introduction

This paper proposes a compile-time integer sequence represented by a class template along the lines of

```cpp
template<size_t...> struct index_sequence { };
```

so that an instantiation like `index_sequence<0, 1, 2, 3>` can be passed to a function template that deduces a parameter pack containing the integer sequence 0, 1, 2, 3.
In order to pass an `index_sequence` to a function template it must first be created, e.g. with `make_index_sequence<sizeof...(T)>` (or equivalently `make_index_sequence<tuple_size<tuple<T...>>::value>`) or by using the alias `index_sequence_for<T...>`.
With such a type in your toolbox it is easy to write the generic, reusable solution described above for expanding tuple elements as arguments to a function object:
```cpp
template<typename F, typename Tuple, size_t... I>
auto apply_(F&& f, Tuple&& args, index_sequence<I...>)
    -> decltype(std::forward<F>(f)(std::get<I>(std::forward<Tuple>(args))...))
{
    return std::forward<F>(f)(std::get<I>(std::forward<Tuple>(args))...);
}

template<typename F, typename Tuple,
         typename Indices = make_index_sequence<tuple_size<typename decay<Tuple>::type>::value>>
auto apply(F&& f, Tuple&& args)
    -> decltype(apply_(std::forward<F>(f), std::forward<Tuple>(args), Indices()))
{
    return apply_(std::forward<F>(f), std::forward<Tuple>(args), Indices());
}
```

The boilerplate is written once, hidden behind `make_index_sequence`, where users don't need to write or understand it.
Standardizing the components makes it trivial for users to expand
tuple-like types in any situation, including the examples below.
Piecewise construction of std::pair

A type like `index_sequence` can be used to implement the piecewise constructor of `std::pair`, for example:
```cpp
template<class... Args1, class... Args2>
pair(piecewise_construct_t, tuple<Args1...> args1, tuple<Args2...> args2)
    : pair(args1, args2,
           index_sequence_for<Args1...>{}, index_sequence_for<Args2...>{})
{ }

private:

template<class... A1, class... A2, size_t... I1, size_t... I2>
pair(tuple<A1...>& a1, tuple<A2...>& a2,
     index_sequence<I1...>, index_sequence<I2...>)
    : first(forward<A1>(get<I1>(a1))...),
      second(forward<A2>(get<I2>(a2))...)
{ }
```
Convert array to tuple

Given `index_sequence` and `make_index_sequence`, the implementation is almost trivial:
```cpp
template<typename Array, size_t... I>
auto a2t_(const Array& a, index_sequence<I...>)
    -> decltype(std::make_tuple(a[I]...))
{
    return std::make_tuple(a[I]...);
}

template<typename T, std::size_t N,
         typename Indices = make_index_sequence<N>>
auto a2t(const std::array<T, N>& a)
    -> decltype(a2t_(a, Indices()))
{
    return a2t_(a, Indices());
}
```
class template which can be instantiated as
integer_sequence<int>
or
integer_sequence<unsigned char>
or any other integer type.
By using alias templates to denote specializations for common types it is just as convenient to use the generic template as it would be if it wasn't parameterized by the integer type; for instance the examples above work equally well whether `index_sequence<I...>` is a specialization of a class template or an alias for `integer_sequence<size_t, I...>`.
Non-consecutive integer sequences

There is no reason to restrict the values in an integer sequence to consecutive integers starting from zero. It is very simple to transform `index_sequence<I...>` to `index_sequence<I+n...>` or `index_sequence<I*n...>`. Such sequences would be useful to access only the even elements of an array, for example, but such transformations are not necessary for the main purpose of unpacking tuples. If the standard provides the `integer_sequence` type and simple metafunctions for generating basic sequences, users can more easily create custom sequences.
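Such a transformation can be sketched in a few lines (shown here with the `std::` spellings this proposal was eventually standardized under in C++14; illustrative only, not proposed wording):

```cpp
#include <cstddef>
#include <utility>

using std::index_sequence;

// Shift every element by N: offset<2> maps <0,1,2> to <2,3,4>.
template<std::size_t N, std::size_t... I>
constexpr index_sequence<(I + N)...> offset(index_sequence<I...>) { return {}; }

// Scale every element by N: scale<2> maps <0,1,2> to <0,2,4>,
// e.g. to visit only the even elements of an array.
template<std::size_t N, std::size_t... I>
constexpr index_sequence<(I * N)...> scale(index_sequence<I...>) { return {}; }
```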
Members of integer sequences

For most uses a simple `template<int...> struct index_sequence { };` works.
Efficiency considerations

For a sequence of N integers the reference implementation recursively instantiates N templates, limiting the length of sequences to (at most) the instantiation depth of the compiler. It is quite easy to reduce the number of instantiations to log2(N) by starting at both ends of the sequence and working inwards to meet the middle.
A compiler intrinsic could be used to instantiate an arbitrary-length `integer_sequence` with no recursion.
There have been suggestions of a core feature that would define `...3` as the parameter pack `{0, 1, 2, 3}`, which would provide an alternative way to solve the problems solved by this proposal. This language feature will not be in C++14 if accepted at all, so should not prevent solving the problem now.
This is a pure addition to the library.
It is expected that implementations would be able to replace existing equivalent code with uses of `integer_sequence` if desired.
Add to the synopsis in [utility] paragraph 2, after the declaration of `tuple`:
```cpp
// 20.x Compile-time integer sequences
template<class T, T...> struct integer_sequence;

template<size_t... I>
  using index_sequence = integer_sequence<size_t, I...>;

template<class T, T N>
  using make_integer_sequence = integer_sequence<T, see below>;

template<size_t N>
  using make_index_sequence = make_integer_sequence<size_t, N>;

template<class... T>
  using index_sequence_for = make_index_sequence<sizeof...(T)>;
```
Add a new subclause after [pairs]:
20.x Compile-time integer sequences [intseq]
20.x.1 In general [intseq.general]
The library provides a class template that can represent an integer sequence. When used as an argument to a function template the parameter pack defining the sequence can be deduced and used in a pack expansion.
[Example:
```cpp
template<class F, class Tuple, std::size_t... I>
auto apply_impl(F&& f, Tuple&& t, index_sequence<I...>)
    -> decltype(std::forward<F>(f)(std::get<I>(std::forward<Tuple>(t))...))
{
    return std::forward<F>(f)(std::get<I>(std::forward<Tuple>(t))...);
}

template<class F, class Tuple,
         class Indices = make_index_sequence<std::tuple_size<Tuple>::value>>
auto apply(F&& f, Tuple&& t)
    -> decltype(apply_impl(std::forward<F>(f), std::forward<Tuple>(t), Indices()))
{
    return apply_impl(std::forward<F>(f), std::forward<Tuple>(t), Indices());
}
```
— end example ]
20.x.2 Class template
integer_sequence[intseq.intseq]
```cpp
namespace std {
  template<class T, T... I>
  struct integer_sequence {
    typedef T value_type;
    static constexpr size_t size() noexcept { return sizeof...(I); }
  };
} // namespace std
```
T shall be an integer type.
20.x.3 Alias template make_integer_sequence [intseq.make]
template<class T, T N> using make_integer_sequence = integer_sequence<T, see below>;
If N is negative the program is ill-formed. The alias template make_integer_sequence denotes a specialization of integer_sequence with N template non-type arguments. The type make_integer_sequence<T, N> denotes the type integer_sequence<T, 0, 1, ..., N-1>. [ Note: make_integer_sequence<int, 0> denotes the type integer_sequence<int> — end note ]
Created on 2020-04-14 22:30 by vstinner, last changed 2020-05-05 05:52 by rhettinger. This issue is now closed.
The random module lacks a getrandbytes() method which leads developers to be creative how to generate bytes:
It's a common user request:
* bpo-13396 in 2011
* bpo-27096 in 2016
* in 2020
Python already has three functions to generate random bytes:
* os.getrandom(): specific to Linux, not portable
* os.urandom()
* secrets.token_bytes()
These 3 functions are based on system entropy and they block on Linux until the kernel has collected enough entropy: PEP 524.
While many users are fine with these functions, there are also use cases for simulation where the security doesn't matter, and it's more about being able to get reproducible experience from a seed. That's what random.Random is about.
The numpy module provides numpy.random.bytes(length) function for such use case:
One example can be to generate UUID4 with the ability to reproduce the random UUID from a seed for testing purpose, or to get reproducible behavior.
Attached PR implements the getrandbytes() method.
If we have to have this, the method name should be differentiated from getrandbits() because the latter returns an integer. I suggest just random.bytes(n), the same as numpy.
> Python already has three functions to generate random bytes:
Now, there will be four ;-)
> I suggest just random.bytes(n), the same as numpy.
The problem with this is that people who `from random import *` (some schools insist on this, probably because most functions they need already start with `rand`) will shadow builtin `bytes`. Not that those schools do anything with `bytes`, but still, it might be inconvenient.
(The metaproblem is of course that some functions already do the "poor man's namespacing" in C-style by starting with `rand`, and some don't. I'm always for user control of namespacing, but I'm just saying that it doesn't correspond to how many beginners use `random` module.)
Do you have another name suggestion that doesn't have a parallelism problem with the existing name? The names getrandbytes() and getrandbits() suggest a parallelism that is incorrect.
I think that the "module owner";-P must decide whether the `random` module should follow the C-namespacing or not. Of course, I'm in the "not" camp, so I believe those two "rand..." functions (randrange is completely redundant with random.choice(range)) should be supplemented with random.int and random.float. And then random.bytes will be completely natural. And people might be gently nudged into the right direction when using Python module namespaces.
I concur that bytes() isn't a good name, but am still concerned that the proposed name is a bad API decision.
Maybe randbytes()?
I like "from random import randbytes" name. I concur that "from random import bytes" overrides bytes() builtin type and so can likely cause troubles.
I updated my PR to rename the method to randbytes().
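For example, with the method as merged (Python 3.9+), reproducibility from a seed works as you would expect:

```python
import random

gen = random.Random(850779834)   # seeded generator: not for security,
a = gen.randbytes(8)             # but fully reproducible

gen.seed(850779834)              # reset to the same state...
b = gen.randbytes(8)             # ...and the same bytes come out

assert a == b
print(len(a), a == b)
```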
The performance of the new method is not my first motivation.
My first motivation is to avoid consumers of the random module writing a wrong implementation which would be biased. It's too easy to write biased functions without noticing.
Moreover, it seems like we can do something to get reproducible behavior on different architectures (different endianness) which would also be a nice feature.
For example, in bpo-13396, Amaury found this two functions in the wild:
* struct.pack("Q", random.getrandbits(64))
* sha1(str(random.getrandbits(8*20))).digest()
As I wrote, users are creative to workaround missing features :-) I don't think that these two implementations give the same result on big and little endian.
All Random methods give the same result independently of endianess and bitness of the platform.
> I don't think that these two implementations give the same result on big and little endian.
The second one does.
I wrote a quick benchmark:
---
import pyperf
import random

gen = random.Random()
# gen = random.SystemRandom()
gen.seed(850779834)

if 1:  # hasattr(gen, 'randbytes'):
    func = type(gen).randbytes
elif 0:
    def py_randbytes(gen, n):
        data = bytearray(n)
        i = 0
        while i < n:
            word = gen.getrandbits(32)
            word = word.to_bytes(4, 'big')
            chunk = min(n - i, 4)
            data[i:i+chunk] = word[:chunk]
            i += chunk
        return bytes(data)

    func = py_randbytes
else:
    def getrandbits_to_bytes(gen, n):
        return gen.getrandbits(n * 8).to_bytes(n, 'little')

    func = getrandbits_to_bytes

runner = pyperf.Runner()
for nbytes in (1, 4, 16, 1024, 1024 * 1024):
    runner.bench_func(f'randbytes({nbytes})', func, gen, nbytes)
---
Results on Linux using gcc -O3 (without LTO or PGO) using the C randbytes() implementation as the reference:
+--------------------+-------------+----------------------------------+-------------------------------+
| Benchmark | c_randbytes | py_randbytes | getrandbits_to_bytes |
+====================+=============+==================================+===============================+
| randbytes(1) | 71.4 ns | 1.04 us: 14.51x slower (+1351%) | 244 ns: 3.42x slower (+242%) |
+--------------------+-------------+----------------------------------+-------------------------------+
| randbytes(4) | 71.4 ns | 1.03 us: 14.48x slower (+1348%) | 261 ns: 3.66x slower (+266%) |
+--------------------+-------------+----------------------------------+-------------------------------+
| randbytes(16) | 81.9 ns | 3.07 us: 37.51x slower (+3651%) | 321 ns: 3.92x slower (+292%) |
+--------------------+-------------+----------------------------------+-------------------------------+
| randbytes(1024) | 1.05 us | 173 us: 165.41x slower (+16441%) | 3.66 us: 3.49x slower (+249%) |
+--------------------+-------------+----------------------------------+-------------------------------+
| randbytes(1048576) | 955 us | 187 ms: 196.30x slower (+19530%) | 4.37 ms: 4.58x slower (+358%) |
+--------------------+-------------+----------------------------------+-------------------------------+
* c_randbytes: PR 19527, randbytes() methods implemented in C
* py_randbytes: bytearray, getrandbits(), .to_bytes()
* getrandbits_to_bytes: Serhiy's implementation: gen.getrandbits(n * 8).to_bytes(n, 'little')
So well, the C randbytes() implementation is always the fastest.
random.SystemRandom().randbytes() (os.urandom(n)) performance using random.Random().randbytes() (Mersenne Twister) as a reference:
+--------------------+-------------+---------------------------------+
| Benchmark | c_randbytes | systemrandom |
+====================+=============+=================================+
| randbytes(1) | 71.4 ns | 994 ns: 13.93x slower (+1293%) |
+--------------------+-------------+---------------------------------+
| randbytes(4) | 71.4 ns | 1.04 us: 14.60x slower (+1360%) |
+--------------------+-------------+---------------------------------+
| randbytes(16) | 81.9 ns | 1.02 us: 12.49x slower (+1149%) |
+--------------------+-------------+---------------------------------+
| randbytes(1024) | 1.05 us | 6.22 us: 5.93x slower (+493%) |
+--------------------+-------------+---------------------------------+
| randbytes(1048576) | 955 us | 5.64 ms: 5.91x slower (+491%) |
+--------------------+-------------+---------------------------------+
os.urandom() is way slower than Mersenne Twister.
Well, that's not surprising: os.urandom() requires at least one syscall (getrandom() syscall on my Linux machine).
New changeset 9f5fe7910f4a1bf5a425837d4915e332b945eb7b by Victor Stinner in branch 'master':
bpo-40286: Add randbytes() method to random.Random (GH-19527)
New changeset 223221b290db00ca1042c77103efcbc072f29c90 by Serhiy Storchaka in branch 'master':
bpo-40286: Makes simpler the relation between randbytes() and getrandbits() (GH-19574)
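After this change, the pure-Python implementation is literally getrandbits(n * 8).to_bytes(n, 'little'), so the two spellings agree for equal generator states (assuming CPython 3.9 or later):

```python
import random

g1 = random.Random(123)
g2 = random.Random(123)

a = g1.randbytes(16)
b = g2.getrandbits(16 * 8).to_bytes(16, 'little')

assert a == b   # same state, same 16 bytes
print(a == b)
```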
New changeset 87502ddd710eb1f030b8ff5a60b05becea3f474f by Victor Stinner in branch 'master':
bpo-40286: Use random.randbytes() in tests (GH-19575)
The randbytes() method needs to depend on getrandbits(). It is documented that custom generators can supply their own random() and getrandbits() methods and expect that the other downstream generators all follow. See the attached example which demonstrates that randbytes() bypasses this framework pattern.
Also, please don't change the name of the genrand_int32() function. It was a goal to change as little as possible from the official, standard version of the C code at . For the most part, we just want to wrap that code for Python bindings, but not modify it.
Direct link to MT code that I would like to leave mostly unmodified:
When a new method gets added to a module, it should happen in a way that is in harmony with the module's design.
I created bpo-40346: "Redesign random.Random class inheritance".
$ ./python -m timeit -s 'import random' 'random.randbytes(10**6)'
200 loops, best of 5: 1.36 msec per loop
$ ./python -m timeit -s 'import random' 'random.getrandbits(10**6*8).to_bytes(10**6, "little")'
50 loops, best of 5: 6.31 msec per loop
The Python implementation is only 5 times slower than the C implementation. I am fine with implementing randbytes() in Python. This would automatically make it depend on the getrandbits() implementation.
Raymond:
>.
I don't see how 30 lines makes Python so much harder to maintain. These lines make the function 4x to 5x faster. We are not talking about 5% or 10% faster. I think that such an optimization is worth it. When did we decide to stop optimizing Python?
Raymond:
> The randbytes() method needs to depend on getrandbits().
I created bpo-40346: "Redesign random.Random class inheritance" for a more generic fix, not just randbytes().
Raymond:
> Also, please don't change the name of the genrand_int32() function. It was a goal to change as little as possible from the official, standard version of the C code at .
This code was already modified to replace "unsigned long" with "uint32_t" for example. I don't think that renaming genrand_int32() to genrand_uint32() makes the code impossible to maintain. Moreover, it seems like was not updated for 13 years.
Raymond:
> The randbytes() method needs to depend on getrandbits().
I created PR 19700 which allows to keep the optimization (C implementation in _randommodule.c) and Random subclasses implement randbytes() with getrandbits().
New changeset 2d8757758d0d75882fef0fe0e3c74c4756b3e81e by Victor Stinner in branch 'master':
bpo-40286: Remove C implementation of Random.randbytes() (GH-19797)
It removed the C implementation of randbytes(): it was the root issue which started discussions here and in bpo-40346. I rejected bpo-40346 (BaseRandom) and related PRs.
I close the issue.
New changeset f01d1be97d740ea0369379ca305646a26694236e by Raymond Hettinger in branch 'master':
bpo-40286: Put methods in correct sections. Add security notice to use secrets for session tokens. (GH-19870) | https://bugs.python.org/issue40286 | CC-MAIN-2020-45 | en | refinedweb |
C++ Condition Statements
The if statement
The if statement can cause other statements to execute only under certain conditions. You might think of the statements in a procedural program as individual steps taken as you are walking down a road. To reach the destination, you must start at the beginning and take each step, one after the other, until you reach the destination.
The program below illustrates the use of an if statement. The user enters three test scores and the program calculates their average. If the average equals 100, the program congratulates the user on earning a perfect score.
#include <iostream>
#include <iomanip>
using namespace std;

int main()
{
    int score1, score2, score3;
    double average;

    // Get the three test scores
    cout << "Enter 3 test scores and I will average them: ";
    cin >> score1 >> score2 >> score3;

    // Calculate and display the average score
    average = (score1 + score2 + score3) / 3.0;
    cout << fixed << showpoint << setprecision(1);
    cout << "Your average is " << average << endl;

    // If the average equals 100, congratulate the user
    if (average == 100)
    {
        cout << "Congratulations! ";
        cout << "That's a perfect score!\n";
    }
    return 0;
}
Output:
Enter 3 test scores and I will average them: 80 90 70
Your average is 80.0

Output (if average == 100):
Enter 3 test scores and I will average them: 100 100 100
Your average is 100.0
Congratulations! That's a perfect score!
The if else statement
The if else statement will execute one set of statements when the if condition is true, and another set when the condition is false. The if...else statement is an expansion of the if statement.
This program uses the modulus operator to determine if a number is odd or even. If the number is evenly divisible by 2, it is an even number. A remainder indicates it is odd.
#include <iostream>
using namespace std;

int main()
{
    int number;

    cout << "Enter an integer and I will tell you if it\n";
    cout << "is odd or even. ";
    cin >> number;

    if (number % 2 == 0)
        cout << number << " is even.\n";
    else
        cout << number << " is odd.\n";
    return 0;
}
Output:
Enter an integer and I will tell you if it
is odd or even. 17
17 is odd.
The else part at the end of the if statement specifies one or more statements that are to be executed when the condition is false.
When number % 2 does not equal 0, a message is printed indicating the number is odd. Note that the program will only take one of the two paths in the if...else statement.
The header file ap_config_auto.h is a bit flawed. It defines PACKAGE_NAME and PACKAGE_VERSION, and similar constants. If one would like to compile their own module, one could write something like:
#ifdef HAVE_CONFIG_H
#include <config.h>
#endif
#include <httpd.h>
#include <http_config.h>
#include <http_protocol.h>
#include <ap_config.h>
This would give compiler warnings (not errors) about PACKAGE_NAME being re-defined. This is a bit sloppy and certainly not necessary. Hardly any C program defines PACKAGE_NAME and similar constants outside config.h, and header files should certainly not re-define them.
It would be better to have this in ap_config_auto.h:
#ifndef PACKAGE_BUGREPORT
# define PACKAGE_BUGREPORT ""
#endif
So it's not very critical, but it's also not nice to cause warnings that need not be caused. | https://bz.apache.org/bugzilla/show_bug.cgi?id=46578 | CC-MAIN-2020-45 | en | refinedweb |
Welcome! Word representations initially started off with the one-hot encoding approach, where each word in the text is represented using an array whose length is equal to the number of unique words in the vocabulary.
Ex: Sentence 1: The mangoes are yellow.
Sentence 2: The apples are red.
The unique words are {The, mangoes, are, yellow, apples, red}. Hence sentence 1 will be represented as [1,1,1,1,0,0] and sentence 2 will be [1,0,1,0,1,1].
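A minimal sketch of this encoding in Python (the helper name is mine):

```python
sentences = [["The", "mangoes", "are", "yellow"],
             ["The", "apples", "are", "red"]]
vocab = ["The", "mangoes", "are", "yellow", "apples", "red"]

def encode(sentence):
    # 1 if the vocabulary word appears in the sentence, else 0
    present = set(sentence)
    return [1 if word in present else 0 for word in vocab]

print(encode(sentences[0]))  # [1, 1, 1, 1, 0, 0]
print(encode(sentences[1]))  # [1, 0, 1, 0, 1, 1]
```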
This approach works well for small datasets but doesn't scale efficiently to very large ones. Hence several n-gram models were implemented for this. We shall not explore that area in this tutorial. The topic of interest is the word2vec model for the generation of word embeddings. This covers many concepts of machine learning. We shall learn about a single-hidden-layer neural network, embeddings, and various optimisation techniques.
Any machine learning algorithm needs three domains to work hand in hand: representation of the classifier, evaluation of the hypothesis, and optimization of the model for higher accuracy.
In the word2vec model, we have a single hidden layered neural network of size N, that is used to obtain the word embeddings in a dimension N. The way to visualise the embeddings is as follows…
Continuous Bag of Words Model (CBOW): Introduced by Tomas Mikolov in his paper, this model assumes that there is only one word considered per context. Hence the model will predict one target word given one context word. Let the vocabulary size be V.
(CBOW model with only one word in context)
The weights matrix between the input layer and the output layer can be represented by a V*N matrix. Each row of the matrix represents the embedding vector for each word. Note that the activation function in this case is a linear function. The objective function is the conditional probability of observing the actual output word given the input context word. We need to maximise the objective function, that is maximise the prediction of a word given its context… Simple right!
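To see why each row of that V*N matrix is an embedding vector, note that with a one-hot input and a linear activation, the hidden layer simply selects a row (toy numbers, my own sketch):

```python
import numpy as np

V, N = 6, 3                       # toy vocabulary size and embedding dimension
W = np.random.rand(V, N)          # input-to-hidden weights

def hidden(word_index):
    one_hot = np.zeros(V)
    one_hot[word_index] = 1.0
    return one_hot @ W            # linear activation: no nonlinearity

h = hidden(2)
print(np.allclose(h, W[2]))       # True: the hidden layer is row 2 of W
```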
CBOW also has a multi- word context, where instead of having one word in the context, it takes average of a certain window sized length of words, and then sends it as an input to the neural net.
The skip-gram model, introduced in Mikolov et al., is the opposite of the CBOW model. The target word is now at the input layer, and the context words are on the output layer.
The objective function is the probability of the output word in the group of target words given the context word; W_O,c is the actual output word in the cth group of output words.
(Objective function)
The word2vec model implements skip-gram, and now let's have a look at the code. Gensim also offers a faster word2vec implementation. We shall look at the source code for Word2Vec. Let's import all the required libraries and the dataset available in nltk.corpus. It is a replica of Project Gutenberg.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import collections
import math
import random

import numpy as np
from six.moves import xrange
import tensorflow as tf

import nltk
from nltk.corpus import brown

# this is the dataset of interest
emma = nltk.corpus.gutenberg.words('austen-emma.txt')

# flatten the corpus into a plain list of word tokens
e_list = list()
for word in emma:
    e_list.append(word)
vocabulary = e_list
vocabulary_size = len(vocabulary)
# print(vocabulary, vocabulary_size)
Let's preprocess the dataset by getting rid of uncommon words and marking them as UNK tokens.
def build_dataset(words, n_words):
  """Process raw inputs into a dataset."""
  count = [['UNK', -1]]
  count.extend(collections.Counter(words).most_common(n_words - 1))
  dictionary = dict()
  for word, _ in count:
    dictionary[word] = len(dictionary)
  data = list()
  unk_count = 0
  for word in words:
    if word in dictionary:
      index = dictionary[word]
    else:
      index = 0  # dictionary['UNK']
      unk_count += 1
    data.append(index)
  count[0][1] = unk_count
  reversed_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
  return data, count, dictionary, reversed_dictionary

data, count, dictionary, reverse_dictionary = build_dataset(vocabulary, vocabulary_size)
del vocabulary  # Hint to reduce memory.
print('Most common words (+UNK)', count[:5])
print('Sample data', data[:10], [reverse_dictionary[i] for i in data[:10]])

data_index = 0
Implementing the skip gram model is the next part.
# Step 3: Function to generate a training batch for the skip-gram model.
def generate_batch(batch_size, num_skips, skip_window):
  global data_index
  assert batch_size % num_skips == 0
  assert num_skips <= 2 * skip_window
  batch = np.ndarray(shape=(batch_size), dtype=np.int32)
  labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
  span = 2 * skip_window + 1  # [ skip_window target skip_window ]
  buffer = collections.deque(maxlen=span)
  if data_index + span > len(data):
    data_index = 0
  buffer.extend(data[data_index:data_index + span])
  data_index += span
  for i in range(batch_size // num_skips):
    context_words = [w for w in range(span) if w != skip_window]
    words_to_use = random.sample(context_words, num_skips)
    for j, context_word in enumerate(words_to_use):
      batch[i * num_skips + j] = buffer[skip_window]
      labels[i * num_skips + j, 0] = buffer[context_word]
    if data_index == len(data):
      buffer.extend(data[0:span])
      data_index = span
    else:
      buffer.append(data[data_index])
      data_index += 1
  # Backtrack a little bit to avoid skipping words in the end of a batch
  data_index = (data_index + len(data) - span) % len(data)
  return batch, labels

batch, labels = generate_batch(batch_size=8, num_skips=2, skip_window=1)
for i in range(8):
  print(batch[i], reverse_dictionary[batch[i]], '->',
        labels[i, 0], reverse_dictionary[labels[i, 0]])
Training the Skip gram model results in the model understanding the language structure.
# Step 4: Build and train a skip-gram model.

batch_size = 128
embedding_size = 128  # Dimension of the embedding vector.
skip_window = 1       # How many words to consider left and right.
num_skips = 2         # How many times to reuse an input to generate a label.

# We pick a random validation set to sample nearest neighbors.
valid_size = 16       # Random set of words to evaluate similarity on.
valid_window = 100    # Only pick dev samples in the head of the distribution.
valid_examples = np.random.choice(valid_window, valid_size, replace=False)
num_sampled = 64      # Number of negative examples to sample.

graph = tf.Graph()

with graph.as_default():

  # Input data.
  train_inputs = tf.placeholder(tf.int32, shape=[batch_size])
  train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
  valid_dataset = tf.constant(valid_examples, dtype=tf.int32)

  # Ops and variables pinned to the CPU because of missing GPU implementation
  with tf.device('/cpu:0'):
    # Look up embeddings for inputs.
    embeddings = tf.Variable(
        tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
    embed = tf.nn.embedding_lookup(embeddings, train_inputs)

    # Construct the variables for the NCE loss
    nce_weights = tf.Variable(
        tf.truncated_normal([vocabulary_size, embedding_size],
                            stddev=1.0 / math.sqrt(embedding_size)))
    nce_biases = tf.Variable(tf.zeros([vocabulary_size]))

  # Compute the average NCE loss for the batch.
  # tf.nce_loss automatically draws a new sample of the negative labels each
  # time we evaluate the loss.
  loss = tf.reduce_mean(
      tf.nn.nce_loss(weights=nce_weights,
                     biases=nce_biases,
                     labels=train_labels,
                     inputs=embed,
                     num_sampled=num_sampled,
                     num_classes=vocabulary_size))

  # Construct the SGD optimizer using a learning rate of 1.0.
  optimizer = tf.train.GradientDescentOptimizer(1.0).minimize(loss)

  # Compute the cosine similarity between minibatch examples and all embeddings.
  norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
  normalized_embeddings = embeddings / norm
  valid_embeddings = tf.nn.embedding_lookup(normalized_embeddings, valid_dataset)
  similarity = tf.matmul(valid_embeddings, normalized_embeddings, transpose_b=True)

  # Add variable initializer.
  init = tf.global_variables_initializer()

# Step 5: Begin training.
num_steps = 100001

with tf.Session(graph=graph) as session:
  # We must initialize all variables before we use them.
  init.run()
  print('Initialized')

  average_loss = 0
  for step in xrange(num_steps):
    batch_inputs, batch_labels = generate_batch(batch_size, num_skips, skip_window)
    feed_dict = {train_inputs: batch_inputs, train_labels: batch_labels}

    # We perform one update step by evaluating the optimizer op (including it
    # in the list of returned values for session.run())
    _, loss_val = session.run([optimizer, loss], feed_dict=feed_dict)
    average_loss += loss_val

    if step % 2000 == 0:
      if step > 0:
        average_loss /= 2000
      # The average loss is an estimate of the loss over the last 2000 batches.
      print('Average loss at step ', step, ': ', average_loss)
      average_loss = 0

    # Note that this is expensive (~20% slowdown if computed every 500 steps)
    if step % 100000 == 0:
      sim = similarity.eval()
      for i in xrange(valid_size):
        valid_word = reverse_dictionary[valid_examples[i]]
        top_k = 8  # number of nearest neighbors
        nearest = (-sim[i, :]).argsort()[1:top_k + 1]
        print("nearest", nearest)
        log_str = 'Nearest to %s:' % valid_word
        for k in xrange(top_k):
          close_word = reverse_dictionary[nearest[k]]
          print(nearest[k])
          log_str = '%s %s,' % (log_str, close_word)
        print(log_str)
  final_embeddings = normalized_embeddings.eval()
Let’s visualise the embeddings.
# Step 6: Visualize the embeddings with tsne.
def plot_with_labels(low_dim_embs, labels, filename='tsne.png'):
  assert low_dim_embs.shape[0] >= len(labels), 'More labels than embeddings'
  plt.figure(figsize=(18, 18))  # in inches
  for i, label in enumerate(labels):
    x, y = low_dim_embs[i, :]
    plt.scatter(x, y)
    plt.annotate(label,
                 xy=(x, y),
                 xytext=(5, 2),
                 textcoords='offset points',
                 ha='right',
                 va='bottom')
  plt.savefig(filename)

try:
  # pylint: disable=g-import-not-at-top
  from sklearn.manifold import TSNE
  import matplotlib.pyplot as plt

  tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000)
  plot_only = 500
  low_dim_embs = tsne.fit_transform(final_embeddings[:plot_only, :])
  labels = [reverse_dictionary[i] for i in xrange(plot_only)]
  plot_with_labels(low_dim_embs, labels)

except ImportError:
  print('Please install sklearn, matplotlib, and scipy to show embeddings.')
Optimisation is used to refine the embeddings obtained. Let's review the various techniques that we know and use. I suggest you go through this due to the limitations of typing math on Medium.
(Results for comparison of various optimisers)
Hence, we can conclude that RMSProp and Adam, which are state of the art, do not work well on these models. On the other hand, Proximal Adagrad and SGD work really well. Let's see the results of Proximal Adagrad and SGD.
(Proximal Adaptive Gradient Descent Optimizer)
Check how words that often go together are represented close to each other in the images. Also, compare the location of the numbers in the two images, and decide which one is better accordingly!
(Stochastic Gradient Descent Optimizer)
This is the third tutorial in a five part series… Excited for the next two… Share your thoughts and feedback at lalith@dataturks.com. | https://hackernoon.com/various-optimisation-techniques-and-their-impact-on-generation-of-word-embeddings-3480bd7ed54f | CC-MAIN-2020-45 | en | refinedweb |
You want to build an email message with attachments in a servlet.
Use the JavaMail API for basic email messaging, and the JavaBeans Activation Framework (JAF) to generate the file attachments.
The JAF classes provide fine-grained control over setting up a file attachment for an email message.
If you are using both the JavaMail API and the JAF, make sure to import the packages in your servlet class:
import javax.activation.*; import javax.mail.*; import javax.mail.internet.*; //class definition continues
The sendMessage( ) method in Example 20-8 creates a new email message (specifically, a new javax.mail.internet.MimeMessage), adds its text message, and inserts a file attachment inside the message. The method then sends the message using the code you may have seen in Recipe 20.2 and Recipe 20.3:
Transport.send(mailMsg);
To accomplish this, the code creates a container (a javax.mail.Multipart object) and two javax.mail.BodyParts that make up the container. The first BodyPart is a text message (used usually to describe the file attachment to the user), while the second BodyPart is the file attachment (in this case, a Microsoft Word file). Then the code sets the content of the MimeMessage to the Multipart. In a nutshell, the MimeMessage (an email message) contains a Multipart, which itself is composed of two BodyParts: the email's text message and an attached file.
If you want to look at the headers of a MimeMessage that contains attachments, call the getAllHeaders( ) method on the MimeMessage . See Recipe 20.8 for details.
package com.jspservletcookbook;

import java.io.*;
import java.util.Properties;
import javax.activation.*;
import javax.mail.*;
import javax.mail.internet.*;
import javax.servlet.*;
import javax.servlet.http.*;

public class EmailAttachServlet extends HttpServlet {

    //default value for mail server address, in case the user
    //doesn't provide one
    private final static String DEFAULT_SERVER = "mail.attbi.com";

    public void doPost(HttpServletRequest request,
        HttpServletResponse response) throws ServletException,
        java.io.IOException {

        response.setContentType("text/html");
        java.io.PrintWriter out = response.getWriter();
        out.println(
            "<html><head><title>Email message sender</title></head><body>");

        String smtpServ = request.getParameter("smtp");
        if (smtpServ == null || smtpServ.equals(""))
            smtpServ = DEFAULT_SERVER;

        String from = request.getParameter("from");
        String to = request.getParameter("to");
        String subject = request.getParameter("subject");

        try {
            sendMessage(smtpServ, to, from, subject);
        } catch (Exception e) {
            throw new ServletException(e.getMessage());
        }

        out.println("<H2>Your attachment has been sent.</H2>");
        out.println("</body></html>");
    }//doPost

    public void doGet(HttpServletRequest request,
        HttpServletResponse response) throws ServletException,
        java.io.IOException {

        doPost(request, response);
    }//doGet

    private void sendMessage(String smtpServ, String to, String from,
        String subject) throws Exception {

        Multipart multipart = null;
        BodyPart bpart1 = null;
        BodyPart bpart2 = null;

        Properties properties = System.getProperties();

        //populate the 'Properties' object with the mail
        //server address, so that the default 'Session'
        //instance can use it
        properties.put("mail.smtp.host", smtpServ);

        Session session = Session.getDefaultInstance(properties);

        Message mailMsg = new MimeMessage(session);//a new email message
        InternetAddress[] addresses = null;

        try {
            if (to != null) {
                //throws 'AddressException' if the 'to' email address
                //violates RFC822 syntax
                addresses = InternetAddress.parse(to, false);
                mailMsg.setRecipients(Message.RecipientType.TO, addresses);
            } else {
                throw new MessagingException(
                    "The mail message requires a 'To' address.");
            }

            if (from != null) {
                mailMsg.setFrom(new InternetAddress(from));
            } else {
                throw new MessagingException(
                    "The mail message requires a valid 'From' address.");
            }

            if (subject != null)
                mailMsg.setSubject(subject);

            //This email message's content is a 'Multipart' type;
            //the MIME type for the message's content is 'multipart/mixed'
            multipart = new MimeMultipart();

            //The text part of this multipart email message
            bpart1 = new MimeBodyPart();

            String textPart =
                "Hello, just thought you'd be interested in this Word file.";

            //create the DataHandler object for the text part
            DataHandler data = new DataHandler(textPart, "text/plain");

            //set the text BodyPart's DataHandler
            bpart1.setDataHandler(data);

            //add the text BodyPart to the Multipart container
            multipart.addBodyPart(bpart1);

            //create the BodyPart that represents the attached Word file
            bpart2 = new MimeBodyPart();

            //create the DataHandler that points to a File
            FileDataSource fds = new FileDataSource(
                new File("h:/book/chapters/chap1/chap1.doc"));

            //associate the MIME type 'application/msword' with the
            //'doc' file extension
            MimetypesFileTypeMap mtmap = new MimetypesFileTypeMap();
            mtmap.addMimeTypes("application/msword doc DOC");
            fds.setFileTypeMap(mtmap);

            //The DataHandler is instantiated with the
            //FileDataSource we just created
            DataHandler fileData = new DataHandler(fds);

            //the BodyPart will contain the word processing file
            bpart2.setDataHandler(fileData);

            //add the second BodyPart, the one containing the attachment,
            //to the Multipart object
            multipart.addBodyPart(bpart2);

            //finally, set the content of the MimeMessage to the
            //Multipart object
            mailMsg.setContent(multipart);

            //send the mail message; throws a 'SendFailedException'
            //if any of the message's recipients have an invalid address
            Transport.send(mailMsg);

        } catch (Exception exc) {
            throw exc;
        }//try
    }//sendMessage
}//EmailAttachServlet
The comments in Example 20-8 explain what happens when you use the javax.activation classes to create a file attachment of the intended MIME type. The most confusing part is creating a javax.activation.FileDataSource that points to the file that you want to attach to the email message. The code uses the FileDataSource to instantiate the javax.activation.DataHandler, which is responsible for the content of the file attachment.
//create the DataHandler that points to a File
FileDataSource fds = new FileDataSource(
    new File("h:/book/chapters/chap1/chap1.doc"));
Make sure that the MimeMessage identifies the attached file as a MIME type of application/msword, so that the user's email application can try to handle the attachment as a Microsoft Word file. Set the FileTypeMap of the FileDataSource with the following code:
//associate the MIME type 'application/msword' with the 'doc' extension
MimetypesFileTypeMap mtmap = new MimetypesFileTypeMap();
mtmap.addMimeTypes("application/msword doc DOC");
fds.setFileTypeMap(mtmap);
A MimetypesFileTypeMap is a class that associates MIME types (like application/msword) with file extensions such as .doc.
Make sure you associate the correct MIME type with the file that you are sending as an attachment, since you explicitly make this association in the code. See for further details.
Then the code performs the following steps:
Creates a DataHandler by passing this FileDataSource in as a constructor parameter.
Sets the content of the BodyPart with that DataHandler.
Adds the BodyPart to the Multipart object (which in turn represents the content of the email message).
Sun Microsystem's JavaMail API page:; the JAF web page:; Recipe 20.1 on adding JavaMail- related JARs to your web application; Recipe 20.2 on sending email from a servlet; Recipe 20.3 on sending email using a JavaBean; Recipe 20.4 covering how to access email in a servlet; Recipe 20.5 on accessing email with a JavaBean; Recipe 20.6 on handling attachments in a servlet; Recipe 20.8 on reading an email's headers. | https://flylib.com/books/en/4.255.1.208/1/ | CC-MAIN-2020-45 | en | refinedweb |
For the past few months, we’ve been using React.js here at Jscrambler. We may even write about it again in the future, but this post is a short piece about a specific and very interesting React feature.
Since React v0.12 there’s a new yet undocumented feature that gives new possibilities on how to communicate between components, we’re talking about contexts. The feature is still undergoing changes, but it’s already being used in a lot of projects.
In brief, a context is an object that is implicitly passed from a component to its children, so by using contexts you don’t have to explicitly pass around a whole bunch of
props to pass some contextual data. This was one of the not so elegant parts of React that went away when contexts were introduced.
Apart from the missing official documentation, as of the time of this writing, it’s very difficult to find accurate literature that sheds some light on how React contexts really work. We spent some time understanding its inner workings and using it here at JScrambler, so we thought to contribute with this blog post.
In this article, we’ll show you how to code a toggle that changes the context of the parent and a panel that changes its content given a different context.
Parent Component
var React = require('react'); var DummyWrapper = require('./dummy-wrapper'); var ItemToggle = require('./item-toggle'); var Parent = React.createClass({ getInitialState: function() { return {} }, childContextTypes: { activeItem: React.PropTypes.any }, getChildContext: function() { return { activeItem: this.state.activeItem }; }, setActiveItem: function(item) { this.setState({ activeItem: item }); }, render: function() { return ( < div > < ItemToggle setActiveItem = { this.setActiveItem } /> < DummyWrapper / > < /div> ); } }); module.exports = Parent;
childContextTypes is useful while in development because it validates the schema against the object returned from
getChildContext. While in production this step is bypassed.
Here we render the
ItemToggle, while passing down
@setActiveItem as prop to allow the child to change the parent state (and context). We also render a
DummyWrapper that will have a child that depends on the
Parent component.
Item Toggle Component
var React = require('react'); var ItemToggle = React.createClass({ onClick: function(type) { var item; switch (type) { case 'A': item = "Item A"; break; case 'B': item = "Item B"; break; default: throw new Error('Unimplemented type'); } this.props.setActiveItem(item); }, render: function() { return ( < div > Select an Item: < ul > < li onClick = { this.onClick.bind(this, 'A') } > Item A < li onClick = { this.onClick.bind(this, 'B') } > Item B < /ul> < /div> ); } }); module.exports = ItemToggle;
Here we receive a prop
setActiveItem from the parent that allows us to set the React component that is going to be instanced and set in the parent context.
To keep this example simple,
Item A and
Item B are just strings. Of course, you could easily replace them with React components.
Dummy Wrapper Component
var React = require('react'); var ActiveItemPanel = require('./active-item-panel'); var DummyWrapper = React.createClass({ render: function() { return <ActiveItemPanel / > ; } });
This component is just to show how
ActiveItemPanel will be able to access the context of the
Parent component even though they’re not even directly related.
Active Item Panel Component
var React = require('react') var ActiveItemPanel = React.createClass({ contextTypes: { activeItem: React.PropTypes.any }, render: function() { return ( < div > Active Item: { this.context.activeItem } < /div> ); } }); module.exports = ItemPanel;
Here we’re defining the context types that we’re receiving through
contextTypes.
Besides this, we simply render
@context.activeItem inside the component. As you can see we didn’t even pass any
prop to this component. The
context just lays there with no effort.
To play around with this example check this jsfiddle.
Conclusions
As of now, the API allows owner-ownee communication but it’s already announced that on React’s v0.14 release context will work on the parent-child relationship, which allows even more options than the former.
Though the API may change, this is extremely powerful and avoids a lot of explicit props to be passed from parents to children. At the same time, your component will be a little more coupled to where he’s being used, which in some scenarios may not be what you’re looking for.
Lastly, if you're building React applications with sensitive logic, be sure to protect them against code theft and reverse-engineering by following our guide. | https://blog.jscrambler.com/react-js-communication-between-components-with-contexts/ | CC-MAIN-2020-45 | en | refinedweb |
This article demonstrates how to get and set a user's profile information within a Windows Store app.
Windows 8 provides an API to get and set a user's account information including from a Windows 8 online account as well as local account. For example, we can get and set a user's display name or profile picture from code using this API. The UserInformation class is used for this purpose.
UserInformation Class
The UserInformation class provides static methods to get and set user profile settings including user name and account picture.
User Name
The GetDisplayNameAsync method gets the display name for the user account.
The GetFirstNameAsync method gets the user's first name.
The GetLastNameAsync method gets the user's last name.
The GetPrincipalNameAsync method gets the principal name for the user.
The following code snippet demonstrates how to get a user's name.
// Get display namestring displayName = await Windows.System.UserProfile.UserInformation.GetDisplayNameAsync();UserTextBox.Text = displayName;
// Get first name, last name and domain namestring firstName = await Windows.System.UserProfile.UserInformation.GetFirstNameAsync();string lastName = await Windows.System.UserProfile.UserInformation.GetLastNameAsync();string domainName = await Windows.System.UserProfile.UserInformation.GetDomainNameAsync();
Account Picture
The GetAccountPicture and SetAccountPictureAsync methods are used to get and set the user's account picture.
The following code snippet gets the user's picture and displays it in an Image control.
// Get account picture and display it in an Image controlStorageFile image = Windows.System.UserProfile.UserInformation.GetAccountPicture (Windows.System.UserProfile.AccountPictureKind.SmallImage) as StorageFile;if (image != null){ try { IRandomAccessStream imageStream = await image.OpenReadAsync(); BitmapImage bitmapImage = new BitmapImage(); bitmapImage.SetSource(imageStream); Image1.Source = bitmapImage; Image1.Visibility = Visibility.Visible;
}
catch (Exception ex)
{
}}
You need to make sure to import the following namespaces.
using Windows.Storage;using Windows.Storage.Streams;using Windows.UI.Xaml.Media.Imaging;
The UserInformation class also has a method to set a user account picture. Read the following article:
Set Account Picture in Windows Store app
Summary
In this article, we learned how to get and set the user name and account pictures.
View All | https://www.c-sharpcorner.com/UploadFile/mahesh/user-information-in-windows-8-app/ | CC-MAIN-2019-43 | en | refinedweb |
.18.2.>${release-train}</version> <scope>import</scope> <type>pom</type> </dependency> </dependencies> </dependencyManagement>
The current release train version is
Ingalls-SR14.:
public interface UserRepository extends CrudRepository<User, Long> { Long countByLastname(String lastname); }
The following list shows the interface definition for a derived delete query:
public:
public class SomeClient { @Autowired private PersonRepository repository; public the prior example, you defined a common base interface for all your domain repositories and exposed
find public class Person { … } interface UserRepository extends Repository<User, Long> { … } @Document public") interface public
Often it is necessary to provide a custom implementation for a few repository methods. Spring Data repositories easily allow you to provide custom repository code and integrate it with generic CRUD abstraction and query method functionality.
8); }
Then you can let your repository interface additionally extend from the fragment interface, as shown in the following example:
class UserRepositoryImpl implements UserRepositoryCustom {="MyPostfix" />
The first configuration example tries to look up a class
com.acme.repository.UserRepositoryImpl to act as custom repository implementation, whereas the second example will try to lookup
com.acme.repository.UserRepositoryFooBar.
Manual Wiring
If your custom implementation uses annotation-based configuration and autowiring only, the preceding approach shown works well, because it is treated as any other Spring bean. If your custom implementationDomainEvents> { T find") public class UserController { @Autowired UserRepository repository; @RequestMapping public String showUsers(Model model, Pageable pageable) { model.addAttribute("users", repository.findAll(pageable)); return "users"; } }
The preceding method signature causes Spring MVC try to derive a
Pageable instance from the request parameters by using the following default configuration:
To customize this behavior extend either
SpringDataWebConfiguration or the HATEOAS-enabled equivalent and:
public/retrieved using MongoDB.
Log4j log appender.ongo.DB, to communicate directly with MongoDB. The goal with naming conventions on various API artifacts is to copy those in the base MongoDB Java driver so you can easily map your existing knowledge onto the Spring APIs.
10.1. Getting Started
Spring MongoDB support requires MongoDB 2.6 or higher and Java SE 6 or higher.>{version}</version> </dependency> </dependencies>
Change the version of Spring in the pom.xml to be
<spring.framework.version>{springVersion}<ongo
MappingMongoConverter, but you can also write your own converter. Please refer to the section on MongoConverters property to an enum with the following values,
LOG,
EXCEPTION, or
NONE to either log the error, throw and exception or do nothing. The default is to use a
WriteResultChecking value of
NONE.
10: persistent. Now that we have a domain object, we can define an interface that uses it, as follows:
public interface PersonRepository extends PagingAndSortingRepository<Person, Long> { //.) }
The following table shows the keywords that are supported for query methods:.); }.
11.3()); } }
11.3"),.
12.
13. how to use conventions for mapping objects to documents and how to override those conventions with annotation-based mapping metadata.
13.
13driver. If the specified
idvalue cannot be converted to an ObjectId, then the value will be stored as is in the document’s _id field..
13.2.:
13ongo and MongoTemplate by using either Java-based or XML-based metadata. The following example uses Spring’s Java-based configuration:
.
13; }
13.4.1. Mapping Annotation Overview
The MappingMongoConverter can use metadata to drive the mapping of objects to documents. The following annotations are available:
@Id: Applied at the field level to mark the field used for identity.
@TextIndexed: Applied at the field level to mark the field to be included in the text index.
@Language: Applied at the field level to set the language override property for text index.
and described the name of the field as it will be represented in the MongoDB BSON document thus allowing the name to be different than the fieldname of the class.
@Version: Applied at field level is used for optimistic locking and checked for modification on save operations. The initial value is
zerowhich
13.
13.4.3. Compound Indexes
Compound indexes are also supported. They are defined at the class level, rather than on individual properties.
Here’s an example that creates a compound index of
lastName in ascending order and
age in descending order:
package com.mycompany.domain; @Document @CompoundIndexes({ @CompoundIndex(name = "age_idx", def = "{'lastName': 1, 'age': -1}") }) public class Person { @Id private ObjectId id; private Integer age; private String firstName; private String lastName; } selectively handle the conversion for a particular type or.
Below is an example of a Spring Converter implementation that converts from a DBObject to a Person POJO.
@ReadingConverter.
@WritingConverter; } }
14. Cross Store Support implementation...
17.1. Using Spring Data MongoDB with MongoDB 3.0
The rest of this section describes how to use Spring Data MongoDB with MongoDB 3.0.
17>
17.
17ongo mongo() throws Exception {>
17.1. | https://docs.spring.io/spring-data/mongodb/docs/1.10.14.RELEASE/reference/html/ | CC-MAIN-2019-43 | en | refinedweb |
The common language runtime defines a common runtime for all .NET languages. Although C# and VB.NET are the two flagship languages of .NET, any language that targets the common language runtime is on equal footing with any other language. In this section we'll talk about the features that the .NET languages offer.
The common language runtime deals in managed types. The runtime can load types, execute methods of a type, and instantiate objects of a type. Although the common language runtime supports several forms of types, such as classes, interfaces, structures, enumerations, and delegates, all of these forms ultimately are represented as classes at the lowest levels of the runtime. And although the common language runtime does support exporting entry points that are not enclosed in a type, at least two languages (VB.NET and C#) do not support entry points outside of a type definition.
Although in OOP it is possible to invoke some class methods without objects, the most common use of classes is to produce objects. In the common language runtime, an object is an instance of exactly one class. The operations that may be performed on an object are determined by the object's class. The amount and type of storage used by the object is determined by the object's class. In essence, the class acts as a factory or template for objects that belong to that class.
The common language runtime supports instantiating objects based on a class. Each programming language exposes this functionality somewhat differently. In C# and VB.NET, for example, the new keyword is used to create a new object of a given class, as shown in the following two code snippets:
IceCream ic = new IceCream(); ic.MakeACone();
Dim ic as new IceCream(); ic.MakeACone ()
Classes are defined in C# and in VB.NET using the class keyword. The following code shows sample classes in each:
public class IceCream { string strFlavor = "Chocolate"; public void MakeACone() { System.Console.Write( "I have made a cone. My flavor is " ); System.Console.WriteLine( strFlavor + "." ); } }
Public Class IceCream Dim strFlavor as string = "Chocolate" Public Sub MakeACone() System.Console.Write("I have made a cone. My flavor is ") System.Console.WriteLine(strFlavor & ".") End Sub End Class
By default, all class members are implicitly private and can be accessed only by methods of that class. This access can be made explicit using the private access modifier. Class members can be made accessible to all parties using the public access modifier, which informs the runtime that any party that can access the class can access this particular member. Class members can also be made accessible only when the assembly that contains the class is accessible. This is accomplished using the internal access modifier. Members marked internal can be accessed only by code that is compiled into the same assembly as the containing class. Table 1.3 shows the various protection keywords.
C#
VB.NET
Description
Type
public
Public
Type is visible everywhere
internal
Private
Type is only visible inside of assembly
Member
Member is visible everywhere
Friend
Member is visible only inside of assembly
private
Member is visible only inside of declaring type
protected
Protected
Member is visible to type and deriving type only
Unless the programmer takes special steps, the fields of the class will be set to a well-known initial value when an object is created. Numeric types are set to zero, and objects are set to null (C#) or Nothing (VB). You can change the values that are used by writing a constructor. A constructor is a special method that is called automatically to set the class's fields to a programmer-determined state prior to the first use of the object (or class).
Constructors are called when a new object is created, before the new operator returns the reference to the new object. Constructors may accept parameters, and they may be overloaded based on parameter count or type. The following code shows typical constructors:
public class IceCream { private int i = 5; public IceCream() { System.Console.WriteLine( "This is delicious!" ); } }
Public Class IceCream Sub New() System.Console.WriteLine( "This is delicious!" ) End Sub End Class
Namespaces in the .NET runtime are used to organize classes and types into separate spaces. You define namespaces using the namespace keyword, as shown in the following code:
namespace Rick { public class MyClass { public static void DoSomething() { } } }
The using keyword in C# promotes elements in other namespaces into the global namespace. The following code shows an example of referencing two separate namespaces with the using keyword:
using Rick; using System; MyClass.DoSomething(); Console.WriteLine( "Just called DoSomething()" );
An interface is a special kind of type in the common language runtime. Interfaces are used to partition the space of all possible objects into subcategories based on shared semantics. When two objects support the same interface, one can assume that the two objects share the functionality or behavior implied by the shared interface. A given object might support multiple interfaces, which implies that it belongs to multiple categories of objects.
Interface-based design was first popularized in component-based software engineering (such as COM and CORBA). Interface-based designs tend to express application constraints in a less implementation-specific manner than traditional class-based, object-oriented software engineering. In general, interface-based software is more extensible, maintainable, and easier to evolve than traditional class-based designs.
Part of a class definition is the list of supported interfaces. A given class may support as many interfaces as it wishes. Each language provides a syntax for expressing the list of supported interfaces. In C#, a class definition must list the interfaces it supports between the class name and the opening curly brace, as show in the following code:
public interface IIceCream { void TakeABite(); void AddSyrup(); } public class Sundae : IIceCream { public void TakeABite() { // code goes here... } public void AddSyrup() { // Code goes here... } }
When a class supports an interface, instances of that class are acceptable anywhere a reference of the supported interfaces is allowed. This means that variables of a given class type may be passed where a supported interface type is expected.
Interfaces typically have one or more method declarations. When an interface contains a method declaration, all that is specified in the interface definition is the signature of the method; no actual executable code is provided in the interface definition. Rather, each class that supports the interface must provide its own implementation of that method signature that complies with the method's semantics as documented.
The method declarations that appear inside an interface definition are sometimes called abstract methods. Abstract methods are method declarations that must be supported in a derived class. If a given class does not provide an implementation of every method defined in every interface it claims to support, that class is itself abstract and cannot be used to instantiate objects.
Interfaces that have methods typically have several methods that work together in concert to define a protocol for interacting with objects of that type. The order of method invocation and acceptable parameter values often are documented as part of this protocol. To this end, interfaces act as useful fools for partitioning a software project into largely independent components.
Interfaces themselves can be marked as either public for inter-assembly use or internal for intra-assembly use. All methods of an interface must be public. In support of this requirement, it is illegal to use the public keyword inside of an interface definition.
The common language runtime supports two method-invocation mechanisms: virtual and non-virtual. With both mechanisms, the actual method invoked is based on type. In the case of non-virtual methods, the choice of implementation is based on the compile-time type of the variable expression used to invoke the method. In the case of virtual methods, the choice of implementation is based on the run-time type of the most-derived class of the object, independent of the compile-time type of the object reference used to invoke the method.
All abstract methods are also virtual. This means that all methods invoked via interface-based references are virtual. For example, because they are abstract, all methods declared in an interface are virtual.
Each language provides its own syntax for indicating that a method is virtual. In C#, methods declared in a class are virtual if they use either the abstract or virtual method modifiers. In VB.NET, methods declared in a class are virtual if they use either the MustOverride (wherein there is no default implementation) or Overridable (wherein there is a default implementation) method modifiers. A method declared as virtual/overridable must have a method implementation. However, a derived type is permitted to override the base class's implementation with one of its own.
When a derived class declares a method whose name and signature matches a method declared in a base class or interfaces, there is room for confusion. To keep the confusion to a minimum, the common language runtime requires the derived type to indicate its relationship to the base type's method declaration. The rules work differently depending on whether the base method declaration was virtual or non-virtual.
When a derived class declares a method whose name and signature matches a method declared as virtual or abstract in its base class, the derived class must indicate whether it is overriding the virtual method of the base or trying to introduce a new method. In C#, the derived method declaration must use the override method modifier. In VB.NET, the derived method declaration must use the override method modifier. The following code shows a C# class with a virtual method, and a derived class overriding the method:
public class BaseClass { public virtual void DoIt() { Console.WriteLine( "In BaseClass.DoIt()" ); } } public class DerivedClass : BaseClass { public override void DoIt() { Console.WriteLine( "In DerivedClass.DoIt()" ); } } BaseClass bc = new DerivedClass(); bc.DoIt(); // Writes "In DerivedClass.DoIt()"
Each programming language provides its own constructs for defining properties. A property definition in C# looks like a hybrid of a field declaration with scoped method definitions. The following code shows a class with a property definition:
public class IceCreamCone { int m_nScoops; public int Scoops { get { return m_nScoops; } set { m_nAge = p.nScoops; } } } // Using the property using System; IceCreamCone p = new IceCreamCone(); p.Scoops = 4; Console.WriteLine( p.Scoops );
Every language has its own way of allowing attributes to be applied. The following code shows how to apply attributes to a C# class:
[ ShowInToolbox(true) ] public class ThisControl : WebControl { private string text; [ Bindable(true), DefaultValue("") ] [ Category("Appearance") ] public string Text { get { return text; } set { text = Value; } } }
Exceptions are instances of classes that extend the System.Exception type either directly or indirectly. The common language runtime provides built-in exception types that extend System.Exception, System.ApplicationException, and System.SystemException. The following code shows how to catch an exception in C#:
using System; using System.IO; public Class MyClass { public static void Main() { try { File f( "C:\\SomeData.txt" ); f.Delete(); } catch( DirectoryNotFoundException ex ) { // This is a common exception for I/O. Console.WriteLine( ex.Message ); } catch( Exception ex ) { // Here we catch all other exceptions. } } } | https://flylib.com/books/en/2.627.1.13/1/ | CC-MAIN-2019-43 | en | refinedweb |
Support vector machines (SVMs) are supervised machine learning algorithms that are implemented in a unique way when compared to other machine learning algorithms.
In this article we'll see what support vector machines algorithms are, the brief theory behind support vector machine and their implementation in Python's Scikit-Learn library. We will then move towards an advanced SVM concept, known as Kernel SVM, and will also implement it with the help of Scikit-Learn.
Simple SVM
In the case of linearly separable data in two dimensions, as shown in Fig. 1, a typical machine learning algorithm tries to find a boundary that divides the data in such a way that the misclassification error is minimized. If you look closely at Fig. 1, you can see that there are several boundaries that correctly divide the data points: the two dashed lines, as well as the solid line, all classify the data correctly.
Fig 1: Multiple Decision Boundaries
SVM differs from the other classification algorithms in the way that it chooses the decision boundary that maximizes the distance from the nearest data points of all the classes. An SVM doesn't merely find a decision boundary; it finds the most optimal decision boundary.
The most optimal decision boundary is the one which has the maximum margin from the nearest points of all the classes. The data points nearest to the decision boundary, whose distance from it defines this margin, are called support vectors, as seen in Fig 2. The decision boundary in the case of support vector machines is called the maximum margin classifier, or the maximum margin hyperplane.
Fig 2: Decision Boundary with Support Vectors
There is complex mathematics behind finding the support vectors, calculating the margin between the decision boundary and the support vectors, and maximizing this margin. In this tutorial we will not go into the details of the mathematics; rather, we will see how SVM and Kernel SVM are implemented via the Python Scikit-Learn library.
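Even without the math, the "support vector" idea can be made concrete in a few lines of code. The snippet below is an illustrative sketch (the data points are made up): it fits a linear SVM to two small 2D clusters and reads the support vectors back from the trained model through Scikit-Learn's support_vectors_ attribute.

```python
import numpy as np
from sklearn.svm import SVC

# Two small, linearly separable 2D clusters (made-up points for illustration)
X = np.array([[1, 1], [2, 1], [1, 2],    # class 0
              [5, 5], [6, 5], [5, 6]])   # class 1
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel='linear')
clf.fit(X, y)

# The maximum-margin boundary is defined only by these points
print(clf.support_vectors_)

# For a linear kernel, the margin width equals 2 / ||w||
w = clf.coef_[0]
print(2 / np.linalg.norm(w))
```

Only the points returned by support_vectors_ determine the boundary; moving any other point, as long as it stays outside the margin, leaves the fitted classifier unchanged.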
Implementing SVM with Scikit-Learn
The dataset that we are going to use in this section is the same that we used in the classification section of the decision tree tutorial.
Our task is to predict whether a bank currency note is authentic or not based upon four attributes of the note, i.e. the skewness of the wavelet transformed image, the variance of the image, the entropy of the image, and the curtosis of the image. This is a binary classification problem and we will use the SVM algorithm to solve it. The rest of the section consists of standard machine learning steps.
Importing libraries
The following script imports required libraries:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
Importing the Dataset
The data is available for download at the following link:
The detailed information about the data is available at the following link:
Download the dataset from the Google drive link and store it locally on your machine. For this example the CSV file for the dataset is stored in the "Datasets" folder of the D drive on my Windows computer. The script reads the file from this path. You can change the file path for your computer accordingly.
To read data from a CSV file, the simplest way is to use the read_csv method of the pandas library. The following code reads the bank currency note data into a pandas dataframe:
bankdata = pd.read_csv("D:/Datasets/bill_authentication.csv")
Exploratory Data Analysis
There are virtually limitless ways to analyze datasets with a variety of Python libraries. For the sake of simplicity we will only check the dimensions of the data and see the first few records. To see the number of rows and columns of the data, execute the following command:
bankdata.shape
In the output you will see (1372,5). This means that the bank note dataset has 1372 rows and 5 columns.
To get a feel of how our dataset actually looks, execute the following command:
bankdata.head()
The output will look like this:
You can see that all of the attributes in the dataset are numeric. The label is also numeric i.e. 0 and 1.
Data Preprocessing
Data preprocessing involves (1) dividing the data into attributes and labels and (2) dividing the data into training and testing sets.
To divide the data into attributes and labels, execute the following code:
X = bankdata.drop('Class', axis=1)
y = bankdata['Class']
In the first line of the script above, all the columns of the bankdata dataframe are stored in the X variable, except the "Class" column, which is the label column. The drop() method drops this column.

In the second line, only the class column is stored in the y variable. At this point, the X variable contains the attributes while the y variable contains the corresponding labels.
Once the data is divided into attributes and labels, the final preprocessing step is to divide the data into training and test sets. Luckily, the model_selection module of the Scikit-Learn library contains the train_test_split method, which allows us to seamlessly divide data into training and test sets.

Execute the following script to do so:
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.20)
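One practical note: train_test_split shuffles the data before splitting, so every run of the script above can produce a slightly different split and slightly different evaluation numbers. Passing the optional random_state parameter makes the split reproducible. A minimal sketch with made-up arrays:

```python
import numpy as np
from sklearn.model_selection import train_test_split

X_demo = np.arange(20).reshape(10, 2)  # made-up data for illustration
y_demo = np.arange(10)

# The same seed yields the same split on every run
X_tr1, X_te1, y_tr1, y_te1 = train_test_split(
    X_demo, y_demo, test_size=0.20, random_state=42)
X_tr2, X_te2, y_tr2, y_te2 = train_test_split(
    X_demo, y_demo, test_size=0.20, random_state=42)

print(np.array_equal(y_te1, y_te2))  # True
```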
Training the Algorithm
We have divided the data into training and testing sets. Now is the time to train our SVM on the training data. Scikit-Learn contains the svm library, which holds built-in classes for different SVM algorithms. Since we are going to perform a classification task, we will use the support vector classifier class, which is written as SVC in Scikit-Learn's svm library. This class takes one parameter, which is the kernel type. This is very important. In the case of a simple SVM we simply set this parameter to "linear", since simple SVMs can only classify linearly separable data. We will see non-linear kernels in the next section.

The fit method of the SVC class is called to train the algorithm on the training data, which is passed as a parameter to the fit method. Execute the following code to train the algorithm:
from sklearn.svm import SVC

svclassifier = SVC(kernel='linear')
svclassifier.fit(X_train, y_train)
Making Predictions
To make predictions, the predict method of the SVC class is used. Take a look at the following code:
y_pred = svclassifier.predict(X_test)
Evaluating the Algorithm
The confusion matrix, precision, recall, and F1 measures are the most commonly used metrics for classification tasks. Scikit-Learn's metrics library contains the classification_report and confusion_matrix methods, which can be readily used to find the values for these important metrics.

Here is the code for finding these metrics:
from sklearn.metrics import classification_report, confusion_matrix

print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
Results
The evaluation results are as follows:
[[152   0]
 [  1 122]]
             precision    recall  f1-score   support

          0       0.99      1.00      1.00       152
          1       1.00      0.99      1.00       123

avg / total       1.00      1.00      1.00       275
From the results it can be observed that SVM slightly outperformed the decision tree algorithm. There is only one misclassification in the case of the SVM algorithm, compared to four misclassifications in the case of the decision tree algorithm.
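As a sanity check, the headline numbers can be recomputed by hand from the confusion matrix above: accuracy is the diagonal over the total, and the precision and recall for class 1 come from its column and row respectively.

```python
import numpy as np

# Confusion matrix as reported above: rows = true class, columns = predicted class
cm = np.array([[152, 0],
               [1, 122]])

accuracy = np.trace(cm) / cm.sum()       # (152 + 122) / 275, about 0.996
precision_1 = cm[1, 1] / cm[:, 1].sum()  # 122 / 122 = 1.00
recall_1 = cm[1, 1] / cm[1, :].sum()     # 122 / 123, about 0.99

print(accuracy, precision_1, recall_1)
```

These values match the 1.00 precision and 0.99 recall reported for class 1 in the classification report.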
Kernel SVM
In the previous section we saw how the simple SVM algorithm can be used to find the decision boundary for linearly separable data. However, in the case of non-linearly separable data, such as the one shown in Fig. 3, a straight line cannot be used as a decision boundary.
Fig 3: Non-linearly Separable Data
In case of non-linearly separable data, the simple SVM algorithm cannot be used. Rather, a modified version of SVM, called Kernel SVM, is used.
Basically, the kernel SVM projects non-linearly separable data from lower dimensions to linearly separable data in higher dimensions, in such a way that data points belonging to different classes are allocated to different dimensions. Again, there is complex mathematics involved in this, but you do not have to worry about it in order to use SVM. Rather, we can simply use Python's Scikit-Learn library to implement and use the kernel SVM.
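To make the idea a bit more concrete, here is a minimal sketch (not from the article) of the Gaussian/RBF kernel. It scores the similarity of two points as if they had been mapped into a much higher-dimensional space, without ever computing that mapping explicitly:

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    # K(x, y) = exp(-gamma * ||x - y||^2)
    # Nearby points score close to 1, distant points close to 0.
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.exp(-gamma * np.dot(diff, diff)))

print(rbf_kernel([0, 0], [0, 0]))  # identical points -> 1.0
print(rbf_kernel([0, 0], [3, 4]))  # distant points -> close to 0
```

This is the "kernel trick": the SVM only ever needs these pairwise similarity scores, so it can find a linear boundary in the implicit high-dimensional space at the cost of a simple formula.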
Implementing Kernel SVM with Scikit-Learn
Implementing Kernel SVM with Scikit-Learn is similar to the simple SVM. In this section, we will use the famous iris dataset to predict the category to which a plant belongs based on four attributes: sepal-width, sepal-length, petal-width and petal-length.
The dataset can be downloaded from the following link:
The rest of the steps are typical machine learning steps and need very little explanation until we reach the part where we train our Kernel SVM.
Importing Libraries
import numpy as np import matplotlib.pyplot as plt import pandas as pd
Importing the Dataset
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data" # Assign column names to the dataset colnames = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'Class'] # Read dataset to pandas dataframe irisdata = pd.read_csv(url, names=colnames)
Preprocessing
X = irisdata.drop('Class', axis=1) y = irisdata['Class']
Train Test Split
from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.20)
Training the Algorithm
To train the kernel SVM, we use the same
SVC class of the Scikit-Learn's
svm library. The difference lies in the value for the kernel parameter of the
SVC class. In the case of the simple SVM we used "linear" as the value for the kernel parameter. However, for kernel SVM you can use a Gaussian, polynomial, sigmoid, or precomputed kernel. We will implement polynomial, Gaussian, and sigmoid kernels to see which one works better for our problem.
1. Polynomial Kernel
In the case of polynomial kernel, you also have to pass a value for the
degree parameter of the
SVC class. This basically is the degree of the polynomial. Take a look at how we can use a polynomial kernel to implement kernel SVM:
from sklearn.svm import SVC svclassifier = SVC(kernel='poly', degree=8) svclassifier.fit(X_train, y_train)
Making Predictions
Now once we have trained the algorithm, the next step is to make predictions on the test data.
Execute the following script to do so:
y_pred = svclassifier.predict(X_test)
Evaluating the Algorithm
As usual, the final step is to evaluate the algorithm, this time with the polynomial kernel. Execute the following script:
from sklearn.metrics import classification_report, confusion_matrix print(confusion_matrix(y_test, y_pred)) print(classification_report(y_test, y_pred))
The output for the kernel SVM using polynomial kernel looks like this:
[[11 0 0] [ 0 12 1] [ 0 0 6]] precision recall f1-score support Iris-setosa 1.00 1.00 1.00 11 Iris-versicolor 1.00 0.92 0.96 13 Iris-virginica 0.86 1.00 0.92 6 avg / total 0.97 0.97 0.97 30
Now let's repeat the same steps for Gaussian and sigmoid kernels.
2. Gaussian Kernel
Take a look at how we can use the Gaussian kernel to implement kernel SVM:
from sklearn.svm import SVC svclassifier = SVC(kernel='rbf') svclassifier.fit(X_train, y_train)
To use the Gaussian kernel, you have to specify 'rbf' as the value for the kernel parameter of the SVC class. After making predictions and evaluating as before, the output of the kernel SVM with the Gaussian kernel looks like this:
[[11 0 0] [ 0 13 0] [ 0 0 6]]
3. Sigmoid Kernel
Finally, let's use a sigmoid kernel for implementing Kernel SVM. Take a look at the following script:
from sklearn.svm import SVC svclassifier = SVC(kernel='sigmoid') svclassifier.fit(X_train, y_train)
To use the sigmoid kernel, you have to specify 'sigmoid' Sigmoid kernel looks like this:
[[ 0 0 11] [ 0 0 13] [ 0 0 6]] precision recall f1-score support Iris-setosa 0.00 0.00 0.00 11 Iris-versicolor 0.00 0.00 0.00 13 Iris-virginica 0.20 1.00 0.33 6 avg / total 0.04 0.20 0.07 30
Comparison of Kernel Performance
If we compare the performance of the different types of kernels, we can clearly see that the sigmoid kernel performs the worst. This is because the sigmoid function squashes its output into the range between 0 and 1, which makes it more suitable for binary classification problems. However, in our case we had three output classes.
Amongst the Gaussian kernel and polynomial kernel, we can see that Gaussian kernel achieved a perfect 100% prediction rate while polynomial kernel misclassified one instance. Therefore the Gaussian kernel performed slightly better. However, there is no hard and fast rule as to which kernel performs best in every scenario. It is all about testing all the kernels and selecting the one with the best results on your test dataset.
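One practical way to run that comparison is a cross-validated grid search. This sketch is not part of the original article — it uses scikit-learn's GridSearchCV and the iris data bundled with scikit-learn instead of the downloaded CSV:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Bundled iris data, so no download is needed.
X, y = load_iris(return_X_y=True)

# Try each kernel with 5-fold cross-validation and keep the best.
param_grid = {'kernel': ['linear', 'poly', 'rbf', 'sigmoid']}
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X, y)

print(search.best_params_)           # e.g. {'kernel': 'linear'} or {'kernel': 'rbf'}
print(round(search.best_score_, 3))  # mean cross-validated accuracy of the winner
```

Because the winner is picked on held-out folds rather than a single train/test split, the choice is less sensitive to how the data happened to be split.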
Resources
Want to learn more about SVMs, Scikit-Learn, and other useful machine learning algorithms? I'd recommend checking out some more detailed resources, like one of these books:
- Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond
- Python Data Science Handbook: Essential Tools for Working with Data
- Data Science from Scratch: First Principles with Python
Conclusion
In this article we studied both simple and kernel SVMs. We studied the intuition behind the SVM algorithm and how it can be implemented with Python's Scikit-Learn library. We also studied different types of kernels that can be used to implement kernel SVM. I would suggest you try to implement these algorithms on real-world datasets available at places like kaggle.com.
I would also suggest that you explore the actual mathematics behind the SVM. Although you are not necessarily going to need it in order to use the SVM algorithm, it is still very handy to know what is actually going on behind the scene while your algorithm is finding decision boundaries. | https://stackabuse.com/implementing-svm-and-kernel-svm-with-pythons-scikit-learn/ | CC-MAIN-2019-43 | en | refinedweb |
Next, after your view has appeared, show the test suite as follows:
Swift
GoogleMobileAdsMediationTestSuite.present(on:self, delegate:nil)
Objective-C
[GoogleMobileAdsMediationTestSuite presentOnViewController:self delegate:nil];
Note that this requires that you have correctly entered your AdMob app ID in your Info.plist.
Also note that launching the test suite is not persisted across sessions.
Enabling testing in production
By default, the mediation test suite will only launch in development, adhoc, and enterprise builds.

Objective-C

#define APP_ID @"-xxxxxxxxxxxxxxxx~yyyyyyyyyy"

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
}

- (IBAction)presentMediationController:(id)sender {
#ifdef DEBUG
    [GoogleMobileAdsMediationTestSuite presentWithAppId:APP_ID
                                       onViewController:self
                                               delegate:nil];
#endif
}

@end
How to get video preview from QProcess to fill a rectangle in qml
Hi!
I have a c++ class, where I am initiating QProcess in detached mode. This process launches a python script to get raspberry pi camera preview.
How can I get that preview and show it in a rectangle element?
The c++ function is a void type. Should it be some other type? You can make a QString function to return a QString, but there is no QProcess type.
I would appreciate any help. Thanks
@Kachoraza you should provide more details about your runtime environment, i.e.
is the python script running on the RPi device, or is just connecting remotely?
Yes, the python script is running on the pi locally.
@Kachoraza and what about the Qt application using QProcess?
it is also running locally on the pi
@Kachoraza so what about driving the Pi camera directly from your QML app? just for instance this guide
thanks. I will check it
I tried to access the camera through Camera element and VideoOutput element. Camera status is 2, which I cannot understand.
Camera {
    id: mycamera
    Component.onCompleted: {
        console.log(cameraStatus)
        mycamera.start()
        console.log(cameraStatus)
    }
}
I tried it on pi and on ubuntu as well
@Pablo-J-Rogina I am trying to get camera preview with the following code, but all I have is a white window. Can you please see what am I doing wrong here. I have tried it on pi and ubuntu with no errors, but similar result. Thanks in advance
import QtQuick 2.9
import QtQuick.Window 2.3
import QtMultimedia 5.9
import QtQuick.Controls 2.2
Window {
    id: window
    width: 640
    height: 480
    property alias window: window
    property alias videooutput: videooutput
    visible: true

    property Camera camera: Camera {
        id: camera
        captureMode: Camera.CaptureViewfinder
        cameraState: Camera.ActiveState
        Component.onCompleted: {
            camera.start()
            if (cameraState === Camera.ActiveState) {
                console.log("Camera ready")
            }
        }
    }

    VideoOutput {
        id: videooutput
        anchors.fill: parent
        source: camera
        focus: visible
        enabled: true
        Component.onCompleted: {
            console.log(camera.cameraState)
        }
    }
}
Introduction
By definition, web scraping refers to the process of extracting a significant amount of information from a website using scripts or programs. Such scripts or programs allow one to extract data from a website, store it and present it as designed by the creator. The data collected can also be part of a larger project that uses the extracted data as input.
Previously, to extract data from a website, you had to manually open the website on a browser and employ the oldie but goldie copy and paste functionality. This method works but its main drawback is that it can get tiring if the number of websites is large or there is immense information. It also cannot be automated.
With web scraping, you can not only automate the process but also scale the process to handle as many websites as your computing resources can allow.
In this post, we will explore web scraping using the Java language. I also expect that you are familiar with the basics of the Java language and have Java 8 installed on your machine.
Why Web Scraping?
The web scraping process poses several advantages which include:
- The time required to extract information from a particular source is significantly reduced as compared to manually copying and pasting the data.
- The data extracted is more accurate and uniformly formatted ensuring consistency.
- A web scraper can be integrated into a system and feed data directly into the system enhancing automation.
- Some websites and organizations provide no APIs that provide the information on their websites. APIs make data extraction easier since they are easy to consume from within other applications. In their absence, we can use web scraping to extract information.
Web scraping is widely used in real life by organizations in the following ways:
- Search engines such as Google and DuckDuckGo implement web scraping in order to index websites that ultimately appear in search results.
- Communication and marketing teams in some companies use scrapers in order to extract information about their organizations on the internet. This helps them identify their reputation online and work on improving it.
- Web scraping can also be used to enhance the process of identifying and monitoring the latest stories and trends on the internet.
- Some organizations use web scraping for market research where they extract information about their products and also competitors.
These are some of the ways web scraping can be used and how it can affect the operations of an organization.
What to Use
There are various tools and libraries implemented in Java, as well as external APIs, that we can use to build web scrapers. The following is a summary of some of the popular ones:
JSoup - this is a simple open-source library that provides very convenient functionality for extracting and manipulating data by using DOM traversal or CSS selectors to find data. It does not support XPath-based parsing and is beginner friendly. More information about XPath parsing can be found here.
HTMLUnit - is a more powerful framework that can allow you to simulate browser events such as clicking and forms submission when scraping and it also has JavaScript support. This enhances the automation process. It also supports XPath based parsing, unlike JSoup. It can also be used for web application unit testing.
Jaunt - this is a scraping and web automation library that can be used to extract data from HTML pages or JSON data payloads by using a headless browser. It can execute and handle individual HTTP requests and responses and can also interface with REST APIs to extract data. It has recently been updated to include JavaScript support.
These are but a few of the libraries that you can use to scrap websites using the Java language. In this post, we will work with JSoup.
Simple Implementation
Having learned of the advantages, use cases, and some of the libraries we can use to achieve web scraping with Java, let us implement a simple scraper using the JSoup library. We are going to scrap this simple website I found - CodeTriage that displays open source projects that you can contribute to on Github and can be sorted by languages.
Even though there are APIs available that provide this information, I find it a good example to learn or practice web scraping with.
Prerequisites
Before you continue, ensure you have the following installed on your computer:
- Java 8 - instructions here
- Maven - instructions here
- An IDE or Text Editor of your choice (IntelliJ, Eclipse, VS Code or Sublime Text)
We are going to use Maven to manage our project in terms of generation, packaging, dependency management, testing among other operations.
Verify that Maven is installed by running the following command:
$ mvn --version
The output should be similar to:
Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T21:33:14+03:00)
Maven home: /usr/local/Cellar/maven/3.5.4/libexec
Java version: 1.8.0_171, vendor: Oracle Corporation, runtime: /Library/Java/JavaVirtualMachines/jdk1.8.0_171.jdk/Contents/Home/jre
Default locale: en_KE, platform encoding: UTF-8
OS name: "mac os x", version: "10.14.1", arch: "x86_64", family: "mac"
Setup
With Maven set up successfully, let us generate our project by running the following command:
$ mvn archetype:generate -DgroupId=com.codetriage.scraper -DartifactId=codetriagescraper -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false
$ cd codetriagescraper
This will generate the project that will contain our scraper.
In the folder generated, there is a file called
pom.xml which contains details about our project and also the dependencies. Here is where we'll add the JSoup dependency and a plugin setting to enable Maven to include the project dependencies in the produced jar file. It will also enable us to run the jar file using
java -jar command.
Delete the
dependencies section in the
pom.xml and replace it with this snippet, which updates the dependencies and plugin configurations:
<dependencies>
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.12</version>
    <scope>test</scope>
  </dependency>
  <!-- our scraping library -->
  <dependency>
    <groupId>org.jsoup</groupId>
    <artifactId>jsoup</artifactId>
    <version>1.11.3</version>
  </dependency>
</dependencies>

<build>
  <plugins>
    <!-- This plugin configuration will enable Maven to include the project
         dependencies in the produced jar file. It also enables us to run
         the jar file using the `java -jar` command -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <version>3.2.0</version>
      <executions>
        <execution>
          <phase>package</phase>
          <goals>
            <goal>shade</goal>
          </goals>
          <configuration>
            <transformers>
              <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                <mainClass>com.codetriage.scraper.App</mainClass>
              </transformer>
            </transformers>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
Let's verify our work so far by running the following commands to compile and run our project:
$ mvn package $ java -jar target/codetriagescraper-1.0-SNAPSHOT.jar
The result should be
Hello World! printed on the console. We are ready to start building our scraper.
Implementation
Before we implement our scraper, we need to profile the website we are going to scrap in order to locate the data that we intend to scrap.
To achieve this, we need to open the CodeTriage website and select Java Language on a browser and inspect the HTML code using Dev tools.
On Chrome, right click on the page and select "Inspect" to open the dev tools.
The result should look like this:
As you can see, we can traverse the HTML and identify where in the DOM that the repo list is located.
From the HTML, we can see that the repositories are contained in an unordered list whose class is
repo-list. Inside it there are the list items that contain the repo information that we require as can be seen in the following screenshot:
Each repository is contained in list item entry whose
class attribute is
repo-item and class includes an anchor tag that houses the information we require. Inside the anchor tag, we have a header section that contains the repository's name and the number of issues. This is followed by a paragraph section that contains the repository's description and full name. This is the information we need.
Let us now build our scraper to capture this information. Open the
App.java file which should look a little like this:
package com.codetriage.scraper; import java.io.IOException; import org.jsoup.Jsoup; import org.jsoup.nodes.Document; import org.jsoup.nodes.Element; import org.jsoup.select.Elements; public class App { public static void main(String[] args) { System.out.println( "Hello World!" ); } }
At the top of the file, we import
IOException and some JSoup classes that will help us parse data.
To build our scraper, we will modify our main function to handle the scraping duties. So let us start by printing the title of the webpage on the terminal using the following code:()); // In case of any IO errors, we want the messages written to the console } catch (IOException e) { e.printStackTrace(); } }
Save the file and run the following command to test what we've written so far:
$ mvn package && java -jar target/codetriagescraper-1.0-SNAPSHOT.jar
The output should be the following:
Our scraper is taking shape and now we can extract more data from the website.
We identified that the repositories that we need all have a class name of
repo-item, we will use this along with JSoup
getElementsByClass() function, to get all the repositories on the page.
For each repository element, the name of the repository is contained in a Header element that has the class name
repo-item-title, the number of issues is contained in a span whose class is
repo-item-issues. The repository's description is contained in a paragraph element whose class is
repo-item-description, and the full name that we can use to generate the Github link falls under a span with the class
repo-item-full-name.
We will use the same function
getElementsByClass() to extract the information above, but the scope will be within a single repository item. That is a lot of information at a go, so I'll describe each step in the comments of the following part of our program. We get back to our main method and extend it as follows:

public static void main(String[] args) {
    try {
        // Fetch and parse the page as before
        Document doc = Jsoup.connect("https://www.codetriage.com/?language=Java").get();
        System.out.println(doc.title());

        // Get the list of repositories
        Elements repositories = doc.getElementsByClass("repo-item");

        /**
         * For each repository, extract the following information:
         * 1. Title
         * 2. Number of issues
         * 3. Description
         * 4. Full name on github
         */
        for (Element repository : repositories) {
            // Extract the title
            String repositoryTitle = repository.getElementsByClass("repo-item-title").text();

            // Extract the number of issues on the repository
            String repositoryIssues = repository.getElementsByClass("repo-item-issues").text();

            // Extract the description of the repository
            String repositoryDescription = repository.getElementsByClass("repo-item-description").text();

            // Get the full name of the repository
            String repositoryGithubName = repository.getElementsByClass("repo-item-full-name").text();

            // The repository full name contains brackets that we remove first before generating the valid Github link.
            String repositoryGithubLink = "https://github.com/" + repositoryGithubName.replaceAll("[()]", "");

            // Format and print the information to the console
            System.out.println(repositoryTitle + " - " + repositoryIssues);
            System.out.println("\t" + repositoryDescription);
            System.out.println("\t" + repositoryGithubLink);
            System.out.println("\n");
        }

    // In case of any IO errors, we want the messages written to the console
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Let us now compile and run our improved scraper by the same command:
$ mvn package && java -jar target/codetriagescraper-1.0-SNAPSHOT.jar
The output of the program should look like this:
Yes! Our scraper works going by the screenshot above. We have managed to write a simple program that will extract information from CodeTriage for us and printed it on our terminal.
Of course, this is not the final resting place for this information, you can store it in a database and render it on an app or another website or even serve it on an API to be displayed on a Chrome Extension. The opportunities are plenty and it's up to you to decide what you want to do with the data.
Conclusion
In this post, we have learned about web scraping using the Java language and built a functional scraper using the simple but powerful JSoup library.
So now that we have the scraper and the data, what next? There is more to web scraping than what we have covered. For example: form filling, simulation of user events such as clicking, and there are more libraries out there that can help you achieve this. Practice is as important as it is helpful, so build more scrapers covering new ground of complexity with each new one and even with different libraries to widen your knowledge. You can also integrate scrapers into your existing projects or new ones.
The source code for the scraper is available on Github for reference. | https://stackabuse.com/web-scraping-the-java-way/ | CC-MAIN-2019-43 | en | refinedweb |
Creating the ThreadLocal object in the code above needs to be done only once per thread.
Just like most classes, once you have an instance of ThreadLocal you can call methods on it. Some of the methods are:
- get() : returns the value in the current thread’s copy of this thread local variable
- initialValue() : returns the current thread’s initial value for current thread local variable
- remove() : removes the value from the current thread for the current thread local variable
- set(T value) : sets the current thread’s copy of the current thread local variable to the specified value
For more detailed information about the methods visit the original Oracle documentation.
ThreadLocal instances are private static fields (in most cases) in classes that wish to associate state with a thread
Example of implementation
public class ThreadLocalExample {

    public static class Example implements Runnable {

        private ThreadLocal<String> example = new ThreadLocal<String>();

        // override the run() method that comes from implementing Runnable class
        @Override
        public void run() {
            try {
                System.out.println("Getting values...");
                Thread.sleep(2000);
            } catch (InterruptedException e) {
                System.out.println(e);
            }
            example.set("Just a random text that will be displayed before the remove function");
            System.out.println("Before remove: " + example.get());
            example.remove();
            System.out.println("After remove: " + example.get());
        }
    }

    public static void main(String[] args) {
        /* EXAMPLE THAT DOES NOT HAVE TO DO ANYTHING WITH THE STATIC CLASS ABOVE main */
        ThreadLocal<String> local = new ThreadLocal<String>();
        local.set("First");
        System.out.println("Value: " + local.get());
        local.set("Second");
        System.out.println("Value: " + local.get());
        local.remove();
        System.out.println("Value: " + local.get());

        /* NEW EXAMPLE THAT USES THE STATIC CLASS DECLARED ABOVE main */
        Example runnable = new Example();
        Thread thread = new Thread(runnable);
        thread.start();
    }
}
Output
Value: First Value: Second Value: null Getting values... Before remove: Just a random text that will be displayed before the remove function After remove: null
Breakdown
The above code shows two ways you can get it working: one is by having Runnable object and pass it to a Thread instance and then override the run() method or you can simply create a ThreadLocal instance and set the value to it and then you can get or remove it. As you can see from the example above, even though it is the same variable (local), it can contain different values. In the first case, it contains the value “First”. In the second case, it contains the value “Second”. For the other implementation, I only displayed one thread. However, every thread is on its own – meaning, if you were to create another Thread instance, say thread2, and start() it, it will act on its own and it will have nothing to do with the other Thread instance variable. To check that, you could create a ThreadLocal instance within your static class and then within the overriden run() method, you can generate a random number and pass it to the current thread using the set() method. You will see that if you call it on two or more different threads, they will all have different values. | https://javatutorial.net/threadlocal-java-example | CC-MAIN-2019-43 | en | refinedweb |
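To see that per-thread isolation directly, here is a small self-contained example (not from the original tutorial) in which two threads write different values into the same ThreadLocal and each reads back only its own:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ThreadLocalIsolationDemo {

    // One ThreadLocal field shared by both threads; each thread gets its own copy.
    private static final ThreadLocal<String> LOCAL = new ThreadLocal<>();

    // Collects what each thread observed, keyed by thread name.
    static final Map<String, String> results = new ConcurrentHashMap<>();

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            String name = Thread.currentThread().getName();
            LOCAL.set("value-from-" + name);
            // Even though both threads use the same LOCAL field,
            // get() returns only the value set by the current thread.
            results.put(name, LOCAL.get());
        };

        Thread t1 = new Thread(task, "t1");
        Thread t2 = new Thread(task, "t2");
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        System.out.println(results.get("t1")); // value-from-t1
        System.out.println(results.get("t2")); // value-from-t2
    }
}
```

Neither thread can ever observe the other's value, which is exactly why ThreadLocal is handy for keeping per-thread state without explicit locking.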
Idea by david_r · pythoncaller, python
It would be very helpful to have a Rejected-port on the PythonCaller.
Outputting a failed feature could e.g. look like this:
self.pyoutput_failed(feature)
This would simplify the pattern where you need to set a temporary attribute to indicate success/failure, then a Tester and then removing the temporary attribute.
It would be great if this port included a Line Number, Type Name and settable Context attribute, e.g.
import sys
...
context = ''
try:
    parsed_json = json.loads('{...}')  # some valid JSON
    for k, v in parsed_json.items():
        context = k
        # do stuff
except Exception as e:
    error = 'Error on line ' + str(sys.exc_info()[-1].tb_lineno) + ' - ' + str(type(e).__name__) + ' - ' + str(e) + '. Context: ' + context
    print error
    feature.setAttribute("_error", error)
ekkischeffler commented ·
How about an additional port for a failing Python script instead of making the whole workspace fail?
I like it. It might be ideal if the user could add (and rename) output ports optionally. How about this?
self.pyoutput(feature)               # to the topmost port
self.pyoutput(feature, 'port_name')  # to an additional port
Note for later:
Bug in biztalk mapper with multiple messages.
BizTalk mapper does not support multiple input messages where 2 (or more) use the same namespace. I get a compile error on the btm.cs file -- a file I cannot even edit.
Workaround: Use two maps. In second one, take the result of the first one, mass copy, plus add the second message.
And a *somewhat* related bug report: | http://geekswithblogs.net/JenniferZouak/archive/2008/10/01/biztalk-mapper-multiple-schemas-same-namespace.aspx | CC-MAIN-2019-43 | en | refinedweb |
AutocompletionFuzzy
Sublime anyword completion
Details
Installs
- Total 12K
- Win 7K
- OS X 3K
- Linux 2K
Readme
- Source
- raw.githubusercontent.com
Sublime Autocompletion plugin
This is really glorious plugin that reduce plain typing considerably while coding.
Demo
Installation
- Package Control: plugin is avaiable under
AutocompletionFuzzyin sublime's
package-control.
- GIT: plugin should be cloned under
AutocompletionFuzzyname.
Features
Provides 8 different types of autocompletion:
Complete word - basic completion; completes words that occur in the text and opened files.
Complete word (fuzzy) - like “complete word” but uses fuzzy match over words.
Complete subword - completes snake_case and CamelCase words parts.
Complete long word - complete long words: class names with namespaces (Class::Name), method calls (object.method), filenames (file/name.py), urls (http://…).
Complete long word (fuzzy) - like “complete long word” but uses fuzzy match over words.
Complete nesting - completes over and into brackets: can complete full method call (method(arg1, arg2)), method arguments (arg1, arg2), array ([value1, value2]) and everything that has brackets over it or after it.
Complete nesting (fuzzy) - like “complete nesting” but uses fuzzy match.
Complete line - completes the whole line.
However, it maps only 6 types of autocompletion. Non-fuzzy completions aren't mapped to keyboard shortcuts by default. See the “Installation” section if you would like to map the non-fuzzy completion behavior.
All lists are built in order of first occurrence of a word. That makes autocomplete very predictable and easy to use.
Word completion works over all opened files. Nesting completion works only in the current file (because of performance issues).
Installation
This plugin is part of sublime-enhanced plugin set. You can install sublime-enhanced and this plugin will be installed automatically.
If you would like to install this package separately check “Installing packages separately” section of sublime-enhanced package.
If you don't like fuzzy behavior you should rebind keyboard shortcuts after installation in the “Autocompletion/Default (OSNAME).sublime-keymap” file (non-fuzzy behavior are commented by default).
Usage
Type one or two characters of the beginning of a word. Then hit the keyboard shortcut or run the command to complete the word. You can run the same command again to complete the next/previous occurrence.
If you like fuzzy completion, it is really useful to type the starting character plus a character from the middle of the word to receive more accurate completion. E.g. to complete local_variable, type “lv” and hit the keyboard shortcut. The first character you type should always be the first character of the word being completed. However, if the word starts with an underscore (_), it is possible to type the next character instead; e.g. to complete _local_variable the same “lv” will work.
Restricting Services
Restrict Services
You can change the Visibility and Access restrictions on any service using the
[Restrict] attribute. This is a class based attribute and should be placed on your Service class.
Visibility affects whether or not the service shows up on the public
/metadata pages, whilst access restrictions limits the accessibility of your services.
Named Configurations
The Restrict attribute includes a number of Named configurations for common use-cases. E.g You can specify a Service should only be available from your local machine with:
[Restrict(LocalhostOnly = true)] public class LocalAdmin { }
Which ensures access to this service is only allowed from localhost clients and the details of this service will only be visible on
/metadata pages that are viewed locally.
This is equivalent to using the underlying granular form of specifying individual
RequestAttributes, e.g:
[Restrict(AccessTo = RequestAttributes.Localhost, VisibilityTo = RequestAttributes.Localhost)] public class LocalAdmin { }
There are many more named configurations available. You can use VisibleInternalOnly to only have a service listed on internally viewed
/metadata pages with:
[Restrict(VisibleInternalOnly = true)]
public class InternalAdmin { }
Services can be restricted on any EndpointAttribute, e.g. to ensure this service is only called by XML clients, do:
[Restrict(RequestAttributes.Xml)]
public class XmlOnly { }
Restriction Combinations
Likewise you can add any combination of Endpoint Attributes together, E.g. this restricts access to service to Internal JSON clients only:
[Restrict(RequestAttributes.InternalNetworkAccess | RequestAttributes.Json)]
public class JsonInternalOnly { }
Multiple restriction scenarios
It also supports multiple restriction scenarios, E.g. This service is only accessible by internal JSON clients or External XML clients:
[Restrict(
    RequestAttributes.InternalNetworkAccess | RequestAttributes.Json,
    RequestAttributes.External | RequestAttributes.Xml)]
public class JsonInternalOrXmlExternalOnly { }
A popular configuration that takes advantage of this feature would be to only allow HTTP plain-text traffic from Internal Networks and only allow external access via secure HTTPS, which you can enforce with:
[Restrict(RequestAttributes.InSecure | RequestAttributes.InternalNetworkAccess,
          RequestAttributes.Secure | RequestAttributes.External)]
public class InternalHttpAndExternalHttps { }
Restricting built-in Services
In addition to statically annotating classes, you can also add .NET Attributes at runtime to restrict Services you don’t control. So to only allow built-in Services to be visible from localhost requests you can add restriction attributes to Request DTO’s in your AppHost with:
typeof(AssignRoles)
    .AddAttributes(new RestrictAttribute { VisibleLocalhostOnly = true });
typeof(UnAssignRoles)
    .AddAttributes(new RestrictAttribute { VisibleLocalhostOnly = true });
Or to hide it for all requests you can set the visibility to none:
typeof(AssignRoles)
    .AddAttributes(new RestrictAttribute { VisibilityTo = RequestAttributes.None });
typeof(UnAssignRoles)
    .AddAttributes(new RestrictAttribute { VisibilityTo = RequestAttributes.None });
Note they’ll still be shown in development mode when Debug=true, which is automatically enabled for Debug builds. To simulate what it would look like in a release build you can set it to false, e.g:
SetConfig(new HostConfig { DebugMode = false }); | http://docs.servicestack.net/restricting-services | CC-MAIN-2019-51 | en | refinedweb |
Given by Angie Jones, held on Jun 28, 2017
Recording: A Software Test Professionals Webinar (STP)
The Page Object is the industry design standard: you model an object for each page you are testing, encapsulating how to interact with its elements (radio buttons, textboxes, etc.) as private members. You then create public methods that interact with these private elements, and those methods can be used in your tests.
Behavior Driven Development:
- The Three Amigos (a QA, a developer, and a product owner) discuss how a certain feature should behave. You can describe the behavior in the Gherkin-style language.
Want to read how George Dinwiddie, Agile Adoption Coach, came up with the concept of “The Three Amigos”?
- Read his The Three Amigos Strategy of Developing User Stories (April 24, 2014).
See Dan North, the creator of Behavior Driven Development (BDD) and JBehave (and later involved with RSpec), being interviewed at OOPSLA 2007.
- Cucumber started off as a complement to RSpec and Dan North’s work.
Let’s say you have two source branches of your code:
- src/main/java
- src/test/java
Your main branch can contain the page objects. The test branch can contain the packages, e.g. cucumber -> features.
Install the Cucumber plugin into your framework’s Maven or Gradle files. Cucumber.io has documentation on how to do this.
You can have as many feature files as you want.
Feature: Search for Products
  As a user, I want to search the catalog so that I can find specific products

  Scenario: Click search result
    Given there is a product named 'Apple TV'
    And I search for the product
    When I click the product
    Then I should be taken to the product page
This feature file acts as manual documentation for how the software product is supposed to work.
Givens: Setup before the test starts
When: The actual test
Then: The results of the test.
Note that we don’t have any extraneous details: where to click, how to verify the product is Apple TV. Testers can be a bit long-winded when they are too explicit about what each step means.
Push the details down to the code level.
Under src/test/java/cucumber, Angie likes adding two directories:
- features
- stepdefs
The “features” folder contains the outlines for the scenarios.
One step definitions class per functional area keeps things clean.
These are all you need for Cucumber. You can reuse your already written Page Objects in your tests.
public class SearchStepDefs {
    protected String productName;

    @Given("^there is a product named (.*)$")
    public void setProduct(String productName) {
        this.productName = productName;
    }
}
@Given, @When, and @Then are annotations that come from Cucumber.
@When("^I search for the product$")
public void search() {
    Page currentPage = new Page(BaseTests.getWebDriver());
    searchResults = currentPage.search(productName);
}
… Angie is using the getWebDriver() method already declared in her framework to spin up a new WebDriver.
A new instance of the page is spun up in order to call its search method.
@When("^I click the product$")
public void clickProduct() {
    currentPage = searchResults.clickProduct(productName);
}
Note, we are NOT dealing with elements in our step definitions just as we do not deal with them in our test classes. Keep it clean!
Let’s create a new method… clickProduct… and put it in our ProductPage.
public ProductPage clickProduct(String productName) {
    WebElement product = findProduct(productName);
    product.findElement(productTitle).click();
    return new ProductPage(webDriver);
}
@Then("^I should be taken to the product page$")
public void verifyProductPage() {
    ProductPage productPage = (ProductPage) currentPage;
    Assert.assertEquals("Page title", productPage.getTitle());
}
We are using existing code in our step definitions.
Let’s try a different scenario:
Scenario: Add the product from search result to cart
  Given there is a product named Apple TV
  When I search for the product
  And I add the product to the cart
We don’t want to add cart step definitions to our search step definitions; that becomes messy and unorganized. Yes, it is a search test, but we really should create a CartStepDefs class alongside BaseStepDefs and SearchStepDefs. Cucumber can search through our whole library of step definitions.
Therefore, we can write:
public class CartStepDefs {
    public void verifyProductInCart() {
        CartPage cartPage = new CartPage(BaseTests.getWebDriver());
        Assert.assertTrue(productName + " is in cart",
            cartPage.isProductInCart(productName));
        Assert.assertEquals("Number of items in cart", 1,
            cartPage.getNumberOfProducts());
    }
}
How do you share data across multiple steps and multiple classes? Dependency Injection!
Pass a dependency to whoever needs it. Create a BaseStepDefs class that holds globals that will be needed across all steps, such as the productName variable.
public class BaseStepDefs {
    protected String productName;
    protected Page currentPage;

    public BaseStepDefs() {
        currentPage = new Page(BaseTests.getWebDriver());
    }

    @Given("^there is a product named (.*)$")
    public void setProduct(String productName) {
        this.productName = productName;
    }
}
… Now we need to inject this into the class that needs the dependency.
There are many ways you can set up dependency injection. Angie likes using PicoContainer; Spring is another option. Check Cucumber.io for documentation on how to do this.
Add a constructor to each of the dependent classes, then include the dependency as an argument.
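To make that wiring concrete, here is a minimal, self-contained sketch of constructor injection in plain Java (no Cucumber or PicoContainer dependency; the class and method names are illustrative, not from the webinar). In a real suite the DI container would construct one shared BaseStepDefs instance and pass that same instance to every step-definition class that declares it as a constructor argument:

```java
// Hypothetical sketch of the constructor-injection pattern used by
// Cucumber DI containers such as PicoContainer.
class BaseStepDefs {
    protected String productName;            // shared state across step classes

    void setProduct(String productName) {    // would carry @Given in Cucumber
        this.productName = productName;
    }

    String getProduct() {
        return productName;
    }
}

class SearchStepDefs {
    private final BaseStepDefs base;         // the injected shared dependency

    SearchStepDefs(BaseStepDefs base) {      // constructor injection point
        this.base = base;
    }

    String search() {                        // would carry @When in Cucumber
        return "searched for " + base.getProduct();
    }
}

class InjectionDemo {
    public static void main(String[] args) {
        // Done manually here; PicoContainer/Spring would do this wiring for you.
        BaseStepDefs base = new BaseStepDefs();
        base.setProduct("Apple TV");
        SearchStepDefs search = new SearchStepDefs(base);
        System.out.println(search.search()); // searched for Apple TV
    }
}
```

Roughly speaking, with cucumber-picocontainer on the classpath this wiring happens automatically: any constructor parameter on a glue class is resolved by the container, and the same instance is shared across step classes within a scenario.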
Happy Testing!
-T.J. Maher
Twitter | LinkedIn | GitHub
// Sr. QA Engineer, Software Engineer in Test, Software Tester since 1996.
// Contributing Writer for TechBeacon.
// "Looking to move away from manual QA? Follow Adventures in Automation on Facebook!" | http://www.tjmaher.com/2017/07/notes-on-angie-jones-make-your.html | CC-MAIN-2019-51 | en | refinedweb |
We organize known future work in GitHub projects. See Tracking SPIRV-Tools work with GitHub projects for more.
To report a new bug or request a new feature, please file a GitHub issue. Please ensure the bug has not already been reported by searching issues and projects. If the bug has not already been reported, open a new one here.
When opening a new issue for a bug, make sure you provide the following:
For feature requests, we use issues as well. Please create a new issue, as with bugs. In the issue provide
Before we can use your code, you must sign the Khronos Open Source Contributor License Agreement (CLA).
See README.md for instructions on how to get, build, and test the source. Once you have made your changes:
`clang-format -style=file -i [modified-files]` can help.
If the PR fixes an issue, mention it (e.g. `Fixes #<issue number>`); the issue will then be closed automatically when the commit goes into master. Also, this helps us update the CHANGES file.
The reviewer can either approve your PR or request changes. If changes are requested:
After the PR has been reviewed it is the job of the reviewer to merge the PR. Instructions for this are given below.
The formal code reviews are done on GitHub. Reviewers are to look for all of the usual things:
When looking for functional problems, there are some common problems reviewers should pay particular attention to:
We intend to maintain a linear history on the GitHub master branch, and the build and its tests should pass at each commit in that history. A linear always-working history is easier to understand and to bisect in case we want to find which commit introduced a bug.
The following steps should be done exactly once (when you are about to merge a PR for the first time):
It is assumed that upstream points to git@github.com:KhronosGroup/SPIRV-Tools.git or https://github.com/KhronosGroup/SPIRV-Tools.git.
Find out the local name for the main github repo in your git configuration. For example, in this configuration, it is labeled
upstream.
git remote -v
[ ... ]
upstream        git@github.com:KhronosGroup/SPIRV-Tools.git (fetch)
upstream        git@github.com:KhronosGroup/SPIRV-Tools.git (push)
Make sure that the
upstream remote is set to fetch from the
refs/pull namespace:
git config --get-all remote.upstream.fetch +refs/heads/*:refs/remotes/upstream/* +refs/pull/*/head:refs/remotes/upstream/pr/*
If the line
+refs/pull/*/head:refs/remotes/upstream/pr/* is not present in your configuration, you can add it with the command:
git config --local --add remote.upstream.fetch '+refs/pull/*/head:refs/remotes/upstream/pr/*'
The following steps should be done for every PR that you intend to merge:
Make sure your local copy of the master branch is up to date:
git checkout master git pull
Fetch all pull requests refs:
git fetch upstream
Check out the PR branch, e.g. for PR #1048:

git checkout pr/1048
Rebase the PR on top of the master branch. If there are conflicts, send it back to the author and ask them to rebase. During the interactive rebase be sure to squash all of the commits down to a single commit.
git rebase -i master
Build and test the PR.
If all of the tests pass, push the commit
git push upstream HEAD:master
Close the PR and add a comment saying it was pushed, referencing the commit that you just pushed.
Jaqua StarrPro Student 516 Points
Calling a function challenge
I am trying to call my new function and pass it the argument 3, but I am not sure what it is asking. Please help!
def square(number):
    return (number * number)

result = square
print(result)
#prints the square value
1 Answer
KRIS NIKOLAISENPro Student 51,749 Points
To call a function and pass an argument include the name of the function, followed by the argument within parentheses.
So in this case to call square and pass in a value of 3:
result = square(3) | https://teamtreehouse.com/community/calling-a-function-challenge | CC-MAIN-2019-51 | en | refinedweb |
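Putting the whole thing together, here is the snippet from the question with the call fixed. Note that `square` by itself refers to the function object, while `square(3)` actually calls it with the argument 3:

```python
def square(number):
    return number * number

result = square(3)   # call the function, passing 3 as the argument
print(result)        # prints 9
```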
PEACE
PROCESS
third edition
PEACE
PROCESS
american diplomacy and
the arab-israeli conflict
since 1967
William B. Quandt
Brookings Institution Press
Washington, D.C.
University of California Press
Berkeley and Los Angeles
the brookings institution
1775 Massachusetts Avenue, N.W., Washington, DC 20036
and
university of california press
2120 Berkeley Way, Berkeley, California 94720
The Library of Congress has cataloged the previous edition as follows:
Quandt, William B.
Peace process : American diplomacy and the Arab-Israeli conflict since
1967 / William B. Quandt. — Rev. ed.
p. cm.
Includes bibliographical references and index.
ISBN 0-520-22374-8 (cloth : alk. paper) —
ISBN 0-520-22515-5 (pbk. : alk. paper)
1. Arab-Israeli conflict—1967–1973. 2. Arab-Israeli
conflict—1973–1993. 3. Arab-Israeli conflict—1993—Peace. 4. Middle
East—Foreign relations—United States. 5. United States—Foreign
relations—Middle East. 6. United States—Foreign relations—20th
century. I. Title.
DS119.7 .Q69 2001
00-012148
327.73056—dc21
CIP
ISBN-13: 978-0-520-24631-7 (pbk. : alk. paper)
ISBN-10: 0-520-24631-4
987654321
The paper used in this publication meets minimum requirements of the
American National Standard for Information Sciences—Permanence of Paper
for Printed Library Materials: ANSI Z39.48-1992.
Typeset in Sabon
Composition by Cynthia Stock
Silver Spring, Maryland
Printed by R. R. Donnelley
Harrisonburg, Virginia
Contents
Preface to the Third Edition

1. Introduction

Part One: The Johnson Presidency
2. Yellow Light: Johnson and the Crisis of May–June 1967

Part Two: The Nixon and Ford Presidencies
3. Cross-Purposes: Nixon, Rogers, and Kissinger, 1969–72
4. Kissinger’s Diplomacy: Stalemate and War, 1972–73
5. Step by Step: Kissinger and the Disengagement Agreements, 1974–76

Part Three: The Carter Presidency
6. Ambition and Realism: Carter and Camp David, 1977–78
7. Forging Egyptian-Israeli Peace

Part Four: The Reagan and Bush Presidencies
8. Cold War Revival: Who’s in Charge?
9. Back to Basics: Shultz Tries Again
10. Getting to the Table: Bush and Baker, 1989–92

Part Five: The Clinton Presidency
11. Clinton the Facilitator
12. Clinton’s Finale: Distractions, Hesitation, and Frustration

Part Six: The Second Bush Presidency
13. “With Us or Against Us”: The Warrior President in His First Term

Part Seven: Conclusion
14. Challenges Facing Future Administrations

Notes
Bibliography
Index
Preface to the Third Edition
Since the publication of the second edition of
Peace Process in 2001, much has happened in the Middle East—and in
American policy toward the region. In part this was a result of the dramatic failure of peace efforts at the very end of Bill Clinton’s presidency.
Many in the United States and in Israel blamed Palestinian leader Yasir
Arafat for the breakdown of diplomacy, and the newly elected George W.
Bush was unwilling to give him a second chance. Moreover, the new Bush
administration, especially after 9/11 and the invasion of Iraq, had other
priorities in the region. For much of President Bush’s first term there was,
quite simply, nothing worthy of calling a “peace process” between Israelis
and Palestinians.
I have tried to account for the distinctive approach of the George W.
Bush administration to the issues surrounding the Arab-Israeli conflict—
to say the least, it has been a departure from earlier presidents’ views. In
doing so I have had to work primarily with publicly available information,
but in time we will learn more about the internal deliberations that led to
many of the policies described in this book. I have tried to write a first draft
of that history, knowing that it will have to be rewritten in due course.
In addition to the new chapter on George W. Bush’s first term, this edition incorporates new material that has become available in recent years.
For example, the State Department has finally published its selection of
documents on the 1967 war in the Foreign Relations of the United States
ix
series. Although I had already seen most of this material, new documentation has been woven into the narrative and footnotes where relevant.
Similarly, the chapter on the 1973 war has been updated in light of new
documentary sources and a book by Henry Kissinger that details some of
his phone conversations during the crisis. Most important, much new
material has become available on the Clinton years, and I have significantly rewritten those chapters to incorporate the accounts of President
Clinton, Secretary of State Madeleine Albright, chief Middle East negotiator Dennis Ross, and several others who have since written about that
period. As in the second edition, supplementary documentation is available online and referenced in the notes to the text. In preparing this new edition, I was assisted
by a number of special people: Carol Huang and Stacie Pettyjohn assisted
with research. Tanjam Jacobson edited the revised manuscript, Inge
Lockwood proofread the pages, and Sherry Smith prepared the index.
As I conclude this note, the prospects for peace between Israel and the
Palestinians may have improved somewhat with the death of Yasir Arafat.
But the substantive issues that divide the parties will still be difficult to
resolve. I continue to believe that the United States will have to play a
major role if negotiations are to succeed. I have never believed that the
conflict is destined to last forever, but at the same time, it must be recognized that nothing less than an all-out effort—by Israelis, Palestinians, and
Americans—is likely to produce the long-sought peace that the peoples of
the region deserve.
The United States, now engaged in a struggle against Islamic extremism and committed to trying to build democracy in Iraq, has more reasons
than ever to wish for peace between Israelis and Palestinians. This would
help to stem the tide of anti-Americanism in the region, could provide an
example of how peace and democracy can by mutually reinforcing, and
might unleash a host of other beneficial results in a region that has suffered
from too much war and too little democracy and development.
But to be a catalyst for peace between Israelis and Palestinians—and
perhaps Syrians and Lebanese as well—the United States will have to do
more than offer its good offices. Procedural gimmicks will not get very far
in the highly charged atmosphere of the Middle East. Confidence-building
measures have been tried extensively in the past and have often proved
wanting. If peace is to come, the parties must now tackle the big questions
of the shape of a final peace settlement. A strategy based on incrementalism will be a waste of time. The United States, with broad international
support, is well poised to help shape the substantive compromises on
which peace can be built. The general outline is widely understood. The
peace talks of the 1990s came close to defining eventual areas of agreement. George W. Bush’s first term coincided with a four-year hiatus in the
peace process. It is time to get back to business. In the next phase, the task
will be to bridge remaining gaps, to restore a degree of mutual trust, to
provide a vision of peace as the key to regional development and improved
governance, and to promote a concept of security that does not rely exclusively on the gun. This may sound like an impossible dream in early 2005.
But it is a worthy goal for American diplomacy.
[Map: The Arab-Israeli Arena, showing Israel and its neighbors (Lebanon, Syria, Jordan, Egypt, Saudi Arabia) with the Golan Heights, West Bank, Gaza Strip, Sinai, and surrounding cities and waterways]
Chapter 1

Introduction
Sometime in the mid-1970s the term peace process
began to be widely used to describe the American-led efforts to bring about
a negotiated peace between Israel and its Arab neighbors. This procedural bias, which frequently seems to characterize American diplomacy, reflects a practical, even legalistic side of American political culture. Procedures are also less controversial than substance, more
susceptible to compromise, and thus easier for politicians to deal with.
Much of U.S. constitutional theory focuses on how issues should be resolved—the process—rather than on substance—what should be done.
Yet whenever progress has been made toward Arab-Israeli peace through
American mediation, there has always been a joining of substance and procedure. The United States has provided both a sense of direction and a mechanism. That, at its best, is what the “peace process” has been about. At worst,
it has been little more than a slogan used to mask the marking of time.
The Pre-1967 Stalemate
The stage was set for the contemporary Arab-Israeli peace process by the
1967 Six-Day War. Until then, the conflict between Israel and the Arabs
1
had seemed almost frozen, moving neither toward resolution nor toward
war. The ostensible issues in dispute were still those left unresolved by the
armistice agreements of 1949. At that time, it had been widely expected
that those agreements would simply be a step toward final peace talks.
But the issues in dispute were too complex for the many mediation efforts
of the early 1950s, and by the mid-1950s the cold war rivalry between
Moscow and Washington had left the Arab-Israeli conflict suspended somewhere between war and peace. For better or worse, the armistice agreements had provided a semblance of stability from 1949 to 1967.
During this long truce the Israelis had been preoccupied with questions
of an existential nature. Would the Arabs ever accept the idea of a Jewish
state in their midst? Would recognition be accompanied by security arrangements that could be relied on? Would the Arabs insist on the return
of the hundreds of thousands of Palestinian refugees who had fled their
homes in 1948–49, thereby threatening the Jewishness of the new state?
And would the Arabs accept the 1949 armistice lines as recognized borders, or would they insist on an Israeli withdrawal to the indefensible lines
of the 1947 United Nations partition agreement? As for tactics, would
Israel be able to negotiate separately with each Arab regime, or would the
Arabs insist on a comprehensive approach to peacemaking? Most Israelis
felt certain that the Arabs would not provide reassuring answers to these
questions and therefore saw little prospect for successful negotiations,
whether with the conservative monarchs or with the new brand of nationalistic army officers.
From the Arab perspective, the conflict also seemed intractable, but the
interests of existing regimes and the interests of the Palestinians, who had
lost most from the creation of Israel, were by no means identical. The
regimes struck the pose of defending the rights of the Palestinians to return to their homes or to be compensated for their losses. They withheld
recognition from the Jewish state, periodically engaging in furious propaganda attacks against the “Zionist entity.” The more militant Arabs sometimes coupled their harsh rhetoric with support for guerrilla attacks on
Israel. But others, such as Jordan and Lebanon, were fairly content with
the armistice arrangements and even maintained under-the-table contacts
with the Israelis. “No war, no peace” suited them well.
The Palestinians, not surprisingly, used all their moral and political capital to prevent any Arab regime from recognizing the Jewish state, and by
the mid-1950s they had found a champion for their cause in Egypt’s president Gamal Abdel Nasser. From that point on, Arab nationalism and the
demand for the restoration of Palestinian rights were Nasser’s most potent
weapons as he sought to unify the ranks of the Arab world. But Nasser
also sought to steer a course between war and peace, at least until the
momentous days of May 1967. Then, as tensions rose, Palestinian radicals, who had hoped to draw the Arab states into conflict with Israel on
their behalf, rallied to Nasser’s banner and helped to cut off any chance
that he might retreat from the brink to which he had so quickly advanced.
The 1967 Watershed
The 1967 war transformed the frozen landscape of the Arab-Israeli conflict
in dramatic ways. Israel revealed itself to be a military power able to outmatch
all its neighbors. By the end of the brief war, Israel was in control of the
Sinai desert; the West Bank of the Jordan River, including all of East Jerusalem; Gaza, with its teeming refugee camps; and the strategically important
Golan Heights. More than a million Palestinians came under the control of
the Israeli military, creating an acute dilemma for Israel. None of the post-1921 British mandate of Palestine was now free of Israeli control. If Israel
kept the newly conquered land and granted the people full political rights,
Israel would become a binational state, which few Israelis wanted. If it kept
the land but did not grant political rights to the Palestinians, it would come
to resemble other colonial powers, with predictable results. Finally, if Israel
relinquished the land, it would retain its Jewish character, but could it live in
peace and security? These were the alternatives debated within the fractious, often boisterous Israeli democracy.
Given the magnitude of their victory in the 1967 war, some Israelis
seemed to expect right afterward that the Arabs would have no option but
to sue for peace. But that did not happen. So, confident of its military
superiority and assured of American support, Israel decided to wait for
the Arabs to change their position. But what would happen to the occupied territories while Israel waited? Would they be held in trust, to be
traded for peace and security at some future date? Or would they gradually and selectively be incorporated into Israel, as the nationalists on the
right demanded? East Jerusalem, at least, would not be returned, and almost immediately Israel announced the unilateral expansion of the
municipal boundaries and the annexation of the Arab-inhabited parts of
the city. Palestinians living in Jerusalem would have the right to become
Israeli citizens, but few took up the offer. Israel signaled its willingness to
return most of the occupied territories, apart from Jerusalem, although
the passage of time and changing circumstances gradually eroded that
position.
The 1967 war was a shock to Arabs who had believed Nasser could
end their sense of weakness and humiliation at the hands of the West.
Indeed, although Nasser lived on for another three years after the war, his
prestige was shattered. Arab nationalism of the sort he preached would
never again be such a powerful force. Instead, regimes came to look more
and more after their own narrow interests, and the Palestinians followed
suit by organizing their own political movement, free of control by any
Arab government. One of the few dynamic developments in the Arab world
after the 1967 war was the emergence of a new generation of Palestinians
leading the fight for their rights.
The Palestine Liberation Organization (PLO), originally supported by
Arab regimes to keep the Palestinians under control, quickly became an
independent actor in the region. It symbolized the hopes of many Palestinians and caused much concern among established Arab regimes, which
were not used to seeing the Palestinians take matters into their own hands.
In theory, these changes in the Arab world might have opened the way
for an easing of the Arab-Israeli conflict. A certain amount of self-criticism took place in Arab intellectual circles. Political realism began to challenge ideological sloganeering. But no one made any serious peace effort
immediately after the 1967 war, and by September of that year the Arab
parties had all agreed there would be no negotiations with Israel, no peace,
and no recognition. Once again, “neither war nor peace” seemed to be a
tolerable prospect for both Arabs and Israelis.
The Need for a Mediator
With the parties to the conflict locked into mutually unacceptable positions, the chance for diplomatic movement seemed to depend on others,
especially the United States. Because of the close U.S.-Israeli relationship,
many Arabs looked to Washington to press Israel for concessions. The
example of President Dwight D. Eisenhower, who had pressed Israel to
relinquish its gains from the Suez war of 1956, was still a living memory.
The two main areas of Arab concern were the return of territories seized
in the 1967 war and some measure of justice for the Palestinians. In return, it was implied, something short of peace would be offered to Israel,
perhaps an end to belligerency or strengthened armistice arrangements.
The Arab regimes were still reluctant to promise full peace and recognition for Israel unless and until the Palestinians were satisfied, and that
would require more than Israeli withdrawal from occupied territories. As
time went by, and the PLO gained in prestige, it became more and more
difficult for the Arab states to pursue their narrowly defined interests with
no regard for Palestinian claims. And the Arabs were reluctant to deal
directly with Israel. If a deal were to be struck, it would be through the
efforts of the two superpowers—the United States and the Soviet Union—
and the United Nations.
By contrast, Israel was adamant that territory would not be returned
for less than peace, recognition, and security. And the means for getting to
a settlement would have to include direct negotiations with Israel by each
Arab party. For most Israelis, the claims of the Palestinians were impossible to deal with. At best, Jordan could act as a stand-in for the Palestinians, who would have to be satisfied with some form of internationally
supported rehabilitation and compensation scheme. Above all, Palestinians would not be allowed to return to their homes, except in very special
circumstances of family reunions and in very small numbers.
American Ambivalence: Positions and Policies
Confronted with these almost contradictory positions, the United States
was reluctant to get deeply involved in Arab-Israeli diplomacy. The Vietnam War was still raging in 1967, and the needs of the Middle East seemed
less compelling than the daily demands of a continuing war in Southeast
Asia. Still, from the outset the United States staked out a position somewhere in between the views of Israelis and Arabs. Israel, it was believed,
was entitled to more than a return to the old armistice arrangements.
Some form of contractually binding end to the state of war should be
achieved, and Israeli security concerns would have to be met. At the same
time, if the Arabs were prepared to meet those conditions, they should
recover most, if not all, of the territory lost in 1967. These views were
spelled out by President Lyndon Johnson soon after the war, and they
became the basis for UN Resolution 242 of November 22, 1967, which
thereafter provided the main reference point, with all its ambiguities, for
peacemaking.1
The basic American position adopted in 1967 has remained remarkably consistent. For example, each American president since 1967 has
formally subscribed to the following points:
—Israel should not be required to relinquish territories captured in
1967 without a quid pro quo from the Arab parties involving peace, security, and recognition. This position, summarized in the formula “land for
peace” and embodied in UN Resolution 242, applies to each front of the
conflict.
—East Jerusalem is legally considered to be occupied territory whose
status should eventually be settled in peace negotiations. Whatever its final
political status, Jerusalem should not be physically redivided. Reflecting
the legal American position on the city, the American embassy has remained in Tel Aviv, despite promises by many presidential candidates to
move the embassy to Jerusalem.
—Israeli settlements beyond the 1967 armistice lines—the “green line”—
are obstacles to peace. Until 1981 they were considered illegal under international law, but the administration of Ronald Reagan reversed position
and declared they were not illegal. But Reagan, and especially George
Bush, continued to oppose the creation of settlements. No American funds
are to be used by Israel beyond the green line.
—However Palestinian rights may eventually be defined, they do not
include the right of unrestricted return to homes within the 1967 lines, nor
do they entail the automatic right of independence. All administrations
have opposed the creation of a fully independent Palestinian state, preferring, at least until the mid-1990s, some form of association of the West
Bank and Gaza with Jordan. Over time, however, the Jordan option—
the idea that Jordan should speak for the Palestinians—has faded, and
since 1988 the United States has agreed to deal directly with Palestinian
representatives.
—Israel’s military superiority, its technological edge against any plausible coalition of Arab parties, has been maintained through American
military assistance. Each U.S. administration has tacitly accepted the existence of Israeli nuclear weapons, with the understanding that they will
not be brandished and can be regarded only as an ultimate deterrent, not
as a battlefield weapon. American conventional military aid is provided,
in part, to ensure that Israel will not have to rely on its nuclear capability
for anything other than deterrence.
With minor adjustments, every president from Lyndon Johnson to Bill
Clinton has subscribed to each of these positions. They have been so fundamental that they are rarely even discussed. To change any one of these
positions would entail costs, both domestic and international. These positions represent continuity and predictability. But they do not always determine policy. Policy, unlike these positions, is heavily influenced by tactical
considerations, and here presidents and their advisers differ with one another, and sometimes with themselves, from one moment to the next.
Policies involve judgments about what will work. How can a country
best be influenced? What levers exist to influence a situation? Should aid
be offered or withheld? Will reassurance or pressure—or both—be most
effective? When is the optimal time to launch an initiative? Should it be
done in public or private? How much prior consultation should take place,
and with whom? On these matters, there is no accepted wisdom. Each
president and his top advisers must evaluate the realities of the Middle
East, of the international environment, of the domestic front, and of human psychology before reaching a subjective judgment. While positions
tend to be predictable, policies are not. They are the realm where leadership makes all the difference. And part of leadership is knowing when a
policy has failed and should be replaced with another.
How Policy Is Made: Alternative Models
More than any other regional conflict, the Arab-Israeli dispute has consistently competed for top priority on the American foreign-policy agenda.
This study tries to account for the prominence of the Arab-Israeli peace
process in American policy circles since 1967. It seeks to analyze the way
in which perceived national interests have interacted with domestic political considerations to ensure that Arab-Israeli peacemaking has become
the province of the president and his closest advisers.
Because presidents and secretaries of state—not faceless bureaucrats—
usually set the guidelines for policy on the Arab-Israeli dispute, it is important to try to understand how they come to adopt the views that guide
them through the labyrinthine complexities of Arab-Israeli diplomacy. Here
several models compete for attention.
One model would have us believe that policies flow from a cool deliberation of national interest. This strategic model assumes that decisions
are made by rational decisionmakers. Such a perspective implies that it
does not much matter who occupies the Oval Office. The high degree of
continuity in several aspects of the American stance toward the conflict
since 1967 would serve as evidence that broad interests and rational policy
processes provide the best explanation for policy.
But anyone who has spent time in government will testify that
policymaking is anything but orderly and rational. As described by the
bureaucratic politics model, different agencies compete with one another,
fixed organizational procedures are hard to change, and reliable information is difficult to come by. This perspective places a premium on bureaucratic rivalries and the “game” of policymaking. Policy outcomes are much
less predictable from this perspective. Instead, one needs to look at who is
influencing whom. Microlevel analysis is needed, in contrast to the broad
systemic approach favored by the strategic model. Much of the gossip of
Washington is based on the premise that the insiders’ political game is
what counts in setting policy. Foreign embassies try desperately to convince their governments back home that seemingly sinister policy outcomes are often simply the result of the normal give and take of everyday
bureaucratic struggles, the compromises, the foul-ups, the trading of favors that are part of the Washington scene. If conspiracy theorists thrive
on the strategic model—there must be a logical explanation for each action taken by the government—political cynics and comics have a field
day with the bureaucratic politics model.2
A third model, one emphasizing the importance of domestic politics, is
also injected into the study of American policy toward the Arab-Israeli
conflict. Without a doubt Arab-Israeli policymaking in Washington does
get tangled up in internal politics. Congress, where support for Israel is
usually high, and where pro-Israeli lobbies tend to concentrate their efforts, can frequently exert influence over foreign policy, largely through
its control over the budget.3 While some senators and representatives no
doubt do consider the national interest, for many others positions taken
on the Arab-Israeli conflict are little more than part of their domestic reelection strategy. Some analysts have maintained that American Middle
East policy is primarily an expression of either the pro-Israeli lobby or the
oil lobby. Little evidence will be found here for such an extreme view, even
though in some circumstances the lobbies can be influential.
Besides considering the role of Congress, one must also take into account the effect of the workings of the American political system, especially the four-year cycle of presidential elections. This cycle imposes some
regular patterns on the policymaking process that have little to do with
the world outside but a great deal to do with the way power is pursued
and won through elections.4 One should hardly be surprised to find that
every four years the issue of moving the American embassy to Jerusalem
reemerges, arms sales to Arab countries are deferred, and presidential contenders emphasize those parts of their programs that are most congenial
to the supporters of Israel. Nor should one be surprised to find that once
the election is over, policy returns to a more evenhanded course.
The Mind of the President
As much as each of these approaches—strategic-rational analysis, bureaucratic politics, and domestic politics—can illuminate aspects of how the
United States has engaged in the Arab-Israeli peace process,5 the most
important factor, as this book argues, is the view of the conflict—the definition of the situation—held by the president and his closest advisers, usually including the secretary of state. The president is more than just the
first among equals in a bureaucratic struggle or in domestic political debates. And he is certainly not a purely rational, strategic thinker.
More than anything else, an analyst studying American policy toward
the Arab-Israeli conflict should want to know how the president—and the
few key individuals to whom he listens—makes sense of the many arguments, the mountain of “facts,” the competing claims he hears whenever
his attention turns to the Arab-Israeli conflict. To a large degree he must
impose order where none seems to exist; he must make sense out of something he may hardly understand; he must simplify when complexity becomes overwhelming; and he must decide to authorize others to act in his
name if he is not interested enough, or competent enough, to formulate
the main lines of policy.
What, then, do the president and his top advisers rely on if not generalized views that they bring with them into office? No senior policymaker in
American history has ever come to power with a well-developed understanding of the nuances of the Arab-Israeli dispute, the intricacies of its
history, or even much knowledge of the protagonists. At best policymakers
have general ideas, notions, inclinations, biases, predispositions, fragments
of knowledge. To some extent “ideology” plays a part, although there has
never really been a neat liberal versus conservative, Democrat versus Republican divide over the Arab-Israeli conflict.
Any account of policymaking would, however, be incomplete if it did
nothing more than map the initial predispositions of key decisionmakers.
As important as these are in setting the broad policy guidelines for an
administration, they are not enough. Policy is not static, set once and forever after unchanged. Nor is policy reassessed every day. But over time
views do change, learning takes place, and policies are adjusted. As a result, a process of convergence seems to take place, whereby the views of
senior policymakers toward the Arab-Israeli conflict differ most from those
of their predecessors when they first take office and tend to resemble them
by the end of their terms. Gerald Ford and Jimmy Carter disagreed on
Middle East policy in 1976–77 but were later to coauthor articles on what
should be done to resolve the Arab-Israeli conflict. Even Ronald Reagan
in his later years seemed closer to his predecessor’s outlook than to his
own initial approach to Arab-Israeli diplomacy.
It is this process of adjustment, modification, and adaptation to the
realities of the Middle East and to the realities of Washington that allows
each administration to deal with uncertainty and change. Without this
on-the-job learning, American foreign policy would be at best a rigid,
brittle affair.
What triggers a change in attitudes? Is the process of learning incremental, or do changes occur suddenly because of crises or the failure of
previous policies? When change takes place, are core values called into
question, or are tactics merely revised? The evidence presented here suggests that change rarely affects deeply held views. Presidents and their
advisers seem reluctant to abandon central beliefs. Basic positions are adhered to with remarkable tenacity, accounting for the stability in the stated
positions of the United States on the issues in dispute in the Arab-Israeli
conflict. They represent a deep consensus. But politicians and diplomats
have no trouble making small adjustments in their understanding of the
Arab-Israeli conflict, and that is often enough to produce a substantial
change in policy, if not in basic positions or in overall strategy. One simple
change in judgment—that President Anwar Sadat of Egypt should be taken
seriously—was enough to lead to a major reassessment of American policy
in the midst of the October 1973 war.
Since most of the American-led peace process has been geared toward
procedures, not substance, the ability of top decisionmakers to experiment
with various approaches as they learn more about the conflict has imparted an almost experimental quality to American foreign policy in the
Middle East. Almost every conceivable tactic is eventually considered, some
are tried, and some even work. And if one administration does not get it
right, within a matter of years another team will be in place, willing to try
other approaches. Although American foreign policy is sometimes maddening in its lack of consistency and short attention span, this ability to
abandon failed policies and move on has often been the hallmark of success.
Foreign-policymaking seems to involve an interplay among the initial
predispositions of top policymakers, information about the specific issues
being considered, the pull of bureaucratic groupings, the weight of domestic political considerations, the management of transitions from one presidency to the next, and the impact of events in the region of concern. It is
often in the midst of crises that new policies are devised, that the shortcomings of one approach are clearly seen, and that a new definition of the
situation is imposed. And it is in the midst of crises that presidential powers are at their greatest.
Only rarely are crises anticipated and new policies adopted to ward
them off. As a result, American policy often seems to run on automatic
pilot until jolted out of its inertial course by an event beyond its control.
Critics who find this pattern alarming need to appreciate how complex it
is to balance the competing perspectives that vie for support in the Oval
Office and how difficult it is to set a course that seems well designed to
protect the multiple interests of a global power like the United States—
and to do all this without risking one’s political life.
National Interests
To get a sense of the difficulty, consider the nature of American interests in
the Middle East, as seen from the perspective of the White House. An
assessment of these interests almost always takes place at the outset of a
new administration, or just after a crisis, in the belief, usually unjustified,
that light will be shed on what should be done to advance the prospects of
Arab-Israeli peace at the least risk to American interests.
Politicians and some analysts like to invoke the national interest because it seems to encompass tangible, hard-headed concerns as opposed
to sentimental, emotional considerations. There is something imposing
about cloaking a decision in the garb of national security interests, as if no
further debate were needed.
In the real world of policymaking, interests are indeed discussed, but
most officials understand that any definition of a national interest contains a strong subjective element. Except for limited areas of foreign affairs, such as trade policy, objective yardsticks do not exist to determine
the national interest.
In discussions of the Arab-Israeli conflict, several distinct national interests often compete, confounding the problems of policymaking. For
example, most analysts until about 1990 would have said that a major
American interest in the Middle East, and therefore related to the handling of Arab-Israeli diplomacy, was the containment of Soviet influence
in the region. This interest derived from a broader strategy of containment that had been developed initially for Europe but was gradually universalized during the cold war.
In Europe the strategy of containment had led to creation of the Marshall
Plan and the North Atlantic Treaty Organization (NATO). But the attempt
to replicate these mechanisms of containment in the Middle East had failed,
in part because of the unresolved Arab-Israeli conflict. So, however much
American policymakers might worry about the growth of Soviet influence
in the region, they rarely knew what should be done about it. In the brief
period of a few months in 1956–57, the United States opposed the Israeli-French-British attack on Egypt (the Suez war), announced the Eisenhower
Doctrine of support to anticommunist regimes in the area, forced the Israelis
to withdraw from Sinai, and criticized Nasser’s Egypt for its intervention
in the affairs of other Arab countries. How all that contributed coherently
to the agreed-on goal of limiting Soviet influence was never quite clear.
Over the years many policies toward the Arab-Israeli conflict have been
justified, at least in part, by this concern about the Soviet Union. Arms
sales have been made and denied in pursuit of this interest; and the Soviets
have been excluded from, and included in, discussions on the region, all as
part of the goal of trying to manage Soviet influence in the region.
One might think that a strategy of challenging the Soviets in the region
would have led the United States to adopt belligerent, interventionist policies, as it did in Southeast Asia. But in the Middle East the concern about
overt Soviet military intervention was high, especially from the mid-1960s
on, and therefore any American intervention, it was felt, might face a
comparable move by the Soviets. Indeed, on several occasions, in the June
1967 war, in 1970 in Jordan, during the October 1973 war, and to a
lesser degree in 1982 in Lebanon, the United States feared a possible military confrontation with the Soviet Union. Thus, however ardently American officials might want to check Soviet advances, they wanted to do so
without much risk of direct military confrontation with Moscow. In brief,
the Soviet angle was never far from the minds of policymakers, but it did
little to help clarify choices. With the collapse of the Soviet Union in 1990–
91, this interest suddenly disappeared, leaving oil and Israel as the two
main American concerns in the Middle East.
Oil has always been a major reason for the United States to pay special
attention to the Middle East, but its connection to the Arab-Israeli conflict
has not always been apparent. American companies were active in developing the oil resources of the area, especially in Saudi Arabia; the industrialized West was heavily dependent on Middle East oil; and American import
needs began to grow from the early 1970s on.6
The basic facts about oil in the region were easy to understand. Saudi
Arabia, Iraq, and Iran, along with the small states of the Persian Gulf
littoral, sit atop about two-thirds of the known reserves of oil in the
world—reserves with remarkably low production costs. Thus Middle East
stability seemed to go hand in hand with access to relatively inexpensive
supplies of oil.
Throughout most of the 1950s and 1960s Middle East oil was readily
available for the reconstruction of Europe and Japan. American companies made good profits, and threatened disruptions of supply had little
effect. A conscious effort to keep Persian Gulf affairs separate from the
Arab-Israeli conflict seemed to work quite well.
But by the late 1960s the British had decided to withdraw their military presence from east of Suez. How, if at all, would that affect the
security of Gulf oil supplies? Should the United States try to fill the vacuum
with forces of its own, or should it try to build up regional powers, such
as Iran and Saudi Arabia? If arms were sold to Saudi Arabia to help
ensure access to oil supplies, how would the Israelis and the other Arab
countries react? What would the Soviets do? In short, how could an
interest, which everyone agreed was important, be translated into concrete policies?
American calculations about oil were further complicated by the fact
that the United States is both a large producer and a large importer of oil.
For those concerned with enhancing domestic supplies, the low production costs of Middle East oil are always a potential threat. Texas oil producers argue for quotas to protect them from “cheap” foreign oil. But
consumers want cheap oil and will therefore resist gasoline taxes, tariffs,
or quotas designed to prop up the domestic oil industry. No American
president would know how to answer the question of the proper price of
Middle East oil. If forced to give an answer, he would have to mumble
something like “not too high and not too low.” In practice, the stability
and predictability of oil supplies have been seen as more important than a
specific price. This perception has reinforced the view that the main American interest is in reliable access to Middle East oil, and therefore in regional stability. Still, price cannot be ignored. In 2000 the annual American
import bill for oil from the Middle East exceeded $20 billion, out of a
total oil import bill of more than $60 billion. Each one-dollar increase in
the price of oil added more than $1 billion to the oil import bill.
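The arithmetic behind that last claim can be checked with a quick back-of-the-envelope calculation. The sketch below uses the dollar figures cited above; the per-barrel price is an assumed round number near actual 2000 levels, not a figure from the text:

```python
# Back-of-the-envelope check of the 2000 oil-import figures cited above.
# Assumption (not from the text): an average import price of about
# $28 per barrel, roughly the actual 2000 level.
assumed_price = 28.0              # dollars per barrel (assumption)
total_import_bill = 60e9          # "more than $60 billion" (from the text)

barrels_per_year = total_import_bill / assumed_price
# A $1/barrel price rise costs one extra dollar for every barrel imported.
added_cost = barrels_per_year * 1.0

print(f"Implied imports: about {barrels_per_year / 1e9:.1f} billion barrels/year")
print(f"Cost of a $1/barrel rise: about ${added_cost / 1e9:.1f} billion/year")
```

At roughly two billion barrels of annual imports, a one-dollar rise in the per-barrel price adds well over $1 billion to the import bill, consistent with the figure cited.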
The other main interest that has dominated discussion of the Arab-Israeli conflict is the special American commitment to Israel. The United
States was an early and enthusiastic supporter of the idea of a Jewish state
in part of Palestine. That support was clearly rooted in a sense of moral
commitment to the survivors of the holocaust, as well as in the intense
attachment of American Jews to Israel. During the 1980s a “strategic”
rationale was added to the traditional list of reasons for supporting Israel,
although this view was never universally accepted.
Support for Israel was always tempered by a desire to maintain some
interests in surrounding Arab countries, because of either oil or competition with the Soviet Union. As a result, through most of the years from
1949 until the mid-1960s, the United States provided few arms and only
modest amounts of aid. As Eisenhower demonstrated in 1956, support for
Israel did not mean offering a blank check.
Managing the relationship with the Soviet Union in the Middle East,
access to inexpensive oil, and support for Israel were American interests
readily accepted by successive administrations. Yet what the implications
for policy were of any one, to say nothing of all three, of these interests
was not clear. To take the most difficult case, what should be done when
one set of interests seemed to be at variance with another? Which should
get more weight, the economic interest of oil, the strategic interest of checking Soviet advances, or the moral interest of supporting Israel?
Without a common yardstick, the interests were literally incommensurate. How could arms for the Saudis or Jordanians be squared with support for Israel? How could Soviet inroads in a country like Egypt be
checked? Was it better to oppose Nasser to teach him a lesson about the
costs of relying on the Soviets, or should an effort be made to win him
away from dependence on Moscow? And what would either of these approaches mean for relations with Israel and Saudi Arabia?
In brief, U.S. national interests were clearly involved in the Middle East
and would be affected by every step of the Arab-Israeli peace process. But
there was almost no agreement on what these interests meant in terms of
concrete policies. Advocates of different perspectives, as will be seen, were
equally adept at invoking national interests to support their preferred
courses of action. Often policy preferences seemed to come first, and then
the interests were found to justify the policy. Precisely because of these
dilemmas, policymaking could not be left to bureaucrats. The stakes were
too high, the judgments too political. Thus Arab-Israeli policy, with remarkable frequency, landed in the lap of the president or his secretary of
state. More than for most issues of foreign policy, presidential leadership
became crucial to the Arab-Israeli peace process.
Insofar as presidents and their advisers saw a way to resolve the potential conflict among American interests in the Middle East, it was by promoting the Arab-Israeli peace process. This policy is the closest equivalent
to that of containment toward the Soviet Union—a policy with broad
bipartisan support that promises to protect a range of important American interests. If Arab-Israeli peace could be achieved, it was thought, Soviet influence in the region would decline, Israeli security would be
enhanced, and American relations with key Arab countries would improve.
Regional stability would therefore be more easily achieved, and oil supplies would be less threatened. Obviously, other sources of trouble would
still exist in the region, but few disagreed on the desirability of Arab-Israeli peace or the need for American leadership to achieve it. The differences, and they were many, came over the feasibility of a peace settlement
and over appropriate tactics. In making these judgments, presidents made
their most important contribution to the formulation of policy.
Presidential Leadership and Policymaking
In U.S. politics, there is a strong presumption that who is president matters. Huge sums are spent on electoral campaigns to select the president.
The office receives immense respect and deference, and most writers of
American political history assume that the man occupying the White House
can shape events. Does this perspective merely reflect an individualism
rooted in American culture, or does it contain a profound truth?
One can easily imagine situations in which it would be meaningless to
explain a policy by looking at the individuals responsible for making the
decisions. If no real margin for choice exists, individuals do not count for
much. Other factors take precedence. For example, to predict the voting
behavior of senators from New York on aid to Israel, one normally need
not consider their identity. It is enough to know something about the constituency, the overwhelming support for Israel among New Yorkers, and
the absence of countervailing pressures to be virtually certain about the
policy choice of an individual senator.
If context can account for behavior, so can the nature of perceived interests or objectives. If we were studying Japan’s policies toward the Arab-Israeli conflict, we would not be especially concerned with who was prime
minister at any given moment. It would make more sense to look at the
dependence of Japan on Arab oil and the lack of any significant cultural
or economic ties to Israel to predict that Japan would adopt a generally
pro-Arab policy. When interests easily converge on a single policy, individual choice can be relegated to the background.
Finally, if a nation has no capability to act in foreign policy, we will not
be particularly interested in the views of its leaders. To ask why a small
European country does not assume a more active role in promoting an
Arab-Israeli settlement does not require us to examine who is in charge of
policy. Instead, the absence of significant means to affect the behavior of
Arabs and Israelis is about all we need to know. A country without important economic, military, or diplomatic assets has virtually no choices to
make in foreign policy. Obviously none of these conditions holds for the
United States in its approach to the Arab-Israeli conflict. Capabilities for
action do exist. The nature of American interests, as generally understood
by policymakers, does not predetermine a single course of action. And,
despite the obvious constraints imposed by the structure of the international system and domestic politics, choices do exist on most issues, even
though at times the margin of choice may be narrow.
Confronting Complexity and Uncertainty
Most political leaders, with no noteworthy alteration in personality or
psychodynamics, are likely at some point to change positions on policy
issues. Often such changes will be portrayed as opportunism or waffling.
But they could instead be a reaction to a complicated situation, suggesting
that people can learn as new information is acquired. Particularly when
dealing with complex events and ambiguous choices, people may shift
their positions quite suddenly, without altering the fundamental aspects of
their approaches to policy. As Raymond Bauer said, “Policy problems are
sufficiently complex that for the vast majority of individuals or organizations it is conceivable—given the objective features of the situation—to
imagine them ending up on any side of the issue.”7
Policymakers often find it difficult to recognize the difference between
a good proposal and a bad proposal. In normal circumstances, bargaining
and compromise may be rational courses for a politician to follow, but
adopting either of these courses of action assumes that issues have been
defined according to some understood criteria. When such criteria are not
obvious, what should one do?
On most issues of importance, policymakers operate in an environment
in which uncertainty and complexity are dominant. Addressing an unknowable future with imperfect information about the past and present,
policymakers must use guidelines and simplifications drawn from their
own experience, the “lessons of history,” or the consensus of their colleagues. The result is often a cautious style of decisionmaking that strives
merely to make incremental changes in existing policies.8 At times, however, very sudden shifts in policy may also take place. How can one account for both these outcomes?
Leadership is only rarely the task of selecting between good and bad
policies. Instead, the anguish and challenge of leadership is to choose between equally plausible arguments about how best to achieve one’s goals.
For example, most presidents and their advisers have placed a very high
value on achieving peace in the Middle East. But values do not easily
translate into policy. Instead, several reasonable alternatives, such as the
following, are likely to compete for presidential attention:
—If Israel is to feel secure enough to make the territorial concessions
necessary to gain Arab acceptance of the terms of a peace agreement, it
must continue to receive large quantities of American military and economic aid.
—If Israel feels too strong and self-confident, it will not see the need for
any change in the status quo. U.S. aid must therefore be used as a form of
pressure.
Presidents Nixon, Ford, Carter, and Bush subscribed to both the foregoing views at different times.
Similarly, consider the following propositions, which were widely entertained by U.S. presidents until the breakup of the USSR:
—The Soviet Union has no interest in peace in the Middle East, because
it would lose influence unless it could exploit tensions in the area. Hence
the United States cannot expect cooperation from the Soviet Union in the peace process.
—The Soviets, like ourselves, have mixed interests in the Middle East.
They fear a confrontation and are therefore prepared to reach a settlement, provided they are allowed to participate in the diplomatic process.
By leaving the Soviet Union out, the United States provides it with an
incentive to sabotage the peacemaking effort. Therefore, U.S.-Soviet agreement will be essential to reaching peace in the Middle East.
Concerning the Arabs, one may also hear diverse opinions:
—Only when the Arabs have regained their self-respect and feel strong
will they be prepared to make peace with Israel.
—When the Arabs feel that time is on their side, they increase their
demands and become more extreme. Only a decisive military defeat will
convince them that Israel is here to stay and that they must use political
means to regain their territory.
Each of these propositions has been seriously entertained by recent
American presidents and secretaries of state. One could almost say that all
of them have been believed at various times by some individuals. The key
element in selecting among these plausible interpretations of reality is not
merely whether one is pro-Israeli or pro-Arab, or hard line or not so hard
line on relations with Moscow. A more complex process is at work.
Lessons of History
In choosing among plausible, but imperfectly understood, courses of action, policymakers inevitably resort to simplifications.9 Categorical inferences are thus made; confusing events are placed in comprehensible
structures; reality is given a definition that allows purposive action to take
place. Recent experience is a particularly potent source of guidance for
the future. If a policy has worked in one setting, policymakers will want to
try it in another context as well. Secretary of State Henry A. Kissinger, for
example, apparently relied on his experiences in negotiating with the Chinese, Russians, and Vietnamese when he approached negotiations with
the Arabs and Israelis in 1974–75. Step-by-step diplomacy was the result.
More general historical “lessons” may loom large in the thinking of
policymakers as they confront new problems.10 President Harry Truman
was especially inclined to invoke historical analogies. He well understood
that the essence of presidential leadership was the ability to make decisions in the face of uncertainty and to live with their consequences. By
relying on history, he was able to reassure himself that his decisions were
well founded.11
Several historical analogies have been notably effective in structuring
American views of reality. The lessons of Munich, for example, have
been pointed to repeatedly over the years, principally that the appeasement of dictators serves only to whet their appetite for further conquest.
Hence a firm, resolute opposition to aggression is required. The “domino
theory” is a direct descendant of this perspective, as was the policy of
containment.
A second set of guidelines for policy stems from President Woodrow
Wilson’s fourteen points after World War I, especially the emphasis on
self-determination and opposition to spheres of influence. As embodied in
the Atlantic Charter in 1941, these principles strongly influenced American policy during the Second World War.12 Since the failure of U.S. policy
in Southeast Asia, new “lessons” have been drawn, which warn against
overinvolvement, commitments in marginal areas, excessive reliance on
force, and risks of playing the role of world policeman. Whether these will
prove as durable as the examples of Munich and Wilsonian idealism remains to be seen, but American policy continues to be discussed in terms
of these historical analyses.
When recent experience and historical analogies fail to resolve dilemmas of choice, certain psychological mechanisms may come to the rescue.
Wishful thinking is a particularly potent way to resolve uncertainty. When
in doubt, choose the course that seems least painful, that fits best with
one’s hopes and expectations; perhaps it will turn out all right after all. In
any event, one can almost always rationalize a choice after making it.
Good reasons can be found even for bad policies, and often the ability to
come up with a convincing rationale will help to overcome uncertainties.
Apart from such well-known but poorly understood aspects of individual psychology, the social dynamics of a situation often help to resolve
uncertainty. If through discussion a group can reach consensus on the
proper course of action, individuals are likely to suppress their private
doubts. Above all, when a president participates in a group decision, a
strong tendency toward consensus is likely. As some scholars have emphasized, presidents must go to considerable lengths to protect themselves
from the stultifying effects of group conformity in their presence and the
tendency to suppress divergent views.13 Neither President Johnson's practice of inviting a large number of advisers to consult with him nor President Nixon's effort to use the National Security Council to channel alternatives to him was a guarantee against the distortions of group consensus, in part because presidents value consensus as a way to resolve doubts.
At any given moment presidents and their key advisers tend to share
fairly similar and stable definitions of reality. However such definitions
emerge, whether through reference to experience or to history, through
wishful thinking and rationalization, or through group consensus, they
will provide guidelines for action in the face of uncertainty. Complexity
will be simplified by reference to a few key criteria. In the Arab-Israeli
setting, these will usually have to do with the saliency of issues, their amenability to solution, the role of the Soviet Union (up until late 1990), and
the value of economic and military assistance to various parties.
Crises and the Redefining of Issues
Crises play an extremely important role in the development of these guidelines. By definition, crises involve surprise, threat, and increased uncertainty. Previous policies may well be exposed as flawed or bankrupt. Reality
no longer accords with previous expectations. In such a situation a new
structure of perceptions is likely to emerge, one that will reflect presidential perspectives to the degree that the president becomes involved in handling the crisis. If the crisis is satisfactorily resolved, a new and often quite
durable set of assumptions will guide policy for some time.
Often crises can produce significant policy changes without causing a
sweeping reassessment of a decisionmaker’s views. It may be only a greater
sense of urgency that brings into play a new policy. Or it may be a slight
shift in assumptions about the Soviet role, for example, or the advantages
of pursuing a more conciliatory policy toward Egypt. Small adjustments
in a person’s perceptions, in the weight accorded to one issue as opposed
to another, can lead to substantial shifts of emphasis, of nuance, and therefore of action. Again, policymakers do not change from being pro-Israeli
to being pro-Arab overnight, but crises may bring into focus new relations
among issues or raise the importance of one interest, thus leading to changes
in policy. Basic values will remain intact, but perceptions and understanding of relationships may quickly change.
In the case studies that follow, I explore the important role of crises in
defining issues for presidents and their advisers. And I try to account for
their views, to understand their reasoning, and to see situations from their
standpoint. Between crises, as is noted, it is difficult to bring about changes
in policies that were forged in crisis and have the stamp of presidential
approval.
Admittedly, this approach shortchanges the role of Congress, public
opinion, interest groups, the media, and the bureaucracy. All these are
worthy subjects of study and undoubtedly have influenced American diplomacy toward the Arab-Israeli conflict. Nor do I discuss in this book
why Arabs and Israelis made the decisions that they did. Only in passing
do I deal with the protagonists in the conflict, describing their views but
not subjecting them to the kind of analysis reserved for American policy.
Starting, then, with the key role of the president and his advisers in
shaping policy, particularly in moments of crisis, when domestic and organizational constraints are least confining, the book examines how politics and bureaucratic habits affect both the formulation and implementation
of policies in normal times. But at the center of the study are those rare
moments when policymakers try to make sense out of the confusing flow
of events, when they strive to relate action to purposes, for it is then that
the promises and limitations of leadership are most apparent.
part one
The Johnson Presidency
chapter 2
Yellow Light: Johnson and the Crisis of May–June 1967
Lyndon Baines Johnson brought to the presidency a remarkable array of political talents.1 An activist and a man of
strong passions, Johnson seemed to enjoy exerting his power. As majority leader in the Senate, he had used the art of persuasion as few other
leaders had; building consensus through artfully constructed compromises
had been one of his strong suits. His political experience did not, however,
extend to foreign-policymaking, an area that demanded his attention,
especially as American involvement in Vietnam grew in late 1964 and
early 1965.
Fortunately for the new president, one part of the world that seemed
comparatively quiet in the early 1960s was the Middle East. Long-standing disputes still simmered, but compared with the turbulent 1950s, the
situation appeared to be manageable. The U.S.-Israeli relationship had
been strengthened by President Kennedy, and Johnson obviously was prepared to continue on this line, particularly with increases in military assistance. His personal sentiments toward Israel were warm and admiring. To
all appearances he genuinely liked the Israelis he had dealt with, many of
his closest advisers were well-known friends of Israel, and his own contacts with the American Jewish community had been close throughout his
political career.2
Johnson’s demonstrated fondness for Israel did not mean he was particularly hostile to Arabs, but it is fair to say that Johnson showed little
sympathy for the radical brand of Arab nationalism expounded by Egypt’s
president Gamal Abdel Nasser. And he was sensitive to signs that the
Soviet Union was exploiting Arab nationalism to weaken the influence of
the West in the Middle East. Like other American policymakers before
him, Johnson seemed to waver between a desire to try to come to terms
with Nasser and a belief that Nasser’s prestige and regional ambitions
had to be trimmed. More important for policy than these predispositions,
however, was the fact that Johnson, overwhelmingly preoccupied by Vietnam, treated Middle East issues as deserving only secondary priority.
U.S.-Egyptian relations had deteriorated steadily between 1964 and
early 1967, in part because of the conflict in Yemen, in part because of
quarrels over aid. By 1967, with Vietnam becoming a divisive domestic
issue for Johnson, problems of the Middle East were left largely to the
State Department. There, the anxiety about increased tension between
Israel and the surrounding Arab states was growing after the Israeli raid
on the Jordanian town of Al-Samu’ in November 1966, and especially
after an April 1967 Israeli-Syrian air battle that resulted in the downing
of six Syrian MiGs. Under Secretary of State Eugene Rostow was particularly concerned about the drift of events, suspecting that the Soviets were
seeking to take advantage in the Middle East of Washington’s preoccupation with Vietnam.3
If the tensions on the Syrian-Israeli border provided the fuel for the
early stages of the 1967 crisis, the spark that ignited the fuel came in the
form of erroneous Soviet reports to Egypt on May 13 that Israel had
mobilized some ten to thirteen brigades on the Syrian border. Against the
backdrop of Israeli threats to take action to stop the guerrilla raids from
Syria,4 this disinformation apparently helped to convince Nasser the time
had come for Egypt to take some action to deter any Israeli moves against
Syria and to restore his own somewhat tarnished image in the Arab
world.5 The Soviets, he seemed to calculate, would provide firm backing
for his position.
On May 14 Nasser made the first of his fateful moves. Egyptian troops
were ostentatiously sent into Sinai, posing an unmistakable challenge to
Israel, if not yet a serious military threat. President Johnson and his key
advisers were quick to sense the danger in the new situation. Because of
his well-known sympathy for Israel and his forceful personality, Johnson
might have been expected to take a strong and unambiguous stand in
support of Israel at the outset of the crisis, especially as such a stand might
have helped to prevent Arab miscalculations. Moreover, reassurances to
Israel would lessen the pressure on Prime Minister Levi Eshkol to resort
to preemptive military action. Finally, a strong stand in the Middle East
would signal the Soviet Union that it could not exploit tensions there
without confronting the United States.
The reality of U.S. policy as the Middle East crisis unfolded in May
was, however, quite different. American behavior was cautious, at times
ambiguous, and ultimately unable to prevent a war that was clearly in the
offing. Why was that the case? This is the central puzzle in Johnson’s
reaction to the events leading up to the June 1967 war. Also, one must ask
how hard Johnson really tried to restrain Israel. Some have alleged that
Johnson in fact gave Israel a green light to attack, or in some way colluded
with Israel to draw Nasser into a trap.6 These charges need to be carefully
assessed. And what role did domestic political considerations play in Johnson’s thinking? Did the many pro-Israeli figures in Johnson’s entourage
influence his views?
Initial Reactions to the Crisis
Nasser’s initial moves were interpreted in Washington primarily in political terms. Under attack by the conservative monarchies of Jordan and
Saudi Arabia for being soft on Israel, Nasser was seen as attempting to
regain prestige by appearing as the defender of the embattled and threatened radical regime in Syria. Middle East watchers in the State Department thought they recognized a familiar pattern. In February 1960 Nasser
had sent troops into Sinai, postured for a while, claimed victory by deterring alleged Israeli aggressive designs, and then backed down.7 All in all,
a rather cheap victory, and not one that presented much of a danger to
anyone. Consequently the initial American reaction to Nasser’s dispatch
of troops was restrained. Even the Israelis did not seem to be particularly
alarmed.
On May 16, however, the crisis took on a more serious aspect as the
Egyptians made their initial request for the removal of the United Nations
Emergency Force (UNEF).8 This prompted President Johnson to sound out
the Israelis about their intentions and to consult with the British and the
French. On May 17 Johnson sent Eshkol the first of several letters
exchanged during the crisis, in which he urged restraint and specifically
asked to be informed before Israel took any action. “I am sure you will
understand that I cannot accept any responsibilities on behalf of the
United States for situations which arise as the result of actions on which
we are not consulted.”9
From the outset, then, Johnson seemed to want to avoid war, to restrain
the Israelis, and to gain allied support for any action that might be taken.
Two possible alternative courses of action seem not to have been seriously
considered at this point. One might have been to stand aside and let the
Israelis act as they saw fit, even to the extent of going to war.10 The danger, of course, was that Israel might get into trouble and turn to the United
States for help. Johnson seemed to fear this possibility throughout the crisis, despite all the intelligence predictions that Israel would easily win a war
against Egypt alone or against all the surrounding Arab countries.
The second alternative not considered at this point was bold unilateral
American action opposing Nasser’s effort to change the status quo. Here
the problems were twofold. A quarrel with Egypt might inflame the situation and weaken American influence throughout the Arab world. The
Suez precedent, and what it had done to British and French positions in
the region, was very much in the minds of key American officials. Nasser
was not noted for backing down when challenged. Moreover, U.S. military assets were deeply committed in Vietnam, ruling out a full-scale military confrontation with Egypt. But even if American forces had been
available, Congress was in no mood to countenance unilateral military
action, even in support of Israel. Therefore, the initial United States effort
was directed toward restraining Israel and building a multilateral context
for any American action, whether diplomatic or military.
Eshkol’s reply to Johnson’s letter reached Washington the following
day, May 18. The Israeli prime minister blamed Syria for the increase in
tension and stated that Egypt must remove its troops from Sinai. Then,
appealing directly to Johnson, Eshkol requested that the United States
reaffirm its commitment to Israeli security and inform the Soviet Union in
particular of this commitment. Johnson wrote to Premier Aleksei Kosygin
the following day, affirming the American position of support for Israel
as requested, but suggesting in addition a “joint initiative of the two powers to prevent the dispute between Israel and the UAR [United Arab
Republic, or Egypt] and Syria from drifting into war.”11
After Egypt’s initial request for the withdrawal of the UNEF on May
16, there was danger that Nasser might overplay his hand by also closing
the Strait of Tiran to the Israelis. The opening of the strait to Israeli shipping was Israel’s one tangible gain in the 1956 war. American commitments concerning the international status of the strait were explicit. It
was seen as an international waterway, open for the free passage of ships
of all nations, including Israel. The Israelis had been promised that they
could count on U.S. support to keep the strait open.12
The UNEF had stationed troops at Sharm al-Shaykh since 1957, and
shipping had not been impeded. If the UNEF withdrew, however, Nasser
would be under great pressure to return the situation to its pre-1956
status. Israel had long declared that such action would be considered a
casus belli.
In light of these dangers, one might have expected some action by the
United States after May 16, aimed at preventing the complete removal of
the UNEF. But the record shows no sign of an urgent approach to UN secretary general U Thant on this matter, and by the evening of May 18 U
Thant had responded positively to the formal Egyptian government
request that all UNEF troops leave Egyptian territory.
The strait still remained open, however, and a strong warning by Israel
or the United States about the consequences of its closure might conceivably have influenced Nasser’s next move. From May 19 until midday on
May 22, Nasser took no action to close the strait, nor did he make any
threat to do so. Presumably he was waiting to see how Israel and the
United States, among others, would react to withdrawal of the UNEF. The
United States made no direct approach to Nasser until May 22, the day
Nasser finally announced the closure of the strait. It issued no public
statements reaffirming the American view that the strait was an international waterway, nor did the reputedly pro-Israeli president respond to
Eshkol’s request for a public declaration of America’s commitment to
Israel’s security.13
On May 22 Johnson finally sent a letter to the Egyptian leader. The
thrust of the message was to assure Nasser of the friendship of the United
States while urging him to avoid any step that might lead to war. In addition, Johnson offered to send Vice President Hubert Humphrey to Cairo.
Johnson ended the letter with words he had personally added: “I look forward to our working out a program that will be acceptable and constructive for our respective peoples.” The message was not delivered by
ambassador-designate Richard Nolte until the following day, by which
time the strait had already been declared closed to Israeli shipping and
strategic cargoes bound for Israel.14
Johnson informed Eshkol the same day that he was writing to the
Egyptian and Syrian leaders, warning them not to take actions that might
lead to hostilities.15 Another message from Johnson to Kosygin was sent on May 22. Reiterating his suggestion of joint action to calm the situation, Johnson pledged a "course of moderation, including our influence over action by the United Nations."16
These messages, which might have helped to calm the situation earlier,
were rendered meaningless by the next major escalation of the crisis.17 The
well-intentioned American initiative of May 21–22 was too little and too
late. Shortly after midnight May 22–23, Nasser’s speech announcing the
closure of the strait was broadcast.
The Crisis over the Strait
If Johnson had feared that Israel might resort to force unilaterally before
May 23, the danger now became acutely real. Therefore he requested that
Israel not make any military move for at least forty-eight hours.18 During
the day of May 23 arrangements were made for Foreign Minister Abba
Eban to visit Washington for talks prior to any unilateral action. Johnson
also decided to accede to an Israeli request for military assistance worth
about $70 million, but he rejected an Israeli request for a U.S. destroyer
to visit the port of Eilat.19
American diplomacy went into high gear. Johnson issued a forceful
public statement outlining the U.S. position: “The United States considers the gulf to be an international waterway and feels that a blockade of
Israeli shipping is illegal and potentially disastrous to the cause of peace.
The right of free, innocent passage of the international waterway is a vital
interest of the international community.”20
In Tel Aviv, U.S. ambassador Walworth Barbour repeated the request
for a forty-eight-hour delay before any unilateral Israeli action and raised
the possibility of pursuing a British idea of a multinational naval force to
protect maritime rights in the event that UN action failed to resolve the
crisis. Eban’s trip to Washington was designed in part to explore the feasibility of this idea.
In Washington, Israeli ambassador Avraham Harman and minister
Ephraim Evron met with Under Secretary Eugene Rostow and were told
that “the United States had decided in favor of an appeal to the Security
Council. . . . The object is to call for restoring the status quo as it was
before . . . the blockade announcement. Rostow explained that the congressional reaction compels a president to take this course.”21 Rostow
reportedly referred to the realities created by the Vietnam War in describing Johnson’s approach to the blockade.22
The basic elements of Johnson’s approach to the crisis as of May 23
were:
—Try to prevent war by restraining Israel and warning the Egyptians
and Soviets.
—Build public and congressional support for the idea of an international effort to reopen the Strait of Tiran. (Unilateral U.S. action was
ruled out without much consideration.)
—Make an effort through the UN Security Council to open the strait.
If that failed, as was anticipated, a multilateral declaration in support of
free shipping would be drawn up. This would be followed, as the British
suggested, by a multinational naval force transiting the strait.
Noteworthy was the continuing reluctance either to consider unilateral
American action or to “unleash Israel,” as a second option came to be
known. These alternatives had been ruled out virtually from the beginning, and even the closure of the strait did not lead to a reevaluation of
the initial policy. Instead, the two key elements of policy dating from May
17 were merely embellished as conditions changed.
A complex multilateral plan was discussed with the British that would
surely be supported by Congress and public opinion, but could it produce
results rapidly enough to ensure the other element in the U.S. approach—
restraint of Israel? A dilemma clearly existed. To keep Israel from acting
on its own, as even the United States acknowledged it had a right to do,
in order to reopen the strait, an acceptable alternative had to be presented. The stronger the stand of the United States and the firmer its commitment to action, the more likely it was that Israel could be restrained;
by the same token, the less likely it was that Nasser would probe further.
Yet a strong American stand was incompatible with the desire for multilateral action, which had to be tried, in Johnson’s view, to ensure congressional and public support. Such support was essential at a time of
controversy over the U.S. role in Vietnam.
Johnson was mindful of the furor over his handling of the Gulf of
Tonkin incident in 1964. Then he had seized on a small incident to
broaden his power, with full congressional approval, to act in Vietnam.
Subsequently, however, he had been charged with duplicity, with misleading Congress concerning the event, and with abusing the authority he
had received. In mid-1967 Johnson was not about to lead the United
States into another venture that might entail the use of force unless he had
full congressional and public backing. Consequently he insisted on trying
the United Nations first, and only then seeking a multilateral maritime
declaration and sending ships through the strait. By moving slowly, cautiously, and with full support at home, Johnson would minimize the
domestic political risks to his position.
The goals of restraining Israel and pursuing a multilateral solution
were not necessarily incompatible, if sufficient time was available. For
time to be available, perhaps as much as two to three weeks, the situation
on the ground could not be allowed to change radically, nor could the balance of forces within Israel shift toward those who favored war. At a minimum, then, Nasser had to sit tight, the Soviet Union had to remain on the
sidelines, and Eshkol had to be given something with which to restrain his
hawks. If any of these conditions could not be met, the assumptions of
U.S. policy would be undermined, and war would probably ensue.23
Eban’s Visit to Washington
The impending visit of Israel’s foreign minister served as a catalyst for the
further definition of an American plan of action for dealing with the closure of the Strait of Tiran. If Israel was to refrain from forceful action, it
needed to be given a credible alternative to war. But the process of moving
from general principles—restrain Israel, act within a multilateral context—
to a more detailed proposal revealed inherent contradictions and ambiguities, as well as bureaucratically rooted differences of opinion. What had
initially been a fairly widespread consensus among Johnson’s top advisers
on how to deal with the crisis began to fragment as the crisis grew more
acute. When Johnson felt most in need of broad support for his cautious,
restrained approach, the viability of that position came under question.
The key to Johnson’s policy on the eve of Eban’s visit was the idea of
a multinational naval force. On May 24 Eugene Rostow met with the
British minister of state for foreign affairs, George Thomson, and an admiral of the Royal Navy to discuss the British proposal. They agreed to try
for a public declaration on freedom of shipping through the Strait of
Tiran, to be signed by as many countries as possible. A multinational
naval force would then be set up, composed of ships from as many maritime countries as were prepared to act, and a flotilla, soon to be known
as the Red Sea Regatta, would then pass through the strait.24 Rostow
talked to Johnson later in the day about the plan and found the president
receptive toward it.
The Pentagon was charged with coming up with a concrete plan for
forming a naval force. At this point, consensus began to erode.25 Although
some Pentagon analysts reported that the United States was capable of
managing a crisis involving possible military intervention in the Middle
East as well as Vietnam, most believed that Israel could deal with the
Arab threat perfectly well on its own and that there was no need for a
costly American commitment of forces. In any event, it would take time
to get the necessary forces in place to challenge Nasser’s blockade.
The idea of a token display of U.S. force to reopen the strait did not
have many fans in the Pentagon. What would happen if the Egyptians
fired on an American ship? Would the United States respond with force?
Would Egyptian airfields be attacked? Would ground troops be required?
If so, how many? Furthermore, what could the Navy do about the growing numbers of Egyptian ground troops deployed along Israel’s borders?
On balance, the military was not in favor of the use of U.S. force.
Bureaucratic self-interest and a professional attitude that dictated the use
of force only when success was assured and when superior power was
available lay at the root of the opposition. The multinational fleet was a
military man’s nightmare. It was not the way the military would plan
such an operation. It was too political. Deeming it undesirable, the military did little to make it feasible.
The State Department, at least at the top levels, was, by contrast, enthusiastic about the idea. Secretary of State Dean Rusk endorsed it, and Under
Secretary Rostow became its chief advocate. From their point of view, the
fact that it was a flawed military concept was less important than its politically attractive features. First, it would associate other nations with the
United States in defense of an important principle—freedom of navigation—and in the upholding of a commitment to Israel. Second, it would
deflate Nasser’s prestige, which was once again on the rise, without putting him in an impossible position. If Nasser wanted to back down from
confrontation with Israel, the fleet would provide him with an honorable
excuse to do so. The State Department therefore set out to find cosigners
to the maritime declaration and donors of ships for the fleet. This essentially political task was what State was best at performing; the planning
of the fleet was the province of Defense. Unfortunately, little coordination
went on between the two.
Foreign Minister Eban arrived in Washington on the afternoon of May
25. His first talks were held at the State Department at 5:00 p.m. The
result was to sow confusion among U.S. policymakers, who had just
adjusted to the crisis and thought they saw a way out of it. Eban, who had
left Israel with instructions to discuss American plans to reopen the Strait
of Tiran, arrived in the United States to find new instructions awaiting
him.26 No longer was he to emphasize the issue of the strait. A more
urgent danger, that of imminent Egyptian attack, had overshadowed the
blockade. Eban was instructed to inform the highest authorities of this
new threat to peace and to request an official statement from the United
States that an attack on Israel would be viewed as an attack on the United
States.27 Despite his own skepticism, Eban followed his instructions in his
first meeting with Secretary Rusk, Under Secretary Rostow, and Assistant
Secretary Lucius Battle.
Rusk quickly ended the meeting so that he could confer with Johnson
about the new situation. The meeting with Eban resumed at 6:00 p.m. for
a working dinner. The Israelis were told that U.S. sources could not confirm an Egyptian plan to attack.28
After the talks ended, Israeli ambassador Harman returned to the State
Department at about midnight to reemphasize Israel’s need for a concrete
and precise statement of U.S. intentions.29 He also warned that Israel
could not accept any plan in which the strait might be opened to all ships
except those of Israel. U.S. intelligence, meanwhile, had again concluded that an Egyptian attack was not pending.30 Johnson was in no hurry to commit himself. After all, as president he had to worry about the Soviet Union,
about Congress and public opinion, and even about U.S.-Arab relations;
he did not want to be stampeded, to use the imagery of his native Texas.31
Johnson was obviously reluctant to see Eban on Friday, May 26. He
knew it would be an important, perhaps crucial meeting. The Israeli cabinet was to meet on Sunday, and what Johnson told Eban might make the
difference between war and peace. The Israelis were pressing for a specific
commitment, for a detailed plan, for promises to act, and for understanding in the event Israel took matters into its own hands. Faced with
these pressures, Johnson tried to stall. Rusk called Harman early in the
morning to find out whether Eban could stay in Washington through Saturday. This would allow Johnson to learn the results of U Thant’s mission
to Cairo. Eban, stressing the importance of the Sunday cabinet meeting,
said he had to leave Friday evening for Israel.32
Meanwhile Secretary Rusk and Under Secretary Eugene Rostow had
prepared a policy memorandum for the president. Rusk’s memo to the
the crisis of may–june 1967
33
president began with a review of his talk with Eban the previous evening,
including the Israeli information that an Egyptian and Syrian attack was
imminent and the request for a public statement of American support for
Israel against such aggression. Eban, it was stated, would not press this
point with Johnson, and the president’s talk could concentrate on the
British proposal for a multinational fleet. Rusk then outlined two basic
options:
—“to let the Israelis decide how best to protect their own national
interests, in the light of the advice we have given them: i.e., to ‘unleash’
them”; or
—“to take a positive position, but not a final commitment, on the
British proposal.”
Rusk recommended strongly against the first option. Noting that the
British cabinet would meet on the plan for the multinational fleet the following day, Rusk endorsed the second option, which he then reviewed in
some detail. Included in his outline was the idea that a UN force should
take a position along both sides of the Israeli-Egyptian frontier. If Egypt
refused, Israel might accept.
Eban’s need for a strong commitment from Johnson was made clear in
the Rusk memorandum. Congressional views were reviewed, and the
option of unilateral U.S. action was referred to with caution. A draft joint
resolution of Congress was being prepared to support international
action on the strait. In closing, Rusk referred to the possibility of offering
Israel economic and military aid to help offset the strains of continuing
mobilization.33
On May 26, shortly after noon, President Johnson convened at the
White House the most important full-scale meeting of his advisers held
during the crisis. One by one, each of Johnson’s advisers expressed his
views to the president. Discussion turned to the idea of a multinational
fleet, with Secretary of Defense Robert McNamara stating his disapproval
of the idea on military grounds.34 Rusk then reported on U Thant’s talks
in Cairo, which had elicited from Nasser a promise not to take preemptive action and had led to some discussion of how the blockade might be
modified. He then introduced a phrase that was to be repeated to the
Israelis frequently in the coming two weeks: “Israel will not be alone
unless it decides to go alone.”35 To Rusk, it clearly mattered who opened
fire first.36 Johnson, who seemed to be reassured by the judgment that the
military situation would not deteriorate suddenly, spoke of the maritime
effort approvingly, terming it his “hole card” for his talk with Eban. But
he realized this might not be enough for Eban. He asked his advisers if
they thought Eban would misinterpret this as a “cold shoulder.” Johnson
expressed his feeling that he could not make a clear commitment to use
force because of congressional sentiment.
Supreme Court Justice Abe Fortas, a close friend of Johnson who was
invited especially for this NSC meeting, joined the discussion, stating that
the problem was to keep Israel from making a first strike. This required
an American commitment that an Israeli ship would get through the strait.
Fortas recommended that Johnson promise to use whatever force was
necessary. Johnson said he was in no position to make such a promise.
Eban was not going to get everything he wanted. Congress, he said, was
unanimously against taking a stronger stand. He wondered out loud if he
would regret on Monday not having given Eban more today. Then he left
the meeting. The others talked on for a few minutes, with both McNamara and Rusk taking the strong stand that Israel would be on its own if
it decided to strike first. Fortas countered by saying that Johnson could not
credibly say to Israel that it would be alone. The president did not have a
choice of standing on the sidelines.37
Thus were the two main schools of thought among Johnson’s advisers
presented. The president seemed to be taking his cues from McNamara
and Rusk, but no doubt he was also attentive to what Fortas was saying.
The drama of the next few days in American policy circles involved the
gradual shift on Johnson’s part from supporting Rusk’s “red light” views
to siding with Fortas, who began to argue that Israel should be allowed
to act on its own if the United States was unwilling or unable to use force
to reopen the strait—the “yellow light” view.
By late afternoon the Israelis were becoming anxious to set a definite
time for Eban’s meeting with the president.38 Minister Evron called
National Security Adviser Walt Rostow and was invited to come to the
White House to talk. Johnson, he was told, did not want any leaks to the
press from the meeting, and several details of the visit had to be discussed.
While Evron was in his office, Rostow contacted Johnson, who, on learning of Evron’s presence, asked him in for an informal talk. Johnson knew
and liked Evron, and presumably felt that it would be useful to convey his
position through Evron before the more formal meeting with Eban. Johnson began by stressing that any American action would require congressional support of the president. He repeated this point several times. Talks
in the UN, though not expected to produce anything, were an important
part of the process of building support. On a more positive note, Johnson
mentioned the multinational-fleet effort. He acknowledged that Israel, as
a sovereign state, had the right to act alone, but if it did, the United States
would feel no obligation for any consequences that might ensue.39 He
stated that he did not believe Israel would carry out such unilateral action.
In closing, Johnson stressed that he was not a coward, that he did not
renege on his promises, but that he would not be rushed into a course of
action that might endanger the United States simply because Israel had set
Sunday as a deadline.40
Eban arrived at the White House unannounced while Evron was with
the president. After some confusion, their meeting began shortly after
7:00 p.m. In response to Eban’s appeal that the United States live up to its
explicit commitments, Johnson emphasized that he had termed the blockade illegal and that he was working on a plan to reopen the strait. He
noted that he did not have the authority to say that an attack on Israel
would be considered an attack on the United States. He again stressed the
two basic premises of American policy: any action must have congressional support, and it must be multilateral. He told Eban he was fully
aware of what three past presidents had said, but their statements were
“not worth five cents” if the people and Congress did not support the
president.
Twice Johnson repeated the phrase that Rusk had coined: “Israel will
not be alone unless it decides to go alone.” He said he could not imagine
Israel’s making a precipitate decision. In case Eban doubted his personal
courage, Johnson stressed that he was “not a feeble mouse or a coward.”
Twice Eban asked the president if he could tell the cabinet that Johnson
would do everything in his power to get the Gulf of Aqaba open to all
shipping, including that of Israel. Johnson replied “yes.”41 Eban was given
an aide-mémoire spelling out U.S. policy along the lines that the president
had just laid out.42
As Eban left the White House, Johnson turned to his advisers and
stated: “I’ve failed. They’ll go.”43
Prelude to the June 1967 War
Johnson obviously was aware of the awkwardness of the policy he was pursuing. The multinational-fleet effort would take time, and even then might
fall through for any number of reasons. The alternative of unilateral American action was not seriously considered. Congress was obviously a major
concern, and behind Congress lay the realities of the Vietnam conflict.
Johnson understood that Israel was subject to a different set of pressures
and might be forced to go to war. But if so, the United States, he had said,
would not be committed to act. He apparently still wanted the Israelis to
hold off on military action, but as time went by he seems to have become
resigned to letting the Israelis take whatever action they felt was necessary.
Above all, he was not prepared to give Israel the one thing that might have
kept it from acting on its own—a firm guarantee to use force if necessary
to reopen the strait. Eban had almost extracted such a promise, but in
Johnson’s mind it was clearly hedged by references to United States constitutional processes and “every means within my power.”
What Johnson had asked for was time—time for the fleet idea to be
explored, for passions to cool, for compromises to be explored. He had
tried to pin the Israelis down with a commitment to give him two weeks,
beginning about May 27. On that day the Soviets had told Johnson they
had information that Israel was planning to attack. The president replied
to Kosygin and sent a message to Eshkol, which reached him on May 28,
repeating the information from Moscow and warning Israel against starting hostilities.44 Meanwhile, the president decided to initiate further contacts with Nasser.
Rusk followed up Johnson’s message to Eshkol with one of his own to
Ambassador Barbour, for transmittal to the Israelis: “With the assurance
of international determination to make every effort to keep the strait open
to the flags of all nations, unilateral action on the part of Israel would be
irresponsible and catastrophic.”45 Rusk also paralleled Johnson’s message
to Kosygin, which had called for a U.S.-USSR effort to find a prompt
solution to the Strait of Tiran issue, with a message to Foreign Minister
Andrei Gromyko calling for a two-week moratorium on the Egyptian closure of the strait. The message to Eshkol had its intended effect. At its
Sunday, May 28, meeting, the cabinet seemed evenly split on the issue of
whether to go to war. Eshkol, reflecting on Johnson’s letter and Eban’s
report of his talks, decided to accede to the president’s request.
From this point on, many Washington officials began to act as if they
had at least two weeks in which to work toward a solution. The critical
period, it was felt, would begin after Sunday, June 11. Although there was
reason to believe that the Israelis would stay their hand until that date, as
Johnson had requested, clearly such a pledge would lose validity if the situation on the ground or within Israel changed substantially. And in the
ensuing days, changes did indeed occur.
Informal Lines of Communication
During this period, Justice Abe Fortas, presumably with Johnson’s blessing,
spoke frequently with Israel’s respected ambassador, Avraham Harman.46
Fortas and Harman were close personal friends who met on a regular
basis during late May and the first days of June. And with considerable
regularity, Fortas talked to the president by telephone. The Israelis had
every reason to assume they were dealing with one of Johnson’s true confidants, although Harman reportedly did not view his talks with Fortas as
constituting an alternative channel for dealing with the U.S. government.
He and Evron, who also talked to Fortas, did know, of course, that they
were dealing with someone who was close to Johnson and whose views
deserved careful attention. They were also dealing with a man who was
deeply committed to Israel and who seems to have been suspicious of the
State Department, and Dean Rusk in particular.47 What they heard from
Fortas would be one more piece of evidence they could use in trying to
fathom Johnson’s thinking.
Eban’s report of Johnson’s views was not universally credited in Israel.
Some thought he had misunderstood the import of the phrase “Israel will
not be alone unless it decides to go alone.” It was not an absolute prohibition. In fact, Johnson had acknowledged that Israel had the right to
act on its own. But he had urged them not to do so, at least not right away.
And he had made it clear that he could not do much to help if they got into
trouble. Over the next several days the Israelis mounted a major effort to
check on exactly where Johnson stood and to signal that time was working against Israeli interests. Central to this effort was a visit by Meir Amit,
the head of Israel’s intelligence service (the Mossad), who traveled to
Washington under an assumed name on May 31.48
Just before Amit’s arrival in Washington, an extremely important
change took place in the Middle East situation. Jordan’s King Hussein,
under great pressure to join the Arab nationalist mainstream, had flown
to Cairo and signed a mutual-defense pact with Nasser. He returned to
Jordan with an Egyptian general in tow who would head the joint military command. Walt Rostow saw this as a major turning point, and he
underscored the military danger represented by the dispatch of Egyptian
commandos to Jordan. From this point on, he believed, Arab actions were
making war virtually inevitable. Unless the Arabs backed down, or unless
enough time was available for American power to make itself felt, Israel
was bound to take matters into its own hands.49
Eshkol replied to Johnson’s letter of May 28 on May 30, noting that
American assurances to take “any and all measures to open the straits”
had played a role in Israel’s decisions not to go to war and to agree to wait
for “a week or two.”50 Within that time frame, Eshkol urged, a naval
escort must move through the strait. Apprised of this message on May 31,
Johnson became angry, claiming he had not promised Israel that he would
use “any and all measures,” but rather had stressed that he would make
every effort within his constitutional authority.
From Cairo, the president’s special envoy, Charles Yost, reported on
May 30 his impression that Nasser “cannot and will not retreat,” that “he
would probably welcome, but not seek, military showdown with Israel,”
and that any American effort to force the strait would “undermine, if not
destroy, US position throughout Arab world.”51 The following day,
another presidential envoy, Robert Anderson, met with Nasser and discussed the possibility that Egyptian vice president Zakariyya Muhieddin
would visit Washington on June 7.52
Rumors began to circulate in Washington on May 31 that the United
States was looking for possible compromises to end the crisis.53 In fact,
some consideration was being given in the State Department to such
steps, and Rusk’s consultations with Congress to this effect rapidly
reached Israeli ears and caused alarm.54 That same day the Israelis picked
up a report that Rusk had told a journalist, “I don’t think it is our business to restrain anyone,” when asked if the United States was trying to
restrain Israel.55
This was the atmosphere that Amit found when he filed his first report
on his soundings in Washington. His advice was to wait a few more days,
but he observed that the mood was beginning to change. In his opinion the
fleet idea was increasingly seen as bankrupt. If Israel were to act on its
own, and win decisively, no one in Washington would be upset. The
source for these impressions, it is worth noting, was not the State Department or the president. Amit’s talks on June 1 and the morning of June 2
were focused on the Pentagon, where he saw McNamara, and on the
CIA, where he talked with Director Richard Helms and James Angleton.56

On June 1 the simmering political crisis in Israel broke. Late in the
day Moshe Dayan, hero of the 1956 Suez campaign, was brought into the
cabinet as minister of defense. War now seemed likely in the near future.
Some leaders in the Is | https://ar.b-ok.org/book/834986/13b3ee | CC-MAIN-2019-51 | en | refinedweb |
Tutorial: Amazon price tracker using Python and MongoDB (Part 1)
A two-part tutorial on how to create an Amazon price tracker.
Recently there was an Amazon sale and I wanted to buy a product that I had been checking on for a long time. But as I was ready to buy it, I noticed that the price had increased, and I wondered if the sale was even legitimate. So I figured that by creating this price tracker app, I would not only increase my fluency in Python but also have my very own home-brewed app to track Amazon prices.
While I have been programming for a long time, it is only recently that I have picked up Python, and it has been bliss so far. If any of you Python experts find my code not very “Pythonic”, my apologies, I will learn more :).
This tutorial assumes that you have at least basic knowledge of python. Also that you have Python and MongoDB installed on your system.
Note that this tutorial is meant to demonstrate how to create an Amazon price tracker, not to teach programming.
So, without further ado let us begin part 1 of this tutorial.
Step 1: Creating files and folder for the project
- Open whichever directory you like and create a folder, name it amazon_price_tracker or just anything you want.
- Now open the folder and create two files: scraper.py and db.py.
That’s all for the first step, now open the terminal in the projects directory and head to the next step.
Step 2(Optional): Creating a virtual environment with virtualenv
This is an Optional step to isolate the packages that are being installed. You can find more about virtualenv here.
Run this to create an environment.
$ virtualenv ENV
And run this to activate the environment.
$ source ENV/bin/activate
If you want to deactivate the environment then simply run the following.
$ deactivate
Now, activate the environment if you haven’t already and head to step 3.
Step 3: Installing the required packages.
- Run this command to Install requests (a library to make HTTP requests)
$ pip install requests
- Run this command to Install BeautifulSoup4 (a library to scrape information from web pages)
$ pip install bs4
- Run this command to install html5lib(modern HTML5 parser)
$ pip install html5lib
- Run this command to install pymongo (a driver to access MongoDB)
$ pip install pymongo
Step 4: Starting to code the extract_url(URL) function
Now, open scraper.py, and we need to import a few packages that we had previously installed.
import requests
from bs4 import BeautifulSoup
Now, let us create a function extract_url(URL) to make the URL shorter and to verify whether the URL is a valid Amazon URL.

This function takes a long Amazon India product URL and converts it to a shorter URL, which is more manageable. Also, if the URL is not a valid URL, it will return None.
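The gist that showed this function is not embedded in this copy of the article, so here is a minimal sketch of what extract_url might look like; the regex and the exact short-URL format are my assumptions, not the author’s code:

```python
import re

def extract_url(url):
    # A valid Amazon India product URL contains a "/dp/<10-char product id>" part.
    if url is None or "amazon.in" not in url:
        return None
    match = re.search(r"/dp/(\w{10})", url)
    if match is None:
        return None
    # Rebuild a short, canonical product URL from the product id.
    return "https://www.amazon.in/dp/" + match.group(1)
```

Any URL that does not contain the amazon.in domain, or has no product id in it, yields None, which the later code treats as an invalid URL.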
Step 5: What we need for the next function
For the next function, Google “my user agent”, copy your user agent, and assign it to a variable headers.

headers = {
    "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36"
}
Before we create a function to scrape details from the page, let us visit an Amazon product page like this one and find the elements that have the name and the price of the product. We will need the elements’ ids to be able to find them when we extract the data.

Once the page is rendered, do a right-click on the name of the product and click on “inspect”, which will show the element that has the name of the product.
We can see the <span> element with id=“productTitle”. Now hold on to this id; we will use it later to scrape the name of the product.
We will do the same for the price, now right-click on the price and click on inspect.
The <span> element with id=“priceblock_dealprice” has the price that we need. But this product is on sale, so its id is different from the normal id, which is id=“priceblock_ourprice”.
Step 6: Creating the price converter function
If you look closely, the <span> element has the price, but it contains many unwanted pieces such as the ₹ rupee symbol, blank spaces, a comma separator, and a decimal point.

We just want the integer portion of the price, so we will create a price converter function that removes the unwanted characters and gives us the price as an integer.
Let us name this function get_converted_price(price).
With some simple string manipulations, this function will give us the converted price as an integer.
UPDATE: As mentioned by @oorjahalt, we can simply use a regex to extract the price.
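The gist with the implementation is missing here; reconstructing it from the variable names the article mentions later (stripped_price, replace_price, find_dot, to_convert_price), a string-manipulation version might look like this:

```python
def get_converted_price(price):
    # e.g. "₹ 19,999.00" -> 19999
    stripped_price = price.strip("₹ ,")              # drop the rupee symbol and spaces
    replace_price = stripped_price.replace(",", "")  # drop the thousands separators
    find_dot = replace_price.find(".")               # locate the decimal point
    # Guard for prices without a decimal part (my addition):
    to_convert_price = replace_price[:find_dot] if find_dot != -1 else replace_price
    converted_price = int(to_convert_price)
    return converted_price

# The regex alternative from the update would be something like:
#   int(re.search(r"\d+", price.replace(",", "")).group())
```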
NOTE: While this tracker is meant for Amazon India, it may very well be used for the global version of Amazon or other similar websites with very minor changes such as:
To make this compatible with the global version of Amazon, simply do this:
- change the ₹ to $
stripped_price = price.strip("$ ,")
- we can skip find_dot and to_convert_price entirely and just do this
converted_price = float(replace_price)
We would, however, be converting the price to a float type.
- And changing the Amazon domain in the extract_url(URL) function.

This would make it compatible with the global version of Amazon.
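Putting those three changes together, a hypothetical global (amazon.com) variant of the converter could look like this (the function name is mine, for illustration):

```python
def get_converted_price_us(price):
    # e.g. "$1,299.99" -> 1299.99 (a float, keeping the cents)
    stripped_price = price.strip("$ ,")
    replace_price = stripped_price.replace(",", "")
    converted_price = float(replace_price)
    return converted_price
```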
Now, as we buckle up we can finally proceed towards creating the scraper function.
Step 7: Onto the details scraper function
OK, so let us create the function that will extract the details of the product, such as its name and price, and return a dictionary that contains the name, price, and URL of the product. We will name this function get_product_details(URL).
The first two variables for this function are headers and details: headers will contain your user-agent, and details is a dictionary that will contain the details of the product.
headers = {
"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36"
}
details = {"name": "", "price": 0, "deal": True, "url": ""}
Another variable, _url, will hold the extracted URL for us, and we will check whether the URL is valid. An invalid URL makes extract_url return None; in that case we will set details to None and return it at the end, so that we know something is wrong with the URL.
_url = extract_url(url)
if _url is None:
details = None
Now, we come to the else part. This has 4 variables: page, soup, title and price.

The page variable will hold the requested product page.

The soup variable will hold the parsed HTML; with this we can do lots of stuff, like finding an element by its id and extracting its text, which is what we will do. You can find more about BeautifulSoup’s other functions here.

The title variable, as the name suggests, will hold the element that has the title of the product.

The price variable will hold the element that has the price of the product.
page = requests.get(url, headers=headers)
soup = BeautifulSoup(page.content, "html5lib")
title = soup.find(id="productTitle")
price = soup.find(id="priceblock_dealprice")
Now that we have the elements for title and price, we will do some checks.
Let us begin with price. As mentioned earlier, the id of the price element can be either id=”priceblock_dealprice” on deals or id=”priceblock_ourprice” on normal days.
if price is None:
price = soup.find(id="priceblock_ourprice")
details["deal"] = False
Since we are first checking whether there is a deal price, the code will change price from the deal price to the normal price and also set details[“deal”] to False if there is no deal price. This is done so that we know the price is a normal one.
Even then, if we don’t get the price, something is wrong with the page: maybe the product is out of stock, maybe it is not released yet, or some other possibility. The following code will check whether the title and price exist.
if title is not None and price is not None:
details["name"] = title.get_text().strip()
details["price"] = get_converted_price(price.get_text())
details["url"] = _url
If the price and title of the product exist, then we will store them.
details["name"] = title.get_text().strip()
This will store the name of the product, but we have to strip any unwanted leading and trailing blank spaces from the title first. The strip() function removes any leading and trailing spaces.
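A quick illustration of what strip() does (the padded title string below is illustrative, not taken from a real page):

```python
raw_title = "\n        Nokia 8.1 (Iron, 4GB RAM, 64GB Storage)\n      "
print(raw_title.strip())  # Nokia 8.1 (Iron, 4GB RAM, 64GB Storage)
```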
details["price"] = get_converted_price(price.get_text())
This will store the price of the product, with the help of the get_converted_price(price) function that we created earlier, which gives us the converted price as an integer.
details["url"] = _url
This will store the extracted URL.
else:
    details = None
return details

We will set details to None if the price or title doesn’t exist.
Finally, the function is complete and here is the complete code
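The “complete code” gist is not embedded in this copy of the article; assembling the snippets shown above into one file gives roughly the following (the extract_url and get_converted_price bodies are my reconstructions from the descriptions and variable names in the article, not the author’s exact code):

```python
import re

import requests
from bs4 import BeautifulSoup

def extract_url(url):
    # Reconstruction: keep only the "/dp/<product id>" part of a valid
    # Amazon India URL; return None for anything else.
    if url is None or "amazon.in" not in url:
        return None
    match = re.search(r"/dp/(\w{10})", url)
    if match is None:
        return None
    return "https://www.amazon.in/dp/" + match.group(1)

def get_converted_price(price):
    # Reconstruction from the variable names mentioned in the article.
    stripped_price = price.strip("₹ ,")
    replace_price = stripped_price.replace(",", "")
    find_dot = replace_price.find(".")
    to_convert_price = replace_price[:find_dot] if find_dot != -1 else replace_price
    return int(to_convert_price)

def get_product_details(url):
    headers = {
        "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 "
                      "(KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36"
    }
    details = {"name": "", "price": 0, "deal": True, "url": ""}
    _url = extract_url(url)
    if _url is None:
        details = None
    else:
        page = requests.get(url, headers=headers)
        soup = BeautifulSoup(page.content, "html5lib")
        title = soup.find(id="productTitle")
        price = soup.find(id="priceblock_dealprice")
        if price is None:
            # Not on sale: fall back to the regular price element.
            price = soup.find(id="priceblock_ourprice")
            details["deal"] = False
        if title is not None and price is not None:
            details["name"] = title.get_text().strip()
            details["price"] = get_converted_price(price.get_text())
            details["url"] = _url
        else:
            details = None
    return details
```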
Note: While this code does not work for books, since book pages have a different product id, you can make it work for books if you tweak the code.
Step 8: Let us run scraper.py
At the end of the file add the following.
print(get_product_details("Insert an Amazon URL"))
Open the terminal where you have your scraper.py file and run it like so.
$ python3 scraper.py
If you have done everything correctly you should get an output like this.
{‘name’: ‘Nokia 8.1 (Iron, 4GB RAM, 64GB Storage)’, ‘price’: 19999, ‘deal’: False, ‘url’: ‘'}
And voilà, we have completed Part 1 of the Amazon price tracker tutorial.
I will see you next week with the follow-up part 2 where we will explore MongoDB using PyMongo to store our data.
This was my first article on Medium, or blogging in general, and I am excited to share my thoughts, experiments and stories here in the future. Hope you like it.
Find the complete source code for this article below.
ArticB/amazon_price_tracker: An Amazon price tracker using Python and MongoDB (github.com)
Follow the link below for part 2. | https://medium.com/analytics-vidhya/tutorial-amazon-price-tracker-using-python-and-mongodb-part-1-aece6347ec63 | CC-MAIN-2019-51 | en | refinedweb |
What if you have a very small dataset of only a few thousand images and a hard classification problem at hand? Training a network from scratch might not work that well, but how about transfer learning?
Dog Breed Classification with Keras
Recently, I got my hands on a very interesting dataset that is part of the Udacity AI Nanodegree. In several of my previous posts I discussed the enormous potential of transfer learning. As a matter of fact, very few people train an entire convolutional network from scratch (with random initialization), because it is relatively rare to have a dataset of sufficient size. Instead, it is common to pre-train a convolutional network on a very large dataset (e.g. ImageNet, which contains 1.2 million images with 1000 categories), and then use the convolutional network either as an initialization or a fixed feature extractor for the task of interest.
In this post, I aim to compare two approaches to image classification. First, I will train a convolutional neural network from scratch and measure its performance. Then, I will apply transfer learning and will create a stack of models and compare their performance to the first approach. For that purpose, I will use Keras. While I got really comfortable at using Tensorflow, I must admit, using the high-level wrapper API that is Keras gets you much faster to the desired network architecture. Nevertheless, I still would recommend to every beginner to start with Tensorflow, as its low-level API really helps you understand how different types of neural networks work.
Dog Breed Dataset
The data consists of 8351 dog images. The images are sorted into 133 directories, each directory contains only images of a single dog breed. Hopefully, the dataset will stay here. If the url is not available, feel free to contact me. Ok, let’s load the dataset.
from sklearn.datasets import load_files
from keras.utils import np_utils
import numpy as np
from glob import glob

def load_dataset(path):
    data = load_files(path)
    dog_files = np.array(data['filenames'])
    dog_targets = np_utils.to_categorical(np.array(data['target']), 133)
    return dog_files, dog_targets

train_files, train_targets = load_dataset('dog/assets/images/train')
valid_files, valid_targets = load_dataset('dog/assets/images/valid')
test_files, test_targets = load_dataset('dog/assets/images/test')

dog_names = [item[20:-1] for item in sorted(glob("dog/assets/images/train/*/"))]

# Let's check what we loaded
print('There are %d total dog categories.' % len(dog_names))
print('There are %s total dog images.' % len(np.hstack([train_files, valid_files, test_files])))
print('There are %d training dog images.' % len(train_files))
print('There are %d validation dog images.' % len(valid_files))
print('There are %d test dog images.' % len(test_files))
Using TensorFlow backend.
There are 133 total dog categories.
There are 8351 total dog images.
There are 6680 training dog images.
There are 835 validation dog images.
There are 836 test dog images.
The dataset is already split into train, validation and test parts. As the training set consists of 6680 images, there are only 50 dogs per breed on average. That is really a rather small dataset and an ambitious task. The CIFAR-10 dataset, for example, contains 60000 images and only 10 categories. The categories are airplane, automobile, bird, cat, etc. Thus, the objects to be classified are very different from each other and therefore easier to classify. In my post Image classification with pre-trained CNN InceptionV3 I managed to achieve an accuracy of around 80%. Hence, it is now my goal to achieve similar accuracy with the dog breed dataset, which has many more categories while being much, much smaller.
Why is the dataset interesting?
The task of assigning breed to dogs from images is considered exceptionally challenging. To see why, consider that even a human would have great difficulty in distinguishing between a Brittany and a Welsh Springer Spaniel.
It was not difficult to find other dog breed pairs with only a few inter-class variations (for instance, Curly-Coated Retrievers and American Water Spaniels).
Likewise, Labradors come in yellow, chocolate, and black. A vision-based algorithm will have to conquer this high intra-class variation to determine how to classify all of these different shades as the same breed.
When predicting between 133 breeds, random chance presents an exceptionally low bar: setting aside the fact that the classes are slightly imbalanced, a random guess will provide a correct answer roughly 1 in 133 times, which corresponds to an accuracy of less than 1%. Hence, even an accuracy of 2-3% would be considered reasonable.
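The 1-in-133 figure is easy to check:

```python
n_classes = 133
baseline = 1.0 / n_classes
print("Random-guess accuracy: {:.2%}".format(baseline))  # about 0.75%, well under 1%
```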
Pre-process the Data
When using TensorFlow as backend, Keras CNNs require a 4D array (which we’ll also refer to as a 4D tensor) as input, with shape (nb_samples, rows, columns, channels). The path_to_tensor function below loads an image, resizes it to a square of $224 \times 224$ pixels, and returns a 4D tensor of shape (1, 224, 224, 3).

The paths_to_tensor function takes a numpy array of string-valued image paths as input and returns a 4D tensor with shape (nb_samples, 224, 224, 3). Here, nb_samples is the number of samples, or number of images, in the supplied array of image paths. It is best to think of nb_samples as the number of 3D tensors (where each 3D tensor corresponds to a different image) in your dataset!
from keras.preprocessing import image
from tqdm import tqdm

def path_to_tensor(img_path):
    # loads RGB image as PIL.Image.Image type
    img = image.load_img(img_path, target_size=(224, 224))
    # convert PIL.Image.Image type to 3D tensor with shape (224, 224, 3)
    x = image.img_to_array(img)
    # convert 3D tensor to 4D tensor with shape (1, 224, 224, 3) and return 4D tensor
    return np.expand_dims(x, axis=0)

def paths_to_tensor(img_paths):
    list_of_tensors = [path_to_tensor(img_path) for img_path in tqdm(img_paths)]
    return np.vstack(list_of_tensors)
We rescale the images by dividing every pixel in every image by 255.
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True

# pre-process the data for Keras
train_tensors = paths_to_tensor(train_files).astype('float32')/255
valid_tensors = paths_to_tensor(valid_files).astype('float32')/255
test_tensors = paths_to_tensor(test_files).astype('float32')/255
100%|██████████| 6680/6680 [01:00<00:00, 110.84it/s]
100%|██████████| 835/835 [00:06<00:00, 127.85it/s]
100%|██████████| 836/836 [00:06<00:00, 136.50it/s]
Create a CNN to Classify Dog Breeds (from Scratch)
After a few hours of trial and error, I came up with the following CNN architecture:
from keras.layers import Conv2D, MaxPooling2D, GlobalAveragePooling2D
from keras.layers import Dropout, Activation, Dense, Flatten
from keras.models import Sequential
from keras.layers.normalization import BatchNormalization

model = Sequential()

model.add(Conv2D(16, (3, 3), padding='same', use_bias=False, input_shape=(224, 224, 3)))
model.add(BatchNormalization(axis=3, scale=False))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(4, 4), strides=(4, 4), padding='same'))
model.add(Dropout(0.2))

model.add(Conv2D(32, (3, 3), padding='same', use_bias=False))
model.add(BatchNormalization(axis=3, scale=False))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(4, 4), strides=(4, 4), padding='same'))
model.add(Dropout(0.2))

model.add(Conv2D(64, (3, 3), padding='same', use_bias=False))
model.add(BatchNormalization(axis=3, scale=False))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(4, 4), strides=(4, 4), padding='same'))
model.add(Dropout(0.2))

model.add(Conv2D(128, (3, 3), padding='same', use_bias=False))
model.add(BatchNormalization(axis=3, scale=False))
model.add(Activation("relu"))

model.add(Flatten())
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu'))
model.add(Dense(133, activation='softmax'))

model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_46 (Conv2D)           (None, 224, 224, 16)      432
_________________________________________________________________
batch_normalization_105 (Bat (None, 224, 224, 16)      48
_________________________________________________________________
activation_1479 (Activation) (None, 224, 224, 16)      0
_________________________________________________________________
max_pooling2d_66 (MaxPooling (None, 56, 56, 16)        0
_________________________________________________________________
dropout_84 (Dropout)         (None, 56, 56, 16)        0
_________________________________________________________________
conv2d_47 (Conv2D)           (None, 56, 56, 32)        4608
_________________________________________________________________
batch_normalization_106 (Bat (None, 56, 56, 32)        96
_________________________________________________________________
activation_1480 (Activation) (None, 56, 56, 32)        0
_________________________________________________________________
max_pooling2d_67 (MaxPooling (None, 14, 14, 32)        0
_________________________________________________________________
dropout_85 (Dropout)         (None, 14, 14, 32)        0
_________________________________________________________________
conv2d_48 (Conv2D)           (None, 14, 14, 64)        18432
_________________________________________________________________
batch_normalization_107 (Bat (None, 14, 14, 64)        192
_________________________________________________________________
activation_1481 (Activation) (None, 14, 14, 64)        0
_________________________________________________________________
max_pooling2d_68 (MaxPooling (None, 4, 4, 64)          0
_________________________________________________________________
dropout_86 (Dropout)         (None, 4, 4, 64)          0
_________________________________________________________________
conv2d_49 (Conv2D)           (None, 4, 4, 128)         73728
_________________________________________________________________
batch_normalization_108 (Bat (None, 4, 4, 128)         384
_________________________________________________________________
activation_1482 (Activation) (None, 4, 4, 128)         0
_________________________________________________________________
flatten_2 (Flatten)          (None, 2048)              0
_________________________________________________________________
dropout_87 (Dropout)         (None, 2048)              0
_________________________________________________________________
dense_100 (Dense)            (None, 512)               1049088
_________________________________________________________________
dense_101 (Dense)            (None, 133)               68229
=================================================================
Total params: 1,215,237
Trainable params: 1,214,757
Non-trainable params: 480
_________________________________________________________________
As already elaborated, designing a CNN architecture that achieves even 2% accuracy is not an easy task. The first thing you notice is that increasing the filter depth leads to better results, yet slower training. Batch normalization seems not only to lead to faster training, but also to better results. I used the source code of InceptionV3 as an example when configuring the batch normalization layers. As batch normalization allowed the model to learn much faster, I added a fourth convolutional layer and further increased the filter depth. Then, I altered the max pooling layers to shrink the feature maps by a factor of 4 instead of 2. This drastically decreased the number of trainable parameters and increased the speed at which the model learns. At the end I added dropout to reduce overfitting, as the network started to overfit after the 4th epoch.
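The arithmetic behind those choices is easy to check by hand. Below is a small sketch (plain Python, no Keras needed) that traces the spatial size through the three 4×4 pooling layers and reproduces a couple of the parameter counts from the summary above:

```python
import math

def same_pool(size, stride=4):
    # MaxPooling2D with padding='same' outputs ceil(input / stride)
    return math.ceil(size / stride)

# Spatial size after each of the three pooling layers: 224 -> 56 -> 14 -> 4
sizes = [224]
for _ in range(3):
    sizes.append(same_pool(sizes[-1]))
print(sizes)  # [224, 56, 14, 4]

# Flattened size after the final 128-filter block: 4 * 4 * 128 = 2048
flattened = sizes[-1] * sizes[-1] * 128

# Parameter counts match the summary: a bias-free 3x3 conv over 3 input
# channels with 16 filters has 3*3*3*16 = 432 weights, and the first Dense
# layer has 2048*512 weights plus 512 biases.
conv1_params = 3 * 3 * 3 * 16
dense1_params = 2048 * 512 + 512
print(flattened, conv1_params, dense1_params)  # 2048 432 1049088
```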
from keras.callbacks import ModelCheckpoint

EPOCHS = 10

model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
checkpointer = ModelCheckpoint(filepath='saved_models/weights.best.from_scratch.hdf5',
                               verbose=1, save_best_only=True)
model.fit(train_tensors, train_targets,
          validation_data=(valid_tensors, valid_targets),
          epochs=EPOCHS, batch_size=32, callbacks=[checkpointer], verbose=1)
Train on 6680 samples, validate on 835 samples
Epoch 1/10
6656/6680 [============================>.] - ETA: 1s - loss: 4.8959 - acc: 0.0207
Epoch 00000: val_loss improved from inf to 5.09728, saving model to saved_models/weights.best.from_scratch.hdf5
6680/6680 [==============================] - 434s - loss: 4.8946 - acc: 0.0207 - val_loss: 5.0973 - val_acc: 0.0156
Epoch 2/10
6656/6680 [============================>.] - ETA: 0s - loss: 4.4014 - acc: 0.0524
Epoch 00001: val_loss improved from 5.09728 to 4.45084, saving model to saved_models/weights.best.from_scratch.hdf5
6680/6680 [==============================] - 292s - loss: 4.4012 - acc: 0.0524 - val_loss: 4.4508 - val_acc: 0.0479
Epoch 3/10
6656/6680 [============================>.] - ETA: 0s - loss: 4.1037 - acc: 0.0726
Epoch 00002: val_loss did not improve
6680/6680 [==============================] - 274s - loss: 4.1032 - acc: 0.0731 - val_loss: 4.4804 - val_acc: 0.0443
Epoch 4/10
6656/6680 [============================>.] - ETA: 0s - loss: 3.9247 - acc: 0.0959
Epoch 00003: val_loss improved from 4.45084 to 4.43195, saving model to saved_models/weights.best.from_scratch.hdf5
6680/6680 [==============================] - 273s - loss: 3.9240 - acc: 0.0960 - val_loss: 4.4319 - val_acc: 0.0491
Epoch 5/10
6656/6680 [============================>.] - ETA: 0s - loss: 3.7687 - acc: 0.1175
Epoch 00004: val_loss did not improve
6680/6680 [==============================] - 276s - loss: 3.7678 - acc: 0.1175 - val_loss: 4.9665 - val_acc: 0.0347
Epoch 6/10
6656/6680 [============================>.] - ETA: 0s - loss: 3.6533 - acc: 0.1315
Epoch 00005: val_loss did not improve
6680/6680 [==============================] - 293s - loss: 3.6520 - acc: 0.1317 - val_loss: 4.6552 - val_acc: 0.0671
Epoch 7/10
6656/6680 [============================>.] - ETA: 1s - loss: 3.5410 - acc: 0.1513
Epoch 00006: val_loss improved from 4.43195 to 4.18182, saving model to saved_models/weights.best.from_scratch.hdf5
6680/6680 [==============================] - 313s - loss: 3.5407 - acc: 0.1510 - val_loss: 4.1818 - val_acc: 0.0743
Epoch 8/10
6656/6680 [============================>.] - ETA: 1s - loss: 3.4297 - acc: 0.1773
Epoch 00007: val_loss improved from 4.18182 to 4.05759, saving model to saved_models/weights.best.from_scratch.hdf5
6680/6680 [==============================] - 310s - loss: 3.4286 - acc: 0.1772 - val_loss: 4.0576 - val_acc: 0.1066
Epoch 9/10
6656/6680 [============================>.] - ETA: 1s - loss: 3.3242 - acc: 0.1881
Epoch 00008: val_loss did not improve
6680/6680 [==============================] - 297s - loss: 3.3236 - acc: 0.1883 - val_loss: 4.4697 - val_acc: 0.0683
Epoch 10/10
6656/6680 [============================>.] - ETA: 1s - loss: 3.1783 - acc: 0.2160
Epoch 00009: val_loss did not improve
6680/6680 [==============================] - 300s - loss: 3.1793 - acc: 0.2156 - val_loss: 4.2501 - val_acc: 0.1006
Running the model for 10 epochs took less than an hour on an 8-core CPU. Meanwhile, I am using Floyd Hub to rent a GPU when considerably more power is required. It mostly works fine, once you manage to upload your dataset (their upload pipeline is currently buggy). Let's load the weights of the model that had the best validation loss and measure the accuracy.
model.load_weights('saved_models/weights.best.from_scratch.hdf5')

# get index of predicted dog breed for each image in test set
dog_breed_predictions = [np.argmax(model.predict(np.expand_dims(tensor, axis=0)))
                         for tensor in test_tensors]

# report test accuracy
test_accuracy = 100*np.sum(np.array(dog_breed_predictions)==np.argmax(test_targets, axis=1))/len(dog_breed_predictions)
print('Test accuracy: %.4f%%' % test_accuracy)
Test accuracy: 11.1244%
That is not a bad performance. It is probably as good as someone who is not an expert, but really likes dogs, would manage to achieve.
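To put that number in perspective: with 133 breed classes, uniform random guessing would be expected to score only about 0.75%, so 11.1% is roughly fifteen times better than chance. A quick back-of-the-envelope check:

```python
n_classes = 133
chance_accuracy = 100.0 / n_classes   # expected accuracy of random guessing, in percent
test_accuracy = 11.1244               # the result reported above

print(round(chance_accuracy, 2))                  # 0.75
print(round(test_accuracy / chance_accuracy, 1))  # 14.8
```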
Using pre-trained VGG-19 and Resnet-50
Next, we will use transfer learning to create a CNN that can identify dog breeds from images. The model uses the pre-trained VGG-19 and Resnet-50 models as fixed feature extractors, where the last convolutional output of both networks is fed as input to another, second-level model. As a matter of fact, one can choose between several pre-trained models that are shipped with Keras. I have already tested VGG-16, VGG-19, InceptionV3, Resnet-50 and Xception on this dataset and found VGG-19 and Resnet-50 to have the best performance considering the limited memory resources and training time that I had at my disposal. In the end, I combined both models to achieve a small boost relative to what I achieved by using them separately. Here are a few lines that extract the features from the images:
We only add a global average pooling layer and a fully connected layer, where the latter contains one node for each dog category and is equipped with a softmax. Let’s extract the last convolutional output for both networks.
from keras.applications.vgg19 import VGG19
from keras.applications.vgg19 import preprocess_input as preprocess_input_vgg19
from keras.applications.resnet50 import ResNet50
from keras.applications.resnet50 import preprocess_input as preprocess_input_resnet50

def extract_VGG19(file_paths):
    tensors = paths_to_tensor(file_paths).astype('float32')
    preprocessed_input = preprocess_input_vgg19(tensors)
    return VGG19(weights='imagenet', include_top=False).predict(preprocessed_input, batch_size=32)

def extract_Resnet50(file_paths):
    tensors = paths_to_tensor(file_paths).astype('float32')
    preprocessed_input = preprocess_input_resnet50(tensors)
    return ResNet50(weights='imagenet', include_top=False).predict(preprocessed_input, batch_size=32)
Extracting the features may take a few minutes…
train_vgg19 = extract_VGG19(train_files)
valid_vgg19 = extract_VGG19(valid_files)
test_vgg19 = extract_VGG19(test_files)
print("VGG19 shape", train_vgg19.shape[1:])

train_resnet50 = extract_Resnet50(train_files)
valid_resnet50 = extract_Resnet50(valid_files)
test_resnet50 = extract_Resnet50(test_files)
print("Resnet50 shape", train_resnet50.shape[1:])
VGG19 shape (7, 7, 512)
Resnet50 shape (1, 1, 2048)
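A quick sanity check on these shapes: global average pooling keeps only the channel dimension, so the branches produce 512- and 2048-dimensional vectors, and the quarter-sized Dense layer used in each branch of the second-level model gives 128 + 512 = 640 concatenated units. The numbers can be verified with a few lines of plain Python:

```python
vgg19_shape = (7, 7, 512)
resnet50_shape = (1, 1, 2048)

# GlobalAveragePooling2D averages over the spatial dimensions,
# leaving just the channel count
vgg19_pooled = vgg19_shape[2]           # 512
resnet50_pooled = resnet50_shape[2]     # 2048

# Each branch's Dense layer is sized to a quarter of its channel count
vgg19_branch = vgg19_pooled // 4        # 128
resnet50_branch = resnet50_pooled // 4  # 512

concatenated = vgg19_branch + resnet50_branch
print(concatenated)  # 640
```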
For the second-level model, batch normalization yet again proved to be very important. Without batch normalization the model would not reach 80% accuracy within 10 epochs. Dropout is also important, as it allows the model to train for more epochs before starting to overfit. However, a dropout of 50% leads to a model that trains all 20 epochs without overfitting, yet does not reach 82% accuracy. I've found a dropout of 30% to be just right for the model below. Another important hyperparameter was the batch size. A bigger batch size leads to a model that learns faster: the accuracy increases very rapidly, but the maximum accuracy is a bit lower. A smaller batch size leads to a model that learns more slowly between epochs but reaches a higher accuracy.
from keras.layers.pooling import GlobalAveragePooling2D
from keras.layers.merge import Concatenate
from keras.layers import Input, Dense
from keras.layers.core import Dropout, Activation
from keras.callbacks import ModelCheckpoint
from keras.layers.normalization import BatchNormalization
from keras.models import Model

def input_branch(input_shape=None):
    size = int(input_shape[2] / 4)
    branch_input = Input(shape=input_shape)
    branch = GlobalAveragePooling2D()(branch_input)
    branch = Dense(size, use_bias=False, kernel_initializer='uniform')(branch)
    branch = BatchNormalization()(branch)
    branch = Activation("relu")(branch)
    return branch, branch_input

vgg19_branch, vgg19_input = input_branch(input_shape=(7, 7, 512))
resnet50_branch, resnet50_input = input_branch(input_shape=(1, 1, 2048))
concatenate_branches = Concatenate()([vgg19_branch, resnet50_branch])
net = Dropout(0.3)(concatenate_branches)
net = Dense(640, use_bias=False, kernel_initializer='uniform')(net)
net = BatchNormalization()(net)
net = Activation("relu")(net)
net = Dropout(0.3)(net)
net = Dense(133, kernel_initializer='uniform', activation="softmax")(net)

model = Model(inputs=[vgg19_input, resnet50_input], outputs=[net])
model.summary()
____________________________________________________________________________________________________
Layer (type)                     Output Shape          Param #     Connected to
====================================================================================================
input_3 (InputLayer)             (None, 7, 7, 512)     0
____________________________________________________________________________________________________
input_4 (InputLayer)             (None, 1, 1, 2048)    0
____________________________________________________________________________________________________
global_average_pooling2d_3 (Glob (None, 512)           0           input_3[0][0]
____________________________________________________________________________________________________
global_average_pooling2d_4 (Glob (None, 2048)          0           input_4[0][0]
____________________________________________________________________________________________________
dense_5 (Dense)                  (None, 128)           65536       global_average_pooling2d_3[0][0]
____________________________________________________________________________________________________
dense_6 (Dense)                  (None, 512)           1048576     global_average_pooling2d_4[0][0]
____________________________________________________________________________________________________
batch_normalization_4 (BatchNorm (None, 128)           512         dense_5[0][0]
____________________________________________________________________________________________________
batch_normalization_5 (BatchNorm (None, 512)           2048        dense_6[0][0]
____________________________________________________________________________________________________
activation_4 (Activation)        (None, 128)           0           batch_normalization_4[0][0]
____________________________________________________________________________________________________
activation_5 (Activation)        (None, 512)           0           batch_normalization_5[0][0]
____________________________________________________________________________________________________
concatenate_2 (Concatenate)      (None, 640)           0           activation_4[0][0]
                                                                   activation_5[0][0]
____________________________________________________________________________________________________
dropout_3 (Dropout)              (None, 640)           0           concatenate_2[0][0]
____________________________________________________________________________________________________
dense_7 (Dense)                  (None, 640)           409600      dropout_3[0][0]
____________________________________________________________________________________________________
batch_normalization_6 (BatchNorm (None, 640)           2560        dense_7[0][0]
____________________________________________________________________________________________________
activation_6 (Activation)        (None, 640)           0           batch_normalization_6[0][0]
____________________________________________________________________________________________________
dropout_4 (Dropout)              (None, 640)           0           activation_6[0][0]
____________________________________________________________________________________________________
dense_8 (Dense)                  (None, 133)           85253       dropout_4[0][0]
====================================================================================================
Total params: 1,614,085
Trainable params: 1,611,525
Non-trainable params: 2,560
____________________________________________________________________________________________________
model.compile(loss='categorical_crossentropy', optimizer="rmsprop", metrics=['accuracy'])
checkpointer = ModelCheckpoint(filepath='saved_models/bestmodel.hdf5',
                               verbose=1, save_best_only=True)
model.fit([train_vgg19, train_resnet50], train_targets,
          validation_data=([valid_vgg19, valid_resnet50], valid_targets),
          epochs=10, batch_size=4, callbacks=[checkpointer], verbose=1)
Train on 6680 samples, validate on 835 samples
Epoch 1/10
6676/6680 [============================>.] - ETA: 0s - loss: 2.5900 - acc: 0.3751
Epoch 00000: val_loss improved from inf to 1.06250, saving model to saved_models/bestmodel.hdf5
6680/6680 [==============================] - 69s - loss: 2.5887 - acc: 0.3754 - val_loss: 1.0625 - val_acc: 0.6838
Epoch 2/10
6672/6680 [============================>.] - ETA: 0s - loss: 1.5383 - acc: 0.5679
Epoch 00001: val_loss improved from 1.06250 to 0.87527, saving model to saved_models/bestmodel.hdf5
6680/6680 [==============================] - 50s - loss: 1.5376 - acc: 0.5680 - val_loss: 0.8753 - val_acc: 0.7401
Epoch 3/10
6676/6680 [============================>.] - ETA: 0s - loss: 1.3559 - acc: 0.6257
Epoch 00002: val_loss improved from 0.87527 to 0.79809, saving model to saved_models/bestmodel.hdf5
6680/6680 [==============================] - 50s - loss: 1.3568 - acc: 0.6256 - val_loss: 0.7981 - val_acc: 0.7784
Epoch 4/10
6676/6680 [============================>.] - ETA: 0s - loss: 1.2502 - acc: 0.6552
Epoch 00003: val_loss improved from 0.79809 to 0.74536, saving model to saved_models/bestmodel.hdf5
6680/6680 [==============================] - 49s - loss: 1.2503 - acc: 0.6551 - val_loss: 0.7454 - val_acc: 0.8012
Epoch 5/10
6676/6680 [============================>.] - ETA: 0s - loss: 1.1436 - acc: 0.6824
Epoch 00004: val_loss did not improve
6680/6680 [==============================] - 49s - loss: 1.1438 - acc: 0.6823 - val_loss: 0.7806 - val_acc: 0.8084
Epoch 6/10
6672/6680 [============================>.] - ETA: 0s - loss: 1.0829 - acc: 0.7052
Epoch 00005: val_loss improved from 0.74536 to 0.72584, saving model to saved_models/bestmodel.hdf5
6680/6680 [==============================] - 49s - loss: 1.0820 - acc: 0.7054 - val_loss: 0.7258 - val_acc: 0.8024
Epoch 7/10
6672/6680 [============================>.] - ETA: 0s - loss: 1.0586 - acc: 0.7136
Epoch 00006: val_loss did not improve
6680/6680 [==============================] - 48s - loss: 1.0578 - acc: 0.7136 - val_loss: 0.7493 - val_acc: 0.8072
Epoch 8/10
6676/6680 [============================>.] - ETA: 0s - loss: 1.0034 - acc: 0.7218
Epoch 00007: val_loss did not improve
6680/6680 [==============================] - 52s - loss: 1.0041 - acc: 0.7217 - val_loss: 0.7958 - val_acc: 0.8120
Epoch 9/10
6676/6680 [============================>.] - ETA: 0s - loss: 0.9489 - acc: 0.7326
Epoch 00008: val_loss improved from 0.72584 to 0.72160, saving model to saved_models/bestmodel.hdf5
6680/6680 [==============================] - 47s - loss: 0.9484 - acc: 0.7328 - val_loss: 0.7216 - val_acc: 0.8228
Epoch 10/10
6676/6680 [============================>.] - ETA: 0s - loss: 0.9080 - acc: 0.7431
Epoch 00009: val_loss did not improve
6680/6680 [==============================] - 47s - loss: 0.9076 - acc: 0.7433 - val_loss: 0.7365 - val_acc: 0.8228
Training the model takes only a few minutes… Let’s load the weights of the model that had the best validation loss and measure the accuracy.
model.load_weights('saved_models/bestmodel.hdf5')

from sklearn.metrics import accuracy_score

predictions = model.predict([test_vgg19, test_resnet50])
breed_predictions = [np.argmax(prediction) for prediction in predictions]
breed_true_labels = [np.argmax(true_label) for true_label in test_targets]
print('Test accuracy: %.4f%%' % (accuracy_score(breed_true_labels, breed_predictions) * 100))
Test accuracy: 82.2967%
The accuracy on the test set is 82.3%. I find that really impressive compared to the 11% accuracy achieved by the model trained from scratch. The reason the accuracy is so much higher is that both VGG-19 and Resnet-50 were trained on ImageNet, which is not only huge (1.2 million images) but also contains a considerable number of dog images. As a result, the accuracy achieved by using models pre-trained on ImageNet is much higher than what could possibly be achieved by training a model from scratch. Andrew Ng, the founder of Coursera and one of the biggest names in the ML realm, said during his widely popular NIPS 2016 tutorial that transfer learning will be the next driver of ML commercial success. I can imagine that, in the future, models pre-trained on massive datasets will be made available by Google, Apple, Amazon and others in exchange for some kind of subscription fee or other form of payment. As a result, data scientists would be able to achieve remarkable results even when provided with only a limited set of data for training.
As always feel free to contact me or check out and execute the whole jupyter notebook: Dog Breed Github Repo
Annotation for checking required session fields
Recently I worked on a project where I used the Spring Security plugin. It's a wonderful plugin for securing your application against unauthorized users. It gives you a simple annotation, @Secured, to add security to your actions and controllers. That's the first time I got to know a real use case for annotations. So I started reading about annotations, and a few days later I found a use case to implement my own annotation.
All the projects I have worked on had login functionality where we put the userId and projectId into the session. Then in my code I used to get the user from session.userId. Something like:
def books = {
    User user = User.get(session.userId)
    Project project = Project.get(session.projectId)
    ......
    .....
}
The above code fails when a user directly hits this action, because there is no check to verify that the user is not null. The simple answer to this problem is to use either a beforeInterceptor or filters. So we started checking session.userId in a filter. But again, there are cases where you don't want to check this session value; you could say there are public URLs as well. Now we have to put a few if-else statements in the filter.
Here I got my use case to implement an annotation for controllers and actions that checks for the required fields in the session before entering the action. So I created an annotation in the src/groovy folder:
import java.lang.annotation.ElementType
import java.lang.annotation.Retention
import java.lang.annotation.RetentionPolicy
import java.lang.annotation.Target

@Target([ElementType.FIELD, ElementType.TYPE]) // Annotation is for actions as well as controllers, so the target is field and type
@Retention(RetentionPolicy.RUNTIME) // We need it at run time to identify the annotated controller and action
@interface RequiredSession {
    String[] exclude() default [] // To exclude some of the actions of a controller
    String[] fields() default ["userId", "projectId"] // The default value is set to userId and projectId; it can be overridden while using the annotation on a controller or action
    String onFailController() default "home" // Default controller when a field is not in the session is set to the index page
    String onFailAction() default "index" // Default action when a field is not in the session is set to the index page
}
Now I created an ApplicationFilters class, and before the request is dispatched to any action I check the session fields if the requested action or controller is annotated. The code in the filter looks something like this:
class ApplicationFilters {
    def filters = {
        validateSession(controller: '*', action: '*') {
            before = {
                if (controllerName) {
                    // Get the instance of the controller class from the string value i.e. controllerName
                    def controllerClass = grailsApplication.controllerClasses.find { it.logicalPropertyName == controllerName }
                    // Read the RequiredSession annotation from the controller class
                    def annotation = controllerClass.clazz.getAnnotation(RequiredSession)
                    // Get the current action from actionName, otherwise read the default action of the controller
                    String currentAction = actionName ?: controllerClass.defaultActionName
                    // Look for the annotation on the action if the controller is not annotated or the action name is excluded
                    if (!annotation || currentAction in annotation.exclude()) {
                        // Get the action field from the string value i.e. currentAction
                        def action = applicationContext.getBean(controllerClass.fullName).class.declaredFields.find { field ->
                            field.name == currentAction
                        }
                        // If the action is found get the annotation, else set it to null
                        annotation = action ? action.getAnnotation(RequiredSession) : null
                    }
                    // Check whether any of the required fields in the session are null; if so, the login check fails
                    boolean loginFailed = annotation ? (annotation.fields().any { session[it] == null }) : false
                    if (loginFailed) {
                        // If login failed the user is redirected to the onFail action and controller
                        redirect(action: annotation.onFailAction(), controller: annotation.onFailController())
                        return false;
                    }
                }
            }
        }
    }
}
And it's all done. Now we just annotate our controllers and actions accordingly.
@RequiredSession(exclude = ["registration", "joinProject"])
class UserController {
    def edit = {}
    def update = {}
    def list = {}
    def save = {}
    def registration = {}
    def joinProject = {}
}
In the above example, the registration and joinProject actions will bypass the session fields check.
@RequiredSession
class ItemController {
    def index = {}
    def buy = {}
    def save = {}
}
All the actions of the above example can be accessed only when the user is logged in.
class HomeController {
    def index = {}
    def aboutUs = {}

    @RequiredSession
    def dashboard = {}
}
Actions other than dashboard are public actions which can be accessed without logging in.
class UserController {
    @RequiredSession(fields = ["loggedInUserId"])
    def updatePassword = {
    }
}
For updating the password the user doesn't need to have a project in the session, so we specified the fields to be checked in the session.
Hope it helps
Uday Pratap Singh
uday@intelligrape.com
Everyone would benefit from reading this post
Hey, it’s really cool! Anyway, I prefer to store my session variables in a service with session scope. What do you think?
Common Lisp is a general-purpose programming language, in contrast to Lisp variants such as Emacs Lisp and AutoLISP which are embedded extension languages in particular products. Unlike many earlier Lisps, but like Scheme, Common Lisp uses lexical scoping for variables.
Common Lisp is a multi-paradigm programming language that:
Syntax

Common Lisp is a Lisp; it uses S-expressions to denote both code and data structure. Function and macro calls are written as lists, with the name of the function first, as in these examples:
(+ 2 2)          ; adds 2 and 2, yielding 4
(setq pi 3.1415) ; sets the variable "pi" equal to 3.1415
; Define a function that squares a number
(defun square (x)
  (* x x))

; Execute the function
(square 3) ; Returns "9"
The Common Lisp character type is not limited to ASCII characters; unsurprising, as Lisp predates ASCII. Some modern implementations allow Unicode characters.
The symbol type is common to Lisp languages, but largely unknown outside them. A symbol is a unique, named data object. Symbols in Lisp are similar to identifiers in other languages, in that they can be used as variables to hold values; however, they are more general and can be used for themselves as well. Boolean values in Common Lisp are represented by the reserved symbols T and NIL.
As in any other Lisp, lists in Common Lisp are composed of conses, sometimes called cons cells or pairs. A cons is a data structure of two elements, called its car and cdr. A list is a linked chain of conses wherein each cons's cdr points to the next element, and the last cdr points to the NIL value. Conses can also easily be used to implement trees and other complex data structures.
Hash tables store associations between data objects. Any object may be used as key or value. Hash tables, like arrays, are automatically resized as needed.
Conditions are a special type used to represent errors, exceptions, and other "interesting" events to which a program may respond.
For instance, the sort function takes a comparison operator as an argument. This can be used not only to sort any type of data, but also to sort data structures according to a key.
(sort '(5 2 6 3 1 4) #'>)
; Returns (6 5 4 3 2 1), using the > function as the comparison operator
(sort '((9 a) (3 b) (4 c)) #'(lambda (x y) (< (car x) (car y))))
; Returns ((3 b) (4 c) (9 a)), i.e. the list sorted by the first element
Common Lisp is a Lisp-2, meaning that there are separate namespaces for defined functions and for variables. (This differs from, for instance, Scheme, which is a Lisp-1.) Lisp-2 has the advantage that a local variable name will never shadow a function name: one can call a variable cons or even if with no problems. However, to refer to a function as a value one must use the #' notation, as in the above examples.
Common Lisp also includes a toolkit for object-oriented programming, the Common Lisp Object System or CLOS.
Macros allow Lisp programmers to create new syntactic forms in the language. For instance, this macro provides the until loop form, which may be familiar from languages such as Perl:
(defmacro until (test &rest body)
  `(do () (,test)
     ,@body))
This differs from a function in that it can repeatedly evaluate its arguments. A function's arguments are evaluated only once, before the function is called; a macro controls its arguments' evaluation or other use in the macro-expansion.
Variable capture is sometimes a desired effect; when it is not, it must be avoided using gensyms, or guaranteed-unique symbols.
Implementations

Common Lisp is defined by a specification (like Ada and C) rather than by a single implementation (like Perl). There are many implementations, and the standard spells out areas in which they may validly differ.
In addition, implementations tend to come with divergent sets of library packages, which provide functionality not covered in the standard. Some of these features have been rolled back into the standard, such as CLOS and the LOOP construct; others remain implementation-specific. Unfortunately, many valuable facilities for the modern programmer -- such as TCP/IP networking -- remain unstandardized.
It is a common misconception that Common Lisp implementations are all interpreters. In fact, compilation is part of the language specification. Most Common Lisp implementations compile functions to native machine code. Others compile to bytecode, which reduces speed but improves portability.
Some Unix-based implementations, such as CLISP, can be used as script interpreters.
Freely redistributable implementations include:
There are also commercial implementations available from Franz, Xanalys, Digitool, Corman and Scieneer.
see also: WCL, Kyoto Common Lisp.
I've connected the GPS to my Arduino UNO via the Grove shield.
The following code doesn't send anything except the "Hello" message. Is that normal?
I thought that the GPS module needs time to see the satellites, so I left it outside for 2 hours with no result.
Thanks for help
#include <SoftwareSerial.h>

SoftwareSerial gps(2, 3);
char data;

void setup()
{
    Serial.begin(9600);
    gps.begin(9600);
    Serial.println("Hello");
}

void loop()
{
    if (gps.available() > 0)
    {
        Serial.println("The GPS want to send something");
        data = gps.read();
        Serial.print(data);
    }
}
A recent tweet by Fermat's Library noted that the Fundamental theorem of arithmetic provides a novel (if inefficient) way of determining whether two words are anagrams of one another.
The Fundamental theorem of arithmetic states that every integer greater than 1 is either a prime number itself or can be represented as the unique product of prime numbers.
First, one assigns distinct prime numbers to each letter (e.g. a=2, b=3, c=5, ...). Then form the product of the numbers corresponding to each letter in each of the two words. If the two products are equal, the words are anagrams. For example,
'cat': $5 \times 2 \times 71 = 710$
'act': $2 \times 5 \times 71 = 710$
The following code implements and tests this algorithm.
from functools import reduce a = 'abcdefghijklmnopqrstuvwxyz' p = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101] d = dict(zip(a, p)) def prod(s): return reduce((lambda n1, n2: n1*n2), [d[lett] for lett in s]) def is_anagram(s1, s2): return prod(s1) == prod(s2)
Here,
d is a dictionary mapping the letters to their primes. The
functools.reduce method applies a provided function cumulatively to the items of a sequence.
To see that it works, try:
is_anagram('cat', 'act') True is_anagram('tea', 'tee') False
Note that for longer words the products formed get quite large:
prod('floccinaucinihilipilification') 35334111214198884032311058457421182500
Comments are pre-moderated. Please be patient and your comment will appear soon.
Dominik Stańczak 9 months, 1 week ago
This is utterly dreadful and I love it. Clever idea!Link | Reply
New Comment | https://scipython.com/blog/using-prime-numbers-to-determine-if-two-words-are-anagrams/ | CC-MAIN-2019-51 | en | refinedweb |
Build a Stateful Real-Time App with React Native and Pusher
Free JavaScript Book!
Write powerful, clean and maintainable JavaScript.
RRP $11.95
Users now expect apps to update and react to their actions in real-time. Thankfully there are a lot of language varieties and libraries now available to help you create these highly dynamic apps. In this tutorial you will learn how to build a real-time chat application with Pusher, React-native and Redux to manage the state of the app.
You can find the complete project on GitHub.
Install Dependencies
Pusher
Pusher is a realtime communication platform used to broadcast messages to listeners via their subscription to a channel. Listeners subscribe to a channel and the messages are broadcast to the channel and all the listeners receive the messages.
You will first need to create an account and then install the Pusher npm module with the following command:
npm init npm install pusher -g npm install pusher-js -g
Under the App Keys section of your Pusher project, note the
app_id,
key, and
secret values.
React Native
React Native is a framework for building rich, fast and native mobile apps with the same principles used for building web apps with React.js. React (for me) presents a better way to build UIs and is worth checking out for better understanding of this tutorial and to make your front-end life a lot easier. If you have not used React Native before, SitePoint has a lot of tutorials, including a Quick Tip to get you started.
Redux
Redux is a simple state container (the simplest I’ve used so far) that helps keep state in React.js (and React Native) applications using unidirectional flow of state to your UI components and back from your UI component to the Redux state tree. For more details, watch this awesome video tutorials by the man who created Redux. You will learn a lot of functional programing principles in Javascript and it will make you see Javascript in a different light.
App Backend
First the app needs a backend to send chat messages to, and to serve as the point from where chat messages are broadcast to all listeners. You will build this backend with Express.js, a minimalist web framework running on node.js.
Install Express with the following command:
npm install express -g
Create a folder for the project called ChatServer and inside it an index.js file.
In index.js, require the necessary libraries and create an express app running on port
5000.
var express = require('express'); var Pusher = require('pusher'); var app = express(); app.set('port', (process.env.PORT || 5000));
Create your own instance of the Pusher library by passing it the
app_id,
key, and
secret values:
... const pusher = new Pusher({ appId: 'YOUR PUSHER APP_ID HERE', key: 'YOUR PUSHER KEY HERE', secret: 'YOUR PUSHER SECRET HERE' })
Create an endpoint that receives chat messages and send them to pusher to make a broadcast action to all listeners on the chat channel. You also need to setup a listener for connections on the set port.
... app.get('/chat/:chat', function(req,res){ const chat_data = JSON.parse(req.params.chat); pusher.trigger('chat_channel', 'new-message', {chat:chat_data}); }); app.listen(app.get('port'), function() { console.log('Node app is running on port', app.get('port')); });
Mobile App
Now to the mobile app, move up a level and run the following command to create a new React Native project:
react-native init PusherChat cd PusherChat
The app needs some other dependencies:
- Axios – For Promises and async requests to the backend.
- AsyncStorage – For storing chat messages locally.
- Moment – For setting the time each chat message is sent and arrange messages based on this time.
- Pusher-js – For connecting to pusher.
- Redux – The state container
- Redux-thunk – A simple middleware that helps with action dispatching.
- React-redux – React bindings for Redux.
You should have already installed
pusher-js earlier, and
AsyncStorage is part of React native. Install the rest by running:
npm install --save redux redux-thunk moment axios react-redux
Now you are ready to build the chat app, starting by building the actions that the application will perform.
With Redux you have to create application action types, because when you dispatch actions to the reducers (state managers), you send the action to perform (action type) and any data needed to perform the action (payload). For this app the actions are to send a chat, get all chats, and receive a message
Create a new file in src/actions/index.js and add the following:
import axios from 'axios' import { AsyncStorage } from 'react-native' import moment from 'moment' import Pusher from 'pusher-js/react-native'; export const SEND_CHAT = "SEND_CHAT"; export const GET_ALL_CHATS = "GET_ALL_CHATS"; export const RECEIVE_MESSAGE = " RECEIVE_MESSAGE";
You also need helper functions that encapsulate and return the appropriate
action_type when called, so that when you want to send a chat you dispatch the
sendChat function and its payload:
const sendChat = (payload) => { return { type: SEND_CHAT, payload: payload }; }; const getChats = (payload) => { return { type: GET_ALL_CHATS, payload: payload }; }; const newMessage = (payload) => { return { type: RECEIVE_MESSAGE, payload: payload }; };
You also need a function that subscribes to pusher and listens for new messages. For every new messages this function receives, add it to the device
AsyncStorage and dispatch a new message action so that the application state is updated.
// function for adding messages to AsyncStorage const addToStorage = (data) => { AsyncStorage.setItem(data.convo_id+data.sent_at, JSON.stringify(data), () => {}) } // function that listens to pusher for new messages and dispatches a new // message action export function newMesage(dispatch){ const socket = new Pusher("3c01f41582a45afcd689"); const channel = socket.subscribe('chat_channel'); channel.bind('new-message', (data) => { addToStorage(data.chat); dispatch(newMessage(data.chat)) } ); }
You also have a function for sending chat messages. This function expects two parameters, the sender and message. In an ideal chat app you should know the sender via the device or login, but for this input the sender:
export function apiSendChat(sender,message){ const sent_at = moment().format(); const chat = {sender:sender,message:message, sent_at:sent_at}; return dispatch => { return axios.get(`{JSON.stringify(chat)}`).then(response =>{ }).catch(err =>{ console.log("error", err); }); }; };
Finally is a function that gets all the chat messages from the device
AysncStorage. This is needed when first opening the chat app, loading all the messages from the device storage and starting to listen for new messages.
export function apiGetChats(){ //get from device async storage and not api return dispatch => { dispatch(isFetching()); return AsyncStorage.getAllKeys((err, keys) => { AsyncStorage.multiGet(keys, (err, stores) => { let chats = []; stores.map((result, i, store) => { // get at each store's key/value so you can work with it chats.push(JSON.parse(store[i][1])) }); dispatch(getChats(chats)) }); }); }; }
The next step is to create the reducer. The easiest way to understand what the reducer does is to think of it as a bank cashier that performs actions on your bank account based on whatever slip (Action Type) you present to them. If you present them a withdrawal slip (Action Type) with a set amount (payload) to withdraw (action), they remove the amount (payload) from your bank account (state). You can also add money (action + payload) with a deposit slip (Action Type) to your account (state).
In summary, the reducer is a function that affects the application state based on the action dispatched and the action contains its type and payload. Based on the action type the reducer affects the state of the application.
Create a new file called src/reducers/index.js and add the following:
import { combineReducers } from 'redux'; import { SEND_CHAT, GET_ALL_CHATS, RECEIVE_MESSAGE} from './../actions' // THE REDUCER const Chats = (state = {chats:[]}, actions) => { switch(actions.type){ case GET_ALL_CHATS: return Object.assign({}, state, { process_status:"completed", chats:state.chats.concat(actions.payload) }); case SEND_CHAT: case NEW_MESSAGE: return Object.assign({}, state, { process_status:"completed", chats:[...state.chats,actions.payload] }); default: return state; } }; const rootReducer = combineReducers({ Chats }) export default rootReducer;
Next create the store. Continuing the bank cashier analogy, the store is like the warehouse where all bank accounts (states) are stored. For now you have one state, Chats, and have access to it whenever you need it.
Create a new src/store/configureStore.js file and add the following:
import { createStore, applyMiddleware } from 'redux' import thunkMiddleware from 'redux-thunk' import createLogger from 'redux-logger' import rootReducer from '../reducers' const createStoreWithMiddleware = applyMiddleware( thunkMiddleware, createLogger() )(createStore) export default function configureStore(initialState) { const store = createStoreWithMiddleware(rootReducer, initialState) return store }
Now let’s create the main chat component that renders all the chat messages and allows a user to send a chat message by inputting their message. This component uses the React Native
ListView.
Create a new src/screens/conversationscreen.js file and add the following:
import React, { Component, View, Text, StyleSheet, Image, ListView, TextInput, Dimensions} from 'react-native'; import Button from './../components/button/button'; import { Actions } from 'react-native-router-flux'; import KeyboardSpacer from 'react-native-keyboard-spacer'; import { connect } from 'react-redux'; import moment from 'moment'; import { apiSendChat, newMesage } from './../actions/'; const { width, height } = Dimensions.get('window'); const styles = StyleSheet.create({ container: { flex: 1 }, main_text: { fontSize: 16, textAlign: "center", alignSelf: "center", color: "#42C0FB", marginLeft: 5 }, row: { flexDirection: "row", borderBottomWidth: 1, borderBottomColor: "#42C0FB", marginBottom: 10, padding:5 }, back_img: { marginTop: 8, marginLeft: 8, height: 20, width: 20 }, innerRow: { flexDirection: "row", justifyContent: "space-between" }, back_btn: {}, dp: { height: 35, width: 35, borderRadius: 17.5, marginLeft:5, marginRight:5 }, messageBlock: { flexDirection: "column", borderWidth: 1, borderColor: "#42C0FB", padding: 5, marginLeft: 5, marginRight: 5, justifyContent: "center", alignSelf: "flex-start", borderRadius: 6, marginBottom: 5 }, messageBlockRight: { flexDirection: "column", backgroundColor: "#fff", padding: 5, marginLeft: 5, marginRight: 5, justifyContent: "flex-end", alignSelf: "flex-end", borderRadius: 6, marginBottom: 5 }, text: { color: "#5c5c5c", alignSelf: "flex-start" }, time: { alignSelf: "flex-start", color: "#5c5c5c", marginTop:5 }, timeRight: { alignSelf: "flex-end", color: "#42C0FB", marginTop:5 }, textRight: { color: "#42C0FB", alignSelf: "flex-end", textAlign: "right" }, input:{ borderTopColor:"#e5e5e5", borderTopWidth:1, padding:10, flexDirection:"row", justifyContent:"space-between" }, textInput:{ height:30, width:(width * 0.85), color:"#e8e8e8", }, msgAction:{ height:29, width:29, marginTop:13 } }); const username = 'DUMMY_USER'; function mapStateToProps(state) { return { Chats: state.Chats, dispatch: state.dispatch } } class 
ConversationScreen extends Component { constructor(props) { super(props); const ds = new ListView.DataSource({rowHasChanged: (r1, r2) => r1 != r2}); this.state = { conversation: ds, text:"", username } } componentDidMount(){ const {dispatch, Chats} = this.props; const chats = Chats; chats.sort((a,b)=>{ return moment(a.sent_at).valueOf() - moment(b.sent_at).valueOf(); }); this.setState({ conversation: this.state.conversation.cloneWithRows(chats) }) } componentWillReceiveProps(nextProps) { const {dispatch, Chats} = this.props; const chats = Chats; chats.sort((a,b)=>{ return moment(a.sent_at).valueOf() - moment(b.sent_at).valueOf(); }); this.setState({ conversation: this.state.conversation.cloneWithRows(chats) }) } renderSenderUserBlock(data){ return ( <View style={styles.messageBlockRight}> <Text style={styles.textRight}> {data.message} </Text> <Text style={styles.timeRight}>{moment(data.time).calendar()}</Text> </View> ) } renderReceiverUserBlock(data){ return ( <View style={styles.messageBlock}> <Text style={styles.text}> {data.message} </Text> <Text style={styles.time}>{moment(data.time).calendar()}</Text> </View> ) } renderRow = (rowData) => { return ( <View> {rowData.sender == username ? 
this.renderSenderUserBlock(rowData) : this.renderReceiverUserBlock(rowData)} </View> ) } sendMessage = () => { const message = this.state.text; const username = this.state.username; const {dispatch, Chats} = this.props; dispatch(apiSendChat(username,message)) } render() { return ( <View style={styles.container}> <View style={styles.row}> <Button style={styles.back_btn} onPress={() => Actions.pop()}> <Image source={require('./../assets/back_chevron.png')} style={styles.back_img}/> </Button> <View style={styles.innerRow}> <Image source={{uri:""}} style={styles.dp}/> <Text style={styles.main_text}>GROUP CHAT</Text> </View> </View> <ListView renderRow={this.renderRow} dataSource={this.state.conversation}/> <View style={styles.input}> <TextInput style={styles.textInput} onChangeText={(text) => this.setState({username:text})} <TextInput style={styles.textInput} onChangeText={(text) => this.setState({text:text})} <Button onPress={this.sendMessage}> <Image source={require('./../assets/phone.png')} style={styles.msgAction}/> </Button> </View> <KeyboardSpacer/> </View> ) } } export default connect(mapStateToProps)(ConversationScreen)
React Native gives you a lifecycle function,
componentWillReceiveProps(nextProps) called whenever the component is about to receive new properties (props) and it’s in this function you update the state of the component with chat messages.
The
renderSenderUserBlock function renders a chat message as sent by the user and the
renderReceiverUserBlock function renders a chat message as received by the user.
The
sendMessage function gets the message from the state that the user intends to send, the username of the recipient and dispatches an action to send the chat message.
The
renderRow function passed to the
Listview component contains properties and renders the data of each row in the
Listview.
You need to pass state to the application components and will use the React-redux library to do that. This allows you to connect the components to redux and access to the application state.
React-Redux provides you with 2 things:
- A ‘Provider’ component which allows you to pass the store to it as a property.
- A ‘connect’ function which allows the component to connect to redux. It passes the redux state which the component connects to as properties for the Component.
Finally create app.js to tie everything together:
import React, { Component, StyleSheet, Dimensions} from 'react-native'; import { Provider } from 'react-redux' import configureStore from './store/configureStore' const store = configureStore(); import ConversationScreen from './screens/conversation-screen'; const styles = StyleSheet.create({ container: { flex: 1, backgroundColor: "#fff", }, tabBarStyle: { flex: 1, flexDirection: "row", backgroundColor: "#95a5a6", padding: 0, height: 45 }, sceneStyle: { flex: 1, backgroundColor: "#fff", flexDirection: "column", paddingTop:20 } }) export default class PusherChatApp extends Component { render() { return ( <Provider store={store}> <ConversationScreen /> </Provider> ) } }
And reference app.js in index.android.js and index.ios.js, replacing any current contents:
import React, { AppRegistry, Component, StyleSheet, Text, View } from 'react-native'; import PusherChatApp from './src/app' AppRegistry.registerComponent('PusherChat', () => PusherChatApp);
Talk to Me
And that’s it, a scalable and performant real-time app that you can easily add to and enhance for your needs. If you have any questions or comments then please let me know below.
Get practical advice to start your career in programming!
Master complex transitions, transformations and animations in CSS! | https://www.sitepoint.com/build-a-stateful-real-time-app-with-react-native-and-pusher/ | CC-MAIN-2021-04 | en | refinedweb |
Nitpicking over Code Standards with Nitpick CI
Free JavaScript Book!
Write powerful, clean and maintainable JavaScript.
RRP $11.95
There are many ways to make sure your code respects a given code standard – we’ve covered several before. But enforcing a standard team-wide and making sure everyone knows about mistakes before they’re applied to the project isn’t something that’s very easy to do. Travis and Jenkins can both be configured to do these checks, but aren’t as easygoing about it as the solution we’re about to look at: Nitpick CI.
Nitpick CI is a dead simple single-click tool for making sure submitted Github pull requests adhere to the PSR-2 standard. Unfortunately, Nitpick currently only works with these two specific (but popular) vectors – only Github, and only PSR-2. It’s free for open source projects, so let’s give it a try.
Bootstrapping
To test Nitpick, we’ll create a brand new repository based on thephpleague/skeleton and pretend we’re building a new PHP package. The skeleton already respects PSR-2, so it’s a perfect candidate for an invalid PR. Feel free to follow along!
git clone nitpick-test cd nitpick-test find . -type f -print0 | xargs -0 sed -i 's/:author_name/Bruno Skvorc/g' find . -type f -print0 | xargs -0 sed -i 's/:author_usernamename/swader/g' find . -type f -print0 | xargs -0 sed -i 's/:author_website/http:\/\/bitfalls.com/g' find . -type f -print0 | xargs -0 sed -i 's/:author_email/bruno@skvorc.me/g' find . -type f -print0 | xargs -0 sed -i 's/:vendor/sitepoint/g' find . -type f -print0 | xargs -0 sed -i 's/:package_name/nitpick-test/g' find . -type f -print0 | xargs -0 sed -i 's/:package_description/nitpick-test package for a SitePoint tutorial/g' rm CONDUCT.md rm -rf .git
The above commands clone the skeleton, replace placeholder values with real values, and delete some files we don’t need. The project is now ready to be committed and pushed online (as long as you created a repo on Github).
git init git remote add origin YOUR_ORIGIN git add -A git commit -am "Initial commit" git push -u origin master
We can now tell Nitpick to track it.
Configuring Nitpick
To set up Nitpick, we really only have to register for it through Github’s Oauth:
Once authorized to look at an account’s repos, Nitpick will offer a list with a single “Activate” button next to each.
There’s even a handy search field for instantly filtering the repos list if you have hundreds. One click on the Activate button, and that’s all it takes. Nitpick is now watching the project for PRs and automatically scanning them. Let’s give it a go!
A Non-Code PR
First, let’s test this on non-code PRs and see how Nitpick will react. We deleted the CONDUCT file in the first steps of this tutorial, but we failed to delete the reference to it from the README file. There’s also a “note” at the top of the README file that needs to be removed.
We can do these changes in the online UI by clicking the edit button on the README file:
To actually create an inspectable PR, we tell Github to create a new branch and a new pull request:
As expected, the PR is left untouched – nothing really happens and we can merge it. The file has no code to inspect, so no notifications are issued.
A Valid Code Pull Request
Now, let’s add a change to the sample controller. The change should be minimal, just to see if Nitpick will do anything. For simplicity, we can use the web UI again.
We can change the content of
src/SkeletonClass.php to:
< ?php namespace League\Skeleton; class SkeletonClass { /** * Create a new Skeleton Instance */ public function __construct() { // constructor body } /** * Friendly welcome * * @param string $phrase Phrase to return * * @return string Returns the phrase passed in */ public function echoPhrase($phrase) { $phrase .= "- and here is a suffix!!"; return $phrase; } }
And again, nothing happens. Is Nitpick even working?
An Invalid Code Pull Request
Now, let’s write some PSR-2 unfriendly code on purpose. PSR-2 states:
There MUST NOT be a hard limit on line length; the soft limit MUST be 120 characters; lines SHOULD be 80 characters or less.
and
Visibility MUST be declared on all properties and methods; abstract and final MUST be declared before the visibility; static MUST be declared after the visibility.
While there are many more rules, these seem easy enough to break. Let’s change the contents of
src/SkeletonClass.php to:
<?php namespace League\Skeleton; class SkeletonClass { $someProp = "foo"; public $someOtherProp = "bar"; /** * Create a new Skeleton Instance */ public function __construct() { // constructor body } public final; } }
We’ve added two properties, of which only one is defined according to rules. The other is missing its visibility declaration. Furthermore, we added a technically valid but standards-invalid method which defines its finality after the visibility, and then has its entire body in the same line as the method’s declaration. Lastly, we extended the previously added suffix to an extreme length.
Once we submit the PR, we can see things are a bit different:
Nitpick displays the mistakes inline, specifically pointing out what’s wrong – whether it’s just one offense or multiple ones as in the case of our added method.
Due to Nitpick commenting on the PR directly as if a user were doing it, notifications are also sent via email, so it’s nearly impossible to miss them:
One thing it missed, however, is the line length. It didn’t react to it at all. This is because under the hood, Nitpick uses PHP CodeSniffer and it’s PSR-2 preset, and what ever PHPCS considers valid, Nitpick agrees with. On the specific line length issue, PHPCS considers it merely a warning, which we can see if we run it manually on our code:
composer global require squizlabs/php_codesniffer ~/.config/composer/vendor/bin/phpcs --standard=PSR2 src/SkeletonClass.php
After inspection, it’s not like Nitpick will block the merge attempt. We can still go ahead and merge the PR as usual. The inspection messages, however, are there to stay for public record until we get them fixed. Let’s do that now. On the command line, we’ll clone that PR’s branch:
git fetch origin git checkout -b Swader-patch-3 origin/Swader-patch-3
We then make the required edits, changing the
SkeletonClass.php‘s contents to:
<?php namespace League\Skeleton; class SkeletonClass { public $someProp = "foo"; public $someOtherProp = "bar"; /** * Create a new Skeleton Instance */ public function __construct() { // constructor body } final public; } }
After committing and pushing with:
git add -A git commit -m "Fixed nitpicks" git push origin Swader-patch-3
… the only feedback we get about having fixed the mistakes is the fact that Nitpick’s previous comments on the PR are now outdated:
Our PR is now ready to be merged!
Conclusion
While Nitpick is, admittedly, very specific in its use case, it’s also very good at it – it does one thing, and does it well. A single click is all it takes to configure it, and the entire team benefits from advanced standard inspections which can now be left out of other CI tools like Jenkins and Travis, instead letting them focus on actual tests. Being free for open source and incredibly easy to set up, there’s truly no reason not to give it a go.
Granted, there are some nitpicks with Nitpick:
- due to Nitpick using PHPCS under the hood, we’re constrained by what the tool can do. Not only that, we’re also constrained by the tool’s interpretation of the PSR-2 rules. There’s no way to customize the ruleset, or to define a different implementation of the standard (for now).
- when mistakes get introduced, there’s no blocking indicator in the PR. CI tools like Travis and Scrutinizer do this really well in that they work together and use the PR’s readiness indicator to mark a build fail or pass. While code standards have no real effect on a build’s technical realization passing or failing, it’d be nice to have some more visual feedback in the spirit of those tools.
- when mistakes get fixed, there’s no clear indication of the issues having been rectified. I like Scrutinizer‘s approach there – clear indications in the PR, and an email saying some issues were fixed. This gets sent to the entire team, so everyone’s immediately on the same page.
That said, I’ll definitely activate Nitpick on all my open source projects – there’s just no reason not to. As a heavy PSR-2 user, I can leave the standards checking to Nitpick and forget all about it in tools like Travis and Scrutinizer, instead letting those tools focus on what matters – code inspection and tests.
What do you use for code inspection? Do you rely on external tools, or local tools only? Maybe both? Let us know!
Get practical advice to start your career in programming!
Master complex transitions, transformations and animations in CSS! | https://www.sitepoint.com/nitpicking-over-code-standards-with-nitpick-ci/ | CC-MAIN-2021-04 | en | refinedweb |
Hands on with Records in Java 14 – A Deep Dive
To celebrate the release of Java 14, here’s a deep dive into Records in JDK 14. It’s written by Developer Advocate at JetBrains, founder of eJavaGuru.com and Java Champion, Mala Gupta. What a treat! So let’s get stuck in.
With Records, Java 14 is welcoming compact syntax for declaring data classes. It lets you model your data with ease – by using just a single line of code. The compiler would do the heavy lifting in terms of the implicit methods that are required to make the class work with you.
In this article, I’d cover what records are, the issues they address, and how to use them. Let’s get started.
Why do we need Records in Java?
It is rare to see an application that doesn't interact with a data store. Often your application would need to read data from a data source or write to it. As developers, we are used to defining a data class, encapsulating variables to store its state. However, with a regular class, the rules are to define private instance variables to store the state of your data object, add public constructors, accessor or mutator methods, and override methods like toString(), equals() and hashCode() from the Object class.
At times, developers cut down on their efforts – either by not defining these methods, or by using an existing class that is vaguely like the one they require. Both cases may throw up surprises later.
Following is an example of a class you could use to represent a Rectangle and store it to a datastore:
package com.jetbrains;

import java.util.Objects;

public class Rectangle {
    private int length;
    private int breadth;

    public Rectangle(int length, int breadth) {
        this.length = length;
        this.breadth = breadth;
    }

    public int getLength() {
        return length;
    }

    public void setLength(int length) {
        this.length = length;
    }

    public int getBreadth() {
        return breadth;
    }

    public void setBreadth(int breadth) {
        this.breadth = breadth;
    }

    @Override
    public String toString() {
        return "Rectangle{" +
                "length=" + length +
                ", breadth=" + breadth +
                '}';
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        Rectangle rectangle = (Rectangle) o;
        return length == rectangle.length && breadth == rectangle.breadth;
    }

    @Override
    public int hashCode() {
        return Objects.hash(length, breadth);
    }
}
Though we could use IDEs like IntelliJ IDEA to generate all this code (constructors, getters, setters and override methods from Object), a class couldn't be tagged just as a data class. Without any help from the language, a developer would still need to read through all the code to realize that it is just a data class. Let's see how records can help.
What is a Record?
Records introduce a new type declaration in Java, which simplifies the task of modelling your data as data. It also cuts down significantly on the boilerplate code (however, this isn’t the primary reason for its introduction).
Let’s define the class Rectangle (from preceding section) as a Record:
record Rectangle(int length, int breadth) {}
With just one line of code, the preceding example defines a record with the components length and breadth. By default, the compiler generates a bunch of members for a record: final instance variables to store its state, a constructor, getters (no setters – the intent is for the data to be immutable), and overrides of the toString(), equals() and hashCode() methods from the Object class.
In IntelliJ IDEA 2020.1, ‘New Java Class’ dialog lets you choose your type as ‘Record (Preview Feature)’:
It is interesting to know what happens when you compile a record and what implicit members are added to it by the compilation process.
Implicit members added to a record
Defined as a final class, a record implicitly extends the java.lang.Record class from the core Java API. For each of the components (data variables) it defines, the compiler defines a final instance variable. Interestingly, the names of the getter methods are the same as the names of the data variables (and don't start with 'get'). Since a record is supposed to be immutable, no setter methods are defined.
The methods toString(), hashCode() and equals() are also generated automatically for records.
You can use the Java Class File Disassembler (javap.exe from the JDK, with the option -p) to show the variables and methods of a compiled class.
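Besides javap, you can also inspect the implicit members programmatically. Here's a small sketch of my own (not from the original article) that uses the reflection support added for records – Class.isRecord() and Class.getRecordComponents():

```java
import java.lang.reflect.RecordComponent;

public class InspectRecord {
    record Rectangle(int length, int breadth) {}

    public static void main(String[] args) {
        Class<Rectangle> c = Rectangle.class;
        // A record is a final class that implicitly extends java.lang.Record
        System.out.println("isRecord: " + c.isRecord());
        System.out.println("extends: " + c.getSuperclass().getName());
        // One RecordComponent per component declared in the record header
        for (RecordComponent rc : c.getRecordComponents()) {
            System.out.println("component: " + rc.getType() + " " + rc.getName());
        }
    }
}
```

Running it prints isRecord: true, the superclass java.lang.Record, and one line per declared component – the same picture javap gives you.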
You can also open a .class file in IntelliJ IDEA to view its decompiled version. Just click on its .class file in the ‘out’ folder:
How to use a record
You can instantiate a record like any other class by using the new operator, passing values to its constructor to define its state. Let's instantiate the record Rectangle defined in the previous section:
Rectangle smallRectangle = new Rectangle(2, 5);
Rectangle aBigRectangle = new Rectangle(100, 150);
Let’s call a couple of (implicit) methods on these references, which were added to it by the compilation process:
System.out.println("Let's see what toString() returns: " + smallRectangle);
System.out.println(smallRectangle.equals(aBigRectangle));
System.out.println("Let's getTheHashCode() :" + smallRectangle.hashCode());
System.out.println("Seriously? no getLength() method: " + smallRectangle.length());
Here’s the output of the preceding code:
Let's see what toString() returns: Rectangle[length=2, breadth=5]
false
Let's getTheHashCode() :67
Seriously? no getLength() method: 2
As evident in the preceding output, the toString() method includes the name of the record, followed by the names of its state variables and their values. Another noticeable point is that the getters for record properties don't include the word 'get'; they are the same as the property names. Also, there are no setter methods (since records are supposed to be immutable).
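The generated equals() and hashCode() are state-based, so two records with the same component values compare equal even though they are distinct objects. A quick sketch of my own, reusing the same Rectangle record, illustrates this:

```java
public class RecordEquality {
    record Rectangle(int length, int breadth) {}

    public static void main(String[] args) {
        Rectangle a = new Rectangle(2, 5);
        Rectangle b = new Rectangle(2, 5);
        System.out.println(a.equals(b));                  // true  - same state
        System.out.println(a == b);                       // false - different objects
        System.out.println(a.hashCode() == b.hashCode()); // true  - consistent with equals
    }
}
```

This is exactly the behavior you'd otherwise have to hand-write (and keep in sync) in a regular data class.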
Support of Java 14 features in IntelliJ IDEA
The latest version of IntelliJ IDEA – 2020.1 fully supports all the new language features of Java 14. You can download the Beta version now and check it out yourself.
Configure IntelliJ IDEA 2020.1 to use JDK 14 for Project SDK and choose the Project language level as ‘14 (Preview) – Records, patterns, text blocks’ for your Project and Modules settings.
The Ultimate version is free to use in its Early Access Program (EAP).
Modifying the default behavior of a Record
You can modify the default implementation of the methods that are generated by a compiler. However, you must not do that unless it is required.
The default implementation of the constructor generated by a compiler, initializes the state of a record. You can modify this default implementation to validate the argument values or to add business logic.
IntelliJ IDEA lets you insert a Compact, Canonical or Custom constructor, when you invoke action ‘Generate’, by using Alt + Insert on Windows or Linux system/ or ^ N on macOS:
A compact constructor doesn’t define a parameter list, not even parenthesis. Here’s an example:
package com.jetbrains; public record Rectangle { // no parameter list here! public Rectangle { if (length < 0 || breadth < 0) throw new IllegalArgumentException("Length & breadth must be > 0"); } }
Here’s an example of a canonical constructor to validate the parameter values passed to its constructor. This constructor also adds in some business logic – incrementing a static field in the constructor:
public record Rectangle(int length, int breadth) { public Rectangle { if (length < 0 || breadth < 0) { throw new IllegalArgumentException("-ve dimensions. Seriously?"); } totalRectanglesCreated++; } private static int totalRectanglesCreated; static int getTotalRectanglesCreated() { return totalRectanglesCreated; } }
As you can notice in the preceding example, a record can add static fields or instance and static methods, if required.
By inserting a custom constructor in IntelliJ IDEA, you can choose the parameters you want to explicitly accept in a constructor. The remaining state variable would be assigned a default value (for example null for reference variables, 0 for integer values).
You can’t add multiple constructors to a record like a regular class.
SEE ALSO : Java 14 – “Small changes can significantly improve the developer experience”
Defining a Generic record
You can define record with generics. Here’s an example of a record, say,
Parcel, which can store any object as its contents, and capture the parcel’s dimensions and weight:
public record Parcel<T>(T contents, double length, double breadth, double height, double weight) {} // You can instantiate this record as follows : class Table{ /* class code */ } public class Java14 { public static void main(String[] args) { Parcel <Table> parcel = new Parcel<>(new Table(), 200, 100, 55, 136.88); System.out.println(parcel); } }
Adding annotations to record components
You can add annotation to the components of a record, which are applicable to them. For example, you could define a record Car, applying @NotNull annotation to its component model:
package com.jetbrains; import org.jetbrains.annotations.NotNull; import java.time.LocalDateTime; public record Car(@NotNull String model, LocalDateTime purchased){}
If you pass a
null value as the first parameter while instantiating record
Car, you’ll get a warning.
SEE ALSO: Java 14 – “Regression and compatibility tests are essential”
Reading and Writing Records
Writing a record to a file and reading it using the Java File I/O API is simple. Let your record implement the Serializable interface and you are good to go. Here’s the modified version of the record Rectangle, which you can write to and read from a file:
package com.jetbrains; public record Rectangle(int length, int breadth) implements Serializable {} public class Java14 { public static void main(String[] args) { Rectangle smallRectangle = new Rectangle(2, 5); writeToFile(smallRectangle, "Java14-records"); System.out.println(readFromFile("Java14-records")); } static void writeToFile(Rectangle obj, String path) { try (ObjectOutputStream oos = new ObjectOutputStream( new FileOutputStream(path))){ oos.writeObject(obj); } catch (IOException e) { e.printStackTrace(); } } static Rectangle readFromFile(String path) { Rectangle result = null; try (ObjectInputStream ois = new ObjectInputStream(new FileInputStream(path))){ result = (Rectangle) ois.readObject(); } catch (ClassNotFoundException | IOException e) { e.printStackTrace(); } return result; } }
There are always two sides to a coin. With a lot of advantages to records, let’s see what is it that you can’t do with it.
Are there any restrictions on a record?
Since a record is a final class, you can’t define it as an abstract class. Also, it can’t extend another class (since it implicitly extends
java.lang.Record class). But there are no restrictions on it implementing interfaces.
Even though you can add static members to a record, you can’t add instance variables to it. This is because a record is supposed to limit the instance members to the components it defines in the record definition. Also, you can override the default implementation of the methods which are generated by a compiler for a record – like its constructor, methods
equals(),
hashCode(), and
toString.
A restricted identifier
‘
record’ is a restricted identifier (like ‘
var’) and isn’t a keyword (yet). So, the following code is valid:
int record = 10; void record() {}
However, you should refrain from using record as an identifier because it could be included as a keyword in a future Java version.
Preview Language Feature
Records have been released as a preview language feature in Java 14. With Java’s new release cadence of six-month, new language features are released as preview features, which are neither incomplete nor half-baked features. A preview language feature essentially means that even though this feature is ready to be used by developers, its finer details could change in a future Java release – depending on the feedback received on this feature by developers. Unlike an API, language features can’t be deprecated in future. So, if you have any feedback on text blocks, share it on the JDK mailing list (membership required).
If you are using command prompt, you must enable them during compilation and runtime. To compile class Rectangle, which defines a record, use the following command:
javac --enable-preview --release 14 Java14.java
The compiler would warn that you are using a preview language feature.
To execute a class, say, Java14, that uses records (or another preview language feature in Java 14), use the following command:
java --enable-preview Java14
Looking ahead
Work is in progress to derive deconstruction patterns for records. It should present interesting use cases with ‘Sealed Types’ and ‘Pattern Matching’. A future version of Java should also see addition of methods to the Object class to enables reflections to work with Records.
In the meantime, why not take a look at my screencast on Java 14 language features: | https://jaxenter.com/java-14-records-deep-dive-169879.html | CC-MAIN-2021-04 | en | refinedweb |
Collection of curried functional utils made entirely in TypeScript. Compatible with all modern JS environments:
UsageUsage
This package can be installed as a dependency or used directly.
Usage as ECMAScript moduleUsage as ECMAScript module
import { isObject } from "";
Or in HTML:
<script type="module" src=""></script>
Usage with local installationUsage with local installation
First:
npm i @vangware/utils
And then:
import { isObject } from "@vangware/utils";.
Test coverageTest coverage
Test coverage can be found HERE. | https://preview.npmjs.com/package/@vangware/utils | CC-MAIN-2021-04 | en | refinedweb |
Introduction :
Loading a gif from URL is easy in react native. React native provides one Image component that can load gifs from a URL. In this tutorial, I will show you how to load one gif from a remote URL. I will provide you the full working code at the end of this post.
For this example, I am using one gif from giphy: You can replace it with any other URL if you want.
For loading a gif, the Image component is used. import it from react-native :
import {Image} from 'react-native';
Store the URL in a constant and assign that value as a source to the Image :
const imageUrl = ''; <Image source={{uri: imageUrl}} style={styles.image} />
It looks as like below on iPhone :
Loading gif in android is different. Android can’t load gif automatically. For that you need to add one library. Open android/app/build.gradle file and add the below line inside dependencies block :
implementation 'com.facebook.fresco:animated-gif:2.0.0'
For android version before Ice cream sandwich(API 14), use it :
implementation 'com.facebook.fresco:animated-base-support:1.3.0'
Most app doesn’t support before API 14, so only the first line will work.
You can check the official guide to find out the latest version of this lib.
Rerun the app and here is the result :
| https://www.codevscolor.com/react-native-load-gif-url | CC-MAIN-2021-04 | en | refinedweb |
Introduction :
React native provides one component to add pull to refresh functionality called RefreshControl. You can use it with a ScrollView or ListView. We need to manually change the state of this component. It calls one function when the refresh starts and we need to manually set it as true to make it appear. Next, once everything i.e. refresh is done, we need to set it as false to make it stop.
It provides a couple of props that I am listing down first before showing you how it works :
Props of RefreshControl :
1. refreshing :
This is a boolean value to indicate if the refresh is active or not.
2. onRefresh :
This is a function that calls when the view starts refreshing. Once it is called, we need to mark refreshing as true to show the component.
3. colors :
It is an array of colors that will be used for the refresh indicator. It is available only on Android.
4. enabled :
Only for Android. It is a boolean value to define whether this functionality is enabled or not.
5. progressBackgroundColor :
Only for Android. The background color of the progress indicator.
6. progressViewOffset :
Top offset of the progress view.
7. size :
It is an enum value. It can take any one of the RefreshLayoutConsts.SIZE.DEFAULT, RefreshLayoutConsts.SIZE.LARGE. It is available only for android.
8. tintColor :
Color of the refresh indicator. Available for iOS.
9. title and titleColor :
Available only on iOS. These are used for title and color of the title respectively.
Example :
I am using the default react native project where App.js holds everything :
import React, {useState, useCallback} from 'react'; import { SafeAreaView, StatusBar, StyleSheet, RefreshControl, ScrollView, Text, } from 'react-native'; function delay(timeout) { return new Promise(resolve => { setTimeout(resolve, timeout); }); } const App = () => { const [loading, setLoading] = useState(false); const loadMore = useCallback(async () => { setLoading(true); delay(1500).then(() => setLoading(false)); }, [loading]); return ( <> <StatusBar barStyle="dark-content" /> <SafeAreaView style={styles.container}> <ScrollView contentContainerStyle={styles.scrollView} refreshControl={ <RefreshControl progressBackgroundColor="red" tintColor="red" refreshing={loading} onRefresh={loadMore} /> }> <Text>Swipe down to refresh</Text> </ScrollView> </SafeAreaView> </> ); }; const styles = StyleSheet.create({ container: { flex: 1, }, scrollView: { flex: 1, alignItems: 'center', justifyContent: 'center', }, }); export default App;
If you run it, it will produce one result like below :
We are using one delay here. We can load the data during this time like making an API call.
The props work differently on both android and iOS. Make sure to test it on both platforms | https://www.codevscolor.com/react-native-refreshcontrol-example | CC-MAIN-2021-04 | en | refinedweb |
asinh, asinhf, asinhl - inverse hyperbolic sine function
#include <math.h> double asinh(double x); float asinhf(float x); long double asinhl(long double x); Link with -lm. Feature Test Macro Requirements for glibc (see feature_test_macros(7)): asinh(): _BSD_SOURCE || _SVID_SOURCE || _XOPEN_SOURCE >= 500 || _ISOC99_SOURCE; or cc -std=c99 asinhf(), asinhl(): _BSD_SOURCE || _SVID_SOURCE || _XOPEN_SOURCE >= 600 || _ISOC99_SOURCE; (-0), +0 (-0) is returned. If x is positive infinity (negative infinity), positive infinity (negative infinity) is returned.
No errors occur..24 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at. | http://huge-man-linux.net/man3/asinhf.html | CC-MAIN-2021-04 | en | refinedweb |
Python.
Example: Repeated Measures ANOVA in Python.
Use the following steps to perform the repeated measures ANOVA in Python.
Step 1: Enter the data.
First, we’ll create a pandas DataFrame to hold our data:
import numpy as np import pandas as pd #create data df = pd.DataFrame({'patient': np.repeat([1, 2, 3, 4, 5], 4), 'drug': np.tile([1, 2, 3, 4], 5), 'response': [30, 28, 16, 34, 14, 18, 10, 22, 24, 20, 18, 30, 38, 34, 20, 44, 26, 28, 14, 30]}) #view first ten rows of data df.head[:10] patient drug response 0 1 1 30 1 1 2 28 2 1 3 16 3 1 4 34 4 2 1 14 5 2 2 18 6 2 3 10 7 2 4 22 8 3 1 24 9 3 2 20
Step 2: Perform the repeated measures ANOVA.
Next, we will perform the repeated measures ANOVA using the AnovaRM() function from the statsmodels library:
from statsmodels.stats.anova import AnovaRM #perform the repeated measures ANOVA print(AnovaRM(data=df, depvar='response', subject='patient', within=['drug']).fit()) Anova ================================== F Value Num DF Den DF Pr > F ---------------------------------- drug 24.7589 3.0000 12.0000 0.0000 ==================================
Step 3: Interpret the results.
A repeated measures ANOVA uses the following null and alternative hypotheses:
The null hypothesis (H0): µ1 = µ2 = µ3 (the population means are all equal)
The alternative hypothesis: (Ha): at least one population mean is different from the rest
In this example, the F test-statistic is 24.7589 and the corresponding p-value is 0.0000. Since this p-value is less than 0.05, we reject the null hypothesis and conclude that there is a statistically significant difference in mean response times between the four drugs..75887, p < 0.001). | https://www.statology.org/repeated-measures-anova-python/ | CC-MAIN-2021-04 | en | refinedweb |
the object, people will be able to use your service in their project. Let’s see how we can do it!
Docker in a nutshell
For this part I’m assuming we have basic knowledge of Docker. For those unfamiliar with this, Docker is a tool to build isolated environments (containers) in your computer in such a way that it doesn’t get into conflict with any file or program in your local filesystem (the host). Among all its advantages, I would highlight these:
- Unlike with virtual machines, you can run containers with only what’s strictly necessary to run a single component of your project. This helps you generate containers as light as you want.
- Docker networking capabilities allows you easily communicate multiple containers to each other.
- Even if your OS is not fully compatible with the tool you want to use, with containers you don’t run into compatibility issues anymore.
- Docker containers will run in the same way regardless of the hosting environment, be in your computer or a server running in a cloud service.
Whenever I step into learning something new I recommend go for a tutorial or quick start in the documentation itself. Tensorflow Serving has a short one in its Github repo:
# Download the TensorFlow Serving Docker image and repo docker pull tensorflow/serving git clone # Location of demo models TESTDATA="$(pwd)/serving/tensorflow_serving/servables/tensorflow/testdata" # Start TensorFlow Serving container and open the REST API port docker run -t --rm -p 8501:8501 \ -v "$TESTDATA/saved_model_half_plus_two_cpu:/models/half_plus_two" \ -e MODEL_NAME=half_plus_two \ tensorflow/serving & # Query the model using the predict API curl -d '{"instances": [1.0, 2.0, 5.0]}' \ -X POST # Returns => { "predictions": [2.5, 3.0, 4.5] }
Pay attention to the arguments passed to the
docker run command, specifically the ones accepting external values:
-p 8501:8501, publishes the container’s port specified at the right of the colon, and is mapped to the same port in the host, specified at the left of the colon. For REST API, Tensorflow Serving makes use of this port, so don’t change this parameter in your experiments.
-v "$TESTDATA/saved_model_half_plus_two_cpu:/models/half_plus_two", attaches a volume to the container. This volume contains a copy of the folder where you saved your Tensorflow object. Just a level above the folder named
/1/. This folder will appear in the container, under
/models/.
-e MODEL_NAME=half_plus_two, defines an environment variable. This variable is required to serve your model. For convenience, use the same identifier as the container’s folder name where you attached your model.
Deploying servables in containers
You can design an API for your servable, but Tensorflow Serving abstracts away this step thanks to Docker. Once you deploy the container, you can make a request to the server where your container is located to perform some kind of computation. Within the body of the request you may attach some input (required to run the servable) and obtain some output in return.
In order to make the computation you need to specify the endpoint URL of the servable in your request. In the example shown above this endpoint URL is. Now everything is ready to run our Tensorflow objects. We will start with the Keras model:
docker run -t --rm -p 8501:8501 -v "$(pwd)/mobilenet_v2_test:/models/mobilenet_v2_test" -e MODEL_NAME=mobilenet_v2_test tensorflow/serving &
When this command was executed, the current directory was
tmp/. This is the folder where I’ve been saving all my models. This is what the terminal returns:
The model is up and ready to be used.
Make requests to servables
With
curl library
Now that the container is up and running we can make requests by sending an image to be recognized. I’ll show you two ways to achieve that. First I made a little shell script (download it from here) that receives the path of an image file as an argument and makes the call itself with the library
curl. Here I show you how we make the request and what is the image that the model is trying to classify.
And this is how we make the call with the API we built.
The second example involves the servable that adds 2 to every element of the vector. This is how the call is made after the container is up.
With
requests library
The library
requests allows you doing the same thing but using Python code
import json import requests import base64 data = {} with open('../../Downloads/imagenes-osos-panda.jpg', mode='rb') as file: img = file.read() data = {"inputs":[{"b64":base64.encodebytes(img).decode("utf-8")}]} # Making the request r = requests.post("", data=json.dumps(data)) r.content # And returns: # b'{\n "outputs": [\n "giant panda"\n ]\n}'
In this piece of code, you will notice there are some rules you have to follow when defining the JSON that is sent in your request, such as the naming of the keys and the presence of nested structures. This is explained more extensively in the Tensorflow documentation. Regarding images, they are binarized using the Base64 encoding before being sent to the servable.
And this covers everything I wanted to explain with Tensorflow Serving (for now). I hope this tutorial will spark your motivation for building machine learning services. This is only the tip of the iceberg. Good luck!
3 thoughts on “Building a REST API with Tensorflow Serving (part 2)” | https://thelongrun.blog/2020/01/26/rest-api-tensorflow-serving-pt2/ | CC-MAIN-2021-04 | en | refinedweb |
#include <LOCA_MultiContinuation_ExtendedGroup.H>
Inheritance diagram for LOCA::MultiContinuation::ExtendedGroup:..
Implements. | http://trilinos.sandia.gov/packages/docs/r10.0/packages/nox/doc/html/classLOCA_1_1MultiContinuation_1_1ExtendedGroup.html | crawl-003 | en | refinedweb |
#include <LOCA_MultiContinuation_ConstrainedGroup.H>
Inheritance diagram for LOCA::MultiContinuation::ConstrainedGroup:
This class represents a constrained system of nonlinear equations:
where
is the solution vector,
is a set of constraint parameters,
is represented by some LOCA::MultiContinuation::AbstractGroup, and
is a constraint represented by a LOCA::MultiContinuation::ConstraintInterface object. Newton steps for this system are computed via some LOCA::BorderedSolver::AbstractStrategy which is specified via the
constraintParams argument to the constructor.. | http://trilinos.sandia.gov/packages/docs/r10.0/packages/nox/doc/html/classLOCA_1_1MultiContinuation_1_1ConstrainedGroup.html | crawl-003 | en | refinedweb |
Martin Fowler writes:
...
The creation of a unit of work instance is a complex process and as such is a
good candidate for a factory.
Since a UoW (Unit of Work) is basically a wrapper around an NHibernate session
object, I'll need to open such a session whenever I start a new UoW. But to be
able to get such a session, NHibernate has to be configured and an NHibernate
session factory has to be available. The interface of my UoW factory is defined
as follows
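A sketch of this interface, reconstructed from the members discussed below (the exact signatures are assumptions based on how the factory is used later in this post):

```csharp
public interface IUnitOfWorkFactory
{
    // NHibernate infrastructure, exposed mostly for advanced scenarios
    Configuration Configuration { get; }
    ISessionFactory SessionFactory { get; }
    ISession CurrentSession { get; set; }

    // the two factory methods
    IUnitOfWork Create();
    void DisposeUnitOfWork(UnitOfWorkImplementor adapter);
}
```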
I basically have two public methods on my factory: one for the creation of
a UoW and one for the disposal of a specific UoW instance. The disposal
also involves some work and thus justifies the existence of this second factory
method. The three properties Configuration,
SessionFactory and CurrentSession are there
mostly for advanced scenarios (and as such could theoretically be omitted in a
first, simple implementation).
Let's now write the test for the Create method of the UoW factory
implementation. First I prepare a new test fixture class:
using System;
using NHibernate;
using NUnit.Framework;

namespace NHibernateUnitOfWork.Tests
{
    [TestFixture]
    public class UnitOfWorkFactory_Fixture
    {
        private IUnitOfWorkFactory _factory;

        [SetUp]
        public void SetupContext()
        {
            _factory = (IUnitOfWorkFactory) Activator.CreateInstance(typeof (UnitOfWorkFactory), true);
        }
    }
}
Why the heck do I need the Activator.CreateInstance to get
an instance of the factory? Well I decided that the construction of a new
factory instance should be internal to the assembly implementing the Unit of
Work pattern. Thus the (default) constructor of the
UnitOfWorkFactory class has a modifier
internal and as a consequence cannot be used from code outside
the assembly in which the factory is implemented. But I need an instance when
testing and thus have to resort to the technique used above.
Now the question is: how do I confirm that the Create method of the factory
works as expected? I have chosen the following expectations: Create returns a
non-null IUnitOfWork instance, the factory's CurrentSession property then
yields a session, and that session's flush mode is set to Commit.
Now the code
[Test]
public void Can_create_unit_of_work()
{
    IUnitOfWork implementor = _factory.Create();

    Assert.IsNotNull(implementor);
    Assert.IsNotNull(_factory.CurrentSession);
    Assert.AreEqual(FlushMode.Commit, _factory.CurrentSession.FlushMode);
}
public class UnitOfWorkFactory : IUnitOfWorkFactory
{
    private static ISession _currentSession;
    private ISessionFactory _sessionFactory;

    internal UnitOfWorkFactory()
    {
    }

    public IUnitOfWork Create()
    {
        ISession session = CreateSession();
        session.FlushMode = FlushMode.Commit;
        _currentSession = session;
        return new UnitOfWorkImplementor(this, session);
    }
}
to commit and assign the session to an instance variable for further use.
Finally I return a new instance of the UnitOfWorkImplementor
(which implements the interface IUnitOfWork). The constructor of the
UnitOfWorkImplementor class requires two parameters, the
session and a reference to this factory.
To be able to compile I also have to define the
UnitOfWorkImplementor class. Of this class I just implement the
minimum needed so far
public class UnitOfWorkImplementor : IUnitOfWork
{
    private readonly IUnitOfWorkFactory _factory;
    private readonly ISession _session;

    public UnitOfWorkImplementor(IUnitOfWorkFactory factory, ISession session)
    {
        _factory = factory;
        _session = session;
    }
}
Now back to the implementation of the CreateSession method.
To be able to create (and open) a session NHibernate must have been configured
previously. So I solve this problem first.
First we add a file hibernate.cfg.xml to our test project
and put the following content into it (don't forget to set the property 'Copy to
Output Directory' of this file to 'Copy always').
<?xml version="1.0" encoding="utf-8" ?>
<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
  <session-factory>
    <property name="connection.provider">NHibernate.Connection.DriverConnectionProvider</property>
    <property name="dialect">NHibernate.Dialect.MsSql2005Dialect</property>
    <property name="connection.driver_class">NHibernate.Driver.SqlClientDriver</property>
    <property name="connection.connection_string">Server=(local);Database=Test;Integrated Security=SSPI;</property>
    <property name="show_sql">true</property>
  </session-factory>
</hibernate-configuration>
The above configuration file assumes that you have SQL Server 2005
installed on your local machine and that you are using
integrated security to authenticate and authorize the access to
the database. It further assumes that there exists a database called
'Test' on this server. To configure NHibernate for other types
of databases please consult the online documentation here.
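As an illustration, a similar configuration targeting SQLite would swap out the dialect, driver, and connection string (the property values follow the NHibernate documentation of that era; the file name test.db is just an example):

```xml
<?xml version="1.0" encoding="utf-8" ?>
<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
  <session-factory>
    <property name="connection.provider">NHibernate.Connection.DriverConnectionProvider</property>
    <property name="dialect">NHibernate.Dialect.SQLiteDialect</property>
    <property name="connection.driver_class">NHibernate.Driver.SQLite20Driver</property>
    <property name="connection.connection_string">Data Source=test.db;Version=3;New=True;</property>
    <property name="show_sql">true</property>
  </session-factory>
</hibernate-configuration>
```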
Now we define this test method
[Test]
public void Can_configure_NHibernate()
{
    var configuration = _factory.Configuration;

    Assert.IsNotNull(configuration);
    Assert.AreEqual("NHibernate.Connection.DriverConnectionProvider",
                    configuration.Properties["connection.provider"]);
    Assert.AreEqual("NHibernate.Dialect.MsSql2005Dialect",
                    configuration.Properties["dialect"]);
    Assert.AreEqual("NHibernate.Driver.SqlClientDriver",
                    configuration.Properties["connection.driver_class"]);
    Assert.AreEqual("Server=(local);Database=Test;Integrated Security=SSPI;",
                    configuration.Properties["connection.connection_string"]);
}
Here we test whether we can successfully create a configuration and whether
this configuration contains the attributes we have defined via the NHibernate
configuration file.
To fulfill this test we can e.g. implement the following code
// class-level state used by this property
private const string Default_HibernateConfig = "hibernate.cfg.xml";
private Configuration _configuration;

public Configuration Configuration
{
    get
    {
        if (_configuration == null)
        {
            _configuration = new Configuration();
            string hibernateConfig = Default_HibernateConfig;
            // if not rooted, assume path from base directory
            if (Path.IsPathRooted(hibernateConfig) == false)
                hibernateConfig = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, hibernateConfig);
            if (File.Exists(hibernateConfig))
                _configuration.Configure(new XmlTextReader(hibernateConfig));
        }
        return _configuration;
    }
}
Note that I have defined a class variable _configuration
so that the configuration logic executes only once during the
lifetime of the application. Note further that in this implementation I assume
that the configuration of NHibernate is defined via the
hibernate.cfg.xml file, which must be in the same directory as
the test assembly.
As soon as I have a valid configuration I can create an NHibernate session
factory. Note that this is a rather expensive operation (it takes
some time depending on the number of entities you are mapping) and thus should
only be executed once during the lifetime of the application. Access to the
NHibernate SessionFactory is thread-safe!
[Test]
public void Can_create_and_access_session_factory()
{
    var sessionFactory = _factory.SessionFactory;

    Assert.IsNotNull(sessionFactory);
    Assert.AreEqual("NHibernate.Dialect.MsSql2005Dialect", sessionFactory.Dialect.ToString());
}
This is a rather basic test and only checks whether I can successfully create
a session factory and whether at least one of its properties is configured as I
expect it, namely the dialect used.
The implementation to fulfill the test is rather simple
public ISessionFactory SessionFactory
if (_sessionFactory == null)
_sessionFactory = Configuration.BuildSessionFactory();
return _sessionFactory;
I have defined a class variable _sessionFactory such that I only execute the
BuildSessionFactory method once during the lifetime of the application. The
session factory is created on demand, that is, when first needed. At the same
time this property getter accesses the previously defined Configuration
property, which in turn triggers the configuration of NHibernate (if not
already done).
The last missing piece is the implementation of the CreateSession method,
which is now trivial:
private ISession CreateSession()
{
    return SessionFactory.OpenSession();
}
I
want to be able to access the current (open) session via my factory. If there is
no such session open at the moment a meaningful exception should be raised. To
test this write
[Test]
public void Accessing_CurrentSession_when_no_session_open_throws()
{
    try
    {
        var session = _factory.CurrentSession;
        Assert.Fail("An InvalidOperationException was expected.");
    }
    catch (InvalidOperationException)
    {
    }
}
and the implementation is easy
public ISession CurrentSession
{
    get
    {
        if (_currentSession == null)
            throw new InvalidOperationException("You are not in a unit of work.");
        return _currentSession;
    }
    set { _currentSession = value; }
}
The last method to implement is the UoW disposal method. In this method I
want to reset CurrentSession to null and then forward the call to our static
UnitOfWork class (defined in the first
part of this post series) so that this class can also do its internal
clean-up. The implementation might look as follows
public void DisposeUnitOfWork(UnitOfWorkImplementor adapter)
{
    CurrentSession = null;
    UnitOfWork.DisposeUnitOfWork(adapter);
}
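For readers who have not seen part 1: the static UnitOfWork class referenced here looks roughly like the following sketch (the field names and wiring are assumptions; part 1 contains the actual implementation):

```csharp
public static class UnitOfWork
{
    private static readonly IUnitOfWorkFactory _factory = new UnitOfWorkFactory();
    private static IUnitOfWork _current;

    public static IUnitOfWork Start()
    {
        // delegate the actual construction to the factory
        _current = _factory.Create();
        return _current;
    }

    internal static void DisposeUnitOfWork(UnitOfWorkImplementor adapter)
    {
        // internal clean-up: forget the current unit of work
        _current = null;
    }
}
```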
That's all we have to do for now regarding the UoW factory. Again we have
implemented it in a TDD way. When we later refactor our implementation to make
it thread-safe, this TDD approach will be a huge benefit since we have
complete test coverage of our code.
The UnitOfWorkImplementor class defines an actual Unit of Work instance. It
implements the already mentioned (and defined) interface IUnitOfWork, which in
turn inherits from IDisposable. This allows us to work with a UoW using the
following syntax
using (UnitOfWork.Start())
{
    // use Unit of Work
}
at the end of the block the Dispose method of the UoW will be called. So
let's first think of a test for the implementation of this method. Here we have
to use mocking to be able to test our SUT (system under test) in isolation. So
let me set up a test fixture first.
[TestFixture]
public class UnitOfWorkImplementor_Fixture
{
    private MockRepository _mocks;
    private IUnitOfWorkFactory _factory;
    private ISession _session;
    private IUnitOfWorkImplementor _uow;

    [SetUp]
    public void SetupContext()
    {
        _mocks = new MockRepository();
        _factory = _mocks.DynamicMock<IUnitOfWorkFactory>();
        _session = _mocks.DynamicMock<ISession>();
    }
}
Again I use Rhino.Mocks as my
mocking framework. My UoW instance has two external dependencies namely the
UnitOfWork factory and the NHibernate session. In the SetupContext I generate a
dynamic mock for each of them.
Now I can write my test method
[Test]
public void Can_Dispose_UnitOfWorkImplementor()
{
    using (_mocks.Record())
    {
        Expect.Call(() => _factory.DisposeUnitOfWork(null)).IgnoreArguments();
        Expect.Call(_session.Dispose);
    }
    using (_mocks.Playback())
    {
        _uow = new UnitOfWorkImplementor(_factory, _session);
        _uow.Dispose();
    }
}
During the recording phase I define my expectations (of what should happen
when calling the Dispose method of the UoW). First the
DisposeUnitOfWork method of the factory and second the
Dispose method of the session object should be called.
Here is the code which fulfills this test
public void Dispose()
{
    _factory.DisposeUnitOfWork(this);
    _session.Dispose();
}
As you can see I just forward the call to the Unit of Work factory and
dispose my internal (NHibernate) session instance as formulated in the
expectations of the test.
When working with a UoW I also have to be able to use transactions. Thus I
need a way to "play" with transactions. Let's call the corresponding methods
BeginTransaction and TransactionalFlush.
To shield the client code from all NHibernate specifics and to provide a
simple(r) interface I define a GenericTransaction class (which
implements the interface IGenericTransaction) which is just a
wrapper around the NHibernate transaction.
I think that the implementation is trivial and doesn't need any further
explanation
public interface IGenericTransaction : IDisposable
{
    void Commit();
    void Rollback();
}

public class GenericTransaction : IGenericTransaction
{
    private readonly ITransaction _transaction;

    public GenericTransaction(ITransaction transaction)
    {
        _transaction = transaction;
    }

    public void Commit()
    {
        _transaction.Commit();
    }

    public void Rollback()
    {
        _transaction.Rollback();
    }

    public void Dispose()
    {
        _transaction.Dispose();
    }
}
Just note that IGenericTransaction inherits from IDisposable.
With this definition of a generic transaction we can proceed to the
implementation of the BeginTransaction and
TransactionalFlush methods.
First I'll implement the BeginTransaction method. As usual I define the test
first
[Test]
public void Can_BeginTransaction()
{
    using (_mocks.Record())
    {
        Expect.Call(_session.BeginTransaction()).Return(null);
    }
    using (_mocks.Playback())
    {
        _uow = new UnitOfWorkImplementor(_factory, _session);
        var transaction = _uow.BeginTransaction();
        Assert.IsNotNull(transaction);
    }
}
I expect that during execution of the method under test the BeginTransaction
method of the NHibernate session is called. After calling
BeginTransaction on the UoW (in the Playback phase) I
additionally test whether I get a not-null transaction.
I also want to be able to start a transaction and define an isolation
level for the transaction. The corresponding test might look like
[Test]
public void Can_BeginTransaction_specifying_isolation_level()
{
    var isolationLevel = IsolationLevel.Serializable;
    using (_mocks.Record())
    {
        Expect.Call(_session.BeginTransaction(isolationLevel)).Return(null);
    }
    using (_mocks.Playback())
    {
        _uow = new UnitOfWorkImplementor(_factory, _session);
        var transaction = _uow.BeginTransaction(isolationLevel);
        Assert.IsNotNull(transaction);
    }
}
To fulfill the test(s) I just have to create a new instance of a generic
transaction and pass the result of the _session.BeginTransaction call
to the constructor (that is: a NHibernate ITransaction object)
public IGenericTransaction BeginTransaction()
{
    return new GenericTransaction(_session.BeginTransaction());
}

public IGenericTransaction BeginTransaction(IsolationLevel isolationLevel)
{
    return new GenericTransaction(_session.BeginTransaction(isolationLevel));
}
The methods each return me an instance of type GenericTransaction which I can
use in my code and call their respective commit or rollback methods if
appropriate.
The TransactionalFlush method should flush the content of
the NHibernate session to the database wrapped inside a transaction. Let's
define the test(s) for it
[Test]
public void Can_execute_TransactionalFlush()
{
    var tx = _mocks.CreateMock<ITransaction>();
    var session = _mocks.DynamicMock<ISession>();
    SetupResult.For(session.BeginTransaction(IsolationLevel.ReadCommitted)).Return(tx);
    using (_mocks.Record())
    {
        Expect.Call(tx.Commit);
        Expect.Call(tx.Dispose);
    }
    using (_mocks.Playback())
    {
        _uow = new UnitOfWorkImplementor(_factory, session);
        _uow.TransactionalFlush();
    }
}

[Test]
public void Can_execute_TransactionalFlush_specifying_isolation_level()
{
    var tx = _mocks.CreateMock<ITransaction>();
    var session = _mocks.DynamicMock<ISession>();
    SetupResult.For(session.BeginTransaction(IsolationLevel.Serializable)).Return(tx);
    _uow = _mocks.PartialMock<UnitOfWorkImplementor>(_factory, session);
    using (_mocks.Record())
    {
        Expect.Call(tx.Commit);
        Expect.Call(tx.Dispose);
    }
    using (_mocks.Playback())
    {
        _uow.TransactionalFlush(IsolationLevel.Serializable);
    }
}
and the implementation is straightforward
public void TransactionalFlush()
{
    TransactionalFlush(IsolationLevel.ReadCommitted);
}

public void TransactionalFlush(IsolationLevel isolationLevel)
{
    IGenericTransaction tx = BeginTransaction(isolationLevel);
    try
    {
        // forces a flush of the current unit of work
        tx.Commit();
    }
    catch
    {
        tx.Rollback();
        throw;
    }
    finally
    {
        tx.Dispose();
    }
}
Note that when you don't provide an explicit transaction isolation level, our
implementation automatically assumes an isolation level of "read
committed". Note also that if the commit of the transaction fails, the
transaction is rolled back.
Important: In the case of an exception during
TransactionalFlush do not try to reuse the current unit of work
since the session is in an inconsistent state and must be
closed. Thus abandon this UoW and start a new one.
To increase the usability of the UoW instance a little bit I add the
following members (which need no further explanation)
public bool IsInActiveTransaction
{
    get { return _session.Transaction.IsActive; }
}

public IUnitOfWorkFactory Factory
{
    get { return _factory; }
}

public ISession Session
{
    get { return _session; }
}

public void Flush()
{
    _session.Flush();
}
That's it! We have now a simple implementation of the Unit of Work pattern
for NHibernate. Let me provide an overall picture of our implementation
Please remember: In this current version the UoW is strictly
NOT thread-safe! But I'll show you how to make it thread-safe in the next post.
In the following section I'll quickly show the typical usage of the UoW. As
usual I do this via TDD. I want to demonstrate how one can define a new instance
of an entity and add it to the database. In this case the entity is a simple
Person type object.
Let us first set up a test fixture as follows
using System.Reflection;
using NHibernate.Tool.hbm2ddl;

[TestFixture]
public class Test_usage_of_UnitOfWork
{
    [SetUp]
    public void SetupContext()
    {
        UnitOfWork.Configuration.AddAssembly(Assembly.GetExecutingAssembly());
        new SchemaExport(UnitOfWork.Configuration).Execute(false, true, false, false);
    }

    // ... test methods follow
}
In the SetupContext method I first add the current assembly
(the one in which this test is running) as a source of the schema mapping
information to the configuration of NHibernate (which I can access via our
UnitOfWork class). It is important that this call is done prior
to the creation of the SessionFactory (that is prior to using a
session).
After extending the configuration I use the SchemaExport
class of NHibernate to automatically generate the database schema for me.
But wait: up to now I have not defined any schema at all! So let's do it now.
Let's define a simple Person entity. Add a new file
Person.cs to the test project and add the following code
public class Person
{
    public virtual Guid Id { get; set; }
    public virtual string Name { get; set; }
    public virtual DateTime Birthdate { get; set; }
}
Now add an XML mapping document called Person.hbm.xml to the
test project and define the following content
<?xml version="1.0" encoding="utf-8" ?>
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
  <class name="Person">
    <id name="Id">
      <generator class="guid"/>
    </id>
    <property name="Name" not-null="true"/>
    <property name="Birthdate" not-null="true"/>
  </class>
</hibernate-mapping>
Don't forget to set the 'Build Action' property of this item to
'Embedded Resource'!
Now we can write a test to insert a new person instance into the
database.
[Test]
public void Can_add_a_new_instance_of_an_entity_to_the_database()
{
    using (UnitOfWork.Start())
    {
        var person = new Person { Name = "John Doe", Birthdate = new DateTime(1915, 12, 15) };
        UnitOfWork.CurrentSession.Save(person);
        UnitOfWork.Current.TransactionalFlush();
    }
}
When running this test the output generated by NHibernate looks similar to
this
NHibernate: INSERT INTO Person (Name, Birthdate, Id) VALUES (@p0, @p1, @p2);
@p0 = 'John Doe', @p1 = '15.12.1915 00:00:00',
@p2 = 'bb72ae4c-7d64-4c16-84dc-c0ba2c7d306b'
I have shown you a possible implementation of the Unit of
Work pattern for NHibernate. The implementation has been developed by
using TDD.
The usage of a Unit of Work pattern simplifies the data manipulation tasks and
hides the infrastructure details from the consumer of the UoW. NHibernate offers
its own implementation of the UoW in the form of the Session object. But
NHibernate is just one of several possible ORM tools. With this implementation
of the UoW pattern we have hidden most of the specifics of NHibernate and provide
a generic interface to the consumer of the UoW.
There are just two open points: the current implementation is not thread-safe,
and the consumer of the UoW still has to work directly with the NHibernate
session for querying and persisting entities.
The former I'll cover in the third part of this article, and the latter problem can be solved by
introducing a (generic) repository.
In the previous two chapters. | http://nhforge.org/wikis/patternsandpractices/nhibernate-and-the-unit-of-work-pattern/revision/2.aspx | crawl-003 | en | refinedweb |
I have a buddy who sculpted a mesh in ZBrush.
We want to get the mesh into Mudbox because he's been experiencing vertex reordering in ZBrush, which is a nightmare.
I told him everything would be much easier to do in Mudbox.
The problem right now is: when we export the high-res sculpt and the low-res from ZBrush,
then reimport the low-res into Mudbox and import the high-res as a layer,
everything shoots off into space, except for the low level in Mudbox.
Can anyone help us get this model into Mudbox with all the subdivision levels?
Thank you.
Import the low-res with clean UVs,
import the high-res,
bake a 32-bit floating-point displacement map between them,
and displace the low-res with the "sculpt model using displacement" feature.
You can also create some AUV or GUV in Zbrush
Export high and low res meshes with those UVs
Import the low res into Mudbox
Subdivide that mesh to the same level as the high Res in Zbrush.
Then do an Import UV (select the high res from ZBrush) command in Mudbox and select "Position" in the import options. Use a very small tolerance; how low will depend on how the UVs came out, but something like 0.0001 perhaps.
This has worked for me.
Thanks guys, we actually got this into Mudbox
and it's working great,
except that we are having a problem going back to Maya.
We imported a blend shape from Maya into Mudbox using Import as Layer by UV position,
with a tolerance of .0001, and it's working nearly perfectly.
But now when we export back to Maya to use the blend shape, it won't work.
Could it be possible that Mudbox is changing the vertex order?
We checked a couple of vertex numbers and they seem to match,
so I don't know what would be causing this.
Thanks.
What options in Maya's OBJ import did you use? There are a few. Only one way is correct.
BTW with Mudbox 2010 this issue will go away because of the .fbx format support.
which is the one way that is correct?
It seems that the vertex order changed.
In Maya, check at import that "create multiple objects" is unchecked in the OBJ import options,
or use Mesh > Transfer Attributes with UV space sampling to transfer the new shape onto the original one.
Thanks Rimmason, I believe we have that unchecked like you said.
We'll have to try the other route you suggested.
The QtPieMenu class provides a pie menu popup widget. More...
#include <qtpiemenu.h>
List of all member functions.
The QtPieMenu class provides a pie menu popup widget.
A pie menu is a popup menu that is usually invoked as a context menu, and that supports several forms of navigation.
Using conventional navigation, menu items can be highlighted simply by moving the mouse over them, by single-clicking them, or by using the keyboard's arrow keys. Menu items can be chosen (invoking a submenu or the menu item's action) by clicking a highlighted menu item, or by pressing Enter.
Pie menus can also be navigated using gestures. Gesture navigation is where a menu item is chosen as the result of moving the mouse in a particular sequence of movements, for example, up-left-down, without the user having to actually read the menu item text.
The user can cancel a pie menu in the conventional way by clicking outside of the pie menu, by clicking the center of the pie (the "cancel zone"), or by pressing Esc.
Pie menus are faster to navigate with the mouse than conventional menus because all the menu items are available at an equal distance from the origin of the mouse pointer.
The circular layout and the length of the longest menu text imposes a natural limit on how many items can be displayed at a time. For this reason, submenus are very commonly used.
Use popup() to pop up the pie menu. Note that QtPieMenu is different from QPopupMenu in that it pops up with the mouse cursor in the center of the menu (i.e. over the cancel zone), instead of having the cursor at the top left corner.
Pie menus can only be used as popup menus; they cannot be used as normal widgets.
A pie menu contains of a list of items, where each item is displayed as a slice of a pie. Items are added with insertItem(), and are displayed with a richtext text label, an icon or with both. The items are laid out automatically, starting from the top at index 0 and going counter clockwise. Each item can either pop up a submenu, or perform an action when activated. To get an item at a particular position, use itemAt().
When inserting action items, you specify a receiver and a slot. The slot is invoked in the receiver when the action is activated. In the following example, a pie menu is created with three items: "Open" and "Close" are actions, and "New" pops up a submenu. The submenu has two items, "New project" and "New dialog", which are both actions.
Editor::Editor(QWidget *parent, const char *name)
    : QTextEdit(parent, name)
{
    // Create a root menu and insert two action items.
    QtPieMenu *rootMenu = new QtPieMenu(tr("Root menu"), this);
    rootMenu->insertItem(tr("Open"), storage, SLOT(open()));
    rootMenu->insertItem(tr("<i>Close</i>"), storage, SLOT(close()));

    // Now create a submenu and insert two action items.
    QtPieMenu *subMenu = new QtPieMenu(tr("New"), rootMenu);
    subMenu->insertItem(tr("New project"), formEditor, SLOT(newProject()));
    subMenu->insertItem(tr("New dialog"), formEditor, SLOT(newDialog()));

    // Finally add the submenu to the root menu.
    rootMenu->insertItem(subMenu);
}
By default, each slice of the pie takes up an equal amount of space. Sometimes it can be useful to have certain slices take up more space than others. QtPieItem::setItemWeight() changes an item's space weighting, which by default is 1. QtPieMenu uses the weightings when laying out the pie slices. Each slice gets a share of the size proportional to its weight.
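The proportional layout rule can be illustrated outside of Qt: each slice's angular span is its weight's share of the full circle. A small plain C++ sketch (sliceSpans is a made-up helper, not part of the QtPieMenu API):

```cpp
#include <numeric>
#include <vector>

// Angular span (in degrees) of each pie slice, proportional to its
// weight. Mirrors the documented rule: each slice gets a share of the
// size proportional to its weight.
std::vector<double> sliceSpans(const std::vector<int> &weights)
{
    const int total = std::accumulate(weights.begin(), weights.end(), 0);
    std::vector<double> spans;
    spans.reserve(weights.size());
    for (int w : weights)
        spans.push_back(360.0 * w / total);
    return spans;
}
```

With three items where the first has weight 2, the spans come out as 180, 90 and 90 degrees.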
At the center of a pie menu is a cancel zone, which cancels the menu if the user clicks in it. The radius of the whole pie menu and of the cancel zone are set in the constructor, or with setOuterRadius() and setInnerRadius().
Any shape or layout of items is possible by subclassing QtPieMenu and reimplementing paintEvent(), indexAt(), generateMask() and reposition().
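To give an idea of what a reimplemented indexAt() has to do for the default circular layout, here is a rough, non-Qt hit-test sketch: report -1 for the cancel zone and for points outside the outer radius, otherwise derive the slice index from the angle. It assumes equally weighted slices, with slice 0 at the top and indices growing counter-clockwise, matching the layout described above; indexAtSketch is a made-up name:

```cpp
#include <cmath>

// Toy hit test in the spirit of indexAt(): map a point given relative
// to the pie center (widget coordinates, y axis pointing down) to a
// slice index, or return -1 for the cancel zone / outside the menu.
int indexAtSketch(double x, double y, double innerRadius,
                  double outerRadius, int itemCount)
{
    const double kPi = 3.14159265358979323846;
    const double r = std::sqrt(x * x + y * y);
    if (r < innerRadius || r > outerRadius)
        return -1;
    // atan2(-y, x) converts widget coordinates (y down) to math
    // coordinates (y up): angle 0 at 3 o'clock, growing counter-clockwise.
    double angle = std::atan2(-y, x) - kPi / 2.0; // 12 o'clock becomes 0
    while (angle < 0.0)
        angle += 2.0 * kPi;
    const double sliceSpan = 2.0 * kPi / itemCount;
    return static_cast<int>(angle / sliceSpan) % itemCount;
}
```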
See also
This enum describes the reason for the activation of an item in a pie menu.
See also activateItem().
If this pie menu is a submenu, the richtext text argument is displayed by its parent menu.
The parent and name arguments are passed on to QWidget's constructor.
If this pie menu is a submenu, the icon argument is displayed by its parent menu.
The parent and name arguments are passed on to QWidget's constructor.
If this pie menu is a submenu, the icon and richtext text arguments are displayed by its parent menu.
The parent and name arguments are passed on to QWidget's constructor.
This signal is emitted just before the pie menu is hidden after it has been displayed.
Warning: Do not open a widget in a slot connected to this signal.
See also aboutToShow(), insertItem(), and removeItemAt().
This signal is emitted just before the pie menu is displayed. You can connect it to any slot that sets up the menu contents (e.g. to ensure that the correct items are enabled).
See also aboutToHide(), insertItem(), and removeItemAt().
See also QtPieMenu::ActivateReason.
This signal is emitted when a pie menu action item is activated, causing the popup menu to hide. It is used internally by QtPieMenu. Most users will find activated(int) more useful.
This is an overloaded member function, provided for convenience. It behaves essentially like the above function.
This signal is emitted when the action item at index position i in the pie menu's list of menu items is activated.
See also highlighted().
This signal is emitted when the pie menu is canceled. Only the menu that emits this signal is canceled; any parent menus are still shown.
See also canceledAll().
This signal is emitted when the pie menu is canceled in such a way that the entire menu (including submenus) hides. For example, this would happen if the user clicked somewhere outside all the pie menus.
See also canceled().
When subclassing QtPieMenu, this function should be reimplemented if the shape of the new menu is different from QtPieMenu's shape.
Example: ../hexagon/hexagonpie.cpp.
This signal is emitted when the item at index position i in the pie menu's list of menu items is highlighted.
See also activated().
This function is reimplemented if a subclass of QtPieMenu provides a new item layout.
Returns the inner radius of the pie menu. See the "innerRadius" property for details.
QIconSet icon(QPixmap("save.png"));
rootMenu->insertItem(icon, this, SLOT(save()));
The items are ordered by their ordinal position, starting from the top of the pie and going counter clockwise.
Examples: ../editor/editor.cpp and ../hexagon/main.cpp.
Adds an action item to this pie menu at position index. The text is used as the item's text, or the text that is displayed on this item's slice in the pie. The receiver gets notified through the signal or slot in member when the item is activated.
rootMenu->insertItem("Save", this, SLOT(save()));
The items are ordered by their ordinal position, starting from the top of the pie and going counter clockwise.
Adds an action item to this pie menu at position index. The icons and text are used as the item's text and icon, or the text and icon that is displayed on this item's slice in the pie. The receiver gets notified through the signal or slot in member when the item is activated.
QIconSet icon(QPixmap("save.png"));
rootMenu->insertItem(icon, "Save", this, SLOT(save()));
The items are ordered by their ordinal position, starting from the top of the pie and going counter clockwise.
Adds the submenu item to this pie menu at position index. The items are ordered by their ordinal position, starting from the top of the pie and going counter clockwise.
QtPieMenu *subMenu = new QtPieMenu("<i>Undo</i>", this, SLOT(undo()));
rootMenu->insertItem(subMenu);
See also setItemEnabled().
See also setItemIcon() and itemText().
See also setItemText().
See also setItemWeight().
Returns the outer radius of the pie menu. See the "outerRadius" property for details.
To translate a widget's contents coordinates into global coordinates, use QWidget::mapToGlobal().
Examples: ../editor/editor.cpp and ../hexagon/main.cpp.
The default implementation does nothing.
Example: ../hexagon/hexagonpie.cpp.
Sets the inner radius of the pie menu to r. See the "innerRadius" property for details.
See also isItemEnabled().
Example: ../editor/editor.cpp.
See also itemIcon() and setItemText().
pieMenu->setItemText("<i>Undo</i>", 0);
See also itemText() and itemIcon().
The pie menu in this image has three pie menu items with icons and no text. The item at position 0 has a weight of 2, and the rest have the default weight of 1.
See also itemWeight().
Example: ../editor/editor.cpp.
Sets the outer radius of the pie menu to r. See the "outerRadius" property for details.
Examples: ../hexagon/hexagonpie.cpp and ../hexagon/main.cpp.
This property holds the inner radius of the pie menu.
The radius in pixels of the cancel zone (in the center) of the pie menu. Setting the radius to 0 disables the cancel zone.
See also outerRadius.
Set this property's value with setInnerRadius() and get this property's value with innerRadius().
This property holds the outer radius of the pie menu.
The outer radius of the pie menu in pixels.
See also innerRadius.
Set this property's value with setOuterRadius() and get this property's value with outerRadius().
This file is part of the Qt Solutions. | http://doc.trolltech.com/solutions/3/qtpiemenu/qtpiemenu.html | crawl-003 | en | refinedweb |
It starts one thread that writes to a file, and multiple threads that read from the file. The threads post logging messages to the GUI thread that displays those messages in a text display. When the application is terminated the example prints some statistics that show how much time each thread spends waiting for the mutex to become available.
The example can be built with the STD_MUTEX preprocessor symbol defined, in which case it will use a standard QMutex to synchronize file access. To do this, run qmake DEFINES+=STD_MUTEX to generate the makefile and rebuild the example.
#include <qapplication.h>
#include <qdatetime.h>
#include <qdeepcopy.h>
#include <qfile.h>
#include <qtextedit.h>
#include <qthread.h>
#include <qtimer.h>
#include "qtreadwritemutex.h"

//#define STD_MUTEX

#define NumReaders 30

#ifdef STD_MUTEX
#include <qmutex.h>
QMutex mutex;
#else
QtReadWriteMutex mutex(NumReaders);
#endif

class LogEvent : public QCustomEvent
{
public:
    enum LogType {
        WriteRequest = QEvent::User + 1,
        WriteBegin,
        WriteEnd,
        ReadLog
    };

    LogEvent(LogType type, const QString &log = QString())
        : QCustomEvent(type), text(log)
    { }

    QString logText() { return text; }

private:
    QDeepCopy<QString> text;
};

class Log : public QTextEdit
{
public:
    Log() { setTextFormat(Qt::LogText); }

protected:
    void customEvent(QCustomEvent *e)
    {
        QString logstring(QDateTime::currentDateTime().toString() + ": ");
        LogEvent *le = (LogEvent*)e;
        switch (e->type()) {
        case LogEvent::WriteRequest:
            logstring += "Write access requested";
            break;
        case LogEvent::WriteBegin:
            logstring += "Write access begins";
            break;
        case LogEvent::WriteEnd:
            logstring += "Write access finished";
            break;
        case LogEvent::ReadLog:
            logstring += le->logText() + " read from file";
            break;
        default:
            return;
        }
        append(logstring);
    }
};

Log *logger = 0;
A custom event is used to send string-messages to the GUI thread. The Log class subclasses QTextEdit to handle the event. Note the use of QDeepCopy in the event class.
class ReadThread : public QThread
{
public:
    ReadThread(const QString &filename, int number);
    ~ReadThread();

    void stop();

protected:
    void run();

private:
    QString logfile;
    int id;
    bool cancel;
    double percentblock;
    int accesses;
};

class WriteThread : public QThread
{
public:
    WriteThread(const QString &filename);
    ~WriteThread();

    void stop();

protected:
    void run();

private:
    QString logfile;
    bool cancel;
    double percentblock;
    int accesses;
};
The ReadThread and WriteThread classes implement a QThread. The implementation of the QThread::run() functions is what is interesting.
void ReadThread::run()
{
    int totaltime = 0;
    int blocktime = 0;
    while (!cancel) {
        msleep(100 + 20 * id); // take a break

        QTime timer;
        timer.start();
The thread initializes some local variables, and then performs some work in a while loop that runs until the thread is stopped from the outside. The time spent in the thread is measured using QTime.
#ifdef STD_MUTEX
        QMutexLocker locker(&mutex);
#else
        QtReadWriteMutexLocker locker(&mutex, QtReadWriteMutex::ReadAccess);
#endif
        blocktime += timer.elapsed();

A mutex locker is used to lock and unlock the mutex that is used to synchronize access to the file. Depending on how the example was configured this is either a standard QMutex, or a QtReadWriteMutexLocker using ReadAccess. The time spent waiting for the mutex is measured and added to blocktime.
        if (cancel) {
            totaltime += timer.elapsed();
            break;
        }

        QFile file(logfile);
        if (!file.open(IO_ReadOnly))
            continue;
        ++accesses;

        QTextStream in(&file);
        QString string = in.readLine();
        string += QString(" read by %1").arg(id);

        // take a bit...
        msleep(100);

        QApplication::postEvent(logger, new LogEvent(LogEvent::ReadLog, string));

        totaltime += timer.elapsed();
    }
    percentblock = double(blocktime)/double(totaltime) * 100;
}
If the thread has been stopped while waiting for the mutex the operation is canceled. Otherwise a file is opened for read access, the contents are read into a QString that is then posted to the log.
Before the thread terminates the percentage of time spent waiting for the mutex is calculated.

WriteThread::WriteThread(const QString &filename)
    : logfile(filename), cancel(false), accesses(0)
{ }
void WriteThread::run()
{
    int totaltime = 0;
    int blocktime = 0;
    int writecount = 0;
    while (!cancel) {
        msleep(2500); // take a (longer) break

        QApplication::postEvent(logger, new LogEvent(LogEvent::WriteRequest));

        QTime timer;
        timer.start();

#ifdef STD_MUTEX
        QMutexLocker locker(&mutex);
#else
        QtReadWriteMutexLocker locker(&mutex, QtReadWriteMutex::WriteAccess);
#endif
        blocktime += timer.elapsed();

        if (cancel) {
            totaltime += timer.elapsed();
            break;
        }

        QApplication::postEvent(logger, new LogEvent(LogEvent::WriteBegin));

        QFile file(logfile);
        file.open(IO_WriteOnly);
        ++accesses;

        QTextStream out(&file);

        // take some time getting data...
        msleep(100);
        QString string = QString("some data: %1").arg(++writecount);
        out << string << endl;

        QApplication::postEvent(logger, new LogEvent(LogEvent::WriteEnd));

        totaltime += timer.elapsed();
    }
    percentblock = double(blocktime)/double(totaltime) * 100;
}
The writer-thread uses QtReadWriteMutexLocker to request write-access to the file, and posts events to the log indicating the state of the write operation. It does this every 2.5 seconds.
int main(int argc, char **argv)
{
    QApplication app(argc, argv);

    QString filename = argc > 1 ? argv[1] : "logfile.txt";
    QFile::remove(filename);

    logger = new Log();

    // High-priority writer thread
    WriteThread writer(filename);
    writer.start(QThread::HighPriority);

    int i;
    ReadThread *readers[NumReaders];
    for (i = 0; i < NumReaders; ++i) {
        readers[i] = new ReadThread(filename, i);
        readers[i]->start();
    }

    logger->show();
    app.setMainWidget(logger);

    // quit after 10 seconds
    QTimer::singleShot(10000, &app, SLOT(quit()));

    int result = app.exec();

    // first stop all threads
    writer.stop();
    for (i = 0; i < NumReaders; ++i)
        readers[i]->stop();

    // then wait for all threads to finish
    writer.wait();
    for (i = 0; i < NumReaders; ++i) {
        readers[i]->wait();
        delete readers[i];
    }

    delete logger;
    return result;
}
The main() function creates the user interface, creates and starts one WriteThread (with high priority) and multiple ReadThreads, and runs the application for 10 seconds. Then all threads are stopped and deleted.
Running this example with STD_MUTEX defined and with different numbers of reader threads shows that all threads spend a high percentage of their total working time waiting for the mutex to become available. Even with only three reader threads almost 50% of the time is spent waiting for the mutex; with 6 or more readers this grows quickly to 75% or more. The writer almost never has a chance to perform the 4 write operations expected (10 seconds, with a write access every 2.5 seconds).
Using the QtReadWriteMutex instead shows that the performance is already improved
for very few threads - the readers spend almost no time waiting for the mutex,
and the writer thread hardly ever has to wait more than 50% of the total time even
with a large number of readers. Even then it performs the 4 write operations with
a high probability. | http://doc.trolltech.com/solutions/3/qtreadwritemutex/qtreadwritemutex-example-fileaccess.html | crawl-003 | en | refinedweb |
The QtService class provides an API from implementing Windows services and Unix daemons. More...
#include <qtservice.h>
List of all member functions.
A Windows service or Unix daemon (a "service"), is a program that runs regardless of whether a user is logged in or not. Services are usually non-interactive console applications, but can also provide a user interface.
A service is installed in a system's service database with install(). The system will start the service on startup, depending on the StartupType specified. A service can also be started explicitly with exec().
On Windows 95, 98 and ME the service is registered in the Windows Registry's RunServices entry.
On Windows NT based systems, such as Windows NT 4, 2000, and XP, the service is run under the control of the system's service control manager; on UNIX-like systems the service is implemented as a daemon.
When a service is started, it calls initialize(), and then the run() implementation which usually creates a QApplication object and enters the event loop. The stop() implementation must make the application leave the event loop, so that the run() implementation can return; usually this is done by calling qApp->quit().
On Windows NT based systems the service control manager can send commands to the service to pause(), resume(), or stop() the service, as well as service specific user() commands. This can be achieved on other systems by running the executable itself with suitable command line arguments.
A service can report events to the system's event log with reportEvent(). The status of the service can be queried with isInstalled() and isRunning().
A running service can be stopped by the service control manager, or by calling terminate(). If the service is no longer needed it can be uninstalled from the system's service database with uninstall().
The implementation of a service application's main() entry point function usually creates a QtService object, calls parseArguments(), and returns the result of that call.
When a service object is destroyed the service is not stopped and is not uninstalled.
Warning: On Windows, different versions have different limitations as to what a service is allowed to do. For example, on Windows XP Home Edition the Local System Account does not have the privilege to "Logon as a service". If you have problems with a service consult the documentation provided by Microsoft, i.e. in MSDN.
This enum describes the different types of events reported by a service to the system log.
This enum describes when a service should be started.
There can only be one QtService object in a process.
The service is not installed or started. The name must not contain any backslashes, cannot be longer than 255 characters, and must be unique in the system's service database.
See also install() and exec().
See also uninstall().
Tries to start the service, passing argc and argv to the run() implementation.
If the service is installed it is started as a separate process, and exec() returns immediately with the return value 0 if the service could be started. Otherwise a non-zero error code is returned and an error event is reported to the system's event log.
If the service is not installed the run() implementation is called directly, and the result is returned.
The default implementation of parseArguments() calls this function when the command line option -e (or -exec) is used.
See also install(), start(), and run().
The default implementation does nothing and returns true.
See also run().
Installs the service in the system's service control manager and returns true if successful; otherwise returns false.
The service reports the result of the installation to the system's event log.
The default implementation of parseArguments() calls this function when the command line option -i (or -install) is used.
Warning: Due to the different implementations of how services (daemons) are installed on various UNIX-like systems, this function is not implemented on such systems.
See also uninstall().
Returns true if the service is installed in the system's default service control manager; otherwise returns false.
Warning: This function always returns false on UNIX-like systems.
See also install().
Returns true if the service is running; otherwise returns false.
A service must be installed before it can be run, except on UNIX-like systems; see install().
Note that isRunning() returns false if the program runs stand-alone, so you can modify your application's behavior depending on the context it is running in:
int MyService::run( int argc, char **argv )
{
    QApplication app( argc, argv );
    QWidget *gui = new ServiceGui( ... );
    if ( !isRunning() ) // running stand alone -> quit when GUI is closed
        app.setMainWidget( gui );
    gui->show();
    return app.exec();
}
See also isInstalled(), exec(), start(), and stop().
The following arguments are recognized:
If no argument is provided the service calls start() to run the service and listen to commands from the service control manager.
Examples: interactive/main.cpp and server/main.cpp.
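The dispatching done by the default parseArguments() implementation can be pictured as mapping the first command line option to a service call. The sketch below is not the actual QtService code; it only uses the option spellings confirmed elsewhere on this page (-i/-install for install(), -e/-exec for exec(), -t/-terminate for terminate()), with start() as the default:

```cpp
#include <string>

enum class ServiceAction { Start, Install, Exec, Terminate, Unknown };

// Map the first command line argument to a service action, defaulting
// to Start when no option is given (mirrors "If no argument is
// provided the service calls start()").
ServiceAction parseFirstArgument(int argc, const char *argv[])
{
    if (argc < 2)
        return ServiceAction::Start;
    const std::string arg = argv[1];
    if (arg == "-i" || arg == "-install")
        return ServiceAction::Install;
    if (arg == "-e" || arg == "-exec")
        return ServiceAction::Exec;
    if (arg == "-t" || arg == "-terminate")
        return ServiceAction::Terminate;
    return ServiceAction::Unknown;
}
```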
The default implementation does nothing.
See also resume() and requestPause().
Example: interactive/main.cpp.
Report an event of type type with text message to the local system event log. The event identifier ID and the event category category are user defined values. data can contain arbitrary binary data.
Message strings for ID and category must be provided by a message file, which must be registered in the system registry. Refer to the MSDN for more information about how to do this on Windows.
Requests the running service to pause. The service will call the pause() implementation.
This function does nothing if the service is not running.
See also requestResume().
Requests a paused service to continue. The service will call the resume() implementation.
This function does nothing if the service is not running.
See also requestPause().
The default implementation does nothing.
See also pause() and requestResume().
Example: interactive/main.cpp.
This pure virtual function must be implemented in derived classes in order to perform the service's work. Usually you create the QtService subclass instance in main() and return the value of a call to parseArguments(). In the subclass's run() function you create the QApplication object passing argc and argv, initialize the application, and enter the event loop (e.g. by calling return app.exec();) in this function.
See also start(), exec(), terminate(), and stop().
Example: interactive/main.cpp.
Sends the user command code to the service. The service will call the user() implementation.
This function does nothing if the service is not running.
See also serviceName().
See also serviceDescription().
Starts the service, and returns true if successful; otherwise returns false.
A service must be installed before it can be started, except on UNIX-like systems; see install().
The service reports an error EventType to the system event log if starting the service fails.
The default implementation of parseArguments() calls this function if no command line options are used.
See also install(), exec(), stop(), and isRunning().
See also StartupType.
See also run() and terminate().
Asks the service control manager to stop the service if the service is running, otherwise does nothing. Returns true if the service could be stopped (or was not running); otherwise returns false.
The default implementation of parseArguments() calls this function when the command line option -t (or -terminate) is used.
See also exec() and isRunning().
Uninstalls the service from the system's default service control manager and returns true if successful; otherwise returns false.
The service reports the result of the uninstallation to the system's event log.
The default implementation of parseArguments() calls this function when the command line option -u (or -uninstall) is used.
Warning: Due to the different implementations of how services (daemons) are installed on various UNIX-like systems, this function is not implemented on such systems.
See also install() and isInstalled().
The default implementation does nothing.
See also sendCommand().
Example: interactive/main.cpp.
This file is part of the Qt Solutions. | http://doc.trolltech.com/solutions/3/qtservice/qtservice.html | crawl-003 | en | refinedweb |
The QtSharedMemory class provides a shared memory segment. More...
#include <qtsharedmemory.h>
List of all member functions.
The QtSharedMemory class provides a shared memory segment.
A shared memory segment is a block of memory that is accessible across processes and threads. It provides the fastest method for passing data between programs, but it is also potentially dangerous and should be used with caution. The QtSharedMemory class represents a handle to a shared memory segment.
QtSharedMemory shared("My Shared Data");
A shared memory segment is identified with a text key. The key is set through QtSharedMemory's constructor or with setKey(), and must be unique for the segment. Two instances of QtSharedMemory using the same key will refer to the same shared memory segment.
if (!shared.exists() && !shared.create(128)) {
    qDebug("Failed to allocate shared memory: %s",
           shared.errorString().latin1());
    return false;
}
exists() can be used to check if the handle has any memory associated with it. Memory is allocated using create(), which optionally takes the size and permissions of the segment. The permissions apply to everyone who wishes to attach to it.
if (!shared.attach()) {
    qDebug("Failed to attach to shared memory: %s",
           shared.errorString().latin1());
    return false;
}
char *byteData = (char *) shared.data();
byteData[1] = 'a';
Before accessing the shared memory segment, it must be attached to with attach(). If this operation succeeds, data() is used to return a pointer to the shared memory segment.
When you've finished using the segment, call detach(). Call destroy() to destroy the segment, but use numAttachments() to check that there are no attachments to the segment when this is done.
Accessing a shared memory segment must always be properly synchronized with lock() and unlock():
if (shared.lock()) {
    int *buffer = reinterpret_cast<int *>(shared.data());
    ++buffer[3];
    shared.unlock();
}
Acquiring a lock guarantees that the shared memory segment is available and safe to read from and write to. Destroying a segment will also destroy the lock.
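As a purely illustrative aside — unrelated to Qt's own implementation — the create/attach/read-write/detach/destroy lifecycle described above has a rough analogue in Python's standard library. The segment name "demo_segment" below is an arbitrary example key:

```python
# Illustrative sketch only: the create / attach / read-write /
# detach / destroy lifecycle from the QtSharedMemory overview,
# expressed with Python's stdlib shared-memory support.
from multiprocessing import shared_memory

# "create(128)": allocate a named 128-byte segment.
seg = shared_memory.SharedMemory(name="demo_segment", create=True, size=128)
try:
    # "attach()": open a second handle to the same segment by its key.
    view = shared_memory.SharedMemory(name="demo_segment")
    try:
        # "data()": write through one handle, read through the other.
        seg.buf[0:5] = b"hello"
        print(bytes(view.buf[0:5]))  # b'hello'
    finally:
        view.close()   # "detach()"
finally:
    seg.close()
    seg.unlink()       # "destroy()": remove the segment itself
```

Note that Python provides no built-in per-segment lock here; the lock()/unlock() discipline that QtSharedMemory describes would have to be added separately (e.g. with multiprocessing.Lock).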
Describes the access rights provided when allocating memory or attaching to a shared memory segment.
Constructs an empty shared memory handle. The size and key are set to 0 and the empty string. setKey() must be called before create() or attach().
See also setKey().
Call create() to allocate space for the segment, and call attach() to be able to read or write to the segment.
See also create() and attach().
If you want to delete the shared memory segment, call detach(), and when numAttachments() is 0, call destroy().
Attaches to the shared memory segment, using the permissions specified by mode. Returns true on success; otherwise returns false. If false is returned, the cause of the error can be retrieved with error() and errorString().
After attaching, data() is used to access the shared memory segment.
Note: Certain operating systems allow only a limited number of concurrent shared memory attachments per process. For example, Mac OS X seems to only allow 8 attachments. If this limit is reached, this function will return false, and error() will return QtSharedMemory::OutOfResources.
See also detach().
Examples: ../processcounter/factory.cpp and ../threadcounter/factory.cpp.
Allocates size bytes of memory for the segment in a shared memory area. Returns true on success; otherwise returns false. If false is returned, the cause of the error can be retrieved with error() and errorString().
See also size() and destroy().
See also create().
Examples: ../processcounter/factory.cpp and ../threadcounter/factory.cpp.
Destroys the shared memory segment and the lock. Returns true on success; otherwise returns false. If false is returned, the cause of the error can be retrieved with error() and errorString().
The method used when destroying the segment depends on the value of mode.
This function must be used with caution. When destroy() is called, all other accessors of the shared memory segment must have been notified to prevent access of the shared segment after it has been destroyed. Any such access will almost certainly result in a crash.
If all accessors use lock() and unlock() when accessing the shared memory, a safe way of using this function is to combine it with lock(), exists(), numAttachments() and unlock(), as in the following example:
QtSharedMemory shared("shared.shm");
shared.create(128);

// Make use of the shared memory

// Finished with the shared memory, so now we want to destroy it
for (;;) {
    if (!shared.lock()) {
        qDebug("Unable to lock segment: %s", shared.errorString());
        break;
    }
    if (!shared.exists()) {
        qDebug("Segment has been destroyed.");
        break;
    }
    if (shared.numAttachments() == 0) {
        shared.destroy(); // also destroys the lock
        break;
    }
    shared.unlock();
    qDebug("Waiting for everybody to detach.");
    sleep(5);
}
Detaches from the shared memory segment. Returns true on success; otherwise returns false. If false is returned, the cause of the error can be retrieved with error() and errorString().
Detaching from the segment does not destroy it; use destroy() to destroy the segment.
Examples: ../processcounter/factory.cpp and ../threadcounter/factory.cpp.
Returns the error type of the last failed operation.
See also errorString().
Example: ../threadcounter/factory.cpp.
See also error().
Examples: ../processcounter/factory.cpp and ../threadcounter/factory.cpp.
Returns true if the shared memory segment exists; otherwise returns false.
Example:
QtSharedMemory sharedMemory("Accounting app shared memory");
if (!sharedMemory.exists() && !sharedMemory.create()) {
    qDebug("Failed to create shared memory segment: %s",
           sharedMemory.errorString().latin1());
    return;
}
This function is only reliable when used between a lock() and an unlock(), or if the caller is certain that nobody else is accessing the shared memory segment.
See also isValid(), create(), attach(), and destroy().
See also setKey().
Locks the shared memory segment. Returns true on success, indicating that the memory segment is locked and valid; otherwise returns false.
This is one of the most important functions in QtSharedMemory, and it is used to ensure safe access to the shared memory segment by avoiding race conditions.
The proper way to use this locking mechanism is to call lock() before and unlock() after every access, both when reading and writing. While locked, the segment should also be checked for validity as in the following example:
bool increaseCounter(QtSharedMemory *shared, int counterIndex)
{
    if (!shared->lock()) {
        qDebug("Error updating shared data: %s",
               shared->errorString().latin1());
        return false;
    }
    int *buffer = reinterpret_cast<int *>(shared->data());
    ++buffer[counterIndex];
    shared->unlock();
    return true;
}
Examples: ../processcounter/factory.cpp and ../threadcounter/factory.cpp.
Returns the number of attachments to this segment. The number returned is only reliable when exactly one instance of QtSharedMemory is accessing the shared memory segment. Call this function between lock() and unlock() to get an accurate value while avoiding race conditions.
Sets the unique identifier for this shared memory segment to the string key.
See also key().
Unlocks the shared memory segment.
See also lock().
Examples: ../processcounter/factory.cpp and ../threadcounter/factory.cpp.
This file is part of the Qt Solutions. | http://doc.trolltech.com/solutions/3/qtsharedmemory/qtsharedmemory.html | crawl-003 | en | refinedweb |
As announced yesterday, the new February 2010 release of F# is out. For those using Visual Studio 2008 and Mono, you can pick up the download here. This release is much more of a stabilization release instead of adding a lot of features including improvements in tooling, the project system and so on.
One of the more interesting announcements today was not only the release, but that the F# PowerPack, a collection of libraries and tools, was released on CodePlex under the MS-PL license. By releasing the PowerPack on CodePlex, it allows the F# team to have the PowerPack grow more naturally, free of major release cycles such as Visual Studio releases. What’s included in this release?
What’s in the Box?
The PowerPack includes such tools as:
- FsLex – a Lexical Analyzer, similar in nature to OCamlLex
- FsYacc – a LALR parser generator which shares the same specification as OCamlYacc
- FsHtmlDoc – an HTML document generator for F# code
Just as well, there are quite a few libraries which include:
- FSharp.PowerPack.dll – includes additional collections such as the LazyList, extension methods for asynchronous workflows, native interoperability, mathematical structures units of measure and more
- FSharp.PowerPack.Compatibility.dll – support for OCaml compatibility
- FSharp.PowerPack.Linq.dll – provides support for the LINQ provider model
- FSharp.PowerPack.Parallel.Seq.dll – perhaps the most interesting in that it provides support for Parallel LINQ and the Task Parallel Library
Just to name a few…
Let’s look briefly though at some of the features.
LINQ Support
One piece that’s not particularly new, but under-looked is the support for LINQ providers through the FSharp.PowerPack.Linq.dll library. This means we could support providers such as NHibernate, LINQ to SQL, Entity Framework, MongoDB or any other provider. To enable this behavior, simply use the query function with an F# expression. An F# quotation is much like the .NET BCL Expression but with a few extra added goodies and represented in the <@ … @> form.
let result = query <@ ... @>
Just as well, there are additional query functions that are necessary when dealing with data including:
- contains
- groupBy
- groupJoin
- join
- maxBy
- minBy
Each of those is rather self-explanatory in how it gets transformed back into F# quotations/expressions. Let's look at a simple example of grouping customers in California by their customer level from a LINQ to SQL provider.
#if INTERACTIVE
#r "FSharp.PowerPack.Linq.dll"
#endif

open Microsoft.FSharp.Linq
open Microsoft.FSharp.Linq.Query

let context = DbContext()

let groupedCustomers =
    query <@ seq { for customer in context.Customers do
                       if customer.BillingAddress.State = "CA" then
                           yield customer }
             |> groupBy (fun customer -> customer.Level) @>
As you can see, inside the query function, we have a sequence expression to iterate through our customers looking for all in California, and then outside of the sequence expression, we call the groupBy function which allows us to Key off the customer level.
Perhaps even more interesting than the LINQ support is what we find in the FSharp.PowerPack.Parallel.Seq.dll library.
Parallel Extensions via PSeq
In a previous post, I went over how you could use F# and Parallel LINQ (PLINQ) as well as the Task Parallel Library (TPL) together nicely. There was a bit of a translation layer needed at the time due to the inherent mismatch between .NET Func and Action delegates and F# functions. What’s nice now is the support for PLINQ and TPL now comes out of the box with the F# PowerPack through the aforementioned library and in particular the PSeqModule. This module contains many of the same combinators as the SeqModule such as filter, map, fold, zip, iter, etc but with the backing of both PLINQ and the TPL. For a quick example, we can do the Parallel Grep sample using F# and the PSeq module.
open System
open System.IO
open System.Text.RegularExpressions
open System.Threading

let regexString = @"^[\w-\.]+@([\w-]+\.)+[\w-]{2,4}$"
let searchPaths = [@"C:\Tools"; @"C:\Work"]
let regexOptions = RegexOptions.Compiled ||| RegexOptions.IgnoreCase
let regex = new ThreadLocal<Regex>(Func<_>(fun () -> Regex(regexString, regexOptions)))
let searchOption = SearchOption.AllDirectories

let files = seq {
    for searchPath in searchPaths do
        for file in Directory.EnumerateFiles(searchPath, "*.*", searchOption) do
            yield file }

type FileMatch = { Num : int; Text : string; File : string }

let matches =
    files
    |> PSeq.withMergeOptions(ParallelMergeOptions.NotBuffered)
    |> PSeq.collect (fun file ->
        File.ReadLines(file)
        |> Seq.map2 (fun i s -> { Num = i; Text = s; File = file })
                    (seq { 1 .. Int32.MaxValue })
        |> Seq.filter (fun line -> regex.Value.IsMatch(line.Text)))
This above sample looks in the Tools and Work directory of my C drive and determines whether there are any email addresses in there, in parallel. We’ll cover more of this in depth in the near future, but this is enough to whet your appetite.
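For comparison only — this is not part of the PowerPack — the same parallel-grep shape can be sketched with Python's standard library. The file contents and the simplified e-mail regex below are made-up examples:

```python
# Hedged analogy only: the parallel-grep idea from the F# sample,
# re-done with Python's stdlib. Files and regex are invented examples.
import concurrent.futures
import re

EMAIL = re.compile(r"^[\w\-.]+@([\w-]+\.)+[\w-]{2,4}$")

def grep_lines(name_and_lines):
    # Return (file, line number, text) for each matching line,
    # numbering lines from 1 as the F# sample does.
    name, lines = name_and_lines
    return [(name, i, s) for i, s in enumerate(lines, start=1)
            if EMAIL.match(s)]

files = {
    "a.txt": ["hello", "bob@example.com"],
    "b.txt": ["x@y.zz", "not an email"],
}

# Fan the per-file scans out over a thread pool, then flatten.
with concurrent.futures.ThreadPoolExecutor() as pool:
    hits = [h for hs in pool.map(grep_lines, files.items()) for h in hs]

print(sorted(hits))
# [('a.txt', 2, 'bob@example.com'), ('b.txt', 1, 'x@y.zz')]
```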
Conclusion
With the new release of the F# language, we also have a welcome surprise in the F# PowerPack now finding a home on CodePlex. This move by the F# team allows the PowerPack to grow more naturally and not be confined to major cycles such as .NET Framework or Visual Studio releases. Sometimes, the best way to learn the language is to just learn how the libraries were written, and given it is on CodePlex, we now easily have that opportunity. | http://codebetter.com/matthewpodwysocki/2010/02/11/the-f-powerpack-released-on-codeplex/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+CodeBetter+%28CodeBetter.Com%29 | crawl-003 | en | refinedweb |
Topic: Navair Publications Online
Answers to Common Questions
How to Block Public Information Online
Public information is information that is made accessible to the public. With the rise of modern technology, though, much information is made public on the Internet that does not have to be so. If you want to prevent your information from b... Read More »
Source:...
How to Become a Notary Public for Free Online
A notary public is a public officer tied to a state's laws and regulations. Notaries public witness signatures, certify the validity of documents and in some states perform marriages. Becoming a notary public is not a free process, and beco... Read More »
Source:
How to Make Money by Writing Online for Web Publications
A good place to get started with your online writing is a company called Demand Media. You can go to their website and apply to become a writer for their various online publications. They will walk you through the process, and you will get ... Read More »
Source:...
More Common Questions
Answers to Other Common Questions
There was a time when the only way to find public records information was to physically visit a courthouse. Now public records are available 24 hours a day if you have a computer and internet access. Public records encompass everything from... Read More »
Source:...
Decide on how much you're willing to spend on the process before committing. You need to be aware of the various costs: the class itself (either online or in an actual classroom); then the exam (which you cannot take online, but must take i... Read More »
Source:
Public speaking is often a difficult subject to teach because many people are simply afraid of speaking in public. Public-speaking teachers in physical classes find this subject tricky to teach. Teaching it online can be even more tricky be... Read More »
Source:
If you don't have a library card, visit your local branch and obtain one. Just make sure you bring a valid picture I.D. and proof of address. You can look up your local library branch on-line by visiting: Once... Read More »
Source:
The Internet has made it much easier to find information, including free public records online. However, there isn't a friendly office clerk to ask how and where to find those records, so how can you be sure you are looking in the best plac... Read More »
Source:
Film copyrights are not forever. The owner of the movie must renew the copyright before it expires to retain ownership rights. When a copyright on a film is not renewed by the owner, the film becomes public domain. This means that there are... Read More »
Source:....
Creating a public class online can give your website visitors an incentive to come back again and again. To impart knowledge and interact with students on the Internet, several methods are available. The public class can be an effective too... Read More »
Source: | http://www.ask.com/questions-about/Navair-Publications-Online | crawl-003 | en | refinedweb |
#include <AnasaziEpetraAdapter.hpp>
List of all members.
This interface will ensure that any Epetra_Operator and Epetra_MultiVector will be accepted by the Anasazi templated solvers.
Definition at line 894 of file AnasaziEpetraAdapter.hpp.
This method takes the Epetra_MultiVector x and applies the Epetra_Operator Op to it, resulting in the Epetra_MultiVector y.
Definition at line 901 of file AnasaziEpetraAdapter.hpp. | http://trilinos.sandia.gov/packages/docs/r10.0/packages/anasazi/doc/html/classAnasazi_1_1OperatorTraits_3_01double_00_01Epetra__MultiVector_00_01Epetra__Operator_01_4.html | crawl-003 | en | refinedweb |
Situation
I am creating a simple endpoint that allows for the creation of a user. I need a field that is not in my user model (i.e., confirm_password) that is checked against password during validation and then discarded. Here is my UserSerializer:
from django.contrib.auth import get_user_model

from rest_framework import serializers


class UserSerializer(serializers.ModelSerializer):
    confirm_password = serializers.CharField(allow_blank=False)

    def validate(self, data):
        """
        Checks to be sure that the received password and confirm_password
        fields are exactly the same
        """
        if data['password'] != data.pop('confirm_password'):
            raise serializers.ValidationError("Passwords do not match")
        return data

    def create(self, validated_data):
        """
        Creates the user if validation succeeds
        """
        password = validated_data.pop('password', None)
        user = self.Meta.model(**validated_data)
        user.set_password(password)
        user.save()
        return user

    class Meta:
        # returns the proper auth model
        model = get_user_model()
        # fields that will be deserialized
        fields = ['password', 'confirm_password',
                  'username', 'first_name', 'last_name', 'email']
        # fields that will be serialized only
        read_only_fields = ['is_staff', 'is_superuser']
        # fields that will be deserialized only
        write_only_fields = ['password', 'confirm_password']
The validate check works when the passwords do not match, but when validation passes I get the following error:

Got KeyError when attempting to get a value for field confirm_password on serializer UserSerializer. The serializer field might be named incorrectly and not match any attribute or key on the OrderedDict instance.
You are looking for a write-only field, as I'm assuming you won't want to display the password confirmation in the API. Django REST Framework introduced the write_only parameter in the 2.3.x timeline to complement the read_only parameter, so the only time validation is run is when an update is being made. The write_only_fields meta property was added around the same time, but it's important to understand how both of these work together.
The write_only_fields meta property will automatically add the write_only property to a field when it is automatically created, like for a password field on a User model. It will not do this for any fields which are not on the model, or fields that have been explicitly specified on the serializer. In your case, you are explicitly specifying the confirm_password field on your serializer, which is why it is not working.
Got KeyError when attempting to get a value for field confirm_password on serializer UserSerializer. The serializer field might be named incorrectly and not match any attribute or key on the OrderedDict instance.
This is raised during the re-serialization of the created user, when it is trying to serialize your confirm_password field. Because it cannot find the field on the User model, it triggers this error which tries to explain the problem. Unfortunately, because this is on a new user, it confusingly tells you to look at the OrderedDict instance instead of the User instance.
class UserSerializer(serializers.ModelSerializer):
    confirm_password = serializers.CharField(allow_blank=False, write_only=True)
If you explicitly specify write_only on the serializer field, and remove the field from your write_only_fields, then you should see the behaviour you are expecting.
The Django REST Framework serializer documentation covers this in more detail.
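The compare-then-pop idea at the heart of the question's validate method is plain Python and can be exercised without Django or DRF at all. The following is only a framework-free sketch; the ValidationError class here is a local stand-in, not rest_framework.serializers.ValidationError:

```python
# Framework-free sketch of the question's validate() logic:
# compare confirm_password against password, then drop it so it
# never reaches the model. ValidationError is a local stand-in,
# not rest_framework.serializers.ValidationError.
class ValidationError(Exception):
    pass

def validate(data):
    # pop() removes confirm_password as a side effect of the comparison
    if data['password'] != data.pop('confirm_password'):
        raise ValidationError("Passwords do not match")
    return data

clean = validate({'username': 'alice',
                  'password': 's3cret',
                  'confirm_password': 's3cret'})
print(clean)  # {'username': 'alice', 'password': 's3cret'}
```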
form_designer 0.4.0
Form Designer - a simple form designer for FeinCMS
- Hidden input fields.
Configuring the export
The CSV export of form submissions uses Python's csv module, the Excel dialect, and UTF-8 encoding by default. If your main target is Excel, you should probably add the following setting to work around Excel's abysmal handling of CSV files encoded in anything but latin-1:
FORM_DESIGNER_EXPORT = {
    'encoding': 'latin-1',
}
You may add additional keyword arguments here which will be used during the instantiation of csv.writer.
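Purely as an illustration of what such a setting implies — this is not form_designer's actual export code, and the export_rows helper and the 'delimiter' key are invented for the example — the pattern is roughly: pop the encoding, pass the remaining keys to csv.writer, and encode the result:

```python
# Illustrative sketch only -- not form_designer's real export code.
# Shows the general pattern the docs describe: feed user-configurable
# keyword arguments (plus an encoding) into csv.writer.
import csv
import io

FORM_DESIGNER_EXPORT = {'encoding': 'latin-1', 'delimiter': ';'}

def export_rows(rows, settings=FORM_DESIGNER_EXPORT):
    opts = dict(settings)                     # don't mutate the setting
    encoding = opts.pop('encoding', 'utf-8')  # remaining keys -> csv.writer
    buf = io.StringIO()
    writer = csv.writer(buf, dialect='excel', **opts)
    for row in rows:
        writer.writerow(row)
    return buf.getvalue().encode(encoding)

data = export_rows([['name', 'email'], ['Héllo', 'a@b.example']])
print(data)  # b'name;email\r\nH\xe9llo;a@b.example\r\n'
```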
ReCaptcha
To enable ReCaptcha, install django-recaptcha and add captcha to your INSTALLED_APPS. This will automatically add a ReCaptcha field to the form designer. For everything else, read through the django-recaptcha readme.
Override field types
Define FORM_DESIGNER_FIELD_TYPES in your settings file like:
FORM_DESIGNER_FIELD_TYPES = 'your_project.form_designer_config.FIELD_TYPES'
In your_project.form_designer_config.py something like:
from django import forms
from django.utils.translation import ugettext_lazy as _

FIELD_TYPES = [
    ('text', _('text'), forms.CharField),
    ('email', _('e-mail address'), forms.EmailField),
]
Version history
0.4
- Built-in support for Django 1.7-style migrations. If you’re using South, update to South 1.0 or better.
0.3
- Support for Python 3.3, 2.7 and 2.6.
- Support for overridding field types with FORM_DESIGNER_FIELD_TYPES.
Visit these sites for more information
- form_designer:
- FeinCMS:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace ExampleProject.Contracts
{
/// <summary>
/// The time-zone has changed on a view.
/// </summary>
/// <remarks>
/// <para>
/// <list>
/// <listheader>Version History</listheader>
/// <item>10 October, 2009 - Steve Gray - Initial Draft</item>
/// </list>
/// </para>
/// </remarks>
/// <param name="view">View instance</param>
/// <param name="newZone">New time-zone</param>
public delegate void TimeZoneChanged(IClockView view, TimeZone newZone);
}) | https://www.codeproject.com/script/articles/viewdownloads.aspx?aid=42967&zep=exampleproject.contracts%2Ftimezonechangeevent.cs&rzp=%2Fkb%2Farchitecture%2Fmvp_through_dotnet%2F%2Fexampleproject.zip | CC-MAIN-2017-09 | en | refinedweb |
How to set a PDF to expire (Working Script)
(Bryan_Hardesty) Mar 21, 2008 9:12 AM
Here is a little script I made up the other night. You can use it to allow a PDF to be opened only until a set date. I use this for when my employees go to service a customer. I want them to be able to see the customer's information, but only for 24 to 48 hours.

CheckExpiration()

function CheckExpiration()
{
/*-----START EDIT-----*/
var LastDay = 21
var LastMonth = 3
var LastYear = 2008
/*-----END EDIT-------*/

/* DO NOT EDIT PAST HERE !!! */
var today = new Date();
var myDate = new Date();
LastMonth = LastMonth - 1
myDate.setFullYear(LastYear, LastMonth, LastDay);

if (myDate < today)
    {
    this.closeDoc(1);
    app.alert("This file has expired.", 1, 0, "Expired");
    }
}
This content has been marked as final. Show 39 replies
1. Re: How to set a PDF to expire (Working Script)
Patrick Leckey Mar 21, 2008 9:19 AM (in response to (Bryan_Hardesty))
Well, as long as the user doesn't change the date on their computer or turn off JavaScript in Acrobat. If a user does either of those things, the form will open regardless of your script.
Adobe LiveCycle Policy Server provides this functionality in a server / client setup so that it authenticates the user, date and time against a trusted server before the form is opened.
2. Re: How to set a PDF to expire (Working Script)
Patrick Leckey Mar 24, 2008 6:30 AM (in response to (Bryan_Hardesty))
Actually that was the name for the LiveCycle version 7 suite. In version 8 the product that provides this functionality is called LiveCycle Rights Management.
It does not use scripting of any sort and so it cannot be so easily disabled as by removing a check from a preference setting or by changing the year on your clock to 2007 (since the script above does not have a start date to validate against either). It is embedded into the document and forces Acrobat or Reader to validate against your Rights Management server before you are allowed to open the document.
3. Re: How to set a PDF to expire (Working Script)
(SteveMajors) Apr 5, 2008 2:40 PM (in response to (Bryan_Hardesty))
If a fairly technical person with nearly no coding skills (for example, a person like ME...) wanted to use this script, just what does it take to implement?
I totally understand that it is limited to 'keeping honest people honest' and has nearly zero "real" protection.
However, I have a request from an associate to protect pdfs by date and he wants it quick and cheap..... (we don't have tons of servers or want to acquire the know-how of maintaining them ourselves).
We've been trying the ADC, but have issues with it opening files on a Mac. Other solutions look nice (FileOpen, etc.), though they are a bit pricey-er than we'd like since the main thing is simply to have docs expire on a given date (we like the 'by user' features, but that's really not a huge requirement at this time).
Thanks for your input.
4. Re: How to set a PDF to expire (Working Script)
(Aandi_Inston) Apr 7, 2008 1:08 AM (in response to (SteveMajors))
Be very sure to explain to your associate: here is a solution, but the end user only has to turn off JavaScript to avoid it.
Aandi Inston
5. Re: How to set a PDF to expire (Working Script)
(SteveMajors) Apr 7, 2008 8:09 AM (in response to (Bryan_Hardesty))
Thanks, Aandi.
I clearly understand that and have tried to tell him, but 'pictures speak louder than words' and seeing it happen (or not) will likely help him understand everything.
What I have found (by studying the help files all weekend) is ONE way to implement this Java code (I'm not sure if it is 'right'...) that WORKS on ONE OUT OF SIX computers...
I placed the code on the Page Properties of the first page (steps below for the 'newbies' like me...) - it was really simple once I found out how to do it...
**** STEP BY STEP FOR OTHER NEWBIES ****
To use the Java script as described in this post (at least, as I did - maybe someone else will have better ideas and describe it for us...)
1. click on Pages on the left sidebar
2. right-click on the first page
3. select Page Properties, Action
4. under Select Action choose Run a Javascript
5. click Add
6. in the window that pops up, copy/paste the code from the post
7. click OK, OK to get back to the doc (change the Javascript later if you want, this is just 'initial testing'...)
8. IMPORTANT! Save the file under a different file name (or you may not be able to open it later!)
*********END OF STEP-BY-STEP*********
I included #8 because now, when I try to open the 'expired' doc in Acrobat, I get the 'expired' message and can't open it at all, but in IE, I see the document - on 5 of my 6 computers (including the one that Acrobat won't open!)
So, even though this is not 'secure' nor a 'pro' solution, please help me understand if a) I did this right and b) why it only works on the ONE computer (running Win 2000 Server and IE 6 with Reader 7) and not on any others (incluing NT, XP, Vista and 2000 Pro with various versions of IE and Reader).
Thank you for your time.
What I would really like to see is your reply with DETAILS on how to test this and both 1) learn how this stuff is done and 2) show my associate the difference in turning it 'on' and 'off'.
Best regards
Steve Majors
6. Re: How to set a PDF to expire (Working Script)
(Aandi_Inston) Apr 7, 2008 8:42 AM (in response to (SteveMajors))
One tip is to always check the JavaScript console. There may be a message in there about the problem.
Aandi Inston
7. Re: How to set a PDF to expire (Working Script)
(SteveMajors) Apr 7, 2008 8:58 AM (in response to (Bryan_Hardesty))
Aandi,
Thanks. Sure wish I knew what that was, or where to find it....
guess it's back to the 'search the help file' again.....
I really do thank you (and all the experienced folks out there) for your tips/guidance, however, PLEASE remember that I got Adobe Acrobat last week (haven't even received the CD yet...) and I'm about as lost as anyone can get! (step-by-step is highly appreciated... - not only for me, but in reading the forums, it seems there are many more out there as bad, if not worse off than me!)
Steve
P.S. (added after doing some searching) I found that "Javascript console" is something that browsers have.... check out for a nice page... They say, "In IE, go to Tools > Internet Options and choose the Advanced tab. Make sure the check box for "Display a notification for every script error" is checked. " I'm off to try that....
P.P.S. Turned that on, tried the 'expired' page and nothing special - was able to read the entire thing...
8. Re: How to set a PDF to expire (Working Script)(Aandi_Inston) Apr 7, 2008 9:39 AM (in response to (SteveMajors))They now call it the JavaScript debugger in Acrobat Professional, look
under Advanced > Document Processing. Not sure about other products.
Browsers have a different JavaScript environment to Acrobat; each one
may have a console, but when running Acrobat JavaScript you need the
AcrobatJavaScript console.
Please remember that you are now learning to be a programmer, and that
isn't something you can get good at in a day, a week, or a month; nor
through a handful of tips.
Aandi Inston
9. Re: How to set a PDF to expire (Working Script)(SteveMajors) Apr 7, 2008 10:07 AM (in response to (Bryan_Hardesty))Thanks for the message with details!
When I turn on the JavaScript debugger in Acrobat Pro, I get this message (whether the 'clean' file or the 'expired' - same message).
Don't know if it has anything to do with the issue, but it is what it is...
Acrobat Database Connectivity Built-in Functions Version 8.0
Acrobat EScript Built-in Functions Version 8.0
Acrobat Annotations / Collaboration Built-in Functions Version 8.0
Acrobat Annotations / Collaboration Built-in Wizard Functions Version 8.0
Acrobat Multimedia Version 8.0
Acrobat SOAP 8.0
NotSupportedError: Not supported in this Acrobat configuration.
Doc.closeDoc:17:Page undefined:Open
As for being a 'programmer' - that ain't gonna happen.... I've been in computers since 1978 (Apple ][ days...) and found out a long time ago that I don't think like a programmer (it takes a special person with special skills and dedication, IMHO...), though I do like to 'hack around' and try out the simple stuff on my own.
What I'd really like to do/have/find is someone that totally understands this stuff as well as the business side of things that will stick to being 'on staff' (what we find is that "programmers" tend to get the majority of the project done, then something else comes along.... - certainly on the bigger projects.... we have a mostly-written back-end project done in perl, but now the programmer isn't answering the phone, or not getting into it to finish....
That's why I like to 'hack' - small changes are something I can do myself.
Anyway, I think this has gone as far as I want to go with it - something that 'kinda' works on one out of 6 computers seems to me to be a FAILURE, not an OPTION....
Thanks for your replies and the instructions on how to at least look at the debugger thing.
All the best.
10. Re: How to set a PDF to expire (Working Script)(Aandi_Inston) Apr 7, 2008 10:57 AM (in response to (SteveMajors))>NotSupportedError: Not supported in this Acrobat configuration.
>Doc.closeDoc:17:Page undefined:Open
Ok, if we refer to the JavaScript Reference there may be some clues
there. It's basically saying that closeDoc (which you'll find in the
code somewhere) isn't being allowed, usually for some security or
impracticality reason, rather than because you didn't ask right.
But no: no useful notes. It may be that you are trying to do this in a
browser document? You can't close the document window for a browser.
>
>As for being a 'programmer' - that ain't gonna happen....
It's already happened. You may feel you're just doing copy-and-paste
programming - but an increasing number of "programmers" actually
believe this is all there is to programming. I respect your judgement
that you don't want to be what you see a programmer as being, but you
are doing programming tasks, just as someone who has to saw a piece of
wood is doing carpentry tasks - and the saw is still sharp!
What I'm saying, I guess, is that trying to get this working while
simultaneously saying "No! I don't want to learn this stuff" isn't
going to work.
Aandi Inston
11. Re: How to set a PDF to expire (Working Script)Patrick Leckey Apr 7, 2008 11:55 AM (in response to (Bryan_Hardesty))Good catch, Aandi. Hadn't occurred to me until now. closeDoc does not work in a browser window, you're right. Since AcroJS stops processing at the first error, you never get the app.alert message because closeDoc comes before it, and you see the document because closeDoc can't close a browser window.
Just another reason why "JavaScript-based Document Security" is a misnomer and why this script really shouldn't be used for any sort of security - all you have to do is open the document in a browser to bypass the security.
12. Re: How to set a PDF to expire (Working Script)(Dawn_Kay) May 9, 2008 10:13 AM (in response to (Bryan_Hardesty))When I attempt this.. I get to step 4 and then it will not allow me to Add...
Is there a specification I am missing... a file type.. or reason my PDF won't allow me to edit such things.
13. Re: How to set a PDF to expire (Working Script)gkaiseril May 9, 2008 10:23 AM (in response to (Bryan_Hardesty))These instructions apply to PDFs NOT created by LiveCycle Designer.
14. Re: How to set a PDF to expire (Working Script)(Dawn_Kay) May 9, 2008 10:28 AM (in response to (Bryan_Hardesty))Thanks Geo - that is my issue.
Does anyone know if there is there a way to generate silimar results in designer? (an expiration by date, or number of times opened etc...)
15. Re: How to set a PDF to expire (Working Script)Patrick Leckey May 9, 2008 11:56 AM (in response to (Bryan_Hardesty))You do understand that this method provides no security at all, right?
Turning off JavaScript or changing the date on your computer will circumvent any form of "security" this script may seem to provide.
16. Re: How to set a PDF to expire (Working Script)(Dawn_Kay) May 12, 2008 5:45 AM (in response to (Bryan_Hardesty))Yes, I understand there are obvious ways around it, but it is better than having no alternative.
17. Re: How to set a PDF to expire (Working Script)Patrick Leckey May 12, 2008 5:54 AM (in response to (Bryan_Hardesty))The alternative is Adobe LiveCycle Policy Server.
EDIT: Sorry, it's been renamed Adobe LiveCycle Rights Management for the LiveCycle ES Suite.
18. Re: How to set a PDF to expire (Working Script)(sobencha) May 18, 2008 5:48 AM (in response to (Bryan_Hardesty))Two questions regarding the JavaScript approach...
1) Is there a way to add this bit of JavaScript information to a pdf file in a batch fashion via Java, Python, etc.? I would like to add this to a build process in Ant. Any suggestions?
2) The responses regarding how insecure this is and to use the client/server approach seems worthless for a situation where the pdfs may never see internet connectivity. The only reason for using pdfs is for mobile users of my information. That is first and foremost why I need some document-embedded solution. Are there any document-embedded solutions that are known that do not involve client/server communication?
Thanks a lot for any assistance.
19. Re: How to set a PDF to expire (Working Script)gkaiseril May 18, 2008 10:14 AM (in response to (Bryan_Hardesty))And Acrobat JavaScript may not work on many PDA and other wireless devices that might display content.
20. Re: How to set a PDF to expire (Working Script)Patrick Leckey May 20, 2008 5:43 AM (in response to (Bryan_Hardesty))They may be "worthless" solutions for non-internet-connect scenarios, but that doesn't make the implementation any more secure. It's still a laughably insecure approach to expiring a document. Unless you are in complete control of the viewing environment, I know for a fact that a lot of corporate deployments of Acrobat default to JavaScript turned off - which means to them, your document never expires. A lot of home users turn JavaScript off too because they don't "trust" JavaScript. Not to mention all the people that use non-Adobe viewers (Foxit Reader for example) that may not handle the JavaScript correctly and cause unknown results.
There is no perfect solution for expiring a PDF in a non-internet-connected scenario. If you can't control the timechecking in a known-safe server environment and have to rely on information from the local system, your security is lost since anybody can do anything to the local system. If you're looking for something showy that will make people feel warm and think that you have some form of security on your documents, use the above script. But I would be wary about passing it off, especially in a professional environment, as "secure". Anybody who wants to spend 5 minutes on Google looking at PDF security will realize your "security" is a complete sham and that could reflect badly on you.
The best option to secure a PDF in a non-connected environment is to apply document encryption and only give the password to those who need to view the document.
Oh and LiveCycle Rights Management ES has plenty of fallback configuration options for how to handle non-connected environments with policy-protected PDFs.
21. Re: How to set a PDF to expire (Working Script)(And_Be) Jun 16, 2008 8:21 AM (in response to (Bryan_Hardesty))Do you know how to add a specific time to the expiry?
22. Re: How to set a PDF to expire (Working Script)gkaiseril Jun 16, 2008 9:43 AM (in response to (Bryan_Hardesty))You need to add the variables for hours, minutes, seconds and milliseconds to the code. A generalized version that will work with omitted time elements follows.
function CheckExpiration(LastYear, LastMonth, LastDate, LastHour, LastMin, LastSec, LastMS) {
// document level function to see if passed date less than today's date
// check that numbers are passed as parameters
if (isNaN(LastYear) ) LastYear = 1900;
if (isNaN(LastMonth) ) LastMonth = 1;
if (isNaN(LastDate) ) LastDate = 1;
if (isNaN(LastHour) ) LastHour = 0;
if (isNaN(LastMin) ) LastMin = 0;
if (isNaN(LastSec) ) LastSec= 0;
if (isNaN(LastMS) ) LastMS = 0;
LastMonth = LastMonth - 1; // adjust the passed month to the zero based month
// make the expiration date time object a numeric value
var myDate = new Date( Number(LastYear), Number(LastMonth), Number(LastDate), Number(LastHour), Number(LastMin), Number(LastSec), Number(LastMS) ).valueOf(); // convert passed expiration date time to a date time object value
// get the current date time's object as a numeric value
var today = new Date().valueOf();
// return logical value of the comparison of the passed expiration date value to today - if true document has expired
return (myDate < today);
}
// the following code has to be executed after the above function is defined
// edit following fields
var ExpireYear = 2008; // 2008
var ExpireMonth = 3; // March
var ExpireDate = 21; // 21st
var ExpireHour = 12; // noon
var ExpireMin = 0;
// the following code has to be executed after the above function and variables are defined.
// test for expired document based on result of the passed time elements compared to the current date and time as returned by the CheckExpiration() function.
if (CheckExpiration(ExpireYear, ExpireMonth, ExpireDate, ExpireHour, ExpireMin) ) {
this.closeDoc(1);
app.alert("This file has expired.",1,0,"Expired");
}
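For anyone who wants to experiment with the date arithmetic outside Acrobat, the same comparison can be exercised in any plain JavaScript engine. This is only a sketch of the comparison logic; the function name hasExpired is my own, and none of the Acrobat-specific objects (this, app) appear here:

```javascript
// Returns true if the expiration moment is strictly in the past.
// The month is passed 1-based (January = 1) and adjusted to JavaScript's
// 0-based Date months, mirroring the CheckExpiration() function above.
function hasExpired(year, month, day, hour, min) {
    var expiry = new Date(year, month - 1, day, hour || 0, min || 0).valueOf();
    var now = new Date().valueOf();
    return expiry < now;
}

// A date far in the past has certainly expired...
console.log(hasExpired(2000, 1, 1));   // true
// ...and a date far in the future has not.
console.log(hasExpired(9999, 1, 1));   // false
```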
23. Re: How to set a PDF to expire (Working Script)(Allegra_Pilosio) Aug 1, 2008 7:53 PM (in response to (Bryan_Hardesty))Yes the script does work to expire the pdf, however you can still open the pdf file through Photoshop or Illustrator even though it has expired. Is there any other solution?
24. Re: How to set a PDF to expire (Working Script)(SteveMajors) Aug 1, 2008 9:16 PM (in response to (Bryan_Hardesty))Here's a slightly different method, but it works..... (and, there is certainly no 'work around' for anyone to get data you don't want them to get!)
It uses the Action on Open (sorry, I don't know the 'official' name of this - I'll describe it below..) in the pdf to send the user to a server checking mechanism (in this case, a date checker - but you could do the same thing with lots of stuff...)
Here's how I set it up and it works great!
1. Start a new document that you can get a form field into (don't use LiveDesign for this - that's too fancy... - just make a document in Word that says ______ then print to pdf to get a nice simple pdf that you can edit directly in Acrobat)
2. Do a 'Run Form Field Recognition' and Acrobat will find the blank as a field.
3. Edit this field to have a "key" name (my example, I will use "today" which tells me this is the day that my document starts).
4. Under the 'Options' tab, make a Default Value of today's date (this will be the 'test against' date - for my example I'm showing how to expire a document in 30 days from creation date, but you could use anything you like - it is simple to modify!)
5. From the 'Format' tab, use the Select format category of Date. Use a date format you like (my example uses mm/dd/yyyy). Note that this MUST match the server side code described later.
6. For a 'live' document, you may want to go to the General tab and make the field Hidden as well as other things to not see the 'false' page, but that is outside the scope of this example.
OK, now on to setting up the document to automatically send this info to your server for testing...
1. Go to the 'Pages' view (click on the icon at the top left)
2. Right-click on the page, select Page Properties.
3. Under the 'Actions' tab, select "Page Open" and "Submit a form" from the drop down boxes.
4. Enter a URL to your web page (see below) that does the checking (like so it is easy to remember)
5. Check the HTML radio button.
6. Check the 'Only these...' button and then 'Select fields'. Make sure your "key" field is checked as well as the 'Include Selected' button (sometimes, for me, these weren't - I don't know why, so check it!)
Now, on to the server side...
Here's the code that goes into the php server file:
<?php
$day = date("d");
$month = date("m");
$year = date("Y");
$today = substr($_REQUEST["today"],3,2);
$tomonth = substr($_REQUEST["today"],0,2);
$toyear = substr($_REQUEST["today"],6,4);
$tsp = mktime(0, 0, 0, $month, $day, $year);
$tsd = mktime(0, 0, 0, $tomonth, $today, $toyear);
$xdays = ($tsp - $tsd)/(24*60*60); // as calculated from dates
$maxdays = 30; // set this to whatever your 'expire from today' date is
if ($xdays >= $maxdays) {
    // do what you like to tell the user it is expired
} else {
    // show the info you want them to see
}
?>
(The info shown (or not) is outside the scope of this message; this is just a 'one way to make this work' example. In my system, I use a program called fpdf [] to create the pdf from 'scratch', building it to show the reader what I want them to see - whether it is a "Sorry, this is expired" document, or the data that they came for if the time hasn't expired.)
THAT'S IT!!! It works great and you can use it to do tons of stuff - just set the "key" on the original document and make the server code check whatever it is you want!
Pretty cool, I think! (and, it only took me, a novice programmer, about 3 hours to figure it all out!)
Best of success with it - enjoy!
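The day arithmetic in the PHP above (the difference between two calendar dates in whole days, compared against a maximum) can be sketched in plain JavaScript as well. daysBetween is a hypothetical helper for illustration, not part of Steve's script:

```javascript
// Difference in whole days between two calendar dates, ignoring time of day.
// Mirrors the PHP mktime()/(24*60*60) computation in the post above.
function daysBetween(y1, m1, d1, y2, m2, d2) {
    var MS_PER_DAY = 24 * 60 * 60 * 1000;
    var a = Date.UTC(y1, m1 - 1, d1);   // UTC avoids daylight-saving drift
    var b = Date.UTC(y2, m2 - 1, d2);
    return Math.round((b - a) / MS_PER_DAY);
}

var maxdays = 30;
var elapsed = daysBetween(2008, 8, 1, 2008, 9, 5);
console.log(elapsed);                  // 35
console.log(elapsed >= maxdays);       // true: the document has expired
```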
25. Re: How to set a PDF to expire (Working Script)(Allegra_Pilosio) Aug 1, 2008 11:42 PM (in response to (Bryan_Hardesty))Thank you for your time Steve.
Sorry I should have been a little more clearer, I am actually a designer (not a programmer)
so the above is a little overwhelming, are you able to set it out step by step?
Basically what I am trying to do is to set an expiration date on pdf files that I supply to my clients, so that once the file has expired it also can not be opened and edited via Photoshop/Illustrator should the pdf file land in the hands of another designer.
Is this possible?
26. Re: How to set a PDF to expire (Working Script)Bernd Alheit Aug 2, 2008 1:48 AM (in response to (Bryan_Hardesty))Use digital rights management (DRM) software.
27. Re: How to set a PDF to expire (Working Script)(Allegra_Pilosio) Aug 2, 2008 2:13 AM (in response to (Bryan_Hardesty))Can you recommend any? I am a mac user. I don't have a big budget I am a freelancer?
28. Re: How to set a PDF to expire (Working Script)(SteveMajors) Aug 2, 2008 5:58 AM (in response to (Bryan_Hardesty))try Adobe Document Center. you will need Acrobat 9 for a Mac (as I understand it - I have a client that told me that yesterday...)
29. Re: How to set a PDF to expire (Working Script)(M._Ahmad) Sep 18, 2008 5:30 AM (in response to (Bryan_Hardesty))Hi,
I have posted the same problem in forum at:
M. Ahmad, "How to close a PDF doc opended in IE web browser using JavaScript?" #, 16 Sep 2008 6:44 am
The example of my script works in pdf but not in browser, as most of you are having the same problem.
I will just say:
1) SetPageAction is not the right place to put your JavaScript for this purpose, as the script will not run until the file is opened to that page or the user goes to that page.
2) I put this script as Document Level Script and that way it will run as soon as the file is opened.
3) Yes, the script can be added to one file or a group of files through BATCH PROCESSING. To do this you have to write another script to add this expiry script to the file/files automatically through batch processing
4) I agree, it is not the true security but it is better than having nothing at all.
Thanks.
M.Ahmad
30. Re: How to set a PDF to expire (Working Script)gkaiseril Sep 18, 2008 6:13 AM (in response to (Bryan_Hardesty))Just be aware that any user who turns off JavaScript in their copy of Reader or Acrobat will not be restricted by this approach.
31. Re: How to set a PDF to expire (Working Script)Bernd Alheit Sep 18, 2008 6:40 AM (in response to (Bryan_Hardesty))Or any user with an other PDF viewer.
32. Re: How to set a PDF to expire (Working Script)(Peter_Wepplo) Nov 19, 2008 4:28 PM (in response to (Bryan_Hardesty))I am interested in enabling the method of SteveMajors - Aug 1, 08 PST (#24 of 31). I have tried doing it, but it is beyond my knowledge of what i need to do or how to do it.
I don't know if or whether the options he advocated are necessary or not. I believe he was populating his form from his server, so disabling the script blocks access to the file as well. However, I would like to start with something simple like just the date being sent from my server.
I think my problem is in properly setting up the .php code and capturing it in acrobat (instruction #6).
33. Re: How to set a PDF to expire (Working Script)(faisal_naik) Dec 16, 2008 1:17 AM (in response to (Bryan_Hardesty))Hello,
Using SteveMajor's method of having the pdf checked against a script on a website, is it possible to cross check against the date a pdf is created on the user system with a expiry. To put it simply, i want a user downloading a copy of pdf from my intranet to be able to use it for 24 hours only (the counter starting from the time it is downloaded on the user system). This is to ensure some form of document control / version control.
Thanks!
34. Re: How to set a PDF to expire (Working Script)(Valentine_Deepak_Crasta) Dec 18, 2008 4:35 AM (in response to (Bryan_Hardesty))Allegra Pilosio, you said the script to expire the PDF is working. To restrict opening the file in Illustrator or Photoshop, you can set the document security in the PDF.
Use Ctrl+D or Cmd+D, choose password security
Compatibility: Acrobat 3 or later
Encrypt All document contents
Set a Password
Printing Allowed: High Resolution
Changes Allowed: None
That's it, people can't open your PDF in Photoshop or in Illustrator without knowing the password
35. Re: How to set a PDF to expire (Working Script)(Melissa_Green) Jan 10, 2009 11:02 AM (in response to (Bryan_Hardesty))Too bad we can't turn this around and somehow require Javascript is ON and then proceed. Maybe a pair of documents, first one that checks for Javascript that shows a link to the second document with the time sensitive data. Again, not a long term solution.
36. Re: How to set a PDF to expire (Working Script)DimitriM Jan 10, 2009 12:01 PM (in response to (Bryan_Hardesty))Hi Melissa,
Yes, the problem with all these solutions is that in order to make them work JavaScript must be turned on. But your mention of a "cover" document explaining to the user that JS must be turned on is possible. There is an example PDF at:
AcroDialogs Product Page
Scroll down to the link for "Document License Dialog Example" to download it. If the user does not turn JS on then they cannot view the information under the "cover" layer. If JS is already turned on then they can view it.
Again, this is not an airtight security method, just a pretty good deterrent.
Hope this helps,
Dimitri
WindJack Solutions
37. Re: How to set a PDF to expire (Working Script)(pari_patel) Jan 10, 2009 4:28 PM (in response to (Bryan_Hardesty))Not a big expert on Windows Active Directory, but I am sure within Active Directory there is an option to prevent users from being able to switch off features like JavaScript.
The solution you have proposed here is actually quite handy. As mentioned, this is really to help protect honest people. If you need something more secure I would suggest looking at Windows Active Directory, although of course you need to be running this service in the first place.
Regards. Peter.
38. Re: How to set a PDF to expire (Working Script)try67 Jan 11, 2009 3:18 AM (in response to (Bryan_Hardesty))Switching off JavaScript can be done from within Acrobat, so Active Directory can't prevent it. The layer option is good... I thought of another one -- hiding the pages in templates that are made hidden only when JavaScript is enabled and the script has not yet expired. Of course, an experienced user can display the pages themselves, but it would work for most.
39. Re: How to set a PDF to expire (Working Script)(JRG) Mar 12, 2009 7:48 AM (in response to (Bryan_Hardesty))I'm not familiar with the JavaScript API, but is there a way to modify document appearance with the API? If so, then the document could be rendered in an unreadable format (e.g. white print on white background) and the JavaScript could check the date and modify the appearance. Thus, if JavaScript were disabled, the document could be opened but not "used".
Yes, I understand that any of these measures can be likened to a lock on a screen door, but sometimes that's all you need to redirect the actions of the "almost honest". | https://forums.adobe.com/message/1096909 | CC-MAIN-2017-09 | en | refinedweb |
The model relationships are
section has_many :sections
section belongs_to :section
section has_many :questions
question_set has_many :questions, :through => :question_sets_questions
def test
  question_set_id = params[:qset_id].to_i
  q_ids = QuestionSetsQuestion.where(:question_set_id => question_set_id).pluck(:question_id)
  questions = Question.where(:id => q_ids).includes(:section)
  questions.each do |q|
    section = {id: q.section.id, name: q.section.name}
    parent_section = q.section.section rescue nil
    p parent_section.id
  end
end
bullet
N+1 Query detected
Section => [:section]
Add to your finder: :include => [:section]
N+1 Query method call stack
.includes(section: :section)
questions.each do |q|
  section = {id: q.section.id, name: q.section.name}
  parent_section = q.section.section rescue nil
  while parent_section.present?
    section = {id: parent_section.id, name: parent_section.name, children: [section]}
    parent_section = parent_section.section rescue nil
  end
  p section
end
First, the associations are really confusing.
I think you are calling two hierarchies of sections on a question, so changing this line should work:
questions = Question.where(:id => q_ids).includes(section: :section)
You are accessing the parent section of
q.section so we need to include that as well
parent_section = q.section.section rescue nil
EDIT
It's getting messier. I am not sure if you should do this, but I think it will solve the problem:
questions = Question.where(:id => q_ids).includes(section: [section: :section]) | https://codedump.io/share/THZ4ZS6Y4Kvf/1/rails-n1-query-with-hasmany-association | CC-MAIN-2017-09 | en | refinedweb |
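The parent-walking loop from the question can be exercised without Rails at all. The Section struct below is a stand-in for the ActiveRecord model, just to show the shape of the nested hash the loop builds:

```ruby
# Stand-in for the ActiveRecord Section model: id, name, parent section.
Section = Struct.new(:id, :name, :section)

root  = Section.new(1, "Top")
child = Section.new(2, "Middle", root)
leaf  = Section.new(3, "Leaf", child)

# Same algorithm as the loop in the question: start at the leaf and wrap
# each parent around the accumulated hash as a :children entry.
section = { id: leaf.id, name: leaf.name }
parent_section = leaf.section
while parent_section
  section = { id: parent_section.id, name: parent_section.name, children: [section] }
  parent_section = parent_section.section
end

p section
# => a nested hash: Top wraps Middle, which wraps Leaf
```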
C++ Programming/Code/IO/Streams/string

Contents

The string class

Basic usage
Declaring a std::string is done by using one of these two methods:

using namespace std;
string std_string;

or

std::string std_string;

Text I/O

Getting user input

We will be using this dummy string for some of our examples:

string str("Hello World!");

This invokes the constructor taking a const char* argument. (The default constructor creates a string which contains nothing, i.e. an empty string.)

Size

I/O
Because getline() returns the stream it reads from (i.e. it returns the input stream), you can nest multiple getline() calls to get multiple strings; however this may significantly reduce readability.

Operators
Strings can be compared with the usual comparison operators, similar to the C strcmp() function. These return a true/false value.

if(str == "Hello World!") { std::cout << "Strings are equal!"; }

Searching strings

Inserting/erasing

Backwards compatibility

String Formatting

Example
MemberInfo::Module Property

using namespace System;
using namespace System::Reflection;

public ref class Test
{
public:
    virtual String^ ToString() override
    {
        return "An instance of class Test!";
    }
};

int main()
{
    Test^ target = gcnew Test();
    MethodInfo^ toStringInfo = target->GetType()->GetMethod("ToString");
    Console::WriteLine("{0} is defined in {1}", toStringInfo->Name, toStringInfo->Module->Name);
    MethodInfo^ getHashCodeInfo = target->GetType()->GetMethod("GetHashCode");
    Console::WriteLine("{0} is defined in {1}", getHashCodeInfo->Name, getHashCodeInfo->Module->Name);
}

/*
 * This example produces the following console output:
 *
 *   ToString is defined in source.exe
 *   GetHashCode is defined in mscorlib.dll
 */
• The statements that enable you to control program flow in a C# application fall into three main categories: selection statements, iteration statements, and jump statements.
• In each of these cases, a test resulting in a Boolean value is performed and the Boolean result is used to control the application's flow of execution.
• You use selection statements to determine what code should be executed and when it should be executed. C# features two selection statements: the switch statement, used to run code based on a value, and the if statement, which runs code based on a Boolean condition.
• The most commonly used of these selection statements is the if statement.
• The if statement executes one or more statements if the expression being evaluated is true.
The if statement's syntax follows - the square brackets denote the optional use of the else statement (which we'll cover shortly):

if (expression) statement1 [else statement2]

• Here, expression is any test that produces a Boolean result. If expression results in true, control is passed to statement1. If the result is false and an else clause exists, control is passed to statement2. Also note that statement1 and statement2 can consist of a single statement terminated by a semicolon (known as a simple statement) or of multiple statements enclosed in braces.
• One aspect of the if statement that catches new C# programmers off guard is the fact that the expression evaluated must result in a Boolean value. This is in contrast to languages such as C++ where you're allowed to use the if statement to test for any variable having a value other than 0.
The switch Statement
• Using the switch statement, you can specify an expression that returns an integral value and one or more pieces of code that will be run depending on the result of the expression. It's similar to using multiple if/else statements, but although you can specify multiple (possibly unrelated) conditional statements with multiple if/else statements, a switch statement consists of only one conditional statement followed by all the results that your code is prepared to handle. Here's the syntax:

switch (switch_expression)
{
    case constant-expression:
        statement
        jump-statement
    [case constant-expressionN: ...]
}

• First, the switch_expression is evaluated, and then the result is compared to each of the constant-expressions or case labels defined in the different case statements. Once a match is made, control is passed to the first line of code in that case statement.
• There are two main rules to keep in mind here. First, the switch_expression must be of the type (or implicitly convertible to) int, uint, long, ulong, char, or string (or an enum based on one of these types). Second, you must provide a jump-statement (including the break statement) for each case statement unless that case statement is the last one in the switch.
• In C#, the while, do/while, and for statements enable you to perform controlled iteration, or looping.
• In each case, a specified simple or compound statement is executed until a Boolean expression resolves to true.
• The do/while statement takes the following form:

do embedded-statement while ( Boolean-expression )

• Because the evaluation of the while statement's Boolean-expression occurs after the embedded-statement, you're guaranteed that the embedded-statement will be executed at least one time.
The break Statement
• You use the break statement to terminate the current enclosing loop or conditional statement in which it appears. Control is then passed to the line of code following that loop's or conditional statement's embedded statement.
• Having the simplest syntax of any statement, the break statement has no parentheses or arguments and takes the following form at the point that you want to transfer out of a loop or conditional statement:
using System. if (i == 66) break.Text.• • • • • • • • • • • • • • • • • • • • • using System. i <= 100.WriteLine(i).Generic.Collections.Linq. } } } } . using System. i++) { if (i%6==0) Console. using System. for (i = 1. namespace ConsoleApplication22 { class Program { static void Main(string[] args) { int i.
The continue Statement
• Like the break statement, the continue statement enables you to alter the execution of a loop. However, instead of ending the current loop's embedded statement, the continue statement stops the current iteration and returns control back to the top of the loop for the next iteration.

The return Statement
• The return statement has two functions. It specifies a value to be returned to the caller of the currently executed code (when the current code is not defined as returning void), and it causes an immediate return to the caller. The return statement is defined with the following syntax:

return [ return-expression ]

• When the compiler encounters a method's return statement that specifies a return-expression, it evaluates whether the return-expression can be implicitly converted into a form compatible with the current method's defined return value. It's the result of that conversion that is passed back to the caller.
Hi all!
I'm trying to read a string of characters from a file (terminating at the first space, newline, or tab character), dynamically allocate memory for it, and then copy the contents of the file to the allocated memory.
I keep getting weird output from the following code. For some reason, it allocates the memory for the array fine, but it reports the array size as 4. It only outputs part of the array (first 4 characters), and then when it goes to deallocate the memory, it gives me a heap corruption error! :P I've also reproduced the file's contents at the end of this (really small). Can anyone help figure out what's going wrong here?
Please note the int flag is no longer really used. I used it for debugging earlier.
Thanks!!
-Max
Code:
#include <iostream>
#include <string>
#include <fstream>
using namespace std;

int main()
{
    char c = 0;
    char *chararray = NULL;
    int flag = 0;
    int count = 0;
    ifstream myin("mytest.txt");
    if (myin.bad()) cout << "Bad file!!";
    while (c != '\t' && c != '\n' && c != ' ')
    {
        c = myin.peek();
        cout << c;
        ++count;
        myin.seekg(count, ios::beg);
        flag = myin.tellg();
    }
    chararray = new char[count];
    int sizeofarray = sizeof(chararray);
    cout << endl << endl << sizeofarray << endl << sizeof(char) << endl;
    myin.seekg(0, ios::beg);
    myin.read(chararray, count);
    chararray[count] = '\0';
    for (int a = 0; a < sizeofarray; a++)
        cout << chararray[a];
    cout << endl << count << endl;
    delete chararray;
}

File mytest.txt:
Readmeyoubiotch!! 3.46 34534sdf | https://cboard.cprogramming.com/cplusplus-programming/119422-using-peek-make-dynamic-array.html | CC-MAIN-2017-09 | en | refinedweb |
I'm trying to race 6 snails on a 100m track using threads. Here's the whole code. Why do some of the snails not run at all? Why don't they finish the 100m track? (I actually want all of them to reach the finish line. Then I'll print the winners at the end of the program.)
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <pthread.h>

struct snail_thread{
    int move;
    char snail_name[10];
    char owner[10];
};

int sum = 0;

void printval(void *ptr) {
    struct snail_thread *data;
    data = (struct snail_thread *) ptr;
    while(sum < 100) {
        sum += data->move;
        printf("%s moves %d mm, total: %d\n", data->snail_name, data->move, sum);
    }
    pthread_exit(0);
}

int main(void) {
    pthread_t t[6];
    struct snail_thread s[6];
    int i;
    srand(time(NULL));
    for(i = 0; i < 6; i++)
        s[i].move = rand() % ((5 + 1) - 1) + 1;
    strcpy(s[0].snail_name, "Snail A");
    strcpy(s[0].owner, "Jon");
    strcpy(s[1].snail_name, "Snail B");
    strcpy(s[1].owner, "Ben");
    strcpy(s[2].snail_name, "Snail C");
    strcpy(s[2].owner, "Mark");
    strcpy(s[3].snail_name, "Snail D");
    strcpy(s[3].owner, "Jon");
    strcpy(s[4].snail_name, "Snail E");
    strcpy(s[4].owner, "Mark");
    strcpy(s[5].snail_name, "Snail F");
    strcpy(s[5].owner, "Ben");
    for(i = 0; i < 6; i++)
        pthread_create(&t[i], NULL, (void *) &printval, (void *) &s[i]);
    for(i = 0; i < 6; i++)
        pthread_join(t[i], NULL);
    return (0);
}
Because your sum is global and all snails are incrementing it. Put sum also in the struct.

Another little tip, for nicer results, make the move random for each step. Now the move is the same as the speed and you can know who wins without racing.

(And come on, give your snails better names than "Snail A" ;-)). | https://codedump.io/share/IqqIxm3QF2Bb/1/stimulate-a-racing-game-using-threads | CC-MAIN-2017-09 | en | refinedweb |
tag:blogger.com,1999:blog-31790418380041009362017-02-18T04:02:24.581-05:00Alexis Hevia on Web DevelopmentA collection of tips & tricks I've learned during my work as a full-stack web developerAlexis Hevianoreply@blogger.comBlogger11125devalexishevia React + Flux using Yahoo's Fluxible, part 2<p>On my <a href="">last post</a>, I demonstrated how to create a simple Isomorphic Application with multiple routes using Yahoo's Fluxible library. On this post, I will expand that app so it can communicate with a backend API, in order to build a simple To-Do application.</p><a name='more'></a><p>** All the source code for this post is available at <a href="">this repo</a>. Each commit corresponds to a new section on this post. </p><h4>Todos</h4><div><span style="background-color: white; color: #444444; font-family: Consolas, 'Liberation Mono', Menlo, Courier, monospace; font-size: 12px; line-height: 16.7999992370605px;"><a href="">6f3ba93</a></span></div><p>We'll start by creating a CRUD service for our Todos. </p><p data-</p><p>As you can see, this is a very basic service that just stores todos as an array on memory. However, we use a setTimeout on every action to simulate we're calling a database.</p><p>In order to get our app to use the todos service, we'll use Yahoo's <a href="">fluxible-plugin-fetchr</a>. <a href="">Fetchr</a> allows us to consume a service from both the server and the client without duplicating code.</p><p>We need to make some changes on app.js and server.js</p><p data-</p><p data-</p><p>It's important to note that Fetchr expects our app to be using body-parser (or an alternative middleware that populates req.body) before we use the fetchr middleware.</p><p><p>Now we need to define a showTodos action to fetch our todos.</p><p data-</p><p>Notice how our action takes a <i>context</i> parameter as its first argument (line 7). This context has a <i>dispatch</i> method that we use to notify our stores (lines 8, 12, and 16). 
It also has a <i>service</i> attribute, which allows us to interact with any service that has been registered with our app. On line 10 we execute a <i>read()</i> action on our todo service, in order to fetch all todos. </p><p>Since we want todos to be displayed on the home page, we'll need to make sure the showTodos action runs every time the user visits the home page. This is very simple, we just need to declare it on routes.js.</p><p data-</p><p>After the todos have been fetched, we need to save them on a store so they're available to the rest of the application. Let's create a TodoStore for this. </p><p data-</p><p>And now all we're missing is modifying our Home.jsx component so it reads todos from the TodoStore and then renders them.</p><p data-</p><p. </p><p>Our app should now be able to fetch all todos and render them on the home screen. However, this might be a good time to review our now expanded request/response cycle: <ol><li>A new request is made, server.js receives it.</li><li>Our middleware creates a new context instance and calls context.executeAction(navigateAction), passing it the current route.</li><li>navigateAction uses the routrPlugin to look for a matching route (home). 
Since we defined the showTodosAction as an action for the home route, the showTodosAction is executed.</li><li>The showTodosAction uses the fetchr plugin to make a 'read' request on the todos service.</li><li>After the read request succeeds, the showTodosAction dispatches a 'RECEIVE_TODOS_SUCCESS' action, with the todos that were returned.</li><li>The 'RECEIVE_TODOS_SUCCESS' action is dispatched to all stores registered with the app.</li><li>The TodoStore executes its _receiveTodos() method in response to the 'RECEIVE_TODOS_SUCCESS', and updates its local copy of todos.</li><li>The showTodosAction calls its callback function, letting the NavigateAction know it can continue.</li><li>The navigateAction emits the 'CHANGE_ROUTE_SUCCESS' action, which is dispatched to all stores registered with the app.</li><li>ApplicationStore executes its handleNavigate() method in response to the 'CHANGE_ROUTE_SUCCESS' action, and updates its state.</li><li>Inside the executeAction callback, we create a new instance of our Application component, passing it the current context as a prop.</li><li>We render the Application component as a string, and send the result as our response.</li></ol></p><p>Notice: when we visit the About page, and then click on the Home page, an AJAX request will be made to read from the todos service.</p><h4>Add Todos</h4><div><span style="background-color: white; color: #444444; font-family: Consolas, 'Liberation Mono', Menlo, Courier, monospace; font-size: 12px; line-height: 16.7999992370605px;"><a href="">bd2f097</a></span></div><p>We can now read existing todos, but we're missing the ability to add new ones. Let's fix that by adding a createTodo action.</p><p data-<p data-<h4>Conclusion</h4><p>We now have a working Isomorphic application that communicates with a CRUD service. 
Even though the Update and Delete actions were not implemented, doing so should be pretty trivial now, and is left as an exercise for the reader.</p><p>Thanks for reading this 2-part series on building Isomorphic apps with Fluxible, hope you enjoyed it!</p>Alexis Hevia React + Flux using Yahoo's Fluxible, part 1<div>Earlier this year Facebook announced Flux, an "architecture for building user interfaces that eschews MVC in favor of a unidirectional data flow". The <a href="">official Flux website</a> does a pretty good job explaining the virtues and overall philosophy of Flux, but falls a bit short when demonstrating a real implementation.</div><div><br /></div><div>That's where the <a href="">Yahoo Flux examples</a> come in. Not only do they demonstrate how to use Flux, but they do it by building an Isomorphic application.<br /><br /><a name='more'></a></div><div>For the examples, the Yahoo team is using some pretty cool, open source libraries they created: <a href="">fluxible</a>, <a href="">dispatchr</a>, <a href="">fetchr</a>, and <a href="">routr</a>, among others. At first glance I didn't quite understand what was going on and what role each library was playing, so I decided to rewrite the examples, one step at a time.</div><div><br /></div><div>** All the source code for this post is available at <a href="">this repo</a>. Each commit corresponds to a new section on this post.<br /><br /></div><div>** Update: Feb. 8, 2015 - Code was updated to work with Fluxible v0.2.0 and nodemon v1.3.6 (thanks for the help <a href="">Legogris</a>!).<br /><br /></div><div>** Update: Mar. 
26, 2015 - The Fluxible team has published a slightly modified version of this post <a href="">on their site</a>.<br /><br /></div><div><h4>Hello World</h4></div><div><span style="background-color: white; color: #444444; font-family: Consolas, 'Liberation Mono', Menlo, Courier, monospace; font-size: 12px; line-height: 16.7999992370605px;"><a href="">cffce64</a></span></div><div><br />We'll start by creating a very basic React component that renders the classic "Hello World".</div><div data-</div><br /><div>We'll be rendering this component server-side, using an Express application.</div><div><br /></div><div>This is how our server.js looks like:</div><div data-</div><br /><div>This is a pretty basic Express app, but there are some React-specific details:</div><div><br /><ul><li>Line 5: Notice how we're requiring 'node-jsx' and calling its install() method. This allows us to require JSX files as if they were regular JS files, without having to worry about compiling.</li><li>Line 9-13: we're defining a custom middleware, which creates a new instance of our component and renders it using React's renderToString().</li></ul></div><div><h4></h4><h4>Using routes</h4></div><div><span style="background-color: white; color: #444444; font-family: Consolas, 'Liberation Mono', Menlo, Courier, monospace; font-size: 12px; line-height: 16.7999992370605px;"><a href="">29a099f</a></span></div><div><br /></div><div>To make our example more interesting, let's create two different components: <Home> and <About>, each representing one page.</div><div data-</div> <div data-</div><br /><div>And let's define some routes to match each page.</div><div data-</div><div><br /><div>Let's also create an ApplicationStore, whose only job will be keeping track of which page should currently be displayed.</div><div data-</div><div><br /></div><div>Things to note:</div><div><ul><li>Line 8: we're using fluxible's createStore() to create our store.</li><li>Line 35-42: we're defining a getState() method 
that returns which pages are available in the application and which one should currently be displayed.</li><li>Line 11: we're defining an action handler for 'CHANGE_ROUTE_SUCCESS'. Whenever this action is dispatched, we'll call the handleNavigate() method. </li><li>Line 19-31: handleNavigate() takes a route object and updates the page to be displayed.</li></ul></div><div>Keep in mind the ApplicationStore doesn't do any rendering, it just keeps track of what should be rendered. We need to update Application.jsx so it reads the current page from the ApplicationStore.</div><div data-</div><div><br /></div><div>Things to note:</div><div><ul><li>Line 9: We're using the FluxibleMixin, which adds some convenient methods to interact with stores.</li><li>Line 11: We're getting the component state by reading from the ApplicationStore.</li><li>Line 16: We're using state.currentPageName to determine which component to render.</li></ul></div><div>We now have all the pieces in place, but we need a way to tie them together. That's the job of our Fluxible App:</div> <div data-</div> <div><br />Things to note:<br /><ul><li>Line 6: We're creating our app by calling Fluxible() and defining Application.jsx as our top-level component.</li><li>Line 10: We're adding the routrPlugin to our app, and telling it to use our previously defined routes.</li><li>Line 14: We're registering the ApplicationStore with our app</li></ul></div><div>And the last step to get our app working is to update our middleware on server.js, so it uses the Fluxible App</div> <div data-</div> <br />Let's take a step back and analyze what's going on here, by explaining what happens when a request is received:<br /><ol><li>A new request is made by the browser, server.js receives it.</li><br /><li>Line 11: Our middleware takes over and creates a new context. 
<i>Fluxible uses contexts as an encapsulation mechanism, to prevent data from leaking between requests.</i></li><br /><li>Line 13: The middleware then executes navigateAction, and passes it the current route as a param. <i>navigateAction is a convenient method defined on the 'flux-router-component' library. It helps us deal with route matching.</i></li><br /><li>navigateAction uses the routrPlugin to look for a matching route. </li><ol><li>If a match is found, a 'CHANGE_ROUTE_SUCCESS' action is dispatched. </li><li>If a match is not found, an error with 404 status is provided to the callback.</li></ol><br /><li>The 'CHANGE_ROUTE_SUCCESS' action is dispatched to all stores registered with the app.</li><br /><li>ApplicationStore executes its handleNavigate() method in response to the 'CHANGE_ROUTE_SUCCESS' action, and updates its state.</li><br /><li>Line 25-28: The navigateAction callback is executed. Inside the callback, a new Application component instance is created, and is handed the current context. <i>The context contains, among other things, the updated ApplicationStore.</i></li><br /><li>Line 29-31: We render the Application component as a string, and send the result as our response. <i>Since the Application component gets its state from the ApplicationStore, the correct page is rendered. Note: the getStore() method provided by the FluxibleMixin knows how to get stores from the provided context.</i></li></ol>The application should now be working. If we visit we'll se our home page, and if we visit we'll see our about page.<br /><br /></div><div><h4>Adding a NavBar</h4><span style="background-color: white; color: #444444; font-family: Consolas, 'Liberation Mono', Menlo, Courier, monospace; font-size: 12px; line-height: 16.7999992370605px;"><a href="">1e08f68</a></span><br /><br />Having to manually enter URLs to change pages is not much fun. 
How about we add a navigation bar at the top of our application to let users switch between pages?<br /><br />Let's create a Nav.jsx component that renders one NavLink for each available page. A NavLink is a special React component defined by the flux-router-component library, which executes the navigateAction when clicked. <div data-</div> <br />And let's modify Application.jsx so it renders our Nav component above the pages. <div data-</div> <br / <a href="">Pure CSS</a> library. <div data-</div> <br />Finally, let's modify server.js so it renders our App component inside our new HTML component.<br /> <div data-</div> <br /><h4>Client App</h4><span style="background-color: white; color: #444444; font-family: Consolas, 'Liberation Mono', Menlo, Courier, monospace; font-size: 12px; line-height: 16.7999992370605px;"><a href="">b7b6f4b</a></span><br /><br />Up to this point all we've really done is create a traditional web application. We're still missing the client side code to get a real Isomorphic application in place.<br /><br /.<br /><br />We need to define these hydrate functions on our ApplicationStore: <div data-</div> <br />Then we need to modify server.js so it attaches the current app state on the Express response object. <div data-</div> <br />Things to note:<br /><br /><ul><li>Line 12-13: We're extending our Express server with the <a href="">express-state</a> library. This library adds an expose() method to the Express response object. Whatever we set with expose() will be available on the response object's "locals.state".</li><li>Line 30: We're calling expose() on the response object, telling it to hold on to our dehydrated app state. </li><li>Line 34: We're passing res.locals.state as a prop to our HTML component.</li></ul><div>Now we can create a client app that grabs the dehydrated state sent by the server and rehydrates itself before re-rendering and taking over. 
</div> <div data-</div> <div><br /></div><div>We'll need a way to bundle up the client app so the browser can read it, so we'll use <a href="">Webpack</a> and its <a href="">jsx-loader</a>. The following config file should be enough to have it working:</div> <div data-</div> <div><br /></div><div>At this point our client side app should be able to take over after the HTML is finished rendering. <br /><br />Although the client app re-renders after it rehydrates, we shouldn't see any flashing. <i>This is because React compares the new render with the current DOM state, and won't update the DOM unless something has actually changed.</i></div><div><br /></div><div>However, if you try switching between pages, you'll notice the NavBar is now broken. Even though the navigateAction is being executed on click, and our ApplicationStore is being updated, our Application component is not re-rendering. </div><div><br /></div><div>This is easy to fix. Let's add a store listener so our Application component updates whenever the ApplicationStore changes.</div> ="12-21" data-</div> <br />Things to note:<br /><ul><li>Line 12-14: We're adding a store listener for the ApplicationStore. Whenever a store we're listening to emits a 'change' event, our component's onChange() function will be called (this functionality is provided by the FluxibleMixin of the fluxible library).</li><li>Line 18-21: our onChange function just gets a new state by asking the ApplicationStore for its current state.</li></ul><br />You should now see the correct pages being rendered when you click on the NavBar, but you might notice the browser's URL is not actually changing. This can also be easily fixed, by using the RouterMixin provided by the flux-router-component library.<br /><br />The RouterMixin updates the browser history whenever a route changes, and also handles popstate events (when a user clicks on the browser's back button) in a Flux-appropiate way. 
="7-11" data-</div> <br /><h4>Conclusion</h4>In this post we've seen how easy it is to create an Isomorphic application with multiple routes by using Yahoo's Fluxible library. On part 2, I will expand this app so it can communicate with a backend API, in order to build a simple To-Do application. </div>Alexis Hevia apps with Backbone.js and ReactEver since AJAX came into the scene more than 10 years ago, developers have been looking for a way to build fast, efficient, and powerful web applications with as little maintenance nightmares as possible. Many developers now prefer to create single page applications (SPAs), delegating the entire UI rendering to the client. However, SPAs come with a set of problems:<br /><br /><a name='more'></a><ul><li>SEO<br />Since your server basically returns a blank page, crawlers have no way to index your content. There are <a href="">some ways</a> to fix this problem, but it requires additional code and maintenance issues.<br /></li><li>Performance<br />Your application now has to wait for all of the Javascript to load before rendering something on the screen. This initial time before content is rendered (called <a href="">time to first tweet</a> by Twitter) has a huge impact on the perceived-speed of your application.<br /></li><li>Accessibility<br />Even though it is a very low percentage, there are people out there who don't have Javascript enabled on their browser. For these people, a SPA is as good as nothing.<br /></li><li>Stateful URLs<br /.<br /></li><li>Complexity<br />On the old days, the request-response cycle was always the same: <ol><li>A client requests a route.</li><li>The route is matched to a controller.</li><li>The controller fetches one or more models from the database.</li><li>The view is rendered with those models.</li></ol>MVC was simple and clean.<br /><br /.</li></ul><br /><h4>Isomorphic Apps</h4><br /><div>There is a new trend picking up strength in the web development world: <a href="">Isomorphic Apps</a>. 
The core idea is pretty simple: create applications that can be rendered both client-side and server-side with little or no code changes.</div><br /> <div>I decided to build my own Isomorphic App, trying to keep things as simple as possible. You can see the app running <a href="" target="_blank">here</a>, and you can find the source code <a href="" target="_blank">here</a>.).</div><br/><div>In the rest of this post I will explain how to build it.</div></div><br /><h4>Routes</h4><br /><div>I decided to write my routes using the <a href="">RFC 6570</a> URI Templates format, so they would be parseable by the <a href="">uri-templates</a> library.</div><div><br /></div><div>My app only has two routes:</div><ol><li>Index: displays either the latest or the most popular posts. You decide which list to show by modifying the "display" query param.</li><li>Show Post: displays a single post.</li></ol><div><br />My routes.js file looks like this:</div><div><pre class="prettyprint linenums">/*-- routes.js --*/<br /><br />define(function(){<br /> return {<br /> '/posts/{id}': 'show_post',<br /> '/{?display}': 'index'<br /> }<br />});<br /></pre></div><div>Each route is made from a uri-template and a controller id. The controller id will be used by require.js to load the controller.</div><br /><br /><h4>Controllers</h4><br /><div>On my application, a controller has 3 objectives:</div><ol><li>Determine which view to render.</li><li>Determine which params to pass to the view.</li><li>Determine which data to fetch from the datastore.</li></ol><div>This is how the index controller looks like:</div><div><pre class="prettyprint linenums"><br />/*-- controllers/index.js --*/<br /><br />define([<br /> 'views/index',<br /> 'collections/posts/latest',<br /> 'collections/posts/most_popular'<br />], function(IndexView, LatestPosts, MostPopularPosts){<br /><br /> return {<br /><br /> // 1. which view to render<br /> view: IndexView,<br /><br /> // 2. 
which params to pass to the view<br /> params: function(urlObj, getURL){<br /><br /> // urlObj contains the parsed route<br /> // if no display param was sent, we'll display the latest posts.<br /> var display = urlObj.display || 'latest';<br /><br /> // the collection to use depends on the display param<br /> var collection = {<br /> latest: LatestPosts,<br /> popular: MostPopularPosts<br /> }[display];<br /><br /> // these are the params sent to the view<br /> return {<br /> getURL: getURL,<br /> display: display,<br /> posts: new collection()<br /> };<br /><br /> },<br /><br /> // 3. which data to fetch from the datastore<br /> mustFetch: function(params){<br /> return ['posts'];<br /> }<br /><br /> };<br /><br />});<br /></pre></div><div>As you can see, the controller doesn't actually fetch the data or render the view. It only specifies what should be done. You might have noticed the params function takes 2 arguments:</div><ol><li>urlObj: an object with the result from parsing the route against the uri template.</li><li>getURL: a function that lets views render urls easily (we'll see it in action later)</li></ol><h4>Models</h4><br /><div>I'm using regular <a href="">Backbone.js</a> models and collections. I also use the <a href="">Supermodel.js</a> plugin on some models, but that is not required for the isomorphic app to work.</div><br/><h4>Views</h4><br /><div>I decided to use <a href="">React</a> components as views. React works especially well with our lifecycle because we can re-render the whole app on every route change without incurring in performance penalties. 
To read more about React's virtual dom and why it's so efficient, check out <a href="" target="_blank">Why did we built React?</a></div><br /><div>This is how the index view looks like: </div><div><pre class="prettyprint linenums"><br />/*-- views/index.js --*/<br /><br />/** @jsx React.DOM */<br /><br />define([<br /> 'react',<br /> 'underscore',<br /> 'mixins/backbone_binding'<br />], function(React, _, BackboneBinding){<br /><br /> var TABS = {<br /> latest: { text: 'Latest', url: '/?display=latest' },<br /> popular: { text: 'Most Popular', url: '/?display=popular' }<br /> };<br /><br /> return React.createClass({<br /> displayName: 'Index',<br /><br /> render: function(){<br /> return (<br /> <div><br /> <div><br /> { this.renderTab('latest') }<br /> { this.renderTab('popular') }<br /> </div><br /> { this.renderPosts() }<br /> </div><br /> );<br /> },<br /><br /> renderTab: function(name){<br /> var attrs = TABS[name];<br /><br /> varclient-side orchestrator</a> and <a href="" target="_blank">server-side orchestrator</a>. They both do the same actions:</div><ul><li>Parse the current route against the uri-templates.</li><li>Fetch the corresponding router.</li><li>Find out which view to render, along with the params that must be sent.</li><li>Fetch data from the datastore.</li><li>Render the view with the correct params and data.</li></ul><br /><div>Even though they share most of the code, there are some things that are done differently on each side, for example:</div><ul><li>On the client-side we find the current url from window.location. On the server-side we get the current url from the request.</li><li>On the client-side we render the view and then fetch data asynchronously, re-rendering when each request is completed. On the server-side we fetch all data before rendering.</li></ul><br /><h4>Conclusion</h4><br /><div>After this small experiment with Isomorphic apps, I'm starting to see a lot of benefits from this approach. 
Even though it was pretty easy to implement using current libraries, I expect to see new tools being created in the near future to make writing isomorphic apps even easier to accomplish.</div><br /><div>Like I mentioned before, you can take a look at the source code <a href="" target="_blank">here</a>. Please let me know in the comments if you find anything that can be improved.</div>Alexis Hevia loading optimized require.js filesOne of the greatest benefits of using <a href="" target="_blank">require.js</a> is that you're able to structure your code into small, manageable modules, each living in its own file. This is great during development, but loading multiple files as separate HTTP requests is a performance-killer on production.<br /><br />That is why require.js comes with <a href="" target="_blank">r.js,</a>.<br /><a name='more'></a><br /><h4>FooBar Inc - A Sample Application</h4><br />I wrote a very simple application to demonstrate how to split your app into separate bundles using require.js. 
You can see the app running on <a href="" target="_blank"></a>, and you can grab the source code <a href="" target="_blank">here</a>.<br /><br />There are 3 important files on this application:<br /><ul><li><a href="">app.js</a>: loads the organization data and delegates content rendering to one of the renderers.</li><li><a href="" target="_blank">renderers/list</a>: renders the company members as a list.</li><li><a href="" target="_blank">renderers/chart</a>: renders the company members as a weird, folder-based organizational chart (it was the fastest thing to code).</li></ul>The easiest thing to do would be requiring ListRenderer and ChartRenderer from app.js, like this: <pre class="prettyprint"><br />/* app.js */<br /><br />define([<br /> 'organization_data', 'renderers/list', 'renderers/chart'<br />], function(data, ListRenderer, ChartRenderer){<br /><br /> function choseRenderer(){<br /> if(someCondition){<br /> return ListRenderer;<br /> }<br /> else {<br /> return ChartRenderer;<br /> }<br /> }<br /><br /> function render(el){<br /> var renderer = choseRenderer();<br /> el.innerHTML = renderer.render(data);<br /> }<br /><br /> return { render: render };<br />});<br /></pre>If we run the optimizer on app.js, we would get a single file that would contain all of the dependency tree, which is: <ul> <li>app.js <ul> <li>organization_data.js</li> <li>renderers/list.js <ul> <li>underscore.js</li> </ul> </li> <li>renderers/chart.js <ul> <li>jstree.js</li> </ul> </li> </ul> </li></ul>Since it is a very basic application, that wouldn't be so bad. However, imagine we had a lot of renderers, and each of these renderers had many unique dependencies (for example, ChartRenderer loads jstree, which is not used anywhere else on the app). <br /><br />If we required all of them on app.js, then we would make the client load a bunch of code that could potentially never be used, making our app unnecessarily slower to load. 
<br /><br /><h4>Loading files dynamically</h4><br />Let's modify app.js so each renderer gets required if and when it is going to be used, like this: <pre class="prettyprint"><br />/* app.js */<br /><br />define([<br /> 'organization_data'<br />], function(data){<br /><br /> function choseRenderer(){<br /> if(someCondition){<br /> return 'list';<br /> }<br /> else {<br /> return 'chart';<br /> }<br /> }<br /><br /> function render(el){<br /> var renderer = choseRenderer();<br /><br /> // dynamically load renderer<br /> require(['renderers/' + renderer], function(loadedRenderer){<br /> el.innerHTML = loadedRenderer.render(data);<br /> });<br /> }<br /><br /> return { render: render };<br />});<br /></pre>If we run the optimizer on app.js now, only app.js and organization_data.js will be included in our result, because r.js doesn't include dynamically loaded files. <ul> <li>app.js <ul> <li>organization_data.js</li> </ul> </li></ul>This means we can now run the optimizer on each renderer individually, and each renderer will contain its own dependencies: <ul> <li>renderers/list.js <ul> <li>underscore.js</li> </ul> </li> <li>renderers/chart.js <ul> <li>jstree.js</li> </ul> </li></ul>And now our whole application is optimized, but our renderers will only load if they are actually going to be used. <br /><br /><h4>Avoiding duplicate dependencies</h4><br />Let's say we had multiple renderers that depended on underscore.js. If we optimize each renderer with the default values, then each would get a separate copy of underscore.js, which is not ideal. <br /><br />Instead, you can optimize the base app.js using "include=underscore", and optimize each renderer using "exclude=underscore". That way underscore will only be included once, on app.js. <br/><br />To read more about the include and exclude options, see <a href="">RequireJS Optimizer</a>. <br /><br />Remember you can take a look at the source code for FooBarInc <a href="">here</a>. 
I even included a Gruntfile that shows how to dynamically look for every renderer and optimize it, so you don't have to remember to add a new grunt task every time you add a new renderer. Alexis Hevia Cleaning your postgres database before each spec If you run integration tests, you've probably come across this problem:<br />You need to find a way to clean your database before each spec runs.<br /><br />There are two common approaches to solve this problem:<br /><ol><li>Run every spec inside a transaction</li><li>Truncate all tables before (or after) every spec</li></ol><a name='more'></a>Running your specs inside transactions is the faster approach, and since integration tests can become quite large and slow, it's the approach preferred by many developers. However, it has one caveat: you can't test scenarios where transactions are used (you can't open a transaction inside a transaction). This leaves some developers with no choice but to use the slower but more flexible approach of truncating tables before each spec.<br /><br />However, if you happen to use PostgreSQL, there is another option:<br /><b><i>Dropping the database and creating it again from a template.</i></b><br /><br />While this sounds slow and expensive, because of the way PostgreSQL handles templates, it's actually pretty fast.<br /><br />In my test suite, this is what I usually do:<br /><br /><h3>Before all specs</h3><ol><li>Make sure an empty database called 'test_template' exists.</li><li>Dump my development database's schema and populate the test_template database with it.
Now 'test_template' is effectively an empty copy of your development database.</li></ol>This can be accomplished with 3 simple commands (I run them using <a href="">grunt-exec</a>, but any script that can run shell commands should work):<br /><pre class="prettyprint"># drop database<br />psql -c "DROP DATABASE IF EXISTS test_template" postgres<br /><br /># create database<br />psql -c "CREATE DATABASE test_template WITH OWNER test_user" postgres<br /><br /># dump dev db schema and pipe to test_template<br /># make sure your test user is the one who creates the new tables<br /># (that way he has implicit permissions to access them)<br />pg_dump -s --clean dev_db_name | PGPASSWORD=test_password psql -h localhost -U test_user test_template</pre>**Note: you can ignore the "before all" steps if you keep the test_template schema in sync with your dev db schema<br /><br /><h3>Before each spec</h3><ol><li>Drop the 'test' database (if it exists)</li><li>Create the 'test' database from the 'test_template'</li></ol>I run this in a beforeEach clause in Jasmine, but as long as it runs before each spec you should be fine: <br /><pre class="prettyprint">-- drop test database<br />DROP DATABASE IF EXISTS test;<br /><br />-- create test database from test_template<br />CREATE DATABASE test TEMPLATE test_template;<br /></pre><br />Now you can run your specs in a clean environment without compromising on speed. Alexis Hevia Living Documentation with Javascript Documentation is an essential part of software development. Good documentation allows developers to collaborate and build upon each other's work, speeds up the onboarding process for new team members, and makes debugging easier.<br /><br />However, many developers fail at writing documentation.
Very often you'll find projects where:<br /><ul><li>There's no documentation, because developers never got around to it.</li><li>There's documentation, but it's so outdated it's practically useless.</li></ul><br /><a name='more'></a>This might not be a problem at first, but as a project grows in size (both in amount of code and in amount of developers), the lack of documentation starts taking a significant toll on development speed.<br /><br /><h4>How to write documentation</h4><br />The only way to make sure you'll get documentation written is by writing it <i>before</i> your code. If you wait until the code is ready or wait until you have spare time, chances are you'll never write your documentation.<br /><br /><h4>LiveSpecs</h4><br />There are several testing libraries for Javascript, and some of them can be used to produce Living Documentation. However, after trying a few of them and struggling to get them to work like I wanted, I decided to write <a href="">LiveSpecs</a>.<br /><br />LiveSpecs is based on 3 ideas:<br /><ul><li>Every project needs good documentation and test coverage</li><li>Documentation should never go out of date</li><li>Code samples are the best type of documentation</li></ul>With LiveSpecs you can write code samples that are displayed on your documentation site, but also run as part of your test suite. <br /><br /><h4>JSCalc - An example application</h4><br />JSCalc is an example application I wrote to demonstrate how LiveSpecs works.
You can find the source code for JSCalc on the <a href="" target="_blank">LiveSpecs example directory</a>.<br /><br />You can also visit <a href="" target="_blank">livespecs.alexishevia.com</a> to see the example running on your browser.<br /><br /><b>1) Read the documentation</b><br />When you open the site, you'll see something like this:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" height="400" width="315" /></a></div><br /><br />The cool thing about this documentation is that it includes examples where you can see how JSCalc should be used.<br /><br /><b>2) Try out the code</b><br />Even cooler than getting code examples is being able to try them out. If you click the "Try it out!" buttons, you'll see the code running in your browser. You can even tweak the code, modify some parameters and see what happens. <br /><br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" height="120" width="400" /></a></div><br /><br /><b>3) Run the examples as part of your test suite</b><br />If you open <a href="" target="_blank">/docs/spec_runner.html</a> you'll see a spec runner going through all your examples and verifying no errors are thrown. 
This is what it looks like when it finds an error: <br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" height="101" width="320" /></a></div><br /><br />You can also run the specs using PhantomJS, as explained <a href="" target="_blank">here</a>.<br /><br />There's still a lot of work to be done to get LiveSpecs to version 1.0, but I hope you can try it out and let me know what you think. Alexis Hevia Managing your dotfiles with Puppet As developers, we spend a lot of time tweaking our workstations to make sure they adjust perfectly to our workflow. Most of our configuration is stored in dotfiles, i.e., hidden files that start with a dot (.bashrc, .bash_profile, .vimrc, .gitconfig, etc). Lately, <a href="" target="_blank">many</a> <a href="" target="_blank">developers</a> have decided to store their dotfiles using version control. That way they have a safe copy in case anything might happen to their local computer, and it makes it easier to set up a new computer if they need to.<br /><br />While everyone agrees this is a good idea, there's no consensus on the best way to do it. Github even published a <a href="" target="_blank">website</a> where it keeps track of different plugins to use and paths to follow. After trying a few of them, I decided Puppet was the best tool for this task.<br /><br /><a name='more'></a>To manage your dotfiles with puppet, start by creating a directory for your dotfiles (I use ~/.puppet/, but you can use any name you prefer).<br /><br />Make sure you have the following structure set up:<br /><pre class="prettyprint"><br />.puppet/<br /> manifests/<br /> default.pp<br /> modules/<br /> dotfiles/<br /> manifests/<br /> init.pp<br /> files/<br /> # copy your dotfiles here<br /> .bash_profile<br /> .vim/<br /></pre>As you can see, we're creating a "dotfiles" module, where we'll keep all the dotfiles we want to save.
On the module's init.pp, we're going to tell Puppet where each of these files belongs: <pre class="prettyprint"><br /># dotfiles/manifests/init.pp<br /><br />file { "/home/$id/.bash_profile":<br /> mode => 644,<br /> source => "puppet:///modules/dotfiles/.bash_profile",<br />}<br /><br />file { "/home/$id/.vim":<br /> mode => 755,<br /> ensure => directory,<br /> recurse => true,<br /> source => "puppet:///modules/dotfiles/.vim",<br />}<br /></pre>There are a couple of things to mention about this file: <ul><li>The $id variable will be initialized by Puppet. When your script runs, it will contain the name of the user that's running the script.</li><li>We can set file permissions with the "mode" param, which uses the <a href="">chmod numeric permissions notation.</a></li><li>The "source" param tells Puppet where to get the file from. It uses the special puppet:/// url scheme, which starts from the root of your puppet directory (~/.puppet in our case)</li><li>If you want to copy an entire directory (like your .vim dir), then make sure you add "ensure => directory" and "recurse => true"</li></ul>We're almost done now, but we first need to include the "dotfiles" module on our "default.pp" file <pre class="prettyprint"><br /># manifests/default.pp<br /><br />include dotfiles<br /></pre>Now, anytime you want to restore your dotfiles you just need to run <pre class="prettyprint"><br />puppet apply --modulepath=$HOME/.puppet/modules $HOME/.puppet/manifests/default.pp<br /></pre>If you want to see a full working example, you can check out my <a href="">.puppet repo</a> on Github. Alexis Hevia Setting Up a Node.js Dev Environment with Puppet In my <a href="" target="_blank">last post</a> I explained why it's a good idea to set up your dev environment using Vagrant and Puppet. In this post I'm going to provide a specific example: setting up a dev environment for a Node.js/MySQL application.<br /><br /><a name='more'></a>** All code presented in this post is also available at <a href="">this repo</a>.
<br />** 2014/03/01: Updated the code to use librarian-puppet to handle puppet modules <br />** 2014/06/26: Updated the code to use latest Ubuntu and to allow using separate JSON file for configuration <br /><br /><h4>Pre-requisites</h4><br />Make sure you have Vagrant installed. You can download it from <a href=""></a><br /><br /><h4>Creating the Virtual Machine</h4><br />Let's start by creating a blank Ubuntu 14.04 virtual machine. Create a new folder for your project and add a file called "Vagrantfile" with the following code: <br /><pre class="prettyprint"><br />require 'json'<br /><br /># ------------------------------ #<br /># Config Values<br /># ------------------------------ #<br />#<br /># If you wish to override the default config values, create a JSON<br /># file called Vagrantfile.json on the same folder as this file<br />#<br /><br />configValues = {<br /> # box to build from<br /> "box" => "Official Ubuntu 14.04 daily Cloud Image amd64 " +<br /> "(Development release, No Guest Additions)",<br /><br /> # the url from where the 'config.vm.box' box will be fetched if it<br /> # doesn't already exist on the user's system<br /> "box_url" => "" +<br /> "current/trusty-server-cloudimg-amd64-vagrant-disk1" +<br /> ".box",<br /><br /> # private IP address for the VM<br /> "ip" => '192.168.60.2',<br /><br /> # hostname for the VM<br /> "hostname" => "dev.nodejs"<br />}<br /><br />if File.exist?('./Vagrantfile.json')<br /> begin<br /> configValues.merge!(JSON.parse(File.read('./Vagrantfile.json')))<br /> rescue JSON::ParserError => e<br /> puts "Error Parsing Vagrantfile.json", e.message<br /> exit 1<br /> end<br />end<br /><br /># ------------------------------ #<br /># Start Vagrant<br /># ------------------------------ #<br /><br />Vagrant.configure("2") do |config|<br /> config.vm.box = configValues["box"]<br /> config.vm.box_url = configValues["box_url"]<br /> config.vm.network "private_network", ip: configValues['ip']<br /> config.vm.hostname = 
configValues['hostname']<br />end<br /></pre><br />As you can see, we first define some configuration values.<br /><dl><dt>box</dt><dd>what type of image we want to use for our virtual machine (ubuntu 14.04 64bits on this case).</dd><br /><dt>box_url</dt><dd>where to download the image from (in case the user doesn't have it already).</dd><br /><dt>ip</dt><dd>which IP address it should assign to the virtual machine.</dd><br /><dt>hostname</dt><dd>what hostname to use for the virtual machine.</dd></dl>Now you can cd into your directory and start the virtual machine with this command: <br /><pre class="prettyprint">vagrant up<br /></pre>The first time you run it, vagrant is going to download the entire ubuntu 14.04 image, so it will take a while. After the image is downloaded, vagrant will boot up a new virtual machine with the settings we specified on the Vagrantfile. <br /><br />You can ssh into your machine by running <br /><pre class="prettyprint">vagrant ssh<br /></pre><br /><h4>Installing Puppet </h4><br />Some vagrant boxes come with Puppet preinstalled, but most of them are outdated and lack important features. That's why I prefer using a blank Ubuntu box (like the one we're using on this project) and install Puppet myself. <br /><br />Create a "shell" directory containing two files: "install-puppet.sh" and "install-librarian-puppet.sh". Your directory structure should look like this: <br /><pre class="prettyprint"><br />shell/<br /> install-puppet.sh<br /> install-librarian-puppet.sh<br />Vagrantfile<br /></pre>Copy the contents of <a href="">this file</a> into install-puppet.sh and <a href="">this other file</a> into install-librarian-puppet.sh. <br /><br />These two shell scripts will install puppet and librarian-puppet, respectively. <a href="">Librarian-puppet</a> is a gem that makes it a lot easier to manage Puppet modules. 
<br /><br />Now add the following code to your Vagrantfile: <pre class="prettyprint"><br /> config.vm.provider :virtualbox do |vb|<br /> # This allows symlinks to be created within the /vagrant dir<br /> vb.customize [<br /> "setextradata", :id,<br /> "VBoxInternal2/SharedFoldersEnableSymlinksCreate/v-root", "1"<br /> ]<br /> end<br /><br /> # install puppet and librarian-puppet<br /> config.vm.provision :shell, :path => "shell/install-puppet.sh"<br /> config.vm.provision :shell, :path => "shell/install-librarian-puppet.sh"<br /></pre>And run "vagrant provision" to get puppet installed. <br /><br /><h4>Adding Node.js and MySQL</h4><br />Now that we have a basic Ubuntu vm up and running, let's add some puppet configurations to install Node.js and MySQL.<br /><br />Create a "puppet" folder containing a file called "Puppetfile" and two subfolders: "modules" and "manifests". Your directory structure should look like this: <br /><pre class="prettyprint"><br />puppet/<br /> modules/<br /> manifests/<br /> Puppetfile<br />shell/<br /> install-puppet.sh<br /> install-librarian-puppet.sh<br />Vagrantfile<br /></pre>The Puppetfile is used to declare all puppet modules we want to use.
Since we want to use the <a href="">stdlib</a>, <a href="">apt</a>, and <a href="">mysql</a> modules, add the following content to your Puppetfile: <pre class="prettyprint"><br />forge ""<br /><br />mod "puppetlabs/stdlib", "3.2.1"<br />mod "puppetlabs/apt", "1.5.0"<br />mod "puppetlabs/mysql", "2.2.3"<br /></pre>Create a "default.pp" file inside the manifests folder, with the following content: <br /><pre class="prettyprint"><br />Exec {<br /> path => ['/usr/sbin', '/usr/bin', '/sbin', '/bin', '/usr/local/bin']<br />}<br /><br /># --- Preinstall Stage ---#<br /><br />stage { 'preinstall':<br /> before => Stage['main']<br />}<br /><br /># Define the install_packages class<br />class install_packages {<br /> package { ['curl', 'build-essential', 'libfontconfig1', 'python',<br /> 'nodejs', 'npm', 'g++', 'make', 'wget', 'tar', 'mc', 'htop']:<br /> ensure => present<br /> }<br />}<br /><br /># Declare (invoke) install_packages<br />class { 'install_packages':<br /> stage => preinstall<br />}<br /><br /># Setup your locale to avoid warnings<br />file { '/etc/default/locale':<br /> content => "LANG=\"en_US.UTF-8\"\nLC_ALL=\"en_US.UTF-8\"\n"<br />}<br /><br /># --- NodeJS --- #<br /><br /># Because of a package name collision, 'node' is called 'nodejs' in Ubuntu.<br /># Here we're adding a symlink so 'node' points to 'nodejs'<br />file { '/usr/bin/node':<br /> ensure => 'link',<br /> target => "/usr/bin/nodejs",<br />}<br /><br /># --- MySQL --- #<br /><br />class { '::mysql::server':<br /> root_password => 'foo'<br />}<br /></pre>Let's go over this file one block at a time. <br /><pre class="prettyprint">Exec {<br /> path => ['/usr/sbin', '/usr/bin', '/sbin', '/bin', '/usr/local/bin']<br />}<br /></pre>We start by adding several folders to our $PATH, so Puppet knows where to find system commands like wget, curl, etc. <pre class="prettyprint">stage { 'preinstall':<br /> before => Stage['main']<br />}<br /></pre>We then define a new stage called 'preinstall'. 
Inside Puppet, stages are containers that allow you to execute blocks of code in a certain order. By default everything is executed inside the 'main' stage, but you're free to define additional stages if you want to. <br /><br />In our example, we're telling Puppet that everything inside the 'preinstall' stage should be executed before the 'main' stage starts. <br /><pre class="prettyprint"><br />class install_packages {<br /> package { ['curl', 'build-essential', 'libfontconfig1', 'python',<br /> 'nodejs', 'npm', 'g++', 'make', 'wget', 'tar', 'mc', 'htop']:<br /> ensure => present<br /> }<br />}<br /></pre>In the next block we're defining a new class called 'install_packages'. Like its name suggests, this class will let you install a bunch of useful packages on your development machine. We make use of the <a href="">package directive</a> to let apt-get install these packages for us. <br /><br /><pre class="prettyprint"><br />class { 'install_packages':<br /> stage => preinstall<br />}<br /></pre>Even though the syntax looks very similar to the class declaration, in this block we're actually invoking the 'install_packages' class. Notice we add a stage parameter to make sure the class is executed inside the preinstall stage. <br /><br /><pre class="prettyprint"><br />file { '/usr/bin/node':<br /> ensure => 'link',<br /> target => "/usr/bin/nodejs",<br />}<br /></pre>Here we're creating a symlink from /usr/bin/node to /usr/bin/nodejs. This is required because Ubuntu has a different package called 'node', so apt installs node as 'nodejs'. <br /><br /><pre class="prettyprint"><br />class { '::mysql::server':<br /> root_password => 'foo'<br />}<br /></pre>The '::mysql::server' class is defined in the puppetlabs/mysql module, and it will install a MySQL server for you. We're invoking it with a root_password param, telling it to set the MySQL root password to 'foo'. <br /><br />Now we need to tell Vagrant where our Puppet files are.
Just add the following to your Vagrantfile: <br /><pre class="prettyprint"># provision with Puppet stand alone<br /> config.vm.provision :puppet do |puppet|<br /> puppet.manifests_path = "puppet/manifests"<br /> puppet.manifest_file = "default.pp"<br /> puppet.module_path = "puppet/modules"<br /> end<br /></pre>And run "vagrant provision" once more. After the command finishes running, your virtual machine will have Node.js and MySQL installed. You can test it by running: <br /><pre class="prettyprint">vagrant ssh<br />node --version<br />mysql --version<br /></pre>You can also login to the root mysql account by running: <br /><pre class="prettyprint">mysql -u root -pfoo<br /></pre><br /><h4>Our App</h4><br />Our dev environment is ready now, so let's take it for a spin by writing a very simple Express.js app. <br /><br />Add a "package.json" file with the following content: <br /><pre class="prettyprint">{<br /> "name": "hello_world",<br /> "description": "hello world test app",<br /> "version": "0.0.1",<br /> "private": true,<br /> "dependencies": {<br /> "express": "3.3.8",<br /> "mysql": "2.0.0-alpha9"<br /> },<br /> "scripts": {<br /> "start": "node ./app.js"<br /> }<br />}<br /></pre><br />And an "app.js" file with the following content: <br /><pre class="prettyprint">var express = require('express');<br />var mysql = require('mysql');<br />var app = express();<br /><br />// This is for demonstration purposes only.<br />// You should NEVER store your db credentials on an<br />// unprotected, plain-text file<br />var connection = mysql.createConnection({<br /> host : 'localhost',<br /> user : 'root',<br /> password : 'foo',<br />});<br /><br />app.get('/', function(req, res){<br /> res.writeHead(200, {<br /> 'Transfer-Encoding': 'chunked',<br /> 'Content-Type': 'text/plain'<br /> });<br /> res.end('Hello World\n');<br />});<br /><br />app.listen(3000);<br /></pre>Your directory structure should now look like this: <br /><pre class="prettyprint">app.js<br />package.json<br />puppet/<br /> modules/<br /> manifests/<br /> default.pp<br /> Puppetfile<br />shell/<br /> install-puppet.sh<br /> install-librarian-puppet.sh<br />Vagrantfile<br /></pre><br />Now we can start our app by running: <br /><pre class="prettyprint">vagrant ssh<br />cd /vagrant<br />npm install<br />npm
start<br /></pre><br />Your app will be running on <a href="" target="_blank"></a><br /><br />** For a more robust Puppetfile containing PostgreSQL and node module installations, see <a href="">this gist</a>. Alexis Hevia Getting rid of "it doesn't work on my machine" issues A very common, yet frustrating situation when developing software in a team is having to debug errors that only happen on a specific machine for a specific developer. These errors are usually caused by differences in libraries or configuration files, and are a real pain to debug.<br /><a name='more'></a><br />To avoid these issues, everyone on the team needs to be working on an identical environment.<br /><br />That is why I've decided to include a Vagrantfile and a puppet folder on all my projects.<br /><br /><h4>About Puppet</h4><div><br /></div><a href="" target="_blank">Puppet</a> is an automation software that helps system administrators manage their infrastructure. You can read a lot more about it on <a href="" target="_blank">their website</a>, but the main takeaway is this:<br /><br />If you take any number of blank computers (having nothing but a default OS installation) and run some Puppet scripts on them, all of them will end up having the exact same configuration.<br /><br />With Puppet, you can automate pretty much anything you could do manually: install apps or libraries, set up databases, create user accounts, modify configuration files, etc.<br /><br /><h4>About Vagrant</h4><div><br /></div><div>Vagrant is a tool for creating and configuring reproducible virtual machines: with a single Vagrantfile checked into the project, every developer can boot an identical VM with one command.<br /><br /></div><h4>Vagrant + Puppet</h4><div><br /></div><div>When we use these tools together, we have the perfect setup:</div><div><ul><li>Thanks to Vagrant we can make sure everyone is using the same OS</li><li>Thanks to Puppet we can make sure everyone has the exact same configuration on their machine</li></ul></div><div>And by taking advantage of Vagrant's shared folders and port forwarding, developers can still use their favorite text editor, IDE, and browser to do coding and debugging, but you won't have to deal with those "it doesn't work on my machine" issues anymore.</div><br /><div>**
In <a href="">this post</a> I provide a specific example: setting up a dev environment for a Node.js/MySQL application </div> Alexis Hevia Virtual Map - a small project for the vdhackathon Last month Panama celebrated a hackathon against domestic violence. It was a two-day event where developers met with non-profit organizations and government agencies to help them solve some of their issues through the use of technology. You can read a little bit more at <a href="">this site</a>.<br /><br />I participated as a developer, and chose to work on the "Virtual Map" project. Our goal was to create an online form where victims could anonymously send info about their situation and get help from <a href="">EEM</a>, a non-profit organization that promotes women's rights. EEM would then be able to collect stats, generate reports, and plot results over a map to see which areas in the country require the most help.<br /><a name='more'></a><br />We divided the project into several parts, and each developer was assigned a specific task. My task consisted of taking the generated data and plotting it over a map. You can see the end result <a href="">here</a>.<br /><br />When the weekend started I had no idea how to render maps or display custom information over them. I thought about using the Google Maps API, but after some research I found an interesting library called <a href="">Leaflet</a> and decided it was perfect for the job. The thing I liked the most about Leaflet is that it came with a series of easy-to-follow examples, and there was one in particular that would help me a lot during the weekend: <a href="">Interactive Choropleth Map</a>.<br /><br />So the weekend was just starting and I already had half of my work done.
All I needed to do was take the data from the app and convert it to a format Leaflet could understand.<br /><br />The data was handed to me as a JSON document with <a href="">this format</a>. I used <a href="">this GeoJSON</a> as a base, and just "injected" into it the data I obtained from the app.<br /><br />Things got a little more interesting because I had to render a different result for each domestic violence category, and the user had to be able to switch between each one. Leaflet's <a href="">layers</a> were of great help for getting this part done.<br /><br />Like I mentioned before, you can see the end result <a href="">here</a>. And if you want to check out the source code you can visit the <a href="">Github repository</a>. Alexis Hevia My first contribution to Cucumber If you haven't heard of <a href="">Cucumber</a> or <a href="">Relish</a> yet, I suggest you visit their website and see what all the fuss is about. <i><span style="color: #990000;">Warning: it may forever change how you do web development.</span></i><br /><a name='more'></a><br />When I started working with Node.js I was pleased to find <a href="">Cucumber.js</a>, a port of Cucumber for Javascript. Since I couldn't find a port of Relish, I decided to do something similar by parsing my Cucumber files and displaying them with <a href="">docpad</a>.<br /><br />Soon enough I ran into <a href="">this bug</a>, and after doing some debugging I found out the issue was coming from the Gherkin parser (Gherkin is the language Cucumber uses under the hood to convert files from a business-readable format into working tests).<br /><br />The fix turned out to be simple: <a href="">change the for...in with a good old for loop</a>.<br /><br />Gherkin version 2.6.11 was released today, and to my surprise <a href="">I was credited</a> for contributing with my little bug fix.
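For context on why that one-line change matters: iterating over an array with for...in also visits any enumerable properties that other code has attached to Array.prototype, while a plain for loop only visits the numeric indices. A small illustration (my own example, not the actual Gherkin code):

```javascript
// Simulate a library that extends Array.prototype,
// one common way a for...in loop over an array goes wrong.
Array.prototype.myHelper = function () { return 'helper'; };

var items = ['a', 'b'];

// for...in visits the inherited 'myHelper' key too
var forInKeys = [];
for (var key in items) {
  forInKeys.push(key);
}

// a plain for loop only visits indices 0..length-1
var forLoopKeys = [];
for (var i = 0; i < items.length; i++) {
  forLoopKeys.push(i);
}

console.log(forInKeys);   // ['0', '1', 'myHelper']
console.log(forLoopKeys); // [0, 1]
```

Since a parser like Gherkin has no control over what other scripts on a user's page add to Array.prototype, the plain for loop is the safer choice.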
I couldn't be happier to contribute to such a great project.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="63" src="" width="320" /></a></div>Alexis Hevia