unfetch
Tiny 500b fetch "barely-polyfill"
- Tiny: about 500 bytes of ES3 gzipped
- Minimal: just fetch() with headers and text/json responses
- Familiar: a subset of the full API
- Supported: supports IE8+ (assuming Promise is polyfilled, of course!)
- Standalone: one function, no dependencies
- Modern: written in ES2015, transpiled to 500b of old-school JS
🤔What's Missing?
- Uses simple Arrays instead of Iterables, since Arrays are iterables
- No streaming, just Promisifies existing XMLHttpRequest response bodies
- Use in Node.JS is handled by isomorphic-unfetch
Table of Contents
Install
This project uses node and npm. Go check them out if you don't have them locally installed.
$ npm install --save unfetch
Then with a module bundler like rollup or webpack, use as you would anything else:
// using ES6 modules
import fetch from 'unfetch'

// using CommonJS modules
var fetch = require('unfetch')
The UMD build is also available on unpkg:
<script src="//unpkg.com/unfetch/dist/unfetch.umd.js"></script>
This exposes the unfetch() function as a global.
Usage
import fetch from 'unfetch';

fetch('/foo.json')
  .then( r => r.json() )
  .then( data => {
    console.log(data);
  });
import 'unfetch/polyfill';

// "fetch" is now installed globally if it wasn't already available
fetch('/foo.json')
  .then( r => r.json() )
  .then( data => {
    console.log(data);
  });
Examples & Demos
Real Example on JSFiddle
// simple GET request:
fetch('/foo')
  .then( r => r.text() )
  .then( txt => console.log(txt) )

// complex POST request with JSON, headers:
fetch('/bear', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({ hungry: true })
}).then( r => {
  open(r.headers.get('location'));
  return r.json();
})
API
credentials: Accepts an "include" string, which will allow both CORS and same-origin requests to work with cookies. As pointed out in the 'Caveats' section, Unfetch won't send or receive cookies otherwise. ⚠ The "same-origin" value is not supported.
body: The content to be transmitted in the request's body. Common content types include FormData, JSON, Blob, ArrayBuffer, or plain text.
Response Methods
Caveats
Adapted from the GitHub fetch polyfill readme.
The fetch specification differs from jQuery.ajax() in mainly two ways that bear keeping in mind:
- By default, the Promise won't reject on HTTP error statuses, i.e. on any non-2xx status. To handle HTTP errors, define a custom response handler:
fetch('/users')
  .then( checkStatus )
  .then( r => r.json() )
  .then( data => {
    console.log(data);
  });

function checkStatus(response) {
  if (response.ok) {
    return response;
  } else {
    var error = new Error(response.statusText);
    error.response = response;
    return Promise.reject(error);
  }
}
Contribute
First off, thanks for taking the time to contribute! Now, take a moment to be sure your contributions make sense to everyone else.
Reporting Issues
Found a problem? Want a new feature? First of all see if your issue or idea has already been reported. If it hasn't, just open a new clear and descriptive issue.
Submitting pull requests
Pull requests are the greatest contributions, so be sure they are focused in scope, and do avoid unrelated commits.
💁 Remember: size is the #1 priority.
Every byte counts! PRs can't be merged if they increase the output size much.
- Fork it!
- Clone your fork: git clone <your-username>/unfetch
- Navigate to the newly cloned directory:
cd unfetch
- Create a new branch for the new feature:
git checkout -b my-new-feature
- Install the tools necessary for development:
npm install
- Make your changes.
- Run npm run build to verify your change doesn't increase output size.
- Run npm test to make sure your change doesn't break anything.
- Commit your changes:
git commit -am 'Add some feature'
- Push to the branch:
git push origin my-new-feature
- Submit a pull request with full remarks documenting your changes.
License
MIT License © Jason Miller
| https://giters.com/misund/unfetch | CC-MAIN-2022-33 | refinedweb | 602 | 52.05 |
The 2 sum problem goes like this: given an array a[] and a number X, find two elements (a pair) with the given sum X in the array. For example:
Given array : [3,4,5,1,2,6,8] X = 10 The answer could be (4,6) or (2,8).
Before looking at the solution below, we strongly recommend grabbing a pen and paper and giving it a try yourself.
Thought process
Ask some basic questions about the problem; it's a good way to dig into the problem and gain confidence. Remember, interviewers are not trained interrogators: they may slip a hint or two about the solution when you ask relevant questions.
- Is it a sorted array? If not, think of the additional complexity you would be adding to sort it.
- Are duplicates present in the array?
- Is returning the first pair enough, or should we return all pairs with a sum equal to X?
- Can there be negative numbers in the array?
1. Initialize two variables: left = 0 and right = array.length - 1.
2. While left and right do not cross each other:
3. Get the sum of the elements at indices left and right, i.e. A[left] + A[right].
4. If the sum is greater than X, move towards the left from the end, i.e. decrease right by 1.
5. Else, if the sum is less than X, move towards the right from the start, i.e. increment left.
6. If the sum is equal to X, return (left, right) as a pair.
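The steps above can be sketched in a few lines of Python (the article's own implementation further below is in Java); this assumes the array is already sorted:

```python
def pairs_with_sum(a, x):
    """Two-pointer scan of a sorted array for all pairs summing to x."""
    left, right = 0, len(a) - 1
    result = []
    while left < right:
        s = a[left] + a[right]
        if s > x:
            right -= 1   # sum too big: bring the right pointer in
        elif s < x:
            left += 1    # sum too small: push the left pointer out
        else:
            result.append((a[left], a[right]))
            right -= 1   # record the pair, keep scanning for more
    return result

print(pairs_with_sum([1, 2, 3, 4, 5, 6, 8], 10))  # → [(2, 8), (4, 6)]
```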
Let’s see how this works with an example and then we will implement it. Given an array as shown and sum = 17, find all pairs which sum as 17.
Initialization step, left = 0 and right = array.length – 1
A[left] + A[right] = 20 which is greater than sum (17), move right towards left by 1.
Again, A[left] + A[right] = 18 which is greater than sum (17), move right towards left by 1.
At this point, A[left] + A[right] is less than sum(17), hence move left by 1
Now, A[left] + A[right] is equal to the sum, so add this pair to the result array. Also decrease right by 1, so the scan can continue looking for further pairs.
At this point, A[left] + A[right] is less than sum(17), hence move left by 1
Again, A[left] + A[right] is less than sum(17), hence move left by 1
A[left] + A[right] is equal to the sum and so add this pair in the result array. Also, decrease right by 1.
Since left and right point to the same element now, there cannot be a pair anymore, hence return.
Show me the implementation
package com.company;

import javafx.util.Pair;
import java.util.ArrayList;

/**
 * Created by sangar on 5.4.18.
 */
public class PairWithGivenSum {
    public static ArrayList<Pair<Integer, Integer>> pairWithGivenSum(int[] a, int sum){
        int left = 0;
        int right = a.length - 1;
        ArrayList<Pair<Integer, Integer>> resultList = new ArrayList<>();

        while(left < right){
            /* If the sum of the two elements is greater than the required sum,
               move right towards left */
            if(a[left] + a[right] > sum){
                right--;
            }
            /* If the sum of the two elements is less than the required sum,
               move left towards right */
            else if(a[left] + a[right] < sum){
                left++;
            }
            /* Note the else-if chain: it prevents re-testing the sum with an
               index that an earlier branch already moved in this iteration */
            else {
                resultList.add(new Pair(left, right));
                right--;
            }
        }
        return resultList;
    }

    public static void main(String[] args) {
        int a[] = new int[] {10, 20, 30, 40, 50};
        ArrayList<Pair<Integer, Integer>> result = pairWithGivenSum(a, 50);
        for (Pair<Integer, Integer> pair : result) {
            System.out.println("(" + pair.getKey() + "," + pair.getValue() + ")");
        }
    }
}
The complexity of this algorithm to find two numbers in an array with sum X depends on the sorting algorithm used. With merge sort, the complexity is O(n log n) with an added space complexity of O(n). With quicksort, the worst-case complexity is O(n²) with no added space complexity.
Solution with hashmap
In the first method, the array is modified when it is not already sorted. Also, the preprocessing step (sorting) dominates the complexity of the algorithm. Can we do better than O(n log n); in other words, can we avoid sorting?
An additional constraint on the problem is that you cannot modify the original input. Use basic mathematics: if A + B = C, then A = C - B. Consider B to be each element for which we are looking for an A. The idea is to scan the entire array and record all the A's required for each element, then scan the array again and check whether there was a B which required the current element as its A.
To keep track of the required A values, we create a hash; this makes the second step O(1).
We can optimize further by scanning array only once for both steps.
1. Create a hash.
2. For the element at each index of the array:
2.a If the element at the current index is already in the hash, return the pair of the current index and the value stored in the hash.
2.b If not, subtract the element from the sum and store the (sum - A[index], index) key-value pair in the hash.
This algorithm scans the array only once and does not change the input. The worst-case time complexity is O(n); the hash brings additional space complexity. How big should the hash be? Since all values between (sum - max value of the array) and (sum - min value of the array) are candidate A's, the hash spans the difference between these two values.
This solution does not work directly in C if there are negative numbers in the array; it works in languages that have hash maps built in. For C, we have to do some preprocessing, like adding the absolute value of the smallest negative number to all elements. That's where our fourth question above helps us decide.
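The hash idea translates directly to a short Python sketch, where dictionary keys handle negative numbers without any preprocessing (the article's Java version follows below):

```python
def pairs_with_sum_hash(a, total):
    """Single-pass two-sum: returns index pairs (i, j) with a[i] + a[j] == total."""
    needed = {}   # maps a still-needed complement -> index of the element that needs it
    result = []
    for j, value in enumerate(a):
        if value in needed:
            result.append((needed[value], j))
        # if a later element equals total - value, it pairs with index j
        needed[total - value] = j
    return result

print(pairs_with_sum_hash([10, 20, 30, 40, 50], 50))  # → [(1, 2), (0, 3)]
print(pairs_with_sum_hash([-5, 15, 7, 3], 10))        # → [(0, 1), (2, 3)]
```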
2 sum problem hash based implementation
package com.company;

import javafx.util.Pair;
import java.util.ArrayList;
import java.util.HashMap;

/**
 * Created by sangar on 5.4.18.
 */
public class PairWithGivenSum {
    public static ArrayList<Pair<Integer, Integer>> pairsWithGivenSum2(int[] a, int sum){
        ArrayList<Pair<Integer, Integer>> resultList = new ArrayList<>();
        HashMap<Integer, Integer> pairMap = new HashMap<>();
        for(int i = 0; i < a.length; i++){
            if(pairMap.containsKey(a[i])){
                resultList.add(new Pair(pairMap.get(a[i]), i));
            }
            pairMap.put(sum - a[i], i);
        }
        return resultList;
    }

    public static void main(String[] args) {
        int a[] = new int[] {10, 20, 30, 40, 50};
        ArrayList<Pair<Integer, Integer>> result = pairsWithGivenSum2(a, 50);
        for (Pair<Integer, Integer> pair : result) {
            System.out.println("(" + pair.getKey() + "," + pair.getValue() + ")");
        }
    }
}
Please share if there is an error or a suggestion to improve; we would love to hear what you have to say. If you want to contribute to the learning process of others by sharing your knowledge, please write to us at [email protected]
| https://algorithmsandme.com/2-sum-problem-pairs-with-given-sum-in-array/ | CC-MAIN-2020-40 | refinedweb | 1,108 | 66.54 |
I'm trying to watch for an event on my frontend but a bug is getting in the way.
Here's the JS watching for the event
import web3 from './web3';
export async function callEvent () {
await Contract.events.PracticeEvent().watch((response) => {
console.log('the event has been called', response);
}).catch((err) => {
console.log(err);
})
await Contract.triggerEventFunc().call();
}
Contract Code:
event PracticeEvent (string _message, uint _timestamp);
function checkEvent() public {
emit PracticeEvent("event has been called", gts);
}
web3.js file:
import Web3 from 'web3';
const web3 = new Web3(window.web3.currentProvider);
export default web3;
So when I run the app I get an error saying
"Uncaught (in promise) TypeError: Contract.default.events.PracticeEvent(...).watch is not a function"
This all works fine in Remix but it gets messed up when I try run it in my actual app
I'm assuming the bug has something to do with web3 but I'm not sure why because the web3 stuff is working fine in the rest of my app.
Any help? thanks!
Events can only be emitted inside transactions. You're doing a .call(), which just reads data from the blockchain and can't emit an event. If you want the function to emit the event, you should replace

await Contract.triggerEventFunc().call();

with

await Contract.triggerEventFunc().sendTransaction();

This will send a transaction to the blockchain, and it will cost ether to send. Inside this transaction, an event can be emitted, so you should be able to catch it with web3.
| https://intellipaat.com/community/17657/solidity-event-not-being-called-ask | CC-MAIN-2020-34 | refinedweb | 308 | 58.48 |
#include <hallo.h>
* André Luiz Rodrigues Ferreira [Tue, Apr 11 2006, 01:44:57PM]:

> Hi !
>
> I'm creating a meta package for install a lite desktop for old
> machines with poor hardware.
> I would like to receive opinions about my packages list:
>
> - x-window-system-core
> - xfce4 (beautiful!)

Depends. IceWM with a tiny Filemanager (eg. emelfm) suffices my idea of "light desktop" much, much more. I would say - set "xfce4 | x-window-manager".

> - evince

As others pointed out, does basically the same job as xpdf but pulls half of Gnome.

> - eog

As said, Gnome bloat. Use gqview or pornview.

> - gaim

Please depend on "gaim | psi | licq" or so.

> - arj

Who cares about arj nowadays? I suggest installing the "unp" package instead, it will tell the user which program needs to be installed for a certain type of archive.

> - file-roller

Is there really no other choice without GNOME dependencies?

> - wvdial

What's wrong with chat?

> - gnome-ppp

Bloat for a simple task like interface status watching. Configuration can be done with pppconfig once. And even then, it's too specific for a "Desktop" meta package, IMO.

> - gnome-utils

You can replace all of them with lightweight alternatives: xwd (or import from imagemagic or even gimp for screenshots), ding, locate (make some GUI), superformat (make some GUI, I have written one years ago, see GSwissKnife on Freshmeat).

> - inkscape

Too specific for a "Desktop" package.

> - gimp
> - abiword
> - gnumeric
> - gnumeric-plugins-extra
> - gnome-system-monitor

The last time I used this monitor, it was the application using much more memory than every other one. Do you really want it for a lightweight desktop?

Eduard.
| https://lists.debian.org/debian-devel/2006/04/msg00334.html | CC-MAIN-2017-34 | refinedweb | 271 | 67.76 |
Set Variables
Description:
Set Variables allows you to save key/value pairs in the global context of the flow execution. Variables set via the widget are accessible throughout your flow via other widgets under the key {{flow.variables.<key>}}. This allows you to enable use cases such as counters that are dynamically updated as your flow executes.
Optional Configuration:
You can add any number of Variables. Variables can have static values like a single number or string, or dynamic values set via the Liquid templating language.
Example that sets a variable called count to 0 if not set and increments it if it exists:
{% if flow.variables.count %}
  {{flow.variables.count | plus: 1}}
{% else %}
  0
{% endif %}
Configuration
Transitions
There is only one transition from this widget, "Next" which fires once any variables specified are set.
Example Flow
In the call flow screenshot below, the count variable is set to 0 when the flow starts, and the flow will loop until the count variable is equal to 3 and then exit (The set variable widget uses the sample liquid template code from above).
Need some help?
We all do sometimes; code is hard. Get help now from our support team, or lean on the wisdom of the crowd browsing the Twilio tag on Stack Overflow.
| https://www.twilio.com/docs/studio/widget-library/set-variables | CC-MAIN-2020-05 | refinedweb | 214 | 60.85 |
This document takes you through the basics of using NetBeans IDE to develop
web applications. It demonstrates how to create a simple web application,
deploy it to a server, and view its presentation in a browser. The application
employs a JavaServer
Pages™ (JSP) page to ask you to input your name. It then uses
a JavaBeans component to persist the name during the HTTP session, and retrieves the
name for output on a second JSP page.
Contents
To follow this tutorial, you need the following software and resources.
Notes:
The IDE creates the $PROJECTHOME/HelloWeb project folder.
You can view the project's file structure in the Files window
(Ctrl-2), and its logical structure in the Projects window (Ctrl-1).
The project folder contains all of your sources and project metadata, such
as the project's Ant build script. The HelloWeb project opens in the IDE.
The welcome page, index.jsp, opens in the Source Editor in the main
window.
Note. Depending on the server and Java EE version
that you specified when you created the project, the IDE might generate index.html
as the default welcome page for the web project. You can perform the steps in this tutorial
and use the index.html file or you can use the New File wizard to generate
an index.jsp file to use as the welcome page, in which case you should delete the
index.html file.
String name;
public NameHandler() { }
name = null;
Getter and setter methods are generated for the
name field. The modifier for the class variable is set to
private while getter and setter methods are generated with
public modifiers. The Java class should now look similar to the
following.
package org.mypackage.hello;
/**
*
* @author nbuser
*/
public class NameHandler {
private String name;
/** Creates a new instance of NameHandler */
public NameHandler() {
name = null;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
}
In the Palette (Ctrl-Shift-8) located to the right of the Source Editor, expand HTML Forms and drag a Form item to a point after the <h1> tags in the Source Editor.
The Insert Form dialog box displays.
Click OK. An HTML form is added to the index.jsp file.
<html>
<head>
<meta http-
<title>JSP Page</title>
</head>
<body>
<h1>Entry Form</h1>
<form name="Name Input Form" action="response.jsp">
Enter your name:
<input type="text" name="name" />
<input type="submit" value="OK" />
</form>
</body>
</html>
In the Palette to the right of the Source Editor, expand JSP and
drag a Use Bean item to a point just below the <body>
tag in the Source Editor. The Insert Use Bean dialog opens. Specify
the values shown in the following figure.
A <jsp:setProperty> tag is added to the file. As indicated in the figure, type the following between the <h1> tags:
<h1>Hello, !</h1>
Click OK. Notice that a <jsp:getProperty> tag is now added between the <h1> tags.
Caution: Property names are case-sensitive. The "name" property must be in the same case in response.jsp and in the input form in index.jsp.
<body>
<jsp:useBean
<jsp:setProperty
<h1>Hello, <jsp:getProperty!</h1>
When you run a web application, the IDE performs the following steps:
Note: By default, the project has been created with the Compile on Save feature enabled, so you do not need to compile your code first in order to run the application in the IDE.
The IDE opens an output window that shows the progress of running the application. Look at the HelloWeb tab in the Output window. In this tab, you can follow all the steps that the IDE performs. If there is a problem, the IDE displays error information in this window.
Important: If the GlassFish server fails to start, start it manually and run the project again. You can start the server manually
from the Services window, by right-clicking the server node and selecting Start.
The server output window is very informative about problems running Web applications. The server's logs can also be helpful. They are located in the server's relevant domain directory. You can also view the IDE log, visible by selecting View > IDE log.
The index.jsp page
opens in your default browser. Note that the browser window may open before the IDE displays the server output.
Enter your name in the text box, then click OK. The response.jsp
page displays, providing you with a simple greeting.
I've built and run the project. When I click the OK button for index.jsp,
an error page displays indicating that response.jsp is not available.
Have you looked in the IDE's Output window (Ctrl-4) in the project tab or in the server tab?
What error messages are there? What JDK does your project use? What server? JDK 7 requires GlassFish 3.x or Tomcat
7.x. Right-click the project's node in the Projects window and select Properties. The
JDK is in the Libraries category, in the Java Platform field. The server version is in
the Run category. Lastly, download
the sample project and compare it with your own.
I've built and run the project but no name appears, only "Hello, !"
Does your <jsp:setProperty> tag contain a value = "" attribute? This overwrites the value you passed in the index.jsp form and replaces it with an empty string. Delete the value attribute.
I've built and run the project but get "Hello, null!"
First, check the IDE's Output windows for both application and server, and the server log. Is the server running? Was the application deployed? If the server is running and the application was deployed, are you getting an org.apache.jasper.JasperException: java.lang.NullPointerException? This usually means that a value in your code is not initialized correctly. In this tutorial, it means that you probably have a typo somewhere in a property name in your JSP files. Remember that property names are case-sensitive!
This concludes the Introduction to Developing Web Applications tutorial.
This document demonstrated how to create a simple web application using
NetBeans IDE, deploy it to a server, and view its presentation in a browser.
It also showed how to use JavaServer Pages and JavaBeans in your application
to collect, persist, and output user data.
For related and more advanced information about developing web applications
in NetBeans IDE, see the following resources:
| https://netbeans.org/kb/docs/web/quickstart-webapps.html | CC-MAIN-2016-50 | refinedweb | 1,052 | 67.35 |
Hi Axis User.
I am getting the below error in the client code when I try to invoke the service with WS-Security Rampart. I am using Axis2 1.5 and Rampart 1.4.
XMLStreamException "the prefix ==> already exists for namespace in "urn:com1"" when WS-Security Rampart is engaged for Axis2 web services.
Attached are the error log file (stacktrace) & client code. I am
using XML import in the schema. I am able to successfully test the Rampart
policy samples without any issue.
I am getting the error when I add Rampart to our existing service, which has a complex schema. Has this error occurred due to the xml import?
Below is my XSD hierarchy
service.wsdl --
<wsdl:types>
<xsd:import - in
<xsd:import - out
</wsdl:types>
hub_channel.xsd(urn:chl) imports the below scheams.
<xsd:import
<xsd:import
hub_message.xsd(urn:msg) import
<xsd:import
Can you please help with this issue? I would really appreciate it. I have been trying to resolve it for the last week without success. I searched the Axis User list; other people have encountered the same problem, and I don't think it has been resolved.
Here is the link:
In the forum, one of the users, Richard, mentioned that the Rampart devs would like to acknowledge the problem and maybe even fix it. Another user debugged the issue and added the below comments in the forum:
"I debugged the code and observed that, in the serialize method
of the XML node POJO
(generated by WSDL2JAVA), the "MTOMAwareXMLStreamWriter
xmlWriter" parameter gets an
instance of MTOMAwareOMBuilder if WS-Security is enabled.
Whereas, without WS-Security it gets an instance of
org.apache.axis2.databinding.utils.writer.MTOMAwareXMLSerializer which uses
MTOMXMLStreamWriter which in turn uses
com.ctc.wstx.sw.SimpleNsStreamWriter to
serialize the response.
I also tried using AXIS 1.4 without any success. Any idea how
can this problem can be solved?
Is there a way to let Rampart know which serializer should be
used? Am I missing any
configuration details of Rampart? "
Thanks
Srini Maran Error1.rtf Client.rtf
--
View this message in context:
Sent from the Axis - User mailing list archive at Nabble.com.
| http://mail-archives.apache.org/mod_mbox/axis-java-user/200910.mbox/%3C26083056.post@talk.nabble.com%3E | CC-MAIN-2018-09 | refinedweb | 356 | 60.82 |
Receiving Meshes via ROS in Python
I am using voxblox_ros, which is a real-time mesh generator, and it saves a .ply file under a folder.
What I want to achieve is to read the mesh that is saved under that folder. I've looked it up and found that there is a message type called shape_msgs/Mesh, which sounded like something I could use. However, I have question marks in my head about how exactly to fill in the data for this message.
I created a msg (MeshInfo.msg):

string mesh_id
shape_msgs/Mesh part_mesh
and a srv (MeshArray.srv):

---
MeshInfo[] mesh_array
which I refer in the code as:
self.mesh_srv = rospy.Service("voxblox_ros/generate_mesh", MeshArray, self.mesh_processor_srv) # get the mesh
I've come up with this kind of code which I have not tested yet, just wanted to illustrate what I have in mind:
def mesh_processor_srv(self, req):
    # this service does not return anything by itself, it only saves the mesh to the designated folder
    # so, one must implement a mechanism here to import the mesh which is saved in that folder
    rospy.wait_for_service('voxblox_ros/generate_mesh')
    MeshArrayResponse = rospy.ServiceProxy('voxblox_ros/generate_mesh', MeshArray)
    # navigate to the voxblox_ros package; under mesh_results, meshes are stored as .ply files
    files = os.listdir(home + "/ros_ws/src/voxblox/voxblox_ros/mesh_results/")  # get the mesh files
    meshlist = []  # to keep the meshes in
    # loop over each mesh, import and create our mesh object
    for i in range(len(files)):
        mesh = MeshInfo()
        mesh.mesh_id = "mesh_" + os.path.splitext(os.path.basename(files[i]))[0]  # get the mesh name from the file
        mesh.part_mesh = PlyData.read(home + "/ros_ws/src/voxblox/voxblox_ros/mesh_results/" + files[i])  # get the mesh data
        meshlist.append(mesh)
    response = MeshArrayResponse()
    response.mesh_array = meshlist
    return response
which I am not sure about. Has someone had a similar task and figured out how to crack it?
| https://answers.ros.org/question/330092/receiving-meshes-via-ros-in-python/ | CC-MAIN-2021-21 | refinedweb | 309 | 66.64 |
Python library for tracking time and displaying progress bars
Project description
Chronometry
ProgressBar
Estimator
Estimator is an object that estimates the running time of a single argument function.
You can use it to avoid running a script for too long.
For example, if you want to cluster a large dataset, and running it might take too long and cost too much if you use cloud computing, you can create a function with one argument x which takes a sample with x rows and clusters it; then you can use Estimator to estimate how long it takes to run it on the full dataset by providing the actual number of rows to the estimate() method.
Estimator uses a polynomial linear regression model and gives more weight to larger numbers during training.
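Chronometry's actual model is internal to the library; as a toy sketch of the weighting idea, here is a plain-Python weighted least-squares fit of a single quadratic coefficient, where the weights grow with input size (the data points and weight choice are illustrative assumptions, not chronometry's internals):

```python
def fit_quadratic(sizes, times):
    """Weighted least-squares fit of t ≈ c * n**2, weighting larger inputs more (w = n)."""
    num = sum(n * t * n ** 2 for n, t in zip(sizes, times))
    den = sum(n * n ** 4 for n in sizes)
    return num / den

def predict_time(c, n):
    """Extrapolate the fitted model to input size n."""
    return c * n ** 2

sizes = [10, 50, 100, 500, 1000]
times = [0.002, 0.05, 0.2, 5.0, 20.0]   # roughly t = 2e-5 * n**2
c = fit_quadratic(sizes, times)
print(predict_time(c, 2000))  # extrapolated estimate, about 80 seconds
```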
Usage
from chronometry import Estimator
from time import sleep

def multiply_with_no_delay(x, y):
    return (x ** 2 + 0.1 * x ** 3 + 1) * 0.00001 + y * 0.001

def multiply(x, y):
    sleep_time = multiply_with_no_delay(x, y)
    if sleep_time > 30:
        raise
    sleep(sleep_time)
    if y == 6:
        sleep(12)
    elif 7 < y < 15:
        raise Exception()
    return sleep_time

# the `unit` argument chooses the unit of time to be used. By default unit='s'
estimator = Estimator(function=multiply, polynomial_degree=3, timeout=5)
estimator.auto_explore()
estimator.predict_time(x=10000, y=10000)
The above code runs for about 53 seconds and then estimates that multiply(10000, 10000) will take 1002371.7 seconds, which is only slightly more than the correct number: 1001010 seconds.
max_time is the maximum time allowed for the estimate function to run.
If you are using Estimator in Jupyter, you can plot the measurements with the plot() method (no arguments needed), which returns a matplotlib AxesSubplot object and displays it at the same time.
estimator.plot('x') estimator.plot('y')
| https://pypi.org/project/chronometry/ | CC-MAIN-2021-04 | refinedweb | 325 | 51.18 |
Containers to optimally price ads for advertisers, drive customer engagement, and place ads in the right position.
Recently, neural network architectures have made use of feature embedding techniques. As described in Deep Interest Network for Click-Through Rate Prediction, most of these models adopt an embedding layer followed by a multi-layer perceptron (MLP), with explicit or implicit feature interactions defined by the neural network architecture.
Our Amazon Advertising team works with big data and deep learning to model CTR predictions and bring relevant ads to delight our customers. We deal every day with datasets at petabyte scale and mine insights from data to improve predictions. The work requires us to be able to quickly analyze and process large datasets and the solution we use must easily scale to a large team of scientists and engineers.
In this blog post, we will use ad CTR prediction to introduce Amazon Elastic Kubernetes Service (Amazon EKS) as a compute infrastructure for big data workloads.
A typical solution
We use Apache Spark to process the datasets. Spark has been the leading framework in the big data ecosystem for years. It's used by many production systems and can easily handle petabyte-scale datasets. Spark has many built-in libraries to efficiently build machine learning applications (for example, Spark MLlib, Spark Streaming, Spark SQL, and Spark GraphX). Our compute infrastructure for Spark runs on Amazon EMR, a service that automates Spark cluster lifecycle management. Our team runs many ad-hoc and production jobs on Amazon EMR.
Going agile with Amazon EKS
Kubernetes is the industry-leading container orchestration engine. It provides powerful abstractions for managing all kinds of application deployment patterns, optimizing resource utilizations, and building agile, consistent environments in the application development process.
It’s natural to ask if we can run big data workloads on Kubernetes. It’s been discussed in AWS blog posts like Deploying Spark jobs on Amazon EKS and Optimizing Spark performance on Kubernetes. We think running some of our Spark applications on Kubernetes can greatly improve our engineering agility.
These are our requirements for a solution built with Amazon EKS:
- Agile: Users can start a Spark cluster and run applications in under a minute.
- Minimal configuration
- Autoscaling: Upon user request, compute resources must be allocated automatically.
- Multi-tenant: A large team of users must be able to run applications simultaneously without interfering with each other’s applications.
Architecture
The following figure shows the architecture of our solution. To run Jupyter notebook on EKS, we build Docker images that include Jupyter, Spark, and any required libraries. For information about how to build docker/jupyter/Dockerfile and docker/spark/Dockerfile, see the README on GitHub. Developers create a Jupyter notebook server as a pod in an Amazon EKS cluster. We set up a K8s service to front the pod deployment. Finally, we can use kubectl port-forward to set up a local connection to the Jupyter server and work on the notebooks locally.
Tutorial: CTR modeling on Amazon EKS
In this tutorial, we will use the classic MLP model to illustrate the workflow of deep learning modeling running Spark on Kubernetes. We show how to create a Jupyter notebook server pod in an Amazon EKS cluster, how to connect to the notebook server from a local browser, and then from inside the notebook, we do end-to-end feature engineering and deep learning model training in Amazon EKS.
Clone the repository
To clone the repository, run
git clone.
In this tutorial, we assume that you’re running an Amazon EKS cluster. If you do not already have a cluster, follow the steps in Getting started with Amazon EKS to create one. Your cluster should have at least one node group backed by Amazon Elastic Compute Cloud (Amazon EC2) instances that have at least 4 cores and 20 GB of memory. For more information, see Amazon EC2 Instance Types. Enable Cluster Autoscaler to automatically adjust the number of nodes in your cluster as you spin up Spark drivers and executors.
Create namespace
First, create a namespace called notebook. For YAML files and information about how to create a namespace, roles, service account, deployment, and service, see the README on GitHub.
Create Spark ServiceAccount
Next, create a ClusterRoleBinding and ServiceAccount, which you will use to run Spark on Kubernetes.
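The repository provides these resources as YAML; the equivalent imperative commands look roughly like the following sketch. The service account name `spark` and the `edit` role binding are assumptions, not values from the original post.

```shell
# Sketch: namespace, ServiceAccount, and ClusterRoleBinding for Spark on K8s.
# Names ("notebook", "spark") and the "edit" role are illustrative assumptions.
kubectl create namespace notebook
kubectl create serviceaccount spark -n notebook
kubectl create clusterrolebinding spark-role \
  --clusterrole=edit \
  --serviceaccount=notebook:spark
```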
Build Docker images
Note: Before you run the following commands, update the AWS_ACCOUNT_ID, REGION, and ECR-REPO.
The following image can be used as spark.kubernetes.container.image later in the tutorial notebook.
The following image can be used in jupyter-notebook.yaml.
Create Jupyter notebook server
Edit jupyter-notebook.yaml and replace <REPLACE_WITH_JUPYTER_DOCKER_IMG> with the $JUPYTER_IMAGE you just built. Now run the notebook server as a Kubernetes deployment and build a service on it to expose IP addresses for the notebook UI and Spark UI.
To check the status of your service:
After your service is running, you can access the Jupyter notebook through port forwarding and open it in your browser.
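The port-forward step might look like the following sketch. The service name `jupyter-notebook` and the port numbers are assumptions; check your own manifests with `kubectl get svc -n notebook`.

```shell
# Forward local ports to the notebook Service in the "notebook" namespace.
# 8888 is the usual Jupyter port and 4040 the Spark UI port (assumptions).
kubectl port-forward svc/jupyter-notebook 8888:8888 4040:4040 -n notebook
# Then browse to http://localhost:8888 (Jupyter) and http://localhost:4040 (Spark UI).
```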
It’s surprisingly fast to start a Spark cluster in Amazon EKS. You can start hundreds of executors in less than a minute depending on the availability of EC2 instances. This is a great time-saving feature that enables rapid data analysis. To make Spark available in your Jupyter notebook, you can use findspark to initialize and import PySpark just as regular libraries. You’ll be using the following library to create your Spark session with CPU, memory, Spark Shuffle partitions, and so on. The Kubernetes scheduler assigns the Spark executor containers to run as pods on the EC2 instances in an Amazon EKS cluster.
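A Spark-on-Kubernetes session is configured through `spark.kubernetes.*` properties. Since the exact values depend on your cluster, the sketch below just collects illustrative settings in a plain dict; the keys are real Spark configuration keys, but every value (namespace, service account, image URI, sizes) is an assumption you would replace, and with `pyspark` installed you would pass the pairs to `SparkSession.builder.config()`.

```python
# Illustrative Spark-on-Kubernetes settings; all values are assumptions.
spark_conf = {
    "spark.master": "k8s://https://kubernetes.default.svc",
    "spark.kubernetes.namespace": "notebook",
    "spark.kubernetes.authenticate.driver.serviceAccountName": "spark",
    "spark.kubernetes.container.image": "<ACCOUNT>.dkr.ecr.<REGION>.amazonaws.com/<ECR-REPO>:spark",
    "spark.executor.instances": "10",
    "spark.executor.cores": "2",
    "spark.executor.memory": "4g",
    "spark.sql.shuffle.partitions": "200",
}

def builder_args(conf):
    """Flatten the dict into (key, value) pairs for SparkSession.builder.config()."""
    return sorted(conf.items())
```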
Create a notebook
In the file browser, click the + button and then in the Launcher tab, select the Python kernel. You can also start with the CTRModelingOnEKS.ipynb.
Note: Use the Spark Docker image you built earlier as the value of spark.kubernetes.container.image.
Sampling
At Amazon, we deal with petabyte-scale datasets. For the purpose of the tutorial, we synthesized two sample datasets to illustrate the workflow rather than the scale. We first load a small dataset with 100,000 rows to do feature engineering, model training, and so on. Later, we use a larger dataset with around one billion rows to run the same workflow at scale. For information about how to create synthetic datasets in Spark, see the notebook on GitHub.
Feature engineering
Machine learning models expect numeric inputs in order to make predictions, but datasets usually come in an unstructured format. Because using the right numeric representation of features is crucial for the success of a machine learning model, we usually spend a lot of time understanding the data and applying appropriate transformations to the raw data to produce high-quality features. This process is called feature engineering. We cover a few common feature types and their associated feature engineering APIs in PySpark.
There are ten numerical features and one categorical feature in this raw dataset.
Click is the label column. For simplicity, we use names like numeric_n for features. A quick exploratory data analysis (EDA) can show the ratio of click vs. non-click impressions in this dataset. As the following screenshot shows, there are 13,536 click impressions and 86,464 non-click impressions in this sample dataset.
Categorical features
Categorical data is the most common type of data. The values are discrete and form a finite set. A typical example is the size of shirts. The values include S, M, L, XL, and so on. Feature engineering for categorical data applies transformations on the discrete values to produce distinct numeric values.
Here is a simple example of how we can achieve that with StringIndexer from PySpark.
As the output shows, StringIndexer maps B in column categoric_0 to 0.0 in column numeric_10 and A to 1.0.
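StringIndexer assigns indices by descending label frequency, so the most common value gets 0.0. A plain-Python sketch of that mapping (mimicking Spark's default frequencyDesc ordering, with ties broken alphabetically as an assumption):

```python
from collections import Counter

def string_index(values):
    """Mimic StringIndexer: most frequent label -> 0.0, next -> 1.0, ..."""
    freq = Counter(values)
    ordered = sorted(freq, key=lambda v: (-freq[v], v))
    mapping = {v: float(i) for i, v in enumerate(ordered)}
    return [mapping[v] for v in values], mapping

# "B" appears three times and "A" twice, so B -> 0.0 and A -> 1.0.
indexed, mapping = string_index(["B", "A", "B", "B", "A"])
```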
Vectorization
We can now use the VectorAssembler API to merge all of the transformed values into a vector for further normalization so they can be easily processed by the learning algorithm.
Normalization
When you’re dealing with numeric data, each column might have a totally different range of values. The normalization step standardizes the data. Here we perform normalization with StandardScaler from PySpark.
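The arithmetic behind StandardScaler is simple; here is a plain-Python sketch of it on one column. Like Spark, it divides by the sample standard deviation, and centering is opt-in (Spark's `withMean` is off by default).

```python
import statistics

def standard_scale(column, with_mean=False):
    """Scale one column to unit sample standard deviation;
    optionally subtract the mean first (like StandardScaler's withMean)."""
    mu = statistics.mean(column) if with_mean else 0.0
    sd = statistics.stdev(column)  # sample standard deviation
    return [(x - mu) / sd for x in column]

scaled = standard_scale([2.0, 4.0, 6.0, 8.0], with_mean=True)
```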
Training data preparation
After feature engineering, you normally need to split the dataset into three non-overlapping sets for training, calibration, and testing. You can either choose full-scale training with all impressions or, for more efficient training, use a down-sampling strategy on the training dataset to adjust the ratio of clicks vs. non-clicks. You can also apply calibration to correct the overall bias after training. Because the dataset in this tutorial is small, we just do fair sampling for the training and testing datasets and skip the calibration step.
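A weighted random split, similar in spirit to Spark's `DataFrame.randomSplit`, can be sketched in plain Python as follows (the weights and seed are illustrative):

```python
import random

def random_split(rows, weights=(0.8, 0.2), seed=42):
    """Assign each row to a bucket with probability proportional to its weight."""
    rng = random.Random(seed)
    total = sum(weights)
    bounds, acc = [], 0.0
    for w in weights:
        acc += w / total
        bounds.append(acc)
    buckets = [[] for _ in weights]
    for row in rows:
        r = rng.random()
        for i, b in enumerate(bounds):
            if r <= b:
                buckets[i].append(row)
                break
    return buckets

train, test = random_split(list(range(1000)))
```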
Model training
A basic deep learning-based CTR model follows the embedding MLP paradigm, where network layers are fed by an encoding layer. This layer uses encoding techniques to encode the historical features like click-through rate (CTR) and clicks over expected clicks (COEC) and text features (for example, shopping query) and the standard categorical features (for example, page layout). First, the large-scale sparse feature inputs are mapped into low dimensional embedding vectors. Then, the embedding vectors are transformed into fixed-length vectors through average pooling. Finally, the fixed-length feature vectors are concatenated together to feed into MLP. In this tutorial, we build a basic MLP model for CTR prediction through the MultilayerPerceptronClassifier in the Spark ML library. The raw feature set has ten numerical features and one categorical feature, so we define one single hidden layer with 25 nodes. The output dimensions of the MLP layers are 11, 25, and 2. Nodes in the intermediate layers use a sigmoid function. Nodes in the output layer use a softmax function.
Model evaluation
We will use log loss and Area under the ROC curve (AUC) to evaluate the quality of the model. Log loss is a standard evaluation criterion for events such as click-through rate prediction. The ROC curve shows the trade-off between the false positive rate (FPR) and the true positive rate (TPR) as we relax the threshold for giving a positive prediction.
In this tutorial, because CTR prediction is a binary classification problem, we use BinaryClassificationMetrics in Spark ML to compute metrics. We prepare a tuple <probability, label> for metrics computation.
ROC and AUC
A receiver operating characteristic curve, or ROC curve, is a graph that shows the performance of a classification model at all classification thresholds. This curve plots two parameters: True Positive Rate (TPR) and False Positive Rate (FPR). A higher Area under the ROC curve or AUC means the model is better at differentiating the classes.
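AUC has a useful probabilistic reading: it is the probability that a randomly chosen positive is scored above a randomly chosen negative. A small sketch computing it directly from that identity (fine for illustration; libraries use a faster rank-based formula):

```python
def auc(scores_labels):
    """AUC via the Mann-Whitney identity: fraction of (positive, negative)
    pairs where the positive is scored higher (ties count half)."""
    pos = [s for s, y in scores_labels if y == 1]
    neg = [s for s, y in scores_labels if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A perfectly separating scorer gets AUC 1.0:
perfect = auc([(0.9, 1), (0.8, 1), (0.3, 0), (0.1, 0)])
```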
Log Loss
Log loss is the most important classification metric based on probabilities. For a given problem, a lower log loss value means better predictions.
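Concretely, log loss is the mean negative log-likelihood of the true labels under the predicted probabilities. A minimal sketch:

```python
import math

def log_loss(pairs, eps=1e-15):
    """Mean negative log-likelihood.
    pairs: iterable of (predicted_probability, label) with label in {0, 1}."""
    total, n = 0.0, 0
    for p, y in pairs:
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
        n += 1
    return total / n
```

A confident correct prediction is rewarded with a lower loss than a hesitant one.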
CTR distribution
Feature engineering on 1B dataset
After exploring the small dataset, we go through the same feature engineering process on the 1B dataset and prepare data for model training. Our solution works well on data of this size.
After completing the model training and evaluation on the 1B dataset, the AUC and log loss are as follows:
Multi-tenant
After you complete this tutorial, you can call the spark.stop() API. Spark terminates itself in seconds, and the compute resources are returned to the Amazon EKS cluster. Finally, when you’re done using the Amazon EKS cluster, you should delete the resources associated with it so that you don’t incur any unnecessary costs. One of the major benefits of running workloads on Kubernetes is that heterogeneous workloads (Spark, services, tooling, etc.) and infrastructure tenants (users, projects, etc.) can share the same compute environment, and resource-intensive jobs can use the idle resources when there aren’t many active jobs.
Conclusion
This blog post covered the problem of click prediction in digital advertising and a typical workflow of feature engineering and model training. Amazon EKS makes it possible to create a Spark cluster in just a few minutes. It provides a serverless experience, so users no longer need to worry about hardware provisioning. It also offers a higher degree of resource sharing, so users can run larger-scale jobs in a more cost-effective way.
You’ll find the code used in this blog post at.
Source: https://aws.amazon.com/blogs/containers/advertising-click-prediction-modeling-on-amazon-eks/
Is there any way to do it with the Java 8 Stream API?
I need to transform each item of a collection to another type (DTO mapping) and return them all as a list...
Something like
Collection<OriginObject> from = response.getContent();
DtoMapper dto = new DtoMapper();
List<DestObject> to = from.stream().forEach(item -> dto.map(item)).collect(Collectors.toList());
public class DtoMapper {
    public DestObject map(OriginObject object) {
        return //conversion;
    }
}
I think you're after the following:
List<SomeObject> result = response.getContent()
    .stream()
    .map(dto::map)
    .collect(Collectors.toList());
// do something with result if you need.
Note that forEach is a terminal operation. You should use it if you want to do something with each object (such as print it). If you want to continue the chain of calls, perhaps further filtering, or collecting into a list, you should use map.
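A complete, runnable sketch of the accepted approach; the nested `OriginObject`/`DestObject` types here are hypothetical stand-ins for the DTOs in the question.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class Main {
    // Hypothetical types standing in for the question's OriginObject / DestObject.
    static class OriginObject {
        final int id;
        OriginObject(int id) { this.id = id; }
    }
    static class DestObject {
        final String label;
        DestObject(String label) { this.label = label; }
    }
    static class DtoMapper {
        DestObject map(OriginObject o) { return new DestObject("dto-" + o.id); }
    }

    public static void main(String[] args) {
        List<OriginObject> from = Arrays.asList(new OriginObject(1), new OriginObject(2));
        DtoMapper dto = new DtoMapper();
        // map() transforms each element; forEach() returns void, so nothing
        // can be collected after it -- which is why the original attempt fails.
        List<DestObject> to = from.stream()
                .map(dto::map)
                .collect(Collectors.toList());
        System.out.println(to.size() + ":" + to.get(0).label);
    }
}
```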
Source: https://codedump.io/share/CsDwWLeDxTxl/1/iterate-through-collection-perform-action-on-each-item-and-return-as-list
The RC package includes:
- A new silverlight.js file that detects both the beta and the RC version;
- A breaking changes document that highlights differences between the beta and RC;
- An updated Visual Studio template that demonstrates the correct way to embed the new control;
- A EULA that governs legal usage of the above 🙂
With the approaching availability of the next interim release of Silverlight 1.0, the Release
I think you all know what Silverlight is, but in short I could define it as: a plug-in for various
Silverlight 1.0 RC – Soon
Prepare your self to the Silverlight 1.0 RC1 edition which can be downloaded from here " Silverlight
Thats nice.
so I’ve plugged in the new Silverlight.js, and checked all my JS namespaces are fine. Now Run it.
Triffic, marvellous. I have the Aplha installed, why on earth am I being prompted to download the RC1, sure I can understand it if I had just the beta installed. But I got the alpha.
I take it that at exactly the same time as the RC1 there will be a new Alpha that Silverlight.js knows about, if not I feel a newer version of Silverlight.js coming on even though I’m not allowed to edit it !!!!!!
Tim, is there a Silverlight Starter kit or template that includes the master page with the script code for Silverlight so that we can create Silverlight web sites? I have been trying to get this answered over on the Silverlight forums.
I got a lot of feedback when I posted my copy of the Silverlight Surface demo via my Silverlight hosting account. I and others were confused for a while as most people already had the 1.0 bits and couldn’t understand why they were getting the Silverlight download prompt (as it turns out, quite correctly – for the 1.1 Alpha release).
Just a thought but is there any mileage in the Silverlight.js informing people of the bits they currently have and do not have? (at least just until things settle down a bit?)
NB The channel9 discussion thread refered to is here:
Keep up the good work!
Hi,
Are controls like TextBox, ComboBox, … available for the RC1 ?
We’ve been working like mad to get Silverlight v1.0 ready to ship. We’ve improved perf, added a small
Preparing for Silverlight 1.0 RC and Beyond: Silverlight Breaking Changes between Mix and Version 1.0…
that Great!
Silverlight 1.0 preview
Hi
I am using Silverlight Alpha 1.1, Is there any updates ( RC) for the ALPHA .
Link Listing – July 13, 2007
The last editorial I wrote about Visual Studio 2008’s multi-targeting support , however as the Launch
There are going to be some breaking changes in the upcoming Silverlight RC. We want to prepare people
Hi,
We have developped a video player using the PreviewMedia.js from the Media Encoder 1.0Templates.
Where is it possible to find a version of that script that will work with the RC version?
As indicated in a previous post , we’re homing in on the launch of Silverlight 1.0, and today marks another
Seen on Tim Sneath’s blog: As indicated in a previous post, we’re homing in on the launch of Silverlight
Can’t say too much about code, programming, that’s not my bag – I’m just a dedicated Microsoft Products user, and guess what? About 04:30 hours today Thursday September 6, 2007 I downloaded and installed Silverlight v.1.02. Had a couple of incidental problems yesterday with the download, quite possibly linked to my computer’s RAID network – however, no big deal – all is cool right now. Well, let me tell you, that the quality of the video is equal to or surpasses the quality of my HDTV reception. I wear eye glasses, am Visually_Impaired, but Lo’ and Behold – Silverlight took care of my problems – SNAP – POP – and KRACKLE – WOW – what a performance. Never saw anything like it before, and surely how can anyone TOP THIS ONE? Impossible, unless 3 dimension – my Hat’s OFF – to Microsoft Corporation, the Silverlight Team and all of the partners that supported this exciting, innovating, and brilliant technology…….and, I’m telling the truth.
Thank you for this opportunity to provide you with feedback.
Source: https://blogs.msdn.microsoft.com/tims/2007/07/13/preparing-for-silverlight-1-0-rc-and-beyond/
returned one by one by a method (load) called repeatedly. I defined a
generator to do this (loadall), but this seems unwieldy in general. Is
there a common idiom here that can usefully be encapsulated in a general
method?
On Mon, 2009-08-31 at 17:57 +0100, Chris Withers wrote:
> Sverker Nilsson wrote:
> > It reads one Stat object at a time and wants to report something
> > when there is no more to be read from the file.
> Hmm, am I right in thinking the above can more nicely be written as:
>
> from guppy import hpy
> h = hpy()
> f = open(r'your.hpy')
> sets = []
> for s in iter(h.load(f)): sets.append(s)

The above iterates over one Stat object returned by h.load(f). I assume
you want to iterate over all the objects loaded.
A way to iterate over the objects (not having to define the loadall()
function from previous mail) that I came up with now, is via itertools
imap with h.load applied to an infinite iterator repeating the file:
from guppy import hpy
from itertools import imap, repeat
h=hpy()
f=open('your.hpy')
sets = list(imap(h.load,repeat(f)))
Maybe the above idiom could usefully be encapsulated in a standard
function?
def iapply(func, *args):
    '''Iterator repeatedly calling func with *args, infinitely or
    until it raises StopIteration'''
    from itertools import imap, repeat
    return imap(func, *[repeat(arg) for arg in args])
Usage, eg:

sets = list(iapply(h.load,f))

or

for s in iapply(h.load,f):
    ...
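For what it's worth, under Python 3 `itertools.imap` is gone and PEP 479 turns a StopIteration escaping a generator into a RuntimeError, so a modern rendering of the same idiom has to catch the exception explicitly:

```python
def iapply(func, *args):
    """Yield func(*args) repeatedly until func raises StopIteration
    (Python 3 version; PEP 479 forbids letting StopIteration leak out)."""
    while True:
        try:
            yield func(*args)
        except StopIteration:
            return

# next() raises StopIteration when the iterator is exhausted, ending the loop:
loaded = list(iapply(next, iter([10, 20, 30])))
```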
What do you think, should something like iapply be added to itertools?
What is the better name? :-)
Sverker
--
Expertise in Linux, embedded systems, image processing, C, Python...
Source: https://grokbase.com/t/python/python-list/0991hby6xt/an-iteration-idiom-was-re-guppy-pe-list-loading-files-containing-multiple-dumps
Hi, I made a script to export the XML scheme of each of the time series models I run. But when I look at the results, the scheme is always the same. Do I have to do something to "reset" the scheme between two runs?
The command that I used:
from modeler.api import FileFormat
stream = modeler.script.stream()
TSId = "id8IDRSQ799L6"
TSNode = stream.findByID(TSId)
results = []
TSNode.run(results)
applynode = stream.findByType("applytimeseries", None)
#print cm.getXMLAsString()
print "Time series tags =", applynode.getContentModelTags()
cm = applynode.getContentModel("TSCXML")
print cm.getXMLAsString()
Answer by TravisL (256) | Jun 08, 2016 at 04:19 PM
Time series model nuggets are not supported for export to PMML. Refer to the online help topic here which lists the supported model types available for PMML export:
Answer by blacoste (5) | Jun 09, 2016 at 06:30 AM
Thank you for the answer. In this case, IBM should update their code so that the function getContentModelTags() returns "None" for an applytimeseries node and cleans up the return of getXMLAsString()...
Source: https://developer.ibm.com/answers/questions/277916/why-the-xml-scheme-is-not-updated-after-a-refresh.html
Hi,
I am doing a painting program (KIds Paint - you can find in Android Market) and I have a lot of requests to save the content on disk or to wallpaper. I have been searching around but cannot find solution.
My guess is that I probably wanted to get the bitmap from the canvas, but I can't find ways to get it (why isn't there a getBitmap or capturePicture or some sort?). Then I try to set an empty bitmap into the canvas and draw on the canvas, and save the bitmap... but I got an empty bitmap.
Please help! Thanks. I would like to add the feature to the application.
Here's my codes:
public class KidsPaintView extends View {
    Bitmap bitmap = null;
    ...
    protected void onDraw(Canvas canvas) {
        if (bitmap == null) {
            bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
            canvas.setBitmap(bitmap);
        }
        ... // do painting on canvas
    }
}
Then in my main code I try to retrieve the bitmap and save it as wallpaper:
Bitmap bitmap = view.bitmap;
try { setWallpaper(bitmap); }
catch (IOException e) { e.printStackTrace(); }
But all I got is a black wallpaper. What am I doing wrong? Or is there a better way? Thanks!
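The usual fix for this is to stop calling setBitmap on the Canvas that onDraw hands you (that canvas belongs to the screen): instead, own a Bitmap plus a second Canvas that wraps it, paint into that, draw the Bitmap onto the screen canvas in onDraw, and persist the Bitmap when saving. Android classes can't run outside a device, so here is the same offscreen-buffer pattern sketched with `java.awt.BufferedImage` standing in for Bitmap/Canvas (all names and the analogy are mine, not from the original thread):

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import javax.imageio.ImageIO;

public class Main {
    public static void main(String[] args) throws Exception {
        // Own the backing buffer, as you would own a Bitmap on Android.
        BufferedImage bitmap = new BufferedImage(64, 64, BufferedImage.TYPE_INT_ARGB);
        Graphics2D canvas = bitmap.createGraphics();  // analogue of new Canvas(bitmap)
        canvas.setColor(Color.RED);
        canvas.fillOval(8, 8, 48, 48);                // the "painting" happens here
        canvas.dispose();

        // Persist the buffer (Bitmap.compress on Android; ImageIO here).
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ImageIO.write(bitmap, "png", out);
        System.out.println(out.size() > 0 ? "saved" : "empty");
    }
}
```

On Android the final step would be `bitmap.compress(Bitmap.CompressFormat.PNG, 100, stream)` or `setWallpaper(bitmap)`; because you painted into your own bitmap rather than asking the screen canvas for one, it is no longer empty.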
Source: http://www.anddev.org/other-coding-problems-f5/saving-canvas-to-disk-t8105.html
With Google Maps in your Android apps, you can provide users with localization functions, such as geographical information. Throughout this series we have been building an Android app in which the Google Maps Android API v2 combines with the Google Places API. So far we have displayed a map, in which the user can see their current location, and we have submitted a Google Places query to return data about nearby places of interest. This required setting up API access for both services. In the final part of the series, we will parse the Google Places JSON data and use it to show the user nearby places of interest. We will also make the app update the markers when the user location changes.
This is the last of four parts in a tutorial series on Using Google Maps and Google Places in Android apps:
- Working with Google Maps - Application Setup
- Working with Google Maps - Map Setup
- Working with Google Maps - Places Integration
- Working with Google Maps - Displaying Nearby Places
1. Process the Place Data
Step 1
You will need to add the following import statements to your Activity class for this tutorial:
import org.json.JSONArray; import org.json.JSONException; import org.json.JSONObject; import android.util.Log;
In the last tutorial we created an inner AsyncTask class to handle fetching the data from Google Places in the background. We added the doInBackground method to request and retrieve the data. Now we can implement the onPostExecute method to parse the JSON string returned from doInBackground, inside your AsyncTask class, after the doInBackground method:
protected void onPostExecute(String result) { //parse place data returned from Google Places }
Step 2
Back in the second part of this series, we created a Marker object to indicate the user's last recorded location on the map. We are also going to use Markers to show the nearby places of interest. We will use an array to store these Markers. At the top of your Activity class declaration, add the following instance variable:
private Marker[] placeMarkers;
By default, the Google Places API returns a maximum of 20 places, so let's define this as a constant too:
private final int MAX_PLACES = 20;
When we create the Markers for each place, we will use MarkerOptions objects to configure the Marker details. Create another array instance variable for these:
private MarkerOptions[] places;
Now let's instantiate the array. In your Activity onCreate method, after the line in which we set the map type, create an array of the maximum required size:
placeMarkers = new Marker[MAX_PLACES];
Now let's turn to the onPostExecute method we created. First, loop through the Marker array, removing any existing Markers. This method will execute multiple times as the user changes location:
if(placeMarkers!=null){ for(int pm=0; pm<placeMarkers.length; pm++){ if(placeMarkers[pm]!=null) placeMarkers[pm].remove(); } }
When the app code first executes, new Markers will be created. However, when the user changes location, these methods will execute again to update the places displayed. For this reason the first thing we must do is remove any existing Markers from the map to prepare for creating a new batch.
Step 3
We will be using Java JSON resources to process the retrieved place data. Since these classes throw certain exceptions, we need to build in a level of error handling throughout this section. Start by adding try and catch blocks:
try { //parse JSON } catch (Exception e) { e.printStackTrace(); }
Inside the try block, create a new JSONObject and pass it to the result JSON string returned from doInBackground:
JSONObject resultObject = new JSONObject(result);
If you look at the Place Search page on the Google Places API documentation, you can see a sample of what the query actually returns in JSON. You will see that the places are contained within an array named "results". Let's first retrieve that array from the returned JSON object:
JSONArray placesArray = resultObject.getJSONArray("results");
You should refer to the sample JSON result as we complete each section of this process - keep the page open in a browser while you complete the remainder of the tutorial. Next let's instantiate the MarkerOptions array we created with the length of the returned "results" array:
places = new MarkerOptions[placesArray.length()];
This should give us a MarkerOptions object for each place returned. Add a loop to iterate through the array of places:
//loop through places for (int p=0; p<placesArray.length(); p++) { //parse each place }
Step 4
Now we can parse the data for each place returned. Inside the for loop, we will build details to pass to the MarkerOptions object for the current place. This will include latitude and longitude, place name, type and vicinity, which is an excerpt of the address data for the place. We will retrieve all of this data from the Google Places JSON, passing it to the Marker for the place via its MarkerOptions object. If any of the values are missing in the returned JSON feed, we will simply not display a Marker for that place, to avoid Exceptions. To keep track of this, add a boolean flag:
boolean missingValue=false;
Now add local variables for each aspect of the place we need to retrieve and pass to the Marker:
LatLng placeLL=null; String placeName=""; String vicinity=""; int currIcon = otherIcon;
We create and initialize a LatLng object for the latitude and longitude, strings for the place name and vicinity and initially set the icon to use the default icon drawable we created. Now we need another try block, so that we can detect whether any values are in fact missing:
try{ //attempt to retrieve place data values } catch(JSONException jse){ missingValue=true; jse.printStackTrace(); }
We set the missing value flag to true for checking later. Inside this try block, we can now attempt to retrieve the required values from the place data. Start by initializing the boolean flag to false, assuming that there are no missing values until we discover otherwise:
missingValue=false;
Now get the current object from the place array:
JSONObject placeObject = placesArray.getJSONObject(p);
If you look back at the sample Place Search data, you will see that each place section includes a "geometry" section which in turn contains a "location" section. This is where the latitude and longitude data for the place is, so retrieve it now:
JSONObject loc = placeObject.getJSONObject("geometry").getJSONObject("location");
Attempt to read the latitude and longitude data from this, referring to the "lat" and "lng" values in the JSON:
placeLL = new LatLng( Double.valueOf(loc.getString("lat")), Double.valueOf(loc.getString("lng")));
Next get the "types" array you can see in the JSON sample:
JSONArray types = placeObject.getJSONArray("types");
Tip: We know this is an array as it appears in the JSON feed surrounded by the "[" and "]" characters. We treat any other nested sections as JSON objects rather than arrays.
Loop through the type array:
for(int t=0; t<types.length(); t++){ //what type is it }
Get the type string:
String thisType=types.get(t).toString();
We are going to use particular icons for certain place types (food, bar and store) so add a conditional:
if(thisType.contains("food")){ currIcon = foodIcon; break; } else if(thisType.contains("bar")){ currIcon = drinkIcon; break; } else if(thisType.contains("store")){ currIcon = shopIcon; break; }
The type list for a place may actually contain more than one of these places, but for convenience we will simply use the first one encountered. If the list of types for a place does not contain any of these, we will leave it displaying the default icon. Remember that we specified these types in the Place Search URL query string last time:
food|bar|store|museum|art_gallery
This means that the only place types using the default icon will be museums or art galleries, as these are the only other types we asked for.
After the loop through the type array, retrieve the vicinity data:
vicinity = placeObject.getString("vicinity");
Finally, retrieve the place name:
placeName = placeObject.getString("name");
Step 5
After the catch block in which you set the missingValue flag to true, check that value and set the place MarkerOptions object to null, so that we don't attempt to instantiate any Marker objects with missing data:
if(missingValue) places[p]=null;
Otherwise, we can create a MarkerOptions object at this position in the array:
else places[p]=new MarkerOptions() .position(placeLL) .title(placeName) .icon(BitmapDescriptorFactory.fromResource(currIcon)) .snippet(vicinity);
Step 6
Now, at the end of onPostExecute after the outer try and catch blocks, loop through the array of MarkerOptions, instantiating a Marker for each, adding it to the map and storing a reference to it in the array we created:
if(places!=null && placeMarkers!=null){ for(int p=0; p<places.length && p<placeMarkers.length; p++){ //will be null if a value was missing if(places[p]!=null) placeMarkers[p]=theMap.addMarker(places[p]); } }
Storing a reference to the Marker allows us to easily remove it when the places are updated, as we implemented at the beginning of the onPostExecute method. Notice that we include two conditional tests each time this loop iterates, in case the Place Search did not return the full 20 places. We also check in case the MarkerOptions is null, indicating that a value was missing.
Step 7
Finally, we can instantiate and execute our AsyncTask class. In your updatePlaces method, after the existing code in which we built the search query string, start this background processing to fetch the place data using that string:
new GetPlaces().execute(placesSearchStr);
You can run your app now to see it in action. It should display your last recorded location together with nearby places of interest. The colors you see on the Markers will depend on the places returned. Here is the app displaying a user location in Glasgow city center, UK:
Perhaps unsurprisingly a lot of the places listed in Glasgow are bars.
When the user taps a Marker, they will see the place name and snippet info:
2. Update With User Location Changes
Step 1
The app as it stands will execute once when it is launched. Let's build in the functionality required to make it update to reflect changes in the user location, refreshing the nearby place Markers at the same time.
Alter the opening line of the Activity class declaration to make it implement the LocationListener interface so that we can detect changes in the user location:
public class MyMapActivity extends Activity implements LocationListener {
A Location Listener can respond to various changes, each of which uses a dedicated method. Inside the Activity class, implement these methods:
@Override
public void onLocationChanged(Location location) {
    Log.v("MyMapActivity", "location changed");
    updatePlaces();
}

@Override
public void onProviderDisabled(String provider){
    Log.v("MyMapActivity", "provider disabled");
}

@Override
public void onProviderEnabled(String provider) {
    Log.v("MyMapActivity", "provider enabled");
}

@Override
public void onStatusChanged(String provider, int status, Bundle extras) {
    Log.v("MyMapActivity", "status changed");
}
The only one we are really interested in is the first, which indicates that the location has changed. In this case we call the updatePlaces method again. Otherwise we simply write out a Log message.
At the end of the updatePlaces method, add a request for the app to receive location updates:
locMan.requestLocationUpdates(LocationManager.NETWORK_PROVIDER, 30000, 100, this);
We use the Location Manager we created earlier in the series, requesting updates using the network provider, at delays of 30 seconds (indicated in milliseconds), with a minimum location change of 100 meters and the Activity class itself to receive the updates. You can, of course, alter some of the parameters to suit your own needs.
Tip: Although the requestLocationUpdates method specifies a minimum time and distance for updates, in reality it can cause the onLocationChanged method to execute much more often, which has serious performance implications. In any apps you plan on releasing to users, you should therefore limit the frequency at which your code responds to these location updates. The alternative requestSingleUpdate method used on a timed basis may be worth considering.
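One simple way to apply the tip above is to guard the body of onLocationChanged with a timestamp check so updatePlaces only runs when a minimum interval has passed. The class and names below are my own illustration, not part of the tutorial's code:

```java
public class Main {
    // Minimal throttle: act on a location update only if enough time has passed.
    static class UpdateThrottle {
        private final long minIntervalMs;
        private long lastMs = Long.MIN_VALUE / 2;  // so the first call passes

        UpdateThrottle(long minIntervalMs) { this.minIntervalMs = minIntervalMs; }

        boolean shouldUpdate(long nowMs) {
            if (nowMs - lastMs < minIntervalMs) return false;
            lastMs = nowMs;
            return true;
        }
    }

    public static void main(String[] args) {
        // In onLocationChanged you would call shouldUpdate(System.currentTimeMillis())
        // and only then run updatePlaces(). Simulated timestamps: 0s, 5s, 31s.
        UpdateThrottle throttle = new UpdateThrottle(30_000);
        boolean a = throttle.shouldUpdate(0);
        boolean b = throttle.shouldUpdate(5_000);
        boolean c = throttle.shouldUpdate(31_000);
        System.out.println(a + "," + b + "," + c);
    }
}
```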
Step 2
Last but not least, we need to take care of what happens when the app pauses and resumes. Override the two methods as follows:
@Override
protected void onResume() {
    super.onResume();
    if(theMap!=null){
        locMan.requestLocationUpdates(LocationManager.NETWORK_PROVIDER, 30000, 100, this);
    }
}

@Override
protected void onPause() {
    super.onPause();
    if(theMap!=null){
        locMan.removeUpdates(this);
    }
}
We check for the GoogleMap object before attempting any processing, as in onCreate. If the app is pausing, we stop it from requesting location updates. If the app is resuming, we start requesting the updates again.
Tip: We've used the LocationManager.NETWORK_PROVIDER a few times in this series. If you are exploring localization functionality in your apps, check out the alternative getBestProvider method with which you can specify criteria for Android to choose a provider based on such factors as accuracy and speed.
Before We Finish
That pretty much completes the app! However, there are many aspects of the Google Maps Android API v2 that we have not even touched on. Once you have your app running you can experiment with features such as rotation and tilting. The updated maps service displays indoor and 3D maps in certain places. The following image shows the 3D facility with the app if the user location was in Venice, Italy:
This has the map type set to normal - here is another view of Venice with the hybrid map type set:
Conclusion
In this tutorial series we have worked through the process of integrating both Google Maps and Google Places APIs in a single Android app. We handled API key access, setting up the development environment, workspace and application to use Google Play Services. We utilized location data, showing the user location together with nearby places of interest, and displaying the data with custom UI elements. Although what we have covered in this series is fairly extensive, it really is only the beginning when it comes to building localization features into Android apps. With the release of Version 2 of the Maps API, Android apps are set to take such functions to the next level.
|
http://code.tutsplus.com/tutorials/android-sdk-working-with-google-maps-displaying-places-of-interest--mobile-16145
|
CC-MAIN-2014-42
|
refinedweb
| 2,349
| 51.07
|
Logging in from Caché on one server to Caché on another server
Until this week, my customer had Ensemble writing HL7 messages to my TCP listener, working fine on ageing physical servers (Windows Server 2003, Caché 2009). They had four app servers (app1, app2, app3 and app4) with an overarching DNS simply called "app". Ensemble was connecting to "app:port", and it somehow found whichever of the four app servers my background listener was running on, and the interface worked fine for 10+ years.
They have now moved to shiny new virtual servers (Windows Server 2016, Caché 2018) and now have only two app servers (appv1 and appv2), with an overarching DNS of "appv". However, Ensemble no longer consistently finds the appv server that my listener is running on, so we have had to change the Ensemble connection to always connect to "appv2:port", and the users have to ensure that the listener is also running on appv2. Not ideal, and it certainly reduces any resilience capability.
People who are far more knowledgeable about networking than I am have concluded that they don't understand how it used to work, let alone why it no longer does.
Therefore I now need to get my listener running on both servers simultaneously.
For a while I had hopes of using the "joblocation" parameter on the JOB command (a new parameter to me), but it turns out that that only works across ECP, which isn't available at this site.
I'm now thinking that I will need to initiate a login from whichever appv server my background control process is running on (it's random, as it inherits from the interactive user, who lands on whichever server their load-balancer points them to) to the other appv server and JOB the listener on there.
Finally, to my question - how, just using Caché Object Script, do I initiate a login from Caché on appv1 to Caché on appv2, or vice-versa, and then run a specific COS .int routine when it gets there?
I've done similar things, but only swapping namespace on the same app server, never to the same namespace on another app server i.e.
Forgive any howlers in this - I am simply a long-in-the-tooth green-screen Mumpster!
|
https://community.intersystems.com/post/logging-cach%C3%A9-one-server-cach%C3%A9-another-server
|
CC-MAIN-2021-21
|
refinedweb
| 380
| 53.44
|
Hi Clinton,
I suspect the formatted SQL is slow because the database engine wastes
time parsing the whitespace, not only \r, \n and \t, but ' ' as well. And
in real-world SQL, ' ' actually occurs more often than the other three.
So I recommend following code,
public static String tidySql(String sql) {
    String[] values = sql.split("\\n");
    for (int i = 0; i < values.length; i++) {
        values[i] = values[i].trim();
    }
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < values.length; i++) {
        sb.append(values[i]).append(' ');
    }
    return sb.toString();
}
This code would reduce the size of the formatted sql, and should be helpful.
Best regards,
Jiming
On Tue, Feb 3, 2009 at 3:41 AM, Clinton Begin <clinton.begin@gmail.com>wrote:
> This is a good summary. I've been watching the other thread.
>
> I think I might have found a possible candidate:
>
> public class SqlText implements SqlChild {
> //...
> public void setText(String text) {
> this.text = text.replace('\r', ' ').replace('\n', ' ').replace('\t', '
> ');
> this.isWhiteSpace = text.trim().length() == 0;
> }
> //...
> }
>
> I'll have to wait to get home to check to see if this is called on
> each execution, instead of just once at SQL mapper build time.
>
> Cheers,
> Clinton
>
|
http://mail-archives.apache.org/mod_mbox/ibatis-user-java/200902.mbox/%3Caeed78f50902022122x764f26e3kaade5fdf504e15cb@mail.gmail.com%3E
|
CC-MAIN-2013-48
|
refinedweb
| 199
| 68.57
|
Hi!
I'm trying to create an array of structs from the contents of a .txt file, but am having a 'few' problems.
I realise this is quite a mess right now, I wouldn't normally ask for such help with such a 'mess', but I think with a couple of good suggestions, I'll be able to fix it up.

Code:
#include <iostream>
#include <fstream>
using namespace std;

const int MAXCHARS = 20;
const int MAXITEMS = 10;

struct structname {
    char itemname;
    double price;
    int luxury;
};

int main()
{
    structname mystruct;
    char ItemName[MAXITEMS][MAXCHARS] = { "Caviar", "Sprouts", "Salmon", "Eggs", "Truffles",
                                          "Quail", "Champagne", "Bread", "Brioche", "Apples" };
    double Price[MAXITEMS] = { 12.90, 0.80, 6.50, 0.75, 7.29, 5.55, 21.90, 0.80, 1.20, 1.10 };
    bool Luxury[MAXITEMS] = { false, false, true, false, true, true, false, false, true, false };

    for (int i = 0; i < 10; i++)
    {
        mystruct.itemname = ItemName[i];
        mystruct.price = Price[i];
        mystruct.luxury = Luxury[i];
        fout << mystruct.itemname << " " << mystruct.price << " " << mystruct.luxury << endl;
    }
    system("pause");
    return 0;
}
Also (embarrassed.jpeg) I created an empty project in Visual Studio to attempt this, and am unsure where to place the .txt file. The 'default'/typical location for within a project is what I'm after.
The attached screenshot shows where it is located right now (in with main.cpp):-
If anyone could offer me a point or two in the right direction, I'd be extremely grateful.
Many thanks!
|
http://cboard.cprogramming.com/cplusplus-programming/100246-newbie-array-structs-txt-file.html
|
CC-MAIN-2014-15
|
refinedweb
| 278
| 71.04
|
selected branch fails to download
Bug Description
Binary package hint: groundcontrol
Ubuntu 10.04
Ground Control 1.6.5-1
Expected Behavior:
Click "Fetch Branch", select branch & local name; branch content downloads to project folder
Actual Behavior:
Click "Fetch Project", select project -- project folder is created
Browse to project folder
Click "Fetch Branch", select branch & local name; "Retrieving Data" dialog appears, but no data is downloaded
Yes -- the same behavior occurs after upgrading to 1.6.6 via the PPA.
OK, so it's reproducible and you're using the latest. I will need to get you to run from the command line and give me a log here so I can understand what is going on. Right now your instructions work on my machine.
touch ~/groundcontrol.log
nautilus --quit; nautilus --no-desktop
Then reproduce your error, you may have to try it more than once to get the result in the log.
Hmm... from the log, it appears that it's failing when looking for a Python module named "winrandom"?
Full log is attached, including two failed attempts to download branches...
On 09/02/2010 06:48 PM, Brian Dunnette wrote:
> Hmm... from the log, it appears that it's failing when looking for a
> Python module named "winrandom"?
>
> Full log is attached, including two failed attempts to download
> branches...
>
> ** Attachment added: "groundcontrol.log"
> https:/
>
From the log, it looks like the issue is that the Crypto module is
importing winrandom, which is a Windows-only library. Normally, this
import would fail and the module would go onto an alternative.
I think hg tries to speed up its runtime by delaying Python imports
until the imported code is used. This breaks the behavior of some
modules, because they depend on imports of non-extant modules to fail
during import, not usage.
--
Luke Faraone, Debian / Ubuntu Developer
PGP: 5189 2A7D 16D0 49BB 046B DC77 9732 5DD8 F9FD D506
Luke: Yes, that's what I thought. I gave this branch a download last night myself and it worked perfectly on Ubuntu 10.04 i386 with 1.6.6 from the PPA. So this error must have something to do with some package installed on Brian's machine.
Brian can you confirm?
Affects me at groundcontrol 1.6.7 from ppa.
attached log mentions winrandom module.
@doctormo, have you tried to reproduce this bug after installing mercurial?
@Andrei - Yes, I just installed mercurial on ubuntu 10.10 and tried to download the branch shown in the logs. No error occurred :-(
I can confirm this bug, too. I run the latest Ubuntu 10.10 with the ground-control 1.6.6-1 from the repositories.
Same error message, mercurial was installed all the time.
Running
touch ~/groundcontrol.log
nautilus --quit; nautilus --no-desktop
Led to hundreds of nautilus windows opening and pestering my taskbar, but the windows were not clickable, nor did alt-tab help to reach them. Console output was:
jan@jan-x61:~$ touch ~/groundcontrol.log
jan@jan-x61:~$ nautilus --quit; nautilus --no-desktop
Initializing nautilus-
Initializing nautilus-gdu extension
Bazaar nautilus module initialized
Initializing groundcontrol-1.6.6 extension
Initializing nautilus-dropbox 0.6.7
sys:1: GtkWarning: IA__gtk_
Oh, I got some output! Hinting at SSL problems. See attached logfile.
And here goes groundcontrol.log, but it reveals no news.
Does this problem appear in 1.6.6 which is available in Debian and the ppa?
https://edge.launchpad.net/~doctormo/+archive/groundcontrol
|
https://bugs.launchpad.net/groundcontrol/+bug/627172
|
CC-MAIN-2019-39
|
refinedweb
| 571
| 68.77
|
Opened 5 years ago
Last modified 5 years ago
#28556 closed Bug
resolve of i18n_patterns does not match default language paths with prefix_default_language=False — at Version 1
Description (last modified by )
I believe this to be a follow-up of #27402
### urls.py
urlpatterns += i18n_patterns(
    url(r'^', include('main.urls', namespace="main")),
    prefix_default_language=False,
)
Now in a custom tag I try to resolve the URL(e.g. '/login/', '/en/login/'). This fails if the path is not prefixed(e.g. '/login/'):
### my_custom_tag
path = context['request'].path
url_parts = resolve(path)  # raises 404
The URL of course exists as it is the current URL directly from the context.
My version: 1.11.4
Sorry for the brief bug report, but my laptop is running out of battery and I don't have a charger
Change History (2)
comment:1 Changed 5 years ago by
Changed 5 years ago by
Note: See TracTickets for help on using tickets.
|
https://code.djangoproject.com/ticket/28556?version=1
|
CC-MAIN-2022-27
|
refinedweb
| 155
| 62.78
|
The Func generic delegate is a cool feature introduced with .NET 3.5. We will look at Func in this short article. But let's start with .NET 1.1.
In C# 1.1, we had code like the one shown below:
delegate bool IsGreaterThan(int inputNumber);
static bool IsGreaterThan3(int number)
{
return number > 3 ? true : false;
}
IsGreaterThan igt = new IsGreaterThan(IsGreaterThan3);
Console.WriteLine("Number is greater than 3 : {0}", igt(4));
In step 1, we have a delegate definition. IsGreaterThan3 is a delegate target method. We can use this delegate as shown in step 3. When C# 2.0 came out, this code was refactored as follows:
IsGreaterThan igt1 = delegate(int number1)
{
return number1 > 3 ? true : false;
};
Notice the anonymous method call. And with C# 3.5, we got this new thing called the Func<T, TResult> generic delegate. As per the documentation – "[Func] Encapsulates a method that has one parameter and returns a value of the type specified by the TResult parameter". At first it looked like a keyword, but it is a .NET generic delegate.
In Func<int, bool>, integer is the input parameter and boolean is the output parameter. All you need to do to use this generic delegate is map it against a matching method call. For example:
Func<int, bool> testFunc = delegate(int number2)
{
return number2 > 3 ? true : false;
};
Console.WriteLine("Number is greater than 3 : {0}", testFunc(4));
Here, the anonymous method will take an integer as an input parameter and will return a boolean value based on the comparison result.
Notice: We did not define any delegate with this code. With Func, we got the delegate for free. And then with the lambda expression, same code can be refactored as follows:
Func<int, bool> testFuncWithLambda = inNum => inNum > 3 ? true : false;
Console.WriteLine("Number is greater than 3 : {0}", testFuncWithLambda(4));
One good example of Func can be found on this blog on C# in Depth.
We can use Func with multiple input variables of the same type. Look at the following example:
Func<int, int, int, int> ManyFunc = (int a, int b,int c) => a + b+ c;
Console.WriteLine("ManyFunc is {0}", ManyFunc(2, 2, 3));
If you use Reflector on this code, you will find code like the one shown below:
Func<int, int, int, int> ManyFunc = delegate (int a, int b, int c)
{
return (a + b) + c;
};
Console.WriteLine("ManyFunc is {0}", ManyFunc(2, 2, 3));
Also, Func plays an important role in the LINQ expression tree implementation. An expression tree represents executable code as a data structure. Consider the following example:
Expression<Func<int,int, int>> ExpressionFunc = (int a, int b) => a + b ;
InvocationExpression ie =
Expression.Invoke(ExpressionFunc, Expression.Constant(4), Expression.Constant(5));
Console.WriteLine(ie.ToString());
Console.WriteLine(ExpressionFunc.Body);
Console.WriteLine(ExpressionFunc.Body.NodeType);
foreach (var paramname in ExpressionFunc.Parameters)
{
Console.WriteLine("parameter name is {0}", paramname.Name);
}
Console.WriteLine(ExpressionFunc.Type);
In this example, we have a lambda expression, (int a, int b) => a + b, and an ExpressionFunc. ExpressionFunc is not data but an expression tree. We can find out the body, parameters, node type and type of the lambda expression using ExpressionFunc.
I refactored some .NET 1.1 code with Func and lambda expression. Honestly, it was a lot of fun. Let me know if you find any other interesting Func test.
|
https://www.codeproject.com/articles/27582/fun-with-func?fid=1471932&df=90&mpp=10&noise=1&prof=true&sort=position&view=expanded&spc=none&select=2628597&fr=1
|
CC-MAIN-2017-04
|
refinedweb
| 606
| 59.09
|
Created on 2015-08-12 10:47 by flying sheep, last changed 2015-08-14 20:06 by rhettinger. This issue is now closed.
Things like progressbars want len() to work on iterated objects.
It’s possible to define __len__ for many of the iterables returned by itertools.
some arguments have to be iterated to find the len(): of course we have to check if those are reentrant, and raise a TypeError if they are non-reentrant. (signified by “(r)→”)
for the predicate functions, it’s questionable if we should offer it, since they might take a long time and “len” is a property-like function that feels like it should return fast.
map(func, iterable) → len(iterable)
count(), cycle(), repeat() → infinity, but since len() must return an integer and infinity exists only as a float, that's impossible
accumulate(iterable) → len(iterable)
chain(*iterables) → sum(len(it) for it in iterables)
chain.from_iterable(iterables) (r)→ like the above
compress(data, selectors) (r)→ sum(1 for s in selectors if s)
dropwhile(pred, iterable) (r)→ for skip, r in enumerate(map(pred, iterable)): if not r: return len(iterable) - skip
filterfalse(pred, iterable) (r)→ sum(1 for r in map(pred, iterable) if not r)
groupby(iterable[, keyfunc]) (r)→ no way but to actually execute it all
islice(seq, [start,] stop [, step]) → calculatable if len(seq) is possible
starmap(function, iterables) → len(iterables)
takewhile(pred, iterable) (r)→ for skip, r in enumerate(map(pred, iterable)): if not r: return skip
tee(iterable[, n]) → n
zip_longest(*iterables[, fillvalue]) (r)→ max(len(it) for it in iterables)
product(), permutations(), combinations(), combinations_with_replacement() → there’s math for that.
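The formulas above can be illustrated with a short, runnable sketch. SizedChain below is a hypothetical helper (not part of itertools) showing how a chain-like object could forward len() to its inputs:

```python
from itertools import chain

class SizedChain:
    """Chain-like iterable whose len() is the sum of its inputs' lengths.

    len() works only when every input itself supports len(); otherwise it
    raises TypeError, just as it would for a plain iterator.
    """

    def __init__(self, *iterables):
        self.iterables = iterables

    def __iter__(self):
        # Delegate the actual iteration to itertools.chain.
        return chain(*self.iterables)

    def __len__(self):
        # Lazily computed: nothing is consumed, only len() is called.
        return sum(len(it) for it in self.iterables)

c = SizedChain(range(200), [1, 2, 5])
print(len(c))        # 203
print(len(list(c)))  # 203: iterating does not exhaust the wrapper
```

Note that this only works because SizedChain is a reentrant iterable rather than an iterator; calling len() never advances anything.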
No, you may not iterate the iterator in order to compute the len, because then the iterator would be exhausted. In addition, the point of itertools is to *lazily* do operations on iterables of indefinite length, so to offer __len__ if and only if the arguments supported len (for cases where that would work) would be essentially false advertising :)
Unfortunately, this fails because there is no way to tell how long an arbitrary iterable is, or whether it is reentrant or not. Consider:
def gen():
while True:
if random.random() < 0.5:
return random.random()
Not only is it not reentrant, but you cannot tell in advance how long it will be.
There's also the problem that not all iterables need to have a defined length. The iterator protocol, for example, does not demand that iterators define a length, and we should not put that burden on the programmer.
There's one more serious problem with the idea of giving iterators a length. Consider this case:
it = iter([1, 2, 3, 4, 5])
next(it)
next(it)
print(len(it))
What should be printed? 5, the length of the underlying list, or 3, the number of items still remaining to be seen? Whichever answer you give, it will be misleading and a bug magnet under certain circumstances.
I don't believe it is worth giving iterators like map, zip etc. a length depending on the nature of what they are iterating over. That can only lead to confusion. Programmers just have to understand that sequences have lengths, but arbitrary iterables may not.
Hi, and sorry David, but I think you haven’t understood what I was proposing.
Maybe that was too much text and detail to read at once, while skipping the relevant details:
Python has iterators and iterables. iterators are non-reentrant iterables: once they are exhausted, they are useless.
But there are also iterables that create new, iterators whenever iter(iterable) is called (e.g. implicitly in a for loop). They are reentrant. This is why you can loop sequences such as lists more than once.
———————————————————————
One of those reentrant iterables is range(), whose __iter__ functions creates new lazy iterables, which has a __len__, and so on. It even has random access just like a sequence.
Now it’s always entirely possible to *lazily* determine len(chain(range(200), [1,2,5])), which is of course len(range(200)) + len([1,2,5]) = 200 + 3 = 203. No reentrant iterables are necessary here, only iterables with a __len__. (Simply calling len() on them all is sufficient, as it could only create a TypeError which would propagate upwards)
———————————————————————
To reiterate:
1. Lazy doesn’t mean non-reentrant, just like range() demonstrates.
2. I didn’t propose that this works on arbitrary iterables, only that it works if you supply iterables with suitable properties (and throws ValueError otherwise, just like len(some_generator_function()) already does)
3. I know what I’m doing, please trust me and read my proposal carefully ;)
To elaborate more on my second point (“No reentrant iterables are necessary here, only iterables with a __len__”)
What i meant here is that inside a call of chain(*iterables), such as chain(foo, bar, *baz_generator()), the paramter “iterables” is always a tuple, i.e. a sequence.
So it is always possible to just call len() on each element of “iterables” and either get a ValueError or a collection of summable integers.
With other itertools functions, we’d need to determine beforehand if we have reentrant iterables or not. This might be a problem, and for some too un-lazy (e.g. groupby)
But at the very very least, we could implement this for everything where i didn’t write “(r)”: map, accumulate, chain, islice, starmap, tee, product, permutations, combinations, combinations_with_replacement
No, I guessed that despite saying "some arguments have to be iterated" that you were really talking about arguments that had __len__. That's why I added the sentence about it not being appropriate even if you only did it when the inputs had __len__.
But I'll let Raymond re-close this. Who knows, maybe I'll be surprised and it will turn out that he's interested :)
On Wed, Aug 12, 2015 at 09:23:26PM +0000, flying sheep wrote:
> Python has iterators and iterables. iterators are non-reentrant
> iterables: once they are exhausted, they are useless.
Correct.
> But there are also iterables that create new, iterators whenever
> iter(iterable) is called (e.g. implicitly in a for loop). They are
> reentrant. This is why you can loop sequences such as lists more than
> once.
The *iterable* itself may be reentrant, but the iterator formed from
iter(iterable) is not. So by your previous comment, giving the iterator
form a length is not appropriate.
Do you know of any non-iterator iterables which do not have a length
when they could? With the exception of tee, all the functions in
itertools return iterators.
> One of those reentrant iterables is range(), whose __iter__ functions
> creates new lazy iterables, which has a __len__, and so on. It even
> has random access just like a sequence.
You are misinterpreting what you are seeing. range objects already
are sequences with a length, and nothing needs be done with them. But
iter(range) are not sequences, they are iterators, and then are not
sized and have no __len__ method:
py> it = iter(range(10))
py> len(it)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: object of type 'range_iterator' has no len()
If range_iterator objects were given a length, what would it be? Should
it be the length of the underlying range object, which is easy to
calculate but wrong? That's what you suggest below (your comments about
chain). Or the length of how many items are yet to be seen, which is
surprising in other ways?
> Now it’s always entirely possible to *lazily* determine
> len(chain(range(200), [1,2,5])),
Sure. But chain doesn't just accept range objects and lists as
arguments, it accepts *arbitrary iterables* which you accept cannot be
sized. So len(chain_obj) *may or may not* raise TypeError. Since you
can't rely on it having a length, you have to program as if it doesn't.
So in practice, I believe this will just add complication.
> which is of course len(range(200)) +
> len([1,2,5]) = 200 + 3 = 203. No reentrant iterables are necessary
> here, only iterables with a __len__. (Simply calling len() on them all
> is sufficient, as it could only create a TypeError which would
> propagate upwards)
That would be wrong. Consider:
it = chain("ab", "cd")
throw_away = next(it)
assert len(it) == 2 + 2 # call len() on the sequences
assert len(list(it)) == len(it) # fails since 3 != 4
I had explored this idea previously at some length (no pun intended) but it was mostly a dead-end. The best we ended-up with has having __length_hint__ to indicate size to list().
There were several issues some of which at detailed in the comment at the top of . Another *big* issue was that Guido was adamantly opposed to iterators having a length because it changed their boolean value from always-true and it broke some of his published code that depended on iterators never being false, even when empty.
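For reference, the __length_hint__ protocol mentioned here is exposed through operator.length_hint() (Python 3.4+, PEP 424); a quick sketch of the behaviour:

```python
import operator

it = iter([10, 20, 30, 40])
print(operator.length_hint(it))    # 4: list iterators implement __length_hint__

next(it)
print(operator.length_hint(it))    # 3: the hint tracks the remaining items

# map() objects provide no hint, so the supplied default is returned.
m = map(str, [1, 2, 3])
print(operator.length_hint(m, 0))  # 0
```

Because it is only a hint, list() can use it to presize its allocation without ever treating it as an exact length.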
> The *iterable* itself may be reentrant, but the iterator formed
> from iter(iterable) is not. So by your previous comment, giving
> the iterator form a length is not appropriate.
> With the exception of tee, all the functions in itertools return
> iterators.
ah, so your gripe is that the itertools functions return iterators, not (possibly) reentrant objects like range(). and changing that would break backwards compatibility, since the documentation says “iterator”, not “iterable” (i.e. people can expect e.g. next(groupby(...))) to work.
that’s probably the end of this :(
the only thing i can imagine that adds reentrant properties (and an useful len()) to iterators would be an optional function (maybe __uniter__ :D) that returns an iterable whose __iter__ function creates a restarted iterator copy, or an optional function that directly returns such a copy. probably too much to ask for :/
> Since you can't rely on it having a length, you have to program as if
> it doesn't. So in practice, I believe this will just add complication.
I don’t agree here. If something accepts iterables and expects to sometimes be called on iterators and sometimes on sequences/len()gthy objects, it will already try/catch len(iterable) and do something useful if that succeeds.
> The best we ended-up with has having __length_hint__ to indicate size to list().
Just out of interest, how does my __uniter__ compare?
> because it changed their boolean value from always-true
it does? is it forbidden to define methods so that int(bool(o)) != len(o)?
[flying sheep]
> that’s probably the end of this :(
Yes, I think so.
|
https://bugs.python.org/issue24849
|
CC-MAIN-2021-21
|
refinedweb
| 1,760
| 61.87
|
On Jun 16, 11:03 pm, Gerald Kaszuba <gerald.kasz... at gmail.com> wrote:
> On Jun 17, 6:16 am, Neal Becker <ndbeck... at gmail.com> wrote:
>
> > Code at global scope in a module is run at module construction (init). Is
> > it possible to hook into module destruction (unloading)?
>
> Try the __del__ method.
>
> See the docs.
>
> --
> Gerald Kaszuba

I doubt Python calls __del__ when unloading a module... and besides, I don't really think Python does module unloading anyway. del module after import only removes it from the global namespace; it does not clean up other references, I bet, so completely unloading a module would take more effort than that...

Jim
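Jim's point is easy to demonstrate: del only unbinds the name in the importing namespace, while sys.modules keeps the module object alive (json serves purely as an example module):

```python
import sys
import json  # any stdlib module works for the demonstration

cached = sys.modules['json']
del json                      # removes only the local name binding...
print('json' in sys.modules)  # True: ...the module object is still cached

import json                   # a cache lookup, not a re-execution
print(json is cached)         # True
```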
|
https://mail.python.org/pipermail/python-list/2007-June/462031.html
|
CC-MAIN-2014-15
|
refinedweb
| 110
| 76.62
|
I'm developing an Application where the Zebra RFD8500 Sled scanner communicates with an Android phone via Bluetooth.
I want to programmatically turn off the beeping sound on the RFD8500 Sled when I scan a barcode, so I can play my own custom sound. I've seen examples on how to initiate the beep sound on the Sled through the "Zebra Scanner SDK for Android developer guide". And I understand how to play a sound on an Android phone which would replace the Sled's beep. But haven't seen anything with regards to a config parameter setting I could set programmatically that would turn off the Sleds' sound altogether.
Any thoughts?
After a some more research, I found out how to do this through the following developer guide section 5.2
The only problem now is, the beeper is not turned off when I scan a barcode. It is turned off when the RFD8500 sled connects and disconnects, but NOT when a barcode is scanned.
So I'm really back to square one. Because the whole point of this was to silence the beep sound when a barcode is scanned and replace that sound with my own custom sound.
In any event, below is the code to connect to the 8500 sled and turn off the beeping sound up to a point as I mentioned.
You'll need to import the following Java archive library -> API3_LIB-release.aar
That archive can be found in the following zip file Zebra_RFID_Mobile_Android_1.2.3.26.zip
The following url has the zip file
Zebra RFID Mobile Application for Android Support & Downloads | Zebra
import java.util.ArrayList;
import android.os.AsyncTask;
import com.zebra.rfid.api3.BEEPER_VOLUME;
import com.zebra.rfid.api3.InvalidUsageException;
import com.zebra.rfid.api3.RFIDReader;
import com.zebra.rfid.api3.ReaderDevice;
import com.zebra.rfid.api3.Readers;
public static Readers readers;
public static RFIDReader rfidReader;
The specific code gets executed in an AsyncTask when the Sled is found...
private class ProcessAsyncTask extends AsyncTask<Void, Integer, Boolean> {
    @Override
    protected Boolean doInBackground(Void... voids) {
        try {
            readers = new Readers();
            ArrayList<ReaderDevice> availableRFIDReaderList = readers.GetAvailableRFIDReaderList();
            ReaderDevice readerDevice = availableRFIDReaderList.get(0);
            rfidReader = readerDevice.getRFIDReader();
        } catch (InvalidUsageException e) {
            e.printStackTrace();
        }
        try {
            rfidReader.connect();
            rfidReader.Config.setBeeperVolume(BEEPER_VOLUME.QUIET_BEEP);
            rfidReader.Config.saveConfig();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return true;
    }
}
|
https://developer.zebra.com/thread/36652
|
CC-MAIN-2019-22
|
refinedweb
| 362
| 57.98
|
Java programs have a specific structure in how the code is written. There are key elements that all Java programs share.
The Program
We have the text of a program inside the file called HelloWorld.java.
// This program outputs the message "Hello World!" to the monitor
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello World!");
    }
}
This program writes Hello World! to your terminal when run.
Case-Sensitivity
Java is a case-sensitive language. Case sensitivity means that syntax, the words our computer understands, must match the case. For example, the Java command for outputting text to the screen is System.out.println(). If you were to type system.out.println() or System.Out.println(), the compiler would not know that your intention was to use System or out.
Let’s go over this HelloWorld.java program line by line:
// This program outputs the message "Hello World!" to the monitor
This is a single-line comment that documents the code. The compiler will ignore everything after // to the end of the line. Comments provide information outside the syntax of the language.
Classes
public class HelloWorld {
    // class code
}
This is the class of the file. All Java programs are made of at least one class. The class name must match the file: our file is HelloWorld.java and our class is HelloWorld. We capitalize every word, a style known as pascal case. Java variables and methods are named in a similar style called camel case, where every word after the first is capitalized.
The curly braces, { and }, mark the scope of the class. Everything inside the curly braces is part of the class.
Methods
public static void main(String[] args) {
    // Statements
}
Every Java program must have a method called main(). A method is a sequence of tasks for the computer to execute. This main() method holds all of the instructions for our program.
Statements
System.out.println("Hello World!");
This code uses the method println() to send the text "Hello World!" to the terminal as output. println() comes from an object called out, which is responsible for various types of output. Objects are packages of state and behavior, and they're often modeled on real-world things. out is located within System, which is another object responsible for representing our computer within the program! We can access parts of an object with a ., which is known as dot notation.
This line of code is a statement, because it performs a single task. Statements always conclude with a semicolon.
Whitespace
Java programs allow judicious use of whitespace (tabs, spaces, newlines) to create code that is easier to read. The compiler ignores whitespace, but humans need it! Use whitespace to indent and separate lines of code. Whitespace increases the readability of your code.
Practice
The structure of a Java program will feel familiar the more you work with this language. Continue learning at Codecademy and you’ll be a Java pro in no time!
|
https://www.codecademy.com/articles/java-program-structure
|
The QImageWriter class provides a format independent interface for writing images to files or other devices. More...
#include <QImageWriter>
Note: All the functions in this class are reentrant.
The QImageWriter class provides a format independent interface for writing images to files or other devices.
QImageWriter supports setting format specific options, such as the gamma level, compression level and quality, prior to storing the image. If you do not need such options, you can use QImage::save() or QPixmap::save() instead.
To store an image, you start by constructing a QImageWriter object. Pass either a file name or a device pointer, and the image format to QImageWriter's constructor. You can then set several options, such as the gamma level (by calling setGamma()) and quality (by calling setQuality()). canWrite() returns true if QImageWriter can write the image (i.e., the image format is supported and the device is open for writing). Call write() to write the image to the device.
If any error occurs when writing the image, write() will return false. You can then call error() to find the type of error that occurred, or errorString() to get a human readable description of what went wrong.
Call supportedImageFormats() for a list of formats that QImageWriter can write. QImageWriter supports all built-in image formats, in addition to any image format plugins that support writing.
If the currently assigned device is a QFile, or if setFileName() has been called, this function returns the name of the file QImageWriter writes to. Otherwise (i.e., if no device has been assigned or the device is not a QFile), an empty QString is returned.
See also setFileName() and setDevice().
Returns the format QImageWriter uses for writing images.
See also setFormat().
Returns the gamma level of the image.
See also setGamma().
Returns the quality level of the image.
See also setQuality().
Sets QImageWriter's device to device. If a device has already been set, the old device is removed from QImageWriter and is otherwise left unchanged.
If the device is not already open, QImageWriter will attempt to open the device in QIODevice::WriteOnly mode by calling open(). Note that this does not work for certain devices, such as QProcess, QTcpSocket and QUdpSocket, where more logic is required to open the device.
See also device() and setFileName().
Sets the file name of QImageWriter to fileName. Internally, QImageWriter will create a QFile and open it in QIODevice::WriteOnly mode, and use this file when writing images.
See also fileName() and setDevice().
Sets the format QImageWriter will use when writing images, to format. format is a case insensitive text string. Example:
QImageWriter writer;
writer.setFormat("png"); // same as writer.setFormat("PNG");
You can call supportedImageFormats() for the full list of formats QImageWriter supports.
See also format().
See also quality().
Sets the image text associated with the key key to text. This is useful for storing copyright information or other information about the image. Example:
QImage image("some/image.jpeg");
QImageWriter writer("images/outimage.png", "png");
writer.setText("Author", "John Smith");
writer.write(image);
If you want to store a single block of data (e.g., a comment), you can pass an empty key, or use a generic key like "Description".
The key and text will be embedded into the image data after calling write().
Support for this option is implemented through QImageIOHandler::Description.
This function was introduced in Qt 4.1.
See also QImage::setText() and QImageReader::text().
Returns the list of image formats supported by QImageWriter.
By default, Qt can write the following formats:
See also setFormat(), QImageReader::supportedImageFormats(), and QImageIOPlugin.
Writes the image image to the assigned device or file name. Returns true on success; otherwise returns false. If the operation fails, you can call error() to find the type of error that occurred, or errorString() to get a human readable description of the error.
See also canWrite(), error(), and errorString().
|
http://doc.trolltech.com/4.1/qimagewriter.html
|
« Happy New Year: Flex 2.0.1 is Here! | Main | We'd like your opinion »
January 12, 2007
Building Modular Applications
If you haven't been to Roger Gonzalez's blog about Modules, then zip over there and get the details and thoughts behind this feature of Flex 2. I'm not going to go too much into the why's but want to show you a simple Flex application that uses Modules. You can let your imagination take it from there.
Sample Code
You can download a zip file with this example here.
Modules
Modules are one solution to building a large Flex application by allowing you to partition your user interface into useful, discrete bits. For example (and this is from the Flex 2 documentation), an insurance company might have hundreds of forms - specific to each region, specific to each type of claim, specific to each application, etc. Creating a Flex application with all of these forms would result in a huge SWF with several problems:
- The larger the 'application' the more complex the development process;
- The larger the 'application' the more complex the testing process;
- The larger the 'application' the more complex the deployment process;
- The larger the SWF the longer it takes to download.
My sample application is based on one in the Flex 2 documentation, but I've modified it a bit to address a couple of common questions. The sample shows a main, or shell application and three modules which share common data.
One important design element is the use of an interface, which is essentially a contract between the implementor of the interface and the users of that same interface. This example will show you what I mean. The interface portion of modules is not necessary, but it makes maintenance and future development go a lot smoother. For example, if one team is working on the report screens and another team on the chart screens, and they first agree on the interfaces, the implementations can take as many twists and turns as necessary without affecting the outcome of the project. Interfaces also play another role with modules which I reveal later.
Modules are MXML (or ActionScript) files that have <mx:Module> as their root tag instead of <mx:Application>. Think of the <mx:Module> tag as an Application, but without the ability to run it.
The sample application has a shell file and two modules along with an interface. Open the main application file and you will see:
<mx:Panel
<mx:Text
<mx:RadioButton
<mx:RadioButton
<mx:RadioButton
</mx:Panel>
<mx:Panel
<mx:ModuleLoader
</mx:Panel>
The first Panel contains RadioButtons to get the modules to load and unload for the demonstration. The second Panel is where the modules are loaded using the <mx:ModuleLoader> tag. Notice that the ModuleLoader, currentModule, has an event handler for the ready event. The ready event (or ModuleEvent.READY) is dispatched by the ModuleLoader when enough of the module SWF has been downloaded to begin using it.
This is the readyModule function, in a <mx:Script> block:
private function readyModule( event:ModuleEvent ) : void
{
var ml:ModuleLoader = event.target as ModuleLoader;
var ichild:IExpenseReport = ml.child as IExpenseReport;
if( ichild != null ) {
ichild.expenseReport = expenses;
}
}
Notice how the child property of the ModuleLoader is cast to the class IExpenseReport. IExpenseReport is an interface which all of the modules implement. As long as every module implements this interface, it can fit easily into the application. In other words, imagine that you need to make another form or report. Instead of changing the main application and adding IF statements for the new module, you implement the IExpenseReport interface in the new module and it will work perfectly with the application.
The IExpenseReport interface is:
public interface IExpenseReport
{
function set expenseReport( ac:ArrayCollection ) : void;
}
Each module implements this interface, defining the expenseReport set function as it sees fit. This is the root tag for the ChartModule and the implementation of the IExpenseReport interface:
<mx:Module xmlns:
<mx:Script><![CDATA[
import mx.collections.ArrayCollection;
[Bindable] public var expenses:ArrayCollection;
public function set expenseReport( ac:ArrayCollection ) : void
{
expenses = ac;
}
]]></mx:Script>
...
</mx:Module>
Going back to the main application shell, the RadioButton click event causes any currently loaded module to unload and then loads a new module. Here is the RadioButton tag for the ChartModule:
<mx:RadioButton
The click event invokes the readyModule which is listed above.
Compiling and Running the Application
If you are using Flex Builder 2, be sure to modify the project's properties to include the modules as applications. This way Flex Builder 2 will compile them into SWFs and place them into the bin directory.
Flex Builder Note: To build a project with Modules, use the project's Properties and add the module files as "Applications". This will get them compiled into SWFs.
Once the SWFs have been built you can run the main shell application and click the RadioButtons to switch between the modules.
Flex Builder Note: Flex Builder does not maintain any dependency information about your modules and the shell application. Whenever you make a change to a module you may need to force a recompile on the shell or other modules with dependencies.
Optimizing the SWFs
If you take a look at the size of the SWFs for the main application and modules you will see that they are similar in size. Meaning, the module SWFs have many of the same component definitions in them as the main application SWF.
The Flash Player does not keep duplicate copies of symbols. For instance, if the main Application has a Button component and a module also has a Button component, the Flash Player will not load the Button from the module since it already has that definition from the main application.
Compile the main application with -link-report=report.xml which will create a file containing information about all of the symbols that it is being linked with. Then use that report when compiling the modules. For example:
mxmlc -load-externs=report.xml ChartModule.mxml
When ChartModule is compiled, all of the symbols listed in the link report, report.xml, are left out of its SWF. When I compiled the ChartModule.swf without doing this, it came to 202K. When I used the report.xml, the SWF became only 68K in size. This greatly reduces the download time for modules.
In the beginning of this entry I mentioned another use for interfaces when it comes to modules. Suppose you do not use an interface but instead reference your modules' classes from your shell application. When you run the link-report, your modules' classes will appear in the report. When you compile your modules using that link report your module will not be included in its own SWF! At first that won't be a problem, although the main shell application will be large since it holds the definitions of your modules. As important however, is what happens when you change your modules. If you do not recompile your main application, your main application's SWF will have the old definition of your modules - not the changes you made.
mxmlc -link-report=report.xml Main.mxml
mxmlc -load-externs=report.xml ChartModule.mxml
// etc.
If you decide to use this technique for reducing your modules' sizes, use interfaces to make sure the end-user always has the latest version of your modules.
Flex Builder Note: Flex Builder does not have a way to do this for you within a single project. If you believe you will be building a project using modules, consider putting common classes and interfaces (including event classes) into a SWC (Flex Library Project) and separating the modules into their own projects.
Or, you can build everything as a single Flex project and do the optimization outside of Flex Builder as a pre-production or pre-test deployment step.
Summary
- Divide applications which have parts that not everyone will use into modules. This way the initial main application is smaller than it would normally have been and most users will only use a portion of the entire application.
- Use interfaces to allow the shell or including modules to communicate with the modules they load. This makes it easier to maintain.
- Compile the main application using the -link-report compiler switch to generate a list of symbols it is using.
- Compile the modules with -load-externs and the link report from the main application which makes them smaller.
Posted by pent at January 12, 2007 04:19 PM
|
http://weblogs.macromedia.com/pent/archives/2007/01/building_module.cfm
|
React hCaptcha Component Library
Description
hCaptcha Component Library for ReactJS.
hCaptcha is a drop-in replacement for reCAPTCHA that protects user privacy, rewards websites, and helps companies get their data labeled.
Sign up at hCaptcha to get your sitekey today. You need a sitekey to use this library.
Installation
You can install this library via npm with:
npm install @hcaptcha/react-hcaptcha --save
Usage
The two requirements for usage are the sitekey prop and a parent component such as a <form />. The component will automatically include and load the hCaptcha API library and append it to the parent component. This is designed for ease of use with the hCaptcha API!
Basic Usage
import HCaptcha from '@hcaptcha/react-hcaptcha';

<FormComponent>
  <HCaptcha
    sitekey="your-sitekey"
    onVerify={(token, ekey) => handleVerificationSuccess(token, ekey)}
  />
</FormComponent>
A note about TypeScript usage: If you want to reassign the component name, you could consider making a util that imports the component, then re-exports it as a default. Example:
// utils/captcha.ts
import HCaptcha from '@hcaptcha/react-hcaptcha';
export default HCaptcha;

// MyFormComponent.tsx
import { default as RenamedCaptcha } from '../utils/captcha';

<FormComponent>
  <RenamedCaptcha sitekey="your-sitekey" />
</FormComponent>
Advanced usage
In most real-world implementations, you'll probably be using a form library such as Formik or React Hook Form. In these instances, you'll most likely want to use ref to handle the callbacks as well as handle field-level validation of a captcha field. For an example of this, you can view this CodeSandbox. This ref will point to an instance of the hCaptcha API where you can interact directly with it.
Props
Events
Methods
NOTE: Make sure to reset the hCaptcha state when you submit your form by calling the method .resetCaptcha on your hCaptcha React Component! Passcodes are one-time use, so if your user submits the same passcode twice then it will be rejected by the server the second time.
Please refer to the demo for examples of basic usage and an invisible hCaptcha.
Alternatively, see this sandbox code for a quick form example of invisible hCaptcha on a form submit button.
Please note that "invisible" simply means that no hCaptcha button will be rendered. Whether a challenge shows up will depend on the sitekey difficulty level. Note to hCaptcha Enterprise (BotStop) users: select "Passive" or "99.9% Passive" modes to get this No-CAPTCHA behavior.
Note for maintainers
Scripts
- npm run start - will start the demo app with hot reload
- npm run test - will test the library (unit tests)
- npm run build - will build the production version
Publishing
To publish a new version, follow the next steps:
- Bump the version in package.json
- Create a Github Release with the version from step 1 without a prefix such as v (e.g. 1.0.3)
- The publish workflow will then be triggered, which will build, test and deploy the package to npm as @hcaptcha/react-hcaptcha.
Running locally for development
Please see: Local Development Notes.
Summary:
sudo echo "127.0.0.1 fakelocal.com" >> /private/etc/hosts
npm start -- --disable-host-check
|
https://www.skypack.dev/view/@hcaptcha/react-hcaptcha
|
Same issue in ipc.py
Not sure about the performance impact of this patch, but it should do the trick.
Bruce: if we don't import the module hashlib, then the hashlib name is undefined. If you subsequently say if hashlib, you're going to throw a NameError
Lame error on my part then ... Your fix for that part looks good.
Found another fun one: we use finally in test_protocol.py and test_schema.py
Not important, but wouldn't

try:
    from hashlib import md5
except ImportError:
    from md5 import md5

be simpler and preferable? or "as compute_md5" if you're stuck on that.
Michael: I'll check it out. The code currently calls hashlib.md5() or md5.new(), so your idea would not work as it currently stands, as the latter appears to use a factory. md5.new and md5.md5 are the same thing.
Yep, that's what I wanted to confirm once I got in front of a computer. Your way is certainly cleaner, so I'll update the patch.
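For reference, the fallback-import pattern being discussed can be sketched like this (illustrative only — on Python 2.5+ and all Python 3 versions the hashlib branch always wins, so the md5-module fallback only ever fires on very old interpreters):

```python
try:
    from hashlib import md5          # Python 2.5+ and Python 3
except ImportError:
    from md5 import md5              # pre-2.5 fallback; md5.new and md5.md5 are the same factory

# Either import path gives a callable md5 factory with the same interface.
digest = md5(b"avro").hexdigest()
```

Code written against this pattern never touches the module objects directly, so it stays agnostic about which interpreter it runs on.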
I've addressed most of the issues (thanks for the comments!) but I'm stuck on how to modify our use of the uuid module, which appeared in Python 2.5. I've asked on Quora about the best way to generate RFC 4122-compliant UUIDs in Python 2.4. If you happen to know the answer, please let me know!
Okay, added uuid_24.py by taking the source from. I need to figure out the right way to license it, but I'd love it if someone else could take this patch for a spin on a Python 2.4 installation and see if it works for them.
The uuid code is under the PSF license:
This requires that we include a copyright statement and a copy of the license in our distributions. HTTPD provides a good model. Look at the end of:
So, in our LICENSE.txt file, we should add sections at the end for sub-components whose licenses differ, providing the relative path for each such subcomponent followed by its copyright and licence. We already have one such appended license, but it should probably be amended to provide the path, and we should add an introductory paragraph for all sub-component licenses. In short, we should model the structure of HTTPD's LICENSE.
CentOS 5 is still shipping it, so we should make it work.
Current issue is our use of struct.Struct objects in io.py.
|
https://issues.apache.org/jira/browse/AVRO-588?focusedCommentId=12884992&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
|
After talking about it for the past few weeks the XML Developer Center on MSDN is finally here. As mentioned in my previous post on the Dev Center, the most obvious changes from the previous incarnation are
The XML Developer Center will provide an entry point to working with XML in Microsoft products such as Office and SQL Server.
The XML Developer Center will have an RSS feed.
The XML Developer Center will pull in content from my work weblog.
The XML Developer Center will provide links to recommended books, mailing lists and weblogs.
The XML Developer Center will have content focused on explaining the fundamentals of the core XML technologies such as XML Schema, XPath, XSLT and XQuery.
The XML Developer Center will provide sneak peaks at advances in XML technologies at Microsoft that will be shipping future releases of the .NET Framework, SQL Server and Windows.
As mentioned in my previous post the first in a series of articles describing the changes to System.Xml in version 2.0 of the .NET Framework is now up. Mark Fussell has published What’s New in System.Xml for Visual Studio 2005 and the .NET Framework 2.0 Release which mentions the top 10 changes to the core APIs in the System.Xml namespace.
There is one cool new addition that is missing from Mark’s article, which I guess would be number 11 of his top 10 list. The XSD Inference API which can be used to create an XML Schema definition language (XSD) schema from an XML instance document will also be part of System.Xml in Whidbey. Given the enthusiasm we saw in various parties about XSD inference we decided to promote it from just being a freely downloadable tool to being part of the .NET Framework. Below are a couple of articles about XSD Inference
- Generate XSD Schemas by Inference by Roger Jennings, XML Web Services Magazine.
- Modeling biz docs in XML by Jon Udell, InfoWorld.
- Using the XSD Inference Utility by Nithya Sampathkumar, MSDN.
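To build some intuition for what schema inference does, here is a toy sketch in Python (purely illustrative — this is not the System.Xml XSD Inference API, which is far more sophisticated). It walks an XML instance and guesses a simple type for each element:

```python
# Toy XSD-style inference: walk an XML instance and guess a type for each element.
# Illustrative only; the real .NET XSD Inference API handles attributes,
# occurrence constraints, namespaces, and many more datatypes.
import xml.etree.ElementTree as ET

def infer_types(xml_text):
    root = ET.fromstring(xml_text)
    types = {}
    for elem in root.iter():
        text = (elem.text or "").strip()
        if not text:
            kind = "complexType"          # element holds children rather than text
        elif text.lstrip("-").isdigit():
            kind = "xs:integer"
        else:
            kind = "xs:string"
        types[elem.tag] = kind
    return types

inferred = infer_types("<order><id>42</id><item>widget</item></order>")
```

A real inference tool emits a full XSD document from this kind of analysis, refining its guesses as it sees more instance documents.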
If you have any thoughts about what you’d like to see on the Dev Center or any comments on the new design, please let me know.
What is the title of this feed? ()
Hint: Looks like it was borrowed from the Security Dev Center…
Thanks for the feedback, the feed should be fixed now.
Thanks to your team for putting the site together..
great stuff! it’s like the Pope blessing XML.. 🙂
It’s good to see this stuff concentrated in one place.
I’d like:
– Recommendations and guidelines for representing objects
– Recommendations and guidlines when dealing cross process
– Always to think about those of us who create large web-farmed n-tier distributed transaction systems
– Whenever you mention performance, give us quantitative not just qualitative information. ‘But this depends on the system’ you say – then provide us with configurable performance test harnesses
– Not to spend time on Whidbey/Yukon/Longhorn/Avalon until we can actually use it on our projects. Hint, this will be some time after the above are released.
thanks
Matt
Dare, it looks good. Congrats!
|
https://blogs.msdn.microsoft.com/dareobasanjo/2004/03/29/xml-developer-center-on-msdn-launched/
|
Custom Steps can be added to your Skills repository within the test/behave/steps directory.
The Mycroft Timer Skill for example provides test/behave/steps/timer.py. This has a range of custom Steps to ensure the system is in an appropriate state before the test is run, and that a timer is stopped correctly when requested.
Let's use an example from this Skill to see how we can define our own custom Steps.
from behave import given
from test.integrationtests.voight_kampff import wait_for_dialog, emit_utterance

@given('a {timer_length} timer is set')
@given('a timer is set for {timer_length}')
def given_set_timer_length(context, timer_length):
    emit_utterance(context.bus, 'set a timer for {}'.format(timer_length))
    wait_for_dialog(context.bus, ['started.timer'])
    context.log.info('Created timer for {}'.format(timer_length))
    context.bus.clear_messages()
First we import some packages.
from behave import given
from test.integrationtests.voight_kampff import wait_for_dialog, emit_utterance
From behave we can get the behave decorators - given, when, or then. For this Step we also need some helper functions from the Voight Kampff module.
Like any Python script, you can import other packages from Mycroft or externally as needed.
The test.integrationtests.voight_kampff module provides a number of common tools that may be useful when creating Step files.
Wait for a specified time for criteria to be fulfilled.
Arguments:
msg_type - Message type to watch
criteria_func - Function to determine if a message fulfilling the test case has been found.
context - Behave context object
timeout - Time allowance for a message fulfilling the criteria, defaults to 10 sec
Returns:
tuple (bool, str) - test status, debug output
mycroft_responses(context)
Collect and format mycroft responses from context.
Arguments:
context - Behave context to extract messages from.
Returns:
(str) - Mycroft responses including skill and dialog file
Emit an utterance on the bus.
Arguments:
bus (InterceptAllBusClient) - Bus instance to listen on
dialogs (list) - List of acceptable dialogs
Returns:
None
Wait for one of the dialogs given as argument.
Arguments:
bus (InterceptAllBusClient) - Bus instance to listen on
dialogs (list) - list of acceptable dialogs
timeout (int) - Time allowance to wait, defaults to 10 sec
Returns:
None
Now we can use the @given() decorator on our function definition.

@given('a {timer_length} timer is set')
@given('a timer is set for {timer_length}')
def given_set_timer_length(context, timer_length):
This decorator tells the system that we are creating a Given Step. It takes a string as its first argument which defines what phrase we can use in our tests. So using the first decorator above means that in our tests we can then write Given Steps like:
Given a 10 minute timer is set
A handy feature of decorators is that they can be stacked. In this example we have two stacked decorators applied to the same function. This allows us to use variations of natural language, and both versions will achieve the same result. So now we could write another Step phrased differently:
Given a timer is set for 10 minutes
Either way, it will ensure a 10 minute timer exists before running the test Scenario.
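The stacking behaviour itself is ordinary Python: each decorator registers the same function under another pattern and returns the function unchanged. A minimal sketch with a hypothetical step registry (not behave's real internals, which also parse the {placeholders}) looks like this:

```python
# Hypothetical step registry illustrating how stacked @given decorators
# map several phrasings to one function. behave's real matcher also
# extracts {placeholder} values from the step text, which we skip here.
STEPS = {}

def given(pattern):
    def register(func):
        STEPS[pattern] = func
        return func  # return the function unchanged so decorators stack cleanly
    return register

@given('a {timer_length} timer is set')
@given('a timer is set for {timer_length}')
def given_set_timer_length(context, timer_length):
    return 'timer set for {}'.format(timer_length)

# Both phrasings resolve to the very same function object.
same = STEPS['a {timer_length} timer is set'] is STEPS['a timer is set for {timer_length}']
```

Because each decorator returns the original function, you can stack as many phrasings as you like without changing the function's behaviour.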
When we define a Step function, the first argument will always be a Behave Context Object. All remaining arguments will map to variables defined in the decorators.

@given('a {timer_length} timer is set')
@given('a timer is set for {timer_length}')
def given_set_timer_length(context, timer_length):
In our current example, we have only one variable "timer_length". This corresponds to the second argument of our function. Additional variables can be added to the argument list such as:

@given('a timer named {timer_name} is set for {timer_length}')
def given_set_timer_named(context, timer_name, timer_length):
The first argument of each Step function is always a Behave Context Object, with some additional properties added by Voight Kampff. These are:
context.bus - an instance of the Mycroft MessageBusClient class.
context.log - an instance of the Python standard library logging.Logger class.
context.msm - a reference to the Mycroft Skills Manager
Now we have the structure of our Step function in place, it's time to look at what that Step does.
def given_set_timer_length(context, timer_length):
    emit_utterance(context.bus, 'set a timer for {}'.format(timer_length))
    wait_for_dialog(context.bus, ['started.timer'])
    context.log.info('Created timer for {}'.format(timer_length))
    context.bus.clear_messages()
In this example we have four lines:
Emitting an utterance to the Mycroft MessageBus to create a timer for the given length of time.
Waiting for dialog to be returned from the MessageBus confirming that the timer has been started.
Logging an info level message to confirm we created a timer.
Clearing any remaining messages from the MessageBus to prevent interference with the test.
Note: the log message in this Step isn't really necessary. Voight Kampff will confirm that each Step is completed successfully. It just serves as a useful example to show how messages can be logged.
For further assistance with Skill testing, please post your question on the Community Forums or in the Skills channel on Mycroft Chat.
See our tips for how to ask the best questions. This helps you get a more complete response faster.
|
https://mycroft-ai.gitbook.io/docs/skill-development/voight-kampff/custom-steps
|
I have two iterables in Python, and I want to go over them in pairs:
foo = (1, 2, 3)
bar = (4, 5, 6)

for (f, b) in some_iterator(foo, bar):
    print "f: ", f, "; b: ", b
It should result in:
f: 1; b: 4 f: 2; b: 5 f: 3; b: 6
One way to do it is to iterate over the indices:
for i in xrange(len(foo)):
    print "f: ", foo[i], "; b: ", bar[i]
But that seems somewhat unpythonic to me. Is there a better way to do it?
In Python 2, zip returns a list of tuples. This is fine when foo and bar are not massive. If they are both massive then forming zip(foo, bar) is an unnecessarily massive temporary variable, and should be replaced by itertools.izip or itertools.izip_longest, which returns an iterator instead of a list:

import itertools
for (f, b) in itertools.izip(foo, bar):
    print "f: ", f, "; b: ", b
You want the zip function.

for (f, b) in zip(foo, bar):
    print "f: ", f, "; b: ", b
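One addition beyond the original answers: on Python 3, zip itself returns a lazy iterator and itertools.izip no longer exists, so plain zip covers both the small and the massive case:

```python
foo = (1, 2, 3)
bar = (4, 5, 6)

# In Python 3, zip() is lazy: no large temporary list is ever built.
pairs = []
for f, b in zip(foo, bar):
    pairs.append("f: {}; b: {}".format(f, b))
```

If the iterables can differ in length and you need padding, itertools.zip_longest is the Python 3 spelling of izip_longest.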
|
https://pythonpedia.com/en/knowledge-base/1663807/how-to-iterate-through-two-lists-in-parallel-
|
Hi!
Working on Oracle v8i with OS as NT!
I have this problem:
The DB contains around 2 GB of data but the temp segment, consisting of the temp01.dbf file, has shot up to 4 GB. This DB was created with the default temp segment size, and after that an import of 2 GB of data was done into the newly created DB. Suddenly, the temp segment has shot up to 4 GB.
Now I am left with only 15 MB of space on the D drive where Oracle is stored. The other drives do not have much space left either.
The size of the Oradata folder is 6 GB. I need to resize the temp segment to around 500 MB. I hope that this size would work with no problems for 2 GB of data and that not much sorting would be done in this temp space. I hope I am correct!
Moreover, I cannot resize the temp file to 500 MB as Oracle throws up this error message: used size is beyond the resize file size. I believe I have to drop the existing temp file of 4 GB and recreate a new temp file of 500 MB with the following command:
ALTER TABLESPACE TEMP
DATAFILE 'D:\ORACLE\ORADATA\PVPL\TEMP01.DBF' SIZE 500M
AUTOEXTEND ON NEXT 1M;
Is this command OK for the temp segment, and would the DB work if I drop and recreate a new temp segment as above? Please advise!
Please correct me if I am wrong!
Regards,
Amit.
Oracle DBA (OCP) v8i,v9i
|
http://www.dbasupport.com/forums/showthread.php?28450-Temp-segment-size-problem!&p=120152&mode=threaded
|
I wrote a program that reads a 256 KB array, expecting roughly 1 ms of latency per pass. The program is pretty simple and attached below.
However, when I run it on a VM on Xen, I found that the latency is not stable. It has the following pattern (the time unit is ms):
#totalCycle CyclePerLine totalms
22583885 5513 6.452539
3474342 848 0.992669
3208486 783 0.916710
25848572 6310 7.385306
3225768 787 0.921648
3210487 783 0.917282
25974700 6341 7.421343
3244891 792 0.927112
3276027 799 0.936008
25641513 6260 7.326147
3531084 862 1.008881
3233687 789 0.923911
22397733 5468 6.399352
3523403 860 1.006687
3586178 875 1.024622
26094384 6370 7.455538
3540329 864 1.011523
3812086 930 1.089167
25907966 6325 7.402276
#include <iostream>
#include <cstdio>
#include <cstdlib>
#include <string>
#include <ctime>
using namespace std;
#if defined(__i386__)
static __inline__ unsigned long long rdtsc(void)
{
unsigned long long int x;
__asm__ volatile (".byte 0x0f, 0x31" : "=A" (x));
return x;
}
#elif defined(__x86_64__)
static __inline__ unsigned long long rdtsc(void)
{
unsigned hi, lo;
__asm__ __volatile__ ("rdtsc" : "=a"(lo), "=d"(hi));
return ( (unsigned long long)lo)|( ((unsigned long long)hi)<<32 );
}
#endif
#define CACHE_LINE_SIZE 64
#define WSS 24567 /* 24 Mb */
#define NUM_VARS WSS * 1024 / sizeof(long)
#define KHZ 3500000
// ./a.out memsize(in KB)
int main(int argc, char** argv)
{
unsigned long wcet = atol(argv[1]);
unsigned long mem_size_KB = 256; // mem size in KB
unsigned long mem_size_B = mem_size_KB * 1024; // mem size in Byte
unsigned long count = mem_size_B / sizeof(long);
unsigned long row = mem_size_B / CACHE_LINE_SIZE;
int col = CACHE_LINE_SIZE / sizeof(long);
unsigned long long start, finish, dur1;
unsigned long temp;
long *buffer;
buffer = new long[count];
// init array
for (unsigned long i = 0; i < count; ++i)
buffer[i] = i;
for (unsigned long i = row-1; i >0; --i) {
temp = rand()%i;
swap(buffer[i*col], buffer[temp*col]);
}
// warm the cache
temp = buffer[0];
for (unsigned long i = 0; i < row-1; ++i) {
temp = buffer[temp];
}
// First read, should be cache hit
temp = buffer[0];
start = rdtsc();
int sum = 0;
for (unsigned long wcet_i = 0; wcet_i < wcet; wcet_i++)
{
for(int j=0; j<21; j++)
{
for (unsigned long i = 0; i < row-1; ++i) {
if (i%2 == 0) sum += buffer[temp];
else sum -= buffer[temp];
temp = buffer[temp];
}
}
}
finish = rdtsc();
dur1 = finish-start;
// Res
printf("%llu %llu %.6f\n", dur1, dur1/row, dur1*1.0/KHZ);
delete[] buffer;
return 0;
}
The use of the RDTSC instruction in a virtual machine is complicated. It is likely that the hypervisor (Xen) is emulating the RDTSC instruction by trapping it. Your fastest runs show around 800 cycles per cache line, which is very, very slow; the only explanation is that each RDTSC results in a trap that is handled by the hypervisor, and that overhead is a performance bottleneck. I'm not sure about the even longer times that you see periodically, but given that RDTSC is being trapped, all timing bets are off.
You can read more about it here
By the way, that article is wrong in that the hypervisor doesn't set a cpuid bit to cause RDTSC to trap; it is bit #2 in Control Register 4 (CR4.TSD).
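Since each RDTSC is being trapped, one workaround is to time the entire loop once with `clock_gettime(CLOCK_MONOTONIC, ...)` and divide by the iteration count, so any trap overhead is amortized over the whole run. A minimal sketch, assuming Linux (`now_ns` is a helper name invented here; whether `clock_gettime` itself avoids the trap depends on the guest kernel's clocksource):

```cpp
#include <time.h>
#include <cstdint>

// Read a monotonic wall-clock timestamp in nanoseconds. Unlike a raw
// RDTSC around every pass, a single pair of calls bracketing the whole
// loop keeps any virtualization overhead out of the measured region.
static inline std::uint64_t now_ns() {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return static_cast<std::uint64_t>(ts.tv_sec) * 1000000000ULL
         + static_cast<std::uint64_t>(ts.tv_nsec);
}
```

Wrapping the pointer-chasing loop between two `now_ns()` calls and dividing by `row * iterations` would give an average latency per line that is not perturbed by per-sample timer reads.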
https://codedump.io/share/tEJw0k33JVRm/1/weird-program-latency-behavior-on-vm
> Michael Park wrote:
>
> > Daniel Pravat wrote:
> > Exposing a constructor taking both the error code and the message should be sufficient.
> >
> > The convenience provided by the parameter reordering is not adding a lot of value given the small number of instances where the error code is interpreted.
>
> Michael Park wrote:
> I'm not quite following. From
>
> > Exposing a constructor taking both the error code and the message should be sufficient.
>
> it seems like you're saying we should introduce a constructor that takes an error code, something like:
>
> ```
> WindowsSocketError::WindowsSocketError(int code, const std::string& message);
> ```
>
> But as I mentioned, even if we do that, the order of evaluation of the arguments is unspecified. And even if we were to say, well, it's "implementation-defined", we still have to be mindful of the code between `::connect` and `WindowsSocketError`. You seem to be supporting this argument with the second quote:
>
> > The convenience provided by the parameter reordering is not adding a lot of value given the small number of instances where the error code is interpreted.
>
> so I'm a bit confused. Could you be a little more specific/concrete?
>
> Daniel Pravat wrote:
> I was agreeing with you. We can both agree that we need both parameters to the constructor (the error message for logging and the error code for the execution flow). Your example from the first comment seems to imply you agree with a constructor with two parameters.
>
> You also made a reference to the parameter reordering that may allow a one-line return: `return ConnectError(message, ::WSAGetLastError());`. However, given this error is returned/used in only a few places at this time, the parameters can be in any order.
>
> The user (returning `WindowsError`) has to be aware that the last error may be overwritten, has to capture it ASAP, and later use it to construct `WindowsError`.
Ok, I've given this more thought and I'm still inclined to keep it the way it is. Just to capture our discussion: you're suggesting that we just have `WindowsError` with a constructor that looks like `WindowsError::WindowsError(int code, const std::string& message);`, and make the user pass `::GetLastError()` or `::WSAGetLastError()` explicitly. We could also make `::GetLastError()` be the default or whatever; I'm not concerned about that. Concretely, the following is what you're looking for:

```
int result = ::connect(...);
if (result < 0) {
  int code = ::WSAGetLastError();
  return WindowsError(code, "Failed to connect to " + stringify(address));
}
```

This ultimately results in something like this:

(a)

```
int result = ::connect(...);
if (result < 0) {
#ifdef __WINDOWS__
  int code = ::WSAGetLastError();
  return WindowsError(code, "Failed to connect to " + stringify(address));
#else
  return ErrnoError("Failed to connect to " + stringify(address));
#endif
}
```

or something like:

(b)

```
using ConnectError =
#ifdef __WINDOWS__
    WindowsSocketError;
#else
    ErrnoError;
#endif

int get_socket_error() {
#ifdef __WINDOWS__
  return ::WSAGetLastError();
#else
  return errno;
#endif
}

int result = ::connect(...);
if (result < 0) {
  int code = get_socket_error();
  return ConnectError(code, "Failed to connect to " + stringify(address));
}
```

I think we agree that the sequence of operations we want is `connect, get error, ..., construct error message` as opposed to `connect, ..., get error, construct error message`. In order to tie `connect` and error retrieval, we could introduce a `Try<int, ErrorCode> os::raw::connect(...);` where `ErrorCode` is just `class ErrorCode { int code; };`. Then we can write `os::connect` based on `os::raw::connect` with the assumption that the error retrieval happens immediately after the `::connect` call.
This approach of introducing `os::raw::` versions of everything clearly ties the recommended binding of action and error retrieval, but seems overboard unless we can show that it's likely for simple error-string constructions to overwrite the last error.

A few other things:

(1) `WindowsError` was already introduced and is currently used without worrying about the potential overwriting behavior of `::GetLastError()`.

(2) For (b), we would have to introduce a constructor `ErrnoError::ErrnoError(int code, const std::string& message)`. I really don't like this; allowing a custom code for a type that's supposed to __always__ capture `errno` makes no sense. If we were to go in this direction, I think we would probably have an `OSError` class which could be constructed from any of `::GetLastError()`, `::WSAGetLastError()`, or `errno`.

- Michael

-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
-----------------------------------------------------------

On April 3, 2016, 9:34 p.m., Michael Park wrote:
> (Updated April 3, 2016, 9:34 p.m.)
>
> Review request for mesos, Alex Clemmer and Joris Van Remoortere.
>
> Repository: mesos
>
> Description
> -------
>
> Updated `network::connect` to use the typeful `Try` error state.
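The ordering concern in this thread (capture the error code immediately after the call, build the message at leisure) can be sketched generically. This is a hypothetical illustration using `errno`, not Mesos code; `raw_call` and `ErrorCode` are names invented for the sketch:

```cpp
#include <cerrno>
#include <utility>

// Hypothetical sketch: pair a raw call with an immediate snapshot of
// errno, so that any string construction done afterwards cannot
// clobber the error code before it is read.
struct ErrorCode { int value; };

template <typename F>
auto raw_call(F fn) -> std::pair<decltype(fn()), ErrorCode> {
    auto result = fn();
    ErrorCode code{errno};  // captured before anything else can run
    return std::make_pair(result, code);
}
```

A higher-level `os::connect` built on such a wrapper would then be free to format its error message after the fact, which is the binding of action and error retrieval being argued for above.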
https://www.mail-archive.com/reviews@mesos.apache.org/msg29550.html
This class provides communication capabilities between several clients. More...
#include <qcopchannel_qws.h>
Inherits QObject.
List of all member functions.
This class provides communication capabilities between several clients.
The Qt Cop (QCOP) is a COmmunication Protocol, allowing clients to communicate inside of the same address space or between different processes.
Currently, this facility is only available on Qt/Embedded; on X11 and Windows we are exploring the use of existing standards such as DCOP and COM.
QCopChannel contains important functions like send() and isRegistered() which are static and therefore usable without an object.
In order to listen to the traffic on the channel, you should either subclass QCopChannel and provide a re-implementation of receive(), or connect() to the received() signal.
Returns TRUE if channel is registered.
The default implementation emits the received() signal.
Note that the format of data has to be well defined in order to demarshall the contained information.
See also send().
This signal is emitted whenever the receive() function gets incoming data.
See also receive().
This file is part of the Qtopia platform, copyright © 1995-2005 Trolltech, all rights reserved.
http://doc.trolltech.com/qtopia2.2/html/qcopchannel.html
Here's my code, basically a small program to calculate Body Mass Index. For some reason I get the follow error below the code.
import java.util.Scanner;
public class BMI{
public static void main(String[] args) {
Scanner input = new Scanner(System.in);
double BMI;
double weight;
double height1;
double height2;
// was using these, but not atm
/* double overweightValue = 25;
double underweightValue = 18.5; */
System.out.println("What is your weight? ");
weight = input.nextDouble();
System.out.println("What is your height? ");
height1 = input.nextDouble();
height2 = (height1 * 12);
BMI = (weight * 703) / (height2 * height2);
if (BMI > 25){
System.out.println("Your BMI is in the overweight range! ");
}
if (25 > BMI > 18.5) {
System.out.println("Your BMI is in the optimal range! ");
}
if (BMI < 18.5) {
System.out.println("Your BMI is in the underweight range! ");
}
}
}
C:\Users\jamea\Desktop\Java Programs\BMI.java:31: error: bad operand types for binary operator '>'
if (25 > BMI > 18.5) {
^
first type: boolean
second type: double
1 error
Tool completed with exit code 1
It's pretty clear that I cannot chain the "<" and ">" operators like this in an if statement. But how else would I achieve this?
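For reference, Java has no chained comparisons: `25 > BMI` evaluates to a `boolean`, which cannot then be compared against `18.5`, hence the "bad operand types" error. The range check has to be written as two comparisons joined with `&&`. A minimal sketch (the class and method names here are invented for illustration):

```java
public class BmiRange {
    // Classifies a BMI value; the middle branch shows how to express
    // "between 18.5 and 25" without chaining comparison operators.
    static String classify(double bmi) {
        if (bmi > 25) {
            return "overweight";
        } else if (bmi > 18.5 && bmi <= 25) {  // instead of 25 > bmi > 18.5
            return "optimal";
        } else {
            return "underweight";
        }
    }

    public static void main(String[] args) {
        System.out.println(classify(22.0));
    }
}
```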
http://www.javaprogrammingforums.com/whats-wrong-my-code/41665-bad-operand-binary-type-error-cant-figure-out.html
Unit testing is relatively new to ActionScript projects. Although FlexUnit has been around for a while, it wasn't intuitive to set up and there were a lot of inconsistencies in the documentation. Lucky for us, FlexUnit is now built into Flash Builder 4. Although the documentation is still sparse, this tutorial will be a good primer for setting up a Test Suite, going over several unit test examples and showing how to run and analyze them.
For those of you who aren't familiar with unit testing let's see what Wikipedia has to say
In computer programming, unit testing is a software verification and validation method where the programmer gains confidence that individual units of source code are fit for use... Unit tests are typically written and run by software developers to ensure that code meets its design and behaves as intended. - wikipedia
In this tutorial I'll show you how I set up a few simple unit tests on my Flash Camo framework. You'll need to download a copy of Flash Builder 4 (in Beta right now) to follow along. Also, you'll want to download the latest version of Flash Camo (2.2.1) from here.
Step 1: Setting up a Unit Test Project
Let's create a new project called UnitTestIntro.
We'll need to place our Flash Camo SWC into a libs/swcs folder inside our project:
Finally we'll need to tell our project where to find our SWC, right-click on the project and go into its properties. Go to the ActionScript Build Path and select the Library Path tab. Click on Add SWC Folder and point it to the lib/swcs directory.
Now that we have everything setup, we can begin doing some basic unit testing. It's important to note that you cannot do unit testing directly in a Flex Library Project. That's not an issue for this project but if you want to test a library of code what I usually do is set up a new project (like we are doing here) then link the two projects together and create all the tests in the new project.
If you are not working in a Flex Library project and want to test your code you can simply create your tests in the same project. I would suggest keeping the two separate, this way you can clearly see what are test classes and what are real classes. We'll get into this a little later when you see how we set up the test.
Step 2: Drafting a Plan
Before I start anything, I take a moment to figure out exactly what I'm going to do. This is very critical when setting up unit tests. This kind of development is called Test Driven Development. I think I see another definition coming up:
Test-driven development (TDD) is a software development technique that uses short development iterations based on pre-written test cases that define desired improvements or new functions. Each iteration produces code necessary to pass that iteration's tests. Finally, the programmer or team refactors the code to accommodate changes. A key TDD concept is that preparing tests before coding facilitates rapid feedback changes. - wikipedia
Since this is a short intro we're going to use an existing code library to test against. However, as you build your own application you should be writing tests along the way to validate that the code works and that any changes/refactoring doesn't break your code's implementation. Here's an outline of the tests we'll perform:
- Create an instance of a CamoPropertySheet.
- Validate that it can parse CSS.
- Test the number of selectors it found.
- Test that the CamoPropertySheet can be converted back to a string.
- Test what happens when we request a selector that was not found.
- Validate clearing a CamoPropertySheet.
If you're new to Flash Camo you can check out the intro I wrote (part 1 and part 2) but you can easily do this tutorial without any knowledge of how the framework works. Again, this is simply going to be a code library for us to test with.
Step 3: Creating Our First Test
Now that we have a plan for doing our testing, let's create our first test. Right-click on your project and select New > Test Case Class
You will now be presented with the creation wizard. It should be familiar to anyone who has created a class in Flex/Flash Builder before. Here is what the window looks like:
Let's talk about a few of the new fields in the wizard. We can start with the fact that Superclass is already filled in for us: flexunit.framework.TestCase. You can't change this and probably shouldn't. All this does is extend the base test class from the Unit Test Framework. Next you'll see a few check boxes for code generation. Leave all of these checked by default. Finally there is a field for the Class we want to test.
Since we're going to test Flash Camo's CamoPropertySheet let's fill the following values into the form:
Name: CamoPropertySheetTest
Class to test: camo.core.property.CamoPropertySheet
Here's a screen shot of how I set this up:
Hit finish and we should have our first test ready for us to add some code to.
Step 4: Anatomy of a TestCase Class
Let's take a look at the code Flash Builder has generated for us:
We start out by having a private property called classToTestRef which is set to the value of our CamoPropertySheet. This allows the test to create an instance of this class and forces the compiler to import it when we run our test.
The next important method is setUp. This is where we'll create an instance of our test class, configure it and make sure everything is ready for us to perform a test.
The last method here is tearDown. This is where we'll destroy our test class' instance when the test is complete. This is very important when running multiple tests.
You may have noticed at the end of the class there's another method called testSampleMethod which is commented out. This is an example of how you would set up a single test.
Each test will be a method we add to this class. When we run the Unit Test Harness it will automatically call all of our methods dynamically, starting with setUp and finishing with tearDown.
Now that we have a basic understanding of the TestClass setup let's look at running it.
Step 5: Running the Unit Test
Before we can run this we'll need at least one test. Let's uncomment the testSampleMethod for this example.
Once you've uncommented the testSampleMethod let's right-click on our project and select Run As > Execute FlexUnit Test.
You should now see the following window prompting us to select which test we want to run.
As you see, this is set up for when we have lots to test but for now we only have one testSampleMethod to run. Click Select All and hit OK.
After the test is run, your web browser will pop up with the following page:
If you go back to Flash Builder you'll also see this in the FlexUnit Results panel:
See how easy this was to run? We've performed our first unit test and already we have a single error. Before we move on to fix this error let's talk about these two windows.
The webpage that was opened should automatically close, but sometimes it doesn't. It is a simple SWF that performs our tests in Flash and outputs data that Flash Builder reads to display the final results of the test. You can ignore this webpage for the most part.
The FlexUnit Results panel is where all of the results will be shown as well as a place that allows you to organize and filter the test feedback. We'll go over this a little later in the tutorial when we actually have something to test.
Step 6: Set Up
Before we can really get into testing we'll need to set up our test class. Let's add the following code to the setUp method:
var xml:XML = <css><![CDATA[
    /* This is a comment in the CSS file */
    baseStyle {
        x: 10px;
        y: 10px;
        width: 100px;
        height: 100px;
        padding: 5px;
        margin: 10px;
    }
    baseStyle .Button {
        x: 0px;
        y: 0px;
        background-color: #000000;
    }
    #playButton {
        background-color: #FFFFFF;
        background-image: url('/images/full_screen_background.jpg');
    }
    #fullScreenButton {
        background-color: #FF0000;
        background-image: url('/images/full_screen_background.jpg');
    }
    #playButton:over {
        background-color: #333333;
    }
    interactive {
        cursor: hand;
    }
]]></css>;
rawCSS = xml.toString();
sheet = new CamoPropertySheet();
sheet.parseCSS(rawCSS);
We'll also need to setup some properties:
private var sheet:CamoPropertySheet;
private var rawCSS:String;
Here's what's happening in this setup: in order for us to test our CamoPropertySheet we are going to need some CSS as a string. Normally I would load in the CSS from an external file but since we're doing a simple test I just create a new XML block and put the CSS text inside of the first node. Normally, you don't have to wrap CSS for the CamoPropertySheet inside of XML but when working with large strings inside of the editor I find it easier to use xml since you can wrap the text and it retains some formatting.
Next you'll see that we set our rawCSS property to the xml's string value. This converts the xml into a string. Then we create a new CamoPropertySheet. Finally, we tell the sheet to parse the rawCSS.
That's all there is to setting up this particular class. The setup is different for each class you test. It is important to demonstrate that we're doing the bare minimum to get a class ready to be tested and we can't test a class without values can we?
Step 7: Our First Test
Let's get right to it. Once a CamoPropertySheet has successfully parsed a CSS string, we can request an array of selector names to verify everything has indeed been parsed. For those not familiar with CSS jargon, a selector is the name of a CSS style; e.g. baseStyle{...} has a selector called baseStyle.
Here is what our test would look like in English:
- Get a list of selectors from the CamoPropertySheet.
- Get the length of the selector array.
- Compare the length value to 6 (the number we are expecting returned).
Let's replace our testSampleMethod with the following method:
public function testParseCSS():void {
    var selectors:Array = sheet.selectorNames;
    var total:Number = selectors.length;
    assertEquals(total, 6);
}
As you can see, we get an array of selector names. Next we get the total and introduce our first assertion, assertEquals. In the next step I'll explain the assert methods in more detail, but let's just run this and see if the test passes.
When you run the test you should see the following in the FlexUnit Results panel:
Nice, our test passed. We received the exact number of selectors that we were expecting. Let's look at what assert tests we can use.
Step 8: Assertions
In unit testing we run assertions. Each assertion handles a particular type of test. Here is a brief overview of the most common assertions you will probably use:
- assertEquals - test to see if one value equals another.
- assertFalse - test to see if value equals false.
- assertNotNull - test to see if value is not equal to null.
- assertNotUndefined - test if value is not undefined.
- assertNull - test if value is null.
- assertStrictlyEquals - test to see if two values strictly equal each other.
- assertTrue - test to see if value is true.
- assertUndefined - test to see if value is undefined.
Now, before we test an example of each one let's set up our tearDown method.
Step 9: Tear Down
This is going to be a very short step but it's a very important one. Let's add the following line to our tearDown method after super.tearDown():
sheet = null;
What this basically does is remove the reference to our CamoPropertySheet so the Garbage Collector can remove it.
You should always set up your tearDown especially when running multiple test classes or a large test suite.
Step 10: Assert Equals
We've already seen an example of this before in Step 7, but let's go through and add another assertEquals. Here is the next test we'll perform:
- Compress the CSS text (remove whitespace, special characters and other obstacles the CSS parser may not be able to recognize), since the CamoPropertySheet automatically compresses CSS text when it is parsed.
- Convert the CamoPropertySheet into text (this will be a compressed version of the rawCSS we used earlier).
- Compare that the CamoPropertySheet text is equal to our compressed css string.
In order to run the test, let's add the following method:
public function testToString():void {
    var compressedCSS:String = "baseStyle{x:10;y:10;width:100;height:100;padding:5;margin:10;}baseStyle .Button{x:0;y:0;background-color:#000000;}#playButton{background-color:#FFFFFF;background-image:url('/images/full_screen_background.jpg');}#fullScreenButton{background-color:#FF0000;background-image:url('/images/full_screen_background.jpg');}#playButton:over{background-color:#333333;}interactive{cursor:hand;}";
    assertEquals(sheet.toString(), compressedCSS);
}
Now run the Unit Test and make sure you select both tests from the check boxes. New tests don't automatically get selected. If everything went well you should see a success and that 2 tests were run.
Step 11: Assert False
We can do a simple test to make sure if we request a Selector that doesn't exist we get a false value. Here's how we would do this with the CamoPropertySheet:
- Make a request for a selector that doesn't exist.
- Check the returned selector's name to see if it equals "EmptySelector" - a constant on the PropertySelector class.
- Assert if the value is false.
Here is the code to perform the test:
public function testEmptySelector():void {
    var selector:PropertySelector = sheet.getSelector("testSelector");
    var exists:Boolean = (selector.selectorName == PropertySelector.DEFAULT_SELECTOR_NAME) ? false : true;
    assertFalse(exists);
}
As you can see, we're simply requesting a fake style name testSelector. We check to see if the selector's name is the default name applied when no selector is found. Finally we pass the exists variable to the assertFalse method. When you run this you should now see 3 passes totaling a success.
Step 12: Assert Not Null
Next we want to make sure that the text value from our CamoPropertySheet is never null. Let's look at how to structure our test:
- call toString on our CamoPropertySheets instance and test to see if it is not null
Here's our test method:
public function testCSSValue():void {
    assertNotNull(sheet.toString());
}
This is pretty straightforward, so when you run the test we should now have 5 successes. Every time we run the test we can check the names of our test methods by clicking on the Default Suite folder in the FlexUnit Results panel back in Flash Builder.
Step 13: Assert Not Undefined
In this next test we're going to follow up on the empty selector test to verify that every selector has a selectorName.
- Get a selector that doesn't exist.
- Get a selector that does exist.
- Test to see if both selectors' names are not undefined.
Here is the test method:
public function testSelectorsHaveNames():void {
    var selectorA:String = sheet.getSelector("testSelector").selectorName;
    var selectorB:String = sheet.getSelector("baseStyle").selectorName;
    assertNotUndefined(selectorA, selectorB);
}
The first two lines are self-explanatory; we simply ask for two selectors, one of which we know does not exist. In the assert, however, you'll notice we're passing in two values instead of the usual one we've used up until this point. This is not unique to this example; in fact, each of the assert methods allows you to pass in any number of values to test. Here we simply make sure that selectorA and selectorB are not undefined.
Step 14: Assert Strictly Equals
Here is an example of how to strictly compare two objects. Here I'm using strings which may not be the best use of this example but it's good to see the test in action. What are we going to do?
- Clone the CamoPropertySheet.
- Test that the string value of our CamoPropertySheet is equal to the value of a cloned CamoPropertySheet.
public function testClone():void {
    var clone:CamoPropertySheet = sheet.clone() as CamoPropertySheet;
    assertStrictlyEquals(sheet.toString(), clone.toString());
}
As you can see, we call the clone method of the CamoPropertySheet to get back an exact copy of the PropertySheet. Next we run it through the assert test by calling the toString method on each. If the returned CSS text is the same, we have a success for the test.
Step 15: Assert True
Now we want to test that when we request a selector, it has a property we are expecting. Here is the test:
- Request the baseStyle selector.
- Test to see if the selector has the property x.
Here is the test method:
public function testSelectorHasProperty():void {
    var selector:PropertySelector = sheet.getSelector("baseStyle");
    assertTrue(selector.hasOwnProperty("x"));
}
As you can see here we are expecting our baseStyle selector to have the x property. If this exists we can assume that it was correctly parsed from the CSS string. Since it exists we have passed this test.
Each of these tests becomes self explanatory as to how you implement them. Let's look into what happens when we fail a test in the next two steps.
Step 16: Assert Undefined
We're going to test for undefined now but Flash Camo has been designed to not return undefined. So the following test will fail. Let's check out what we're going to test for.
- Call the clear method on the CamoPropertySheet.
- Test to see if calling toString will return undefined.
Here's the code for the test:
public function testClear():void {
    sheet.clear();
    assertUndefined(sheet.toString());
}
Now let's run this test and go onto the next step to discuss the results.
Step 17: Failing A Test
If you did the previous step and ran the unit test, you should see the following in the FlexUnit Results panel:
Notice how we have 1 failure from our testClear method?
If you double-click on the failed test in the Test Results panel you will jump to the source of the test that failed. This is a great way to correct your mistake or alter the test so it doesn't fail. There isn't much more to failing a test than this. Every test that fails will show up in this panel; you can tell the panel to only show failed tests by clicking on the red exclamation mark above where it tells you how many errors you had.
Now that we have failed this test replace it with the following:
public function testClear():void {
    sheet.clear();
    assertEquals(sheet.toString(), "");
}
If you run the test again you'll see that it will pass. Now you have 7 out of 7 passed tests and this class is successfully working. Let's talk about setting up unit tests for your own custom classes.
Step 18: Auto Generating Test Classes
Up until this point we have been testing a precompiled library, but you may be interested in how this will work on your own classes. We're going to alter the doc class a little then run a custom unit test on it. To get started, replace all of the code in the UnitTestIntro class with the following:
package {
    import flash.display.Sprite;

    public class UnitTestIntro extends Sprite {
        private var _firstName:String;
        private var _lastName:String;
        private var _loggedIn:Boolean;

        public function UnitTestIntro() {
            _firstName = "Jesse";
            _lastName = "Freeman";
        }

        public function get firstName():String {
            return _firstName;
        }

        public function get lastName():String {
            return _lastName;
        }

        public function isLoggedIn():Boolean {
            return _loggedIn;
        }
    }
}
Once you have the code in place, right-click on UnitTestIntro and select New > Test Case Class. If you look at the wizard this time you'll see all of the fields are filled in for us:
This time, instead of clicking Finish, hit next and look at the following window:
Here you can select all of the public methods of that class to test. Notice how our getters for firstName and lastName are not part of this list. Unit testing can only be performed on public methods. Also, you will see every inherited method of the class so we have Sprite/DisplayObject methods here since our doc class extends Sprite. Select isLoggedIn and hit finish.
If you scroll down to the bottom of the new test class that was just generated you'll see it has automatically added in a testMethod for isLoggedIn.
When testing your own code Flash Builder can help automate the process of scaffolding your tests. This is a great help when dealing with large classes that have lots of methods.
Step 19: Best Practices
By now you should have a solid understanding of how Unit Testing in Flash Builder works. You may even be ready to start setting up your own test. There are a lot of things I wasn't able to cover in this short tutorial so here are some things to keep in mind when creating your tests.
- Keep your code small and easily testable. Make short methods, break code down into "units" to help facilitate testing. It's ok to have lots of methods in your classes. Not only does it help you when unit testing but also when extending classes and dealing with inheritance overriding.
- Test a behavior and not a method. This tutorial showed you how I would interact with the instance of the CamoPropertySheet. I was testing behavior responses for the underlying parsing/retrieval system. Make sure you are not testing that functions simply return values but that the underlying logic is correct. Did something get parsed, did the dependent methods do what they were expected to do?
- Keep your test names clean. You should be able to easily understand what is going on by simply looking at the name of the test method. Remember this code is not compiled into your final application so if you have incredibly long method names, that's ok as long as they're descriptive.
- Don't rely on the console window for your test. This means you shouldn't expect a developer to watch trace outputs to see if the test is working correctly. Instead make the test fail or succeed and not output its results.
- Do a search for Unit Testing in other languages to see how it has been implemented elsewhere. Also pick up a book on Test Driven Development (TDD).
Conclusion
As you can see, setting up Unit Testing is very simple, but creating applications revolving around Test Driven Development is an art form in its own right. Hopefully after this intro you'll be comfortable setting up a simple test to validate that your code works as expected. As you rely more and more on Unit testing the number of bugs in your code will dramatically go down. As long as you remember to code towards passing a test, keeping your methods small and validating your unit tests often, you'll be well on your way to building more stable code.
Thanks for reading.
https://code.tutsplus.com/tutorials/introduction-to-unit-testing--active-1215
Seam Remoting and Context (nathan dennis, Mar 5, 2008 7:27 PM)
I've been working on a Web 2.0 app that is pretty complex. In the app i perform a lot of partial rerenders based on the users actions.. drag and drop for adding elements to an array... i also use a lot of javascript to calculate various div positions on the screen.... position, dimensions, etc.
the page also consist of a few mod panels that contain wizard type functionality,, and also contain partial rerenders within them on their own session beans.
I was originally using session context on this main backing bean. that would allow me to do as many rerenders as i wanted with no problem. But im worried about scalability. This bean could contain massive amounts of data that i really don't want hanging around as long as the user is log in.
When i tried to switch the main session bean to conversation context... my rerenders no longer work. I'm sure this is a fundamental misunderstanding of when the appserver is cleaning up the conversation context beans... I was under the impression it was when the conversation id changed. if that was the case i could use something like
Seam.Remoting.getContext().setConversationId( id ); Seam.Component.getInstance("modeditAction").getRemoteFile(tmedianame, showCropperBox);
before i did the ajax call and i would keep the same cid. obviously where
id = #{conversation.id}
I would consider doing a complete page refresh,,, which seems to work with conversation context but part of my page contains several large movable image files... png format so i can maintain an alpha channel. i really dont want the user waiting on those to load more than once.
what am i missing regarding seam.remoting rerenders on callback and the conversation scope? Is this possible using conversation scope without a complete rerender of the page?
if not,,, is there a way i can clean up a group of session context beans manually before the user's overall session has ended?
1. Re: Seam Remoting and Context (Shane Bryzak, Mar 5, 2008 10:25 PM, in response to nathan dennis)
This should work, could you post your code for your Seam component (modeditAction) plus the relevent bits from your view?
2. Re: Seam Remoting and Context (nathan dennis, Mar 5, 2008 10:54 PM, in response to nathan dennis)
this is definitely a bit... no need in making everyone's head swim with 25 thousand lines of code... literally.
from one of the modalPanels.. i originally thought i could use
@Begin(nested=true)
on these ,, but had no luck.
<h:form>
    <a:jsFunction ...
    <a:outputPanel ...
        <rich:menuGroup
            <rich:menuItem
                <s:graphicImage
    </a:outputPanel>
</h:form>
from the js Remote Call
function specialFunction(i){
    Seam.Remoting.getContext().setConversationId( id ); //passed to javascript when modPanel opened
    Seam.Component.getInstance("modeditAction").specialFunction(i, editItCallback);
}

function editItCallback(result){
    if(result == "true"){
        rerenderCropperBox();
    } else {
        alert("an error has occurred while loading the selected file");
    }
}
from the modalPanel backing bean i make calls too.
@Stateful
@Name("modeditAction")
@Scope(ScopeType.SESSION)
public class ModularEditAction implements ModularEditLocal, Serializable {
    ...
    //image editing
    byte[] outpic = null;

    public String getRemoteFile(String medianame){ ....pseudo get editedImage... }
    ....
    public String specialFunction(int i) throws IOException {
        try {
            PictureUtilLocal sfunction = (PictureUtilLocal) Component.getInstance("pictureUtil");
            sfunction.loadBuffered(editedImage);
            sfunction.setQuality(10);
            sfunction.specialFunction(i);
            editedImage = sfunction.writeResult("PNG");
            sfunction.destroy(); //is this a good way to clean up????
            ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
            ImageIO.write(editedImage, filetype, outputStream);
            outpic = outputStream.toByteArray();
            outputStream.close(); outputStream.flush();
            rotsw = false;
            return "true";
        } catch (Exception e) {
            return "false";
        }
    }
and the interface
@Local
public interface ModularEditLocal {
    ....
    public byte[] getOutpic();
    public void setOutpic(byte[] outpic);

    @WebRemote
    public String specialFunction(int i) throws IOException;
    ....
3. Re: Seam Remoting and Context (nathan dennis, Mar 5, 2008 10:59 PM, in response to nathan dennis)
ignore the try... dont ask me how it got there..
4. Re: Seam Remoting and Context (Shane Bryzak, Mar 6, 2008 1:16 AM, in response to nathan dennis)
I assume that when you tried this you changed it to @Scope(CONVERSATION). Where/how was your conversation started?
5. Re: Seam Remoting and Context (nathan dennis, Mar 6, 2008 2:49 AM, in response to nathan dennis)
I apologize..
actually i used @Scope(Scope.CONVERSATION) which i believe might be the same thing.
I tried starting this conversation several ways actually... both with annotation and xml.
as i said before the above is an snippet from one of the modalPanels contained on this page.
the conversation was originally begin using
@Begin(join=true)
from the main backing bean of the page.
in for the nested modalPanel conversation i created a modpanel.page.xml and used something like
<begin-conversation
when that didnt work, i created a method that pretty much did nothing and added @WebRemote to the interface and start it from the javascript call using
@Begin(nested=true) public void startAConversation()
all this while attempting to use the
@Scope(Scope.CONVERSATION)
strange this was, if i tried to start a conversation on the modPanels things seemed to break. currently running this modPanel in session context, and only beginning the conversation on the page that it is imported into seems to work fine... as long as i keep track of my conversationId...
here is more snippet from the main page that may help you understand how the above example fits into the application..
@Stateful
@Name("picAction")
//@Synchronized
//@Scope(ScopeType.CONVERSATION)
@Scope(ScopeType.SESSION)
public class PicAction implements PicLocal, DropListener, Serializable {
    ....
    @Begin(join=true)
    public void start(){
        initEdit();
    }
the main xhtml where the modalPanel is included.
<h:panelGroup <a:include </h:panelGroup>
then obviously (maybe not so obviously) modpanel.xhtml is directed toward modeditAction.
so the conversation is currently only started once on the picAction bean, and both are running in session context. i dont even know why the modPanel is working. my understanding was it shouldnt. after two days of debug with no luck nesting the conversation and getting my partial refresh calls to work in anything other than session context... i start thinking more about how i could keep myself out of memory trouble by cleaning object manually,, instead of building a well designed application. but i know this approach will haunt me... know what i mean?
6. Re: Seam Remoting and Context (Shane Bryzak, Mar 6, 2008 7:59 AM, in response to nathan dennis)
Ok how about you try to start the conversation via a remoting call. Make sure you set the conversation ID to null before you make the call (Seam.Remoting.getContext().setConversationId(null)), then make sure the conversation ID is set to whatever ID is returned in the context after invoking your @Begin method. There should be no problem at all having remoting maintain its own conversation, separate to the rest of the page.
7. Re: Seam Remoting and Context (nathan dennis, Mar 6, 2008 6:34 PM, in response to nathan dennis)
i tried this. used
@Begin(nested=true) public String getRemoteFile(String medianame)
in modeditAction.
code used in the javascript
Seam.Remoting.getContext().setConversationId( null );
Seam.Component.getInstance("modeditAction").getRemoteFile(tmedianame, showCropperBox);
id = Seam.Remoting.getContext().getConversationId();
which didnt quite work as desired. so i used debug to look at the object hanging around out there.
i saw a conversation of modeditAction associated with the initial instance of the main page.. like it loaded an instance before i called Begin(). it contained a bunch of null values. then i also saw a conversation with a null view-id that contained all the information that my bean was supposed to contain.
its like the s:graphic is directed at the wrong instance,,
thanks for the help. apparently, this is by far my weakest point of understanding the seam framework.
8. Re: Seam Remoting and Context (Shane Bryzak, Mar 7, 2008 12:15 AM, in response to nathan dennis)
Don't use a nested conversation. Simply let remoting start its own conversation.
9. Re: Seam Remoting and Context (nathan dennis, Mar 7, 2008 6:16 AM, in response to nathan dennis)
I really feel like i am starting to get close on this. i just did a major amount of reading/rereading of every seams book and documentation i could get my hands on.
i got rid of that nest with no luck...
correct me if im wrong though...
since i have a login. @Begin signifying the beginning of one long running conversation, then i should nest the initial entry point into the this function of my application. And all @Begin(joins) after that would also be nested from the initial entry point until i call @End (only cleaning up the nested conversation).
i havent try it yet. and here is why..
the app should theoretically run under one long running conversation starting at the login and using joins.
AFTER trying all combinations of the above (passing cid to Seam Remoting) my ajax refresh is still pointed at the wrong instance. Data is correct on the initial load, but after an action, the refresh displays as if the page wasnt initialized and i see too many conversations in debug.
The only place on the page that performs the refresh correctly is where i used a4j:support... that one works like a charm...
so here is what im thinking.... the problem is here
<a:jsFunction
it is getting a new conversationid on these rerenders...
the question is how do i force this component to use the correct conversation id?
somehow with
<a4j:actionParam>
maybe???
10. Re: Seam Remoting and Context (nathan dennis, Mar 7, 2008 7:24 AM, in response to nathan dennis)
oh man,,, am i confused now... i just build a simplest case to prove my point and i proved myself wrong and this guy
My Link
wrong in the process.
the simplest case works fine if you pass it the id. my app does the exact same thing just with more noise and fluff... and the calls might take longer to process...
im at a loss...
11. Re: Seam Remoting and Context (nathan dennis, Apr 1, 2008 10:25 PM, in response to nathan dennis)
after letting this issue rest a bit,, and reading a bit more.. i figured it out. for any other poor souls whom an unfortunate chain of events has led to this post... your best friend with ALL richfaces functions is
<s:conversationId/>
i mean use it everywhere for every call returning an ajax response to the server if the conversation context must land on the same instantiated bean. it may look like the address is staying the same during the call by looking at the address bar. but its not. it is giving you a new conversation id on the update.
this includes all drop zones, jsFunctions.
as well as setting the conversation id in seams remoting as described above.
i would have seen this sooner if the app wasnt already so complex.
lesson well learned.
12. Re: Seam Remoting and Context (Sachin Rajshekarappa, Jun 10, 2008 12:24 PM, in response to nathan dennis)
I am facing a similar issue with nested conversation on ajax calls.
Can anyone direct me to a small example program that illustrates the above discussed topic. i.e using nested conversations with ajax.
Thanks
13. Re: Seam Remoting and Context (nathan dennis, Jun 10, 2008 6:53 PM, in response to nathan dennis)
i dont have an example i can post on here... but post some of your code and ill help.
https://developer.jboss.org/thread/180727
New in the Softimage 2011 Subscription Advantage Pack
The siutils Python module makes it easier to import modules into your self-installing plugins. Just put your modules in the same location as your plugin file, and you can use the __sipath__ variable to specify the module location.
__sipath__ is always defined in the plugin namespace, so no matter where you put a plugin, you can simply use __sipath__ to specify the location.
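The shipped siutils module is the authoritative implementation; as a rough mental model (my own approximation, not Softimage's source), add_to_syspath behaves something like this:

```python
import os
import sys

def add_to_syspath(path):
    # Approximation of siutils.add_to_syspath: register the plugin's
    # folder on sys.path (once), so modules stored next to the plugin
    # file can be imported.
    if path and path not in sys.path:
        sys.path.append(path)
    return path

def add_subfolder_to_syspath(path, subfolder):
    # Approximation of siutils.add_subfolder_to_syspath: same idea,
    # but for a subfolder of the plugin location.
    return add_to_syspath(os.path.join(path, subfolder))
```

With that picture in mind, `siutils.add_to_syspath( __sipath__ )` followed by a plain `import` is all a plugin needs.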
Here’s a simple example that shows how to import a module into your plugin.
- Line 04: Import the siutils module
- Line 39: Use add_to_syspath() to add __sipath__ to the Python sys path.
If the module was located in a subfolder of the plugin location, you could use siutils.add_subfolder_to_syspath( __sipath__, 'mysubfolder' )
- Line 40: Import the module
import win32com.client
from win32com.client import constants
import siutils  # (line 04 in the original listing)

null = None
false = 0
true = 1

def XSILoadPlugin( in_reg ):
    in_reg.Author = "blairs"
    in_reg.Name = "TestPlugin"
    in_reg.Major = 1
    in_reg.Minor = 0
    in_reg.RegisterCommand("Test","Test")
    #RegistrationInsertionPoint - do not remove this line
    return true

def XSIUnloadPlugin( in_reg ):
    strPluginName = in_reg.Name
    Application.LogMessage(str(strPluginName) + str(" has been unloaded."),constants.siVerbose)
    return true

def Test_Init( in_ctxt ):
    oCmd = in_ctxt.Source
    oCmd.Description = ""
    oCmd.ReturnValue = true
    return true

def Test_Execute( ):
    Application.LogMessage("Test_Execute called",constants.siVerbose)
    # print __sipath__
    siutils.add_to_syspath( __sipath__ )  # (line 39 in the original listing)
    import TestModule                     # (line 40 in the original listing)
    TestModule.test( "hello world" )
    return true
What’s the correct way to *not* have to use “import” atop each custom command that needs a particular custom module? I noticed __sipath__ is not always set when the global scope of the plugin file is executed. Sometimes it’s empty/missing. 😦
Sorry, I’m not sure I understand the question. What do you mean by “atop each custom command” ?
I tried to repro, but I didn’t. But I spent only 5 minutes loading and unloading the plugin and calling the command…does it take longer for the problem to manifest itself?
My bug was something that was happening to me a long time ago in Windows on 2011 or 2012 and at the time __sipath__ was being set sometimes and other times it was blank, messing up my appending to the PATH and therefore importing my custom module.
I’m on 2012 on Linux now and I tried __sipath__ again and it seems to work OK. Sorry for the noise. Maybe it’s been fixed. (I’ll check on Windows at home later.)
Oh, and for clarity sake, by “atop each custom command” I meant that in your sample you do the import at the function scope not in the global scope. When you do that the import is not registered globally, which would imply you would do an import statement on each custom command using that custom module. That seemed silly and redundant, and it is, because if __sipath__ is working as advertised you can do the import outside, first thing in your file and use it later in any functions you wish.
I take it back… It DOES fail sometimes:
# ERROR : Traceback (most recent call last):
#   File "", line 16, in
#     PLUGIN_PATH = __sipath__
# NameError: name '__sipath__' is not defined
# - [line 16 in myconfidentialfile.py]
# ERROR : Property Page Script Logic Error (Python ActiveX Scripting Engine)
# ERROR :  [14]
# ERROR :  [15] PLUGIN_NAME = 'someName'
# ERROR : >[16] PLUGIN_PATH = __sipath__
# ERROR :  [17] if PLUGIN_PATH not in sys.path:
# ERROR :  [18]     siutils.add_to_syspath( __sipath__ )
# ERROR :  [19] import mymodule
# ERROR : Traceback (most recent call last):
#   File "", line 16, in
#     PLUGIN_PATH = __sipath__
# NameError: name '__sipath__' is not defined
#
I can’t send you the addon though, but if it helps… I got that error when opening a custom property in the scene, which triggered the PPG’s _DefineLayout() which triggered its _OnInit() which had a PPG.Refresh() in it. Not sure if it’s related.
Maybe PPG logic code doesn’t “see” __sipath__?
PPGLogic could be a problem...usually it is executed in its own instance of the scripting engine, and doesn't see things (like functions) defined in the global scope. I don't remember if I tested with modules.
Woo! I can repro. 🙂
Don’t know how to paste code here so I put it at
Load that plugin. It will load without errors, but then try to select something and create the “fooProperty” by invoking its creation from the Plugin Manager. When it pops up, you will get an __sipath__ not set error.
Tell me if it repros for you.
This workaround works for the PPG logic erroring:
PLUGIN_NAME = 'TestCmdPlugin'
try:
    pluginpath = __sipath__
except:
    pluginpath = Application.Plugins(PLUGIN_NAME).OriginPath
if pluginpath not in sys.path:
    sys.path.append(pluginpath)
import TestModule
https://xsisupport.com/2010/10/24/python-importing-modules-into-plugins/?replytocom=2806
I believe this issue has more to do with Maven than the Jetty web container. Feel free to reassign if you disagree.
1. I managed to successfully debug a Maven project using Jetty by changing the goal to: "jetty:run" and adding "jpda.listen=maven" to the Action properties.
2. When I attempted to do the same for the "Profile Project" action the project runs, a breakpoint is hit but Netbeans remains "Connecting to the target VM..."
I tried adding "-Xdebug -Xrunjdwp:transport=dt_socket,server=n,address=${jpda.address}" to MAVEN_OPTS but this didn't help either.
What do I need to get this to work? Perhaps there is something wrong with "jetty:run"'s fork mode?
jpda.listen=maven is a special mode distinct from regular application debugging; it allows you to debug the Maven process itself, useful for debugging mojos, and also for plugins like Jetty or Surefire which can run your code in the same JVM as Maven if you really want them to.
There is no equivalent for profiling. It could be added, but in the meantime it should be easy to do what you want: just run your app without doing anything special, then Profile > Attach Profiler to connect to it. (If it is running in a web container, none of your app's code should be run during startup so there is no rush to connect.)
Actually in my case, something does run right at startup. Is there a way to make the application wait for the JPDA connection to establish before launching?
(In reply to comment #2)
> Is there a way to
> make the application wait for the JPDA connection to establish before
> launching?
It does so already, but then immediately proceeds with running the app. If you want to step through startup code, the easiest thing would be to set a breakpoint in some method of interest you expect to run during startup, then run with jpda.listen=maven. The debugger will pause when that line is encountered.
The IDE does not pass suspend=n, and according to docs suspend=y is the default, but this does not seem to work. Passing it manually from the command line (with server=y thus attaching manually from the IDE) has no apparent effect either.
It is however possible to use jpda.stopclass=org.apache.maven.cli.MavenCli which will start listening from the IDE, launch Maven in the debugger, and cause the launched process to immediately suspend in MavenCli.<clinit>. I am not sure this is really any better than just setting a breakpoint somewhere "interesting".
You said "JPDA" so I responded to that, not profiling. If you want to pause a Maven build during launch so as to attach the profiler, I do not know of any straightforward technique; there is no apparent Maven option that would pause it at startup, or read from stdin. Perhaps you could run the app under JPDA and *also* attach the profiler, though the results might then be skewed. You could write a simple mojo which just paused for a few seconds or until a prompt on stdin was answered; I know archetype:generate does so, as does nbm:populate-repository, but neither is convenient. This mojo does it:
package pausemojo;
/**
* @goal pause
* @phase validate
*/
public class PauseMojo extends org.apache.maven.plugin.AbstractMojo {
@Override public void execute() throws org.apache.maven.plugin.MojoExecutionException {
System.out.print("Press ENTER to continue:");
try {
System.in.read();
} catch (java.io.IOException x) {
throw new org.apache.maven.plugin.MojoExecutionException("pausing", x);
}
}
}
built using:
<project>
<modelVersion>4.0.0</modelVersion>
<groupId>test</groupId>
<artifactId>pausemojo</artifactId>
<packaging>maven-plugin</packaging>
<version>1.0-SNAPSHOT</version>
<dependencies>
<dependency>
<groupId>org.apache.maven</groupId>
<artifactId>maven-plugin-api</artifactId>
<version>2.0</version>
</dependency>
</dependencies>
</project>
and used via:
<plugin>
<groupId>test</groupId>
<artifactId>pausemojo</artifactId>
<version>1.0-SNAPSHOT</version>
<executions>
<execution>
<goals>
<goal>pause</goal>
</goals>
</execution>
</executions>
</plugin>
though I am getting a JVM crash after this which I will report.
I'm a bit lost here.
Are you trying to debug or profile the jetty:run execution? jpda.listen property is only available for debugging. For profiling, you only can attach profiler to your application (in this case maven build). I suppose [1] is applicable here even though quite old.
If you are looking for an equivalent of of jpda.listen=maven for profiling, named eg. profile.maven=true or profile.app=maven, then it's likely a task for maintainer of the maven.profiler module. It's unclear to me how the application execution gets passed the right parameters at startup to let profiler attach. nothing relevant in maven.profiler module that I could find.
[1]
Marking as incomplete until it is clear what is the problem.
Milos,
I am asking for a way to have the application pause on startup until the profiler finishes attaching, at which point it should resume execution.
(In reply to comment #6)
> Milos,
>
> I am asking for a way to have the application pause on startup until the
> profiler finishes attaching, at which point it should resume execution.
This is exactly the way, how the local attach works.
(In reply to comment #7)
> (In reply to comment #6)
> > Milos,
> >
> > I am asking for a way to have the application pause on startup until the
> > profiler finishes attaching, at which point it should resume execution.
> This is exactly the way, how the local attach works.
Is it? for any user edited value of Profile action or only for the limited set of default behaviours? (run main class via exec:exec, tests via surefire:test, webapp deploy via netbeans's own deployment...)
At least there's a hardcoded limitation on packaging types supported that I know of.
The requirement here is to profile the maven build itself which I believe is not supported currently.
https://netbeans.org/bugzilla/show_bug.cgi?id=200771
Use the following instead of plain strings in your comments:
For example:
Generally speaking, our goal is that users should be able to access information:
See how it's done for GenericImage.
Modules section of the documentation:
And note how we use the at-sign curly braces (@{ ... @}) here to include into our group all the operators that will be defined.
For classes that have helpers, for example, some enums that define parameters of functions associated with the class: Just define your group to only include the class; then indicate for each helper that it relates to the class. This will list the helpers in the doc page of the class, as opposed to cluttering the page of your group. See for example how it's done in Image.H where:
However, for enums that are only used in functions and cannot be related to a class, we must add those to the group where the function belongs. Have a look at Text.H for an example: TextAnchor is documented and added to the textdrawing group, like the drawText() function is.
NRT_MACRO_
NRT_
For enums defined using NRT_DEFINE_ENUM_CLASS, we basically manually instruct doxygen that we are creating a new class. Adapt the following to your own enums:
Try as much as possible to create test-XXX.C files in the tests/ directory, and to focus each one on fully exercising a particular class or set of functions.
Either make a little test program that does something, like test-Component.C where people can play with parameters, etc, or use the boost testing framework to check a bunch of things, like test-GenericImage.C
The way examples are treated in doxygen is as follows: one declares a file as being an example somewhere, using the \example directive. That file is then scanned for classes, functions, etc., and for the ones that are found, an Examples: section is added to their documentation.
In NRT, we also instruct Doxygen that all files in tests/ are examples (see doc/doxygen.cfg).
Note that sometimes doxygen does not make the connection between a class and an example file. For example, in test-Module.C we do show how nrt::Module works, but only via classes that derive from nrt::Module ... In such cases, add an explicit link to the example file in the doc of your class. For instance:
There is a bit of trial and error that is required here. After you add a new example file, run a make doc, and check out the classes that you think should link to your example. If you don't find the relevant Examples: sections in there, just add the explicit link. The easiest is to look at your example file as marked up by doxygen and as it appears in the Examples tab at the top of your doxygen pages: everything that is clickable in there will be correctly linked (which you can check by clicking on those classes, functions, etc. and verifying they have an Examples: section).
It may help doxygen if you fully qualify the classes or functions you use in your examples. For example, in test-Async.C we clearly specify nrt::async() (as opposed to using the namespace nrt and then calling async() without the namespace qualifier). This properly links test-Async.C to the documentation of nrt::async().
http://nrtkit.net/documentation/pm_DocumentationRules.html
PROBLEM LINK:
Setter & Editorialist : Mayuresh Patle
Tester : Uttkarsh Bawankar
DIFFICULTY:
MEDIUM
PREREQUISITES:
Sieve of Eratosthenes, Segmented Sieve, Factors of a Number
PROBLEM:
Given 2 integers X and Y, you have to derive a password using following procedure:
- X and Y represent the range [X,Y].
- Let C_{min} be the minimum count of factors of any number in this range.
- S_{C_{min}} is the sum of all those numbers in the range [X,Y] which have C_{min} number of factors.
- Let C_{max} be the maximum count of factors of any number in this range.
- S_{C_{max}} is the sum of all those numbers in the range [X,Y] which have C_{max} number of factors.
- Let C_{sum} be the sum of counts of factors of those numbers in range [X,Y] for which the count of factors is neither C_{max} nor C_{min}.
- S' is the sum of all the numbers whose count of factors was added in C_{sum}, i.e. the sum of all the numbers in the range [X,Y] for which the count of factors is neither C_{max} nor C_{min}.
- The English Alphabets are indexed, starting from 'a' at index 0.
- F= \text{alphabet at index } (C_{min} \mod 26).
- M= \text{alphabet at index } (C_{sum} \mod 26)
- L= \text{alphabet at index } (C_{max} \mod 26).
- An alphabet must be in uppercase if its index is Even, and in lowercase if its index is Odd.
- The password contains exactly 5 fields in this format (without any spaces between them): \text{Alphabet Integer Alphabet Integer Alphabet}
- These 5 fields of the password are F,\ S_{C_{min}},\ M,\ S_{C_{max}} and L respectively.
- If S' is odd, then the actual password is the reverse of the derived password.
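The final assembly steps above (field order, the case rule, and the conditional reversal) can be expressed directly in a few lines of Python. This is an illustrative helper of my own, not part of the editorial's solutions:

```python
def assemble_password(c_min, c_max, c_sum, s_cmin, s_cmax, s_prime):
    """Build the password from the derived parameters."""
    def letter(idx):
        # Index into the alphabet: uppercase for even indices,
        # lowercase for odd indices.
        idx %= 26
        base = 'a' if idx % 2 == 1 else 'A'
        return chr(ord(base) + idx)

    # Fields: F, S_Cmin, M, S_Cmax, L (no spaces between them).
    pwd = letter(c_min) + str(s_cmin) + letter(c_sum) + str(s_cmax) + letter(c_max)
    # If S' is odd, the actual password is the reverse.
    return pwd[::-1] if s_prime % 2 == 1 else pwd
```

For example, assemble_password(0, 2, 1, 5, 7, 0) yields "A5b7C".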
Notable Constraints:
- 1 \leq X \leq Y \leq 10^{12}
- 1 \leq Y - X + 1 \leq 5*10^5
QUICK EXPLANATION
Find and store all primes less than 10^6. For each testcase, calculate the required parameters by applying Segmented Sieve Algorithm on range [X,Y].
EXPLANATION:
The explanation is divided into some parts, try making your own approach first after reading each part. If you are stuck at some point, then you can refer the next part.
Observation 1
- We know that any number N can be represented in the form p_1^{c_1}*p_2^{c_2}* \dots * p_n^{c_n} where p_1,p_2,\dots,p_n are prime factors of N and c_i is the maximum power of p_i (for i \in [1,n]) which divides N.
- Now, any factor of N can have any of these prime factors p_i multiplied 0 to c_i times in its representation in the above format. This will be applicable for all i \in [1,n].
- Hence, total number of factors of the number N will be (c_1+1)*(c_2+1)*\dots*(c_n+1)
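This divisor-count formula is easy to verify with a short Python helper (illustration only, using trial division rather than the editorial's sieve-based approach):

```python
def count_factors(n):
    # Factorize n by trial division, then apply
    # (c1+1)*(c2+1)*...*(cn+1) to count the divisors.
    count = 1
    d = 2
    while d * d <= n:
        power = 0
        while n % d == 0:
            n //= d
            power += 1
        count *= power + 1
        d += 1
    if n > 1:           # one remaining prime factor with power 1
        count *= 2
    return count

# e.g. 12 = 2^2 * 3, so count_factors(12) == (2+1)*(1+1) == 6
```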
Observation 2
For the given range of numbers, there can be at most 1 prime factor of any number which will be greater than 10^6. And, if it exists then its maximum power which divides the number will be exactly 1.
Proof for Observation 2
Because the product of any two (or more) numbers greater than 10^6 will be greater than 10^{12}.
Observation 3
We can derive the above form for any number in the given range with the help of the primes in [1,10^6]. And we can find and store these primes in O(N \log(\log N)) time (with N=10^6) using the Sieve of Eratosthenes.
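For reference, a classic Sieve of Eratosthenes can be written in a few lines of Python (a standalone sketch, equivalent in spirit to the one in the solutions below):

```python
def sieve(limit):
    # Sieve of Eratosthenes: return all primes < limit.
    is_prime = [True] * limit
    is_prime[0] = is_prime[1] = False
    i = 2
    while i * i < limit:
        if is_prime[i]:
            # Mark every multiple of i starting from i*i as composite.
            for j in range(i * i, limit, i):
                is_prime[j] = False
        i += 1
    return [p for p in range(limit) if is_prime[p]]
```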
Observation 4
Length of the range [X,Y] is small enough to store the prime factors for each number in this range. In other words, you are allowed to make Y-X+1 arrays, such that the i^{th} array, say pf[i], contains all the prime factors of i+X which are less than 10^6. You can do this by applying a segmented sieve on this range.
Segmented Sieve
Let’s say we have stored all the prime numbers less than 10^6 in a container named primes.
First of all, initialise Y-X+1 empty containers, say pf[0 \dots (Y-X)].
For each p \in primes do:
- i=p* \lceil \frac{X}{p} \rceil //(this will initialise i as the first multiple of p which is greater than or equal to X).
- while i\leq Y do:
- Insert p in (i-X)^{th} Container, i.e. insert p in pf[i-X].
- i = i + p //(If i is a multiple of p, then i+p will be a multiple of p too).
Now, for each i \in [0,Y-X], the i^{th} container, pf[i], will contain the prime factors of i+X which are less than 10^6.
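To make the pseudocode above concrete, here is a small self-contained sketch of the segmented step (illustrative only; variable names and the base-prime limit differ from the solutions below):

```python
def small_primes(limit):
    # Base primes up to limit, via a simple sieve.
    is_p = [True] * limit
    is_p[0] = is_p[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if is_p[i]:
            for j in range(i * i, limit, i):
                is_p[j] = False
    return [i for i in range(limit) if is_p[i]]

def segmented_prime_factors(x, y, base_limit=1000):
    # pf[i] collects the base primes (< base_limit) dividing x + i.
    pf = [[] for _ in range(y - x + 1)]
    for p in small_primes(base_limit):
        # First multiple of p that is >= x.
        start = p * ((x + p - 1) // p)
        for m in range(start, y + 1, p):
            pf[m - x].append(p)
    return pf
```

For instance, segmented_prime_factors(10, 20) gives pf[0] == [2, 5] for the number 10.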
Observation 5
For each prime factor p \in pf[i], you can keep on dividing and updating the value of tmp, where tmp=i+X initially, and count how many times p divides tmp.
Hence, you can now calculate the total number of factors of i+X for each i \in [0,Y-X]. If tmp > 1 remains after dividing out every p \in pf[i], then tmp is a single prime factor greater than 10^6 (by Observation 2), so multiply the factor count by 2.
Observation 6
What else are you expecting now? Remaining task is all about implementation!
You can refer the C++ solution for the implementation.
Time Complexity : O((N+T*L)*log(log(N))), where N=10^6 and L=Y-X+1.
SOLUTIONS:
C++ Solution
#include<bits/stdc++.h>
using namespace std;
typedef long long ll;
#define pb push_back
typedef vector <ll> vll;
#define chk(c) cerr<<#c<<" : "<<c<<"\n"

const ll N=1e6+1, oo=LLONG_MAX;
vll primes;         //This will store all prime numbers till 10^6
vector <vll> pf;    //pf[i] will store all prime factors<10^6 for i+X

//sieve of eratosthenes, stores all prime numbers till 10^6 in primes
void generate_primes()
{
    ll r=sqrt(N)+1,i,j;
    vll p(N,1);
    p[0]=p[1]=0;
    for(i=2;i<r;++i) if(p[i]) for(j=i*i;j<N;j+=i) p[j]=0;
    for(i=0;i<N;++i) if(p[i]) primes.pb(i);
}

//stores the prime factors<10^6 for each number in range [l,r]
void prime_fctrs(ll l,ll r)
{
    ll i;
    pf.assign(r-l+1,vll());     //reset pf
    for(ll prime:primes)
    {
        for(i=((l+prime-1)/prime)*prime;i<=r;i+=prime)
        {
            pf[i-l].pb(prime);  //add current prime to pf[i]
        }
    }
}

//returns password based on the parameter values
string get_password(ll Cmin, ll Cmax, ll Csum, ll SCmin, ll SCmax, ll S)
{
    char a[]="Aa";
    Cmin %= 26; Csum %= 26; Cmax %= 26;
    char F = a[Cmin & 1] + Cmin;
    char M = a[Csum & 1] + Csum;
    char L = a[Cmax & 1] + Cmax;
    string pwd = F + to_string(SCmin) + M + to_string(SCmax) + L;
    if(S & 1) reverse(pwd.begin(),pwd.end());
    return pwd;
}

//returns password for given range
string password(ll X,ll Y)
{
    ll Cmin=oo, SCmin=0;    //initially Cmin = infinity
    ll Cmax=0, SCmax=0;     //initially Cmax = 0
    ll Csum=0, S=0;         //Csum: sum of factor counts; S: sum of the numbers
    ll min_freq = 0;        //stores the count of occurrences of Cmin
    ll max_freq = 0;        //stores the count of occurrences of Cmax
    ll i, curr, pp, fc, tmp;
    for(i=0;i+X<=Y;++i)
    {
        tmp=curr=i+X;
        fc=1;               //initially factor count = 1
        for(ll p:pf[i])
        {
            pp=0;           //power of prime which divides i+X
            while(tmp%p==0) ++pp,tmp/=p;    //increment power
            fc*=pp+1;       //update factor count
        }
        if(tmp>1)           //prime factor > 10^6
        {
            fc*=2ll;
        }
        if(fc==Cmin)        //update parameters related to Cmin
        {
            SCmin+=curr; ++min_freq;
        }
        if(fc==Cmax)        //update parameters related to Cmax
        {
            SCmax+=curr; ++max_freq;
        }
        Csum+=fc;           //add current factor count to Csum
        S+=curr;            //add current number to S
        if(fc<Cmin)         //update Cmin
        {
            Cmin=fc; SCmin=curr; min_freq=1;
        }
        if(fc>Cmax)         //update Cmax
        {
            Cmax=fc; SCmax=curr; max_freq=1;
        }
    }
    //update Csum and S'
    if(Cmin==Cmax) { Csum=0; S=0; }
    else
    {
        Csum -= min_freq*Cmin + max_freq*Cmax;
        S -= SCmin + SCmax;
    }
    //Now S contains the value of S' as described in the problem
    //Uncomment the following part to check the calculated values
    /*
    chk(Cmin); chk(Csum); chk(Cmax);
    chk(SCmin); chk(SCmax); chk(S);
    chk(min_freq); chk(max_freq);
    */
    return get_password(Cmin,Cmax,Csum,SCmin,SCmax,S);
}

int main()
{
    //fast io
    ios_base::sync_with_stdio(0); cin.tie(0); cout.tie(0);
    ll T,X,Y;
    generate_primes();  //store primes till 10^6
    cin>>T;
    while(T--)
    {
        cin>>X>>Y;
        prime_fctrs(X,Y);           //store prime factors for range [X,Y]
        cout<<password(X,Y)<<"\n";  //compute & print the password
    }
    return 0;
}
Python 3 Solution
n = 10**6 + 1
p = [1] * n
r = int(n**.5) + 1
primes = []
for i in range(2, r):
    if p[i]:
        primes.append(i)
        for j in range(i*i, n, i):
            p[j] = 0
for i in range(r, n):
    if p[i]:
        primes.append(i)

pf = []

def init(l, r):
    global pf
    t = r - l + 1
    pf = [[] for _ in range(t)]
    for p in primes:
        for i in range(p*((l+p-1)//p), r+1, p):
            pf[i-l].append(p)

def fc(n, l):
    ans = 1
    for p in pf[n-l]:
        c = 0
        while n % p == 0:
            n, c = n//p, c+1
        ans *= c + 1
    if n > 1:
        ans <<= 1
    return ans

for _ in range(int(input())):
    x, y = map(int, input().split())
    init(x, y)
    Cmin, Cmax, Csum, SCmax, SCmin, S = 10**20, 0, 0, 0, 0, 0
    fmin, fmax = 0, 0
    for i in range(x, y+1):
        c = fc(i, x)
        if c == Cmin:
            SCmin, fmin = SCmin+i, fmin+1
        if c == Cmax:
            SCmax, fmax = SCmax+i, fmax+1
        S, Csum = S+i, Csum+c
        if c < Cmin:
            Cmin, SCmin, fmin = c, i, 1
        if c > Cmax:
            Cmax, SCmax, fmax = c, i, 1
    if Cmin == Cmax:
        Csum, S = 0, 0
    else:
        Csum -= fmin*Cmin + fmax*Cmax
        S -= SCmin + SCmax
    Cmin, Cmax, Csum = Cmin % 26, Cmax % 26, Csum % 26
    F, M, L = [chr(x + ord('a' if x & 1 else 'A')) for x in [Cmin, Csum, Cmax]]
    pwd = F + str(SCmin) + M + str(SCmax) + L
    if S & 1:
        pwd = pwd[::-1]
    print(pwd)
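Both solutions count the divisors of each number in [X, Y] from its prime factorization, using d(n) = (e1+1)(e2+1)... for n = p1^e1 * p2^e2 * ... . That core step can be checked in isolation with a small Python sketch (the function names here are mine, not from the editorial):

```python
def divisor_count(n):
    # Count divisors via prime factorization: d(n) = prod(e_i + 1)
    count = 1
    p = 2
    while p * p <= n:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        count *= e + 1
        p += 1
    if n > 1:          # one prime factor > sqrt(original n) may remain
        count *= 2
    return count

def brute_force(n):
    # Direct check against every candidate divisor
    return sum(1 for d in range(1, n + 1) if n % d == 0)

assert all(divisor_count(n) == brute_force(n) for n in range(1, 200))
print(divisor_count(12))  # 12 = 2^2 * 3 -> (2+1)*(1+1) = 6
```

The `if n > 1` branch mirrors the `if(tmp>1) fc*=2ll;` case in the editorial code: at most one prime factor larger than the sieve limit can remain, and it contributes a factor of 2.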
|
https://discuss.codechef.com/t/cmx1p05-an-s-ranked-mission-code-mates-x1-editorial/76549
|
CC-MAIN-2020-40
|
refinedweb
| 1,702
| 57.2
|
Switch Case Statement in C# Programming
Submitted by Saurav Shrestha
A switch case is a conditional statement in which a switch compares an expression with a number of provided values, called cases, and executes the block of statements associated with the matching case.
Syntax for switch case Statement
switch (expression)
{
    case value1:
        statements;
        break;
    case value2:
        statements;
        break;
    .
    .
    default:
        statements;
        break;
}
Flowchart for switch case Statement
Key points for switch case statement
- Each case statement is followed by a colon and statements for that case follows after that.
- The expression in a switch statement must be of a data type supported by switch, such as int, char, string or enum. The data type cannot be an array or float.
- Any number of cases can be present in a switch. Each case contains a value and the statements to be executed. The value in a case must be of the same data type as the expression in the switch statement.
- Each non-empty case ends with a break statement (or another jump statement), which exits the switch. Unlike C and C++, C# does not allow implicit fall-through: omitting the break in a non-empty case section is a compile-time error.
- Several cases can execute the same statements by stacking their case labels together.
Example:
case 1:
case 2:
case 3:
    Console.WriteLine("First three cases.");
    break;
- A switch statement can also include a default case, usually placed at the end of the switch. It handles values that match none of the cases.
- Arranging cases by value or name is considered good practice. It also helps readability to place the most frequently occurring case first.
Example 1: C# example for switch case Statement
C# Program to perform an operation of user choice for addition, subtraction, multiplication and division
using System;

namespace conditional
{
    class Program
    {
        static void Main()
        {
            int a, b, choice;
            Console.WriteLine("Enter first number:");
            a = Convert.ToInt32(Console.ReadLine());
            Console.WriteLine("Enter second number:");
            b = Convert.ToInt32(Console.ReadLine());
            Console.WriteLine("Enter the number operation you want to perform from the menu.");
            Console.WriteLine("1) Addition");
            Console.WriteLine("2) Subtraction");
            Console.WriteLine("3) Multiply");
            Console.WriteLine("4) Divide");
            Console.Write("Choice: ");
            choice = Convert.ToInt32(Console.ReadLine());
            switch (choice)
            {
                case 1:
                    Console.WriteLine(a + b);
                    break;
                case 2:
                    Console.WriteLine(a - b);
                    break;
                case 3:
                    Console.WriteLine(a * b);
                    break;
                case 4:
                    Console.WriteLine(a / b);
                    break;
                default:
                    Console.WriteLine("Invalid choice!");
                    break;
            }
            Console.ReadLine();
        }
    }
}
In this program, the user is asked to enter two numbers. Then a menu is displayed from which the user chooses an operation by its number. The entered number is passed to the switch statement and, according to the user's choice, the operation is performed.
Output:
Enter first number:
2
Enter second number:
3
Enter the number operation you want to perform from the menu.
1) Addition
2) Subtraction
3) Multiply
4) Divide
Choice: 1
5
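For comparison, the same menu dispatch can be sketched outside C#; this is a hedged Python analogue of the program above (all names are mine), using a dictionary in place of the switch:

```python
def calculate(a, b, choice):
    # Map menu choices to operations, like the case labels above
    operations = {
        1: lambda x, y: x + y,   # Addition
        2: lambda x, y: x - y,   # Subtraction
        3: lambda x, y: x * y,   # Multiply
        4: lambda x, y: x // y,  # Divide (integer, like the C# int division)
    }
    op = operations.get(choice)
    if op is None:               # plays the role of the default case
        return "Invalid choice!"
    return op(a, b)

print(calculate(2, 3, 1))  # 5, matching the sample output above
```

The `operations.get(choice)` lookup plays the role of the case labels, and the `None` check stands in for the default case.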
|
https://www.programtopia.net/csharp/docs/switch-case-statement-c-programming
|
CC-MAIN-2019-30
|
refinedweb
| 475
| 59.9
|
victor escarcega, 7,573 Points
How can I count the items in each list?
The second part is returning 2 lists; I can't seem to use len because it's not counting each individual item in each list. How can I count the total items, since it returns 2 lists, one with 2 classes each?
# The dictionary will look something like:
# {'Andrew Chalkley': ['jQuery Basics', 'Node.js Basics'],
#  'Kenneth Love': ['Python Basics', 'Python Collections']}
#
# Each key will be a Teacher and the value will be a list of courses.
#
# Your code goes below here.

def num_teachers(dictionary):
    return int(len(dictionary.keys()))

def num_courses(dictionary):
    return int(len(dictionary.values()))
2 Answers
Daniel Turato, Java Web Development Techdegree Graduate, 30,118 Points
When you call dictionary.values() on the given dictionary in the challenge, it returns a list of lists, and the length of that list is 2 because there are 2 lists of courses, which is not the count you want. So you have to reach into both sub-lists, something like this:
def num_courses(dictionary):
    num = 0
    for courses in dictionary.values():
        num += len(courses)
    return num
Julian Ramirez, 1,371 Points
A bit simple, but this did it for me:
def num_courses(a):
    b = sum(a.values(), [])
    print(len(b))
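Both answers can be sanity-checked against the sample dictionary from the challenge; this hedged sketch also shows a third equivalent option, a generator expression:

```python
teacher_dict = {
    'Andrew Chalkley': ['jQuery Basics', 'Node.js Basics'],
    'Kenneth Love': ['Python Basics', 'Python Collections'],
}

# len(values()) counts the lists themselves, not their items
assert len(teacher_dict.values()) == 2

# Summing the length of each inner list counts every course
def num_courses(dictionary):
    return sum(len(courses) for courses in dictionary.values())

print(num_courses(teacher_dict))  # 4
```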
|
https://teamtreehouse.com/community/how-can-i-count-the-items-in-each-list
|
CC-MAIN-2020-40
|
refinedweb
| 211
| 63.9
|
[Neel Krishnaswami]
> There are actually a whole lot of optimizations that could be done,
> but aren't, in the current Python interpreter.
>
> For example, in the following code:
>
> import operator
>
> lst = []
> for i in range(100):
>     lst.append(i**2 + i**2)
>
> result = reduce(operator.add, lst)

This is why the Types-SIG is so keen to impose restrictions on what
modules can do to each others' namespaces (in the presence of optional
static typing).

> ...
> You don't have to win every game to make it to the championships. :)

you-don't-even-need-to-win-one-but-it-sure-can't-hurt<wink>-ly y'rs - tim
|
https://mail.python.org/pipermail/python-list/2000-March/024590.html
|
CC-MAIN-2016-50
|
refinedweb
| 108
| 57.57
|
The Monadic Way/Part I
From HaskellWiki
Revision as of 02:54, 7 November 2006
Note: this is the first part of The Monadic Way, an introduction to monads and monadic computation.
While "Yet Another Haskell Tutorial" gave me a good understanding of the type system when it comes to monads I find it almost unreadable.
But I had also Wadler all.
Note: The source of this page can be used as a Literate Haskell file and can be run with ghci or hugs: so cut paste change and run (in emacs for instance) while reading it...
2 A Simple Evaluator
Let's start with something simple: suppose we want to implement a new programming language. We just finished with Abelson and Sussman's Structure and Interpretation of Computer Programs and we want to test what we have learned.
Our programming language will be very simple: it will just compute the sum of two terms.
So we have just one primitive operation (Add) that takes two constants and calculates their sum.
Moreover we have just one kind of data type: Con a, which is an Int.
For instance, something like:
(Add (Con 5) (Con 6))
should yield 11.
The evaluator evaluates the first Term and sums the result with the result of the evaluation of the second Term.
As you may understand, our evaluator uses some of the powerful features of Haskell's type system. Instead of writing a parser that takes a string (the user input) and transforms that string into an expression to be evaluated, we use the two type constructors defined for our data type Term (Con and Add) to build the expression - such as (Add (Con 5) (Con 6)) - and to match the expression's elements in our "eval" function.
bindM (evalM_1 u) (\b -> ((a + b), formatLine (Add t u) (a + b)))
bindM takes the result of the evaluation "evalM_1 u", a value of type MOut Int, and a function. It will extract the Int from that type and use it to bind "b".
So in bindM (evalM_1 u) (\b -> ...), "b" will be bound to the value returned by evalM_1 u, and this bound variable will be available in what comes after "->".
We can try to explain "bindM" in a different way by using more descriptive names.
As we have seen, "bindM" extracts the Int part from our type. The Int part will be used for further computations and the Output part will be concatenated. As a result we will have a new pair with a new Int and an accumulated Output.
The new version of "bindM":
> getIntFromType typeMOut doSomething = (newInt, oldOutput ++ newOutput)
>     where (oldInt, oldOutput) = typeMOut
>           (newInt, newOutput) = (doSomething oldInt)
As you can see it does the very same things that "bindM" does: it takes something of type MOut and a function to perform some computation with the Int part.
In the "where" clause, the old Int and the old output will be extracted from our type MOut (first line of the "where" clause).
A new Int and a new output will be extracted from evaluating (doSomething oldInt) in the second line.
Our function will return the new Int and the concatenated outputs.
We do not need to define our doSomething function, because it will be an anonymous function:
> evaluator (Con a) = (a, "output-")
> evaluator (Add t u) =
>     getIntFromType (evaluator t)
>     (\firstInt -> getIntFromType (evaluator u)
>     (\secondInt -> ((firstInt + secondInt), ("-newoutput"))))
As you can see we are feeding our "getIntFromType" with the evaluation of an expression ("evaluator t" and "evaluator u"). The second argument of "getIntFromType" is an anonymous function that takes the "oldInt" and does something with it.
So we have a series of nested anonymous functions. Their arguments ("\firstInt" and "\secondInt") will be used to produce the computation we need ("firstInt + secondInt"). Moreover "getIntFromType" will take care of concatenating the outputs.
This is the result:
*TheMonadicWay> evaluator (Add (Con 5) (Con 6))
(11,"output-output--newoutput")
*TheMonadicWay>
Going back to our "bindM", we can now define "mkM":

> mkM :: a -> MOut a
> mkM a = (a, "")

As you can see, this function will just push an Int and an empty string ("") inside our type MOut.
You can see the "outPut" function as one that pushes a string, paired with a void Int, inside our type MOut. First we insert the string with "outPut", and then we insert the Int paired with an empty string: "bindM" will not use the void Int (the anonymous function will not use its argument: "\_"), but will take care of concatenating the non-empty string inserted by "outPut" with the empty one inserted by "mkM".
Let's rewrite a function for this specific case, where we concatenate computations without the need of binding variables for later use. Let's call it `combineM`:
> combineM :: MOut a -> MOut b -> MOut b
> combineM m f = m `bindM` \_ -> f
This is just something that will allow us to write the evaluator in a more concise way.

Monadic evaluator with output in do-notation
In order to be able to use the "do-notation" we need to define a new type and make it an instance of the Monad class. To make a new type an instance of the Monad class we will have to define the two methods of this class: (>>=) and "return".
This is not going to be difficult, because we already created these two methods: "bindM" and "mkM". Now we will have to rewrite them in order to reflect the fact that we are not going to use a type, for our evaluator, that is a synonymous of other types, as we did before. Indeed our MOut was defined with the "type" keyword. Now we will have to define a "real" new type with either "newtype" or "data". Since we are not going to need multiple constructors, we will use "newtype".
> newtype Eval_IO a = Eval_IO (a, O)
>     deriving (Show)
> type O = String
This is our new type: it will have a single type constructor, whose name is the same of the type name ("Eval_IO"). The type constructor takes a parameter ("a"), a variable type, and will build a type formed by a type "a" (an Int in our case) and a String (O is indeed synonymous of String).
We now need to define our "bind" function to reflect the fact that we are now using a "real" type, and, to unpack its content, we need to do pattern-matching we the type constructor "Eval_IO". Moreover, since we must return an Eval_IO type, we will use the type constructor also for building the new type with the new int and the concatenated output.
For the rest our "bind" function will be identical to the one we defined before.
We are going to use very descriptive names:
> getInt monad doSomething = Eval_IO (newInt, oldOutput ++ newOutput)
>     where Eval_IO (oldInt, oldOutput) = monad
>           Eval_IO (newInt, newOutput) = (doSomething oldInt)
As you can see, we are using Eval_IO to build the result of the computation to be returned by getInt: "Eval_IO (newInt,oldOutput ++ newOutput)". And we are using it to match the internal components of our type in the "where" clause.
We also need to create a function that, like mkO, will take an Int and, using the type constructor "Eval_IO", will create an object of type Eval_IO with that Int and an empty string:
> createEval_IO :: a -> Eval_IO a
> createEval_IO int = Eval_IO (int, "")
And, finally, we need a function that will insert, in our type, a string and a void ():
> print_IO :: O -> Eval_IO ()
> print_IO string = Eval_IO ((), string)
With these functions we could write our monadic evaluator without the "do-notation" like this:
> evalM_4 :: Term -> Eval_IO Int
> evalM_4 (Con a) = createEval_IO a
> evalM_4 (Add t u) = evalM_4 t `getInt` \a ->
>                     evalM_4 u `getInt` \b ->
>                     print_IO (formatLine (Add t u) (a + b)) `getInt` \_ ->
>                     createEval_IO (a + b)
It is very similar to the previous evaluator, as you can see. The only differences are related to the fact that we are now using a "real" type and not a type synonymous: this requires the use of the type constructor to match the type and its internal part (as we do in the "where" clause of our "bind" function: "getInt") or to build the type (as we do in the "bind" function to return the new Int with the concatenated output).
Running this evaluator will produce:
*TheMonadicWay> evalM_4 (Add (Con 6) (Con 12))
Eval_IO (18,"eval (Add (Con 6) (Con 12)) <= 18 - ")
*TheMonadicWay>
Now we have everything we need to declare our type, Eval_IO, as an instance of the Monad class:
> instance Monad Eval_IO where
>     return a = createEval_IO a
>     (>>=) m f = getInt m f
As you see we are just using our defined functions as methods for our instance of the Monad class.
This is all we need to do. Notice that we do not have to define the "combineM" function, the one for chaining computations without binding variables for later use within the series of nested anonymous functions (the "doSomething" part) that will form the "do" block.
This function comes for free just by defining our type as an instance of the Monad class. Indeed, if you look at the definition of the Monad class in the Prelude, you see that "combineM", or (>>), is derived from the definition of (>>=):

 m >> k = m >>= \_ -> k
We can now write our evaluator using the do-notation:
> eval_IO :: Term -> Eval_IO Int
> eval_IO (Con a) = do print_IO (formatLine (Con a) a)
>                      return a
> eval_IO (Add t u) = do a <- eval_IO t
>                        b <- eval_IO u
>                        print_IO (formatLine (Add t u) (a + b))
>                        return (a + b)
As you can see the anonymous functions are gone. Instead we use this:
a <- eval_IO t
This looks like an assignment, which would not be possible in Haskell. In fact it is just the way our anonymous function's argument is bound within a do block.
Even if it does not look like a series of nested anonymous functions, that is what a do block actually is.
Our monad is defined by three elements:
- a type, with its type constructor(s);
- a bind method: it will bind an unwritten anonymous function's argument to the value of the Int part of our type, or more generally, of the variable type "a". It will also create a series of anonymous functions: a line for each function;
- a "return" function, to insert, into out type, a value of type Int, or, more generally, a value of variable type "a".
Additionally, we need a function to insert a string in our type, string that the derived "bind" (>>) will concatenate ignoring the void (), our "print_IO".
Within a do block we can thus perform only three kinds of operations:
- a computation that produces a new Int, packed inside our monad's type, to be extracted and bound to a variable (an anonymous function's argument really):
- this operation requires a binding (">>= \varName ->"), translated into "varName <- computation"
- example: a <- eval_IO t
- a computation that inserts a string into our monad, a string to be concatenated, without the need of binding a variable (an anonymous function's argument really):
- this operation does not require a binding: it will be ">>= \_ ->", i.e. ">>", translated into a simple new line
- example: print_IO (formatLine (Add t u) (a + b))
- a computation that inserts an Int into our monad without the need of binding a variable (an anonymous function's argument really):
- this operation is carried out by the return method (usually at the end of a do block; it is useless in the middle)
- example: return (a + b)
To sum up, within a block, "do" will take care of creating and nesting, for us, all the needed anonymous functions so that bound variables will be available for later computations.
In this way we can emulate a direction of our computation, a "before" and an "after", even within a pure functional language. And we can use this possibility to create and accumulate side effects, like output.
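For readers more comfortable outside Haskell, the same (value, output) plumbing can be sketched in Python. This is a hedged analogue of mkM/bindM/outPut (all names and details here are mine, not part of the literate source):

```python
# A "monadic" value is a (result, output) pair, like MOut Int
def unit(x):
    # mkM / return: wrap a value with an empty output
    return (x, "")

def bind(m, f):
    # bindM / (>>=): run f on the value, concatenating the outputs
    value, output = m
    new_value, new_output = f(value)
    return (new_value, output + new_output)

def out(s):
    # outPut / print_IO: a void value carrying only output
    return ((), s)

# do a <- unit 5; b <- unit 6; out "5+6=11 "; return (a + b)
result = bind(unit(5), lambda a:
         bind(unit(6), lambda b:
         bind(out("5+6=11 "), lambda _:
         unit(a + b))))
print(result)  # (11, '5+6=11 ')
```

The nested lambdas are exactly what the do block hides: each `bind` passes the value on and concatenates the output.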
Let's see the evaluator with output in action:
*TheMonadicWay>
7 Type and Newtype
We can now review some of our basic knowledge of Haskell's type system.
What's changed? First the type definition. We have now:
newtype Eval_IO a = Eval_IO (a, O) deriving (Show)
instead of
type MO a = (a, Out)
This means we must now apply the type constructor Eval_IO to the pair that is going to form our monad.
"return" takes an Int and inserts it into our monad. It will also insert an empty String "" that (>>=) or (>>) will then concatenate in the sequence of computations they glue together.
The same for (>>=). It will now return something constructed by Eval_IO:
- "newInt", the result of the application of "doSomething" to "oldInt" (better, the binding of "oldInt" in "doSomething");
- the concatenation of "oldOutput" (matched by "Eval_IO (oldInt, oldOutput)" against the evaluation of "monad" - "eval_IO t") and "newOutput" (matched by "Eval_IO (newInt, newOutput)" against the evaluation of "(doSomething oldInt)" - "eval_IO u").
That is to say: in the "where" clause, we are matching the elements paired in a type Eval_IO: this is indeed the type of "monad" (corresponding to "eval_IO t" in the body of the evaluator) and of "(doSomething monad)" (where "doSomething" corresponds to the evaluation of "eval_IO u" within an anonymous function with \oldInt as its argument, an argument bound to the result of the previous evaluation of "monad", that is to say "eval_IO t").
And so, "Eval_IO (oldInt, oldOutput) = monad" means: match "oldInt" and "oldOutput", paired in a type Eval_IO, produced by the evaluation of "monad" (that is to say: "eval_IO t"). The same for "Eval_IO (newInt, newOutput)": match "newInt" and "newOutput" produced by the evaluation of "(doSomething oldInt)". We then glue them together by using our type constructor, feeding it with a pair composed of an Int and a String. This is what we do with the "return" method of our monad and with the "print_IO" function, where:
- return inserts an Int into the monad;
- print_IO inserts a String into the monad.
Why did the first version of our evaluator, built with a type synonym, not require all this additional work (apart from the need to specifically define (>>))?
We will now rewrite our basic evaluator with the counter in do-notation.
As we have seen, in order to do so we need:
- a new type that we must declare as an instance of the Monad class;
- a function for binding method (>>=) and a function for the "return" method, for the instance declaration;
Now our type will hold a function that takes the initial state, 0, and produces the result, as we did before.
In order to simplify the process of unpacking the monad each time to get the function, we will use a record label (a field selector):
> newtype MS a = MS { unpackMSandRun :: (State -> (a, State)) }
This is it: MS will be our type constructor for matching and for building our monad. "unpackMSandRun" will be the method to get the function out of the monad, so we can feed it with the initial state of the counter, 0, to get our result.
Then we need the "return" function that, as we have seen does nothing but inserting into our monad an Integer:
> mkMS :: a -> MS a
> mkMS int = MS (\x -> (int, x))
"mkMS" will just take an Integer "a" and apply the MS type constructor to our anonymous function that takes an initial state and produces the final state "x" and the integer "a".
In other words, we are just creating our monad with inside an Integer.
Our binding function will be a bit more complicated then before. We must create a type that holds an anonymous function with elements to be extracted from our type and passed to the anonymous function that comes next:
> bindMS :: MS a -> (a -> MS b) -> MS b
> bindMS monad doNext = MS $ \initialState ->
>     let (oldInt, oldState) = unpackMSandRun monad initialState in
>     let (newInt, newState) = unpackMSandRun (doNext oldInt) oldState in
>     (newInt, newState)
So, we are creating an anonymous function that will take an initial state, 0, and return a "newInt" and "newState".
To do that we need to unpack and run our "monad" against the initialState in order to get the "oldInt" and the "oldState".
The "oldInt" will be passed to the "doNext" function (the next anonymous function in our do block) together with the "oldState" to get the "newInt" and the "newState".
We can now declare our type "MS" as an instance of the Monad class:
> instance Monad MS where
>     return a = mkMS a
>     (>>=) m f = bindMS m f
We now need a function to increase the counter in our monad from within a do block:
> incState :: MS ()
> incState = MS (\s -> ((), s + 1))
This is easier than it looks. We use the type constructor MS to create a function that takes a state and returns a void integer () paired with the state increased by one. We do not need any binding, since we are just modifying the state, an integer; in our do block we will put this function on its own line, so that the non-binding ">>" operator will be applied.
And now the evaluator:
> evalMS :: Term -> MS Int
> evalMS (Con a) = do incState
>                     mkMS a
> evalMS (Add t u) = do a <- evalMS t
>                       b <- evalMS u
>                       incState
>                       return (a + b)
Very easy: we just added the "incState" function before returning the sum of the evaluation of the Terms of our expression.
Let's try it:
*TheMonadicWay> unpackMSandRun (evalMS (Add (Con 6) (Add (Con 16) (Add (Con 20) (Con 12))))) 0
(54,7)
*TheMonadicWay>
As you can see, adding a counter makes our binding operations a bit more complicated by the fact that we have an anonymous function within our monad. This means that we must recreate that anonymous function in each step of our do block. This makes "incState" and, as we are going to see in the next paragraph, the function to produce output a bit more complicated. Anyway we can handle this complexity quite well, for now.
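The state-threading trick, where each step is a function from a state to a (value, new state) pair, can likewise be sketched in Python. This is a hedged analogue of mkMS/bindMS/incState (all names here are mine):

```python
def unit(x):
    # mkMS: leave the counter untouched, return x
    return lambda state: (x, state)

def bind(m, f):
    # bindMS: run m, feed its value to f, thread the state through
    def stepped(state):
        value, new_state = m(state)
        return f(value)(new_state)
    return stepped

def inc_state():
    # incState: bump the counter, return a dummy value
    return lambda state: (None, state + 1)

# evaluate Con 5 `Add` Con 6, counting every node as the Haskell version does
prog = bind(inc_state(), lambda _a:        # count Con 5
       bind(unit(5), lambda a:
       bind(inc_state(), lambda _b:        # count Con 6
       bind(unit(6), lambda b:
       bind(inc_state(), lambda _c:        # count the Add node
       unit(a + b))))))

print(prog(0))  # (11, 3)
```

Running `prog(0)` feeds in the initial counter, just like `unpackMSandRun ... 0` above.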
9.3 The monadic evaluator with output and counter in do-notation
Adding output to our evaluator is now quite easy. It's just a matter of adding a field to our type, where we are going to accumulate the output, and take care of extracting it in our bind function to concatenate the old one with the new one.
> newtype Eval_SIO a = Eval_SIO { unPackMSIOandRun :: State -> (a, State, Output) }
Now our monad contains an anonymous function that takes the initial state, 0, and will produce the final Integer, the final state and the concatenated output.
So, this is bind:
> bindMSIO monad doNext =
>     Eval_SIO (\initialState ->
>         let (oldInt, oldState, oldOutput) = unPackMSIOandRun monad initialState in
>         let (newInt, newState, newOutput) = unPackMSIOandRun (doNext oldInt) oldState in
>         (newInt, newState, oldOutput ++ newOutput))
And this is our "return":
> mkMSIO int = Eval_SIO (\s -> (int, s, ""))
Now we can declare our type, "Eval_SIO", as an instance of the Monad class:
> instance Monad Eval_SIO where
>     return a = mkMSIO a
>     (>>=) m f = bindMSIO m f
Now, the function to increment the counter will also insert an empty string "" in our monad: "bind" will take care of concatenating it with the old output:
> incSIOstate :: Eval_SIO ()
> incSIOstate = Eval_SIO (\s -> ((), s + 1, ""))
The function to insert some new output will just insert a string into our monad, together with a void Integer (). Since no binding will occur (>> will be applied), the () will not be taken into consideration within the anonymous functions automatically created for us within the do block:
> print_SIO :: Output -> Eval_SIO ()
> print_SIO x = Eval_SIO (\s -> ((), s, x))
And now the evaluator, that puts everything together. As you can see it did not change too much from the previous versions:
> eval_SIO :: Term -> Eval_SIO Int
> eval_SIO (Con a) = do incSIOstate
>                       print_SIO (formatLine (Con a) a)
>                       return a
> eval_SIO (Add t u) = do a <- eval_SIO t
>                         b <- eval_SIO u
>                         incSIOstate
>                         print_SIO (formatLine (Add t u) (a + b))
>                         return (a + b)
Running it will require unpacking the monad and feeding it with the initial state 0:
*TheMonadicWay> unPackMSIOandRun (eval_SIO (Add (Con 6) (Add (Con 16) (Add (Con 20) (Con 12))))) 0
(54,7, ...
*TheMonadicWay>
(I formatted the output).
10 If There's A State We Need Some Discipline: Dealing With Complexity
(Text to be done yet: just a summary)
|
http://www.haskell.org/haskellwiki/index.php?title=The_Monadic_Way/Part_I&diff=8046&oldid=6672
|
CC-MAIN-2014-35
|
refinedweb
| 3,282
| 56.22
|
VictorN
Please provide the exact error message together with the error code, as well as some info about your system, like the Windows version and the VS version.
i already share the exact code
Originally Posted by existenceproduct
i already share the exact code
You didn't.
You only posted the #includes with #pragmas and meaningless description.
So no code, no exact error message.
... nor have you said whether my suggestion in post #17 re requiring a unicode string parameter rather than ASCII was right or not.
Originally Posted by existenceproduct
I am getting this dmvfoam error when I compile my code.
I had already included the function SHGetFolderPath
successfully, but when I added SHCreateDirectory(NULL, p_path)
it stopped compiling. I have the following headers
#include <windows.h>
#include <stdlib.h>
#include <malloc.h>
#include <memory.h>
#include <tchar.h>
#include "Format.h"
#include <wchar.h>
#include <stdio.h>
#include <shlwapi.h> // for SHGetFolderPath function
#include <shlobj.h> // for SHGetFolderPath and SHCreateDirectory function
//*************************************************************************************************************************
#pragma comment(lib, "shlwapi")
#pragma comment(lib, "shell32")
Any suggestions?
no solution to this
Originally Posted by existenceproduct
no solution to this
There is nothing to solve, since we don't see the code casing the error, nor have we any info about what the error was.
Also, you haven't said whether the suggestion in my previous posts has been tried and whether it worked. But without compilable code that demonstrates the problem, as Victor said, there's not much else we can do.
|
https://forums.codeguru.com/showthread.php?309633-SHCreateDirectory-not-found&s=833306c4ef3222a4ca8cf7321aad72a1&p=2241074
|
CC-MAIN-2022-40
|
refinedweb
| 243
| 60.01
|
import org.apache.http.HttpResponse;

/**
 * Abstract non-blocking server-side HTTP connection interface. It can be used
 * to receive HTTP requests and asynchronously submit HTTP responses.
 *
 * @see NHttpConnection
 *
 * @since 4.0
 */
public interface NHttpServerConnection extends NHttpConnection {

    /**
     * Submits the {@link HttpResponse} to be sent to the client.
     *
     * @param response HTTP response
     *
     * @throws IOException if an I/O error occurs while submitting the response
     * @throws HttpException if the HTTP response violates the HTTP protocol
     */
    void submitResponse(HttpResponse response) throws IOException, HttpException;

    /**
     * Returns {@code true} if an HTTP response has been submitted to the
     * client.
     *
     * @return {@code true} if an HTTP response has been submitted,
     * {@code false} otherwise.
     */
    boolean isResponseSubmitted();

    /**
     * Resets input state. This method can be used to prematurely terminate
     * processing of the incoming HTTP request.
     */
    void resetInput();

    /**
     * Resets output state. This method can be used to prematurely terminate
     * processing of the outgoing HTTP response.
     */
    void resetOutput();

}
|
http://hc.apache.org/httpcomponents-core-dev/httpcore-nio/xref/org/apache/http/nio/NHttpServerConnection.html
|
CC-MAIN-2014-49
|
refinedweb
| 186
| 56.45
|
Financial management – An overview
Financial management can be defined as the activity concerned with planning, raising, controlling and administering the funds used in a business. In simple terms, it is the activity concerned with the acquisition of funds, the use of funds and the distribution of profits by a business organization.

Finance function
The finance function can be defined as the task of providing the funds needed by the enterprise on the terms most favourable to it, keeping in view its objectives. Generally, the finance function should answer the following three questions: a) How large and how fast should a company grow? b) What will be the specific forms of its assets? c) What should be the composition of its liabilities?

Concepts of Profit Maximization and Wealth Maximization
There are two approaches, viz. 1) Profit Maximization and 2) Wealth Maximization.

Profit Maximization
According to this approach, actions that increase profits should be undertaken and those that decrease profits should be avoided. Under this approach, profit is treated as the test of economic efficiency: the more the profit, the more efficient the firm. However, this approach considers only the quantity of profit, not its quality. The survival of the firm nevertheless depends upon its ability to earn profits.

Wealth Maximization
Wealth maximization refers to the gradual growth of the value of the firm's assets in terms of the benefits they can produce. The wealth maximization attained by a company is reflected in the market value of its shares. In other words, wealth maximization simply means maximizing the wealth of the shareholders.

Example: A company has 50,000 shares of Rs. 10 each and a profit of Rs. 20,000, giving an earning per share (EPS) of Re. 0.40. Assume the company issues an additional 50,000 shares of Rs. 10 each for its financial requirements, and profit after taxes rises to Rs. 30,000, a net increase of Rs. 10,000. Although profit has increased by Rs. 10,000, the earning per share has come down to Re. 0.30. This does not add to the wealth of the shareholders. This is why the finance manager always concentrates on wealth maximization, cash flows and the time value of money.
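The worked example above can be checked with a short calculation; this is an illustrative sketch, not a function from any standard finance library.

```python
# EPS = profit available to equity shareholders / number of equity shares.
def eps(profit_after_tax, num_shares):
    return profit_after_tax / num_shares

# Before the issue: Rs. 20,000 profit over 50,000 shares.
before = eps(20_000, 50_000)
# After issuing 50,000 more shares, profit rises to Rs. 30,000.
after = eps(30_000, 100_000)

print(before, after)  # EPS falls from Re. 0.40 to Re. 0.30
```

Profit goes up, yet each share earns less — the reason wealth maximization, not profit maximization, is the finance manager's yardstick.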
Functions of the Finance Manager

1. Estimating the financial needs
The finance manager's estimate should be based on sound financial principles so that the concern has neither inadequate nor excess funds. Inadequacy of funds will adversely affect the day-to-day working of the concern, whereas excess funds may tempt management to spend on speculative activities.

2. Selection of the right source of funds
After ascertaining the total amount needed by the organization, it is the responsibility of the finance manager to select the right source of funds, at the right time and at the right cost. Each source has its own cost; careful selection should be made in the light of duration, risk, cost and purpose.

3. Allocation of funds
After mobilizing funds, it is the responsibility of the finance manager to distribute them between capital expenditure and revenue expenditure. Each investment must yield a fair return, so that it contributes to the goal of wealth maximization.

4. Analysis and interpretation of financial performance
An efficient system of financial management necessitates the use of various control devices to interpret the financial performance of various operations. The financial control devices generally used are: 1) return on investment (ROI) 2) budgetary control 3) break-even analysis 4) ratio analysis 5) cost and internal audit. ROI is the best control device to evaluate the performance of various financial policies. The use of these control techniques helps the finance manager evaluate performance in various areas and take corrective measures whenever needed.

5. Analysis of cost-volume-profit (CVP)
This is another important tool of financial management that helps management evaluate different investment proposals and know whether the organization is moving in the right direction. Make-or-buy decisions and decisions to continue or drop a product line are among the important decisions made using CVP analysis.

6. Capital budgeting
This is the technique through which the finance manager evaluates proposed investments in fixed assets. In how many years can the original investment be recovered? At what percentage of return should the business run? These are the questions answered by capital budgeting. Payback period (PBP), internal rate of return (IRR), accounting rate of return (ARR) and net present value (NPV) are some of the techniques used to evaluate proposals.
7. Working capital management
Working capital is the financial lubricant which keeps business operations going. The fate of large investments mainly depends upon a relatively small amount of working capital. The finance manager must assess the various cash needs at different times and then arrange for cash. Cash may be needed to 1) purchase raw materials 2) make payments to creditors 3) meet wage bills 4) meet day-to-day expenses. The usual sources of cash are 1) cash sales and 2) collections from debtors. Cash management should be such that there is neither a shortage of cash nor idle cash; any shortage will damage the firm's creditworthiness.

Profit planning guides management in attaining corporate goals. Profit may be earned either through sales or through reducing cost, and cost reduction techniques really help in increasing profits. A judicious use of profit is essential for expansion and diversification plans and also for protecting the interests of shareholders. Ploughing back of profits is the best policy of further financing; a balance should be maintained between using funds to pay dividends and retaining earnings to finance expansion plans.

8. Fair return to the investors
The organization should not ignore the interests of its shareholders. Equity holders normally expect a fair amount of dividend and capital appreciation on their investment. If this is not provided, the confidence of the investors will be lost. Hence the organization is advised to maintain a regular dividend policy with growth.

9. Maintaining liquidity and wealth maximization
This is considered the prime objective of an organization. Liquidity increases borrowing capacity, so expansion and diversification can be conducted comfortably, and it builds the firm's ability to meet short-term obligations. Once the flow of funds is assured continuously, the overall profitability of the firm can be maximized, and this wealth maximization takes place in the form of growth of capital over the years.
Financial Plan
The basic purpose of a financial plan is to make sure that adequate funds are raised at minimum cost and that they are used wisely.

Factors affecting a financial plan

1. Nature of the industry
The nature of the industry decides the quantum of capital required and the sources of its procurement. Capital-intensive industries require a larger amount of capital than labour-intensive industries. Moreover, industries with stability and regularity in earnings can raise capital from the market more easily than those whose income is unstable or irregular.
2. Amount of risk
The amount of risk involved in the process of production also affects the planning. Where a greater amount of risk and uncertainty is involved, industries depend more upon ownership securities such as shares, whereas industries with less risk may depend upon debt and thus earn higher profits for equity. The amount of risk also affects the liquidity of cash.

3. Standing of the concern
Certain individual characteristics of the unit, such as age, size, goodwill, area of operation, credit rating in the market and past performance, affect the financial planning. Large concerns, or companies with a good credit rating in the market, can raise finance (equity, debentures or public deposits) very easily, whereas new companies face difficulties in raising capital and loans from the market.

4. Availability of sources
There are a number of sources from which funds can be raised. The various alternative sources of finance must be appraised in the light of their cost, availability, regularity etc. at the time of formulating the financial plan.

5. Attitude of management
Management may wish to retain control over the business. In such a case, it would not like to issue equity shares, or, if shares are issued, the managers would purchase a majority of them themselves or would prefer debt financing. Such a management would not like to issue new equity shares even for its expansion programmes; ploughing back of profits is always preferred in such units.

6. Future expansion plans
The future plans of a concern should be considered while formulating a financial plan. Plans for expansion and diversification in the near future call for a flexible financial plan, and the sources of funds should be such that the required funds can be raised without difficulty.
7. Magnitude of external capital requirement
It is good policy for a firm to finance its expansion or diversification programmes through internal sources such as ploughing back of profits, reserves and surpluses, while short-term finance may be obtained from external sources by issuing redeemable preference shares or debentures.
8. Government control
Government policies regarding the issue of shares and debentures, payment of dividends, interest rates, foreign collaborations etc. will influence a financial plan. Legislative restrictions on using certain sources, or limits on dividend and interest rates, make it difficult to raise funds, so government controls should be considered properly while drawing up a financial plan.

9. Flexibility
Flexibility, not rigidity, should be the main principle followed in the financial plan. If there is no flexibility in the plan, it will be very difficult later on to carry out expansion programmes for lack of funds.

10. Capital structure
The capital structure of the company should be diversified but balanced, and a fair rate of dividend should be maintained.

Characteristic features of a good financial plan

1. Simplicity
The plan should be easily understandable by all concerned and free from complications and suspicion. There must be no confusion in the minds of investors about the nature of the securities issued by the organization.

2. Foresight
The planners should keep in mind not only the needs of today but also the needs of tomorrow, so that a sound capital structure may be formed.

3. Flexibility
The capital structure of a company should be flexible enough to meet its capital requirements. The financial plan should be drawn up in such a way that both increases and decreases in capital are feasible.
4. Intensive use
Every paisa should be used properly for the prosperity of the enterprise. Wasteful use of capital is as bad as inadequate capital; there should be neither over-capitalization nor under-capitalization.

5. Liquidity
A reasonable amount of current assets must be kept in the form of liquid cash so that business operations can be carried on smoothly, without shocks due to shortage of funds.

6. Economy
The cost of raising capital should be the minimum; dividend or interest should not become a burden on the company.

7. Provision for anticipated contingencies
A sound financial plan should provide for future contingencies caused by business cycles.

Risk, Return and Value of the Firm
Financial decisions involve alternative courses of action with different risk-return implications. In general, the finance manager is required to answer the following questions: a) What is the expected return? b) What is the amount of risk involved? c) How would the decision influence the market value of the firm?
[Figure: Relationship between risk, return and valuation of the firm — capital budgeting, capital structure, dividend and working capital decisions jointly determine return and risk, which together determine the market value of the firm]
Sources of finance
1. Equity shares — dividend is the cost to the company; no tax benefit; not repayable during the lifetime of the company.
2. Preference shares — a fixed rate of dividend is the cost to the company; no tax benefit; repayment depends on the type of preference share.
3. Debentures — a fixed rate of interest is the cost of capital; tax benefit is available; repayment depends on the type of debenture.
4. Bank loan — a fixed rate of interest is the cost of capital; tax benefit is available.
5. Public deposit — a fixed rate of interest is the cost of capital; tax benefit is available.
6. Retained earnings — a free source of finance, but the opportunity cost should be properly calculated.
Leverage Analysis
Leverage refers to the use of fixed-cost / interest-bearing securities to maximize the wealth of equity shareholders.

Master table for calculating the leverages (amounts in Rs.):

Sales
Less: Variable cost
= Contribution
Less: Fixed cost
= Operating profit (EBIT)
Less: Interest
= Earnings before tax (EBT)
Less: Tax
= Earnings after tax (EAT)
Less: Preference dividend
= Earnings available to equity shareholders
Financial Leverage: FL = EBIT / EBT

Operating Leverage: OL = Contribution / EBIT

Degree of Operating Leverage: Degree of OL = % change in operating income / % change in sales

Combined Leverage: CL = OL x FL = (Contribution / EBIT) x (EBIT / EBT) = Contribution / EBT

Earnings Per Share: EPS = Earnings available to equity shareholders / Number of equity shares

It is true that the capital structure cannot affect the total earnings of a firm, but it can affect the share of earnings available to the equity shareholders.
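As a quick sketch of how these formulas fit together (the figures below are hypothetical, not taken from the text):

```python
def leverages(sales, variable_cost, fixed_cost, interest):
    contribution = sales - variable_cost   # Sales - Variable cost
    ebit = contribution - fixed_cost       # Operating profit (EBIT)
    ebt = ebit - interest                  # Earnings before tax (EBT)
    ol = contribution / ebit               # Operating leverage
    fl = ebit / ebt                        # Financial leverage
    cl = ol * fl                           # Combined = Contribution / EBT
    return ol, fl, cl

# Hypothetical firm: sales 10,00,000; variable cost 6,00,000;
# fixed cost 2,00,000; interest 50,000.
ol, fl, cl = leverages(1_000_000, 600_000, 200_000, 50_000)
print(ol, fl, cl)  # OL = 2.0, FL ≈ 1.33, CL ≈ 2.67
```

Note how the EBIT terms cancel in the product, so the combined leverage collapses to Contribution / EBT, exactly as the identity above states.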
Types of leverages
1. Financial Leverage 2. Operating Leverage 3. Combined Leverage

Financial Leverage
Financial leverage is a tool with which the finance manager can maximize the returns to the equity shareholders by choosing a proper capital structure: the right blend of debt and equity should be maintained. The use of more and more fixed-interest-bearing securities to maximize the wealth of the equity shareholders is called financial leverage, or trading on equity. When the amount of debt is relatively large in relation to capital stock, a company is said to be trading on its equity.

Operating Leverage
Costs can be classified into 1) fixed costs and 2) variable costs. Variable costs vary with the volume of operation; fixed costs remain constant irrespective of the level of activity. Reducing variable cost can increase the operating profit of an organization.

Operating leverage results from the presence of fixed costs, which magnify the fluctuations in net operating income flowing from small variations in revenue. Since fixed costs do not change with sales, any increase in sales, fixed costs remaining the same, will magnify the operating income: the percentage change in operating income will be more than the percentage change in sales. This occurrence is known as operating leverage, and it arises whenever a firm has fixed costs that must be recovered irrespective of sales volume. The degree of operating leverage depends upon the amount of fixed elements in the cost structure. Thus operating leverage involves fixed cost, variable cost and contribution, and it indicates whether the firm has good operating efficiency or not.
Problems on Financial Leverage

1. A company is considering two financial plans:

                               Plan I          Plan II
Debt (interest @ 10% p.a.)     Rs. 4,00,000    Rs. 1,00,000
Equity shares (Rs. 10 each)    Rs. 1,00,000    Rs. 4,00,000
Total                          Rs. 5,00,000    Rs. 5,00,000

The earnings before interest and tax (EBIT) are assumed as Rs. 50,000, Rs. 75,000 and Rs. 1,25,000. Tax is taken at 50%.

(1) When EBIT is Rs. 50,000:

                                          Plan I (Rs.)    Plan II (Rs.)
Earnings before interest and tax (EBIT)      50,000          50,000
Less: Interest on debt                       40,000          10,000
Earnings before tax (EBT)                    10,000          40,000
Less: Tax @ 50%                               5,000          20,000
Earnings after tax (EAT)                      5,000          20,000
No. of equity shares                         10,000          40,000
Earning per share (EPS)                      Re. 0.50        Re. 0.50

(2) When EBIT is Rs. 75,000:

                                          Plan I (Rs.)    Plan II (Rs.)
Earnings before interest and tax (EBIT)      75,000          75,000
Less: Interest on debt                       40,000          10,000
Earnings before tax (EBT)                    35,000          65,000
Less: Tax @ 50%                              17,500          32,500
Earnings after tax (EAT)                     17,500          32,500
No. of equity shares                         10,000          40,000
Earning per share (EPS)                      Rs. 1.75        Re. 0.81

(3) When EBIT is Rs. 1,25,000:

                                          Plan I (Rs.)    Plan II (Rs.)
Earnings before interest and tax (EBIT)    1,25,000        1,25,000
Less: Interest on debt                       40,000          10,000
Earnings before tax (EBT)                    85,000        1,15,000
Less: Tax @ 50%                              42,500          57,500
Earnings after tax (EAT)                     42,500          57,500
No. of equity shares                         10,000          40,000
Earning per share (EPS)                      Rs. 4.25        Rs. 1.44
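The EPS figures in Problem 1 can be reproduced with a short routine; this is a sketch of the computation shown above, not part of the original problem.

```python
def eps(ebit, interest, num_shares, tax_rate=0.50):
    ebt = ebit - interest        # earnings before tax
    eat = ebt * (1 - tax_rate)   # earnings after 50% tax
    return eat / num_shares

# Plan I: Rs. 4,00,000 debt @ 10% -> interest 40,000; 10,000 equity shares.
# Plan II: Rs. 1,00,000 debt @ 10% -> interest 10,000; 40,000 equity shares.
for ebit in (50_000, 75_000, 1_25_000):
    plan1 = eps(ebit, 40_000, 10_000)
    plan2 = eps(ebit, 10_000, 40_000)
    print(f"EBIT {ebit}: Plan I EPS = {plan1:.2f}, Plan II EPS = {plan2:.2f}")
```

At the lowest EBIT both plans give the same EPS (Re. 0.50), but as EBIT rises the debt-heavy Plan I magnifies EPS far more — the essence of trading on equity.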
2. A limited company has equity share capital of Rs. 5,00,000 divided into shares of Rs. 100 each. It wishes to raise a further Rs. 3,00,000 for expansion programmes. The company plans the following financing schemes:
1) All common stock
2) Rs. 1,00,000 in common stock and Rs. 2,00,000 in debt @ 10% p.a.
3) All debt @ 10% p.a.
4) Rs. 1,00,000 in common stock and Rs. 2,00,000 in preference capital with a dividend rate of 8%

The company has EBIT of Rs. 1,50,000. Tax may be taken at 50%.

                               Plan I     Plan II    Plan III    Plan IV
EBIT                           1,50,000   1,50,000   1,50,000    1,50,000
Less: Interest                        -     20,000     30,000           -
EBT                            1,50,000   1,30,000   1,20,000    1,50,000
Less: Tax @ 50%                  75,000     65,000     60,000      75,000
Earnings after tax               75,000     65,000     60,000      75,000
Less: Pref. dividend @ 8%             -          -          -      16,000
Earnings available for equity    75,000     65,000     60,000      59,000
No. of equity shares              8,000      6,000      5,000       6,000
EPS (Rs.)                         9.375      10.83      12.00        9.83
Rank                                (4)        (2)        (1)         (3)

Impact of Leverage on Loss
Leverage has an adverse impact on earnings if the firm suffers losses. Taking the above figures, if the company suffers a loss of Rs. 70,000, the impact of leverage under the four plans is:

                          Plan I     Plan II     Plan III    Plan IV
Loss before interest
and tax                   -70,000    -70,000     -70,000     -70,000
Less: Interest                  -    -20,000     -30,000           -
Loss after interest       -70,000    -90,000   -1,00,000     -70,000
No. of equity shares        8,000      6,000       5,000       6,000
Loss per share (Rs.)         8.75      15.00       20.00       11.67
3. The Indo American Company Ltd. had the following capital structure on 31-3-2003:

7% Debentures                          Rs. 12,00,000
8% Bank loan                           Rs.  2,00,000
9% Preference shares of Rs. 10 each    Rs. 14,00,000
38,000 equity shares of Rs. 50 each    Rs. 19,00,000
Retained earnings                      Rs. 13,00,000
Total                                  Rs. 60,00,000

The present earnings before interest and taxes are Rs. 9,00,000. The company is contemplating an expansion programme requiring an additional investment of Rs. 10,00,000 and hopes to maintain the same rate of earnings. The company has the following alternatives:
1) To issue debentures @ 8%
2) To issue preference shares @ 10%
3) To issue equity shares at a premium of Rs. 10 per share

Examine the alternatives assuming income tax @ 55%.

Earnings on Rs. 60,00,000 of capital are Rs. 9,00,000; therefore, earnings on the additional Rs. 10,00,000 = (9,00,000 / 60,00,000) x 10,00,000 = Rs. 1,50,000.

Comparative Statement of EPS under the Different Alternatives

                                 Present     Alternative I  Alternative II  Alternative III
EBIT on present capital
 (Rs. 60,00,000)                 9,00,000      9,00,000       9,00,000        9,00,000
EBIT on new capital
 (Rs. 10,00,000)                        -      1,50,000       1,50,000        1,50,000
Total earnings                   9,00,000     10,50,000      10,50,000       10,50,000
Less: Interest
 a) Bank loan (16,000) plus
    debentures (84,000)          1,00,000      1,00,000       1,00,000        1,00,000
 b) Interest on proposed debt           -        80,000              -               -
EBT                              8,00,000      8,70,000       9,50,000        9,50,000
Less: Tax @ 55%                  4,40,000      4,78,500       5,22,500        5,22,500
PAT                              3,60,000      3,91,500       4,27,500        4,27,500
Less: Preference dividend
 a) on Rs. 14,00,000 @ 9%        1,26,000      1,26,000       1,26,000        1,26,000
 b) on proposed pref. @ 10%             -             -       1,00,000               -
Earnings available to equity     2,34,000      2,65,500       2,01,500        3,01,500
No. of equity shares               38,000        38,000         38,000          54,667
EPS (Rs.)                            6.16          6.99           5.30            5.52

Total number of equity shares under Alternative III: existing 38,000 plus new shares of Rs. 10,00,000 / Rs. 60 (face value Rs. 50 plus Rs. 10 premium) = 16,667, i.e. 54,667 in all.
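The comparative statement above can be cross-checked with a small script; this is a sketch built from the figures given in the problem, not a standard-library routine.

```python
TAX = 0.55
EBIT = 10_50_000                       # 9,00,000 existing + 1,50,000 on new capital
EXISTING_INTEREST = 84_000 + 16_000    # 7% debentures + 8% bank loan
EXISTING_PREF_DIV = 1_26_000           # 9% on Rs. 14,00,000 preference capital

def eps(extra_interest=0, extra_pref_div=0, extra_shares=0):
    ebt = EBIT - EXISTING_INTEREST - extra_interest
    pat = ebt * (1 - TAX)
    for_equity = pat - EXISTING_PREF_DIV - extra_pref_div
    return for_equity / (38_000 + extra_shares)

alt1 = eps(extra_interest=80_000)      # 8% debentures on Rs. 10,00,000
alt2 = eps(extra_pref_div=1_00_000)    # 10% preference shares
alt3 = eps(extra_shares=16_667)        # Rs. 10,00,000 / Rs. 60 issue price
print(round(alt1, 2), round(alt2, 2), round(alt3, 2))  # 6.99 5.3 5.52
```

Debt financing (Alternative I) gives the highest EPS because the interest is tax-deductible at 55%, while the preference dividend is paid out of post-tax profits.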
WORKING CAPITAL ANALYSIS
The prime objective of management is to make profit. Whether or not this is accomplished in most businesses depends largely on the manner in which the working capital is administered. The fate of a large-scale investment in fixed capital is often determined by a relatively small amount of current assets. Inadequate working capital is disastrous, whereas redundant working capital is a criminal waste. Without working capital, fixed assets are like a gun which cannot shoot because there are no cartridges.

Meaning of Working Capital
Capital needed for the day-to-day operation of a business is termed working capital. There are two concepts of working capital: 1) gross working capital and 2) net working capital. Gross working capital refers to the firm's total investment in current assets; net working capital refers to the excess of current assets over current liabilities. For all practical purposes, working capital means net working capital.

Factors Influencing Working Capital Needs
The following factors should be considered in order to determine the amount of working capital required.

1) Nature of business
The amount of working capital is related to the nature and volume of the business. The working capital requirement is higher for firms in finance and manufacturing. On the contrary, concerns with large investments in fixed assets require less working capital: public utility concerns (such as railways and electricity services) require a smaller amount of working capital than trading or manufacturing concerns, partly because of the cash nature of their business and partly because they sell a service instead of a commodity.

2) Size of business / scale of operation
The volume of transactions directly influences the working capital requirement of a concern. The greater the volume of transactions, the larger, in general, the working capital requirement. However, in some cases even a smaller concern may need more working capital, owing to high overhead charges, inefficient use of available resources and other economic disadvantages of small size.

3) Seasonal variations
In certain industries (such as oil mills and cotton textiles), raw material is not available throughout the year. They have to buy raw materials in bulk during the season to ensure an uninterrupted flow and process them during the entire year. A huge amount is thus blocked in material inventories during such seasons, which gives rise to larger working capital requirements. Generally, during the busy season a firm requires more working capital than in the slack season.

4) Length of production process
The time taken in the process of manufacture is also an important factor in determining the amount of working capital: the longer the production process, the larger the amount of working capital required.

5) Working capital cycle
Cash -> Raw Materials -> Work-in-Progress -> Finished Goods -> Sales -> Debtors -> Cash
In a manufacturing concern, the working capital cycle starts with the purchase of raw material and ends with the realization of cash from the sale of finished products. The speed with which working capital completes one cycle determines the requirement: the longer the period of the cycle, the larger the requirement of working capital.

6) Credit policy
A concern that purchases its requirements on credit and sells its products for cash requires a smaller amount of working capital. On the other hand, a concern buying its requirements for cash and allowing credit to its customers needs a larger amount of working capital, as a huge amount is blocked up in debtors.

7) Rate of stock turnover
Stock turnover refers to the speed with which sales are effected. A firm with a high rate of stock turnover needs less working capital than a firm with a low rate of turnover.
8) Growth and expansion of business
Growing concerns require more working capital than static ones: it is logical to expect a larger amount of working capital in a growing concern, to meet its growing need of funds for expansion programmes.

9) Profit margin and profit appropriation
Some firms enjoy a dominant position in the market due to a quality product or good marketing management. Such firms are in a position to earn more cash profit and so contribute to their own working capital. The way in which profits are appropriated also determines working capital: if a huge amount is transferred to reserves and other contingency funds, the available cash profit is reduced, which in turn reduces the funds available for working capital.

10) Dividend policy
There is a well-established relationship between dividends and working capital. If a constant dividend policy is followed, management needs to adjust the cash position before declaring a dividend. Shortage of cash is one of the reasons for issuing a stock dividend; retaining profits in this fashion strengthens the working capital position. If the whole of the profits is distributed among the shareholders, the company's working capital position will not improve.

11) Business cycle
The business cycle refers to expansion and contraction in general business activity. When business is prosperous (a period of boom), there is a need for a larger amount of working capital, due to increased sales, optimistic expansion of business etc. On the contrary, in times of depression business contracts, sales decline, difficulties are faced in collections from debtors, and firms may have a large amount of working capital lying idle.

12) Price level changes
Changes in the price level also affect the working capital requirement. Generally, rising prices require the firm to maintain a larger amount of working capital; however, firms that can revise their product prices will not face a severe working capital problem. The effects of rising price levels will thus differ from firm to firm, depending upon price policies, the nature of the product, and so on.

13) Other factors
Certain other factors, such as operating efficiency, management ability, irregularities of supply, asset structure and banking facilities, also influence the requirement of working capital.

Importance and advantages of adequate working capital
Inadequate working capital is disastrous, whereas redundant working capital is a criminal waste.
Both starvation and indigestion are injurious to health, and no business can run without an adequate amount of working capital. The main advantages of maintaining adequate working capital are as follows:

1) Cash discounts
If a proper cash balance is maintained, the business can avail itself of cash discounts by paying cash for purchases of raw materials and merchandise. This reduces the cost of production.

2) Solvency of the business
In order to maintain the solvency of the business, it is essential to keep sufficient funds. Funds should be readily available to make all payments as and when they fall due. Without ample working capital, production will suffer, and a business can never flourish.
3) Goodwill
It is the common experience of all prudent businessmen that promptness of payment creates goodwill and increases the debt capacity of the business. Because of this, supplies can at times be obtained on credit, and the firm can raise funds from the market because it pays interest and principal on time.

4) Easy loans from banks
Adequate working capital, high solvency and good credit standing help a firm borrow short-term loans from banks. Banks feel free to grant seasonal loans if the business has good credit standing and trade reputation.

5) Distribution of dividends
If a company is short of working capital, it cannot distribute a good dividend to its shareholders in spite of sufficient profits: profits have to be retained in the business to make up the deficiency of working capital. On the contrary, if working capital is sufficient, an ample dividend can be declared and distributed, which increases the market value of the shares.

6) Exploitation of good opportunities
Only concerns with adequate working capital can exploit favourable market conditions, such as purchasing their requirements in bulk when prices are lower and holding inventories for higher prices.
7) Ability to face a crisis
Depression pushes up the demand for working capital because stockpiling of finished goods becomes necessary. Certain other contingencies, such as unexpected losses and business oscillations, can easily be overcome if the company maintains adequate working capital.

8) Increased fixed-asset efficiency
Adequate working capital increases the efficiency of the fixed assets of the business through their proper maintenance. Without working capital, fixed assets are like a motorbike which cannot run because there is no petrol. It is therefore said that "the fate of large-scale investment in fixed capital is often determined by a relatively small amount of current assets."

9) High morale
A company which has ample working capital can make regular payment of wages, salaries and other day-to-day commitments, which raises the morale of its employees, increases their efficiency, reduces wastage and costs, and enhances production and profits.

10) Production efficiency
A continuous supply of raw materials, research programmes, innovation, technical development and expansion programmes can be carried out successfully if adequate working capital is maintained. This increases production efficiency.

Disadvantages of Redundant / Excess Working Capital
Redundant working capital is also not good for the health of the business. The drawbacks of excess working capital are as follows:

1) No proper return on investment
The business cannot earn a proper rate of return on its investment, because excess capital earns nothing while profits are distributed over the whole of the capital, thus bringing down the rate of return to the shareholders.
2) Fall in share value
Due to the low rate of return on investment, the value of the shares may also fall.

3) Unnecessary purchasing
With surplus money it becomes difficult to control purchases: things not required in the business may be bought, or excessive inventories and fixed assets that are not frequently used may be acquired, which increases the chances of theft, waste and mishandling of inventories.

4) No expansion activity / inefficient management
Redundant working capital may be taken as an indication that the management is not interested in expanding the business; otherwise, the redundant working capital would have been used to expand it.

5) Damage to relationships
When there is excess working capital, relations with banks and other financial institutions may not be maintained. This may lead to problems when a demand for working capital arises.

6) Excessive debtors / stock
Excessive working capital implies excessive debtors and a defective credit policy, which may cause a high rate of bad debts. If it is held as stock, it means poor stock turnover, which may lead to outdated stock.

7) Overall inefficiency
Excess working capital may lead to overall inefficiency in the organization, which in turn may lead to the closure of the business itself.

Principles of Working Capital
The core of working capital management is knowing when to look for working capital, how to use it, and how to measure, plan and control it. The main principles are:
1. Principle of risk variation
2. Principle of cost of capital
3. Principle of equity position
4. Principle of maturity of payment
1. Principle of risk variation
Risk here refers to the inability of the firm to meet its obligations as and when they become due for payment. A larger amount in current assets, with less dependence on short-term borrowings, increases liquidity and reduces risk, and thereby decreases the possibility of loss. On the other hand, less investment in current assets, with greater dependence on short-term borrowings, increases risk and reduces liquidity, while raising potential profitability. A conservative management prefers to minimize risk by maintaining a higher level of current assets or working capital. The goal of management, however, should be to establish a suitable trade-off between profitability and risk.
2. Principle of cost of capital
The various sources of working capital finance differ in their cost of capital and in the degree of risk involved. Generally, the higher the risk, the lower the cost, and the lower the risk, the higher the cost. Sound working capital management should always try to achieve a proper balance between the two.

3. Principle of equity position
This principle is concerned with planning the total investment in current assets. According to this principle, the amount of working capital invested in each component should be adequately justified by the firm's equity position: every rupee invested in current assets should contribute to the net worth of the firm. The level of current assets may be measured with the help of two ratios: 1) current assets as a percentage of total assets, and 2) current assets as a percentage of total sales. While deciding on the composition of current assets, the finance manager may consider the relevant industry averages.

4. Principle of maturity of payment
This principle is concerned with planning the sources of finance for working capital. According to this principle, a firm should make every effort to relate the maturity of its payments to its flow of internally generated funds. The maturity pattern of the various current obligations is an important factor in risk assessment: generally, the shorter the maturity, the greater the risk of being unable to meet obligations on time.

Estimation of working capital requirement
Estimating working capital is not an easy task, and a large number of factors have to be considered. The following factors are relevant while estimating the working capital requirement of a manufacturing concern:
1. The total cost incurred on materials, wages and overheads
2. The time-lag in the payment of wages and other expenses
3. The length of time for which raw materials remain in stores
4. The length of the production cycle, i.e. the time taken to convert raw materials into finished goods
5. The length of the sales cycle, during which finished goods are kept waiting for sale
6. The average period of credit allowed to customers
7. The amount required to pay the day-to-day expenses of the business
8. The average amount of cash required to make advance payments, if any
9. The average credit period expected to be allowed by suppliers
10. A margin for contingencies
Management of Working Capital
The basic goal of working capital management is to manage the current assets and current liabilities in such a way that a satisfactory level of working capital is maintained. A sound working capital management policy is one which ensures higher profitability, proper liquidity and sound health of the organization. Following are the important functions of working capital management:
1) Determination of the size of working capital
2) Determination of the working capital mix
3) Determination of the sources/means of working capital
4) Continuous and adequate supply of working capital
5) Control over the usage of working capital
The management of working capital can be studied under the following three heads: (1) Management of Cash Balances, (2) Management of Receivables and (3) Management of Inventory.

MANAGEMENT OF CASH BALANCES
Every undertaking is desirous of utilizing the available cash most efficiently so as to accomplish the goals of the undertaking, i.e. maximization of profit or wealth with minimum effort. But management of cash is not as simple as it appears. If a firm does not maintain sufficient cash balances, it may not be in a position to face unexpected challenges, which may bring down its credit in the market. On the other hand, excessive cash reserves will remain idle in the business, contributing nothing towards the wealth of the firm. Not only that, if heavy amounts are blocked for unforeseen contingencies, the company will not be in a position to carry on its day-to-day working efficiently. This is where the real problem of cash management arises, i.e. how much cash should be set aside for unexpected challenges and how much for the regular day-to-day working. Cash management involves the following:
1) Controlling the level of cash
2) Controlling the inflow of cash
3) Controlling the outflow of cash
4) Optimal investment of surplus cash
1) Controlling the level of cash
Every concern desires to keep a minimum balance of cash for its unforeseen obligations. What that minimum balance of cash should be is a real problem for the financial management to solve. In deciding the level of cash, the following considerations should be taken into account:
a) Knowledge of inflow and outflow of cash
b) Knowledge of unforeseen problems
c) Sources of funds within the business and other sources
d) Relations with banks
a) Knowledge of inflow and outflow of cash
The basic tool for the management to forecast the requirement of cash balances is the knowledge of cash inflows and cash outflows. The cash budget reveals the timing and size of net cash flows, as well as the periods during which surplus cash may be available for temporary investment.

b) Knowledge of unforeseen problems
In addition to the known inflows and outflows, there are certain unexpected contingencies like strikes, lock-outs, rises in the cost of material, problems of transportation, etc. It is desirable to reserve an adequate balance to meet such contingencies. Such reserves must be created carefully, as unnecessary blockage of funds may hamper the smooth operation of the business.

c) Sources of funds
The cash level depends very much on the sources from which the company can obtain funds at short notice. The company can maintain less cash if it has internal sources of funds to meet its expenses.

d) Relations with banks
The level of cash balance is determined largely by the relationship of the firm with banks, which mainly depends upon the credibility of the concern. If the company has cordial relations with banks, the banks will come forward to assist the undertaking as and when it needs cash. In this connection, points of major importance are the financial condition of the bank, its location, the services it offers, and the managerial ability of its chief officers.

2) Controlling the inflow of cash
Adequate control should be exercised over the inflow of cash. The follow-up and collection mechanism should be improved to collect receivables as and when they fall due. Fraudulent diversion of cash can be checked easily by installing an internal check system.
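The cash budget mentioned above can be sketched as a simple schedule: for each period, the closing balance is the opening balance plus receipts minus payments. This is a minimal illustration with made-up figures, not data from the text.

```python
# Minimal cash budget sketch (illustrative figures, not from the text):
# closing balance = opening balance + receipts - payments, period by period.
def cash_budget(opening_balance, receipts, payments):
    """Return (receipts, payments, net flow, closing balance) per period."""
    rows, balance = [], opening_balance
    for r, p in zip(receipts, payments):
        balance += r - p
        rows.append((r, p, r - p, balance))
    return rows

# Three months of assumed inflows and outflows (Rs.)
rows = cash_budget(10_000, receipts=[50_000, 40_000, 60_000],
                   payments=[45_000, 55_000, 30_000])
for month, (r, p, net, bal) in enumerate(rows, 1):
    print(f"Month {month}: net {net:+}, closing balance {bal}")
```

A negative closing balance in any month would signal the need for short-notice funds; a large positive one flags surplus cash for temporary investment.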
3) Controlling the outflow of cash
A company can maintain credibility in the market and in the minds of suppliers only if it makes payments in time. While dealing with the outflow of cash, care should be taken to see that payments are not made before the due date unless cash discounts are offered by the suppliers, and that payments are not delayed so as to damage the company's credibility in the minds of the suppliers.

4) Optimal investment of surplus cash
After controlling the inflow and outflow of cash, the next problem is to invest the surplus cash available with the company for a short period. In investing the surplus funds, the following considerations are usually given due weight: a) security, b) liquidity, c) yield and d) maturity. From this point of view, short-term government securities or treasury bills are better investments.

MANAGEMENT OF RECEIVABLES
Selling goods on credit increases the volume of sales, but it involves an element of risk. While managing receivables, the following points are to be noted:
1) Whom to grant credit
Sales can be made to customers after considering their character, capacity, collateral (security) and the nature of the product.
2) Period of credit
The period of credit must not lead to a disastrous condition for the company. The effect of credit on the liquidity of funds must be carefully analyzed.
3) Credit collection policy
The company should take all necessary steps to collect receivables as early as possible to avoid bad debts. The company should follow the collection procedure in a clear-cut sequence, i.e. a polite letter, progressively stronger-worded reminders, personal visits and then legal action. Though the collection procedure should be firmly adhered to, individual cases should be considered on their merits.
4) Analyzing the receivables mechanism
From time to time, the receivables mechanism should be appraised through various ratios like the debtors turnover ratio, the ratio of receivables to current assets, etc.
MANAGEMENT OF INVENTORY
Inventories mean the stock of raw materials, semi-finished products and finished products. Nearly 60% of the current assets are represented by inventories. Shortage of stock reduces profitability, and holding excess inventories involves storage costs. Inventory management aims to minimize these costs. It is, therefore, the prime responsibility of the financial executives to have proper management and control over the investment in inventories so that it does not become unprofitable for the business. Inventory management includes the following aspects:
1) deciding the size of inventory,
2) establishing time schedules, procedures and lot sizes for new orders,
3) ascertaining minimum safety levels,
4) coordinating sales, production and inventory policies,
5) providing proper storage facilities,
6) arranging the receipt and disbursement of materials and developing the forms for recording these transactions,
7) assigning responsibilities for carrying out inventory functions,
8) providing the reports necessary for supervising the overall activity.

Objectives of inventory management
1) Availability of materials
The first and foremost objective of inventory management is to make all types of materials available at all times whenever they are needed by the production departments, so that production may not be held up for want of materials. It is therefore advisable to maintain a minimum quantity of all types of materials to keep production moving on schedule.
2) Minimizing wastage
Inventory control is essential to minimize wastage at all levels, i.e. during storage in the godown or at work in the factory. Only normal, i.e. uncontrollable, wastage should be permitted. Any abnormal but controllable wastage should be strictly controlled. Wastage of materials by leakage, theft, embezzlement and spoilage due to rust, dust or dirt should be avoided.
3) Promotion of manufacturing efficiency
The manufacturing efficiency of the enterprise increases if the right types of raw materials are made available to production at the right time. It reduces wastage and the cost of production and improves the profitability and the morale of workers.
4) Better service to customers
In order to meet the demands of the customers, it is the responsibility of the concern to produce sufficient stock of finished goods to execute the orders received. This means a steady flow of production should be maintained.
5) Control of production level
The concern may decide to increase or decrease the production level in favourable times, and the inventory may be controlled accordingly. Proper control of inventory helps in creating and maintaining a buffer stock to meet any eventuality. Production variations can also be avoided through proper control of inventories.
6) Optimal level of inventories
Proper control of inventories helps management to procure materials in time in order to run the operations smoothly. It also avoids out-of-stock situations.
7) Economy in purchasing
Proper inventory control brings certain advantages and economies in purchasing raw materials. Management makes every attempt to purchase raw materials in bulk quantities and to take advantage of favourable market conditions.
8) Optimum investment and efficient use of capital
The prime objective of inventory control is to ensure an optimum level of investment in inventories. There should neither be any deficiency of stock of raw materials so as to hold up the production process, nor should there be any excessive investment in inventories so as to block capital that could otherwise be used in an efficient manner. It is, therefore, the responsibility of financial management to set the maximum levels of stocks to avoid deficiency or surplus stock positions.
9) Reasonable price
Management should ensure the supply of raw materials at a reasonably low price, but without sacrificing quality. This helps in controlling the cost of production and the quality of finished goods in order to maximize the profits of the concern.
10) Minimizing costs
Minimizing inventory costs such as handling, ordering and carrying costs is one of the main objectives of inventory management. Financial management should help control inventory costs in a way that reduces the cost per unit of inventory. Inventory costs are part of the total cost of production; hence the cost of production can also be minimized by controlling inventory costs.
11) Designing a proper organization
A proper organization should be designed for inventory management, with clear-cut accountability fixed at various levels of the organization.

Inventory control system
Proper inventory control not only provides liquidity but also increases profit and causes a substantial reduction in the working capital of the concern. Inventory control involves the following:
1) Stock levels
a) Minimum level
b) Re-ordering level
c) Maximum level
d) Danger level
2) Determination of safety stock
Safety stock is a buffer to meet some unanticipated increase in usage.
3) Ordering system of inventory (e.g. EOQ, periodic re-ordering system, single order and scheduled part delivery system)
4) Inventory reports
The management should be kept informed of the latest stock position of different items. This is usually done by preparing periodical inventory reports. These reports should contain all the information necessary for managerial action. The more frequently these reports are prepared, the smaller will be the chances of lapses in the administration of inventories.

Tools and techniques of inventory management
A-B-C Analysis
It is generally seen in manufacturing concerns that a small percentage of items contributes a large percentage of the value of consumption, and a large percentage of items contributes a small percentage of the value. In between these two extremes lie items which account for a moderate percentage of the value of materials. Under A-B-C analysis, the materials are divided into three categories, viz. A, B and C. Past experience has shown that about 10% of the items contribute 70% of the value of consumption, and this category is called the 'A' category. About 20% of the items contribute about 20% of the value of consumption and are known as the 'B' category. Category 'C' covers about 70% of the items of materials, which contribute only 10% of the value of consumption.
Class    No. of items (%)    Value of items (%)
A        10                  70
B        20                  20
C        70                  10
A-B-C analysis helps management concentrate more on category A, since the greatest monetary advantage will come from controlling these items. Attention should be paid to estimating requirements, purchasing, maintaining safety stocks and properly storing 'A' category materials. These items are kept under constant review so that a substantial part of the material cost may be controlled. The control of 'C' items may be relaxed, and these stocks may be purchased once for the whole year. A little more attention should be given to 'B' category items, and their purchase should be undertaken at quarterly or half-yearly intervals.

VED Analysis
VED (Vital, Essential, Desirable) analysis is generally used for spare parts. The requirements and urgency of spare parts are different from those of materials, so A-B-C analysis cannot be properly applied to spare parts. Vital (V) spares must be stocked adequately, since their non-availability can halt production. Essential (E) spares are also necessary, but their stocks may be kept at low figures. The stocking of Desirable (D) spares may be avoided at times; if the lead-time of these spares is short, stocking them can be avoided.
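The A-B-C classification above can be sketched mechanically: rank items by annual consumption value and cut the cumulative value at 70% and 90%, matching the text's 70-20-10 split. The item names and values here are made up for illustration.

```python
# Sketch of A-B-C classification by cumulative consumption value.
# Cutoffs (70% / 90%) follow the text's 70-20-10 value split; data is assumed.
def abc_classify(items):
    """items: dict of item name -> annual consumption value. Returns name -> class."""
    total = sum(items.values())
    ranked = sorted(items.items(), key=lambda kv: kv[1], reverse=True)
    classes, cumulative = {}, 0.0
    for name, value in ranked:
        cumulative += value
        share = cumulative / total
        classes[name] = "A" if share <= 0.70 else "B" if share <= 0.90 else "C"
    return classes

# Hypothetical annual consumption values (Rs.)
usage = {"M1": 70_000, "M2": 12_000, "M3": 8_000, "M4": 6_000, "M5": 4_000}
print(abc_classify(usage))
```

In practice the cutoffs are judgment calls; the point is that the single high-value item lands in class A for constant review, while the many low-value items fall into class C for relaxed control.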
Tandon Committee Report
The RBI set up a committee under the chairmanship of Shri P.L. Tandon in July 1974. The terms of reference of the committee were:
1. To suggest guidelines for commercial banks to follow up and supervise credit, from the point of view of ensuring proper end-use of funds and keeping a watch on the safety of advances
2. To suggest the type of operational data and other information that may be obtained by banks periodically from the borrowers, and by the RBI from the lending banks
3. To make recommendations for obtaining periodical forecasts from borrowers regarding (1) business/production plans and (2) credit needs
4. To make suggestions for prescribing inventory norms for different industries, both in the private and public sectors
5. To make recommendations regarding a satisfactory capital structure and a sound financial basis in relation to borrowings
6. To make recommendations regarding the resources for financing the minimum working capital
7. To make recommendations as to whether the existing pattern of financing working capital requirements by the cash credit/overdraft system etc. requires to be modified, and if so, to suggest suitable modifications
8. To make recommendations on any other related matter as the Group may consider necessary
The Study Group reviewed the system of working capital financing and identified its major shortcomings as follows:
1. The cash credit system of lending, wherein the borrower can draw freely within the limits sanctioned by the banker, hinders sound credit planning on the part of the banker and induces financial indiscipline in the borrower.
2. The committee was of the opinion that bank credit is extended on the basis of the amount of security available, and not according to the level of operations of the customer.
3. Bank credit, instead of being taken as supplementary to other sources of finance, is treated as the first source of finance.
4. Working capital finance provided by banks is theoretically supposed to be short-term in nature but has, in practice, become a long-term source of finance.
The committee was of the opinion that the banks should get information regarding the operational plans of the customer in advance, so as to carry out a realistic appraisal of such plans, and that the banks should also know the end-use of bank credit, so that the finance is used only for the purpose for which it was lent. Regarding bank credit, the Study Group made comprehensive recommendations which have by and large been accepted by the RBI. The recommendations relate to:
1) Norms for inventory and receivables
2) Quantum of permissible bank finance, computed by one of three methods:
a) 0.75 (CA - CL)
b) 0.75 (CA) - CL
c) 0.75 (CA - Core CA) - CL
3) Style of lending
Loan, and cash credit (by charging a slightly higher rate of interest than on the loan). Borrowings in excess of the estimate over a period of time should also carry a slightly higher rate. Apart from loans and cash credit, banks may also go in for bills lending. Each bank may take its own decision in this regard.
4) Information and reporting
The Study Group suggested a comprehensive information and reporting system which seeks to: 1) induce the borrower to plan his credit needs carefully and maintain greater discipline in their use, 2) promote a freer flow of information between the borrower and the banker so that the latter can monitor the credit situation better, and 3) ensure that credit is used for the purpose for which it was taken. The borrower should submit a quarterly profit and loss statement and a quarterly statement of current assets and current liabilities. Apart from these, (1) a half-yearly profit and loss account and balance sheet within 60 days, (2) an annual audited balance sheet within 3 months, and (3) a monthly stock statement should also be submitted.

Chore Committee Recommendations
The RBI in March 1979 appointed another committee, under the chairmanship of Shri K.B. Chore, to review the working of the cash credit system in recent years, with particular reference to the gap between sanctioned limits and the extent of their utilization, and also to suggest alternative types of credit facilities which would ensure greater credit discipline. The important recommendations of the committee are as follows:
1. The existing three types of lending, viz. cash credit, loan and bill, should be retained. At the same time, it is necessary to give some directional changes to ensure that, wherever possible, the use of cash credit is supplanted by loans and bills. It would also be necessary to take corrective measures to remove the drawbacks observed in the cash credit system.
2. The banks should obtain quarterly statements in the prescribed format from all borrowers having working capital credit limits of Rs. 50 lakhs and above.
3. The banks should undertake a periodical review of limits of Rs. 10 lakhs and above.
4. The banks should not bifurcate cash credit accounts into a demand loan and a fluctuating cash credit component. Such bifurcation may not serve the purpose of better planning by narrowing the gap between sanctioned limits and the extent of their utilization.
5. If a borrower does not submit the quarterly returns in time, the banks may charge penal interest of one percent on the total amount outstanding for the period of default.
6. The banks should discourage the sanction of temporary limits by charging an additional one percent interest over the normal rate.
7. The banks should fix separate credit limits for peak level and non-peak level, wherever possible.
8. The need for reducing the over-dependence of medium and large borrowers on bank finance for their production/trading purposes is recognized. The net surplus cash generation of an established industrial unit should be utilized at least partly for reducing borrowings for working capital purposes.
9. Requests for relaxation of inventory norms and for ad hoc increases in limits should be subjected by banks to close scrutiny and agreed to only in exceptional circumstances.
10. Delays on the part of banks in sanctioning credit limits could be reduced in cases where the borrowers cooperate in giving the necessary information about their past performance and future projects in time.
11. Banks should give particular attention to monitoring their key branches and critical accounts.
12. The communication channels, systems and procedures within the banking system should be toned up so as to ensure that minimum time is taken for the collection of instruments.
13. Although banks usually object to their borrowers' dealing with other banks without their consent, some of the borrowers still maintain current accounts and arrange bill facilities with other banks, which vitiates credit discipline. The RBI may issue suitable instructions in this behalf.
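The Tandon Committee's three methods of computing the quantum of permissible bank finance, listed earlier, can be sketched as follows. The CA, CL and core-current-asset figures below are assumed, purely for illustration; CL here means current liabilities other than bank borrowings.

```python
# The Tandon Committee's three methods of computing maximum permissible
# bank finance (MPBF). Method III excludes "core" current assets, i.e. the
# permanent portion of current assets that must be financed from long-term funds.
def mpbf(ca, cl, core_ca=0.0):
    method_1 = 0.75 * (ca - cl)
    method_2 = 0.75 * ca - cl
    method_3 = 0.75 * (ca - core_ca) - cl
    return method_1, method_2, method_3

# Assumed figures (Rs. lakhs): CA = 400, CL (other than bank borrowings) = 100,
# core current assets = 80.
m1, m2, m3 = mpbf(400, 100, 80)
print(m1, m2, m3)  # 225.0 200.0 140.0
```

Note how the permissible finance tightens from Method I to Method III, forcing the borrower to bring in a progressively larger margin from long-term sources.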
Statement of working capital requirements

Current Assets:                                    Rs.
(1) Cash required                                *****
(2) Stock of raw materials                       *****
(3) Work-in-progress:
      a) Raw materials                           *****
      b) Labour                                  *****
      c) Overheads                               *****
(4) Stock of finished goods                      *****
(5) Debtors/Receivables                          *****
(6) Advance payments                             *****
(7) Others                                       *****
Total Current Assets                             *****

Current Liabilities:
(1) Creditors                                    *****
(2) Lag in payment of expenses                   *****
(3) Others                                       *****
Total Current Liabilities                        *****

Working Capital (CA - CL)                        *****
Add: Provision for contingencies                 *****
Net Working Capital Required                     *****

Notes:
1) Profits should be ignored while calculating the working capital requirement, as they may or may not be used as working capital.
2) Even if profits are to be used as working capital, they should be reduced by the amount of income tax, dividend, drawings, etc.
3) The element of depreciation included in overheads should be ignored, as it is a notional expenditure.
4) If the problem is silent regarding work-in-progress, it is assumed that WIP is 100% complete as regards raw materials and 50% complete as regards labour and overheads.
5) Calculations for stocks of finished goods and debtors should be made at cost, unless otherwise asked in the question.
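The statement above reduces to a simple computation: total the current assets, subtract the current liabilities, and add a contingency margin. This sketch uses assumed figures and a hypothetical 10% contingency provision.

```python
# Working capital requirement per the statement above (all figures assumed, Rs.).
def working_capital(current_assets, current_liabilities, contingency_pct=0.10):
    """Net working capital required = (CA - CL) + contingency provision."""
    ca = sum(current_assets.values())
    cl = sum(current_liabilities.values())
    wc = ca - cl
    return wc + wc * contingency_pct

ca = {"cash": 20_000, "raw materials": 40_000, "work in progress": 30_000,
      "finished goods": 50_000, "debtors": 60_000}
cl = {"creditors": 35_000, "lag in payment of expenses": 15_000}
print(working_capital(ca, cl))  # (200000 - 50000) * 1.10 = 165000.0
```

Per the notes, the component values fed in should already be at cost, with depreciation excluded and WIP valued at the assumed degrees of completion.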
Features of Capital Budgeting
1) Large investments
2) Long-term commitment of funds, hence involving financial risk
3) Irreversible nature: it is difficult to reverse the decision, as the asset can rarely be disposed of without heavy loss
4) Long-term effect on profitability
5) Difficulty of investment decisions, as the future is uncertain
6) National importance: employment, economic growth
Methods of capital budgeting / Evaluation of investment proposals
1) Pay-back method
2) NPV
3) IRR
4) Profitability (accounting rate of return)

1) Pay-back method
The pay-back period refers to the minimum number of years required to recover the original cash outlay invested in a project; that is, the period over which the total earnings (cash inflows) from the investment equal the total outlay. For a project generating constant cash inflows, the PBP is calculated as:

PBP = Initial investment (cash outflow) / Annual cash inflow

The net cash inflow is calculated as below:

Cash inflow from sales revenue                *****
Less: Operating expenses including depn.      *****
Net income before tax                         *****
Less: Income tax                              *****
Net income after tax                          *****
Add: Depreciation                             *****
Net cash inflow                               *****
Because depreciation does not affect the cash inflow, it should not be taken into consideration in calculating the net cash inflow. But as it is an admissible deduction under the Income Tax Act, it is first deducted from the gross sales revenue and then added back to the net income.

Example: Determine the pay-back period for a project which requires a cash outlay of Rs. 10,000 and generates cash inflows of Rs. 2,000, Rs. 4,000, Rs. 3,000 and Rs. 2,000 in the first, second, third and fourth years respectively.
Outflow: Rs. 10,000

Cash inflows:
Year I      2,000
Year II     4,000
Year III    3,000
Year IV     2,000
Cumulative inflow at the end of year III: 9,000

The PBP is 3 years plus a fraction of the 4th year. The fraction is calculated as:

(Balance earnings required / Total earnings in the 4th year) x 12
= (1,000 / 2,000) x 12 = 6 months

PBP = 3 1/2 years
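The worked example generalizes to any series of uneven cash inflows: accumulate inflows year by year until the outlay is recovered, then interpolate within the recovery year. The sketch below reproduces the example's 3½-year answer.

```python
# Pay-back period for uneven cash inflows, reproducing the worked example:
# outlay 10,000; inflows 2,000, 4,000, 3,000, 2,000 -> 3.5 years.
def payback_period(outlay, inflows):
    """Return the pay-back period in years, or None if never recovered."""
    cumulative = 0
    for year, inflow in enumerate(inflows, start=1):
        if cumulative + inflow >= outlay:
            # Full years elapsed plus the fraction of this year needed
            return (year - 1) + (outlay - cumulative) / inflow
        cumulative += inflow
    return None  # outlay never recovered within the project's life

print(payback_period(10_000, [2_000, 4_000, 3_000, 2_000]))  # 3.5
```

The fractional part, (outlay − cumulative) / inflow, is the same "balance required over 4th-year earnings" ratio used in the example, just expressed in years instead of months.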
Capital Budgeting
Capital budgeting is the process of making investment decisions in capital expenditures. The significance of capital budgeting:
(1) Large investments: Capital budgeting decisions generally involve a large investment of funds. But the funds available with a firm are always limited, and the demand for funds far exceeds the resources. Hence, it is very important for a firm to plan and control its capital expenditure.
(2) Long-term commitment of funds: These decisions commit funds for long periods and hence involve financial risk.
(3) Irreversible nature: Capital expenditure decisions are of an irreversible nature. Once the decision to acquire a permanent asset is taken, it becomes very difficult to reverse that decision, because it is very difficult to dispose of the asset without incurring heavy losses.
(4) National importance: Investment decisions, though taken by individual concerns, are of national importance because they determine employment, economic activity and economic growth.

Methods of evaluation
(1) Pay-back period method
The pay-back period is the number of years required to recover the original investment from the project's earnings. Thus, it measures the period of time for the original cost of a project
to be recovered from the additional earnings of the project itself. Under this method, various investments are ranked according to the length of their pay-back periods, in such a manner that the investment with a shorter pay-back period is preferred to the one which has a longer pay-back period.

Advantages of the pay-back period method
(1) The main advantage of this method is that it is simple to understand and easy to calculate.
(2) It saves cost, as it requires less time and labour compared to other methods of capital budgeting.
(3) As a project with a shorter pay-back period is preferred to one having a longer pay-back period, this method reduces the loss through obsolescence and is more suited to developing countries like India, which are in the process of development and experience quick obsolescence.
(4) Due to its short-term approach, this method is particularly suited to a firm which has a shortage of cash or whose liquidity position is not particularly good.

Disadvantages
Though the pay-back period method is the simplest, oldest and most frequently used method, it suffers from the following limitations:
(1) It does not take into account the cash inflows earned after the pay-back period, and hence the true profitability of the projects cannot be correctly assessed. For example, suppose there are two projects, X and Y, each requiring an investment of Rs. 25,000. The profits before depreciation and after taxes from the two projects are as follows:

Year        Project X (Rs.)    Project Y (Rs.)
1st         5,000              4,000
2nd         8,000              6,000
3rd         12,000             8,000
4th         3,000              7,000
5th         -                  6,000
6th         -                  4,000
According to the pay-back method, project X is better because of earlier pay-back period of 3 years as compared to 4 years pay-back period in case of project Y. But it ignores the earnings after the pay-back period. Project X gives only Rs.3,000 of earnings after the pay-back period while project Y gives more earnings, i.e., Rs.10,000 after the pay-back period. It may not be appropriate to ignore earnings after the pay-back period especially when these are substantial. (2) This method ignores the time value of money and does not consider the magnitude and timing of cash inflows. It treats all cash flows as equal though they occur in different periods. It ignores the fact that cash received today is more important than the same amount of cash received after say, 3 years. For Example:
Annual cash inflows

Year        Project No. 1 (Rs.)    Project No. 2 (Rs.)
1           10,000                 4,000
2           8,000                  6,000
3           7,000                  7,000
4           6,000                  8,000
5           4,000                  10,000
Total       35,000                 35,000

According to the pay-back method, both projects may be treated as equal, as both have the same cash inflow over 5 years. But in reality Project No. 1 gives more rapid returns in the initial years and is better than Project No. 2.
(3) It does not take into consideration the cost of capital, which is a very important factor in making sound investment decisions.
(4) It may be difficult to determine the minimum acceptable pay-back period; it is usually a subjective decision.
(5) It treats each asset individually, in isolation from other assets, which is not feasible in real practice.
(6) The pay-back period method does not measure the true profitability of the project, as the period considered under this method is limited to a short period only and not the full life of the asset.
In spite of the above-mentioned limitations, this method can be used in evaluating the profitability of short-term and medium-term capital investment proposals. The technique can also be made more powerful if post pay-back results are considered.

2) NPV
The Net Present Value (NPV) method takes into consideration the time value of money. It recognizes the fact that a rupee earned today is worth more than the same rupee earned tomorrow. The present value of all inflows and outflows of cash occurring during the entire life of the project is determined separately for each year by discounting these flows by the firm's cost of capital or a pre-determined rate.

NPV = Discounted inflows - Discounted outflows

Evaluation: To choose between mutually exclusive projects, the projects should be ranked in order of their net present values, i.e. the project having the maximum positive NPV is given first preference. A project with a negative NPV is rejected.
The formula used to determine the discounting factor at a given cut-off rate r for year n is:

PV factor = 1 / (1 + r)^n
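The discounting formula can be applied year by year to compute a project's NPV directly. The rate, outlay and inflows below are assumed figures for illustration.

```python
# NPV: discount each year's net cash inflow by 1/(1+r)^n, sum, and
# subtract the initial outlay (all figures assumed).
def npv(rate, outlay, inflows):
    """Net present value of inflows (received at the end of years 1..n) less outlay."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(inflows, start=1)) - outlay

# Outlay Rs. 10,000; inflows Rs. 4,000 per year for 4 years; cost of capital 10%.
value = npv(0.10, 10_000, [4_000, 4_000, 4_000, 4_000])
print(round(value, 2))  # 2679.46 -> positive NPV, so the project is acceptable
```

Ranking mutually exclusive projects then amounts to computing this value for each and preferring the largest positive one, exactly as the evaluation rule above states.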
Advantages of the NPV technique
1. It recognizes the time value of money.
2. It considers earnings over the entire life of the project, and hence the true profitability of the proposal can be ascertained.
3. It takes into consideration the objective of maximum profitability.

Disadvantages
1. Compared to traditional methods of evaluation, the net present value method is more difficult to understand and operate.
2. This technique cannot be applied effectively to projects with unequal lives.
3. In the same way, this technique cannot be applied effectively to projects with unequal investments.
4. It is not easy to determine an appropriate discount rate.

IRR (Internal Rate of Return)
The internal rate of return can be defined as that rate of discount at which the present value of cash inflows is equal to the present value of cash outflows. IRR is sometimes called the trial-and-error method. Under this technique, two discounting factors are used, viz. a lower discounting factor and a higher discounting factor.

Advantages of IRR
1. It takes into account the time value of money.
2. This technique can be usefully applied in situations with even as well as uneven cash flows at different periods of time.
3. The determination of the cost of capital is not a prerequisite for the use of this method, and hence it is better than the net present value method where the cost of capital cannot be determined easily.
4. This method is also compatible with the objective of maximum profitability and is considered to be a more reliable technique of capital budgeting.

IRR = L + [(PVLF - I.I) / (PVLF - PVHF)] x (H - L)

where
PVLF = present value of cash inflows at the lower discounting factor
PVHF = present value of cash inflows at the higher discounting factor
I.I = initial investment
H = higher discounting factor
L = lower discounting factor
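The interpolation formula above can be implemented directly: compute the present value of inflows at the two trial rates and interpolate between them. The project figures and trial rates (20% and 25%) below are assumed for illustration; interpolation gives an approximation, not the exact IRR.

```python
# IRR by the interpolation formula from the text:
# IRR = L + (PVLF - I.I) / (PVLF - PVHF) x (H - L)
def pv_of_inflows(rate, inflows):
    """Present value of inflows received at the end of years 1..n."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(inflows, start=1))

def irr_interpolated(low, high, outlay, inflows):
    pv_low = pv_of_inflows(low, inflows)    # PVLF
    pv_high = pv_of_inflows(high, inflows)  # PVHF
    return low + (pv_low - outlay) / (pv_low - pv_high) * (high - low)

# Assumed project: outlay Rs. 10,000, inflows Rs. 4,000 for 4 years.
rate = irr_interpolated(0.20, 0.25, 10_000, [4_000] * 4)
print(f"{rate:.2%}")  # about 21.95%
```

In trial-and-error practice, the two rates are chosen so that NPV is positive at the lower rate and negative at the higher one; the narrower that bracket, the closer the interpolated figure is to the true IRR.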
Disadvantages
1. It is difficult to understand and is the most difficult method of evaluating investment proposals.
2. This method is based on the assumption that the earnings are reinvested at the internal rate of return for the remaining life of the project, which is not a justified assumption.
3. It takes more time to evaluate a proposal, as it is a technique of trial and error.

Accounting Rate of Return (ARR)
ARR considers the earnings of the project over its entire economic life. ARR can be calculated either on the basis of earnings or on the basis of cash inflows:

ARR = (Annual average net earnings / Average investment) x 100
or
ARR = (Annual average cash inflows / Average investment) x 100

Annual average net earnings = total inflows after tax but before depreciation / number of years under consideration
Annual average cash inflows = total cash inflows / number of years under consideration
Average investment = initial investment (cost of the proposal) / 2

Advantages:
1. It is very simple to understand.
2. It considers earnings throughout the economic life of the proposal.
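The ARR formulas above reduce to a two-line computation: average the earnings over the project's life and divide by half the initial investment. The figures below are assumed for illustration.

```python
# Accounting rate of return on the averages described above (figures assumed).
def arr(total_earnings, years, initial_investment):
    """ARR (%) = annual average earnings / average investment x 100."""
    average_earnings = total_earnings / years
    average_investment = initial_investment / 2  # initial investment / 2
    return average_earnings / average_investment * 100

# Total net earnings of Rs. 30,000 over 5 years on an outlay of Rs. 40,000:
# average earnings 6,000; average investment 20,000; ARR = 30%.
print(arr(30_000, 5, 40_000))
```

Unlike NPV and IRR, this figure ignores the timing of the earnings entirely, which is the price paid for its simplicity.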
DIVIDEND DECISION
Dividend policy determines the division of earnings between payments to shareholders and retained earnings. Dividend is that part of profit which is distributed among the shareholders according to the decision taken and the resolution passed in the Board Meeting.

Factors Affecting Dividend Policy
1. Magnitude and stability of earnings
The amount and stability of earnings is an important aspect of dividend policy. As dividend can be paid only out of present or past years' profits, the earnings of the company fix the upper limit on dividends. Dividends should generally be paid out of the current year's earnings only, as the retained earnings of previous years become more or less part of the permanent investment in the business to earn current profits. The past trend of the company's earnings should also be kept in consideration while making the dividend decision.
2. Legal restrictions
The legal provisions of the Companies Act, 1956 relating to dividend are relevant because they lay down a framework within which dividend policy is formulated. These provisions require that dividend can be paid only out of current profits or past profits after providing for depreciation, or out of money provided by the Government for the payment of dividends in pursuance of a guarantee given by the Government. The Act requires a company paying a dividend of more than 10% to transfer a certain percentage of the current year's profits to reserves. The Act further provides that dividends cannot be paid out of capital, as this would amount to a reduction of capital, adversely affecting the security of the company's creditors.
3. Nature of industry
The nature of the industry considerably affects dividend policy. Certain industries have a comparatively steady and stable demand irrespective of the prevailing economic conditions. For instance, people drink liquor in booms as well as in recessions; the same is the case with smoking. Such firms expect regular earnings and hence can follow a consistent dividend policy.
On the other hand, if the earnings are uncertain, as in the case of luxury goods, conservative policy should be followed. Such firms should retain a substantial part of their current earnings during boom period in order to provide funds to pay adequate dividends in recession periods. Thus, industries with steady demand for their products can follow a higher dividend payout ratio while cyclical industries should follow a lower payout ratio. 4. Age of the Company Age of the company also influences the dividend decisions of a company. A newly established concern has to limit payment of dividend and retain a substantial part of earnings for financing its future growth and development, while older companies which have established sufficient resources can afford to pay liberal dividends.
Past dividend rates
While formulating dividend policy, the directors must keep in mind the dividends paid in past years. The current rate should be around the average past rate; if it is abnormally increased, the shares will become subject to speculation. A new concern should consider the dividend policy of rival organizations.

5. Ability to borrow
Well established and large firms have better access to the capital market than new companies and may borrow funds from external sources if the need arises. Such companies may have a better dividend payout ratio, whereas smaller firms have to depend on their internal sources and therefore will have to build up good reserves, by reducing the dividend payout ratio, for meeting any obligation requiring heavy funds.

6. Policy of control
Policy of control is another factor determining dividend policy. If the directors want to retain control, they may prefer to finance growth out of retained earnings rather than by issuing new shares, and hence pay lower dividends; if they do not bother about control of the affairs, they will follow a liberal dividend policy. Thus, control is an influencing factor in framing the dividend policy.

7. Future financial requirements
It is not only the desires of the shareholders but also the future financial requirements of the company that have to be taken into consideration while making a dividend decision. The management of a concern has to reconcile the conflicting interests of the shareholders and the company's financial needs. If a company has highly profitable investment opportunities, it can convince the shareholders of the need to limit dividends in order to increase future earnings and stabilize its financial position. But when profitable investment opportunities do not exist, the company may not be justified in retaining a substantial part of its current earnings. Thus, a concern having few investment opportunities should follow a high payout ratio as compared with one having more profitable investment opportunities.

8.
Taxation policy
The taxation policy of the Government also affects the dividend decision of a firm. A high or low rate of business taxation affects the net earnings of a company and thereby its dividend policy. Similarly, a firm's dividend policy may be dictated by the income-tax status of its shareholders. If the dividend income of the shareholders is heavily taxed, they being in a high income bracket, the shareholders may forgo cash dividends and prefer bonus shares and capital gains.
9. Liquid resources
The dividend policy of a firm is also influenced by the availability of liquid resources. Although a firm may have sufficient profits available to declare a dividend, it may not be desirable to pay one if the firm does not have sufficient liquid resources. Hence the liquidity position of a company is an important consideration in paying dividends. If a company does not have liquid resources, it is better to declare a stock dividend, i.e. an issue of bonus shares, which also amounts to a distribution of the firm's earnings among the existing shareholders without affecting its cash position.

10. Time of payment of dividend
When the dividend should be paid is another consideration. Payment of dividend means an outflow of cash. It is therefore desirable to distribute dividends at a time when cash is least needed by the company, since there are peak times as well as lean periods of expenditure. A wise management should plan the payment of dividends in such a manner that there is no cash outflow at a time when the undertaking is already in need of urgent finance.

11. Repayment of loan
A company having loan indebtedness generally transfers a considerable amount of profits to reserves, unless some other arrangement is made for the redemption of the debt on maturity. This naturally lowers the rate of dividend. Sometimes the lenders put restrictions on dividend distribution for as long as their loan is outstanding. Management is bound to honour such restrictions and to limit the rate of dividend payout.

12. State of the capital market
If the capital market position in the country is comfortable and funds can be raised from different sources without much difficulty, the management may be tempted to declare a high rate of dividend to attract investors and retain the existing shareholders.
Conversely, if there is a slump in the stock market and the stockholders are not interested in investing in securities, the management should follow a conservative dividend policy, maintaining a low rate of dividend and ploughing back a sizable portion of profits to face any contingency. Likewise, if the term-lending financial institutions advance loans on stiffer terms, it may be desirable to rely on internal sources of financing, and accordingly a conservative dividend policy should be pursued.
Dividend Decision and Valuation of Firm
There are conflicting views regarding the impact of the dividend decision on the valuation of the firm and, through it, on the maximization of the firm's value. According to one school of thought, the dividend decision does not affect the shareholders' wealth and hence the valuation of the firm. According to the other school of thought, the dividend decision materially affects the shareholders' wealth and also the valuation of the firm. The views of the two schools of thought fall under two groups:
1. The Irrelevance Concept of Dividend, or the Theory of Irrelevance, and
2. The Relevance Concept of Dividend, or the Theory of Relevance
Theory Of Irrelevance
a) Residual Approach
b) MM Model

a) Residual Approach
According to this approach, dividends are a residual: earnings are first applied to profitable investment opportunities, and only the earnings not required for the business may be distributed as dividends. Thus, the decision to pay dividends or retain the earnings may be taken as a residual decision. This theory assumes that investors do not differentiate between dividends and retentions by the firm; their basic desire is to earn a higher return on their investment. If the firm has profitable investment opportunities giving a higher rate of return than the cost of retained earnings, the investors will be content with the firm retaining the earnings to finance them. However, if the firm is not in a position to find profitable investment opportunities, the investors will prefer to receive the earnings in the form of dividends. Thus, a firm should retain the earnings if it has profitable investment opportunities; otherwise it should pay them out as dividends.

b) MM Approach
The MM hypothesis rests on the following assumptions:
1) There are perfect capital markets
2) Investors behave rationally
3) Information about the company is available to all without any cost
4) There are no floatation or transaction costs
5) No investor is large enough to affect the market price of shares
6) There are either no taxes, or no differences in the rates of taxes applicable to dividends and capital gains
7) The firm has a rigid investment policy
8) There is no risk or uncertainty in regard to the future of the firm
Argument of the MM Hypothesis
The argument given by MM in support of their hypothesis is that whatever increase in the value of the firm results from the payment of dividends will be exactly offset by the decline in the market price of the shares on account of the external financing required, so that the shareholder is indifferent between dividends and retention.
Under the MM hypothesis, the market price of a share at the beginning of the period equals the present value of the dividend paid at the end of the period plus the market price at the end of the period:

P0 = (D1 + P1) / (1 + Ke)

Where
P0 = Market price per share at the beginning of the period, or prevailing market price of a share
D1 = Dividend to be received at the end of the period
P1 = Market price per share at the end of the period
Ke = Cost of equity capital, or rate of capitalization

The above formula can also be written as:

P1 = P0 (1 + Ke) - D1

The MM hypothesis can be explained in another form as well, presuming that the investment required by the firm on account of the payment of dividends is financed out of a new issue of equity shares. In such a case, the number of shares to be issued can be computed with the help of the following equation:

m = [I - (E - nD1)] / P1

Further, the value of the firm can be ascertained with the help of the following formula:

nP0 = [(n + m) P1 - (I - E)] / (1 + Ke)

Where
m = number of new shares to be issued
I = Investment required
E = Total earnings of the firm during the period
P1 = Market price per share at the end of the period
Ke = Cost of equity capital
n = number of shares outstanding at the beginning of the period
D1 = Dividend to be paid at the end of the period

Problem 1
ABC Ltd. belongs to a risk class for which the appropriate capitalization rate is 10%. It currently has 5,000 outstanding shares selling at Rs. 100 each. The firm is contemplating the declaration of a dividend of Rs. 6 per share at the end of the current financial year. The company expects to have a net income of Rs. 50,000 and has a proposal for making new investments of Rs. 1,00,000. Show that under the MM hypothesis the payment of dividend does not affect the value of the firm.
A) Valuation of the firm when dividends are paid

Step 1: Price of the share at the end of the current financial year
P1 = P0 (1 + Ke) - D1
   = 100 (1 + 0.10) - 6
   = 110 - 6
   = Rs. 104

Step 2: Number of new shares to be issued
m = [I - (E - nD1)] / P1
  = [1,00,000 - (50,000 - 5,000 × 6)] / 104
  = 80,000 / 104

Step 3: Value of the firm
nP0 = [(n + m) P1 - (I - E)] / (1 + Ke)
    = [(5,000 + 80,000/104) × 104 - (1,00,000 - 50,000)] / 1.10
    = [5,20,000 + 80,000 - 50,000] / 1.10
    = 5,50,000 / 1.10
    = Rs. 5,00,000
B) Valuation of the firm when dividends are not paid

Step 1: Price of the share at the end of the current financial year
P1 = P0 (1 + Ke) - D1
   = 100 (1 + 0.10) - 0
   = Rs. 110

Step 2: Number of new shares to be issued
m = [I - (E - nD1)] / P1
  = [1,00,000 - (50,000 - 0)] / 110
  = 50,000 / 110

Step 3: Value of the firm
nP0 = [(n + m) P1 - (I - E)] / (1 + Ke)
    = [(5,000 + 50,000/110) × 110 - (1,00,000 - 50,000)] / 1.10
    = [5,50,000 + 50,000 - 50,000] / 1.10
    = 5,50,000 / 1.10
    = Rs. 5,00,000

Hence, whether dividends are paid or not, the value of the firm remains the same, i.e. Rs. 5,00,000.
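The two valuations above follow the same three steps, so they can be checked with a short sketch. `mm_firm_value` is a hypothetical helper (not from the text) that applies the MM equations for P1, m and nP0; setting the dividend to zero reproduces case B.

```python
def mm_firm_value(n, p0, ke, earnings, investment, dividend):
    """Value of the firm under the MM hypothesis.

    n: shares outstanding, p0: current price per share,
    ke: capitalization rate, earnings: total earnings E,
    investment: required investment I, dividend: dividend per share D1.
    """
    p1 = p0 * (1 + ke) - dividend                        # Step 1: year-end price
    m = (investment - (earnings - n * dividend)) / p1    # Step 2: new shares issued
    return ((n + m) * p1 - (investment - earnings)) / (1 + ke)  # Step 3: nP0

with_div = mm_firm_value(5000, 100, 0.10, 50_000, 100_000, 6)
without_div = mm_firm_value(5000, 100, 0.10, 50_000, 100_000, 0)
print(round(with_div), round(without_div))  # both Rs. 5,00,000
```

The dividend parameter drops out of the final value, which is exactly the irrelevance claim.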
Criticism of the MM Approach
The MM hypothesis has been criticized on account of its unrealistic assumptions, as given below:
1. Perfect capital markets do not exist in reality
2. Information about the company is not available to all persons
3. Firms have to incur floatation costs while issuing securities
4. Taxes do exist, and there is normally different tax treatment for dividends and capital gains
5. Firms do not follow a rigid investment policy
6. Investors have to pay brokerage, fees etc. while carrying out any transaction
7. Shareholders may prefer current income to future gains

2. Theory of Relevance
The other school of thought on the dividend decision holds that dividend decisions considerably affect the value of the firm. The main advocates of this school are Walter (Walter's Approach) and Gordon (Gordon's Approach).

Walter's Approach
Prof. Walter's approach supports the doctrine that dividend decisions are relevant and affect the value of the firm. The relationship between the internal rate of return (r) earned by the firm and its cost of capital or required rate of return (k) is very significant in determining the dividend policy that sub-serves the ultimate goal of maximizing the wealth of the shareholders. According to Prof. Walter, if r > k, i.e. if the firm earns a higher rate of return on its investments than the required rate of return, the firm should retain the earnings and plough the entire earnings back into the firm. Such firms are termed growth firms, and the optimum payout in their case would be zero; this would maximize the value of the shares. In the case of declining firms, which do not have profitable investments, i.e. where r < k, the shareholders stand to gain if the firm distributes its earnings. The implication of r < k is that the shareholders can earn a higher return by investing elsewhere.
For such firms the optimum payout would be 100%, and the firm should distribute its entire earnings as dividends. In the case of normal firms, where r = k, the dividend policy does not affect the market value of the shares, as the shareholders get the same return from the firm as they expect. For such firms there is no optimum dividend payout, and the value of the firm does not change with a change in the dividend rate.

Assumptions of the Model
1) The investments of the firm are financed through retained earnings only; the firm does not use external sources of funds
2) The internal rate of return (r) and the cost of capital (k) of the firm are constant, even as additional investments are undertaken
3) Earnings and dividends do not change while determining the value
4) The firm has a very long life
Formula

P = [D + (r/Ke)(E - D)] / Ke

Where
P = Market price per share
D = Dividend per share
r = Internal rate of return
E = Earnings per share
Ke = Cost of equity capital
The following illustration explains the concept.
Capitalization rate (Ke) = 10%; Earnings per share (E) = Rs. 10; assumed rates of return on investments (r): (1) 15%, (2) 8% and (3) 10%. Show the effect of dividend policy on the market price of the shares, using Walter's model, when the payout ratio (D/P) is a) 0%, b) 25%, c) 50%, d) 75% and e) 100%.

Dividend Policy and Value of Shares (Walter's Model) – when r = 15%
(a) D/P ratio = 0% (dividend per share = zero):     P = [0 + (0.15/0.10)(10 - 0)] / 0.10 = Rs. 150.00
(b) D/P ratio = 25% (dividend per share = Rs. 2.5): P = [2.5 + (0.15/0.10)(10 - 2.5)] / 0.10 = Rs. 137.50
(c) D/P ratio = 50% (dividend per share = Rs. 5):   P = [5.0 + (0.15/0.10)(10 - 5.0)] / 0.10 = Rs. 125.00
(d) D/P ratio = 75% (dividend per share = Rs. 7.5): P = [7.5 + (0.15/0.10)(10 - 7.5)] / 0.10 = Rs. 112.50
(e) D/P ratio = 100% (dividend per share = Rs. 10): P = [10 + (0.15/0.10)(10 - 10)] / 0.10 = Rs. 100.00

When r = 8%
(a) D/P ratio = 0%:   P = [0 + (0.08/0.10)(10 - 0)] / 0.10 = Rs. 80.00
(b) D/P ratio = 25%:  P = [2.5 + (0.08/0.10)(10 - 2.5)] / 0.10 = Rs. 85.00
(c) D/P ratio = 50%:  P = [5.0 + (0.08/0.10)(10 - 5.0)] / 0.10 = Rs. 90.00
(d) D/P ratio = 75%:  P = [7.5 + (0.08/0.10)(10 - 7.5)] / 0.10 = Rs. 95.00
(e) D/P ratio = 100%: P = [10 + (0.08/0.10)(10 - 10)] / 0.10 = Rs. 100.00

When r = 10%
(a) D/P ratio = 0%:   P = [0 + (0.10/0.10)(10 - 0)] / 0.10 = Rs. 100.00
(b) D/P ratio = 25%:  P = [2.5 + (0.10/0.10)(10 - 2.5)] / 0.10 = Rs. 100.00
(c) D/P ratio = 50%:  P = [5.0 + (0.10/0.10)(10 - 5.0)] / 0.10 = Rs. 100.00
(d) D/P ratio = 75%:  P = [7.5 + (0.10/0.10)(10 - 7.5)] / 0.10 = Rs. 100.00
(e) D/P ratio = 100%: P = [10 + (0.10/0.10)(10 - 10)] / 0.10 = Rs. 100.00

Thus, when r > Ke the price falls as the payout rises; when r < Ke the price rises with the payout; and when r = Ke the price is unaffected by the payout.

Criticism of the Model
The model has been criticized on the following grounds:
1) The basic assumption that investments are financed out of retained earnings alone is seldom true in the real world; firms do raise funds by external financing
2) The internal rate of return (r) does not remain constant; with increased investment the rate of return also changes
3) The assumption that the cost of capital (k) remains constant also does not hold good; as the firm's risk pattern does not remain constant, it is not proper to assume that k will always remain constant
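The Walter valuations worked above can be reproduced with a short sketch. `walter_price` is a hypothetical helper implementing the formula P = [D + (r/Ke)(E - D)] / Ke from the text.

```python
def walter_price(d, e, r, ke):
    """Walter's model: P = [D + (r/Ke)(E - D)] / Ke."""
    return (d + (r / ke) * (e - d)) / ke

# E = Rs. 10, Ke = 10%, for the three assumed rates of return:
for r in (0.15, 0.08, 0.10):
    prices = [round(walter_price(d, 10, r, 0.10), 2) for d in (0, 2.5, 5, 7.5, 10)]
    print(r, prices)
# r=0.15 -> [150.0, 137.5, 125.0, 112.5, 100.0]  (price falls as payout rises)
# r=0.08 -> [80.0, 85.0, 90.0, 95.0, 100.0]      (price rises with payout)
# r=0.10 -> [100.0, 100.0, 100.0, 100.0, 100.0]  (payout is irrelevant)
```

The loop makes the growth / declining / normal firm distinction visible at a glance.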
Gordon's Approach
According to this model, the value of a rupee of dividend income is more than the value of a rupee of capital gain. This is on account of the uncertainty of the future, because of which shareholders discount future dividends at a higher rate. According to Gordon, the market value of a share is equal to the present value of the future stream of dividends. As investors are rational, they want to avoid risk; the term risk here refers to the possibility of not getting a return on investment. The argument underlying Gordon's model of dividend relevance is also described as the bird-in-the-hand argument. That a bird in hand is better than two in the bush rests on the logic that what is available at present is preferable to what may be available in the future. The future is uncertain, and the more distant the future, the more uncertain is the receipt of a return. Therefore, omission of dividends, or payment of low dividends, lowers the value of the shares. Shareholders discount the value of the shares of a firm which postpones dividends, and the discount rate varies with the retention rate or level of retained earnings.
Gordon's model is symbolically expressed as:

P = E (1 - b) / (Ke - br)

Where
P = Price of the share
E = Earnings per share
b = Retention ratio, i.e. the percentage of earnings retained
1 - b = D/P ratio, i.e. the percentage of earnings distributed as dividends
Ke = Cost of equity capital
br = growth rate, where r = rate of return on investment
Gordon's model is illustrated below. Given r = 12% and E = Rs. 20, determine the value of the share, assuming the following:

Case   D/P ratio (1-b)   Retention ratio (b)   Ke (%)
1      10                90                    20
2      20                80                    19
3      30                70                    18
4      40                60                    17
5      50                50                    16
6      60                40                    15
7      70                30                    14
Dividend policy and value of shares using Gordon's model:

a) D/P ratio 10%, retention ratio 90%: br = 0.9 × 0.12 = 0.108
   P = 20 (1 - 0.9) / (0.20 - 0.108) = 2 / 0.092 = Rs. 21.74
b) D/P ratio 20%, retention ratio 80%: br = 0.8 × 0.12 = 0.096
   P = 20 (1 - 0.8) / (0.19 - 0.096) = 4 / 0.094 = Rs. 42.55
c) D/P ratio 30%, retention ratio 70%: br = 0.7 × 0.12 = 0.084
   P = 20 (1 - 0.7) / (0.18 - 0.084) = 6 / 0.096 = Rs. 62.50
d) D/P ratio 40%, retention ratio 60%: br = 0.6 × 0.12 = 0.072
   P = 20 (1 - 0.6) / (0.17 - 0.072) = 8 / 0.098 = Rs. 81.63
e) D/P ratio 50%, retention ratio 50%: br = 0.5 × 0.12 = 0.060
   P = 20 (1 - 0.5) / (0.16 - 0.060) = 10 / 0.100 = Rs. 100.00
f) D/P ratio 60%, retention ratio 40%: br = 0.4 × 0.12 = 0.048
   P = 20 (1 - 0.4) / (0.15 - 0.048) = 12 / 0.102 = Rs. 117.65
g) D/P ratio 70%, retention ratio 30%: br = 0.3 × 0.12 = 0.036
   P = 20 (1 - 0.3) / (0.14 - 0.036) = 14 / 0.104 = Rs. 134.62
From the above it is observed that the dividend decision has a bearing on the market price of the share: under Gordon's assumptions, the market price of the share is favorably affected by higher dividends.
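The Gordon illustration above can be reproduced with a small sketch. `gordon_price` is a hypothetical helper implementing P = E(1 - b) / (Ke - br), with E = Rs. 20 and r = 12% as in the example.

```python
def gordon_price(e, b, ke, r):
    """Gordon's model: P = E(1 - b) / (Ke - b*r)."""
    return e * (1 - b) / (ke - b * r)

# (retention ratio b, capitalization rate Ke) pairs from the illustration:
cases = [(0.9, 0.20), (0.8, 0.19), (0.7, 0.18), (0.6, 0.17),
         (0.5, 0.16), (0.4, 0.15), (0.3, 0.14)]
for b, ke in cases:
    print(f"b={b}: Rs. {round(gordon_price(20, b, ke, 0.12), 2)}")
# prices rise from 21.74 (b = 0.9) to 134.62 (b = 0.3) as the payout increases
```

The monotonic rise in price as retention falls is the model's relevance result in miniature.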
COST OF CAPITAL
Compare and Contrast Various Long-Term Sources
Normally, the financial requirements of a concern are financed either through owned capital or through borrowed capital. The following sources are available for long-term financing:
1) Share capital: a) Preference b) Equity
2) Debentures
3) Ploughing back of profits

Advantages of preference shares
1. Almost a permanent source of finance
2. As preference shareholders have no voting rights, control of the company remains with the equity shareholders
3. By issuing redeemable preference shares the company can have a flexible capital structure
4. No need to mortgage any property
5. Trading on equity is facilitated because of the fixed financial obligation

Disadvantages
1. The cost of raising preference share capital is relatively high
2. It is a permanent burden on the company (especially in the case of cumulative preference shares)

Advantages of equity shares
1. No obligation to pay a fixed rate of dividend
2. No charge on the assets of the company
3. A permanent source of finance
4. Voting rights for the holders
5. The value of the firm is mainly dependent on equity
Disadvantages
1. Trading on equity is not possible if the company issues only equity
2. It does not provide flexibility in the capital structure
3. It may cause over-capitalization
4. Equity holders can put obstacles in the way of management by manipulation and by organizing themselves
5. Higher rates of dividend during prosperous periods may lead to speculation
6. It is not suited to those who need security and fixed returns on their investments
Debentures

Difference between shares and debentures
1) Shares are part of owned capital; debentures are part of debt
2) Dividend is the reward on shares; interest is the reward on debentures
3) Shares involve no fixed obligation; debentures involve a fixed obligation
4) Dividend is a charge against the P&L Appropriation A/c; interest is a charge against the P&L A/c
5) Shares carry voting rights; debentures carry no voting rights
6) Shares are not redeemable (except redeemable preference shares); debentures are normally redeemable after a certain period
7) On liquidation, share capital is payable after meeting all outside liabilities; debentures are payable in priority over share capital

Advantages
1) Control of the company is not surrendered to the debenture holders
2) Trading on equity is possible
3) Interest on debentures is an allowable expenditure under the Income Tax Act
4) They provide flexibility, as they can be redeemed after a certain period
Disadvantages
1) The cost of raising debentures is relatively high due to high stamp duty
2) As they bear high denominations, common people cannot buy them
3) They are not suited to newly started companies

Ploughing Back of Profits
Advantages
1) An economical source of finance
2) A cushion for expansion programmes and dividends
3) Aids capital formation
4) Makes the company self-dependent
5) Ensures the smooth and undisrupted running of the business
Disadvantages
1) Manipulation in the value of shares
2) Over-capitalization
3) Misuse of savings by the authorities
4) Opportunity cost
Cost of Retained Earnings
Some people regard retained earnings as a source of capital without any cost, while others think that retained earnings do have a cost. Retained earnings are profits which have not been distributed by the company to its shareholders in the form of dividend and have been retained in the company to be used for future expansion. They are represented by the uncommitted or free reserves and surplus. The company is not required to pay any dividend on retained earnings, so this source of finance is often regarded as cost free. This view seems to be based on the assumption that the company is an entity separate from its shareholders and that it pays nothing to the shareholders for withholding these earnings. But this view is not correct. Retained earnings represent the dividends foregone by the shareholders, who could have put the funds elsewhere. From the shareholders' point of view, the opportunity cost of retained earnings is the rate of return they could obtain by investing the after-tax dividend in other securities of equal quality, had the earnings been paid to them as cash dividend. An individual pays income tax on dividends, so he would only be able to invest the amount remaining after paying personal income tax on such earnings. This can be expressed as:

Kr = Ke (1 - tp)(1 - B)
Where
Kr = Cost of retained earnings
Ke = Required rate of return of the shareholders
tp = Personal tax rate
B = Brokerage on the purchase of securities

For example, a company retains Rs. 50,000 out of its current earnings. The expected rate of return to the shareholders, had they invested the funds elsewhere, is 10%. The brokerage is 2% and the shareholders are in the 30% tax bracket. The cost of retained earnings is:

Kr = Ke (1 - tp)(1 - B) = 0.10 (1 - 0.30)(1 - 0.02) = 0.10 × 0.70 × 0.98 = 0.0686, or 6.86%

The cost of retained earnings is always less than the shareholders' required rate of return. It would be equal to the cost of equity capital (i.e. Kr = Ke) if the shareholders paid no income tax on dividends and incurred no brokerage cost when investing the dividends received.
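The example can be checked with a one-line sketch (hypothetical helper, applying the Kr formula above):

```python
def cost_of_retained_earnings(ke, personal_tax, brokerage):
    """Kr = Ke * (1 - tp) * (1 - B)."""
    return ke * (1 - personal_tax) * (1 - brokerage)

# Ke = 10%, personal tax 30%, brokerage 2%:
kr = cost_of_retained_earnings(0.10, 0.30, 0.02)
print(round(kr, 4))  # 0.0686, i.e. 6.86%
```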
Cost of Preference Shares
Preference shares are fixed-cost-bearing securities: the rate of dividend is fixed well in advance, at the time of issue. The cost of preference share capital is therefore the current dividend yield on the net proceeds. The formula for calculating the cost of preference shares is:

Kp = R / P                      (no dividend tax)
Kp = (R + Dividend tax) / P     (when dividend tax is payable)

Where
Kp = Cost of preference shares
R = Preference dividend per share
P = Net proceeds (sale price less floatation cost)

For example, suppose a company issues 9% preference shares of Rs. 100 each at a premium of Rs. 5 per share, with issue expenses of Rs. 3 per share. The cost of preference capital is:

Kp = 9 / (100 + 5 - 3) = 9 / 102 = 8.82%
Problem: A company issues 14% irredeemable preference shares of the face value of Rs. 100 each. Floatation costs are estimated at 5% of the expected sale price. What is Kp if the preference shares are issued 1) at par, 2) at a 10% premium, 3) at a 5% discount? What if there is a 10% tax on dividends?

When there is no dividend tax
1. Issued at par:      Kp = 14 / (100 - 5) = 14 / 95 = 14.74%
2. Issued at premium:  Kp = 14 / (110 - 5.5) = 14 / 104.5 = 13.40%
3. Issued at discount: Kp = 14 / (95 - 4.75) = 14 / 90.25 = 15.51%

When there is a 10% dividend tax
1. Issued at par:      Kp = [14 + (14 × 10%)] / 95 = 15.4 / 95 = 16.21%
2. Issued at premium:  Kp = 15.4 / 104.5 = 14.74%
3. Issued at discount: Kp = 15.4 / 90.25 = 17.06%
The cost of preference share capital is not adjusted for taxes, because the dividend on preference capital is paid after the company has paid its tax; the dividend is not a deductible expenditure under the tax laws. Moreover, the company is not under a legal obligation to pay the dividend on preference shares. Thus, the cost of preference capital is substantially greater than the cost of debt.
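Both preference-share examples can be verified with a short sketch. `cost_of_preference` is a hypothetical helper; the dividend-tax case simply grosses up the dividend before dividing by the net proceeds.

```python
def cost_of_preference(dividend, net_proceeds, dividend_tax_rate=0.0):
    """Kp = (R + dividend tax on R) / net proceeds."""
    return dividend * (1 + dividend_tax_rate) / net_proceeds

# 9% share of Rs. 100 issued at a Rs. 5 premium with Rs. 3 issue expenses:
print(round(cost_of_preference(9, 100 + 5 - 3) * 100, 2))   # 8.82
# 14% share issued at par, 5% floatation cost, 10% dividend tax:
print(round(cost_of_preference(14, 95, 0.10) * 100, 2))     # 16.21
```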
Cost of Redeemable Preference Shares
The explicit cost of redeemable preference shares is the discount rate that equates the net proceeds of the sale of the preference shares with the present value of the future dividends and the principal repayment. (The explicit cost of any source of capital is the discount rate that equates the present value of the cash inflows that are incremental to the financing opportunity with the present value of its incremental cash outflows.) The formula is:

P0 (1 - f) = Σ (t = 1 to n) [ dt / (1 + Kp)^t ] + Pn / (1 + Kp)^n

Where
P0 = Expected sale price of the preference shares
f = Floatation cost as a percentage of P0
dt = Dividend paid on the preference shares in year t
Pn = Repayment of the preference capital at maturity (year n)

Problem: ABC Ltd. has issued 14% preference shares of the face value of Rs. 100 each, to be redeemed after 10 years. Floatation cost is expected to be 5%. Determine the cost of the preference shares (Kp).

The net proceeds are Rs. 95, so Kp is the rate that makes the present value of Rs. 14 per year for 10 years plus Rs. 100 in year 10 equal to Rs. 95. Since the dividend rate is 14%, Kp is likely to lie between 14% and 15%.

Determination of the PV at 14% and 15%:

Year   Cash outflow   PV factor (14%)   PV factor (15%)   PV at 14%    PV at 15%
1-10   Rs. 14         5.216             5.019             Rs. 73.02    Rs. 70.27
10     Rs. 100        0.270             0.247             Rs. 27.00    Rs. 24.70
                                         Total             Rs. 100.02   Rs. 94.97

As the PV at 15% (Rs. 94.97) is closest to the net proceeds of Rs. 95, Kp = 15% (approximately).
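The trial-and-error search above is just an internal-rate-of-return calculation, so a small bisection sketch (hypothetical helpers, not from the text) finds the same answer:

```python
def pv_of_preference(kp, dividend, redemption, years):
    """PV of the dividend stream plus the redemption amount at rate kp."""
    pv_dividends = sum(dividend / (1 + kp) ** t for t in range(1, years + 1))
    return pv_dividends + redemption / (1 + kp) ** years

# Net proceeds Rs. 95 (Rs. 100 face value less 5% floatation cost);
# bisect on kp: the PV falls as kp rises.
lo, hi = 0.01, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if pv_of_preference(mid, 14, 100, 10) > 95:
        lo = mid
    else:
        hi = mid
print(round(lo * 100, 1))   # 15.0 (% per annum)
```

Bisection replaces the manual interpolation between the 14% and 15% PV tables.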
Cost of perpetual debt
The cost of debt is the contractual interest rate, adjusted for the tax liability of the firm.

Before-tax cost of debt:  Kd = I / SV
After-tax cost of debt:   Kd = I (1 - t) / SV

Where
I = Annual interest
SV = Sale value (net proceeds) of the debentures
t = Tax rate

Problem: A company issues 15% debentures of Rs. 1,00,000. The tax rate is 35%. Determine the before-tax and after-tax cost of debt if the debt is issued 1) at par, 2) at a 10% discount and 3) at a 10% premium.

Debt issued at par
1) Before tax: Kd = 15,000 / 1,00,000 = 15%
2) After tax:  Kd = 15,000 (1 - 0.35) / 1,00,000 = 9.75%

Debt issued at a 10% discount
1) Before tax: Kd = 15,000 / 90,000 = 16.67%
2) After tax:  Kd = 15,000 (1 - 0.35) / 90,000 = 10.83%

Debt issued at a 10% premium
1) Before tax: Kd = 15,000 / 1,10,000 = 13.64%
2) After tax:  Kd = 15,000 (1 - 0.35) / 1,10,000 = 8.86%
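The perpetual-debt figures can be checked with a sketch (hypothetical helper implementing Kd = I(1 - t)/SV; pass a zero tax rate for the before-tax cost):

```python
def cost_of_perpetual_debt(interest, sale_value, tax_rate=0.0):
    """Kd = I * (1 - t) / SV; tax_rate=0 gives the before-tax cost."""
    return interest * (1 - tax_rate) / sale_value

for sv in (100_000, 90_000, 110_000):     # par, 10% discount, 10% premium
    before = cost_of_perpetual_debt(15_000, sv)
    after = cost_of_perpetual_debt(15_000, sv, 0.35)
    print(sv, round(before * 100, 2), round(after * 100, 2))
# 100000 -> 15.0 / 9.75; 90000 -> 16.67 / 10.83; 110000 -> 13.64 / 8.86
```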
Cost of Redeemable Debt
In calculating the cost of redeemable debt, account has to be taken, in addition to the interest payments, of the repayment of the principal. An approximate formula is:

Kd = [ I (1 - t) + (RV - SV) / Nm ] / [ (RV + SV) / 2 ]

Where
I = Annual interest
t = Tax rate
RV = Redemption value of the debt (face value plus any premium on redemption)
SV = Net sale proceeds (face value less discount on issue and floatation costs, plus any premium on issue)
Nm = Term of the debt in years

The difference RV - SV thus collects the floatation cost, the discount on issue and the premium on redemption, less any premium on issue, spread over the life of the debt.

Problem: A company issues new 15% debentures of Rs. 1,000 face value, to be redeemed after 10 years. The debentures are expected to be sold at a 5% discount and will also involve floatation costs of 2.5%. The company's tax rate is 35%. What is the cost of debt?

SV = 1,000 - 50 - 25 = Rs. 925;  RV = Rs. 1,000

Kd = [150 (1 - 0.35) + (50 + 25)/10] / [(925 + 1,000)/2]
   = [97.5 + 7.5] / 962.5
   = 10.9%
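The approximation can be sketched as follows (hypothetical helper, reproducing the Rs. 1,000 debenture example):

```python
def cost_of_redeemable_debt(interest, tax_rate, sv, rv, years):
    """Approximate Kd = [I(1-t) + (RV - SV)/n] / [(RV + SV)/2]."""
    return (interest * (1 - tax_rate) + (rv - sv) / years) / ((rv + sv) / 2)

sv = 1000 - 50 - 25          # 5% discount and 2.5% floatation cost -> Rs. 925
kd = cost_of_redeemable_debt(150, 0.35, sv, 1000, 10)
print(round(kd * 100, 1))    # 10.9
```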
Cost of Equity
The cost of equity is difficult to measure. There are two methods of calculating the cost of equity capital: the Dividend Approach and the Capital Asset Pricing Model (CAPM) Approach.

Dividend Approach
Under this method the cost of capital is based on the dividends expected by the equity shareholders, assuming the dividend keeps growing every year at a constant rate. The formula is:

Ke = D1 / P0 + g

Where
D1 = Expected dividend at the end of the year
P0 = Net proceeds or current market price
g = Expected growth rate of the dividend

The market value of the share in a future year is calculated with the same model: for example, the price at the end of the first year is P1 = D2 / (Ke - g), and the price at the end of the second year is P2 = D3 / (Ke - g).

1) Suppose the dividend of the company is expected to be Re. 1 per share next year and is expected to grow at 6% per year perpetually. Assume a market price of Rs. 25. Determine the cost of equity.

Ke = D1 / P0 + g = 1/25 + 0.06 = 0.04 + 0.06 = 10%
To predict the price at the end of a future year:

Price at the end of the first year:  P1 = D2 / (Ke - g) = 1.06 / (0.10 - 0.06) = Rs. 26.50
Price at the end of the second year: P2 = D3 / (Ke - g) = 1.1236 / (0.10 - 0.06) = Rs. 28.09
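The dividend-growth computations can be sketched as follows (hypothetical helpers for Ke = D1/P0 + g and P = D/(Ke - g)):

```python
def cost_of_equity(d1, p0, g):
    """Dividend growth model: Ke = D1 / P0 + g."""
    return d1 / p0 + g

def price_at(d_next, ke, g):
    """Price one period before the dividend d_next: P = d_next / (Ke - g)."""
    return d_next / (ke - g)

ke = cost_of_equity(1.0, 25.0, 0.06)
print(round(ke * 100, 2))                        # 10.0 (%)
print(round(price_at(1.0 * 1.06, ke, 0.06), 2))  # 26.5  (end of year 1)
```

Each year's price simply capitalizes the following year's dividend at the spread Ke - g.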
CAPM Approach
CAPM explains that investors are risk averse and hence think rationally while investing in securities. They do not consider an investment in isolation; they evaluate each investment proposal in the context of their entire portfolio. Risk can be either diversifiable or non-diversifiable. Diversifiable risk may take the form of management capabilities and decisions, strikes, unique government regulations, availability or otherwise of raw materials, competition, the level of financial and operating leverage, and so on. This type of risk can be minimized or eliminated through diversification. Non-diversifiable risk is attributable to factors that affect all firms: interest rates, changes in investor expectations about overall performance, etc. A major problem with the CAPM approach is the availability of data, which may not be readily available, or, in a country like India, may be altogether absent. The method is symbolically expressed as:

Ke = Rf + b (Km - Rf)

Where
Ke = Cost of equity
Rf = The rate of return required on a risk-free asset
Km = The required rate of return on the market portfolio of assets, which can be viewed as the average rate of return on all assets
b = The beta coefficient of the security
1) A Ltd. finds that the risk-free rate of return on investment is 10%, the firm's beta equals 1.50, and the return on the market portfolio equals 12.5%.

Ke = 10% + 1.5 × (12.5% - 10%) = 13.75%

2) The following information is available to you:

Security                    Initial price   Dividends   Year-end market price   Beta
a) Cement Co (equity)       Rs. 25          Rs. 2       Rs. 50                  0.80
   Steel Co (equity)        Rs. 35          Rs. 2       Rs. 60                  0.70
   Liquor Ltd (equity)      Rs. 45          Rs. 2       Rs. 135                 0.50
b) Govt. of India Bonds     Rs. 1,000       Rs. 140     Rs. 1,005               0.99
You are requested to calculate 1) the expected rate of return on the market portfolio and 2) the expected return on each security, using the capital asset pricing model.
Expected return on the market portfolio

Security                    Dividends   Capital appreciation   Total return   Investment
A. Cement Ltd.              Rs. 2       Rs. 25                 Rs. 27         Rs. 25
   Steel Ltd.               Rs. 2       Rs. 25                 Rs. 27         Rs. 35
   Liquor Ltd.              Rs. 2       Rs. 90                 Rs. 92         Rs. 45
B. Govt. of India Bonds     Rs. 140     Rs. 5                  Rs. 145        Rs. 1,000
   Total                                                       Rs. 291        Rs. 1,105
Expected rate of return on the market portfolio = Rs. 291 / Rs. 1,105 = 26.33%

Expected return on each security: Ke = Rf + b (Km - Rf), taking the risk-free rate Rf as 14%
Cement Ltd.  = 14% + 0.80 (26.33% - 14%) = 23.86%
Steel Ltd.   = 14% + 0.70 (26.33% - 14%) = 22.63%
Liquor Ltd.  = 14% + 0.50 (26.33% - 14%) = 20.16%
Govt. Bonds  = 14% + 0.99 (26.33% - 14%) = 26.21%
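Both CAPM examples can be checked with a one-line sketch (hypothetical helper):

```python
def capm(rf, beta, km):
    """Ke = Rf + beta * (Km - Rf)."""
    return rf + beta * (km - rf)

# Problem 1: Rf = 10%, beta = 1.5, Km = 12.5%
print(round(capm(0.10, 1.5, 0.125) * 100, 2))   # 13.75
# Problem 2, Cement Ltd.: Rf = 14%, beta = 0.8, Km = 26.33%
print(round(capm(0.14, 0.8, 0.2633) * 100, 2))  # 23.86
```

A beta below 1 pulls the expected return toward the risk-free rate; a beta above 1 amplifies the market premium.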
Guidelines pertaining to the issue of bonus shares
1) There should be a provision in the articles of association for the capitalization of reserves. If not, the company should produce a resolution passed at the general body meeting making provision in the articles for capitalization
2) If, consequent to the issue of bonus shares, the subscribed and paid-up capital exceeds the authorized capital, a resolution passed at the general body meeting in respect of the increase in the authorized capital is necessary
3) The company should furnish a resolution passed at the general body meeting for the bonus issue before an application is made to the CCI. In the general body resolution the management's intention regarding the rate of dividend to be declared in the year immediately after the bonus issue should be indicated
4) The bonus issue is permitted to be made out of free reserves built out of genuine profits, or out of share premium collected in cash only
5) Reserves created by the revaluation of fixed assets are not permitted to be capitalized
6) Development rebate reserve / investment allowance reserve is considered a free reserve for the purpose of the residual reserve test
7) All contingent liabilities disclosed in the audited accounts which have a bearing on the net profits shall be taken into account in the calculation of the minimum residual reserves
8) The residual reserves after the proposed capitalization should be at least 40% of the increased paid-up capital
9) 30% of the average pre-tax profits of the company for the previous 3 years should yield a rate of dividend of 10% on the expanded capital base of the company
59 10) There must be an interval of 36 months between two successive bonus issues 11) Bonus issues are not permitted unless the partly paid shares, if any, are made fully paid up 12)No bonus issue will be permitted if there is sufficient reason to believe that the company has defaulted in respect of the payment of statutory dues of the employees such as contribution to PF, gratuity, bonus etc 13) Application for issue of bonus shares should be made within one month of the bonus announcement by the board of directors of the company 14) In case where there is any default in the payment of any term loans outstanding to any public financial institution a no objection letter from that institution in respect of the issue of bonus shares should be furnished by the companies concerned with the bonus issue application 15)If there is a composite proposal for the issue of bonus shares and right shares , the bonus shares will be sanctioned first and the right shares only after some time Stock Splits (Sub –division of shares) One share of Rs. 50/- may be subdivided into Rs. 5 new shares of Rs. 10/- each Stock split refers to an act of subdividing shares of higher denomination into shares of lower denominations. 
The sub division takes place when the present face value of the shares is considered too high and when the shares of such high denominations are not popular on the Stock Exchange 1) Board meeting will consider the plan for stock splits and fix the date of extraordinary general meeting 2) Notices and circulars relating to the extraordinary general meeting will be issued to members 3) An ordinary resolution (if the articles do not require otherwise) is sufficient to sanction subdivision o shares 4) Notice of subdivision should be given to the Registrar with in 30 days 5) Notice of closure of transfer will be issued and the list of members will be prepared 6) A circular to the member will be issued requesting them to surrender their share certificate for cancellation 7) The old certificates will be cancelled and new certificates will be sealed and signed, necessary entries will be made in the registrar of members, shares will be re-numbered and the new certificates will be issued to the members in due course The impact of Bonus and stock splits is one and the same- increase in the number of shares. Capital structure and Dividend policies Capital structure refers to the composition of its capitalization and it includes all long term capital resources, viz., loans, reserves, shares and debentures. The capital structure is made up of debit and equity and refers to permanent financing of a firm. It is composed of long term debt, preference share and share holders’ fund.
Theories of capital structure
1) Net income approach
2) Net operating income approach
3) The traditional approach
4) Modigliani and Miller approach
1. Net Income Approach

According to this approach, a firm can minimize its cost of capital and increase the value of the firm, as well as the market price of its equity shares, by using debt financing to the maximum possible extent. This approach is based upon the following assumptions:
a) The cost of debt is less than the cost of equity
b) There is no tax
c) The risk perception of the investors is not changed by the use of debt

The line of argument in favour of the net income approach is that as the proportion of debt financing increases, the proportion of the more expensive source of funds (equity) decreases. This results in a decrease in the overall cost of capital, leading to an increase in the value of the firm. The reasons for assuming the cost of debt to be less than the cost of equity are that interest rates are usually lower than dividend rates due to the element of risk, and that interest enjoys a tax benefit as it is a deductible expense.

This theory is explained in the following illustration:
a) A company expects a net income of Rs. 80,000. It has Rs. 2,00,000 of 8% debentures. The equity capitalization rate is 10%. Calculate the value of the firm and the overall capitalization rate according to the net income approach (ignore tax).
b) If the debenture debt is increased to Rs. 3,00,000, what shall be the value of the firm and the overall capitalization rate?

a) Calculation of the value of the firm:
   Net income                                           Rs. 80,000
   Less: interest on 8% debentures (2,00,000 x 8/100)   Rs. 16,000
   Earnings available to equity shareholders            Rs. 64,000
   Equity capitalization rate                           10%
   Market value of equity = 64,000 x 100/10           = Rs. 6,40,000
   Market value of debentures                         = Rs. 2,00,000
   Value of the firm (V)                              = Rs. 8,40,000

   Overall cost of capital = EBIT x 100 / V = 80,000 x 100 / 8,40,000 = 9.52%
b) Calculation of the value of the firm if the debenture debt is increased to Rs. 3,00,000:
   Net income                                           Rs. 80,000
   Less: interest on 8% debentures (3,00,000 x 8/100)   Rs. 24,000
   Earnings available to equity shareholders            Rs. 56,000
   Equity capitalization rate                           10%
   Market value of equity = 56,000 x 100/10           = Rs. 5,60,000
   Market value of debentures                         = Rs. 3,00,000
   Value of the firm (V)                              = Rs. 8,60,000

   Overall cost of capital = 80,000 x 100 / 8,60,000 = 9.30%

It is evident that with the increase in debt financing the value of the firm has increased and the overall cost of capital has decreased.

2. Net Operating Income Approach

According to this approach, a change in the capital structure of a company does not affect the market value of the firm, and the overall cost of capital remains constant irrespective of the method of financing. It implies that the overall cost of capital remains the same whether the debt-equity mix is 50:50, 20:80 or 0:100. Thus, there is no such thing as an optimum capital structure. This theory presumes that: 1) the market capitalizes the value of the firm as a whole; 2) the business risk remains constant at every level of debt-equity mix. The reasoning behind these assumptions is that the increased use of debt increases the financial risk of the equity shareholders and hence the cost of equity increases. On the other hand, the cost of debt remains constant with the increasing proportion of debt, as the financial risk of the lenders is not affected. Thus, the advantage of using the cheaper source of funds (debt) is exactly offset by the increased cost of equity.

The net operating income approach is explained below with an illustration:
a) A company expects a net operating income of Rs. 1,00,000. It has Rs. 5,00,000 of 6% debentures. The overall capitalization rate is 10%. Calculate the value of the firm and the equity capitalization rate.
b) If the debenture debt is increased to Rs. 7,50,000, what will be the effect on the value of the firm and the equity capitalization rate?

a) Net operating income                   Rs. 1,00,000
   Overall cost of capital (Ko)           10%
   Market value of the firm (V) = Net operating income (EBIT) / Ko
                                = 1,00,000 x 100 / 10 = Rs. 10,00,000
   Less: market value of debentures       Rs. 5,00,000
   Total market value of equity           Rs. 5,00,000

   Equity capitalization rate (Ke) = (EBIT - I) / (V - D), where
   EBIT = earnings before interest and tax, I = interest on debt,
   V = value of the firm, D = value of debt capital
   Ke = (1,00,000 - 30,000) x 100 / (10,00,000 - 5,00,000)
      = 70,000 x 100 / 5,00,000 = 14%
b) If the debenture debt is increased to Rs. 7,50,000, the value of the firm shall remain unchanged at Rs. 10,00,000. The equity capitalization rate will increase as follows:

   Net operating income                   Rs. 1,00,000
   Overall cost of capital (Ko)           10%
   Market value of the firm (V) = 1,00,000 x 100 / 10 = Rs. 10,00,000
   Less: market value of debentures       Rs. 7,50,000
   Total market value of equity           Rs. 2,50,000

   Ke = (EBIT - I) / (V - D)
      = (1,00,000 - 45,000) x 100 / (10,00,000 - 7,50,000)
      = 55,000 x 100 / 2,50,000 = 22%

3. The Traditional Approach

The traditional approach is a compromise between the two extremes of the net income approach and the net operating income approach. According to this theory, the value of the firm can be increased initially, or the cost of capital decreased, by using more debt, as debt is a cheaper source of funds than
equity. This advantage, however, holds only up to a point: beyond it, the rise in the cost of equity can no longer be offset by the advantage of low-cost debt. Thus the overall cost of capital, according to this theory, decreases up to a certain point, remains more or less unchanged for moderate increases in debt thereafter, and increases beyond a certain point. Even the cost of debt may increase at this stage due to increased financial risk.

The theory is explained below:
   Net operating income      Rs. 2,00,000
   Total investment          Rs. 10,00,000
   Equity capitalization rate:
   a) if the firm uses no debt: 10%
   b) if the firm uses Rs. 4,00,000 debentures: 11%
   c) if the firm uses Rs. 6,00,000 debentures: 13%
   Assume that the Rs. 4,00,000 debentures can be raised at 5% interest and the Rs. 6,00,000 debentures at 6% interest.

Calculation of the market value of the firm, the value of shares and the average cost of capital:

Particulars                      (a) No debt   (b) 4,00,000 @ 5%   (c) 6,00,000 @ 6%
Net operating income             2,00,000      2,00,000            2,00,000
Less: interest (cost of debt)    --            20,000              36,000
Earnings available for equity    2,00,000      1,80,000            1,64,000
Equity capitalization rate       10%           11%                 13%
Market value of shares           20,00,000     16,36,363           12,61,538
Market value of debentures       --            4,00,000            6,00,000
Market value of the firm         20,00,000     20,36,363           18,61,538
Average cost of capital
(EBIT x 100 / V)                 10%           9.8%                10.7%

It is clear from the above that if debt of Rs. 4,00,000 is used, the value of the firm increases and the overall cost of capital decreases. But if more debt (Rs. 6,00,000 of debentures) is used in place of equity, the value of the firm decreases and the overall cost of capital increases.
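The traditional-approach illustration can be verified with a short Python sketch. The helper name below is made up for illustration; the figures are those of the text.

```python
def firm_value(ebit, debt, kd, ke):
    """Market value of firm = value of debt + equity earnings capitalized at ke."""
    equity_earnings = ebit - debt * kd   # EBIT less interest on debt
    return debt + equity_earnings / ke

ebit = 200000.0
cases = [
    (0.0,      0.00, 0.10),  # (a) no debt, equity capitalized at 10%
    (400000.0, 0.05, 0.11),  # (b) Rs. 4,00,000 debentures at 5%, ke = 11%
    (600000.0, 0.06, 0.13),  # (c) Rs. 6,00,000 debentures at 6%, ke = 13%
]
for debt, kd, ke in cases:
    v = firm_value(ebit, debt, kd, ke)
    ko = 100.0 * ebit / v    # overall (average) cost of capital, %
    print("V = %.0f, Ko = %.1f%%" % (v, ko))
```

The middle case gives the highest firm value and the lowest overall cost of capital, matching the table above.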
4. Modigliani and Miller Approach (MM Approach)

The M and M hypothesis is identical with the net operating income approach if taxes are ignored. However, when corporate taxes are assumed to exist, their hypothesis is similar to the net income approach.

1) In the absence of taxes: The theory proves that the cost of capital is not affected by changes in the capital structure, or in other words that the debt-equity mix is irrelevant in the determination of the total value of a firm. The reason argued is that though debt is cheaper than equity, with increased use of debt as a source of finance the cost of equity increases. This increase in the cost of equity offsets the advantage of the low cost of debt. Thus, although financial leverage affects the cost of equity, the overall cost of capital remains constant. The theory emphasizes the fact that a firm's operating income is the determinant of its total value. The theory further propounds that beyond a certain limit of debt the cost of debt increases (due to increased financial risk) but the cost of equity falls, thereby again balancing the two costs. In the opinion of M and M, two firms identical in all respects except their capital structure cannot have different market values or costs of capital, because of the arbitrage process. If two such firms did have different market values or costs of capital, arbitrage would take place and investors would engage in personal leverage (i.e. they would buy equity of the other company in preference to the company having the lesser value) as against corporate leverage, and this would again render the two firms the same total value.

The MM model is based upon the following assumptions:
(1) There are no corporate taxes
(2) There is a perfect market
(3) Investors act rationally
(4) The expected earnings of all the firms have identical risk characteristics
(5) The cut-off point for investment in a firm is the capitalization rate
(6) Risk to investors depends upon the random fluctuations of the expected earnings and the possibility that the actual values of the variables may turn out to be different from their best estimates
(7) All earnings are distributed to the shareholders

Illustration: A company has an EBIT of Rs. 1,00,000. It expects a return on its investment at a rate of 12.5%. What is the total value of the firm according to the M and M theory?

According to the M and M theory, the total value of the firm remains constant; it does not change with a change in capital structure.

Total value of the firm = EBIT / Ke = 1,00,000 / (12.50/100)
                        = 1,00,000 x 100 / 12.50 = Rs. 8,00,000
Illustration: There are two firms X and Y which are exactly identical except that X does not use any debt in its financing, while Y has Rs. 1,00,000 of 5% debentures in its financing. Both firms have an EBIT of Rs. 25,000 and the equity capitalization rate is 10%. Assuming a corporate tax of 50%, calculate the value of each firm.

The market value of firm X, which does not use any debt:
   Vu = EBIT / Ke = 25,000 x 100 / 10 = Rs. 2,50,000

The market value of firm Y, which uses debt financing of Rs. 1,00,000:
   Vt = Vu + tD   (t = tax rate, D = quantum of debt)
      = 2,50,000 + 0.5 x 1,00,000 = 2,50,000 + 50,000 = Rs. 3,00,000
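The with-tax computation reduces to the relation VL = Vu + tD. A small Python check of the figures above, following the text's simplification of computing Vu directly from pre-tax EBIT:

```python
def mm_levered_value(ebit, ke, tax_rate, debt):
    """MM with corporate tax: VL = Vu + t*D, where Vu = EBIT / Ke (as in the text)."""
    vu = ebit / ke
    return vu, vu + tax_rate * debt

# Firm X (unlevered) vs. firm Y (Rs. 1,00,000 of debt, 50% tax)
vu, vl = mm_levered_value(25000.0, 0.10, 0.5, 100000.0)
print("Vu =", vu, "VL =", vl)
```

The Rs. 50,000 difference is exactly the tax shield t x D contributed by the debt.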
I just uploaded a source package to my PPA: It'll take a little while to build, but it builds locally so it should succeed. Source branch is here: Once the PPA builds, we'll see if the package is actually useful for porting applications!
Say you've defined the metaclass "MyType". In both Python 2 and Python 3 you can instantiate a base class -- e.g. "MyObject = MyType('MyObject', (object,), {})" -- from which you can derive the rest of your class hierarchy.
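For anyone following along, here is a minimal sketch of that direct-invocation trick; the metaclass and class names are made up for illustration.

```python
class MyType(type):
    """Example metaclass that tags every class it creates."""
    def __new__(mcls, name, bases, namespace):
        cls = super(MyType, mcls).__new__(mcls, name, bases, namespace)
        cls.tagged = True
        return cls

# Calling the metaclass directly works identically in Python 2 and 3,
# sidestepping the incompatible `__metaclass__` / `metaclass=` syntaxes.
MyObject = MyType('MyObject', (object,), {})

class Derived(MyObject):
    """Subclasses of MyObject automatically pick up the metaclass."""
    pass
```

Both `MyObject` and `Derived` are instances of `MyType` under either interpreter, so no string execution is needed.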
Ah, eryksun beat me to the direct invocation of the metaclass suggestion - is there a specific reason you went with string execution over that approach?
Also, since sys.version_info[0] works regardless of Python version, I'm not clear on why you try the attribute access first.
Very nice write-up though!
> Watch out for next() vs. __next__() when writing iterators. Python 2 uses the former while Python 3 uses the latter. Best to define the method once, and then support compatibility via `next = __next__` in your class definition.
And use the `next` builtin rather than a direct call to `.next()`, of course
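A minimal sketch of that pattern, using a hypothetical iterator class for illustration:

```python
class Countdown(object):
    """Iterator that works under both Python 2 and Python 3."""
    def __init__(self, start):
        self.current = start

    def __iter__(self):
        return self

    def __next__(self):          # Python 3 iterator protocol
        if self.current <= 0:
            raise StopIteration
        self.current -= 1
        return self.current + 1

    next = __next__              # Python 2 alias, defined once
```

Calling the `next()` builtin on an instance then dispatches to the right method on either interpreter.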
> operator.isSequenceType() is gone in Python 3. Here's the code I use for compatibility:
Python 2.6 has `collections.Sequence`, so the `ImportError` will never be raised; you can just rip out that code and inline the `isinstance(value, collections.Sequence)` I think.
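On interpreters newer than the ones discussed here, the import itself needs a fallback, since the ABCs later moved to `collections.abc`. A hedged sketch of the replacement:

```python
try:
    from collections.abc import Sequence  # Python 3.3+
except ImportError:                       # pragma: no cover
    from collections import Sequence      # Python 2.6 - 3.2

def is_sequence(value):
    """Stand-in for the removed operator.isSequenceType()."""
    return isinstance(value, Sequence)
```

Note that strings and tuples count as sequences under this check, while dicts and plain numbers do not.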
@eryksun, @nick: uh, duh! on calling the metaclass type to create the class instance. Thanks for reminding me about that one.
@nick: attribute access first - well, just 'cause i love namedtuples :). iow, no good reason.
@anonymous: yep, although this library uses the object in iterators, so the next() vs. __next__() methods are just to support the iterator protocol. Thanks for the clue about collections.Sequence.
Lovely to hear about your use case (as a KDE Pythonista) and also very happy about Ubuntu's big Python 3 push :).
Barry, awesome work!
Just tried out python-dbus from your PPA with some of my Python2 code, and got this error:
Traceback (most recent call last):
File "./dc3-service", line 50, in
from dbus.mainloop.glib import DBusGMainLoop
ImportError: No module named dbus.mainloop.glib
This is a regression compared to what's currently in Precise. Is there a more correct, modern way of doing the `dbus.mainloop.glib` import?
This is the service I was testing with, BTW:
Thanks!
Ah, looks like a small packaging error, the Python files aren't getting included in python-dbus:
jderose@jgd-ws:~/bzr/dc3/trunk$ dpkg -L python-dbus
/.
/usr
/usr/share
/usr/share/doc
/usr/share/doc/python-dbus
/usr/share/doc/python-dbus/copyright
/usr/share/doc/python-dbus/changelog.gz
/usr/share/doc/python-dbus/NEWS.gz
/usr/share/doc/python-dbus/changelog.Debian.gz
/usr/share/doc/python-dbus/README
Hi Jason, thanks for trying out the PPA package so quickly! I've gotten some feedback from Simon McVittie (upstream maintainer) on the original patch. I'll be addressing those comments first, and then will build a new PPA package that incorporates those changes and fixes the packaging snafu. Stay tuned!
Contents
Welcome to Geospatial for Java - this workbook is aimed at Java developers who are new to geospatial and would like to get started.
We are going to start out carefully with the steps needed to set up your IDE and are pleased this year to cover both NetBeans and Eclipse. If you are comfortable with the build tool Maven, it is our preferred option for downloading and managing jars but we will also document how to set up things by hand.
This is our second year offering visual tutorials allowing you to see what you are working with while learning. While these examples will make use of Swing, please be assured that that this is only an aid in making the examples easy and fun to use.
These sessions are applicable to both server side and client side development.
We are going to be making use of Java, so if you don’t have a Java Development Kit installed now is the time to do so. Even if you have Java installed already check out the optional Java Advanced Imaging and Java Image IO section.
Download the latest JDK from the java.sun.com website:
At the time of writing the latest JDK was:
jdk-6u20-windows-i586.exe
Click through the installer you will need to set an acceptance a license agreement and so forth. By default this will install to:
C:\Program Files\Java\jdk1.6.0_20\
Optional – Java Advanced Imaging is used by GeoTools for raster support. If you install JAI 1.1.3 performance will be improved:
Both a JDK and JRE installer are available:
Optional – ImageIO Is used to read and write raster files. GeoTools uses version 1_1 of the ImageIO library:
Both a JDK and JRE installer are available.
At the time of writing the latest release was:
Eclipse does not provide an installer; just a directory to unzip and run.
To start out with create the folder C:\java to keep all our java development in one spot.
Unzip the downloaded eclipse-java-galileo-SR1-win32.zip file to your C:\java directory – the folder C:\java\eclipse will be created.
Navigate to C:\java\eclipse and right-click on the eclipse.exe file and select Send To -> Desktop (create shortcut).
Open up the eclipse.ini file.
-startup
plugins/org.eclipse.equinox.launcher_1.1.0.v20100507.jar
--launcher.library
plugins/org.eclipse.equinox.launcher.win32.win32.x86_1.1.0.v20100503
-vm
C:\Program Files\Java\jdk1.6.0_20\bin
-vmargs
-Dosgi.requiredJavaVersion=1.5
-Xms40m
-Xmx756m
Double click on your desktop short cut to start Eclipse. To use Maven from within Eclipse we will install the M2Eclipse plugin from Sonatype.
To install the M2Eclipse plugin:
Open the Install dialog using Select Help ‣ Install New Software from the menubar.
In the work with: field enter the update site url:
m2eclipse -
You be prompted by an Add Repository dialog, check the Name and Location and press OK
From the list of available plugins and components select Maven Integration for Eclipse and press Next
The Install Details page checks to see if the plugin will work with your eclipse; press Next
For Review Licenses, check I accept the terms of the license agreement and press Finish
At the end of this workbook we offer two alternatives to using the M2Eclipse plugin. Open the pom.xml file and add a properties section defining the version of GeoTools we want to use (2.7-M2 for this example).
<properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <geotools.version>2.7-M2</geotools.version> </properties>
We are going to add a dependency on the GeoTools gt-main and gt-swing jars. Note we are making use of the geotools.version defined above.

<dependencies>
    <dependency>
        <groupId>org.geotools</groupId>
        <artifactId>gt-main</artifactId>
        <version>${geotools.version}</version>
    </dependency>
    <dependency>
        <groupId>org.geotools</groupId>
        <artifactId>gt-swing</artifactId>
        <version>${geotools.version}</version>
    </dependency>
</dependencies>
For comparison here is the completed pom.xml file for download.
You may find cutting and pasting to be easier than typing; you can choose Source -> Format to fix indentation
Tips:
If maven has trouble downloading any jar; you can try again by selecting Project ‣ Update All Maven Dependencies.
If it really cannot connect you will need to switch to 2.7-SNAPSHOT and add a snapshot repository to your pom.xml
If the dependencies do not update automatically use Project ‣ Clean
Now that your environment is setup we can put together a simple Quickstart. This example will display a shapefile on screen.
Create the org.geotools.tutorial.Quickstart class using your IDE:

package org.geotools.tutorial;

import java.io.File;

import org.geotools.data.FileDataStore;
import org.geotools.data.FileDataStoreFinder;
import org.geotools.data.simple.SimpleFeatureSource;
import org.geotools.map.DefaultMapContext;
import org.geotools.map.MapContext;
import org.geotools.swing.JMapFrame;
import org.geotools.swing.data.JFileDataStoreChooser;

/**
 * Prompts the user for a shapefile and displays its contents on screen.
 */
public class Quickstart {

    public static void main(String[] args) throws Exception {
        // display a data store file chooser dialog for shapefiles
        File file = JFileDataStoreChooser.showOpenFile("shp", null);
        if (file == null) {
            return;
        }

        FileDataStore store = FileDataStoreFinder.getDataStore(file);
        SimpleFeatureSource featureSource = store.getFeatureSource();

        // Create a map context and add our shapefile to it
        MapContext map = new DefaultMapContext();
        map.setTitle("Quickstart");
        map.addLayer(featureSource, null);

        // Now display the map
        JMapFrame.showMap(map);
    }
}
We need to download some sample data to work with. The Natural Earth project is a great project supported by the North American Cartographic Information Society.
Please unzip the above data into a location you can find easily such as the desktop.
Run the application to open a file chooser. Choose a shapefile from the example dataset.
The application will connect to your shapefile, produce a map context, and display the shapefile.
A couple of things to note about the code example:
Each tutorial consists of very detailed steps followed by a series of extra questions. If you get stuck at any point please ask your instructor; or sign up to the geotools-users email list.
Here are some additional challenges for you to try:
Try out the different sample data sets
You can zoom in, zoom out and show the full extents and Use the select tool to examine individual countries in the sample countries.shp file
Download the largest shapefile you can find and see how quickly it can be rendered. You should find that the very first time it will take a while as a spatial index is generated. After that performance should be very good when zoomed in.

To speed up rendering, wrap your feature source in a CachingFeatureSource:

CachingFeatureSource cache = new CachingFeatureSource(featureSource);

// Create a map context and add our shapefile to it
MapContext map = new DefaultMapContext();
map.setTitle("Using cached features");
map.addLayer(cache, null);

// Now display the map
JMapFrame.showMap(map);
For the above example to compile hit Control-Shift-O to organise imports; it will pull in the following import:
import org.geotools.data.CachingFeatureSource;
Try connecting to your shapefile with a map of connection parameters:

File file = JFileDataStoreChooser.showOpenFile("shp", null);

Map<String,Object> params = new HashMap<String,Object>();
params.put("url", file.toURI().toURL());
params.put("create spatial index", false);
params.put("memory mapped buffer", false);
params.put("charset", "ISO-8859-1");

DataStore store = DataStoreFinder.getDataStore(params);
Important: GeoTools is an active open source project – you can quickly use maven to try out the latest nightly build by changing your pom.xml file to use a “SNAPSHOT” release.
At the time of writing 2.7-SNAPSHOT is under active development.
<properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <geotools.version>2.7-SNAPSHOT</geotools.version> </properties>
You will also need to change your pom.xml file to include a snapshot repository.
So what jars did maven actually use for the Quickstart application? Open up your pom.xml and switch to the dependency hierarchy or dependency graph tabs to see what is going on.
We will be making use of some of these projects in greater depth in the remaining tutorials.
There are two alternatives to the use of the M2Eclipse plugin; you may find these better suit the needs of your organisation.
The first alternative to putting maven into eclipse is to put eclipse into maven.
The maven build tool also works directly on the command line; and includes a plugin for generating eclipse .project and .classpath files.
Download Maven from
The last version we tested with was: Maven 2.2.1
Unzip the file apache-maven-2.2.1-bin.zip to C:\java\apache-maven-2.2.1
You need to have a couple of environmental variables set for maven to work. Use Control Panel ‣ System ‣ Advanced ‣ Environmental Variables to set the following.
Open up a command prompt: Accessories ‣ Command Prompt
Type the following command to confirm you are set up correctly:
C:\java> mvn -version
This should produce the following output
We can now create our project with:
C:\> cd C:\java
C:\java> mvn archetype:create -DgroupId=org.geotools -DartifactId=tutorial
Use Windows ‣ Preferences to open the Preference Dialog. Using the tree on the left navigate to the Java > Build path > Classpath Variables preference Page.
Add an M2_REPO classpath variable pointing to your “local repository”
We can now import your new project into eclipse using File ‣ Import. This workbook uses GeoTools 2.7 although you may wish to try a newer version, or make use of a nightly build by using 2.7-SNAPSHOT.
<properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <geotools.version>2.7-M2</geotools.version> </properties>
Add the same gt-main and gt-swing dependencies as before. You may find it easier to cut and paste into your existing file; or just download pom.xml directly.
GeoTools allows you to work with many different databases; however to make them work you will need to download jdbc drivers from the manufacturer.
For now remove the following plugins from your Library Manager definition.
Simple Python progress bar for parallel programs.
Project description
Features
Provides an all-purpose, Python-based progress bar utility that can be used in loops running either in serial or in parallel with MPI or other utilities.
- When using parallel loops, each parallel progress gets a progress bar.
- The progress bar shows the percent completion of the tasks assigned to that process.
- If there are multiple processes, all progress bars remain until the final one finishes.
Installation
PProgress can simply be installed using pip as:
pip install pprogress
Then you can try out a simple serial example as:
from pprogress import ProgressBar from time import sleep N = 100 pb = ProgressBar(N) for i in range(N): pb.update() sleep(0.1) pb.done()
Documentation
Full documentation with parallel examples coming soon!
https://pypi.org/project/PProgress/
|
CC-MAIN-2019-47
|
refinedweb
| 152
| 63.59
|
Python Pandas - TypeError: tuple indices must be integers or slices, not str
Hi. Please help me solve this error. I am trying to print row values
import pandas as pd
df = pd.read_csv("/home/user/data1")
for row in df.iterrows():
    print(row['Email'])
df.iterrows() returns a tuple with the index and the row value, so you will have to handle both the index and the value. Try this:

import pandas as pd
df = pd.read_csv("/home/user/data1")
for index, row in df.iterrows():
    print(row['Email'])
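For anyone hitting the same error, here is a runnable sketch of the fix, using an in-memory frame in place of the question's CSV file (the data is made up for illustration):

```python
import pandas as pd

# Hypothetical data standing in for the question's /home/user/data1 CSV
df = pd.DataFrame({"Email": ["a@example.com", "b@example.com"],
                   "Name": ["A", "B"]})

emails = []
for index, row in df.iterrows():   # each item is an (index, Series) pair
    emails.append(row["Email"])
print(emails)                      # ['a@example.com', 'b@example.com']

# Indexing the bare tuple instead reproduces the original error:
first = next(df.iterrows())        # first == (0, <Series>)
try:
    first["Email"]
except TypeError as exc:
    print("TypeError:", exc)
```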
Hope this helps!!
Thanks!
Being a back-end programming expert, I was always leaning towards the server-side template based rendering. However, a while ago, after working with numerous client-side templating engines and weighing all pros and cons, I finally decided to switch most of my web apps to the pure JavaScript driven templated UI. While there are tons of existing frameworks for this, and some of them work really well, there are several reasons that pushed me to roll out my own client-side template rendering solution. Working with other frameworks I always had a feeling that there was an easier solution. DOM based libraries flood your HTML templates with dummy elements and attributes, making it hard to read, plus performance suffers from heavy DOM transformations. Client-side MVCs and compiled templates are very interesting, but say goodbye to clear WYSIWYG markup. Plus the framework with the compiler themselves are huge, plus the need of extra frameworks for debugging all this, plus long learning period. In my opinion, this all is much more over complicated than it should be. And likely this complexity is visible to all those devs who hesitate to use client-side rendering and prefer to stick with MVC4 Razor, PHP, and others.
I always say "simple must stay simple" (slogan on my dev site), and I'm writing this post to share my own client templating solution that I came up with after using other frameworks. On one hand, this new solution allows to render the output of unlimited complexity. On the other hand, it is stupidly and unbeatably simple. It's simpler than you can imagine. The whole library is about 100 lines of code, it's super efficient, it consists of two functional primitives, and you literally have to spend 2 minutes to learn it.
Oh yes, I know how this sounds The other part of me had the same reaction when my dark side came up with this rendering solution. So, let me guide you through the approach, do some good examples, and demonstrate how it all works.
Note: This article is also published on my site here. If you wish, you can leave you comments and download all source code from there. Project repo is on the GitHub:
UPD: I've made important changes to jsRazor to turn it into the ultimate client-side application framework. I'm going to publish an overview article as well as jsRazor vs. AngularJS article towards the end of May. See my comment for more details.
UPD2: jsRazor vs. AngularJS: "Todo" and "JS Projects" demos with jsRazor! article has just been published! The article does to actual side to side comparison of jsRazor and AngularJS using demo apps.
The idea of generating HTML output by executing JavaScript against JSON and a client-side template is not new. For at least 5 years there has been a continuous discussion about whether the back-end application's task should be reduced to providing JSON data only, leaving the rendering task to the client side. After the pushState() feature of HTML5 came out, many devs started leaning towards the client side more and more.
I started working with client-side from using JSONDataSet from Adobe Spry framework. It's a DOM based library that works pretty well for certain tasks, but it's not capable of doing fully custom output. Then I used JavaScriptMVC, Knockout, Angular, and some others from this list here. I liked MVC and compiled JS templates for flexibility and customization. I suggest you take a look at the frameworks I listed in order to feel how the solution below is different from them. However, there are reasonable opinions that if, for example, client-side MVC syntax does not make it much simpler in terms of programming, why not just use server-side MVC4 or PHP5 to achieve the same? Moreover, server-side debugging is a way more convenient. Yes, client frameworks can do a lot, but why bother switching from our lovely sever-side, unless there is some glaring programming benefit?
In this short article I'll show that there is actually a huge benefit and many good reasons to switch. I'll propose the solution that DOES make programming simpler and more flexible. It makes it almost trivial actually, so every graphics designer with knowledge of JavaScript can do page rendering like senior expert. This raises another interesting question - if the entire complex server-side rendering mechanism can be replaced with trivial JavaScript, do we need server page frameworks (ASP, MVC, PHP, whatever) at all? Sounds scary, huh?
OK, let me now introduce my simple solution to client-side template rendering.
Assume we have some JSON data on the web page. There are several good options for loading JSON, but the best practice is to include the JSON data within the page on the initial page load (i.e. no extra Ajax call needed) and then use async Ajax to update parts of the JSON data on demand. But this is off-topic here, so we just assume that the JSON is already available.
Now, what's the best way to build HTML based on this JSON? After thinking about this with a pen and a paper, I came to the conclusion that, speaking abstractly, every possible rendering task can be accomplished as some combination of two functional primitives:

- repeat: repeat a template fragment once per item of a data array, filling in value placeholders;
- toggle: show or hide a template fragment depending on a Boolean flag.
The key part of our solution is the template and the way jsRazor works with it. The same principle "simple must stay simple" applies here: jsRazor does not do DOM transformations, JS compilation, or anything like that. The template is treated entirely as plain text, and all transformations are simple search-replace-insert operations on the string object. There is nothing in the JS world that could be more efficient than this!
Typical jsRazor template looks like the following:
some HTML
<!--repeatfrom:name1-->
HTML fragment to repeat
plus value placeholders {Value1}
<!--repeatfrom:name2-->
nested HTML fragment to repeat
plus value placeholders {Value2}
<!--repeatstop:name2-->
more HTML
<!--showfrom:name3-->
HTML fragment to show or hide
<!--showstop:name3-->
more HTML here
<!--repeatstop:name1-->
more HTML here
So, everything is very simple here. To mark a target fragment for the repeat function we use <!--repeatfrom:name--> and <!--repeatstop:name--> comment limiters. For the toggle function, use <!--showfrom:name--> and <!--showstop:name--> limiters. To output actual values, use placeholders of the {SomeValue} format. Now, combine these two simple primitives in any possible configuration to get the desired output. For example, to achieve logical OR on toggles, you put two or more toggles beside each other; for logical AND, you nest one toggle inside another. And so on.
The good thing about comments is that they don't interfere with your HTML and do not break the WYSIWYG experience. Comment limiters are used by default, but you can change them to any format you like (see next section). Sometimes you want to deliver your template wrapped inside a comment itself, so that setup scripts do not touch the template. In that case you'd need to change the limiter format to something other than a comment.
Now let's see the actual functions that do all the work.
As I stated before, jsRazor consists of only two functions that accomplish all the work. I decided to create them as a jQuery plugin in the jsrazor namespace. This is just because I like the jQuery plugin syntax; there is no actual framework dependency. Although I'll use jQuery in my examples, the plugin will work fine without the jQuery library. Let's see the actual call syntax and parameters.
First, there is one setting you can use to change the limiter format. To use the [repeatfrom:name] format instead of comments, you'd do the following:
$.jsrazor.settings.limiterFormat = "[{type}:{name}]";
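As a minimal sketch (an assumption about how the format string is used, not the plugin's internal code), the {type} and {name} tokens simply expand into the concrete start/stop markers that the functions search for:

```javascript
// Hypothetical helper showing how a limiter-format string expands into the
// concrete markers. The default format is "<!--{type}:{name}-->".
function limiter(format, type, name) {
  return format.replace("{type}", type).replace("{name}", name);
}

// Default comment limiters:
console.log(limiter("<!--{type}:{name}-->", "repeatfrom", "colors"));
// → "<!--repeatfrom:colors-->"

// Custom square-bracket limiters:
console.log(limiter("[{type}:{name}]", "showstop", "not-last"));
// → "[showstop:not-last]"
```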
And now let's look at the functions. The first one is repeat:

$.jsrazor.repeat(template, name, items[, render])

- template (string) - the template text to process;
- name (string) - the repeater name used in the <!--repeatfrom:name--> / <!--repeatstop:name--> limiters;
- items (Array) - the data array to iterate over; when the current item is a string or Number, its value is output into the {item} placeholder, and when it is an Object, its properties are output into {<property-name>} placeholders;
- render (Function, optional) - a callback of the form function (tmp, idx, item), where tmp is the current item's fragment, idx is the item index, and item is the current element of items; it must return the rendered output for the current item.

In other words, inside the render callback you process the content passed in tmp and return the desired output for the current item. All item properties are output to {<property-name>} placeholders automatically. To insert custom values, I suggest using placeholders in the {SomeValue} format replaced with the string.replace(..) function, but you can choose any format at your own preference. There is no limitation on processing inside the callback, so you can achieve any custom output you like.
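To make the semantics concrete, here is a self-contained sketch of the repeat primitive. This is my own reimplementation for illustration only (the real call is $.jsrazor.repeat); it cuts the fragment out by its limiters, renders it once per item, and splices the results back in:

```javascript
// Illustrative stand-in for $.jsrazor.repeat (an assumption, not the plugin code).
function repeat(template, name, items, render) {
  var from = "<!--repeatfrom:" + name + "-->";
  var stop = "<!--repeatstop:" + name + "-->";
  var start = template.indexOf(from) + from.length;
  var end = template.indexOf(stop);
  var fragment = template.substring(start, end); // the repeated fragment

  var out = "";
  for (var idx = 0; idx < items.length; idx++) {
    var item = items[idx];
    var tmp = fragment;
    if (typeof item === "object" && item !== null) {
      // automatic {property} placeholders for object items
      for (var key in item) {
        if (item.hasOwnProperty(key)) {
          tmp = tmp.replace(new RegExp("{" + key + "}", "g"), item[key]);
        }
      }
    } else {
      // primitive items go into the {item} placeholder
      tmp = tmp.replace(/{item}/g, item);
    }
    if (render) tmp = render(tmp, idx, item); // optional custom pass
    out += tmp;
  }
  // splice the rendered output back in place of the limited fragment
  return template.substring(0, start - from.length) + out + template.substring(end + stop.length);
}

var tpl = "1) <!--repeatfrom:colors-->{color} ({value}), <!--repeatstop:colors-->";
var data = [{ color: "red", value: "#f00" }, { color: "green", value: "#0f0" }];
console.log(repeat(tpl, "colors", data));
// → "1) red (#f00), green (#0f0), "
```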
The second function is toggle:

$.jsrazor.toggle(template, name, flag)

- template (string) - the template text to process;
- name (string) - the toggle name used in the <!--showfrom:name--> / <!--showstop:name--> limiters;
- flag (Boolean) - whether to keep the fragment.

Everything is trivial here. If flag is true, then the fragment stays (only the limiters are removed). Otherwise, both the fragment and the limiters are removed.
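Again, a self-contained sketch of the toggle primitive (illustration only; the real call is $.jsrazor.toggle):

```javascript
// Illustrative stand-in for $.jsrazor.toggle (an assumption, not the plugin code).
// When flag is true, the limiters are removed and the fragment stays;
// otherwise the fragment and the limiters are both removed.
function toggle(template, name, flag) {
  var from = "<!--showfrom:" + name + "-->";
  var stop = "<!--showstop:" + name + "-->";
  var start = template.indexOf(from);
  var end = template.indexOf(stop);
  var inner = flag ? template.substring(start + from.length, end) : "";
  return template.substring(0, start) + inner + template.substring(end + stop.length);
}

var tpl = "a<!--showfrom:x-->, b<!--showstop:x-->!";
console.log(toggle(tpl, "x", true));  // → "a, b!"
console.log(toggle(tpl, "x", false)); // → "a!"
```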
Now let's just think for a moment. Two JavaScript functions instead of the entire ASP.NET, PHP, JSP, etc. rendering mechanism! All those server controls, control flow statements, binding expressions, page lifecycle, etc. suddenly become redundant. In terms of 3-tier web app design, the entire presentation layer moves to the client side. The server side is left with the data layer and the business logic, and only needs to generate JSON data feeds.
Sounds good, doesn't it? Now it's time to see this all in action!
I'll use the same JSON data as Adobe Spry uses for their demos - I hope they don't mind. So, assume that the JSON is created on the back-end from our data layer and delivered to the client side. Here are some good examples.
Assume we have an array of simple JSON objects describing colors:
var data_Colors =
[
{
color: "red",
value: "#f00"
},
{
color: "green",
value: "#0f0"
},
// ...
];
We want to output the colors as comma-separated names followed by hex values in brackets, e.g. red (#f00), green (#0f0), and so on. I produce identical output twice just to demonstrate slightly different JavaScript syntax:
<div class="cont-ex1">
<div>
1) <!--repeatfrom:colors1-->{color} ({value}), <!--repeatstop:colors1-->
</div>
<div>
2) <!--repeatfrom:colors2-->{color} ({value}), <!--repeatstop:colors2-->
</div>
</div>
There are 2 repeaters here beside each other: colors1 and colors2. They both do the same thing, but with slightly different code to demonstrate one feature. Now let's see the actual JavaScript:
1: var $cont = $(".cont-ex1");
2: {
3: var output = $cont.html();
4: output = $.jsrazor.repeat(output, "colors1", data_Colors); // most basic default repeater
5: output = $.jsrazor.repeat(output, "colors2", data_Colors, function (tmp, idx, item) { return tmp; });
6: $cont.html(output);
7: }
All we do here is get the template from the inner HTML of some element, process it, and insert the result back. I use jQuery to work with the inner HTML, but, as I mentioned earlier, the plugin itself does not depend on any framework.
In our case, the template is taken from the inner HTML of the <div class="cont-ex1"> container and assigned to the output variable. First, we apply the colors1 repeater to it on line 4. Notice that the optional render callback is omitted, so there is no custom processing, only the automatic placeholder replacement based on the JSON object properties. In the template we used the {color} and {value} placeholders, which will be replaced with the color and value properties of the current item on each iteration.
On line 5 we apply the second colors2 repeater. The only difference is that I included the render callback this time. All this callback does is return the current item template without modifications, which is essentially the same as omitting the callback, as for the colors1 repeater.
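The automatic placeholder pass described above can also be pictured in isolation. The helper below is a sketch of mine, not the plugin's actual code:

```javascript
// Replace every {property} placeholder with the matching property of the item.
function fillPlaceholders(tmp, item) {
  for (var key in item) {
    if (item.hasOwnProperty(key)) {
      tmp = tmp.replace(new RegExp("{" + key + "}", "g"), item[key]);
    }
  }
  return tmp;
}

console.log(fillPlaceholders("{color} ({value})", { color: "red", value: "#f00" }));
// → "red (#f00)"
```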
One more basic example. Assume we have an array of values:
var data_Numbers = [100, 500, 300, 200, 400];
We want to output them as following:
Template is simple:
<div class="cont-ex2">
<div><!--repeatfrom:numbers-->{item}, <!--repeatstop:numbers--></div>
</div>
The JavaScript code is even simpler:
var $cont = $(".cont-ex2");
{
var output = $.jsrazor.repeat($cont.html(), "numbers", data_Numbers);
$cont.html(output);
}
No need for a render callback here. Recall from the API description: when the current item is of string or Number type, its value is automatically output into the {item} placeholder. Done.
In this example let's do some custom processing. Our data is the same array again, but this time, for each number N in the array we want to output N-1<N<N+1 (e.g. 99<100<101 for 100).
For the N-1 and N+1 values we will need custom placeholders. Also, we don't want a trailing comma after the last item as in our previous examples. To accomplish this, we will use the fragment toggle. Here is the template:
<div class="cont-ex3">
<div>
<!--repeatfrom:numbers-->
{ItemL}<{item}<{ItemM}<!--showfrom:not-last-->, <!--showstop:not-last-->
<!--repeatstop:numbers-->
</div>
</div>
In this template we have the {ItemL} and {ItemM} extra placeholders. Also, the comma is wrapped into the not-last toggle and will be displayed only when the current item is not the last one in the array. Now let's see the code:
1: var $cont = $(".cont-ex3");
2: {
3: var output = $.jsrazor.repeat($cont.html(), "numbers", data_Numbers, function (tmp, idx, item)
4: {
5: // toggle conditional area
6: tmp = $.jsrazor.toggle(tmp, "not-last", idx < data_Numbers.length - 1);
7: // custom placeholders
8: tmp = tmp
9: .replace(/{ItemL}/g, (item - 1))
10: .replace(/{ItemM}/g, (item + 1));
11:
12: return tmp;
13: });
14:
15: $cont.html(output);
16: }
OK, now it's more interesting, because the render callback has some processing in it! On line 3 we apply the numbers repeater as before. The render callback then needs to return the rendered result for every item. On line 6 we apply the not-last toggle to the fragment with the comma. Recall that the 3rd parameter is a flag that needs to be true to show the fragment; in our case it is true when the item is not the last one in the array. Then, on lines 8-10 we fill in our custom placeholders with simple string.replace(..) operations. The value of item equals the current item of the input array, i.e. in our case an integer number. That's all.
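For completeness, here is a self-contained runnable sketch of this example. The two primitives below are simplified stand-ins of mine for the real $.jsrazor plugin (the repeat stand-in only handles primitive items and {item}):

```javascript
// Simplified stand-in primitives (assumptions, not the plugin's code).
function repeat(template, name, items, render) {
  var from = "<!--repeatfrom:" + name + "-->", stop = "<!--repeatstop:" + name + "-->";
  var start = template.indexOf(from) + from.length, end = template.indexOf(stop);
  var fragment = template.substring(start, end), out = "";
  for (var idx = 0; idx < items.length; idx++) {
    var tmp = fragment.replace(/{item}/g, items[idx]); // primitive items → {item}
    if (render) tmp = render(tmp, idx, items[idx]);
    out += tmp;
  }
  return template.substring(0, start - from.length) + out + template.substring(end + stop.length);
}
function toggle(template, name, flag) {
  var from = "<!--showfrom:" + name + "-->", stop = "<!--showstop:" + name + "-->";
  var start = template.indexOf(from), end = template.indexOf(stop);
  var inner = flag ? template.substring(start + from.length, end) : "";
  return template.substring(0, start) + inner + template.substring(end + stop.length);
}

var data_Numbers = [100, 500, 300, 200, 400];
var tpl = "<!--repeatfrom:numbers-->{ItemL}<{item}<{ItemM}" +
          "<!--showfrom:not-last-->, <!--showstop:not-last--><!--repeatstop:numbers-->";

var output = repeat(tpl, "numbers", data_Numbers, function (tmp, idx, item) {
  tmp = toggle(tmp, "not-last", idx < data_Numbers.length - 1); // drop comma after last item
  return tmp
    .replace(/{ItemL}/g, (item - 1))
    .replace(/{ItemM}/g, (item + 1));
});
console.log(output);
// → "99<100<101, 499<500<501, 299<300<301, 199<200<201, 399<400<401"
```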
And the last example for today is complex JSON with multiple nested arrays that require nested repeaters for output. We will combine them with several fragment toggles to achieve even more customized output. Here is JSON describing a donuts menu:
var data_Donuts =
{
"donuts":
[
{
"id": "0001",
"type": "donut",
"name": "Cake",
"ppu": 0.55,
"batters":
[
{ "id": "1001", "type": "Regular" },
{ "id": "1002", "type": "Chocolate" },
{ "id": "1003", "type": "Blueberry" },
{ "id": "1004", "type": "Devil's Food" }
],
"toppings":
[
{ "id": "5001", "type": "None" },
{ "id": "5002", "type": "Glazed" },
{ "id": "5005", "type": "Sugar" },
{ "id": "5007", "type": "Powdered Sugar" },
{ "id": "5006", "type": "Chocolate with Sprinkles" },
{ "id": "5003", "type": "Chocolate" },
{ "id": "5004", "type": "Maple" }
]
},
// ...
{
"id": "0004",
"type": "bar",
"name": "Bar",
"ppu": 0.75,
"batters":
[
{ "id": "1001", "type": "Regular" }
],
"toppings":
[
{ "id": "5003", "type": "Chocolate" },
{ "id": "5004", "type": "Maple" }
],
"fillings":
[
{ "id": "7001", "name": "None", "addcost": 0 },
{ "id": "7002", "name": "Custard", "addcost": 0.25 },
{ "id": "7003", "name": "Whipped Cream", "addcost": 0.25 }
]
},
// ...
]
};
Here is a template that produces the desired nested-list output:
1: <div class="cont-ex4">
2: <ul>
3: <!--repeatfrom:donuts-->
4: <li>[{id}] | {type} | <b>{name}</b> | ${ppu}
5: <ul>
6: <li>Batters
7: <ul>
8: <!--repeatfrom:batters-->
9: <li>[{id}] {type}</li>
10: <!--repeatstop:batters-->
11: </ul>
12: </li>
13: <li>Toppings
14: <!--showfrom:has-chocolate-->
15: <span style="color:brown;">
16: {CountChocolate} chocolate toppings available
17: </span>
18: <!--showstop:has-chocolate-->
19: <ul>
20: <!--repeatfrom:toppings-->
21: <li>[{id}] {type}</li>
22: <!--repeatstop:toppings-->
23: </ul>
24: </li>
25: <!--showfrom:has-fillings-->
26: <li>Fillings
27: <ul>
28: <!--repeatfrom:fillings-->
29: <li>[{id}] {name}
30: <!--showfrom:cost-extra-->
31: <span style="color:red;font-weight:bold;">+${addcost} = ${NewPPU}</span>
32: <!--showstop:cost-extra-->
33: </li>
34: <!--repeatstop:fillings-->
35: </ul>
36: </li>
37: <!--showstop:has-fillings-->
38: </ul>
39: </li>
40: <!--repeatstop:donuts-->
41: </ul>
42: </div>
So, to output our hierarchical data we want to use nested HTML lists. The first level outputs the donuts and their direct attributes using the donuts repeater on lines 3-40. The second level outputs the batters (lines 8-10), toppings (lines 20-22), and fillings (lines 28-34) arrays. Some donuts do not have fillings, and in this case we want to show nothing instead of showing a "Fillings" title with an empty list. To achieve this, we enclose the entire fillings section in the has-fillings toggle on lines 25-37. Next, some fillings add extra cost to the PPU. When this is the case, we want to output the extra amount in a red font using the "+$X = $X1" format, where X is the extra cost and X1 is the updated PPU. For this purpose we have the cost-extra toggle on lines 30-32 and the custom {NewPPU} placeholder. Finally, if there are any chocolate toppings available, we want to show a message like "2 chocolate toppings available". For this purpose we have another toggle, has-chocolate, on lines 14-18 and the {CountChocolate} placeholder. And that is all for the template! Now let's see the code:
1: var $cont = $(".cont-ex4");
2: {
3: // first level repeater for all donuts
4: var output = $.jsrazor.repeat($cont.html(), "donuts", data_Donuts.donuts, function (tmp, idx, item)
5: {
6: // default repeaters for batters and toppings
7: tmp = $.jsrazor.repeat(tmp, "batters", item.batters);
8: tmp = $.jsrazor.repeat(tmp, "toppings", item.toppings);
9:
10: // display fillings only if there are any
11: tmp = $.jsrazor.toggle(tmp, "has-fillings", !!item.fillings);
12: // render fillings list
13: if (item.fillings)
14: {
15: var ppu = item.ppu; // save ppu for child value scope
16: // custom repeater for fillings to display extra cost and new PPU
17: tmp = $.jsrazor.repeat(tmp, "fillings", item.fillings, function (tmp, idx, item)
18: {
19: // display price warning if there is additional cost involved
20: tmp = $.jsrazor.toggle(tmp, "cost-extra", item.addcost > 0);
21: // custom placeholder to display new PPU value
22: tmp = tmp.replace(/{NewPPU}/g, ppu + item.addcost);
23: return tmp;
24: });
25: }
26:
27: // display number of chocolate toppings
28: var countChoco = 0; // count number of chocolate toppings
29: $.each(item.toppings, function (idx, item) { if (/chocolate/i.test(item.type)) countChoco++; });
30: tmp = $.jsrazor.toggle(tmp, "has-chocolate", countChoco > 0); // show/hide chocolate message
31: tmp = tmp.replace(/{CountChocolate}/g, countChoco); // render chocolate topping counter
32:
33: return tmp;
34: });
35:
36: $cont.html(output);
37: }
Note that due to JavaScript variable scoping, I can re-use the tmp, idx, and item variable names in all nested repeater render callbacks, which saves a lot of code lines.
So, the outer donuts repeater is on line 4. Its render callback starts by applying the nested batters and toppings repeaters on lines 7-8. We leave them default, since we only need to output the property placeholders. Next, on line 11 we apply our has-fillings toggle to hide the entire fillings area if there are no fillings. And if there are fillings, we output them by applying the fillings repeater on line 17. On line 15 I have to save the PPU value into a separate ppu variable, because item will have a different meaning in the render callback of the nested repeater. Inside the render callback of the fillings repeater I start by applying the cost-extra toggle on line 20. Then on line 22 I output the value for the {NewPPU} placeholder. Finally, we need to deal with the chocolate toppings. We count them on lines 28-29, apply the has-chocolate toggle on line 30, and output the count on line 31. We're DONE!
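The chocolate-topping count from the render callback is easy to check on its own. Below is a standalone sketch in plain JS ($.each is replaced by the native forEach; the data is a shortened sample):

```javascript
// Count toppings whose type mentions "chocolate", case-insensitively.
var toppings = [
  { "id": "5005", "type": "Sugar" },
  { "id": "5006", "type": "Chocolate with Sprinkles" },
  { "id": "5003", "type": "Chocolate" }
];
var countChoco = 0;
toppings.forEach(function (t) {
  if (/chocolate/i.test(t.type)) countChoco++;
});
console.log(countChoco); // → 2
```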
I think you got the idea. By combining two simple functions you can produce output of any complexity from JSON of any structure - that's all. The template is as simple and transparent as it can be. Comment limiters do not interfere with the HTML, leaving your template markup absolutely clean and viewable in any HTML editor. I don't see any reason to use compiled templates when you can achieve all the output you need in such a stupidly simple way.
Now it's time to summarize everything we've seen so far.
I will not go deeply into the general pros and cons of client-side rendering, because they have been discussed a thousand times already. Instead, I'll mostly focus on the aspects specific to the jsRazor solution.
Seriously, how long did it take to read the Solution and Examples sections above? 5 minutes? 10? And that's all you need to know about jsRazor. There is no learning required here, while other client-side rendering frameworks can get quite complex. For example, JS MVC requires quite a bit of learning, plus an extra framework to debug your views. I'm not even mentioning server-side frameworks like ASP.NET, where it takes several years to become a rendering expert and learn all the peculiarities of the server controls. Here, after a 10-minute tutorial, literally every HTML designer becomes able to build apps with a flexibility that no server-side framework can achieve!
Look at our last example. Having that JSON data, it would take you 10 minutes to create the template and 5 minutes to write the JavaScript, agreed? It's trivial and transparent. Just for comparison, how long would it take you to do the same in ASP.NET, MVC, or JSP? You'd probably use some monster like DataGrid? MVC Razor would take less time than classic ASP.NET, but still much more than jsRazor. Now, what about the client side? Compiled templates and JS MVC would do the job, but their syntax and simplicity would not be even close to the jsRazor solution. Here is the list of client MVCs that you can look at for comparison.
Due to the overall simplicity, the jsRazor rendering task can be accomplished by an HTML designer with basic knowledge of JavaScript. Furthermore, maintaining an already existing solution requires an even lower skill level. The back-end developer gets involved only when there is a need to create new JSON feeds or modify existing ones, which also does not require much framework knowledge. On the other hand, creating a regular server-page site requires a senior programmer from the beginning, and at least a medium-level programmer to maintain the existing solution. Client-side MVC also requires an advanced developer, especially for debugging the code.
Application architecture is where jsRazor wins everywhere! First of all, if the back-end is used only for generating JSON, this automatically means 100% presentation separation. Second, jsRazor also provides separation on the client side! Look: in compiled templates you have your code mixed with HTML markup everywhere, and then a compiler turns it into executable JavaScript. In jsRazor, the template and the code are separate by design. Absolutely separate. So it's a double separation: back-end separation plus front-end template/controller separation.
It has also been noted many times by other IT experts that client-side rendering provides the maximum theoretical level of client-server app simplicity. With trivial jsRazor templates this becomes even more obvious. No matter how complex your UI functionality is, on the back-end it's always a bunch of JSON endpoints, and on the front-end it's always a bunch of simple templates with a tiny JavaScript controller (if I can call it that).
The number one concern that comes to mind about client-side rendering is performance. There are usually 2 arguments that I hear against client-side rendering in general: the need for a JSON data call on the initial page load, and the need to execute a bunch of JavaScript before the page is displayed.
Regarding the Ajax call on the initial load, I already mentioned that it is not really needed if the application is written properly. The initial JSON has to be inserted into the page and loaded together with it, so it is available without any additional calls; Ajax should only be used to update existing JSON or load extra data. So there is no problem here. It's just a design consideration that every developer has to keep in mind.
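A sketch of this "ship the initial JSON with the page" pattern: the server embeds the data inline (for example inside a script tag of type application/json) and the client simply parses it on load, with no extra Ajax round-trip. The string below is a stand-in for that embedded tag content:

```javascript
// Stand-in for JSON embedded in the page by the server on initial load.
var embedded = '{"colors":[{"color":"red","value":"#f00"}]}';

// Parse once on startup; later Ajax calls would only update parts of it.
var data_Colors = JSON.parse(embedded).colors;
console.log(data_Colors[0].color); // → "red"
```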
The next question is: how bad is it to execute JS to finalize the page UI? Well, I would say that today, when tiny smartphones are capable of FullHD playback, this concern becomes negligible. Just to understand what we're talking about, let's look at the use case. I'll use ASP.NET as the example, but the results hold for other frameworks too, because, in terms of workflow, they are all pretty much the same.
Page rendering consists of a number of steps. Let's walk through the steps between the client request hitting the page and the page being displayed in the user's browser:
There are more steps in the real world, but this is just a model. So, the request passes through IIS and gets to ASP.NET. Then there is some housekeeping to run modules, set up the context, etc. Then the page executes its lifecycle: usually we get entities from the data layer and process them through some business logic. Then the page is rendered (time spent: tR). Then housekeeping again, and then the response is sent back to the client through the network (time spent: tW). After it reaches the client, it is displayed in the browser. If jsRazor rendering is used, it has to execute (time spent: tR*) before the page is displayed. Let's call "case A" the case where pure server-side rendering is used, and "case B" the case where jsRazor client-side rendering is used instead.
In case A, there is only tR. In case B, only tR*, because tR goes away entirely. So, roughly, it comes down to comparing tR and tR*, right? I did not do any actual benchmarking here, so let's just think theoretically. To render the server page (the tR), ASP.NET builds the control tree, walks through it, calls the entire lifecycle for each control, etc. - a lot of work. However, this is all compiled code on the server, so unless the page is really crazy complex, the rendering should be fast. On the other hand, tR* is spent in browser JavaScript. The entire rendering procedure of jsRazor is based on simple string operations, so it should be way more efficient. However, it runs in the browser, and JavaScript is slower than compiled code. So, intuitively I think that tR<tR*, but I'll try to do actual benchmarking in the future. Anyway, even if this is the case, it is still NOT a concern, because the difference is not going to be visible to the eye anyway.
If we think further, we may notice that tW for case B is actually smaller than tW for case A. This is because in case B the response is a raw page with templates and JSON, while in case A it is a completely rendered web page, which is much larger. And the bigger your data grid is, the bigger the difference between the two tW values. So, the jsRazor approach gives much better performance in these terms. For slow internet connections this could be a big win.
Also, there is a benefit for server load, since we take a large part of the processing off the server. We can go further and say that there is no need to use ASP.NET for generating JSON at all - web services can be used instead. This would take even more load off ASP.NET.
Caching is where server-side rendering will be faster. Whether it is the ASP.NET page cache or a network-level cache (CDN), in case B we can only cache the raw pages and the JSON data, which means that tR* is always involved. In case A we cache fully rendered pages that do not need any processing. So, under these conditions, case A will be faster than case B by tR*. This is something to mention, but NOT a big deal at all. Again, with the trivial JavaScript string operations that jsRazor does, tR* is negligible.
Of course, there are certain types of applications that may require server-side rendering only. A paradigm shift is always hard to accept, but for regular Web 2.0 apps (e-commerce, portals, social apps, etc.) pure client-side rendering in general, and jsRazor in particular, seems a much simpler and more flexible solution. A huge part of the server-page framework can be replaced with two tiny JavaScript functions - isn't it cool?
Also consider another important side of this. A part of the work that was usually done by back-end developers can now be done by HTML designers! This is actually very good, because it means more division of labour within the development team, and more spare time for busy senior dev team members.
Visit the jsRazor page for download links and more info, and the GitHub repo if you wish to contribute.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
These methods should not be prefixed with "get". They can conflict with
schema generated methods. I suggest:
selectChildren ( ... )
selectAttributes ( ... )
selectAttribute ( ... )
I choose "select" prefix because there is a precedent of the existing
method on XmlObject already:
selectPath( ... )
These new select methods are convenience methods for the, more powerful,
selectPath method.
- Eric
-----Original Message-----
From: Cezar Andrei
Sent: Tuesday, March 09, 2004 4:05 PM
To: xmlbeans-dev@xml.apache.org
Subject: RE: get/set methods for <xsd:any> element
Given the fact that a solution should not do validation, for performance and
stability reasons discussed earlier, and also having predictable results
for any instance, I'd like to propose as a solution the addition of the
following methods on XmlObject:
XmlObject[] getChildren(QName qname);
XmlObject[] getChildren(String uri, String localName);
XmlObject[] getChildren(QNameSet qnameSet);
They will return the content of all elements that have the given name or
are contained in the qnameSet.
For attributes:
XmlObject getAttribute(QName qname);
XmlObject getAttribute(String uri, String localName);
XmlObject[] getAttributes(QNameSet qnameSet);
QNameSet can represent a set of QNames, it can represent ##any qnames.
Or ##other and ##targetNamespace for a given target namespace. Or any
set of qnames because QNameSet supports set operations like union,
intersect and invert.
This doesn't solve any case related to wildcards but for less usual
cases there are other ways to get to the needed attributes or elements:
- selecting an xpath.
- navigate with XmlCursor.
- and soon the ability to get to the DOM nodes.
I'd like to hear your opinion if this solution is (or is not) solving
your concrete cases?
Cezar
> -----Original Message-----
> From: Wolfgang Hoschek [mailto:whoschek@lbl.gov]
> Sent: Friday, February 20, 2004 7:29 PM
> To: xmlbeans-dev@xml.apache.org
> Subject: Re: get/set methods for <xsd:any> element
>
>
> Yep, I voted for (B).
>
> What you suggest below would work fine for us as well; the ambiguous
> corner cases discussed in this thread simply don't come up in
> our practice.
>
> Wolfgang.
>
> David Bau wrote:
> > I think we should be guided by the "element declarations
> > consistent" rule: the schema specification requires that
> > any definition for <foo> be of exactly the same type, even
> > if it's pulled in via a wildcard. (I've asked the W3C XSWG
> > to clarify this fact on the email lists
> >
>
> 03OctDec/0029.html)
> >
> > This rule strongly suggests that the meaning of <foo>
> > shouldn't change whether or not it happens to be pulled in
> > via wildcard, or whether it's matched by a particular
> > <choice> branch or substitution group. A <foo> should mean
> > the same thing no matter what different elements surround it
> > and in what order.
> >
> > So... XMLBeans currently takes advantage of this rule so
> > that when you say getFooArray(), it returns _all_ elements
> > <foo>, even if one or more of them happen to be matched by
> > <any> or substitution group or some combination of
> > <choice>s. XMLBeans does not validate to bind.
> >
> > As I mentioned in the last email, I think that whatever
> > method we have for getOtherArray() should have similar
> > by-name-binding behavior, i.e., if "foo"s are in the
> > recognized element set, getOtherArray shouldn't return
> > "foo"s.
> >
> > The thing I'd like to really avoid is forcing validation to
> > occur for any binding. Why? Because it's fragile and
> > expensive, and the schema spec has a rule (EDC) that is
> > explictly designed to avoid making validation necessary for
> > binding. We should take advantage of it.
> >
> > David
> >
> >
> > ----- Original Message -----
> > From: "Radu Preotiuc-Pietro" <radup@bea.com>
> > To: <xmlbeans-dev@xml.apache.org>
> > Cc: <davidbau@hotmail.com>
> > Sent: Thursday, February 19, 2004 9:21 PM
> > Subject: [xmlbeans-dev] RE: get/set methods for <xsd:any>
> > element
> >
> >
> > If we go down this path, then what if we modify David's
> > example to the following:
> >
> > <xs:complexType ...>
> >   <xs:sequence>
> >     <xs:any .../>
> >   </xs:sequence>
> > </xs:complexType>
> >
> > <xs:complexType ...>
> >   <xs:complexContent>
> >     <xs:restriction ...>
> >       <xs:sequence>
> >         <xs:element .../>
> >         <xs:any .../>
> >       </xs:sequence>
> >     </xs:restriction>
> >   </xs:complexContent>
> > </xs:complexType>
> >
> > This is still a valid restriction, but what will the
> > Derived.getAnyArray() return?
> > (suppose we have <foo/><foo/> in the instance document)
> > Two elements of type Foo, or just one? If two, how do you
> > then know how many <foo/> elements you had in the instance
> > document?
> > This is a strange issue...
I am using JBoss 4.0.3RC1, upgraded with JBoss EJB3.0 RC1. The JBoss AS, with a simple clustered Stateful Session Bean that keeps a count of how many times it has been accessed, is deployed on two server machines. The Session Bean is accessed by a JSP deployed in the web container of the JBoss AS. I am fronting JBoss' Tomcat with Apache2 on another web server and was able to make requests to either JBoss AS server machine when the other is shut down. However, I could not simulate the replication of the Stateful Session Bean. I tested accessing one JBoss AS server machine and increasing the count, then shut down this server, and the request was diverted to the other server, but the count started from 1 again. I was expecting the count to continue from where I last left it. I would appreciate it if someone could point out whether my expectation is correct, and what the solution is if my requirement of keeping the counter is to be achieved.
Here's a snippet of my session bean (imports and closing brace added for completeness):

import java.io.Serializable;
import javax.ejb.Remote;
import javax.ejb.Stateful;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.jboss.annotation.ejb.Clustered; // JBoss-specific annotation

@Stateful
@Clustered
@Remote(AccountMaintenance.class)
public class AccountMaintenanceBean implements AccountMaintenance, Serializable {

    @PersistenceContext(unitName="MySqlMgr")
    private EntityManager manager;

    private int hitcount = 0;
    private String SessionName = "Compaq nx5000 - Hit Count : ";

    public String increment() {
        hitcount++;
        return SessionName + Integer.toString(hitcount);
    }
}
Problem with rotating text (Qt 4.8.2, WinCE6)
Hi
We are using QPainter::rotate to rotate some text on the screen using the following code:
@void Screen::paintEvent(QPaintEvent*)
{
QPainter painter(this);
painter.translate((qreal)100, (qreal)100);
painter.rotate(-90.0);
painter.drawText(0, 0, "Rotated text");
}@
This worked perfectly well under Qt 4.7.2 but no longer works under Qt 4.8.2 for WinCE 6.0. The debug version now throws an assert, while the release version simply lines the text up vertically without rotating the glyphs. If we run the same test program under Win32, everything works as expected.
Did anybody else observe such behaviour? Any idea what went wrong here?
Best regards
Andreas
Hi Andreas,
Did you find any solution to this problem?
I just ran into the same problem. I try to write vertical text in qml and got the same assert in qfontengine_win.cpp. (I run win ce 6 and qt 4.8.2.)
Regards,
Erik
Hi Erik
I have added bug QTBUG-26347 to the issue tracker. Basically, the work-around is as follows:
There is a problem in the following code:
3439 bool QRasterPaintEngine::supportsTransformations(qreal pixelSize, const QTransform &m) const
3440 {
3441 #if defined(Q_WS_MAC) || (defined(Q_OS_MAC) && defined(Q_WS_QPA))
3442     // Mac font engines don't support scaling and rotation
3443     if (m.type() > QTransform::TxTranslate)
3444 #else
3445     if (m.type() >= QTransform::TxProject)
3446 #endif
3447         return true;
3448
3449     if (pixelSize * pixelSize * qAbs(m.determinant()) >= 64 * 64)
3450         return true;
3451
3452     return false;
3453 }
adding "|| defined(Q_WS_WINCE)" to line 3441 solves the problem.
Best regards
Andreas
Thank you!
It worked fine.
Best regards,
Erik
https://forum.qt.io/topic/17751/problem-with-rotating-text-qt-4-8-2-wince6
I am trying to create a security group using the AWS CLI, but I am getting an error.
aws ec2 create-security-group --group-name my-sg --description "My security group" --vpc-id vpc-208fft56hhg
An error occurred (InvalidVpcID.NotFound) when calling the CreateSecurityGroup operation: The vpc ID 'vpc-1a2b3c4d' does not exist
The error says you have entered a wrong VPC ID.
Check your VPC ID and then execute the command again; your security group will be created.
C:\>aws ec2 create-security-group --group-name my-sg --description "My security group" --vpc-id vpc-0f137be1215hgyta16
{
"GroupId": "sg-0788ff40f5gt56a55"
}
This way you will be given the security group ID, confirming that the security group was created.
Hope this helps.
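A common cause of `InvalidVpcID.NotFound` is simply a mistyped ID. As an aside (not from the original answer), you can sanity-check the format offline before calling the API; the pattern below assumes the usual AWS convention of `vpc-` followed by 8 (legacy) or 17 (current) lowercase hex characters:

```python
import re

# Format check only: "vpc-" plus 8 (legacy) or 17 (current) hex characters.
# This cannot tell you whether the VPC actually exists in your account/region.
VPC_ID_RE = re.compile(r"^vpc-([0-9a-f]{8}|[0-9a-f]{17})$")

def looks_like_vpc_id(vpc_id):
    return VPC_ID_RE.match(vpc_id) is not None

print(looks_like_vpc_id("vpc-1a2b3c4d"))      # well-formed legacy ID -> True
print(looks_like_vpc_id("vpc-208fft56hhg"))   # contains non-hex characters -> False
```

If the format is fine but the error persists, make sure the CLI is pointed at the same region as the VPC (via `--region` or `AWS_DEFAULT_REGION`), since VPC IDs are region-scoped.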
https://www.edureka.co/community/31041/error-while-creating-security-group-using-aws-cli
Randomize objects in object manager?
On 06/02/2013 at 02:14, xxxxxxxx wrote:
Hi all,
Is there a way to make a script that randomizes the order of objects under a null in the object manager?
Grtz,
Hans Willem
On 06/02/2013 at 03:14, xxxxxxxx wrote:
Easy:
import random

def main():
    objs = op.GetChildren()
    [x.Remove() for x in objs]
    random.shuffle(objs)
    [x.InsertUnder(op) for x in objs]

main()
Note: no undos
On 06/02/2013 at 03:26, xxxxxxxx wrote:
Wow! Thanks!
On 06/02/2013 at 14:11, xxxxxxxx wrote:
Forgot to add c4d.EventAdd() as the last line of the main() function. This will update the Cinema 4D interface so you can see the change immediately.
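For readers without Cinema 4D at hand, the detach-shuffle-reinsert pattern the script uses can be sketched in plain Python; the `Node` class below is a stand-in for a scene object, not part of the c4d API:

```python
import random

class Node:
    """Minimal stand-in for a scene object holding an ordered list of children."""
    def __init__(self, name):
        self.name = name
        self.children = []

    def insert_under(self, parent):
        parent.children.append(self)

def shuffle_children(parent):
    # Detach all children, shuffle the detached list, then reinsert them
    # one by one, which yields the new random order.
    objs = list(parent.children)
    parent.children.clear()
    random.shuffle(objs)
    for obj in objs:
        obj.insert_under(parent)

root = Node("Null")
for i in range(5):
    Node("Cube.%d" % i).insert_under(root)
shuffle_children(root)
print([child.name for child in root.children])
```

Note that in Cinema 4D, `InsertUnder` may insert at the top of the child list rather than the bottom; either way the result is a random order.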
https://plugincafe.maxon.net/topic/6921/7772_randomize-objects-in-object-manager
Did you know you can ping a ByteBlower Port? This requires no extra configuration on your end, the port will respond to Ping requests as soon as it has an IP address.
Pinging a ByteBlower Port is especially helpful when debugging connectivity issues, this allows you to check where the ByteBlower Port is still reachable and from which link connectivity is lost.
In the examples below we use IPv4; this also works with IPv6, of course.
A ByteBlower Port is reachable with ping as soon as the port has a valid address in the Realtime View. This becomes available very early in the configuration phase and remains so throughout the whole test run.
To increase the time available for debugging, you can enable a pause between scenario configuration and the test run. Right before the test traffic starts, you'll see the pop-up below.
When the issue is easily solved, you can still continue the test run. From ByteBlower 2.11.4 on, the NAT entries are kept alive until the test starts.
This pop-up is shown by default. To disable it, use the checkbox; it can be enabled again later from the Preferences.
Finally, to make debugging even easier, it helps to have a very minimal scenario: only enough to configure the ByteBlower Ports. To this end we suggest disabling NAT (Port View) and using only TCP flows.
Pinging works just the same with the ByteBlower API: a ByteBlower Port is pingable as soon as it has a proper IP address. As the example below shows, this is the default behavior and requires no extra configuration.
import byteblowerll.byteblower as byteblower
api = byteblower.ByteBlower.InstanceGet()
bb_server = api.ServerAdd('10.8.254.111')
bb_port = bb_server.PortCreate('nontrunk-1')
l2 = bb_port.Layer2EthIISet()
l2.MacSet('00-bb-00-11-22-33')
l3 = bb_port.Layer3IPv4Set()
dhcp = l3.ProtocolDhcpGet()
dhcp.Perform()
print('ByteBlower Port is pingable on %s' % (l3.IpGet()))
https://support.excentis.com/index.php?/Knowledgebase/Article/View/debugging-connection-errors
This guide will cover the basic setup of namespaces and mosaics in the NanoWallet.
To create a namespace you need to pay the following fees (note that fees may change):
To create a mosaic you need:
You are a farmer with 50 potato fields. Since 50 fields are too many for yourself, you start to think about selling some of the fields to investors.
How can this be done with NEM and the NanoWallet? With namespaces and mosaics, of course.
Login to the NanoWallet, go to Services and choose "Create namespace". First, we create the root namespace.
Once you have entered the values, click "Register". Go to the Dashboard and check, if the registration was successful:
Now that the root-namespace exists, we can create a sub-namespace.
Once you have entered the values, click "Register". Go to the Dashboard and check, if the registration was successful:
http://docs.nem.io/en/nanowallet/namespaces
Encouraged by the positive feedback I've received so far, I did my best to stay away from GTA IV during the weekend and did some more work on the Zune Clock. So, I've added a new feature: the ability to customize the background.
What's new in this version:
- New feature: background customization (you need to have a few pictures on the Zune)
- Bug fix: fixed a bug which caused the settings data to become corrupt in some cases
- Bug fix: performance improvements (the application loops slower than initially; a clock doesn't need a very high refresh rate)
- Update: using white color for the settings (it makes the text more visible when a custom background is set)
- Update: the application version is displayed in setting mode
- Code fix: fixed the sources so that they compile fine in Debug
- Code fix: refactored some of the settings code to make room for future settings
A few days ago, I've decided to give a try to the XNA CTP which allows creating applications for the Zune. So, I've started going through the tutorials and I've decided to create a Clock application. I know, XNA is meant to be used for writing games... but I always wanted to have a clock on the Zune :). I was impressed to discover that the Zune has an internal clock, which works even when the Zune is shut down. The only issue I had was that the time was a little bit off, and there was no way to set the internal clock. So I've added the ability to "adjust" the time displayed. Basically, you define an offset which gets applied to the hardware clock. The value gets saved so that the next time when you start the application, it will still show the correct time.
Description: Simple clock which displays the date and time. It allows adjusting the date and time displayed (it doesn't modify the internal hardware clock). The value gets saved and loaded the next time you start the application.
Features:
- displays the time
- displays the date
- allows setting the date/time
- the date/time adjustment is persisted, so that the next time you start it you don't have to adjust it again
Controls:
- Back button: exits the application
- Pause button: enters setting mode
  - Up/Down: increments the selected setting
  - Left/Right: cycles through the different settings (hours, minutes, seconds, days)
  - Click on the Zune pad: resets the adjustment to 0 (so that you will see the hardware date/time)
  - Pause/Back: exits setting mode
I hope you will find this application useful.
EDITED: This article explains how to install Games on the Zune:
In order to provide an editing experience as close as possible to the runtime, Expression Blend executes some of the user defined code (it instantiates controls, data sources, sets properties, etc.). But in some cases, you may want to avoid this behavior (if that code doesn't work properly when executed inside Blend or if it is a more time consuming experience).
This post explains how to detect if your code is executed inside Blend and provide an alternate implementation for that case. For example your datasource could connect to a database and extract the data, but, when running inside Blend, it could generate a minimal sample data which can be used for creating some data templates).
The following sample uses a custom TextBlock which displays a different message when the application is loaded in a designer (in our case Expression Blend):
public class MyTextBlock : TextBlock
{
public MyTextBlock() : base()
{
if (System.ComponentModel.DesignerProperties.GetIsInDesignMode(this))
{
this.Text = "Design mode detected!";
}
else
{
this.Text = "No design mode";
}
}
}
For further information you may check out the documentation for the DesignerProperties Class.
The new version can be downloaded from here. Here are some of the changes:
Some time ago, Peter Blois posted "Snoop", a tool which I find really useful for debugging WPF applications. It allows you to:
You may find more info and downloads on the Snoop official web page:
Today I played a little bit more with Expression Graphic Designer... It was the first time when I used it more seriously for drawing things (I had used it before, but mainly for adjusting and stitching photos). So, this is what I managed to draw in about one hour...
The new Windows Live Toolbar includes, for free, Onfolio, which is a very nice and useful feeds reader.
For a more comprehensive list of features, you may take a look here.
The Windows Live Toolbar can be downloaded from here:
One more thing... don't forget to add the Expression Blog to Onfolio ;)
[ValueConversion(typeof(string), typeof(string))]
public class ReverseStringConverter: IValueConverter
{
public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
{
string source = value as string;
return ReverseStringConverter.RevertString(source);
}
public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
{
string source = value as string;
return ReverseStringConverter.RevertString(source);
}
private static string RevertString(string source)
{
// Guard against a null input ("value as string" may yield null at runtime).
if (source == null)
{
return null;
}
int sourceLength = source.Length;
char[] destination = new char[sourceLength];
for (int iterator = 0; iterator < sourceLength; iterator++)
{
destination[iterator] = source[sourceLength - iterator - 1];
}
return new string(destination);
}
}
Your wait is over! The March CTP version of Microsoft Expression Interactive Designer is available for download. The download instructions are available on the Expression Blog here.
This example shows how databinding and styles can be used for displaying a set of connected nodes. To better illustrate this, I've added some basic interactivity: the nodes can be dragged using the mouse and databinding will update automatically the positions of the lines.
For simplicity, I will use a set of predefined nodes and a set of predefined lines connecting those nodes (but, the functionality could be easily exteded to create them dynamically).
I've used mainly Microsoft Expression Interactive Designer for creating this sample, but because of some limitations in the current CTP, I had to tweak a few times the code manually. The project can be opened and edited by Microsoft Expression Interactive Designer without any problems.
This time I won't create a step by step tutorial; I will explain shortly each part of the code and how everything works. So feel free to download the project, open it and Visual Studio or Microsoft Expression Interactive Designer, run it and look through the code.
We will use a Canvas as the DocumentRoot. Canvas is a simple layout container which allows positioning children by specifying X-Y coordinates.
By default, Microsoft Expression Interactive Designer creates the DocumentRoot as a Grid. In order to create a new scene with a Canvas, you need to uncheck the option "New Scene Creates a Grid" in the "Tools" menu. Scenes created after that will have a Canvas as the DocumentRoot.
Styles represent a great way to reuse properties and events. This is what our style does:
<Canvas.Resources>
<Style x:
<Setter Property="Stroke" Value="#FF000000" />
<Setter Property="Fill" Value="sc#1, 1, 0, 0"/>
<Setter Property="Width" Value="50" />
<Setter Property="Height" Value="50"/>
<Setter Property="RenderTransform">
<Setter.Value>
<TransformGroup>
<TranslateTransform X="0" Y="0"/>
<ScaleTransform ScaleX="1" ScaleY="1"/>
<SkewTransform AngleX="0" AngleY="0"/>
<RotateTransform Angle="0"/>
<TranslateTransform X="0" Y="0"/>
<TranslateTransform X="-25" Y="-25"/>
</TransformGroup>
</Setter.Value>
</Setter>
<EventSetter Event="MouseDown" Handler="OnMouseDown" />
<EventSetter Event="MouseUp" Handler="OnMouseUp" />
</Style>
</Canvas.Resources>
Now, when we create a new node, we just need to specify its style and its position, because all the other properties and the event handlers are defined inside the style.
<Ellipse x:
You can also add new nodes using Microsoft Expression Interactive Designer:
We will define a set of predefined lines and databind the coordinates of the two points to the coordinates of the nodes (the attached properties Canvas.Left and Canvas.Top). Unfortunately, this step can currently only be done manually, because Microsoft Expression Interactive Designer has some limitations (we know about these issues and we hope to improve them in the next releases):
A Line object has two pairs of values representing the coordinates for the two points: X1, Y1 and X2, Y2. Here is how we can bind one of the ends to the coordinates of one of the ellipses:
<Line Stroke="#FF000000"
X1="{Binding ElementName=Ellipse, Path=(Canvas.Left), Mode=Default}"
Y1="{Binding ElementName=Ellipse, Path=(Canvas.Top), Mode=Default}"
X2="50" Y2="150" />
First of all we will add an event handler for the MouseMove event of the DocumentRoot element. We will name it OnMouseMove. This step can be done using Microsoft Expression Interactive Designer by selecting the DocumentRoot and using the Events palette to add the event.
We also need to add the events handler OnMouseUp and OnMouseDown defined in the style. Unfortunately this step needs to be done manually at this point.
The logic is simple: when we get a MouseDown notification, we store in a private field the sender (the node) of that notification; in the MouseMove handler we will update the position of that node. This is the final code:
public partial class Scene1
{
public Scene1()
{
// This assumes that you are navigating to this scene.
// If you will normally instantiate it via code and display it
// manually, you either have to call InitializeComponent by hand or
// uncomment the following line.
// this.InitializeComponent();
// Insert code required on object creation below this point.
}
private FrameworkElement movingObject = null;
private void OnMouseMove(object sender, System.Windows.Input.MouseEventArgs e)
{
if (this.movingObject != null)
{
Point mousePosition = Mouse.GetPosition(this.DocumentRoot);
this.movingObject.SetValue(Canvas.LeftProperty, mousePosition.X);
this.movingObject.SetValue(Canvas.TopProperty, mousePosition.Y);
}
}
private void OnMouseDown(object sender, System.Windows.Input.MouseButtonEventArgs e)
{
this.movingObject = sender as FrameworkElement;
}
private void OnMouseUp(object sender, System.Windows.Input.MouseButtonEventArgs e)
{
this.movingObject = null;
}
}
This implementation could be improved (for example, if you click on the edge of an ellipse and move the mouse just a little bit, you will notice that the ellipse will "jump" so that it is centered on the mouse pointer).
Once you've got all the code in place, you just need to build and run the application. You can click and move different nodes and the lines will just update automatically.
Notes:
The February CTP of the WinFX Runtime Components is available for download.
Note: The current version of Microsoft Expression Interactive Designer doesn't support the new version of the WinFX runtime. For more details click here.
This article describes how to create a simple style with one trigger in Microsoft Expression Interactive Designer. We will create a blue rectangle which becomes red when the mouse is over it.
Draw a rectangle using the "Rectangle" tool from the Tools palette. Then click on the "Selection" tool in the Tools palette.
EID by default sets a Fill on your rectangle, which would override our Style settings, so we need to delete that property (if you switch to the XAML view, you will notice that there is a Fill attribute specified for the rectangle): in the Properties palette, click on the Fill property and select "Clear/Default"; this should clear the property.
In the timeline, right-click on your rectangle->Edit Style->Create Empty Style (usually you should be able to do this in the scene, but now, because the Fill property is not set, the rectangle may not be selectable in the scene - known issue)
Click "Ok" in the "Create Style Resource" dialog
Bring up the "Timeline" palette - note that we are currently editing the style (the node displayed should be "Style"). In order to get the mouse events at runtime, our rectangle will need a Fill; so we will set a Fill in the Style.
In the Appearance palette, set the Fill to "blue" (because we are in the Style editing mode, this will create a Setter inside the style)
<Grid.Resources>
<Storyboard x:
<Style x:Key="RectangleStyle1">
<Setter Property="Fill" Value="sc#1, 0, 0, 1"/>
<Style.Triggers>
<MultiTrigger>
<MultiTrigger.Conditions>
<Condition Property="IsMouseOver" Value="True"/>
</MultiTrigger.Conditions>
<Setter Property="Fill" Value="sc#1, 1, 0, 0"/>
</MultiTrigger>
</Style.Triggers>
</Style>
</Grid.Resources>
<Rectangle Stroke="#FF000000" Style="{DynamicResource RectangleStyle1}"
Width="Auto" Height="Auto" HorizontalAlignment="Stretch" VerticalAlignment="Stretch"
MinWidth="0" MinHeight="0" Margin="128,99,183,202" x:
This article describes how to create a master details list using Microsoft Expression Interactive Designer (and guess what: no code needed!). We are going to use databinding to achieve this.
We will use a simple xml, containing the list of products from the Expression family, and a short description for each of them. Download the attached "products.xml" and save it on your local machine.
Start Microsoft Expression Interactive Designer. We will use the empty project created by default
Add products.xml as an XML Data source. For that, in the "Data" palette, click on the "Add XML Data Source" and Browse to the "products.xml". Press Ok. Now the Data Palette will show the new data source (its default name is "ProductsDS").
Expand the "ProductsDS" data source and drag the "Product[n]" node onto the scene. Choose ListBox from the context menu. Press "Ok" at the first dialog; in the second one, uncheck "Description" (we want to show only the name in the list) and press "Ok". At this point, you should have a list displaying the names of the products.
We want to display the description of the selected item in a separate TextBox.
Create a TextBlock control from the Library Palette
In the "Properties" palette, click on the "Data Context" property and select "Databind". Click on the "Element Property" button, select the "ListBox" as the scene element (left side) and "Selected Item" as the property (right side). Press "Finish". This way we set the Data Context to the Selected Item of the Listbox.
In the "Properties" palette, click on the "Text" property and select "Databind". Click on the "Explicit Data Context" button and expand the tree and select "Description". Press Finish. This way we specify what exactly we want to display as the Text.
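Put together, the two databinding steps above correspond to XAML roughly like the following (a sketch based on my reading of the steps, not taken from the project; since the data source is XML, the Text binding uses an XPath, and the element name "ProductsList" is an assumed name for the generated ListBox):

```xml
<TextBlock
    DataContext="{Binding ElementName=ProductsList, Path=SelectedItem}"
    Text="{Binding XPath=Description}" />
```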
The only thing left is to run the project (in the main menu, under Project, click "Run Project"). The application will be compiled and executed. The Description textbox shows the description of the selected item.
You may notice that nothing is displayed at first. That's because, by default, there is no item selected in the list. To select an item by default, select the listbox and set the SelectedIndex property to 0 (by default it is -1).
http://blogs.msdn.com/adrianvinca/
Philipp Auersperg writes:
> .... missing control over stack trace ....
I agree with you: you should have control over this stack trace. However, I fear you will need to change the Zope source to get this. It should be easy to implement: bind the traceback to a variable in the namespace the "standard_html_error" page is rendered in, rather than appending it directly. The "standard_html_error" page is then free to do with it whatever it likes. Maybe you could file a feature request in the Collector. Dieter
Re: [Zope] Intercepting NotFoundError with standard_error_message
Dieter Maurer Sun, 03 Dec 2000 01:40:00 -0800
- [Zope] Intercepting NotFoundError with standard_error_me... Philipp Auersperg
- Dieter Maurer
https://www.mail-archive.com/zope@zope.org/msg11801.html
tag:blogger.com,1999:blog-15521187762424083912013-10-08T14:46:22.603-06:00William TravelsMy journey from paper trail to wilderness.William on the Road<table style="WIDTH: auto"><tbody><tr><td><a href=""><img src="" /></a></td></tr><tr><td style="TEXT-ALIGN: right; FONT-FAMILY: arial,sans-serif; FONT-SIZE: 11px">From <a href="">Untitled Album</a></td></tr></tbody></table><p>Things are going well on the trip home. Currently in Seville, Spain. At this point, we´ve seen too much to write about and we´ve been having trouble loading pictures onto picasa, so be patient.<br /><br />Fortunately, Jeremy already wrote up some stuff about our 3 weeks in Tunisia. You can also see his pictures.<br /><br />Go <a href="">here</a> first.<br /><br />Then, mosey <a href="">here</a>.<br /><br />Still with us? Go <a href="">here</a>. </p><p>Until next time.<br /></p><div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div><img src="" height="1" width="1"/>William of the Videos present my villagers saying goodbye to me and saying hello to all of my friends and family back home.<br /><br />This is Illa, he was my landlord and neighbor. The setting is just outside of my house. I spent a lot of time with his family. In the background is Chaibou (next to the cart) and Moubarak (the little one to the left).<br /><br />I'm hoping I'll get a few more of these up before I hit the road. So far this is the shortest of them and therefore the easiest to upload. One hour of uploading for a 6 second video. <span style="font-style: italic;">C'est la vie au Niger.</span><br /><br />I can't promise I'll transcribe them all, but this one is short.<br /><br />Illa: <span style="font-style: italic;">A gaida guida</span>. 
Greet your family.<div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div><img src="" height="1" width="1"/>William't Posted in a While<table style="WIDTH: auto"><tbody><tr><td><a href=""><img src="" /></a></td></tr><tr><td style="FONT-SIZE: 11px; FONT-FAMILY: arial,sans-serif; TEXT-ALIGN: right">From <a href="">Giraffes!</a></td></tr></tbody></table><br />Thanks to everyone who has reminded me that I haven't posted in a while. I'm currently in Niamey and I'm working on all the paperwork that I have to complete before I leave Niger. There is quite a lot of reports that Peace Corps requires and it's a bit difficult to concentrate in front of a computer when you spent most of your time in the bush. My eyes are getting tired.<br /><br />I'm out of here in a week and I'll try to get a few more posts up before I leave.<br /><br />Until then check out <a href="">these pictures</a> if you haven't seen them on facebook already.<br /><br />Enjoy.<div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div><img src="" height="1" width="1"/>William much to say, so many comparisons. I don't know where to start. <br /><br />Since I'm on vacation, I'm not gonna write too much on here.<br /><br />But, I did want to let you know that my friend Jeremy has internet at his post now and he's been working on a "Week in the Life of...". 
He's in a city post and thought people would be interested in reading something a little different.<br /><br />Start <a href="">here (day 1)</a><br /><br />Then go <a href="">here (day 2)</a><br /><br />Then <a href="">here (day 3)</a><br /><br />Then go to his <a href="">blog</a> and check back.<br /><br />Hope y'all enjoy.<div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div><img src="" height="1" width="1"/>William<table style="width:auto;"><tr><td><a href=""><img src="" /></a></td></tr><tr><td style="font-family:arial,sans-serif; font-size:11px; text-align:right">From <a href="">Kokowa - Traditional Wrestling</a></td></tr></table><br />I'm going home to America for 3 weeks. I must admit that I frightened. It will be the first time I have left Africa since I arrived in July of 2007. Wow!<br /><br />I haven't really decided what all I'll be doing (or more importantly eating) but these things will come together.<br /><br />The past few weeks have been great. We've finished yet another draft on the proposal that the mayor's office is writing for grain banks. I feel really great about this project because I'm not actually writing the proposal. I wanted to teach them how to do it and so far it's going slow, but it's going. As they say here, "<span style="font-style: italic;">sannu sannu, ba ta hana zuwa</span>". Going slowly doesn't prevent one from going.<br /><br />Things are still going well with the student governments, but nothing major to report.<br /><br />Ryan, Tim, and I did a <a href="">radio show</a> in Maradi last week. It was my first time to do a radio show in Maradi and the radio station there is really nice.<br /><br />More to come soon. Check out these other photos. Sorry didn't have time to sort them properly and add captions, but coming soon... 
hopefully.<br /><a href=""><br />Kids</a><br /><a href="">Traditional Wrestling - Kokowa</a><br /><a href="">Zinder</a><div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div><img src="" height="1" width="1"/>William Ibro and Hyena<em>This is a folk tale that was written by Seyni Maïga Adama the head-master at the college (middle school level) in my village. He is originally from the other side of the country near the town of Tera. He wrote it and I edited it a bit. I hope you enjoy, more to come later.</em><br /><br />Malam Ibro was a great Marabout. Talibé from different tribes and countries came to study Islamic theology in his school. One day he decided to go on pilgrimage to Mecca. At that time there were no cars and no motorocycles. People traveled on foot or by horse. Malam Ibro took a horse and went to Mecca. On his way he met Hyena. Hyena was very hungry. She asked Malam Ibro to kill for her, and then she would carry him to Mecca and back. Malam Ibro accepted and killed his horse for Hyena. When Hyena finished her meal, she ran away into the bushes laughing at him.<br /><br />Malam Ibro took his baggage and went to sit hopelessly under a big baobab tree on the riverside. Hare came out from the bushes to drink from the river. When he saw Malam Ibro he went to greet him. After they greeted each other, Malam Ibro told him his problem. Hare knew exactly what to do. He told Malam Ibro to wait for him in the bushes and he would bring Hyena.<br /><br />Hare went into the bush and met Hyena. He told Hyena that he had organized a great feast for his birthday celebration and nobody came. Hare did not know what to do with all of the food and all of the meat. Hare asked Hyena to take a message to all of the animals and to tell them to come quickly. 
Hyena told him that it would be of no use and that he himself would have to play the role of all of the animals of the bush.<br /><br />On their way to the place where Malam Ibro was hidden, Hare moved slowly and Hyena asked him to hurry up. He told her that his leg was hurting him. Hyena asked Hare to ride her and he did it. He rode her to the river side and before he realized what was going on they reached the tree where Malam Ibro was hidden. Malam Ibro caught Hyena and rode her all the way to Mecca. When they reached Mecca, Malam Ibro tied up Hyena and performed his ritual washings before prayer.<br /><br />During the journey, Hyena didn’t eat anything. People tried to give her vegetables and grass, but she could not eat them. In Mecca she saw some children eating meat and she begged them for the bones. Full of fear the children ran away and told their parents that the horse of Malam Ibro asked for the bones they were eating. When the adults arrived, they realized that it was not a real horse at all, but Hyena. They beat her and when she was free, she ran back to the bush.<br /><br />Hyena came back to the bush very worn out, hungry, and correctly beaten. After she rested a while she went to the river to drink. At the river she met lady Tortoise. Hyena told Tortoise to go into the woods, fetch firewood, make a fire and kill and roast herself. Then, Hyena would eat her, because she was dying of hunger and fatigue. Tortoise went into the bush fetching wood to do what Hyena ordered her to do. She was fetching wood and crying. By fortune she came across Hare who asked her what was the matter. Hare told Tortoise to climb a tree and he would call to her and if Hyena asks you what was going on you should tell her that Malam Ibro was looking for Hyena.<br /><br />Tortoise executed the plan and Hyena asked her who was calling. Tortoise told her that it was Malam Ibro. 
Upon hearing the name of Malam Ibro she ran like a fugitive into the bushes.<br /><br />After helping Malam Ibro, Hare saved the life of Tortoise, proving once again that Tortoise and Hyena are two of the stupidest animals in the bush.<div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div><img src="" height="1" width="1"/>William<table style="WIDTH: auto"><tbody><tr><td><a href=""><img src="" /></a></td></tr><tr><td style="FONT-SIZE: 11px; FONT-FAMILY: arial,sans-serif; TEXT-ALIGN: right">From <a href="">FESPACO</a></td></tr></tbody></table><p>OUAGADOUGOU, BURKINA FASO—FESPACO, the Panafrican Film and Television Festival of Ouagadougou, takes place every two years and gives film makers from the continent a chance to showcase their films. This year was the 40th anninversary. I traveled there with 4 friends.<br /></p><p>We saw about 6 full length films and about 15 short films and documentaries at about 4 different theaters. Some were good and some lackluster. It was easy to see which areas of the continent have money available for film makers by the number of entries, with things being weighted toward North African countries and South Africa. My favorite film was Teza which won the top prize, the Stallion of Yennenga. I’m no film critic, so I want try to bore you with my opinion, but if you get a chance to see it, I’d recommend it.<br /></p><p>Equally important as the films, was the food and drink. Avocado sandwiches, pork (!), tapas bars, good street food, plantains, more than one beer, and juices and wines made of crazy delicious local fare. If we batted about .300 on watching good films, we batted about .800 on food.<br /></p><p>One of my favorite parts about Ouaga is the fact that there are so many cyclists. They even have bike lanes for bikes and motorcycles. 
It made me shed a tear for my bike in Maradi—Oh, how I miss thee!<br /></p><p>We finished out the week having learned only a few words of Mooré, the dominant language of Burkina. No one seemed to want to help us out. They were perfectly content listening to our French—my broken French. But we were able to find a few people who spoke some Hausa and Zarma, which made us feel like we were home.<br /></p><p>Although it’s been nice getting back to Niger, I hope to get back to Ouaga some day—a great little city with a whole mess of potential.</p><p>--------</p><p>To read Jeremy’s account of Ouaga and other recent events look no further than <a href="">here</a>. For pictures of our trip look <a href="">here</a>. Have a nice day. You enjoy your snow and I’ll enjoy my 100-degree dustiness.<br /></p><p>--------</p><p>Also, I just added a new feature to allow easy feedback. There should be 3 buttons under each post—thumbs up, indifferent, thumbs down. I’ve forgotten what is interesting, so this will help me post more things you’d like to read. I hope it works. Thanks.</p><p>--------</p><p>Heading back out to the bush tomorrow (Friday) morning.<br /></p>William on Tsalla Girl<br /><br />This is Aicha. She delivers my breakfast of tsalla and sauce every morning. I found out since I last wrote about her that she doesn’t actually live in my village, but only lives there during the school year.
She helps out the family who houses her by selling tsalla in the mornings and on market day.<br /><br />Read more <a href="">here</a>.<br /><br />My bowl is on the left, and soon the tsalla will be covered in sauce. Mmmm.<br /><br />-------<br /><br /><p>Lastly, notes from <a href="">FESPACO</a> coming soon (apologies, the FESPACO website is not only difficult to navigate, it is uninformative).</p>William on the 'Interwebs'<br /><br />From <a href="">Student Government Elections--Primary School Sherkin Haoussa</a><br /><br />I’m sorry it’s been so long without a new update. Things around here have been really busy. In December my parents came for a visit. I’ve organized the election and installation of a student government at the two schools in my village. I’ve uploaded pictures from both of those and you can see them <a href="">here</a> and <a href="">here</a>.<br /><br />------<br /><br />I’m working with the headmaster of the collège (middle school) on a collection of folk tales from his region of Niger, near the Burkina Faso border, and I’ll be posting them as I progress through that.<br /><br />------<br /><br />This is my third attempt to type this. Each time either the power goes out or things are moving so slowly that sitting around typing becomes very frustrating.
Also, the heat doesn’t help me want to sit around and sweat while typing.<br /><br />------<br /><br />One side project I’m working on is turning biomass waste into charcoal. Lachlan, another volunteer who lives near me, and I will run our first attempts in our "kiln" when I get back from Burkina. I’ll let you know how that goes.<br /><br />------<br /><br />Now I’m getting ready to go on vacation to Ouagadougou, Burkina Faso, for the FESPACO film festival. I know that’s not really good enough for a 3-month blogging hiatus. But I hope y’all are doing well.<br /><br />William from Vacation<br /><br />These are the pictures from vacation. Enjoy while I continue to procrastinate writing about the vacation. Thanks for your patience.<br /><br />Also, I finished uploading the pictures from <a href="">science camp</a> and you can see them <a href="">here</a>.<br /><br />William Year, by the Numbers<br /><br />I’ve been in Niger as a volunteer for over a year now, and while I’m uploading pictures and writing about my vacation I thought this would be kind of fun to read.
Enjoy!<br /><br />If you want to know anything else, just ask and I’ll add it.<br /><br /><strong>Books read:</strong> 37<br />Harry Potter and the Deathly Hallows - J. K. Rowling<br />Hoot! - Carl Hiaasen<br />Chronicles: Vol I - Bob Dylan<br />A Walk in the Woods: Rediscovering America on the Appalachian Trail - Bill Bryson<br />African Diary - Bill Bryson<br />Blue Like Jazz - Donald Miller<br />Searching for God Knows What - Donald Miller<br />Walden & Other Writings - Thoreau<br />How Good Do We Have to Be? - Kushner<br />Brave New World - Aldous Huxley<br />Lonesome Dove - Larry McMurtry<br />Big Sur - Jack Kerouac<br />The Kite Runner - Khaled Hosseini<br />Slaughterhouse 5 - Kurt Vonnegut Jr.<br />Who Were the Celts - Kevin Duffy<br />A History of the Arab Peoples - Albert Hourani<br />Beyond Humanitarianism: What You Need to Know about Africa and Why it Matters - Council on Foreign Relations<br />The Professor and the Madman: A Tale of Murder, Insanity, and the Making of the Oxford English Dictionary - Simon Winchester<br />Tales of Mystery - Edgar Allan Poe<br />Confessions of an Economic Hitman - John Perkins<br />Blood Meridian - Cormac McCarthy<br />The Alchemist - Paulo Coelho<br />Deep Blues: A Musical and Cultural History from the Mississippi Delta to Chicago’s South Side, to the World - Robert Palmer<br />The Baobab and the Mango Tree: Lessons about Development, African and Asian Contrasts - Nicholas and Scott Thompson<br />Soul Survivor: How 13 Unlikely Mentors Helped My Faith Survive the Church - Philip Yancey<br />Zen Guitar - Philip Toshio Sudo<br />The Corner: A Year in the Life of an Inner-City Neighborhood - David Simon & Edward Burns<br />Into the Wild - Jon Krakauer<br />Three Cups of Tea: One Man’s Mission to Promote Peace... One School at a Time - Greg Mortenson and David Oliver Relin<br />Joshua: A Parable for Today - Joseph F. Girzone<br />Captain Alatriste - Arturo Perez-Reverte<br />Hausaland Divided: Colonialism and Independence in Nigeria and Niger - William Miles<br />Middlesex - Jeffrey Eugenides<br />Robinson Crusoe - Daniel Defoe<br />Siddhartha - Hermann Hesse<br />Blood Done Sign My Name: A True Story - Timothy B.
Tyson<br /><br /><strong>Bacteria</strong> (tested): 6 or 7<br /><br /><strong>Bacteria</strong> (self-diagnosed): I gave up on counting<br /><br /><strong>Amoebas:</strong> 3 (maybe 4, I forgot)<br /><br /><strong>Giardia:</strong> 1<br /><br /><strong>Ear infections:</strong> 2<br /><br /><strong>Most languages heard in one day:</strong> 6 (recognized: Arabic, Hausa, French, Zarma, English, and Fulfulde; 7 if you count <em>Broka (broken English)</em> and <em>Grammar (correct English)</em> as separate languages)<br /><br /><strong>Days spent in Africa:</strong> 472 (as of Oct 10, 2008)<br /><br /><strong>Days spent as a volunteer in Niger:</strong> 410<br /><br /><strong>Days spent in Benin:</strong> 4<br /><br /><strong>Days spent in Togo:</strong> 4<br /><br /><strong>Days spent in Ghana:</strong> 11<br /><br /><strong>Days spent in Burkina Faso:</strong> 2 (I hope to be back in February for FESPACO)<br /><br /><strong>Times I’ve pooped my pants:</strong> zero, ok once<br /><br /><strong>Times my hair has been cut:</strong> 2<br /><br /><strong>Number of people who have purchased tickets to come visit:</strong> 2, thanks mom and dad<br /><br /><strong>Number of days I fasted during Ramadan '08:</strong> 4<br /><br /><strong>Number of electronic devices that Niger has destroyed/damaged:</strong> 2 (iPod and Canon PowerShot A5)<br /><br /><strong>Number of subscribed readers of this blog</strong> (according to FeedBurner)<strong>:</strong> 70<br /><br /><strong>Text messages sent:</strong> 2177<br /><br /><strong>Text messages received:</strong> 3020<br /><br /><strong>Number of times Oumarou said he would come over to play guitar:</strong> 4<br /><br /><strong>Number of times he’s shown up:</strong> 0<br /><br /><strong>Number of volunteers’ villages I could easily walk
to:</strong> 3<br /><br /><strong>Avg number of shot glasses of tea per day:</strong> 3-4<br /><br /><strong>Avg time to bush taxi to Maradi:</strong> 2-6 hours<br /><br /><strong>Avg time to bus from Maradi to Niamey:</strong> 10-12 hours<br /><br /><strong>Avg time spent greeting people per day:</strong> 1-2 hours<br /><br />****<br />updated 10/13/08<br /><br /><strong>Shots received in America:</strong> 3-ish (tetanus booster, polio, yellow fever); that’s all I remember<br /><br /><strong>Shots received in Niger:</strong> 11-ish (diphtheria?, meningitis, rabies series of 2 or 3, hepatitis A series of 3, hepatitis B series of 2, and yearly TB tests); sorry, I don’t have the info in front of me.<br /><br /><strong>Drugs taken in Niger:</strong> 8-ish (the ones I remember are mefloquine (every week), Fasigyn, Humatin, Cipro, cephalexin, and Augmentin)<br /><br />William!<br /><br />My stay in Niamey is almost over. Swear-in for the new volunteers went well. I had an excellent birthday. Thanks to all those in Niger, America, and elsewhere for all the birthday wishes.<br /><br />I’m leaving tomorrow (Wednesday) morning at 5:00 am. I’m headed to Benin, Togo, Ghana, and Burkina Faso. It will be the first time I’ve left Niger since I arrived over a year ago.<br /><br />I’ve posted some pictures recently and wanted you to have something to look at while I was gone.
I’ll get a post up about my vacation travels when I return.<br /><br /><a href="">Science Camp</a> (not finished uploading)<br /><a href="">Map Project</a><br /><a href="">Calendar</a><br /><br />Read about these projects in the last two posts, <a href="">here</a> and <a href="">here</a>.<br /><br />Enjoy!<br /><br />William, Meetings, and Fasting Oh My!<br /><br />These are just a few things I wrote in the village this past week. Enjoy!<br /><br />I don’t have a lot of time on the internet, so please forgive my errors. I’ll try to clean it up later.<br /><br />Here goes nothing...<br /><br />Sunday, August 31, 2008<br /><br />Thursday I came back to the village by bush taxi after finding out that budget cuts had eliminated our shuttles until the start of the new fiscal year in October.<br /><br />Friday we completed the calendar in the mayor’s office. I also set up a meeting to set commune priorities for construction projects: grain banks, classrooms, and wells within the commune. And I suggested that we plant more trees in the 3 or 4 large markets in the commune, because in Serkin Hausa’s market there is very little shade. This meeting is scheduled for Tuesday and Wednesday, the 2nd and 3rd of September. It was a really great day, and if for no other reason, it felt productive.<br /><br />Saturday morning was spent reading and playing with my kids. In the afternoon I went to Mallam Ada’s shop on the main road to hang out with his son Saminou. Saminou claims that he was top of his 4ème (quatrième) class (approximately 7th grade) and will be in Serkin Hausa’s inaugural troisième class later this year. He usually speaks to me in French and I respond in Hausa, which is his native tongue.<br /><br />Sunday morning, I read a bit more and went and sat with Illa and the tsoho, old man, under the tree outside of our concessions.
After I had been there a short while, the trainee Illa had been hosting, a university student getting in some practical field work, prepared to leave. I walked them to the tasha, the "bush taxi station". Moussa, the trainee, and I exchanged contact information for when I’m in Niamey.<br /><br />After that I went home, grabbed a few things, and went off to Yaou’s shop. He had been in Maradi for a few days and he returned with a new sewing machine. He said, "It’s old, but it has a lot of kindness" in Hausa, meaning that it’s old but in good condition and will work hard. Yaou patched a couple of pairs of pants and we had tea for the better part of the morning. Around lunch time we looked through some magazines and I showed him pictures of Obama and McCain.<br /><br />Once the pictures had been discussed, I returned home to get some things ready for the meetings to be held on Tuesday and Wednesday. I mostly just worked out a plan for how the meetings should go and familiarized myself with some French and Hausa vocabulary.<br /><br />Hashem and Chaibou’s laundry was finished drying--they have the clothes-washing business cornered in my area of town. In the afternoons we have tea and I help them with the ironing and folding of the clothes. If we have time, Chaibou and I play dara godegay, a traditional game.<br /><br />Monday, September 1, 2008<br /><br />It’s 7:00 am and I’ve been awake for a couple of hours. Tsalla was the order of the day for breakfast. Today was supposed to be the start of Ramadan, but no one in my village could see the moon last night because it was below the horizon. I started my fast, and some of my villagers did too, but the people in my "neighborhood" didn’t see the moon. Those of us who started the fast got word from the radio that Ramadan was starting.<br /><br />I don’t know how many days I will successfully fast, and I’ll only be in my village about a week during the fasting period.
I have no intention of continuing during vacation.<br /><br />The Islamic calendar is based on the lunar cycle, and Ramadan is the 9th month of the year. Fasting during the month of Ramadan is one of the five pillars of Islam. Those who are fasting refrain from food and drink during daylight hours. Many men in my village walk around spitting all day so they don’t accidentally swallow their saliva. The other four pillars are Shahada, the confession of the creed: "There is one God and Mohammed is his prophet"; Salat, prayer five times a day facing Mecca; Zakat, charitable almsgiving; and the Hajj, the pilgrimage to Mecca for all who can afford it.<br /><br />In the morning, you can hear people starting around 4:00 am chanting and shouting for others to wake up and eat. So that’s when I’ll get up and start drinking water and kunu or koko (both millet-porridge-like drinks) and get ready to venture out into the village, flashlight in hand, to find something a little more substantial. After that I take a little nap for a couple of hours.<br /><br />During the day, most people can be found napping in the shade. That is, after they return from work in the fields.<br /><br />In the evenings I plan to stay up drinking tea and eating for most of the night. Also, I found out ice will be available at five o’clock from Mayahi--what a treat!<br /><br />William Camp<br /><br />I helped out with a science camp that was put on by Annie, a volunteer in my region, to give hands-on experience to girls in the equivalent of 7th and 8th grade and help prepare them for the test they have to take to get to the high school level.
Here is an account of the way things went down as I remember them… Enjoy!<br /><br /><strong>Day 1:</strong> Arrive in Tessaoua<br /><br />We arrived late in the afternoon on Saturday, July 19th, to meet and greet the lycée (high school) staff. We brought with us about 150 pounds of grains (rice, pasta, and couscous), set up our shower area, and went over the schedule to prepare for the week. There were six of us this first day, including one of the three Jenns who would arrive during the week.<br /><br />Getting ready for science camp took me back to a special place, with so many fond memories... Camp Beckwith. Most of the nostalgia came from writing the schedule, finalizing group lists, making name tags, and finalizing plans for morning and afternoon sessions. To all of my Beckwith friends… Camp Beckwith rules!<br /><br />High point: Playing Scrabble Junior with two boys who lived on the lycée grounds.<br /><br />Low point: Seeing how much prep work was going to be needed.<br /><br /><strong>Day 2:</strong> Training Day<br /><br />I woke up early with pre-camp excitement and went walking around the lycée grounds taking pictures until Annie woke up. We walked down to one of the bus stations to pick up the second of the Jenns. Once we returned, we continued with preparations for the week. The five lycée girls who would be acting as counselors hadn’t arrived yet, and none of the groups (one or two volunteers and one lycée student) had learned their experiments yet. On top of that, the chef de laboratoire had said he wasn’t going to show up if he didn’t get per diem. Thankfully, he did show up, and all, or at least most, other glitches worked out.<br /><br />Each group set up their respective labs: a chemistry lab, a physics lab, two earth sciences labs, and the computer session that I was working with.<br /><br />Just like the previous night and the following night, we went to the night market to find food.
This night, Nate and I ended up splitting street meat and tuwo and sauce. Tuwo is millet flour that has been boiled until it is somewhere between the consistency of play dough and mashed potatoes. Some volunteers refuse to eat tuwo, but we didn’t bring much money, and you can get a lot of tuwo for not much money. It’s what most people eat for most meals in Niger.<br /><br />High point: Finding tsalla for breakfast on the way back from the bus station. Oh, life’s simple pleasures.<br /><br />Low point: Getting eaten alive by mosquitoes.<br /><br /><strong>Day 3:</strong> Second Day of Training<br /><br />Uh oh! I started to get ciwon ciki, stomach aches. I wasn’t surprised, given all of the random street food we’d been eating, but as if Murphy’s Law were in play, our latrines were about 200 yards from our base of operations. Needless to say, I had the path worn thin after only a few hours. Later I began relying on raw garlic to make my stomach a less friendly place for bacteria to thrive, and I slept most of the day away. As if by some cruel joke, I also developed an ear infection. Despite sleeping most of the day away, it was the only night I had a mattress, and I slept pretty well.<br /><br />That afternoon, 3 more volunteers showed up, including the final Jenn.<br /><br />High point: On our way to the night market for food, I saw some kids playing Streets of Rage on a Sega Master System. I stopped, asked if I could play, and the other kid and I got to level 4 and only lost one life...<br /><br />Low point: ...then someone bumped the TV and we had to start over. Oh well, I was getting hungry.<br /><br />Update: Ear is better now; stomach still giving me concern. Just found out I have giardia, amoebas, and bacteria… the trifecta!<br /><br /><strong>Day 4:</strong> The Girls Arrive<br /><br />We got everything finalized and waited on the collège (middle school) girls to arrive.
After that there was a safety session in one of the labs, we played icebreakers, and then we attempted to watch the movie Bend It Like Beckham. Due to tech problems, we watched it in English without French subtitles. The girls enjoyed it anyway.<br /><br />After dinner I was exhausted, and despite not having a mattress I slept like a little baby log.<br /><br />High point: Feeling like I was working at summer camp again.<br /><br />Low point: Realizing how bad I was at eating rice and sauce with my hand.<br /><br /><strong>Day 5:</strong> First Day of "Classes"<br /><br />Annie’s counterpart led the first two morning sessions, one on "Best Study Methods" and the other on "Obstacles to Girls’ Education". The parts that I sat in on (and understood) seemed to go really well. He is a very patient and dedicated educator.<br /><br />When our first group arrived in the afternoon, I honestly had no expectation of how the session might go. First things first, I hoped Allah would bless us with electricity. Basically, Fatima, the lycée student in my group, nailed her part, and we realized that we wouldn’t get through all of the activities we had created for each group. We eliminated creating the Nigerien flag in Paint.<br /><br />Fatima reviewed the technology vocabulary that we had been going over for the last couple of days and then helped each girl use the mouse, open a program, and close it, and she demonstrated what is possible with different tools in Microsoft Word and Paint. Neither Frances nor I knew that she was going to make each girl practice each action. It was time-consuming, but worth it for the girls.<br /><br />For the next activity, the girls sat down two or three to a computer (depending on how many computers were available/working) and opened Microsoft Word. They typed their names, then we took turns making the text larger and larger and changing the fonts and colors. Once we finished with their names, some of them wanted to write their boyfriends’ names.
Then we took turns typing sentences in French (if it weren’t for Word’s ability to correct my spelling and grammar, I would have looked like a fool).<br /><br />High point: Our session going so well.<br /><br />Low point: My horrible French grammar.<br /><br /><strong>Day 6:</strong> Day of the Lizard<br /><br />Day two of classes went just as well as the first day; the only difference was that we had two classes instead of just one. Fatima felt even more comfortable, and the collège girls responded even better to the whole session because of that. That was amazing, because I thought they had responded so well the first session.<br /><br />During our afternoon break we heard screams coming from the rooms where the girls were staying. A few of us ran down there and discovered the girls throwing rocks at a giant lizard which had become quite comfortable on the wall above the door leading into one of the rooms. It’s definitely the largest lizard I’ve seen in Niger. Although the size tends to increase upon each telling of the story, I’m pretty sure that the lizard was about 15-18 inches long, and it was a portly fellow. We got a few long sticks and knocked it off the wall, and it ran off into the school grounds amidst high-pitched screams. We tracked it down and took pictures. My favorite pictures, though, are of the girls watching the lizard.<br /><br />For the evening activity we divided the girls into two groups. One group played soccer and the other participated in a self-defense class led by Jenn W. Both groups seemed to really enjoy their activities, and I had a lot of fun helping with the self-defense class despite having to be the bad guy, as I was the only boy with the group.<br /><br />Also, Jenn F.
had been sick and left that afternoon, leaving us with only two Jenns.<br /><br />High point: Two words, lizard hunt.<br /><br />Low point: Having a volunteer leave for Maradi due to illness.<br /><br /><strong>Day 7:</strong> Soccer<br /><br />Again things went well with the classes, and again I think Fatima did even better each time. When the last class was finished, I was both glad and sad that they were over. Glad because giving instructions in a foreign language consumes a lot of patience and energy, and being sick I was just tired. Sad because I really enjoy teaching people how to use technology, and I won’t be doing it for a while because my commune doesn’t have electricity.<br /><br />In the afternoon, Jenn B. was also quite sick and also left for Maradi. Then we were down to the strongest, or the luckiest, of the Jenns, Jenn W.<br /><br />In the evening, we played soccer until prayer call, and my team won 1-0.<br /><br />High point: After soccer one of the collège girls, Binta, asked if I’d help her work on penalty kicks.<br /><br />Low point: Losing a second Jenn.<br /><br /><strong>Day 8:</strong> Fête<br /><br />The morning session was a discussion panel of professional women who worked in the sciences. It went really well, and the girls, in addition to asking the panelists questions, went around the room and said what they wanted to do when they grew up. Some of the girls wanted to be teachers, nurses, doctors, government ministers, and president.<br /><br />There was also an evaluation session where the girls were asked several questions about what they enjoyed most/least and whether they would attend again. Annie seemed to be pleased with the feedback she received.<br /><br />We had a closing fête (party) where we played bingo, pin the tail on the donkey, and musical chairs, and had three-legged races. I couldn’t have imagined the games going any smoother, and the girls had such a good time.
After our dinner of guinea fowl and couscous, we set up the projector and laptop to play Hausa music videos. The girls loved it, but they did let me know which songs they didn’t like, and we skipped them. After I saw several yawns and people getting tired, I played a slideshow of pictures taken during the week. Every time someone was in a picture, all of her friends would shout out her name… every time.<br /><br />After the slideshow all the girls pretty much went to bed; it was a long week. We volunteers spent another few hours cleaning up.<br /><br />High point: Fatima coming up after we set up the projector and wanting to know more about the things that we didn’t teach the girls in the computer session.<br /><br />Low point: Despite being excited about going to Maradi the next day, I really enjoyed the week and didn’t want it to end.<br /><br />****<br />Update: 10.10.08<br />Uploaded pictures, click <a href="">here</a>.<br /><br />William’s a Thief in the Village<br /><br /><em>Another post-dated story. I’ll get more information about mid-service training soon.</em><br /><br />I was in Idrissa’s office when it started. I was there inviting him, as my counterpart, to mid-service training. Walking back to the front of the office, where the benches are and most people spend most of their time, I hear the sounds of unfamiliar voices. To be more precise, I hear voices that are unfamiliar for the location.<br /><br />I round the corner and see a mass of people standing underneath the large tree out front. I can only pick out random words and can make no sense of what I do understand.
I can read the faces and body language of the group only enough to know that something is wrong.<br /><br />Deciding that I don’t want to stand around any longer not knowing what is truly going on, I return to the shaded benches in the entranceway of the office. It’s too hot to stand around anyway. I watch for a while longer. More people show up. School is out for the year, and more and more school-aged kids appear, making up the majority of the crowd.<br /><br />Only a few, besides those originally posted under the tree, wear concerned looks. One of the concerned is a shop owner on the main road through the village. The other volunteers in the area and I do most of our non-market-day bush shopping at his shop. Lachlan keeps the shop’s tea inventory dangerously close to non-existent, and we all buy a little something to try to break our larger bills. He hates parting with his coins but likes his repeat customers.<br /><br />The crowd begins shifting and spreading out, though no one is leaving. I ask one of my coworkers what is going on. He tells me in Hausa that there was a thief. Then another tells me in French, as if the first man hadn’t spoken. After that a third says in awkward English, "We have chief". So I reply to all three, "Oh, thief, voleur, barawo". I couldn’t believe it. I’d always thought that I could leave my house unlocked. I lock it, but I always felt that it wasn’t necessary.<br /><br />The shop owner is chatting with the office guard. The guard’s job is more guardian in the traditional Hausa sense than security officer. He makes the tea every day, sweeps the conference room, and is always around to run an errand for someone. Now his task is to call the mayor. No answer.<br /><br />The crowd shifts about and I catch my first glimpse of the young thief, a boy on the front side of his teen years. His head is down, heavy with the great responsibility of carrying this shame, and his arms are tied behind his back. The crowd stares and ignores him.
Staring ensures that he feels his shame and ignoring him leaves him thinking that they thought he would always be a thief. <br /><br />“Le numéro est indisponible” or whatever the recording says when someone’s phone is off or without service. Someone else tried to reach Mr. le Maire again, this time using speaker phone. <br /><br />The shop owner is visibly irritated that he can’t go about his day because no one is there to decide what to do with the boy. I guess it is hard enough not to take matters into your own hands when you find someone breaking into your home. Someone else attempts a phone call. <br /><br />Headed east into town, the shop owner, with the thief by his side, walks. The crowd parades behind them. I ask a few questions to those who remained in the shade. Apparently I’m not asking the right questions. “That way” pointing east was not the answer I was looking for when I asked where they were taking him.<br /><br />That’s ok because moments later they return to the large bedi tree in front of the office. Now, the boy is leaning his shoulder hard into the tree, staring at his feet as if they contain the last bit of dignity left in his body and if he looks away, even for a moment, he will lose that. He’s afraid. He’s trying to hide it, but I doubt that there is a single person here who can’t see through it.<br /><br />With their toy cars made of millet stalk, old tomato paste cans, and old bits of flip-flops, the younger boys continue to filter in. Someone else shows their effort by the use of speaker phone. <br /><br />The shop owner has come to sit with us in the shade on the benches in the entrance to the office. A few young girls have shown up to try and sell food. As best as I can tell we are all waiting on the mayor. Another unsuccessful speaker phone attempt. <br /><br />Another twenty minutes pass and the crowd has thinned out except for the shop owner, thief, office workers, a handful of children and myself.
The boy, still leaning against the tree, looked up and we made eye contact. It was only for a moment, but I’m certain I saw his last shred of dignity blow away in the hot breeze.<br /><br />An hour passes and although I noticed no comings or goings, the crowd is back thrice as strong. Everyone is mocking the boy by ignoring him or making light chatter about the situation. How embarrassing. Now I’m beginning to feel uncomfortable about all this. I wish the mayor would show up so that we can get some resolution to all of this. <br /><br />At first, it all seemed like fun, a vigilante act. Tie ‘em up, then call the sheriff. Parading a thief up and down the street seems as good a socializing act as any. If you don’t want to be paraded up and down the street, don’t steal. That rule sounds easy enough to follow. But, that didn’t make it any easier to watch.<br /><br />Rainy season usually marks the beginning of the hunger season. Outside of the rising global food costs, hunger season is an annual occurrence here as the last year’s millet supplies dwindle, making it difficult for people to find food. Many of my villagers have stopped eating three meals a day and have even stopped eating twice a day, resorting to one daily meal. I’ve heard in other villages that people are eating only once every 2-3 days. This one meal is usually just a millet dish with some kind of sauce and doesn’t exactly cover the bases on the food pyramid.<br /><br />I don’t know what this young boy was accused of taking, but regardless of what it was it is hard to justify theft in a society where everyone is being pressed by hunger season. It’s equally hard to get upset when someone steals food because they are starving.<br /><br />It’s also hard to argue with the community’s decision to allow the boy to be tied up and paraded around town.
Why would you risk all of that humiliation and shame, unless you really needed something?<br /><br />William Girl<em>wrote this, but didn't get it posted. i'm post dating it for the day it was written. enjoy.</em><br /><br />I’ve never asked her name. Often, she is the first person I see in the morning. If I forget to lock my concession door at night, she may be the first thing I open my eyes to. Here she is every morning, shortly after the sun. And, I’ve never asked her name.<br /><br />It didn’t matter that I was gone for two weeks. She said that she came by every day to see if I had come back early. I must admit, it was a sight I missed. A breakfast ritual missed. <br /><br />A young girl, no more than 10 and probably closer to 8, she balances a tub large enough for laundry on her head and carries a smaller bucket in her right hand. The tub contains a mountain of tsalla and the bucket contains red sauce. Every morning she shows up at my door. <br /><br />Besides being downright tasty, tsalla—fried millet balls that must be kin to hushpuppies—and the sauce—tomato-based with a kick of onions, peppers, garlic and ginger—have become an early morning tradition of mine. If nothing else, it provides another excuse to get out of bed in the morning.<br /><br />While I was away I craved the village food, despite being surrounded by the food distractions and conveniences the city provides. Maybe I should be getting tired of the same thing every morning, but I guess I’ve always been a creature of habit.<br /><br />This morning I purchase 100 CFA worth, my usual.
She asks about my trip and listens wide-eyed as I groggily recount the details—about the big city, the rain, and how tall the millet already is in some places along Route Nationale 1.<br /><br />She excuses herself to go sell the rest of this morning’s batch, hurrying so she doesn’t miss the first few minutes of school. I ask her to come back tomorrow as I put water on for tea that will round out my breakfast.<br /><br />Well, it is almost 7:00 am now and I’ve been away for a while. Lots of chores to take care of and there’s no better time to get started than before the heat sets in.<br /><br />Tomorrow, I’ll ask her name.<br /><br />William!<a href=""><img id="BLOGGER_PHOTO_ID_5206586556022097042" style="FLOAT: left; MARGIN: 0px 10px 10px 0px; CURSOR: hand" alt="" src="" border="0" /></a>I think I'll start this post by saying that Jeremy's pictures are much better than mine. Not just a little bit better, but a LOT better. So, before you read any further check <a href="">these</a> (and <a href="">these</a>) out. I won't have enough time to upload my pics now anyway, but have patience and I'll get them up next time I'm in Niamey.<br /><br /><div>It's been a really great trip into Niamey, but I'm ready to get back out to the bush. It's expensive being in the city. I'm not looking forward to getting on the bus tomorrow but I am excited about being back in my village. After all, I have a meeting in Niamey on the 13th and Mid-Service training the 16th-18th at Hamdallaye. Mid-Service training!!!! I'm nearly half way done? Ok, it's not quite half way yet, but it is very close. </div><br /><div>Pangea was absolutely amazing! It was so great to see all of my friends and it was great to connect with Nigeriens in such a constructive way.
I made contact with many new musicians and look forward to practicing what they taught me. I never got a chance to teach my second class on Cash, Dylan, and Redding, but maybe I'll get the chance to hang out with some of my new friends and do impromptu classes. One guy said that I could come stay with him any time I'm in Niamey and we'll talk about music and drink tea all night.</div><br /><div>All of the classes went really well this week. The volunteers that led the dance classes worked really hard and the routines turned out really well. Thursday's theater and radio classes also went really well. These classes had the most collaboration between volunteers and their Nigerien counterparts, and it was very inspiring to see everyone come together. Most other classes were led either by Peace Corps volunteers or Nigeriens.</div><div>During the hip hop conference (led by Nigeriens) Koy, a Nigerien rapper, was talking about the origins of hip hop culture in America and mentioned the artist Afrika Bambaataa. Some of the volunteers asked if he was Zarma, one of the ethnic groups in Niger, because bambata in Zarma means big. He actually took the name from the Zulu chief Bhambatha. Although I think "Big Africa" is a pretty cool name.</div><br><div>There is no way to describe everything that happened this week, or to truly express how I feel about it. I hope that it continues next year and that each year it gets bigger and better. Like I mentioned in my last post, there was a videographer every day and at least one photographer. As the fruits of their labors are collected and compiled, I will be linking to them.
The video that was put together last year was really good and I think that this year's video will be well worth the wait, so be patient (but don't forget I'm on Nigerien time).</div><br><div>I hope you are all doing well and thanks for reading. I look forward to posting again when I get a chance.</div><br /><br />William<a href=""><img id="BLOGGER_PHOTO_ID_5205028440671356034" style="FLOAT: left; MARGIN: 0px 10px 10px 0px; CURSOR: hand" alt="" src="" border="0" /></a><br />First of all, Pangea is going great; I'm having a blast. Monday morning I taught a class on American folk traditions in music. The class was scheduled to be an hour, and about halfway through a television crew showed up (you can see in the picture above). So, I did the second half of the class with a camera shoved in my face.<br /><br />The class was directed at the Nigeriens present (mostly musicians) and it went really well. Everyone there seemed to enjoy it. It was just a surface-skimming presentation on the origins of popular American musical traditions. It was incredibly difficult to narrow down such a broad topic, but it was a lot of fun to put together.<br /><br />My favorite question that was asked during the presentation was, "Can you tell us everything about Johnny Cash?" At this point, I had about 5 minutes left in the class. We decided that I would do a follow-up class highlighting a few artists. With input from the class we decided on Bob Dylan, Johnny Cash, and Otis Redding. Basically, I'm just going to put together short bios of the artists and play lots of video and music clips. I think that there is an opening tomorrow that I am going to fill with my second class.<br /><br />After my class was a demonstration of the biram by Malam Barka.
You can read more about Malam Barka <a href="">here</a>. I don't think I can describe this instrument with words and be believed, so I'll just wait until I have some pictures. You can purchase Malam Barka's CD from Amazon <a href="">here</a>.<br /><br />Yesterday, I was supposed to help out with a guitar basics class, but no one really showed up for that. I spent that time learning some West African guitar styles. It was really great.<br /><br />During lunch, there were groups of people pocketed all over the Centre de Formation et de Promotion Musicale (CFPM) playing and jamming in various styles. It was beautiful and it was where I realized that I was going to have a lot to learn and nothing to teach in any instrumental class.<br /><br />Well, I'm going to get back to preparing for my next class. I'll leave you with a general overview of the week.<br /><br />Monday/26: American & Nigerien Traditional Music<br /><br />Tuesday/27: Jazz, Blues, Funk, Soul<br /><br />Wednesday/28: Hip-Hop, Reggae, Rap<br /><br />Thursday/29: Theatre and Expressive Arts<br /><br />Friday/30: Ceremony and Performances<br /><br />Every Night: Concerts<br /><br />Every day has a videographer and several photographers, so I'll be linking to all available video/pictures as they become available. Also, I'm taking pictures in my free time and will post them soon.<br /><br />Thanks for the picture, <a href="">Jeremy</a>.<br /><br />William in Niamey<a href=""><img id="BLOGGER_PHOTO_ID_5203158102673032306" style="FLOAT: left; MARGIN: 0px 10px 10px 0px; CURSOR: hand" alt="" src="" border="0" /></a>I'm back in Niamey and I'm getting ready for Pangea; more information on that is coming later. Just know I'm very excited about it.
The guys in the picture are from a soccer game we recently had in my village, not from the upcoming Pangea.<br /><p>Before I discuss Pangea any further I'd like to talk about what I've been working on in the village. </p>On April 29th (Yeah, I know it's almost June) I held a meeting where I walked the guys in the office through how to assess a problem and write a proposal for a project. We recently had a well collapse in another village in the commune and one day the secretary general asked me what I was going to do about the problem. So, I told him that I would teach him how to fix the problem. They started working on the information that afternoon and I just got hold of some proposal applications that can be submitted in French. So, when I return to the village we will complete the proposals. When I say 'we', I hope that I mean 'they'.<br /><br />The next day, we were supposed to have a meeting to establish a calendar for the office. I sketched out some designs and created a few keys so that we can color code different things such as meetings, travel, projects, market, etc. No one showed up. Who would have thought you needed a calendar to plan a meeting for establishing a calendar. I tried again a few times to have the meeting, but there just weren't enough people interested/around. I left them the task of creating a list of dates of activities within the commune during the next three months and when I return we will put the calendar together. <em>In sha Allah</em>.<br /><br />On Friday, May 2nd Sarah took two girls from my village to Maradi for a Young Women's Fair that the volunteers in our region put on. I didn't go, because we had some <a href="">soccer games</a> in my village, including the first game ever by the girls' team at the CEG. The boys played on Saturday evening in front of about 600 people. We beat Kanambakache 2-0. After the match the students performed songs they had written.
One song was about the importance of girls' education and the other about how to treat the visitors. Sunday morning the girls played and we lost 1-0. It was a great weekend.<br /><br />The following Friday, we were supposed to do a project celebrating <a href="">Global Youth Service Day</a> (2 weeks late of course). That got postponed to the following week and then postponed again. The rains have started, so school will get out soon. I don't think we will get around to doing this project this year.<br /><br />May 13th was Nigerien Women's Day and I went the following day to Tibiri to help with a school there. We made paper flowers for all the mothers.<br /><br />Well, that's the low-down on what's been going on here. I'll post about Pangea soon.<br /><br />Thanks for reading.<br /><br />------<br /><br />Best thing about hot season: mangoes<br /><br />Worst thing about hot season: well, it's hot<br /><br />William My BusIf it left on time, it left about the time my alarm went off at 4:30. Not sure why I set my alarm wrong. So, now I'm in Niamey for another day.<br /><br />I really want to get back out to my village. I have a soccer game on Friday, but this time on the internet has allowed me to do a bit of tinkering on the blog.<br /><br />You'll notice I changed the banner. It's a lot less boring now; thanks, <a href="">Jeremy</a>, for the help with that.<br /><br />Underneath the archive I've created a list of the 5 posts that have been viewed the most times. This way, if you are new to the site and don't have much time you can see what people have read the most. I will update it every chance I get, so that I can keep it as up-to-date as possible. All I ask is for your patience.<br /><br />I've also added a list of my PC Niger friends' blogs that automatically updates itself and keeps the most recent updates on top.
So, if I haven't updated in a while you can read what is going on in their lives.<br /><br />At the bottom of the left hand bar I've added a search box. It will allow you to search everything on my blog and all the links I've included. Could be handy if you forgot where you found something.<br /><br />I'm still having trouble getting Facebook to import my blog, no fun. So, if you used to read an imported note on Facebook, stop being lazy and just visit the site.<br /><br />I also helped Jeremy add some features to his blog. With our powers combined true nerddom can be achieved.<br /><br />William Comes to Play<a href=""><img id="BLOGGER_PHOTO_ID_5189045879122883330" style="FLOAT: left; MARGIN: 0px 10px 10px 0px; CURSOR: hand" alt="" src="" border="0" /></a><br /><div>Yesterday I went to see Niger and Ghana play their second meeting of the African Nations Championship (CHAN). This tournament is being used to showcase new local talent. No players who play on foreign soil are allowed to represent their country. </div><div><br>I was amazed at the hostility the fans showed for Niger's coach and the praise they had for Ghana's coach. Niger's coach had to be escorted by security forces when moving about. The pictures I took don't really show much, but there were many bags of water and plastic bottles being thrown down from the stands. When he exited the field, people ran around the stadium to get a better angle to taunt and throw. I was impressed with their effort. </div><div><br>Niger lost 1-2 and, combined with the 0-2 loss received when they traveled to Ghana two weeks ago, they are not in a good position to continue in the tournament.
</div><div><br><a href="">Pics from match</a></div><br /><br />William FoolsFirst of all, I want to thank everyone for reading. I know I don't get to post very often and your patience is very much appreciated. I want to clear up some things about the <a href="">post</a> from March 31st. Actually, I want to clear up the whole thing. It was all an April Fools joke.<br /><br />So, there has been sickness. No camel polo league. No electricity/running water.<br /><br />Sorry for any inconvenience.<br /><br />Wishing you all well.<br /><br />Click <a href="">here</a> for pictures of camel polo. Not mine.<br /><br />William<a href=""><img id="BLOGGER_PHOTO_ID_5188392950654448818" style="FLOAT: left; MARGIN: 0px 10px 10px 0px; CURSOR: hand" alt="" src="" border="0" /></a><br />With a great deal of satisfaction, I'm looking at a clothesline of freshly washed clothes. I still don't have clothespins, but my carabiners have been working just fine. (Note to self: go buy some clothespins in Maradi!) I’ve recently learned that I really enjoy washing my clothes by hand, although I’m not entirely sure why I enjoy it so much, because, as those who have lived with me can attest, I’ve never been a huge fan of washing my own clothes back home in America, even with the modern “convenience” of the washing machine. Granted, the amount of clothes I can wash at any given time is limited by several factors: I have far fewer clothes here and I wear each article much longer than I would in America, which tends to make each load rather small.<br /><br />But, I think, at least in part, some of it is because I can see the fruits of my labor.
Right now, with my work in the Peace Corps, I feel that it is going so slowly that I will never see the effects of any of my work. I know that this isn’t true, but things move slowly here and people are often slow to change.<br /><br />After discovering this bizarre change in my life, I began noticing many other changes going on around me.<br /><br />One example is how my perception of some people has changed. Shortly after arriving in my village I met a man one day on my way back from work who I thought would be great to help me with language and to teach me about the village. At first, things went well. He taught me several words. He invited me over during the Ramadan celebration where we shared macaroni, sauce, and sheep stomach. The meat was very chewy! Then he started inviting me to help pick peanuts in his field. After we were done picking peanuts, he would give me the bag I picked. It was a lot of peanuts. I had to give them away to my kids.<br /><br />Sometimes I wouldn’t be able to stop by to visit him because I was traveling, busy, or ill and every time I’d return he’d ask if I was trying to sever the friendship. It got old really fast.<br /><br />Now, I get a barrage of the same questions over and over. Where is your wife? The other white girl, she’s your wife? Do you sleep alone? Do you eat meat? Do you drink hura (millet drink)? This is just a sampling of the questions he throws out each time, and each of these questions he asks over and over and over, but he already knows the answers to them all. I thought he was going to be really helpful in my work and integration and now he just annoys me.<br /><br />Then, on the flip-side are Koursiya and Sarrey. Both are middle school girls in their early teens who I only recently realized aren’t part of the family I live with. They live there so that they can attend school because their villages are too far away.
This is a fairly typical set-up when people choose to/are allowed to continue their education. Anyways, these two girls are like “typical” middle school age girls everywhere: inquisitive, energetic, excited, and talkative (and boy do they talk fast). The difference is that I can barely understand them. But, it was a great day when I realized they were talking something like the equivalent of Pig Latin or Double Talk. It took me several weeks to make the connection. I’ve learned a lot from them about Niger and now I’m able to double talk a little. My villagers cheer me on shouting “He can! He can!”<br /><br />After nearly 7 months in my village my house is almost finished. I spent the last four days essentially homeless under my shade hangar while my house work was being done. I missed a trip to do radio but my coworkers greeted me on my new house. After they found out that is what you say to someone who is getting married, they decided to go ahead and greet me anyway. The back wall of my house was torn down and re-erected. I was certain it was going to fall on its own with enough time and would have leaked once the rains came. The wall around my latrine was raised so that I don't have to greet the entire village every time I enter. Best of all, my two-room house is finally a two-room house with a door inside connecting the rooms. Previously I was only able to live in one of the ten-foot-by-ten-foot rooms.<br /><br />Cold season has come and gone. Hot season is here and now we are waiting on the rains. The only redeeming thing about hot season is MANGOES! When the heat of the morning air doesn’t get me out of bed, the thought of sinking my teeth into a delicious mango usually does the trick. Some days I’ll eat 4 or 5 mangoes in a sitting. I fear the day when I say, “Another mango? I don’t know, maybe later.” I can’t even describe how good these things are.<br /><br />With the change in seasons, I’m sure there will be many more changes to my life.
I’m not looking forward to traveling in this country during the rainy season. Also, I’m not looking forward to the constant moving of my bed to avoid the heat or the rain. Maybe I’ll find something about rainy season that I do like. I’ll let you know then.<br /><br />Well, until next time I hope that everyone is doing well and, again, thanks for reading.<br /><br /><a href="">Pics from house work</a><br /><br />William's Been A Long Long Time...<p>Well, it's been a while since I've let you kind folks know what's going on in my world. So, I've got a lot of exciting things to update. I think it may be easier as a list, so here goes nothing:</p><p>-I haven't been really sick since I last updated my blog in Niamey (really sick = needing medication because my body won't fight off the things living inside of it). Health is so relative.</p><p>-The volunteers in my region started playing in the camel polo league and we lost our first two matches versus the teams in the lowest bracket in the Maradi League.</p><p>-I've been working hard to help my village get their electrical hook-ups completed and water connections finished. Much to my chagrin, they made sure that my house was the first to be connected to the grid and my refrigerator has already gone out; c'est la vie.</p><p>-The people in my office secretly slipped my name into the running for the mayoral race in early 2009, which was originally slated for late this year.</p><p>-During a recent trip to Niamey, some of my fellow volunteers were filmed in the background of the next season of CBS's "The Amazing Race". Things always look more exotic with Americans in the background.</p><p>-My friends Tim and Jolene have a new family of Scientologist missionaries who have moved into their neighborhood. I haven't met them yet, but Tim and Jolene both agree that they are nice. (Side note: Tim and Jolene greet everyone!)</p><p>-Niger's League Nationale de la Baseball announced that opening day will coincide with MLB's opening day and, due to time differences, the first pitch here will be thrown about 5 hours prior to the first pitch in America. The LNB has also begun plans to enter the next World Baseball Classic.</p><p>-The Nigerien government and French venture capitalists have finished the first trans-Nigerien high-speed rail system, which runs parallel to the Route Nationale from Niamey in the west to Diffa in the east. The trip from Niamey to Maradi used to take me between 10 and 16 hours, and the average time of trial runs on the new rail system is around 5 to 6 hours.</p><p>-In an unprecedented policy change that volunteers have dubbed "No Volunteer Left Behind", Peace Corps Niger has decided that volunteers would be more efficient with laptops and has started bringing laptops and generators to the volunteers without electricity in their villages. It shouldn't cost that much money because we will probably just be buying gas that's been illegally smuggled from Nigeria.</p><p>It's been a busy, busy few weeks here in Niger and I hope everyone reading has had a great first few months of the year. As always, I hope to get another update as soon as possible, but technology in Niger 'tis funny sometimes.</p><p>To all of my family and friends in America, my family and friends in Niger greet you and wish you a happy April Fool's Day.</p><p>----</p><p>UPDATE: 04.16.2008</p><p>Click <a href="">here</a> for more information.</p>William<a href=""><img id="BLOGGER_PHOTO_ID_5176141862107267794" style="FLOAT: left; MARGIN: 0px 10px 10px 0px; CURSOR: hand" alt="" src="" border="0" /></a> As cold season transitions into hot season, the Fulani in my area start
preparing for Sharo. Traditionally the Fulani are pastoralists and therefore spread out over a broad area, and several tribes will come together for Sharo. Sharo is a test of a young man's bravery and ability to endure pain, rendering the young man very attractive to the ladies. In some places, I'm told, they take turns hitting an "opponent", usually from a rival tribe. That's not quite what I saw.<br /><br />I'm not entirely sure how to describe what I saw, but I hope the pictures help. It was a bit overwhelming. I saw several scars from past years but only saw one boy actually being hit. These pictures are from the trip Daryn, Tim, and I took on February 8th. Katie and I went back two weeks later but I didn't have my camera. Maybe Katie will let me steal the pictures from her camera--or, better yet, check her <a href="">blog</a> and <a href="">photos</a> of the February 22 Sharo.<br /><br />Well, here are my pictures:<br /><br /><div><div><a href=""></a></div><div>Tonight the people from the first Ag/NRM stage of 2008 will swear in. Congratulations, you will soon be full-fledged volunteers!</div><div>I'm heading back to Maradi on Sunday and will try to get one more post up tomorrow.</div><div>Take care and thanks for reading.</div></div>William
http://feeds.feedburner.com/WilliamTravels
I have adopted the approach suggested by Jonas Bonér in an InfoQ presentation, starting from a bird's-eye view and then going deeper into the code. This learning involved starting with Venkat's book and then moving on to the actors book, using the big programming book as an incredibly powerful reference (reading it teaches you a lot about Scala, Java... and yourself).
Humility implies going back to basics again and again, and one pleasant way to do that is to watch recorded presentations. As I was recently watching one of Martin Odersky's presentations, he covered quite an interesting short topic: how to implement the JDK 7 try-with-resources (TWR) construct as a Scala library. This is an interesting exercise for a newbie like me because this small example gathers several typical features of Scala.
The presentation was made in 2009, when JDK 7 was not even in beta.
We talked about the TWR feature in the JDK earlier. Basically, this feature reproduces the C#.NET language enhancement allowing you to declare a closeable resource in a try statement that ensures on your behalf that the resource is closed (it works for I/O streams, bundles, JDBC connections, etc.):
try (final AsynchronousFileChannel sourceChannel = ...;
     final AsynchronousFileChannel targetChannel = ...) {
    final ByteBuffer buffer = allocateDirect(65536);
    final CountDownLatch countDownLatch = new CountDownLatch(1);
    final SourceCompletionHandler handler = new SourceCompletionHandler(
        sourceChannel, targetChannel, buffer, countDownLatch);
    sourceChannel.read(buffer, 0, null, handler);
    countDownLatch.await();
}
and channels are closed by the underlying JVM plumbing.
How would I implement it in Scala?
Naturally I started with some testing code in order to foresee how I would like to write this invocation.
Basically I would like to create a closeable resource, invoke what I have to invoke and bang, done!
It is a start, so I wrote:
import org.junit.Test
import org.junit.Assert._
import com.promindis.tools.CloseableUtils._

final class TestCloseUtils {
  @Test
  def using_StubCloseable_ShouldFireCloseAction() {
    val closeable = SpyingCloseable()
    using(closeable) { closeable =>
      closeable.invoke()
    }
    assertTrue(closeable wasInvoked)
    assertTrue(closeable wasClosed)
  }
}
In order to verify both that the closeable would be invoked and that the close method would be applied, I wrote a small spying class:
class SpyingCloseable {
  var closed = false
  var invoked = false

  def close() { closed = true }
  def invoke() { invoked = true }
  def wasClosed() = closed
  def wasInvoked() = invoked
}

object SpyingCloseable {
  def apply() = new SpyingCloseable
}
Indeed I wrote a class definition and the implementation of its "companion" object. As there are no static methods or fields in Scala, you create a companion object instead, which sits very close to Martin Fowler's knowledge-level view of your class. This object provides you with class-level utilities, and in particular gives you the opportunity to define a default factory behavior by implementing an apply() method. Defining an apply method allows you to create instances without the new keyword.
Basically the spy records whether you've called the expected methods.
The natural elegance of the Scala syntax makes the class content easily readable even for a novice. Purists will not be happy with the var instance fields, which are by definition mutable; in functional programming, mutability should have no place to live. But it is a test, so I beg your pardon.
So what does the implementing class look like? Very close to Martin Odersky's demo:
object CloseableUtils {
  def using[T <: {def close()}](resource: T)(block: T => Unit) {
    try {
      block(resource)
    } finally {
      if (resource != null) resource.close()
    }
  }
}
First of all, I wanted my method to accept all possible closeable resources. So, as with Java generics, it must accept any type that qualifies as closeable. In other words, I must parameterize my method with a generic type parameter: T. As all the handled resources must conform to some closeable definition, the type parameter is constrained by an upper bound. The upper bound is written with the following notation:
<:.
What is a closeable? Anything that exposes a close() method. This is where we can use duck typing. Remember:
"When I see a bird that walks like a duck and swims like a duck and quacks like a duck, I call that bird a duck."
Indulge me and think about this version:
"When I see a bird that walks like a duck and swims like a duck and quacks like a duck, I assume that this bird behaves like duck."
Thinking like that opens the path to role driven design. But this is another discussion...
The duck typing implementation of my type is quite simple:
{def close()}
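To see how far the structural bound reaches, here is a minimal sketch (the Lamp class is entirely made up for illustration): it never declares any Closeable interface, yet it satisfies the bound simply by exposing a parameterless close(). Note that on Scala 2.10+ structural calls want the reflectiveCalls language import, which did not yet exist at the time of the 2009 presentation.

```scala
import scala.language.reflectiveCalls  // silences the structural-type feature warning on 2.10+

object StructuralDemo {
  // Accepts anything exposing a parameterless close(); no interface required.
  def shutDown[T <: { def close(): Unit }](resource: T) { resource.close() }

  // Hypothetical class: it never mentions Closeable, yet it matches the
  // structural bound because it walks and quacks like one.
  class Lamp {
    var lit = true
    def close() { lit = false }
  }

  def main(args: Array[String]) {
    val lamp = new Lamp
    shutDown(lamp)
    println(lamp.lit)
  }
}
```

The price of this flexibility is that the close() call goes through reflection at runtime, which is why later Scala versions ask you to opt in explicitly.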
One of my purposes in designing this utility was to give the feature a more "native" look. You know, with the opening and closing braces that give an impression of fluency, as if you had extended the language definition.
Scala allows the developer to use braces instead of parentheses when calling single-parameter methods. That provides a look similar to the built-in while control structure, for example. But my utility method takes two parameters.
That's where currying and partial application come into play. A function taking multiple input parameters can be seen as a chain of single-parameter functions. You can then create a partially applied function by fixing one of the parameters.
For example
def sum(x: Int, y: Int) = x + y
could also be written
def composedSum(x: Int)(y: Int) = x + y
You can then set
val fivePlus = composedSum(5)_
that will create fivePlus as a partially applied function, ready to be invoked with the remaining parameter:
fivePlus(6)
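Put together, the plain and curried forms behave identically; a tiny runnable sketch (the CurryDemo wrapper object is mine, added so the snippet compiles standalone):

```scala
object CurryDemo {
  def sum(x: Int, y: Int) = x + y
  def composedSum(x: Int)(y: Int) = x + y

  def main(args: Array[String]) {
    val fivePlus = composedSum(5) _   // Int => Int, first argument fixed to 5
    println(sum(5, 6))                // 11
    println(fivePlus(6))              // 11, same result through the curried form
  }
}
```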
I can do exactly the same with my utility function, splitting the parameter declarations.
So I will be able to invoke
using(closeable) { closeable => closeable.invoke(); }
creating my partially applied function by invoking using(closeable). The resulting function accepts a single parameter, so I can pass the block using braces:
def using[...](resource: T)(block: T => Unit)
Tests are green. So now, as a little example, I can open my pom.xml project file and print its content, sure that the underlying source has been closed:
using(fromFile("pom.xml")) { data => println(data.mkString) }
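End to end, the same using helper manages real java.io resources. A self-contained sketch, with a throwaway temp file standing in for the article's pom.xml (the helper is repeated here so the snippet needs nothing from the article's project):

```scala
import java.io.{File, PrintWriter}
import scala.io.Source
import scala.language.reflectiveCalls

object UsingDemo {
  // Same shape as the article's CloseableUtils.using, inlined for self-containment.
  def using[T <: { def close(): Unit }](resource: T)(block: T => Unit) {
    try { block(resource) } finally { if (resource != null) resource.close() }
  }

  def main(args: Array[String]) {
    val file = File.createTempFile("using-demo", ".txt")
    // PrintWriter and BufferedSource both expose close(), so both fit the bound.
    using(new PrintWriter(file)) { out => out.print("hello, using") }
    using(Source.fromFile(file)) { in => println(in.mkString) }  // prints "hello, using"
    file.delete()
  }
}
```

Both the writer and the reader are guaranteed closed before the program moves on, even if the block throws.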
Thanks to the ones who did not fall asleep =D
Be seeing you !
http://patterngazer.blogspot.com/2011/05/touch-of-scala.html
For what its worth, I managed to get around the problem with a small
patch on yum itself:
--- ORIG/usr/lib/python2.6/site-packages/yum/yumRepo.py 2017-03-22
05:32:26.000000000 +0000
+++ NEW/usr/lib/python2.6/site-packages/yum/yumRepo.py 2018-02-14
09:14:04.879902463 +0000
@@ -863,6 +863,7 @@ class YumRepository(Repository, config.R
text=text,
cache=cache,
size=package.size,
+ copy_local=1,
)
def getHeader(self, package, checkfunc = None, reget = 'simple',
Although ne
Hi All,
I'm trying to use yum with the downloadonly option to collect a set of
packages including dependencies.
Hi everyone,
On CentOS 7 I'm running into an issue with the latest nvidia driver
from elrepo: kmod-nvidia-384.98-1.el7_4.elrepo.x86_64
This driver version seem to introduce issue in detecting video modes
when a monitor is connected using DVI. As soon as the machine attempts
to start X, nothing happens and the monitor goes into sleep mode
reporting that it has 'no signal'.
It is interesting because it occurs with several monitors, but only
when they are connected through DVI.
Hello everyone,
I'm looking for a way to install icecc/icecream on CentOS 7 for
distributed builds, preferably like any other package rpm a
repository.
I already found some effort done by someone else to create an rpm
package for CentOS 7 and published in a private repository :
<a href="" title="">...</a>
This package already seem to work. However, if possible, I prefer a
solution that is used/available for the whole world to use.
I noticed that with CentOS 6 a iceream package was provided by the
epel repositories.
I'm having some issue with a Qt application using D-Bus (Qt API) in a
threaded appllication on CentOS 6.
Searching for the cause has led me to the following issues in the dbus library:
<a href="" title=""></a>
<a href="" title=""></a>
It appears that there are threading issues with the dbus library version < 1.6.
However there doesn't seem to be other version than 1.2 available for CentOS 6.
Can someone advise what approach can be used to either work around or
solve the issue?
Dear all,
I'm running into a possible memory issue with the SCTP implementation
in CentOS 6, using lksctp-tools.
http://www.devheads.net/people/60555
IRC log of swarch on 2003-03-07
Timestamps are in UTC.
00:00:27 [libby]
(
is a syntax url i found recently)
00:01:15 [libby]
returns sets of bindings.
00:03:40 [libby]
want to be able to specify justification....
00:05:51 [libbyscri]
ian: impleemnting the entire thing (e.g. queries with cycles) is hard. some restritions not so bad
00:05:53 [danbri]
I made a front page for rdfq test case repository,
00:06:16 [danbri]
...haven't any content (test cases) yet, but gathered pointers to all existing test stuff i can find
00:06:19 [danbri]
...surveys etc
00:07:22 [libbyscri]
ericP - could you do a quick one-pass squish-like pass?
00:07:33 [libbyscri]
benjamin - yes, even if the backend was complex
00:07:46 [libbyscri]
ericP: wouldnt lose perfortmance on simple queries
00:08:07 [libbyscri]
Mike: can assert premises
00:09:02 [libbyscri]
ericP: variable feature: may bind, must bind, whether the node is a vraiable. also - find all results or work for a while and then return.
00:09:37 [libbyscri]
ericP: on the reporting - which variables do you want to see.
00:10:05 [libbyscri]
ericP: = 3 areas so far. wold like input. would like to do a taxonomy of query charactristics
00:10:25 [libbyscri]
josD: the same query can have different proofs.
00:10:55 [libbyscri]
ericP: in algae, you'll get multiple rows - different reasons
00:11:23 [libbyscri]
[missed some]
00:12:05 [libbyscri]
josd thinks this is a whole different dimension
00:12:15 [libbyscri]
ericP: exhaustive search/ result grouping
00:12:38 [libbyscri]
josd: 2 dimensions
00:13:08 [libbyscri]
raphael: kaon project - QL for ontologies (one binding is RDF)
00:14:09 [libbyscri]
...different datamodels for rel dbs, RDF dbs, ontology stores.
00:14:58 [libbyscri]
...retrurns not tuples but classes and properties - things that correspond to the datamodel (of onbtology)
00:15:12 [libbyscri]
....modelled by a set of datalog ruless
00:16:10 [libbyscri]
...easy composition of orthogonal operators
00:16:43 [libbyscri]
....can rewrite e.g. to SQL where posisble
00:17:33 [libbyscri]
benjamin: can oit bind against uri or an arbitrary literal (R: yes) - sounds like ruleml
00:18:23 [libbyscri]
...has things like, SOME, INVERSEOF, ....
00:18:40 [danbri]
timecheck -- how much longer do we have? Does the meeting run until 7.30pm?
00:19:09 [libbyscri]
that's right yep
00:19:21 [libbyscri]
oh, only 10 mins more...
00:19:55 [libbyscri]
....and, or and not, where not is negation-as-failure
00:22:06 [libbyscri]
ian: can;t express cycles in non-distinguished variables.
00:23:49 [libbyscri]
Raphael: 'oneof' is expressed as a !
00:25:04 [danbri]
cf
(hmm, probably not the best ref for owl:oneOf)
00:25:37 [libbyscri]
....
- can download it.
00:25:56 [libbyscri]
ericP: example query now in
00:27:20 [libbyscri]
we decide to continue till 8
00:27:35 [libbyscri]
said talking about ruleml
00:27:50 [libbyscri]
...close to 40 participating groups
00:30:03 [libbyscri]
...several ruleml engines and translators avilable, eg mandarax. some free
00:30:28 [danbri]
is this channel still being logged?
00:30:37 [libbyscri]
that's a good q
00:30:43 [libbyscri]
I can get it if not
00:30:56 [libbyscri]
rssagent is still there
00:33:01 [danbri]
rrsagent, help?
00:33:01 [danbri]
I'm logging. Sorry, nothing found for 'help'
00:33:05 [danbri]
ok cool
00:33:21 [libbyscri]
but on
only got partial
00:33:31 [libbyscri]
oh wait, maybe it's 'tomorrow'?
00:34:01 [libbyscri]
coudl you check danbri? e.g.
(forbidden)
00:35:27 [libbyscri]
...plan to submit usecaes to alberto's and andy's repository (
)
00:36:13 [libbyscri]
....rulebase - GEDCOM - family relationships
00:36:22 [libbyscri]
...created by Mike Dean
00:37:11 [libbyscri]
josd: how do those things cope with unique names assumption? very important, as we found in testcases
00:37:31 [libbyscri]
mikeD: one source of data, so implicit unque names assumption
00:38:00 [libbyscri]
josd: could do daml:differentFrom based on syntaxtic diffences?
00:38:16 [libbyscri]
mikeD: could do, but don't. should preobably ahve somethign more
00:40:12 [libbyscri]
thanks :)
00:40:24 [danbri]
world ACL
00:41:02 [libbyscri]
Said: workign on an ecommerce demo
00:41:58 [libbyscri]
...on website soon
00:42:27 [libbyscri]
ericP: could we use the rules for rule languages?
00:43:20 [libbyscri]
benjamin: yes you can use this [...] but often yopu want actions triggered, which is beyond the QL scope
00:43:35 [libbyscri]
Said: can use outside services.
00:45:33 [libbyscri]
[??] why doyou can it object orientated?
00:45:42 [libbyscri]
(harold will talk about it later)
00:46:06 [libbyscri]
danbri: this event-triggering rules seems very difefrent from timbl's rules...
00:46:26 [libbyscri]
danbri worried about scope - 'if and then'
00:47:21 [libbyscri]
said: didnt really start off as a rules language
00:47:59 [libbyscri]
benjamin: built-ins are very common in commercial rules sytems.
00:49:50 [libbyscri]
harold talks about object-orientated ruleml
00:50:28 [libbyscri]
....ruleml and rdfs overlap
00:50:41 [libbyscri]
...can use oo ruleml as an rdf ql and rles language
00:51:00 [danbri]
I just added a bunch more test-related links to
00:52:23 [libbyscri]
...the subject and object are detrminted by their position
00:54:53 [libbyscri]
[sorry scribe missed if that was the positional one or the oo one]
00:55:24 [libbyscri]
...the two versions can be translated using xslt
00:56:35 [libbyscri]
...harold works us through an example
00:56:43 [libbyscri]
(url?)
00:57:38 [libbyscri]
...can express e.g. which page was accessed by person. in ruleml, queries are aq special case of rules, including only the body.
00:58:28 [libbyscri]
...there's an issue with bnodes
00:59:28 [libbyscri]
....need to give bnodes ids? [scribe missing soem of this, coudl be wrong]
01:00:21 [libbyscri]
...can do conjunctive queries: use 2 atoms
01:00:28 [libbyscri]
...now generalize to bnodes....
01:02:24 [libbyscri]
scribe fading... :(
01:02:33 [danbri]
imho ruleml includes a proposal for a new RDF syntax
01:03:23 [libbyscri]
....seesm to use generated nodeids
01:04:02 [libbyscri]
....model theory can bbuild on ruleml's rdf-xml integrating data model via flogic or triple
01:04:19 [libbyscri]
josd: a very diffreent datamodel to RDF.
01:04:31 [libbyscri]
harold: higher-order syntactic sugar
01:05:08 [libbyscri]
josd: why 'object-orientated'?
01:05:41 [libbyscri]
harold: as RDF is OO. also all the descriptions of objects clustered togetger
01:05:53 [libbyscri]
said: not the same as a definition of an OO language
01:05:59 [danbri]
timecheck!
01:06:04 [danbri]
we should be winding up...
01:06:07 [libbyscri]
benjamin: does not rely on positionality - has a name for the variable.
01:06:09 [libbyscri]
yep
01:06:14 [chaalsBOS]
'night folks
01:06:28 [libbyscri]
harold 'subject-oriented'
01:07:23 [libbyscri]
we are 10 minutes over time
01:07:34 [libbyscri]
banjamin says: 4 minutes!
01:07:47 [libbyscri]
he is outlining note draft on ruleml
01:08:05 [libbyscri]
....has also been discussed in the joint committee
01:08:22 [libbyscri]
....requiremeents, play nicely w the rest of SW, and also ws, xquery
01:08:43 [libbyscri]
...different tpes of rules
01:09:07 [chaalsBOS]
2 minutes gone
01:09:18 [libbyscri]
...those that derive new beliefs (like RDF query). also action rules - actiosn, get info. transfoemation can be vieweed as derrivation....
01:09:43 [libbyscri]
...some RDF qs return bindings, some graphs
01:10:21 [libbyscri]
...vraious chaacteristics of rules
01:10:37 [libbyscri]
...OO-ness (non-positional)
01:11:07 [libbyscri]
...lits of first-order expressiveness...
01:11:13 [libbyscri]
s/lits/lots
01:11:48 [libbyscri]
...in teh markup need to talk about things being derrived. this KB derrived this other stuff
01:12:36 [libbyscri]
...using several KVs
01:12:43 [libbyscri]
KBs even!
01:12:56 [libbyscri]
...complimentary doc on usecaese in teh joint committee
01:13:51 [libbyscri]
...questions?
01:14:09 [libbyscri]
benjamin proposes we head to bar, declaring victory
01:14:32 [libbyscri]
ericP: this would be good foddder - clarifying different parts of query
01:14:43 [libbyscri]
b: 3-4 weeks, public draft
01:14:51 [libbyscri]
...will post to RDF rules
01:15:29 [libbyscri]
---scribe declares victory. ajourned.....
02:06:10 [em-lap]
em-lap has joined #swarch
02:49:39 [las]
las has joined #swarch
03:55:53 [Tantek]
Tantek has joined #swarch
03:56:25 [Tantek]
Tantek has left #swarch
10:44:27 [DaveB]
DaveB has joined #swarch
11:39:15 [AndyS]
AndyS has joined #swarch
11:40:45 [AndyS]
AndyS has joined #swarch
11:51:35 [AndyS]
AndyS has joined #swarch
13:20:16 [em-lap]
em-lap has joined #swarch
13:20:28 [em-lap]
ack... no opps
13:20:41 [Zakim]
Zakim has joined #swarch
13:20:53 [em-lap]
em-lap has changed the topic to: semweb arch tech plen meeting - 2002-03-07
13:26:17 [em-lap]
ericP, you here?
13:47:23 [PStickler]
PStickler has joined #swarch
13:49:49 [jhendler_]
jhendler_ has joined #swarch
14:09:05 [jhendler_]
jhendler_ has joined #swarch
14:12:17 [danbri]
danbri has joined #swarch
14:12:38 [danbri]
(we got stuck in traffic)
14:14:46 [JosD___]
JosD___ has joined #swarch
14:15:02 [pfps]
pfps has joined #swarch
14:15:06 [danb_lap]
danb_lap has joined #swarch
14:15:57 [danb_lap]
uj
14:17:33 [danb_lap]
timbl: interested in scoping new work areas, how much time things would likely take, etc
14:17:48 [danb_lap]
...seems to me from way layers are developing, Query is next item ready for standardisation
14:18:07 [danb_lap]
...i read thru various bits and pieces
14:18:31 [danb_lap]
...has anyone read thru this? (not many hands go up)
14:18:42 [danb_lap]
...ben and harold fed back some possible changes
14:18:47 [DanC]
DanC has joined #swarch
14:19:01 [danb_lap]
...see
14:19:12 [danb_lap]
...b/g includes AndyS and Alberto's use case repository
14:19:15 [bwm]
bwm has joined #swarch
14:19:19 [danb_lap]
...can register examples
14:19:24 [GuusS]
GuusS has joined #swarch
14:19:30 [danb_lap]
...going over that, you get a pretty good idea of what folk are doing w/ rdf query
14:19:34 [danb_lap]
rrsagent, help?
14:19:34 [danb_lap]
I'm logging. Sorry, nothing found for 'help'
14:19:55 [danb_lap]
...interesting. Most syntaxes are non-xml
14:20:15 [danb_lap]
...half of them use the word 'SELECT', ie. SQL mind set, sometimes 'FROM', 'USING' etc
14:20:24 [libby]
libby has joined #swarch
14:20:28 [danb_lap]
...some have some punctation around a chunk of rdf, and an ipmlication sign
14:20:31 [danb_lap]
...fall into various groups
14:20:58 [danb_lap]
...seems that concrete syntaxes may have questions, but the abstract syntax there is much commonality
14:21:05 [danb_lap]
...simplest might be versa, which is just a path
14:21:15 [danb_lap]
...this is a subset of a general rdf graph matching template
14:21:48 [danb_lap]
...if you look at the graph match, ie. rdf with holes, a question of what you can put in the graph
14:21:59 [danb_lap]
...so some case to standardise a non-xml syntax for such abstract queries
14:22:11 [danb_lap]
...obviously good to have an XML version of that (compare xquery, which did both)
14:22:28 [danb_lap]
...also case for some overlap w/ rule stuff eg ruleml (?missed point)
14:22:33 [danb_lap]
...abstract syntax pretty common
14:22:46 [danb_lap]
...engines differ a lot re the kind of inferences they do underneath
14:22:51 [danb_lap]
...but the querying language loioks the same
14:23:03 [danb_lap]
...you query notional, possibly infinite, dataset
14:23:18 [danb_lap]
...like simpler if we make thw QL simpler, leave service capabilities a separate problem
14:23:32 [danb_lap]
jos: you said Fwd inference
14:23:43 [danb_lap]
timbl: conceptually you are querying all the possible derrived data
14:23:50 [danb_lap]
...but we're covering that up
14:23:52 [danb_lap]
...i looked at ruleml
14:24:02 [danb_lap]
....it has a language which has an xml notation
14:24:12 [danb_lap]
...original goal to unify all tyhe various rule systems out there
14:24:20 [danb_lap]
...typically those weren't webized, ie. use URIs
14:24:33 [danb_lap]
....ruleml now extended to be URI capable
14:24:53 [danb_lap]
...can convert eg RQL and ruleml, but if rules lack uris, you have to go add namespace uris etc before get interop
14:25:00 [danb_lap]
....so ruleml was extended
14:25:08 [danb_lap]
...u/stand theres a later unpublished version
14:25:24 [danb_lap]
(aside from danbri:
)
14:25:36 [danb_lap]
timbl, there are dtds to be found on ruleml site
14:25:49 [danb_lap]
...another difference: some languages allow a template for what you want returned
14:25:54 [danb_lap]
...this looks like a template language
14:26:03 [danb_lap]
(er sorry, entailment language, or something)
14:26:19 [danb_lap]
...if you think of these as languages sent to services, can see result as bindings versus an rdf graph
14:26:36 [danb_lap]
timbl shows his table of pro/cons re graph vs query, see
14:26:53 [danb_lap]
...mentions poss of generating an alternate(???)
14:27:00 [danb_lap]
...if you think of more pro/con, let me know
14:27:08 [danb_lap]
...i tihnk you probably need both
14:27:15 [danb_lap]
...there are also syntax choices
14:27:18 [danb_lap]
...SELECT yadda
14:27:25 [danb_lap]
...sort of thing we do in a WG
14:27:33 [danb_lap]
...arguing round brackets vs square ones
14:27:46 [danb_lap]
...semantics of what happens underneath more of less orthogonal
14:27:51 [danb_lap]
...deciding on builtins again orthogonal
14:27:56 [danb_lap]
...has happened already in various places
14:28:05 [danb_lap]
...would make sense to pick up XQuery work on this
14:28:21 [danb_lap]
...Steve Reed from Cyc has a list of builtins they support, and those of xml query, compare/contrast
14:28:26 [danb_lap]
(@@Url/google anyone)
14:28:35 [danb_lap]
...have to pick your favourite libraries
14:28:53 [danb_lap]
...will be services that do/don't support these
14:29:06 [danb_lap]
....we can make intelligent systems that add such things to dumber stores
14:29:11 [danb_lap]
...didn't get into remote query
14:29:19 [danb_lap]
...main reason we do stuff at w3c network centric
14:29:30 [danb_lap]
...one way to do this is wrap the query, uri-encoded, and append it to an http URI
14:29:36 [danb_lap]
...simple way of query, eg with a '?"
14:29:44 [danb_lap]
...easy way of attaching to existing servers
14:29:50 [danb_lap]
...or else do it one way or another in soap
14:29:58 [danb_lap]
...some arbitrary design choices there
14:30:05 [danb_lap]
...if i had to think of possible work, this is what came up
14:30:14 [danb_lap]
...i would imagine that a wg would aim for an abstract syntax
14:30:38 [danb_lap]
see
'possible deliverables' section
14:30:46 [danb_lap]
[[
14:30:47 [danb_lap]
So, if work were begun in this area, formally or informally, more or less in chronological order, one might hope to see:
14:30:47 [danb_lap]
Abstract syntax of query language - probably described in RDF.
14:30:47 [danb_lap]
Definition of a few conformance levels (monotonically increasing in features supported)
14:30:47 [danb_lap]
A common concrete syntax in compact (non-XML) form
14:30:48 [danb_lap]
Ontology for description of inference services provided by a service.
14:30:51 [danb_lap]
A set or sets of standard functions
14:30:52 [danb_lap]
A profile or profiles which combine the above to enhance interoperability, when experience with common engines is sufficient to define interop levels.
14:30:56 [danb_lap]
]]
14:31:02 [danb_lap]
...inference service profiles, so you 'know what you're getting'
14:31:07 [AndyS]
A concrete syntax needed for network use between machines
14:31:32 [danb_lap]
[timbl talks through bullets above]
14:32:10 [danb_lap]
...remote query: could be trivial, or ratholes
14:32:27 [danb_lap]
(i think lots of work there -- jdbc etc have lots of admin features, danbri)
14:32:45 [danb_lap]
timbl, adding DELETE etc would make a bigger job
14:32:53 [danb_lap]
...how much work, is it time to do it as a WG?
14:33:18 [alberto]
alberto has joined #swarch
14:33:30 [danb_lap]
...any general feelings re scope?
14:33:41 [danb_lap]
Harold: do you have general notion of queries that encompass rules?
14:33:49 [libby]
hey alberto!
14:33:56 [danb_lap]
timbl: if you look at the rdf query, there is a rule language that is very connected to it
14:34:03 [danb_lap]
(@@url for logs for alberto to read?)
14:34:11 [RRSAgent]
See
14:34:11 [alberto]
hello - sorry I am late :)
14:34:13 [danb_lap]
...clearly there is a connection
14:34:20 [alberto]
ok, going through it now...
14:34:39 [danb_lap]
...for eg. RQL examples could all convert into ruleml
14:34:47 [danb_lap]
harold: rules are slightly more general concept than queries
14:34:56 [danb_lap]
...can chain queries as a rule
14:34:59 [alberto]
alberto has joined #swarch
14:35:06 [danb_lap]
...woudln't speak really of a chain of queries
14:35:20 [danb_lap]
...could call it rules and query, or really just rules
14:35:26 [danb_lap]
timbl: we have www-rdf-rules list
14:35:28 [PStickler]
I wouldn't call all chains of queries rules
14:35:32 [danb_lap]
...some folk demanded separate mailing lists
14:35:43 [danb_lap]
...i agree they're basically the same thing, but there are different systems, engines
14:36:00 [danb_lap]
harold: from last night, ruleml folk promised to submit more discussions to www-rdf-rules
14:36:09 [danb_lap]
beng: what would be way to distinguish non-query rules?
14:36:15 [danb_lap]
...a lot of rule systems run fwd
14:36:22 [danb_lap]
...query is pretty much backward
14:36:49 [danb_lap]
...if you have procedural attachment, actions, that extends beyond basic semantics of query
14:36:51 [danb_lap]
(?)
14:36:57 [danb_lap]
...two aspects that go behyond basic query
14:37:08 [danb_lap]
...from a tech pt of view, there remains a v close relationship
14:37:19 [danb_lap]
...you'd want same semantics
14:37:35 [danb_lap]
...can have simple stateless way to define rules mechanics
14:37:37 [danb_lap]
(?)
14:37:43 [danb_lap]
...v closely associated w/ a pure belief view
14:37:56 [danb_lap]
...storing queries, having queries built from subqueries
14:38:11 [danb_lap]
...wasn't emphasised
14:38:18 [danb_lap]
...should think about when is the right time to get into that
14:38:28 [danb_lap]
timbl: some people mentioned desire to store queries
14:38:31 [RalphS]
RalphS has joined #swarch
14:38:41 [danb_lap]
...one motivation for sending rules w/ a query is to add
14:38:42 [danb_lap]
(?)
14:38:47 [sanScribe]
sanScribe has joined #swarch
14:38:59 [danb_lap]
ianh: from a tech pt of view, rules in general are nothing diff from what we have in the onto languages
14:39:12 [danb_lap]
...std axioms we have in ontology, eg for subclass, is just a rule
14:39:30 [danb_lap]
...but onthe other side, query languages normally have this special feature that you only get back answers w/ fininte set of things
14:39:56 [danb_lap]
...when you say that the answer to a rule isn't all poss conclusions, it is just concrete answers
14:40:02 [danb_lap]
...eg 'tell me all the people that live in ...'
14:40:11 [las]
las has joined #swarch
14:40:19 [danb_lap]
...that gives a completely diff computational properyty to the language
14:40:23 [danb_lap]
timbl: does it change the syntax?
14:40:36 [danb_lap]
...can we have different operationals but keep the ql the same
14:40:41 [danb_lap]
ian: maybe...
14:40:47 [sandro]
sandro has joined #swarch
14:40:53 [danb_lap]
...my point was that we might allow some things in a QL that we don't allow in an assertional language
14:41:06 [danb_lap]
...since QL resultsets have different characteristics
14:41:27 [danb_lap]
timbl: [draw attention to ian, ben ... work on OWL<->Rule mapping] @@url?
14:41:40 [danb_lap]
frank: in QL, constructing new tuples
14:41:53 [danb_lap]
...would want a ql to support certain kinds of construction
14:42:06 [danb_lap]
ian: true, but elements in the tuples are things we know about in advance
14:42:28 [danb_lap]
[...]
14:42:39 [danb_lap]
frank: we need to be clear about the extent to which this is a restrictuion
14:42:51 [danb_lap]
...if we have cities example, links between cities
14:42:58 [danb_lap]
...q is: am i constructing paths
14:43:04 [danb_lap]
...perfectly reasonable thing to do
14:43:18 [danb_lap]
ian: infinite answer in general, as can to/fro the same pair of cities
14:43:34 [danb_lap]
...generally you disallow such queries, by saying 'gimme acyclic paths'
14:43:50 [danb_lap]
frank: is notion of a path, or instances of paths, what you're considering old things vs new things
14:43:50 [danb_lap]
(?)
14:43:56 [shellac]
shellac has joined #swarch
14:44:00 [danb_lap]
timbl: you may be talking about 'path' differently
14:44:31 [danb_lap]
14:44:46 [danb_lap]
ian: if you allow infinte paths and infinite poss answers, thats when the wheels fall off
14:44:58 [danb_lap]
frank: am just trying to clarify what kinds of things we can get back
14:45:01 [danb_lap]
(he isn't)
14:45:04 [libby]
sandro - no
14:45:06 [danb_lap]
(shellac i mean)
14:45:23 [danb_lap]
patrick: [...] if you havd a Q engine without rules, inference, you get back just ground stuff
14:45:33 [danb_lap]
...if it does have such support, you get things back that are implicit
14:45:42 [danb_lap]
(photos - good idea)
14:46:10 [danb_lap]
patrick: whether underlying engine has inference is separable
14:46:21 [danb_lap]
...what you get back isn't necc asserted/explicit in your kb
14:46:32 [danb_lap]
ian: i wasn't intending to say what ought be in/out of language
14:46:48 [danb_lap]
...just note that computational properties of a ql vs an assertional language differ
14:46:57 [danb_lap]
...because of this fact that you know in advance finite set of answers
14:47:18 [danb_lap]
timbl: proposal is that you can use the same QL in both contexts (albeit w/ diff comp properties)
14:47:26 [danb_lap]
...use same lang to talk to it, results come back the same
14:47:45 [danb_lap]
...hypothesis is that the ql can be the same
14:48:00 [libby]
DanC: this sort of thing?
(slow...)
14:48:08 [DanC]
Volz: ...
14:48:46 [danb_lap]
timbl: qls i saw didn't allow (various kinds of) fancy path ql
14:49:04 [DanC]
spiffy, libby. that's more than I had in mind, but that's cool.
14:49:32 [danb_lap]
pathayes: here when you say Query Lnaguage, are you talking about the abstract pattern, or all the other additional features too
14:49:32 [PStickler]
If a given query engine is not able to answer 'true' that doesn't mean the answer is 'false'
14:49:39 [bwm]
volz said he things query language should support expression of rules that can be used for inference - at least thats what I heard
14:49:59 [PStickler]
If one engine does not have inference, and the answer is implicit, then it simply cannot say, but another engine with inference may be able to answer positively
14:50:05 [danb_lap]
timbl: ... [summaries earlier discussion for pat's benefit]
14:50:16 [danb_lap]
bwm, maybe later so i can say stuff! ok for now.
14:50:23 [DanC]
hmmm... the calwk photos aren't all from the workshop. I'm thinking of photos that give evidence that, e.g., I was here.
14:50:56 [alberto]
yes very useful work danb
14:51:28 [danb_lap]
ericp: [...] fact that you have a rule engine that might stop on 1st answer... is characteristic of the engine ... not of the engine
14:51:38 [danb_lap]
(some discussion about whether eric had this backwards)
14:51:41 [danb_lap]
s/engine/rule/
14:52:10 [danb_lap]
...if we can keep in mind there's a large commonality, and that you can express things as characteristics of the query or the report
14:52:22 [danb_lap]
(timbl is doing things on paper)
14:52:36 [DanC]
I don't expect "I want the first answer only" to be part of the query language syntax. hmm... .
14:52:42 [danb_lap]
jos: w.r.t. resultset, i think the results as a primitive form of proof/explanation in rdf, is worth considering in our experience
14:52:46 [danb_lap]
me neither, danc
14:53:00 [danb_lap]
jos: ...worth explaining in an rdf graph, facts/rules/queries/proofs all part of this
14:53:13 [danb_lap]
...proofs as continuations, could be given to another engine to continue processing
14:53:23 [danb_lap]
timbl: this is a list of other poss extensions
14:53:40 [danb_lap]
(@@we should transcribe later. Have 'saving result set'; returning proof; so far)
14:54:18 [danb_lap]
brian: way we're looking at this in jena
14:54:22 [danb_lap]
..rel w/ rules v query
14:54:35 [danb_lap]
..you target a query at an engine, you identify the rules you want to apply by picking a target
14:54:45 [danb_lap]
...implicit suggestion that the two differ
14:54:58 [danb_lap]
...but QL could have a hole saying 'and some companion rules could go here'
14:55:57 [danb_lap]
danbri (barging in): this suggests a way that work can be punted from QL to companion APIs
14:56:25 [danb_lap]
ian: just to try to clarify point about computation, same thing raphael was saying...
14:56:29 [DanC]
:flipA log:semantics { :OtherFeatures is rdf:type of :savingResultSet, :returnedProof }.
14:56:43 [danb_lap]
...is perfectly possible to take OWL and guarantee you can compute answers to all queries in that lang
14:56:47 [danb_lap]
...ie complete
14:57:03 [danb_lap]
...same language as an assertional language, can show it isn't complete
14:57:09 [danb_lap]
(@@did i get that right?)
14:57:25 [DanC]
:flipA dc:description "(timbl is doing things on paper)".
14:57:31 [danb_lap]
...you may not care about completeness, but point is you get diff comp properties
14:57:38 [PStickler]
Query and Rule need not be different languages, but rather, Query is a subcomponent of the Rule language
14:57:47 [danb_lap]
harold: ... you can perhaps have two kinds of queries, lookup vs inferential
14:58:00 [danb_lap]
...subqueries, could be lookup or further composite
14:58:12 [danb_lap]
...in ruleml, we have an xml element 'query' with flags for lookup vs inference
14:58:27 [danb_lap]
timbl: thinks thats a shared model
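Harold's lookup-vs-inference flag can be sketched in Python (illustrative only: the facts, the single hard-wired rule, and all names are invented here, not taken from RuleML). The same triple pattern is answered against the raw store (lookup) or against the store saturated with rule consequences (inference) — which is also PStickler's point above that an engine without inference simply cannot answer, while one with inference may answer positively.

```python
# Hypothetical triple store and one hard-wired rule; everything invented.
FACTS = {("bob", "worksFor", "sun"), ("sun", "locatedIn", "usa")}

def apply_rules(facts):
    """One rule: worksFor(x,y) & locatedIn(y,z) -> basedIn(x,z)."""
    derived = set(facts)
    for (x, p, y) in facts:
        for (y2, q, z) in facts:
            if p == "worksFor" and q == "locatedIn" and y == y2:
                derived.add((x, "basedIn", z))
    return derived

def match(pattern, triple):
    """Bind '?'-prefixed terms consistently; return None on mismatch."""
    binding = {}
    for p, t in zip(pattern, triple):
        if p.startswith("?"):
            if binding.setdefault(p, t) != t:
                return None
        elif p != t:
            return None
    return binding

def query(pattern, mode="lookup"):
    # the lookup/inference flag decides which fact set we match against
    facts = apply_rules(FACTS) if mode == "inference" else FACTS
    return [b for t in facts if (b := match(pattern, t)) is not None]
```

With this toy store, `query(("bob", "basedIn", "?where"))` is empty in lookup mode but yields a binding in inference mode — the answer set depends on the flag, not the pattern.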
14:58:38 [danb_lap]
mikedean: one big question... how much do q and rules overlap
14:58:42 [danb_lap]
...am trying to do a venn diagram
14:58:54 [danb_lap]
...what i have not quite ready to share but suggest useful technique
14:59:02 [danb_lap]
...prelim, that most stuff is out on the edges
14:59:12 [danb_lap]
...suggest intersection is pretty small
14:59:23 [danb_lap]
(@@is that good? ie. a small focussed workitem?! --danbri)
14:59:41 [danb_lap]
path: there is a std logical picture of the query process
14:59:46 [danb_lap]
...and how query/rules overlap
14:59:54 [danb_lap]
...rules are like horn clause implication
15:00:05 [danb_lap]
...a query is a pattern put up as a candidate conclusion
15:00:07 [danb_lap]
(?)
15:00:10 [danb_lap]
...
15:00:10 [PStickler]
Specifying whether a given engine does or does not perform inference is a parameter to the engine, not a feature of the core query language
15:00:20 [danb_lap]
...get this on table as the 'off the shelf' picture of the rel'nship
15:00:27 [danb_lap]
...also Jos's point about returning proofs
15:00:50 [danb_lap]
...nice story about poss responses, range from 'yes'! to getting entire proof, versus intermediate, getting bindings
15:00:55 [PStickler]
I am in the queue after DanBri (though got skipped over already once)
15:00:57 [danb_lap]
...might well be other things
15:01:01 [danb_lap]
q-
15:01:18 [danb_lap]
zakim, DanBri is danb_lap
15:01:20 [Zakim]
sorry, danb_lap, I do not recognize a party named 'DanBri'
15:01:39 [DanC]
ack Pat
15:01:39 [danb_lap]
ben: to follow that up, jos and pat touched on proof
15:01:45 [DanC]
ack Ben
15:01:47 [danb_lap]
...query as a concept in KR is something that any KR can have
15:02:04 [PStickler]
I am in the queue on the whiteboard
15:02:05 [danb_lap]
...you usually start conceptually from a KR, eg principles of sanctioned entailment
15:02:25 [danb_lap]
w/ proof, other actions, consistency, monotonicity, syntactic violations, resource limits, max answers etc.
15:02:34 [danb_lap]
...mechanical or complementary surrounding considerations
15:02:44 [danb_lap]
...much of xquery focussed on such things
15:02:53 [danb_lap]
timbl: did xquery cover resource limits?
15:03:03 [danb_lap]
ben: eg don't try more than 1000 seconds on this
15:03:19 [danb_lap]
...if you doing info integration across sites, spend only so much money/time
15:03:25 [danb_lap]
timbl: didn't think this was in xquery
15:03:44 [DanC]
ack PatrickS
15:03:44 [danb_lap]
ben: not all in xquery, but similar concerns
15:04:00 [danb_lap]
patricks: (i) there needn't be two different languages, query vs rule
15:04:05 [danb_lap]
...q can be subcomponent of Rule
15:04:19 [danb_lap]
...in add to what you just got back from this q, here do some things
15:04:32 [danb_lap]
(ii) should distinguish the QL versus rest of msg you're communicating to server
15:04:49 [danb_lap]
(lists some practical stuff as ben did above, eg. how much server resource to spend)
15:04:58 [danb_lap]
...additional component, what you do with the results
15:05:08 [danb_lap]
timbl: these things don't have to clutter up the QL
15:05:09 [DanC]
ack bwm
15:05:29 [danb_lap]
brian: i wish andy was here! re harold's point about query decomposition... should note that Qs often go remotely
15:05:41 [danb_lap]
...so processing model isn't necc that of a low-latency API
15:05:49 [danb_lap]
...often want to get bigger chunks due to net
15:06:20 [danb_lap]
...process issue: when it comes to doing some WG-ish work, we need to start w/ basics first, get something simple running first
15:06:22 [danb_lap]
(danbri claps_)
15:06:32 [PStickler]
Having chains of queries doesn't mean that each subquery in the chain is executed between client and server independently, rather the entire chain can be specified and passed to the engine to process
15:06:34 [danb_lap]
timbl: so if we restricted rule-oriented work in first phase...
15:06:35 [libby]
+1 bwm
15:06:46 [danb_lap]
...eg don't do anything at this stage that we don't need for query
15:06:59 [danb_lap]
brian: i wouldn't go that far
15:07:07 [danb_lap]
...just when chartering, emphasise simple/quick/soon
15:07:23 [danb_lap]
danc: counterpoint to mike's point that intersection is smaller than union
15:07:31 [danb_lap]
...does that advance the state of the art
15:07:34 [danb_lap]
...i think it does
15:07:41 [danb_lap]
[break scheduled in 25 mins]
15:07:50 [danb_lap]
zakim, remind me in 25 minutes to break
15:07:51 [Zakim]
ok, danb_lap
15:08:05 [DanC]
ack timbl
15:08:15 [DanC]
ack DanC
15:08:18 [danb_lap]
timbl: [talks about bindings... vs graph]
15:08:45 [danb_lap]
timbl: likely on server, many things happening, going over web, inference etc
15:09:00 [danb_lap]
...so affects return proof, complexity
15:09:06 [danb_lap]
...but i don't see that changing the query language
15:09:23 [danb_lap]
em: swordfish folks, pls follow up on brian's pt
15:09:33 [danb_lap]
randy: i feel brian's point well taken
15:09:42 [danb_lap]
(Randy from Sun)
15:09:45 [danb_lap]
em: you're doing rdf query
15:09:52 [danb_lap]
randy: we're pulling triples really, not full query
15:10:01 [danb_lap]
[speaker?] [missed point]
15:10:11 [danb_lap]
path: how are you handling bnodes?
15:10:16 [danb_lap]
timbl: do you have a ql?
15:10:25 [danb_lap]
[?]: no, we tried versa, temporarily
15:10:47 [libby]
? is sudeep something
15:10:51 [danb_lap]
timbl: cc/pp WG going into CR phase, lang for describing device capabilities
15:11:05 [Nobu]
Nobu has joined #swarch
15:11:10 [danb_lap]
...data is there, and thy're generating SVG-with-SMIL or whatever, as a function of a piece of RDF
15:11:36 [danb_lap]
timbl: they want a .js api
15:11:40 [danb_lap]
...i'm told mozilla does this
15:11:42 [DanC]
ack Harold
15:11:45 [danb_lap]
danbri: it does, they have a full rdf api
15:11:49 [DanC]
ack EricM
15:12:00 [danb_lap]
harold: views in rdbms have always been rules, datalog like
15:12:15 [danb_lap]
...since sql99 allows recursive rules
15:12:23 [danb_lap]
...i think we shouldn't go behind sql99
15:12:41 [danb_lap]
...in yr table tim, already classical db languages have datalog rules
15:12:52 [danb_lap]
timbl: you mean we shouldn't lag behind sql folks
15:13:00 [DanC]
a webized syntax for datalog is what I think is ripe.
15:13:07 [danb_lap]
[missing couple points from tim]
15:13:16 [danb_lap]
harold: new work would be reducing a query to subqueries
15:13:40 [danb_lap]
...looking at rules from bottom up(?)
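The SQL99-style recursive rule Harold mentions, evaluated bottom-up to a fixpoint, can be sketched with a classic datalog example (the facts and predicate names are made up for illustration): ancestor(x,z) :- parent(x,y), ancestor(y,z).

```python
# Invented base facts for the recursive-rule sketch.
parent = {("a", "b"), ("b", "c"), ("c", "d")}

def ancestors(parent_facts):
    """Naive bottom-up evaluation: iterate the rule to a fixpoint."""
    ancestor = set(parent_facts)  # base case: ancestor(x,y) :- parent(x,y)
    while True:
        new = {(x, z)
               for (x, y) in parent_facts
               for (y2, z) in ancestor
               if y == y2} - ancestor
        if not new:               # fixpoint reached: no new conclusions
            return ancestor
        ancestor |= new
```

This is the kind of query that plain triple-pattern lookup cannot express but a recursive rule captures in one line.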
15:13:57 [danb_lap]
frank: Is concurrency at all within scope of this activity
15:14:10 [danb_lap]
timbl: i'd say out of scope for rule language, in scope for SOAP
15:14:19 [danb_lap]
...web services will allow ways of composing web services
15:14:26 [danb_lap]
(locks, atomic ops etc i guess --danbri)
15:14:53 [danb_lap]
...i put up 'profiling', eg we could say 'everyone using R*QL in practice will also need to agree on the following behaviours'
15:15:17 [danb_lap]
frank: an addon... in a general situation i might want to send an ontology based meta description re concurrency
15:15:26 [danb_lap]
timbl: you mean, meta info about the query...?
15:15:36 [danb_lap]
frank: std assumption is serializability
15:15:55 [danb_lap]
...you can imagine describing in a concurrency ontology, richer details
15:16:27 [danb_lap]
ericp: i think the (my?) QL survey paper has some stuff that grounds this
15:16:38 [danb_lap]
timbl: i should mention that my summary based on three summary papers
15:16:59 [bwm]
tim brings up eric p's page
15:17:14 [danb_lap]
see
15:17:15 [bwm]
url to follow
15:17:27 [bwm]
what looks like an xml structure
15:17:30 [bwm]
a language binding
15:17:40 [bwm]
match has characteristics
15:17:46 [bwm]
as do report and bindings
15:18:27 [bwm]
query languages characterised by characteristics of various components of the language
15:18:46 [bwm]
identifying components and characteristic ontology is approach to design
15:20:02 [bwm]
bwm asks if update is in scope
15:20:09 [bwm]
timbl says no
15:20:42 [bwm]
timbl would like keep update off critical path - we don't have 10 update languages yet
15:21:46 [bwm]
pats: comment to eric's slide; 3rd option - ask for bindings, ask for subgraph, ask for graph of all you found
15:22:12 [bwm]
pats: doesn't see that chaining of queries is necessarily a rule
15:22:55 [bwm]
pats: do one query, then rank those results, then project
15:23:05 [bwm]
timbl: no one has mentioned ranking
15:23:12 [bwm]
timbl: does the query language need ordering
15:23:17 [bwm]
timbl: problem for rdf
15:23:33 [bwm]
pats: that is not part of a query language, its metadata about the query
15:24:25 [bwm]
ben: ordering is very useful in general especially for broad area web query
15:24:40 [bwm]
ben: you have to be careful, not to be a little bit pregnant
15:24:47 [danb_lap]
(hmm ordering seems pretty close to datatyping issues...)
15:24:53 [bwm]
ben: you can't do matching "more or less closely" on the cheap
15:25:17 [bwm]
specifying the notion of closeness is really very different from ranking
15:25:41 [bwm]
if we incorporate it we pull in techniques from info retrieval, bayesian reasoning, fuzzy reasoning
15:25:42 [PStickler]
You can express abstractions of ordering without having to specify how each engine actually calculates ordering
15:26:05 [bwm]
pats: its not part of the query language, its metadata about the query
15:26:13 [bwm]
pats: different engines may order things differently
15:26:25 [bwm]
pats: may choose engine that is appropriate
15:26:57 [bwm]
lynn: ordering in a query result set is something that we will need and its important, but its different from the fundamental query language
15:27:15 [bwm]
lynn: the reason you need ordering is that the things you order higher are better
15:27:24 [bwm]
lynn: doesn't belong in this query language
15:27:24 [PStickler]
Goodness can simply be percentage of partial match
15:27:52 [bwm]
lynn: its a different thing to ask a query that has a yes/no answer
15:28:05 [danb_lap]
(i think each match can be 100% good, yet we still operationally want some of them first, eg for UI generation reasons -- mozilla have some use cases here, see XUL)
15:28:26 [bwm]
lynn: if we do goodness of fit it will make simple matching much harder
15:29:05 [bwm]
lynn: build language with binary answers but keep in mind that we will build an infrastructure on top that will do richer things
15:29:16 [bwm]
sandro: I thought we were talking about ordering not goodness of fit
15:29:37 [bwm]
sandro: wants ordering, but no lynn's type of ordering
15:29:41 [bwm]
s/no/not/
15:29:41 [PStickler]
I used the term 'ranking' which has to do with goodness of fit, or completeness of match
15:29:51 [las]
It's important to have (and to understand that sweb will have) imprecise matching. It's just not the same as the first query language.
15:30:27 [bwm]
harold: wants to support sandro: ordering is a kind of aggregate which could be built into a rules language
15:30:32 [PStickler]
Ordering/ranking is not part of the query language, but is part of the query solution, and is communicated to the engine just as requests for bindings or proofs as results
15:30:47 [bwm]
harold: its like applying a built-in after a query
15:31:23 [bwm]
ericP: report characteristic - gives back all triples that have proved themselves - triple closure
15:31:34 [bwm]
... can nest them like nested sql statements
15:31:39 [bwm]
... look for best case
15:31:48 [bwm]
... give me back the first answer, not exhaustive
15:32:04 [bwm]
... that would handle scenario without aggregates or ordering
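EricP's "give me back the first answer, not exhaustive" — and DanC's earlier note that this needn't be query-language syntax — can be sketched by making the matcher lazy, so first-answer-only becomes a choice of how the caller consumes results. Triples and patterns here are invented for illustration.

```python
# Invented toy data; the engine is a generator, yielding solutions lazily.
TRIPLES = [("doc1", "creator", "libby"),
           ("doc2", "creator", "danbri"),
           ("doc3", "creator", "libby")]

def solutions(pattern, triples):
    """Lazily yield one binding dict per matching triple."""
    for triple in triples:
        binding = {}
        ok = True
        for p, t in zip(pattern, triple):
            if p.startswith("?"):
                binding[p] = t
            elif p != t:
                ok = False
                break
        if ok:
            yield binding

# "first answer only" vs exhaustive is decided by the consumer, not the QL:
first = next(solutions(("?doc", "creator", "libby"), TRIPLES))
all_of_them = list(solutions(("?doc", "creator", "libby"), TRIPLES))
```

The distinction lives in the engine/consumer protocol, which fits the earlier observation that stopping on the first answer is a characteristic of the engine rather than of the query.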
15:32:22 [bwm]
frankm: when the question of ordering came up, can we produce an ordered thing (whatever the ordering)
15:32:32 [bwm]
... i.e. can we produce an rdf:Seq
15:32:40 [bwm]
... what kind of closure does the query language have
15:32:51 [bwm]
... can it produce all the collection types in RDF
15:32:51 [Zakim]
danb_lap, you asked to be reminded at this time to break
15:33:06 [bwm]
... there is no word for that set of triples
15:33:36 [bwm]
path: in the matter of order its vital that if queries can specify order, the engine should be able to ignore it
15:33:41 [DanC]
rev 1.40 of mtg page has lightning talks.
15:33:50 [bwm]
... invite randy to say more about the other issue
15:33:54 [bwm]
... which is update
15:34:28 [bwm]
pats: agrees fully with pat
15:35:17 [bwm]
ben: its important to distinguish between semantically generated notions of exact, better, best, and the ability to manipulate ordered lists as concepts in the language
15:35:26 [bwm]
... most rule languages have that sort of thing
15:35:45 [bwm]
... that sort of orderedness is something we all agree we want to have
15:36:03 [bwm]
... disagrees with distinction that patS is trying to make
15:36:15 [bwm]
miked: people will want sql like ORDERED clause
15:36:27 [PStickler]
I am speaking about RANKING not ORDERING
15:36:31 [bwm]
... thinks that is something that is not really in a rules language
15:36:40 [bwm]
who's speaking
15:37:02 [libby]
rand possibly?
15:37:03 [ericP]
Rand is speaking
15:37:07 [bwm]
??? says query language gives you an ordering mechanism
15:37:14 [bwm]
... other ordering is higher order
15:37:24 [bwm]
timbl: we have collections
15:37:37 [bwm]
... there are times when we want to do aggregations
15:37:58 [bwm]
... before you can say something is closest requires a closed world
15:38:04 [PStickler]
We need to differentiate between ORDERING, which is an operation based on the results of a query, and RANKING which is an operation performed during the execution of a query
15:38:19 [bwm]
...ordering requires a closed world
15:38:38 [bwm]
... need formulae
15:38:49 [bwm]
... otherwise the rule makes no sense
15:39:07 [bwm]
scribe isn't following this
15:39:19 [bwm]
timbl: pats can have last word
15:39:32 [bwm]
pats: differentiate between ordering and ranking
15:39:43 [bwm]
... ordering is about sorting the results of a query
15:39:50 [bwm]
... ranking affects the query itself
15:40:16 [bwm]
... metadata about the query is separate from the query itself
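Pat's closing distinction can be sketched concretely (all data and field names invented): ORDERING sorts the finished result set and so can live outside the query language, while RANKING — PStickler's "percentage of partial match" — scores candidates while the query runs and changes which answers come back.

```python
# Invented toy data: (name, employer, role) records.
PEOPLE = [("bob", "sun", "engineer"),
          ("ann", "hp", "manager"),
          ("joe", "sun", "manager")]

def query(employer, role):
    """Exact-match query; ORDERING applied afterwards to the result set."""
    hits = [p for p in PEOPLE if p[1] == employer and p[2] == role]
    return sorted(hits)  # post-hoc sort: metadata about the query

def ranked_query(employer, role):
    """RANKING: each candidate scored during matching (fraction of pattern hit)."""
    scored = [((p[1] == employer) + (p[2] == role), p) for p in PEOPLE]
    return [p for s, p in sorted(scored, key=lambda sp: -sp[0]) if s > 0]
```

An engine that ignores the ordering request still returns the same exact-match answers; the ranked query, by contrast, admits partial matches and so returns a genuinely different (and ordered) answer set.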
15:40:32 [bwm]
adjourned for a short time
15:40:44 [bwm]
lightning talks - the timer is non negotiable
15:41:00 [bwm]
break for 5 mins
15:47:29 [DanC]
q= Stickler, Volz, Miller, Grosof, Boley, Tabet, Brickley, Sudeem, McBride, De Roo, Hawke
15:47:40 [DanC]
Zakim, give each speaker 5 minutes
15:47:41 [Zakim]
I don't understand 'give each speaker 5 minutes', DanC
15:48:07 [DanC]
Zakim, allow each speaker 5 minutes
15:48:08 [Zakim]
ok, DanC
15:49:20 [Nobu0]
Nobu0 has joined #swarch
15:49:26 [bwm]
zakim, has more power than I realised
15:49:27 [Zakim]
I don't understand 'has more power than I realised', bwm
15:51:47 [DanC]
q= Stickler, Volz, Miller, Boley, Grosof, Tabet, Brickley, Sudeem, McBride, De Roo, Hawke
15:53:50 [DanC]
ack Stickler
15:53:55 [timbl]
timbl has joined #swarch
15:56:44 [libby]
erk
15:57:28 [Zakim]
Zakim has joined #swarch
15:58:59 [timbl]
I have pointed out on #RDFIG before that MGET is counter to Semantic Web architecture and Web architecture.
15:59:21 [DanC]
ack Volz
16:00:10 [RalphS]
KAON
16:00:59 [RalphS]
Raphael Volz presents KAON
16:01:39 [timbl]
ref: Graphlog, Mendelson (sp?) Univ toronto [PODS90]
16:02:58 [JosD___]
JosD___ has joined #swarch
16:03:01 [Zakim]
Zakim has joined #swarch
16:05:01 [Zakim]
Zakim has joined #swarch
16:05:13 [RalphS]
queue=miller
16:05:35 [DanC]
started
16:05:40 [PStickler]
PStickler has joined #swarch
16:06:11 [alberto]
alberto has left #swarch
16:06:50 [RalphS]
Libby Miller speaks about Query testcases
16:08:30 [RalphS]
DanBri: please mail testcases to www-rdf-rules
16:10:43 [DanC]
Boley started
16:10:53 [timbl]
ack Boley
16:11:04 [libby]
all the info for that 'talk' is in,
16:16:12 [simonMIT]
simonMIT has joined #swarch
16:16:18 [chaalsBOS]
chaalsBOS has joined #swarch
16:16:37 [DanC]
Grosof started
16:17:18 [DanC]
ACTION Boley: contribute presentation materials to meeting record
16:17:27 [DanC]
ACTION Volz: contribute presentation materials to meeting record
16:18:51 [DanC]
I see 20030217-outline.html in the addressbar of what's projected
16:19:27 [alberto]
alberto has joined #swarch
16:20:19 [libby]
yeah it starts with C: or something though - I was trying to get it down last night.
16:22:01 [DanC]
Tabet started
16:22:34 [timbl]
Ben said that the document he just showed is on the Joint Committee archive
16:24:27 [DanC]
ACTION Tabet: contribute presentation materials to meeting record
16:24:35 [timbl]
I am surprised to see CWM described as a RuleML application, and wonder how many, if any, of the others on the list actually use RuleML!
16:27:55 [DanC]
danbri started
16:28:14 [libby]
16:33:09 [libby]
a javascript and SVG demo which queries RDF data about people
16:33:48 [DanC]
Sudeed (sp?) started
16:39:07 [jhendler_]
jhendler_ has joined #swarch
16:39:25 [DanC]
q?
16:39:35 [RalphS]
Bernard: would like a query language to be useable without requiring the user to understand the entire model
16:40:25 [DanC]
ack McBride
16:41:07 [DanC]
McBride: not talking about 'using RDF to track group work' as advertised
16:41:23 [em-lap]
rdf button...
16:41:48 [em-lap]
16:42:06 [em-lap]
for the size BrianM is looking for
16:42:53 [DanC]
hmm... restaurant service... gotta tell dwj...
16:42:55 [DanC]
djw
16:43:58 [timbl]
Hmmmm.. I must talk with Brian about the file system ontology as I want to add cwm builtins.
16:44:04 [danb_lap]
more src files from foafnaut demo:
(these are .gz'd person descriptions)
16:45:58 [danb_lap]
brian, timbl: see
for Mozilla rdf filesystem datasource
16:46:04 [DanC]
ack De Roo
16:46:50 [GuusS]
DanC, any chance of a short demo from me on semantic image annotation? A bit late...
16:49:13 [dfg_olin]
dfg_olin has joined #swarch
16:51:52 [DanC]
ack Sandro
16:52:29 [JosD___]
JosD___ has joined #swarch
16:52:35 [RalphS]
->
Sandro's talk
16:57:19 [danb_lap]
semrun:
16:57:40 [DanC]
ack Carroll
16:58:11 [RalphS]
Jeremy Carroll: Signing an Ontology
17:07:02 [libby]
I can try and get some form now
17:07:08 [libby]
josd likes testcase approach
17:07:11 [danb_lap]
brian: we (jena group) would like to see a WG, a synthesis of existing query work
17:07:25 [danb_lap]
jjc: HP has decided we would have someone on a WG
17:07:34 [danb_lap]
em: would you be willing to put rdql up as a note
17:07:38 [libby]
danbri/libby not keen on wg, like testcases etc
17:07:43 [danb_lap]
brian: don't see why not. andy's the person to ask.
17:07:56 [danb_lap]
...
17:08:10 [danb_lap]
brian: plethora of similar-but-different rdfq systems... don't see same w/ rdf rules right now
17:08:22 [danb_lap]
am nervous of taking on too much, a lesson from rdfcore
17:08:35 [danb_lap]
ben: what is the right way to structure a discussion about whether to have a wg
17:09:12 [danb_lap]
danbri: get out there on the mailing lists and show progress!
17:09:19 [danb_lap]
timbl: at some pt we need a charter...
17:10:43 [danb_lap]
timbl: if we have a handle on the architecture, we can start drafting a charter
17:10:53 [danb_lap]
harold: when the webont group was started, there was an explicit decision not to treat rules
17:11:03 [danb_lap]
...was often discussed that rules might be the next wg
17:11:11 [danb_lap]
...am nervous that the next wg might only be re query
17:11:19 [danb_lap]
...that for 2nd time, rules postponed
17:11:34 [danb_lap]
said [tabet]: add one comment. If you want RDF to reach industry, it is important to start the work.
17:11:43 [danb_lap]
...instead of saying 'lets do queries first'
17:11:52 [danb_lap]
em: 2nd danbri's point
17:12:02 [danb_lap]
...yes when charter was written we tried to bound OWL work
17:12:06 [danb_lap]
...critical to scope work
17:12:15 [danb_lap]
...charter also said 'this will be done in 8 months'
17:12:27 [danb_lap]
...things tend to take longer when hit edge cases
17:12:38 [danb_lap]
...case has to be made, in terms of industry adoption, individuals,
17:12:50 [danb_lap]
...case is still being made
17:13:01 [danb_lap]
danc: a strategy used successfully w/ xml
17:13:06 [danb_lap]
...put a long roadmap in place
17:13:12 [danb_lap]
...jon bosak wrote xml activity statement
17:13:26 [danb_lap]
...wg produced xml 1.0 after 18 months, then activity split into pieces to finish the work
17:13:39 [danb_lap]
...strategy is used a lot, to say 'come join, then we'll do bits at a time'
17:13:53 [danb_lap]
frank: seconding harold's nervousness about considering queries without rules
17:14:08 [danb_lap]
...for the SW apps of ontology languages, i'm lost without ability to specify rules
17:14:09 [alberto]
alberto has joined #swarch
17:14:30 [danb_lap]
path: i'd like to disagree... one can rationally consider queries without rules
17:14:38 [danb_lap]
...and then put them together
17:14:51 [danb_lap]
em: frank, yes v important. There is a list, www-rdf-rules, that people look to for that discussion
17:15:33 [danb_lap]
...if you have these concerns, ...
17:15:44 [danb_lap]
danbri: part of making the case for a WG, is finding internal-to-w3c customers, other WGs
17:16:00 [danb_lap]
patricks: re rules, would hope rule folk would participate to ensure a rule-ready ql
17:17:10 [danb_lap]
adjourned for lunch.
17:17:42 [danb_lap]
--------------
17:25:13 [aaronofmo]
aaronofmo has joined #swarch
18:07:21 [libby]
libby has joined #swarch
18:25:02 [libby]
libby has joined #swarch
18:28:44 [shellac]
shellac has left #swarch
18:31:13 [chaalsBOS]
chaalsBOS has joined #swarch
18:33:09 [DanC]
coming back after lunch
18:33:13 [PStickler]
PStickler has joined #swarch
18:34:14 [DanC]
DanC has changed the topic to: SemWeb Arch in Boston
18:36:08 [bwm]
bwm has joined #swarch
18:36:24 [DanC]
EricM convenes...
18:36:42 [DanC]
[[ 13:30 - 15:00 First afternoon session
18:36:42 [DanC]
Best Practices / Education and Outreach - Eric Miller ]]
18:37:51 [danbri]
danbri has joined #swarch
18:38:06 [DanC]
EricM introduces Kathy McDougal of Sun.
18:38:54 [danbri]
kathy: will show you one application of rdf... not claiming sun espouse rdf across the board
18:39:20 [danbri]
...how did we arrive at rdf? Knowledge Technologies. Met Uche from Forethought. Located near our Sun offices.
18:39:28 [danbri]
...walked away w/ a focus on both RDF and on DC
18:39:35 [danbri]
...a year of education w/ Uche
18:39:42 [danbri]
...talking to EricM too
18:39:51 [DanC]
ACTION KathyM: contribute presentation materials to the meeting record
18:40:02 [danbri]
...talking here both about what we're done, and about future work ideas, best practice etc
18:40:25 [danbri]
...our group, Knowledge Services, within Sun. We try to create and share knowledge to solve service issues.
18:40:29 [em-lap]
em-lap has joined #swarch
18:40:40 [danbri]
intros: melissa, randy, sandeep
18:41:08 [danbri]
...goals: help engineers on phone w/ customer support, help online self service, and ultimately avoid having problems in the 1st place!
18:41:14 [timbl]
timbl has joined #swarch
18:41:20 [danbri]
...so use rules and problem diagnosis for problem avoidance
18:41:57 [danbri]
...IC management: process by which business applies [missed]
18:42:11 [danbri]
(see slides for detail; I can't scribe all this and listen properly...)
18:42:33 [danbri]
(explanation that swoRDFfish is the only word with 'rdf' in it in sequence)
18:42:46 [danbri]
(no connection to swordfish.rdfweb.org btw)
18:43:19 [danbri]
slide re SunSolve site screenshot. -- more precise search.
18:43:24 [danbri]
...also content aggregation
18:43:33 [danbri]
...often traditionally you need to know where knowledge lives
18:43:55 [danbri]
..eg whitepapers about products, in 5 different KBs around company
18:44:10 [danbri]
...another thing we're solving with Ontology: std names for products
18:44:47 [danbri]
......consistency across site
18:44:54 [danbri]
'progress to date' slide
18:45:47 [danbri]
- corp wide stds defined; - cross-org team formed; - exec sponsorship gained; - business education ongoing; - ontology designed and implemented; - tech infrastructure deployed; - reference implementations created.
18:46:55 [danbri]
...one of the most powerful things was to create a demo
18:47:08 [danbri]
...to show folk the value of adding metadata
18:47:54 [danbri]
...stakeholders: (source systems); global sales; marketing; corporate; sun services; software.
18:48:01 [danbri]
...parts of the org we've gone out to educate
18:48:22 [danbri]
...product knowledge that we're trying to aggregate is spread around the company
18:48:29 [danbri]
sandro: how many people work at Sun
18:48:50 [danbri]
36,000
18:49:14 [danbri]
(danbri stole 'swordfish' from ericp, gave it to libby)
18:49:21 [danbri]
architecture summary:
18:49:46 [danbri]
- open standards based; sun ONE Web Server; Java RDF API; Oracle Database; JAX-RPC Web Services; N-Tier Capable
18:50:02 [DanC]
partners... yeah... fractal community...
18:50:08 [danbri]
...Standards: RDF, SOAP/XML, DAML+OIL, Java
18:50:20 [danbri]
ben: you commented earlier that you 'liked rules'; can you expand?
18:50:33 [danbri]
kathy: we do, would like to work with them more. We do configuration management...
18:50:48 [danbri]
...you can imagine someone in sun store, figuring out what works with what
18:50:58 [danbri]
melissa: [missed example]
18:51:21 [danbri]
kathy: many other apps of rule across the business
18:51:29 [danbri]
pat hayes: same question re DAML+OIL
18:51:39 [danbri]
sandeep(?): transitive properties, some constraints
18:51:44 [danbri]
...restrictions
18:51:47 [danbri]
...not full thing
18:52:00 [danbri]
harold: do you use this at runtime, or do you pre-deduce facts
18:52:05 [danbri]
...do you do any realtime inference?
18:52:16 [danbri]
k: not realtime inference
18:52:21 [danbri]
randy: not doing much inference yet
18:52:36 [danbri]
k: we got the concept 2 years ago, spent 1st year coming up w/ the standards
18:52:52 [danbri]
getting support, data etc
18:53:03 [danbri]
ian: how do you work with the daml+oil constraints, process?
18:53:18 [danbri]
sandeep: nothing on the market, so we have something of our own in Java
18:53:27 [danbri]
(mention of a/the daml validator)(?)
18:53:35 [danbri]
mike: where do you get the data? go out to the web?
18:53:43 [danbri]
kathy: building those interfaces, getting things more automated
18:53:52 [danbri]
r: has been more hand-coded previously
18:54:02 [danbri]
melissa: as we migrate from central KB things get marked up better
18:54:15 [danbri]
...a tool consolidation effort
18:54:20 [danbri]
ian: how much data do you have?
18:54:29 [danbri]
...is it proprietary? could be useful src of tests
18:54:41 [danbri]
sandeep: we have 30k triples, mostly DC and sun product markup
18:54:49 [danbri]
...some proprietary, some opensource
18:55:11 [danbri]
randy: we have gone back and fwd with lawyers and determined that we are not going to patent our schemas
18:55:19 [danbri]
(scribe note; did i get that right?)
18:55:30 [danbri]
...but that some knowledge in the schemas/ontos is pre-release
18:55:45 [danbri]
...
18:55:55 [danbri]
randy: re entitlement...
18:56:00 [danbri]
...we are a product company
18:56:16 [danbri]
...at certain pts in prod cycle, you can get to see things based on who you are
18:56:33 [danbri]
...eg 45 days pre release, peeople in certain category see certain stuff
18:56:48 [danbri]
...we are trying to use corporate ldap, not reinvent infrastructure in rdf unless needed
18:56:55 [danbri]
jos: can you comment more on web services?
18:57:16 [danbri]
sandeep: ...
18:57:20 [danbri]
...use wsdl file?
18:57:34 [danbri]
(missed detail of pt)
18:57:45 [danbri]
kathy: (returning to slides)
18:57:51 [danbri]
....recommended tags and vocab
18:57:56 [danbri]
...started with dublin core
18:58:03 [danbri]
...sub product vocab, and tech areas
18:58:27 [danbri]
(scribe note: perhaps s/sandeep/sudeep/ -- correction welcomed)
18:58:38 [danbri]
....how do we know what people we want to exchange info with are using?
18:59:00 [danbri]
....controlled vocabs: language, RFC 3066; Format, based on Mime std. Publisher. Rights. Sun Product, ...
18:59:13 [danbri]
...future opportunities
18:59:22 [danbri]
...common practices for business education
18:59:27 [danbri]
...case studies proving business impact
18:59:31 [danbri]
....registry of standards
18:59:37 [danbri]
....shared approaches to common business models
18:59:42 [DanC]
"Future Opportunities" slide
18:59:44 [danbri]
...governance model best practices
18:59:52 [danbri]
...ontology modeling best practices
18:59:56 [danbri]
...expanding tech support
19:00:15 [danbri]
- we can go out to other companies, talk about where they're seeing impact
19:00:17 [danbri]
(?)
19:00:19 [GuusS]
GuusS has joined #swarch
19:00:26 [danbri]
(re case studies)
19:00:37 [danbri]
...registry of stds being mainly vocabs,
19:00:39 [danbri]
randy: also models
19:00:46 [danbri]
...certainly to date, focus on vocabs
19:00:56 [danbri]
timbl: for a purist, a model is just an ontology
19:01:09 [danbri]
pat hayes: did you find any existing ontology work of utility, eg Cyc
19:01:14 [danbri]
kathy: did look at some of that stuff
19:01:22 [JosD___]
JosD___ has joined #swarch
19:01:28 [danbri]
randy: when i looked at daml.org, number of other vocabs was somewhat overwhelming
19:01:35 [danbri]
pat: not just daml, larger vocabs
19:01:39 [danbri]
danc: hard to see forest for trees!
19:01:43 [danbri]
randy: yup
19:02:05 [danbri]
timbl: if you met in street, you might say 'look at these three!' not list all of them
19:02:13 [danbri]
randy: didn't have any foaf in place ;-)
19:02:23 [danbri]
...we had eric to talk to, but not early on enough(?)
19:02:34 [danbri]
em: instead of going via me, could talk to partners
19:02:46 [danbri]
...not just paring things down
19:02:51 [DanC]
I attempted to do a 'look at these 3' in
19:03:01 [danbri]
...oftentimes you want to look at these thru a filter
19:03:07 [danbri]
randy: need metadata about the ontologies
19:03:21 [danbri]
kathy: i'd like to be able to say 'here are the 3 most commonly used'
19:03:30 [danbri]
pat: don't re-invent time and processes, they've been done
19:03:32 [danbri]
@@url!
19:03:42 [sandro]
Pat is suggesting:
19:03:43 [danbri]
timbl: the way you divide product and company up might be v sun-specific
19:03:47 [danbri]
thanks sandro :)
19:04:12 [danbri]
timbl: for those, modelling is yr own business, for other areas, more value in sharing
19:04:21 [danbri]
kathy: 90%ish we can share
19:04:32 [danbri]
ericp: any commitment to figuring out which that 90% is
19:04:36 [danbri]
randy: thats why we are here
19:04:41 [danbri]
kathy: yes, definitely
19:04:52 [danbri]
ian: re ontology, what tools are you using to build and maintain it
19:04:59 [danbri]
sandeep: custom built, based on java
19:05:06 [danbri]
...uses daml validator
19:05:13 [danbri]
...can validate(?) data with it
19:05:16 [danbri]
randy: pretty manual
19:05:27 [danbri]
...primary ontologist edits file in vi, then checks with validator
19:05:33 [danbri]
s: yes at this point in time, vi
19:05:41 [danbri]
...in june timeframe, going for UI and more automation
19:05:46 [PStickler]
Ahhh, so they're doing it the right way ;-)
19:06:36 [danbri]
(scribe missed a few pts plugging in laptop)
19:06:46 [danbri]
kathy: governance model, best practices...
19:06:50 [danbri]
... could learn from you guys
19:06:58 [danbri]
...internal, external... getting agreement is challenging
19:07:10 [danbri]
...onto best practice similarly
19:07:18 [danbri]
(next slide: technology opportunities)
19:07:58 [danbri]
- rdf aware search engines; improved navigation utilizing metadata; automated application of metadata; rdf/daml/owl-aware onto management tools; knowledge mining and inference tools/approaches; topic maps leverage
19:08:10 [danbri]
randy: re knowledge mining
19:08:20 [danbri]
...this is where data gets v interesting, good to see something in this space
19:08:28 [danbri]
...also need to understand interaction, overlap with Topic Maps
19:08:43 [danbri]
...don't know where momentum from Knowledge Tech discussions re rdf/tm went
19:08:50 [danbri]
kathy: want to learn about vocabs to leverage
19:09:10 [danbri]
aside...ooOO(I wonder if RSS 1.0 might be useful for Swordfish... --danbri)
19:09:22 [danbri]
slide: critical enablers
19:09:27 [danbri]
...strong exec sponsorship is a big need
19:09:36 [danbri]
...herding cats, need for strong support
19:10:00 [danbri]
...education of the business re value proposition, not easy to get across the 'what's in it for me'
19:10:12 [danbri]
...have to keep doing this, focus on practical savings
19:10:20 [danbri]
...business and technical partnership is key to success
19:10:27 [danbri]
...methodology: Sun Sigma based
19:10:40 [danbri]
...bus. process improvements, not just from a tech standpoint
19:10:50 [danbri]
...randy and i went out and sold this product almost as if a software product
19:10:52 [danbri]
thanks jos
19:11:23 [danbri]
...importance of relnship with groups such as this
19:11:26 [danbri]
...to see what is coming
19:11:31 [danbri]
...also to talk to other companies
19:11:42 [danbri]
...we are here with an interest in a Best Practice Sharing Working Group
19:12:09 [danbri]
contacts: program manager, kathy.macdougall@sun.com, chief architect: randy.willard@sun.com
19:12:13 [danbri]
frank: is this operational?
19:12:14 [danbri]
k: yes
19:12:21 [danbri]
randy: working its way into wider use
19:12:27 [danbri]
k: a web search that uses it
19:12:32 [danbri]
...its an infrastructure component
19:12:35 [danbri]
...that defines the stds
19:12:38 [danbri]
...not a website
19:13:02 [danbri]
frank: one issue, to what extent is this in operational use
19:13:06 [danbri]
kathy: it is
19:13:15 [danbri]
randy: yes, prod'n operation in real use
19:13:25 [danbri]
danc: re 30k triples thats just the ontology, right?
19:13:36 [danbri]
melissa: 3 mill assets being described...
19:13:47 [danbri]
kathy: also have data warehouse folk getting in touch
19:13:59 [danbri]
...re-using vocab to connect structured and unstructured data
19:14:06 [danbri]
randy: swordfish is just metadata registry piece
19:14:26 [danbri]
(re sigma, i found
for more context)
19:14:42 [danbri]
ralphs: re going out and teaching folk this world's lingo, do they often have their own models...
19:14:50 [danbri]
...can you say more about issues you've encountered there
19:14:55 [danbri]
k: 1st thing def the lingo
19:15:01 [danbri]
...all of us agreeing on certain words for things
19:15:09 [danbri]
...i rarely tell people about rdf unless they are engineers
19:15:15 [danbri]
...focussing on what it is going to do for them
19:15:26 [danbri]
...if you do this to your content, the following things become more possible
19:15:32 [danbri]
randy: q was about content models?
19:15:40 [danbri]
ralph: yes... re unstructured vs structured...
19:15:58 [danbri]
... do you find when you go to a content group, they often have models that are useful to you when building an ontology
19:16:04 [danbri]
randy: infrequently...
19:16:40 [danbri]
k: helping education of content owners
19:16:50 [danbri]
...some don't have an agreed content model
19:17:04 [danbri]
timbl: can you partition those you've talked to, eg. RDBMS vs Java vs ...
19:17:17 [danbri]
randy: sun has one of each of every kind of content management system
19:17:21 [danbri]
...often backed by Oracle
19:17:27 [danbri]
...lots of way to carve things up
19:17:34 [danbri]
melissa: most content not in rdbms
19:17:36 [danbri]
...most is html
19:17:47 [danbri]
randy: one problem swordfish should solve...
19:17:54 [danbri]
...we have 100s of websites that support teams use
19:18:14 [danbri]
...this should help us rationalise and aggregate things
19:18:32 [danbri]
k: one thing from content agg standpoint
19:18:44 [danbri]
...we are NOT saying that we want one content management system across the company
19:19:03 [danbri]
...instead we say 'if you just use this common content model, common metadata, you can do what you want!
19:20:08 [danbri]
em: want to reduce barriers...
19:20:17 [danbri]
randy: like snowball just picking up on way downhill
19:20:24 [danbri]
...just coming out of first phase
19:20:40 [danbri]
harold: makes sense to build on top of an opensource content management system
19:20:45 [danbri]
eg plone.org (zope based)
19:20:49 [RalphS]
Tlone
19:20:50 [danbri]
...portal building, intranet
19:20:53 [danbri]
thanks ralph
19:21:46 [danbri]
timbl: ...when you've got it syntactically into rdf, you still have the prob of agreeing the common vocab within rdf's structure
19:21:48 [PStickler]
19:22:09 [danbri]
...not just use cases, but supporting business of agreeing the 80% that can share a model
19:22:19 [danbri]
k: yes, business<->tech partnership is key
19:22:23 [danbri]
...need both sides on board
19:22:52 [RalphS]
thanks, Patrick -- I did hear wrong
19:22:53 [danbri]
kathy: express regrets that Bernard (Sun Labs) couldn't attend...
19:23:03 [danbri]
...having him take some requirements, keep dialog going
19:23:15 [danbri]
pat hayes: roughly how deep is your class hierarchy
19:23:35 [danbri]
sandeep/randy: about three levels of class hierarchy
19:24:51 [danbri]
danbri: asked about whether looked into RSS 1.0
19:24:57 [danbri]
a: yes, early days though
19:25:04 [danbri]
harold: re rdf search engine?
19:25:16 [danbri]
...isn't every rdf query system a semantic search service
19:25:22 [danbri]
...you can always click through
19:25:33 [danbri]
...could get a hit list of links etc
19:25:54 [danbri]
kathy: looking for search engines, in traditional vein, that don't make use of embedded metadata
19:26:10 [danbri]
randy: rdf in html discussion
19:26:33 [danbri]
em: TAP and semantic search seem relevant, augmenting of trad search w/ rdf-based additions
19:26:41 [danbri]
...more focussed areas
19:27:16 [PStickler]
SW search engines could crawl the SW via MGET just as existing Web engines crawl the Web using GET
19:27:25 [danbri]
ben: in daml program, there are various tools for such boosting
19:27:44 [danbri]
...one way is to do the inference to generate more terms, so simpler tools find the doc.
19:28:23 [danbri]
danbri: works across the web: google finds html pages that are generated from rdf descriptions
19:28:33 [danbri]
timbl: interested re discovery, browsing...
19:28:55 [danbri]
...normally if you go to a site, you often go to support section, or downloads section
19:29:04 [danbri]
...versus lookup by serial number
19:29:11 [danbri]
...eventually you get the item not a page
19:29:21 [danbri]
...you can identify the specific operating system, driver, etc
19:29:29 [danbri]
...after that, i can plunge back into html world
19:29:50 [danbri]
...things like the foafnaut, where you have a known item lookup, i find very nice
19:30:01 [danbri]
...if you know exactly which item to lookup, navigation can be very crisp
19:30:12 [PStickler]
Sounds like TimBL will like browsing the SW with MGET
19:30:21 [danbri]
...i'd like to see much more powerful rdf browsers, starting at an item and folding out linked things
19:30:39 [danbri]
...that blows a spreadsheet out of the water in terms of analytic facilities, interactive exploration
19:31:04 [danbri]
...in a very defined environment, browsing works well
19:31:29 [danbri]
timbl: it was just scripting plus SVG
19:32:05 [PStickler]
A generic RDF "browser" requires machinery such as MGET to achieve consistent behavior across SW enabled servers...
19:32:39 [sandro]
Or machinery such as Tim's recommended uses of # :-)
19:33:18 [chaalsBOS]
Guus Schreiber. Some thoughts on Semantic Web Best practices
19:34:24 [PStickler]
Using the # approach and arbitrary RDF documents is like doing a GET and getting a dozen HTML pages in response. What is needed is a well defined concept of 'concise bounded resource description' which forms the basis for consistent behavior of SW servers
19:35:06 [chaalsBOS]
GS: a goal is to help people set up their first semantic web app.
19:35:49 [chaalsBOS]
... so there are some vocabularies available, such as wordnet (in several versions) which we use in our own applications.
19:36:34 [danbri]
I have a wordnet rep too, and agree a common one would be useful.
19:36:59 [libby]
:)
19:37:18 [RalphS]
VRA - Visual Resources Association; specialization of Dublin Core
19:37:19 [RalphS]
19:37:35 [RalphS]
sigh, appears to require flash
19:37:36 [chaalsBOS]
... GS shows his VRA vocabulary - a specialisation of Dublin Core that can be used as a model for people to copy
19:38:16 [chaalsBOS]
... There are a lot of things out there that can be used, so it isn't always necessary to construct something from scratch.
19:38:53 [chaalsBOS]
... It doesn't take (experts at least) very long to convert a vocabulary to an RDF format, but making them available would be very helpful.
19:39:38 [chaalsBOS]
... Which raises the issue of how to deal with the different ontologies/vocabularies.
19:39:59 [Nobu]
Nobu has joined #swarch
19:40:02 [chaalsBOS]
Mik Dean: Should there be people creating data about whether they recommend a particular ontology?
19:40:14 [chaalsBOS]
GS There are people working on these areas
19:40:20 [chaalsBOS]
... it is a bit subjective
19:40:32 [chaalsBOS]
Pat Hayes: There are some tools that support this
19:41:06 [libby]
nice idea danc
19:41:11 [chaalsBOS]
Ian Horrocks: There are methodologies developed for describing characteristics of ontologies as metrics for quality.
19:42:41 [chaalsBOS]
GS It is easier to develop applications and publish them in public or semi-public domains (e.g. medicine, etc)
19:42:49 [danbri]
interesting point
19:43:12 [chaalsBOS]
GS 2: Guidelines, FAQs, etc:
19:43:43 [chaalsBOS]
It would be good to have an archive of guidelines and examples for various common tasks
19:44:36 [chaalsBOS]
... transforming a vocabulary into RDFS/OWL, mapping ontologies, integration of various information sources,
19:44:51 [chaalsBOS]
... combining RDF with XML/HTML, etc...
19:46:14 [chaalsBOS]
GS 3: Tools and demos. Ones that show you using the tools as your own production systems are effective demonstrations
19:47:26 [Nobu]
Nobu has joined #swarch
19:47:28 [chaalsBOS]
... things that have nice pictures are good demos.
19:48:16 [chaalsBOS]
... (and it is important to have a real system behind them - "live demos" not special easy cases)
19:49:00 [chaalsBOS]
GS 4: Links to related techniques
19:49:59 [chaalsBOS]
GS Publishing technical notes describing relationships between similar approaches is useful - the relationship between Semantic Web and Topic Maps, etc...
19:50:30 [chaalsBOS]
Kathy: A lot of this is the stuff that we have done or wanted to do
19:50:38 [chaalsBOS]
Benjamin: Ditto
19:52:17 [chaalsBOS]
Benjamin: What is our story about how to work with people who are thinking in XML (presumably other than RDF/XML) - topic for open discussion.
19:53:00 [chaalsBOS]
Liddy Nevile gives presentation.
19:54:07 [chaalsBOS]
LN: Looking at how to talk to people about why replace a system that works with something new - what gets people asking about RDF in the first place.
19:54:34 [chaalsBOS]
... "making the magic explicit" - getting people to see some reason why RDF is cool.
19:55:49 [chaalsBOS]
... I spent a long time working on Logo because I saw some cool magic in Turtle Graphics that could lead to kids doing lots of cool stuff
19:56:25 [danbri]
liddy: 'knowledgeable is being good at knowing'
19:56:27 [danbri]
liddy++
19:56:29 [chaalsBOS]
... We all talked about the turtle as a way to do cool things, but some people didn't see the magic.
19:56:46 [danbri]
(using 'knowledge' as a mass noun doesn't really work, k as skill much better...)
19:58:16 [chaalsBOS]
LN: There are lots of rough diagrams, but there are very few well-developed demonstrations.
19:58:59 [chaalsBOS]
... I started to use FOAF in an aboriginal community.
19:59:26 [chaalsBOS]
... Problem is that the society model is very different, so "normal" relationships are mostly meaningless.
20:00:11 [chaalsBOS]
... Showing that FOAF could be readily used for a completely different group of relationship types was something that showed the value in that scenario.
20:00:20 [danbri]
I guess you can't just tell 'em "go make yr own namespace..."
20:00:30 [danbri]
(that works on rdf geeks...)
20:01:51 [chaalsBOS]
LN: It is important to explicitly show the thing the makes us excited - going one level past where we get the A-Ha, to show what it was we see that is so cool.
20:01:59 [chaalsBOS]
Break time...
20:02:18 [DanC]
======== break 'till 15:15
20:02:18 [RalphS]
20:25:36 [libby]
libby has joined #swarch
20:26:04 [Nobu]
Nobu has joined #swarch
20:33:38 [DanC]
[... discussion of syntax, quantification, ... ]
20:34:38 [DanC]
PatH: I think the approach of DL in RDF could be extended to all of FOL. Ugly, but seems doable.
20:36:43 [danbri]
would be interesting to see the details...
20:45:26 [DanC]
[... discussion of whether RDF(S) entailment is actually implemented, and hence whether there's something else that should be the basis of future work...]
20:45:51 [GuusS]
GuusS has joined #swarch
20:47:00 [sandro]
TimBL: Ben's saying people should use owl:Class unless they really mean rdfs:Class.
20:51:54 [sandro]
Controversy over how bad it is for systems to silently assume rdfs:Class means owl:Class.
20:52:19 [sandro]
Peter: you won't be able to see the difference if it's owl-DL.
20:57:11 [DanC]
ISWC in Oct
20:57:16 [DanC]
WWW2003 in May
20:57:24 [DanC]
DAML meeting in April
20:57:38 [DanC]
^possible places to meet
21:00:21 [danbri]
adjourned======================================
21:00:52 [danbri]
ACTION: ericm to send out slides to RDF IG lists, soonish.
21:11:22 [danbri]
danbri has left #swarch
21:19:35 [Nobu]
Nobu has joined #swarch
21:29:20 [Nobu]
Nobu has joined #swarch
21:53:18 [libby]
libby has joined #swarch
How to identify duplicate files with Python
Published on September 28th, 2020 in Natural Language Processing
Suppose you are working on an NLP project. Your input data are probably files like PDF, JPG, XML, TXT or similar and there are a lot of them. It is not unusual that in large data sets some documents with different names have exactly the same content, i.e. they are duplicates. There can be various reasons for this. Probably the most common one is improper storage and archiving of the documents.
Regardless of the cause, it is important to find the duplicates and remove them from the data set before you start labeling the documents.
In this blog post I will briefly demonstrate how the contents of different files can be compared using the Python module filecmp. After the duplicates have been identified, I will show how they can be deleted automatically.
Example documents
For the purpose of this presentation, let us consider a simple data set containing six documents.
Here is a figure showing the documents:
doc1.pdf
doc2.jpg
doc3.pdf
doc4.pdf
doc5.pdf
doc6.jpg
We see that the documents "doc1.pdf", "doc4.pdf" and "doc5.pdf" have exactly the same content. The same applies to "doc2.jpg" and "doc6.jpg". The goal is therefore to identify and remove the duplicates "doc4.pdf", "doc5.pdf" and "doc6.jpg".
Finding the duplicates
The module filecmp offers a very nice function for this purpose: filecmp.cmp(f1, f2, shallow=True). It compares the files named f1 and f2 and returns True if they seem to be identical; otherwise it returns False. The shallow parameter allows the user to specify whether the comparison should be based on the os.stat() signatures of the files or rather on their contents. Comparison of the contents is ensured by the setting shallow=False.
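As a quick, self-contained illustration of this behaviour (the file names below are invented for the demo), two files with identical bytes compare equal under shallow=False, while a file with different content does not:

```python
import filecmp
import os
import tempfile

# build three small example files: two identical, one different
tmp = tempfile.mkdtemp()
for name, text in [('a.txt', 'hello'), ('b.txt', 'hello'), ('c.txt', 'world')]:
    with open(os.path.join(tmp, name), 'w') as f:
        f.write(text)

# shallow=False forces a byte-by-byte comparison of the contents
same = filecmp.cmp(os.path.join(tmp, 'a.txt'), os.path.join(tmp, 'b.txt'), shallow=False)
diff = filecmp.cmp(os.path.join(tmp, 'a.txt'), os.path.join(tmp, 'c.txt'), shallow=False)
print(same, diff)  # True False
```

Note that 'a.txt' and 'c.txt' have the same size, so a shallow os.stat() comparison could not reliably tell them apart; only the content comparison does.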
An exemplary Python code for finding the duplicates could therefore look like this:
import os
from pathlib import Path
from filecmp import cmp

# list all documents
DATA_DIR = Path('/path/to/your/data/directory')
files = sorted(os.listdir(DATA_DIR))

# list containing the classes of documents with the same content
duplicates = []

# comparison of the documents
for file in files:
    is_duplicate = False
    for class_ in duplicates:
        is_duplicate = cmp(
            DATA_DIR / file,
            DATA_DIR / class_[0],
            shallow=False
        )
        if is_duplicate:
            class_.append(file)
            break
    if not is_duplicate:
        duplicates.append([file])

# show results
duplicates
Output:
[['doc1.pdf', 'doc4.pdf', 'doc5.pdf'], ['doc2.jpg', 'doc6.jpg'], ['doc3.pdf']]
The above output is a list which contains the identified "equivalence classes", i.e. lists of documents with the same content. Note that it's enough to compare a given document with only one representative from each class, e.g. the first one, class_[0].
We learn, for example, that the document "doc1.pdf" has the same content as the documents "doc4.pdf" and "doc5.pdf". Furthermore, the document "doc2.jpg" has the same content as "doc6.jpg" and the document "doc3.pdf" has no duplicates. All this corresponds to what we have observed in the image above.
Removing duplicates
The next step would be to remove the duplicates "doc4.pdf", "doc5.pdf" and "doc6.jpg". An exemplary Python code that accomplishes this task could look like this:
# remove the duplicates
for class_ in duplicates:
    for file in class_[1:]:
        os.remove(DATA_DIR / file)
There are certainly other ways to write the code or generally compare files. In this article I simply demonstrated one of the many possibilities.
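One such alternative, sketched here under the assumption of the same kind of flat data directory (the function name is mine, not from the article), is to group files by a hash of their contents instead of doing pairwise comparisons. Each file is then read only once, which scales better for large data sets:

```python
import hashlib
from collections import defaultdict
from pathlib import Path


def find_duplicate_classes(data_dir):
    """Group the files in data_dir into classes with identical content,
    using the SHA-256 digest of each file's bytes as the grouping key."""
    by_hash = defaultdict(list)
    for path in sorted(Path(data_dir).iterdir()):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            by_hash[digest].append(path.name)
    # same "equivalence class" shape as the filecmp-based result above
    return list(by_hash.values())
```

A hash collision between different files is astronomically unlikely with SHA-256, so this is safe in practice; the filecmp approach avoids even that theoretical caveat by comparing the bytes directly.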
I would also like to encourage you to take a closer look at the filecmp module. In addition to the filecmp.cmp() function, it also offers other methods such as filecmp.cmpfiles(), which can be used to compare files in two directories and may therefore suit your needs even better.
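A minimal sketch of filecmp.cmpfiles() (the directories and file names here are made up for the example): given two directories and a list of names to check, it returns three lists — the names that match, the names that differ, and the names that could not be compared:

```python
import filecmp
import os
import tempfile

# set up two small example directories
dir1, dir2 = tempfile.mkdtemp(), tempfile.mkdtemp()
with open(os.path.join(dir1, 'same.txt'), 'w') as f:
    f.write('identical')
with open(os.path.join(dir2, 'same.txt'), 'w') as f:
    f.write('identical')
with open(os.path.join(dir1, 'diff.txt'), 'w') as f:
    f.write('one')
with open(os.path.join(dir2, 'diff.txt'), 'w') as f:
    f.write('two')

# 'missing.txt' exists in neither directory, so it lands in the errors list
match, mismatch, errors = filecmp.cmpfiles(
    dir1, dir2, ['same.txt', 'diff.txt', 'missing.txt'], shallow=False
)
print(match, mismatch, errors)
```

As with filecmp.cmp(), passing shallow=False makes the verdict depend on the file contents rather than the os.stat() signatures.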
Talk:WikiProject U.S. Bicycle Route System
Network tag
Why don't you just use the ncn/rcn/lcn tags for these routes like in the rest of the world? Then the routes can be displayed with the current maps that show cycle routes and you don't need to setup your own renderers. Perhaps create a new tag if it's needed to have a usbrs tag. --Eimai 12:35, 11 March 2009 (UTC)
- I agree, and I'm in the US. operator=AASHTO seems more appropriate to distinguish the US bike system from any other national cycle network. --Hawke 16:34, 11 March 2009 (UTC)
Now we do (use ncn routes for USBRS). Also, operator=AASHTO isn't exactly correct as AASHTO simply acts as a coordinating body to "coalesce" USBRs published by each state into the national USBRS network. If you MUST enter an operator, it would either be the state DOT or county, city or private owners of the road/cycleway segment. This can be difficult to determine for any given route segment, though the state DOT should have all of these data available as/when they make the application to AASHTO for the USBR. --Stevea (talk) 18:25, 7 August 2014 (UTC)
Update: OSM now uses cycle_network=US:US as a tag on the route relation to identify USBRS routes. We use cycle_network=US to identify the quasi-national routes as distinguished from USBRs. --Stevea (talk) 00:18, 9 August 2016 (UTC)
Numbered network vs unnumbered
What is the relationship between the numbered and unnumbered networks? The Adventure Cycling Route Network appears different to the USBRS? I take it that the only difference USBRS is intended to be a signposted network, whereas the Adventure Cycling Association network is unsigned? Is there even an OpenStreetMap project for mapping the ACA network? ChrisB 21:32, 29 September 2011 (BST)
Numbered routes in the national network are USBRs. Named routes in the national network (there are currently only two, East Coast Greenway and Mississippi River Trail) are known as "quasi-national" routes, as they are not strictly part of the USBRS, but are so "national in scope" that they rise to become members of the national level of the hierarchy (as named, not numbered routes). The next level down, "regional/state" contains statewide (DOT) networks, as well as larger-scope regional routes like ACA routes (e.g. Transamerica Trail, Underground Railroad Bicycle Route...). To be clear, regional ACA routes are NOT part of the USBRS. Also, not all ACA routes (known as "private") are in OSM, as they are proprietary/commercial property of ACA. When ACA routes DO get entered into OSM, and traverse multiple states, it seems most useful to enter them state-at-a-time (network=rcn) and bundle these into a named super-relation. This is because it can happen that an ACA route might eventually get "promoted" to a USBR, through a lengthy public process involving first the state where this takes place, then as that state's DOT applies to AASHTO for the route to become a USBR. In fact, exactly this process is taking place with ACA's Transamerica Trail (TA) as it is slowly-but-surely becoming superseded by the super-relation for USBR 76 (generally westwardly, now as far west as Kansas). --Stevea (talk) 18:25, 7 August 2014 (UTC)
Update: there are now THREE quasi-national routes. In addition to ECG and MRT there is now also WNEG. --Stevea (talk) 23:57, 8 August 2016 (UTC)
Update: there are now FOUR quasi-national routes; we have added ISL. --Stevea (talk) 11:52, 12 November 2016 (UTC)
Link to GPX without copyright
There was a link to a GPX file without any copyright information, which appears to be hosted on Google Drive. I have removed it.
Paragraphs
Some of the entries in the proposals table contain multiple paragraphs separated by line breaks (
<br />). It's hard to tell where a paragraph begins and ends. Let's skip a line to create a new paragraph. – Minh Nguyễn (talk, contribs) 10:24, 9 June 2014 (UTC)
Adventure Cycling Association (ACA) routes getting promoted from rcn to ncn or icn
Continuing from "Numbered network vs unnumbered" above, some ACA routes which are or could be entered into OSM might get promoted from rcn to ncn, or even icn if they cross an international border. To wit, six ACA routes (should they be entered into OSM) have been identified as candidates for such promotions: Underground Railroad (UGRR), Northern Tier (NT), Pacific Coast (PCBR), Transamerica Trail (TA), Atlantic Coast (AC), Southern Tier (ST). The first three might be promoted to icn, the last three to ncn (as quasi-national, quasi-private). However, these are not fully entered, nor are they guaranteed to be up-to-date AND, these are proprietary route data belonging to ACA under copyright. This makes determining what to do difficult, as ACA has stated (via Kerry Irons) that ACA management would prefer that these data not be entered into OSM, but "they realize that this is going to happen." A major concern of ACA is that OLD and OBSOLETE data will remain in OSM after entry of ACA's routes (in violation of our ODBL), but will not be corrected as ACA updates them. -- Stevea (talk) 03:21 24 January 2016 (UTC)
Additional note: it appears that GDMBR, TA west of USBR 76 at the Kansas/Colorado border (from Colorado to Oregon) and UGRR (partially) are the only ACA routes entered into OSM. The first two appear to be completely entered. GDMBR is not network=ncn nor network=rcn, which require a route=bicycle tag, rather GDMBR is tagged route=mtb, which does not get a network=* tag. TA is essentially superseded by USBR 76 east of Colorado, and is entered as statewide network=rcn relations in Colorado, Wyoming, Idaho and Oregon. UGRR is substantially, though only partially entered (perhaps 75%?) as statewide network=rcn relations. This is only 3 (2.75?) out of 24 ACA routes. Any or all of these ACA route data now entered into OSM might be obsolete, having been updated by ACA. Finally, it should be noted that while both the state of California and ACA "offer" slightly different routes called Pacific Coast Bicycle Route, it is intended that OSM represent the state of California version, not the ACA version, as the California version is signed on the ground whereas no ACA routes are signed. -- Stevea (talk) 23:50 16 February 2016 (UTC) and 03:48 10 August 2016 (UTC)
Recently, the Colorado-to-Oregon portion of (not-necessarily-authorized-to-enter-into-OSM by ACA) Transamerica Trail (TA) was "artificially promoted" from network=rcn to network=ncn. This was changed back to network=rcn for several reasons:
• While a good argument can be made that TA is a "national route," it truly is being subsumed by USBR 76 and so in reality 76 is the "more" national route
• OCM's rendering of purple/red boundary vividly shows where the boundaries between regional and national route exists
• Until further AASHTO approvals which the remaining four states' DOT might apply for (but haven't yet), such a distinction is important to make, respect and note with something like a level / color change
• OSM identified in 2014 the overloading of network=* hierarchies for route=bicycles in the USA needed to do this to accommodate this relationship between ACA routes and USBRs (in this particular way, following these particular assignments of routes into hierarchy levels).
These hierarchy guidelines have been further refined with the modest and controlled growth of quasi-national routes, accommodating growth in the USBRS and documenting ACA routes as expressed in OSM. This is especially true with clarification via cycle_network=US, cycle_network=US:US and cycle_network=US:ACA tags.
OSM-US, ACA (management) and AASHTO have a respectful relationship built with lots of give and take and clear understanding of rules (such as copyright as well as the more informal ones noted here that we crafted in 2014). A bold assertion by a single OSM volunteer mountain bike enthusiast to "artificially promote" from regional to national both confuses our defined semantics for wider map consumers and disrespects the understandings OSM has in place to minimize, de-emphasize and correct data which should not be in OSM (unless with explicit permission, which, for TA and indeed all ACA routes, OSM does not have). Let's continue with a non-disruptive consensus of "keeping it to 2.something, and falling" ACA routes with the understandings in place which have worked for years: keep ACA routes regional (and shrinking, unless explicit permission to enter them is granted), document the growth in the USBRS as we have and do and allow sensible growth into US quasi-national namespace, well-documented with cycle_network=* tagging. If/as ACA were to grant OSM explicit permission to enter ACA route data, OSM has identified sensible tagging strategies for them, but OSM does not have that permission today, so we must respect today's hierarchy guidelines as they now work by informed consensus. I continue to welcome Discussion on these topics here. -- Stevea 03:48 10 August 2016 (UTC)
Daily Meditations and Group Reflections 29th Sunday in Ordinary Time – Christ the King (Year A) 19 October to 29 November 2008
£1
Acknowledgements
Appointed by God
Nihil Obstat: Right Reverend Alan Hopes VG, Auxiliary Bishop in Westminster
Imprimatur: HE Cardinal Cormac Murphy-O’Connor, Archbishop of Westminster
Date: Dedication of the Basilicas of SS Peter and Paul, 18.11.2008.
Writing Group: Dr Mark Nash, Fr Michael O’Boy, Fr Richard Parsons, Mrs Margaret Wickware.
With thanks to Kristina Cooper, Clare Ward, Michael Parry, Andrew Brookes and Stephen Fox for permission to feature their faith stories, which can be found in full on the CASE website, and also to Sr Amadeus Bulger CJ for contributing her testimony.
The Westminster Diocesan Agency for Evangelisation is grateful to Darton, Longman & Todd for permission to use Scripture texts from the Jerusalem Bible © 1966 Darton, Longman & Todd Ltd and Doubleday and Company Ltd. Excerpts from The Divine Office © 1974, hierarchies of Australia, England and Wales, Ireland. All rights reserved. Excerpts from the English Translation of the Roman Missal © 1973, International Committee on English in the Liturgy, Inc. All rights reserved.
Produced by The Agency for Evangelisation, Vaughan House, 46 Francis Street, London, SW1P 1QN. Tel: 020 7798 9152 or email: evangelisation@rcdow.org.uk
Published by WRCDT, copyright © 2008, Diocese of Westminster, Archbishop’s House, Ambrosden Avenue, London SW1P 1QJ
Designed by Julian Game
Front Cover: Mosaic of St. Paul, c.799, Vatican Museums, Vatican City (originally decorated the state banquet hall of the Papal Lateran Palace of the Middle Ages)
Back Cover: Copy of St. Paul’s letter to the Galatians, c.180-200, © The Trustees of the Chester Beatty Library, Dublin
Maps: Paul’s Missionary Journeys, © 2008, Mark Nash
Print and distribution.
Foreword
Dear Brothers and Sisters in Christ,
Two thousand years ago, St. Paul, who was appointed by God to be the ‘Apostle to the Gentiles’, was born in Tarsus, in present-day Turkey. To mark the bimillennium of his birth Pope Benedict XVI called for a Holy Year from June 2008 – June 2009. From the moment of his conversion, Paul fearlessly proclaimed the Good News. He sought to take this News into the Gentile world and to build up the Body of Christ beyond the confines of the Jewish community from which he came. For this he journeyed, he suffered, he was martyred. One suspects that were you and I to meet him today, we would encounter an unshakable, almost infectious faith. His was a faith which was permeated with hope: which trusted in the faithfulness of God. He looked to the transforming power of the Holy Spirit, and knew the abiding presence of the Risen Lord who opened the way to the Father and eternal life. Today, faced with the challenges that agitate the Catholic conscience in our society, the opportunity to reflect on the life and ministry of St. Paul seems especially appropriate. If we are to build up the Body of Christ, if others are to encounter He who is ‘wisdom, virtue, holiness and freedom’ (1 Corinthians 1:30), our faith and witness must be as convincing as St. Paul’s. My dear people, I ask you to pray for me and for my fellow bishops that we can lead well as you grow in faith and confidence in the Risen Lord. Pray for your priests that they lead you to deeper understanding of God the Father’s gift, Jesus Christ. Pray too for yourselves, that you may truly hear Christ’s voice and act upon it, using your gifts to help build his Church and to serve the world in which we live (1 Corinthians 12:7). May God the Father and the Lord Jesus Christ grant peace, love and faith to all of you. May grace and eternal life be with all who love our Lord Jesus Christ (Ephesians 6:23-24). With my blessing and prayers,
Cardinal Cormac Murphy-O’Connor
Archbishop of Westminster
About this book
Appointed by God is an opportunity to reflect on our vocation as baptised Christians in the light of the writings of St. Paul, apostle, martyr and evangelist. Running over six weeks, Appointed by God includes six sessions for small groups or communities, as well as a series of daily meditations which you may wish to use on your own.
Weekly Themes
Each week of Appointed by God looks at a different aspect of St. Paul’s life and writings. Week One looks at Paul the Person; Week Two considers Paul and suffering; Week Three explores St. Paul and the ideas of mystery and faith in Christ Jesus; Week Four explores community; Week Five looks at Paul and righteousness, while Week Six encourages an exploration of evangelisation through the lens of St. Paul’s missionary endeavours.
Group Reflections
These begin with an opening prayer drawing on the psalms and a few moments of silence. The opening prayer is followed by a Scripture passage and a reflection. Following each of these there is an opportunity for the group to share their thoughts and to explore the implications for Christian living using the questions provided. The session is concluded with a series of petitions and a closing prayer.
Daily Meditations
The daily meditations for Sundays provide a background to that Sunday’s second reading. Mondays, Tuesdays, Thursdays and Fridays draw on the riches to be found in the writings attributed to St. Paul, while Wednesday will contain a testimony related to the weekly theme. To help our preparation for the Sunday Mass the Saturday meditation will introduce the Gospel passage for the next day.
Church documents & Texts
A number of Church documents are referred to in the course of this booklet. You may wish to explore the following further: Lumen Gentium, the Second Vatican Council’s Dogmatic Constitution on the Church (November 1964); Apostolicam Actuositatem, the Second Vatican Council’s Decree on the Apostolate of the Laity (November 1965); Pope John XXIII’s encyclical Princeps Pastorum – on the Missions, Native Clergy and Lay Participation (November 1959); Pope John Paul II’s documents Salvifici Doloris – on the Meaning of Human Suffering (February 1984), and Christifideles Laici – an exhortation on the vocation and mission of the laity (December 1988); Pope Benedict XVI’s encyclical on hope, Spe Salvi (November 2007); and the Catechism of the Catholic Church.
When you pray to God in Psalms and hymns, think over in your hearts the words that come from your lips. From the Rule of St. Augustine (c. 400)
Paul’s life
Initially St. Paul had no desire to be involved in the early Christian movement. On the contrary, St. Paul persecuted those Jews who believed that Jesus was God’s promised Messiah (Galatians 1:13; 1 Corinthians 15:9; Acts 22:4). Paul’s change of heart came about as a result of divine intervention, a vision of the Risen Jesus and a commission to preach to the Gentiles (Galatians 1:16; Acts 9:1-19; 22:1-16; 26:1-13). Paul’s life can be divided into five periods.

Period 1: Early Life
c. AD 8: born in Tarsus in Cilicia [modern-day Turkey] (Acts 22:3).
c. AD 20: educated as a strict Pharisee in Jerusalem (Philippians 3:5).
AD 30 – 34: persecuted the new Jewish Messianic movement (Acts 8:3).
AD 34: underwent a conversion experience, was baptised in Damascus, called into Christian ministry and given a commission (Galatians 1:15-16).

Period 2: Preparation for ministry
AD 34 – 37: preached in Arabia and Damascus (Galatians 1:17), returned to Jerusalem.
AD 37 – 48: preached in Syria and Cilicia (Galatians 1:21), settled in Antioch, returned to Jerusalem for the apostolic conference, a meeting with the apostles who had seen Jesus during his earthly life (Galatians 2:1-10).

Period 3: Itinerant Evangelistic ministry
AD 48 – 49: preached in Cyprus and southern Galatia = 1st journey (see map on inside back cover).
AD 49 – 52: from Antioch revisited the Galatian churches, preached in Asia, then in Macedonia and Achaia [modern-day Greece] (1 Thessalonians 2:2; 3:1-5), settled in Corinth for 18 months, brought before the proconsul Gallio, returned to Jerusalem (Acts 18:22) = 2nd journey (see map on inside back cover).
AD 53 – 56: revisited the churches, settled in Ephesus for 2 years and 3 months, returned to Macedonia and Asia.

Period 4: To Jerusalem and then to Rome
AD 57: returned to Jerusalem with the collection (2 Corinthians 8-9; Romans 15:26; Acts 21:6) = 3rd journey (see map on inside back cover).
AD 57 – 59: arrested in Jerusalem, appealed to Caesar, journeyed to Rome as a prisoner, shipwrecked on Malta.

Period 5: To Rome, Spain? Rome, martyrdom
AD 60 – 62: Paul seems to have been under house arrest in Rome (Acts 28:16). It is uncertain if he preached in Spain as was his intention (Romans 15:24).
AD 64: Christians blamed by the Emperor Nero for starting the great fire, and many were persecuted (recorded by the Roman historian Tacitus, Annals, 15:44.2-3). St. Peter was martyred.
c. AD 67: St. Paul martyred (1 Clement 5:5-7) by beheading (Eusebius, The History of the Church, 2:25.5) as he was a Roman citizen (Acts 22:25-29), as opposed to Peter, who was crucified.
Week One - Group Session (Paul the Person) Opening Prayer Leader:
I was silent and still; I held my peace to no avail; my distress grew worse, my heart became hot within me.
Group:
While I mused, the fire burned; then I spoke with my tongue: ‘Lord, let me know my end, and what is the measure of my days; let me know how fleeting my life is.
Leader:
‘And now, O Lord, what do I wait for? My hope is in you. Deliver me from all my transgressions. Do not make me the scorn of the fool.
Group:
‘Hear my prayer, O Lord, and give ear to my cry; do not hold your peace at my tears.
From Psalm 39
Explore the Scriptures
Romans 15:14-21
It is not because I have any doubts about you, my brothers; on the contrary…
Following a short period of silence you may wish to share an image, a thought, a phrase that has struck you.
Reflection
St. Paul leaves us with great riches both in terms of the letters he wrote and his influence on the members of the Early Church. This influence is strongly felt even today, 2000 years after his birth. In his letters, Paul tells us that he was ‘not a polished speechmaker’ (2 Corinthians 11:6). If so, he shared with Moses and Jeremiah a lack of oratory skill. Moreover, his opponents testified ‘his bodily presence is weak, and his speech of no account’ (2 Corinthians 10:10). His confidence and his success came not from himself but from ‘the grace of God’ that was within him, a ‘grace that has not been fruitless’ as he risked ridicule and eventually death (1 Corinthians 15:10, cf. Ephesians 3:7). From this we can draw a particularly important lesson for every Christian. The Church’s action is effective only to the extent to which those who belong to her are open to the transforming and inspiring power of God’s freely given grace. What is so special about St. Paul, who has been seen as chauvinistic (cf. 1 Corinthians 14) and who from the very beginning has been considered difficult to understand (2 Peter 3:15-16)? Why should we take the trouble to get to know this ‘difficult’ saint? The reasons are many. Because St. Paul was more like us than any of the other apostles, in that he had not seen Christ in the flesh. Because Paul wrote his letters before the gospels were written, providing us with the earliest insight into the Early Church. Because Paul was first and foremost a pastor and a missionary, a man with very real, practical concerns for the people of God. Paul was an Israelite (Romans 11:1), a descendant of Abraham (2 Corinthians 11:22) who spoke with conviction and from the heart, firmly believing in the power of God to save. Finally, because the ‘new life of the Spirit’ (Romans 7:6) preached by St. Paul, this new ‘life with Christ’ (Galatians 2:20), is a constant struggle. Yet, if we open ourselves to God’s grace we can be well placed to follow the example of this great ‘servant of Jesus Christ’ (Romans 1:1). We ‘are new born, and, like babies, [we] should be hungry for nothing but milk – the spiritual honesty which will help [us] grow up to salvation’ (1 Peter 2:2).
At the important stages of life, at moments of transition and change, do you look to Scripture for inspiration?
In times of trouble do you look to Scripture or elsewhere for guidance?
What actions and path do you hear God calling you towards?
Today or over the coming week take a few moments to consider St. Paul’s second letter to Timothy (2 Timothy 3:14-17), where he talks about the preparation necessary for leading others to a knowledge of Christ Jesus. Ephesians 6:10-20, too, is a wonderfully rich passage regarding the proclamation of the Good News.
Sunday of Week One
Read the Scripture from the 29th Sunday in Ordinary Time (Year A) – 1 Thessalonians 1:1-5
From Paul, Silvanus and Timothy, to the Church in Thessalonica which is in God the Father and the Lord Jesus Christ; wishing you grace and peace from God the Father and the Lord Jesus Christ. We always mention you in our prayers and thank God for you all, and constantly remember before God our Father how you have shown your faith in action, worked for love and persevered through hope, in our Lord Jesus Christ. We know, sisters and brothers, that God loves you and that you have been chosen, because when we brought the Good News to you, it came to you not only as words, but as power and as the Holy Spirit and as utter conviction.
Background
Co-authored by Paul, Silvanus and Timothy, the Church’s original evangelists, this letter to the church in Thessalonica, probably written from Corinth around AD 50, is the oldest piece of Christian literature. It is composed in the letter form which was common in the ancient world. It contains an opening greeting (1:1), with a thanksgiving (1:2-5:22) in the main body of the text, and a closing farewell containing a request for prayer (5:23-28). Its purpose is to deal with the theological and pastoral issues confronting the church. Specifically, this letter is concerned with the parousia or the return of Christ (4:13-5:11) and how the community should regard itself as living the Christian life both amongst themselves (4:9-12) and in society (4:1-8). What separates this letter from ‘secular letters’ is the claim that the assembly derives its being from God, who is declared as ‘Father’, an ancient Hebrew title (e.g., 2 Samuel 7:14). What differentiates the Thessalonian community from Judaism, however, is the confession and worship of Jesus, who is acknowledged as the Messiah (Christ) who shares in the Lordship of God (see, 1 Corinthians 12:3; Philippians 2:11).
From this greeting all other Christian activity radiates, both prayer and the inner life (1:3) and the public activity of the practice of the faith (1:3). As a result of the foundation of the Thessalonian church a deep communion is established between the activity of the community and the work of the apostolic missionaries. Their Gospel ministry is not merely about human proclamation (see, 1 Corinthians 2:1-2) but concerns the divine work of the Holy Spirit in the midst of the community (1:5a).
Dear…
Monday of Week One
perfectly well instructed (Romans 15:14)
Having the money, time and the opportunity to study and reflect, St. Paul was well-versed in the Old Testament and the exact observance of the Jewish Law. He was taught by Gamaliel, a member of the Sanhedrin who was a ‘Pharisee… a doctor of the Law and well respected by the whole people’ (cf. Acts 5:34 and 22:3). Paul was ‘perfectly well instructed’. Many of us, however, have not had the same opportunities as Paul. How are we to hand on our faith if we are unable to express what is in our hearts? In 1980 the Bishops of England and Wales produced a document called The Easter People which talked, in part, of everyone’s need for continued formation and education in faith. ‘Our educational heritage is indeed rich,’ the document said, reminding us that at the centre of our learning and faith ‘is Jesus Christ, teacher and supreme catechist. It is he who will speak to adult and child alike on their pilgrim journey’ (The Easter People, 155). Education in faith is a life-long process – not ending at confirmation – which needs to remain faithful to the gospel message and be sensitive to contemporary needs and aspirations; it is also a community process where everyone is to be involved.
From the Catechism
133. The Church ‘forcefully and specifically exhorts all the Christian faithful... to learn “the surpassing knowledge of Jesus Christ”, by frequent reading of the divine Scriptures (Philippians 3:8). “Ignorance of the Scriptures is ignorance of Christ” (St. Jerome)’.
In order to enter more deeply into the life of prayer and to come to grips with St. Paul’s challenge to pray unceasingly (1 Thessalonians 5:17), the Orthodox Tradition offers the Jesus Prayer, which is sometimes called the prayer of the heart. Say it often, both out loud and in your heart.
Lord Jesus Christ, Son of God, have mercy on me, a sinner.
The Jesus Prayer, see Luke 17:13; 18:13; 18:38
Tuesday of Week One
appointed me as a priest of Jesus Christ (Romans 15:16)
Here Paul talks of his appointment to serve Jesus Christ. His is a special position, an evangelistic and priestly ministry (Romans 15:15-16). However, Paul was very conscious of the difference between himself and the Twelve and of the ‘pride of place’ they were to be given. Often we can feel that ministry is the special preserve of the ‘holy’, but St. Paul recognised that throughout his ministry he won, for Christ, the allegiance of the Gentiles by the power of signs and wonders, by the power of the Holy Spirit. We, like Paul, are given the mandate to witness to Christ, to offer our lives to God. Though we may feel unworthy (cf. 1 Corinthians 15:9), we are called to accept Christ’s message and understand that it is a ‘living power among those who believe it’ (1 Thessalonians 2:13). From the faithful, God calls each of us to a definite service to the Church (Ephesians 4:11-13). We all share in the common priesthood bestowed at baptism and, irrespective of vocation, profession, ethnicity or sex, we are called to serve the needy, to spread God’s word, to do whatever we do for the glory of God… taking Paul as our model as he takes Christ (1 Corinthians 10:31 and 11:1).
From Apostolicam Actuositatem (On the apostolate of the laity)
3. From the fact of their union with Christ the head flows lay people’s consecration as a royal priesthood and a holy nation (cf. 1 Peter 2:4-10); it is in order that they may in all their actions offer spiritual sacrifices and bear witness to Christ all the world over.
Wednesday of Week One
I’ve always been a churchgoer, without even the usual teenage blip of rebelling. However, it was more a sign of my dutiful spirit than any real living faith. Just as I ate my greens, did my homework and all the other things that I didn’t particularly like doing, I went to church. But my Christian faith was not at the heart of my life. My focus was on having a meaningful job, travelling, meeting interesting people and generally having fun and adventure. I fitted God in round the edges. But because I did go to church and generally led a moral life, I felt I was doing all that was expected of me. It didn’t occur to me that there was anything more – that I had actually missed the whole point: that the Christian life is not about spiritual practices and duties but about a love relationship with Jesus Christ who, if you give him permission, floods your whole existence and gives you a totally new perspective on life. God led me to a place where I had to face myself and my need of him to make sense and meaning of my life. At a charismatic prayer group, I heard Catholics talk about God in a way I had never heard before. I had thought a personal relationship with Jesus was reserved for Mother Teresa types, not ordinary mortals, but the faith of those present made me question my own. I realised I could justify and defend myself or I could admit the truth – that I was empty and hollow inside. For all my outward practices I didn’t know God at all. I was faced with a choice. Did I want to run my own life as I had been doing, or was I prepared to hand it over to God and allow his Holy Spirit to direct me instead?
Ultimately, this is what being Christian means – trying to live a life guided by God’s Holy Spirit, instead of by human desires, fears and needs. As it says in the Penny Catechism, ‘God made us to know him, love him and serve him in this world and to be happy with him forever in the next.’
Kristina Cooper (each of the Wednesday testimonies is taken from the CASE website)
O God, teach me to breathe deeply in faith. Amen.
Søren Kierkegaard (1813-1855)
Thursday of Week One
utmost of my capacity (Romans 15:19)
What can we accomplish? What are our limits? Do we need degrees in theology to pass on our faith? Do we think so little of ourselves and our ‘capacity’ that we fail to act or speak out in groups, or as parents leave the faith education of our children to others? St. Paul leaves us in no doubt where his great giftedness lies. When he came before the Corinthians he came ‘in great fear and trembling’, relying not on his own power but on the ‘power of the Spirit… the power of God’ (1 Corinthians 2:3-5). This same Spirit has been promised to us (John 7:37-39; John 14:16 and Galatians 3:14). Through baptism, and strengthened in confirmation, the gifts of the Holy Spirit have been bestowed on us (1 Corinthians 12:4-11). We pray, therefore, that we make the right choices, ‘choose the right course’ (2 Timothy 4:5), and use these gifts for the service of others and the Church. When you are next able, read 1 Corinthians 14 and consider how well you are using the talents and gifts you have been given. You may wish to take the time to read and reflect on 2 Corinthians 4:7-15.
Lord, we celebrate the conversion of Saint Paul, your chosen vessel for carrying your name to the whole world. Help us to make our way towards you by following his footsteps, and by witnessing to your truth before the men and women of our day. Through Christ our Lord. Amen.
Concluding Prayer, Conversion of St. Paul the Apostle (25 January), Divine Office
Friday of Week One
Christ’s name has already been heard (Romans 15:20)
What would it be like to live in a country where the name of Christ had not yet been heard? More curious still, what would it be like to live in a country where Christ’s name and words had been heard but are reviled, considered dangerous and thrust underground? Present-day China and Saudi Arabia (in the past the Soviet Union) attempt to repress Christ’s message and persecute his followers. However, as the Holy Father reminds us, ‘the one who has hope lives differently; the one who hopes has been granted the gift of a new life’ (Spe Salvi, 2). After the collapse of communism, almost without exception, the countries of the former Soviet Union experienced a ‘spring-like awakening’ (Russian Orthodox Patriarch Alexei II). The power of the Word of God survived a time when there seemed to be ‘no hope, no God in the world’ (Ephesians 2:12). The spreading of Christ’s message to those who have not heard, and those who have heard and yet forgotten, will never be over. Those of us asleep in our faith must awake; relying on others to do God’s work is not enough. We are on a real and daily journey towards God. Is there any good reason to ignore the challenge to grow in the light of the gospel? Now, if you have time and a Bible to hand, read Ephesians 5:14, or read it when you get home.
Saturday of Week One
Matthew 22:34-40
St. Paul, when commenting on Jesus’ call to love our neighbour as ourselves, wrote ‘Love does no wrong to a neighbour; therefore, love is the fulfilling of the law’ (Romans 13:10). Is our love of neighbour evident in our words and actions? Does the love we have for God shine forth?
O Teacher, Jesus, be favourable to your children. Grant that we who follow your command may attain the likeness of your image and in accord with our strength find in you both a good God and a judge who is not severe. Amen.
Clement of Alexandria (c.150-c.211)
What does Luke say about St. Paul?
In addition to St. Paul’s own letters we learn about his life in the writings of St. Luke. The purpose of Luke’s writings (St. Luke’s Gospel and the Acts of the Apostles) is to demonstrate to Theophilus, Luke’s literary patron (Luke 1:3), the basis on which Christianity developed and to highlight the principal features of the Christian movement. Luke’s writings cover the period from the annunciation of the birth of John the Baptist (Luke 1:5ff) to the arrival of St. Paul as a prisoner in Rome (Acts 28:16). Paul enters Luke’s narrative at Acts 7:58 where, as a zealous Jew, he consents to the martyrdom of Stephen. From this point until his missionary activity in Cyprus, Luke uses Paul’s Jewish name, Saul, to remind the readers of his Jewish heritage. Luke uses the Roman name Paul when his Christian missionary activity begins (Acts 13:9). Paul’s ministry, for Luke, forms part of his overall scheme, the description of God’s activity in bringing the Jesus movement to birth in the power of the Spirit (cf. Luke 1:35 with Acts 2:4). For Paul, this ministry begins with his conversion, which arises as a result of the direct intervention of the glorified Lord Jesus (Acts 9:4-5). At his direction Saul is commissioned for ministry and baptised by Ananias (Acts 9:15-16, 18). In Acts 15:36 Luke presents Paul as the principal evangelist to the Greco-Roman world. Occasionally Luke becomes part of the narrative, accompanying Paul on his journeys (Acts 16:10-17; 20:5-8, 13-15; 21:1-8; 27:1-28:16), often fading into the background, especially when Paul gives important addresses (e.g. Acts 20:17-35). There are two particular difficulties in asking what Luke says about Paul. Firstly, Luke does not reveal that Paul wrote letters to churches, and secondly, what Luke tells us about Paul’s chronology does not marry precisely with what Paul says himself. In these areas it must be recognised, among other things, that Luke is interpreting Paul in c. AD 80, some 12 or so years after Paul’s martyrdom.
Week Two – Group Session (Paul and suffering) Opening Prayer Leader:
Some wandered in desert wastes, finding no way to an inhabited town; hungry and thirsty, their soul fainted within them.
Group:
Then they cried to the Lord in their trouble, and he delivered them from their distress; he led them by a straight way, until they reached an inhabited town.
All:
Let them thank the Lord for his steadfast love, for his wonderful works to humankind.
Leader:
Some sat in darkness and in gloom, prisoners in misery and in irons, for they had rebelled against the words of God, and spurned the counsel of the Most High.
Group:
Then they cried to the Lord in their trouble, and he saved them from their distress; he brought them out of darkness and gloom, and broke their bonds asunder.
All:
Let them thank the Lord for his steadfast love, for his wonderful works to humankind. From Psalm 107
All:
Glory be to him whose power, working in us, can do infinitely more than we can ask or imagine; glory to him from generation to generation in the Church and in Christ Jesus for ever and ever. Amen.
Ephesians 3:20-21
Let us listen carefully to the Word of the Lord, and attend to it with the ear of our hearts. Let us welcome it, and faithfully put it into practice.
St. Benedict of Nursia (c.480-c.547), adapted
Explore the Scriptures
2 Corinthians 11:16, 23-27 and 12:7-10
As I have said before, let no one take me for a fool; but if you must, then treat me as a fool and let me do a little boasting of my own. I have worked harder, I have been sent to prison more often, and whipped so many times more, often almost to death… danger in the open country, danger at sea and danger from so-called brothers. I have worked and laboured, often without sleep; I have been hungry and thirsty and often starving; I have been in the cold without clothes…
Following a short period of silence you may wish to share an image, a thought, a phrase that has struck you.
Reflection
What does St. Paul mean when he says ‘when I am weak that I am strong’? In a society that prizes self-sufficiency Paul’s claim may seem rather strange. We are led to believe that independence and strength are things to be embraced, not relinquished. For many, weakness is something to be avoided as it puts us at the mercy of others and can leave us exposed, vulnerable, needy. However, Paul, in his second letter to the Corinthians, speaks of suffering humiliation, disempowerment, bitterness and anger for the sake of Christ. Here Paul seems to embrace weakness, seeing it as an opportunity. ‘Offering our living bodies as a holy sacrifice’ (Romans 12:1), we are no longer to model ourselves on the way the world works; our behaviour must change. St. Paul says that such total dedication to God the Father is the only way to ‘discover the will of God’ for us, to ‘know what is the perfect thing to do’ (Romans 12:2). When Paul says ‘when I am weak that I am strong’ it is because he knows that God has power beyond comprehension. Where our knowledge, power or tolerance of suffering fall short, God is ready to make up for any deficiency. God never allows us to take more than we can stand (1 Corinthians 10:13). In weakness we learn, in a particular way, a lot about the relationship that lies at the heart of our wellbeing: our dependence on God. It may seem easy to turn to God in moments of weakness, but do we invite him into our everyday lives? From day to day, when things are ‘OK’ or ‘fine’ rather than in a moment of crisis, how conscious am I of my dependence on God?
How have I used weakness as a way of offering something up to God? What weaknesses have helped me to appreciate my dependence upon God? Am I truly candid in my prayer?
You may wish to ponder another of St. Paul’s letters (1 Corinthians 4:1-4, 8-17) in which St. Paul calls the Corinthians, and us, to our senses, criticising self-importance and pride. Here, he talks of the trials we can endure for our faith in the sure knowledge that we can call God, Our Father. You may also care to reflect on 2 Timothy 2:1-13 and Philemon 13.
Sunday of Week Two
Read the Scripture from the 30th Sunday in Ordinary Time (Year A) – 1 Thessalonians 1:5-10.
Background
St. Paul’s first letter to the Thessalonians contains several words and expressions that are commonplace to us, but which would have been new to the people of the day. Moreover, old expressions are used in a new way, transformed in meaning by Christ’s arrival (see for example Isaiah 52:7 and 61:1). The authors write of ‘the gospel’ (1:6) and the spreading of ‘the word of the Lord’ (1:8), phrases related to the missionary proclamation of Jesus Christ, who died, rose from the dead and whose return is awaited. For the Thessalonians, embracing the gospel, the Good News, meant turning (1:9) from idolatry, from cultic religion including Emperor worship, to serve the living God. In this monotheistic faith they received instruction both theologically and ethically (1:6). Having embraced the Christian gospel, opposition and suffering (1:7) had to be endured. Their hope of peace and salvation did not rest with the Emperor but with the eternal life offered through
sharing in Jesus’ Resurrection (1:10). In their imitation of the apostolic ministry as enshrined in the gospel of Jesus (1:6) the faith of the Thessalonians became an example of the Christian life to others in the region (1:8).
Behold me, my beloved Jesus, weighed down under the burden of my trials and sufferings, I cast myself at Your feet, that You may renew my strength and my courage, while I rest here in Your Presence. Amen.
Monday of Week Two
make my weaknesses my special boast (2 Corinthians 12:9)
St. Paul leaves us in little doubt that following Christ is a constant challenge, not necessarily of making small adjustments or keeping on an even keel, but a life-changing, world-shattering challenge. As it was 2000 years ago, living a life of faith can bring you into conflict with prevailing attitudes (Romans 12:2). Equally challenging is the commonplace assumption that a middle way must be found. Living the Gospel daily is a real challenge, one which we fall short of and may be tempted to run away from. ‘Weakness’ is essential to living a life of faith. By recognising weakness and vulnerability we can realise that all that burdens us is not ours alone to bear. Weakness is an opportunity to free ourselves from pride, to put ourselves in the hands of another – suffering and weakness stop us short. The ability to be ‘weak’ is a gift. While striving to be perfect (Philippians 3:12), ‘all’ is not ours to do. Our weakness, our submission to God, will help us glimpse the way to him.
Tuesday of Week Two
let no one take me for a fool (2 Corinthians 11:16)
No one likes to be taken for a fool, yet it is all too easy to mock those who passionately express what they feel or believe. We too may, for fear of being considered different or foolish, neglect to share our opinions, our gifts or our faith. By our words and actions we can exclude people, create outcasts. By our comments and deeds, by ridiculing others, we can be the cause of suffering; we can kill a reputation or a person’s confidence, offending their dignity. It is perfectly possible to be mocked for being different, yet this is precisely what Jesus calls us to do. During the Sermon on the Mount, Jesus preached a higher standard than the old; he took a commonly accepted rule and went further: ‘I have not come to abolish [the Law or the Prophets],’ he said, ‘but to complete them’ (Matthew 5:17). Jesus demands that we go one step further: no longer just ‘you must not kill’ but you must not get angry or call names; you must come to terms with those who offend you. Jesus and Paul, his faithful follower, call us to this higher standard; to go the extra mile in alleviating suffering and anguish and, where possible, avoiding the circumstances in which it may be caused.
God our Father, grant that I may rejoice always in hope, be patient in suffering and persevere in prayer. I ask this through your Son, Jesus Christ. Amen.
Based on Romans 12:12
You may wish to consider joining a small community in your parish for the remainder of this season. If you have not done so already, contact your parish priest or a small community leader to explore St. Paul with others in your parish.
Wednesday of Week Two
The prognosis was not good. I had cancer. I went into hospital for the major operation and felt confused, and asked the question ‘Why me?’ I asked for the Sacrament of the Anointing of the Sick. For me it was a profound experience. It was the Friend of All Friends, Jesus, saying ‘Peace’. During the first post-operative weeks complete recovery was uncertain. The ‘time’ that is given when a person is diagnosed with cancer is both a challenge and a comfort. Things are never the same again – though that is true of all of us, if only we realised it. There is a heightened awareness of the precious nature of time, and of life itself, when living with an intrusive illness. In the Church we speak of ‘the grace of the present moment’. Each moment of time is recognised as a gift, precious, to be treasured. Faith invited a deeper discovery of who and what I was in the sight of God, in the presence of God, beloved by God. I had seen with exceptional clarity that NOTHING, NO-THING can separate me from the love of Jesus Christ. This was, and is being continually confirmed, in the life-giving sacraments of the Church. Knowing that I had been forgiven and am being continually forgiven for having made such a mess of it all is almost breathtaking! I am witness to these things in a new way. I long for others to share this fundamental consolation. St. Paul’s letter to the Romans expresses it so beautifully: ‘For I am certain of this: neither death nor life… nor any created thing whatever will be able to come between us and the love of God, known to us in Christ Jesus Our Lord’ (Romans 8:38-39). So, the invitation: If all is going well in my life and I love God, that is excellent. If all is going badly in my life and I go on loving God, that is even better. That supposes an even greater love.
Sr Amadeus Bulger CJ
… Amen. Attributed to St. Richard of Chichester (1197-1253)
Thursday of Week Two thorn in the flesh (2 Corinthians 12:7) Suffering is everywhere: from famine or devastation caused by natural disaster to the pain following the loss of a loved one or the suffering caused by illness. While the alleviation of suffering is to be of prime concern, only God is able to eliminate sin, the constant source of suffering (Job 36). It is possible to see suffering in two ways. Firstly, in suffering there is ‘concealed a particular power that draws a person close to Christ, a special grace’ (Salvifici Doloris, or On the Meaning of Human Suffering, 26). Secondly, the suffering of others permits us to be Christ-like. Pope John Paul II wrote that ‘suffering is present in the world in order to release love, in order to give birth to works of love towards neighbour, in order to transform the whole of human civilization into a “civilization of love”’ (On the Meaning of Human Suffering, 30). In suffering there is a chance to unite ourselves with the Cross of Christ, to bear our burdens willingly and turn them to God’s advantage as witness to the world. From Spe Salvi (In Hope We Are Saved) 36. Like action, suffering is a part of our human existence. Suffering stems partly from our finitude (our limited time on earth)… Indeed, we must do all we can to overcome suffering, but to banish it from the world altogether is not in our power… We know that God exists, and hence that this power to ‘take
away the sin of the world’ (John 1:29) is present in the world. Through faith in the existence of this power, hope for the world’s healing has emerged in history. … Amen. St. Thomas More (1478-1535)
Friday of Week Two Constantly travelling (2 Corinthians 11:26) Life is made up of many journeys and is, in a very real sense, a journey itself. We are ‘constantly travelling’, following the Good Shepherd, confident that he will show us, indeed has shown us by his words and actions, the way to the Father (John 10:1-10; John 14:1-12). The memorial acclamation ‘Christ has died, Christ is risen, Christ will come again’ expresses this; it is a shout of joy as we remember what God has done for us, a reminder of Christ’s triumph over death. The sufferings and hardships we endure, he has endured. Death, which will greet each of us, greeted him. Jesus’ resurrection from the dead, we hope to share (Philippians 3:10-11).
Saturday of Week Two … ‘You, however, must not allow yourselves to be called Rabbi, since you have only one Master, and you are all brothers and sisters.’ Matthew 23:1-12 We are often called to ‘blow our own trumpet’ and to shout of our successes and skills. Where can we find the opportunity to demonstrate humility for Christ and before each other? O blessed Jesus, give me stillness of soul in you. Let your mighty calmness reign in me. Rule me, O King of Gentleness, King of Peace. St. John of the Cross (1542-1591)
Paul’s journeys It is easy to conceive of St. Paul’s journeys much as we would a holiday, thinking that the route, the stops, the accommodation and visits were all pre-planned. However, no one had been an evangelist in this way before and there was no model to follow. When Paul had been in Jerusalem, he started out on what we loosely call his ‘missionary journeys’. But they were not journeys in the normal sense of the word. He did not seem to have any particular plan for which community he would visit next; no fixed itinerary. He relied on the inspiration of the Spirit, which does appear to have been a clear and identifiable force in his life. He had no particular project in mind: how long his journey would last or how many communities would be included in his visits. For example, it seems he lived for some two years in Corinth, until he felt the need to move on. The ‘journeys’ took him around most of the Eastern Mediterranean, where he preached in Cyprus and southern Galatia (1st journey). From Antioch he revisited the Churches in Galatia and then went on to Asia, Macedonia and Achaia before returning to Jerusalem via Corinth (2nd journey). Much of Paul’s work involved revisiting churches he had previously founded, such as the one at Ephesus, where he stayed for over two years. When he could not visit, he wrote. Paul’s third ‘journey’ took him back to Jerusalem (3rd journey) with money for the ‘Christian’ community there, much as we have a collection on Good Friday for the Holy Places. His final ‘journey’, that to Rome, led eventually to his death. See the map of St. Paul’s journeys in the pull-out at the back of the booklet.
Week Three – Group Session (Paul, faith and mystery) Opening Prayer Leader:
O Lord, you have searched me and known me. You know when I sit down and when I rise up; you discern my thoughts from far away.
Group:
Where can I go from your spirit? Or where can I flee from your presence?
Leader:
If I ascend to heaven, you are there; If I take the wings of the morning and settle at the farthest limits of the sea, even there your hand shall lead me.
Group:
For it was you who formed my inward parts; you knit me together in my mother’s womb. I praise you, for I am fearfully and wonderfully made. Wonderful are your works; that I know very well.
Leader:
How weighty are your thoughts, O God! From Psalm 139
Explore the Scriptures Ephesians 3:8-21 … God’s wisdom really is, exactly according to the plan which he had had from all eternity in Christ Jesus our Lord. This is why we are bold enough to approach God in complete confidence, through our faith in him; so, I beg you, never lose confidence just because of the trials that I go through on your account: they are your glory. … glory be to him from generation to generation in the Church and in Christ Jesus for ever and ever. Amen. Following a short period of silence you may wish to share an image, a thought, a phrase that has struck you.
Reflection In the course of his early years, St. Paul learned his trade as a tentmaker. He went on to complete his studies to be a rabbi while living in Jerusalem. Clearly, he had ample opportunities to develop his intellectual gifts. Today, with the onset of the Internet and other technological advances, there is little that we cannot investigate and learn about to satisfy our own quest for knowledge. Given the extent of these resources, some may doubt that there are matters beyond human comprehension. Yet, to be a person of faith is to accept what cannot be fully explained by scientific discovery or human perception. In other words, God cannot be succinctly defined by any rationale, no matter how brilliant, or by any expression, no matter how pious. By not fully knowing or completely understanding, we are drawn into the mystery – to ponder again and again God’s plan for our salvation, which he has chosen to reveal to us through his Son, Christ Jesus. St. Paul did not see Christ’s earthly ministry first hand. Nor did he initially grasp Christ’s message. However, St.
Paul’s perspective of the world was challenged, on the road to Damascus, when he encountered the Risen Lord. Temporarily blinded by the brightness of the light, Paul accepted God’s loving invitation and began ‘to walk by faith not by sight’ (2 Corinthians 5:7). While our own experience may not have been quite as dramatic, the same life-changing invitation has been given to each of us. Washed clean in baptismal waters and wrapped in a white garment, we were transformed and made ‘one with Christ’ (Ephesians 2:11-22). This transformation, our having been ‘made righteous by faith with Christ’ (Galatians 4:5-7) at our own baptisms, may have happened quite some time ago; yet, each year at the Easter Vigil we have an opportunity to re-live this life-giving moment and resolve yet again to model Christ in our daily living – to walk by faith not by sight. St. Paul reminds us that the practice of our faith is nourished in the Church, the visible Body of Christ. Each time we gather to celebrate the Eucharist we enter into the great mystery that is our faith. In the
sacraments, visible signs help us to contemplate the invisible, our life ‘in Christ’. This life in Christ is a wondrous gift of the Father which ‘entitles each of us to say the prayer of the children of God: “Our Father”’ (Catechism of the Catholic Church, 1243). It is often easier to follow the wisdom of the world than to live ‘in Christ’. Where in your life are you being invited to ‘walk by faith not by sight’? What event or person has helped you to a greater understanding of the mystery of faith? You may also want to look at Colossians 2:9-10, in which Paul tells us where we can find fulfilment. The life, death and resurrection of Christ is a gift without parallel, one we often find hard to grasp but believe through faith. Reflect on Romans 5:6-15 and Galatians 3:23-29 in this light.
Sunday of Week Three Read the Scripture from All Saints’ Day … Background As we have learned, St. Paul frequently engaged in letter writing to convey the message of God to specific communities. Although the author of 1 John does not use the same letter-writing formula, he and St. Paul sent words of encouragement to challenge and guide the early Christians when they could not be with them. For the most part, the themes were similar in that they both wrote about the implications of becoming one of God’s children (1 John 3:1; Romans 5:5), the difficulties of living in imitation of Christ in a world that continually challenges these values (1 John 2:15; John 15:17-19; Romans 13:8), and the promise of the eternal life given at baptism and glimpsed here on earth (1 John 3:2; Philippians 3:14). Father, all-powerful and ever-living God, today we rejoice in the holy men and women of every time and place. May their prayers bring us your forgiveness and love. We ask this through Jesus Christ, your Son, who lives and reigns with you and the Holy Spirit, one God, for ever and ever. Amen. Opening Prayer, Solemnity of All Saints, Roman Missal (1974)
Monday of Week Three All the Saints (Ephesians 3:8) For centuries the Church has looked upon the saints as intercessors; there to intercede on our behalf or help us in our prayerful petitions to God, our Father. During his papacy, Pope John Paul II encouraged us to look to the saints as role models – examples of how we might live in the likeness of Christ. Having canonised more saints (over 450) than all of his predecessors combined, Pope John Paul II gave us a number of twentieth-century saints. These included young persons, married men and women as well as religious. Regardless of their particular vocation, the saints can be looked upon as inspiration for daily living. Whatever trials and difficulties they encountered during their earthly lives, the saints offered them to God – just as St. Paul instructed when he wrote to the Corinthians, ‘whatever you do, do it for the glory of God’ (1 Corinthians 10:31). Through our baptism, we are called to do the same – to imitate the saints. Most days, the Church honours a particular saint, for whom information abounds in books, newspapers and via the Internet. Perhaps the life of a saint could become a part of your walk-to-school or dinner-time conversation. God of all holiness, you gave your saints different gifts on earth but one reward in heaven. May the prayers of the saints be our constant encouragement and may we, with the faithful departed we commemorate today, share the joys and blessings of the life to come. We ask this through Jesus Christ, your Son, who lives and reigns with you and the Holy Spirit, one God, for ever and ever. Amen. Adapted from the Opening Prayers for All Saints and All Souls, Roman Missal (1974)
Tuesday of Week Three kneeling before the Father… (Ephesians 3:14) Gestures such as kneeling, standing and processing enable us to actively participate in the Eucharist. Most importantly, they are intended to help us set aside our earthly thoughts and endeavours so that we can turn our minds and hearts to God our Father. Similarly, on entering a church, we dip our fingers in holy water and make the Sign of the Cross as a reminder of our baptism and our life ‘in Christ’. By this gesture, we not only recall His death on the cross, we affirm our own dying with him. As St. Paul explained to the Galatians: ‘…’. May Christ’s words be in my mind, on my lips, and in my heart. Prayer to accompany the signing of the forehead, the lips and the breast before the gospel reading
Wednesday of Week Three I’m over 21 (by a number of years), was born into a Catholic family in Bedfordshire, and am the youngest of five children. Faith was an important part of our family life and, in addition to Sunday Mass attendance, bedtime prayers were part and parcel of my childhood routine. I suppose my faith didn’t really become my own until my late teenage years. Going to Church and saying my prayers were something I did, but I wouldn’t have said I had a deep personal relationship with Jesus; that came later. Going to University marked a real turning point in my faith. It was here that I joined a weekly parish prayer group and discovered a fellowship that I’d not found before. The people there were really sincere about their faith. They wanted to learn more about Jesus and the Bible, and that rubbed off on me in a powerful way. I began to read the Bible on my own. I began to pray more, spending quiet time before Jesus in the Blessed Sacrament. Gradually my Catholic faith and my relationship with Jesus developed. Life has not always been easy. My father died when I was six, others dear to me have since died, and members of my own family have been struck down with serious illnesses and tragedies. I can say that my faith has carried me through these times. Hand on heart, I can truly say that my faith brings me deep happiness and fulfilment which I can’t find in anything else. Clare Ward Lord, make me an instrument of your peace; where there is hatred, let me sow love; where there is injury, pardon; where there is doubt, faith; where there is despair, hope; where there is darkness, light; and where there is sadness, joy. Amen. The Peace Prayer attributed to St. Francis of Assisi (c.1181-1226)
Thursday of Week Three planted in love and built on love (Ephesians 3:17) By our baptism we are lovingly joined with Christ in his death and resurrection to become ‘a new creation’ (Galatians 6:15, 2 Corinthians 5:17). Baptism is clearly more than just a welcome into the Church or a reason for a social celebration. Here, a new life gifted by God is presented to him as the parents seek, for their child, an opportunity to share eternal life with Christ. From that moment, through the love of God, our earthly life’s goal is planted in our hearts. All new life needs nourishment. Parents, as primary educators of their children, are called to build a spiritual home – a place where their witness to God’s love will enable their children to grow in faith and love for one another. As a sign of this undertaking, to keep the ‘flame of faith alive’, parents and godparents are given a lighted candle. Father in heaven, the light of your revelation brought Paul the gift of faith in Jesus your Son. Through his prayers may we always give thanks for your life given us in Christ Jesus, and for having been enriched by him in all knowledge and love. We ask this through Christ our Lord. Amen. Alternative Opening Prayer, Vigil Mass of SS Peter and Paul, Roman Missal (adapted)
Friday of Week Three Glory to him… (Ephesians 3:20) The word ‘doxology’ may not be a familiar term, yet these prayers of praise are an integral part of the practice of the Catholic faith. Besides the Glory Be that is said at the end of each decade of the Rosary, there are other well-known doxologies, including the Gloria that is said during the celebration of Holy Mass. Using a Jewish tradition, St. Paul included several doxologies in his writings to early Christians to give praise to God our Father (cf. Romans 11:36, Galatians 1:5). All too easily our personal prayer can become a cycle of petition and thanksgiving – asking God for special favours and thanking him on their receipt. For sure, God invites us to seek his help, yet prayer is not akin to bargaining or a matter of negotiation. Christ himself taught us to pray that our wills be moulded to the will of the Father – thy will be done on earth as it is in heaven. Have our prayers become a rather hollow recitation of words or an opportunity for loving praise of God’s greatness – secure in the knowledge that each of us has been offered the gift of eternal salvation? … forever. Amen. Doxology from the Letter of Jude 24-25
Saturday of Week Three …’ Matthew 25:1-13 It is natural to put off until tomorrow what ought to be done today. The five foolish bridesmaids were caught off guard, having no spare oil. Our preparations for Christmas may well be underway; we may even have the presents bought, cards written and a date for picking up the turkey. This reading is a call to action, an opportunity to start doing what needs to be done to prepare ourselves for Christ’s coming again in glory. God, you reveal your glory in the life and in the power of your Risen Son. We pray that your Kingdom will come. We long for the glorious day of Christ’s revelation when the kingdom of death and tears will end and your kingdom of peace, justice and love will be established forever. Amen. Prayer by Ethiopian Orthodox during the Week of Prayer for Christian Unity 2004 in Jerusalem
Paul and the place of scripture ‘Let the Word of Christ, in all its richness, find a home in you’ (Colossians 3:16). Scripture is the ‘truth which God wished to be set down… for the sake of our salvation’ (Dei Verbum, 11), ‘acting in’ and ‘through’ sacred writers such as St. Paul. At the time of St. Paul’s writing, the Canon of Scripture (what is included in the Bible) had not been decided upon. Indeed, St. Paul’s letters, which teach what had been revealed by God through the life of Christ and handed on by the Apostles, constitute the earliest pieces of the New Testament, written before the Gospels. For Paul and the other Apostles, the important thing was for communities and believers to hold on to the apostolic traditions being handed on by word of mouth or by letter (cf. 2 Thessalonians 2:15). While some of these oral teachings were eventually written down and now form Holy Scripture, other teachings, such as the number of sacraments and the Assumption of Mary, come to us in the form of Sacred Tradition (Latin tradere – to hand on). Both are equally authoritative and together form the ‘Sacred deposit’ of the faith. The Pope, together with the bishops of the Church, has a teaching authority (Magisterium) and is tasked with giving an authentic interpretation of the Word of God, whether in its written form or in the form of Tradition (CCC, 85). It is important to understand each passage of Scripture in the harmony and coherence of all the truths of faith (Dei Verbum, 12), making constant reference to the Tradition and teaching authority of the Church. ‘In accord with God’s most wise design, Sacred Tradition, Holy Scripture and the Magisterium are so linked and joined together that one cannot stand without the others… all together and each in its own way under the action of the one Holy Spirit contribute effectively to the salvation of souls’ (Dei Verbum, 10). St.
Paul recognised that Jesus’ life, death and resurrection fulfilled the promises of the Old Testament. Together the Old and New Testaments demonstrate the unity of God’s plan and his Revelation. It is this unity which we speak of when we say that all Sacred Scripture is but one book, and that one book is Christ, because all divine Scripture speaks of Christ, and all divine Scripture is fulfilled in Christ (CCC, 134).
Week Four – Group Session (Paul and the Church) Opening Prayer Leader:
The voice of the Lord is over the waters; the God of glory thunders, the Lord, over mighty waters.
Group:
The voice of the Lord is powerful; the voice of the Lord is full of majesty.
Leader:
The voice of the Lord flashes forth flames of fire. The voice of the Lord shakes the wilderness; the Lord shakes the wilderness of Kadesh.
Group:
The Lord sits enthroned over the flood; the Lord sits enthroned as king for ever.
Leader:
May the Lord give strength to his people! May the Lord bless his people with peace! From Psalm 29
Explore the Scriptures 1 Corinthians 12:12-31 … ‘I am not a hand and so I do not belong to the body’, would that mean that it stopped being part of the body? If the ear were to say … ‘I do not need you’, nor can the head say to the feet …
interpret them? Be ambitious for the higher gifts. And I am going to show you a way that is better than any of them. Following a short period of silence you may wish to share an image, a thought, a phrase that has struck you. Reflection With such a familiar reading, it is hard to imagine that anything new can be said. This passage is frequently used in confirmation preparation, it has been used to prepare groups for parish leadership and we hear it every year at Pentecost. Its familiarity, however, lies in the fact that we hear echoes of its message in daily life as a Christian. We all search for a role, consciously or subconsciously, whether it is in our families, the workplace, in a group of friends or in the Church. The desire to belong and feel needed is a strong motivator. It is not surprising that the image of the family is particularly strong when referring to the Church. Everywhere in the New Testament, and particularly in St. Paul’s letters, the Church is described as being composed of women and men united as sisters and brothers in one family. ‘The Spirit himself and our spirit bear united witness that we are the children of God. And if we are children we are heirs as well: heirs of God and coheirs with Christ, sharing his sufferings so as to share in his glory’ (Romans 8:16-17). St. Paul created communities throughout the Mediterranean, places where experiences could be shared, bread broken and Scripture heard and lived out. Where Roman gods demanded individual responsibility for the offering of sacrifice and oblation, this new faith in Christ was manifested in community. Catholic Christianity was, and is, a personal relationship with God grounded in the Church, as the Second Vatican Council’s Dogmatic Constitution on the Church tells us:
God does not make men holy and save them merely as individuals, without bond or link between one another… Christ instituted [a] new covenant, in His Blood, calling together a people made up of Jew and Gentile, making them one, not according to the flesh but in the Spirit. This was to be the new People of God… What distinguished the communities of the early Church was the shared belief in the saving power and the love of God incarnate in Jesus Christ. Such love should characterise our parishes, our families, our communities. St. Paul asks us to ‘put on love’ (Colossians 3:14). This love is to serve both those present in the community and to act as a beacon to those around it, calling them to itself. St. Augustine sums this up beautifully in one of his sermons: ‘We ourselves are the house of God. By becoming Christians we are like stones newly quarried, and when catechised, baptised, formed, we are hewn and evened up.’ Nevertheless, Augustine continued, we ‘do not make the house of God unless we are cemented together by love. It is only when people see that the stones and wood in the building are securely fastened to each other that they would enter without fear of collapse’. At what point did you feel that you ‘belonged’ to your parish? How has your experience of a small community helped you to ‘belong’? How might you, as individuals and as a group, help others feel a sense of ‘belonging’? Do we see our gifts as personal possessions or as something to be used for the good of our community?
Should you have the time, you may wish to look at Colossians 1:15-20 and 25, which focuses on the power and majesty of Christ, the head of all creation, and how, as members of the Church, we are able to share in God’s message. 1 Timothy 3:14-16 and 1 Thessalonians 5:14-18 may also be helpful.
Sunday of Week Four Read the Scripture from The Dedication of the Lateran Basilica – 1 Corinthians 3:9-11, 16-17 You are God’s building. By the grace God gave me, I succeeded as an architect and laid the foundations, on which someone else is doing the building. Everyone doing the building must work carefully. For the foundation, nobody can lay any other than the one which has already been laid, that is Jesus Christ. Didn’t you realise that you were God’s temple and that the Spirit of God was living among you? If anybody should destroy the temple of God, God will destroy him, because the temple of God is sacred; and you are that temple. Background St. Paul had an uneasy relationship with the church of Corinth. He detected disunity amongst its members (1 Corinthians 1:11-13), discontent with his apostolic ministry (2 Corinthians 10:7-12), difficulties in understanding the effects of the salvation wrought through Christ as a result of his death and resurrection (1 Cor. 15:12-14) and an inability on the part of some church members to reflect Christian values in society (1 Cor. 5:1-2; 6:17-20). ‘God’s building’ (1 Cor. 3:9) is a reference to the Christian community, who in Paul’s time would have met in the houses of its members. As a result, terms like ‘building’, ‘foundations’ and ‘architect’ become metaphors for how the Church ought to be perceived (1 Cor. 3:10b), of its relationship to its apostolic founder, Paul (1 Cor. 3:10a) and of the necessity of viewing the whole structure as being based on the mission and ministry of Jesus Christ (1 Cor. 3:11). Paul is offering a challenge to the church leaders in Corinth to remain faithful to the Gospel which he has preached (1 Cor. 15:3-4) and to the testimony of the earliest eye-witnesses of Jesus’ resurrection.
Paul also uses another metaphor for the church, that of ‘God’s Temple’ which contains ‘God’s Spirit’ (1 Cor. 3:16-17). The physical body of the Christian should be regarded as ‘a temple of the Holy Spirit’ (1 Cor. 6:19). It is likely that Paul is reflecting upon the role and purpose of the Jerusalem Temple, which was believed to be the dwelling place of God on earth (1 Kings 8:12-13). In using this metaphor Paul is making a direct link between the status of the Church and the ethical behaviour expected of Christians. God …
Monday of Week Four In the one Spirit we were all baptised (1 Corinthians 12:13) Through baptism we enter into communion with Christ’s death, are buried with him, and rise with him (Romans 6:3-4, cf. Colossians 2:12). The baptised, reborn in the Holy Spirit (Acts 2:38, John 3:5), become ‘living stones’ incorporated into the ‘spiritual house’ of the Church (Catechism, 1267). Thirty to forty years ago the parish community was all important. Important events, moments of transition, were experienced and marked within the ‘spiritual house’ of the faith community. Increasingly, though, religion is seen as a private or individualistic affair. Moreover, the commitment to the parish community has to take its place alongside the other demands on our time. For St. Paul, community was everything. Throughout his letters we see a concern to build up the community through a unity of belief and through fellowship. For Paul the building of communio is the antidote to loneliness and isolation. In community the weak are strengthened, anonymity challenged and genuine love and welcome become possible. From Apostolicam Actuositatem 10. In the manner of the men and women who helped Paul in spreading the Gospel (cf. Acts 18:18, 26; Romans 16:3), the laity with the right apostolic attitude supply what is lacking to their brethren and refresh the spirit of pastors and of the rest of the faithful (cf. 1 Corinthians …) and offer their special skills to make the care of souls… more efficient and effective. We believe in one holy, catholic and apostolic Church. We acknowledge one baptism for the forgiveness of sins. We look for the resurrection of the dead, and the life of the world to come. Amen. Part of the Profession of Faith
Tuesday of Week Four the parts are many but the body is one (1 Corinthians 12:20) How can we be Christians in the modern world? How can we live Christian lives in societies caricatured as soulless and self-serving? These are not trick questions. God has given us the means of living as believers in a troublesome world. We have the Church, the sacraments and the outpouring of the Holy Spirit. All these are given to strengthen us. In addition we have the example and support of each other. St. Paul, in his first letter to the Church in Thessalonica, wrote: ‘You observed the sort of life we lived, you were led to become imitators of us, you took to the gospel, from you the Word of God started to spread, news of your faith has spread everywhere’ (1 Thessalonians 1:5-8). Our example to those around us, in our parish community, makes it easier for them to bear the joyful burden of Christian and Eucharistic living. Likewise, their example supports us. From Apostolicam Actuositatem 10. … Amen. Opening Prayer, Mass for the Universal Church, Roman Missal (1974)
Wednesday of Week Four I would like to share with you how I came to know Jesus Christ. It wasn’t necessarily a dramatic encounter, but rather a step-by-step journey. However, even if the encounter wasn’t dramatic, the changes in my life have been. I wasn’t brought up a Christian at all, but for some reason at the age of about fourteen I started going to my local sleepy Anglican parish. Shortly after, there was a Billy Graham mission in the parish and a call for those who wanted to give their lives to Jesus. I went forward. If God had been scripting a Hollywood film, then this would have been the cue for a dramatic life-changing moment. Instead, seemingly, nothing altered in my life. A few years later I went to university in Bristol. Here I met a Jehovah’s Witness, who awoke in me a desire to find God and to find the truth. We studied Scripture together and I became hungry to discover more. I was led by some Christians to a course, similar to Alpha, at an evangelical Anglican church. Here it all made sense. Here I came to see that God is real, God is alive and my life meant nothing without Him. I met Jesus as my Lord, King, Saviour, brother and friend. Now that could be the end, but meeting Jesus was only the beginning. I was a believer, but Jesus had two further extremely vital encounters for me. Firstly, He showed me the Holy Spirit, who made my faith alive and active by taking it the incredibly long distance from the mind to the heart. Secondly, He introduced me to Christian community. In Poland, I met Koinonia John the Baptist, which is an international charismatic Catholic community. This was a life-changing experience. Praise the Lord, I didn’t meet either the Church or the community as an organisation but as people who are alive in Christ and gave witness to Jesus’ resurrection in their lives. It is not possible to overstate how important Christian community is.
Michael Parry Living God, we praise you for the multitudes of women, men, young people and children who, across the earth, are striving to be witnesses to peace, to trust and to reconciliation. Amen. Taizé prayer, Time of the Church 14
Thursday of Week Four the weakest which are the indispensable ones (1 Corinthians 12:22) Welcome and inclusivity are not optional extras to parish life. The parish is a sacred space where we are able to live out our baptismal vocation, to act on the call to minister to one another and provide witness to the world. The notion of mutual support and the need for prayer are imperative to the proper functioning of a parish and to a healthy life as one of God’s children. During the ‘I Confess’ each Sunday we pray the following: ‘I ask blessed Mary, ever virgin, all the angels and saints, and you, my brothers and sisters, to pray for me to the Lord our God.’ In this we are not simply asking for help but pledging our support to those around us. We acknowledge the importance that each part of the Church’s body plays and assert that each part be equally concerned for all the others (cf. Romans 15:1-6). If we look to the ‘margins’ of our community, who do we see? Who are the ‘weakest’ in our community? How can we facilitate their involvement? Father, look with love on those you have called to share in the one sacrifice of Christ. By the power of the Holy Spirit make them one body, healed of all division. Keep us all in communion of mind and heart, and help us to work together for the coming of your kingdom. Amen. From the Eucharistic Prayer for Reconciliation I If you have not done so already, there may still be an opportunity to join a small community in which you can pray and share your faith.
Friday of Week Four Set your mind on the higher gifts (1 Corinthians 12:31) ‘Greater love hath no man, than to lay down his life for his friends’ (John 15:13). The shedding of Christ’s blood on Calvary is a gift beyond compare, a sacrifice which makes all others redundant (Hebrews 10:12). It is possible, in the midst of the here and now, in the sacrifice of the Mass, to see once more this great act of God’s love. Rightly described as the source and summit of Christian life (Lumen Gentium, 11), the Eucharist feeds us for life; it is the source from which we spring, and the place where we can pour out our thanks and love to the Father for the gift of grace through his Son. We are told by Paul to seek grace, that strength which comes from God, before concerning ourselves with gifts; but where we aspire to gifts, to look to those which are the most valuable in themselves or the most serviceable to others. Let us set our minds on the higher gifts and, by their use, by example and word, inspire and convince others of the beauty and greatness of God. From Christifideles Laici (Lay Members of Christ's Faithful People) 36. … ‘to the ends of the earth’ (Acts 1:8). Lord …
Saturday of Week Four
Matthew 25:14-30 How careful have we been in our stewardship of the many gifts God has given to us? Where have we used our gifts for the building up of the Kingdom here on earth, for the good of the Church and for the good of each other? Let all of us then live together in oneness of mind and heart, mutually honouring God in ourselves, whose temples we have become. Amen. From the Rule of St. Augustine (c. 400)
Paul as an apostle Paul understood himself to be an apostle, an agent of some higher authority. Many of Paul’s letters begin with a declaration of his apostolic status (Galatians 1:1; 1 Corinthians 1:1; 2 Corinthians 1:1; Romans 1:1; Colossians 1:1). This tradition was continued in the later letters (Ephesians 1:1; 1 Timothy 1:1 and 2 Timothy 1:1) whose authorship is uncertain. Paul is clear that he is an ambassador ‘through Jesus Christ and God the Father’ (Gal. 1:1). For him being an apostle conferred an ‘office’, within God’s plan of salvation through Christ, and a particular commission to evangelise among the Gentiles (Gal. 1:16). Paul’s apostolic ministry brought him great suffering. He calls himself a slave of Christ (Rom. 1:1 and Gal. 1:10), one with no rights except those granted to him by Christ. He identifies his suffering apostleship with the sufferings of Jesus (1 Cor. 4:8-13; Gal. 6:14 and 6:17). Paul’s claim to apostolic status was not unproblematic. He had been a virulent persecutor of Christians and, whereas his claim to apostolic status rested on a vision of the Risen Jesus (Gal. 1:12), the other apostles had seen Jesus in the flesh, having been called by him during Christ’s earthly life. This difference between Paul and the other apostles seems not to have concerned him. On the one hand, he stressed his independence (Gal. 1:17); on the other, he recognised the ‘pride of place’ to be given to those who were apostles before him (Gal. 1:17; 1 Cor. 15:4-10). Paul usually calls Peter by his Jewish name, Cephas. In Galatians he recalls a quarrel that they had over the table fellowship between Jewish and Gentile believers at Antioch (Gal. 2:11-14a). Paul had a universalistic understanding of his mission, ‘all are one in Christ Jesus’ (Gal. 3:28) as a result of baptism. Therefore he could not understand how Cephas could refuse to share food with Gentiles. In due course Peter seems to have changed his mind: witness the incidents in the house of Cornelius (Acts 10:1-43).
Historically, the alleged division in early Christianity between Paul and Peter has been exaggerated. United in faith, both Paul and Peter were martyred in Rome under the persecution of the Emperor Nero.
Week Five – Group Session (Paul and righteousness) Opening Prayer Leader:
How lovely is your dwelling place, O Lord of hosts! My soul longs, indeed it faints for the courts of the Lord; my heart and my flesh sing for joy to the living God.
Group:
Even the sparrow finds a home, and the swallow a nest for herself, where she may lay her young, at your altars, O Lord of hosts, my King and my God.
Leader:
Happy are those who live in your house, ever singing your praise.
Group:
For a day in your courts is better than a thousand elsewhere. I would rather be a doorkeeper in the house of my God than live in the tents of wickedness. From Psalm 84
Week Five – Group Session (Paul and righteousness) Exploring the Scriptures Colossians 2:20-23, 3:1-4 ‘If you had … abasement and their severe treatment of the body; but once the flesh starts to protest, they are no use at all. Since …’ Following a short period of silence you may wish to share an image, a thought, a phrase that has struck you. Reflection Very soon we will enter once more into the season of Advent. Initially the readings will focus on the end of the world and the Second Coming of Christ. Often, Christ’s return at the end of time is spoken of in fearsome terms. Here, however, St. Paul implies that it is something to look forward to. ‘You too’, he says, ‘will be revealed in all your glory with him’. Paul’s optimistic approach to the Second Coming is mirrored in his approach to death; ‘my desire’, he says, ‘is to depart and be with Christ, for that is far better’ (Philippians 1:23). Paul’s optimism is rooted in an utter conviction about God’s faithfulness; that ‘the one who calls you is faithful’ (1 Thessalonians 5:24), and in the belief that, having striven to live
in union with Christ on earth, Christ will keep company with us in death. ‘The saying is sure: If we have died with him, we will also live with him; if we endure, we will also reign with him’ (2 Timothy 2:11-12). Pope Benedict takes up this theme in Spe Salvi, his recent encyclical on hope. Here the Holy Father speaks of Christ the Shepherd; he who is the way, the truth and the life, who, having journeyed through the valley of death, enables us to approach the same journey in hope. The realisation that there is One who even in death accompanies me, and with his ‘rod and his staff comforts me’, so that ‘I fear no evil’ (cf. Psalm 23:4) – this was the new ‘hope’ that arose over the life of believers. (Spe Salvi, 6) If, as St Paul puts it, we are dead to the principles of this world, ‘our eyes set on heaven’, then death, judgement, heaven and hell – the so-called ‘Last Things’ – are to be embraced. Be slaves of righteousness, he says, for this will end in eternal life (Romans 6:17-23). Paul came to see that being right with God was not about the observance of rules and regulations, but about living in Christ. What this means, Paul makes clear again and again. It is that self-sacrificing love which stops our words and actions from being empty gestures (1 Corinthians 13:1-13); it’s the acknowledgement of our being members of a community, the body of Christ, the Church, rather than isolated individuals (1 Corinthians 12:26); it’s the life of prayer, of thankfulness to God (1 Thessalonians 4:14-22). Indeed, all that is required is an openness to the working of his Holy Spirit; to what God wants to and will achieve in us if we but allow (Ephesians 3:16-19).
What difficulties, if any, do you have with Paul’s optimistic approach to death? What has living in Christ meant to you? How has your experience of small communities reshaped your understanding of what it is to live in Christ? Look at 1 Thessalonians 4:13-18 and 5:1-22, in which St. Paul tells us to prepare for the coming of the Lord by ‘holding fast to what is good’. You may also care to reflect on these passages: 2 Thessalonians 2:1-2 and 1 Timothy 4:1-10.
Sunday of Week Five Read the Scripture from the 33rd Sunday in Ordinary Time (Year A) – 1 Thessalonians 5:1-6. Background One of the major issues in the Thessalonian church centred on the timing of Christ’s return (parousia) as judge of the universe. It was believed that this would happen very soon. However, as Christian converts began to die, people began to ask how those who had already died would meet Christ (1 Thessalonians 4:13). In the letter to the Thessalonians Paul, along with Silvanus and Timothy, reassures the church at Thessalonica. ‘For this we declare to you by the word of the Lord, that we who are alive, who are left until the coming of the Lord, will by no means precede those who have died’ (1 Thess. 4:15). Paul and his co-authors also reminded their audience that the timing of Christ’s return is known only to God (1 Thess. 4:15-17) and that they must be prepared and watchful for Christ’s coming (1 Thess. 5:4). Metaphors found in the Jewish tradition, such as ‘the Day of the Lord’ (Joel 2:1), ‘the thief in the night’ (Matthew 24:43), and the labour pains of
pregnancy before childbirth (Jeremiah 4:31) serve to highlight the suddenness of Christ’s return and the need for believers to live and behave as children of the light rather than darkness (1 Thess. 5:4-5). Practically speaking, believers are to avoid drunkenness (1 Thess. 5:6) and to get on with doing what needs to be done (1 Thess. 4:11). Metaphorically, believers are to be armed like soldiers (1 Thess. 5:8) in order that the battle for the faith might be won. Now, as then, we are challenged to live in two worlds; expectant for Christ’s return, living as the Thessalonians were instructed to live but conscious, in the exercise of our mission and ministry, of Christ’s ever-abiding presence (Matthew 28:20).
Monday of Week Five If you have really died with Christ (Colossians 2:20) St. Paul was a faithful Jew, a Pharisee, for whom the way to God was through the strict observance of the Mosaic Law. With his conversion, however, he came to see that the way to God was through Christ. Put simply, Paul did not convert to another God; rather he came to understand that Jesus was the promised Messiah – the Way, the Truth and the Life – who opened up the way to the Father. For Paul, Christ was no longer considered cursed because he had ‘hanged on a tree’ (Deuteronomy 21:23) but is seen as ‘wisdom, virtue, holiness and freedom’ (1 Corinthians 1:30). It is because Paul came to understand Christ as the way to the Father that he objected to Jewish observances, such as circumcision, being imposed upon those Gentiles who came to believe. The proof of living in Christ was in the imitation of Christ’s life rather than faithful adherence to the Law. From Christifideles Laici 16. Life according to the Spirit, whose fruit is holiness (cf. Romans 6:22; Galatians …)
Tuesday of Week Five why do you still let rules dictate to you (Colossians 2:20) In his letter to the Romans (7:19), St Paul concludes, ‘For I do not do the good I want, but the evil I do not want is what I do’. How often, as we have tried to follow Christ, have we experienced a similar feeling; conscious of the personal failings that seem so well ingrained? Yes, we know how we would want to be, how we would like to act, but something less noble – fear, greed, pride, anger, jealousy – seems to dictate the pattern of our lives. … Amen. Taken from ‘The Universal Prayer’, attributed to Pope Clement XI (1649-1721) Today is the Feast of the Dedication of the Basilicas of Saints Peter and Paul. Just as it was in the time of Constantine (c.274-337), the two great pilgrimage sites of Rome remain the tombs, or memorials, of St. Peter upon the Vatican Hill and the tomb of St. Paul off the Ostian Way. Today the Church honours Peter the fisherman, the rock on which the Church is built, and Paul the tentmaker, reformed persecutor of Christians and Apostle to the Gentiles. As we pray today we unite our thoughts with those praying at the tombs of these Apostles.
Wednesday of Week Five I had been going through a period of difficulty, compounded by a medical illness. I entered a crisis period marked by anguish, soul-searching, self-loathing and desperate prayer and pleading with God too. I could see no way out until I woke up on the morning of 31st May 1979 to a new reality. I was totally at peace, filled with a deep assurance of being loved, a sense of the closeness of God to me. It was like a revelation. So there and then I committed my life to God, in sheer gratitude and joy for what had happened. I sustained a prayer life and developed a real hunger for reading Scripture. This new sense of God’s closeness and his love for me persisted and grew. All this began to affect my attitude to myself, to others and my behaviour. Although there have been other key moments, I have come to realise that faith is a journey, a relationship and a mystery. I realise it is always a gift – from God who first loves us – but one that needs constantly to be responded to. For the most part with me it has been about fostering a steady and faithful life of discipleship, with a commitment to personal prayer, sacramental participation, fellowship with other Christians, a growth in holy living, study of Scripture and other Christian writing, a commitment to Christian witness, service and mission, and in all of this to seek specifically to know and do the will of God. In fact the times of greatest personal difficulty and disappointment have proved to be the most fruitful for my growth in Christ. I have made lots of mistakes and increasingly realise my frailty, yet also the faithfulness and mercy of God. Like Peter, I often find myself perplexed but confessing, ‘Lord, to whom else shall I go? You have the words of eternal life!’ (John 6:68). It is a joy but also a constant challenge just to be called to grow daily in my relationship with Christ, ‘pressing on’ as St. Paul puts it.
It is a privilege to be caught up in the Mystery of Christ and somehow, and very imperfectly, to be part of his body on earth, an instrument used by him to bring his love, truth and life to others.
My life has steadily been shaped in subtle but powerful ways by the Eucharist. The church takes us into the heart of the communion of saints, those living on earth and those with the Lord beyond the grave. I know my Christian journey would be impossible without this communion and their help, love and support. The church’s sheer diversity has also been enriching, and I have benefited immensely from all sorts of contacts and involvement with different groups, religious orders, new communities and movements. The Holy Spirit is so rich and lavish in his distribution of gifts and the ways he works! Andrew Brookes Come Holy Spirit, fill the hearts of your faithful and kindle in them the fire of your love. Send forth your Spirit, and they shall be created, and you shall renew the face of the earth. Amen.
Thursday of Week Five Let your thoughts be on heavenly things (Colossians 3:2) Thinking of Heaven can be reassuring. Home of the Saints, who remind us that Christian living is possible, and of the angels, so often the bearers of good news that speaks of God’s abiding care, Heaven, to paraphrase St Paul, is that everlasting home where questioning and uncertainty will cease and we will see God face to face (1 Corinthians 13:8-12; Philippians 3:20; 2 Corinthians 5:1). Yet, thinking about Heaven can also be frustrating. What will it be like? Who else will be there? What will we do? How will I cope with living eternally? In the answering, the inadequacy of our language and knowledge becomes all too clear. Where knowledge fails St. Paul invites us to trust in the Holy Spirit, for it is only through the Spirit that we can begin to comprehend what God, in his goodness, has given us. ‘What no eye has seen, nor ear heard, nor the heart of man conceived, what God has prepared for those who love him – these things God has revealed to us through the Spirit; for the Spirit searches everything, even the depth of God’ (1 Corinthians 2:9-10). From the Catechism 1024. O God in whom is all consolation, who doth discern in us nothing that is not thine own gift, grant me, when the term of this life is reached, the knowledge of the first truth, the enjoyment of thy Divine Majesty. Amen. St. Thomas Aquinas (c.1225-1274)
Friday of Week Five he is your life (Colossians 3:4) In his letter to the Philippians St. Paul reminds us that life in Christ is the only life worth living. ‘I have suffered the loss of all things, and I regard them as rubbish,’ he says, ‘in order that I may gain Christ and be found in him’ (Philippians 3:8-9). Paul clings to Christ because Christ had already made Paul his own. As one translation of the Bible puts it, ‘I press on till I conquer Christ Jesus, as I have already been conquered by him’ (Philippians 3:12). Just as he claimed Paul, Christ has claimed us in the waters of baptism. For us, being claimed by Christ for himself, being ‘owned’ by Christ, is all privilege, for it opens up for us the way to eternal life. ‘He is your life’ says St. Paul, and the question in the face of today’s text is a simple one… is he? And, if Christ is your life, how will others know? … model for souls. Jesus Life, may my presence bring grace and consolation everywhere. From ‘Invocations to the Divine Master’ by Blessed James Alberione, Practices of Piety and the Interior Life
Saturday of Week Five Matthew 25:31-46
Living the ‘virtuous’ life which Jesus maps out for us is a real challenge. Reflecting on this gospel passage we may well ask ourselves, where and how have we neglected to serve Christ? Yet, the gospel is not given to us to burden us; it is given to us to lighten our way. This Advent we will hear again John the Baptist’s invitation to repent. It is for this reason that our reading of the gospel can be coloured by hope. God of your goodness, give me yourself, for you are sufficient for me. If I were to ask for anything less I should always be in want, for in you alone do I have all. Amen. Julian of Norwich (1342–1416)
Stages in Paul’s thought We can divide Paul’s letters into three groups and by doing this it is possible to glimpse stages in the development of his thought. Stage One – Paul’s letters while on active missionary service (c. AD 50-58) a) 1 Thessalonians and (probably) 2 Thessalonians, in which Paul discusses issues relating to Christ’s return to the world as judge. b) Galatians – Paul’s most personal and heated letter, which maintains that we can be made righteous before God in Christ without recourse to the Jewish Law, especially in relation to circumcision. c) The Corinthian correspondence represents Paul’s most lengthy writings (29 chapters are preserved). Here Paul discusses his apostleship and living the Christian life in an urban environment. d) Romans continues some of the ideas found in Galatians and was written to advance Paul’s evangelistic campaign to Spain by enlisting the support of the Roman church. Stage Two – Paul’s letters from prison (c. AD 58-65?) a) Philippians, written to one of his favourite churches, in which Paul argues that the humility of the Philippian church should model that of Christ. b) Philemon, written to a friend asking that he receive back into his household the runaway slave, Onesimus. c) Colossians, in which Paul addresses questions relating to following Christ and living the Christian life. d) Ephesians has some common features with Colossians, but it is sometimes thought that Ephesians was written after Paul’s death by a disciple. If we wish to detect how Paul’s thought has developed it may be that we should consider his understanding of the Church by comparing Ephesians with 1 Thessalonians.
Stage Three – Paul’s letters about succession (AD 65-80?) 1 Timothy, 2 Timothy and Titus – These letters may have been written (even in part) by Paul before his martyrdom, or by a disciple who wished to maintain Paul’s spiritual legacy. Again we can trace how Paul’s thought developed, this time with regard to ministry, by comparing these ‘Pastoral letters’ to 1 Thessalonians and Galatians.
Week Six – Group Session (Paul and Evangelisation) Opening Prayer Leader:
Give ear, O my people, to my teaching; incline your ears to the words of my mouth.
Group:
I will open my mouth in a parable; I will utter dark sayings from of old, things that we have heard and known, that our ancestors have told us.
Leader:
He commanded our ancestors to teach to their children; so that they should set their hope in God, and not forget the works of God, but keep his commandments;
Group:
We will not hide them from their children; we will tell to the coming generation the glorious deeds of the Lord, and his might, and the wonders that he has done. From Psalm 78
Explore the Scriptures Romans 10:8-17 ‘… is a welcome sound.’ Not everyone of course listens to the Good News. As Isaiah says: Lord, how many believed what we proclaimed? So faith comes from what is preached, and what is preached comes from the word of Christ. Following a short period of silence you may wish to share an image, a thought, a phrase that has struck you. Reflection You are a son or daughter of God. The way we live our lives is to be determined by this simple but mind-blowing truth. All too easily such words can seem like a basic platitude, a phrase that slips off the tongue in sermons or reflections such as this. But stop! Consider! Be moved! The Creator of all invites us to enter into a deeply personal relationship. Of this we can be justifiably proud; it is ‘no cause for shame’. All too frequently we can be embarrassed when talking of our faith, consciously avoiding situations where we may have to reveal what we do on a
Sunday morning when the rest of the world is sleeping or washing the car. As children of God, gifted with the Holy Spirit in baptism and strengthened through confirmation, the Spirit of truth, which the world does not understand nor perceive (John 14:17), remains our Advocate. Accompanied as we are by God’s Spirit, we have strength enough to accomplish the mission entrusted to us, ‘to proclaim the Good News to all creation’ (Mark 16:16), whether from the mountain tops as Moses did (Exodus 33:19), the housetops (Luke 12:3) or at the bus stop. How do we go about doing this? How can we spread the message to those who need Christ’s help, who do not yet believe, who have not yet heard? St. Paul writes that ‘for the weak he made himself weak’; he ‘made himself all things to all men in order to save some at any cost’ (1 Corinthians 9:22). Motivated by love, we must try to understand the complexity of people’s situations. Again, in attempting to witness to Christ, we must be conscious that many do not speak ‘our’ language and that of the Church. The Second Vatican Council’s Pastoral Constitution on the Church in the Modern World, Gaudium et Spes (GS), said that the Church, and by extension those in the Church, should talk ‘in language intelligible to each generation’ in order to ‘respond to the perennial questions which are asked about this present life and the life to come’ (GS, 4). We are called to live in the world so as to understand it and respond in love. Of course, living in the world is not to be ‘of the world’. We live in the world, and seek to understand it, so as to challenge its values all the more effectively, and to reshape it according to Christ. St. Paul wrote and spoke to real people, in real situations. Paul was concerned about empty gestures, idolatry and sexual ethics (1 Corinthians 5-11). What would he write about today?
Of all the things that concern you in this world today, what would be the one thing that you would like to change? How in this particular situation could you as an individual, as a group, show God’s love in a language understood by the world at large? Take a few moments to ponder 2 Corinthians 5:20-6:2, where Paul describes our role and our duty as baptised Christians in a world that is in desperate need of the one who sends us, Christ Jesus. You may also care to reflect on Philippians 4:6 and 2 Thessalonians 2:13-3:5.
Sunday of Week Six Read the Scripture from Christ the King (Year A) – 1 Corinthians 15:20-26. And when everything is subjected to him, then the Son himself will be subject in his turn to the One who subjected all things to him, so that God may be all in all. Background In 1 Corinthians 15 Paul addresses the notion of resurrection, a belief which some of the Corinthian Christians doubted (1 Cor. 15:12). Belief in the resurrection of Christ is the starting point for reflection upon the resurrection of Christian believers. For this purpose Paul uses the harvest metaphor of ‘the first fruits’ (1 Cor. 15:20, 23). Christ’s resurrection prepares the way for the abundant ‘crop’ of the resurrection of the faithful. As our resurrection is dependent on that of Christ, so his resurrection was dependent on God’s mighty action. In this passage God’s power and might is the recurring theme (1 Cor. 15:24, 27, 28). As resurrection marks the victory over sin and death, Paul discusses human alienation from God by contrasting the actions of Adam (Hebrew for man) with those of Christ (cf. Romans 5:15-20). Through disobedience to God’s will (Gen. 3:11) Adam brought death to all humanity (Genesis 3:22; 1 Cor. 15:22a). As a result of his obedience Christ brings resurrection from death and the offer of eternal life
(1 Cor. 15:22b). Although Christ has won salvation for us through his saving death and resurrection, and although through the sacraments and in the Church we enter into this, the totality of the experience of being saved awaits us in heavenly glory (cf. Lumen Gentium, 48). … François Fénelon (1651-1715)
Monday of Week Six The word, that is the faith we proclaim, is very near to you, it is on your lips and in your heart. (Romans 10:8) Is it really possible to ‘lose’ one’s faith? Certainly, it is easy to think of our faith, whether we have it or whether we have ‘lost’ it, as depending on ourselves – what we do or don’t do. Yet faith is not something we achieve, reach or obtain. Faith is God’s gift. Like any gift, it is something that we can unwrap, explore and use; or something we can leave aside, unexplored and forgotten. Ultimately, God will not take back his gift of faith (cf. Romans 11:29). As today’s text reminds us, ‘the faith we proclaim, is very near, it is on your lips and in your heart’. Simply put, God is ever-present, searching and probing the depths of our heart. The question is not whether we have lost faith, but where and how we fail to respond to his Word alive and active in us (Hebrews 4:12-13). God and Father, to those who go astray you reveal the light of your truth and enable them to return to the right path: grant that all who have received the grace of baptism may strive to be worthy of their Christian calling, and reject everything opposed to it. Through Christ our Lord. Amen. Concluding Prayer, Eastertide Mondays Weeks 2 to 6, Divine Office
Tuesday of Week Six all belong to the same Lord (Romans 10:12) In his letter to the Romans, Paul reminds us that in Christ there are no distinctions between Jew and Greek, slave and citizen (Romans 10:12; cf. 1 Corinthians 12:12), ‘the same Lord is Lord of all and is generous to all who call on him’. The Lord comes for everyone, sinners and saints alike; he does not discriminate as we might do. We can be confident too that he cares for those he calls. As one of the Prefaces to the Apostles says: ‘You are the eternal Shepherd who never leaves his flock untended’; as sheep of his flock, ‘we have no need to be afraid’ (Luke 12:32). It is with this confidence in God’s pastoral care that we can spread the gospel of Christ. From Lumen Gentium (Christ, Light of Nations) 6. The Church is a sheepfold whose one and indispensable door is Christ (John 10:1-10). It is a flock of which God Himself foretold He would be the shepherd (Isaiah 40:11 and Ezekiel 34:11), and whose sheep, although ruled by human shepherds, are nevertheless continuously led and nourished by Christ Himself, the Good Shepherd and the Prince of the shepherds (John 10:11 and 1 Peter 5:4), who gave His life for the sheep (John 10:11-15). Father, through Jesus our Lord and our brother, we ask you to bless us. Grant that our parishes be true homes … Amen. L’Arche prayer (adapted)
Wednesday of Week Six My relationship with God used to be very personal and private, and the very idea of talking about it to someone else would have filled me with horror! Fortunately, that has changed, but it took many years for that to happen. I was brought up a Catholic and had an outstanding Catholic education with 10 years at Stonyhurst, the Jesuit boarding school. At the age of 18 I was a speaker for the Catholic Evidence Guild, where my talks were of dogma and doctrine. We were positively discouraged from including any personal testimony! My faith was contained in a closed box and, like many of my generation, the thought of talking about one’s relationship with God was unthinkable. While working and living in Nigeria, Malaysia, Hong Kong and China my family and I spent time with the exuberant and lively Christians of Africa and Asia and I was challenged by ‘the dreaded sharing’ at the lively home groups. God seemed to be so much more active in their lives! It was during the three days of a Cursillo (Spanish for short course) in Christianity in Hong Kong that my isolation was challenged and that I came to see the real value, in fact the need, to share with others. I realised that an isolated Christian is a paralysed Christian. During Cursillos we have three key subjects to review – our prayer life, what we are doing to improve our own spiritual formation, and what we are doing to evangelise others. I have learnt that it is the third subject, evangelisation, that really should be put first; the groups that thrive are the ones that consider evangelisation the key task. I have also learnt the power of prayer to give each other personal support. At these meetings I have learnt that prayer is answered. How else would I know? Stephen Fox
On Thee do I set my hope, O my God, that Thou shalt enlighten my mind and understanding with the light of Thy knowledge, not only to cherish those things which are written, but to do them. Thou art the enlightenment of those who lie in darkness, and from Thee cometh every good deed and every gift. Amen. St. John Chrysostom (c. 347-407)
Thursday of Week Six unless they have heard of him (Romans 10:14) St. John Chrysostom wrote, in his tenth homily on St. Paul’s first letter to Timothy, the following passage. It is a reminder of our vocation (1 Peter 2:9) and the power that living a Christian life can effect. ‘…’ In a world crying out for God, though it may not realise it, there is a very real need for believers to lead ‘shining lives’ so that people might be led to Christ, much as the magi were led to his manger by his star. Perhaps in the time leading up to Christmas, we might consider wearing an external symbol of our faith, a crucifix, a lapel pin or a badge, and be ready to tell those who ask about the one whom God sent. From Princeps Pastorum (Prince of the Shepherds) 32. Profession of the Christian faith is not intelligible without strong, lively apostolic fervour; in fact, ‘everyone is bound to proclaim his faith to others, either to give good example and encouragement to the rest of the faithful, or to check the attacks of unbelievers’ (St. Thomas Aquinas), especially in our time, when the universal Church and human society are beset by many difficulties.
All this day, O Lord, let me touch as many lives as possible for thee; and every life I touch, do thou by thy spirit quicken, whether through the word I speak, the prayer I breathe, or the life I live. Amen. Mary Sumner (1828-1921)
Friday of Week Six

a welcome sound (Romans 10:15)

Think of the last time you opened a letter, received an email or answered a phone call from a friend. Was the message you received full of sadness or glad tidings? It is wonderful to hear good news however it comes, from whatever source; indeed it is ‘a welcome sound’. We too have a profound and joyous message to share. It is one of salvation, peace and love (Isaiah 52:7). This love, expressed in Christ’s death for us, moved St. Paul so much that his life was dedicated to sharing it. Let us all permit the Word of God and the love of Christ to move us in a similar way. May it transform us more fully as disciples of Jesus Christ, to have our lives converted by his death and resurrection, to realise that we are all called in a variety of ways just as St. Paul was appointed by God, chosen in our mother’s womb to share what we have with others (1 Corinthians 1:1 and Galatians 1:15-16). From the Catechism.

Give us grace, almighty God, so to unite ourselves in faith with your only Son, who underwent death and lay buried in the tomb, that we may rise again in newness of life with him, who lives and reigns for ever and ever. Amen.

Concluding Prayer, Night Prayer Friday, Divine Office
Saturday of Week Six

Mark 13:33-37

As we enter into Advent we will be reminded of the need to ‘stay awake’, alert for the coming of the Lord. In the light of the last six weeks of reflection, is there a particular resolution you might make to help you in this? You may want to read the letters of St. Paul – perhaps taking a letter at a time and a chapter each day.

Opening Prayer, The Conversion of St. Paul, Apostle (25 January), Roman Missal (1974)

me on that Day; and not only to me but to all those who have longed for his Appearing (2 Timothy 4:6-8).
© The Trustees of the Chester Beatty Library, Dublin
The Letters of Saint Paul
Greek text on papyrus, c. AD 180-200, Egypt
CB BP II (P46) f.86r
The original letter of Paul to the several ‘churches of Galatia’ is thought to have been written between AD 48 and 55. ‘There is neither Jew nor Greek, there is neither slave nor free, there is neither male nor female; for you are all one in Christ Jesus’ (Gal. 3:28).
Letter to the Galatians
I want to create a DTO class for User. My input to the program is:
firstname, lastname, role, group1, group2, group3
So for each user the role consists of group_1, group_2, group_3...
In the database I want to store it in the following format:
demo, demo, demo, roleId, group_1_name, group_1_Id
demo, demo, demo, roleId, group_2_name, group_2_Id
demo, demo, demo, roleId, group_3_name, group_3_Id
I was able to separate all these things, but I want to assign these values to a UserDTO class and store them in the database. I am new to core Java, so how can I create a structure for this?
A Data Transfer Object (DTO) class is a JavaBean-like artifact that holds the data you want to share between layers in your software architecture.
For your use case, it could look more or less like this:
import java.util.List;

public class UserDTO {

    private String firstName;
    private String lastName;
    private List<String> groups;

    public String getFirstName() { return firstName; }

    public void setFirstName(String firstName) { this.firstName = firstName; }

    public String getLastName() { return lastName; }

    public void setLastName(String lastName) { this.lastName = lastName; }

    public List<String> getGroups() { return groups; }

    public void setGroups(List<String> groups) { this.groups = groups; }

    // Depending on your needs, you could opt for finer-grained access to the group list
}
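To get from one user object to the "one database row per group" layout described in the question, you could flatten the DTO with a small helper. This is a hedged sketch: the `UserRowMapper` class, the `toRows` method and the `roleId` parameter are illustrative additions, not part of the original answer.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical helper: expands one user into one CSV-style row per group,
// matching the "demo, demo, roleId, group" layout from the question.
public class UserRowMapper {

    public static List<String> toRows(String firstName, String lastName,
                                      String roleId, List<String> groups) {
        List<String> rows = new ArrayList<>();
        for (String group : groups) {
            // One row per group, with the user fields repeated
            rows.add(String.join(",", firstName, lastName, roleId, group));
        }
        return rows;
    }

    public static void main(String[] args) {
        List<String> rows = toRows("demo", "demo", "R1",
                Arrays.asList("group_1", "group_2", "group_3"));
        rows.forEach(System.out::println);
    }
}
```

Each returned row could then be passed to your persistence layer as a single insert.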
Why do you need to adapt your custom code during system conversion from the classic SAP ERP system running on any DB to SAP S/4HANA? The blog SAP S/4HANA System Conversion – Challenge for your custom code gives you the answer to this question.
Considering SAP S/4HANA system conversion (more on this in SAP S/4HANA System Conversion – At a glance), in this blog we focus on the custom code related process, which consists of two major phases. Before the SAP S/4HANA system conversion – during the preparation phase – we recommend getting rid of your old unused custom code (custom code evaluation) and then analyzing your custom ABAP code with the Simplification Database to find out which objects need to be changed and adapted to SAP HANA and SAP S/4HANA (SAP S/4HANA checks). After the SAP S/4HANA system conversion – during the realization phase – you need to adapt your custom ABAP code to the new SAP S/4HANA software (functional adaptation) and optimize performance for the SAP HANA database (performance tuning).
Custom code scoping
A typical ERP customer system contains a large number of custom development objects (Z-objects, enhancements and modifications) that are not used productively. Therefore, it is recommended to monitor your system landscape for a longer period of time in order to do some housekeeping and eliminate code that is no longer used within your productive business applications. This procedure is very important and will significantly minimize your custom code adaptation effort.
For this purpose, we recommend turning on the ABAP Call Monitor (SCMON) or Usage Procedure Log (UPL) in your production system to find out which custom ABAP objects are really used within your running business processes. You can also use this step for prioritization: to find out which objects are more important than others. Neither SCMON nor UPL has an impact on the performance of your production system.
The advantage of the SCMON compared to the UPL is that using this tool you not only collect the usage data (how often a specific ABAP object was called), but also the information about the calling business process. See ABAP Call Monitor (SCMON) – Analyze usage of your code for more details.
The recommended procedure for usage data collection is ABAP Call Monitor with aggregation of the collected usage data using the SUSG transaction. See Aggregate usage data in your production system with SUSG transaction for more details.
Usage data collected in the SAP Solution Manager 7.2/CCLM can be also used for custom code scoping. Generally, the Solution Manager 7.2 collects either SCMON or UPL data depending whether the connected system is capable of SCMON or UPL.
NOTE: You should collect usage data for a longer period of time (at least one year) to get really reliable results for not productively used code.
SAP HANA and SAP S/4HANA checks
This step is the most important one for your custom ABAP code on the way to the system conversion to SAP S/4HANA. Here you check your custom ABAP code for SAP HANA and SAP S/4HANA related changes.
SAP S/4HANA is based on the SAP HANA database. Generally, ABAP code runs on SAP HANA as on any other supported database. So why do you need to adapt it to SAP HANA? One reason is that you may have used native SQL specific to your previous database vendor; these vendor-specific dependencies must be eliminated. Another reason is that some custom code implementations use SELECT statements without ORDER BY. This can lead to unexpected behavior when the database is changed (for example to SAP HANA), because without ORDER BY the results may be returned in a different order. Therefore, you need to check whether your SQL SELECTs without an ORDER BY clause are still correct. Beyond this, pool and cluster tables were removed with SAP HANA, so database operations on these tables also need to be removed from your custom ABAP code.
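As a minimal illustration of the ORDER BY issue, consider the following ABAP sketch (the table name `ztab` and the field name are made up for this example):

```abap
* Before: the result order depends on the database optimizer
* and may silently change after the move to SAP HANA.
SELECT * FROM ztab INTO TABLE @DATA(lt_result)
  WHERE field = 'X'.

* After: the order is guaranteed regardless of the database.
SELECT * FROM ztab INTO TABLE @DATA(lt_result_sorted)
  WHERE field = 'X'
  ORDER BY PRIMARY KEY.
```

Whether an ORDER BY is actually needed depends on whether later code relies on the result sequence; this is exactly what the checks help you find.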
To prepare your custom ABAP code for the actual SAP S/4HANA conversion, you need to compare it with the Simplification Database. For more information on Simplification Database please refer to the blogs Upcoming tools for SAP S/4HANA migration – the simplification database, Simplification List SAP S/4HANA 2020.
In SAP NetWeaver AS ABAP 7.52 the coverage of checked custom code was significantly improved. Custom code checks now scan all custom code in enhancements, modifications, customer exits, Smart Forms and Adobe Forms, and ignore findings in SAP includes and generated code. Additionally, simplification item information (e.g. the relevant SAP Note) is displayed in the ATC result list, including filtering and sorting capabilities. It is also now possible to scan custom SAP Queries, see the blog How to check your SAP Queries for SAP S/4HANA readiness.
Technical infrastructure for custom code analysis
The tool of choice for SAP S/4HANA and SAP HANA checks is the ABAP Test Cockpit (ATC) with remote code analysis. You set up only one central ATC check system for all static checks of your custom ABAP code in your system landscape, which needs to be migrated to SAP S/4HANA. More details in the blog series Remote Code Analysis in ATC.
The recommended procedure is the following:
Custom code analysis options
The ABAP Test Cockpit comes along with its classic SAP GUI based user interfaces to administer the tool and maintain the necessary configuration steps in order to check the custom code.
With ABAP platform 1809 we simplified the whole custom code adaptation process by providing a new SAP Fiori App Custom Code Migration that allows you to execute SAP S/4HANA checks just with a few clicks.
SAP Fiori App Custom Code Migration can also be configured and run in the SAP BTP ABAP Environment – formerly known as SAP Cloud Platform ABAP Environment.
The following picture presents all custom code analysis options at one glance:
Remote ATC with SAP GUI
The prerequisites for this option are the remote ATC infrastructure and ATC central check system (SAP_BASIS 7.52). Run ATC transaction with the check variant S4HANA_READINESS_REMOTE containing SAP S/4HANA and SAP HANA checks and analyze the ATC result list. ATC offers you best support for detailed analysis of SAP S/4HANA findings using Statistics View, simplification information in ATC result and various navigation capabilities. See also the blog Remote Code Analysis in ATC – Working efficiently with ATC Result List.
For live demonstration of end-to-end custom code adaptation process in SAP GUI check also the Video on YouTube:
Remote ATC with SAP Fiori App Custom Code Migration
You can configure and use SAP Fiori App Custom Code Migration in the cloud with the SAP BTP ABAP Environment or on-premise in an SAP S/4HANA >=1809 system.
The App performs SAP S/4HANA checks on your custom code based on remote ATC infrastructure and provides the analytical presentation of the SAP S/4HANA check results with comprehensive aggregation, filtering and navigation capabilities. Beyond this the App identifies the unused custom code based on your collected usage data, and this enables you to remove it automatically via SUM during a system conversion to SAP S/4HANA.
Using the visual predefined filters above you can group your ATC findings to get a quick overview on for example:
- How many findings are in your used custom code (exclude unused code)
- In which SAP S/4HANA simplification areas do you get the most findings
- What are the most urgent findings (by priority)
- For which findings there are Quick Fixes available and which findings you need to fix manually
From the App you can drill-down to ATC results and display affected source code of your ERP system directly in the browser.
For more details on this custom code analysis option see the blog:
Functional adaptation
After you have done the system conversion to SAP S/4HANA with the Software Update Manager (SUM) – you don't need to migrate the database to SAP HANA first and then update the software to SAP S/4HANA; SUM does both in one step – we recommend running ATC with the SAP HANA and SAP S/4HANA checks. After that you need to carry out the functional adaptation based on the ATC results.
Adjust modifications and enhancements
First you need to adapt the modifications and enhancements using the standard transactions SPDD, SPAU and SPAU_ENH. This is the same process as in previous upgrades within the SAP Business Suite product portfolio, only the tools SPDD and SPAU have been renewed. Especially when moving a very old system to SAP S/4HANA, many modifications and enhancements can be removed or reset to SAP standard. For this purpose a new UI was introduced for SPAU, which supports mass activities in order to adjust modifications and enhancements or reset objects to SAP standard more easily.
New UI in SPAU
Recommendation: Reset as many objects as possible to SAP standard.
Fix SAP HANA and SAP S/4HANA findings
Second, you need to fix the SAP HANA and SAP S/4HANA findings and adapt your custom code accordingly.
SAP HANA finding example: if you select from a table without any ordering and then execute a binary search on the result, the wrong entries may be returned. Therefore you need to fix the SELECT either by adding an ORDER BY clause or by sorting the internal table before the statement READ TABLE … BINARY SEARCH.
Fix ORDER BY
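The second fix option – sorting the internal table before the binary search – can be sketched like this (the internal table, work area and field names below are illustrative, not from the original screenshot):

```abap
* Ensure the internal table is sorted by the lookup key before
* using BINARY SEARCH, since the SELECT that filled lt_materials
* no longer guarantees any particular order on SAP HANA.
SORT lt_materials BY matnr.

READ TABLE lt_materials INTO ls_material
  WITH KEY matnr = lv_matnr BINARY SEARCH.
IF sy-subrc = 0.
  " entry found – safe to use ls_material here
ENDIF.
```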
SAP S/4HANA finding example: replace your own defined material number with the SAP data type MATNR:
Fix MATNR
The TOP 10 Simplification Items
No doubt it takes substantial manual effort to look at every ATC finding and adjust your custom code. Besides this, very often most ATC findings relate to well-known standard SAP S/4HANA issues, which can be fixed quickly in an automated way.
Therefore, in order to minimize your adaptation efforts, we started to offer automatic code adaptations using the Quick Fixes (or Ctrl +1 shortcut) feature of ABAP Development Tools in Eclipse (ADT).
Currently, Quick Fixes are available for the most prominent SAP S/4HANA simplification use cases, like the MATNR extension, accesses to the database tables VBFA, VBUK, VBUP, KONV and BSEG, and the usage of VBTYPE data elements in source code. Beyond this, the mass-enabled Quick Fixes make it possible to adapt full packages or software components in one shot and in this way drastically reduce your custom code adaptation efforts. Check also the blog:
Naturally after functional adaptation you need to test your business processes (using automated testing like eCATT).
Performance tuning
After you have done the system conversion to SAP S/4HANA and the system is up and running, you need to identify which business processes can be optimized on the SAP HANA database, since you can now make use of the full power of SAP HANA regarding performance. Therefore you need to find out which SQL statements can be optimized. The SQL Monitor (ABAP SQL Monitor – Implementation Guide and Best Practices) allows you to get performance data for all SQL statements executed in your productive system. You can run it for a longer time period directly in your productive system (transaction SQLM) without major performance impact on your business processes (performance overhead < 3%). SQL Monitor helps you to understand what the most expensive and most frequently executed SQL statements are and which SQLs read or write millions of records, and it provides a transparent SQL profile. SQL Monitor allows you to link the monitored SQL statements to the corresponding business processes, including various aggregation and drill-down options.
SQL Monitor
As shown on the screenshot you can use for example transaction as the entry point and drill down the collected SQL runtime information to optimize SQL statements (e.g. by pushing the application logic to the SAP HANA database).
SQL Monitor Example:
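Pushing application logic down to the SAP HANA database typically means replacing row-by-row processing in ABAP with a single aggregating SQL statement. A hedged sketch (the table `zorders` and its fields are made up for illustration):

```abap
* Before: all matching rows are transferred to the application
* server and summed up in an ABAP loop.
SELECT amount FROM zorders INTO TABLE @DATA(lt_amounts)
  WHERE customer = @lv_customer.
LOOP AT lt_amounts INTO DATA(lv_amount).
  lv_total = lv_total + lv_amount.
ENDLOOP.

* After: the aggregation is pushed down to SAP HANA and only
* a single value is transferred.
SELECT SUM( amount ) FROM zorders INTO @lv_total
  WHERE customer = @lv_customer.
```

The SQL Monitor data helps you find exactly such statements where the transferred data volume is out of proportion to what the business process actually needs.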
Further information
For further details check the Custom Code Migration Guide for SAP S/4HANA 2020. The frequently asked questions are answered in the Custom code adaptation for SAP S/4HANA – FAQ.
good read!
Thanks for sharing the info. A good document to start with when we have to deal with custom code while migrating to the SAP HANA DB.
K.Kiran.
Hello,
Thanks for your description. But I am completely confused now. In the SAP Note 2271900 - Custom Code Management: Generation of Code Inspector Variant and SAP Note 2270689 - RFC Extractor for performing static checks SAP describes how to use the SAP Code Inspector to execute a remote check for the custom code analysis. These SAP Notes contain also How-To-Guides for the SAP Code Inspector.
But in your description the ATC is the tool of choice.
What is the right recommendation for customers?
Thanks in advance
Stephan Scheithauer
Hi Stephan,
we recommend to use ATC with Remote Code Analysis.
The Remote Code Analysis with ATC is new and available since October 2016 with AS ABAP 7.51. Therefore some previous SAP Notes and guides relate to Code Inspector and need to be updated subsequently. Thank you for this notice.
Best Regards,
Olga.
Hello Olga,
thank you for your recommendation. So I will focus on your blocks to get familiar with the ATC Scenario.
Best Regards
Stephan
Using SYCM (in NW 7.50 and previous) there was a possibility to "sharpen" the Custom code migration worklist based on the UPL data from the productive system. In the new way via ATC I'm missing this step.
Is it really missing or have I overlooked it?
Hi Martin,
you are right, this is currently missing in ATC. But we are working on it.
Will be available with the next major version of ATC.
Regards,
Thomas.
Hey Olga,
you mention note 2436688 but there seems to be something wrong with it:
This is what I get when I try to access it:
Any ideas?
Joachim
Hi Joachim,
I've just checked the link to the note: it works. Please try again. Maybe it was a temporary issue with access to the knowledge base.
Regards,
Olga.
Hi Olga,
I checked again, still, it doesn’t work for me!
(But other notes do work, see screenshot).
Is it maybe only available SAP-internally or something (without giving me a better message?).
best
Joachim
PS: there is no other way of accessing notes as via Launchpad, is there?
Well, there is SNOTE, of course, which says the note is “incomplete”?
Edit: Oh, there is a message saying it’s not released, it’s just hidden deep down:
[Yet another Edit]: As suggested by the SNOTE-message, I now opened an Incident (200273 / 2017).
OK, it's there now! (Version 4, Released On 28.04.2017 🙂 )
Hi Olga,
we are using ATC and doing custom code conversion on priority1 .
we are in a situation not able to handle few suggestion given by sap notes or cook books.
for example KONV is replaced by PRCD_ELEMENTS and there are some changes to VBFA table fields. The cookbooks suggest using factory classes to select or insert data from these new data models.
My question here is: do we need to handle these using the new classes only after migration? Or is there any other way to handle this at the time of conversion, as my current system doesn't contain these factory classes or even the new data model?
your suggestion will help us a lot to go further.
Hi Narasimha,
functional adaptations to S/4HANA should take place after S/4HANA system conversion, see also the "Functional adaptation" chapter of this blog. Before you cannot adapt your custom code since you don't have the new S/4HANA data types and models in your classic ERP system as you correctly noticed it.
Regards,
Olga.
Hi Olga,
Thank you very much, you cleared our way.
Please keep assisting us , more queries to land your inbox 🙂
Hello Olga,
When we run ATC, I was wondering why standard RV13A* programs are listed.
Could you put a light how these objects have to be handled. Program header comments log says.
Regards,
Narasimha
Hi Narasimha,
just ignore them. We plan not to list ATC findings in generated code and SAP Includes in the next ATC release.
Regards,
Olga.
Dear Olga,
In sap help: SAP S/4HANA Conversion steps it was mentioned (before 7.51) to generate the Custom Code Migration Worklist for code analysis in addition to the ATC or SCI anaylis. Now this step is not mentioned anymore in help.sap.com and neither is mentioned in your blog.
Is it current recommendation to do not generate the Custom Code Migration Worklist and only perform checks with the ATC?
Thanks,
Susana
Dear Susana,
after you execute ATC checks, you will get the worklist of ATC findings, which you need to fix in your custom code to get it adapted to S/4HANA. This worklist is the Custom Code Migration Worklist. In the 7.50 release you got this list by executing Custom Code Analyzer, in 7.51 you get it just by executing ATC.
Best Regards, Olga.
Hi Olga,
Thanks for the Article.
I understand that we need to run the ATC check with the variant S4HANA_READINESS in the preparation phase (before the technical conversion). What are we expected to do with the findings in the ATC check? Should we fix them after the conversion or should they be fixed before, or would there be items to be fixed both before and after?
Thanks,
Ajith
Hi Ajith,
using ATC check results before system conversion you can estimate the effort for your custom code adaptation and fix some findings (e.g.like statements without ORDER BY or replace your custom data types for material numbers with SAP data type MATNR). But still the actual functional custom code adaptation takes place after system conversion, because only then you have the new S/4HANA models, APIs, datatypes in place.
Best Regards,
Olga.
Hi Olga,
we have an S4 converted system but the Readiness Check variant is not in the system, so how do we manage the code changes and know that they are complete?
thanks,
Malcolm.
Hi Malcolm,
is you S4 converted system SAP S/4HANA 1511 on SAP NetWeaver AS ABAP 7.50 release? If it is so,we recommend to set up ATC Remote Code Analysis in the SAP NetWeaver AS ABAP 7.51 - based system and use the S4HANA_READINESS_REMOTE variant for code checking and Remote ATC for Developers scenario for functional code adaptation.
Best Regards,
Olga.
Hi Olga,
It is very useful information. We are facing one issue while running the S/4HANA custom code impact analysis using SAP NW 7.5. Here the evaluation system is already on NW 7.50, so we followed the extract-based approach, running the SAPRSEUB and SYCM* programs to download the extract from NW 7.50 and import it again into the same NW 7.5 system.
While running the program SYCM_DISPLAY_SIMPLIFICATIONS to get the simplification result, it runs forever and does not show results. I even scheduled the job, and the execution time is already above 2,000 seconds. Generally this takes very little time to show results.
Can you please assist here – why is it not showing the simplification results? The evaluation system is on SAP NW 7.5.
Thanks & Regards
Rajesh Dadwal
Hi Rajesh,
for such types of issues please open a ticket to SAP.
Best Regards,
Olga.
I would like to check all the repository stuff we have within our own namespace.
So I created a run with "Check Variant": S4HANA_READINESS_REMOTE
Objects to Check - by Query - Package /BLA/* for instance.
When saving, it comes up with an information message that the input data is inconsistent: "Selection contains restricted packages e.g. /BLA/MAIN".
After that I save my Run and execute it immediately in background. In the monitor I can see that there were no objects found to check.
However, if I choose the same Check Variant but only check for Z* packages then there are plenty of objects to check.
Is this because of the /BLA/MAIN package which is flagged as "Main Package" which directly adds "Adding further objects not possible"(restricted-Flag) to it?
Hi Jan,
Did you try to register your name space (/BLA/) in the system which shall be analyzed by ATC as described in the following documentation?
Michael
Thats it. Thanks!
Hi Olga,
ATC has given errors on DB statements on the MARA, MARC and QMAT tables, like INSERT and DELETE statements. I have not succeeded in finding any factory classes for these tables; could you help with how to handle these?
Regards,
Narasimha
Hi Narasimha,
is your system before or after system conversion? Have you tried to follow the SAP notes linked to the corresponding ATC findings? Maybe you can drop me an email with related screenshots of ATC findings.
Best regards,
Olga.
Hi Olga,
we are doing post-conversion functional changes and handling a few custom programs as part of post conversion. I tried to follow the notes; I could not find any update/delete related factory class methods for MARC and MARA, but I was able to see DB-select related ones. Find below the ATC messages on the objects:
Hi Narasimha,
I double-checked note 2206980: Updates on master data attributes are still possible for hybrid tables.
MARC should be also a hybrid table mentioned in the beginning of the Note:
I hope this helps.
Michael
Dear Olga,
I am new to S/4 HANA and your blog is very informative and helpful.
I have a question. We are working on a system conversion from ECC to S/4 HANA 1709 OP.
The current patch levels on our SAP ECC system are given below. Based on the blog, I understand that we cannot use ATC for custom code analysis and correction. Is this correct? If we cannot use ATC, which other SAP tool can we use instead of ATC? Can you share any links to documentation/blogs for the valid tool that can be used with the below system configuration/patch levels.
ECC System:
ECC 6.0 EHP 5, NetWeaver 7.02
Software Component ReleaseLevel
SAP_BASIS 702 0012
SAP_ABA 702 0012
SAP_APPL 605 0009
Regards,
Jay
Hi Jay,
you can and should use remote ATC for custom code analysis. Please take a look at the "Technical Requirements" chapter in the blog Remote Code Analysis - Technical Setup Step by Step . The system, you want to check, must be at least on SAP_BASIS 7.00 therefore your SAP_BASIS 7.02 can be checked with remote ATC. Please use the blogs series Remote Code Analysis in ATC – One central check system for multiple systems on various releases.
Best regards,
Olga.
Thank you Olga for our response.
If I have understood the blog correctly, it will require setting up a new Netweaver system on 7.5X and from there build RFC connections to the system to be converted to run the ATC checks.
If the above understanding is correct, then isn't it additional overhead? Also, is it mandatory to do the check using ATC?
Regards,
Jay
Hi Jay,
yes, the custom code analysis with remote ATC is required for the S/4HANA system conversion. Only with the new ATC system (7.51 or 7.52-based) you will get the S/4HANA checks and S4HANA_READINESS check variant. Setup of one "SAP_BASIS only" ABAP system is not so much overhead. Apart of this you can still further use the central ATC system for the static quality assurance and check with the newest checks your whole landscape(independent of system releases). All advantages are also listed here under "Advantages": Remote Code Analysis in ATC – One central check system for multiple systems on various releases.
Regards,
Olga.
Hi Olga,
thanks for yet another very helpful blog!
We recently updated our ERP-systems to NW 750 with HANA DB and had our custom code adapted with the help of the Smartshift tool for (semi)automated code remediation. So, we should have a fairly clean state as far as HANA DB goes. We now have a conversion to S/4HANA looming ahead but there are no defined plans yet of how and when this will happen.
One of my current tasks is to evaluate and help to estimate the development efforts for incoming change requests for our custom code and I'm wondering how much the not yet timed S/4HANA conversion should be taken into account when providing feedback to the requesters.
I tried finding some best practices but wasn't really successful in finding answers to thoughts and questions like these:
We plan to get a central ATC-system with NW 7.52 setup soon and will then start to run the recommended readiness checks. I'd nonetheless like to get a better handle on things even before then.
Please let me know if this is not the best place for my questions and I can put them somewhere else, like e.g. a discussion thread in Answers.
Thanks and Cheers
Baerbel
Hi Baerbel,
actually what you want are instructions on how to program “S/4HANA-ready” now in the ERP systems which will later be converted to S/4HANA, in order to avoid things like using SELECTs without ORDER BY, custom MATNR types, use of objects which were simplified in S/4HANA, implementations in components that are obsolete in S/4HANA, accesses to data models which were changed in S/4HANA, etc.
Unfortunately we don’t have such general instructions or best practices in place, only high-level recommendations: What can you do today to prepare your custom code for SAP S/4HANA. I can just recommend to setup remote ATC checks for S/4HANA readiness asap and execute them on regular base during development for newly developed or changed custom code in an ERP system.
Best regards,
Olga.
Hi Olga,
I watched your YouTube video. We are missing the ATC variant S4HANA_READINESS. We have an ECC 6.0 system with SAP_BASIS 750 SP06. Moreover, SYCM is installed and up to date, and the ZIP file is imported.
Which note am I missing to get the variant, or isn't it available for our system?
Cheers,
Mark
Hi Mark,
the S4HANA_READINESS variant is only available in the SAP_BASIS 7.51 or 7.52 based system. You need to setup such system as a central ATC system for remote code analysis. See the chapter "SAP HANA and SAP S/4HANA checks" of this blog for details.
Best regards,
Olga.
hi;
my system is also 7.52 but I can't see the S4HANA_READINESS variant in ATC
Hi Muzaffer,
is the S4HANA_READINESS_REMOTE check variant available? Is it a central ATC system or a development system?
If it is a central ATC system, then please ensure, that all technical requirements of the are fulfilled and the SAP Notes are applied.
If it is a development system, connected to a central ATC system, then please ensure, that all technical requirements of the are fulfilled and the SAP Notes are applied.
Regards,
Olga.
hi again ,
The S4HANA_READINESS_REMOTE variant is also available in my system; it's a central ATC system and all SAP Notes are applied. Maybe I overlooked some notes, but I don't think so.
thanks Olga
Hello Olga
Thank you for posting this blog.
It is very useful.
By the way, you are using ATC in this blog.
As I understand it, ATC's role is to send the result of the Code Inspector to other systems.
So it is not really necessary to use ATC if it is not necessary to send the result.
In fact, S/4HANA readiness checks via RFC can be done without using ATC, as in this video.
What Is the advantage of using ATC?
Is it a possibility of checking code inspector result for several systems centrally?
Please give me your advice to use ATC.
Best regards
Jake
Hi Jake,
this blog is our SAP official information, which is also depicted in the relevant SAP Help. ATC is the recommended tool for static quality checking and custom code check for SAP S/4HANA. A very good general overview about ATC and benefits compared with SCI is in the blog ABAP Test Cockpit – an Introduction to SAP’s new ABAP Quality Assurance Tool. In the following linked blog are also advantages of the Remote Code Analysis with ATC.
ATC just reuses SCI variants but provides better code coverage, support for customer modifications and enhancements, delivers better check results, guidance for functional adaptation via relevant SAP Notes and detailed information in the check results, direct developer support for functional adaptation and so on.
The video, you are referring to, is not SAP official and cannot be used as a guidance for custom code checks for SAP S/4HANA.
Regards,
Olga.
Hi Olga
Thank you for your advise.
Regards
Jake
Hi Olga,
Can you please suggest how custom tables (transparent, cluster and pool) will be handled during the conversion process? I assume transparent tables will be moved to the target S/4HANA system along with their data by DMO. If a table uses a standard data element/domain that has changed, it will need to be adjusted in SPDD.
How will it handle pool and cluster tables, as SAP HANA only has transparent tables and the standard SAP cluster/pool tables are converted to transparent tables? Will it also convert custom cluster/pool tables to transparent tables, or does the customer have to take any action on these table types?
We are planning a system conversion to S/4HANA and are not clear on custom table migration. Please advise.
Thanks & Regards
Rajesh
Hi Rajesh,
generally your custom pool and cluster tables get automatically converted to transparent tables. Otherwise you are informed via simplification items (from ATC checks) if any adjustments (and how) have to be done.
Best Regards,
Olga.
Thanks Olga for prompt response...
Hi Olga
I ran the ATC check (NetWeaver 7.51 SP00) with the variant S4HANA_READYNESS on my custom code, which contains a simple call to the standard BAPI "BAPI_ALM_ORDERHEAD_GET_LIST",
like the code below:
CALL FUNCTION 'BAPI_ALM_ORDERHEAD_GET_LIST'.
I expected not to have any findings in the ATC check, but to my surprise the result was an error:
Syntactically incompatible change of existing functionality
(FUNC BAPI_ALM_ORDERHEAD_GET_LIST, see Note(s): 0002438131)
I believe the ATC checked not only my custom code but also the SAP code inside the BAPI 'BAPI_ALM_ORDERHEAD_GET_LIST'.
Is this true? How can I check only my custom code, avoiding SAP code?
Thanks
Dear Olga,
Thank you for detailed guide, that's very useful!
I faced an issue in my 7.52 system: ATC does not have the variant S4HANA_READINESS_REMOTE, nor any other variant...
I guess the reason is described in note 2485726, but it does not explain how to get the new variants.
The steps described in note 2444208 worked for me, but there is no variant for S/4HANA 1809. Do you think this is the right way to get it, and do you know a way to get the variant for 1809?
Thanks
Ruslan
Hi Ruslan,
the SAP note 2659194 explains how to get the S4HANA_READINESS_1809 check variant into your 7.52 system.
Best Regards,
Olga.
Hi Olga,
I'm currently trying to find the right analysis tools to be used for the preparation/assessment phase. Within this area I'm struggling a bit with the SAP S/4HANA Readiness Check and the custom code analysis described in this (very well written) blog. The user guide for the S/4HANA Readiness Check states: "For a deep-dive analysis of your custom code and an evaluation of necessary adjustment information, please use the ABAP Test Cockpit (ATC)". To me this means that for planning/estimation purposes the custom code analysis based on a central check system is a must, and the S/4HANA Readiness Check is only optional. Is this correct?
Furthermore, do you know to what extent the check logic overlaps between the S/4HANA Readiness Check and the custom code analysis via ATC?
Thanks,
Florian
Hi Florian,
regarding custom code analysis, the SAP Readiness Check is suitable if you are in the planning phase of an SAP S/4HANA conversion and want an overview of the upcoming effort. If you are already ready to start the SAP S/4HANA conversion and have an SAP S/4HANA system for it, then you can start with this blog.
Technically, the SAP Readiness Check is based on the SYCM tool, not on ATC. The difference is that SYCM is based purely on the where-used list: it finds, for example, all usages of MATNR in your custom code but does not analyze whether a usage is critical, so the false-positive rate is extremely high. Only ATC can analyze whether a usage is actually critical.
Best Regards,
Olga.
Please note: I "converted" this comment to a stand-alone question as it might be of wider interest. The question is here.
Hi Olga,
I’m currently working on a small code change (in NW 7.50 SP13) which involves selecting data from table VBFA based on VBTYP. In parallel, I’m reading some S/4HANA cookbooks related to the SD simplifications, one of which is the extension of VBTYP from 1 to 4 characters (Note 2198647). In that cookbook, the interface IF_SD_DOC_CATEGORY is mentioned, and I noticed that it is already available in our system. So, I quickly changed my code from literals to the corresponding associated types in that interface, which gives me the same results as before when I run the program.
I just did a where-used on the interface and see that it gets used quite a bit in SAP-code but I don’t see any mentions yet in our custom-code.
So, I’m wondering if this is a) okay to already make use of in our own code and would b) make things at least a little bit easier once we actually do a conversion to S/4HANA?
If this is something we can/should already use, I’ll get the word out to our developers to switch from literals and/or own constants to these associated types from the interface.
Thanks and Cheers
Bärbel
Hi Bärbel,
it is always a good idea to fix as much as possible in ERP before the S/4HANA conversion, if you have the time and resources for such an exercise. The foundation must be an ATC S/4HANA readiness check run on your custom code. For example, you can already eliminate things like the use of obsolete SAP standard functionality, which was supported within ERP upgrades but removed in S/4HANA. But I think that in this particular case ATC won't report an error (for using the literal instead of the interface constant), so this change is nice but not strictly required for the S/4HANA adaptation.
I recommend running the ATC S/4HANA readiness check and seeing what you can already adapt in ERP.
Best Regards,
Olga.
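To illustrate the kind of change discussed in this exchange, replacing a hard-coded VBTYP literal with an associated constant from IF_SD_DOC_CATEGORY might look roughly like the ABAP sketch below. The constant name `order` is an assumption for illustration; check the interface in your system for the exact names.

```abap
" Before: hard-coded one-character document category ('C' = sales order)
SELECT * FROM vbfa INTO TABLE lt_vbfa
  WHERE vbtyp_n = 'C'.

" After: associated constant from IF_SD_DOC_CATEGORY
" (constant name 'order' is assumed here; verify in your system)
SELECT * FROM vbfa INTO TABLE lt_vbfa
  WHERE vbtyp_n = if_sd_doc_category=>order.
```

Using the interface constants keeps custom code aligned with the lengthened VBTYP domain described in Note 2198647, instead of scattering one-character literals through the code.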
Hi Olga,
Could you please point me to any documents available for an ECC to S/4HANA migration/conversion?
Hi Durgaprasad,
you need to look at the SAP S/4HANA product page on the SAP Help Portal: under "Product Documentation" (e.g. the Conversion Guide for SAP S/4HANA) and "Conversion & Upgrade Assets".
Regards,
Olga.
Hi Olga,
Are quick fixes only available in S/4HANA scenarios? In our case, we are migrating to Suite on HANA first and will move to S/4HANA a bit later.
We are very interested in the ADT quick fix assistant, but it looks like it is available only in S/4HANA environments. Is this right?
Thanks in advance.
Hello Alvaro,
yes, the prerequisite for Quick Fixes is at least SAP S/4HANA 1809 SP00.
Regards,
Olga.
Hello Olga,
we need to convert a decentralized EWM (not an ERP) to S/4HANA. Are there helpful tools for custom code checking for such an EWM conversion?
Best regards,
Werner Deistler
Hello Werner,
not yet, since there is no conversion path from standalone EWM to embedded EWM in S/4HANA.
Kind Regards,
Olga.
Thanks for updating us about the custom code adaptation process, with examples and analysis options.
Caroline
Hi Olga,
We have searched through many different sources to find out whether the custom code adaptation step in a conversion-type project can run in parallel with the data migration activities, but unfortunately our research has been inconclusive. Running these activities in parallel would save us several weeks of work; could you please let us know what you know about this?
Nicolás.
Hi Nicolas,
do you mean the whole custom code adaptation process or only the functional adaptation step? Custom code adaptation itself is independent of data migration; you can do it in parallel.
Regards,
Olga.
Greatly appreciated! This will save us time!
hi Olga Dolinskaja,
thanks for sharing this very useful information. From reading the blog I learned there are three ways to check S/4HANA readiness, but how is each option used? Could you please explain in detail?
Hi Balu,
for the first, see the blog chapters "Technical infrastructure for custom code analysis" and "Remote ATC with SAP GUI"; you can also watch the video linked there. Local ATC checks for S/4HANA readiness make sense on the converted S/4HANA system if you want to adapt custom code. For details, see Semi-automatic custom code adaptation after SAP S/4HANA system conversion.
The second and third relate to the same app. See the chapter "Remote ATC with SAP Fiori App Custom Code Migration" and the blog linked therein. For the setup of the app in the cloud, see the blog ABAP custom code analysis using SAP Business Technology Platform.
Regards,
Olga.
Java Software Solutions, 4e (Lewis/Loftus), Chapter 6 Exercise Solutions

6.1. Write a method called average that accepts two integer parameters and returns their average as a floating point value.

public double average (int num1, int num2)
{
    return (num1 + num2) / 2.0;
}

6.2. Overload the average method of Exercise 4.9 such that if three integers are provided as parameters, the method returns the average of all three.

public double average (int num1, int num2, int num3)
{
    return (num1 + num2 + num3) / 3.0;
}

6.3. Overload the average method of Exercise 4.9 to accept four integer parameters and return their average.

public double average (int num1, int num2, int num3, int num4)
{
    return (num1 + num2 + num3 + num4) / 4.0;
}

6.4. Write a method called multiConcat that takes a String and an integer as parameters. Return a String that consists of the string parameter concatenated with itself count times, where count is the integer parameter. For example, if the parameter values are "hi" and 4, the return value is "hihihihi". Return the original string if the integer parameter is less than 2.

public String multiConcat (String text, int repeats)
{
    String result = text;
    if (repeats > 1)
        for (int count = 2; count <= repeats; count++)
            result += text;
    return result;
}

6.5. Overload the multiConcat method from Exercise 4.12 such that if the integer parameter is not provided, the method returns the string concatenated with itself. For example, if the parameter is "test", the return value is "testtest".

public String multiConcat (String text)
{
    String result = text + text;
    return result;
}

6.6. Write a method called drawCircle that draws a circle based on the method's parameters: a Graphics object through which to draw the circle, two integer values representing the (x, y) coordinates of the center of the circle, another integer that represents the circle's radius, and a Color object that defines the circle's color. The method does not return anything.

// assumes java.awt.* is imported
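The preview cuts off before the body of the drawCircle solution. Below is a sketch that completes it along the lines the exercise describes (the drawCircle body is an assumption, not the book's text) and exercises multiConcat, adapted as static methods so it runs standalone; drawing happens off-screen on a BufferedImage so no window is needed.

```java
import java.awt.Color;
import java.awt.Graphics;
import java.awt.image.BufferedImage;

public class Chapter6Demo {
    // Exercise 6.4 solution, as given above
    public static String multiConcat(String text, int repeats) {
        String result = text;
        if (repeats > 1)
            for (int count = 2; count <= repeats; count++)
                result += text;
        return result;
    }

    // Exercise 6.6: hypothetical completion of the truncated solution --
    // draws the outline of a circle centered at (x, y) with the given radius
    public static void drawCircle(Graphics page, int x, int y, int radius, Color color) {
        page.setColor(color);
        page.drawOval(x - radius, y - radius, radius * 2, radius * 2);
    }

    public static void main(String[] args) {
        System.out.println(multiConcat("hi", 4)); // hihihihi
        System.out.println(multiConcat("hi", 0)); // hi (unchanged when count < 2)

        // Draw off-screen and verify the outline crosses the center row
        BufferedImage img = new BufferedImage(100, 100, BufferedImage.TYPE_INT_RGB);
        drawCircle(img.getGraphics(), 50, 50, 20, Color.RED);
        int reds = 0;
        for (int px = 0; px < 100; px++)
            if (img.getRGB(px, 50) == Color.RED.getRGB()) reds++;
        System.out.println(reds >= 2); // the outline crosses this row twice
    }
}
```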
This note was uploaded on 05/17/2011 for the course COP 3530 taught by Professor Davis during the Spring '08 term at University of Florida.
You can interface Python to ANY .DLL by using CALLDLL. I've written a wrapper that I feel makes doing this easier, which is posted at: -Larry

"Torsten Mohr" <tmohr at s.netic.de> wrote in message news:c1qlv5$58m$1 at schleim.qwe.de...
> Hi,
>
> i have written a DLL that implements some C++ classes and
> their methods. Now i would like to make the classes and
> their methods known to python.
>
> Is there some example code available on how to do this?
> I want to make some classes, their constructors and some
> methods known to python.
>
> I've read the python docu "Tutorial", "Distributing Python Modules",
> "Extending and Embedding" and "Python/C API".
> But none of them seems to tell me how i can interface to:
>
> namespace abc {
>     class Abc {
>         Abc();
>         ~Abc();
>
>         int meth1(int abd, std::string s);
>     };
>
>     class Def {
>         Def();
>         ~Def();
>
>         int meth1(long abd, char* g);
>     };
> }
>
> Has anybody got some example code for the necessary wrapper to make
> all the above known to python?
>
> Thanks for any hints,
> Torsten.
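This thread predates ctypes joining the standard library (Python 2.5), which is now the usual way to call a plain C DLL from Python. C++ classes cannot be loaded directly because of name mangling; the DLL would first need to export flat extern "C" wrapper functions (e.g. Abc_create, Abc_meth1, Abc_destroy, all hypothetical names here). A minimal sketch of the Python side, using the system math library as a stand-in for a custom DLL:

```python
import ctypes
import ctypes.util

# Locate and load a shared library. A real project would load its own DLL,
# e.g. ctypes.CDLL("abc.dll"), exposing extern "C" wrappers around the
# C++ classes shown in the message above.
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the C signature so ctypes converts arguments and results correctly
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

print(libm.sqrt(9.0))  # 3.0
```

The argtypes/restype declarations matter: without them ctypes defaults to int conversions and would silently corrupt double arguments.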
Walkthrough: Configure Windows Azure ACS (Version 1) for Microsoft Dynamics CRM Integration
This walkthrough guides you through an advanced configuration scenario for Windows Azure Access Control Service, version 1. In this walkthrough, you will create an issuer, scope, and rules to allow a listener application to read the Microsoft Dynamics CRM message posted to the service bus. This walkthrough applies to integration with any type of Microsoft Dynamics CRM installation.
As a prerequisite, perform the following tasks before continuing with this walkthrough.
- Install the Windows Azure AppFabric SDK V1.0 and code samples.
- Compile the acm tool in the AccessControl\ExploringFeatures\Management\AcmTool folder of the sample code.
- Configure Microsoft Dynamics CRM for Windows Azure integration. For more information, see Walkthrough: Configure CRM for Integration with Windows Azure.
- Create a project in Windows Azure and record the service endpoint Uri, service namespace, and management key values for use in this walkthrough.
Configure the Acm.exe Tool
To configure the acm.exe tool, follow these steps.
Copy the public certificate to the same folder as the acm.exe tool.
In that same folder, create a configuration file named “acm.exe.config” that contains the following information in XML format.
Replace <servicenamespace> with your Windows Azure solution namespace.
Replace <mymgmtkey> with the management key value, which is case-sensitive.
The “-sb” suffix in the service refers to the service bus instance of ACS. If you are using federated mode, remove the “-sb” from the service namespace value.
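The XML content for acm.exe.config referenced in the step above conventionally has the shape below in the AppFabric SDK samples. The key names (host, service, mgmtkey) are recalled from the SDK sample and should be verified against your copy of the SDK; the placeholders mirror the ones the walkthrough describes.

```xml
<configuration>
  <appSettings>
    <!-- ACS management endpoint host -->
    <add key="host" value="accesscontrol.windows.net" />
    <!-- Replace <servicenamespace>; keep the "-sb" suffix for the service bus
         instance of ACS, or remove it when using federated mode -->
    <add key="service" value="<servicenamespace>-sb" />
    <!-- Replace <mymgmtkey>; the management key value is case-sensitive -->
    <add key="mgmtkey" value="<mymgmtkey>" />
  </appSettings>
</configuration>
```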
Configure the issuer
Open a Command or Windows PowerShell console window and change the working directory to the acm.exe tool folder. Run the acm.exe tool using the following command.
Substitute the appropriate values for <name>, <issuername>, and <filename> as described below.
You can type the command ".\acm.exe getall issuer" to view the created issuer information.
Configure the scope
The following information describes how to configure the scope of ACS for a normal mode post by Microsoft Dynamics CRM.
To view information on the existing token policy and scope, enter the following commands:
You can use an existing base scope or create a new scope with the Uri of the service endpoint that your listener will use. For example, to create a new scope, enter the following command and substitute the appropriate values where indicated by <>.
In the previous command, <myscope> is a name for your new scope, <tokenpolicyid> is the ID shown in the token policy information output from the first command, and <uri-of-service-endpoint> is the Uri of your Windows Azure project’s service endpoint.
Create rules for the scope
The final step in configuring ACS is to create a rule for a scope. First, list the available scopes because you will need the target scope’s ID. Next, list all rules of that scope.
Substitute the appropriate scope ID value in the <scopeid> parameter of the second command.
Create a rule for the target scope that will allow Microsoft Dynamics CRM to send or “post” to the Windows Azure Service Bus. You do this by configuring ACS to map the input “Organization” claim from Microsoft Dynamics CRM, identified by inclaimissuerid, to the output “Send” claim for the service bus, by executing the following command:
Substitute the appropriate values for <myRule>, <scopeid>, and <orgName> as described below.
See Also
Microsoft Dynamics CRM 2011
In this article, we will learn about the structure of a C++ program using a simple C++ "Hello World" program. First, we will look at the structure of a C++ program.
Structure of c++ program
A C++ program is structured in a specific manner. It is divided into the following three sections:
- Standard Libraries Section
- Main Function Section
- Function Body Section
Let’s take an example to understand the above three sections.

#include <iostream>
using namespace std;

int main() {
    cout << "Hello World!" << endl;
    return 0;
}

Above is a simple "Hello World" C++ program; now we will go through each section.
Standard libraries section
#include <iostream>
using namespace std;

- #include is a preprocessor directive that effectively copies and pastes the entire text of the file specified between the angle brackets into the source code.
- The file <iostream>, a standard header that comes with the C++ compiler, is short for input-output streams. It contains the code for displaying output to and reading input from the user.
- namespace is a prefix that is applied to all the names in a certain set.
- The iostream header defines two names used in this program: cout and endl.
- This code is saying: use the cout and endl tools from the std toolbox.
- Here cout is used for printing output, whereas endl ends the current line.
Main Function Section
int main() {}
- The starting point of all C++ programs is the main function.
- This function is called by the operating system when your program is executed by the computer.
- { signifies the start of a block of code, and } signifies its end.
Function body section
cout << "Hello World" << endl;
return 0;

- The name cout is short for character output; it displays whatever is sent to it with the << operator.
- Symbols such as << can behave like functions and are used together with cout.
- The return statement returns the value 0 from the main function to the operating system, which conventionally signals successful execution.
- After the return statement, execution control returns to the operating system component that launched this program.
- Execution of the code terminates here.
Thanks.
I can't go that path, as the data can't be moved (at this time) to a different
database. It must continue to exist in the form that the legacy app expects
it in. I'm attempting to provide a new-and-improved interface to the data.
Perhaps in time it will remove the need for the legacy interface, but not in
the near future.
Yes, I know I would need to create my own query engine. In reality, that
engine would only need to map the SQL queries into the legacy system's queries.
Thanks for the reply. I'll look into the SQL parsers in the projects you
mentioned.
This comment is in my old snmp-tables-via-gadfly code - it looks like it's
still current.
"""A remote view must define self.column_names
to return a (fixed) list of string column names and
self.listing() to return a possibly varying
list of row values. If there is a single column
the listing() list must return a list of values,
but for multiple columns it must return a list
of tuples with one entry for each column.
The remote view implementation may optionally
redefine __init__ also, please see introspect.py
"""
A simple (untested!) example might be like:
class MySubclassOfRemoteView(RemoteView):
    # static = 0 means reconstruct internal structures for each query.
    static = 0
    # these must be fixed
    column_names = ["user", "extension"]
    # but the rows returned by this can vary.
    def listing(self):
        return [('anthony', '7015'),
                ('rjones', '7002'),
                ('rupert', '7014'),
               ]
Once you've subclassed RemoteView, something like
conn = gadfly()
conn.startup("dbtest", "dbtest") # assume directory "dbtest" exists
conn.add_remote_view("name", MySubclassOfRemoteView() )
will add it to the DB.
For more, check out
I'm pretty sure this still works with the current state-of-the-art gadfly,
but I can't be sure.
I can't remember where I found this stuff out - I suspect it was from
asking Aaron in an email. It could probably do with being documented and
some examples and the like...
Anthony
[Legacy database antics]
> Yes, I know I would need to create my own query engine. In reality, that
> engine would only need to map the SQL queries into the legacy system's
> queries.
I've been in a similar situation before (having to write an SQL-based
layer sitting on top of a proprietary, badly specified, non-standard
database system, albeit with various commercial entities playing "pass
the parcel" with it) and my advice, if you really have to go
through with writing a query engine, is to find out
exactly which kinds of queries you really need and to concentrate only
on those - this may help to reduce the complexity of the engine quite
considerably.
Of course, my ultimate advice is to rewrite the application to use a
standard, open database system and to migrate the data across to that
system, but I suppose we don't all have that luxury. Sometimes,
however, what might be considered "luxury" is actually "necessity" and
vice versa, but organisational politics may obscure this
realisation...
Paul
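Paul's advice above, supporting only the query shapes the application actually issues, can keep the "engine" down to a thin translator. A rough, hypothetical sketch; the legacy query description format returned here is invented for illustration:

```python
import re

# Accept only one narrow shape of SELECT statement:
#   SELECT col[, col...] FROM table [WHERE field = 'value']
# Anything else is rejected rather than half-translated.
PATTERN = re.compile(
    r"SELECT\s+(?P<cols>[\w,\s]+)\s+FROM\s+(?P<table>\w+)"
    r"(?:\s+WHERE\s+(?P<field>\w+)\s*=\s*'(?P<value>[^']*)')?\s*$",
    re.IGNORECASE,
)

def translate(sql):
    """Map a supported SQL query onto a legacy-style query description."""
    m = PATTERN.match(sql.strip())
    if m is None:
        raise ValueError("query shape not supported: " + sql)
    cols = [c.strip() for c in m.group("cols").split(",")]
    query = {"table": m.group("table"), "columns": cols}
    if m.group("field"):
        query["filter"] = (m.group("field"), m.group("value"))
    return query

print(translate("SELECT user, extension FROM phonebook WHERE user = 'rjones'"))
```

The dictionary produced stands in for whatever call the legacy system's own query API expects; the point is that the translator only has to be as general as the queries the application really sends.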
Good luck, and tell us how you get on. ;-)
Cheers,
Simon Brunning
TriSystems Ltd.
sbru....
Run a ResNet50 model in ONNX format on TVM Stack with LLVM backend
Reading time: 15 minutes | Coding time: 15 minutes
In this guide, we will run a ResNet50 model in ONNX format on the TVM Stack with the LLVM backend. You do not need any specialized hardware such as a GPU or TPU to follow this guide; a plain CPU is enough.
You need to have TVM Stack installed on your system to follow along. You can do this in 15 minutes by following our TVM installation guide
Step 1: Get the ResNet50 model in ONNX format
We need the pre-trained ResNet50 model in ONNX format. You could train and build your own ResNet50 model from scratch, but in this guide we are using an available model to get started quickly.
We will get the model from the Official ONNX Model Zoo which contains several sample models in ONNX format:
wget
Step 2: Get the input image for inference
We need a sample image to feed to our model:
wget
Step 3: Get the TVM code
In short, we will load the ONNX model (resnet50v1.onnx) and the input image (kitten.jpg). We will convert the ONNX model to NNVM format and compile it using the NNVM compiler. Once done, we will define the backend as LLVM and run the model using the TVM runtime.
Following code is written in Python:
import nnvm
import nnvm.compiler
import tvm
import onnx
import numpy as np
import time

# Load the ONNX model and convert it to NNVM format
onnx_model = onnx.load_model('resnet50v1.onnx')
sym, params = nnvm.frontend.from_onnx(onnx_model)

# Load and preprocess the input image
from PIL import Image
img = Image.open('kitten.jpg').resize((256, 256))
img = img.crop((16, 16, 240, 240))
img = img.convert("YCbCr")  # convert to YCbCr
x = np.array(img)[np.newaxis, :, :, :]
x = normalize(x)  # image normalization helper, assumed to be defined elsewhere
x = x.swapaxes(1, 3)
x = x.swapaxes(2, 3)

# Compile the graph with the NNVM compiler for the LLVM (CPU) backend
target = 'llvm'
input_name = sym.list_input_names()[0]  # first graph input is the image tensor
shape_dict = {input_name: x.shape}
graph, lib, params = nnvm.compiler.build(sym, target, shape_dict, params=params)

# Execute on TVM
from tvm.contrib import graph_runtime
ctx = tvm.cpu(0)
dtype = 'float32'
m = graph_runtime.create(graph, lib, ctx)

# set inputs
m.set_input(input_name, tvm.nd.array(x.astype(dtype)))
m.set_input(**params)

start = time.clock()
m.run()
end = time.clock()

print("Time taken to successfully execute: ")
print(end - start)
Save the above code as "resnet50_onnx.py"
Step 4: Execute the code
To execute the code, use the following command:
python resnet50_onnx.py
The execution time will vary from system to system.
You can add a Web reference, by following the next steps:
1. In Solution Explorer, right-click the name of the project to add the Web service to and then click Add Web Reference.
The Add Web Reference dialog box is displayed
2. In the URL box, enter the URL of the Web service to use, or use the links in the browse pane to locate the Web service you want. If you are developing a Web application on a machine that is behind a firewall and the application will consume Web services from outside the firewall, you have to include the address and port of your network’s proxy server in the URL.
3. In the Web services found at this URL box, select the Web service to use.
4. Verify that your project can use the Web service, and that any external code provided is trustworthy.
5. In the Web reference name field, you should enter a name that you will use in your code to access the selected Web service programmatically. By default, Web references are assigned a namespace that corresponds to their server name. You can change this value and enter a custom namespace name.
6. Click Add reference.
If your Web site does not already have one, Visual Studio creates an App_WebReferences folder. It then creates files required for the proxy class using the name you provided in Step 5.
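Once the reference is added, the generated proxy class can be called from code. The following C# fragment is purely illustrative: the namespace MyWebReference and the QuoteService/GetQuote names are hypothetical and depend on the name you entered in Step 5 and on the service's WSDL.

```csharp
// Hypothetical names: "MyWebReference" is the Web reference name from Step 5,
// and "QuoteService"/"GetQuote" stand in for whatever the WSDL defines.
MyWebReference.QuoteService service = new MyWebReference.QuoteService();

// Behind a firewall, route calls through the network proxy if required:
// service.Proxy = new System.Net.WebProxy("http://proxyserver:8080");

string result = service.GetQuote("MSFT");
```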
Circular buffers are very simple data structures that use start and end pointers on a resizable array, and they provide strictly better performance than vanilla resizable arrays. Specifically, without sacrificing space or implementation complexity, clear(), prepend(), trimStart(), trimEnd(), and removals from the start or end become constant-time, while all other operations keep the same complexity.
I have a working example with tests here:
import scala.collection.mutable
import scala.reflect.ClassTag
/**
* A data structure that provides O(1) get, update, length, append, prepend, clear, trimStart and trimRight
* @tparam A
*/
class CircularBuffer[A: ClassTag](initialSize: Int = 1<<4) extends mutable.Buffer[A] {
private var array = Array.ofDim[A](initialSize)
private var start, end = 0
override def apply(idx: Int) = {
checkIndex(idx)
array(mod(start + idx))
}
override def update(idx: Int, elem: A) = {
checkIndex(idx)
array(mod(start + idx)) = elem
}
override def length = mod(mod(end) - mod(start))
override def +=(elem: A) = {
ensureCapacity()
array(mod(end)) = elem
end += 1
this
}
override def clear() = start = end
override def +=:(elem: A) = {
ensureCapacity()
start -= 1
array(mod(start)) = elem
this
}
override def prependAll(xs: TraversableOnce[A]) =
xs.toSeq.reverse.foreach(x => x +=: this)
override def insertAll(idx: Int, elems: Traversable[A]) = {
checkIndex(idx)
if (idx == 0) {
prependAll(elems)
} else {
val shift = (idx until size).map(this)
end = start + idx
this ++= elems ++= shift
}
}
override def remove(idx: Int) = {
val elem = this(idx)
remove(idx, 1)
elem
}
override def remove(idx: Int, count: Int) = {
checkIndex(idx)
if (idx + count >= size) {
end = start + idx
} else if (count > 0) {
if (idx == 0) {
start += count
} else {
((idx + count) until size).foreach(i => this(i - count) = this(i))
end -= count
}
}
}
/**
* Trims the capacity of this CircularBuffer's instance to be the current size
*/
def trimToSize(): Unit = resizeTo(size)
override def iterator = indices.iterator.map(apply)
override def trimStart(n: Int) = if (n >= size) clear() else if (n >= 0) start += n
override def trimEnd(n: Int) = if (n >= size) clear() else if (n >= 0) end -= n
override def head = this(0)
override def last = this(size - 1)
private def mod(x: Int) = Math.floorMod(x, array.length)
private def resizeTo(len: Int) = {
require(len >= size)
val array2 = Array.ofDim[A](len)
val (l, r) = (mod(start), mod(end))
if (l <= r) {
Array.copy(src = array, srcPos = l, dest = array2, destPos = 0, length = size)
} else {
val s = array.length - l
Array.copy(src = array, srcPos = l, dest = array2, destPos = 0, length = s)
Array.copy(src = array, srcPos = 0, dest = array2, destPos = s, length = r)
}
end = size
start = 0
array = array2
}
private def checkIndex(idx: Int) = if(!isDefinedAt(idx)) throw new IndexOutOfBoundsException(idx.toString)
private def ensureCapacity() = if (size == array.length - 1) resizeTo(2 * array.length)
}
I propose we replace the current implementation of mutable.ArrayBuffer with CircularBuffers. I filed SI-10167. Thoughts?
That seems an interesting idea.
However, accessing or updating an element requires one more addition and one modulo operation, on top of the bounds checks. In fact, you're using floorMod, which seems slower, and since its argument seems always positive you should be able to use % (which isn't fast either).
To use this to replace ArrayBuffer (instead of adding to it) one would have to ensure the overhead is negligible, even when the memory access cost is minimal (e.g. when the entire array is in L1 cache). That's far from obviously true.
However, the expert on these matters is @Ichoran, so let me defer to him.
I think that's an interesting idea! The timing is good as a group of us are currently working on a strawman design for new collections. You might add an issue (or, even better, a PR) here:
How about adding a new CircularBuffer collection alongside ArrayBuffer?
The other question is whether to make it the default mutable Seq. That would probably be less of an issue since if you're relying on characteristics of ArrayBuffer you should be explicitly using that class anyway.
Yeah, that sounds like the right tack to me. As @Blaisorblade points out, the characteristics aren't identical, and folks have established expectations of ArrayBuffer, so replacing the implementation with an entirely different data structure seems a bit iffy to me -- not least, because a circular buffer isn't what I expect an implementation of "array buffer" to be.
But adding it as a new data structure seems totally uncontroversial, and potentially quite useful. And making it the default mutable Seq is an intriguing idea, although I can't say I've got a handle on all the ramifications there...
Although my very naive benchmarks did not show any regressions compared with mutable.ArrayBuffer (obviously the prepend and trims are constant-time now), I agree with you. floorMod was introduced in Java 8, so we can get rid of it, especially as the second argument is positive here. Also, if we always allocate arrays whose sizes are powers of 2, it's trivial to calculate the mod: x mod y = x & (y - 1) when y is a power of 2.
It's critical (and less obvious) that the first argument is also non-negative: -1 % 10 = -1, unlike with floorMod.
Sounds excellent—and that works even for negative x. Power-of-2 sizes are sufficient to ensure the asymptotic complexity. There's no point in non-power-of-2 sizes, as long as you always double the size when resizing.
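The equivalence between floorMod and bit-masking for power-of-two sizes is easy to spot-check in Java, including for negative first arguments (where plain % would give a negative result but masking does not):

```java
public class MaskModDemo {
    public static void main(String[] args) {
        int capacity = 8;           // must be a power of two
        int mask = capacity - 1;
        for (int x : new int[]{-17, -9, -1, 0, 5, 23}) {
            // x & (capacity - 1) equals Math.floorMod(x, capacity),
            // even for negative x, thanks to two's-complement masking
            System.out.println(x + ": " + Math.floorMod(x, capacity)
                                 + " == " + (x & mask));
        }
    }
}
```

Note that -1 % 8 would be -1 in Java, so the masking form is both faster and already "floor" semantics for free.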
In fact, if you want to trade off time for space (in constant factors, not asymptotically), you can multiply the array size by any number > 1 — 2 is just the standard choice, but any factor works, while adding a constant to the size doesn't. Picking less than 2 reduces the maximum amount of empty slots but makes resizes more frequent. IIRC, the Cormen book has the details. I don't remember many resizable arrays offering this amount of control so I'd leave it out and stick to power-of-2 sizes.
Last nitpick: for better constant factors, you want to use Java's arraycopy inside resize, since that does a single boundary check for the entire operation.
If this is to become a bit more generic, it could support multiple dimensions like an array does. As new values are added, the dimension rolls around. But since most usages will be 1D, 2D, or 3D, maybe you can provide optimised special cases for those.
Forgive me if I'm wrong, but the data structure described is not a circular buffer, because circular buffers are not growable.
A circular buffer, circular queue, cyclic buffer or ring buffer is a fixed-size queue that on overflow will throw an exception, or drop either the head or the entire buffer. Making it growable does not necessarily make it more useful, because the fixed size (and consequently its behaviour on overflow) is the whole point.
I do have simple implementations for such circular buffers for reference:
Also, the most famous implementation is the Disruptor.
I concede that I might be wrong and there are people here knowing much more than I do.
But if I am correct please don't overload the term, because having precise meanings for CS notions is useful.
Thanks,
Good point. Actually, in general "buffer" tends to refer to fixed-length data structures, so ArrayBuffer and Buffer in Scala maybe aren't the best names either. In JavaScript for example ArrayBuffer is a fixed-length raw data buffer. But it may be too late to change the terminology in Scala.
ArrayDeque might be a good name for the "resizing circular buffer" structure. Deques do normally resize, so it's a deque that also behaves like an array. In Java ArrayDeque uses a circular buffer internally, though it does not provide random access to elements.
I guess we could also call it ArraySeqDeque, since it's capable of behaving like a Seq and deque, and it's implemented as an array, but I prefer ArrayDeque.
I think a growable, mutable circular buffer, as a general purpose mutable data structure, is a wonderful idea.
Whether we call it a "circular buffer" or something else doesn't really matter to me. Both growable and non-growable circular buffers would be valuable, but non-growable circular buffers are a more specific use case, whereas growable circular buffers can replace all sorts of things:
After all, growable circular buffers are basically Deques, and provide really fast adding/removing at the start/end. This is in addition to providing similar performance characteristics to a normal growable-array buffer for normal growable-array operations (indexed-lookup, iteration, indexed-replacement, ...), without much implementation complexity.
For what it's worth, I have wanted this enough that I have implemented it twice already:
I think a growable circular buffer would be a wonderful general-purpose mutable data structure, and would serve well as a "default" mutable data structure.
I agree with everything you said, I felt the need for it myself, I just think we shouldn't use the name Circular Buffer. Name it something else please. ArrayDeque for example is probably fine and more specific, as many users know what a Deque is and they'll immediately see it's backed by an Array, so cool, it means it has good traversal / indexing performance, etc.
ArrayDeque (or something like that) indeed sounds like an improvement—that name is also used by C++'s STL (). I was also confused that I hadn't seen this before—I just wasn't recognizing a deque because of the name (and because I last saw them ages ago). And calling this a ring buffer confused me too — ring buffers are what you need, for instance, when you receive video streaming.
FWIW, Wikipedia also mentions growing circular buffers here.
Another alternative to "growable circular buffers" are hashed array trees. They are also very simple - basically instead of doubling allocation and copying when full, we allocate a fixed block each time we reach capacity. They are asymptotically as fast as "growable circular buffers" but only allocates O(sqrt(n)) extra space instead of O(n/2) extra space. In practice random-access might be slightly slower because of locality.
They compare more to normal ArrayBuffer—head insertions aren't O(1) but O(n). I'm not sure they should be in the core—but a goal of the new collection libraries, IIRC, is to simplify creating new Scala collections.
I have a PR ready for strawman-collections:
I agree with you re: the name, CircularBuffer, I would usually associate it with the 'Disruptor' and other implementations. Where I've seen it used (algos) it has traditionally been a fixed size/lockless data structure.
I have a question - should this go in Scala 2.13 or Scala 2.12.x?
The code can be easily put into Scala 2.12 -
The only thing is binary compatibility against Scala 2.12.0 - does adding an extra class matter? If yes, can we introduce this as a private class and just change mutable.{Queue, Stack, Buffer} to use this instead?
The pull request for this is ready for full review:
Also provided some proof of concept implementations of Stacks and Queues using this data structure.
Preliminary benchmarks look great but soliciting additional code reviews and benchmarking.
Source: https://contributors.scala-lang.org/t/using-circular-buffers-as-mutable-arraybuffer/454
Multicast on a specific NIC
I am trying to setup a socket that listens and send on a specific NIC. This socket also joins a multicast group. For some reason it doesn't work. Here is a sample code of what I do:
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#define MY_IP "20.1.1.1"
#define MY_PORT 52101
#define MY_MC "230.0.0.8"
int main()
{
int sock = socket (AF_INET, SOCK_DGRAM, 0);
if (sock < 0)
return -1;
struct sockaddr_in addr;
memset (&addr, 0, sizeof (addr));
addr.sin_family = AF_INET;
addr.sin_addr.s_addr = inet_addr (MY_IP);
addr.sin_port = htons (MY_PORT);
int ret = bind (sock, (struct sockaddr*)&addr, sizeof (addr));
if (ret != 0)
return -1;
struct ip_mreq mreq;
mreq.imr_multiaddr.s_addr = inet_addr (MY_MC);
mreq.imr_interface.s_addr = inet_addr (MY_IP);
ret = setsockopt (sock, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof (mreq));
if (ret != 0)
return -1;
char buff[1500];
ret = recv (sock, buff, sizeof (buff), 0);
if (ret <= 0)
{
printf ("Error in recv!\n");
return -1;
}
printf ("Received packet from MC!!");
return 0;
}
In this sample the recv function never returns. If I change MY_IP to be INADDR_ANY the function recv returns. In my application I want to get only packets from a specific interface and not from all of them.
Thx in advance for help.
instead of:
mreq.imr_interface.s_addr = inet_addr (MY_IP);
i think you want:
mreq.imr_interface.s_addr=htonl(INADDR_ANY);
if that's not the issue then i can look more closely...
Hi,
Thank you for answering.
What I meant was that if I replace the line:
addr.sin_addr.s_addr = inet_addr (MY_IP);
with
addr.sin_addr.s_addr = htonl (INADDR_ANY);
then it works fine and my socket receives multicast packets. The problem is that in this case I also receive non-multicast packets with destination port=MY_PORT which arrived on other interfaces.
Source: http://www.linuxforums.org/forum/networking/166882-multicast-specific-nic.html
git_hooks alternatives and similar packages
Based on the "Debugging" category
visualixir (9.7 / 1.6): A process visualizer for remote BEAM nodes.
Wobserver (9.6 / 0.0): Web based metrics, monitoring, and observer.
observer_cli (9.6 / 4.0): Visualize Elixir & Erlang nodes on the command line; it aims to help developers debug production systems.
elixometer (9.5 / 6.7): A light Elixir wrapper around exometer.
eper (9.3 / 0.0): Erlang performance and debugging tools.
exometer (9.3 / 0.0): Basic measurement objects and probe behavior in Erlang.
eflame (8.9 / 0.0, L3): Flame Graph profiler for Erlang.
ex_debug_toolbar (8.7 / 0.0): A toolbar for Phoenix projects to interactively debug code and display useful information about requests: logs, timelines, database queries etc.
beaker (8.4 / 0.0): Statistics and Metrics library for Elixir.
dbg (7.3 / 0.0): Distributed tracing for Elixir.
GenMetrics (6.7 / 0.0): Elixir GenServer and GenStage runtime metrics.
exrun (6.6 / 0.0): Distributed tracing for Elixir with rate limiting and a simple macro-based interface.
rexbug (6.5 / 4.0): An Elixir wrapper for the redbug production-friendly Erlang tracing debugger.
erlang-metrics (6.5 / 0.0): A generic interface to different metrics systems in Erlang.
quaff (6.3 / 0.0): The Debug module provides a simple helper interface for running Elixir code in the erlang graphical debugger.
booter (4.0 / 0.9): Boot an Elixir application, step by step.
eh (3.5 / 0.0): A tool to look up Elixir documentation from the command line.
Graphitex (1.3 / 0.0): Graphite client for Elixir.
ether (1.0)
README
GitHooks
Installs git hooks that will run in your Elixir project.
Any git hook type is supported, check here the hooks list.
Table of Contents
<!-- vim-markdown-toc Marked -->
- Installation
- Configuration
- Removing a hook
- Execution
<!-- vim-markdown-toc -->
Installation
Add to dependencies:
def deps do [{:git_hooks, "~> 0.4.1", only: [:test, :dev], runtime: false}] end
Then install and compile the dependencies:
mix deps.get && mix deps.compile
Backup current hooks
This project will automatically back up the hook files that are going to be overwritten.
The backup files will have the file extension .pre_git_hooks_backup.
Automatic installation
This library will automatically install the git hooks configured in your config.exs file.
Manual installation
You can manually install the configured git hooks at any time by running:
mix git_hooks.install
Configuration
One or more git hooks can be configured, those hooks will be the ones installed in your git project.
Currently, two configuration options are supported:
- tasks: A list of the commands that will be executed when running a git hook. See types of tasks for more info.
- verbose: If true, the output of the mix tasks will be visible. This can be configured globally or per git hook.
Example config
In config/config.exs:
if Mix.env() != :prod do config :git_hooks, verbose: true, hooks: [ pre_commit: [ tasks: [ "mix format" ] ], pre_push: [ verbose: false, tasks: [ "mix dialyzer", "mix test", "echo 'success!'" ] ] ] end
Type of tasks
Command
To run a simple command you can either declare a string or a tuple with the command you want to run. For example, having "mix test" and {:cmd, "mix test"} in the hook tasks will be equivalent.
If you want to forward the git hook arguments, add the option include_hook_args: true.
config :git_hooks, verbose: true, hooks: [ commit_msg: [ tasks: [ {:cmd, "echo 'test'"}, {:cmd, "elixir ./priv/test_task.ex", include_hook_args: true}, ] ] ]
Executable file
The following configuration uses a script file to be run with a git hook. If you want to forward the git hook arguments, add the option include_hook_args: true.
config :git_hooks, verbose: true, hooks: [ commit_msg: [ tasks: [ {:file, "./priv/test_script"}, {:file, "./priv/test_script_with_args", include_hook_args: true}, ] ] ]
The script file executed will receive the arguments from git, so you can use them as you please.
Removing a hook
When a git hook configuration is removed, the corresponding installed hook file is automatically deleted.
Any backups made up to that moment will still be kept.
Execution
Automatic execution
The configured mix tasks will run automatically for each git hook.
Manual execution
You can also run any configured git hook manually.
The following example will run the pre_commit configuration:
mix git_hooks.run pre_commit
It is also possible to run all the configured hooks:
mix git_hooks.run all
Source: https://elixir.libhunt.com/elixir_git_hooks-alternatives
PostgreSQL is a very versatile database. If you want to learn SQL, then a quick way to start is to 1) grab some data you want to analyze 2) insert into a PostgreSQL table and 3) use a SQL client such as pgweb and get started analyzing data. You can use any SQL client of your choice but pgweb is easy to use and is browser based which makes it a very convenient choice.
As with many of my posts, I'll use Docker to run PostgreSQL and pgweb.
START A POSTGRESQL CONTAINER
docker run -d -p 5432:5432 --name postgres_db -e POSTGRES_PASSWORD=postgres postgres
START A PGWEB CONTAINER
docker run -d -p 8081:8081 --link postgres_db:postgres_db -e DATABASE_URL=postgres://postgres:postgres@postgres_db:5432/postgres?sslmode=disable sosedoff/pgweb --readonly
The --link flag here is being used to allow the pgweb container to access the PostgreSQL database running inside another container. Recall the --name flag used to set the name of the PostgreSQL container to postgres_db. That name is now being used to point the pgweb container to where PostgreSQL is running. This is evident in the environment variable DATABASE_URL that provides the connection details needed by pgweb to connect to PostgreSQL.
Go over to your browser and type localhost:8081 to access pgweb. There isn't any data available yet, which will be fixed below.
INSERT DATA INTO POSTGRESQL
In the snippet below, I use pandas to grab the famous iris dataset. I then define a connection using sqlalchemy to the PostgreSQL container. Since I'm doing this on the host system, I can use localhost to connect to the container; the containers are by default available on localhost on the host system. Finally, I use the to_sql function to write the dataset to the database.
import pandas as pd from sqlalchemy import create_engine iris = pd.read_csv('') engine = create_engine('postgresql://postgres:postgres@localhost:5432/postgres') iris.to_sql("iris", engine, if_exists = 'replace', index=False, chunksize=1000)
Now when I go over to localhost:8081 in my browser, I see iris in the list of tables.
NOTE
PostgreSQL: Persisting your PostgreSQL data inside your container requires you to set the appropriate -v flags. Besides data persistence there are other factors you may have to deal with when running PostgreSQL inside a container. Please find more details here for some great suggestions on this topic.
pgweb: You may want to limit access to the SQL client, run pgweb on a different port, allow read-only access to the database and so on. A large number of options can be found here. To enable these options you may need to create a custom Dockerfile for pgweb and enable the options of your choice. You can reuse the pgweb Dockerfile found here.
Source: https://harshsinghal.dev/learn-sql-in-a-browser-with-postgresql-and-pgweb/
|
How to move a ball
Created Nov 11, 2011
We want to start with a very essential step. We will program an applet in which a ball is moving from the left to the right hand side. I know this is nothing BIG but if you want to learn how to program games it is very important to understand how to animate objects!
At the beginning we have to write our basic structure of an applet again, but we will add two little things. Our applet has to implement the interface Runnable and the corresponding method run() to animate an object. The structure of the applet should look like this:
import java.applet.*;
import java.awt.*;
public class BallApplet extends Applet implements Runnable
{
public void start() { }
public void stop() { }
public void destroy() { }
public void run () { }
public void paint (Graphics g) { }
}
Threads
A thread is a piece of program that is able to run parallel to other parts of the program (multithreading). Threads are implemented by the class Thread, the interface Runnable and the method run(), we have already implemented these two things in the step before. Important methods of the class Thread are:
- Thread.start(): starts a thread
- Thread.stop(): stops a thread
- Thread.sleep(time in milliseconds): stops thread for a certain amount of time
You can find more functions of the thread class in the Java API!
And here comes the code!
To move a object we need another object that has to be an instance of the class Thread; we declare this object in the start - method of our applet:
public void start ()
{
Thread th = new Thread (this);
// start this thread
th.start ();
}
Now this thread is running in the run() method of our applet. Every time everything in the run method has been executed, we stop the thread for a short time. Your run method should look like this:
public void run ()
{
// set ThreadPriority to maximum value
Thread.currentThread().setPriority(Thread.MAX_PRIORITY);
// run a long while (true) this means in our case "always"
while (true)
{
repaint();
try
{
// stop the thread for 20 milliseconds
Thread.sleep (20);
}
catch (InterruptedException ex)
{
// do nothing
}
// set ThreadPriority to minimum value
Thread.currentThread().setPriority(Thread.MIN_PRIORITY);
}
}
What we have now is a never-ending loop that executes everything within the loop, waits 20 milliseconds, and then executes everything once again, and so on. But how can we move a circle that is painted by the applet?
Well, this is a very simple idea: our circle has an x- and a y-position. If we add 1 to the x-position every time the thread is executed, the ball will move across the applet, because it is painted at a different x-position each time the thread runs!
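Stripped of the applet plumbing, the animation idea is just "update state, wait, repeat". A console-only sketch of that loop (using a bounded loop instead of while (true) so that it terminates):

```java
public class BallLoop {
    public static void main(String[] args) throws InterruptedException {
        int xPos = 10; // same starting value as the applet's x_pos

        for (int frame = 0; frame < 3; frame++) {
            System.out.println("frame " + frame + ": x_pos = " + xPos);
            xPos++;               // move one pixel to the right per tick
            Thread.sleep(20);     // the same 20 ms pause as in the applet
        }
    }
}
```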
Ok, let's start with drawing a circle: Add these lines to the paint - method of the applet:
public void paint (Graphics g)
{
g.setColor (Color.red);
// paint a filled colored circle
g.fillOval (x_pos - radius, y_pos - radius, 2 * radius, 2 * radius);
}
And we need the following instance variables at the head of the program:
int x_pos = 10;
int y_pos = 100;
int radius = 20;
To move the ball we change the value of the x_pos variable every time the thread is executed. Our run method should look like this:
public void run ()
{
...
x_pos ++;
...
{ }
}
If you add this applet to an HTML document as seen in the previous chapter, a red ball should be moving across the applet one time!
Sourcecode download
Take a look at the applet
Next chapter
Source: http://www.jguru.com/print/article/client-side/how-to-move-a-ball.html
|
Clojure Lessons
Recently I've been working with Java code in a Spring application.
To get myself motivated again, I thought I'd try something fun and render a Mandelbrot set. I know these are easy, but it's something I've never done for myself. I also thought it might be fun to do something with graphics on the JVM, since I'm always working on server-side code. Turned out that it was
Color class, and it's just too obtuse to have two different spellings in the same program).
To get my feet wet, I started with a simple Java application, with a plan to move it into Clojure. My approach gave me a class called Complex that can do the basic arithmetic (trivial to write, but surprising that it's not already there), and an abstract class called Drawing that does all of the window management and just expects the implementing class to implement paint(Graphics). With that done it was easy to write a pair of functions:
- coord2Math to convert a canvas coordinate into a complex number.
- mandelbrotColor to calculate a colour for a given complex number (using a logarithmic scale, since linear shows too many discontinuities in colour).
for (int x = 0; x < gWidth; x++) { for (int y = 0; y < gHeight; y++) { g.setColor(mandelbrotColor(coord2Math(x, y))); plot(g, x, y); } }
(plot(Graphics,int,int) is a simple function that draws one pixel at the given location).
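The post doesn't show mandelbrotColor itself, but the escape-time iteration such a function is built on can be sketched in Java as follows (the names and the iteration cap are illustrative, not the post's actual code):

```java
public class EscapeTime {
    // Iterate z := z^2 + c and report how many steps it takes |z| to exceed 2.
    static int escapeIterations(double cr, double ci, int maxIter) {
        double zr = 0.0, zi = 0.0;
        int i = 0;
        while (i < maxIter && zr * zr + zi * zi <= 4.0) {
            double t = zr * zr - zi * zi + cr; // real part of z^2 + c
            zi = 2.0 * zr * zi + ci;           // imaginary part of z^2 + c
            zr = t;
            i++;
        }
        return i;
    }

    public static void main(String[] args) {
        System.out.println(escapeIterations(0.0, 0.0, 100)); // in the set: never escapes
        System.out.println(escapeIterations(2.0, 2.0, 100)); // escapes immediately
    }
}
```

The iteration count is what then gets mapped to a colour (logarithmically, per the post).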
Clojure Graphics
Initially I tried extending my Drawing class using proxy:
(def window-name "Mandelbrot") (def draw-fn) (defn new-drawing-obj [] (proxy [JPanel] [] (paint [^Graphics graphics-context] (let [width (proxy-super getWidth) height (proxy-super getHeight)] (draw-fn graphics-context width height))))) (defn show-window [] (let [^JPanel drawing-obj (new-drawing-obj) frame (JFrame. window-name)] (.setPreferredSize drawing-obj (Dimension. default-width default-height)) (.add (.getContentPane frame) drawing-obj) (doto frame (.setDefaultCloseOperation JFrame/EXIT_ON_CLOSE) (.pack) (.setBackground Color/WHITE) (.setVisible true)))) (defn start-window [] (SwingUtilities/invokeLater #(show-window)))
Calling start-window sets off a thread that will run the event loop and then call the show-window function. That function uses new-drawing-obj to create a proxy object that handles the paint event. Then it sets the size of the panel, puts it into a frame (the main window), and sets up the frame for display.
The only thing that seems worth noting from a Clojure perspective is the proxy object returned by new-drawing-obj. This is a simple extension of javax.swing.JPanel that implements the paint(Graphics) method of that class. Almost every part of the drawing can be done in an external function (draw-fn here), but the width and height are obtained by calling getWidth() and getHeight() on the JPanel object. That object isn't directly available to the draw-fn function, nor is it available through a name like "this". The object is returned from the proxy function, but that's out of scope for the paint method to access it. The only reasonable way to access methods that are inherited in the proxy is with the proxy-super function (I can think of some unreasonable ways as well, like setting a reference to the proxy, and using this reference in paint. But we won't talk about that kind of abuse).
While I haven't shown it here, I also wanted to close my window by pressing the "q" key. This takes just a couple of lines of code, whereby a proxy for KeyListener is created, and then added to the frame via (.addKeyListener the-key-listener-proxy). Compared to the equivalent code in Java, it's strikingly terse.
Rendering?
Each time mandelbrotColor is called, it maps a coordinate to a colour. This gave me my first hint: I needed to map coordinates to colours. This implies calling map on a seq of coordinates, and ending up with a seq of colours. (Actually, not a seq, but rather a reducible collection.)
Coordinates can be created as pairs of integers using a comprehension:
(for [x (range width) y (range height)] [x y])
and the calculation can be done by mapping on a function that unpacks x and y and returns a triple of these two coordinates along with the calculated colour. I'll rename x and y to "a" and "b" in the mapping function to avoid ambiguity:
(map (fn [[a b]] [a b (mandelbrot-color (coord-2-math a b))]) (for [x (range width) y (range height)] [x y]))
The drawing itself can then be done with reduce. The first parameter for the reduction function will be the image, the second will be the next tuple to draw, and the result will be a new image with the tuple drawn in it. The reduction function (plot):
(defn plot [^Graphics g [x y c]] (.setColor g c) (.fillRect g x y 1 1) g)
Note that the original graphics context is returned, since this is the "new" value that plot has created (i.e. the image with the pixel added to it). Also, note that the second parameter is a 3 element tuple, which is just unpacked into x y and c.
So now the entire render process can be given as:
(reduce plot g (map (fn [[a b]] [a b (mandelbrot-color (coord-2-math a b))]) (for [x (range width) y (range height)] [x y])))
Reflection
The first thing that @objcmdo suggested was to look for reflection. I planned on doing that, but thought I'd continue cleaning the program up first. The Complex class was still written in Java, so I embarked on rewriting that in Clojure.
plus can be defined differently depending on whether it receives a double value or another Complex number. My understanding is that Clojure only overloads functions based on the parameter count, meaning that different function names are required to redefine the same operation for different types. So, for instance, the plus functions were written in Java as:
public final Complex plus(Complex that) { return new Complex(real + that.real, imaginary + that.imaginary); } public final Complex plus(double that) { return new Complex(real + that, imaginary); }
But in Clojure I had to give them different names:
(plus [this {that-real :real, that-imaginary :imaginary}] (Complex. (+ real that-real) (+ imaginary that-imaginary))) (plus-dbl [this that] (Complex. (+ real that) imaginary))
Not a big deal, but code like math manipulation looks prettier when function overloading is available.
It may be worth pointing out that I used the names of the operations (like "plus") instead of the symbolic operators ("+"). While the issue of function overloading would have made this awkward (+dbl is no clearer than plus-dbl), it has the bigger problem of clashing with functions of the same name in clojure.core. Some namespaces do this (the * character is a popular one to reuse), but I don't like it. You have to explicitly reject it from your current namespace, and then you need to refer to it by its full name if you do happen to need it. Given that Complex needs to manipulate internal numbers, these original operators are needed.
So I created my protocol containing all the operators, defined a Complex record to implement it, and then I replaced all use of the original Java Complex class. Once I was finished I ran it again just to make sure that I hadn't broken anything.
To my great surprise, the full screen render went from 682 seconds down to 112 seconds. Protocols are an efficient mechanism, but they shouldn't be that good. At that point I realised that I hadn't used type hints around the Complex class, and that as a consequence the Clojure code had to perform reflection on the complex numbers. Just as @objcmdo had suggested.
Wondering what other reflection I may have missed, I tried enabling the *warn-on-reflection* flag in the repl.
Composable Abstractions
The next thing I wondered about was the map/reduce part of the algorithm. While it made for elegant programming, it was creating unnecessary tuples at every step of the way. Could these be having an impact?
Creating a loop without burning through resources can be done easily with tail recursion. Clojure doesn't do this automatically (since the JVM does not provide for it), but it can be emulated well with loop/recur. Since I want to loop between 0 (inclusive) and the width/height (exclusive), I decremented the upper limits for convenience. Also, the plot function is no longer constrained to just 2 arguments, so I changed the definition to accept all 4 arguments directly, thereby eliminating the need to construct that 3-tuple:
(let [dwidth (dec width) dheight (dec height)] (loop [x 0 y 0] (let [[next-x next-y] (if (= x dwidth) (if (= y dheight) [-1 -1] ;; signal to terminate [0 (inc y)]) [(inc x) y])] (plot g x y (mandelbrotColor (coord-2-math x y))) (if (= -1 next-x) :end ;; anything can be returned here (recur next-x next-y)))))
My word, that's ugly. The let that assigns next-x and next-y has a nasty nested if construct that increments x and resets it at the end of each row. It also returns a flag (it could be any invalid value, such as the keyword :end) to indicate that the loop should be terminated. The loop itself terminates by testing for the termination value and returning a value that will be ignored.
But it all works as intended. Now instead of creating a tuple for every coordinate, it simply iterates through each coordinate and plots the point directly, just as the Java code did. So what's the performance difference here?
map/reduce on a for comprehension, versus using loop/recur.
Aside from the obvious clarity issues, the composability of the for/map/reduce makes an enormous difference. Because each element in the range being mapped is completely independent, we are free to use the pmap function instead of map. The documentation claims that this function is,
"Only useful for computationally intensive functions where the time of f dominates the coordination overhead."
Yup. That's us.
So how much does this change make for us? Using map on the current code, a full screen render takes 112 seconds. Changing map to pmap improves it to 75 seconds. That's a 33% improvement with no work, simply because the correct abstraction was applied. That's a very powerful abstraction.
Future Work
(Hmmm, that makes this sound like an academic paper. Should I be drawing charts?)
pmap could make the Clojure version faster due to being multi-threaded. Of course, Java can be multi-threaded as well, but the effort and infrastructure for doing this would be significant.
Source: http://gearon.blogspot.com/2012_05_01_archive.html
|
vfork(2) BSD System Calls Manual vfork(2)
NAME
vfork -- spawn new process in a virtual memory efficient way
SYNOPSIS
#include <unistd.h> pid_t vfork(void);
DESCRIPTION
Vfork() can be used to create new processes without fully copying the address space of the old process. Vfork() differs from fork() in that the parent is suspended until the child has exited or called execve(2). It does not work, however, to return while running in the child's context from the procedure that called vfork() since the eventual return from vfork() would then return to a no longer existent stack frame. Be careful, also, to call _exit rather than exit if you can't execve, since exit will flush and close standard I/O channels, and thereby mess up the parent process's standard I/O data structures. (Even with fork it is wrong to call exit since buffered data would then be flushed twice.)
SEE ALSO
execve(2), fork(2), sigaction(2), wait(2)
ERRORS
The vfork() system call will fail for any of the reasons described in the fork man page. In addition, it will fail if: [EINVAL] A system call other than _exit() or execve() (or libc functions that make no system calls other than those) is called following calling a vfork() call.
BUGS
This system call will be eliminated when proper system sharing mechanisms are implemented. Users should not depend on the memory sharing semantics of vfork as it will, in that case, be made synonymous to fork().
HISTORY
The vfork() function call appeared in 3.0BSD.
4th Berkeley Distribution June 4, 1993 4th Berkeley Distribution
Mac OS X 10.9.1 - Generated Mon Jan 6 18:35:43 CST 2014
Source: http://www.manpagez.com/man/2/vfork/
|
I am a bit tired of using Serial as a logger - mainly for two reasons: it does not support sprintf syntax and strings are being held in RAM. For this reason I've implemented a new library:
ArdLog serves as simple logger for Arduino that creates formatted messages over Serial:
- Each message has timestamp.
- Each message within single loop has the same timestamp, so that you can logically connect activities together.
- Messages can be formatted using sprintf syntax.
- Text for the messages is being held in PROGMEM.
Step 1: Installation
In order to install ArdLog you have to download the desired release and unpack it into the folder containing Arduino libraries. This is the result on MacOS:
$ pwd
/Users/fred/Documents/Arduino/libraries/ArdLog
$ ls ArdLog.cpp ArdLog.h LICENSE README.md
Step 2: Configuration
- Logger is disabled by default, in order to enable it set LOG to true.
- Messages are created over default Serial port.
- You can choose alternative port by setting: USE_SERIAL_1, USE_SERIAL_2 or USE_SERIAL_3 to true.
- In order to print the current time for each message set USE_CURRENT_TIME to true. By default the logger will sample time only once at the beginning of each loop.
Step 3: Getting Up and Running
- Choose suitable configuration in ArdLog.h. In most cases you have to only set LOG to true.
- Call log_setup() in setup() method - this will initialize serial port.
- Call log_cycle() at the beginning of each loop() - it will sample current time.
- Put log messages into #if LOG log(F("....")) #endif - once the logger is disabled, it will not waste RAM and CPU.
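The example sketch itself did not survive in this copy of the post; a minimal sketch consistent with the steps above and with the output shown below might look like this (the exact messages are illustrative, and this only builds inside the Arduino toolchain):

```cpp
#include "ArdLog.h"

uint32_t loopCount = 0;

void setup() {
  log_setup();   // initialise the serial logger
}

void loop() {
  log_cycle();   // sample the time once per loop
#if LOG
  log(F("**** Loop %u ****"), ++loopCount);
  log(F("T1 = %lu"), millis());
  log(F("T2 = %lu"), millis() + 100);
#endif
  delay(1000);
}
```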
This is the output created by the example above:
>>[000-00:00:00,000]-> Logger initialized, free RAM: 1537
>>[000-00:00:00,003]-> Free RAM: 1527
>>[000-00:00:00,003]-> **** Loop 1 ****
>>[000-00:00:00,003]-> T1 = 9
>>[000-00:00:00,003]-> T2 = 112
>>[000-00:00:01,113]-> **** Loop 2 ****
>>[000-00:00:01,113]-> T1 = 1114
>>[000-00:00:01,113]-> T2 = 1215
>>[000-00:00:02,215]-> **** Loop 3 ****
>>[000-00:00:02,215]-> T1 = 2217
>>[000-00:00:02,215]-> T2 = 2318
>>[000-00:00:03,319]-> **** Loop 4 ****
>>[000-00:00:03,319]-> T1 = 3320
>>[000-00:00:03,319]-> T2 = 3421
>>[000-00:00:04,422]-> **** Loop 5 ****
>>[000-00:00:04,422]-> T1 = 4423
>>[000-00:00:04,422]-> T2 = 4525
Source: http://www.instructables.com/id/Logger-for-Arduino-Over-Serial-Port/
|
Icenium Graphite is a new product from Telerik. It is a part of a larger offering, Icenium. You can download it now for free from the Icenium web site. The idea behind the tooling is simple. It provides an environment for developers to create mobile applications using HTML and JavaScript that can be deployed to iOS and Android. A key part of such a solution is the Kendo Mobile framework, which is smart enough to adjust look and feel to match target platforms. Telerik also provides a build service. With that you can develop applications for iOS on Windows. You do not need a Mac machine at all. On the other hand, you get to reuse all your code between all platforms. Windows Phone 8 is on the product road map as well. In this post I will walk through the steps of creating a mobile app in Icenium and consuming a Web Api service to show a list of items in your app.
Once you download and open Icenium, you will see a very simple user interface. Just click on the New button and pick a project template. I recommend you start with the Kendo UI template to save yourself the trouble of downloading a number of JavaScript frameworks. Once the project is created, you can immediately run it in the simulator and see the results of your hard work. I am now going to replace the home tab with a list of artists from the Chinook database.
First, let's create the Web API project. To do that, simply create a new project in Visual Studio, select the ASP.NET MVC 4 template, then pick Web API on the last page. Now, add a new item to the project's Model folder, select the Entity Framework Model template, select the Generate From Database option and point it to the Chinook database. Voila. You now have a model with the appropriate connection string injected into web.config. Now rename ValuesController to ChinooksController, delete all the methods, and add a new method to get the list of artists. In the method below I am returning two things - the list of artists and the total count. The total count will be necessary to utilize a cool feature of Kendo Mobile - the endless scrolling list.
public class ChinooksController : ApiController
{
    public object GetArtists(int page = 1, int pageSize = 20)
    {
        IEnumerable<Artist> returnValue;
        int count;
        using (var context = new ChinookEntities())
        {
            context.Configuration.ProxyCreationEnabled = false;
            returnValue = context.Artists
                .OrderBy(one => one.Name)
                .Skip((page - 1) * pageSize)
                .Take(pageSize)
                .ToList();
            count = context.Artists.Count();
        }
        return new { Count = count, Data = returnValue };
    }
}
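The paging in GetArtists is the usual skip/take arithmetic: page n (1-based) skips (n - 1) * pageSize rows and takes the next pageSize, while Count carries the unpaged total. A small sketch of the same logic (helper name and sample data are mine, not from the post):

```javascript
// Toy model of the controller's paging over an in-memory array.
function getPage(items, page, pageSize) {
  var sorted = items.slice().sort();            // mirrors OrderBy(one => one.Name)
  var start = (page - 1) * pageSize;            // mirrors Skip((page - 1) * pageSize)
  return {
    Count: items.length,                        // unpaged total, for endless scrolling
    Data: sorted.slice(start, start + pageSize) // mirrors Take(pageSize)
  };
}
```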
I am a fan of explicit routes, so I am going to add a new route to the Web API routes configuration (WebApiConfig in the App_Start folder) as follows.
public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        config.Routes.MapHttpRoute(
            name: "ExplicitApi",
            routeTemplate: "api/{controller}/{action}/{id}",
            defaults: new { id = RouteParameter.Optional }
        );

        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional }
        );
    }
}
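Route templates such as "api/{controller}/{action}/{id}" are matched segment by segment, with {id} filled from the defaults when the URL omits it. This toy matcher is entirely hypothetical (the real ASP.NET routing engine is far richer), but it illustrates the segment/default idea:

```javascript
// Minimal illustration of template matching: literal segments must match
// exactly; {param} segments capture the URL segment or fall back to defaults.
function matchRoute(template, path, defaults) {
  var t = template.split("/");
  var p = path.split("/");
  var values = {};
  for (var i = 0; i < t.length; i++) {
    var seg = t[i];
    var isParam = seg.charAt(0) === "{";
    var name = isParam ? seg.slice(1, -1) : null;
    if (i >= p.length || p[i] === "") {
      // Missing URL segment: only OK for a parameter that has a default.
      if (isParam && defaults && name in defaults) {
        values[name] = defaults[name];
        continue;
      }
      return null;
    }
    if (isParam) values[name] = p[i];
    else if (seg !== p[i]) return null;
  }
  return p.length > t.length ? null : values;
}
```

With this sketch, "api/chinooks/getartists" binds controller and action, and id takes its default because the last segment is absent.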
Test this in Fiddler now. I just navigated to the GetArtists endpoint in my case. I did change the project configuration to run in IIS instead of IIS Express. I just like to be able to run my app at any time, whether or not Visual Studio is running.
Now, let's get back to the mobile application. First of all, I am going to use a Kendo data source to get the data. To do this, right-click on the Scripts folder in Icenium, then click Add -> New Item -> JavaScript file. I named mine data.js.
var data = (function() {
    return {
        artists: new kendo.data.DataSource({
            pageSize: 20,
            page: 1,
            serverPaging: true,
            transport: {
                read: {
                    url: "",
                    dataType: "json"
                },
                parameterMap: function(options) {
                    var parameters = {
                        pageSize: options.pageSize,
                        page: options.page
                    };
                    return parameters;
                }
            },
            schema: {
                data: function (data) { return data.Data; },
                total: function (data) { return data.Count; }
            }
        })
    };
})();
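The schema section is what maps the service's { Count, Data } envelope onto the data source. Pulled out of Kendo and run on their own (with sample data invented for illustration), the two accessors behave like this:

```javascript
// A response envelope shaped like the one the Web API method returns.
// The artist names here are made up for illustration.
var response = { Count: 275, Data: [{ Name: "AC/DC" }, { Name: "Accept" }] };

// Stand-ins for the schema.data / schema.total functions in data.js.
var schemaData  = function (data) { return data.Data;  };
var schemaTotal = function (data) { return data.Count; };

var rows  = schemaData(response);   // the array bound to the list view
var total = schemaTotal(response);  // drives how far endless scrolling goes
```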
As you can see from above, the code is fairly simple. I am pointing to my endpoint that you saw above. I am getting a JSON-formatted list of artists, and I am adding parameters to my method to control the page and page size. I also configure server-side paging to tell the Kendo framework to call the service for the next page instead of paging on the client. Finally, I inform Kendo UI about the schema of my result - the list of artists and the total count. Now, I am going to add a list view to my HTML. I am going to delete everything from the home tab and add a <ul> tag for my list view.
<body>
    <div data-role="view">
        <ul id="artistsListView"></ul>
    </div>
Now I need to configure my view. I am going to do it in a jQuery ready function call at the bottom of the home page.
<script>
    var app = new kendo.mobile.Application(document.body, {
        transition: "slide",
        layout: "mobile-tabstrip"
    });

    $(function(){
        $("#artistsListView").kendoMobileListView({
            endlessScroll: true,
            dataSource: data.artists,
            template: $("#artists-list-template").text(),
            scrollTreshold: 10
        });
    });
</script>
As you can see, I am just using the Kendo mobile API to give my list view a data source and indicate that it needs to support endless scrolling. Endless scrolling will make calls to the server for more data as the user scrolls down my list. The last piece of the puzzle is the template, which simply includes the name of the artist:
<script id="artists-list-template" type="text/x-kendo-template"> <span style="display: inline-block;white-space: nowrap;">#=Name#</span> </script>
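With serverPaging and endlessScroll both on, the data source keeps requesting pages until page * pageSize reaches the total reported in Count. A hedged sketch of that stopping rule (my own helpers, not Kendo API):

```javascript
// Decide whether another page should be fetched as the user scrolls.
function hasMorePages(page, pageSize, totalCount) {
  return page * pageSize < totalCount;
}

// Total number of requests needed to walk the whole list.
function pageCount(totalCount, pageSize) {
  return Math.ceil(totalCount / pageSize);
}
```

So with 275 artists and a page size of 20, the list view ends up issuing 14 requests as the user scrolls to the bottom.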
Nothing fancy, but it does illustrate the point. Here is how my app looks in the end.
You can download this project here.
Personally, I really like the idea of Icenium. The IDE itself is decent enough. I had a number of hiccups in it, but nothing too bad. I do not have a key combination to format my code, like in Visual Studio. Sometimes my JavaScript breakpoints are not hit either. Overall though, I can develop an iOS and Android app on Windows, which is really cool. Read more about the technology on the Icenium web site and the Kendo Mobile site.
Enjoy.
Hello Sergey
Thanks for this nice article. I'm also looking for a way to make cross-platform apps, and Icenium looks promising.
Can you give me an example how the output from the created json file is looking?
Regards,
Aren
Thanks for the useful article. Might be new to a later version, but Ctrl+Alt+F will format your code / css / html in the IDE.
Thanks for this new & useable article.
Can you give me an example of how I can make an API for login, login using Facebook, and new signup, step by step?
Thanks for reviewing my problem.
You can check out the Facebook guide here.
http://www.dotnetspeak.com/asp-net-mvc/your-first-mobile-application-in-icenium-and-web-api/
Mike Dimmick's Bleurgh
Random regurgitations of technical material, with added bile

…message handling

Goran, a commenter at Raymond's blog, asks: …

The specific context is handling the return key - Windows tells you what key caused WM_GETDLGCODE to be fired in the wParam parameter, but MFC doesn't pass that information on to the OnGetDlgCode handler.

Instead, you can simply use ON_MESSAGE in the message map, using WM_GETDLGCODE and declaring the function as taking WPARAM, LPARAM and returning LRESULT. Do make sure you don't also include ON_WM_GETDLGCODE.

Many of the predefined helper message map macros, which usually don't take any arguments, define the function name that you're expected to implement. ON_MESSAGE allows you to specify the function name (as you'd expect, as a generic macro).

…months

…that's how long I was running Windows 7 RC1 without switching back to my hard disk with Windows Vista. The System event log shows I last booted it on 31 May 2009.

This is practically the last action I'm going to take before wiping this disk - the Vista disk - and installing Windows 7 RTM. Just a little check here and there to find anything that I might not have backed up when transferring over to 7 RC1, then it's formatting time.

The only thing I've missed? The extra 200GB of disk space!

In retrospect Windows Vista has not been a bad…

…details on Humax problem

I thought I'd break this into another post so that the fix is easier to understand for non-technical people.

Comparing the old and new driver source shows that the definition of the libusb_request structure had changed. Visual inspection suggested that this was probably the source of the compatibility issue, when using the old DLL with the new driver.

When using the new … by ordinal…

File Attachment: libusb0.def (1 KB)

…from your Humax PVR9200T on Windows x64

The…

Unfortunately the supplied eLinker software is not highly-regarded. As a result some replacements have been written, such as Humax Media Controller. However, the driver supplied only works on 32-bit Windows.

The USB library and driver used are a Win32 port of libusb, a driver which makes it possible to write your USB transfers fully in user-mode. A later version (0.1.12.2) has both 32- and 64-bit versions of the driver. However, the updated driver doesn't work with the libusb0.dll supplied with Media Controller 1.05, and the program won't run if you overwrite it with the new DLL.

Hopefully this will help other people with the x64 versions of Windows!

File Attachment: HumaxPVR.zip (78 KB)

Router suggestions?

OK, I think my router has become too hackable. The reliability has hit the toilet, and I've had two messages from my ISP warning me that there has been odd traffic. The first time I got a warning message I think I was using BitTorrent at the time, but the second was yesterday at around lunchtime - when my computers were all off and I was at work.

So, list of requirements:

- Wired router with a few Ethernet ports
- 802.11g WiFi with at least WPA encryption (can't do WPA2 on the handheld)
- Xbox Live compatible
- DHCP implementation that works
- DNS server address passthrough from DHCP server (i.e.
I want to use my ISP's DNS servers, not have the box do DNS for me)
- ADSL2+ compatible, for whenever that gets rolled out by Demon
- PPPoA since this is of course a UK requirement
- A firewall that can be completely switched off or is actually configurable intelligibly
- Not excessively burdened with blinkenlights
- Will be updated regularly with patches for whatever its OS is

I'm half tempted to try building a Windows CE kernel for it myself.

How do I see what the JIT compiled, anyway?

You use the SOS debugger extension's !U command.

Yeah, that's not very helpful. Here's how to get from that terse description to something useful.

As an example, let's use the following trivial program:

using System;
public class Simple
{
    public static int Main()
    {
        Console.WriteLine( "Hello World!" );
        Console.ReadLine();
        return 0;
    }
}

Dump this snippet into Notepad, save it as a simple.cs file and compile it. We want to see what it'll be like when the user runs it, so we'll do a release build. From the VS2005 Command Prompt, run csc /debug:pdbonly /o+ simple.cs. (I find it's quicker to do command-line compiles for simple test programs than firing up Visual Studio. It's not as if we're designing a Form.) If you're doing this on a 64-bit computer, like me, add /platform:x86 so we get consistent output across platforms. (Marking as x86 means that the executable headers are set to indicate a 32-bit program. The IL is the same whatever you set here, but if you leave it as anycpu, the default, the .NET Framework will create a 64-bit process which obviously gives different output.)

Why call Console.ReadLine? Well, the JIT compiler is helpful (or at least it thinks it is). If you start a program under the debugger, it will generate less optimized code because it thinks you want to debug it, but that changes the code from what the user will see. So I want to add a stop point in the program so I can attach the debugger after the process has started.

The next thing you'll need is the Debugging Tools for Windows kit. Grab the 32-bit kit - the 64-bit kit can debug 32-bit processes, but SOS (the debugger extension DLL which implements the commands we're going to use) doesn't appear to work. We're going to use WinDBG, which is slightly friendlier than the other debuggers, although not a lot!

Open WinDBG. Run simple.exe and, when it's waiting for input, go to File/Attach to a Process in WinDBG. You'll see a list of other processes (and if running Windows Vista with UAC enabled, or on XP as a standard user, a load of access denied errors). Select simple.exe from the list and hit OK. WinDBG automatically stops once you've attached to a process so you can start manipulating the program straight away. This is the output I got:

Microsoft (R) Windows Debugger Version 6.8.0004.0 X86
Copyright (c) Microsoft Corporation. All rights reserved.
*** wait with pending attach
Symbol search path is: C:\Windows;C:\Windows\System32;C:\Windows\SysWOW64;SRV*C:\WebSymbols*
Executable search path is:
ModLoad: 008c0000 008c8000 C:\Users\Mike\Documents\Programming\simple.exe
ModLoad: 776a0000 77800000 C:\Windows\SysWOW64\ntdll.dll
ModLoad: 79000000 79046000 C:\Windows\system32\mscoree.dll
ModLoad: 75b70000 75c80000 C:\Windows\syswow64\KERNEL32.dll
ModLoad: 76ce0000 76da6000 C:\Windows\syswow64\ADVAPI32.dll
ModLoad: 76e10000 76f00000 C:\Windows\syswow64\RPCRT4.dll
ModLoad: 75850000 758b0000 C:\Windows\syswow64\Secur32.dll
ModLoad: 75a80000 75ad8000 C:\Windows\syswow64\SHLWAPI.dll
ModLoad: 77110000 771a0000 C:\Windows\syswow64\GDI32.dll
ModLoad: 77040000 77110000 C:\Windows\syswow64\USER32.dll
ModLoad: 76c30000 76cda000 C:\Windows\syswow64\msvcrt.dll
ModLoad: 75d00000 75d60000 C:\Windows\system32\IMM32.DLL
ModLoad: 76940000 76a08000 C:\Windows\syswow64\MSCTF.dll
ModLoad: 75a70000 75a79000 C:\Windows\syswow64\LPK.DLL
ModLoad: 759c0000 75a3d000 C:\Windows\syswow64\USP10.dll
ModLoad: 746c0000 7485e000 C:\Windows\WinSxS\x86_microsoft.windows.common-controls_6595b64144ccf1df_6.0.6001.18000_none_5cdbaa5a083979cc\comctl32.dll
ModLoad: 79e70000 7a3ff000 C:\Windows\Microsoft.NET\Framework\v2.0.50727\mscorwks.dll
ModLoad: 75390000 7542b000 C:\Windows\WinSxS\x86_microsoft.vc80.crt_1fc8b3b9a1e18e3b_8.0.50727.1434_none_d08b6002442c891f\MSVCR80.dll
ModLoad: 75e30000 7693f000 C:\Windows\syswow64\shell32.dll
ModLoad: 771b0000 772f4000 C:\Windows\syswow64\ole32.dll
ModLoad: 790c0000 79bf6000 C:\Windows\assembly\NativeImages_v2.0.50727_32\mscorlib\5b3e3b0551bcaa722c27dbb089c431e4\mscorlib.ni.dll
ModLoad: 79060000 790b6000 C:\Windows\Microsoft.NET\Framework\v2.0.50727\mscorjit.dll
(1dc.12b4): Break instruction exception - code 80000003 (first chance)
eax=7efaf000 ebx=00000000 ecx=00000000 edx=7770d2d4 esi=00000000 edi=00000000
eip=776b0004 esp=04c6fa00 ebp=04c6fa2c
776b0004 cc int 3

You enter your debugging commands in the edit box at the bottom, next to the prompt 0:003>. This means we're debugging the 0th process we're attached to, and our commands by default apply to thread number 3. Four threads? Well, thread 0 is our main thread, thread 1 was created by the CLR's debugging support in case we attached a debugger, thread 2 is the finalizer thread, and thread 3 was just created by WinDBG so it could stop the process safely. (You can see this by running ~* k, although you'll need to be set up to get debugging symbols from the symbol server to get good stack traces.) The Debugging Tools debuggers automatically stop the process when you attach, so that you can start manipulating the process straight away.

The first thing we need to do is ask WinDBG to load the SOS extension. This is installed with the CLR, so we ask it to load from the same folder that mscorwks.dll (the DLL which implements the virtual machine, effectively the guts of the CLR) lives in:

0:003> .loadby sos.dll mscorwks

Now we can see what state the managed threads are in by running !threads. Any command beginning ! comes from a debugger extension DLL, and they can be disambiguated if necessary, but they're searched in reverse order loaded (last loaded = first searched) so anything we want will be found in SOS anyway.

Right. To see what we can do, run !help.

0:00…

To find out a little more about !U, let's run !help U:

0:003> !help U
-------------------------------------------------------------------------------
!U [-gcinfo] [-ehinfo] <MethodDesc address> | <Code address>

Presents an annotated disassembly of a managed method when given a MethodDesc
pointer for the method, or a code address within the method body. Unlike the
debugger "U" function, the entire method from start to finish is printed,
with annotations that convert metadata tokens to names.

<example output>
...
03ef015d b901000000      mov ecx,0x1
03ef0162 ff156477a25b    call dword ptr [mscorlib_dll+0x3c7764 (5ba27764)] (System.Console.InitializeStdOutError(Boolean), mdToken: 06000713)
03ef0168 a17c20a701      mov eax,[01a7207c] (Object: SyncTextWriter)
03ef016d 89442414        mov [esp+0x14],eax

If you pass the -gcinfo flag, you'll get inline display of the GCInfo for the
method. You can also obtain this information with the !GCInfo command.

If you pass the -ehinfo flag, you'll get inline display of exception info for
the method. (Beginning and end of try/finally/catch handlers, etc.). You can
also obtain this information with the !EHInfo command.

Right, so we need the address of a MethodDesc structure, or the address of the code itself. How can we get one of these? There's a command called DumpMD…

0:003> !help dumpmd
-------------------------------------------------------------------------------
!DumpMD <methoddesc address>

This command lists information about a MethodDesc. You can use !IP2MD to turn
a code address in a managed function into a MethodDesc:

0:000> !dumpmd 902f40
Method Name: Mainy.Main()
Class: 03ee1424
MethodTable: 009032d8
mdToken: 0600000d
Module: 001caa78
IsJitted: yes
m_CodeOrIL: 03ef00b8

If IsJitted is "yes," you can run !U on the m_CodeOrIL pointer to see a
disassembly of the JITTED code. You can also call !DumpClass, !DumpMT,
!DumpModule on the Class, MethodTable and Module fields above.

We're getting a bit closer. Let's try DumpModule…

0:003> !help DumpModule
-------------------------------------------------------------------------------
!DumpModule [-mt] <module address>

You can get a Module address from !DumpDomain, !DumpAssembly and other
functions. Here is sample output:

0:000> !dumpmodule 1caa50
Name: C:\pub\unittest.exe
Attributes: PEFile
Assembly: 001ca248
LoaderHeap: 001cab3c
TypeDefToMethodTableMap: 03ec0010
TypeRefToMethodTableMap: 03ec0024
MethodDefToDescMap: 03ec0064
FieldDefToDescMap: 03ec00a4
MemberRefToDescMap: 03ec00e8
FileReferencesMap: 03ec0128
AssemblyReferencesMap: 03ec012c
MetaData start address: 00402230 (1888 bytes)

The Maps listed map metadata tokens to CLR data structures. Without going into
too much detail, you can examine memory at those addresses to find the
appropriate structures. For example, the TypeDefToMethodTableMap above can be
examined:

0:000> dd 3ec0010
03ec0010 00000000 00000000 0090320c 0090375c
03ec0020 009038ec ...

This means TypeDef token 2 maps to a MethodTable with the value 0090320c. You
can run !DumpMT to verify that. The MethodDefToDescMap takes a MethodDef token
and maps it to a MethodDesc, which can be passed to !DumpMD.

There is a new option "-mt", which will display the types defined in a module,
and the types referenced by the module. For example:

0:000> !dumpmodule -mt 1aa580
Name: C:\pub\unittest.exe
...<etc>...
MetaData start address: 0040220c (1696 bytes)

Types defined in this module
MT         TypeDef    Name
------------------------------------------------------------------------------
030d115c   0x02000002 Funny
030d1228   0x02000003 Mainy

Types referenced in this module
MT         TypeRef    Name
------------------------------------------------------------------------------
030b6420   0x01000001 System.ValueType
030b5cb0   0x01000002 System.Object
030fceb4   0x01000003 System.Exception
0334e374   0x0100000c System.Console
03167a50   0x0100000e System.Runtime.InteropServices.GCHandle
0336a048   0x0100000f System.GC

We still need a Module to pass to this command, but we're getting there. Maybe DumpDomain can help us?
I'll skip the help text this time.

0:003> !DumpDomain
--------------------------------------
System Domain: 7a3bc8b8
LowFrequencyHeap: 7a3bc8dc
HighFrequencyHeap: 7a3bc934
StubHeap: 7a3bc98c
Stage: OPEN
Name: None
--------------------------------------
Shared Domain: 7a3bc560
LowFrequencyHeap: 7a3bc584
HighFrequencyHeap: 7a3bc5dc
StubHeap: 7a3bc634
Stage: OPEN
Name: None
Assembly: 0052f848
--------------------------------------
Domain 1: 00516800
LowFrequencyHeap: 00516824
HighFrequencyHeap: 0051687c
StubHeap: 005168d4
Stage: OPEN
SecurityDescriptor: 00517d30
Name: simple.exe
Assembly: 0052f848 [C:\Windows\assembly\GAC_32\mscorlib\2.0.0.0__b77a5c561934e089\mscorlib.dll]
ClassLoader: 0050e470
SecurityDescriptor: 0051f5d0
  Module Name
790c2000 C:\Windows\assembly\GAC_32\mscorlib\2.0.0.0__b77a5c561934e089\mscorlib.dll
Assembly: 00535ac0 [C:\Users\Mike\Documents\Programming\simple.exe]
ClassLoader: 0050e630
SecurityDescriptor: 00535a38
  Module Name
000f2c3c C:\Users\Mike\Documents\Programming\simple.exe

At last, we have the address of something we can use. Let's get the module information for simple.exe. For that we need the address next to it in the module list for the simple.exe assembly.

0:003> !dumpmodule -mt f2c3c
Name: C:\Users\Mike\Documents\Programming\simple.exe
Attributes: PEFile
Assembly: 00535ac0
LoaderHeap: 00000000
TypeDefToMethodTableMap: 000f0038
TypeRefToMethodTableMap: 000f0040
MethodDefToDescMap: 000f005c
FieldDefToDescMap: 000f0068
MemberRefToDescMap: 000f006c
FileReferencesMap: 000f0088
AssemblyReferencesMap: 000f008c
MetaData start address: 008c206c (740 bytes)

Types defined in this module
MT         TypeDef    Name
------------------------------------------------------------------------------
000f3030   0x02000002 Simple

Types referenced in this module
MT         TypeRef    Name
------------------------------------------------------------------------------
790fd0f0   0x01000001 System.Object
79101118   0x01000006 System.Console

Now we can get the MethodTable for the Simple class:

0:003> !dumpmt -md f3030
EEClass: 000f1174
Module: 000f2c3c
Name: Simple
mdToken: 02000002 (C:\Users\Mike\Documents\Programming\simple.exe)
BaseSize: 0xc
ComponentSize: 0x0
Number of IFaces in IFaceMap: 0
Slots in VTable: 6
--------------------------------------
…
00310070 000f3020 JIT Simple.Main()
000fc01c 000f3028 NONE Simple..ctor()

Finally we have something we can pass to !U. Note the JIT column. Here, PreJIT means that the code has come from a native image generated by ngen, JIT means that the JIT compiler in this process has generated the code, and NONE means that the method hasn't been compiled yet. We didn't override any of System.Object's virtual methods, so they occupy the first four places in our MethodTable.

I placed the call to Console.ReadLine at the end of the program so everything has already been compiled - remember, the JIT compiles code on-demand, one method at a time, when that method is called (barring very simple methods which might be inlined into their callers).
Let's just get on and show the code for Main.

0:003> !U f3020
Normal JIT generated code
Simple.Main()
Begin 00310070, size 36
00310070 833d8c10920300  cmp dword ptr ds:[392108Ch],0
00310077 750a            jne 00310083
00310079 b901000000      mov ecx,1
0031007e e8e1580579      call mscorlib_ni+0x2a5964 (79365964) (System.Console.InitializeStdOutError(Boolean), mdToken: 06000770)
00310083 8b0d8c109203    mov ecx,dword ptr ds:[392108Ch] (Object: System.IO.TextWriter+SyncTextWriter)
00310089 8b153c309203    mov edx,dword ptr ds:[392303Ch] ("Hello World!")
0031008f 8b01            mov eax,dword ptr [ecx]
00310091 ff90d8000000    call dword ptr [eax+0D8h]
00310097 e84c750d79      call mscorlib_ni+0x3275e8 (793e75e8) (System.Console.get_In(), mdToken: 0600076e)
0031009c 8bc8            mov ecx,eax
0031009e 8b01            mov eax,dword ptr [ecx]
003100a0 ff5064          call dword ptr [eax+64h]
003100a3 33c0            xor eax,eax
003100a5 c3              ret

Interesting. The JIT clearly sees Console.WriteLine(string) and Console.ReadLine as trivial and has inlined the calls. In turn it's inlined Console.get_Out. The reference to SyncTextWriter isn't the JIT being clever - it's SOS telling us that the address 0x0392108C is the base of SyncTextWriter's vtable, as this call to TextWriter.WriteLine(string) is virtual.

I realise, and hope, that this isn't everyday usage. More often than not you'll have crashed somewhere and want to find out where (for which see !IP2MD). But it can be useful to see just how your C# code turns into machine instructions.

What the heck does 'rep ret' mean?

I was looking at what the .NET x64 JIT compiler generates for some code, and saw something very odd at the end of the routine: the last instruction of the function was rep ret. Looking a bit further, this is the same at the end of every JIT-compiled routine.

The thing is, the rep prefix to an instruction is supposed to tell it to be repeated. Repeat the return? How do I do that?

The Intel architecture software developer's manual set says it's only defined for the 'string' instructions like movs (which moves a byte/word/dword from the address pointed to by ESI to the address pointed to by EDI). The rep prefix repeats the string instruction ECX times. Yes, this means that you can implement memcpy in a single instruction. (You can do memset with a single rep stos instruction, once AL is loaded with the value to be stored.) It's explicitly undefined for anything else.

So where the heck has this illegal usage come from? I followed a couple of clues and found this patch notification for glibc on x64. And indeed the current version of AMD's optimization guide [PDF] for Athlon 64 processors says that you should do this. The reason? The branch predictor gets it wrong if the ret instruction is jumped to directly by a branch instruction, or if the ret directly follows a branch instruction.

I'm not sure doing it throughout, even when you've got epilog code in there which prevents the bug, is necessary though.

AMD have now published a new optimization guide for their Family 10h processors [PDF] and guess what, the advice has changed. Instead of using a two-byte illegal instruction, they now recommend the three-byte instruction ret 0. The difference between a plain ret and a ret imm16 (where imm16 is an immediate 16-bit value) is that ret imm16 pops the return address, then the specified number of bytes from the stack, before jumping to the return address. It's common to see this in 32-bit Windows WINAPI (__stdcall) code as this calling convention requires the called function to clean up the parameters from the stack.
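The difference between the two return forms can be modelled with a toy stack. This JavaScript stand-in is purely illustrative (real execution happens in hardware; the 4-byte slot size assumes 32-bit code):

```javascript
// Toy model of x86 'ret' vs 'ret imm16'. Arguments are pushed first, then the
// call pushes the return address on top. 'ret' pops only the return address
// (the caller cleans up the arguments); 'ret imm16' also discards imm16 bytes
// of arguments, i.e. imm16 / 4 stack slots in 32-bit code.
function makeCpu() {
  var stack = [];   // each entry stands for one 32-bit stack slot
  return {
    push: function (v) { stack.push(v); },
    ret: function () { return stack.pop(); },
    retImm: function (imm16) {
      var target = stack.pop();                       // pop return address
      for (var i = 0; i < imm16 / 4; i++) stack.pop(); // discard arguments
      return target;
    },
    depth: function () { return stack.length; }
  };
}
```

With two dword arguments on the stack, ret 8 leaves the stack clean, while a plain ret leaves both arguments for the caller to remove.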
(64-bit Windows has only one calling convention and it mainly passes parameters in registers, so stack cleanup is not required.)

…and the case of the missing DSL service

Recently, …

When the new bloke moved in downstairs, he wanted a BT service. He claims that he asked them for a new line - they interpreted this to mean taking over the old one, and started the process of transferring my…

In early November my parents tell me they couldn't get through. (They didn't actually tell me they were getting a message saying 'number not in service.') A few days into November, suddenly, no connection. A horror begins to dawn on me, and my neighbour - BT have gone ahead and done it anyway.

Now comes the reason for the tale. Most of the time I can't get through to BT Customer Service, even if I wait for half an hour on the phone.

I thought about it for a couple of days and decided to do what should have been done a very long time ago - get a separate line terminating in this flat. You'd expect that to take a long time, right? Longer than transferring the billing from one customer to another?

Nope. I could have it put in on the 20th. So I paid my £125 (£125!) and got it done.

…of Best Error Page

CodeProject…

Anyway, they recently moved to an all-new ASP.NET implementation of their forums, but unfortunately the errors are still a common occurrence. And the error in the error handler is back:

[screenshot: Error - Windows Internet Explorer]

(Yes, running Vista now.)

One of the things I love most about CP is the more… relaxed attitude to reporting problems.

…out EMBASSY Trust Suite

Dear Wave Systems,

Fuck right off.

When I bought my Dell Latitude D820 laptop, I went through and deleted a certain amount of the shovelware that was preinstalled, but some of it, to be honest, I didn't know what it did. I tried to work out what EMBASSY Trust Suite did, and it seems to be something to do with the Trusted Platform Module in the system. Encrypting files using the TPM key, or something like that.

Charles Petzold wrote an article yesterday commenting on random number generators, and I added a comment earlier on mentioning in passing the reported flaw in CryptGenRandom. I decided to see if this seemed to be the case in XP SP2. My answer was inconclusive, as the assembler was hard to follow. By chance I was debugging Internet Explorer with WinDBG (easiest way to force the RSA Enhanced Cryptographic Provider, rsaenh.dll, to load so I could disassemble it) and noticed an odd number of access violation exceptions occurring when I accidentally did a search in the instance of IE I was debugging. That's weird, I thought - where am I on the stack?

In wxvault.dll. The stack trace was:

0012c888 100065ee wxvault+0x7967
0012cac4 42f8c769 wxvault+0x65ee
0012cadc 42f8cdc9 IEFRAME!PathFileExistsW+0x24
0012cb14 42f8ccf7 IEFRAME!HelperForReadIconInfoFromPropStore+0x97
0012cb98 42f78e53 IEFRAME!CInternetShortcut::_GetIconLocationWithURLHelper+0x156

Looks like we're trying to get the favourite icon for the address bar. But how is IEFRAME calling into wxvault? Microsoft can't know that this library exists.
Is there something on the stack that somehow isn’t being included (can happen if a function was compiled with the Frame Pointer omitted and no symbols are available to get FPO data [which tells the debugger how to fix up]). Let’s disassemble around PathFileExistsW:</p><pre>42f8c75e ff7508 push dword ptr [ebp+8]<br />42f8c761 8bf8 mov edi,eax<br />42f8c763 ff152c13ef42 call dword ptr [IEFRAME!_imp__GetFileAttributesW (42ef132c)]<br />42f8c769 83f8ff cmp eax,0FFFFFFFFh</pre> <p>That’s weird, we called into GetFileAttributesW. How did we end up in wxvault?</p><pre>0:000> u kernel32!GetFileAttributesW<br />kernel32!GetFileAttributesW:<br />7c80b74c e965ae7f93 <strong>jmp wxvault+0x65b6 (100065b6)</strong></pre> <p><strong>Evil!</strong> They patched the running instance of kernel32! What else have they patched?</p><pre>kernel32!CreateFileW:<br />7c810760 e9d0587f93 jmp wxvault+0x6035 (10006035)</pre> <p>Note how they’ve failed to rebase the DLL, using the default 0x10000000 base address, making it collide with <em>everything ever</em> which also uses that default address.</p> <p>Needless to say this is going to get uninstalled as soon as I take a full backup of the laptop! In my book, this is a user-mode rootkit. I don’t use the feature, so it’s going.</p> <p>How <em>should</em> they have implemented this? Well, I’d start by seeing if it’s possible to change the algorithm for the Encrypting File System. It should be, it’s implemented using the Cryptographic API and CSPs (involving callbacks into LSASS in usermode!), so I would have thought that simply providing your own CSP would be sufficient.</p> <p>If that’s not possible, my next port of call would be a file system filter driver. 
That would have the downside (like this) that every file system call would go through it, rather than the small number of calls which actually target a file encrypted in this way.</p> <p>The access violation looks like it might ultimately be caused by a bug in IE – it looks like IE tried to pass the URL to the favicon to GetFileAttributesW, which I would <em>hope</em> would fail (or would it try to invoke WebDAV?)</p>Mike Dimmick Smart Device projects don't Dispose components<p>I’ve posted about this <a href="">on Connect</a> and <a href="">on Codeproject’s Lounge</a>, and informed my colleagues, but not yet on here. I realise I’m repeating myself for some of my audience, but this is in a more readily searchable location.</p> <p>I used the <a href="">.NET Compact Framework Remote Performance Monitor</a> to monitor the application. I was seeing 400KB+ of GC heap still being used after a collection. Looking at the GC heap (in .NET CF 2.0 SP2’s RPM) showed that a large number of instances of the second form were still referenced.</p> <p>Drilling down showed that the MainMenu and MenuItem objects that had been owned by the forms were still rooted, <em>by their finalizers</em> (shown as [root: Finalizer]). The forms were still referenced because I had event handlers connected to the menu items. That meant that all the other controls on the form were also still referenced, by the members of the form class.</p> <p>The <em>finalizer thread</em> processes the queue of objects to be finalized. The object will continue to be reported to GC until its finalizer is run, so may survive multiple GCs if memory demand is high and the finalizers are slow.</p> <p>For background, see “<a href="">Overview of the .NET Compact Framework Garbage Collector</a>”. 
I’ve recommended before that <a href="">you always call Dispose</a> on any object that implements it (although I’ve discovered it <em>isn’t</em> safe to Dispose the Graphics object passed to you in a PaintEventArgs structure).</p> <p>So where does the finalizer come from? I studied the assemblies in <a href="">Reflector</a>.</p> <p>For both desktop and device projects, a newly-created form declares a <em>components</em> member of type System.ComponentModel.Container, to contain the components dragged onto the designer surface. A Dispose(bool) override is generated for you (in Form.Designer.cs/.vb, for .NET 2.0 projects) which calls Dispose on the container, if <em>components</em> is not null. (Presumably the intent was that the container wouldn’t be created until needed, but in fact the initial code in InitializeComponent for a new form does create a Container and assigns it to <em>components</em>.) Container’s Dispose method disposes anything that was added to the container.</p> <p>The desktop designer generates code to add the components to this container – <em>the device designer doesn’t</em>. In fact it even deletes the creation of the Container object. The result: all your components end up running their finalizer, and thereby consuming memory past the first GC after they died, potentially increasing the overall memory use of the process.</p> <p>To avoid the problem, you have to write the code to dispose the components yourself. The most straightforward way is to add the code that Visual Studio <em>should</em> have generated to your form’s constructor, after the call to InitializeComponent. That will typically look like:</p><pre>this.components = new System.ComponentModel.Container();
this.components.Add( mainMenu1 );
// etc for other components</pre> <p><em>This bug still exists in VS2008 Beta 2</em>. 
To help ensure it doesn’t continue in future versions, please <a href="">vote for the bug on Connect</a>.</p> <p>The MainMenu class unfortunately <em>does</em> have a Finalizer, but it does <em>not</em> have a Dispose method. After a bit of digging around, you can see that you can at least clean up the native resources by setting the form’s Menu property to null. It still causes all the <em>managed</em> objects to remain referenced (if it has menu items that have event handlers). You’ll have to call GC.SuppressFinalize yourself once you’ve forced it to clean up.</p>Mike Dimmick of whacking rats<p>And so you do this, and bing! up pops a notification. So you work out how to kill that one and bing! here comes another one. It’s like playing Whack-A-Rat.</p> <p>…and breathe…</p>Mike Dimmick working on databases should read this<p>Rico Mariani (now Chief Architect of Visual Studio, formerly just a performance architect on the .NET Framework) has written a great article on database performance, but also covers correctness issues. A good read for any developer working on databases (and isn’t that most of us now?)</p> <p><a title="Database Performance, Correctness, Composition, Compromise, and Linq too" href="">Database Performance, Correctness, Composition, Compromise, and Linq too</a></p>Mike Dimmick reason not to overload the .NET Framework name<p><a href="">This month’s security bulletin</a> becomes a lot more confusing. It was pretty confusing already, but the extra detail of .NET Framework 3.0 is/is not vulnerable just adds an extra layer.</p> <p>(Suggestion: update and let your customers know. Since .NET Framework patches are cumulative I expect <a href="">Barry’s validators</a> are also included.)</p>Mike Dimmick, long time no post<p>I have intended to post on occasion, but never got round to it. 
It’s been so long that Blogger have changed completely over to Google logins and my old configuration in <a href="">BlogJet</a> no longer worked because I’d had to switch to the Google login at one point (think I wanted to use my own identity on posting a comment on someone else’s blog).</p> <p>Indeed the old Google API didn’t work anymore either and I had to grab BlogJet 2.0.x.</p> <p>Google’s increasing privacy-invasion is making me want to get off this ship (and <a href="">this one too</a>).</p>Mike Dimmick developers fight...<a href="">Microsoft take their ball away</a>.Mike Dimmick problem with biometrics<p><a href="">If they change, they can’t identify you</a> <img src="" /></p>Mike Dimmick Dinner this Friday (19 Jan)<p>My friend <a href="">Colin Mackay</a>, who I know from <a href="">CodeProject</a>, sent me a message last week to tell me that he was attending this weekend’s <a href="">Vista and Office Developer Launch</a> in Reading, and to ask if I’d like to meet up while he was here.</p><p>I was a bit slow responding and discovered tonight that he’d signed up for <a href="">Zi Makki</a>’s <a href="">Geek Dinner on Friday night</a>. If you’re in the area – whether or not you’re going to the event itself (I’m not, and nor is Ian) – why not come along?</p>Mike Dimmick to misuse the Office 2007 Ribbon<p>Dare Obasanjo <a href="">has noticed</a> a <a href="">comment</a> of mine on Jensen Harris’s post <a href="">announcing Microsoft’s licensing of the concept</a> of the Office 2007 ‘Ribbon’ UI. In that comment, I criticised (in a single sentence) Dare’s <a href="">concept</a> for a future version of <a href="">RSS Bandit</a>. I should say up-front that I’m a regular user of RSS Bandit; it’s my main RSS reader at home, in which I’m subscribed to over 100 feeds. I want this to remain usable, and my fear is that it won’t be.</p><p>Funnily, he doesn’t acknowledge that I made the first comment on <em>that</em> post, in which I go into detail. 
I said:</p><blockquote dir="ltr" style="MARGIN-RIGHT: 0px"><p>It doesn't belong. There's no need to go to an Office-style menu system in RSS Bandit because you barely ever use the menus anyway. It's not like there are loads of features hidden in the depths of the menus and dialogs, and the gallery is particularly over-the-top. How often do you think people will change the style of the newspaper view? Virtually never, in my opinion - they'll pick one that works, and stick with it. These options don't need to be 'in your face' the whole time. RSS Bandit is not document authoring software, it's a browser.<br /><br />If anything you could follow IE7's lead and drop the menu bar entirely. There aren't that many menu options, and most of them are replicated with some other widget, on one of the toolbars, or in the case of View/Feed Subscriptions and View/Search, the two tabs in the pane.<br /><br />Most of the other options that aren't duplicated could end up on an extended Tools menu.</p></blockquote><p dir="ltr">Dare links to Mike Torres who comments on the menu-less UI of various Microsoft applications, suggesting that this is something recent. At least two of these have been menu-less for a while, in one case for five years: Windows Media Player. 
The original version of WMP in Windows XP was without menus:</p><p dir="ltr"><img alt="Windows Media Player for Windows XP (WMP 8)" src="" border="0" /></p><p dir="ltr"><em>(screenshot from </em><a href=""><em></em></a><em>).</em></p><p dir="ltr">The highly-unconventional window shape was toned down in version 9.0 and became virtually conventional in 10.0, although all four corners are rounded whereas the normal XP themes have rounded top corners and square bottom corners.</p><p dir="ltr">It appears that the menus first disappeared from MSN Messenger in version 7.0, which was released in April 2005:</p><p dir="ltr"><img alt="MSN Messenger 7.0" src="" border="0" /></p><p dir="ltr"><em>(screenshot from <a href=""></a>)</em></p><p dir="ltr">Which Office application is RSS Bandit most like? Word? Excel? No. It’s most like Outlook. Which major Office 2007 application does <em>not</em> get a Ribbon (in its main UI)? Outlook.</p><p dir="ltr">I’ve been following Jensen Harris’s blog more-or-less since the beginning. In it, he explains the motivations behind creating the Ribbon, and the data that was used to feed the process of developing it. The Ribbon is mainly about creating better access to <em>creating and formatting documents</em>, by showing the user a gallery of choices and allowing them to refine it. Which part of Outlook gets a Ribbon? The message editor (OK, this is actually part of Word).</p><p dir="ltr">RSS Bandit is about <em>viewing other people’s content</em>, for which the best analogy is probably IE7.</p><p dir="ltr">I haven’t done any UI studies. I’ve not taken part in any. But Microsoft <em>have </em>analysed their UIs. They’ve gathered data on how those interfaces are used – automatically, in some cases (the Customer Experience Improvement Programs). The Ribbon is an improvement for Office. It’s not going to be right for all applications. 
Many applications actually suffer in the classic File/Edit/View/Tools/Help system: the menus tend to either be padded with commands that are duplicated elsewhere, or are ridiculously short (e.g. RSS Bandit’s ‘Edit’ menu which only has a ‘Select All’ option, which if you’re currently looking at a browser tab appears to do nothing – it’s only when you switch back to the Feed tab that you notice it’s selected all the items in the current feed or feed group). They’ll suffer equally in the Ribbon, particularly if there are too few features to make a Ribbon worthwhile.</p><p dir="ltr">When designing a UI for your application, don’t be too slavish to a particular model. If you find yourself padding out the menus to conform to the File/Edit/View model, or if all your commands are on the Tools menu, a classic menu probably doesn’t fit. If you’re not offering a feature for the user to customise the formatting of something, which the user will use <em>regularly</em>, a Ribbon is probably also wrong. The standard toolbar is probably enough.</p>Mike Dimmick knock-on effect of the stupid WinFX->.NET 3.0 naming decision<p>The next version of the Compact Framework will be called:</p><p><a href="">.NET Compact Framework <em>3.5</em></a>.</p><p>Yeah.</p><p>Great way to confuse people.< to fix the Smart Device Framework 2.0 installer<p>Neil Cowburn <a href="">noted that the Smart Device Framework 2.0 installer doesn’t work properly on Windows Vista</a>.</p><p>This is the comment I couldn’t post to his website:</p><p>“It's error upon error for this one. Code 2869 means that the dialog designated as an error dialog doesn't work how Windows Installer needs an error dialog to work - see <a href=""></a>. So the real error is being lost. Visual Studio is generating you a broken Error dialog.</p><p.</p><p>The Windows Installer team does not recommend the use of managed code custom actions. This message does not seem to have got through to the Visual Studio deployment team. 
The recommendation is to use as few dependencies as possible, which generally translates to statically-linked C++ code.</p><p>Windows Installer does support finding and executing an EXE that's already on the system as a custom action, but I don't think you can do this in Visual Studio.</p><p>You might want to consider a better installation solution, such as Windows Installer XML (WiX, <a href=""></a>)”</p><p>I’ve been getting into WiX recently. I was going to do a presentation at <a href="">DDD4</a>, but not enough people voted for it. If you fancy attending any of the proposed sessions and can spare a Saturday, <a href="">sign up now</a>. (I’m waiting for the final agenda to be posted, but the places may all go before that happens.)</p>Mike Dimmick to rename .NET Framework 3.0<p>As soon as I heard that Microsoft were changing the name WinFX, an umbrella name for Avalon, Indigo – oh, excuse me, Windows Presentation Foundation and Windows Communication Foundation – and Windows Workflow, to .NET Framework 3.0, I thought it was an incredibly bad idea.</p><p>Someone’s started a <a href="">petition to name it back to WinFX</a>. I don’t care what name it has – does it even need an umbrella name? Can we not call the three subsystems by their own names? Even better, their codenames which despite not being descriptive were at least <em>easy to say!</em> Do I really need to even install WCF and WF just to get a WPF application to work?</p><p>If a future version <em>will</em> install and work on older operating systems, they’ll have <em>another</em> stupid naming decision to make.</p><p>Please, if you value everyone’s sanity, <em>sign this petition</em>. It probably won’t do any good but you can at least say you spoke up against the insanity.</p>Mike Dimmick scanners not particularly reliable<p>Dana Epp <a href="">posted</a> a movie from Mythbusters cracking a fingerprint ‘lock’.</p><p>Not exactly secure.</p><p><a href="">Watch now</a>. 
(YouTube, may get taken down when someone spots the copyright violation. What the hell, it’s Talk Like A Pirate Day. Arrr!)</p>Mike Dimmick
String::TT - use TT to interpolate lexical variables
    use String::TT qw/tt strip/;

    sub foo {
        my $self = shift;
        return tt 'my name is [% self.name %]!';
    }

    sub bar {
        my @args = @_;
        return strip tt q{
            Args: [% args_a.join(",") %]
        }
    }
String::TT exports a tt function, which takes a TT (Template Toolkit) template as its argument. It uses the current lexical scope to resolve variable references. So if you say:

    my $foo = 42;
    my $bar = 24;

    tt '[% foo %] <-> [% bar %]';

the result will be 42 <-> 24.
TT provides a slightly less rich namespace for variables than perl, so we have to do some mapping. Arrays are always translated from @array to array_a and hashes are always translated from %hash to hash_h. Scalars are special and retain their original name, but they also get a scalar_s alias. Here's an example:

    my $scalar = 'scalar';
    my @array  = qw/array goes here/;
    my %hash   = ( hashes => 'are fun' );

    tt '[% scalar %] [% scalar_s %] [% array_a %] [% hash_h %]';

There is one special case, and that's when you have a scalar that is named like an existing array or hash's alias:

    my $foo_a = 'foo_a';
    my @foo   = qw/foo array/;

    tt '[% foo_a %] [% foo_a_s %]'; # foo_a is the array, foo_a_s is the scalar
In this case, the foo_a accessor for the foo_a scalar will not be generated. You will have to access it via foo_a_s. If you delete the array, though, then foo_a will refer to the scalar.
This is a very cornery case that you should never encounter unless you are weird. 99% of the time you will just use the variable name.
None by default, but strip and tt are available.

Treats $template as a Template Toolkit template, populated with variables from the current lexical scope.
Removes a leading empty line and common leading spaces on each line. For example,

    strip q{
        This is a test.
         This is indented.
    };

Will yield the string "This is a test\n This is indented.\n".
This feature is designed to be used like:
    my $data = strip tt q{
        This is a [% template %].
        It is easy to read.
    };
Instead of the ugly heredoc equivalent:
    my $data = tt <<'EOTT';
    This is a [% template %].
    It looks like crap.
    EOTT
If you want to pass args to the TT engine, override the _build_tt_engine function:

    local *String::TT::_build_tt_engine = sub {
        return Template->new( ... )
    };

    tt 'this uses my engine';
This module is hosted in the
jrock.us git repository. You can view the history in your web browser at:;a=summary
and you can clone the repository by running:
git clone git://git.jrock.us/String-TT
Patches welcome.
Jonathan Rockway
jrockway@cpan.org
This module is copyright (c) 2008 Infinity Interactive. You may redistribute it under the same terms as Perl itself.
Start Lecture #1
I start at 0 so that when we get to chapter 1, the numbering will agree with the text.
There is a web site for the course. You can find it from my home page, which is
The course text is Liang,
Introduction to Java Programming (Brief Version),
Eighth Edition (8e)
Use Reply to contribute to the current thread, but NOT to start another topic.
Do not top post; that is, when replying, I ask that you either place your reply after the original text or interspersed with it.
You must not top post.
Grades are based on the labs and the final exam, with each very important.
The weighting will be approximately
..
Introduction with a Programming Prerequisite
How weird is this?
The formal prerequisite for 0101 is 0002, which teaches the Python programming language. (I had a tiny, insignificant part in the development of Python when I first arrived at NYU, 30 years ago.)
If instead of taking 0002, you have programmed in some other language (say C/C++), that is fine.
If, however, you are already a wizard Java programmer (or even a mere expert), you are taking the wrong course—you would be wasting somebody's money and, more significantly, wasting much of your time.
This course, indeed the CS major sequence, emphasizes software, i.e., computer programs, rather than hardware, i.e., the physical components of a computer.
We teach a little hardware in 201, Computer Systems Organization, giving a high-level, non-detailed view, and present much more in 436, Computer Architecture.
In general, the NYU course sequence offers a top-down view: we first show you how to program in high-level languages such as Java and Python, later we present the assembly language that is essentially the language understood by the computer itself, and later still we describe how the electronic components in a computer are able to actually execute these programs.
Many universities follow this approach. Others provide a bottom-up sequence beginning with the components, then low-level (assembly) languages, and then high-level languages.
Computers store and process data.
Modern computers store the programs on the same media as the data.
Figure 1.1 in the book is quite dated: It shows the design of 1980s computers. Modern machines do not have a single bus over which all information must travel. Compare the diagrams in sections 1.3 and 1.3.6 of my OS class note.
The CPU contains the electronic components that actually execute the instructions given to the computer.
First of all the CPU decodes the instruction (i.e., determines what is to be done, for example the contents of two CPU memory units called registers are to be added and the result placed in another register). Other instructions require the CPU to access additional components of the computer, e.g., the central memory.
In addition to determining the action needed, the CPU performs many of the operations required. For example the ALU (Arithmetic/Logic Unit) portion of the CPU contains an adder and thus performs the register add mentioned above.
Within a computer all data is stored as a sequence of bits, each of which can take on one of two values. Computers today mostly represent numbers as words, each consisting of 32 or 64 bits.
A 32-bit word can take on 2^32 (approximately 4 billion) different values.
I very much believe that you should remember just one value,
2^10 = 1024.
Then you can deduce that
2^32 = 2^2 * 2^30 = 4 * 2^(3*10) = 4 * 2^10 * 2^10 * 2^10 = 4*1024*1024*1024,
which is a little more than 4*1000*1000*1000 = 4,000,000,000.
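These powers of two are easy to confirm mechanically; here is a tiny Java check (my own illustration, not from the notes — the class name PowersOfTwo is made up):

```java
// Confirming 2^10 = 1024 and 2^32 = 4 * 1024^3 using shifts.
public class PowersOfTwo {
    public static void main(String[] args) {
        long kib = 1L << 10;  // 2^10
        long big = 1L << 32;  // 2^32; needs a long, since it overflows an int
        System.out.println("2^10 = " + kib);  // 1024
        System.out.println("2^32 = " + big);  // 4294967296, a little over 4 billion
        System.out.println(big == 4L * kib * kib * kib);  // true
    }
}
```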
Modern computers cannot access a single bit of memory. They can access a single word and most computers (including all those we shall consider) can access a smaller unit called a byte, which consists of 8 bits.
Since a byte is the smallest unit of memory that can be referred to directly, modern computers are called byte-addressable.
Since the bytes can be accessed in any order (not just sequentially in order byte #1, byte #2, ...), the memory is said to support random access and is called random access memory or RAM.
Computers can access any byte in RAM quite quickly, which is wonderful. However, there are at least three problems with RAM.
Today's (personal) computers have around a gigabyte (GB) of RAM.
The exact size of a gigabyte is controversial.
It is either a billion (10^9) bytes or the binary
equivalent (2^30).
When you purchase a gigabyte of RAM you are getting the latter, but
when you purchase a gigabyte of disk storage you are getting only
the former.
It is clearly nonsensical that an 80GB disk cannot
hold 10 copies of the data contained in an 8GB RAM.
Nonetheless, it is true.
In fact, the
proper terminology is that the disk contains
only 80GB, not 80GiB (abbreviating 80 gibibytes).
However, common usage is still 80GB.
Although a gigabyte of RAM is huge by historical standards, it is still insufficient to hold all the data we want on a computer system. For example, my (lavishly equipped) laptop has 8GB, which can store one movie in standard definition (one DVD) but not one hi-def movie (one blu-ray).
Disks (i.e., so called hard drives) provide several hundred times more bytes per dollar than does RAM.
Disks are not byte addressable (i.e., you can't refer to a single byte stored on a disk).
Instead the smallest addressable unit is called a sector, which is typically 512 bytes.
Disks form the primary storage medium for most computer systems that are at least as big as a laptop.
Current RAM does not maintain its contents when the power is shut off and hence is not suitable for storing permanent data. Hard drives, various types of CDs, and flash storage do maintain their contents without power.
The book's words are a little garbled.
CDs come in basically three flavors: read-only, write-once, and rewritable.
(In this course CDs refer to data CDs; audio CDs organize the stored data in a different manner.)
DVDs and Blu-ray are (for us) simply higher density CDs (in 202 you will learn that the filesystems stored on DVDs differ from those of CDs).
CD-RW discs can be erased (blanked) and then rewritten. The user cannot simply rewrite a single word or byte, leaving the remaining data unchanged. So it is not rewritable the way RAM is.
Flash drives are physically small storage units (they are often called thumb drives due to their size and shape).
Unlike disks and CDs, flash drives have no moving parts and are thus potentially much faster.
Like disks they are not byte addressable; their smallest accessible unit is called a block.
Blocks can be rewritten a large number of times.
However, the large number is not large enough to be ignored.
Flash drives are sometimes called solid-state disks.
These are becoming less important and we will not discuss them.
RAM cannot be moved easily from one machine to another. You would lose the data present (due to volatility) and if done often or carelessly, might damage the device. Some disk drives (called external disks) can be transported, but CDs and flash drives are much better in this regard.
Note the CPU-centric terminology. Devices that produce output, such as mice and keyboards, are called input devices and devices that accept input such as monitors are called output devices.
How does moving a mouse, cause the pointer to move?
How does a keyboard send a 'X' as opposed to a 'x'?
Screen resolution and dot pitch of a monitor are defined correctly in the book, but the statements about quality and clarity are too simplistic; the size of the monitor must be considered as well.
We will not study these. The book is somewhat dated here. Some homes (e.g., mine) have LANs; a typical NIC now is at least 100 megabits per second not 10 (many are now 1000 megabits per second).
I assume you have written programs (perhaps in 0002) and thus know what they are.
An operating system (OS) is a software system that raises the level of abstraction provided by the hardware to a more convenient virtual machine that other software can then use.
For example, when we write programs accessing disk files, we do not worry about (or even have knowledge of) how the data is actually stored on the disk.
Indeed, the very concept of a file is foreign to a disk and is an abstraction provided by the OS.
The OS also acts as a resource manager permitting multiple users to share the hardware resources.
Naturally much more detail is provided in my OS class notes. A short summary is in section 1.1 of those notes.
Java is a very popular, modern, general purpose, programming language. It comes with an extensive standard library that aids in writing graphical programs, especially those, called applets, that are invoked from browsers, e.g., firefox.
Java has extensive support for the modern software development methodology called object-oriented programming.
Java is a full-featured, and thus large, programming language. In its entirety, Java is not simple; but we will be able to avoid most of the tricky parts.
Any programming language needs a detailed, precise specification describing the syntax and semantics of the language. It is basically the rules that determine a legal Java program. We will not need this level of precision.
Changing the specification essentially changes the language. The Java spec is stable.
The Application Program Interface (API) is defined by the standard library that comes with Java. It is comparatively easy to extend the API—write another library routine—and this does occur.
There are several versions of Java; we use Java SE 1.6, which we will just call Java.
The programs used to compile and run Java programs are part of the Java Development Toolkit (JDK).
Instead of using the JDK, one can use an Integrated Development Environment (IDE). Several IDEs are available. I will use only the JDK. You may develop your labs using either the JDK or an IDE, but the final product must be a Java program that can be run with just the JDK.
// Hello world in Java
public class Hello {
    public static void main (String[] args) {
        System.out.println("Hello, world.");
    }
}
// Hello world in the C programming language
#include <stdio.h>
int main(int argc, char *argv[]) {
    printf("Hello, world.\n");
    return 0;
}
On the right we see a simple Java program that prints the sentence
Hello, world..
This program is contained in the file Hello.java.
For comparison, the corresponding C program is below it. I put this program in a file called hello.c, but it could have been in a file called xyxxy.c (the .c is important).
Although they may look different, these two programs are basically the same. We now discuss briefly the Java version, line by line.
Homework: 1.1, 1.3.
For the benefit of those students with the 7th edition, here are the problems.
1.1 (Displaying three messages) Write a program that displays Welcome to Java, Welcome to Computer Science, and Programming is fun.
1.3 (Displaying a pattern) Write a program that displays the following pattern:
J A V V A J A A V V A A J J AAAAA V V AAAAA J J A A V A A
Unless otherwise stated homeworks are from the Programming Exercises at the end of the current chapter.
They are not from the Review Questions.
Start Lecture #2
A Java program is created using a text editor. I use emacs. Others use vi, notepad, or a variety of alternatives. One possibility is the editor included with a Java IDE.
Java programs are compiled using a Java compiler. The standard compiler included with a JDK is called javac. To compile our Hello program (in the file Hello.java), one would write
javac Hello.java

Javac translates Java into an intermediate form normally called bytecode. This bytecode is portable. That is, the bytecode produced on one type of computer can be executed on a computer of a different type.
The bytecode is placed in a so-called class file, in this case the file Hello.class
Our C version of Hello, if contained in the file Hello.c, could be compiled via the command
cc -o Hello Hello.c

The resulting file Hello is not portable. Instead it has instructions suitable for one specific machine type (and software system).
Portability of Java bytecode has been obtained by defining a virtual machine on which to run the bytecode. This virtual machine is called the JVM, Java Virtual Machine.
Each platform (hardware + software, e.g., Intel x86 + MacOS or SPARC + Solaris) on which Java is to run includes an emulator of this virtual machine. Somewhat ambiguously, the emulator of the Java Virtual Machine is itself also called the JVM.
The penalty for this portability is that executing bytecode by the (typically software) JVM is not as efficient as executing non-portable, so-called native, code tailored for a specific machine.
Since essentially no hardware/OS can execute Java bytecode directly, another program is run and is given the bytecode as data. This program is typically included in any Java IDE. For the JDK the program is called java (lower case).
For our example the bytecode located in the file Hello.class is executed via the JDK command
java Hello

(Note that we must write Hello and not Hello.class.)
// Hello world in Java -- gui version
public class HelloGui {
    public static void main (String[] args) {
        javax.swing.JOptionPane.showMessageDialog(null, "Hello, world.");
    }
}
Java comes with a large library of predefined functions. The top example on the right shows how just changing the output function causes a dialog box to be produced. Naturally, this code only works on a graphical display.
Another difficulty is that actually explaining how the dialog box appears on the screen is quite complicated, involving widgets, fill-rectangle, and other graphics concepts.
// Hello world in Java -- pedantic version
public class HelloPedantic {
    public static void main (String[] args) {
        java.lang.System.out.println("Hello, world.");
    }
}
In fact, our original hello example used a shortcut. The class System is actually found in the package java.lang (which is searched automatically by javac). The bottom example shows the program without using this shortcut.
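The shortcut is purely a matter of name resolution: the two spellings denote exactly the same class. A small check of my own (the class name ShortcutDemo is made up):

```java
// java.lang is searched automatically, so System and java.lang.System
// name the very same class (and therefore the very same out stream).
public class ShortcutDemo {
    public static void main(String[] args) {
        System.out.println(java.lang.System.out == System.out);  // true
        java.lang.System.out.println("Hello, world.");
        System.out.println("Hello, world.");  // identical effect
    }
}
```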
These have answers on the web in the companion. See the book for details. I tried it and it works.
If you cannot understand an answer, ask! A good question for the mailing list.
Let's solve the quadratic equation Ax^2 + Bx + C = 0. Computational problems like this often have the form
For our first example we will
hard-wire the inputs and not
do it again.
What is the input?
Ans: The three coefficients A, B, and C.
How is the output computed?
Ans: (-B +- sqrt(B^2 - 4AC)) / (2A)
What about sqrt of a negative number?
What about it? Mathematically, you get complex numbers. We will choose A, B, C avoiding this case. A better program would check.
public class Quadratic1 {
    public static void main (String[] args) {
        double A, B, C;  // double precision floating point
        A = 1.0;
        B = -3.0;
        C = 2.0;
        double discriminant = B*B - 4.0*A*C;  // We assume discriminant >= 0 for now
        double ans1 = (-B + Math.pow(discriminant,0.5)) / (2.0*A);
        double ans2 = (-B - Math.pow(discriminant,0.5)) / (2.0*A);
        System.out.println("The roots are " + ans1 + " and " + ans2);
    }
}
The program is on the right. Note the variables are declared to have type double. This is the normal representation of real numbers in Java.
We can assign values to variables either when we declare them or separately. Both are illustrated in the example.
Note that the first two lines and the last two lines are the same as from the first example.
How does the println work?
In particular, what are we adding?
Ans: The + operator is overloaded. X+Y means add if X and Y are numbers; it means concatenate if X and Y are strings.
But in the program we have both strings and numbers.
When the operands are mixed, numbers are converted to strings.
The name double is historical. On early systems real numbers were called floating point because the decimal point was not in a fixed location but instead could float. This way of writing real numbers is often called scientific notation. The keyword double signifies that the variable is given double the amount of storage that would be given to a variable declared to be float.
The previous example was quite primitive. In order to solve a different quadratic equation it is necessary to change the program, recompile, and re-run.
In this section we will save the first two steps, by having the program read in the coefficients A, B, and C. Later we will use a loop to enable one run to solve many quadratic equations.
On the right is the program in a pedantic style. We show the program as it would normally be written below.
public class Quadratic2Pedantic {
    public static void main (String[] Args) {
        double A, B, C;
        double discriminant, ans1, ans2;
        java.util.Scanner getInput;
        getInput = new java.util.Scanner(java.lang.System.in);
        A = getInput.nextDouble();
        B = getInput.nextDouble();
        C = getInput.nextDouble();
        discriminant = B*B - 4.0*A*C;   // We assume discriminant >= 0 for now
        ans1 = (-B + Math.pow(discriminant,0.5)) / (2.0*A);
        ans2 = (-B - Math.pow(discriminant,0.5)) / (2.0*A);
        java.lang.System.out.println("The roots are " + ans1 + " and " + ans2);
    }
}
Note the following new features in this program.
import java.util.Scanner;
public class Quadratic2 {
    public static void main (String[] Args) {
        Scanner getInput = new Scanner(java.lang.System.in);
        double A = getInput.nextDouble();
        double B = getInput.nextDouble();
        double C = getInput.nextDouble();
        double discriminant = B*B - 4.0*A*C;   // We assume discriminant >= 0 for now
        double ans1 = (-B + Math.pow(discriminant,0.5)) / (2.0*A);
        double ans2 = (-B - Math.pow(discriminant,0.5)) / (2.0*A);
        System.out.println("The roots are " + ans1 + " and " + ans2);
    }
}
On the right we see the program rewritten in the style that would normally be used.
The above explanation is way more complicated than the program itself! For now it is fine to just remember how to read doubles. Other methods in the Scanner class include nextInt, nextBoolean, and next() (which returns the next word as a String; there is no nextString).
Homework: 2.1, 2.5
For those without the 8e.
2.1: Write a program that reads Celsius degrees (in double), converts it to Fahrenheit, and displays the result. The formula is F=(9/5)C+32.
2.5: Write a program that reads in the subtotal and the gratuity rate, then computes the gratuity and the total.
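The formula in problem 2.1 hides a Java pitfall: written as 9/5 it performs integer division and yields 1, not 1.8. A minimal sketch with a hard-wired input (the class and variable names are my own):

```java
public class CelsiusToFahrenheit {
    public static void main (String[] args) {
        double celsius = 100.0;                      // hard-wired here; use Scanner to read it
        double wrong = (9 / 5) * celsius + 32;       // 9/5 is integer division == 1
        double right = (9.0 / 5.0) * celsius + 32;   // real division == 1.8
        System.out.println(wrong + " " + right);     // prints 132.0 212.0
    }
}
```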
These are the names that appear when writing a program. Examples include variable names, method names, and object names. There are rules for the names.
A name may not be a keyword (e.g., class, double, if, and else).
As we have said classes contain data (normally called fields) and methods. Fields are examples of variables, but we haven't seen any yet. Later we shall learn that there are two kinds of fields, static and non-static.
Another kind of variable is the local variable, which is a variable declared inside a method. We have seen several, e.g. discriminant.
The final kind of variable is the parameter. We have seen one example so far, the args parameter always present in the main method.
Many programming languages have assignment statements, which in Java are of the form
variable = expression ;
Executing this statement evaluates the expression and places the resulting value in the variable.
This ability to change the value of a variable (so called mutable state) is a big deal in programming language design. Proponents of functional languages believe that imperative languages, such as Java, are harder to understand due to the changing state.
Java, like the C language on which the Java syntax is based, also includes assignment expressions, which are of the form
variable = expression
(NO semicolon). An assignment expression evaluates the RHS (right hand side) expression, assigns the value to the LHS variable, and then returns this value as the result of the assignment expression itself.
System.out.println(x = Math.pow(2,0.5));
a = b = c = 0;
a = 1 + (b = 2 + (c = 0));
The first example evaluates the square root of 2, assigns the value to x, and then passes this value to println.
The second example first performs c=0, which results in c becoming 0 and the value 0 returned. This 0 is then assigned to b and returned where it is assigned to a. Note the right to left evaluation.
The third example is ugly, confusing, and not recommended. It works as follows: c=0 is evaluated assigning 0 to c and returning 0. Then 2+0 is evaluated to 2, assigned to b, and returned. Then 1+2 is evaluated to 3, assigned to a, and discarded.
Java, again like C, uses = for assignment and == to test if two values are equal. Another common choice is to use := for assignment and = to test for equality.
We have seen constant numbers and constant strings. But these were literal constants; that is the value itself was used (e.g., 2 or "Hello, world.").
A string constant like "Hello, world." is certainly descriptive, but a numeric constant like 2 is not (does its usage signify that we are computing base 2, that our computer game program is playing 2-handed cribbage, or what?). In addition, if we wanted to change either constant, we would need to change all relevant occurrences.
Instead of explicitly writing the 2 or "Hello, world." at each occurrence, we can define a named constant (a.k.a. a symbolic constant) for each value.
final int numberBase = 2;
final int cribbageHands = 2;
final String msg = "Hello, world.";
The first two lines on the right could be used in a two-handed cribbage program that somehow relied on base 2 representations. Then if the program were to be extended to 4-handed cribbage as well, we would have a clue where to look.
The final keyword says that the definition gives the final value this identifier will have. That is, the identifier will not be assigned a new value. It is read-only or constant.
Numerical values in Java come in two basic types, corresponding to mathematical integers and real numbers. However each of these come in various sizes.
Mathematical integers have four Java types.
Mathematical real numbers have two Java types
The difference between the four integer types and the two real types is the amount of memory used for each value. As the name suggests a byte is stored in a byte. A short is stored in two bytes; an int is stored in four; and a long is stored in eight.
A float is stored in four bytes and a double is stored in eight.
For integers, allocating more storage permits a wider range of values.
Floating point is more complicated. The extra storage is used to extend the range of both the fractional part and the exponent. Recall that floating point numbers are represented in what is essentially scientific notation, so they store both a fractional part and an exponent (unlike customary scientific notation, the exponent represents a power of 2 not 10).
Start Lecture #3
Remark: I mistakenly typed in the wrong problem 2.5 (I typed in 2.2). You may do either. By tomorrow the notes will be corrected.
Java, like essentially all languages, defines +, -, *, and / to be addition, subtraction, multiplication, and division.
(-1) % 5 == -1
(-1) modulo 5 == 4
Java defines % to be the remainder operator. Some people (and some books and some web pages) will tell you that Java defines % to be the mathematical modulo operator. Don't believe them. Remainder and modulo do agree when you have positive arguments, but not in general.
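A quick sketch illustrating the difference; Math.floorMod (available since Java 8) computes the true mathematical modulo:

```java
public class RemainderVsModulo {
    public static void main (String[] args) {
        System.out.println((-1) % 5);             // remainder: prints -1
        System.out.println(Math.floorMod(-1, 5)); // modulo: prints 4
        System.out.println(7 % 5);                // positive arguments agree: prints 2
        System.out.println(Math.floorMod(7, 5));  // prints 2
    }
}
```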
Literals are constants that appear directly in a program. We have already seen examples of both numeric and string literals.
For integers, if the literal begins with 0 followed by a digit, the value is interpreted as octal (base 8). If it begins with 0x, it is interpreted as hexadecimal (base 16). Otherwise, it is interpreted as decimal (base 10).
What is the use of base 8 and base 16 for us humans with 10 fingers (i.e., digits)?
Ans: Getting at individual bits.
The literal is considered an int unless it ends with an l or an L, in which case it is a long.
Real numbers are always decimal. If the literal ends with an f or an F, it is considered a float. By default a real literal is a double, but this can be emphasized by ending it with a d or a D.
Literals such as 6.02×10²³ (Avogadro's number) can also be expressed in Java (and most other programming languages). Although the exact rules are detailed, the basic idea is easy: first write the part before the ×, then write an e or an E (for exponent), then write the exponent. So Avogadro's number would be 6.02E23. Negative exponents get a minus sign (9.8E-3).
The details concern the optional + in the exponent, the optional trailing d or D, the trailing f or F for float, and the ability to omit the decimal point if it is at the right.
This is quite a serious subject if one considers arbitrary Java expressions. In general, to find the details, one should search the web using site:sun.com. This leads to the Java Language Specification. The chapter on expressions is 103 pages according to print preview on my browser!
For now we restrict ourselves to arithmetic expressions involving just +, -, *, /, %, (, and ). In particular, we do not now consider method evaluation. In this case the rules are not hard and are very similar to the rules you learned in algebra.
The last step illustrated integer division, which in Java rounds towards zero. So (5 - 12) / 4 == (-7) / 4 == -1.
Note that these operators are binary. We have unary operators as well, e.g., unary -, which are evaluated first. So 5 + - 3 == 2 and - 5 + - 3 == -8. Unary + is defined as well, but doesn't do much.
Unary (prefix) operators naturally are applied right to left, so 5 - - - 3 == 2 and 5 + + + + 3 == 8.
// Silly version of How Long Ago
import java.util.Scanner;
public class HowLongAgo {
    public static void main (String[] args) {
        final int daysInMonth = 30;   // ridiculous
        final int daysInYear = 360;   // equally ridiculous
        Scanner getInput = new Scanner(System.in);
        System.out.println("Enter today's day, month, and year");
        int day = getInput.nextInt();
        int month = getInput.nextInt();
        int year = getInput.nextInt();
        System.out.println("Enter old day, month, and year");
        int oldDay = getInput.nextInt();
        int oldMonth = getInput.nextInt();
        int oldYear = getInput.nextInt();
        // Compute total number of days ago
        int deltaDays = (day-oldDay) + (month-oldMonth)*daysInMonth
                      + (year-oldYear)*daysInYear;
        // Convert to days / months / years
        int yearsAgo = deltaDays / daysInYear;
        int monthsAgo = (deltaDays % daysInYear) / daysInMonth;
        int daysAgo = (deltaDays % daysInYear) % daysInMonth;
        System.out.println("The old date was " + yearsAgo + " years "
            + monthsAgo + " months and " + daysAgo + " days ago.");
    }
}
It is no fun to do one from the book so instead we do a silly version of a program to tell how long it is from one date to another. For example from 1 July 1980 to 5 September 1985 is 5 years, 2 months and 4 days. We do a silly version since we don't yet know about arrays and if-then-else.
The program (on the right) performs this task in three steps.
Note that monthsAgo is not the total number of months ago since we have already removed the years.
I computed the three values in the order days, months, years, which is from most significant to least significant. You can instead compute the values in the reverse order (least to most significant). To see an example, read the book's solution.
Splitting a combined value into parts is actually quite useful. Consider splitting a 3-digit number into its three digits. For example given six hundred fifteen, we want 6, 1, and 5. This is the same problem as in our program but daysInYear is 100 and daysInMonth is 10.
How would you convert a number into millions, thousands, and the rest?
Ans: Set daysInYear = 1,000,000 and daysInMonth = 1,000.
How would you convert dollars into hundreds, twenties, and singles?
Ans: Set daysInYear to 100 and daysInMonth to 20.
How can an operating system convert a virtual address into segment number, page number, and offset?
Ans: Set daysInYear to the number of bytes in a segment and ... oops wrong course (and real OSes do it differently).
Homework: 2.7 Write a program that prompts the user to enter the number of minutes and then calculates the (approximate) number of years and days it represents. Assume all years have 365 days.
Statements like x = x + y; are quite common and several languages, including Java, have a shorthand form x += y;.
Similarly Java et al. have -=, *=, /=, and %=. Note that there is no space between the arithmetic operator and the equal sign.
An especially common case is x = x + 1; and Java and friends have an especially short form x++. Similarly, we have x-- (but NOT ** or // or %%).
In fact x++ can be part of an expression rather than as a statement by itself. In this case, a question arises. Is the value used in the expression the old (un-incremented) value of x, or the new (incremented) value? Both are useful and both are provided.
x = 5;
y1 = x++ + 10;
y2 = ++x + 10;
y3 = x-- + 10;
y4 = --x + 10;
Consider the code sequence on the right where all 5 variables are declared to be ints.
On the board show how to calculate the index for queues (rear++ and front++) and for stacks (top++ and --top). What about moving all four ++ and -- operators to the other side?
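Tracing the sequence above as a runnable sketch:

```java
public class IncDec {
    public static void main (String[] args) {
        int x = 5;
        int y1 = x++ + 10;  // old value used: y1 == 15, then x == 6
        int y2 = ++x + 10;  // x incremented first: x == 7, y2 == 17
        int y3 = x-- + 10;  // old value used: y3 == 17, then x == 6
        int y4 = --x + 10;  // x decremented first: x == 5, y4 == 15
        System.out.println(y1 + " " + y2 + " " + y3 + " " + y4 + " " + x);  // prints 15 17 17 15 5
    }
}
```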
What happens if we try to add a short to an int? How about multiplying a byte by a double?
Even simpler perhaps, how about assigning an integer value to a real variable, or vice versa?
The strict answer is that you can't do any of these things directly. Instead one of the values must be converted to another type.
Sometimes this happens automatically (but it does happen). Other times it must be explicitly requested and some of these requests fail.
When the programming language automatically converts a value of one type (e.g. integer) to another type (e.g., double), we call the conversion a coercion. When the programmer explicitly requests the conversion, we call it type casting.
Any short value is also a legal int. Similarly any float value is a legal double.
Conversions of this kind are called widenings since the new type in a sense is wider than the old. Similarly, the reverse conversions are called narrowing.
public class Test {
    public static void main (String[] args) {
        int a = 1234567891;   // one billion plus
        short b;
        double x;
        float y;
        x = a;
        y = a;
        // b = a;             // error: coercions cannot narrow
        b = (short)a;         // narrowing cast
        System.out.println(a + " " + b + " " + x + " " + y);
    }
}
javac Test.java; java Test
1234567891 723 1.234567891E9 1.23456794E9
Java will perform widening coercions but not narrowing coercions. To perform a narrowing conversion, the programmer must use an explicit cast. This is done by writing the target type in parentheses.
The code on the right illustrates these points. Two problems have arisen.
The loan payment problem requires you to accept some complicated formula for monthlyPayment. That is not my style. Instead, we will do a simpler problem where we can understand the formula used.
Say you have a bank account with $1000 that pays 3% interest per year and want to know what you will have after a year.
finalBalance = origBalance + interest
             = origBalance + origBalance * interestRate
             = origBalance * (1 + interestRate)
finalBalance = origBalance * (1 + interestRate/12)^12
finalBalance = origBalance * (1 + interestRate/365)^365
finalBalance = origBalance * (1 + interestRate/n)^n
import java.util.Scanner;
public class CompoundInterest {
    public static void main (String[] args) {
        Scanner getInput = new Scanner(System.in);
        double origBal = getInput.nextDouble();
        double interestRate = getInput.nextDouble();
        int n = getInput.nextInt();   // number of compoundings
        double finalBal = origBal * Math.pow(1 + interestRate/n, n);
        System.out.println(finalBal);
    }
}
javac CompoundInterest.java; java CompoundInterest
1000. .03 1
1030.0
java CompoundInterest
1000. .03 2
1030.2249999999997
java CompoundInterest
1000. .03 12
1030.4159569135068
java CompoundInterest
1000. .03 100000
1030.4545293116412
java CompoundInterest
1000. .03 1000000
1030.4545335307425
The program on the right is fairly straightforward, given what we have done already. First we read the input data: the original balance, the (annual) interest rate, and the number of compoundings.
Then we compute the final balance after one year. We could do k years by changing the exponent from n to k*n.
Finally, we print the result.
The only interesting line is the computation of finalBal. Let's read it carefully in class to check that it is the same as the formula above.
We show five runs, the first for so called simple interest (only compounded once). We did this above and sure enough we again get $1030 for 3% interest on $1000.
The second is semi-annual compounding. Note that we do not need to recompile (javac).
The third is monthly compounding.
The fourth and fifth suggest that we are approaching a limit.
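The limit the runs suggest is continuous compounding, origBal·e^rate. A quick check of this claim (Math.exp is the standard library exponential; the class name is my own):

```java
public class ContinuousCompounding {
    public static void main (String[] args) {
        double origBal = 1000.0;
        double rate = 0.03;
        // lim (1 + rate/n)^n = e^rate as n grows without bound
        System.out.println(origBal * Math.exp(rate));  // about 1030.45, matching the limit above
    }
}
```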
char c = 'x';
String s = "x";
s = c;   // compile error
The char (character) datatype is used to hold a single character. A literal char is written as a character surrounded by single quotes.
The variables c and s on the right are most definitely not the same.
Java's char datatype uses 16-bits to represent a character, thereby allowing 216=65,536 different characters. This size was inspired by the original Unicode, which was also 16 bits. The goal of Unicode was that it would supply all the characters in all the world's languages.
However, 65,536 characters proved to be way too few; Unicode has since been revised to support many more characters (over a million), but we won't discuss how it was shoehorned into 16-bit Java characters. It turns out that the character 'A' has the 16-bit value 0000000001000001 (41 in hex, 65 in decimal).
There are two ways to write this character. Naturally 'A' is one, but the other looks weird, namely '\u0041'. This representation consists of a backslash, a lower case u (presumably for unicode), and exactly 4 hexadecimal digits. I don't like this terminology—to me digit implies base 10—but it is standard.
public class As {
    public static void main(String[] Args) {
        String str = "A\u0041\u0041A \u0041";
        System.out.println(str);
    }
}
javac As.java; java As
AAAA A
So 'A' is the 4*16+1=65th character in the Unicode set.
The two character representations can be mixed freely, as we see on the right.
In the not so distant past, computers (at least US computers) used the 7-bit ASCII code (rather nationalistically, ASCII abbreviates American Standard Code for Information Interchange). ASCII code 65 is also 'A'. All the normal letters, digits, etc. appear in both ASCII and Unicode at the same position (65 for A). In fact, the 128 ASCII codes form the first 128 Unicode characters, in the same positions.
Although it will not be emphasized in this course, it is indeed wonderful that alphabets from around the world can be represented.
We have a problem. The character " is used to end a string. What do we do if we want a " to be part of a string?
We use an escape sequence beginning with \. In particular we use \" to include a " inside a string.
But then how do we get \ itself?
Ans: We use a \\ (of course).
The Java escape sequences are: \n (newline), \t (tab), \" (double quote), \' (single quote), \\ (backslash), \b (backspace), \f (form feed), and \r (carriage return). The first few are more commonly used than the rest.
public class Test {
    public static void main (String[] args) {
        int i;
        short s;
        byte b;
        char c = 'A';
        i = c;
        s = c;   // error
        b = c;   // error
        System.out.println(c + " " + i + " " + s + " " + b);
    }
}
I believe a better heading would be converting between char and numeric types since not all of the conversions used are (explicit) casts; some are (implicit) coercions.
The code on the right has two errors. Clearly a (16-bit) char may not fit into an (8-bit) byte. The (subtle) trouble with the short is that it is signed.
So coercions will not work. You may write casts such as b=(byte)c; this will compile but will produce wrong answers if the actual value in the char c does not fit in the byte b.
Similarly, integer values may be cast into chars (but not coerced).
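A small sketch of the legal conversions between char and int (the class name is my own):

```java
public class CharCasts {
    public static void main (String[] args) {
        char c = 'A';
        int i = c;                 // coercion (widening): i == 65
        char d = (char)(c + 1);    // explicit cast required: d == 'B'
        System.out.println(i + " " + d);  // prints 65 B
    }
}
```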
We already did a simpler version of this.
Let's do the design of a more complicated version: read in a double containing an amount of dollars and cents, e.g., 123456.23, and print the number of twenties, tens, fives, singles, quarters, dimes, nickels, and pennies.
There is a subtle danger here that we will ignore (the book doesn't even mention it). A number like 1234.1 cannot be represented exactly as a double since 0.1 cannot be written in base 2 (using a finite number of bits) just as 1/3 cannot be written as a decimal.
Ignoring the problem, the steps to solve the problem are
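Those steps can be sketched as follows. To dodge the inexact-double problem just mentioned, I convert to an integer number of pennies first; the Math.round call is my own addition, not the book's:

```java
public class MakeChange {
    public static void main (String[] args) {
        double amount = 123456.23;                    // hard-wired here; use Scanner to read it
        int pennies = (int)Math.round(amount * 100);  // 12345623 pennies
        int twenties = pennies / 2000;  pennies %= 2000;
        int tens     = pennies / 1000;  pennies %= 1000;
        int fives    = pennies / 500;   pennies %= 500;
        int singles  = pennies / 100;   pennies %= 100;
        int quarters = pennies / 25;    pennies %= 25;
        int dimes    = pennies / 10;    pennies %= 10;
        int nickels  = pennies / 5;     pennies %= 5;
        System.out.println(twenties + " " + tens + " " + fives + " " + singles
            + " " + quarters + " " + dimes + " " + nickels + " " + pennies);
        // prints 6172 1 1 1 0 2 0 3
    }
}
```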
Start Lecture #4
In Java a String (note the capital S) is very different from a double, short, or char. Whereas all the latter are primitive types, String is actually a class and hence the String type is a reference type.
So what?
It actually is an important distinction, but not yet. Remember that Java was not designed as a teaching language but as a heavy-duty production language. One consequence is that doing simple things (reading an integer, declaring a string) uses fairly sophisticated concepts.
Now we just want to learn how to declare String variables, write String constants, assign Strings and concatenate Strings. Fortunately, we can do all of these tasks without having to deal with the complications. Later we will learn much more about classes.
"This is A string"
"This is \u0041 string"
String s1, s2;
s1 = "A string";
String s3 = "Another string";
s2 = s1 + " " + s3;
s2 += " and more";
A string
A string Another string and more
Another string
The top line on the right shows a string. The syntax is very simple: a double quote, some characters, a double quote. The second line is the same string.
Below the horizontal line are legal Java statements. The first declares two strings; they are currently uninitialized. The second line assigns a string to the first variable. The third line declares a third string and initializes it. The last two lines perform string concatenation.
The last group of lines shows the output when we println() each of the three strings.
There are three types of Java comments.
I will use only the first form, but you may use any of the three.
To learn about javadoc, see the reference in the book.
Java programmers tend to use the following conventions. I will try to adhere to them. I don't know if you can ask javac to tell you if you have violated the conventions.
You should all be familiar with proper indenting to show the meaning of the program since Python requires it.
I use the end-of-line style but many prefer the next-line style. You may use either (see the book for examples of next-line style).
I wish I could write a few words or pages (or books) that would show you how to avoid making errors and to find and fix error made by others. The best I can do, is to write programs in class. I will doubtless have errors and we can fix them together.
Homework: 2.11 (copy of problem handed out in class for those w/o 8e).
Essentially all computer languages have a way to indicate that some statements are executed only if a certain condition holds.
Java syntax is basically the same as in C, which is OK but not great.
We have seen four integer types (byte, short, int, and long), two real types (float and double), and a character type (char). These 7 types are primitive.
We have also seen one reference type (the String class). Note the capitalization, which reminds you that String is a class.
Java has one more primitive type (boolean), used to express truth and falsehood.
Mathematicians, when discussing Boolean algebra, capitalize Boolean since it is named after a person, specifically the logician George Boole. Java of course does not capitalize boolean.
Why? Because boolean is a primitive type, not a class.
There are two boolean constants, true and false.
There are 6 comparison operators that are used to compare values in an ordered type (a type in which values are ordered). So we can compare ints with ints, doubles with doubles, and chars with chars. The result of a comparison is a boolean, namely true or false.
If values from different (ordered) types are being compared, one must be converted to the type of the other. Again this can be via coercion or type casting.
int i = 1, j = 2;
boolean x, y = true, z;
x = i == j;   // false
z = i < j;    // true
On the right we see some simple uses of booleans.
The one point to note is the distinction between = and ==
in Java.
The single = is the assignment operator that causes the value on the RHS to be placed into the variable on the LHS.
The double == is the comparison operator that evaluates its LHS and RHS, compares the results, and results in either true or false.
No fun to use booleans before learning about if.
Found in basically all languages.
We will see that the C/Java syntax for if permits the infamous dangling else problem.
if (boolean expression) {
    one or more statements;
}
The simplest form of if statement is shown on the right.
The semantics (meaning) of a one-way if statement is simple and the same as in many languages.
Read
Write a program to read in the coefficients A,B,C of a quadratic equation and print the answers. Handle the cases of positive, zero, and negative discriminant separately.
Let's do this in class. One solution is here.
if (boolean expression) {
    one or more statements;
} else {
    one or more statements;
}
Often if is used to choose between two actions, one when the Boolean expression is true and one when it is false.
The skeleton code for this if-then-else construct is shown on the right. The previous, simpler skeleton is normally called an if-then even though Java and C do not actually employ a then keyword.
As the name suggests, the semantics of an if-then-else is to do the then or the else depending on the if. Specifically,
if (boolean expression)
    exactly one statement

if (boolean expression)
    exactly one statement
else
    exactly one statement

if (boolean expression) {
    one or more statements
} else
    exactly one statement
If any of the three blocks (the then block in the if-then statement, the then block in the if-then-else statement, or the else block in the if-then-else statement) consists of only one statement, its surrounding {} can be omitted.
Experienced Java programmers would almost always omit such {}, but it is probably better for beginners like us to leave them in so that all if-then's and all if-then-else's have the same structure. There is an exception, see below.
On the right we see three of the four possibilities.
What is the remaining possibility?
Ans: The then block has exactly one statement and no {}, while the else block has one or more statements and does have the {}.
if (be1) {           // be is Boolean expr
    if (be2) {
        ss1          // ss is statement(s)
    } else {
        ss2
    }
} else {
    if (be3) {
        ss3
    } else {
        ss4
    }
}
The statement(s) in either the then block or the else block can include another if statement, in which case we say the two if statements are nested.
On the right we see three if statements, with the second and third nested inside the first.
If be1, the Boolean expression of the outside if statement evaluates to true, the second if statement is executed. If instead be1 is false, the third if statement executes.
There are eight possible truth values for the three Boolean expressions, but only four possible actions (ss1–ss4).
Why?
Because for be1==true the value of be3 is irrelevant and for be1==false, the value of be2 is irrelevant.
if (be1) {
    ss1
} else {
    if (be2) {
        ss2
    } else {
        if (be3) {
            ss3
        } else {
            ss_default
        }
    }
}
Often deeply nested if-then-else's have the nesting only in the else part. This gives rise to code similar to that shown on the right.
Be sure you understand why this is not the same as separate (non-nested) if statements.
The trouble with the code shown is that it keeps moving to the right and, should there be many conditions, you either get very wide lines or only have a few columns to use.
This is one situation where it pays to make use of the fact that the {} can be omitted when they enclose exactly one statement. To make use of this possibility, you must realize that an if statement, in its entirety, is only one statement. For example, the entire nested if structure we just saw is only one statement, even though it may have hundreds of statements inside its many {} blocks.
if (be1) {
    ss1
} else if (be2) {
    ss2
} else if (be3) {
    ss3
} else {
    ss_default
}
As a result we can remove the { and } from else { if ... } and convert the above to the code on the right.
Please read the code sequence carefully and understand why it is the same as the one above.
Note, in particular, that the indenting is somewhat misleading as the symmetric placement of ss1, ss2, ss3, and ss_default suggests that they have equal status. In particular, since be1==true implies that ss1 gets executed, you might think that be3==true implies that ss3 gets executed. But that is wrong! Be sure you understand why it is wrong (hint: what if all the boolean expressions are true?).
Indeed this pattern with all the statements in the then block is so common that some languages (but not Java) have a keyword elsif that is used in place of the else if above.
Other languages require an explicit ending to an if statement, most commonly end if. In that case no {} are ever needed for if statements.
Homework: 3.7 and 3.9. Handout for those w/o 8e.
Don't forget the {} when you have more than one statement in the then block or else block.
I recommend that, while you are learning Java, you use {} for each ss even if it is just one statement (except for the else if situation illustrated above).
Don't mistakenly put a ; after the boolean expression right before the then block.
Don't write if (be==true). Instead, write the equivalent if (be). The first form is not wrong, but it is poor style.
if (be1)
    if (be2)
        ss2
else
    ss1
The indenting on the right mistakenly suggests that should be1 evaluate to false, ss1 will be executed. That is WRONG. The else in the code sequence is part of the second (i.e., inner) if. Indeed, the Java (and C) rule is that an else pairs with the nearest unfinished if.
In a language like Python that enforces proper indentation no such misleading information can occur.
// near right: else pairs with the outer if
if (be1) {
    if (be2)
        ss2
} else {
    ss1
}

// far right: else pairs with the inner if
if (be1) {
    if (be2) {
        ss2
    } else {
        ss1
    }
}
On the right, I have added {} to make the meaning clear. The code on the far right has the same semantics as the dangling else code above. There is no ambiguity: the outer then has not yet ended so the else must be part of the inner if.
The code on the near right has the semantics that the original indentation above mistakenly suggested. Again there is no ambiguity: the inner if has ended (it is part of the outer then block), so the else must be part of the outer if.
Read
Start Lecture #5
Homework: The problems from lecture 4 that weren't assigned (3.7 and 3.9).
Let's fix two problems with our previous solution to solving the quadratic equation Ax² + Bx + C = 0.
Let's do this in class; one possible solution is here.
Read.
Let's do this one in class. Handout sheet with tax table. A scan of the table is here.
Like most computer languages, Java permits Boolean expressions to contain so-called "logical operators". These operators have Boolean expressions as both operands and results.
The table on the right shows the four possibilities.
The
not operator is unary (takes one operand) and returns the
opposite Boolean value as result.
The
and operator is binary and returns false unless
when both operands are true.
The (inclusive)
or operator is binary and returns
true unless both operands are false.
That is, the meaning of || is
one or the other or both.
The exclusive
or operator is binary and returns
true if and only if exactly one operand is true.
That is, the meaning of ^ is
one or the other but not both.
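A tiny demonstration of all four logical operators (class name mine):

```java
public class LogicalOps {
    public static void main(String[] args) {
        boolean p = true, q = false;
        System.out.println(!p);      // false: not flips the value
        System.out.println(p && q);  // false: true only when BOTH are true
        System.out.println(p || q);  // true: one or the other or both
        System.out.println(p ^ q);   // true: exactly one operand is true
    }
}
```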
As you know a day is the time it takes the earth to rotate about its axis, and a year is the time it takes for the earth to revolve around the sun.
By these definitions a year is a little less than 365.25 days. To keep the calendars correct (meaning that, e.g., the vernal equinox occurs around the same time each year), most years have 365 days, but some have 366.
A very good approximation, which will work up to at least the year 4000 and likely much further is as follows (surprisingly, we are not able to predict exactly when the vernal equinox will occur far in the future; see wikipedia).
Let's write in class a program that asks for the year and replies by saying if the year is a leap year. One solution is in the book.
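For reference, a minimal sketch of the leap-year test using the usual Gregorian rule (class and method names are mine; the book's solution may differ):

```java
public class LeapYear {
    // A year is a leap year if it is divisible by 4 but not by 100,
    // or if it is divisible by 400.
    public static boolean isLeapYear(int year) {
        return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
    }
    public static void main(String[] args) {
        System.out.println(isLeapYear(2000)); // true  (divisible by 400)
        System.out.println(isLeapYear(1900)); // false (divisible by 100, not 400)
        System.out.println(isLeapYear(2024)); // true  (divisible by 4, not 100)
    }
}
```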
The book has an interesting lottery game. The program generates a random 2-digit number and accepts another 2-digit number from the user.
We will write this in class, but need a new Java library method: Math.random() returns a random number between 0 and 1. The value returned, a double, might be 0, but will not be 1.
One solution is in the book.
Start Lecture #6
We have seen how to use if-then-else to specify one of a number of different actions, with the chosen action determined by a series of Boolean expressions.
Often the Boolean expressions consist of different values for a single (normally arithmetic) expression.
switch (expression) {
    case value1: stmts1
    case value2: stmts2
    ...
    case valueN: stmtsN
    default: def-stmts
}
For example we may want to do one action if i+j is 4, a different action if i+j is 8, a third action if i+j is 15, and a fourth action if i+j is any other value. In such cases the switch statement, shown on the right, is appropriate.
When a switch statement is executed the expression is evaluated and the result specifies which case is executed.
The semantics of switch are somewhat funny. Specifically, given the similarity with if-then-else, one might assume that after stmts1 is executed, control transfers to the end of the switch. However, this is wrong: Unless explicitly directed, control flows from one case to another.
This fall-through behavior resembles the computed goto of Fortran.
switch (expression) {
    case value1:
        stmts1
        break;
    case value2:
        stmts2
        break;
    ...
    case valueN:
        stmtsN
        break;
    default:
        def-stmts
}
It is quite easy to explicitly transfer control from the end of one case to the end of the entire switch. Java, again borrowing from C, has a break statement that serves this purpose.
We see on the right the common form of a switch in which the last statement of stmts1...stmtsN is a break.
When a break is executed within a switch, control
breaks out of the switch.
That is, execution proceeds directly to the end of
the switch.
In particular, the code on the right will execute exactly one
case or the default.
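A small runnable sketch (names mine) showing a switch in which each case ends with a break:

```java
public class SwitchDemo {
    // Maps a few special sums to words; everything else hits the default.
    public static String classify(int sum) {
        String msg;
        switch (sum) {
            case 4:
                msg = "four";
                break;      // without this, control would fall into the next case
            case 8:
                msg = "eight";
                break;
            case 15:
                msg = "fifteen";
                break;
            default:
                msg = "something else";
        }
        return msg;
    }
    public static void main(String[] args) {
        System.out.println(classify(4));   // four
        System.out.println(classify(9));   // something else
    }
}
```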
Homework: 3.11, 3.25, Handout for those w/o 8e.
bool-expr ? expr1 : expr2
if (x==5) y = 2; else y = 3;
y = x==5 ? 2 : 3;
Recall that Java has += that shortens x=x+5 to x+=5. Java has another shortcut, the ternary operator ?...:, that can be used to shorten an if-then-else.
The general form of the so-called
conditional expression is
shown on the top right.
The value of the entire expression is either expr1 or
expr2, depending on the value of the Boolean expression
bool-expr.
For example, the middle right if-then-else can be replaced by the bottom right assignment statement containing the equivalent conditional expression.
Note that this is not limited to arithmetic and can often be used to produce grammatically correct output.
System.out.println("Please input " + N + " number" + (N == 1 ? "." : "s."));
I realize this looks weird; be sure you see how it works.
Homework: Redo 3.7, this time using a switch.
A comparatively recent addition to Java is the C-like printf method. (It was added in Java 2 Standard Edition 5; we are using J2SE 6).
System.out.printf(format, item1, item2, ..., item n);
The first point to make is that printf() takes a variable number of arguments as indicated on the right. The required first argument, which must be a string, indicates how many additional arguments are needed and how those arguments are to be printed.
Wherever the first argument contains a
%, the value
of the next argument is inserted (a double
%% is
printed as a single
%).
System.out.printf("i = %d and j = %d", i, j);
System.out.printf("x = %f and y = %e", x, y);
The first line on the right prints the integer values of i and j.
The second line prints two real values, the second using scientific notation.
The table on the right gives the most common specifiers. Although Java will perform a few conversions automatically (e.g., an integer value will be converted to a string if the corresponding specifier is %s), most conversions are illegal. For example
int i; double x;
int one=1, two=2;
System.out.printf("Both %f and %d are bad\n", i, x);
System.out.printf("%d plus %d is %d\n", one, two, one+two);
You could try to find which conversions are OK, but I very much suggest that instead you do not use any. That is use %f and %e only for floats and doubles; use %d only for the four integer types; use %s only for Strings, etc.
System.out.printf("%d\n%d\n",45,7);
45
7
45
 7
System.out.printf("%2d\n%2d\n",45,7);
System.out.printf("%2d\n%2d\n",45,789);
45
789
Note that %d uses just enough space to print the actual number; it does not pad with blanks. If you print several integers one per line using %d they won't line up properly. For example, the printf() on the right will produce the first output, not the second.
To remedy this problem you can specify a minimum width between the
% and the
key-letter.
For example the second printf() does produce the second
output.
But what would happen if we tried to print 543 using %2d? There are two reasonable choices: print only some of the digits so that the specified width is respected, or use more than the specified width so that the full value is printed.
Java chooses the second option. So the bottom printf() produces the bottom output.
These same considerations apply to the other 5 specifiers in the table: if the value is not as wide as the specified width, the value is right justified and padded on the left with blanks.
For the real number specifiers %f and %e one can specify the precision in addition to the (minimum) width. This is done by writing the width, then a period, and then the precision between the % and the letter.
For example %6.2f means that the floating-point value will be written with exactly 2 digits after the decimal point and, if the result (including a minus sign if needed) is less than 6 characters, the value will be padded with blanks on the left.
As we have seen, when the width is specified (with or without precision), the value is right justified if needed and padded on the left with blanks. This is normally just what you want for numbers, but not for strings.
For all specifiers (numbers, strings, etc) Java supports left justification (padding on the right with blanks) as well: Simply put a minus sign right before the width as in %-9f, %-10.2e, or %-7s.
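A few printf examples illustrating width, precision, and left justification (the surrounding brackets are just there to make the padding visible):

```java
public class PrintfDemo {
    public static void main(String[] args) {
        System.out.printf("[%2d]%n", 45);         // [45]
        System.out.printf("[%2d]%n", 7);          // [ 7]    padded to width 2
        System.out.printf("[%2d]%n", 789);        // [789]   width grows as needed
        System.out.printf("[%6.2f]%n", 3.14159);  // [  3.14] width 6, precision 2
        System.out.printf("[%-6.2f]%n", 3.14159); // [3.14  ] left justified
        System.out.printf("[%-7s]%n", "abc");     // [abc    ] left-justified string
    }
}
```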
The table on the right lists the operators we have seen in decreasing order of precedence. That means that, in the absence of parentheses, if an expression contains two operators from different rows of the table, the operator in the higher row is done first.
The second column gives the associativity of the operators. If an expression contains two operators from the same row and that row has left-to-right associativity, the left operator is done first. Similarly if the row has right-to-left associativity, the right operator is done first. As with precedence, parentheses can be used to override this default ordering.
Not many programmers have the entire table memorized (not to mention the fact that there are other operators we have not yet encountered). Instead, they use parenthesis even when their use might not be necessary due to precedence and associativity.
However, it is wise to remember the precedence and associativity of
some of the frequently used operators.
For example, one should remember that *,/,% are executed before
(binary +,-) and that both groups have left-to-right associativity.
It would clutter up a program to see an expression like ((x*y)/z)/(((-x)+z)-y) rather than the equivalent (x*y/z)/(-x+z-y).
As you know from Python, loops are used to execute the same block
of code multiple times.
There are several kinds of loops:
The while (and do-while) are fairly general;
the for is most convenient when the loops are
counting, but can be used (abused?) in quite general ways.
while (BE) { stmts }
Some early languages (notable FORTRAN) did not have a while loop, but essentially all modern languages (including Fortran) do.
The semantics are fairly simple:
The idea of a while loop is that the body is executed while (i.e, as long as) the Boolean expression is true.
The flowchart on the right illustrates these simple semantics.
Note that if the Boolean expression is false initially, the loop body (the statements in the diagram) are not executed at all. We shall see in the next section a loop in which the body is executed at least once.
Note that, right after the while loop finishes, the BE is FALSE.
Please don't get this wrong.
while ((i = getInput.nextInt()) >= 0) {
    // process the non-negative i
}
For example, when the loop on the right ends (assuming no EOF), the value of i is negative!
Let's write a program that adds two positive numbers just using ++ and --. One solution is here.
This is actually how one defines addition starting with Peano's postulates.
The program picks a random number between 0 and 100 inclusive (101
possibilities).
The users repeatedly guess until they are correct.
For each guess, the program states
higher,
lower,
or
correct.
Let's do in class; one solution is in the book.
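One possible shape for the guessing game (class and method names are mine; the book's solution may differ). Pulling the response logic into its own method keeps the loop short:

```java
import java.util.Scanner;

public class GuessingGame {
    // Returns the program's response to one guess.
    public static String hint(int secret, int guess) {
        if (guess < secret) return "higher";
        if (guess > secret) return "lower";
        return "correct";
    }
    public static void main(String[] args) {
        int secret = (int) (101 * Math.random());  // 0..100 inclusive
        Scanner getInput = new Scanner(System.in);
        String response;
        do {
            System.out.println("Enter a guess (0-100):");
            response = hint(secret, getInput.nextInt());
            System.out.println(response);
        } while (!response.equals("correct"));
    }
}
```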
Start Lecture #7
Remark:
There are four parts to a loop and hence 4 tasks for the programmer to accomplish.
As you have seen in Python and we shall see in Java, loops can be quite varied. For one thing the body can be almost arbitrary. For now let's assume that the body is the quadratic equation solver that we have written in class. Currently it just solves one problem and then terminates. How can we improve it to solve many quadratic equations?
final int n=10;
int i = 0;
while (i<n) {
    // input A, B, C
    // solve one equation
    i++;
}

n=getInput.nextInt();
int i = 0;
while (i<n) {
    // input A, B, C
    // solve one equation
    i++;
}

// input A, B, C
while (A!=0 || B!=0 || C!=0) {
    // solve one equation
    // input A, B, C
}

while (true) {
    // input A, B, C
    if (A==0 && B==0 && C==0) {
        break;
    }
    // solve one equation
}
There are at least four techniques, each of which can be applied to many different problems: hardwire the count, read the count from the user, use a sentinel value tested at the top of the loop (which requires an extra read before the loop), or use an infinite loop with a break when the sentinel is seen.
import java.util.Scanner;
public class Quadratic5 {
    public static void main (String[] Args) {
        System.out.println("Solving quadratic equations Ax^2 + Bx + C");
        // The next line is for hardwiring; we are reading the count
        // final int count = 10;
        int count;
        System.out.println("How many equations are to be solved?");
        Scanner getInput = new Scanner(System.in);
        count = getInput.nextInt();
        while (count-- > 0) {
            System.out.println("Enter real numbers A, B, C");
            double A = getInput.nextDouble();
            double B = getInput.nextDouble();
            double C = getInput.nextDouble();
            double discriminant = B*B - 4.0*A*C;
            double ans;
            if (A == 0) {
                System.out.println("A linear equation; only one root");
                ans = -C/B;
                System.out.println("The one root is " + ans);
            } else if (discriminant < 0) {
                System.out.println("No (real) roots");
            } else if (discriminant == 0) {
                System.out.println("One (double) root");
                ans = -B/(2*A);
                System.out.println("The double root is/are " + ans);
            } else { // discriminant > 0
                double ans1 = (-B + Math.pow(discriminant,0.5))/(2.0*A);
                double ans2 = (-B - Math.pow(discriminant,0.5))/(2.0*A);
                System.out.println("The roots are " + ans1 + " and " + ans2);
            }
        }
    }
}
On the right is the code for a yet further improved quadratic equation solver. This code illustrates several of the points made above.
import java.util.Scanner;
public class SumN {
    public static void main (String[] args) {
        Scanner getInput = new Scanner(System.in);
        double x, sum;
        int i, n;
        while (true) {
            System.out.println("How many numbers do you want to add?");
            n = getInput.nextInt();
            if (n < 0) {
                break;
            }
            System.out.printf("Enter %d numbers: ", n);
            i = 0;
            sum = 0;
            while (i++ < n) {
                x = getInput.nextDouble();
                sum += x;
            }
            System.out.printf("The sum of %d values is %f\n", n, sum);
        }
    }
}
On the right is a simple program to sum n numbers. Since it does not make sense to sum a negative number of numbers, we let n<0 be the sentinel that ends the program.
There are again a few points to note.
The n-and-1/2-times loop: the first part of the body is executed one more time than the second part.
public class UseEOF {
    final static int EOF = -1;
    public static void main(String[] args) {
        int c;
        while ( (c = nextCharOrEOF()) != EOF) {
            System.out.write(c);
        }
        System.out.flush();
    }
    public static int nextCharOrEOF() {
        try {
            return System.in.read();
        } catch (java.io.IOException e) {
            return EOF;
        }
    }
}
On the right is a very simple loop that is terminated by EOF (end-of-file). However, it does use an advanced feature of Java, namely exceptions.
The method (think of function) nextCharOrEOF either returns the next character from System.in or it returns the value EOF.
EOF is not special; it is declared in the program.
The magic is the try/catch mechanism. If the read() method raises an IOException, the exception is caught by nextCharOrEOF and the value EOF is returned instead of the value returned by read(). (System.in.read() itself returns -1, which is exactly our EOF value, when the end of file is reached.)
As you know System.in is normally the keyboard and System.out is normally the display. However, they can be redirected to be an ordinary file in the file system. When System.in is redirected to a file f, input is taken from f instead of the keyboard, and when System.out is redirected to a file g, output goes to g.
For example the program UseEOF, with its mysterious implementation, simply copies all its input to its output.
I copied it to i5 so we can run it (ignore how it works).
If we type simply java UseEOF, then whatever we type will appear on the screen.
If we type instead java UseEOF <f, then the file f will appear on the screen.
Similarly java UseEOF <f >g copies the file f to the file g.
As we have seen a while loop repeatedly performs a test-action sequence: first a BE is tested and then, if the test passes, the body is executed.
do { stmts } while (BE);
In particular, if the test fails initially, the body is not executed at all.
Most of the time, this is just what is wanted; however, there are occasions when we want the body to be first and the test second. In such situations we use a do-while loop.
We see the code skeleton for this loop on the near right and the flowchart on the far right.
int quota = 250;   // hardwired for brevity; cannot be final since we decrement it
int count = 0;
do {
    count++;
} while ((quota -= getInput.nextInt()) > 0);
System.out.printf("Needed %d values.\n", count);
In the code on the right a quota is hardwired for brevity. Then the user inputs a series of numbers (say the dollar value of a sale). When he finally reaches or surpasses the quota, the program ends and reports how many values were needed.
Recall the four components of a loop as stated in 4.2.2
Parts 1, 2, and 4 can all be written in one for statement. After the word for comes a parenthesized list with three components separated by semicolons. The first component is the initialization, the second the test, and the third the update to prepare for the next iteration.
The first code shown on the right presents the for loop corresponding to the example mentioned in the list. Indeed, this is a common usage of for, to have a counter step through consecutive values.
for(i=0; i<10; i++){ body }
for(int i=0; i<10; i++)
for(i=0, j=n; i*j < 100; i++, j--)
for( ; ; )
The second for on the right shows that the loop variable can be declared in the for itself. One effect of this declaration is that the variable, i in this case, is not visible outside the loop. If there was another declaration of i inside another loop, the two variables named i would be unrelated.
It is also possible to initialize and update more than one variable as the third example shows.
Finally, the various components can be omitted, in which case the omitted item is essentially erased from the flowchart. For example, if the first component is omitted, there is no initialization performed; if the middle item is omitted there is no test (it is better to say that the test always yields true); and, if the third item is omitted there is no update. So the last, rather naked for will just keep executing the body until some external reason ends the loop (e.g., a break).
Homework: 4.5, 4.1 (the book forgot to show inputing the number of numbers), 4.9
They are equivalent. Any loop written with one construct can be written with either of the other two.
The for loop puts all the non-body parts in one place. If these pieces are small, e.g., for(i=0;i<n;i++), this is quite convenient. If they are large and/or complicated, the while form is generally easier to read.
Much of this is personal preference.
I don't believe there is a
right or
wrong answer.
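As an illustration of the equivalence, here is the same counting loop written both ways (names mine); the for header gathers the initialization, test, and update that the while version spreads out:

```java
public class LoopEquivalence {
    // Sum 1..n with a for loop.
    public static int sumFor(int n) {
        int sum = 0;
        for (int i = 1; i <= n; i++)
            sum += i;
        return sum;
    }
    // The same computation with a while loop.
    public static int sumWhile(int n) {
        int sum = 0;
        int i = 1;            // initialization
        while (i <= n) {      // test
            sum += i;         // body
            i++;              // update
        }
        return sum;
    }
    public static void main(String[] args) {
        System.out.println(sumFor(10));   // 55
        System.out.println(sumWhile(10)); // 55
    }
}
```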
We have already done a nested loop in section 4.2.4.
for (int i=1; i<n-1; i++) {
    for (int j=i+1; j<n; j++) {
        // if the ith element exceeds the jth,
        // swap them
    }
}
It is essentially the same as in Python or any other programming language. The inner loop is done repeatedly for each execution of the outer loop.
For example, once we learn about arrays, we will see that the code on the right is a simple (but, alas, inefficient) way to sort an array.
This concerns the hazards of floating point arithmetic. Although we will not be emphasizing numeric errors, two general points can be made.
Given two positive integers, the book just tries every number from one up to the smaller input and chooses the largest one that divides both of the inputs.
Let's try a different way to get the GCD. Keep subtracting the smaller from the larger until you get a zero, at which point the nonzero value is the GCD (if they are equal, call one of them the larger).
(54, 36) → (18, 36) → (18, 18) → (18, 0).
(7, 6) → (1, 6) → (1, 5) → ... → (1, 1) → (1, 0).
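The subtraction method traced above can be coded directly (class and method names are mine):

```java
public class Gcd {
    // Repeatedly subtract the smaller from the larger until one value is zero;
    // the remaining nonzero value is the GCD.
    public static int gcd(int a, int b) {
        while (a != 0 && b != 0) {
            if (a >= b) a -= b;   // a is the (possibly tied) larger
            else        b -= a;
        }
        return a + b;             // one of them is zero; the other is the GCD
    }
    public static void main(String[] args) {
        System.out.println(gcd(54, 36));  // 18
        System.out.println(gcd(7, 6));    // 1
    }
}
```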
Start Lecture #8
Remark: Lab1 part three 40% (we will be covering methods today).
public static int second(int a, int b, int c, int d);
This method returns the second largest value of its four parameters.
Remark: Tutor/E-tutor available.
Anagha Ashok Pande <anagha.pande@nyu.edu> is the tutor/e-tutor for this class. Anagha is reading the mailing list and may respond to questions there and also via the direct email addr above. In addition, Anagha.
I hate to take problems straight from the book since you can read it there, but this is too good to pass up.
Calculate π by choosing random points in a square and seeing how many are inside a circle.
Do in class. One solution is here.
We have already seen break for
breaking out of a
switch statement.
When break is used inside a loop, it does the analogous thing: namely, it breaks out of the loop. That is, execution transfers to the first statement after the loop. This works for any of the three forms of loops we have seen: while, do-while, or for.
In the case of nested loops, the break just breaks out of the inner one.
The continue statement can be used only inside a loop.
When executed, it continues the loop by ending the current iteration of the body and proceeding to the next iteration.
Specifically, in a while or do-while loop control transfers directly to the loop's Boolean expression; in a for loop control transfers to the update component (and then the test).
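A small sketch (names mine) of continue ending an iteration early:

```java
public class ContinueDemo {
    // Sums only the odd values from 1 to n, skipping even ones with continue.
    public static int sumOdds(int n) {
        int sum = 0;
        for (int i = 1; i <= n; i++) {
            if (i % 2 == 0)
                continue;   // end this iteration; control goes to i++
            sum += i;
        }
        return sum;
    }
    public static void main(String[] args) {
        System.out.println(sumOdds(10)); // 25 = 1+3+5+7+9
    }
}
```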
We want to display the first N primes. Unlike the book, we will
Although the second improvement does make the program much faster for large primes, it is still quite inefficient compared to the best methods, which are quite complicated and use very serious mathematics.
Start it in class.
Homework: Write this program in Java and test it for N=25.
Like all modern programming languages, Java permits the user to abstract a series of actions, give it a name, and invoke it from several places.
Many programming languages call the abstracted series of actions a procedure, a function, or a subroutine. Java, and some other programming languages, call it a method.
We have seen a method already, namely main().
We know how to do this since we have repeatedly defined the main method. The general format of a method definition is:
modifiers returnType name (list of parameters) { body }
For now we will always use two modifiers
public static.
The main() method had returnType void since it did not return a value. It also had a single argument, an array of Strings.
The returnType and parameters will vary from method to method.
public static int myAdd(int x, int y) { return x+y; }
On the right we see a very simple method that accepts two int parameters and returns an int result. Note the return statement, which ends execution of the method and optionally returns a value to the calling method.
The type of the value returned will be the returnType named in the method definition.
import java.util.Scanner;
public class demoMethods {
    public static void main(String[] args) {
        Scanner getInput = new Scanner(System.in);
        int a = getInput.nextInt();
        int b = getInput.nextInt();
        System.out.println(myAdd(a,b));
    }
    public static int myAdd(int x, int y) {
        return x+y;
    }
}
We have called a number of methods already. For example getInput.nextInt(), Math.pow(), Math.random(). There is very little difference between how we called the pre-defined methods above and how we call methods we write.
One difference is that if we write a method and call it from a sibling method in the same class, we do not need to mention the class name.
On the right we see a full example with a simple main() method that calls the myAdd() method written above.
Method such as myAdd() that return a value are used in expressions the same way as variables and constants are used, except that they contain a (possibly empty) list of parameters. They act like mathematical functions. Indeed, they are called functions in some other programming languages.
Note that the names of the arguments in the call do NOT need to agree with the names of the parameters in the method definition. For example, how could we possibly have gotten the parameter names we used with Math.pow() correct, since we have never seen the definition?
The LIFO (Last In, First Out) semantics of method invocation (f calls g and then g calls h) means that, when h returns, control goes back to g, not to f.
Method invocation and return achieves this semantics as follows.
Among the information saved at each call is the return address (the address in f where g should return to). The saved information is called a stack frame, or simply frame.
Homework: 5.1, 5.3.
import java.util.Scanner;
public class demoMethods {
    public static void main(String[] args) {
        Scanner getInput = new Scanner(System.in);
        int a = getInput.nextInt();
        int b = getInput.nextInt();
        printSum(a,b);
    }
    public static void printSum(int x, int y) {
        System.out.println(x+y);
    }
}
Methods that do not return a value (technically their return type is void) are invoked slightly differently. Instead of being part of an expression, void methods are invoked as standalone statements. That is, the invocation consists of the method name, the argument list, and a terminating semicolon.
An example is the printSum() method, which is invoked by main() in the example on the right.
Homework: 5.5, 5.9.
Referring to the example above, the arguments a,b in main() are paired with the parameters x,y in printSum in the order given: a is paired with x and b is paired with y.
The result is that the x and y are initialized to the current values in a and b. Note that this works fine if an argument a or b is a complex expression and not just a variable. So printSum(4,a-8) is possible.
public class demoPassByValue {
    public static void main(String[] args) {
        int a = 5;
        int b = 6;
        int c = 7;
        tryToAdd(a,b,c);
    }
    public static void tryToAdd(int x, int y, int z) {
        x = y + z;
    }
}
The parameters, however, must be variables.
Java is a pass-by-value language. This means that on invocation the values of the arguments are transmitted to the corresponding parameters but, at method return, the final values of the parameters are not, repeat NOT, sent back to the arguments.
For example the code on the right does not set a to 13.
When we learn about objects and their reference semantics, we will need to revisit this section.
Programs are often easier to understand if they are broken into smaller pieces. When doing so, the programmer should look for self-contained, easy to describe pieces.
public class demoModularizing {
    public static void main(String[] args) {
        // input 3 points (x1,y1), (x2,y2), (x3,y3)
        double perimeter = length(x1,y1,x2,y2) +
                           length(x2,y2,x3,y3) +
                           length(x3,y3,x1,y1);
        System.out.println(perimeter);
    }
    public static double length(double x1, double y1,
                                double x2, double y2) {
        return Math.sqrt((x2-x1)*(x2-x1) + (y2-y1)*(y2-y1));
    }
}
Another advantage is the ability for
code reuse.
In the very simple example on the right, we want to calculate the
perimeter of a triangle specified by giving the (x,y) coordinates of
its three vertices.
Of course the perimeter is the sum of the lengths of the three sides and the length of a line is the square root of the sum of the squares of the difference in coordinate values.
Without modularization, we would have one large formula that was the sum of the three square roots.
As written on the right, I abstracted out the length calculation, which in my opinion, makes the code easier to understand.
Other examples are the methods in the
Math class.
Imagine writing a geometry package and having to code the square
root routine each time it was used.
Read.
A very nice feature of Java and other modern languages is the ability to overload method names.
public class DemoOverloading {
    public static void main(String[] args) {
        System.out.printf("%d %f %f\n",
                          max(4,8), max(9.,3.), max(9,3.));
    }
    public static int max(int x, int y) {
        return x>y ? x : y;
    }
    public static double max(double x, double y) {
        return x>y ? x : y;
    }
    public static double max(int x, double y) {
        return ((double) x)>y ? x : y;
    }
}
Of course if you use a method named max and another named min each with two integer arguments, Java is not confused and invokes max when you write max and invokes min when you write min.
All programming languages do that.
But now look on the right and see three methods all named max: one returning the maximum of two ints, one returning the maximum of two doubles, and the third returning (as a double) the max of an int and a double.
When compiling a method definition javac notes
the
signature of the method, which for overloading consists
of the method name, the number of parameters, and the type of each
parameter.
When an overloaded method is called, the method chosen is the one with signature matching the arguments given at the call site.
Technical fine points: Java would coerce a 9 to a double if needed. But if the overloaded method has two signatures, one requiring coercion and the other not, the second is chosen. If the overloaded method has two signatures, each with two parameters, with types such that one signature would require coercing the first argument and the other would require coercing the second argument, the program does not compile.
The scope of a variable is the portion of the program in which the variable can be referenced.
The rules for scope vary from language to language.
In Java a block is a group of statements enclosed in {}. For example the body of most loops, the body of most then and else clauses of an if, and the body of a method are all blocks. The exceptions are one statement bodies of then clauses, else clauses, and loops that are written without {}.
In Java the scope of a local variable begins with its declaration and continues to the end of the block containing the declaration. We have seen the one exception: if the initialization portion of a for statement contains a declaration, that local variable has scope including the body of the loop.
On the right we see two skeletons. The near right skeleton contains two blocks block1 and block2, neither of which is nested inside the other.
something {          something {
    block1               start of block3
}                        something {
something {                  block4
    block2               }
}                    }
In this case it is permitted to declare two variables with the same name, one in block1 and one in block2. These two variables are unrelated. That is, the situation is the same as if they had different names.
On the far right we again see two blocks, but this time block4 is nested inside block3. In this situation it is not permitted to have the same variable name declared in both blocks. Such a program will not compile.
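A sketch (names mine) illustrating these scope rules: the two sibling blocks may each declare their own x, and a variable declared in a for header is invisible after the loop:

```java
public class ScopeDemo {
    public static int blockValues() {
        int result;
        {   // block1
            int x = 1;        // this x's scope ends at the closing }
            result = x;
        }
        {   // block2: not nested inside block1, so reusing the name is legal
            int x = 2;        // an unrelated variable that happens to share the name
            result += x;
        }
        // A loop variable declared in a for header is local to the loop:
        for (int i = 0; i < 3; i++) { /* i visible only here */ }
        // System.out.println(i);  // would NOT compile: i is out of scope
        return result;
    }
    public static void main(String[] args) {
        System.out.println(blockValues());  // prints 3
    }
}
```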
We have already used a few methods from the Math class.
In addition to various methods, some of which are described below, the Math class defines two important mathematical constants: E, the base of the natural logarithm, and PI, the ratio of a circle's circumference to its diameter.
public static double sin(double theta) public static double cos(double theta) public static double tan(double theta) public static double asin(double x) public static double acos(double x) public static double atan(double x)
public static double toDegrees(double radians) public static double toRadians(double degrees)
The six routines on the upper right compute the three basic trig functions and their inverses. In all cases, the angles are expressed in radians.
The two below are used to convert to and from degrees.
public static double pow(double a, double b) public static double sqrt(double a)
public static double log(double x) public static double exp(double x)
public static double log10(double x)
The first routine on the right computes a^b. Since the special case of square root, where b=0.5, is so widely used, it has its own function, which is given next.
Mathematically, logs and exponentials based on Math.E are more important than the corresponding functions using base 10. They come next.
The base 10 exponential is handled with pow, but there is a special function for the base 10 log as shown.
Given a floating point value, one can think of three integer values that correspond to this floating point value.
public static double floor(double x) public static double ceil(double x) public static double rint(double x)
The Math class has three methods that correspond directly to these mathematical functions, but return a value of type double. They are shown on the upper right. For the third function, ties are resolved toward the even number, so rint(-1.5)==rint(-2.5)==-2.0.
public static int round(float x) public static long round(double x)
In addition Math has an overloaded method round() that rounds a floating point value to an integer value. The two round methods are distinguished by the type of the argument (as normal for overloaded methods). Given a float, round returns an int. Given a double, round returns a long.
This choice prevents the loss of precision, but of course there are many floats and doubles (e.g., 10E30) that are too big to be represented as an int or long.
These three methods are heavily overloaded: they can be given int, long, float, or double arguments (or, for min and max, one argument of one type and the other of another type).
We have already used Math.random() several times in our examples. As mentioned random() returns a double x satisfying 0≤x<1.
To obtain a random integer x satisfying m≤x<n you would write m+(int)((n-m)*Math.random()). For example, to get a random integer 10≤x<100, you write 10+(int)(90*Math.random()).
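Packaged as a method (name mine), the formula reads:

```java
public class RandInt {
    // Returns a random integer x with m <= x < n (assumes m < n).
    public static int randInt(int m, int n) {
        return m + (int) ((n - m) * Math.random());
    }
    public static void main(String[] args) {
        for (int k = 0; k < 5; k++)
            System.out.println(randInt(10, 100));  // each value is in 10..99
    }
}
```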
Let's do one slightly different from the book's.
Print N (assumed an even number) random characters, the odd ones lower case and the even ones upper case.
public class RandomChars {
    public static void main(String[] args) {
        final int n = 1000;   // assumed EVEN
        for(int i=0; i<n/2; i++) {
            System.out.printf("%c", pickChar('a'));
            System.out.printf("%c", pickChar('A'));
        }
    }
    public static char pickChar(char c) {
        return (char)((int)(c)+(26*Math.random()));
    }
}
The program on the right is surprisingly short. The one idea used to create it was the choice of the pickChar() method. This method is given a character and returns a random character between the given argument and 26 characters later. So, when given 'a', it returns a random lower case character and, when given 'A', it returns a random upper case character.
Remember that in Unicode lower case (so called latin) letters are contiguous as are upper case letters.
Be sure you understand the value returned by pickChar().
Start Lecture #9
Homework: 5.17, 5.35.
One big advantage of methods is that users of a method do not need to understand how the method is implemented. The user needs to know only the method signature (parameters, return value, thrown exceptions) and the effect of the method.
Although I remember enough calculus to derive the Taylor series for sine, that might not be the best way to implement sin(). Moreover, I would need to work very hard to produce an acceptable implementation of atan(). Nonetheless, I would have no problem writing a program accepting a pair (x,y) representing a point in the plane and producing the corresponding (r,θ) representation. To do this I need to just use atan, not implement it.
As mentioned previously, when given a problem that by itself is too complicated to figure out directly, it is often helpful to break it into pieces. A very simple example was in the previous section. Separating out pickChar() made the main program easy and pickChar itself was not very hard.
We haven't done anything large enough to illustrate top-down design in all its glory. The idea is to keep breaking up the problem. That is, you first specify a few methods that would enable you to write the main() method.
Then for each of these new methods M you define other submethods that are used to implement M, and the process repeats.
The figure on the right (a very slightly modified version of figure 5.12 from the 8e) shows this design philosophy applied to the problem of printing a calendar for any given month.
You can guess the functionality of most of the methods from their name (an indication of good naming). Only getTotalNumberOfDays requires explanation. It calculates the number of days from 1 January 1800 to the first day of the month read from the input.
Given a fleshed out design as in the diagram, we now need to implement all the boxes. There are two styles: top-down and bottom-up.
In the first, you perform a pre-order traversal of the design. That is, you start by implementing the root calling the (unimplemented) child methods as needed.
In order to test what you have done before you are finished, you implement stub versions of the methods that have been called but not yet implemented. For example, the stub for a void method such as printMonthBody might do nothing or might just print something like printMonthBody called.
Stubs for non-void methods need to return a value of the correct type, but that value need not be correct.
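For example, stubs for two of the calendar methods might look like the sketch below. The signatures are guesses for illustration (the actual parameters in printCalendar.java may differ); what matters is the pattern: void stubs just announce themselves, non-void stubs return a placeholder of the correct type.

```java
public class CalendarStubs {
    // Stub for a void method: does nothing useful, just reports the call.
    public static void printMonthBody(int year, int month) {
        System.out.println("printMonthBody called");
    }

    // Stub for a non-void method: returns a value of the correct type
    // (int), but the value is a placeholder, not the correct count.
    public static int getTotalNumberOfDays(int year, int month) {
        return 0;
    }

    public static void main(String[] args) {
        printMonthBody(2024, 3);
        System.out.println(getTotalNumberOfDays(2024, 3));
    }
}
```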
In a bottom-up approach, you perform a post-order traversal of the design tree. That is, you only implement boxes all of whose children have been implemented (we are ignoring recursive procedures here). In the beginning this means you implement a leaf of the tree.
Since you implement the main program last, you need to write test programs along the way to test what you have implemented so far.
Read. In particular read over the full implementation of printCalendar.java
Arrays are an extremely important and popular data structure that are found in essentially all programming languages. An array contains multiple elements all of the same type.
In this chapter we restrict ourselves to a 1-dimensional (1D) array, which is essentially a list of values of a given type. The next chapter presents the more general multi-dimensional array.
In Java the ith element of a 1D array named myArray is referred to as myArray[i]. Note the square brackets [].
main(String[] args)
In the example on the right, which we have used in every program we have written, the only parameter args is a 1D array of Strings.
Note the syntax, first the base type (the type of each element of the array), then [], and finally the name of the array.
double[] y;
When the array is not a parameter as above, but is instead a locally declared variable, the syntax is essentially identical and is shown on the right. The only difference being a semicolon added at the end.
You may recall that early in the course I sometimes wrote String args[] and noticed that it worked as well as String[] args. In fact this has been added to Java just so that C language programmers would be comfortable. However, the String[] args format is preferred.
There is an important difference between declaring scalars as we have done many times previously and declaring an array. When the scalar is declared, it can be used (i.e., given a value). For an array another step is needed.
double[] x; x = new double[10];
double[] x = new double[10];
The declaration double[] x, declares x but does not reserve space for any of the elements x[i]. This must either be done separately as shown on the above right or written together as shown below the line.
The new operator allocates space, in the example on the right, enough space for 10 doubles. The effect is that the array x can hold 10 elements.
In Java the first element always has index 0, so x consists of x[0], x[1], ..., x[9].
The size of an array is fixed; it cannot be changed.
To retrieve the size of a 1D array is easy, just write name.length. For example x.length would be 10 for the example above.
Another difference between arrays and scalars is that when the array is created (using the new operator), initial values are given to each element.
for (int i=0; i<x.length; i++) { x[i] = 34; }
As suggested above, elements of a 1D array are accessed by giving the array name, then [, then a value serving as index, and finally a ]. For example, the code on the right, sets every element of the array x to 34.
Note the use of .length.
Creating an integer array containing the values 1, 8, 29, and -2 is logically a three step procedure.
int[] A; A = new int[4];
int[] A = new int[4];
int[] A; A = new int[4]; A[0]=1; A[1]=8; A[2]=29; A[3]=-2;
int[] A = {1,8,29,-2};
We have already seen how the first two steps can be combined into one Java statement. For example see the first two sections of code on the right.
In fact the Java programmer can write one statement that combines all three steps. The third code group on the right can be written much more succinctly as shown on the fourth group.
Note that the usage of curly brackets as shown is restricted to initialization and a few other situations, you cannot use {1,8,29,-2} in other places.
public class Test {
    public static void main(String[] args) {
        int[] a = new int[10];
        int[] b = new int[10];
        int[] c = new int[10];
        for (int i = 0; i < a.length; i++) {
            b[i] = 2*i;
            c[i] = i + 100;
            a[i] = b[i] + c[i];  // 3i+100
            System.out.printf("a[%d]=%d  b[%d]=%2d  c[%d]=%d\n",
                              i, a[i], i, b[i], i, c[i]);
        }
    }
}
Very often arrays are used with loops, one iteration of the loop corresponding to one entry in the array.
In particular for loops are often used since they are the most convenient for stepping through a range of values. The first entry of the array is always index 0 and the last entry of an array A is A.length-1.
For example the simple code on the right steps through three arrays, setting the value of two of them and calculating from these values the value of the third.
We have seen several short-cuts provided by Java for common situations. Here is another. If you want to loop through an array accessing (but not changing) the ith element during the ith iteration, you can use the for-each variant of a for loop.
for(int i=0; i<a.length; i++) { | for(double x: a) { use (NOT modify) a[i] | use (NOT modify) x } | }
Assume that a has been defined as an array of doubles. Then instead of the standard for loop on the near right, we can use the shortened, so-called for-each variant on the far right.
import java.util.Scanner;

public class StdDev {
    public static void main(String[] args) {
        Scanner getInput = new Scanner(System.in);
        System.out.println("How many numbers do you have?");
        final int n = getInput.nextInt();
        System.out.printf("Enter the %d numbers\n", n);
        double[] x = new double[n];
        double sum = 0;
        for (int i = 0; i < n; i++) {
            x[i] = getInput.nextDouble();
            sum += x[i];
        }
        double mean = sum / n;
        double diffSquared = 0;  // std dev is sqrt(diffSquared/n)
        for (double z : x) {
            diffSquared += Math.pow(mean-z, 2);
        }
        System.out.printf("Mean is %f; standard deviation is %f\n",
                          mean, Math.sqrt(diffSquared/n));
    }
}
Read
Given n values, x0, ..., xn-1, the mean (often called average and normally written as μ) is the sum of the values divided by n.
The variance is the average of the squared deviations from the mean.
That is, [ (x0-μ)2 + ... + (xn-1-μ)2 ] / n
A Java program to calculate these two values is on the right.
Note that the first loop, which modifies the x array, must use the conventional for loop; whereas the second loop, which only uses the x array, can use the for-each variant.
public class DeckOfCards {
    public static void main(String[] args) {
        int[] deck = new int[52];
        String[] suits = {"Spades", "Hearts", "Diamonds", "Clubs"};
        String[] ranks = {"Ace", "2", "3", "4", "5", "6", "7", "8",
                          "9", "10", "Jack", "Queen", "King"};

        // Initialize cards
        for (int i = 0; i < deck.length; i++)
            deck[i] = i;

        // Shuffle the cards
        for (int i = 0; i < deck.length; i++) {
            // Generate an index randomly
            int index = (int)(Math.random() * deck.length);
            int temp = deck[i];
            deck[i] = deck[index];
            deck[index] = temp;
        }

        // Display the shuffled deck
        for (int i = 0; i < 4; i++) {
            String suit = suits[deck[i] / 13];
            String rank = ranks[deck[i] % 13];
            System.out.println("Card number " + deck[i] + ": "
                               + rank + " of " + suit);
        }
    }
}
Let's look at this example from the book with some care. I downloaded it from the companion web site. You can download all the code examples.
The idea is to have a deck of cards represented by an array of 52 integers, the ith entry of the array represents the ith card in the deck.
How does an integer represent a card?
First of all the integers are themselves between 0 and 51. Given an integer C (for card), we divide C by 13 and look at the quotient and remainder. The quotient (from 0..3) represents the suit of the card and the remainder (from 0..12) represents the rank.
Look at the String arrays; they give us a way to print the name of the card given its integer value (by using the quotient and remainder).
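To see the quotient/remainder decoding in isolation, here is a small sketch (the helper name cardName is mine) that maps a card number 0..51 back to its name using the same two String arrays:

```java
public class CardCode {
    static final String[] SUITS = {"Spades", "Hearts", "Diamonds", "Clubs"};
    static final String[] RANKS = {"Ace", "2", "3", "4", "5", "6", "7",
                                   "8", "9", "10", "Jack", "Queen", "King"};

    // Decode a card number 0..51: the quotient C/13 selects the suit,
    // the remainder C%13 selects the rank.
    public static String cardName(int c) {
        return RANKS[c % 13] + " of " + SUITS[c / 13];
    }

    public static void main(String[] args) {
        System.out.println(cardName(0));   // first card of the first suit
        System.out.println(cardName(13));  // first card of the second suit
        System.out.println(cardName(51));  // last card of the last suit
    }
}
```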
The deck is initialized to be in order, starting with the Ace of Spades and proceeding down to the King of Clubs (Liang had Clubs and Diamonds reversed).
Notice how the deck is shuffled: The card in deck[i] is swapped with one from a random location. This can move a single card multiple times. Notice the 3 statement sequence used to swap two values.
Liang printed the first 4 cards; I print the entire deck.
The last loop could be written as a one-liner.
for (int C : deck) System.out.printf("Card number %2d: %s of %s\n", C, ranks[C%13], suits[C/13]);
Serious business. We can no longer put off learning about references.
Consider the situation on the near right. We have created two arrays each holding 6 integers. The first has been initialized, the second has not. The goal is to make the second a copy of the first.
Note that the array name is not the array itself. Instead it points to (technically, refers to) the contents of the array. One consequence is that the size of Array1 does not depend on the number of elements in the array, which turns out to be quite helpful later on.
If we simply execute Array1 = Array2;, we get the situation in the upper right. Both arrays refer to the same content. In this state, changing the contents of one, changes the other.
If the goal is to get the situation in the lower right, you need a loop to copy each entry (or use some built in method that itself has a loop).
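The two situations can be demonstrated in a few lines. In this sketch (the helper name copy is mine), c gets only a copy of the reference, so it sees later changes to a; b gets an element-by-element copy, so it does not:

```java
public class ArrayCopy {
    // Element-by-element copy: allocates a new array and copies each cell.
    public static int[] copy(int[] a) {
        int[] b = new int[a.length];
        for (int i = 0; i < a.length; i++)
            b[i] = a[i];
        return b;
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3, 4, 5, 6};
        int[] c = a;        // reference copy: c and a share the same cells
        int[] b = copy(a);  // genuine copy: b has its own cells
        a[0] = 99;
        System.out.println(c[0]);  // c sees the change through the shared cells
        System.out.println(b[0]);  // the copy is unaffected
    }
}
```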
int a, b; a = 5; b = a; b = 10;
Look at the simple code on the right. According to the picture with arrays, after we execute b=a;, both a and b will refer to the same value, namely 5. Then when we execute b=10;, both a and b will refer to the same value, namely 10.
But this does NOT happen. Instead, b==10, but a==5 as we expected.
Why?
The technical answer is that primitive types, for example int, have value semantics; whereas, arrays (and objects in general) have reference semantics.
Looking at the diagram on the right we see that the array Array1 refers to (or points at) the actual array where the values reside; whereas, the int a and the int b ARE the values 5 and 10. There are no pointers or references involved.
When you change a or b, you change the value; when you change Array1, you change the reference (pointer).
Liang first talks about passing the entire array and then about passing individual members. I will reverse the order since I believe passing individual members is easier.
There is very little to say about passing an element of a one dimensional array as an argument. The element is simply a value of the base type of the array.
public static void printSum(int x, int y) { System.out.println(x+y); }
On the right is the simple printSum method that we saw earlier. It has two integer parameters. If our main method has a declaration int[] A=new int[10], then we can write, printSum(A[4],A[1]). The printSum() method will add the two values and print the sum just as if it was invoked via printSum(8,2);
public static void printArray(int[] A) {
    for (int i : A) {
        System.out.println(i);
    }
}
The printArray() method on the right has an (entire) int array as parameter. Calling the method is easy: printArray(B) where B is a single-dimensional array of integers. Note that printArray can handle any size array.
The tricky part is the melding of pass by value method invocation and reference semantics for arrays, which we address next.
Assume we have a method, such as printArray that has an array parameter A and that we call it with an array argument named B.
At the point of the method call the value in B is copied to A, but, as we know, when the method returns any new value in A is NOT copied back to B. This rule still holds for arrays; no change.
The diagram at the right shows the configuration when the method is called. Even if the method changes the value in A, the value in B will not change. It will remain a pointer to the 6 element structure shown.
But now consider what happens if the called method executes A[0]+=1;. This changes not the value in B (a pointer) but instead changes the value in a cell pointed to by B. Hence when the method returns, assuming nothing else happened, the value in B[0] has been increased.
In summary, the call above cannot change the value of B, but it can change the value of an individual B[i].
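The summary above can be checked directly. In this sketch (method names are mine), reassign tries to change the reference itself, which is invisible to the caller, while bumpFirst changes a cell that both the parameter and the argument point to:

```java
public class PassDemo {
    // The parameter A receives a COPY of the reference in B,
    // so pointing A at a new array is invisible to the caller.
    public static void reassign(int[] A) {
        A = new int[]{7, 7, 7};
    }

    // Changing a cell that A points to IS visible to the caller,
    // because A and B refer to the same cells.
    public static void bumpFirst(int[] A) {
        A[0] += 1;
    }

    public static void main(String[] args) {
        int[] B = {10, 20, 30};
        reassign(B);
        System.out.println(B[0]);  // unchanged by reassign
        bumpFirst(B);
        System.out.println(B[0]);  // incremented by bumpFirst
    }
}
```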
Java also has anonymous arrays, i.e., arrays without a name. So you can write printArray(new int[]{1,5,9});
Start Lecture #10
Remark: The username for the homework solutions is introCS (case sensitive). The password is [to be given orally in class].
Homework: 6.1. Write a program that reads student scores, finds the best score, and then assigns grades based on the following scheme.
public static int[] reverse(int[] a) {
    int[] b = new int[a.length];
    for (int i = 0; i < a.length; i++) {
        b[a.length-1-i] = a[i];
    }
    return b;
}
We have seen how arrays can be passed into a method. They can also be the result of a method as shown on the right. Note the following points about reverse (which returns an array that has the same elements as its input but in reverse order).
Remark: Lab1 part4 assigned.
Read
Homework: 6.3 Write a program that reads integers between 1 and 100 and counts the occurrences of each. Assume the input ends with 0. (Hint: Declare a counts array of 100 integers. When you read in x, execute counts[x]++.)
import java.util.Scanner;

public class Histogram {
    public static void main(String[] args) {
        System.out.printf("How many random numbers? ");
        Scanner getInput = new Scanner(System.in);
        int numRandom = getInput.nextInt();
        int[] count = {0,0,0,0,0,0,0,0,0,0};
        double[] limit = {0.0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1.0};
        for (int i = 0; i < numRandom; i++) {
            double y = Math.random();
            for (int j = 0; j < 10; j++) {
                if (y < limit[j+1]) {  // one will succeed
                    count[j]++;
                    break;
                }
            }
        }
        for (int j = 0; j < 10; j++) {
            System.out.printf("The number of random numbers " +
                              "between %3.1f and %3.1f is %d\n",
                              limit[j], limit[j+1], count[j]);
        }
    }
}
If Math.random() is any good, about 1/10 of the values should be less than 0.1, about 1/10 between 0.1 and 0.2, etc.
The program on the right tests that out by taking a bunch of numbers returned by Math.random() counting how many fall in each 0.1 range.
Note the following points.
The program can be downloaded from here.
Let's compile and run it for different numbers of random numbers and see what happens.
public class VarArgsDemo {
    public static void main(String[] args) {
        printMax(34, 3, 3, 2, 56.5);
        printMax(new double[]{1, 2, 3});
    }

    public static void printMax(double... numbers) {
        if (numbers.length == 0) {
            System.out.println("No argument passed");
            return;
        }
        double result = numbers[0];
        for (int i = 1; i < numbers.length; i++)
            if (numbers[i] > result)
                result = numbers[i];
        System.out.println("The max value is " + result);
    }
}
We have used System.out.printf() which has a variable number of arguments and the types of the arguments cannot be determined until run time.
I don't believe you can write printf() in Java since Java does not support such a general version of varargs. Of course that does not prevent you from using printf().
Java does permit the last parameter to be of varying length but all of the arguments corresponding to that parameter must be of the same type. Java treats the parameter as an array.
The example on the right is from the book.
Pay attention to the parameter of printMax. Java uses ellipses to indicate varargs. In this case a variable number of double arguments are permitted and will all be collected into the array parameter declared double... numbers.
Other non-vararg parameters could have preceded the double... numbers parameter.
Homework: 6.13. Write a method that returns a random number between 1 and 54, excluding the numbers passed in the arguments. The method header is specified as follows:
public static int getRandom(int... numbers)
public static int search(int[] A, int val)
Given an array A and a value val, find an index i such that A[i]==val.
What if there is more than one i that works?
Normally, we just report one of them.
What if no i works?
We must indicate this in some way, often we return -1, which cannot be an index (all Java arrays have index starting at 0).
What if we want to search integers and also search reals?
We define two overloaded searches.
public static int linearSearch(int[] A, int val) {
    final int NOT_FOUND = -1;
    for (int i = 0; i < A.length; i++)
        if (A[i] == val)
            return i;
    return NOT_FOUND;
}
There is an obvious solution: Try A[0], if that fails try A[1], etc. If they all fail return -1. This is shown on the right.
The only problem with this approach is that it is slow if the array is large. If the item is found, it will require on average about n/2 loop iterations for an array of size n and if the item is not found, n iterations are required.
Homework: 6.15. Write a method to eliminate the duplicate values in the array using the following method header
public static int[] eliminateDuplicates(int[] numbers)
Write a test program that reads in ten integers, invokes the method, and displays the result.
mid = (hi + lo) / 2;      // 1.
if (A[mid] == val)
    return mid;           // 2.
else if (A[mid] < val)
    lo = mid + 1;         // 3.
else // A[mid] > val
    hi = mid - 1;         // 4.
public static int binarySearch(int[] A, int val) {
    final int NOT_FOUND = -1;
    int lo = 0;
    int hi = A.length - 1;
    int mid = (hi+lo)/2;
    while (lo <= hi) {    // 5
        // above if-then-else
    }
    return NOT_FOUND;     // 6
}
If the array is unsorted, there is nothing better than the above. But of course if we are doing many searches it pays to have the array sorted.
Given a sorted array, we can perform a divide and conquer attack. Assume the array is sorted in increasing order (smallest number first).
Sorting is crucial. As we just saw, fast searching depends on sorting.
There are many sorting algorithms; we will learn only two, both are bad. That is, both take time proportional to N^2, where N is the number of values to be sorted. Good algorithms take time proportional to N*logN. This doesn't matter when N is a few hundred, but is critically important when N is a few million.
Moreover, serious sorting does not assume that all the values can be in memory at once.
Having said all this, sorting is important enough that it is quite worth our while to learn some algorithms. Naturally, it also helps us learn to translate algorithms into Java.
Read.
The basic idea in both the 8e and my selection sort is that you first find the minimum element and put it into the first slot (A[0]) and then repeat for the rest of the array (A[1]...A[A.length-1]).
for (j = 0; j < A.length; j++)
    if (A[j] < A[0]) {
        temp = A[0];
        A[0] = A[j];
        A[j] = temp;
    }
The difference is in how you find the minimum and get it into A[0]. The code on the right shows the simple method used by bubble sort. The code in the book is very slightly more complicated and perhaps very slightly faster (both give bad, i.e., N^2, sorts).
What remains is to wrap the code with another loop to find the 2nd smallest element and put it in A[1], etc.
Let's do that in class.
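One way the wrapping could come out is the sketch below (not necessarily the version we produce in class): the outer loop variable i plays the role that index 0 played above, so pass i puts the smallest remaining element into A[i].

```java
public class SelectionSort {
    // After pass i, A[i] holds the smallest element of A[i..A.length-1].
    public static void sort(int[] A) {
        for (int i = 0; i < A.length; i++)
            for (int j = i + 1; j < A.length; j++)
                if (A[j] < A[i]) {        // swap whenever a smaller value is found
                    int temp = A[i];
                    A[i] = A[j];
                    A[j] = temp;
                }
    }

    public static void main(String[] args) {
        int[] a = {5, 1, 8, 3, 2};
        sort(a);
        for (int x : a)
            System.out.print(x + " ");
        System.out.println();
    }
}
```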
for (i=1; i<A.length; i++) // Insert A[i] correctly into A[0]..A[i-1]
Insertion sort is well described by the code on the right. The remaining question is how do we insert A[i] into A[0]..A[i-1] maintaining the sorted order?
In looking for the correct place to put A[i], we could either start with A[0] and proceed down the array or start with A[i-1] and proceed up the array. Why do we do the latter?
Consider an example where i is 20 and the right place to put A[i] is in slot A[10]. Where are we going to put A[10]? Answer: in A[11]. Then where does A[11] go? Answer: in A[12].
public static void insSort(int[] a) {
    for (int i = 1; i < a.length; i++) {
        int ai = a[i];
        int j = i - 1;
        while (j >= 0 && ai < a[j]) {
            a[j+1] = a[j];
            j--;
        }
        a[j+1] = ai;
    }
}
Thus we need to move the higher elements before we can move the lower. We can easily do this while searching for the correct spot to put A[i], providing we are traveling from higher to lower indices.
The code on the right deserves a few comments.
1 appears 2 times
2 appears 2 times
3 appears 1 time
5 appears 3 times
8 appears 1 time
The book notes that Java has many predefined methods for arrays. In particular java.util.Arrays.sort(a) would have sorted the array a (for any primitive type, thanks to overloading).
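For instance, the library can both sort an array and then binary-search it, so the two halves of this chapter are already packaged for you in java.util.Arrays:

```java
import java.util.Arrays;

public class LibrarySort {
    public static void main(String[] args) {
        int[] a = {29, -2, 8, 1};
        Arrays.sort(a);  // sorts in place; works for any primitive type
        System.out.println(Arrays.toString(a));
        // binarySearch requires a sorted array and returns the index of
        // the value (or a negative number when the value is absent).
        System.out.println(Arrays.binarySearch(a, 8));
    }
}
```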
The best place to see all the methods (and classes, and more) that are guaranteed to be in all Java implementations is (really).
As we have seen, a one-dimensional array corresponds to a list of values of some type. Similarly, two-dimensional arrays correspond to tables of such values, and arrays of higher dimension correspond to analogous higher-dimensional tables.
double [][] M;   // M for Matrix
M = new double [2][3];
M[0][0]=4.1; M[0][1]=3; M[0][2]=4;
M[1][0]=6.1; M[1][1]=8; M[1][2]=3;
double [][] M = new double [2][3];
double [][] M = { {4.1, 3, 4}, {6.1, 8, 3} };
double [][] M = {{4.1, 3, 4}, {6.1, 8, 3}};
As with one-dimensional arrays, there are three steps in declaring and creating arrays of higher dimension.
On the right we see three possibilities, the analogue of what we did last chapter for one-dimensional (1D) arrays. I wrote the third possibility twice using different spacings.
In fact there are other possibilities since a 2D array in Java is really a 1D array of 1D arrays as we shall see in the very next section.
The simple diagram on the right is very important. It illustrates what is meant by the statement that Java does not have native 2D arrays. Instead, Java 1D arrays can have elements that are themselves 1D arrays.
The M in the diagram is the same as in the example code above.
Note first of all, that we again have reference semantics for the arrays. Specifically, M refers to (points to) the 1D array of length two in the middle of the diagram. Similarly M[0] refers to the top right 1D array of size 3. However, M[0][0] is a primitive type and thus it IS (as opposed to refers to) a double.
Next note that there are three 1D arrays in the diagram (M, M[0], and M[1]) of lengths, 2, 3, and 3 respectively.
Finally, note that you cannot write M[1,0] for M[1][0], further showing that Java does not have 2D arrays as a native type.
One advantage of having 1D arrays of 1D arrays rather than true 2D arrays is that the inner 1D arrays can be of different lengths. For example, on the right, M is a length 2 array, M[0] is a length 3 array, and M[1] is a length 1 array.
Text that does not have its right margin aligned is said to have ragged right alignment. Since the right hand boundary of the component 1D arrays looks somewhat like ragged right text, arrays such as M are called ragged arrays.
double [][] M = { {3.1,4,6}, {5.5} };
double [][] M;
M = new double [2][];   // the 2 is needed
M[0] = new double [3];
M[1] = new double [1];
M[0][0]=3.1; M[0][1]=4; M[0][2]=6;
M[1][0]=5.5;
There are several ways to declare, create, and initialize a ragged array like M. The simplest is all at once as on the upper right.
The most explicit is the bottom code. Note how clear it is that M has length 2, M[0] has length 3, and M[1] has length 1.
Intermediates are possible. For example the first two lines can be combined. Another possibility is to combine the initialization of M[0] with its creation.
for (int i=0; i<n; i++) { | for (int i=0; i<M.length; i++) { for(int j=0; j<m; j++) { | for (int j=0; j<M[i].length; j++) { // process M[i][j] | // process M[i][j] } | } } | }
On the far right is the generic code often used when processing a single 2D array (I know 2D arrays are really 1D arrays of 1D arrays). This code works fine for ragged and non-ragged (often called rectangular) arrays. Note that the bound in the inner loop depends on i.
When the array is known to be rectangular (the most common case) with dimensions n and m, the simpler looking code on the near right is used.
double maximum = Matrix[0][0];
double minimum = Matrix[0][0];
double sum = 0;
for (int i = 0; i < n; i++)
    for (int j = 0; j < m; j++) {
        if (Matrix[i][j] > maximum)
            maximum = Matrix[i][j];
        else if (Matrix[i][j] < minimum)
            minimum = Matrix[i][j];
        sum += Matrix[i][j];
    }
These are quite easy since the computation for each element of the matrix is independent of all other elements. The idea is simple: use 2 nested loops to index through all the elements. The loop body contains the computation for the generic element.
For a possible ragged array, simply replace n by Matrix.length and m by Matrix[i].length.
for (int i = 0; i < A.length; i++)
    for (int j = 0; j < A[i].length; j++)
        A[i][j] = 1 + (int)(N*Math.random());
We use the nested loop construction appropriate for ragged arrays. The body requires remembering how to scale Math.random() to produce numbers from 1 to N.
The reason for (int) is in case the matrix A is floating point (but we want integer values).
Read
Start Lecture #11
Remark: Using the PC browse,
public static void matrixMult (double [][] A, double [][] B, double [][] C) {
    // check that the dimensions are legal
    for (int i = 0; i < A.length; i++)
        for (int j = 0; j < A[0].length; j++) {
            A[i][j] = 0;
            for (int k = 0; k < C.length; k++)
                A[i][j] += B[i][k]*C[k][j];
        }
}
Remark: Write the formula on the board. I do not assume you have memorized the formula. If it is needed on an exam, it will be supplied.
For multiplication the matrices must be rectangular. Indeed, if we are calculating A=B×C, then B must be an n×p matrix and C must be a p×m matrix (note the two ps). Finally, the result A must be an n×m matrix.
The code on the right does the multiplication, but it omits the dimensionality checking. Note that this checking will involve loops (to ensure that the arrays are not ragged).
An alternative would be to ask the user for the array bounds and trust that the arrays are of the size specified. Then the method definition would begin
public static void matrixMult (int n, int m, int p,
                               double [][] A, double [][] B, double [][] C) {
    // A is an n by m matrix, B is an n by p matrix, C is a p by m matrix
A full program for rectangular matrices is here.
Homework: 7.5.
Remark: Lab 2, Part 1 assigned. Due in 7 days. Changed to 9 days due to midterm.
Read
Read
import java.util.Scanner;

public class Intermediate {
    public static void main (String[] args) {
        final double [][] points = { {0,0}, {3,4}, {1,9}, {-10,-10} };
        System.out.println("Input 4 doubles for start and end points");
        Scanner getInput = new Scanner(System.in);
        double [] start = {getInput.nextDouble(), getInput.nextDouble()};
        double [] end   = {getInput.nextDouble(), getInput.nextDouble()};
        int minIndex = 0;
        double minDist = dist(start, points[0]) + dist(points[0], end);
        for (int i = 1; i < points.length; i++)
            if (dist(start,points[i]) + dist(points[i],end) < minDist) {
                minIndex = i;
                minDist = dist(start,points[i]) + dist(points[i],end);
            }
        System.out.printf("Intermediate point number %d (%f,%f) %s%f\n",
                          minIndex, points[minIndex][0], points[minIndex][1],
                          " is the best. The distance is ", minDist);
    }

    public static double dist(double [] p, double [] q) {
        return Math.sqrt((p[0]-q[0])*(p[0]-q[0]) + (p[1]-q[1])*(p[1]-q[1]));
    }
}
Assume you are given a set of (x,y) pairs that represent points on a map. For this problem we join the Flat Earth Society and assume the world is flat like the map.
Read in two (x,y) pairs S and E representing your starting and ending position and find which of the given points P minimizes the total trip, which consists of traveling from S to P and from P to E.
Note the following points about the solution on the right.
Not as interesting as the title suggests. The program in this section just checks if an alleged solution is correct.
double x[][][] = { { {1,2,3,4}, {1,2,0,-1}, {0,0,-8,0} },
                   { {1,2,3,4}, {1,2,0,-1}, {0,0,-8,0} } };
There is nothing special about the two in two-dimensional arrays. We can have 3D arrays, 4D arrays, etc.
On the right we see a 3D array x[2][3][4]. Note that both x[0] and x[1] are themselves 2D arrays.
double[][][] data = new double[numDays][numHours][2];
double[][] temperature = new double[numDays][numHours]; double[][] humidity = new double[numDays][numHours];
The three dimensional array data on the upper right holds the temperature and humidity readings for each of 24 hours during a 10 day period.
data[d][h][0] gives the temperature on day d at hour h.
data[d][h][1] gives the humidity on day d at hour h.
However, I do not like this data structure since the temperature and humidity are really separate quantities so I believe the second data structure is better (the naming would be better).
double[][][] temperature = new double[numLatitudes][numLongitudes][numAltitudes];
On the right we see a legitimate 3D array that was used in a NASA computer program for weather prediction. The temperature is measured at each of 20 altitudes sitting over each latitude and longitude.
A cute guessing game. The Java is not especially interesting, but you might want to figure out how the game works.
Homework: 7.7
Remark: lab 2 part 2 assigned. Due in 7 days. Changed to 9 days due to midterm.
Remark: End of material on midterm.
Object-oriented programming (OOP) has become an important methodology for developing large programs. It helps to support encapsulation so that different programming teams can work on different aspects of the same large programming project.
Java has good support for OOP.
In Java, OOP is how GUI programs are usually developed.
An object is an entity containing both data (called fields in Java) and actions (called methods in Java). Two examples would be a circle in the plane and a stack.
For a circle, the data could be the radius and the (x,y) coordinates of the center. Actions might include, moving the center, changing the radius, or (in a graphical setting) changing the color in which the circle is drawn.
For a stack, the data could include the contents and the current top of stack pointer.
In Java, objects of the same type, e.g., many circles, are grouped together in a class.
public class LongStack {
    int top = 0;
    long[] theStack;

    void push(long elt) {
        theStack[top++] = elt;
    }

    long pop() {
        return theStack[--top];
    }
}
On the right is the beginnings of a class for stacks containing Java longs. The fields in the class are top and theStack; the methods are push() and pop().
Two obvious problems with this class are:
Given a class, a new object is created by instantiating the class. Indeed, an object is often called an instance of its class.
Many classes supply a special method called a constructor that Java calls automatically whenever an object is instantiated. Although a constructor can in principle perform any action, normally its task is to initialize fields. For example the LongStack constructor would create the array theStack.
public class TestCircle1 {
    /** Main method */
    public static void main(String[] args) {
        // Create a circle with radius 5.0
        Circle1 myCircle = new Circle1(5.0);
        System.out.println("The area of the circle of radius "
            + myCircle.radius + " is " + myCircle.getArea());

        // Create a circle with radius 1
        Circle1 yourCircle = new Circle1();
        System.out.println("The area of the circle of radius "
            + yourCircle.radius + " is " + yourCircle.getArea());

        // Modify circle radius
        yourCircle.radius = 100;
        System.out.println("The area of the circle of radius "
            + yourCircle.radius + " is " + yourCircle.getArea());
    }
}

// Define the circle class with two constructors
class Circle1 {
    double radius;

    /** Construct a circle with radius 1 */
    Circle1() {
        radius = 1.0;
    }

    /** Construct a circle with a specified radius */
    Circle1(double newRadius) {
        radius = newRadius;
    }

    /** Return the area of this circle */
    double getArea() {
        return radius * radius * Math.PI;
    }
}
On the right is the example from the book. Note several points.
The last two points above are closely related.
Start Lecture #12
Remark: reviewed practice midterm
Homework: UML diagrams are not required for all homework. 8.1, 8.3 (all ch 8 problems are at the 7th floor front desk of 715 bway.)
A static method (also called a class method) is associated with the class as a whole; whereas a non-static method (called an instance method) is associated with an individual object. The Circle1 class has one instance method along with the two constructors.
Thus when the two Circle1 objects are created, each gets its own getArea() method.
In a like manner a field can be declared to be static (none are in this example) in which case it is called a class field and there is only one that is shared by all objects instantiated from the class. (An object instantiated from a class is often called an instance of the class).
A field not declared to be static is called an instance field and each object gets its own copy; changing one, does not affect the others.
Putting this together we see that when the main() class invokes getArea() it must specify which getArea() is desired, i.e., which object's getArea() should be invoked. So myCircle.getArea() invokes the getArea() associated with myCircle and hence when it mentions radius (an instance field) it gets the radius associated with myCircle.
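The distinction can be sketched with a small hypothetical class (Counter and DemoCounter are invented names, not from the book):

```java
// Each Counter object gets its own 'count'; 'total' is shared by the whole class.
class Counter {
    int count = 0;          // instance field: one per object
    static int total = 0;   // class (static) field: one for the entire class

    void tick() {           // instance method: invoked as someCounter.tick()
        count++;
        total++;
    }
}

public class DemoCounter {
    public static void main(String[] args) {
        Counter a = new Counter();
        Counter b = new Counter();
        a.tick();
        a.tick();
        b.tick();
        // a.count is 2 and b.count is 1, but Counter.total is 3
        System.out.println(a.count + " " + b.count + " " + Counter.total);
    }
}
```

Note that the static field is referenced through the class name (Counter.total), while the instance fields are referenced through the individual objects.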
Start Lecture #13
Remark: Midterm
Start Lecture #14
Remark: Lecture given by Prof. Victor Shoup. (8.7 covered in lecture #15).
public class TV {
    int channel = 1;          // Default channel is 1
    int volumeLevel = 1;      // Default volume level is 1
    boolean on = false;       // By default TV is off

    public TV() {
    }

    public void turnOn() {
        on = true;
    }

    public void turnOff() {
        on = false;
    }

    public void setChannel(int newChannel) {
        if (on && newChannel >= 1 && newChannel <= 120)
            channel = newChannel;
    }

    public void setVolume(int newVolumeLevel) {
        if (on && newVolumeLevel >= 1 && newVolumeLevel <= 7)
            volumeLevel = newVolumeLevel;
    }

    public void channelUp() {
        if (on && channel < 120)
            channel++;
    }

    public void channelDown() {
        if (on && channel > 1)
            channel--;
    }

    public void volumeUp() {
        if (on && volumeLevel < 7)
            volumeLevel++;
    }

    public void volumeDown() {
        if (on && volumeLevel > 1)
            volumeLevel--;
    }
}
public class TestTV {
    public static void main(String[] args) {
        TV tv1 = new TV();
        tv1.turnOn();
        tv1.setChannel(30);
        tv1.setVolume(3);

        TV tv2 = new TV();
        tv2.turnOn();
        tv2.channelUp();
        tv2.channelUp();
        tv2.volumeUp();

        System.out.println("tv1's channel is " + tv1.channel
            + " and volume level is " + tv1.volumeLevel);
        System.out.println("tv2's channel is " + tv2.channel
            + " and volume level is " + tv2.volumeLevel);
    }
}
Above and to the right we see our first program written in two files. It is straight from the book. Each file has one public class that determines the name of the file.
Again note that the instance methods are invoked as objectName.methodName, e.g., tv1.turnOn().
To compile and run this program you would type.
javac TestTV.java TV.java
java TestTV
The last line could not have been java TV since you must invoke the file that has the main() method.
We have seen constructors above. Note the following points.
The purpose of constructors is to initialize the object being created via new.
Normally, a class provides a constructor with no parameters (as well as other constructors with one or more parameters). This constructor is often called the no argument or no-arg constructor.
If the class contains NO constructors at all, the system provides a no-arg constructor with empty body. Thus the no-arg constructor in class TV could have been omitted.
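As a quick illustration of that last point (Light and DemoLight are invented names), a class with no constructors at all still gets a system-supplied, empty-body, no-arg constructor:

```java
// Light declares no constructor, so Java supplies an empty no-arg one.
class Light {
    boolean on = false;   // field initializers still run during construction
}

public class DemoLight {
    public static void main(String[] args) {
        Light l = new Light();     // legal: uses the system-supplied constructor
        System.out.println(l.on);  // prints false
    }
}
```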
class LongStack {
    int top = 0;
    long[] theStack;

    LongStack() {
        theStack = new long[100];
    }

    LongStack(int n) {
        theStack = new long[n];
    }

    void push(long elt) {
        if (top >= theStack.length)
            System.out.println("No room to push");
        else
            theStack[top++] = elt;
    }

    long pop() {
        if (top <= 0) {
            System.out.println("Nothing to pop.");
            return -99;   // awful!!
        }
        return theStack[--top];
    }
}

public class DemoLongStack {
    public static void main (String[] args) {
        long x = 333, y = 444;
        LongStack myLStk = new LongStack(4);
        myLStk.push(x);
        myLStk.push(y);
        System.out.printf("Before: x=%d and y=%d\n", x, y);
        x = myLStk.pop();
        y = myLStk.pop();
        System.out.printf("After: x=%d and y=%d\n", x, y);
    }
}
Before: x=333 and y=444
After: x=444 and y=333
Now that we know a little more about constructors, we can add them to class LongStack, and, while we are at it, put in some error checks. The revised code is on the right.
If we knew about Java exceptions, it would be good to utilize them for these errors. In particular, returning a -99 is awful.
The default stack, i.e., the one created with the no-arg constructor, has room for 100 elements. The user can, however, specify the size when invoking new and then the other constructor will be called.
Note that LongStack is no longer a public class. With no public (or private or protected) declaration, the default is that LongStack is visible within the current package. Since we don't know about packages, we will say it is visible within the current directory.
The main() method declares a LongStack and uses it a little. Below the line, we see the output produced, which indeed exhibits stack-like (LIFO, last-in first-out) semantics.
Note especially the command myLStk.push(x), which we now discuss.
When compared to most programming languages something looks amiss with this statement. The purpose of the command is to push x on to the stack myLStk and hence should be written something like push(x,myLStk). Similarly, the purpose of x=myLStk.pop() is to pop the stack and place the value into x. So it should be something like x=pop(myLStk).
However, in Java, and other object-oriented languages, it is written as on the right. The idea is that the LongStack object myLStk contains not only the data (in this case top and theStack) but the methods push() and pop() as well.
We have previously seen this dotted notation (object dot methodname). For example, getInput.nextInt().
Recall the serious business we mentioned in section 6.5 when discussing copying arrays. Liang finally comes to grips with it here.
As with arrays, variables for objects contain references to the objects not the objects themselves.
The diagram on the right illustrates the situation right after main() executes
    LongStack myLStk = new LongStack(4);
We see that myLStk refers to (points at) the new object. We also see that the object has three pieces.
int top, which is a primitive type so actually contains the value.
The above is very important. It is a key tenet of Java that primitive types have value semantics and that both arrays and objects have reference semantics.
After creation, methods and data fields within objects are referenced using dot notation as mentioned above and illustrated by getInput.nextInt(), myLStk.pop(), and myLStk.top.
Note that the fields and methods just mentioned do not have the static modifier (we will see static fields later and have seen many static methods already). Fields and methods that are not static are called instance fields (or instance variables) and instance methods.
If in the DemoLongStack example above, the main() method created two LongStack's, each one would have its own top, theStack, push(), and pop().
As I mentioned several lectures ago, Java assigns default values to many variables, but I do not use or recommend learning the default values. I always (try to remember to) explicitly give the variables values at their declaration or ensure that they are assigned a value before their value is requested.
We see here that in some cases, Java does not assign a default value. Since I suggest not making use of default values, I don't see a need to remember which variables get defaults (and what the default is) and which variables do not.
Nonetheless there is one default you will need to recognize as it will indicate a fairly common error. If you declare an object or array but do not create it, then it receives the default value (really default reference or pointer) null. If you then use the object, you can get a NullPointerException.
For example, if the LongStack class had no constructors, then the declaration/creation of myLStk in DemoLongStack would be erroneous and the program would not compile since there would be no constructor to match the new operator. If, in addition, the declaration/creation was changed to not specify the size, the program would compile and the default do-nothing, no-arg constructor would be called. The result would be that theStack would be declared but not created. Then, when the first push is invoked, a NullPointerException would be raised because theStack contains null.
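The error just described can be reproduced with a stripped-down sketch (BadStack and DemoNull are invented names used only for illustration):

```java
// The array field is declared but never created, so it holds the default null.
class BadStack {
    int top = 0;
    long[] theStack;   // no constructor creates the array

    void push(long elt) {
        theStack[top++] = elt;   // throws NullPointerException: theStack is null
    }
}

public class DemoNull {
    public static void main(String[] args) {
        BadStack s = new BadStack();
        try {
            s.push(5);
        } catch (NullPointerException e) {
            System.out.println("caught NullPointerException");
        }
    }
}
```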
int[] a = {1,2,3}, b;
b = a;
b[1] = 999;
System.out.println(a[1]);
This section finally gives the pictures showing that reference types only point to values; they do not contain the values. All this applies equally well to arrays where we previously discussed it.
The example on the right illustrates reference semantics: The value printed is 999.
A significant strength of Java is its very extensive class library.
Let's use this opportunity as an excuse for learning how to obtain information from the online Java API documentation.
The 8e mentions two constructors and three methods (toString(), getTime(), and setTime).
If you knew that Date was java.util.Date you could restrict the classes, but let's not and use the default of all classes.
The same two constructors are present as in the 8e (along with several that are deprecated). The methods are there as well, together with many others (some deprecated).
import java.util.Date;

Date myDate = new Date();
System.out.println(myDate.toString());
Don't forget the syntax objectName.methodName for invoking instance methods.
On the right is a simple example basically from the book.
The one difference is that I used an import statement.
How would it have differed without the import?
Answer: Date would have been java.util.Date twice in the declaration/creation.
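That is, without the import the snippet would look like this (DemoDateNoImport is an invented class name):

```java
public class DemoDateNoImport {
    public static void main(String[] args) {
        // No import statement: Date must be fully qualified both times,
        // once in the declaration and once with the new operator.
        java.util.Date myDate = new java.util.Date();
        System.out.println(myDate.toString());
    }
}
```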
Again use the web and find the same constructors and a superset of the methods.
Skipped for now.
Homework: 8.3.
We have now seen several methods in classes that are associated with objects. For example myDate.toString() invokes the toString() method that is associated with the object myDate. If we had also declared another Date, say yourDate, then myDate.toString() would be a different method from yourDate.toString().
This is a good thing since the string format of myDate is different from the string format of yourDate (unless myDate and yourDate happen to refer to the exact same time).
Similarly, we have seen several data fields defined in classes that are associated with each instance of the class. For example, the top of each instance of LongStack is unique.
This is also good. If you have several LongStack's, you don't expect their top's to be equal and surely pushing one LongStack does not affect the top of any other LongStack.
class ShortStack {
    int top = 0;
    short[] theStack;

    ShortStack() {
        theStack = new short[100];
    }

    ShortStack(int n) {
        theStack = new short[n];
    }

    // Maintain total number of items stacked
    static int totalStacked = 0;   // should be pvt
    static int getTotalStacked() {
        return totalStacked;
    }

    void push(short elt) {
        if (top >= theStack.length)
            System.out.println("No room to push");
        else {
            theStack[top++] = elt;
            totalStacked++;
        }
    }

    short pop() {
        if (top <= 0) {
            System.out.println("Nothing to pop.");
            return -99;   // awful!!
        }
        totalStacked--;
        return theStack[--top];
    }
}

public class DemoShortStack {
    public static void main (String[] args) {
        short x = 12, y = 13;
        ShortStack mySStk1 = new ShortStack(4);
        ShortStack mySStk2 = new ShortStack();
        mySStk1.push(x);
        mySStk1.push(y);
        mySStk2.push(x);
        mySStk2.push(y);
        System.out.printf("Before pops: stacked=%d\n",
                          ShortStack.getTotalStacked());
        x = mySStk1.pop();
        y = mySStk1.pop();
        System.out.printf("After pops: x=%d, y=%d and stacked = %d\n",
                          x, y, ShortStack.getTotalStacked());
    }
}
Before pops: stacked=4
After pops: x=13, y=12 and stacked = 2
But sometimes having separate instances of each method and field for each object is not what we want. On the right we see ShortStack. I derived it from LongStack, by first replacing all long's with short's.
Then I decided to keep track of the total number of items stacked on all the ShortStack's. This single variable totalStacked is incremented by every push(), no matter which ShortStack is being pushed, and is decremented by every pop(). We indicate that there is one totalStacked associated with the entire class ShortStack rather than one totalStacked associated with each instance of totalStacked (i.e., one associated with each object) by declaring totalStacked to be static.
A variable like totalStacked that is associated with the class and not with each instance is called a static variable or a class variable.
I could have printed this value in the main() method, but did not. Instead, I defined a method getTotalStacked() that simply returns its value and had the main() method call getTotalStacked(). As we will see in the next section, it is possible to prevent other classes from seeing a variable, which has the great advantage that the programmer of the original class can be sure that no other class either changes the variable or depends on how the variable is implemented.
In such cases it is common to supply an accessor method like getTotalStacked() to get the value and sometimes another method to alter the value.
We could have written getTotalStacked() as an instance method, so that there would be one associated with each object of type ShortStack. However, that would be a little silly (but would work) since the method just returns the value of a class field and hence all instances of the method would be the same. Hence we declare getTotalStacked() to be static so that there is only one getTotalStacked() method for the entire class. That is, we make it a class method or static method.
One more point, one that will reveal a great secret. Since a class method is dependent on only the class and not on any object of the class, we can define and use a class method even if there are no objects of the class. At long last we can understand
    public static void main (String[] args)
which must be used in every Java program.
Some of our classes, data fields, and methods have been public and some have not. What gives?
There are actually four possibilities, of which we have used two.
(Top level classes, i.e., classes not inside any other class, can not be declared private or protected.)
An entity (a class, field, or method) that is declared to be public can be accessed from any class.
An entity that is declared to be private is accessible only within its own class.
public class PubClass {
    public int x;          // accessible everywhere

    private int f2() {     // accessible only in PubClass
        return 2;
    }

    private class PvtClass {
        // x and f2() accessible here
    }
}

// Following is a new file
public class C2 {
    // x accessible; f2() is not

    private class C1 {
        // x accessible; f2() is not
    }
}
Look at the example on the right and note the following.
Java includes the notion of a package, which is a collection of classes. A .java file can begin with a line

    package packageName;

which declares all the classes in the file to be in the package packageName. Packages are very important for large programs, but less so for those we write. None of our .java files have included a package statement. In such cases Java places the classes declared in the file into the so called default package.
If an entity is declared without a visibility modifier (public, private, or protected), then the entity has package visibility, which means it is visible in all classes belonging to the same package as does the entity.
The protected visibility modifier comes into play only when there are subclasses and inheritance so we defer it to Chapter 11.
Until we learn about subclasses and protected, our use of modifiers will be fairly simple (in part because we are not learning about packages).
Homework: 8.7
class ShortStack {
    int top = 0;
    short[] theStack;

class ShortStack {
    private int top = 0;
    private short[] theStack;

    int getTop() { return top; }
On the upper right we see the beginning of the ShortStack class we saw previously. In the second group, we added private visibility modifiers, which improves the class considerably. Why?
public class Circle3 {
    private double radius = 1;
    private static int numberOfObjects = 0;

    public Circle3() {
        numberOfObjects++;
    }

    public Circle3(double newRadius) {
        radius = newRadius;
        numberOfObjects++;
    }

    public double getRadius() {
        return radius;
    }

    public static int getNumberOfObjects() {
        return numberOfObjects;
    }

    public double getArea() {
        return radius * radius * Math.PI;
    }

    public void setRadius(double newRadius) {
        radius = (newRadius >= 0) ? newRadius : 0;
    }
}
Until we learn about packages and subclasses, we need only distinguish public and private. Both protected and the default package-private are equivalent to public in the absence of subclasses and packages.
On the right we see an improved circle class, now called Circle3 (we did not do Circle2).
The mutator method setRadius() is interesting. If we want to let clients modify the radius field, why didn't we just make it public?
Homework: 8.9.
class ShortStack {
    private int top = 0;
    private short[] theStack;

    ShortStack() {
        theStack = new short[100];
    }

    ShortStack(int n) {
        theStack = new short[n];
    }

    private static int totalStacked = 0;
    static int getTotalStacked() {
        return totalStacked;
    }

    int getTop() {
        return top;
    }

    void push(short elt) {
        if (top >= theStack.length)
            System.out.println("No room to push");
        else {
            theStack[top++] = elt;
            totalStacked++;
        }
    }

    short pop() {
        if (top <= 0) {
            System.out.println("Nothing to pop.");
            return -99;   // awful!!
        }
        totalStacked--;
        return theStack[--top];
    }
}

public class DemoShortStack2 {
    public static void main (String[] args) {
        short x = 12, y = 13;
        ShortStack mySStk1 = new ShortStack(4);
        ShortStack mySStk2 = new ShortStack();
        mySStk1.push(x);
        mySStk1.push(y);
        mySStk2.push((short)22);
        mySStk2.push((short)23);
        System.out.printf("Before pops: stacked=%d\n",
                          ShortStack.getTotalStacked());
        x = mySStk1.pop();
        y = mySStk1.pop();
        System.out.printf("After pops: x=%d, y=%d and stacked=%d\n",
                          x, y, ShortStack.getTotalStacked());
        System.out.println("Printing and emptying stack #1");
        printAndEmptyShortStack(mySStk1);
        System.out.println("Printing and emptying stack #2");
        printAndEmptyShortStack(mySStk2);
    }

    public static void printAndEmptyShortStack(ShortStack SStk) {
        System.out.printf("The stack has %d items.\n", SStk.getTop());
        for (int i=1; SStk.getTop()>0; i++)
            System.out.printf("Item #%d is %d\n", i, SStk.pop());
        System.out.println();
    }
}
Before pops: stacked=4
After pops: x=13, y=12 and stacked=2
Printing and emptying the first stack
2 items stacked, 0 in this stack.
Printing and emptying the second stack
2 items stacked, 2 in this stack.
Item #1 is 23
Item #2 is 22
As with arrays, passing an object to a method actually passes a reference (pointer) to the object.
The book illustrates this with one of the circle classes. You should read that.
For variety (i.e., a different example) we consider our ShortStack class.
As illustrated on the right, we have included the following improvements/enhancements.
The diagram shows the situation when main() has called the method for the first time.
We see that the integers are using value semantics (they are their values); whereas the objects (in this case ShortStack's) have reference semantics and only refer to (point at) their values.
Also note the call-by-value semantics. The contents of the argument mySStk1 is copied into the parameter SStk.
Start Lecture #15
Remark: Reviewed most of midterm.
Start Lecture #16
Remark: Finished review of midterm.
Remark: Covered section 8.7.
Arrays of objects just puts together the semantics of arrays with the semantics of objects. Consider the diagram on the right.
In Java a String is an object. Thus, like a ShortStack, a String has reference semantics.
In some languages a string is an array of characters. Although an array of char's is similar to a Java String (e.g., both have reference semantics), an array of char's is not the same as a String (e.g., you do not use [] to extract a char from a String).
As shown on the right, String's can be created several ways.
String s1 = new String();
String s2 = new String("A string");
char[] c1 = {'c','h','a','r',' ','a','r','r','a','y'};
String s3 = new String(c1);
String s4 = "Another String";
String s1 = "joe";
s1 = "linda";
String s2 = "linda";
String objects are immutable: they cannot be changed. However, a string variable is mutable and can reference different string objects at different times.
On the right we have two String objects "joe" and "linda", and two String variables s1 and s2. The objects cannot change but we have changed one of the variables.
Interned
Java creates only one copy of each string literal, even if you write it several times. Thus after the code above is executed, there is only one object "linda". The technical term is that the literal is interned.
String s3 = new String("linda");
if (s1==s2)   YES
if (s1==s3)   NO
Both s1 and s2 refer to this one literal. That is, s1 and s2 contain the same reference and, as indicated to the right they are equal (==).
However, again as mentioned previously, when you create a string from another using the constructor as on the right, a copy is made. Since s3 refers to this copy it is NOT equal (!=) to either s1 or s2.
The last paragraph seems very bad. You have two variables s1 and s2 both of which evaluate to "linda", but they are unequal. We can't seem to tell that the referred values are equal. Fortunately, there are more sections to this chapter.
The problem we just had was that we were comparing the references and not the values they referred to. To do the latter we need to use a method in the String class.
boolean equals (Object anObject)
On the right we see the equals() method. Technically, it has an Object as parameter, but for now pretend that the only permitted Object is a String.
String s1 = "joe";
String s2 = new String(s1);
if ("joe".equals(s1))   YES
if ("joe".equals(s2))   YES
Something looks wrong! How come equals has only one parameter. Surely, we want to see if two strings are equal. The answer is that equals() is an instance method and this is attached to a string object, which it then compares to its parameter. We see an example of the usage on the right. Note that in this example if(s1==s2) would be NO.
There are other string comparison methods defined, e.g., one that ignores case and one (compareTo()) that distinguishes less than from greater than (see below).
Java has many methods dealing with strings. We will just touch on a few. See the API for a list.
int length()
char charAt(int index)
String concat(String s)
The three methods on the right length(), charAt(), and concat() are all instance methods. That is, they are associated with an instance of a class, i.e., an object. The notation used is objectName.methodName(args). Note that the named object is a de facto extra parameter.
Start Lecture #17
String substring(int start)
String substring(int start, int end)
There are two overloaded substring() instance methods. The first, one argument, variant gives the substring starting at the position specified by the argument. The other, two argument, variant gives the substring starting at the position specified by the first argument and ending just before the position specified by the second argument.
For example, "jane".substring(1,2) is "a".
String toLowerCase()
String toUpperCase()
String trim()
String replace(char old, char new)
String replaceFirst(String old, String new)
String replaceAll(String old, String new)
String[] split(String delim)
String[] ans = "ab cd e".split(" ");
ans[0]=="ab"
ans[1]=="cd"
ans[2]=="e"
The methods on the right are well described by their names except perhaps for trim() and split().
trim() removes leading and trailing blanks.
split() is quite nice splitting the original string at the specified delimiter returning the pieces as an array.
For example, executing the statement shown in the middle right, declares ans to be an array of strings having three elements. These elements are shown on the bottom right.
Many of the string arguments in the preceding section are interpreted as regular expressions, which can be quite useful. For example the regular expression .*\.java would match any string that ended in .java.
In addition to the methods in 9.2.6, we note matches(), which is like equals() except that its string argument is interpreted as a regular expression.
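A small sketch of matches() in action (the class name is invented):

```java
public class DemoMatches {
    public static void main(String[] args) {
        // ".*\\.java" (written with an escaped backslash in Java source)
        // matches any string ending in ".java"
        System.out.println("Hello.java".matches(".*\\.java"));   // true
        System.out.println("Hello.class".matches(".*\\.java"));  // false
        // equals() would treat the pattern as ordinary characters
        System.out.println("Hello.java".equals(".*\\.java"));    // false
    }
}
```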
int indexOf(char c)
int indexOf(String s)
The two methods on the right find the first occurrence of the character or string in the object. They return -1 if the character or string does not occur.
There are variants that look for the first occurrence after a specified position, the last occurrence, or the last occurrence before a specified index. They all return -1 if the search is unsuccessful.
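For example (DemoIndexOf is an invented name; indexOf(), the two-argument fromIndex variant, and lastIndexOf() are all real String instance methods):

```java
public class DemoIndexOf {
    public static void main(String[] args) {
        String s = "banana";
        System.out.println(s.indexOf('a'));      // 1: first 'a'
        System.out.println(s.indexOf('a', 2));   // 3: first 'a' at or after index 2
        System.out.println(s.lastIndexOf('a'));  // 5: last 'a'
        System.out.println(s.indexOf('z'));      // -1: not found
    }
}
```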
We have already seen the String(char[] charArray) constructor. The inverse operation is provided by the toCharArray() method.
char[] ca = "Joseph".toCharArray();
String s1 = new String(ca);
String s2 = new String("Joseph".toCharArray());
For example, the first code snippet initializes s1 to Joseph in a complicated way, using the character array ca as an intermediary. The second (one-line, no intermediary) snippet is perhaps even more complicated.
So far we have seen constructors and instance methods. Now we will see two class methods, one of which is heavily overloaded.
The class method String.valueOf() converts many data types into strings. Note again that this is a class (i.e., static) method so is associated with the class String and not with any specific string. It is heavily overloaded; there are versions with a char argument, a char[] argument, a double argument, an int argument, a boolean argument, and others as well.
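A few of the String.valueOf() overloads in action (the class name is invented):

```java
public class DemoValueOf {
    public static void main(String[] args) {
        // Several overloads of the class (static) method String.valueOf()
        String s1 = String.valueOf(42);                   // int overload: "42"
        String s2 = String.valueOf(3.5);                  // double overload: "3.5"
        String s3 = String.valueOf(true);                 // boolean overload: "true"
        String s4 = String.valueOf(new char[]{'h','i'});  // char[] overload: "hi"
        System.out.println(s1 + " " + s2 + " " + s3 + " " + s4);
    }
}
```

Note again that the method is invoked via the class name String, not via any particular string object.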
The other class method is String.format(). It takes the same arguments as System.out.printf() but, instead of printing the result, it returns it as a String.
For example,
    String s = String.format("3.777 rounded to 2 places is %4.2f", 3.777);
sets s equal to "3.777 rounded to 2 places is 3.78".
A palindrome is a string that reads the same from left to right as from right to left. Note that blanks count. For example, "noon", "racecar", and "12321" are palindromes.
import java.util.Scanner;

class DemoPalindrome {
    public static void main (String[] Args) {
        Scanner getInput = new Scanner(System.in);
        while (true) {
            System.out.println("Type a string, use !!EXIT!! to exit");
            String s = getInput.nextLine();
            if (isPalindrome(s))
                System.out.println(String.format("\"%s\" is a palindrome.", s));
            else
                System.out.println(String.format("\"%s\" is not a palindrome.", s));
        }
    }

    public static boolean isPalindrome(String s) {
        int lo = 0;
        int hi = s.length()-1;
        while (lo < hi) {   // this works for *all* s
            if (s.charAt(lo) != s.charAt(hi))
                return false;
            lo++;
            hi--;
        }
        return true;
    }
}
The program on the right queries the user for a string and then checks if it is a palindrome. Note the following points.
Read
The compareTo() String instance method makes alphabetizing essentially the same as sorting integers, as shown by the side by side code comparison below.
public class DemoSortString {
    public static void main (String[] args) {
        String[] strArray = {"a","zzz","abc","ab","Z","6"};
        System.out.println("Before");
        for (int i=0; i<strArray.length; i++)
            System.out.println(strArray[i]);
        sortString(strArray);
        System.out.println("After");
        for (int i=0; i<strArray.length; i++)
            System.out.println(strArray[i]);
    }

    public static void sortString(String[] strArray) {
        for (int i=0; i<strArray.length-1; i++)
            for (int j=i+1; j<strArray.length; j++)
                if (strArray[i].compareTo(strArray[j]) > 0) {
                    String t = strArray[i];
                    strArray[i] = strArray[j];
                    strArray[j] = t;
                }
    }
}
public class DemoSortInt {
    public static void main (String[] args) {
        int[] intArray = {6, 9, 2, 4, 1, 8};
        System.out.println("Before");
        for (int i=0; i<intArray.length; i++)
            System.out.println(intArray[i]);
        sortInt(intArray);
        System.out.println("After");
        for (int i=0; i<intArray.length; i++)
            System.out.println(intArray[i]);
    }

    public static void sortInt(int[] intArray) {
        for (int i=0; i<intArray.length-1; i++)
            for (int j=i+1; j<intArray.length; j++)
                if (intArray[i] > intArray[j]) {
                    int t = intArray[i];
                    intArray[i] = intArray[j];
                    intArray[j] = t;
                }
    }
}
The only real difference between the two code snippets is that the integer comparison via > is replaced by compareTo(), which returns a positive, zero, or negative integer depending on whether the given object is greater than, equal to, or less than the argument.
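The sign convention of compareTo() can be checked directly (the class name is invented):

```java
public class DemoCompareTo {
    public static void main(String[] args) {
        // compareTo() returns a negative, zero, or positive integer
        System.out.println("ab".compareTo("abc") < 0);   // true: "ab" is a proper prefix
        System.out.println("abc".compareTo("abc") == 0); // true: equal strings
        System.out.println("b".compareTo("a") > 0);      // true: 'b' follows 'a'
        System.out.println("Z".compareTo("a") < 0);      // true: uppercase precedes lowercase in Unicode
    }
}
```

The last line is a reminder that the ordering is by character code, not dictionary order, so all uppercase letters compare less than all lowercase letters.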
For fun let's try to write a class method compareStr() that takes two arguments and returns a positive, zero, or negative integer depending on whether the first argument is greater than, equal to, or less than the second.
public static int compareStr(String s1, String s2) {
    final int EQUAL=0, LESS=-123, GREATER=321;
    for (int i=0; i<s1.length() && i<s2.length(); i++)
        if (s1.charAt(i) < s2.charAt(i))
            return LESS;
        else if (s1.charAt(i) > s2.charAt(i))
            return GREATER;
    if (s1.length() < s2.length())
        return LESS;
    else if (s1.length() > s2.length())
        return GREATER;
    else   // s1.length() == s2.length()
        return EQUAL;
}
This is one of those programs where I found it useful to think first and just sketch out the idea before starting to code. It is actually fairly easy, but you can be led astray.
The code on the right has two parts.
The Character (Wrapper) Class(es)
As mentioned, Java is an object oriented (OO) language and objects/classes play a central part in serious Java programming.
However, the advantages of OO languages come into play mostly for large programs, much larger than any we will write. So, for us as of now there isn't all that much advantage to using an object of type Character rather than a value of type char.
We call the class Character a wrapper for the primitive type char. Similarly, Java provides wrapper classes Boolean, Byte, Short, Integer, Long, Float, and Double for the corresponding primitive types boolean, byte, short, int, long, float, and double.
Do remember that the wrapper classes provide objects, and thus have reference semantics.
It is easy to obtain the Character corresponding to a given char: The Character class provides a constructor for just this purpose. Thus, new Character('a') produces the Character corresponding to the char 'a' .
boolean isDigit(char c)
boolean isLetter(char c)
boolean isLetterOrDigit(char c)
boolean isLowerCase(char c)
boolean isUpperCase(char c)
char toLowerCase(char c)
char toUpperCase(char c)
Perhaps the piece of the Character class most useful to us now, is the collection of class methods that operate on char's (not Character's).
On the right we see seven of these useful class methods. As an example, to test if the char c is a digit, you would write

    if (Character.isDigit(c))

Note the consistent naming: the prefix is is used for a test, and the prefix to is used for a conversion.
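Putting a few of these class methods together (the class name is invented):

```java
public class DemoCharacter {
    public static void main(String[] args) {
        // Class (static) methods of Character that operate on char values
        System.out.println(Character.isDigit('7'));       // true
        System.out.println(Character.isLetter('q'));      // true
        System.out.println(Character.isUpperCase('q'));   // false
        System.out.println(Character.toUpperCase('q'));   // Q
    }
}
```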
Homework: 9.1 and 9.5.
Java has 3 classes whose objects are strings, namely String, StringBuilder, and StringBuffer.
class Test {
    public static void main (String[] Args) {
        StringBuilder sb = new StringBuilder("Hello");
        String s = new String("Hello");
        System.out.printf("s is \"%s\" and sb is \"%s\"\n", s, sb);
        System.out.printf("The lengths of s and sb are %d and %d\n",
                          s.length(), sb.length());
        System.out.printf("The capacity of sb is %d\n", sb.capacity());
        s = s.concat(", world.");
        sb.append(", world.");
        System.out.printf("s is \"%s\" and sb is \"%s\"\n", s, sb);
    }
}
s is "Hello" and sb is "Hello"
The lengths of s and sb are 5 and 5
The capacity of sb is 21
s is "Hello, world." and sb is "Hello, world."
Superficially a String s and a StringBuilder sb can appear to act the same. Indeed, a quick glance at the code on the right suggests that StringBuilder adds nothing.
In this case, however, looks can be deceiving. Specifically, s.concat() and sb.append() are really quite different because the String referenced by s is immutable; whereas, the StringBuilder referenced by sb is not.
The expression sb.append(", world") actually appends the argument onto the object referenced by sb. It does not copy the original contents of that object.
In contrast, s.concat(", world") does not, indeed cannot, change the string referenced by s. What happens is that a brand-new String containing the concatenation is created and returned; s must then be reassigned to reference this new String, while the original "Hello" object remains unchanged.
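The difference is easy to see in a small program (class name mine):

```java
public class ImmutableDemo {
    public static void main(String[] args) {
        String s = "Hello";
        s.concat(", world.");      // returns a new String; the result is discarded
        System.out.println(s);     // prints Hello: s was not changed

        s = s.concat(", world.");  // capture the new String by reassigning s
        System.out.println(s);     // prints Hello, world.

        StringBuilder sb = new StringBuilder("Hello");
        sb.append(", world.");     // modifies the object sb references, in place
        System.out.println(sb);    // prints Hello, world.
    }
}
```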
There are other StringBuilder instance methods that permit insertions and modifications in the middle of the object as well as deletion of parts of the object. One could write String methods to do the same, but they would be a little more complicated.
There are other methods as well.
For our programs it hardly matters at all. The real advantage of StringBuilder is that these manipulations are done on the original, with no copy being made. For strings of length a few dozen, making copies is no big deal (unless done very many times). However, for much larger strings, the copying can become expensive.
All of these are instance methods. As expected sb.toString() returns the String equivalent of the StringBuilder sb, sb.length() returns the length, and sb.charAt() returns the character at the position specified by the argument.
The capacity() instance method is less obvious. A Java StringBuilder object is generally bigger than its current length (this larger size is called the capacity). That way it is easy to append characters. Except for efficiency considerations, we can ignore capacity since the JVM will extend the StringBuilder whenever needed.
You can also reduce the length of a StringBuilder (eliminating characters at the end) or increase its length (padding with null characters).
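For example (a small sketch, class name mine), setLength() can both truncate and pad:

```java
public class SetLengthDemo {
    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder("Hello, world.");
        sb.setLength(5);                  // truncate: characters at the end are dropped
        System.out.println(sb);           // prints Hello
        sb.setLength(8);                  // extend: pads with '\u0000' (null) characters
        System.out.println(sb.length());  // prints 8
    }
}
```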
The book's solution is basically an exercise in using a few of the methods. The basic steps are
The reason for converting back to String's is that (surprisingly?) StringBuilder does not override equals() or provide a compareTo() method, so comparisons must be done on the String equivalents.
Homework: 9.11
We know that the main() method has a single parameter, which is an array of String's. However, we have not yet made use of this. Technically, we have used it all the time; we just had the empty array.
Recall that, if we have a program Prog.java, we compile it with the command javac Prog.java and run the result with the command java Prog.
To introduce command-line arguments, there is no change to the compilation, but we run the program differently. Specifically we add the arguments after the name of the program.
Say we want to give Prog three arguments, one integer and two strings. Specifically the arguments are: 25, arg 2, last. We would write
java Prog 25 "arg 2" last
Several comments are needed.
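For concreteness, here is a minimal Prog (a sketch of mine) that just echoes its arguments. Running java Prog 25 "arg 2" last prints three lines: the quotes keep the blank inside a single argument, and note that all arguments arrive as Strings, even "25".

```java
public class Prog {
    public static void main(String[] args) {
        System.out.printf("There are %d command-line arguments\n", args.length);
        for (int i = 0; i < args.length; i++)
            System.out.printf("args[%d] is \"%s\"\n", i, args[i]);
    }
}
```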
Start Lecture #18
Homework: 9.1, 9.5, 9.11.
Remark: On Monday night I was editing my magazine column and needed a function to reverse a number. Since I just taught it a few hours previously, I actually wrote one of those routines that is all library method calls. Then, for fun I made it an ugly one-liner. Here is the result.
public class TechRev {
    public static int reverse(int n) {
        // StringBuilder sb = new StringBuilder(Integer.toString(n));
        // return Integer.parseInt(sb.reverse().toString());
        return Integer.parseInt(new StringBuilder(Integer.toString(n)).reverse().toString());
    }
}
Remark: Added the missing two lines to the Calculator.
public class Calculator {
    public static void main(String[] args) {
        if (args.length != 3 || args[1].length() != 1) {
            System.out.println(
                "Usage: java Calculator operand1 [+-*/] operand2");
            System.exit(0);
        }
        char operator = args[1].charAt(0);
        int result = 0;  // initialized so the compiler knows result is assigned
        switch (operator) {
            case '+': result = Integer.parseInt(args[0]) + Integer.parseInt(args[2]);
                      break;
            case '-': result = Integer.parseInt(args[0]) - Integer.parseInt(args[2]);
                      break;
            case '*': result = Integer.parseInt(args[0]) * Integer.parseInt(args[2]);
                      break;
            case '/': result = Integer.parseInt(args[0]) / Integer.parseInt(args[2]);
                      break;
            default:  System.out.printf("Illegal operator %c\n", operator);
                      System.exit(0);
        }
        System.out.printf("%s %c %s = %d\n", args[0], operator, args[2], result);
    }
}
The code on the right (only slightly modified from the book) is for a trivial calculator that can do one operation, providing the operation is +, -, *, or /. There are a few points to say about it.
Homework: 9.13, 9.15
One of the major uses of command-line arguments is to tell programs which files to process. We next learn how to do that.
In Java reading and writing files is divided into two tasks.
The File class is used to construct a File object from a file name and to perform certain operations on the object. However, it is not used to create a file, to read a file, or to write a file.
File(String pathname) File(String parent, String child) File(File parent, String child)
The primary form of the File constructor is the first one shown on the right. This form accepts a single String fn as parameter and constructs a file object corresponding to the file with name fn.
This file may or may not exist and the constructor does not create the file in any case. The name may be relative (to the current directory) or may be absolute. One downside of absolute addresses is that they are OS dependent.
The File class includes exists(), canRead(), canWrite(), isDirectory(), isFile(), isAbsolute(), and isHidden().
Their names describe well their actions.
The File class includes getName(), getPath(), and getParent().
They return respectively the file name without any directories, the full file name, and the full file name of the parent.
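A small demonstration (class and path names are mine; the file need not exist for these methods to work):

```java
import java.io.File;

public class FileNameDemo {
    public static void main(String[] args) {
        File f = new File("dir/sub/notes.txt");  // constructing the object creates no file
        System.out.println(f.getName());    // notes.txt
        System.out.println(f.getPath());    // dir/sub/notes.txt (separator is OS-dependent)
        System.out.println(f.getParent());  // dir/sub
        System.out.println(f.exists());     // presumably false: nothing was created
    }
}
```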
System.out.println(new java.util.Date(file.lastModified()));prints the modification date in a human-readable format.
Once we have an object File f1 we can actually perform input/output on the corresponding file in the file system.
One complication that can occur is that the I/O operation can fail. For example, you might try to read from a file that doesn't exist or write to a file for which you do not have the needed permissions. The technical term is that an I/O exception can occur. We will learn about exceptions later; for now we simply add throws Exception to the header line for any method that (directly or indirectly) invokes I/O.
The first step in writing data to a file is to create a PrintWriter object, which in turn needs a File object. We can create both at once with
java.io.PrintWriter output = new java.io.PrintWriter(new java.io.File("x.text"));
If the above looks too imposing, use some import statements.
import java.io.PrintWriter; import java.io.File; PrintWriter output = new PrintWriter(new File("x.text"));
Now we can use output the way we have used System.out in the past. For example we can write output.printf("hello");
After all the data has been sent to the PrintWriter object, its close() instance method should be called.
We have used the Scanner class many times, but always with System.in, which is predefined to correspond to the keyboard. In order to obtain a Scanner capable of reading from the file named existing.text we write
Scanner input = new Scanner(new File("existing.text"));(assuming the necessary imports have been executed).
You can also write this declaration (and the one for PrintWriter) in two steps.
File f = new File("existing.text"); Scanner input = new Scanner(f);
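Putting the two halves together, here is a sketch (class and file names mine) that writes a file with a PrintWriter and then reads it back with a Scanner:

```java
import java.io.File;
import java.io.PrintWriter;
import java.util.Scanner;

public class RoundTrip {
    public static void main(String[] args) throws Exception {
        PrintWriter output = new PrintWriter(new File("x.text"));
        output.println("hello 42");   // use output just like System.out
        output.close();               // close() flushes; forgetting it can lose data

        Scanner input = new Scanner(new File("x.text"));
        String word = input.next();   // "hello"
        int n = input.nextInt();      // 42
        input.close();
        System.out.printf("word=%s n=%d\n", word, n);
    }
}
```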
Remark: Note that the book is rather confusing at the end of this section. It seems to be saying that the behavior is different when you change from an input file to input from the keyboard. As far as I can see, the real change is where the newlines are placed.
import java.util.Scanner; import java.util.Date; import java.io.File; public class Test { public static void main (String[] Args) throws Exception { Scanner input = new Scanner (new File ("t.txt")); Scanner getInput = new Scanner (System.in); String s1 = input.next(); String s2 = input.next(); String line = input.nextLine(); System.out.printf("s1=\"%s\" and s2=\"%s\"\n", s1, s2); System.out.printf("line=\"%s\"\n", line); s1 = getInput.next(); s2 = getInput.next(); line = getInput.nextLine(); System.out.printf("s1=\"%s\" and s2=\"%s\"\n", s1, s2); System.out.printf("line=\"%s\"\n", line); } }
File t.txt contains one line (ending with a newline) 34 567 |<-- the line ends here
I entered the same characters from the keyboard
s1="34" and s2="567" line=" " s1="34" and s2="567" line=" "
The instance methods nextByte(), nextShort(), nextInt(), nextLong(), nextFloat(), nextDouble(), and next() are often referred to as token-reading methods. They all work basically the same and depend on the value of the delimiter, which by default is whitespace. Although there are other characters often considered whitespace, and one can change the delimiter, for now just think of the delimiter as any non-empty mixture of blanks, tabs, and newlines. Then the token-reading methods work as follows.
The nextLine() method is different and is not considered token-reading. It acts as follows.
import java.io.*;
import java.util.*;

public class ReplaceText {
    public static void main(String[] args) throws Exception {
        if (args.length != 4) {
            System.out.println(
                "Usage: java ReplaceText sourceFile targetFile oldStr newStr");
            System.exit(0);
        }
        File sourceFile = new File(args[0]);
        if (!sourceFile.exists()) {
            System.out.println("Source file " + args[0] + " does not exist");
            System.exit(0);
        }
        File targetFile = new File(args[1]);
        if (targetFile.exists()) {
            System.out.println("Target file " + args[1] + " already exists");
            System.exit(0);
        }
        Scanner input = new Scanner(sourceFile);
        PrintWriter output = new PrintWriter(targetFile);
        while (input.hasNext()) {
            String s1 = input.nextLine();
            String s2 = s1.replaceAll(args[2], args[3]);
            output.println(s2);
        }
        input.close();
        output.close();
    }
}
The program on the right illustrates many of the concepts we have just learned and is worth some study.
The program is executed with four command-line arguments, the first two are file names and the second two are strings (technically, all command-line arguments are Strings; the first two are interpreted by the program as names of files).
The program copies the first file to the second replacing all occurrences of the third argument with the fourth argument.
Start Lecture #19
Homework: 9.19.
We have seen already some differences when using objects. With instance methods one of the parameters is the object itself and is written differently. For example given a String s1, we get its length via s1.length() rather than length(s1). Designing (large) systems using objects as centerpieces (so called object-oriented design) has deeper differences from conventional design than the essentially syntactic change in length().
An object is called immutable if, once created, it cannot be changed.
Let me be clearer about that. When we say the object cannot be changed, we mean the data fields of the object cannot be changed.
For example, any String object is immutable (its only data field is the sequence of characters that constitute the string). A class, such as String, all of whose objects are immutable is also called immutable.
public class NoChanges {
    private int x;
    public NoChanges(int z) {
        x = z;
    }
    public int getX() {
        return x;
    }
}

public class Changes {
    public int a;
    public int b;
    public Changes(int r, int s) {
        a = r;
        b = s;
    }
}

public class Maybe {
    private Changes c;
    public Maybe(int w) {
        c = new Changes(w, w);
    }
    public Changes getC() {
        return c;
    }
}

public class Immutable {
    public static void main(String[] Args) {
        NoChanges nc = new NoChanges(5);
        Changes c = new Changes(5, 6);
        c.a = 9;
        Maybe mb = new Maybe(12);
        System.out.println(mb.getC().a);
        // mb.c.a = 1;    will not compile, but
        mb.getC().a = 1;  // works fine
    }
}
What must we do to write an immutable class?
Naturally, we must make all the data fields private, and no public method may change a data field (methods that change fields are often called mutators). But that is not enough!
Looking to the right, clearly the class NoChanges is immutable: the only data field is x, which cannot be directly accessed (it is private). With the accessor getX() we can find the value of x, but cannot change it. Thus once we create a NoChanges object with say
NoChanges nc = new NoChanges(5);then nc.x==5, and this will never change.
In contrast, the class Changes is not immutable (it is mutable). We can say
Changes c = new Changes (5,6); c.a = 9;and have changed the object c by changing its first component from 5 to 9.
The interesting case is class Maybe. At first (and even second) glance it looks like class NoChanges. Indeed each line of Maybe looks like the corresponding line of NoChanges. The only difference is the initialization of c. But just as with NoChanges, we do not have direct access to c (it is private) and the accessor lets us only find c's current value, not change it.
But there is a difference! The variable c is of type Changes, which is an object and hence has reference semantics. Also the object to which we now have a reference is itself mutable. Therefore the accessor has returned a reference that enables us to change the value of the components. Code to exploit this hole is shown in the bottom frame on the right.
Thus there are three requirements for a class to be immutable: all data fields are private; there are no public mutator methods; and no accessor returns a reference to a data field that is itself mutable.
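One standard way to close the hole in class Maybe (a sketch of mine, not from the notes) is for the accessor to return a defensive copy, so callers never obtain a reference to the private mutable field:

```java
class Changes {                      // the mutable class from the example above
    public int a, b;
    public Changes(int r, int s) { a = r; b = s; }
}

public class MaybeFixed {
    private Changes c;
    public MaybeFixed(int w) {
        c = new Changes(w, w);
    }
    public Changes getC() {
        return new Changes(c.a, c.b);    // defensive copy: a fresh object each call
    }
    public static void main(String[] args) {
        MaybeFixed mf = new MaybeFixed(12);
        mf.getC().a = 1;                 // mutates only the copy
        System.out.println(mf.getC().a); // prints 12: the private field is untouched
    }
}
```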
There are two kinds of variables found in a class.
Data fields (a.k.a. class variables). These have scope the entire class (except where hidden, see below), including prior to their declaration. If one data field, say x, is initialized to an expression involving another field y, then y must be declared before x.

Variables (including parameters) declared in a method. These are referred to as local variables.
Naturally, the same variable name can be declared in multiple methods. As we have seen previously, the same name can be declared in disjoint blocks in the same method. In both these cases each instance of the name refers to a separate variable.
If a local variable has the same name as a data field, the field is hidden (i.e., not directly accessible) in that method. For this reason as well as for increased clarity, it is not recommended (but is permitted) to use the same name for both a class variable and a non-parameter local variable.
We shall see next section that it is common to have parameters with the same name as fields.
The special name this is used in several ways with respect to objects and classes. We see two of them on the right as well as an illustration of a few other points. Although the code is quite simple, it warrants some careful study.
public class C { int i; static int si = 4; C (int i) { this.i = i; } C () { // no-arg constructor this(8); } int getI () { // nothing hidden return i; } void setI (int i) { // instance field this.i = i; } void setSi (int si) { // class field C.si = si; } public static void main (String[] args) { C c = new C(); System.out.println(c.i); } }
The class C has two constructors, a get method, two set methods (one for setting each of the two fields), and a main() method.
In the set methods the convention is to use as parameter the name of the field being set, thus hiding the field name. For an instance set method, we need to reference the current object and hence use this, which exposes the hidden field name; for a class (static) field, prefixing the class name (C.si) serves the same purpose.
Class abstraction is the separation of how a class is used from how a class is defined.
Another name is information hiding: the implementation of the class is hidden from the users of the class.
How to use a given class is described by the class specification, also called its public interface. In Java classes, fields and methods declared public form the interface; in contrast, private fields and methods form the implementation.
When looked at from the user's point of view, the class encapsulates the knowledge needed to use these objects. In the next section we discuss an example.
public class CheckingAcct {
    private String acctName;
    private double balance = 0;
    CheckingAcct(String acctName) {
        // body sketched here (elided in the notes) to match the sample output below
        this.acctName = acctName;
        System.out.printf("%40s\n\n", acctName);
        System.out.printf("%-25s %8s %9s\n", "Description", "Amount", "Balance");
    }
    public void makeDeposit(double amt) {
        balance += amt;
        System.out.printf("%-25s %8.2f %9.2f\n", "Deposit", amt, balance);
    }
    public boolean makeWithdrawal(double amt) {
        balance -= amt;
        System.out.printf("%-25s %8.2f %9.2f\n", "Withdrawal", amt, balance);
        return balance >= 0;
    }
}

public class TestChecking {
    public static void main(String[] Args) {
        CheckingAcct acct = new CheckingAcct(
            "Allan Gottlieb's checking acct");
        acct.makeDeposit(100.00);
        acct.makeWithdrawal(30.00);
    }
}
          Allan Gottlieb's checking acct

Description                 Amount   Balance
Deposit                     100.00    100.00
Withdrawal                   30.00     70.00
The book discusses a loan example, which you should read. We will do the beginnings of a checking account example.
On the right is the very beginning of a CheckingAccount class. Currently, we can open an account (in Java-speak we can create a CheckingAccount object), make deposits, and make withdrawals.
The bottom section shows the output produced.
This is not what a bank would use, but what customers would use for themselves. For example, we will not generate monthly statements, but might process a monthly statement received from the bank. Also, banks do not permit withdrawals that drive the balance negative, but we will permit it since the user may know that certain previous debits have not cleared. Most significantly, we do not do any of the hard work actually involving money.
To handle writing a check, we need a data structure for the check itself, including the date, amount, payee, and note. We would also need a data structure for a statement. These would also be objects so would need class definitions. The class for checks might contain no methods.
A CheckingAccount object would need to contain (many instances of) a Check object, each of which would contain String objects for the payee and note.
Since a Check object can belong to only one CheckingAccount object, we call this a composition between the Check class and the CheckingAccount class.
You should read this section in the book.
We will continue with our checking account.
An obvious omission is that we can't handle checks themselves. Before taking on that task, let's try something that should be simpler.
Each transaction should have a date. Many times the date of the transaction is today, but sometimes you might enter the transaction days after it was performed (remember this is our personal account history, not the bank's).
public class TestChecking {
    public static void main(String[] Args) {
        CheckingAcct acct = new CheckingAcct(
            "Allan Gottlieb's checking account");
        acct.makeDeposit(100.0, 7, 15, 2010);
        acct.makeWithdrawal(30.0);
        acct.makeDeposit(50.00);
    }
}
       Allan Gottlieb's checking account

      Date Description                 Amount   Balance
---------- ------------------------- ------- --------
 7/15/2010 Deposit                    100.00    100.00
10/13/2010 Withdrawal                  30.00     70.00
10/13/2010 Deposit                     50.00    120.00
public void makeDeposit (double amt, int month, int day, int year) {...}
import java.util.Date;

private Date now = new Date();
private int nowMonth = now.getMonth() + 1;   // getMonth() is zero-based
private int nowDate = now.getDate();
private int nowYear = now.getYear() + 1900;
Hence the public interface of CheckingAcct must be changed so that there are two makeDeposit() methods, one accepting a date and the other defaulting to today's date. Similarly for makeWithdrawal().
On the right, the top frame is what we expect the client (user) of our class to write. Below that we see the output we wish to have.
What changes are needed to CheckingAcct.java?
(Doing this more generally would require an abstract class, and we don't yet know how to deal with those.)
Here is my version.
import java.util.Date;

public class CheckingAcct {
    private Date now = new Date();
    private int nowMonth = now.getMonth() + 1;   // getMonth() is zero-based
    private int nowDate = now.getDate();
    private int nowYear = now.getYear() + 1900;
    private String acctName;
    private double balance = 0;

    CheckingAcct(String acctName) {
        this.acctName = acctName;
        printHeading();
    }
    public void makeDeposit(double amt) {
        makeDeposit(amt, nowMonth, nowDate, nowYear);
    }
    public void makeDeposit(double amt, int month, int day, int year) {
        balance += amt;
        String dateString = String.format("%d/%d/%d", month, day, year);
        System.out.printf("%10s Deposit %26.2f %9.2f\n", dateString, amt, balance);
    }
    public void makeWithdrawal(double amt) {
        makeWithdrawal(amt, nowMonth, nowDate, nowYear);
    }
    public void makeWithdrawal(double amt, int month, int day, int year) {
        balance -= amt;
        String dateString = String.format("%d/%d/%d", month, day, year);
        System.out.printf("%10s Withdrawal %23.2f %9.2f\n", dateString, amt, balance);
    }
    private void printHeading() {
        System.out.printf("%40s\n\n", acctName);
        System.out.printf("%10s %-25s %8s %9s\n",
                          "Date", "Description", "Amount", "Balance");
        System.out.printf("%s %s %s %s\n",
                          "----------", "-------------------------",
                          "-------", "--------");
    }
}
We already did stacks.
You should read this.
A class should describe a single entity. So having a class StacksAndQueues does not sound like a good idea.
But you might say that both stacks and queues are related; they both have insert and remove methods and perhaps boolean methods saying the structure is full, empty, or neither. We will learn about inheritance in the next chapter.
Follow the Java naming and style conventions (see this short section for a brief review).
Most fields should be private to ease maintenance and modifiability. Provide get and set methods as appropriate. In general only expose to the user what you guarantee will not change.
Read.
Read.
Declare data fields to be static if the value is constant for all objects in the class. Static fields should be set with a set method, not via a constructor (why set the same variable to the same thing every time a new object is made?).
Some methods do not need any object of the class; typically the main method has this property. Such methods must be declared static as there is no object they can be attached to.
Instance methods can reference both instance and class fields, as well as both instance and class methods.
Class methods can reference class methods and fields, but cannot invoke instance methods and fields (what object would the instance method or field refer to?).
Homework: 10.3
The idea of inheritance, which is fundamental in object-oriented programming, is that sometimes classes B and C are refinements of class A and it is silly to have to reproduce all the A methods for both B and C.
For example, you could have a class for quadrilaterals that would have as data fields the four vertices, the color to draw the figure in, and some complicated method for calculating the area.
Then you would have a subclass for a rhombus that would include, in addition, the (unique) side length. You would also have a subclass for rectangles that would override the area method with a much simpler one as well as providing additional data fields for height and width.
The terminology for the situation in the previous section is that quadrilateral is called a superclass, parent class, or base class. Both rectangle and rhombus would be called a subclass, a child class, an extended class, or a derived class.
Consider the example on the right.
public class GeometricObject {
    String color = "blue";
}

public class Point extends GeometricObject {
    double x;
    double y;
}

public class Quadrilateral extends GeometricObject {
    private Point p1, p2, p3, p4;
    // constructors
    double area() {
        // complicated general area formula
    }
    // more
}

public class Rhombus extends Quadrilateral {
    private double sideLength;
    // constructors (check side lengths equal)
    // more
}

public class Rectangle extends Quadrilateral {
    private double width, height;
    // constructors
    double area() {
        return width * height;
    }
    // more
}
Note that Quadrilateral extends GeometricObject, not Point: it is not true that every quadrilateral is a point. The is-a relationship is normally needed for a subclass to be appropriate: a quadrilateral is a geometric object, and every rhombus is a quadrilateral.
A terminology comment. A subclass usually contains more information (data fields) than the superclass. It is called a subclass because each object in the subclass is also a member of the superclass. Thus the set of objects in the subclass can be considered a subset of the set of objects in the superclass.
We mentioned above that there is a complicated formula for the area of a quadrilateral. Just for fun (not part of the course), I worked it out.
Assume the 4 points p1, p2, p3, and p4 are given in the order so that the resulting quadrilateral is convex as shown in the diagram on the right. Draw one of the diagonals, say connecting p1 and p3.
It is easy to get the lengths of each side as the square root of delta-x squared + delta-y squared. So in the diagram we know all the sij.
Now the quadrilateral is composed of two triangles and we know the side lengths for each triangle. There is a cute formula for the area of any triangle with side lengths a, b, and c. Let s be the semiperimeter s=(a+b+c)/2, then the area of the triangle is the square root of s(s-a)(s-b)(s-c).
This analysis enables us to compute the complicated formula for the quadrilateral area mentioned above.
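The triangle piece of the analysis (Heron's formula) is easy to sketch in Java (class and method names are mine):

```java
public class QuadArea {
    // area of a triangle with side lengths a, b, c, via Heron's formula:
    // with semiperimeter s = (a+b+c)/2, area = sqrt(s(s-a)(s-b)(s-c))
    static double triangleArea(double a, double b, double c) {
        double s = (a + b + c) / 2;
        return Math.sqrt(s * (s - a) * (s - b) * (s - c));
    }
    public static void main(String[] args) {
        // the 3-4-5 right triangle has area (3*4)/2 = 6
        System.out.println(triangleArea(3, 4, 5));  // prints 6.0
        // a unit square split by its diagonal gives two triangles of area 1/2 each
        double d = Math.sqrt(2);
        System.out.println(triangleArea(1, 1, d) + triangleArea(1, 1, d));
    }
}
```

Summing triangleArea() over the two triangles formed by a diagonal gives the quadrilateral area described above.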
Start Lecture #20
Recall that this refers to the current object. In an analogous way super refers to the superclass of the current class. It is used to invoke a superclass constructor and to invoke a superclass method.
When we create a new object, a constructor is called. Let's say that class Child is a child of the class Parent and we make a new Child object child. The Child constructor is called and produces/initializes any relevant data fields in child.
But the Child object child contains, in addition to its own data, the data fields in Parent as well. How are these created/initialized?
The answer is that a constructor in the subclass invokes a constructor in the superclass. In our case, constructors in Child invoke constructors in Parent. How is this done? If, in the child constructor, we wrote new Parent(), we would create a new object rather than creating/initializing a part of the current object, namely child.
Indeed, it is an error to invoke the superclass constructor by name inside a subclass; instead super() is used.
public class Rhombus extends Quadrilateral {
    private double sideLength;
    public Rhombus(Point p1, Point p2, Point p3, Point p4) {
        super(p1, p2, p3, p4);
        sideLength = p1.distTo(p2);
    }
}

public class Quadrilateral extends GeometricObject {
    Point p1, p2, p3, p4;
    public Quadrilateral(Point p1, Point p2, Point p3, Point p4) {
        this.p1 = p1;
        this.p2 = p2;
        this.p3 = p3;
        this.p4 = p4;
    }
}
On the right we see a part of an improved class for Rhombus and below it part of the base class Quadrilateral. Although not explicitly included in the class, a derived class inherits the fields from its base. Thus a Rhombus object has 4 Point fields that must be created/initialized by the constructor.
Were there no inheritance involved, the rhombus constructor would have 4 assignment statements using this. However, the points are not declared in Rhombus so should not be referenced by this. Instead we invoke super() to have a constructor in the base class Quadrilateral do the necessary work.
The idea is that the constructors in a derived class deal with the fields introduced in the derived class; dealing with the fields inherited from the base class is left to the base class constructors, furthering abstraction/encapsulation.
When super() is used in a constructor, it must be the first statement.
You might wonder why these same considerations don't apply to Quadrilateral as they did to Rhombus. After all, Quadrilateral is itself a derived class.
The answer is that they do apply; a default super() has been supplied by Java, as will be explained in the next section.
As we have seen, a constructor can invoke another constructor in the same class and, if it is a derived class, can utilize super(). If a constructor in a derived class does neither of these actions, Java itself supplies a no-arg super() as the first statement of the constructor.
As a result of this automatic insertion of super() when needed, if A is derived from B, which in turn is derived from C, any A constructor will, as its first action, invoke a B constructor, which, as its first action, will invoke a C constructor.
Thus the real actions will be performed in the order C, B, A, i.e., from base to derived. This naturally applies as well when there are more than 3 classes.
For example, consider the class tree on the right, which represents the parent-child relations in our geometry example.
The process of applying constructors from base to derived is called constructor chaining.
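The chaining order can be observed directly (a small sketch of mine; the constructors log their names so the order is visible):

```java
import java.util.ArrayList;
import java.util.List;

public class ChainDemo {
    static List<String> log = new ArrayList<>();
    static class A           { A() { log.add("A"); } }
    static class B extends A { B() { log.add("B"); } }
    static class C extends B { C() { log.add("C"); } }
    public static void main(String[] args) {
        new C();                 // Java inserts the super() calls automatically
        System.out.println(log); // prints [A, B, C]: base-to-derived order
    }
}
```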
The keyword super can be used to reference superclass methods as well as superclass constructors. Normally the superclass method would be available in the subclass without any decoration. However, if the superclass and subclass both have a method with the same name, prepending super. references the method in the superclass whereas the naked name references the method in the subclass. We see an example in the next section.
public class Parent {
    int f() { return 1; }
    int g() { return 10; }
}

public class Child extends Parent {
    int f() { return 100; }
    int h() { return f() + 2*super.f() + g(); }
}

public class Test {
    public static void main(String[] args) {
        Parent p = new Parent();
        Child c = new Child();
        System.out.println(c.h());
        System.out.println(c.f() + c.g());
    }
}
112 110
Consider the example on the right. The parent class defines two methods f() and g(). The child class defines two methods f() and h().
In the parent class (for example, inside f()), nothing about the child class is known. So f() means the f() in the parent, the same for g(), and there is no h().
In the child class the situation is more interesting since everything in the parent (that is not private) is known in the child. So g() means the g() in the parent and h() means the h() in the child, but what about f()?
The rule is that a naked f() means the f() in the child. However, the f() in the parent can be obtained as super.f(). That is why the first line printed, c.h(), is 100+2*1+10.
What about usage in the class Test, a client of the child class. In the client, super cannot be used so c.f() in the example invokes the f() in the child. Since g() occurs only in the parent c.g() refers to that g(), which explains why the second line printed is 100+10.
The example above used instance methods in both the parent and child. If a method f() in the child has the same signature as f() in the parent, the two must be either both static or both non-static.
If they are both static, then the child's method hides (rather than overrides) the parent's, and the method invoked is determined at compile time by the declared type of the reference, not by the run-time class of the object.
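A short sketch (class names mine) showing the static case: the declared type, not the object's class, picks the method.

```java
public class HideDemo {
    static class Parent { static String who() { return "Parent"; } }
    static class Child extends Parent { static String who() { return "Child"; } }
    public static void main(String[] args) {
        Parent p = new Child();
        // static methods are resolved by the declared (compile-time) type of p
        System.out.println(p.who());      // prints Parent
        System.out.println(Child.who());  // prints Child
    }
}
```

Calling a static method through an instance reference, as in p.who(), is legal but discouraged; writing Parent.who() makes the resolution explicit.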
Start Lecture #21
public class Base {
    public void f(double x) {
        System.out.printf("Base says x=%f\n", x);
    }
}

public class Derived extends Base {
    public void f(double x) {
        System.out.printf("Derived says x=%f\n", x);
    }
}

public class Test {
    public static void main(String[] Args) {
        Derived d = new Derived();
        d.f(5.0);
        d.f(5);
    }
}

Derived says x=5.000000
Derived says x=5.000000
public class Derived extends Base {
    public void f(int x) {
        System.out.printf("Derived says x=%d\n", x);
    }
}

Base says x=5.000000
Derived says x=5
It is important to understand the distinction between overriding, which we just learned, and overloading, which we have seen previously.
The first frame on the right demonstrates overriding. Both the base class and the derived class define an instance method f() with the same signature (i.e., the same number and types of parameters) and the same return type. In such a case, an object in the derived class sees only the method defined in the derived class.
In the example, both d.f(5.0) and d.f(5) invoke the f() in the child class. In the second case the 5 is coerced from int to double.
The second example is almost the same, but with a key difference. Both Base and Test are unchanged. The f() in Derived is changed so that its parameter is of type int (and the printf() now uses %d, not %f).
This tiny change has a large effect.
The f() in the derived class no longer has the same signature as the f() defined in the base. Thus, both are available (i.e., f() is overloaded).
For this reason the invocation f(5) invokes the f() in the derived class; whereas, the f(5.0) invokes the f() in the base.
Let's look at a larger example. I have filled in some of the details for the geometric methods sketched above.
public class GeometricObject {
    protected String color = "blue";
}

public class Point extends GeometricObject {
    Double x, y;
    public Point(Double x, Double y) {
        this.x = x;
        this.y = y;
    }
    Double distTo(Point p) {
        return Math.sqrt(Math.pow(this.x-p.x, 2) + Math.pow(this.y-p.y, 2));
    }
}

public class Quadrilateral extends GeometricObject {
    protected Point p1, p2, p3, p4;
    public Quadrilateral(Point p1, Point p2, Point p3, Point p4) {
        this.p1 = p1;
        this.p2 = p2;
        this.p3 = p3;
        this.p4 = p4;
    }
    double area() {
        return 123.4; // A stub, need to use complicated formula
    }
}

// The Rectangle and Rhombus class headers and constructors were lost from these
// notes; the versions below are reconstructed to be consistent with TestQuad
// and the output shown.
public class Rectangle extends Quadrilateral {
    private double width, height;
    public Rectangle(Point p1, Point p2, Point p3, Point p4) {
        super(p1, p2, p3, p4);
        width  = p1.distTo(p2);
        height = p2.distTo(p3);
    }
    public Rectangle(Point p1, Point p3) { // opposite corners, sides axis-parallel
        this(p1, new Point(p3.x, p1.y), p3, new Point(p1.x, p3.y));
    }
    double getWidth()  { return width;  }
    double getHeight() { return height; }
    double area() {
        return width * height;
    }
    void areaCheck() {
        System.out.printf("Rectangle says %f, but quad says %f\n",
                          this.area(), super.area());
    }
}

public class Rhombus extends Quadrilateral {
    private double sideLength;
    public Rhombus(Point p1, Point p2, Point p3, Point p4) {
        super(p1, p2, p3, p4);
        sideLength = p1.distTo(p2);
        if (p2.distTo(p3) != sideLength || p3.distTo(p4) != sideLength
                || p4.distTo(p1) != sideLength)
            System.out.println("Warning: the four side lengths are not equal");
    }
    double getSideLength() { return sideLength; }
}

public class TestQuad {
    public static void main(String[] args) {
        Point origin = new Point(0.0, 0.0);
        Point p1 = new Point(0.0, 0.0);
        Point p2 = new Point(1.0, 0.0);
        Point p3 = new Point(1.0, 1.0);
        Point p4 = new Point(0.0, 1.0);
        Point p5 = new Point(5.0, 5.0);
        Quadrilateral quad = new Quadrilateral(p1, p2, p3, p4);
        Rectangle rect1 = new Rectangle(p1, p2, p3, p4);
        Rectangle rect2 = new Rectangle(p1, p3);
        Rectangle rect3 = new Rectangle(origin, new Point(1.0, 4.0));
        System.out.printf("rect1 has width=%f, height=%f, and color=%s\n",
                          rect1.getWidth(), rect1.getHeight(), rect1.color);
        System.out.printf("rect2 has width=%f, height=%f, and color=%s\n",
                          rect2.getWidth(), rect2.getHeight(), rect2.color);
        System.out.printf("rect3 has width=%f, height=%f, and color=%s\n",
                          rect3.getWidth(), rect3.getHeight(), rect3.color);
        rect1.areaCheck();
        Rhombus rhom1 = new Rhombus(p1, p2, p3, p4);
        System.out.printf("rhom1 has side length=%f\n", rhom1.getSideLength());
        Rhombus rhomBad = new Rhombus(p1, p2, p3, p5);
    }
}
rect1 has width=1.000000, height=1.000000, and color=blue
rect2 has width=1.000000, height=1.000000, and color=blue
rect3 has width=1.000000, height=4.000000, and color=blue
Rectangle says 1.000000, but quad says 123.400000
rhom1 has side length=1.000000
Where in the class tree should we put a new class Circle?
Where in the class tree should we put a new class Square?
Where in the class tree should we put a new class Ellipse?
You would think from looking at a class definition beginning
public class TopLevel {

with no extends clause that this class has no parent. That, however, is wrong.
Any class without an extends clause, whether defined in the Java library or by you and me, actually extends the built in class java.lang.Object.
Hence, any method defined in Object is available to all classes, unless it is overridden.
One interesting instance method defined in Object is
toString(), which when applied to an object, produces
a string describing the object.
The actual description given is the class name, followed by
@
followed by the memory address of the object.
The class name is certainly useful, the memory address at least
enables us to distinguish one object from another.
One advantage of having a toString() method defined for all objects is that other methods can count on its existence. For example the various print methods (e.g., printf(), println(), etc) use this to coerce objects to strings. Thus if obj is any object at all,
printf("The object is %s\n", obj);

is guaranteed to work.
I use toString() in the 2nd version of the geometry classes.
We have already seen encapsulation/abstraction and inheritance, which are major constituents of object oriented programming. We now turn our attention to polymorphism (and dynamic binding/dispatch), the third major constituent).
public static void printColor(GeometricObject geoObj) {
    System.out.printf("The color of %s is %s\n", geoObj.toString(), geoObj.getColor());
}
Say we are given the geometry classes and want to print the color. We don't want to add this print to the geometry classes since it is useful only for us and not all geometry users. We do modify GeometricObject to add a get method for the color and then write ourselves the simple method on the right. Now if we call the method with any GeometricObject all is well.
But we actually don't have variables of type GeometricObject; instead we have variables of various subtypes such as Rhombus or Point. Thus the natural call printColor(rhom1) would seem to be a type error: we are passing a Rhombus as the argument, but the method has a GeometricObject as its parameter.
But no it is not an error!
A variable of a
supertype (the type defined by a superclass)
can refer to an object of any of its
subtypes.
This concept of polymorphism can also be stated
in terms of objects as
an object of a subclass can be used anywhere that an object of its
superclass could be used
.
In section 11.B we see an enhanced version of the geometry example that includes printColor() and a few polymorphic calls.
How should we implement printArea() in our geometry classes?
It would be a one-line method in Rectangle.java, namely
void printArea() {
    System.out.printf("The area of %s is %f\n", this.toString(), this.area());
}

(Recall that the Rectangle class already has the method area() and inherits toString() from Object).
But the exact same (character for character) one-line method printArea() would be perfect in the class Rhombus since Rhombus inherits area() from Quadrilateral and toString() from Object.
Once we put a trivial area() method in Point (it always returns 0), the exact same printArea() method would work there as well.
So is that the solution, cut-and-paste the identical method in all the classes? It would work, but it sure seems ugly.
Since GeometricObject is at the top of our geometry class hierarchy, why not put printArea() there? Since all the geometric classes extend GeometricObject, this has the effect of placing printArea in all of them.
Hence when, for example, we write rect1.printArea(), we will invoke the printArea() written in GeometricObject. Since the object in question is rect1, when printArea() issues this.area(), the area() in Rectangle will be called as desired.
Success!
The code is in section 11.B.
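As a quick self-contained illustration of this pattern (the class and method names here are invented stand-ins, not the actual classes of section 11.B): the base class supplies the printing method, and the this.area() inside it is dispatched on the actual type of the object.

```java
// Base class provides describeArea(); subclasses only override area().
class Geo {
    double area() { return 0.0; }   // stub, overridden in subclasses
    String describeArea() { return "area is " + this.area(); }
}

class Rect extends Geo {
    double w, h;
    Rect(double w, double h) { this.w = w; this.h = h; }
    @Override double area() { return w * h; }   // selected via this.area()
}

public class DispatchDemo {
    public static void main(String[] args) {
        Geo g = new Rect(2.0, 3.0);
        System.out.println(g.describeArea());   // prints "area is 6.0"
    }
}
```

Even though describeArea() is written once in the base class, the area() it invokes is the one belonging to the object's actual class.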
Let's say that we don't want to add printArea() to any of the geometry classes since it is somewhat special purpose. So we write the following one-line method in our main class
static void myPrintArea(GeometricObject geoObj) {
    System.out.printf("My area of %s is %f\n", geoObj.toString(), geoObj.area());
}

and in the main() method we write myPrintArea(rhom1).
At first glance this looks unlikely to compile since we have a type mismatch. The argument to myPrintArea() is a Rhombus, but the parameter is a GeometricObject. However, we know that polymorphism will save us, the parameter of type GeometricObject can legally point to an object of type Rhombus.
At second glance, it looks like it will work, but poorly. When compiling myPrintArea(), Java will see geoObj.area() where geoObj is of type GeometricObject and will thus always invoke the area() method in the class GeometricObject no matter what class the actual argument is in. This is bad, we have different formulas for the area of points, general quadrilaterals, and rectangles.
Failure!
Wrong! It actually works great, we now have to understand how.
As we know well, Java requires that, prior to using a variable, we must declare it to be of a specific type (int, Object, Point, etc).
Previously, we have called the type in the declaration, simply the type of the variable. But now, in the presence of objects and polymorphism, we need to make a finer distinction and refer to the type in the declaration as the declared type of the variable.
When the declared type is a primitive such as int or char, there is very little more to say. The variable will always contain a value of its (declared) type. Recall that a double x=3; converts the 3 to floating point, which is then stored in x. The variable always contains a value of its (declared) type.
When the declared type is class, (e.g., String, or Rectangle), the situation is more interesting. First, we remember that a variable of declared type Rectangle never contains a rectangle. It often does contain a reference to a rectangle, but never contains the object itself (reference vs. value semantics).
Even more interesting, a variable of declared type GeometricObject can, due to polymorphism, at times contain a reference to a Rectangle, at other times a reference to a Quadrilateral or a Point; and a variable of declared type Object can refer to any object at all.
The type of the object to which a variable refers is called its actual type.
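A tiny illustration of the declared/actual distinction (Base and Derived are invented names for this sketch):

```java
class Base    { String who() { return "Base"; } }
class Derived extends Base { @Override String who() { return "Derived"; } }

public class TypesDemo {
    public static void main(String[] args) {
        Base b = new Derived();   // declared type Base, actual type Derived
        System.out.println(b.who());                      // Derived: actual type wins
        System.out.println(b.getClass().getSimpleName()); // Derived
    }
}
```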
Recall the situation.
We have a method myPrintArea() defined in the main class
that has one parameter geoObj, whose declared type
is GeometricObject.
We call this method in several places with arguments of various
geometric types.
myPrintArea() contains a call of geoObj.area().
We previously concluded that, since the
type
of
geoObj is GeometricObject, the call would
always be to the area() method in GeometricObject.
We are wiser now and can question this conclusion. If myPrintArea() was called with an argument that referenced a Rectangle, then geoObj will indeed have declared type GeometricObject, but it will have actual type Rectangle. Thus the question becomes, does the method invocation geoObj.area() select the area() method to call based on the declared type of geoObj or on its actual type. Another wording is does geoObj.area() use static dispatch (declared type) or dynamic dispatch (actual type).
The answer in Java is that it uses dynamic dispatch, and hence our second try does indeed work. This situation is illustrated in the code constituting section 11.B.
In C++ static dispatch is used by default but
dynamic dispatch can be chosen by declaring the method to be
virtual.
Start Lecture #22
int i1=1, i2=2, i3=3;
double d1=1., d2=2., d3=3.E50;
d1 = i1;        // or d1 = (double)i1;
// i2 = d2;     Compile-time error
i2 = (int)d2;   // Fine
i3 = (int)d3;   // Gives wrong answer
Recall the situation for primitive types. Java performs safe type conversions (called coercions) automatically, but will not perform unsafe conversions, unless told by an explicit cast. Referring to the code on the right we see an illustration that
We now want to understand the corresponding actions for objects in the class hierarchy rather than for primitive types. In this situation the terminology is upcasting and downcasting.
The familiar diagram on the right motivates the names.
Upcasting refers to a conversion from a class lower in the diagram to a class higher in the diagram (really there is the class Object above GeometricObject).
Downcasting refers to a conversion from a class higher in the diagram to one lower in the diagram.
Parent p1 = new Parent();
Parent p2 = new Child();      // NOT Parent()
Parent p3 = new Parent();
Child c1 = new Child();
Child c2 = new Child();
Child c3 = new Child();
p1 = c1;
p1 = (Parent) c1;
// c2 = p2;                   compile-time error
c2 = (Child)p2;
// c3 = (Child)p3;            run-time error
The code on the right is the rough analogue for objects of the code above for primitive types. Note that the variable p2 after initialization has declared type Parent, but actual type Child. This situation is not present with primitive types since the variable contains the value and hence both are of the same type; whereas, variables of any object type contain a reference (to an object) and the object itself has a type.
Referring to the code we see
The general rules are (compare with primitive types above):
We have just seen that downcasting with an explicit cast will compile but might fail at run-time depending on the actual type of the variable at that time.
What should we do to avoid this run-time error?
Parent p;
Child c;
// much code involving p and c
c = (Child) p;
if (p instanceof Child)
    c = (Child) p;
The code on the right gives one technique. The omitted code could be complicated and, depending on various values that are known only at run time, might result in the actual type of p being either parent or child.
We wish to do the downcast only if legal, i.e., only if the actual type of p is Child (or a descendant of Child). The instanceof operator is defined for exactly this purpose: it returns true if the object on the left is in the class on the right (or a descendant of that class).
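Here is a small self-contained sketch of the guarded downcast (the extra field and the -1 sentinel are invented for the example):

```java
class Parent { }
class Child extends Parent { int extra = 42; }

public class DowncastDemo {
    // Downcast only when the actual type permits it.
    public static int safeExtra(Parent p) {
        if (p instanceof Child)
            return ((Child) p).extra;   // safe: actual type is Child
        return -1;                      // sentinel: p is not a Child
    }

    public static void main(String[] args) {
        System.out.println(safeExtra(new Child()));    // 42
        System.out.println(safeExtra(new Parent()));   // -1
    }
}
```

Without the instanceof guard, the cast on a plain Parent would throw a ClassCastException at run time.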
Why would I want to upcast, and why would I ever want to try to downcast to something not matching the actual type of the variable?
Consider some sort of container class, i.e., a class each of whose instance is some container of GeometricObject's. For some examples consider
Each of these three examples illustrates a container for Object's. Let's say in some geometry program you have the array geoObjArr onto which you have placed various geometric objects, some points, some rhombuses, etc.
If you try something like geoObjArr[i].sideLength, you will get a compile-time error since geoObjArr[i] has declared type GeometricObject, which has no sideLength field.
A naked assignment to a variable of declared type Rhombus will not compile, so you need an explicit cast to do the downcast. But this will give a run-time error if the actual type of geoObjArr[i] is not a Rhombus (or a descendant).
Thus you need an if statement with instanceof as shown above.

Here is the Point class from the second version of the geometry code; note the added area() method, which returns 0:

public class Point extends GeometricObject {
    public double x, y;
    public Point(double x, double y) {
        this.x = x;
        this.y = y;
    }
    public double distTo (Point p) {
        return Math.sqrt(Math.pow(this.x-p.x,2)+Math.pow(this.y-p.y,2));
    }
    public double area() { return 0; }
}
The area of Point@4830c221
On the board let's develop a Circle class with data fields center and radius.
The normal constructor is given the center point and the radius.
Homework: 11.1. UML diagram not required. The area formula they reference is the same one I used in the Quadrilateral class. The filled data field is in the book's GeometricObject but not in ours. You may ignore it.
Homework: Use cut and paste to extract our geometry classes from the notes. Add your triangle class from the homework above to these files and incorporate your tests of triangle into our TestQuad (which should be named TestGeom).
Start Lecture #23
Remark: Lab 4 Assigned.
public boolean equals (Object obj) {
    return (this == obj);
}
String s1="xy", s2="xy";
if (s1.equals(s2))
    System.out.println("yes");
public boolean equals(Object obj) {
    if (obj instanceof Circle)
        return radius == ((Circle)obj).radius;
    return false;
}
In addition to toString(), which we learned earlier, the Object class has an instance method equals(), shown on the right.
Note carefully that obj1.equals(obj2), checks if
the variables are ==, which means that both variables refer to the
same object.
Often you want equals() to return true if the
two objects have the same contents.
For example the String class overrides equals() so
that the middle code on the right prints
yes.
If the bottom code is placed in the Circle class, then two circles c1 and c2 with equal radii will result in c1.equals(c2) yielding true.
public boolean equals(Circle c) {
    return radius == c.radius;
}
The equals() code on the right looks simpler than the one above and seems to do the same job: when two circles are compared, their radii are checked.
The difference is that the equals() in 11.10 overrides the equals() defined in Object since both have the same signature. In contrast, the equals() on the right has a different signature and thus we have two overloaded implementations of equals().
In the overloaded case, the choice of which equals() to invoke depends on the declared type of the argument. In the overriding case, the choice depends on the actual type of the object on which the method is invoked.
Thus, if we have an object array Object[] obj and a circle is placed on this array, then a call such as obj[i].equals(x) is resolved among overloads using only the declared type Object, so the overloaded equals(Circle) can never be chosen; the overriding equals(Object), however, is still found by dynamic dispatch on the actual type.
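A self-contained sketch of the overriding case (the class name Circ is invented here to avoid clashing with our Circle class; hashCode is omitted for brevity):

```java
class Circ {
    double radius;
    Circ(double r) { radius = r; }

    // OVERRIDES Object.equals: found by dynamic dispatch on the receiver,
    // even when the variables have declared type Object.
    @Override public boolean equals(Object obj) {
        if (obj instanceof Circ)
            return radius == ((Circ) obj).radius;
        return false;
    }
}

public class EqualsDemo {
    public static void main(String[] args) {
        Object o1 = new Circ(2.0);   // declared type Object, actual type Circ
        Object o2 = new Circ(2.0);
        System.out.println(o1.equals(o2));   // true: Circ's override runs
    }
}
```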
It is straightforward to create an array of Object's; just do it. The fancy ArrayList permits all sorts of extra operations. If you look on http://java.sun.com, you will find that there are even more goodies than the book suggests, as well as some nifty performance guarantees.
Another example of the justly-deserved fame accorded the Java libraries.
Shows how to use the ArrayList class to design a stack class that holds arbitrary objects.
Since we will not be studying packages (and they are primarily useful for large projects and libraries, which we are not writing), for us the rules are simple.
For a class itself, the only legal modifiers are public and the default (no modifier)
For members of a class (i.e., for fields and methods)
We have seen the final keyword used for constants.
It has other analogous uses as well.
We have seen several places where errors can occur that can be detected only at run time. For example, a PrintWriter might be trying to write a file for which the user doesn't have sufficient privileges. Another example, this one from our geometry methods occurs when the Rhombus constructor detects that the given points do not yield a quadrilateral with all sides equal.
In Java, situations like this are often handled with exceptions, which we now study. The top frame on the right shows (a slight variant of) the body of the Rhombus constructor.
The frame below it shows the client code (try to forget that I am the author of both). Without any such knowledge, the package author terminated the run, as there was no better way to let the client know that a problem occurred. The first improvement is to add another rhombus constructor. This one takes a point, a side-length, and an angle and produces the rhombus shown in the diagram on the top right. The constructor itself is in the 2nd frame.
Note that if Θ=Π/2, the rhombus is a square.
This improvement has little to do with exceptions and
could have been made on its own. In the 4th frame we see the try and catch. What happens is that the client calls the constructor, which raises (throws in Java-speak) the exception. Since the constructor does not catch this exception, Java automatically raises it in the caller, namely the client code, where it is finally caught.
It is this automatic call-back that exceptions provide that the original code does not.
This code includes my implementation of the Circle class.

public class Point extends GeometricObject {
    protected double x, y;
    public Point(double x, double y) {
        this.x = x;
        this.y = y;
    }
    public double distTo (Point p) {
        return Math.sqrt(Math.pow(this.x-p.x,2)+Math.pow(this.y-p.y,2));
    }
    public double area() { return 0; }
    public double getX() { return this.x; }
    public double getY() { return this.y; }
}

public class Circle extends GeometricObject {
    public Point center;
    public double radius;
    private static double maxRadius = -1;
    private static Circle maxCircle = null;
    public Circle(Point center, double radius) {
        this.center = center;
        this.radius = radius;
        this.color = "pink";
        if (radius > maxRadius) {
            maxRadius = radius;
            maxCircle = this;
        }
    }
    public Circle(double radius) { this(new Point(0.,0.), radius); }
    public Circle(Point center) { this(center, 1.0); }
    public Circle() { this(1.0); }
    public double area () { return Math.PI * radius * radius; }
    public static Circle largestCircle() {
        if (maxRadius < 0)
            System.out.println("No circles created, so maximum circle is null");
        return maxCircle;
    }
    public boolean equals(Object obj) {
        if (obj instanceof Circle)
            return radius == ((Circle)obj).radius;
        return false;
    }
}

The relevant portion of the test program:

Circle c1 = new Circle();
Circle c2 = new Circle(2.);
Circle c3 = new Circle();
System.out.println(c3.equals(c1));
Circle c = Circle.largestCircle();
c.printArea();
System.out.println(c.getColor());
rhom1 error: Rhombus with unequal sides. Use unit square.
The area of Point@40a0dcd9
true
The area of Circle@2e471e30 is 12.566371
pink
Our use of exceptions so far has just been to define one of our own. Specifically, we threw a new Exception. Every exception is an object and thus is a member of some class. So our exception was a member of the class Exception.
Many of the Java library methods can throw exceptions as well. For example, the Scanner constructor we use to create a Scanner object will throw an exception in the FileNotFoundException class if the argument to the Scanner constructor names a file that does not exist.
The diagram on the right shows the class tree for exceptions.
As always, Object is the root of the tree and naturally has many children in addition to Throwable, which is shown. As the name suggests, the Throwable class includes objects that can be thrown, which are exceptions.
From our simplified point of view, there are three important classes of exceptions, namely Error, Exception, and RuntimeException. They are highlighted in the diagram.
Java divides exceptions into two groups: checked exceptions and unchecked exceptions. Exceptions in (a descendant of) Error or RuntimeException are unchecked. All other exceptions in (a descendant of) Exception are checked.
The difference for us between checked and unchecked exceptions is that the header of any method that can throw a checked exception must declare this fact in its header line, as does, for example, the constructor from the Rhombus class.
Please note two important points.
The purpose of this declaration is to alert users (clients) of the method that the exception may be raised while executing the client code since the throw is not caught internally and thus would propagate to the caller.
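A small sketch of the checked/unchecked difference (the method names and messages here are invented):

```java
public class CheckedDemo {
    // A checked exception must appear in the throws clause ...
    static void mightFail(boolean fail) throws Exception {
        if (fail) throw new Exception("something went wrong");
    }

    // ... while an unchecked (RuntimeException) one need not.
    static void mightBlowUp(boolean fail) {
        if (fail) throw new IllegalStateException("bad state");
    }

    public static void main(String[] args) {
        try {
            mightFail(true);
        } catch (Exception ex) {
            System.out.println("caught: " + ex.getMessage());
        }
        mightBlowUp(false);   // compiles with no try and no throws clause
    }
}
```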
public class Test {
    public static void main(String[] args) throws Exception {
        throw new Exception();
        // A second example (it would be unreachable after the throw above):
        // throw new java.nio.BufferOverflowException();
    }
}
public class MyException extends Exception {}

public class Test {
    static MyException myE = new MyException();
    public static void main(String[] args) throws MyException {
        throw myE;
    }
}
Throwing an exception is easy; just throw it. But first you need to create an object in some subclass of Throwable (i.e., a member of Error, Exception, or a descendant of one of these).
The first frame shows two examples of throwing predefined exceptions. Since BufferOverflowException is a subclass of RuntimeException (a red box), it does not appear in the throws clause.
The second frame shows an example of creating a custom exception class and throwing an instance of this class. If MyException extended a red box rather than the blue one, the throws in the method header would not be needed.
Catching an exception is a two step procedure involving try and catch blocks.
If an exception occurs in a statement not within a try block, the exception is not caught by this method. Instead, the current method ends and the same exception is raised in the statement that called the method. If the current method is the main method, the program is terminated.
The above paragraph is for
red-box
(i.e., unchecked) exceptions.
Java will not compile a method in which a blue-box (checked) exception might occur outside a try block, unless the method declares that exception in its throws clause.
try {
    statements;
} catch (Exception1 ex1) {
    handler for exception1;
} catch (Exception2 ex2) {
    handler for exception2;
...
} catch (ExceptionN exN) {
    handler for exceptionN;
}
statements after the catches
Thus, to catch an exception you must first delimit a sequence of statements within a try block, as shown on the right.
If no exception occurs within the try block, the corresponding catch blocks are ignored.
If an exception does occur within the try block, the corresponding catch blocks are searched to see if one catches this class of exception. If such a block is found, its handler is executed followed by the statements after the catches.
If no catch block matches, the exception is NOT caught and once again, the current method is ended and the exception is re-raised in the statement that called the method.
The order of the catch blocks is significant as they are checked for matching in the order they are written.
Note that if a catch block is declared to handle exceptions in a class C, it actually catches exceptions in any descendant class of C as well.
As a result, if a later catch block handles a subclass of a prior catch block, the later catch block can never be executed. Indeed, it is a compile-time error to list catch blocks in this nonsensical order.
Draw on the board a deeper example with f() calling g() calling h(), handlers in various places, and h() raising various exceptions.
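A minimal version of such a chain might look like this (the method names f, g, h match the board example; the exception message is invented):

```java
public class PropagationDemo {
    static void h() {
        throw new RuntimeException("raised in h");
    }

    static void g() { h(); }   // no handler here: the exception passes through

    static void f() {          // the handler here finally catches it
        try {
            g();
        } catch (RuntimeException ex) {
            System.out.println("f caught: " + ex.getMessage());
        }
    }

    public static void main(String[] args) { f(); }
}
```

The exception raised in h() is not caught in h() or g(), so it propagates up the call stack until f()'s catch block matches.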
So far, inside catch (Exception ex) we have not used the object ex at all. Moreover sometimes in a throw we have included a string, which again we have not used.
The key to using the exception object and the string included with the throw is to employ one of the methods supplied by the class Throwable. Two useful instance methods are getMessage(), which is used in our geometry program, and printStackTrace().
Our geometry program uses getMessage(), which returns the string argument passed to the throw clause. As its name suggests printStackTrace prints a trace of the call stack (main called joe, joe called sam, sam threw an exception).
You should read the book's example. Here is the relevant portion of the geometry example.
rhom1 error: Rhombus with unequal sides. Use unit square.
The constructor call inside the try is clearly not a rhombus since one side has length zero and the other three are not zero. Hence the constructor throws (but does not catch) a new Exception.
The main method executes the constructor inside a try block so, if the exception is thrown, the catch block is checked and sure enough the only catch there matches. Hence ex becomes the new Exception.
The handler first prints the message included in the throw (via ex.getMessage()) and then announces that it will use a unit square instead. The unit square is obtained by invoking the other rhombus constructor, the one that takes the side length and angle. This handler cannot itself raise an exception.
Start Lecture #24
Homework: 13.1.
Homework: Do 13.3 two ways.
Out of Bounds.
Remark: Lab 4 is accessible directly off the home page.
try {
    try block
} catch (something) {
    catch block
} finally {
    finally block
}
the rest
In addition to try and catch blocks there is occasionally a finally block that gets executed in any case. Consider the example on the right (there can be many catch blocks).
As mentioned above, it is silly to use an exception as a complicated if statement. It is the automatic up-call from the called method back to the caller that exceptions provide and simple code does not.
try {
    statements;
} catch (SomeException ex) {
    statements;
    throw ex;
}
A catch block can re-throw the exception it has just caught so that the exception is also handled in the caller.
Typical syntax is shown on the right.
Another possibility is that a catch block for one exception can throw a different exception.
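A self-contained sketch of re-throwing (the method names and message are invented): the middle layer logs the problem but still lets its caller handle it.

```java
public class RethrowDemo {
    static void worker() throws Exception {
        throw new Exception("low-level failure");
    }

    static void middle() throws Exception {
        try {
            worker();
        } catch (Exception ex) {
            System.out.println("middle logs: " + ex.getMessage());
            throw ex;   // re-throw so the caller also sees the exception
        }
    }

    public static void main(String[] args) {
        try {
            middle();
        } catch (Exception ex) {
            System.out.println("main handles: " + ex.getMessage());
        }
    }
}
```

Note that middle() must still declare throws Exception, since the re-thrown (checked) exception escapes it.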
Most of the exceptions raised either have no argument or have a string as an argument. The exception in the Rhombus class has a string argument.
public class MyException extends Exception {
    MyException(int code, String str) {
        super(String.format("Exception raised: (code=%d) %s\n", code, str));
    }
}
However, you can define your own subclass of the Exception class (say MyException) and have a MyException constructor take whatever arguments you want. The example on the right takes an integer code in addition to the usual string.
import java.io.*;   // FileReader/Writer, IOException, FileNotFoundException

public class CopyFile {
    public static void main(String[] args) throws IOException {
        FileReader getInput  = null;
        FileWriter putOutput = null;
        try {
            getInput = new FileReader(args[0]);
        } catch (FileNotFoundException fileNotFoundEx) {
            System.out.printf("Can not read %s\n", args[0]);
            System.exit(0);
        }
        try {
            putOutput = new FileWriter(args[1]);
        } catch (IOException IOEx) {
            System.out.printf("Can not write %s\n", args[1]);
            System.exit(0);
        }
        // If an IOException occurs here, we can't help
        int c;
        while (getInput.ready()) {   // false at EOF
            c = getInput.read();
            putOutput.write(c);
        }
        putOutput.flush();
    }
}
The program on the right copies one file to another. Each file is specified as a command line argument. That is to run the program one would type.
java CopyFile inputFileName outputFileName
The program uses exceptions to detect improper input. Several points are worth noting.
We shall soon see that it is sometimes useful to define methods without bodies (called abstract methods) just to act as placeholders for real methods that will override the abstract method.
Consider the area() methods sprinkled throughout our geometry example. In particular, look at the area() method defined in the GeometricObject class, shown to the right.
The method certainly does not compute the area and the important point is that there is no way it can compute the area of a GeometricObject itself since the only field guaranteed to exist is the color.
The reason for including area() here was twofold.
The problem with the area() method in the GeometricObject class is that it only reminds us to override area() in each class derived from GeometricObject. We want more than a reminder, we want a guarantee.
Another advantage of abstract methods is that they permit us to have interfaces, which we will very briefly touch on soon.
public abstract class GeometricObject {
    protected String color = "blue";
    public void printArea() {
        System.out.printf("The area of %s is %f\n", this.toString(), this.area());
    }
    public String getColor() { return color; }
    public abstract double area();   // must be overridden
}
On the right we see a reimplementation of GeometricObject as an abstract class. Note that the area() method has no body and cannot be called directly.
The compiler ensures that this abstract method is overridden in the (concrete) subclasses of GeometricObject.
Since this class contains an abstract method, the class must itself be declared abstract, as we have done on the first line.
You can declare a variable of type GeometricObject, but you cannot construct an object of the class. Thus a variable of declared type GeometricObject can never have actual type GeometricObject. That is, the value of the variable must be an object of a subclass of GeometricObject. Specifically a GeometricObject variable can contain a reference to a Rectangle, a Point, a Circle, etc. In all these cases the abstract area() method in GeometricObject is overridden and becomes a real (i.e., non-abstract) method.
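A compact illustration of these rules (Shape and Sq are invented stand-ins, not our geometry classes):

```java
abstract class Shape {
    abstract double area();                        // no body: must be overridden
    String report() { return "area = " + area(); } // may still call the abstract method
}

class Sq extends Shape {
    double side;
    Sq(double side) { this.side = side; }
    @Override double area() { return side * side; }
}

public class AbstractDemo {
    public static void main(String[] args) {
        // Shape s = new Shape();   // would NOT compile: Shape is abstract
        Shape s = new Sq(3.0);      // but a Shape variable may refer to a Sq
        System.out.println(s.report());   // area = 9.0
    }
}
```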
Start Lecture #25
If all methods in a class are abstract and all data fields are constants (static final in Java), the class is called an interface.
Imagine that you want to write a class MyClass that extends two base classes ClassA and ClassB. That is, a MyClass object has all the data fields of both ClassA and ClassB (plus any others you add to MyClass). Also MyClass starts with all the methods of the two base classes (minus any that are overridden in MyClass, plus any new methods added in MyClass).
public class MyClass extends ClassA, ClassB {
    statements;
}
You would try to write the class as shown on the right.
This is called
multiple inheritance since MyClass
inherits from multiple base classes.
However, Java does not support multiple inheritance
(although some languages, notably C++, do support it).
One question with multiple inheritance is what to do if two base classes include methods with the same signature but different bodies.
public class MyClass implements InterfaceA, InterfaceB {
    statements;
}
Java does support a somewhat similar notation. If ClassA and ClassB are very simple, specifically if they each consist of just constants and abstract methods, then they are called interfaces, say InterfaceA and InterfaceB, and MyClass can be defined as shown on the right. Since all methods in an interface are abstract and hence body-free, the question mentioned just above for multiple inheritance cannot arise.
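A tiny sketch of implementing two interfaces at once (the interface and class names are invented):

```java
interface Named  { String name(); }
interface Scored { int score(); }

// A class may implement any number of interfaces,
// even though it can extend only one class.
class Player implements Named, Scored {
    public String name() { return "alice"; }
    public int score()   { return 10; }
}

public class InterfacesDemo {
    public static void main(String[] args) {
        Player p = new Player();
        Named  n = p;   // a Player is usable through either interface type
        Scored s = p;
        System.out.println(n.name() + " " + s.score());   // alice 10
    }
}
```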
// Interface for comparing objects.
// Defined in java.lang
package java.lang;

public interface Comparable {
    public abstract int compareTo(Object o);
}
This entire interface is shown on the right. We have not studied packages so ignore the package statement.
Many classes in the java library implement this interface, which means that they supply an integer valued compareTo() instance method having one parameter, an Object.
For all these classes: if the instance object is less than the argument object, the value returned is negative; if the two are equal, the value returned is 0; and if the instance object is greater than the argument object, the value returned is positive.

        // ... (the start of main(), which reads the student records from stu.db, is not shown)
        sort(n, student);
        System.out.printf("Sorted by name\n");
        for (int i=0; i<n; i++) {
            System.out.printf("%s\t%3d\n", student[i].name, student[i].stuId);
        }
    }

    public static void sort(int n, Student[] student) {
        for (int i=0; i<n-1; i++)
            for (int j=i+1; j<n; j++)
                if (student[i].compareTo(student[j]) > 0) {
                    Student tmpStudent = student[i];
                    student[i] = student[j];
                    student[j] = tmpStudent;
                }
    }
}
The stu.db file:

10
1 Robert
2 John
3 Alice
4 Jessica
5 Sam
6 Sarah
7 Mary
8 Judy
9 Alice
10 Harry

The output:

Sorted by name
Alice     3
Alice     9
Harry    10
Jessica   4
John      2
Judy      8
Mary      7
Robert    1
Sam       5
Sarah     6
The example on the right illustrates a class extending Comparable as well as reviewing a number of concepts we have learned previously.
As an added benefit, it previews sorting arrays of objects, which we will study very soon.
The top frame shows a very simple Student class that implements the Comparable interface. The class contains
Note that, although compareTo() will generate a runtime error unless its argument can be cast to Student, the parameter itself is of type Object. Indeed, in order to implement Comparable, the defined compareTo() must override the compareTo() in Comparable and thus must have an Object parameter.
The second frame shows a test program using the Student class. The first loop reads in student IDs and names from a database file stu.db (shown in the third frame) and calls the Student() constructor to create the entries in the student array.
After the student records are read into the student array, the sort routine is invoked.
Finally the records are printed, to verify that they are now in sorted order.
You should compare the sort() method, which appears right after the main() method in the second frame, to the alphabetizing routine (sorting strings) that we did in section 9.2.13.
To conserve vertical space, the third frame contains two items. On the left is the stu.db file. It begins with a count of the number of students followed by the records themselves, one per line.
On the right is the output of the TestStudent program.
As expected, the records are now sorted by the name field.
Since the name
Alice appeared in two input records, it also
appears twice in the output.
Since the sort used is
stable, the relative order of the two
Alice records is preserved.
As mentioned previously, Java does not support multiple inheritance, but a class can implement multiple interfaces. This is not as hard as it sounds, since interfaces are much simpler than classes.
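As a small illustration of implementing several interfaces at once (the Printable interface and the Point class here are invented for the example, not taken from the course notes):

```java
// A class may implement any number of interfaces.
// Printable is a made-up interface for this illustration.
interface Printable {
    String asLine();
}

class Point implements Comparable, Printable {
    int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }

    // From Comparable: order points by x, then by y.
    public int compareTo(Object obj) {
        Point p = (Point) obj;
        if (x != p.x) return x - p.x;
        return y - p.y;
    }

    // From Printable.
    public String asLine() { return "(" + x + "," + y + ")"; }
}
```

A Point can now be handed to any code expecting a Comparable (such as the generic sort below) and to any code expecting a Printable.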
For performance reasons, the Java primitive types (int, char, etc) are not objects. Reference semantics requires an extra level of indirection.
However, the full power of object orientation does require objects,
so, for each primitive type, there is a corresponding, so called,
wrapper class with a very similar name.
We have already seen Character, the wrapper class for
char.
Also present are Byte, Short, Integer,
Long, Boolean, Float,
and Double.
Double d = new Double(4.3); Character c = new Character('c');
To create a Double object, we naturally use a Double() constructor. The argument is a double (lower case, i.e., a primitive type). Examples for Double() and Character() are shown on the right. The Float(), Long(), etc. constructors are just the same.
public static void sort(int n, Student[] student) {
    for (int i=0; i<n-1; i++)
        for (int j=i+1; j<n; j++)
            if (student[i].compareTo(student[j]) > 0) {
                Student tmpStudent = student[i];
                student[i] = student[j];
                student[j] = tmpStudent;
            }
}
public static void sort(int n, Comparable[] obj) {
    for (int i=0; i<n-1; i++)
        for (int j=i+1; j<n; j++)
            if (obj[i].compareTo(obj[j]) > 0) {
                Comparable tmpObj = obj[i];
                obj[i] = obj[j];
                obj[j] = tmpObj;
            }
}
Recall the sort routine in Section 14.A, which has been reproduced on the right.
There is something curious about this routine: It sorts items in the Student class, but uses nothing about the Student class except for the compareTo method, which is present in any class implementing Comparable.
This raises the question of what would need to be changed for this sort to work for any class implementing Comparable.
The answer is that essentially nothing has to be changed. Specifically, the two occurrences of Student need to be changed to Comparable. Although not necessary, the code on the right changes other names as well. In particular, the variable name student would be misleading, so it has been changed to obj.
When this new version is dropped into TestStudent, the
results are the same (ignoring the
unchecked warning, see
below).
The input is sorted alphabetically on the name field.

        Float[] f = {new Float(3.), new Float(0.), new Float(-5.)};
        Long[] l = {new Long(333), new Long(0), new Long(-555)};
        // Sort and print
        sort(3, student);
        sort(3, f);
        sort(3, l);
        System.out.printf("Sorted by name\n");
        for (int i=0; i<3; i++) {
            System.out.printf("%s\t%3d %6.1f %5d\n",
                student[i].name, student[i].stuId, f[i], l[i]);
        }
    }

    public static void sort(int n, Comparable[] obj) {
        for (int i=0; i<n-1; i++)
            for (int j=i+1; j<n; j++)
                if (obj[i].compareTo(obj[j]) > 0) {
                    Comparable tmpObj = obj[i];
                    obj[i] = obj[j];
                    obj[j] = tmpObj;
                }
    }
}
The great news is that the identical sorting routine can be used to sort objects of any class that implements Comparable.
On the right is an expanded version of TestStudent. Note the following points.
The above program, if run on a modern Java system, will give a somewhat cryptic warning and then work fine. If the same program is run on an older (version 1.4) Java, and the printf() methods replaced by println(), there would not be any warnings and again it would work.
This difficulty has to do with the addition of
Java generics in version 1.5, and the resulting addition
of Comparable<T>.
The previous section illustrated both an advantage and a slight
annoyance with the
wrapper classes Double,
Character, etc. when compared to the corresponding
primitive data types.
The advantage is that, since the wrappers are classes, they can make use of many object oriented features. The minor annoyance is the relative wordiness of
Short s[] = {new Short((short) 5), new Short((short) 7)};
when compared to
short s[] = {5, 7};
In most cases the annoyance can be avoided because Java will automatically convert between the primitive type and the corresponding wrapper class. Conversion to the wrapper class is called boxing and the reverse conversion is called unboxing.
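Before the full example, a tiny sketch of boxing and unboxing in action (the class and method names here are mine, not from the notes):

```java
class BoxingDemo {
    // The Integer parameter is unboxed automatically when used with +,
    // exercising both directions of the conversion.
    public static int addBoxed(Integer a, int b) {
        return a + b;   // a is unboxed to int here
    }

    public static void main(String[] args) {
        Integer boxed = 7;    // boxing: int -> Integer
        int plain = boxed;    // unboxing: Integer -> int
        System.out.println(addBoxed(boxed, plain - 4));  // prints 10
    }
}
```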
As an illustration, here is yet one more version of the TestStudent class, this time using autoboxing and also sorting many more types. For completeness we repeat the Student class and the student database.

        Float[] f = {3.F, 0.F, -5.F, 2.5F, -8.F, 8.2F, 7.8F, 0.F, -5.F, -8.F};
        Double[] d = {3., 0., -5., 2.5, -8., 8.2, 7.8, 0., -5., -8.};
        Byte[] b = {33, 0, -55, 2, -8, 8, 7, 0, -5, -8};
        Short[] s = {333, 0, -555, 2, -8, 8, 7, 0, -5, -8};
        Integer[] g = {333, 0, -555, 2, -8, 8, 7, 0, -5, -8};
        Long[] l = {333L, 0L, -555L, 2L, -8L, 8L, 7L, 0L, -5L, -8L};
        Character[] c = {'q', '8', 'A', 'a', 'Z', 'q', 'z', 'Q', '8', '8'};
        String[] t = {"q", "8", "A", "a", "Z", "q", "z", "Q", "88", "78"};
        // Sort and print
        sort(10, student);
        sort(10, f);
        sort(10, d);
        sort(10, b);
        sort(10, s);
        sort(10, g);
        sort(10, l);
        sort(10, c);
        sort(10, t);
        System.out.printf("Sorted by name\n");
        for (int i=0; i<10; i++) {
            System.out.printf("%s\t%3d %6.1f %6.1f %5d %5d %5d %5d %5c %5s\n",
                student[i].name, student[i].stuId, f[i], d[i], b[i],
                g[i], s[i], l[i], c[i], t[i]);
        }
    }

    public static void sort(int n, Comparable[] obj) {
        for (int i=0; i<n-1; i++)
            for (int j=i+1; j<n; j++)
                if (obj[i].compareTo(obj[j]) > 0) {
                    Comparable tmpObj = obj[i];
                    obj[i] = obj[j];
                    obj[j] = tmpObj;
                }
    }
}
Sorted by name
Alice	  3   -8.0   -8.0   -55  -555  -555  -555     8    78
Alice	  9   -8.0   -8.0    -8    -8    -8    -8     8     8
Harry	 10   -5.0   -5.0    -8    -8    -8    -8     8    88
Jessica	  4   -5.0   -5.0    -5    -5    -5    -5     A     A
John	  2    0.0    0.0     0     0     0     0     Q     Q
Judy	  8    0.0    0.0     0     0     0     0     Z     Z
Mary	  7    2.5    2.5     2     2     2     2     a     a
Robert	  1    3.0    3.0     7     7     7     7     q     q
Sam	  5    7.8    7.8     8     8     8     8     q     q
Sarah	  6    8.2    8.2    33   333   333   333     z     z
10
 1 Robert
 2 John
 3 Alice
 4 Jessica
 5 Sam
 6 Sarah
 7 Mary
 8 Judy
 9 Alice
10 Harry
Homework: 14.5 (omit UML diagram).
A method is recursive if it directly or indirectly calls itself. So if the method f() invokes f(), we call f() recursive. Also, if f() invokes g() and g() invokes f(), we call both f() and g() recursive.
For an example, consider two mathematical functions f() and g() defined on the integers by these three rules.

    f(0) = 0
    f(n) = f(n-1) + g(n-1)   for n > 0
    g(n) = f(n) + 1
For example let's compute f(3)
f(3) = f(2) + g(2)
     = f(1) + g(1) + f(2) + 1
     = f(0) + g(0) + f(1) + 1 + f(1) + g(1) + 1
     = 0 + f(0) + 1 + f(0) + g(0) + 1 + f(0) + g(0) + f(1) + 1 + 1
     = 0 + 0 + 1 + 0 + f(0) + 1 + 1 + 0 + f(0) + 1 + f(0) + g(0) + 1 + 1
     = 0 + 0 + 1 + 0 + 0 + 1 + 1 + 0 + 0 + 1 + 0 + g(0) + 1 + 1
     = 0 + 0 + 1 + 0 + 0 + 1 + 1 + 0 + 0 + 1 + 0 + f(0) + 1 + 1 + 1
     = 0 + 0 + 1 + 0 + 0 + 1 + 1 + 0 + 0 + 1 + 0 + 0 + 1 + 1 + 1
     = 7
public class FG {
    public static void main(String[] arg) {
        for (int i=0; i<10; i++)
            System.out.printf("%2d %4d\n", i, f(i));
    }

    public static int f(int n) {
        if (n <= 0)
            return 0;
        return f(n-1) + g(n-1);
    }

    public static int g(int n) { return f(n) + 1; }
}

 0    0
 1    1
 2    3
 3    7
 4   15
 5   31
 6   63
 7  127
 8  255
 9  511
The program is quite simple and is shown on the right together with the output produced.
What is surprising is that it works! After all when we invoke f(3), this sets n=3 and invokes f(2), which sets n=2. So now in the same method, namely f(), the same variable, namely n, has two different values at the same time, namely 3 and 2.
How can this be?
Naturally, the situation gets even worse when we consider the call of f(9), and worse still when we remember that f() calls g(), which in turn calls f(). There will be very many values for n in f() at the same time.
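One way to convince yourself that each pending invocation holds its own copy of n is to add tracing to f(); this instrumented version is an illustration, not part of the original program:

```java
class TraceF {
    // Same f() and g() as in class FG, with tracing added to show
    // that every pending call keeps its own value of n.
    public static int f(int n) {
        System.out.println("entering f, n = " + n);
        int result = (n <= 0) ? 0 : f(n-1) + g(n-1);
        System.out.println("leaving  f, n = " + n);  // n unchanged by the inner calls
        return result;
    }

    public static int g(int n) { return f(n) + 1; }

    public static void main(String[] args) {
        System.out.println("f(3) = " + f(3));
    }
}
```

Each call pushes a fresh n onto the run-time stack, so the value printed on the way out of a call always matches the value printed on the way in, even though deeper calls used smaller values in between.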
public class Factorial {
    public static void main(String[] arg) {
        System.out.printf(" n Recursive Iterative\n");
        for (int i=-1; i<10; i++)
            if (i<0)
                System.out.printf("\t\t\t\t(%d)! is not defined!!\n", i);
            else
                System.out.printf("%2d %8d %8d\n", i, recFac(i), iterFac(i));
    }

    public static int recFac(int n) {
        if (n <= 1)
            return 1;
        else
            return n * recFac(n-1);
    }

    public static int iterFac(int n) {
        int fact = 1;
        if (n > 1)
            for (int i=2; i<=n; i++)
                fact = fact * i;
        return fact;
    }
}
 n Recursive Iterative
				(-1)! is not defined!!
 0        1        1
 1        1        1
 2        2        2
 3        6        6
 4       24       24
 5      120      120
 6      720      720
 7     5040     5040
 8    40320    40320
 9   362880   362880
Factorial is a very easy function to compute either with or without recursion. It is normally written using ! rather than the usual parenthesized notation. That is, instead of factorial(7), we usually write 7!
For a positive integer n, n! is defined to be
1 * 2 * 3 * ... * n
For example 5! = 1 * 2 * 3 * 4 * 5 = 120
If instead of defining n! to be multiplying from 1 up to n, we define n! (equivalently) to be multiplying from n down to 1, we see that the definition is the same as defining (recursively)
n! = n * (n-1)!
It is conventional to define 0! = 1.
Sometimes factorial is defined to be 1 for negative numbers as well, but I believe it is more common to consider factorial undefined for negative numbers.
The program on the right computes factorial twice, once with each definition. The factorial methods would return 1 if given a negative argument, but the main program declares this to be an error.
As in the previous section, perhaps the most interesting question is "How can the Java program possibly work with n having so many different values at the same time?"
On the board show the stack growing and shrinking when computing recFac(4).
Start Lecture #25
Homework: 14.5 (omit UML diagram).
public class Fibon {
    public static void main(String[] arg) {
        System.out.printf(" n Recursive Iterative\n");
        for (int i=-1; i<10; i++)
            if (i<0)
                System.out.printf("\t\t\t\tFibonacci is not defined for %d\n", i);
            else
                System.out.printf("%2d %8d %8d\n", i, recFibon(i), iterFibon(i));
    }

    public static int recFibon(int n) {
        if (n <= 1)
            return 1;
        else
            return recFibon(n-1) + recFibon(n-2);
    }

    public static int iterFibon(int n) {
        int a=1, b=1, c=1;
        if (n>1)
            for (int i=2; i<=n; i++) {
                c = a + b;
                a = b;
                b = c;
            }
        return c;
    }
}
 n Recursive Iterative
				Fibonacci is not defined for -1
 0        1        1
 1        1        1
 2        2        2
 3        3        3
 4        5        5
 5        8        8
 6       13       13
 7       21       21
 8       34       34
 9       55       55
The Fibonacci sequence is normally defined recursively by the following two rules.

    fib(0) = fib(1) = 1
    fib(n) = fib(n-1) + fib(n-2)   for n > 1
From the formulas above, we see that the sequence begins
1, 1, 2, 3, 5, 8, 13, 21, 34, ...
The limiting ratio f(n)/f(n-1)
as n approaches infinity is called the
golden mean and comes up in a number of biological settings.
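A quick numerical sketch of that limiting ratio (fib() below is the iterative version from the notes; the cutoff of n=20 is an arbitrary choice of mine):

```java
class GoldenMean {
    // Iterative Fibonacci, as in class Fibon.
    public static int fib(int n) {
        int a = 1, b = 1, c = 1;
        for (int i = 2; i <= n; i++) { c = a + b; a = b; b = c; }
        return c;
    }

    public static void main(String[] args) {
        // The ratios fib(n)/fib(n-1) approach the golden mean,
        // (1 + sqrt(5)) / 2, approximately 1.618.
        for (int n = 2; n <= 20; n++)
            System.out.printf("%2d %8.6f\n", n, (double) fib(n) / fib(n-1));
    }
}
```

Already by n = 20 the ratio agrees with the golden mean to six decimal places.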
The code for both recursive and iterative solutions is on the right.
Again the methods liberally define Fibonacci values for negative n, but the main() method flags it as an error.
Draw on the board the diagram showing all the recursive calls that
occur when you invoke recFibon(4).
It is important, when designing a recursive solution, that you avoid
infinite recursion, in which the method f()
always calls itself.
There must be a so-called
base case that can be solved
directly.
For example, in the factorial problem, we specified that 0!=1.
Another requirement is that when you are not in the base case, the
recursive calls bring you
closer to the base case.
For example, in the factorial problem (excluding negative values),
if n>0, then we define n! = n*(n-1)!,
which means that when trying to compute factorial for n, we
need to compute factorial for n-1.
Since the base case is n=0, n-1 is closer to the
base case than is n.
Often the base case occurs for a small value of an integer parameter, (e.g., n for factorial or Fibonacci, the length for many string problems, array bounds for some array problems, etc). Then the recursive call has a smaller value of the parameter than the original and is thus closer to the base case.
Thus, many times the high-level structure of a recursive solution is
if (the parameters fit the base case) return the solution directly else call the method recursively with a smaller parameter value return the answer using the answer from the recursive call
Here is an (inefficient, overly complex) method of adding non-negative integers that uses the above pattern.
public static int sum(int x, int y) {
    if (y == 0)
        return x;
    int z = sum(x, y-1);
    return z + 1;
}
Sometimes we have more than one base case. For example, consider the isPalindrome() procedure on the right. Recall that a string is a palindrome if it reads the same from left to right as from right to left.
public static boolean isPalindrome(String s) {
    if (s.length() <= 1)                          // base case 1
        return true;
    if (s.charAt(0) != s.charAt(s.length()-1))    // base case 2
        return false;
    return isPalindrome(s.substring(1, s.length()-1));
}
One base case is that an empty string or a string of length one is a palindrome.
Another base case is that if the first and last characters are not equal, then it is not a palindrome.
If neither base case applies, then the original string is a palindrome if and only if the substring omitting the first and last characters is a palindrome.
The above code is from the book. I might prefer to view the solution with only one base case as follows.
public static boolean isPalindrome(String s) {
    if (s.length() <= 1)   // base case
        return true;
    return (s.charAt(0) == s.charAt(s.length()-1))
        && isPalindrome(s.substring(1, s.length()-1));
}
The base case is that empty and length 1 strings are palindromes.
If we are not in the base case, the original string is a palindrome if and only if (1) the first and last characters are the same and (2) the substring omitting these two characters is a palindrome. This interpretation gives rise to the code on the right.
Sometimes it is easier or more efficient to solve a more general problem. For example, an inefficiency in the above palindrome program is that at each recursive call it builds a new string when in fact all that we need to do is to restrict our attention to a contiguous portion of the original string.
public static boolean isPalindrome(String s, int lo, int hi) {
    if (hi < lo)   // base case
        return true;
    return (s.charAt(lo) == s.charAt(hi))
        && isPalindrome(s, lo+1, hi-1);
}
So instead of asking if a string is a palindrome, we ask the more general question: is the portion of the string from position lo to position hi a palindrome? We then recursively restrict the range lo...hi, while keeping the same string. This program is shown in the top frame on the right.
One objection to the program is that it is more awkward for the user, who would typically invoke the program as isPalindrome(s,0,s.length()-1). In other words, the helper program may have helped the implementer (in this case to make their program more efficient), but it sure didn't help the user, who much preferred writing isPalindrome(s).
public static boolean isPalindrome(String s) {
    return isPalindrome(s, 0, s.length()-1);
}
To answer this objection, we write a second isPalindrome() method that meets the user's expectation and properly invokes the first isPalindrome() method. The code is shown on the right. Note that the method name isPalindrome() is overloaded: If invoked with just a string argument, the second version is called; if invoked, with a string and two integers, the first version is called.
On the right is a preliminary look at an important data structure in 102: a binary tree.
These trees have two kinds of nodes: interior nodes drawn as squares, and leaves drawn as circles. Each interior node has two children; each leaf has no children. A node also contains data, in this case a character.
This data structure is called a binary tree.
public class Tree {
    char data;
    Tree left;
    Tree right;

    Tree(char data, Tree left, Tree right) {
        this.data = data;
        this.left = left;
        this.right = right;
    }

    Tree(char data) {   // constructs a leaf
        this(data, null, null);
    }
}
The code on the right is a class for this data structure. Note that we use the name Tree when the object is in fact the single node of a tree.
We see that a Tree has three components, two Tree's and a char, which seems crazy! How can a Tree contain 2 Tree's?
The answer is an old friend, reference semantics. The variables left and right do not contain trees, but instead contain references to trees. Thus a Tree object contains two references to Tree's and a char.
Informally, you can think of a tree (i.e., a tree node) as it is shown in the diagram. That is, you can view a node as containing two arrows and a character.
public class Tree {
    char data;
    Tree left;
    Tree right;

    Tree(char data, Tree left, Tree right) {
        this.data = data;
        this.left = left;
        this.right = right;
    }

    Tree(char data) {   // constructs a leaf
        this(data, null, null);
    }

    public void preOrderTraverse() {
        System.out.printf("%c ", this.data);
        if (this.left != null) {   // leaf?
            this.left.preOrderTraverse();
            this.right.preOrderTraverse();
        }
    }

    public void postOrderTraverse() {
        if (this.left != null) {   // leaf?
            this.left.postOrderTraverse();
            this.right.postOrderTraverse();
        }
        System.out.printf("%c ", this.data);
    }

    public void inOrderTraverse() {
        if (this.left != null) {   // leaf?
            this.left.inOrderTraverse();
        }
        System.out.printf("%c ", this.data);
        if (this.right != null) {  // leaf?
            this.right.inOrderTraverse();
        }
    }

    public static void main(String[] arg) {
        Tree t1 = new Tree('A');
        Tree t2 = new Tree('l');
        t1 = new Tree('C', t1, t2);
        t2 = new Tree('l');
        Tree t3 = new Tree('S', t1, t2);
        t1 = new Tree('a');
        t2 = new Tree('n');
        t1 = new Tree('1', t1, t2);
        t2 = new Tree('G');
        t1 = new Tree('0', t1, t2);
        t1 = new Tree('1', t3, t1);
        System.out.println("Preorder Traversal");
        t1.preOrderTraverse();
        System.out.printf("\n\nPostorder Traversal\n");
        t1.postOrderTraverse();
        System.out.printf("\n\nInorder Traversal\n");
        t1.inOrderTraverse();
    }
}
Preorder Traversal
1 S C A l l 0 1 a n G

Postorder Traversal
A l C l S a n 1 G 0 1

Inorder Traversal
A C l S l 1 a 1 n 0 G
The program on the right constructs the tree in the diagram and then traverses the tree in three different orders.
First let's ignore the three traverse methods and concentrate on how the beginning of the main() method constructs the desired tree, which I have redrawn here for convenience.
To construct a leaf we use the second constructor and thus supply only the data item, which in this small example is simply a character. The left and right fields are set to null by the constructor. In main() I deliberately reused some Tree variables as I believe it is instructive.
To construct an interior node, we use the first construct and thus need to supply two child trees in addition to the data. This means we must construct the children before constructing the interior node. The tree is constructed from the bottom up.
On the board execute each line of main() showing the variable name, the reference, and the referred to Tree. As execution proceeds, you will see why we called the objects trees not nodes.
In a tree traversal all the tree nodes are visited. In the three traversals we study, each node is visited exactly once. In general, what to do during a visit is specified by the user. In our example, visiting a node consists simply of printing the character data item.
How do we ensure we visit each node (exactly once) and in what order should we visit them?
Three orderings are common: pre/post/in-order. In all three left children are visited before right children. The pre/post/in refers to when a node is visited in relation to its children.
On the board, without looking at the code, manually perform all three traversals.
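As a tiny worked example, take a three-node tree with root 'B' and leaf children 'A' and 'C': preorder visits B A C, postorder visits A C B, and inorder visits A B C. The sketch below mirrors the Tree class above but collects the visit sequence into a string so the three orders are easy to compare (the class name and methods here are mine):

```java
class TinyTree {
    char data;
    TinyTree left, right;
    TinyTree(char d, TinyTree l, TinyTree r) { data = d; left = l; right = r; }

    // Preorder: node first, then both subtrees.
    void pre(StringBuilder sb) {
        sb.append(data);
        if (left != null) { left.pre(sb); right.pre(sb); }
    }

    // Inorder: left subtree, then node, then right subtree.
    void in(StringBuilder sb) {
        if (left != null) left.in(sb);
        sb.append(data);
        if (right != null) right.in(sb);
    }

    // Postorder: both subtrees, then the node.
    void post(StringBuilder sb) {
        if (left != null) { left.post(sb); right.post(sb); }
        sb.append(data);
    }
}
```

Building new TinyTree('B', new TinyTree('A', null, null), new TinyTree('C', null, null)) and running the three methods yields "BAC", "ABC", and "ACB" respectively.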
Now that we understand trees and how to recursively walk through them, we can solve a number of problems. We will just look at a familiar structure, the file system tree.
In this tree the interior nodes are the (sub-)directories and the leaves are the simple files (in 202 you will learn that file system trees are more complicated than this and actually are not trees, but we ignore that complication).
import java.io.File;
import java.util.Scanner;

public class DirOrFileSize {
    public static void main(String[] args) {
        System.out.print("Enter a directory or a file: ");
        Scanner input = new Scanner(System.in);
        String directory = input.nextLine();
        System.out.printf("There are %d bytes.\n", getSize(new File(directory)));
    }

    public static long getSize(File file) {
        long size = 0;                // Size of this file or directory
        if (! file.isDirectory())     // Base case, 1 file
            size += file.length();
        else {                        // All files and subdirectories
            File[] files = file.listFiles();
            for (int i = 0; i < files.length; i++)
                size += getSize(files[i]);
        }
        return size;
    }
}
Note that a directory can have any number of files and sub-directories so not all interior nodes have exactly 2 children. Thus we do not have a binary tree and cannot use inorder traversal.
The program on the right reads in a file or directory name and prints the total size of all the files there. Specifically, if given a file it gives the size of the file; whereas, if given a directory, it prints the size of all the files under that directory (which includes files under subdirectories of the directory).
The base case is a simple file and then the size is obtained by calling the File.length() library routine.
When encountering a directory getSize() sums the sizes of all the files and subdirectories in the given directory. Note that this involves many recursive calls, one for each file and subdirectory.
Consider the file system tree just below, where boxes are directories and circles are ordinary files.
import java.io.File;
import java.util.Scanner;

public class ListFiles {
    public static void main(String[] args) {
        System.out.print("Enter a directory or a file: ");
        Scanner getInput = new Scanner(System.in);
        String dirOrFile = getInput.next();
        listFiles(new File(dirOrFile), "");
    }

    public static void listFiles(File dirOrFile, String indent) {
        System.out.println(indent + dirOrFile.getName());
        if (! dirOrFile.isDirectory())
            return;   // base case -- simple file
        File[] dirOrFiles = dirOrFile.listFiles();
        for (int i = 0; i < dirOrFiles.length; i++)
            listFiles(dirOrFiles[i], indent + "  ");
        return;
    }
}
The program on the top right produces the output on the bottom right, when the program is run on this file system.
T E F b y A a x y D
Note that the items within the a directory appear below it and are indented. All the work is done by listFiles(). Its first parameter is the file or directory to list and the second argument is the indentation to use.
The base case is when the first argument is a file, which is simply listed using the File.getName() library routine.
When given a directory, listFiles() finds all the files and subdirectories and then calls itself recursively on each one giving an indentation 2 greater than its own indentation.
We already discussed this a little.
Show demo on emacs.
The End: Good luck on the final
Description:
------------
Build fails when trying to use system GD
--with-gd=shared,/usr
This is because gdhelpers only provided by bundled library.
Trivial patch proposal :
--- ext/gd/gd.c.orig 2008-09-07 08:53:38.000000000 +0200
+++ ext/gd/gd.c 2008-09-07 08:54:03.000000000 +0200
@@ -74,7 +74,9 @@
#include <gdfontmb.h> /* 3 Medium bold font */
#include <gdfontl.h> /* 4 Large font */
#include <gdfontg.h> /* 5 Giant font */
+#if HAVE_GD_BUNDLED
#include <gdhelpers.h>
+#endif
#ifdef HAVE_GD_WBMP
#include "libgd/wbmp.h"
Reproduce code:
---------------
make
Expected result:
----------------
Build complete.
Actual result:
--------------
Build failed.
Had the same problem and came up with similar fix. Would someone apply
this patch to CVS? My karma is not sufficient.
This patch is not correct, as then the functions defined in gdhelpers.h will not be declared and the default signatures will be used. Please do not apply it; I will fix the issue while working on another similar issue in gd.
I had the same issue yesterday and switched to the bundled GD to make it
compile. Now, it would be interesting to know what functionality
difference this means and what the other issue is, this is related to.
The bundled GD is the recommended version anyway. It has more bug fixes and features than almost all GD distributions out there (the worst is Debian's gd, which should not be used at all).
Functions like imagefilter, imagerotate are not available when an external GD is used.
php6 may support external GD but only a recent version and only when a minimum set of features are enabled.
Thanks for letting us know why specifically the bundled libgd is superior (the imagefilter and imagerotate functions). Given this, I will switch MacPorts php5 back to using the bundled libgd.
You must admit however there is room for confusion. The first sentence on the installation instructions [1] says "If you have the GD library (available at ?) you will also be able to create and manipulate images." And the libgd homepage in turn says "The library [...] is now maintained by Pierre-A. Joye under the umbrella of PHP.net." This gives the impression that the standalone libgd is being developed by the same people who develop PHP's bundled libgd. The note further down the installation instructions reads "Since PHP 4.3 there is a bundled version of the GD lib [which] should be used in preference to the external library since its codebase is better maintained and more stable." Because PHP 4.3 is so old, someone reading this note might well assume the information is outdated, and that since libgd is now under PHP.net's umbrella, the unique changes in PHP's bundled libgd are now in the standalone libgd as well.
php5 has supported standalone libgd, up to and including php 5.3.0 alpha 1. So it would be good if php 5.3.0 final did not break this. Or, if it is your intention to break this, then do so with a friendlier message in the configure phase and update the documentation.
[1]
This is still an issue in 5.3.0RC1. If building against an external version of GD is no longer supported, the configure script should probably be updated to throw an error or at least a warning should one try to do so. Currently, the build just fails during "make" with the error:
ext/gd/gd.c:72:23: error: gdhelpers.h: No such file or directory
leaving the user with no idea what to do.
Here's a quick patch that removes support for building against an external libgd and replaces it with an error message. It seems to work as intended but I am by no means experienced with the autoconf system.
--- php-5.3.0RC1.orig/ext/gd/config.m4 2009-01-14 13:05:59.000000000 -0600
+++ php-5.3.0RC1/ext/gd/config.m4 2009-03-27 13:42:01.071603975 -0500
@@ -262,7 +262,6 @@
dnl
if test "$PHP_GD" = "yes"; then
- GD_MODULE_TYPE=builtin
extra_sources="libgd/gd.c libgd/gd_gd.c libgd/gd_gd2.c libgd/gd_io.c libgd/gd_io_dp.c \
libgd/gd_io_file.c libgd/gd_ss.c libgd/gd_io_ss.c libgd/gd_png.c libgd/gd_jpeg.c \
libgd/gdxpm.c libgd/gdfontt.c libgd/gdfonts.c libgd/gdfontmb.c libgd/gdfontl.c \
@@ -339,57 +338,7 @@
else
if test "$PHP_GD" != "no"; then
- GD_MODULE_TYPE=external
- extra_sources="gdcache.c"
-
-dnl Various checks for GD features
- PHP_GD_ZLIB
- PHP_GD_TTSTR
- PHP_GD_JPEG
- PHP_GD_PNG
- PHP_GD_XPM
- PHP_GD_FREETYPE2
- PHP_GD_T1LIB
-
-dnl Header path
- for i in include/gd1.3 include/gd include gd1.3 gd ""; do
- test -f "$PHP_GD/$i/gd.h" && GD_INCLUDE="$PHP_GD/$i"
- done
-
-dnl Library path
- for i in $PHP_LIBDIR/gd1.3 $PHP_LIBDIR/gd $PHP_LIBDIR gd1.3 gd ""; do
- test -f "$PHP_GD/$i/libgd.$SHLIB_SUFFIX_NAME" || test -f "$PHP_GD/$i/libgd.a" && GD_LIB="$PHP_GD/$i"
- done
-
- if test -n "$GD_INCLUDE" && test -n "$GD_LIB"; then
- PHP_ADD_LIBRARY_WITH_PATH(gd, $GD_LIB, GD_SHARED_LIBADD)
- AC_DEFINE(HAVE_LIBGD,1,[ ])
- PHP_GD_CHECK_VERSION
- elif test -z "$GD_INCLUDE"; then
- AC_MSG_ERROR([Unable to find gd.h anywhere under $PHP_GD])
- else
- AC_MSG_ERROR([Unable to find libgd.(a|so) anywhere under $PHP_GD])
- fi
-
- PHP_EXPAND_PATH($GD_INCLUDE, GD_INCLUDE)
-
- dnl
- dnl Check for gd 2.0.4 greater availability
- dnl
- old_CPPFLAGS=$CPPFLAGS
- CPPFLAGS=-I$GD_INCLUDE
- AC_TRY_COMPILE([
-#include <gd.h>
-#include <stdlib.h>
- ], [
-gdIOCtx *ctx;
-ctx = malloc(sizeof(gdIOCtx));
-ctx->gd_free = 1;
- ], [
- AC_DEFINE(HAVE_LIBGD204, 1, [ ])
- ])
- CPPFLAGS=$old_CPPFLAGS
-
+ AC_MSG_ERROR([Building the GD extension against an external libgd is not supported.])
fi
fi
@@ -399,23 +348,13 @@
if test "$PHP_GD" != "no"; then
PHP_NEW_EXTENSION(gd, gd.c $extra_sources, $ext_shared,, \\$(GDLIB_CFLAGS))
- if test "$GD_MODULE_TYPE" = "builtin"; then
- () {}])
- else
- GD_HEADER_DIRS="ext/gd/"
- GDLIB_CFLAGS="-I$GD_INCLUDE $GDLIB_CFLAGS"
- PHP_ADD_INCLUDE($GD_INCLUDE)
-
- PHP_CHECK_LIBRARY(gd, gdImageCreate, [], [
- AC_MSG_ERROR([GD build test failed. Please check the config.log for details.])
- ], [ -L$GD_LIB $GD_SHARED_LIBADD ])
- fi
+ () {}])
PHP_INSTALL_HEADERS([$GD_HEADER_DIRS])
PHP_SUBST(GDLIB_CFLAGS)
Fixed in 5.3.0 and later.
No update?
.
@ccc I would prefer to pay the price of a coffee/beer/wine (user choice) per year to @omz for his marvelous app, even without any update, only to use it.
I can pay for two cups of coffee if omz can add support for Python 3.8 and the latest modules. I just wonder what he has spent the last two years working on.
@timtim Perhaps you could run this kind of code to execute, in Pyto, some modules unknown to Pythonista
import sys
import urllib
import webbrowser

#print(sys.argv)
if len(sys.argv) == 1:
    # ┏━━━━━━━━━━━━━━━━━━━━┓
    # ┃code to run in Pyto ┃
    # ┗━━━━━━━━━━━━━━━━━━━━┛
    code = '''
import urllib
import webbrowser
import pandas as pd
data = {
    'apples': [3, 2, 0, 1],
    'oranges': [0, 3, 7, 2]
}
purchases = pd.DataFrame(data)
result = str(purchases)
encoded = urllib.parse.quote(result)
webbrowser.open('pythonista3://pyto.py?action=run&argv='+encoded)
'''
    # execute code in Pyto
    encoded = urllib.parse.quote(code)
    webbrowser.open('pyto://x-callback/?code='+encoded)
else:
    # back from Pyto
    print(sys.argv[1])
@cvp I was curious about the script, but I received "The file pyto.py couldn't be found" when Pythonista opens.
@pavlinb yes, sorry I forgot to say you have to save the script as pyto.py in Pythonista.
Thanks to @pulbrich
All is done in Pythonista, nothing in Pyto but you need to have both apps
.
This is a C++ writeup, and a somewhat arcane one at that.
The C++ Standard Template Library is a vast candy-box full of good things, most of which don't show on the surface. Two good things that do show on the surface are the string class (a specialization of the basic_string template), and the map template, which gives you an associative array. We'll be discussing both of those, and we'll also venture a tentative inch or two into the great darkness that lies below them.
We have an associative array: std::map<string, foo *>. Our problem today is that we happen to need our lookups to be case blind. std::map::find() is efficient: That's why we're using std::map in the first place. We don't want to reinvent the wheel here. We also need to preserve the case of the keys. Therefore, we want to alter the behavior of string comparisons taking place deep in the guts of STL code that we didn't write and don't care to modify. We don't want to touch anything else, because everything else is hunky dory. Fortunately, the STL is designed to allow just that.
The internal tree template used for the guts of std::map uses the operator< member function of the key type to do comparisons. If the key type is a primitive type like int or char, operator< isn't even a member function, but a C++ template doesn't care about the distinction either way. But our key type isn't primitive at all: It's a string class.
Clearly, there are (in principle) two ways to approach the problem: We can alter the behavior of string::operator<, or we can alter the behavior of map.
So. We'll start with altering std::string, because it's the solution I tried first, it looks like the wrong solution, and it's the solution I decided to use. It works, but it's mighty verbose.
The std::basic_string template has three type parameters: the character type, a "character traits" class, and an allocator.
The character traits class defines the behavior of the characters in the string. It's got a bool eq(T, T) member to test two characters for equality, and a bool lt(T, T) member to test if one character is "less than" another. It's also got compare(T *, T *, size_t n), which compares sequences of characters, and there are a few other goodies besides. Well, now. All we've got to do is subclass std::char_traits<char> and override those three member functions and a couple others:
template<class _E>
struct blind_traits : public char_traits<_E>
{
static bool eq(const _E& x, const _E& y)
{ return tolower( x ) == tolower( y ); }
static bool lt(const _E& x, const _E& y)
{ return tolower( x ) < tolower( y ); }
static int compare(const _E *x, const _E *y, size_t n)
{ return strnicmp( x, y, n ); }
// There's no memichr(), so we roll our own. It ain't rocket science.
static const _E * __cdecl find(const _E *buf, size_t len, const _E& ch)
{
// Jerry says that x86s have special mojo for memchr(), so the
// memchr() calls end up being reasonably efficient in practice.
const _E *pu = (const _E *)memchr(buf, ch, len);
const _E *pl = (const _E *)memchr(buf, tolower( ch ), len);
if ( ! pu )
return pl; // Might be NULL; if so, NULL's the word.
else if ( ! pl )
return pu;
else
// If either one was NULL, we return the other; if neither is
// NULL, we return the lesser of the two.
return ( pu < pl ) ? pu : pl;
}
// I'm reasonably sure that this is eq() for wide characters. Maybe.
static bool eq_int_type(const int_type& ch1, const int_type& ch2)
{ return char_traits<_E>::eq_int_type( tolower( ch1 ),
tolower( ch2 ) ); }
};
// And here's our case-blind string class.
typedef basic_string<char, blind_traits<char>, allocator<char> > blindstr;
// . . . and our case-blind map:
typedef std::map<blindstr, foo *> foomap;
But look at all that code! What a mess. Let's take a look at altering the behavior of the map instead. First, we'll RTFM on std::map. What does it give us that we might find useful?
The std::map template has three type parameters: the key type, the value type, and a "predicate" class. The predicate is a class with one relevant member function, operator():
bool operator()(const T& x, const T& y) const;
That predicate is used to compare instances of the key type. By default, std::map uses the generic predicate std::less, which uses the operator< members of the two arguments.
Bingo. He's our boy.
Alright, then. Let's make a predicate. We RTFM some more, write a little code, and here's what it looks like:
struct case_blind_string_compare
: public binary_function<string, string, bool>
{
bool operator() (const string& x, const string& y) const
{ return stricmp( x.c_str(), y.c_str() ) < 0; }
};
To use it, we modify very slightly the typedef where we define our map:
typedef std::map<string, foo *, case_blind_string_compare> foomap;
Now get this: I used the former method, and left the much cleaner predicate stunt in a comment. I'm not wholly convinced that I did the Right Thing, but I'm reasonably confident. Here's why: The key type is a path in an operating environment where the filesystem is case blind. In that environment, paths are strings of which case is not a meaningful property. It makes sense to represent them that way all the time, rather than just remembering to store them in case-blind containers.
If we wanted to port the code, we could put an #ifdef block around the definition of blind_traits, and switch in a case sensitive version when the thing is compiled for a case sensitive environment. We could do the same with the predicate, of course, but that handles only one special case of a more general issue.
My boss suggests that we might consider the filesystem as the thing which has the property of case blindness, in which case the latter method would make more sense. Or equal sense. I'm not sure I buy that, but then again I'm not sure I don't, either. He didn't object when I decided to stick with my view. It's a fun question, isn't it?
If I'm wrong, I suppose they'll put me in the dunk tank at the company picnic (again), but I think I'm right.
Thus do I earn my daily bread.
Source: https://everything2.com/user/wharfinger/writeups/Case-blind+STL+string
Practical DevOps Learning Path — where should I start?
I have spent more than 8 years working relentlessly across software development, Cloud Operations, Release Engineering, and Architecture. People are thirsty to see DevOps on the ground. I got our directors and VPs to breathe DevOps with me, not just talk about it. I am here to share the whole experience, practically.
1. Thou Shalt be a software developer guided by software engineering principles
- Learn Git. Sorry, don't learn it: Master it! Otherwise, don't proceed.
- Know what’s README, why, and how to write it concisely.
- Learn one of frontend Technologies ( VueJS, ReactJS or AngularJS) in webdev & Practice that → Recommend React+Redux
- Learn/Practice how to write unit-tests for your frontend components
- Learn one of backend Web Dev Technologies ( Gin of Go,Express of NodeJS, Flask of Python, Laravel of Php, Grails of groovy, Spring Boot, ..… ) & Practice that! → Recommend Go or NodeJS
- If by mistake 😁 you meet something called "Spring", do the following: give it respect 💁🏽♂️! Stop there, go and learn about it! Learn AOP vs OOP! Learn the IoC design pattern and its implementation, DI… Indeed, you just met the software architect of Java apps.
- Learn how to build a fullstack web application with these technologies & BUILD it!
- Get Basic Knowledge about OWASP.
- Scan your app(s) against the top 10 OWASP ( Try Sonarqube as tool)
- Learn how to enrich this fullstack application with enterprise features (OAuth/OpenID Connect, Machine Learning, …) → Recommend OAuth with Google
- Optionally, learn/practice mobile development superficially → Recommended (React Native)
- Establish reusable components as per need.
- Do not re-invent the wheel
- Do not repeat your self
- Composition with Go , OOP with JAVA .. No OOP in Go.
- Make it The Clean Code even if you have to practice “the Art of Destroying a Software” 🙃
It was a hands-on mission which took 6 years of my career (2012–2018). I still apply all of that now, but my in-house software is now built to ship operational knowledge more than to be end-user apps.
2. Thou Shalt use Linux Systems as a BOSS
- Learn Basics of Linux Administration ( RHCSA is a good path ) & Apply that in your daily work!
- Focus on process management, file management, Systemd, Log rotation.
- Learn how to prepare a Linux server for deploying app(s) that you built in the first part.
- Learn Vagrant to be able to spin up a Linux VM as Code in no time; basic knowledge is enough
- Deploy your application to linux server
- Conclude all used commands and add them to your Vagrantfile.
- Git commit your Vagrantfile and do not forget .gitignore; otherwise, you will end up with a fat git repo 😁. Congratulations! You just touched one of the GitOps areas.
- Have the skills to troubleshoot why your app goes down just after one hour 😅. Maybe "nohup" is a solution, or maybe systemd, or… but let's not think about Kubernetes yet.
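For the "app goes down after an hour" problem, a systemd unit is the durable answer that nohup is not. A minimal sketch (the service name, paths and user are hypothetical):

```ini
# /etc/systemd/system/myapp.service  (hypothetical unit)
[Unit]
Description=My demo app
After=network.target

[Service]
ExecStart=/usr/bin/node /opt/myapp/server.js
Restart=always          # restart the process whenever it dies
User=myapp

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now myapp`, then practice `systemctl status` and `journalctl -u myapp` for troubleshooting.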
3. Thou Shalt jump into Public Cloud
- Go into the AWS Solution Architect-Associate path & catch two birds 🦅 with one stone
- The first bird is getting the required knowledge that expands your vision & introduces you to the main Cloud architecture & Cloud services: Events (CloudWatch), Network as a Service (VPC), Compute as a Service (EC2, Lambda), Storage as a Service (EBS, S3), … and so on.
- The second bird is helping you get your hands dirty with Cloud services.
- Unfortunately, do not think that getting the AWS Solution Architect-Associate will qualify you as a Solutions Architect unless you already have architecture skills.
- Instead & fortunately, this path puts you on the Cloud road by giving an overview of the main cloud services and how they interconnect, as well as introducing you smoothly to the Infra/Ops world (Resilience, Failure & DR, Load balancing, Scalability, Subnetting, Supernetting, Routing, Firewalls, WAF, …) even if you were a 100% pure developer previously.
- Use AWS services (an EC2 Linux instance) to deploy the app(s) mentioned in part 1. Actually, do not sit the exam without getting this hands-on experience in place.
- I assume you’ve already learnt cost effectiveness. So don’t forget to terminate the EC2 instance 💰
4. Thou Shalt eat 🍽 Container Technologies
- Get the basic knowledge about containers from two perspectives: SysOps perspective (linux process) and Developer perspective (app package)
- Learn Docker then Docker Compose
- Dockerize app(s) that we mentioned in the 1st part. ( Dockerfile,.. so on)
- Translate “docker run/build” commands into Code called “docker-compose.yaml”, then update README
- Git Commit your docker-compose file → Congrats! You touch GitOps again but not the full cycle.
- Provision an EC2 instance from AWS, install the Docker dependencies, deploy your app(s).
- Absorb the Architecture of Kubernetes, how it works and how the whole infrastructure is exposed thru a REST API !!
- Deploy the container image of your application to a public registry (Docker hub).
- Write the required Kubernetes resources, in YAML format, in order to deploy app(s) mentioned in the first part.
- Deploy your app with kubectl
- Scale your app
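The "translate docker run/build commands into Code" step above might look like this minimal docker-compose.yaml sketch (the image name and ports are placeholders):

```yaml
# docker-compose.yaml (placeholder values)
version: "3.8"
services:
  web:
    image: docker.io/example/myapp:1.0.0   # pushed to Docker Hub earlier
    ports:
      - "8080:3000"                        # host:container
    restart: unless-stopped
```

Commit this file next to the Dockerfile; `docker compose up -d` then replaces the hand-typed `docker run` incantation.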
5. Thou Shalt develop Software with CBD in mind
- CBD stands for “Cloud By Design” (My own Abbrv)
- Folks call it also “Cloud Native”.. Sorry! I just call it CBD.
- Learn 12 factors & refactor app(s) mentioned in the 1st part accordingly.
- Learn Serverless implementation using FaaS (the Function-as-a-Service offering model), which falls into learning Lambda + one DB backend (DynamoDB) + API Gateway + maybe more (SQS, …)
- Run your first AWS Lambda, add an HTTP request trigger to it , let it responds accordingly.
- Refactor the app(s) mentioned in the first part! Break them down into functions! Deploy them to AWS Lambda.
- Do not use directly AWS Console to develop Lambda! Use a framework that helps you to control the Code with Git → Recommend serverless Framework.
- Preview your app locally (“serverless offline”) even if it requires remote services in prod runtime.
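A minimal Serverless Framework sketch of the FaaS setup described above (service name, runtime and handler are hypothetical):

```yaml
# serverless.yml (hypothetical names)
service: myapp
provider:
  name: aws
  runtime: nodejs18.x
functions:
  api:
    handler: handler.hello   # exported function in handler.js
    events:
      - httpApi: '*'         # HTTP request trigger for any route
```

With this in the repo, the function is code-controlled with Git rather than edited in the AWS console, and `serverless offline` can preview it locally.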
6. Thou Shalt Codify Everything
- Treat everything as Configuration Item.
- In one of the previous parts, you codified your Linux administration using Vagrant
- In another, you codified your container image definitions using a Dockerfile
- In that same part, you codified your container runtime definitions using docker-compose.
- → You did it. But you just need to know why ? so you keep doing it.
- Code vs Adhoc Scripts = huge difference
- — — more practice — —
- Delete the EC2 instance that you created previously to deploy your app(s)
- Learn AWS CloudFormation, then implement a CloudFormation template that ensures the creation of the EC2 instance with a startup script that installs all dependencies & deploys the application. (startup script = user data)
- Deploy the Cloudformation template as a new Stack
- Git commit the template inside the git repository of app(s) mentioned in the first part.
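That CloudFormation template might be sketched like this (the AMI ID and app image are placeholders, and the startup script is the user data):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  AppServer:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0123456789abcdef0      # placeholder AMI
      InstanceType: t3.micro
      UserData:                            # startup script = user data
        Fn::Base64: |
          #!/bin/bash
          yum install -y docker
          systemctl enable --now docker
          docker run -d -p 80:3000 example/myapp:1.0.0
```

Deploying it as a stack (for example with `aws cloudformation deploy --template-file template.yaml --stack-name myapp`) replaces the manual console clicks.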
7. Thou Shalt breathe the 10 Commandments of Release Engineering (RELENG)
- — — theories — —
- Release Engineering is a speciality & you have to be an SME here.
- RELENG is about communicating Code Change to Production in a reliable manner.
- Google engineer Dinah McNutt defined the 10 Commandments of RELENG. Understand them, then practice them with real use cases.
- There are at least 2 phases in the software delivery : Build , then Deploy
- Binary Artifacts are the single source of truth for deployment phases
- Binary Artifacts must be scanned against security issues
- Binary Artifacts must be signed for integrity concerns
- From RELENG perspective, releasing non-containerized app is the same as releasing containerized app.
- Secrets must be versioned/managed outside Code repositories
- Semantic version is the single source of truth for the release.
- Semantic version must be used everywhere ( git tag, artifact naming, jira release name, sonar tag, vault secret path,… etc )
- A Release has a scope : Add feature(s) OR/AND Fix bug(s)
- Releases Scopes can be managed via Task Management Systems ( e.g Github Issues, Jira )
- Use Package Managers ( “mvn/gradle” is a must comparing with “javac”, “Helm” is a must comparing with “kubernetes hard-coded manifests”, .. )
- — — practices — —
- Assuming your code is hosted in Github, ….
- Create a release scope by creating a Github issue ( feature to add, or bug to fix ) .. Take note of the Github issue ID ( ie.g #12)
- Implement the issue #12 as per its acceptance criteria.
- Use #12 in the git commit message.
- Make a meaningful git commit message with Semantic Commit Message. That time, #12 will be in the footer of the commit message.
- Once it’s ready, create Pull-request from the feature branch to the integration branch (dev)
- Install Artifact management system (Nexus Sonatype ), Configure Docker Registry to be served by it.
- Regarding app(s) mentioned at the first part, try to change to way of deployment : Build → Push Artifact to Nexus → In deployment server, Pull it → Run it.
- Install Secret Management system (Vault Hashicorp).
- Regarding app(s) mentioned at the first part, externalize sensitive configuration from the source code of the application(s) → push the sensitive configuration to a Vault secret path → refactor your application to read this configuration externally using environment variable or directly call Vault ( Hint: update README & .gitignore at the end)
- Once the new feature (#12) is deployed, Go back to the Github issue (#12) and close it. .. ( Better, to have bots do that on your behalf )
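A trivial sketch of the "semantic version is the single source of truth" rule (the app name, registry and version are hypothetical): every release-side name is derived from the one version string, never typed twice.

```shell
#!/bin/sh
# The semantic version is the single source of truth; git tag,
# artifact name and image tag are all derived from it.
VERSION="1.4.2"

GIT_TAG="v${VERSION}"
ARTIFACT="myapp-${VERSION}.tar.gz"              # pushed to Nexus
IMAGE="registry.example.com/myapp:${VERSION}"   # pushed to the Docker registry

echo "${GIT_TAG} ${ARTIFACT} ${IMAGE}"
```

The same string would also name the Jira release, the Sonar tag and the Vault secret path.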
8. Thou Shalt adopt Pipeline Theory for your software delivery:
- Pipelines are invented to deliver things from a source X to destination(s) Y.
- A Software Delivery Pipeline (CI/CD pipeline) orchestrates the process of the delivery.
- CI/CD pipeline is a set of quality gates, namely: Build → Deploy.
- Adding quality gates before build/deployment helps to establish more confidence in the pipeline.
- Pipeline is just an orchestrator and it must communicate with the right systems thru APIs until getting things done.
- — — Practices _____
- Install Jenkins
- Configure a declarative pipeline using Jenkinsfile inside the git repo of app(s) mentioned above.
- Implement 2 stages: build , deploy.
- Embed other stages to establish confidence : run unit-tests before build stage.. run static code analysis before the build stage also.
- I used to write unit-tests for my smart contracts in blockchain dapp(s) using ethereum framework.
- The Deploy stage requires a communication protocol between the CI/CD system and the deployment target(s): for VM-based deployment, SSH/WinRM is the protocol; for Kubernetes-based deployment, it is HTTP (the REST kube-api).
- Hint: Deploying containers without kubernetes is classified as VM-based deployment and SSH/WINRM is required.
- Congratulations ! DevOps One-way (from left-to-right) is implemented
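The declarative pipeline with confidence gates in front of the build might be sketched as the following Jenkinsfile (the stage commands are placeholders for whatever your stack uses):

```groovy
// Jenkinsfile (declarative, placeholder commands)
pipeline {
    agent any
    stages {
        stage('Unit tests')      { steps { sh 'npm test' } }
        stage('Static analysis') { steps { sh 'sonar-scanner' } }
        stage('Build')           { steps { sh 'docker build -t example/myapp:$BUILD_NUMBER .' } }
        stage('Deploy')          { steps { sh 'kubectl apply -f k8s/' } }
    }
}
```

Keeping this file inside the git repo of the app is exactly the "configure a declarative pipeline using Jenkinsfile" step above.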
9. Thou Shalt apply Pipelines not only for your Apps but also for their Infra
- Infra as Code should give you the impression to treat infra changes as a software.
- Because it’s a software, it must have unit-tests,… and for sure Pipeline.
- Use the right techs/tools (e.g. Terraform, Ansible,… )which comes with main features of infra as code ( modularity, reusability, artifacting, externalizing secret data smoothly).
- — — Practices — —
- Build a pipeline (Build / Deploy) for the CloudFormation template mentioned in part (6).
10. Thou Shalt taste Deployment STRATEGIES as Sweets
- It’s about the way you upgrade your app(s).
- Learn difference among deployment strategies : All-at-once, Rolling update, Blue/Green (Red/Black), Canary, A/B Testing,.. etc
- Learn how to select the right strategy
- By practice, you will learn some anti-patterns of deployment strategies: Blue/Green with a DNS switch is a bad practice because DNS TTLs are out of your control (use LB health-check switching instead), … and so on.
- ______ Practices(1) _______
- Deploy your app(s) to kubernetes using resource of kind “Deployment”.
- Make sure “.spec.strategy.type==RollingUpdate”
- Increase the replica of your deployment
- Specify the maximum number of Pods that can be unavailable during the rolling update process (e.g. maxUnavailable and maxSurge)
- ______ Practices(2) _______
- Install Istio ecosystem on your kubernetes cluster
- Implement Canary Deployment (.e.g. VirtualService,… so on)
- Implement Blue/Green Deployment (using the same helm chart but 2 different helm releases in 2 different namespaces + Istio Gateway + Istio VirtualService).
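The RollingUpdate practice above, with maxUnavailable and maxSurge set, might be sketched as this Deployment manifest (the image and labels are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 4
  selector:
    matchLabels: { app: myapp }
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one Pod down during the rollout
      maxSurge: 1         # at most one extra Pod above the replica count
  template:
    metadata:
      labels: { app: myapp }
    spec:
      containers:
        - name: myapp
          image: example/myapp:1.0.0
```

Changing the image tag and re-applying triggers the rolling update you can then watch with `kubectl rollout status deployment/myapp`.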
11. Thou Shalt Gather Accurate Feedback about what you deliver
- Measure then Measure the Measure
- 1st Measure = set up the required tools (Prometheus, Grafana, …) to monitor your system performance and behavior.
- 2nd Measure= Measure app(s) & fix issues
- 3rd Measure = Keep measuring
- Gather feedback from customers smartly: UX techniques (heatmaps, …) can help; bots are also a good thing to cook.
- Congratulations! DevOps Second-way is implemented
12. Thou Shalt scale by services not by people
- “It’s OK to use our services, but it’s not ok to use our people” said by someone.
- Install tools but expose them as a service
- e.g. Jenkins is a tool. Users should know it as Pipeline as a Service
- e.g. ArgoCD is a tool. Users should know it as CD as a Service.
- e.g …
- Exposing tools to be as-a-service requires building interface between the tools and its users. Interface can be : Rest-API, Configuration File (Jenkinsfile, buildspec.yaml ,… so on )
- System owners must be promoted from system admins to builders of software operators.
- Cloud is not only the Public Cloud. Build your private Cloud.
- Cloud is architecture before being internet facing or not.
- Some services must be at the level of the API Gateway for AuthN, AuthZ, API economy and other features. (Learn here Kong or 3scale)
13. Thou Shalt ship all your Operational Knowledge into a Software
- Migrate the AWS Cloudformation Template to a Terraform Plan.
- Reuse ready-made Terraform modules
- Build your own terraform modules if needed.
- Thru a pipeline, Release all your own Terraform modules as a software library.
- ___ another practice: for kubernetes deployment ___
- Create the whole kubernetes cluster using a ready-made Terraform module.
- Migrate the Kubernetes hard coded Manifests to a Helm Chart.
- All these Code (terraform files, helm chart directory ) must be committed with GIT.
- All sensitive data must be pushed to a secret management system ( e..g: terraform backend = vault) .
- .gitignore must be updated.
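Reusing a ready-made Terraform module, as suggested above, might look like this (the module values are illustrative; terraform-aws-modules/vpc is a real public registry module):

```hcl
# main.tf (illustrative values)
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = "demo"
  cidr = "10.0.0.0/16"
}
```

The same pattern applies to your own modules: version them, release them through a pipeline, and consume them by `source`/`version` like a software library.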
14. Thou Shalt adopt SRE to connect all these principles
- Reliability is the resolution of the trade-off between stability (Ops) and frequent change (Dev).
- Practically, the SRE foundation is “Code Don’t Lie”, which means treating everything as code helps to shift from human operators to software operators.
- Reliability can be built on top of software, but it cannot be built on top of people operations.
- Software can be reliable by following all the GitOps practices: version the code, review the code, test the code, and have a pipeline for that code.
- The previous principle is a subset of this principle
15. Thou Shalt Go Crazy
- API Maturity Level 4, rfc7807, .. so on.
- Podman, skopeo, sigstore, .. so on
- Kubernetes + ingress + metric-server + grafana + kibana + kafka + … → shipped as a service, used as a service.
- Security in containers: seccomp, SELinux, AppArmor, syscalls, distroless, secret encryption, OIDC with kube-api …, Clair, Trivy, egress router, netnamespace, … etc
- Data pipelines : ingest data, analyze data, …
- Software Defined Storage: Ceph (OSD → RGW, RBD, CephFS) , GlusterFS ( PV → VG → LV thin pool → Brick → volume), …
- Software Defined Everything: SDN, SDDC (datacenter), …. etc
- iso27001 — e.g Calculation of the cooling energy needed for a datacenter by taking into account the heat generated by the bodies of people entering the datacenter ( people bodies are not the same 👀 )
- Performance Engineering, I/O , Latency, Throttling, ….
- Kill technical debt
- Hybrid cloud, Direct Connect, Enterprise Architecture,…
- Business Values Driven Architecture
- Kill WIP , Kanban
- TOGAF, Strategies, Principles, Use Cases, Business Goals, Architecture Steering Committee,… so on.
- … and so on
16. Thou Shalt Practice or figure out ways to Practice
- Sometimes, your company adopted some good technologies, and you got the opportunity to work on them
- Sometimes, you have a good Boss who listened to you and gave you area for experimentation
- But sometimes, you find yourself in the Desert !!
- In that case, you need to dig the hole and build the well yourself. I mean you need to create labs for yourself by any means. You have to manage it!
17. Thou Shalt Never Stop Learning
- The more you learn/know, the wider your vision becomes. Never stop learning!
- Learning is not only studying but also learning from failures.
- Congratulations! DevOps Third-way is implemented
18. Thou Shalt use the right Tech in the Right Context
- In a company where most of teams have only SSH access, Ansible might be better than others.
- In a company where a Private/Public Cloud is enabled, Terraform might be better than others.
- Vendor lock-in is relative , and hard to measure.
- Technology lock-in is inevitable
19. Thou Shalt Think Big
- Be an ambassador and not just an implementer
- DevOps Culture is established by practicing the 14 Leadership Principles.
Question: What’s the relationship between the word “DevOps” and these 19 points? 😆😆😆
Answer: Simply! XyzOps means “operate with Xyz”. DevOps means “operate with Software Development”. Does that make sense?
Source: https://abdennoor.medium.com/practical-devops-learning-path-from-where-should-i-start-9d536a5a7250?readmore=1&source=user_profile---------0-------------------------------
Have you ever come across this mind-boggling bug and wondered how to solve it? Let’s go through the post and study how to address the “Illegal start of expression” Java error.
This is a compile-time error: the compiler has found something that doesn’t follow the rules or syntax of Java programming. Beginners face this bug most often. It is reported at compile time, i.e., by the javac command.
This error can be encountered in various scenarios. The most common ones follow, with an explanation of how each can be fixed.
1. Prefixing the Local Variables with Access Modifiers
Variables inside a method or a block are local variables. Local variables have scope only within their specific block or method; that is, they cannot be accessed anywhere else in the class. The access modifiers public, private, and protected are illegal on local variables, since the enclosing method’s scope already defines their accessibility.
This can be explained with the help of an example:
class LocalVar {
    public static void main(String args[]) {
        private int variable_local = 10;   // illegal: access modifier on a local variable
    }
}
2. Method Inside of Another Method
A method cannot have another method inside its scope. Using a method inside another method would throw the “Illegal start of expression” error. The error would occur irrespective of using an access modifier with the function name.
Below is the demonstration of the code:
class Method {
    public static void main(String args[]) {
        public void calculate() {   // illegal: method declared inside another method
        }
    }
}
class Method {
    public static void main(String args[]) {
        void calculate() {   // still illegal, even without a modifier
        }
    }
}
3. Class Inside a Method Must Not Have Modifier
Similarly, a method can contain a class inside its body; this is legal and would not by itself give a compile-time error. However, note that such a local class must not begin with an access modifier, as modifiers cannot appear inside a method.
In the example below, the class Car is defined inside the main method of the class Vehicle. Using the public modifier with the class Car gives an error at compile time, as modifiers must not be present inside a method.
class Vehicle {
    public static final void main(String args[]) {
        public class Car {   // illegal: access modifier on a local class
        }
    }
}
4. Missing Curly “{}“ Braces
Skipping the curly braces of any method block can result in having an “illegal start of expression” error. The error will occur because it would be against the syntax or against the rules of Java programming, as every block or class definition must start and end with curly braces. The developer might also need to define another class or method depending on the requirement of the program. Defining another class or method would, in turn, have modifiers as well, which is illegal for the method body.
In the following code, consider the class Addition; the method main adds two numbers, stores the result in the variable sum, and then prints it. An error is shown on the terminal because the closing curly brace is missing at the end.
public class Addition {
    static int sum;
    public static void main(String args[]) {
        int x = 8;
        int y = 2;
        sum = 0;
        sum = x + y;
        {
            System.out.println("Sum = " + sum);
        }
    }
// missing closing brace for the class
5. String Character Without Double Quotes “”
Initializing string variables without double quotes is a common mistake among those new to Java; they tend to forget the quotes and are later puzzled when the error pops up at compile time. Values assigned to String variables must be enclosed in double quotes to avoid the “illegal start of expression” error.
The String variable is a sequence of characters. The characters might not just be alphabets, they can be numbers as well or special characters like @,$,&,*,_,-,+,?, / etc. Therefore, enclose the string variables within the double quotes to avoid getting an error.
Consider the sample code below; the missing quotes around the values compared with the variable operator generate an error at compile time.
import java.util.*;

public class Operator {
    public static void main(String args[]) {
        int a = 10;
        int b = 8;
        int result = 0;
        Scanner scan = new Scanner(System.in);
        System.out.println("Enter the operation to be performed");
        String operator = scan.nextLine();
        if (operator == +) {          // missing quotes around + trigger the error
            result = a + b;
        } else if (operator == -) {   // missing quotes around - as well
            result = a - b;
        } else {
            System.out.println("Invalid Operator");
        }
        System.out.println("Result = " + result);
    }
}
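For reference, a corrected sketch of the same logic (the class and method names are illustrative, and the operator is hard-coded here so the example is self-contained): string literals are quoted, and strings are compared with equals() rather than ==.

```java
public class OperatorFixed {
    static int apply(String operator, int a, int b) {
        // String values must be enclosed in double quotes,
        // and compared with equals(), not ==.
        if (operator.equals("+")) {
            return a + b;
        } else if (operator.equals("-")) {
            return a - b;
        }
        throw new IllegalArgumentException("Invalid Operator: " + operator);
    }

    public static void main(String[] args) {
        System.out.println("Result = " + apply("+", 10, 8));
    }
}
```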
6. Summary
To sum up, the “Illegal start of expression” error occurs when the Java compiler finds something inappropriate in the source code at compile time. To debug this error, look at the lines preceding the error message for missing brackets, curly braces or semicolons, and check the syntax.
Useful tip: remember that a single syntax error can sometimes cause multiple “Illegal start of expression” errors. Therefore, find the root cause and always recompile after each fix; avoid making multiple changes without compiling at each step.
7. Download the Source Code
You can download the full source code of this article here: How to fix an illegal start of expression in Java
So your first example is not the one that shows the error?
Source: https://www.javacodegeeks.com/how-to-fix-illegal-start-of-expression-in-java.html
TVs usually have limited graphics acceleration and single-core CPUs, while a typical TV app has high memory usage. These restrictions make super-responsive 60fps experiences especially tricky.
React-TV is an ecosystem for React applications on TVs. It includes a renderer and a CLI tool for building applications, focused on being a better tool for building and developing quickly for TVs.
React-TV’s optimizations include removing cross-browser support, being friendly to TVs’ events, preventing DOM or Fiber caching to reduce memory sweep, and adding support for canvas-based components.
Netflix has also been tackling this problem.
As a pre-1.0 release it currently only works on LG webOS. Support for Samsung Tizen, Samsung Orsay, and Amazon Fire TV is on the roadmap.
import React from 'react'
import ReactTV, { Platform } from 'react-tv'

class Clock extends React.Component {
  state = { date: new Date() }

  componentDidMount() {
    setInterval(() => this.setState({ date: new Date() }), 1000)
  }

  render() {
    if (Platform('webos')) {
      return (
        <h1>Time is {this.state.date.toLocaleTimeString()}</h1>
      )
    }
    return <h2>This App is available only at LG WebOS</h2>
  }
}

ReactTV.render(<Clock />, document.getElementById('root'))
If you’re interested in this project I recommend you to follow React-TV developer Raphael on Twitter, as he frequently posts some nice work in progress videos.
Developing for TVs with React-TV →
React-TV (GitHub) →
React-TV YouTube Example App →
Source: https://www.bram.us/2017/12/14/developing-tv-apps-with-react-tv/
ilovejava asks:

i have to do a pyramid with a pattern like this

      1
    1 2 1
  1 2 4 2 1
1 2 4 16 4 2 1

(i just realized daniweb didnt take my centered pyramid thing, it moved to the side no matter what spacing i did — this is the pattern, but the height is not 4 as it is now, it is 8 and the width is 15)

so far i have done something like this

public class pyramid {
    public static void main(String args[]) {
        int w = 8;
        for (int count = 1; count <= w; count++) {
            for (int a = 1; a <= w - count; a++) {
                System.out.printf(" ");
            }
            for (int k = 1; k <= 2 * count - 1; k++) {
                System.out.printf(" 0 ");
            }
            System.out.println();
        }
        System.out.println();
    }
}

can someone please help me

Tags: for-loop, homework, java
Source: https://www.daniweb.com/programming/software-development/threads/382541/pyramid-with-a-pattern