text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91 values | source stringclasses 1 value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
I'm using a ESP32 to transmit serveral kB worth of sensor data via WiFi.
I store these in a textfile where one line represents a sample:
I need to push a fresh datavector on the file, while popping the oldest. My solution for this is working, but really slow. Adding one new element takes about 10s.
Code: Select all
... 22.5,42.0,1002.2,8.3 22.5,42.3,1002.2,8.3 22.5,42.5,1002.2,128.3 ...
I basically use a new file to copy the lines to, and destroy the "donor" file at the end:
Does anyone have any suggestions how to improve the performance of the script? Or is there a better way of doing this task?
Code: Select all
def HandleMeasBuffer(newData): print("start transfer") foundFiles = os.listdir() if "log_A.txt" in foundFiles: log_A = True file1 = open("log_A.txt", 'r') file2 = open("log_B.txt", 'w+') else: log_A = False file1 = open("log_B.txt", 'r') file2 = open("log_A.txt", 'w+') for num, f1 in enumerate(file1): if num > 0: #forget first, oldest line file2.write(f1) file2.write("\n"+newData) file1.close() file2.close() if log_A: os.remove("log_A.txt") else: os.remove("log_B.txt") print("end transfer")
Sorry for the dumb question, I'm really not used to think much about system resources when it comes to regular Python
Thanks in advance for your help | https://forum.micropython.org/viewtopic.php?t=8043&p=45748 | CC-MAIN-2020-16 | refinedweb | 234 | 77.43 |
So far in this chapter, the storyboard content does not involve any Swift or other programming content—it used the drag and drop capabilities of the storyboard editor. Fortunately, it is easy to integrate Storyboard and Swift using a custom view controller.
Each standard view controller has a corresponding superclass (listed in the Scenes and view controllers section previously in this chapter). This can be replaced with a custom subclass, which then has the ability to influence and change what happens in the user interface. To replace the message in the Message Scene, create a new file named
MessageViewCotroller.swift with the following content:
import UIKit class MessageViewController: UIViewController ...
No credit card required | https://www.safaribooksonline.com/library/view/swift-essentials/9781784396701/ch04s03.html | CC-MAIN-2018-30 | refinedweb | 113 | 55.34 |
In yesterday’s Programming Praxis problem we have to implement a more efficient string search algorithm than the brute force approach we did earlier, namely the Horspool variation of the Boyer-Moore algorithm. Let’s get started.
Our import:
import Data.Map (findWithDefault, fromList, (!))
The algorithm this time is a bit longer than the brute force one, but it’s nothing too bad. In lines 2-4 we cache some values to remove some duplication and possibly avoid recalculation. The last four lines are the actual algorithm and the first line just calls it with the proper initial arguments.
horspool :: Ord a => [a] -> Maybe Int -> [a] -> Maybe Int horspool pat skip xs = f (lp - 1 + maybe 0 id skip) p' where (lp, lxs, p') = (length pat, length xs, reverse pat) t = fromList $ zip pat [lp - 1, lp - 2..] m = fromList $ zip [0..] xs f n [] = Just (n + 1) f n (p:ps) | n >= lxs = Nothing | p == m ! n = f (n - 1) ps | otherwise = f (n + findWithDefault lp (m ! n) t) p'
When we test our algorithm with the same test suite as last time we can see everything is working correctly: horspool
Tags: bonsai, boyer, code, Haskell, horspool, kata, moore, praxis, programming, search, string | http://bonsaicode.wordpress.com/2009/08/29/programming-praxis-string-search-boyer-moore/ | CC-MAIN-2014-42 | refinedweb | 204 | 71.75 |
jGuru Forums
Posted By:
mohammad_afzal
Posted On:
Friday, February 15, 2002 10:01 PM
I want to open a Msword or any other application by clicking a button placed on a Frame.
Please help me out. Thanks a lot.
Re: Opening a Msword document from Frame.
Posted By:
Paul_Connaughton
Posted On:
Friday, February 22, 2002 08:13 AM
You are going to have to make some platform specific runtime commands in order to achieve this. Below is how you could do it to launch notepad, but the command used will depend on the file and application and underlying operating system. It is very unstable.
public class OpenNotepad { private static final String APPLICATION_EXE = "notepad"; public OpenNotepad() { openFileInNotepad( "C:\myFile.txt" ); } public Process openFileInNotepad( String filename ){ try{ return Runtime.getRuntime().exec( new String[]{ APPLICATION_EXE, filename} ); } catch( Exception ex){ ex.printStackTrace(); return null; } } public static void main(String args[] ){ new OpenNotepad(); }}
The only other help that I can offer is to take a look at this article which explains how to open an Internet Browser.
Hope this helps.Paul Connaughton | http://www.jguru.com/forums/view.jsp?EID=760943 | CC-MAIN-2014-49 | refinedweb | 177 | 65.73 |
CGI passes information to Python in the form of a class called FieldStorage. In your Pythonic CGI script, you must create an instance of FieldStorage in order to access the CGI data. The main attribute of class FieldStorage which you will need to know is 'getvalue'. For example, the data from an address book form might be accessed as follows:
# Import modules for CGI handling
import cgi, cgitb
# Create instance of FieldStorage
form = cgi.FieldStorage()
# Get data from field 'name'
name = form.getvalue('name')
# Get data from field 'address'
address = form.getvalue('address')
# Get data from field 'phone'
phone = form.getvalue('phone')
# Get data from field 'email'
From here, the data has been assigned to the variables. You can thus handle the values within the program like other literals. | http://python.about.com/od/cgiformswithpython/ss/pycgitut1_2.htm | crawl-002 | refinedweb | 129 | 66.13 |
I recently stumbled on a paper [1] that looks at a cubic equation that comes out of a problem in orbital mechanics:
σx³ = (1 + x)²
Much of the paper is about the derivation of the equation, but here I’d like to focus on a small part of the paper where the author looks at two ways to go about solving this equation by looking for a fixed point.
If you wanted to isolate x on the left side, you could divide by σ and get
x = ((x + 1)² / σ)1/3.
If you work in the opposite direction, you could start by taking the square root of both sides and get
x = √(σx3) – 1.
Both suggest starting with some guess at x and iterating. There is a unique solution for any σ > 4 and so for our example we’ll fix σ = 5.
We define two functions to iterate, one for each approach above.
sigma = 5 x0 = 0.1 def f1(x): return sigma**(-1/3)*(x+1)**(2/3) def f2(x): return (sigma*x**3)*0.5 - 1
Here’s what we get when we use the cobweb plot code from another post.
cobweb(f1, x0, 10, "ccube1.png", 0, 1.2)
This shows that iterations converge quickly to the solution x = 0.89578.
Now let’s try the same thing for
f2. When we run
cobweb(f2, x0, 10, "ccube2.png", 0, 1.2)
we get an error message
OverflowError: (34, 'Result too large')
Let’s print out a few values to see what’s going on.
x = 0.1 for _ in range(10): x = f2(x) print(x)
This produces
-0.9975 -3.4812968359375005 -106.4783129145318 -3018030.585561691 -6.87243939752166e+19 -8.114705541507359e+59 -1.3358518746543001e+180
before aborting with an overflow error. Well, that escalated quickly.
The first iteration converges to the solution for any initial starting point in (0, 1). But the solution is a point of repulsion for the second iteration.
If we started exactly on the solution, the unstable iteration wouldn’t move. But if we start as close to the solution as a computer can represent, the iterations still diverge quickly. When I changed the starting point to 0.895781791526322, the correct root to full floating point precision, the script crashed with an overflow error after 9 iterations.
More on fixed points
[1] C. W. Groetsch. A Celestial Cubic. Mathematics Magazine, Vol. 74, No. 2 (Apr., 2001), pp. 145–152.
2 thoughts on “A tale of two iterations”
I hoped you would throw in :
if r is a root of x = f(x) then
f(x) – r = f(x) – f(r) = f'(c)(x-r) for c between x and r
so estimating |f’| 1 tells you converge or diverge
Additionally, since (f^-1)’ = (f’)^-1, you should always expect that one will converge and the other will not, barring some pathology with |f’| = 1.
This is in contrast to the situation with a function of two or more variables, where you can have something like f(x, y) = (x/2, 2y) such that neither f nor f^-1 converges to the fixed point in general. | https://www.johndcook.com/blog/2020/02/26/cubic-iteration/ | CC-MAIN-2022-27 | refinedweb | 529 | 72.46 |
100Wp 16v high efficiency Sunpower solar cell ETFE semi flexible solar panels
US $0.7-1.5 / Watt
1 Watt (Min. Order)
Super quality 12v 50w solar panel
US $0.5-1.4 / Watt
50 Pieces (Min. Order)
Factory Wholesale cheapest solar panel
US $8-125.4 / Piece
50 Pieces (Min. Order)
10 Year warranty solar panel 200W 250w 300w
US $3-128 / Piece
1 Piece (Min. Order)
panel 305w-335w 12v solar panel with high quality
US $0.35-0.4 / Watt
20000 Watts (Min. Order)
High quality polycrystalline solar panels 270W for sale off-grid on-grid system
US $0.35-0.4 / Watts
1000 Watts (Min. Order)
import solar panels from China and USA
US $0.6-0.8 / Watt
1 Watt (Min. Order)
Best Products for Import Mono Solar Panels 250 watt With High Efficiency
US $0.2-0.4 / Watt
250 Watts (Min. Order)
solar panel 230v yingli brand 310W by imported solar panel manufacturing machines
US $0.35-0.45 / Watt
25 Watts (Min. Order)
Gama Solar import solar panels 300w high power high quality with long service life
US $0.3-0.4 / Watt
10000 Watts (Min. Order)
High quanlity price per watt solar panels solar cells 6x6 import solar panels
US $1.5-2.0 / Pieces
500 Pieces (Min. Order)
100w 150w 250w 300w import cheap Chinese photovoltaic solar panels price for houses
US $0.31-0.55 / Watt
1 Watt (Min. Order)
Alibaba wholesale import solar panels from germany 300w 310w 320w 330w polycrystalline silicon solar cell price
US $0.32-0.39 / Watt
1 Watt (Min. Order)
250w solar panel polycrystalline made in China, Import Solar Panels 250w Price from China, Solar panel polycrystal 250W
US $0.42-0.55 / Watt
1 Piece (Min. Order)
Brand new solar panel 5v solar panel importer thin film solar panel
US $115.0-145.6 / Piece
1 Piece (Min. Order)
import solar panels,PV solar panel,panel solar
US $0.28-0.33 / Watt
50 Watts (Min. Order)
poly crystalline 300w solar panel 36v/ import pv module from solar panel manufacturers in china
US $0.4-0.5 / Watt
1 Piece (Min. Order)
Hot sale solar fabric solar charger 21w import solar panels charging for phone outside using in the sun
US $22-30 / Piece
2 Pieces (Min. Order)
Roof mono solar panels solar cells import from Germany for Pakistan market
US $0.38-0.92 / Set
1 Set (Min. Order)
High demand import products 600 watt solar panel from alibaba shop
US $0.45-0.5 / Piece
1 Piece (Min. Order)
import china products monocrystalline 60watt solar panel
US $0.3-0.6 / Watt
5 Watts (Min. Order)
solar module PV panel 250w low price import
US $0.3-0.48 / Watt
1 Watt (Min. Order)
OEM import solar panels from germany With CE and ISO9001 Certificates
US $0.3-0.36 / Watt
260 Watts (Min. Order)
import 5w 10w 15w 20w 30w Min solar panels, buy solar cells bulk, solar cells for sale direct china
US $0.21-0.33 / Watt
1 Watt (Min. Order)
China factoy direct price 300w poly solar panel importer
US $0.38-0.38 / Watts
20 Watts (Min. Order)
CE TUV certificated Solar system import solar panels 140W poly thin film solar cell
US $0-0.495 / Watt
100 Pieces (Min. Order)
Import Solar Panels 275W Polycrystalline Photovoltaic Solar Panel
US $0.55-0.57 / Watt
1 Watt (Min. Order)
High demand import products 600 watt solar panel from alibaba shop
US $0.38-0.7 / Watt
5 Watts (Min. Order)
Best Service Hot Sale Solar Roof Panels Importer 100W Mono Crystal Sunpower Cell Pv Flexible Solar Panel 18V
US $0.73-0.74 / Watts
100 Watts (Min. Order)
High efficiency mono per watt import sun power solar panels factory direct
US $0.95-1.05 / Watt
10 Pieces (Min. Order)
2017 bluesun top import solar panels 320w 320watt 320 w sale solar panel home
US $0.35-0.43 / Wp
1 Wp (Min. Order)
import solar panels from germany price per watt 250w solar panel
US $0.32-0.4 / Watt
1 Watt (Min. Order)
solar panels buy 280w solar panel import
10 Pieces (Min. Order)
Import Photovoltaic Mini Transparent Solar Panel Wholesale
US $0.3-0.4 / Watt
1 Watt (Min. Order)
CETCsolar 340w solar panels for sale import solar panels
US $163-180 / Piece
1 Piece (Min. Order)
Import Solar panels mono 72 cells 320 wp 320w solar panel fabric
US $0.33-0.34 / Watts
500 Watts (Min. Order)
250w poly import solar panels,wholesale China for solar panel system
US $0.31-0.42 / Watt
2500 Watts (Min. Order)
High quality glass fiber material imported mobike solar power system panel
US $4.5-6 / Piece
1000 Pieces (Min. Order)
Made in china MC4 Connector monocrystalline silicon 340 Watt import solar panels
US $0.35-0.5 / Watt
10 Watts (Min. Order)
Promotion Wholesale Import Solar Panels From Germany
US $0.3-0.4 / Watt
250 Watts (Min. Order)
- About product and suppliers:
Alibaba.com offers 8,878 import solar panels products. About 42% of these are solar cells, solar panel, 3% are solar energy systems, and 3% are other solar energy related products. A wide variety of import solar panels options are available to you, such as free samples, paid samples. There are 8,884 import solar panels suppliers, mainly located in Asia. The top supplying countries are China (Mainland), India, and Egypt, which supply 99%, 1%, and 1% of import solar panels respectively. Import solar panels products are most popular in Domestic Market, Western Europe, and Southeast Asia. You can ensure product safety by selecting from certified suppliers, including 3,463 with ISO9001, 442 with Other, and 323 with ISO14001 certification.
Buying Request Hub
Haven't found the right supplier yet ? Let matching verified suppliers find you. Get Quotation NowFREE
Do you want to show import solar panels or other products of your own company? Display your Products FREE now! | https://www.alibaba.com/showroom/import-solar-panels.html | CC-MAIN-2018-05 | refinedweb | 1,000 | 69.79 |
This interview was originally published at SurviveJS.
One of the developments that has began to change the way we style our applications and sites is the introduction of utility classes. Tailwind is an example of a popular framework that has adopted the approach.
To learn more about the approach, I am interviewing Chad Donohue about his library called Benefit. It's designed particularly React developers in mind and makes Tailwind a perfect fit with React.
My name is Chad Donohue. I enjoy creating user experiences and talking about design systems. I've written computer software as a Full Stack Engineer for a little over ten years. When I'm not in front of a computer screen, I spend time with my beautiful wife and three amazing kids.
benefit is a Tailwind-compatible utility CSS library powered by emotion. It is framework agnostic, has a file size of
5kB, and it only ships the CSS styles that you use.
Here we have a
Button component:
import React from "react"; export default function Button() { return <button>Click Me</button>; }
We'll add a few lines to include benefit and add some Tailwind class names:
/** @jsx jsx */ import { jsx } from "benefit/react"; export default function Button() { return ( <button className=" px-8 py-2 bg-blue-600 text-white font-bold tracking-wide uppercase rounded-full border-2 border-blue-700 shadow-lg"> Click Me </button> ); }
By adding two lines and some additional class names, we have accomplished two things:
~10,000) at just a
5kBinclusion cost
~350 bytes
At the point of inclusion within your project, benefit takes its default configuration (or your own if you need to customize it), then it generates CSS declarations in memory.
As you use these generated class names in your markup, the styles are looked up in benefit's cache, auto prefixed, and injected into the document.
On the client, benefit generates and injects styles for class names only when they are used. On the server, benefit pairs with emotion’s built-in SSR support and inlines CSS with the markup.
Since benefit is powered by emotion in both scenarios, you also can tap into the power that it provides, like nested declarations and deterministic style composition.
Also, being framework agnostic, benefit can be used alongside any JS framework. It can be introduced at the component level or at the root of an application. And, dead-style elimination is built-in.
I help build and ship 3rd party components. It is for sure an edge-case, but it brought on problems to solve for:
margin,
padding, etc.) and sandbox the elements that made up our shipped components and not have to duplicate those normalized styles with every new component.
benefit started as an internal idea to solve these issues and has been through a few iterations. As it matured a bit more, we began to see how this could be a solution for both isolated components and full-blown sites alike.
We are working to remove the runtime altogether for SSR. Soon, we'll have some examples put together for how this would work with something like Next.js.
We're also working on the way to generate custom documentation based on a configuration. So, it will be easy to share visually what different benefit configurations look and behave.
As digital experiences increase in complexity, we have more of a responsibility as makers to take a look at what we are shipping to the end user. In the future, I see this getting better through the use of code-splitting and rendering on the server before shipping to the browser.
The use of utility classes for styling will continue to gain popularity thanks to the great work over at Tailwind. Utility classes are a great pattern that DRYs up a lot of the view layer. I'm not saying that every page/application will only have utility classes, but the individual one-off styling needs will go down considerably.
CSS Utility Classes and "Separation of Concerns" by Adam Wathan is an excellent read that talks about some of the benefits to be gained from styling with utility classes.
Make it a goal to learn something new every day and share your knowledge with others. This industry moves fast, and it helps tremendously to be able to step out of your comfort zone often.
It is a gratifying profession that allows you to produce your best work while simultaneously learning something.
Thank you for your time and interest! I've enjoyed sharing my thoughts here and am always around on Twitter and GitHub. Ask me anything 😄!
Thanks for the interview, Chad! I can immediately see how adopting the utility class approach would help my in my daily development and I might have a project in mind that's a perfect fit for it!
To learn more about benefit, check out the homepage for more examples and star the project on GitHub. | https://bebraw.hashnode.dev/benefit-utility-css-library-interview-with-chad-donohue-cjwgj7bld00111ss1s33re76w?guid=none&deviceId=894c6ff1-5b44-44e2-b4e2-103d864f59d9 | CC-MAIN-2020-34 | refinedweb | 823 | 61.36 |
DB2::Row Framework wrapper around rows using DBD::DB2
package myRow; use DB2::Row; our @ISA = qw( DB2::Row ); ... use myDB; use myTable; my $db = myDB->new; my $tbl = $db->get_table('myTable'); my $row = $tbl->find($id); print $row->col_name;
new
Do not call this - you should get your row through your table object. To create a new row, see
DB2::Table::create_row
save
Save the current row. Will happen automatically if it can. Only really need to call this if you're interested in any generated identity column for a new row.
discard_changes
If you do not want your changes up to this point to be kept,
discard_changes will do the obvious
timestamp_to_time
Converts a DB2 timestamp column to a perl ("C") time value
time_to_timestamp
Converts a perl ("C") time value to a DB2 timestamp string.
time_to_date
Convert time to date. Converts a C/perl time to DB2's DATE format.
validate_column
Override this if you need to validate changes to a column. Normally you can leave this to the database itself, but you may want to do this earlier than that. You can also use this to massage the value before it is kept.
The parameters are:
self column name new value
To keep the value as given, simply return it. To modify (massage) the value, return the modified value. To prevent the update, die.
Remember to call your SUPER before validating yourself to allow for future enhancements in
DB2::Row. The base function may perform massaging such as converting time to timestamp, etc., in the future, so you can get that for free then. Currently this behaviour is done in the
column method, but it may move into here in the future.
Beware not to try to update the current column directly or indirectly through this method as you could easily end up with infinite recursion.
column
This get/set method allows you to retrieve or update any given column for this row. With a single parameter, it will return the current value of that column. The second parameter will be the new value to use. This value will be validated before being used.
as_hash
This is intended to help template users by returning the current row as a hash/hashref. For example, if you have a set of rows, @rows, you can give them to HTML::Template as:
loop => [ map { $_->as_hash(1) } @rows ],
The optional parameter will force a scalar return (hashref) despite an array context, such as the map context above.
find
Shortcut to calling
DB2::Table::find_id.
find_where
Shortcut to calling
DB2::Table::find_where
table_name
Shortcut to calling
DB2::Table::full_table_name
count
Shortcut to calling
DB2::Table::count
count_where
Shortcut to calling
DB2::Table::count_where
delete
Shortcut to calling
DB2::Table::delete for this ID
Shortcut to calling
DB2::Table::SELECT
dbi_err
dbi_errstr
dbi_state
The relevant variable from DBI for the last problem occurring on this table.
Dumps the current values of this row without any internal variables that Data::Dumper would follow.
AUTOLOADed functions
Any column defined by the corresponding DB2::Table object is also a get/set accessor method for DB2::Row. For example, if you have a column named "LASTNAME" in your table,
$row_obj->lastname() will retrieve that column from the $row_obj object, while
$row_obj->lastname('Smith') will set that objects' lastname to 'Smith'. | http://search.cpan.org/~dmcbride/DB2-db-0.25/lib/DB2/Row.pm | CC-MAIN-2017-22 | refinedweb | 554 | 62.58 |
"This program will read grades from two different files (one at a time) and do some simple analytical operations on those grades -- identifying the maximum, minimum, median, and mean."
"Information about the scores will be stored in an array. But, the array will not hold the scores themselves -- instead it will record the number of occurrences of each particular score."
So, I have an assignment where I need to input 2 datafiles, "data1.txt" and "data2.txt", and find the max, min, median, and mean. I need to use the main function, as well as readData and analyzeData functions.
The readData function opens and reads the specified data file, counting the scores that are found inside.
Parameters:
fileName: (string input) the name of the file to read
dataCounts: (integer array output) the scores found in the file
totalScores: (integer output) the total number of scores found
The analyzeData function analyzes the data described by the array given to it, computing desired statistics
Parameters:
dataCounts (integer array input) describes the values of data
totalScores (integer input) the number of scores described
maximum (integer output) the maximum score
minimum (integer output) the minimum score
median (integer output) the median scores
mean (double output) the average (mean)
this is what i have done so far;
#include <iostream> #include <fstream> #include <string> using namespace std; void readData(): //opens and reads the specified data file, counting the scores that are found inside. void analyzeData(): //analyzes the data described by the array given to it, computing desired statistics int main() { /*analyzes the data in TWO data files, named "data1.txt" and "data2.txt" and displays the maximum, minimum, median, and mean for each file. There will be NO input from the keyboard*/ }
what should i do next? how do I use incorporate data1 and data2? | http://www.dreamincode.net/forums/topic/315752-array-grade-help/ | CC-MAIN-2016-22 | refinedweb | 299 | 50.06 |
Revision history for Test-Mocha 0.67 2019-02-05 - Add compile tests. - Make UNIVERSAL::ref required again because of tests failing on Perl versions less than 5.025. 0.66 2019-02-04 - Revert the change from 0.65 because it broke spies. - Make UNIVERSAL::ref optional because it can't be installed on Perl 5.025 and above. 0.65 2019-01-13 Pass spy when dispatching to real object method to enable stubbed methods to be dispatched indirectly when it is called from another method in the original object. 0.64 2015-09-30 Fixed a bug with TODO not being looked for in the right place due to $Test::Builder::Level not being restored after being modified. This wasn't detected due to a bug in a test. (Thanks exodist!). 0.63 2015-09-05 Make the latest development release official. 0.62_02 2015-08-14 - Use Devel::PartialDump again, since bug fixes have been released. - Enable multiple method calls from multiple mocks/spies to be captured simultaneously in stub(), called_ok(), or inspect(). 0.62_01 2015-08-05 - Cleared out the namespaces of Mock and Spy as much as possible to avoid getting in the way of AUTOLOAD. - Added tests to make sure we keep the namespacees clean. - Resolved failing tests in t/called_ok.t caused by Test::More v0.98. The issue was resolved in Test::More v0.98_05. - Stop skipping test_out(qr//) tests for newer versions of Test::Builder::Tester. The issue was resolved in Test::More v0.99_01. 0.62 2015-07-24 - Introducing: spy() for creating spies. - Restructured internals to use proper methods instead of avoiding polluting namespace of mock objects. Their names begin with 2 underscores to mark them as private to the distribution. 0.61 2015-04-23 - Added class_mock() for mocking class methods and module functions (Scott Davis) 0.60_02 2014-10-28 - Fix travis-ci configuration. 0.60_01 2014-10-04 0.60 2014-08-22 - Added function prototypes to trim down syntax (API change). stub() and inspect() are no longer backwards compatible for v0.21 API. - Apply perltidy and perlcritic to code. 0.50 2013-11-18 Major interface change - dies() is now throws(). - verify() is now called_ok(). - stub(), called_ok() and inspect() now take a coderef with a method spec instead of a mock object. - Backwards compatibility has been maintained with deprecation warnings. - Carp 1.11 is no longer supported. 0.21_02 2013-10-24 - Enable isa(), DOES() and can() to be stubbed and verified. - Fix test failure with Carp 1.11 where Carp::Heavy calls ref() on mocks. - Skip failing tests with Test::Builder::Test 1.23_002 where `test_out(qr//)` does not work because it tries to stringify `qr//`. 0.21 2013-10-16 - Fix test failure with Carp 1.32 where CARP_TRACE is called on mocks. [Internal modifications to Devel::PartialDump] - Removed Moose dependency - Removed all functions (only dump() remains as a method) - Minor bug fixes: - 'list_delim' attribute is now used to separate lists. - 'max_length' attribute with value 0 now dumps '...'. - Object dumps have '=' after the class name. 0.20 2013-10-11 - Allow ref() to be stubbed. - Provide better diagnostics with method call history and caller info when verify() fails. - Make Moose an optional prerequisite. 0.19 2013-09-18 - Add inspect_all() function. - Remove Exception::Tiny test dependency. - Fix returns() and dies() when no arguments. 0.18 2013-09-13 - Fix tests for Perl versions older than 5.014 (operator precedence for bitwise '&'). - Make matcher_moose.t optional using Test::Requires. - Stub executes() should be given mock $self as its first argument. 0.17 2013-09-10 - Set version dependency for Types::Standard to 0.008 when InstanceOf was introduced. 0.16 2013-09-04 - Fix for Perl versions older than 5.014 (s/// operator with /r switch). 0.15 2013-09-02 - Don't let AUTOLOAD() handle DESTROY(). - Added stubbing with callbacks. 0.14 2013-08-30 - Made inspect() public. - Removed Moose and other dependencies. 0.13 2013-08-26 - Distribution fix (no modules were provided). 0.12 2013-08-26 - Added support for using Type::Tiny type constraints as matchers. 0.11 2013-08-16 - Forked from Test-Magpie. - Removed deprecated functions. - Refined documentation. 0.10 2013-08-12 - Changed when->then_return to stub->returns. [Stub behaviour changed to match Mockito] - The last stubbed response persists. - New stubs take precedence over older. | http://web-stage.metacpan.org/changes/distribution/Test-Mocha | CC-MAIN-2019-43 | refinedweb | 729 | 71 |
Modern SEO
Getting the most out of search engines and social networking is more important than ever! Take advantage of Google, Facebook and Twitter’s most advanced features, and boost user engagement.!Agenda
Traditional SEO
It’s important to know where we’re starting from before we talk about where we’re going. Let’s look at some SEO basics, including content quality, page rank, positive and negative metrics.
Optimizing for Crawlers
Giving hints to crawlers, by way of a
robots.txt file, meta tags and DOM attributes can go a long way in allowing search engines to index and represent your content in the best way possible.
EXERCISE 1 - Crawler optimizations
Using what we’ve just learned, make some optimizations that will give web crawlers some important hints as to how your URLs relate to each other.
Structured Data
While search engines are good at inferring what your content is all about in general, providing structured data can allow further enrichment of how your site is represented in search results. We’ll look at providing standardized structured data, aligned with the schema.org standard, to Google and other search engines.
EXERCISE 2 - An events page
Events are one of several types of structured data that popular search engines use to enrich listings in results pages. Using the google structured data tester, add some of this metadata to your app, and fix any warnings brought to your attention.
Accelerated Mobile Pages
Accelerated Mobile Pages are part of a standards-based effort to provide a nearly instant loading experience for content on mobile devices. While AMP-ready pages make use of familiar technologies, there are some strict constraints we must adhere to, in order to enable this fast-loading experience.
EXERCISE 3 - Build an AMP Page
We discussed two strategies for building AMP pages. For this exercise, make a separate namespace for equivalent AMP content, and build a simple representation of a news article, while staying within the relevant constraints.
Lunch
Break for Lunch
Social Metadata
Modern web crawlers execute at least a limited subset of your app’s JavaScript, but this doesn’t help much when it comes to sharing links on Facebook, Twitter, Slack and other sites. To provide a great social sharing experience, we need to employ server-side rendering and a combination of OpenGraph and Twitter Card metadata, to go along with our content.
EXERCISE 4 - Enriched product pages
The product pages for our e-commerce site currently provide a very basic sharing experience. Using your newfound knowledge of Twitter Card and OpenGraph meta tags, enrich the sharing experience with a large product image, and a short description
Rules of Thumb
Now that we’ve learned about social metadata, we can see that generating thumbnails will be a bit of a challenge. We’ll look at a battle-tested library called ImageMagick to generate all the sizes we need on the fly, and learn some tips and tricks to make sure our cropping and resizing will turn out beautifully.
EXERCISE 5 - Thumb Generation
Find a set of ImageMagick arguments that results in a great group of thumbnails, given a collection of source images. Make sure not to cut any area indicated as “critical” out of the thumbnail image.
Embedding in other places
We’ll go beyond simple link sharing, and explore the OEmbed standard, whereby our apps can instruct consumers as to how our rich web content should be represented on their sites.
EXERCISE 6 - iframe via OEmbed
Build a new express route that returns an OEmbed-compliant JSON response, instructing consumers to iframe your content.
Mobile Optimizations
In mid-2016, Google announced that mobile-friendliness would have a significant impact on SEO for their new “mobile index”. We’ll take a look at a few tools designed to measure how well our apps work on mobile devices and provide some key metrics and thresholds to keep an eye on. We’ll also take a look at some additional metadata that modern mobile devices can use to provide a more native-app-like experience.
EXERCISE 7 - Lighthouse + Web App Manifest
Add a web app manifest to our app, and make other easy improvements to boost our app’s lighthouse score
Wrap Up & Recap
We’ll recap everything covered throughout the day. | https://simplabs.com/training/2017-02-17-modern-seo | CC-MAIN-2017-47 | refinedweb | 714 | 55.98 |
I am trying to process a month's worth of website traffic, which is stored in an S3 bucket as json (one json object per line/website traffic hit). The amount of data is big enough that I can't ask Spark to infer the schema (OOM errors). If I specify the schema it loads fine obviously. But, the issue is that the fields contained in each json object differ, so even if I build a schema using one day's worth of traffic, the monthly schema will be different (more fields) and so my Spark job fails.
So I'm curious to understand how others deal with this issue. I can for example use a traditional RDD mapreduce job to extract the fields I'm interested in, export and then load everything into a dataframe. But this is slow and seems a bit like self-defeating.
I've found a similar question here but no relevant info for me.
Thanks.
If you know the fields you're interested in just provide a subset of schema. JSON reader can gracefully ignore unexpected fields. Let's say your data looks like this:
import json import tempfile object = {"foo": {"bar": {"x": 1, "y": 1}, "baz": [1, 2, 3]}} _, f = tempfile.mkstemp() with open(f, "w") as fw: json.dump(object, fw)
and you're interested only in
foo.bar.x and
foo.bar.z (non-existent):
from pyspark.sql.types import StructType schema = StructType.fromJson({'fields': [{'metadata': {}, 'name': 'foo', 'nullable': True, 'type': {'fields': [ {'metadata': {}, 'name': 'bar', 'nullable': True, 'type': {'fields': [ {'metadata': {}, 'name': 'x', 'nullable': True, 'type': 'long'}, {'metadata': {}, 'name': 'z', 'nullable': True, 'type': 'double'}], 'type': 'struct'}}], 'type': 'struct'}}], 'type': 'struct'}) df = spark.read.schema(schema).json(f) df.show() ## +----------+ ## | foo| ## +----------+ ## |[[1,null]]| ## +----------+ df.printSchema() ## root ## |-- foo: struct (nullable = true) ## | |-- bar: struct (nullable = true) ## | | |-- x: long (nullable = true) ## | | |-- z: double (nullable = true)
You can also reduce sampling ratio for schema inference to improve overall performance. | https://codedump.io/share/l3b8iIZdiNHn/1/spark-200-reading-json-data-with-variable-schema | CC-MAIN-2017-09 | refinedweb | 323 | 72.87 |
On Mon, Aug 10 2009, Steve Langasek wrote: > On Sun, Aug 09, 2009 at 07:37:10PM -0500, Manoj Srivastava wrote: > >> >> >. > >> > I don't have a strong opinion on whether ddebs should be documented in >> > policy, but I certainly don't agree with requiring dpkg to understand >> > them as a prerequisite for implementing a general purpose, public >> > archive for auto-stripped debugging symbols packages. There really is > >> Since this is on -policy, I am commenting on when it gains >> enough gravitas to be enshrined in policy. Getting things in policy is >> also not a pre-requisite for implementing a general purpose, public >> archive for auto-stripped debugging symbols packages. > > There is a namespace issue here, that falls in scope for Policy because it > impacts interoperability; if there are going to be limits placed on the > names of packages in the main archive, that almost certainly *does* belong > in Policy. And the Policy editors should not be dictating a dpkg > implementation for ddebs as a precondition, not when that dpkg > implementation isn't required and doesn't appear to have any backing from > the dpkg maintainers. The policy editors may ask for the design to be implemented and tested, and (gasp) even critique the design, before having it added to policy. Policy is not the place to shoce in untested/raw design. And in this case, there seems to be an issue of occams razor: why should a new file suffix be created when policy based naming wold not require it in the first place; namespace partitioning can be done on the package name, not on the filename. So, please keep heckling from the peanut gallery to a minimum, please, and assume that policy editors have a modicum of sense when dealing with their role duties. >> I do have a question: Why is the fact that these are >> automatically created relevant? > > Because if they're *not* automatically created, there's no namespace > issue: package name conflicts would continue to be resolved the usual > way, via ftpmasters and the NEW queue. Seems like if policy carves out a namespace for debug packages, it would serve for both automatically generated and hand crafted debug packages; and it is trivial for the automatic generation not to happen when there is an entry in debian/control for a debug package already, as long as there is a naming convention for debug packages. > >> Why should it be a leading change in policy? Can't we try out >> the experiment, make any changes needed, and then come with the policy >> change? If we do not need maintainers to change anything, ans we do not >> need dpkg to change anything, why is there a hurry to get this into >> policy before it has been implemented and tested? > > I'm in no particular hurry, myself, but I think the right time to > reserve package namespace is *before* there are exceptions in the > archive that have to be dealt with. What with the maxim about Policy > not making packages insta-buggy, and all. If policy is going to be creating name spaces for debug packages, it should be done on the basis of the content or type of package it is, not because of the tools that created it. We do not have a emacs-debhelper_2.3-1_amd64.deb emacs-cdbs_2.3-1_amd64.deb emacs-yada_2.3-1_amd64.deb So as long as there is clarity on what the contents of the package should be (only detached debug symbols, kept in a standard location, or something), how they are generated should not matter. >> So why not just have foo-ddeb.*.deb? > > Why not, indeed? manoj -- "Oh what wouldn't I give to be spat at in the face..." a prisoner in "Life of Brian" Manoj Srivastava <srivasta@debian.org> <> 1024D/BF24424C print 4966 F272 D093 B493 410B 924B 21BA DABB BF24 424C | https://lists.debian.org/debian-devel/2009/08/msg00298.html | CC-MAIN-2017-39 | refinedweb | 642 | 56.18 |
This is my assignement, my professor wnats me to output 4500.0 in interest, but I'm getting is 0.
All codes are below,
public class Interest {
public static void main(String[] args) {
//declare variables
double loan=5000, interest=0.0;
int years=15, rate=6;
//calculations
interest = loan * (rate/100) * years;
//output
System.out.println(interest);
}
}
THIS IS THE ASSIGNMENT PAPER
Assignment 2 (10 points): Calculating Interest
Our goal is to calculate the interest given the loan amount, rate, and years to be taken out.
Your program should have the following:
Make the name of the project Interest
4 comment lines that state the purpose of the program, author, date and JDK
used. (1 point)
Include 4 variables for the amount of loan, rate, years, and interest. The amount
of loan and interest variables are decimal numbers. The years and rate
variables should be integers. Make up your own meaningful correctly-formed
variable names for these 4 items and declare them appropriately as an int or
double. (4 points)
Set the loan amount to be 5000. Set interest rate to be 6. Set years to be 15.
With an assignment statement, have the computer calculate the interest using the
following formula: (3 points)
interest = amount * (rate/100) * years
Note: Please use your own variable names in above formula.
Have the computer display the amount of loan, rate, years and the interest that you
calculated. You should print this on several lines. (2 points)
Compile your program until you have no compilation errors. When you run this
application, you should get an answer for interest as 4500.0 If you are getting an
answer of 0, THINK!! Don't change the variable types. Don't worry about it
appearing with dollars and cents since formatting has not been covered yet. | http://forums.devshed.com/java-help-9/simple-homework-java-please-help-930683.html | CC-MAIN-2015-32 | refinedweb | 300 | 66.23 |
This tutorial is all about How to Write to a File or a Plain Text Document.
There are multiple ways to write data into text file or plain text file. We will learn the most standard way of writing into a text file.
Let us take an example:
java
- public class Manager{
- public static void main(String[] args) throws IOException{
- //Whenever we are dealing with any sort of input output operation
- //it is desirable to throws IOException as this is checked exception
-
- FileWriter out=new FileWriter(f);
- BufferWriter bw=new BufferWrite(out); //BufferWriter Object
- bw.write("abc");
- bw.write("xyz");
- bw.newLine(); // to change the line in the text document newLine method is used
- bw.write("123");
- bw.write("678");
- bw.flush(); //this will save the text document
- bw.close();
- out.close(); //close method with this twoi object will avoid any kind of memory leaks.
-
- }
- }
-
OUTPUT:
Write Operation successful
Readers might real also:
How Can We Handle Web Based Pop Up
How to get current date and time of the system in Java
Facebook Comments
(Visited 2 times, 1 visits today) | http://itsourcecode.com/2016/02/how-to-write-to-a-file-or-a-plain-text-document/ | CC-MAIN-2017-17 | refinedweb | 181 | 59.13 |
TurtleBot ROS moving using Twist
I am trying to program for a TurtleBot, but there is a significant lack of tutorials for the robot and I have been unable to write my own C++ which works. I am trying to use a tutorial from another robot just to make the robot move when a key is pressed.
The source tutorial is found here: , which I only modified the publish topic to "/cmd_vel"
#include <iostream> #include <ros/ros.h> #include <geometry_msgs/Twist.h> class RobotDriver { private: //! The node handle we'll be using ros::NodeHandle nh_; //! We will be publishing to the "/base_controller/command" topic to issue commands ros::Publisher cmd_vel_pub_; public: //! ROS node initialization RobotDriver(ros::NodeHandle &nh) { nh_ = nh; //set up the publisher for the cmd_vel topic cmd_vel_pub_ = nh_.advertise<geometry_msgs::Twist>("/cmd_vel", 1); } //! Loop forever while sending drive commands based on keyboard input bool driveKeyboard() { std::cout << "Type a command and then press enter. " "Use '+' to move forward, 'l' to turn left, " "'r' to turn right, '.' to exit.\n"; //we will be sending commands of type "twist" geometry_msgs::Twist base_cmd; char cmd[50]; while(nh_.ok()){ std::cin.getline(cmd, 50); if(cmd[0]!='+' && cmd[0]!='l' && cmd[0]!='r' && cmd[0]!='.') { std::cout << "unknown command:" << cmd << "\n"; continue; } base_cmd.linear.x = base_cmd.linear.y = base_cmd.angular.z = 0; //move forward if(cmd[0]=='+'){ base_cmd.linear.x = 0.25; } //turn left (yaw) and drive forward at the same time else if(cmd[0]=='l'){ base_cmd.angular.z = 0.75; base_cmd.linear.x = 0.25; } //turn right (yaw) and drive forward at the same time else if(cmd[0]=='r'){ base_cmd.angular.z = -0.75; base_cmd.linear.x = 0.25; } //quit else if(cmd[0]=='.'){ break; } //publish the assembled command cmd_vel_pub_.publish(base_cmd); } return true; } }; int main(int argc, char** argv) { //init the ROS node ros::init(argc, argv, "robot_driver"); ros::NodeHandle nh; RobotDriver driver(nh); driver.driveKeyboard(); }
The code compiles and runs correctly, but the turtlebot does not move when commands are issued. Any ideas why?
Additional Info:
When I'm on the laptop provided with my Turtlebot messages appear to not be being sent (or are not being delivered). In separate terminals, I have:
turtlebot@turtlebot-0516:~$ sudo service turtlebot start [sudo] password for turtlebot: turtlebot start/running, process 1470 turtlebot@turtlebot-0516:~$ rostopic echo /cmd_vel
And
turtlebot@turtlebot-0516:~$ rostopic pub /cmd_vel geometry_msgs/Twist '[1.0, 0.0, 0.0]' '[0.0, 0.0, 0.0]' publishing and latching message. Press ctrl-C to terminate
With info:
turtlebot@turtlebot-0516:~$ rostopic info /cmd_vel Type: geometry_msgs/Twist Publishers: * /rosttopic_2547_1352476947372 () Subscribers: * /turtlebot_node () * /rostopic_2278_1352476884936 ()
There is no output for the echo at all
What's the output of
rostopic echo /cmd_velwhen you are sending commands? What's the output of
rostopic info /cmd_vel. Please edit your question to add more information. Also please tag it correctly to make sure the right people get notified.
Hm. Weird that
rostopic pubdoes not work. You should receive exactly one message since
rostopic publatches the topic. Can you try
rostopic pub /cmd_vel geometry_msgs/Twist '{linear: {x: 1.0}}' -r 10to repeat the message with 10 Hz? Btw. my syntax will have the same effect as yours.
if we want to move two turtle bot by rostopic pub /cmd_vel geometry_msgs/Twist what can i do ..... because by giving this command both turtle bot move in the same direction ..... i want to move one turtlebbot in one direction and other turtlebot in another direction .... thanks for the help @Lorenz | https://answers.ros.org/question/47500/turtlebot-ros-moving-using-twist/ | CC-MAIN-2021-43 | refinedweb | 584 | 58.08 |
NEW: Learning electronics? Ask your questions on the new Electronics Questions & Answers site hosted by CircuitLab.
Everything Else » How to take a screenshot
Many times our staff here at NerdKits (and our wonderful community on the forums) is asked to diagnose or help with a problem that is vaguely described. We always encourage customers who email us to include a picture of their setup, and a screenshot of any command line errors they might be getting. Having the full screenshot really helps the debugging process since it often gives us many more clues as to what the problem is than the original poster may have thought to describe.
In order to make this process easier, here is a step-by-step guide to taking a screen shot:
In Windows, you can use the Print Screen button, usually found near the upper right corner of your keyboard, to copy an image of the current desktop to your clipboard (the equivalent of copying text using Ctrl+C). I recommend you use Alt+Print Screen (hold the Alt key, then press the Print Screen key) in order to copy only the currently selected window. Then open up your favorite image editing program and paste the image right into it. I am going through this example with Microsoft Paint because everybody has it since it is included with Windows.
Windows Vista and Windows 7 have special tool called the snipping tool. It allows you to select a section of your desktop and save it directly as an image. Microsoft does a great job of explaining how to use it, so I don't have to.
Pressing Command+Shift+3 will copy your entire screen and save the image to your desktop. Pressing Command+Shift+4 will give you ability to select a rectangle on your screen. The section you select will be saved as an image on your Desktop.
Most Linux distributions come with some handy way of capturing a screenshot. In Ubuntu the Print Screen button will launch the Screenshot application which makes it easy to grab the whole screen, or a portion of it. You can launch this application from Applications -> Accessories. KDE has a nice screen grabbing tool called ksnapshot. If you are more the command line sort of person, and have imagemagick installed you can do
import -window root screenshot.png
In all of the methods above you are taking a screenshot and then editing it to your liking in a different image editing program. This is generally a cumbersome procedure. My favorite, and most often used, method of grabbing a screenshot is by using gimp - a free, open source, and rather powerful image editing program. It is not a small piece of software, but once you have it it is very useful for all sorts of image manipulation. To capture a screenshot with gimp, open up the program and choose File -> Create -> Screenshot.
A window will appear asking how you should like to take the screenshot. I usually select "screenshot of a single window," and give myself a 5 second delay. This gives me time to minimize gimp, and get the window the way I want it.
Once you have the screenshot you can post it to our forums using our recommended best practices for posting pictures.
Humberto
In windows if you just want a screen shot of just the active window you could press alt-printscreen in place of the printscreen key alone. Thought I'd give another option...
Rick
Whoops,... I feel silly.. You actually said that in your post. I was reading it on my phone and apparantly missed that part :D... Too bad I can't delete my mistake.
I will however state that I totally agree that screen shots are golden, as are the photos of the project in its current state when asking for help. The extra eyes often catch something the person who built the project will miss.
I'm sure that's why authors of books have proof readers. They will catch what the author misses.
Again, I apologise for restating what you had already said....
Rick (With mud on his face :D)
in iPhone an iPod touch:
hold down the power button and press the home key. there will be a camra sound and the photo will be saved to the pictures folder
Please log in to post a reply. | http://www.nerdkits.com/forum/thread/911/ | CC-MAIN-2018-22 | refinedweb | 730 | 71.14 |
ninth version
ZAGREB 2010
No part of this publication should be copied or edited without permission from the author.
LIST OF ABBREVIATIONS
Abkh. = Abkhaz adm. = admirative ADV = adverbial advers. = adversative Adyg. = Adyghean af. = affirmative ant. = anterior assoc.= associative, associative plural caus. = causative cond. = conditional conj. = conjunctivity dir. = directional (directional prefix) ERG = ergative evid. = evidential fut. = future fut.II = future II ger. = gerund imp. = imperative impf. = imperfect inf.= infinitive INST = instrumental inter. = interrogative intrans. = intransitive invol. = involuntative Kab. = Kabardian neg. = negation NOM = nominative opt. = optative part. = participle perm. = permissive pl. = plural plup. = pluperfect poss. = possessive pot. = potential pref. = prefix pres. = present pret. = preterite quot.part. = quotative particle rec. = reciprocal refl. = reflexivity rel. = relative particle rec. = reciprocal prefix Rus. = Russian
PREFACE
This grammar should be used with some caution, not only because it was written by a linguist who is far from being a fluent speaker of Kabardian. It is largely compilatory in nature, and many examples were drawn from the existing works on Kabardian by M. L. Abitov, Mukhadin Kumakhov, and others. However, I have also excerpted and analyzed many sentences from the literature, especially from the Nart corpus (Nrtxar, 1951, Nrtxar, 2001), and some examples were elicited from native speakers. Although I have relied heavily on the published scholarly works on Kabardian, my interpretations of the data are sometimes very different from those in the available literature. I have tried to approach the Kabardian language from a typological point of view, comparing its linguistic features, that may appear strange to speakers of Indo-European languages, to similar features found in other languages of the world. Although primarily designed for linguists, I hope that at least parts of this overview of Kabardian grammar may be of some use to laymen. If it succeeds in attracting at least a few people to the study of Kabardian, this grammar will have served its purpose. Apart from John Colarusso's grammar (1992) and his recently published grammatical sketch (2006), and the largely outdated monograph by Aert Kuipers (1960), this is, to my knowledge, the only general overview of the structure of Kabardian available in English. In contrast to these three works, which were composed as a result of field work with native speakers from the Kabardian diaspora, this grammar attempts to describe the standard Kabardian language used in the Kabardino-Balkar Republic of the Russian Federation. This grammar is a result of my long-standing endeavor to learn this exciting and fascinating, though incredibly difficult language. In a world in which a language dies out every fortnight, the linguist's task is at least to describe the small languages threatened by extinction. Although the statistics on the number of speakers of Kabardian does not lead one to think that Kabardian is in immediate danger of extinction, especially if compared with other small Caucasian languages in Russia, sociolinguistic data show that the number of native speakers is decreasing among the younger generations; it seems that it is especially in the diaspora that Kabardian is facing extinction. As R. M. W. Dixon wrote, anyone who calls themselves a linguist should assume the task of saving at least one endangered language from oblivion. This work is my response to this greatest challenge that linguists, as well as other people who care about the preservation of linguistic diversity, are facing today. Finally, I would like to thank Lemma Maremukova and Alim Shomahua for their help and for the examples they provided as native speakers of Kabardian. Globalization, which is partly responsible for the mass extinction of languages, has, on the other hand, opened some, until recently unimaginable, possibilities for the investigation of languages over large distances, for "field work" via Internet. F''axwa.
INTRODUCTION
The Kabardian language is a member of the Abhkaz-Adyghean (Northwest Caucasian) language family. 1 Together with the closely related Adyghean language Kabardian constitutes the Adyghean branch of this family, while Abhkaz and Abaza constitute the other branch (these are also considered to be dialects of the same language by some linguists). The third, transitional branch was formed by the recently extinct Ubykh 2: Proto-Abkhaz-Adyghean Abkh.-Abaz. Adyghean-Kabardian
The frequent common name for Adygheans and Kabardians is Circassians. The names Kabardian and Circassian are alloethnonyms 3. The Adygheans and the Kabardians call themselves da, and their language dabz. Their languages are mutually quite intelligible, and most Adygheans and Kabardians consider themselves members of the same nation, with a common history and a common set of social institutions and customs (da xbza) 4. The Kabardians are the easternmost Abkhaz-Adyghean people. Their country is bordered by Ossetia to the south, by Chechnia and Ingushetia to the east, and by the
The NW Caucasian languages may be affiliated with the NE Caucasian (Nakh-Dagestanian) languages, but this hypothesis is still unproven sensu stricto (but see, e. g., Dumzil 1933, Abdokov 1981, 1983). Some linguists connect them to the extinct Hattic language of Anatolia (cp. Chirikba 1996, Braun 1994). In my opinion, the evidence suffices to show areal and typological, but not necessarily genetic links between Hattic and NW Caucasian. 2 It seems that Ubykh was dialectally closer to the Adyghean languages than to the Abkhaz-Abaza languages (Kumaxov 1976). However, Chirikba (1996) rejects this, and proposes an Ubykh-Abkhazian node. 3 The ethnonym Kabardians (Rus. kabardncy) is of unknown origin (Kabardians derive it from the name of one ancient chief, Kabarda Tambiev), while the ethnonym Circassians (Rus. erksy, older erkasy) has two etymologies; some relate it to the Greek name Kerktai for one of the ancient peoples on the east coast of the Black Sea (e. g. Der Kleine Pauly, s. v.), and others derive it from the Ossetian crgs, originating from a Scythian word *arkas "nobleman" (e.g. M. Vasmer, Russisches etymologisches Wrterbuch, s. v.). The name kasog, pl. kasozi "Circassians" is found from the 10th century in Old Russian, and most linguists relate it to the Ossetian ksg "Circassian" (according to Vasmer this name is also related to the Scythian word *arkas "nobleman"). The resemblance with the ancient inhabitants of Northern Anatolia, Kaskas, is probably accidental. Finally, the name by which Circassians are called by the Abkhazians, -zxwa, has been compared with Gr. Zgoi, Zikkho, which designated a people on the NE Caucasus in the 1st century AD. This could, perhaps, be related to Kabardian c'xw "man" (Chirikba 1996: 3). 4 In the Soviet age, in accordance with the "divide and rule" principle, Circassians in the KarachayCherkess Autonomous Region of Russia were also set apart as a distinct ethnic group, but they consider themselves descendants of immigrant Kabardians. Their literary language is close to standard Kabardian, though it does have some characteristics which link it to Adyghean (cf. Kumaxova 1972: 22-23).
1
Abazinia region to the west. The Abkhaz-Adyghean languages used to be spoken along the entire eastern coast of the Black Sea, from the Kuban River (Kabardian Ps) almost as far as the town of Batumi, and in the interior all the way to the Terek River 5. The Kabardians became a distinct ethnic group in the Middle Ages. They were one of the dominant peoples to the north of the Caucasus, and they established diplomatic relations with the Muscovite kingdom as early as the 15th century. Emperor Ivan the Terrible married the Kabardian princess Goshenay, christened as Maria Temriukovna. In the course of the next couple of centuries a few important Russian noblemen and army leaders were of Kabardian origin. Slave trade in the Islamic world brought numerous Circassians into various countries of the Near East, and it is believed that the Mameluke dynasty, which ruled Egypt from 1379 to 1516, was of Circassian origin. Unlike the Adygheans and the West Circassians, whose society mostly remained organized into large families and clans/tribes, the Kabardians have developed a feudal social organization with princes (warq), noblemen (p) and serfs/commoners (wna?wt). Part of the nobility converted to Orthodoxy during the 16th century, and in the course of the 16th and 17th centuries Islam spread into Kabardia. The majority of the population, however, remained loyal to pagan traditions, still alive in the Kabardian folklore. Islam was not solidified until the 19th century wars with the Russians, and a part of the Kabardian people (speakers of the Mozdok dialect) remained true to Orthodoxy. After the Russian conquest of Caucasus in 1864 the Adygheans became isolated in the north (around the city of Maykop), and the area where all the other Abkhaz-Adyghean languages used to be spoken has decreased due to Russian immigration, and due to the exodus of almost all Ubykhs and of many Circassians into the Ottoman Empire 6. There are more than 400 000 speakers of Kabardian living in the Kabardino-Balkar Republic and the neighbouring areas. More than 90% of ethnic Kabardians use Kabardian as their mother-tongue, but almost all of them are bilingual and speak Russian as well. Kabardians are today an absolute majority in the Kabardino-Balkar Republic of the Russian Federation, with 55.3% of the population according to the 2002 census. Other important ethnic groups are Turkic Balkars, with around 11% of the population, and Russians, whose number is decreasing (according to the 2002 census they constituted around 25% of the population). The number of Kabardian speakers abroad is unknown, but it is believed that a significant number of them still live in Jordan, Turkey and Syria, where they emigrated after the Russian conquest of Caucasus in 1864. It is believed that around 400 000 Kabardians and Adygheans were then exiled, while their descendants went through a partial linguistic assimilation in their new countries. Today there are around 200 000 ethnic Kabardians in Turkey and around 30 000 in Syria 7, but it is not known how many of them still speak Kabardian. Part of the Syrian Kabardians emigrated to the USA after the Israeli occupation of the Golan Heights (1967), and settled as a relatively compact group in New Jersey. Most
The original homeland of the Abkhaz-Adyghean languages must have comprised the Black Sea coastal area as well, because common words for "sea" (Ubykh wa, Adyghean x, Kabardian x), for "big sea fish" (Abkhaz a-ps, Ubykh psa, Adyghean pca, Kabardian bdza), etc. can be reconstructed (see Klimov 1986: 52). 6 A part of Kabardians and other West Caucasian refugees ended up in Kosovo, where their language survived until recently in two villages, cf. zbek 1986. It appears that all of the remaining Kosovo Circassians were resettled in Russia a few years ago. 7 Kabardian is also preserved in a few villages in Israel, and until recently there was a primary school in Kabardian in one of these villages.
5
speakers of Kabardian in Jordan are centered around Amman, where there is a private school with classes held in Kabardian. In central Turkey Kabardians and other Circassians live around the cities of Samsun, Amasya and Sivas. While the use of Kabardian (and other Circassian idioms) was persecuted under Atatrk, the situation has become a bit better recently. Today Circassian culture associations are being founded in Turkey as well, and their language is making a humble appearance in the media (especially the Internet). Turkish television recently started broadcasting shows in Kabardian and Adyghean. From the typological point of view, Kabardian shares many common features with other Abkhaz-Adyghean languages: a complex system of consonants (though simpler than in Ubykh, for example), an extremely simple vowel system, a complex prefixation system and the S(ubject) O(bject) V(erb) order of syntactic constituents. There are, however, some typological differences between Abkhaz-Abaza and Kabardino-Adyghean. Unlike Abkhaz-Abaza, the Adyghean languages do not have grammatical gender, but they do have cases. Adpositional phrases are expressed as in the Indo-European languages, and not according to the HM (head marking) pattern 8, as in Abkhaz-Abaza. This means that a Kabardian postpositional phrase consists of the postposition and the governed noun only, without any person/gender affixes on the postposition (as, for example, in Abkhaz). The verbal system, however, is in some respects even more complicated than in Abkhaz-Abaza. Kabardian was a non-written language until the beginning of the twentieth century, though there were attempts to write it down using an adapted Arabic script. Up until the 20th century Classical Arabic was the language of literacy throughout the Caucasus. Special alphabets for Kabardian, based on Arabic and the Russian Cyrillic, were developed by the Kabardian scholar Shora Nogma (1801-1844), who is also the author of the first Kabardian-Russian dictionary (which was not published until 1956). However, these alphabets have not persisted, and neither have the Arabic and Latin alphabets developed by a Turkish doctor of Kabardian origin, Muhamed Pegatluxov (1909-10) 9. The Latin script was adapted for Kabardian in 1923 by M. Xuranov in Soviet Russia, and in 1924 the first Kabardian periodical began to be published in Latin script. Classes in primary schools have been held in Kabardian since 1923. In 1936 the Latin alphabet was replaced by an adapted Russian Cyrillic, still used as the Kabardian alphabet. The last reform of the Kabardian Cyrillic was in 1939. There are some attempts today to reintroduce the Latin script, especially with the Kabardian diaspora in Turkey, where the Latin alphabet is used. These attempts, however, have not taken hold in Kabardia. To abandon the Cyrillic script would mean to give up the literary tradition which has been developing for some seventy years now. Standard Kabardian is based on the Baksan dialect, spoken in Great Kabardia, which today constitutes a significant part of the Kabardino-Balkar Republic in the Russian
8
For the term HM (head marking), introduced by Johanna Nichols, and for other commonplace terms of linguistic typology, see Matasovi 2001. 9 On the beginnings of literacy in Kabardian see Kumaxova 1972: 18-21. The fate of the Latin alphabet adapted for Circassian by G. Kube Shaban is also interesting. Shaban was a Circassian scholar who was taken prisoner near Dravograd (on the Slovenian-Austrian border) as a soldier of the Wehrmacht, but he ran away from the British camp and settled in Syria, where he developed educational institutions for Circassians in the 50-ies (zbek 1982). However, the regime of the Baath party abolished all cultural institutions of Circassians in Syria in the 1960-ies, so that Kube Shabans alphabet was also abandoned.
Federation (west of the Terek River). There are also the Besleney dialect (also called Besney, spoken in the Karachay-Cherkess Republic of the Russian Federation and in the Krasnodar area), the Mozdok dialect (spoken in the north of North Ossetia, where some Kabardians are believed to have emigrated some time before the 16th century), and the Kuban dialect (spoken in the territory of the Republic of Adyghea in the Russian Federation) 10. All dialects are mutually intelligibile 11, and Besleney differs most from the other dialects, being, in a sense, transitional between Eastern Circassian (Kabardian proper) and Western Circassian (Adyghean, or Adyghe, divided into Bzhedhugh, Temirgoy, Abadzekh, and Shapsugh dialects). Besleney is spoken in the region from which the majority of Kabardians are believed to have emigrated, probably in the 13-14th centuries, to Great Kabarda. Along with Russian and Balkar, Kabardian is one of the official languages of the Kabardino-Balkar Republic of the Russian Federation. In the first four grades of primary school in the Kabardino-Balkar Republic classes are held in Kabardian, and there is a Kabardian Department at the University of Nalchik (the capital of Kabardia). Literature and the publishing industry in Kabardian are poorly developed, but there is a huge corpus of oral literature, with the mythological Nart Epic standing out (Colarusso 2002). There are a few weeklies and the daily Mayak ("Lighthouse") published in Kabardian. The official daily newspaper Ada psa ("Adyghean Word") is available on the Internet (http:). Note also the monthly magazine Psyna "Source" (). Radio Free Europe () broadcasts news in Kabardian on the "listen on demand" principle.
Speakers of the Kuban dialect are trilingual, they speak Adyghean along with Russian and Kabardian (Kumaxova 1972). They are rather recent immigrants into the region. 11 For an overview of Kabardian dialects, see Kumaxov (ed.) 1969.
10
PHONOLOGY
Kabardian has one of the most complex phonological systems of all the languages in the world. In native words there are only two vowels and around fifty consonants (depending on the dialect). The vowel a can be both short and long (ie. a and ) 12. VOWELS a -short -long
The vowel o appears in loan-words; the diphthong aw is pronounced as in some surroundings, the diphthong y as , the diphthong w as and the diphthong ay as . Alternative accounts of Kabardian phonology posit two short vowels ( and a) and five long vowels (, , , , ). Only the vowel can occur in the word-initial position in native words 13.
dental
palatal
velar
12
k'w
The difference between a and is not only in their length, but also in their quality, though phonetic descriptions differ. In the pronunciation of my informants, is a low open vowel, while a is a central open vowel (as in the phonological description by Kumaxov (ed.) 2006). Kuipers (1960) thinks that is not a distinct phoneme, but rather a phonological sequence of the short a and the consonant h in all positions except at the beginning of a word, where it can be analyzed as ha. Kuiperss analysis, though disputed, has the advantage of enabling us to formulate a simple rule according to which all Kabardian words start with a consonant, since and a can never occur word-initially. In the speech of many Kabardians the initial is, indeed, realized with a "prosthetic" h-. 13 Aert Kuipers (1960, 1968) tried to eliminate the phonological opposition between the vowels a and as well, claiming that it is actually a feature of "openness" which should be ascribed to consonants (like palatalization, glottalization and labialization). In Kuipers's analysis the opposition between pa and p in Kabardian is not an opposition between two vowels, but rather between an "open" (pa) and a "closed" (p) consonant (p). This would make Kabardian the only language in the world without the opposition between vowels and consonants, but most Caucasiologists do not accept this analysis by Kuipers (for a critical review see, e. g., Halle 1970, Kumaxov 1973, Anderson 1991).
x xw uvular q qw w ? h ?w
laryngeal
According to some authors 14 labiovelars (kw, gw, k'w) are actually labialized uvulars, while the point of articulation of uvulars is even deeper in the pharynx (they represent pharyngeal consonants 15). The dialect described in J. Colarusso's grammar (1992) has pharyngeal fricatives as well; in the standard language described by this grammar they have, as far as I was able to determine from the examples, become velar fricatives. The voiceless laryngeal fricative h has its voiced pair in the standard speech of the older generation, which penetrated the language mostly through Arabic loanwords, e. g. Hazb "torment"; the Kabardian Cyrilic does not have a distinct symbol for this segment, which becomes h in the speech of the younger generation and is written with the digraph x. In the speech of many Kabardians from the diaspora (especially from Turkey) 16 some oppositions, still preserved in Kabardia, have been lost, such as the one between and (Turkish Kabardian has got only ). The pronunciation of the stops which are described here as voiced and voiceless varies from speaker to speaker (apparently, this has nothing to do with the dialect, but rather with cross-linguistic interference). Some speakers pronounce voiceless stops as voiceless aspirated stops (ph, th, kh); these speakers sometimes unvoice voiced stops (i. e. instead of b, d, g they pronounce p, t, k). Only the glottalized stops are consistently ejective with all speakers, regardless of the dialect. Laterals l, , and ' are actually lateral fricatives: l is voiced, voiceless, and ' glottalized. The fact that it has lateral fricatives without having the lateral resonant [l] (except in loan-words) makes Kabardian typologically unique. The presence of glottalized fricatives ', ' and f is also typologically rare. Besides Kabardian, segments such as these are found only in some American Indian languages (of the Salishan and the Na-Dene language families) and in some dialects of Abkhaz. As in other Caucasian languages, the consonant r can never occur at the beginning of a word, except in recent borrowings; older borrowings receive an unetymological prosthesis, e. g. wrs "Russian". Among the velar stops, Kabardian does not have the segment k (except in loanwords); it has only the labiovelar kw, gw and k'w. The segments transcribed in this grammar as , d and ' are, according to some descriptions, palatalized velars (ky, gy and k'y) 17. This would make Kabardian a typologically unique language, having
14 15
E. g. Kumaxova 1972. E. g. according to Kumaxov (ed.) 2006: 51. 16 See Gordon & Applebaum 2006: 162. 17 According to Kumaxova (1972) in the contemporary standard pronunciation these segments are palatal affricates, but in the older and the dialectal pronunciation they are palatalized velars. Turkish
10
palatalized and labialized velars without having the ''unmarked'', regular velars. (This is exactly the kind of system that some linguists ascribe to the Proto-Indo-European language). Voiceless stops are assimilated to the stops and fricatives that follow them with respect to the features of voice and glottalization: sa z-l "I painted" < *sa s-l (cf. sa saw "I saw") wa paw "you saw" < *wa b-aw (cf. wa bl "you painted") da t'' "we did" (in writing ) < *da d-' (cf. da daw'a "we do") Two vowels cannot occur next to each other; at a morpheme boundary where the first morpheme ends and the second one begins with a vowel, the two vowels merge, whereby the lower vowel is always stronger (i. e. * -a merge as a, *a- as ): sk'w "I went" < *s-k'wa-- sh "I carried it" < *s-h-- Morpheme-final can be deleted in (underlyingly) polysyllabic words, but the exact rules are complex, and the deletion appears to be optional in some cases (for details see Colarusso 1992: 43ff.): hn "carry" but s-aw-h "I carry" < *sawh "horse" but z- "one horse" < *z The vowel is preserved word-finally after y and w, when it merges with the glide and is pronounced as [i:] viz. [u:], e. g. patmy "although" [patmi:], dadw "cat" [gyad(d)u:]. Unaccented vowels in open syllables are shortened (i. e. becomes a): x ma "foreign" vs. xam' "foreigner" Likewise, accented vowels in open syllables are lengthened (a becomes ): d xa "beautiful" vs. daxa "excessively beautiful" APOPHONY (ABLAUT) Like the Semitic, Kartvelian, and the older Indo-European languages, the AbkhazAdyghean languages have morphologically regular vowel alternations (apophony,
Kabardian most certainly has palatalized velars, which must be an archaism with regard to the innovative standard (in Kabardia), in which these segments have become affricates.
11
Ablaut) 18. Vowel alternations in Kabardian are most frequently used with verbs, especially to express the category of transitivity/intransitivity. The most common vowel alternations are: 1. a - : this apophony pattern is used for the opposition between transitive and intransitive verbs, e. g. dan "to sew (intrans.)" - dn "to sew (trans.)", txan "to write (intrans.)" - txn "to write (trans.)", xan "mow (intrans.)" - xn "mow (trans.)"; in some verbs of movement, the root-final vowel a also characterizes movement towards the subject (the so called "illative verbs"), while the vowel characterizes movement away from the subject (the so-called "elative verbs"), cf. badaatan "fly towards" vs. badaatn "fly away from". Finally, this apophony pattern serves to distinguish cardinal from adverbial numbers, e. g. "three" - a "thrice". 2. - 0: this pattern is used to distinguish the personal prefixes cross-referencing lowest ranking macrorole arguments (Undergoers, with the ''full-grade'', ) from the prefixes cross-referencing Actors and Obliques (with the ''zero-grade'', 0): s-b-d-aw-va 1sg.-2sg.-conj.-pres.-to plow "I plow together with you" intransitive verb with the prefix s- for the 1st person sg. as the single core macrorole argument. b-d-z-aw-va 2sg-conj.-1sg.-pres.-to plow "I plow (it) together with you" transitive verb with the prefix z- < *s- for the 1st person sg. Actor 3. a - 0. This apophony pattern is merely a special type of the alternation between a and ( is usually dropped in the word-final position). It is used to distinguish between the forms of the illative and elative verbs, e. g. y- "take out!" y-a "bring in!", and it also appears in different forms of transitive and intransitive verbs, e. g. m-da "he is sewing (intrans.)" - ya-d "he is sewing it (trans.)". It is also used to distinguish personal prefixes indexing Obliques (non-macrorole core arguments, including the causees of causative verbs) from those indexing Actors and Undergoers, cf. y-xw-va-z-a--- "I made you carry him for them", where -va- indexes the 2pl. causee argument, and y-xwa-z-v-a--- "you (pl.) made me carry him for them". STRESS In Kabardian the last syllable carries the stress, except for words ending in a, in which the second-to-last syllable is stressed. Grammatical suffixes are mostly unstressed.
Apophony patterns in the Abkhaz-Adyghean languages are typologically particularly similar to those in Proto-Kartvelian (Kumaxov 1971: 202). For a general overview of apophony in the Adyghean languages see Kumaxov 1981: 228 ff.
18
12
The following words are thus stressed in this way: ztan "give presents", d ta "sword", but d tam "with the sword", pa "girl", but paxar "girls". We can 'a formulate the rule: the syllable before the last root consonant carries the stress. However, some verbal suffixes attract the stress, e. g. the preterite suffix -- and the future suffix -nw-, so these forms, although suffixed, are end-stressed, cp. yaa w-s-w-- 2sg.-1sg.-see-pret.-af. "I saw you" I s-k'wa-nw- 1sg.-go-fut.-af. "I will go"
SYLLABLE Unlike the neighbouring Kartvelian languages, the Abkhaz-Adyghean languages do not have complex consonant clusters in the onset of the syllable; the structure of most syllables is C(C)V(C), and most consonant clusters consist of a stop and a fricative, e.g. t+h: tha "God" b+w: bw "nine" p+': p' "ten", etc. There are also consonant clusters consisting of two stops, e. g. in the word pqaw "pillar". Some rare clusters consist of three consonants, e. g. in the verb ptn "to boil", or in the noun bzw "sparrow". Consonant clusters in Kabardian are predominantly regressive, i.e. the point of articulation of the first element is closer to the lips than that of the second element. Consonant clusters in which the first element is a labial consonant are especially frequent, e. g. p "prince, nobleman", pssa "story", xbza "custom", bl "seven", etc. Roots are mainly monosyllabic, e. g. fz "woman", t- "give", z- "one", k'wa- "go". Bisyllabic roots, which typically end in a vowel (?an earlier suffix), are less frequent, e. g. pa "girl", mza "moon", etc. Syllables are normally closed in the middle of a word. Many speakers have a geminate pronunciation of consonants preceded by an open syllable in the middle of a word, which results in the canonical syllable structure, i. e. instead of pssa "story" they pronounce psssa, instead of dda "very" they say ddda (Colarusso 1992: 15); if the long vowel -- is phonologically analyzed as -ah-, as is the habit of some linguists,
13
then the rule is that all syllables in the middle of a word are closed. This type of restriction on the syllable structure is typologically very rare in the world's languages.
14
ORTHOGRAPHY
The Russian Cyrillic alphabet, used as the Kabardian script since 1936, contains the following graphemes 19: I. consonants stops voic. unvoic. glott. b p I p' v d t I t' dz c I c' z l d I ' f s I ' r I f' n affricates voic. unvoic. glott. fricatives voic. unvoic. glott. m resonants
I ' x xw
gw
kw
I k'w w w h
q qw q' q'w I I ? ?w
The grapheme <> denotes the uvular character of the consonants q, qw, q', q'w, , w, and w, and there is a special grapheme used to mark voicelessness of uvulars (hence
Rules for the transliteration of the Kabardian Cyrillic applied in this grammar are basically the same as the standard principles of transliteration for the Caucasian languages written in the Cyrillic script, proposed by J. Gippert in his work Caucasian Alphabet Systems Based upon the Cyrillic Script (). Some minor deviations from Gippert's system in this grammar should, however, be brought to the reader's attention: 1) glottalized consonants are written as C', and not as C; 2) labialized consonants are written as Cw, and not as Co; 3) the Cyrillic is transliterated as y, and not as j. 4) palatalized fricatives are written as , , and not as , z. 5) the Cyrillic letters , are transliterated as dz, d, instead of , '.
19
15
<> = q, < > = qw). The notation of palatal consonants is inconsistent: < , > denote d (gy), (ky), but <I> is ' (k'y). Although the Kabardian orthography is phonological, the notation of some phonological changes is inconsistent 20, e. g. the shortening of the long which occurs in compounds, cf. xdaxa' "fruit" (the first part of the compound xda "garden" has a long , but the pronunciation in the compound is /xadaxa'/). Some authors (e. g. M. A. Kumaxov) use and for the palatalized and , instead of the standard , , since that is how these consonants are denoted in the closely related Adyghean language. However, despite certain efforts to make them more alike (e. g. the 1970 proposition for a common orthography for all Adyghean languages), the Adyghean and the Kabardian orthographies are still quite different 21. II. semi-vowels: = y; = w III. vowels: = ; = a; =
The Kabardian Cyrillic has some other graphemes for vowels, but these graphemes always denote diphthongs and triphthongs: = y = y = aw, wa y= w = yw = ay, ya The grapheme y thus has a double value: it can denote the semi-vowel w or the phonemic sequence (diphthong) w.
Cf. Kumaxova 1972: 46. A few years ago a group of the most distinguished Adyghean and Kabardian linguists put forward a proposal for the creation of the common Adyghean-Kabardian orthography (see Kumaxov (ed.) 2006, I: 40 ff.). Although this proposal received the support of the parliament of the Kabardino-Balkar Republic, at the moment I am writing this its future is still uncertain.
21
20
16
MORPHOLOGY
Kabardian is a polysynthetic language which has a very large number of morphemes compared to the number of words in a sentence. Nouns can take a relatively small number of different forms, but the verbal complex typically contains a large number of affixes for a host of grammatical categories. In Kabardian the morphemes combine within a word according to the agglutinative principle: each grammatical morpheme expresses only one grammatical category. The exception is the category of person, which is always fused with the category of number in the case of verbs and pronouns: the form da, for example, denotes that a pronoun is in the first person and that it is plural, and it is not possible to divide this form into two morphemes (one for the first person and one for plural). Likewise, the category of definiteness is to a large extent fused with the category of case. Most of Kabardian morphemes consist of only one segment and a vowel (i.e., the structure is CV) 22; this results in large number of homonyms; e.g. can mean "brother", "horse", "to milk" and "to take out", c'a means "name" and "louse", dza means "tooth" and "army", x is "sea" as well as "six", etc. Bisyllabic and polysyllabic roots are mostly borrowings, e. g. nwka "science" (from Russian), haw "air" (from Persian), lh "god" (from Arabic), nq "glass" (from a Turkic language), etc.
NOMINAL INFLECTION
Nominal categories are: definiteness, number and case. Of all the Abkhaz-Adyghean languages only Abkhaz and Abaza have the category of gender; Kabardian shows no trace of this category. If we consider proclitic possessive pronouns to be possessive prefixes (see below), then possession should also be included in the morphological categories of nouns.
NUMBER There are two numbers singular and plural; the plural suffix is -xa: 'la "young man": 'laxar "young men"; wna "house": wnaxar "houses". The use of the suffix xa is optional for many nouns, i. e. the suffix is used only when the speaker wants to emphasise that the noun is plural. This is why forms such as sby "child/children" c'xw "man/men" and fz "woman/women" are inherently neutral with respect to the category of number. These nouns can be construed with both singular and plural forms of verbs: fz-m ay?a "the woman is speaking": fz-m y?a "the women are speaking" c'xwm ya' "the man is working": c'xwm y' "the men are working"
22
Three quarters of all morphemes have this structure according to Kuipers (1960).
17
Similarly, nouns neutral with respect to number can be construed with singular and plural possessive pronouns: c'xwm ypsaw'a "a man's life": c'xwm ypsaw'a "men's life" 23. The postposition sma is used to pluralise personal names: Dwdr sma "Dudar and others". This is the so-called "associative plural", which exists, e. g., in Japanese and Hungarian: Iaa , A, , a, a e xay aa I ?whamxwa y hagw-m Maztha, m, Thaalad, Sawzra, ap sma Pstha Elbrus 3sg.poss. top-ERG M. A. T. S. . assoc. P. w w day -zax a-s--xa-w Sna-x fa y-?a-t at dir.-meet-sit-pret.-pl.-ger. sana-drink 3pl.-have-impf. "On the top of Uesh'hemakhue (Elbrus) Mazatha, Amish, Thagoledzh, Sozrash, Hlapsh and others were meeting with Psatha (god of life) and having "the drinking of sana" (drink of the gods)". Nouns which denote substance and collective nouns have no plural: 'lawla "the youth", a "milk".
CASE Unlike Abkhaz and Abaza, the Adyghean languages (Kabardian and Adyghe) and Ubykh have cases marked by suffixes on nouns, adjectives and pronouns 24. The cases are: nominative (-r), ergative (-m), instrumental (-'a) and adverbial (-wa). Core cases, which express basic syntactic relations within a sentence, are nominative and ergative, and peripheral cases are instrumental and adverbial. NOM a dtar ERG a dtam INST aI / aI dtam'a / dta'a ADV ay dtawa The instrumental case has the definite (dtam'a) and the indefinite form (dtam'a). Definite forms consist of the ergative marker (-m-) and the suffix for the instrumental (-'a).
On this subject see Kumaxov 1971: 7 ff. By all accounts, the case system in the Adyghean-Ubykh languages is an innovation; the ProtoAbkhaz-Adyghean had no cases (Kumaxov 1976, 1989).
24
23
18
The nominative is the case of the nominal predicate: Ia e m 'la-r- yampyawn-r that young man-NOM-af. hero-NOM "that young man is the champion" The nominative is the case of the intransitive subject and the transitive object, i. e. the case of the verb argument which is the lowest ranking macrorole (see below): Ia 'la-r y-aw-da boy-NOM 3sg.-pres.-study "the boy studies" x a sa tx-r q'a-s-t-- I book-NOM dir-1sg.-take-pret.-af. "I took the book" The ergative is, basically, the general oblique case used for all other grammatical functions; it is the case of the transitive subject: ye x ea stwdyant-m tx-r ya-d-- student-ERG book-NOM. 3sg.-study-pret.-af. "the student studied the book" The ergative can also correspond to the dative case in the European languages; it marks the recipient of the verbs of giving, and other oblique arguments: a I x e w c'x -m tx-r m fz-m y-r-ya-t this man-ERG book-NOM this woman-ERG 3sg.-3sg.-3sg.-give "this man gives the book to this woman" I II '-m psaan-r f'af'-t old man-ERG. speech-NOM. like-impf. "the old man liked to speak", cp. Croatian, for example, which has the dative case: starcu se svialo govoriti (lit. "to-the-man it pleased to speak") The ergative is also the case which marks the goal of the verbs of movement (like the Latin accusative of the goal): y y a wa-ry wy -r a-m you-and 2sg.poss. horse-NOM barn-ERG "and you take your horse to the barn" a take to-imp.
19
The ergative can correspond to the locative case in those European languages which have it, indicating a spatial or temporal location: xa I dy xda-m ?wha yt- 2pl. poss. garden-ERG poppy be located-af. "there is poppy in our garden (poppy grows in our garden)" Croatian: "u naem je vrtu mak (u naem vrtu raste mak)", with vrt in the locative sg. I aa Sa sy nb-m ?ada s-w-- I 1sg.poss. life-ERG a lot 1sg.-see-pret.-af. "I have seen a lot in my life" Croatian: "Ja sam u svojem ivotu mnogo vidio", with ivot in the locative sg. In some constructions, the ergative can correspond to the English possessive genitive or prepositional phrase: a x a x Z mza-m xawa-nw-r z mxwa-m xawa-rt 1 month-ERG grow-inf.-NOM 1 day-ERG grow-impf. "He grew a one month's growth in one day" / "In one day he grew as much as is usually grown in one month" Thus, ergative functions as both the case of the Agent and as a "general oblique case" covering all other functions of oblique and non-macrorole core arguments, but nonarguments (adjuncts) can also be in the ergative. 25 The other two cases, as a rule, are reserved for non-arguments in the clause, i. e., for the adjuncts. Nouns and adjectives in the adverbial case (Rus. obstojatel'stvennyj pade) usually correspond to adverbs in the European languages, i. e. they indicate the circumstances under which the action is performed: x y -xa-r str-w tree-pl.-NOM row-ADV "They planted the trees in rows" xa xas-- to plant-pret.-af.
The adverbial can correspond to the genitive in the European languages: y g faww-w z kylawgram-m sugar-ADV 1 kilogram-ERG "I bought 1 kg of sugar " a q'a-s-xw-- dir.-1sg.-to be involved in shopping-pret.-af.
The adverbial can be the case of the nominal predicate, corresponding to the instrumental in Slavic:
25
20
a ee ay a a Taymbawlayt p-m y wsa-w r-t . T. prince-ERG 3sg.poss. servant-ADV it.be-ipf. "aga Taymbawlayt was the prince's servant" Interestingly, in the language of the epic poetry, the adverbial can correspond to the vocative case 26, i. e. it is used for addressing individuals: oe Sawsrq'wa-w sy-naf S.-ADV 1sg.poss.-light "O Sosruko, my light!" The instrumental mostly corresponds to the Slavic instrumental, i. e. it expresses the instrument with which the action is performed (including means of locomotion), cf. -m-'a m-k'wa "he rides the horse", literally "he goes with the horse", or q'arand'a tx n "to write with a pen"; however, the Kabardian instrumental has other functions as well, e. g. it can express various circumstances of the action, as well as the path (but not direction) with verbs of movement, and the duration of an action: a aI aa r mxwa-'a m-la he (NOM) day-INST 3sg.-to work "he works by day" I I maz-'a k'wa-n forest-INST to go-inf. "to go through the forest" I I I a-y- mxw-y--'a w-q'-ya-s-a'-ry night-rel.-3 day-rel.-3-INST 2sg.-dir.-3sg.-1sg.-lead.through-and sy ha'a w-q'-y-s--- my guest.house 2sg.-dir.-3sg.-1sg.-lead-pret.-af. "I was leading you three days and nights, and I led you into my guest-house" Occasionally, the Instrumental can also express the actor (in some participial constructions): I I Ia w w ? ax -r sar-'a '- wn-q'm job-NOM I-INST do-pret.(part.) become-neg.
Kumaxov (ed.) 2006: 369 calls this "the vocative case", but this is clearly just another use of the adverbial.
26
21
"I cannot do this job" (lit. "This job does not become done by me") Personal names normally do not differentiate cases (at least not NOM and ERG), but family names do 27; this is related to the fact that nominative and ergative endings express not only case, but also definiteness. Also, nouns (personal names) in the "associative plural" (see above) show no case differentiaton: e aI Maryan sma m-k'wa M. assoc.pl. 3pl.-go "Maryan and the others are going" e a Maryan sma s-aw-w M. assoc.pl. 1sg.-pres.-to see "I see Maryan and the others" In addressing people, nouns referring to them show no case differentiation, i.e. the bare stem is used (similarly as the Indo-European "vocative"): a, II I Nna, st m p'-'-r z-'-s-r mother what this 2sg.-do-pres. part.-dir.-sit-NOM "Mother, what is it that you're doing now?" aI t'a ydy? now
Demonstrative pronouns differentiate cases, but personal pronouns of the 1st and 2nd person have only got the peripheral cases (adverbial and instrumental), and not the core cases (ergative and nominative). This agrees entirely with Michael Silverstein's hierarchy28, according to which the most common case marking pattern in ergative languages is the one in which 1st and 2nd person pronouns do not differentiate core cases, while nominals and groups lower on the "animacy hierarchy" do (cf. the inverse pattern of case differentiation in the accusative languages, e. g. in English, where the nominative and the accusative are differentiated on the 1st person pronoun, but not on nouns). Since the category of case (especially of primary cases) is connected with the category of definiteness, and syntactical relations within a sentence are expressed by a system of personal prefixes on the verb (see below), there is some uncertainty over the rules of case assignment with some speakers, especially in the case of complex syntactic structures (just as there is often some uncertainty over the rules of the use of articles with speakers of languages which have the definite article).
See Kumaxov et alii 1996. This feature excludes Kabardian from the typological universal according to which languages that distinguish cases on 3rd person pronouns always distinguish cases on personal names as well (but not vice versa). 28 See e. g. Dixon 1994.
27
22
DEFINITENESS Definiteness is clearly differentiated only in the core cases, i.e. in the nominative and the ergative: the endings -r and -m are added only when the noun is definite; indefinite nouns receive no ending 29: a eI pa-m m-r ya-'a girl-ERG it-Nom. 3sg.-know "the girl knows it" a eI pa m-r ya-'a girl it-NOM 3sg.-know "a girl knows it" With some nouns, whose meaning is inherently definite (e. g. mza "moon", nsp "happiness", personal names), the nominative/definiteness suffix is optional: () a da-(r) s-aw-w sun-(NOM.) 1sg.-pres.-see "I see the sun" Other cases are not used to differentiate definite and indefinite forms of nouns, and the opposition definite/indefinite does not exist in the plural either (see Kumaxov et alii 1996). However, if a noun in the instrumental is definite, the ergative marker -mis added before the instrumental ending -'a: I I sa m-r sa-m-'a s-aw-' I it-NOM knife-ERG.-INST. 1sg.-pres.-do "I do it with the knife" The ergative marker m- probably developed from the demonstrative pronoun (cf. maw "this"), which had been "petrified" in the "definite instrumental" before the instrumental ending.
ADJECTIVES Adjectives are divided into two categories in Kabardian: qualitative and relational adjectives. Qualitative adjectives typically follow the noun they modify: wna yn "big house" (wna "house"), pa dxa "beautiful girl" (pa "girl"). Occasionaly they may also precede the noun, e.g. xma wna "foreign house" (wna "house"). Adjectives are declined like nouns, but they show no number and case agreement. If
See Kumaxov 1972, where the grammaticalization of the definiteness marker -r- is discussed (from the ending for the formation of participles, it seems). On the category of definiteness in the Adyghean languages see also Kumaxov & Vamling 2006: 22-24.
29
23
the noun is modified by a qualitative adjective, only the adjective receives the endings for case and number: y x wna xw-xa-r house white-pl.-Nom. "white houses" If a qualitative adjective precedes the head noun, it is not declined; it may be modified by the adverbial suffix -w (e.g. mva-w "made of stone", fa-wa "made of leather"): axy e a dxa-w pawrtfaylr sa q'a-s-axw-- pretty-ADV wallet I dir.-1sg.-buy-pret.-af. "I bought a pretty wallet / a pretty wallet I bought". Qualitative adjectives mostly have analytical comparison: dxa "beautiful", na dxa "more beautiful", dda dxa "the most beautiful" (or "very beautiful"). The morpheme na is sometimes merged with the adjective into a compound, cf. na--'a "the youngest" ('a "young"). There are also suffixes which express the elative superlative: -a, -?wa, -bza, -ps, -ay, but this seems to belong to the domain of word formation rather than morphology, cf. ?af'-a "the sweetest" (?af' "sweet"), 'h?wa "the longest" ('h "long"), p-bza "very red" (p "red"), etc. Adding the suffix -?wa to the comparative form gives the adjective a diminutive meaning, e. g. na xwba-?wa "somewhat warmer" (cp. xwba "warm"), na ?af' -?wa "somewhat sweeter" (?af' "sweet") 30. The circumfix xwa-...-fa has a similar function, e.g. xwa-dayl-fa "somewhat foolish". Adjectives can be reduplicated, whereby the first stem receives the suffix -ra "and", and the second the adverbial suffix -w(a). Such reduplicated adjectives have intensive meaning, e.g. bw-ra b w-w "extremely broad", p-ra-pw "extremely red", f'yay-ra-f'yay-wa "extremely dirty". Relational adjectives precede the head noun and they take no case and number endings; they can be formed by adding the relative particle -y to nominal and adjectival stems, e.g. mawdryay "other", dwaphryay "evening": e a nawbara-y mxwa today's day "today" Some nouns, ordinal numbers and Russian loans (nouns) can also function as relative adjectives, e.g. nawna "scientific", ha "head", ypa "first".
30
24
Some adjectival meanings are expressed by suffixes: the suffix -xwa means "great", cf. dwnay-xwa "great world", whe suffix -na means "being without, -less". It can often be translated as the English preposition without, but its adjectival status can be shown by the fact that nouns to which it is added can get the affirmative marker - to build a static verb (see below): sa s-da-na na-na- (I 1sg.-father-without motherwithout-af.) "I am without father and mother" = "I am fatherless and motherless".
PERSONAL AND DEMONSTRATIVE PRONOUNS Kabardian personal pronouns are indeclinable. The personal pronouns of the first and second person are similar to, and presumably represent the origin of, the person markers on the verb. There is no trace of the inclusive/exclusive opposition in pronouns, which exists in some NE Caucasian languages.
sg. 1. sa 2. wa 3. r
pl. da fa xar
The pronouns of the 1st and 2nd person also have longer forms sara, wara, dara, fara, which are used as stems to which verbal suffixes can be added: ye a q'araway-r sar-q'm K.-NOM I-neg. "I am not Karashawey (a Nart hero)" Third person pronouns are also used as demonstrative pronouns; Kabardian does not distinguish between "he" and "this, that". The pronominal declension is somewhat different from the nominal one: 1sg. Nom. sa Erg. Inst. Adv. sa 1pl. da da 3sg. r b 3pl. xar bxam I bxam'a xarw
In the first and second person singular the nominative form is always the same as the ergative form, which means that pronouns do not have the ergative alignment, as, for example, in Dyirbal. Unlike in Dyirbal, however, the clause alignment of personal
25
pronouns in Kabardian is neutral, rather than accusative. The third person pronoun is formed with the stems -, m - and maw-. It can appear without the nominative -r (which also expresses definiteness of personal pronouns): a a sa rsskaz-m I that story-ERG "I read that story" ea s-ya-d-- 1sg.-3sg.-read-pret.-af.
The difference in the usage of pronominal stems -, m - and maw- is not entirely clear, but - is the basic pronoun used in anaphora (reference to what has already been mentioned in the discourse), while m- and maw- are in opposition with respect to the degree of distance from the speaker: m- refers to a closer object (or person), and maw- to a more distant one. In the 3rd person plural Ergative, two different sets of forms exist: the basic stem can be extended with the pronominal Ergative ending, but it also occurs without it: xam = bxam mxam = mbxam mawxam = mawbxam There appears to be no difference in meaning, but the longer forms are somewhat more common in the texts. The stems which are used in the formation of demonstrative pronouns also serve to form pronominal prefixes, which are used instead of demonstrative pronouns: m- "this" maw- "that" x maw--xa-r those-tree-pl.-NOM "those trees" These prefixes can also be used as independent words, and they are declined like personal pronouns, e. g. NOM sg. m-r, maw-r, ERG sg. m-b, maw-b, etc. In addition to the pronominal case ending, third person personal/demonstrative pronouns can get the ergative ending used for nouns as well, which then results in double case marking (Kumaxov et alii 1996): a () e ya -b-m (m-b-m) bdaay-r q'-y-wbd-- he-ERG-ERG (this-ERG-ERG) fish-NOM dir.-3sg.-to catch-pret.-af. "He (this one) caught the fish"
26
In a larger sense, the category of demonstrative pronouns would also include pxwada "such, such as this" (from - and pxwada "similar"), mpxwada "such, such as that", mawpxwada "such, such as that". As a rule, these words occur in the attributive position, in front of the noun they modify, cf. pxwada c'xw "such a man". POSSESSIVE PRONOUNS Invariable possessive pronouns have only one form and they precede the noun they refer to: 1. sy 2. wy 3. y dy fy y
There is also the relative possessive pronoun zy "whose", and the 3rd person attributive possessive pronouns yay "his", yy "their". The attributive possessives must be preceded by a head nominal in the ergative: b yay "his, that which belongs to him", 'm yay "old man's, that which belongs to the old man". Possessive pronouns are clitics, and they should perhaps be thought of as prefixes which express possession. Sometimes they are written as one word with the word they refer to (ie. with the possessum), cf. sy "my cow". There seems to be a lot of uncertainty in the Kabardian orthography over whether possessive pronouns should be written separately or as one word with the possessum. The relative-possessive pronoun zy "whose" always precedes the noun it refers to: zy r "whose horse". It is declined as the personal pronouns: NOM zy-r, ERG zym, INST zyr'a, etc. In addition to the basic (clitic) possessive pronouns there are also emphatic possessive pronouns, formed by reduplication: ssay "my", wway "your", dday "our", ffay "your", yy "their". Unlike the clitic possessive pronouns, these can be inflected for case (e. g. NOM sysayr, ERG sysaym, etc.).
INTERROGATIVE PRONOUNS Although it does not distinguish animacy in other pronouns, Kabardian, like most of the world's languages, distinguishes the animate and inanimate forms of interrogative pronouns: x xat "who" st "what"
27
Interrogative pronouns are normally not inflected for case, though there is a growing tendency in the spoken language to use the case endings -m (ERG), -r (NOM), and -w (ADV) with the pronoun st 31: st-w xx "What was he elected for?" The interrogative possessive pronouns do not exist, but are rather replaced by the interrogative xat "who" and the possessive pronoun 3sg. y, e.g. x a y I? xat y da-m wna-r y-'-ra? who 3sg.poss. father-ERG house-NOM 3sg.-do-inter. "Whose father is building the house?" Other interrogative words are: dana "where", stw "why", dawa "how", dpa "how much", dpa "when", datxana "which". THE EMPHATIC PRONOUN The emphatic pronoun is yaz "personally, himself". It emphasises the verb's subject and stresses it as the topic of the sentence (theme). It is declined as a noun: NOM yaz-r, ERG yaz-m, etc. e a yaz-r m- personally-NOM 3sg.-to cry "he himself cries" ("It is he who cries") e I a yaz-m '-r y-v-- personally-ERG ground-NOM 3sg.-to plow-pret.-af. "they personally plowed the ground"/"he personally plowed the ground" In the following passage one can see how yaz is used to shift the topic back to the name Dlstan which had already been introduced earlier in the discourse: a y axa a X Ia. "a a, a a" aI I. E a axa ay a Dlstan y-pw Badaxw y dxa-r Nrt Xakw-m ?w-t. D. 3sg.poss.-daughter B 3sg.poss. beauty-NOM N land-ERG be.heard-ant.pret. "Mxwa-m da-, a-m mza-" --?a-rt Badaxw ha'a. day-ERG sun-af. night-ERG moon-af. pref.-3pl.-say-ipf. B about Yaz Dlstan-y y pw-m y dxa-m himself D.-and 3sg.poss. daughter-ERG 3sg.poss. beauty-ERG
31
28
y-ry-gwxwa---wa z-y-apa-rt 3sg.-3sg.-rejoice-back-pret.-ger. refl.-3sg.-boast -impf. "The beauty of Dilahstan's daughter Badah was heard in the Land of the Narts. 'She is the Sun by day, she is the Moon at night' -they used to say about Badah. Dilahstan himself, having rejoiced at his daughter's beauty, boasted (about it)".
QUANTIFIERS Quantifiers differ from adjectives and pronouns in their morphological and syntactic features. For example, the quantifier q'as "every" is not inflected for case (this is what differentiates it from adjectives), and it follows the noun it modifies (this is what differentiates it from pronouns): I aI c'xw q'as m-k'wa man every 3sg.-go "every man walks" The quantifier gwar "some" syntactically behaves similarly as q'as; it can be used together with the number "one" (z) which precedes the noun it modifies: I z ' gwar "a man", "some man" Aside from these, there is also the quantifier psaw "all, every"; its meaning is inherently plural, and it can be marked for case, cf. ' psawr "all men". Perhaps the words za'a "whole" and ha "every" should also be thought of as quantifiers.
29
INVARIABLE WORDS
NUMERALS Cardinal numbers: z I t'w I p' txw 1 2 3 4 5 x bl y bw I p' 6 7 8 9 10
a 100
Numerals sometimes merge with the noun which they precede, e. g. z "one horse", but z am "one cow". In the first example, the morpheme final of "horse" had been deleted, and the numeral received the stress; in the second example, the morpheme final -a- of a "cow" was preserved, together with its stress. Numerals can also merge with the noun they follow using the relative conjunction/particle -y-. They can also take case endings: a aI mz-y-bw-ra mxw-y-bw-'a month-rel.-9-and day-rel.-9-INST "In nine days and nine months" Kabardian has the decimal counting system; numerals above ten are formed with the stem p'- "ten", the suffix -k'w- (probably the root of the verb kwn "to go over (a distance), to transverse") and ones, e. g. p'k'wz "eleven", p'k'wt' "twelve", pk'w "thirteen", etc. The tens are formed on a decimal base, with the numeral "ten" reduced to -': t'wa' "twenty", a "thirty", p'' "forty", txw' "fifty", x' "sixty", etc. There are traces of the vigesimal system, manifested in the formation of tens as products of multiplication of the number twenty, t'wa': t'wa'-y-t' "forty", t'wa-y- "sixty", etc. In the standard language, these vigesimal formations are notably archaic, but they are alive in some dialects (e.g. Besleney) and in Adyghe. When counting above twenty, the counted noun (or noun phrase) is normally repeated before both constituent parts of the complex number: I aI I c'xw a'-ra c'xw-y--ra man thirty-and man-suf.-three-and "thirty three men" Ordinal numbers: yp 1. yaxna 6.
30
2. 3. 4. 5.
7. 8. 9. 10.
Ordinal numbers behave like relational adjectives, so they can take the suffix -ray (used for the formation of adjectives): yatxwnaray "fifth" etc. Adverbial numerals are formed from cardinal numbers by apophony, e. g. za "once", a "thrice", but they can also be formed by the prefix (or infix?)32 -r- and reduplication of the root of a cardinal number: z-r-z "once", p'-r-p' "ten times". Distributive numerals are formed from cardinal numbers with the suffix - na: t'wna "a half", na "a third", etc. Note also yz "one of two" and ztxwx (one-five-six) "about five or six". ADVERBS Adverbs are formed from adjectives by adding the suffixes -w, -wa, -ra: I ?ay "bad" - I ?ayw "badly"; xwb "quick" - xwbw "quickly", I f' "good" - I f'wa "well". ba "many, plentitude" - bara "much, very" The suffix -wa is identical to the suffix for the adverbial case (see above). The possessive prefix y- can be added to nouns to form adverb-like expressions (or "relational nouns"?) with directional meaning: ha "head" - yha "up, upwards" ba "hoof" - yba "down, downwards" Nouns in the instrumental case (in -'a) can also function as adverbs: mxwa "day": I mxwa'a "by day"; a "flight": I a'a "in flight" Some adverbs are formed with both the possessive prefix y- and the suffix -'a: ha "top, head": I yha'a "on top" (lit. "on his head"). There are also underived adverbs: nawba "today", pday "tomorrow", dwsa "yesterday", naba "tonight", dda "very much, just", wayblama "very much", mb "here": y (e) I st wa mb (daym) -p-'a-r
32
The morpheme -r- can be analysed as an infix which is inserted between the reduplicated root syllable and the root, if we think of reduplication as a kind of modification of the root (and not a special form of prefixation).
31
what you here dir.-2sg.-do-NOM "What are you doing here?" The category of adverbs might also include invariable expressions such as q?a "please", f'axwa "thank you", xat y'ara "maybe" ("who knows?"), etc.
POSTPOSITIONS Kabardian uses postpositions instead of prepositions. Postpositions are words which determine the grammatical relations of the nouns that precede them: naw "after", p'awnda "until", (y) day "at, in", ha'a "because, after", ypam "in front of", y'am "at the end, after", nam' "except", f'a'() "except", xwada "like", pp''a "because, due to", zaxwakwa "between", y gww "about", nas "to, up to, until", "from", ndara "since", yp'a'a "instead of", da'a "behind". a adwa nawm "after lunch" a I adwa p'awnda "until lunch, before lunch" I padyay p'awnda "until tomorrow" (I) m ypam('a) "in front of the horse" I war ha'a "because of you" I 'lam y gww "about the boy" y m xwada "like his horse" II Yamnya f'a'a "except Yaminezh" q'lam nas "to the city" y e Ia day ?-- Mwhamad y q'wa-m M. poss.3sg. brother-ERG at be-pret.-af. "Muhamad was at his brother's" y I I aIa st wy ?waxw-m ha'a --?--r what poss.2sg. work.-ERG about dir.3pl.-say-pret.-inter. "What did they say about your work?" As the preceding examples illustrate, postpositions govern the ergative case of nouns. Some of them govern the possessive pronouns, rather than personal pronouns, e.g. sy w "after me", wy day "to you", (wa) wy p'a'a "instead of you", etc. Others govern the personal pronouns (cf. war ha'a "because of you", sar f'a' "except myself"). The majority of postpositions are derived from nouns, especially nouns denoting body parts, cf. ha "head", pa "nose", 'a "tail". Some postpositions can be inflected, e. g. day has the full case paradigm (NOM day, ERG daym, INSTR day'a, ADV daywa), and some, but not all, can be construed with possessive prefixes (e. g. y
32
gww "about (it/him)" 33. This means that many Kabardian postpositions are quite like relational nouns in languages such as Tibetan. Instead of local adpositions, Kabardian often uses directional (local) prefixes on the verb; the English sentence "the student is sitting on the chair" corresponds to the Kabardian sentence waynk-r ant-m tay-s- (student-NOM chair-ERG dir.-to sitaf.), where the equivalent of the English preposition on is the Kabardian verbal prefix tay- (on local prefixes see below).
PARTICLES, CONJUNCTIONS AND INTERJECTIONS There are relatively few particles in Kabardian; these are the most frequently used ones: hawa "no"; I nt'a "yes" mys "here!" mda "there!, look!" I p'ara (interrogative particle); it is always placed at the end of a sentence and expresses a slight doubt: y Iy I wa p-'a-wa p'ara you 2sg.-know-ger. inter.particle "do you (really) know?" The other interrogative particle is a (also placed at the end of a sentence): a aI -r q'-k'wa-ma a he-NOM dir.-go-cond. inter.particle "Will he come?" The particle ayry is used as a quotation mark; it is usually best left untranslated: ye, e eya a ha w-naay, ayry ya-wp-- Bdnawq'wa why 2sg.-be.sad quot.part. 3sg.-ask-pret.-af. B "Why are you sad, asked Badinoko" Conjunctions are clitics, so they are mostly written as one word with the words they conjoin, e. g. -ra "and", -y "and", but there are also conjunctions which occur as separate words: y'y "and", wa "but", t'a "but", wayblama "even, but", ya...ya "either...or", hama "or".
33
33
The copulative conjunction -ra, -ry is repeated after each conjoined word within a noun phrase (NP): x I Tx-ra ?ana-ra "A book and a table" The conjunction -ry is placed after the verb in a sentence: " I" I e I Ia "M-r st a'awan" y?a-ry Satanyay y thak'wma-r mva-m ?wyh-- this-ERG what wonder said-and S. poss.3sg. ear-NOM rock-ERG place-pret.-af. "'What kind of wonder is this?' said Satanaya and placed her ear on the rock." The most common interjections are n "oh", waxw "ouch", ?wa "oh", wa "hey", yraby "hey!", ma "here!" (used while giving something away)
34
VERBS
Cette singularit (ergatif) tient, en gros, ce que, l o nous pensons "je vois le livre", les Caucasiens pensent quelque chose comme "-moi le-livre (il-m')est-en-vue" (G. Dumzil, cit. in Paris 1969: 159). Kabardian verbal morphology is extremely complex. Prefixes and suffixes are used to express different verbal categories, and there is also apophony (regular root vowel alternation). The verb does not have the category of voice (it does not distinguish active and passive), 34 but it does have the categories of transitivity, person, number, tense, mood, causative, two types of applicatives (version/benefactive (Rus. versija) and conjunctivity/comitative (Rus. sojuznost')), reflexivity, reciprocity, involuntative, and evidentiality. Active and stative verbs are distinguished systematically, and many of the mentioned categories do not apply to stative verbs.
THE VERBAL COMPLEX The verbal complex consists of a number of prefixes, the root, and a number of suffixes:
P1...Pn -R-S1..Sn
The prefix positions can be seen in the following matrix: 1. dir. 2. reflexive/reciprocal 3. version 4. conjunctivity 5. pot. 6. neg. 7. caus. invol. absolutive oblique agent - person markers
In the non-third persons, the dynamic present tense marker -aw- is added between the positions 5 and 6, cf., e.g., q'-z-aw--k'wa "I make him come". As can be gathered from the scheme above, the personal prefixes can be inserted at several points in the prefix chain, but two fixed rules apply: firstly, the prefix for the absolutive argument (the "lowest ranking macrorole", see below) precedes all other prefixes, and secondly, the prefix referring to the agent (if there is one) is closest to the verbal root. The picture above is further complicated by the fact that certain local prefixes, e.g. xa- "in", da- "in", etc. (see below) can be inserted in the verbal complex between the prefix slots 4 and 5; moreover, the factitive prefix w- can be inserted immediately before the root. However, we leave these prefixes out of the matrix scheme, because they belong to the domain of word formation more than to morphology.
34
Cp. Giev 1985: 41-57, where arguments to the contrary are disputed.
35
The suffix positions: 1. intransitivity 2. tense 3. mood potential evidential 4. negation interrogativity
We shall first deal with the prefixal verbal morphology, and then with the suffixal morphology.
VERBAL NEGATION The negation of the verb is expressed with the suffix -q'm (for finite forms) and the prefix m- (for non-finite forms; this prefix immediately precedes the root, or the causative prefix): I s-k'wa-r-q'm 1sg.-go-pres.-af.-neg. "I am not going" yaay x w-m-la-w p-x-r haram- 2sg.-neg.work-ger. 2sg.-eat-NOM sin-af. "It is a sin to eat not working" ("It is a sin if you eat, and not work") The imperative is, according to this criterion, included in non-finite forms: ye s-w-m-ay 1sg.-2sg.-neg.-lament "don't lament me" The prefixal negation can occur in some finite forms, but this usually happens in fixed expressions and proverbs: , I tha, s-m-'a god 1sg.-neg.-know "by god, I don't know" The two verbal negations differ in scope: the prefixed -m- is the narrow scope negation, with thescope just over the verbal nucleus, while the suffixed negation -q'm negates the whole sentence (including the embedded participles, infinitives, and/or gerunds).
36
The other NW Caucasian languages also have prefixal negation with the infinite verbal forms, and suffixal negation with the finite forms.
PERSON Kabardian distinguishes three persons singular and plural. Verbal person markers indicate the person of the subject of an intransitive verb / object of a transitive verb (the person which is in the nominative in the case of nouns), the person of the subject of a transitive verb (the person which is in the ergative in the case of nouns), and of the indirect object (the person which, in the case of nouns, is in the ergative in its function of dative, or some other oblique case): a) markers of the person which is in the nominative: sg. s w0// 0- /ma-/mpl. d f0// 0-/ma-/m-
1. 2. 3.
The prefix ma- is typically used in the present tense, with intransitive verbs which have only one expressed argument (Rus. odnolinye neperexodnye glagoly), while intransitive verbs with two expressed arguments take the prefix 0- for the person in the nominative. If the verb has a monosyllabic root that ends in -a, the vowel of the 3rd person prefix is lengthened, hence mk'wa "he goes" (from kwa-n), but ma-dagw "he is playing" (from dagw-n). This is in accordance with the phonological rule of lengthening of accented vowels in open syllables (see above). Intransitive verbs with a preverb do not have the prefix ma- in the present tense, cp. m-da(r) "(s)he is sewing", but q'-aw-k'wa "(s)he is coming" (where q'- is a directional preverb, and -awis a present tense marker of dynamic verbs). b) markers of the person which is in the ergative (person of the transitive subject and person of the indirect object): 1. 2. 3. / -s-/-z/ -w-/-b/ -y()-/ -r(-) -d -f -y-xa- (> -y-)
In the 3rd person singular the prefix -r- denotes the indirect object (usually the Recipient) 35: s-r-y-t 1sg.-3sg.-3sg.-give "He gives me to him"
The usual explanation is that the marker -r- is a result of dissimilation in a sequence of two semivowels -y-...-y- > -y-...-r-; this can be formulated as a synchronic phonological rule, so in most grammars it can be found that the marker for the 3rd person indirect object is -y-, like for the direct object (see Hewitt 2005: 102).
35
37
Personal prefixes indexing Obliques (non-macrorole core arguments, including the causees of causative verbs) are also distinguished from those indexing Actors and Undergoers by ablaut; they regularly have the same form as the markers of the transitive subject, but the vowel is -a- rather than --: a y-xw-va-z-a--- 3pl.-ver.-2pl.-1sg.-caus.-carry-pret.-af. "I made you (pl.) carry him for them" In the preceding example -va- indexes the 2pl. causee argument. Note that it differs from the form of the prefix for the causer (-v-) in the following example: a y-xwa-z-v-a--- 3pl.-ver.-1sg.-2pl.-caus.-carry-pret.-af. "You (pl.) made me carry him for them". The prefix indexing the recipient also has the form marked by a-vocalism: Sy -r q'-za-f-t- my horse-NOM dir.-1sg.-2pl.-give-back "Give me back my horse!" In the 3rd person plural the suffix -xa is usually only added if the verb's subject is not expressed, and if the subject is not placed immediately before the verb 36: xar yayd- = yayd-xa- "they studied" The order of personal markers is always (in terms of traditional grammatical relations): direct object / subject of intrans. verb indirect object subject of trans. verb S/O IO A ( y) a yea (sa wa) b w-ay-s-t-- I you he-ERG 2sg.-3sg.-1sg.-to give-pret.-af. "I gave you to him" (a ) y yea (b sa) wa w-q'-z-ya-t-- (he-ERG I) you 2sg.-dir.-1sg.-3sg.-give-pret.-af. "He gave you to me"
Forms with the plural suffix -xa- on the verb are characteristic for the contemporary literary language.
36
38
This schema shows that the verbal agreement system in Kabardian is ergative just like the case system, since the subject of an intransitive verb is treated in the same way as the direct object (S/O), while there is a different set of personal prefixes used for the subject of a transitive verb. With intransitive verbs the third position (A) is, of course, not realized.
INDEFINITE PERSON The suffix -?a- denotes the "indefinite person", i.e. that the verb's subject or object is indefinite (it is translated as "somebody"); this suffix is used only when the verb is in the third person: IaI q'a-k'w--?a- dir.-go-pret.-suf.-af. "Somebody came" yI d-za-p-nw-?a 1pl.-part.-watch-fut.-suf. "Are we going to see somebody?" The above examples lead to the conclusion that the suffix -?a- indicates only the person of the nominative argument (i.e. of the intransitive subject or object, the lowest ranking macrorole). It appears to be possible to use it with other arguments as well in participial constructions (Kumaxov & Vamling 1998: 68-69). A different way of expressing the "indefinite person" is to use the second person subject prefix, which is interpreted as referring to indefinite prson. This is possible in proverbs and statements of general truth: ye tn- w-ya-p-n easy-af. 2sg.-3sg.-see-inf. "It is easy to see him", lit. "It is easy for you to see him" The second person prefix with indefinite reference is added to the infinitive (or "masdar") and the predicate must be an adjective such as gww "difficult", tn "easy", dawa "good", halmat "interesting", etc.
TRANSITIVITY Verb valency is the number of arguments needed to complete the meaning of the verb in question. Verbs can be avalent (e. g. it is raining this verb is in English syntactically monovalent, but semantically avalent, since no thematic role is assigned to "it"), monovalent (e. g. I am sitting), bivalent (e. g. I am hitting an enemy), trivalent
39
(e. g. I am giving a book to a friend), possibly also quadrivalent (e. g. I am buying a book from a friend for twenty pounds). Verb valency is a semantic concept, realized in syntax through the category of transitivity. In most languages, bi- and trivalent verbs are realized as transitive verbs, i. e. verbs which have a compulsory nominal complement (direct object), possibly two complements (direct and indirect object). Arguments of bivalent verbs express different thematic roles according to the types of meaning they express. For example, verbs of giving (to give, to donate) always distinguish between the sender ("the person who is giving"), the theme ("the thing which is being given") and the recipient ("the person to whom something is being given"), and verbs of seeing distinguish between the thematic roles of the stimulus ("what is being seen") and the experiencer ("the person who is seeing"). Thematic roles can be grouped into macroroles with common semantic-syntactic features. We can distinguish between two macroroles: Actor and Undergoer. The Actor is always the thematic role closer to the left edge of the following hierarchy, while the Undergoer is always close to the right edge of the hierarchy 37:. Finally, the argument of a stative verb would be the traditional subject of verbs such as to lie, to sit, to exist, etc. The macroroles Actor and Undergoer of the action are, in a sense, the semantic correlates of the traditional syntactic-semantic concepts of ''subject'' and ''object'', which cannot be uniformly defined in all the languages of the world 38. Some Kabardian bivalent verbs can appear in their transitive and intransitive form, and many bivalent verbs can only be construed as intransitive (Rus. dvuxlinye neperexodnye glagoly). The way in which transitive and intransitive verbs differ in Kabardian in terms of the number of arguments, i. e. nominal complements to the verb meaning is typologically very interesting. Some linguists, e. g. Georgij Klimov (1986: 51), claim that a large majority of verbs in the Abkhaz-Adyghean languages are intransitive, precisely because they can be used with only one argument as complement, without breaking any syntactical rules. According to this criterion verbs meaning "to hit", "to catch", "to eat", "to kiss", "to lick", "to wait", "to move", "to call", "to do", "to ask", "to want", "to hunt", etc. are also intransitive in the AbkhazThe hierarchy was adapted from Van Valin and LaPolla 1997. In informal terms, the actor is the most "active" of the arguments of a particular verb, while the undergoer is the least active argument. 38 About this see e. g. Matasovi 2005, Klimov (ed.) 1978: 59.
37
40
Adyghean languages. Klimov uses the term ''diffuse'' or ''labile'' verbs for those verbs which can be used both in a transitive and an intransitive construction; this category comprises verbs meaning "to sow", "to graze", "to plow", "to knit", "to embroider", "to weave", etc. 39. These seem to be mostly verbs the first argument of which (the agent) is always a human being or a person, while the second argument (the patient) is inanimate. Sometimes the only difference between transitive and intransitive verbs is in different root vocalism (Ablaut); transitive forms end in , and intransitive forms in -a: d-n "to sew (something)" - da-n "to be involved in sewing", tx-n "to write (something)", txan "to be involved in writing", -n "to avoid", a-n "to run away", tn "to give, to give presents" and tan "to give, to give presents", xn "to eat (something)" and xan "to eat", than "to wash (something)" and thaan "to wash", xn "to reap (something)" and xan "to reap", pn "to collect (something)" and pan "to collect", 'n "to do" and 'an "to know", 'n "to kill" and 'an "to die" 40. Transitive verbs can be derived from intransitive ones using some suffixes and prefixes, e.g. the suffix -h-, cf. q'afa-n "to dance" (intransitive), q'af-h-n "to dance (a dance around something)" (transitive). Sometimes the difference is purely lexical, e.g. the verbs h-n "to carry" and '-n "to do" are always transitive. If we assume that the basic form of the verb is the one with final stem morpheme -a-, while the form with the morpheme -- is derived, then a large majority of Kabardian languages are intransitive. With some exceptions, Kabardian is a language without (underived) transitive verbs. Intransitive verbs with two arguments often express the fact that the Undergoer is not entirely affected by the action, i. e., the fact that the action is not being performed completely; in terms of Role and Reference Grammar, these verbs express activities, but not accomplishments (active accomplishments): x ha-m q'wpxa-r y-dzaq'a dog-ERG bone-NOM 3sg-bite "the dog is biting the bone (to the marrow, completely)" x w ha-r q' pxa-m y-aw-dzaq'a dog-NOM bone-ERG 3sg.-pres.-bite "the dog is gnawing, nibbling at the bone" Ia a 'la-r m-da boy-NOM 3sg.-read "the boy is reading" intransitive verb with 1 argument
39
According to Kumaxov (1971), in the closely related Adyghean language the number of "labile" verbs is significantly greater than in Kabardian. 40 Kuipers (1960) considers the opposition between a and in verbs a part of the wider system of "introvert" forms (with a) and "extrovert" forms (with ) in Kabardian, where a and are not morphemes for "introvertedness/extrovertedness", but the realization of the feature of "openness", which, according to Kuipers, is parallel to the phonological features such as palatalization, glottalization, etc.
41
Ia x 'la-r tx-m y-aw-da boy-NOM book-ERG 3sg.-pres.-read "the boy is reading the book" - intransitive verb with 2 arguments Ia x e 'la-m tx-r ya-d boy-ERG book-NOM 3sg.-read- transitive verb "the boy is reading the book (to the end), young man reads through the book" 41 r mtxa "he is writing" (intransitive) / b txm yatx() "he is writing a letter" (transitive) pa-r pa-m y-xwa "the carpenter is arranging the boards " (intransitive) / pa-m pa-r y-xwa "the carpenter is arranging the boards" (transitive); in the second sentence it is implied that the action will be performed completely, i. e. that the verbal action will be finalized (there is no such implication in the first sentence). Some linguists (Catford 1975, Van Valin & LaPolla 1997: 124) refer to the intransitive construction as the antipassive. The antipassive is a category which exists in many ergative languages (Dyirbal, Chukchi, etc.). The verb becomes intransitive in the antipassive, and the only compulsory argument of such verbs is the doer of the action, which is marked for the same case as the subject of an intransitive verb and the object of a transitive verb in an active (ie. not antipassive) construction. This case is usually called the absolutive, but in Kabardian it is traditionally referred to as the nominative. The patient can either be left out in the antipassive construction, or it can appear in an oblique case. Equating the Kabardian ''bipersonal'' intransitive construction with the antipassive is not correct 42; the affix -(a)w- is not the antipassive marker, as Catford explains it, but the present prefix which is added in the 3rd person to intransitive verbs only, and in the 1sta and 2nd person to all verbs. Monovalent intransitive verbs with a preverb have this prefix as well, and these verbs cannot appear in an antipassive construction, e. g. n-aw-k'wa "he goes (this way)" (dir.-pres.-to go). In works on Kabardian there is quite a lot of confusion regarding this problem (the conditions under which the prefix (a)w- appears are not entirely transparent), but it is clear that some verbs are always either transitive or intransitive, i. e. that the difference is lexical with some verbs (which we wouldn't expect if the intransitive construction was actually the antipassive). The antipassive is usually characteristic for most transitive verbs, similarly as most transitive verbs can form the passive in the nominative-accusative languages. Aside from all this, the antipassive is always a derived, marked construction in the ergative languages, while the intransitive construction in the Abkhaz-Adyghean languages is just as unmarked (underived) as the transitive one. A) Transitive verbs
41 42
My informants tell me that this sentence can also mean "the young man is studying the book". About this see also Hewitt 1982 and Kumakhov & Vamling 2006: 13 ff.
42
Transitive verbs can take markers for all persons, except for the 3rd person direct object (this marker is the ''zero-morpheme'', the prefiks 0-). The order of personal markers is: direct object-(indirect object)-subject: yxa w-s-tx-- you-I-write down-pret.-af. "I wrote you down" y yaa sa wa w-s-w-- I you 2sg.-1sg.-see-pret.-af. "I saw you" yea w-ya-s-t-- you-he-I-give-pret.-af. "I gave you to him" ea (0-)ya-s-t-- (0-)3sg.-I-to give-pret.-af. "I gave it to him" With transitive verbs the subject takes the ergative case, and the object the nominative case. In RRG terms we would say that in constructions with transitive verbs the nominative case is assigned to the lowest ranking macrorole, while all other arguments are assigned the ergative case. Also, the order of personal prefixes can be expressed like this 43: I: lowest ranking macrorole; II: non-macrorole core argument; III: other macrorole (with transitive verbs this will always be the Actor). B) Intransitive verbs The order of personal markers with intransitive verbs is: subject (of an intransitive verb) indirect object; the subject is always the semantic agent (Actor): y s-w-aw-p I-you-pres.-watch "I am watching you" a ax pa-r dna-xa-m q'-y-da girl-NOM shirt-pl.-ERG dir.-3-pl.-sew (intrans.) "The girl is involved in the sewing of shirts"
For the RRG terminology see Van Valin & LaPolla 1997; for the overview of verbal morphosyntax in Kabardian in RRG see Matasovi 2006.
43
43
With intransitive verbs the subject is assigned the nominative case, and the object the ergative case (in its dative function): ye x stwdyant-r tx-m student-NOM book-ERG "The student is reading the book" y-aw-da 3sg.-pres.-read
ea sa kynaw-m s-ya-p-- I cinema-ERG 1sg.-3sg.-to watch-pret.-af. "I watched the cinema" (= "I was in the cinema") In RRG terms, the case assignment rule is completely identical for transitive and intransitive verbs: the lowest ranking macrorole is assigned the nominative case, while all other verb arguments (in this case the indirect object) are assigned the ergative case. Also, the order of verbal prefixes is the same as with transitive verbs: I: the lowest-ranking macrorole (with intransitive verbs this is also the only macrorole); II: non-macrorole core argument; III: other macrorole (this position is not realized with intransitive verbs, since they only have one macrorole). Verbs with the inverse (dative) construction are also intransitive; these are verbs which express belonging or a mental state, the only macrorole of which is the patient (Undergoer), assigned the Nominative case: I a I '-m a-r y-?a- old man-ERG. money-NOM. 3sg.-hold-af. "The old man has money" I II '-m psaan-r f'f'-t old man-ERG to speak-inf.-NOM like-impf. "The old man liked to speak" The inverse construction corresponds to Latin constructions of the type mihi est "it is to me", mihi placet "it is pleasing to me, I like". From the point of view of the abovementioned case assignment rules these verbs present no problem, because their only (and thus also the lowest ranking) macrorole is marked for the Nominative case. If a transitive verb has two complements (i.e. if it is a trivalent verb), only the lowest ranking macrorole (Undergoer) is in the Nominative: I gwp-m '-r group-ERG old man-NOM a Ia thamda y-'-- thamada 3pl.-make-pret.-af.
44
"The group made the old man thamada (commander of the feast)" in this sentence the noun thamda cannot be marked for the Nominative (i.e. it cannot appear in the form *thamda-r) 44. The object (i.e. the second argument, the Undergoer) of transitive verbs can be omitted; it is expressed by a personal prefix, which, in the case of a third person object, is the ''zero-morpheme (0-): aa 0-s-w-- 3sg.-1sg.-see-pret.-af. "I saw (it)" a 0-s-t--- 3sg.-1sg.-give-back-pret.-af. "I gave (it) back" Note that many, perhaps most bivalent verbs are intransitive in Kabardian: Ix aax '-xa-r m-p-xa man-pl.-NOM 3sg.pres.-watch-pl. "People are watching" a sa s-aw-p 1sg. 1sg.-pres.-watch "I am watching" a a sa b s--p-xwaz-- I there 1sg.-dir.-2sg.-meet-pret.-af. "I met you there" Some intransitive verbs have an "integrated" marker for the 3rd person object; they are "bipersonal" (Rus. dvuxlinye) 45, but their indirect object (oblique argument) is always in the 3rd person singular. The verb sn "to swim" is of this type: s-ya-s-- "I swam", w-ya-s--, "you swam", ya-s-- "he swam", d-ya-s-- "we swam", f-ya-s-- "you swam", ya-s-- "they swam". It seems that yw'n "to kill" behaves in the same way (in opposition to the transitive w'n). Finally, some verbal personal prefixes are different for transitive and intransitive verbs (see above):
Kumaxov 1971: 68. With some of these verbs ya- has become part of the stem, ie. only etymologically is it a personal prefix, cf. Kumaxov 1973a.
45
44
45
eI ya-k'w "he goes (through something), he transverses" - transitive aI m-k'wa "he goes" -intransitive LABILE (DIFFUSE) VERBS Labile (or "diffuse") verbs are typically bivalent, but they can be used both transitively and intransitively: a a r m-va "he plows" (intrans.) / a I e b 'r ya-va "he plows the ground" (transitive) a aI r m-?wa "he threshes " (intrans.) / a eI b gwadz-r ya-?wa "he threshes wheat" (transitive) These verbs are relatively rare in Kabardian, but their number is significantly greater in the closely related Adyghean language 46. From works on Kabardian (and based on my own questioning of native speakers) it is unclear whether two lexical units should be distinguished in the case diffuse verbs (two verbs differing with respect to transitivity), or whether it is just one lexical unit (one verb with two uses / constructions).
CAUSATIVE Verbs receive an additional argument in the causative construction, i.e. their valence is increased by one. All Kabardian verbs can form the causative, including intransitives, transitives, and ditransitives. The causative prefix is a-. I I I w w k' a-n "to go": m-k' a "he goes": ya--k'wa "he sends him" = "makes him go". The causative prefix a- / - turns intransitive verbs into transitive verbs: Ia aI 'la-r gwbwa-m m-k'wa boy-NOM. field-ERG 3sg.-go "The boy goes into the field" a Ia aI na-m 'la-r gwbwa-m y--k'wa mother-ERG boy-NOM field-ERG 3sg-caus.-go "The mother sends the boy to the field" y ()a swp-r (q'a-)v-- soup-NOM (dir.)-to cook-pret.-af. "The soup was boiling (it was cooking)"
46
46
Ia II y aa 'la c'k'w-m swp-r q'-y--v-- boy little-ERG soup-NOM dir.-3sg.-caus.-to cook-pret.-af. "The boy was cooking soup" Causative can also be built from reflexive verb forms, e.g. zaawan "make someone hit himself". Like, e.g.. Turkish, but unlike many languages, Kabardian allows "double causatives", i.e. the causative suffix can be added to a transitive verb that has already been derived by causativization: thus the causative -va-n "make boil, cook" can be causativized to a--van "make someone cook", taking three arguments: a I... a a a Nbaw-m q'z-ytxw y-?a-t-y... friend-ERG goose-five 3sg.-have-impf.-and y na-m his mother-ERG y-r-y-a--va-r-y 3sg.-3sg.-3sg.-caus.-caus.-boil-pres.-and
p-m xw-y-h-- lord-ERG ver.-3sg.-bring-pret.-af. "(His) friend had five geese... and he made his mother cook them, and he brought them to the lord" Cf. also an "burn" (intransitive): a-an "burn" (transitive): a-a-an "make someone burn". Case assignment with causative verbs is typologically very unusual 47. The case of the arguments in a causative construction is not determined by that verb, but by the verb from which the causative verb is derived. If this verb is intransitive and has only one argument, its only argument will be marked for the nominative, while the causer will be marked for the ergative (as the oblique argument), as in the previous example. If, on the other hand, the original verb is intransitive and has an indirect object (oblique argument), the only macrorole ("subject") of the original verb will be marked for the nominative (yadk'war "student" in the following example): eaI eaI y a yaadk'wa-m yadk'wa-r wsa-m q'-r-y-a-d-- teacher-ERG student-NOM poem-ERG dir.-3sg.-3sg.-caus.-to read-pret.-af. "The teacher encouraged the student to read the poem"
Information on this is given according to Kumaxov (ed.) 2006: 436 and according to the examples obtained from my informants.
47
47
Finally, if the causative verb is derived from a transitive verb, the lowest-ranking macrorole of this (original) verb will be in the nominative, and the other macrorole in the ergative; the causer is again in the ergative: I Ia a '-m 'la-m dabz-r y-r-y-a-h-- old man-ERG boy-ERG girl-NOM 3sg.-3sg.-3.sg.-caus.-to carry-pret.-af. "The old man made the boy carry the girl" I Ia ya '-m 'la-m pa-r y-r-y-a-q'wt-- old man-ERG boy-ERG tree-NOM 3sg.-3sg.-3sg.-caus.-cut-pret.-af. "The old man made the boy cut the tree" Of course, all of the nominal arguments can be left unexpressed, and proper nouns and indefinite NPs do not receive case marking: ye aI xa Q'arawyay y ha'a-m-ra y -m-ra y-a-x-- Q. 3sg.poss. guest-ERG-and 3sg.poss. horse-ERG-and 3sg.-caus.-eat-pret.-af. "Karaavey fed his guest and his horse" (in this sentence the name Q'arawyay would be in the ergative as the causer, the undergoer of the underived verb, i. e. the food, which is unexpressed, would be in the nominative, and the only case-marked nouns (ha'a and ) are in the ergative as the indirect objects viz. non-macrorole core arguments). These unusual rules of case assignment with causative verbs are related to the rules of case assignment in subordinate clauses (see below), where the case of the nouns in the main clause depends on the role of these nouns in the subordinate clause. Since causers are agents, the causative verb receives a personal prefix for the causer which takes the position of the prefix for the agent / subject of a transitive verb (immediately before the causative prefix), and the noun denoting the causer is in the ergative; the agent of the underived verb is reduced to the status of oblique argument / indirect object. The causative verb can thus take up to four personal markers 48 (for the causer, the subject, the object and the indirect object): I xx a ax '-m fz-m tx-xa-r pa-m y-r-ry--t-xa man woman books girl 3sg.-3sg.-3sg.-caus.-give-3pl. "The man makes the woman give the books to the girl" y ax ya sa wa b-xa-m s-ra-w-z-a-t-- I you he-pl.-ERG 1sg.-3pl.-2sg-1sg.-caus.-give-pret.-af. "I made you give me to them"
48
My informants warn me that examples like these are slightly unnatural, fabricated.
48
The order of personal prefixes is basically the same as with normal transitive verbs (see above), except for the fact that there is an extra position, the one for the causer immediately before the causative prefix 49. According to agirov (1977: 124) and Kumaxov (1989: 218), the causative prefix a(also Adyghe a-) is cognate with the Ubykh causative prefix a-, (for plural objects only) and with the Abkhaz causative prefix r- (the sound correspondence is regular). This would mean that the causative formation is inherited from Proto-NWC.
INVOLUNTATIVE A verb in the category of involuntative indicates an action which is done unintentionally. The Russian term is kategorija neproizvol'nosti, cf. Klimov 1986: 45. In the involuntative verbs take the prefix ?a'a-: a a ha-m ba-r y-thal-- dog-ERG fox-NOM 3sg.-kill-pret.-af. "The dog killed the fox" a IIa ham bar ?a'athalh "The dog slaughtered the fox (unintentionally)" a IIa ha-r ba-m ?a'athalh "The fox (unintentionally) slaughtered the dog" I yIa 'la-m dw-r y-w'-- boy-ERG thief-NOM 3sg.-kill-pret.-af. "The young man killed the thief" Ia IIyIa 'la-m dw-r ?a'a-w' "The young man (unintentionally) killed the thief" IIIa s-?a'a-k'wad-- 1sg.-invol.-disappear-pret.-af.
49
Dixon (2000: 49) includes Kabardian in his typology of causatives, claiming that it belongs to a small group of languages in which the causee in a causative derived from a transitive verb retains its Amarking (marking of agents of transitive verbs). As a similar case he adduces an isolate, Trumai (Brasil), in which both the causer and the causee take the ergative marking in a causative construction. However, what is special about Kabardian is that, in causatives built from intransitives, the same thing happens: the original "subject" retains its subject properties, getting the nominative case and not being indexed on the verb. There are other languages in which subjects retain some subject properties in causatives, e.g. Japanese (reflexive binding) and Qiang (case marking).
49
"This accidentally disappeared on me" (Rus. to u menja nevol'no propalo) y yIIyIa wa w-s-?a'a-w'-- 2sg. 2sg.-1sg.-invol.-kill-pret.-af. "I accidentally killed you" As can be seen from the previous example (the order of personal prefixes is patientagent), a transitive verb does not become intransitive in the involuntative, i. e. the action of the verb still ''affects'' its object 50. In Kabardian grammars I find no examples of the involuntative construction with causative verbs. Although causativity seems to presuppose that the first argument of the verb is a conscious instigator of the action (the agent), my informants say that the following sentence is possible: Ia I IIyIa 'la-m '-m dw-r ?a'-y-a-w'-- boy-ERG old.man-ERG thief-NOM invol.-3sg.-caus.-kill-pret.-af. "The boy made the old man accidentally kill the thief" I found the following example in the biography of abagy Kazanoko (Nal'ik 1984): aI I II IIaa bwy-p'-r zady-ry c' c'k'w-r bee-keeper-4 together.rise-and he.goat small-NOM dw-m q'-?a'-a-xw-- wolf-ERG dir.-invol.3pl.-caus.-drop(?)-pret.-af. "Four bee-keepers rose together and made the wolf (unintentionally) drop the little goat" Note that the prefix -?aa- modifies the action of the original actor (the wolf), which is the derived causee, rather than the action of the derived actor (the four beekeepers). It appears that the involuntative cannot be used with stative verbs, such as taysn "sit": Ia e 'la-r ant-m tay-s- boy-NOM chair-ERG dir.-sit-af.
Pace Abitov (ed.) 1957: 93, Hewitt 2004: 183. Moreover, the case marking on the arguments remains as in the non-involuntative construction. Prefixes with the similar function to the Kabardian involuntative exist in Abkhaz, but also in Georgian (Hewitt 2004: 183).
50
50
"The boy sits on the chair" but: *lam antm ?a'atays "the boy accidentally sits on the chair"; rather, one must use the following construction with the negated verb xwyayn "want": Ia ey e 'la-r ant-m m-xway-wa tay-s- boy-NOM chair-ERG neg.-want-ger. dir.-sit-af. The verb containing the involuntative prefix can be used in polite questions, and the prefix is best rendered as "perhaps, by chance": IIay I? q'-f-?a'a-m-aw--wa p'ara? horse dir.-2pl.-invol.-neg.-see-pret.-ger. inter. "Haven't you seen a horse, by chance?" The origin of the involuntative prefix is an incorporated syntagm which includes the noun ?a "hand" and the participle 'a "doing" (to do something unintentionally is ''to do something using the hand, and not the mind''). A similar, but etymologically unrelated, "involuntative" prefix exists in Abkhaz (-ama-).
FACTITIVE Adding the prefix w- to a nominal stem forms verbs the meaning of which is ''to make something become or have the quality of what the nominal stem expresses", e.g. wf'ayn "to pollute, to make dirty" from f'ay "dirty", or wq'abzn "to clean", from q'bza "clean": a yIea sbyy-m dna-r y-w-f'yay-- kid-ERG shirt-NOM 3sg.-fact.-dirty-pret.-af. "The kid made the shirt dirty" As the case marking on argument shows, the verbs containing the factitive prefix are transitive, just like the causative verbs. In a sense, the factitive is just a special type of denominative causative. The factitive prefix immediately precedes the verbal root. It can be freely combined with the causative prefix, which it follows, cf. e.g. b "soft", wabn "to make soft, soften", yaawabn "make someone soften (something).
51
The division into dynamic and stative verbs does not coincide with the division into transitive and intransitive verbs. Both transitive and intransitive verbs can be either dynamic or static. Dynamic intransitive verbs express action, activity; they are morphologically marked by the prefix -aw- in the present tense. Intransitive dynamic verbs have the prefix ma-(m-) in the 3rd person singular present. Here are some examples of dynamic verbs: s-aw-xda-r "I mock", w-z-aw-h "I carry you", II d-awp''a-r "we hurry", f-aw-la-r "you work", I m-k'wa-r "he goes" Stative verbs express a state, or the result of an action. They are often derived from nouns. They do not have the facultative suffix r in the present, but the affirmative suffix - is compulsory; in the present they do not have the prefix -aw- like dynamic verbs: sa s-- I 1sg.-lie-af. "I am lying" a -r t- he-NOM stand-af. "He is standing" s- "(he) is sitting (on a horse)", "he is riding", cf. - "horse", sn "to sit" All stative verbs are intransitive, except for the verb ?n "to hold". It seems that every noun can be used as a stative verb, i.e. it can be turned into an intransitive verb by adding the suffix - (for affirmative forms): e sa s-prawfayssawr- I 1sg.-professor-af. "I am a professor" Moreover, even adpositions can be turned into (stative) verbs by adding the affirmative suffix -: ay y zwa naw- war after-af. "It was (the time) after the war"
52
APPLICATIVES Kabardian has two sets of applicative prefixes. Applicatives are usually defined as constructions in which the number of object arguments selected by the predicate is increased by one with respect to the basic construction. The object of the original construction is usually demoted to the status of the oblique argument, and the applied argument takes at least some of the properties of the object, cf. the English opposition between Jane baked a cake and Jane baked John a cake, where John is put in the first post-verbal position otherwise reserved for direct objects 51. However, in contradistinction to the applicative construction in most other languages, both Kabardian applicatives do not affect the choice of the object/undergoer. According to Peterson (2007) the benefactive and the comitative functions of the applicative construction are the most common ones cross-linguistically. We have both of them in Kabardian.
I. VERSION (BENEFACTIVE/MALEFACTIVE) The prefix xwa-/-xw- indicates version, i.e. for whose benefit the action is performed; it could also be called a benefactive 52: xa p-xwa-s-tx-- 2sg.-ver.-1sg.-to write-pret.-af. "I wrote for you" The prefix -xw- is placed immediately after the prefix for the person for whose benefit the action is performed: Ia s-p-xwa-k'w-- 1sg.-2sg.-ver.-to go-pret.-af. "I went for you (on your behalf)" ya I, x w wy-na-r g f'a-n-, pysmaw q'-xwa-p-tx-ma your-mother-NOM be glad-fut.-af. letter dir.-vers.-2sg.-to write-cond. "Your mother will be glad if you write her a letter." There is also the malefactive (adversative) prefix f'-/f'a-, which seems to be parallel to the version prefix -xw-, but it indicates to whose detriment (or against whose will) the action is performed 53:
Note that English does not have any applicative morphology, and that the applied argument does not take all of the object properties, e.g. it cannot be passivized. 52 Applicatives (version prefixes) exist in the other NW Caucasian languages. Hewitt (2004: 134f.) calls the prefixes expressing version in NW Caucasian "relational particles" (cp. Abkhaz -z()- which corresponds to Kab. -xw -) to distinguish them from version prefixes in Kartvelian, where a somewhat more complex system exists. 53 Kumaxov 1971: 276. Cf. the similar "adversative" prefix ca- in Abkhaz.
51
53
yaIIa w--f'-da-k'w-- 2sg.-3pl.-advers.-conj.-go-pret.-af. "You went with them against their will" yIIa w-s-f'-da-k'w-- 2sg.-1sg.-advers.-conj.-go-pret.-af. "You went with them against my will" xIx Ixa Ixa
xa-z-a-p' -xa-r maz-m dir.-1sg.-caus.-graze.a.night-pl.-NOM wood-ERG s-f'-xa-ada----y q'-s-xwaxw--r-q'm 1sg.-advers.-dir.-run-back.-pret.-af.-and dir.-1sg.-drive.out-back-pres.-neg. "The horses that I herded at night ran away on me into the wood and I can't drive them out again". IIa -r wagw-m -s-f'a-k'wad-- horse-NOM road-ERG dir.-1sg.-advers.-pret.-af. "The horse disappeared to me on the road, I lost my horse along the road" The category of version in Kabardian should not be confused with the typologically similar applicative construction, which involves the adding of an argument to the core of the clause and increasing the transitivity of a verb. In Kabardian, adding the version prefix -xw- and the adversative prefix -f'- does not affect the transitivity of a verb. The applicative can be freely combined with the causative: II tha-m c'k'w-r q'-p-xw-y-a-w god-ERG this little-NOM dir.-2sg.-ver.-3sg.-caus.-grow/become "May God raise this little one for you!" a sy da-r q'-p-xw-aw--na my gold-NOM dir.-2sg.-ver.-pres.-caus.-remain "I am leaving you my gold" (= "I am making my gold remain for you")
54
II. CONJUNCTIVITY (COMITATIVE) The prefix expressing conjunctivity (Rus. sojuznost') -da-/-d- indicates that the subject is performing the action together with somebody else 54: s-da-k'w-- "I went with him" : s-k'w "I went" 1sg.-conj.-go-pret.-af. da-s-h-- "I carried (it) with him" : sh "I carried (it)" conj.-1sg.-carry-pret.-af. a dabz-r y-na-m d-aw-la girl-NOM 3sg.poss.-mother-ERG conj.-pres.-work "The girl works with her mother" I aIx ea '-m ha'a-xa-m xw y-d-ya-f-- old.man-ERG guest-pl.-ERG sour.milk 3pl.-conj.-3sg.-drink-pret.-af. "The old man drank sour milk with the guests" Note that ha'axa "guests" is in the Ergative in the preceding example, which shows that the applied argument has the status of the oblique, rather than direct object/undergoer. Compare also the Ergative case of the applied NP in the following example: a a I II
-r y nbaw c'k'w gwar-m mxwa gwar-m day some-ERG he-NOM 3sg.poss. friend small some-ERG 'an da-dagw-rt-y 'an conj.-play-impf.-and "And one day he played 'an (a game with sheep bones) with his little friend" The conjunctivity prefix follows the person marker it refers to, and it also follows the person marker expressing the argument marked with the Nominative ("the lowest ranking macrorole"); stating this rule in terms of the traditional "Subject" would be confusing, since we would have to say that -da-/-d- precedes the subject of transitive verbs, and follows the subject of intransitives: x b-d-z-aw-x 2sg.-conj.-1sg.-pres.-eat
A genetically cognate comitative/conjunctivity prefix exists in the other NW Caucasian languages, cf. Ubykh dz-, Abkhaz and Abaza c()-. Abkhaz has another applicative marker, la-, which has instrumental function (Hewitt 2004: 134).
54
55
"I am eating this with you" (transitive verb) a s-b-d-aw-la 1sg.-2sg.-conj.-pres.-work "I am working with you" (intransitive verb) With transitive verbs, adding a conjunctive prefix can refer not only to the conjunction of actors, but also of undergoers (Kumaxov et alii 2006: 250): e Ia x qwyay-m 'qwa da-x cheese-ERG meat conj.-eat "Eat meat with cheese" a e ex a Hasan sy nrtxw qap-r yazm yay-xa-m d-y-ha-- H. poss.1sg. corn bag-NOM himself his-pl.-ERG conj.-3sg.-grind-pret.-af. "Hasan ground my bag of corn together with his own" Note that the added (applied) argument in the examples above is in the Ergative (in its oblique function). This shows that the added argument is not the object/undergoer, but oblique. According to my informants, the applied argument has to be in the Ergative even if it is indefinite: Ia I aa 'la dabz '-m d-y-w-- boy girl old.man-ERG conj.-3sg.-see-pret.-af. "A boy saw a girl with an old man" Just as with the category of version (see above), the category of conjunctivity involves the adding of another person marker to the verb, so from a typological point of view this looks like the comitative applicatives found, e.g., in Haka-Lai, a Tibeto-Burman language (Peterson 2007). However, the difference lies in the fact that the adding of the conjunctivity prefix does not affect the transitivity of a verb, as is clear from case marking and the shape of the person markers. A related conjunctivity (comitative) prefix exists in Abkhaz (-c()-). The conjunctivity/comitative applicative construction should be distinguished from the incorporation of the adverbial prefix -zda-, -zada- "together". In Russian, this is sometimes referred to as the category of "togetherness" (sovmestnost'). The adding of this stem to the verbal matrix does not involve adding any personal prefixes: y a wara sara d-zad-aw-la I you 2pl.-together-pres.-work "You and I work together" e a y a,
56
eI ay Zagwarm m day nrt w gwp Once H. to Nart rider group zayk'wa zd--a-nw raid together-3pl.-lead-inf. "Once, a group of Nart riders came to Himish, to take him on a raid (together with them)" q'-dh--, dir.-come-pret.-af.
RECIPROCITY The verb in the reciprocal form expresses that its two core arguments (the Actor and the Undergoer) act on each other simultaneously. The reciprocal prefix is za- (for intransitive verbs), and zar- (for transitive verbs): I za-gwr?wa-n "to arrange between each other" a zar-w-n "to see each other" a d-zar-wat-- 1pl.-rec.-meet-pret.-af. "We met each other" The core arguments of the verb in the reciprocal form must be in the ergative case, to which the conjunctive suffix -ra "and" is attached: I Iay a '-m-ra y q'wa-m-ra kwad 'wa zar-aw--q'm old.man-ERG-and 3sg.poss. son-ERG-and long doing rec.-see-pret.-neg. "The old man and his son have not seen each other for a long time" Of course, personal pronouns in the 1st and 2nd person are not case marked, but they also receive the conjunctive -ra: Iy a Fara dara kwad m'aw d-zar-w-n- you we long not.doing 1pl.-rec.-see-fut.-af. "We will see each other shortly" Perhaps under the influence of the Russian reciprocal construction (drug-druga), Kabardian has also developed the construction with the "reciprocal pronouns" zdryay ("one-other"): ae Iy, Ia a
57
Z-m dryay-m z--y-a-pk'w-w-ra, one-ERG other-ERG refl.-dir-3sg.-caus.-avoid-ger.-and ?wha-m zarh-- za-y--r hill-ERG meet-pret.-af. brother-suff.-3-NOM "And, after avoiding one another, the three brothers met on the hill"
REFLEXIVITY Kabardian does not have reflexive pronouns; reflexivity is expressed by the verbal prefix za-/z-/z-, which indicates that the subject of the action is the same as the object; from the historical point of view, this is the same prefix as the basic reciprocal prefix. Reciprocity and reflexivity are in many languages semantically and morphologically related, cf. the Croatian verbs tui se (= to hit oneself or to hit each other), gledati se (= to look at oneself or to look at each other). The reflexive prefix follows the prefix for the subject of an intransitive verb (the lowest ranking macrorole, see above) and precedes the prefix for the subject of a transitive verb (the other macrorole): oI s-z-aw-wp'- 1sg.-refl.-pres.-ask-back "I ask myself" (intransitive verb) yoI w-z-aw-wp'- 2sg.-refl.-pres.-ask-back "You ask yourself" aI z-z-aw-tha' refl.-1sg.-pres.-wash "I wash myself" (transitive verb) a z-b-aw-xwpa refl.-2sg.-pres.-dress "You dress yourself" (transitive verb) The reflexive marker on the subordinated verb must be controlled by the subject of that verb, not the subject of the verb in the main clause: da wa z-b-wa-nw we you(SG) REFL-2SG-CAUS-relax-INF d-xwyay- 1PL-want-AFF
58
We want you to hit yourself The preceding example cannot be taken to mean *We want you to hit us, with the subject of the main clause (da) as the controller. It is typologically somewhat unusual that, in the case of transitive verbs, the reflexive affix precedes the personal affix for the constituent which has to be coreferent with it. The reflexive prefix can occur with the infinitive as well: x ps-m z-q'-xw-xa-dza-n water-ERG refl.-dir.-ver.-dir.-throw-inf. "to throw oneself into the water for him" The reflexive prefix is often combined with the suffix -(a)-, meaning "back". The details of the use of this suffix should be further examined, since it appears to be obligatory with intransitive bivalent verbs. The following examples are obtained from my informants: Ia II Ia 'la c'k'w-m z-y-'--- boy little-ERG refl.-3sg.-kill-back-pret.-af. "The little boy killed himself" (transitive verb) Ia II ya 'la c'k'w-r za-wa--- boy little-NOM refl.-hit-back-pret.-af. "The little boy hit himself" (intransitive verb) As can be seen from the examples, the reflexive construction of the verb does not change the valency of the verb (this can be seen by looking at the order of personal prefixes and the case assignment in the sentences above). Aside from this, it can be seen that, in a reflexive construction, the subject of an intransitive verb (to hit, wan) is treated in the same way as the subject of a transitive verb (to kill, 'n), i.e. that Kabardian syntax is nominative-accusative according to this criterion. Note the following pair of sentences with causative verbs, which point to the rules governing the use of --: a Ia ya pa-m 'la-r z-r-y-a-w-- girl-ERG boy-NOM refl.-3sg.-3sg.-caus.-hit-pret.-af. "The girl made the boy hit her" (litterally "herself", i.e. the girl) a Ia ya pa-m 'la-r z-r-y-a-wa--- girl-ERG boy-NOM refl.-3sg.-3sg.-caus.-hit-back.-pret.-af. "The girl made the boy hit himself"
59
The suffix -- "again, back", which we could refer to as "repetitive", can also appear without the reflexive prefix; it can often be translated as "again": a ya y w Ada apq'--ry apq' wrda --nw- Adyghean people-old-and people strong become-back-fut.-af. "And the old Adyghean people will become strong again". Besides temporal, the suffix -- also has directional (spatial) meaning, signifying the reverse direction of the action. Thus, while k'wan means "to go", k'wan means "to return", while tn is "to give", tn is "to give back", etc. When added to adjectival stems, it can also mean "even", e.g. ba is "a lot, many", naba is "more", and naba is "even more". In some cases, the suffix - can indicate that the action is performed again, but not by the same subject; in a Kabardian folk-story about the hero Ashamaz, we find a sentence in which his friend asks him to avenge his father: I I Wy da-r q'a-z-w'--r w'- your father-ABS DIR-PART-kill-PRET-ABS kill-back Kill the one who had killed your father! From the descriptive point of view, it can be said that the suffix - indicates that the lowest Macrorole argument of the verb (in traditional terms its intransitive subject or direct object) is doubly affected by the action: with nonreflexives, this may mean either that the action is performed twice (again) on (or by) that argument, or that the action is directed back at it. With reflexive intransitives, it also means that the lowest macrorole argument is doubly affected: once as the instigator of the action, and again as its undergoer. There is no special possessive reflexive. Rather, the usual possessive pronouns are used: , I sy mal zpxwxr maz pa-m q'-tay-z-n-t 1sg.poss. sheep 5-6 woodmeadow-ERG dir.-dir.-1sg.-leave-plup. "I had left my five or six sheep on a meadow in the wood"
DEONTIC MODALITY The potential prefix -xwa-/xw- and/or the suffix -f()- express deontic modality, i.e. whether the subject is capable of doing the action expressed by the verb or not: yy w-s-xw-h-nw- 2sg.-1sg.-pot.-carry-fut.-af.
60
"I will be able to carry you" The prefix -xw- is placed immediately after the personal prefix for the agent, the potential doer of the action. It seems to be added only to transitive verbs, and in origin it is probably identical to the "version" marker (benefactive) -xw- (Hewitt 2004: 135; see above). The suffix -f- is added both to transitive and intransitive verbs. It is not entirely clear whether these are variants of the same morpheme (-f-/-xw-) which can be both a suffix and a prefix, or whether they are two different morphemes. Klimov (1986: 45) claims that this is only one morpheme which can be either a suffix or a prefix, and he cites it as -xwa- in Kabardian, -fa- in Adyghean, which is in keeping with the rule according to which the Common Adyghean *xw results in f in Adyghean. However, the suffix -f- is found in Kabardian texts as well, cf. dabza'a sawpsaaf "I speak Kabardian" (i. e. "I can speak Kabardian"); the potential prefix occurs more often with negative and interrogative forms, while the suffix is tied to affirmative forms of the verb. In any case, the potential should be distinguished from the so-called "hypothetical mood", which can be included in the category of evidentiality (see below). Potential differs from the proper verbal moods in that it is negated by the suffix -q'm, rather than with the prefix -m-, i.e. it is a finite verbal form: a II sy Dta Kwaba-m q'-f'a'-f-n-q'm 1sg.poss. sword gate-ERG dir.-pass-pot.-fut.-neg. "He will not be able to pass my 'Sword-Gate'" , Wazrmad dy m-wsa-ma, q'-t-xwa-h-n-q'm W. 1pl.poss. neg.-companion-cond. dir.-1pl.-pot.-carry.away-fut.-neg. "If Wazirmad is not our companion, we will not be able to kidnap her (sc. Satanay)" An interesting feature of the potential prefix is that it reduces the transitivity of the verb, i.e. it turns transitive verbs into intransitive. This is in keeping with the relation between transitivity and the "affectedness of the object", i.e. the patient: in the potential, the patient is not affected by the action, so the verb has to be intransitive, cf. the following two examples (Kumaxov, ed. 2006: 257) 55: ye w-ya-s-t-r-q'm 2sg.-3sg.-1sg.-to give-pres.-neg. "I don't give you to him" (the verb is transitive, so the prefix for the doer of the action, 1sg., is placed next to the verbal root) ye w-s-xw-ya-t-r-q'm 2sg.-1sg.-pot.-3sg.-to give-pres.-neg. "I cannot give you to him" (the verb is intransitive, so the order of the prefixes for 1sg. and 3sg. is reversed)
This correlation between (at least some) potentials and intransitives seems to be an areal feature in the Caucasus. Cp. Hewitt 2004: 181ff. for similar examples from Mingrelian, Ingush, Khinalug, and Abkhaz.
55
61
However, the arguments of the verb in the potential form receive the same case endings as in the corresponding indicative 56: I ex smada-m m?arsa-r ya-x (note the 3sg. "transitive subject" prefix ya-) sick.man-ERG apple-NOM 3sg.-eat "The sick man is eating the apple" I x smada-m m?arsa-r xw-aw-x (note the lack of the 3sg. prefix) sick.man-ERG apple-NOM pot.-pres.-eat "The sick man can eat the apple" This can be accounted for if the potential construction is actually of the "inverse-type" (see above), i.e. if the preceding example should be rendered as "it is possible to the sick man to eat the apple". Unlike the potential prefix -xw-, the potential suffix -f- is freely combined with the version prefix -xwa-: I ? St p-xwa-s-'a-f-n ydry? what 2sg.-ver.-1sg.-do-pot.-inf. more "What more can I do for you?"
PERSONAL AND DIRECTIONAL PREFIXES The use of directional prefixes is compulsory with many verbs for certain persons and tenses; the use of these prefixes is quite idiomatic, and it seems that each verb has its own pattern 57, cf. the intransitive verb an "to wait": s-n-aw-w-a "I wait for you" 1sg.-dir.-pres.-2sg.-to wait s-v-aw-a "I wait for you (pl.)" 1sg.-2pl.-pres.-to wait s-aw-a "I wait for him/I wait for them" w-q'-s-aw-a "you wait for me" 2sg.-dir.-1sg.-pres.-to wait
56 57
s-ya--- "I waited for him/for them" w-q'-za--- "you waited for me"
62
w-q'-d-aw-a "you wait for us" w-aw-a "you wait for him/for them" q'-z-aw-a "he waits for me" d-n-aw-w-a "we wait for you" 1pl.-dir.-pres.-2sg.-to wait d-aw-a "we wait for him/them" q'-z-aw-a "they wait for me" etc.
w-q'-da--- "you waited for us" w-ya--- "you waited for him/them" q'-za--- "he waited for me" d-n-aw--- "we waited for you" d-y--- "we waited for him/them" q'-za--- "they waited for me"
Some linguists believe that the use of the directional prefix q'- with polyvalent intransitive verbs depends on the person hierarchy (see below).
TENSES Kabardian has a complex system of verbal tenses. It distinguishes the basic dimensions of the present, future and past, and, within the past, two degrees of remoteness: the preterite and the imperfect denote an action which happened in the more recent past, while the pluperfect denotes an event in the distant past. The category of tense is mostly expressed by suffixation (though there are also verbal prefixes in the present tense): present: prefixes ma- (m-), -aw- and the facultative suffix -r for dynamic verbs, without markers for stative verbs preterite: suffix - imperfect: suffix -(r)t for dynamic verbs and -m for stative verbs 58 anterior preterite: suffix --t pluperfect: suffix - anterior pluperfect: suffix -t categorical future: suffix -n factual future: suffix -nw future II: suffix -nwt
The terminology for Kabardian verbal tenses differs greatly depending on the author; Kumaxov and Vamling (1996: 39 ff.) refer to the anterior preterite as the "perfect II", and to the preterite as the "perfect". The same authors mention also forms with the suffix , which they call "aorist", but these forms seem to be quite rare in texts; cp. also Abitov 1957: 120f.
58
63
In all verbal tenses there are special negative forms, expressed by the suffix -q'm; in the present of dynamic verbs the prefixes ma-, aw- disappear in the negative form, and the suffix -r becomes compulsory, cp. the following examples: 1. Intransitive monovalent dynamic verb k'wan: I() : I() s-aw-k'wa(r) : ma-k'wa(r) 1sg.-pres.-go 3sg.pres.-go "I go" "He goes" : I : s-k'wa-r-k'm 1sg.-go-pres.-neg. "I don't go" : I : k'wa-r-q'm go-pres.-neg. "He doesn't go"
2. Intransitive stative verb tn "stand" s-t- "I stand" : : t- : "He stands" : s-t-q'm : "I don't stand" : t-q'm "He doesn't stand"
3. Intransitive bivalent (dynamic) verb an "wait" () s-aw-a(r) "I wait (for him)" () y-aw-a(r) s-a-r-q'm y-a-r-q'm "He waits (for him)" "I don't wait (for him)" "He doesn't wait"
4. Transitive (bivalent dynamic) verb dn "sew" () s-aw-d(r) "I sew it" e ya-d-r "He sews it" s-d-r-q'm "I don't sew it" e ya-d-r-q'm "He doesn't sew it"
The meaning of anterior verbal tenses is not entirely clear. These are the anterior pluperfect and preterite, and, because of the way it is formed, the future II as well. According to reference books, anterior tenses indicate an action which lasted for some time in the past, and forms in anterior tenses are glossed by adding the adverb "then" (Rus. togd), e. g. k'w "he went" in contrast to k'wt "he went then". Based on examples and the interviews with my informants, I find it most likely that the suffix -t- used in anterior tenses expresses definiteness, i.e. that a verb in an anterior tense indicates an action which was performed at a definite time in the past 59. This can be seen in the following sentence: ax II a, a a a a a Nrt-xa-r m 'p'a-m N-pl.-NOM that land-ERG
59
zamn a-m
bz-t
There do not seem to be any clear parallels to this kind of tense system in Comrie's cross-linguistic survey (Comrie 1985).
64
that time
far-ERG
perform.deeds-ant.pret.
"The Narts lived in that land, (and) Sosruko's sword performed feats then, long time ago". The use of the anterior preterite in the preceding example is consistent with the use of the adverbial expression zamn am "at that time, long ago". Similarly, the use of the preterite is incompatible (or nearly so) with temporal adverbs such as dwsa "yesterday", which specify the exact time when the action was performed. With such adverbs the anterior preterite must be used: a a Ia / w d sa q'la-m s-k'w-t / yesterday city-ERG 1sg.-go-ant. pret. "I went to the city yesterday" *Ia *s-k'w-- 1sg.-go-pret.-af.
The imperfect is, unlike the preterite, used for an action which lasted for some time or was repeated in the past. In narratives this tense alternates with the preterite, which in most cases indicates a one-off action, or an action which is not implied to have lasted for some time or to have been repeated in the past, e.g: . e y Ia da-s-t. Satanyay wna-m q'-'h-- Sawsrq'wa agw-m S. fireplace-ERG dir.-sit-impf. S. house-ERG dir.-enter-pret.-af. "Sosruko was sitting (impf.) by the fireplace. Satanaya entered (pret.) the house" Interestingly, the imperfect is compatible with temporal adverbs specifying the time when the action was performed: a a I w s-k'wa-rt d sa q'la-m yesterday city-ERG 1sg.-go-impf. "I was going to the city yesterday" The opposition between the imperfect and the preterite can easily be seen in the following paragraph:
65
"On the top of Uahamaxwa (Mt. Elbrus) Mazatha, Amish, Thagoled, Sozrash, Hlapsh and others were sitting together with Psatha and marking (y?at, impf.) the drinking of sana (drink of the gods). And so every year these gods organized (y't, impf.) the drinking of sana. And the one who was (taytmy, impf.) manliest on earth, he was brought over (yarty, impf.) and was given to drink (yrfart, impf.) from a horn filled with sana, as a favour to the thirsty little men on earth. The Narts esteemed (yap'art, impf.) highly the man who drank with the gods. And many years passed (yak'wa', pret.) in that way. At the celestial drinking of sana, Psatha, who personally sat as thamada (commander of the feast) got up and said (y?, pret.)." In this paragraph we can see how a sequence of events repeated in the past and expressed by the imperfect was interrupted by the event referred to by the commencing story, which is expressed by the preterite. The pluperfect generally expresses an action performed a long time ago, in the distant past: y yI aIa ax dy w q'-y-wa-nw-r zar-w'--nw- 1pl.poss. after dir.-3sg.-become-inf.-ABS recip.-kill-back-fut.-aff. q'-d---?a-- dir.-1pl.-pref.-3pl.-say-plup.-aff. dy da--xa-m 1pl. father-old-pl.-ERG
"Our forefathers said to us long time ago that the ones who will exist after us would kill each other" In vivid narration the present tense can also be used to express a past action: a y . Ia Ia a, aI.
66
Bly ma-w swan-am y mxwq'wa-r. Z-p'--m-y adult 3sg.pres.-become S-ERG 3sg.poss. stepson-NOM part.-raise-pret.-ERG-and y-?-t pw bly-y, z-aw-gwk'wa. 3sg.-have-ant.pret. daughter adult-and refl.-pres.-fall.in.love "The Svan's stepson grows up; those who had raised him had a grown-up daughter, and they fall in love" (note that Swan here refers to a member of a Kartvelian people, the Svans) The difference between the categorical and the factual future is not entirely clear to me. Some sources say that the categorical future expresses an intention to perform the action, while the factual future expresses the speaker's certainty that the action will be performed. According to my informant, the natural way to say "I shall go to the city" is q'lam s-k'wa-nw- (city-ERG 1sg.-go-factual fut.-af.), whereas q'lam s-k'wa-n- (with the categorical future suffix -n-) would be used only if the subject will go to the city under a certain condition. However, from the passages such as the following one it would appear that the categorical future does not refer to any particular time when the action will be performed, while this specification is necessary with the factual future. If so, the opposition between the categorical and the factual future would correspond to the opposition between the preterite and the anterior preterite: ax xay ax yy I ay, a aaIy: "yy Iy " I. I xa : "a , a x I" aI a x a Nrt-xa-m xbza-w y--xa-t za-zawa-nw byy-m nart-pl.-ERG custom-ADV 3sg.-3pl.-in.be-impf. rec.-fight-fut. enemy-ERG p'aa y-r-t-w, br-y y-r--'a-w: "d-va-zawa-nw date 3sg.3pl.give-ger. message-and 3sg.-3pl.-caus.know-ger. 1pl.-2pl.-fight-fut. d-na-k'wa-nw- m-pxwada zamn-m" - --?ara. rha'a 1pl.-dir.-go-fut.-af. this-like time-ERG dir.-3pl.-say but byy-m xbza-r y-q'wta-ry: "Nrtpq' t-q'wta-n-, enemy-ERG custom-NOM 3sg.-break-and Nart.race 1pl.-break-fut.af. nrt xakw t-wn'a-n-" --?a-ry nrt xakw-m q'-y-h-- Nart land 1pl.-seize-fut.-af. dir.-3pl.-and Nart land-ERG dir.-3sg.-carry-pret.-af. "The old Narts had the custom to give the enemy the date, to send him the message that they would come to fight: "We will come to fight at that time", they used to say. However, the enemy broke the custom: "We will come to fight the race of the Narts (eventually), we will seize the land of the Narts", they used to say when they came to the land of the Narts."
67
In the preceding passage, apparently, the Narts used the factual future to give the exact time when they would come to fight, while their enemies just indicated that they would come to fight, without stating exactly when. The opposition clearly seems to be in the definiteness of time reference. Some authors refer to the future II as conditional. It is formed by adding the suffix -t to the factual future form. It seems that forms with the nt suffix, which are sometimes set apart as a distinct verbal mood (the subjunctive), can also be included in this category, cf. s-k'wa-nt "I would go" (see below). II , . y t'sp'a q'--m-xwta-ma, paa-nwta-q'm 3sg.poss. weak.spot dir.-3pl.-neg.-discover-cond. overcome.-fut.II-neg. "If they would not find his weak spot, they would not overcome him" Here are the selected paradigms of the verbal tenses:
PRESENT A) dynamic intransitive verb k'wan "to go" 1. s-aw-k'wa(r) "I go" s-k'wa-r-q'm "I don't go" 2. w-aw-k'wa(r) "you go" 3. m-k'wa(r) "he goes" 1. d-a-k'wa(r) "we go" 2. f-a-k'wa(r) "you go" 3. m-k'wa-xa-r "they go" B) static intransitive verb -sn "to sit" 1. s--s- "I sit" 2. w--s- "you sit" 3. -s- "he sits" 1. d--s- "we sit" 2. f--s- "you sit" 3. -s- "they sit" C) dynamic intransitive verb psaan "to converse" sawpsa "I converse" wawpsa "you converse" mpsa "he converses" dawpsa "we converse" fawpsa "you converse" mpsa (mapsaxar) "they converse" D) transitive verb hn "to carry": s-aw-h "I carry him"/"I carry them" w-z-aw-h "I carry you" f-z-aw-h "I carry you (pl.)"
68
w-aw-h "you carry him" /"you carry them" s-b-aw-h "you carry me" d-b-aw-h "you carry us" ya-h "he carries him" / "he carries them" s-ya-h "he carries me" d-ya-h "he carries us" w-ya-h "he carries you" f-ya-h "he carries you (pl.)" f-d-aw-h"we carry you (pl.)" f-aw-h "you carry him" / "you carry them" s-v-aw-h "you (pl.) carry me" d-v-aw-h "you (pl.) carry us" y--h "they carry him" / "they carry them" s--h "they carry me" d--h "they carry us" w--h "they carry you" f--h "they carry you (pl.)" PRETERITE s-k'w- "I went" w- k'w- "you went" k'w- "he went" ss "I was sitting" ws "you were sitting" s "they were sitting" ds "we were sitting" fs "you were sitting" s "they were sitting" sa txm syad "I read a book" wa txm wyad "you read a book" r txm yad "he read a book" da txm dyad "we read a book" fa txm fyad "you read a book" xar txm yad "they read a book" sh "I carried him" / "I carried them" wsh "I carried you" fsh "I carried you (pl.)" ph "you carried him" / "you carried them" sph "you carried me" dph "you carried us" yh "he carried him" / "he carried them" syh "he carried me" dyh "he carried us" wyh "he carried you" fyh "he carried you (pl.)" th "we carried him" / "we carried them"
69
wth "we carried you" fth "we carried you (pl.)" fh "you (pl.) carried him" / you carried them" sfh "you (pl.) carried me" dfh "you (pl.) carried us" yh "they carried him" / "they carried them" sh "they carried me" dh "they carried us" wh "they carried you" fh "they carried you (pl.)" IMPERFECT s- k'wa -(r)t "I was going" w-k'wa(r)t "you were going" ya-k'wa(r)t "he was going" ANTERIOR PRETERITE s- k'w-t "(then) I went" PLUPERFECT s-k'wa-- "I went a long time ago" ANTERIOR PLUPERFECT s-k'wa-t "(then) I went a long time ago" CATEGORICAL FUTURE s- k'wa-n- "I will go" FACTUAL FUTURE s- k'wa-nw- "I will go, I am about to go" ( is the affirmative suffix) FUTURE II s- k'wa-nwt "I was about to go / I would go"
INTERROGATIVE The interrogative is sometimes referred to as the question mood. It uses the same type of suffixal formation as verbal moods. Like verbal moods, the interrogative is a nonfinite verbal form (it takes the prefixal negation -m-) and it cannot be combined with the affirmative suffix -. However, considering the function of this category, it is better to think of it as a form of expressing the illocutionary force; the interrogative suffixes bring into question the content of the predicate, ie. the verb. The interrogative suffixes are -ra, -q'a, -wy: yx w-txa-ra 2sg.-write-inter.
70
"Are you writing?" (interrogative) II s-f-?w'a-n-q'a 1sg.-2pl.-meet-fut.-inter. "Will I meet you?" (interrogative) 60 The suffix -q'a can also be used in exclamations: aIxa, yI ! w g v-'axmy, za w-q'-y'-n-q'a wa-m! soon-late once 2sg.-dir.-exit-fut.-inter. hole-ERG "Sooner or later, you will exit that hole!" The interrogative has no suffix in the preterite and in the future, but the affirmative suffix is not used, and the intonation of the sentence serves as another indicator of interrogativity: aaxa f---tx- 2pl.-3pl.-caus.-write-pret. "They made you write (it)?" Iy d-f-xwa-k'wa-nw 1pl.-2pl.-ver.-go-fut. "Are we going to go for you?" The suffix -ra can be used twice in disjunctive questions: I I yII z yas-'a s-y-a-ha'a-f-n-ra s-y-m-a-ha'a-f-n-ra? 1 year-INST 1sg-3sg.-caus.-guest-pot.-fut.-ra 1sg.-3sg.-neg.-caus.-guest.-pot.-fut.-ra "Will he be able to receive me as a guest for a year or will he not?" Interrogativity can also be expressed with interrogative particles, e. g. the particles p'ara, ha "why", etc. They can be freely combined with the interrogative suffixes: a yea ? ha w-z-tay-s- mva-r q'a-bana-ra? why this(NOM) 2sg.-part.-dir.-sit-pret. rock-NOM dir.-leave-inter. "Why are you leaving this rock you were sitting on?"
MOODS
60
In the interrogative formed with the suffix -q'a it is assumed that the answer will be affirmative (Kumaxov & Vamling 1998: 53).
71
Kabardian verbal moods are: indicative, imperative, admirative, optative, conditional and permissive. A) Indicative The indicative is the unmarked verbal mood. It has the suffixes - (for affirmative) and -q'm (for negation). B) Imperative The imperative is the bare stem (without any suffixes): la! "paint!" (lan "to paint") a! "lead!" (an "to lead") tx! "write!" (txn "to write") If the lexical verb contains directional prefixes, these remain in the imperative: I mda q'-k'wa "come here!" here dir.-go The third person singular imperative receives the personal prefix: yaI ee y-w-'a taylayfawn-r q'a-z-gwpss--m 3sg.-factitive-life telephone-NOM dir.-part.-invent-pret.-ERG "May live the one who invented the telephone!" The imperative is also used in the 2nd person plural, with the regular person prefix: e a eyI! fy Satanyay gwa f-ya-wp'! poss.2pl. S. lady 2pl.-3sg.-ask "Ask (pl.) your (pl.) Lady Satanay!" Instead of the 1st person plural imperative, the causative of the 2nd person singular or plural imperative is used, with the 1st person plural as the causer: d-v-a-tx (1pl.2pl.-caus.-write) "let's write". This is typologically completely parallel to the English imperative construction (let us write): I Wazrmad wsa d-v-a-' W. companion 1pl.-2pl.-caus.-do "Let us make Wazirmad our companion!" The negation in the imperative is the prefix -m-, as if it were a non-finite form: yI
72
w-m-k'wa 2sg.-neg.-go "don't go" The imperative can be formed from verbal stems containing preffixes for version or conjunctivity: a! "run!" s-xwa-a "run for me" s-xw-da-a "run for me with him!" The imperative can be reinforced by adding the suffix -t: xa "eat!" vs. xa-t "come on, eat!" ! x-m f-xa-pa-t sea-ERG 2pl.-dir.-look-imp. "Come on, look into the sea!" C) Admirative The admirative mood is formed with the suffix the suffix -y. It is used to express the speaker's admiration or the unexpectedness of the performing of the action expressed by the verb; few languages known to me have such a verbal mood, but it does exist, e. g., in Albanian: a sa nawba z ma s-aw---y I today 1 bear 1sg.-see-pret.-af.-adm. "Why, I saw a bear today!" The admirative suffix -y can also have an interrogative sense and imply that the speaker does not approve of the action expressed by the verb. D) Optative The optative is formed with the suffixes -ara(t), -rat and -'at, as well as the prefix -r-ay- (where -ay- is the petrified 3 sg. person marker) expresses a wish for an action to be performed. A morphologically formed optative as a verbal mood is very rare among the languages of Eurasia, but most Caucasian languages have this verbal mood 61. a a() -r q'a-s-ara(t) he-NOM dir.-come-opt. "Oh if he would come!"
61
According to the data in WALS, a morphologically formed optative must be an areal feature of languages spoken in the Caucasus; this doesnt refer only to the indigenous ("Caucasian") languages, but also to languages belonging to other families (Turkic, Iranian) which are spoken there.
73
ay m-r zy xw-r q'a-w---wa s-aw-arat he-NOM whose thigh-bone dir.-become-back-pret.-ger. 1sg.-see-opt. "May I see resurrected the one whose thigh-bone this is" yx exI wax q'yax-'at rain fall-optative "Oh if it would rain!" eI y-ray-'-f 3sg.-opt.-do-pot. "May he manage to do it" There is also an optative prefix w-, apparently identical with the 2nd person prefix; however, the optative formed with this prefix does not distinguish between the 2nd and the 3rd person, cf. w-k'wa "may he go", or "may you go" (Kumaxov 1989: 201). Besides that, a wish can also be expressed with the "optative particle" py(y), as in the greeting wpsaw py "may you be healthy". E) Conditional The conditional has the suffixes -m(a) and -am(a). It expresses the fact that the action is performed under a certain condition. A Kabardian verb in the conditional can be equivalent to an entire conditional clause in English: aa d-f-w--ma 1pl.-2pl.-see-pret.-cond. "If you saw us" Iy ye, yeI f'wa w-yada-ma, wacyanka-f' q'a-p-h-n- well 2sg.-study-cond. grade-good dir.-2sg.-get-fut.-af. "If you study well (hard), you will get a good grade" y I I, y I I thwrmba xw q'-y-'-ma foam white dir.-3sg.-appear-cond. s-q'-aw-k'wa-, 1sg.-dir.-pres.-go-back
thwrmba xw q'-y-m-'-ma s-q'a-k'wa--r-q'm foam white dir.-3sg.-neg.-appear-cond. 1sg.-dir.-go-back-pres.-neg. "If a white foam appears, I am coming back, if a white foam does not appear, I am not coming back"
74
z- wa-t-t-nt, q'-t-xwa-b-wat--ma 1-horse-2sg.-1pl.-give-fut.II dir.-2pl.-ver.-2sg.-find-again-cond. "We would give you a horse if you found it for us" The suffix -ama is apparently added to the imperfect -t-; the complex suffix -tama- is used in irreal conditional clauses: a aI Ia, ax y -b s- aq'wa-m mf'a 'a-m-n-t-ama, this-ERG alot-pret. leg-ERG fire dir.-neg.-catch.fire-impf.-cond. ba-mta-xa-r y-s-nwta-q'm bee-hive-pl.-NOM 3sg.-burn-fut.II-neg. "If the leg alloted to him did not catch fire, the bee-hives would not have burned down" (in spite of its weirdness, the translation is correct; in the story from which this example is taken, "he" is the bee-keeper who was "alotted" one leg of a goat, and this leg caused the fire that burned down the beehives). As can be seen from the preceding example, the future II is used in the main clause when there is an irreal (counterfactual) conditional in the dependent clause. F) Permissive The permissive mood has the suffix -m(), -my. It expreses that the action is performed in spite of some fact or circumstance. It is translated into European languages with permissive clauses containing conjunctions such as although. I Ia I I w fa-'a 'al-a-my g -'a '- skin-INST boy-af.-perm. heart-INST man-af. "Although by skin (=judging by the skin) he is a boy, by heart he is a man". Some authors include the subjunctive in the list of verbal moods 62. The subjunctive is expressed by the suffix -nt; forms with this suffix seem to have a conditional meaning, i. e. they express that the action is performed under a condition, e. g. s-k'wa-nt "I would go", but in some contexts they also appear to express the possibility that the action is performed, as in the following example: I ax? st y-'a--nt Nrt-xa-m? what 3pl.-do-back-fut. II N.-pl.-ERG "What could the Narts do?" (asked as a rhetorical question) I, I
62
75
wsa s-p-'-ta-ma, s-na-k'wa-nt companion 1sg.-2sg.-make-impf.-cond. 1sg.-dir.-go-fut. II "If you would make me your companion, I would go". This is presumably the same form referred to as the future II in this grammar (see above).
EVIDENTIALITY The basic evidentiality suffix is -an-. It is used to express that the action is probably happening (or that it has happened, or that it will happen), but that this was not evidenced by the speaker 63: a Ia -r q'a-k'wa---an- he-NOM dir.-go-back-pret.-evid.-af. "He probably came back" (but I did not see this)
Instead of the category of evidentiality, Kabardian grammars talk about a special "hypothetical mood", Rus. predpoloitel'noe naklonenie. However, it can be shown that this is not a sub-category of mood; evidentiality is a category used to express the source of information on the basis of which the assertion is made. This category exists in many languages, and it is morphologically realized in Turkish, for example. The evidential suffix is actually an agglutination of the pluperfect suffix -a- and the future suffix -n. It often happens that affixes used as tense markers become grammaticalized as evidentiality markers and/or epistemic modality markers (cf. the English will have been in evidential expressions such as It will have been him, or Croatian future tense marker bit e in the evidential phrase Bit e da je doao "He must have come, I guess he came"). As a confirmation that the "hypothetical mood" does not belong to the same category as other verbal moods we can use the fact that, unlike the affixes for true verbal moods, the evidentiality affix can be combined with the indicative/affirmative suffix -, cf. Ia k'w--an- "he probably went" in opposition to k'w-- "he went". The suffix -'a "maybe" can also be used together with the evidential suffix -an, cf. IaI k'w--an-'a ma-w "maybe he went" (ma-w is the 3rd p. sg. present of the verb "to become"). Besides the synthetic evidential construction, there is the analytic construction with the auxiliary verb wn (used in the future) and the (participial) verbal base:
It is not quite certain whether the source of information (evidentiality), or rather the uncertainty of the speaker (epistemic modality) is the primary function of this suffix. My informants tend to translate sentences with the suffix -an- using the Russian expression skoree vsego "most probably".
63
76
Ia f-k'w- w-n- 2pl.-go-pret. be-fut.af. "you probably went" ' dagw w-n- old.man deaf be-fut.af. "The old man is probably deaf"
DEVERBAL NOMINALS Kabardian has three classes of deverbal nominals: the infinitive (a kind of verbal noun), the participle (a kind of verbal adjective), and the gerund (a verbal adverbial, with many features of participles in other languages; some linguists would call it a converb).
I. INFINITIVE The lexical form of verbs is the infinitive, which ends in -n. The infinitive is actually a verbal noun which can be inflected for case, e. g. txan "to write" has the forms txanr (NOM), txanm (ERG), txanm'a (INST) and txanw (ADV). Also, personal prefixes can be added to the infinitive form, cf. forms of the verb laan "to work": 1sg. s-laan 2sg. y w-laan 3sg. laan 1pl. d-laan 2pl. f-laan 3pl. laan
The personal prefixes are sometimes optional, especially in obligatory control constructions, when one argument of the infinitive is obligatorily co-referent with one argument of the matrix verb: Ia sa 'a-z-dz-- I dir.-1sg.-begin-pret.-af. "I started to go" ()I (s)-k'wa-n 1sg.-go-inf.
However, the personal prefixes cannot be omitted when there is no necessary coreference between the arguments of the infinitive and of the matrix verb: a I s-k'wa-n sa sy-gw-- I 1sg.poss.-think-pret.-af. 1sg.-go-inf. "I intended to go, I thought about going".
77
In the preceding example the personal prefix s- cannot be omitted, because the verb gwan does not have obligatory control. Stative verbs can be formed from nouns and adjectives by adding the infinitive suffix: "man" : -n "to be a man"; f'c'a "black" : f'c'a-n "to be black". In some constructions (especially in subordinate clauses), the infinitive takes the suffix -w as well (identical to the adverbial suffix), and thus becomes formally identical to the future suffix (-nw) 64: a eIa y IIy sa -b ay?-- wna-m 'a-m-'-nw I he-ERG tell-pret.-af. house-ERG dir.-neg.-go-inf. "I told him not to go out of the house" For each infinitive construction (and each verb) it is necessary to learn whether the infinitive takes the suffix -n or -nw. The rule is that, if there is no personal prefix on the infinitive, the only possible infinitive form is the one with the suffix -n. Some authors distinguish verbal nouns or "masdar" from the infinitive. The verbal noun has the same ending as the infinitive (-n), but, unlike the infinitive, it can have possessive forms 65: txan-r "reading", sy-txan-r "my reading". Also, just as any other noun, the verbal noun can be modified by an adjective: Y y I yxa Wa wy dn 'h-r b-wx-- you your sewing long-NOM 2sg.-finish-pret.-af. "You have finished your long sewing" Due to lack of more detailed research we cannot be entirely certain whether it is legitimate to distinguish between infinitives and verbal nouns.
II. PARTICIPLES According to grammar text-books participles have the subject, object, instrumental and adverbial form. These forms of the participle correspond to nominal cases, but the affixes for different forms/cases are not entirely equal to the ones in the nominal declension 66. The subject form takes the prefix z()- if it expresses a transitive action; if the action is intransitive, there is no prefix, and the participle is thus the same as the bare stem of the verb:
This type of infinitive can also be called the supine. Kumaxov 1989: 279. In Kumaxov (ed.) 2006, I: 324 it is claimed that only the masdar (verbal noun) is inflected for case, while the infinitive has no case forms. 66 The morphology and syntax of participles are the weakest point of Kabardian grammars; cf. Kumaxov 1989: 254 ff.
65
64
78
z-txr "writing it" - ya-z-tr "giving it to him" - lar "working" - txar "writing" (-r is the nominative ending). The object form takes the prefix za-, z- if the participle refers to the indirect object; if not, there is no prefix: za-pr "who he is looking at", z-xwa-q'war "who he is going for", s-txr "which I am writing". What this actually means is that the prefix za-/z- is used when the participle refers to the noun phrase which is marked (or would be marked) by the ergative case, and not by the nominative 67. Participles referring to the nominative noun phrase do not have the prefix z-/za-: I e -b y-a-r "the one whom he is leading" : -b '-r ya-a "he leads the old man" he-ERG 3sg.-to lead-NOM he-ERG old man-NOM 3sg.-to lead sa -r z-xwa-s-a-r I he-NOM part.-ver.-1sg.-to lead-NOM "The one who I am leading (him) for" I sa -r '-m xw-z-aw-a I he-NOM old man-ERG ver.-1sg.-pres.-to lead "I lead him for the old man" In accordance with our schema of case assignment in Kabardian (see above), we can say that the prefix z-/za- indicates that the participle does not refer to the argument which is the lowest ranking macrorole (ie. that it refers to the argument which is not the lowest one in the Actor-Undergoer hierarchy). Since the lowest ranking macrorole in Kabardian, as an ergative language, is equivalent to the traditional notion of the subject, we can give a somewhat simplified statement saying that the prefix z-/zaindicates that the participle does not refer to the "subject" of the sentence. Traditional grammars say that the subject participle form is conjugated according to the person of the object, and the object form according to the person of the subject; what this really means is that the personal prefix on the participle with the z-/zaprefix expresses the argument which represents the lowest ranking macrorole in the verb's logical structure, while the personal prefix on the participle without the z-/zaprefix expresses the argument which is not the lowest ranking macrorole (which is not the "subject", in the sense in which we talk about the subject in Kabardian): s-z-txr "that is writing me down, writing me down"; w-z-txr "that is writing you down"; s-txr "which I am writing"; p-txr "which you are writing" (< *w-txr).
67
79
The participle can be inflected for all persons except for the person of the lowest ranking macrorole (the Undergoer) and for the person indexed by the participial prefix z-. Participles can also contain personal markers of conjunctivity and version: e d-ya-a-r conj.-3sg.-wait-NOM "who is waiting for him/her together with him/her" I xwa-k'wa-r vers.-go-NOM "who is going for him/on his behalf" The participle prefix has the form za- rather than z- when the participle refers to the oblique argument (non-macrorole core argument) of an intransitive verb, e.g. za-da-r "who he/she is calling" (from yadan "call"). The so-called "instrumental" participle form is formed with the prefix zar()-, zarawhich contains the prefix za-: zar-lar "with which you do"; zar-ya-dar f'wa "it is well the way he reads/studies" (Kumaxov 1984: 142). The instrumental form of the participle often behaves as a general-purpose complementizer/subordinator (see below). It can sometimes be translated as "when", "how", or "as", cp. the title Sawsrk'wa y dta -r ap zar-y-'--r (S. poss.3sg. sword-NOM L. part.-3sg.-dopret.-NOM) "How/when Lapsh made Sosruko's sword". This form of the participle can also be added to nominal stems in order to make them suitable for complementation: y a Iy wara sbyy-r q'a-wr-t zar-da-r y-m-'a-w thus child-NOM dir.-grow-impf. part.-Adygh-NOM 3sg.-neg.-know-ger. "Thus the child was growing, without knowing that it was an Adygh (Circassian)" Syntactically, participles behave as qualitative adjectives (they are inflected for case and they are placed after the noun they refer to): ax sbyy-r z--xa-r y child-NOM part.pref.-caus.-feed-NOM poss.3sg. "The one who feeds a baby is its mother" (a proverb) aa na- mother-af.
Participles are inflected for tense, but they do not have forms for all tenses. The verb txa-n "to write" has the forms for the active present participle txar "writing, that writes", the preterite participle txr and the future participle txanwr. Participles may receive case affixes, but this is mostly optional:
80
() I, II() a w w w Z- at(-r) ma-g f'a-ry, z-f'a-k' ad(-r) m- part.-find-(NOM) 3sg.-rejoice-and part.-advers.-lose-(NOM) 3sg.-cry "He who finds (it), rejoices, he who loses (it) - cries" (a proverb) There is no correlation between the case ending and the syntactic role of the participle. In the examples above, the participle refers to the actor of a transitive verb (with suppressed object), but it can still be in the nominative. The syntactic role of the participle is indicated only by the presence or absence of the prefix z- (above), or by directional prefixes z()da- (with telic meaning) and (z-)- (with locative/temporal meaning). Take, for example, the following participles: I zda-k'wa-r part.-dir.-go-NOM "where he is going to" a -la-r dir.-work-NOM "where he is working" I -?a-m dir.-talk-ERG "where (people) talk" ea e, I a nbaw dya-t-y, friend to-impf.-and
"It is to his friend that he set out, and when he got there, he entered the guest-house" The presence of the case endings -r, -m may indicate definiteness of the argument referred to by the participle. The exact conditions on their use are unknown. Negation of the participle is expressed by the prefix m-: m-txa "that isn't writing", s-z-m-w "that isn't seeing me". Cf. the opposition between the finite negation (-q'm) and the participial one 68:
68
The difference between these two types of negation is used as the basis for the differentiation of finite and non-finite forms in Kabardian (Kumaxov & Vamling 1995: 6). Non-finite forms can only be used in sentences in which they are dependent on finite forms. The only exception to this thesis are imperatives and interrogative constructions, which do not depend on finite forms and they do have the prefixed negation m- like non-finite forms.
81
y yI, I wa w-m-k'wa-ma, sa-ry s-k'wa-r-q'm you 2sg.-neg.-go-cond. I-and 1sg.-go-pres.-af.-neg. "If you don't go, I won't go either" Participles can be construed with the auxiliary verb wn "be, become": I I Ia ?waxw-r sar-'a '- wn-q'm job-NOM I-INST do-pret.(part.) become-neg. "I cannot do this job" (lit. "This job does not become done by me")
III. VERBAL ADVERBS (GERUNDS) Verbal adverbs (or gerunds) are formed from verbal roots using the same suffixes (-w(), -wa, -wra, -ra, -'ara) as in the formation of regular adverbs from nouns and adjectives (see above). The particularity of Kabardian verbal adverbs is that they can be inflected for person, and they also distinguish tenses, mood and transitivity/intransitivity. The transitive verbal adverb yad-aw "reading", for example, is inflected in the following way: sg. ey yey ey s-yadaw w-yadaw yadaw pl. ey ey ey, exy d-yadaw f-yadaw yadaw, yada-xa-w
In the preterite the suffix -- is added, so the forms are syadw, wyadw, etc. These finite forms of verbal adverbs are equivalent to entire subordinate clauses, so syadw would be translated as "when I was reading", fyadw "when you were reading", etc. ay a Ps-r t--wa ml dfa- river-NOM freeze-pret.-ger. ice smooth-af. "Since the river froze, the ice is smooth" Ia a Iy Sa s-'--q'm r I 1sg.-know-pret.-neg. he-NOM q'a-k'wa-wa dir.-to go-ger.
82
"I didn't know he had come" Iy aI y, Ie a I xIa T'w-ry mf'a-m bada-s-wra, z dap 'ayay-ry two-and fire-ERG dir.-sit-ger. one burning.coal fly.off-and ysp-m y dwarf-ERG his dna kwa'-r shirt lap-NOM pxys'-- burn.through-pret.-af.
"As the two (riders) were sitting by the fire, a burning coal flew off (it) and burned through the dwarf's shirt in his lap"
DIRECTIONALS The prefix q'a- can be roughly translated as "this way, hither", and the prefix n(a)- as "that way, thither", but their use is quite idiomatic. Their position in the verbal complex is immediately after the first personal prefix, or they come first if the personal prefix is 0- (in the 3 person): I 0-q'a-k'wa 3sg.-this way-pres.-go "He is coming this way" a y e Ia -r wy day 0-na-k'w-- he-NOM 2sg.-poss. to 3-thither-go-pret.-af. "He came towards you (that way)" In some combinations of personal markers these prefixes do not occur, in others they are compulsory 69: oa s-na-w--- "I waited for you", but *s-w()--- 1sg.-thither-2sg.-wait-pret.-af. a s-v--- "I waited for you (pl.)", but *s-n()-v--- 1sg.-2pl.-wait-pret.-af. y q'-d-aw-wa "he is hitting us", but *daw-wa
Kumaxov 1971: 253. It seems that the use of directionals depends on the "person hierarchy" (see below).
69
83
hither-1pl.-pres.-hit Colarusso (1992: 92-94) calls these prefixes "horizon of interest", which doesn't mean much. It seems that they function in the same way as directional affixes, which exist in many languages (cf. German hin-, her-, auf-, etc.), indicating the direction in which the action is performed. Some of them are so frequent (e. g. the prefix q'a-) that they must belong to verbal morphology, while others modify only some verbal roots and should therefore be included in the chapter on word formation (see below). There is no clear borderline between these two groups of prefixes. According to Colarusso (1991), there are also preverbs which indicate the manner in which the action is performed, or the state (consistency) of the subject, e. g. -xa- "as mass", -d- "as liquid": xa ps-r 0-q'-xa--- water-NOM 3sg.-hither-as.mass-flow-pret.-af. "The water flowed out" (if it was thrown out of the bucket, as mass) a ps-r 0-q'-d--- water-NOM 3sg.-hither-as.liquid-to flow-pret.-af. "The water flowed out" (if it leaked out through a hole or a pipe) Neither texts nor my informants enabled me to ascertain the existence of these preverbs. The nearest equivalents in the standard language are the directional preverbs da- and xa-, which both denote that the action is performed in some container; it appears, however, that the difference between them lies in the nature of the container: for da-, the container must be empty, while xa- refers to a container that is represented as some kind of mass, or substance. The prefix da- indicates that the action (or, more frequently, state) of the verb is being performed in a certain area, or (empty) container: x tx-r kaf-m da-- book-NOM vessel-ERG da-lie-af. "The book is lying in the vessel" IaI pa-r p'nt'a-m da-dza-n wood-NOM garden-ERG dir.-throw-inf. "to throw wood into the garden" The prefix xa- (x-) denotes the location in some container (conceived as substance), or the orientation of the action towards the interior: ps-m x xa-dza-n
84
water-ERG dir.-throw-inf. "to throw into water" The prefix - indicates the place of the action (usually the place from which the action is performed), e. g. -dzn "to throw off, to throw down from some surface" (cp. dzn "throw"), -n "to descend from" (cp. n "run"), -n "lie on something", -wn "to see something somewhere": a I, aIy y Zamn-r k'wa-rt, Wazrmas-y k'wa-w maz-m -psaw-rt time-NOM go-impf. W-and hunt-ger. wood-ERG dir.-live-impf. "Time was passing, and Wazirmes was living in the wood (and) hunting" The prefix - can also have temporal meaning; participles prefixed with - can be translated as temporal clauses introduced by "when", e.g. -k'w--m "when he went/had gone". The prefix tay- indicates movement onto, or away from some surface, e. g. tay-dzn "throw onto": x e tx-r stawl-m tay-dza-n book-NOM table-ERG dir.-throw-inf. "to throw the book on the table" The prefix 'a- indicates the location under something or inside something (conceptualized as being under some cover), e. g. 'a-dzn "to throw something under something", 'a-n "to run under something", 'a-atn "to fly away from under something": y I Ia wna-m 'a-ha-ry t's-- room-ERG dir.-carry-and sit-pret.-af. "He came into the room and sat (down)" -r bwan'a-m 'a-t- horse-NOM cave-ERG in-sit-af. "The horse is in the cave" The prefix bla- denotes an action by, or past a particular reference point, e. g. bla-an "to run past": y yIa q'a-wv?-- w-r kwaba-m bla--ry horseman-NOM gate-ERG dir.-run-and dir.-stop-pret.-af. "The horseman run past the gate and stopped"
85
The prefix f'a- denotes the falling movement from the surface of something, or the "hanging" position of some object, e. g. f'a-n "jump, fall off": Ia ar-r gwam-m f'a--- wheel-NOM axle-ERG dir.-run-pret.-af. "The wheel fell off the axle" The prefix p- denotes action which is taking place at the end, or edge of something, e. g. p-sn "sit at the edge", p-n "run off from the edge of something", pdzn "throw off from the edge", etc.
blatayq'ad xapdanaf'a-
'a-
Besides these basic directional and locative prefixes, there are also many secondary prefixes, mostly derived from nouns, often nouns denoting body parts: 1. bada- "towards, away from" (cf. ba "breast"): atan: "fly" vs. badaatan "fly towards" 2. ?w- "near, next to, away from" (cf. ?w "mouth"): atan "fly" vs. ?watn "fly away from" (note that the verbal root also changes its vocalism in derivation) 3. bwr- "sideways" (cf. bw "hip"): xwan "chase, drive" vs. bwrxwan "drive sideways" 4. 'ar- "on(to) the edge of, on(to) the top of" (cf. 'a "tail, end"): an "lead" vs. 'aran "lead to the top, or slope of" 5. axa- "in front of" (cf. a "mouth"): xwan "drive" : axaxwan "drive towards, drive near to"
86
CLASS A - intransitive monovalent verbs Structure of the verbal complex: Subject-V (= the single macrorole - V) a) k'wa-n "to go" (dynamic verb) I. Present 1. sg. s-aw-k'wa "I go" 2. sg. w-aw-k'wa "you go" 3. sg. m-k'wa "he/she/it goes" 1. pl. d-aw-k'wa "we go"
70
87
2. pl. f-aw-k'wa "you go" 3. pl. m-k'wa-(xa) "they go" Cp. '-r m-k'wa "the man goes" II. Preterite 1. sg. sk'w "I went" 2. sg. wk'w "you went" 3. sg. k'w "he/she/it went" 1. pl. dk'w "we went" 2. pl. fk'w "you went" 3. pl. k'w "they went" III. Future 1. sg. sk'wan "I will go" 2. sg. wk'wan "you will go" 3. sg. k'wan "he/she/it will go" 1. pl. dk'wan "we will go" 2. pl. fk'wan "you will go" 3. pl. k'wan "they will go" b) sn "sit" (static verb) I. Present 1. sg. ss 2. sg. ws 3. sg. s 1. pl. ds 2. pl. fs 3. pl. s II Preterite ss ws s ds fs s III. Future ssn wsn sn dsn fsn sn
CLASS B - intransitive bivalent verbs Structure of the verbal complex: Subject-Object-V (= the single macrorole - nonmacrorole core argument - V) wa-n "to hit"; an "to wait for" I. Present s-b-aw-wa "I hit you (sg.)" s-f-aw-wa "I hit you (pl.)" s-aw-wa (sawwa) "I hit him/her" s-y-wa "I hit them" w-q'a-s-aw-wa "you hit me"
88
q'a-s-aw-wa "he/she hits me" q'a-s-aw-wa-xa "they hit me" y-aw-wa "he/she hits him" y-wa "he/she hits them" y-aw-wa-xa "they hit him" y-wa-xa "they hit them" '-r q'a-s-aw-wa "the man is hitting me"; -m s -aw-wa "I am hitting a horse" (nominative construction) II. Preterite s-n-aw--- "I waited for you (sg.)" s-va--- "I waited for you (pl.)" s-ya--- "I waited for him" s-ya--- "I waited for them" w-q'-za--- "You (sg.) waited for me" w-q'-da--- "You waited for us" w-ya--- "You waited for him" w-ya--- "You waited for them" w-q'-za--- "You waited for me" w-q'-da--- "You waited for us" q'-za--- "He waited for me" q'-wa--- "He waited for you (sg.)" q'-va--- "He waited for you (pl.)" ya--- "He waited for him" ya--- "He waited for them" d-n-wa--- "We waited for you (sg.)" d-va--- "We waited for you (pl.)" d-ya--- "We waited for him" d-ya--- "We waited for them" f-q'-za--- "You (pl.) waited for me" f-q'-da--- "You (pl.) waited for us" f-ya--- "You (pl.) waited for him" f-ya--- "You (pl.) waited for them" q'-za--- "They waited for me" q'-da--- "They waited for us" q'-wa--- "They waited for you (sg.)" q'-va--- "They waited for you (pl.)" ya--- "They waited for him" ya--- "They waited for them" CLASS C - transitive bivalent verbs Structure of the verbal complex: Object-Subject -V (= the lowest ranking macrorole, Undergoer - the other macrorole, Actor - V) w-n "to see" w-z-aw-w "I see you"
89
s-aw-w "I see him" s-aw-w-xa "I see them" s-b-aw-w < *s-w-aw-w "you (sg.) see me" w-aw-w "you (sg.) see him" s-ya-w "he/she sees me" w-ya- w "he/she sees you (sg.)" w-d-aw-w "we see you (sg.)" f-d-aw-w "we see you (pl.)" d--w "we see them" s-v-aw-w "you (pl.) see me" s-v-aw-w "you (pl.) see me" f-aw-w "you (pl.) see him" d-v-aw-w "you (pl.) see us" f-aw-w-(xa) "you (pl.) see them" ya-w "he/she sees him" ya-w-(xa) "he/she sees them" s--w "they see me" w--w "they see you" d--w "they see us" f--w "they see you (pl.)" y--w-(xa) "they see them" y--w "they see him" '-m syw "the man sees me" -r saww "I see the horse" According to C. Paris, verbs of this class do not take the prefix -(a)w- in the 3rd person (Actor) present tense, cf. ya-w-wa "he is hitting him" (B) in contrast with ya-lw "he sees him" (C).
CLASS D transitive trivalent verbs Structure of the verbal complex: Object-Indirect Object-Subject-V (= the lowest ranking macrorole, Undergoer - non-macrorole core argument - the other macrorole, Actor) t-n "to give" w-y-s-t [wzot] "I give you to him" w-y-s-t [wazot] "I give you to them" q'-w-s-t [q'zot]"I give him to you" q'-w-s-t-xa "I give them to you" w-q'a-s-y-t "he gives you to me" w-q'a-s--t "they give you to me" s-r-y-t [sareyt] "he gives me to him" s--ry-t "he gives me to them" y-r-y-t [ireyt] "he gives him to him" y--ry-t [yareyt] "he gives him to them"
90
y-r-y-t-xa "he gives them to him" y-r--t (yrat) "they give him to him" tx'-r q'a-w-s-t (q'wzot) "I give you the letter"; -c'xw-m w-y-s-t (wzot) "I give you to this man"
CLASS E - causatives (valency increases by one in relation to the basic verb; transitive construction) Structure of the verbal complex: (Object-Indirect Object)-Subject-Causer-V t-n "to give"; k'wa-n "to go"; the causative prefix is ay-r-t "he gives it to him" : y-r-s-a-t "I make him give it to him" y-r-y-a-t [irreyt] "he makes him give it to him" y-r--a-t [irrt] "they make him give it to him" w-s-a-k'wa [wzok'wa] "I make you go" = "I send them" s-a-k'wa [sok'wa] "I make him go" s-a-k'wa-xa [so-k'waxa] "I make them go"
CLASS F verbs derived with some prefixes, e. g. tay- "on"; intransitive verbs Structure of the verbal complex: Subject-Object-Pref.-V fa-n "to fall" s-q'a-p-tay-fa (p < w) "I fall on you" s-tay-fa "I fall on him" s--tay-fa "I fall on them" q'a-p-tay-fa "he falls on you" q'a-p-tay-fa-x(a) "they fall on you" tay-fa "he falls on him" nnaw-r q'-tay-fa "the child falls on him" '-m s-tay-fa "I fall on the man" CLASS G verbs derived with some prefixes which are placed between two personal markers, e. g. p- "all the way, completely"; transitive verbs. Structure of the verbal complex: Object-Pref.-Subject-V wp''-n "to cut" w-p-s-wp'' "I cut you all the way"
91
p-s-wp'' "I cut him all the way" p-s-wp''-xa "I cut them all the way" s-p--wp'' "they cut me all the way" p--qp''-xa "they cut them all the way" -r p-s-wp'' "I cut the man"; -m s-p-y-wp'' "the man cuts me" CLASS H verbs derived with some directional/local prefixes, e. g. ty- (tay-) "on"; transitive verbs. Structure of the verbal complex: Object-Subject-Pref.-V x-n "to lift" w-q'a-t-tay-s-x(') [wq'ttezox''] "I lift you from us" w-q'a-tay-s-x "I lift you from him" w--q'a-tay-s-x "I lift you from them" s-p-tr-ay-x "he lifts me from you" ha-r q'a-p-tay-s-x "I lift the dog from you" nf-m w-q'-tay-s-x "I lift you from the rock"
92
WORD FORMATION
In Kabardian words can be formed by derivation (adding suffixes and prefixes), but also by combining lexical morphemes into compounds.
COMPOUNDS Like other Abkhaz-Adyghean languages, Kabardian forms words of a more complex, abstract meaning by joining two or more (usually monosyllabic) words of a simpler, concrete meaning. Compounds with nouns denoting body parts and organs such as "heart" are especially common. Guessing the meaning of a compound is quite frequently not a simple task: na-f "eye-rotten" = "blind" pa-s-a "nose-sit-on" = "early" na-p'c' "eye-lie" = "false" na-ps "eye-water" = "tear" na-f' "eye-good" = "goodness" bza-gw "tongue-heart" = "tongue" (as an organ of speech) mf'a-gw "fire-heart" = "train" ?a-pa "hand-nose" = "finger (on the hand)" ha-dza "barley-tooth" = "grain" thak'wma-'h "ear-long" = "rabbit" '-la "new-meat" = "young man, boy" da-xw "together-be born" = "brother (with respect to sister)" dw- "thief-old" = "wolf" da-na "father-mother" = "parents" '-fz "man-old-woman-old" = "grandparents" faw-w "honey-salt" = "sugar" maz-dad "forest-hen" = "pheasant" x-qa "sea-pig" = "dolphin" wna-c'a "house-name" = "surname" xa-wa "eat-time" = "lunch" -da "earth-grease" = "petroleum" a-b "night-summit" = "deep night" a-ps "milk-water" = "sap (of plants)" hada-ma "corpse-smell" = "smell of a corpse" -dw "horse-thief" = "horse-thief" dw- "thief-old" = "wolf" '-k'wa "man-go" = "messenger" faw-w "honey-salt" = "sugar" da-dz "bean-throw" = "fortune-teller" As can be seen from the examples, there are compounds in which both parts are nouns (da-na "parents"), compounds in which nouns are combined with adjectives (na-f "blind") and compounds in which nominal words or adpositions are combined with verbs (pa-s-a "early"). In most cases, the meaning of the compound can be both
93
nominal and adjectival, which is a consequence of a poor syntactical differentiation between nouns and adjectives in Kabardian. In the examples above only two words were joined into a compound, but many Kabardian compounds consist of more than two parts. Compounding is almost a recursive process in Kabardian; using the elements ' "man", "old", f' "good", xwa "big" and k'wa "to go" the following compounds can be formed 71: '- "old man" '-k'wa "messenger" '-f' "good man, good-natured man" '--f' "good old man" '-k'wa-f' "good messenger" '-k'wa--f' "good old messenger" '--f'-xwa "big good old man" When a noun is modified in a double possessive relation (according to the formula X of Y of Z), the first possessive relation is expressed with a compound, e. g. A ay Ada- y q'rw-r Adyghean-blood poss. power-NOM "The power of Adyghean blood" Some compounds retain two accents. They are often built with rhyming morphemes (German Reimbildungen), or they contain fully reduplicated morphemes. Such compounds usually have intensive or copulative meaning (the Sanskrit dvandva-type): yaxa-yafa "eating-drinking" = "a feast" pq'na-pq'naw "in little pieces" natx-patx "beautiful" (of a girl) q'aa-naa "here and there, in a zigzag manner" awa-p'awa "jumping, bouncing" NOMINAL SUFFIXES -ay (suffix for the formation of tree names): day "walnut tree": da "walnut"; ay "oak": "tree" - (suffix denoting place/dwelling): ha "dog house": ha "dog"; a "barn": "horse" -yay (diminutive suffix): dadyay "chicken" : dad "hen" -a (suffix for abstract nouns): 'a "manhood, manliness" : ' "man" -k'wa (suffix for names of professions): txk'wa "writer": txan "to write"
71
94
-w (suffix for nouns denoting participants of an action or members of a group): q'waw "fellow-villager": q'wa "village", laaw "co-worker, colleague": laan "to work". -fa (suffix meaning "a kind of"): wzfa "a kind of disease": wz "disease". Nouns with this suffix are probably originally nominal compounds with the noun fa "skin".
VERB FORMATION BY PREFIXING Kabardian verbs are often formed with prefixes of nominal origin. Many such prefixes (preverbs) are derived from nouns denoting body parts, and they usually add spatial meaning to the verb's original meaning (see the section on directionality): na-k'wa-n "to go from there" (cf. na "eye", k'wan "to go") da--n "to lie in something" (cf. n "to lie") -?an "to be in something": r qlam -?-- "he was in town" (cf. ?an "to be, to have") In the case of Kabardian local prefixes it is difficult to decide whether they belong to word formation or to the verb morphology. They express meanings which are in English and other European languages usually expressed by local prepositions, cf. the following examples: y y a bzw-r wna-m bla-at-- sparrow-NOM house-ERG by-fly-pret.-af. "The sparrow flew past the house" (the prefix bla- denotes movement past or by something) eI w -m st tray-'a tree-ERG hoar-frost on-do "The hoar-frost covers the tree" (the prefix tr(ay)- denotes movement onto the surface of something) However, some local prefixes can correspond to Croatian verbal prefixes: aa Ia w q' dma-r -m rgwarw gwa-'a--- branch-NOM tree-ERG again at-to go-back-pret.-af. "The branch adhered (in growing) to the tree again" Croatian: "grana je opet prirasla stablu" (the prefix gwa- denotes connecting with something, cf. gw "heart") e ay e
95
Byard -m zapaw tay-s- B. horse-ERG well on-sit-af. "Berd sits on the horse well (correctly)" (= "Berd rides well") From the typological point of view, local prefixes of the Kabardian verb are not that unusual, since these kind of prefixes exist in European languages as well, cf. the almost synonymous expressions in Croatian skoiti preko ograde (''to jump over the fence'', with a preposition) and preskoiti ogradu (''to jump the fence'', with a local prefix on the verb). However, though both these strategies of expressing spatial relationships exist in Kabardian, verbal prefixes are much more frequent in this language than are local postpositions.
VERBAL SUFFIXES A) Several suffixes affect the valence of verbs: The suffix -'- is used to turn intransitive monovalent verbs into intransitive bivalent verbs: 'an "to die": y-'-'-n "to die of something" Suffixes -'(a) and -x() also affect the valence of a verb, but not its transitivity: k'wan "to go": eII ya-k'wa-'a-n 3sg.-go-suff.-inf. "to approach something" an "to run": a a x -r -b y-aw-a-x 3sg.-NOM 3sg.-ERG 3sg.-pres.-run-suff. "he runs away from this" (intransitive) Both of the aforementioned suffixes (a) and x additionally seem to have directional meaning: yaa-n "run" : yaa-'a-n "run towards (someone or something)"; hn "carry": ya-ha-x-n "carry down". B) Other suffixes have adverbial meaning, and can perhaps be treated as incorporated adverbs: The suffix -xw('a) is added to a participial form of the verb to express that the action of the verb is simultaneous with the action of the finite verb (Abitov (ed.) 1957: 99): y II, e
96
wa-r p-'at-xw'a, pa-m z-ya-a-psaxw axe-NOM 2sg.-lift-suff. wood-ERG refl.-3sg.-caus.-relax "While you're lifting the axe, the wood is relaxing" (a proverb) e w m-psaa-x dayla-r-y gwbza- neg.-speak-suff. fool-NOM-and smart-af. "A fool is also smart while he is not speaking" (a proverb) y, a w w s-hawq' a-x , sy -r q'wad-- 1sg.-sleep-suff. my horse-NOM disappear-pret.-af. "While I was sleeping, my horse disappeared" I, y, I Nt'a, tna-m-ra mal-m-ra yes calf-ERG-and sheep-ERG-and q'a-v-wat--xw dir.-2pl.-find-back-until
s--s-n-q'm sa-ry ?waxw-na-w I-and work-without-ADV 1sg.-dir.-sit-fut.-neg. "Yes, and until you find the calf and the sheep again, we will not sit idly" As the last two examples show, the action of both the finite verb and the participle can be be either punctual or durative. Accordingly, the suffix of simultaneity can sometimes be translated as "while", and sometimes as "until". The suffix -'a is used to indicate that the action of the verb has been already completed; it can usually be translated as "already" (Abitov (ed.) 1957: 117): ye y yxIa y wytayl-m ynstytwt-r q'-wx--'a- our teacher-ERG university-NOM dir.-finish-pret.-suff.-af. "Our teacher has already finished university" The suffix -pa- has perfectivizing meaning; it seems to indicate that the action has been fully accomplished: laa-n "work" : laa-pa-n "accomplish"; xa-n "eat" : xa-pa-n "eat up" The suffixes -(a)- and -q'wa mean something like "too much, excessively": xa-n "eat": xa--an "eat too much, eat excessively" psaa-n "talk": psaaq'wa-n "talk too much" The suffix -xxa- is best translated as "at all"; it reinforces the negation:
97
s-k'wa-n-q'm "I will not go": s-k'wa-xxa-n-q'm "I will absolutely not go, "I will not go at all" The suffix -x(a)- means "already": s-hazr- "I am ready, I am prepared": s-hazr-xa- "I am already prepared"
98
SYNTAX
NOUN PHRASES (NP) Possessive constructions follow the HM (head-marking) pattern. "A man's house" is thus literally "A man his-house": I e ?ana-m y-taypwa table-ERG 3sg.poss.-cover "the cover of the table, tablecloth" ha-m y-pa-r dog-ERG poss.3sg.-nose-NOM "dog's nose, dog nose" In the contemporary standard language the possession marker is sometimes written separately, as an independent word: a e- aaa Nl Q'abardyay-Baq'ar-m y q'l-ha- Nalchik Kabardino-Balkaria-ERG poss.3sg. city-head-af. "Nalchik is the capital city of Kabardino-Balkaria" Kabardian, unlike Abkhaz and Adyghean, does not distinguish alienable and inalienable possession, but there are traces of this opposition in the Besleney dialect of Kabardian 72. Demonstrative pronouns precede the noun they refer to, and sometimes they merge with it as prefixes (see above). They can be separated from the noun by a participle, which is the equivalent of a relative clause in English: a e Iay m fa q'a-f-h- amad-r Daba yaz Thaalad xw-y-'--wa this you dir.-2pl.-bring-pret. scythe-NOM D. personally T.ver.-3sg.-make-pret.-ger. "This scythe you brought was made by Daba personally for Thagoled" A possessive pronoun can occur between a demonstrative pronoun and a noun: m sy sd-m this 1st.poss. anvil-ERG "this anvil of mine", lit. "this my anvil"
See Kumaxov 1984: 87-93, Balkarov 1959. It seems that Kabardian had the (Common Adyghean) opposition between alienable and inalienable possession, but it lost it.
72
99
Qualitative adjectives (which can be used as stative verbs) follow the head noun, while relational adjectives (usually nouns used attributively) precede it: a ax : pa dxa : girl beautiful "beautiful girl" y pa wna wood house "wooden house"
ADJECTIVE PHRASES Adjectives can be heads of nominal complements, which regularly follow them: a pa nq'- yz xw wood glass-old full sour.milk "A wooden glass full of sour milk" I found no examples of the predicative use of adjective phrases.
SYNTACTIC STRUCTURE OF THE SENTENCE Kabardian distinguishes three constructions 73: nominative, ergative and indefinite. In the nominative construction the subject (the only macrorole argument) is in the nominative and the verb is in the intransitive form. If there is an (indirect) object (ie. if the verb is semantically bivalent), the second argument is in the ergative: e ax a Satanyay dxa-r tad-- S. beautiful-NOM get up-pret.-af. "Beautiful Satanaya got up" ye x waynyk-r tx-m y-aw-da student-NOM book-ERG 3sg.-pres.-read "The student is reading a book" In the ergative construction the subject (the highest ranking macrorole argument) is in the ergative, and the verb is transitive. The direct object is in the nominative: x ax aa yn-xa-m nrt-xa-r q'--awz-- I.-pl.-ERG Nart-pl.-NOM dir.-3pl.-crush-pret.-af. "The Ini (giants) crushed the Narts"
The so-called "dative" or "inverse" construction (Kardanov 1957) is actually a nominative construction.
73
100
The causative verb is always transitive, so the ergative construction is used with a causative verb: I aI fz-m '-r y--k'wa woman-ERG man-NOM 3sg-caus.-go "The woman sends a man" In the indefinite construction the subject and the object have no case endings. This construction is common in proverbs, in the oral tradition; the verb's arguments are indefinite: Iaa ma dw f'a-balca- bear wolf advers.-hairy-af. "To the bear the wolf is hairy" (a proverb) The verb is stative, and thus intransitive, in this construction.
NOMINAL SENTENCE Kabardian has no copula, the nominal predicate is juxtaposed to the subject: I A sy c'-r Alym 1sg.-poss. name-NOM A. "My name is Alim" Adjectives and common nouns in a sentence with a nominal predicate take the affirmative suffix (thus becoming stative verbs): a Mza-r yz- moon-NOM full-af. "The moon is full" M-r maz- this-NOM forest-af. "This is a forest"
EQUI-NP DELETION In a coordinated construction, when two verbs share the same argument, this argument can be omitted if the agent is the first argument (agent) of a transitive verb or the only argument of an intransitive verb (ie. the "subject" in the same sense as in English):
101
I eaa I a '-m fz-r q'-ya-w-- y'y q'a--- man-ERG woman-NOM dir.-3sg.-see-pret.-af. and dir.-go-pret.-af. "The man saw the woman and left" Ia Ia 'la-m dabz-r y-w-ry k'wa-- young.man-ERG girl-NOM 3sg.-see-and leave-pret.-af. "The young man saw the girl and left" Ia II , a w 'la c'k' -r q'a-s-ry, dabz-r q'-y-w-- boy little-NOM dir.-come-and girl-NOM dir.-3sg.-see-pret.-af. "The boy came and saw the girl" Ia II II Iy a a 'la c'k'w-m dabz c'k'w-m q'a-k'wa-nw psa y-r-y-t-- boy little-ERG girl little-ERG dir.-come-fut. word 3sg-3sg-3sg-give-pret-af. "The boy promised the girl he would come" (lit. "gave the girl his word he would come"). This shows that Kabardian is not a syntactically ergative language, such as, e. g., Dyirbal or Chukchi. As can be seen from the examples above, when two verbs differing in transitivity are coordinated, the shared subject is in the case assigned to it by the nearest verb (the ergative if this is the transitive verb, the nominative if this is the intransitive verb). However, there seem to be cases when the shared argument is in the ergative case, although the intransitive verb is closer to the shared argument 74. This matter requires further research.
SUBORDINATION Most structures, which are equivalent to subordinate sentences in the European languages, are in Kabardian and other West Caucasian expressed by special verbal forms. These are typically infinitives, participles and gerunds: a a I Ia -r -b q'--xwa-k'w--m y?-- 3sg.-NOM 3sg.-ERG dir.-dir.-ver.-go-pret.-ERG say-pret.-af. "When he approached her, he spoke" aax a ha-r z----xa-m -aw-bna dog-NOM pref.-dir.-3pl.-caus.-eat-ERG dir.-pres.-to bark "The dog barks where he is not fed (where they do not feed him)" (a proverb)
74
102
Yaz Yamnay '-r himself Y. earth-NOM try-sa-nw dir.-sow-inf. "Yamine himself is plowing the ground (in order to) sow the seeds of Thagaled" I I? s-zar-thak'wma a-ry dana q'--f-'- 1sg.-part.-ear slow-and how dir.-dir.-2pl.-know-pret. "But how did you know my hearing was bad (lit. that I had slow ear)?" Infinite verbal forms may be modified by adverbial suffixes (see above) with spatial or temporal meaning: , mbdyay -t Sa q'a-z-aza-xw, I dir.-1sg.-return-until here dir.-sit "Sit here until I return!" An infinitive may be marked with instrumental case in the subordinate clause: I yI () Wa ?ha-na w-w-n-'a s-aw-na(r) you lot-without 2sg.-become-inf.-instr. 1sg.-pres.-be.afraid "I am afraid that you will be without a lot (inheritance)" A subordinate structure can also be expressed by a verbal noun (infinitive, or "masdar" according to some linguists) and a possessive pronoun (or prefix) denoting the subject: yxa x da d-wx-- dy-tx-n-r we 1pl.-finish-pret.-af. 1pl.poss.-write-inf.-NOM "We finished writing" or "We stopped writing" , ax ey w Sawsrq'wa-r y-aw-a, Badax dxa-m ya-pa-nw S. 3sg.-pres.-set.out B. beautiful-ERG 3sg.-see-inf. "Sosruko sets out to see beautiful Badah" With many verbs the person of one argument in the subordinate clause is necessarily the same as the person of one argument in the main clause (the so-called control constructions): ya-va Thaalad y lpa-w 3sg.-plow T. 3sg.poss. seed-ADV
103
I ay Ia dabz-m dagw k'wa-n psaw '-y-dz-- girl-ERG dance go-inf. early dir.-3sg.-throw-pret.-af. "The girl started going to dances early" In the previous example, the verb in the subordinate clause k'wan has got the same subject as the verb in the main clause 'adzan ("to start"). Which form the linked verb will take depends mostly on the type of matrix verb it is associated with. As a rule, verbs having obligatory control (i. e. verbs with obligatory co-reference between one argument of the matrix verb and one argument of the linked verb) take the infinitive, while other verbs take either the participle or the gerund (most can take both of these forms). In subordinate structures 75 the subordinated verb can carry the personal prefixes and the reflexive prefix: Ia x y 'la-m tx-r y-h-nw boy-ERG book-NOM 3sg.-to carry-inf. "The boy wanted to carry the book" ea xway-- want-pret.-af. Ia -y-?-- dir.-3sg.-say-pret.af.
Ia II II yy w w 'la c'k' -m dabz c'k' -r za-wa--nw boy little-ERG girl little-NOM refl.-hit-back-inf. "The boy told the little girl to hit herself"
a x y a sa -b tx q'-z-y-t-nw s-q'-y-a-w-- I he-ERG book dir.-1sg.-3sg.-give-fut. 1sg.-dir.-3sg.-caus.-hope-pret.-af. "He promised me he would give me the book." a I y ya Wa s-w-m f' ddaw s-q'-w-aw-w 2sg. 1sg.-see-ERG good much 1sg.-dir.-2sg.-pres.-see" "I see that you love me very much" (lit. "I see that you are the one who sees me well very much") The use of personal prefixes on infinitives and gerunds is sometimes optional. As can be seen from the preceding examples, in subordinate structures the main verb comes after the subordinate verb; this is in keeping with the general principle of Kabardian syntax, according to which the head of a construction is placed after the dependent:
The problem is that the difference between finite and non-finite forms in Kabardian cannot be easily defined and compared to the difference in Indo-European languages. Traditionally, some forms that can have personal endings (e. g. participles) are considered to be non-finite in Kabardian, and the form of the negation serves to distinguish finite from non-finite forms (Kumaxov & Vamling 1995); the negation m- characterizes the non-finite forms, and the negation -q'm the finite forms.
75
104
sa I
Constructions in which the subordinate clause is placed after the main clause are also possible, but they are marked: Ia Ia I 'la-m y-'-t dabz-r q'-zar-k'wa-n-r boy-ERG 3sg.-know-ant.pret. girl-NOM dir.-refl.-go-inf.-NOM "The boy knew that the girl would come." Many permutations of the word order are possible, but the subordinated structure cannot be "interrupted" by the main verb. There are also structures with subordinators, but they are stylistically marked and they seem to be developing under the influence of Russian (Kumaxov 1989: 348). Sentences with the complex conjunction stw p'am, st ha'a pp'ama76 "because, since" are of that type: yI a ay I, y a Iax Ia a I. Ydpstw'a r pxwadaw nam q'?wrydzarq'm, stwa pama 'laxam y' wa ?aq'm "For now it is not that important, since these young men haven't done much yet". Note also that the conditional sentences can be construed with the conjunction tma "if", rather than with the conditional mood of the verb (see above); the conjunction tma is originally the verb tn "be, find oneself" in the conditional mood: I yy I x,
76
It seems that these conjunctions are calques of the Russian poetomu, potomu to (see Kumaxov 1984: 150).
105
Maw Badnawq'wa y b'-r '-m q'-xa-f, this B. 3sg.poss. spear-shaft-NOM ground-ERG dir.-pull out-pot. Badaxw w-ry-psaw-w tma B. 2sg.-3sg.-woo-ger. if "You can (surely) pull out Badinoqo's spear-shaft from the ground, if you are wooing Badah" There are a few subordinators that developed from postpositions governing participles or infinitives. The subordinator ndara "since" is combined with the instrumental form of the participle, e.g. zar-k'wa ndara "since (the time that) he went". The temporal subordinator y pa "before" is actually composed of y "its" and pa "nose, front part"; the same syntagm can be used as a spatial postposition ("in front of"). yI, y x
p-d-wp'-n--y, ps-m xa-d-dza-n stick dir.-1pl.-cut.off--fut-af.-and water-ERG dir.-1pl.-throw-inf. y pa d-wbara-n- its front 1pl.-beat-fut.-af. "We'll cut off a stick and beat him before we throw him into water"
CASE ASSIGNMENT IN SUBORDINATE CLAUSES In complex sentences in which the verb of the main clause shares one of the arguments with the subordinate verb, this argument can be omitted in the subordinate clause, in accordance with the rule that Kabardian is not syntactically ergative (see above): Ia y ea 'la-m dabz-r y-w-nw xway-- boy-ERG girl-NOM 3sg.-see-fut. want.pret.-af. "The boy wanted to see the girl." Ia x y 'la-m tx-r y-h-nw boy-ERG book-NOM 3sg.-carry-fut. "The boy wanted to carry the book." ea xway-- want-pret.-af.
In these examples the main verb is intransitive (xwayn "to want"). However, nouns denoting the agent take the ergative suffix, and nouns denoting the patient of the action of the main verb are in the nominative. The reason for this is that case
106
assignment in the main clause in Kabardian can be determined by the role which the argument of the verb of the main clause has in the subordinate clause; if the shared argument of the main and the subordinate clause is the doer of the action (or the highest ranking macrorole) of a transitive verb 77 in the subordinate clause, then this argument is marked by the ergative case, even though the verb in the main sentence is intransitive. If, on the other hand, this argument is the patient or the only argument of an intransitive verb in the subordinate clause (e. g. yawan "to hit"), it will be marked by the nominative case: Ia eyy ea 'la-r dabz-m ya-wa-nw xway-t boy-NOM girl-ERG 3.sg.-hit-fut. want-ant.pret. "The boy wanted to hit the girl." The actual rules for case assignment in subordinate control constructions are more complex and cannot be fully explained here, since they partly depend on the information structure of the sentence (i.e. on the relation between the topic and the focus), and on the word order in the sentence (see Kumaxov & Vamling 1996 and Matasovi 2007). It seems that in the speech of younger speakers (perhaps under the influence of Russian?) constructions in which the verb of the subordinate clause assigns the case to the argument which it shares with the verb in the main clause are becoming increasingly rare.
MODAL VERBS Modal verbs such as a'n, xwzaf'a'n "be able, can", bawrn "must" are used as matrix verbs taking linked clauses as complements; their complements can be infinitives or verbal nouns (masdar), but, as a rule, not gerunds or participles (Kumaxov & Vamling 1998. 265ff.): Ia a y I sa s-a'-- wna-r s-'-n I 1sg.-can-pret.-af. that house-NOM 1sg.-do-inf. "I was able to build that house" IIy y I y-'n-r da t-xwzaf'a'-nw-q'm wna-m we 1pl.-can-fut.-neg. house-ERG 3sg.poss.-making-NOM "We will not be able to build the house" Note that the possessive prefix on 'n shows that it is a (verbal) noun; the noun wna "house" is in the ergative, which is the default case in the possessive noun phrase, and 'n is in the nominative case because the matrix verb is transitive.
In the sentence 'la-m tx-r y -h-nw xway-- the verb hn "to carry" is transitive, which can be seen by the order of personal prefixes, cf. e. g. w-z-aw-h-r "I carry you" (2sg.-1sg.-pres.-to carryaf.).
77
107
The "debitative modal" xwyayn is not inflected for person; it should be understood as meaning "it is necessary that X", taking whole clauses as complements. In this way it is differentiated from the verb xwyayn "want", which has the full set of personal prefixes, but also takes clausal complements (in obligatory control constructions): III I a I x e
yas-m y k'wac'-'a a qasxw-'a sy year-ERG 3sg.poss. duration-INST night every-INST 1sg.poss. -m horse-ERG z maq'w ?ata-ra 1 hay stack-and y-x-n 3sg.-eat-inf. xwyay- must-af.
"During the year, my horse must eat one stack of hay and one measure of corn every night". I a xeI e xwyay-r Dypa'a far- la-m y xayy'a w-n from.now.on 2pl.-af. village-ERG their judge become-inf. must-NOM "From now on, it is you who must become judges of the village"
PHASAL VERBS Like modal verbs, phasal verbs also take clausal complements, and require coreference between the shared arguments (the actor of the matrix verb must be coreferent with the subject of the linked, embedded verb): ay, a a , yxa x y Iy e x
wara, nq'a mza-r q'-y-ha-ry, wdz-r q'a-'-w -xwaya-m but May month-NOM dir.-3sg.-come-and grass-NOM dir.-grow.-ger. dir.-begin-grow "But the month of May came, and the grass began to grow" sa s-wx-- sy-tx-r I 1sg.-finish-pret.-af. poss.1sg.-book-NOM "I finished writing my book" s-tx-n 1sg.-write-inf.
REPORTED SPEECH Clauses containing reported speech are embedded in the main clause:
108
a "y y" "a eaI, -b "dwa w--t" he-ERG how 2sg.-dir.-stand "He asked me 'how are you?'"
Ia e"
eI
q'-z--y-?-- dir.-1sg.-dir.-3sg.-say-pret.-af.
"dta s-xw-ya-a-', s-xw-ya-wat" q'-z-ay?a sword 1sg.-ver.-3sg.-caus.-make horse 1sg.-ver.3sg.-find dir.-1sg.-say "Have a sword made for me, find a horse for me he tells me." / "He tells me to make him a sword, to find him a horse." Reported speech can also be expressed by a subordinate construction with a participle or a gerund: aI Ia I w -y-?-- [maz zar-k'wa-r] k' a-m hunter-ERG pref.-3sg.-say-pret.-af. [forest part.-go-NOM] "The hunter said he was going to the forest." Ia ay w w w fz-m q'-y-? ax -- [y-p -r la-w] woman-ERG dir.-3sg.-say-pret.-af. 3sg.poss.-daughter-NOM work-ger. "The woman said her daughter was working." The difference between subordinating reported speech by means of a participle and a gerund seems to lie in the level of commitment to the truthfulness of the speech. The use of gerund seems to imply less commitment by the speaker (Jakovlev 1948: 52f.): a Iay y Ia w wa q'-z-a-p-?-- -r q'a-k' -wa zar-t-r he-NOM dir.-come-ger. part.-be-NOM you dir.-1sg.-pref.-2sg.-say-pret.-af. "You told me that he came" a Iay y y Ia -r q'a-k'w-wa t-w wa q'-z-a-p-?-- he-NOM dir.-come-ger. be-ger. you dir-1sg.-pref.-2sg.-say-pret.-af. "You told me that he came (but this need not be so)"
AGREEMENT There is no category of gender, and no number and definiteness agreement within the noun phrase (NP), as was shown in the chapter on nouns. Verbs agree in person with the subject, object, and indirect object (if we can talk about person agreement on the verb), and agreement in number is very limited. The verbal suffix for the plural of the subject can be left out if the subject is placed immediately before the verb: Ix aI(x)
109
'-xa-r m-k'wa-(xa) man-pl-NOM 3sg-go-(pl.) "People go" According to C. Paris (1969: 161), the suffix for the plural of the subject is compulsory only if the subject is separated from the verb by other words. This is more or less confirmed by the examples I was able to elicit. Transitive verbs agree in person and number with the subject, i. e. with the doer of the action (marked for the ergative): ax ee yx Nrt-xa-m y yahayafar y-wx-t N.-pl.-ERG. 3pl.poss. peace 3pl.begin-ant.pret. "The Narts restored peace"
NEGATIVE CONCORD Kabardian is a language with negative concord. If there is a negated verb in the sentence, the negative (and not the indefinite) pronoun is used, as in Croatian, for example: Iy aI Sawsrq'wa zry -y-m-?a-w m-k'wa S. nothing dir.-3sg.-neg.-say-ger. 3sg.pres.-to go "Sosruko goes without saying anything" Croatian: Sosruko ide nita ne govorei Note that there is no negative concord in (Standard) English: Sosruko goes without saying anything/*nothing.
PRO-DROP Since the information about the grammatical relations within a sentence is codified in the verbal complex, all other syntactical elements can be left out. So instead of sa r zaza "I filled it" one can say just zaza (where 0- is the prefix for 3sg., z- the prefix for 1sg. (< s), and the verb is azaan "to fill"). Compare also: sa mva s-aw-dz "I throw a rock" : s-aw-dz-r "I throw it" I rock (3sg.)-1sg.-pres.-throw (3sg.)-1sg.-throw-af.
110
RELATIVE CLAUSES In Kabardian, the translational equivalents of relative clauses are usually expressed by participial constructions (in square brackets): a y xy ea -r [maw t stawra-m] a-xaa-nw xway-t he-NOM near-by stand(part.) guard-ERG dir.-throw oneself-inf. try-ant.pret. "He tried to throw himself on the guard who was standing near-by."; a e aI ay [-r it-NOM nrt-' Nart-hero a a x x ,
z-a-yay-f nrt-r] nrt xsa-m x--a-rt , part.-caus-move-pot. Nart-NOM Nart council-ERG dir.-3pl.-lead-impf. w--w y-b-rt become-pret.-ger. 3pl.-consider-impf.
"The Nart who was able to move it (sc. Hlapsh's rock) they used to take to the Nart council (and) they considered him to have become Nart hero." y ax aa E Ia [Thaalad xw lpaw Nrt-xa q'--r-y-t--r] T millet seed N-pl. dir.-pl.-3sg.-3sg.-give-pret.-NOM Yamna y-f'-y-h-- Y. 3pl.-advers.-3sg.-carry-pret.-af. "The millet seed, that Thagalad gave the Narts, Yamina stole (it) from them." The head of the relative clause usually follows it (exx. 1, 2), but it can also be inserted into it (3). There are no real relative pronouns; however, (under the influence of Russian?) interrogative pronouns can be used with a relative function: x a, a x xat m-a-m-y, -r xa-r-q'm who no-work-ERG-and this-NOM eat-pres.-neg. "Who doesn't work, doesn't eat" (a proverb)
COORDINATION Coordinated clauses are linked asyndetically by clitics/suffixes (e. g. ry "and", see above): a -r I I y?a-ry na-'a-r eyIa ap ya-wp'--
111
that-NOM say-and the youngest-NOM "The youngest one said that and asked Hlapsh"
3sg.-ask-pret.-af.
Most likely under the influence of Russian, conjunctions which are separate, independent words have also developed, e. g. wa "but", ya "or", tma "if": a ea ay Ia sa -r q'-ya-z-d-t wa q'a-k'w--q'm I he-NOM dir.-3sg.-1sg.-invite-ant.pret. but dir.-come-pret.-neg. "I invited him, but he didn't come" e yI e yI ya w-'-n ya w-'a-n or 2sg.man-inf. or 2sg.-die-inf. "Either be a man, or die" (a proverb)
THE ORDER OF SYNTACTIC ELEMENTS Like most Caucasian languages 78, Kabardian is basically an SOV language, though other (stylistically marked) word orders appear as well: a a eyya Sawsrq'wa wagwna bzda-m tayww-- S. journey bad-ERG set off-pret.-af. "Sosruko set off for his difficult journey " wagwna bzdam tayww Sawsrq'wa a Iax a b sa 'la-xa-m s--y-xwaz-- there 1sg. boy-pl.-ERG 1sg.-dir.-3pl.-meet-pret.-af. "I met the boys there" If the object of this sentence is in focus (i.e. the stress is on boys), the word order changes: Iax a xa sa 'la-xa-m b s-y-xa--xwaz-- "I met the boys there" (pay attention also to the change in the order of the deictic marker and the person marker -y-xa-). Also, if the subject of a transitive verb denoting an action is inanimate, and the object animate, the unmarked word order is OSV: Ia 'la-r
78
xa ps-m y-txal--
112
boy-NOM water-ERG 3sg.-strangle-pret.-af. "The boy drowned" (literally: "the water strangled the young man") The same OSV order obtains in embedded, subordinate clauses, with infinite verbal forms: I Iy a Dad dad'a y-a-?w--w f-aw-- chicken egg 3sg.-caus.-smart-back-ger. 2pl.-see-pret.-af. "You saw how the egg makes the chicken smart" Interrogative pronouns and other interrogative words stand in the place of the constituent which they substitute (i. e. Kabardian is a language of the Wh-in-situ type) 79: x xya xat-m -r q'a-z-x-w- who-ERG meat-NOM dir.-refl.-eat-inter.-pret. "Who ate the meat?" I exya '-m st-r q'a-y-x-w- man-ERG what-NOM dir.-3sg.-eat-inter.-pret. "What did the man eat?" The order of the arguments in front of the verb is the mirror image of the order of personal prefixes in the verbal complex in a transitive construction; in an intransitive construction the order of the arguments is the same as the order of personal prefixes: y yy wa sa w-q'a-z-aw-wa you I 2sg.-dir.-1sg.-pres.-hit "You hit me" (intransitive construction) y ya sa wa w-s-w I you 2sg.-1sg.-see "I see you" (transitive construction) The rule for the relation between verbal arguments and person markers with transitive verbs can be represented in this way:
According to Kumaxov (ed.) 2006, I: 496 the unmarked position of question words is at the beginning of the sentence, e. g. Dpa w-q'a-k'wa--nw "When will you be back?".
79
113
verbal complex
TOPICALIZATION/FOCALIZATION The relation between new and old information in the sentence is expressed syntactically in Kabardian, i.e. by the order of syntactic categories in the sentence. Focalization is a process by which the new, unexpected information in the sentence (rhema, what is in focus) is emphasised. The focalized element usually comes at the beginning of the sentence: x Ia y xat y-'- wna-r who 3sg.-do-pret. house-NOM "Who built the house?" aI Ia y p'a-m y-'-- wna-r carpenter-ERG 3sg.-do-pret.-af. house-NOM "The carpenter built the house." In the previous example the word answering the question "who" is in focus, the noun p'a. The SVO order at the same time denotes that the topic of the sentence is at the end (the noun wna) 80. If the question is "what did the carpenter do?", i.e. if wna "house" is not the topic of the sentence, then the noun wna will not be at the end of the sentence, but in front of the verb (i.e., we have the unmarked SOV order): aI I st p'a-m y-'--r what carpenter-ERG 3sg.-do-pret.-NOM "What did the carpenter do?" aI y Ia p'a-m wna y-'-- carpenter-ERG house 3sg.-do-pret.-af. "The carpenter built a house." Wh-words, which are focal as a rule, must be placed before the verb: 81 x
80 81
See Kumaxov & Vamling 2006: 107 ff. See Kumaxov & Vamling 2006: 89.
114
xat y-'-ra wna-r? who 3sg.-do-inter. house-NOM "Who is building the house?" *wnar xat y'ra? *wnar y'ra xat? *y'ra xat wnar? *y'ra wnar xat? The general rule for topicalization/focalization seems to be the following: The focalized element ("rhema") must be placed in front of the verb. The focalized element may be sentence-final, but then it has to be marked by the copula/affirmative marker -: a x ya -b tx-r z-r-y-t--r Mwrt- 3sg.-ERG book-NOM part.-3sg.-3sg.-give-pret.-NOM Murat-af. "To Murat did he give the book", or "It was Murat that he gave the book to". e aa x w m-r yaz-r q'-x --r fy xakw-r- this-NOM himself-NOM dir.-be.born-pret.-NOM your(pl.) country-NOM-af. "The place where he himself was born is your country" x y psa-r z-xa--r y -r- 3sg. soul-NOM part.-dir.-lie-NOM 3sg.poss. horse-NOM-af. "That in which his soul lies is his horse" Aside from the copula/affirmative marker , the suffixes -t (for imperfect), -q'a, -ra (interrogative suffixes) can also occur as focus markers: eaI x e w yaadk' a-q'a tx-r fz-m ya-z-t--r teacher-focus(inter.) book-NOM woman-ERG 3sg.-part.-give-pret.-NOM "The teacher gave the book to the woman" ("It was the teacher that gave the book to the woman") In all focalization constructions the main verb is replaced by the participle. These constructions are typologically similar to the Insular Celtic constructions in which the copula is used for focalization, or to French constructions of the type c'est X qui...
115
116
These are expressions appropriate for men, but not for women: I, I, I, yI ?aw, ?a, ?aw, wa? (these have a similar function as verbal crutches in the language of women) x I txa saw?wa "I swear to god"; txa y c''a saw?a "I swear by god's name" Wallahy "god, by god". Aside from the special characteristics of the idioms used by men and women, there are also special varieties of Kabardian used, for example, by hunters, or young people when conversing without the presence of older people. Some topics are considered inappropriate in the conversation between male speakers (e.g. talking about women and children). Due to a pronounced code of honour insults are not taken lightly, so that verbal communication outside of the family is conducted very cautiously, in order not to offend the person you are talking to; the order of speaking is strictly fixed (young people always speak after older people). On the whole, communication in Kabardian leaves an impression of laconic expression and restraint.
117
THE LEXICON
The core layer of the Kabardian lexicon was inherited from the Proto-AbkhazAdyghean language; words belonging to this layer are mostly included in the core lexicon. These are nouns denoting body parts (ie. gw "heart" = Abkhaz a-gw, na "eye" = Abkh. a-la, fa "skin" = Abkh. a-cwa), kin terms (na "mother" = Abkh. an, da "father" = Ubykh tw, q'wa "son" = Ubykh. qwa), and some basic verbs (e. g. 'an "to know" = Abkh. a-c'ara) and adjectives (e. g. "old" = Abkh. a-w), etc. Culturally and historically important are common nouns belonging to the sphere of flora and fauna, e. g. the nouns denoting bear, fox, dog, cow, pig, fish, bee, millet, nut, and plum, as well as the names of the metals copper, gold, and tin. Words common to the Adyghean-Kabardian branch of the Abkhaz-Adyghean languages represent the next layer of the lexicon. Among them there is an especially large number of words belonging to the semantic spheres of agriculture 84 (e. g. Adyghean and Kabardian van "to plow", Adyg. cwbza, Kab. vbdza "plow", Adyg. and Kab. ha "barley", Adyg. ma, Kab. ma "millet (Panicum tiliaceum)", Adyg. kawc, Kab. gwadz "wheat"). The terminology from the sphere of farm animal breeding is also common, especially for the breeding of horses (), cf. Kabardian and Adyghean ar "stirrup", xk'wa "foal", Adyghean k'a, Kabardian ?a "little foal", Adyg. fra, Kab. xwra "a breed of thoroughbred Adyghean horses", etc. Loan-words from Turkish and Turkic languages very frequently belong to the sphere of trade, economy and technology, cf. sawm "ruble", myn "a thousand", stw "shop", tawp "cannon", wn "kettle", bb "duck", bwr "black pepper", barq' "flag".Many Farsisms (words of Persian origin) have entered Kabardian through Turkic languages, e. g. dyn "faith", bazar "market", pth "emperor", haw "air", etc. Aside from these recent borrowings, there are also old Iranian loan-words in Kabardian, which could have been borrowed from Scythian or Alanic (the ancestor language of the today's Ossetian) in the prehistoric period. Many such words were borrowed into other Caucasian languages; for example, Iranian *pasu "sheep" (Cf. Skr. pu, Lat. pecu) was borrowed into Abkhaz with the meaning "sheep" and into Georgian as pasi "price"; the same meaning is found in Kabardian wsa "price" 85. A typologically similar semantic development ("sheep" > "property" > "money") has been recorded in other languages, for example in Latin in the relation between pecu "sheep" and pecnia "money". Some Kabardian words are almost certainly (Indo-)Iranianisms, but because of the shortness of attested forms we cannot be entirely sure, e. g. a "hundred" (Avestan satm), a "goat" (Vedic aja-); some words might be even older Indo-European loan-words, e. g. k'rw "crane", (cf. Latin grs, Armenian krunk, Lithuanian gerv, etc.). A younger layer of loan-words are also Arabic loan-words, which penetrated Kabardian mostly through the language of the Kur'an. They belong to the religious and the ethical-philosophical sphere of the lexicon, e. g. lh "god, Alah", anat "heaven", gwanh "sin", shat "hour", sbr "quiet, serene", mhana "meaning,
84 85
118
sense", q'l "reason, mind", br "news", a "doubt", tzr "punishment", barat "abundance", nsp "happiness", nalt "curse, damnation", zamn "time", sabap "benefit", dwnyay "world", etc. These words are quite numerous in Kabardian and most of them are not perceived as borrowings any longer. Arabic roots occur in some compounds containing native elements, cp. e.g. swrat "picture": swrattayx "photographer" (cp. Kab. tayxn "take off, take away"). The name of Kabardia's capital, Nalchik (Kab. Nlk) contains the stem nl "horse-shoe", which comes from Arabic (na`l). Finally, the chronologically last layer or borrowings are Russian loan-words, which flooded the Kabardian language in the 20th century 86. Russian loan-words cut across all spheres of the lexicon except the core lexicon; an especially large number of them belong to the scientific-technological terminology and the administration terminology, e. g. nwka "science", myna "automobile", smawlayt "aeroplane", rayspwblyka "republic", raydaktawr "editor". It is interesting, however, that the borrowing of suffixes for the formation of abstract nouns did not occur, for example the Russian suffix -cija (> Kabardian -ca); this suffix occurs in Kabardian in words such as rayzawlywca "resolution", rayvawlywca "revolution", mayxanyzaca "mechanization", but it doesn't occur in any word with a Kabardian root. Unlike a few suffixes borrowed from Turkish (e. g. the suffixes -ly, -l < -li, cf. wwr-l "good, benevolent"), the Russian suffixes cannot be added to Kabardian roots, i.e. they haven't become productive in Kabardian 87. Aside from direct borrowings, there are also many Russian calques in Kabardian, e. g. txayda "reader" (Rus. itatel'), sbaza "hoover" (Rus. pylesos), '?a "refrigerator" (Rus. xolodil'nik), bzaana "linguistics" (Rus. jayzkoznanie), etc. Although Russianisms are in Kabardian often pronounced quite differently than in Russian, the official orthography (especially after World War II) in most cases prescribes an identical way of writing them as in Russian. In older Kabardian books the name "Russia" will be found as rsay, but today it is written Rawssyya (in Cyrillic Poccue), and the noun "bank", which is pronounced with the glottalized k' (bnk'), is written, like in Russian, bnk (in Cyrillic ). The noun meaning "newspaper" was written at first as k'zayt, but today, under the influence of Russian (gazeta), it is written gazet (in Cyrillic ). Anglicisms, which have lately been penetrating all the languages of the world, enter the Kabardian standard language via Russian, e. g. kawmpyawtayr "computer", yntayrnayt "Internet", byznays "business", etc 88.
It is interesting to note that Sh. Nogma's "Kabardian dictionary", compiled in the first half of the 19th century, contains only 2,5 % of words borrowed from Russian (Apaev 2000: 234). 87 Kumaxova 1972. 88 For a general survey of Kabardian lexicology and lexicography see Apaev 2000.
86
119
TEXTS
1. A Very Simple and Instructive Text about Rabbits
(Source: Gwwat, L. et alii Adabza, El'brus, Nal'ik 1984).
Thak'wma' h. Rabbit (rabbits) Thak'wma' h-r maz-m -aw-psaw. rabbit-NOM forest-ERG dir.-pres.-to live r p'awra m-a. he-NOM fast 3sg.pres.-run Thak'wma'h-m y r-xa-r a-'a rabbit-ERG young-pl.-NOM milk-INSTR ya-a-xa. Thak'wma' h-m wdz 3sg.-caus.-eat rabbit-ERG grass ay-x, -r ya-w b pr 3sg-to eat wood-NOM 3sg.-gnaw he-ERG hay-stack f' -wa ya-w . Thma'h-r well-ADV 3sg.-see rabbit-NOM amxwam wa-, 'mxwam in the summer grey in the winter w x -. white-af. 120
Nrt w g wp zayk'wa k'wan-w ya---t. wagw zd-tay-t-m, wya bzda q'--t-awwa. Nrt-xa-r za-'ath--wa wagw-ham tay-t-w, Sawsrq'wa q'--'as--. "Mf'a w-y-?a, Sawsrq'wa? '?a-m d-ya-s!" "Sa q'a-z-a-zaxw, f-q'-s-papa," -ya-?ary Sawsrq'wa y Tayay -m z-r-ya-dz, Harama-?wha d-aw-yay-ry z-yaph, ' wna-m z ana- q'--ya-w, ?wwa ha--t-w. Sawsrq'wa anam nasra da-pama, mf'a-m q'-ya-waa'-wa Yn na zq' wa q'-ya-w. Sawsrq'wa p'nt'-am w -wa da-p'. Mf'a-m b ada - Y nm yap'ary z w w padz'a q'-y-p at--. Padz'a-m q'px a dapr Ynm y nam y-xw--. Vocabulary:
badan "lie next to" bzda "bad" ana "tower" dap "hot coals" dap'an "jump in" dapan "look in" dan "run up the hill" wagw "road" wna "territory" Harama name of mythological mountain k'wan "go" mf'a "fire" Nrt "Nart" (hero of old times) na "eye" nasn "come to" na "middle part of the face" Padz'a "burning log" pnt'a "gate" q'aazan "return" q'apwatan "catch, get" q''asn "follow, go after" q'papan "wait for" q'taywan "happen, occur" w "horseman" ?a "coldness, cold" hatn "stand above stg." ' "land, earth" taytn "find oneself, be" Tayay name of Sosruko's horse wya "cold" yap'an "jump through, jump over"
121
PART II
122
Translitterated: Wyyy, Wyyy, pna - y Sawsrq'wa y fa - y dary araw Wayy, z mxwa gwarty Y Twayayry Wayy, thak'wma llaw, yaz Sawsrq'wary y m yalalaxw p'nt'am q'dawha. Vocabulary:
da - day fa - 1. Kabardian national dress; 2. form, appearance gwar - some lla - weak, shabby mxwa - day pna - ballad pnt'a - gate q'dahan - bring in(to), get in - horse thak'wma -ear Twayay - name of Sosruko's horse wyyy - Hey! yalalaxn - hang yaz - himself y - they say (particle) araan - burn, be hot - old
123
4. Kabardian proverbs
(Source: Adabza psa, Nal'ik 1999). 1.Ya w'n ya w'an. 2. 'awa tay'm wy warad yarq'm. 3. Fz bzada ha'a maxa. 4. Fz bda yl' halal. 5. q'l zy?am an y?a. 6. L'ar lm tarq'm. 7. dam y na mwamy wra p'stara wyaxn. 8. C'xwr l'ama, y c'ar q'awnary, vr l'ama, y far q'awna. 9. Wy q'ma t'aw q'wmx, wy psa t'aw wm?a. 10. har psawma p?a 'arq'm. 11. 'anm 'a xa. Vocabulary: 1. ' "man"; 'an "die" 2. warad "song"; en "become weary, become tired" 3. fz "woman"; bzada "bad"; ha'a "guest"; xan "eat" 4. bda "strong"; halal "what is desirable" 5. q'l "mind, wisdom"; an "character" 6. 'a "manliness"; l "death"; tan "fear" 7. da "Adygh; Circassian"; mwa "poor"; w "salt"; p'sta "pasta (Circassian dish) 8. c'xw "person"; c'a "name"; v "ox"; fa "skin"; q'anan "remain" 9. q'ma "dagger"; q'axn "cut"; psa "word"; ?an "say, utter" 10. ha "head"; psawn "live"; p?a "hat" 11. xan "lie (in something)"
124
125
Alchiki (the Russian term for Kabardian 'an) is a traditional game played with sheep, or cattle bones. It is widespread among many peoples of Central Asia and the Caucasus, and it occurs in many variants. The rules always involve trying to get as many alchikis (bones) as you can, at the expense of your opponent.
126
REFERENCES
Abdokov, A. I. Vvedenie v sravnitel'no-istorieskuju morfologiju abxazsko-adygskix i naxsko-dagestanskix jazykov, Kabardino-balkarskij Gosudarstvennyj Universitet, Nal'ik 1981. Abdokov, A. I. O zvukovyx i slovarnyx sootvetstvijax severokavkazskix jazykov, El'brus, Nal'ik 1983. Abitov, M. L. et alii Grammatika kabardino-erkesskogo literaturnogo jazyka, AN SSSR, Moskva 1957. Abyt', M. L. et alii Slovar' kabardino-erkesskogo jazyka, Diroga, Moskva 1999. Alparslan, O. & Dumzil, G. "Le parler besney de zennun ky", Journal Asiatique 1963: 337-382. Anderson, J. "Kabardian disemvowelled, again", Studia Linguistica 45/1991: 18-48. Apaev, M. L. Sovremennyj kabardino-erkesskij jazyk. Leksikologija. Leksikografija, l'brus, Nal'ik 2000. Balkarov, B. H. Jazyk besleneevcev, Kabardino-balkarskoe kninoe izdatel'stvo, Nal'ik 1959. Balkarov, B. H. "O astjax rei v kabardinskom jazyke", in: Voprosy sostavlenija opisatel'nyx grammatik, Izdatel'stvo AN SSSR, Moskva 1961: 113-122. Balkarov, B. H. Vvedenie v abxazo-adygskoe jazykoznanie, Nal'ik 1979. Bersirov, B. M. "Jugoslavskie adygi i osobennosti ix rei", Annual of IberoCaucasian Linguistics, 8/1981: 116-127. Bganokov, B. G. Adygskij tiket, El'brus, Nal'ik 1978. Braun, Ja. "Xattskij i abxazo-adigskij", Rocznyk Orientalistyczny 49.1/1994: 15-23. Catford, I. "Ergativity in Caucasian Languages", North Eastern Linguistic Society Papers 6/1975: 37-48. Chirikba, V. A. Common West Caucasian, Research School CNWS, Leiden 1996. Choi, J. D. "An Acoustic Study of Kabardian Vowels", Journal of the International Phonetic Association 21/1991: 4-12. Colarusso, J. A grammar of the Kabardian Language, University of Calgary Press, Calgary 1992. Colarusso, J. "Proto-Northwest Caucasian, or How to Crack a very hard Nut," JIES 22,1-2/1994: 1-35. Colarusso, J. "Phyletic Links between Proto-Indo-European and Proto-Northwest Caucasian", The Journal of Indo-European Studies 25, 1-2/1997: 119-151. Colarusso, J. Nart Sagas from the Caucasus, Princeton University Press, Princeton 2002. Colarusso, J. Kabardian (East Circassian), Lincom Europa, Munich 2006. Comrie, B. Tense, CUP, Cambridge 1985. ern, V. "Verb Class System in Circassian. An Attempt of Classification of Circassian Verbal Forms", Archiv Orientaln 36/1968: 200-212. Dixon, R. M. W. Ergativity, CUP, Cambridge 1994. Dixon, R. M. W. "A Typology of Causatives: Form, Syntax, and Animacy", in: R. M. W. Dixon and A. Aikhenvald (eds.), Changing Valency, C.U.P., Cambridge 2000: 30-83. Dumzil, G. Introduction a la grammaire compare des langues caucasiennes du nord, Champion, Paris 1933.
127
Giev, N. T. Voprosy rgativnogo stroja adygskix jazykov, Adygejskoe otdelenie Krasnodarskogo kninogo izdatel'stva, Majkop 1985. Gordon, M. & A. Applebaum, "Phonetic Structures in Turkish Kabardian", Journal of the International Phonetic Association 36(2) 2006: 159-186. Greenfield, E. R. "Language of Dissent: Language, Ethnic Identity, and Bilingual Education Policy in the North Caucasus", Gjaurgiev, X. Z. & X. X. Sukunov, kol'nyj russko-kabardinskij slovar', Nart, Nal'ik 1991. Halle, M. "Is Kabardian a Vowel-less Language?" International Journal of Language and Philosophy 6/1970: 95. Hewitt, G. "Antipassive and 'labile' constructions in North Caucasian", General Linguistics 22/1982: 158-171. Hewitt, G. "Northwest Caucasian", Lingua 115/2005: 91-145. Hewitt, G. Introduction to the Study of the Languages of the Caucasus, Lincom, Munich 2004. Hewitt, G. (ed.) The Indigenous Languages of the Caucasus: the North West Caucasian Languages, London 1989. Jakovlev, N. F. Grammatika literaturnogo kabardino-erkesskogo jazyka, AN SSSR, Moscow 1948. Kardanov, V. M. "Grammatieskij oerk kabardinskogo jazyka", in: M. L. Apaev et alii, Kabardinsko-russkij slovar', Gosudarstvennoe izdatel'stvo inostrannyx i nacional'nyx slovarej, Moskva 1957: 489-576. Kardanov, V. M. Glagol'noe skazuemoe v kabardinskom jazyke, Kabardinskobalkarskoe kninoe izdatel'stvo, Nal'ik 1957. Keenan E. & B. Comrie, "Noun phrase accessibility and universal grammar", Linguistic Inquiry 8/1977: 63-99. Klimov, G. A. (ed.) Strukturnye obnosti kavkazskix jazykov, Nauka, Moskva 1978. Klimov, G. A. Vvedenie v kavkazskoe jazykoznanie, Nauka, Moskva 1986. Kuipers, A. H. Phoneme and morpheme in Kabardian, Mouton, The Hague 1960. Kuipers, A. H. "The Circassian nominal Paradigm: a Contribution to Case-theory", Lingua XI, 1962: 231-248. Kuipers, A. H. "Unique types and typological universals", in: Pratidnam. Festschrift F. B. J. Kuiper, Mouton, The Hague 1968: 68-88. Kumaxov, M. A. Slovoizmenenie adygskix jazykov, Nauka, Moskva 1971. Kumaxov, M. A. "Kategorija opredelennosti-neopredelennosti v adygskix jazykax", Trudy Tbilisskogo universiteta, V. 3 (142), 1972: 119-128. Kumaxov, M. A. "Teorija monovokalizma i zapadnokavkazskie jazyki", Voprosy jazykoznanija 4/1973: 54-67. Kumaxov, M. A. "Uerbnost' neperexodnyx paradigm v adygskix jazykax", Iberijsko-kavkazskoe jazykoznanie 18/1973a: 127-132. Kumaxov, M. A. "Teorija genealogieskogo dreva i zapadnokavkazskie jazyki", Voprosy jazykoznanija 3/1976: 47-57. Kumaxov, M. A. Sravnitel'no-istorieskaja fonetika adygskix jazykov, Moskva 1981. Kumaxov, M. A. Oerki obego i kavkazskogo jazykoznanija, l'brus, Nal'ik1984. Kumaxov, M. A. Sravnitel'no-istorieskaja grammatika adygskix (erkesskix) jazykov, Nauka, Moskva 1989. Kumaxov, M. A. & Kumaxova, Z. Ju. Jazyk adygejskogo fol'klora, Nauka, Moskva 1982.
128
Kumaxov, M. A. et alii, "Ergative case in the Circassian languages", Lund University Department of Linguistics Working Papers 45(1996): 93-111. Kumaxov, M. A. & K. Vamling, "On Root and Subordinate Clause Structure in Kabardian", Lund University Working Papers in Linguistics 44/1995: 91-110. Kumaxov, M. A. & K. Vamling, Dopolnitel'nye konstrukcii v kabardinskom jazyke, The Lund University Press, Lund 1998. Kumaxov, M. A. & K. Vamling, rgativnost' v erkesskix jazykax, Malm University, Malm 2006. Kumaxov, M. A. (ed.) Oerki kabardino-erkesskoj dialektologii, lbrus, Nal' ik 1969. Kumaxov, M. A. (ed.) Kabardino-erkesskij jazyk (I-II), Izdatel'skij centr l'-Fa, Nal'ik 2006. Kumaxova, Z. Ju. Razvitie adygskix literaturnyx jazykov, Nauka, Moskva 1972. Mafedzev, S. Adyg xabz. Adygi. Obiai. Tradicii, Izdatel'skij centr l'-Fa, Nal'ik 2000. Matasovi, R. Uvod u poredbenu lingvistiku, MH, Zagreb 2001. Matasovi, R. Jezina raznolikost svijeta, Algoritam, Zagreb 2005. Matasovi, R. "Transitivity in Kabardian", in: R. D. Van Valin Jr. (ed.), Investigations of the Syntax-Semantics-Pragmatics Interface, John Benjamins, Amsterdam 2008: 59-74. Matasovi, R. "The "Dependent First" Syntactic Patterns in Kabardian and other Caucasian Languages", paper from the "Conference on the Languages of the Caucasus" held at the Max-Planck Institut fr Evolutionre Anthropologie in Leipzig, December 2007. Nrtxar. Kabardej pos. Nal'ik 1951. Nrtxar. Psaryay 'wxam y brxar. l'brus, Nal'ik 2001. zbek, B. Die tscherkessischen Nartensagen, Esprint-Verlag, Heidelberg 1982. zbek, B. Erzhlungen der letzten Tscherkessen auf dem Amselfeld, Etnographie der Tscherkessen 4, Bonn 1986. Paris, C. "Indices personnels intraverbaux et syntaxe de la phrase minimale dans les langues du Caucase du nord-ouest", Bulletin de la Socit de linguistique de Paris 64/1969: 104-183. Paris, C. Systme phonologique et phnomnes phontiques dans le parler besney de Zennun Ky (Tcherkesse oriental), Klincksieck, Paris 1974. Peterson, D. A. Applicative Constructions, O.U.P, Oxford 2007. Smeets, R. Studies in West Circassian Phonology and Morphology, Brill, Leiden 1984. Smeets, R. "The Development of Literary Languages in the Soviet Union; the Case of Circassian", in: I. Fodor & C. Hagge (eds.), Language Reform. History and Future, VI,1990: 513-541. agirov, A. K. "Kabardinskij jazyk", in: V. V. Vinogradov (ed.) Jazyki narodov SSSR, T. IV Iberijsko-kavkazskie jazyki, Nauka, Moskva 1967: 165-183. agirov, A. K. timologieskij slovar' adygskix (erkesskix) jazykov, I, II, Moskva 1977. agirov, A. K. "Kabardinskij jazyk", in: V. N. Jarceva et alii (ed.), Jazyki mira. Kavkazskie jazyki, Academia, Moskva 1998: 103-115. Tuite, K. "The Myth of the Caucasian Sprachbund": The Case of Ergativity", Lingua 108/1999: 1-26. Van Valin, R. Exploring the Syntax-Semantics Interface, CUP, Cambridge 2005.
129
Van Valin, R. & LaPolla, R. Syntax, CUP, Cambridge 1997. WALS = The World Atlas of Linguistic Structures, ed. by M. Haspelmath et alii, CUP, Cambridge 2006. Uduhu, T. . "Preruptivnye smynye soglasnye zapadnyx dialektov adygejskogo jazyka", in: Z. Ju. Kumaxova (ed.) Struktura predloenija v adygejskom jazyke, Adygejskij nauno-issledovatel'skij institut, Maykop 1976: 135-157. Zekox, U. S. "O strukture prostogo predloenija v adygejskom jazyke", in: Z. Ju. Kumaxova (ed.) Struktura predloenija v adygejskom jazyke, Adygejskij nauno-issledovatel'skij institut, Maykop 1976: 3-49. For information on the history of Kabardians and other Adyghean peoples see About the customs, dances and. culture of the Adyghean peoples see
For the bibliography of works on Kabardian (in English) see A few texts about Kabardian and in this language are available at: For the transliteration of the Kabardian Cyrillic see J. Gippert, Alphabet Systems Based upon the Cyrillic Script ( The most extensive bibliography of Russian works on Kabardian can be found in the comparative grammar by M. A Kumaxov (Kumaxov 1989) and the monograph on Kabardian, edited by the same author (Kumaxov (ed.) 2006).
130
Kabardian
Note: ALUANIAN = Dagestanian languages NAKH = Chechen, Ingush and Bats (Batsbi)
131
132
APPENDIX III A table of phonological correspondences between Kabardian and Adyghean (according to agirov 1977: 25) Kabardian f f' xw b d d dz gw v ' q' q'w Adyghean w 'w f p t d, c kw cw, w , ky, , d , ', k'y q qw
Western Adyghean dialects (Shapsugh and Bzhedukh) are the most archaic Circassian dialects with respect to consonantism. They have a fourfold system of stops, distinguishing voiceless aspirated (ph), voiced (b), ejective (p') and voiceless unaspirated, or "preruptive" (p). It seems that Kabardian had such a system still in the beginning of the 19th century, because traces of it can be found in Sh. Nogma's writings (Uduxu 1976). In literary Kabardian, the voiceless unaspirated stops and affricates became voiced, merging with the original voiced series, and creating a number of homonyms, cp. Kab. da 1. "nut", 2. "we" vs. Bzhedukh da "nut", ta "we", or Kabardian dza 1. "army", 2. "tooth" vs. Bzhedukh dza "army", ca "tooth", etc.
133
APPENDIX IV INDEX OF KABARDIAN GRAMMATICAL MORPHEMES aw- present (for dynamic verbs) - demonstrative pronoun ("this/that") - preterite py optative particle -t anterior preterite wa "but" bla- directional ("by") -bza comparative suffix; "very" -'a Instrumental -'a "already" (verbal suffix) -'a "maybe" (verbal suffix) -'ara adverbializer; gerund -'at optative -' valency adding suffix (for intransitives) d- 1st. person plural verbal prefix da- conjunctivity (sojuznost') da- directional ("in") dana "where" day "towards" dpa "how much, how many" dwa "how" dy 1st. person pl. possessive pronoun dda comparative and superlative particle; "very" f- 2nd person pl. verbal prefix -f "potential" -fa "kind of" (nominal suffix) fy 2nd person pl. possessive pronoun f'- adversative f''(a) "except" gwa- directional ("together with") gwar "some" (quantifier) a- causative -a abstract noun formative -an evidential (probability) - pluperfect -t anterior pluperfect hawa "no" -h transitivizing suffix ndara "since the time that" -'(a) valency increasing suffix; adds directional meaning ("towards") -m Ergative (Oblique) case -m imperfect of stative verbs -m(a) conditional -m(y) permissive; "although" ma- (m-) 3 sg. of intransitives
134
maw- demonstrative pronoun ("that") m- negation (for infinite forms) m, m- demonstrative pronoun ("that") -n Infinitive -n categorical future n(a)- directional ("thither") naw "after" na comparative particle -na "without" -nt subjunctive / future II (?) -nw Infinitive -nw factual future -nwt future II (conditional) nt'a "yes" -pa perfectivizing suffix (indicates accomplished action) p'ara interrogative particle psaw "every" (quantifier) -q'a interrogative, exclamatory, and focus marking suffix q'as "every" q'- directional ("hither") -q'm negation (for finite forms) -q'wa suffix indicating excessive action; "too much" -r Nominative (Absolutive) case -r facultative present of dynamic verbs -ra interrogative -ra gerund -ra, -ry conjunction (clitic); "and" -(r)t imperfect of dynamic verbs rya- optative s-/z- 1st person sg. verbal prefix sy 1sg. possessive pronoun sma associative plural st "what" - affirmative -- suffix indicating excessive action; "too much" -a (elative) superlative a interrogative particle - directional; "from the surface of"; "when" tma "if" -ar(at) optative ha'a "after, because of" ha "every" -xwa "great" 'a- directional prefix; "under" -t imperfect of dynamic verbs -t suffix used in reinforcing the imperative -tam(a) irrealis conditional tay- directional; "on" w-/b- 2nd person sg. verbal prefix
135
wy 2sg. possessive pronoun w- factitive -w Adverbial case; gerund; adverbializing suffix xa-/x- directional ("towards the interior") -xa plural -x(a)- "already" xat "who" -xxa- "reinforced negation" -w transitivizing suffix xwa-/xw- version xwada "like" xwa-....-fa "somewhat" (circumfix modifying adjectives) xw- potential -xw('a) suffix expressing simultaneity of the action, "while" xway- debitative modal y-/r- 3rd. person sg. verbal prefix ya "or" yay attributive 3sg. possessive pronoun yy attributive 3pl. possessive pronoun yaz emphatic pronoun; "personally", "himself" y 3pl. possessive pronoun -y admirative y 3sg. possessive pronoun y' y "and" za-/z- participle forming prefix za-/z-/z- reflexive za-/zara- reciprocal zara- "instrumental" participle prefix; subordinating prefix on participles zy relative possessive pronoun; "whose" zda- "together" -() "back, again"; repetitive ayry "quotative particle" -ay diminutive suffix - transitivizing suffix -?a indefinite person marker, "somebody" ?a'a- involuntative -?wa superlative (elative); "diminutive" comparative
136
TABLE OF CONTENTS
List of abbreviations......................................................................................................2
ORTHOGRAPHY...................................................................................................15 MORPHOLOGY.....................................................................................................17 Nominal inflection...................................................................................................17 Number.........................................................................................................................17 Case..............................................................................................................................18 Definiteness..................................................................................................................23 Adjectives.....................................................................................................................25 Personal and demonstrative pronouns..........................................................................27 Possessive pronouns.....................................................................................................27 Interrogative pronouns.................................................................................................27 The emphatic pronoun..................................................................................................28 Quantifiers....................................................................................................................29 Invariable words......................................................................................................30
Numerals......................................................................................................................30 Adverbs........................................................................................................................31 Postpositions.................................................................................................................32 Particles, conjunctions and interjections......................................................................33
Verbs...........................................................................................................................35
The verbal complex......................................................................................................35 Verbal negation............................................................................................................36 Person...........................................................................................................................37 Indefinite person...........................................................................................................39 Transitivity...................................................................................................................39 Labile (diffuse) verbs...................................................................................................46 Causative......................................................................................................................46 Involuntative................................................................................................................49 Factitive........................................................................................................................51
137
Active (dynamic) and stative verbs..............................................................................51 Applicatives.................................................................................................................53 I. Version (Benefactive/Malefactive) ..........................................................................53 II. Conjunctivity (Comitative)......................................................................................55 Reciprocity...................................................................................................................57 Reflexivity....................................................................................................................58 Deontic modality..........................................................................................................60 Personal and directional prefixes.................................................................................62 Tenses...........................................................................................................................63 Interrogative.................................................................................................................70 Moods...........................................................................................................................71 Evidentiality.................................................................................................................75 Deverbal nominals.......................................................................................................76 I. Infinitive....................................................................................................................76 II. Participles................................................................................................................78 III. Verbal adverbs (gerunds).......................................................................................81 Directionals..................................................................................................................82 APPENDIX: VERBAL CLASSES AND PARADIGMS............................................87
WORD FORMATION...........................................................................................93
Compounds...................................................................................................................93 Nominal suffixes..........................................................................................................94 Verb formation by prefixing........................................................................................95 Verbal suffixes.............................................................................................................96
SYNTAX...................................................................................................................99
Noun phrases (NP).......................................................................................................99 Adjective phrases.......................................................................................................100 Syntactic structure of the sentence.............................................................................101 Nominal sentence.......................................................................................................101 Equi-NP deletion........................................................................................................101 Subordination.............................................................................................................102 Case assignment in subordinate clauses.....................................................................106 Modal verbs................................................................................................................107 Phasal verbs................................................................................................................108 Reported speech.........................................................................................................108 Agreement..................................................................................................................109 Negative concord........................................................................................................110 Pro-drop......................................................................................................................110 Relative clauses..........................................................................................................111 Coordination...............................................................................................................112 The order of syntactic elements.................................................................................112 Topicalization/focalization.........................................................................................114
TEXTS......................................................................................................................120 REFERENCES.......................................................................................................127
APPENDIX I: LANGUAGE MAP OF THE CAUCASUS.......................................131 APPENDIX II: ADYGH (CIRCASSIAN) TRIBES IN THE 18TH CENTURY......132 APPENDIX III: Phonological correspondences between Kabardian and Adyghean....................................................................................................................133 APPENDIX IV: Index of Kabardian grammatical morphemes.................................134
139 | https://www.scribd.com/document/79616556/Kabardian-Grammar | CC-MAIN-2019-35 | refinedweb | 40,228 | 61.97 |
Testing is a crucial part of maintaining a code base, but not all tests validate what they’re testing for. Flaky tests—tests that fail sometimes but not always—are a universal problem, particularly in UI testing. In this blog post, we will discuss a new and simple approach we have taken to solve this problem. In particular, we found that a large fraction of most test code is setting up the conditions to test the actual business-logic we are interested in, and consequently a lot of the flakiness is due to errors in this setup phase. However, these errors don’t tell us anything about whether the primary test condition succeeded or failed, and so rather than marking the test as failing, we should mark it as “unable to reach test condition.”
We’ve operationalized this insight using 2 simple techniques:
- We developed a simple way to designate the relevant parts of a test as the actual business logic being tested.
- We modified our test framework behavior to treat failures outside these critical sections differently—as “unable to test,” rather than “failure”.
This has led to a significant reduction in flakiness, in turn reducing maintenance time and increasing code test coverage and developers’ trust in the testing process.
End-to-end tests are a powerful tool for verifying product correctness, by testing end-user functionality directly. Unlike unit tests (their better-known counterparts), which test individual components in isolation, E2E tests provide the closest approximation to the production environment, and their success is the best guarantee for sane functionality in the real world. E2E tests are relatively easy to write, as they require only knowledge of the product’s expected behavior and don’t require (almost) any knowledge of the implementation. Well-written E2E tests reflect and describe the application behavior better than any existing spec, and they allow us to undertake significant code refactoring by providing a safety net that the product continues to behave as expected.
At Dropbox, we use Selenium WebDriver for web E2E testing. Selenium sends commands to a web browser which simulate user interactions on a website. This allows the developer to verify a specific E2E functionality on the website. In Dropbox’s case, this includes actions such as adding, removing, or sharing a file; authentication flows; and user management. A typical test will describe a sequence of actions taken by a user in order to perform a certain operation, followed by a verification to ensure success. These actions can include navigating to pages, mouse clicks, or sending key strokes. Verifications are usually done by assertions of specific attributes of a web page that prove the success of the operation, such as success notifications or updates to the UI.
For example, let’s say we want to test whether a user can successfully share a file. An E2E test for that might specify a sequence of actions such as:
- Pick a file
- Click the “Share button” next to the file
- Specify the email address of a person to share with
- Click “Share”
The test would then check to see if the notification of a successful share was displayed.
(While the solution for flaky tests we will describe in this post can be applied to any testing framework, we will focus on Selenium tests, as that is where we have found the most use for it.)
Analyzing an E2E test
In order to better understand the problem of flaky tests and our solution for them, let’s look at a real E2E test: verifying that a Dropbox Team Admin can delete a Dropbox “group” (Dropbox teams can assign their users to groups). Here is an animated gif of screenshots depicting the process of creating a group, adding a users to it, and then deleting the group:
The code to test this flow might look something like this:
def test_delete_group(): #) # Delete the group single_group_page.delete_group() # Check that it no longer shows up group_row = get_group_row_info_by_name(GROUP_NAME) assert group_row is None
There’s a lot happening here, but most of it is setup—we’re creating a test team, creating a group within that team, adding members to it, and only then do we actually do the main purpose of the test: deleting the group and checking to make sure that it got deleted. Errors anywhere prior to that last step don’t tell us anything about whether the
delete_group() functionality is correct or not.
Furthermore, for this particular function (and many others that we would want to test), the amount of code executed during setup is much larger than the pieces being tested, and so if bugs were distributed evenly, we would expect failures of this test case to more likely be caused by irrelevant things rather than the delete functionality itself.
How do we deal with this issue, and in a general way?
The anatomy of a test
Let’s take a step back and think about tests in general. A test is an experiment that aims to validate proper functionality of the system by demonstrating that an expected outcome occurs when a particular factor is manipulated. You can think of it like this:
if A exists: Perform action X on A Verify that the output is O
In our case of
test_delete_group:
- A =
is a group
- X =
delete
A
- O =
group deleted
Conventionally, the outputs of a test are either
success or
failure, based on whether any part of the test causes an error. But what happens if there’s an error even before we get to the test condition of
if A exists ?
Let’s look at an analogous situation: imagine an experiment to test if lightning transmits electricity. To do this, we’ll measure the current through a metal rod placed on top of a tall building during a storm. However, if lightning never strikes the rod, we cannot conclude either that lightning does or does not transmit electricity, since the conditions for the experiment weren’t satisfied.
We call this result
fail to verify, leaving us with the following possibilities for the outcome of a test:
success
failure
fail to verify
Adding semantics to tests
To implement this logic into our tests, we have to do two things:
- Designate “the relevant part” of the tests
- Modify the testing framework to use this designation to return our 3 different outcomes.
For the first step, we introduce a simple semantic addition to designate parts of a test as “under test.” In python (our primary language), this can be implemented as a context manager which we call
under_test. This is used for the critical sections of code and wraps raised exceptions as
UnderTestFailure. Here’s how
test_delete_group looks with this new code construct (new code is bolded):
def test_delete_group(self): #) with under_test(): # Delete the group single_group_page.delete_group() # Check that it no longer shows up group_row = get_group_row_info_by_name(GROUP_NAME) assert group_row is None
For the second step, let’s look at before and after versions of how our tests are evaluated.
Before:
try: run_test(test_function, *args, **kwargs) except Exception as e: return Result.FAILURE else: return Result.SUCCESS
After:
try: run_test(test_function, *args, **kwargs) except Exception as e: if isinstance(e, UnderTestFailure): return Result.FAILURE else: return Result.FAIL_TO_VERIFY else: return Result.SUCCESS
Note that the changes in the test code are extremely minimal: the critical section has just been placed inside a
with under_test() block, and the rest of the code remains the same. However, this has a big impact on failures. The original code had 7 significant lines of code in the test, of which the new version moved 2 into the critical section. If we assume failures are evenly distributed across lines of code, then 5/7 of the errors in the original code would have actually been irrelevant to the functionality we are testing. And in practice, some of the setup code (such as
setup_team()) is way more complex, thus often resulting in an order of magnitude reduction in the number of failures that fall inside the critical section!
Success and failure scenarios
How does this simple change affect various scenarios? Let’s take a look at some common patterns:
# This is a test that passes. # Test output: success def test_pass(self): do_something() with under_test() assert True # This test fails before validating business logic. # Test output: fail_to_verify def test_skip(self): assert False with under_test() do_something() # This test fails while validating business logic. # Test output: failure def test_fail(self): with under_test() do_something() assert False # This test fails while validating business logic (multiple under_test blocks) # Test output: failure def test_fail_2(self): do_some_setupstuff() with under_test() do_something() some_other_preliminary_stuff() with under_test() do_something() assert False
Real failures in skipped tests
Our methodology is very effective with flakiness but we’ve introduced the possibility of missing some real bugs. In particular, consider this example:
def test_skip(self): assert False # real consistent bug with under_test() do_something()
The bug in the non
under_test() section will not be discovered by this test since the test gets marked as
fail to verify. But this is true only locally—when we consider the entire test suite, we would hope that another test would include the bug from this test, inside an
under_test() section, so that the bug is actually caught and eventually fixed. Thus, we must follow a new rule: every piece of code that is no longer inside the
under_test() block must be covered with its own dedicated test where it is
under_test().
In our
test_delete_group example from above, the non-critical setup pieces such as team and group creation are, in fact, tested in other tests dedicated to those operations, such as
test_create_group.
In order to best utilize tests, we run them as part of an automated deployment environment. Understanding the deployment process is crucial for understanding the environmental effects of flaky tests, and how the changes described above help remediate these effects.
Dropbox uses Continuous Integration (CI) across the organization. For each codebase, engineers commit code to the same mainline branch. Each commit (also called “build” at Dropbox) kicks off a suite of tests. Generally speaking, as a software organization grows, CI and well-written test suites are the first line of defense for automatically maintaining product quality. They document and enforce what the expected behavior of code is, which prevents one engineer (who may not know or regularly interact with everyone who commits code) from unknowingly mucking up another’s work—or from regressing their own features, for that matter.
We went into some detail on our CI system in previous blog posts: Accelerating Iteration Velocity on Dropbox’s Desktop Client, Part 1 and Part 2. Here we will briefly review a few pieces that are relevant to us right now.
Our test and build coordinator is a Dropbox open source project called Changes, which has an interface for each test suite that looks this:
Each bar represents a commit, in reverse chronological order. The result could be totally-passing (green) or have at least one test failure (red), with occasional system errors (black). The time it took to run the job is represented by the height of the bar. The test suite being run is quite extensive, and includes both unit tests as well as E2E tests. Thus, it runs on a separate cluster of machines, and currently takes tens of minutes to run.
At first, the workflow at Dropbox to add a new commit to the mainline branch was as follows:
- Ensure that the commit passes unit tests, which run on the developer’s machine.
- These run quite fast (under a few minutes, and often in a few seconds)
- Add the build to the main branch
- Changes would then run the full test suite on that build, eventually marking it as green or red.
However, we started to get cascading failures increasingly often with this system: notice the sequence of red builds on the left and right sides of the above screenshot. This happened because if one build had an error and thus failed the test suite, the next several would most likely fail as well, since in the time it took to run the full test suite, several other commits would have been added to the mainline branch, all of which include the failing code from the first build.
So we added an intermediate stage in the commit process: a “Commit Queue” (CQ). After passing unit tests, new commits now first have to go through the CQ, which runs the same suite of tests as on the main branch. Only builds that pass the CQ are submitted to the main branch, where they are again tested. This prevents cascading failures on the main branch, since each build has already been tested before being added. In the example above, the first bad build would have never been added to the mainline branch, since it would have been caught by the CQ. All subsequent builds would have gone through just fine (assuming they didn’t contain bugs of their own).
Flaky tests and the Commit Queue
Flakiness is the most common problem with E2E tests. What is so bad about flaky tests? Flaky tests are useless because they don’t provide a reliable signal on the particular code being tested. However, things get worse in the context of our Commit Queue (CQ), since a red build blocks engineers from landing code to the mainline, even if their code was fine, but a flaky test falsely marked the build as bad. Excessive flakiness can cause engineers to start losing faith in the entire CI process and starting pushing red builds anyways, in the hope that the build was red just due to flaky tests.
In part, flakiness is a logical price for trying to simulate a production environment that has a lot of indeterministic variants, in contrast to unit tests that run in a sterile mocked environment. For example, in the production environment, a delay of a few seconds with an occurrence rate of once per million per operation might be tolerable. However, in our CI system, if we have 10,000 tests, each composed of 10 operations, this might result in a red build 10% of the time. This is not just specific to us; Google reports that 1.5% of all test runs report a “flaky” result.
In our CQ, we try to discover flaky tests by rerunning failing tests and seeing if they succeed on rerun after failure. In the old system, if a test failed on retry, we would mark the test (and build) as truly failed; whereas if it succeeded on retry, we mark the test as flaky and wouldn’t include its results in evaluating the success of the build as a whole. We then moved the test into a quarantine, meaning it would not be evaluated on any future CQ builds, until the test was fixed by its author.
In practice, fixing flaky tests would often take quite a while, since by definition they contain issues that only surface occasionally. And for the entire duration of repair, our test coverage was reduced. Furthermore, we found a fairly high rate of these tests remaining flaky after a “fix”—25% by some internal estimates. Over time, the quarantine would grow quite large, as engineers struggled to fix flaky tests fast enough to keep up with the discovery of new flaky tests.
With the new
under_test framework, only failures that are raised
under_test() result in
Failure and block the commit queue. Failures outside the critical sections now return
fail_to_verify and are skipped, meaning that they do not block the commit queue. There is no longer a quarantine; all future builds run all tests, including those previously returned
fail_to_verify. Of course, tests which frequently return this error are marked and investigated to try to fix permanently, but now there is no urgency to do so right away.
Quantifying errors in the Commit Queue
What happens when we run an entire test suite? Let’s say we have 10,000 tests, 0.1% of which fail due to real bugs in the code. In addition, let’s assume a 1.5% rate of flakiness in the other tests. Due to the time it takes to fix flaky tests and their cascading failures in the old approach, we might have as many as 10% of tests in quarantine at one time. Finally, let’s assume that only 10% of test code is inside an
under_test() critical section.
Let’s see what happens when we run this test suite through both old and new approaches:
Notice that with the new approach, we not only reduced the number of failures due to flakiness (from 135 to 15), we also increased both our test coverage in successful cases (from 8,856 to 9,840) and the number of real bugs caught!
Summary
By introducing a framework of less than 20 lines of code, we expanded our testing outcomes to include a new
fail_to_verify result. We could then remove the “quarantine” for flaky tests from our continuous integration system, resulting in an improvement in all test metrics. In particular, we reduced flakiness by more than 90% from our test suite, transforming it from a lethal disease into a chronic—but treatable—condition. We hope this approach will prove useful to others. | http://engineeringjobs4u.co.uk/how-were-winning-the-battle-against-flaky-tests | CC-MAIN-2018-51 | refinedweb | 2,878 | 55.58 |
It’s summer in Germany, and [Valentin’s] room was getting hotter than he could handle. Tired of suffering through the heat, and with his always-on PC not helping matters any, he decided that he must do something to supplement his home’s air conditioner. The result of his labor is the single room poor man’s A/C unit you see above.
He had a spare Peltier cooler sitting around, so he put it to good use as the basis for his air conditioning unit. He sandwiched it between a pair of CPU heatsinks before cramming his makeshift heat pump into a shoe box. Warm air is drawn into the box and across the cold side of the Peltier before being blown back into the room. On the hot side of the box air is also pulled in by a fan, drawing heat away from the unit before being exhausted outdoors through his window.
While he hasn’t quantified the machine’s cooling power, he seems quite happy with the results. We have a spare Peltier kicking around here somewhere, perhaps we should try building one just for grins.
53 thoughts on “Poor man’s Peltier air conditioner”
How effective is the cooling? No mention on how effective it is in the article.
I’m no HVAC guy, but it is all about heat load and efficiency. Still, it’s interesting enough, and with some thermal sensors here and there, one could gather enough data to learn a bunch, and perhaps also have an excellent school project. And if it cools a fellow off a little bit more, great.
More of a ‘fun project’ than anything useful- Peltiers are notoriously inefficient. Looks to be an 80W peltier, so he’d be lucky to be getting 8W of cooling ability.
i need something like this, 32c at day is to much for me
This week we’re supposed to be hit with 38 to 40 Celsius! I also have a peltier kicking around, but I think it would be much better to draw cooling air in from outside, then vent it back.That way, you’re not sucking the hot outside air back into your house.
Any way to make a “poor mans AC” that is a bit more efficient than the 5-10% you get with a peltier?
@Wizzard: Peltiers aren’t THAT inefficient.
Could be an 80W peltier like the TEC12709, in which case let’s see…
80W electrical input, I’d approximate the temperature difference around 20 degrees between hot side and cold side, at which the peltier CoP is around 0.3 – 0.4. Therefore transferring about 24 – 32W of heat out of the air, and effectively 30-40% efficiency
The power supply that runs the TEC probably outputs more waste heat than what he can remove from the room with it.
^ Very true
Down talk this guy down too much. This isn’t meant to cool the whole room its meant as a spot cooler. If he’s happy with the results, then I say more power to the guy.
guys if you wanna make an air conditioner and maximize your peltier cooling effect just make the hot part cold enough so your peltier dont need to draw much heat and it will be colder enough..then it will now depend on your design…^^ have fun making this..nothings possible in this world only humans limit themselves but GOD created humans with limitless brains so make it happen…!!
@Dax,
2nd’ed. From personal experience with these beasts, I can confirm, they take a whacking-great amount of oomph, to give cooling to a very localized area. Having worked with triple, and quad-staged cascade systems for my job, I can vouch for the amount of power they require to do anything aside from cooling small thermal loads.
Using @Mike’s number of 32 Watts that comes out to 109.2 Btu’s. It takes aprox. 5000 Btu’s to cool a small room.
Still a neat project, just not very useful.
I did a similar trick with one 180w Pelt. I used my watercooling loop(external to my pc at the time) to cool the hot side, put the biggest CPU cooler I had at the time, It was like a desk cooler, the fan straight at me.
Shoe boxes work too, the AC unit in my old office had four “panels” of peltiers with huge heat-piping(commercial design) this cooled a 20×30′ office with 28 PCs. The panels had to have thousands of watts. But the whole unit fit in the 10ft wall, no more than 3 inch deep.about 4ft wide.
I was looking at designing something like this to market. Would have been great if it could work. Only 2 moving parts. After doing the research I found that it would require far more power than a conventional unit to cool even a very small room. I figured there had to be a reason it hasn’t been made already. The only way i can see this being useful is as a supplement to a phase change system.
Peltier exchange, something most of the industry are too smart to realize the potential of.
My friend works for a company who currently has the most efficient method in full manufacturing(using serial-wired iron pellet grids). We beat the best energy star fridge rating using 40 usd in parts one night…I still use it to keep food in the garage 2 years later..
It’s a heavily underrated method, it can replace even industrial ammonia-gas systems at 1/5 the cost and consumption…
HINT: If you can get the pellets, hot-glue is perfect for doing the grid method. It’s hard to do decent peltier coolers homebrew cause of the iron material resources and sizing.
I’d like to see people getting into actually making homebrew manufacturing for these, the most efficient way to date is publicly documented by companies making them.
It’s electric cooling and freezing..
@xorpunk: Why don’t you start us off by doing a small write up on it (something fun rather than dry industry papers)? The hacking community is a force to be reckoned with, we just need a nudge in the right direction to get started.
I AM an HVAC guy, and points for concept, but none for scale, and minus several for moisture control. most people don’t realize the First function of your AC unit is to dehumidify the air, it can be 110 degrees but if the air is dry, your sweat will keep your body temperature stable. The first thing this will do is try to remove the moisture from the air, he probably noticed it pooling in the bottom of the box. if he doesn’t have a drain, it will probably start to play host to all sorts of bacteria. of course he could have just gotten a dc electric cooler off ebay for $40 and added fans to it, works on the same concept.
Using @Mike’s and @Flame500’s numbers about 50 Peltiers could give a small room a good cool down! but at what cost?
A standard A/C will have a COP that is about 10 times better than that of a peltier.
It would make more sense to put the peltier directly on the forehead or something like that.
if you want to freeze your skin tissue or die than yeah go for it
Going by the photo of the DMM, and the stated 18V supply voltage, less than 75 W is being consumed. I can’t see how that would be that noticeable especially if the mini AC is only cooling air already cooled by a home AC, as Mile Nathan suggests in his post. However I didn’t see a mention a home AC on Valentin’s blog detailing the build. In reading the spec sheets for TEC there is a recommended amount of force that should be used to make the “sandwich” to get the best performance I doubt the method use here meets that. And BTW a heat sink equipped with a fan is not a passive cooler. Anyway it is what it is, if Valentin had fun building it, and is satisfied with the results is all that matters, no one is required to duplicate it
He could have gotten 300 Watts of “cooling” by just exhausting the heat of his PC to the outside. But wait- thats to simple and doesn’t have Peltiers and LEDs.
I’d like to see a wrist version of this for cooling your blood by contact with the veins in your inner wrist. Much more effective way to cool off your body.
Not a bad idea, but the SF 49ers research has found that it works better by placing the cooler across the palm of your hand. Want to learn more? Google info on the “cooling glove”: after an initial workout and a few minutes of one hand in the cooling glove, athletes performed BETTER in the immediate follow-up to the glove, than they did in their original workout.
Very good, but if anyone tried that they really need to use a precise temperature controller, maybe one of those industrial standard PID controllers, to regulate how cold your hand gets. Too cold and the blood vessels constrict reducing blood flow and pretty much halting any further heat transfer. The stanford researchers are looking for a marketable item, but are they able to compromise by not using a vacuum to swell the blood vessels? Trying to include a peltier module into a ‘cooling glove’ is not necessarily the right approach here. Maybe a glove designed to be worn while driving a race car, with thermally conducting metallic wire with a high ability to flex a lot without breaking, sewn around very thin passages that air or water is run through? But what I really want is a car seat skin that can be installed into any car with this sort of thing. And it really should be air cooling because there’s also a humidity control factor I want to address concerning my backside regions.
This was probably done for kicks. I’ve played with them some. Tried to make a beer cooler one time. Just WAY too much power consumed for what you get.
I had a hard enough time making a smaller insulated cooler. So I can’t see using something like this even for a “spot cooler.”
I’m sure anything this maker is feeling, is probably just the feel good from playing with these things and not actual cooling.
Dax’s noting about how the PSU could generate more heat than the peltier would offset probably holds water, I have a couple of 100watt peltiers I’ve yet to properly impliment into something and have hesitated because a 60watt 12v PSU brick I use for something else can get quite hot and I dread to think what heat a 100watt brick would put out.
Currently to help keep sane in hot weather I have a set of 9 12cm computer fans stuck together and blowing air around the room, using a Picaxe 08m I can control the speed easily through PWM and have them almost silent whilst still blowing air.
As for PSU heat problems there is none, they’re being powered from an old car battery charged from a 30w solar panel :)
1 for using solar battery setup
@nanomonkey, there’s something similar that sits on your neck to cool you down, although it uses water evaporation instead of peltier effect.
CoolWare Personal Cooling System
@sneakypoo: The actual design has to be done with magnification and super-fine soldering, using bulky size drastically decreases the efficiency. Also the materials(bismuth telluride is most efficient) is kind of hard to form into millimeter blocks at home..
@ztraph: All solutions including newer non-ammonia gas systems in consumer appliances have that problem. It’s commonly reused for cooling where it vaporizes externally..
In camping coolers that use this tech it’s also used to cool, it actually takes care of itself in those applications without pumping though by simply using ventilation.
This ‘hack’ is just a manufactured unit poorly implemented. The tech is super useful though.
Bad idea, open a window
@Climate Change Kills: congrats, that’s the stupidest idea.
@Flame 500
I’m assuming you’re deliberately trolling. However, in the case that you’re not, BTUs and Watts are not the same thing – BTUs measure energy, Watts measure power (rate of change of energy). It’s really important that units are correctly used and understood – so often on Hackaday I find well-meaning comments that serve to confuse and misdirect.
32W is, in fact, 109.2 BTU/hr. Notice the rate of change factor here. With this in mind, where does your 5000 BTU figure come from?
This is exactly how those tiny car refrigerators work – for suitably small amounts of “work”. Peltiers are quite inefficient even for such small volumes of air, let alone cooling rooms. Whatever effect he thinks he’s feeling in this room from this contraption, he’s fooling himself.
@Stefan
I was simply attempting to provide some useful metrics to help people analyze this project. You are correct, I neglected to include the /hr suffix. That was semi-intentional because I wanted to compare this project to the way consumer window AC’s are rated. They are marketed based on a cooling capacity in BTU’s. I believe this is actually BTUs/hr and was attempting to avoid unnecessarily complicating the matter.
To answer your question, 5000 BTU/hr is the smallest window air conditioner you can buy and that size is only supposed to be capable of cooling a 150 sq. ft room.
@Flame 500
Thanks for clarifying. Re-reading my post, it probably came across more aggressively than I intended. It’s good to learn a bit about AC from someone who clearly knows more than I do about it – I’ve only ever seen BTUs used to rate the contents of flammable gas canisters.
i like the ingenuity, but wouldn’t he be better to use a cooling coil that runs through an ice chest and then a fan blow through something like a heater core from an automobile? lots of retrofitting, but that’s what we like on HAD right?
looks like you are using a power supply for a printer.
you are probably going to burn that power supply.
you will need a power supply that can supply more than 10 amp.
They’re big on solar stuff in Germany, right? He should scale this down and make an AC unit out of it:
You can get plans (or at least the materials list and vague outline; i.e. enough for a real hacker to work with ;)) somewhere on the netmowebs. I think it uses calcium chloride deicer salt and ammonia.
I have some years of experience with a commercial print system that used peltiers to cool the recycled ink, and I can tell you that these things will kill *themselves* with internal condensation. The condensation on the cold plate will corrode the solder & eventually eat the copper between the blocks. If you want the unit to last, you have to seal that internal airspace.
When I was 12 made an “AC” unit out of a fishtank pump a heater core and blower and a tub of water.
I put the pump in a tub of water outside and put the heater core inside with the blower. It worked ok then I got the idea of burying the radiator and that worked much better. Me and my best friend spent two days and managed to get it about 6 feet in the ground.
It was decently effective at making a “cool” flow of air, better than nothing when it’s 99 degrees F. humid and still.
Germany’s ground temp should be even cooler than Southern Indiana.
@ joe pittsburgh
i have done almost the same thing.
i used the watercooling loop of my server (wich has the radiator dangling out the window) to cool the hot side of a peltier module. i used a shoebox as a seal between the hot and cold side. et voila a nice cooler that cools your legs/head/arms. its even better if you place it hanging above you ^^.
This “air conditioner” suffers from a defect common to many of the “portable air conditioners” that I have seen in that it fails to consider the source of make up air to replace the air exhausted outside will be more warm air. In order for this device to work, the warm side of the thermal interface should be outside of the building envelope and both the supply and return (basically a heat sink with a fan) should be there. The other side of the thermocouple should be inside the building envelope, where it can both draw its supply air and exhaust cool air without creating suction from outside. This will also improve air quality and make the device’s parts that have the potential to make people ill, (if they were to accumulate potential mold food which would get and stay wet from condensation.) more cleanable and accessible.
There should be no air communication between the outdoors and indoors parts.
The same principles apply to any cooling device.
:)
I like that approach, cut a board(insulating foam panel?) to fit tightly in a window, mount the peltier device in the middle of the board through a hole in the board, use a fan on both sides, mount the PSU on the outside of the board(maybe use the PSU exhaust as the peltier hotside cooling?), mount a sun/weather shielding box(rest of the foam panel?) on the outside, mount an aluminum flex hose on the inside to aim the cool air to where you want it. Maybe run a plastic flex hose right to your chair and connect it to one of those air-conditioned shirts from japan?
5000 BTU comes from typical 120VAC home outlets with 15A CB. Electric Space heaters and 120 VAC window AC units create about 5100 BTU and typically heat/cool a small 12×12 room
Design matters, integrated with proper cooling [hotside], peltiers are good and efficient than all you people think. and it can cool more efficient than current compressor type air conditioners, with less power. Use PC PSU s for powering peltiers, they are much efficient and outputs much less heat.
I was thinking about doing something like this with a solar panel I live in florida plenty of sun! Im sure its not that efficient but lets say like this setup I get a output of 32W thats 109.2 btu an hour if I have a pretty sealed small room and not much in the way of heat generation in it wouldn’t it keep the room cooler throughout the day?
Not noticably. I have a 40W peltier mini-fridge. It can’t even cool a can of pop on a warm spring day. And that’s just cooling a square foot of insulated box!
If you only have a solar panel you need some other method. Maybe pumping cool water round, or spraying it somehow. An indoor fountain might cool the place, maybe put a few drops of bleach in the water, to keep horrible things from growing.
Peltiers really are no use for cooling anything much bigger than they are. And of course, in a closed room, they generate more heat than they take away. You’d have to window-mount it, but again, just opening the window would do much more cooling.
Very true. In my experience only thing you can do is water cool the hot side, which an A/C unit would be more efficient than a pump and fan and pelt. I’ve done the water cooling route with a already water cooled computer and it can make for a 1C difference in a closed case
PSU’s fan could be used for the hot-side. But I would look into connecting the hot-side air intake to the enclosure of that ‘always on’ PC in that room!
not to be an dick but why is the hot side on the bottom, hot air thends to rise so the hot air would get trapped against the heating element potentialy stopping the peltier effect | http://hackaday.com/2011/07/02/poor-mans-peltier-air-conditioner/ | CC-MAIN-2015-14 | refinedweb | 3,433 | 69.31 |
Sometimes.
Out:
Iteration 1, loss = 0.32009978 Iteration 2, loss = 0.15347534 Iteration 3, loss = 0.11544755 Iteration 4, loss = 0.09279764 Iteration 5, loss = 0.07889367 Iteration 6, loss = 0.07170497 Iteration 7, loss = 0.06282111 Iteration 8, loss = 0.05529723 Iteration 9, loss = 0.04960484 Iteration 10, loss = 0.04645355 Training set score: 0.986800 Test set score: 0.970000
import matplotlib.pyplot as plt from sklearn.datasets import fetch_openml from sklearn.neural_network import MLPClassifier print(__doc__) # Load data from X, y = fetch_openml('mnist_784', version=1, return_X_y=True) X = X / 255. # rescale the data, use the traditional train/test split: ( 1 minutes 20.404 seconds)
Gallery generated by Sphinx-Gallery
© 2007–2018 The scikit-learn developers
Licensed under the 3-clause BSD License. | https://docs.w3cub.com/scikit_learn/auto_examples/neural_networks/plot_mnist_filters | CC-MAIN-2021-21 | refinedweb | 125 | 57.33 |
Empty type
From HaskellWiki
An empty type is one that has no values. They're often used with phantom types or type arithmetic. How you define one depends on how picky you are that the type has genuinely no values.
Frequently when defining a type whose values are never meant to be used, the simplest way is to just define it with a single, token value, whose constructor you don't export:
data E0 = E0
However, this is not a truly empty type, so some people find the following more satisfying:
newtype Void = Void Void
Although we do have constructors here, they do nothing (in fact, Void is essentially id) and the only value here is bottom, which is impossible to be rid of. So this is as close as we can get, except that we might regard the syntax as somewhat non-obvious. To address that concern, Haskell 2010 (or GHC with EmptyDataDecls) allows you to just not specify any constructors at all:
data Void
This is theoretically equivalent to the previous type, but saves you keyboard wear and namespace clutter. | http://www.haskell.org/haskellwiki/Empty_type | CC-MAIN-2014-10 | refinedweb | 182 | 58.15 |
Pete is a consultant specializing in library design and implementation. He has been a member of the C++ Standards Committee since its inception, and is Project Editor for the C++ Standard. He is writing a book about the newly approved Technical Report on C++ Library Extensions; the book will be published this summer by Addison-Wesley. Pete can be contacted at petebecker@acm.org.
Last December, I was in the Ground Transportation Center at the Indianapolis Airport waiting for the limousine that would take me on the last leg of my move to Bloomington, Indiana, when I got a call on my cell phone from Jon Erickson. He told me that the C/C++ Users Journal had ceased publication and that my next column, due in a couple of days, would not be needed.
I'd just spent two days with movers packing up everything from my house in Arlington, Massachusetts, and loading it onto a truck. On top of that, my fiancée and I had just bought a house; I was selling mine; I had just sent a preliminary draft of my book out for technical reviews; and I was in the middle of rearranging my employment relationship. So I was rather overloaded, and relieved to not have that next installment lurking on my to-do list.
Things have settled down now, and most of the stress of moving has gone away. Jon and I have been talking during the past month about this column, and I figure it's a great way to build that overload back to a peak. I've been reading Dr. Dobb's Journal for many years, dating back to the days when it was "Dr. Dobb's Journal of Computer Calisthenics and Orthodontia: Running Light without Overbyte." I'm pleased to be a part of it now.
This column is about solving C and C++ problems. Of course, that often means showing various coding tricks and programming techniques. But there are times when we just can't code our way out of a problemchanging the code around just creates a new set of problems, and we end up playing Whack-a-Mole instead of making progress. When we've painted ourselves into corners like this, the thing to remember is that there really isn't any wet paint. We can just leave. Throw the whole mess out, write off the time spent as an educational expense, and start over. I can usually get things right about the third time through.
This particular column is about overloading: both in the technical sense of writing multiple functions with the same name and leaving it to the compiler to figure out which one to call, and in the non-technical sense of giving the compiler so much to do that it gets overwhelmed. Combining templates with overloaded functions can cause breakdowns. Sometimes, the way to avoid these breakdowns is to get rid of the overloads.
Operator Overloading
Back in the late '80s, I was a C addict working at Borland International on its C compiler. There was a rumor that we might be moving to C++, so I decided to learn a little about it. I started reading the first edition of Bjarne Stroustrup's well-known book, The C++ Programming Language. I don't have a copy handy, but I have a vivid memory of seeing, around the second page, code something like this:
#include <iostream.h>
int main()
{
cout << "Hello, world\n";
return 0;
}
That use of the left-shift operator looks pretty pedestrian today, but 20 years ago, it was radical. I put the book aside and went back to my real work.
A couple months later, the rumor became fact and I picked up the book again. I still wasn't comfortable with that left-shift, but over time, I've gotten used to it, and now it looks even more natural than in its normal C usage.
Java zealots dismiss operator overloading as "syntactic sugar," but that's because they don't have it. The alternative is the named function, so Java code for arithmetic types ends up looking something like this:
BigInteger first = new BigInteger(1);
BigInteger second = new BigInteger(1);
BigInteger sum = first.add(second);
Compare that with the analogous C++ code, using an overloaded operator+:
BigInteger first = 1;
BigInteger second = 1;
BigInteger sum = first + second;
While it's certainly possible to learn to read the Java version, the C++ version looks much more like the natural formulation of the original problem. It also looks much more like the code for the same computation with built-in types that, in both languages, can be written like this:
int first = 1;
int second = 1;
int sum = first + second;
If you still think that syntactic sugar doesn't matter, imagine lemonade without sugar.
Function Overloading
Operators are just functions with funny names [1]. Once you get used to those funny names, operator overloading is just a part of function overloading. Function overloading occurs when you write two or more functions with the same name. For the compiler to tell them apart, they have to have different argument lists. For example, the C++ Standard Library provides three overloaded versions of the sin function, one for each of the three built-in floating-point types. So you can write code like this:
sin(1.0F); // calls float sin(float)
sin(1.0); // calls double sin(double)
sin(1.0L); // calls long double sin(long double)
I don't find this example particularly compelling. In every case I've run into, the C technique of using functions with different names works just fine:
sinf(1.0F); // calls float sinf(float)
sin(1.0); // calls double sin(double)
sinl(1.0L); // calls long double sinl(long double)
Furthermore, if you want to call sinl with a value of type double in C, you just do it:
double x = 1.0;
sinl(x); // calls long double sinl(long double),
// promotes 1.0 to type long double
To call the long double version of sin in C++, you must provide an argument of type long double:
double x = 1.0;
sin((long double)x); // calls long double
// sin(long double)
If you're compulsive about new-style casts, this becomes even more long winded:
double x = 1.0;
sin(static_cast<long double>(x));
While new-style casts may well provide useful benefits in general coding, in complex mathematical computations, they introduce unnecessary clutter. Fortunately, C++ retains the C versions of sin, so you can call the long double version, sinl, directly, just as in C.
On the other hand, with trig functions, we're dealing with a small set of argument types, so remembering and using the three C names isn't hard. But when you have a larger set of types, function overloading does simplify things. To continue with examples from mathematics, in TR1 we have the following versions of pow:
double pow(double, double);
float pow(float, float);
long double pow(long double, long double);
double pow(double, int);
float pow(float, int);
long double pow(long double, int);
TR1 also provides the C99 versions of pow, named powf and powl, which correspond to the second and third versions in this list. It's certainly possible to come up with reasonable naming conventions that provide distinct names for all six of these functions, but keeping track of their names would contribute to the mental overloading that makes programming harder.
Templates and Function Overloading
While overloading mathematical functions is convenient, overloading functions for use in templates is essential. Consider a rather pointless function that exchanges the contents of its second and third arguments if its first argument has the value true:
template <class Ty>
void exchg(bool do_it, Ty& arg1, Ty& arg2)
{
if (do_it)
{
Ty temp = arg1;
arg1 = arg2;
arg2 = temp;
}
}
This function works when called with two objects of any type Ty that can be copy constructed and assigned. But there are types that can be exchanged more efficiently by a function that knows how they are implemented. For example, the Standard Library template vector holds a pointer to an array of objects. Exchanging two vector objects by copy constructing and assigning means copying their arrays three times. Exchanging two vector objects by swapping their pointers around doesn't require any copying. So the swap operation is encapsulated in the Standard Library's template function swap, which can be implemented like this:
template <class Ty>
void swap(Ty& arg1, Ty& arg2)
{
Ty temp = arg1;
arg1 = arg2;
arg2 = temp;
}
With this version of swap, we can write our exchange function like this:
template <class Ty>
void exchg(bool do_it, Ty& arg1, Ty& arg2)
{
if (do_it)
swap(arg1, arg2);
}
Now, that doesn't look like much of an improvement. In fact, it looks worse because we have two functions instead of one simple one. The benefit comes when there is a specialization of swap, for the type Ty, that is more efficient than the generic one. For example, the Standard Library has a partial specialization of swap that takes arguments of type vector<Ty>:
template <class Ty>
void swap(vector<Ty>& arg1, vector<Ty>& arg2)
{
arg1.swap(arg2); // magic...
}
The magic member function vector::swap is part of the implementation of vector. It can be written with knowledge of how vector is implemented, so it can swap pointers instead of copying arrays.
When we call exchg with two arguments of type vector<T>, the compiler generates code that calls this partial specialization of swap instead of the generic version. When we call exchg with two arguments of type int, the compiler generates code that calls the generic version because there is no specialization of swap that takes two arguments of type int.
Granted, partial template specialization isn't function overloading in its technical sense. It's a convenient example, though; and when you use a type that isn't a template, you can provide an overloaded version of swap that will be used instead of the generic version wherever swap is called for your type.
Problems with Overloading
Say you're writing a function that displays an integer value and a complex value to the console. Something like this (assuming we have all the necessary #include directives):
int i = 3;
complex<double> c(1.0, 0.0);
cout << i << '\n';
cout << c << '\n';
Now, you make a seemingly innocuous change to write the same data to a temporary file, opened on the fly:
int i = 3;
complex<double> c(1.0, 0.0);
ofstream("data.log") << i << '\n';
ofstream("data.log") << c << '\n';
Perhaps surprisingly, this code shouldn't compile. The last line is illegal. The reason lies in the declarations of the overloaded shift-left operators.
The class ofstream is derived from basic_ofstream<char>, and basic_ofstream defines a member function to do the insertion:
template <class Elem, class Tr =
char_traits<Elem> >
class basic_ostream :
virtual public basic_ios<Elem, Tr>
{
// ...
basic_ostream& operator<<(int val);
// ...
};
The shift-left operator for complex types is not a member function. It looks something like this:
template <class Ty, class Elem, class Tr>
basic_ostream<Elem, Tr>& operator<<(
basic_ostream<Elem, Tr>&, const complex<Ty>&);
When you call either of these overloaded operators with cout as the output stream, everything is fine. When you create a temporary object such as ofstream("data.log"), the two functions act differently. That's because the temporary object is not an lvalue, and in general, to pass an argument by nonconst reference, you must have an lvalue. The reason for that rule is mostly caution: If you accidentally create a temporary object, you probably don't want to pass it to a function that's going to modify it because the temporary object will be destroyed when the function returns. There's an exception to this rule, though, for member functions: You can use a nonlvalue as the object for any member function, even if that member function modifies the object. The operator that inserts an int is a member function, so calling it with a temporary is okay. The operator that inserts a complex value is not a member function, so calling it with a temporary is illegal. So be careful how you define overloaded operators and how you use them.
Another problem arises from an interaction between templates and overloaded functions. Suppose you're writing a template function that takes two arguments. The first argument is a pointer to a function that, in turn, takes an argument of some type and returns a value of that type. The second argument is a value, not necessarily the same type as the argument type for the function pointer. The job of your template function is to call the function pointer, passing the value and returning the result. Like this:
template <class Ty0, class Ty1>
Ty0 apply(Ty0(*fp)(Ty0), Ty1 val)
{
return fp(val);
}
Simple enough [2]. And being an unrepentant C programmer, you try it out like this:
apply(sinf, 1.0);
apply(sinl, 1.0);
No problem. Then you try this:
apply(sin, 1.0);
Now there's a problem. The compiler doesn't know which of the three versions of sin you want. The call is ambiguous. You've been bitten by overloading. The solution, such as it is, is to tell the compiler what to do. You do that with a cast:
apply((float(*)(float))sin, 1.0);
Or, if you're going to be doing this more than about once, with a typedef and a cast:
typedef float(*fp_func)(float);
apply((fp_func)sin, 1.0);
Or, perhaps:
typedef float (*fp_func)(float);
fp_func ptr = sin;
apply(ptr, 1.0);
All that, just to undo the overloading.
Overloading is not an unmixed blessing. It can make code harder to read, and it can require more verbose coding. It can even bog down the compiler. When you're about to write an overloaded function, ask yourself if it's really needed. If the answer is no, you might save yourself from being overloaded, too.
Notes
[1] Yes, and funny syntax, too. But when we write overloaded operators, we write functions with funny names. It's only when we call them that we use notation that's different from ordinary functions. And even that isn't always true. You can call an overloaded operator by name, but it's almost always more appropriate to use the operator itself.
[2] But important. This general scheme is the core of call wrapper types that encapsulate and hide the differences between various callable objects. The C++ Standard Library, for example, provides call wrappers such as bind1st and bind2nd.
DDJ | http://www.drdobbs.com/overloading-and-overloading/184406484 | CC-MAIN-2015-40 | refinedweb | 2,438 | 61.56 |
.
We are cleaning up our house here in the suburbs of Philly to get it ready to sell. I don't think it is going to fetch a price comparable to what housing costs in Boston, though - Boston really is much more expensive.
Today I came across this graph that captures the situation well:
This was part of a report produced by the Pew Charitable Trusts comparing Philadelphia to the other cities in the chart. The report is an interesting read, providing a civic-minded take on the health of these seven old East-coast cities. They looked at the strength and makeup of the communities, trends in education, crime, and migration, and the state of governance and leadership.
Pew's summary on Boston: it is expensive to live there, the traffic is bad, and the gridlocked local governments make it hard to solve those problems. Otherwise, it is a fine place.
What use are python function decorators? Here is something fun: we can use them for speedy string templates. What follows is a further development of the templet python string-template idea. I find these techniques useful when building HTML and XML pages.
Here is a @stringfunction function decorator:
from templet import stringfunction
@stringfunction def poem(jumper, jumpee="moon"): "The $jumper jumped over the $jumpee."
The @stringfunction decorator transforms "poem" into the following function:
def poem(jumper, jumpee="moon"): out = [] out.append("The ") out.append(str(jumper)) out.append(" jumped over the ") out.append(str(jumpee)) out.append(".") return ''.join(out)
The decorator saves quite a bit of typing! In this implementation of @stringfunction, handy ${...} and ${{...}} syntax allows us to embed arbitrary python code, so we can make our templates as powerful as we need them to be. If we want to introduce complicated logic, our code can use "out.append(text)" to build up the string by hand.
The templates we get this way have all their parameters nicely-declared so they are easy-to-call and so misspellings get caught. These function templates are also far faster than the most popular template solutions, benchmarking more than four times faster than Django templates, python's string.Template templates, or the v1 templet classes. And we can use them by importing just one small python module, with no hacking of the import path mechanism or other major surgery.Continue reading "Python String Functions"
Anybody out there still trying to get a Wii?
My brother recently got a Wii. We mailed him all our extra games (Madden, Red Steel, and the rest of a big pile of games I had to buy in a bundle to get my Wii), so he is all set for gaming fun.
But he will have to buy his own copy of Zelda. We have been playing that game for about 100 hours (with the help of the hint book) and we are still not done. It is like watching the Fellowship of the Ring trilogy ten times over.
Zelda Twilight Princess is a good game. We sort of play it as a team, reading tips out of the hint book to each other, and the kids are very into the characters. Last weekend the kids xeroxed hintbook pages to make paper cutouts of Link, Zelda, Midna, Colin, Beth, Malo, Talo, Russel, and Zant. Why not? Just as fun as playing Gandalf and Frodo.
And the minigames.... Have any Zelda players out there beat RoalGoal yet?
So my brother got his Wii. But they still don't seem very easy to get. To get his, he had to visit several K-Marts early in the morning. I wondered if it would be possible to buy one "the normal way," so I visited EBGames the other day.
"No we don't have any. No Wii's. Come back in April."
I find this remarkable. It is three months past Christmas. The unobtainable Christmas hot-item TMX Elmo is easy to get. Yet the Wii is still absent everywhere.
The story behind this anti-Hillary video is fascinating. From yesterday's San Francisco Chronicle:.
This is why 2008 won't be like "1984..."Continue reading "Politics in a YouTube World"
How will the 21st century be different from the 20th? Will it be better?
The history of the last century was traced units of energy: it was all about kilowatts, barrels and megatons.
Maybe the history of the 21st century will be traced in bits. Instead of electrification and automobiles, the 21st century citizen is experiencing internetification and the incredible shrinking computer. But we are just at the start of harnessing the bit. What will be the power politics of the 21st century? What will trigger the wars of the next 50 years?
I found Bruce Sterling's SXSW's rant very entertaining. But I think it also provided a glimmer of insight on our future........"Continue reading "Kid's Garden Bench" | http://davidbau.com/archives/2007/03/index.html | crawl-001 | refinedweb | 815 | 75.1 |
DBAs regularly need to keep an eye on the error logs of all their SQL Servers, and the event logs of the host servers as well. When server numbers get large, the traditional Windows GUI approach breaks down, and the PoSH DBA reaches for a PowerShell script to do the leg-work.
“I know I'm searching for something
Something so undefined
that it can only be seen
by the eyes of the blind
in the middle of the night.”
Billy Joel
Contents
- Reading the Windows Event Viewer
- Get-EventLog examples
- Getting entries from the Windows Error Log into Excel
- Listing the last day that an entry was made in the Application Event Log
- Listing the System Event Log for the past two hours
- listing the Event Log between two time-periods
- Filtering the error log by the Error types
- Reading errors from just one particular source
- Reading all messages containing a specific string
- Selecting events according to a variety of conditions.
- Selecting the event logs of a number of servers and instances?
- Reading the SQL Server Error Log
- Applying Filters to the SQL Error Log
- Summary
Introduction
One of the everyday tasks of any DBA is to look for errors in your database server environment. With SQL Server, we have two major sources of information for doing this: the SQL Server Error Log and the Event Viewer.
When a problem occurs in SQL Server, ranging from a logon failure to a severe error in database mirroring, the first place to look for more information is the SQL Server Error Log. Similarly, if we have a problem related to physical hardware, the disk for example, we will look in the Event Viewer.
Both the SQL Server Error Log and the Event Viewer are designed to be used via their respective graphic user interface. This is fine for one or two servers, but painfully slow for the average DBA who has to read and filter information in many servers. Even when you’re focusing down on a problem with a single busy server, the added weight of the graphical tool in terms of resources can slow troubleshooting down considerably. It is very important in the day-to-day life of a DBA to have a mechanism to read and filter error messages quickly and unintrusively; a technique for "mining errors".
This is where PowerShell comes in handy. With a relatively simple script, you can read, and filter out just those error messages that you need in a multi-server environment and moreover, format the output to make the information stand out. In this article we will show how to do this, and, if required, include warnings or any other type of event, using the SQL Server Error Log in both an Online and Offline mode as well as messages in the Windows Event Viewer.
Reading the Windows Event Viewer
We are going to want to check the server logs automatically for problems or warnings. If, unlike us, you have the time to routinely ‘remote’ into each server in turn, then the Windows Event Viewer is the classic way of reading this information.
The official documentation states: “Windows Event Viewer is a utility that maintains logs about program, security, and system events on your computer. You can use Event Viewer to view and manage the event logs, gather information about hardware and software problems, and monitor Windows security events.” In other words, the event viewer collects the information about the health of your system.
Every process that starts within the Windows OS opens a communication channel with the OS informing it of its most important actions and events. This means, for example, that if the disk subsystem has a problem or if a service stops, this fact will be viewable in the Windows Event Viewer. In the same way, every SQL Server error message with a severity of 19 or greater is logged in both the SQL Server Error Log and the Event Viewer. Therefore, it’s important to have a mechanism to constantly monitor/read the Event Viewer, especially remotely, so you can find information about problems and take any necessary action; perhaps to even prevent a system crash.
PowerShell has a built-in cmdlet to make it easier to access information recorded in Event Viewer, but before we use it, let’s discuss some basic concepts that will help us to understand how to use it better.
The Event Viewer is a repository for the event logs. With the Event Viewer we can monitor the information about security, and identify hardware, software and system issues. There are three basic Event Logs:
- System Log: Stores the events related to the Operational System, as a problem with a driver.
- Application Log : Stores the events related to the Applications and programs
- Security Log : Stored the events related to security, as invalid logon attempts
You can also create a custom event log. There are several third-party tools that have their own event log.
The built-in PowerShell cmdlet to access the Event Viewer is Get-EventLog. Figure 7 shows the output when using Get-EventLog to read the application event log:
Get-EventLog -LogName Application
Figure1 – Properties from Get-EventLog
The Get-EventLog cmdlet has a parameter that allows you to read the Events remotely by passing in the name of the Server. Here we are using Get-EventLog to read the Security log on server ObiWan:
Get-EventLog -ComputerName ObiWan -LogName Security
Get-EventLog examples
Getting entries from the Windows Error Log into Excel
Two weeks ago your company bought a monitoring software for the SQL Server Servers called ContosoMonitor and installed the agent on all servers. This morning you realize that The Servers are not sending monitoring messages. In the installation manual says that every event is recorded by the software in the local Event Viewer, at the Application log but with a specific source named ContosoMonitor. You decide to check the Event Viewer for all servers and look for errors from the installed agents, again exporting the output to an Excel spreadsheet with the Servers split into worksheets . You open a PowerShell session from your desktop and type :
Get-Content c:\temp\Servers.txt | ForEach-Object { #A
Get-Eventlog -ComputerName $_ -LogName Application -EntryType Error -After (Get-Date).adddays(-1) |
Sort-Object Time –descending |
Export-Xls c:\temp\ContosoMonitorError.xlsx -AppendWorksheet -WorksheetName $_ #B
}
#A – Loop in the Servers inside the file Servers.txt
#B – Filter the Event log in the current server of the loop, sorting by descending Date Time and exporting to a xlsx splitting the servers in worksheets
To perform this operation using the Event Viewer GUI, You will need to connect to each Server and filter the Event Viewer by GUI, export to CSV file...etc. It is a painful process that will leave us wishing we could do it with two command lines of PowerShell. Let’s talk a little more about the PowerShell solution.
In order to read the Event Viewer, PowerShell has a built-in Cmdlet called Get-EventLog. There are some parameters in the Get-EventLog that can perform the filtering operation without needing an additional Where-Object and using it is faster than using the pipeline. Let’s take a look.
Note You can check out a complete help by typing Get-Help –full Get-EventLog
Listing the last day that an entry was made in the Application Event Log
This is just a matter of using the –after parameter and subtracting 1 day from the actual date :
Get-EventLog -ComputerName Obiwan -LogName Application -After ((Get-Date).adddays(-1))
Listing the System Event Log for the past two hours
To do this we also use the Get-Date methods, but use the –Before Parameter:
Get-EventLog -ComputerName Obiwan -LogName System -Before ((Get-Date).addHours(-2))
Listing the Event Log between two time-periods
To do this, we can join the parameters –after and –before as well. Imagine if we need list all Securities event logs in the last day, but for the 3 hours ago from the current date/time:
Get-EventLog -ComputerName Obiwan -LogName Security -After ((Get-Date).adddays(-1) -Before ((Get-Date).addHours(-3))
This Table shows the parameters to filter by date/time:
Get-EventLog has a parameter to filter the event type according to whether they are errors, warnings, information or Audit States, There are also parameters to specify the source of the error and filter by the contents the message itself. This table describes these parameters:
Filtering the error log by the Error types
If we were looking at the Application Log for the ObiWan Server :
Get-EventLog -ComputerName Obiwan -LogName Application –EntryType Error
Reading errors from just one particular source
If you want to filter by the Source of the all event types. See that wildcards are allowed , so we can use for example , *sql* to filter all events from SQL Server.
Get-EventLog -ComputerName Obiwan -EntryTpe Error -LogName Application -source ‘*sql*’
Reading all messages containing a specific string
We can filter by the contents of the message itself. Imagine if you want to filter the word ‘”started” in the message property :
Get-EventLog -ComputerName Obiwan -LogName Application –Message '*started* '
Selecting events according to a variety of conditions?
You can combine the selection of several properties. you may want to filter only the Event Log Application, type Error and source SQL Server in the last day :
Get-EventLog -ComputerName Obiwan -LogName Application -EntryTpe Error –Source '*sql*' -After ((Get-Date).adddays(-1))
You may need to query not only the Error entry type, but also Warning. In the table above we see that the EntryType is a STRING[] type and this means that I can pass an array with ‘Error,Warning’ to the -EntryType parameter :
Get-EventLog -ComputerName Obiwan -LogName Application -EntryTpe 'Error,Warning' –Source '*sql*' -After ((Get-Date).adddays(-1))
This technique also applies to the -Source Parameter.
Selecting the event logs of a number of servers and instances?
Get-EventLog does not accept pipeline input,so I cannot use “ObiWan” | Get-EventLog.
However, , the–Computername parameter is a STRING[] type, so I can use it with an array. If you want to perform the above operation on the ObiWan and QuiGonJinn Servers it is just, as the -EntryType and –Source parameters, use the comma between the names of the servers:
Get-EventLog -ComputerName @('Obiwan' ,'QuiGonJinn') -LogName Application -EntryType Error –Source '*sql*' -After ((Get-Date).adddays(-1))
Even better, using a txt file with the name of the servers, you also can do it with the Get-Content Cmdlet
Get-EventLog -ComputerName (Get-Content 'c:\temp\MyServers') -LogName Application -EntryTpe Error –Source '*sql*' -After ((Get-Date).adddays(-1))
Reading the SQL Server Error Log
Not only does the SQL Server error log write error information but it also records some information about successful operations, such as recovery of a database; and it also includes informational messages, such as the TCP port that SQL Server is listening on. The SQL Server Error Log is simply a repository of events. All these events are logged in order to assist in troubleshooting a potential problem and also to provide key information about the sequence of steps leading up to the problem.
You can view the SQL Server Error Log using SQL Server Management Studio (SSMS).
As it is a plain text file you can view it in any text editor From TSQL you can view the results of executing the
xp_readerrorlog extended stored procedure. By default, the error log is
stored at ...
Program Files\Microsoft SQL Server\MSSQL.n\MSSQL\LOG\ERRORLOG....
The current file is named ERRORLOG, and has no extension. The previous files will be named ErrorLog.1, ErrorLog.2...etc. and the SQL Server retains backups of the previous six logs. Figure 1 shows a view for the SQL Server Error Log by the SSMS log viewer.
Figure 2- SQL Server Error Log in SQL Server Management Studio
The SSMS user interface works when the SQL Server instance is online, but even works with offline instances in SQL Server 2012 or 2014
The advantage of using PowerShell to read the SQL Server Error Log is that you can filter only the errors and format the output for later reference, for example, writing it to a CSV file or storing it in a SQL Server Table. We’ll use this technique in some of our DBA checklists in a subsequent article.
Accessing SQL Error logs in Online SQL Server Instances
When the SQL Server Instance is online, we can use the SQLPSX Get-SqlErrorLog function to read the Error Log. This is part of SQLPSX, but for your convenience I have a stand-alone version that doesn't need SQLPSX installed. Let’s start by using the Get-Help cmdlet with the –full parameter to see how this function works:
PS C:\> Get-Help Get-SqlErrorLog -Full
NAME
Get-SqlErrorLog
SYNOPSIS
Returns the SQL Server Errorlog.
SYNTAX
Get-SqlErrorLog [-sqlserver] <String[]> [[-lognumber] <Int32>] [<CommonParameters>]
DESCRIPTION
The Get-SqlErrorLog function returns the SQL Server Errorlog.
PARAMETERS
-sqlserver <String[]>
Required? true
Position? 1
Default value
Accept pipeline input? true (ByValue, ByPropertyName)
Accept wildcard characters?
-lognumber <Int32>
Required? false
Position? 2
Default value
Accept pipeline input? false
Accept wildcard characters?
<CommonParameters>
This cmdlet supports the common parameters: Verbose, Debug,
ErrorAction, ErrorVariable, WarningAction, WarningVariable,
OutBuffer and OutVariable. For more information, type,
"get-help about_commonparameters".
OUTPUTS
System.Data.DataRow
Get-SqlErrorLog returns an array of System.Data.DataRow.
-------------------------- EXAMPLE 1 --------------------------
C:\PS>Get-SqlErrorLog "Z002\sql2k8"
This command returns the current SQL ErrorLog on the Z002\sql2k8 server.
RELATED LINKS
Get-SqlErrorLog
As we can see in the help text we’ve gotten via the Get-Help cmdlet, we can pass in the SQL Server Instance name and the number of the log file, and the default of 0 corresponds to the current log. The –sqlserver parameter is mandatory. The SQLPSX version doesn’t accept pipeline input, so if you use this version you need to use foreach loop statment or foreach-object cmdlet from an array of the SQL Server instance names . So to list all events in SQL Error Log in the SQL Server Instance R2D2, using either version of Get-SqlErrorLog, we can use the form:
Get-SqlErrorLog -sqlserver R2D2
Or using foreach loop statment :
ForEach ($Server in $servers) {
Get-SqlErrorLog -sqlserver $Server
}
And foreach-object cmdlet
Get-Content c:\teste\Servers.txt |
ForEach-Object {
Get-SqlErrorLog -sqlserver $_
}
Note If you want performance, avoid the pipeline and the foreach-object cmdlet approach. Use the foreach loop statment. We will discuss this approach in later articles.
Figure 3 illustrates the output:
Figure 3- Get-SQLErrorLog output
Because the event description is truncated to fit the screen in this format, we can improve the formatting by piping the output to the Format-List cmdlet, as shown in Figure 3:
Get-SqlErrorLog -sqlserver R2D2 | Format-List
Figure 4- Get-SQLErrorLog output piping to Format-List
In the same way, if we want to list the SQL Server Error Log events in log file number 2, we can just use the –lognumber parameter:
Get-SqlErrorLog -sqlserver <SQLInstanceName> -lognumber 2
Accessing SQL Error logs in OffLine SQL Server Instances
SQL Server 2012 introduced a new feature that allows the Error Log to be read even if the instance is offline. SQL Server has two WMI providers, for Server Events and for Computer Management.Two new WMI classes have been added to the Management provider: the SqlErrorLogFile and SqlErrorLogEvent classes.
To access these two classes you need to connect to the ...
Root\Microsoft\SqlServer\ComputerManagement11
... WMI namespace. Unlike the WMI for Server Events that has a namespace to each instance, the provider for Computer Management covers all SQL Server instances on the machine. You will need to specify the correct instance within the WQL (WMI Query Language).
The account under which the script runs needs to have read permissions, locally or remotely, on the Root\Microsoft\SqlServer\ComputerManagement11 WMI namespace. It also needs permission to access the folder that contains the SQL Server Error Log File.
The SqlErrorLogFile WMI class contains information about the log file itself as we can see in the table 7.1:
This WMI Class is interesting if you want to know about the physical log file, but this is not our focus. Because we want the event descriptions and information, we need to use the SqlErrorLogEvent WMI Class. Table 7.2 shows the properties from the SqlErrorLogEvent class:
To access the SQL Server Error Log for the default SQL Server instance in the Server R2D2, let’s use the Get-WMIObject cmdlet:
Get-WmiObject -computername R2d2 -Class "SqlErrorLogEvent" -Namespace "Root\Microsoft\SqlServer\ComputerManagement11"
Figure 5 display the cmdlet’s output :
Figure 5- Get-WMIObject Output in the SQLErrorLogEvent Class
You’ll see that, as well as the error log date and message, we’re also being distracted by some irrelevant information. As we can see in Figure 4, there are some properties that start with “__”. They are called WMI System Properties and are in every WMI Class. Unfortunately the Get-WMIObject cmdlet does not provide a parameter to suppress them in the output. An alternative is to select the properties you want to show, piping the output from Get-WMIObject to the Select-Object cmdlet :
Get-WmiObject
-Class
"SqlErrorLogEvent"
-ComputerName
R2D2
-Namespace
"Root\Microsoft\SqlServer\ComputerManagement11"
|
Select-object Filename,InstanceName, Logdate,Message,ProcessInfo
But we still have a problem. The LogDate is incomprehensible. Unlike the Get-SQLErrorLog where the property LogDate is a System.DateTime type that uses the OS date/time format, the LogDate property in the WMI Class is a System.String and has its own format. You can read more about this in “Working with Dates and Times using WMI” at Microsoft Technet. Figure 6 illustrates this:
Figure 6- Get-WMIObject Output using Select-Object and displaying the LogDate in a WMI format.
This means that we need to convert the WMI date/time format to the system date/time format. Fortunately WMI has a method to perform this operation called ConvertToDateTime. We can just use it in the Select-Object step:
Get-WmiObject -Class "SqlErrorLogEvent" -ComputerName R2D2 -Namespace "Root\Microsoft\SqlServer\ComputerManagement11"|
Select-object FileName,
InstanceName,
@{Expression={$_.ConvertToDateTime($_.LogDate)};Label = 'Logdate'},
Processinfo
Uhaa!!! Now we have a friendly-view format to the Logdate property, as the Figure 7 shows :
Figure 7- Get-WMIObject Output using Select-Object and displaying the LogDate property in a user-friendly format
This has now given us a way of gathering information about a SQL Server instance even if SQL Server is offline. We have accessed the log remotely in PowerShell by using WMI. This greatly increases our chances of solving a problem server even if the instance is offline.
Applying Filters to the SQL Error Log
If SQL Server does not start, or users have problems logging in, then you really have to search for possible errors. It is a good practice to check for errors and warnings just to be proactive and look for problems before they happen.. So far we saw how to read the events in the SQL Error Log and the Windows Event Viewer, but, as a DBA, we are interested on filtering these events, to look for specific errors and warnings or events that happen at a specific time. This covers how we filter the log messages.
SQL Error Log in Online SQL Server Instances
Imagine the situation, where you have been informed that two hours ago the SQL Server ObiWan, part of a simple active/passive cluster, was refusing connections from an XPTO. The Server in question is extremely busy and therefore it is likely to be a bad idea to use a resource-intensive graphic-user interface to diagnose it. You need to find out about what may have happened by filtering the Error Log looking for any error messages from two hours ago. You need to urgently inform your boss of the problem once you have enough information to be able to give an estimate of how long it will take to fix the problem. But the use of SSMS is out of the question. In fact we not only have to read the Error Log from two hours ago, but the filter the information looking for specific errors, but let’s approach the problem in stages.
PowerShell can easily work with date/time intervals because we can directly use the Get-Date properties. We already covered the Get-Date cmdlet in article 2, but let’s take a look a bit more deeply on it.
As everything in PowerShell is an object with its properties and methods, Get-Date returns an instance of System.DateTime and this class has a method named Adddays. If you don’t know about this method, Figure 8 display the output of only the methods that start with “ADD” (using the PowerShell cmdlet Get-Member) because it is all the matters to us now.
Get-Date | Get-Member
Figure 8 – Only the methods that start with “ADD” from the Get-Date cmdlet
In section 7.1, “Reading the SQL Server Error Log”, we saw that this function returns the properties SQLInstanceName ,LogDate, ProcessInfo and Text . To filter by date/time we will use the LogDate property and to list the SQL Error Log in the last five days we’ll just use the Where-Object cmdlet , filtering the LogDate property using Get-Date and a negative number of the days, in this case, -5 , in the Adddays method :
Get-SqlErrorLog -sqlserver ObiWan |
Where-object { $_.logdate -ge ((Get-Date).adddays(-5)) }
But if we want the events from the last 24 hours only? Just use the AddHours method:
Get-SqlErrorLog -sqlserver ObiWan |
Where-object { $_.logdate -ge ((Get-Date).adddhours(-24)) }
The process to filter for Errors is similar, but we will need to pipe to the Where-Object Cmdlet to filter the errors but in this case we will use the Text property to look for strings that signify an error:
Get-SqlErrorLog -sqlserver ObiWan |
Where-object { ( $_.text -like '*Error*' `
-or $_.text -like "*Fail*"`
-or $_.text -like "*dump*"`
-or $_.text -like '*IO requests taking longer*'`
-or $_.text -like '*is full*' `
) -and ($_.text -notlike '*found 0 errors*')`
-and ($_.text -notlike '*without errors*')`
}
You can see the difficulty that we’ve had to ‘code around’, can’t you? Although the Event Viewer has a property that specifies whether the event is an Error, Warning or Information message, the Get-SQLErrorLog does not return this information and the error messages are embedded within in the message itself. The warnings sometimes contain text which contains the word ‘error’ but which aren’t actually error events. We don’t want to see those. This means that we need to filter “Error” but exclude “found 0 errors “or “without errors” and include some messages that do not have the “error” inside it, but characterizes an error or warning, as “is full” or “IO request taking longer”
‘In this example the trick is to use the operators –or and –and to filter exactly what you need. We can, however, produce neater code by using a RegEx string.
PowerShell works very well with Regex and, generally speaking, most of the string comparisons can be turned to a Regex Expression. It isn’t easy to understand the Regex patterns, but the result is clear code, without a bunch of the –and/-or operators. The same filter conditions used in the example above can be rewritten as:
Get-SqlErrorLog
-sqlserver
ObiWan
|
Where-object {$_.Text -match '(Error|Fail|IO requests taking longer|is full)' -and $_.Text -notmatch '(without errors|found 0 errors)' }
We see how to filter by date/time and by errors/warnings separately, but most of the time we prefer to have both type of events together so we just put the two together. In the next example, we filter by errors/warnings in the last 24 hours on ObiWan SQL Server Instance, the code is:
Get-SqlErrorLog
-sqlserver
ObiWan
|
Where-object { ( $_.logdate -ge ((Get-Date).addhours(-24)))`
-and $_.Text -match '(Error|Fail|IO requests taking longer|is full)' -and $_.Text -notmatch '(without errors|found 0 errors)'}
Now if you are checking for errors on all the servers that you’re responsible for, you will want to perform the same process, but for more than one SQL Server Instance. In our case let’s do it to the servers ObiWan and QuiGonJinn. Remembering the Get-Help from Get-SQLErrorLog in the first section of this article we noticed that the parameter –sqlserver accepts pipeline input and it is a STRING [] type. This applies to the version attached to this article: I've already shown how to use the SQLPSX version. In the rest of these examples, I'll be using the enhanced version which can be downloaded from the head of this article.
I can pass a list of the servers:
... by pipeline ...
'ObiWan',
'QuiGonJinn'
| Get-SqlErrorLog
|
Where-object { ( $_.logdate -ge ((Get-Date).addhours(-24)))`
-and $_.Text -match '(Error|Fail|IO requests taking longer|is full)'`
-and $_.Text -notmatch '(without errors|found 0 errors)'}
... having a txt file with the servers and using the Get-Content Cmdlet by pipeline
Get-Content
c:\temp\Servers.txt
|
Get-SqlErrorLog |
Where-object { ( $_.logdate -ge ((Get-Date).addhours(-24)))`
-and $_.Text -match '(Error|Fail|IO requests taking longer|is full)'`
-and $_.Text -notmatch '(without errors|found 0 errors)'}
... or having a txt file with the servers and using the Get-Content Cmdlet by array in the –sqlserver parameter because it is a STRING[] , just type:
Get-SqlErrorLog
-sqlserver (Get-Content
c:\temp\Servers.txt)
|
Where-object { ( $_.logdate -ge ((Get-Date).addhours(-24)))`
-and $_.Text -match '(Error|Fail|IO requests taking longer|is full)'`
-and $_.Text -notmatch '(without errors|found 0 errors)'}
You can also obtain the list of SQL Server instance names from rows in a database table. In this case, I’m using a database called SQLServerRepository with a table called tbl_SQLServerInstanceNames on SQL Server instance R2D2. The table structure is pretty simple, just one column called SQLServerInstanceName.
In this case, you first need to query this table to return the SQL Server instance names using the Invoke-SQLCMD2 function that is part of the SQLPSX toolkit, and pipe the information to the Get-SQLErrorLog cmdlet:)'}
You will notice that the output is sorting by ascending date of the LogDate Property . But what if we want to display the messages in descending order? To do this, we can just pipe the Where-Object cmdlets output to the Sort-Object designating the LogDate property and using the –descending switch parameter :)'}| Sort-Object Logdate –descending
Now, to return to our scenario where that SQL Server was refusing connections, we need to filter messages from two hours ago and search for some error that might give us a clue as to what the problem is. To be more accurate, we will filter the time to ten minutes before two hours, or 130 minutes. From your desktop, you type:
Get-SqlErrorLog
-sqlserver
ObiWan
|
Where-object { ( $_.logdate -ge ((Get-Date).addminutes(-130)))`
-and $_.Text -match '(Error|Fail|IO requests taking longer|is full)'`
-and $_.Text -notmatch '(without errors|found 0 errors)'}
In the output you see some interesting messages. As we can see in the Figure 9, the date/time of the errors are suspiciously close together and they are close to the date/time you were informed that SQL Server starts to refuse connections. The Text property displays the exact date/time that SQL Server stops responding to connections (logon error) and it was after the Dump Error.
Figure 9 Reading and Filtering the SQL Server Error Log to solve the connection refused problem
At this point the cause of the problem will become obvious just from reading the output, the question has been answered by a PowerShell one-liner.. The server ObiWan is part of a cluster and because of the Dump error, it experienced a failover. For the duration of the failover, where the mechanism stopped the SQL Server service in one node and started it in the other, the connections were refused. It is a normal behavior during a failover. Your job now is to research why the dump happened, but that task is out of the scope of this article.
I’ve described the bare bones here. In fact, the text message is truncated to fit the screen and as so you’d usually want to pipe the command line above to the Out-GridView Cmdlet to get a better way of inspecting the errors:
Get-SqlErrorLog
-sqlserver
ObiWan
|
Where-object { ( $_.logdate -ge ((Get-Date).addminutes(-130)))`
-and $_.Text -match '(Error|Fail|IO requests taking longer|is full)' -and $_.Text -notmatch '(without errors|found 0 errors)'}Where-object { ( $_.logdate -ge ((Get-Date).addminutes(-130)))`
-and ($_.text -notlike '*found 0 errors*')`
-and ($_.text -notlike '*without errors*')`
-and ( $_.text -like '*Error*' `
-or $_.text -like "*Fail*"`
-or $_.text -like "*dump*"`
-or $_.text -like '*IO requests taking longer*'`
-or $_.text -like '*is full*') `
} | Out-GridView
The text messages are easy to read as the Figure 10 shows:
Figure 10 Using the Out-GridView Cmdlet to achieve more User- friendly view of the Error Log
The Out-GridView Cmdlet has a plus. It has the filter options. This means that you use it as well. The Figure 10 is also displaying these options.
The SQL Server Error Log is a repository of events, whether they are errors, warnings or simple information messages. To filter for errors we need to include and exclude some messages at the same line as we did in the conditions above. The message “is full” was added, but “Without Errors” was added to our exclusion list. This means that if there is a line with both expressions it will be discarded. You may want to add more expressions on that condition to filter your needs more accurately. At some point, your filter conditions could become a bit unmanageable because you could find yourself changing the filter whilst exploring errors in the log. You really need something a bit more simple than the code we’ve done above. Possibly the best answer to this is to use a Regex but hide the complexity. By using PowerShell’s feature of variable-substitution in a string, we can keep things simpler. you can create a variable to -Match and -NoMatch operators, add all the conditions that you want, and use this in the Where-Object. This way it is easier for you understand, remove and add new filters for messages whatever you want or need and the search conditions for the Where-Object Cmdlet are clearer to read. The code would look like:
$match
=
'(Error|Fail|IO requests taking longer|is full)'
$nomatch = '(without errors|found 0 errors)'
Get-SqlErrorLog -sqlserver ObiWan |
where { $_.Text -match $match -and $_.Text -notmatch $nomatch }
To add a new message to the match condition, for example “Warning” it is just put it at the end of the string:
$match = '(Error|Fail|IO requests taking longer|is full|warning)'
The same process is used for the -nomatch conditions.
SQL Error Log in Offline SQL Server Instances
Imagine it: You're at your desk analyzing the new ‘Always On’ project and you notice a report that , for some reason, the Servers R2D2 and ObiWan stopped start to refuse connections. After you solve the problem and not stop the production, your action could to consolidate the Error Log of the two Servers in the last half hour in an excel spreadsheet, each server in separate worksheets so that you can analyze the events.
From your desktop you just type:
"R2D2","ObiWan" | ForEach-Object { #A
Get-WmiObject -Class "SqlErrorLogEvent" -ComputerName $_ -Namespace "Root\Microsoft\SqlServer\ComputerManagement11"| #B
Where-Object {$_.ConvertToDateTime($_.LogDate) -ge (Get-Date).addminutes(-30)} | #C
select InstanceName,
@{Expression={$_.ConvertToDateTime($_.LogDate)};Label = 'Logdate'},
Processinfo |
Sort-Object LogDate -Descending | #D
Export-Xls -Path "c:\Log.xlsx" -AppendWorksheet -WorksheetName $_ #E
}
#A – Loop for R2D2 and ObiWan Servers
#B – Accessing the SQL Server Error Log WMI at the server in the current loop
#C – Selecting the properties to display and changing the Logdate property from WMI Date format to the OS date format.
As the Figure 11 illustrates, an excel spreadsheet called Log.xlsx is created with the Servers R2D2 and ObiWan split into worksheets, with the last half hour events in descending date/time order:
Figure 11 Consolidated Error Log from ObiWan and R2D2 servers
The same operation by SSMS would be to:
- Create a CSV File to each Server
- Turn the CSV into an Excel spreadsheet
- Sort the date/time in descending order
- Copy each Excel spreadsheet to a new one as a worksheet
This is a relatively complex task if compared to just two command lines of PowerShell
We already covered way that you can read the SQL Error Log when the instance is offline by using the WMI class SQLErrorLogEvent, which is part of the WMI Computer Management Provider, and Get-WMIObject Cmdlet. The process by which one would filter in this case is a bit different to date/time and to choose the SQL Server instance. First let’s see the date/time process.
In the section of this article on offline SQL Server Instances, we saw that the WMI Classes have their own date/time format and so we need to convert this format to have a friendly-view format, or the system format. To filter by date/time we need to do the same to the LogDate Property but now using the Where-Object Cmdlet. In the example below, we are filtering the last one day event messages:
Get-WmiObject -Class "SqlErrorLogEvent" -ComputerName R2D2 -Namespace "Root\Microsoft\SqlServer\ComputerManagement11"|
Where-Object { $_.ConvertToDateTime($_.LogDate) -ge ((Get-Date).adddays(-1))} |
select FileName,
InstanceName,
@{Expression={$_.ConvertToDateTime($_.LogDate)};Label = 'Logdate'},
Processinfo
We also noticed that the WMI Computer Management Provider, unlike WMI for Server Events, reports on all SQL Server instances in the Server. This means that, so far, we only read the Error Log from the default SQL Server Instance. The –computername parameter in the GET-WMIObject refers to the name of the Server, not the SQL Server Instance.
Now imagine that you have five SQL Server Instances in the Server ObiWan and you need to read the Error Log from the fourth instance called ObiWan\INST4, which is, of course, offline. How to perform this operation? In this case, my friend, the Windows Query Language (WQL) is your best and only friend.
In order to read the Error Log in the Server R2D2, specifically the SQL Server Instance R2D2\INST4, we first need to query the InstanceName Property ‘INST4’ and so we will use the –Query property :
$WQL = "Select * from SqlErrorLogEvent where InstanceName = 'INST4'"
Get-WmiObject -Query $WQL -ComputerName Obiwan -Namespace "Root\Microsoft\SqlServer\ComputerManagement11"|
select Filename,
InstanceName,
@{Expression={$_.ConvertToDateTime($_.LogDate)};Label = 'Logdate'},
Processinfo,
To filter errors, we can do the same process with the Get-SQLErrorLog cmdlet using Where-Object, or we can use the WQL as well. In this case we need to create the conditions in the WQL using the Message property:
$WQL = "Select * from SqlErrorLogEvent where (Message like '%Error%' or Message like '%Fail%' ) and (not message like '%Found 0 Errors%') and (not message like '%without errors%')"
And to query only the SQL Server Instance INST4, it is just a case of adding the condition in the WQL:
$WQL = "Select * from SqlErrorLogEvent where (Message like '%Error%' or Message like '%Fail%' ) and (not message like '%Found 0 Errors%') and (not message like '%without errors%') and (InstanceName = 'INST4')"
To sort by date, it is also the same process. Just pipe the Sort-Object by LogDate before Select-Object and after Get-WMIObject.
Summary
In this article we discuss how to read and effectively filter errors, warnings or any other type of event, using the SQL Server Error Log in an Online and Offline SQL Server Instance. We also discovered how to use the Event Viewer and its parameters to filter the events searching for possible issues in the System, Security and Applications Event Logs.
In the example code, we use the enhanced version of Get-SQLErrorLog that does not require the installation of SQLPSX, and which accepts both string arrays and pipleline input. The SQLPSX version can, however, be used for most examples and can be made to operate on several instances by means of the techniques described in the article. The Get-SQLErrorLog can be downloaded from the link at the head of the article | https://www.simple-talk.com/sql/database-administration/the-posh-dba---reading-and-filtering-errors/ | CC-MAIN-2015-40 | refinedweb | 6,143 | 50.77 |
Actually the important setting is:
The decides how many rows are fetched each time the client exhausts its
local cache and goes back to the server. Reasons to have setCaching low:
- Do you have a filter on? If so it could spend some time in the region
server trying to find all the rows
- Are your rows fat? It might put a lot of memory pressure in the region
server
- Are you spending a lot of time on each row, like Stack was saying? This
could also be a side effect of inserting back into HBase. The issue I hit
recently was that I was inserting a massive table into a tiny one (in terms
of # of regions), and I was hitting the 90 seconds sleep because of too many
store files. Right there waiting that time was getting over the 60 seconds
lease timeout.
Reasons to have setCaching high:
- Lots of tiny-ish rows that you process really really fast. Basically if
your bottleneck is just getting the rows from HBase.
I found that 1000 is a good number for our rows when we process them fast,
but that 10 is just as good if we need to spend time on each row. YMMV.
With all that said, I don't know if your caching is set to anything else
than the default of 1, so this whole discussion could be a waste.
Anyways, here's what I do see in your case. LeaseException is a rare one,
usually you get UnknownScannerException (could it be that you have it too?
Do you have a log?). Looking at HRS.next, I see that the only way to get
this is if you race with the ScannerListener. The method does this:
InternalScanner s = this.scanners.get(scannerName);
...
if (s == null) throw new UnknownScannerException("Name: " + scannerName);
...
lease = this.leases.removeLease(scannerName);
And when a scan expires (the lease was just removed from this.leases):
LOG.info("Scanner " + this.scannerName + " lease expired");
InternalScanner s = scanners.remove(this.scannerName);
Which means that your exception happens after you get the InternalScanner in
next(), and before you get to this.leases.removeLease the lease expiration
already started. If you get this all the time, there might be a bigger issue
or else I would expect that you see UnknownScannerException. It could be due
to locking contention, I see that there's a synchronized in removeLease in
the leases queue, but it seems unlikely since what happens in those sync
blocks is fast.
If you do get some UnknownScannerExceptions, they will show how long you
took before going back to the server by say like 65340ms ms passed since the
last invocation, timeout is currently set to 60000 (where 65340 is a number
I just invented, yours will be different). After that you need to find where
you are spending that time.
J-D
On Tue, Oct 18, 2011 at 6:39 AM, Eran Kutner <eran@gigya.com> wrote:
> Hi Stack,
> Yep, reducing the number of map tasks did resolve the problem, however the
> only way I found for doing it is by changing the setting in the
> mapred-site.xml file, which means it will affect all my jobs. Do you know
> if
> there is a way to limit the number of concurrent map tasks a specific job
> may run? I know it was possible with the old JobConf class from the mapred
> namespace but the new Job class doesn't have the setNumMapTasks() method.
> Is it possible to extend the lease timeout? I'm not even sure lease on
> what,
> HDFS blocks? What is it by default?
>
> As for setBatch, what would be a good value? I didn't set it before and
> setting it didn't seem to change anything.
>
> Finally to answer your question regarding the intensity of the job - yes,
> it
> is pretty intense, getting cpu and disk IO utilization to ~90%
>
> Thanks a million!
>
> -eran
>
>
>
> On Tue, Oct 18, 2011 at 13:06, Stack <stack@duboce.net> wrote:
>
> > Look back in the mailing list Eran for more detailed answers but in
> > essence, the below usually means that the client has been away from
> > the server too long. This can happen for a few reasons. If you fetch
> > lots of rows per next on a scanner, processing the batch client side
> > may be taking you longer than the lease timeout. Set down the
> > prefetch size and see if that helps (I'm talking about this:
> >
> >
>
> > ).
> > Throw in a GC on client-side or over on the server-side and it might
> > put you over your lease timeout. Are your mapreduce jobs heavy-duty
> > robbing resources from the running regionservers or datanodes? Try
> > having them run half the mappers and see if that makes it more likely
> > your job will complete.
> >
> > St.Ack
> > P.S IIRC, J-D tripped over a cause recently but I can't find it at the
> mo.
> | http://mail-archives.us.apache.org/mod_mbox/hbase-user/201110.mbox/%3CCAGpTDNddVWqSTiCXrf0hPF38Wb6UVLAPvJmyMm-C5Zw--bP2Wg@mail.gmail.com%3E | CC-MAIN-2019-35 | refinedweb | 819 | 71.24 |
*Note: Moved under my own volition. Go my volition.*
Custom Header Files in C++
Everything that is in this article will probably work/apply to C and C#, although I cannot be sure about this
One of the most valuable things that can be done or used in any programming language is of course the code library, which saves people from having to re-write code again and again, an invaluable tool for anyone who aspires to write good code with any depth or speed.
In C++ code libraries are implemented in two main ways:
1.) Header files
2.) Copying and pasting code
Header files are often the easiest and most universal, having speed and simplicity on their side. Copying and pasting code will make it harder for people to look at and read, but is good in very specific instances.
To use a header file in C++, you must very simply insert, at the beginning of a program:
#include <headername>
In some older compilers you may have to include the .h (for header) after the header name:
#include <headername.h>
You should be able to tell fairly quickly from the compiler results or the compiler's documentation. Modern GCC compiler's should allow you to drop off the .h
Some of the most commonly used and useful header files in C++ are:
iostream -- Data to and from keyboard
ifstream -- Data to and from files
cmath -- Advanced math functions, i.e. square-roots
string -- Allows the string (text) variable
etc...
All well and good, right? Well, one of the great things about modern programming is that it is very easy to create new header files for yourself and anyone else you care to share them with.
You would do this if you had a function that you used in a lot of programs, specific variable types and names, sequences of data, or even a routine help sequence throughout your program.
So how do you create these wondrous little diddies? Quite simply, you would go ahead and make them in the same format that you would a regular C++ file, using the same syntax and the like.
You can, however, leave out the basic header file and int main() that you'll often see or make at the beginning of any program. If you want to though, you can include them, which can save you several other statements.
Here's an example of a decent, random header file:
#include <string>
#include <iostream>
void sodainfo(int sodanum, string sodaname, bool sodabuy);
int sodanum;
string sodaname;
bool sodabuy;
void sodainfo(int sodanum, string sodaname, bool sodabuy)
{
cout << "The type of soda is " << sodaname << "." << endl;
cout << "There are " << sodanum << " sodas to be bought." << endl;
cout << "Would you like to buy a soda?" << endl;
cin >> sodabuy;
}
Obviously this is a very singular function, with only one real use. But I think you'll get the point.
You may be saying to yourself: That's just like a regular C++ program. Well, it is. Its written exactly the same, but it can save you immense amounts of time in the right circumstances.
The only real difference is that you save it in the library directory (for the compiler) in the format "headername.h"
Nice, simple, easy, and effective.
Happy coding!
There is a ghost in the machine, and he is my friend.
A | http://www.antionline.com/showthread.php?249371-C-Custom-Header-Files-(Moved)&p=692438&viewfull=1 | CC-MAIN-2013-48 | refinedweb | 557 | 70.63 |
If you want to store some data on a state and have that information propagated from successor to successor, the easiest way to do this is with
state.globals. However, this can become obnoxious with large amounts of interesting data, doesn't work at all for merging states, and isn't very object-oriented.
The solution to these problems is to write a State Plugin - an appendix to the state that holds data and implements an interface for dealing with the lifecycle of a state.
Let's get started! All state plugins are implemented as subclasses of
angr.SimStatePlugin. Once you've read this document, you can use the API reference for this class to quickly review the semantics of all the interfaces you should implement.
The most important method you need to implement is
copy: it should be annotated with the
memo staticmethod and take a dict called the "memo"---these'll be important later---and returns a copy of the plugin. Short of that, you can do whatever you want. Just make sure to call the superclass initializer!
>>> import angr>>> class MyFirstPlugin(angr.SimStatePlugin):... def __init__(self, foo):... super(MyFirstPlugin, self).__init__()... self.foo = foo...... @angr.SimStatePlugin.memo... def copy(self, memo):... return MyFirstPlugin(self.foo)>>> state = angr.SimState(>> state2 = state.copy()>>> state.my_plugin.>> state3 = state.copy()>>> assert state2.my_plugin.foo == 'bar'>>> assert state3.my_plugin.foo == 'baz'
It works! Note that plugins automatically become available as attributes on the state.
state.get_plugin(name) is also available as a more programmatic interface.
State plugins have access to the state, right? So why isn't it part of the initializer? It turns out, there are a plethora of issues related to initialization order and dependency issues, so to simplify things as much as possible, the state is not part of the initializer but is rather set onto the state in a separate phase, by using the
set_state method. You can override this state if you need to do things like propagate the state to subcomponents or extract architectural information.
>>> def set_state(self, state):... super(SimStatePlugin, self).set_state(state)... self.symbolic_word = claripy.BVS('my_variable', self.state.arch.bits)
Note the
self.state! That's what the super
set_state sets up.
However, there's no guarantee on what order the states will be set onto the plugins in, so if you need to interact with other plugins for initialization, you need to override the
init_state method.
Once again, there's no guarantee on what order these will be called in, so the rule is to make sure you set yourself up good enough during
set_state so that if someone else tries to interact with you, no type errors will happen. Here's an example of a good use of
init_state, to map a memory region in the state. The use of an instance variable (presumably copied as part of
copy()) ensures this only happens the first time the plugin is added to a state.
>>> def init_state(self):... if self.region is None:... self.region = self.state.memory.map_region(SOMEWHERE, 0x1000, 7)
self.state is not the state itself, but rather a weak proxy to the state. You can still use this object as a normal state, but attempts to store it persistently will not work.
The other element besides copying in the state lifecycle is merging. As input you get the plugins to merge and a list of "merge conditions" - symbolic booleans that are the "guard conditions" describing when the values from each state should actually apply.
The important properties of the merge conditions are:
They are mutually exclusive and span an entire domain - exactly one may be satisfied at once, and there will be additional constraints to ensure that at least one must be satisfied.
len(merge_conditions) == len(others) + 1, since
self counts too.
zip(merge_conditions, [self] + others) will correctly pair merge conditions with plugins.
During the merge function, you should mutate
self to become the merged version of itself and all the others, with respect to the merge conditions. This involves using the if-then-else structure that claripy provides. Here is an example of constructing this merged structure by merging a bitvector instance variable called
myvar, producing a binary tree of if-then-else expressions searching for the correct condition:
for other_plugin, condition in zip(others, merge_conditions[1:]): # chop off self's conditionself.myvar = claripy.If(condition, other_plugin.myvar, self.myvar)
This is such a common construction that we provide a utility to perform it automatically:
claripy.ite_cases. The following code snippet is identical to the previous one:
self.myvar = claripy.ite_cases(zip(merge_conditions[1:], [o.myvar for o in others]), self.myvar)
Keep in mind that like the rest of the top-level claripy functions,
ite_cases and
If are also available from
state.solver, and these versions will perform SimActionObject unwrapping if applicable.
The full prototype of the
merge interface is
def merge(self, others, merge_conditions, common_ancestor=None).
others and
merge_conditions have been discussed in depth already.
The common ancestor is the instance of the plugin from the most recent common ancestor of the states being merged. It may not be available for all merges, in which case it will be None. There are no rules for how exactly you should use this to improve the quality of your merges, but you may find it useful in more complex setups.
There is another kind of merging called widening which takes several states and produces a more general state. It is used during static analysis.
TODO: @FISH PLEASE EXPLAIN WHAT THIS MEANS
In order to support serialization of states which contain your plugin, you should implement the
__getstate__/
__setstate__ magic method pair. Keep in mind the following guidelines:
Your serialization result should not include the state.
After deserialization,
set_state() will be called again.
This means that plugins are "detached" from the state and serialized in an isolated environment, and then reattached to the state on deserialization.
You may have components within your state plugins which are large and complicated and start breaking object-orientation in order to make copy/merge work well with the state lifecycle. You're in luck! Things can be state plugins even if they aren't directly attached to a state. A great example of this is
SimFile, which is a state plugin but is stored in the filesystem plugin, and is never used with
SimState.register_plugin. When you're doing this, there are a handful of rules to remember which will keep your plugins safe and happy:
Annotate your copy function with
@SimStatePlugin.memo.
In order to prevent divergence while copying multiple references to the same plugin, make sure you're passing the memo (the argument to copy) to the
.copy of any subplugins. This with the previous point will preserve object identity.
In order to prevent duplicate merging while merging multiple references to the same plugin, there should be a concept of the "owner" of each instance, and only the owner should run the merge routine.
While passing arguments down into sub-plugins
merge() routines, make sure you unwrap
others and
common_ancestor into the appropriate types. For example, if
PluginA contains a
PluginB, the former should do the following:
>>> def merge(self, others, merge_conditions, common_ancestor=None):... # ... merge self... self.plugin_b.merge([o.plugin_b for o in others], merge_conditions,... common_ancestor=None if common_ancestor is None else common_ancestor.plugin_b)
To make it so that a plugin will automatically become available on a state when requested, without having to register it with the state first, you can register it as a default. The following code example will make it so that whenever you access
state.my_plugin, a new instance of
MyPlugin will be instanciated and registered with the state.
MyPlugin.register_default('my_plugin') | https://docs.angr.io/extending-angr/state_plugins | CC-MAIN-2019-30 | refinedweb | 1,286 | 55.24 |
TensorFlow Tutorial For Beginners
Deep learning is a subfield of machine learning that is a set of algorithms that is inspired by the structure and function of the brain.
TensorFlow is the second.
You see? The name “TensorFlow” is derived from the operations which neural networks perform on multidimensional data arrays or tensors! It’s literally a flow of tensors. For now, this is all you need to know about tensors, but you’ll go deeper into this in the next sections!
Today’s TensorFlow tutorial for beginners will introduce you to performing deep learning in an interactive way:
Download the notebook of this tutorial here.
Also, you could be interested in a course on Deep Learning in Python, DataCamp's Keras tutorial or the keras with R tutorial.
Introducing Tensors
To understand tensors well, it’s good to have some working knowledge of linear algebra and vector calculus. You already read in the introduction that tensors are implemented in TensorFlow as multidimensional data arrays, but some more introduction is maybe needed in order to completely grasp tensors and their use in machine learning.
Plane Vectors
Before you go into plane vectors, it’s a good idea to shortly revise the concept of “vectors”; Vectors are special types of matrices, which are rectangular arrays of numbers. Because vectors are ordered collections of numbers, they are often seen as column matrices: they have just one column and a certain number of rows. In other terms, you could also consider vectors as scalar magnitudes that have been given a direction.
Remember: an example of a scalar is “5 meters” or “60 m/sec”, while a vector is, for example, “5 meters north” or “60 m/sec East”. The difference between these two is obviously that the vector has a direction. Nevertheless, these examples that you have seen up until now might seem far off from the vectors that you might encounter when you’re working with machine learning problems. This is normal; The length of a mathematical vector is a pure number: it is absolute. The direction, on the other hand, is relative: it is measured relative to some reference direction and has units of radians or degrees. You usually assume that the direction is positive and in counterclockwise rotation from the reference direction.
Visually, of course, you represent vectors as arrows, as you can see in the picture above. This means that you can consider vectors also as arrows that have direction and length. The direction is indicated by the arrow’s head, while the length is indicated by the length of the arrow.
So what about plane vectors then?
Plane vectors are the most straightforward setup of tensors. They are much like regular vectors as you have seen above, with the sole difference that they find themselves in a vector space. To understand this better, let’s start with an example: you have a vector that is 2 X 1. This means that the vector belongs to the set of real numbers that come paired two at a time. Or, stated differently, they are part of two-space. In such cases, you can represent vectors on the coordinate (x,y) plane with arrows or rays.
Working from this coordinate plane in a standard position where vectors have their endpoint at the origin (0,0), you can derive the x coordinate by looking at the first row of the vector, while you’ll find the y coordinate in the second row. Of course, this standard position doesn’t always need to be maintained: vectors can move parallel to themselves in the plane without experiencing changes.
Note that similarly, for vectors that are of size 3 X 1, you talk about the three-space. You can represent the vector as a three-dimensional figure with arrows pointing to positions in the vectors pace: they are drawn on the standard x, y and z axes.
It’s nice to have these vectors and to represent them on the coordinate plane, but in essence, you have these vectors so that you can perform operations on them and one thing that can help you in doing this is by expressing your vectors as bases or unit vectors.
Unit vectors are vectors with a magnitude of one. You’ll often recognize the unit vector by a lowercase letter with a circumflex, or “hat”. Unit vectors will come in convenient if you want to express a 2-D or 3-D vector as a sum of two or three orthogonal components, such as the x− and y−axes, or the z−axis.
And when you are talking about expressing one vector, for example, as sums of components, you’ll see that you’re talking about component vectors, which are two or more vectors whose sum is that given vector.
Tip: watch this video, which explains what tensors are with the help of simple household objects!
Tensors
Next to plane vectors, also covectors and linear operators are two other cases that all three together have one thing in common: they are specific cases of tensors. You still remember how a vector was characterized in the previous section as scalar magnitudes that have been given a direction. A tensor, then, is the mathematical representation of a physical entity that may be characterized by magnitude and multiple directions.
And, just like you represent a scalar with a single number and a vector with a sequence of three numbers in a 3-dimensional space, for example, a tensor can be represented by an array of 3R numbers in a 3-dimensional space.
The “R” in this notation represents the rank of the tensor: this means that in a 3-dimensional space, a second-rank tensor can be represented by 3 to the power of 2 or 9 numbers. In an N-dimensional space, scalars will still require only one number, while vectors will require N numbers, and tensors will require N^R numbers. This explains why you often hear that scalars are tensors of rank 0: since they have no direction, you can represent them with one number.
With this in mind, it’s relatively easy to recognize scalars, vectors, and tensors and to set them apart: scalars can be represented by a single number, vectors by an ordered set of numbers, and tensors by an array of numbers.
What makes tensors so unique is the combination of components and basis vectors: basis vectors transform one way between reference frames and the components transform in just such a way as to keep the combination between components and basis vectors the same.
Installing TensorFlow
Now that you know more about TensorFlow, it’s time to get started and install the library. Here, it’s good to know that TensorFlow provides APIs for Python, C++, Haskell, Java, Go, Rust, and there’s also a third-party package for R called
tensorflow.
Tip: if you want to know more about deep learning packages in R, consider checking out DataCamp’s keras: Deep Learning in R Tutorial.
In this tutorial, you will download a version of TensorFlow that will enable you to write the code for your deep learning project in Python. On the TensorFlow installation webpage, you’ll see some of the most common ways and latest instructions to install TensorFlow using
virtualenv,
pip, Docker and lastly, there are also some of the other ways of installing TensorFlow on your personal computer.
Note You can also install TensorFlow with Conda if you’re working on Windows. However, since the installation of TensorFlow is community supported, it’s best to check the official installation instructions.
Now that you have gone through the installation process, it’s time to double check that you have installed TensorFlow correctly by importing it into your workspace under the alias
tf:
import tensorflow as tf
Note that the alias that you used in the line of code above is sort of a convention - It’s used to ensure that you remain consistent with other developers that are using TensorFlow in data science projects on the one hand, and with open-source TensorFlow projects on the other hand.
Getting Started With TensorFlow: Basics
You’ll generally write TensorFlow programs, which you run as a chunk; This is at first sight kind of contradictory when you’re working with Python. However, if you would like, you can also use TensorFlow’s Interactive Session, which you can use to work more interactively with the library. This is especially handy when you’re used to working with IPython.
For this tutorial, you’ll focus on the second option: this will help you to get kickstarted with deep learning in TensorFlow. But before you go any further into this, let’s first try out some minor stuff before you start with the heavy lifting.
First, import the
tensorflow library under the alias
tf, as you have seen in the previous section. Then initialize two variables that are actually constants. Pass an array of four numbers to the
constant() function.
Note that you could potentially also pass in an integer, but that more often than not, you’ll find yourself working with arrays. As you saw in the introduction, tensors are all about arrays! So make sure that you pass in an array :) Next, you can use
multiply() to multiply your two variables. Store the result in the
result variable. Lastly, print out the
result with the help of the
print() function.
Note that you have defined constants in the DataCamp Light code chunk above. However, there are two other types of values that you can potentially use, namely placeholders, which are values that are unassigned and that will be initialized by the session when you run it. Like the name already gave away, it’s just a placeholder for a tensor that will always be fed when the session is run; There are also Variables, which are values that can change. The constants, as you might have already gathered, are values that don’t change.
The result of the lines of code is an abstract tensor in the computation graph. However, contrary to what you might expect, the
result doesn’t actually get calculated. It just defined the model, but no process ran to calculate the result. You can see this in the print-out: there’s not really a result that you want to see (namely, 30). This means that TensorFlow has a lazy evaluation!
However, if you do want to see the result, you have to run this code in an interactive session. You can do this in a few ways, as is demonstrated in the DataCamp Light code chunks below:
Note that you can also use the following lines of code to start up an interactive Session, run the
result and close the Session automatically again after printing the
output:
In the code chunks above you have just defined a default Session, but it’s also good to know that you can pass in options as well. You can, for example, specify the
config argument and then use the
ConfigProto protocol buffer to add configuration options for your session.
For example, if you add
config=tf.ConfigProto(log_device_placement=True)
to your Session, you make sure that you log the GPU or CPU device that is assigned to an operation. You will then get information which devices are used in the session for each operation. You could use the following configuration session also, for example, when you use soft constraints for the device placement:
config=tf.ConfigProto(allow_soft_placement=True)
Now that you’ve got TensorFlow installed and imported into your workspace and you’ve gone through the basics of working with this package, it’s time to leave this aside for a moment and turn your attention to your data. Just like always, you’ll first take your time to explore and understand your data better before you start modeling your neural network.
Belgian Traffic Signs: Background
Even though traffic is a topic that is generally known amongst you all, it doesn’t hurt going briefly over the observations that are included in this dataset to see if you understand everything before you start. In essence, in this section, you’ll get up to speed with the domain knowledge that you need to have to go further with this tutorial.
Of course, because I’m Belgian, I’ll make sure you’ll also get some anecdotes :)
- Belgian traffic signs are usually in Dutch and French. This is good to know, but for the dataset that you’ll be working with, it’s not too important!
- There are six categories of traffic signs in Belgium: warning signs, priority signs, prohibitory signs, mandatory signs, signs related to parking and standing still on the road and, lastly, designatory signs.
- On January 1st, 2017, more than 30,000 traffic signs were removed from Belgian roads. These were all prohibitory signs relating to speed.
- Talking about removal, the overwhelming presence of traffic signs has been an ongoing discussion in Belgium (and by extension, the entire European Union).
Now that you have gathered some more background information, it’s time to download the dataset here. You should get the two zip files listed next to "BelgiumTS for Classification (cropped images), which are called "BelgiumTSC_Training" and "BelgiumTSC_Testing".
Tip: if you have downloaded the files or will do so after completing this tutorial, take a look at the folder structure of the data that you’ve downloaded! You’ll see that the testing, as well as the training data folders, contain 61 subfolders, which are the 62 types of traffic signs that you’ll use for classification in this tutorial. Additionally, you’ll find that the files have the file extension
.ppm or Portable Pixmap Format. You have downloaded images of the traffic signs!
Let’s get started with importing the data into your workspace. Let’s start with the lines of code that appear below the User-Defined Function (UDF)
load_data():
- First, set your
ROOT_PATH. This path is the one where you have made the directory with your training and test data.
- Next, you can add the specific paths to your
ROOT_PATHwith the help of the
join()function. You store these two specific paths in
train_data_directoryand
test_data_directory.
- You see that after, you can call the
load_data()function and pass in the
train_data_directoryto it.
- Now, the
load_data()function itself starts off by gathering all the subdirectories that are present in the
train_data_directory; It does so with the help of list comprehension, which is quite a natural way of constructing lists - it basically says that, if you find something in the
train_data_directory, you’ll double check whether this is a directory, and if it is one, you’ll add it to your list. Remember that each subdirectory represents a label.
- Next, you have to loop through the subdirectories. You first initialize two lists,
labelsand
images. Next, you gather the paths of the subdirectories and the file names of the images that are stored in these subdirectories. After, you can collect the data in the two lists with the help of the
append()function. ROOT_PATH = "/your/root/path" train_data_directory = os.path.join(ROOT_PATH, "TrafficSigns/Training") test_data_directory = os.path.join(ROOT_PATH, "TrafficSigns/Testing") images, labels = load_data(train_data_directory)
Note that in the above code chunk, the training and test data are located in folders named "Training" and "Testing", which are both subdirectories of another directory "TrafficSigns". On a local machine, this could look something like "/Users/Name/Downloads/TrafficSigns", with then two subfolders called "Training" and "Testing".
Tip: review how to write functions in Python with DataCamp's Python Functions Tutorial.
Traffic Sign Statistics
With your data loaded in, it’s time for some data inspection! You can start with a pretty simple analysis with the help of the
ndim and
size attributes of the
images array:
Note that the
images and
labels variables are lists, so you might need to use
np.array() to convert the variables to an array in your own workspace. This has been done for you here!
Note that the
images[0] that you printed out is, in fact, one single image that is represented by arrays in arrays! This might seem counterintuitive at first, but it’s something that you’ll get used to as you go further into working with images in machine learning or deep learning applications.
Next, you can also take a small look at the
labels, but you shouldn’t see too many surprises at this point:
These numbers already give you some insights into how successful your import was and the exact size of your data. At first sight, everything has been executed the way you expected it to, and you see that the size of the array is considerable if you take into account that you’re dealing with arrays within arrays.
Tip try adding the following attributes to your arrays to get more information about the memory layout, the length of one array element in bytes and the total consumed bytes by the array’s elements with the
flags,
itemsize, and
nbytes attributes. You can test this out in the IPython console in the DataCamp Light chunk above!
Next, you can also take a look at the distribution of the traffic signs:
Awesome job! Now let’s take a closer look at the histogram that you made!
You clearly see that not all types of traffic signs are equally represented in the dataset. This is something that you’ll deal with later when you’re manipulating the data before you start modeling your neural network.
At first sight, you see that there are labels that are more heavily present in the dataset than others: the labels 22, 32, 38, and 61 definitely jump out. At this point, it’s nice to keep this in mind, but you’ll definitely go further into this in the next section!
Visualizing The Traffic Signs
The previous, small analyses or checks have already given you some idea of the data that you’re working with, but when your data mostly consists of images, the step that you should take to explore your data is by visualizing it.
Let’s check out some random traffic signs:
- First, make sure that you import the
pyplotmodule of the
matplotlibpackage under the common alias
plt.
- Then, you’re going to make a list with 4 random numbers. These will be used to select traffic signs from the
imagesarray that you have just inspected in the previous section. In this case, you go for
300,
2250,
3650and
4000.
- Next, you’ll say that for every element in the length of that list, so from 0 to 4, you’re going to create subplots without axes (so that they don’t go running with all the attention and your focus is solely on the images!). In these subplots, you’re going to show a specific image from the
imagesarray that is in accordance with the number at the index
i. In the first loop, you’ll pass
300to
images[], in the second round
2250, and so on. Lastly, you’ll adjust the subplots so that there’s enough width in between them.
- The last thing that remains is to show your plot with the help of the
show()function!
There you go:
# Import the `pyplot` module of `matplotlib` import matplotlib.pyplot as plt # Determine the (random) indexes of the images that you want to see traffic_signs = [300, 2250, 3650, 4000] # Fill out the subplots with the random images that you defined for i in range(len(traffic_signs)): plt.subplot(1, 4, i+1) plt.axis('off') plt.imshow(images[traffic_signs[i]]) plt.subplots_adjust(wspace=0.5) plt.show()
As you guessed by the 62 labels that are included in this dataset, the signs are different from each other.
But what else do you notice? Take another close look at the images below:
These four images are not of the same size!
You can obviously toy around with the numbers that are contained in the
traffic_signs list and follow up more thoroughly on this observation, but be as it may, this is an important observation which you will need to take into account when you start working more towards manipulating your data so that you can feed it to the neural network.
Let’s confirm the hypothesis of the differing sizes by printing the shape, the minimum and maximum values of the specific images that you have included into the subplots.
The code below heavily resembles the one that you used to create the above plot, but differs in the fact that here, you’ll alternate sizes and images instead of plotting just the images next to each other:
# Import `matplotlib` import matplotlib.pyplot as plt # Determine the (random) indexes of the images traffic_signs = [300, 2250, 3650, 4000] # Fill out the subplots with the random images and add shape, min and max values for i in range(len(traffic_signs)): plt.subplot(1, 4, i+1) plt.axis('off') plt.imshow(images[traffic_signs[i]]) plt.subplots_adjust(wspace=0.5) plt.show() print("shape: {0}, min: {1}, max: {2}".format(images[traffic_signs[i]].shape, images[traffic_signs[i]].min(), images[traffic_signs[i]].max()))
Note how you use the
format() method on the string
"shape: {0}, min: {1}, max: {2}" to fill out the arguments
{0},
{1}, and
{2} that you defined.
Now that you have seen loose images, you might also want to revisit the histogram that you printed out in the first steps of your data exploration; You can easily do this by plotting an overview of all the 62 classes and one image that belongs to each class:
# Import the `pyplot` module as `plt` import matplotlib.pyplot as pl) # Show the plot plt.show()
Note that even though you define 64 subplots, not all of them will show images (as there are only 62 labels!). Note also that again, you don’t include any axes to make sure that the readers’ attention doesn’t dwell far from the main topic: the traffic signs!
As you mostly guessed in the histogram above, there are considerably more traffic signs with labels 22, 32, 38, and 61. This hypothesis is now confirmed in this plot: you see that there are 375 instances with label 22, 316 instances with label 32, 285 instances with label 38 and, lastly, 282 instances with label 61.
One of the most interesting questions that you could ask yourself now is whether there’s a connection between all of these instances - maybe all of them are designatory signs?
Let’s take a closer look: you see that label 22 and 32 are prohibitory signs, but that labels 38 and 61 are designatory and a prioritory signs, respectively. This means that there’s not an immediate connection between these four, except for the fact that half of the signs that have a substantial presence in the dataset is of the prohibitory kind.
Feature Extraction
Now that you have thoroughly explored your data, it’s time to get your hands dirty! Let’s recap briefly what you discovered to make sure that you don’t forget any steps in the manipulation:
- The size of the images was unequal;
- There are 62 labels or target values (as your labels start at 0 and end at 61);
- The distribution of the traffic sign values is pretty unequal; There wasn’t really any connection between the signs that were heavily present in the dataset.
Now that you have a clear idea of what you need to improve, you can start with manipulating your data in such a way that it’s ready to be fed to the neural network or whichever model you want to feed it too. Let’s start first with extracting some features - you’ll rescale the images, and you’ll convert the images that are held in the
images array to grayscale. You’ll do this color conversion mainly because the color matters less in classification questions like the one you’re trying to answer now. For detection, however, the color does play a big part! So in those cases, it’s not needed to do that conversion!
Rescaling Images
To tackle the differing image sizes, you’re going to rescale the images; You can easily do this with the help of the
skimage or Scikit-Image library, which is a collection of algorithms for image processing.
In this case, the
transform module will come in handy, as it offers you a
resize() function; You’ll see that you make use of list comprehension (again!) to resize each image to 28 by 28 pixels. Once again, you see that the way you actually form the list: for every image that you find in the
images array, you’ll perform the transformation operation that you borrow from the
skimage library. Finally, you store the result in the
images28 variable:
# Import the `transform` module from `skimage` from skimage import transform # Rescale the images in the `images` array images28 = [transform.resize(image, (28, 28)) for image in images]
This was fairly easy wasn’t it?
Note that the images are now four-dimensional: if you convert
images28 to an array and if you concatenate the attribute
shape to it, you’ll see that the printout tells you that
images28’s dimensions are
(4575, 28, 28, 3). The images are 784-dimensional (because your images are 28 by 28 pixels).
You can check the result of the rescaling operation by re-using the code that you used above to plot the 4 random images with the help of the
traffic_signs variable. Just don’t forget to change all references to
images to
images28.
Note that because you rescaled, your
min and
max values have also changed; They seem to be all in the same ranges now, which is really great because then you don’t necessarily need to normalize your data!
Image Conversion to Grayscale
As said in the introduction to this section of the tutorial, the color in the pictures matters less when you’re trying to answer a classification question. That’s why you’ll also go through the trouble of converting the images to grayscale.
Note, however, that you can also test out on your own what would happen to the final results of your model if you don’t follow through with this specific step.
Just like with the rescaling, you can again count on the Scikit-Image library to help you out; In this case, it’s the
color module with its
rgb2gray() function that you need to use to get where you need to be.
That’s going to be nice and easy!
However, don’t forget to convert the
images28 variable back to an array, as the
rgb2gray() function does expect an array as an argument.
# Import `rgb2gray` from `skimage.color` from skimage.color import rgb2gray # Convert `images28` to an array images28 = np.array(images28) # Convert `images28` to grayscale images28 = rgb2gray(images28)
Double check the result of your grayscale conversion by plotting some of the images; Here, you can again re-use and slightly adapt some of the code to show the adjusted images:
import matplotlib.pyplot as plt traffic_signs = [300, 2250, 3650, 4000] for i in range(len(traffic_signs)): plt.subplot(1, 4, i+1) plt.axis('off') plt.imshow(images28[traffic_signs[i]], cmap="gray") plt.subplots_adjust(wspace=0.5) # Show the plot plt.show()
Note that you indeed have to specify the color map or
cmap and set it to
"gray" to plot the images in grayscale. That is because
imshow() by default uses, by default, a heatmap-like color map. Read more here.
Tip: since you have been re-using this function quite a bit in this tutorial, you might look into how you can make it into a function :)
These two steps are very basic ones; Other operations that you could have tried out on your data include data augmentation (rotating, blurring, shifting, changing brightness,…). If you want, you could also set up an entire pipeline of data manipulation operations through which you send your images.
Deep Learning With TensorFlow
Now that you have explored and manipulated your data, it’s time to construct your neural network architecture with the help of the TensorFlow package!
Modeling The Neural Network
Just like you might have done with Keras, it’s time to build up your neural network, layer by layer.
If you haven’t done so already, import
tensorflow into your workspace under the conventional alias
tf. Then, you can initialize the Graph with the help of
Graph(). You use this function to define the computation. Note that with the Graph, you don’t compute anything, because it doesn’t hold any values. It just defines the operations that you want to be running later.
In this case, you set up a default context with the help of
as_default(), which returns a context manager that makes this specific Graph the default graph. You use this method if you want to create multiple graphs in the same process: with this function, you have a global default graph to which all operations will be added if you don’t explicitly create a new graph.
Next, you’re ready to add operations to your graph. As you might remember from working with Keras, you build up your model, and then in compiling it, you define a loss function, an optimizer, and a metric. This now all happens in one step when you work with TensorFlow:
- First, you define placeholders for inputs and labels because you won’t put in the “real” data yet. Remember that placeholders are values that are unassigned and that will be initialized by the session when you run it. So when you finally run the session, these placeholders will get the values of your dataset that you pass in the
run()function!
- Then, you build up the network. You first start by flattening the input with the help of the
flatten()function, which will give you an array of shape
[None, 784]instead of the
[None, 28, 28], which is the shape of your grayscale images.
- After you have flattened the input, you construct a fully connected layer that generates logits of size
[None, 62]. Logits is the function operates on the unscaled output of previous layers, and that uses the relative scale to understand the units is linear.
- With the multi-layer perceptron built out, you can define the loss function. The choice for a loss function depends on the task that you have at hand: in this case, you make use of
sparse_softmax_cross_entropy_with_logits()
reduce_mean(), which computes the mean of elements across dimensions of a tensor.
0.001.
# Import `tensorflow` import tensorflow as tf # Initialize placeholders x = tf.placeholder(dtype = tf.float32, shape = [None, 28, 28]) y = tf.placeholder(dtype = tf.int32, shape = [None]) # Flatten the input data images_flat = tf.contrib.layers.flatten(x) # Fully connected layer logits = tf.contrib.layers.fully_connected(images_flat, 62, tf.nn.relu) # Define a loss function loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(labels = y, logits = logits)) # Define an optimizer train_op = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss) # Convert logits to label indexes correct_pred = tf.argmax(logits, 1) # Define an accuracy metric accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
You have now successfully created your first neural network with TensorFlow!
If you want, you can also print out the values of (most of) the variables to get a quick recap or checkup of what you have just coded up:
print("images_flat: ", images_flat) print("logits: ", logits) print("loss: ", loss) print("predicted_labels: ", correct_pred)
Tip: if you see an error like “
module 'pandas' has no attribute 'computation'”, consider upgrading the packages
dask by running
pip install --upgrade dask in your command line. See this StackOverflow post for more information.
Running The Neural Network
Now that you have built up your model layer by layer, it’s time to actually run it! To do this, you first need to initialize a session with the help of
Session() to which you can pass your
graph that you defined in the previous section. Next, you can run the session with
run(), to which you pass the initialized operations in the form of the
init variable that you also defined in the previous section.
Next, you can use this initialized session to start epochs or training loops. In this case, you pick
201 because you want to be able to register the last
loss_value; In the loop, you run the session with the training optimizer and the loss (or accuracy) metric that you defined in the previous section. You also pass a
feed_dict argument, with which you feed data to the model. After every 10 epochs, you’ll get a log that gives you more insights into the loss or cost of the model.
As you have seen in the section on the TensorFlow basics, there is no need to close the session manually; this is done for you. However, if you want to try out a different setup, you probably will need to do so with
sess.close() if you have defined your session as
sess, like in the code chunk below:
tf.set_random_seed(1234) sess = tf.Session() sess.run(tf.global_variables_initializer()) for i in range(201): print('EPOCH', i) _, accuracy_val = sess.run([train_op, accuracy], feed_dict={x: images28, y: labels}) if i % 10 == 0: print("Loss: ", loss) print('DONE WITH EPOCH')
Remember that you can also run the following piece of code, but that one will immediately close the session afterward, just like you saw in the introduction of this tutorial:
tf.set_random_seed(1234) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for i in range(201): _, loss_value = sess.run([train_op, loss], feed_dict={x: images28, y: labels}) if i % 10 == 0: print("Loss: ", loss)
Note that you make use of
global_variables_initializer() because the
initialize_all_variables() function is deprecated.
You have now successfully trained your model! That wasn’t too hard, was it?
Evaluating Your Neural Network
You’re not entirely there yet; You still need to evaluate your neural network. In this case, you can already try to get a glimpse of well your model performs by picking 10 random images and by comparing the predicted labels with the real labels.
You can first print them out, but why not use
matplotlib to plot the traffic signs themselves and make a visual comparison?
# Import `matplotlib` import matplotlib.pyplot as plt import random # Pick 10 random images sample_indexes = random.sample(range(len(images28)), 10) sample_images = [images28[i] for i in sample_indexes] sample_labels = [labels[i] for i in sample_indexes] # Run the "correct_pred" operation predicted = sess.run([correct_pred], feed_dict={x: sample_images})[0] # Print the real and predicted labels print(sample_labels) print(predicted) # Display the predictions and the ground truth visually. fig = plt.figure(figsize=(10, 10)) for i in range(len(sample_images)): truth = sample_labels[i] prediction = predicted[i] plt.subplot(5, 2,1+i) plt.axis('off') color='green' if truth == prediction else 'red' plt.text(40, 10, "Truth: {0}\nPrediction: {1}".format(truth, prediction), fontsize=12, color=color) plt.imshow(sample_images[i], cmap="gray") plt.show()
However, only looking at random images don’t give you many insights into how well your model actually performs. That’s why you’ll load in the test data.
Note that you make use of the
load_data() function, which you defined at the start of this tutorial.
# Import `skimage` from skimage import transform # Load the test data test_images, test_labels = load_data(test_data_directory) # Transform the images to 28 by 28 pixels test_images28 = [transform.resize(image, (28, 28)) for image in test_images] # Convert to grayscale from skimage.color import rgb2gray test_images28 = rgb2gray(np.array(test_images28)) # Run predictions against the full test set. predicted = sess.run([correct_pred], feed_dict={x: test_images28})[0] # Calculate correct matches match_count = sum([int(y == y_) for y, y_ in zip(test_labels, predicted)]) # Calculate the accuracy accuracy = match_count / len(test_labels) # Print the accuracy print("Accuracy: {:.3f}".format(accuracy))
Remember to close off the session with
sess.close() in case you didn't use the
with tf.Session() as sess: to start your TensorFlow session.
Where To Go Next?
If you want to continue working with this dataset and the model that you have put together in this tutorial, try out the following things:
- Apply regularized LDA on the data before you feed it to your model. This is a suggestion that comes from one of the original papers, written by the researchers that gathered and analyzed this dataset.
- You could also, as said in the tutorial itself, also look at some other data augmentation operations that you can perform on the traffic sign images. Additionally, you could also try to tweak this network further; The one that you have created now was fairly simple.
- Early stopping: Keep track of the training and testing error while you train the neural network. Stop training when both errors go down and then suddenly go back up - this is a sign that the neural network has started to overfit the training data.
- Play around with the optimizers.
Make sure to check out the Machine Learning With TensorFlow book, written by Nishant Shukla.
Tip also check out the TensorFlow Playground and the TensorBoard.
If you want to keep on working with images, definitely check out DataCamp’s scikit-learn tutorial, which tackles the MNIST dataset with the help of PCA, K-Means and Support Vector Machines (SVMs). Or take a look at other tutorials such as this one that uses the Belgian traffic signs dataset. | https://www.datacamp.com/community/tutorials/tensorflow-tutorial?utm_campaign=meetedgar&utm_medium=social&utm_source=meetedgar.com | CC-MAIN-2019-22 | refinedweb | 6,259 | 59.43 |
dct 0.0.4
dct: ^0.0.4 copied to clipboard
package runner for dart
Use this package as a library
Depend on it
Run this command:
With Dart:
$ dart pub add dct
With Flutter:
$ flutter pub pub add dct
This will add a line like this to your package's pubspec.yaml (and run an implicit
dart pub get):
dependencies: dct: ^0.0.4
Alternatively, your editor might support
dart pub get or
flutter pub get.
Check the docs for your editor to learn more.
Import it
Now in your Dart code, you can use:
import 'package:dct/download.dart'; | https://pub.dev/packages/dct/install | CC-MAIN-2021-17 | refinedweb | 101 | 73.17 |
-------------------------------------------------------------------------------- Fedora Update Notification FEDORA-2008-9669 2008-11-14 11:11:49 -------------------------------------------------------------------------------- Name : gnome-python2-extras Product : Fedora 9 Version : 2.19.1 Release : 21.fc9 URL : Summary : The sources for additional. PyGNOME Python extension modules. Description : The gnome-python-extra package contains the source packages for additional Python bindings for GNOME. It should be used together with gnome-python. -------------------------------------------------------------------------------- Update Information: Updated firefox and xulrunner packages that fix various security issues are now available for Fedora Core[1]. All firefox users and users of packages depending on xulrunner[2] should upgrade to these updated packages, which contain patches that correct these issues. [1]- vulnerabilities/firefox30.html#firefox3.0.4 [2] cairo-dock chmsee devhelp epiphany epiphany-extensions evolution-rss galeon gnome-python2-extras gnome- web-photo google-gadgets gtkmozembedmm kazehakase Miro mozvoikko mugshot ruby- gnome2 totem yelp Provides Python bindings for libgdl on PPC64. This update fixes a build break. -------------------------------------------------------------------------------- ChangeLog: * Wed Nov 12 2008 Christopher Aillon <caillon redhat com> - 2.19.1-21 - Rebuild against newer gecko * Mon Oct 27 2008 Matthew Barnes <mbarnes redhat com> - 2.19.1-20 - Provide Python bindings for libgdl on ppc64 (RH bug #468693). * Thu Oct 9 2008 Matthew Barnes <mbarnes redhat com> - 2.19.1-19 - Remove gtkspell-static patch. Appears to not be needed anymore. * Wed Sep 24 2008 Christopher Aillon <caillon redhat com> - 2.19.1-18 - Rebuild against newer gecko * Fri Jul 18 2008 Paul W. Frields <stickster gmail com> - 2.19.1-17.fc9 - Rebuild against new xulrunner (1.9.0.4) and fix dependencies * Fri Jun 20 2008 Martin Stransky <stransky redhat com> - 2.19.1-16.fc9 - Rebuild against new gecko-libs 1.9 (xulrunner) -------------------------------------------------------------------------------- References: [ 1 ] Bug #470903 - CVE-2008-4582 Mozilla same origin policy bypass [ 2 ] Bug #470876 - CVE-2008-5015 Mozilla file: URIs inherit chrome privileges [ 3 ] Bug #470883 - CVE-2008-5017 Mozilla crash with evidence of memory corruption [ 4 ] Bug #470889 - CVE-2008-5019 Mozilla XSS via session restore [ 5 ] Bug #470894 - CVE-2008-5021 Mozilla crash and remote code execution in nsFrameManager [ 6 ] Bug #470898 - CVE-2008-5023 Mozilla -moz-binding property bypasses security checks on codebase principals [ 7 ] Bug #470873 - CVE-2008-5014 Mozilla crash and remote code execution via __proto__ tampering [ 8 ] Bug #470881 - CVE-2008-5016 Mozilla crash with evidence of memory corruption [ 9 ] Bug #470884 - CVE-2008-5018 Mozilla crash with evidence of memory corruption [ 10 ] Bug #470892 - CVE-2008-0017 Mozilla buffer overflow in http-index-format parser [ 11 ] Bug #470895 - CVE-2008-5022 Mozilla nsXMLHttpRequest::NotifyEventListeners() same-origin violation [ 12 ] Bug #470902 - CVE-2008-5024 Mozilla parsing error in E4X default namespace -------------------------------------------------------------------------------- This update can be installed with the "yum" update program. Use su -c 'yum update gnome-python2-extras' at the command line. For more information, refer to "Managing Software with yum", available at. All packages are signed with the Fedora Project GPG key. More details on the GPG keys used by the Fedora Project can be found at -------------------------------------------------------------------------------- | https://www.redhat.com/archives/fedora-package-announce/2008-November/msg00394.html | CC-MAIN-2016-26 | refinedweb | 496 | 56.05 |
rx_predict_rx_dtree
Usage
revoscalepy.rx_predict_rx_dtree(model_object=None, data: revoscalepy.datasource.RxDataSource.RxDataSource = None, output_data: typing.Union[revoscalepy.datasource.RxDataSource.RxDataSource, str] = None, predict_var_names: list = None, write_model_vars: bool = False, extra_vars_to_write: list = None, append: typing.Union[list, str] = 'none', overwrite: bool = False, type: typing.Union[list, str] = None, remove_missings: bool = False, compute_residuals: bool = False, residual_type: typing.Union[list, str] = 'usual', residual_var_names: list = None, blocks_per_read: int = None, report_progress: int = None, verbose: int = 0, xdf_compression_level: int = None, compute_context=None, **kwargs)
Description
Calculate predicted or fitted values for a data set from an rx_dtree object.
Arguments
model_object
Object returned from a call to rx_dtree.
data
A data frame or an RxXdfData data source object to be used for predictions._data
An RxXdfData data source object or existing data frame to store predictions.
predict_var_names
List of strings specifying name(s) to give to the prediction results
write_model_vars
Bool_vars_to_write
None or list of strings of additional variables names from the input data or transforms to include in the output_data. If write_model_vars is True, model variables will be included as well.
append
Either “none” to create a new files or “rows” to append rows to an existing file. If output_data exists and append is “none”, the overwrite argument must be set to True. Ignored for data frames.
overwrite
Bool value. If True, an existing output_data will be overwritten. overwrite is ignored if appending rows. Ignored for data frames.
type
the type of prediction desired. Supported choices are: “vector”, “prob”, “class”, and “matrix”.
remove_missings
Bool value. If True, rows with missing values are removed.
compute_residuals
Bool value. If True, residuals are computed.
residual_type
Indicates the type of residual desired.
residual_var_names
List of strings specifying name(s) to give to the residual results.
blocks_per_read
Number of blocks to read for each chunk of data read from the data source. If the data and output_data are the same file, blocks_per_read must be 1.
report_progress.
xdf_compression_level
Integer in the range of -1 to 9 indicating the compression level for the output data if written to an .xdf file.
compute_context
A RxComputeContext object for prediction.
kwargs
Additional parameters
Returns
A data frame or a data source object of prediction results.
See also
Example
import os from revoscalepy import rx_dtree, rx_predict_rx_dtree,"} cost = [2,3] dtree = rx_dtree(formula, data = kyphosis, pweights = "Age", method = method, parms = parms, cost = cost, max_num_bins = 100) rx_pred = rx_predict_rx_dtree(dtree, data = kyphosis) rx_pred.head() # regression formula = "Age ~ Number + Start" method = "anova" parms = {'prior': [0.8, 0.2], 'loss': [0, 2, 3, 0], 'split': "gini"} cost = [2,3] dtree = rx_dtree(formula, data = kyphosis, pweights = "Kyphosis", method = method, parms = parms, cost = cost, max_num_bins = 100) rx_pred = rx_predict_rx_dtree(dtree, data = kyphosis) rx_pred.head() | https://docs.microsoft.com/en-us/machine-learning-server/python-reference/revoscalepy/rx-predict-rx-dtree | CC-MAIN-2018-51 | refinedweb | 436 | 51.24 |
I have some programming experience, but I'm still pretty new at C++. As one of my first programs I created a basic guessing game. I use #define to specify the bounds for the random number, but when I try and create the number it gives me a syntax error. I found out that it doesn't like it when I use UPPER_BOUND in the creation of the random number. If I create a new variable and set it to the same value, then it works, but I shouldn't have to create another variable when UPPER_BOUND is already defined.
Anyway, here's what I have. The code works, but I want to replace lines 26-27 with line 24. For some reason trying to use line 24 instead gives me a syntax error.
Any idea why I just can't use what I've already defined?
Code:#include <iostream> #include <ctime> // time() #include <cstdlib> // rand() and srand() #define UPPER_BOUND 100; #define LOWER_BOUND 1; #define INIT_GUESSES 10; using namespace std; struct guessingGame { int target; int guesses; } game; int main () { // Initial welcome message cout << "I'm thinking of a number between " << LOWER_BOUND; cout << " and " << UPPER_BOUND; cout << "." << endl; // Generate a random number; LOWER_BOUND < r < UPPER_BOUND srand(time(0)); // game.target = (rand() % UPPER_BOUND) + LOWER_BOUND; <-- THE PROBLEM LINE int up = UPPER_BOUND; // I want to replace these two lines with the one above game.target = (rand() % up) + LOWER_BOUND; game.guesses = INIT_GUESSES; // Initialize some variables int guess; bool correct = false; // The guessing section guess: cout << "(" << game.guesses << " guesses left): "; cin >> guess; if (guess < game.target) { cout << "Higher." << endl; } else if (guess > game.target) { cout << "Lower." << endl; } else { // The guess was correct correct = true; goto finished; } game.guesses--; if (game.guesses == 0) { // Ran out of guesses goto finished; } goto guess; finished: if (correct) { cout << "Congratulations! The answer was " << game.target << ".\n"; } else { cout << "Sorry, the correct answer was " << game.target << ".\n"; } } | https://cboard.cprogramming.com/cplusplus-programming/108606-sharpdefine-problem.html | CC-MAIN-2017-26 | refinedweb | 314 | 75.5 |
/** * ; import java.io.Closeable; import java.io.IOException; import java.io.InputStream; import java.io.OutputStream; import java.io.PrintStream; import org.slf4j.Logger; /* * This code is originally from HDFS, see the similarly named files there * in case of bug fixing, history, etc... */ public class IOUtils { /** * Closes the stream ignoring {@link IOException}. Must only be called in * cleaning up from exception handlers. * * @param stream * the Stream to close */ public static void closeStream(Closeable stream) { cleanup(null, stream); } /** * Close the Closeable objects and <b>ignore</b> any {@link IOException} or * null pointers. Must only be used for cleanup in exception handlers. * * @param log * the log to record problems to at debug level. Can be null. * @param closeables * the objects to close */ public static void cleanup(Logger log, Closeable... closeables) { for (Closeable c : closeables) { if (c != null) { try { c.close(); } catch (IOException e) { if (log != null) { log.warn("Exception in closing " + c, e); } } } } } /** * Copies from one stream to another. * * @param in * InputStrem to read from * @param out * OutputStream to write to * @param buffSize * the size of the buffer * @param close * whether or not close the InputStream and OutputStream at the * end. The streams are closed in the finally clause. */ public static void copyBytes(InputStream in, OutputStream out, int buffSize, boolean close) throws IOException { try { copyBytes(in, out, buffSize); if (close) { out.close(); out = null; in.close(); in = null; } } finally { if (close) { closeStream(out); closeStream(in); } } } /** * Copies from one stream to another. * * @param in * InputStrem to read from * @param out * OutputStream to write to * @param buffSize * the size of the buffer */ public static void copyBytes(InputStream in, OutputStream out, int buffSize) throws IOException { PrintStream ps = out instanceof PrintStream ? (PrintStream) out : null; byte buf[] = new byte[buffSize]; int bytesRead = in.read(buf); while (bytesRead >= 0) { out.write(buf, 0, bytesRead); if ((ps != null) && ps.checkError()) { throw new IOException("Unable to write to output stream."); } bytesRead = in.read(buf); } } } | https://www.programcreek.com/java-api-examples/?code=maoling/fuck_zookeeper/fuck_zookeeper-master/src/java/main/org/apache/zookeeper/common/IOUtils.java | CC-MAIN-2019-26 | refinedweb | 314 | 69.48 |
08 April 2013 18:10 [Source: ICIS news]
HOUSTON (ICIS)--Acrylonitrile (ACN) exports fell by almost 53% in February from the year-ago period, according to data released on Monday by the US International Trade Commission (ITC).
February 2013 exports were 21,661 tonnes, down from 45,768 tonnes a year ago. Month to month, February imports were also down about 42% from 37,614 tonnes in January 2013.
The sharpest drop was in exports to ?xml:namespace>
US producers of ACN include Ascend Performance Materials; Cornerstone Chemical and INEOS Nitriles.
Combined, those producers have an annual output capacity of 1.46 | http://www.icis.com/Articles/2013/04/08/9656626/acrylonitrile-february-exports-fell-by-53-from-year-ago-itc.html | CC-MAIN-2015-22 | refinedweb | 102 | 55.54 |
Installing and Configuring DNS
The Active Directory Installation wizard offers to install DNS if the wizard does not detect a proper DNS zone configuration during the installation of Active Directory. However, you should not rely on the wizard for these tasks. Many bug reports have been submitted regarding installation bases that relied on the wizard. Also keep in mind that the Active Directory Installation wizard does not install a reverse lookup zone.
The Windows 2000 DNS service can coexist with or migrate other DNS services, including the popular Berkeley Internet Name Domain (BIND) DNS service. One great place to find BIND information is the Internet Software Consortium Web site. To migrate from BIND, you must transfer the BIND zone and boot files to the Microsoft DNS service.
Windows 2000 DNS can also upgrade or coexist with Windows NT 4.0 DNS servers.
For the exam, you should know how to configure DNS for Active Directory. Here is the procedure.
1. Click Start –> Settings –> Control Panel.
2. Double-click Add/Remove Programs and then click Add/Remove Windows Components.
3. In Components, select Networking Services and then click Details.
4. In Subcomponents of Networking Services, select the Domain Name System (DNS) check box, click OK, and then click Next.
5. In Copy Files From, type the full path to the Windows 2000 distribution files and then click OK.
To host Active Directory, you must properly configure DNS with a zone for the Active Directory namespace. You should create both zone types for a proper DNS implementation for your Active Directory namespace — that is, a forward lookup zone and a reverse lookup zone. Read on to discover how.
Creating a forward lookup zone
To create a forward lookup zone:
1. Click Start –> Programs –> Administrative Tools –> DNS.
Windows 2000 launches the DNS Microsoft Management Console, from which you can perform your DNS administration.
2. Expand the DNS server.
3. Right-click the Forward Lookup Zone folder and choose New Zone.
4. Click Next to continue when the New Zone wizard appears.
The wizard takes the pain out of DNS administration.
5. Ensure that Standard Primary is selected and click Next.
6. Ensure that Forward Lookup Zone is selected and click Next.
7. At the New Zone page, type the name of your zone (for example, fordummies.com) and click Next.
8. Select Create a New File With This File Name and click Next.
9. Click Finish.
Creating a reverse lookup zone
To create a reverse lookup zone:
1. Click Start –> Programs –> Administrative Tools –> DNS.
2. Expand the DNS server.
3. Right-click your server and choose New Zone.
4. Click Next to continue when the New Zone wizard appears.
5. Ensure that Standard Primary is selected and click Next.
6. Ensure that Reverse Lookup Zone is selected and click Next.
7. Ensure that Network ID is selected, type your network ID in the Network ID field, and click Next.
8. Select Create a New File With This File Name and click Next.
9. Click Finish.
As far as Active Directory is concerned, your DNS server is almost ready. You should now configure the forward and reverse lookup zones for dynamic updating so that you do not get stuck creating all the records required for Active Directory yourself! | http://www.dummies.com/programming/networking/installing-and-configuring-dns/ | CC-MAIN-2016-50 | refinedweb | 544 | 66.94 |
- Advertisement
NitMember
Content Count285
Joined
Last visited
Community Reputation533 Good
About Nit
- RankGDNet+
Unnecessary Challenges
Nit commented on a_insomniac's blog entry in Keeping up with yesterdayThis happened to me a few years back, and I know how frustrating this situation must be for you. I'm currently using Amazon cloud (5 GB free storage) to back up my critical data (svn repository dump files mainly). If possible, i'd recommend that you identify the stuff you can't live without and get it down to a small amount that fits in the free cloud storage plans (or maybe use a few services that are free).
Standalone SVN Repository
Nit commented on Nit's blog entry in Abwood's Coding NotesIt's funny you say that, as I've been moving Git up on my priority list. Posts from promit and suprpig really brought my attention to some of SVNs limitations. That said, SVN does not current hold me back; it does everything I need it to do right now (which is to manage the evolution of my codebase, and allow me to diff between past versions). I'm the only developer adding to the repository, so a simple solution seems acceptable for now. Switching to Git is mostly motivated by its learning experience.
Interfacing C++ and Python (Part 1 & 2 Sample source code)
Nit posted a blog entry in Abwood's Coding NotesBy request, I'm attaching some sample source code that demonstrates the topics covered in my last two postings. This is a basic solution for calling into Python from C++. I've also included in the attached zipfile, notes on how to configure a Visual Studio 2008 solution to use Python26. I expect it is straightforward to translate these steps to work with a different compiler, or another version of Python. For completeness, these notes are included here: Notes for creating the Project in Visual Studio 2008 Assumes Python26 is installed at C:\Python26, so be sure to update the paths listed below if you are using a different version of python.Create a New Project --> Win32 Console Application, No precompiled Headers, Empty ProjectAdd main.cpp, pythonCallbacks.h, pythonCallbacks.cpp files to solutionRight click on PythonCppExample --> PropertiesC/C++ --> General --> Additional Include Directories = C:\Python26\includeLinker --> General --> Additional Library Directories = C:\Python26\libs(NOTE: Notice it is the libs directory, not the Lib directory)Linker --> Input --> Additional Dependencies = python26.lib
Interfacing C++ and Python (Part 2)
Nit commented on Nit's blog entry in Abwood's Coding NotesSure. I'll try to get an example uploaded this weekend.
Interfacing C++ and Python (Part 2)
Nit posted a blog entry in Abwood's Coding NotesThis is the second posting that deals with interfacing C++ with a Python scripting layer. Part 1 can be found here. The focus in this post relates to the design I chose to follow for wrapping Python callback functions in C++. The design objective here is to ensure that a callback function can be quickly defined in python, while also not requiring any recompilation of the c++ UI library. This last feature is important, because in theory the user interface should be configurable and extendable by advanced end users of the game engine. A lot of my inspiration of this train of thought can be attributed to World of Warcraft's extremely customizable user interface. To setup this example, consider the following xml file that defines a simple GUI, with a window containing a button. The button defines a callback to be executed when the button is pressed, that links to a function defined in Python. When constructing this class, a generic interface must be defined that are components of GUI elements. In my codebase, currently all Python standalone functions like this inherit from this callback process. class PythonCallback { public: PythonCallback(const std::string& module, const std::string& callback); virtual ~PythonCallback(); protected: PyObject* _module; PyObject* _callback; }; Notice _module and _callback are PyObjects. To rehash from my earlier entry, I personally prefer to forward declare this PyObjects rather than use a #include to ensure that python.h is not included unless it is absolutely necessary (e.g., in the cpp implementation file!). // forward declare PyObject // as suggested on the python mailing list // #ifndef PyObject_HEAD struct _object; typedef _object PyObject; #endif Much of the Python C API setup is done in the constructor of this abstract class. The guts of this is largely derived from the Python C API documentation, particularly in the following tutorial: The primary function here is to create the _callback and _module instances and maintain proper memory management of our PyObject references. PythonCallback::PythonCallback(const std::string &module, const std::string &callback) { PyObject *pyName; pyName = PyString_FromString(module.c_str()); _module = PyImport_Import(pyName); Py_DECREF(pyName); if (_module) { _callback = PyObject_GetAttrString(_module, callback.c_str()); /* _callback is a new reference */ if (_callback && PyCallable_Check(_callback)) { // success } else { if (PyErr_Occurred()) PyErr_Print(); std::cout
How to: from OpenGL 2.1 to 3.3 core?
Nit replied to golgoth13's topic in Graphics and GPU ProgrammingI'd recommend OpenGL Superbible (5th Ed); it does a great job of introducing the reader to 3.3 core and does away with the deprecated 2.1 API.
Interfacing C++ and Python (Part 1)
Nit posted a blog entry in Abwood's Coding NotesI've recently decided it was time to gain a bit more flexibility in my c++ codebase by adding support for a scripting interface. My initial use case was to enable my homegrown gui system (in c++) to offload callbacks and general customization to a scripting layer, rather than having to hard code a user interface in C++. I chose Python as my scripting language, due to my years of experience with this language. With the GUI/addon system use case in mind, I need my C++ code to call into a Python script/module and I also need my Python scripts to interact with C++ objects. Here's a simple example that I've got in mind for this process: # import the pyd that interfaces python with my c++ library from az_gui import * ... def onCancelButton(self, event): ''' Event handler for cancel button. ''' window = self.getWindow() window.close() return True I've decided to split this topic into several postings. This first post covers the ground work necessary for calling into Python scripts. The second posting will review a PythonCallback wrapper class that is designed to encapsulate the Python C API code for calling function callbacks in python modules. After that I'll have a post or two on my Boost.Python experiences that expose my C++ libraries to Python (e.g., allowing for Python code to call the C++ class, Window as shown in the example above). C++ calls into Python scripts: Went with the Python C API for communication in this direction, since it was a manageable process. I'm resistant to adding 3rdParty dependencies to my project, so this seemed like the best I could do. On a side note, my original goal was to write my own Python C API wrappers to also exposing my c++ classes to Python scripts, but that panned out to be an enormous undertaking that I'm not ready to tackle yet. (this discussion is saved for another post). There were a few catches along the way that made the implementation less straightfoward. For starters, a standard install of Python does not have a debug library. As such, some macro magic needs to be performed to include Python.h header file. //Note // Since Python may define some pre-processor definitions which affect // the standard headers on some systems, you must include Python.h before // any standard headers are included. #ifdef _DEBUG #undef _DEBUG #include "Python.h" #define _DEBUG #else #include "Python.h" #endif It is my preference to keep this Python.h ugliness inside the cpp files, so my next trick involved forward declaring PyObject: // forward declare PyObject // as suggested on the python mailing list // #ifndef PyObject_HEAD struct _object; typedef _object PyObject; #endif Pulling it all together: To manage the Python C API, I created a PythonManager class to encapsulate the Python interpreter session used within my codebase. PythonManager.h ... class PythonManager { public: PythonManager(); virtual ~PythonManager(); void initialize(); void uninitialize(); // add directories to the python interpreter's sys.path // to allow for Python scripts to locate script directories. bool addSysPath(const std::string& relativePath); protected: }; PythonManger.cpp: ... PythonManager::PythonManager() { } PythonManager::~PythonManager() { uninitialize(); } bool PythonManager::addSysPath(const std::string& relativePath) { std::ostringstream ss; ss
Standalone SVN Repository
Nit posted a blog entry in Abwood's Coding NotesSince I'm the sole programmer on my own personal game engine, I didn't feel much pressure in using configuration management software. I had at one point been reliant on regular backups by simply zipping the entire solution directory and timestamping the zip file. I keep a changelog that summarizes the changes made to the code base in order to hint at the state of the code snapshot in case I ever needed to roll back to that version. I've was never thrilled with solution; 95% (buttpull number) of each zip contained code that had not changed, rolling back could result in lots of snooping around to find the correct version, the backup process was time consuming, no diff mechanism to verify that I actually wanted to keep all changes made for this version snapshot..
Gridless 2D Path-finding
Nit replied to dadads's topic in Artificial IntelligencePoints of Visibility may be of interest to you. I briefly explain it in the following post:
Crash Recovery
Nit posted a blog entry in Abwood's Coding NotesI recently suffered a terrible HD crash on my coding laptop, which resulted in a loss of about 2 months of coding work. Needless to say, losing any amount of work that you are proud of certainly does suck. However, I've been able to collect a list of tasks that vanished as a result of this event and I should be back on track within a week or so. There is something positive that came out of this crash, as I have been reminded of the importance of backup up my work. For a while I was doing pretty good, using a svn repository in addition to performing full archives of the code base with a zip file. That backup regiment has slowly tapered off, and lately I've been only zipping the code base without copying the zip up to a thumb drive or external. In response, I've written the following script that I wanted to share that performs the backup for me. If you're really bad about backing up your work, you may want to do what I did: place the script directly on your external storage device and integrate it into the autorun. Directions on doing this are included in the scripts comments: ## ## file: backup.py ## author: Alex Wood ## date: 2008-06-04 ## desc: Archives all files in each specified directory into a specific zip file. ## ## ## Place this script in the toplevel of a thumb drive or external harddrive ## to enable autobackup of directories, as described below. ## ## Copy and paste the following text into your autorun.inf text file: ## ## [autorun] ## open=launch.bat ## ACTION = Launch portable workspace ## ## Create launch.bat and add the following line: ## python backup.py ## import os import zipfile import datetime dirList = [ ('C:\\foo', 'foo'), # directory, zip name ('C:\\bar', 'bar'), # directory, zip name ] copyDest = os.getcwd() numFiles = 0 for src in dirList: currentPath = src[0] # verify the source directory exists if not os.path.isdir(currentPath): print '%s not found' % currentPath continue # zipfile name is the concatenation of the zip name (defined in dirList) and a timestamp zipname = src[1] zipfilename = "%s_%s.zip" % (zipname, datetime.datetime.now().strftime("%Y%m%d%H%M%S")) file = zipfile.ZipFile(zipfilename , "w") for (path, dirs, files) in os.walk(currentPath): numFiles += len(files) currentPath = path for name in files: print name archiveName = str(os.path.join(path, name)) # the archive name must be relative to the root of the archive # so strip off the C:\foo part of the path archiveName = archiveName[len(src[0]):] file.write(os.path.join(path, name), archiveName, zipfile.ZIP_DEFLATED) print '%d files archived into %s' % (numFiles, zipfilename)
Urban Empires - January 13, 2008 Update
Nit commented on dgreen02's blog entry in Radioactive-SoftwareHey Dan, I just caught you post on January 2nd that showcased some new animations. I knew you've been a Milkshape guy (as am I) and I noticed that your screenshots are not from Milkshape. Are you using a separate technique/tool for character modeling or are those screenshots out of a homegrown tool? Things look great! I've been watching this project grow for many years on this site.
- Quote:Original post by haegarr Perhaps it is because ATM you're trying out glDrawElements only, but using indices at all makes sense only if you are re-using vertices. As long as each vertex is used only once, indices only increase the costs. The structure you have should work fine if using glDrawArrays. EDIT: Err, where has the reply of ViLiO gone to? IMHO he was totally right with his hint... Perhaps you haven't seen it: ViLiO stated out that the indices used by OpenGL do index _vertices_ and not co-ordinates of vertices. That means, that something like glVertex3f(vertex[index].x,vertex[index].y,vertex[index].z); (i.e. using the same index for all 3 co-ordinates) is the equivalent to what OGL does in glDrawElements. That did it. The way I needed to think about how glDrawElements() works comparably to immediate mode would be with the following code: unsigned int numIndicies = mesh->GetNumIndicies(); float* verts = mesh->GetVertexBuffer(); unsigned int* indicies = mesh->GetIndexBuffer(); glPushMatrix(); glScalef(0.005f, 0.005f, 0.005f); glColor3f(1.0f, 1.0f, 1.0f); glBegin(GL_TRIANGLES); for(int i=0; i<numIndicies; i++) { //notice the offsets are added to indicies glVertex3f(verts[indicies],verts[indicies+1],verts[indicies+2]); } glEnd(); glPopMatrix(); The redbook illustrates your points made subtley. Once again, you the gamedev community catches a detail that I overlooked. Thanks for your help haegarr and ViLiO. Pluses for everyone (will have to hunt down ViLiO). Thanks!
- I'm returning to this problem because I still seem to have some inconsistencies with immediate mode and glDrawElements(). This new problem came to light after fixing the type parameter in glDrawElements to GL_UNSIGNED_INT. Using immediate mode my test model renders as correctly as such: Whereas using glDrawElements produces undesireable results: I am under the impression that both implementations (shown above more or less) should yield the same results. Thanks for your attention.
- That was it. Thank you!
glDrawElements
Nit posted a topic in Graphics and GPU ProgrammingI seem to be having some problems getting glDrawElements to work. Using the fixed function implementation below, I try to draw my mesh object: int numIndicies = mesh->GetNumIndicies(); float* verts = mesh->GetVertexBuffer(); int* indicies = mesh->GetIndexBuffer(); glPushMatrix(); glColor3f(1.0f, 1.0f, 1.0f); glBegin(GL_TRIANGLES); for(int i=0; i<numIndicies; i+=3) { glVertex3f(verts[indicies],verts[indicies[i+1]],verts[indicies[i+2]]); } glEnd(); glPopMatrix(); The code above renders the mesh without any problems. Unfortunately, I am unable to render the mesh using (what I think is) a comparable implementation using glDrawElements: glPushMatrix(); glColor3f(1.0f, 1.0f, 1.0f); glEnableClientState(GL_VERTEX_ARRAY); glVertexPointer(3, GL_FLOAT, 0, mesh->GetVertexBuffer()); glDrawElements(GL_TRIANGLES, mesh->GetNumIndicies(), GL_INT,mesh->GetIndexBuffer()); glDisableClientState(GL_VERTEX_ARRAY); glPopMatrix(); I've been staring at this for a while, but I can't seem to find the missing piece. Any help is greatly appreciated.
- Advertisement | https://www.gamedev.net/profile/77204-abwoodgdnet/ | CC-MAIN-2019-04 | refinedweb | 2,603 | 52.49 |
All code from this tutorial as a complete package is available in this repository.
If you find this tutorial helpful, please share it with your friends and colleagues! For more like it you can subscribe on Youtube or follow me on Twitter.
This tutorial is available as a video lesson if you prefer that format:
Table of Contents
- What is Notion?
- Introduction
- Project Setup
- Creating a Notion Database
- Creating the Server
- Querying the Server
- Creating a Notion Integration
- Querying the Database
- Connecting the App
- Wrapping Up
What is Notion?
Before we jump in I want to explain quickly a little bit what Notion is.
It's basically an organizational tool that runs in the cloud and supports multiple user collaboration at the same time.
It can be used for anything from organizing daily tasks, keeping track of school schedules, to managing the documentation of large enterprise projects.
Basically if you want to "organize" any kind of information, Notion is a great tool for that.
Similar products you might be familiar with would be something like Confluence, Evernote or OneNote.
Introduction
Recently I discovered that Notion provides an API to read and modify data on your Notion workspace.
They also have fantastic support for simple databases (even relational ones) so I thought it would be fun to try a little app that could use Notion as a quick and dirty CMS, and I had a lot of fun doing it, so I thought I would write up a little tutorial to share the process with others.
I want to be clear that I am absolutely not advocating for the use of Notion as a real database for a production application.
I do not know anything about the actual speed and performance of querying it at any scale, and I also wouldn't trust any critical data on a service that isn't specifically designed to offer a reliability guarantee.
However, for fun little projects I think it's a great option, especially for front end developers who don't have a lot of existing knowledge about databases and just want to get their feet wet.
It can also be a great way to collaborate with less technical folks and allow them the flexibility that Notion offers for creating content, and giving developers the ability to directly reference that content in code.
So without further delay, let's play around with it.
Project Setup
The structure of our project will be:
React App -> Node server -> Notion database
The reason we need the Node server is because if we were to query directly from our React app, we would have to expose our Notion account credentials and secret/database ID. Anything on the client side is always visible to the user.
By querying on the server we can keep the credentials there, out of reach of the front end, and only provide the database table data itself to the front end.
We'll begin by creating the project directory and React app. We're using Create React App here as it's still the simplest way to get an instant React project up and running with minimal complexity:
mkdir react-node-notion cd react-node-notion npx create-react-app@latest sample-app --template typescript cd sample-app npm run start
Make sure you are able to see the example React app on before you continue.
Creating a Notion Database
Next we are going to create our Notion workspace and database.
Navigate to:
You can create an account or login with an existing Google or Apple account. Notion is free to use for an individual.
Next we'll create a page where our database will live. My database is going to help me keep track of stuff I want to learn in 2022.
Click anywhere on the "My Cool Project" page and type
/page. You'll have te option of creating a new page. Create one and give it an icon.
Open your new page. You can give it a cover image at the top. CLick anywhere on the blank page and type
/database. You're going to select "Table Database - Inline"
The first column should be a unique value (our primary key). I'm simply going to name that column
key. The second column I will name
label and the third column I will name
url. The key column will be of type
title by default, but you will need to set the label column to
text and the url column to
url:
I've made the column headers lowercase on purpose since we will be referring to them with Javascript properties when we query (which are traditionally lowercase).
I will be using this database to keep track of the things I want to learn, and a URL link to the resource to learn them. This is super simple but you can come up with anything as complex as you want, we're mostly just here to give an example of how to query this data and display it in an app (or anywhere you like really).
Populate the DB with whatever data suits you best. Here's mine:
Creating the Server
We're next going to spin up a super simple Node server to serve the data. All we need is the
http module and the Notion client library from NPM.
Let's begin with just the server and confirm we can query the data before we add the Notion integration:
Go back to the root directory
react-node-notion before running these commands:
mkdir server cd server npm init -y npm install -D typescript @types/node npx tsc --init mkdir src touch src/server.ts
In case you aren't creating your files from the command line, the above instructions will install the necessary packages and create a
server directory and an
src directory inside with a
server.ts file. Your full directory structure for the entire project should look like:
. ├── sample-app │ └── (React app files) └── server ├── src │ └── server.ts ├── tsconfig.json ├── package-lock.json └── package.json
Your
server.ts file will look like:
server/src/server.ts
import http from "http"; const host = "localhost"; const port = 8000; const server = http.createServer((req, res) => { // Avoid CORS errors res.setHeader("Access-Control-Allow-Origin", "*"); res.setHeader("Content-Type", "application/json"); switch (req.url) { // Will respond to queries to the domain root (like) case "/": res.writeHead(200); res.end(JSON.stringify({ data: "success" })); break; // Only supports the / route default: res.writeHead(404); res.end(JSON.stringify({ error: "Resource not found" })); } }); server.listen(port, host, () => { console.log(`Server is running on{host}:${port}`); });
Your
npx tsc --init command will have created a
tsconfig.json file. All the defaults are fine, you just need to add one value:
tsconfig.json
{ ... "outDir": "./dist" }
That will output the result of the
tsc command into a
dist folder with a JS file that you can run.
Give it a try by running:
npx tsc && node dist/server.js`
That says "run typescript and then use Node to run the resulting Javascript file it creates in the output folder".
Querying the Server
Navigate back to the
sample-app directory and open the
src directory. We can delete
App.css and the
logo.svg file.
We'll update the
index.css with some super simple CSS based off this minimalist style.
sample-app/src/index.css; }
Now we update the contents of
App.tsx. Remove all the default content inside the file (including the imports) and replace it with the following:
sample-app/src/App.tsx
function App() { return ( <div> <h1>Things to Learn</h1> <button type="button" onClick={() => { fetch("") .then((response) => response.json()) .then((payload) => { console.log(payload) }); }} > Fetch List </button> </div> ); } export default App;
We use the Fetch API to query the simple server we just wrote that we made listen on port 8000 and respond on the root domain route
/.
So that means to reach that endpoint we need to query. Save and run your app, then press the "Fetch List" button. Open the dev console with F12 and you will see:
Notice the
{ data: "success" } response there in the console. Great!
Our React app is connected to our server and we can query basic data. Let's get Notion hooked up.
Creating a Notion Integration
Before you can query data from your Notion account you need to create an integration that has the necessary permissions. You can configure integrations to have different permissions like read/write/insert depending on who you are sharing the integration secret with.
Go to the following URL:
And click the big [+ New Integration] button on the left.
You can configure and name your integration how you like. For mine I only want to be able to read content from my database, so I am only giving it read permissions and no access to user data:
After you have created the integration you will be provided with a "secret" that gives access to your integration. Keep this handy as we will need it soon:
In addition to the secret, we also need to configure the database itself to be allowed to be read. Go back to your "Things to Learn" database (or whatever you wrote).
At the upper right corner of your database page is a "Share" button. Click it and then click the "Invite" button. You will have the ability to invite your new integration that you created to have access to this database. It will still be private and hidden from the general public.
The two values you need to query this database from your Node app are the Notion secret (which you already have) and the database ID. The database ID you can get from the URL when you are looking at your database. The URL will look something like this:
In the above example your
database id is the
aaaaaaaaaaaaaaaaaaaaaa part before the question mark.
You now have everything you need to query the data. Back to the Node server.
Querying the Database
We are going to need a secure place to store our Notion secret and database ID. If we put them in our code they will become visible to anyone who checks the source when we push to a remote repository. To get around this we will store our credentials in a
.env. file.
Inside your
server directory create two new files (note that both of them are hidden files that are prefix with a
. before the filename):
server/.env
NOTION_SECRET="secret_xxxxxxxxxxxxxxxxxxxxxx" NOTION_DATABASE_ID="aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
Where the dummy values above are replaced by the values you took from Notion. Remember your secret key does need the
secret_ prefix in front of it.
We also want to create a
.gitignore file:
server/.gitignore
.env dist node_modules
This will tell
git not to push your secret
.env file our your automatically generated
dist folder when you commit.
While we're at it let's add a start script for our server that does the
tsc build and runs the JS output:
server/package.json
{ ... "scripts": { "start": "tsc && node dist/server.js" }, }
Alright! Now that that is taken care of we just need two packages. One is the official Notion client for Node, and the other is
dotenv library that will made it super simple to read your secret and database id values from that
.env file:
npm install @notionhq/client@0.4.9 dotenv
Note that I have locked Notion client to
0.4.9 as the API may change since this is a relatively new product and I would like this tutorial to work for anyone who uses it in the future. You can try running the latest build however you may need to refer to their documentation and make corrections if anything has changed.
Now we're going to update our
server.ts file. We'll do it all at once but I'll add lots of comments to explain everything:
server/src/server.ts
require("dotenv").config(); import http from "http"; import { Client } from "@notionhq/client"; // This is Typescript interface for the shape of the object we will // create based on our database to send to the React app // When the data is queried it will come back in a much more complicated shape, so our goal is to // simplify it to make it easy to work with on the front end interface ThingToLearn { label: string; url: string; } // The dotenv library will read from your .env file into these values on `process.env` const notionDatabaseId = process.env.NOTION_DATABASE_ID; const notionSecret = process.env.NOTION_SECRET; // Will provide an error to users who forget to create the .env file // with their Notion data in it if (!notionDatabaseId || !notionSecret) { throw Error("Must define NOTION_SECRET and NOTION_DATABASE_ID in env"); } // Initializing the Notion client with your secret const notion = new Client({ auth: notionSecret, }); const host = "localhost"; const port = 8000; // Require an async function here to support await with the DB query const server = http.createServer(async (req, res) => { res.setHeader("Access-Control-Allow-Origin", "*"); switch (req.url) { case "/": // Query the database and wait for the result const query = await notion.databases.query({ database_id: notionDatabaseId, }); // We map over the complex shape of the results and return a nice clean array of // objects in the shape of our `ThingToLearn` interface const list: ThingToLearn[] = query.results.map((row) => { // row represents a row in our database and the name of the column is the // way to reference the data in that column const labelCell = row.properties.label; const urlCell = row.properties.url; // Depending on the column "type" we selected in Notion there will be different // data available to us (URL vs Date vs text for example) so in order for Typescript // to safely infer we have to check the `type` value. We had one text and one url column. const isLabel = labelCell.type === "rich_text"; const isUrl = urlCell.type === "url"; // Verify the types are correct if (isLabel && isUrl) { // Pull the string values of the cells off the column data const label = labelCell.rich_text?.[0].plain_text; const url = urlCell.url ?? ""; // Return it in our `ThingToLearn` shape return { label, url }; } // If a row is found that does not match the rules we checked it will still return in the // the expected shape but with a NOT_FOUND label return { label: "NOT_FOUND", url: "" }; }); res.setHeader("Content-Type", "application/json"); res.writeHead(200); res.end(JSON.stringify(list)); break; default: res.setHeader("Content-Type", "application/json"); res.writeHead(404); res.end(JSON.stringify({ error: "Resource not found" })); } }); server.listen(port, host, () => { console.log(`Server is running on{host}:${port}`); });
Should be good! We'll start the server with the new script we made in
package.json:
npm run start
Connecting the App
A quick jump back into the React app and hit that "Fetch Data" button again. If everything went well you will be greeted with the content of your database in your browser console:
You've now got the data in your React app, you can do whatever you want with it! We could probably wrap up the tutorial here, but let's make one final step of turning the data into an actual list of links:
sample-app/src/App.tsx
import { useState } from "react"; // Copy the payload shape interface from our server // We want to copy (rather than import) since we we won't necessarily deploy our // front end and back end to the same place interface ThingToLearn { label: string; url: string; } function App() { // A state value will store the current state of the array of data which can be updated // by editing your database in Notion and then pressing the fetch button again const [thingsToLearn, setThingsToLearn] = useState<ThingToLearn[]>([]); return ( <div> <h1>Things to Learn</h1> <button type="button" onClick={() => { fetch("") .then((response) => response.json()) .then((payload) => { // Set the React state with the array response setThingsToLearn(payload); }); }} > Fetch List </button> {/* Map the resulting object array into an ordered HTML list with anchor links */} {/* Using index as key is harmless since we will only ever be replacing the full list */} <ol> {thingsToLearn.map((thing, idx) => { return ( <li key={idx}> <a href={thing.url} {thing.label} </a> </li> ); })} </ol> </div> ); } export default App;
And with that, a click of the fetch button and we get a nice list of things to do which reflects the state of our Notion database and creates links to the relevant pages.
Go ahead, try changing some text in your DB and hitting the button again.
Wrapping Up
Well that's pretty neat! Now that you know how to do this, what cool projects can you think to build?
Remember that all code from this tutorial as a complete package is available in this repository.
Please check some of my other learning tutorials. Feel free to leave a comment or question and share with others if you find any of them helpful:
How to use Node.js to backup your personal files
Introduction to Docker for Javascript Developers
If you find this tutorial helpful, please share it with your friends and colleagues! For more like it you can subscribe on Youtube or follow me on Twitter.
Discussion (8)
Thank you for your post.
Notion works like an easy to use CRM to maintain website content with keywords in the table.
Can you mention how does it work in the case of images or even videos embedding?
Are there limits, ex: when does it stop being free to use?
Thanks again, bye.
Sure! I haven't used it personally for anything beyond the database values, but it looks like there is some decent documentation for those things. Notion did recently add image support to thier API:
developers.notion.com/changelog/pa...
And here is some information of rate limits. I don't think Notion has a paid tier for integrations, simply a global average of 3 requests per second (another reason why it's primarily useful for small projects but not as an actual scaleable CMS/CRM):
developers.notion.com/reference/er...
Hello, very interesting.
a query, is there a way to do it without nodejs (frontend only)?
You could "technically" add the Notion client package to the front end I imagine, but there would be no way to do so without exposing your Notion integration secret token to the users. If it has any write/modify permissions at all that's extremely dangerous. I guess with read only permissions you might be able to get away with it, but I still wouldn't do it for anything more than a hobby project.
So yes I do think the server is necessary (though it doesn't need to be written in Node.js).
Another possibility is to create a webhook with Google Apps Script and call the Notion APIs from there. In the client side, then, you can call the webhook without exposing Notion integration tokens.
Similarly, you can create a webhook with tools such as autocode.com/
Great post! I’m a big fan of Notion and I’m looking forward to writing apps for it in the future! I’ll definitely come back on this post.
Amazing article Alex, well written and explained. Just had a question for you, Why did you initiate the server with TS if you're not using types or TS features?
Hey there! That's a great question.
The answer is "I did use it!" Plenty of times. Here's some examples:
Typescript reminded me that
process.env.NOTION_SECRETcould be undefined so it lead me to add the error handling that tells the user they need to include an
.envfile to run the server
There's a TS interface on the server called
ThingsToLearnwhich defines the shape of the data being served to the front end
Typescript helped a TON with the Notion query payload which comes back in a pretty complex shape, and I can use the
@notionhq/clientbuilt in types to navigate the API within VS Code rather than having to jump back and forth between the documentation
TS also helped all the times I wrote something wrong or stupid and told me to fix it in the moment rather than causing a runtime error, so there's plenty of use of TS that happened in the development of this tutorial that you don't necessarily see in the final product 😉 | https://dev.to/alexeagleson/how-to-connect-a-react-app-to-a-notion-database-51mc | CC-MAIN-2022-05 | refinedweb | 3,374 | 62.48 |
I don’t know if this is the correct sub-forum to put this on, but I’ll do it anyway.
Would this be a bad way to optimise my code? (for minimum file size)
import GameLogic cont = GameLogic.getCurrentController() own = cont.owner c = cont.sensors def setFade(f): own['fade'] = f r = setFade(0) if c["aim"].positive: own['Text'] = "You used the aimbot cheat!" r if c["hurt"].positive: own['Text'] = "You used the 1HP cheat!" r if c["invincible"].positive: own['Text'] = "You used the Invincibility cheat!" r if c["heal"].positive: own['Text'] = "You used the Heal cheat!" r
Let’s say I have a very tight budget for file size - is this a bad way of optimising code? Is there a better way of doing it? Could optimising my code this way have other issues like for example, would it require more computing power? Is it okay to sacrifice readability for file size?
I’m pretty new to coding so any info would be helpful, but then again, it’s not really important I’m just interested. | https://blenderartists.org/t/code-optimisation/671298 | CC-MAIN-2020-50 | refinedweb | 182 | 71.1 |
#include <sys/capability.h>.
For details on the data, see capabilities(7).: −1,
meaning perform the change on all threads except the caller
and init(1); or a value less than
−1, in which case the change is applied to all
members of the process group whose ID is −
pid.
On success, zero is returned. On error, −1.
Bad memory address.
hdrp must not be NULL.
datap may be
NULL only when the user is trying to determine the
preferred capability version format supported by the
kernel.
One of the arguments was invalid.
An attempt was made to add a capability to the Permitted set, or to set a capability in the Effective or Inheritable sets that is not in the Permitted set..)
No such thread.
The portable interface to the capability querying and
setting functions is provided by the
libcap library and is
available here:
clone(2), gettid(2), capabilities(7) | http://manpages.courier-mta.org/htmlman2/capget.2.html | CC-MAIN-2021-17 | refinedweb | 152 | 58.08 |
Re: "ocaml_beginners"::[] Ocaml + GUI + Mac + Mono?
Expand Messages
- Okay, no mono. Thank you. That saves me from a very great many
hours of potentially frustrating research and software installation.
On the other hand, I can see Mono + F# being useful to me to solve
other problems, but those are not important now.
From the rest of what you and Richard Jones say, I think my plan of
action is this...
* Use labltk or lablgtk to build software on Aqua, unless I find a
solid Cocoa binding. Technically, there is Cocoa#, but I will look
into that when I look into mono. Either way, separate presentation
from logic as much as I absolutely possibly can.
* Port the presentation layer to other platforms when I decide that I
want to use the other platforms.
--
Savanni
On Oct 2, 2008, at 1:15 PM, Jon Harrop wrote:
> On Thursday 02 October 2008 17:47:14 Savanni D'Gerinel wrote:
> >)
>
> Amen.
>
> > * Make it a Native-Mac GUI on my Mac (because I am tired of XDarwin)
>
> The last time I looked, the native-Mac-GUI-in-OCaml problem had not
> been
> solved but several people were trying to solve it.
>
> > * Make it portable to both Windows and Linux (with a recompile)
>
> Does that not conflict with "native-Mac GUI"?
>
> > So far, the only option that looks obviously ready and complete is
> > using labltk.
>
> I believe LablGTK is a definite possibility. It works very well
> under Linux
> but I have not tried porting the software (I gave up on my Mac).
>
> > I do not mind this idea at all and am totally willing
> > to go with it, and I have some overall documentation on how to
> use it
> > at .
> >
> > I would prefer to use QT, but I can find no Ocaml bindings, and
> > writing such a binding is certainly beyond my skill and interest at
> > this time.
>
> You might try going via another language with more mature Qt
> bindings, e.g.
> using PyQt, but I do not know of any tutorials covering this.
>
> > On the other hand, I hear that there is a standard gui in Mono
> called
> > Windows Forms, though there is a Windows Presentation Layer that my
> > source is not certain has been ported to Mono yet. So, I really have
> > several questions here:
> >
> > 1. Does this Windows Forms gui bind directly to Aqua so that I
> have a
> > Mac-looking application without running Darwin?
>
> No:
>
> "Looks alien on non-Windows platforms." -
>
>
> > 2. What advantage is there to me writing my app in F# with Mono
> on my
> > Mac?
>
> None. Only on Windows, F# makes GUI programming vastly easier than
> anything
> OCaml has to offer. So it may be worth considering OCaml/F# cross
> compilation
> of the core.
>
> > 3. What way do you all most often use to write cross-platform GUI
> > apps that actually look like the underlying platform?
>
> I am not aware of anyone ever having succeeded in doing that. The
> nearest I
> can think of is Gtk apps but they have anomalous behaviour under
> Mac OS X.
> MLDonkey contains 20kLOC of cross platform GUI code using GTK2, for
> example.
>
> --
> Dr Jon Harrop, Flying Frog Consultancy Ltd.
>
>
>
[Non-text portions of this message have been removed]
- On Fri, Oct 03, 2008 at 09:02:58AM +0200, Adrien wrote:
> 2008/10/2, Savanni D'Gerinel <savanni@...>:unfortunately not, at least not when requiring lablgtkgl. If you can
> >)
> > * Make it a Native-Mac GUI on my Mac (because I am tired of XDarwin)
>
> Have you tried the native mac port of gtk ?
> You have two (more ?) available : one for gtk-1.2[1] and one for
> gtk-2[2]. I've never tried them as I don't use macs but I think gimp
> has been made to use the first one and I trust imendio, the company
> doing the second one, for releasing good code (partly because I know
> some of the devs and partly because the *definitely* want native gtk
> on the mac ; they're doing gtk development).
>
> Philippe Strauss has recently tried imendio's gtk with lablgtk but had
> troubles building it, unfortunately I don't know if his problem has
> been solved or not. [3]
do without GL embedding in gtk, it will probably build fine, with
just a tiny patch that Pascal Cuoq provided.
if you need lablgtkgl, you'll most probably chokes on a missing
symbol _GDK_DISPLAY (maybe related to X11, or double underscored somewhere,
i have to dig in further). (most probably gtkgl need a little bit of
patchwork for native gtk osx support).
regards.
---8<---
diff -ru lablgtk-2.10.1/src/ml_gdk.c lablgtk-2.10.1-nativegtk/src/ml_gdk.c
--- lablgtk-2.10.1/src/ml_gdk.c 2007-09-25 04:56:09.000000000 +0200
+++ lablgtk-2.10.1-nativegtk/src/ml_gdk.c 2008-02-26 11:10:30.000000000 +0100
@@ -22,13 +22,18 @@
/* $Id: ml_gdk.c 1369 2007-09-25 02:56:09Z garrigue $ */
+#define __QUARTZ__
+
#include <string.h>
#include <gdk/gdk.h>
+#if defined(__QUARTZ__)
+#else
#if defined(_WIN32) || defined(__MINGW32__)
#include <gdk/gdkwin32.h>
#else
#include <gdk/gdkx.h>
#endif
+#endif
#include <caml/mlvalues.h>
#include <caml/alloc.h>
#include <caml/memory.h>
@@ -253,7 +258,7 @@
ML_0 (GDK_ROOT_PARENT, Val_GdkWindow)
ML_1 (gdk_window_get_parent, GdkWindow_val, Val_GdkWindow)
-#if defined(_WIN32) || defined(__CYGWIN__)
+#if defined(_WIN32) || defined(__CYGWIN__) || defined(__QUARTZ__)
CAMLprim value ml_GDK_WINDOW_XWINDOW(value v)
{
ml_raise_gdk ("Not available for Win32");
@@ -488,7 +493,7 @@
CAMLprim value ml_gdk_property_get (value window, value property,
value length, value pdelete)
{
-#if defined(_WIN32) || defined(__CYGWIN__)
+#if defined(_WIN32) || defined(__CYGWIN__)|| defined(__QUARTZ__)
return Val_unit; /* not supported */
#else
GdkAtom atype;
---8<---
...
> [1]--
> [2]
> [3]
> [4]
Philippe Strauss
Your message has been successfully submitted and would be delivered to recipients shortly. | https://groups.yahoo.com/neo/groups/ocaml_beginners/conversations/topics/10238?xm=1&o=1&l=1 | CC-MAIN-2017-26 | refinedweb | 951 | 64.61 |
- Cookie policy
- Advertise with us
© Future Publishing Limited, Quay House, The Ambury, Bath BA1 1UA. All rights reserved. England and Wales company registration number 2008885..
If you're already familiar with other programming languages such as C or PHP, you'll applaud Python's simplicity. For instance, code blocks are marked by indentation rather than curly braces - see this program:
def saystuff(somestring): print "String passed: ", somestring saystuff("Wowzers")
If you're new to Python, make sure that you have the language installed (most distros install it by default, but otherwise it will be available in your distro's package manager). Enter the above code into a text editor and save it as test.py in your home directory. Then fire up a terminal and enter:
python test.py
All being well, Python will interpret the code and spit out a line of text. Our sample program here merely sets up a subroutine called saystuff, which prints whatever text string is passed to it. You can see that the code of the subroutine is indented with a tab.
Execution proper begins with the first call to saystuff, which passes the string Wowzers to be printed. It's as easy as that - you're almost ready to get coding.
But one last thing: you'll also need the Pygame modules for this tutorial. Pygame provides an extra layer on top of Python, linking with SDL and letting you display images and output sound effects in your programs. It's very widely available and will almost certainly be available in your distro's repositories, but otherwise you should download it from.
Although game genres vary enormously, in terms of the underlying mechanics, most games involving sprites (image objects) moving around follow a similar code path:
Now, we're going to write a little game that has several balls bouncing around the screen, and the player's objective is to avoid colliding the mouse pointer with them. Sounds easy? Well, if we add some randomness to the ball movements - ie they don't always move at the same speed - then it suddenly becomes a lot trickier.
You can't just hold the mouse pointer in the bottom-left of the screen, say, because a ball could zoom down there at any moment. Throughout this, a counter keeps track of how many seconds you stay 'alive'. It's a very basic concept, but it demands good mouse dexterity and a laser-like focus on the screen.
But first things first: let's work out how to get a single ball bouncing around the screen. How does the ball know when to reverse direction? Fortunately, there's a very easy method to achieve this: we have two variables that we use to alter the ball movement.
For each loop of the game engine, we add these numbers to the ball's position. If the ball is moving right, for instance, then it's because we're adding 1 to its horizontal position each loop. If the ball hits the right edge of the screen, we start adding -1 to its horizontal position, thereby moving it to the left.
Makes sense? If you're unsure how this works, here's a Python program to demonstrate it in action. You can find this code in the source code for this project. To run this program, you'll also need an image called ball.png alongside the code, which is a 32x32 pixel image containing a white (filled) circle on a black background.
If you want to make ball.png yourself, just create a new 32x32 pixel image, fill it with black, use the circle select tool to create a new circle selection and fill that with white. Save it as ball.png in the same directory as ball1.py and run it by entering python ball1.py. Alternatively, just use the ball.png in the source code for this project.
from pygame import * # Use Pygame's functionality! ballpic = image.load('ball.png') done = False ballx = 0 # Ball position variables bally = 0 ballxmove = 1 ballymove = 1 init() # Start Pygame screen = display.set_mode((640, 480)) # Give us a nice window display.set_caption('Ball game') # And set its title while done == False: screen.fill(0) # Fill the screen with black (colour 0) screen.blit(ballpic, (ballx, bally)) # Draw ball display.update() time.delay(1) # Slow it down! ballx = ballx + ballxmove # Update ball position bally = bally + ballymove if ballx > 600: # Ball reached screen edges? ballxmove = -1 if ballx < 0: ballxmove = 1 if bally > 440: ballymove = -1 if bally < 0: ballymove = 1 for e in event.get(): # Check for ESC pressed if e.type == KEYUP: if e.key == K_ESCAPE: done = True
Let's step through this. In the first line, we tell Python that we want to use routines from the Pygame library. Then we load the ball image we created before and store it in an object called ballpic, and create a true/false variable to determine when the game has finished.
The following four lines are hugely important: these declare variables which control the position and movement of the ball. ballx and bally store the location (in pixels) of the ball in our game window - 0,0 being the top-left, and 640,480 being the bottom-right pixel.
ballxmove and ballymove store the numbers we add to the ball for each game loop; we set them to 1 initially, so that when the game starts, 1 is added to ballx and bally each loop, thereby moving the ball down and to the right. So, our ball starts in the top-left of the screen, and starts moving diagonally down-right when the program starts.
Next, we set up a new Pygame window and start the main game loop, filling (clearing) the screen with black and drawing our ball at its current position (code comments are denoted with # characters). The following code chunk determines how the ball is going to move:
ballx = ballx + ballxmove bally = bally + ballymove if ballx > 600: ballxmove = -1 if ballx < 0: ballxmove = 1 if bally > 440: ballymove = -1 if bally < 0: ballymove = 1
In the first two lines, we update the ball's horizontal (x) and vertical (y) position by adding the two movement variables. If ballxmove and ballymove are 1, then the ball will move 1 pixel right and 1 pixel down each game loop.
But then the following if statements check to see if the ball is near the edge of the screen, and if so, change ballxmove and ballymove accordingly.
If, for instance, the ball position is over 600 pixels horizontally, then it should 'bounce' off and start moving left - so we start adding -1 to its position (effectively subtracting 1).
With just a few lines of code, we've managed to create the impression that the ball is bouncing around the screen - not bad! The final lines of this program set up a keyboard handler, so that you can quit the game by pressing the Esc key at any moment.
Our prototype ball-bouncing program - not much to look at yet, but seeing this on your screen will indicate that you've got the basics working.
All good and well so far - we have our basic game structure in place. Now we want to add more balls, and also detect if the mouse pointer is colliding with any of them. For the former, we're going to set up an array of 'dictionary' entries to keep track of the balls. This gives us a huge amount of flexibility: we can have as many balls as we want, instead of limiting ourselves to ball0, ball1, ball2 etc. Dictionaries are a doddle in Python:
mydict = {'Bach': 100, 'Handel': 75, 'Vivaldi': 90} print mydict['Vivaldi']
Here, we associate three words with numbers, and then print out the value contained in 'Vivaldi', which is 90. We'll use a dictionary to store the X, Y, X movement and Y movement values of our balls - a bit like a struct in C. But whereas C bogs us down in memory management turmoil, in Python we can create loads of ball objects without any hassle at all, giving them their own individual dictionary entries.
The final thing we need to think about is collision detection. How do we tell when the mouse pointer has collided with a ball? Logically, it seems sanest to go through the position of every ball and compare them with the mouse pointer location. But we have a trick up our sleeve: the balls are white, and the background is black, so why not just detect when the mouse pointer is over a white pixel? That only takes one line of code, and is very fast...
Here's the code, which you can find as ball2.py in the source code for this project, along with the ball.png picture we created earlier (it's exactly the same).
from pygame import * import random ballpic = image.load('ball.png') ballpic.set_colorkey((0,0,0)) numballs = 10 delay = 5 done = False balls = [] for count in range(numballs): balls.append(dict) balls[count] = {'x': 0, 'y': 0, 'xmove': random.randint(1, 2), 'ymove': random.randint(1, 2)} init() screen = display.set_mode((640, 480)) display.set_caption('Ball game') event.set_grab(1) while done == False: screen.fill(0) for count in range(numballs): screen.blit(ballpic, (balls[count]['x'], balls[count]['y'])) display.update() time.delay(delay) for count in range(numballs): balls[count]['x'] = balls[count]['x'] + balls[count]['xmove'] balls[count]['y'] = balls[count]['y'] + balls[count]['ymove'] for count in range(numballs): if balls[count]['x'] > 620: balls[count]['xmove'] = random.randint(-2, 0) if balls[count]['x'] < -10: balls[count]['xmove'] = random.randint(0, 2) if balls[count]['y'] > 470: balls[count]['ymove'] = random.randint(-2, 0) if balls[count]['y'] < -10: balls[count]['ymove'] = random.randint(0, 2) for e in event.get(): if e.type == KEYUP: if e.key == K_ESCAPE: done = True if screen.get_at((mouse.get_pos())) == (255, 255, 255, 255): done = True print "You lasted for", time.get_ticks()/1000, "seconds!"
The general concepts behind this code are the same as before, but there's lots of new juicy code to explore. Near the top, where we load our ball image, we also set its colorkey to (0,0,0)' which is the RGB (Red/Green/Blue) value for black. What we're saying here is: set all black pixels in our ball picture to be transparent.
This is important when we have many balls bouncing around, as we want them to overlap gracefully, and not have unsightly black corners drawn on other balls. So, only the white pixels of our balls will be displayed.
The following numballs and delay variables affect the difficulty of the game. As you'd expect, numballs controls the number of balls in play, whereas delay is the time (in milliseconds) that the game should pause each loop. You can leave these as-is for now - but if you fancy more of a challenge, you can up the number of balls and reduce the delay.
Our balls = [] line sets up a new array of ball objects, and in typical Python fashion, we're not limited to the number of objects (nor do we have to define the number straight away). The
for count in range(numballs):
line sets up a loop which runs numball (10) times, adding new dictionary objects to the balls array and giving them starting values - top-left of screen, and random movement down-right. The 1, 2 in the random number generator means 'any number between 1 and 2 (inclusive)'. So we have 10 balls, all starting off with random speeds.
Next, we set up the screen as before, and add an event.set_grab(1) line which constrains the mouse pointer in the game window; it'd be too easy if you could move the mouse pointer outside! Then we have our main game loop. As before, we fill the screen with black, and then use another for loop to display (blit) all our balls to the screen.
After updating the screen and delaying (so that it runs at the same speed on all machines), we again traverse through our array of balls, updating their positions with our movement variables. Each ball has its own copy of xmove and ymove in its dictionary entry, so they can all move independently.
Following this is the game logic, which determines if the balls have reached the window edges. This time, we've tweaked the values so that the balls can go slightly off-screen (remember, they're 32x32 pixels). This is vital for the gameplay: it means you can't just move the mouse cursor into a corner and never get hit! The balls now reach every part of the screen, so you have to keep moving the mouse.
The final three lines of code are new: screen.get_at() returns the pixel colour value at the specified position, which we set as the mouse pointer with mouse.get_pos(). We say: 'if the pixel colour at the mouse position is white (255,255,255), then done = True so the while main game loop will end.
Finally, we print the number of seconds for which the player survived - time.get_ticks() returns it in milliseconds, so we divide it by 1000 before displaying it.
This is more like it! Multiple circle mayhem demands lightning reactions and pixel-perfect accuracy with the mouse pointer...
Not bad for 55 lines of code, is it? As mentioned before, you can increase the difficulty of the game by raising the value of numballs at the top - the default of 10 is tricky enough, but if you think you're dextrous enough, try knocking it up to 15 or 20 for some finger-twistingly frantic gameplay.
There are many other aspects of the game you can fiddle with too, such as altering the random numbers used in the main game logic (if ball has hit screen edge) section.
Pygame is chock-full of features to play around with, and, using a handful of lines of code, you can add sound effects or even a background music track to the game. has some fantastically in-depth documentation to help users explore its functionality, along with a reference to all the routines used in this tutorial.
Having programmed in countless languages and environments, from Amiga Blitz Basic to C#-SDL on Mono/.NET, I can safely say that Pygame is one of the most blissfully easy game programming setups around - it's the perfect way to concretise any game ideas that are floating around in your head. Good luck!
The final version of our game is hardly a tour de force on the graphical front, but we can spruce it up a bit by adding a background image. However, it's important to remember how we're d where the balls are - we're looking for white pixels. So your background image shouldn't contain any fully white (255,255,255 RGB) pixels, otherwise the game will end if you mouse over them!
Find a picture and resize it to 640x480. If you suspect there are any white pixels in the image, you can always lower its brightness in Gimp which will eliminate such problems. Save this picture alongside ball2.py and call it background.jpg. Now, in ball2.py, enter the following code beneath the early ballpic.set_colorkey line:
backdrop = image.load('background.jpg')
So now we have our background picture in memory, ready to use. But we need to display it on the screen every loop, so further down in ball2.py, replace the screen.fill(0) line with this:
screen.blit(backdrop, (0,0))
This draws the background image before the balls. Note that if it's quite a complex image (eg lots of colours), then this extra blitting process will slow the game down slightly - but you can tweak the ball speeds and delay variable to compensate for that.
First published in Linux Format magazine
You should follow us on Identi.ca or Twitter
Your comments
Too awesome!
Anonymous Penguin (not verified) - March 27, 2009 @ 2:29am
This was my first exposure to Python and was the perfect excuse to try it out... I ended up making four different colored balls, assigning them to a list and then adding them randomly to the balls array... I had 800+ balls bouncing around at one point!! Sweet!
Too awesome!
Anonymous Penguin (not verified) - March 27, 2009 @ 2:30am
This was my first exposure to Python and was the perfect excuse to try it out... I ended up making four different colored balls, assigning them to a list and then adding them randomly to the balls array... I had 800+ balls bouncing around at one point!! Sweet!
TuxRadar is awesome
Tim (not verified) - March 27, 2009 @ 8:21am
I just had to comment and tell you guys on a great job you guys are doing.
I recently added you by RSS and have read some great articles by you guys.
Love the in-depth linux articles and your coding tuts
Keep up the great work!
26 شارع عباس على عيسى
ahmed (not verified) - April 9, 2009 @ 7:53pm
@@@@@@
how you get this to work
Anonymous Penguin (not verified) - January 31, 2011 @ 4:36pm
how you get this to work
thanks
Anonymous Penguin (not verified) - March 27, 2011 @ 8:40pm
have no knowledge how to program but heard about Python so here goes it. I will try this one as it isn't so long to code.
thanks again,
C
idk
Ben (not verified) - July 18, 2011 @ 9:51pm
when i try and play it a red highlight goes over ) and i dont know why?
thank
pradeepcse (not verified) - August 8, 2011 @ 1:50pm
thank very much free open source
Great Tutorial!
HTMLgrrrrl (not verified) - November 12, 2011 @ 3:26am
I am new to Python, but I have been practicing C and C++ for about a year now. I am definitely surprised by the simplicity of Python, and I loved coding this game. There is a great book by Al Sweigart (Invent Your Own
Computer Games with Python 2nd Edition)that has other great Python games, but of all the ones I've coded so far, this one takes the cake. Thanks!
Thanks bro
Python Program EPICNESS! (not verified) - March 9, 2012 @ 8:30pm
Awesome, I'm 13 years old. And creating things in python, VB.Net. Where would i be without internet NO WHERE
Neat.
123456789 (not verified) - April 12, 2012 @ 1:35am
Thank you for the help, this page was very useful.
I ran into some errors and had to fix them:
iornside8710 (not verified) - May 1, 2012 @ 10:27pm
you forgot to have just plain import pygame at the top and right below where you print the time you survived you forgot to have pygame.quit() with out these the program would break when i died mabey im using a more updated version of python then when this was made idk but this is how i got it to work.
I get a "there is no
sergio (not verified) - May 2, 2012 @ 1:20am
I get a "there is no soundcard" message in the terminal. The window opens up but it's just a black background with nothing on it. ?? I'm using Mint 12
lol
Anonymous yourmother (not verified) - October 18, 2012 @ 11:07am
yourmother smells like a male
Why, How, Huh?
Chuck Norris (not verified) - October 26, 2012 @ 12:56pm
I keep getting syntax errors and i dont know what to do... HELP!
Mouse pointer is stuck in game even after it has ended
Anonymous Jim (not verified) - November 13, 2012 @ 2:01pm
Hi
This works perfectly, except...after the game has ended, either by the mouse pointer coming in to contact with a ball or pressing escape, the mouse pointer cannot be moved from the game screen. It is stuck inside it. I have to restart my raspberry pi after each game! Is there a way to solve this?
Thanks
USeless
Anonymous Bumguin (not verified) - November 16, 2012 @ 10:18am
this program doesnt work it is a virus...
useless
Anonymous shithead (not verified) - November 29, 2012 @ 1:57am
ugly shit
ball just goes out the window
confused (not verified) - January 13, 2013 @ 5:07pm
just typed up the first lines of code to get one ball working. it's letter for letter and the ball just goes off the screen. no bouncing happening. no errors in code. what's wrong!?
note sure if this is just a windows error
Windows Execution (not verified) - February 11, 2013 @ 5:21pm
but you have to put parentheses around items when you want to
print them.
print ("You lasted for", time.get_ticks()/1000, "seconds!")
black screen
awesomeepiceli (not verified) - March 2, 2013 @ 8:41pm
I just get a black screen after the refresh line.
tyyyyyyyyyyyyyyyyy
miss. python (not verified) - March 19, 2013 @ 7:53pm
thank u i am just starting python ! its so simply! -from my raspberry pi
Guys, I know what your problem is
ignoremefurever (not verified) - July 1, 2013 @ 4:58am
If the print statements are getting errors, you're using Python 3, which isn't covered in this tutorial. Don't panic; look up a newer PyGame example and get coding. Try this one, if you don't want to find your own:
inventwithpython dotcomslash pygame / chapter1 . html
Note: Ubuntu users, do not uninstall Python 3 under any circumstances. You will crash your computer, likely irreversibly. | http://www.tuxradar.com/content/code-project-build-mouse-game-python | CC-MAIN-2017-04 | refinedweb | 3,595 | 71.34 |
Be the first to know about new publications.Follow publisher Unfollow publisher Sanjay Gupta
- Info
Spread the word.
Share this publication.
- Stack
Organize your favorites into stacks.
- Like
Like this publication.
RISK RETURN ANALYSIS AND COMPARATIVE STUDY OF MUTUAL FUNDS
PROF.GARGI NAIDU SOMESH BEHERE MASTER OF BUSINESS ADMINISTRATION (MBA) VIDYASAGAR INSTITUTE OF MANAGEMENT HDFC Asset Management Company Ltd. FOR Somesh Behere A Report on Project work By BARKATULLAH UNIVERSITY BHOPAL(M.P.) In SESSION(2008-2010) BY:
“RISK RETURN ANALYSIS AND COMPARATIVE STUDY OF MUTUAL FUNDS” FOR HDFC Asset Management Company Ltd. A Report on Project work In MASTER OF BUSINESS ADMINISTRATION (MBA) By Somesh Behere GUIDED BY: SUBMITTED BY: PROF.GARGI NAIDU SOMESH BEHERE HOD ACADEMICS MBA IV Semester VIM BHOPAL BHOPAL VIDYASAGAR INSTITUTE OF MANAGEMENT BARKATULLAH UNIVERSITY BHOPAL(M.P.) SESSION (2008-2010) BONOFIDE CERTIFICATE This is to certify that the Report on Project Work titled “RISK RETURN ANALYSIS AND COMPARATIVE STUDY OF MUTUAL FUNDS” for HDFC Asset Management Company Ltd. is a bonafide record of the work done by Somesh Behere studying in Master of Business Administration in Vidhyasagar Institute of Management ,Bhopal during the year 2008-10. Project Viva-Voce held on..................... Internal examiner External examiner EXECUTIVE SUMMARY The performance evaluation of mutual fund is a vital matter of concern to the fund managers, investors, and researchers alike. The core competence of the company is to meet objectives and the needs of the investors and to provide optimum return for their risk. This study tries to find out the risk and return allied with the mutual funds. This project paper is segmented into three sections to explore the link between conventional subjective and statistical approach of Mutual Fund analysis. To start with, the first section deals with the introductory part of the paper by giving an overview of the Mutual fund industry and company profile. This section also talks about the theory of portfolio analysis and the different measures of risk and return used for the comparison. The second section details on the need, objective, and the limitations of the study. It also discusses about the sources and the period for the data collection. It also deals with the data interpretation and analysis part wherein all the key measures related to risk and return are done with the interpretation of the results. In the third section, an attempt is made to analyse and compare the performance of the equity mutual fund. For this purpose β-value, standard deviation, and risk adjusted performance measures such as Sharpe ratio, Treynor measure, Jenson Alpha, and Fema measure have been used. The portfolio analysis of the selected fund has been done by the measure return for the holding period. At the end, it illustrates the suggestions and findings based on the analysis done in the previous sections and finally it deals with conclusion part. ACKNOWLEDGEMENT I take this opportunity to express my deep sense of gratitude to all those who have contributed significantly by sharing their knowledge and experience in the completion of this project work. I am greatly obliged to, for providing me with the right kind of opportunity and facilities to complete this venture. My first word of gratitude is due to Mr.Sidhartha Chattergee – Branch Manager, HDFC AMC,Allhabad, my corporate guide, for his kind help and support and his valuable guidance throughout my project. I am thankful to him for providing me with necessary insights and helping me out at every single step. I am also thankful to Prof Ashok Diwedi Executive Trainee, the former student of VIDHYASAGAR INSTITUTE OF MANAGEMENT, Bhopal for her constant valuable assistance and consultancy. I also thank Mr.Ankit Kumr, Unit Manager for his kind words of encouragement. Above all, I express my words of gratitude to HDFC AMC, Allahabad Branch for proving me with all the knowledge resources and enabling me to pass AMFI-MTUTUAL FUND (ADVISOR) MODULE; NSE’s CERTIFICATION IN FINANCIAL MARKETS (NCFM) with 74.5 percentages. I am extremely thankful to Miss. Gargi Naidu – my internal faculty guide under whose able guidance this project work was carried out. I thank her for her continuous support and mentoring during the tenure of the project. Finally, I would also like to thank all my dear friends for their cooperation, advice and encouragement during the long and arduous task of carrying out the project and preparing this report. PREFACE This is the age of technical up gradation. Nothing remains same for a long period every thing change with a certain span of time. So it is must for every organization to put a birds eye view on it’s over all functioning. This report was preparing during practical training of Master of business administration (M.B.A.) from Vidyasagar institute of management Bhopal (M.P.) .The student of M.B.A.essentially required a practical training of 4to6 weeks in any organization. It gives an opportunity to the student to test their acquired knowledge through practical experiences. The objective of my study was Risk Return Analysis And Comparative Study Of Mutual Funds “HDFC Asset Management Company Ltd.” I however present this report In all my modesty to the readers with a faith that it shall serve the causes of subject. . PLACE-……….. DATE………….. SOMESH BEHERE TABLE OF CONTENTS Page No. Part-I 1-37 Executive Summary Iii A. Mutual Fund Overview 1-19 1.1 Mutual Fund an Investment Platform 1-2 1.2 Advantages of Mutual Fund 3 1.3 Disadvantage of Investing Through Mutual Funds 4 1.4 Categories of Mutual Fund 4-8 1.5 Investment Strategies 8 1.6 Organisation of Mutual Fund 9-11 1.7 Distribution Channels 12 1.8 HDFC AMC Company Overview 12-19 B. Measuring and Evaluating Mutual Funds Performance 20-37 1.2.1 Purpose of Measuring and Evaluating 20-21 1.2.2 Financial Planning for Investors referring to Mutual Funds 22 1.2.3 Why Has It Become One Of The Largest Financial Instruments? 22-25 1.2.4 Evaluating Portfolio Performance 26 1.2.5 How to Reduce Risk While Investing 26-28 1.2.6 A Study of Portfolio Analysis from The Point Of Fund Manager 28-29 1.2.7 Measures of Risk and Return 29-37 Part-II 38-40 Research Methodology 2.1 Need For the Study 38-39 2.2 Objective of the Study 39 2.3 Limitations of the Study 40 2.4 Data Collection 40 Part-III 41-102 Case Analysis 3.1 Data Interpretation 41-87 3.2 Analysis of the observation 87-97 3.3 Findings 98 3.4 Recommendations 99-100 3.5 Conclusion 101 References 102 PART-I 1. MUTUAL FUND OVERVIEW 1.1 MUTUAL FUND AN INVESTMENT PLATFORM Mutual fund is an investment company that pools money from small’s PROFIT/LOSS FORM PORTFOLIO OF INVESTMENT INVEST IN VARIETY OF STOCKS/BONDS PROFIT/LOSS FROM INDIVIDUAL MARKET (FLUCTUATIONS) INVESTOR INVEST THEIR MONEY MUTUAL FUND SHEMES current net asset value: total fund assets divided by shares outstanding. Figure: 1.1 In Simple Words,. 1.2 ADVANTAGES OF MUTUAL FUND Table:1.1 S. No. Advant age Particulars 1. Portfoli o Diversif ication Mutual Funds invest in a well-diversified portfolio of securities which enables investor to hold a diversified investment portfolio (whether the amount of investment is big or small). 2. Professi onal Manage ment Fund manager undergoes through various research works and has better investment management skills which ensure higher returns to the investor than what he can manage on his own. 3. Less Risk Investors acquire a diversified portfolio of securities even with a small investment in a Mutual Fund. The risk in a diversified portfolio is lesser than investing in merely 2 or 3 securities. 4. Low Transac tion Costs Due to the economies of scale (benefits of larger volumes), mutual funds pay lesser transaction costs. These benefits are passed on to the investors. 5. Liquidit y An investor may not be able to sell some of the shares held by him very easily and quickly, whereas units of a mutual fund are far more liquid. 6. Choice of Scheme s Mutual funds provide investors with various schemes with different investment objectives. Investors have the option of investing in a scheme having a correlation between its investment objectives and their own financial goals. These schemes further have different plans/options 7. Transp arency Funds provide investors with updated information pertaining to the markets and the schemes. All material facts are disclosed to investors as required by the regulator. Flexibili ty part of a well-regulated investment environment where the interests of the investors are protected by the regulator. All funds are registered with SEBI and complete transparency is forced. 8. 9. 1.3 DISADVANTAGE OF INVESTING THROUGH MUTUAL FUNDS Table:1.2 S. No. Disadva ntage Particulars 1. Costs Control Not in the Hands of an Investor Investor has to pay investment management fees and fund distribution costs as a percentage of the value of his investments (as long as he holds the units), irrespective of the performance of the fund. 2. No Custom ized Portfoli os The portfolio of securities in which a fund invests is a decision taken by the fund manager. Investors have no right to interfere in the decision making process of a fund manager, which some investors find as a constraint in achieving their financial objectives. 3. Difficult y in Selectin g a Suitable Fund Scheme Many investors find it difficult to select one option from the plethora of funds/schemes/plans available. For this, they may have to take advice from financial planners in order to invest in the right fund to achieve their objectives. 1.4 CATEGORIES OF MUTUAL FUND BASED ON THEIR STURCTURE OPEN ENDED FUNDS CLOSE-ENDED FUNDS Figure:1.2 2. BASED ON INVESTMENT OBJECTIVE EQUITY FUNDS BALANCED FUNDS DEBT FUNDS INDEX FUNDS LEQUID FUNDS DEBT ORIENTED GUILT FUNDS DEVIDEND YEILD EQUITY ORIENTED EQUITY DIVERSIFIED INCOME FUNDS THEMANTIC FUND FMPS FUNDS SECTOR FUND FLOATING RATE ELSS ARBITAGE FUNDS Mutual funds can be classified as follow: Based on their structure: Open-ended funds: Investors can buy and sell the units from the fund, at any point of time. Close-ended funds: These funds raise money from investors only once. Therefore, after the offer period, fresh investments cannot be made into the fund. If the fund is listed on a stocks exchange, the units can be traded like stocks (E.g., Morgan Stanley Growth Fund). Recently, most of the New Fund Offers of close-ended funds provided liquidity window on a periodic basis such as monthly or weekly. Redemption of units can be made during specified intervals. Therefore, such funds have relatively low liquidity. portfolio mirrors the benchmark index in terms of both composition and individual stock weightages.. ďƒ˜ Balanced fund: Their investment portfolio includes both debt and equity. As a result, on the risk-return ladder, they fall between equity and debt funds. Balanced funds are the ideal mutual funds vehicle for investors who prefer spreading their risk across various instruments. Following are balanced funds classes: 2 Debt-oriented funds -Investment below 65% in equities. 3 Equity-oriented funds -Invest at least 65% in equities, remaining in debt. ďƒ˜bills. 3. Floating rate funds - Invest in short-term debt papers. Floaters invest in debt instruments, which have variable coupon rate. 4. Arbitrage fund- They generate income through arbitrage opportunities due to misspricing. How are funds different in terms of their risk profile: Table:1.3 Equity Funds High level of return, but has a high level of risk too Debt funds Returns comparatively less risky than equity funds Liquid and Market funds Money Provide stable but low level of return 1.5. 1.6. ORGANISATION OF MUTUAL FUND: Figure:1.4: Table1.4 ASSET UNDER MANAGEMENT OF TOP AMC,S as on Jun 30, 2009 Mutual Fund Name No. of Corpus (Rs.Crores) schemes Reliance Mutual Fund 263 108,332.36 HDFC Mutual Fund 202 78,197.90 ICICI Prudential Mutual Fund 325 70,169.46 UTI Mutual Fund 207 67,978.19 Birla Sun Life Mutual Fund 283 56,282.87 SBI Mutual Fund 130 34,061.04 LIC Mutual Fund 70 32,414.92 Kotak Mahindra Mutual Fund 124 30,833.02 Franklin Templeton Mutual Fund 191 25,472.85 IDFC Mutual Fund 164 21,676.29 Tata Mutual Fund 175 21,222.81 The graph indicates the growth of assets over the years. Figure:1.5 1.7 DISTRIBUTION CHANNELS: Mutual funds posses a very strong distribution channel so that the ultimate customers doesn’t face any difficulty in the final procurement. The various parties involved in distribution of mutual funds are: 1. Direct marketing by the AMCs: the forms could be obtained from the AMCs directly. The investors can approach to the AMCs for the forms. some of the top AMCs of India are; Reliance ,Birla Sunlife, Tata, SBI magnum, Kotak Mahindra, HDFC, Sundaram, ICICI, Mirae Assets, Canara Robeco, Lotus India, LIC, UTI etc. whereas foreign AMCs include: Standard Chartered, Franklin Templeton, Fidelity, JP Morgan, HSBC, DSP Merill Lynch, etc. 2. Broker/ sub broker arrangements: the AMCs can simultaneously go for broker/subbroker to popularize their funds. AMCs can enjoy the advantage of large network of these brokers and sub brokers. 3. Individual agents, Banks, NBFC: investors can procure the funds through individual agents, independent brokers, banks and several non- banking financial corporations too, whichever he finds convenient for him. 1.8 HDFC AMC COMPANY OVERVIEW HDFC ASSET MANAGEMENT COMPANY LIMITED (AMC) AMC was incorporated under the Companies Act, 1956, on December 10, 1999, and was approved to act as an AMC for the Mutual Fund by SEBI on July 30, 2000. The registered office of the AMC is situated at Ramon House, 3rd Floor, H.T. Parekh Marg, 169, Back bay Reclamation, Church gate, Mumbai - 400 020. In terms of the Investment Management Agreement, the Trustee has appointed HDFC Asset Management Company Limited to manage the Mutual Fund As per the terms of the Investment Management Agreement, the AMC will conduct the operations of the Mutual Fund and manage assets of the schemes, including the schemes launched from time to time. The present share holding pattern of the AMC is as follows: Table:1.5 Particulars % of the paid up capital Housing Development Finance Corporation Limited 50.10 Standard Life Investments Limited 49.90 Schemes of Zurich India Mutual Fund has now migrated to HDFC Mutual Fund on June 19, 2003. The AMC is also providing portfolio management / advisory services and such activities are not in conflict with the activities of the Mutual Fund. The AMC has renewed its registration from SEBI vide Registration No. - PM / INP000000506 dated December 22, 2000 to act as a Portfolio Manager under the SEBI (Portfolio Managers) Regulations, 1993. The Certificate of Registration is valid from January 1, 2004 to December 31, 2006. Board of Directors The Board of Directors of the HDFC Asset Management Company Limited (AMC) consists of the following eminent persons. Table:1.6 Mr. Deepak S. Parekh Mr. N. Keith Skeoch Mr. Keki M. Mistry Mr. James Aird Mr. P. M. Thampi Mr. Humayun Dhanrajgir Dr. Deepak B. Phatak Mr. Hoshang S. Billimoria Mr. Rajeshwar Raj Bajaaj Mr. Vijay Merchant Ms. Renu S. Karnad Chairman of the board CEO of Standard Life Investments Ltd. Associate director Investment director Independent director Independent director Independent director Independent director Independent director Independent director Joint managing director Mr. Milind Barve Managing director Mr. Deepak Parekh, the Chairman of the Board, is associated with HDFC Ltd. in his capacity as its Executive Chairman. Mr. Parekh joined HDFC Ltd. in a senior management position in 1978. He was inducted as Wholetime Director of HDFC Ltd. in 1985 and was appointed as the Executive Chairman in 1993. Mr. N. Keith Skeoch is associated with Standard Life Investments Limited as its Chief Executive and is responsible for all company business and investment operations within Standard Life Investments Limited. Mr. Keki M. Mistry is an associate director on the Board. He is the Vice-Chairman & Managing Director of Housing Development Finance Corporation Limited (HDFC Ltd.) He is with HDFC Ltd. since 1981 and was appointed as the Executive Director of HDFC Ltd. in 1993. He was appointed as the Deputy Managing Director in 1999, Managing Director in 2000 and Vice Chairman & Managing Director in 2007. SPONSORS HOUSING DEVELOPMENT FINANCE CORPORATION LIMITED (HDFC): HDFC was incorporated in 1977 as the first specialised housing finance institution in India. HDFC provides financial assistance to individuals, corporate and developers for the purchase or construction of residential housing. It also provides property related services (e.g. property identification, sales services and valuation), training and consultancy. Of these activities, housing finance remains the dominant activity. HDFC currently has a client base of over 8, 00,000 borrowers, 12, 00,000 depositors, 92,000 shareholders and 50,000 deposit agents. HDFC raises funds from international agencies such as the World Bank, IFC (Washington), USAID, CDC, ADB and KFW, domestic term loans from banks and insurance companies, bonds and deposits. HDFC has received the highest rating for its bonds and deposits program for the ninth year in succession. HDFC Standard Life Insurance Company Limited, promoted by HDFC was the first life insurance company in the private sector to be granted a Certificate of Registration (on October 23, 2000) by the Insurance Regulatory and Development Authority to transact life insurance business in India.. STANDARD LIFE INVESTMENTS LIMITED The Standard Life Assurance Company was established in 1825 and has considerable experience in global financial markets. In 1998, Standard Life Investments Limited became the dedicated investment management company of the Standard Life Group and is owned 100% by The Standard Life Assurance Company. With global assets under management of approximately US$186.45 billion as at March 31, 2005, Standard Life Investments Limited is one of the world's major investment companies and is responsible for investing money on behalf of five million retail and institutional clients worldwide. With its headquarters in Edinburgh, Standard Life Investments Limited has an extensive and developing global presence with operations in the United Kingdom, Ireland, Canada, USA, China, Korea and Hong Kong. 2% of the market capitalization of the London Stock Exchange. HDFC MUTUAL FUND PRODUCTS Equity Funds HDFC Growth Fund HDFC Long Term Advantage Fund HDFC Index Fund HDFC Equity Fund HDFC Capital Builder Fund HDFC Tax saver HDFC Top 200 Fund HDFC Core & Satellite Fund HDFC Premier Multi-Cap Fund HDFC Long Term Equity Fund HDFC Mid-Cap Opportunity Fund Balanced Funds HDFC Children's Gift Fund Investment Plan HDFC Children's Gift Fund Savings Plan HDFC Balanced Fund HDFC Prudence Fund Debt Funds HDFC Income Fund HDFC Liquid Fund HDFC Gilt Fund Short Term Plan HDFC Gilt Fund Long Term Plan HDFC Short Term Plan HDFC Floating Rate Income Fund Short Term Plan HDFC Floating Rate Income Fund Long Term Plan HDFC Liquid Fund - PREMIUM PLAN HDFC Liquid Fund - PREMIUM PLUS PLAN HDFC Short Term Plan - PREMIUM PLAN HDFC Short Term Plan - PREMIUM PLUS PLAN HDFC Income Fund Premium Plan HDFC Income Fund Premium plus Plan HDFC High Interest Fund HDFC High Interest Fund - Short Term Plan HDFC Sovereign Gilt Fund - Savings Plan HDFC Sovereign Gilt Fund - Investment Plan HDFC Sovereign Gilt Fund - Provident Plan HDFC Cash Management Fund - Savings Plan HDFC Cash Management Fund - Call Plan HDFCMF Monthly Income Plan - Short Term Plan HDFCMF Monthly Income Plan - Long Term Plan HDFC Cash Management Fund - Savings Plus Plan HDFC Multiple Yield Fund HDFC Multiple Yield Fund Plan 2005 ACHIEVEMENT AND AWARDS CNBC - TV 18 - CRISIL Mutual Fund of the Year Awards 2008 : HDFC Prudence Fund was the only scheme that won the CNBC - TV 18 - CRISIL Mutual Fund of the Year Award 2008 in the Most Consistent Balanced Fund under CRISIL ~ CPR for the calendar year 2007 (from amongst 3 schemes). HDFC Cash Management Fund - Savings Plan was the only scheme that won the CNBC TV 18 - CRISIL Mutual Fund of the Year Award 2008 in the Most Consistent Liquid Fund under CRISIL ~ CPR for the calendar year 2007 (from amongst 5 schemes). HDFC Cash Management Fund - Savings Plan was the only scheme that won the CNBC - TV 18 - CRISIL Mutual Fund of the Year Award 2008 in the Liquid Scheme – Retail Category for the calendar year 2007 (from amongst 19 schemes). Lipper Fund Awards 2008: HDFC Equity Fund - Growth has been awarded the 'Best Fund over Ten Years' in the 'Equity India Category' at the Lipper Fund Awards 2008 (form amongst 23 schemes). It was awarded the Best Fund over ten years in 2006 and 2007 as well. 2008 makes it three in a row. Lipper Fund Awards 2009 : HDFC Equity Fund - Growth has been awarded the 'Best Fund over Ten Years' in the 'Equity India Category' (form amongst 34 schemes) and HDFC Prudence Fund – Growth Plan in the ‘Mixed Asset INR Aggressive Category’ (from amongst 6 schemes), have been awarded the ‘Best Fund over 10 Years’ by Lipper Fund Awards India 2009. ICRA Mutual Fund Awards – 2008 : HDFC MF Monthly Income Plan - Long Term Plan - Ranked a Seven Star Fund and has been awarded the Gold Award for "Best Performance" in the category of "Open Ended Marginal Equity" for the three year period ending December 31, 2007 (from amongst 27 schemes) HDFC High Interest Fund - Short Term Plan - Ranked a Five Star Fund indicating performance among the top 10% in the category of "Open Ended Debt - Short Term" for one year period ending December 31, 2007 (from amongst 20 schemes). HDFC Prudence Fund - Ranked a Five Star Fund indicating performance among the top 10% in the category of "Open Ended Balanced" for the three year period ending December 31, 2007 (from amongst 16 schemes) B. MEASURING AND EVALUATING MUTUAL FUNDS PERFORMANCE: 1.2.1 PURPOSE OF MEASURING AND EVALUATING Every investor investing in the mutual funds is driven by the motto of either wealth creation or wealth increment or both. Therefore it’s very necessary to continuously evaluate the funds’ performance with the help of factsheets and newsletters, websites, newspapers and professional advisors like HDFC AMC. If the investors ignore the evaluation of funds’ performance then he can lose hold of it any time. In this ever-changing industry, he can face any of the following problems: 1. Variation in the funds’ performance due to change in its management/ objective. 2. The funds’ performance can slip in comparison to similar funds. 3. There may be an increase in the various costs associated with the fund. 4 .Beta, a technical measure of the risk associated may also surge. 5. The funds’ ratings may go down in the various lists published by independent rating agencies. 6. It can merge into another fund or could be acquired by another fund house. Performance measures: Equity funds: the performance of equity funds can be measured on the basis of: NAV Growth, Total Return; Total Return with Reinvestment at NAV, Annualized Returns and Distributions, Computing Total Return (Per Share Income and Expenses, Per Share Capital Changes, Ratios, Shares Outstanding), the Expense Ratio, Portfolio Turnover Rate, Fund Size, Transaction Costs, Cash Flow, Leverage. Debt fund: Likewise, the performance of debt funds can be measured on the basis of: Peer Group Comparisons, The Income Ratio, Industry Exposures and Concentrations, NPAs, besides NAV Growth, Total Return and Expense Ratio. Liquid funds: the performance of the highly volatile liquid funds can be measured on the basis of: Fund Yield, besides NAV Growth, Total Return and Expense Ratio. Concept of benchmarking for performance evaluation: Every fund sets its benchmark according to its investment objective. The funds performance is measured in comparison with the benchmark. If the fund generates a greater return than the benchmark then it is said that the fund has outperformed benchmark , if it is equal to benchmark then the correlation between them is exactly 1. And if in case the return is lower than the benchmark then the fund is said to be underperformed. Some of the benchmarks are: 1. Equity funds: market indices such as S&P CNX nifty, BSE100, BSE200, BSE-PSU, BSE 500 index, BSE bankex, and other sectoral indices. 2. Debt funds: Interest Rates on Alternative Investments as Benchmarks, I-Bex Total Return Index, JPM T-Bill Index Post-Tax Returns on Bank Deposits versus Debt Funds. 3. Liquid funds: Short Term Government Instruments’ Interest Rates as Benchmarks, JPM T- Bill Index. To measure the fund’s performance, the comparisons are usually done with: I) with a market index. ii) Funds from the same peer group. iii) Other similar products in which investors invest their funds. 1.2.2. 1.2.3. Measuring these investment options on the basis of the mentioned parameters, we get this in a tabular form Table:1.7 Return Safety Volatility Liquidity Convenienc e Equity High Low High High Moderate Bonds Moderate High Moderate Moderate High Co. Moderate Debent ures Moderate Moderate Low Low Co. FDs Moderate Low Low Low Moderate Bank Deposi ts Low High Low High High PPF Moderate High Low Moderate High Life Insura nce Low High Low Low Moderate Gold Moderate High Moderate Moderate Gold Real Estate High Moderate High Low Low Mutual Funds High High Moderate High High We can very well see that mutual funds outperform every other investment option. On three parameters, it scores high whereas it’s moderate at one. comparing it with the other options, we find that equities gives us high returns with high liquidity but its volatility too is high with low safety which doesn’t makes it favourite among persons who have low risk- appetite. Even the convenience involved with investing in equities is just moderate. Now looking at bank deposits, it scores better than equities at all fronts but lags badly in the parameter of utmost important ie; it scores low on return , so it’s’t‌. One can start investing in mutual funds with amount as low as Rs. 500 through SIPs and even Rs. 100 in some’s i.e. consistent performance, risk adjusted returns, total returns and protection of capital. Each of these factors is very important and ha. 1.2.4 EVALUATING PORTFOLIO PERFORMANCE It is important to evaluate the performance of the portfolio on an on-going basis. The following factors are important in this process: Consider long-term record of accomplishment record of accomplishment against similar funds. Success in managing a small or in a fund focusing on a particular segment of the market cannot be re lied upon as an evidence of anticipated performance in managing a large or a broad based fund. Discipline in investment approach is an important factor as the pressure to perform can make a fund manager susceptible to have a n urge to change tracks in terms of stock selection as well a s investment strategy. The objective should be to differentiate investment skill of the fund manager from luck and to identify those funds with the greatest potential of future success. 1.2.5 it self percent and if inflation is 6 per cent then real rate of return from fixed deposit is reduced by 6 percent. Similarly if returns generated from equity market is 18 percent. Therefore, de posit investment is not lost. Similarly if our equity investment is in Tata Motors, HLL, Infosys, diversification and averaging. of both Portfolio of mutual funds consists of multiple securities and hence adverse news about single security will have nominal impact on overall portfolio. By systematically investing in mutual fund, we get benefit of rupee cost averaging. Mutual fund as an investment vehicle helps reduce, both, systematic as well a s unsystematic risk 1.2.6 A STUDY OF PORTFOLIO ANALYSIS FROM THE POINT OF FUND MANAGER: Effective use of portfolio management disciplines improves customer satisfaction, reduces the number of risks problems, and increases success. The goal of portfolio analysis is to realize these same benefits at the portfolio level by applying a consistent structured management approach. The considerations underlying the portfolio analysis is a matter of concern to the fund managers, investors, and researchers alike. This study attempts to answer two questions relating to the portfolio analysis: • Make an average (or fair) return for the level of risk in the portfolio • To find out the portfolio which best meets the purpose of the investor. At a minimum, any comprehensive mutual fund selection and analysis approach should include the following generalized processes: • Fund selection • Fund prioritize/ reprioritize • Selection of the acceptable and required fund • Fund analysing and monitoring • Corrective action management The fund portfolio analysis gives the ability to select funds that are aligned with the investor’s strategies and objectives. It helps the fund manager to make the best use of available opportunities by applying to the highest priority of the investor. A fund manager can regularly assess how securities and stocks are contributing to portfolio health and can make the corrective action to keep the portfolio in compliance with the investor’s interest and objectives. Mutual funds do not determine risk preference. However, once investor determines his/her return preferences, he/she can choose a mutual fund a large and growing variety of alternative funds designed to meet almost any investment goal. Studies have showed that the funds generally were consistent in meeting investors stated goals for investment strategies, risk, and return. The major benefit of the mutual fund is to diversify the portfolio to eliminate unsystematic risk. The instant diversification of the funds is especially beneficial to the small investors who do not have the resources to acquire 100 shares of 12 or 15 different issues required to reduce unsystematic risk. Mutual funds have generally maintained the stability of their correlation with the market because of reasonably well diversified portfolios. There are some measures for the analysis and each of them provides unique perspectives. These measures evaluate the different components of performance. 1.2.7 MEASURES OF RISK AND RETURN: Risk is variability in future cash flows. It is also known as uncertainty in the distribution of possible outcomes. A risky situation is one, which has some probability of loss or unexpected results. The higher the probability of loss or unexpected results is, the greater the risk. It is the uncertainty that an investment will earn its expected rate of return. For an investor, evaluating a future investment alternative expects or anticipates a certain rate of return is very important. Portfolio risk management includes processes that identify, analyse, respond to, track, and control any risk that would prevent the portfolio from achieving its business objectives. These processes should include reviews of project level risks with negative implications for the portfolio, ensuring that the project manager has a responsible risk mitigation plan. Additionally, it is important to do a consolidated risk assessment for the portfolio overall to determine whether it is within the already specified limits. Since portfolio and their environments are dynamic, managers should review and update their portfolio risk management plans on a regular basis through the fund life cycle. Simple measure of returns: The return on mutual fund investment includes both income (in the form of dividends or investment payments) and capital gains or losses (increase or decrease in the value of a security). The return is calculated by taking the change in a fund’s Net Asset Value, which is the market value of securities the fund holds divided by the number of the fund’s shares during a given time period, assuming the reinvestment all income and capital gains distributions, and dividing it by the original net asset value. The return is calculated net of management fees and other expenses charged to the fund. Thus, a fund’s monthly return can be expressed as follows: Rt= (NAVt- NAVt-1)/NAVt-1 Where, Rt is the return in month t NAVt is the closing net asset value of the fund on the last trading day of the month NAVt-1 is the closing net asset value of the fund on the last day of the previous month Measure of risk Investors are interested not only in fund’s return but also in risk taken to achieve those returns. So risk can be thought as the uncertainty of the expected return, and uncertainty is generally equated with variability. Variability and the risk are correlated; hence high returns will tend to high variability. Standard deviation: in simple terms standard deviation is one of the commonly used statistical parameter to measure risk, which determines the volatility of a fund. Deviation is defined as any variation from a mean value (upward & downward). Since the markets are volatile, the returns fluctuate every day. High standard deviation of a fund implies high volatility and a low standard deviation implies low volatility. S.D. =√1/T× (Rt-AR) ² Where, S.D. is the periodic standard deviation, AR is the average periodic return, T is the number of observations in the period for which the standard deviation is being calculated. Rt is the return in month t Beta analysis: ) β(Beta) Co-efficient: Systematic risk is measured in terms of Beta, which represents fluctuations in the NAV of the fund vis-à-vis market. The more responsive the NAV of a Mutual Fund is to the changes in the market; higher will be its beta. Beta is calculated by relating the returns on a Mutual Fund with the returns in the market. While unsystematic risk can be diversified through investments in a number of instruments, systematic risk cannot. By using the risk return relationship, we try to assess the competitive strength of the Mutual Funds vis-à-vis one another in a better way. β(Beta) is calculated as = [N (ΣXY) – ΣXΣY]/ [N (ΣX2) – (ΣX) 2 ] Beta is used to measure the risk. It basically indicates the level of volatility associated with the fund as compared to the market. In case of funds, as compared to the market. success of beta is heavily dependent on the correlation between a fund and its benchmark. Thus, if the fund’s portfolio doesn’t have a relevant benchmark index then a beta would be grossly inappropriate. For example if we are considering a banking fund, we should look at the beta against a bank index. R-Squared (R2): R squared is the square of ‘R’ (i.e.; coefficient of correlation). It describes the level of association between the fun’s market volatility and market risk. The value of R- squared ranges from0 to1. A high R- squared (more than 0.80) indicates that beta can be used as a reliable measure to analyze the performance of a fund. Beta should be ignored when the rsquared is low as it indicates that the fund performance is affected by factors other than the markets. For example: Case 1 Case 2 R2 0.65 0.88 B 1.2 0.9 In the above tableR2 is less than 0.80 in case 1, implies that it would be wrong to mention that the fund is aggressive on account of high beta. In case 2, the r- squared is more than 0.85 and beta value is 0.9. it means that this fund is less aggressive than the market. ďƒ˜ Portfolio turnover ratio: Portfolio turnover is a measure of a fund's trading activity and is calculated by dividing the lesser of purchases or sales (excluding securities with maturities of less than one year) by the average monthly net assets of the fund. Turnover is simply a measure of the percentage of portfolio value that has been transacted, not an indication of the percentage of a fund's holdings that have been changed. Portfolio turnover is the purchase and sale of securities in a fund's portfolio. A ratio of 100%, then, means the fund has bought and sold all its positions within the last year. Turnover is important when investing in any mutual fund, since the amount of turnover affects the fees and costs within the mutual fund. Total expenses amount, which represents the TER: Total expense ratio = (Total fund Costs/ Total fund Assets) The most important and widely used measures of performance are: The Sharpe Measure The Treynor’Measure Jenson Model Fama Model ďƒ˜ The Sharpe Measure :- at a percentage of the fund that the investors are concerned about. So, the model evaluates funds on the basis of reward per unit of total risk. Symbolically, it can be written as: Sharpe Ratio (Si) = (Ri - Rf)/Si Where, Si is standard deviation of the fund, Ri represents return on fund, and Rf is risk free rate of return. While a high and positive Sharpe Ratio shows a superior risk-adjusted performance of a fund, a low and negative Sharpe Ratio is an indication of unfavourable performance. ďƒ˜. ďƒ˜ Jenson Model: Jenson's model proposes another risk adjusted performance measure. This measure was developed by Michael Jenson and is sometimes referred to as the differential Return Method. This measure involves evaluation of the returns that the fund has generated vs. the returns actually expected out of the fund1 given the level of its systematic risk. The surplus between the two returns is called Alpha, which measures the performance of a fund compared with the actual returns over the period. Required return of a fund at a given level of risk (Bi) can be calculated as: E(Ri) = Rf + Bi (Rm - Rf) Where, E(Ri) represents expected return on fund, and Rm is average market return during the given period, Rf is risk free rate of return, and Bi is Beta deviation of the fund. After calculating it, Alpha can be obtained by subtracting required return from the actual return of the fund. Îąp= Ri –[ Rf + Bi (Rm - Rf) ] Higher alpha represents superior performance of the fund and vice versa. Limitation of this model is that it considers only systematic risk not the entire risk associated with the fund and an ordinary investor cannot mitigate unsystematic risk, as his knowledge of market is primitive. ďƒ˜. The Net Selectivity. Selectivity: measures the ability of the portfolio manager to earn a return that is consistent with the portfolio’s market (systematic) risk. The selectivity measure is: Ri –[ Rf + Bi (Rm - Rf) ] Diversification: measures the extent to which the portfolio may not have been completely diversified. Diversification is measured as: [Rf +(Rm - Rf)(αi/ αm)]-[Rf + Bi (Rm - Rf)] If the portfolio is completely diversified, contains no unsystematic risk, then diversification measure would be zero. A positive diversification measure indicates that the portfolio is not completely diversified; it would contain unsystematic risk and it represents the extra return that the portfolio should earn for not being completely diversified. The performance of the portfolio can be measured as: Net selectivity = selectivity – diversification Net selectivity measures, how well the portfolio mangers did manager did at earning a fair return for the portfolio’ systematic risk and diversifying away unsystematic risk. Positive net selectivity indicates that the fund earned a better return. The comparison, done based on sharpe ratio, Treynor measure, Jensen alpha, and Fema measure notifies that the portfolio performance can be evaluated on the following basis: Sahrpe ratio: measures the reward to total risk trade off Treynor: measures the reward to systematic risk trade off Jensen’s alpha: measures the average return over and above that predicted. Fema measure: measures return of portfolio for its systematic risk and diversifying away unsystematic risk. Among the above performance measures, two models namely, Treynor measure and Jenson model use Systematic risk isify.. PART-II RESEARCH METHODOLOGY 2.1 NEED FOR THE STUDY The Mutual Fund Companies periodically build up a study, which can prioritize and analyse the portfolio of the mutual funds. This study is helpful in having a comparison among the mutual funds based on the risk bearing capacity and expected return of the investor and will also carry out an analysis of the portfolio of the selected mutual fund. The mutual fund industry is growing globally and new products are emerging in the market with all captivating promises of providing high return. It has become difficult for the investors to choose the best fund for their needs or in other words to find out a fund which will give maximum return for minimum risk. Therefore, they turn to their financial adviser to get precise direct investment. Hence, the company asked me to prepare a model, which will facilitate them to analyse the fund and to have reasonable estimation for the fund performance. The driving force of Mutual Funds is the ‘safety of the principal’ guaranteed, plus the added advantage of capital appreciation together with the income earned in the form of interest or dividend. The various schemes of Mutual Funds provide the investor with a wide range of investment options according to his risk bearing capacities and interest besides; they also give handy return to the investor. Mutual Funds offers an investor to invest even a small amount of money, each Mutual Fund has a defined investment objective and strategy. Mutual Funds schemes are managed by respective asset managed companies, sponsored by financial institutions, banks, private companies or international firms. A Mutual Fund is the ideal investment vehicle for today’s complex and modern financial scenario. The study is basically made to analyze the various open-ended equity schemes of HDFC Asset Management Company to highlight the diversity of investment that Mutual Fund offer. Thus, through the study one would understand how a common person could fruitfully convert a meagre amount into great penny by wisely investing into the right scheme according to his risk taking abilities. Sharpe ratio is a performance measure, which reflects the excess return earned on a portfolio per unit of its total risk (standard deviation). Treynor measure indicates the risk premium return per unit risk of the portfolio. While Jensen alpha talks about the deviation of the actual return from its expected one. Fema measure decomposes the portfolio total return into two main components: systematic return and the unsystematic return. It determines whether the portfolio is perfectly diversified or not. Hence, it is a significant measure to evaluate the performance of the fund manager. The analysis of the fund portfolio has been done to find out the influence of the top holdings on the performance of the fund. All these measures give fair implication and results about the portfolio performance and can show the ground reality to a rational investor. 2.2 OBJECTIVE OF THE STUDY ďƒ˜ Whether the growth oriented Mutual Fund are earning higher returns than the benchmark returns (or market Portfolio/Index returns) in terms of risk. Whether the growth oriented mutual funds are offering the advantages of Diversification, Market timing and Selectivity of Securities to their investors This study provides a proper investigation for logical and reasonable comparison and selection of the funds. It also assists in analysing the portfolio of the selected funds. 2.3 LIMITATIONS OF THE STUDY The study is limited only to the analysis of different schemes and its suitability to different investors according to their risk-taking ability. The study is based on secondary data available from monthly fact sheets, websites and other books, as primary data was not accessible. The study is limited by the detailed study of six schemes of HDFFC. Many investors are all price takers. The assumption that all investors have the same information and beliefs about the distribution of returns. Banks are free to accept deposits at any interest rate within the ceilings fixed by the Reserve Bank of India and interest rate can vary from client to client. Hence, there can be inaccuracy in the risk free rates. The study excludes the entry and the exit loads of the mutual funds. 2.4 DATA COLLECTION The Methodology involves the selected Open-Ended equity schemes of HDFC mutual fund for the purpose of risk return and comparative analysis the competitive funds. The data collected for this project is basically from secondary sources, they are; The monthly fact sheets of HDFC AMC fund house and research reports from banks. The NAVs of the funds have been taken from AMFI websites for the period starting from 31 st jan 2007 to 31st May, 2009. For the Benchmark prices, data has been taken from BSE and NSE sites. Part-III CASE ANALYSIS 3.1 DATA INTERPRETATION Risk returns analysis and comparative study of funds In this section, a sample of HDFC equity related funds have been studied, evaluated and analysed. This study could facilitate to get a fair comparison. The expectations of the study are to give value to the funds by keeping the risk in the view. Here equity funds are taken as they bear high return with high risk. Following are the products of HDFC Mutual Fund, which have been taken the evaluation purpose. HDFC Equity Fund Growth option HDFC Capital Builder HDFC Growth Fund HDFC Long Term Advantage Fund HDFC Top 200 Fund HDFC EQUITY FUND Investment Objective The investment objective of the Scheme is to achieve capital appreciation. Basic Scheme Information Table:3.1 Nature of Scheme Open Ended Growth Scheme Inception Date Jan 01, 1995 asset allocation under the Scheme will be as follows: Table:3.2 SR NO. TYPE OF INSTRUMENTS NORMAL ALLOCATION (%of net asset) 1 Equities & Equities related RISK PROFILE 80-100 Medium to high 0-100 Low to medium instruments 2 Debt securities, money market instruments &. Investment Strategy & Risk Control In order to provide long term capital appreciation, the Scheme will invest predominantly in growth companies. Companies selected under this portfolio would as far as practicable consist of medium to large sized companies which: are likely achieved above average growth than the industry; enjoy distinct competitive advantages, and have superior financial strengths. The aim will be to build a portfolio, which represents a cross-section of the strong growth companies in the prevailing market. In order to reduce the risk of volatility, the Scheme will diversify across major industries and economic sectors. Benchmark Index : S&P CNX 500. HDFC Equity, which is benchmarked to S&P CNX 500 Index is not sponsored, endorsed, sold or promoted by Indian Index Service & Products Limited (IISL). Fund Manager : Mr. Prashant Jain HDFC EQUITY FUND-GROWTH OPTION Table:3.3 NAV S&P Ri Rm Ri Rm CNX Rm-Rm sqr(Rm- av Rm av) Rm2 500 2007 151.389 4899.39 FEB 141.228 4504.73 -6.71185 -8.05529 54.06587 -9.0864 82.56268 64.88767 MAR 142.602 4605.89 0.972895 2.24564 2.184771 1.214527 1.475077 5.042897 APL 151.16 4934.46 6.001318 7.133692 42.81156 6.10258 37.24148 50.88956 MAY 161.281 5185.95 6.695554 5.096606 34.1246 4.065494 16.52824 25.9754 JUN 165.313 5223.82 2.499984 0.730242 1.825594 -0.30087 0.090523 0.533254 JULY 172.325 5483.25 4.241651 4.966289 21.06526 3.935177 15.48562 24.66403 AUG 168.827 5411.29 -2.02989 -1.31236 2.663941 -2.34347 5.491863 1.72229 SEP 182.84 6094.11 8.300213 12.61843 104.7357 11.58732 134.266 159.2248 OCT 210.3 7163.3 15.0186 17.54465 263.4959 16.51353 272.6968 307.8146 NOV 206.176 6997.6 -1.96101 -2.31318 4.536164 -3.34429 11.18429 5.3508 DEC 223.324 7461.48 8.317166 6.62913 55.13557 5.598018 31.3378 43.94536 2008 188.42 6245.45 -15.6293 -16.2974 254.7177 -17.3285 300.2786 265.6065 FEB 187.594 6356.92 -0.43838 1.784819 -0.78243 0.753707 0.568075 3.18558 MAR 165.788 5762.88 -11.624 -9.34478 108.6241 -10.3759 107.6591 87.32486 APL 178.191 6289.07 7.481241 9.130678 68.3088 8.099566 65.60296 83.36928 MAY 169.605 5937.81 -4.81843 -5.58525 26.91209 -6.61636 43.77619 31.19497 JAN JAN JUN 143.171 4929.98 -15.5856 -16.9731 264.5363 -18.0042 324.1514 288.0859 JULY 151.715 5297.47 5.967689 7.454188 44.48428 6.423076 41.25591 55.56493 AUG 158.924 5337.28 4.751673 0.751491 3.570838 -0.27962 0.078188 0.564738 SEP 145.721 4807.2 -8.30774 -9.93165 82.50962 -10.9628 120.1822 98.63768 OCT 110.322 3539.57 -24.2923 -26.3694 640.5738 -27.4005 750.7883 695.3455 NOV 101.808 3379.53 -7.71741 -4.52145 34.8939 -5.55257 30.83098 20.44354 DEC 112.377 3635.87 10.38131 7.585078 78.74302 6.553966 42.95447 57.53341 2009 103.754 3538.57 -7.67328 -2.67611 20.53456 -3.70723 13.74352 7.161582 FEB 98.163 3403.33 -5.38871 -3.82188 20.59501 -4.85299 23.55156 14.60679 MAR 108.852 3720.51 10.88903 9.319696 101.4825 8.288584 68.70062 86.85673 APL 127.097 4278.54 16.76129 14.99875 251.3984 13.96764 195.0949 224.9625 MAY 169.897 5480.11 33.67507 28.08365 945.7186 27.05253 731.8396 788.6911 Total 29.7767 28.87114 3533.466 0 3469.417 3499.186 average 1.063454 1.031112 126.1952 0 123.9077 JAN Figure:3.1 σm= √123.9077 =11.13239 β(Beta) =[N (ΣXY) – ΣXΣY ]/[ N (ΣX2) – (ΣX) 2 ] = (98937.047- 859.6872)/( 97977.214- 833.54264) = 98077.36/ 97143.672 = 1.0096114 Table:3.4 Ri Rm Ri-Rm Dev frm ave sq of Dev frm av FEB -6.71185 -8.05529 1.34344 -1.3111 1.71898 MAR 0.972895 2.24564 -1.27274 1.305086 1.70325 APL 6.001318 7.133692 -1.13237 1.164715 1.356561 MAY 6.695554 5.096606 1.598948 -1.56661 2.454256 JUN 2.499984 0.730242 1.769742 0.75698 0.573018 JULY 4.241651 4.966289 -0.72464 0.75698 0.573018 AUG -2.02989 -1.31236 -0.71753 0.749866 0.5623 SEP 8.300213 12.61843 -4.31822 4.350562 18.92739 OCT 15.0186 17.54465 -2.52605 2.558392 6.545367 NOV -1.96101 -2.31318 0.352172 -0.31983 0.102291 2007 JAN DEC 8.317166 6.62913 1.688036 -1.65569 2.741324 2008 -15.6293 -16.2974 0.668127 -0.63579 0.404223 FEB -0.43838 1.784819 -2.2232 2.255543 5.087475 MAR -11.624 -9.34478 -2.27926 2.311604 5.343511 APL 7.481241 9.130678 -1.64944 1.681778 2.828377 MAY -4.81843 -5.58525 0.76682 -0.73448 0.539459 JUN -15.5856 -16.9731 1.387467 -1.35513 1.836366 JULY 5.967689 7.454188 -1.4865 1.518841 2.306878 AUG 4.751673 0.751491 4.000182 -3.96784 15.74376 SEP -8.30774 -9.93165 1.623906 -1.59156 2.533078 OCT -24.2923 -26.3694 2.077092 -2.04475 4.181006 NOV -7.71741 -4.52145 -3.19596 3.228297 10.4219 DEC 10.38131 7.585078 2.796228 -2.76389 7.639067 2009 -7.67328 -2.67611 -4.99717 5.029507 25.29594 FEB -5.38871 -3.82188 -1.56683 1.599166 2.557333 MAR 10.88903 9.319696 1.569336 -1.53699 2.362352 APL 16.76129 14.99875 1.76254 -1.7302 2.993588 MAY 33.67507 28.08365 5.591422 -5.55908 30.90337 Total 29.7767 28.87114 0.90556 160.2354 avrage 1.063454 1.031112 0.032341 5.722694 JAN JAN Standard Deviation for the fund’s excess return (S.D.) σi=√5.722694 =2.392215 Sharpe Index (Si) = (Ri - Rf)/Si = (1.063454-5)/ 2.392215 =-1.64557 Treynor's Index (Ti) = (Ri - Rf)/Bi. =(1.063454-5)/ 1.0096114 =-3.89907 Jenson alpha (αp)= Ri –[ Rf + Bi (Rm - Rf) ] = 1.063454 - [ 5+1.0096114 (1.031112-5)] =0.070488 Expected return E(Ri) = Rf + Bi (Rm - Rf) =[ 5+1.0096114 (1.031112-5)] =0.992965 Fema Measures Selectivity =Ri –[ Rf + Bi (Rm - Rf) ] = 1.063454 - [ 5+1.0096114 (1.031112-5)] =0.070488 Diversification =[Rf + (Rm - Rf)(αi/ αm)]-[Rf + Bi (Rm - Rf) ] =[5+(1.031112-5)(2.392215/11.13139)]- [ 5+1.0096114 (1.031112-5)] =3.154092 Net selectivity= selectivity- diversification =0.070488-3.15409 =-3.0836 HDFC CAPITAL BUILDER FUND Investment Objective The Investment Objective of the Scheme is to achieve capital appreciation in the long term. Basic Scheme Information Nature of Scheme Open Ended Growth Scheme Inception Date February 01, 1994 Option/Plan Dividend Option,Growth Option. The Dividend Option offers Dividend Payout and Reinvestment Facility. Entry Load NIL (purchase / additional (With effect from August 1, 2009) purchase / switch-in) Exit Load • In respect of each purchase / switch-in of Units less (as a % of the Applicable than Rs. 5 crore in value, an Exit Load of 1.00% is NAV) payable if Units are (Other than Systematic redeemed/switched-out within 1 year from the date of Investment Plan (SIP)/ allotment. Systematic Transfer Plan (STP)) • In respect of each purchase / switch-in of Units equal to or greater than Rs. 5 crore in value, no Exit Load is payable. No Exit Load shall be levied on bonus units and units allotted on dividend reinvestment. Minimum Application For new investors :Rs.5000 and any amount thereafter. Amount For existing investors : Rs. 1000 and any amount thereafter. (Other than Systematic Investment Plan (SIP)/ Systematic Transfer Plan (STP)) Lock-In-Period Nil Net Asset Value Periodicity Every Business Day. Redemption Proceeds Normally despatched within 3 business% Pattern The asset allocation under the Scheme will be as follows : Sr.No. Asset Type (% of Portfolio) 1 Equities and Equity Related Instruments Upto 100% 2 Debt & Money Market Instruments Risk Profile Medium to High Not more than 20% Low to Medium Investment in Securitised debt, if undertaken, would not exceed 20% of the net assets of the scheme. Investment Strategy This Scheme aims to achieve its objectives by investing in strong companies at prices which are below fair value in the opinion of the fund managers. The Scheme defines a "strong company" as one that has the following characteristics : • strong management, characterized by competence and integrity • strong position in its business (preferably market leadership) • efficiency of operations, as evidenced by profit margins and asset turnover, compared to its peers in the industry • working capital efficiency • consistent surplus cash generation • high profitability indicators (returns on funds employed) In common parlance, such companies are also called 'Blue Chips'. The Scheme defines "reasonable prices" as : • a market price quote that is around 30% lower than its value, as determined by the discounted value of its estimated future cash flows • a P/E multiple that is lower than the company's sustainable Return on funds employed • a P/E to growth ratio that is lower than those of the company's competitors • in case of companies in cyclical businesses, a market price quote that is around 50% lower than its estimated replacement cost Fund Manager Mr. Chirag Setalvad (since Apr 2, 07) Mr. Anand Laddha - Dedicated Fund Manager - Foreign Securities HDFC CAPITAL BUILDER FUND Table:3.5 NAV S&P CNX 500 Ri Rm Ri Rm Rm-Rm av sqr(RmRm av) Rm2 2007 JAN 64.459 4899.39 FEB 61.259 4504.73 -4.9644 -8.05529 39.98964 -9.0864 82.5626 8 64.88 767 MAR 60.3 4605.89 -1.56548 2.24564 -3.51551 1.214527 1.47507 7 5.042 897 APL 65.818 4934.46 9.150912 7.133692 65.27979 6.10258 37.2414 8 50.88 956 MAY 69.818 5185.95 6.077365 5.096606 30.97394 4.065494 16.5282 4 25.97 54 JUN 73.27 5223.82 4.944284 0.730242 3.610525 -0.30087 0.09052 3 0.533 254 JULY 76.914 5483.25 4.973386 4.966289 24.69927 3.935177 15.4856 2 24.66 403 AUG 76.323 5411.29 -0.76839 -1.31236 1.008405 -2.34347 5.49186 3 1.722 29 SEP 83.09 6094.11 8.866266 12.61843 111.8784 11.58732 134.266 159.2 248 OCT 96.061 7163.3 15.61078 17.54465 273.8857 16.51353 272.696 8 307.8 146 NOV 99.034 6997.6 3.094908 -2.31318 -7.15908 -3.34429 11.1842 9 5.350 8 DEC 106.53 8 7461.48 7.577196 6.62913 50.23022 5.598018 31.3378 43.94 536 2008 JAN 88.367 6245.45 -17.0559 -16.2974 277.9672 -17.3285 300.278 6 265.6 065 FEB 87.439 6356.92 -1.05017 1.784819 -1.87436 0.753707 0.56807 5 3.185 58 MAR 75.967 5762.88 -13.12 -9.34478 122.6035 -10.3759 107.659 1 87.32 486 APL 79.418 6289.07 4.542762 9.130678 41.4785 8.099566 65.6029 6 83.36 928 MAY 75.065 5937.81 -5.48113 -5.58525 30.61343 -6.61636 43.7761 9 31.19 497 JUN 64.169 4929.98 -14.5154 -16.9731 246.3716 -18.0042 324.151 4 288.0 859 JULY 67.228 5297.47 4.767099 7.454188 35.53486 6.423076 41.2559 1 55.56 493 AUG 70.149 5337.28 4.344916 0.751491 3.265164 -0.27962 0.07818 8 0.564 738 SEP 63.365 4807.2 -9.67084 -9.93165 96.04744 -10.9628 120.182 2 98.63 768 OCT 47.587 3539.57 -24.9002 -26.3694 656.603 -27.4005 750.788 3 695.3 455 NOV 44.556 3379.53 -6.36939 -4.52145 28.79888 -5.55257 30.8309 8 20.44 354 DEC 48.064 3635.87 7.873238 7.585078 59.71913 6.553966 42.9544 7 57.53 341 2009 JAN 45.564 3538.57 -5.2014 -2.67611 13.91953 -3.70723 13.7435 2 7.161 582 FEB 43.34 3403.33 -4.88105 -3.82188 18.65479 -4.85299 23.5515 6 14.60 679 MAR 46.604 3720.51 7.531149 9.319696 70.18802 8.288584 68.7006 2 86.85 673 APL 53.006 4278.54 13.73702 14.99875 206.0381 13.96764 195.094 9 224.9 625 MAY 67.6 5480.11 27.53273 28.08365 773.2195 27.05253 731.839 6 788.6 911 TOTAL 21.08029 28.87114 3270.029 0 3469.41 7 3499. 186 AVERA GE 0.752867 1.031112 116.7868 0 123.907 7 Figure:3.2 σm = √123.9077 =11.13239 β(Beta) =[N (ΣXY) – ΣXΣY ]/[ N (ΣX2) – (ΣX) 2 ] = (91560.82- 608.6119)/ (97977.21- 833.5426) = 90952.21/ 97143.67 = 0.936265 Table:3.6 Ri Rm Ri-Rm Dev frm ave sq of Dev frm av FEB -4.9644 -8.05529 3.090893 -3.36914 11.35109 MAR -1.56548 2.24564 -3.81112 3.532879 12.48124 APL 9.150912 7.13369 2 2.01722 -2.29546 5.269159 MAY 6.077365 5.09660 6 0.980759 -1.259 1.585089 JUN 4.944284 0.73024 2 4.214041 -4.49229 20.18063 JULY 4.973386 4.96628 9 0.007097 -0.28534 0.08142 AUG -0.76839 -1.31236 0.54397 -0.82221 0.676037 SEP 8.866266 12.6184 3 -3.75217 3.473923 12.06814 OCT 15.61078 17.5446 5 -1.93386 1.655617 2.741069 NOV 3.094908 -2.31318 5.408088 -5.68633 32.33438 DEC 7.577196 6.62913 0.948066 -1.22631 1.503837 2008 JAN -17.0559 -16.2974 -0.75845 0.480204 0.230596 FEB -1.05017 1.78481 9 -2.83499 2.55674 6.536922 MAR -13.12 -9.34478 -3.77523 3.496982 12.22888 APL 4.542762 9.13067 8 -4.58792 4.309671 18.57326 MAY -5.48113 -5.58525 0.10412 -0.38237 0.146203 JUN -14.5154 -16.9731 2.457673 -2.73592 7.485245 JULY 4.767099 7.45418 8 -2.68709 2.408844 5.802531 AUG 4.344916 0.75149 1 3.593425 -3.87167 14.98983 SEP -9.67084 -9.93165 0.260807 -0.53905 0.290577 OCT -24.9002 -26.3694 1.469223 -1.74747 3.053642 2007 JAN NOV -6.36939 -4.52145 -1.84793 1.569689 2.463923 DEC 7.873238 7.58507 8 0.28816 -0.5664 0.320814 2009 JAN -5.2014 -2.67611 -2.52528 2.24704 5.049189 FEB -4.88105 -3.82188 -1.05916 0.780919 0.609834 MAR 7.531149 9.31969 6 -1.78855 1.510302 2.281012 APL 13.73702 14.9987 5 -1.26173 0.983487 0.967247 MAY 27.53273 28.0836 5 -0.55091 0.272669 0.074348 TOTAL 21.08029 28.8711 4 -7.79085 181.3761 0.75286 7 1.03111 2 -0.27824 6.477719 AVERAGE Standard Deviation for the fund’s excess return (S.D.) σi=√6.477719 =2.545136 Sharpe Index (Si) = (Ri - Rf)/Si = (0.752867-5)/ 2.545136 =-1.66872 Treynor's Index (Ti) = (Ri - Rf)/Bi. =(0.752867-5/ 0.936265 =-4.53625 Jenson alpha (αp)= Ri –[ Rf + Bi (Rm - Rf) ] =0.752867- [5+0.936265(1.031112-5)] = -0.39357 Expected return E(Ri) = Rf + Bi (Rm - Rf) =[5+0.936265(1.031112-5)] =1.284069 Fema Measures Selectivity =Ri –[ Rf + Bi (Rm - Rf) ] =0.752867- [5+0.936265(1.031112-5)] = -0.39357 Diversification =[Rf + (Rm - Rf)(αi/ αm)]-[Rf + Bi (Rm - Rf) ] =[5+(1.031112-5)( 2.545136/11.13139)]- [5+0.936265(1.031112-5)] =2.808464 Net selectivity= selectivity- diversification =-0.39357-2.808464 =-3.33967 HDFC GROWTH FUND Investment objective The primary investment objective of the Scheme is to generate long term capital appreciation from a portfolio that is invested predominantly in equity and equity related instruments. Basic Scheme Information Table:3.7 Nature of Scheme Open Ended Growth Scheme Inception Date Sep 11, 2000 corpus of the Scheme will be invested primarily in equity and equity related instruments. The Scheme may invest a part of its corpus in debt and money market instruments, in order to manage its liquidity requirements from time to time, and under certain circumstances, to protect the interests of the Unit holders. The asset allocation under the Scheme will be as follows : Table:3.8 SR TYPE OF INSTRUMENTS NO. NORMAL ALLOCATION (%of net asset) 1 Equities & Equities related 80-100 instruments 2 Debt securities, money market instruments & cash RISK PROFILE Medium to high 0-100 Low to medium Investment Strategy & Risk Control The investment approach will be based on a set of well established but flexible principles that emphasise the concept of sustainable economic earnings and cash return on investment as the means of valuation of companies. In summary, the Investment Strategy is expected to be a function of extensive research and based on data and reasoning, rather than current fashion and emotion. The objective will be to identify "businesses with superior growth prospects and good management, at a reasonable price". Benchmark Index : SENSEX Fund Manager : Mr. Shrinivas Rao HDFC GROWTH FUND Table:3.9 NAV SENSEX Ri Rm Ri Rm Rm-Rm av sqr(RmRm av) Rm2 2007 JAN 48.917 14090.92 FEB 45.047 12938.09 -7.91136 -8.18137 64.72575 -8.91997 79.56584 66.93478 MAR 45.461 13072.1 0.91904 1.035779 0.951922 0.297178 0.088315 1.072838 APL 48.581 13872.37 6.863025 6.12197 42.01523 5.383369 28.98066 37.47851 MAY 53.198 14544.46 9.503715 4.84481 46.0437 4.106209 16.86095 23.47219 JUN 54.695 14650.51 2.814016 0.729144 2.051821 -0.00946 8.94E-05 0.53165 JULY 58.716 15550.99 7.351677 6.146407 45.1864 5.407806 29.24437 37.77832 AUG 58.17 15318.6 -0.9299 -1.49437 1.389618 -2.23298 4.986179 2.233155 SEP 63.82 17291.1 9.71291 12.8765 125.0683 12.1379 147.3287 165.8043 OCT 73.682 19837.99 15.45284 14.72949 227.6123 13.99088 195.7448 216.9577 NOV 74.895 19363.19 1.646264 -2.39339 -3.94015 -3.13199 9.809352 5.728304 DEC 80.576 20286.99 7.585286 4.770908 36.1887 4.032307 16.2595 22.76156 2008 JAN 68.432 17648.71 -15.0715 -13.0048 196.0015 -13.7434 188.8807 169.1245 FEB 67.827 17578.72 -0.88409 -0.39657 0.350606 -1.13517 1.28862 0.15727 MAR 62.15 15644.44 -8.36982 -11.0035 92.09761 -11.7421 137.8777 121.0777 APL 66.196 17287.31 6.510056 10.5013 68.36407 9.762702 95.31035 110.2774 MAY 62.813 16415.57 -5.11058 -5.04266 25.77091 -5.78126 33.42296 25.4284 JUN 53.472 13461.6 -14.8711 -17.9949 267.6048 -18.7335 350.9451 323.8174 JULY 56.819 14355.75 6.259351 6.642227 41.57603 5.903626 34.8528 44.11918 AUG 58.871 14564.53 3.611468 1.45433 5.252267 0.715729 0.512268 2.115076 SEP 54.54 12860.43 -7.35676 -11.7003 86.07665 -12.4389 154.7273 136.898 OCT 42.283 9788.06 -22.4734 -23.8901 536.8922 -24.6287 606.5731 570.737 NOV 40.089 9092.72 -5.18885 -7.10396 36.86137 -7.84256 61.50578 50.46627 DEC 41.652 9647.31 3.898825 6.099275 23.78001 5.360674 28.73683 37.20116 2009 JAN 38.443 9424.24 -7.70431 -2.31225 17.8143 -3.05085 9.307696 5.346504 FEB 36.429 8891.61 -5.23893 -5.6517 29.60885 -6.3903 40.83598 31.94174 MAR 38.73 9708.5 6.316396 9.1872 58.03 8.448599 71.37883 84.40465 APL 44.131 11403.25 13.94526 17.45635 243.4334 16.71775 279.4832 304.7242 MAY 56.982 14625.25 29.12012 28.2551 822.792 27.5165 757.1579 798.3508 TOTAL 30.39962 20.68083 3139.6 0 3381.666 3396.941 AVG 1.085701 0.738601 112.1286 0 120.7738 Figure:3.3 σm= √120.7738 =10.98971 β(Beta) =[N (ΣXY) – ΣXΣY ]/[ N (ΣX2) – (ΣX) 2 ] = (87908.8-628.6893)/ (95114.34-427.6966) = 87280.12/ 94686.64 = 0.921779 Table:3.10 Ri Rm Ri-Rm Dev frm ave sq of Dev frm av Rm2 FEB -7.91136 -8.18137 0.270008 0.077092104 0.005943 66.93478 MAR 0.91904 1.035779 -0.11674 0.463838642 0.215146 1.072838 APL 6.863025 6.12197 0.741056 -0.393955855 0.155201 37.47851 MAY 9.503715 4.84481 4.658905 -4.311805316 18.59167 23.47219 JUN 2.814016 0.729144 2.084872 -1.737772055 3.019852 0.53165 JULY 7.351677 6.146407 1.20527 -0.85817039 0.736456 37.77832 AUG -0.9299 -1.49437 0.564474 -0.217374552 0.047252 2.233155 SEP 9.71291 12.8765 -3.16359 3.510692544 12.32496 165.8043 OCT 15.45284 14.72949 0.723351 -0.376251086 0.141565 216.9577 NOV 1.646264 -2.39339 4.039651 -3.692551406 13.63494 5.728304 DEC 7.585286 4.770908 2.814378 -2.467278063 6.087461 22.76156 2008 JAN -15.0715 -13.0048 -2.0667 2.413797413 5.826418 169.1245 FEB -0.88409 -0.39657 -0.48752 0.834616326 0.696584 0.15727 MAR -8.36982 -11.0035 2.633708 -2.286608411 5.228578 121.0777 APL 6.510056 10.5013 -3.99125 4.338346289 18.82125 110.2774 MAY -5.11058 -5.04266 -0.06792 0.415022146 0.172243 25.4284 JUN -14.8711 -17.9949 3.123803 -2.776702677 7.710078 323.8174 JULY 6.259351 6.642227 -0.38288 0.729975995 0.532865 44.11918 AUG 3.611468 1.45433 2.157138 -1.810037944 3.276237 2.115076 SEP -7.35676 -11.7003 4.34358 -3.996480234 15.97185 136.898 2007 JAN OCT -22.4734 -23.8901 1.416689 -1.069589295 1.144021 570.737 NOV -5.18885 -7.10396 1.915115 -1.568014871 2.458671 50.46627 DEC 3.898825 6.099275 -2.20045 2.547549815 6.49001 37.20116 2009 JAN -7.70431 -2.31225 -5.39206 5.73916105 32.93797 5.346504 FEB -5.23893 -5.6517 0.412777 -0.065677352 0.004314 31.94174 MAR 6.316396 9.1872 -2.8708 3.217903695 10.3549 84.40465 APL 13.94526 17.45635 -3.51109 3.858190515 14.88563 304.7242 MAY 29.12012 28.2551 0.865017 -0.517917027 0.268238 798.3508 TOTAL 30.39962 20.68083 9.718797 -4.44089E-15 181.7403 3396.941 AVG 1.085701 0.738601 0.3471 -1.58603E-16 6.490725 Standard Deviation for the fund’s excess return (S.D.) σi=√6.490725 =2.54769 Sharpe Index (Si) = (Ri - Rf)/Si = (1.085701-5)/ 2.54769 =-1.53641 Treynor's Index (Ti) = (Ri - Rf)/Bi. =(1.085701-5)/ 0.921779 =-4.24646 Jenson alpha (αp)= Ri –[ Rf + Bi (Rm - Rf) ] =1.085701- [5+0.921779 (0.738601-5)] = 0.013767 Expected return E(Ri) = Rf + Bi (Rm - Rf) =[5+0.921779 (0.738601-5)] =1.071934 Fema Measures Selectivity =Ri –[ Rf + Bi (Rm - Rf) ] =1.085701- [5+0.921779 (0.738601-5)] = 0.013767 Diversification =[Rf + (Rm - Rf)(αi/ αm)]-[Rf + Bi (Rm - Rf)] =[5+(0.738601-5)( 2.54769/10.98971)]- [5+0.921779 (0.738601-5)] =2.940167 Net selectivity= selectivity- diversification =0.013767-2.940167 =-2.9264 HDFC LONG TERM FUND Investment Objective To achieve long term capital appreciation. Basic Scheme Information Nature of Scheme Close Ended Equity Scheme with a maturity period of 5 years with automatic conversion into an open-ended scheme upon maturity of the Scheme. Inception Date 10-Feb-06 Closing Date 27-Jan-06 Option/Plan Dividend Option,Growth Option. Dividend Option currently offers payout facility only. Entry Load NIL (purchase / additional (With effect from August 1, 2009) purchase / switch-in) Exit Load (as a % of the Applicable NAV) (Other than Systematic Investment Plan (SIP)/ Systematic Transfer Plan (STP)) Redemption / Switch-out from the Date of Allotment : • Upto 12 months 4% • After 12 months upto 24 months 3% • After 24 months upto 36 months 2% • After 36 months upto 48 months 1% • After 48 months upto 54 months 0.5% • After 54 months upto Maturity Nil No Exit Load shall be levied on bonus units and units allotted on dividend reinvestment. Specified Redemption Period A Unit holder can submit redemption/ switch-out request only during the Specified Redemption Period. Presently, the Specified Redemption Period is the first five Business Days immediately after the end of each calendar half year. Minimum Application Currently no purchases/ switch-ins are allowed into this Amount scheme. (Other than Systematic Investment Plan (SIP)/ Systematic Transfer Plan (STP)) Lock-In-Period Nil Net Asset Value Periodicity Every Business Day. Redemption Proceeds Normally dispatched within 3% Investment Pattern The following table provides the asset allocation of the Schemes portfolio. Type of Instruments Minimum Allocation Minimum Allocation Risk Profile of the Instrument Equity & Equity related (% of Net Assets) (% of Net Assets) 70 100 High 0 30 Low instruments Fixed Income Securities (including money market instruments) Investment Strategy The investment strategy of the Scheme is to build and maintain a diversified portfolio of equity stocks that have the potential to appreciate in the long run. Companies identified for selection in the portfolio will have demonstrated a potential ability to grow at a reasonable rate for the long term. The aim will be to build a portfolio that adequately reflects a cross-section of the growth areas of the economy from time to time. While the portfolio focuses primarily on a buy and hold strategy at most times, it will balance the same with a rational approach to selling when the valuations become too demanding even in the face of reasonable growth prospects in the long run. Fund Manager Mr. Srinivas Rao Ravuri (since Apr 3, 06) Mr. Anand Laddha - Dedicated Fund Manager - Foreign Securities HDFC LONG TERM FUND Table:3.11 NAV SENSEX Ri Rm Ri Rm Rm-Rm av sqr(RmRm av) Rm2 2007 JAN 95.224 14090.92 FEB 87.782 12938.09 -7.81526 -8.18137 63.93949 -8.91997 79.56584 66.93478 MAR 86.337 13072.1 -1.64612 1.035779 -1.70502 0.297178 0.088315 1.072838 APL 91.627 13872.37 6.127153 6.12197 37.51024 5.383369 28.98066 37.47851 MAY 96.561 14544.46 5.384876 4.84481 26.0887 4.106209 16.86095 23.47219 JUN 100.695 14650.51 4.281232 0.729144 3.121633 -0.00946 8.94E-05 0.53165 JULY 102.976 15550.99 2.265256 6.146407 13.92319 5.407806 29.24437 37.77832 AUG 102.627 15318.6 -0.33891 -1.49437 0.506464 -2.23298 4.986179 2.233155 SEP 109.68 17291.1 6.87246 12.8765 88.49326 12.1379 147.3287 165.8043 OCT 118.185 19837.99 7.754376 14.72949 114.218 13.99088 195.7448 216.9577 NOV 119.445 19363.19 1.066125 -2.39339 -2.55165 -3.13199 9.809352 5.728304 DEC 128.983 20286.99 7.985265 4.770908 38.09697 4.032307 16.2595 22.76156 2008 JAN 112.202 17648.71 -13.0102 -13.0048 169.1954 -13.7434 188.8807 169.1245 FEB 110.554 17578.72 -1.46878 -0.39657 0.582478 -1.13517 1.28862 0.15727 MAR 96.105 15644.44 -13.0696 -11.0035 143.8121 -11.7421 137.8777 121.0777 APL 103.44 17287.31 7.632277 10.5013 80.14885 9.762702 95.31035 110.2774 MAY 99.18 16415.57 -4.11833 -5.04266 20.76733 -5.78126 33.42296 25.4284 JUN 85.045 13461.6 -14.2519 -17.9949 256.4613 -18.7335 350.9451 323.8174 JULY 88.972 14355.75 4.617555 6.642227 30.67085 5.903626 34.8528 44.11918 AUG 93.359 14564.53 4.930765 1.45433 7.17096 0.715729 0.512268 2.115076 SEP 82.286 12860.43 -11.8607 -11.7003 138.7739 -12.4389 154.7273 136.898 OCT 63.504 9788.06 -22.8253 -23.8901 545.298 -24.6287 606.5731 570.737 NOV 57.237 9092.72 -9.86867 -7.10396 70.10665 -7.84256 61.50578 50.46627 DEC 61.406 9647.31 7.28375 6.099275 44.42559 5.360674 28.73683 37.20116 2009 JAN 58.709 9424.24 -4.39208 -2.31225 10.15559 -3.05085 9.307696 5.346504 FEB 55.785 8891.61 -4.9805 -5.6517 28.14829 -6.3903 40.83598 31.94174 MAR 59.209 9708.5 6.137851 9.1872 56.38966 8.448599 71.37883 84.40465 APL 68.298 11403.25 15.35071 17.45635 267.9674 16.71775 279.4832 304.7242 MAY 87.958 14625.25 28.78562 28.2551 813.3405 27.5165 757.1579 798.3508 TOTAL 6.828943 20.68083 3065.056 0 3381.666 3396.941 AVG 0.243891 0.738601 109.4663 0 120.7738 Figure:3.4 σm= √120.7738 =10.98971 β(Beta) =[N (ΣXY) – ΣXΣY ]/[ N (ΣX2) – (ΣX) 2 ] = (85821.57- 141.2282)/ (95114.34- 427.6966) = 85680.34/ 94686.64 = 0.904883 Table:3.12 Ri Rm Ri-Rm Dev frm ave sq of Dev frm av FEB -7.81526 -8.18137 0.366111 -0.860821325 0.741013 MAR -1.64612 1.035779 -2.6819 2.187192079 4.783809 APL 6.127153 6.12197 0.005183 -0.499893333 0.249893 MAY 5.384876 4.84481 0.540065 -1.034775537 1.07076 JUN 4.281232 0.729144 3.552088 -4.046798071 16.37657 JULY 2.265256 6.146407 -3.88115 3.386440599 11.46798 AUG -0.33891 -1.49437 1.15546 -1.650170515 2.723063 SEP 6.87246 12.8765 -6.00404 5.509332488 30.35274 OCT 7.754376 14.72949 -6.97511 6.48039862 41.99557 NOV 1.066125 -2.39339 3.459513 -3.954222903 15.63588 DEC 7.985265 4.770908 3.214357 -3.709067209 13.75718 2008 JAN -13.0102 -13.0048 -0.00545 -0.489256261 0.239372 FEB -1.46878 -0.39657 -1.07221 0.577496505 0.333502 MAR -13.0696 -11.0035 -2.0661 1.571389464 2.469265 APL 7.632277 10.5013 -2.86903 2.374315379 5.637374 MAY -4.11833 -5.04266 0.924329 -1.419039116 2.013672 JUN -14.2519 -17.9949 3.743063 -4.237772814 17.95872 JULY 4.617555 6.642227 -2.02467 1.529961243 2.340781 AUG 4.930765 1.45433 3.476435 -3.971144712 15.76999 SEP -11.8607 -11.7003 -0.16032 -0.334386466 0.111814 OCT -22.8253 -23.8901 1.064835 -1.559545364 2.432182 NOV -9.86867 -7.10396 -2.76471 2.26999821 5.152892 DEC 7.28375 6.099275 1.184475 -1.679185121 2.819663 2009 JAN -4.39208 -2.31225 -2.07983 1.585118054 2.512599 FEB -4.9805 -5.6517 0.671205 -1.165915514 1.359359 MAR 6.137851 9.1872 -3.04935 2.554639268 6.526182 2007 JAN APL 15.35071 17.45635 -2.10565 1.610935739 2.595114 MAY 28.78562 28.2551 0.530513 -1.025223388 1.051083 TOTAL 6.828943 20.68083 -13.8519 -4.44089E-15 210.478 AVG 0.243891 0.738601 -0.49471 -1.58603E-16 5.538895 Standard Deviation for the fund’s excess return (S.D.) σi=√5.538895 =2.353486 Sharpe Index (Si) = (Ri - Rf)/Si = (0.243891-5)/ 2.353486 =-2.02088 Treynor's Index (Ti) = (Ri - Rf)/Bi. =(0.243891-5)/ 0.904883 =-5.25605 Jenson alpha (αp)= Ri –[ Rf + Bi (Rm - Rf) ] =0.243891- [5+0.904883 (0.738601-5)] = -0.90004 Expected return E(Ri) = Rf + Bi (Rm - Rf) =[5+0.904883 (0.738601-5)] =1.143932 Fema Measures Selectivity =Ri –[ Rf + Bi (Rm - Rf) ] =0.243891- [5+0.904883 (0.738601-5)] = -0.90004 Diversification =[Rf + (Rm - Rf)(αi/ αm)]-[Rf + Bi (Rm - Rf)] =[5+(0.738601-5)( 2.353486/10.98971)]- [5+0.904883 (0.738601-5)] =2.943474 Net selectivity= selectivity- diversification =-0.90004-2.943474 =-3.84352 HDFC TAXSAVER Investment Objective The investment objective of the Scheme is to achieve long term growth of capital. Basic Scheme Information Table:3.13 Nature of Scheme Open Ended Equity Linked Saving Scheme Inception Date Mar 31, 1996 Option/Plan Dividend Option, Growth Option, Entry Load NIL (purchase / additional purchase / switch- (With effect from August 1, 2009) in) Exit Load. Nil (as a % of the Applicable NAV) Minimum Application Amount Rs.5000 and in multiples of Rs.100 thereof to open an account / folio. Lock-In-Period 3 yrs Net Asset Value Periodicity Every Business Day Redemption Proceeds Normally despatched within 3 Business days Investment Pattern The asset allocation under the Scheme will be as follows: Table:3.14 SR NO. ASSET TYPE (% OF PORTFOLIO) 1 Equities & Equities RISK PROFILE Minimum 80% Medium to high Minimum 20% Low to medium related instruments 2 Debt securities, money market instruments & cash Investment in Securitized debt, if undertaken, would not exceed 20% of the net assets of the scheme. The Scheme may also invest up to 25% of net assets of the Scheme in derivatives such as Futures & Options and such other derivative instruments as may be introduced from time to time for the purpose of hedging and portfolio balancing and and other uses as may be permitted under the regulations and guidelines. The Scheme may also invest a part of its corpus, not exceeding 40% of its net assets, in overseas markets in Global Depository Receipts (GDRs), ADRs, overseas equity, bonds and mutual funds and such other instruments as may be allowed under the Regulations from time to time. The ELSS (Equity Linked Savings Scheme) guidelines, as applicable, would be adhered to in the management of this Fund. If the investment in equities and related instruments falls below 80% of the portfolio of the Scheme at any point in time, it would be endeavoured to review and rebalance the composition. Benchmark Index : S&P CNX 500. HDFC Tax saver, which is benchmarked to S&P CNX 500 Index is not sponsored, endorsed, sold or promoted by Indian Index Service & Products Limited (IISL). Fund Manager : Dhawal Mehta HDFC TAX SAVER FUND Table:3.15 NAV S&P CNX 500 Ri Rm Ri Rm Rm-Rm av sqr(RmRm av) Rm2 2007 JAN 146.134 4899.39 FEB 135.133 4504.73 -7.52802 -8.05529 60.64039 -9.0864 82.56268 64.88767 MAR 133.882 4605.89 -0.92575 2.24564 -2.07891 1.214527 1.475077 5.042897 APL 144.308 4934.46 7.787455 7.133692 55.5533 6.10258 37.24148 50.88956 MAY 153.765 5185.95 6.553344 5.096606 33.39982 4.065494 16.52824 25.9754 JUN 156.535 5223.82 1.80145 0.730242 1.315495 -0.30087 0.090523 0.533254 JULY 163.61 5483.25 4.519756 4.966289 22.44641 3.935177 15.48562 24.66403 AUG 161.481 5411.29 -1.30127 -1.31236 1.707729 -2.34347 5.491863 1.72229 SEP 173.27 6094.11 7.300549 12.61843 92.12149 11.58732 134.266 159.2248 OCT 198.737 7163.3 14.69787 17.54465 257.8689 16.51353 272.6968 307.8146 NOV 196.735 6997.6 -1.00736 -2.31318 2.330208 -3.34429 11.18429 5.3508 DEC 204.284 7461.48 3.837141 6.62913 25.43691 5.598018 31.3378 43.94536 2008 JAN 173.277 6245.45 -15.1784 -16.2974 247.3687 -17.3285 300.2786 FEB 171.845 6356.92 -0.82642 1.784819 -1.47501 0.753707 0.568075 3.18558 MAR 152.02 5762.88 -11.5366 -9.34478 107.8066 -10.3759 107.6591 87.32486 APL 158.411 6289.07 4.204052 9.130678 38.38584 8.099566 65.60296 83.36928 265.6065 MAY 148.793 5937.81 -6.07155 -5.58525 33.91109 -6.61636 43.77619 31.19497 JUN 126.45 4929.98 -15.0162 -16.9731 254.8707 -18.0042 324.1514 288.0859 JULY 135.953 5297.47 7.515223 7.454188 56.01989 6.423076 41.25591 55.56493 AUG 142.358 5337.28 4.711187 0.751491 3.540414 -0.27962 0.078188 0.564738 SEP 132.682 4807.2 -6.79695 -9.93165 67.50492 -10.9628 120.1822 98.63768 OCT 99.119 3539.57 -25.2958 -26.3694 667.0357 -27.4005 750.7883 695.3455 NOV 90.957 3379.53 -8.23455 -4.52145 37.23212 -5.55257 30.83098 20.44354 DEC 98.972 3635.87 8.811856 7.585078 66.83862 6.553966 42.95447 57.53341 2009 JAN 93.555 3538.57 -5.47327 -2.67611 14.64708 -3.70723 13.74352 FEB 89.449 3403.33 -4.38886 -3.82188 16.77372 -4.85299 23.55156 14.60679 MAR 97.063 3720.51 8.512113 9.319696 79.3303 8.288584 68.70062 86.85673 APL 112.05 4278.54 15.44049 14.99875 231.588 13.96764 195.0949 224.9625 MAY 144.827 5480.11 29.25212 28.08365 821.5062 27.05253 731.8396 788.6911 TOTAL 15.36369 28.87114 3293.627 0 3469.417 3499.186 AVG 0.548703 1.031112 117.6295 7.161582 123.9077 Figure:3.5 σm= √123.9077 =11.13139 β(Beta) =[N (ΣXY) – ΣXΣY ]/[ N (ΣX2) – (ΣX) 2 ] = (92221.54- 443.5671)/ (97977.21- 833.5426) = 91777.98/ 97143.67 = 0.944765 Table:3.16 Ri Rm Ri-Rm Dev frm ave sq of Dev frm av FEB -7.52802 -8.05529 0.527266 -1.00968 1.019444 MAR -0.92575 2.24564 -3.17139 2.688985 7.230641 APL 7.787455 7.133692 0.653763 -1.13617 1.290886 MAY 6.553344 5.096606 1.456738 -1.93915 3.760291 JUN 1.80145 0.730242 1.071208 -1.55362 2.413726 JULY 4.519756 4.966289 -0.44653 -0.03588 0.001287 AUG -1.30127 -1.31236 0.011095 -0.4935 0.243546 SEP 7.300549 12.61843 -5.31788 4.835475 23.38182 OCT 14.69787 17.54465 -2.84678 2.364366 5.590227 NOV -1.00736 -2.31318 1.305818 -1.78823 3.197757 DEC 3.837141 6.62913 -2.79199 2.30958 5.334158 2008 JAN -15.1784 -16.2974 1.119058 -1.60147 2.564696 FEB -0.82642 1.784819 -2.61124 2.128833 4.531929 MAR -11.5366 -9.34478 -2.19178 1.709373 2.921956 APL 4.204052 9.130678 -4.92663 4.444217 19.75106 2007 JAN MAY -6.07155 -5.58525 -0.4863 0.003894 1.52E-05 JUN -15.0162 -16.9731 1.956929 -2.43934 5.950372 JULY 7.515223 7.454188 0.061035 -0.54344 0.295331 AUG 4.711187 0.751491 3.959696 -4.44211 19.7323 SEP -6.79695 -9.93165 3.134702 -3.61711 13.08349 OCT -25.2958 -26.3694 1.073584 -1.55599 2.421115 NOV -8.23455 -4.52145 -3.71309 3.230684 10.43732 DEC 8.811856 7.585078 1.226778 -1.70919 2.921319 2009 JAN -5.47327 -2.67611 -2.79715 2.314743 5.358035 FEB -4.38886 -3.82188 -0.56698 0.08457 0.007152 MAR 8.512113 9.319696 -0.80758 0.325174 0.105738 APL 15.44049 14.99875 0.441737 -0.92415 0.854046 MAY 29.25212 28.08365 1.168474 -1.65088 2.725415 TOTAL 15.36369 28.87114 -13.5075 147.1251 AVG 0.548703 1.031112 -0.48241 5.254467 Standard Deviation for the fund’s excess return (S.D.) σi=√ 5.254467 = 2.292262 Sharpe Index (Si) = (Ri - Rf)/Si = (0.548703-5)/ 2.292262 =-1.94188 Treynor's Index (Ti) = (Ri - Rf)/Bi. =(0.548703-5)/ 0.944765 =-4.71154 Jenson alpha (αp) = Ri –[ Rf + Bi (Rm - Rf) ] =0.548703- [5+0.944765 (1.031112-5)] = -0.70163 Expected return E(Ri) = Rf + Bi (Rm - Rf) =[5+0.944765 (1.031112-5)] =1.250332 Fema Measure: Selectivity =Ri –[ Rf + Bi (Rm - Rf) ] =0.548703- [5+0.944765 (1.031112-5)] = -0.70163 Diversification = [Rf + (Rm - Rf)(αi/ αm)]-[Rf + Bi (Rm - Rf)] =[5+(1.031112-5)( 2.292262/11.13139)]- [5+0.944765 (1.031112-5)] =2.932363 Net selectivity= selectivity- diversification =-0.70163-2.932363 =-3.63399 HDFC TOP 200 FUND Investment Objective The investment objective is to generate long-term capital appreciation from a portfolio of equity and equity linked instruments. The investment portfolio for equity and equity-linked instruments will be primarily drawn from the companies in the BSE 200 Index. Further, the Scheme may also invest in listed companies that would qualify to be in the top 200 by market capitalisation on the BSE even though they may not be listed on the BSE This includes participation in large IPO’s where in the market capitalisation of the company based on issue price would make the company a part of the top 200 companies listed on the BSE based on market capitalisation. Basic Scheme Information Table:3.17 Nature of Scheme Open Ended Equity Growth Scheme Inception Date Oct 11, 1996 Option/Plan Dividend Option, Growth Option, Entry Load NIL (purchase / additional purchase / switch- (With effect from August 1, 2009) in) Exit Load. Nil Minimum Application Amount Rs.5000 and in multiples of Rs.100 thereof to open an account / folio. Additional purchases is Rs. 1000 and in multiples of Rs. 100 thereof. Lock-In-Period Investment Pattern Nil The asset allocation under the Scheme will be as follows: Table:3.18 SR NO. ASSET TYPE (% OF PORTFOLIO) RISK PROFILE 1 Equities & Equities Upto 100% (including use of related instruments derivatives for hedging and other Medium to high uses as permitted by prevailing SEBI Regulations) 2 Debt securities, money Balance in Debt & Money Market market instruments & Instruments Low to medium and guidelines. Investment Strategy & Risk Control. Benchmark Index : BSE 200 Fund Manager : Mr. Prashant Jain HDFC TOP 200 FUND Table:3.19 Ri Rm Ri Rm Rm-AvRm (RmAvRm)2 Rm2 2007 JAN 112.359 1687.35 FEB 103.269 1545.27 -8.09014 -8.4203 68.12144 -9.34081 87.25075 70.90152 MAR 104.504 1556.72 1.195906 0.740971 0.886131 -0.17954 0.032233 0.549038 APRI L 111.805 1666.14 6.986335 7.028881 49.10612 6.108374 37.31223 49.40517 MAY 119.096 1766.08 6.521175 5.998295 39.11594 5.077788 25.78393 35.97955 JUNE 120.34 1804.81 1.044536 2.192992 2.290658 1.272485 1.619219 4.809216 JULY 127.614 1894.18 6.04454 4.951768 29.93116 4.031261 16.25106 24.52 AUG 126.201 1857.7 -1.10725 -1.9259 2.132443 -2.84641 8.10203 3.709088 SEPT 140.49 2118.86 11.32241 14.05824 159.1733 13.13774 172.6001 197.6342 OCT 160.215 2439.87 14.04015 15.15013 212.71 14.22962 202.4821 229.5264 NOV 158.356 2454.23 -1.16032 0.588556 -0.68291 -0.33195 0.110192 0.346398 DEC 169.794 2656.52 7.222966 8.242504 59.53532 7.321997 53.61163 67.93887 2008 JAN 147.718 2230.39 -13.0016 -16.0409 208.5581 -16.9614 287.6897 257.3108 FEB 147.689 2217.47 -0.01963 -0.57927 0.011372 -1.49978 2.249334 0.335555 MAR 131.544 1932.41 -10.9318 -12.8552 140.5298 -13.7757 189.7699 165.2559 APRI L 143.025 2157.52 8.727878 11.64918 101.6727 10.72868 115.1045 135.7035 MAY 137.675 2038.22 -3.7406 -5.5295 20.68366 -6.45 41.60255 30.57534 JUNE 115.424 1644.18 -16.162 -19.3326 312.4523 -20.2531 410.1865 373.7477 JULY 123.902 1749.11 7.345093 6.381905 46.87568 5.461398 29.82686 40.72871 AUG 129.235 1782.08 4.304208 1.884959 8.113254 0.964452 0.930167 3.553069 SEPT 118.754 1555.7 -8.11003 -12.7031 103.0228 -13.6236 185.6036 161.3696 OCT 92.324 1145.68 -22.2561 -26.356 586.5812 -27.2765 744.0068 694.6377 NOV 86.546 1062.35 -6.25839 -7.27341 45.51987 -8.19392 67.14027 52.90249 DEC 92.798 1156.59 7.223904 8.870899 64.08253 7.950392 63.20874 78.69286 2009 JAN 88.074 1107.06 -5.09063 -4.28242 21.80018 -5.20292 27.07041 18.33909 FEB 84.379 1044.94 -4.19534 -5.61126 23.54111 -6.53177 42.66396 31.48622 MAR 92.552 1140.43 9.686059 9.138324 88.51435 8.217817 67.53251 83.50896 APRI L 107.584 1339.38 16.24168 17.44517 283.3389 16.52467 273.0646 304.3341 MAY 139.341 1772.82 29.51833 32.36124 955.2498 31.44073 988.5198 1047.25 Total 37.30138 25.7742 3632.867 0 4141.326 4165.051 Avera ge 1.332192 0.920507 129.7453 0 147.9045 Figure:3.6 σm= √147.9045 =12.1616 β(Beta) =[N (ΣXY) – ΣXΣY ]/[ N (ΣX2) – (ΣX) 2 ] = (101720.3- 961.4133)/ (116621.4- 664.3093) = 100758.9/ 115957.1 = 0.868932 Table:3.20 Ri Rm Ri-Rm dev frm av sq of dev Rm2 FEB -8.09014 -8.4203 0.330164 0.081521 0.006646 70.90152 MAR 1.195906 0.740971 0.454935 -0.04325 0.001871 0.549038 APRIL 6.986335 7.028881 -0.04255 0.454231 0.206326 49.40517 MAY 6.521175 5.998295 0.52288 -0.11119 0.012364 35.97955 JUNE 1.044536 2.192992 -1.14846 1.560142 2.434043 4.809216 JULY 6.04454 4.951768 1.092773 -0.68109 0.46388 24.52 AUG -1.10725 -1.9259 0.818654 -0.40697 0.165624 3.709088 SEPT 11.32241 14.05824 -2.73583 3.147515 9.906851 197.6342 OCT 14.04015 15.15013 -1.10998 1.521668 2.315473 229.5264 NOV -1.16032 0.588556 -1.74887 2.160557 4.668006 0.346398 DEC 7.222966 8.242504 -1.01954 1.431223 2.048399 67.93887 2008 JAN -13.0016 -16.0409 3.039273 -2.62759 6.90422 257.3108 FEB -0.01963 -0.57927 0.559639 -0.14795 0.02189 0.335555 MAR -10.9318 -12.8552 1.923436 -1.51175 2.285389 165.2559 APRIL 8.727878 11.64918 -2.92131 3.332991 11.10883 135.7035 MAY -3.7406 -5.5295 1.788892 -1.37721 1.896699 30.57534 2007 JAN JUNE -16.162 -19.3326 3.170579 -2.75889 7.611496 373.7477 JULY 7.345093 6.381905 0.963188 -0.5515 0.304156 40.72871 AUG 4.304208 1.884959 2.41925 -2.00756 4.030315 3.553069 SEPT -8.11003 -12.7031 4.593101 -4.18142 17.48424 161.3696 OCT -22.2561 -26.356 4.099889 -3.6882 13.60285 694.6377 NOV -6.25839 -7.27341 1.015015 -0.60333 0.364007 52.90249 DEC 7.223904 8.870899 -1.647 2.058681 4.238165 78.69286 2009 JAN -5.09063 -4.28242 -0.80821 1.219896 1.488145 18.33909 FEB -4.19534 -5.61126 1.415923 -1.00424 1.008493 31.48622 MAR 9.686059 9.138324 0.547736 -0.13605 0.01851 83.50896 APRIL 16.24168 17.44517 -1.20349 1.615179 2.608803 304.3341 MAY 29.51833 32.36124 -2.84291 3.254597 10.5924 1047.25 37.30138 25.7742 11.52718 107.7981 4165.051 1.332192 0.920507 0.411685 3.849932 Standard Deviation for the fund’s excess return (S.D.) σi=√3.849932 =1.962124 Sharpe Index (Si) = (Ri - Rf)/Si = (1.332192-5)/ 1.962124 =-1.8693 Treynor's Index (Ti) = (Ri - Rf)/Bi. = (4.528901-5)/ 0.868932 =-4.22105 Jenson alpha (αp)= Ri –[ Rf + Bi (Rm - Rf) ] =1.332192- [5+0.868932 (0.920507-5)] = -0.12301 Expected return E(Ri) = Rf + Bi (Rm - Rf) =[5+0.868932 (0.920507-5)] =1.455198 Fema Measure: Selectivity =Ri –[ Rf + Bi (Rm - Rf) ] =1.332192- [5+0.868932 (0.920507-5)] = -0.12301 Diversification =[Rf + (Rm - Rf)(αi/ αm)]-[Rf + Bi (Rm - Rf)] =[5+(0.920507-5)( 1.962124/12.1616)]- [5+0.868932 (0.920507-5)] =2.886626 Net selectivity= selectivity- diversification =-0.12301-2.886626 =-2.87834 3.2 ANALYSIS OF THE OBSERVATION: The table given below illustrates the comparison among the analysed funds based on the different measures of comparison. Performance of Fund portfolio and Benchmark return for 29 months (jan07-may08) Table:3.21 FUND BENCHMARK RETURNS RETURN EQUITY FUND 12.22546 11.8529 Capital builder 4.872865 11.8529 Growth fund 16.48711 3.792016 Long term adv -7.63043 3.792016 Tax saver -0.89438 11.8529 Top 200 24.0141 5.065339 Figure:3.7 Performance Evaluation against Benchmarks The above table presents return and risk of the six funds along with market return and risk. From the table it is evident that, Top 200, Equity fund and Growth fund have earned greater return as against the market earning. Capital builder, Long term advantage and Tax saver funds have not earned higher return than the Market portfolio. Long-term advantage and Tax saver funds have even negative returns. Comparison of ratios: Table:3.22 Fund name S.D. market S.D. fund B value Sharpe ratio Treynor ratio Jenson’s alpha Fema Retuns jan07may08(29 months) HDFC Equity 11.13239 2.392215 1.0096114 -1.64557 -3.89907 0.070488 -3.0836 12.22546 HDFC Capital Builder 11.13239 2.545136 0.936265 -1.66872 -4.53625 -0.39357 -3.33967 4.872865 HDFC Growth Fund 10.98971 2.54769 0.921779 -1.53641 -4.24646 0.013767 -2.9264 16.48711 HDFC Long Term Adv 10.98971 2.353486 0.904883 -2.02088 -5.25605 -0.90004 -3.84352 -7.63043 HDFC Tax saver 11.13139 2.292262 0.944765 -1.94188 -4.71154 -0.70163 -3.63399 -0.89438 HDFC Top 200 12.1616 1.962124 0.868932 -1.8693 -4.22105 -0.12301 -2.87834 24.0141 Standard Deviation of the Market: High standard deviation of a fund implies high volatility and a low standard deviation implies low volatility. HDFC equity fund, HDFC capital Builder and HDFC Tax saver take S&P CNX 500 as their benchmark, HDFC Growth fund and HDFC long term have taken Sensex as bench mark and HDFC Top 200 has taken BSE 200 as its bench mark. We found out that BSE 200’s S.D. is 12.1616, which is greater than Sensex and S&P CNX 500 having 10.98971 and 11.13139 S.D. respectively. Therefore, BSE 200 is more volatile than Sensex and S&P CNX 500. Standard deviation of the Fund: It has been found that HDFC Top 200’s S.D. is lesser than all other funds. Although benchmark index (BSE 200) is more volatile as it has higher S.D. than other indexes still HDFC Top 200 is less volatile because of lesser fund S.D. This is might be because of diversification of unsystematic risk as it compensates the systematic risk. β Value : As we know analysis illustrates that HDFC Equity fund’s is less volatile and its performance is very close to its benchmark as its beta value is 1.0096114 compared to other funds which have beta value lesser than 1 point. HDFC Top 200’s beta value is more volatile than the benchmark as its value is 0.868932, which is very far from point 1. Sharpe ratio: A fund with a higher Sharpe ratio means that these returns have been generated taking lesser risk. In other words, the fund is less volatile and yet generating good return. The analysis shows that all the funds have negative Sharpe ratio therefore they are more risky. Comparing all the funds HDFC growth fund has lesser negative marks that means its return 16.48711 is generated taking lesser risk. Treynor ratio: While a high and positive Treynor's Index shows a superior risk-adjusted performance of a fund, a low and negative Treynor's Index is an indication of unfavourable performance (systematic risk associated with it (beta)). All the funds are having negative Treynor’s ratio which means they are affected by the volatility of the market (systematic risk)or by the great recession. Jenson’s alpha: Its measure involves evaluation of the returns that the fund has generated vs. the returns actually expected out of the fund given the level of its systematic risk. Higher alpha represents superior performance of the fund and vice versa. The analysis points out that all the funds are having negative alpha except HDFC Equity fund and HDFC Growth fund which have positive points. Jenson alpha ratio justifies that these two funds are at least able to achieve the expected return given the level of their systematic risk. Fema measure: The Net Selectivity (Fema). It has been that all the funds are having negative net selectivity because of the higher risk found both in systematic risk (B) and unsystematic risk. This findings point out, that the stock selection of the fund manager has been failed because of the systematic risk i.e. recession. Comparing to other funds HDFC Growth fund (-2.9264) has lesser negative points in this time of great crisis. This indicates that HDFC Growth fund is getting enhanced return by nullifying systematic risk and unsystematic risk. From the above analysis there is no fund which has consistency. The funds are being affected very badly either by the systematic risk or by the unsystematic risk. As we observe closely, it is the HDFC Growth fund, which has better option for the investment. Its Sharpe ratio is lesser negative than other funds which illustrates that its return is less affected by overall risk. Its alpha value is more than 0 which means its less affected by the market risk (systematic risk) and also its Fema value (selectivity) has lesser negative value which has managed to nullify systematic risk and unsystematic risk during the time of recession. An investor who is entering into the capital market for making long-term investment, the volatility of the market is important to accomplish his or her goal and these expectations are often formed on the basis of historical record of monthly returns, measured for holding period and other important ratios. We will take this fund (HDFC Growth fund) for further analysis of its portfolio. HDFC Growth Fund Portfolio Analysis Table:3.23 Portfolio 31-May-09 Name of Instrument Industry + Quantity Market/ Fair Value(Rs. In Lakhs) % toNAV Equity & Equity Related (a) Listed / awaiting listing on Stock Exchanges State Bank of India Banks 448,000 8,372.45 7.20 Zee Entertainment Enterprises Ltd. Media & Entertainment 4,160,179 7,001.58 6.02 ICICI Bank Ltd. Banks 932,397 6,901.14 5.93 Bharti Airtel Ltd. Telecom - Services 750,346 6,159.59 5.30 Crompton Greaves Ltd. Industrial Capital Goods 2,099,819 5,513.07 4.74 Bharat Petroleum Corporation Limited Petroleum Products 926,557 4,305.71 3.70 Housing Development Finance Corporation Ltd.$ Finance 182,500 3,977.77 3.42 Exide Industries Ltd. Auto Ancillaries 5,319,910 3,769.16 3.24 Divis Laboratories Ltd. Pharmaceuticals 318,535 3,666.18 3.15 Sun Pharmaceutical Industries Ltd. Pharmaceuticals 272,365 3,305.83 2.84 H T Media Ltd. Media & Entertainment 2,307,000 2,861.83 2.46 Solar Explosives Ltd. Chemicals 913,257 2,807.81 2.41 Nestle India Ltd. Consumer Non Durables 160,268 2,766.79 2.38 Dr Reddys Laboratories Ltd. Pharmaceuticals 420,000 2,719.50 2.34 ITC Ltd. Consumer Non Durables 1,462,305 2,685.52 2.31 Coromandel Fertilisers Ltd. Fertilisers 1,433,271 2,608.55 2.24 Biocon Limited Pharmaceuticals 1,319,006 2,397.95 2.06 Reliance Industries Ltd. Petroleum Products 104,250 2,368.46 2.04 Hindustan Petroleum Corporation Ltd. Petroleum Products 633,721 2,300.09 1.98 Dabur India Ltd. Consumer Non Durables 2,050,115 2,264.35 1.95 Bank of Baroda Banks 469,151 2,058.63 1.77 Infosys Technologies Ltd Software 120,000 1,926.12 1.66 MphasiS Limited Software 569,000 1,916.96 1.65 Axis Bank Ltd Banks 220,000 1,713.69 1.47 Apollo Tyres Ltd Auto Ancillaries 5,367,120 1,682.59 1.45 Tata Steel Limited Ferrous Metals 400,000 1,621.40 1.39 Hindustan Unilever Ltd. Consumer Non Durables 653,355 1,507.94 1.30 Noida Toll Bridge Company Ltd. Transportation 3,607,000 1,441.00 1.24 Thermax Ltd. Industrial Capital Goods 367,366 1,345.29 1.16 Oil & Natural Gas Corporation Ltd. Oil 111,353 1,301.99 1.12 Nagarjuna Construction Co. Ltd. Construction Project 711,738 990.03 0.85 Ballarpur Industries Ltd. Paper Products 3,967,287 987.85 0.85 Eimco Elecon (India) Ltd. Industrial Capital Goods 276,428 811.18 0.70 Amara Raja Batteries Ltd. Auto Ancillaries 836,454 705.97 0.61 C & C Constructions Ltd Construction 396,496 635.78 0.55 Maytas Infra Ltd Construction 761,912 552.01 0.47 KNR Construction limited Construction 710,597 531.53 0.46 ISMT Ltd. Ferrous Metals 1,175,668 413.25 0.36 Ahmednagar Forgings Ltd. Industrial Products 424,234 245.21 0.21 Disa India Ltd Engineering 12,612 207.85 0.18 Technocraft Industries (India) Ltd Ferrous Metals 538,745 199.07 0.17 Sub total 101,548.67 87.33 Total 101,548.67 87.33 Short Term Deposits as margin for Futures & Options 1,000.00 0.86 Cash margin / Earmarked cash for Futures & Options 5,072.00 4.36 Other Cash,Cash Equivalents and Net Current Assets 8,679.32 7.45 Net Assets 116,299.99 100.00 Table:3.24 Sectoral Allocation of Assets(%) Banks 16.37 Pharmaceuticals 10.39 Media & Entertainment 8.48 Consumer Non Durables 7.94 Petroleum Products 7.72 Industrial Capital Goods 6.60 Telecom - Services 5.30 Auto Ancillaries 5.30 Finance 3.42 Software 3.31 Chemicals 2.41 Fertilisers 2.24 Ferrous Metals 1.92 Construction 1.48 Transportation 1.24 Oil 1.12 Construction Project 0.85 Paper Products 0.85 Industrial Products 0.21 Engineering 0.18 Cash,Cash Equivalents and Net Current Assets 12.67 TOTAL 100 Figure:3.8 Table:3.25 HDFC Growth Fund (NAV as at evaluation date 30-June-09, Rs. 57.219 Per unit) Date Period NAV Per Unit (Rs.) Returns (%) ^ Benchmark Returns (%) Sensex December 30, 2008 Last Six months (182 days) 41.697 37.23 49.17 June 30, 2008 Last 1 Year (365 days) 53.472 7.01 7.67 June 30, 2006 Last 3 Years (1096 days) 36.034 16.65 10.95 June 30, 2004 Last 5 Years (1826 days) 16.439 28.31 24.74 June 30, 1999 Last 10 Years (3653 days) N.A N.A. 13.34 September 11, 2000 Since Inception (3214 days) 10 21.91 13.65 Figrure:3.9 HDFC Growth Fund - Analysis It requires a lot of research and constant watch on the capital market for a fund manager to analyze the portfolio of the particular fund. I took the secondary data from the fund review of the article corner from The Business Line web site. I comprehended the analysis and concluded my view as stated below. HDFC Growth Fund invests in stocks across market capitalisations. Despite a large-cap bias, mid and small cap stocks account for 28 per cent of the portfolio. The fund has managed to consistently beat its benchmark Sensex over one-, three- and five-year periods. In the latest portfolio, the fund has invested in as many as 52 stocks across 18 different sectors making it a fairly diversified portfolio. This may indicate net inflows into the fund. Sector Moves: There is a fair bit of stability in terms of top sector holdings in the portfolio. Banks (16.39 per cent) and pharmaceuticals (10.37 per cent) sectors continue to be the top two sector holdings, although exposures have been a bit reduced. Banks and consumer non-durables also figure among top holdings in the fund, and have seen increased exposures over the September-February period. While capital goods and banks have done well in the past year, they have been among the worst hit in the recent meltdown. The respective sector indices were beaten down by over 25 per cent in the last couple of months. Construction and predictably, software exposures have been pared in the six-month period. Interestingly, media and entertainment (8.48 per cent), which were not part of the portfolio six months ago is now in the top ten sector holdings for the fund. The power sector has been exited, while telecom services and auto ancillaries exposures have been increased substantially. Stock Moves: Most stocks are those whose prices have fallen during September-February, include stocks such as Zee Entertainment, HT Media and Dr Reddy's Labs. The fund has also taken profit booking opportunities, with several stocks whose prices rose between 60-105 per cent have been exited. These include, Axis Bank, Hanung Toys and Tata Power. Other high-profile exits include DLF, HPCL, Ranbaxy Labs, and Punj Lloyd. Reliance Industries, SBI, ONGC and BHEL are the stocks retained by the fund during the period and are among the fund's top holdings. 3.4 FINDINGS As far as analysis is concerned, we found out that the HDFC Growth Fund was among the best performers fund. Although all the funds are affected by the global meltdown, (recession) still HDFC Growth Fund has better performed comparing to other funds for its systematic and unsystematic risk. It offers advantages of diversification, market timing, and selectivity. In the comparison of sample of funds, HDFC Growth fund is found highly diversified fund and because of high diversification, it has reduced the total risk of portfolio. Further, other funds were found very poor in diversification, market timing, and selectivity. Although HDFC Top 200 Fund and Equity Fund performed better in terms of returns but these suffered by the systematic risk (market volatility) and lack of diversification. For the further clarification, we too studied the portfolio of HDFC Growth fund. One of the findings that I came across is that generally, a good model of asset classes is the one that can explain a large portion of the variance of returns on the assets and there were some stocks in the fund portfolio, which were not aligned with strategy of the fund portfolio. The optimal situation involves the selection that proceeds from sensible assumptions, is carefully and logically constructed, and is broadly consistent with the data while collecting the stocks for the portfolio. The portfolio was showing constructive outcome in long time horizon and the results can be improved by making the minor changes in fund portfolio. Hence, the portfolio theory teaches us that investment choices are made on the basis of expected risk and returns and these expectations can be satisfied by having right mix of assets. 3.5 RECOMMENDATIONS: Considering the above analysis, it can be noted that the three growth oriented mutual funds (HDFC Equity Fund, HDFC Growth Fund and HDFC Top 200 fund) have performed better than their benchmark indicators. Other funds such as HDFC Capital Builder Fund, HDFC Long term Advantage Fund did not perform well even some performed negatively. Though HDFC Equity Fund, HDFC Growth Fund and HDFC Top 200 fund have performed better than the benchmark of their systematic risk (volatility) but with respect to total risk the fund have not outperformed the Market Index. Growth oriented mutual funds are expected to offer the advantages of Diversification, Market timing and Selectivity. In the sample, HDFC Equity Fund, HDFC Growth Fund and HDFC Top 200 fund is found to be diversified fund and because of high diversification, it has reduced total risk of the portfolio. Whereas, others are low diversified and because of low diversification their total risk is found to be very high. Further, the fund managers of these under performing funds are found to be poor in terms of their ability of market timing and selectivity. The fund manager of HDFC Equity Fund, HDFC Growth Fund and HDFC Top 200 fund can improve the returns to the investors by increasing the systematic risk of the portfolio, which in turn can be done by identifying highly volatile shares. Alternatively, these can take advantage by diversification, which goes to reduce the risk if the same return is given to the investor at a reduced risk level, the compensation for risk might seem adequate. The fund manager of HDFC Capital Builder Fund, HDFC Long term Advantage Fund can earn better returns by adopting the marketing timing strategy and selecting the under priced securities. The fund manager can divide all securities into several asset classes and tries to construct an efficient portfolio based on expected returns, risk, and correlations of indexes representing these asset classes. The investment should be done in the bench mark indexes to get an “efficient” portfolio in such a way that no other combination of these indexes would result in a portfolio with a higher return for a given level of risk. It should be emphasized, however, that this is not a fully efficient portfolio because information about correlations among individual securities within an index and across the indexes is lost in the transition from individual securities to the benchmarks that represent them. These measures are more useful to investors who are putting their money into one diversified fund and are able to use leverage or invest in the risk-free asset. When the investor is investing in the different funds, the fund’s marginal contribution to the portfolio’s risk and return is more important than its individual security characteristics. To construct an efficient portfolio, an investor must take account of the correlations among the being considered. It is not advisable to apply just procedure or approach for all situations at least when it comes to investments though the used measures are highly reliable in the studies done on similar veins. Even at this juncture it would still be recommended that instead of going ahead only on the basis of risk and return, other indicators like new projects, sector impact, individual sentiments about companies etc besides ‘common sense and intuition’ may also be looked into. 3.6 CONCLUSIONS: Mutual fund has become one of the important sources for investing. It is quite likely that a more efficient portfolio can be constructed directly from funds. Thus, the two-step process of choosing an asset allocation based on the information about benchmark indexes and then choosing funds in each category may be one of the best realistically attainable approaches. To use this approach to portfolio selection effectively, investors would benefit from estimates of future asset returns, risks and correlations, as well as from fund management’s disclosure of future asset exposures and appropriate benchmarks. It has been a great opportunity for me to get a first experience of Mutual Funds. My study is to get the feel of how the work is carried out in relation to fund’s portfolio aspect. I got an opportunity in relation to the documentation and also the portfolio analysis that have been carrying out in facilitating the investor and the fund manager. REFERENCES Books: 1. Security Analysis and Portfolio Management (sixth Edition 1995) by Donald E. Fisher and Ronald J. Jordan. Publication: Pearson education. 2. The Indian Financial System (second edition) by Bharati V. Pathak. Published by Dorling Kindersley (India) Pvt. Ltd., licensees of Pearson Education in South Asia. 3. Security Analysis and Portfolio Management by Khan and Jain. Magazines: • Money Outlook (May &June 2009) • Business world (May & June 2009) Websites • • • • • • • • • • • • | http://issuu.com/sanjaykumarguptaa/docs/risk-return-analysis-and-comparative-study-of-mutu | CC-MAIN-2015-32 | refinedweb | 19,154 | 66.64 |
2. Getting Started
2.1. Introduction
This chapter covers the installation of Zope and getting started with the development of a simple application. This guide uses a build system called Buildout to build the application.
2.2. Prerequisites
Make sure you have Python installed. Version 3.6 or higher is recommended.
Creating and activating a VirtualEnv is recommended.
In order to use buildout, you have to install the
zc.buildout
package.
$ pip install zc.buildout
2.3. Directory structure
To begin application development, create a directory structure for the Python packages and build related files.
$ mkdir poll $ mkdir poll/poll_build $ mkdir poll/poll.main
All build related files will be added inside the
poll_build
directory, whereas the main Python package goes into the
poll.main directory.
2.4. Installing Zope using zc.buildout
Zope is distributed in egg format. To install Zope
and create an instance, create a buildout configuration file
(
poll/poll_build/buildout.cfg) with following content.
[buildout] extends = parts = zope4 [zope4] recipe = zc.recipe.egg eggs = Zope Paste
The
[zope4] part uses
zc.recipe.egg which will download
Zope and all its dependencies. It will create few console
scripts inside the
bin directory.
After updating the buildout configuration, you can run the buildout command to build the system.
$ cd poll/poll_build $ buildout
The initial build will take some time to complete.
2.5. Creating the instance
Once the build is complete, you can create an instance as follows.
$ bin/mkwsgiinstance -d .
2.6. Running the instance
Once you got a Zope instance, you can run it like this.
$ bin/runwsgi etc/zope.ini
Now, Zope is running. You can convince yourself by visiting the following URL.
You can also visit the administration area.
Use the user name and password you set earlier.
When you have a look at the drop-down box in the top right corner, you see a list of objects you may create.
In the next section we will create the poll application. Later, we will make it installable, too.
2.7. Developing the main package
Now, we can move to the
poll.main directory to create the main
package to develop the application. We will develop the entire
application inside the
poll.main package. For bigger projects,
it is recommended to split packages logically and maintain the
dependencies between the packages properly.
$ cd ../poll.main
In order to create an egg distribution, we need to create a
setup.py and a basic directory structure. We are going to place
the Python package inside the
src directory.
$ touch setup.py $ mkdir src $ mkdir src/poll $ mkdir src/poll/main $ touch src/poll/__init__.py $ touch src/poll/main/__init__.py $ touch src/poll/main/configure.zcml
The last file is a configuration file. The
.zcml file extension stands for
Zope Configuration Markup Language.
To declare
poll as a namespace package, we need to add following
code to
src/poll/__init__.py.
__import__('pkg_resources').declare_namespace(__name__)
Next, we need to add the minimum metadata required for the package
in
setup.py.
from setuptools import setup, find_packages setup( name="poll.main", version="0.1", packages=find_packages("src"), package_dir={"": "src"}, namespace_packages=["poll"], install_requires=["setuptools", "Zope"], )
We need to edit two more files to be recognized by Zope. First,
define the
initialize callback function in
src/poll/main/__init__.py.
def initialize(registrar): pass
And, in the ZCML file (
src/poll/main/configure.zcml), add these
few lines.
<configure xmlns=" <registerPackage package="." initialize=".initialize" /> </configure>
2.8. Creating an installable application
We need three things to make an installable application.
A form object created as Zope Page Template (manage_addPollMain)
A function to define the form action (addPollMain)
A class to define the toplevel application object (PollMain).
Finally, we need to register the class along with the form and add
the function using the
registrar object passed to the
initialize function.
We can define all these things in
app.py and the form template as
manage_addPollMain_form.zpt.
$ touch src/poll/main/app.py $ touch src/poll/main/manage_addPollMain_form.zpt
Here is the code for
app.py…
from OFS.Folder import Folder from Products.PageTemplates.PageTemplateFile import PageTemplateFile class PollMain(Folder): meta_type = "POLL" manage_addPollMain = PageTemplateFile("manage_addPollMain_form", globals()) def addPollMain(context, id): """ """ context._setObject(id, PollMain(id)) return "POLL Installed: %s" % id
… and for
manage_addPollMain_form.zpt:
<h1 tal:Header</h1> <main class="container-fluid"> <h2 tal:Form Title</h2> <form action="addPollMain" method="post"> <div class="form-group row"> <label for="id" class="form-label col-sm-3 col-md-2">Id</label> <div class="col-sm-9 col-md-10"> <input id="id" name="id" class="form-control" type="text" /> </div> </div> <div class="form-group row form-optional"> <label for="title" class="form-label col-sm-3 col-md-2">Title</label> <div class="col-sm-9 col-md-10"> <input id="title" name="title" class="form-control" type="text" /> </div> </div> <div class="zmi-controls"> <input class="btn btn-primary" type="submit" name="submit" value="Add" /> </div> </form> </main> <h1 tal:Footer</h1>
Finally, we can register it within
src/poll/main/__init__.py:
from poll.main.app import PollMain, manage_addPollMain, addPollMain def initialize(registrar): registrar.registerClass( PollMain, constructors=(manage_addPollMain, addPollMain) )
The application is now ready to install. But we need to make some changes in poll_build, so it gets installed along Zope.
2.9. Updating the build config
First, in the
[buildout] section of
buildout.cfg we need
to mention that
poll.main is locally developed. Otherwise,
buildout will try to get the package from package index server, by
default that is .
[buildout] develop = ../poll.main ...
Also, we need to add
poll.main to the
eggs option in the
[zope4] section.
... eggs = Zope Paste poll.main ...
The final buildout.cfg will look like this.
[buildout] develop = ../poll.main extends = parts = zope4 [zope4] recipe = zc.recipe.egg eggs = Zope Paste poll.main
To make these change effective, run the buildout again.
$ buildout
Finally, we have to include our package within
poll_build/etc/site.zcml. Add the following towards the bottom
of that file:
<include package="poll.main" />
Now, we can run application instance again.
$ bin/runwsgi etc/zope.ini
2.10. Adding an application instance
Visit the ZMI ( ) and select
POLL
from the drop-down box. It will display the add-form created
earlier. Enter
poll in the ID field and submit the form. After
submitting, it should display a message:
“POLL Installed: poll”.
2.11. Adding and index page for the POLL application
In this section we will add a main page to the POLL application, so that we can access the POLL application like this: .
First, create a file named
index_html.zpt inside
poll.main/src/poll/main
with content like this:
<html> <head> <title>Welcome to POLL!</title> </head> <body> <h2>Welcome to POLL!</h2> </body> </html>
Now add an attribute named
index_html inside PollMain class like
this:
class PollMain(Folder): meta_type = "POLL" index_html = PageTemplateFile("index_html", globals())
After restarting Zope, you can see that it displays the main page when you access: .
2.12. Summary
This chapter covered the installation of Zope and the beginning of the development of a simple project in Zope. | https://zope.readthedocs.io/en/latest/zdgbook/GettingStarted.html | CC-MAIN-2022-21 | refinedweb | 1,188 | 52.26 |
David, Appreciated so much for your sharing.
Advertising
Best Regards, Adele On 10/25/2011 10:23 AM, David Magda wrote:
On Tue, October 25, 2011 09:42, adele....@oracle.com wrote:Hi all, I have a customer who wants to know what is the max characters allowed in creating name for zpool, Are there any restrictions in using special characters?255 characters. Try doing a 'man zpool':. Or, use the source Luke: Going to, and searching for "zpool" turns up: Inside of it we have a zpool_do_create() function, which defines a 'char *poolname' variable. From there we call a zpool_create() in libzfs/common/libzfs_pool.c to zpool_name_valid() to pool_namecheck(), where we end up with the following code snippet: /* * Make sure the name is not too long. * * ZPOOL_MAXNAMELEN is the maximum pool length used in the userland * which is the same as MAXNAMELEN used in the kernel. * If ZPOOL_MAXNAMELEN value is changed, make sure to cleanup all * places using MAXNAMELEN. */ if (strlen(pool)>= MAXNAMELEN) { if (why) *why = NAME_ERR_TOOLONG; return (-1); } Check the function for further restrictions:
_______________________________________________ zfs-discuss mailing list zfs-discuss@opensolaris.org | https://www.mail-archive.com/zfs-discuss@opensolaris.org/msg47500.html | CC-MAIN-2016-44 | refinedweb | 185 | 62.78 |
I have discovered several issues in 6.8p4, which defines “full expression” and points out the major implications for an expression that is a full expression. In this paper I present the issues, along with my recommendations. For the issues for which it makes sense, I will later submit defect reports.
Here is the text of 6.8p4, with clause numbering added for convenience of reference:
- A full expression is an expression that is not part of another expression or of a declarator.
- Each of the following is a full expression:
- an initializer that is not part of a compound literal;
- the expression in an expression statement;
- the controlling expression of a selection statement (if or switch);
- the controlling expression of a while or do statement;
- each of the (optional) expressions of a for statement;
- the (optional) expression in a return statement.
- There is a sequence point between the evaluation of a full expression and the evaluation of the next full expression to be evaluated.
And here are the issues.
The phrase “not part of another expression or of a declarator” (sentence 1) is rather difficult to understand. It probably means: not part of another expression, nor part of a declarator. (But DeMorgan's law is hard on the brain.)
I believe this could be fixed as a simple editorial issue.
The status of an initializer expression depends on whether the context is a declaration or a compound literal (clause 2.1). That would seem to imply different sequencing guarantees in those contexts. As it turns out, it does, but the implication is quite subtle. Consider 6.7.9p23:
The evaluations of the initialization list expressions are indeterminately sequenced with respect to one another and thus the order in which any side effects occur is unspecified.
And consider this example:
#include <stdio.h> #define ONE_INIT '0' + i++ % 3 #define INITIALIZERS [2] = ONE_INIT, [1] = ONE_INIT, [0] = ONE_INIT int main() { int i = 0; char x[4] = { INITIALIZERS }; // case 1 puts(x); puts((char [4]){ INITIALIZERS }); // case 2 puts((char [4]){ INITIALIZERS } + i % 2); // case 3 }
In every use of the
INITIALIZERS macro, the variable
i
is incremented three times. In cases 1 and 2, there is no undefined behavior, because
the increments are in expressions that are indeterminately sequenced with respect
to one another, not unsequenced. There is no guarantee in what order the evaluations
are done, so there is no guarantee in what order they will appear, but the initial
values are guaranteed to be
'0',
'1' and
'2'.
(It's not perfectly clear whether that guarantee was provided by C99, which instead said:
The order in which any side effects occur among the initialization list expressions is unspecified.
In any event, as a data point, that guarantee was not honored by GCC until release 4.6, in 2011.)
On the other hand, because case 3 contains an unsequenced evaluation of
i
in the same full expression, it has undefined behavior.
Considering the number of hours it took me to finally reach this conclusion, I thought it would be worthwhile to bring it to the full committee to make sure everyone understands and agrees with it. If so, an addition to the rationale might be in order.
Consider 6.7.6p3 (emphasis added):
A full declarator is a declarator that is not part of another declarator. The end of a full declarator is a sequence point. …
Alsoarator ends. Any attempt to modify an object with temporary lifetime results in undefined behavior.
It is clear from these passages that the sequence of evaluations includes not only full expressions, but also full declarators – whatever sense it makes to talk about “evaluating” a full declarator. But sentence 3 does not acknowledge that reality.
My inclination is to adopt a bit of terminology from Ada, and start talking about the “elaboration” of a declarator, which, for a variably modified type, involves the run-time evaluation of array sizes, and then to re-draft sentence 3 and the other paragraphs cited here, to make it clear that sequence points separate elaborations of full declarators as well as evaluations of full expressions. In any event, I think there's a problem in sentence 3 that needs to be fixed.
Expressions in abstract declarators are not mentioned at all (compare to sentence 1). The logical inference is that such an expression is not a full expression by itself, but part of the containing full expression. But there are cases where there is no containing full expression. For example:
typedef _Atomic(int (*)[rand()]) T; _Alignas(int [rand()]) int i;
In these examples, not only is there no containing full expression, there isn't even any containing full declarator, because these expressions appear in the declaration specifiers, not the declarator.
Probably the simplest approach here would be to disallow variably modified types
with
_Atomic and
_Alignas, at least until the next revision
of the standard.
The list of full expression contexts (sentence 2) is not logically complete. According to the definition (sentence 1), an expression appearing in a constant-expression context is (often) a full expression. Of course there are no sequencing implications relevant for constant expressions, but it's not clear that makes it important for a constant expression not to be counted as a full expression. In any event, it's not clear how the list normatively interacts with the definition.
I think we should consider moving the list into a note, so it's clear that the definition is, well, definitive. The note could also point out that sequencing is irrelevant to constant expressions. | http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1729.htm | CC-MAIN-2018-26 | refinedweb | 927 | 51.18 |
David Glasser wrote:
> On Jan 29, 2008 8:22 AM, C. Michael Pilato <cmpilato_at_collab.net> wrote:
>> I'd like to avoid dump/load, but we can't retroactively prevent users of 1.4
>> clients from setting the svn:mergeinfo property on their 1.4-pedigreed
>> repositories.
>
> Also, I'm just unconvinced that this is something we need to worry
> about. I'm pretty sure we've set up the RA capabilities so that 1.5
> clients won't try to automatically set "svn:mergeinfo" on repositories
> that aren't being served by 1.5 servers (and we can involved the
> version number too), so it comes down to "what if somebody manually
> runs 'svn ps svn:mergeinfo'?" And, well, it's in our namespace. We
> reserve the right to make it do weird things. It's the same sort of
> complaint as "With svn 1.0 I decided to set the svn:special property
> on a file containing the word 'link'! How come it's now turning my
> files into symlinks?"
I don't think things are as simple as that. Our documentation for
Subversion 1.5 will almost certainly call out the fact that the property was
intentionally designed to be hand-tweakable. We can't say that and then
expect users to understand that when they do exactly what we've told them,
they get different results than we tell them they'd get.
--
C. Michael Pilato <cmpilato_at_collab.net>
CollabNet <> <> Distributed Development On Demand
This is an archived mail posted to the Subversion Dev
mailing list. | https://svn.haxx.se/dev/archive-2008-01/0641.shtml | CC-MAIN-2019-35 | refinedweb | 258 | 67.76 |
Hello,
Can you test the XSL map "remover_namespace_totalupload" locally using the source payload XML?
If you do not have Oxygen, Eclipse, etc. you can do it online - for example
Free Online XSL Transformer (XSLT) - FreeFormatter.com
An application like Oxygen would help because you also have a debugging mode and it is easier to find transmformation problems.
Best regards,
Peter
You probably have an XSLT mapping named remover_namespace_totalupload defined inside the Operation Mapping. Something has changed the source message such that this XSLT mapping can no longer read the message.
You can find the source of it inside the ESR / Designer under the specific Software Component version \ Namespace look inside the Imported Archives.
There you can see the source of the XSLT mapping by double clicking on the Archive Program .xsl Name.
Add comment | https://answers.sap.com/questions/11787002/transformerexception-occurred-during-xslt-mapping.html | CC-MAIN-2019-22 | refinedweb | 134 | 54.63 |
Randomly shuffling the elements in an array is a pretty common task. The standard algorithm for this is called the Fisher-Yates shuffle. The modern version of it looks like this in pseudo-code:
To shuffle an array a of n elements (indices 0..n-1): for i from n − 1 downto 1 do j ← random integer with 0 ≤ j ≤ i exchange a[j] and a[i]
We're going to implement this in Factor, and then look at improving the performance when shuffling an array of 10,000,000 numbers (generated with "
10,000,000 iota >array"). Afterwards, we are going to look at Ruby and Python versions.
FactorVersion 1:
Our first implementation is a straight-forward translation of the algorithm (a simplification of the randomize word included in the Factor standard library), and looks something like this:
: shuffle1 ( seq -- seq ) dup length [ dup 1 > ] [ [ random ] [ 1 - ] bi [ pick exchange ] keep ] while drop ;
On our test case, this takes 2.972 seconds.Version 2:
Our second implementation uses exchange-unsafe, which does not perform bounds checks (unnecessary given our loop constraints).
: shuffle2 ( seq -- seq ) dup length [ dup 1 > ] [ [ random ] [ 1 - ] bi [ pick exchange-unsafe ] keep ] while drop ;
On our test case, this takes 2.830 seconds (a 5% improvement on version 1!).Version 3:
Our third implementation uses the typed vocabulary to provide type information to the compiler to produce more optimal code:
TYPED: shuffle3 ( seq: array -- seq ) dup length [ dup 1 > ] [ [ random ] [ 1 - ] bi [ pick exchange-unsafe ] keep ] while drop ;
On our test case, this takes 2.554 seconds (a 15% improvement on version 1!).Version 4:
Our fourth implementation instead removes the generic dispatch of calling random as well as the dynamic variable lookup for the current random number generator:
: shuffle4 ( seq -- seq ) dup length [ dup 1 > ] random-generator get '[ [ _ (random-integer) ] [ 1 - ] bi [ pick exchange-unsafe ] keep ] while drop ;
On our test case, this takes 2.408 seconds (a 19% improvement on version 1!).Version 5:
Our fifth implementation combines the optimizations in version 3 and 4:
TYPED: shuffle5 ( seq: array -- seq ) dup length [ dup 1 > ] random-generator get '[ [ _ (random-integer) ] [ 1 - ] bi [ pick exchange-unsafe ] keep ] while drop ;
On our test case, this takes 2.254 seconds (a 24% improvement on version 1!).Version 6:
Our sixth implementation declares that the indices to exchange-unsafe are fixnums (removing a minor check that isn't optimized out by the compiler for some reason):
TYPED: shuffle6 ( seq: array -- seq ) dup length [ dup 1 > ] random-generator get '[ [ _ (random-integer) ] [ 1 - ] bi { fixnum fixnum } declare [ pick exchange-unsafe ] keep ] while drop ;
On our test case, this takes 2.187 seconds (a 27% improvement on version 1!).Version 7:
Our seventh implementation is just version 6 with a quick-and-dirty random number generator using rand():
FUNCTION: int rand ( ) ; SINGLETON: c-random M: c-random random-32* drop rand ; : with-c-random ( quot -- ) [ c-random ] dip with-random ; inline
On our test case, this takes 2.015 seconds (a 32% improvement on version 1!).Version 8:
Our eighth (and final!) version drops down to C, to implement a primitive version of shuffle:
void factor_vm::primitive_shuffle_array() { array* a = untag_check<array>(ctx->peek()); cell capacity = array_capacity(a); for (cell i = capacity - 1; i > 0; i--) { cell j = i + rand() / (RAND_MAX / (capacity - i) + 1); cell tmp = array_nth(a, i); set_array_nth(a, i, array_nth(a, j)); set_array_nth(a, j, tmp); } }
On our test case, this takes 0.494 seconds (a 83% improvement on version 1!).
Python
I wanted to see how various Python versions performed, so I wrote a simple version that takes 19.025 seconds on Python 2.7.3, 12.436 seconds on Jython 2.5.3, and 1.392 seconds with PyPy 1.9.0:
from random import randrange import time n = 10000000 l = list(xrange(n)) t0 = time.time() i = n while i > 1: j = randrange(i) i -= 1 l[i], l[j] = l[j], l[i] print 'took %.3f seconds' % (time.time() - t0)
A version using Numpy takes 3.273 seconds:
import numpy as np import time a = np.arange(10000000) t0 = time.time() np.random.shuffle(a) print 'took %.3f seconds' % (time.time() - t0)
Ruby
Curious what Ruby would look like, I tested two versions. The first implements shuffle in "pure" Ruby, taking 149.975 seconds on Ruby 1.8.7, 25.086 seconds in Ruby 1.9.3, 8.054 seconds on MacRuby 0.12, 7.258 seconds on JRuby 1.7.3, and 4.682 seconds on Rubinius 1.2.4:
def shuffle!(a) size = a.length size.times do |i| r = i + Kernel.rand(size - i) a[i], a[r] = a[r], a[i] end end a = (1..10000000).to_a t0 = Time.new shuffle!(a) t1 = Time.new printf "took %.3f seconds", t1-t0
But since Ruby 1.8.7, Array.shuffle! is a builtin (implemented in C), so lets look at how performance differs. It takes 0.443 seconds in Ruby 1.8.7, 0.554 seconds in Ruby 1.9.3, 0.884 seconds in MacRuby 0.12, 2.170 seconds in JRuby 1.7.3, and 3.479 seconds in Rubinius 1.2.4:
a = (1..10000000).to_a t0 = Time.new a = a.shuffle! t1 = Time.new printf "took %.3f seconds", t1-t0
Conclusion
C is awesome. PyPy is really fast. Factor can be pretty fast. Numpy should be faster. Jython could be faster. Python isn't very fast at all. Ruby is both blazing fast and slower than snails!
7 comments:
Just for giggles, I got the following results in my Debian64 VM with SBCL 1.14*:
[*] PRE and CODE tags were rejected. Sorry if this looks like garbage.
(defun shuffle (arr)
(loop for i from (1- (length arr)) downto 1
do (rotatef (aref arr (random i)) (aref arr i))))
CL-USER> (time (shuffle *toy*))
Evaluation took:
1.065 seconds of real time
1.524095 seconds of total run time (1.524095 user, 0.000000 system)
143.10% CPU
3,161,941,417 processor cycles
0 bytes consed
Taking away the specialization on arrays with AREF, we get a slightly slower version:
(defun shuffle2 (seq)
(loop for i from (1- (length seq)) downto 1
do (rotatef (elt seq (random i)) (elt seq i))))
CL-USER> (time (shuffle2 *toy*))
Evaluation took:
1.405 seconds of real time
2.032127 seconds of total run time (2.032127 user, 0.000000 system)
144.63% CPU
4,165,084,903 processor cycles
28,128 bytes consed
Adding (declaim (ftype (function ((simple-vector))) shuffle)) gets my lisp version down to ~0.6 seconds.
A luajit 2.0 version with no type declarations performs in about 0.63 seconds.
math = require("math")
os = require("os")
local toy = {}
for i=1, 10000000 do
toy[i] = math.random(10000000)
end
function shuffle(seq)
for i=(#seq-1), 1, -1 do
j = math.random(i)
seq[i], seq[j] = seq[j], seq[i]
end
end
You really should mention the python builtin function shuffle when writing about the ruby builtin too. In my case the runtime went down a factor of 3 when using this bultin shuffle function compared to your naive one (with plain Python 2.7.3).
CPython uses a dictionary lookup for module variables. Local variables have an optimized O(1) lookup because it's impossible to add new variables to local scope after a function has been declared.
If I wrap the code in a "def main():" and call it with "main()", then I get a drop from 20 seconds to 17.
Two other minor optimizations are to use range(n) instead of list(xrange(n)), and to make randrange be a local variable ("import random" then before the main loop "randrange = random.randrange"). This turns a large number of module lookup+attribute lookup calls into a single fast local lookup call.
Pypy does this last optimization automatically.
Using the suggestion of halex, random.shuffle on the array takes about 8 seconds in CPython.
@Redline6561: Nice examples!
@halex: I forgot about builtin shuffle, that would make a nice comparison with the builtin Ruby version.
@AndrewDalke: Great points! I wasn't trying to make Python look bad at all, just to show how performance is all over the place. I know there's many versions of optimizations you could do in Python (similar to how I try and speed up the Factor version) including writing it in C to be "fastest".
It's fun that you can usually write C in any language, but it's refreshing to see PyPy and LuaJit allowing C-like performance with more simple looking code.
Thanks!
@Redline6561: Your Lisp code is broken, you need to use (random (1+ i)) instead of (random i) to get an index in the right interval.
Also, please do not use ELT instead of AREF, the idea of using this algorithm on lists is disgusting.
For python, you not need to implement it. It's already implemented under random.shuffle ()
And you can use range instead of xrange and then go through and build an array (it's faster)
On my laptoon (2 years old MacBook air) and python 2.7.1 the result is:
took 13.629 seconds | http://re-factor.blogspot.com/2013/02/fast-shuffle.html | CC-MAIN-2016-30 | refinedweb | 1,536 | 67.55 |
Welcome to the Parallax Discussion Forums, sign-up to participate.
' telephonelift.bs2
' {$STAMP BS2} ' Select BS2 as target module
' {$PBASIC 2.5} ' Select PBASIC 2.5 as language
'sequence:
'1- phone rings or up button pushed
'2- upper cover panel slides to open position and stops
'3- lower table moves to upper position and stops
' Panels will stay in these positions unless option 4 is pushed. This will put the unit back in
waiting for sequence 1.
'4- optional: down button pushed
' reversing sequence to down position
' ======================= [variable declarations ] ================
true CON 1 ' boolean constants
false CON 0
' input pins with switches:
coverisclosed PIN 0
coverisopen PIN 1
tableisup PIN 2
tableisdown PIN 3
movephoneup PIN 4
movephonedown PIN 5
phonerings PIN 6
'output pins motor control through L298H dual H-bridge
enableA PIN 7 ' upper motor on/off
dirA1 PIN 8 ' forward/reverse upper motor
dirA2 PIN 9
enableB PIN 10 ' lower motor on/off
dirB1 PIN 11 ' forward/reverse lower motor
dirB2 PIN 12
' ========================= [ main ] ========================
DIRS = %1111111110000000 ' set pins 0..6 input (0), 7..15 output (1)
DO
DO ' wait
LOOP UNTIL (phonerings = true) OR (movephoneup = true) ' phone rings or up switch pressed
GOSUB opencover
GOSUB movetableup
DO ' wait
LOOP UNTIL (movephonedown = true) ' down switch pressed
GOSUB closecover
GOSUB movetabledown
LOOP ' forever
END
' ==========================[ end main ] ==============
' ------------------------ [subroutines ] -------------
opencover: ' opens the cover
HIGH dirA1 ' set upper motor to turn one way
LOW dirA2
HIGH enableA ' turn upper motor on
DO ' run upper motor
LOOP UNTIL (coverisopen = true) ' cover open switch is pressed
LOW enableA ' stop upper motor
RETURN
' ---------------------------------------
closecover: ' closes the cover
LOW dirA1 ' set upper motor to turn other way
HIGH dirA2
HIGH enableA ' turn upper motor on
DO ' run upper motor
LOOP UNTIL (coverisclosed = true) ' cover closed switch is pressed
LOW enableA ' stop upper motor
RETURN
' ---------------------------------------
movetableup: ' moves the telephone table up
HIGH dirB1 ' set lower motor to turn one way
LOW dirB2
HIGH enableB ' turn lower motor on
DO ' run lower motor
LOOP UNTIL (tableisup = true) ' table up switch is pressed
LOW enableB ' stop lower motor
RETURN
' ---------------------------------------
movetabledown: ' moves the telephone table down
LOW dirB1 ' set lower motor to turn other way
HIGH dirB2
HIGH enableB ' turn lower motor on
DO ' run lower motor
LOOP UNTIL (tableisdown = true) ' table down switch is pressed
LOW enableB ' stop lower motor
RETURN
' ------------------------ [end subroutines ] -----------------
' ========================== [end program ] ==========================
You'll have to write to code for both.
We can help you with this if you show enough interest.
The Truth is out there
It's not rocket-surgery
I see why we don't have any water,All of the pipes are full of wires!
E=WMc2
Now with WiFi
Not in the Spin Bunch
import RPi.GPIO as GPIO
ABB M202 certified
ABB M211 certified
but I do not know how to connect a switch (SW7 in the schematic) or another type of sensor to a telephone to check if it is ringing. Maybe somebody else has an idea...
MH
BS2Sx
Life is unpredictable. Eat dessert first.
Thanks a lot
David
MH
BS2Sx
David
In the earlier schematic I forgot the 220 ohm resistors on the IN ports of the BS
MH
BS2Sx | http://forums.parallax.com/discussion/168233/motion-control-telephone-lift | CC-MAIN-2019-09 | refinedweb | 521 | 56.52 |
In a previous article, I outlined the three main jobs of the Uno Platform in order to run a UWP app on iOS, Android, and in the browser:
- Parse XAML files;
- Implement data-binding;
- Implement the suite of views in the UWP framework for each platform.
In this article I want to focus on that last point. How does Uno implement UWP’s views? As a case study I’m going to use that humbly ubiquitous UI control, the button.
The number goes up
I present the simplest interactive application imaginable, one step above ‘Hello World’:
XAML:
<pre><Page x: <StackPanel> <TextBlock x: <Button Content="Click me" Click="Button_Click" /> </StackPanel>
</Page>
Code behind :
<pre>using System;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;
namespace UnoExtTestbed
{
public sealed partial class MainPage : Page
{
public MainPage()
{
this.InitializeComponent();
}
private int _clickCount = 0;
private void Button_Click(object sender, RoutedEventArgs e)
{
_clickCount++;
ClickTextBlock.Text = $"Button was clicked {_clickCount} times.";
}
}
}
I made a blank app using the Uno Solution template and put this code on the main page. Whenever the Button is clicked, the number goes up. Add a bit more chrome and we could have a viral hit.
Note that the XAML is useful here but not obligatory, either on UWP or Uno. We could have defined and created all our views in C#, had we really wanted to. That flexibility is handy to have.
Exhibit A — the visual tree
Visual tree information for UWP
Visual tree information for Android
Visual tree information for iOS
Visual tree information for WASM
So what does that get us? You can see the resulting visual trees on each target platform. They’re all substantially similar. The top-level wrapping varies a little, but inside we can see the
StackPanel,
TextBlock and
Button we defined in XAML. (The
Frame is created in the stock
App.xaml.cscode that comes with the project template.)
You might notice there are a couple of extra views inside the
Button that weren't explicitly defined in the XAML. These are part of Button's default template. If you're not familiar with UWP/WPF/Silverlight/etc's concept of control templating, there's a lot to say about the subject, but the gist of it is that any view which inherits from
Control is a tabula rasa, an empty vessel which will be filled with inner views as defined in its template. (Some of these child views might be templated controls themselves.) It's a powerful means to customise the look of a reusable control.
How do we go from a logical tree defined in XAML to a native visual tree?
Let’s keep the visual aids rolling. We’ll create an empty class inheriting from
Button, and take a look at its inheritance hierarchy.
<img width="570" height="270" src="" alt="1_1ebed9dmdBOu6m3lEVl5sw"> <img width="570" height="270" src="" alt="1_1ebed9dmdBOu6m3lEVl5sw"> <p><em>Inheritance chains for the Button class on UWP, Android, iOS, and WASM</em></p>
Like the visual tree, the platform-specific inheritance chains are pretty similar (identical in fact) at the ‘leafward’ end, and diverge a bit at the root. Let’s focus on where they diverge: after the UIElement class, the base view type in UWP.
On UWP,
UIElement inherits from DependencyObject, the base class for types which support data-binding using DependencyProperty values.
In Uno.Android and Uno.iOS, any
UIElement is an instance of the native base view type (Android.Views.View and UIKit.UIView respectively, mapped to managed types via the magic of Xamarin). So views defined in XAML are also native views. This means, for example, that it's possible to incorporate native views that know nothing about Uno directly into your app's XAML. The following works on iOS:
<Page x:Class="UnoExtTestbed.MainPage"
...
xmlns:
...
<StackPanel>
<uikit:UILabel
</StackPanel>
</Page>
This is uniquely easy to do in Uno. We talk about ‘leaving an escape hatch’: the goal is 100% code reuse, but if you positively have to use a platform-specific feature or view library, the flexibility is there.
But wait, what about
DependencyObject?
Since it’s an important part of the UWP contract, we didn’t want to leave
DependencyObject out of the picture, but we also have to be able to support
DependencyObjects that aren't views at all. (Brushes and Transforms, to name just a few.) In Uno therefore
DependencyObject is defined as an interface. It's a 'special' interface however: Uno's code generation automatically adds the backing methods when it finds a class like
MyDependencyObject : DependencyObject, allowing code written for UWP to mostly 'just work.' I'll talk more about it in a future article on code generation in Uno.
In WebAssembly, for now the inheritance hierarchy is a bit simpler and
UIElement is at the root of the type tree. As you can see in the screenshot, Uno.WASM is generating
<div> elements for each view in the visual tree.
Style points
The code above nets us a very plain, workaday button, but we could easily spice it up. We set the
Content property to a text string, but
Content can be anything, even another view. Our button could be an image, a shape, or a complex visual hierarchy. It could even have another button inside of it.
What if we want to go in the opposite direction? What if we want to abdicate control over our button’s appearance entirely?
A number of controls implemented by Uno,
Button included, support the concept of a 'native' style. Instead of a button that's consistent across all versions of your app, you get a button that looks the way a user of the target platform would expect it to look.
It’s supported by setting a pre-defined Style that puts an instance of the native control inside the XAML control. In our code above, we could have written:
<Button Content="Click me"
Click="Button_Click"
Style="{StaticResource NativeDefaultButton}" />
Since Android’s default button looks rather similar to UWP’s, I’m going with a more visually obvious illustration using a different control, ToggleSwitch. I’ll use the sample from the Uno Gallery app.
<img width="325" height="576" src="" alt="0_kX8Q2-mT-aSBw3US"> <img width="375" height="576" src="" alt="0_iZq_tOqh8zf0fwES"> <p><em>Uno’s ToggleSwitch control on Android and iOS, using default and native styles.</em></p>
The
ToggleSwitch with the default style looks the same on all platforms, both statically and in motion. On Android and iOS, however, the
ToggleSwitchwith the
NativeDefaultToggleSwitch style replicates the native toggle control of each platform. Of course you can still bind to its properties in XAML as you normally would. This is another powerful option to have: for some apps it makes sense to look as 'native' as possible, for others its desirable to have a rich, customised UI. You may even want to mix and match different approaches for different screens in your app. With Uno it's straightforward.
What if I don’t like buttons?
In fact
UIElement implements primitive interaction events like PointerPressed,
PointerReleased, etc, so all views in Uno can handle touches/clicks, not just
Button. (The big advantage of using
Button, apart from the MVVM-friendly
Command property, is that it implements visual statesto animate your button with.)
There’s plenty more to talk about, like the way that UWP’s API is hooked into each platform’s native input detection, or the way that layouting is done, but that’s all for now.
Try out Uno, and hit us up if you have any questions.
The post Pushing the right buttons : How Uno implements views appeared first on Uno Platform.
Discussion (0) | https://dev.to/uno-platform/pushing-the-right-buttons-how-uno-implements-views-57ld | CC-MAIN-2022-27 | refinedweb | 1,279 | 63.7 |
01 November 2011 17:51 [Source: ICIS news]
HOUSTON (ICIS)--Surging gasoline and distillate margins and discounted crude throughput for the third quarter pushed Valero earnings about 40% higher, the company said on Tuesday. ($1 = €0.72)
Valero net income rose to $1.2bn (€864m) from $292 in the same quarter last year.
Profits for gasoline refining from Louisiana Light Sweet crude were up 89% at $8.20/bbl compared with the third quarter of 2010, while ultra-low-sulphur diesel profits jumped 66% to $14.19/bbl.
In addition, Maya crude oil was at a $13.38/bbl discount to Louisiana Light Sweet crude, an increase of 20% on a year ago. Finally, mid-continent crude and Eagle Ford basin crude were at a discount of $22.47/bbl to West Texas Intermediate (WTI) crude, widening by about $15/bbl from mid-continent crude's discount in 2010.
During the quarter, Valero processed more than 460,000 bbl/day of the discounted Eagle Ford crude at its 93,000 bbl/day Three Rivers and 142,000 bbl/day Corpus Christi refineries in Texas. That was more than 40,000 bbl/day higher than in 2010, and the Eagle Ford crude saved the company about $15/bbl compared with processing imported sweet crude.
The higher profits from production of gasoline and distillate provided an incentive to increase throughput at the company's refineries, Valero executive vice president Mike Ciskowski said. As a result, refinery throughput jumped by 389,000 bbl/day for the quarter.
The increase in volumes was also a result of added capacity from the acquisition of the 220,000 bbl/day Pembroke refinery in Wales on 1 August and operations at the 235,000 bbl/day Aruba refinery, which was down in 2010.
“We were able to capitalise on favourable refining margins and attain our highest refinery utilisation since the third quarter of 2007,” Valero CEO Bill Kleese said.
?xml:namespace>
( | http://www.icis.com/Articles/2011/11/01/9504639/us-valero-earnings-soar-on-high-profit-margins-cheaper-feedstock.html | CC-MAIN-2015-14 | refinedweb | 324 | 60.35 |
Name
cyg_clock_sysclock_handle() — Return a handle to the system clock
Synopsis
#include <cyg/kernel/kapi.h> #include <cyg/clock/api.h>
cyg_handle_t cyg_clock_sysclock_handle(
void);
Description
This function returns a handle to the clock object associated with
the hardware clock driving the system time. This clock handle is
usable with the kernel C API functions such
as
cyg_clock_to_counter(), and thereby with
other kernel functions which use kernel counters such as kernel
alarms.
However to be clear, this clock may or may not be the same clock as used for the kernel real-time clock. Users must avoid operations which could interfere with system operation, such as setting the clock resolution or deleting the clock.
Note that this function may be implemented as a macro, and therefore taking the address of this function is not supported.
This function is only supplied when the tick conversion functionality is enabled.
Return value
This function returns a cyg_handle_t which can be used as a handle for kernel C API clock functions. No errors are reported. | http://doc.ecoscentric.com/ref/clock-common-api-sysclock-handle.html | CC-MAIN-2019-30 | refinedweb | 169 | 55.03 |
I don't understand the details, but would it be possible to sacrifice the button and joystick data on the nunchuck in order to increase the resolution of the accelerometer reporting?
The X and Y axes have full resolution, but unless you want a 2 axis accelerometer, that's probably not very helpful.
- this is just a though and I haven't tested this (will do if I get some time next week) - separate Nunchuck and WM+ and run them (address them) separately. Run one using wire lib and connecting data and clock to Analog 4 and Analog 5 pins, and run the other in the manner DogP (and krulkip) showed, connected to Digital 4 and Analog 0 pin, and using Ports lib. Just a thought....
//Zero the values on a#MID. ax_m -= axMID; ay_m -= ayMID; az_m -= azMID; // Convert to radians by mapping 180 deg of rotation to PI x = angleInRadians(ayRange, ax_m); y = angleInRadians(ayRange, ay_m); z = angleInRadians(azRange, az_m);
//Nunchuk accelerometer value to radian conversion.float angleInRadians(int range, int measured) { float x = range; return (float)(measured/x) * PI;}
// Exact values differ between x and y axis, and probably between nunchuks, but the range value may be constant:static const int axMIN = 290;static const int axMAX = 720;static const int axMID = (axMAX - axMIN)/2 + axMIN;static const int axRange = (axMID - axMIN)*2;static const int ayMIN = 298;static const int ayMAX = 728;static const int ayMID = (ayMAX - ayMIN)/2 + ayMIN;static const int ayRange = (ayMID - ayMIN)*2;// Not sure what a meaningful scale is for the z accelerometer axis so:static const int azMID = ayMID;static const int azRange = ayRange;
I have also incorporated a scheme to save zeroing calibration data to the eeprom and reading it back when arduino executes setup.Calibration and saving the data should be triggered either through software choice, or using an interrupt.
Please enter a valid email to subscribe
We need to confirm your email address.
To complete the subscription, please click the link in the
Thank you for subscribing!
Arduino
via Egeo 16
Torino, 10131
Italy | http://forum.arduino.cc/index.php?topic=8661.150 | CC-MAIN-2016-07 | refinedweb | 346 | 52.83 |
Problem
For a class that represents some resource, you want to use its constructor to acquire it and the destructor to release it. This technique is often referred to as resource acquisition is initialization (RAII).
Solution
Allocate or acquire the resource in the constructor, and free or release the resource in the destructor. This reduces the amount of code a user of the class must write to deal with exceptions. See Example 8-3 for a simple illustration of this technique.
Example 8-3. Using constructors and destructors
#include #include using namespace std; class Socket { public: Socket(const string& hostname) {} }; class HttpRequest { public: HttpRequest (const string& hostname) : sock_(new Socket(hostname)) {} void send(string soapMsg) {sock_ << soapMsg;} ~HttpRequest ( ) {delete sock_;} private: Socket* sock_; }; void sendMyData(string soapMsg, string host) { HttpRequest req(host); req.send(soapMsg); // Nothing to do here, because when req goes out of scope // everything is cleaned up. } int main( ) { string s = "xml"; sendMyData(s, ""); }
Discussion
The guarantees made by constructors and destructors offer a nice way to let the compiler clean up after you. Typically, you initialize an object and allocate any resources it uses in the constructor, and clean them up in the destructor. This is normal. But programmers have a tendency to use the create-open-use-close sequence of events, where the user of the class is required to do explicit "opening" and "closing" of resources. A file class is a good example.
The usual argument for RAII goes something like this. I could easily have designed my HttpRequest class in Example 8-3 to make the user do a little more work. For example:
class HttpRequest { public: HttpRequest ( ); void open(const std::string& hostname); void send(std::string soapMsg); void close( ); ~HttpRequest ( ); private: Socket* sock_; };
With this approach, a responsible version of sendMyData might look like this:
void sendMyData(std::string soapMsg, std::string host) { HttpRequest req; try { req.open( ); req.send(soapMsg); req.close( ); } catch (std::exception& e) { req.close( ); // Do something useful... } }
This is more work without any benefit. This sort of design forces the user to write more code and to deal with exceptions by cleaning up your class (assuming you don't call close in your destructor).
The RAII approach has wide applicability, especially when you want a guarantee that something will be undone if an exception is thrown without having to put TRy/catch code all over the place. Consider a desktop application that wants to display a message on the status bar or title bar while some work is being done:
void MyWindow::thisTakesALongTime( ) { StatusBarMessage("Copying files..."); // ... }
All the StatusBarMessage class has to do is update the appropriate window with status information when it is constructed, and reset it back to the empty string (or whatever message was there previously) when it is destroyed. Here's the key point: if the function returns or an exception is thrown StatusBarMessage still gets its work done. The compiler guarantees that the destructor will be called for a stack variable whose scope has exited. Without this approach, the author of thisTakesALongTime needs to carefully account for every control path so the wrong message doesn't remain on the window if the operation fails, the user cancels it, etc. Once again, this results in less code and fewer errors for the author of the calling function.
RAII is no panacea, but if you have not used it before, chances are you can find a number of places where it is useful. Another good example is locking. If you are using RAII to manage locks on resources such as threads, pooled objects, network connections, etc., you will find that this approach allows for stronger exception-safety and less code. In fact, this is how the Boost multithreading library implements locks to make for clean programming on the part of the user. See Chapter 12 for a discussion of the Boost Threads library.
Building C++ Applications
Code Organization
Numbers
Strings and Text
Dates and Times
Managing Data with Containers
Algorithms
Classes
Exceptions and Safety
Streams and Files
Science and Mathematics
Multithreading
Internationalization
XML
Miscellaneous
Index | https://flylib.com/books/en/2.131.1/using_constructors_and_destructors_to_manage_resources_or_raii_.html | CC-MAIN-2019-04 | refinedweb | 682 | 51.58 |
Messages with line break markers (\\) in the Checkbox test descriptions not correctly extracted for translation
Bug Description
Binary package hint: checkbox
Description: Ubuntu lucid (development branch)
Release: 10.04
Description: Ubuntu 9.10
Release: 9.10
checkbox version > 0.8.5
There are several strings appearing in english although they are translated in the po.
This happens on strings having "\\" within, coming from a "\" character present in test source files.
The bug is present in the latest version of checkbox from ppa also (version 0.9):
https:/
Manually changing original string (msgid) in a po file (deleting \\) in the source, rebuilding and installing the new package make it looks correctly (but it's just a bad bad hack obviously ;-) )
This bug affects all not-english users.
Related branches
- Jeff Lane: Approve on 2011-03-16
- Diff: 732 lines (+233/-189)7 files modifieddebian/changelog (+14/-0)
jobs/fingerprint.txt.in (+12/-12)
jobs/firewire.txt.in (+3/-3)
jobs/media.txt.in (+48/-48)
jobs/monitor.txt.in (+6/-6)
jobs/sleep.txt.in (+22/-22)
po/checkbox.pot (+128/-98)
There are no more \\ characters, but intltool still does not retain line breaks, so strings stretching into multiple lines are not translated. Single line strings are translated correctly.
This bug is still present in Lucid Beta 2.
El dg 11 de 04 de 2010 a les 16:40 +0000, en/na Sergio Zanchetta va
escriure:
> This bug is still present in Lucid Beta 2.
>
> ** Summary changed:
>
> - [Lucid Beta1] Several strings appear in english although translated
> + [Lucid Beta2] Several strings appear in english although translated
>
Sergio,
Thanks for helping with reporting, triaging and fixing translations
bugs.
Every time you change the description of the bug there is a lot of
unnecessary e-mail traffic: quite a lot of people are subscribed to
bugmail and receive the update.
Therefore, unless there is a justified need to correct the title, I'd
recommend not changing it. If the bug is still not fixed, it is not
necessary to mark to which milestone it applies.
Thanks!
Marc, has this been fixed in Maverick?
A new checkbox template was imported yesterday.
Could someone perhaps test if this bug is still present by translating at least one of the problematic strings and checking out it is displayed correctly (either waiting for the next language pack or by installing an .mo file manually)?
Thanks!
Right, I've just tested this, and it's not fixed.
I've run 'LANGUAGE=es checkbox-gtk' (Spanish translation is complete), but several translations are still not being loaded. For example:
https:/
That particular one is present in the language pack, but it is not being loaded when running the fingerprint test.
I got it: http://
According to this, lines that begin with a space are expected to be part of a paragraph, and lines starting with two spaces are expected to be displayed verbatim - intltool-extract works consistently with this, so after changing a file to start description strings streching to multiple lines with two spaces, line breaks are preserved. Too bad, that translation still does not work: http://
Just in case, here is a patch to do further testing/etc.
I take this opportunity, based from Gabor patch and a little modifications
see http://
Any update on this?
Could any of the checkbox developers look at the merge proposal?
Thanks!
David, I merged Michael Terry's branch.
Jeff, thanks a lot for that!
I see you've marked the bug as Fix Released, but it seems that according to comment #8, while the linked branch improves things (preserves line breaks in the extracted transaltions), it does not actually fix the bug.
Did you have the chance to test whether after merging mterry's branch translations are actually loaded? Otherwise I'd suggest reverting the status to New, Confirmed or Triaged.
David, good call, thanks. I marked the upstream checkbox back to Triaged, but I can't change the Ubuntu Checkbox back :(
I poked at it briefly last night and didn't notice then, but on a second look this morning I'm assuming the issue is still there...
Looking at Spanish, the es.po file shows translated strings (using firewire for example):
#. description
#: ../tests/
msgid ""
"Firewire HDD verification procedure: 1.- Plug a Firewire HDD into the "
"computer 2.- A window should be opened asking which action should be "
"performed (open folder, photo manager, etc). 3.- Copy some files from the "
"internal/firewire HDD to the firewire/internal HDD"
msgstr ""
"Procedimiento de verificación de disco duro firewire: 1.- Conecte un disco "
"duro firewire al equipo 2.- Aparecerá una ventana preguntándole qué acción "
"desea tomar (abrir carpeta, gestor de fotos, etc.) 3.- Copie algunos "
"archivos del disco duro interno al firewire y viceversa."
but not all the strings are being picked up... :(
More info... after building checkbox from trunk after merging Michael's branch, the spanish po file looks like I mentioned above. I checked es.po and everything seemed to have a translation string.
Also, the locale files (/usr/share/
So when running checkbox with the language set, as above (LANGUAGE=es checkbox-gtk) the windows and buttons are all translated, but the strings for the job descriptions, while existing in the locale message file are not being pulled in... :(
Sadly, I know very little about translations and localizing so I'm learning this as I go along.
Jeff: don't worry, I can give a little more insight to the problem :)
Basically, you just committed my patch, which fixes one thing: extraction of strings. To see the effect of this, you would need to update the translation file, which you seem did not do. So, go to the po folder, and issue `intltool-update es`. This would give you an es.po file that looks like my hu.po on the screenshot in comment #8. So instead of "1 - ... 2 - ... 3 - ... " you should see
"1 - ...
2 - ...
3 - ..."
in the msgid field (also, don't forget to remove the "#, fuzzy" mark, otherwise the translation will be omitted from the .mo file when you build it).
After doing this, the problem should be solved - in theory. However, I could not see the translated text, so there must be an other bug too. Perhaps checkbox changes somehow the text, so when it looks for the translation, it does not look after the same strings that are in the po files - but this is just a guess.
I think the important thing to understand first is how the text for the string in all *.txt.in files are loaded at runtime.
They are rfc822deb files, and their strings are now correctly marked for _extraction_, so that at build time intltool extracts their translatable strings and puts them in the po/checkbox.pot file exposing them to translators.
I think the difference here with most of the translations we're used to deal with is that in this particular case marking a string to be translatable means that it will be extracted, but it does not mean that it will be passed to gettext at runtime to load its translation.
What I mean is:
1) In jobs/firewire.
_description:
Firewire HDD verification procedure:
[...]
=> Here the leading underscore '_' tells intltool to extract the string, but it does not call gettext, so translations are not loaded at runtime.
2) Somewhere in a .py file in the code:
from gettext import gettext as _
_("This is a translatable string in the code")
=> Here wrapping the string with the gettext function tells intltool to extract the string and calls the gettext function. Hence translations are loaded at runtime.
So how can we call gettext giving it the text in the *.txt files as an argument in the first case?
David: perhaps this file has something to do about it:
http://
But I don't really understand what's going on in here. :(
This bug was fixed in the package checkbox - 0.11.2
---------------
checkbox (0.11.2) natty; urgency=low
New upstream release (LP: #736919):
* Added version to dpkg dependency
* Added multiarch support to install script (LP: #727411)
* Fixed submitting data twice (LP: #531010)
* Fixed job descriptions for checkbox-cli (LP: #221400)
[Daniel Manrique]
* Fixed strings in audio tests and updated pot file (LP: #691241)
[Jochen Kemnade]
* Fixed grammar in user-apps tests (LP: #642001)
[Jeff Lane]
* Added reboot instructions to suspend/hibernate tests (LP: #420493)
* Made the firewire instructions make more sense (LP: #693068)
[Michael Terry]
* Fixed several strings appear in English although translated (LP: #514401)
- jobs/fingerprin
- jobs/media.txt.in
- jobs/monitor.txt.in
- jobs/sleep.txt.in
- jobs/firewire.
- po/checkbox.pot
* Fixed grammar (LP: #525454)
+ jobs/fingerprin
-- Marc Tardif <email address hidden> Thu, 17 Mar 2011 11:15:12 -0400
As the upload does not fix the original bug and it's best not to revert the status, I've changed the bug description to reflect what is actually being fixed with the upload and filed bug 740146 as a follow-up.
I think the best thing to do would be to remove all \ from the original strings if possible. It is not just the fact that translations are not working at the moment, but these backslashes confuse translators.
Marc, what do you think, would this be possible in Lucid? Thanks! | https://bugs.launchpad.net/ubuntu/+source/checkbox/+bug/514401 | CC-MAIN-2020-05 | refinedweb | 1,550 | 63.49 |
Archive::Zip - Provide an interface to ZIP archive files.
use Archive::Zip qw( :ERROR_CODES :CONSTANTS ); my $zip = Archive::Zip->new(); my $member = $zip->addDirectory( 'dirname/' ); $member = $zip->addString( 'This is a test', 'stringMember.txt' ); $member->desiredCompressionMethod( COMPRESSION_DEFLATED ); $member = $zip->addFile( 'xyz.pl', 'AnotherName.pl' ); die 'write error' unless $zip->writeToFileNamed( 'someZip.zip' ) == AZ_OK; $zip = Archive::Zip->new(); die 'read error' unless $zip->read( 'someZip.zip' ) == AZ_OK; $member = $zip->memberNamed( 'stringMember.txt' ); $member->desiredCompressionMethod( COMPRESSION_STORED ); die 'write error' unless $zip->writeToFileNamed( 'someOtherZip.zip' ) == AZ_OK;::Zlib library to read and write the compressed streams inside the files.:
File::Spec and
File::Basename are used for various file operations. When you're referring to a file on your system, use its file naming conventions.
This applies to every method that refers to an archive member, or provides a name for new archive members. The
extract() methods that can take one or two names will convert from local to zip names if you call them with a single name.
Archive::Zip::Archive objects are what you ordinarily deal with. These maintain the structure of a zip file, without necessarily holding data. When a zip is read from a disk file, the (possibly compressed) data still lives in the file, not in memory. Archive members hold information about the individual members, but not (usually) the actual member data. When the zip is written to a (different) file, the member data is compressed or copied as needed. It is possible to make archive members whose data is held in a string in memory, but this is not done when a zip file is read. Directory members don't have any data.
Exporter Archive::Zip Common base class, has defs. Archive::Zip::Archive A Zip archive. Archive::Zip::Member Abstract superclass for all members. Archive::Zip::StringMember Member made from a string Archive::Zip::FileMember Member made from an external file Archive::Zip::ZipFileMember Member that lives in a zip file Archive::Zip::NewFileMember Member whose data is in a file Archive::Zip::DirectoryMember Member that is a directory
Exports the following constants: FA_MSDOS FA_UNIX GPBF_ENCRYPTED_MASK GPBF_DEFLATING_COMPRESSION_MASK GPBF_HAS_DATA_DESCRIPTOR_MASK COMPRESSION_STORED COMPRESSION_DEFLATED IFA_TEXT_FILE_MASK IFA_TEXT_FILE IFA_BINARY_FILE COMPRESSION_LEVEL_NONE COMPRESSION_LEVEL_DEFAULT COMPRESSION_LEVEL_FASTEST COMPRESSION_LEVEL_BEST_COMPRESSION
Exports the following constants (only necessary for extending the module): FA_AMIGA FA_VAX_VMS FA_VM_CMS FA_ATARI_ST FA_OS2_HPFS FA_MACINTOSH FA_Z_SYSTEM FA_CPM FA_WINDOWS_NTFS GPBF_IMPLODING_8K_SLIDING_DICTIONARY_MASK GPBF_IMPLODING_3_SHANNON_FANO_TREES_MASK GPBF_IS_COMPRESSED_PATCHED_DATA_MASK COMPRESSION_SHRUNK DEFLATING_COMPRESSION_NORMAL DEFLATING_COMPRESSION_MAXIMUM DEFLATING_COMPRESSION_FAST DEFLATING_COMPRESSION_SUPER_FAST COMPRESSION_REDUCED_1 COMPRESSION_REDUCED_2 COMPRESSION_REDUCED_3 COMPRESSION_REDUCED_4 COMPRESSION_IMPLODED COMPRESSION_TOKENIZED COMPRESSION_DEFLATED_ENHANCED COMPRESSION_PKWARE_DATA_COMPRESSION_LIBRARY_IMPLODED
Explained below. Returned from most methods. AZ_OK AZ_STREAM_END AZ_ERROR AZ_FORMAT_ERROR AZ_IO_ERROR
Many of the methods in Archive::Zip return error codes. These are implemented as inline subroutines, using the
use constant pragma. They can be imported into your namespace using the
:ERROR_CODES tag:
use Archive::Zip qw( :ERROR_CODES ); ... die "whoops!" unless $zip->read( 'myfile.zip' ) == AZ_OK;
Everything is fine.
The read stream (or central directory) ended normally.
There was some generic kind of error.
There is a format error in a ZIP file being read.
There was an IO error.
Archive::Zip allows each member of a ZIP file to be compressed (using the Deflate algorithm) or uncompressed. Other compression algorithms that some versions of ZIP have been able to produce are not supported. Each member has two compression methods: the one it's stored as (this is always COMPRESSION_STORED for string and external file members), and the one you desire for the member in the zip file. These can be different, of course, so you can make a zip member that is not compressed out of one that is, and vice versa. You can inquire about the current compression and set the desired compression method:
my $member = $zip->memberNamed( 'xyz.txt' ); $member->compressionMethod(); # return current compression # set to read uncompressed $member->desiredCompressionMethod( COMPRESSION_STORED ); # set to read compressed $member->desiredCompressionMethod( COMPRESSION_DEFLATED );
There are two different compression methods:
file is stored (no compression)
file is Deflated.
The Archive::Zip class (and its invisible subclass Archive::Zip::Archive) implement generic zip file functionality. Creating a new Archive::Zip object actually makes an Archive::Zip::Archive object, but you don't have to worry about this unless you're subclassing.
Make a new, empty zip archive.
my $zip = Archive::Zip->new();
If an additional argument is passed, new() will call read() to read the contents of an archive:
my $zip = Archive::Zip->new( 'xyz.zip' );
If a filename argument is passed and the read fails for any reason, new will return undef. For this reason, it may be better to call read separately.
This is a utility function that uses the Compress::Zlib CRC routine to compute a CRC-32. You can get the CRC of a string:
$crc = Archive::Zip::computeCRC32( $string );
Or you can compute the running CRC:
$crc = 0; $crc = Archive::Zip::computeCRC32( 'abcdef', $crc ); $crc = Archive::Zip::computeCRC32( 'ghijkl', $crc );
Report or change chunk size used for reading and writing. This can make big differences in dealing with large files. Currently, this defaults to 32K. This also changes the chunk size used for Compress::Zlib. You must call setChunkSize() before reading or writing. This is not exportable, so you must call it like:
Archive::Zip::setChunkSize( 4096 );
or as a method on a zip (though this is a global setting). Returns old chunk size.
Returns the current chunk size:
my $chunkSize = Archive::Zip::chunkSize();
Change the subroutine called with error strings. This defaults to \&Carp::carp, but you may want to change it to get the error strings. This is not exportable, so you must call it like:
Archive::Zip::setErrorHandler( \&myErrorHandler );
If myErrorHandler is undef, resets handler to default. Returns old error handler. Note that if you call Carp::carp or a similar routine or if you're chaining to the default error handler from your error handler, you may want to increment the number of caller levels that are skipped (do not just set it to a number):
$Carp::CarpLevel++;
$ENV{TMPDIR}
environment variable. But see the File::Spec documentation for your system. Note that on many systems, if you're running in taint mode, then you must make sure that
$ENV{TMPDIR} is untainted for it to be used. Will NOT create
$tmpdir if it doesn't exist (this is a change from prior versions!). Returns file handle and name:
my ($fh, $name) = Archive::Zip::tempFile(); my ($fh, $name) = Archive::Zip::tempFile('myTempDir'); my $fh = Archive::Zip::tempFile(); # if you don't need the name
Return a copy of the members array
my @members = $zip->members();
Return the number of members I have
Return a list of the (internal) file names of the zip members
Return ref to member whose filename equals given filename or undef.
$string must be in Zip (Unix) filename format.
Return array of members whose filenames match given regular expression in list context. Returns number of matching members in scalar context.
my @textFileMembers = $zip->membersMatching( '.*\.txt' ); # or my $numberOfTextFiles = $zip->membersMatching( '.*\.txt' );
Return the disk that I start on. Not used for writing zips, but might be interesting if you read a zip in. This should be 0, as Archive::Zip does not handle multi-volume archives.
Return the disk number that holds the beginning of the central directory. Not used for writing zips, but might be interesting if you read a zip in. This should be 0, as Archive::Zip does not handle multi-volume archives.
Return the number of CD structures in the zipfile last read in. Not used for writing zips, but might be interesting if you read a zip in.
Return the number of CD structures in the zipfile last read in. Not used for writing zips, but might be interesting if you read a zip in.
Returns central directory size, as read from an external zip file. Not used for writing zips, but might be interesting if you read a zip in.
Returns the offset into the zip file where the CD begins. Not used for writing zips, but might be interesting if you read a zip in.
Get or set the zipfile comment. Returns the old comment.
print $zip->zipfileComment(); $zip->zipfileComment( 'New Comment' );.
Returns the name of the file last read from. If nothing has been read yet, returns an empty string; if read from a file handle, returns the handle in string form. and return the given member, or match its name and remove it. Returns undef if member or name doesn't exist in this Zip. No-op if member does not belong to this zip.
Remove and return the given member, or match its name and remove it. Replace with new member. Returns undef if member or name doesn't exist in this Zip, or if
$newMember is undefined.
It is an (undiagnosed) error to provide a
$newMember that is a member of the zip being modified.
my $member1 = $zip->removeMember( 'xyz' ); my $member2 = $zip->replaceMember( 'abc', $member1 ); # now, $member2 (named 'abc') is not in $zip, # and $member1 (named 'xyz') is, having taken $member2's place. );
Update a single member from the file or directory named
$fileName. Returns the (possibly added or updated) member, if any;
undef on errors. The comparison is based on
lastModTime() and (in the case of a non-directory) the size of the file.' );
Returns the uncompressed data for a particular member, or undef.
print "xyz.txt contains " . $zip->contents( 'xyz.txt' );
Also can change the contents of a member:
$zip->contents( 'xyz.txt', 'This is the new contents' );
A Zip archive can be written to a file or file handle, or read from one.
Write a zip archive to named file. Returns
AZ_OK on success.
my $status = $zip->writeToFileNamed( 'xx.zip' ); die "error somewhere" if $status != AZ_OK;
Note that if you use the same name as an existing zip file that you read in, you will clobber ZipFileMembers. So instead, write to a different file name, then delete the original. If you use the
overwrite() or
overwriteAs() methods, you can re-write the original zip in this way.
$fileName should be a valid file name on your system..
my $fh = IO::File->new( 'someFile.zip', 'w' ); if ( $zip->writeToFileHandle( $fh ) != AZ_OK) { # error handling }
If you pass a file handle that is not seekable (like if you're writing to a pipe or a socket), pass a false second argument:
my $fh = IO::File->new( '| cat > somefile.zip', 'w' ); $zip->writeToFileHandle( $fh, 0 ); # fh is not seekable
If this method fails during the write of a member, that member and all following it will return false from
wasWritten(). See writeCentralDirectory() for a way to deal with this. If you want, you can write data to the file handle before passing it to writeToFileHandle(); this could be used (for instance) for making self-extracting archives. However, this only works reliably when writing to a real file (as opposed to STDOUT or some other possible non-file). See examples/selfex.pl for how to write a self-extracting archive.
Writes the central directory structure to the given file handle. Returns AZ_OK on success. If given an $offset, will seek to that point before writing. This can be used for recovery in cases where writeToFileHandle or writeToFileNamed returns an IO error because of running out of space on the destination file. You can truncate the zip by seeking backwards and then writing the directory:
my $fh = IO::File->new( 'someFile.zip', 'w' ); my $retval = $zip->writeToFileHandle( $fh ); if ( $retval == AZ_IO_ERROR ) { my @unwritten = grep { not $_->wasWritten() } $zip->members(); if (@unwritten) { $zip->removeMember( $member ) foreach my $member ( @unwritten ); $zip->writeCentralDirectory( $fh, $unwritten[0]->writeLocalHeaderRelativeOffset()); } }
Write the zip to the specified file, as safely as possible. This is done by first writing to a temp file, then renaming the original if it exists, then renaming the temp file, then deleting the renamed original if it exists. Returns AZ_OK if successful.
Write back to the original zip file. See overwriteAs() above. If the zip was not ever read from a file, this generates an error.
Read zipfile headers from a zip file, appending new members. Returns
AZ_OK or error code.
my $zipFile = Archive::Zip->new(); my $status = $zipFile->read( '/some/FileName.zip' );
Read zipfile headers from an already-opened file handle, appending new members. Does not close the file handle. Returns
AZ_OK or error code. Note that this requires a seekable file handle; reading from a stream is not yet supported.
my $fh = IO::File->new( '/some/FileName.zip', 'r' ); my $zip1 = Archive::Zip->new(); my $status = $zip1->readFromFileHandle( $fh ); my $zip2 = Archive::Zip->new(); $status = $zip2->readFromFileHandle( $fh );
These used to be in Archive::Zip::Tree but got moved into Archive::Zip. They enable operation on an entire tree of members or files. A usage example:'); # now extract the same files into /tmpx $zip->extractTree( 'stuff', '/tmpx' );
my $pred = sub { /\.txt/ }; $zip->addTree( '.', '', $pred );
will add all the .txt files in and below the current directory, using relative names, and making the names identical in the zipfile:
original name zip member name ./xyz xyz ./a/ a/ ./a/b a/b
To translate absolute to relative pathnames, just pass them in: $zip->addTree( '/c/d', 'a' );
original name zip member name /c/d/xyz a/xyz /c/d/a/ a/a/ /c/d/a/b a/a/b
Returns AZ_OK on success. Note that this will not follow symbolic links to directories. Note also that this does not check for the validity of filenames.
Note that you generally don't want to make zip archive member names absolute.
of all is well.
If you don't give any arguments at all, will extract all the files in the zip with their original names.
If you supply one argument for
$root,
extractTree will extract all the members whose names start with
$root into the current directory, stripping off
$root first.
$root is in Zip (Unix) format. For instance,
$zip->extractTree( 'a' );
when applied to a zip containing the files: a/x a/b/c ax/d/e d/e will extract:
a/x as ./x
a/b/c as ./b/c
If you give two arguments,
extractTree extracts all the members whose names start with
$root. It will translate
$root into
$dest to construct the destination file name.
$root and
$dest are in Zip (Unix) format. For instance,
$zip->extractTree( 'a', 'd/e' );
when applied to a zip containing the files: a/x a/b/c ax/d/e d/e will extract:
a/x to d/e/x
a/b/c to d/e/b/c and ignore ax/d/e and d/e
If you give three arguments,
extractTree extracts all the members whose names start with
$root. It will translate
$root into
$dest to construct the destination file name, and then it will convert to local file system format, using
$volume as the name of the destination volume.
$root and
$dest are in Zip (Unix) format.
$volume is in local file system format.
For instance, under Windows,
If you want absolute paths (the prior example used paths relative to the current directory on the destination volume, you can specify these in
$dest:
Several constructors allow you to construct members without adding them to a zip archive. These work the same as the addFile(), addDirectory(), and addString() zip instance methods described above, but they don't add the new members to a zip.
Construct a new member from the given string. Returns undef on error.
my $member = Archive::Zip::Member->newFromString( 'This is a test', 'xyz.txt' );
Construct a new member from the given file. Returns undef on error.
my $member = Archive::Zip::Member->newFromFile( 'xyz.txt' );/' );
These methods get (and/or set) member attribute values.
Gets the field from the member header.
Gets or sets the field from the member header. These are
FA_* values.
Gets the field from the member header.
Gets the general purpose bit field from the member header. This is where the
GPBF_* bits live..
Return the member's external file name, if any, or undef.
Get or set the member's internal filename. Returns the (possibly new) filename. Names will have backslashes converted to forward slashes, and will have multiple consecutive slashes converted to single ones.
Return the member's last modification date/time stamp in MS-DOS format.
Return the member's last modification date/time stamp, converted to unix localtime format.
print "Mod Time: " . scalar( localtime( $member->lastModTime() ) );
Set the member's lastModFileDateTime from the given unix time.
$member->setLastModFileDateTimeFromUnix( time() );
Return the internal file attributes field from the zip header. This is only set for members read from a zip file.
Return member attributes as read from the ZIP file. Note that these are NOT UNIX!.
Gets or sets the extra field that was read from the local header. This is not set for a member from a zip file until after the member has been written out. The extra field must be in the proper format.
Gets or sets the extra field that was read from the central directory header. The extra field must be in the proper format.
Return both local and CD extra fields, concatenated.
Get or set the member's file comment.
Get or set the data descriptor flag. If this is set, the local header will not necessarily have the correct data sizes. Instead, a small structure will be stored at the end of the member data with these values. This should be transparent in normal operation.
Return the CRC-32 value for this member. This will not be set for members that were constructed from strings or external files until after the member has been written.
Return the CRC-32 value for this member as an 8 character printable hex string. This will not be set for members that were constructed from strings or external files until after the member has been written.
Return the compressed size for this member. This will not be set for members that were constructed from strings or external files until after the member has been written.
Return the uncompressed size for this member.
Return true if this member is encrypted. The Archive::Zip module does not currently create or extract encrypted members. me to a file with the given name. The file will be created with default modes. Directories will be created as needed. The
$fileName argument should be a valid file name on your file system. Returns AZ_OK on success.
Returns true if I am a directory.
Returns the file offset in bytes the last time I was written.
Returns true if I was successfully written. Reset at the beginning of a write attempt.
It is possible to use lower-level routines to access member data streams, rather than the extract* methods and contents(). For instance, here is how to print the uncompressed contents of a member in chunks using these methods:
my ( $member, $status, $bufferRef ); $member = $zip->memberNamed( 'xyz.txt' ); $member->desiredCompressionMethod( COMPRESSION_STORED ); $status = $member->rewindData(); die "error $status" unless $status == AZ_OK; while ( ! $member->readIsDone() ) { ( $bufferRef, $status ) = $member->readChunk(); die "error $status" if $status != AZ_OK && $status != AZ_STREAM_END; # do something with $bufferRef: print $$bufferRef; } $member->endRead(); data and set up for reading data streams or writing zip files. Can take options for
inflateInit() or
deflateInit(), but this isn't likely to be necessary. Subclass overrides should call this method. Returns
AZ_OK on success.
Reset the read variables and free the inflater or deflater. Must be called to close files, etc. Returns AZ_OK on success.
Return true if the read has run out of data or errored out. (and uncompress, if necessary) the member's contents to the given file handle. Return AZ_OK on success.
The Archive::Zip::FileMember class extends Archive::Zip::Member. It is the base class for both ZipFileMember and NewFileMember classes. This class adds an
externalFileName and an
fh member to keep track of the external file.
Return the member's external filename.
Return the member's read file handle. Automatically opens file if necessary.
The Archive::Zip::ZipFileMember class represents members that have been read from external zip files.
Returns the disk number that the member's local header resides in. Should be 0.
Returns the offset into the zip file where the member's local header is.
Returns the offset from the beginning of the zip file to the member's data.
Archive::Zip requires several other modules:
Ned Konz, <nedkonz@cpan.org>
File attributes code by Maurice Aubrey <maurice@lovelyfilth.com>
Copyright (c) 2000-2003 Ned Konz. All rights reserved. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
There is a Japanese translation of this document at that was done by DEQ <deq@oct.zaq.ne.jp> . Thanks! | http://search.cpan.org/~nedkonz/Archive-Zip-1.14/lib/Archive/Zip.pod | crawl-002 | refinedweb | 3,430 | 66.84 |
Get the highlights in your inbox every week.
Automate your tweets with Python code running on a Raspberry Pi
Make your own Twitter bot with Python and Raspberry Pi
Automate your tweets with some simple Python code running on a Raspberry Pi.
Subscribe now.
Getting started with the Twitter API
Twitter is a web service that provides an application programing interface (API), which means you can write software that communicates with the live Twitter service—perhaps to read tweets in real time or to automatically publish tweets.
The API is free to use, but you have to have a Twitter account and to register your application in order to get access to the API, but that's easy enough.
Start by going to apps.twitter.com. Create a new app by completing the required fields—ensure you select Read and Write under your app's permissions. This will generate a you don't include these keys.
To access the Twitter API from Python, you'll need to install the Twython library. Install this using pip in a terminal window:
sudo pip3 install twython
Open your Python editor and create a new file. Save it as auth.py and insert your own API keys into this example code:'
In another file, write this simple program to test whether you can send a tweet:
from twython import Twython
from auth import (
consumer_key,
consumer_secret,
access_token,
access_token_secret
)
twitter = Twython(
consumer_key,
consumer_secret,
access_token,
access_token_secret
)
message = "Hello world!"
twitter.update_status(status=message)
print("Tweeted: {}".format(message))
Save and run this file. It should send a tweet saying "Hello world!" from your account.
If you get an error, check that your API keys were copied correctly.
Now try adding some randomness to your program. Add the random module at the top:
import random
Add a list of messages, and select one at random:
messages = [
"Hi Twitter",
"Hello from Python",
"Hello from my Raspberry Pi",
"I'm a bot",
]
Then change your code to pick a random message from the list before tweeting it:
message = random.choice(messages)
You might also want to try tweeting images:
message = "Hello world - here's a picture!"
with open('/home/pi/Downloads/image.jpg', 'rb') as photo:
twitter.update_status_with_media(status=message, media=photo)
Reading Twitter
In addition to using Python to send tweets, you can also read tweets using the TwythonStreamer class:
from twython import TwythonStreamer
from auth import (
consumer_key,
consumer_secret,
access_token,
access_token_secret
)
class MyStreamer(TwythonStreamer):
def on_success(self, data):
if 'text' in data:
print(data['text'])
stream = MyStreamer(
consumer_key,
consumer_secret,
access_token,
access_token_secret
)
stream.statuses.filter(track='raspberry pi')
This code tracks all tweets containing the phrase "raspberry pi." When it finds a tweet, it sends a collection of data about the tweet into the on_success method. Data is a dictionary containing the tweet text, along with lots of metadata. Here, we just printed out the tweet contents. You can leave it running, and it will run the on_success method every time a new tweet matches the search. This could be a word, a phrase, or a hashtag.
This example prints out the username of the account that tweeted, as well as the tweet contents:
class MyStreamer(TwythonStreamer):
def on_success(self, data):
if 'text' in data:
username = data['user']['screen_name']
tweet = data['text']
print("@{}: {}".format(username, tweet))
For more information, see Raspberry Pi's learning guide on using the Twitter API with Python.
If you want to have your Twitter bot code run 24/7, you can install it on a web server, run it on a Raspberry Pi at home, or even use a hosted Raspberry Pi.
Physical components
With a Raspberry Pi, you can easily add physical components, such as buttons and LEDs, to your Twitter program. For example, you could set it to send a random tweet when a physical button is pressed:
from gpiozero import Button
button = Button(2)
button.wait_for_press()
message = "Hello world!"
twitter.update_status(status=message)
print("Tweeted: {}".format(message))
Or tweet a Sense HAT's temperature reading:
from sense_hat import SenseHat
sense = SenseHat()
message = "The temperature is currently {:2.2f}
degrees".format(sense.temperature)
twitter.update_status(status=message)
Or light up an LED when a tweet matches a search:
from gpiozero import LED
led = LED(3)
class MyStreamer(TwythonStreamer):
def on_success(self, data):
if 'text' in data:
led.on()
username = data['user']['screen_name']
tweet = data['text']
print("@{}: {}".format(username, tweet))
sleep(10)
led.off()
Or scroll the tweet on a Sense HAT display:
from sense_hat import SenseHat
sense = SenseHat()
class MyStreamer(TwythonStreamer):
def on_success(self, data):
if 'text' in data:
username = data['user']['screen_name']
tweet = data['text']
sense.show_message("@{}: {}".format(username, tweet))
For more information on physical computing with Raspberry Pi, see my articles on GPIO Zero starter projects and getting started with the Sense HAT. You can also make a tweeting Raspberry Pi camera project by following the Tweeting Babbage tutorial on Raspberry Pi Learning Resources.
Real Twitter bots
Some time ago I created a Twitter bot called pyjokes that tweets out geeky jokes from a Python module I maintain. The code is very straightforward:
import pyjokes
joke = pyjokes.get_joke()
twitter.update_status(status=joke)
I simply use Cron to schedule the task to run.
Follow @pyjokes_bot for hilarious one-liners. You can read more about pyjokes on the project website pyjok.es, and you can see the code for the Twitter bot on GitHub.
I recently made another Twitter bot that tweets "on this day" links to previous years' content from the Raspberry Pi Foundation's blog.
It's a little more complex because it maintains a (single-table) database of all the historical blog posts, but the bot code is very simple. It just queries the database for all posts with a date matching the current date's month and day, picks one at random, and tweets out the year, title, and a link:
date = datetime.now().date()
month = date.month
day = date.day
posts = db.get_posts_on_date(month=month, day=day)
post = random.choice(posts)
year = int(post['year'])
title = html.unescape(post['title'])
slug = post['slug']
url = '{}'.format(slug)
tweet = "On this day in {}: {} {}".format(year, title, url)
print('Tweeting: {}'.format(tweet))
twitter.update_status(status=tweet)
You can follow this feed @raspberrypi_otd and see the bot's code on GitHub.
3 Comments
Looks like the first example is truncated so wouldn't work.
Whoops - looks like a bit got left off. The full line should be:
print("Tweeted: {}".format(message))
Thanks - I'll get that fixed up.
Hello
while posting a tweet went straight away when I tried a different thing like replying (not retweeting but replying) to someone who sent me a tweet, my tweet gets posted in My timeline but the other party got nothing.
twitter.update_status(username='destination', in_reply_to_status_id_str="23234234344445464556")
The party "destination" is getting nothing,. nothing shows up in his "Notifications".
How can I automate a reply to someone? notice that I have his tweet id, | https://opensource.com/article/17/8/raspberry-pi-twitter-bot | CC-MAIN-2020-34 | refinedweb | 1,148 | 55.64 |
Hello, I’m writing SOUL code generator / graph parser for a project and I need to be able to select different external source for different instances of the same processor. Does anyone know if this is even possible at this stage?
The issue I found is that you can only specify external variables per namespace / processor type. In other words, all instances of this processor share the same external variables.
E.g. all
Node_WavePlayers are going to play “thunder” sounds, and if I need to change it to something else, it’s going to be changed for all instances of the processor:
The workaround I found in
MinimumViablePiano is to add all of the possible external variables that all of the
Node_WavePlayers within this SOUL patch need and then select which one is supposed to play on each instance individually (haven’t tried it yet, not sure how it would work).
…In this case, if a
WavePlayer needs to select a random sound from a group at runtime, the externals list gets increasingly more entangled.
Other workaround that I’ve thought of is to wrap each external file into a simple processor reader and append a tag to processor type for each instance, e.g.
Node_Wave_Fire,
Node_Wave_Thunder, etc. But that seemed like an unreasonable duplication of code for the compiler. And the bigger problem with this approach is it would make it impossible for multiple
WavePlayers to read the same external sources from different positions.
If it would be possible to hold external variables in the top graph and pass them in to the instances of
WavePlayer, maybe that would solve this. Although I’m not sure if that wouldn’t produce more issues.
TLDR: Is the
MinimumViablePiano's approach my best bet at this point?
P.S. ExternalDataProvider also doesn’t solve the issue, as it provides data for the whole patch, not for individual instances of a processor within the patch. Technically I could compile multiple instances of the same patch each with its own manifest file, which would contain only required external variables and then route them together externally in the host, but that doesn’t make it simpler, or more elegant and requires even more recompiling of the same thing for just a variable change.
P.P.S. It’s possible I am completely overthinking this
| https://forum.juce.com/t/handling-large-quantities-of-external-sample-data-sources/47169 | CC-MAIN-2021-39 | refinedweb | 390 | 56.18 |
Here's the situation... I'm trying to make a 'gradebook' sorta program. I know the code isn't even close to being optimized nor programmed in the best way. The problem.. I can't seem to create an instance of CStudent in a switch statement. I want to use the switch statement to create a menu to give an option of adding a student, printing a student, etc... When I try to compile I get an error: "initialization of Student1 is skipped by case label"
Thanks for the helpThanks for the helpCode:#include <iostream> using namespace std; class CStudent //sizeof(CStudent) = 50 bytes { public: CStudent() {setAll();} //my constructor ~CStudent(){;} char firstName[20]; //1x20 =20 bytes char lastName[20]; //1x20 =20 bytes unsigned short grades[5]; //2x5 = 10 bytes //public member methods void printAll(int x); private: void setAll(); }; void CStudent::setAll() { cout << "------- Creating a new Student ---------" << endl; cout << "Enter First Name: "; cin >> this->firstName; cout << "Enter Last Name: "; cin >> this->lastName; //use loop to enter grades //starts with 0 and goes to 4 //total = 5 grades for(unsigned short gradeNum = 0; gradeNum < 5; gradeNum++) { cout << "Enter Grade #" << (gradeNum + 1) << ": "; cin >> this->grades[gradeNum]; } } void CStudent::printAll(int x) { cout << "\n---------- Printing Student Info ----------" << endl; cout << "Student Number " << (x+1) << endl; cout << "First Name:\t" << this->firstName << endl; cout << "Last Name:\t" << this->lastName << endl; //prints grades 0 to gradeNum for(unsigned short gradeNum = 0; gradeNum < 5; gradeNum++) { cout << "Grade #" << (gradeNum+1) << " grade:\t" << grades[gradeNum] << endl; } } int main() { //Menu unsigned short option; cout << "Enter #: " << endl; cin >> option; //switch statement based on option switch(option) { case 1: //create an instance of a Student CStudent Student1; break; case 2: //print an instance of a Student Student1.printAll(0); break; case 3: //print sizeof cout << "Size of Student: " << sizeof(Student1) << endl; case 4: //exit return 0; break; default: cout << "\nError!" << endl; break; } return 0; //the switch should take care of exit } | http://cboard.cprogramming.com/cplusplus-programming/66433-using-switch-create-object.html | CC-MAIN-2015-35 | refinedweb | 319 | 51.86 |
6. Substitute and Split
6.1.
re.sub()
Listing 315. Usage of
re.sub()
import re PATTERN = r'\s[a-z]{3}\s' INPUT = 'Baked Beans And Spam' re.sub(PATTERN, ' & ', INPUT, flags=re.IGNORECASE) # 'Baked Beans & Spam'
6.2.
re.split()
Listing 316. Usage of
re.split()
import re PATTERN = r'\s[a-z]{3}\s' INPUT = 'Baked Beans And Spam' re.split(PATTERN, INPUT, flags=re.IGNORECASE) # ['Baked Beans', 'Spam']
6.3. Assignments
6.3.1. Parsing text from webpage
Complexity level: easy
Lines of code to write: 5 lines
Estimated time of completion: 10 min
Filename:
solution/regex_split.py
Write input data to
regex_split.htmlfile
Using regexp split text by paragraphs
Print paragraph starting with "We choose to go to the moon"
- Polish
Zapisz dane wejściowe do pliku
regex_split.html
Za pomocą regexpów podziel tekst na paragrafy
Wyświetl paragraf zaczynający się od słów "We choose to go to the moon"
- Input
<html><body><bgsound src="jfktalk.wav" loop="2"><p></p><center><h3>John F. Kennedy Moon Speech - Rice Stadium</h3><img src="jfkrice.jpg"><h3>September 12, 1962</h3></center><p></p><hr><p></p><center>Movie clips of JFK speaking at Rice University: <a href="JFKatRice.mov">(.mov)</a> or <a href="jfkrice.avi">(.avi)</a> (833K)</center><p><a href="jfkru56k.asf">See and hear</a> the entire speech for 56K modem download [8.7 megabytes in a .asf movie format which requires Windows Media Player 7 (speech lasts about 33 minutes)].<br><a href="jfkru100.asf">See and hear</a> the entire speech for higher speed access [25.3 megabytes in .asf movie format which requires Windows Media Player 7].<br><a href="jfkslide.asf">See and hear</a>]. <br><a href="jfk_rice_speech.mpg">See and hear</a>. </p><p></p><hr><p></p><center><h4>TEXT OF PRESIDENT JOHN KENNEDY'S RICE STADIUM MOON SPEECH</h4></center>><p></p><hr><p></p><center><a href="movies.html">Return to Space Movies Cinema</a></center></body></html> | http://python.astrotech.io/regular-expressions/sub-split.html | CC-MAIN-2019-47 | refinedweb | 334 | 55.61 |
C library function - strrchr()
Advertisements
Description
The C library function char *strrchr(const char *str, int c) searches for the last occurrence of the character c (an unsigned char) in the string pointed to, by the argument str.
Declaration
Following is the declaration for strrchr() function.
char *strrchr(const char *str, int c)
Parameters
str -- This is the C string.
c -- This is the character to be located. It is passed as its int promotion, but it is internally converted back to char.
Return Value
This function returns a pointer to the last occurrence of character in str. If the value is not found, the function returns a null pointer.
Example
The following example shows the usage of strrchr() function.
#include <stdio.h> #include <string.h> int main () { int len; const char str[] = ""; const char ch = '.'; char *ret; ret = strrchr(str, ch); printf("String after |%c| is - |%s|\n", ch, ret); return(0); }
Let us compile and run the above program that will produce the following result:
String after |.| is - |.com|
string_h.htm
Advertisements | http://www.tutorialspoint.com/c_standard_library/c_function_strrchr.htm | CC-MAIN-2017-09 | refinedweb | 174 | 64.71 |
tutorial,.
Free Bonus: Click here to get access to a free NumPy Resources Guide that points you to the best tutorials, videos, and books for improving your NumPy skills.
The Importance of Data Splitting
Supervised machine learning is about creating models that precisely map the given inputs (independent variables, or predictors) to the given outputs (dependent variables, or responses).
How you measure the precision of your model depends on the type of a problem you’re trying to solve. In regression analysis, you typically use the coefficient of determination, root-mean-square error, mean absolute error, or similar quantities. For classification problems, you often apply accuracy, precision, recall, F1 score, and related indicators.
The acceptable numeric values that measure precision vary from field to field. You can find detailed explanations from Statistics By Jim, Quora, and many other resources.
What’s most important to understand is that you usually need unbiased evaluation to properly use these measures, assess the predictive performance of your model, and validate the model.
This means that you can’t evaluate the predictive performance of a model with the same data you used for training. You need evaluate the model with fresh data that hasn’t been seen by the model before. You can accomplish that by splitting your dataset before you use it.
Training, Validation, and Test Sets
Splitting your dataset is essential for an unbiased evaluation of prediction performance. In most cases, it’s enough to split your dataset randomly into three subsets:
The training set is applied to train, or fit, your model. For example, you use the training set to find the optimal weights, or coefficients, for linear regression, logistic regression, or neural networks.
The validation set is used for unbiased model evaluation during hyperparameter tuning. For example, when you want to find the optimal number of neurons in a neural network or the best kernel for a support vector machine, you experiment with different values. For each considered setting of hyperparameters, you fit the model with the training set and assess its performance with the validation set.
The test set is needed for an unbiased evaluation of the final model. You shouldn’t use it for fitting or validation.
In less complex cases, when you don’t have to tune hyperparameters, it’s okay to work with only the training and test sets.
Underfitting and Overfitting
Splitting a dataset might also be important for detecting if your model suffers from one of two very common problems, called underfitting and overfitting:
Underfitting is usually the consequence of a model being unable to encapsulate the relations among data. For example, this can happen when trying to represent nonlinear relations with a linear model. Underfitted models will likely have poor performance with both training and test sets.
Overfitting usually takes place when a model has an excessively complex structure and learns both the existing relations among data and noise. Such models often have bad generalization capabilities. Although they work well with training data, they usually yield poor performance with unseen (test) data.
You can find a more detailed explanation of underfitting and overfitting in Linear Regression in Python.
Prerequisites for Using
train_test_split()
Now that you understand the need to split a dataset in order to perform unbiased model evaluation and identify underfitting or overfitting, you’re ready to learn how to split your own datasets.
You’ll use version 0.23.1 of scikit-learn, or
sklearn. It has many packages for data science and machine learning, but for this tutorial you’ll focus on the
model_selection package, specifically on the function
train_test_split().
You can install
sklearn with
pip install:
$ python -m pip install -U "scikit-learn==0.23.1"
If you use Anaconda, then you probably already have it installed. However, if you want to use a fresh environment, ensure that you have the specified version, or use Miniconda, then you can install
sklearn from Anaconda Cloud with
conda install:
$ conda install -c anaconda scikit-learn=0.23
You’ll also need NumPy, but you don’t have to install it separately. You should get it along with
sklearn if you don’t already have it installed. If you want to refresh your NumPy knowledge, then take a look at the official documentation or check out Look Ma, No For-Loops: Array Programming With NumPy.
Application of
train_test_split()
You need to import
train_test_split() and NumPy before you can use them, so you can start with the
import statements:
>>> import numpy as np >>> from sklearn.model_selection import train_test_split
Now that you have both imported, you can use them to splity sparse matrices if appropriate:
sklearn.model_selection.train_test_split(*arrays, **options) -> list
arrays is the sequence of lists, NumPy arrays, pandas DataFrames, or similar array-like objects that hold the data you want to split. All these objects together make up the dataset and must be of the same length.
In supervised machine learning applications, you’ll typically work with two such sequences:
- A two-dimensional array with the inputs (
x)
- A one-dimensional array with the outputs (
y)
options are the optional keyword arguments that you can use to get desired behavior:
train_sizeis the number that defines the size of the training set. If you provide a
float, then it must be between
0.0and
1.0and will define the share of the dataset used for testing. If you provide an
int, then it will represent the total number of the training samples. The default value is
None.
test_sizeis the number that defines the size of the test set. It’s very similar to
train_size. You should provide either
train_sizeor
test_size. If neither is given, then the default share of the dataset that will be used for testing is
0.25, or 25 percent.
random_stateis the object that controls randomization during splitting. It can be either an
intor an instance of
RandomState. The default value is
None.
shuffleis the Boolean object (
Trueby default) that determines whether to shuffle the dataset before applying the split.
stratifyis an array-like object that, if not
None, determines how to use a stratified split.
Now it’s time to try data splitting! You’ll start by creating a simple dataset to work with. The dataset will contain the inputs in the two-dimensional array
x and outputs in the one-dimensional array
y:
>>> x = np.arange(1, 25).reshape(12, 2) >>> y = np.array([0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0]) >>> x array([[ 1, 2], [ 3, 4], [ 5, 6], [ 7, 8], [ 9, 10], [11, 12], [13, 14], [15, 16], [17, 18], [19, 20], [21, 22], [23, 24]]) >>> y array([0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0])
To get your data, you use
arange(), which is very convenient for generating arrays based on numerical ranges. You also use
.reshape() to modify the shape of the array returned by
arange() and get a two-dimensional data structure.
You can split both input and output datasets with a single function call:
>>> x_train, x_test, y_train, y_test = train_test_split(x, y) >>> x_train array([[15, 16], [21, 22], [11, 12], [17, 18], [13, 14], [ 9, 10], [ 1, 2], [ 3, 4], [19, 20]]) >>> x_test array([[ 5, 6], [ 7, 8], [23, 24]]) >>> y_train array([1, 1, 0, 1, 0, 1, 0, 1, 0]) >>> y_test array([1, 0, 0])
Given two sequences, like
x and
y here,
train_test_split() performs the split and returns four sequences (in this case NumPy arrays) in this order:
x_train: The training part of the first sequence (
x)
x_test: The test part of the first sequence (
x)
y_train: The training part of the second sequence (
y)
y_test: The test part of the second sequence (
y)
You probably got different results from what you see here. This is because dataset splitting is random by default. The result differs each time you run the function. However, this often isn’t what you want.
Sometimes, to make your tests reproducible, you need a random split with the same output for each function call. You can do that with the parameter
random_state. The value of
random_state isn’t important—it can be any non-negative integer. You could use an instance of
numpy.random.RandomState instead, but that is a more complex approach.
In the previous example, you used a dataset with twelve observations (rows) and got a training sample with nine rows and a test sample with three rows. That’s because you didn’t specify the desired size of the training and test sets. By default, 25 percent of samples are assigned to the test set. This ratio is generally fine for many applications, but it’s not always what you need.
Typically, you’ll want to define the size of the test (or training) set explicitly, and sometimes you’ll even want to experiment with different values. You can do that with the parameters
train_size or
test_size.
Modify the code so you can choose the size of the test set and get a reproducible result:
>>> x_train, x_test, y_train, y_test = train_test_split( ... x, y, test_size=4, random_state=4 ... ) >>> x_train array([[17, 18], [ 5, 6], [23, 24], [ 1, 2], [ 3, 4], [11, 12], [15, 16], [21, 22]]) >>> x_test array([[ 7, 8], [ 9, 10], [13, 14], [19, 20]]) >>> y_train array([1, 1, 0, 0, 1, 0, 1, 1]) >>> y_test array([0, 1, 0, 0])
With this change, you get a different result from before. Earlier, you had a training set with nine items and test set with three items. Now, thanks to the argument
test_size=4, the training set has eight items and the test set has four items. You’d get the same result with
test_size=0.33 because 33 percent of twelve is approximately four.
There’s one more very important difference between the last two examples: You now get the same result each time you run the function. This is because you’ve fixed the random number generator with
random_state=4.
The figure below shows what’s going on when you call
train_test_split():
The samples of the dataset are shuffled randomly and then split into the training and test sets according to the size you defined.
You can see that
y has six zeros and six ones. However, the test set has three zeros out of four items. If you want to (approximately) keep the proportion of
y values through the training and test sets, then pass
stratify=y. This will enable stratified splitting:
>>> x_train, x_test, y_train, y_test = train_test_split( ... x, y, test_size=0.33, random_state=4, stratify=y ... ) >>> x_train array([[21, 22], [ 1, 2], [15, 16], [13, 14], [17, 18], [19, 20], [23, 24], [ 3, 4]]) >>> x_test array([[11, 12], [ 7, 8], [ 5, 6], [ 9, 10]]) >>> y_train array([1, 0, 1, 0, 1, 0, 0, 1]) >>> y_test array([0, 0, 1, 1])
Now
y_train and
y_test have the same ratio of zeros and ones as the original
y array.
Stratified splits are desirable in some cases, like when you’re classifying an imbalanced dataset, a dataset with a significant difference in the number of samples that belong to distinct classes.
Finally, you can turn off data shuffling and random split with
shuffle=False:
>>> x_train, x_test, y_train, y_test = train_test_split( ... x, y, test_size=0.33, shuffle=False ... ) >>> x_train array([[ 1, 2], [ 3, 4], [ 5, 6], [ 7, 8], [ 9, 10], [11, 12], [13, 14], [15, 16]]) >>> x_test array([[17, 18], [19, 20], [21, 22], [23, 24]]) >>> y_train array([0, 1, 1, 0, 1, 0, 0, 1]) >>> y_test array([1, 0, 1, 0])
Now you have a split in which the first two-thirds of samples in the original
x and
y arrays are assigned to the training set and the last third to the test set. No shuffling. No randomness.
Supervised Machine Learning With
train_test_split()
Now it’s time to see
train_test_split() in action when solving supervised learning problems. You’ll start with a small regression problem that can be solved with linear regression before looking at a bigger problem. You’ll also see that you can use
train_test_split() for classification as well.
Minimalist Example of Linear Regression
In this example, you’ll apply what you’ve learned so far to solve a small regression problem. You’ll learn how to create datasets, split them into training and test subsets, and use them for linear regression.
As always, you’ll start by importing the necessary packages, functions, or classes. You’ll need NumPy,
LinearRegression, and
train_test_split():
>>> import numpy as np >>> from sklearn.linear_model import LinearRegression >>> from sklearn.model_selection import train_test_split
Now that you’ve imported everything you need, you can create two small arrays,
x and
y, to represent the observations and then split them into training and test sets just as you did before:
>>> x = np.arange(20).reshape(-1, 1) >>> y = np.array([5, 12, 11, 19, 30, 29, 23, 40, 51, 54, 74, ... 62, 68, 73, 89, 84, 89, 101, 99, 106]) >>> x array([[ 0], [ 1], [ 2], [ 3], [ 4], [ 5], [ 6], [ 7], [ 8], [ 9], [10], [11], [12], [13], [14], [15], [16], [17], [18], [19]]) >>> y array([ 5, 12, 11, 19, 30, 29, 23, 40, 51, 54, 74, 62, 68, 73, 89, 84, 89, 101, 99, 106]) >>> x_train, x_test, y_train, y_test = train_test_split( ... x, y, test_size=8, random_state=0 ... )
Your dataset has twenty observations, or
x-
y pairs. You specify the argument
test_size=8, so the dataset is divided into a training set with twelve observations and a test set with eight observations.
Now you can use the training set to fit the model:
>>> model = LinearRegression().fit(x_train, y_train) >>> model.intercept_ 3.1617195496417523 >>> model.coef_ array([5.53121801])
LinearRegression creates the object that represents the model, while
.fit() trains, or fits, the model and returns it. With linear regression, fitting the model means determining the best intercept (
model.intercept_) and slope (
model.coef_) values of the regression line.
Although you can use
x_train and
y_train to check the goodness of fit, this isn’t a best practice. An unbiased estimation of the predictive performance of your model is based on test data:
>>> model.score(x_train, y_train) 0.9868175024574795 >>> model.score(x_test, y_test) 0.9465896927715023
.score() returns the coefficient of determination, or R², for the data passed. Its maximum is
1. The higher the R² value, the better the fit. In this case, the training data yields a slightly higher coefficient. However, the R² calculated with test data is an unbiased measure of your model’s prediction performance.
This is how it looks on a graph:
The green dots represent the
x-
y pairs used for training. The black line, called the estimated regression line, is defined by the results of model fitting: the intercept and the slope. So, it reflects the positions of the green dots only.
The white dots represent the test set. You use them to estimate the performance of the model (regression line) with data not used for training.
Regression Example
Now you’re ready to split a larger dataset to solve a regression problem. You’ll use a well-known Boston house prices dataset, which is included in
sklearn. This dataset has 506 samples, 13 input variables, and the house values as the output. You can retrieve it with
load_boston().
First, import
train_test_split() and
load_boston():
>>> from sklearn.datasets import load_boston >>> from sklearn.model_selection import train_test_split
Now that you have both functions imported, you can get the data to work with:
>>> x, y = load_boston(return_X_y=True)
As you can see,
load_boston() with the argument
return_X_y=True returns a tuple with two NumPy arrays:
- A two-dimensional array with the inputs
- A one-dimensional array with the outputs
The next step is to split the data the same way as before:
>>> x_train, x_test, y_train, y_test = train_test_split( ... x, y, test_size=0.4, random_state=0 ... )
Now you have the training and test sets. The training data is contained in
x_train and
y_train, while the data for testing is in
x_test and
y_test.
When you work with larger datasets, it’s usually more convenient to pass the training or test size as a ratio.
test_size=0.4 means that approximately 40 percent of samples will be assigned to the test data, and the remaining 60 percent will be assigned to the training data.
Finally, you can use the training set (
x_train and
y_train) to fit the model and the test set (
x_test and
y_test) for an unbiased evaluation of the model. In this example, you’ll apply three well-known regression algorithms to create models that fit your data:
- Linear regression with
LinearRegression()
- Gradient boosting with
GradientBoostingRegressor()
- Random forest with
RandomForestRegressor()
The process is pretty much the same as with the previous example:
- Import the classes you need.
- Create model instances using these classes.
- Fit the model instances with
.fit()using the training set.
- Evaluate the model with
.score()using the test set.
Here’s the code that follows the steps described above for all three regression algorithms:
>>> from sklearn.linear_model import LinearRegression >>> model = LinearRegression().fit(x_train, y_train) >>> model.score(x_train, y_train) 0.7668160223286261 >>> model.score(x_test, y_test) 0.6882607142538016 >>> from sklearn.ensemble import GradientBoostingRegressor >>> model = GradientBoostingRegressor(random_state=0).fit(x_train, y_train) >>> model.score(x_train, y_train) 0.9859065238883613 >>> model.score(x_test, y_test) 0.8530127436482149 >>> from sklearn.ensemble import RandomForestRegressor >>> model = RandomForestRegressor(random_state=0).fit(x_train, y_train) >>> model.score(x_train, y_train) 0.9811695664860354 >>> model.score(x_test, y_test) 0.8325867908704008
You’ve used your training and test datasets to fit three models and evaluate their performance. The measure of accuracy obtained with
.score() is the coefficient of determination. It can be calculated with either the training or test set. However, as you already learned, the score obtained with the test set represents an unbiased estimation of performance.
As mentioned in the documentation, you can provide optional arguments to
LinearRegression(),
GradientBoostingRegressor(), and
RandomForestRegressor().
GradientBoostingRegressor() and
RandomForestRegressor() use the
random_state parameter for the same reason that
train_test_split() does: to deal with randomness in the algorithms and ensure reproducibility.
For some methods, you may also need feature scaling. In such cases, you should fit the scalers with training data and use them to transform test data.
Classification Example
You can use
train_test_split() to solve classification problems the same way you do for regression analysis. In machine learning, classification problems involve training a model to apply labels to, or classify, the input values and sort your dataset into categories.
In the tutorial Logistic Regression in Python, you’ll find an example of a handwriting recognition task. The example provides another demonstration of splitting data into training and test sets to avoid bias in the evaluation process.
Other Validation Functionalities
The package
sklearn.model_selection offers a lot of functionalities related to model selection and validation, including the following:
- Cross-validation
- Learning curves
- Hyperparameter tuning
Cross-validation is a set of techniques that combine the measures of prediction performance to get more accurate model estimations.
One of the widely used cross-validation methods is k-fold cross-validation. In it, you divide your dataset into k (often five or ten) subsets, or folds, of equal size and then perform the training and test procedures k times. Each time, you use a different fold as the test set and all the remaining folds as the training set. This provides k measures of predictive performance, and you can then analyze their mean and standard deviation.
You can implement cross-validation with
KFold,
StratifiedKFold,
LeaveOneOut, and a few other classes and functions from
sklearn.model_selection.
A learning curve, sometimes called a training curve, shows how the prediction score of training and validation sets depends on the number of training samples. You can use
learning_curve() to get this dependency, which can help you find the optimal size of the training set, choose hyperparameters, compare models, and so on.
Hyperparameter tuning, also called hyperparameter optimization, is the process of determining the best set of hyperparameters to define your machine learning model.
sklearn.model_selection provides you with several options for this purpose, including
GridSearchCV,
RandomizedSearchCV,
validation_curve(), and others. Splitting your data is also important for hyperparameter tuning.
Conclusion
You now know why and how to use
train_test_split() from
sklearn. You’ve learned that, for an unbiased estimation of the predictive performance of machine learning models, you should use data that hasn’t been used for model fitting. That’s why you need to split your dataset into training, test, and in some cases, validation subsets.
In this tutorial, you’ve learned how to:
- Use
train_test_split()to get training and test sets
- Control the size of the subsets with the parameters
train_sizeand
test_size
- Determine the randomness of your splits with the
random_stateparameter
- Obtain stratified splits with the
stratifyparameter
- Use
train_test_split()as a part of supervised machine learning procedures
You’ve also seen that the
sklearn.model_selection module offers several other tools for model validation, including cross-validation, learning curves, and hyperparameter tuning.
If you have questions or comments, then please put them in the comment section below. | https://realpython.com/train-test-split-python-data/ | CC-MAIN-2021-17 | refinedweb | 3,472 | 54.93 |
AtoZ CSS Quick Tip: Benefits of rem and em Values
This article is a part of our AtoZ CSS Series. You can find other entries to the series here.
You can view the full transcript and screencast for its corresponding video about the
:required pseudo class about using the
rem and
em values.
R is for
rem and
em
In the original screencast video we learned all about the
:required pseudo class which is useful for styling forms with fields that must be filled in.
As much as forms, validation, and styling state are big topics, there isn’t too much we didn’t cover on the topic of
:required the first time around. So instead, let’s look at a couple of quick tips for using the
rem unit of measurement. But first, let’s look at another type of relative unit: the
em.
The Pros and Cons of using
em
When working on a responsive project it’s more flexible to use relative units like
em for sizing text and spacing in and around elements rather than pixels. This is because this unit is relative to the font size of its parent element, allowing an element’s size, spacing and text content to grow proportionally as the
font-size of parent elements change.
Using these relative units enables you to build a system of proportions where changing values of
font-size on one element has a cascading effect on the child elements within. A system of proportions is a good thing, but this behavior of
em does come with a downside.
Take the following snippet of HTML:
<ul> <li>lorem ipsum</li> <li>dolor sit <ol> <li>lorem ipsum</li> <li>lorem ipsum</li> <li>lorem ipsum</li> <li>lorem ipsum</li> </ol> </li> </ul>
This nested list isn’t the most common thing in the world but could likely appear in a page of terms and conditions or some other kind of formal document.
If we wanted to make the list items stand out, we could set their
font-size to be 1.5 times the size of the base size of
16px.
li { font-size: 1.5em; /* 24px/16px */ }
But this will cause an issue with the nested
li as they will be 1.5 times the size of their parent too. The nested items will be 1.5 times
24px rather than 1.5 times
16px. The result is that any nested list items will grow exponentially with each level of nesting. This is likely not what the designer intended!
A similar problem occurs with nested elements and
em values of less than 1. In this case, any nested items would keep getting incrementally smaller with each level of nesting.
So what can we do instead?
Use
rem for setting text size
Instead of running the risk of ever-increasing or decreasing
font-size we can use an alternative unit.
We could use pixels but relative units are more flexible in responsive projects as mentioned earlier. Instead, we can use the
rem unit as this is always calculated based on the
font-size of the root element which is normally the
html element in the case of a website or web application. In a .svg or .xml document the root element might be different but those types of documents aren’t our concern here.
If we use
rem for setting
font-size it doesn’t mean the humble
em should never get a look in. I tend to use
em for setting
padding within elements so that the spacing is always relative to the size of the text.
Use Sass to help with
rem browser support
The
rem unit is only supported from IE9 and above. If you need to support IE8 (or below) then you can use a JS polyfill or provide a
px fallback in the following way:
li { font-size: 24px; font-size: 1.5rem; }
If you’re using Sass you could create a mixin and a function for calculating the desired size in
rem and providing the fallback automatically.
@function rem-calc($font-size, $base-font-size: 16) { @return ($size/$base-font-size) *1rem; } @mixin rem-with-px-fallback($size, $property:font-size) { #{$property}: $size * 1px; #{$property}: rem-calc($size); } li { @include rem-with-px-fallback(24); }
There you have it. A couple of quick tips for using
rem. If you’re not using them in your current projects, I’d definitely recommend giving them a try. | https://www.sitepoint.com/atoz-css-quick-tip-rem-em-values/?utm_source=sitepoint&utm_medium=articletile&utm_campaign=likes&utm_term=html-css | CC-MAIN-2019-13 | refinedweb | 746 | 69.62 |
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
How to send context from models.Model to osv.osv ?
I want to send context when call method in class osv.osv from models.Model (new api) like this :
In models.Model :
ctx = {'key' : 'value'}
self.picking_id.do_transfer(context=ctx)
In osv.osv :
@api.cr_uid_ids_context
def do_transfer(self, cr, uid, picking_ids, context=None):
print "context.get('key')================>>>>>>>>>>>>>>>>",context.get('key')
return True
And result from print is :
context.get('key')================>>>>>>>>>>>>>>>> None
Why the result is None ?
1. there is no difference between osv and Model as osv = Model, osv is keept for backward compatibility
2. in v8 there is function with_context() for such cases, use it:
self.picking_id.with_context(key='value').do_transfer()
It works, thank you Temur
About This Community
Odoo Training Center
Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now | https://www.odoo.com/forum/help-1/question/how-to-send-context-from-models-model-to-osv-osv-85480 | CC-MAIN-2017-26 | refinedweb | 172 | 54.08 |
EDIT: I am aware that a question with similar task was already asked in SO but I'm interested to find out the problem in this specific piece of code. I am also aware that this problem can be solved without using recursion.
The task is to write a program which will find (and print) the longest sub-string in which the letters occur in alphabetical order. If more than 1 equally long sequences were found, then the first one should be printed. For example, the output for a string
abczabcd
abcz
s = 'hixwluvyhzzzdgd'
hix
luvy
s = 'eseoojlsuai'
eoo
jlsu
s = 'drurotsxjehlwfwgygygxz'
dru
ehlw
pos = 0
maxLen = 0
startPos = 0
endPos = 0
def last_pos(pos):
if pos < (len(s) - 1):
if s[pos + 1] >= s[pos]:
pos += 1
if pos == len(s)-1:
return len(s)
else:
return last_pos(pos)
return pos
for i in range(len(s)):
if last_pos(i+1) != None:
diff = last_pos(i) - i
if diff - 1 > maxLen:
maxLen = diff
startPos = i
endPos = startPos + diff
print s[startPos:endPos+1]
There are many things to improve in your code but making minimum changes so as to make it work. The problem is you should have
if last_pos(i) != None: in your
for loop (
i instead of
i+1) and you should compare
diff (not
diff - 1) against
maxLen. Please read other answers to learn how to do it better.
for i in range(len(s)): if last_pos(i) != None: diff = last_pos(i) - i + 1 if diff > maxLen: maxLen = diff startPos = i endPos = startPos + diff - 1 | https://codedump.io/share/dk9uDM335s0w/1/finding-longest-substring-in-alphabetical-order | CC-MAIN-2017-26 | refinedweb | 258 | 75.24 |
TODO for coalesced symbols: - Should external relocation entries for defined coalesced symbols only be created with -dynamic and not -static? TODO: - Add MacOS line termination \r . Known bugs: - The assembly line: bl ""foo"" causes the symbol name "" to be used. - cmpwi seems to be the same as cmpi for PowerPC. - Can't optimize because of compiler bug #50416 prevents line 235 in symbols.c from working (currently has #pragma CC_OPT_OFF and #pragma CC_OPT_RESUME around that routine). - The m88k instruction "tb0 0,r0,undef" trashes the instruction because of the undefined. - 68k does not handle packed immediates (the tables have been changed to dis- allow this) because there is no routine to convert a flonum to a 68k packed form. - The logical operators && and || are not implemented. Bugs to be fixed: - The m68k "jmp @(_foo)" is not legal and needs to be flaged. - The PowerPC extended branch mnemonic like beqlrl need to take a CRFONLY or a number shifted over by 2 (like the fcmpu instruction). Changes for the 5.25 release (the cctools-667.1_UNDF) || || (symbolP->sy_type & N_TYPE) == N_UNDF) && (!is_local_symbol(symbolP) || ((symbolP->sy_type & N_TYPE) == N_SECT && is_section_cstring_literals(symbolP->sy_other)) ) ) {. Changes for the 5.16 release (the cctools-523 release): - Fixed a bug in the handing of PPC_RELOC_LO14 relocs in md_number_to_imm() in ppc.c which needed to mask in the low byte of a PPC_RELOC_LO14 instead of assigning it. Radar bug #3687246 Changes for the 5.16 release (the cctools-522 release): - Fix the positioning of the INTERIM_PPC64 ifdef in cons() so it doesn't fall through and make bogus complaints. Radar bug #3682374. Changes for the 5.16 release (the cctools-520 release): - Added the cmovnz instruction to the i386 assembler to be the same as cmovne. Radar bug #3345900. - Changed the PowerPC assembler to not mark the fsqrte and fres as optional. Radar bug #3519365. - Changed the i386 assembler to not mark the following instructions as optional: prefetcht0, prefetcht1, prefetcht2 and prefetchnta. Radar bug #3500323. - Changed the assembler to not create scattered relocation entries when the r_address field would overflow its 24 bit field. This can still cause problems if the offset to the symbol reaches outside of the block and ld(1) is asked to order this section. The change is in fix_to_relocation_entries() in write_objects.c to test if the r_address field has any of the 0xff000000 bits set and then does not create a scattered relocation entry. Radar bug #3604972. - Changed the PowerPC Data cache block touch with hint instruction, dcbt with the TH third argument as to not be optional or 64-bit only. So it will now assemble without the -force_cpusubtype_ALL or -arch ppc970. The change was in ppc-opcode.h to change this line: { 0x7c00022c, "dcbt", {{16,5,G0REG}, {11,5,GREG}, {21,4,NUM}}, IMPL64|OPTIONAL }, to this: { 0x7c00022c, "dcbt", {{16,5,G0REG}, {11,5,GREG}, {21,4,NUM}}, }, Radar bug #3468847. - Added the .machine directive which takes the same arguments as the command line -arch <arch_name> flag. The routine s_machine() was added to read.c and the local variable program in main() in as.c was made global. Radar bug 3492132. - Fixed a bug in pseudo_set() in read.c that did not correctly preserve the private extern bit (N_PEXT). Radar bug #3660818. Changes for the 5.16 release (the cctools-515 release): - Added the installGASsrc and fromGASsrc Makefile targets to install and build just the PowerPC assembler from the GAS sources. Radar bug #3657295. Changes for the 5.16 release (the cctools-509 release): - Added #ifdef INTERIM_PPC64 to the source changes to support the interim ppc64 file format. And added -DINTERIM_PPC64 to the Makefile for the appc64_build target and removed -DPPC64. - Changed main() in as.c to handle "-arch ppc64" only when INTERIM_PPC64 is defined. And to handle the ppc arch flags only when PPC is defined and INTERIM_PPC64 is not defined. - #ifdef INTERIM_PPC64 the changed to md_number_to_chars() and md_number_to_imm() second argument 'val' in mh.h from long to long long . - Backed out the changed of the definition of md_number_to_chars() andi md_number_to_imm() in the machine dependent files i386.c, hppa.c, i860.c, m68k.c, m88k,c and sparc.c for the type of the argument 'val' from long long back to long. - Backed out the change to md_cputype in md.h and added back the const qualifer. Backed out this change in the machine dependent files ppc.c, i386.c, hppa.c, i860.c, m68k.c, m88k,c and sparc.c . - Changed ppc.c to set md_cpusubtype to CPU_SUBTYPE_POWERPC64_ALL when #ifdef INTERIM_PPC64. Changes for the 5.16 release (the cctools-501 release): - Added the variable subsections_via_symbols as a boolean value to as.[ch]. - Added the .subsections_via_symbols directive in read.c with the new routine s_subsections_via_symbols() which sets the variable subsections_via_symbols to TRUE. - Changed write_object() in write_object.c to set the MH_SUBSECTIONS_VIA_SYMBOLS bit in the mach_header if variable subsections_via_symbols is TRUE. - Fixed a bug in s_comm() in read.c that caused it to generate a re-definition symbol error if a .no_dead_strip directive is seen before a .comm directive. Changes for the 5.16 release (the cctools-500 release): - Added the no_dead_strip attribute and live_support to attribute_names[] in read.c that sets S_ATTR_NO_DEAD_STRIP and S_ATTR_LIVE_SUPPORT. Also updated the logic in colon() in symbols.s to allow N_NO_DEAD_STRIP in the n_desc field when checking for a weak symbol. Radar bug #2284500. - Added S_ATTR_NO_DEAD_STRIP to all the builtin Objective-C sections in the __OBJC segment to builtin_sections[] in read.c . Radar bug #2284500. - Added setting the N_NO_DEAD_STRIP bit for the .reference and .lazy_reference directives. The changes were in read.c in s_reference() and s_lazy_reference(). Radar bug #2284500. - Added setting the N_NO_DEAD_STRIP bit for .set and assignments for absolute symbols. The changes were in read.c in s_equals() and s_set(). Radar bug #2284500. - Added the .no_dead_strip directive, which takes a symbol name and sets the N_NO_DEAD_STRIP bit. Radar bug #2284500. Changes for the 5.16 release (the cctools-499.3 release): - Fix i386 hosting bug with longs by using ~0ULL in cons(). Radar bug #3591543. Changes for the 5.16 release (the cctools-499.1 release): - Added support for the ppc64 architecture, using the interim file format. A new assembler is built with the cpp macro PPC64 and the cpp macro PPC defined. Radar bug #3562232. - Added the appc64_build target the Makefile which compiles the code with -DPPC64 and installs it in $(DSTROOT)$(LIBDIR)/ppc64/as . - Changed main() in as.c to handle "-arch ppc64" for PPC64. - Added support for the .quad directive for relocatable 8-byte quantities. - Added the following entry to pseudo_table[] in read.c: { "quad", cons, 8 }, - In cons() in read.c made the following changes: - The types of the parameters mask, unmask, get, and use changed from long to long long. - The test of (nbytes >= (int)sizeof(long int)) changed to (nbytes >= (int)sizeof(long long) . - In the case for SEG_BIG code was added to handle bignums small enough to fit in a long long then passed to md_number_to_chars(). - In fix_to_relocation_entries() in write_object.c code ifdef'ed PPC64 was added for the fx_size value of 8 which sets the r_length to 3 to indicate a 64-bit item to be relocated. - Changed md_number_to_chars() and md_number_to_imm() second argument 'val' in mh.h from long to long long. - Changed the definition of md_number_to_chars() and md_number_to_imm() in the machine dependent files ppc.c, i386.c, hppa.c, i860.c, m68k.c, m88k,c and sparc.c . And for md_number_to_chars() and md_number_to_imm() in ppc.c added a case when nbytes is 8. Also the as_warn() formatting string was chaged from using %ld to %lld when using 'val'. - Changed the initialization of md_cputype in ppc.c to be CPU_TYPE_POWERPC64 when PPC64 is defined. - In md_assemble() in ppc.c ifndef'ed PPC64 out the check for 64-bit compare instructions without the -force_cpusubtype_ALL option. - In md_assemble() in ppc.c changed the check for instructions marked with IMPL64 to be flagged only when md_cputype == CPU_TYPE_POWERPC. - In layout_indirect_symbols() in write_object.c the stride for symbol pointers was changed to be conditional on CPU_TYPE_POWERPC64 where it is 8 otherwise 4. Changes for the 5.14 release (the cctools-498 release): - Added the PowerPC pseudo-instruction 'jmp' as the non-linking form of 'jbsr'. The following entry was added to ppc-opcode.h : { 0x48000000, "jmp", {{0,0,JBSR}, {2,24,PCREL}} }, Radar bug #3458928. Changes for the 5.14 release (the cctools-497 release): - Added the --gstabs option to be like -g and to treat the --gdwarf2 option as an unknown option. Radar bug #3411071. Changes for the 5.14 release (the cctools-496 release): - Added the file COPYING from FSF. Radar bug #3443583. Changes for the 5.13 release (the cctools-487 release): - Fixed a bug that should have allow the 64-bit compare instructions and other optional instructions when -arch ppc970 was used. The fix was to add the constant CPU970 in ppc-opcode.h and mark the optional instructions that the 970 implements. The other part of this change was in md_assemble() in ppc.c where it checks for 64-bit compare instructions and for optional instructions. The error messages were also updated to include the instruction name. Radar bug #3364523. Changes for the 5.12 release (the cctools-475 release): - Fixed ppc-check.c to not produce illegal operands. Changes for the 5.12 release (the cctools-472 release): - Fixed a bug in the movq SSE2 instruction opcodes. The opcodes for register to memory and memory to register were switch. The old ones were: {"movq", 2, 0x660fd6, _, RR|Modrm, {xmm|m64, xmm, 0},"O"}, {"movq", 2, 0xf30f7e, _, RR|Modrm, {xmm, xmm|m64, 0},"O"}, The corrected ones were: {"movq", 2, 0x660fd6, _, RR|Modrm, {xmm, xmm|m64, 0},"O"}, {"movq", 2, 0xf30f7e, _, RR|Modrm, {xmm|m64, xmm, 0},"O"}, Radar bug #3250086. Changes for the 5.12 release (the cctools-469 release): - Fixed the handling of the .p2align directive to correctly allow the fill value (the second argument) to be omitted by using two commas after the required alignment. This is used by the compiler to get the alignment to be filled with no-op instructions when appropriate. The change in in s_align() in read.c . Radar bug #3227897. Changes for the 5.12 release (the cctools-468 release): - Changed the setting of the r_length field of PPC_RELOC_BR14_predicted to 3 instead of 4 to tell the static link editor this is a Y-bit predicted branch. Radar bug #3223045. - Changed PPC_RELOC_BR14_predicted in ppc.c to be PPC_RELOC_BR14 or'ed with a bit out of the 4-bit range of r_type in a relocation entry. So when it later gets assigned to r_type it has the right value and gets carried thru to the code that creates the relocation entries. - Changed a few error messages in md_number_to_imm() in ppc.c and one in fix_to_relocation_entries() in write_object.c dealing with fix stuctures to set the the layout_file and layout_line variables from the file and line fields of the fix stucture to produce error messages that contain the correct file and line number. - Changed fix_to_relocation_entries() in write_object.c to set the r_length to 3 for PPC_RELOC_BR14_predicted relocation types. Changes for the 5.12 release (the cctools-467 release): - Changed the handing of mismatching section attributes so they do not cause an error but are simply or in with the previous attributes. The change is in section_new() in sections.c . Radar bug #3218644. - Added support for the PPC_RELOC_LO14_SECTDIFF relocation type used with double word load/store instructions. Radar bug #3218027. - Changed the error message in fix_to_relocation_entries() in write_object.c where it gets a fix struct with a non-zero fx_subsy and has a relocation entry it does not have a matching SECTDIFF type. Now the error causes the file and line number to be printed and includes "Internal error". - Changed fix_to_relocation_entries() in write_object.c and added a if case for PPC_RELOC_LO14 to set the sectdiff to PPC_RELOC_LO14_SECTDIFF. And later in there set sri.r_address the same as the PPC_RELOC_LO16_SECTDIFF case with the high part of the address. Changes for the 5.12 release (the cctools-465 release): - Added the following directives: .p2align[wl] align, [fill[, max_bytes_to_fill]] as well as now allow max_bytes_to_fill with: .align align, [fill[, max_bytes_to_fill]] .align32 align, [fill[, max_bytes_to_fill]] Radar bug #3207027. - Added entries for .p2align, p2alignw and p2lalignl in the array pseudo_table[] in read.c . - Changed s_align() in read.c to parse out the option max_bytes_to_fill argument. Then passed the max_bytes_to_fill as a new fourth argument to frag_align(). And not set the section's alignment if max_bytes_to_fill is non-zero. - Changed the call to frag_align() in s_even() in m68k.c and s_align() in i860.c to pass 0 as the new fourth argument to frag_align(). - Changed frag_align() in frags.c and frags.h to take a new fourth argument max_bytes_to_fill. This is saved in the frag struct field fr_subtype. - Changed relax_section() in layout.c to not do the alignment if it takes more than max_bytes_to_fill bytes (if non-zero). If max_bytes_to_fill is non-zero and the alignment is done then the section's alignment is set. - Fixed a bug in the .fill directive: .fill repeat_expression , fill_size , fill_expression for a fill_size of anything but 1 was broken. Radar bug #3201031. - Also fixed two error messages in s_fill() in read.c to correctly indicate which argument (the repeat_expression or fill_size) was in error. - Changed write_object() in write_object.c where it writes out the section contents of the variable repeated part of a frag. The fill_size is stored in an rs_fill frag in the fr_var field, the repeat_expression is in the fr_offset field and the fill_expression are in the bytes at fr_literal. - The above change to write_object() in write_object.c uncovered bugs with how rs_align frags were created in frag_align() in frags.c and handled in layout_addresses() in layout.c due to the partial bytes needed before the fill_expression of the fill_size. - frag_align() in frags.c now calls frag_var() with max_chars parameter large enough to hold the fill_expression of fill_size plus the maximum number of partial bytes that may be needed to be zeroed before the fill. - The second argument to frag_align() was change to a char * from an int so the the caller can properly convert the fill expression to the target byte sex. This change was maded in both frags.c and frags.h . - s_align() in read.c, s_even() in m68k.c and s_align() in i860.c were change to call md_number_to_chars() on the fill expression and convert it to a char (or array of chars). A pointer to this char is then passed to frag_align(). Other changes were made to add comments and change variable names to make things more understandable to the reader. Changes for the 5.12 release (the cctools-464 release): - Made changes to build cleanly with gcc3.3 - Removed -Wno-precomp from the Makefile - Fixed warnings for "comparison between signed and unsigned" in driver.c, expr.c, frags.c, layout.c, read.c, write_object.c, m68k.c, m88k.c, i386.c i860.c, ppc.c, hppa.c and sparc.c . Changes for the 5.12 release (the cctools-461 release): - Changed the assembler to support the code gen for fix-n-continue where the compiler generates indirection for static data references. The assembly for a non-lazy pointer to a local symbol looks like this: .non_lazy_symbol_pointer L_i$non_lazy_ptr: .indirect_symbol _i .long _i With the difference between this and a non-lazy pointer to a global symbol being the ".long 0" for a global symbol. The initialization allows the value of the symbol to be set into the pointer. Code was added in fixup_section() in layout.c to not cause the relocation entry to be created. Code was also added in write_object() in write_object.c that changes the indirect symbol table entry to INDIRECT_SYMBOL_LOCAL when the symbol is local. This is what the static and dynamic linkers expect and will then cause the pointer to be correctly relocated. The routine is_section_non_lazy_symbol_pointers() was added in sections.[ch] for fixup_section() to use. Radar bug #3182683. Changes for the 5.12 release (the cctools-458 release): - Changed the i386 pause instruction so that it is not marked Optional (in i386-opcode.h) so it does not require the -force_cpusubtype_ALL option. Radar bug #3173226. Changes for the 5.12 release (the cctools-454 release): - Fixed a problem with .include not finding the included file if the included file is in the same directory as the source file but the source file is not assembled from the directory containing the source file. Radar bug #3139454. - Added the varable input_dir to input-scrub.c and set its value to the directory part of the file name in input_scrub_new_file(). - Changed read_an_include_file() to check if the include file does not start with a '/' and if so look for the file with the input_dir first. - Changed input_scrub_next_buffer() in input-scrub.c where it previously caused a error message for "Source line too long." to now reallocate the buffer and adjust things to read more from the input. Also added saving and restoring buffer_length in read_an_include_file(). Radar bug #3138898. Changes for the 5.12 release (the cctools-450 release): - Fixed the i386 opcode entry for the movd opcode which takes xmm,r/m32 operands which had the encoding switched for the reg and r_m bits of the ModR/M byte. It was: {"movd", 2, 0x660f7e, _, RR|Modrm, {xmm, r32|m32, 0},"O"}, and was changed to: {"movd", 2, 0x660f7e, _, Modrm, {xmm, r32|m32, 0},"O"}, (Radar bug #3117280). - Fixed a bug in is_end_section_address() to sections.c so that it alwasy returns FALSE when the section is a zerofill section. Radar bug #3123561. - Fixed the i386 opcode entry for the mov opcode which took segment registers to take a Reg not just a Reg16. It was: { "mov", 2, 0x8c, _, D|Modrm, {SReg3|SReg2, Reg16|Mem, 0} }, and was changed to: { "mov", 2, 0x8c, _, D|Modrm, {SReg3|SReg2, Reg|Mem, 0} }, (Radar bug #3114720). Changes for the 5.12 release (the cctools-449 release): - Removed the specific cpusubtype markings on all i386 instructions. Also added -arch i686 to as.c to be the same as -arch pentpro. Radar bug #3111977. - Changed the error message for non-relocatable subtraction expression so that it would indicate the source file and line and which of the symbols were undefined (Radar bug #2254439). - Changed the fix struct in fixes.h to have a file and line fields for the file name and line number. - Added as_file_and_line() to input-scrub.c to return the current file name and line number. - Added the variables layout_file and layout_line to input-scrub.[ch] and changed as_warn() to use them if no input_file_is_open and layout_line is not zero. - Changed fix_new() in fixes.c to call as_file_and_line() and fill in the file and line fields of the fix struct it creates. - Changed the error message generated in fixup_section() in layout.c at the point it has something that it can't generate a relocation entry when there is a subtraction expression. This also sets layout_file and layout_line from the file and line fields in the fix struct before calling as_warn(). - Added an error message for a relocatable subtraction expression when one of the symbols is at the end of a section (Radar bug #2259427). - Added is_end_section_address() to section.[ch] which returns a boolean value if the address passed for the section number passed is at the end of the section. - Added code in fixup_section() in layout.c when processing a section difference to check the symbol address using is_end_section_address() and generating an error message is either address is at the end of the section. Changes for the 5.12 release (the cctools-448 release): - Fixed the i386 opcode entry for the cmpxchg8b opcode which had None, _, in the extension_opcode field which should have been a 1. It was: {"cmpxchg8b", 1, 0x0fc7, _, Modrm, {Mem, 0, 0}, "5" }, and was changed to: {"cmpxchg8b", 1, 0x0fc7, 1, Modrm, {Mem, 0, 0}, "5" }, (Radar bug #3099684). Changes for the 5.12 release (the cctools-447 release): - Changed s_align() in read.c to put out nops if the section has machine instructions. Also changed frag_align() in frags.c to allow a size of 2 for m68k nop padding. Did not change the i860 stuff as it pads just the text section (Radar bug #3073763). Changes for the 5.11 release (the cctools-446 release): - Changed the i386 assembler opcode table entry for cmpxchg8b (in i386-opcode.h) from taking just a Mem32: {"cmpxchg8b", 1, 0x0fc7, _, Modrm, {Mem32, 0, 0}, "5" }, to taking a Mem: {"cmpxchg8b", 1, 0x0fc7, _, Modrm, {Mem, 0, 0}, "5" }, (Radar bug #3089041). Changes for the 5.11 release (the cctools-442 release): - Moved the i386 assembler to not be installed as a local assembler. Radar bug #3074138. - Added the missing i386 opcode table entry for the an fxch with no operands: {"fxch", 0, 0xd9c9, _, ShortForm, {0, 0, 0} }, /* exchange %st0, %st1 */ Radar bug #3073760. Changes for the 5.11 release (the cctools-440 release): - Fixed the warnings about extra tokens at end of #endif directive in m68k-check.c, m68k.c, m88k.c, sparc.c, m68k-opcode.h, make_defs.h, atof-ieee.c and m88k-opcode.h (Radar bug #3072042). Changes for the 5.11 release (the cctools-439 release): - Picked up the changes for the .align32 directive. The changes were in frags.c and frags.h in frag_align(), read.c in s_align() and the pseudo_table[] array, write_object.c in write_object(). And calls to frag_align() in m68k.c and i860.c . Radar bug #2680692. - Changed all the SSE2 instructions an marked them as "optional" so that the -force_cpusubtype_ALL must be used. Radar bug #2972486. - Changed back way instructions such fcmovb are marked so they were marked i686 instructions as in the fix to Radar bug #2928507 (this change was lost in the changes to cctools-437 in intergrating the partial fixes to Radar bug #2972486). - Updated i386-check.c to allow it be used with otool's disassembler to check opcodes. Cases for RegMM and RegXMM were added as well as tables for each. Radar bug #2972486. - Because of the added i386 opcode "fildll" to be the same as "fildq" and "fistpll" to be the same as "fistpq" (done for Radar bug #2909568) code in i386-check.c was added to not add the 'l' suffix to "fildl" and "fistpl". - Added built in macros to the i386 assembler for the two-operand SSE2 compare pseudo-ops in i386.c and loaded them in md_begin(). Radar bug #2972486. - To allow a literal '$' to be used in a macro, expand_macro() in read.c was changed to output a single '$' when the macro contains two "$$" in a row. Radar bug #2972486. - Fixed the bugs in the following instruction opcodes: {"sysenter", 0, 0x0f34, _, NoModrm, {0, 0, 0}}, {"sysexit", 0, 0x0f35, _, NoModrm, {0, 0, 0}}, where originally they had Modrm which should have been NoModrm as these instructions have no operands. This cause these instructions to be assembled with a trailing 0x00 byte. Radar bug #2972486. - Fixed a bug in the following SSE2 instruction opcode {"pinsrw", 3, 0x0fc4, _, RR|Modrm, {Imm8, r32|m16, mm}}, where the "mm" was "0" which caused the following two legal instructions to not assemble at all: pinsrw $0x1,(%ecx),%mm1 pinsrw $0x2,%edx,%mm2 Radar bug #2972486. - Fixed a bug in the movq SSE2 instruction opcodes. The opcodes for register to memory and memory to register were switch. The old ones were: {"movq", 2, 0x660fd6, _, RR|Modrm, {xmm, xmm|m64, 0}}, {"movq", 2, 0xf30f7e, _, RR|Modrm, {xmm|m64, xmm, 0}}, The corrected ones were: {"movq", 2, 0x660fd6, _, RR|Modrm, {xmm|m64, xmm, 0}}, {"movq", 2, 0xf30f7e, _, RR|Modrm, {xmm, xmm|m64, 0}}, Which made these instructions: movq 0x93939393(%eax),%xmm3 movq %xmm4,0x94949494(%eax) assemble as: movq %xmm3,0x93939393(%eax) movq 0x94949494(%eax),%xmm4 Radar bug #2972486. - Fixed a bug in the following SSE2 instruction opcode: {"pcmpeqw", 2, 0x660f75, _, RR|Modrm, {xmm|m128, xmm, 0}}, the trailing 75 was incorrectly a 74 making the following instruction: pcmpeqw 0x90909090(%eax),%xmm2 assemble as ('b'-byte not 'w'-word): pcmpeqb 0x90909090(%eax),%xmm2 Radar bug #2972486. - Fixed a bug in the following SSE2 instruction opcode: {"pcmpgtw", 2, 0x660f65, _, RR|Modrm, {xmm|m128, xmm, 0}}, the trailing 65 was incorrectly a 64 making the following instruction: pcmpgtw 0x90909090(%eax),%xmm2 assemble as ('b'-byte not 'w'-word): pcmpgtb 0x90909090(%eax),%xmm2 Radar bug #2972486. Changes for the 5.11 release (the cctools-437 release): - Picked up the changes to assembler the i386 SSE2/SSE/MMX instructions. This is not yet complete. The changes include changing the way instructions such fcmovb should are marked. They were marked i686 instructions as in the fix to Radar bug #2928507 but now are unmarked. Also the pseudo-ops like cmpeqps are not yet supported. Radar bug #2972486. - Fixed a bug in fix_to_relocation_entries() in write_object.c where it sets the "true target address" into the r_address field of the PAIR relocation entry for a PPC_RELOC_JBSR. In the case the symbol is not undefined then r_address should be set to the absolute address of the "true target address". The bug was the code was subtracting sect_addr which was the section address of the section that the relocation entry is in (nothing to do with the target of the relocation which is fixP->fx_value). The change makes the undefined symbol case and the defined symbol case the same as both are storing fixP->fx_value in the "other_part" of the PAIR relocation entry. For an external relocation entry this is the offset from the symbol and for a local relocation entry this the "true target address". The other change is to use a scattered relocation entry for the PAIR if it is a local relocation entry and the "true target address" has the high bit set or if it is an external relocation entry and the offset has the high bit set. As a non-scattered relocation entry limits the "other_part" to 31 bits when stored in the r_address field. Radar bug #3046962. Changes for the 5.10 release (the cctools-422 release): - Added some pentium pro instructions. Radar bug #2928507. - Added the strip_static_syms section attribute. Radar bug #2945659. Changes for the 5.10 release (the cctools-418 release): - Changed s_section() in read.c to not cause an error if a section attribute is used with out -dynamic. Radar bug #2929120. Changes for the 5.10 release (the cctools-416 release): - Added the .weak_definition directive to read.c in the routine s_weak_definition(). An error check was also added in colon() of symbols.c, and error checks were added to layout_symbols() in write_object.c . - Removed the weak_definitions attribute from the attribute_names[] array in read.c. Also removed the check in s_section() in read.c to only allow S_ATTR_WEAK_DEFS with a coalesced section type. - Fixed a bug in the PowerPC assembler that did not detect instructions with more than 5 parameters as an error. The fix is in calcop() in ppc.c. Radar bug #2911611. Changes for the 5.10 release (the cctools-415 release): - Added i386 opcode "fildll" to be the same as "fildq" and "fistpll" to be the same as "fistpq". Radar bug #2909568. Changes for the 5.10 release (the cctools-414 release): - Added the weak_definitions attribute to the attribute_names[] array in read.c. Also added a check in s_section() in read.c to only allow S_ATTR_WEAK_DEFS with a coalesced section type. Radar bug #2898558. - Fixed two places in md_number_to_imm() in ppc.c that was checking to make sure the value used for a branch displacement was in range but the check was made before the final adjustment of the value by 4. Radar bug #2890217. Changes for the 5.10 release (the cctools-400 release): - Added the i386 instruction fistp to be the same as fistps. Radar bug #2851846. Changes for the 5.10 release (the cctools-400 release): - Changed the Makefile back to again use the -dependency-file with gcc. Changes for the 5.10 release (the cctools-399 release): - Changed BUFFER_SIZE in input-file.c from 32k to 64k as gcc3 is puting out single stab lengths that are longer than 32k. Radar bug #2840883. Changes for the 5.10 release (the cctools-396 release): - Added the .weak_reference directive to read.c. A change was also needed in colon() of symbols.c. - Fixed a bad line of code in md_number_to_imm() in sparc.c that was: val = (val >>= 2) + 1; where it should have been: val = (val >> 2) + 1; - Changed the Makefile to not use the -dependency-file with gcc as well as mwccppc. - Added an include for <stdlib.h> in obstack.c, m88k.c, i860.c, ppc.c and hppa.c to pick up the prototype for abort(). - Added an include for <stdlib.h> in as.c and i860.c to pick up the prototype for exit(). - Added guards for the header files read.h, expr.h, struc-symbol.h, flonum.h, frags.h, fixes.h, hash.h Changes for the 5.10 release (the cctools-389 release): - Changed the PowerPC fsel instruction to not be marked as optional. Changes for the 5.9 release (the cctools-375 release): - Picked up patches to add ()'s around builtin ppc macros. Radar bug #2717461. - Removed the references to libstreams by removing the use of the old project builder interfaces with the ifdef OLD_PROJECTBUILDER_INTERFACE. This appears in messages.c and as.c . Changes for the 5.9 release (the cctools-364 release): - Fixed a bug in the assembler when a ".=value" was past the value of the current location counter and caused the assembler to crash. The fix is in layout_addresses() in layout.c in the case for rs_org to test the value of fragP->fr_offset for being less than 0. Radar bug #2682911. - Picked up a change to the PowerPC assembler adds the 4-arg option to rlwinm and friends, and uses mask->mb/me conversion code that was lifted from current GNU as. Radar bug #2684824. Changes for the 5.9 release (the cctools-363 release): - Fixed a bug in the PowerPC assembler that incorrectly changed the Y bit of a branch conditional when no prediction was specified. The book on Page F-10 says "assemblers should clear this bit unless otherwise directed". Radar bug #2665165. - Changed the ppc assembler to allow VMX instructions when -arch ppc7400 or -arch ppc7450 is specified. as.c. Radar bug #2599869. Changes for the 5.8 release (the cctools-355 release): - Changed the syntax of the .section directive to allow the attribute field to contain attributes such as "no_toc+pure_instructions" so more than one attribute can be specified. The change is in s_section() in read.c. Radar bug #2580298. Changes for the 5.8 release (the cctools-353 release): - Changed where the assembler gets installed and the assembler driver looks for it. The new location for Cheetah and beyond is /usr/libexec/gcc/darwin/<arch>/as for the ppc assembler and /usr/local/libexec/gcc/darwin/<arch>/as for the local assemblers. Radar bug #2574173. - Fixed a problem similar to the one below with using: .set x,a-b .long x where the value of the symbol x and the long do not get set correctly if the symbols a and b are not in the same frag. This showed up in the code for zero overhead exceptions as the way the size of the unwind table entries are calculated. The fix was to use the same pointer to the expression that the routine pseudo_set() uses in the fix below. The routine operand() in expr.c looks for a non-NULL value of the expression field of the symbol before assuming it is an absolute symbol and if so returns an expression with the symbol and a segment type of SEG_DIFFSECT. Then fixup_section() in layout.c also looks for a non-NULL value of the expression field of the symbol before assuming it is an absolute symbol and if so evaluates the expression as an absolute value as the differnce between the values of the symbols in the expression. The code in pseudo_set() in read.c for the case statement of of SEG_DIFFSECT also had to be tweaked as the test to save the expression into the symbol was base only symbol not being in the same frag. This test also needed to include if either symbol was undefined. Radar bug #2573260. Changes for the 5.7 release (the cctools-338 release): - Fixed the problem with the trailing N_FUN function stab not having its value set correctly for large functions. The stab in question is: .stabs "",36,0,0,Lscope0-_function The change is a bit of a hack as we are at the end of the Public User Beta cycle. The problem is that pseudo_set() in read.c for the case statement of SEG_DIFFSECT does not take into account that the two symbols may not be in the same fragment so that using their sy_value feilds in the first pass will not work. So the safest hack was to add a pointer to an expresion in the symbol struct (struc-symbol.h) and save it away in pseudo_set(). Then have write_object() in write_object.c look for it and evaluate then. Radar bug #2504182. Changes for the 5.7 release (the cctools-336 release): - Added the no_toc attribute to the attribute_names array in read.c. Radar bug #2494286. Changes for the 5.7 release (the cctools-330 release): - Changed the call to netname_look_up() in check_for_ProjectBuilder() in messages.c to bootstrap_look_up() when __OPENSTEP__ and __GONZO_BUNSEN_BEAKER__ is not defined (Radar bug #2473864). Changes for the 5.7 release (the cctools-329 release): - Added a fix for checking the range of a PPC_RELOC_BR14 and PPC_RELOC_BR24 relocation entry in ppc.c in md_number_to_imm() to make sure they do not overflow and generating an error if they do. (Radar bug #2469441). Changes for the 5.7 release (the cctools-327 release): - Changed back where as the assembler-321 release): - Changed the built in section directives .objc_class_names, .objc_meth_var_types, .objc_meth_var_names to be the (__TEXT,__cstring) section. Radar bug #2447117. Changes for the 5.6 release (the cctools-319 release): - Removed support of the GNU source and MW source targets in the Makefiles as Darwin takes care of this. This is done at this time because the new directory layout with Gonzo1G has /System/Developer moving to /Startup/Developer which would have effected /System/Developer/Source/GNU. Changes for the 5.5 release (the cctools-307 release): - Added the i386 opcode "fild" to be the same as "filds" and the opcode "fist" to be the same as "fists". Radar bug #2410226. - Additions to support coalesced symbols and external relocation entries for defined coalesced symbols. Radar bug #2411273. - Changed fix_to_relocation_entries() in write_object.c to create an external relocation entry for defined external coalesced symbols. - Changed relax_section() in layout.c to not added the value of a defined external coalesced symbol to the item being relocated. - Added the routine is_section_coalesced() to sections.c and its prototype to sections.h. - Added support for the CPU_SUBTYPE_POWERPC_7400 in as.c. Radar bug #2397523. - Fixed a bug where the assembler would treat the next line after a line containing only a '#' as a comment. Radar bug #2393418. Changes the 5.4 release (the cctools-302 release): - Added the i386 instruction setc which is the same as setb and setnae. Radar bug #2374684. Changes the 5.4 release (the cctools-300 release): - Fix a bug where the assembler does not catch the error case of an unterminated .macro. The fix is in read_a_source_file() on line 705 in read.c (Radar bug #2368659). - Fixed a bug with parsing signed numbers in the PowerPC assembler that showed up in assembling: vspltisb v1,-1 where the -1 over wrote the high bits of the opcode. - Added support for the coalesced section type. Changes the 5.3 release (the cctools-292 release): - Moved the i386 assembler to /usr/local/libexec/i386/as for MacOS X. - Added -I$(NEXT_ROOT)/System/Library/Frameworks/System.framework/PrivateHeaders to the Makefile as the egcs compiler's cpp no longer searches this by default. This is needed to pick up <streams/streams.h> for use with the PB interface. - Added the int type to state, old_state and add_newlines in app.c to remove a warning from the egcs compiler. - Changed the return type of main() in driver.c and as.c from void to int to remove a warning from the egcs compiler. Also changed the exit() call at the end to return(). Changes the 5.3 release (the cctools-286 release): - Added the support for the module termination section. Changes the 5.3 release (the cctools-285 release): - Fixed the i386 opcode of cmpxchg from 0x0fa6 (as it is in the i486 manual) to 0x0fb0 which is correct (as it is in the Pentium manual). - Changed all #ifdef NeXT to #ifdef NeXT_MOD and added a -DNeXT_MOD to the Makefile. This is because -DNeXT will nolonger be defined for MacOS X builds. Changes for the the 5.3 release, MacOS X bring up (the cctools-282 release): - Changed task_self() to mach_task_self() for MacOS X in writeout.c. Also included "stuff/openstep_mach.h" for macros to allow it to still build on Openstep. Also changed ifdef's __SLICK__ to __OPENSTEP__. - Changed task_self() to mach_task_self() in - Ifdef'ed __MACH30__ make.defs for mach_port_t vs mach_port. Also ifdef'ed out netname_look_up() call and #include <servers/netname.h> for __MACH30__ in messages.c (these are not yet in the SDK). - Changed the Makefile to allow for RC_OS=macos for MacOS X builds. - Added a few casts in places to get it to compile with the MetroWerks compiler without -relax_pointers. - Changed the assignment of macro_name = FALSE; to macro_name = NULL in s_endmacro() in read.c to make it compile with the MetroWerks compiler. - Added some void casts before some zeroes in macros in obstack.h to make it compile with the MetroWerks compiler. - Changed the Makefile for the driver_build not to pass GCC flags (-Wall and -Wno-precomp flags) down when using the MetroWerks compiler. Changes for the 5.2 release (the cctools-274 release): - Removed the #ifndef REMOVE_VMX stuff and the unifdef and sed lines from the Makefile. Radar bug #2237908. - Removed all uses of CPU_SUBTYPE_586SX in as.c. Added the pentium, pentpro, pentIIm3 and pentIIm5 -arch flags to as.c. Updated md_assemble() in i386.c to deal with the new subtypes. Radar bug #2231830. Changes for the 5.2 release (the cctools-267 release): - Added #ifndef REMOVE_VMX around the VMX stuff. Added unifdef and sed lines in the Makefile for removing this from the GNU source. Also removed this notes file from the GNU source. Radar bug #2227999. Changes for the 5.1 release (the cctools-261 release): - Moved the assemblers into /usr/libexec and /usr/local/libexec and changed the assembler driver to look there. (Radar 2213838) - Change the PowerPC instructions tlbld and tlbli to marked as OPTIONAL rather than 603 specific. This leaves only 601 specific instructions and the code for 603 subtypes was removed from md_assemble() in ppc.c. (Radar 2213821) - Added a check for VMX tagged instructions in md_assemble() in ppc.c to require the -force_cpusubtype_ALL flag. (Radar 2213821) - Added the -ppc603e, -ppc603ev and -ppc750 to as.c for PowerPC. (Radar 2213821) - Added the ppc750 special register names (Radar 2212878): { 936, "ummcr0" },/* 750 only */ { 937, "upmc1" }, /* 750 only */ { 938, "upmc2" }, /* 750 only */ { 939, "usia" }, /* 750 only */ { 940, "ummcr1" },/* 750 only */ { 941, "upmc3" }, /* 750 only */ { 942, "upmc4" }, /* 750 only */ { 1017,"l2cr" }, /* 750 only */ { 1019,"ictc" }, /* 750 only */ { 1020,"thrm1" },/* 750 only */ { 1021,"thrm2" },/* 750 only */ { 1022,"thrm3" },/* 750 only */ - Added the dcba PowerPC optional instruction. Changes for the 5.1 release (the cctools-260 release): - Added -c to all the install commands in the Makefile. Changes for the 5.1 release (the cctools-259 release): - Fixed a bug where .stabd have their name incorrectly set. Fixed this in layout_symbols() in write_object.c in line 779 and assign symbolP->sy_name_offset to 0 not *string_byte_count; Changes for the 5.1 release (the cctools-255 release): - Added the VMX opcodes and other needed support. Radar bug 2004760. Changes for the 5.1 release (the cctools-253 release): - Changed the Makefile to only create the needed dst directories. - Removed the default search path for headers because of the new directory layout for Preimer. Changes for the 5.1 release (the cctools-250 release): - Added taking -arch ppc604 and -arch ppc604e for the ppc assembler. - Changed printing the cctools version from NeXT version to Apple version. Changes were in as.c and the driver only in /usr/bin for RC_OS teflon and in /bin for RC_OS nextstep. Removed the creation of symbolic links (Radar 1672088). - Changed the Makefile to install ppc and i386 assemblers in /lib for RC_OS teflon with the rest in /local/lib. And to install m68k, i386 and sparc assemblers /lib for RC_OS nextstep the the rest in /local/lib. - Changed the code ifdef __TEFLON__ to ifndef __SLICK__ (where __TEFLON__ will nolonger be defined for Rhapsody builds) so the default builds will be native Rhapsody builds. The changes were to input-file.c and i386.h . Changes for the 5.0 release (the cctools-245 release): - Fixed the symbolic link from /usr/bin/as to ../../bin/as (Radar 1672088). Changes for the 5.0 release (the cctools-243 release): - Added a symbolic link from /usr/bin/as to $(DSTROOT)/bin/as (Radar 1672088). Changes for the 5.0 release (the cctools-242 release): - Removed the following builtin macros which the compiler once used which had been ifdef'ed out: #undef POWER_MNEMONICS #ifdef POWER_MNEMONICS { "ai\n", "addic $0,$1,$2\n"}, { "ai.\n", "addic. $0,$1,$2\n"}, #endif /* POWER_MNEMONICS */ - Removed the following builtin macros which was a typo: { "clrlsdi\n", "rldic $0,$1,$3,$2-$3\n"}, the correct macro name is clrlsldi which was already in the table. - Removed the following builtin macros left over from the NRW port: { "mtrtcd\n", "mtspr 281,$0\n"}, { "mfrtcd\n", "mfspr $0,281\n"}, { "mtrtci\n", "mtspr 282,$0\n"}, { "mfrtci\n", "mfspr $0,282\n"}, { "mtbatu\n", "mtspr 528+2*$0,$1\n"}, { "mfbatu\n", "mfspr $0,528+2*$1\n"}, { "mtbatl\n", "mtspr 529+2*$0,$1\n"}, { "mfbatl\n", "mfspr $0,529+2*$1\n"}, - Marked the instruction "eciwx" as optional. - Removed the non-existant instructions "stmd", "mtpmr" and "mfpmr". - Changed the 601 mtrctu and mtrctl from mtrtcu Rx equivalent to mtspr rtcu,Rx mtrtcl Rx equivalent to mtspr rtcl,Rx to: mtrtcu Rx equivalent to mtspr 20,Rx mtrtcl Rx equivalent to mtspr 21,Rx because the move to and move from use different register numbers for rtcu (20 vs. 4) and rtcl (21 vs. 5). Changes for the 5.0 release (the cctools-240 release): - Turned off the ifdef NRW_COMPILER so the instructions mull[o][.] and mulwd[.] are no longer supported. - Added the 604 & 604e special register names with the following numbers: { 952, "mmcr0" }, /* 604 & 604e only */ { 953, "pmc1" }, /* 604 & 604e only */ { 954, "pmc2" }, /* 604 & 604e only */ { 955, "sia" }, /* 604 & 604e only */ { 956, "mmcr1" }, /* 604e only */ { 957, "pmc3" }, /* 604e only */ { 958, "pmc4" }, /* 604e only */ { 959, "sda" }, /* 604 & 604e only */ - Added the 603 special register names with the following numbers: { 976, "dmiss" }, /* 603 only */ { 977, "dcmp" }, /* 603 only */ { 978, "hash1" }, /* 603 only */ { 979, "hash2" }, /* 603 only */ { 980, "imiss" }, /* 603 only */ { 981, "icmp" }, /* 603 only */ { 982, "rpa" }, /* 603 only */ - Removed the "bat[0123][ul]" 601 special register names left over from the NRW port. The names with "ibat[0123][ul]" are to be used. - Changed the name of the 601 special register "pid" to "pir" and added an entry for the name "hid15" also with the number 1023. - Removed the special register entries in the special_registers for {281,"rtcd"} {282,"rtci"} and {1022,"fpecr"} which appear to be left over from the NRW port but nolonger exist. - Added the special register {282,"ear"} which is optional in the PowerPC architecure. - Added the special registers {284,"tbl"} and {285,"tbu"} which were missing. - Fixed the clrrdi simplified mnemonics to use rldicr not rldicl which they were using. - Added the simplified mnemonic: { "clrlsdi\n", "rldic $0,$1,$3,$2-$3\n"}, - Added the missing multiply low double word (mulld) 64-bit instruction. - Added tests for invalid forms of branch conditional instructions where reserved bits of the BO field are not zero when -force_cpusubtype_ALL is not specified. - Changed the test for the BO's branch always encoding to include the any value for the z bits in the 1z1zz encoding. This test is used to set the Y bit in branch conditional instructions. - Added code in ppc.c to flag 64-bit compares if -force_cpusubtype_ALL is not specified. Note none of the 64-bit compares are marked with IMPL64. - Added IMPL64 for 64-bit instructions. Added code in ppc.c to not allow them unless -force_cpusubtype_ALL is specified. Marked the following instructions as 64-bit instructions: cntlzd, divd, divdu, extsw, fcfid, fctid, fctidz, ld, ldarx, ldu, ldux, ldx, lwa, lwaux, lwax, mulhd, mulhdu, mulld, rldcl, rldcr, rldic, rldicl, rldicr, rldimi, slbia, slbie, sld, srad, sradi, srd, std, stdcx., stdu, stdux, stdx, td, tdi, - Added OPTIONAL for optional instructions. Added code in ppc.c to not allow them -force_cpusubtype_ALL is not specified. Marked the following instructions as optional: ecowx, fres, frsqrte, fsel, fsqrt, fsqrts, slbia, slbie, stfiwx, tlbia, tlbie, tlbsync - Added an #ifdef POWER_MNEMONICS to ppc.c for the following Power mnemonics: { "ai\n", "addic $0,$1,$2\n"}, { "ai.\n", "addic. $0,$1,$2\n"}, - Removed the non-existant instruction "lmd". This appeared in the opcode table and in ppc.c for a check for invalid forms. - Fixed a bug in the checking of ldu invalid form where the mask was 0xfc000001 and should have been 0xfc000003, which also picked up lwa by mistake. - Added the error messages and tests for load multiple instructions (lmw, lswi and lswx) for invalid forms. - Fixed a bug where "ld r1,0(0)" was being flagged invalid because rA == 0. This was trying to get "ldu r1,0(r0)" which had the low bit set and "ld" does not. The mask and opcode were changed in the checking code in ppc.c. - Changed the rA parameter of lhax, lfsx, lfdx, ldx, lbzx, lhbrx, lhzx, lwax, lwbrx, stbx, stdcx., stdx, stfdx, stfiwx, stfsx, sthbrx, sthx, stwbrx, stwx, eciwx, ecowx from rA to (rA|0) to match the book. These are the same as the Radar #1653885 for lwzx. - Changed the rA parameter of lfdu, lfsu, lhau, ldu, lwzu, stbu, stdu, stfdu, stfsu, sthu and stwu from (rA|0) to rA to match the book. This is odd as most of the interger load with updates use (rA|0) and the floating point load with updates use rA. For both when rA == 0 they are invalid. This also required a change to parse_displacement() to allow GREG as the third parameter as well as G0REG. - Put in a test for the "branch conditional to count register" (bcctr and bcctrl) that can't use the "decrement and test CTR" option to be flagged as invalid. - Removed a warning about 'reference' might be used uninitialized in ppc.c . Changes for the 5.0 release (the cctools-236 release): - Changed the immediate shifted instructions (addis and lis) to not check the sign of the immediate just out of 16 bit range (signed or unsigned). The new parameter type HI, for high immediate was added. - Fixed the built in macro (added in cctools-235): { "crmove\n", "cror $0,$1,$1\n"}, was "crxor" - Fixed a bug in "cmpldi" so that the immediate value is unsigned. - Added forms of "cmpd", "cmpdi", "cmpld", "cmpldi", "cmpw", "cmpwi", "cmplw", "cmplwi" that take a number as their first parameter (previously they only took a cr register). - Added code to allow "tw 31,0,0" where the last two parameters rA and rB are allowed to be coded as 0. The new parameter type ZERO was added. Also allow the instruction "ori 0,0,0" to be coded where the first two parameters rA and rS are allowed to be zero when the third is zero. - Added the PowerPC instruction: mttbl rS equivalent to mtspr 284,rS - Fixed the opcode of "mttbu rS" "Move to time base upper" to be equivalent to "mtspr 285,rS" - Removed the PowerPC instruction mttb "Move to time base". - Fixed a bug in the PowerPC instruction "lwzx rD,rA,rB" where rA should have been (rA|0) that is encoded as G0REG. It was GREG (Radar #1653885). Changes for the 5.0 release (the cctools-235 release): - Added the PowerPC pseudo instruction "jbsr symbol,lable". This involved adding a JBSR operand type in ppc-opcode.h, a parse_jbsr() routine in ppc.c. The addition of the PPC_RELOC_JBSR was handled by adding a jbsr_exp to the ppc_insn struct and code in md_assemble() to call fix_new() for it. Then code was added in write_object.c to write this reloc out. - Added extended mnemonics: mtrtcu Rx equivalent to mtspr rtcu,Rx mtrtcl Rx equivalent to mtspr rtcl,Rx mtmq Rx equivalent to mtspr mq,Rx mfrtcu Rx equivalent to mfspr Rx,rtcu mfrtcl Rx equivalent to mfspr Rx,rtcl mfmq Rx equivalent to mfspr Rx,mq bctr BO,BI equivalent to bcctr BO,BI bctrl BO,BI equivalent to bcctrl BO,BI - Added the built in macros: { "crmove\n", "crxor $0,$1,$1\n"}, { "crnot\n", "crnor $0,$1,$1\n"}, { "mfear\n", "mfspr $0,282\n"}, { "mtear\n", "mtspr 282,$0\n"}, { "mtfs\n", "mtfsf 0xff,$0\n"}, { "mtfs.\n", "mtfsf. 0xff,$0\n"}, - Added the following instructions: { 0x7c00026c, "eciwx", {{21,5,GREG}, {16,5,GREG}, {11,5,GREG}} }, { 0x7c00036c, "ecowx", {{21,5,GREG}, {16,5,GREG}, {11,5,GREG}} }, { 0xec000030, "fres", {{21,5,FREG}, {11,5,FREG}} }, { 0xec000031, "fres.", {{21,5,FREG}, {11,5,FREG}} }, { 0xfc000034, "frsqrte", {{21,5,FREG}, {11,5,FREG}} }, { 0xfc000035, "frsqrte.",{{21,5,FREG}, {11,5,FREG}} }, { 0xfc00002e, "fsel", {{21,5,FREG}, {16,5,FREG}, {6,5,FREG}, {11,5,FREG}} }, { 0xfc00002f, "fsel.", {{21,5,FREG}, {16,5,FREG}, {6,5,FREG}, {11,5,FREG}} }, { 0xfc00002c, "fsqrt", {{21,5,FREG}, {11,5,FREG}} }, { 0xfc00002d, "fsqrt.", {{21,5,FREG}, {11,5,FREG}} }, { 0xec00002c, "fsqrts", {{21,5,FREG}, {11,5,FREG}} }, { 0xec00002d, "fsqrts.", {{21,5,FREG}, {11,5,FREG}} }, { 0x38000000, "la", {{21,5,GREG}, {0,16,D}, {16,5,G0REG}} }, "la rT,d(rA)" equivalent to "addi rT,rA,d" { 0x7c0007ae, "stfiwx", {{21,5,FREG}, {16,5,GREG}, {11,5,GREG}} }, { 0x7c00046c, "tlbsync", }, { 0xfc000080, "mcrfs", {{21,5,CRFONLY},{18,5,NUM}} }, Allows a crf as the first operand (previously only a number) { 0x7c000400, "mcrxr", {{21,5,CRFONLY}} }, Allows a crf as the operand (previously only a number) As opcodes in ppc-opcode.h - Added the "mftb rT,TBR" opcode form to specify the timer base register as an opcode in ppc-opcode.h - Added the archaic forms of compares: "cmp crT,rA,rB" equivalent to "cmp crT,0,rA,rB" "cmp num,rA,rB" equivalent to "cmp num,0,rA,rB" "cmpi crT,rA,s16" equivalent to "cmpi crT,0,rA,s16" "cmpi num,rA,s16" equivalent to "cmpi num,0,rA,s16" "cmpl crT,rA,rB" equivalent to "cmpl crT,0,rA,rB" "cmpl num,rA,rB" equivalent to "cmpl num,0,rA,rB" "cmpli crT,rA,s16" equivalent to "cmpli crT,0,rA,u16" "cmpli num,rA,s16" equivalent to "cmpli num,0,rA,u16" As opcodes in ppc-opcode.h Also fixed the existing 4 parameter forms of cmpli, and cmplwi to take an unsigned immediate not a signed immediate. - Fixed the clrlslwi macros which were: { "clrlslwi\n","rlwinm $0,$1,$3,$2-$3,31-$2\n" }, { "clrlslwi.\n","rlwinm. $0,$1,$3,$2-$3,31-$2\n" }, to: { "clrlslwi\n","rlwinm $0,$1,$3,$2-$3,31-$3\n" }, { "clrlslwi.\n","rlwinm. $0,$1,$3,$2-$3,31-$3\n" }, where the 31-$2 should have been 31-$3 Changes for the 5.0 release (the cctools-229 release): - The change to cctools-228 broke -static that had x-y relocation when used with -static. The code in fix_to_relocation_entries() in write_output.c Also needed it's testing of the -k flag removed. Changes for the 5.0 release (the cctools-228 release): - Changed the code in fixup_section() in layout.c to use the section differnce relocations even if -static (the -k flag not set) is used. It was there because it was incompatible with 3.2. Changes for the 5.0 release (the cctools-227 release): - Added the PowerPC built in macros: { "ai\n", "addic $0,$1,$2\n"}, { "ai.\n", "addic. $0,$1,$2\n"}, Changes for the 5.0 release (the cctools-223 release): - Added the PowerPC built in macros: { "crset\n", "creqv $0,$0,$0\n"}, { "crclr\n", "crxor $0,$0,$0\n"}, { "mtcr\n", "mtcrf 0xff,$0\n"}, - Fixed the Makefile to pass OFLAG through to lower makes for development builds. Changes for the 5.0 release (the cctools-222 release): - Added s_include() as in ppcasm_pseudo_table[] in read.c. - Changed do_scrub_next_char() in app.c to handle strings in single quotes for ppcasm. - Changed next_char_of_string() and demand_copy_string() to use single quotes for strings with ppcasm instead of double quotes. - Added code in do_scrub_begin() in app.c n ifdef'ed PPC and if'ed flagseen[(int)'p'] to not use '@' as a LEX_IS_LINE_SEPERATOR. Also '\r' is used as a LEX_IS_LINE_SEPERATOR for -ppcasm. - Changed do_scrub_next_char() after the label flushchar: ifdef'ed PPC and flagseen[(int)'p'] and in state 3 (3: after second white on normal line (flush white)) to return a space. This is so that labels without colons don't have spaces removed on their lines and mess up the parsing. - Added ppcasm_parse_a_buffer() to read.c which will drive the parsing of the syntax for the ppcasm flavor of the assembler. Many other changes in read.c for parsing are also added. - Moved all the calls to initialization routines in main() to after the command line flags are parsed. This is so that -ppcasm can effect the initialization routines where needed. - Added the -ppcasm flag for PowerPC. This is to be used to make the PowerPC assembler more like the Apple ppcasm assembler. The goal of this hack is to to allow the assembler source for the Apple's blue box to assemble so they can build it under Rhapsody. Changes for the 5.0 release (the cctools-216 release): - Changed the size of the standard PowerPC symbol stubs in read.c to 20 and 36 to be used with the super-scaler symbol stubs. Changes for the 5.0 release (the cctools-215 release): - Removed bsd fron <bsd/libc.h> in write_object.c so it will build under the Teflon SDK. - Fixed the mftb and mftbu (move from time base/upper) opcodes to be correct. - Added the sizes to the standard symbol stubs in read.c for PowerPC. - Fixed a bug in try_to_make_absolute() in expr.c which was making expresions absolute and not using section difference when -dyanamic (flagseen['k']) was set. This caused some expressions like a-b when a and b were previously been defined to turn into absolute when they should be SECTDIFF. Changes for the 5.0 release (the cctools-214 release): - Added the PPC_RELOC_HI16_SECTDIFF, PPC_RELOC_LO16_SECTDIFF and PPC_RELOC_HA16_SECTDIFF relocation types. - Fixed a bug where the comment character ';' in the PowerPC assembler was acting like a statement seporator. - Allowed -arch m98k for PowerPC assembler. Changes for the 5.0 release (the cctools-213 release): - Added macros for "not" and "not." . Changes for the 5.0 release (the cctools-212 release): - Changed tlbiex to tlbie and slbiex to slbie (also set correct opcodes for slbie and slbia). - Fixed a bug in the assembler that it exits with a zero status if it can't open it's input file and creates an output file. - Added .flag_reg and .noflag_reg ppc directives to cause flaging of registers that should not be used as a real warning (that is still create an output file). - Added the -no_ppc601 flag and the .no_ppc601 directive to flag 601 uses (again as real warnings). - Added the 603 instructions "tlbld rB" and "tlbli rB". - Added the 601 instruction "clcs rD,rA". - Added marking the 601 & 603 instructions in the ppc assembler. Changes for the 5.0 release (the cctools-210 release): - Changed to allow $ as the same as . (dot) for the ppc assembler. - Changed the third parameter of stswi and lswi to a NUM0 so that 32 means 0. - Changed mulwd to mulhw along with mulwd. to mulhw. . - Changed mull to mullw along with mull. to mullw. . - Changed mullo to mullwo along with mullo. to mullwo. . - Fixed mr (it was "ori rA,rS,0" not "or rA,rS,rS" as it should have been). To do the fix it was removed as an opcode and added a as a builtin macro. Also I added the mr. instruction (again as a macro). - Fixed a bug in the ppc assembler instruction for dcbi first parameter. It should be (rA|0) not (rA). Changed GREG to G0REG. - Changed lis to take an unsigned operand to force it not to check the sign. This is the same "add rD,0,value" who's operand was treated as unsigned. This allows "lis rD,0x8000" not to get an error flagged. - Changed m98k to ppc. Changes for the 4.1 release (the cctools-202 release): - Fixed a bug in the assembler's handling of signals so that only one signal needs to be generated to cause it to exit non-zero. The change is in as.c in got_sig() (bug #67606). - Fixed a bug in the assembler such that when it gets an error in the second pass it does not create the output file. The change is in write_object.c in write_object() (bug #67607). Changes for the 4.0 release (the cctools-182 release): - Ifdef'ed !defined(__TEFLON__) out a free() call to the internal stdio buffer as has changed in 4.4BSD. This change was in input-file.c in input_file_give_next_buffer(). - Ifdef'ed !defined(__TEFLON__) out a typedef of uint in i386.h: #ifndef __TEFLON__ typedef unsigned int uint; #endif as this is defined in <sys/types.h>. Changes for the 4.0 release (the cctools-178 release): - Changed the work around for compiler optimizer bug #50416. The change is to symbols.c in the routine colon(). The expression the compiler was having trouble with was: ((symbolP->sy_desc) & (~REFERENCE_TYPE)) == 0 so symbolP->sy_desc was assigned to a volatile unsigned int temporary and the expression was re-written to: (temp & (~REFERENCE_TYPE)) == 0 which seems to work. The reason for no longer using the #pragma CC_OPT_OFF as a work around is that the sparc compiler for -dynamic does not work for non-optimized code (bug #58804). Changes for the 4.0 release (the cctools-173 release): - Picked up a fis to the sparc assembler that was ifdef'ed sparc instead of SPARC in layout.c . - Picked up a fix to the i386 assembler which allows lables to have double quotes around them as in: call "foo". This was a one expression change to an if statement in i386_operand() in i386.c when looking for a memory reference. Changes for the 4.0 release (the cctools-172 release): - Picked up the sparc changes to symbols.c that ifdef's the #pragma CC_OPT_OFF so that it is not turned off for sparc and the sparc compiler fails. Changes for the 4.0 release (the cctools-170 release): - Picked up the sparc changes to layout.c for dealing with sparc relocation entries which have (symbol1 - symbol2 + constant) that must be treated as absolute (too small for sectdiff). Changes for the 4.0 release (the cctools-168 release): - Picked up the sparc changes to sparc.c for the changes to internal relocation entries used by only the assembler. - Picked up the sparc changes to the sizes of the symbol_stub section (32) and the picsymbol_stub section (60) in read.c . - Fixed a bug in the .zerofill directive that did not propagate the align expresion into the section header. Changes for the 4.0 PR1 release (the cctools-166.1 and cctools-167 releases): - Changed it so that if -static is seen twice no complaint is printed (bug #53307). Changes for the 4.0 release (the cctools-166 release): - Added the .const_data directive for the (__DATA,__const) section (bug #53067). - For the m68k assembler removed the -l flag which made a2@(undef) 32 bit offsets rather than 16 bit offsets. Now they are always 32 bit offsets. This had one change in m68_ip() in m68k.c in the AOFF case when the expression was not a constant. This condition was removed. It was also incorrect. It was: if(!flagseen['l'] && flagseen['k'] && seg(opP->con1) != SEG_DIFFSECT) and should have been: if(!flagseen['l'] && (flagseen['k'] || seg(opP->con1) != SEG_DIFFSECT)) Also in md_parse_option() the 'l' case was removed for completeness. This case caused bug #53010 when -dynamic was changed to the default. - Changed the directive .mod_init_func to be in the __DATA section as it may will have relocation entries and may be written on by dyld. Changes for the 4.0 release (the cctools-163 release): - Changed the default to -dynamic. This is done in as.c by setting flagseen[(int)'k'] = TRUE; which is TRUE for -dynamic and FALSE for -static. Changes for the 4.0 release (the cctools-162 release): - For the fix below in cctools-159 missed clearing the lazy bound bit in indirect_symbol_new() when the section was S_NON_LAZY_SYMBOL_POINTERS. So This was added to symbols.c . Changes for the 4.0 release (the cctools-160 release): - Put back in the -O flag and for bug #50416 which prevents line 235 in symbols.c from working added has #pragma CC_OPT_OFF and #pragma CC_OPT_RESUME around that routine. Changes for the 4.0 release (the cctools-159 release): - Fixed a problem the fix below and jbsr's on hppa which caused them to all be non-lazy. Changes for the 4.0 release (the cctools-158 release): - Fixed a bug that caused a symbol that was used both lazy and non-lazy to be incorrectly marked as lazy. This happened in gmon.c for _moninitrld which caused the dynamic libsys not to work with profiling as it would crash as the call to moninitrld would jump to the common symbol moninitrld. Changes for the 4.0 release (the cctools-154 release): - Changed it so that if -dyanmic is seen twice no complaint is printed. - Removed the use of nmedit now that libstuff has __private_extern__'s. Changes for the 4.0 release (the cctools-150 release): - Changed to allow .private_extern without -dynamic. Changes for the 4.0 release (the cctools-149 release): - Added setting of the S_ATTR_SOME_INSTRUCTIONS section attribute in the md_assemble() routines (except the i860) to mark those sections that contain some instructions. - Removed the section attribute relocate at launch (S_ATTR_RELOC_AT_LAUNCH). Changes for the 4.0 release (the cctools-138 release): - Picked up sparc.c & sparc-opcode.h. Changes for the 4.0 release (the cctools-137 release): - Picked up sparc.c. Changes for the 4.0 release (the cctools-136 release): - Change for sparc.c which cause relocation entries for call instructions to localy defined symbols to be emitted. Changes for the 4.0 release (the cctools-135 release): - Fix for Tracker 41317 [as(hppa) : does not support cache control hints.] as/hppa.c and as/hppa-opcode.h changed to add new parsing rule characters for cache control hints. The general format of the insruction supporting cache control hints is : <opcode>, cmpltr,cc <operands> Here cmpltr can be <none>, in which case the formats supported are : <opcode>,,cc <operands> or <opcode>,cc <operands> The parser will take care of both. Changes for the 4.0 release (the cctools-134 release): - Picked up the sparc changes to the sparc.c. Changes for the 4.0 release (the cctools-133 release): - Picked up the sparc changes to the sparc.c and sparc-check.c. Changes for the 4.0 release (the cctools-132 release): - Picked up the sparc changes to the sparc.c. - Picked up the sparc changes to the write_object.c for putting out the relocation entries. - Picked up the sparc changes to the comments in fixup_section() in layout.c. - Picked up the sparc s_seg() routine in read.c. - Picked up the sparc-check stuff in the Makefile and sparc-check.c. - Made the assembler ProjectBuilder aware and send its error and warning messages to ProjectBuilder (bug #40745). - Added -dynamic to eventually replace -NEXTSTEP-deployment-target 3.3 and -static to eventually replace -NEXTSTEP-deployment-target 3.2. Changed all the incompatiblity error messages to use -dynamic. Changes for the 3.3 release (the cctools-131 release): - Fixed a bug in md_estimate_size_before_relax() in i386.c that caused all branches to be long. The problem was with the change ti a full Mach-O assembler the test for symbols in the section changed and the code was not changed (bug #43745). Changes for the 3.3 release (the cctools-128 release): - Picked up the bug fix for 42587 made in cctools-119.1 for the 3.2hp release. "Hangs if tried to enter a register number in hex format". The test case is the instruction "ldcwx 0xc(0,%r1),%r2". Changes for the 3.3 release (the cctools-127 release): - Changed the hppa picsymbol_stub size to 32 bytes. - Changed the order of the output of the assembler's symbolic info to this: relocation entries (by section) indirect symbol table symbol table string table - Moved the sparc assember to /usr/local/bin for now (bug #42033). Changes for the 3.3 release (the cctools-122 release): - Had to give up on checking indirect symbol because the m68k pic symbol stubs generate machine dependent frags that get relaxed later. The code in symbol.c in indirect_symbol_new() was ifdef'ed CHECK_INDIRECTS which is off. - Fixed another bug in the m68k assembler when trying to parse '#L0-"x:y:z"' in crack_operand() in m68k.c. It needed to know about "'ed symbol names to correctly step over them. - Fixed a bug that showed up in the m68k assembler when trying to assemble the expression in the instruction: 'addl #L0-"L1",a1' . This is a problem in the way get_symbol_end() works and is used. get_symbol_end() writes a '\0' on the symbol's trailing " which does not get replaced with a " later. So I fixed This on in operand() when it calls get_symbol_end() and it knows the name started with a ". Later when it is replacing the character returned from get_symbol_end() back into input_line_pointer it also replaces the " if the name started with a ". This a may have to be done in other places some day. - Fixed a bug in indirect_symbol_new() where we first see if the last frag recorded for an indirect symbol turned out to be zero sized then changed that recorded frag to the next non-zero frag in the list. I think this happens because we record the frag before we fill it and if we run out of space that frag gets a zero size and a new one is created. - Added the flag -NEXTSTEP-deployment-target which takes either 3.2 or 3.3 as arguments. With 3.3 it turns on the -k flag. Also the warnings about incompatible features that printed -k were changed. Changes for the 3.3 release (the cctools-120 release): - Fixed a bug in that caused the symbol table to be trashed when -L was used and the input file had a global symbol that started with 'L'. The fix was in layout_symbols() in write_object.c that corrected the assumption that all 'L' symbols were non-external. - Fixed a bug in the i386 assembler that did not allow symbols like "[Foo bar:]" to be parsed as operands. This fix was made in i386.c, first was to add " to operand_special_chars[] array, second was to add some code in md_assemble() in the loop that parses operands to scan for the ending " if an operand has one. - Set the sizes for the i386 .symbol_stub and .picsymbol_stub to 24 and 26. Changes for the 3.3 release (the cctools-119 release): - Picked up first round of changes for the sparc target. This work is incomplete. Changes for the 3.3 release (the cctools-116 release): - Fixed a bug when -n is used on a file containing just a .zerofill directive the assembler core dumped indirecting through frag_now in layout_addresses() in layout.c. A check for frag_now being NULL was added. Changes for the 3.3 release (the cctools-115 release): - Changed the way the m68k assembler handles the operand "pc@(symbol-.)" to make the value of "." the value of the instruction after the opcode. This is needed so that when this operand is used in a symbol stub to reference the lazy pointer an any offset in the expression "symbol1-symbol2+offset" will correctly apply to symbol1 and the check in the link editor can figure out which lazy pointer is being referenced by the relocation entry. - Fixed a bug in indirect_symbol_new() when a section changed occured between .indirect_symbol directives it thought it was a bad or missing indirect symbol. This was because there were zero length frags created on the section change. Code was added to find the last real frag by skiping the zero length frags at the end. Changes for the 3.3 release (the cctools-112 also 111.1 release): - Picked up a fix for the hppa assembler that caused (bug #39710): comib,<> 0,%r19,LBE2 nop nop nop nop LBE2: nop Not to assemble correctly as it didn't do the relocation. The fix was in hppa.c where the following constant was stuff in char field which it does not fit: 32c32 < #define HPPA_RELOC_12BRANCH (127) /* only used internal in here */ --- > #define HPPA_RELOC_12BRANCH (1000) /* only used internal in here */ - Fix a bug in the hppa assembler (bug 40043) that did not assemble "ble R`0xc0000004(%sr4,%r1)" correctly. The code noticed that the expression was absolute but failed to remember the instruction takes a word (not a byte) displacement. In hppa.c: 804c804 < dis_assemble_17(im14>>2,&w1,&w2,&w); --- > dis_assemble_17(im14,&w1,&w2,&w); Changes for the 3.3 release (the cctools-111 release): - Fixed a bug in parsing octal characters \ooo out of strings that would not stop after picking up at most 3 characters and not stop if the digit was not an octal digit. The fix was in next_char_of_string() in read.c (bug #39363). - Fixed a bug in the i386 assember that the instruction "call 0" caused the assembler to core dump. The fix was to md_assemble() in i386.c at line 1352 where an SEG_ABSOLUTE has a NULL i.disps[0]->X_add_symbol which was not tested for. This was in the code that caused relocation entries for calls to be generated for scattered loading. - Fixed a bug when an unlink() was needed before the creation of the output file so that clones are handled correctly. The fix was in write_object() at the end just before the call to open(). - Fixed a bug in the native hppa assembler that put an extra 4 bytes of zero in the text section. The problem was caused by frags being aligned to the default which turns out to be 8 on the hppa in the obstack code. The fix was to change the obstack_begin() call in section_new() in sections.c to use _obstack_begin() and specifiy a 4 byte alignment. Changes for the 3.3 release (the cctools-108 release): - Fixed a bug for the i386 which caused scattered loading not to work because it did not create a relocation entry for local calls to symbols that were in the output file. The change is at line 1352 in i386.c in md_assemble(). Changes for the 3.3 release (the cctools-104 release): - Changed the code from using COMPAT_NeXT3_2 ifdef's to using flagseen['k'] and requiring -k when the new incompatable features are being used. - Fixed a bug in the JBSR relocation type for non-external relocation types where the other_part (stored in the r_address of the pair) which is an offset from the start of the section address had the base address of the section in it (the fix was to subtract sect_addr in this case in at the end of fix_to_relocation_entries() in write_object.c in the JBSR case). - Fixed a 3.2 compatiblity problem introduced with putting the symbol table before the relocation entries which caused strip not to work since the symbol table and string table were not at the end of the file. A set of #ifdef COMPAT_NeXT3_2 were added to write_object.c when assigning the offset to the symbol table. - Added the use of the reserved1 field in the section header for indirect sections to be the index into the indirect symbol table for that section. One line change in layout_indirect_symbols() in write_object.c. Changes for the 3.3 release (the cctools-103 release): - Fixed a bug in s_lcomm() in read.c that did not propagate the alignment to the section header leaving the bss section with an alignment of 0 and failing to align the starting address of the section. Changes for the 3.3 release (the cctools-102 release): - Integrated in the hppa support. * Added the SECTDIFF support for the hppa with the HI21 and LO14 SECTDIFF relocation types. * Fixed the use of calc_hppa_HILO() in md_number_to_imm() in hppa.c to correctly pass the symbol value and offset value as the two first parameters. different as/Mach-O.c (integrated for cctools-102, logicly into write_object.c) Using cctoolshppa-37 with diffs minimized for format changes. New stuff for hppa relocation entries. different as/Makefile (integrated for cctools-102) Using cctoolshppa-37 with diffs minimized for format changes. New stuff for hppa assembler and new hppa files. Changes for cctools-102 Added -DNEW_FORMAT to ahppa_test target's COPTS. Removed ASFLAGS=-W from ahppa_test target. different as/app.c (started integrating for cctools-102) Using cctoolshppa-37. Has a bunch of code to deal with field selectors (of the form L'expression) the code has comments about a BUG to be fixed. Changes for cctools-102 Picked up 4 additional "#if ... defined(HPPA)" so '@' can be used as a statement separator and // used as a comment. Not picked up: the field selectors stuff. In talking to Umesh he said this was no longer needed as they changed from L' to L` for field selectors. different as/as.c (integrated for cctools-102) Using cctoolshppa-37. New stuff for hppa cputype, CPU_SUBTYPE_HPPA_ALL and -arch hppa. different as/read.c (no changes for cctools-102) Two real changes plucked from cctoolshppa-37: 1) Add the include hppa.h ifdef'ed HPPA 2) Also there is an issue with the completer ",=" not being treated as an assignment. The cctools-100 changes appear to also fix this. There still is a bug with spaces around "=" for assignments. The cctools-100 changes have fixed this. different as/write.c (integrated for cctools-102, this now in write_object.c) One real changes plucked from cctoolshppa-37: 1) Add the include hppa.h ifdef'ed HPPA Only in cctoolshppa-37/as: hppa-aux.c (picked up for cctools-102) Pick up cctoolshppa-37/as/hppa-aux.c from cctoolshppa-37. Only in cctoolshppa-37/as: hppa-aux.h (picked up for cctools-102) Pick up cctoolshppa-37/as/hppa-aux.h from cctoolshppa-37. Only in cctoolshppa-37/as: hppa-check.c (picked up for cctools-102) Pick up cctoolshppa-37/as/hppa-check.c from cctoolshppa-37. Only in cctoolshppa-37/as: hppa-ctrl-func.c (picked up for cctools-102) Pick up cctoolshppa-37/as/hppa-ctrl-func.c from cctoolshppa-37. Only in cctoolshppa-37/as: hppa-ctrl-func.h (picked up for cctools-102) Pick up cctoolshppa-37/as/hppa-ctrl-func.h from cctoolshppa-37. Only in cctoolshppa-37/as: hppa-opcode.h (picked up for cctools-102) Pick up cctoolshppa-37/as/hppa-opcode.h from cctoolshppa-37. Only in cctoolshppa-37/as: hppa.c (picked up for cctools-102) Pick up cctoolshppa-37/as/hppa.c from cctoolshppa-37 and changed HPPA_RELOC_NORELOC to NO_RELOC in three places. Changes for cctools-102 to allow hppa.h to be removed: were to add these lines: #include <mach-o/hppa/reloc.h> #define HPPA_RELOC_12BRANCH (1000) /* only used internal in here */ Only in cctoolshppa-37/as: hppa.h (NOT picked up for cctools-102) Pick up cctoolshppa-37/as/hppa.h from cctoolshppa-37 and change: Changed lines 33 and 34 from: #define NO_RELOC HPPA_RELOC_NORELOC #define HPPA_RELOC_12BRANCH (HPPA_RELOC_NORELOC + 1000) to: #define NO_RELOC 0x10 /* out side the range of r_type:4*/ #define HPPA_RELOC_12BRANCH (NO_RELOC + 1000) So HPPA_RELOC_NORELOC could be removed from mach-o/hppa/reloc.h . Removed line 38 which was: extern int next_char_of_string(); It is static in read.c and was changed for no apperent reason. Changes for the 3.3 release (the cctools-101 release): - Second major round of changes for the new shlib stuff. 1) Added the LC_DYSYMTAB load command to the object file format and organization of the symbol table and string table as well as the layout of the relocation entries. 2) Added the support for the indirect symbol sections and stub sections. This added 3 new types of sections, some new section directives, and indirect symbols and the creation of the indirect symbol table and marking of symbols as lazy bound undefined. - For the m68k fixed the code in m68k_ip() for the "Bg", "Bw", and "Bc" branches as many parts did not work. Now things like "bra foo:w" works. To make this work m68k_ip_op() was changed to not strip off the ":w" and not set the opP->isiz field but to use get_num() and use the e_siz field. This only effected the ABSL case. - Made a bit of an ugly fix for "jbsr foo:w" and "jra foo:w" which is trying to force the word from. So to make this always work when foo is an absolute number that fits in a word the instruction is changed from the "bsr" form to the "jsr" form (or from the "bra" to "jmp") which does not use a displacement and is not effected by the address of the instruction. Changes for the 3.3 release (the cctools-100 release): - First major round of changes for the new shlib stuff. 1) Major restructuring and clean up for support of a true Mach-O assembler which includes .section and .zerofill directives for arbitrary sections. 2) Support for possition-independent code through the SECTDIFF relocataion type (these changes are ifdef'ed COMPAT_NeXT_3_2 as the will produce object files that are incompatible with the 3.2 release). 3) Support for .private_extern directive (again ifdef'ed COMPAT_NeXT_3_2). - Fixed a bug in try_to_make_absolute() which when changing an expression to absolute did not set the add_symbol and subtract_symbol fields to NULL which caused the wrong fixup to be done that used the expression with a fixup (bug #37382). Changes for the 3.2 release (the cctools-25 release): - Added forms of shld and shrd with two operands that imply the cl register. - Added missing opcode table entries for the i386 instructions fcom and fcomp with no arguments in i386-opcode.h. - Fixed "0: jmp 0b" which did not work, 0 was the problem in the 0b (1-9 work). This was a problem in operand() in expr.c when 0b... expressions were added (bug #8331) and the fix was to look to see of the next character was the end of the line or not a hex digit. Changes for the 3.1 release (the cctools-20 release): - Fixed a bug for the m98k that did not correctly check for too few parameters. Two bugs here one in calcomp that was testing != NONE which should have been == NONE and a bug in md_assemble in advanceing past the '+' or '-' when it did not exist and there was nothing but a '\0' after the op it advanced past it. Changes for the 3.1 release (the cctools-16 release): - Fixed a bug with the m98k opcodes for stwcx. stdcx. where bit 0 was not set to a 1. - Changed the following instructions so that for the SH field the value 32 (or 64) assembles as 0: rldicl rldicl. rldicr rldicr. rldic rldic. rldimi rldimi. rlwinm rlwinm. rlwimi rlwimi. - Fixed a bug in the m98k assembler where the value of exprressions which was exactly (1 << width) was not treated as an error (4 places > was changed to >= in m98k.c). Changes for the 3.1 release (the cctools-15 release): - Moved the m88k and m98k to be install in /usr/local/lib not /lib. - Fixed a bug in the m98k assembler that did not detect instructions with too many parameters. - Added macros and register names for batX[ul] the same as ibatX[ul] since the 601 does not have split i and d for these. - Changed the m98k instruction's "icbi" first paramenter to G0REG from GREG. - Back out the below fix and added new code in try_to_make_absolute() that walked the frags between the the symbols L0 and L1 to calculate the absolute value. - Fixed a bug where the expression L1-L0 was not coming up right when it had a .align between L0 and L1. A hack was removed from try_to_make_absolute() in expr.c that had the code ifdef out that was trying to say the expresion could change due to relaxation. The the routine s_align() in read.c was ifdef RISC to do the alignment instead of creating an align frag. Changes for the 3.1 release (the cctools-14 release): - Added a form of fcmpu and fcmpo that takes a crX as it's first argument. - Added the opcodes for tlbiex (31,338) and tlbia (31,370). - Fixed a bug in the m68k assembler where the code to handle implementation specific instructions had || in two places where && was supposed to be. The change was on lines 2693 and 2706 in md_assemble() in m68k.c . - Changed the m98k instructions: lwarx, ldarx, stwcx. and stdcx. second arg from GREG to G0REG. - Fixed the Makefile to install the m98k assembler it built. Changes for the 3.1 release (the cctools-13 release): - Added the m98k (PowerPC) architecture. Changes for the 3.1 release (the cctools-10 release): - Changed the default .include directories to /NextDeveloper/Headers and /LocalDeveloper/Headers in as.c and made them work (the default never worked). - Corrected the following table entries for i386 floating point instructions which had a FloatD or'ed into them which was wrong: faddp, fmulp. - Fixed a bug that caused an error message that a segment override was specified more than once for string instructions for the i386. The fix was in i386.c where i.seg was need to be set to zero after the string instruction operands were parsed (bug #29867). - Fixed the assembler driver /bin/as to take machine specific architecture flags and run the family architecture named assembler. So "as -arch m68040" will run /lib/m68k/as and not /lib/m68040 assemblers implementation of this design is: By default the assembler will produce the cpusubtype ALL for the object file it is assembling if it finds no implementation specific instructions. Again by default the assembler will allow implementation specific instructions for implementations and combine the cpusubtype for those specific implementations. The combining of specific implementations is architecture dependent and may not be allowed for some architectures and are flagged as an error. With the optional -force_cpusubtype_ALL flag all instructions are allowed and the object file's cpusubtype will be ALL. If an -arch flag for a specific implementation (ie. -arch m68040 or -arch i586) is used the assembler will flag as errors instructions that are not supported on that architecture and produce the cpusubtype for that specific implementation in the object file (even if no specific instructions are used). This effected as.c, as.h, Mach-O.c, m68k.c, m68k-opcode.h, i386.c, i386.h, and i386-opcode.h. The m88k and i860 assemblers had no machine specific modifications. Changes for the 3.1 release (the cctools-9 release): - Fixed a bug that caused the .include feature to fail in some cases. The value of the stuff saved by save_scrub_context() in app.c was not reset which caused the app preprecessor to start parsing the included file and think it was in the case of a string. Changes for the 3.1 release (the cctools-8 release): - Fixed a bug the did not cause m68k floating point branchs to undefined symbols to have relocation entries that make it to the object file to work with scattered loading. On line 3299 in m68k.c was: fix_new(fragP,(int)(fragP->fr_fix),4,fragP->fr_symbol, (symbolS *)0,fragP->fr_offset+4,1); changed to: fix_new(fragP,(int)(fragP->fr_fix),4,fragP->fr_symbol, (symbolS *)0,fragP->fr_offset+4,1+2); - Fixed a bug in the i386 assembler for these two instructions where the segment override did not get picked up from: mov %eax,%gs:sr_mem_offset jmp %gs:sr_mem_offset The first is bug #29555 the second is just another form of the same logic bug in another place. There maybe more of this same logic bug. The fixes are in i386.c in md_assemble() when putting out the opcode. - Fixed a bug in the string instructions where segment overrides in the operand fields were not picked up. To do this the kludge that ignored the operands of string instructions had to be removed as special case table entries and matching checking had to be added (bug #26409). - Fixed a bug in the i386 assembler where the invlpg instruction did not take a Mem operand (it was Mem32). The fix was in the table entry for invlpg in i386-opcode.h (change requested by the Lono group). The manual is confusing on this instruction. - Fixed a bug in the i386 assembler where a call or jmp instruction to an absolute address was not getting put out pc relitive and no relocation entry was produced (line 1050 and line 1155 in i386.c). - Fixed the problem of getting alignment correct for .align directives that are greater than the default alignment. This effected the struct frchain in subsegs.h, the routine set_section_align() in Mach-O.c and the routine write_object_file() in write.c and the initialization of the new field in subsegs.c (bug #29432). - Changed the I386 bound instruction such that the parameters are consistant with gas (reversed them). Also fixed the boundw so it only put out one '0x66' data16 prefix byte. - Fixed a bug for the I386 that padded the text section with zeros (changed to pad with nops) in write.c. - Added the wait (0x9b) prefix to the following instructions: "finit", "fstcw", "fstsw", "fclex", "fstenv" and "fsave" the "XnXXX" form does not have the wait prefix. - Added "fucom" as an alias for "fucom %st(1)" - Added "fucomp" as an alias for "fucomp %st(1)" Changes for the 3.1 release (the cctools-7 release): - Added the i486 and i586 specific stuff to i386-opcode.h (bug #27475). The changes are ifdef'ed i486 and i586 and these are turned in in the Makefile. Also the define STRICT_i586 is ifdef'ed (but not defined) where the i586 does not allow certian things (test register instructions). - Fixed a bug in md_assemble() in i386.c where the instruction "mov %ah,%al" would assemble wrong. The problem was when the suffix was selected based on the register type the "i.types[o]" needed to and'ed with `Reg' because %al and %ax have `Acc' in their types and they were coming up with a 'l' suffix. This is ifdefed NeXT. - Fixed a bug in m68k_reg_parse() in m68k.c where the registers "ic", dc" and "bc" were incorrectly parsed because of an "if (c2 = 'c')" (bug #27954). - Added an ifdef CHECK_WORD_IMMEDIATES in m68k.c and code to make checking of 16-bit immediates consistant in the m68k assembler (bug #26863). Also to make this work the "link" (no suffix) for an immediate word entry in m68k-opcode.h had a new "place" character created for it (#z for #w) and code was added to m68k.c to handle it. The define CHECK_WORD_IMMEDIATES is left off to cause truncation of too large immediate words. - Fixed a bug that did not allow -arch and -arch_multiple (as.c). This was put in to the NRW cctools-6 but not into lono's. Changes for the 3.1 release (the cctools-6 release): - Added the -arch_multiple flag that will cause one line to printed before all error messages which will contain the architecture name. - Fixed the m88k pmul and punpk instructions where the last register is not required to be an even register. - Fixed a bug in atof-ieee.c in gen_to_words() that did not round IEEE denorms correctly and caused the smallest denorm to become zero (bug #23178). Changes for the 3.1 release (the cctools-5 release): - Picked up the lono team's cctools-4_2 i386-opcode.h . - Added the pseudo op for the m88k assembler ".dot symbol" that sets the value of the location counter into the value of the symbol. - Removed the trnc.x mnemonic as it is not legal "trnc.sx" is the correct form which remains in m88k-opcode.h { 0x8400d900, "trnc.x", { {21,5,REG}, {0,5,XREG}, {0,0,NIL} } }, Changes for the 3.1 release (the cctools-4 release): - Fixed a bug in parse_cst() in m88k.c that did not allow expressions for constant operands. This bug was found with "tb0 0,r0,(129)" where the ()'s caused the problem. (bug #21052) - Changed installing the i386 assembler into /lib/i386/as from ix86 (and changed the -arch name to i386). - Changed CPU_TYPE_I80x86 to CPU_TYPE_I386 in Mach-O.c - Picked up the changes for the i386 assembler to allow scattered loading from the lono team. Changes for the 3.0 release (the -57 compiler release) - Removed the following opcodes from m88k-opcode.h as siff(1) and the newest 110 manual says they are not valid. { 0X840008A0, "fcvt.dd", { {21,5,REG}, {0,5,REG}, {0,0,NIL} } }, { 0X840088A0, "fcvt.dd", { } } }, (bug #20021) - Fixed a bug introduced with the change of the SUB_SEGMENT_ALIGN. It turns out it broke the objective-C runtime that assumed that the __protocol section (amoung others) can be indexed like an array of structs which are not multiples of 8 bytes. The fix was to align all objective-C sections to 4 bytes. Again the change was in write_object_file() in write.c (bug #20022). Changes for the 3.0 release (the -56 compiler release) (performance month): - Changed the order of the objective-C sections. The message_refs and the cls_refs section were switched. The meta_cls_refs section was removed. This change effected Mach-O.c, write.c and read.c. - Changed write_object_file() in write.c to used the normal subsegment alignment: #ifdef RISC #define SUB_SEGMENT_ALIGN (3) #else #define SUB_SEGMENT_ALIGN (2) #endif and handle the literal pointer sections special (by knowing their subsegment values). This fixes a problem on the m88k where the const section had a .align 3 directive but started on a align of 2 boundary. This still has the problem if a section has an align greater 3 the data in the output file will end up aligned correctly but the section start will not resulting in the link edited object to having the data not aligned correctly. (bug #19492) Changes for the 3.0 release (the -55 compiler release) (performance month): - Changed the Makefile to install the driver in /usr/local/bin and the 68k assembler in /bin/as and all other assemblers in /usr/local/lib/* . - Changed the as driver (driver.c) to look in /lib and in /usr/local/lib for assemblers. - Changed the order of the objective-C setions to: 2 (__OBJC,__class) 3 (__OBJC,__meta_class) 4 (__OBJC,__string_object) 5 (__OBJC,__protocol) 6 (__OBJC,__cat_cls_meth) 7 (__OBJC,__cat_inst_meth) 8 (__OBJC,__cls_meth) 9 (__OBJC,__inst_meth) 10 (__OBJC,__cls_refs) 11 (__OBJC,__meta_cls_refs) 12 (__OBJC,__message_refs) 13 (__OBJC,__class_names) 14 (__OBJC,__module_info) 15 (__OBJC,__symbols) 16 (__OBJC,__category) 17 (__OBJC,__meth_var_types) 18 (__OBJC,__class_vars) 19 (__OBJC,__instance_vars) 20 (__OBJC,__meth_var_names) 21 (__OBJC,__selector_strs) Also the special casing of the objective-C section in determing to created a scatter or non-scattered relocation entry was removed for all but the (__OBJC,__selector_strs) section. The directive ".objc_selector_refs" is still there and the cc-55 compiler will be changed to use the correct directive ".objc_message_refs" and then this can be removed. These changes effected read.c and Mach-O.c Changes for the 3.0 release (the -54 compiler release) (performance fortnight): - Added three string sections to the Objective-C segment: .objc_class_names, __OBJC __class_names .objc_meth_var_names, __OBJC __meth_var_names .objc_meth_var_types, __OBJC __meth_var_types This effected read.c and Mach-O.c. - Added the following lines to i386-opcode.h at the request of the lono guys: {"repz", 0, 0xf3, _, NoModrm, 0, 0, 0}, { "repnz", 0, 0xf2, _, NoModrm, 0, 0, 0}, Plus allow .word on the ix86 (ifdef added in read.c). - Added const to lex_type, is_end_of_line, potable in read.c to make read-only. - Added const to op_encoding in expr.c to make read-only. - Added const to m68k_opcodes and endop in m68k-opcode.h to make read-only. - Changed in_buf in input-file.c to allways be malloc()'ed. - Added const to the Makefile for the next_version.c echo line. Changes for the 3.0 release (the -51 compiler release): - Changed Mach-O.c to pad out the string table to a multiple of 4 and set the padded bytes to '\0'.). - Fixed a bug in the fsincos instruction where the FPc and FPs registers were switched in the instruction. - Changed the order of the objective-C classes to: __class - always written on __meta_class - always written on __string_object - always written on __protocol - always written on __cls_meth - sometimes written on __inst_meth - sometimes written on __cat_cls_meth - sometimes written on <these will not be used soon> __cat_inst_meth - sometimes written on <these will not be used soon> __cls_refs - sometimes written on (uniqued) <these are not used now> __meta_cls_refs - sometimes written on (uniqued) <these are not used now> __message_refs - sometimes written on (uniqued) __symbols - never written on __category - never written on __class_vars - never written on __instance_vars - never written on __module_info - never written on __selector_strs - never written on (uniqued) The six sections starting from the __string_object section were effected. The change was made in read.c and Mach-O.c . Changes for the 3.0 release (the -49 compiler release): - Fixed a bug where the assembler was padding literal pointer sections with a zero for RISC machines and causing the link editor to complain. The fix was to change the macro SUB_SEGMENT_ALIGN from 3 to 2 in write.c and to set the alignment of S_LITERAL_POINTER sections in Mach_O.c to 2. - Fixed the passing and using of RC_CFLAGS correctly in the Makefile. Changes for the 3.0 release (the -49 compiler release): - Changed the Makefile to meet the RC api. Changes for the 3.0 release (the -47 compiler release): - Added the missing 040 "ptest[rw] An@" instructions. - Changed the constant CPU_TYPE_I386 to CPU_TYPE_I80x86 to match header file. - Changed the behavior so if warning message is produced (with as_warn()) that an object is not produced. The change was in as_warn() in messages.c and is ifdef'ed NeXT which sets bad_error to 1 just like as_bad(). (bug #16137 and #16044) - Added the (__OBJC,__string_object) section with the directive .objc_string_object (read.c and Mach-O.c where changed). Changes for the 3.0 release (the -44 compiler release): - Created an assembler driver to be placed in /bin/as and the assemblers are then moved to /lib/<arch_flag>/as . The Makefile was updated to build and install this way. as.c was changed to take "-arch <arch_flag>" and check it against the type of assembler it is. - Switch over to the new header file organization. Changes for the 3.0 release (the -43 compiler release): - Changed the Makefile to install the i860 assembler in /usr/local/bin/as860 . - Picked up md.h from 1.38.1 which added const to md_pseudo_table and md_relax_table so i860.c, m68k.c and m88k.c were all updated as were the uses in read.c and write.c. - Picked up the files: i386.c, i386.h and i386-opcode.h from the 1.38.1 release. Changes for the 3.0 release (the -39 compiler release): - Fixed so that strings can have characters with the 8th bit set in them. This involved adding the lines in next_char_of_string() in read.c: #ifdef NeXT /* allow 8 bit chars in strings */ c = (c & MASK_CHAR); /* to make sure the 0xff char is not returned as -1*/ #endif NeXT so that the high bit does not get sign extened and the -1 return code that is tested for at the call sites as >= 0 is not tripped over. Second changed all 8th bit set chars in lex_type[] in read.c to be allowed in names. Also had to change the macros in read.h #define is_name_beginner(c) ( lex_type[(c) & 0xff] & LEX_BEGIN_NAME ) #define is_part_of_name(c) ( lex_type[(c) & 0xff] & LEX_NAME ) to add the "& 0xff" because of the sign extension of chars (bug #15597). Changes for the 3.0 release (the -37 compiler release): - Fixed the relocation entries for the 88k so that 88k objects can be scatter loaded by the link editor. This involves adding a PAIR relocation entry for the LO16 type (as well as having one for the HI16 type) and moving the place in the relocation entry where the other half of the relocatable expression is store from the r_symbolnum field to the r_address field (so that a scattered relocation can be used, since it does not have an r_symbolnum field). Also removed all support for signed immediates on the 88110 since NeXT will not use this feature. Also to be consistant the i860's PAIR's will also use the r_address field even though they will not use scattered relocation entries. These changes were made in Mach-O.c . Also required forcing relocation entries for non-local labels for 88k branch instructions which was done with the same kludge as the 68k by setting the 0x2 bit in the fx_pcrel fix structure when it is created in m88k.c in md_assemble(). This also required an extra ifdef M68k in Mach-O.c in fix_to_relocation_info() when choosing to put out a scattered relocation entry because of the way 68 branch instructions work. Changes for the 3.0 release (the -36f compiler release): - Fixed a bug that did not catch a bit field expression error between the <>'s. A check for the trailing '<' was missing. This was added in parse_bf() in m88k.c . - Fixed the .abs directive for 88k registers. The fix was for it to handle scaled register expressions and also not generate an undefined symbol for the register name "U r1". The changes were to m88k.c adding s_m88k_abs() for the .abs pseudo-op and to read.c to leave s_abs() but to ifdef it out of the table for the 88k as s_m88k_abs() uses s_abs(). - Corrected the lex_type table in read.c to not allow the character '[' as part of a name. - Added '@' as a statement separator for the 88k (change the "# <line> <file> <level>" stuff to use it when generating ".line <line> @ .file <file>" ). Changed app.c and read.c ifdef'ed M88K. Also s_reg(), s_scaled() and no_delay() in m88k.c also need this because they can't use the macros in read.c . - Added the .no_delay 88k pseudo-op. Changed m88k.c and m88k-opcode.h to add the delay_slot field to the instruction table and a static variable, in_delay_slot, that gets set each time an instruction is assembled. - Fixed a bug not allowing macro names to start with '.' . The fix was in read.c in parse_a_buffer() right before it detects an unknown pseudo-op. Also change it so the unknown pseudo-op is printed when an error happens. Also changed s_macro() in read.c to print a warning if a known pseudo-op name is used a macro name. Changes for the 3.0 release (the -36e compiler release): - Fixed a bug where the operand "pc@" did not assemble correctly. The fix was in m68_ip() in m68k.c on linr 1604 for the "case AINDR" which can be set with PC in the opP->reg. In this case the mode pc@(d16) is used. This is ifdef'ed NeXT. - Fixed a bug where "foo :" did not recognize "foo" as a label. The fix was in app.c in do_scrub_begin() where the line: lex [':'] |= LEX_IS_LINE_SEPERATOR; was ifdef'ed out since ':' did not work. But ... This DOES not cause ':' to be a LINE SEPERATOR but does make the second if logic after flushchar: in do_scrub_next_char() to handle "foo :" and strip the blanks. This is the way has always been and must be this way to work. Changes for the 3.0 release (the -36d compiler release): - Fixed a bug in the 88k assember that did not handle "# <line> <file> <level>" comments correctly because it uses ";" which is a comment and the .file gets ignored. The fix was ugly. The change was to app.c and read.c ifdef'ed M88K to allow '\001' as a statement seporator (CHANGED IN -36f see above). - Changed the marking of literal sections from not marking them for RISC to not marking them for only the I860. This change is since the 88k compiler will ALLWAYS make a 32 bit reference to an item and leave it to the link editor to find ways to make 16 bit references these sections can marked for uniqueing for the 88k. - Added the following directives for the following new sections: .constructor for __TEXT, __constructor .destructor for __TEXT, __destructor .objc_protocol for __OBJC, __protocol .objc_cls_refs for __OBJC, __cls_refs (S_LITERAL_POINTERS) .objc_meta_cls_refs for __OBJC, __meta_cls_refs (S_LITERAL_POINTERS) Changes for the 3.0 release (the -36c compiler release): - Fixed a bug involving expressions with unknown symbols for operators other than '+' or '-'. The problem is that in expr() in expr.c if an expression operator is something other than '+' or '-' then it sets the need_pass_2 flag and no other "frags" (bytes of output) are generated. You would think it would want to run another pass but the code doesn't do that (major bug)! So now it just does what it would in the case the symbol is known which is report the expression can't be relocated. - Fixed m88k.c to use LO16 relocation types not LO16EXT types. Changes for the 3.0 release (the -36b compiler release): - Added the m88k directive .scaled as requested by the OS group. - Allow expressions for the bit number, parse_cmp() like in bb0, condition match, parse_cnd() like in bcnd, and even 4 bit rotation, parse_e4rot like in prot. Changes for the 3.0 release (the -36a compiler release): - Added the opcodes "illop[123]" as per the version 3.0 88110 spec. - Removed the "lda.b rD,rS1[rS2]" instruction and replaced that opcode with "lda.x rD,rS1[rS2]" as per the version 3.0 88110 spec. - Corrected "nint.[sdx]" to be "nint.s[sdx]" and "int.[sdx]" to be "int.s[sdx]" which was just wrong in the GNU assembler (trnc was previous corrected but flaged as an 88110 versrion 3 change but that was incorrect as the assembler was just wrong (even for the 88100)). - Corrected "mov.s xD,xS2" to be "mov xD,xS2" as per the version 3.0 88110 spec. - Removed the old (version 2.0 of the 88110) opcodes: "mov.t xrD,rD2" and "mov.t rD,xrS2" which used blank instead of .s for single. - Removed the old (version 2.0 of the 88110) opcodes: "trnc.t rD,xrs2" and "trnc.t rD,rS2" (where t is the type of the result) which used only the type of the result and implied the .s for single. - Removed the "ppack.8.b", "ppack.8.h", and "ppack.16.b" opcodes from the m88k opcode table. These operations are undefined. Changes for the 3.0 release (the -35 compiler release): - Fixed a bug in parse_bf() when expressions were added expressions that did not start with a digit (for example a '(', '+', '-', '~' or '!') were not recognized. - Changed the action for .abort to print the remaining line as part of the error message. Feature request by the OS group. - Added an option [,alignment] expression to the end of the .lcomm directive which aligns the symbol to the power of 2 alignment expression (same as the .align directive). This is ifdef'ed NeXT in s_lcomm() in read.c . - Changed which directives are allowed on which machines .word 68k and i860 only (machine specific) NOT 88k .long 68k and i860 only NOT 88k .quad 68k only .octa 68k only .float 68k and i860 only NOT 88k These changes are in read.c, m68k.c and i860.c . Feature request by the OS group and removal of .quad and .octa for the i860 approved by the NeXT Dimension group. - Added the directive .elseif . This involed a bit of reworking the .if, .else and .endif stuff in read.c . Feature request by the OS group. - Fixed a bug that would allow you to use the macro in a macro definition and cause the assember to core dump. A limit, MAX_MACRO_DEPTH of 20, is used to avoid this. - Added the directives .macros_on .macros_off. This is to allow macros to be turned off, which allows macros to override a machine instruction and still use the machine instruction. This is in read.c and toggles the variable macros_on in s_macros_on() and s_macros_off() which is tested in parse_a_buffer(). Feature request by the OS group. - Added s_abs() in read.c to implement ".abs symbol,exp" which set symbol to 1 or 0 depending on if the expression is an absolute expression. This is intended for use in macros. Feature request by the OS group. - Added s_reg() to m88k.c to implement ".greg symbol,exp" and ".xreg symbol,exp" which set symbol to 1 or 0 depending on if the expression is a general register or extended register respectfully. These are intended for use in macros. Feature request by the OS group. - Added $n in expand_macro() in read.c to substitute the number of actual arguments for the macro. Feature request by the OS group. - Changed the code for setting the line separator's (character for multiple statements on a line) in do_scrub_begin() in app.c . The character '/' tried to be a separator for the 88k but code down stream prevented it from working so it was removed and the 88k does not allow multiple statements on a line. Also removed the NeXT ifdef for the ':' character which also did not work. Changes for the 3.0 release (the -34 compiler release): - Fixed a bug that for all floating-point branches it did not generate a relocation entry. The fix is in md_estimate_size_before_relax() in m68k.c where the case of 'TAB(FBRANCH,SZ_UNDEF)' was not handled in the first switch and code to generate the relocation entry (a call to add_fix) was not done. This is ifdef'ed NeXT. - Fixed a bug for branches that use displacements to absolute addresses which produced a displacement off by -4. This is ifdef'ed NeXT in m68_ip() in m68k.c in the second main switch statement that sets the bits in the instruction for the 'B' case. There are two ifdef's, one for 'g' sub case (normal branches) and one for the 'c' sub case (floating-point branches). - Disallow all floating-point packed immediates for the 68k assembler because the gen_to_words() routine in atof-ieee.c does not produce the correct 68k packed decimal format. This simply disallows this but does not fix it. So "fmovep #0r1.0,fp0" will no longer assemble instead of assemble wrong. This is ifdef'ed PACKED_IMMEDIATE in m68k-opcode.h and m68k.c in m68_ip(). (internal bug #5) - Fixed a bug in the assembler which matched the "fmoveml #1,fpc" where the immediate #1 cause an internal FATAL error because it can't decode the mode "*s". The fix is in m68_ip() in m68k.c where the case for 's' was ifdef'ed NeXT in just like the long case. This is legal but the instruction "fmoveml #1,fpc/fpi" is not and the assembler STILL accepts it. (internal bug #4). - Fixed a bug in the assembler which matched the "movec sfc,#1" where the immediate #1 cause an internal FATAL error because it can't decode the mode "#j". The fix in m68_ip() in m68k.c in the loop to install the operand bits for the '#' case was missing the second sub case for 'j' that check the range of the value and installed the operand. If the immediate is a variable my guess this will still fail but in a different way. (internal bug #3). - Fixed a bug that caused the assembler to call abort with no error message when assembling "andiw #0x8001,fpir/fpcr/fpsr". In get_num() in m68k.c the case for a SEG_PASS1 was missing from the switch statement from the type of the expression() call. It was ifdef'ed NeXT in and handled like SEG_UNKNOWN and a bunch of others that just print out it can't handle the expression. STILL BROKEN! (internal bug #2). - Fixed a bug that the operand of the form "zpc@(0)" was trying to use the pc@(16) (AOFF) form which does not allow the base register to be suppressed which is what zpc is. So this now uses the pc@(bd,Xn) form (AINDX). The bug caused "zpc@(0)" to generate garbage, namely "d1". The change is in m68k_ip_op() in m68k.c and ifdef'ed NeXT with a comment like above. (internal bug #1). - Ifdef'ed out the turning operands into PC relative in m68_ip() in m68k.c (this is a 1.36 feature) because it breaks scattered loading. - Fixed a bug in the 1.36 version of GAS where the table of fmovem instructions were reordered. See the comment in m68k-opcode.h with the header "REGISTER LIST BUG:". The fix was to put the list back in the previous order. There is a design bug here that needs to be fixed. - Fixed a bug where the .align directives were not propagated into the section headers of the object file. A new routine, set_section_align() in Mach_O.c, is called from s_align() in read.c . - Put the change in atof-ieee() in atof-ieee.c that creates infinities on overflows. This fixes bug #13409. - Picked up a change in i860_ip() in i860.c from the NDTools-6 version. Having to do with constant offset alignments. - Added expressions to the width and <offset> bit field instructions. Since the parameter syntax is width<offset> and offset may be a two character 'cmp' bit designator, the width expression may not contain the character '<' and the offset expression must start with a digit. - Changed "mov.t xrD,rD2" and "mov.t rD,xrS2" to use .s for single instead of blank. (version 3.0 of 88110 spec). - Changed "trnc.t rD,xrs2" and "trnc.t rD,rS2" (where t is the type of the result) to use .st where the s is for single and t is the type of the result. (version 3.0 of 88110 spec). - Changed the pflusha instruction to pflusha030 and pflusha040 because there is no way to tell them apart. - Added automatic creation of stabs for assembly code debugging with -g. The comment that explains in detail how this is done is in read_a_source_file() in read.c, The other changes are in make_stab_for_symbol() in symbols.c, s_include(), s_line() in read.c, and md_assemble() in m68k.c and m88k.c also two static declarations were removed from input-scrub.c. These changes are marked with pairs of: #ifdef NeXT /* generate stabs for debugging assembly code */ ... #endif NeXT /* generate stabs for debugging assembly code */ - Added the MMU instructions for the 030 and 040 (ifdef'ed BUILTIN_MMUS) and turned off the m68851 define for that set of MMU instructions. The reason to turn it off is because of the register names it must recognize (see bug #7525 why we don't want to do this). This change is not ifdef'ed NeXT because it is very intertwined with the 68851 stuff. Also with this change the "MMU status register" correct name of "mmusr" was added but the old name of "psr" was retained for compatiblity because of assembler code that might use it. - Added installsrc, installIBMsrc and installGNUsrc targets to the Makefile. - Bug #8331, feature request for hex immediate bit-patterns for floating-point immediates. Added the constant radix 0b... like 0x... except that it would be assumed to be a "bignum" and then a binary bit pattern for a hex immediate. This effected the routines operand() in expr.c, get_num() in m68k.c and m68_ip() in m68k.c . All of these are ifdef'ed NeXT with the comment /* fix for bug #8331 */ . - Bug #13017, this is where ".fill x,3,x" would cause the assembler to call abort because the repeat size is 3 bytes. This is now dissallowed in s_fill() in read.c and only repeat sizes of 0,1,2 and 4 are allowed. - Bug #11209, this is where if the file system fills up or something and the file can't be closed the object file was left and would confuse later make(1)'s because the object file would be present but would then just hand this off to the link editor and it would complain about a bad object file. The fix in output_file_close() in output-file.c was to remove the file in this case because it might be bad. - Bug #8920, where s file containing just "bra L1" would produce a bad object file because the undefined local lable L1 was not defined is fixed. The fix is in write_object_file() in write.c (and one line in write_Mach_O() in Mach-O.c to test bad_error). The undefined local symbols are printed with an error message in this case and then the object file is not written. - Bug #8918, where a line of the form "# 40 MP1 = M + 1" gets confused with a line of the form "# 1 "hello.c" 1" and causes a bug that ignores the rest of the file. This was fixed in app.c when in state 4 (after putting out a .line, put out digits) and then not finding an expected '"' for the name of the file it ignores the rest of the line but forgets to set the state to 0 (begining of a line). This is ifdef'ed NeXT. - Bug #7525 (second part), where "bfffo d0{#2,#32},d1" would not work with the field width of 32 is now fixed. (I'm not sure exactly what the fix was it probably came from the 1.36 version of GNU). - Bug #5384, where if a ".globl foo" precedes "foo=1" foo does not end up global has been verfied to be fixed (I'm not sure exactly what the fix was it probably came from the 1.36 version of GNU). - Changed the default alignment of sections to 3 (8) for RISC machines from 2 (4) in both write.c and MachO.c. - Print a warning for -R (make data text) to used .const and not put the data in the text. - Cleaned up Mach-O.c and read.c by changing/adding message_refs where selector_refs was used. --- Changes to merge in John Anderson's (DJA) version of GAS --- - added relational binary operators (<, ==, >, <= and >=) and modified the precedence to conform to 'C'. The code is marked with pairs of: #ifdef NeXT /* added relational ops, changed precedence to be same as C */ ... #endif NeXT /* added relational ops, changed precedence to be same as C */ and is contained in the file expr.c and is the DJA version with a few bug fixes to make it work. Found a logic bug when "<>" was used as an operator it was recognized as a "<". This "operator" appears in the WriteNow source so I added "<>" as a form of "!=". - added logical negation unary operator (!). The code is marked with pairs of: #ifdef NeXT /* logical negation feature */ ... #endif NeXT /* logical negation feature */ and is contained in the file expr.c and is exactly the DJA version. - added code to try to make expresions absolute. The code is marked with pairs of: #ifdef NeXT /* feature to try to make expressions absolute */ ... #endif NeXT /* feature to try to make expressions absolute */ and is contained in the files expr.c and m68k.c (the code is exactly the DJA version). - added the .dump/.load feature (this is based on top of the .include and .macro features). The code is marked with pairs of: #ifdef NeXT /* the .dump/.load feature */ ... #endif NeXT /* the .dump/.load feature */ and is in read.c (and one line in symbols.c) and is the DJA version. Fixed a bug in write_symbol() in read.c where the symbol's n_type field needed to be and'ed with the N_TYPE macro before checking for equal to N_ABS. not checked - added the conditional assembly feature (pseudo ops .if .else .endif) and the macro feature (pseudo ops .macro and .endmacro). This is all contined read.c and required a major rewrite of the main parsing routine read_a_source_file(). This was replaced by three routines read_a_source_file(), parse_a_buffer() and parse_line_comment(). Since the their was no way to ifdef the old code it was removed. Where possible the conditional assembly feature code is marked with pairs of: #ifdef NeXT /* the conditional assembly feature (.if, .else, and .endif) */ ... #endif NeXT /* the conditional assembly feature (.if, .else, and .endif) */ and the macro feature code is marked with pairs of: #ifdef NeXT /* the .macro feature */ ... #endif NeXT /* the .macro feature */ All of these changes are in read.c and except for the rewrite read_a_source_file() the changes are the DJA version. - added the .include "filename" pseudo op. This is marked with pairs of: #ifdef NeXT /* .include feature */ ... #endif NeXT /* .include feature */ the code in in read.c, as.c, app.c, as.h and input-scrub.c. Except for the code in app.c and the typedef scrub_context_data in as.h (related to the major changes in the app.c code from the DJA version) it is exactly what was in the DJA version. Fixed a bug in input_file_open() in input-file.c where it was doing a setbuffer() call using a staticly allocated buffer for all the file's in read. This was changed to use a dynamicly allocated buffer when processing an include file so the buffer does not get reused by include files. Changes for the 3.0 release (the -33 compiler release): - Fixed trap*.w and trap*.l to take one immediate operand of word or long (this was just wrong in GAS). --- Changes to merged in the 1.36 version of GAS --- app.c: (1.36 version picked up) - This deals with the "# <number> <filename> <garbage>" in the state machine (the NeXT fix in s_line() was much cleaner). - Picked up the 1.36 version. The only odd difference is that ':' was ifdef'ed OUT in the 1.36 version and IN the the NeXT 1.28 version. #ifdef DONTDEF <- 1.36 #ifndef DONTDEF <- NeXT 1.28 lex [':'] |= LEX_IS_LINE_SEPERATOR; #endif I did the NeXT thing in fear of breaking something. Done with: #if defined(DONTDEF) || defined(NeXT) append.c: (1.36 version picked up) - Only Copyright comment changed as.c: (1.36 version picked up) - The machine specific command line options have been moved to routines named md_parse_option() in the machine specific files. - The handling of assembly errors has changed from using as_warn() to the new routine as_bad() which if called sets bad_error and will not produce an output file if that gets set (see the file messages.c for definitions). - Handling of signals has changed to an array of signal numbers and a routine that catches them and prints out the signal number. messages.c: (1.36 version picked up) - The addition of the routine as_bad() and the variable bad_error. If as_bad() is called then bad_error gets set and the output file does not get produced (see main() in as.c). as.h: (1.36 version picked up) - The following macros had ()'s added around their parameters: #define bzero(s,n) memset((s),0,(n)) #define bcopy(from,to,n) memcpy((to),(from),(n)) atof-generic.c: (1.36 version picked up) - Macro for alloca ifdef'ed __GNUC__ added: #ifdef __GNUC__ #define alloca __builtin_alloca #else #ifdef sparc #include <alloca.h> #endif #endif - Macros for bzero and index ifdef'ed USG added: #ifdef USG #define bzero(s,n) memset(s,0,n) #define index strchr #endif - The strings "nan", "inf" or "infinity" (in either case) are recognized first and NaN's get the sign set to 0, +infinity gets the sign set to 'P' and -infinity gets the sign set to 'N' (see flonum.h). They used to be caught at the end and the strings "Infinity", "infinity", "NaN", "nan", "SNan", or "snan" had been recognized and some note about see atof-m68k.c was there (this file was removed and atof-ieee.c was added). - A loop was added to strip leading '0' characters: while(number_of_digits_after_decimal && first_digit[number_of_digits_before_decimal + number_of_digits_after_decimal] == '0') --number_of_digits_after_decimal; After they were picked up. - Looks like the extra precision was move into two extra littlenums worth in the implementation of converting digit strings into flonum's. flonum-const.c: (1.36 version picked up) - Comment changes. flonum-copy.c: (1.36 version picked up) - Copyright comment changed. flonum-mult.c: (1.36 version picked up) - Added a check if the signs of the two numbers are one of '+' or '-' it is an error and returns zero. This happens with infinities as the sign is set to 'P' or 'M' or NaNs and the sign is set to zero ('\0' or 0). - Also some extra term in an if statement: 146c141 < if (significant || P<0) --- > if (significant) I did figure out what it was. flonum.h: (1.36 version picked up) - Comment about NaN and infinities was added: /* JF: A sign value of 0 means we have been asked to assemble NaN A sign value of 'P' means we've been asked to assemble +Inf A sign value of 'N' means we've been asked to assemble -Inf */ atof-ieee.c: (1.36 version picked up) - Replaces atof-m68k.c bignum-copy.c: (1.36 version picked up) - The addtion of the explit return type of 'int' was added to the routine bignum_copy(). - Copyright comment changed bignum.h: (1.36 version picked up) - The commented out extra digits of LOG_TO_BASE_2_OF_10 were uncommented. the comment above this was that this was done to get around a problem in GCC (I'm assuming that has been fixed). < #define LOG_TO_BASE_2_OF_10 (3.3219280948873623478703194294893901758651) --- > #define LOG_TO_BASE_2_OF_10 (3.321928 /* 0948873623478703194294893901758651 */) - Copyright comment changed. expr.c: (1.36 version picked up with Mach_O and NeXT ifdef's merged in) - Copyright comment changed and top comment removed. - A hack was changed with respect to the variable generic_bignum[]. The comment explains: /* Seems atof_machine can backscan through generic_bignum and hit whatever happens to be loaded before it in memory. And its way too complicated for me to fix right. Thus a hack. JF: Just make generic_bignum bigger, and never write into the early words, thus they'll always be zero. I hate Dean's floating-point code. Bleh. */ - This varable and comment was added but no one uses it. See flonum.h for how NaNs and infinities are handled. /* If nonzero, we've been asked to assemble nan, +inf or -inf */ int generic_floating_point_magic; -. expr.h: (1.36 version picked up) - Copyright comment changed. frags.h: (1.36 version picked up) - Copyright comment changed. hash.c: (1.36 version picked up with error() calls ifdef NeXT to as_fatal) - Copyright comment changed and two /* in comments changed to / * - A change from: newsize = handle->hash_stat[STAT_SIZE] <<= 1; to handle->hash_stat[STAT_SIZE] <<= 1; newsize = handle->hash_stat[STAT_SIZE]; in hash_grow(); hash.h: (1.36 version picked up) - Copyright comment changed. - The following line removed: static char * hash_grow(); /* error text (internal) */ hex-value.c: (1.36 version picked up) - Copyright comment changed. - The following routine was added: #ifdef VMS dummy2() { } #endif input-file.c: (1.36 version picked up) - Copyright comment changed. - The commented out declaration was removed (but not the comment out code) /* static int file_handle; /* <0 === not open */ - The explict declaration of the pre prameter was added to the routine input_file_open(). - The explict declaration of the routine do_scrub_next_char() was added inside the routine input_file_give_next_buffer() in a local scope. input-file.h: (1.36 version picked up) - Copyright comment changed. input-scrub.c: (1.36 version picked up) - Copyright comment changed. - The macro AFTER_STRING was changed from: #define AFTER_STRING (" ") /* bcopy of 0 chars might choke. */ to: #define AFTER_STRING ("\0") /* bcopy of 0 chars might choke. */ - The varables used by the ifdef'ed DONTDEF code was removed (why not just also ifdef'ed?): char *p; char *out_string; int out_length; static char *save_buffer = 0; extern int preprocess; m68k-opcode.h: (1.36 version merged in) - Copyright comment changed. - The bras and bsrs were ifdef'ed NeXT to not use word displacements. - some reordering of the movem and fmovem type instructions. - all m68851 stuff pulled in (comments and opcodes), pmmu.h was removed. m68k.c: (1.36 version merged in) - Copyright comment changed - Lots of changes related to the DBCC and DBCC68000 with jumps to jumps (see GAS 1.36 version change log). - The characters 'e' and 'E' were added to FLT_CHARS[] - In the md_relax_table the long branches (BRANCH,FBRANCH & PCREL) had their forward and backward reach changed by 2 where (the 2 was removed from the expression). - Constants for the BCC68000 and DBCC branch types were added as well as entries in the md_relax_table. - The .proc pseudo op was added - The register defines for m68851 were added to m68k.c and pmmu.h was removed. - Fixed a bunch of the macros like add_fix which did NOT have ()'s around the parameters which was the source of a nasty bug NeXT tracked down. - The routine m68k_reg_parse() takes something of the form fp0: and turns the ':' into a ',' . - A fix to handling big numbers (greater than 32 bits) as a floating-point bit pattern was made to put the bits out in the correct order. The loop was changed from: for(wordp=generic_bignum;offs(opP->con1)--;wordp++) to: for(wordp=generic_bignum+offs(opP->con1)-1;offs(opP->con1)--;--wordp) - The the routine md_atof() was changed to use atof_ieee() from atof_m68k(). - Picked up the md_parse_option() routine. - The NeXT made change to allow hex immediates for floating-point (which broke decimal immediates like #1 and did not work for doubles) was removed. Also see bug #8331 in bug tracker. This change is in the routine m68_ip() (which converts a string into a 68k instruction) in the code for handling immediates which are some type of floating point number that is not a SEG_BIG. This next #if 0 #endif pair comments out these two lines: int_to_gen(nextword); gen_to_words(words,baseo,(long int)outro); and replaces it with this line: /* modified by mself to support hex immediates. */ *(int *)words = nextword; The effect is that the non SEG_BIG expression (which is just an integer, not a floating point value) is not converted to a float but just used as a bit pattern for the floating point number. This fails for doubles since some random bits left in the local array words[] get stuffed into the 64 bit double value and of course breaks the common case of #1 for decimal numbers. - The NeXT use of atof_m68k was removed in the case of getting a floating point immediate and the code to call gen_to_words() was put back. - The NeXT change of #if 0'ing out the line: (I don't know why): gen_to_words(words,2,8L);/* These numbers are magic! */ was removed the the #if removed and the code left in. obstack.c: (1.36 version picked up) - Lots of changes but diffed with the same file in the cc directory (which is based on 1.36) it looks very close to the same. Since the NeXT 2.0 compiler uses it it is picked up here on faith. obstack.h: (1.36 version picked up) - Lots of changes but diffed with the same file in the cc directory (which is based on 1.36) it looks very close to the same. Since the NeXT 2.0 compiler uses it it is picked up here on faith. output-file.c: (1.36 version picked up with NeXT and Mach_O ifdef's put in) - Copyright comment changed. - The NeXT ifdef is to unlink the file before doing a create on it. - The Mach_O ifdef is for the routine output_write(). pmmu.h: removed (1.36 has this stuff moved into m68k-opcode.h and m68k.c) read.c: (1.36 version picked up with NeXT, Mach_O and I860 ifdefs added) - Copyright comment changed. - There is a differing set of changes related to the bumping of the line counters with respect to #NO_APP and #APP. One in the 1.28 version ifdef'ed NeXT and the other in the 1.36 version. The 1.36 set of changes were picked up. - A bunch of changes to the s_set routine (not use in the NeXT compiler suite). read.h: (1.36 version picked up) - Copyright comment changed. strstr.c: (1.36 version picked up with NeXT ifdef code added) - Only Copyright comment changed - The routine strstrn() apperently was added by NeXT and is used in read.c for searching for "#NO_APP\n". struc-symbol.h: (1.36 version picked up with NeXT ifdef code added) - Only Copyright comment changed - The ifdef NeXT code is to the sy_other macro to refer to the n_sect field instead of the n_other field. subsegs.c: (1.36 version picked up) - Only Copyright comment changed subsegs.h: (1.36 version picked up) - Only Copyright comment changed symbols.c: (1.36 version picked up with Mach_O ifdef code added) - Only Copyright comment changed -. - The ifdef Mach_O code is to set the n_sect field. symbols.h: (1.36 version picked up) - Only Copyright comment changed version.c: (1.36 version picked up) - The comments were removed and place in a file ChangeLog write.c: (1.36 version picked up with NeXT, M68K, Mach_O and I860 ifdefs added) write.h: (1.36 version picked up with the NeXT ifdef added) xm. xre. --- Changes to merged in the i860 version of GAS by NeXT Dimension team --- (NDTools-4) - i860.h: This contained the i860 relocation stuff. This was moved into reloc.h Also there was a bug in the GNU version of ld that relocated the RELOC_HIGHADJ wrong. The adjustment was always done out of the assembler and should have been taken out and put back everytime. This is now the case in the NeXT Mach-O link editor in i860_reloc.c . - I860 changes to read.c: big_cons(), get_known_segmented_expression() and stringer() no longer static Mike changed s_file() and s_line() to handle the cpp line directive nesting level by adding discard_rest_of_line() to it. The complier group's version just recognized the extra digits in s_file(). The compiler group's version was retained and Mike's changes were left out. The i860 has it's own align syntax and the "align" pseudo-op is ifdef'ed out for the i860 (what is this symtax?). The i860 has the "org" and "quad" pseudo-op's ifdef'ed out. The as_fatal() call in pobegin() has "... (%s)", errtxt ); added to it. An Intel "lable::" crock, which also makes the symbol global The fix_new() call in cons() has an extra RELOC_VANILLA argument added to it that is ifdef'ed I860. This also requires i860.h to be included which defines RELOC_VANILLA to be added at line 37: #if defined(I860) #include <i860.h> #endif - I860 changes to write.c: Added at line 50 (for the NO_RELOC relocation r_type) #if defined(I860) #include "i860.h" #endif The variable next_object_file_charP is not static for the i860 (ifdef'ed I860). fix_new has an extra prameter r_type (ifdef'ed I860) and it is set in to the fixP struct via: fixP->fx_r_type = r_type; also ifdef'ed I860. In write_object_file() after the relax segment calls the text alignment is forced to 32 byte alignment, the data and bss to 16 byte alignment. The code for text at line 316 is: /* Check/force alignment here! */ #if defined(I860) text_siz = (text_siz + 0x1F) & (~0x1F);/* Keep 32 byte alignment (most restrictive) */ text_last_frag->fr_address = text_siz; /* and pad the last fragment.*/ #endif for data at line 388 is: #if defined(I860) data_siz += (16 - (data_siz % 16)) % 16; /* Pad data seg to preserve alignment */ data_last_frag->fr_address = data_siz; /* to quad-word boundries */ #endif and for bss at line 361 is: #if defined(I860) local_bss_counter=(local_bss_counter+0xF)&(~0xF); /* Pad BSS to preserve alignment */ #endif The call to fix_new() in write_object_file() has an extra parameter added to it, NO_RELOC, which is ifdef'ed I860. At line 522: #if defined(I860) fix_new(lie->frag,lie->word_goes_here - lie->frag->fr_literal,2, lie->add,lie->sub,lie->addnum,0,NO_RELOC); #else fix_new(lie->frag,lie->word_goes_here - lie->frag->fr_literal,2, lie->add,lie->sub,lie->addnum,0); #endif In write_object_file() a bunch of checks were added. Just before emitting relocations at line 675: know(next_object_file_charP== (the_object_file+(N_TXTOFF(the_exec)+the_exec.a_text+the_exec.a_data))); Just before emiting the symbols at line 684: know(next_object_file_charP == (the_object_file + N_SYMOFF(the_exec)) ); Just before emiting the strings at line 710: know(next_object_file_charP == (the_object_file + N_STROFF(the_exec)) ); In fixup_segment() the switch statement for immediate displacement types for case 0 is ifdef'ed I860 with this change (at line 1209): #if defined(I860) fixP->fx_addnumber = add_number; /* * fixup_segment is expected to return a count of the number of * relocation_info structures needed for an object module. * Two specific relocation types encode only the high half * of an address, and so are followed by a second relocation_info * structure which encodes the low half. We allow for this * by bumping seg_reloc_count an extra time here. * * The extra item is generated in emit_relocations(). */ if ( fixP->fx_addsy && (fixP->fx_r_type==RELOC_HIGH || fixP->fx_r_type==RELOC_HIGHADJ)) { ++seg_reloc_count; } md_number_to_imm (place, add_number, size,fixP,this_segment_type); #else md_number_to_imm (place, add_number, size); #endif and for case 1 the comment was added (at line 1232): case 1: /* Not used in i860 version */ In emit_relocations() the following line was ifdef'ed in the other two were else'ed out (at line 1276): #if defined(I860) ri . r_type = fixP -> fx_r_type; #else /* I860 */ /* These two 'cuz of NS32K */ ri . r_bsr = fixP -> fx_bsr; ri . r_disp = fixP -> fx_im_disp; #endif /* I860 */ In emit_relocations() at the end of the loop processing the fixS structures the following lines were added to handle split relocations (at line 1425): #if defined(I860) /* Whenever we have a relocation item using the high half of an * address, we also emit a relocation item describing the low * half of the address, so the linker can reconstruct the address * being relocated in a reasonable manner. * * We set r_extern to 0, so other apps won't try to use r_symbolnum * as a symbol table indice. We OR in some bits in bits 16-23 of * r_symbolnum so it is guaranteed to be outside the range we use * for non-external types to denote what segment the relocation is in. */ if ( fixP->fx_r_type == RELOC_HIGH || fixP->fx_r_type == RELOC_HIGHADJ ) { ri.r_length = nbytes_r_length [fixP->fx_size]; ri.r_pcrel = fixP->fx_pcrel; ri.r_address = fixP -> fx_frag->fr_address + fixP->fx_where - segment_address_in_file; ri.r_extern = 0; ri.r_type = RELOC_PAIR; /* Hide the low half of the addr in r_symbolnum. More overloading...*/ ri.r_symbolnum = (fixP->fx_addnumber & 0xFFFF) | 0x7F0000; md_ri_to_chars((char *) &ri, ri); append(&next_object_file_charP, (char *)&ri, (unsigned long)sizeof(ri)); } #endif --- Changes made to do the merges of 1.36 and i860 versions --- - Removed the cpp macro "error" which was set on the compiler line to -Derror=as_fatal and changed the 4 uses in hash.c, xmalloc.c and xrealloc.c to just use as_fatal. - Added the cpp macro M68K for 68k specific ifdef that are needed (like in Mach-O.c). This is instead of the "default case" without a target processor macro meaning that it is the 68k case. This is set in the Makefile as the target processor that the assembler is for in the make macro COPTS. - Changed the only use of the cpp macro CROSS in output-file.c to use NeXT to get rid of this macro. The line of code that is ifdef'ed is is the unlink of "name" in output_file_create(). - Removed a.out.h and letting the one in ../include get used which is a merge of the original and includes NeXT's files (nlist.h and reloc.h). - Removed the file atom.c since Mach-O.c replaces it (also removed all the code in write.c that used it). - Removed all machine specific files except for the target processors that NeXT uses. The remaining code that used this stuff has been ifdef'ed where needed to preserved the code in the files we use. - Removed the files gdb.c, gdb-file.c, gdb-symbols.c, gdb-blocks.c and gdb-lines.c and ifdef'ed DONTDEF the code in as.c, read.c and write.c that used this stuff since the GNU 1.36 version of GAS did the same. - Removed the files m-68k.h, m-sun3.h, m-hpux and m-generic and ifndef'ed the include of m-68k.h out of m-68k.c. - Removed the files atof-m68k.c atof-m68k-assist.s since they are no longer used (see the change below for the -27 compiler release). And replaced the the file atof-m68k.c with the 1.36 atof-ieee.c . The 2.0 Release (the -32 compiler release) Changes for the Warp ?? release (the -27 compiler release): - Fixed m68_ip() to handle hex immediate operands to floating point insn's. Now fadds #0xffffffff,fp0 works correctly. The fix only works for .s, not for .d or .x. This orignally worked, but was broken by NeXT's mods to atof-m68k.c. (mself) - Added new 68040 floating-point instructions to m68k-opcode.h (mself) - Changed the name of the the section generated by the .objc_selector_refs directive from __selector_refs to __message_refs and set the flags field of this section to S_LITERAL_POINTERS. This change requires a link editor that knows how to handle a S_LITERAL_POINTERS section. - Changed m68k.c to use the reguar atof (actually strtod) instead of using atof-m68k.c and atof-m68k-assist.s, since these instructions will be emulated on th '040. (mself) Changes for the Warp ?? release (the -26 compiler release): - Added the file Mach-O.c and the ablity to have a subset of a fixed number of sections. All changes ifdef'ed MACH_O. This removes atom.c (ifdef'ed out). New sections include const, literal4, literal8, 11 new objc sections, etc. Basicly a lot of changes. Changes for the Warp 3 (OS update) release (the -25 compiler release): - Added scattered relocation entries to the assembler in emit_relocation() in write.c (see extensive comments in there and in <reloc.h>). - Changed fixup_segment() in write.c and md_estimate_size_before_relax() in m68k.c to make branches to lables that persist on output to be long in length and have a relocation entry (to make scattered loading in the link editor work). This was done by using the value of 3 in fx_pcrel (see the comment in write.h) for force this to happen. Changes for the Warp ?? release (the -24 compiler release): - Fixed the bug that would not assemble the following instruction: L2:movl #1,@(_down:l,d7:l:4) The fix was a bug in the macro use in m68k.c for add_fix() which the macro did not put ()'s around it's arguments (bugs 5207 and 5270). - Fixed assembler's preprecessor inserts) to optionally recognize this new number. - Changed the section alignment of the text section to 2 byte alignment so that scattered loading will work (the branch table of the shlibs will not move). Changes for the 2.0 impulse X.X release (the -23 compiler release): - Now is linked with libsys. Changes for the 2.0 impulse X.X release (the -22 compiler release): - Allow symbol names to appear in ""'s . This is so that the symbol names for methods can be "+[Class(category) method:name:]" and tools will not have to go through the objective-C section to get these names. Changes how get_symbol_end() works and how the callers of it use it. Changes for the 2.0 impulse X.X release (the -19 compiler release): - as is no longer installed as as-<version_number> to match the rest of the project. - Updated atom.c to the changes to CPU_TYPE and CPU_SUBTYPE with the changes to <sys/machine.h> Changes for the 0.91 release (the -10 compiler release): * s.stone fixed a bug in `#APP', `#NO_APP' that affected read.c & strstr.c. + Fixed a bug in converting to Mach-O object files with the new sections for the objective-C runtime. The bug was if a local relocation item refered to a symbol plus an offset the incorrect section number could be assigned if the value of the symbol plus offset was in a different section than the value of the symbol. This is an un fixable bug in atom(1) but fixed in here by moving the assignment of the section number into the r_symbolnum field into the assembler and using just the symbol's value (not plus the offset) to pick the section number. The fix is in write.c in emit_relocation() (plus a call to a new function get_objc_section_bounds() before calling emit_relocation). + Fixed a bug where a file had no symbols and the result was a Mach-O file. What would happen was a 4 (the size of the string table) was written at offset 0 in the output file (overwriting the magic number). Also did some major clean up of atom.c and removed all the garbage that did not apply (about half of what was there). + Added the .reference pseudo op to read.c. This was added for the new objective-C runtime to use so that archive semantaic could be maintained but no globals (that 'C' could use) are created. + Fixed the exponent overflow handling in atof-m68k.c to not print a warning (ifdef NeXT) and to get the right answer (a bzero of the 'words' as added, and corrected the reversed sign for infinities). New notes go at the TOP of this file. | http://opensource.apple.com//source/cctools/cctools-667.4.0/as/notes | CC-MAIN-2016-44 | refinedweb | 23,782 | 76.42 |
Outlier Test in Python
Learn how to test for outliers in datasets
In order to start performing outlier tests, we will import some data of average wind speed sampled every 10 minutes, also used in the Normality Test Tutorial.
data = pd.read_csv('') df = data[0:10] table = FF.create_table(df) py.iplot(table, filename='wind-data-sample')
In any set of data, an
outlier is a a datum point that is not consistent with the other data points. If the data sampled from a particular distribution then with high probability, an outlier would not belong to that distribution. There are various tests used for testing if a particular point is an outlier, and this is done with the same null-hypothesis testing used in Normality Tests.
Dixon's Q-Test is used to help determine whether there is evidence for a given point to be an outlier of a 1D dataset. It is assumed that the dataset is normally distributed. Since we have very strong evidence that our dataset above is normal from all our normality tests, we can use the Q-Test here. As with the normality tests, we are assuming a significance level of $0.05$ and for simplicity, we are only considering the smallest datum point in the set.
For more information on the choice of 0.05 for a significance level, check out this page.
def q_test_for_smallest_point(dataset): q_ref = 0.29 # the reference Q value for a significance level of 95% and 30 data points q_stat = (dataset[1] - dataset[0])/(dataset[-1] - dataset[0]) if q_stat > q_ref: print("Since our Q-statistic is %f and %f > %f, we have evidence that our " "minimum point IS an outlier to the data.") %(q_stat, q_stat, q_ref) else: print("Since our Q-statistic is %f and %f < %f, we have evidence that our " "minimum point is NOT an outlier to the data.") %(q_stat, q_stat, q_ref)
For our example, the Q-statistic is the ratio of the absolute distance between the smallest and closest number in the set, to the range of our dataset. This means:$$ \begin{align*} Q = \frac{gap}{range} \end{align*} $$
For our example, we will take 30 values from our dataset that contains the minimum value in full dataset, and apply the test on that sample. Then we'll convert our array to a list and sort it by increasing value.
dataset = data[100:130]['10 Min Sampled Avg'].values.tolist() dataset.sort() q_test_for_smallest_point(dataset)
Since our Q-statistic is 0.023077 and 0.023077 < 0.290000, we have evidence that our minimum point is NOT an outlier to the data.
To properly visualize our
critical height, we can make a scatter plot with the dataset points in increasing order and draw a line for our critical height. This critical height is the threshold such that if our lowest point in the dataset was lower than it, than it would be considered an
outlier. To derive this value, we just take
from a look-up table and then plug it into our formula for $Q$ above, replacing our smallest value with an unknown $x$$$ \begin{align*} 0.29 = \frac{5.5 - x}{26.0} \end{align*} $$
and therefore we get$$ \begin{align*} x = -2.04 \end{align*} $$
x = [j for j in range(len(dataset))] y1 = dataset y2 = [-2.04 for j in range(len(dataset))] trace1 = go.Scatter( x = x, y = y1, mode = 'lines+markers', name='Dataset', marker=dict(symbol=[100, 0]) ) trace2 = go.Scatter( x = x, y = y2, mode = 'lines', name='Critical Line' ) data = [trace1, trace2] py.iplot(data, filename='q-test-scatter')
Since our smallest value (the holoed out circle) is higher than the critical line, this validates the result of the test that the point is
NOT an outlier. | https://plot.ly/python/outlier-test/ | CC-MAIN-2017-43 | refinedweb | 626 | 61.87 |
With the Object API you can create and manage Open Graph objects using a simple HTTP-based API. The Object API is supported for both mobile and web apps.
The Object API is one of two ways that objects can be created. Objects can also be created by adding special markup to web pages that you host. These self-hosted objects must be hosted on your web server and all objects are public. Creating self-hosted objects is covered in our Using Objects documentation. In contrast, the Object API lets you create objects through a single HTTP call without the requirement for a web server to host them. The Object API can also create objects that have custom or non-public privacy settings.
The Object API also includes an API for you to upload images to Facebook to use in objects and stories.
Objects must be in English. Learn how to localize objects created via the object API in our documentation on localizing open graph stories.
With the Object API you can create two kinds of objects:
An app-owned object. These objects are used across your entire app. They are public, just like self-hosted objects. Anyone who uses your app can see them and they can be attached to any story your app creates.
A user-owned object. A user-owned object is owned by a user. The privacy setting on a user-owned object can be changed so it's only visible to a subset of a person's audience. Stories created with a user-owned object are specific to the person attached to it.
In order to create an app-owned object you make a call to a URL that includes the type of object you want to create along with a JSON-encoded version of the object attached. For example, if you want to create an app-owned that is a common
book object you can make this call:
curl \ -X POST \ "" \ -F "access_token=$APP_TOKEN" \ -F "object={ \"title\":\"The Hunt for Red October\", \"image\":\"\", \"url\":\"\", \"description\":\"Classic cold war technothriller\", \"data\": { \"isbn\":\"425240339\" } }"
There are several things to note with this call:
The path:
/app/objects/{object_type}.
/appis shorthand for your app. It's roughly equivalent to
/me, but for apps instead of people.
object/{object_type}provides Facebook context on what type of object you want to create. In our example, we create a
books.bookobject type. If you use a custom object type the format will be something like
app_namespace:object_typeinstead of
books.book.
The access token: In order to create an app-owned object, you must use an app access token. App-owned objects are public and can be used with multiple people and multiple stories.
The object: The object is a JSON-encoded version of an object. The types used here are the types used in the
books.book type. You can also include any of the properties of standard objects. Note that the
type is not included in the call. This is because it's implied from the path that you used when you made the call.
The image: The image associated with this object. It will likely be used with stories told with it. Follow the guidelines for image sizes when referencing an image. (See
og:image for guidelines.)
The URL: The object URL is what a story on Facebook links to when it shows up in News Feed. Note that any URL you reference must be below the URL that's set in your app dashboard. This isn't required for hosted objects. You can also use this URL to set up deep linking to your native app, in which case you must use the
canonical_url of an App Link Host in this parameter.
When you create an object with the API, the
id of the new object is returned:
{"id":"505124632884107"}
With that ID, you can publish a story to Facebook.
You can read an API object just like any other object or action in the graph, with its ID:
curl
Deleting an object is done by ID:
curl –X DELETE{app-access-token}
Creating a user-owned object is very similar to creating an app-owned object. Here's an example call for the
deliciousapp app that creates a custom object that has the object type
meal:
curl -X POST "" -F "access_token={user-or-app-access-token}", -F "privacy={'value':'SELF'}", -F "object={\"title\":\"Chicken Enchiladas\", \"image\":\"\", \"url\":\"\", \"description\":\"My homemade enchiladas\", \"data\":{ \"calories\":1200 } }"
The path:
/me/objects/{custom_object_type}.
/meis shorthand for the person using your app.
object/{custom_object_type}provides Facebook context on what type of object you want to create. In our example, we create a
deliciousapp:mealobject type. If you use a common object type, the path to the object would be
books.bookinstead of
deliciousapp:meal.
The access token: To create a user-owned object you must use either a user token or an app access token.
The privacy parameter: This call includes a
privacy parameter. In this example, the value is
SELF, which means that the post is visible only to the person making the call. Your app can set this value to another setting if it informs the person using the app of the privacy setting when the object is created. Your privacy setting can not be more public than what the person has chosen for your app. For example, if a person has chosen 'Friends' you can't set the privacy of an object to 'Public.' If you don't include a
privacy parameter, the default privacy setting that the person has chosen for the app will be used for the privacy of the object.
The object: The object is a JSON-encoded version of an object. This is a custom object type and includes a
data section for non-default object properties, as mentioned in the section on properties. Note that the
type is not included in the call. This is because it's implied from the path that you used when you made the call.
The image: The image associated with this object. This image will be used when a story appears on timeline or News Feed. Follow the guidelines for image sizes when referencing an image. (See
og:image for guidelines.) The Object API also makes it possible to upload an image and then reference it in the object when it's created.
The URL: The Object URL is what a story on Facebook links to when it shows up in News Feed. Note that any URL you reference must be below the URL that's set in your app's website URL that's set in your App Dashboard. This isn't required for hosted objects. You can also use this URL to set up deep linking to your native app.
A successful call returns the ID of the object:
{"id":"509707412404500"}
Read the object with its ID:
curl{user-access-token}
Delete the object by its ID
curl –X DELETE{user-access-token}
Publishing a story on Facebook with objects created via the object API is very similar to publishing stories with self-hosted objects. There are only two differences: First, you must specify the object by ID instead of using a URL, and second, the object must be visible to the person posting the story. Here's an example of creating a story with the
books.reads action. For the book argument, it uses the ID of an object you created with the Object API:
curl -X POST "" -F "book=505124632884107" -F "access_token=$TOKEN"
Object properties are added to the top-level of the JSON object that you pass into your call to create an object.
Any property that's not a standard object property should be included as a
data: {...} element on the JSON object you pass in when creating an object. Here's an example of a custom
mountain type that includes an
elevation custom property:
{ title: 'Mt. Rainier', type: 'myapp:mountain', image: '', url: '', description: 'Classic cold war technothriller', data: { elevation: 14411 } }
This format is the same as what an object looks like when it's read back from the database via the Graph API.
Facebook offers a staging service that allows you to upload images in order to reference them in both objects and actions. This service allows you to upload an image via an HTTP endpoint and the service returns an opaque token you can use as an image URL when referencing images in objects and actions.
Using the staging service with the Object API is optional and is provided only for convenience. If you want to use an image hosted on your own server, just use a URL to your image wherever an image argument is required.
The staging service is only available for use with user-owned objects and actions. It can't be used with app-owned objects.
If you want to stage an image for a user-owned object, use this URL:
As an example, if you wanted to stage an image to use in a user-owned object, a call with curl would look like this:
curl -X POST \ \ -F "file=@images/prawn-curry-1.jpg" \ -F "access_token=$USER_ACCESS_TOKEN"
The
file argument is how you attach the file to the HTTP call.
Note: Curl does not always add an image type for PNG images, so you may have to add the image type manually if your images are pngs:
-F file='@foo.png;type=image/png'
A call to the service will return an opaque
uri value that you can use in later calls:
{"uri":"fbstaging://graph.facebook.com/staging_resources/MDAxMDE1MTU3NjIyNzExNzYwNA=="}
Although this is a URI, it's for internal use only and is only a way to pass around an identifier. Referencing it as if it were an image will not return any useful data.
Once you have the handle, you can easily create an object with the Object API and reference the image that you just uploaded. This is an example of creating a user-owned object:
curl -X POST "" -F "access_token=$USER_ACCESS_TOKEN" -F object= '{"title": "Prawn Curry", "image": { "url": "fbstaging://graph.facebook.com/staging_resources/MDAxMDE1MTU3NjIyNzExNzYwNA==", "user_generated": true } }'
That call returns an object ID you can use to post an action, as referenced above. Stories created with this object use the image you uploaded to the staging service.
You can also attach images to actions as well as objects. We'll use an example of uploading two user-generated photos with an action.
First, upload two photos:
curl -X POST -F file=@images/us-at-the-door.jpg -F access_token={user-access-token} curl -X POST \ -F file=@images/band-on-stage.jpg -F access_token={user-access-token}
These calls return two URIs:
{"uri":"fbstaging://graph.facebook.com/staging_resources/MDAxMDE1MTU3NjIyNzExNzYwNA=="} {"uri":"fbstaging://graph.facebook.com/staging_resources/MDAxMDE1MTU3NjIyNzExNzYwNZ=="}
You can then reference the images in an action. In this example, we create a custom action called
concertapp:see:
curl -X POST -F image[0][url]=fbstaging://graph.facebook.com/staging_resources/MDAxMDE1MTU3NjIyNzExNzYwNA== -F image[0][user_generated]=true -F image[1][url]=fbstaging://graph.facebook.com/staging_resources/MDAxMDE1MTU3NjIyNzExNzYwNZ== -F image[1][user_generated]=true -F concert= -F access_token={user-access-token}
It's possible to specify more than one image with an object, much like you can with an
og:image with self-hosted objects. This is useful because different types of stories require different resolution images, depending on the device the viewer is using.
In a previous example, we specified a single image:
{ title: 'Mt. Rainier', type: 'myapp:mountain', image: '', ... }
But you can also include more than one image with the following syntax:
{ title: 'Mt. Rainier', type: 'myapp:mountain', image:[ {url: ''}, {url:''} ], ... }
It's possible to iterate over the user-owned and app-owned objects that your app has created. The syntax that you use depends on if you want to iterate over user-owned or app-owned objects:
A JSON dictionary is returned that includes a
data array for the data and a
paging element to give you the ability to page through objects. To learn more about using paging, please see the paging section of the Graph API documentation.
This is an example of app-owned data. User-owned data will be similar.
{ "data": [ { "id": "534973293215437", "url": "", "type": "books.book", "title": "The Hunt for Red October", "image": [ { "url": "" } ], "data": { "isbn": "425240339" }, "updated_time": "2013-04-12T21:06:15+0000", "created_time": "2013-04-12T21:06:15+0000", "application": { "id": "190052254412546", "name": "Your App Name", "url": "" }, "is_scraped": false }, ], "paging": { "next": "<token>&limit=25&offset=25&__after_id=521388851272635" } }
Facebook offers an object browser that lets you browse objects that you've created:
From this object browser you can also click on links and edit existing objects. You can also select on the upper-right-hand side of the browser from your apps to see app-owned objects or people to see user-owned objects. You can also create new objects from a drop down in the upper-right-hand side. | https://developers.facebook.com/docs/sharing/opengraph/object-api | CC-MAIN-2018-30 | refinedweb | 2,162 | 54.42 |
Using Fatal Errors to Add Clarity and Elegance to Swift
The Missing Manual
for Swift Development
The Guide I Wish I Had When I Started Out
Join 20,000+ Developers Learning About Swift DevelopmentDownload Your Free Copy
A few months ago, I stumbled upon a discussion in the thoughtbot guides about the use of fatal errors in Swift. It seems every developer has an opinion about fatal errors and a consensus hasn't been reached yet in the Swift community. The only mention in The Swift Programming Language is in the section that discusses the
guard statement and early exit.
When I fist encountered the
fatalError(_:file:line:) function, it reminded me of the
abort() function. Even though both functions cause the immediate termination of your application, the
fatalError(_:file:line:) function is different and, in the context of Swift, it is much more useful.
What to Do When You Don't Know What to Do
Ever since I started working with Swift, I have been struggling with the implementation of the
tableView(_:cellForRowAt:) method. If you think that sounds silly, then take a look at the following example.
override func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell { if let cell = tableView.dequeueReusableCell(withIdentifier: SettingsTableViewCell.reuseIdentifier, for: indexPath) as? SettingsTableViewCell { // Configure Cell cell.textLabel?.text = "Some Setting" return cell } else { return UITableViewCell() } }
There are several variations of the above implementation and I have tried all of them. The idea is simple. We expect an instance of the
SettingsTableViewCell class if we ask the table view for a cell with the reuse identifier of the
SettingsTableViewCell class. Because the
dequeueReusableCell(withIdentifier:for:) method returns a
UITableViewCell instance, we need to cast the result to an instance of the
SettingsTableViewCell class.
This is inconvenient since we always expect to receive a
SettingsTableViewCell instance if we ask the table view for a cell with the reuse identifier of the
SettingsTableViewCell class. We could use
as! instead of
as?, but that is not a solution I feel comfortable with. I avoid the exclamation mark whenever I can.
If, for some reason, something goes wrong, we return a
UITableViewCell instance from the
tableView(_:cellForRowAt:) method. But that should never happen. Right?
While this is fine and necessary to make sure we return a
UITableViewCell instance from the
tableView(_:cellForRowAt:) method, I hope you can see that we are implementing a workaround for a scenario we do not expect, a scenario that should never occur.
Guard Against Unexpected Events
Every time I implement
tableView(_:cellForRowAt:) I wonder if there is a better approach. And there is. We need to guard against the event that the table view hands us a
UITableViewCell instance we don't expect and, if that happens, we throw a fatal error.
override func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell { guard let cell = tableView.dequeueReusableCell(withIdentifier: SettingsTableViewCell.reuseIdentifier, for: indexPath) as? SettingsTableViewCell else { fatalError("Unexpected Index Path") } // Configure Cell ... return cell }
The application crashes if a fatal error is thrown. Why is this better? This is a better solution for two reasons.
Unexpected State
If the application runs into a scenario in which a fatal error is thrown, we communicate to the runtime that the application is in a state it does not know how to handle.
In the first example, the solution was to return a
UITableViewCell instance. Does that solve the problem? No. The application ignores the situation and avoids causing havoc by returning a
UITableViewCell instance. The application avoids dealing with the unexpected state it is in.
Finding and Fixing the Bug
If the application runs into a state we did not anticipate, it means a bug has slipped into the codebase. If the application is terminated due to a fatal error being thrown, we have work to do. It means we need to find the bug and fix it.
The above solution, using a
guard statement and throwing a fatal error, is a solution I am very happy with. It avoids the obsolete
if statement of the first implementation of the
tableView(_:cellForRowAt:) method and it correctly handles the situation. The result is a much more elegant implementation of the
tableView(_:cellForRowAt:) method. And it adds clarity to the implementation of the
tableView(_:cellForRowAt:) method.
Use Fatal Errors Sparingly
This doesn't mean that you need to use fatal errors whenever you want to avoid error handling or the application enters a state that is hard to recover from. I use fatal errors only when the application can enter in a state it was not designed for. Take a look at the following example.
import Foundation enum Section: Int { case news case profile case settings var title: String { switch self { case .news: return NSLocalizedString("section_news", comment: "news") case .profile: return NSLocalizedString("section_profile", comment: "profile") case .settings: return NSLocalizedString("section_settings", comment: "settings") } } } struct SettingsViewViewModel { func title(for section: Int) -> String { guard let section = Section(rawValue: section) else { fatalError("Unexpected Section") } return section.title } }
The
title(for:) method of the
SettingsViewViewModel struct does not expect a value it cannot use to instantiate a valid
Section instance with. If the value of the
section parameter is invalid, it would not know what to do. In that scenario, a fatal error is thrown.
If the application does enter that scenario, it means you made a logical mistake and it is your task to find out why and how it can be resolved.
Clarity and Elegance
The use of the
fatalError(_:file:line:) function has made my code more readable without compromising the inherent safety of the Swift language. It adds clarity to the code I write and Swift regains its elegance. Give it a try and let me know if you like it.
The Missing Manual
for Swift Development
The Guide I Wish I Had When I Started Out
Join 20,000+ Developers Learning About Swift DevelopmentDownload Your Free Copy | https://cocoacasts.com/using-fatal-errors-to-add-clarity-and-elegance-to-swift/ | CC-MAIN-2020-45 | refinedweb | 990 | 55.03 |
Contents
This programming problem really is nasty. Programming is conceptually one of the most difficult jobs human beings do. Something is needed to make this difficulty manageable. Functions and modules are at the heart of strategies and tactics which achieve this.
The Human brain is capable of holding between 5 and 10 facts, concepts or ideas in short term memory at the same time. Without tools to help us design programs in a modular manner, one component at a time, the complexity of this task will very soon overwhelm even the best programmer. When programs are designed using smaller components the complexity is contained within neat functions and packages. Many of the objects which belong inside a function or package don't need to be visible outside.
This means that a programmer can write a program unit which makes use of other components without needing to know or think very much about them. Did you know, or need to know or care how the print or raw_input functions worked internally in the Python programs you have written ?
At the simplest level a function just does a job and quits. This allows a program to be split up to the limited extent of which parts execute when.
def croak(): print "groan, ", print "winge, ", print "This,", croak() print "that."
A function is an indented block of code following a def statement. The word following def is the function name. The function is called by stating its name followed by () brackets, including parameters if any. This program outputs:
This, groan, winge, that.
The order of statement execution is clear from the output.
If function identification and definition were a trivial task, designing programs would be very easy. It isn't so programming experience counts. The only way to get experience is to read and write and run a lot of source code.
The approaches adopted by experienced programmers include:
a. Split the program actions into a number of well defined tasks.
Wherever you are tempted to copy and paste code from one place in a program to another, could your cut and pasted code usefully go into its own function ? If you are only copying a call to an existing program unit e.g. print, the answer is probably not. If you can put a simple name and definition to what the cut and pasted code does the answer is probably yes.
b. Define functions to minimise communication between the function and the rest of the program, and to maximise containment of data and actions within the function. Designers of electrical and mechanical components also decide interfaces like this.
c. Where possible try to limit communication between functions and their environment to well-defined interfaces, and reduce possible side effects. Side effects can occur through the use of global variables and by performing input and output within the function. Clarifications of this rule are in rules d. and e.
d. Avoid using global data unless doing this requires you to have many functions with similar parameters and return values instead.
E.G. if your program handles a database which is accessed in most or all functions (which mainly exist to access it) then you may as well make the database accessible through a global variable in preference to having more parameters.
e. If a function sensibly does some input or output work, then make it only access one file (or possibly a set of similar files in a similar manner).
E.G. i. If a function calculates a square root this function becomes more flexible and self-contained if it returns the square root to the calling program unit than if it prints it on the console.
E.G. ii. In line with objective a. it makes sense to have separate functions for reading and writing external files especially if data is parsed on input or output.
When a function is defined it can be given any types of data object through named parameters.
def square(x): # x is a named parameter print x*x
This function definition prints the square of the object locally named x. If called with a parameter of 3, the local reference x will refer to the object 3 at the time the function is called and run:
>>> def square(x): # definition ... print x*x # function body ... >>> square(3) # function call 9
We can just as easily call square() with a floating point parameter:
>>> square(2.5) 6.25
If we specify more than 1 parameter in a comma separated positional parameter list the position of parameters will be the same in both definition and call:
>>> def divide(x,y): # definition ... print x/y ... >>> divide(27,9) # call 3
Parameters can also be supplied through name=value pairs. These allows complex function parameters to be specified by the programmer in any order. Name=value pairs can also be given default values by the function defining programmer, so that when a function using programmer doesn't need to know about a parameter he or she can ignore it and still use the function, but in a less configured manner. Consider the following program:
import sys from Tkinter import * widget = Button(None,text="push me!",command=sys.exit) widget.pack() widget.mainloop()
When run this creates a window with conventional controls to minimise, resize and exit, and a button with the legend: push me! . Clicking on the button causes the application to exit. Removing the text="push me!" parameter from the Button() call:
widget = Button(None,command=sys.exit)
This program still runs, but without any button text.
Any object can be returned using a return statement. A function will stop executing when it executes a return statement.
>>> def total(list): ... tot=0 ... for item in list: ... tot=tot+item ... return tot ... >>> total([2,4,3]) 9
The returned value can be used directly as above, or assigned to a reference for use later on:
>>> sum=total((3,4)) >>> print sum 7
We have seen integer and floating point parameters interchanged if the operations performed on them by the function make sense. The same applies to sequence parameters such as lists and tuples.
References which already exist at the module level (i.e. not inside a def or class indented block) of the enclosing source file are considered global so they can be read directly by functions:
>>> def readonly(): ... print a # read access ... >>> a=5 # a is global >>> readonly() 5
Writing (assigning) to a reference within a function is another matter:
>>> def writelocal(): ... b=3 # write access to b so this b is local to writelocal() ... print b ... >>> b=5 >>> writelocal() 3 >>> print b 5
This assigns to a local variable called b, not the global variable of the same name which retains its original value. However, if you assign to a global list member within a function you change the global list:
>>> c=[2,4,6] >>> def listmember(): ... c[1]=5 ... >>> print c [2, 4, 6] # gets original value, have not called listmember yet >>> listmember() >>> print c [2, 5, 6]
Assigning an entirely different list to reference c within a function would change what the local reference c refers to, but the original list would still be available globally.
>>> print c [2, 5, 6] >>> def newlist(): ... c=[1,2,3] # defines a new list for local reference c ... >>> newlist() >>> c [2, 5, 6]
If we want to override a global name within a local scope we can apply the global keyword:
>>> c [2, 5, 6] >>> def globalref(): ... global c # going to mess with name c at module level ... c=[1,2,3] ... >>> globalref() >>> c [1, 2, 3]
Python's built in names
We've already encountered a few, such as print, len() and range(). To use these we didn't have to import any modules containing them; they are part of the Python core language. Computer scientists describe one of the significant features of object oriented programming as "polymorphism", or something taking many forms.
However, if you don't know why you want to override built in names like len, or built in operators like + then don't. To get a list of names to leave well alone (or mess around with) call the dir() function to look at the built-in list named __builtins__ . The dir() function can get you all the names from any module.
>>> dir(__builtins__) ['ArithmeticError', 'AssertionError', 'AttributeError', 'EOFError', 'Ellipsis', 'EnvironmentError', 'Exception', 'FloatingPointError', 'IOError', 'ImportError', 'IndentationError', 'IndexError', 'KeyError', 'KeyboardInterrupt', 'LookupError', 'MemoryError', 'NameError', 'None', 'NotImplementedError', 'OSError', 'OverflowError', 'RuntimeError', 'StandardError', 'SyntaxError', 'SystemError', 'SystemExit', 'TabError', 'TypeError', 'UnboundLocalError', 'UnicodeError', 'ValueError', 'ZeroDivisionError', '_', '__debug__', '__doc__', '__import__', '__name__', 'abs', 'apply', 'buffer', 'callable', 'chr', 'cmp', 'coerce', 'compile', 'complex', 'copyright', 'credits', 'delattr', 'dir', 'divmod', 'eval', 'execfile', 'exit', 'filter', 'float', 'getattr', 'globals', 'hasattr', 'hash', 'hex', 'id', 'input', 'int', 'intern', 'isinstance', 'issubclass', 'len', 'license', 'list', 'locals', 'long', 'map', 'max', 'min', 'oct', 'open', 'ord', 'pow', 'quit', 'range', 'raw_input', 'reduce', 'reload', 'repr', 'round', 'setattr', 'slice', 'str', 'tuple', 'type', 'unichr', 'unicode', 'vars', 'xrange', 'zip']
If this output looks suspiciously like a list your suspicions are well founded.
At a certain stage we learned that boxed games were more usable and interesting if we don't empty them all on the floor at the same time. Children discover this sooner with games which have many similar parts such as jigsaws. Sometimes we want to access parts from more than one game at a time. E.G. we might want to use a dice from one game to play another. When we do this it's a good idea to remember which box it came from though.
If we want all names from a module imported into ours we can empty it on the floor:
from Tkinter import *
But if you empty names from too many modules at once into yours you might find it more difficult figuring out which name does what or where it really belongs. If you say instead:
import math print math.pi # you remembered which box pi came in
Having functions allows the same code to be accessed from different parts of the same program without needing local copies of the code. This solves many problems. However, when programmers work on multiple programs (as they do) this naturally leads to another kind of mess. Instead of filing generally useful related functions and data into modules which can be shared between programs, inexperienced programmers tend to cut and paste functions and data between programs. This approach may work with throw-away code, e.g. intended to solve one-off data conversions etc. Unfortunately, in other situations this inevitably attracts a horrible exploding bug swarm, where cut and pasted code means cut and pasted bugs.
In mechanical engineering terms this would be like a car designer needing new starter motor and battery designs for every car, instead of using a standard range of components across an entire model range.
Enough of the theory. The following module has been saved as stats.py :
# stats.py a simple statistics module
def total(sequence): """ returns total of items in sequence parameter """ total=0 for item in sequence: total+=item # add item to total return total def average(sequence): """ returns average of items in sequence parameter """ n_items=len(sequence) if n_items: # false if empty, avoids /0 tot=total(sequence) # calls the total() function in same module return tot/n_items else: return None # The Python NULL object # and just for fun, a reference to some data the_meaning_of_life_the_universe_and_everything=42
These objects can then be imported and used in other programs, or tested on the interpreter command line:
>>> import stats >>> a=[2,5,8] >>> stats.total(a) # note use of qualified object name 15 >>> stats.average(a) 5
The data reference is also used by qualifying it with the module it comes from:
>>> print stats.the_meaning_of_life_the_universe_and_everything 42
Python bytecode compilation
After the stats module had been imported, another file called stats.pyc appeared in the same directory as stats.py . Python creates a compiled bytecode version from the module source code once they are imported to save this part of the compile-interpret-run cycle being repeated needlessly. Python also detects module source updates and recompiles the .pyc file if needed.
We can check the names available within a module using the dir() built in:
>>> dir(stats) ['__builtins__', '__doc__', '__file__', '__name__', 'average', 'the_meaning_of_life_the_universe_and_everything', 'total']
The names in the list e.g. __doc__ which start and end with two underscores are used internally by the Python system.
Using what is already available in free-software libraries is where the software reuse engineering philosophy starts paying off. What can now be achieved with a few lines of our own code didn't happen by accident, or just because some engineers were altruistic. For a programmer who doesn't sell proprietary software packages (i.e. about 95% of us), nothing is lost by sharing code you develop yourself. Everything is gained when other engineers who use your code send you improvements.
We had to import the math and file library modules to use them, just as we had to import our own stats module. Some library modules are written in 'C' or 'C++' and are compiled into the Python interpreter. If you carry out a search of the Python modules, you won't find a math.pyc or a math.py because Python is not as fast as 'C' for this kind of job.
The availability of Python source code for much of the library helps us dig deeper and learn more. For example, here is part of the bisect.py module from the Python library, which uses a binary split search to perform an insertion sort:
def insort(a, x, lo=0, hi=None): """Insert item x in list a, and keep it sorted assuming a is sorted.""" if hi is None: hi = len(a) while lo < hi: mid = (lo+hi)/2 if x < a[mid]: hi = mid else: lo = mid+1 a.insert(lo, x)
The last statement was of interest. It shows Python lists have a method which inserts an item into the middle of the list, which I previously handled using slice assignments. Let's check this out:
>>> a=[2,4,6] >>> dir(a) # does a list type have an insert() method ? ['append', 'count', 'extend', 'index', 'insert', 'pop', 'remove', 'reverse', 'sort'] >>> a.insert(1,9) # It does, so try it >>> a [2, 9, 4, 6] # insert worked as intended
Looking at the names supported by a list object shows there is an insert object, which will probably do what we saw in bisect.py . So trying it by passing the index before the insert position and the object to be inserted as parameters: a.insert(1,9) we succeed in inserting the object 9 at index 1, moving higher items from 1 up by one.
Having learned how to use other people's modules and create our own we now want to be able to store them in places on the system where they are accessible from any Python project directory or program. We don't want to have to copy modules.
The solution to this requirement comes in the form of an environment variable.
Linux/Unix
On Linux/Unix environment variables names can be echoed e.g:
$ echo $PYTHONPATH PYTHONPATH=/usr/lib/python2.0:/usr/local/lib/python
This shows the current PYTHONPATH value has 2 directories, /usr/lib/python2.0 and /usr/local/lib/python . These 2 folders are seperated by the colon (:).
Setting this environment variable on Linux is achieved by adding the following 2 lines to .bash_profile :
PYTHONPATH=/usr/lib/python2.0:/usr/local/lib/python export PYTHONPATH
If you use the old Bourne Shell you could add these lines to .profile instead. If you use the 'C' shell) on Unix put the following command into .login in the users home directory:
setenv PYTHONPATH /usr/lib/python2.0:/usr/local/lib/python
Microsoft Windows
On older versions of Windows (95/98) these can be set adding the MS-DOS shell command into c:\autoexec.bat :
set PYTHONPATH=C:\Python21;C:\python
Note the use of semicolon (;) to delimit the 2 folders on the path.
On Windows 2000 you can set environment variables using a dialog accessed from the control panel. | http://bcu.copsewood.net/python/notes4.html | CC-MAIN-2017-22 | refinedweb | 2,691 | 61.67 |
Introduction
Personally, I think arrow functions are one of the most awesome syntax additions to the JavaScript language introduced in the ES6 specification — my opinion, by the way. I’ve gotten to use them almost every day since I knew about them, and I guess that goes for most JavaScript developers.
Arrow functions can be used in so many ways as regular JavaScript functions. However, they are commonly used wherever an anonymous function expression is required — for example, as callback functions.
The following example shows how an arrow function can be used as a callback function, especially with array methods like
map(),
filter(),
reduce(),
sort(), etc.
const scores = [ /* ...some scores here... */ ]; const maxScore = Math.max(...scores); // Arrow Function as .map() callback scores.map(score => +(score / maxScore).toFixed(2));
At first glance, it may seem like arrow functions can be used or defined in every way a regular JavaScript function can, but that is not true. Arrow functions, for very good reasons, are not meant to behave exactly the same way as regular JavaScript functions. Perhaps arrow functions can be considered JavaScript functions with anomalies.
Although arrow functions have a pretty simple syntax, that will not be the focus of this article. This article aims to expose the major ways in which arrow functions behave differently from regular functions and how that knowledge can be used to the developer’s advantage.
Please note: Throughout this article, I use the term regular function or regular JavaScript function to refer to a traditional JavaScript function statement or expression defined using the function keyword.
TL;DR
- Arrow functions can never have duplicate named parameters, whether in strict or non-strict mode.
- Arrow functions do not have an
argumentsbinding. However, they have access to the arguments object of the closest non-arrow parent function. Named and rest parameters are heavily relied upon to capture the arguments passed to arrow functions.
- Arrow functions can never be used as constructor functions. Hence, they can never be invoked with the new keyword. As such, a prototype property does not exist for an arrow function.
- The value of this inside an arrow function remains the same throughout the lifecycle of the function and is always bound to the value of this in the closest non-arrow parent function.
Named function parameters
Functions in JavaScript are usually defined with named parameters. Named parameters are used to map arguments to local variables within the function scope based on position.
Consider the following JavaScript function:
function logParams (first, second, third) { console.log(first, second, third); } // first => 'Hello' // second => 'World' // third => '!!!' logParams('Hello', 'World', '!!!'); // "Hello" "World" "!!!" // first => { o: 3 } // second => [ 1, 2, 3 ] // third => undefined logParams({ o: 3 }, [ 1, 2, 3 ]); // {o: 3} [1, 2, 3]
The
logParams() function is defined with three named parameters:
first,
second, and
third. The named parameters are mapped to the arguments with which the function was called based on position. If there are more named parameters than the arguments passed to the function, the remaining parameters are
undefined.
Regular JavaScript functions exhibit a strange behavior in non-strict mode with regards to named parameters. In non-strict mode, regular JavaScript functions allow duplicate named parameters. The following code snippet shows the consequence of that behavior:
function logParams (first, second, first) { console.log(first, second); } // first => 'Hello' // second => 'World' // first => '!!!' logParams('Hello', 'World', '!!!'); // "!!!" "World" // first => { o: 3 } // second => [ 1, 2, 3 ] // first => undefined logParams({ o: 3 }, [ 1, 2, 3 ]); // undefined [1, 2, 3]
As we can see, the
first parameter is a duplicate; thus, it is mapped to the value of the third argument passed to the function call, completely overriding the first argument passed. This is not a desirable behavior.
The good news is that this behavior is not allowed in strict mode. Defining a function with duplicate parameters in strict mode will throw a
Syntax Error indicating that duplicate parameters are not allowed.
// Throws an error because of duplicate parameters (Strict mode) function logParams (first, second, first) { "use strict"; console.log(first, second); }
How do arrow functions treat duplicate parameters?
Now here is something about arrow functions:
Unlike regular functions, arrow functions do not allow duplicate parameters, whether in strict or non-strict mode. Duplicate parameters will cause a
Syntax Errorto be thrown.
// Always throws a syntax error const logParams = (first, second, first) => { console.log(first, second); }
Function overloading
Function overloading is the ability to define a function such that it can be invoked with different call signatures (shapes or number of arguments). The good thing is that the arguments binding for JavaScript functions makes this possible.
For a start, consider this very simple overloaded function that calculates the average of any number of arguments passed to it:
function average() { // the number of arguments passed const length = arguments.length; if (length == 0) return 0; // convert the arguments to a proper array of numbers const numbers = Array.prototype.slice.call(arguments); // a reducer function to sum up array items const sumReduceFn = function (a, b) { return a + Number(b) }; // return the sum of array items divided by the number of items return numbers.reduce(sumReduceFn, 0) / length; }
I have tried to make the function definition as verbose as possible so that its behavior can be clearly understood. The function can be called with any number of arguments from zero to the max number of arguments that a function can take — that should be 255.
Here are some results from calls to the
average() function:
average(); // 0 average('3o', 4, 5); // NaN average('1', 2, '3', 4, '5', 6, 7, 8, 9, 10); // 5.5 average(1.75, 2.25, 3.5, 4.125, 5.875); // 3.5
Now try to replicate the
average() function using the arrow function syntax. I mean, how difficult can that be? First guess — all you have to do is this:
const average = () => { const length = arguments.length; if (length == 0) return 0; const numbers = Array.prototype.slice.call(arguments); const sumReduceFn = function (a, b) { return a + Number(b) }; return numbers.reduce(sumReduceFn, 0) / length; }
When you test this function now, you realize that it throws a
Reference Error, and guess what? Of all the possible causes, it is complaining that
arguments is not defined.
What are you getting wrong?
Now here is something else about arrow functions:
Unlike regular functions, the
argumentsbinding does not exist for arrow functions. However, they have access to the
argumentsobject of a non-arrow parent function.
Based on this understanding, you can modify the
average() function to be a regular function that will return the result of an immediately invoked nested arrow function, which should have access to the
arguments of the parent function. This will look like this:
function average() { return (() => { const length = arguments.length; if (length == 0) return 0; const numbers = Array.prototype.slice.call(arguments); const sumReduceFn = function (a, b) { return a + Number(b) }; return numbers.reduce(sumReduceFn, 0) / length; })(); }
Obviously, that solved the problem you had with the
arguments object not being defined. However, you had to use a nested arrow function inside a regular function, which seems rather unnecessary for a simple function like this.
Can you do this differently?
Since accessing the
arguments object is obviously the problem here, is there an alternative? The answer is yes. Say hello to ES6 rest parameters.
With ES6 rest parameters, you can get an array that gives you access to all or part of the arguments that were passed to a function. This works for all function flavors, whether regular functions or arrow functions. Here is what it looks like:
const average = (...args) => { if (args.length == 0) return 0; const sumReduceFn = function (a, b) { return a + Number(b) }; return args.reduce(sumReduceFn, 0) / args.length; }
Wow! Rest parameters to the rescue — you finally arrived at an elegant solution for implementing the
average() function as an arrow function.
There are some caveats against relying on rest parameters for accessing function arguments:
- A rest parameter is not the same as the internal
argumentsobject inside the function. The rest parameter is an actual function parameter, while the
argumentsobject is an internal object bound to the scope of the function.
- A function can only have one rest parameter, and it must always be the last parameter. This means a function can have a combination of named parameters and a rest parameter.
- The rest parameter, when present, may not capture all the function’s arguments, especially when it is used together with named parameters. However, when it is the only function parameter, it captures all function arguments. On the other hand, the
argumentsobject of the function always captures all the function’s arguments.
- The rest parameter points to an array object containing all the captured function arguments, whereas the
argumentsobject points to an array-like object containing all the function’s arguments.
Before you proceed, consider another very simple overloaded function that converts a number from one number base to another. The function can be called with one to three arguments. However, when it is called with two arguments or fewer, it swaps the second and third function parameters in its implementation.
Here is what it looks like with a regular function:
function baseConvert (num, fromRadix = 10, toRadix = 10) { if (arguments.length < 3) { // swap variables using array destructuring [toRadix, fromRadix] = [fromRadix, toRadix]; } return parseInt(num, fromRadix).toString(toRadix); }
Here are some calls to the
baseConvert() function:
// num => 123, fromRadix => 10, toRadix => 10 console.log(baseConvert(123)); // "123" // num => 255, fromRadix => 10, toRadix => 2 console.log(baseConvert(255, 2)); // "11111111" // num => 'ff', fromRadix => 16, toRadix => 8 console.log(baseConvert('ff', 16, 8)); // "377"
Based on what you know about arrow functions not having an
arguments binding of their own, you can rewrite the
baseConvert() function using the arrow function syntax as follows:
const baseConvert = (num, ...args) => { // destructure the `args` array and // set the `fromRadix` and `toRadix` local variables let [fromRadix = 10, toRadix = 10] = args; if (args.length < 2) { // swap variables using array destructuring [toRadix, fromRadix] = [fromRadix, toRadix]; } return parseInt(num, fromRadix).toString(toRadix); }
Notice in the previous code snippets that I have used the ES6 array destructuring syntax to set local variables from array items and also to swap variables. You can learn more about destructuring by reading this guide: “ES6 Destructuring: The Complete Guide.”
Constructor functions
A regular JavaScript function can be called with the
new keyword, for which the function behaves as a class constructor for creating new instance objects.
Here is a simple example of a function being used as a constructor:
function Square (length = 10) { this.length = parseInt(length) || 10; this.getArea = function() { return Math.pow(this.length, 2); } this.getPerimeter = function() { return 4 * this.length; } } const square = new Square(); console.log(square.length); // 10 console.log(square.getArea()); // 100 console.log(square.getPerimeter()); // 40 console.log(typeof square); // "object" console.log(square instanceof Square); // true
When a regular JavaScript function is invoked with the
new keyword, the function’s internal
[[Construct]] method is called to create a new instance object and allocate memory. After that, the function body is executed normally, mapping
this to the newly created instance object. Finally, the function implicitly returns
this (the newly created instance object), except a different return value has been specified in the function definition.
Also, all regular JavaScript functions have a
prototype property. The
prototype property of a function is an object that contains properties and methods that are shared among all instance objects created by the function when used as a constructor.
Initially, the
prototype property is an empty object with a
constructor property that points to the function. However, it can be augmented with properties and methods to add more functionality to objects created using the function as a constructor.
Here is a slight modification of the previous
Square function that defines the methods on the function’s prototype instead of the constructor itself.
function Square (length = 10) { this.length = parseInt(length) || 10; } Square.prototype.getArea = function() { return Math.pow(this.length, 2); } Square.prototype.getPerimeter = function() { return 4 * this.length; } const square = new Square(); console.log(square.length); // 10 console.log(square.getArea()); // 100 console.log(square.getPerimeter()); // 40 console.log(typeof square); // "object" console.log(square instanceof Square); // true
As you can tell, everything still works as expected. In fact here, is a little secret: ES6 classes do something similar to the above code snippet on the background — they are simply syntactic sugar.
So what about arrow functions?
Do they also share this behavior with regular JavaScript functions? The answer is no. Now here, again, is something else about arrow functions:
Unlike regular functions, arrow functions can never be called with the new keyword because they do not have the
[[Construct]]method. As such, the
prototypeproperty also does not exist for arrow functions.
Sadly, that is very true. Arrow functions cannot be used as constructors. They cannot be called with the
new keyword. Doing that throws an error indicating that the function is not a constructor.
As a result, bindings such as
new.target that exist inside functions that can be called as constructors do not exist for arrow functions; instead, they use the
new.target value of the closest non-arrow parent function.
Also, because arrow functions cannot be called with the
new keyword, there is really no need for them to have a prototype. Hence, the
prototype property does not exist for arrow functions.
Since the
prototype of an arrow function is
undefined, attempting to augment it with properties and methods, or access a property on it, will throw an error.
const Square = (length = 10) => { this.length = parseInt(length) || 10; } // throws an error const square = new Square(5); // throws an error Square.prototype.getArea = function() { return Math.pow(this.length, 2); } console.log(Square.prototype); // undefined
What is
this?
If you have been writing JavaScript programs for some time now, you would have noticed that every invocation of a JavaScript function is associated with an invocation context depending on how or where the function was invoked.
The value of
this inside a function is heavily dependent on the invocation context of the function at call time, which usually puts JavaScript developers in a situation where they have to ask themselves the famous question: What is the value of
this?
Here is a summary of what the value of
this points to for different kinds of function invocations:
- Invoked with the
newkeyword:
thispoints to the new instance object created by the internal
[[Construct]]method of the function.
this(the newly created instance object) is usually returned by default, except a different return value was explicitly specified in the function definition.
- Invoked directly without the
newkeyword: In non-strict mode,
thispoints to the global object of the JavaScript host environment (in a web browser, this is usually the
windowobject). However, in strict mode, the value of
thisis
undefined; thus, trying to access or set a property on
thiswill throw an error.
- Invoked indirectly with a bound object: The
Function.prototypeobject provides three methods that make it possible for functions to be bound to an arbitrary object when they are called, namely:
call(),
apply(), and
bind(). When the function is called using any of these methods,
thispoints to the specified bound object.
- Invoked as an object method:
thispoints to the object on which the function (method) was invoked regardless of whether the method is defined as an own property of the object or resolved from the object’s prototype chain.
- Invoked as an event handler: For regular JavaScript functions that are used as DOM event listeners,
thispoints to the target object, DOM element,
document, or
windowon which the event was fired.
For a start, consider this very simple JavaScript function that will be used as a click event listener for, say, a form submit button:
function processFormData (evt) { evt.preventDefault(); // get the parent form of the submit button const form = this.closest('form'); // extract the form data, action and method code, you will see that everything works correctly. The value
this inside the event listener function, like you saw earlier, is the DOM element on which the click event was fired, which in this case is
button.
Therefore, it is possible to point to the parent form of the submit button using:
this.closest('form');
At the moment, you are using a regular JavaScript function as the event listener. What happens if you change the function definition to use the all-new arrow function syntax?
const processFormData = (evt) => { evt.preventDefault(); const form = this.closest('form'); now, you will notice that you are getting an error. From the look of things, it seems the value of
this isn’t what you were expecting. For some reason,
this no longer points to the
button element — instead, it points to the global
window object.
What can you do to fix the
this binding?
Do you remember
Function.prototype.bind()? You can use that to force the value of
this to be bound to the
button element when you are setting up the event listener for the submit button. Here it is:
// Bind the event listener function (`processFormData`) to the `button` element button.addEventListener('click', processFormData.bind(button), false);
Oops! It seems that was not the fix you were looking for.
this still points to the global
window object. Is this a problem peculiar to arrow functions? Does that mean arrow functions cannot be used for event handlers that rely on
this?
What are you getting wrong?
Now here is the last thing we’ll cover about arrow functions:
Unlike regular functions, arrow functions do not have a
thisbinding of their own. The value of
thisis resolved to that of the closest non-arrow parent function or the global object otherwise.
This explains why the value of
this in the event listener arrow function points to the window object (global object). Since it was not nested within a parent function, it uses the this value from the closest parent scope, which is the global scope.
This, however, does not explain why you cannot bind the event listener arrow function to the
button element using
bind(). Here comes an explanation for that:
Unlike regular functions, the value of
thisinside arrow functions remains the same and cannot change throughout their lifecycle, irrespective of the invocation context.
This behavior of arrow functions makes it possible for JavaScript engines to optimize them since the function bindings can be determined beforehand.
Consider a slightly different scenario in which the event handler is defined using a regular function inside an object’s method and also depends on another method of the same object:
({ _sortByFileSize: function (filelist) { const files = Array.from(filelist).sort(function (a, b) { return a.size - b.size; }); return files.map(function (file) { return file.name; }); }, init: function (input) { input.addEventListener('change', function (evt) { const files = evt.target.files; console.log(this._sortByFileSize(files)); }, false); } }).init(document.getElementById('file-input'));
Here is a one-off object literal with a
_sortByFileSize() method and an
init() method, which is invoked immediately. The
init() method takes a file
input element and sets up a change event handler for the input element that sorts the uploaded files by file size and logs them on the browser’s console.
If you test this code, you will realize that when you select files to upload, the file list doesn’t get sorted and logged to the console; instead, an error is thrown on the console. The problem comes from this line:
console.log(this._sortByFileSize(files));
Inside the event listener function,
this points to the DOM element on which the event was fired, which in this case is the
input element; hence
this._sortByFileSize is undefined.
To solve this problem, you need to bind
this inside the event listener to the outer object containing the methods so that you can be able to call
this._sortByFileSize(). Here, you can use
bind() as follows:
init: function (input) { input.addEventListener('change', (function (evt) { const files = evt.target.files; console.log(this._sortByFileSize(files)); }).bind(this), false); }
Now everything works as expected. Instead of using
bind() here, you could simply replace the event listener regular function with an arrow function. The arrow function will use the
this value from the parent
init() method, which will be the required object.
init: function (input) { input.addEventListener('change', evt => { const files = evt.target.files; console.log(this._sortByFileSize(files)); }, false); }
Before you proceed, consider one more scenario. Let’s say you have a simple timer function that can be invoked as a constructor to create countdown timers in seconds. It uses
setInterval() to keep counting down until the duration elapses or until the interval is cleared. Here it is:
function Timer (seconds = 60) { this.seconds = parseInt(seconds) || 60; console.log(this.seconds); this.interval = setInterval(function () { console.log(--this.seconds); if (this.seconds == 0) { this.interval && clearInterval(this.interval); } }, 1000); } const timer = new Timer(30);
If you run this code, you will see that the countdown timer seems to be broken. It keeps logging
NaN on the console infinitely.
The problem here is that inside the callback function passed to
setInterval(),
this points to the global
window object instead of the newly created
instance object within the scope of the
Timer() function. Hence, both
this.seconds and
this.interval are
undefined.
As before, to fix this, you can use
bind() to bind the value of
this inside the
setInterval() callback function to the newly created instance object as follows:
function Timer (seconds = 60) { this.seconds = parseInt(seconds) || 60; console.log(this.seconds); this.interval = setInterval((function () { console.log(--this.seconds); if (this.seconds == 0) { this.interval && clearInterval(this.interval); } }).bind(this), 1000); }
Or, better still, you can replace the
setInterval() callback regular function with an arrow function so that it can use the value of
this from the closest non-arrow parent function, which is
Timer in this case.
function Timer (seconds = 60) { this.seconds = parseInt(seconds) || 60; console.log(this.seconds); this.interval = setInterval(() => { console.log(--this.seconds); if (this.seconds == 0) { this.interval && clearInterval(this.interval); } }, 1000); }
Now that you completely understand how arrow functions handle the
this keyword, it is important to note that an arrow function will not be ideal for cases where you need the value of
this to be preserved — for example, when defining object methods that need a reference to the object or augmenting a function’s prototype with methods that need a reference to the target object.
Nonexistent bindings
Throughout this article, you have seen several bindings that are available inside regular JavaScript functions but don’t exist for arrow functions. Instead, arrow functions derive the values of such bindings from their closest non-arrow parent function.
In summary, here is a list of the nonexistent bindings in arrow functions:
arguments: List of arguments passed to the function when it is called
new.target: A reference to the function being called as a constructor with the
newkeyword
super: A reference to the prototype of the object to which the function belongs, provided it is defined as a concise object method
this: A reference to the invocation context object for the function
Conclusion
Hey, I’m really glad that you made it to the end of this article despite the long read time, and I strongly hope that you learned a thing or two while reading it. Thanks for your time.
JavaScript arrow functions are really awesome and have these cool characteristics (which we’ve reviewed in this article) that will make it easy for JavaScript engineers to optimize them in ways that they can’t for regular JavaScript functions.
In my opinion, I would say that you should keep using arrow functions as much as you can — except in cases where you just can. | https://blog.logrocket.com/anomalies-in-javascript-arrow-functions/ | CC-MAIN-2019-43 | refinedweb | 3,946 | 56.15 |
- 26 Jun, 2015 2 commits
- 25 Jun, 2015 6 commits
This was in the main configure.ac script, but not the clientside..
- 24 Jun, 2015 4 commits
- 23 Jun, 2015 5 commits
- David Johnson authored
reload; be sure to set the status so that the webui changes the node color. Watch for illegal imagenames when they are based on profile name, and change them to a legal name based on the profile id.
which affords a bit more flexibility in the link element.
- 19 Jun, 2015 19 commits
your defs file to avoid a warning..
builtin tablesorter widget and code from their demo page.
a new descriptor.
we import the actual image file..
-
-
Note: Since snmpit runs as the user, we don't rely on root's ssh keys for Netconf authentication. The username and password to use can be encoded in the "community" field in the database (switch_stack_types, or overridden in switch_stacks): <community>[:<user>[:<pass>]] User/password default to: username: snmpit password: <community>
-
This commit fills in all of the snmpit OF API calls for the Comware module. Note that Comware doesn't support OF "listener" ports, such as are available on the Procurve/Provision switches. Therefore, the functions related to listener ports return an error. It is assumed that when "disableOpenflow()" is called, that the caller also intends to destroy the related OpenFlow instance on the switch. There is no other call in the snmpit API for explicitly removing OF instances.
I don't know what I like most about this module, working with Expect or handling XML namespaces... Includes the basics of the Netconf protocol. Only tested with HP Netconf-over-ssh (Comware)! Main interface calls: $ncobj = snmpit_libNetconf->new($switch, $userpass, $port, $debuglvl) Create a new Netconf library object, pointed at switch host $switch. Username with optional password are passed as second argument (delimit with ':'). $port is port to connect to, and $debuglvl is the debugging level. Last two arguments are optional. $ncobj->doGet($filter) Execute Netconf "Get", with optional Netconf XML filter argument. $ncobj->doGetConfig($filter, $source) Get entire switch config, or partial config if $filter is supplied. $source is which config (running or saved). Arguments are optional. $ncobj->doEditConfig($xmlconf, $target, $defop); Edit the switch's config. $xmlconf is an XML::LibXML::Node object containing the XML-encoded configuration update. $target identifies what to apply it to (running or startup. Defaults to running). $defop is the default operation (merge, replace, none. Defaults to "merge"). Last two arguments are optional. $ncobj->doRPC($cmd, $xmlarg) Generic RPC dispatcher (Used by the other "do" commands above). $cmd is the rpc command to execute. Optional $xmlarg parameter is an XML:LibXML::Node object that encodes the arguments to $cmd.
>= 2, return lba size info.
- 17 Jun, 2015 4 commits | https://gitlab.flux.utah.edu/emulab/emulab-devel/-/commits/8f43717b32eb417acd2481f02ff56a75202c6872 | CC-MAIN-2020-34 | refinedweb | 460 | 59.9 |
<html>
<head>
<title>Package overview for net.sf.saxon.tinytree</title>
</head>
<body>
<p>This package is an implementation of the Saxon internal tree structure,
designed to minimize memory usage, and the costs of allocating and garbage-collecting
Java objects.</p>
<p>The data structure consists of a set of arrays, held in the <code>TinyTree</code> object.
A <code>TinyTree</code> represents one or more root document or element nodes, together with their
subtrees. If there is more than one root node, these will often be members of a sequence, but
this is not essential and is never assumed.
The arrays are in three groups. </p>
<p>The principal group of arrays contain one entry for each node other
than namespace and attribute nodes. These arrays are in document order. The following
information is maintained for each node: the
depth in the tree, the name code, the index of the next sibling, and two fields labelled "alpha" and "beta".
The meaning of "alpha" and
"beat" depends on the node type. For text nodes, comment nodes, and processing instructions
these fields index into a StringBuffer holding the text. For element nodes, "alpha" is
an index into the attributes table, and "beta" is an offset into the namespaces table.
Either of these may be set to -1 if there are no attributes/namespaces.</p>
<p>A name code is an integer value that indexes into the NamePool object: it can be
used to determine the prefix, local name, or namespace URI of an element or attribute name.</p>
<p>The attribute group holds the following information for each attribute node: parent
element, prefix, name code, attribute type, and attribute value. Attributes
for the same element are adjacent.</p>
<p>The namespace group holds one entry per namespace declaration (not one per namespace
node). The following information is held: a pointer to the element on which the namespace
was declared, and a namespace code. A namespace code is an integer, which the NamePool can
resolve into a prefix and a namespace URI: the top 16 bits identify the prefix, the bottom
16 bits the URI.</p>
<p>The data structure contains no Java object references: the links between elements and
attributes/namespaces are all held as integer offsets. This reduces size, and also makes
the whole structure relocatable (though this capability is not currently exploited).
All navigation is done by serial traversal of the arrays, using
the node depth as a guide. An array of pointers to the preceding sibling is created on
demand, the first time that backwards navigation is attempted. There are no parent pointers;
Saxon attempts to remember the parent while navigating down the tree, and where this is not
possible it locates the parent by searching through the following siblings; the last sibling
points back to the parent. The absence of the other pointers is a trade-off between tree-building time and
transformation time: I found that in most cases, more time was spent creating these pointers
than actually using them. Occasionally, however, in trees with a very large fan-out, locating
ancestors can be slow.</p>
<p>When the tree is navigated, transient ("flyweight") nodes are created as Java objects.
These disappear as soon as they are no longer needed. Note that to compare two nodes for
identity, you can use either the isSameNode() method, or compare the results of
generateId(). Comparing the Java objects using "==" is incorrect.</p>
<p>The tree structure implements the DOM interface as well as the Saxon NodeInfo interface.
There are limitations in the DOM support, however: especially (a) the tree is immutable, so
all updating methods throw an exception; (b) namespace declarations are not exposed as
attributes, and (c) only the core DOM classes are provided.</p>
<p>The primary way of navigating the tree is through the XPath axes, accessible through the
iterateAxis() method. The familiar DOM methods such as getNextSibling() and getFirstChild()
are not provided as an intrinsic part of the <code>NodeInfo</code> interface: all navigation
is done by iterating the axes, and each tree model provides its own implementations of the axes.
However, there are helper methods in the shared <code>Navigator</code> class which many
of these implementations choose to use.</p>
<p align="center"><i>Michael H. Kay<br/>
Saxonica Limited<br/>
9 February 2005</i></p>
</body>
</html> | http://www.java2s.com/Open-Source/Java/REST/alc-rest/net/sf/saxon/tinytree/package.html.htm | CC-MAIN-2014-15 | refinedweb | 725 | 52.8 |
If I understood your problem correctly, then I would
say it is NOT a bug!..
I have been able to deploy a service which returns
object(java bean)with document/literal style.
The generated wsdl is also correct. The service
works fine if i invoke a method with
?method=methodname on a URL. I'm also able generate
stub classes using wsdl2java, invoke and get results
back.
Its also works fine with webservicestudio(.net).
Please see whether you have deployed the service
properly with proper de/serializer etc.,
Regards
Balaji
--- Davanum Srinivas <dims@yahoo.com> wrote: >
Naresh,
>
> Please log a bug report. I saw something similar
> yesterday.
>
> Thanks,
> dims
>
> --- "Agarwal, Naresh" <nagarwal@informatica.com>
> wrote:
> > Hi
> >
> > I've developed a simple document style web service
> (contains a function, which return a bean)
> > using AXIS . I've generated the WSDL file for
> this.
> > The WSDL file generated by AXIS has two bugs
> (empty targetNameSpace and missing namespace
> > prefix) due to which WSDL2Java fails to generate
> stub classes.
> >
> > I have manually edited the WSDL file and was able
> to generate the stub classes from WSDL2Java
> > tool.
> >
> > I developed a client application from the stub
> classes. When I run the client, I am receiving
> > various fields of bean (return value of the
> function) as null.
> > However, I have checked from TCPMon and found
> that data is not null in Soap response.
> >
> > Could any one help me out? Is there a bug in stub
> classes generated for document style Web
> > Service
> >
> > thanks & regards,
> > Naresh Agarwal
> >
> >
> >
> >
> >
> >
>
>
> =====
> Davanum Srinivas -
>
>
> __________________________________
> Do you Yahoo!?
> SBC Yahoo! DSL - Now only $29.95 per month!
> - Yahoo! Mobile
- Check & compose your email via SMS on your Telstra or Vodafone mobile. | http://mail-archives.apache.org/mod_mbox/axis-java-user/200307.mbox/%3C20030717105608.426.qmail@web13906.mail.yahoo.com%3E | CC-MAIN-2014-15 | refinedweb | 280 | 60.21 |
Clojure Programming/Concepts
Contents
- 1 Concepts
- 1.1 Basics
- 1.1.1 Numbers
- 1.1.2 Structures
- 1.1.3 Exception Handling
- 1.1.4 Mutation Facilities
- 1.1.5 Namespaces [1]
- 1.2 Functional Programming
- 1.3 Lisp
- 1.4 Collection Abstractions
- 1.5 Concurrency
- 1.6 Macros
- 1.7 Libraries
- 1.8 References
Concepts[edit]
Basics[edit]
Numbers[edit]
Types of Numbers[edit]
Clojure supports the following numeric types:
- Integer
- Floating Point
- Ratio
- Decimal
Numbers in Clojure are based on java.lang.Number. BigInteger and BigDecimal are supported and hence we have arbitrary precision numbers in Clojure.
The Ratio type is described on Clojure page as
- Ratio
- Represents a ratio between integers. Division of integers that can't be reduced to an integer yields a ratio, i.e. 22/7 = 22/7, rather than a floating point or truncated value.
Ratios allows a computation to be maintained in numeric form. This can help avoid inaccuracies in long computations.
Here is a little experiment. Lets first try a computation of
(1/3 * 3/1) as floating point. Later we try the same with Ratio.
(def a (/ 1.0 3.0)) (def b (/ 3.0 1.0)) (* a b) ;; ⇒ 1.0 (def c (* a a a a a a a a a a)) ; ⇒ #'user/c (def d (* b b b b b b b b b b)) ; ⇒ #'user/d (* c d) ;; ⇒ 0.9999999999999996
The result we want is
1, but the value of
(* c d) above is
0.9999999999999996. This is due to the inaccuracies of a and b multiplying as we create c and d. You really don't want such calculations happening in your pay cheque :)
The same done with ratios below:
(def a1 (/ 1 3)) (def b1 (/ 3 1)) (def c (* a1 a1 a1 a1 a1 a1 a1 a1 a1 a1)) (def d (* b1 b1 b1 b1 b1 b1 b1 b1 b1 b1)) (* c d) ;; ⇒ 1
The result is
1 as we hoped for.
Number Entry Formats[edit]
Clojure supports the usual formats for entry as shown below
user=> 10 ; decimal 10 user=> 010 ; octal 8 user=> 0xff ; hex 255 user=> 1.0e-2 ; double 0.01 user=> 1.0e2 ; double 100.0
To make things easier, a radix based entry format is also supported in the form
<radix>r<number>. Where radix can be any natural number between 2 and 36.
2r1111 ;; ⇒ 15
These formats can be mixed and used.
(+ 0x1 2r1 01) ;; ⇒ 3
Many bitwise operations are also supported by Clojure API:
(bit-and 2r1100 2r0100) ;; ⇒ 4
Some of the others are:
- (bit-and x y)
- (bit-and-not x y)
- (bit-clear x n)
- (bit-flip x n)
- (bit-not x)
- (bit-or x y)
- (bit-set x n)
- (bit-shift-left x n)
- (bit-shift-right x n)
- (bit-test x n)
- (bit-xor x y)
Check Clojure API for the complete documentation.
Converting Integers to Strings[edit]
One general purpose way to format any data for printing is to use a java.util.Formatter.
The predefined convenience function
format makes using a Formatter easy (the hash in
%#x displays the number as a hexadecimal number prefixed with
0x):
(format "%#x" (bit-and 2r1100 2r0100)) ;; ⇒ "0x4"
Converting integers to strings is even easier with java.lang.Integer. Note that since the methods are static, we must use the "/" syntax instead of ".method":
(Integer/toBinaryString 10) ;; ⇒ "1010" (Integer/toHexString 10) ;; ⇒ "a" (Integer/toOctalString 10) ;;⇒ "12"
Here is another way to specify the base of the string representation:
(Integer/toString 10 2) ;; ⇒"1010"
Where 10 is the number to be converted and 2 is the radix.
Note: In addition to the above syntax, which is used for accessing static fields or methods,
. (dot) can be used. It is a special form used for accessing arbitrary (non-private) fields or methods in Java as explained in the Clojure Reference (Java Interop). For example:
(. Integer toBinaryString 10) ;; ⇒ "1010"
For static accesses, the / syntax is preferred.
Converting Strings to Integers[edit]
For converting strings to integers, we can again use java.lang.Integer. This is shown below.
user=> (Integer/parseInt "A" 16) ; hex 10 user=> (Integer/parseInt "1010" 2) ; bin 10 user=> (Integer/parseInt "10" 8) ; oct 8 user=> (Integer/parseInt "8") ; dec 8
The above sections give an overview of the integer-to-string and string-to-integer formatting. There is a very rich set of well documented functions available in the Java libraries (too rich to document here). These functions can easily be used to meet varied needs.
Structures[edit]
Structures in Clojure are a little different from those in languages like Java or C++. They are also different from structures in Common Lisp (even though we have a
defstruct in Clojure).
In Clojure, structures are a special case of maps and are explained in the data structures section in the reference.
The idea is that multiple instance of the structures will need to access their field values using the field names which are basically the map keys. This is fast and convenient, especially because Clojure automatically defines the keys as accessors for the structure instances.
Following are the important functions dealing with structures:
- defstruct
- create-struct
- struct
- struct-map
For the full API refer to data structures section in Clojure reference.
Structures are created using
defstruct which is a macro wrapping the function
create-struct which actually creates the struct.
defstruct creates the structure using
create-struct and binds it to the structure name supplied to
defstruct.
The object returned by
create-struct is what is called the structure basis. This is not a structure instance but contains information of what the structure instances should look like. New instances are created using
struct or
struct-map.
The structure field names of type keyword or symbols are automatically usable as functions to access fields of the structure. This is possible as structures are maps and this feature is supported by maps. This is not possible for other types of field names such as strings or numbers. It is quite common to use keywords for field names for structures due to the above reason. Also, Clojure optimises structures to share base key information. The following shows sample usage:
(defstruct employee :name :id) (struct employee "Mr. X" 10) ; ⇒ {:name "Mr. X", :id 10} (struct-map employee :id 20 :name "Mr. Y") ; ⇒ {:name "Mr. Y", :id 20} (def a (struct-map employee :id 20 :name "Mr. Y")) (def b (struct employee "Mr. X" 10))' ;; :name and :id are accessors (:name a) ; ⇒ "Mr. Y" (:id b) ; ⇒ 10 (b :id) ; ⇒ 10 (b :name) ; ⇒ "Mr. X"
Clojure also supports the
accessor function that can be used to get accessor functions for fields to allow easy access. This is important when field names are of types other than keyword or symbols. This is seen in the interaction below.
(def e-str (struct employee "John" 123)) e-str ;; ⇒ {:name "John", :id 123} ("name" e-str) ; ERROR: string not an accessor ;; ERROR ⇒ ;; java.lang.ClassCastException: java.lang.String cannot be cast to clojure.lang.IFn ;; java.lang.ClassCastException: java.lang.String cannot be cast to clojure.lang.IFn ;; at user.eval__2537.invoke(Unknown Source) ;; at clojure.lang.Compiler.eval(Compiler.java:3847) ;; at clojure.lang.Repl.main(Repl.java:75) (def e-name (accessor employee :name)) ; bind accessor to e-name (e-name e-str) ; use accessor ;; ⇒ "John"
As structures are maps, new fields can be added to structure instances using
assoc.
dissoc can be used to remove these instance specific keys. Note however that struct base keys cannot be removed.
b ;; ⇒ {:name "Mr. X", :id 10} (def b1 (assoc b :function "engineer")) b1 ;; ⇒ {:name "Mr. X", :id 10, :function "engineer"} (def b2 (dissoc b1 :function)) ; this works as :function is instance b2 ;; ⇒ {:name "Mr. X", :id 10} (dissoc b2 :name) ; this fails. base keys cannot be dissociated ;; ERROR ⇒ java.lang.Exception: Can't remove struct key
assoc can also be used to "update" a structure.
a ;; ⇒ {:name "Mr. Y", :id 20} (assoc a :name "New Name") ;; ⇒ {:name "New Name", :id 20} a ; note that 'a' is immutable and did not change ;; ⇒ {:name "Mr. Y", :id 20} (def a1 (assoc a :name "Another New Name")) ; bind to a1 a1 ;; ⇒ {:name "Another New Name", :id 20}
Observe that like other sequences in Clojure, structures are also immutable, hence, simply doing
assoc above does not change
a. Hence we rebind it to
a1. While it is possible to rebind the new value back to
a, this is not considered good style.
Exception Handling[edit]
Clojure supports Java based Exceptions. This may need some getting used to for Common Lisp users who are used to the Common Lisp Condition System.
Clojure does not support a condition system and is not expected to be supported anytime soon as per this message. That said, the more common exception system which is adopted by Clojure is well suited for most programming needs.
If you are new to exception handling, the Java Tutorial on Exceptions is a good place to learn about them.
In Clojure, exceptions can be handled using the following functions:
(try expr* catch-clause* finally-clause?)
- catch-clause -> (catch classname name expr*)
- finally-clause -> (finally expr*)
(throw expr)
Two types of exceptions you may want to handle in Clojure are:
- Clojure Exception: These are exception generated by Clojure or the underlying Java engine
- User Defined Exception: These are exceptions which you might create for your applications
Clojure Exceptions[edit]
Below is a simple interaction at the REPL that throws an exception:
user=> (/ 1 0) java.lang.ArithmeticException: Divide by zero java.lang.ArithmeticException: Divide by zero at clojure.lang.Numbers.divide(Numbers.java:142) at user.eval__2127.invoke(Unknown Source) at clojure.lang.Compiler.eval(Compiler.java:3847) at clojure.lang.Repl.main(Repl.java:75)
In the above case we see a
java.lang.ArithmeticException being thrown. This is a runtime exception which is thrown by the underlying JVM. The long message can sometimes be intimidating for new users but the trick is to simply look at the exception (
java.lang.ArithmeticException: Divide by zero) and not bother with the rest of the trace.
Similar exceptions may be thrown by the compiler at the REPL.
user=> (def xx yy) java.lang.Exception: Unable to resolve symbol: yy in this context clojure.lang.Compiler$CompilerException: NO_SOURCE_FILE:4: Unable to resolve symbol: yy in this context at clojure.lang.Compiler.analyze(Compiler.java:3669) at clojure.lang.Compiler.access$200(Compiler.java:37) at clojure.lang.Compiler$DefExpr$Parser.parse(Compiler.java:335) at clojure.lang.Compiler.analyzeSeq(Compiler.java:3814) at clojure.lang.Compiler.analyze(Compiler.java:3654) at clojure.lang.Compiler.analyze(Compiler.java:3627) at clojure.lang.Compiler.eval(Compiler.java:3851) at clojure.lang.Repl.main(Repl.java:75)
In the above case, the compiler does not find the binding for
yy and hence it throws the exception. If your program is correct (i.e. in this case
yy is defined
(def yy 10)) , you won't see any compile time exceptions.
The following interaction shows how runtime exceptions like
ArithmeticException can be handled.
user=> (try (/ 1 0) (catch Exception e (prn "in catch")) (finally (prn "in finally"))) "in catch" "in finally" nil
The syntax for the
try block is
(try expr* catch-clause* finally-clause?).
As can be seen, it's quite easy to handle exceptions in Clojure. One thing to note is that
(catch Exception e ...) is a catch all for exceptions as
Exception is a superclass of all exceptions. It is also possible to catch specific exceptions which is generally a good idea.
In the example below, we specifically catch
ArithmeticException.
user=> (try (/ 1 0) (catch ArithmeticException e (prn "in catch")) (finally (prn "in finally"))) "in catch" "in finally" nil
When we use some other exception type in the catch block, we find that the
ArithmeticException is not caught and is seen by the REPL.
user=> (try (/ 1 0) (catch IllegalArgumentException e (prn "in catch")) (finally (prn "in finally"))) "in finally" java.lang.ArithmeticException: Divide by zero java.lang.ArithmeticException: Divide by zero at clojure.lang.Numbers.divide(Numbers.java:142) at user.eval__2138.invoke(Unknown Source) at clojure.lang.Compiler.eval(Compiler.java:3847) at clojure.lang.Repl.main(Repl.java:75)
User-Defined Exceptions[edit]
As mentioned previously, all exceptions in Clojure need to be a subclass of java.lang.Exception (or generally speaking - java.lang.Throwable which is the superclass for Exception). This means that even when you want to define your own exceptions in Clojure, you need to derive it from Exception.
Don't worry, that's easier than it sounds :)
Clojure API provides a function
gen-and-load-class which can be used to extend
java.lang.Exception for user-defined exceptions.
gen-and-load-class generates and immediately loads the bytecode for the specified class.
Now, rather than talking too much, let's quickly look at code.
(gen-and-load-class 'user.UserException :extends Exception) (defn user-exception-test [] (try (throw (new user.UserException "msg: user exception was here!!")) (catch user.UserException e (prn "caught exception" e)) (finally (prn "finally clause invoked!!!"))))
Here we are creating a new class
'user.UserException that extends
java.lang.Exception. We create an instance of
user.UserException using the special form
(new Classname-symbol args*). This is then thrown.
Sometimes you may come across code like
(user.UserException. "msg: user exception was here!!"). This is just another way to say
new. Note the
. (dot) after the
user.UserException. This does exactly the same thing.
Here is the interaction:
user=> (load-file "except.clj") #'user/user-exception-test user=> (user-exception-test) "caught exception" user.UserException: msg: user exception was here!! "finally clause invoked!!!" nil user=>
So here we have both the
catch and the
finally clauses being invoked. That's all there is to it.
With Clojure's support for Java Interop, it is also possible for the user to create exceptions in Java and catch them in Clojure, but creating the exception in Clojure is typically more convenient.
Mutation Facilities[edit]
Employee Record Manipulation[edit]
Data structures and sequences in Clojure are immutable as seen in the examples presented in Clojure_Programming/Concepts#Structures (it is suggested that the reader go through that section first).
While immutable data has its advantages, any project of reasonable size will require the programmer to maintain some sort of state. Managing state in a language with immutable sequences and data structures is a frequent source of confusion for people used to programming languages that allow mutation of data.
A good essay on the Clojure approach is [ Values and Change - Clojure's approach to Identity and State], written by Rich Hickey.
It may be useful to watch Clojure Concurrency screen cast as some of those concepts are used in this section. Specifically refs and transactions.
In this section we create a simple employee record set and provide functions to:
- Add an employee
- Delete employee by name
- Change employee role by name
The example is purposely kept simple as the intent is to show the state and mutation facilities rather than provide full functionality.
Lets dive into the code.
(alias 'set 'clojure.set) ; use set/fn-name rather than clojure.set/fn-name (defstruct employee :name :id :role) ; == (def employee (create-struct :name :id ..)) (def employee-records (ref #{})) ;;;=================================== ;;; Private Functions: No Side-effects ;;;=================================== (defn- update-role [n r recs] (let [rec (set/select #(= (:name %) n) recs) others (set/select #(not (= (:name %) n)) recs)] (set/union (map #(set [(assoc % :role r)]) rec) others))) (defn- delete-by-name [n recs] (set/select #(not (= (:name %) n)) recs)) ;;;============================================= ;;; Public Function: Update Ref employee-records ;;;============================================= (defn update-employee-role [n r] "update the role for employee named n to the new role r" (dosync (ref-set employee-records (update-role n r @employee-records)))) (defn delete-employee-by-name [n] "delete employee with name n" (dosync (ref-set employee-records (delete-by-name n @employee-records)))) (defn add-employee [e] "add new employee e to employee-records" (dosync (commute employee-records conj e))) ;;;========================= ;;; initialize employee data ;;;========================= (add-employee (struct employee "Jack" 0 :Engineer)) (add-employee (struct employee "Jill" 1 :Finance)) (add-employee (struct-map employee :name "Hill" :id 2 :role :Stand))
In the first few lines we define the
employee structure. The interesting definition after that is
employee-records.
(def employee-records (ref #{}))
In Clojure refs allow mutation of a storage location with a transaction.
user=> (def x (ref [1 2 3])) #'user/x user=> x clojure.lang.Ref@128594c user=> @x [1 2 3] user=> (deref x) [1 2 3] user=>
Next we define private functions
update-role and
delete-by-name using
defn- (note the minus '-' at the end). Observe that these are pure functions without any side-effects.
update-role takes the employee name
n, the new role
r and a table of employee records
recs. As sequences are immutable, this function returns a new table of records with the employee role updated appropriately.
delete-by-name also behaves in a similar manner by returning a new table of employees after deleting the relevant employee record.
For an explanation of the
set API see Clojure API reference.
We still haven't looked at how state is maintained. This is done by the public functions in the listing
update-employee-role,
delete-employee-by-name and
add-employee.
These functions delegate the job of record processing to the private functions. The important things to note are the use of the following functions:
ref-setsets the value of a ref.
dosyncis mandatory as refs can only be updated in a transaction and
dosyncsets up the transaction.
commuteupdates the in-transaction value of a ref.
For a detailed explanation of these functions see the refs section in API reference.
The
add-employee function is quite trivial and hence not broken up into private and public function.
The source listing initializes the records with sample data towards the end.
Below is the interaction for this program.
user=> (load-file "employee.clj") #{{:name "Jack", :id 0, :role :Engineer} {:name "Hill", :id 2, :role :Stand} {:name "Jill", :id 1, :role :Finance}} user=> @employee-records #{{:name "Jack", :id 0, :role :Engineer} {:name "Hill", :id 2, :role :Stand} {:name "Jill", :id 1, :role :Finance}} user=> (add-employee (struct employee "James" 3 :Bond)) #{{:name "James", :id 3, :role :Bond} {:name "Jack", :id 0, :role :Engineer} {:name "Hill", :id 2, :role :Stand} {:name "Jill", :id 1, :role :Finance}} user=> @employee-records #{{:name "James", :id 3, :role :Bond} {:name "Jack", :id 0, :role :Engineer} {:name "Hill", :id 2, :role :Stand} {:name "Jill", :id 1, :role :Finance}} user=> (update-employee-role "Jill" :Sr.Finance) #{{:name "James", :id 3, :role :Bond} {:name "Jack", :id 0, :role :Engineer} {:name "Hill", :id 2, :role :Stand} {:name "Jill", :id 1, :role :Sr.Finance}} user=> @employee-records #{{:name "James", :id 3, :role :Bond} {:name "Jack", :id 0, :role :Engineer} {:name "Hill", :id 2, :role :Stand} {:name "Jill", :id 1, :role :Sr.Finance}} user=> (delete-employee-by-name "Hill") #{{:name "James", :id 3, :role :Bond} {:name "Jack", :id 0, :role :Engineer} {:name "Jill", :id 1, :role :Sr.Finance}} user=> @employee-records #{{:name "James", :id 3, :role :Bond} {:name "Jack", :id 0, :role :Engineer} {:name "Jill", :id 1, :role :Sr.Finance}}
Two things to note about the program:
- Using refs and transactions makes the program inherently thread safe. If we want to extend the program for a multi-threaded environment (using Clojure agents) it will scale with minimal change.
- Keeping the pure functionality separate from the public function that manages state, it is easier to ensure that the functionality is correct as pure functions are easier to test.
Namespaces [1][edit]
Overview[edit]
- use
requireto load clojure libraries
- use
referto refer to functions in the current namespace
- use
useto load and refer all in one step
- use
importto refer to Java classes in the current namespace
Require
1. You can load the code for any clojure library with
(require libname
). Try it with
clojure.contrib.math:
(require clojure.contrib.math)
2. Then print the directory of names available in the namespace
(dir clojure.contrib.math)
3. Show using
lcm to calculate the least common multiple:
1 (clojure.contrib.math/lcm 11 41) 2 -> 451
4. Writing out the namespace prefix on every function call is a pain, so you can specify a shorter alias using
as:
(require [clojure.contrib.math :as m])
5. Calling the shorter form is much easier:
1 (m/lcm 120 1000) 2 -> 3000
6. You can see all the loaded namespaces with
(all-ns)
Refer and Use[edit]
1. It would be even easier to use a function with no namespace prefix at all. You can do this by referring to the name, which makes a reference to the name in the current namespace:
(refer 'clojure.contrib.math)
2. Now you can call
lcm directly:
1 (lcm 16 30) 2 -> 240
3. If you want to load and refer all in one step, call
use:
(use 'clojure.contrib.math)
4. Referring a library refers all of its names. This is often undesirable, because
- it does not clearly document intent to readers
- it brings in more names than you need, which can lead to name collisions
Instead, use the following style to specify only those names you want:
(use '[clojure.contrib.math :only (lcm)])
The
:only option is available on all the namespace management forms. (There is also an
:exclude which works as you might expect.)
5. The variable
*ns* always contains the current namespace, and you can see what names your current namespace refers to by calling
(ns-refers *ns*)
6. The refers map is often pretty big. If you are only interested in one symbol, pass that symbol to the result of calling
ns-refers:
1 ((ns-refers *ns*) 'dir) 2 -> #'clojure.contrib.ns-utils/dir
Import[edit]
1. Importing is like referring, but for Java classes instead of Clojure namespaces. Instead of
(java.io.File. "woozle")
you can say
1 (import java.io.File) 2 (File. "woozle")
2. You can import multiple classes in a Java package with the form
(import [package Class Class])
For example:
1 (import [java.util Date Random]) 2 (Date. (long (.nextInt (Random.))))
3. Programmers new to Lisp are often put off by the "inside-out" reading of forms like the date creation above. Starting from the inside, you
- get a new Random
- get the next random integer
- cast it to a long
- pass the long to the Date constructor
You don't have to write inside-out code in Clojure. The -> macro takes its first form, and passes it as the first argument to its next form. The result then becomes the first argument of the next form, and so on. It is easier to read than to describe:
1 (-> (Random.) (.nextInt) (long) (Date.)) 2 -> #<Date Sun Dec 21 12:47:20 EST 1969>
Load and Reload[edit]
The REPL isn't for everything. For work you plan to keep, you will want to place your source code in a separate file. Here are the rules of thumb to remember when creating your own Clojure namespaces.
1. Clojure namespaces (a.k.a. libraries) are equivalent to Java packages.
2. Clojure respects Java naming conventions for directories and files, but Lisp naming conventions for namespace names. So a Clojure namespace com.my-app.utils would live in a path named com/my_app/utils.clj. Note especially the underscore/hyphen distinction.
3. Clojure files normally begin with a namespace declaration, e.g.
(ns com.my-app.utils)
4. The syntax for import/use/refer/require presented in the previous sections is for REPL use. Namespace declarations allow similar forms—similar enough to aid memory, but also different enough to confuse. The following forms at the REPL:
1 (use 'foo.bar) 2 (require 'baz.quux) 3 (import '[java.util Date Random])
would look like this in a source code file:
1 (ns 2 com.my-app.utils 3 (:use foo.bar) 4 (:require baz.quux) 5 (:import [java.util Date Random]))
Symbols become keywords, and quoting is no longer required.
5. At the time of this writing, the error messages for doing it wrong with namespaces are, well, opaque. Be careful.
Now let's try creating a source code file. We aren't going to bother with explicit compilation for now. Clojure will automatically (and quickly) compile source code files on the classpath. Instead, we can just add Clojure (.clj) files to the src directory.
1. Create a file named student/dialect.clj in the src directory, with the appropriate namespace declaration:
(ns student.dialect)
2. Now, implement a simple canadianize function that takes a string, and appends , eh?
(defn canadianize [sentence] (str sentence ", eh"))
3. From your REPL, use the new namespace:
(use 'student.dialect)
4. Now try it out.
1 (canadianize "Hello, world.") 2 -> "Hello, world., eh"
5. Oops! We need to trim the period off the end of the input. Fortunately, clojure.contrib.str-utils2 provides chop. Go back to student/dialect.clj and add require in clojure.contrib.str-utils2:
(ns student.dialect (:require [clojure.contrib.str-utils2 :as s]))
6. Now, update canadianize to use chop:
(defn canadianize [sentence] (str (s/chop sentence) ", eh?"))
7. If you simply retry calling canadianize from the repl, you will not see your new change, because the code was already loaded. However, you can use namespace forms with reload ( or reload-all) to reload a namespace (and its dependencies).
(use :reload 'student.dialect)
8. Now you should see the new version of canadianize:
1 (canadianize "Hello, world.") 2 -> "Hello, world, eh?"
Functional Programming[edit]
Anonymous Functions[edit]
Clojure supports anonymous functions using
fn or the shorter reader macro
#(..). The
#(..) is convenient due to its conciseness but is somewhat limited as
#(..) form cannot be nested.
Below are some examples using both forms:
user=> ((fn [x] (* x x)) 3) 9 user=> (map #(list %1 (inc %2)) [1 2 3] [1 2 3]) ((1 2) (2 3) (3 4)) user=> (map (fn [x y] (list x (inc y))) [1 2 3] [1 2 3]) ((1 2) (2 3) (3 4)) user=> (map #(list % (inc %)) [1 2 3]) ((1 2) (2 3) (3 4)) user=> (map (fn [x] (list x (inc x))) [1 2 3]) ((1 2) (2 3) (3 4)) user=> (#(apply str %&) "Hello") "Hello" user=> (#(apply str %&) "Hello" ", " "World!") "Hello, World!"
Note that in
#(..) form,
%N is used for arguments (1 based) and
%& for the rest argument.
% is a synonym for
%1.
Lazy Evaluation of Sequences[edit]
This section tries to walk through some code to give a better feel for the lazy evaluation of sequences by Clojure and how that might be useful. We also measure memory and time to understand whats happening better.
Consider a scenario where we want to do a
big-computation (1 second each) on records in a list with a billion items. Typically we may not need all the billion items processed (e.g. we may need only a filtered subset).
Let's define a little utility function
free-mem to help us monitor memory usage and another function
big-computation that takes 1 second to do its job.
(defn free-mem [] (.freeMemory (Runtime/getRuntime))) (defn big-computation [x] (Thread/sleep 1000) (* 10 x))
In the functions above we use java.lang.Runtime and java.lang.Thread for getting free memory and supporting sleep.
We will also be using the built in function
time to measure our performance.
Here is a simple usage at REPL:
user=> (defn free-mem [] (.freeMemory (Runtime/getRuntime))) #'user/free-mem user=> (defn big-computation [x] (Thread/sleep 1000) (* 10 x)) #'user/big-computation user=> (time (big-computation 1)) "Elapsed time: 1000.339953 msecs" 10
Now we define a list of 1 billion numbers called
nums.
user=> (time (def nums (range 1000000000))) "Elapsed time: 0.166994 msecs" #'user/nums
Note that it takes Clojure only 0.17 ms to create a list of 1 billion numbers. This is because the list is not really created. The user just has a promise from Clojure that the appropriate number from this list will be returned when asked for.
Now, let's say, we want to apply
big-computation to x from 10000 to 10005 from this list.
This is the code for it:
;; The comments below should be read in the numbered order ;; to better understand this code. (time ; [7] time the transaction (def v ; [6] save vector as v (apply vector ; [5] turn the list into a vector (map big-computation ; [4] process each item for 1 second (take 5 ; [3] take first 5 from filtered items (filter ; [2] filter items 10000 to 10010 (fn [x] (and (> x 10000) (< x 10010))) nums)))))) ; [1] nums = 1 billion items
Putting this code at the REPL, this is what we get:
user=> (free-mem) 2598000 user=> (time (def v (apply vector (map big-computation (take 5 (filter (fn [x] (and (> x 10000) (< x 10010))) nums)))))) "Elapsed time: 5036.234311 msecs" #'user/v user=> (free-mem) 2728800
The comments in the code block indicate the working of this code. It took us ~5 seconds to execute this. Here are some points to note:
- It did not take us 10000 seconds to filter out item number 10000 to 10010 from the list
- It did not take us 10 seconds to get first 5 items from the list of 10 filtered list
- Overall, it took the computation only 5 seconds which is basically the computation time.
- The amount of free memory is virtually the same even though we now have the promise of a billion records for processing. (It actually seems to have gone up a bit due to garbage collection)
Now if we access v it takes negligible time.
user=> (time (seq v)) "Elapsed time: 0.042045 msecs" (100010 100020 100030 100040 100050) user=>
Another point to note is that a lazy sequence does not mean that the computation is done every time; once the computation is done, it gets cached.
Try the following:
user=> (time (def comps (map big-computation nums))) "Elapsed time: 0.113564 msecs" #'user/comps user=> (defn t5 [] (take 5 comps)) #'user/t5 user=> (time (doall (t5))) "Elapsed time: 5010.49418 msecs" (0 10 20 30 40) user=> (time (doall (t5))) "Elapsed time: 0.096104 msecs" (0 10 20 30 40) user=>
In the first step we map
big-computation to a billion
nums. Then we define a function
t5 that takes 5 computations from comps. Observe that the first time t5 takes 5 seconds and after that it takes neglegible time. This is because once the calculation is done, the results are cached for later use. Since the result of
t5 is also lazy,
doall is needed to force it to be eagerly evaluated before
time returns to the REPL.
Lazy data structures can offer significant advantage assuming that the program is designed to leverage that. Designing a program for lazy sequences and infinite data structures is a paradigm shift from eagerly just doing the computation in languages like C and Java vs giving a promise of a computation.
This section is based on this mail in the Clojure group.
Infinite Data Source[edit]
As Clojure supports lazy evaluation of sequences, it is possible to have infinite data sources in Clojure. The infinite sequence (0 1 2 3 4 5 ....) can be defined using (range) since clojure 1.2:[2]
(def nums (range)) ; Old version (def nums (iterate inc 0)) ;; ⇒ #'user/nums (take 5 nums) ;; ⇒ (0 1 2 3 4) (drop 5 (take 11 nums)) ;; ⇒ (5 6 7 8 9 10)
Here we see two functions that are used for create an infinite list of numbers starting from 0. As Clojure supports lazy sequences, only the required items are generated and taken of the head of this list. In the above case, if you were to type out (range) or (iterate inc 0) directly at the prompt, the [ reader] would continue getting the next number forever and you would need to terminate the process.
(iterate f x) is a function that continuously applies f to the result of the previous application of f to x. Meaning, the result is
...(f(f(f(f .....(f(f(f x)))))....
(iterate inc 0) first gives 0 as the result, then
(inc 0) => 1, then
(inc (inc 0)) => 2 and so on.
(take n coll) basically removes
n items from the collection. There are many variation of this theme:
- (take n coll)
- (take-nth n coll)
- (take-last n coll)
- (take-while pred coll)
- (drop n coll)
- (drop-while pred coll)
The reader is encouraged to look at the Clojure Sequence API for details.
List Comprehension[edit]
List Comprehensions are the constructs offered by a language that make it easy to create new lists from old ones. As simple as it sounds, it is a very powerful concept. Clojure has good support for List comprehensions.
Lets say we want a set of all
x + 1 for all
x divisible by 4 with
x starting from
0.
Here is one way to do it in Clojure:
(def nums (iterate inc 0)) ;; ⇒ #'user/nums (def s (for [x nums :when (zero? (rem x 4))] (inc x))) ;; ⇒ #'user/s (take 5 s) ;; ⇒ (1 5 9 13 17)
nums is the infinite list of numbers that we saw in the previous section. We need to
(def s ...) for the set as we are creating an infinite source of numbers. Running it directly at the prompt will make the
reader suck out numbers from this source indefinitely.
The key construct here is the
for macro. Here the expression
[x nums ... says that x comes out of
nums one at a time. The next clause
.. :when (zero? (rem x 4)) .. basically says that x should be pulled out only if it meets this criteria. Once this x is out,
inc is applied to it. Binding all this to
s gives us an infinite set. Hence, the
(take 5 s) and the expected result that we see.
Another way to achieve the same result is to use
map and
filter.
(def s (map inc (filter (fn [x] (zero? (rem x 4))) nums))) ;; ⇒ #'user/s (take 5 s) ;; ⇒ (1 5 9 13 17)
Here we create a predicate
(fn [x] (zero? (rem x 4))) and pull out x's from nums only if this predicate is satisfied. This is done by
filter. Note that since Clojure is lazy, what
filter gives is only a promise of supplying the next number that satisfies the predicate. It does not (and cannot in this particular case) evaluate the entire list. Once we have this stream of x's, it is simply a matter of mapping inc to it
(map inc ....
The choice between List Comprehension i.e.
for and
map/filter is largely a matter of user preference. There is no major advantage of one over the other.
Lisp[edit]
Sequence Functions[edit]
(first coll)[edit]
Gets the first element of a sequence. Returns nil for an empty sequence or nil.
(first (list 1 2 3 4)) ;; ⇒ 1 (first (list)) ;; ⇒ nil (first nil) ;; ⇒ nil (map first [[1 2 3] "Test" (list 'hi 'bye)]) ;; ⇒ (1 \T hi) (first (drop 3 (list 1 2 3 4))) ;; ⇒ 4
(rest coll)[edit]
Gets everything except the first element of a sequence. Returns nil for an empty sequence or nil.
(rest (list 1 2 3 4)) ;; ⇒ (2 3 4) (rest (list)) ;; ⇒ nil (rest nil) ;; ⇒ nil (map rest [[1 2 3] "Test" (list 'hi 'bye)]) ;; ⇒ ((2 3) (\e \s \t) (bye)) (rest (take 3 (list 1 2 3 4))) ;; ⇒ (2 3)
(map f colls*)[edit]
Applies f lazily to each item in the sequences, returning a lazy sequence of the return values of f.
Because the supplied function always returns true, these both return a sequence of true, repeated ten times.
(map (fn [x] true) (range 10)) ;; ⇒ (true true true true true true true true true true) (map (constantly true) (range 10)) ;; ⇒ (true true true true true true true true true true)
These two functions both multiply their argument by 2, so (map ...) returns a sequence where every item in the original is doubled.
(map (fn [x] (* 2 x)) (range 10)) ;; ⇒ (0 2 4 6 8 10 12 14 16 18) (map (partial * 2) (range 10)) ;; ⇒ (0 2 4 6 8 10 12 14 16 18)
(map ...) may take as many sequences as you supply to it (though it requires at least one sequence), but the function argument must accept as many arguments as there are sequences.
Thus, these two functions give the sequences multiplied together:
(map (fn [a b] (* a b)) (range 10) (range 10)) ;; ⇒ (0 1 4 9 16 25 36 49 64 81) (map * (range 10) (range 10)) ;; ⇒ (0 1 4 9 16 25 36 49 64 81)
But the first one will only take two sequences as arguments, whereas the second one will take as many as are supplied.
(map (fn [a b] (* a b)) (range 10) (range 10) (range 10)) ;; ⇒ java.lang.IllegalArgumentException: Wrong number of args passed (map * (range 10) (range 10) (range 10)) ;; ⇒ (0 1 8 27 64 125 216 343 512 729)
(map ...) will stop evaluating as soon as it reaches the end of any supplied sequence, so in all three of these cases, (map ...) stops evaluating at 5 items (the length of the shortest sequence,) despite the second and third giving it sequences that are longer than 5 items (in the third example, the longer sequence is of infinite length.)
Each of these takes a a sequence made up solely of the number 2 and a sequence of the numbers (0 1 2 3 4) and multiplies them together.
(map * (replicate 5 2) (range 5)) ;; ⇒ (0 2 4 6 8) (map * (replicate 10 2) (range 5)) ;; ⇒ (0 2 4 6 8) (map * (repeat 2) (range 5)) ;; ⇒ (0 2 4 6 8)
(every? pred coll)[edit]
Returns true if pred is true for every item in a sequence. False otherwise. pred, in this case, is a function taking a single argument and returning true or false.
As this function returns true always, (every? ...) evaluates to true. Note that these two functions say the same thing.
(every? (fn [x] true) (range 10)) ;; ⇒ true (every? (constantly true) (range 10)) ;; ⇒ true
(pos? x) returns true when its argument is greater than zero. Since (range 10) gives a sequence of numbers from 0 to 9 and (range 1 10) gives a sequence of numbers from 1 to 10, (pos? x) returns false once for the first sequence and never for the second.
(every? pos? (range 10)) ;; ⇒ false (every? pos? (range 1 10)) ;; ⇒ true
This function returns true when its argument is an even number. Since the range between 1 and 10 and the sequence (1 3 5 7 9) contain odd numbers, (every? ...) returns false.
As the sequence (2 4 6 8 10) contains only even numbers, (every? ...) returns true.
(every? (fn [x] (= 0 (rem x 2))) (range 1 10)) ;; ⇒ false (every? (fn [x] (= 0 (rem x 2))) (range 1 10 2)) ;; ⇒ false (every? (fn [x] (= 0 (rem x 2))) (range 2 10 2)) ;; ⇒ true
If I had a need, elsewhere, to check if a number were even, I might, instead, write the following, making (even? num) an actual function before passing it as an argument to (every? ...)
(defn even? [num] (= 0 (rem num 2))) ;; ⇒ #<Var: user/even?> (every? even? (range 1 10 2)) ;; ⇒ false (every? even? (range 2 10 2)) ;; ⇒ true
Complementary function: (not-every? pred coll)
Returns the complementary value to (every? pred coll). False if pred is true for all items in the sequence, true if otherwise.
(not-every? pos? (range 10)) ;; ⇒ true (not-every? pos? (range 1 10)) ;; ⇒ false
Looping and Iterating[edit]
Three different ways to loop from 1 to 20, increment by 2, printing the loop index each time (from mailing list discussion):
;; Version 1 (loop [i 1] (when (< i 20) (println i) (recur (+ 2 i)))) ;; Version 2 (dorun (for [i (range 1 20 2)] (println i))) ;; Version 3 (doseq [i (range 1 20 2)] (println i))
Mutual Recursion[edit]
Mutual recursion is tricky but possible in Clojure. The form of (defn ...) allows the body of a function to refer to itself or previously existing names only. However, Clojure does allow dynamic redefinition of function bindings, in the following way:
;;; Mutual recursion example ;; Forward declaration (def even?) ;; Define odd in terms of 0 or even (defn odd? [n] (if (zero? n) false (even? (dec n)))) ;; Define even? in terms of 0 or odd (defn even? [n] (if (zero? n) true (odd? (dec n)))) ;; Is 3 even or odd? (even? 3) ;; ⇒ false
Mutual recursion is not possible in internal functions defined with
let. To declare a set of private recursive functions, you can use the above technique with
defn- instead of
defn, which will generate private definitions.
However one can emulate mutual recursive functions with
loop and
recur.
(use 'clojure.contrib.fcase) (defmacro multi-loop [vars & clauses] (let [loop-var (gensym "multiloop__") kickstart (first clauses) loop-vars (into [loop-var kickstart] vars)] `(loop ~loop-vars (case ~loop-var ~@clauses)))) (defn even? [n] (multi-loop [n n] :even (if (zero? n) true (recur :odd (dec n))) :odd (if (zero? n) false (recur :even (dec n)))))
Collection Abstractions[edit]
Concurrency[edit]
Macros[edit]
A nice walkthrough on how to write a macro can be found at by Chouser.
Macros are used to transform data structures at compile time. Let's develop a new
do1 macro. The
do special form of Clojure evaluates all containing forms for their side-effects and returns the return value of the last one.
do1 should act similar, but return the value of the first sub-form.
In the beginning one should first think about how the macro should be invoked.
(do1 :x :y :z)
The return value should be
:x. Then the next step is to think about how we would do this manually.
(let [x :x] :y :z x)
This first evaluates :x, then :y and :z. Finally the let evaluates to the result of evaluating :x. This can be turned into a macro using
defmacro and
`.
(defmacro do1 [fform & rforms] `(let [x# ~fform] ~@rforms x#))
So what happens here. It is just a simple translation. We use the
let to create a temporary place for the result of our first form to stay. Since we cannot simply use some name (it might be used in the user code), we generate a new one with
x#. The # is a special notation of Clojure to help us: it generates a new name, which is guaranteed to be not used by the user code. The
~ "unquotes" our first form, that is
~fform is replaced by the first argument. Then the
~@ is used to inject the remaining forms. Using the
@ basically removes one set of () from the following expression. Finally we refer again to the result of the first form with
x#.
We can check the expansion of our macro with
(macroexpand-1 '(do1 :x :y :z)).
Libraries[edit]
The lib package from
clojure.contrib is now integrated into clojure. It is easy to define libraries that can be loaded by other scripts. Suppose we have an awesome
add1 function which we want to provide to other developers. So what do we need? First we settle on a namespace, eg.
example.ourlib. Now we have to create a file in the classpath with the filename "example/ourlib.clj". The contents are pretty straight forward.
(ns example.ourlib) (defn add1 [x] (add x 1))
All we have to do now is to use the functionality of
ns. Suppose we have another file, where we want to use our function.
ns lets us specify our requirements in a lot of ways. The simplest is
:require
(ns example.otherns (:require example.ourlib)) (defn check-size [x] (if (too-small x) (example.ourlib/add1 x) x))
But what if we need the
add1 function several times? We have to type always the namespace in front. We could add a
(refer 'example.ourlib), but we can have this easier. Just use
:use instead of
:require!
:use loads the library as
:require does and immediately
refers to the namespace.
So now we have already two small libraries which are maybe used in a third program.
(ns example.thirdns (:require example.ourlib) (:require example.otherns))
Again we can save some typing here. Similar to
import we can factor out the common prefix of our libraries' namespaces.
(ns example.thirdns (:require (example ourlib otherns)))
Of course
ourlib contains 738 more functions, not only those shown above. We don't really want to have
use because bringing in so many names risks conflicts, but we also don't want to type the namespace all the time either. So the first thing we do is employ an
alias. But wait! You guessed it:
ns helps us again.
(ns example.otherns (:require (example [ourlib :as ol])))
The
:as takes care of the aliasing and now we can refer to our
add1 function as
ol/add1!
Up to now it is already quite nice. But if we think a bit about our source code organization, we might end up with the insight that 739 functions in one single file is maybe not the best idea to keep around. So we decide to do some refactoring. We create a file "example/ourlib/add1.clj" and put our functions there. We don't want the user to have to load many files instead of one, so we modify the "example/ourlib.clj" file to load any additional files as follows.
(ns example.ourlib (:load "ourlib/add1" "ourlib/otherfunc" "ourlib/morefuncs"))
So the user still loads the "public" example.ourlib lib, which takes care of loading the rest. (The :load implementation includes code to provide the ".clj" suffix for the files being loaded)
For more information see the docstring of require -
(doc require).
References[edit]
- ↑
- ↑ range on ClojureDocs | https://en.wikibooks.org/wiki/Clojure_Programming/Concepts | CC-MAIN-2014-10 | refinedweb | 7,650 | 65.62 |
Red Hat Bugzilla – Bug 443548
Use a dynamic pipe menu for Fedora menu ?
Last modified: 2016-09-19 22:38:34 EDT
With the following code, we could ship openbox with a dynamic Fedora pipe menu.
import gobject>"
...then, in the default menu configuration, all you have to put is:
<menu id="fedora" label="Fedora" execute="/path/to/script.py" />
I initially wrote this script for the security spin (more details here:), but I think many people would find it
useful. This is just a suggestion up for discussion. What do you think?
I was thinking about adding gnome menus to the default menu for some time, but
wasn't able to finish it.
There are already written some utils that we could use:
- obx-xdgmenu (not packaged in Fedora) which does the same as your script, just
written in C, but it doesn't replace character & -> & and similar, so "Sound
& Video" menu doesn't work. The script you posted probably has the same problem ;-).
- obmenu package has obm-xdg, unfortunately it doesn't seem to work well with
Fedora menus.
If you fix the problem with & character in the script and add an option to
create arbitrary menus (I think settings and preferences would be nice to have),
I'll be happy to include it.
If possible I'd like to keep openbox dependencies minimal and not add
gnome-menus (and python) to them. Do you think creating a -menus subpackage is a
good idea? A wrapper to switch between the pipe-menus and the hardcoded menus
when the subpackage is not installed would be needed.
Changing version to '9' as part of upcoming Fedora 9 GA.
More information and reason for this action is here:
I've included the script with some minor modifications in the package. Thanks!
openbox-3.4.7.2-2.fc8,obconf-2.0.3-2.fc8 has been submitted as an update for Fedora 8
openbox-3.4.7.2-2.fc8, obconf-2.0 openbox obconf'. You can provide feedback for this update here:
No update for Fedora 9?
openbox-3.4.7.2-2.fc8, obconf-2.0.3-2.fc8 has been pushed to the Fedora 8 stable repository. If problems still persist, please make note of it in this bug report. | https://bugzilla.redhat.com/show_bug.cgi?id=443548 | CC-MAIN-2017-09 | refinedweb | 382 | 64 |
CTIME(3) BSD Programmer's Manual CTIME(3)
asctime, asctime_r, ctime, ctime_r, difftime, gmtime, gmtime_r, localtime, localtime_r, mktime, timegm, timelocal - convert date and time to ASCII
#include <sys/types.h> #include <time.h> extern char *tzname[2]; void tzset(void); char * ctime(const time_t *clock); char * ctime_r(const time_t *clock, char *buf); double difftime(time_t time1, time_t time0); char * asctime(const struct tm *tm); char * asctime_r(const struct tm *tm, char *buf); struct tm * localtime(const time_t *clock); struct tm * localtime_r(const time_t *clock, struct tm *result); struct tm * gmtime(const time_t *clock); struct tm * gmtime_r(const time_t *clock, struct tm *result); time_t mktime(struct tm *tm); time_t timegm(struct tm *tm); time_t timelocal(struct tm *tm);
The ctime() function converts a time_t, pointed to by clock, representing the time in seconds since 00:00:00 UTC, 1970-01-01, and returns a pointer to a string of the form Thu Nov 24 18:22:48 1986\n Years requiring fewer than four characters are padded with leading zeroes. For years longer than four characters, the string is of the form Thu Nov 24 18:22:48 81986\n with five spaces before the year. These unusual formats are designed to make it less likely that older software that expects exactly 26 bytes of output will mistakenly output misleading values for out-of-range years. The ctime_r() function converts the calendar time pointed to by clock to local time in exactly the same way as ctime() and puts the string into the array pointed to by buf (which contains at least 26 bytes) and re- turns buf. Unlike ctime(), the thread-safe version ctime_r() is not re- quired to set tzname. The localtime() and gmtime() functions return pointers to tm structures, described below. localtime() corrects for the time zone and any time zone adjustments (such as Daylight Saving Time in the United States). After filling in the tm structure, localtime() sets the tm_isdst'th element of tzname to a pointer to an ASCII string that's the time zone abbreviation to be used with the return value of localtime(). gmtime() converts to Coordinated Universal Time. The localtime_r() and gmtime_r() functions convert the calendar time pointed to by clock into a broken-down time in exactly the same way as their non-reentrant counterparts, localtime() and gmtime(), but instead store the result directly into the structure pointed to by result. Unlike localtime(), the reentrant version is not required to set tzname. asctime() converts a time value contained in a tm structure to a string, as shown in the above example, and returns a pointer to the string. asctime_r() uses the buffer pointed to by buf (which should contain at least 26 bytes) and then returns buf.. timelocal() is a deprecated interface that is equivalent to calling mktime() with a negative value for tm_isdst. timegm() is a deprecated interface that converts the broken-down time, as returned by gmtime(), into a calendar time value with the same encoding as that of the values returned by the time() function.) */ time_t tm_year; /* year - 1900 */ int tm_wday; /* day of week (Sunday = 0) */ int tm_yday; /* day of year (0 - 365) */ int tm_isdst; /* is summer time in effect? */ long tm_gmtoff; /* offset from UTC in seconds */ char *tm_zone; /* abbreviation of timezone name */ The tm_zone and tm_gmtoff fields exist, and are filled in, only if ar- rangements to do so were made when the library containing these functions was created. There is no guarantee that these fields will continue to ex- ist.
/usr/share/zoneinfo time zone information directory /etc/localtime local time zone file /usr/share/zoneinfo/posixrules used with POSIX-style TZ's /usr/share/zoneinfo/UTC for UTC leap seconds If /usr/share/zoneinfo/UTC is absent, UTC leap seconds are loaded from /usr/share/zoneinfo/posixrules.
getenv(3), strftime(3), time(3), tzset(3), tzfile(5), zic(8)
The return values of the non re-entrant functions point to static data; the data is overwritten by each call. The tm_zone field of a returned struct tm points to a static array of characters, which will also be overwritten at the next call (and by calls to tzset()). asctime() and ctime() behave strangely for years before 1000 or after 9999. The 1989 and 1999 editions of the C Standard say that years from -99 through 999 are converted without extra spaces, but this conflicts with longstanding tradition and with this implementation. Traditional im- plementations of these two functions are restricted to years in the range 1900 through 2099. To avoid this portability mess, new programs should use strftime() instead. The default system time zone may be set by running "zic -l timezone" as the superuser. Avoid using out-of-range values with mktime() when setting up lunch with promptness sticklers in Riyadh. MirOS BSD #10-current August 8, 2007. | http://mirbsd.mirsolutions.de/htman/i386/man3/asctime_r.htm | crawl-003 | refinedweb | 803 | 54.97 |
Pythonista Countdown on button with .pyui
How do I make a timer start to countdown with the click of a button? Here is an image of what I have so far. Im trying to make it so when you click "Launch", the timer starts. Can anyone help me? Thanks
@JeremyMH, if you have created this in the UI editor, now you have to move to the code.
You need to set the button’s
actionbe a function that updates the timer and, if the timer should still be running, calls itself after a second with
ui.delay.
sorry im kind of a noob. can you please explain further! thanks
like for example, how would i make it call the function and how woukd i change the value of the ui? i havent used python in a while.
@JeremyMH Can you please paste here the code that you have so far?
It is easier to debug Python code than it is to debug English prose.
@JeremyMH, let’s check the basics first.
- Have you read the basic usage page?
- Have you checked this section of the ui module manual and tried the example of tying a button to an action?, it looks to me that your code should do nothing, not even give an error.
Did you try the code in my second link above?
Look at the stopwatch1.py and stopwatch1.pyui example in the following GitHub repository. May be this is what you are looking for.
@JeremyMH a foundation vote that proposed @enceladus but without the editor UI
import ui class StopWatch(ui.View): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.value = 0 self.state = 'stop' self.update_interval = .1): if self.state == 'run': self.value += 1 self.set_needs_display() def button_action(sender): v1 = sender.superview['Watch'] sender.hidden = True if sender.title == 'Reset': v1.value = 0 v1.state = 'stop' sender.superview['Start'].hidden = False elif sender.title == 'Start': v1.value = 0 v1.state = 'run' sender.superview['Stop'].hidden = False elif sender.title == 'Stop': v1.state = 'stop' sender.superview['Reset'].hidden = False v = ui.View(width=397, height=271) v.add_subview(StopWatch(name = 'Watch', frame = (0, 30, 345.00, 76.00))) v.present('sheet') for btn in ['Reset', 'Stop', 'Start']: v.add_subview(ui.Button(frame=(v.center.x-40, v.center.y-40, 80, 80), name=btn, border_width=1, corner_radius=40, action=button_action)) v[btn].title = btn if btn != 'Start': v[btn].hidden = True``` | https://forum.omz-software.com/topic/6615/pythonista-countdown-on-button-with-pyui/10 | CC-MAIN-2022-27 | refinedweb | 406 | 72.73 |
User Interface
When you start Qt Creator, it opens to the Welcome mode, where you can:
- Open recent sessions and projects
- Create and open projects
- Open tutorials and example projects
- Read news from the online community and Qt blogs
- Create or manage a Qt Account
You can use the mode selector (1) to change to another Qt Creator mode.
You can use the kit selector (2) to select the kit for running (3), debugging (4), or building (5) the application. Output from these actions is displayed in the output panes (7).
You can use the locator (6) to to browse through projects, files, classes, functions, documentation, and file systems.
For a quick tour of the user interface that takes you to the locations of these controls, select Help > UI Tour.
Modes
The mode selector allows you to quickly switch between tasks such as editing project and source files, designing application UIs, configuring how projects are built and executed, and debugging your applications. To change modes, click the icons, or use the corresponding keyboard shortcut.
To hide the mode selector and to save space on the display, select Window > Mode Selector Style > Hidden. To only show icons on the mode selector, select the Icons Only style.
The following image displays an example application in Edit mode (1) and Design mode (2).
You can use Qt Creator in the following modes:
- Welcome mode for opening projects.
- Edit mode for editing project and source files.
- Design mode for designing and developing application user interfaces. This mode is available for UI files.
- Debug mode for inspecting the state of your application while debugging and for using code analysis tools to detect memory leaks and profile C++ or QML code.
- Projects mode for configuring project building and execution. This mode is available when a project is open.
- Help mode for viewing Qt documentation.
Certain actions in Qt Creator trigger a mode change. Clicking on Debug > Start Debugging > Start Debugging automatically switches to Debug mode.
Browsing Project Contents
A left and right sidebar are available in most Qt Creator modes. The availability of the sidebars and their contents depend on the mode.
In the Edit mode, you can use the sidebars to browse the project contents.
You can select the contents of the sidebars in the sidebar menu (1):
- Projects shows a list of projects open in the current session and the project files needed by the build system.
- Open Documents shows currently open files.
- Bookmarks shows all bookmarks for the current session.
- File System shows all files in the currently selected directory.
- Git Branches shows the local and remote brances for the project in the Git version control system. For more information, see Working with Branches.
- Outline shows an overview of defined types and other symbols, as well as their properties and hierarchy in a source file.
The following views display additional information about C++ code:
- Class View shows the class hierarchy of the currently open projects.
- Tests lists autotests and Qt Quick tests in the project. For more information, see Running Autotests.
- Type Hierarchy shows the base classes of a class.
- Include Hierarchy shows which files are included in the current file and which files include the current file.
For more information about the sidebar views that are only available when editing QML files in the Design mode, see Editing QML Files in Design Mode.
You can change the view of the sidebars in the following ways:
- To toggle the left sidebar, click
(Hide Left Sidebar/Show Left Sidebar) or press Alt+0 (Cmd+0 on macOS). To toggle the right sidebar, click
(Hide Right Sidebar/Show Right Sidebar) or press Alt+Shift+0 (Cmd+Shift+0 on macOS).
- To split a sidebar, click
(Split). Select new content to view in the split view.
- To close a sidebar view, click
The additional options in each view are described in the following sections.
In some views, right-clicking opens a context menu that contains functions for managing the objects listed in the view.
Viewing Project Files
The sidebar displays projects in a project tree. The project tree contains a list of all projects open in the current session. For each project, the tree visualizes the build system structure of the project and lists all files that are part of the project.
Some build systems support adding and removing files to a project in Qt Creator (currently qmake and Qbs). The faithful display of the project structure allows to specify exactly where a new file should be placed in the build system.:
- To open files that belong to a project, double-click them in the project tree. Files open in the appropriate editor, according to the file type. For example, code source files open in the code editor.
- To bring up a context menu containing the actions most commonly needed, right-click an item in the project tree. For example, through the menu of the project root directory you can, among other actions, run and close the project.
- To hide the categories and sort project files alphabetically, click
(Filter Tree) and select Simplify Tree.
- To hide source files which are automatically generated by the build system, select Filter Tree > Hide Generated Files.
- To stop synchronizing the position in the project tree with the file currently opened in the editor, deselect
(Synchronize with Editor). You can specify a keyboard shortcut to use when synchronization is needed. Select Tools > Options > Environment > Keyboard, and then search for Show in Explorer.
- To see the absolute path of a file, move the mouse pointer over the file name.
Files that are not sources or data can be still included into a project's distribution tarball by adding their paths to the
DISTFILES variable in the .pro file. This way they also become known to Qt Creator, so that they are visible in the Projects view and are known to the locator and search. The Projects view contains context menus for managing projects, subprojects, folders, and files. The following functions are available for managing projects and subprojects:
- Set a project as the active project.
- Execute the Build menu commands.
- Create new files. For more information, see Adding Files to Projects.
- Add existing files and directories.
- Add libraries. For more information, see Adding Libraries to Projects.
- Add and remove subprojects.
- Search from the selected directory.
- Close projects.
For managing files and directories, the same functions are available as in the File System view. In addition, you can remove and rename files.
Viewing the File System
If you cannot see a file in the Projects view, switch to the File System view, which shows all the files in the file system.
By default, the contents of the directory that contains the file currently active in the editor are displayed. To stop the synchronization, delesect the Synchronize Root Directory with Editor button.
The path to the active file is displayed as bread crumbs. You can move to any directory along the path by clicking it. To hide the bread crumbs, select
(Options) and then deselect the Show Bread Crumbs check box.
To move to the root directory of the file system, select Computer in the menu (1). Select Home to move to the user's home directory. Further, you can select a project to move to an open project or Projects to open the Projects view.
By default, folders are separated from files and listed first in the view. To list all items in alphabetic order, select Options and then deselect the Show Folders on Top check box.
To also show hidden files, select Options > Show Hidden Files.
To stop the synchronization with the file currently opened in the editor, deselect Synchronize with Editor.
Use the context menu functions to:
- Open files with the default editor or some other editor.
- Open a project located in the selected directory.
- Show the file or directory in the file explorer.
- Open a terminal window in the selected directory or in the directory that contains the file. To specify the terminal to use on Linux and macOS, select Tools > Options > Environment > System.
- Search from the selected directory.
- View file properties, such as MIME type, default editor, and size.
- Create new files. For more information, see Adding Files to Projects.
- Rename or remove existing files.
- Create new folders.
- Compare the selected file with the currently open file in the diff editor. For more information, see Comparing Files.
- Display the contents of a particular directory in the view.
- Collapse all open folders.
Viewing QML Types
The Outline view shows the type hierarchy in a QML file.
- To see a complete list of all bindings, select Filter Tree > Show All Bindings.
- To stop the synchronization with the QML type selected in the editor, deselect Synchronize with Editor.
Viewing the Class Hierarchy
The Class View shows the class hierarchy of the currently open projects. To organize the view by subprojects, click
(Show Subprojects).
To visit all parts of a namespace, double-click on the namespace item multiple times.
Viewing Type Hierarchy
To view the base classes of a class, right-click the class and select Open Type Hierarchy or press Ctrl+Shift+T.
Viewing Include Hierarchy
To view which files are included in the current file and which files include the current file, right-click in the editor and select Open Include Hierarchy or press Ctrl+Shift+I.
Viewing Output
The task pane in Qt Creator can display one of the following panes:
- Issues
- Search Results
- Application Output
- Compile Output
- Debugger Console
- To-Do Entries
- Version Control
- General Messages
- Test Results. In these panes, you can also use the zoom buttons to increase and decrease the text size of the output.
To open the General Messages and Version Control panes, select Window > Output Panes. To display the To-Do Entries pane, enable the Todo plugin.
For more information about the Debugger Console view, see Executing JavaScript Expressions.
Issues
The Issues pane provides lists of following types of issues:
-.
Select toolbar buttons to run applications, to attach the debugger to the running application, and to stop running or debugging.
To specify settings for displaying application output, select Tools > Options > Build & Run > Application and restart Qt Creator.
In addition, you can open task list files generated by code scanning and analysis tools in the Issues pane. For more information, see Showing Task List Files in Issues Pane..
Use the toolbar buttons (1) or keyboard shortcuts to:
- Export SVG images to pixmaps
- Switch between background and outline modes
- Zoom in and out
- Fit images to screen
- Return to original size
- Play and pause animated GIF and MNG images
Exporting SVG Images
If you Tips and Tricks.
Platform Notes
This section describes the cases where the behavior of Qt Creator depends on the operating system it runs on.
Location of Functions
Qt Creator uses standard names and locations for standard features, such as options or preferences. In this manual, the names and locations on Windows and Linux are usually used to keep the instructions short. Here are some places to check if you cannot find a function, dialog, or keyboard shortcut on macOS when following the instructions:
Location of Settings Files
Qt Creator creates the following files and directories:
- QtCreator.db
- QtCreator.ini
- qtversion.xml
- toolChains.xml
- qtcreator
- qtc-qmldump
The location of the above files and directories depends on the platform:
- On Linux and other Unix platforms, the files are located in
~/.config/QtProjectand
~/.local/share/data/QtProject/qtcreator.
- On macOS, the files are located in
~/.config/QtProjectand
~/Library/Application Support/QtProject/Qt Creator.
- On Windows XP, the files are located in
%SystemDrive%\Documents and Settings\%USERNAME%\Application Data\QtProjectand
%SystemDrive%\Documents and Settings\%USERNAME%\Local Settings\Application Data\QtProject.
- On Windows 7, the files are located in
%SystemDrive%\Users\%USERNAME%\AppData\Roaming\QtProjectand
%SystemDrive%\Users\%USERNAME%\AppData\Local\QtProject.
High DPI Scaling
The operating systems supported by Qt Creator implement high dots-per-inch (DPI) scaling at varying levels. Therefore, Qt Creator handles high DPI scaling differently on different operating system:
- On macOS, high DPI scaling is forced, which means that Qt Creator allows Qt to use the system scaling factor as the Qt Creator scaling factor.
- On Windows, if no scaling environment variables are set, Qt Creator instructs Qt to detect the scaling factor and use it for Qt Creator.
- On Linux, Qt Creator leaves it to the user to enable high DPI scaling, because the process varies so much on different distributions and windowing systems that it cannot be reliably done automatically.
To override the default approach and always enable high-DPI scaling, select Tools > Options > Environment > Enable high DPI scaling. The changes will take effect after you restart Qt. | https://doc-snapshots.qt.io/qtcreator-master/creator-quick-tour.html | CC-MAIN-2019-13 | refinedweb | 2,107 | 64.3 |
Type: Posts; User: moffy
I am having a problem with the CMFCToolBarEditBox control. I have added the below code:
MyMFCToolBar.h
#pragma once
#include "resource.h"
class CMyMFCToolBar : public CMFCToolBar
{
putVolume() only changes the output volume of the wave or directshow audio renderer, it does not change the volume level of a microphone input.
Thank you for all your advice. I am in the middle of a rewrite and have found the rules listed in post#2 valuable. I pretty much have to agree with them all. I especially like the forward declaration...
Thank you for your advice.
1.D_ Drmmr: I have been doing something similar in C to the constructor/destructor you illustrated. Very handy when it comes to cleaning up memory structures. If you...
Thank you D_Drmmr for your advice.
I have been thinking about the differences that result from as you mentioned C++ which is OOP and C which is function based. In C++ it makes good sense to hide...
Thank you laserlight for your response. How many of the rules do you think are based upon the precept of hiding the data structures and code? I would be interested in your opinion.
Found this advice:
What are your thoughts.
Hello,
I am a self taught newbie in C/C++ with some holes in my knowledge. I am looking for advice on how to structure my headers for a C/C++ project that I am working on in VS2008. I would like...
Hello there,
I am unfortunately self taught in both C and C++, and get myself tied in knots with headers. At present I am working on a project with multiple files as well as libraries that are... | http://forums.codeguru.com/search.php?s=eabb4f3d6ab62225c7bd56930b60dfc9&searchid=2754073 | CC-MAIN-2014-15 | refinedweb | 283 | 76.22 |
Introduction.. There are some inconsistencies across all web services. For example some web methods take a XML string as input while others take an XmlNode as input. Most web methods return the result as XmlNode while others return to each web service and web method the detailed SDK help page.
A summary of the SharePoint Portal Server web services
SharePoint Portal Server provides the same web services as Windows SharePoint Services. It also provides the following five additional web services. WSS Web Services Description Area Service Areas are sections used in SharePoint Portal Server to group content. This web service allows to manage areas. You can create new areas, update areas, remove areas, get the list of sub-areas, etc. Query Service The Query web service is used by clients to search SharePoint. You can send in complex search XML requests and get a result-set of matches. User Profile Service Users in SPS have user profiles that are used to target content to audiences (users). This web service allows to obtain user profile information. It does not allow to create or modify user profiles. SPS Crawl Service This web service is undocumented and is used by SharePoint itself for site crawling purposes. Outlook Adapter Service Provides the same capabilities as the Alerts web service of WSS.
The table below shows the URLs to use for each web service provided by SharePoint Portal Server. You can add them the same way as the WSS web services described above.
WSS Web Services Web Reference Area Service> Query Service> User Profile Service http:// /_vti_bin/userprofileservice.asmx SPS Crawl Service> Outlook Adapter Service http:// /_vti_bin/outlookadapter.asmxd>
Namespaces used in the returned SharePoint XML documents
Many of the web methods return its result in form of an XML document. Most root nodes have a namespace URI associated with it. Here is an example XML document returned by the GetListCollection() web method (on the Lists web service). Please note that this is just a partial XML snippet for demonstration purpose:
Naturally we would think of running an XPath query like "//List" using the SelectNodes() method on the XmlDocument or XmlNode object. We expect it to return all the List nodes of this XML document. But the result returned is empty. The reason being that you need to query within the namespace associated with the root node. But how do you do that if there is no namespace qualifier (or prefix) associated with the namespace URI. We need to use the XmlNamespaceManager class to associate a namespace prefix to the namespace URI. Here is the code snippet:
Some real life examples of using the SharePoint web services
The following examples demonstrate how you can leverage the SharePoint web services to interact tightly with SharePoint from within your application. The detailed code for each example can be found in the attached "SharePoint explorer" sample application. The description below explains which web service and web method to use to obtain the desired SharePoint information.
Example 1 - Get the collection of SharePoint lists, fields and views
In the first example we want to get the collection of SharePoint lists. For each list we want to get all the defined list fields (fields you can use to store information) and finally all views associated with the list. Here are the web methods to call:
Example 2 - Get the list of users and site-groups
In this example we want to get the list of site users and to which site group each user belongs. We also want to get the list of site groups and which users belong to each site group.
Example 3 - Get the list of sites, site-templates and list-templates
With the last example we want to get a list of all sites in the site collection. We want to get for the site collection the list of site templates. Additionally we want for each site the list of list templates.
Summary
SharePoint comes with a vast number of web services and web methods which enable you to tightly integrate SharePoint capabilities into your application. It is very easy to learn and use these web services and web methods. Please refer to the attached "SharePoint web service browser" example which provides a complete wrapper class for all existing (about 150) web methods. This removes the burden of adding all the web references and worrying about the details how to instantiate each web method, etc. The sample application provides a user interface to explore all web methods. You can browse the web services and web methods, display the SDK help page, enter the input values, execute the web method and look at the displayed output values.
The second example - "SharePoint explorer" provides a much more comprehensive sample of how to use and work with the SharePoint web services and web methods. It retrieves as much information and displays the data in lists and tree nodes (always running simple XPath queries against the result-set). The user interface allows you to traverse through the related data. You can also write your own web services using the managed SharePoint server API. Here is a sample application which provides a document check-in and check-out capability through a new web service.
Talk to SharePoint through its web services
Publishing Exception in SPS Web Parts
thanks klaus for these valuable information, reders may try this,ADD|Update|Delete Item SharePoint Web Services
Thanks Klaus, very good article.
ry article. | http://www.c-sharpcorner.com/UploadFile/klaus_salchner/SharePointWS11152005045049AM/SharePointWS.aspx | crawl-003 | refinedweb | 910 | 61.77 |
Now we have a bitmap class we can move on to make use of it within a BmpToAVI class.
Add a new class called BmpToAVI to the project. The new class is going to make use of a number of API calls within the VfW API but rather than translate them all it makes more sense to create definitions for only those that are needed.
When you use VfW you first have to initialise the system and when you have finished you have to free it. These are most sensibly done in the class constructor and destructor as these are called automatically when the class is created and destroyed respectively:
class BmpToAVI{[DllImport("avifil32.dll")] extern static void AVIFileInit();[DllImport("avifil32.dll")] extern static void AVIFileExit();
public BmpToAVI(IntPtr hwnd) { m_hWnd = hwnd; AVIFileInit(); } ~BmpToAVI() { AVIFileExit(); }}
The need for the hwnd parameter will become apparent later. For now we simply store it in a private variable:
private IntPtr m_hWnd;
The API definitions are the first of several and they simply define the DLL in which the functions reside. Things get more complicated when we have parameters to translate to C# data types. To make these DllImport commands work we also need to add:
using System.Runtime.InteropServices;
The VfW AVI system works in terms of multimedia data streams stored in a single file. Streams can be video, audio or text. First we have to open, and in this case create, an AVI file and this can be done as part of adding the first bitmap to the video.
Add the following function to the class:
public void FirstFrame( string AVIfile, string BMPfile){ int result = AVIFileOpen( ref pFile, AVIfile, OF_WRITE | OF_CREATE, 0);
This returns a pointer to the AVI file that is used in other API calls and this has to be declared as a private global variable along with the other pointers to the streams that we are going to create – one standard and one compressed:
private IntPtr pFile = IntPtr.Zero;private IntPtr pStream = IntPtr.Zero;private IntPtr psComp = IntPtr.Zero;
The definition of the API call is:
[DllImport("avifil32.dll")]extern static int AVIFileOpen( ref IntPtr pfile, string File, int Mode, int clsidHandler);
and the constants are:
private const int OF_WRITE=0x00000001;private const int OF_CREATE=0x00001000;
The values for constants and error codes can be found in the file VFW.h file contained in the platform SDK which can be downloaded from the Microsoft website. The translation of the DLL function is relatively easy. The IntPtr type is useful for almost any 32-bit handle or pointer. The result returned should be zero if the function has worked.
Now that we have the file open we have to create a video stream to store in it. This requires us to provide information about the format of the video. Not difficult but the API does seem to need this or very similar information supplied to it more often than seems strictly necessary. Much if the format information is concerned with the details of the bitmaps that are going to be used to create the stream so now is a good time to load the first bitmap:
RawBitmap bm = new RawBitmap(); bm.LoadFromFile(BMPfile);
Next we need to fill in the details in an AVISTREAMINFO struct. This is fairly easy to translate from the C++ definition:
StructLayout(LayoutKind.Sequential, Pack = 1)]public unsafe struct AVISTREAMINFO{ public Int32 fccType; public Int32 fccHandler; public Int32 dwFlags; public Int32 dwCaps; public Int16 wPriority; public Int16 wLanguage; public Int32 dwScale; public Int32 dwRate; public Int32 dwStart; public Int32 dwLength; public Int32 dwInitialFrames; public Int32 dwSuggestedBufferSize; public Int32 dwQuality; public Int32 dwSampleSize; public AVI_RECT rcFrame; public Int32 dwEditCount; public Int32 dwFormatChangeCount; public fixed char szName[64];};
[StructLayout(LayoutKind.Sequential, Pack = 1)]public struct AVI_RECT{ public Int32 left; public Int32 top; public Int32 right; public Int32 bottom;};
The only complication is the need to use a fixed sized character array. This is considered “unsafe” – hence you have to remember to allow unsafe code in the project properties. We also need an AVI_RECT structure. Filling in the details is fairly easy but it is difficult knowing which parameters are important and which aren’t:
AVISTREAMINFO Sinfo = new AVISTREAMINFO();Sinfo.fccType = mmioStringToFOURCC("vids", 0);Sinfo.fccHandler = 0;Sinfo.dwScale = 1;Sinfo.dwRate = 10;Sinfo.dwSuggestedBufferSize = bm.bmIH.biSizeImage;Sinfo.rcFrame.top = 0;Sinfo.rcFrame.left = 0;Sinfo.rcFrame.right = bm.bmIH.biWidth;Sinfo.rcFrame.bottom = bm.bmIH.biHeight;
It is clear that we do need to specify the type of the stream “vids” for video in this case, the size of the frame and the frame rate, set to 10 frames per second in this case. We also need the mmioStringToFOURCC to convert the string “vids” to the multimedia code for a video stream:
[DllImport("winmm.dll", EntryPoint = "mmioStringToFOURCCA")]extern static int mmioStringToFOURCC( string sz, int Flags)
In this case we use an EntryPoint to specify the function in the DLL because it has a slightly different name to the C# function.
The stream can now be created using the format information:
result = AVIFileCreateStream( pFile, ref pStream, ref Sinfo);
This returns a pointer to the video stream within the AVI file. The definition of the API call is:
[DllImport("avifil32.dll")]extern static int AVIFileCreateStream( IntPtr pfile, ref IntPtr pavi, ref AVISTREAMINFO lParam);
<ASIN:0735619115>
<ASIN:0735614555>
<ASIN:073561945X> | http://i-programmer.info/projects/38-windows/220-bitmaps-into-videos.html?start=2 | CC-MAIN-2018-17 | refinedweb | 890 | 50.67 |
Every time you use some external library, you need compile your application with the library header, but this is not really necessary, if you know the library function interface, then the "extern" can help!
The "extern" keyword is used to tell the compiler that the variable will be solved later, at linker time, which means the variable is somewhere. This can be used to create a global variable between different files and solve when compiled/linked together, or simply to avoid the use of the header file of a library! this can be useful for missing header files for example.
Let's use the "Creating C/C++ Shared and Static Library" as example and compile the same library using just the "libsumlib.a" file. Edit the "main.cpp" to include the "extern" keyword:
#include <iostream> extern int Sum(int,int); using namespace std; int main(void) { cout << Sum(1,2) << endl; return 0; }
And compile:
g++ main.cpp -Lsumlib/lib/static -lsumlib
Note the missing -I include directory, just the static library "libsumlib.a" is used, the binary can run as always:
./a.out 3
0 comentários : | http://www.l3oc.com/2015/06/cc-extern-and-use-of-library-without.html | CC-MAIN-2017-51 | refinedweb | 187 | 62.48 |
Joy, frustration, excitement, madness, aha's, headaches, ... codito ergo sum!
SkypeWhat is.” Ok, big deal you could say, we already have Messenger. But I recently had some problems making voice connections to family with Messenger. Even when they were using one of the most recent ADSL Routers, with UPnP enabled, I couldn't connect. So I gave Skype a try and I must say I was amazed; not only the sound quality is in my opinion even better than Messenger, but Skype could make connections to pc's behind firewalls and/or routers. Installation and account creation is very easy, connections are made within a second. Ofcourse they have not (yet?) the huge user base like Messenger, and their focus is on voice communications, but I keep this little program so I can voice chat with people to Messenger can not connect.
Click the link on my blog, to call me!
C# HandleWhiteSpace Add-InFrom the authors website: The C# code editor in Visual Studio.NET does not handle whitespaces automatically like the VB.NET code editor does. This Add-In solves this problem by adding the 'Handle WhiteSpace' menu option to the Visual Studio.NET Tools menu. This option formats the C# code of the active code editor. This includes:
The author, Fons Sonnemans, sent me an email pointing me to this little utility he hase created. Check out the other downloads, some cool stuff!
Component developers have to make decisions all the time about how to flexible their components need to be. They need to find the balance between flexibility and development duration: if they want to increase the flexibility, they probably need to spend more development time. How to implement the needed flexibility depends on the situation. In some cases it is needed that the components must execute code that will be written by developers that actually use these components, so that code is not yet available while designing the component. In other cases it may be required something changes depending on a setting, for example how Customers are displayed (first name + last name, last name + initial, …). Providing this kind of flexibility in your components can require a rather complex model to be able to determine what the ToString method of the Customer object needs to return. But this kind of flexibility could quite easy be accomplished if you only could put some code in a String, and let this code be evaluated by your component. Of course you’d want to avoid writing your own parser for such a String at all times, so what could be of any help? CodeDom and Reflection can be used for solving this kind of problems.
CodeDom can be used mainly for two goals: compilation at runtime and code generation. It is possible to define a class in memory using the CodeDom, and let it compile at runtime, so you can use the class you’ve constructed in your code, as a compiled class! But once you’ve defined that class using CodeDom, you can easily generated VB.NET or C# code for it. Reflection can be used to investigate and invoke objects at runtime. For example you can use Reflection to iterate through all properties or methods of a class, and invoke them when needed. In my opinion the CodeDom – Reflection combination is one on the great concepts in .NET. Let’s find out how to use them for the “flexibility problem”!
For example, you want to build a component that has a Customer class which’ ToString function can be determined when instantiating Customer instances. Let’s say the Customer class has two properties Name and Street. The flexible part of our Customer class is that we can determine how the ToString method will be implemented. For example:Customer c = new Customer("customer.Name");c.Name = "Jan";If the ToString method of this customer object would be called, the string “Jan” would be returned. When we would instantiate the customer class as shown below, the result of the ToString method would be “JAN”, because of the ToUpper() function call:Customer c = new Customer("customer.Name.ToUpper()");c.Name = "Jan";Notice that the same Customer class is used to obtain that result, and even complex functions can be used:Customer c = new Customer("customer.Name + \" (\" + customer.Name.Length + \")\"");c.Name = "Jan";The result of invoking the ToString method would be “Jan (3)”. Notice that the quotes are escaped because you’ll have to put them in a String. Without the escaping that string looks like: customer.Name + “ (“ + customer.Name.Length + “)”
To obtain this behaviour the implementation of the ToString function on the Customer class is:public override string ToString(){ Assembly ass;
if(!AssemblyCache.ContainsKey(this.GetType().Name + _toString)) { //Create the class definition using CodeDom CodeTypeDeclaration tempClass = new CodeTypeDeclaration("TempClass"); tempClass.IsClass = true; CodeMemberMethod tempMethod = new CodeMemberMethod(); tempMethod.Name = "GetValue"; tempMethod.Attributes = MemberAttributes.Public; tempMethod.ReturnType = new CodeTypeReference(typeof(string)); tempMethod.Parameters.Add( new CodeParameterDeclarationExpression(this.GetType(), "customer")); tempMethod.Statements.Add( new CodeMethodReturnStatement( new CodeSnippetExpression(this._toString))); tempClass.Members.Add(tempMethod); //Compile that class CodeCompileUnit unit = new CodeCompileUnit(); CodeNamespace ns = new CodeNamespace("Temp"); ns.Types.Add(tempClass); unit.Namespaces.Add(ns); CompilerParameters compilerParams = new CompilerParameters(); compilerParams.GenerateInMemory = true; string assemblyFileName = Assembly.GetExecutingAssembly().GetName().CodeBase; compilerParams.ReferencedAssemblies.Add( assemblyFileName.Substring("".Length)); ICodeCompiler compiler = new Microsoft.CSharp.CSharpCodeProvider().CreateCompiler(); CompilerResults results = compiler.CompileAssemblyFromDom(compilerParams, unit); ass = results.CompiledAssembly; //Add compiled assembly to cache AssemblyCache.Add(ass, this.GetType().Name + _toString); } else { ass = AssemblyCache.GetAssembly(this.GetType().Name + _toString); }
//Create an instance of the compiled class Type tempClassType = ass.GetType("Temp.TempClass"); object tempClassInstance = Activator.CreateInstance(tempClassType);
//Call the GetValue method of the TempClass instance //using Reflection. MethodInfo getValueMethod = tempClassType.GetMethod("GetValue"); return (string)getValueMethod.Invoke( tempClassInstance,new object[] {this});}
First the AssemblyCache is checked to find out if the required code is already compiled before. This cache is implemented using the Singleton design pattern, for the complete code, look at the end of this post. If the code was not yet compiled before, a new CodeTypeDeclaration instance is created that will represent a new class that will be compiled in memory. A new CodeMemberMethod is added to that class, representing the GetValue function. This function contains only one line: the return statement. This return statement executes the code that is passed in the constructor and stored in the _toString variable. At that point the class is finished, so a CodeCompileUnit is created, containing a Namespace that contains the temporary class. To compile that unit, a Compiler object is obtained from the CsharpCodeProvider. The resulting assembly is placed in the cache.
So now, we have a compiled assembly containing the code that we passed in the constructor. To actually use the temporary class, Reflection comes into play. First we need to retrieve a Type instance from that Assembly. Then a MethodInfo object is created, that represents the GetValue method. This MethodInfo object is used to invoke the method, and the result is used as return value. It’s that simple!
The AssemblyCache class is implemented like this:public class AssemblyCache{ static System.Collections.Hashtable assemblies = new System.Collections.Hashtable();
public static bool ContainsKey(string key) { return assemblies.ContainsKey(key); }
public static void Add(Assembly ass, string key) { assemblies.Add(key, ass); }
public static Assembly GetAssembly(string key) { return (Assembly)assemblies[key]; }
private AssemblyCache() { //Singleton! }}
As you can see, CodeDom and Reflection can give developers a great ability to create very flexible components. If you need the complete source code of the example, contact me so I can send you the solution, or I’ll publish it so you could download it.
Tom and Gerd have updated the MSDN Belux site. They do this every week, but this time they have introduced some nice features (see Tom's post). There is a page about how you can participate in the Belux MSDN Community, so every developer in the Belux (Belgium and Luxembourg) should take a look. A list of bloggers in the Belux is added too.
But, a little bit hidden in my opnion, a link to the brand new MS Belux Forums is included too! This forum could use some more publicity, but I'm confidend something like that is planned. Nice work!
My new article for MSDN Belux is online!
<quote>Summary:.</quote>
I would like to say thanks to Tom for doing the layout and publishing the article.
David McNamee has a post about what the next version of VB.NET will offer:
Edit&continue is a great feature. I have to agree it can be abused and can lead to “just give it a try, let's see if it works“ scenarios. But if you use it wisely it can greatly improve productivity at least a few times a day. A video of the next VB.NET version is available here.
Article on The Register about Mono:
<quote.</quote>
Paul Vick (technical lead of VB.NET) has posted a nice entry about the edit and continue feature that will be available in the next version of VB.NET. He discusses the danger of introducing this advanced feature, for developers that are not so experienced. In my opinion, edit and continue is abused sometimes be less experienced developers and maybe even by experienced developers. But I think it can be very useful too, if you use it wisely! Paul uses the methaphore of cars to explain why he thinks this feature should be included.
But Paul introduces a new (and fun) feature to prevent unexperienced developers to use the advanced features. I guess the feature will not be in VS.NET 2004...<quote...</quote>
Last friday I attended an event of the Belgian .NET User Group (Benug) about service-oriented architectures in .NET. The presenter was Peter Himschoot (replacing his colleague Patrick Tisseghem) who did a very good job. The session was spit into a theoretical part and a hands-on workshop.
On tip Peter showed us, I would like to share: “Set StartUp Projects ...”. I guess everybody knows how to set a StartUp Project in a Visual Studio.NET solution; just right click on the project item in the Solution Explorer and choose “Set as StartUp Project”. But you can start and debug multiple projects at the same time! If you right click on the Solution Item instead of the Project Item, and then click “Set StartUp Projects...” you can choose between “Singe StartUp Project” and “Multiple StartUp Projects”. For the second choice you can select the Projects of your Solution you want to start when you hit the F5 or Start button. This is very useful for debugging multi-tier projects. Thanks Peter! | http://weblogs.asp.net/jan/archive/2003/09.aspx | CC-MAIN-2013-20 | refinedweb | 1,773 | 57.98 |
At import of a record describing a template set its type to
InjectedClassNameType (instead of RecordType).
LGTM! But let's wait for someone else's review too.
@a.sidorin
We discovered this error during the CTU analysis of google/protobuf and we could reduce the erroneous C file with c-reduce to the minimal example presented in the test file.
Adding Rafael too as a reviewer, because he has been working also on the ASTImporter recently.
LGTM!
I'm not really too confident to approve changes yet, but with a second opinion it should be fine.
The only thing I'm wondering is whether the Decls looked up here are always known to be already imported.
These are in the 'To' context. It may be that the Injected is not found here, probably because not yet imported (in this case the import may be part of a not completed recursive process).
As far as I understand that corner case could be covered by doing the lookup on DCXX instead and then importing the injected decl. But then you wouldn't find it if it is only in the To context (if that is possible).
I mean if a user calls ImportDecl in another order specifically. But such a case is probably really artificial and I'm not sure if it's even makes sense or is already covered by ImportDeclParts.
It should be fine as it is.
Hello, Balázs,
You can find my comments inline.
Maybe we should fix it a bit upper (line 2100)?
This looks like raw creduce output. Is there a way to simplify this or make human-readable? Do we really need nested namespaces, unused decls and other stuff not removed by creduce? I know that creduce is bad at reducing templates because we often meet similar output internally. But it is usually not a problem to resolve some redundancy manually to assist creduce. In this case, we can start by removing k and n.
We can leave this code as-is only if removing declarations or simplifying templates affects import order causing the bug to disappear. But even in this case we have to humanize the test.
The ToDescribed is used in this code, not available before the new record is created.
Probably remove this test? There was some bug in a previous version of the fix that was triggered by this test. Before that fix (and on current master) this test does not fail so it is not possible to simplify it.
I vote on deleting this test then. We already have another clear and simple test.
Hello Balázs,
Please clang-format the tests and delete injected-class-name-decl-1. Don't see any other issues.
Nit: "an InjectedClassNameType".
I think we should delete this test. As I see, it passes even in upstream clang, so it doesn't really make sense to keep it.
Small fixes. | https://reviews.llvm.org/D47450 | CC-MAIN-2021-31 | refinedweb | 484 | 66.03 |
Shopify can be integrated with Emarsys.
The app also automatically installs our Web Extend data collection scripts on your store, which will allow you to deliver personalized product recommendations both on your website and in emails or to track revenue from your campaigns.
Here you will find all the information you need to set up and work with the Emarsys for Shopify integration.
Contents:
Supported functionality
- Regular upload of all customer data fields from Shopify to Emarsys.
- Daily sync of contact opt-in data from Emarsys to Shopify (
accepts marketingfield in Shopify).
-.
Note that 3rd party plugins may change the default behavior of the store. Emarsys cannot guarantee compatibility with other plugins, such as a custom payment provider that tracks data differently, and may prevent purchase information from being synchronized. Before installing a new plugin, we recommend contacting Emarsys support.
Prerequisites
- A Shopify or Shopify Plus store.
- A fully set up and working Emarsys Marketing Platform account.
- An Emarsys merchant ID.
- Emarsys API credentials for authentication.
Notes
- All of the above should already have been set up as part of your standard Emarsys onboarding. If you are missing any of them, please contact Emarsys Support.
- You will only be able to track revenue from email campaigns if you have an enterprise-level Shopify Plus subscription, due to the limitations that come with the standard packages.
- If you have more than one Shopify store, you will need to map each one to a separate Emarsys account.
Installing the app
Before you start:
- Note that our Shopify app will import your Shopify contact data to Emarsys and overwrite any existing values. If you already have contacts in your Emarsys account, consult Emarsys Support before installing it so that we can help you to protect any critical data that has been collected in Emarsys, such as opt-in status.
- If the app is already installed on your store, the installation will be cancelled and you will be redirected back to this page.
- To install the Emarsys module in Shopify, enter the URL of your webshop (without http:// or https://) into the field below and click Go.
- This will take you to your Shopify store.
- Log in and click Install unlisted app.
- You will now see the app listed on the Shopify Apps page.
Connecting to Emarsys
On the Apps page, click the app link to open the app dashboard.
The dashboard shows a number of boxes, each representing an integration step. The vertical alignment of the boxes indicates the recommended order of the steps you must complete, so you should ideally start with the top box and work your way down.
Click Connect in the topmost box, and then enter your Emarsys API username and Emarsys API secret, then click Connect.
Notes
- You should already have your API credentials as an API user should have been created as part of your standard Emarsys onboarding. Your technical specialist should know where you store this information.
- If you cannot find your existing API username and secret, or if you do not have any, create a new API user.
Customers
In the Customers section, click Enable to turn on the regular upload of your Shopify customers to the Emarsys contact database. This will enable you to make use of your Shopify data in your marketing campaigns.
The progress of the initial upload is tracked in the Upload status pane. When it is finished a Live flag will be displayed on the left.
After the initial upload of your customer data, you will be able to modify the default field mappings. To do so, click Edit matching.
In the Field matching section you can choose which fields to map between Shopify and Emarsys.
When you enable customer sync for the first time, the app will pull all your Shopify customers and push them to your Emarsys account.
- By default, the
Shopify IDinstead, contact Emarsys Support.
- If you already have contacts in your Emarsys account, make sure you use the same external key as before.
You have to wait until all your customers are uploaded to Emarsys before you can start uploading your Shopify events, products and orders, or install our web behavior tracking scripts. This may take up to a few hours, depending on the size of your customer database.
Events
In the Events section, click Enable to make your Shopify events, which are user interactions in your store such as customer registration, available in the Emarsys Marketing Platform.
You will be able to use these to send transactional marketing messages, personalize them and to build Automation Center programs.
In the Orders section, click Enable to turn on the regular upload of your orders to Emarsys.
This information is essential for Emarsys features such as revenue reporting or product affinity models, and helps to make our smart features even smarter.
When you enable orders sync for the first time, the app will pull the full history of your orders and push it to emarsys.
- By default, the
- If you want to use your Shopify ID for this, please contact Emarsys Support and we will enable this.
Web behavior tracking
Our Web Extend scripts track visitor interactions on your website and process this information to serve validated data to various Emarsys applications, such as Smart Insight, Predict or the Automation Center. The app will install them automatically for you.
All your active themes are listed here. Click Add for each one or Add to all to install the Web Extend data collection scripts on all of them.
If you are not a Shopify Plus customer, you will need to manually implement the Web Extend data collection scripts into the code of your checkout page as standard Shopify packages do not allow users to modify the template of the checkout page.
To add the script to the page, log in to your Shopify admin, go to Settings > Checkout > Additional Scripts, copy the following code and paste it into the text box:
<script type="text/javascript"> var ScarabQueue = ScarabQueue || []; (function(id) { if (document.getElementById(id)) return; var js = document.createElement('script'); js.id = id; js.src = '//cdn.scarabresearch.com/js/#MERCHANT_ID#/scarab-v2.js'; var fs = document.getElementsByTagName('script')[0]; fs.parentNode.insertBefore(js, fs); })('scarab-js-api'); </script> <script type="text/javascript"> {% if checkout.order.financial_status == 'paid' or checkout.order.financial_status == 'authorized' %} (function() { {% assign orderTotalNum = checkout.subtotal_price | plus: checkout.discounts_amount %} {% assign orderTotalNumFloat = orderTotalNum | times: 1.0 %} {% assign discountPerc = checkout.discounts_amount | divided_by: orderTotalNumFloat | times: 100.0 %} {% assign orderCreatedAt = checkout.order.created_at | date: '%s' %} {% assign nowTimestamp = 'now' | date: '%s' %} {% assign orderAge = nowTimestamp | minus: orderCreatedAt %} var orderStorageKey = "scarab_order_number"; var orderAgeTimeLimit = 20; if ({{ orderAge }} < orderAgeTimeLimit && !hasOrderSent({{ checkout.order_number }})) { ScarabQueue.push(['purchase', { orderId: {{ checkout.order_number }}, items: [ {% for item in checkout.line_items %} {% if forloop.length == 1 %} {% assign discountLineItem = item.line_price | times: discountPerc | divided_by: 100.0 | round: 2 %} {item: {{item.variant_id}}, price: {{ item.line_price | minus: discountLineItem | divided_by: 100.0 }}, quantity: {{item.quantity}}}{% if forloop.last == false %},{% endif %} {% else %} {% assign discountLineItem = item.line_price | times: discountPerc | divided_by: 100.0 | round: 2 %} {item: {{item.variant_id}}, price: {{ item.line_price | minus: discountLineItem | divided_by: 100.0 }}, quantity: {{item.quantity}}}{% if forloop.last == false %},{% endif %} {% endif %} {% endfor %} ] }]); setOrderSent({{checkout.order_number}}); } function hasOrderSent(orderId) { return localStorage.getItem(orderStorageKey) >= orderId; }; function setOrderSent(orderId) { localStorage.setItem(orderStorageKey, orderId); } })(); {% endif %} </script> <script> ScarabQueue.push(['cart', [ {% if checkout.line_items %} {% for item in checkout.line %} {%else%} {% if cart.items %} {% for item in cart %} {%endif%} {% endif %}] ]); </script> <script> {% if customer.#CONTACT_IDENTIFIER# %} ScarabQueue.push(['#SET_CONTACT_IDENTIFIER#','{{customer.#CONTACT_IDENTIFIER#}}']); {%else%} {% if checkout.customer.#CONTACT_IDENTIFIER# %} ScarabQueue.push(['#SET_CONTACT_IDENTIFIER#','{{checkout.customer.#CONTACT_IDENTIFIER#}}']); {%endif%} {%endif%} ScarabQueue.push(['go']); </script>
Notes
- You can always test the data collection scripts on a test theme that is not live in your web store.
- Whenever you start using a new theme, make sure you add the scripts to it.
- Currently, the dashboard does not give any indication of whether the scripts are already installed on the listed themes. Try to keep track of which theme already has them. But do not worry; you cannot break the integration if you add the scripts to a theme which already has them.
Personalizing your message content
As a Shopify user, you not only have all the data stored in Emarsys available for personalizing your messages, but you can also use content coming directly from Shopify. Specifically, each Shopify event that triggers an Emarsys external event has a set of information attached to it, which you can add to your emails or other messages.
For a complete list of variables individual Shopify events pass to Emarsys, see the Shopify documentation.
To add such Shopify content to your emails, use Emarsys Scripting Language (ESL) placeholders in the body of your message with the following syntax:
{{ event.variable_name }}
where variable_name is the name of the variable passed in the Shopify event JSON. When the message is sent, the placeholder will be replaced with the respective value from the JSON.
For example, to add total discount and subtotal price values to your email triggered by the Shopify Order event you have to add the following placeholders to the HTML of your email:
{{ event.line_items[0].total_discount }}
{{ event.subtotal_price }} | https://help.emarsys.com/hc/de/articles/115005679509-Integrationsanleitung-f%C3%BCr-Emarsys-for-Shopify-und-Shopify-Plus- | CC-MAIN-2019-30 | refinedweb | 1,511 | 57.77 |
ooad presentation
Post on 05-Dec-2014
Category:
Technology
7 download
Embed Size (px)
DESCRIPTION
TRANSCRIPT
- 1. Object-Oriented Analysis and Design
- 2. How do you write great software every time?
- 3. Great software?
- 4. The software must do what the customer wants it to do
- 5. Make your code smart Well-Designed Well-Coded Easy to mantain Easy to reuse Easy to extend
- 6. Great software satisfies the customer and the programmer
- 7. Programmers are satisfied when: Their apps can be REUSED Their apps are FLEXIBLE Customers are satisfied when: Their apps WORK Their apps KEEP WORKING Their apps can be UPGRADED
- 8. Ok but what shall the software do?
- 9. Gathering requirements Pay attention to what the system needs to You can figure out how later
- 10. Requirement? Its a specific thing Your system has to do to work correctly Usually a single thing, and you can test that thing to make sure you fullfilled the requirement The complete app or project you are working on 2 - so if you leave out a requirement or even if they forget to mention sth your system isnt working correctly 1 - The customer decides when a system works correctly,
- 11. Your software has a context
- 12. Develope rs viewpoint (the perfect world) Real world
- 13. + Analysis = Real world context Textual analysis = nouns / verbsUse cases Diagrams
- 14. Nothing ever stay the same No matter how much you like your Right now, its probably going to ch Tomorrow Good design = flexible and resilien
- 15. Design principles Techniques that can be applied for designing or writing code to make that code more Mantainable, flexible or extensible OCP Open-closed princople DRY Dont repeat yourself SRP Single responsibility principle Classes are open for extension And closed for modification Avoid duplicate code by abstracting Things that are common Every object in your project should have a single responsabil
- 16. Open-closed principle class Printer{ public final void print(Printable p){ } } class SquarePrinter extends Printer{ public void print(Square s){ } } OPEN = Extending functionality ClOSED = NO OVERRIDES
- 17. Single responsibility principle public class MobileCaller{ public void callMobile(MobileNo mobileNo){ ... } } public class ValidationService{ public static boolean validateMobileNumber(MobileNo mobileNo){ ... } } Responsibility #1: Call Responsibility #2: validate number
- 18. DRY Dont repeat yourself
- 19. By Contract / Defensive programming programming defensively means youre making sure the client gets safe response, no matter what the client wants to have happen Programming by contract means you are working with a client code to agree on how youll handle problem situations
- 20. Defensive programming | https://vdocuments.net/ooad-presentation.html | CC-MAIN-2022-40 | refinedweb | 419 | 51.78 |
I cannot understand files I/O, I have the pseudo-code for the program and some code I tried, but I cannot ge it. Could you help me please?
*The input file is read character into an array. The array is a large array of 200 characters, and acts as temporary storage before being output after modification to a new file.
*The output file is written starting from the character at the end of the array, and taking characters from the array in reverse order until the start of the array is reached. Counter is used to keep track of where the read/write is currently accessing the array.
* Create an input text file using Windows notepad
#include <stdio.h>
void main()
{
// Initialise array of total 200 for characters
char name[200];
// Initialise single character for each read/write
FILE *f_input;
// Initialise variable for counter to zero
int counter=0
// Open input file
FILE *f_input = fopen("FILE.TXT", "r");
// While not at end of file (loop start)
while (f_input==NULL);
// read single character from input file
fget(f_input);
// Store character into array [counter]
// Increment counter by one
count=count+1
// End of loop code
// Close input file
fclose(f_input);
}
// Decrement counter by one (This moves counter back one position to end of array)
counter=counter-1
// Open output file (different filename to input file)
FILE *f_output
// While counter is greater than zero (loop start)
While (count=>0)
// Get current character at array [counter]
// Write this single character to output file
fput(
// Decrement counter by one
counter=counter-1
// End of loop code
Close output file
fclose(f_output);
// End
} | https://cboard.cprogramming.com/c-programming/17746-file-i-o-problems-help.html | CC-MAIN-2017-09 | refinedweb | 268 | 51.92 |
This summer I created a pool temperature monitoring and pump control system using a Pi Zero W. This article gives an overview of the system and how I put it together. It allows the temperature of the air and water to be displayed on a web page while automatically turning the pump on and off according to a preset schedule.
The Pool
We originally had an 8ft Bestway “fast set” pool with an inflatable ring around the top. This proved to be slightly incompatible with the 3 cats we share the garden with. So this year we changed it for a 10ft metal framed “Summer Escapes” pool. It holds 4100 litres of water and I had no idea what temperature the water ever reached or how this related to the air temperature. I also had no way of easily controlling the pump without fiddling with the settings on a mains timer.
The Requirements
I decided my Pool Temperature Monitoring system should be able to :
- Measure the air and water temperature
- Log temperatures to the internet
- Display temperatures on a web page accessible from a mobile phone
- Allow the pump to be turned on, off or placed in an automatic mode that would follow a schedule
The Water Pump
As with the previous pool it came with a mains powered pump containing a filter. In previous years I controlled this with a traditional mains timer but there were times I wanted to manually turn it on to take advantage of the free electricity available from my solar panels. If the pool had been used in the day I also wanted an easier way to give it an extra few hours without messing with a timer unit.
The Weatherproof Box
To power the pump I bought a weather proof box that had room for a 4-way extension block. A 10m mains cable runs into the house. There was plenty of room to put in a 5V power supply and a Pi Zero. The sensor cables come in via the rubberised slot.
These boxes seem expensive but when mains is involved you really want something designed to keep out water rather than trying to use an old ice cream tub.
The Parts
My finished system includes the following :
- Pool [Amazon]
- Pump
- Weatherproof box [Amazon]
- 10m mains extension lead
- Raspberry Pi Zero W [Amazon]
- 5V power supply [Amazon]
- 4GB microSD card
- 2x waterproof DS18B20 temperature sensors with 3m cables [Amazon] [eBay]
- 4.7Kohm resistor for DS18B20s
- Energenie Socket [Amazon] [eBay]
- Energenie Pi-mote add-on [Amazon] [eBay]
Energenie Sockets & Pi-mote
Energenie produce remote control sockets. They operate like most of the remote control sockets out there but with one crucial difference. You can buy a “Pi-mote” add-on for the Pi that lets you control sockets using Python.
It’s really easy to setup and was the perfect combination of hardware to allow me to control the pump. The Pi-mote simply plugs onto the GPIO header and allows sockets to be controlled using single lines of Python.
Temperature Sensors
I opted for two DS18B20 “1-wire” interface temperature sensors as I had used them before. Multiple sensors can be connected to the same GPIO pins. The DS18B20 can be purchased as a waterproof version with all the cabling attached. They are slightly more expensive than the standard sensors but all I had to do was solder the three wires to the appropriate GPIO pins on the back of the PiMote and they were ready to go.
One was strategically placed in a hedge to measure air temperature and the other dropped into the pool to measure the water temperature. My sensors had 3m of cable.
The Software
All the software is available on my Pool Monitoring BitBucket Repository. There are two main scripts involved in this system and both are written in Python. They are launched at boot time using cron via the launcher.sh script.
The first (poolmain.py) runs in a continuous loop and checks the mode. If in “auto” it keeps an eye on the time and decides when to turn the pump on and off. It also sends regular temperature readings to the “Thingspeak” cloud-based IoT platform. You can see my public channel with recent air and water temperature readings here :
The second (poolweb.py) uses the Flask framework to create a basic set of webpages. This includes a dashboard, a schedule and a login page. The dashboard shows the air and water temperatures and also allows the pump mode to be changed. There are three modes. On, Off and Auto. When in auto the pump is automatically turned on and off depending on the Schedule defined on the schedule page. The login page ensures only authorised people can mess with the pump!
The Web Interface
The system presents a web-page either over the local network or the internet (if you’ve configured your router appropriately). It looks something like this :
The schedule can be modified on the schedule screen. Ticks are placed against the hours the pump should be active when in “auto” mode :
The default username and password is “admin” and “splishsplosh”. The pages can be accessed via 192.168.1.42:5000 where “192.168.1.42” should be replaced with the IP address of your Pi on your network. 5000 is the default port used by Flask.
Pushover and Thingspeak
When the system boots it sends a notification using Pushover. This can then be read on a mobile phone using their app. It’s a great service and you can use it to manage notifications from other systems.
The notification message contains the internet IP address of your network and if you have set up port forwarding to the Pi you can access the dashboard when away from home. It assumes you are forwarding port 50000 to port 5000 on the Pi.
Temperatures are sent to Thingspeak which allows you to see your data plotted as a graph.
Both these services require you to create an account and obtain some API keys. They can be added to the config.py file.
Hardware Setup
The hardware setup is fairly straightforward.
- The pump is connected to the power strip using an Energenie remote socket.
- The Pi Zero is powered with a 5v phone charger.
- The PiMote is attached to the Pi’s GPIO header.
- The temp sensor wires were soldered onto the back of the PiMote. “Red” to Pin 1 (3.3V), Black to Pin 9 (Gnd) and Yellow to Pin 7 (GPIO4). A 4.7Kohm resistor is put between Pin 1 and Pin 7 as per the 1-wire interface requirements.
- SD card in the Pi’s SD card slot (obviously!)
SD Card Setup
To get the software working you can follow this process. Start by creating a fresh SD card using the latest version of Raspbian. I used the “Lite” version as I didn’t need the desktop environment.
Enable SSH using your preferred method. I used my Windows PC and simply created a blank file named “ssh” on the boot partition of the card before inserting it into the Pi.
I then manually setup the WiFi using a wpa_supplicant.conf file. This should be done before you boot the fresh image for the first time.
Please change the default Pi password to something sensible using this guide. This is particularly important if you are allowing your Pi to be accessed over the internet.
Updates and Package Installation
Run the following commands to update the image :
sudo apt-get update sudo apt-get -y upgrade
Then install the following packages :
sudo apt-get -y install git sudo apt-get install python3-gpiozero sudo apt-get -y install python3-pip sudo pip3 install flask sudo pip3 install requests
These packages support the features used in the Pool Monitoring Python scripts.
Pool Monitoring File Downloads
The next step is to clone the software from my BitBucket repository :
git clone
Now rename the directory to something a bit easier to type :
mv rpispy-pool-monitor pool
and navigate into it :
cd pool
Make the launcher.sh file executable using :
chmod +x launcher.sh
Energenie Socket and Pi-Mote Pairing
The first time the Pi-Mote is used it must be paired with the socket.The process is described in the official user manual. The socket must be put into “learning mode” and this can be done by:
- If socket is on press the green button to turn it off
- Hold the green button for 5 seconds or more and then release it when the lamp starts to flash at 1 second intervals.
Then run the pairing script in the utils directory :
cd /home/pi/pool/utils python3 energenie_pair.py
Pressing Enter when prompted will pair the Pi-mote to the socket and the socket will have an ID of 1. This ID is used in the Python scripts to turn this specific socket on and off.
DS18B20 Sensor Setup
In order to configure the DS18B20 sensors you need to make a small change to the config.txt file using :
sudo nano /boot/config.txt
add the following line to the bottom :
dtoverlay=w1-gpio,gpiopin=4
There is more detail on the DS18B20 on the DS18B20 1- Wire Digital Thermometer Sensor page.
Web Interface Password
The default username and password is “admin” and “splishsplosh”. The password is stored as a hash so to change it you must use the hashgenerator.py script to convert your new password into a new hash value.
cd /home/pi/pool/utils nano hashgenerator.py
Then change the default password “splishsplosh” to your password. Save using CTRL-X, Y and Enter. Run the script to create the hash value :
python3 hashgenerator.py
The new hash can be inserted into the config.py file.
Config File
Edit config.py using :
cd /home/pi/pool nano config.py
Paste in the new hash.
The “FLASHSECRET” can be changed to anything you like. Stick in some random characters to personalise yours.
To send a boot notification to Pushover and temp data to Thingspeak you will need to register with those services and obtain API keys. These are personal to you and should be carefully inserted into the config.py file.
Save and exit the nano editor using CTRL-X, Y, ENTER.
Cron Setup
To get the scripts running when the Pi boots we need to create a cron entry. Do this using :
sudo crontab -e
If prompted select a default text editor. I usually choose nano which is option “2”.
Then add this line at the bottom :
@reboot sh /home/pi/pool/launcher.sh > /home/pi/pool/logs/cronlog 2>&1
Make sure there is a blank line after this line.
This will run the launcher script at boot time and will in turn run the two main Python scripts.
Timezone Setup
One final step is to set the correct timezone for your location. I has to do this to ensure my system knew the correct time and wasn’t out by an hour. You can do this by :
- Running “sudo raspi-config”
- Selecting “Localisation Options”
- Selecting “Change Timezone”
- Select your region
- Select your nearest city/region
- Save and exit by selecting “Finish”
Typing the command :
timedatectl
should report the correct “Local time”.
Ready to go!
Assuming the sensors are connected and you’ve configured everything correctly it should all work when the Pi is rebooted.
Troubleshooting
As with most projects that involve a mixture of hardware and software things might not work straightaway. Here are some tips :
- Check the contents of the logs in /home/pi/pool/logs
- Ensure the temp sensors are wired correctly to 3.3V, GPIO4 and Ground
- Ensure the temp sensors are wired correctly and that there are two “28-00” directories in /sys/bus/w1/devices
- Check all the files are located in /home/pi/pool/
- Check “crontab -e” contains the @reboot line
- Check the launcher.sh script is executable. Use the “ls” command and it should show up in green.
References
The following links provide additional technical information on the technologies I used in this Pool Monitoring and Pump control project :
Energenie Pi-Mote Manual
Energenie Support in gpiozero Library
Flask Documenation
This is an awesome setup. I just started working on doing pretty much the same thing except my is for HVAC. (2 temps though, supply and return) I am a complete noob at this though. I have my Pi set up with the same sensors as yours. Only 1 connected right now. Im able to run the code and have it show my temp, but cant figure out how to get it up to Thingspeak. I have an account, channel set up, API key and all. My current code is…
import os
import glob
import time
os.system(‘modprobe w1-gpio’)
os.system(‘modprobe w1-therm’)
base_dir = ‘/sys/bus/w1/devices/’
device_folder = glob.glob(base_dir + ’28*’)[0]
device_file = device_folder + ‘/w1_slave’
def read_temp_raw():
f = open(device_file, ‘r’)
lines = f.readlines()
f.close()
return lines_f, temp_c
while True:
print(read_temp())
time.sleep(1)
Tried reading your code to get just the part to send to Thingspeak but as I said, Im a noob and not 100% sure which part I need. I think its this one…
# Read temperatures and send to Thingspeak
# every 5 loops
loopCounter+=1
if loopCounter==loopSendData:
temp1,temp2=p.readTemps(mySensorIDs)
p.sendThingspeak(c.THINGSPEAKURL,c.THINGSPEAKKEY,’field1′,’field2′,temp1,temp2)
loopCounter=0
Where you have “mySensorIDs” would be my 28-XXXXXXXXX Id’s for my sensors?
ThinkspeakURL and KEY are my url and key, Field Identifiers and names to what mine are…
Does the temp1,temp2 before sensorIDS need to be changed to anything?, does the loopcounter you have make a difference on mine?
Thanks for any help you can give.
Dan
Hi Dan,
“p.sendThingspeak” is the function called “sendThingspeak” that is imported from “poollib.py”. So you could copy the function definition into your code and refer to it as “sendThingspeak” rather than “p.sendThingspeak”.
c.THINGSPEAKURL,c.THINGSPEAKKEY are variables that are imported from the “config.py” file. You could either import your own config.py file or replace these with THINGSPEAKURL and THINGSPEAKKEY.
i.e. stick these near the start of your script :
THINGSPEAKKEY='yourkeygoeshere'
THINGSPEAKURL=''
This bit:
while True:
print(read_temp())
time.sleep(1)
could become:
while True:
tempc,tempf=readTemp()
sendThingspeak(THINGSPEAKURL,THINGSPEAKKEY,'field1','field2',tempf,tempc)
print(tempf)
print(tempc)
time.sleep(1)
If you are still struggling create a new script. Set tempf and tempc to dummy values and get the Thingspeak bit to work without worrying about the sensor stuff. Get the sensor stuff working in a separate script.
With both mechanisms working it’s then to combine and use the sensors to feed the values into the Thingspeak section.
Regards,
Matt
Hi. I am looking to just measure temperature in my pool – can I connect the probe directly to the Pi? If so, how would I do this?
Thanks!
The DS18B20 can be connected directly. I only attached to the PiMote board as it made it easier to get to the Pi’s GPIO pins. See my DS18B20 tutorial.
Thanks Matt. I managed to get it all to work – but I am having an issue.
From time to time the sensor no longer appears in the /sys/bus/w1/devices/ directory. Its like, one minute its there, then the directory is not there.
Do you have any advice?
Thanks
Chris
This could be due to a wiring issue between the sensor and Pi GPIO. Double check the connections are good. Could also be due to the length of cable. Mine are 3m and seem to work OK.
Cron log reports the following:
traceback (most recent call last):
File “/home/pi/pool/poolmain.py”, line 27, in
import poollib as p
File “/home/pi/pool/poollib.py”, line 29, in
import requests
ImportError: No module named ‘requests’
Traceback (most recent call last):
File “/home/pi/pool/poolweb.py”, line 29, in
import poollib as p
File “/home/pi/pool/poollib.py”, line 29, in
import requests
ImportError: No module named ‘requests’
This is probably due to “requests” not being installed. Use “sudo pip3 install requests” to install it.
Hi Matt,
This is an excellent write-up. I have been able to get the entire system working fine but my question is, how can I convert the temperature to show Fahrenheit instead of Celsius on the website that Flask is running?
Thanks
Dan
I’ve just updated the poollib,poolmain,poolweb files, index and debug templates and the config.py file. If you look at the config,py file there is new variable TEMPUNIT. Add this to your current file and set to ‘F’. Then overwrite the 5 other files. This will convert the values to Fahrenheit and dispay F rather than C in the web interface.
Thanks Matt,
That worked like a charm and it is displaying in Fahrenheit now. Thanks for quick reply also.
Hi
if i should give a suggestion then it would be to add support for solar heating, would require one more pump and 1 more temp sensor that can say when the water inside the solar heating loop is starting to get warmer as the sun heats it up, this would trigger the 2nd pump, the pump can be turned off when the water in the solar heating circuit is the same as the pool
Hi Matt, thanks very much for creating this – it’s really good.
I’ve had this running successfully most of the summer in my garden and it’s allowed me to check on the status of my pool and switch on/off remotely and schedule the heating when I’ve got a cheaper electricity rate.
It’s my first Raspberry Pi project and I’m really enjoying it – next summer I’m hoping to modify your code to behave as a thermostat and keep target pool temp (we’ve found 31 degrees is quite nice!!), and then run the pump/heater for longer to exceed this whenever my solar panels are generating.
I do have a question though; I noticed you used “sudo” to setup Crontab to load the script on bootup privileges. Isn’t there an additional “cyber security” risk of doing this? I.e. if someone found a way to modify your code remotely, they could do much more harm? I’ve changed my Pi’s username/password but I’m not expert enough to understand the risks of allowing WWW access to a computer like this on my network.
Hi Matt, this is an awesome project. Trying to figure out how to tweak it for myself. My pi resides in a shed by my pool and I would like to monitor the internal CPU temp of the pi. Any ideas on how to add it to the code and possibly a 3rd DS18B20?
I’m trying to learn and would appreciate any direction you can give me.
Thank you
Hi Matt – wonderful project that I have just installed on the Raspberry Pi I am making for a friend. He want to see if the summer here in Costa del sol may be prolonged some weeks if heating the pool a bit.
I have absolutely no python experience – and thought I would ask an expert this simple question:
I want to use an GPIO to controll the pump as this is way easier in this setup.
Have the relay working as well as the rest of the setup with in & out temperature (thanks to you).
I am sure all I have to do, is to replace the “Energenie” object with another object – maybe you could save me some time by suggesting a direction of search?
Might come in handy for others as the wireless power control module is not that easy to buy everywhere 😉
TIA
/Niels | https://www.raspberrypi-spy.co.uk/2017/07/pool-temperature-monitoring-and-pump-control-with-the-pi-zero-w/ | CC-MAIN-2022-27 | refinedweb | 3,318 | 72.46 |
12345678910111213141516171819202122232425262728293031323334353637
#include <iostream>
#include <cmath>
#define PROMPT "Please enter a whole number:"
#define NOT_PRIME "The number is not a prime number./n"
#define PRIME "The number is a prime number./n"
#define DONE 0 /* ends successful program */
#define FIRST_FACTOR 3 /* initial value in for loop */
using std::cout;
using std::cin;
int main(){
int i; /* loop counter */
int number; /* number provided by user */
cout << PROMPT; /* promt user */
cin >> number; /* wait for user input */
/* Prime numbers are defined as any number
* greater than one that is only divisible
* by one and itself. Dividing the number
* by two shortens the time it takes to
* complete. */
for(i = FIRST_FACTOR; i < number/2; ++i)
if( number/i == 0 ){ /* if divisible */
cout << NOT_PRIME << number; /* not prime */
return DONE; /* exit program */
}
/* if number is not divisible by anything
* than it must be prime */
cout << PRIME << number;
return 0; /* exit program */
}
if (number%i==0) | http://www.cplusplus.com/forum/beginner/103640/ | CC-MAIN-2015-35 | refinedweb | 148 | 51.72 |
Nested Types
Classes can also define Nested Types. Nested types are like regular custom types, except they are considered part of the class and their visibility can be scoped as granular as all class Members.
Nested types aree declared using the
nested in syntax, and (outside of the containing class) are referred to in the same way as static class members would – prefixed with the name of the class, and dot.
Refer to the Nested Types topic for more details.
type OuterClass = public class end; InnerClass nested in OuterClass = public class end;
These can be accessed like OuterClass.InnerClass.
Visibility
A visibility level modifier can be applied to a class type, the default level is
assembly. Note that because nested types are considered class members, they can be applied the full range of more granular member visibility levels, instead od just
public or
assembly.
Other Modifiers
A nested type can be any Custom Type or Type Alias that is marked with the
nested in Type Modifier. | https://docs.elementscompiler.com/Oxygene/Members/NestedTypes/ | CC-MAIN-2019-51 | refinedweb | 166 | 50.57 |
Transformation Matrices
Pages: 1, 2, 3, 4, 5
Vector mathematics
To see how this works, let's review some basic operations on vectors. The addition of two vectors can be shown graphically. It works as might be expected: Two vectors
and
add up to
. Here is how it looks in a graph:
Notice how the first vector starts from the origin (0,0). The second vector is then placed with its tail (the end with no arrow head) to the head of the first. The combination of the two is the same as a vector starting from the origin connecting to the head of the second one. Subtraction follows in a similar fashion, but reverses the direction of the second vector.
Graphing addition like this, a collection of vectors (each representing a line) can be used to create basic images, a vector version of connect-the-dots. Although our example is simple in nature, many vectors can be combined to generate complex drawings.
Let's try an example to illustrate what can be done. Following is a complete Python program that uses both the Numeric (NumPy) module as well as pxDislin. What is new from last month is the use of the
dMultiLine function, which draws a line (vector) between points of
x_list and
y_list arguments. Essentially, the dMultiLine function is performing a vector addition operation. The function does not actually do the math, but it does graphically represent the operation. The points chosen trace out a simple box structure of a house centered on the plot axis.
from pxDislin import * from Numeric import * plot = dPlot() axis = dAxis(-20,20,-20,20) plot.add_axis(axis) house_pts = array([[5,5,-5,-5,5,4,4,2,2,5,5,0,-5], [-5,5,5,-5,-5,-5,-1,-1,-5,-5,5,8,5]]) house = dMultiLine(house_pts[0],house_pts[1]) plot.add_object(house) plot.show()
The generated plot is:
The points that draw out the house are essentially a 2-by-13 matrix:
where the x coordinates are in the top row and the y coordinates are the bottom row. Each column of the matrix may be taken as a 2-by-1 vector, and as such the matrix is a collection of 13 column vectors.
| http://www.linuxdevcenter.com/pub/a/python/2000/07/05/numerically.html?page=2 | CC-MAIN-2017-04 | refinedweb | 375 | 62.38 |
Namespace question
Expand Messages
- Hi,
I have a web service that needs to respond to soap requests from two name spaces. (The current namespace and a legacy one.) I'm swapping out the uri in an on_dispatch call. This works fine in that I receive the requests and respond to them. The problem is that the namespace listed in the response is always the current one.
For example the old namespace was "" and the new one is "". When requests come in using the wilma.com namespace the response lists "".
How does one change the response namespace?
I'm sure that was incoherent.
Thanks!
Tom
Your message has been successfully submitted and would be delivered to recipients shortly. | https://groups.yahoo.com/neo/groups/soaplite/conversations/topics/6295?source=1&var=1 | CC-MAIN-2015-32 | refinedweb | 117 | 77.84 |
Shape an initial or default display on windows systems. More...
#include <windows.h>
#include <windowsx.h>
#include "ui-prefs.h"
#include "ui-term.h"
#include "z-virt.h"
#include "win-term.h"
Shape an initial or default display on windows.
Default window layout function.
Just a big list of what to do for a specific screen resolution.
Note: graphics modes are hardcoded, using info from angband/lib/tiles/graphics.txt at the time of this writing
Return values: 0 - Success -1 - Invalid argument -3 - Out of memory
References arg_graphics, arg_graphics_nice, term_data::cols, i, NULL, term_data::rows, string_free(), string_make(), tile_height, tile_width, and void(). | http://buildbot.rephial.org/builds/master/doc/win-layout_8c.html | CC-MAIN-2017-43 | refinedweb | 102 | 55.1 |
Python – Telnet
Telnet is a type of network protocol which allows a user in one computer to logon to another computer which
also belongs to the same network. The telnet command is used along with the host name and then the user credentials
are entered. Upon successful login the remote user can access the applications and data in a way similar to the regular
user of the system. Of course some privileges can be controlled by the administrator of the system who sets up and
maintains the system.
In Python telnet is implemented by the module telnetlib which has the Telnet class which has the required methods
to establish the connection. In the below example we also use the getpass module to handle the password prompt as part
of the login process. Also we assume the connection is made to a unix host. The various methods from telnetlib.Telnet
class used in the program are explained below.
Telnet.read_until – Read until a given string, expected, is encountered or until timeout seconds have passed.
Telnet.write – Write a string to the socket, doubling any IAC characters. This can block if the connection is blocked. May raise socket.error if the connection is closed.
Telnet.read_all() – Read all data until EOF; block until connection closed.
Example
import getpass import telnetlib HOST = "" user = raw_input("Enter your remote account: ") password = getpass.getpass() tn = telnetlib.Telnet(HOST) tn.read_until("login: ") tn.write(user + "\n") if password: tn.read_until("Password: ") tn.write(password + "\n") tn.write("ls\n") tn.write("exit\n") print tn.read_all()
Please note that this output is specific to the remote computer whose details are submitted when the program is run. | https://scanftree.com/tutorial/python/python-network-programming/python-telnet/ | CC-MAIN-2022-40 | refinedweb | 279 | 59.19 |
Enhanced OpenShift CLI: More Stories
The enhanced OpenShift CLI can help you manage a large amount of OpenShift clusters securely and efficiently. It is an open source project I created on GitHub.
As the third post of enhanced oc series, in this post, I will share with you more stories regarding the use of enhanced oc.
Organize Cluster Contexts Hierarchically
In a real project, it is common that you have many clusters to manage and each cluster has its own access data. It is a good idea to use the enhanced oc and context alias to switch among these clusters efficiently. However, with more and more clusters added, the context list can be increasingly long. In such case, a more efficient way to manage these clusters is to organize the cluster contexts hierarchically.
This is natively supported by gopass because in the secret store maintained by gopass, each secret data lives in its own file and related secret data can be put in the same directory. For example, if you have three clusters and want to save the contexts for them, you can use the
path/to/your/context format to name the context alias for the clusters.
$ oc login -s -c dev-env/cluster-foo
$ oc login -s -c dev-env/cluster-bar
$ oc login -s -c dev-env/cluster-baz
Then your cluster contexts will be organized hierarchically in the secret store and the directory structure maps to how you name the context alias. This can be seen with a very straightforward view by running
gopass ls:
$ gopass ls
gopass
└── dev-env
├── cluster-foo
├── cluster-bar
└── cluster-baz
When you switch among these clusters, use the
path/to/your/context alias to refer to the target cluster that you want to access:
$ oc login -c dev-env/cluster-foo
$ oc login -c dev-env/cluster-bar
$ oc login -c dev-env/cluster-baz
Choose Among Multiple Clusters
To organize the cluster contexts hierarchically allows you to manage large amount of clusters efficiently. You can category them into different groups for different purposes and switch among them quickly. With more clusters added and the hierarchy expending, you may also find it is not very trivial to input the full context alias of each cluster when you login.
The enhanced oc supports partial input for context alias when you run
oc login. For example, if you put all clusters for development in category
dev-env, you can just input the first few characters such as
de,
dev,
dev-,
dev-env when you specify the context alias. If there are more than one results that match your input, a numbered list will be presented. You can enter a number to choose one option in the list:
$ oc login -c dev
1) dev-env/cluster-bar
2) dev-env/cluster-baz
3) dev-env/cluster-foo
#? 1
Read context 'dev-env/cluster-bar' from secret store...
Context loaded successfully.
Login successful.You have access to 59 projects, the list has been suppressed. You can list all projects with 'oc projects'Using project "default".
Fuzzy Search
The partial input of context alias and the numbered list can help you quickly choose one cluster among multiple options. Moreover, if you get fzf installed, an interactive command-line filter and fuzzy finder, you will be able to use typeahead and fuzzy search when run enhanced oc to select a context. There is no additional setup needed after you install fzf. The enhanced oc can auto-detect the existence of fzf and enable the corresponding function. For how to install fzf, please refer to the fzf documentation.
Customize Shell Prompt
For better use experience, you can install kube-ps1, which is a script that lets you add the current cluster context and namespace to the shell prompt. It is very helpful if you have many clusters to manage and need to switch among them from time to time. By looking at shell prompt from the command line, you can quickly know the cluster context that you are working with, so as to avoid any operation performed against wrong cluster.
The enhanced oc can auto-detect the existence of kube-ps1 and work with it seamlessly without any additional setup after you get kube-ps1 installed. It will customize the shell prompt, replace the full cluster context name, which is usually much longer and more machine readable, with the context alias, which is much shorter and more human readable. For how to install kube-ps1, please refer to the kube-ps1 documentation.
Summary
In this post, you have learned more usage stories on enhanced OpenShift CLI, e.g. to organize cluster contexts hierarchically, switch context among multiple clusters using partial input and fuzzy search, customize shell prompt to reflect the cluster context.
In next post, I will introduce another import feature supported by enhanced oc: how to share your cluster access information. | https://morningspace.medium.com/enhanced-openshift-cli-more-stories-8a7dee7f7e72?source=user_profile---------5---------------------------- | CC-MAIN-2022-27 | refinedweb | 815 | 57.81 |
Hi all, Newbie warning on. I'm having a problem under Windows where I'm getting an error 10055 when I'm using select() and having 64 sockets open in a single thread. The description for 10055 is "An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full". I found some more info about this here: When reading /Python-2.3.2/Modules/selectmodule.c I found the following snippet /* Windows #defines FD_SETSIZE to 64 if FD_SETSIZE isn't already defined. 64 is too small (too many people have bumped into that limit). Here we boost it. Users who want even more than the boosted limit should #define FD_SETSIZE higher before this; e.g., via compiler /D switch. */ #if defined(MS_WINDOWS) && !defined(FD_SETSIZE) #define FD_SETSIZE 512 #endif which to me looks like Python is trying to increase the value of this constant, but fails to do so (My only evidence for this is that I'm getting this error). To pick up the errno I also need to write something like except error, (errno, strerror): when I would expect except error errno, strerror = error to work if I have interpreted the documentation correctly: ...The exception raised when an error occurs. The accompanying value is a pair containing the numeric error code from errno and the corresponding string, as would be printed by the C function perror(). Should this trigger a WindowsError exception by the way? (selectmodule.c calls PyErr_SetExcFromWindowsErr to create the exception.) /Krister | https://mail.python.org/pipermail/python-list/2003-November/234204.html | CC-MAIN-2014-15 | refinedweb | 256 | 55.13 |
This plugin contributes a new view implementation that provides a dashboard / portal-like view for your Jenkins instance.
Add new view
On the Jenkins main page, click the + tab to start the new view wizard (If you do not see a +, it is likely you do not have permission to create a new view). On the create new view page, give your view a name and select the Dashboard type and click ok.
Configure dashboard view
The configuration is done in 2 parts, selecting the Jenkins jobs to include in the view, and selecting which dashboard portlets to have included in the view. The jobs that you select are used for all the portlets to draw their information from.
Select jobs
Select the list of jobs to include in the dashboard. This is exactly the same process as the standard list view that comes with Jenkins.
Also a regular expression can be used to specify the jobs to include in the view.
Options
- Show standard Jenkins list at the top of the page: shows uses standard Jenkins jobs list as it would be when using Jenkins' built-in 'List View' type.
Select portlets.
View layout
The dashboard view supports a layout with rows spanning the entire view with 2 columns.
Core portlets
The dashboard view comes with a number of portlets that you can configure your view with (New portlets can be contributed to Jenkins via other plugins, even your own).
Standard Jenkins jobs list
This portlet shows a row for each job showing the standard columns configured in Jenkins. All the configured portlets are displayed below this list.
Jobs Grid
The jobs grid portlet displays a 3 column table with the current status and a clickable link to the job. This offers a more compressed presentation of your jobs than the standard 1 row per job view, albeit at the cost of some job information.
Unstable Jobs
This portlet lists the unstable jobs within the view. Note, this does not necessarily list all of Jenkins' unstable jobs, but only looks at jobs configured for this view.
Test Statistics Grid
The test statistics grid shows detailed test data for the configured jobs. This is useful to get an aggregated count of tests across the jobs in your view. If desired, jobs with zero tests can be hidden.
Test Statistics Chart
This is a pie chart of the tests in the configured jobs. It shows the passing, failing, and skipped jobs with the total number and percentages.
Test Trend Chart.
Jobs statistics
Shows statistics based on jobs health.
Build statistics
Shows statistics based on build status.
Contributing
If you want to contribute to this plugin, you probably will need a Jenkins plugin developement environment. This basically means a current version of Java (Java 8 should probably be okay for now) and Apache Maven. See the Jenkins Plugin Tutorial for details.
If you have the proper environment, typing:
$ mvn verify
should create a plugin as
target/*.hpi, which you can install in your Jenkins instance. Running
$ mvn hpi:run -Djenkins.version=2.164.1
allows you to spin up a test Jenkins instance on localhost to test your local changes before commiting.
Code Style
This plugin tries to migrate to Google Java Code Style, please try to adhere to that style whenever adding new files or making changes to existing files. The style is enforced using the spotless plugin, if the build fails because you were using the "wrong" style, you can fix it by running:
$ mvn spotless:apply
to reformat Java code in the proper style.
Extending the Dashboard View plugin
Much of the benefit of this plugin will be realized when other plugins that enhance Jenkins offer support for it.
Add support in your plugin:
- Extend the DashboardPortlet class and provide a descriptor that extends the
Descriptor<DashboardPortlet>
- Create a jelly view called portlet.jelly
- Optionally create a jelly view called main.jelly to be used when the portlet is in maximized mode (otherwise the same portlet.jelly view will be used)
It is possible to define custom parameters for the DashboardPortlet. The displayName is always required. To add new parameters:
- create a jelly file called config.jelly to be used when the portlet is configured (added to the view in 'Edit View' config page);
- modify constructor (with
@DataBoundConstructor) to receive the new parameters.
Looking at the source code of this plugin will show a number of examples of doing this. The core portlets do the same thing that your plugin would do.
Please update the list below with a pull request against this repository.
Sample files:
MyPortlet.java:
import hudson.plugins.view.dashboard.DashboardPortlet; class MyPortlet extends DashboardPortlet { @DataBoundConstructor public MyPortlet(String name) { super(name); } // do whatever you want @Extension public static class DescriptorImpl extends Descriptor<DashboardPortlet> { @Override public String getDisplayName() { return "MyPortlet"; } } };
portlet.jelly:
<j:jelly xmlns: <dp:decorate <!-- This is to say that this is a dashboard view portlet --> <tr><td> <!-- This is needed because everything is formatted as a table - ugly, I know --> <!-- you can include a separate file with the logic to display your data or you can write here directly --> <div align="center"> <st:include </div> </td></tr> </dp:decorate> </j:jelly>
Other plugins that support the Dashboard View
(This is a curated list. If your favorite plugin is missing, please create a pull request to add it)
- Cadence vManager - This plugin adds an ability to perform REST over HTTP calls to Cadence vManager as a step in your build.
- Cppcheck Plugin - This plugin generates the trend report for CppCheck, a tool for static C/C++ code analysis.
- Maven Release - This plugin allows you to perform a release build using the maven-release-plugin from within Jenkins.
- OWASP Dependency-Check Plugin - This plugin can analyze dependencies and generate trend reports for Dependency-Check, an open source utility that detects known vulnerabilities in project dependencies.
- Parasoft Findings
- Questa VRM - Adds the ability for Jenkins to publish results from Mentor Graphics Questa Verification Run Manager (VRM).
- Release Plugin - This plugin adds the ability to wrap your job with pre- and post- build steps which are only executed when a manual release build is triggered.
- Rich Text Publisher Plugin - This plugin puts custom rich text message to the Build pages and Job main page (for last build). Atlassian Confluence, WikiText and HTML notations are supported.
- SLOCCount Plugin - Adds a portlet showing number of lines, files and languages per job.
- Warnings Next Generation Plugin - This plugin collects compiler warnings or issues reported by static analysis tools and visualizes the results.
License
This plugin is licensed under the MIT License (MIT), see LICENSE.
TODO
- Use
<div>instead of
<table>to place portlets in the page.
- Update this README with more screenshots.
Changelog
- See GitHub Releases for recent versions
- See the changelog file for versions 2.10 and older | https://plugins.jenkins.io/dashboard-view | CC-MAIN-2022-21 | refinedweb | 1,139 | 63.29 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.