| text | url | dump | source | word_count | flesch_reading_ease |
|---|---|---|---|---|---|
#include <sys/dlpi.h>

void dlokack(queue_t *wq, mblk_t *mp, t_uscalar_t correct_primitive);

void dlerrorack(queue_t *wq, mblk_t *mp, t_uscalar_t error_primitive,
     t_uscalar_t error, t_uscalar_t unix_errno);

void dlbindack(queue_t *wq, mblk_t *mp, t_scalar_t sap, const void *addrp,
     t_uscalar_t addrlen, t_uscalar_t maxconind, t_uscalar_t xidtest);

void dlphysaddrack(queue_t *wq, mblk_t *mp, const void *addrp,
     t_uscalar_t addrlen);

void dluderrorind(queue_t *wq, mblk_t *mp, const void *addrp,
     t_uscalar_t addrlen, t_uscalar_t error, t_uscalar_t unix_errno);
Solaris DDI specific (Solaris DDI).

wq                 Streams write queue.
mp                 Pointer to the bind request message.
sap                Service access point being requested.
addrp              Pointer to the DLPI layer source address.
addrlen            Size of the DLPI layer address pointed to by addrp.
maxconind          Maximum number of DL_CONNECT_IND messages allowed to be
                   outstanding per stream.
xidtest            The XID and TEST responses supported.
correct_primitive  Identifies the DL primitive completing successfully.
error_primitive    Identifies the DL primitive in error.
error              DLPI error associated with the failure in the DLPI request.
unix_errno         Corresponding UNIX system error that can be associated with
                   the failure in the DLPI request.
All functions described in this manpage take a pointer to the message passed to the DLPI provider (mblk_t) and attempt to reuse it in formulating the M_PROTO reply. If the message block is too small to be reused, it is freed and a new one is allocated.
All functions reply upstream using qreply(9F). The write-side queue pointer must be provided.
The dlokack() function provides the successful acknowledgement DL_OK_ACK message reply to the DLPI provider and is used to complete many of the DLPI requests in the DLPI consumer.
The dlerrorack() function provides the unsuccessful acknowledgement DL_ERROR_ACK message reply to the DLPI provider and is used for error completions, where required, for DLPI requests in the DLPI consumer.
The dlbindack() function provides the DL_BIND_ACK message reply to the DLPI provider and is used to complete the DL_BIND_REQ processing in the DLPI consumer.
The dlphysaddrack() function provides the DL_PHYS_ADDR_ACK message reply used to complete the DL_PHYS_ADDR_REQ processing.
The dluderrorind() function provides the DL_UDERROR_IND message reply used to complete an unsuccessful DL_UNITDATA_REQ.
None.
These functions are not required if you are writing a DLPI device driver using gld(7D).
All DLPI helper functions can be called from user, interrupt, or kernel context.
gld(7D), dlpi(7P), qreply(9F)
Writing Device Drivers for Oracle Solaris 11.2
STREAMS Programming Guide
| http://docs.oracle.com/cd/E36784_01/html/E36886/dlbindack-9f.html | CC-MAIN-2015-35 | refinedweb | 376 | 56.76 |
Given a dictionary and a string ‘str’, find the longest string in the dictionary that can be formed by deleting some characters of the given ‘str’.
Examples:
Input : dict = {"ale", "apple", "monkey", "plea"}
        str = "abpcplea"
Output : apple

Input : dict = {"pintu", "geeksfor", "geeksgeeks", "forgeek"}
        str = "geeksforgeeks"
Output : geeksgeeks
Asked In : Google Interview
This problem reduces to checking whether one string is a subsequence of another. We traverse all dictionary words and, for every word, check whether it is a subsequence of the given string and is the largest of all such words. We finally return the longest word that is a subsequence of the given string.
Below is a C++ implementation of the above idea:
// C++ program to find the largest word in the dictionary
// formed by deleting some characters of the given string
#include <bits/stdc++.h>
using namespace std;

// Returns true if str1[] is a subsequence of str2[].
// m is the length of str1 and n is the length of str2.
bool isSubSequence(string str1, string str2)
{
    int m = str1.length(), n = str2.length();

    int j = 0; // index of str1 (the candidate subsequence)

    // Traverse str2 and str1, comparing the current character of
    // str2 with the first unmatched character of str1; on a match,
    // move ahead in str1.
    for (int i = 0; i < n && j < m; i++)
        if (str1[j] == str2[i])
            j++;

    // If all characters of str1 were found in str2
    return (j == m);
}

// Returns the longest string in the dictionary which is a
// subsequence of str.
string findLongestString(vector<string> dict, string str)
{
    string result = "";
    int length = 0;

    // Traverse all words of the dictionary
    for (string word : dict)
    {
        // If the current word is a subsequence of str and is the
        // largest such word so far
        if (length < word.length() && isSubSequence(word, str))
        {
            result = word;
            length = word.length();
        }
    }

    // Return the longest string
    return result;
}

// Driver program to test the above functions
int main()
{
    vector<string> dict = {"ale", "apple", "monkey", "plea"};
    string str = "abpcplea";
    cout << findLongestString(dict, str) << endl;
    return 0;
}
Output:
apple
Time Complexity : O(N*K*n), where N is the number of words in the dictionary, n is the length of the given string ‘str’, and K is the maximum length of a word in the dictionary.
Auxiliary Space : O(1).
| https://www.geeksforgeeks.org/find-largest-word-dictionary-deleting-characters-given-string/ | CC-MAIN-2018-09 | refinedweb | 347 | 64.64 |
sigsuspend - wait for a signal
#include <signal.h> int sigsuspend(const sigset_t *sigmask);
The sigsuspend() function replaces the current signal mask of the calling thread with the set of signals pointed to by sigmask and then suspends the thread until delivery of a signal whose action is either to execute a signal-catching function or to terminate the process. This will not cause any other signals that may have been pending on the process to become pending on the thread.
If the action is to terminate the process, then sigsuspend() will never return. If the action is to execute a signal-catching function, then sigsuspend() will return after the signal-catching function returns, with the signal mask restored to the set that existed prior to the sigsuspend() call. Since sigsuspend() suspends thread execution indefinitely, there is no successful completion return value. If a return occurs, -1 is returned and errno is set to indicate the error.
The sigsuspend() function will fail if:
- [EINTR]
- A signal is caught by the calling process and control is returned from the signal-catching function.
None.
An interpretation request has been filed with IEEE PASC concerning whether sigsuspend() suspends process execution or suspends thread execution. The wording here matches the description of this interface specified by the ISO POSIX-1 standard.
None.
pause(), sigaction(), sigaddset(), sigdelset(), sigemptyset(), sigfillset(), <signal.h>.
Derived from the POSIX.1-1988 standard.
| http://pubs.opengroup.org/onlinepubs/7990989775/xsh/sigsuspend.html | CC-MAIN-2015-06 | refinedweb | 207 | 53.21 |
14 May 2012 10:31 [Source: ICIS news]
SINGAPORE (ICIS)--Saudi Kayan has resumed production at its facilities in Al-Jubail, Saudi Arabia.
In a filing to the Saudi Stock Exchange on 12 May, Saudi Kayan said that the pause on most of the company’s plants at the site was caused by a problem in the production of steam used in its manufacturing operations.
Among the plants that halted operations are Saudi Kayan’s 400,000 tonne/year high density polyethylene (HDPE) plant, its 350,000 tonne/year PP plant in Al-Jubail and its 650,000 tonne/year ethylene glycol plant.
The company has yet to assess the financial impact of the outage, but expects it to be limited.
Publicly listed Saudi Kayan is 35% owned by Saudi
| http://www.icis.com/Articles/2012/05/14/9559187/saudi-kayan-resumes-operations-at-al-jubail-plants-after.html | CC-MAIN-2014-52 | refinedweb | 127 | 56.59 |
QML:
import QtQuick 2.0

Column {
    width: 200; height: 200

    TextInput { id: myTextInput; text: "Hello World" }

    Text { text: myTextInput.text }
}

The default keyword is optional, and modifies the semantics of the property being declared. See the upcoming section on default properties for more information about the default property modifier.

import QtQuick 2.0

Rectangle {
    color: "red"
    property color nextColor: "blue" // combined property declaration and initialization
}

import QtQuick 2.0

Item {
    states: [
        State { name: "loading" },
        State { name: "running" },
        State { name: "stopped" }
    ]
}

import QtQuick 2.0

Rectangle {
    // declaration without initialization
    property list<Rectangle> siblingRects

    // declaration with initialization
    property list<Rectangle> childRects: [
        Rectangle { color: "red" },
        Rectangle { color: "blue" }
    ]
}

import QtQuick 2.0

Item {
    width: 100; height: 100

    MouseArea {
        anchors.fill: parent
        onClicked: {
            console.log("Click!")
        }
    }
}

Signals are declared with the syntax:

signal <signalName>[([<type> <parameter name>[, ...]])]
Attempting to declare two signals or methods with the same name in the same type block is an error. However, a new signal may reuse the name of an existing signal on the type. (This should be done with caution, as the existing signal may be hidden and become inaccessible.)
Here are three examples of signal declarations:
import QtQuick 2.0

Item {
    signal clicked
    signal hovered()
    signal actionPerformed(string action, var actionResult)
}

import QtQuick 2.0

TextInput {
    text: "Change this!"
    onTextChanged: console.log("Text has changed to:", text)
}

Methods are declared with the syntax:

function <functionName>([<parameterName>[, ...]]) { <body> }
Methods can be added to a QML type in order to define standalone, reusable blocks of JavaScript code. These methods can be invoked either internally or by external objects.
Unlike signals, method parameter types do not have to be declared as they default to the var type.
Attempting to declare two methods or signals with the same name in the same type block is an error. However, a new method may reuse the name of an existing method on the type. (This should be done with caution, as the existing method may be hidden and become inaccessible.)
Below is a Rectangle with a calculateHeight() method that is called when assigning the height value:

import QtQuick 2.0

Rectangle {
    id: rect

    function calculateHeight() {
        return rect.width / 2;
    }

    width: 100
    height: calculateHeight()
}

Attached properties and attached signal handlers are created by an attaching type. For example, the Component.onCompleted attached signal handler is commonly used to run some JavaScript once an object's creation has completed:

import QtQuick 2.0

ListView {
    width: 240; height: 320
    model: ListModel {
        id: listModel
        Component.onCompleted: {
            for (var i = 0; i < 10; i++)
                listModel.append({"Name": "Item " + i})
        }
    }
    delegate: Text { text: index }
}
A common error is to assume that attached properties and signal handlers are directly accessible from the children of the object to which they have been attached. For example, ListView.isCurrentItem is attached only to the root object of each delegate, not to its children:

import QtQuick 2.0

ListView {
    width: 240; height: 320
    model: 3
    delegate: Item {
        width: 100; height: 30

        Rectangle {
            width: 100; height: 30
            color: ListView.isCurrentItem ? "red" : "yellow" // WRONG! This won't work.
        }
    }
}

Referring to the attached property through the delegate's root object works:

ListView {
    //....
    delegate: Item {
        id: delegateItem
        width: 100; height: 30

        Rectangle {
            width: 100; height: 30
            color: delegateItem.ListView.isCurrentItem ? "red" : "yellow" // correct
        }
    }
}
Now delegateItem.ListView.isCurrentItem correctly refers to the isCurrentItem attached property of the delegate.
| http://doc.qt.io/qt-5/qtqml-syntax-objectattributes.html | CC-MAIN-2017-43 | refinedweb | 424 | 50.73 |
Chemical Equilibrium Example Program¶
In the program below, the equilibrate method is called to set the gas to a state of chemical equilibrium, holding the temperature and pressure fixed.
#include "cantera/thermo.h"

using namespace Cantera;

void equil_demo()
{
    std::unique_ptr<ThermoPhase> gas(newPhase("h2o2.cti", "ohmech"));
    gas->setState_TPX(1500.0, 2.0*OneAtm, "O2:1.0, H2:3.0, AR:1.0");
    gas->equilibrate("TP");
    std::cout << gas->report() << std::endl;
}

int main()
{
    try {
        equil_demo();
    } catch (CanteraError& err) {
        std::cout << err.what() << std::endl;
    }
}
The program output is:
  temperature        1500 K
  pressure           202650 Pa
  density            0.316828 kg/m^3
  mean mol. weight   19.4985 amu

                          1 kg          1 kmol
                       -----------   ------------
  enthalpy             -4.17903e+06   -8.149e+07  J
  internal energy      -4.81866e+06   -9.396e+07  J
  entropy               11283.3        2.2e+05    J/K
  Gibbs function       -2.1104e+07    -4.115e+08  J
  heat capacity c_p     1893.06        3.691e+04  J/K
  heat capacity c_v     1466.65        2.86e+04   J/K

                     X              Y          Chem. Pot. / RT
               -------------   ------------   ---------------
  H2           0.249996        0.0258462        -19.2954
  H            6.22521e-06     3.218e-07        -9.64768
  O            7.66933e-12     6.29302e-12      -26.3767
  O2           7.1586e-12      1.17479e-11      -52.7533
  OH           3.55353e-07     3.09952e-07      -36.0243
  H2O          0.499998        0.461963         -45.672
  HO2          7.30338e-15     1.2363e-14       -62.401
  H2O2         3.95781e-13     6.90429e-13      -72.0487
  AR           0.249999        0.51219          -21.3391
How can we tell that this is really a state of chemical equilibrium? Well, by applying the equation of reaction equilibrium to formation reactions from the elements, it is straightforward to show that:

\[\mu_k = \sum_m a_{km} \lambda_m\]
where \(\mu_k\) is the chemical potential of species k, \(a_{km}\) is the number of atoms of element m in species k, and \(\lambda_m\) is the chemical potential of the elemental species per atom (the so-called “element potential”). In other words, the chemical potential of each species in an equilibrium state is a linear sum of contributions from each atom. We see that this is true in the output above—the chemical potential of H2 is exactly twice that of H, the chemical potential for OH is the sum of the values for H and O, the value for H2O2 is twice as large as the value for OH, and so on.
We’ll see later how the equilibrate() function really works.
| https://cantera.org/documentation/docs-2.3/sphinx/html/cxx-guide/equil-example.html | CC-MAIN-2018-47 | refinedweb | 400 | 68.16 |
Note: at times this document may fall behind what is written in XUL::Gui; in that case, XUL::Gui is right.
gui programming has always been hard, be it a simple form, or a complex dynamic interface, the learning curve has always been steep, the boilerplate painful, and the design patterns, well, they have been quite tedious. then came HTML, and with it a clean, clear, and concise nested programming style, that has, for the most part, logical and intuitive functions and styling. it has taken some time for web browsers to support user interfaces on a par with the native gui in most operating systems but that time has come. firefox, available for all major operating systems, provides a rich and extensible framework for developing cross platform gui applications. these applications are written in XUL, Mozilla's gui development language, the same language that firefox itself is written in. HTML is also fully supported, and can be freely intermixed with XUL. as powerful as XUL and HTML are, they are fundamentally bound to javascript, which if you're anything like me, just isn't a suitable replacement for perl.
XUL::Gui seeks not only to fully integrate all of the features of the XUL and HTML markup languages, in both XML and functional forms, but to also proxy every property, attribute and method from javascript to perl and back enabling transparent manipulation of the DOM in pure perl.
as functional as XUL and the DOM are, they aren't always the most convenient, otherwise the various javascript frameworks would not exist. the XUL::Gui proxy aims to smooth some of the DOM's rough edges by abstracting away the difference between properties and attributes, and adding plural versions of many functions (you can also use any javascript framework with XUL::Gui by simply including it in a <script> tag as you normally would: SCRIPT( src=>"myframework.js" ), but I hope in most cases that you won't have to)
the primary way of assembling your gui and submitting large updates
Label( value=>'Hello World' )
Button( label=>'Click', oncommand=>\&eventhandler )
the parenthesis are optional in simple contexts
$someparent->appendChild(Label value=>"$count");
but are of course needed for nested objects
display Window( Hbox( Label( value=>'Hello, World!' ), Button( label=>'Click Me' ) ) );
or if you're golfing
display Hbox Label( value=>'hello, world!' ), Button label=>'Click Me';
every XUL and HTML tag is imported into your namespace with the following spellings:
Somexulname and SomeXulName
SOMEHTMLNAME and html_somehtmlname
the nesting of tags can be arbitrarily deep and complex, and functions of course follow all the same nesting rules as XML. however unlike XML, the attributes, properties and children of a tag can be distributed in any order, but it's probably best for readability to keep them at the front of the @_ list. of course all arguments, children most usefully, are processed in order
Hbox(
    Vbox( id=>'hbox1',
        Label( value=>'vbox1' ),
        Button( label=>'vbox2', oncommand=>\&eventhandler )
    ),
    Label( value=>'hbox2' ),
    Button( id=>'btn', label=>'hbox3',
        oncommand=>sub{
            my ($self, $event) = @_;
            print "$self->{ID} received event: ", $event->type, "\n";
            # prints "btn received event: command"
        })
)
in a tag, to set a property at creation time (if it makes sense), prepend a single underscore
Sometag( attributename=>'val', _property=>4 )
tag functions generate an XUL::Gui::Object hashref object that knows how to create itself and then proxy interaction between perl and javascript for every attribute, property and method that the corresponding XUL or HTML object has in javascript. all of the names are mirrored into perl with the exact spelling and capitalization, however all three are condensed into a single namespace, a perl $object->method; call serves as the official documentation of tags and their attributes, properties and methods
inside the hashref itself, all UPPERCASE keys are reserved, but feel free to use any other keys as you want. a few useful reserved keys to know are:
ID   the supplied or auto generated id
TAG  the XUL or HTML tag name
A    a hashref containing creation time attribute and _property settings
C    an array ref containing the creation time children
M    a hashref for user defined methods
W    the parent widget if it exists
all tags are loaded into the exported %ID hash with their specified id or an auto generated one. all reserved ids match /^xul_\d+/
tag objects are accessed as follows:
js:
    ID.btn = document.createElement('button');
    ID.btn.setAttribute('id', 'btn');
    ID.btn.setAttribute('label', 'Click Me');
    ID.btn.setAttribute('oncommand', handler);
    ID.someparent.appendChild(ID.btn);

perl:
    Button( id=>'btn', label=>'Click Me', oncommand=>\&handler );
    $ID{someparent}->appendChild($ID{btn});

    or all in one line:

    $ID{someparent}->appendChild
        (Button id=>'btn', label=>'Click Me', oncommand=>\&handler);

js:
    ID.btn.getAttribute('attribute');
    ID.btn.setAttribute('attribute', value);

perl:
    $ID{btn}->attribute;
    $ID{btn}->attribute = $value;

js:
    ID.btn.property = 5;

perl:
    $ID{btn}->property = 5;
in the event of a namespace collision, the attribute is returned. to get the property, simply prepend a _ to the name. in most cases setting the attribute works better.
perl:
    $ID{btn}->_forcedproperty = $value;

js:
    ID.btn._prop = 5;   // a property that starts with _

perl:
    $ID{btn}->__prop = 5;   # only the first _ is shifted off
attributes don't start with underscores so they are safe. in the rare event of an attribute that is not a perl \w, just use the normal (get|set)Attribute() call
js:
    ID.btn.callMethod();
    ID.btn.callMethod(arg1, arg2);

perl:
    $ID{btn}->callMethod;
    $ID{btn}->callMethod($arg1, $arg2);
here is as good a time as any to explain the DWIM details of how one namespace in perl maps to three in javascript (and abstracts away the tedious (set|get)Attribute() calls)
$ID{btn}->callMethod;            # void context is always a method call
$ID{btn}->callMethod('@_ > 0');  # any arguments is obvious
$ID{btn}->somename = 5;          # the following selection order is used
$ID{btn}->somename;              #     attribute if hasAttribute(...)
                                 #     function  if typeof is function
                                 #     property  if has property
                                 #     undef or warn if :lvalue
$ID{btn}->_somename = 10;        # forced property:  ID.btn.somename = 10;
print $ID{btn}->method_(...);    # forced method:    ID.btn.method(...);
the returned value of all -> calls is either a scalar, or a reference to an appropriate proxy object.
if javascript returns an array, access the object as a perl array reference.
my $array = gui 'new Array(1, 2, 3)';
print "@$array";   # prints 1 2 3
$array->reverse;
print "@$array";   # prints 3 2 1
the bidirectional translation between perl and javascript is:
JavaScript           | Perl
---------------------|-------------------------
Array                | ARRAY ref
Object               | Tag Object
undefined            | undef
null                 | undef
String, Number,      | SCALAR
  any other scalar   |
the same attribute, property, and method call syntax from tags apply to returned values as well.
all -> operations are atomic and execute immediately unless inside a pragmatic block.
this is fine for most events, but there are occasions when large changes need to be made to the gui that would be too slow to send individually to the client.
if you need to add many elements to the gui, you could write it in javascript with the gui('javascript here') call, but that would be tedious, and Larry tells us we should be lazy. so use the preferred method of generating your objects with the tag subs, and utilize map to factor out some of XML's repetition. since tag objects are not written to the client until they are used in a method call, such as appendChild(), and then are written in one large message, they are very fast. a side effect of this means that attempting to set attributes, properties, or to call javascript methods before using the object in a method call will result in errors.
that is all well and good, but what about if you need to make many changes to existing objects such as loading thousands of lines into a list, as with all Perl, TIMTOWTDI:
$ID{list}->removeItemAt(0) for 1..$ID{list}->getRowCount;  # simple but slow
$ID{list}->appendItem($_, $_) for @items;                  # mirrors the JS solution

buffered {                                    # a touch longer than the first
    $ID{list}->removeItemAt(0) for 1..shift;  # but easily as clean, and much
    $ID{list}->appendItem($_, $_) for @items; # faster. keep in mind that
} $ID{list}->getRowCount;                     # dependent values, such as the
                                              # row count, need to be passed in
                                              # or placed in a now block

$ID{list}->removeItems          # for a few common tasks, XUL::Gui adds plural
         ->appendItems(@items); # methods which are easiest and fastest of all
buffered { CODE } LIST;
# buffer SCALAR, sub{ CODE }, LIST;   not implemented
cached { CODE };
now { CODE };
buffered accepts a code block that defers proxying all commands to the gui until the block ends. it also accepts a list in case you need to pass in any non-deferred attributes, as in the last section.
cached accepts a code block that performs set calls normally, but only performs a particular get once, and then afterward always returns the same value. javascript function calls behave normally
now is provided as a way to temporarily escape a buffered or cached block without causing a buffer flush or a cache reset. it does nothing outside of a pragmatic block.
buffered returns the value of the combined javascript call, useful for testing for errors. cached and now return the result of their last perl expression.
buffered and cached can be nested in either order. when inside both, get calls are cached, and set calls are buffered
there is only one buffer and cache, so nesting multiple buffered or cached blocks has no effect. neither will work inside of a now block.
note that all subroutines called from within a pragmatic block retain that pragma.
sub {
    my ($self, $event) = @_;   # $_ == $self
    $self->someattribute = 'something';
    print $event->type, "\n";
}
XUL::Gui has a robust widget system designed to group tag patterns and other widgets. it offers functionality similar to XBL, but entirely in perl, and with what at least I think is an easier syntax.
*MyWidget = widget {   # a simple widget
    Hbox(
        Label( value=>'Labeled Button: ' ),
        Button( label=>'OK' )
    )
};
inside of each widget, the following variables are defined:
%{ $_{A} }    the attributes passed in to the widget
@{ $_{C} }    the children passed into the widget
%{ $_{M} }    a hash containing widget methods, which can be added to
$_ and $_{W}  the widget itself

$_->mymethod is the same as $_{M}{mymethod}($_)

*MyWidget = widget {   # a widget that accepts attributes and children
    Hbox(
        Label( $_->has('label->value!') ),
        Button( label=>'OK', $_->has('oncommand!') ),
        $_->children
    )
};

MyWidget( label=>'My Button: ', oncommand=>\&action, SomeChildObject() );
*BetterWidget = widget {
    Hbox
        Label( id=>'lbl', $_->has('label->value!') ),
        Button( id=>'btn', label=>'OK', $_->has('oncommand!') ),
        Button( id=>'exit', label=>'Exit', oncommand=>sub{
            my $self = shift;
            $self->{lbl}->label = 'Goodbye';
            $self->blur;
            quit;
        }),
        $_->children
}
mymethod => sub {
    my $self = shift;
    say $self->{lbl}->value;
    $self->{btn}->focus;
};

BetterWidget( id=>'better', label=>'Better: ', oncommand=>\&action );

.....

$ID{better}->mymethod;   # prints the label's value and focuses the OK button
You may be wondering what happens when you create a second BetterWidget now that the internal elements have id's. As we have seen, all id's get loaded into the %ID hash for later reference. However, if widgets behaved the same way, you could never reuse a widget, and what would be the point? Rather, inside of a widget, all id's are in their own private lexical space.
After instantiating the widget with an id of 'better':
$ID{lbl} does not exist, but $ID{better}{lbl} does, and can be interacted with as normal.
Inside of a widget's method handlers, $_[0] contains no native methods of its own, but contains hash keys of all of the id's defined within the widget
Inside of a widget's event handlers, $_[0] contains widget methods that affect the current object, as well as containing hash keys of all of the id's defined within the widget, in addition to all of its ordinary attributes, properties, and methods that go along with Tag objects
Since widgets define their own namespace and methods and behave externally the same way as normal Tag objects, it is possible to create complex interaction without much repeated coding.
Widgets also can be nested within each other without limit. Each nested widget is again its own lexical id space.
$ID{mainwidget}{subwidget}{lbl}->value = 'something';
previously we have seen that widgets behave like Tag objects, but Widgets can also behave like classes, using the extends method.
*SuperClass = widget {
    Vbox(
        $_->has('width'),
        Label( value=>'SuperClass' ),
        Button( id=>'btn', oncommand=>sub{...} )
    )
}
supermethod => sub { ... };

*SubClass = widget {
    $_->extends( &SuperClass )
}
submethod => sub { ... };
any SubClass objects now have both the 'submethod' and 'supermethod' methods, and SubClass creates the same gui elements as SuperClass. It does this because extends returns the results from the &SuperClass call. This also means that you are free to rearrange, add, or dismiss objects from the SuperClass as you see fit. Named ID's from the superclass are also in the subclass.
*ReverseClass = widget {
    my @super = $_->extends( SubClass( width=>50, @_ ) );  # add a default value
    reverse @super
};

*AnotherClass = widget {
    $_->extends( &ReverseClass );   # throw away super class's objects
    Vbox(
        Label( 'Only the button from SuperClass' ),
        $ID{btn},                   # grab the btn
    )
};
when you call a widget, you need to always use parenthesis due to its runtime definition, to get around this:
sub MyWidget;
*MyWidget = widget { ... };
# or
BEGIN{ *MyWidget = widget{ ... } }
MyWidget can then be called like any native tag object
any key value pair in a tag's argument list with a coderef value and a key that doesn't match /^on/ is entered into that tag's method table, as if it were a widget.
| http://search.cpan.org/~asg/XUL-Gui-0.63/lib/XUL/Gui/Manual.pm | CC-MAIN-2018-17 | refinedweb | 2,277 | 53.85 |
Hi, I am given this assignment that should be run in Jython. The assignment says that the program consists of a Java application with a canvas and a textarea for turtle code. I need to create a Jython application that takes turtle code from the Java application, parses it with regular expressions, and calls setPixel(x,y)
in the Java application to draw a rectangle. In the Java program, setPixel(x, y) is used to control the painting and getCode() to get the code entered into the turtle code textarea. These methods are both defined in the DYPL Java class.
import Translater

class Jtrans(Translater):
    def __init__(self):
        pass

    def actionPerformed(self, event):
        print("Button clicked. Got event:")
        self.obj.setPixel(100,10)
        self.obj.setPixel(101,10)
        self.obj.setPixel(102,10)

    def move(self, x,y):
        move(50, 90)
        move(100, 90)
        move(50, 90)
        move(100, 90)

    def put(self, x,y,a):
        put(150, 150, 0)
        for x in range(0,4):
            move(50, 90)
        end
        eval("self."+self.obj.getCode()+"()")  # why do we need this?

    def setDYPL(self, obj):
        print("Got a DYPL instance: ")
        print(obj)

if __name__ == '__main__':
    import DYPL
    DYPL(Jtrans())
I also attach a zip file containing classes like Translater.class,DYPLCanvas.java etc if you need it. so does anyone know how I should start?
| https://www.daniweb.com/software-development/python/threads/447560/how-to-draw-a-rectangle-in-jython | CC-MAIN-2015-35 | refinedweb | 225 | 56.05 |
/webware/Webware/WebKit
In directory sc8-pr-cvs1:/tmp/cvs-serv2857
Modified Files:
ServletFactory.py
Log Message:
ServletFactory makes sure it gets a class from the servlet module,
but the way to test for this has changed for new-style classes.
This changes the assert so it allows new-style classes.
Fixes bug #635967
Index: ServletFactory.py
===================================================================
RCS file: /cvsroot/webware/Webware/WebKit/ServletFactory.py,v
retrieving revision 1.26
retrieving revision 1.27
diff -C2 -d -r1.26 -r1.27
*** ServletFactory.py 10 Nov 2002 12:37:15 -0000 1.26
--- ServletFactory.py 9 Jan 2003 03:55:00 -0000 1.27
***************
*** 2,6 ****
from WebKit.Servlet import Servlet
import sys
! from types import ClassType
import ImportSpy as imp # ImportSpy provides find_module and load_module
import threading
--- 2,6 ----
from WebKit.Servlet import Servlet
import sys
! from types import ClassType, BuiltinFunctionType
import ImportSpy as imp # ImportSpy provides find_module and load_module
import threading
***************
*** 185,189 ****
# Pull the servlet class out of the module
theClass = getattr(module, name)
! assert type(theClass) is ClassType
assert issubclass(theClass, Servlet)
self._cache[path]['mtime'] = os.path.getmtime(path)
--- 185,200 ----
# Pull the servlet class out of the module
theClass = getattr(module, name)
! # new-style classes aren't ClassType, but they
! # are okay to use. They are subclasses of
! # type. But type isn't a class in older
! # Python versions, it's a builtin function.
! # So we test what type is first, then use
! # isinstance only for the newer Python
! # versions
! if type(type) is BuiltinFunctionType:
! assert type(theClass) is ClassType
! else:
! assert type(theClass) is ClassType \
! or isinstance(theClass, type)
assert issubclass(theClass, Servlet)
self._cache[path]['mtime'] = os.path.getmtime(path)
| https://sourceforge.net/p/webware/mailman/webware-checkins/?viewmonth=200301&viewday=9 | CC-MAIN-2017-22 | refinedweb | 320 | 54.79 |
Difference between revisions of "ArchWiki:Requests"
Latest revision as of 15:33, 29 April 2016

Should we remove or archive obsolete articles?
- 3.2 Broken redirects
- 3.3 Hide irrelevant pages in search suggestions
- 3.4 Article guideline
- 3.5 TrackPoint
- 3.6 Renamed software
- 3.7 Cleanup: links to non-existent packages
- 3.8 Strategy for updating package templates
- 3.9 index.php in url address
- 3.10 Change drive naming/accessing to UUID?
- 3.11 User and group pacnew files
- 3.12 FAQ
- 3.13 Pacman hooks
- 4 Bot requests
General.
- For AUR3 package links marked with Template:Aur-mirror, you may consider resubmitting them to the AUR if interested in maintaining them.
All pages with broken package links are tracked in Category:Pages with broken package links. [2]
Broken redirects
I have written a simple script to list redirects with broken fragments (i.e. when no section with given title is found on the target page). After filtering out some false positives, here is the result. (And because I am incredibly selfish, I have eaten the low-hanging fruit before sharing the rest with others ;) )
As usual, please strike the fixed items off the list. Some things to consider:
- The section might have been renamed, either to simplify the title, or just to fix capitalization as per Help:Style#Section headings.
- Or it might have been merged with other, more generic section. Use the most relevant section for the redirect, or just redirect directly to the relevant page.
- In the worst case, there is no relevant content on the target page. This will probably deserve a separate discussion.
-- Lahwaacz (talk) 20:01, 18 August 2014 (UTC)
Update: I have removed the fixed redirects for better readability of the rest, for reference they can be found here. -- Lahwaacz (talk) 17:51, 26 December 2014 (UTC)
Another update to the list, there are about 50 more broken redirects since the last time... -- Lahwaacz (talk) 13:33, 11 April 2015 (UTC)
- Beginners' Guide/Installation --> Beginners' guide#Installation
- Beginners' Guide/Installation (Español) --> Beginners' guide (Español)#Instalación
- Beginners' Guide/Installation (Русский) --> Beginners' guide (Русский)#Установка
- Beginners' Guide/Preface --> Beginners' guide#Preparation
- Beginners' Guide/Preparation --> Beginners' guide#Preparation
- Beginners' Guide/Preparation (Español) --> Beginners' guide (Español)#Preparación
- Beginners' Guide/Preparation (Русский) --> Beginners' guide (Русский)#Подготовка
- Beginners' guide/Installation --> Beginners' guide#Installation
- Beginners' guide/Installation (Español) --> Beginners' guide (Español)#Instalación
- Beginners' guide/Preparation --> Beginners' guide#Preparation
- Beginners' guide/Preparation (Español) --> Beginners' guide (Español)#Preparación
- Map Custom Device Entries with udev (Español) --> Udev (Español)#Escribir reglas udev
- Pacman Color Output (Español) --> Pacman tips (Español)#Coloriando la salida de pacman
- Redownloading all installed packages --> Pacman tips#Redownloading All Installed Packages (minus AUR) -- section renamed with [3], does the redirect still make sense?
Hide irrelevant pages in search suggestions
Is it possible to hide some irrelevant pages from search suggestions? For example, we should not hide the Dkms page, because someone will be looking for that word. But we should hide irrelevant suggestions, such as Internet Share. They pollute the search and, more unpleasantly, I have sometimes created a Russian page with the wrong title because of this: I just added ' (Русский)' to the English title, because when you are redirected, the page's title in the address bar is still the old one. I understand that we cannot just delete them, because some forums may use the old titles. But making them invisible on ArchWiki is, I think, a good idea. Agent0 (talk) 14:39, 9 October 2014 (UTC)
- Theoretically it would be possible to hide the redirects which differ only in capitalization, but MediaWiki is not able to do this. You can either get results including all redirects (default), or none: on search page click Advanced in the search bar and then unselect List redirects checkbox (and click Search to renew the results).
- It seems that starting with MediaWiki 1.24, the URL for redirect pages will be rewritten to the target URL[4], which would certainly prevent the kind of mistakes you described. By the way, does anybody have an idea why ArchWiki is still at 1.22 branch?
- -- Lahwaacz (talk) 17:41, 9 October 2014 (UTC)
- Ok, and what about hiding pages whose titles differ not only in capitalization? For example, begin typing the search string 'wire': you will see many pages named "Wireless Setup", but you cannot see all the pages whose titles begin with 'wire'.
- Thanks for the advice. I can do an advanced search, but usually I use the search box on the left side of each page. And the problem is that I do not want to hide ALL redirects, but want to hide SOME redirects that are irrelevant.
- By the way, about hiding: is it theoretically possible to hide suggestions for languages I am not interested in? For example, I want to see which pages exist with titles beginning like "Blueto". In the suggestions I see many "Bluetooth (Language)" pages, but do not see that there is an article about a Bluetooth keyboard.
- It's good that the URL will be rewritten; we will wait for ArchWiki to be updated by some admin.
- Agent0 (talk) 18:51, 9 October 2014 (UTC)
- @Lahwaacz: I've always wondered too why Pierre is maintaining a "mw-1.23" branch in the repo but not merging it; I've posted a message in arch-general about that.
- @Agent0: The wiki search engine is MediaWiki's vanilla, with all its features and limitations; for an alternative, you can try an external search engine like Google, appending site:wiki.archlinux.org to the search strings. In any case, a search engine will not understand what is "relevant" and what is not, unless it's given an algorithm to do so, which is not the case for our wiki. About languages in search results, you can already add the name of a language to the search strings in order to limit the results to that language; restricting results to English is not currently possible and will be fixed by Help_talk:I18n#Language_namespace(s)_in_place_of_suffixes?.
- -- Kynikos (talk) 03:38, 11 October 2014 (UTC)
Article guideline
Contributing
A large portion of my ArchWiki time is spent on the pages below. We need a way to improve the situation. I think ArchWiki:Contributing could/should hold such information.
- Specific Laptop pages - Users should only add information to fix things that do not work. Do not list things that already work well. Do not duplicate information from Laptop.
- Specific Application pages - If the application works out of the box and the page only lists the needed packages, add it to List of applications.
- Network related apps - For example OwnCloud: only the base platform setup (PHP, Apache) is Arch Linux specific. General settings should go upstream instead.
--Fengchao (talk) 09:50, 11 December 2014 (UTC)
- In accordance with Help talk:Style#Better structuring, I'd like to minimize the centralization of style rules for specific pages in single articles, be it Help:Style, ArchWiki:Contributing etc. This is my position:
- Style for Category:Laptops pages should be discussed in ArchWiki:Requests#Specific_laptops.27_pages_templates and should probably be stated in the category page itself.
- Style for HCL pages should be stated directly there and/or in its direct child pages.
- Fengchao's "Specific Application pages" rule could go in the intro of List of applications.
- Fengchao's "Network related apps" is IMHO already covered by Help:Style#Hypertext metaphor.
- +1. To add to above:
- Fengchao's note about "Specific Application pages" could further be added as example to the "stub" bullet in ArchWiki:Contributing#Improving and
- the "Network related apps" example in a general fashion to ArchWiki:Contributing#Organizing.
- Perhaps one could also rename ArchWiki:Contributing#Announce massive edits in a talk page to ArchWiki:Contributing#Massive edits and use that rule to (1) refer the user to announce the edit on the talk page (current content) and (2) add a couple sentences to make the editor consider ArchWiki's stance on deduplication, crosslinking, referring to upstream documentation for default features, etc. - before starting massive edits.
- --Indigo (talk) 12:59, 14 December 2014 (UTC)
- I wouldn't like to add too many specific recommendations to ArchWiki:Contributing, but at the moment I'm not strongly against either, since I want to largely reorganize that page anyway one day; maybe the "Specific Application pages" and "Network related apps" rules would better fit ArchWiki:Contributing#Creating.
- About renaming ArchWiki:Contributing#Announce massive edits in a talk page I'm against: I want to keep those rules as basic as possible, since their only goal must be to prevent damage that can only be fixed with complete reversions; for this reason they shouldn't refer to style rules; also their headings should be complete sentences; if a massive edit is announced in a talk page, we can point the author to Help:Style#Hypertext metaphor if necessary.
- -- Kynikos (talk) 15:39, 17 December 2014 (UTC)
TrackPoint
A new page, TrackPoint, has been created to cover the configuration of the special input device typical to all/most ThinkPad laptops. There are still many pages duplicating the information, below is a complete list of pages that need to be merged. -- Lahwaacz (talk) 20:56, 22 December 2014 (UTC)
- IBM_ThinkPad_X41#Scrolling_with_trackpoint
- Lenovo ThinkPad Edge E430
- Lenovo ThinkPad T410
- Lenovo ThinkPad T420s
- Lenovo ThinkPad T530
- Lenovo ThinkPad T61
- Lenovo ThinkPad X120e
- Lenovo ThinkPad X220
- Lenovo ThinkPad X230
- Lenovo Thinkpad X60 Tablet
Renamed software
Nautilus got renamed to Gnome Files
Do we want to rename every occurrence of 'nautilus' in Arch wiki or is the redirect enough? -- Karol (talk) 23:49, 21 August 2014 (UTC)
- The redirect has only 3 backlinks, 2 of which are talk pages. About normal strings, this should be a comprehensive list, but it seems pretty dangerous to do a blind mass rename with a bot.
- Moving this back to the normal requests.
- -- Kynikos (talk) 14:06, 22 August 2014 (UTC)
- I started going through the pages in the search you link to. I think it will have to be done manually, seeing as paths, gschemas, executables and upstream projects that use nautilus in their name cannot be renamed. Regarding foreign language pages, if there are just a handful of occurrences of 'Nautilus', Google Translate can be used to get the gist of a sentence and see if it is safe to change 'Nautilus' to 'GNOME Files' or 'Files' - usually it is. But for the Spanish and Arabic Nautilus pages, there are dozens of occurrences, so I would much rather leave those to native speakers. -- Chazza (talk) 11:41, 6 November 2014 (UTC)
- Doh, I said the discussion was to be kept open, but I struck the heading anyway ^^' (thanks for re-opening it)
- I was only referring to the English pages, I agree that the presence of out of date translations shouldn't be enough to keep requests in this page open, unless they are specifically about translations, which is not this case.
- -- Kynikos (talk) 13:36, 7 November 2014 (UTC)
List
A list of pages where all possible instances of Nautilus have been changed, so we know what's been done and what hasn't. I will add to the list as I go through the pages returned by the search. -- Chazza (talk) 11:41, 6 November 2014 (UTC)
- Nautilus - Completed
- Nautilus (日本語) - No changeable instances
- Digital Cameras - Completed
- Awesome (한국어) - Completed
- GNOME (简体中文) - Completed
- GNOME tips - Completed
- Desktop environment - No changeable instances
- Lineak - No changeable instances
- GNOME tips (Nederlands) - Completed
- Firefox (العربية) - No changeable instances
- Bug Day/2010 - No changeable instances
- Openbox (Česky) - Completed
- fuseiso - Completed
- Beginners' Guide (Indonesia) - Completed
- Feh - Completed
- Bluetooth (Italiano) - Completed
- Feh (Italiano) - Completed
- Samba (Italiano) - Completed
- Backup programs - Completed
- Oggenc - Completed
- Samba - Completed
- IPod - Completed
- Dropbox - Completed
- GNOME (Italiano) - No changeable instances
- Samba (日本語) - Completed
- GNOME Tips (简体中文) - Completed
XBMC renamed to Kodi
Since version 14 XBMC was renamed to Kodi, see FS#43220. There are also some typos that say 'xmbc'. -- Karol (talk) 10:59, 25 December 2014 (UTC)
- I checked backlinks to Kodi of the now moved article. I think we got it covered. Can this be closed? --Indigo (talk) 09:22, 4 January 2015 (UTC)
- Thank you Indigo, unfortunately there's more, including an entry in List of applications/Multimedia and an entire section in XScreenSaver :) — Kynikos (talk) 09:21, 5 January 2015 (UTC)
Gummiboot
Gummiboot is included in systemd since 220-2 as systemd-boot. Relevant search: gummiboot -- Lahwaacz (talk) 14:05, 30 August 2015 (UTC)
Cleanup: links to non-existent packages
As of today, there are exactly 714 invalid uses (413 unique) of Template:AUR or Template:Pkg, spread across 398 pages. The complete list is on User:Lahwaacz.bot/Report 2014-04-05. I will try to go through it and update the links, but this is not a one-man job, so I would really appreciate some help. Please remember to strike/remove the items from the list to save others' time. -- Lahwaacz (talk) 16:42, 5 April 2014 (UTC)
- The previous report is closed for further updates, please contribute to User:Lahwaacz.bot/Report 2014-05-11. -- Lahwaacz (talk) 09:15, 12 May 2014 (UTC)
- The previous reports are closed for further updates, please contribute to User:Lahwaacz.bot/Report 2014-12-24. -- Lahwaacz (talk) 13:14, 25 December 2014 (UTC)
- The previous reports are closed for further updates, please contribute to User:Lahwaacz.bot/Report 2015-02-06. -- Lahwaacz (talk) 23:08, 6 February 2015 (UTC)
- I'll delete them eventually; for now I think they could be useful for searching through the user notes when fixing localized pages (not that many users do this...). First I will need to implement the automatic report page as suggested in #Strategy for updating package templates. -- Lahwaacz (talk) 13:00, 13 March 2015 (UTC)
Surrounding link text
While performing #Cleanup: links to non-existent packages, the bot updated a lot of package links, but of course it couldn't update the text around them accordingly, for example [7], so that's something else that could be done. Here's the list of the changes: [8]. -- Kynikos (talk) 03:24, 7 April 2014 (UTC)
- Technical detail: the last link is now slightly inaccurate, it shows all edits of the bot made in April 2014. Is there a way to set a specific day? (in this case 5th April) -- Lahwaacz (talk) 21:46, 9 April 2014 (UTC)
- Cool, there's really a way, I've just noticed it :D (the magic is in the offset parameter!) -- Kynikos (talk) 13:51, 11 April 2014 (UTC)
Strategy for updating package templates
It is an open secret that the current strategy for updating package templates, which consists of creating a dedicated report page, is not very effective. The number of broken package links was 714 before the first cleanup almost a year ago; today, after several successful cleanups, the number is 979.
The first problem is that the report page is being noticed only a short time after announcing the cleanup and (almost) completely overlooked after a few days. Updating a wiki should be a continuous effort, but this strategy relies on announcing an event.
The second problem is that only English pages were consistently updated in the cleanups. No wonder since the events were announced only in English...
I have a proposal to solve the first problem, and partially also the second: instead of creating report pages and organizing cleanups, the broken package links could be marked directly in wiki pages, similarly to the way external links are marked with Template:Dead link. The result could look like this: pkgname<sup>broken link: hint</sup>, where "broken link" is a link to a section with detailed instructions to fix the package link and "hint" is a short hint uniquely identifying the problem (given by the bot).
The proposal could be implemented in multiple ways:
- By introducing a single template, e.g. "Broken package link", which would take "hint" as a parameter and produce <sup>broken link: hint</sup>. This template would be added immediately after the broken instance of Pkg, AUR or Grp, exactly the same way as Template:Dead link is added to broken external links.
- By introducing a separate template for each of Pkg, AUR and Grp, for example "Broken Pkg link", "Broken AUR link" and "Broken Grp link". These templates would take two parameters, the (broken) package name and the hint. Then, "Broken Pkg link" would produce {{Pkg|pkgname}}<sup>broken link: hint</sup> and so on.
So far I'm for the second way, which should be more favourable to the bot.
The advantage of this strategy is that broken package links and hints are continuously visible to everybody reading the wiki page, which are presumably people most interested in the topic at the moment, while maintainers can still easily go through full lists generated by Special:WhatLinksHere. Also, I would not have to announce events :)
Of course I'm still open to other suggestions on solving the above problems, or any other if I missed something.
-- Lahwaacz (talk) 18:51, 10 February 2015 (UTC)
- I totally support this, although method 1. looks cleaner to me, because:
- It makes it clearer and more natural for users how to fix the template (just remove the message Vs fix the template name AND remove the message).
- It's consistent with Template:Dead link.
- Why would method 2. "be more favourable to the bot"?
- — Kynikos (talk) 00:51, 11 February 2015 (UTC)
- Other thing is that one of the mistakes when using AUR/Pkg/Grp templates is invalid number of parameters, e.g. [12]. If the additional parameters are to be preserved and not automatically removed by the bot, this would be impossible to mark using the second way. Well, almost impossible, we could still set the hint using a named parameter, but it would be really unclear how the link marked this way should be "fixed". So we should definitely use the first method. -- Lahwaacz (talk) 19:32, 11 February 2015 (UTC)
- Regarding your last point, I think those instances are very rare and editors will figure how to fix the link/sentence correctly (once they did the tricky part of finding where the package went). In my view this should not deter from using method 2, if there are any doubts method 1 with the bot might be problematic (over time).
- Either method would be great. One suggestion regarding the current report pages: How about still producing them additionally, but onto a static page (overwritten on next run, noting a schedule, e.g. quarterly, on top)? Reason: Particularly if an editor wants to fix articles of a language, Special:WhatLinksHere is unwieldy to swim through for articles in the language. --Indigo (talk) 10:20, 12 February 2015 (UTC)
- I could certainly still produce the reports. Also being it a static page, the updating could be done automatically similarly to ArchWiki:Statistics (so far I have created the report pages manually).
- I have implemented the first method in the bot script today. Since it has taken me almost half a day, it was probably not "equally easy" to the second method, but I think it works quite well and safely now. Template:Broken package link is now created and its "broken link" link points to #Broken package links, feel free to improve both.
- I was also thinking about how to integrate #Surrounding link text into #Broken package links, maybe the link to the bot's "contributions" with fixed date should be automatically put somewhere on each run? This task also depends too much on context, so the links like [13] are not very useful. In a way, it is already marked on the pages, because the link points e.g. to AUR and the context says otherwise. Also, we probably don't have the workforce to do systematic checking on this task, so is it even worth to include these instructions/links?
- -- Lahwaacz (talk) 20:59, 12 February 2015 (UTC)
- Sorry I did not mean to incur an extra manual task regarding the static page, just recalled how helpful it is to do the task systematically. It is related to your question about a run result page of the bot though. For cross-checking the bot's contributions for consistency we can filter like you do above and that should be enough imo. To work on the content, a bot result page probably would require manual transformation like the current method again. So, not required but an option.
- Regarding the template: I would leave out "the hint" because the sup-text gets too long and breaks text badly on small resolutions. "Broken link" should be enough, not? Also I would let "Broken link" link itself to Template:Broken_package_link. This way we can change the link to the instructions (#Broken_package_links) noted there in case they move (e.g. to ArchWiki:Contributing) at a later point, or we decide to expand the instructions in the template itself a little, without needing a bot run over existing broken links again. --Indigo (talk) 22:45, 12 February 2015 (UTC)
- I actually like your idea about the report page. Don't worry about the manual work, it can all be automated :) The report page could also include some statistics that can be collected during the update, e.g. how many broken links are there per language/in total.
- I think that it is important to include the hint, except for the obvious "package not found", which is included only for consistency (and because the template would otherwise end with the : ] sequence). It may be silly that the hint is longer than the package name (the message "invalid number of template parameters" is probably too long anyway), but the only alternative I can think of is using some cryptic abbreviations like "PNF" for "package not found", which would be explained among the other instructions pointed to by the "broken link" link.
- Having the instructions on this page has the advantage that it is linked from the navigation bar on the left, whereas Template:Broken package link would not be as easily discovered "by accident". The template page should serve mainly as a description of the template itself and just link to the instructions, so I'd leave it this way. Of course if the instructions are moved, the link in the template can be updated accordingly.
- -- Lahwaacz (talk) 15:32, 13 February 2015 (UTC)
- (I was writing this while Lahwaacz was writing his post above, and he saved before me, but I think this comment can still be useful)
- Well done Lahwaacz for implementing method 1.
- The problem of grouping broken links by language would IMO be more naturally solved with localized Template:Broken package link templates, instead of using a report: it would be easy to add this feature to the bot.
- The problem of the context wording is clearly unsolvable automatically, but for completeness I would mention it briefly in #Broken package links. I don't think adding any links is going to be of any usefulness; in theory a list of the changes could indeed be maintained in a report, and the entries stricken whenever each link's surrounding text is manually checked to be mentioning the correct location of the package; I'm not sure how popular such a list would be though, and the text would also be fixed anyway by casual users whenever they notice the inconsistency while reading the article. An alternative is using the bot to only always add the broken link template, even when the package template could be updated automatically, letting instead the users do all the actual updates manually; this method could be integrated with Wiki Monkey's editor assistant, which would complete the template updates in the editor, but still allow checking the changes before saving (already implemented there).
- About leaving out "the hint", maybe we could instead remove "broken link: " and move the link to "the hint". Would adding a light-red background to "the hint" help having it recognized as a link status template, even if "broken link: " isn't there anymore? But still, this method would make the template inconsistent with Template:Dead link, unless we want to update that one too.
- If we want the link to ArchWiki:Requests#Broken package links to be flexible in case we want to move the instructions somewhere else, I'd say the most natural way would be to use a redirect like Broken package link that we can point to wherever we want.
- — Kynikos (talk) 15:50, 13 February 2015 (UTC)
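The marking step of method 1, which was eventually implemented, can be illustrated with a short sketch. This is a hypothetical simplification, not the actual bot script: is_valid stands in for a real repository/AUR lookup, and only simple {{Pkg|name}}-style invocations are handled:

```python
import re

# Matches simple {{Pkg|name}}, {{AUR|name}} and {{Grp|name}} invocations.
PKG_TEMPLATE = re.compile(r"\{\{(Pkg|AUR|Grp)\|(?P<name>[^}|]+)\}\}")

def mark_broken(wikitext, is_valid, hint="package not found"):
    """Append {{Broken package link|hint}} after each invalid package
    template, leaving valid ones untouched (method 1 above)."""
    def repl(match):
        if is_valid(match.group("name")):
            return match.group(0)
        return match.group(0) + "{{Broken package link|%s}}" % hint
    return PKG_TEMPLATE.sub(repl, wikitext)
```

Because the marker is a separate template appended after the original one, fixing the link later only requires removing the marker, which is the main argument for method 1 in the discussion above.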
User and group pacnew files
I just went about handling the .pacnew files of filesystem and believe we are missing content about it. It's important, as one can wreak havoc on a system by mishandling them. Neither Users and groups nor Pacnew and Pacsave files mentions it; both should in my view. I'm creating the item here to:
- Check if I missed where it may be mentioned.
- Confirm procedure for handling them: can anyone think of valid reasons/cases in the current situation not to delete passwd/group/shadow.pacnew files?
- Ensure we add it to the relevant places. Opinions?
--Indigo (talk) 12:03, 18 April 2015 (UTC)
- Well, the user and group database files are handled also from filesystem.install so the only difference I ever got from a .pacnew file was different ordering or (compatible) database format changes. Anyway, I'm wondering as well what is the best way to handle the .pacnew for these 4 files. -- Lahwaacz (talk) 21:46, 18 April 2015 (UTC)
- Thanks. Searching the BBS yields a few threads, e.g. [15]. All I saw seem to suggest the files can be deleted. I remember an update where the bash path changed; the .pacnew used /usr/bin while the original still had the /bin path. I guess that would count as an example of "compatible" changes. A manual sorting with pwck and grpck could be part of the explanation of how to diff the files before deletion. Also grpconv and pwconv should be mentioned briefly in my opinion. I added [16] and [17] for now.
- Anyone has input for (2.) or (3.) above? --Indigo (talk) 09:33, 23 April 2015 (UTC)
FAQ
The FAQ could use an entry like "After upgrading my kernel, I can't mount USB devices", preferably linking FS#16702. See [18] [19].
Pacman hooks
.install files in the repos are gradually being phased out: see DeveloperWiki:Pacman Hooks -- Alad (talk) 15:32, 29 April 2016 (UTC)
Bot requests
Here, list requests for repetitive, systemic modifications to a series of existing articles to be performed by a wiki bot.
#include <list.h>
List of all members.
Default Constructor.
Copy constructor. It is only a reference copier.
Destructor. It does consider the number of references to the object before destruction.
Appends a node to the end of the list. It is equivalent to push.
Destroys the list.
Returns the data in the first node of the list.
Returns an iterator for the list.
Returns the number of data nodes in the list.
Checks if an element is present in the list. Returns true if present, false if not. The type T should support the operator '==' for this function to compile successfully.
Returns the data in the last node of the list.
Assignment operator. It is only a reference copier. i.e., after an assignment operation, the l-value and the r-value share the same data.
Pops a node from the end of the list. Does nothing in case of an empty list.
Pushes a node to the end of the list. It is equivalent to append. In fact, push calls the function append.
Removes the first list element equal to the function argument e.
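Since only member summaries are listed above, here is the documented behaviour (push/append equivalence, a pop that tolerates an empty list, find relying on ==, removal of the first equal element) transcribed into a Python sketch; the names mirror the summaries and are not the actual lcs::List C++ signatures:

```python
class RefList:
    """Illustrative model of the documented list semantics; the real
    class is a reference-counted C++ template, not this Python code."""

    def __init__(self):
        self._data = []

    def push(self, x):
        # "Pushes a node to the end of the list. It is equivalent to append."
        self._data.append(x)

    append = push  # "Appends a node to the end of the list. It is equivalent to push."

    def pop(self):
        # "Pops a node from the end of the list. Does nothing in case of an empty list."
        if self._data:
            self._data.pop()

    def find(self, x):
        # Returns true if present, false if not; relies on == for type T.
        return x in self._data

    def remove(self, e):
        # Removes the first list element equal to e.
        if e in self._data:
            self._data.remove(e)

    def first(self):
        return self._data[0]    # the data in the first node

    def last(self):
        return self._data[-1]   # the data in the last node

    def count(self):
        return len(self._data)  # the number of data nodes
```

In Python, b = a trivially makes both names share one object, which loosely models the documented reference-copying assignment where the l-value and the r-value share the same data.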
|
i18ndude 3.2.2
i18ndude performs various tasks related to ZPT's, Python Scripts and i18n.
Overview
Call i18ndude with the --help argument to see what it can do for you.
Changelog
3.0c1 (2008-10-04)
- Fixed major bug introduced in the kupu changes. We didn't include cpt pages anymore. [hannosch]
- Updated the table output with some styling tips from limi. [hannosch]
- Added a new -t option to the list command. This will output a simple HTML page with some colored progress bars instead of the simple text listing. [hannosch, limi]
- Ripped out the long unmaintained chart and combinedchart commands. Added a new list command instead, which shows the translation status in a simple listing. [hannosch]
3.0b4 (2008-04-26)
- Updated documentation. [hannosch]
- Fixed parsing of projects without any i18n in page templates. It is sufficient to have i18n in Python files or in GS profiles. [naro]
- Added special handling of some more entities to please kupu. [hannosch]
- Applied kupu-i18nextract-sa-diff.patch from kupu to be able to rebuild the kupu pot files from xsl and html files. [hannosch]
- Normalize path separators in references to / on all platforms. [hannosch]
- Reverted undocumented c58402, which broke in presence of Unicode strings. [hannosch]
- Strip the computer specific base folder from generated references. [hannosch]
- On most errors, show the error and a very short help message, instead of the complete doc string of the file. [maurits]
3.0b3 (2007-09-01)
- Stripped some more trailing whitespace. Fixed recursive algorithm for i18n domain extraction. [hannosch]
- Fixed some bugs in the GenericSetup extraction handler. Messages defined on the root node will now be extracted as well. The message texts are stripped from beginning and trailing whitespace. [hannosch]
3.0b2 (2007-06-05)
- No longer replace … and — with simple ASCII equivalents, but use proper Unicode characters, properly representing these HTML entities. [hannosch]
- Fixed some minor bugs found while rebuilding the Plone pot files. [hannosch]
- Added basic Unicode support to the MessageCatalog class. [hannosch]
- Fixed comment handling in the merge command. [hannosch]
- Made gdchart dependency for the combinedchart command optional. Instead you only get a textual listing right now. [hannosch]
- Quote new lines in the default comment properly. [hannosch]
- Added support for extracting i18n:attributes from GenericSetup profiles. [hannosch]
- Added a new basic GenericSetup profile extractor, which automatically extracts messages marked with i18n:translate. [hannosch]
- Added back support for specifying multiple folders to be searched in the rebuilt-pot command. [hannosch]
- Added new --exclude argument to the rebuilt-pot command, which lets you specify a whitespace delimited list of files that should not be included in the message extraction. [hannosch]
- Added automatic replace from '…' to '...'. [hannosch]
3.0b1 (2007-03-01)
- Fixed bug in mixing different catalogs into one. It wouldn't respect the default value of messages extracted from Python code. Also don't extract messages from translate and utranslate functions anymore, because they have a different call signature. [hannosch]
- Whitespace fix for filename filter. [sunew]
- Improved the regular expression used in find_untranslated, so that it also matches tags beginning with capital letters. [kclarks]
- Removed custom TAL parser. Use the one from zope.tal.talgettext instead. [hannosch]
- Integrate extract.py and interfaces.py from zope.app.locales. Got rid of our own version of TokenEater and the whole pystrings.py file. [hannosch]
2.1.1 (2006-10-28)
- Use entry_points console_scripts from setuptools to install the main script. This should generate an executable file on Windows platforms. [hannosch]
- Removed some unused test files. [hannosch]
- Corrected the package information in the setup.py. Figured out how to use find_packages() with the correct arguments. [hannosch]
2.1 (2006-09-22)
- Refactored the package source code layout to comply to the usual best practices. [hannosch]
- Added framework classifiers to the package metadata. [hannosch]
- Some small refinements to setup.py. i18ndude is now registered in the Cheese Shop and you can get the current development version just by typing 'easy_install i18ndude' :) [hannosch]
- Egg enabled i18ndude. The next release will be available as an egg. If you are in a development environment you might want to reinstall i18ndude by using 'python setup.py develop' instead of 'python setup.py install' now, so you don't have to do this whenever something changes in SVN :) [hannosch]
- Sorted textual output of the combinedchart option by language code. [hannosch]
- Clarified the 'already exists with different text' message by providing the location of the original text as well. [hannosch]
- Removed the 'Assuming rendered msgid' warning messages. These only clutter the logs but don't provide any real value. [hannosch]
- Fixed tests, so they can be run with the normal Zope testrunner. [hannosch]
- Disabled external namespace validation for find-untranslated, so you can run it without network access, which results in a major speed increase. Thx to Chuck Bearden for the patch. This closes [hannosch, encolpe]
- Remove the 'addPortalMessage' again from the list of python functions whose argument should be translated. We do these with proper MessageID's now. [hannosch]
- Exclude folders named 'tests' from subfolder scanning. Page templates and Python code from tests shouldn't be scanned for i18n tags. [hannosch]
- Fix the broken admix option. [hannosch]
- Update the usage info to reflect the removal of the silent option. [hannosch, frisi]
- Fixed yet another issue regarding whitespace and provided test for it. [hannosch, Tuttle]
- Change output of references containing '//' to conform to the poEdit format. [hannosch]
- Change output of msgstr's containing newline codes (\n) to conform to the gettext standard and specifically the poEdit format. [hannosch]
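The gettext convention this entry refers to can be sketched as follows (a minimal illustration only; `escape_msgstr` is a hypothetical helper, not i18ndude's actual POWriter code):

```python
# Minimal sketch of the gettext convention mentioned above: raw
# newlines in a msgstr are written as escaped "\n" sequences, and
# backslashes and double quotes are escaped as well.
# (escape_msgstr is a hypothetical helper, not i18ndude's POWriter.)
def escape_msgstr(text):
    """Escape a translation string for writing into a .po file."""
    return (text.replace("\\", "\\\\")
                .replace('"', '\\"')
                .replace("\n", "\\n"))

print(escape_msgstr("line one\nline two"))
```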
- Don't add fuzzy status to empty messages anymore. [hannosch]
2.0 (2005-10-09)
- Removed two unsupported scripts; if you have used them, please tell me. [hannosch]
- Refactored the sync option into a function of catalog.py and wrote a test for it, fixing a bug along the way and doing some code cleanup. [hannosch]
- Added option to specify title of chart explicitly [hannosch]
- Added combinedchart option, which is used to build the overview charts on plone.org for a comprehensive view on the status of translations. [hannosch]
- Removed the extract literals feature. If anybody needs it, please speak up. [hannosch]
- Updated visualisation.py to handle new internal catalog format. Provided basic test for it. [hannosch]
- Added new feature to PTReader. For msgid's which are already in the catalog, check if the msgstr matches or provide an error message. [hannosch]
- Fixed another issue regarding missing whitespace and provided test for it [hannosch, Tuttle]
- POWriter generates new default comments instead of the old original comments. POParser automatically converts existing original comments (# Original: "") or Zope3-style default comments (# Default: "") at the reading step to new default comments (#. Default: "").
- PTReader doesn't extract any excerpts anymore. [hannosch]
- Removed the silent option on rebuild, merge and sync completely. The addition of the added and removed sections led to an incorrect format. Use a normal diff tool if you are really interested in this information. [hannosch]
- PYReader now extracts the line numbers of messages and writes these at the end of the reference, separated with a ':' [hannosch]
- The catalog's add method doesn't add duplicate references or automatic comments anymore. Tests were updated to reflect the new behaviour. [hannosch]
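An entry carrying such a line-numbered reference would look roughly like this (file name and line number are made up for illustration):

```
#: ./browser/utils.py:42
msgid "Edit"
msgstr ""
```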
- Fixed handling of normal comments in PT- and PYReader. [hannosch]
- Always add a blank line as the last line on po's, as poEdit does it [hannosch]
- Adjusted Python parsing tests to cover the current behaviour. Added XXX comments where it is wrong [hannosch]
- Fixed an issue with an unnecessary whitespace and provided test for it [hannosch, Tuttle]
- Based all of catalog.py on the new MessageEntry class. Rewrote POWriter to use new gettext standard conform output. [hannosch]
1.0 (2005-09-02)
- This is the last release with old-style output formatting and command line options. The next release will be incompatible in many ways, so update with care. [hannosch]
- Cleaned up docs and removed some stale files [hannosch]
- untranslated.py: fixed handling of new i18n:attributes with trailing semicolon as introduced by myself ;) thx to xaNz for pointing me to it [hannosch]
- Added a new MessageEntry class to base the MessageCatalog on, added a new default_comment constant, and reformatted the Changelog as HISTORY.txt [hannosch]
- Fixed path handling in tests and wrote tests for PYParser [deo]
- Adjusted tests to pass with the new behaviour: 'unneeded literal msgids' now get added and a warning is shown [hannosch]
- Instead of only showing a warning about 'unneeded literal msgids', these now get added [tuttle]
- Moved utils.py from PloneTranslations here; removed '## X more' comments [hannosch]
- Preserve special ## comments, added tests for special comments, started PageTemplate parsing tests [hannosch]
- added test for po file writing and allowed filenames without excerpt (lines starting with #: without corresponding #. lines) [hannosch]
- added test infrastructure and tests for po file parsing [hannosch]
- Fix an issue in merge option when trying to merge two files [hannosch]
0.6 (2005-07-04)
- tagged and released 0.6 [batlogg]
- pystring: add 'addPortalMessage' to the list of python functions whose argument should be translated. This allows the new-style portal messages in Plone 2.1 to be automagically extracted [hannosch]
0.5 (2005-06-14)
- tagged and released 0.5 [batlogg]
- As HTML entities in msgstr's are bad, don't provide them in the original comments; this has confused translators. [hannosch]
- untranslated.py: Added a new handler available through a command line switch, generally ignore text in script and style tags [hannosch]
- Fixed whitespace error in generation of #, fuzzy comments [hannosch]
- i18ndude now takes a second pot to merge in the merge and rebuild-pot commands, this is useful if you have both a manual.pot and a generated.pot [hannosch]
- catalog.py (MessageCatalog): added a new method addToSameFileName() which adds a msgid but attaches the excerpt to an existing filename occurrence. This is used in i18ngenerator.py of PloneTranslations to add actions like "Edit" as msgids, adding all types they are defined for to the same occurrence rather than generating a new one for each type [hannosch]
- catalog.py (POWriter.write) takes a new argument noMoreComments that suppresses the "## xx More..." comments [hannosch]
- Also fill in the Original comments when parsing i18n:attributes. This allows for efficient handling of named i18n:attributes [hannosch]
- Refactored the English Translation stuff and renamed it to Original [hannosch]
- When syncing po-files the old and new original comments are compared, and the msgid is set to fuzzy if they aren't the same, meaning the msgid has changed and needs some verification [hannosch]
- Added an optional allcomments argument and a get_original() method. These are used in the latest PloneTranslations tests to compare the original value of a msgid with the msgstr [hannosch]
- Set chart width to 1000px. Plone has too many translations ;) [hannosch]
- Included a new option to scan python scripts for messages. Currently it looks for the _(), translate() and utranslate() functions. It is based on pygettext.py from Python and some ideas taken from Zope's extract.py. [hannosch]
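The kind of literal calls such a scanner picks up can be mimicked with a toy regular expression (an illustrative sketch only; the real extractor is based on pygettext.py, as noted above, and handles far more cases):

```python
import re

# Toy source text containing the call styles mentioned above.
SOURCE = '''
label = _("Edit")
msg = translate("Delete item")
'''

# A simplified pattern picking up literal msgids passed to _()
# or translate(); this is NOT how i18ndude actually parses Python.
PATTERN = re.compile(r'(?:_|translate)\(\s*"([^"]+)"\s*\)')
print(PATTERN.findall(SOURCE))
```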
- catalog.py (POWriter._print_entry): Added a second # to "English translation" lines so that those lines don't get written out twice. Which happens because we try to preserve comments. [dn]
0.4 (2005-05-08)
- Moved from cvs.sf.net to svn.plone.org. History was NOT migrated; check for older revisions there. [batlogg]
- Older entries can be found in the ChangeLog.
- Author: Vincent Fretin
- Keywords: Plone i18n zpt
- License: GPL
- Categories
- Package Index Owner: hannosch, nouri, vincentfretin
- DOAP record: i18ndude-3.2.2.xml
http://pypi.python.org/pypi/i18ndude/3.2.2
Bug#340393: python2.3-twisted: Twisted Mail in core package, not in description.
Package: python2.3-twisted Version: 1.3.0-8 Severity: normal Twisted Mail is in the core package, python2.3-twisted, but not mentioned in the description. E.g., apt-cache search mail does not find it. This is doubly confusing since with most other Twisted components are split out. Other
Bug#340394: xmms: consistantly segfaults when playing some specific mp3s
Package: xmms Version: 1.2.10+cvs20050209-2 Severity: normal The mp3's in question causing xmms to segfault upon play were downloaded from. Imogen_Heap_-_Hide_And_Seek_(Morgan_Page_Bootleg_Remix).mp3 and
Bug#338340: acknowledged by developer (Bug#338340: fixed in stunnel 2:3.26-5)
On 2005.11.22 at 23:33:03 -0800, Debian Bug Tracking System wrote: Source: stunnel Source-Version: 2:3.26-5 We believe that the bug you reported is fixed in the latest version of stunnel, which is due to be installed in the Debian FTP archive: Unfortunately, if this bug is closed, you've
Bug#340396: svn-buildpackage: can't generate debian/control with a pre-build hook, since dpkg-checkbuildeps will fail.
Package: svn-buildpackage Version: 0.6.14 Severity: normal Hi, ... We have a package which is in a subversion repository, and which has a debian/control.in which is used to generate the debian/control before the first source upload. Naturally using svn-buildpackage fails because of the missing
Bug#340395: amule-utils: links to libfreetype6, which is going away
Package: amule-utils Version: 2.0.3-3 Severity: grave Hi Julien, The amule-utils package currently depends on libfreetype6, but it does not use it. This dependency is being pulled in via gdlib-config --libs, which works as designed but is *not* a correct tool for getting a list of libs to link
Bug#293185: squidguard: Please use a newer version of Berkeley DB sponsor. The packages can be found at this URL:
Bug#339924: advi: same problem
Package: advi Version: 1.6.0-6 Followup-For: Bug #339924 I've just got the same problem just one day before having to do my presentation... Switching back to gs-esp 7.07.1-9 makes it work again. With the current gs-gpl, I get the following output from advi+gs: (using trans.dvi from the test
Bug#293185: squidguard: Please use a newer version of Berkeley DB
Hi Stefan, On Wed, Nov 23, 2005 at 12:14:58AM -0800, Steve Langasek wrote:
Bug#322157: removing backtick feature
I plan to remove the feature of backtick evaluation. IMO it not usefull any more. Any comments? Otherwise I will close this bug. -- regards Thomas -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
Bug#340397: postgresql-common doesn't install (and blocks other postgres packages)
Package: postgresql-common Version: 34 Severity: important postgresql-common does not upgrade correctly. When trying to update an unstable system, a postinst error (which I haven't been able to pinpoint) is returned : yod:~# dpkg --configure --debug 0442 postgresql-common D40: checking
Bug#337271: patch can't work
On Mon, Nov 21, 2005 at 10:58:22PM +0100, Thomas Lange wrote: I like to include this patch, but it can't work. The function getopt is not extended, so the new option -I will not be recognized. you're right; fixed in the corresponding svn branch... but in a certain sense it worked as intended:
Bug#340332: lftp: ls does not print files starting with . (like .htaccess).
Alexander V. Lukyanov wrote: It is a FAQ. Use: set ftp:list-options -a if the option is supported by the ftp server. I am sorry, I looked at documentation etc. Shouldn't this option be used by default? About a year ago the dot files were taken into account by default. -- Eugen
Bug#340385: segfault installing OOo hyphenation
On Tue, 2005-11-22 at 23:37 -0800, Daniel Burrows wrote: Note there appear to be two versions of the openoffice.org-dictionaries source package. There's one in main, which provides myspell-* and openoffice.org-thesaurus-en-us, 1:2.0.0-1 in unstable, and a different one in contrib
Bug#339734: openssh-server: Kerberos tickets are not saved (pam_krb5)
-BEGIN PGP SIGNED MESSAGE- Hash: SHA1 Russ Allbery wrote: Hm. That looks okay. Could you add debug to the end of the two pam_krb5.so lines and then send me the resulting log output from syslog Here it is: Nov 23 10:06:37 myhost sshd[18820]: (pam_krb5): none: pam_sm_authenticate:
Bug#340398: CVE-2005-3531: fusermount may corrupt /etc/mtab
Package: fuse-utils Version: 2.4.0-1 Severity: grave Tags: security Justification: user security hole Thomas Biege from the SuSE security team discovered that special chars such as \n, \t and \\ are misinterpreted by fusermount, which could potentially allow a user from the fuse group (or
Bug#340400: mailman: Really screwed up template = no logging?
Package: mailman Version: 2.1.5-9 Severity: normal Tried altering the templates for article.html and really screwed up the template file, but got no error/warning for mailman. Debugging the code to HyperArch.py, I see it just pass on a screwed up template. except (TypeError,
Bug#340401: libmagick9-dev: Magick-config --ldflags spits out things that aren't flags
Package: libmagick9-dev Version: 6.2.4.5-0.2 Severity: important Tags: upstream Magick-config's --ldflags option spits out things that aren't flags: $ Magick-config --ldflags -L/usr/lib -L/usr/X11R6/lib -lfreetype -lz -L/usr/lib $ If you are going to make a distinction between --libs and
Bug#340332: lftp: ls does not print files starting with . (like .htaccess). not ever enabled by default.
Bug#340397: postgresql-common doesn't install (and blocks other postgres packages)
Hello, I encountered the same problem, after some investigation it seems that the problem is in the /usr/share/postgresql-common/supported-versions script when the function lsb_debian is called it takes the output of `lsb_release -rs`, wich on my system returns 3.1 not testing/unstable the
Bug#340332: lftp: ls does not print files starting with . (like .htaccess).
Alexander V. Lukyanov wrote:
Bug#340402: procps: top fails silently if /proc not mounted, leaves terminal in bad state
Package: procps Version: 1:3.2.6-2 Severity: normal If /proc is not mounted, calling top silently returns the user to the shell, on an empty screen, with the terminal in echo off setting. It would be good if top would fail more gracefully in that case, for example with an error message, and a
Bug#332919: #332919 Still not fixed
On Tue, 2005-11-22 at 23:31 +0100, Jérôme Marant wrote: Hi, I've just noticed that this security bug has not been fixed: #332919: CAN-2005-2967: Format string vulnerability in xine-lib's CDDB response parsing Any action taken? This bug has been addressed for stable in DSA-863, it's
Bug#340374: mozilla-thunderbird: counter for unread is wrong (shows much more than unread messages exist in this folder)
On Wed, Nov 23, 2005 at 02:15:01AM +0100, Stefan Hirschmann wrote: counter for unread is wrong (shows much more than unread messages exist in this folder) I guess this is about a pop account ... again? - Alexander p.s. please take care that the bug is listed as To: or CC: when
Bug#339024: another workaround
Here is another workaround for the broken stat, that will work with both the old and new version. Avoid the stat program entirly: perl -e 'for (@ARGV) {print (((stat)[7]) . \n);}' -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
Bug#339037: open(2) man page doesn't document potentia
Von: Avery Pennarun apenwarr An: [EMAIL PROTECTED] Betreff: Re: Bug#339037: open(2) man page doesn't document potentia Datum: Thu, 17 Nov 2005 10:51:34 -0500 On Thu, Nov 17, 2005 at 03:59:58PM +0100, Michael Kerrisk wrote: It appears you *have* to open in blocking mode in order to
Bug#339804: [Pkg-alsa-devel] Bug#339804: alsa-base: Running reportbug after dpkg-reconfigure linux-sound-base
I am reassigning this back to udev on the assumption that udev is not respecting hotplug blacklist files. udev does not even know about blacklists, module-init-tools does. Recent versions of module-init-tools properly support hotplug-style blacklisting, and I have no reason to believe that they
Bug#340283: [CVE-2005-1790] DoS against Mozilla-based browsers
tags 340283 - security thanks * Florian Weimer ([EMAIL PROTECTED]) wrote: severity 340283 grave thanks * Mike Hommey: severity 340283 important thanks Until it is proven that the crash can lead to an exploit, it's not critical. A crash which can be triggered just by visiting
Bug#340306: ldapvi_1.4-1_i386.changes REJECTED
reassign 340306 title 340306 archive rejects .deb packages with any additional member severity important thanks * Jeroen van Wolffelaar [EMAIL PROTECTED] [2005-11-22 21:06]: Should != must. But you have to have a good reason to ignore it. I haven't heard any (real) reason at all
Bug#340374: mozilla-thunderbird: counter for unread is wrong (shows much more than unread messages exist in this folder)
On Wed, Nov 23, 2005 at 02:15:01AM +0100, Stefan Hirschmann wrote: Package: mozilla-thunderbird Version: 1.0.7-3 Severity: normal counter for unread is wrong (shows much more than unread messages exist in this folder) Please go to bugzilla.mozilla.org and search for 'unread count mail'
Bug#340403: dpkg-sig: please more docs inside the package
Package: dpkg-sig Version: 0.12 Severity: wishlist Hi, during the current discussion about ftp-master breaking dpkg-sig, I was asked what does dpkg-sig do in the first place. I had to look for a while to find the dpkg-sig FAQ on the web page. Please include the FAQ in the package, and write one
Bug#340404: ITP: libemail-valid-loose-perl -- Email::Valid which allows dot before at mark
Package: wnpp Severity: wishlist Owner: Krzysztof Krzyzaniak (eloy) [EMAIL PROTECTED] * Package name: libemail-valid-loose-perl Version : 0.04 Upstream Author : Tatsuhiko Miyagawa [EMAIL PROTECTED] * URL : *
Bug#340405: mutt: It does not seem at all easy to forward an email with all its attachments
Package: mutt Version: 1.5.9-2 Severity: wishlist It would be really nice if there were an option/command forward-with-attachments to forward an email including all attachments. I've tried playing with mime_forward et al, and had no success there. The only way I've been able to do it is using
Bug#301178: Error message when cannot receive mail could be useful.
On Wed, Nov 23, 2005 at 02:19:13AM +0100, Stefan Hirschmann wrote: | You still see this problem with the latest tbird? At least in TB 1.0.7 it still exist. You can test it for yourself: Chance in the accountsettings the name of the incoming POP3 server. I guess we cannot do anything with a
Bug#338561: clamav incorrectly reports that oversized zip files are virus infected
This one time, at band camp, Michael Gilbert said: Found in man 5 clamd.conf: ArchiveBlockMax Mark archives as viruses (e.g RAR.ExceededFileSize, Zip.ExceededFilesLimit) if ArchiveMaxFiles, ArchiveMaxFileSize, or ArchiveMaxRecursion limit is reached.
Bug#340298: unclear about ia64
Without further any warning/information about that other 64bit architecture intel, amd etc are producing: amd64. A *lot* of people try to use ia64 installation media to install Debian on their Opteron's etc, and then mail (for example) debian-cd that the cd is broken and doesn't boot. I am
Bug#340343: 'man adduser' typo: usefull
tags #340343 patch confirmed pending thanks On Thu, Nov 17, 2005 at 02:08:35PM -0500, A Costa wrote: Found a typo in '/usr/share/man/man8/adduser.8.gz', see attached '.diff'. Fixed in svn, thanks. Greetings Marc -- -
Bug#340406: libvte-dev: please do not export unnecessary libraries in vte.pc
Package: libvte-dev Version: 1:0.11.15-3 Severity: important Tags: upstream Hi folks, So, I suppose most of you have read about problems with packages depending on libraries that they don't use, particularly as relates to a
Bug#257163: (no subject)
Maybe it may be usefull to include a link to the FAQ of vim-latexsuite at the end of this bug report for peoples searching the solution : Q: I cannot insert the e-acute (é) character! HELP! Insert the following line
Bug#267265: icewm: same on i386 with a i740 graphic card
Package: icewm Version: 1.2.20+21pre1-3 Followup-For: Bug #267265 I'm having the same problem here on a Celeron with a i740 graphics card. The other installed window managers (fvwm, afterstep, wmaker) show the contents of windows properly, only icewm does not show the contents (except in full
Bug#329974: xlibmesa-dri: function __driUtilCreateScreen is freeing never allocated data
On Tue, 2005-11-22 at 22:35 +0100, Samuel Hym wrote: The __driUtilCreateScreen function (line 1357 and beyond...) is freeing, at the end, framebuffer.dev_priv that has never been allocated when drmOpen(NULL,BusID) fails to open for instance, which must be the case in this bug report (when
Bug#339804: [Pkg-alsa-devel] Bug#339804: alsa-base: Running reportbug after dpkg-reconfigure linux-sound-base
Marco d'Itri wrote: udev does not even know about blacklists, module-init-tools does. Recent versions of module-init-tools properly support hotplug-style blacklisting, and I have no reason to believe that they don't. OK, that's good. So I do not understand what you think my packages should
Bug#339979: Minor cosmetic problems with lastest initscripts
Package: initscripts Version: 2.86.ds1-6 Followup-For: Bug #339979 Hi, I have tried your checkroot.sh but I think that there is a mistake. In my log I find that +++ Wed Nov 23 12:03:32 2005: Done checking root file system Wed Nov 23 12:03:32
Bug#340408: esmtp: [INTL:sv] Swedish debconf templates translation
Package: esmtp Severity: wishlist Tags: patch l10n Small but important update for swedish debconf template. -- System Information: Debian Release: testing/unstable APT prefers unstable APT policy: (500, 'unstable'), (500, 'stable') Architecture: i386 (i686) Shell: /bin/sh linked to
Bug#340409: audacity: please build against wxgtk2.6
package: audacity severity: wishlist Hello! Audacity looks a little bit out-dated. Would it be possible to rebuild it against libwxgtk2.6? The Program would look better integrated into gnome then. Thanks in advance. Nice Greetings, Fabian -- Fabian Greffrath Institut für Experimentalphysik I
Bug#340410: liblist-moreutils-perl: New upstream version
Package: liblist-moreutils-perl Version: 0.10-1 Severity: wishlist Please upgrade package to new upstream version (0.16). -- System Information: Debian Release: testing/unstable APT prefers unstable APT policy: (500, 'unstable'), (500, 'stable'), (1, 'experimental') Architecture: i386 (i686)
Bug#340411: SPARC- Failed installation.
Package: installation-reports Boot method: CD Image version: Fri 18 Nov 2005 21:28:13 GMT Debian mirrors on netinstall page Date: 23 Nov 05 1041 GMT Machine: Sun Sparc Processor: Sparc 32, uname -a says Sparc unknown Memory: 64megs Partitions: Multi-user auto partitioning (now reformatted, so
Bug#174639: unable to find fonts
hi, i found quite usefoul, for looking for TTFonts on my debian system, to add this line 45 in rl_config: '/usr/share/fonts/truetype/', so my rlconfig regarding TTF is now: # places to look for TT Font information TTFSearchPath = ( 'c:/winnt/fonts',
Bug#337621: phpbb2-conf-mysql: Should remove the created database
Hello Jochen, When selected for purge the Package phpbb2-conf-mysql should ask for removal of the database created on installation. Thank you for your report. That's indeed a good suggestion, I'll combine that with implementing dbconfig-common for phpbb. bye, Thijs signature.asc
Bug#340026: Acknowledgement (unicorn-source: does no build with 2.6.14 kernel)
Sure no problem, if you think it would be easier I can also give you access to the machine as its not doing anything till it works :) I was able to compile it in testing for 2.6.8 if I remeber rightly but then due to my QoS needs I went to unstable due to 2.6.14 having most of the patches
Bug#340357: phpmyadmin: Debconf configuration request Ignored
tags 340357 wontfix sarge close 340357 4:2.6.4-pl4-1 thanks On Wednesday, 23 November 2005 00:13, James Clendenan wrote: Version: 4:2.6.2-3sarge1 I am using Apache 2, with SSL and non SSL virtual hosts. I had only wished to enable PHPMyadmin access to a limited set of hosts, however,
Bug#340412: vte: requires freetype2 to build, but doesn't build-depend on it
Package: vte Version: 2.1.10-1 Severity: serious The vte source package requires libfreetype6-dev to be available in order to build, but lacks a build-dependency on it. Instead, it appears to currently build only because the build-dep is hard-coded on the buildds. Please add this
Bug#340413: linux-2.6: Old MegaRAID driver missing in 2.6.14
Package: linux-2.6 Severity: normal The old MegaRAID driver megaraid.ko is missing in linux 2.6.14, it exists in 2.6.12. I can't make the new drivers work with the MegaRAID 428 Ultra RAID Controller. Will the old drivers come back? -- System Information: Debian Release: 3.1 Architecture: i386
Bug#340414: licq-plugin-rms: RMS plugin won't load
Package: licq-plugin-rms Version: 1.3.2-4 Severity: normal Hi, Licq plugins are installed using apt-get, with no errors. When trying to start licq with RMS plugin (-p rms) it crashes with the following error message: --- quote --- 13:12:26: [ERR] Unable to load plugin (rms):
Bug#336623: phpbb2-languages: Russian translations fixes
Hello Alexander, Hello there. I've added some translations missed in original translation from phpbb team. (That's about Visual comformation and Autologin expires with appear in ~2.0.14 and 2.0.18) Thanks for the fix. Could you please send the patch to me as an attachment, not inline
Bug#340228: [PATCH, IDE] Blacklist CD-912E/ATK
On Tue, Nov 22, 2005 at 08:26:19 +, Alan Cox wrote: The drive is clearly broken. Adding blacklist to drivers/ide/ide-dma.c for this model (CD-912E/ATK) fixes this problem. That may be the case but knowing if th drive is the problem is more tricky. By saying that drive is
Bug#340327: adduser: [INTL:pl] Polish man pages didn't get installed + updated Polish translations
tags #340327 l10n patch confirmed pending thanks On Tue, Nov 22, 2005 at 06:53:54PM +0100, Robert Luberda wrote: The Polish man pages didn't get included into the binary package, because the po4a_paths section of po4a.conf does not contain the `pl:po/pl.po' entry. Please apply the following
Bug#340415: libpam-tmpdir: pam_tmpdir is to paranoid and sets TMP='(null)/uid'
Package: libpam-tmpdir Version: 0.05-2 Severity: important I use grsecurity on my server and pam_tmpdir sets TMP='(null)/1001' - I looked at the source and came over this snippet in get_tmp_dir: /* Start paranoia checks */ if (check_path(confdir) != 0) { return NULL; } The problem
Bug#340416: gtk+2.0: builds against freetype2, but doesn't build-depend on it
Package: gtk+2.0 Version: 2.6.10-2 Severity: minor The gtk+2.0 source package uses libfreetype6-dev when building, but lacks a build-dependency on it. Instead, it relies on the fact that libpango1.0-dev depends on freetype in order for it to be detected. Since gtk+2.0 uses freetype directly,
Bug#340417: xserver-xfree86: mozilla-firefox crash after loading a specific url
Package: xserver-xfree86 Version: 4.3.0.dfsg.1-14sarge1 Severity: normal write(3, 5\30\4\0)\2 \1X\0\0\0\20\'\3\0+\1\1\0, 20) = 20 read(3, \0\vuH)\2 \1\0\0005\0\30\0\0\0\7\0\0\0\220xC\10\310\337..., 32) = 32 open(/usr/X11R6/lib/X11/XErrorDB, O_RDONLY) = 27 fstat64(27, {st_mode=S_IFREG|0644,
Bug#340418: wml/developer.wml: generates wrong markup via html_table function
Package: qa.debian.org Severity: minor Tags: patch Hello there, the code currently given generates markup such as | table width=% ... While this is only a bad parameter the very same bug screws up the page where all developers are displayed as | table80 border=1 ... thus effectively killing the
Bug#340068: patch fix confirmed
Just a note I forgot to send to say that I've compiled 2.6.14-3 for powerpc64 with this patch and it does indeed fix the problem. -- Mark Hymers [EMAIL PROTECTED] Don't you hate those Claims Direct adverts? 'I slipped on a banana skin and sued the Dominican Republic!' Linda Smith on the
Bug#340420: electric-fence: FTBFS on GNU/kFreeBSD
Package: electric-fence Version: 2.1.14 Severity: important Tags: patch Hi, the current version of electric-fence fails to build on GNU/kFreeBSD. The following simple patch fix that. It would be nice if it could be included in the next upload. Thanks in advance, Petr ---
Bug#340419: gnome-control-center: gnome-about-me has incorrect eds error
Package: gnome-control-center Version: 1:2.12.1-1 Severity: normal On a system without the evolution-data-server installed, running gnome-about-me gives the error message There was an error trying to get the addressbook information. Evolution Data Server can't handle the protocol. This is a)
Bug#340405: mutt: It does not seem at all easy to forward an email with all its attachments
* Julian Gilbey [Wed, 23 Nov 2005 10:10:29 +]: Package: mutt Version: 1.5.9-2 Severity: wishlist It would be really nice if there were an option/command forward-with-attachments to forward an email including all attachments. I've tried playing with mime_forward et al, and had no
Bug#340412: vte: requires freetype2 to build, but doesn't build-depend on it
severity 340412 minor thanks Sorry, overinflated severity -- on second glance, libfreetype6-dev is pulled in via a build-dependency on libgtk2.0-dev (- libpango1.0-dev - libfreetype6-dev), so this doesn't currently impact the package's buildability, inside or outside of the buildds. For
Bug#340423: debmirror: add expected/known archive sizes to documentation
Package: debmirror Version: 20050207 Severity: wishlist Hi, a complete mirror of a debian distribution will take a lot of harddisk space, but I could not find anything about the exact size info. It would be nice to know how much space current distributions (woody/sarge) are requiring to
Bug#231806: Bug #231806: Explain why we don't package findsmb, smbtar, etc. in samba?
On Sat, Nov 19, 2005 at 08:24:00AM +0100, Christian Perrier wrote: Quoting Chris M. Jackson ([EMAIL PROTECTED]): On 11/18/05, Christian Perrier [EMAIL PROTECTED] wrote: The bug submitter in #231806 suggest that we at least document why scripts in sources/scripts are *not* packaged in
Bug#340424: elvis: FTBFS on GNU/kFreeBSD
Package: elvis Version: 2.2.0-3 Severity: important Tags: patch Hi, the current version of elvis fails to build on GNU/kFreeBSD. The simple fix bellow fix that. It would be nice if it could be included in the next upload. Thanks in advance, Petr --- configure~ 2005-11-23
Bug#340425: rss2email: man page doesn't document [num] parameter (r2e run)
Package: rss2email Version: 1:2.55-4 Severity: minor $ r2e ... run [--no-send] [num] ... $ man r2e ... run [--no-send] ... -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
Bug#339136: stat behavior change
On Wed, Nov 23, 2005 at 01:24:08PM +0900, Junichi Uekawa wrote: However, from an upstream software POV, this is a nightmare. I'll need to check for stat version before giving it a format string, possibly by checking it in configure.ac Yeah, the problem is that if upstream doesn't acknowledge
Bug#340426: RFP: openmpi -- A high performance message passing library
Package: wnpp Severity: wishlist * Package name: openmpi Version : 1.0 Upstream Author : Open MPI Development Team * URL : * License : BSD Description : A high performance message passing library Open MPI is a project combining
Bug#340427: arts - lists mailing list as uploader in changelog
Package::
Bug#340428: octave2.9 - lists mailing list as uploader in changelog:
Bug#340429: mkisofs: incorrect volume size on multisession disk
Package: mkisofs Version: 4:2.01+01a01-2 Severity: normal I wrote two sessions on a CD-R. All files are recorded and can be read back well but statfs64() system call reports the size of the last session only: $ df /mnt/cdrom Filesystem 1K-blocks Used Available Use% Mounted on
Bug#327324: svn-arch-mirror: breaks with svn 1.2.0 (new upstream version + debian package fixes it)
Eric Wong wrote: Gustavo Franco [EMAIL PROTECTED] wrote: Hi Eric, Is that breakage in 0.4.1-1 producing the problem below ? No, it's been a problem with all versions of svn-arch-mirror, and svn itself when working inside a subdirectory, too. Looking at the svn log, the directory you're
Bug#340430: libboost-program-options-dev: positional_options_description dtor abort
Package: libboost-program-options-dev Version: 1.33.0-3 Severity: important test-pd.cpp works with g++-3.3(with warning) but not with g++-4.0. I think it's boost's bug, not g++'s $ cat test-pd.cpp #include boost/program_options.hpp int main() { namespace po = boost::program_options;
Bug#340431: genesis: FTBFS on GNU/kFreeBSD
Package: genesis Version: 2.2.1-5 Severity: important Tags: patch Hi, the current version of genesis fails to build on GNU/kFreeBSD. Please find attached patch to fix that. It would be nice if it could be included in the next upload. Thanks in advance, Petr only in patch2: unchanged:
Bug#340432: linphone: [INTL:sv] Swedish PO-template translation
Package: linphone Severity: wishlist Tags: patch l10n Here is the swedish translation of linphone. This has also been sent to Simon Morlat (upstream author) -- System Information: Debian Release: testing/unstable APT prefers unstable APT policy: (500, 'unstable'), (500, 'stable')
Bug#340433: Some non-free FLUKA code still present in source package and libgeant* binaries
Package: cernlib Severity: serious Justification: non-free code Original Message Subject: FLUKA/Cernlib licensing questions Date: Wed, 23 Nov 2005 12:38:30 +0100 (CET) From: Alfredo Ferrari [EMAIL PROTECTED] Reply-To: Alfredo Ferrari [EMAIL PROTECTED] To: Kevin B. McCarty [EMAIL
Bug#266824: Subbird 0.3alpha1
Hi Alexander, any chance you package this up anytime soon? Would be great to have this in the archive. Cheers, -- Guido signature.asc Description: Digital signature
Bug#305361: please add the option --timeout n (NAT)
Am 2005-11-19 20:51:09, schrieb Thierry Godefroy: For Michelle: You may get this pre-release version of Xdialog and have a look at how well it suits your needs (--timeout implemented, hopefully in a dialog-compatible manner...).
Bug#340434: vlc adds broken mailcap entries
Package: vlc Version: 0.8.1.svn20050314-1 Severity: normal Hi! In helping me tracking down problems with exmh and mailcap, Alex Zangerl found out that vlc adds broken entries to /etc/mailcap[0]. From mailcap(5): Each individual mailcap entry consists of a content-type specification, a
Bug#266824: Subbird 0.3alpha1
On Wed, Nov 23, 2005 at 01:50:12PM +0100, Guido Guenther wrote: Hi Alexander, any chance you package this up anytime soon? Would be great to have this in the archive. Cheers, -- Guido Is there a first official alpha release available? Please understand, I don't like the idea to send some
Bug#340349: RFA: openldap2.2 -- OpenLDAP server (slapd)
Russ Allbery wrote: Torsten Landschoff [EMAIL PROTECTED] writes: I am very sorry but I'll have to give away maintainership of OpenLDAP as currently I can't devote the time which is needed to maintaining it. I know this is really late and I regret not doing this step earlier. I can't
Bug#266824: Subbird 0.3alpha1
On Wed, Nov 23, 2005 at 02:08:51PM +0100, Alexander Sack wrote: On Wed, Nov 23, 2005 at 01:50:12PM +0100, Guido Guenther wrote: Hi Alexander, any chance you package this up anytime soon? Would be great to have this in the archive. Cheers, -- Guido Is there a first official alpha
Bug#340376: libcommandline-ruby1.8: incomplete doc-base files break installation
Hi Aaron, On Tue, Nov 22, 2005 at 08:56:10PM -0500, Aaron M. Ucko wrote: Package: libcommandline-ruby1.8 Version: 0.7.10-1 Severity: serious Justification: Policy 9.10 (doc-base section 2.3) [...] It is mandatory to specify Files even if the only relevant file is the one already specified
Bug#340435: find -mindepth option after a non-option argument -type
Package: dbs Version: 0.36 Severity: minor Hi, /usr/share/dbs/lib has: files=`find -type f -maxdepth 1 -mindepth 1` dirs=`find -type d -maxdepth 1 -mindepth 1 ! -name 'debian' ! -name 'upstream'` find barks: find: warning: you have specified the -mindepth option after a
Bug#340436: initrd-tools: Reboot with disk failure fails due to incorrect mdadm params in initrd
Package: initrd-tools Version: 0.1.81.1 Severity: important /usr/sbin/mkinitrd writes to initrd a file called script that enable the md used for the root fs. This script fails to create the md because of incorrect parameters passed to mdadm. The parameter passed to mdadm are: mdadm -A $device
Bug#340437: Transcode DEPENDS error
Package: transcode version: 2:0.6.14-0.7 Depends on libmagick6, which is not available to install. -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
Bug#340438: CVE-2005-3665: Cross-site scripting by trusting potentially user-supplied input.
Package: phpmyadmin Version: 4:2.6.2-3sarge1, 4:2.6.4-pl4-1 Severity: critical The patch by Martin Schulze in attachment. -- .''`.Piotr Roszatycki, Netia SA : :' :mailto:[EMAIL PROTECTED] `. `' mailto:[EMAIL PROTECTED] `- Cross-site scripting by trusting potentially user-supplied
Bug#309511: postgrey -- volatile
Hi Sven, Sorry for making a fuss. Sometimes bitching on prominent mailing lists does have an effect... I'm now working with mzh on getting portgrey into postgrey, so just ignore my email from last week. cheers -- vbi -- The man who raises a fist has run out of ideas. --
Bug#340298: unclear about ia64
On Wed, Nov 23, 2005 at 06:00:48PM +0800, Randolph Chung wrote: Without further any warning/information about that other 64bit architecture intel, amd etc are producing: amd64. A *lot* of people try to use ia64 installation media to install Debian on their Opteron's etc, and then mail (for
Bug#340439: firestarter: denial of service
Subject: firestarter: denial of service Package: firestarter Version: 1.0.3-1.1 Severity: grave Justification: causes non-serious data loss *** Please type your report below this line *** when reconfiguring the firewall (from menu - run wizard) and finally pressing save button, the gui interface
Bug#340440: Bad Server Response leads to Cannot get POP
Package: evolution Version: 2.4.1-3 Severity: important I am trying to read my mail from a remote server. I ssh tunneled the pop server to a local port and configured evolution to use POP to access this local port. I get the following message: Cannot get POP Summary: Success. This is perhaps
Bug#340005: licq: confirmation
Package: licq Version: 1.3.2-4 Followup-For: Bug #340005 i can confirm this bug. licq shows exactly the same behaviour here. -- System Information: Debian Release: testing/unstable APT prefers unstable APT policy: (500, 'unstable') Architecture: i386 (i686) Shell: /bin/sh linked to
Bug#340404: ITP: libemail-valid-loose-perl -- Email::Valid which allows dot before at mark
On Wed, Nov 23, 2005 at 11:06:32AM +0100, Krzysztof Krzyzaniak (eloy) wrote: * Package name: libemail-valid-loose-perl Version : 0.04 Upstream Author : Tatsuhiko Miyagawa [EMAIL PROTECTED] * URL : * License
Bug#340441: Please prune .svn dirs
Package: dbs Version: 0.36 Severity: wishlist Tags: patch Hi, We're keeping the dbs package libgail-gnome under SVN, but only for the debian/ part. It would be nice if the find expressions used in the dbs lib shell script would really prune .svn directories and file under these. I
Bug#324358: preview-latex with emacs-snapshot
That is actually _not_ the future. The future would be AUCTeX-11.81 (well, in a few days we should have 11.82) out which would include preview-latex. I have no idea how the AUCTeX and preview-latex Debian maintainers are planning to deal with it. Personally, I don't think it makes much sense
Bug#340286: phpmyadmin: configure script depends on apache, but apache 2 should work allso
tags 340286 moreinfo severity 340286 normal thanks Dnia Tuesday 22 of November 2005 13:10, Wilfried Goesgens napisał: dpkg --pending --configure Setting up phpmyadmin (2.6.4-pl4-1) ... Error: apache appears not to be installed dpkg: error processing phpmyadmin (--configure): subprocess
Bug#340442: cl-swank: Removing package leaves variable slime-backend set
Package: cl-swank Severity: normal I encountered problems using the SLIME package with SBCL. Then I wanted to switch to CVS Slime by removing the slime and cl-swank packages. Strangely, this did not work, because even after emacs -q the variable slime-backend is still set to
|
https://www.mail-archive.com/search?l=debian-bugs-dist%40lists.debian.org&q=date:20051123&o=newest
|
CC-MAIN-2019-43
|
refinedweb
| 5,585
| 62.58
|
(14)
Mohammad Elsheimy(9)
Mahesh Chand(5)
Mike Gold(4)
Dhananjay Kumar (4)
Prabhakar Maurya(3)
Dipal Choksi(2)
Jigar Desai(2)
Scott Lysle(2)
Nikhil Kumar(2)
Jean Paul(2)
Amit Choudhary(2)
Krishna Garad(2)
Gaurav Gupta(2)
Aman Singhal(2)
Shivani (1)
Ashish Banerjee(1)
Vijay Cinnakonda(1)
Thomas Regin(1)
Subramanian Veerappan(1)
Mohan Kumar Rajasekaran(1)
Rob (1)
Rehan Ahmad Abbasi(1)
Kalyan Bandarupalli(1)
Mamta M(1)
Shivprasad (1)
Sateesh Arveti(1)
Rifaqat Ali(1)
Freddy Mounir(1)
Pradeep Chandraker(1)
Ashish Shukla(1)
Srihari Chinna(1)
Harshit Vyas(1)
Charles Petzold(1)
Nipun Tomar(1)
Jaish Mathews(1)
Vishal Nayan(1)
Gomathi Palaniswamy(1)
Valon Ademi(1)
Rohatash Kumar(1)
Mahadesh Mahalingappa(1)
Bhushan Gawale(1)
Satyapriya Nayak(1)
Karthikeyan Anbarasan(1)
Vikas Mishra(1)
Rahul Ray(1)
Vijay Prativadi(1)
Vineet Kumar Saini(1)
Aravind BS(1)
Deepak Dwij(1)
Amit Maheshwari(1)
Abhishek Dubey(1)
Abhimanyu K Vatsa(1)
Ravish Sindhwani(1)
Sachin Kalia(1)
Jaganathan Bantheswaran(1)
Akkiraju Ivaturi(1)
Vishal Kulkarn.
Wireless Model : How Does It Work?
May 09, 2001.
The browser sends an HTTP request to the Web server, which interprets the request and determines which resources to retrieve or execute. If the URL specifies a file, then the server sends it back.
A Simple C# Utility to Help You Invent Names
Jul 10, 2001.
I wrote this simple console utility to help me think of a new name for a project I was launching.
Web services with Language Interoperability
Oct 18, 2001.
A web service, in general, is a way of exposing properties and methods through the Internet. In other words, it's a URL-addressable resource that programmatically returns information to clients who want to use it.
Invoking Unmanaged DLL Functions from Compact Framework for Pocket PC
Jan 04, 2003.
In this example we will use the Compact Framework to create a program containing a launch pad for the Pocket PC.
Link Fetcher Service
Mar 06, 2003.
In this article we will learn how to create a Web Service that fetches all the links from a given URL.
Add a Quick Map to a Windows Application
Mar 15, 2007.
This project demonstrates a quick and easy way to add mapping to a windows desktop application (with an available internet connection) using Google Maps as the basis and source for the map.
Search Engine Optimization (SEO) & friendly URL
Nov 20, 2007.
This article describes one of the techniques commonly used for improving SEO: creating friendly URLs.
Access the Same Instance of Internet Explorer Window
Jun 26, 2008.
This tip shows how to open a URL inside an Internet Explorer browser window from your C# application. Furthermore, it shows how to update that specific window.
Attaching a Digital Certificate (Public Key) to an HTTPS Request
Aug 10, 2008.
This article will guide you on how to post data to an HTTPS (i.e., secure connection) URL from a Windows application (.NET) by attaching a digital certificate from a certificate file and getting the response back.
Windows Forms Events Lifecycle
Nov 19, 2008.
This article describes the standard events that take place when a form is created and launched and shows the sequence in which they are raised.
Introduction to Dynamic Data Web Application Model: Part II
Jan 15, 2009.
This article explains about URL Routing and Inline Editing of Dynamic Data Web Application.
Changing the DNS or URL (host header) in SharePoint 2007 Site
May 05, 2009.
In this article we will see how to map the sharepoint site with public IP using Alternate Access Mappings.
How to Make More Than One URL Link to Your SharePoint Site
Jan 18, 2010.
In this article you will learn how to make more than one URL link to your sharepoint site
Debugging Silverlight 4 Out of Browser Application
Apr 29, 2010.
In this article you will learn how we can configure Silverlight application to run Out of Browser and debug when application is launched in Out of Browser window.
Routing in ASP.NET4
May 11, 2010.
Routing allows us to build friendly URLs by decoupling the URL of the HTTP request from the physical path of the web form that serves the request.
Step by Step walk-through on URL routing in ASP.Net 4.0
Aug 18, 2010.
URL Routing is new feature in ASP.Net 4.0. This article will give a walkthrough on how to work with URL Routing and an introduction of.
Consuming URL Shortening Services - Introduction
Aug 24, 2010...
Consuming URL Shortening Services - Cligs
Aug 30, 2010.
This is another article that talks about URL shortening services. Today we are going to talk about Cligs, one of the popular shortening services on the web.
Introduction to ASP.NET URL Rewriting with HttpHandler
Sep 06, 2010.
In this article you will learn how to use ASP.NET URL Rewriting with HttpHandler.
Getting Started with Microsoft Expression
Sep 14, 2010.
Recently, Microsoft launched a new product called Microsoft Expression, which allows developers to create interactive graphics and much more. This article gives you a head start on Expression tools.
URL Rewriting in ASP.Net
Sep 24, 2010.
In this article we will see a method for URL Rewriting in ASP.Net.
Handling exception in SharePoint 2010 object model
Nov 15, 2010.
In this article I will show how to handle the exception "Web application not found" in SharePoint 2010 object model.
Multitasking or Tombstoning and Isolated Storage in Windows Phone 7
Nov 26, 2010.
Windows Phone 7 manages multiple active applications by implementing a stack. In a sense, this application stack extends the page stack within a single Silverlight program. You can think of the phone as an old-fashioned web browser with no tab feature and no Forward button. But it does have a Back button and it also has a Start button, which brings you to the Start screen and allows you to launch a new program.
SharePoint 2010 Document Library Create/Edit Title and change Address/URL
Dec 17, 2010.
In this article you will learn how to use SharePoint 2010 Document Library Create/Edit Title and change Address/URL.
Consuming URL Shortening Services – 1click.at
Dec 26, 2010.
This article is talking about the 1click.at shortening service; how you can use it and how to access it via your C#/VB.NET application.
Using WebClient Class in .NET
Jan 10, 2011.
Here I am explaining various ways to handle the response from a URL and display that response in a browser.
Enable Self Service Site creation in SharePoint 2010
Feb 02, 2011.
In this article we will be seeing how to enable “Self-Service Site creation”, which is used to enable users to create their own site collections at a specified URL namespace in SharePoint 2010.
Website Recursive Url Parser
Mar 28, 2011.
In this article I am trying to share a piece of code that might be useful to some of the developers.
Quick Launch Navigation for SharePoint Publishing Sites
Apr 25, 2011.
In this article we will be seeing how to add a heading or link and how to delete the heading or link from the SharePoint publishing site quick launch.
Hour 2: Understanding 5 ASP.NET State Management Techniques in 5 Hours
Apr 28, 2011.
A simple look at 5 ASP.NET state management techniques in 5 hours.
Cross Domain AJAX Request Using JQuery
May 17, 2011.
Cross-domain AJAX requests using jQuery, loading RSS feeds asynchronously from a FeedBurner URL into your website.
Configuring an RSS Viewer Web Part
Jun 07, 2011.
In this article we will be seeing how to obtain the URL for an RSS feed from the SharePoint list and how to configure a RSS viewer web part for it.
Hyperlink in C#
Jun 16, 2011.
The hyperlink is the control used to link to another page. A hyperlink can navigate to a URL as well as to a XAML page.
REST API in SharePoint 2010 for Excel Services: Part 2
Jun 20, 2011.
In this article we will be seeing how to access the Charts and PivotTable using REST URL and how to add the chart to the SharePoint Wiki page.
REST API in SharePoint 2010 for Excel Services: Part 1
Jun 20, 2011.
The REST API in Excel Services is new in Microsoft SharePoint Server 2010. REST API is used to access workbook parts or elements directly through a URL.
Programmatically create Managed Paths in SharePoint 2010
Jun 23, 2011.
Managed Paths - We can specify which paths in the URL namespace of a Web application are used for site collections. We can also specify that one or more site collections exist at a specified path.
Base Tags in HTML5
Jul 11, 2011.
A Base tag is usually used to set a default (base) URL for all subsequent relative links.
URL Rewriting in ASP.NET using C#
Jul 20, 2011.
URL rewriting is very important when you are running a community website where users post articles and forum messages.
Quick Steps to URL Rewriting in Asp.net 4.0
Aug 25, 2011.
In this article you will learn URL Rewriting in Asp.net 4.0.
Getting URL of Current Page in Windows Phone Dynamically Through Code
Sep 13, 2011.
In this article you will learn how to get the URL of the current page in Windows Phone dynamically through code.
Create Social Comments in SharePoint 2010 using PowerShell
Nov 02, 2011.
In this article you will be seeing how to create Social Comments for any specified URL in SharePoint 2010 using Powershell.
Remove the My Site Host Location URL in SharePoint 2010
Nov 21, 2011.
In this article you will see how to remove the My Site Host location URL in SharePoint 2010 using PowerShell and the SharePoint object model.
URL (Uniform Resource Locator) Rewriting
Dec 22, 2011.
This article demonstrates the complete URL rewriting concept using regular expression and set up the predefined rules. The article also demonstrates the post back issues in ASP.NET while requesting to the virtual path.
Using Multiple Endpoints in WCF Hosted on Web App
Dec 25, 2011.
Today, in this article, let’s see how to create multiple endpoints using a WCF service; later we will try to host it on a web application.
How to Get All The Sub Web Sites Using SharePoint 2010 Web Service in Powershell
Jan 03, 2012.
In this article you will see how to get all the titles and URLs of all sites within the current site collection using SharePoint 2010 web service in powershell.
How to Launch Call Task From Secondary Tile in Windows Phone 7
Jan 06, 2012.
In this article we will learn how a call task is launched from a Secondary Tile in Windows Phone 7.
Hosting WCF Service Under a Local IIS
Jan 28, 2012.
In this article we will see how to host a WCF service under IIS (Internet Information Services).
Add URL in Navigation Menu in Visual Studio LightSwitch 2011
Feb 01, 2012.
In this article you will see how to add a URL to the navigation menu of a LightSwitch application.
Brief Introduction to MVC3
Feb 01, 2012.
In this article I provide a brief Introduction to MVC3.
Shaded Ball in Random Colors Using HTML 5
Feb 05, 2012.
In this article we are going to build a shaded ball using HTML 5. In this application you will see that the shaded ball changes color each time you hit the URL again in the browser.
Create a Custom Email Validator Control in Windows Phone 7
Feb 07, 2012.
In this article we are going to explain how to create a custom email validator in Windows Phone 7.
Multithreaded Sockets (Multithreaded Server) and Working With URL Class
Feb 22, 2012.
In this article we describe the basic need for creating a multithreaded server, as well as the URL class and its methods. We also give examples of a multithreaded server and of the URL class's methods.
Get and Load Canvas Image DataURL
May 08, 2012.
In this article we will learn how to get an image data URL within a canvas and how to load the image data URL of the canvas.
Reading Files From Given Specific URL Using WebClient
Jul 07, 2012.
This article is about reading files from a given specific URL using WebClient.
How to open command prompt in Windows 8
Sep 02, 2012.
How to launch a Windows command prompt.
Create HTML Report on Facebook Urls Using PowerShell
Oct 12, 2012.
In this article we can explore the addition of a custom User Profile Property and generating a report based on it.
Consuming URL Is.gd Shortening Services in VB.NET
Nov 09, 2012.
This article is talking about is.gd shortening service, how you can use it, and how to access it via your VB.NET application.
Consuming URL Shortening Services in VB.NET
Nov 09, 2012...
Add a Google Map to a VB Desktop Application
Nov 10, 2012.
This project demonstrates a quick and easy way to add mapping to a windows desktop application (with an available internet connection) using Google Maps as the basis and source for the map.
About Launching-a-URL.
|
http://www.c-sharpcorner.com/tags/Launching-a-URL
|
CC-MAIN-2016-30
|
refinedweb
| 2,214
| 65.62
|
Re: Please Explain where will the struct be stored if it is declared inside the Class
- From: "Bruce Wood" <brucewood@xxxxxxxxxx>
- Date: 6 Jun 2005 10:36:11 -0700
OK... forget about structs for a second. Let's talk only about native
types (like ints, longs, doubles, and floats) and classes.
There are only two places that things are stored in a running .NET
program. (Actually, as Willy pointed out, there are three: static
things are stored elsewhere, but we'll ignore those for now.) Something
can be stored either on the stack, or on the heap.
First, think about the stack. What is it? It is a place to store the
local variables that you declare in methods, and arguments to method
parameters. So, if you have code like this:
public decimal PowerOf(decimal value, int power)
{
    decimal result = 1;
    for (int i = 0; i < power; i++)
    {
        result *= value;
    }
    return result;
}
....
decimal x = PowerOf(2, 8);
Yes, I realize that this is a cheesy example (and the method doesn't
even work for negative powers), but take a look at what's going on here
with respect to the stack.
In the main program, down below, x is allocated on the stack because
it's a local (non-static) variable. 2 and 8 are copied onto the stack,
because they're arguments to PowerOf. Within PowerOf, result and i are
also allocated space on the stack because they're local variables. The
return value from PowerOf is also copied onto space allocated on the
stack, and then copied from that space into the variable x, which as
you recall was allocated space on the stack.
So, in this example, everything is happening on the stack. The heap
isn't involved at all.
A struct would act exactly the same as any of these decimals and ints.
A struct variable declared as a local (non-static) variable would be
allocated space on the stack. A struct passed to a method as an
argument would be _copied_ onto the stack (just as 2 and 8 were copied
onto the stack).
Now let's look at what happens with classes and the heap:
public class MyClass
{
    private int classInt = 0;
    private decimal classDecimal = 15.0m;

    public SomeOtherClass DoSomeStuff(YetAnotherClass yetAnotherInstance)
    {
        SomeOtherClass someOtherInstance = new SomeOtherClass();
        ...
        return someOtherInstance;
    }
}
....
MyClass mine = new MyClass();
SomeOtherClass other = mine.DoSomeStuff(new YetAnotherClass());
Again, a silly method and a silly call, but let's look at what's going
on.
The first thing that happens is that a new instance of MyClass is
created _on the heap_, and a _reference_ to that instance (the space
allocated for the class information on the heap) is saved in "mine",
which is allocated _on the stack_ because it's a local variable. So
what was saved on the heap? Well. every instance of MyClass contains
two class members, "classInt" and "classDecimal". So, space for an int
and a decimal was reserved on the heap, those variables were
initialized, and a reference to that heap location (a pointer, if you
will) was saved in the variable "mine", which is located on the stack.
Every time we create a new instance of MyClass, space for yet another
int and decimal will be created on the heap.
Notice, however, that everything on the heap eventually comes back to
the stack. What is stored on the stack in the case of class instances
are _references_ or _pointers_ to the heap location where the class
information is stored.
The next thing that happens is that we create an instance of
YetAnotherClass in order to pass it to mine.DoSomeStuff. The instance
of YetAnotherClass is allocated space on the heap... space for whatever
class members it declares (we can't see the declaration, so we don't
know how much space it needs). Then, a reference to that instance of
YetAnotherClass is placed on the stack as an argument to DoSomeStuff.
DoSomeStuff creates a new instance of SomeOtherClass. Again, space for
whatever members SomeOtherClass defines is reserved on the heap, and
the members of SomeOtherClass are initialized into that heap space. A
reference (or pointer) to this instance is stored in someOtherInstance
on the stack (because someOtherInstance is a local variable).
When DoSomeStuff returns, it copies the reference (pointer) to the
instance of SomeOtherClass into space allocated on the stack for its
return value. Back in the main program, this reference is copied into
the variable "other", which has space reserved on the stack (because
it's a local variable to the main program). So, we're left with "other"
containing a reference (pointer) to an instance of SomeOtherClass,
which stores its members in space on the heap.
So, now, what about structs? Well, if we were to change classInt and
classDecimal to user-defined struct types, _nothing would change_. When
space was allocated for MyClass on the heap, .NET would allocate enough
space to hold all of the information for the two structs, just as
though they were ints or decimals. It would initialize the space for
those structs with some initial values, just as you would initialize an
int or a decimal, and then it would put a reference to the MyClass
instance's heap space (which contains the information for the two
structs) on the stack. Again, all that goes on the stack in this case
is a simple reference (pointer) to the information on the heap.
As I said: structs act exactly like ints, doubles, floats, or decimals.
When passed as arguments to methods they are copied onto the stack.
When returned from methods, they are returned on the stack. When you
have local variables of a "struct" type, space for the entire struct's
information is allocated on the stack. When you assign them from one
variable to another, they are copied.
When a struct forms part of the information (the "state") for a class
instance, space for that struct is allocated on the heap along with
(and in the same memory as) the ints, doubles, and decimals that make
up the rest of the class's state information.
If you can understand how basic types like ints and decimals are
treated by the compiler and the CLR, then you understand how structs
are treated: exactly the same.
|
http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.languages.csharp/2005-06/msg01078.html
|
crawl-002
|
refinedweb
| 1,111
| 69.21
|
How do I change the figure size for a seaborn plot?
How do I change the size of my image so it's suitable for printing?
For example, I'd like to use to A4 paper, whose dimensions are 11.7 inches by 8.27 inches in landscape orientation.
You need to create the matplotlib Figure and Axes objects ahead of time, specifying how big the figure is:
from matplotlib import pyplot
import seaborn
import mylib

a4_dims = (11.7, 8.27)
df = mylib.load_data()
fig, ax = pyplot.subplots(figsize=a4_dims)
seaborn.violinplot(ax=ax, data=df, **violin_options)
From: stackoverflow.com/q/31594549
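A self-contained variant of the answer above, for reference. Random data stands in for the hypothetical `mylib.load_data()` and `violin_options`, and the Agg backend is selected so it runs without a display:

```python
import matplotlib
matplotlib.use("Agg")               # headless backend; no display required
from matplotlib import pyplot
import numpy as np
import seaborn

a4_dims = (11.7, 8.27)              # A4 landscape, in inches
data = np.random.RandomState(0).randn(200)   # stand-in for mylib.load_data()

# Create the Figure at the desired size, then hand its Axes to seaborn.
fig, ax = pyplot.subplots(figsize=a4_dims)
seaborn.violinplot(ax=ax, data=data)
fig.savefig("violin_a4.png", dpi=150)
```

The key point is that the size is fixed when the Figure is created; seaborn's axes-level functions simply draw into whatever Axes you pass them.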
|
https://python-decompiler.com/article/2015-07/how-do-i-change-the-figure-size-for-a-seaborn-plot
|
CC-MAIN-2019-47
|
refinedweb
| 102
| 61.93
|
Breakfast under Bill - A look at my morning on the front page of Hacker News
Tuesday night I wrote a short blog post about how I used python to find cheap tickets to a music festival. I finished up pretty late so I decided to post it online the next morning. I woke up pretty early and posted the article on a few websites around seven. I started watching my google analytics page and the hits started coming in very fast, much faster than normal. First it was twenty, then thirty, and shortly after fifty people were reading within minutes of submitting. I looked at the map and most of the hits were from Europe. It must have been the lunch hour crowd. I navigated over to Hacker News planning to check the new section and see if I had gotten any upvotes, and to my surprise I saw my article at number five on the front page. I couldn't believe my eyes and refreshed, and it had already moved up a spot. At its peak it hit the number two spot; I couldn't believe a short article on something pretty trivial had made the front page. It spent most of the morning fluctuating around the fifth spot, a good portion of the time under Bill Gates' blog post on developing genetically modified bananas.
By the end of the day my blog dashboard said I had hit fourteen thousand hits. However, Google Analytics showed a somewhat smaller number. I am not sure what caused this difference, but I decided to export the data and take a look at what had happened throughout the day. The first thing I looked at was views throughout the day.
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline

views = pd.read_csv('/Users/danielforsyth/Desktop/views_tues.csv')
views['hour'] = pd.to_datetime(pd.Series(views['hour']))
views.set_index('hour', drop=False, inplace=True)
pd.options.display.mpl_style = 'default'

from matplotlib import rcParams
rcParams['figure.figsize'] = (18, 6)
rcParams['figure.dpi'] = 150

views.plot(colormap='Spectral')
As you can see, the views increase dramatically early on and then, after peaking around two thousand at nine, slowly drop as the post moved further down the page. To get an idea of how much traffic I received compared to normal, I created a graph of all traffic since I created my blog.
total = pd.read_csv('/Users/danielforsyth/Desktop/views_all.csv')
total['Day'] = pd.to_datetime(pd.Series(total['Day']))
total.set_index('Day', drop=False, inplace=True)
total.plot()
The spikes occur on the days I had posted articles. The most views I had gotten in a single day previously was two thousand, the same number I got within an hour while on the front page. Lastly I took a quick look at where people were viewing from and what browser they were using.
There were views from one hundred and fourteen countries. I listed a few of them below with a bar chart of the top ten, I did the same for browsers.
countries = pd.read_csv('/Users/danielforsyth/Desktop/countries.csv')
countries.set_index('Country', drop=False, inplace=True)

top10 = countries.head(10)
top10.plot(kind='bar', rot=90)

browser = pd.read_csv('/Users/danielforsyth/Desktop/browsers.csv')
browser.set_index('Browser', drop=False, inplace=True)

btop10 = browser.head(10)
btop10.plot(kind='bar', rot=90)
As you can see, making the front page of HN provided an insane amount of traffic. I believe I mostly got lucky by posting at the right time, but I received a lot of great feedback and messages, which provided great motivation to keep working on cool things. If you have any questions, feedback, advice, or corrections please get in touch with me on Twitter or email me at danforsyth1@gmail.com.
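The same kind of hourly analysis can be sketched without the private CSV exports; the numbers below are synthetic stand-ins for the real analytics data:

```python
import pandas as pd

# Synthetic stand-in for the exported analytics CSV: views per hour.
views = pd.DataFrame({
    "hour": pd.date_range("2014-07-16 07:00", periods=4, freq="H"),
    "views": [350, 1200, 2000, 1400],
}).set_index("hour")

# Find the hour with peak traffic, as in the post's first chart.
peak_hour = views["views"].idxmax()
print(peak_hour, views["views"].max())
```

With a datetime index in place, `views.plot()` produces the same kind of time-series chart shown above.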
|
https://www.danielforsyth.me/breakfast-under-bill-a-look-at-my-morning-on-the-front-page-of-hacker-news/
|
CC-MAIN-2022-05
|
refinedweb
| 628
| 64.91
|
hello,
thanks for the explanation of why it's that way.
Any ideas of a work around?
python2.5 has been out for ages now. Even if it was an accident, it's
the behavior people expect, and it's still a regression.
Also, why should it matter if a module is a package or a module?
Note how pygame.tests has a type of module, and not of package:
>>> import pygame.tests
>>> type(pygame.tests)
<type 'module'>
Even though it is a package, python calls its type a module. This has
been true for a long time (at least as far back as python2.3).
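The same thing can be shown with any throwaway package built on the fly (`mypkg` here is an illustrative name, not part of pygame):

```python
import os
import sys
import tempfile
import types

# Build a minimal package on disk: a directory containing an __init__.py.
d = tempfile.mkdtemp()
os.makedirs(os.path.join(d, "mypkg"))
open(os.path.join(d, "mypkg", "__init__.py"), "w").close()

sys.path.insert(0, d)
import mypkg

print(type(mypkg))                # packages are reported as plain modules
print(hasattr(mypkg, "__path__")) # but only packages carry __path__
```

So at the type level Python draws no distinction; the only runtime marker of package-ness is the `__path__` attribute.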
Because it's a regression, I think this bug should be reopened.
To illustrate why it causes problems, here is part of the documentation
mentioning the __main__.
"""
You can do a self test with:
python -m pygame.tests
Or with python2.6 do:
python -m pygame.tests.__main__
See a list of examples...
python -m pygame.examples
Or with python2.6,
python -m pygame.examples.__main__
"""
It's twice as long, and I doubt anyone will remember the __main__ part.
People used to running their programs with -m now have broken programs
with python2.6.
Having said all that, maybe there is a work around...
One work around might be to make it into a module-module, not a
package-module. Then have the module-module load the package-module
into its namespace. I haven't tested that yet, but it might work. Will
have to go through a round of testing to see how that works out. Will
write back when I've found out the issues with that approach.
cheers,
|
https://bugs.python.org/msg90375
|
CC-MAIN-2022-05
|
refinedweb
| 278
| 78.55
|
I'm in the process of cleaning up my engine code. It has gotten quite messy due to hacking in this or that and now I want to clean all that up.
What is a good namespace for billboards? I've designed my classes to only emit vertices. Another class can then consume them to build the vertices it needs, without being tied to a specific vertex type. So if you need a billboard with four textures and a shader you can do that, but you won't have to generate the billboard's vertices yourself, and the billboard's base vertex data isn't associated with or tied to a specific type of vertex buffer.
So what are billboards? They don't really fit into primitives.
|
https://cboard.cprogramming.com/game-programming/95940-need-namespace-billboards.html
|
CC-MAIN-2017-39
|
refinedweb
| 137
| 79.8
|
Chapter 2: The Python language
On Apple OS X, enter the following command in a Terminal window (assuming you're in the same folder as web2py.app):
On a Linux or other Unix box, chances are that you have Python already installed. If so, at a shell prompt type:
and, since "1" is an integer, we get a description about the
int class and all its methods. Here the output has been truncated because it is very long and detailed.
Similarly, we can obtain a list of methods of the object "1" with the command
dir:
Types
Python is a dynamically typed language, meaning that variables do not have a type and therefore do not have to be declared. Values, on the other hand, do have a type. You can query a variable for the type of value it contains:
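The original interactive listing was lost from this copy; here is a minimal sketch of the idea (values carry types, variables do not):

```python
# Rebinding the same variable to values of different types:
a = 3
assert type(a) is int
a = 3.14
assert type(a) is float
a = 'hello'
assert type(a) is str
```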
After executing these three commands, the resulting
a is an ASCII string storing UTF8 encoded characters. By design, web2py uses UTF8 encoded strings internally.
It is also possible to write variables into strings in various ways:
The last notation is more explicit and less error prone, and is to be preferred.
Many Python objects, for example numbers, can be serialized into strings using
str or
repr. These two commands are very similar but produce slightly different output. For example:
list
The main methods of a Python list are append, insert, and delete:
Lists can be sliced:
and concatenated:
A list is iterable; you can loop over it:
The elements of a list do not have to be of the same type; they can be any type of Python object.
There is a very common situation for which a list comprehension can be used. Consider the following code:
This code clearly processes a list of items, selects and modifies a subset of the input list, and creates a new result list, and this code can be entirely replaced with the following list comprehension:
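The comprehension itself did not survive in this copy; here is an illustrative version of the pattern just described, assuming a simple select-and-modify rule:

```python
items = [1, 2, 3, 4, 5]

# Explicit loop: select a subset and modify each selected item
result = []
for item in items:
    if item > 2:
        result.append(item * 10)

# Equivalent list comprehension
assert result == [item * 10 for item in items if item > 2]
assert result == [30, 40, 50]
```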
tuple
A tuple is like a list, but its size and elements are immutable, while in a list they are mutable. If a tuple element is an object, the object attributes are mutable. A tuple is delimited by round brackets.
So while this works for a list:
the element assignment does not work for a tuple:
A tuple, like a list, is an iterable object. Notice that a tuple consisting of a single element must include a trailing comma, as shown below:
Tuples are very useful for efficient packing of objects because of their immutability, and the brackets are often optional:
dict
A Python
dict-ionary is a hash table that maps a key object to a value object. For example:
Useful methods are
has_key,
keys,
values and
items:
The
items method produces a list of tuples, each containing a key and its associated value.
Dictionary elements and list elements can be deleted with the command
del:
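A sketch of those methods in modern Python (the book targets Python 2, where has_key existed; in Python 3 use the in operator instead):

```python
d = {'a': 1, 'b': 2}

assert 'a' in d                      # Python 3 replacement for d.has_key('a')
assert sorted(d.keys()) == ['a', 'b']
assert sorted(d.values()) == [1, 2]
assert ('b', 2) in d.items()         # items() yields (key, value) pairs

del d['a']                           # delete a dictionary element
assert 'a' not in d

lst = [10, 20, 30]
del lst[1]                           # delete a list element
assert lst == [10, 30]
```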
Internally, Python uses the
hash operator to convert objects into integers, and uses that integer to determine where to store the value.
About indentation
Python uses indentation to delimit blocks of code. A block starts with a line ending in colon, and continues for all lines that have a similar or higher indentation as the next line. For example:
It is common to use four spaces for each level of indentation. It is a good policy not to mix tabs with spaces, which can result in (invisible) confusion.
for...in
In Python, you can loop over iterable objects:
One common shortcut is
xrange, which generates an iterable range without storing the entire list of elements.
This is equivalent to the C/C++/C#/Java syntax:
Another useful command is
enumerate, which counts while looping:
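A small sketch of enumerate (note that xrange is Python 2; Python 3's range is already lazy):

```python
pairs = []
for i, letter in enumerate(['a', 'b', 'c']):
    pairs.append((i, letter))
assert pairs == [(0, 'a'), (1, 'b'), (2, 'c')]
```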
You can jump to the next loop iteration without executing the entire code block with
continue
while
The
while loop in Python works much as it does in many other programming languages, by looping an indefinite number of times and testing a condition before each iteration. If the condition is
False, the loop ends.
There is no
loop...until construct in Python.
if...elif...else
"elif" means "else if". Both
elif and
else clauses are optional. There can be more than one
elif but only one
else statement. Complex conditions can be created using the
not,
and and
or operators.
try...except...else...finally:
The
else and
finally clauses are optional.
Here is a list of built-in Python exceptions + HTTP (defined by web2py):
Identifiers defined outside of function scope are accessible within the function; observe how the identifier
a is handled in the following code:
Function
f creates new functions; and note that the scope of the name
g is entirely internal to
f. Closures are extremely powerful.
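The listing for f and g was lost from this copy; here is a minimal closure along the lines described:

```python
def f(x):
    def g(y):
        return x + y   # g closes over f's local x
    return g

add3 = f(3)            # f creates a new function
assert add3(4) == 7    # g keeps access to x after f returns
```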
Function arguments can have default values, and can return multiple results:
Function arguments can be passed explicitly by name, and this means that the order of arguments specified in the caller can be different than the order of arguments with which the function was defined:
Functions can also take a runtime-variable number of arguments:
and a dictionary can be unpacked to deliver keyword arguments:
lambda
lambda provides a way to create a very short unnamed function very easily:
The only benefit of
lambda is brevity; however, brevity can be very convenient in certain situations. Consider a function called
map that applies a function to all items in a list, creating a new list:
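A short illustration of the lambda-with-map pattern (in Python 3, map is lazy, so list() is needed to materialise the result):

```python
items = [1, 2, 3]
doubled = list(map(lambda x: x * 2, items))
assert doubled == [2, 4, 6]
```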
There are many situations where currying is useful, but one of those is directly useful in web2py: caching. Suppose you have an expensive function that checks whether its argument is prime:
This function is obviously time consuming.
Suppose you have a caching function
cache.ram that takes three arguments: a key, a function and a number of seconds.
The first time it is called, it calls the function
f(), stores the output in a dictionary in memory (let's say "d"), and returns the value:
The second time it is called, if the key is in the dictionary and not older than the number of seconds specified (60), it returns the corresponding value without performing the function call.
How would you cache the output of the function isprime for any input? Here is how:
Similarly, you can read back from the file with:
and you can close the file with:
What just happened? The function
exec tells the interpreter to call itself and execute the content of the string passed as argument. It is also possible to execute the content of a string within a context defined by the symbols in a dictionary:
import
For example, if you need to use a random number generator, you can do:
This prints a random integer between 0 and 9 (including 9), 5 in the example. The function
randint is defined in the module
random. It is also possible to import an object from a module into the current namespace:
or import all objects from a module into the current namespace:
or import everything in a newly defined namespace::
Some of the
os functions, such as
chdir, MUST NOT be used in web2py because they are not thread-safe.
os.path.join is very useful; it allows the concatenation of paths in an OS-independent way:
System environment variables can be accessed via:
When running web2py, Python stays resident in memory, and there is only one
sys.path, while there are many threads servicing the HTTP requests. To avoid a memory leak, it is best to check if a path is already present before appending:
datetime
The use of the datetime module is best illustrated by some examples:
Occasionally you may need to time-stamp data based on the UTC time as opposed to local time. In this case you can use the following function:
The datetime module contains various classes: date, datetime, time and timedelta. The difference between two date or two datetime or two time objects is a timedelta:
In web2py, date and datetime are used to store the corresponding SQL types when passed to or returned from the database.
time
The time module differs from
date and
datetime because it represents time as seconds from the epoch (the beginning of 1970):
and now:
In this example,
b is a string representation of
a, and
c is a copy of
a generated by de-serializing
b.
cPickle can also serialize to and de-serialize from a file:
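The book's listing used cPickle (Python 2); in Python 3 the same API lives in the pickle module:

```python
import io
import pickle

a = {'hello': 'world', 'numbers': [1, 2, 3]}
b = pickle.dumps(a)       # serialize to a byte string
c = pickle.loads(b)       # de-serialize a copy
assert c == a and c is not a

# Serializing to and from a file works the same way:
buf = io.BytesIO()        # stands in for an open file
pickle.dump(a, buf)
buf.seek(0)
assert pickle.load(buf) == a
```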
|
http://web2py.com/books/default/chapter/34/02
|
CC-MAIN-2017-09
|
refinedweb
| 1,395
| 55.98
|
Has anyone else had problems with the Array class? I'm trying to do a sort on line segments that I have stored as arrays of points. All the Array methods seem to work fine except for insert and clone. Every time I use the insert method it appends.
Also does anybody have a working example of how to use the clone method?
The first part is copied from the help files and modified. I wrote this in PythonWin 2.6. I only have the sort for one end of the line segment so far.
I'm new to Python but I have some experience as a programmer.
Thank you very much
import arcpy
p = arcpy.Point()
a = arcpy.Array()
a.add(p)
a.add(p)
a.add(p)
p.X=11
a.replace(0,p) #so far so good
p.X=12
a.replace(2,p) #but, they are both replaced
p2 = arcpy.Point() #however, if you make a new point...
p2.X=42
a.replace(2,p2)#...it goes only where expected
If you change an attribute of an existing point object, even the id, and then replace it somewhere, it will replace all the old instances in the array with the new attribute.
(So maybe it's not a bug?)
(just a little annoying)
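The behaviour described looks like ordinary Python object aliasing; here is the same effect sketched with a plain list of mutable objects (no arcpy needed):

```python
p = {'X': 0}
a = [p, p, p]        # three references to the *same* object

p['X'] = 11
a[0] = p             # "replace" slot 0 with the mutated object
assert a[2]['X'] == 11   # ...but every slot reflects the change

p2 = {'X': 42}       # a genuinely new object...
a[2] = p2
assert a[0]['X'] == 11 and a[2]['X'] == 42  # ...goes only where expected
```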
|
https://community.esri.com/thread/44650-arcpy-array-methods
|
CC-MAIN-2018-22
|
refinedweb
| 231
| 85.49
|
Results 1 to 1 of 1
Frustration Authoring problem - Please help...
- Member Since
- Oct 18, 2010
- 1
Really annoying problem that i cannot work out... please help
i'll start at the beginning...
Shot a 1080i project on sony FX1000
captured in HD in final cut, completed project, exported using compressor "DVD best quality 90 min" setting
import into dvd studio...
project resolution is 720 x 576i
display mode 16:9 letterbox
i burn the project, now....
When I play the DVD on my 16:9 television, it does not fill the whole screen. It squeezes the picture horizontally and gives me black bars top and bottom. Why is that? If I shot it in 16:9, burnt it in 16:9, and my TV display is 16:9, why am I getting a stretched-out DVD picture with black bars? I want it to fill the whole screen (no black bars) without warping/stretching/squeezing the picture.
Please help....
Bobby
|
http://www.mac-forums.com/forums/movies-video/217613-frustration-authoring-problem-please-help.html
|
CC-MAIN-2018-17
|
refinedweb
| 233
| 68.91
|
Automatically catch many common errors while coding
One of the most common complaints about the Python language is that variables are Dynamically Typed. That means you declare variables without giving them a specific data type. Types are automatically assigned based on what data was passed in:
In this case, the variable president_name is created as str type because we passed in a string. But Python didn't know it would be a string until it actually ran that line of code.
By comparison, a language like Java is Statically Typed. To create the same variable in Java, you have to declare the string explicitly with a String type:
Because Java knows ahead of time that president_name can only hold a String, it will give you a compile error if you try to do something silly like store an integer in it or pass it into a function that expects something other than a String.
Why should I care about types?
It's usually faster to write new code in a dynamically-typed language like Python because you don't have to write out all the type declarations by hand. But when your codebase starts to get large, you'll inevitably run into lots of runtime bugs that static typing would have prevented.
Here's an example of an incredibly common kind of bug in Python:
All we are doing is asking the user for their name and then printing out "Hi, <first name>!". And if the user doesn't type anything, we want to print out "Hi, UserFirstName!" as a fallback.
This program will work perfectly if you run it and type in a name... BUT it will crash if you leave the name blank:
Traceback (most recent call last):
  File "test.py", line 14, in <module>
    first_name = get_first_name(fallback_name)
  File "test.py", line 2, in get_first_name
    return full_name.split(" ")[0]
AttributeError: 'dict' object has no attribute 'split'
The problem is that fallback_name isn't a string; it's a Dictionary. So calling get_first_name on fallback_name fails horribly because it doesn't have a .split() function.
It's a simple and obvious bug to fix, but what makes this bug insidious is that you will never know the bug exists until a user happens to run the program and leave the name blank. You might test the program a thousand times yourself and never notice this simple bug because you always typed in a name.
Static typing prevents this kind of bug. Before you even try to run the program, static typing will tell you that you can't pass fallback_name into get_first_name() because it expects a str but you are giving it a Dict. Your code editor can even highlight the error as you type!
When this kind of bug happens in Python, it's usually not in a simple function like this. The bug is usually buried several layers down in the code and triggered because the data passed in is slightly different than previously expected. To debug it, you have to recreate the user's input and figure out where it went wrong. So much time is wasted debugging these easily preventable bugs.
The good news is that you can now use static typing in Python if you want to. And as of Python 3.6, there's finally a sane syntax for declaring types.
Fixing our buggy program
Let's update the buggy program by declaring the type of each variable and each function input/output. Here's the updated version:
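The fixed listing did not survive in this copy; here is a hedged reconstruction built from the names the article uses (the real version may differ in detail):

```python
def get_first_name(full_name: str) -> str:
    return full_name.split(" ")[0]

fallback_name: str = "UserFirstName"

first_name: str = ""          # simulate the user leaving the name blank
if not first_name:
    first_name = fallback_name

assert get_first_name(first_name) == "UserFirstName"
```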
In Python 3.6, you declare a variable type like this:
variable_name: type
If you are assigning an initial value when you create the variable, it's as simple as this:
my_string: str = "My String Value"
And you declare a function's input and output types like this:
def function_name(parameter1: type) -> return_type:
It's pretty simple; just a small tweak to the normal Python syntax. But now that the types are declared, look what happens when I run the type checker:
$ mypy typing_test.py
test.py:16: error: Argument 1 to "get_first_name" has incompatible type Dict[str, str]; expected "str"
Without even executing the program, it knows there's no way that line 16 will work! You can fix the error right now without waiting for a user to discover it three months from now.
And if you are using an IDE like PyCharm, it will automatically check types and show you where something is wrong before you even hit "Run":
It's that easy!
More Python 3.6 Typing Syntax Examples
Declaring str or int variables is simple. The headaches happen when you are working with more complex data types like nested lists and dictionaries. Luckily Python 3.6's new syntax for this isn't too bad... at least not for a language that added typing as an afterthought.
The basic pattern is to import the name of the complex data type from the typing module and then pass in the nested types in brackets.
The most common complex data types you'll use are Dict, List and Tuple. Here's what it looks like to use them:
from typing import Dict, List

# A dictionary where the keys are strings and the values are ints
name_counts: Dict[str, int] = {
    "Adam": 10,
    "Guido": 12
}

# A list of integers
numbers: List[int] = [1, 2, 3, 4, 5, 6]

# A list that holds dicts that each hold a string key / int value
list_of_dicts: List[Dict[str, int]] = [
    {"key1": 1},
    {"key2": 2}
]
Tuples are a little bit special because they let you declare the type of each element separately:
from typing import Tuple

my_data: Tuple[str, int, float] = ("Adam", 10, 5.7)
You can also create aliases for complex types just by assigning them to a new name:
from typing import List, Tuple

LatLngVector = List[Tuple[float, float]]

points: LatLngVector = [
    (25.91375, -60.15503),
    (-11.01983, -166.48477),
    (-11.01983, -166.48477)
]
Sometimes your Python functions might be flexible enough to handle several different types or work on any data type. You can use the Union type to declare a function that can accept multiple types and you can use Any to accept anything.
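A small sketch of Union and Any (illustrative function names; Python itself does not enforce these at runtime, the checker does):

```python
from typing import Any, Union

def double(value: Union[int, float]) -> float:
    return value * 2.0

def describe(value: Any) -> str:
    return str(value)

assert double(3) == 6.0
assert describe([1, 2]) == "[1, 2]"
```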
Python 3.6 also supports some of the fancy typing stuff you might have seen in other programming languages like generic types and custom user-defined types.
Running the Type Checker
While Python 3.6 gives you this syntax for declaring types, there's absolutely nothing in Python itself yet that does anything with these type declarations. To actually enforce type checking, you need to do one of two things:
- Download the open-source mypy type checker and run it as part of your unit tests or development workflow.
- Use PyCharm which has built-in type checking in the IDE. Or if you use another editor like Atom, download its own type-checking plug-in.
I'd recommend doing both. PyCharm and mypy use different type checking implementations and they can each catch things that the other doesn't. You can use PyCharm for realtime type checking and then run mypy as part of your unit tests as a final verification.
Great! Should I start writing all my Python code with type declarations?
This type declaration syntax is very new to Python; it only fully works as of Python 3.6. If you show your typed Python code to another Python developer, there's a good chance they will think you are crazy and not even believe that the syntax is valid. And the mypy type checker is still under development and doesn't claim to be stable yet.
So it might be a bit early to go whole hog on this for all your projects. But if you are working on a new project where you can make Python 3.6 a minimum requirement, it might be worth experimenting with typing. It has the potential to prevent lots of bugs.
One of the neat things is that you can easily mix code with and without type declarations. It's not all-or-nothing. So you can start by declaring types in the places that are the most valuable without changing all your code. And since Python itself won't do anything with these types at runtime, you can't accidentally break your program by adding type declarations.
Thanks for reading! If you are interested in machine learning (or just want to understand what it is), check out my Machine Learning is Fun! series.
You can also follow me on Twitter at @ageitgey or find me on linkedin.
|
https://911weknow.com/how-to-use-static-type-checking-in-python-3-6
|
CC-MAIN-2021-04
|
refinedweb
| 1,422
| 74.69
|
As mentioned in the last post, today we’re going to have some fun pulling data from our recently-implemented, cloud-based web-service into the Unity3D game engine.
My intention, here, is to reinforce the fact that exposing web-service APIs really does give you broader reach with your technology. In this case, we’ll be calling our web-service – implemented using F#, which we could not have included in our Unity3D project – inside a game engine that could be hosted on any one of a wide variety of operating systems.
A few words on Unity3D: according to its web-site, Unity3D is “a feature rich, fully integrated development engine for the creation of interactive 3D content.” While the main goal behind Unity3D (which from here on I’ll just call Unity) is clearly for people writing games, Unity is also a great visualization environment.
This technology is cool. It makes heavy use of .NET – perhaps not the first environment you’d think of when implementing a cross-platform game engine – but its use of Mono allows you to use it to create games for Windows, Mac, the web, Android, iOS, PS3, Wii and Xbox 360. Woah!
One of the great things about Unity is the sample content available for it. The default scene when you install Unity is called Angry Bots, for instance, which is a fully-featured third person shooter. Now I’m not actually interested in implementing a game for this post – although that might be pretty cool, shooting each of the spheres in an Apollonian packing ;-) – so I decided to start with a more architectural sample scene.
I didn’t want to make any static changes to the scene – it looks really cool as it stands – I just wanted to access the web-service, pull down the sphere definitions and then create “game objects” inside the scene dynamically at run-time.
A quick aside regarding my OS choice for this… I originally started working with the Windows version of Unity inside a Parallels VM on my Mac, but then decided to switch across to the Mac version of Unity. The scene loaded as well there as it did on Windows – nothing needed to change, at all. I installed the free version of Unity – which means I can’t build for mobile platforms or game consoles – but you can apparently get free trials of a version that builds for those environments, if so inclined.
Unity’s scripting environment is pretty familiar: the MonoDevelop editor installed with Unity is pretty decent – it was my first time using the tool, and I thought I’d give it a try rather than configuring Unity to use Visual Studio – and is well integrated with the Unity scene development environment. You can code in either Javascript or C# (no prizes for guessing which one I chose ;-), and it’s possible to have both in a scene.
I started off bringing down a JSON-reading implementation from their Wiki (be sure to add the Nullable class definition, listed further down the page) which would help me work with JSON from my own code. I added an additional C# source file – which must be called ImportSpheres.cs, as it needs to match the name of the class – and placed this code in it:
using UnityEngine;
using System.Collections;
using System.Net;
using System.IO;
public class ImportSpheres : MonoBehaviour
{
// The radius of our outer sphere
const float radius = 0.8f;
IEnumerator DownloadSpheres()
{
// Pull down the JSON from our web-service
WWW w = new WWW(
"" +
radius.ToString() + "/7"
);
yield return w;
print("Waiting for sphere definitions\n");
// Add a wait to make sure we have the definitions
yield return new WaitForSeconds(1f);
print("Received sphere definitions\n");
// Extract the spheres from our JSON results
ExtractSpheres(w.text);
}
void Start ()
{
print("Started sphere import...\n");
StartCoroutine(DownloadSpheres());
}
void ExtractSpheres(string json)
{
// Create a JSON object from the text stream
JSONObject jo = new JSONObject(json);
// Our outer object is an array
if (jo.type != JSONObject.Type.ARRAY)
return;
// Set up some constant offsets for our geometry
const float xoff = 1, yoff = 1, zoff = 1;
// And some counters to measure our import/filtering
int displayed = 0, filtered = 0;
// Go through the list of objects in our array
foreach(JSONObject item in jo.list)
{
// For each sphere object...
if (item.type == JSONObject.Type.OBJECT)
{
// Gather center coordinates, radius and level
float x = 0, y = 0, z = 0, r = 0;
int level = 0;
for(int i = 0; i < item.list.Count; i++)
{
// First we get the value, then switch
// based on the key
var val = (JSONObject)item.list[i];
switch ((string)item.keys[i])
{
case "X":
x = (float)val.n;
break;
case "Y":
y = (float)val.n;
break;
case "Z":
z = (float)val.n;
break;
case "R":
r = (float)val.n;
break;
case "L":
level = (int)val.n;
break;
}
}
// Create a vector from our center point, to see
// whether its radius comes near the edge of the
// outer sphere (if not, filter it, as it's
// probably occluded)
Vector3 v = new Vector3(x, y, z);
if ((Vector3.Magnitude(v) + r) > radius * 0.99)
{
// We're going to display this sphere
displayed++;
// Create a corresponding "game object" and
// transform it
var sphere =
GameObject.CreatePrimitive(PrimitiveType.Sphere);
sphere.transform.position =
new Vector3(x + xoff, y + yoff, z + zoff);
float d = 2 * r;
sphere.transform.localScale =
new Vector3(d, d, d);
// Set the object's color based on its level
UnityEngine.Color col = UnityEngine.Color.white;
switch (level)
{
case 1:
col = UnityEngine.Color.red;
break;
case 2:
col = UnityEngine.Color.yellow;
break;
case 3:
col = UnityEngine.Color.green;
break;
case 4:
col = UnityEngine.Color.cyan;
break;
case 5:
col = UnityEngine.Color.magenta;
break;
case 6:
col = UnityEngine.Color.blue;
break;
case 7:
col = UnityEngine.Color.grey;
break;
}
sphere.renderer.material.color = col;
}
else
{
// We have filtered a sphere - add to the count
filtered++;
}
}
}
// Report the number of imported vs. filtered spheres
print(
"Displayed " + displayed.ToString () +
" spheres, filtered " + filtered.ToString() +
" others."
);
}
void Update ()
{
}
}
There’s nothing earth-shattering about this code: it calls our web-service to pull down a fairly detailed (level 7) representation of an Apollonian packing and inserts the resultant spheres into the current scene.
The code makes use of some Unity-specific classes, such as WWW – rather than some standard .NET capabilities which proved a bit more problematic – and there were some quirks needed to implement “co-routines” from C# (which meant having to return an IEnumerator and use yield return).
Being based on Mono, your Unity code is always going to be a bit behind the state-of-the-art in the latest .NET Framework, but hey – that’s the cost of being cross platform. :-)
Otherwise, it’s probably worth mentioning that the code only adds GameObjects for spheres that are close to the outside of the outer sphere, as they would only end up being occluded, anyway.
After the script has been added to the scene – in this case in the _Scripts folder – it’s a relatively simple matter of attaching it to one of the scene’s objects, to make sure it gets called. This step took me some time to work out – despite it being really simple, once you know how – so I’ll step through it, below.
From the base scene with our scripts added…
We just need to select a game object (in this case I chose the computer desk, but we could select anything in the initial scene), and then choose Component –> Scripts –> Import Spheres.
Once this has been selected, we should be able to see the script attached to the selected object in the Inspector window on the right. This means the script will be executed as the game object – and as a static object this ultimately means the scene – loads.
The script will be checked, by default – to stop it from running you can either uncheck it or use the “cog” icon to edit the script settings and select “Remove Component” to get rid of it completely.
Now we can simply run the scene using the “play” icon at the top – which runs the scene inside the editor – or you can build it via the File menu and run the resultant output.
Here’s the scene in the editor:
I tried embedding the Unity web player in this post, directly, but gave up: it seems you need to edit the <head> section of your HTML page to load the appropriate script: I could do that for every post on this blog, but that seems like unnecessary overhead. If you’d like to give the scene a try, you’ll have to open a separate page to do so.
One small note: I did need to add a crossdomain.xml file to our Azure-hosted ASP.NET web-service, to make sure it met with Unity's security requirements.
Right – that’s it for today’s post. Next time we’ll be continuing to look at using our data in other places, as we shift gears and implement a basic 3D viewer for the Android platform.
|
http://through-the-interface.typepad.com/through_the_interface/2012/04/calling-a-web-service-from-a-unity3d-scene.html
|
CC-MAIN-2017-09
|
refinedweb
| 1,512
| 60.95
|
This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
On Mon, 3 Aug 2015, Zack Weinberg wrote:

> > Hmm, why the relationship with _FILE_OFFSET_BITS? At least at first
> > sight, supporting 64-bit time seems pretty unrelated/orthogonal to
> > supporting 64-bit file sizes.
>
> I don't know for certain what Joseph had in mind, but I believe that's
> just to cut down on the number of combinations the header files need
> to support. 'struct stat', for instance, contains both time_t and

And the number of ABI variants / function exports in the libraries.

One thing to consider in the design is whether you have separate functions exported from the shared libraries at all for those cases where time_t is already 64-bit. _FILE_OFFSET_BITS=64 does have separate exports (and public API names such as stat64 for use with _LARGEFILE64_SOURCE; I don't see any clear need for API names like that for time_t); avoiding them for time_t would complicate the headers but simplify the ABI.

Issues with _FILE_OFFSET_BITS=64 that you should definitely seek to avoid for _TIME_BITS=64 include:

(a) stat and stat64 having different layouts on at least one 64-bit platform (MIPS n64) (that is, whatever _TIME_BITS=64 does on systems where time_t is already 64-bit, it should not change the layout of any structures);

(b) link-time namespace violations (bug 14106);

(c) _FILE_OFFSET_BITS=64 affecting the C++ ABI even when the layout of the types would otherwise be the same (bug 15766).

The evidence is that libraries affected by the _FILE_OFFSET_BITS value are more likely nowadays to be built with _FILE_OFFSET_BITS=64 than _FILE_OFFSET_BITS=32 on GNU/Linux distributions.

--
Joseph S. Myers
joseph@codesourcery.com
|
http://www.sourceware.org/ml/libc-alpha/2015-08/msg00038.html
|
CC-MAIN-2018-34
|
refinedweb
| 287
| 61.6
|
Listening to a Polar Bluetooth HRM in Linux
My new toy, as of last Friday, is a Polar WearLink®+ transmitter with Bluetooth® because I wanted to track my heart rate from Android. Absent some initial glitches which turned out to be due to the battery it was shipped with having almost no charge left, it works pretty well with the open source Google My Tracks application.
But, but. A significant part of my exercise regime consists of riding a stationary bicycle until I feel sick. I do this in the same room as my computer: not only are GPS traces rather uninformative for this activity, but getting satellite coverage in the first place is tricky while indoors. So I thought it would be potentially useful and at least slightly interesting to work out how to access it directly from my desktop.
My first port of call was the source code for My Tracks. Digging into src/com/google/android/apps/mytracks/services/sensors/PolarMessageParser.java we find a helpful comment revealing that, notwithstanding Polar’s ridiculous stance on giving out development info (they don’t, is the summary) the Wearlink packet format is actually quite simple.
* Polar Bluetooth Wearlink packet example;
*   Hdr Len Chk Seq Status HeartRate RRInterval_16-bits
*    FE  08  F7  06   F1      48        03 64
* where;
*   Hdr always = 254 (0xFE),
*   Chk = 255 - Len
*   Seq range 0 to 15
*   Status = Upper nibble may be battery voltage;
*            bit 0 is Beat Detection flag.
While we’re looking at Android for clues, we also find the very useful information in the API docs for BluetoothSocket that “The most common type of Bluetooth socket is RFCOMM, which is the type supported by the Android APIs. RFCOMM is a connection-oriented, streaming transport over Bluetooth. It is also known as the Serial Port Profile (SPP)”. So, all we need to do is figure out how to do the same in Linux.
Doing anything with Bluetooth in Linux inevitably turns into an exercise in yak
epilation, especially for the kind of retrocomputing grouch (that’s
me) who doesn’t have a full GNOME or KDE desktop with all the D buses
and applets and stuff that come with it. In this case, I found that
XFCE and the Debian blueman package were sufficient to
get my bluetooth dongle registered, and to find and pair with the HRM.
It also included a natty wizard thing which claimed to be able to
create an rfcomm connection in
/dev/rfcomm0. I say “claimed” not
because it didn’t – it did, so … – but because for no readily
apparent reason I could never get more than a single packet from this
device without disconnecting, unpairing and repairing. Perhaps there
was weird flow control stuff going on or perhaps it was something
else, I don’t know, but in any case this is not ideal at 180bpm.
So, time for an alternative approach: thanks to Albert Huang, we find
that apparently you can work with rfcomm sockets using actual, y’know,
sockets. The rfcomm-client.c example on that web page worked perfectly,
modulo the obvious point that sending data to a heart rate monitor strap
is a peculiarly pointless endeavour, but really we want to write our
code in Ruby, not in C. This turns out to be easier than we might expect.
Ruby’s socket library wraps the C socket interface sufficiently closely
that we can use pack to forge sockaddr structures for any protocol the
kernel supports, if we know the layout in memory and the values of the
constants.
How do we find “the layout in memory and the values of the constants”? With gdb. First we start it
:; gdb rfcomm-client
[...]
(gdb) break 21
Breakpoint 1 at 0x804865e: file rfcomm-client.c, line 21.
(gdb) run
Starting program: /home/dan/rfcomm-client

Breakpoint 1, main (argc=1, argv=0xbffff954) at rfcomm-client.c:22
22        status = connect(s, (struct sockaddr *)&addr, sizeof(addr));
then we check the values of the things
(gdb) print sizeof addr
$2 = 10
(gdb) print addr.rc_family
$3 = 31
(gdb) p/x addr.rc_bdaddr
$4 = {b = {0xab, 0x89, 0x67, 0x45, 0x23, 0x1}}
then we look at the offsets
(gdb) p/x &addr
$5 = 0xbffff88e
(gdb) p/x &(addr.rc_family)
$6 = 0xbffff88e
(gdb) p/x &(addr.rc_bdaddr)
$7 = 0xbffff890
(gdb) p/x &(addr.rc_channel)
$8 = 0xbffff896
So, overall length 10, rc_family is at offset 0, rc_bdaddr at 2, and
rc_channel at 8. And the undocumented (as far as I can see)
str2ba
function results in the octets of the bluetooth address going
right-to-left into memory locations, so that should be easy to
replicate in Ruby.
def connect_bt(address_str, channel=1)
  bytes = address_str.split(/:/).map { |x| x.to_i(16) }
  s = Socket.new(AF_BLUETOOTH, :STREAM, BTPROTO_RFCOMM)
  sockaddr = [AF_BLUETOOTH, 0, *bytes.reverse, channel, 0].pack("C*")
  s.connect(sockaddr)
  s
end
The only thing left to do is the actual decoding. Considerations here
are that we need to deal with short reads and that the start of a
packet may not be at the start of the buffer we get – so we keep
reading buffers and catenating them until
decode says it’s found a
packet, then we start again from where decode says the end of the
packet should be. Because this logic is slightly complicated we wrap
it in an Enumerator so that our caller gets one packet only each and
every time they call
Enumerator#next
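For concreteness, here is one way the decode step could look for the packet format quoted earlier (my own sketch, not the code from the linked example): it hunts for the 0xFE header byte, checks the Chk = 255 - Len relationship, and either returns the decoded fields plus the offset where the packet ended, or nil to ask for more data.

```ruby
# Hypothetical decoder for the WearLink packet layout described above.
# Returns [fields, bytes_consumed] for the first complete packet found
# in buf, or nil if the buffer doesn't hold a whole packet yet.
# (A fuller version would also resync past a corrupt header.)
def decode(buf)
  bytes = buf.unpack("C*")
  start = bytes.index(0xFE)                 # Hdr is always 0xFE
  return nil if start.nil? || bytes.length < start + 2
  len = bytes[start + 1]
  return nil if len < 6 || bytes.length < start + len  # wait for the rest
  pkt = bytes[start, len]
  return nil unless pkt[2] == 255 - len     # Chk = 255 - Len
  rr = pkt[6..-1].each_slice(2).map { |hi, lo| (hi << 8) | lo }
  [{ seq: pkt[3] & 0x0F, heart_rate: pkt[5], rr_intervals: rr }, start + len]
end
```

Running it over the example packet FE 08 F7 06 F1 48 03 64 yields a heart rate of 72 bpm and one RR interval of 868.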
The complete example code is at and the licence is “do what you like with it”. What I will like to do with it is (1) log the data, (2) put up a window in the middle of the display showing my instantaneous heart rate and zone so that I know if I’m trying, (3) later, do some interesting graphing and analysis stuff. But your mileage may vary.
Syndicated 2012-05-03 21:10:29 from diary at Telent Netowrks
http://www.advogato.org/person/dan/diary.html?start=161
The idea for the tap method has been around for some time - but it has now been added to the standard Ruby library in Ruby 1.9. MenTaLguY, who blogged about the idea behind tap, shows the simple code:
class Object
def tap
yield self
self
end
end
The tap method is defined in Object, making it available for every object in Ruby by default. The method takes a Block as argument, which it calls with self as argument - then the object is returned.
At first sight, the tap method seems like a complicated way of doing something with an object. The real benefit of this becomes clear when the object of interest is passed from one method to another without ever being assigned to a variable. This is common whenever methods are chained, particularly if the chain is long.
xs = blah.sort.grep( /foo/ )
p xs
# do whatever we had been doing with the original expression
xs.map { |x| x.blah }
The same expression written with tap:
blah.sort.grep( /foo/ ).tap { |xs| p xs }.map { |x| x.blah }
This shows where tap is useful: without it, it's necessary to assign the object of interest to a local variable to use it - with tap it's possible to insert the Block that inspects the object right where the handover between the chained methods happens. This gets particularly useful with APIs that expose so-called Fluent Interfaces - i.e. APIs that encourage method chaining. Here is a Java example from Martin Fowler's website:
customer.newOrder()
.with(6, "TAL")
.with(5, "HPK").skippable()
.with(3, "LGV")
.priorityRush();
In Ruby, tap allows looking at the object at an arbitrary stage (i.e. between every call) by simply inserting a tap block. This is also useful with debugging tools, which often don't support looking at anonymous return values of methods.
tap is normally about causing some kind of side effect without changing the object (the Block's return value is ignored). However, it is of course possible to modify the object as long as it's mutable - similar to the returning method known from Rails' ActiveSupport.
The tap method is not restricted to Ruby 1.9 - Ruby's Open Classes allow adding it on non-1.9 Ruby versions too.
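A sketch of that backport, guarded so it won't clobber a native tap, together with the mid-chain inspection idiom:

```ruby
# Add tap only where the running Ruby doesn't already provide it.
class Object
  def tap
    yield self
    self
  end unless method_defined?(:tap)
end

# Peek at the sorted intermediate value without a temporary variable.
result = [3, 1, 2].sort.tap { |xs| $stderr.puts xs.inspect }.map { |x| x * 10 }
# result is [10, 20, 30]; the inspected intermediate value was [1, 2, 3]
```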
http://www.infoq.com/news/2008/02/tap-method-ruby19
I recently purchased an OpenMV H7 camera and wanted to establish a UART connection between the camera and my Arduino Uno. I followed the steps indicated in abdalkader's github directory ... no_uart.py, but I am not seeing any output on the serial of either microcontroller.
This is the code I am using for Arduino
void setup() {
  // put your setup code here, to run once:
  Serial.begin(19200);
}

void loop() {
  // put your main code here, to run repeatedly:
  if (Serial.available()) {
    // Read the most recent byte
    byte byteRead = Serial.read();
    // ECHO the value that was read
    Serial.write(byteRead);
    Serial.println(byteRead);
  }
}
And this is the MicroPython script running on the OpenMV camera:

import time
from pyb import UART

# UART 3, and baudrate.
uart = UART(3, 19200)

while(True):
    uart.write("Hello World!\n")
    if (uart.any()):
        print(uart.read())
    time.sleep(1000)
OpenMV Cam Ground Pin ----> Arduino Ground
OpenMV Cam UART3_TX(P4) ----> Arduino Uno UART_RX(0)
OpenMV Cam UART3_RX(P5) ----> Arduino Uno UART_TX(1)
Attached is an image of my wiring
I would really appreciate some help!
https://forums.openmv.io/viewtopic.php?f=3&t=1980
Paul Russell wrote:
>
> * Berin Loritsch (bloritsch@apache.org) wrote :
> > I discovered something incredible. The XSP system is our major
> > performance sink-hole. This I find to be amazing.
>
> > I have my suspiscions as to where the problems may lie: Class
> > validation (is it current?) and sending too many namespace events. I
> > am going to try running reading a normal file through the
> > LogTransformer, and then an XSP file through the same LogTransformer.
> > I have a feeling that those two areas are are major performance
> > bottlenecks.
>
> Interestingly, we discovered something similar a long time ago in
> Luminas. Probably because we are *very* heavy on namespaces (a lot of
> our pages have 10-15 namespaces floating around in them). The current
> XSP implementation does an awful lot of prefix mapping changes. In fact,
> we discovered that in a number of instances, _over half_ of the
> generated code was concerned with adding and removing prefix mappings.
> This is clearly not sensible. I'm not yet sure how to avoid this - I
> think we may have to use extension functions to keep track of which
> namespaces we've already defined.
I just noticed that the ServerPagesGenerator caches the SAX results with
a Stack. Is this really necessary? If an exception occurs, we should
just throw a SAXException or ProcessingException like the rest of the
system.
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200102.mbox/%3C3A8B0375.8273E5A2@apache.org%3E
CodePlex: Project Hosting for Open Source Software
Hi,
I wanted to create a layer rule for placing widgets on pages and posts with a specified tag. I was wondering if there was any guidance on layer rules?
Thanks,
Richard Garside.
Layer rules are really just Ruby expressions. They are evaluated in a sandbox where all classes that implement IRuleProvider are available.
AuthenticatedRuleProvider is a good one to look at to see how to create your own.
Rules can be deployed in modules and made available to share on the gallery ;).
Hi Bertrand,
I've created a rule provider, but I can't work out how to find the tags for the current page.
My first thought was to inject an instance of IOrchardServices, and use
ContentManager but this was just a stab in the dark really and I couldn't find how to get the info I needed. Could you give me a hint or point me towards a bit of code that checks the tags that the displayed piece of content
has. I was also wondering if I needed to check if the displayed content had tags attached at all.
Thanks for your help,
Richard.
Ah, apparently you don't have access to that information from rules. The only information you can use for now is information about the request.
Should I add this as a work item, or is it already on the agenda?
I have launched a process on a background thread in my head, and it ended this morning with a solution. You can implement an IContentHandler class, registering the OnDisplay method, and if the DisplayType is Detail, then using and IoC injected IWorkContextAccessor,
retrieve GetContext(), and SetState() this value. Thus from any view you can do a As<TagsPart> on the current displayed item, if available. You could even go further by providing the list of all content items/display type in the current request.
I can't quite get my head around that. I've only just started delving into the underbelly of Orchard.
How would my IRuleProvider class get access to the IContentHandler class?
Nice, Sebastien. I think you'd have that content handler set-up the state for you but the rule doesn't need to access the content handler itself. What it needs to inject is just the IWorkContextAccessor that's needed to retrieve the state the content handler
set-up. Does this help?
Here is the working solution:
The content handler which enlists all displayed content items within a request (here there is a filter on the Detail display type which makes sense for this purpose)
using System.Collections.Generic;
using Orchard.ContentManagement;
using Orchard.ContentManagement.Handlers;

namespace Orchard.Experimental.Handlers {
    public class CurrentContentItemHandler : ContentHandler {
        private readonly IWorkContextAccessor _workContextAccessor;

        public CurrentContentItemHandler(IWorkContextAccessor workContextAccessor) {
            _workContextAccessor = workContextAccessor;
        }

        protected override void BuildDisplayShape(BuildDisplayContext context) {
            if (context.DisplayType == "Detail") {
                var workContext = _workContextAccessor.GetContext();
                var contentItems = workContext.GetState<List<IContent>>("ContentItems");
                if (contentItems == null) {
                    workContext.SetState("ContentItems", contentItems = new List<IContent>());
                }
                contentItems.Add(context.ContentItem);
            }
        }
    }
}
Then a new IRuleProvider implementation to add a tagged() function to the widget rules engine:
using System;
using System.Collections.Generic;
using System.Linq;
using Orchard.ContentManagement;
using Orchard.Mvc;
using Orchard.Tags.Models;
using Orchard.UI.Widgets;

namespace Orchard.Experimental.RuleEngine {
    public class WithTagsRuleProvider : IRuleProvider {
        private readonly IHttpContextAccessor _httpContextAccessor;
        private readonly IWorkContextAccessor _workContextAccessor;

        public WithTagsRuleProvider(IHttpContextAccessor httpContextAccessor, IWorkContextAccessor workContextAccessor) {
            _httpContextAccessor = httpContextAccessor;
            _workContextAccessor = workContextAccessor;
        }

        public void Process(RuleContext ruleContext) {
            if (!String.Equals(ruleContext.FunctionName, "tagged", StringComparison.OrdinalIgnoreCase))
                return;

            var tag = Convert.ToString(ruleContext.Arguments[0]);
            var workContext = _workContextAccessor.GetContext();
            var contentItems = workContext.GetState<List<IContent>>("ContentItems");
            if (contentItems != null && contentItems.Any(c => c.As<TagsPart>() != null)) {
                ruleContext.Result = true;
                return;
            }
            ruleContext.Result = false;
        }
    }
}
The module will need a reference to Orchard.Tags project.
Ideally, I would love to have the list of displayed content items in the WorkContext out of the box. It could open some nice scenarios. It might also be in a specific module so that it can be reused. And then the new rule filter
would be in the Tags module.
Oh, just a remark. This code is a proof of concept, you need to add one or two more lines to really filter based on the actual tag. I'm just filtering any content item with a TagPart, not one which actually has a tag ;) Getting the argument passed to
tagged() is already here though.
Thanks. Wish you were here to solve all my problems and write most of my code for me.
I've just modified this bit slightly from:
if(contentItems != null && contentItems.Any(c => c.As<TagsPart>() != null)){
ruleContext.Result = true;
return;
}
to:
if (contentItems != null)
{
var taggedContent = contentItems.Where(c => c.As<TagsPart>() != null);
if (taggedContent.Any(c => c.As<TagsPart>().CurrentTags.Any(t => t.TagName == tag)))
{
ruleContext.Result = true;
return;
}
}
I've got a working version. Will test it on my blog over the weekend. If all is well I'll add it to the gallery so people can use it in 0.8.
Do you think you'll add this as standard for v1? It seems like such an obvious thing and most of the work is done, it would seem a shame not to.
That would be great but time is really short. Can't promise anything at this point. At least if there's a module available for it...
This is now working on my live site.
I've used it to create a side panel that includes books relevant to the content of the post. You can see it in action on this
Will post the module to the gallery once I get a username.
Nice!
http://orchard.codeplex.com/discussions/233851
The comment describes why in detail. This was found because QEMU never
gives up load reservations, the issue is unlikely to manifest on real
hardware.

Thanks to Carlos Eduardo for finding the bug!

Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
---
 arch/riscv/kernel/entry.S | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
index 1c1ecc238cfa..e9fc3480e6b4 100644
--- a/arch/riscv/kernel/entry.S
+++ b/arch/riscv/kernel/entry.S
@@ -330,6 +330,24 @@ ENTRY(__switch_to)
 	add   a3, a0, a4
 	add   a4, a1, a4
+#if (TASK_THREAD_RA_RA != 0)
+# error "The offset between ra and ra is non-zero"
+#endif
+#if (__riscv_xlen == 64)
+	sc.d	x0, ra, 0(a3)
+#else
+	sc.w	x0, ra, 0(a3)
+#endif
 	REG_S sp, TASK_THREAD_SP_RA(a3)
 	REG_S s0, TASK_THREAD_S0_RA(a3)
 	REG_S s1, TASK_THREAD_S1_RA(a3)
--
2.21.0
https://lkml.org/lkml/2019/6/5/979
Multiprocessing, Multithreading, and GIL: Essential concepts for every Python developer
Multithreading and Multiprocessing are ways to utilize 100% of the CPU and create performant applications. If you work with complex web applications, machine learning models, or video/image editing tools then you have to tackle multithreading/multiprocessing sooner or later.
Let’s say you create an application for editing photos. When the photo has a very high resolution, you will see a very significant drop in performance. Why? Because image editing is mathematically expensive. It puts pressure on the processor. To improve performance, you have to introduce multiprocessing/multithreading. In that case, if a user’s CPU has 4 cores, your application should use all 4 cores if required.
Compared to other languages, multiprocessing and multithreading have some limitations in Python. Lacking the knowledge might result in creating slow and inefficient systems. The main purpose of this article is to understand the difference.
Let’s get started by knowing a little bit more about Multithreading and Multiprocessing
Multithreading vs Multiprocessing
Let’s see the basic differences first
Multithreading
- A single process, having multiple code segments that can be run concurrently
- Each code segment is called a thread. A process having multiple threads is called a multi-threaded process
- The process memory is shared among the threads. So thread A can access the variables declared by thread B
- Gives the impression of parallel execution, but it’s actually concurrency which is not the same as parallelism. Although, threads can run in parallel in a multi-core environment (more on this later)
- Threads are easier to create and easier to throw away
Multiprocessing
- Multiple processes, working independently of each other. Each process might have one or more threads. But a single thread is the default.
- Each process has its own memory space. So process A cannot access the memory of process B
- Two different processes can run at two different cores in parallel independent of each other
- There is a significant overhead of creating and throwing away processes
Now, the terms concurrency and parallelism are not the same. Concurrent execution means that tasks can start, progress, and complete in overlapping time periods; it doesn't necessarily mean they are running at the same instant. Parallel execution, on the other hand, means two tasks are literally running at the same instant (in a multi-core environment).
Also, we have to understand the difference between I/O bound and CPU bound operations.
If an operation depends on I/O (input/output) devices to complete its work, then it’s I/O bound operation. For example, network requests, reading from a database or hard disk, reading from memory, writing to database — all these are I/O bound.
If an operation depends on the processor to complete its work, then it’s a CPU bound operation. For example, matrix multiplication, sorting arrays, editing images, video encoding/decoding, training ML models all are CPU bound operations. The common thing here is a mathematical operation. Every example stated here involves heavy mathematical calculation which can be done by the processor only.
In a single-core processor, thread execution might look like below
There are two processes, namely Process 1 and Process 2. While Process 1 is executing, Process 2 has to wait. In the case of threads, they can be executed concurrently (not in parallel). So for example, if Thread 1 issues a web request, it can take some time for the web request to complete. In that idle time, the CPU will be given to Thread 2 and it can do its operation (maybe do another web request). Please note, thread switching is only possible if it’s an I/O bound operation. In the case of CPU bound operation, the core will be blocked by that thread until the computation hasn’t finished.
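The I/O-bound switching just described is easy to observe. In this sketch (the sleep stands in for a network request; all names are illustrative), four blocking "requests" overlap in threads, so the wall time stays close to one request rather than four:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_io_request():
    time.sleep(0.2)  # stands in for a web request; the GIL is released while waiting

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    for _ in range(4):
        pool.submit(fake_io_request)  # the context manager waits for all tasks
elapsed = time.perf_counter() - start
print(f"4 overlapping requests took {elapsed:.2f}s")  # close to 0.2s, not 0.8s
```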
In a multi-core scenario, thread execution might look like this:
In the above figure, there are 2 processes each having 8 threads (16 threads in total). As you can see, threads for Process 1 have expanded to core 2. Threads 1–4 and threads 5–8 (blue boxes, top to bottom) are executed in parallel, because they are running in different cores. The same applies to Process 2. Concurrency among the threads in a single core is still preserved. Thus, the computation power is doubled for a process. This was possible when we used multithreading in a multi-core environment.
Now, when to use multithreading and when to use multiprocessing? It depends. Based on the characteristics described above, a programmer will go with either of them. If communication is important, then threads might be better because memory is shared. There are some more factors involved but to keep things simple, we will not dive into those. But in the case of performance, there is a very subtle difference. Creating more processes would be slower than creating threads because processes have an extra overhead and threads are more lightweight in nature.
In most cases, a C++ or Java programmer will go with multithreading unless there is an absolute need to go with multiprocessing (btw, don’t mix up the term multiprocessing and multi-core). But, can we say the same thing for Python? Unfortunately, no. Python is different from C++ or Java.
What’s so different in Python?
Previously, we saw that threads of the same process might expand to the second core or more if required. Unlike C++ or Java, Python’s multithreading cannot expand to the second core by default no matter how many threads you create or how many cores the computer might have. All the threads will be run in a single core. Why? It’s to make the program thread-safe.
We know that memory space is shared between threads. So let's say you have a variable named counter which has a value of 3 (counter = 3). Now, if thread A is modifying the counter variable, thread B should wait for thread A to complete. If both of them try to modify the variable at the same time, there will be a race condition and the final value will be inconsistent. So there should be some locking mechanism that can prevent the race condition.
Java and C++ use some other kind of locking mechanism to prevent the race condition. Python uses GIL.
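To make the race concrete, here is a minimal sketch (the counter and function names are illustrative, not from the article) using Python's threading.Lock, the same idea those locking mechanisms implement:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        # Without the lock, this read-modify-write could interleave
        # with the other thread's, losing updates.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000 -- always correct, thanks to the lock
```

Without the `with lock:` line the final value could come out lower than 200000, because `counter += 1` is not atomic at the bytecode level.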
Introducing Global Interpreter Lock (GIL)
We know that Python is an interpreted language and thus runs in an interpreter. The GIL is a mutex that locks the interpreter itself.
Python uses reference counting for memory management. This means that objects created in Python have a reference count variable that keeps track of the number of references that point to the object. When this count reaches zero, the memory occupied by the object is released. Let's look at an example to make this clearer:
import sys

a = 'Hello World'
b = a
c = a

sys.getrefcount(a)  # outputs 4
In the above example, the variable a is referenced in 4 places: during the statements a = 'Hello World', b = a, c = a, and inside the call sys.getrefcount(a) itself.
Now, this reference counting system needs protection from race conditions. Otherwise, threads may try to increase or decrease the reference counts simultaneously. If that happens, it can cause memory leaks or, worse, incorrectly release memory while the object is still in use. This is where the GIL comes into play. It's a single lock on the interpreter itself, enforcing the rule that a thread must hold the interpreter lock before executing any Python bytecode.
Alternative Python Interpreters: One thing to note, the GIL exists only in CPython, the implementation of the Python language written in C. This is the official and most popular implementation, the one you download from the Python website. There are other interpreter implementations of Python, such as Jython (written in Java), IronPython (written in C#) and PyPy (written in Python), and these don't have a GIL. However, they are not as popular, and few libraries support them. Later on you will also see why, although seemingly obstructive, the GIL was chosen as the best solution.
The impact of the GIL on multithreaded Python programs
As the GIL locks the interpreter itself, parallel execution is not possible within a single process. So even if you create one hundred threads, all of them will run on a single core because of the GIL. The following figure clarifies how it would look.
For C++, the hundred threads have been distributed across four cores. For Python, all one hundred threads run on the same core because of the GIL.
The Remedy
It is possible to run a Python program utilizing all the cores available to you. How? By using multiprocessing instead of multithreading. For the record, I will show how to easily create a thread pool and a process pool in Python. There are other ways to create threads and processes, of course, but the one I'm going to show is more than enough in most cases.
In Python, when to use multithreading and when to use multiprocessing?
If your program does I/O heavy tasks, go for multithreading; multiprocessing here would be a bad idea because of the extra overhead. On the other hand, if your program does heavy mathematical calculations, go with multiprocessing; using threads in that case might even decrease performance. If your program does both I/O and CPU related tasks, use a hybrid of both.
Multithreading with ThreadPoolExecutor
Let's say you have a program that scrapes web pages. Web requests are I/O bound operations, so threads are perfect here. Look at the code below:
from concurrent.futures import ThreadPoolExecutor, wait

def scrape_page(url):
    # ... scraping logic
    # remove the following line when you have written the logic
    raise NotImplementedError

def batch_scrape(urls):
    tasks = []
    with ThreadPoolExecutor(max_workers=8) as executor:
        for url in urls:
            # The first argument to executor.submit is the function to
            # execute; all arguments after that are passed to that function.
            tasks.append(executor.submit(scrape_page, url))
    wait(tasks)

if __name__ == "__main__":
    urls = ['https://facebook.com']
    batch_scrape(urls)
In the given code example, 8 threads will be spawned, as specified by the max_workers argument.
Multiprocessing with ProcessPoolExecutor
Fortunately, ThreadPoolExecutor and ProcessPoolExecutor use the same interface, so the code will be almost identical to the previous one. For this example, we will assume we're encoding video files, which is a CPU intensive task:
from concurrent.futures import ProcessPoolExecutor, wait

def encode_video(file):
    # ... encoding logic
    # remove the following line when you have written the logic
    raise NotImplementedError

def batch_encode(files):
    tasks = []
    with ProcessPoolExecutor(max_workers=4) as executor:
        for file in files:
            tasks.append(executor.submit(encode_video, file))
    wait(tasks)

if __name__ == "__main__":
    file_paths = ['file1.mp4', 'file2.mp4']
    batch_encode(file_paths)
Here, 4 processes will be created. All four processes can run in parallel because each process has its own interpreter, so the GIL limitation doesn't matter in this case.
Why GIL?
Now that you know how to run your Python program utilizing multiple cores, you might wonder why Python uses the GIL instead of other solutions, and why the GIL isn't simply removed.
Python is a very popular and widely used language. Along with its many good sides, it has some apparent bad sides like the GIL (or is it really a bad side? Let's see). In fact, if it were not for the GIL, Python might not be so popular nowadays. Here are the reasons:
- Other languages like Java and C++ use different locking mechanisms, but at the cost of decreased performance for single-threaded programs. To overcome the single-threaded performance issue, they use things like JIT compilers.
- If you try to use multiple fine-grained locks, deadlocks become possible, and constantly acquiring and releasing locks creates performance bottlenecks. It's not easy to overcome these problems while keeping the language's features intact. The GIL is a single lock and simple to implement.
- Python is popular and widely used largely because of its underlying support for C extension libraries, and those C libraries needed a thread-safe solution. The GIL is a single lock on the interpreter, so there is no chance of deadlock, and it is simpler to implement and maintain. Ultimately the GIL was chosen to support all those C extensions.
- Developers and researchers have tried to remove the GIL in the past, but the result was a significant performance drop for single-threaded applications (note that most general applications are single-threaded). The underlying C libraries on which Python heavily depends also broke completely. A component as central as the GIL cannot be removed without causing backward compatibility issues or slowing things down. Still, researchers keep trying to get rid of the GIL, and it remains a topic of interest for many.
- Ultimately, the GIL's limitations turn out not to matter much when writing large and complex applications. After all, multiprocessing is still there to solve such problems, and today's computers have enough resources and memory to handle the multiprocessing-related overhead.
That’s it for today. I hope you learnt something new and interesting. Also, if you have any questions or feedback, drop a comment below. Thank you.
https://ahmedsadman.medium.com/multiprocessing-multithreading-and-gil-essential-concepts-for-every-python-developer-1e1ce94509da?source=post_page-----1e1ce94509da-----------------------------------
Provided by: libbobcat-dev_3.19.01-1ubuntu1_amd64
NAME
FBB::ISymCryptStreambuf - Input Filtering stream buffer doing symmetric encryption
SYNOPSIS
#include <bobcat/isymcryptstreambuf>
Linking option: -lbobcat -lcrypto
DESCRIPTION
    ────────────────────────────────────────────────────────────────
    method      keysize   blocksize   mode   identifier
                (bytes)   (bytes)
    ────────────────────────────────────────────────────────────────
    AES         16        16          CBC    "aes-128-cbc"
                                      ECB    "aes-128-ecb"
                                      CFB    "aes-128-cfb"
                                      OFB    "aes-128-ofb"
                24                    CBC    "aes-192-cbc"
                                      ECB    "aes-192-ecb"
                                      CFB    "aes-192-cfb"
                                      OFB    "aes-192-ofb"
                32                    CBC    "aes-256-cbc"
                                      ECB    "aes-256-ecb"
                                      CFB    "aes-256-cfb"
                                      OFB    "aes-256-ofb"
    ────────────────────────────────────────────────────────────────
    BLOWFISH    16        8           CBC    "bf-cbc"
                                      ECB    "bf-ecb"
                                      CFB    "bf-cfb"
                                      OFB    "bf-ofb"
                max key length is 56 bytes, 16 generally used
    ────────────────────────────────────────────────────────────────
    CAMELLIA    16        16          CBC    "camellia-128-cbc"
                                      ECB    "camellia-128-ecb"
                                      CFB    "camellia-128-cfb"
                                      OFB    "camellia-128-ofb"
                24                    CBC    "camellia-192-cbc"
                                      ECB    "camellia-192-ecb"
                                      CFB    "camellia-192-cfb"
                                      OFB    "camellia-192-ofb"
                32                    CBC    "camellia-256-cbc"
                                      ECB    "camellia-256-ecb"
                                      CFB    "camellia-256-cfb"
                                      OFB    "camellia-256-ofb"
    ────────────────────────────────────────────────────────────────
    CAST        16        8           CBC    "cast-cbc"
                                      ECB    "cast-ecb"
                                      CFB    "cast-cfb"
                                      OFB    "cast-ofb"
                min key length is 5 bytes, max is shown
    ────────────────────────────────────────────────────────────────
    DES         8         8           CBC    "des-cbc"
                                      ECB    "des-ecb"
                                      CFB    "des-cfb"
                                      OFB    "des-ofb"
    ────────────────────────────────────────────────────────────────
    DESX        8         8           CBC    "desx-cbc"
    ────────────────────────────────────────────────────────────────
    3DES        16        8           CBC    "des-ede-cbc"
                                      ECB    "des-ede"
                                      CFB    "des-ede-cfb"
                                      OFB    "des-ede-ofb"
    ────────────────────────────────────────────────────────────────
    3DES        24        8           CBC    "des-ede3-cbc"
                                      ECB    "des-ede3"
                                      CFB    "des-ede3-cfb"
                                      OFB    "des-ede3-ofb"
                Key bytes 9-16 define the 2nd key,
                bytes 17-24 define the 3rd key
    ────────────────────────────────────────────────────────────────
    RC2         16        8           CBC    "rc2-cbc"
                                      ECB    "rc2-ecb"
                                      CFB    "rc2-cfb"
                                      OFB    "rc2-ofb"
                Key length variable, max. 128 bytes;
                default length is shown
    ────────────────────────────────────────────────────────────────
    RC2-40      5         8           CBC    "rc2-40-cbc"
                obsolete: avoid
    ────────────────────────────────────────────────────────────────
    RC2-64      8         8           CBC    "rc2-64-cbc"
                obsolete: avoid
    ────────────────────────────────────────────────────────────────
    RC4         16        N.A.               "rc4"
                Key length is variable, max. 256 bytes;
                default length is shown.
                Encrypt again to decrypt. Don't use DecryptBuf
    ────────────────────────────────────────────────────────────────
    RC4-40      5         N.A.               "rc4-40"
                obsolete: avoid
    ────────────────────────────────────────────────────────────────
    RC5         16        8           CBC    "rc5-cbc"
                                      ECB    "rc5-ecb"
                                      CFB    "rc5-cfb"
                                      OFB    "rc5-ofb"
                Key length variable, max. 256 bytes;
                rounds 8, 12 or 16, default # rounds is 12
    ────────────────────────────────────────────────────────────────

    The RC4 stream cipher is subject to a well-known attack (cf.) unless the initial 256 bytes produced by the cipher are discarded.
NAMESPACE
FBB All constructors, members, operators and manipulators, mentioned in this man-page, are defined in the namespace FBB.
INHERITS FROM
FBB::IFilterStreambuf
MEMBER FUNCTIONS
All members of FBB::IFilterStreambuf are available, as ISymCryptStreambuf inherits from this class. Overloaded move and/or copy assignment operators are not available.
ENUMERATION
    The Streambuf base class is initialized with a buffer of size filterBufSize, using a lower bound of 100. The parameter ENGINE can be used to specify a hardware acceleration engine, as supported by the used encryption/decryption method. Its default argument value indicates that no hardware acceleration is available. Copy- and move constructors are not available.
FILES
    bobcat/isymcryptstreambuf - defines the class interface
SEE ALSO
bobcat(7), encryptbuf(3bobcat), isymcryptstream(3bobcat), ibase64streambuf(3bobcat), ifilterstreambuf(3bobcat), ofilterstreambuf(3bobcat), std::streambuf.
BUGS
Sep/Oct 2013: due to a change in library handling by the linker (cf. and), libraries that are indirectly required are no longer automatically linked to your program. With BigInt this is libcrypto, which requires programs to link to both bobcat and crypto.
http://manpages.ubuntu.com/manpages/trusty/man3/isymcryptstreambuf.3bobcat.html
Richard Hansen <rhan...@bbn.com> writes: > Here's what I'm trying to say: > > * Given the current definition of "ref" in gitglossary(7), claiming > that a foo-ish is a ref is not entirely incorrect.
Ahh. If you had quoted this a few exchanges ago:

    [[def_ref]]ref::
        A 40-byte hex representation of a <<def_SHA1,SHA-1>> or a name that
        denotes a particular <<def_object,object>>. They may be stored in a
        file under `$GIT_DIR/refs/` directory, or in the `$GIT_DIR/packed-refs`
        file.

I would have immediately understood what you were trying to say. Sorry
about a wasted back-and-forth.

The above is an utterly confused explanation. It explains object names and
mentions as a sidenote that object names _can_ be held in refs. It does
not say what a ref is, in other words.

Before 'packed-refs' was introduced, the right definition would have been

    A file under `$GIT_DIR/refs/` directory that holds an object name.

And packed-refs is a way to coalesce such files into a single file to make
it easier/faster to access. In today's world (after packed-refs was
introduced), probably

    A name that begins with refs/ (e.g. refs/heads/master) that can point
    at an object name. The namespace of refs is hierarchical and different
    subhierarchy is used for different purposes (e.g. the refs/heads/
    hierarchy is used to represent local branches).

is an appropriate rewrite of the above. If we also want to explain the
implementation details of refs, then additionally at the end of the first
paragraph, add:

    ... at an object name, by storing its 40-byte hex representation. They
    are implemented as either a file in $GIT_DIR/refs/ directory (called
    "loose refs") or an entry in $GIT_DIR/packed-refs file (called "packed
    refs"); when a loose ref exists, a packed ref of the same name is
    ignored.

--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majord...@vger.kernel.org
More majordomo info at
https://www.mail-archive.com/git@vger.kernel.org/msg30208.html
|
posted January 01, 2004 06:23 AM
Andrew: Also - your entire concept requires the book() method to be on the server.
Javini: Thanks for your response. I'll study the referenced link topic discussion.
[Can] the cookie simply be a hash value for the current thread?
In general, Sun's documents are so ambiguous, that what probably happens is that people bring their experience to their reading; so, for me, it never even occurred to me to expose lock and unlock to the client; I would never do this in real life, and I never even considered it as something Sun even requested; so, I certainly have no intention whatsoever of exposing lock and unlock to the client (bad idea, bad design, no way).
This thread began, because I was curious if I would be automatically failed if I only trivially implement lock(), unlock(), and isLocked(), as they are not needed at all for my server to keep the database file from being corrupted.
1. It's a must requirement to implement DBMain in Data; but, the Java programming language allows an implementation of a Java interface to be an empty method: public void lock() {} and thus, as far as meeting the requirements, it's implemented.
2. "Server: Locking: Your server must be capable of handling multiple concurrent requests, and as part of this capability, must provide locking functionality AS SPECIFIED in the interface provided above." So, specifications and implementations are two different things (again, I'm being a language lawyer here): the interface specifies what must be done, the implementation are empty methods; and, my design implements the locking using synchronized methods in my BusinessRequests class.
The DBMain interface states the intention of the server.
And, probably the best thing to do is not argue with Sun as a language lawyer, but to use your language lawyer skills while thinking things through, but justify your final design and implementation with accepted principles.
Okay, I knew I left something out; here are Sun's supplied comments for these three methods as found in the DBMain Java interface code: lock(): "Locks a record so that it can only be updated or deleted by this client. ..." Well, what is "this client" mean; if my threads are controlled, I don't care exactly which client "this client" is, right? unlock(): "Releases the lock on a record." isLocked(): "Determines if a record is currently locked. ..." So, in conclusion, I see nothing in my particular specifications which mandates that locking a record and associating this lock to a specific client is required. If this assertion is true, then perhaps I've gotten an easier version of the assignment than other postings I've seen here.
multiple clients use the server, and multithreaded remote method of server invokes singleton DataManager methods with each business method (such as "book()") synchronized which in turn uses Data which in turn uses a RandomAccessFile which in turn uses a physical random access file.
Except for the one must condition that I implement the DBMain Java interface, in which case I need to implement lock(), unlock(), and isLocked() even though to do so would, it appears to me, be completely silly given my design.
I would read that differently. It is clear from another part of the instructions that the Data class must implement the interface. And the server instructions state that the server must provide the same functionality as well. I think this is a reasonably standard request: you have a standard interface which multiple clients use, now you want to provide it over a different medium (in our case RMI, but it could be over SOAP or MQ), and you want to keep the same interface.
Do you have the instructions "Portions of your submission will be analyzed by software; where a specific spelling or structure is required, even a slight deviation could result in automatic failure.".
Yes, justification is the big thing. Spend a lot of time making sure that it is explicit what you are doing and why.
Originally posted by Andrew Monkhouse: Hi Javini, I think your code will work, and as you have noted it does not need the lock methods. The only problem with this concept is that you have created a bottleneck in a place where no bottleneck is required. Consider a simple case: ...lines deleted... But lets consider a future enhancement to the business logic. We want to calculate exchange rates at the time of booking (to get the most favourable exchange rate), which has to be calculated on number of beds requested: ..lines deleted... Hmmm - clients are blocked for longer now. The second client has to wait until the first client has finished all that work before they have any chance of finding out if the record is still available. Lets stop that method from being synchronized, and use some locking, and see what happens:
Client A              Client B
================      ================
lockRecord(5)
                      lockRecord(4)
readRecord(5)
                      readRecord(4)
if available          if available
    getExchangeRate       getExchangeRate
    calculatePrice        calculatePrice
    updateRecord          updateRecord
endif                 endif
unlockRecord(5)
                      unlockRecord(4)
See all those simultaneous lines of code? We have reduced the bottleneck. Regards, Andrew
http://www.coderanch.com/t/184806/java-developer-SCJD/certification/NX-Locking-Unlocking-Sun-Conditions
|
Holy cow, I wrote a book!
We saw some time ago that if somebody
invites you to a meeting in Building 7,
they are inviting you off campus to take a break from work.
If somebody invites you to a meeting in
Building 109,
Conference Room A,
they are inviting you to join them at the
Azteca Mexican restaurant next door.
Update:
One of the members of the "Building 109 Conference Room A"
mailing list
informed me that Building 109 Conference Room A is specifically
the bar at the Azteca restaurant.
Update 2:
Building 109 Conference Room A has its own mailing list!
A customer had this question:
I'd like to know how to get a window to remain visible,
even when the user has selected Show Desktop.
Right now, when the user picks Show Desktop,
the desktop appears and covers my window.
Um, yeah, because that's the whole point of Show Desktop:
To show the desktop and get rid of all those windows that are in the way.
Windows like yours.
We're sorry that Windows was unprepared for
a program as awesome as yours,
because there's no way to mark your window as
"even if the user says to show the desktop instead of this window,
override the user's command and show the window anyway."
(They're probably
angling for a nice bonus.)
As a consolation prize, you can create a
desktop gadget.
Desktop gadgets are part of the desktop and raise with it.
It so happens that upon further discussion, the customer was
trying to write a clock-type program—this is something very
well-suited to gadgetification.
A different customer had a related question,
but disguised it behind another question:
I noticed that desktop gadgets remain on the desktop even if
the user clicks Show Desktop.
How does that work?
How do gadget stay in front of the desktop when it is shown?
What is the trick?
This was a rather odd question to come through the customer channel.
And it probably wasn't just idle curiosity.
You don't burn a support request for idle curiosity. The
customer liaison confirmed that that's what the customer
is actually trying to do,
but that the customer was being coy with the liaison as well
and did not explain what the problem scenario was that made them
think that they needed a program that is exempt from being covered
by the desktop when the user clicks Show Desktop.
The customer liaison went back to the customer with the explanation
that the way to get the special gadget behavior is to be a gadget,
and if they want to pursue it, then writing a gadget is what they need
to do.
Which comes to a third reason why there is no feature for
right-aligned toolbar buttons:
Because you can already do it yourself without too much effort.
By default,
when you ask
MultiByteToWideChar to convert
a UTF-8 string to UTF-16 that contains illegal sequences
(such as overlong sequences),
it will try to muddle through as best as it can.
If you want it to treat illegal sequences as an error,
pass the MB_ERR_INVALID_CHARS flag.
The MSDN documentation on this subject is, to be honest,
kind of hard to follow and even includes a double-negative:
"The function does not drop illegal code points if the application
does not set this flag."
Not only is this confusing, it doesn't even say what happens to illegal
code points when you omit this flag;
all it says is what it doesn't do, namely that it doesn't drop them.
Does it set them on fire?
(Presumably, if you omit the flag, then it retains illegal code points,
but how do you retain an illegal UTF-8 code point in UTF-16 output?
It's like saying about a function like atoi
"If the value cannot be represented as an integer,
it is left unchanged." Huh? The function still has to return an integer.
How do you return an unchanged string as an integer?)
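As an analogy from another environment (this is a hedged illustration of the same permissive-versus-strict distinction, not of MultiByteToWideChar itself), Python's UTF-8 codec makes both behaviors explicit:

```python
bad = b'\xc0\xaf'  # an overlong (illegal) UTF-8 encoding of '/'

# Permissive: muddle through, substituting U+FFFD for illegal sequences.
print(bad.decode('utf-8', errors='replace'))  # two replacement characters

# Strict (the analogue of MB_ERR_INVALID_CHARS): treat them as an error.
try:
    bad.decode('utf-8')
except UnicodeDecodeError as e:
    print('invalid sequence rejected:', e.reason)
```

Either way the decoder must produce *something* well-defined; "retaining" the illegal bytes in the output is not one of the options.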
A customer
requested assistance with their shell namespace extension,
and the request worked its way to me for resolution.
Unhandled exception at 0x76fab89c (shell32.dll) in explorer.exe: 0xC0000005:
Access violation reading location 0x00000000.
shell32.dll!CShellItem::_GetPropertyStoreWorker() + 0x44 bytes
shell32.dll!CShellItem::GetPropertyStoreForKeys() + 0x38 bytes
thumbcache.dll!CThumbnailCache::_GetMonikerDataFromShellItem() + 0x8b bytes
thumbcache.dll!CThumbnailCache::GetThumbnail() + 0x11c bytes
shell32.dll!CSetOperationCallback::_LookupThumbnail() + 0x8d bytes
shell32.dll!CSetOperationCallback::_PrefetchCachedThumbnails() + 0xb6 bytes
shell32.dll!CSetOperationCallback::OnNextBatch() + 0x4f bytes
shell32.dll!CEnumTask::_PushBatchToView() + 0x68 bytes
shell32.dll!CEnumTask::_IncrFillEnumToView() + 0x2ca5 bytes
shell32.dll!CEnumTask::_IncrEnumFolder() + 0x8da5a bytes
shell32.dll!CEnumTask::InternalResumeRT() + 0xa
I was at a loss because the customer's code was
nowhere on the stack.
What is wrong?
The customer didn't provide a dump file or any other information
beyond the stack trace.
(Hint: When reporting a problem with a shell namespace extension,
at least mention the last few method calls your namespace extension
received before the crash.)
I was forced to use my psychic powers to solve the problem.
But you can, too.
All the information you need is right there in front of you.
The shell faulted on a null pointer in the function
CShellItem::_GetPropertyStoreWorker,
which from its name is clearly a worker function which
obtains the property store from a shell item.
At this point, you put on your thinking cap.
Why is the shell taking a null pointer fault trying to retrieve
the property store from a shell item?
Remember that the problem is tied to a custom namespace extension.
My psychic powers tell me that the namespace extension
returned S_OK
from GetUIObjectOf(IPropertyStoreFactory)
but set the output pointer to NULL.
(It turns out my psychic powers were weak without coffee, because
the initial psychic diagnosis was GetUIObjecttOf(IPropertyStore)
instead of IPropertyStoreFactory.)
As a general rule, if your function fails, then you should
return a failure code, not a success code.
There are exceptions to this rule, particular when OLE automation
is involved, but it's a good rule to start with.
The customer reported that fixing their
IShellFolder::BindToObject to return an error code
when it failed fixed the problem.
The customer then followed up with another crash, again providing
startling little information.
Unhandled exception at 0x763cf7e7 (shell32.dll) in explorer.exe: 0xC0000005:
Access violation reading location 0x000a0d70.
Call Stack:
shell32.dll!CInfotipTask::InternalResumeRT() + 0x2
The customer reported that
IQueryInfo::SetInfoTip is getting called.
The customer liaison added,
"Raymond, I'm looking forward to your psychic powers again."
Apparently, some people don't understand that psychic powers are not
something you ask for.
It's my way of scolding you for not providing enough information
to make a quality diagnosis possible.
You don't come back saying,
"Hey, thanks for answering my question even though I did a crappy job
of asking it.
Here's another crappy question!"
I reported back that my psychic powers were growing weary from overuse,
and that the customer might want to expend a little more time investigating
the problem themselves.
Especially since it has the same root cause as their previous problem.
Resources in PE-format files must be stored at offsets which are
a multiple of four.
This requirement is necessary for platforms which are
sensitive to data alignment.
That doesn't stop people from breaking the rules anyway.
After all, it sort of works anyway, as long as you're careful.
I mean, sure maybe if somebody running a non-x86 version of Windows
tries to read your resources, they will crash, but who uses
non-x86 versions of Windows, right?
In Windows Vista SP1,
additional hardening was added to the resource parsing code to
address various security issues, but the one that's important today
is that tests were made to verify that the data were properly aligned
before accessing it.
This prevents a file with a misaligned version
resource from crashing any program that tried to read its resources.
In particular, it is common for programs to read the version resources
of arbitrary files—for example,
Explorer does it when you view the file's
properties or if you turn on the Description column in Details view—so
enforcing alignment on resources
closes that avenue of remote denial of service.
And then the bug reports came in.
"Program XYZ fails to install" because the program tries to read
its own version resources and cannot,
because the tool they used to build the program cheated on the
alignment requirement and stored the resources at offsets that aren't
multiples of 4.
"I mean, come on, that wastes like three bytes per resource.
Everything still worked when we
removed the alignment padding, so we went ahead and shipped it that way."
Another example of a program that stopped working when the alignment
rules were enforced was a computer game expansion pack which could
not install because the code that tried to verify that you had the
base game found itself unable to read its version resources.
Multiple programs
refused to run, preferring to display the error message
"AppName is not a valid Win32 application."
Presumably, as part of initialization,
they tried to read their own version resources,
which failed with ERROR_BAD_EXE_FORMAT,
which they then showed to the user.
The fix was to relax the enforcement of the rules back to the
previous level, and impose the stricter requirements only on
architectures which raise exceptions on misaligned data.
It does mean that you can have a program whose resources can
be read on one machine but not on the other,
but that was deemed a lesser evil than breaking all the programs
which relied on being able to misalign their data without consequence.
The Operations group at Microsoft manage the servers which keep
the company running.
And they have their own jargon which is puzzling to those of us
who don't spend all our days in a noisy server room.
http://blogs.msdn.com/b/oldnewthing/archive/2011/06.aspx?PageIndex=2
|
We've had a PGB app up for some time and never had any problems with versioning before, but something seems to have changed since moving from 6.1.0-cli to 6.3.0-cli. The Builds tab of the app reports the expected version: 2.3.1. But when I upload the prod APK to the Google Play Store, it is reported as Version 4151 (1.0). The previous version is up as 4150 (2.3.0).
The version name is set to 2.3.1 in the widget's version attribute, as we always do. The versionCode is incremented, as we always do. The namespace declarations are similarly unchanged. Everything seems to be in line with Config.xml - Apache Cordova .
<widget id="our.example.app" version="2.3.1" versionCode="4151" android-versionCode="4151">
When I use apktool on the APK, it seems that neither of these values makes it into AndroidManifest.xml — but they weren't included in the previous versions, either. So where is the version name set? How can I correct the version name, either in the config.xml or perhaps by altering and re-signing the APK locally?
1. What is the attribute android-versionCode doing there?
It should not be an attribute of the widget element, but rather a preference, as in
<preference name="android-versionCode" value="4151" />
This is only used in combination with the Gradle build tool.
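For reference, a minimal config.xml arrangement consistent with this advice might look like the following (the id and version values are the ones from this thread; the namespace declarations are illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<widget id="our.example.app" version="2.3.1"
        xmlns="http://www.w3.org/ns/widgets"
        xmlns:gap="http://phonegap.com/ns/1.0">
    <!-- android-versionCode belongs in a preference, not on <widget> -->
    <preference name="android-versionCode" value="4151" />
</widget>
```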
2. The id, version, description and other meta data are specified in the config.xml.
If your built .apk doesn't contain what was specified, then the odds are that your config was not found, read and parsed, due to:
- activated Hydration, or
- wrong directory structure for PGB
If the former, disable Hydration and rebuild.
Regarding the latter: can you confirm that you have both config.xml and index.html in the root directory ("/") of your zip file, and that no other file called index.html exists in your assets?
Thanks for the quick response. We've had versionCode and android-versionCode in <widget> since cli-5.2.0. I will try moving the latter to a <preference>. I can confirm that index.html and config.xml are both located in the root, and those are the only files with those names in the install. Hydration is definitely off.
I tried removing the attributes, replacing them with <preference> settings, and with moving the <preference> settings under <platform>. These changes had no effect; when installed on our test devices, and when uploaded to the Google Play Developer Console, the version name was still reported as 1.0.
But I did discover the problem: a custom GitHub plugin we introduced with this version. The plugin bundled a build.gradle file with a defaultConfig block whose versionName setting seems to have been overriding the version name set in config.xml. I changed versionName here to 2.3.1, and this is now reflected when we build the app on PGB. Most of the contents of this file seem to be placeholders or otherwise ignored, so I'll need to work with our plugin developer to see what the best way to handle this is. But at least it's enough to get our release out.
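For reference, the kind of plugin-bundled build.gradle that causes this looks roughly like the following (the values shown are illustrative, not the plugin's actual file); when the plugin's Gradle file is merged into the build, these defaultConfig settings win over the values derived from config.xml:

```groovy
// build.gradle shipped inside the plugin (illustrative sketch)
android {
    defaultConfig {
        // These override the version name/code derived from
        // config.xml, so they must be kept in sync or removed.
        versionName "2.3.1"
        versionCode 4151
    }
}
```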
https://forums.adobe.com/thread/2207652
15 November 2010 18:53 [Source: ICIS news]
WASHINGTON (ICIS)--US retail sales jumped 1.2% in October, chiefly on automobile sales.
In its monthly report, the department said that retail and food services sales in October rose to $373.1bn (€272.4bn) from September’s $368.6bn, with both figures seasonally adjusted.
October’s retail sales also were 7.3% ahead of the same month in 2009, and sales minus automobiles and related parts were 6% higher than the year-earlier month.
Economists had expected a gain of about 0.8% for retail sales last month, so the much stronger 1.2% advance was seen as another good indication that consumers are becoming slightly more confident.
Consumer spending is the principal driving force of the US economy.
October’s retail sales minus automobiles and automotive parts showed a 0.4% gain.
While that figure was slightly lower than the 0.5% advance that many economists had forecast, it was regarded as significant in terms of consumers’ earlier reluctance to spend on anything other than essentials.
http://www.icis.com/Articles/2010/11/15/9410668/us-retail-sales-jump-1.2-in-oct-chiefly-on-auto-sales.html
Hi Sanjay,
I have different opinions about what's important and how to eventually
integrate this code, and that's not because I'm "conveniently ignoring"
your responses. I'm also not making some of the arguments you claim I am
making. Attacking arguments I'm not making is not going to change my mind,
so let's bring it back to the arguments I am making.
Here's what it comes down to: HDFS-on-HDSL is not going to be ready in the
near-term, and it comes with a maintenance cost.
I did read the proposal on HDFS-10419 and I understood that HDFS-on-HDSL
integration does not necessarily require a lock split. However, there still
needs to be refactoring to clearly define the FSN and BM interfaces and
make the BM pluggable so HDSL can be swapped in. This is a major
undertaking and risky. We did a similar refactoring in 2.x which made
backports hard and introduced bugs. I don't think we should have done this
in a minor release.
Furthermore, I don't know what your expectation is on how long it will take
to stabilize HDSL, but this horizon for other storage systems is typically
measured in years rather than months.
Both of these feel like Hadoop 4 items: a ways out yet.
Moving on, there is a non-trivial maintenance cost to having this new code
in the code base. Ozone bugs become our bugs. Ozone dependencies become our
dependencies. Ozone's security flaws are our security flaws. All of this
negatively affects our already lumbering release schedule, and thus our
ability to deliver and iterate on the features we're already trying to
ship. Even if Ozone is separate and off by default, this is still a large
amount of code that comes with a large maintenance cost. I don't want to
incur this cost when the benefit is still a ways out.
We disagree on the necessity of sharing a repo and sharing operational
behaviors. Libraries exist as a method for sharing code. HDFS also hardly
has a monopoly on intermediating storage today. Disks are shared with MR
shuffle, Spark/Impala spill, log output, Kudu, Kafka, etc. Operationally
we've made this work. Having Ozone/HDSL in a separate process can even be
seen as an operational advantage since it's isolated. I firmly believe that
we can solve any implementation issues even with separate processes.
This is why I asked about making this a separate project. Given that these
two efforts (HDSL stabilization and NN refactoring) are a ways out, the
best way to get Ozone/HDSL in the hands of users today is to release it as
its own project.
I'm excited about the possibilities of both HDSL and the NN refactoring in
ensuring a future for HDFS for years to come. A pluggable block manager
would also let us experiment with things like HDFS-on-S3, increasingly
important in a cloud-centric world. CBlock would bring HDFS to new usecases
around generic container workloads. However, given the timeline for
completing these efforts, now is not the time to merge.
Best,
Andrew
On Thu, Mar 1, 2018 at 5:33 PM, Daryn Sharp <daryn@oath.com.invalid> wrote:
> I’m generally neutral and looked foremost at developer impact. Ie. Will
> it be so intertwined with hdfs that each project risks destabilizing the
> other? Will developers with no expertise in ozone be impeded? I
> think the answer is currently no. These are the intersections and some
> concerns based on the assumption ozone is accepted into the project:
>
>
> Common
>
> Appear to be a number of superfluous changes. The conf servlet must not be
> polluted with specific references and logic for ozone. We don’t create
> dependencies from common to hdfs, mapred, yarn, hive, etc. Common must be
> “ozone free”.
>
>
> Datanode
>
> I expected ozone changes to be intricately linked with the existing blocks
> map, dataset, volume, etc. Thankfully it’s not. As an independent
> service, the DN should not be polluted with specific references to ozone.
> If ozone is in the project, the DN should have a generic plugin interface
> conceptually similar to the NM aux services.
>
>
> Namenode
>
> No impact, currently, but certainly will be…
>
>
> Code Location
>
> I don’t feel hadoop-hdfs-project/hadoop-hdfs is an acceptable location.
> I’d rather see hadoop-hdfs-project/hadoop-hdsl, or even better
> hadoop-hdsl-project. This clean separation will make it easier to later
> spin off or pull in depending on which way we vote.
>
>
> Dependencies
>
> Owen hit upon this before I could send. Hadoop is already bursting with
> dependencies, I hope this doesn’t pull in a lot more.
>
>
> ––
>
>
> Do I think ozone be should be a separate project? If we view it only as a
> competing filesystem, then clearly yes. If it’s a low risk evolutionary
> step with near-term benefits, no, we want to keep it close and help it
> evolve. I think ozone/hdsl/whatever has been poorly marketed and an
> umbrella term for too many technologies that should perhaps be split. I'm
> interested in the container block management. I have little interest at
> this time in the key store.
>
>
> The usability of ozone, specifically container management, is unclear to
> me. It lacks basic features like changing replication factors, append, a
> migration path, security, etc - I know there are good plans for all of it -
> yet another goal is splicing into the NN. That’s a lot of high priority
> items to tackle that need to be carefully orchestrated before contemplating
> BM replacement. Each of those is a non-starter for (my) production
> environment. We need to make sure we can reach a consensus on the block
> level functionality before rushing it into the NN. That’s independent of
> whether allowing it into the project.
>
>
> The BM/SCM changes to the NN are realistically going to be contentious &
> destabilizing. If done correctly, the BM separation will be a big win for
> the NN. If ozone is out, by necessity interfaces will need to be stable
> and well-defined but we won’t get that right for a long time. Interface
> and logic changes that break the other will be difficult to coordinate and
> we’ll likely veto changes that impact the other. If ozone is in, we can
> hopefully synchronize the changes with less friction, but it greatly
> increases the chances of developers riddling the NN with hacks and/or ozone
> specific logic that makes it even more brittle. I will note we need to be
> vigilant against pervasive conditionals (ie. EC, snapshots).
>
>
> In either case, I think ozone must agree to not impede current hdfs work.
> I’ll compare to hdfs is a store owner that plans to maybe retire in 5
> years. A potential new owner (ozone) is lined up and hdfs graciously gives
> them no-rent space (the DN). Precondition is help improve the store.
> Don’t make a mess and expect hdfs to clean it up. Don’t make renovations
> that complicate hdfs but ignore it due to anticipation of its
> departure/demise. I’m not implying that’s currently happening, it’s just
> what I don’t want to see.
>
>
> We as a community and our customers need an evolution, not a revolution,
> and definitively not a civil war. Hdfs has too much legacy code rot that
> is hard to change. Too many poorly implemented features. Perhaps I’m
> overly optimistic that freshly redesigned code can counterbalance
> performance degradations in the NN. I’m also reluctant, but realize it is
> being driven by some hdfs veterans that know/understand historical hdfs
> design strengths and flaws.
>
>
> If the initially cited issues are addressed, I’m +0.5 for the concept of
> bringing in ozone if it's not going to be a proverbial bull in the china shop.
>
>
> Daryn
>
> On Mon, Feb 26, 2018 at 3:18 PM, Jitendra Pandey <jitendra@hortonworks.com
> >
> wrote:
>
> > Dear folks,
> > We would like to start a vote to merge HDFS-7240 branch into
> > trunk. The context can be reviewed in the DISCUSSION thread, and in the
> > jiras (See references below).
> >
> > HDFS-7240 introduces Hadoop Distributed Storage Layer (HDSL), which
> is
> > a distributed, replicated block layer.
> > The old HDFS namespace and NN can be connected to this new block
> layer
> > as we have described in HDFS-10419.
> > We also introduce a key-value namespace called Ozone built on HDSL.
> >
> > The code is in a separate module and is turned off by default. In a
> > secure setup, HDSL and Ozone daemons cannot be started.
> >
> > The detailed documentation is available at
> >
> > Hadoop+Distributed+Storage+Layer+and+Applications
> >
> >
> > I will start with my vote.
> > +1 (binding)
> >
> >
> > Discussion Thread:
> >
> >
> >
> > Jiras:
> >
> >
> >
> >
> >
> >
> > Thanks
> > jitendra
> >
> >
> >
> >
> >
> > DISCUSSION THREAD SUMMARY :
> >
> > On 2/13/18, 6:28 PM, "sanjay Radia" <sanjayosrc@gmail.com>
> > wrote:
> >
> > Sorry the formatting got messed by my email client. Here
> > it is again
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
>
>
> --
>
> Daryn
>
http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201803.mbox/%3CCAGB5D2Y=id7YfJLBfKde-D=WuZzmD0WOLKh+a8GWqh6yQyMNJw@mail.gmail.com%3E
Apache Kafka is a distributed publish-subscribe messaging system and a robust queue that can handle a high volume of data and enables you to pass messages from one end-point to another.
Generally, data is published to a topic via the Producer API, and consumers consume data from subscribed topics via the Consumer API.
In this blog, we will see how to do unit testing of kafka.
Unit testing your Kafka code is incredibly important. It’s transporting your most important data. As of now we have to explicitly run zookeeper and kafka server to test the Producer and Consumer.
Now there is also an alternative way to test Kafka without running ZooKeeper and a Kafka broker.
Thinking how ? EmbeddedKafka is there for you.
Embedded Kafka is a library that provides an in-memory Kafka broker to run your ScalaTest specs against. It uses Kafka 0.10.2.1 and ZooKeeper 3.4.8.
It will start zookeeper and kafka broker before the test and stop it after the test.
We also have the facility to start and stop ZooKeeper and the Kafka server programmatically.
How to use ?
Before testing, follow these instructions:
1) Add the following dependency in your build.sbt:
"net.manub" %% "scalatest-embedded-kafka" % "0.14.0" % "test"
2) Have your TestSpec extend the EmbeddedKafka trait.
Using the withRunningKafka closure, it will give a running instance of Kafka. It will automatically start ZooKeeper and a Kafka broker on ports 6000 and 6001 respectively, and automatically shut them down at the end of the test.
class KafkaSpec extends WordSpec with EmbeddedKafka {
  "runs with embedded kafka" should {
    "work" in {
      withRunningKafka {
        // test cases go here
      }
    }
  }
}
An EmbeddedKafka companion object is provided for usage without the EmbeddedKafka trait. ZooKeeper and Kafka can be started and stopped in a programmatic way.
class KafkaSpec extends FlatSpec with EmbeddedKafka with BeforeAndAfterAll {
  override def beforeAll(): Unit = {
    EmbeddedKafka.start()
  }
  // test cases go here
  override def afterAll(): Unit = {
    EmbeddedKafka.stop()
  }
}
EmbeddedKafka also supports custom configurations. For example, it's possible to change the ports on which ZooKeeper and Kafka will be started by providing an implicit EmbeddedKafkaConfig. We can also provide any implicit serializer according to our requirements.
implicit val config = EmbeddedKafkaConfig(kafkaPort = 9092, zookeeperPort = 2182)
implicit val serializer = new StringSerializer
The same implicit EmbeddedKafkaConfig can be used to define custom producer/consumer properties.
The EmbeddedKafka trait also provides some utility methods to interact with embedded kafka, in order to test our kafka producer and consumer.
def publishToKafka(topic: String, message: String): Unit
def consumeFirstMessageFrom(topic: String): String
It also provides many more methods which can be used according to need.
For complete example click here.
The good thing is that we can also test our Kafka stream in a similar way. For that, we have to add the following dependency in build.sbt and extend our spec with EmbeddedKafkaStreamsAllInOne:
"net.manub" %% "scalatest-embedded-kafka-streams" % "0.14.0" % "test"
For more information on testing of kafka stream, you can use links in references.
So, Embedded Kafka has made unit testing of Kafka easier. Besides that, Embedded Kafka is also very easy to use.
Hope, this blog will help you 🙂
References:
-
-
-
4 thoughts on “Unit Testing Of Kafka”
Reblogged this on akashsethi24.
Reblogged this on Mahesh's Programming Blog.
Excellent source … just make a minor fix on kafkaPort and zookeeperPort as they should be camelized 😉
https://blog.knoldus.com/unit-testing-of-kafka/
Hi,
First of all, i'm having difficulties browsing these forums. They don't seem very user friendly, but here i am, starting a topic in what i hope is the best place for it. I couldn't find a better forum for it using the search on the IBM website.
Here is my problem:
When i use:
BEGIN PROGRAM.
import spss, spssaux
dataCursor=spss.Cursor()
dataCursor.SetFetchVarList([int(2)])
dataset = dataCursor.fetchall()
dataCursor.close()
print dataset
END PROGRAM.
from the SPSS Syntax all is well...but when i use the following code in a extension:
import spss
def Run(args):
dataCursor=spss.Cursor()
dataCursor.SetFetchVarList([2])
dataset = dataCursor.fetchall()
dataCursor.close()
print dataset
and when i run it i get the following error:
Warning: An open Cursor was detected while exiting a program block. The Cursor has been closed.
Traceback (most recent call last):
File "<string>", line 14, in <module>
File "C:\PROGRA~2\IBM\SPSS\STATIS~1\22\extensions\ITSTEST.py", line 9, in Run
dataset = dataCursor.fetchall()
File "C:\PROGRA~2\IBM\SPSS\STATIS~1\22\Python\Lib\site-packages\spss\cursors.py", line 1272, in fetchall
data.append(self.binaryStream.fetchData())
File "C:\PROGRA~2\IBM\SPSS\STATIS~1\22\Python\Lib\site-packages\spss\binarystream.py", line 818, in fetchData
return self.readcache(data)
File "C:\PROGRA~2\IBM\SPSS\STATIS~1\22\Python\Lib\site-packages\spss\binarystream.py", line 722, in readcache
currentcase = self.unpackdata(binaryData)
File "C:\PROGRA~2\IBM\SPSS\STATIS~1\22\Python\Lib\site-packages\spss\binarystream.py", line 298, in unpackdata
case = list(struct.unpack_from(self.varBinaryFmt, binarydata, 0))
struct.error: unpack_from requires a buffer of at least 8 bytes
How do i obtain a list of valid codes in the dataset for a given variable from within an extension module?
Answer by Jignesh Sutar (195) | Nov 07, 2014 at 11:08 AM
I'm not entirely sure why you are getting the error message that you are, other than perhaps not having closed a dataCursor that was opened earlier in your code?
However alternatively you can use something like this to read data for obtaining unique values:
This will read a single column of data whose variable name is "YourVarNameHere":
spssdata.Spssdata("YourVarNameHere", names=False).fetchall()
You can then use a set operation to find only the unique values:
allvalues = sorted(list(set(item[0] for item in spssdata.Spssdata("YourVarNameHere", names=False).fetchall())))
For large datasets it is more efficient to aggregate first and then use the code above to read the values obtained
Answer by wpgdewit (0) | Nov 07, 2014 at 12:57 PM
Thanks for the reply (and i found the code button ;)).
My extension (for test purposes) contained only the following code:

import spss
def Run(args):
    dataCursor = spss.Cursor()
    dataCursor.SetFetchVarList([2])
    dataset = dataCursor.fetchall()
    dataCursor.close()
    print dataset
So there was no unclosed cursor. I think the unclosed cursor error comes from another error in the above code; however, the code will run when pasted in the syntax editor, of course without the import spss and def Run(args): lines, and without the indentation.
So one way or the other the code works fine in the syntax editor, but not when used in a python script for my extension.
In the code you provide above i am missing what "spssdata" in spssdata.Spssdata() refers to. I guess it is a variable, but where do you set this variable?
Answer by wpgdewit (0) | Nov 07, 2014 at 01:38 PM
After reading some posts on Stack Overflow on working with binary data (googled on "struct.error: unpack_from requires a buffer of at least 8 bytes") and re-reading page 40 of the "Python Reference Guide for IBM SPSS Statistics.PDF"...
I changed my code to this:

import spss
def Run(args):
    dataCursor = spss.Cursor([2], isBinary = False)
    dataset = dataCursor.fetchall()
    dataCursor.close()
    print dataset
Now it runs even in my test extension. So i still don't know what the problem was...but it has something to do with binary.
If anyone is able to explain it would be great. But my problem seems to be solved for now.
Answer by Jignesh Sutar (195) | Nov 07, 2014 at 01:57 PM
spssdata module is supplied with the software you just need to have it imported
Answer by JonPeck (4671) | Nov 07, 2014 at 02:28 PM
First, if you are using Statistics 22.0.0.0, you should install fix pack 1 for V22. The isbinary option is a performance enhancement and has nothing to do with handling binary data structures. However, in 22.0.0.0, under certain circumstances it does not work. Setting isbinary=False turns that optimization off.
If you are just collecting the set of values that occur in the data, accumulating those as a set using a fetchone call will conserve memory and eliminate duplicate values.
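Jon Peck's fetchone suggestion can be sketched like this. The FakeCursor below is a hypothetical stand-in so the pattern can run outside SPSS; in an extension you would pass a real spss.Cursor instead:

```python
def unique_values(cursor):
    # Accumulate distinct values one case at a time: a set both
    # deduplicates and avoids holding every row in memory at once.
    values = set()
    row = cursor.fetchone()
    while row is not None:
        values.add(row[0])
        row = cursor.fetchone()
    return sorted(values)

# Hypothetical stand-in for an SPSS cursor, for illustration only.
class FakeCursor:
    def __init__(self, rows):
        self._rows = iter(rows)

    def fetchone(self):
        return next(self._rows, None)

print(unique_values(FakeCursor([(2,), (1,), (2,), (3,)])))  # prints [1, 2, 3]
```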
Answer by wpgdewit (0) | Nov 07, 2014 at 02:43 PM
Well i guess that this is just one of those events where unrelated issues lead to a working code. Always funny.
I will install the fixpack. Thought i had that already, but must have been on my other machine.
After that i will try your suggestion of accumulating the values using a fetchone call.
UPDATE: It worked. Thanks!
Answer by wpgdewit (0) | Nov 07, 2014 at 02:46 PM
i am still interested in your suggestion of using Spssdata(), but for now i am going to explore my own code with the fixpack suggested by John Peck and the fetchone alternative he suggested. When i get around trying your advice i'll let you know.
Thanks for the help!
https://developer.ibm.com/answers/questions/226513/$%7Buser.profileUrl%7D/
Given its history, I am not going to be fooled by the apparent simplicity of binary search, or by the obviousness of the fix, especially because I've never used the unsigned bit shift operator (i.e., >>>) in any other code. I am going to test this fixed version of binary search as if I had never heard of it before, nor implemented it before. I am not going to trust anyone's word, or tests, or proofs, that this time it will really work. I want to be confident that it works as it should through my own testing. I want to nail it.
Here's my initial testing strategy (or team of tests):
Start with smoke tests.
Add some boundary value tests.
Continue with various thorough and exhaustive types of tests.
Finally, add some performance tests.
Testing is rarely a linear process. Instead of showing you the finished set of tests, I am going to walk you through my thought processes while I am working on the tests.
Let's get started with the smoke tests. These are designed to make sure that the code does the right thing when used in the most basic manner. They are the first line of defense and the first tests that should be written, because if an implementation does not pass the smoke tests, further testing is a waste of time. I often write the smoke tests before I write the code; this is called test-driven development (or TDD).
Here's my smoke test for binary search:
import static org.junit.Assert.*;
import org.junit.Test;

public class BinarySearchSmokeTest ...
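The excerpt cuts off before the test body, so here is a rough sketch of the same smoke-test idea in Python. The binary_search under test is a plain iterative implementation, not the chapter's Java code; Python integers don't overflow, so the Java-style (low + high) >>> 1 midpoint fix is unnecessary here:

```python
def binary_search(arr, key):
    # Return the index of key in sorted arr, or -1 if absent.
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2  # no overflow risk in Python
        if arr[mid] < key:
            low = mid + 1
        elif arr[mid] > key:
            high = mid - 1
        else:
            return mid
    return -1

# Smoke tests: the most basic uses must pass before any deeper testing.
assert binary_search([], 42) == -1
assert binary_search([42], 42) == 0
assert binary_search([1, 3, 5, 7], 7) == 3
assert binary_search([1, 3, 5, 7], 4) == -1
```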
https://www.safaribooksonline.com/library/view/beautiful-code/9780596510046/ch07s03.html
I have converted the provided samples in test cases for Python 3. I have added a fourth test because I found an ambiguity in the problem description that I wanted to clarify.
def test_provided_1(self):
    self.assertEqual('Merlot', solution('Cabernet Merlot Noir | ot'))

def test_provided_2(self):
    self.assertEqual('Chardonnay Sauvignon', solution('Chardonnay Sauvignon | ann'))

def test_provided_3(self):
    self.assertEqual('False', solution('Shiraz Grenache | o'))

def test_reversed(self):
    self.assertEqual('pinot', solution('pinot | to'))

As you can see in the latest test, the characters in the last word do not say anything about the required search order in the wine name. So 'pinot' is a match for 'to', because it contains both characters, even if in reversed order.
Given this specification, I thought the best way to solve the problem was putting in a dictionary the letters we want to search and their number, and then comparing them in the wine name.
Setting the dictionary
hints = {}
for c in data[1]:
    hints[c] = hints.get(c, 0) + 1

data[1] being the result of splitting the input line on ' | ', I loop on each character. I try to get it from the dictionary; if it is not there, I force get() to return 0 instead of the default None. Then I increment the value returned by get() and store it back in the dictionary.
Selecting the wines
result = []
for wine in wines:  # 1
    for key in hints.keys():  # 2
        if wine.count(key) < hints.get(key):  # 3
            break
    else:  # 4
        result.append(wine)

1. Loop on wines, the list that contains all the words to the left of ' | ', split on the default single-blank separator.
2. Loop on all the characters in the dictionary initialized above.
3. Count the current character in the current wine. If there are fewer instances of it than expected, exit from the for loop through a break.
4. A nice Python feature. When the for loop is completed correctly, meaning no break has interrupted its execution, proceed to its else block, if any. Here it means that all the checks have succeeded, so I push the current wine in the result list.
Conditional join
If there is something in the result list, I should return it as a string, joining each wine name to the next on a blank. However, if the list is empty I should return a not-found message. As a C programmer, I am used to doing it with the infamous ternary operator (?:). It does not exist in Python, but there is a mimicking if-else construct:
return ' '.join(result) if result else 'False'

It could be read in this way: if result is not empty, return the join on result; otherwise return the 'False' string.
Since this solution was accepted by CodeEval, I have pushed the test cases and the Python 3 function source code to GitHub.
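Assembled from the fragments above, the whole function reads roughly like this (same logic, just in one place):

```python
def solution(line):
    data = line.split(' | ')
    wines = data[0].split()
    # Count how many of each character the hint requires.
    hints = {}
    for c in data[1]:
        hints[c] = hints.get(c, 0) + 1
    result = []
    for wine in wines:
        for key in hints:
            if wine.count(key) < hints[key]:
                break
        else:
            result.append(wine)
    return ' '.join(result) if result else 'False'

assert solution('Cabernet Merlot Noir | ot') == 'Merlot'
```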
http://thisthread.blogspot.com/2017/01/codeeval-chardonnay-or-cabernet.html
C2589: '(' : illegal token on right side of '::'
Hello,
I have encountered the same problem found here, and tried all of the recommended solutions:
qdatetime.h gives a syntax error at the line:
static inline qint64 nullJd() { return std::numeric_limits<qint64>::min(); }
The error is:
C:\Qt\Qt5.2.1\5.2.1\msvc2012_64_opengl\include\QtCore\qdatetime.h:122: error: C2589: '(' : illegal token on right side of '::'
I have windows.h included in a header file (which is probably a bad idea), but have added before it:

#define NOMINMAX
#include <limits>
Also in the .pro file I have included DEFINES += NOMINMAX
For some reason this error still persists. Any ideas?
Thanks!
Did you define NOMINMAX before windows.h ?
@
#define NOMINMAX
#include <windows.h>
@
take a look at "this":
try to undef min and max before using min and max from the <limits>
https://forum.qt.io/topic/39782/c2589-illegal-token-on-right-side-of
I have two lists and what I want to do is to insert some of the elements in the middle of one of the lists into another.
Does anyone have any ideas why this piece of code does not compile?
If on Line 13 I use comNode.begin() instead of comNode.begin()+1, the code compiles perfectly though.
Thank you for your help!
#include<iostream>
#include<list>
using namespace std;

int main()
{
    list<int> comNode;
    list<int> erased;

    comNode.push_front(10);
    comNode.push_front(100);
    comNode.push_front(1000);
    comNode.push_front(10000);

    erased.insert(erased.end(), comNode.begin()+1, comNode.end());

    std::list<int>::const_iterator listLocator;
    for (listLocator = erased.begin(); listLocator != erased.end(); listLocator++)
        cout << *listLocator << endl;

    return 0;
}
This post has been edited by machoolah: 27 July 2009 - 04:49 PM
http://www.dreamincode.net/forums/topic/117127-inserting-list-elements-into-another-list/
note BrowserUk

> in the context of the sub GOOD { BAD } example: remember that this is about creating vulnerabilities ... [which] isn't about good programming practice, but of taking advantage of possible weaknesses.

["Vulnerability refers to the inability to withstand the effects of a hostile environment."]

So, the hostiles somehow detect that I'm using two bareword filehandles in my script and then devise a mechanism by which they succeed in injecting a constant subroutine, one that effectively redirects one filehandle as the other, into my script's namespace.

The only way I can see for that to be possible is that they modify the script itself; or, they modify one of the modules my script uses.

If they have access to my filesystem sufficiently to be able to exploit that "vulnerability", don't you think that they might find easier, more direct ways of achieving their nefarious goals? Like maybe just writing whatever they damn please into whatever file they want to corrupt.

There's this vague memory running around my head. Something about shutting doors and horses bolting.
http://www.perlmonks.org/index.pl?displaytype=xml;node_id=1013000
Is it possible to use a REST API in a custom receiver for Spark Streaming?
I am trying to be able to do multiple calls / reads from that API asynchronously and use Spark Streaming to do it.
A custom receiver can be whatever process that produces data asynchronously. Typically, your def receive() method will send async requests to your REST server, maybe using Futures and a dedicated ThreadPool. On completion of each future, we call the store(data) method to give the results to the Spark Streaming job.
In a nutshell:
- def onStart() => creates the process that manages the async request/response handling
- def receive() => continuously does the I/O and reports the results by calling store(...)
- def onStop() => stops the process and cleans up what onStart creates
There's an example in the custom receivers docs.
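Independent of Spark's actual Receiver API, the receive() pattern described above can be illustrated with a plain thread pool: requests are issued asynchronously and each completed future hands its result to a stand-in store(). Everything here (received, fetch_from_rest) is a hypothetical stub, not Spark code:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

received = []  # stand-in for the receiver's store() buffer

def store(record):
    received.append(record)

def fetch_from_rest(request_id):
    # Stub for an HTTP GET against the REST API.
    return "payload-%d" % request_id

def receive(request_ids):
    # Issue all requests concurrently; store each result as it completes.
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(fetch_from_rest, rid) for rid in request_ids]
        for fut in as_completed(futures):
            store(fut.result())

receive([1, 2, 3])
```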
https://codedump.io/share/C8E46VVdpK1m/1/spark-streaming-rest-custom-receiver
Get your application up and running
Adding the route to the main view
We previously replaced the original MainView with our own. The new one does not have an @Route annotation, which we need to set our view as the root route.
Switch back to IntelliJ IDEA.
Expand the src/main/java/com.vaadin.tutorial.crm.ui package and open MainView.java.
Add the @Route("") annotation at the beginning of the MainView class.
Your
MainView class should now look like this:
@Tag("main-view")
@JsModule("./src/views/main-view.js")
@Route("") (1)
public class MainView extends PolymerTemplate<MainView.MainViewModel> {
    // The rest of the file is omitted from the code snippet
}
The @Route("") annotation maps the root route to MainView.
Running the project
Next, we run the project to see how the new layout looks.
The easiest way to run the project for the first time is to:
Open the Application class in src/main/java/com/vaadin/tutorial/crm/Application.java.
Click the green play button next to the line which starts with "public class Application".
This starts the application and automatically adds a run configuration for it in IntelliJ IDEA. Later, when you want to run or restart the application, you can build, run/restart, stop and debug the application from the toolbar:
When the build is finished and the application is running, open the application in your browser to see the result.
Proceed to the next chapter to connect your views to Java: Connecting your Main View to Java
https://vaadin.com/docs/latest/tools/designer/getting-started/get-your-application-up-and-running
Hi,

Just to say I made my own changes in tolua++ 1.0.5 and 1.0.6, and now I think it's a good idea to share what I made. The 1.0.5 patch is a little different.

++ Added defines #TOLUA_DISABLE_AAA to disable the function AAA (to replace it by a manual one, for example). Example:

/* method: doSet of class CmdAlignment */
#ifndef TOLUA_DISABLE_tolua_luaOgre_Ogre_TextAreaOverlayElement_CmdAlignment_doSet00
static int tolua_luaOgre_Ogre_TextAreaOverlayElement_CmdAlignment_doSet00(lua_State* tolua_S)
{
    // function code
}
#endif //#ifndef TOLUA_DISABLE

++ Added the possibility to insert more than one line of C code by using:

// tolua code
${
// C code
$}
// tolua code

++ Added descriptions in Lua chunks: for $lfile it's the file name, and for $[ ... $] it is the first line, so:

$[
-- description
-- lua code
$]

will have the description "-- description". The important thing is that those descriptions appear in stack tracebacks, so it's easier to debug programs.

++ Last thing: indentation in tolua_packagename_open.

The patch files are here: [tolua++ 1.0.5] [tolua++ 1.0.6] (in this version, tabulations are replaced with one space) [tolua++ 1.0.6]

Usage:
cd to tolua++-1.0.5/src/bin
patch -p1 < patchfile.patch
This will patch the lua/ directory; then you need to compile it.
cd to the tolua++ root directory and rebuild the tolua++ C files by:
scons build_dev=1

Mildred
-- <> or <>
E-mail is private correspondence. Please respect that.
http://lua-users.org/lists/lua-l/2005-09/msg00227.html
Using C++ Resumable Functions with Libuv
Previously on this blog we have talked about Resumable Functions, and even recently we touched on the renaming of the yield keyword to co_yield in our implementation in Visual Studio 2017. I am very excited about this potential C++ standards feature, so in this blog post I wanted to share with you a real world use of it by adapting it to the libuv library. You can use the code with Microsoft’s compiler or even with other compilers that have an implementation of resumable functions. Before we jump into code, let’s recap the problem space and why you should care.
Problem Space
Waiting for disks or data over a network is inherently slow and we have all learned (or been told) by now that writing software that blocks is bad, right? For client side programs, doing I/O or blocking on the UI thread is a great way to create a poor user experience as the app glitches or appears to hang. For server side programs, new requests can usually just create a new thread if all others are blocked, but that can cause inefficient resource usage as threads are often not a cheap resource.
However, it is still remarkably difficult to write code that is efficient and truly asynchronous. Different platforms provide different mechanisms and APIs for doing asynchronous I/O. Many APIs don’t have any asynchronous equivalent at all. Often, the solution is to make the call from a worker thread, which calls a blocking API, and then return the result back to the main thread. This can be difficult as well and requires using synchronization mechanisms to avoid concurrency problems. There are libraries that provide abstractions over these disparate mechanisms, however. Examples of this include Boost ASIO, the C++ Rest SDK, and libuv. Boost ASIO and the Rest SDK are C++ libraries and libuv is a C library. They have some overlap between them but each has its own strengths as well.
Libuv is a C library that provides the asynchronous I/O in Node.js. While it was explicitly designed for use by Node.js, it can be used on its own and provides a common cross-platform API, abstracting away the various platform-specific asynchronous APIs. It also exposes a UTF-8-only API even on Windows, which is convenient. Every API that can block takes a pointer to a callback function which will be called when the requested operation has completed. An event loop runs, waits for various requests to complete, and calls the specified callbacks. For me, writing libuv code was straightforward, but it isn't easy to follow the logic of a program. Using C++ lambdas for the callback functions can help somewhat, but passing data along the chain of callbacks requires a lot of boilerplate code. For more information on libuv, there is plenty of information on their website.
There has been a lot of interest in coroutines lately. Many languages have added support for them, and there have been several coroutine proposals submitted to the C++ committee. None have been approved as of yet, but there will likely be coroutine support at some point. One of the coroutine proposals for C++ standardization is resumable functions and the current version of that proposal is N4402, although there are some newer changes as well. It proposes new language syntax for stackless coroutines, and does not define an actual implementation but instead specifies how the language syntax binds to a library implementation. This allows a lot of flexibility and allows supporting different runtime mechanisms.
Adapting libuv to resumable functions
When I started looking at this, I had never used libuv before, so I initially just wrote some code using straight libuv calls and started thinking about how I would like to be able to write the code. With resumable functions, you can write code that looks very sequential but executes asynchronously. Whenever the co_await keyword is encountered in a resumable function, the function will “return” if the result of the await expression is not available.
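The "sequential-looking but asynchronous" shape that co_await gives C++ is the same one Python's async/await provides. As a rough cross-language analogy (this is not the C++ library, just an illustration of the suspension behavior):

```python
import asyncio

async def fetch(n):
    # Suspension point: the analogue of a co_await whose result is not yet
    # available. Control returns to the event loop here.
    await asyncio.sleep(0)
    return n * 2

async def main():
    # Reads top-to-bottom like synchronous code, but each await
    # may suspend the function and later resume it.
    a = await fetch(1)
    b = await fetch(a)
    return b

print(asyncio.run(main()))  # 4
```

The function body never blocks a thread; it is suspended and resumed by the loop, exactly the property the blog post wants from resumable functions over libuv's loop.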
I had several goals in creating this library.
- Performance should be very good.
- Avoid creating a thick C++ wrapper library.
- Provide a model that should feel familiar to existing libuv users.
- Allow mixing of straight libuv calls with resumable functions.
All of the code I show here, including the actual library code and a couple of samples, is available on github and can be compiled using Visual Studio 2015, Visual Studio 2017, or in this branch of Clang and LLVM that implements this proposal. You will also need CMake and libuv installed. I used version 1.8 of libuv on Linux and 1.10.1 on Windows. If you want to use Clang/LLVM, follow these standard instructions to build it.
I experimented with several different ways to bind libuv to resumable functions, and I show two of these in my library. The first (and the one I use in the following examples) uses something similar to std::promise and std::future. There is awaituv::promise_t and awaituv::future_t, which point to a shared state object that holds the “return value” from the libuv call. I put “return value” in quotes because the value is provided asynchronously through a callback in libuv. This mechanism requires a heap allocation to hold the shared state. The second mechanism lets the developer put the shared state on the stack of the calling function, which avoids a separate heap allocation and associated shared_ptr machinery. It isn’t as transparent as the first mechanism, but it can be useful for performance.
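To make the first mechanism concrete, here is a rough Python analogy of the promise_t/future_t pair. The names and structure here are invented for illustration (the real awaituv types are C++): the promise and the future share one heap-allocated state object, the I/O callback plays the role of set_value, and awaiting the future suspends until the value arrives.

```python
import asyncio

class SharedState:
    """One heap allocation shared by the promise and the future."""
    def __init__(self):
        self.value = None
        self.done = False
        self.waiter = None          # set while an awaiter is suspended

class Promise:
    def __init__(self):
        self.state = SharedState()
    def set_value(self, v):
        # Called from the I/O callback when the operation completes.
        self.state.value = v
        self.state.done = True
        if self.state.waiter is not None:
            self.state.waiter.set_result(None)   # resume the awaiter
    def get_future(self):
        return Future(self.state)

class Future:
    def __init__(self, state):
        self.state = state
    def __await__(self):
        if not self.state.done:                  # suspend only if not ready
            self.state.waiter = asyncio.get_running_loop().create_future()
            yield from self.state.waiter.__await__()
        return self.state.value

async def demo():
    p = Promise()
    # Pretend an event-loop callback delivers the result a bit later.
    asyncio.get_running_loop().call_later(0.01, p.set_value, 42)
    return await p.get_future()

print(asyncio.run(demo()))  # 42
```

Note the ready-path in `__await__`: if the value was already delivered (the libuv call failed synchronously, say), the awaiter never suspends, mirroring how the C++ wrapper sets the promise directly on error.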
Examples
Let’s look at a simple example that writes out “hello world” 1000 times asynchronously.
[code lang=”cpp”]
future_t<void> start_hello_world()
{
for (int i = 0; i < 1000; ++i)
{
string_buf_t buf("\nhello world\n");
fs_t req;
(void) co_await fs_write(uv_default_loop(), &req, 1 /*stdout*/, &buf, 1, -1);
}
}
[/code]
A function that uses co_await must have a return type that is an awaitable type, so this function returns a future_t<void>, which implements the methods necessary for the compiler to generate code for a resumable function. This function will loop one thousand times and asynchronously write out “hello world”. The “fs_write” function is in the awaituv namespace and is a thin wrapper over libuv’s uv_fs_write. Its return type is future_t<int>, which is awaitable. In this case, I am ignoring the actual value but still awaiting the completion. The start_hello_world function “returns” if the result of the await expression is not immediately available, and a pointer to resume the function is stored such that when the write completes the function is resumed. The string_buf_t type is a thin wrapper over the uv_buf_t type, although the raw uv_buf_t type could be used as well. The fs_t type is also a thin wrapper over uv_fs_t and has a destructor that calls uv_fs_cleanup. This is also not required to be used but does make the code a little cleaner.
Note: unlike std::future, future_t does not provide a “get” method as that would need to actually block. In the case of libuv, this would essentially hang the program as no callbacks can run unless the event loop is processing. For this to work, you can only await on a future.
Now let’s look at a slightly more complicated example which reads a file and dumps it to stdout.
[code lang=”cpp”]
future_t<void> start_dump_file(const std::string& str)
{
// We can use the same request object for all file operations as they don’t overlap.
static_buf_t<1024> buffer;
fs_t openreq;
uv_file file = co_await fs_open(uv_default_loop(), &openreq, str.c_str(), O_RDONLY, 0);
if (file > 0)
{
while (1)
{
fs_t readreq;
int result = co_await fs_read(uv_default_loop(), &readreq, file, &buffer, 1, -1);
if (result <= 0)
break;
buffer.len = result;
fs_t req;
(void) co_await fs_write(uv_default_loop(), &req, 1 /*stdout*/, &buffer, 1, -1);
}
fs_t closereq;
(void) co_await fs_close(uv_default_loop(), &closereq, file);
}
}
[/code]
This function should be pretty easy to understand as it is written very much like a synchronous version would be written. The static_buf_t type is another simple C++ wrapper over uv_buf_t that provides a fixed size buffer. This function opens a file, reads a chunk into a buffer, writes it to stdout, iterates until no more data, and then closes the file. In this case, you can see we are using the result of the await expression when opening the file and when reading data.
Next, let’s look at a function that will change the text color of stdout on a timer.
[code lang=”cpp”]
bool run_timer = true;
uv_timer_t color_timer;
future_t<void> start_color_changer()
{
static string_buf_t normal = "\033[40;37m";
static string_buf_t red = "\033[41;37m";
uv_timer_init(uv_default_loop(), &color_timer);
uv_write_t writereq;
uv_tty_t tty;
uv_tty_init(uv_default_loop(), &tty, 1, 0);
uv_tty_set_mode(&tty, UV_TTY_MODE_NORMAL);
int cnt = 0;
unref(&color_timer);
auto timer = timer_start(&color_timer, 1, 1);
while (run_timer)
{
(void) co_await timer.next_future();
if (++cnt % 2 == 0)
(void) co_await write(&writereq, reinterpret_cast<uv_stream_t*>(&tty), &normal, 1);
else
(void) co_await write(&writereq, reinterpret_cast<uv_stream_t*>(&tty), &red, 1);
}
//reset back to normal
(void) co_await write(&writereq, reinterpret_cast<uv_stream_t*>(&tty), &normal, 1);
uv_tty_reset_mode();
co_await close(&tty);
co_await close(&color_timer); // close handle
}
[/code]
Much of this function is straightforward libuv code, which includes support for processing ANSI escape sequences to set colors. The new concept in this function is that a timer can be recurring and doesn’t have a single completion. The timer_start function (wraps uv_timer_start) returns a promise_t rather than a future_t. To get an awaitable object, you must call “next_future” on the timer. This resets the internal state such that it can be awaited on again. The color_timer variable is a global so that the stop_color_changer function (not shown) can stop the timer.
Finally, here is a function that opens a socket and sends an http request to google.com.
[code lang=”cpp”]
future_t<void> start_http_google()
{
uv_tcp_t socket;
if (uv_tcp_init(uv_default_loop(), &socket) == 0)
{
// Use HTTP/1.0 rather than 1.1 so that socket is closed by server when done sending data.
// Makes it easier than figuring it out on our end…
const char* httpget =
"GET / HTTP/1.0\r\n"
"Host:\r\n"
"Cache-Control: max-age=0\r\n"
"Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8\r\n"
"\r\n";
const char* host = "";
uv_getaddrinfo_t req;
addrinfo_state addrstate;
if (co_await getaddrinfo(addrstate, uv_default_loop(), &req, host, "http", nullptr) == 0)
{
uv_connect_t connectreq;
awaitable_state<int> connectstate;
if (co_await tcp_connect(connectstate, &connectreq, &socket, addrstate._addrinfo->ai_addr) == 0)
{
string_buf_t buffer{ httpget };
::uv_write_t writereq;
awaitable_state<int> writestate;
if (co_await write(writestate, &writereq, connectreq.handle, &buffer, 1) == 0)
{
read_request_t reader;
if (read_start(connectreq.handle, &reader) == 0)
{
while (1)
{
auto state = co_await reader.read_next();
if (state->_nread <= 0)
break;
uv_buf_t buf = uv_buf_init(state->_buf.base, state->_nread);
fs_t writereq;
awaitable_state<int> writestate;
(void) co_await fs_write(writestate, uv_default_loop(), &writereq, 1 /*stdout*/, &buf, 1, -1);
}
}
}
}
}
awaitable_state<void> closestate;
co_await close(closestate, &socket);
}
}
[/code]
Again, a couple of new concepts show up in this example. First, we don't directly await on getaddrinfo. The getaddrinfo function returns a future_t<addrinfo_state>, which contains two pieces of information. The result of awaiting on future_t<addrinfo_state> gives an integer that indicates success or failure, but there is also an addrinfo pointer, which is used in the tcp_connect call. Finally, reading data on a socket potentially results in multiple callbacks as data arrives. This requires a different mechanism than just awaiting the read. For this, there is the read_request_t type. As data arrives on a socket, it will pass the data on if there is an outstanding await. Otherwise, it holds onto that data until the next time an await occurs on it.
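The buffering behavior described for read_request_t can be sketched as follows. This is a Python analogy with invented names, not the library's code: data that arrives with no outstanding await is queued, and an await either drains the queue or suspends until the next callback.

```python
import asyncio
from collections import deque

class ReadRequest:
    def __init__(self):
        self.pending = deque()   # data that arrived with no awaiter
        self.waiter = None
    def on_data(self, chunk):
        # The read-callback analogue: hand off or buffer.
        if self.waiter is not None and not self.waiter.done():
            self.waiter.set_result(chunk)
        else:
            self.pending.append(chunk)
    async def read_next(self):
        if self.pending:
            return self.pending.popleft()        # served from the buffer
        self.waiter = asyncio.get_running_loop().create_future()
        try:
            return await self.waiter             # suspend until data arrives
        finally:
            self.waiter = None

async def demo():
    r = ReadRequest()
    r.on_data(b"early")               # arrives before anyone awaits: buffered
    first = await r.read_next()       # drains the buffer immediately
    asyncio.get_running_loop().call_later(0.01, r.on_data, b"late")
    second = await r.read_next()      # suspends until the callback fires
    return first, second

print(asyncio.run(demo()))  # (b'early', b'late')
```

The same shape covers the recurring timer: each await re-arms the internal state for the next completion instead of consuming a one-shot future.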
Finally, let’s look at using these functions in combination.
[code lang=”cpp”]
int main(int argc, char* argv[])
{
// Process command line
if (argc == 1)
{
printf("testuv [--sequential] <file1> <file2> ...");
return -1;
}
bool fRunSequentially = false;
vector<string> files;
for (int i = 1; i < argc; ++i)
{
string str = argv[i];
if (str == "--sequential")
fRunSequentially = true;
else
files.push_back(str);
}
// start async color changer
start_color_changer();
start_hello_world();
if (fRunSequentially)
uv_run(uv_default_loop(), UV_RUN_DEFAULT);
for (auto& file : files)
{
start_dump_file(file.c_str());
if (fRunSequentially)
uv_run(uv_default_loop(), UV_RUN_DEFAULT);
}
start_http_google();
if (fRunSequentially)
uv_run(uv_default_loop(), UV_RUN_DEFAULT);
if (!fRunSequentially)
uv_run(uv_default_loop(), UV_RUN_DEFAULT);
// stop the color changer and let it get cleaned up
stop_color_changer();
uv_run(uv_default_loop(), UV_RUN_DEFAULT);
uv_loop_close(uv_default_loop());
return 0;
}
[/code]
This function supports two modes: the default parallel mode and a sequential mode. In sequential mode, we run the libuv event loop after each task is started, allowing it to complete before starting the next. In parallel mode, all tasks (resumable functions) are started and then resumed as awaits complete.
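The two driving modes can be illustrated with an event-loop analogy in Python (illustrative only; in the C++ program the "run the loop" step is uv_run): sequential mode drives the loop to completion after each task, while parallel mode starts everything and lets completions interleave.

```python
import asyncio

async def task(name, delay, log):
    await asyncio.sleep(delay)
    log.append(name)

async def run_sequentially(log):
    # "uv_run after each start": each task finishes before the next begins.
    for name, delay in [("a", 0.02), ("b", 0.01)]:
        await task(name, delay, log)

async def run_in_parallel(log):
    # Start everything, then resume tasks in completion order.
    await asyncio.gather(task("a", 0.02, log), task("b", 0.01, log))

seq, par = [], []
asyncio.run(run_sequentially(seq))
asyncio.run(run_in_parallel(par))
print(seq, par)  # ['a', 'b'] ['b', 'a']
```

In parallel mode the shorter task finishes first, so completion order ("b" before "a") differs from start order, which is exactly the difference between the two uv_run placements in main().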
Implementation
This library is currently header only. Let’s look at one of the wrapper functions.
[code lang=”cpp”]
auto fs_open(uv_loop_t* loop, uv_fs_t* req, const char* path, int flags, int mode)
{
promise_t<uv_file> awaitable;
auto state = awaitable._state->lock();
req->data = state;
auto ret = uv_fs_open(loop, req, path, flags, mode,
[](uv_fs_t* req) -> void
{
auto state = static_cast<promise_t<uv_file>::state_type*>(req->data);
state->set_value(req->result);
state->unlock();
});
if (ret != 0)
{
state->set_value(ret);
state->unlock();
}
return awaitable.get_future();
}
[/code]
This function wraps the uv_fs_open function and the signature is almost identical to it. It doesn't take a callback, and it returns a future_t<uv_file> rather than an int. Internally, the promise_t<uv_file> holds a reference counted state object, which contains the result value and some other housekeeping information. Libuv provides a "data" member to hold implementation specific information, which for us is a raw pointer to the state object. The actual callback passed to the uv_fs_open function is a lambda which will cast "data" back to a state object and call its set_value method. If uv_fs_open returned a failure (which means the callback will never be invoked), we directly set the value of the promise. Finally, we return a future that also has a reference counted pointer to the state. The returned future implements the necessary methods for co_await to work with it.
I currently have wrappers for the following libuv functions:
- uv_ref/uv_unref
- uv_fs_open
- uv_fs_close
- uv_fs_read
- uv_fs_write
- uv_write
- uv_close
- uv_timer_start
- uv_tcp_connect
- uv_getaddrinfo
- uv_read_start
This library is far from complete, and wrappers for other libuv functions still need to be written. I have also not explored cancellation or propagation of errors. I believe there is a better way to handle the multiple callbacks of uv_read_start and uv_timer_start, but I haven't found something I'm completely happy with. Perhaps they should remain callback-based, given their recurring nature.
Summary
For me, coroutines provide a simpler to follow model for asynchronous programming with libuv. Download the library and samples from the Github repo. Let me know what you think of this approach and how useful it would be.
https://devblogs.microsoft.com/cppblog/using-ibuv-with-c-resumable-functions/
Subject: Re: [OMPI users] OPEN_MPI macro for mpif.h?
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2010-03-31 09:09:18
On Mar 29, 2010, at 4:10 PM, Martin Bernreuther wrote:
> looking at the Open MPI mpi.h include file there's a preprocessor macro
> OPEN_MPI defined, as well as e.g. OMPI_MAJOR_VERSION, OMPI_MINOR_VERSION
> and OMPI_RELEASE_VERSION. version.h e.g. also defines OMPI_VERSION
> This seems to be missing in mpif.h and therefore something like
>
> include 'mpif.h'
> [...]
> #ifdef OPEN_MPI
> write( *, '("MPI library: OpenMPI",I2,".",I2,".",I2)' ) &
> & OMPI_MAJOR_VERSION, OMPI_MINOR_VERSION, OMPI_RELEASE_VERSION
> #endif
>
> doesn't work for a FORTRAN openmpi program.
Correct. The reason we didn't do this is because not all Fortran compilers will submit your code through a preprocessor. For example:
-----
shell% cat bogus.h
#define MY_VALUE 1
shell% cat bogus.f90
program main
#include "bogus.h"
implicit none
integer a
a = MY_VALUE
end program
shell% ln -s bogus.f90 bogus-preproc.F90
shell% gfortran bogus.f90
Warning: bogus.f90:2: Illegal preprocessor directive
bogus.f90:5.14:
a = MY_VALUE
1
Error: Symbol 'my_value' at (1) has no IMPLICIT type
shell% gfortran bogus-preproc.F90
shell%
-----
That's one example. I used gfortran here; I learned during the process that include'd files are not preprocessed by gfortran, but #include'd files are (regardless of the filename of the main source file). The moral of the story here is that it's a losing game for our wrappers to try and keep up with what file extensions and/or compiler switches enable preprocessing, and trying to determine whether mpif.h was include'd or #include'd. :-(
That being said, I have a [very] dim recollection of adding some -D's to the wrapper compiler command line so that -DOPEN_MPI would be defined and we wouldn't have to worry about all the .f90 vs. .F90 / include vs. #include muckety muck... I don't remember what happened with that, though...
Are you enough of a fortran person to know whether -D is pretty universally supported among Fortran compilers? It wouldn't be too hard to add a configure test to see if -D is supported. Would you have any time/interest to create a patch for this, perchance?
--
Jeff Squyres
jsquyres_at_[hidden]
For corporate legal information go to:
http://www.open-mpi.org/community/lists/users/2010/03/12496.php
A nexus (phylogenetics) file reader (.nex, .trees)
Project description
python-nexus
A Generic nexus (.nex, .trees) reader/writer for python.
Description
python-nexus provides simple nexus file-format reading/writing tools, and a small collection of nexus manipulation scripts.
Versions:
- dev:
- fixed parsing of an unusual MrBayes format treefile.
- fixed logging error in write_to_nexus()
- v2.1:
- fix minor bug with parsing of data/characters blocks.
- v2.0:
- Refactored cli. The package now installs a single command
nexus, providing several subcommands.
- Dropped python 2 compatibility.
- v1.7:
- added rudimentary tree handling to NexusWriter objects:
nex = NexusWriter()
nex.trees.append("tree tree1 = (a,b);")
- added the ability to combine nexuses containing trees
- v1.63:
- fixed an issue where the bin directory wasn't packed on py2.7 (thanks @xrotwang)
- v1.62:
- cached DataHandler's characters property to speed up.
- cached DataHandler's symbol property to speed up.
- cached DataHandler's site parser to speed up.
- v1.61:
- fixed an install issue caused by refactoring.
- v1.6:
- removed some over-engineered checking on the NexusReader.DataMatrix.characters property
- major refactoring of reader.py into a handlers subpackage
- NexusReader.read_string now returns self, so it can be used as a factory-style method.
- added rudimentary support for taxon annotations in taxa blocks.
- v1.53:
- the character block format string symbols generated by NexusReader.write() no longer includes missing or gap symbols.
- fix parsing glitch in NexusReader.DataHandler.parse_format_line.
- v1.51:
- characters and data blocks now retain their character labels in NexusReader
- v1.5:
- work around a minor bug in BEAST2 ()
- the characters block is now added as characters and not invisibly renamed to data.
- v1.42: minor fix to remove a stray debugging print statement
- v1.41: minor fix to remove a stray debugging print statement
- v1.40: major speed enhancement in NexusReader -- a 2 order of magnitude decrease in reading most nexus data blocks.
- v1.35: fixed nexus_nexusmanip.py utility to handle multiple arguments, and to delete arbitrary sites.
- v1.34: fixed parsing of malformed taxa blocks.
- v1.33: fixed bug in taxa labels parser when taxa are listed on one line.
Usage
Reading a Nexus:
>>> from nexus import NexusReader >>> n = NexusReader.from_file('nexus/examples/example.nex')
You can also load from a string:
>>> n = NexusReader.from_string('#NEXUS\n\nbegin foo; ... end;')
NexusReader will load each of the nexus blocks it identifies using specific handlers.
>>> n.blocks {'foo': <nexus.handlers.GenericHandler object at 0x7f55d94140f0>} >>> n = NexusReader('nexus/examples/example.nex') >>> n.blocks {'data': <NexusDataBlock: 2 characters from 4 taxa>}
A dictionary mapping blocks to handlers is available at .handlers:
>>> n.handlers { 'trees': <class 'nexus.handlers.tree.TreeHandler'>, 'taxa': <class 'nexus.handlers.taxa.TaxaHandler'>, 'characters': <class 'nexus.handlers.data.CharacterHandler'>, 'data': <class 'nexus.handlers.data.DataHandler'> }
Any blocks that aren't in this dictionary will be parsed using GenericHandler.
NexusReader can then write the nexus to a string using .write() or to another file using .write_to_file(filename):
>>> output = n.write() >>> # or >>> n.write_to_file("mynewnexus.nex")
NOTE: if you want more fine-grained control over generating nexus files, then try NexusWriter discussed below.
Block Handlers:
There are specific "Handlers" to parse certain known nexus blocks, including the common 'data', 'trees', and 'taxa' blocks. Any blocks that are unknown will be parsed with GenericHandler.
All handlers extend the GenericHandler class and have the following methods:
- parse(self, data): called by NexusReader to parse the contents of the block (in data) appropriately.
- write(self): called by NexusReader to write the contents of a block to a string (i.e. for regenerating the nexus format when saving a file to disk).
All blocks have access to the following:
- The raw block content (as a list of lines) in n.blockname.block
- A helper function to remove all the comments in a nexus file. n.block.remove_comments
To find out what file the nexus was loaded from:
n.filename n.short_filename 'example.nex'
generic block handler
The generic block handler simply stores each line of the block in .block:
n.blockname.block ['line1', 'line2', ... ]
data block handler
These are the main blocks encountered in nexus files - and contain the data matrix.
So, given the following nexus file with a data block:
#NEXUS Begin data; Dimensions ntax=4 nchar=2; Format datatype=standard symbols="01" gap=-; Matrix Harry 00 Simon 01 Betty 10 Louise 11 ; End; begin trees; tree A = ((Harry:0.1,Simon:0.2):0.1,Betty:0.2):Louise:0.1); tree B = ((Simon:0.1,Harry:0.2):0.1,Betty:0.2):Louise:0.1); end;
You can do the following:
Find out how many characters:
n.data.nchar 2
Ask about how many taxa:
n.data.ntaxa 4
Get the taxa names:
n.data.taxa ['Harry', 'Simon', 'Betty', 'Louise']
Get the format info:
n.data.format {'datatype': 'standard', 'symbols': '01', 'gap': '-'}
The actual data matrix is a dictionary, which you can get to in
.matrix:
n.data.matrix { 'Simon': ['0', '1'], 'Louise': ['1', '1'], 'Betty': ['1', '0'], 'Harry': ['0', '0'] }
Or, you could access the data matrix via taxon:
n.data.matrix['Simon'] ['0', '1']
Or even loop over it like this:
for taxon, characters in n.data: print(taxon, characters)
You can also iterate over the sites (rather than the taxa):
for site, data in n.data.characters.items(): print(site, data) 0 {'Simon': '0', 'Louise': '1', 'Betty': '1', 'Harry': '0'} 1 {'Simon': '1', 'Louise': '1', 'Betty': '0', 'Harry': '0'}
..or you can access the characters matrix directly:
n.data.characters[0] {'Simon': '0', 'Louise': '1', 'Betty': '1', 'Harry': '0'}
NOTE: that sites are zero-indexed!
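The site-major view above is just a transposition of the taxon-major matrix. A minimal sketch of that transformation in plain Python (not python-nexus internals):

```python
# Taxon-major matrix, as in n.data.matrix above.
matrix = {
    "Simon": ["0", "1"], "Louise": ["1", "1"],
    "Betty": ["1", "0"], "Harry": ["0", "0"],
}

# Transpose into a site-major view, like n.data.characters.
characters = {}
for taxon, states in matrix.items():
    for site, state in enumerate(states):   # sites are zero-indexed
        characters.setdefault(site, {})[taxon] = state

print(characters[0])  # {'Simon': '0', 'Louise': '1', 'Betty': '1', 'Harry': '0'}
```

Each site index maps to a per-taxon dictionary of states, which is why characters[0] and the iteration over n.data.characters.items() yield the shapes shown above.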
trees block handler
If there's a trees block, then you can do the following.
You can get the number of trees:
n.trees.ntrees 2
You can access the trees via the .trees dictionary:
n.trees.trees[0] 'tree A = ((Harry:0.1,Simon:0.2):0.1,Betty:0.2):Louise:0.1);'
Or loop over them:
for tree in n.trees: print(tree)
taxa block handler
These are the alternate nexus file format found in programs like SplitsTree:
BEGIN Taxa; DIMENSIONS ntax=4; TAXLABELS [1] 'John' [2] 'Paul' [3] 'George' [4] 'Ringo' ; END; [Taxa]
In a taxa block you can get the number of taxa and the taxa list:
n.taxa.ntaxa 4 n.taxa.taxa ['John', 'Paul', 'George', 'Ringo']
NOTE: with this alternate nexus format the Characters blocks should be parsed by DataHandler.
Writing a Nexus File using NexusWriter
NexusWriter provides more fine-grained control over writing nexus files, and is useful if you're programmatically generating a nexus file rather than loading a pre-existing one.
from nexus import NexusWriter n = NexusWriter() #Add a comment to appear in the header of the file n.add_comment("I am a comment")
Data are added by using the "add" function - which takes 3 arguments, a taxon, a character name, and a value.
n.add('taxon1', 'Character1', 'A') n.data {'Character1': {'taxon1': 'A'}} n.add('taxon2', 'Character1', 'C') n.add('taxon3', 'Character1', 'A')
Characters and values can be strings or integers
n.add('taxon1', 2, 1) n.add('taxon2', 2, 2) n.add('taxon3', 2, 3)
NexusWriter will interpolate missing entries (i.e. taxon2 in this case)
n.add('taxon1', "Char3", '4') n.add('taxon3', "Char3", '4')
... when you're ready, you can generate the nexus using make_nexus or write_to_file:
data = n.make_nexus(interleave=True, charblock=True) n.write_to_file(filename="output.nex", interleave=True, charblock=True)
... you can make an interleaved nexus by setting interleave to True, and you can include a character block in the nexus (if you have character labels, for example) by setting charblock to True.
There is rudimentary support for handling trees e.g.:
n.trees.append("tree tree1 = (a,b,c);") n.trees.append("tree tree2 = (a,b,c);")
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/python-nexus/
I got this effect from a very good French Flasher. I found the code on his site, flasheur.com, broke it down, and improved it a little bit, all this just for you.
You'll find that in this tutorial I also reuse the code from the Random motion tutorial, along with the swapDepths and duplication functions and a lot of ActionScript-based motion. So this is not an easy tutorial, and you'd better check the other tutorials before attempting this one.
Take a look at the effect: [ see the bacteria melt? ]
You can grab an incomplete source to get you started by clicking here.
Now, what we want is for the bacteria to move randomly. We need to apply Supra's random motion code to the circle, and make the outline somehow follow the fill. We also need to duplicate our bacterium, so that there is more than one on the scene.
Create a new layer that you'll name code and add this code:
//Random Movement: kirupa.com
//Thanks to Suprabeener for the original code!
function getdistance (x, y, x1, y1) {
var run, rise;
run = x1-x;
rise = y1-y;
return (hyp(run, rise));
}
function hyp (a, b) {
return (Math.sqrt(a*a+b*b));
}
MovieClip.prototype.reset = function () {
var dist, norm, movie_height, movie_width;
// movie_height: refers to the height of your movie
// movie_width: refers to the width of your movie
//---------------------------------
movie_height = 200;
movie_width = 400;
//---------------------------------
speed = Math.random()*4+2;
targx = Math.random()*(movie_width-_width);
targy = Math.random()*(movie_height-_height);
dist = _root.getdistance(_x, _y, targx, targy);
};
MovieClip.prototype.move = function () {
var cycle;
// cycle: specifies speed of the movement. The smaller
// number, the faster the objects move.
//--------------------------------------------
cycle = 200;
//--------------------------------------------
diffx = (targx-_x);
diffy = (targy-_y);
if (_root.getdistance(_x, _y, targx, targy)>speed) {
x += diffx/7;
y += diffy/7;
} else {
if (!this.t) {
t = getTimer();
}if (getTimer()-t>cycle) {
reset();
t = 0;
}
}
_x = x;
_y = y;
}
If you look very closely, you'll see that I removed two or three lines from the move() and reset() functions.
onClipEvent (enterFrame) {
move () ;
}
onClipEvent (enterFrame) {
_root.shadow._x = _root.circle._x;
_root.shadow._y = _root.circle._y;
// this makes the outline follow the fill
}
We get to the real ActionScript part of the effect. We want to duplicate our little bacteria, make them all move, and also make it look like they melt into each other.
In the code layer, under the random motion code, we are going to put the duplication code.
// 1 : first level
i = 1;
// MAX : number of bacteria. Don't put too much, or you're going to kill your computer
MAX = 11 ;
// duplication loop. We could have used a for loop too.
do {
duplicateMovieClip (_root.circle, "circle"+i, 50+i);
duplicateMovieClip (_root.shadow, "shadow"+i, i);
_root["circle"+i].num = i ;
_root["shadow"+i].num = i ;
i++ ;
} while (i<MAX);
_root.circle._visible = 0 ;
_root.top.swapDepths(1000) ;
Everybody OK? I admit this isn't really simple.
duplicateMovieClip (_root.circle, "circle"+i, 50+i);
This means you duplicate the movie _root.circle under the name "circle"+i in level 50+i. At the beginning, i=1, so the name of the duplicate will be circle1 and it will lie on level 51. Logical. And so on and so forth 11 times because of the do... while loop.
onClipEvent (enterFrame) {
i = 1;
do {
_root["shadow"+i]._x = _root["circle"+i]._x;
_root["shadow"+i]._y = _root["circle"+i]._y;
i++;
} while (i<_root.MAX);
}
That's it! Finished! Save your work, and test the movie.
That last bit of code looks a lot like the one we first put in the controller. The difference is that we loop so that all the shadows follow their circles, and that we have to give a general formula of the path to get to the shadows and the circles. I like those NOTE things...
This tutorial is written by Ilyas Usal. Ilyas is also known as ilyaslamasse on the kirupa.com forums!
pom
http://www.kirupa.com/developer/actionscript/outline.htm
Java method overloading allows different methods to share the same name but have different signatures, where a signature can differ in the number of parameters, the types of the parameters, or both.
See Also:
- Java: Method Signature
- Java: Method Overriding
- Java: Method Overloading Vs Method Overriding
- Exception handling with method overriding
Advantage of method Overloading
- You don't need to remember as many method names.
Points to remember about Method Overloading
- Method overloading is related to the concept of compile-time (or static) polymorphism.
- Method overloading happens within the same class.
- Method overloading doesn't consider the return type of a method. If two method signatures are the same and differ only in return type, the compiler reports a duplicate method.
- Static methods can be overloaded, but not on the basis of the static keyword alone.
- The main() method can also be overloaded.
- Java doesn't support user-defined operator overloading, but it uses operator overloading internally, for example (+) for String concatenation.
Note: If both methods have the same parameter types but different return types, overloading is not possible.
Ways to do Method Overloading
Method overloading is possible by:
- Changing the number of parameters of methods.
- Changing the data types of the parameters of methods.
- Changing the order of the parameters of methods.
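The three ways above can be sketched in a small class (the class and method names here are my own, not from the article):

```java
public class OverloadWays {
    // 1. Different number of parameters
    static int sum(int a, int b) { return a + b; }
    static int sum(int a, int b, int c) { return a + b + c; }

    // 2. Different parameter types
    static String describe(int x) { return "int:" + x; }
    static String describe(double x) { return "double:" + x; }

    // 3. Different parameter order
    static String pair(int i, String s) { return i + "-" + s; }
    static String pair(String s, int i) { return s + "-" + i; }

    public static void main(String[] args) {
        System.out.println(sum(1, 2));        // 3
        System.out.println(sum(1, 2, 3));     // 6
        System.out.println(describe(5));      // int:5
        System.out.println(describe(5.0));    // double:5.0
        System.out.println(pair(1, "a"));     // 1-a
        System.out.println(pair("a", 1));     // a-1
    }
}
```

In each pair the compiler picks the method whose signature matches the call, so all six calls resolve at compile time.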
How is parameter matching determined in overloading with different types?
When the parameter type doesn't match exactly, the compiler promotes the argument to the next higher type (in terms of range) in the same family.
For example: suppose the passed argument type is int. The compiler checks for an overloaded method taking int; if none is found, it checks the next higher data type in the same family (long, and if still not found, float).
public class Demo {
    // overloaded methods
    public void show(int x) {
        System.out.println("In int " + x);
    }

    public void show(String s) {
        System.out.println("In String " + s);
    }

    public void show(byte b) {
        System.out.println("In byte " + b);
    }

    public static void main(String[] args) {
        byte a = 25;
        Demo obj = new Demo();
        obj.show(a);                      // exact match: show(byte)
        obj.show("Facing Issues On IT "); // show(String)
        obj.show(250);                    // show(int)
        /*
         * Since no show(char) is available, 'A' is promoted to the
         * next higher type in the same family: int (prints 65).
         */
        obj.show('A');
        /*
         * Since no show(float) or show(double) is available and
         * there is no higher type to promote to, this line would
         * be a compile-time error.
         */
        // obj.show(7.5);
    }
}
Output
In byte 25
In String Facing Issues On IT 
In int 250
In int 65
https://facingissuesonit.com/2019/09/17/java-method-overloading/
in reply to
How to share a group of functions with multiple main processes?
How do you run your two Perl scripts? If it were indeed impossible for multiple scripts to use (well, require, in your case) the same module at the same time, that issue would have been raised long ago.
I recommend you read Perl Modules. To make it short: if PrintVar.pm (something that is not a standalone Perl script but is meant to be included, i.e. a module, should have the .pm extension) is in the same folder as your script, to include it you just have to write:
use PrintVar;. And if it is in Something, use Something::PrintVar;. This would allow you to move all your files at once without having to change every include path. You should still read Perl Modules if you want to make "good" Perl modules, to learn about packages (namespaces) and exporting.
http://www.perlmonks.org/index.pl?node_id=1063365
Explain the Purpose & requirement for keeping financial records.
This essay has been submitted by a student. This is not an example of the work written by our professional essay writers.
Question: 1.1Explain the Purpose & requirement for keeping financial records.
Answer: Most of us do maintain some kind of a written record of our income and expenditure. The idea behind maintaining such records is to know the correct position regarding income and expenditure. The need for keeping a record of income and expenditure in a clear and systematic manner has given rise to the subject of book keeping.
It is all the more necessary for an organization or a concern to keep proper records. At the end of the year the true result of the economic activities of a concern must be made available, otherwise it will not be possible to run the concern. In the case of a business concern, the profit or loss at the end of the year must be analyzed and appropriate measures taken for rectification. But this is only possible if proper books of records are maintained in the business. There are several very good reasons for keeping different types of business and financial records. Records are required for tax preparation, filing, and, if needed, for any audits. Lenders require certain financial information about the business before making any new loans or extending new credit. Managing the business, including making plans and solving problems, is much easier if records of past performance are available. Without this important information, monitoring and evaluating business performance cannot be done, problems may go undetected, and opportunities may be missed. Records are needed to provide a paper trail or documentation. Records provide a sound basis for developing business agreements, both with family members and with others. For tax and financial management purposes, records of sales, cash received, and cash paid out are needed. Records of accounts payable and receivable are needed to ensure timely payment and for cash flow management.
Question: 1.2 Analyse techniques for recording financial information in a business organization.
Answer: Accountants have developed systematic and relatively simple techniques for recording financial information. The initial starting point for collecting financial information is to systematically collect records of financial transactions e.g. invoices, receipts etc and to enter these into some form of book keeping system.
Single Entry Book Keeping: A simple system in which accounting transactions are recorded only once is called single entry book keeping. It is used primarily in simple applications such as cheque book balancing or in very small cash-based businesses. It does not require keeping journals and ledgers. It is an incomplete, inaccurate, unsystematic and unscientific style of account keeping, though generally less costly, and both the debit and credit aspects of a transaction are not recorded. It is not possible to prepare a trial balance, profit and loss account, or balance sheet from it.
Double Entry Book Keeping:
Every business transaction causes at least two changes in the financial position of a business concern at the same time, hence both changes must be recorded in the books of account; otherwise the books of account will remain incomplete and the results ascertained from them will be inaccurate. For example, we buy machinery for £100,000. Obviously it is a business transaction. It has brought two changes: machinery increases by £100,000 and cash decreases by an equal amount. While recording this transaction in the books of account, both changes must be recorded. In accounting language these two changes are termed a debit change and a credit change.
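The machinery example can be sketched as two equal and opposite entries; a minimal illustration (the class and method names are my own, not part of the essay):

```java
import java.util.ArrayList;
import java.util.List;

public class DoubleEntrySketch {
    static class Entry {
        final String account;
        final long debit;
        final long credit;
        Entry(String account, long debit, long credit) {
            this.account = account;
            this.debit = debit;
            this.credit = credit;
        }
    }

    static final List<Entry> journal = new ArrayList<>();

    // Every transaction is recorded twice: one debit and one equal credit.
    public static void record(String debitAccount, String creditAccount, long amount) {
        journal.add(new Entry(debitAccount, amount, 0));
        journal.add(new Entry(creditAccount, 0, amount));
    }

    // The trial-balance check: total debits must equal total credits.
    public static boolean balanced() {
        long debits = 0, credits = 0;
        for (Entry e : journal) {
            debits += e.debit;
            credits += e.credit;
        }
        return debits == credits;
    }

    public static void main(String[] args) {
        record("Machinery", "Cash", 100_000);  // buy machinery for £100,000
        System.out.println(balanced());        // true
    }
}
```

Because every transaction produces matching debit and credit entries, the books always balance, which is exactly what the trial balance described later verifies.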
In this connection, the successive processes of the Double Entry System may be noted:
Journal:
First of all, transactions are recorded in a book known as Journal.
Ledger:
In the second process transactions are classified in a suitable manner and recorded in another book known as Ledger.
Trial Balance:
In the third process, the arithmetical accuracy of the books of account is tested by means of Trial Balance.
Final Accounts:
In the fourth and final process the result of the full year working is determined through the Final Accounts.
Journal:
The initial record of each transaction is evidenced by a business document such as an invoice, cash voucher, etc. As soon as a transaction takes place, its debit and credit aspects are analyzed and first of all recorded in a book together with a short description. This book is known as the Journal. Thus we see that the most important function of the Journal is to show the relationship between the two accounts connected with a transaction. Since transactions are first of all recorded in the Journal, it is called the Book of Original Entry.
Ledger:
All the changes for a single account are located in one place, in a ledger account. This makes it easy to determine the current balance of any account. The book in which accounts are maintained is called the Ledger. Generally one account is opened on each page of this book, but if the transactions relating to a particular account are numerous, it may extend to more than one page. All transactions relating to that account are recorded there. From the journal each transaction is posted to at least two concerned accounts.
Remember that if there are two accounts involved in a journal entry, it will be posted to two accounts in the ledger, and if the journal entry consists of three accounts, it will be posted to three different accounts in the ledger. But it must be remembered that transactions cannot be recorded directly in the ledger; they must be routed through the journal.
Transactions → Journal → Ledger
So, the book in which all the transactions of a business concern are finally recorded in the concerned account in a summarized and classified form, is called Ledger.
Trial Balance:
The fundamental principle of the double entry system is that at any stage the total of debits must be equal to the total of credits. If entries are recorded and posted correctly, the ledger will reflect equal debits and credits, and the total credit balances will then be equal to the total debit balances. As we know, under the double entry system for each and every transaction one account is debited and another account is credited with an equal amount. If all the transactions are correctly recorded strictly according to this rule, the total amount of the debit side of all the ledger accounts must be equal to that of the credit side of all the ledger accounts. This verification is done through the Trial Balance.
If the trial balance agrees, we may reasonably assume that the books are correct. On the other hand, if it does not agree, it indicates that the books are not correct and there are mistakes somewhere. The mistakes are to be detected and corrected, otherwise the correct result cannot be ascertained. The trial balance serves to check the equality of debits and credits as a mathematical test of accuracy, and to provide information for use in preparing the final accounts.
Thus, in the light of the above discussion, a trial balance may be defined as "an informal accounting schedule or statement that lists the ledger account balances at a point in time and compares the total of debit balances with the total of credit balances".
Computerised System: A system in which computers and software are used to maintain the records of a business or any other organisation is called a computerised system. Operations can be done with high speed and accuracy, a lot of time is saved in maintaining accounts, and it provides quick information whenever needed. However, the cost of installing and maintaining a computerised system is high, and its operation requires technical knowledge and training.
Manual System: A system under which the records are maintained manually, using pen and paper, with the account work (what has been sold and for how much) done by people, is called a manual system. Pen and paper are a very old-school technique for writing down and saving different types of information, and these days this type of system is rarely used.
Question 1.3 and 1.4
Question:
2.1 Analyse the components of working capital.
Answer:
Components of Working Capital:
Working capital constitutes various current assets and current liabilities.
Current Assets:
Assets which are short-lived and which can be converted into cash quickly to meet short-term liabilities are called current assets, e.g. stock, debtors, receivables, prepaid expenses, accrued income, etc.
Current or Short-Term Liabilities:
The debts which are repayable within a short period of time are called Current or Short-Term Liabilities, e.g. Creditors, Bills Payable, Bank overdraft, outstanding expenses, Dividend payable, Provision for taxation etc. Current liabilities may again be divided into two:
- Deferred Liabilities: Debts which are repayable in the course of less than one year but more than one month are called Deferred Liabilities, e.g. Short-Term Loan etc.
- Liquid or Quick Liabilities: Debts which are repayable in the course of a month are called Liquid or Quick Liabilities, e.g. Bank overdraft, outstanding expenses, creditors etc.
The main components of working capital are:
- Cash: Cash is the most liquid and one of the main components of working capital. Holding cash involves a cost, because the value of cash held a year later will be less than its value today. Excess cash should not be held in reserve in the business because cash is a non-earning asset. Hence suitable and well-judged cash management is of extreme importance in business.
- Marketable Securities:
These securities do not give much yield to the business, for two reasons:
- Marketable securities act as a replacement for cash.
- They are used as temporary investments.
They are held not for provisional balances but only as a security against a potential lack of bank credit.
- Accounts Receivable:
Large numbers of debtors lock up the firm's assets, particularly during inflationary periods. It is a two-step account: when goods are sold, inventories are reduced and accounts receivable are created; when payment is made, debtors reduce and the cash level increases. Thus the quantum of debtors depends on two things: the volume of credit sales and the usual length of time between sales and collections. The firm should work out the most advantageous credit standards, establish an optimal credit policy, and monitor operations continually to attain higher sales and minimum bad-debt losses.
- Inventory:
Inventories represent an important share of a firm's assets. Inventories must be properly managed so that this investment doesn't become too large, which would result in blocked capital that could be put to productive use elsewhere. On the other hand, having too little inventory could result in lost sales or loss of customer goodwill. An optimum level of inventory should therefore be maintained.
Question:
2.2 Explain how business organizations can effectively manage working capital.
Answer:
Question:
3.1 Explain the difference between management accounting and financial accounting.
Question 3.2 and 3.3
Question 3.4
Evaluate the use of different costing methods for pricing purposes
- What are the effects of absorption and marginal costing on profit?
- Advantages and Disadvantages of Marginal Costing Technique.
Answer
Different costing methods are mentioned below:
Job Costing: A job is a cost unit which consists of a single order or contract. Job costing applies to a single order undertaken to a customer's special requirements, usually of short duration. Contract Costing: Contract costing is a form of job costing which applies where the job is on a large scale and of long duration. The majority of costs relating to a contract are direct costs. Batch Costing: A batch is a cost unit in which a quantity of identical items is manufactured. It consists of a separate, readily identifiable group of product units which maintain their separate identity throughout the production process.
Operating Costing or Service Costing: Service costing can be used by companies operating in a service industry or by companies wishing to establish the cost of services carried out by some of their departments. Service costing is cost accounting for services or functions, e.g. canteen, maintenance, personnel. These may be referred to as service centres, departments or functions. Process Costing: Process costing is a costing method which is applicable to industries producing homogeneous products in large quantities. The purpose of process costing is a typical one, i.e. stock valuation.
- Effects of absorption and marginal costing on profit:
The difference in profits reported under the two systems is due to the different stock valuation methods used.
- If the stock level increases between the beginning and end of the period, absorption costing will report higher profit, because some fixed production overhead is carried forward into the next period in the closing stock value.
- If the stock level decreases, absorption costing shows lower profit than marginal costing, because fixed production overhead brought forward in the opening stock is charged into the current period's profit statement.
- If sales are constant and production fluctuates, the marginal costing profit is constant but the absorption costing profit fluctuates.
- If output volume is constant and the volume of sales fluctuates, then under both methods profit moves in the direction of sales.
- Advantages and Disadvantages of Marginal Costing Technique:
4.1: Demonstrate the main methods of project appraisal by explaining various investment appraisal techniques. State, in general terms, which method of investment appraisal you consider to be most appropriate for evaluating investment projects and why?
Answer
Key Methods of Project Appraisal:
Net present value:
The net present value method calculates the present value of all cash flows, and sums them to give the net present value. If this is positive, then the project is acceptable.
The net present value (NPV) method of evaluation is as follows.
- Determine the present value of costs.
In other words decide how much capital must be set aside to pay for the project. Let this be £C.
- Calculate the present value of future cash benefits from the project.
To do this we take the cash benefit in each year and discount it to a present value. This shows how much we would have to invest now to earn the future benefits, if our rate of return were equal to the cost of capital. By adding up the present value of benefits for each future year, we obtain the total present value of benefits from the project. Let this be £B.
- Compare the present value of costs £C with the present value of benefits £B.
The net present value is the difference between them: £ (B-C)
- NPV is positive.
The present value of benefits exceeds the present value of costs. This means that the project will earn a return in excess of the cost of capital; therefore, the project should be accepted. The NPV method should always be used where money values over time need to be appraised.
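The steps above can be sketched as a small NPV function (the figures used here are illustrative, not from the essay):

```java
public class NpvSketch {
    // NPV = -C + sum over years t of inflow_t / (1 + r)^t
    public static double npv(double rate, double initialCost, double[] inflows) {
        double value = -initialCost;                         // present value of costs, £C
        for (int t = 0; t < inflows.length; t++) {
            value += inflows[t] / Math.pow(1 + rate, t + 1); // discount year t+1 benefit
        }
        return value;                                        // £(B - C)
    }

    public static void main(String[] args) {
        // £100 outlay, £60 back in each of two years, 10% cost of capital
        System.out.printf("NPV = %.2f%n", npv(0.10, 100, new double[]{60, 60}));
    }
}
```

Here the discounted benefits (about £104.13) exceed the £100 cost, so the NPV is positive and the project would be accepted under the decision rule above.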
Accounting Rate of Return (ARR):
A capital investment project may be assessed by calculating the return on investment (ROI) or accounting rate of return (ARR) and comparing it with a pre-determined target level. A formula for ARR which is common in practice is: ARR = (estimated average profits ÷ estimated average investment) × 100%. It allows the owner of a business to easily compare the profit potential of investments and projects.
Internal Rate of Return (IRR):
The internal rate of return technique uses a trial-and-error method to discover the discount rate which produces an NPV of zero. That discount rate is the forecast return for the project.
The internal rate of return method involves two steps:
- Calculating the rate of return which is expected from a project.
- Comparing the rate of return with the cost of capital.
If a project earns a higher rate of return than the cost of capital, it will be worth undertaking (and its NPV would be positive). If it earns a lower rate of return, it is not worthwhile (and its NPV would be negative). If a project earns a return which is exactly equal to the cost of capital, its NPV will be 0 and it will only just be worthwhile.
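The trial-and-error search for the zero-NPV rate can be sketched with bisection (the search interval of 0 to 100% and the cash-flow figures are my own assumptions):

```java
public class IrrSketch {
    // NPV with cashFlows[0] as the (negative) outlay at time 0
    public static double npv(double rate, double[] cashFlows) {
        double v = 0;
        for (int t = 0; t < cashFlows.length; t++) {
            v += cashFlows[t] / Math.pow(1 + rate, t);
        }
        return v;
    }

    // Repeatedly halve the interval until the rate giving NPV = 0 is pinned down.
    public static double irr(double[] cashFlows) {
        double lo = 0.0, hi = 1.0;               // assume the IRR lies in 0..100%
        for (int i = 0; i < 60; i++) {
            double mid = (lo + hi) / 2;
            if (npv(mid, cashFlows) > 0) lo = mid;  // rate too low, NPV still positive
            else hi = mid;                          // rate too high, NPV negative
        }
        return (lo + hi) / 2;
    }

    public static void main(String[] args) {
        double[] flows = {-100, 110};            // invest 100, receive 110 after a year
        System.out.printf("IRR = %.2f%%%n", irr(flows) * 100);
    }
}
```

For these flows the IRR converges to 10%: at exactly the cost of capital of 10% the NPV is zero, matching the "only just worthwhile" case described above.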
The Payback:
This is the time taken to recover the initial outlay from the cash flows of the project. The payback period is the amount of time it is expected to take for the cash inflows from a capital investment project to equal the cash outflows. When the cash inflows are of the same amount and occur at regular intervals, the payback period is calculated as:
Initial investment ÷ Annual cash inflows
Decision rule: if the payback period < target, accept the project; if the payback period > target, reject it.
Discounted Payback:
An alternative to the payback method is the discounted payback period. The discounted payback period is the amount of time that it takes to cover the cost of a project by adding up the net positive discounted cash flows arising from the project. It should never be the sole appraisal method used to assess a project, but it is a handy performance indicator for judging a project's expected performance.
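Both payback variants can be sketched as follows (the figures are illustrative assumptions, not from the essay):

```java
public class PaybackSketch {
    // Simple payback: initial investment divided by a constant annual inflow
    public static double paybackYears(double initialInvestment, double annualInflow) {
        return initialInvestment / annualInflow;
    }

    // Discounted payback: count whole years until cumulative discounted
    // inflows cover the initial outlay; returns -1 if never recovered.
    public static int discountedPaybackYears(double rate, double initialInvestment,
                                             double[] inflows) {
        double cumulative = 0;
        for (int t = 0; t < inflows.length; t++) {
            cumulative += inflows[t] / Math.pow(1 + rate, t + 1);
            if (cumulative >= initialInvestment) return t + 1;
        }
        return -1;
    }

    public static void main(String[] args) {
        System.out.println(paybackYears(100, 25));   // 4.0
        System.out.println(discountedPaybackYears(0.10, 100, new double[]{60, 60, 60}));
    }
}
```

Note that discounting lengthens the apparent recovery time relative to simple payback, because later inflows are worth less in present-value terms.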
4.3 Explain how finance might be obtained for a business project.
https://www.ukessays.com/essays/accounting/explain-the-purpose-requirement.php
This program is supposed to be able to display the full address when it is input.
It can only display what is entered before the user hits the space bar.
For example:
When the program asks for the address I'll enter:
254 Marios Castle Lala Land 93884
And only 254 will be displayed.
Here is the code:
PHP Code:
//Write program to display name, address, email address, and with a border.
#include <stdio.h>

int main(void)
{
    char name[50];
    char address[50];
    char email_address[50];

    printf("\nPlease type your name:\n");
    scanf("%s", name);            /* %s stops reading at the first whitespace */
    printf("%s \n", name);

    printf("\nPlease enter your address:\n:");
    scanf("%s", address);         /* so only "254" of the address is read;
                                     fgets(address, sizeof address, stdin)
                                     would read the whole line instead */
    printf("%s \n", address);

    printf("\nPlease enter your email address:\n:");
    scanf("%s", email_address);
    printf("%s\n", email_address);

    return 0;
}
https://cboard.cprogramming.com/c-programming/130119-program-does-not-printf-after-spaces-printable-thread.html
Mathew
- Total activity 55
- Last activity
- Member since
- Following 0 users
- Followed by 0 users
- Votes 0
- Subscriptions 19
Mathew created a post,
ReSharper menu options missing from project/solution menu.Hi,For some reason ReSharper menu options (except for Refactor) are missing from the Solution and Project menus. For example; the "Adjust namespaces..." is missing.I've tried re-installing ReSharpe...
Mathew created a post,
Keep having problems with Actions.xmlMy actions.xml file is like this <?xml version="1.0" encoding="utf-8" ?> <actions> <insert group- <action-group ...
Mathew created a post,
Build-in UI component to pick classes or files?This is actually a multipart question related to UI popups.- Does the SDK have any UI components to allow users to pick files or classes so operations can be performed on them? For example; ReSharp...
Mathew created a post,
How to get end of line comment above class declaration?I want my plugin to inspect class declarations for a comment containing a GUID.For example; // ReTester id:123-123-123 public class TestMissingProcess : GenericDaemonProcessor<IClassDeclarat...
Mathew commented, Mathew created a post,
How to tell if a project is a test project?There use to be a property for IProject in older SDK versions that would tell if a project was a test project (I'm basing this on old repos I've browsed in GitHub).This property appears to have bec...
Mathew created a post,
How to indicate a problem to the user?If during a ContextAction the plug-in experiences a problem and needs to display a message to the user. How should this be handled?For example; A ContextAction that opens a file with a simular name...
Mathew created a post,
Is there a start up process for a plugin?I have some code that needs to run once when the plug-in is loaded.How do you handle start up code in a ReSharper plug-in?
https://resharper-support.jetbrains.com/hc/en-us/profiles/1378935402-Mathew
Let's have a look at a quick example where I will show how you can change the session state based on the different member types of your web site. Let's say you have three different types of member (Gold, Silver and Platinum), and for Platinum members you want to maintain the session only for some specific pages, not for others. To start with this, first create a new HTTP Module by implementing the IHttpModule interface.
using System;
using System.Web;

/// <summary>
/// Summary description for SessionModule
/// </summary>
public class SessionModule : IHttpModule
{
    /// <summary>
    /// Disposes of the resources (other than memory) used by the module that implements <see cref="T:System.Web.IHttpModule"/>.
    /// </summary>
    public void Dispose()
    {
    }

    public void Init(HttpApplication context)
    {
        context.BeginRequest += new EventHandler(context_BeginRequest);
    }

    /// <summary>
    /// Handles the BeginRequest event of the context control.
    /// </summary>
    /// <param name="sender">The source of the event.</param>
    /// <param name="e">The <see cref="System.EventArgs"/> instance containing the event data.</param>
    private void context_BeginRequest(object sender, EventArgs e)
    {
        // ...
    }
}

Once you are done with the implementation of the HTTP Module, you have to configure web.config for the same.
Again, you have to make sure that you can only use SetSessionStateBehavior until the AcquireRequestState event is fired.
And not only with a query string: you can enable or disable session state for different pages, as shown below.
To know more about ASP.NET 4.0 State management, you can read my Session notes on Microsoft Community Tech Days – Hyderabad
Download ASP.NET 4.0 State Management – Deep Dive PPT
You will get the complete demo application on runtime session state change from above download.
Hope this will help you !
Cheers !
AJ
That was very informational read!!!
Very nice article. I really enjoying it, and it also cleared lot of my doubts about Asp.Net session state. Thanks for sharing with us. Following link also helped me lot to understand the Asp.Net Session
State…
Thanks everyone for your precious post!!
https://abhijitjana.net/2011/01/15/programmatically-changing-session-state-behavior-in-asp-net-4-0/
Jetty, which is widely used with Java and open source projects, can act as a standalone program or as an API. It can serve HTML and Web applications written using JSPs or servlets. The Jetty library is small in size, and the server requires few system resources.
To see how you can serve content using very few lines of code, download Jetty's tar.gz file, uncompress it using the command
tar -zxvf jetty-4.2.24.tar.gz, then
cd jetty-4.2.24. After that the following is all you need to start serving content:
import org.mortbay.http.SocketListener;
import org.mortbay.jetty.Server;

class testserver
{
    public static void main(String[] args) throws Exception
    {
        Server server = new Server();
        SocketListener listener = new SocketListener();
        listener.setPort(8080);
        server.addListener(listener);
        server.addWebApplication("/", "./webapps/template/");
        server.start();
    }
}
Place this code in testserver.java in the Jetty directory. There are a few jar files that Jetty needs, so you should add them to your classpath:
CLASSPATH=lib/org.mortbay.jetty.jar:lib/javax.servlet.jar:ext/jasper-runtime.jar:ext/jasper-compiler.jar:ext/xercesImpl.jar
Now you can compile and run this test server:
javac -classpath $CLASSPATH testserver.java
java -classpath $CLASSPATH:. testserver
Pointing your browser at the listener's port (8080 in this example) should connect you to the Jetty server.
Looking at the source code can show us a bit of how Jetty works. The Server class accepts requests and sends them to various contexts that provide the content. Listener classes such as SocketListener wait for HTTP connections. The code includes listeners for services like SSL as well. Finally, by adding a Web application, we create a context to which requests may go. Here we are telling connections to root ("/") to be routed to the template Web application provided by Jetty.
Another useful feature in Jetty is that almost everything available programmatically can be written using XML documents as well. The example above can be replicated in the following XML code:
<Configure class="org.mortbay.jetty.Server">
  <Call name="addListener">
    <Arg>
      <New class="org.mortbay.http.SocketListener">
        <Set name="Port">8080</Set>
      </New>
    </Arg>
  </Call>
  <Call name="addWebApplication">
    <Arg>/</Arg>
    <Arg>./webapps/template/</Arg>
  </Call>
</Configure>
Try saving this to testserver.xml, then start Jetty by running
java -jar start.jar testserver.xml, and you should see the same content being served. A pre-existing application could easily add a Web interface to display data and foster user interaction using this method.
You can use Jetty for more than just Web applications. The current programming model for desktop applications usually requires backend code, which does the main work for the program, and GUI code, which gathers and displays information to the user. Often the development of the GUI portion of the application can be more difficult and time-consuming than anything else.
Another worry is making GUIs portable. Even with tools like Java, which are inherently cross-platform, developers still may worry about the "look and feel" of each platform on which their program may run. Having an application interface through the Web browser is especially convenient for users to be able to log into, say, Samba or MySQL by just typing the address into their browser, especially when the alternative is logging into the machine and typing in commands manually.
In addition to adding Web interfaces to network services running on remote hosts, companies are adding Web access to programs running locally on client machines, either as a supplement to a traditional GUI or a replacement for one. This model is appealing because of users' widespread use of and comfort with Web browsers, and because of the inherent cross-platform nature of Web content, which differs from, say, a Java GUI. In addition, from a developer's standpoint, creating an HTML GUI is easy and quick.
In general, requiring a user to maintain his own Web server is not a viable approach. The alternative is to embed a Web server within the code of a program with tools like Jetty. As such tools grow in popularity, we can expect to see more and more applications whose primary means of communicating with the user is through the browser.
Jetty is great! Posted by: ammoQ on July 08, 2005 05:41 PM
#
Jetty, Tomcat. Posted by: Anonymous Coward on July 08, 2005 08:42 PM
DG
#
Re: Jetty, Tomcat. Posted by: Anonymous Coward on July 08, 2005 11:04 PM
If all you need are simple web apps (JSPs, Servlets, or some other web framework), Jetty is as good as they come.
#
Jetty smaller than Tomcat, are you sure? Posted by: Anonymous Coward on July 08, 2005 11:34 PM
Tomcat 5.0.27 = 10,057 kB
Jetty 4.2.19 = 11,892 kB
(From a gentoo 'emerge -s' command)
#
Re: Jetty smaller than Tomcat, are you sure? Posted by: Anonymous Coward on July 09, 2005 03:09 AM
Also, there's an "ext" dir that contains some default extensions weighing in at about 5 MB. On recent VMs, most of those can be thrown out, dropping the size by about another 4.5 MB.
In the end, it's super-easy to drop the whole thing down to less than 1 MB.
Oh, and 4.2.19 is ancient. 5.1.4 is current.
#
http://www.linux.com/archive/articles/46110
Marcel Reutegger resolved JCR-489.
----------------------------------
Fix Version/s: 1.1
Resolution: Fixed
Fixed in revision: 425338
Thank you for reporting this issue.
> TCK: Incorrect check of namespace mappings in System View XML export
> --------------------------------------------------------------------
>
> Key: JCR-489
> URL:
> Project: Jackrabbit
> Issue Type: Bug
> Components: test
> Environment: TCK from SVN revision 421925
> Reporter: Nicolas Pombourcq
> Assigned To: Marcel Reutegger
> Fix For: 1.1
>
>
> org.apache.jackrabbit.test.api.SysViewContentHandler. In endDocument(), two issues:
> 1. line 351: tries to go through a table of prefixes but uses a fixed index inside the loop;
> 2. The mapping for the 'xml' prefix should be skipped (it must be registered in the Session
> but must not be registered during export, since this is a built-in XML mapping).
--
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
-
For more information on JIRA, see:
http://mail-archives.apache.org/mod_mbox/jackrabbit-dev/200607.mbox/%3C22005854.1153818378503.JavaMail.jira@brutus%3E
#include <canna/RK.h>

int RkMapPhonogram(romaji, dst, maxdst, src, srclen, flags, ulen, dlen, tlen, rule)
struct RkRxDic *romaji;
unsigned char *dst;
int maxdst;
unsigned char *src;
int srclen;
int flags;
int *ulen;
int *dlen;
int *tlen;
int *rule;
flags is a combination of the following Romaji-kana conversion flags, combined with bitwise OR:
ulen, dlen, and tlen are used to manage the progress of Romaji-kana conversion.
For example, if the character string "tt" is given to RkMapPhonogram, the first "t" is submitted to Romaji-kana conversion, with dst being set to small kana character "tsu". The remaining "t" is put to reuse for Romaji-kana conversion. When "a" is entered subsequently, it is combined with the "t" left from the previous run of RkMapPhonogram to generate kana character "ta".
ulen is set to the byte length of the characters from src used for Romaji-kana conversion.
dlen is set to the byte length of the kana characters derived from Romaji-kana conversion.
tlen is set to the byte length of the character string to be carried over into the next run of Romaji-kana conversion. This string of length tlen is placed after the character string resulting from Romaji-kana conversion in the dst buffer.
If null pointers are specified for ulen, dlen, or tlen, those parameters do not return any values and are simply ignored.
rule is used to exchange information about the rule of Romaji-kana conversion. When calling RkMapPhonogram for the first time, specify a pointer to a variable loaded with 0. Upon return from the first call to RkMapPhonogram, that variable is loaded with internal information about the rule of Romaji-kana conversion, in place of 0. To continue Romaji-kana conversion, specify the same pointer to the variable to RkMapPhonogram.
http://www.makelinux.net/man/3/R/RkMapPhonogram
A follow-up to one of my previous posts: Organizer Server Permissions System.
I recently had a requirement to create permission decorators for use in our REST APIs. There had to be separate decorators for Event and Services.
Event Permission Decorators
Understanding Event permissions is simple: Any user can create an event. But access to an event is restricted to users that have Event specific Roles (e.g. Organizer, Co-organizer, etc) for that event. The creator of an event is its Organizer, so he immediately gets access to that event. You can read about these roles in the aforementioned post.
So for Events, the create operation does not require any permissions, but read/update/delete operations need a decorator. This decorator restricts access to users with event roles.
def can_access(func):
    """Check if User can Read/Update/Delete an Event.

    This is done by checking if the User has a Role in an Event.
    """
    @wraps(func)
    def wrapper(*args, **kwargs):
        user = UserModel.query.get(login.current_user.id)
        event_id = kwargs.get('event_id')
        if not event_id:
            raise ServerError()
        # Check if event exists
        get_object_or_404(EventModel, event_id)
        if user.has_role(event_id):
            return func(*args, **kwargs)
        else:
            raise PermissionDeniedError()
    return wrapper
The has_role(event_id) method of the User class determines if the user has a Role in an event.
# User Model class
def has_role(self, event_id):
    """Checks if user has any of the Roles at an Event."""
    uer = UsersEventsRoles.query.filter_by(user=self, event_id=event_id).first()
    if uer is None:
        return False
    else:
        return True
Reading one particular event (/events/:id [GET]) can be restricted to users, but a GET request to fetch all the events (/events [GET]) should only be available to staff (Admin and Super Admin). So a separate decorator to restrict access to staff members was needed.
def staff_only(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        user = UserModel.query.get(login.current_user.id)
        if user.is_staff:
            return func(*args, **kwargs)
        else:
            raise PermissionDeniedError()
    return wrapper
Service Permission Decorators
Service Permissions for a user are defined using Event Roles. What Role a user has in an Event determines what Services he has access to in that Event. Access here means permission to Create, Read, Update and Delete services. The User model class has four methods to determine the permissions for a Service in an event.
user.can_create(service, event_id)
user.can_read(service, event_id)
user.can_update(service, event_id)
user.can_delete(service, event_id)
So four decorators were needed to put alongside the POST, GET, PUT, and DELETE method handlers. I've pasted the snippet for the can_update decorator. The rest are similar, but with their respective permission methods for the User class object.
def can_update(DAO):
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            user = UserModel.query.get(login.current_user.id)
            event_id = kwargs.get('event_id')
            if not event_id:
                raise ServerError()
            # Check if event exists
            get_object_or_404(EventModel, event_id)
            service_class = DAO.model
            if user.can_update(service_class, event_id):
                return func(*args, **kwargs)
            else:
                raise PermissionDeniedError()
        return wrapper
    return decorator
This decorator is a little different from the can_access event decorator in that it takes an argument, DAO. A DAO is a Data Access Object: it includes a database model and methods to create, read, update, and delete objects of that model. The db model for a DAO is the Service class for the object; you can see that the model class is taken from the DAO and used as the service class.
The can_create, can_read, and can_delete decorators look exactly the same, except that they use their (obvious) permission methods on the User class object.
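Stripped of the Flask and SQLAlchemy context, the pattern behind these decorators can be sketched in a self-contained way. Note that FakeUser, the hard-coded role set, and get_event below are stand-ins for the real models and handlers, not part of the actual codebase:

```python
from functools import wraps

class PermissionDeniedError(Exception):
    pass

class FakeUser:
    """Stand-in for the User model; the real code queries UsersEventsRoles."""
    def __init__(self, roles):
        self.roles = roles

    def has_role(self, event_id):
        return event_id in self.roles

# Pretend the logged-in user has a role in event 1 only
current_user = FakeUser(roles={1})

def can_access(func):
    """Allow the call only if the current user has a role in the event."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        if current_user.has_role(kwargs.get('event_id')):
            return func(*args, **kwargs)
        raise PermissionDeniedError()
    return wrapper

@can_access
def get_event(event_id):
    return {'id': event_id}
```

Calling get_event(event_id=1) succeeds, while get_event(event_id=2) raises PermissionDeniedError, which mirrors how the real decorator gates the REST handlers.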
https://blog.fossasia.org/permission-decorators/
Normalize time series data intervals¶
If you followed one of the tutorials in the previous section, you should have some mock time series data about the position, or ground point, of the International Space Station (ISS).
It is common to visualize time series data by graphing values over time. However, you may run into the following issues:
The resolution of your data does not match the resolution you want for your visualization.
For example, you want to plot a single value per minute, but your data is spaced in 10-second intervals. You will need to resample the data.
Your data is non-continuous, but you want to visualize a continuous time series.
For example, you want to plot every minute for the past 24 hours, but you are missing data for some intervals. You will need to fill in the missing values.
This tutorial demonstrates the shortcomings of visualizing the non-normalized data and shows you how to address these shortcomings by normalizing your data using SQL.
Note
This tutorial focuses on the use of SQL. Code examples demonstrate the use of the CrateDB Python client. However, the following guidelines will work with any language that allows for the execution of SQL.
Prerequisites¶
You must have CrateDB installed and running.
Mock data¶
This tutorial works with ISS location data. Before continuing, you should have generated some ISS data by following one of the tutorials in the previous section.
Python setup¶
You should be using the latest stable version of Python.
You must have the following Python libraries installed:
pandas – querying and data manipulation
SQLAlchemy – a powerful database abstraction layer
The CrateDB Python Client – SQLAlchemy support for CrateDB
Matplotlib – data visualization
geojson – Functions for encoding and decoding GeoJSON formatted data
You can install (or upgrade) the necessary libraries with Pip:
sh$ pip3 install --upgrade pandas sqlalchemy crate matplotlib geojson
Using Jupyter Notebook¶
This tutorial shows you how to use Jupyter Notebook so that you can display data visually and experiment with the commands as you see fit.
Jupyter Notebook allows you to create and share documents containing live code, equations, visualizations, and narrative text.
You can install Jupyter with Pip:
sh$ pip3 install --upgrade notebook
Once installed, you can start a new Jupyter Notebook session, like this:
sh$ jupyter notebook
This command should open a new browser window. In this window, select New (in the top right-hand corner), then Notebook → Python 3.
Type your Python code at the input prompt. Then, select Run (Shift-Enter ⇧⏎) to evaluate the code:
You can re-evaluate input blocks as many times as you like.
Alternative shells¶
Jupyter mimics Python’s interactive mode.
If you’re more comfortable in a text-based environment, you can use the standard Python interpreter. However, we recommend IPython (the kernel used by Jupyter) for a more user-friendly experience.
You can install IPython with Pip:
sh$ pip3 install --upgrade ipython
Once installed, you can start an interactive IPython session like this:
sh$ ipython Python 3.9.10 (main, Jan 15 2022, 11:48:04) Type 'copyright', 'credits' or 'license' for more information IPython 8.0.1 -- An enhanced Interactive Python. Type '?' for help. In [1]:
Steps¶
To follow along with this tutorial, copy and paste the example Python code into Jupyter Notebook and evaluate the input one block at a time.
Query the raw data¶
This tutorial uses pandas to query CrateDB and manipulate the results.
To get started, import the pandas library:
import pandas
Pandas uses SQLAlchemy and the CrateDB Python client to provide support for crate:// style connection strings.
Then, query the raw data:
pandas.read_sql('SELECT * FROM doc.iss', 'crate://localhost:4200')
Note
By default, CrateDB binds to port 4200 on localhost. Edit the connection string as needed.
If you evaluate the read_sql() call above, the Jupyter notebook should eventually display a table like this:
Here are a few ways to improve this result:
The current query returns all data. At first, this is probably okay for visualization purposes. However, as you generate more data, you will probably find it more useful to limit the results to a specific time window.
The timestamp column isn't human-readable. It would be easier to understand the results if this value was displayed as a human-readable time.
The position column is a geographic type. This data type isn't easy to plot on a traditional graph. However, you can use the distance() function to calculate the distance between two geo_point values. If you compare position to a fixed place, you can plot distance over time for a graph showing you how far away the ISS is from some location of interest.
Here’s an improvement that wraps the code in a function named raw_data() so that you can execute this query multiple times:
import pandas

def raw_data():
    # From <
    berlin_position = [52.520008, 13.404954]
    # Returns distance in kilometers (division by 1000)
    sql = f'''
        SELECT iss.timestamp AS time,
               DISTANCE(iss.position, {berlin_position}) / 1000 AS distance
        FROM doc.iss
        WHERE iss.timestamp >= CURRENT_TIMESTAMP - INTERVAL '1' DAY
        ORDER BY time ASC
    '''
    return pandas.read_sql(sql, 'crate://localhost:4200', parse_dates={'time': 'ms'})
Specifically:
You can define the location of Berlin and interpolate that into the query to calculate the DISTANCE() of the ISS ground point in kilometers.
You can use CURRENT_TIMESTAMP with an interval value expression (INTERVAL '1' DAY) to calculate a timestamp that is 24 hours in the past. You can then use a WHERE clause to filter out records with a timestamp older than one day.
An ORDER BY clause sorts the results by timestamp, oldest first.
You can use the parse_dates argument to specify which columns read_sql() should parse as datetimes. Here, a dictionary with the value ms is used to specify that time is a millisecond integer.
Execute the raw_data() function:
raw_data()
Jupyter should display a table like this:
Above, notice that the query used by the raw_data() function produces:
- Fewer rows than the previous query (limited by the 24-hour time window)
- A human-readable time (instead of a timestamp)
- The distance of the ISS ground point in kilometers (instead of a geo_point object)
Plot the data¶
You can plot the data returned by the previous query using Matplotlib.
Here’s an example function that plots the data:
import matplotlib.pyplot as plt
import matplotlib.dates as mdates

def plot(data):
    fig, ax = plt.subplots(figsize=(12, 6))
    ax.scatter(data['time'], data['distance'])
    ax.set(
        xlabel='Time',
        ylabel='Distance (km)',
        title='ISS Ground Point Distance (Past 24 Hours)')
    ax.xaxis_date()
    ax.xaxis.set_major_locator(mdates.HourLocator())
    ax.xaxis.set_major_formatter(mdates.DateFormatter('%H:00'))
    # Plot the whole date range (null time values are trimmed by default)
    ax.set_xlim(data.min()['time'], data.max()['time'])
    fig.autofmt_xdate()
Above, the plot() function:
- Generates a figure that measures 12 × 6 (inches)
- Plots data as a scatter diagram (distance over time)
- Sets the axes labels and title
- Sets up the x-axis to handle datetimes
- Configures major tick locations every hour
- Configures major tick formatting with a time string (%H:00)
- Forces Matplotlib to plot the whole data set, including null time values, by manually setting the limits of the x-axis (which are trimmed by default)
- Activates x-axis tick label auto-formatting (rotates them for improved readability)
See also
The full Matplotlib documentation
You can test the plot() function by passing in the return value of raw_data():
plot(raw_data())
Jupyter should display a plot like this:
Above, notice that:
- This plot looks more like a line chart than a scatter diagram. That's because the raw data appears in intervals of 10 seconds. At this resolution, such a high sampling frequency produces so many data points that they appear to be a continuous line.
- The x-axis does not cover a full 24 hours. Matplotlib is plotting the whole data set, as requested. However, the data generation script has only been running for a short period. The query used by raw_data() only filters out records older than 24 hours (using a WHERE clause). The query does not fill in data for any missing time intervals. As a result, the visualization may be inaccurate if there is any missing data (in the sense that it will not indicate the presence of missing data).
Resample the data¶
When plotting a longer timeframe, a sampling frequency of 10 seconds can be too high, creating an unnecessarily large number of data points. Therefore, here is a basic approach to resampling the data at a lower frequency:
- Place values of the time column into bins for a given interval (using DATE_BIN()). In this example, we are resampling the data per minute: all rows with an identical time value at minute level are placed into the same date bin.
- Group rows per date bin (using GROUP BY). The position index 1 is a reference to the first column of the SELECT clause, so we don't need to repeat the whole DATE_BIN function call.
- Calculate an aggregate value across the grouped rows. For example, if you have six rows with six distances, you can calculate the average distance (using avg(column)) and return a single value.
Tip
Date bin is short for date binning, or data binning in general. It is sometimes also referred to as time bucketing.
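The binning the steps above describe can be imitated locally with pandas, without CrateDB. The following is a self-contained sketch with made-up distance values; the resample() call plays the role of DATE_BIN + GROUP BY + AVG:

```python
import pandas as pd

# Hypothetical 10-second samples covering two minutes
idx = pd.date_range("2022-01-01 07:00:00", periods=12, freq="10s")
df = pd.DataFrame({"distance": range(12)}, index=idx)

# Bin into 1-minute buckets and aggregate: "size" mirrors the records
# count and "mean" mirrors the AVG(distance) in the SQL version
per_minute = df.resample("1min")["distance"].agg(["size", "mean"])
print(per_minute)
```

Each result row aggregates six 10-second samples, just like the records column in the SQL version reports how many source rows fell into each date bin.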
Here’s a new function with a rewritten query that implements the three steps above and resamples the raw data by the minute:
def data_by_minute():
    # From <
    berlin_position = [52.520008, 13.404954]
    # Returns distance in kilometers (division by 1000)
    sql = f'''
        SELECT DATE_BIN('1 minute'::INTERVAL, iss.timestamp, 0) AS time,
               COUNT(*) AS records,
               AVG(DISTANCE(iss.position, {berlin_position}) / 1000.0) AS distance
        FROM doc.iss
        WHERE iss.timestamp >= CURRENT_TIMESTAMP - '1 day'::INTERVAL
        GROUP BY 1
        ORDER BY 1 ASC
    '''
    return pandas.read_sql(sql, 'crate://localhost:4200', parse_dates={'time': 'ms'})
Note
The DATE_BIN function is available in CrateDB versions >= 4.7.0. In older versions, you can use DATE_TRUNC('minute', "timestamp") instead.
The records column produced by this query will tell you how many source rows have been grouped per result row.
Check the output:
data_by_minute()
Tip
Despite an ideal time series interval of 10 seconds, some result rows may be aggregating values over fewer than six records. Irregularities may occur when:
- Data collection started or stopped during that period
- There were delays in the data collection (e.g., caused by network latency, CPU latency, disk latency, and so on)
You can plot this data like before:
plot(data_by_minute())
Here, notice that the individual data points are now visible (i.e., the apparent line in the previous diagram is now discernible as a series of discrete values).
Interpolate missing records¶
The data_by_minute() function resamples data by the minute. However, the query used can only resample data for minutes with one or more records. If you want one data point per minute interval irrespective of the number of records, you must interpolate those values.
You can interpolate data in many ways, some more advanced than others. For this tutorial, we will show you how to achieve the simplest possible type of interpolation: null interpolation.
Null interpolation works by filling in any gaps in the time series with NULL values. NULL is a value used to indicate missing data. The result is a time series that indicates the presence of missing data, lending itself well to accurate visualization.
You can perform null interpolation like so:
- Generate continuous null data for the same period as the right-hand table of a join. You should sample this data at the frequency most appropriate for your visualization.
- Select the data for the period you are interested in as the left-hand table of a join. You should resample this data at the same frequency as your null data.
- Join both tables with a left outer join on time to pull across any non-null values from the right-hand table.
The result is a row set that has one row per interval for a fixed period with null values filling in for missing data.
See also
Read more about how joins work.
A brief example¶
To illustrate how null interpolation works with a brief example, imagine that you are interested in a specific five minute period between 07:00 and 07:05.
Here’s your resampled data:
Notice that rows for 07:01 and 07:04 are missing. Perhaps the data collection process ran into issues during those time windows.
If you generate null data for the same period, it will look like this:
Note
A column full of null values will be cast to None values by pandas. That's why this table displays None instead of NULL.
If you perform a left outer join with those two result sets (on the time column), you will end up with the following:
Here, notice that:
There is one result row per minute interval, even when there are no corresponding records.
Missing data results in a distance value of NaN (Not a Number). Pandas will cast NULL values to NaN when a column contains numeric data.
See also
Read more about Working with missing data using pandas.
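The same left-join idea can be imitated in pandas with reindex(), which fills missing minutes with NaN. This is a local sketch with made-up values, not part of the tutorial's SQL:

```python
import pandas as pd

# Resampled data with the rows for 07:01 and 07:04 missing (hypothetical values)
data = pd.DataFrame(
    {"distance": [100.0, 120.0, 130.0, 150.0]},
    index=pd.to_datetime(["2022-01-01 07:00", "2022-01-01 07:02",
                          "2022-01-01 07:03", "2022-01-01 07:05"]),
)

# Continuous per-minute index for the window (the "null data" table)
full = pd.date_range("2022-01-01 07:00", "2022-01-01 07:05", freq="1min")

# Reindexing plays the role of the left outer join:
# missing minutes become NaN rows
joined = data.reindex(full)
print(joined)
```

The result has one row per minute, with NaN marking the two missing intervals, which is exactly the shape the SQL join below produces.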
Generate continuous null data for the past 24 hours¶
You can generate continuous null data with the generate_series() table function. A table function is a function that produces a set of rows.
For example, this query generates null values for every minute in the past 24 hours:
def null_by_minute_24h():
    sql = '''
        SELECT time, NULL AS distance
        FROM generate_series(
            DATE_TRUNC('minute', CURRENT_TIMESTAMP) - INTERVAL '24 hours',
            DATE_TRUNC('minute', CURRENT_TIMESTAMP),
            '1 minute'::INTERVAL
        ) AS series(time)
    '''
    return pandas.read_sql(sql, 'crate://localhost:4200', parse_dates={'time': 'ms'})
Test the function, like so:
null_by_minute_24h()
Plot the data:
plot(null_by_minute_24h())
This plot displays null values for a full 24 hour period.
Conceptually, all that remains is to combine this null plot with the plot that includes your resampled data.
Bring it all together¶
To combine the null data with your resampled data, you can write a new query that performs a left outer join, as introduced above.
def data_24h():
    # From <
    berlin_position = [52.520008, 13.404954]
    # Returns distance in kilometers (division by 1000)
    sql = f'''
        SELECT time,
               COUNT(*) AS records,
               AVG(DISTANCE(iss.position, {berlin_position}) / 1000) AS distance
        FROM generate_series(
            DATE_TRUNC('minute', CURRENT_TIMESTAMP) - INTERVAL '24 hours',
            DATE_TRUNC('minute', CURRENT_TIMESTAMP),
            '1 minute'::INTERVAL
        ) AS series(time)
        LEFT JOIN doc.iss ON DATE_BIN('1 minute'::INTERVAL, iss.timestamp, 0) = time
        GROUP BY time
        ORDER BY time ASC
    '''
    return pandas.read_sql(sql, 'crate://localhost:4200', parse_dates={'time': 'ms'})
In the code above:
The generate_series() table function creates a row set called time that has one row per minute for the past 24 hours.
The iss table can be joined to the time series by binning the iss.timestamp column to the minute for the join condition.
Like before, a GROUP BY clause can be used to collapse multiple rows per minute into a single row per minute.
Similarly, the avg(column) function can be used to compute an aggregate DISTANCE value across multiple rows. There is no need to check for null values here because the AVG() function discards null values.
Test the function:
data_24h()
Plot the data:
plot(data_24h())
And here’s what it looks like if you wait a few more hours:
The finished result is a visualization that uses time series normalization and resamples raw data to regular intervals with the interpolation of missing values.
This visualization resolves both original issues:
You wanted to plot a single value per minute, but the data was spaced in 10-second intervals: resampling with DATE_BIN() addresses this.
You wanted to plot every minute for the past 24 hours despite missing data for some intervals: joining against generate_series() fills in the missing values.
https://crate.io/docs/crate/howtos/en/latest/getting-started/normalize-intervals.html
Accessing remote data through cross-domain ajax call in jquery
While developing a mobile app using PhoneGap (or otherwise), we can access a remotely hosted MySQL database using jQuery Ajax calls. But this interaction between jQuery and the MySQL database cannot happen directly. We need to specify a server-side script (in PHP terminology) or a controller action (in Grails terminology) that will fetch data from the MySQL database and serve it to the jQuery call. jQuery simply makes a cross-domain Ajax request to the server-side script, and the script sends the requested data as the response.
For a successful cross-domain communication, we need to use dataType “jsonp” in jquery ajax call.
JSONP or “JSON with padding” is a complement to the base JSON data format which provides a method to request data from a server in a different domain, something prohibited by typical web browsers.
When we specify dataType as jsonp, a “callback” parameter is appended to the request URL, and jQuery creates a function whose name is the value of the callback parameter. On the server side, the script receives the “callback” parameter value (which is the name of the function) and sends the data as an argument to that function. That data is then also available in the success handler of the jQuery call.
Jquery Code :
function crossDomainCall(url, data, fnSuccess, fnError) {
    $.ajax({
        type: 'POST',
        url: url,
        contentType: "application/json",
        dataType: 'jsonp',
        crossDomain: true,
        data: data,
        success: fnSuccess,
        error: fnError
    });
}

function authenticateUser(username, password) {
    var url = '';
    var data = {username: username, password: password};
    var fnSuccess = function (dataReceived) {
        if (dataReceived) {
            alert("Welcome " + dataReceived.name);
        } else {
            alert("Authentication failed");
        }
    };
    var fnError = function (e) {
        alert(e);
    };
    crossDomainCall(url, data, fnSuccess, fnError);
}
Server side code :
def authenticate(String username, String password) {
    User user = User.findByNameAndPassword(username, password)
    if (user) {
        render "${params.callback}(${user as JSON})"
    } else {
        render "${params.callback}(null)"
    }
}
Here, the function name is received from params.callback, and the data is sent in JSON form as an argument to that function.
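To make the padding step concrete, here is a minimal framework-agnostic sketch in Python. The function name jsonp_response is hypothetical, not part of jQuery or Grails:

```python
import json

def jsonp_response(callback, payload):
    """Wrap a JSON payload in the client-supplied callback name
    (the 'padding' in JSONP)."""
    return "{}({})".format(callback, json.dumps(payload))

# The client requested ...?callback=jQuery12345, so the server replies with:
body = jsonp_response("jQuery12345", {"name": "Raj"})
print(body)  # jQuery12345({"name": "Raj"})
```

The browser then evaluates this response as a script, invoking the jQuery-generated callback with the data.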
Hope it helps.
You can side-step CORS by using a proxy and get non-JSONP data. Yahoo has one (YQL). Here’s an implementation using jQuery.ajax(), it works for both XML and HTML: gist.github.com/rickdog/d66a03d1e1e5959aa9b68869807791d5
Nice article. I have found something on how the Apache end needs to be configured for cross-domain Ajax. Here is the URL: techflirt.com/cross-domain-ajax-request
Hello Raj,
I work with Grails. If I have to access a Grails service through a PhoneGap application, what configuration do I have to do at the Grails app end?
http://www.tothenew.com/blog/accessing-remote-data-through-cross-domain-ajax-call-in-jquery/
if (user_id == 1)
    pass1 = stricmp(password, "admin");
if (user_id == 2)
    pass1 = stricmp(password, "user");
The above code is part of a user-authentication function I wrote for my assignment. I am restricted to the old Turbo C compiler, with which I can't use namespaces.
The code needs to be modified so that the user can enter the password as a character array, which in turn will be stored as the password for that particular user id.
To put it simply, I need to know how I can copy a character array into another character array. I am a beginner and would really be glad if you could help!
thanks
http://www.dreamincode.net/forums/topic/197918-storing-a-character-array-inside-another-character-array/
We have an improved SOS.DLL with many bug fixes and enhancements. Tom Christian in Product Support maintains it, and gave me permission to post it here under the name PSSCOR.DLL.
[update June 2005 - For some time now, PSSCOR.DLL has been included with the Windows Debugger package, although it is renamed to SOS.DLL. I've removed this old link because you get it just by installing the debugger].
It works on V1.0 and V1.1 of the CLR. Load it in the same way you'd load sos.dll in the Windows Debugger, with “.load psscor.dll“. The good thing about PSSCOR.DLL is that we can fix bugs and enhance functions without going through a lengthy QFE process. If you've found bugs in SOS, it's likely that many were fixed already in PSSCOR.DLL.
The code examples below use psscor.dll to explore the gc heap. You'll want to use it in lieu of SOS, because some commands like !DumpMT and !EEHeap have additional useful output.
It's useful to know how objects are laid out in the gc heap. During garbage collection, valid objects are marked by recursively visiting objects starting from roots on stacks and in handles. But it's also important that the objects sit in an organized way from the beginning to end of each heap segment. The psscor !DumpHeap command counts on this logical organization to walk the heap properly, and if it reports an error you can bet something is wrong with your heap (and will bite you later with a perplexing access violation). So to understand what !dumpheap is talking about, here is your guide to walking these objects by hand, hopping from one stone to another across a vast lake.
First you need a program. I have taken this program from Joel Pobar's Reflection Emit example, and inserted a PInvoke to DebugBreak so you can easily stop in the Windows Debugger. (You could use Visual Studio for these illustrations too, but the Windows Debugger "dd" command is quicker for viewing memory).
using System;
using System.Reflection;
using System.Reflection.Emit;
using System.Threading;
using System.Runtime.InteropServices;

public class EmitHelloWorld
{
    [DllImport("kernel32")]
    public static extern void DebugBreak();

    // [...] (the assembly/method-building code from the Reflection Emit example)

        DebugBreak();

        // set the entry point for the application and save it
        assemblyBuilder.SetEntryPoint(methodbuilder, PEFileKinds.ConsoleApplication);
        assemblyBuilder.Save("HelloWorld.exe");
    }
}
Save the program as example.cs, compile and run "cdb -g example.exe"
When you reach the breakpoint, load psscor and run "!eeheap -gc". It lists the heap segments that objects are stored in:

[...]
Large object heap starts at 0x01a51000
 segment    begin  allocated       size
01a50000 01a51000  01a54060  0x00003060(12384)
Total Size   0x96060(614496)
------------------------------
GC Heap Size 0x96060(614496)
This is a small gc heap, with only one normal object segment, and one large object segment (for objects over 80K). It's fine for our purposes. Normal-sized objects start at address 00a51000, and end at 00ae4000. In general we have this simple pattern:
|---------|  segment.begin = 00a51000
|object 1 |
|_________|
|object 2 |
|_________|
|   ...   |
|_________|
|object N |
|_________|  segment.allocated = 00ae4000
How large is each object? You can run !dumpobj to find out. The interesting thing is that each object has a 4 byte header, and the size of the header for object 2 is included in the size of object 1. Another point is that a special kind of object called a "Free" object lives in the heap. This is used to plug holes between valid objects. These Free objects are temporary, in that if a compacting gc occurs they'll disappear. Yun wrote a great article about how the heap could be unable to compact in the face of heavy pinning, and be filled with Free objects.
Let's start walking. (My heap may look different because it's a Whidbey debug build)
0:000> !dumpobj 00a51000
Free Object
Size 12(0xc) bytes
0:000> !dumpobj 00a51000+c
Free Object
Size 12(0xc) bytes
0:000> !dumpobj 00a51000+c+c
Free Object
Size 12(0xc) bytes
0:000> !dumpobj 00a51000+c+c+c
Name: System.OutOfMemoryException
MethodTable: 03077e9c
EEClass: 03064050
Size: 68(0x44) bytes
(C:\WINDOWS\Microsoft.NET\Framework\v2.0.x86dbg\mscorlib.dll)
Fields:
      MT   Field  Offset          Type     Attr       Value Name
03076b7c 40000a5       4         CLASS instance    00000000 _className
03076b7c 40000a6       8         CLASS instance    00000000 _exceptionMethod
03076b7c 40000a7       c         CLASS instance    00000000 _exceptionMethodString
03076b7c 40000a8      10         CLASS instance    00000000 _message
03076b7c 40000a9      14         CLASS instance    00000000 _data
03076b7c 40000aa      18         CLASS instance    00000000 _innerException
03076b7c 40000ab      1c         CLASS instance    00000000 _helpURL
03076b7c 40000ac      20         CLASS instance    00000000 _stackTrace
03076b7c 40000ad      24         CLASS instance    00000000 _stackTraceString
03076b7c 40000ae      28         CLASS instance    00000000 _remoteStackTraceString
03076b7c 40000af      30  System.Int32 instance           0 _remoteStackIndex
03076b7c 40000b0      34  System.Int32 instance -2147024882 _HResult
03076b7c 40000b1      2c         CLASS instance    00000000 _source
03076b7c 40000b2      38 System.IntPtr instance           0 _xptrs
03076b7c 40000b3      3c  System.Int32 instance  -532459699 _xcode
Wow, it took some time to get to something interesting. You could continue like this until you get a buffer overflow due to all the "+c+44+68+12+..." You can also let !DumpHeap do this for you. It gives a rather sparse printout of the object pointers. Let's limit the output to the segment we care about (and note that Size is in decimal):
0:000> !dumpheap 00a51000 00ae4000
 Address       MT     Size
00a51000 0015c260       12 Free
00a5100c 0015c260       12 Free
00a51018 0015c260       12 Free
00a51024 03077e9c       68
00a51068 030782cc       68
00a510ac 030786fc       68
00a510f0 03078b5c       68
00a51134 030f7b54       20
00a51148 0308b06c      108
00a511b4 030fa5bc       32
00a511d4 0305bbf8       28
00a511f0 030592e0       80
00a51240 0015c260       72 Free
...
How do we know the size of each object? Just look at the MethodTable, the first DWORD of the object. You can run !dumpmt on it:
0:000> !dumpmt 03077e9c
EEClass: 03064050
Module: 0016b118
Name: System.OutOfMemoryException
mdToken: 02000038 (C:\WINDOWS\Microsoft.NET\Framework\v2.0.x86dbg\mscorlib.dll)
BaseSize: 44
Number of IFaces in IFaceMap: 2
Slots in VTable: 21
BaseSize is in hex here. (We have a hard time deciding how we like to see these things!) How about arrays, how do we know their size? Let's list all the arrays in the segment to figure it out:
0:000> !dumpheap -type [] 00a51000 00ae4000
 Address       MT     Size
00a511f0 030592e0       80
00a5129c 03115b68       56
00a51348 03135ca0       76
00a513a8 030592e0       16
00a51434 0313b1c0      144
00a51634 0313c234      100
00a51698 0313c620       56
00a51cc4 030592e0       16
00a51e8c 0313b1c0      144
00a52008 0313b1c0      144
00a52244 0313b1c0      144
00a52308 0313b1c0      144
00a523cc 0313b1c0      144
00a52620 0313b1c0      144
00a526e4 0313b1c0      144
00a52a14 031e23f8       36
00a52b7c 0313b1c0      144
00a52c0c 0315778c     1084
00a53048 0315778c     1628
00a536a4 0315778c      824
...
Picking one at random:
0:000> !dumpobj 00a52c0c
Name: System.Int32[]
MethodTable: 0315778c
EEClass: 03157708
Size: 1084(0x43c) bytes
Array: Rank 1, Type Int32
Element Type: System.Int32
Fields:
None
The formula for determining array size is:
MethodTable.BaseSize + (MethodTable.ComponentSize * Object.Components)
!dumpmt will tell you the first two:
0:000> !dumpmt 315778c
EEClass: 03157708
Module: 0016b118
Name: System.Int32[]
mdToken: 02000000 (C:\WINDOWS\Microsoft.NET\Framework\v2.0.x86dbg\mscorlib.dll)
BaseSize: 0xc
ComponentSize: 0x4
Number of IFaces in IFaceMap: 4
Slots in VTable: 25
and you can find the number of items in the array with:
0:000> dd 00a52c0c+4 l1
00a52c10  0000010C
0:000>
[I'm sure Josh Williams will come along and chide me for forgetting that on 64-bit pointers are 8 bytes, so I'd have to add 8 instead of 4 above. :p]. 0xc + (0x10C*0x4) = 0x43c, so our size is correct.
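The numbers can be checked with a few lines of arithmetic (Python here, purely as a calculator; the hex values are the ones from the !dumpmt and dd output above):

```python
# Array size formula from the post:
#   size = MethodTable.BaseSize + MethodTable.ComponentSize * numComponents
base_size = 0xC         # from !dumpmt: BaseSize
component_size = 0x4    # from !dumpmt: ComponentSize (System.Int32)
num_components = 0x10C  # from "dd <obj>+4 l1": 268 elements

size = base_size + component_size * num_components
print(hex(size))  # 0x43c, matching the Size that !dumpobj reported
```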
So we understand object sizes, and how they are arranged. There is one thing missing though, and this is the presence of zero-filled regions throughout the heap called Allocation Contexts. For efficiency, each managed thread can be given such a region to direct new allocations to. This allows multithreaded apps to allocate without expensive locking operations. There is also an Allocation Context for the heap segment that contains generations 0 and 1 (also called the Ephemeral Segment). The !dumpheap command is aware of these regions, and steps lightly over them. You can get the thread Allocation Context addresses with the !threads command:
0:000> !threads
ThreadCount: 2
UnstartedThread: 0
BackgroundThread: 1
PendingThread: 0
DeadThread: 0
                                  PreEmptive   GC Alloc            Lock
 ID OSID ThreadOBJ  State    GC   Context             Domain   Count APT Exception
  0    1 16ac 00155da8   a020 Enabled  00ae2e1c:00ae3ff4 0014a890     0 MTA
  2    2 169c 001648f8   b220 Enabled  00000000:00000000 0014a890     0 MTA (Finalizer)
Thread 0 (the main thread) has an allocation context, from 00ae2e1c to 00ae3ff4. If we look at that memory, we'll see all zeros:
0:000> dd 00ae2e1c
00ae2e1c  00000000 00000000 00000000 00000000
00ae2e2c  00000000 00000000 00000000 00000000
00ae2e3c  00000000 00000000 00000000 00000000
00ae2e4c  ...
As for the Ephemeral Segment Allocation Context, we don't have one, as recalling the !eeheap -gc output shows...
You might end up with a buffer overflow someday, and obliterate the MethodTable of an object right after your array of StrongBad fan club members. The next time a GC occurs, your program will crash. Let's simulate that dreadful occurrence and see how !dumpheap responds:
0:000> ed adf7f8 00650033   (I'm overwriting the MethodTable of the array we've been enjoying)
0:000> !dumpheap 00a51000 00ae4000
...
00adf7ac 03135ca0       76
object 00adf7f8: does not have valid MT
curr_object : 00adf7f8
Last good object: 00adf7ac
This allows you to become suspicious of the last good object, 00adf7ac. Of course we know he's alright, he's not responsible for what happened. But in the real world, an aggressive response is required! [imagine WWII air-raid siren here]
What is that last good object anyway?
0:000> !dumpobj adf7ac
Name: System.Byte[]
MethodTable: 03135ca0
EEClass: 03135c1c
Size: 76(0x4c) bytes
Array: Rank 1, Type Byte
Element Type: System.Byte
Fields:
None
Who cares about him? If I can find a root to this object on a stack, I may be close to code that would overwrite the next object:
0:000> !gcroot adf7ac
Note: Roots found on stacks may be false positives. Run "!help gcroot" for
more info.
Scan Thread 0 OSTHread 16ac
ESP:12ea9c:Root:00ad4914(System.Reflection.Emit.MethodBuilder)->
00ad4448(System.Reflection.Emit.TypeBuilder)->
00adf52c(System.Reflection.Emit.MethodBuilder)->
00adf74c(System.Reflection.Emit.ILGenerator)->
00adf7ac(System.Byte[])
Scan Thread 2 OSTHread 169c
Thread 0, eh? He's employed by an ILGenerator, eh? What kind of nefarious operations are going on in their shop! Okay I'll stop. But it's true, often the last good object is somehow responsible, and a PInvoke overrun is the reason why.
I've ignored the Large Object Heap segment, but it is crawled in the same way. It has no pesky Allocation Contexts to muddy the water. Large Object segments are never compacted; it would take too long to move such objects around, as they are over 80K in size.
Have fun with PSSCOR.DLL.
From the post-receive file:
# This
#
But when I test it with echo "$1 $2 $3", I get a blank line only. Does anyone know why?
echo "$1 $2 $3"
Here's a simple example that confirms koumes21's answer. I made post-receive a Python script with the following code:
#!/usr/bin/env python
# -*- coding: UTF-8 -*-
import sys
print "ARGS:", sys.argv
a = sys.stdin.read()
(old, new, ref) = a.split()
print "Old: %s" % old
print "New: %s" % new
print "Ref: %s" % ref
Here's the output after a push. Notice that "ARGS" only reports the name of the script and none of the stdin.
inneralienmbp$ git push
Counting objects: 5, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 299 bytes, done.
Total 3 (delta 1), reused 0 (delta 0)
Unpacking objects: 100% (3/3), done.
remote: ARGS: ['hooks/post-receive']
remote: Old: 5c9f9a43132516040200ae76cc2f4f2cad57d724
remote: New: 95e0e2873eaad2a9befa2dff7e2ce9ffdf3af843
remote: Ref: refs/heads/master
To /Users/tweaver/test2/test.git/
5c9f9a4..95e0e28 master -> master
Thanks koumes21!
That is because the arguments are passed on stdin, not as command-line arguments. A single push can update multiple refs, and each update is passed to your script as its own line. So you can either use the read command or get the input from /dev/stdin.
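Since one push can carry several ref updates, a hook should loop over every line on stdin rather than read just one. A minimal Python sketch (the parsing is factored into a function purely for clarity; the function name is mine, not part of git):

```python
import sys

def parse_push_lines(text):
    """Parse post-receive stdin: one 'old new ref' triple per line."""
    updates = []
    for line in text.splitlines():
        if not line.strip():
            continue  # ignore blank lines
        old, new, ref = line.split()
        updates.append((old, new, ref))
    return updates

if __name__ == "__main__":
    for old, new, ref in parse_push_lines(sys.stdin.read()):
        print("Old: %s  New: %s  Ref: %s" % (old, new, ref))
```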
Here is a post on Stack Overflow which solves this problem.
Here is a simple version for what you are trying to get:
read oldrev newrev ref
echo "$oldrev"
echo "$newrev"
echo "$ref"
Here is the version that I use for my CI server and email hook
read oldrev newrev ref
echo "$oldrev" "$newrev" "$ref" | . /usr/share/git-core/contrib/hooks/post-receive-email
if [ "refs/heads/qa" == "$ref" ]; then
# Big Tuna YO!
wget -q -O - --connect-timeout=2
fi
The Python tempfile Module
Introduction
Temporary files, or "tempfiles", are mainly used to store intermediate information on disk for an application. These files are normally created for different purposes such as temporary backup or if the application is dealing with a large dataset bigger than the system's memory, etc. Ideally, these files are located in a separate directory, which varies on different operating systems, and the name of these files are unique. The data stored in temporary files is not always required after the application quits, so you may want these files to be deleted after use.
Python provides a module known as tempfile, which makes creating and handling temporary files easier. This module provides a few methods to create temporary files and directories in different ways.
tempfile comes in handy whenever you want to use temporary files to store data in a Python program. Let's take a look at a couple of different examples on how the
tempfile module can be used.
Creating a Temporary File
Suppose your application needs a temporary file for use within the program, i.e. it will create one file, use it to store some data, and then delete it after use. To achieve this, we can use the
TemporaryFile() function.
This function will create one temporary file to the default
tempfile location. This location may be different between operating systems. The best part is that the temporary file created by
TemporaryFile() will be removed automatically whenever the file is closed. Also, it does not create any reference to this file in the system's filesystem table. This makes it private to the current application, i.e. no other program will be able to open the file.
Let's take a look at the below Python program to see how it works:
import tempfile                                #1

print("Creating one temporary file...")
temp = tempfile.TemporaryFile()                #2
try:
    print("Created file is:", temp)            #3
    print("Name of the file is:", temp.name)   #4
finally:
    print("Closing the temp file")
    temp.close()                               #5
It will print the below output:
$ python3 temp-file.py
Creating one temporary file...
Created file is: <_io.BufferedRandom name=4>
Name of the file is: 4
Closing the temp file
- To create one temporary file in Python, you need to import the
tempfile module.
- As explained above, we have created the temporary file using the
TemporaryFile() function.
- From the output, you can see that the created object is actually not a file, it is a file-like object. And the
mode parameter (not shown in our example) of the created file is
w+b, i.e. you can both read from and write to it without reopening it.
- The temporary file created has no name.
- Finally, we are closing the file using the
close() method. It will be destroyed after it is closed.
One thing we should point out is that the file created using the
TemporaryFile() function may or may not have a visible name in the file system. On Unix, the directory entry for the file is removed automatically after it is created, although this is not supported on other platforms. Normally
TemporaryFile() is the ideal way to create one temporary storage area for any program in Python.
Create a Named Temporary File
In our previous example, we have seen that the temporary file created using the
TemporaryFile() function is actually a file-like object without an actual file name. Python also provides a different method,
NamedTemporaryFile(), to create a file with a visible name in the file system. Other than providing a name to the tempfile,
NamedTemporaryFile() works the same as
TemporaryFile(). Now let's use the same above example to create a named temporary file:
import tempfile

print("Creating one named temporary file...")
temp = tempfile.NamedTemporaryFile()
try:
    print("Created file is:", temp)
    print("Name of the file is:", temp.name)
finally:
    print("Closing the temp file")
    temp.close()
Running this code will print output similar to the following:
$ python3 named-temp-file.py
Creating one named temporary file...
Created file is: <tempfile._TemporaryFileWrapper object at 0x103f22ba8>
Name of the file is: /var/folders/l7/80bx27yx3hx_0_p1_qtjyyd40000gn/T/tmpa3rq8lon
Closing the temp file
So, the created file actually has a name this time. The advantage of
NamedTemporaryFile() is that we can save the name of the created temp files and use them later before closing or destroying it. If the
delete parameter is set to
False, then we can close the file without it being destroyed, allowing us to re-open it later on.
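That behavior can be seen in a short sketch (the file content here is arbitrary):

```python
import os
import tempfile

# With delete=False the file survives close(), so it can be re-opened later.
temp = tempfile.NamedTemporaryFile(delete=False)
path = temp.name
temp.write(b"kept after close")
temp.close()              # the file still exists on disk

with open(path, "rb") as f:
    data = f.read()
print("Re-read:", data)

os.remove(path)           # cleanup is now our job
```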
Providing a Suffix or Prefix to the Name
Sometimes we need to add a prefix or suffix to a temp-file's name. It will help us to identify all temp files created by our program.
To achieve this, we can use the same
NamedTemporaryFile function defined above. The only thing we need to add is two extra parameters while calling this function:
suffix and
prefix
import tempfile

temp = tempfile.NamedTemporaryFile(prefix="dummyPrefix_", suffix="_dummySuffix")
try:
    print("Created file is:", temp)
    print("Name of the file is:", temp.name)
finally:
    temp.close()
Running this code will print the following output:
$ python3 prefix-suffix-temp-file.py
Created file is: <tempfile._TemporaryFileWrapper object at 0x102183470>
Name of the file is: /var/folders/tp/pn3dvz_n7cj7nfs0y2szsk9h0000gn/T/dummyPrefix_uz63brcp_dummySuffix
So, if we pass the two extra arguments
suffix and
prefix to the
NamedTemporaryFile() function, it will automatically add them to the start and end of the file name.
Finding the Default Location of Temp Files
The
tempfile.tempdir variable holds the default location for all temporary files. If the value of
tempdir is
None or unset, Python will search a standard list of directories and sets
tempdir to the first directory value, but only if the calling program can create a file in it. The following are the list of directories it will scan, in this order:
- The directory named by the TMPDIR environment variable.
- The directory named by the TEMP environment variable.
- The directory named by the TMP environment variable
- Platform-specific directories:
- On Windows, C:\TEMP, C:\TMP, \TEMP, and \TMP, in the same order.
- On other platforms, /tmp, /var/tmp, and /usr/tmp, in the same order.
- The current working directory.
To find out the default location of temporary files, we can call
tempfile.gettempdir() method. It will return the value of
tempdir if it is not
None. Otherwise it will first search for the directory location using the steps mentioned above and then return the location.
import tempfile

print("Current temp directory:", tempfile.gettempdir())
tempfile.tempdir = "/temp"
print("Temp directory after change:", tempfile.gettempdir())
If you run the above program, it will print output similar to the following:
$ python3 dir-loc-temp-file.py
Current temp directory: /var/folders/tp/pn3dvz_n7cj7nfs0y2szsk9h0000gn/T
Temp directory after change: /temp
You can see that the first temp directory location is the system-provided directory location and the second temp directory is the same value as the one that we have defined.
Reading and Writing Data from Temp Files
We have learned how to create a temporary file, create a temporary file with a name, and how to create a temporary file with a suffix and/or prefix. Now, let's try to understand how to actually read and write data from a temporary file in Python.
Reading and writing data from a temporary file in Python is pretty straightforward. For writing, you can use the
write() method and for reading, you can use the
read() method. For example:
import tempfile

temp = tempfile.TemporaryFile()
try:
    temp.write(b'Hello world!')
    temp.seek(0)
    print(temp.read())
finally:
    temp.close()
This will print the output as
b'Hello world!' since the
write() method takes its input data in bytes (hence the
b prefix on the string). To work with text instead of bytes, open the file in text mode by passing mode='w+t':
import tempfile

temp = tempfile.TemporaryFile(mode='w+t')
try:
    temp.writelines("Hello world!")
    temp.seek(0)
    print(temp.read())
finally:
    temp.close()
Unlike the previous example, this will print "Hello world!" as plain text, without the b prefix.
Create a Temporary Directory
If your program has several temporary files, it may be more convenient to create one temporary directory and put all of your temp files inside of it. To create a temporary directory, we can use the
TemporaryDirectory() function. When it is used as a context manager, as in the example below, the directory and all of its contents are deleted automatically on exit; created outside a with block, it must be cleaned up manually.
import tempfile

with tempfile.TemporaryDirectory() as tmpdirname:
    print('Created temporary directory:', tmpdirname)
# Both the directory and its contents have been deleted
It will print the below output:
$ python3 mk-dir-temp-file.py
Created temporary directory: /var/folders/l7/80bx27yx3hx_0_p1_qtjyyd40000gn/T/tmpn_ke7_rk
Create a Secure Temporary File and Directory
By using
mkstemp(), we can create a temporary file in the most secure manner possible. The temporary file created using this method is readable and writable only by the creating user ID. We can pass
prefix and
suffix arguments to add prefix and suffix to the created file name. By default, it opens the file in binary mode. To open it in text mode, we can pass
text=True as an argument to the function. Unlike
TemporaryFile(), the file created by
mkstemp() doesn't get deleted automatically after closing it.
As you can see in the example below, the user is responsible for deleting the file.
import tempfile
import os

fd, path = tempfile.mkstemp()
print("File name:", path)
os.close(fd)
os.remove(path)
$ python3 mk-secure-temp-file.py
File name: /var/folders/tp/pn3dvz_n7cj7nfs0y2szsk9h0000gn/T/tmpf8f6xc53
Similar to
mkstemp(), we can create a temporary directory in the most secure manner possible using
the mkdtemp() method. And again, like
mkstemp(), it also supports
prefix and
suffix arguments for adding a prefix and suffix to the directory name.
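A short sketch of mkdtemp() with both arguments (the prefix and suffix values are arbitrary):

```python
import os
import tempfile

# mkdtemp() securely creates the directory and returns its path;
# unlike TemporaryDirectory(), deleting it afterwards is up to us.
path = tempfile.mkdtemp(prefix="demo_", suffix="_dir")
print("Created:", path)
created = os.path.isdir(path)

os.rmdir(path)  # manual cleanup
```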
Conclusion
In this article we have learned different ways to create temporary files and directories in Python. You can use temp files in any Python program you want. But just make sure to delete them if the particular method used doesn't automatically delete them on its own. Also keep in mind that behavior may differ between operating systems, such as the default directory locations and generated file names.
All of the functions we have explained above accept many different arguments, although we have not covered in detail what each function takes. If you want to learn more about the
tempfile module, you should check out the Python 3 official documentation.
This chapter is a reference for the Pintos code. The reference guide does not cover all of the code in Pintos, but it does cover those pieces that students most often find troublesome. You may find that you want to read each part of the reference guide as you work on the project where it becomes important.
We recommend using "tags" to follow along with references to function and variable names (see section F.1 Tags).
This section covers the Pintos loader and basic kernel initialization.
The first part of Pintos that runs is the loader, in
threads/loader.S. The PC BIOS loads the loader into memory.
The loader, in turn, is responsible for finding the kernel on disk,
loading it into memory, and then jumping to its start. It's
not important to understand exactly how the loader works, but if
you're interested, read on. You should probably read along with the
loader's source. You should also understand the basics of the
80x86 architecture as described by chapter 3, "Basic Execution
Environment," of [ IA32-v1].
The PC BIOS loads the loader from the first sector of the first hard disk, called the master boot record (MBR). PC conventions reserve 64 bytes of the MBR for the partition table, and Pintos uses about 128 additional bytes for kernel command-line arguments. This leaves a little over 300 bytes for the loader's own code. This is a severe restriction that means, practically speaking, the loader must be written in assembly language.
The Pintos loader and kernel don't have to be on the same disk, nor is the kernel required to be in any particular location on a given disk. The loader's first job, then, is to find the kernel by reading the partition table on each hard disk, looking for a bootable partition of the type used for a Pintos kernel.
When the loader finds a bootable kernel partition, it reads the partition's contents into memory at physical address 128 kB. The kernel is at the beginning of the partition, which might be larger than necessary due to partition boundary alignment conventions, so the loader reads no more than 512 kB (and the Pintos build process will refuse to produce kernels larger than that). Reading more data than this would cross into the region from 640 kB to 1 MB that the PC architecture reserves for hardware and the BIOS, and a standard PC BIOS does not provide any means to load the kernel above 1 MB.
The loader's final job is to extract the entry point from the loaded kernel image and transfer control to it. The entry point is not at a predictable location, but the kernel's ELF header contains a pointer to it. The loader extracts the pointer and jumps to the location it points to.
The Pintos kernel command line
is stored in the boot loader. The
pintos program actually
modifies a copy of the boot loader on disk each time it runs the kernel,
inserting whatever command-line arguments the user supplies to the kernel,
and then the kernel at boot time reads those arguments out of the boot
loader in memory. This is not an elegant solution, but it is simple
and effective.
The loader's last action is to transfer control to the kernel's entry
point, which is
start() in
threads/start.S. The job of
this code is to switch the CPU from legacy 16-bit "real mode" into
the 32-bit "protected mode" used by all modern 80x86 operating
systems.
The startup code's first task is actually to obtain the machine's
memory size, by asking the BIOS for the PC's memory size. The
simplest BIOS function to do this can only detect up to 64 MB of RAM,
so that's the practical limit that Pintos can support. The function
stores the memory size, in pages, in global variable
init_ram_pages.
The first part of CPU initialization is to enable the A20 line, that is, the CPU's address line numbered 20. For historical reasons, PCs boot with this address line fixed at 0, which means that attempts to access memory beyond the first 1 MB (2 raised to the 20th power) will fail. Pintos wants to access more memory than this, so we have to enable it.
Next, the loader creates a basic page table. This page table maps
the 64 MB at the base of virtual memory (starting at virtual address
0) directly to the identical physical addresses. It also maps the
same physical memory starting at virtual address
LOADER_PHYS_BASE, which defaults to 0xc0000000 (3 GB). The
Pintos kernel only wants the latter mapping, but there's a
chicken-and-egg problem if we don't include the former: our current
virtual address is roughly 0x20000, the location where the loader
put us, and we can't jump to 0xc0020000 until we turn on the
page table, but if we turn on the page table without jumping there,
then we've just pulled the rug out from under ourselves.
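The effect of that second mapping is simple address arithmetic; here is a quick sketch (Python purely as a calculator, using the default LOADER_PHYS_BASE value given above):

```python
LOADER_PHYS_BASE = 0xC0000000  # default kernel base from the text: 3 GB

def kernel_vaddr(paddr):
    # A physical address paddr is also visible at LOADER_PHYS_BASE + paddr
    # once the loader's page table is in effect.
    return LOADER_PHYS_BASE + paddr

# The loader sits near physical 0x20000, so after paging is enabled
# the same code is reachable at 0xC0020000.
print(hex(kernel_vaddr(0x20000)))
```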
After the page table is initialized, we load the CPU's control
registers to turn on protected mode and paging, and set up the segment
registers. We aren't yet equipped to handle interrupts in protected
mode, so we disable interrupts. The final step is to call
main().
The kernel proper starts with the
main() function. The
main() function is written in C, as will be most of the code we
encounter in Pintos from here on out.
When
main() starts, the system is in a pretty raw state. We're
in 32-bit protected mode with paging enabled, but hardly anything else is
ready. Thus, the
main() function consists primarily of calls
into other Pintos modules' initialization functions.
These are usually named
module_init(), where
module is the module's name,
module.c is the
module's source code, and
module.h is the module's
header.
The first step in
main() is to call
bss_init(), which clears
out the kernel's "BSS", which is the traditional name for a
segment that should be initialized to all zeros. In most C
implementations, whenever you
declare a variable outside a function without providing an
initializer, that variable goes into the BSS. Because it's all zeros, the
BSS isn't stored in the image that the loader brought into memory. We
just use
memset() to zero it out.
main() calls
read_command_line() to break the kernel command
line into arguments, then
parse_options() to read any options at
the beginning of the command line. (Actions specified on the
command line execute later.)
thread_init() initializes the thread system. We will defer full
discussion to our discussion of Pintos threads below. It is called so
early in initialization because a valid thread structure is a
prerequisite for acquiring a lock, and lock acquisition in turn is
important to other Pintos subsystems. Then we initialize the console
and print a startup message to the console.
The next block of functions we call initializes the kernel's memory
system.
palloc_init() sets up the kernel page allocator, which
doles out memory one or more pages at a time (see section A.5.1 Page Allocator).
malloc_init() sets
up the allocator that handles allocations of arbitrary-size blocks of
memory (see section A.5.2 Block Allocator).
paging_init() sets up a page table for the kernel (see section A.7 Page Table).
In projects 2 and later,
main() also calls
tss_init() and
gdt_init().
The next set of calls initializes the interrupt system.
intr_init() sets up the CPU's interrupt descriptor table
(IDT) to ready it for interrupt handling (see section A.4.1 Interrupt Infrastructure), then
timer_init() and
kbd_init() prepare for
handling timer interrupts and keyboard interrupts, respectively.
input_init() sets up to merge serial and keyboard input into one
stream. In
projects 2 and later, we also prepare to handle interrupts caused by
user programs using
exception_init() and
syscall_init().
Now that interrupts are set up, we can start the scheduler
with
thread_start(), which creates the idle thread and enables
interrupts.
With interrupts enabled, interrupt-driven serial port I/O becomes
possible, so we use
serial_init_queue() to switch to that mode. Finally,
timer_calibrate() calibrates the timer for accurate short delays.
If the file system is compiled in, as it will starting in project 2, we
initialize the IDE disks with
ide_init(), then the
file system with
filesys_init().
Boot is complete, so we print a message.
Function
run_actions() now parses and executes actions specified on
the kernel command line, such as
run to run a test (in project
1) or a user program (in later projects).
Finally, if
-q was specified on the kernel command line, we
call
power_off() to terminate the machine simulator. Otherwise,
main() calls
thread_exit(), which allows any other running
threads to continue running.
struct thread
The main Pintos data structure for threads is
struct thread,
declared in
threads/thread.h.
You may add members of your own to struct thread. You may also change or delete the definitions of existing members.
Every
struct thread occupies the beginning of its own page of
memory. The rest of the page is used for the thread's stack, which
grows downward from the end of the page. It looks like this:
This has two consequences. First,
struct thread must not be allowed
to grow too big. If it does, then there will not be enough room for the
kernel stack. The base
struct thread is only a few bytes in size. It
probably should stay well under 1 kB.
Second, kernel stacks must not be allowed to grow too large. If a stack
overflows, it will corrupt the thread state. Thus, kernel functions
should not allocate large structures or arrays as non-static local
variables. Use dynamic allocation with
malloc() or
palloc_get_page() instead (see section A.5 Memory Allocation).
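Because every struct thread sits at the start of its own 4 kB page, Pintos can recover the running thread from any stack address by rounding it down to a page boundary (roughly what running_thread() does with the stack pointer). The arithmetic, sketched in Python purely as illustration with a made-up stack address:

```python
PGSIZE = 4096  # Pintos pages are 4 kB

def pg_round_down(addr):
    # Clear the low 12 bits: the offset within the page.
    return addr & ~(PGSIZE - 1)

# Any stack address inside a thread's page maps back to the page start,
# which is exactly where that thread's struct thread lives.
print(hex(pg_round_down(0xC010AFEC)))  # 0xc010a000
```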
struct thread: tid_t tid
tid_t is a
typedef for
int and each new thread receives the numerically next higher tid, starting from 1 for the initial process. You can change the type and the numbering scheme if you like.
struct thread: enum thread_status status
THREAD_RUNNING
thread_current() returns the running thread.
THREAD_READY
ready_list.
THREAD_BLOCKED
THREAD_READY state with a call to
thread_unblock(). This is most conveniently done indirectly, using one of the Pintos synchronization primitives that block and unblock threads automatically (see section A.3 Synchronization).
There is no a priori way to tell what a blocked thread is waiting for, but a backtrace can help (see section E.4 Backtraces).
THREAD_DYING
struct thread: char name[16]
struct thread: uint8_t *stack
When an interrupt occurs, whether in the kernel or a user program, an
struct intr_frame is pushed onto the stack. When the interrupt occurs
in a user program, the
struct intr_frame is always at the very top of
the page. See section A.4 Interrupt Handling, for more information.
struct thread: int priority
PRI_MIN(0) to
PRI_MAX(63). Lower numbers correspond to lower priorities, so that priority 0 is the lowest priority and priority 63 is the highest. Pintos as provided ignores thread priorities, but you will implement priority scheduling in project 1 (see section 2.2.3 Priority Scheduling).
struct thread:
struct list_elem allelem
thread_foreach() function should be used to iterate over all threads.
struct thread:
struct list_elem elem
ready_list(the list of threads ready to run) or a list of threads waiting on a semaphore in
sema_down(). It can do double duty because a thread waiting on a semaphore is not ready, and vice versa.
struct thread: uint32_t *pagedir
struct thread: unsigned magic
THREAD_MAGIC, which is just an arbitrary number defined in
threads/thread.c, and used to detect stack overflow.
thread_current() checks that the
magic member of the running thread's
struct thread is set to
THREAD_MAGIC. Stack overflow tends to change this value, triggering the assertion. For greatest benefit, as you add members to
struct thread, leave
magic at the end.
threads/thread.c implements several public functions for thread
support. Let's take a look at the most useful:
Called by main() to initialize the thread system. Its main purpose is to create a
struct thread for Pintos's initial thread. This is possible because the Pintos loader puts the initial thread's stack at the top of a page, in the same position as any other Pintos thread.
Before
thread_init() runs,
thread_current() will fail because the running thread's
magic value is incorrect. Lots of functions call
thread_current() directly or indirectly, including
lock_acquire() for locking a lock, so
thread_init() is
called early in Pintos initialization.
Called by main() to start the scheduler. Creates the idle thread, that is, the thread that is scheduled when no other thread is ready. Then enables interrupts, which as a side effect enables the scheduler because the scheduler runs on return from the timer interrupt, using
intr_yield_on_return() (see section A.4.3 External Interrupt Handling).
thread_create() allocates a page for the thread's
struct thread and stack and initializes its members, then it sets
up a set of fake stack frames for it (see section A.2.3 Thread Switching). The
thread is initialized in the blocked state, then unblocked just before
returning, which allows the new thread to
be scheduled (see Thread States).
thread_create(), whose aux argument is passed along as the function's argument.
thread_unblock() is called on it, so you'd better have some way arranged for that to happen. Because
thread_block() is so low-level, you should prefer to use one of the synchronization primitives instead (see section A.3 Synchronization).
thread_current ()->tid.
thread_current ()->name.
NO_RETURN
NO_RETURN (see section E.3 Function and Parameter Attributes).
action(t, aux) on each. action must refer to a function that matches the signature given by
thread_action_func():
schedule() is responsible for switching threads. It
is internal to
threads/thread.c and called only by the three
public thread functions that need to switch threads:
thread_block(),
thread_exit(), and
thread_yield().
Before any of these functions call
schedule(), they disable
interrupts (or ensure that they are already disabled) and then change
the running thread's state to something other than running.
schedule() is short but tricky. It records the
current thread in local variable cur, determines the next thread
to run as local variable next (by calling
next_thread_to_run()), and then calls
switch_threads() to do
the actual thread switch. The thread we switched to was also running
inside
switch_threads(), as are all the threads not currently
running, so the new thread now returns out of
switch_threads(), returning the previously running thread.
switch_threads() is an assembly language routine in
threads/switch.S. It saves registers on the stack, saves the
CPU's current stack pointer in the current
struct thread's
stack
member, restores the new thread's
stack into the CPU's stack
pointer, restores registers from the stack, and returns.
The rest of the scheduler is implemented in
thread_schedule_tail(). It
marks the new thread as running. If the thread we just switched from
is in the dying state, then it also frees the page that contained the
dying thread's
struct thread and stack. These couldn't be freed
prior to the thread switch because the switch needed to use the dying
thread's stack.
Running a thread for the first time is a special case. When
thread_create() creates a new thread, it goes through a fair
amount of trouble to get it started properly. In particular, the new
thread hasn't started running yet, so there's no way for it to be
running inside
switch_threads() as the scheduler expects. To
solve the problem,
thread_create() creates some fake stack frames
in the new thread's stack:
switch_threads(), represented by
struct switch_threads_frame. The important part of this frame is its
eip member, the return address. We point
eip to
switch_entry(), indicating it to be the function that called
switch_threads().
switch_entry(), an assembly language routine in
threads/switch.S that adjusts the stack pointer,(4) calls
thread_schedule_tail() (this special case is why
thread_schedule_tail() is separate from
schedule()), and returns. We fill in its stack frame so that it returns into
kernel_thread(), a function in
threads/thread.c.
kernel_thread(), which enables interrupts and calls the thread's function (the function passed to
thread_create()). If the thread's function returns, it calls
thread_exit() to terminate the thread.
If sharing of resources between threads is not handled in a careful, controlled fashion, the result is usually a big mess. This is especially the case in operating system kernels, where faulty sharing can crash the entire machine. Pintos provides several synchronization primitives to help out.
The crudest way to do synchronization is to disable interrupts, that is, to temporarily prevent the CPU from responding to interrupts. If interrupts are off, no other thread will preempt the running thread, because thread preemption is driven by the timer interrupt. If interrupts are on, as they normally are, then the running thread may be preempted by another at any time, whether between two C statements or even within the execution of one.
Incidentally, this means that Pintos is a "preemptible kernel," that is, kernel threads can be preempted at any time. Traditional Unix systems are "nonpreemptible," that is, kernel threads can only be preempted at points where they explicitly call into the scheduler. (User programs can be preempted at any time in both models.) As you might imagine, preemptible kernels require more explicit synchronization.
You should have little need to set the interrupt state directly. Most of the time you should use the other synchronization primitives described in the following sections. The main reason to disable interrupts is to synchronize kernel threads with external interrupt handlers, which cannot sleep and thus cannot use most other forms of synchronization (see section A.4.3 External Interrupt Handling).
Some external interrupts cannot be postponed, even by disabling interrupts. These interrupts, called non-maskable interrupts (NMIs), are supposed to be used only in emergencies, e.g. when the computer is on fire. Pintos does not handle non-maskable interrupts.
Types and functions for disabling and enabling interrupts are in
threads/interrupt.h.
INTR_OFF or
INTR_ON, denoting that interrupts are disabled or enabled, respectively.
A semaphore is a nonnegative integer together with two operators that manipulate it atomically, which are:
A semaphore initialized to 0 may be used to wait for an event that will happen exactly once. For example, suppose thread A starts another thread B and wants to wait for B to signal that some activity is complete. A can create a semaphore initialized to 0, pass it to B as it starts it, and then "down" the semaphore. When B finishes its activity, it "ups" the semaphore. This works regardless of whether A "downs" the semaphore or B "ups" it first.
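In Pintos terms the hand-off looks like this (a sketch; the thread function and variable names are illustrative, while sema_init(), sema_down(), and sema_up() are the real API):

```c
/* Shared between threads A and B. */
struct semaphore done;

/* Thread B's function. */
static void
b_func (void *sema_)
{
  struct semaphore *sema = sema_;
  /* ... perform the activity ... */
  sema_up (sema);               /* Signal completion. */
}

/* In thread A: */
sema_init (&done, 0);
thread_create ("B", PRI_DEFAULT, b_func, &done);
sema_down (&done);              /* Blocks until B calls sema_up(). */
```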
A semaphore initialized to 1 is typically used for controlling access to a resource. Before a block of code starts using the resource, it "downs" the semaphore, then after it is done with the resource it "ups" the semaphore. In such a case a lock, described below, may be more appropriate.
Semaphores can also be initialized to values larger than 1. These are rarely used.
Semaphores were invented by Edsger Dijkstra and first used in the THE operating system ([ Dijkstra]).
Pintos' semaphore type and operations are declared in
threads/synch.h.
sema_down() or find a different approach instead.
Unlike most synchronization primitives,
sema_up() may be called
inside an external interrupt handler (see section A.4.3 External Interrupt Handling).
Semaphores are internally built out of disabling interrupts
(see section A.3.1 Disabling Interrupts) and thread blocking and unblocking
(
thread_block() and
thread_unblock()). Each semaphore maintains
a list of waiting threads, using the linked list
implementation in
lib/kernel/list.c.
A lock is like a semaphore with an initial value of 1 (see section A.3.2 Semaphores). A lock's equivalent of "up" is called "release", and the "down" operation is called "acquire".
Compared to a semaphore, a lock has one added restriction: only the thread that acquires a lock, called the lock's "owner", is allowed to release it. If this restriction is a problem, it's a good sign that a semaphore should be used, instead of a lock.
Locks in Pintos are not "recursive," that is, it is an error for the thread currently holding a lock to try to acquire that lock.
Lock types and functions are declared in
threads/synch.h.
lock_acquire() instead.
A monitor is a higher-level form of synchronization than a semaphore or a lock. A monitor consists of data being synchronized, plus a lock, called the monitor lock, and one or more condition variables. Before it accesses the protected data, a thread first acquires the monitor lock. It is then said to be "in the monitor". While in the monitor, the thread has control over all the protected data, which it may freely examine or modify. When access to the protected data is complete, it releases the monitor lock.
Condition variables allow code in the monitor to wait for a condition to become true. Each condition variable is associated with an abstract condition, e.g. "some data has arrived for processing" or "over 10 seconds has passed since the user's last keystroke". When code in the monitor needs to wait for a condition to become true, it "waits" on the associated condition variable, which releases the lock and waits for the condition to be signaled. If, on the other hand, it has caused one of these conditions to become true, it "signals" the condition to wake up one waiter, or "broadcasts" the condition to wake all of them.
The theoretical framework for monitors was laid out by C. A. R. Hoare ([ Hoare]). Their practical usage was later elaborated in a paper on the Mesa operating system ([ Lampson]).
Condition variable types and functions are declared in
threads/synch.h.
Sending a signal and waking up from a wait do not occur as a single atomic operation.
Thus, typically
cond_wait()'s caller must recheck the condition
after the wait completes and, if necessary, wait again. See the next
section for an example.
The classical example of a monitor is handling a buffer into which one or more "producer" threads write characters and out of which one or more "consumer" threads read characters. To implement this we need, besides the monitor lock, two condition variables which we will call not_full and not_empty:
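A sketch of that buffer, written with Pintos' lock and condition variable API (names and layout follow the classic version of this example; treat it as illustrative rather than verbatim):

```c
char buf[BUF_SIZE];         /* Buffer. */
size_t n = 0;               /* 0 <= n <= BUF_SIZE: # of characters in buffer. */
size_t head = 0;            /* buf index of next char to write (mod BUF_SIZE). */
size_t tail = 0;            /* buf index of next char to read (mod BUF_SIZE). */
struct lock lock;           /* Monitor lock. */
struct condition not_empty; /* Signaled when the buffer is not empty. */
struct condition not_full;  /* Signaled when the buffer is not full. */

void
put (char ch)
{
  lock_acquire (&lock);
  while (n == BUF_SIZE)             /* Can't add to buf while it's full. */
    cond_wait (&not_full, &lock);
  buf[head++ % BUF_SIZE] = ch;      /* Add ch to buf. */
  n++;
  cond_signal (&not_empty, &lock);  /* buf can't be empty anymore. */
  lock_release (&lock);
}

char
get (void)
{
  char ch;
  lock_acquire (&lock);
  while (n == 0)                    /* Can't read from buf while it's empty. */
    cond_wait (&not_empty, &lock);
  ch = buf[tail++ % BUF_SIZE];      /* Get ch from buf. */
  n--;
  cond_signal (&not_full, &lock);   /* buf can't be full anymore. */
  lock_release (&lock);
  return ch;
}
```

Note that both loops recheck their condition after cond_wait() returns, as required by the non-atomic signal/wakeup semantics described in the previous section.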
Note that
BUF_SIZE must divide evenly into
SIZE_MAX + 1
for the above code to be completely correct. Otherwise, it will fail
the first time
head wraps around to 0. In practice,
BUF_SIZE would ordinarily be a power of 2.
An optimization barrier is a special statement that prevents the
compiler from making assumptions about the state of memory across the
barrier. The compiler will not reorder reads or writes of variables
across the barrier or assume that a variable's value is unmodified
across the barrier, except for local variables whose address is never
taken. In Pintos,
threads/synch.h defines the
barrier()
macro as an optimization barrier.
One reason to use an optimization barrier is when data can change
asynchronously, without the compiler's knowledge, e.g. by another
thread or an interrupt handler. The
too_many_loops() function in
devices/timer.c is an example. This function starts out by
busy-waiting in a loop until a timer tick occurs:
Without an optimization barrier in the loop, the compiler could
conclude that the loop would never terminate, because
start and
ticks start out equal and the loop itself never changes them.
It could then "optimize" the function into an infinite loop, which
would definitely be undesirable.
Optimization barriers can be used to avoid other compiler
optimizations. The
busy_wait() function, also in
devices/timer.c, is an example. It contains this loop:
The goal of this loop is to busy-wait by counting
loops down
from its original value to 0. Without the barrier, the compiler could
delete the loop entirely, because it produces no useful output and has
no side effects. The barrier forces the compiler to pretend that the
loop body has an important effect.
Finally, optimization barriers can be used to force the ordering of
memory reads or writes. For example, suppose we add a "feature"
that, whenever a timer interrupt occurs, the character in global
variable
timer_put_char is printed on the console, but only if
global Boolean variable
timer_do_put is true. The best way to
set up
x to be printed is then to use an optimization barrier,
like this:
Without the barrier, the code is buggy because the compiler is free to reorder operations when it doesn't see a reason to keep them in the same order. In this case, the compiler doesn't know that the order of assignments is important, so its optimizer is permitted to exchange their order. There's no telling whether it will actually do this, and it is possible that passing the compiler different optimization flags or using a different version of the compiler will produce different behavior.
Another solution is to disable interrupts around the assignments. This does not prevent reordering, but it prevents the interrupt handler from intervening between the assignments. It also has the extra runtime cost of disabling and re-enabling interrupts:
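A sketch of this alternative, using Pintos' interrupt functions:

```c
enum intr_level old_level = intr_disable ();
timer_put_char = 'x';
timer_do_put = true;
intr_set_level (old_level);
```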
A second solution is to mark the declarations of
timer_put_char and
timer_do_put as
volatile. This
keyword tells the compiler that the variables are externally observable
and restricts its latitude for optimization. However, the semantics of
volatile are not well-defined, so it is not a good general
solution. The base Pintos code does not use
volatile at all.
The following is not a solution, because locks neither prevent interrupts nor prevent the compiler from reordering the code within the region where the lock is held:
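For reference, the inadequate version would look like this (timer_lock is hypothetical; the point is that the lock does not help):

```c
lock_acquire (&timer_lock);   /* INCORRECT: does not prevent reordering */
timer_put_char = 'x';
timer_do_put = true;
lock_release (&timer_lock);
```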
The compiler treats invocation of any function defined externally, that is, in another source file, as a limited form of optimization barrier. Specifically, the compiler assumes that any externally defined function may access any statically or dynamically allocated data and any local variable whose address is taken. This often means that explicit barriers can be omitted. It is one reason that Pintos contains few explicit barriers.
A function defined in the same source file, or in a header included by the source file, cannot be relied upon as an optimization barrier. This applies even to invocation of a function before its definition, because the compiler may read and parse the entire source file before performing optimization.
An interrupt notifies the CPU of some event. Much of the work of an operating system relates to interrupts in one way or another. For our purposes, we classify interrupts into two broad categories:
intr_disable() does not disable internal interrupts.
intr_disable() and related functions (see section A.3.1 Disabling Interrupts).
The CPU treats both classes of interrupts largely the same way, so Pintos has common infrastructure to handle both classes. The following section describes this common infrastructure. The sections after that give the specifics of external and internal interrupts.
If you haven't already read chapter 3, "Basic Execution Environment," in [ IA32-v1], it is recommended that you do so now. You might also want to skim chapter 5, "Interrupt and Exception Handling," in [ IA32-v3a].
When an interrupt occurs, the CPU saves its most essential state on a stack and jumps to an interrupt handler routine. The 80x86 architecture supports 256 interrupts, numbered 0 through 255, each with an independent handler defined in an array called the interrupt descriptor table or IDT.
In Pintos,
intr_init() in
threads/interrupt.c sets up the
IDT so that each entry points to a unique entry point in
threads/intr-stubs.S named
intrNN_stub(), where
NN is the interrupt number in
hexadecimal. Because the CPU doesn't give
us any other way to find out the interrupt number, this entry point
pushes the interrupt number on the stack. Then it jumps to
intr_entry(), which pushes all the registers that the processor
didn't already push for us, and then calls
intr_handler(), which
brings us back into C in
threads/interrupt.c.
The main job of
intr_handler() is to call the function
registered for handling the particular interrupt. (If no
function is registered, it dumps some information to the console and
panics.) It also does some extra processing for external
interrupts (see section A.4.3 External Interrupt Handling).
When
intr_handler() returns, the assembly code in
threads/intr-stubs.S restores all the CPU registers saved
earlier and directs the CPU to return from the interrupt.
The following types and functions are common to all interrupts.
intr_entry(). Its most interesting members are described below.
struct intr_frame members:
uint32_t edi, esi, ebp, esp_dummy, ebx, edx, ecx, eax;
uint16_t es, ds.
Register state of the interrupted task, as saved by
intr_entry(). The
esp_dummy value isn't actually used (refer to the description of
PUSHA in [ IA32-v2b] for details).
struct intr_frame also contains
uint32_t vec_no (the interrupt vector number),
uint32_t error_code (the error code pushed by some exceptions),
void (*eip) (void) (the interrupted task's instruction pointer), and
void *esp (the interrupted task's stack pointer).
"unknown" if the interrupt has no registered name.
Internal interrupts are caused directly by CPU instructions executed by the running kernel thread or user process (from project 2 onward). An internal interrupt is therefore said to arise in a "process context."
In an internal interrupt's handler, it can make sense to examine the
struct intr_frame passed to the interrupt handler, or even to modify
it. When the interrupt returns, modifications in
struct intr_frame
become changes to the calling thread or process's state. For example,
the Pintos system call handler returns a value to the user program by
modifying the saved EAX register (see section 3.5.2 System Call Details).
There are no special restrictions on what an internal interrupt handler can or can't do. Generally they should run with interrupts enabled, just like other code, and so they can be preempted by other kernel threads. Thus, they do need to synchronize with other threads on shared data and other resources (see section A.3 Synchronization).
Internal interrupt handlers can be invoked recursively. For example,
the system call handler might cause a page fault while attempting to
read user memory. Deep recursion would risk overflowing the limited
kernel stack (see section A.2.1
struct thread), but should be unnecessary.
If level is
INTR_ON, external interrupts will be processed
normally during the interrupt handler's execution, which is normally
desirable. Specifying
INTR_OFF will cause the CPU to disable
external interrupts when it invokes the interrupt handler. The effect
is slightly different from calling
intr_disable() inside the
handler, because that leaves a window of one or more CPU instructions in
which external interrupts are still enabled. This is important for the
page fault handler; refer to the comments in
userprog/exception.c
for details.
dpl determines how the interrupt can be invoked. If dpl is 0, then the interrupt can be invoked only by kernel threads. Otherwise dpl should be 3, which allows user processes to invoke the interrupt with an explicit INT instruction. The value of dpl doesn't affect user processes' ability to invoke the interrupt indirectly, e.g. an invalid memory reference will cause a page fault regardless of dpl.
External interrupts are caused by events outside the CPU. They are asynchronous, so they can be invoked at any time that interrupts have not been disabled. We say that an external interrupt runs in an "interrupt context."
In an external interrupt, the
struct intr_frame passed to the
handler is not very meaningful. It describes the state of the thread
or process that was interrupted, but there is no way to predict which
one that is. It is possible, although rarely useful, to examine it, but
modifying it is a recipe for disaster.
Only one external interrupt may be processed at a time. Neither internal nor external interrupt may nest within an external interrupt handler. Thus, an external interrupt's handler must run with interrupts disabled (see section A.3.1 Disabling Interrupts).
An external interrupt handler must not sleep or yield, which rules out
calling
lock_acquire(),
thread_yield(), and many other
functions. Sleeping in interrupt context would effectively put the
interrupted thread to sleep, too, until the interrupt handler was again
scheduled and returned. This would be unfair to the unlucky thread, and
it would deadlock if the handler were waiting for the sleeping thread
to, e.g., release a lock.
An external interrupt handler effectively monopolizes the machine and delays all other activities. Therefore, external interrupt handlers should complete as quickly as they can. Anything that requires much CPU time should instead run in a kernel thread, possibly one that the interrupt triggers using a synchronization primitive.
External interrupts are controlled by a
pair of devices outside the CPU called programmable interrupt
controllers, PICs for short. When
intr_init() sets up the
CPU's IDT, it also initializes the PICs for interrupt handling. The
PICs also must be "acknowledged" at the end of processing for each
external interrupt.
intr_handler() takes care of that by calling
pic_end_of_interrupt(), which properly signals the PICs.
The following functions relate to external interrupts.
thread_yield() to be called just before the interrupt returns. Used in the timer interrupt handler when a thread's time slice expires, to cause a new thread to be scheduled.
Pintos contains two memory allocators, one that allocates memory in units of a page, and one that can allocate blocks of any size.
The page allocator declared in
threads/palloc.h allocates
memory in units of a page. It is most often used to allocate memory
one page at a time, but it can also allocate multiple contiguous pages
at once.
The page allocator divides the memory it allocates into two pools,
called the kernel and user pools. By default, each pool gets half of
system memory above 1 MB, but the division can be changed with the
-ul kernel
command line
option (see Why PAL_USER?). An allocation request draws from one
pool or the other. If one pool becomes empty, the other may still
have free pages. The user pool should be used for allocating memory
for user processes and the kernel pool for all other allocations.
This will only become important starting with project 3. Until then,
all allocations should be made from the kernel pool.
Each pool's usage is tracked with a bitmap, one bit per page in the pool. A request to allocate n pages scans the bitmap for n consecutive bits set to false, indicating that those pages are free, and then sets those bits to true to mark them as used. This is a "first fit" allocation strategy (see Wilson).
The page allocator is subject to fragmentation. That is, it may not be possible to allocate n contiguous pages even though n or more pages are free, because the free pages are separated by used pages. In fact, in pathological cases it may be impossible to allocate 2 contiguous pages even though half of the pool's pages are free. Single-page requests can't fail due to fragmentation, so requests for multiple contiguous pages should be limited as much as possible.
Pages may not be allocated from interrupt context, but they may be freed.
When a page is freed, all of its bytes are cleared to 0xcc, as a debugging aid (see section E.8 Tips).
Page allocator types and functions are described below.
The flags argument may be any combination of the following flags:
PAL_ASSERT
PAL_ZERO
PAL_USER
palloc_get_page() or palloc_get_multiple().
The block allocator, declared in
threads/malloc.h, can allocate
blocks of any size. It is layered on top of the page allocator
described in the previous section. Blocks returned by the block
allocator are obtained from the kernel pool.
The block allocator uses two different strategies for allocating memory. The first strategy applies to blocks that are 1 kB or smaller (one-fourth of the page size). These allocations are rounded up to the nearest power of 2, or 16 bytes, whichever is larger. Then they are grouped into a page used only for allocations of that size.
The second strategy applies to blocks larger than 1 kB. These allocations (plus a small amount of overhead) are rounded up to the nearest page in size, and then the block allocator requests that number of contiguous pages from the page allocator.
In either case, the difference between the requested allocation size and the actual block size is wasted. A real operating system would carefully tune its allocator to minimize this waste, but this is unimportant in an instructional system like Pintos.
As long as a page can be obtained from the page allocator, small allocations always succeed. Most small allocations do not require a new page from the page allocator at all, because they are satisfied using part of a page already allocated. However, large allocations always require calling into the page allocator, and any allocation that needs more than one contiguous page can fail due to fragmentation, as already discussed in the previous section. Thus, you should minimize the number of large allocations in your code, especially those over approximately 4 kB each.
When a block is freed, all of its bytes are cleared to 0xcc, as a debugging aid (see section E.8 Tips).
The block allocator may not be called from interrupt context.
The block allocator functions are described below. Their interfaces are the same as the standard C library functions of the same names.
a * b bytes long. The block's contents will be cleared to zeros. Returns a null pointer if a or b is zero or if insufficient memory is available.
A call with block null is equivalent to
malloc(). A call
with new_size zero is equivalent to
free().
malloc(),
calloc(), or
realloc() (and not yet freed).
A 32-bit virtual address can be divided into a 20-bit page number and a 12-bit page offset (or just offset), like this:
Header
threads/vaddr.h defines these functions and macros for
working with virtual addresses:
Virtual memory in Pintos is divided into two regions: user virtual
memory and kernel virtual memory (see section 3.1.4 Virtual Memory Layout). The
boundary between them is
PHYS_BASE:
User virtual memory ranges from virtual address 0 up to
PHYS_BASE. Kernel virtual memory occupies the rest of the
virtual address space, from
PHYS_BASE up to 4 GB.
The 80x86 doesn't provide any way to directly access memory given
a physical address. This ability is often necessary in an operating
system kernel, so Pintos works around it by mapping kernel virtual
memory one-to-one to physical memory. That is, virtual address
PHYS_BASE accesses physical address 0, virtual address
PHYS_BASE + 0x1234 accesses physical address 0x1234, and
so on up to the size of the machine's physical memory. Thus, adding
PHYS_BASE to a physical address obtains a kernel virtual address
that accesses that address; conversely, subtracting
PHYS_BASE
from a kernel virtual address obtains the corresponding physical
address. Header
threads/vaddr.h provides a pair of functions to
do these translations:
The code in
pagedir.c is an abstract interface to the 80x86
hardware page table, also called a "page directory" by Intel processor
documentation. The page table interface uses a
uint32_t * to
represent a page table because this is convenient for accessing its
internal structure.
The sections below describe the page table interface and internals.
These functions create, destroy, and activate page tables. The base Pintos code already calls these functions where necessary, so it should not be necessary to call them yourself.
Returns a null pointer if memory cannot be obtained.
These functions examine or update the mappings from pages to frames encapsulated by a page table. They work on both active and inactive page tables (that is, those for running and suspended processes), flushing the TLB as necessary.
User page upage must not already be mapped in pd.
Kernel page kpage should be a kernel virtual address obtained from
the user pool with
palloc_get_page(PAL_USER) (see Why PAL_USER?).
Returns true if successful, false on failure. Failure will occur if additional memory required for the page table cannot be obtained.
Other bits in the page table for page are preserved, permitting the accessed and dirty bits (see the next section) to be checked.
This function has no effect if page is not mapped.
80x86 hardware provides some assistance for implementing page replacement algorithms, through a pair of bits in the page table entry (PTE) for each page. On any read or write to a page, the CPU sets the accessed bit to 1 in the page's PTE, and on any write, the CPU sets the dirty bit to 1. The CPU never resets these bits to 0, but the OS may do so.
Proper interpretation of these bits requires understanding of aliases, that is, two (or more) pages that refer to the same frame. When an aliased frame is accessed, the accessed and dirty bits are updated in only one page table entry (the one for the page used for access). The accessed and dirty bits for the other aliases are not updated.
See section 4.1.5.1 Accessed and Dirty Bits, on applying these bits in implementing page replacement algorithms.
The functions provided with Pintos are sufficient to implement the projects. However, you may still find it worthwhile to understand the hardware page table format, so we'll go into a little detail in this section.
The top-level paging data structure is a page called the "page directory" (PD) arranged as an array of 1,024 32-bit page directory entries (PDEs), each of which represents 4 MB of virtual memory. Each PDE may point to the physical address of another page called a "page table" (PT) arranged, similarly, as an array of 1,024 32-bit page table entries (PTEs), each of which translates a single 4 kB virtual page to a physical page.
Translation of a virtual address into a physical address follows the three-step process illustrated in the diagram below:(5)
Pintos provides some macros and functions that are useful for working with raw page tables:
threads/pte.h.
threads/vaddr.h.
You do not need to understand the PTE format to do the Pintos projects, unless you wish to incorporate the page table into your supplemental page table (see section 4.1.4 Managing the Supplemental Page Table).
The actual format of a page table entry is summarized below. For complete information, refer to section 3.7, "Page Translation Using 32-Bit Physical Addressing," in [ IA32-v3a].
Some more information on each bit is given below. The names are
threads/pte.h macros that represent the bits' values:
Pintos clears this bit in PTEs for kernel virtual memory, to prevent user processes from accessing them.
Other bits are either reserved or uninteresting in a Pintos context and should be set to 0.
Header
threads/pte.h defines three functions for working with
page table entries:
Page directory entries have the same format as PTEs, except that the
physical address points to a page table page instead of a frame. Header
threads/pte.h defines two functions for working with page
directory entries:
Pintos provides a hash table data structure in
lib/kernel/hash.c.
To use it you will need to include its header file,
lib/kernel/hash.h, with
#include <hash.h>.
No code provided with Pintos uses the hash table, which means that you
are free to use it as is, modify its implementation for your own
purposes, or ignore it, as you wish.
Most implementations of the virtual memory project use a hash table to translate pages to frames. You may find other uses for hash tables as well.
A hash table is represented by
struct hash.
struct hash are "opaque." That is, code that uses a hash table should not access
struct hash members directly, nor should it need to. Instead, use hash table functions and macros.
The hash table operates on elements of type
struct hash_elem.
struct hash_elem member in the structure you want to include in a hash table. Like
struct hash,
struct hash_elem is opaque. All functions for operating on hash table elements actually take and return pointers to
struct hash_elem, not pointers to your hash table's real element type.
You will often need to obtain a
struct hash_elem given a real element
of the hash table, and vice versa. Given a real element of the hash
table, you may use the
& operator to obtain a pointer to its
struct hash_elem. Use the
hash_entry() macro to go the other
direction.
struct hash_elem, is embedded within. You must provide type, the name of the structure that elem is inside, and member, the name of the member in type that elem points to.
For example, suppose
h is a
struct hash_elem * variable
that points to a
struct thread member (of type
struct hash_elem)
named
h_elem. Then,
hash_entry (h, struct thread, h_elem)
yields the address of the
struct thread that
h points within.
See section A.8.5 Hash Table Example, for an example.
Each hash table element must contain a key, that is, data that identifies and distinguishes elements, which must be unique among elements in the hash table. (Elements may also contain non-key data that need not be unique.) While an element is in a hash table, its key data must not be changed. Instead, if need be, remove the element from the hash table, modify its key, then reinsert the element.
For each hash table, you must write two functions that act on keys: a hash function and a comparison function. These functions must match the following prototypes:
unsigned int. The hash of an element should be a pseudo-random function of the element's key. It must not depend on non-key data in the element or on any non-constant data other than the key. Pintos provides the following functions as a suitable basis for hash functions.
If your key is a single piece of data of an appropriate type, it is
sensible for your hash function to directly return the output of one of
these functions. For multiple pieces of data, you may wish to combine
the output of more than one call to them using, e.g., the
^
(exclusive or)
operator. Finally, you may entirely ignore these functions and write
your own hash function from scratch, but remember that your goal is to
build an operating system kernel, not to design a hash function.
See section A.8.6 Auxiliary Data, for an explanation of aux.
If two elements compare equal, then they must hash to equal values.
See section A.8.6 Auxiliary Data, for an explanation of aux.
See section A.8.5 Hash Table Example, for hash and comparison function examples.
A few functions accept a pointer to a third kind of function as an argument:
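That third prototype, reconstructed as declared in Pintos's hash.h (a sketch):

```c
/* Performs some operation on hash element E, given auxiliary data AUX. */
typedef void hash_action_func (struct hash_elem *e, void *aux);
```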
See section A.8.6 Auxiliary Data, for an explanation of aux.
These functions create, destroy, and inspect hash tables.
hash_init() calls
malloc() and fails if memory cannot be allocated.
See section A.8.6 Auxiliary Data, for an explanation of aux, which is most often a null pointer.
hash_clear() removes all the elements from a hash table, which must have been previously initialized with hash_init().
If action is non-null, then it is called once for each element in
the hash table, which gives the caller an opportunity to deallocate any
memory or other resources used by the element. For example, if the hash
table elements are dynamically allocated using
malloc(), then
action could
free() the element. This is safe because
hash_clear() will not access the memory in a given hash element
after calling action on it. However, action must not call
any function that may modify the hash table, such as
hash_insert()
or
hash_delete().
hash_destroy(): if action is non-null, it is first called for each element in the hash table, as in
hash_clear(). Then, the memory held by hash is freed. Afterward, hash must not be passed to any hash table function, absent an intervening call to
hash_init().
Each of these functions searches a hash table for an element that compares equal to one provided. Based on the success of the search, they perform some action, such as inserting a new element into the hash table, or simply return the result of the search.
The caller is responsible for deallocating any resources associated with
the returned element, as appropriate. For example, if the hash table
elements are dynamically allocated using
malloc(), then the caller
must
free() the element after it is no longer needed.
The element passed to the following functions is only used for hashing
and comparison purposes. It is never actually inserted into the hash
table. Thus, only key data in the element needs to be initialized, and
other data in the element will not be used. It often makes sense to
declare an instance of the element type as a local variable, initialize
the key data, and then pass the address of its
struct hash_elem to
hash_find() or
hash_delete(). See section A.8.5 Hash Table Example, for
an example. (Large structures should not be
allocated as local variables. See section A.2.1
struct thread, for more
information.)
The caller is responsible for deallocating any resources associated with
the returned element, as appropriate. For example, if the hash table
elements are dynamically allocated using
malloc(), then the caller
must
free() the element after it is no longer needed.
These functions allow iterating through the elements in a hash table. Two interfaces are supplied. The first requires writing and supplying a hash_action_func to act on each element (see section A.8.1 Data Types).
hash_apply() calls action once for each element in the hash table, in arbitrary order. During iteration, action must not call any function that may modify the hash table, such as hash_insert() or
hash_delete(). action must not modify key data in elements, although it may modify any other data.
The second interface is based on an "iterator" data type. Idiomatically, iterators are used as follows:
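The idiomatic loop the text refers to looks essentially like this (reconstructed from the published Pintos manual; struct foo and its member elem are placeholders):

```c
struct hash_iterator i;

hash_first (&i, h);
while (hash_next (&i))
  {
    struct foo *f = hash_entry (hash_cur (&i), struct foo, elem);
    /* ...do something with f... */
  }
```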
Modifying a hash table during iteration, e.g. with hash_insert() or
hash_delete(), invalidates all iterators within that hash table.
Like
struct hash and
struct hash_elem,
struct hash_iterator is opaque.
Once hash_next() returns null for iterator, calling it again yields undefined behavior.
hash_cur() returns the value most recently returned by
hash_next() for iterator. It yields undefined behavior after
hash_first() has been called on iterator but before
hash_next() has been called for the first time.
Suppose you have a structure, called
struct page, that you
want to put into a hash table. First, define
struct page to include a
struct hash_elem member:
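Following the published Pintos manual's example, the definition looks essentially like this (non-key members elided):

```c
struct page {
    struct hash_elem hash_elem; /* Hash table element. */
    void *addr;                 /* Virtual address (the key). */
    /* ...other members... */
};
```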
We write a hash function and a comparison function using addr as
the key. A pointer can be hashed based on its bytes, and the
<
operator works fine for comparing pointers:
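A sketch of the two functions, following the Pintos manual's example (UNUSED, hash_bytes(), and hash_entry() come from the Pintos headers):

```c
/* Returns a hash value for page p. */
unsigned
page_hash (const struct hash_elem *p_, void *aux UNUSED)
{
  const struct page *p = hash_entry (p_, struct page, hash_elem);
  return hash_bytes (&p->addr, sizeof p->addr);
}

/* Returns true if page a precedes page b. */
bool
page_less (const struct hash_elem *a_, const struct hash_elem *b_,
           void *aux UNUSED)
{
  const struct page *a = hash_entry (a_, struct page, hash_elem);
  const struct page *b = hash_entry (b_, struct page, hash_elem);
  return a->addr < b->addr;
}
```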
(The use of
UNUSED in these functions' prototypes suppresses a
warning that aux is unused. See section E.3 Function and Parameter Attributes, for information about
UNUSED. See section A.8.6 Auxiliary Data, for an explanation of aux.)
Then, we can create a hash table like this:
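A sketch of that call, assuming the hash and comparison functions above are named page_hash and page_less:

```c
struct hash pages;

hash_init (&pages, page_hash, page_less, NULL);
```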
Now we can manipulate the hash table we've created. If
p
is a pointer to a
struct page, we can insert it into the hash table
with:
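Assuming the table is named pages, the insertion is a one-liner:

```c
hash_insert (&pages, &p->hash_elem);
```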
If there's a chance that pages might already contain a
page with the same addr, then we should check
hash_insert()'s
return value.
To search for an element in the hash table, use
hash_find(). This
takes a little setup, because
hash_find() takes an element to
compare against. Here's a function that will find and return a page
based on a virtual address, assuming that pages is defined at file
scope:
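A sketch of such a lookup function, following the Pintos manual's example:

```c
/* Returns the page containing the given virtual address,
   or a null pointer if no such page exists. */
struct page *
page_lookup (const void *address)
{
  struct page p;
  struct hash_elem *e;

  p.addr = (void *) address;
  e = hash_find (&pages, &p.hash_elem);
  return e != NULL ? hash_entry (e, struct page, hash_elem) : NULL;
}
```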
struct page is allocated as a local variable here on the assumption
that it is fairly small. Large structures should not be allocated as
local variables. See section A.2.1
struct thread, for more information.
A similar function could delete a page by address using
hash_delete().
In simple cases like the example above, there's no need for the
aux parameters. In these cases, just pass a null pointer to
hash_init() for aux and ignore the values passed to the hash
function and comparison functions. (You'll get a compiler warning if
you don't use the aux parameter, but you can turn that off with
the
UNUSED macro, as shown in the example, or you can just ignore
it.)
aux is useful when some property of the data in the hash table is both constant and needed for hashing or comparison, but not stored in the data items themselves. For example, if the items in a hash table are fixed-length strings, but the items themselves don't indicate what that fixed length is, you could pass the length as an aux parameter.
The hash table does not do any internal synchronization. It is the
caller's responsibility to synchronize calls to hash table functions.
In general, any number of functions that examine but do not modify the
hash table, such as
hash_find() or
hash_next(), may execute
simultaneously. However, these functions cannot safely execute at the
same time as any function that may modify a given hash table, such as
hash_insert() or
hash_delete(), nor may more than one function
that can modify a given hash table execute safely at once.
It is also the caller's responsibility to synchronize access to data in hash table elements. How to synchronize access to this data depends on how it is designed and organized, as with any other data structure.
Opened 12 years ago
Closed 10 years ago
#732 closed Bugs (fixed)
Johnson All-Pairs needs better "no path" information
Description
Hi,

The Johnson's SP algorithm as implemented in the BGL does not easily provide a way to determine whether two vertices have a path between them. I include below a simplified version of the example provided with the BGL. Running it I get the output below:

D[0][0]=0
D[0][1]=3
D[0][2]=-4
D[1][0]=2147483647 <- no path between nodes '1' and '0'
D[1][1]=0
D[1][2]=2147483643 <- no path between nodes '1' and '2'
D[2][0]=-2147483645 <- no path between nodes '2' and '0'
D[2][1]=-2147483645 <- no path between nodes '2' and '1'
D[2][2]=0

That is, there isn't one single value that represents lack of connectivity - one has to pick a value close enough to 'inf' and discriminate with that. Shouldn't 'inf' (however represented) describe lack of connectivity? (To get around this problem, at the moment I run a transitive closure before JSP and use the result of that to determine whether two vertices are connected.) Does this make sense or am I missing something?

Thanks,
Andrea

#include <boost/property_map.hpp>
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/johnson_all_pairs_shortest.hpp>
#include <iostream>

int main()
{
  using namespace boost;
  typedef adjacency_list<vecS, vecS, directedS, no_property,
    property<edge_weight_t, int, property<edge_weight2_t, int> > > Graph;
  const int V = 3;
  typedef std::pair<int, int> Edge;
  Edge edge_array[] = { Edge(0, 1), Edge(0, 2) };
  const std::size_t E = sizeof(edge_array) / sizeof(Edge);
  Graph g(edge_array, edge_array + E, V);
  property_map<Graph, edge_weight_t>::type w = get(edge_weight, g);
  int weights[] = { 3, -4 };
  int *wp = weights;
  graph_traits<Graph>::edge_iterator e, e_end;
  for (boost::tie(e, e_end) = edges(g); e != e_end; ++e)
    w[*e] = *wp++;
  std::vector<int> d(V, (std::numeric_limits<int>::max)());
  int D[V][V];
  johnson_all_pairs_shortest_paths(g, D, distance_map(&d[0]));
  std::cout << " ";
  std::cout << std::endl;
  for (int i = 0; i < V; ++i)
    for (int j = 0; j < V; ++j)
      std::cout << "D[" << i << "][" << j << "]=" << D[i][j] << std::endl;
  return 0;
}
Change History (5)
comment:1 Changed 11 years ago by
comment:2 Changed 11 years ago by
comment:3 Changed 10 years ago by
comment:4 Changed 10 years ago by
comment:5 Changed 10 years ago by
Note: See TracTickets for help on using tickets.
Assigned to "doug_gregor" instead of nonexistent user "dgregor"
I'd like to print a string to command line / terminal in Windows and then edit / change the string and read it back. Anyone knows how to do it? Thanks
print "Hell"
Hello!        <--- Edit it on the screen
s = raw_input()
print s
Hello!
If it's for your own purposes, then here's a dirty wee hack using the clipboard without losing what was there before:
def edit_text_at_terminal(text_to_edit):
    import pyperclip
    # Save old clipboard contents so the user doesn't lose them
    old_clipboard_contents = pyperclip.paste()
    # Place the text you want to edit in the clipboard
    pyperclip.copy(text_to_edit)
    # If you're on Windows, and ctrl+v works, you can do this:
    import win32com.client
    shell = win32com.client.Dispatch("WScript.Shell")
    shell.SendKeys("^v")
    # Otherwise you should tell the user to type ctrl+v
    msg = "Type ctrl+v (your old clipboard contents will be restored):\n"
    # Get the new value; the old value will have been pasted
    new_value = str(raw_input(msg))
    # Restore the old clipboard contents before returning the new value
    pyperclip.copy(old_clipboard_contents)
    return new_value
Note that ctrl+v doesn't work in all terminals, notably the Windows default (there are ways to make it work, though I recommend using ConEmu instead).
Automating the keystrokes for other OSs will involve a different process.
Please remember this is a quick hack and not a "proper" solution. I will not be held responsible for loss of entire PhD dissertations momentarily stored on your clipboard.
For a proper solution there are better approaches such as curses for Linux, and on Windows it's worth looking into AutHotKey (perhaps throw up an input box, or do some keystrokes/clipboard wizardry).
I'm currently practicing algorithms and coding for my colleges programming team. The problem is, in High School I was almost purely C/C++, while my college team all work in Java. So, while not wildly different, learning Java is still pretty frustrating to me.
My most recent problem is a Java.lang.NullPointerException on line 26 (it's commented where it breaks). It is reading the file and creating the rectangles[] array properly. The file calls for 9 rectangles, and it creates indexes rectangles[0...8]. But each of the indexes is set to null when I pause the debugger. So Java will make my array, but not populate it? I haven't had this problem when using standard types like int or boolean arrays... any help please? I'm sure it's a pretty novice mistake, it may not even be Java and my tired eyes are missing an obvious typo?
package RectanglesInt;

import java.util.*;
import java.io.*;

public class RectanglesInt {

    public class rectangle {
        int xMin, xMax, yMin, yMax, area;
        boolean intersects;
    }

    static rectangle rectangles[];

    public static void main(String args[]) throws Exception {
        Scanner in = new Scanner(new FileReader("rectangles.in"));
        int numCases = in.nextInt(); // First line of file is # of test cases
        for (int cases = 0; cases < numCases; cases++) {
            int numRectangles = in.nextInt(); // Next line is number of rectangles in set of input
            rectangles = new rectangle[numRectangles];
            int xMax, yMax;
            xMax = yMax = 0;
            for (int i = 0; i < numRectangles; i++) {
                rectangles[i].xMin = in.nextInt(); // Crashes right here. rectangles[0] == null
                rectangles[i].yMin = in.nextInt();
                // [code left out for brevity]
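The line fails because new rectangle[numRectangles] allocates an array of references, all initially null; each slot still needs its own object before its fields can be touched. A standalone sketch (with a hypothetical Rectangle class, not the poster's code) of the fix:

```java
public class Main {
    static class Rectangle {
        int xMin, yMin;
    }

    static Rectangle[] makeRectangles(int n) {
        Rectangle[] rects = new Rectangle[n]; // n references, all null
        for (int i = 0; i < n; i++) {
            rects[i] = new Rectangle();       // populate each slot
        }
        return rects;
    }

    public static void main(String[] args) {
        Rectangle[] raw = new Rectangle[3];
        System.out.println(raw[0]);           // prints "null": no objects yet
        Rectangle[] ready = makeRectangles(3);
        ready[0].xMin = 5;                    // safe: the object exists now
        System.out.println(ready[0].xMin);
    }
}
```

In the posted loop, adding rectangles[i] = new rectangle(); just before the first in.nextInt() assignment removes the NullPointerException.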
Hi, Default :
{
"button": "button1", "count": 2,
"command": "my_special_doubleclick"
}
I know I can assign modifiers like ctrl+alt, but I rather just have the double click...
thanks in advance Dean
Theoretically (I haven't checked that yet for double click) by making use of run_ you can get hold of the mouse click event and then pass it to the default handler (drag_select - see the default mousemap for the details).
Here's an example: [1] custom command that overrides run_, [2] mousemap for subclasses of that command, which specifies the system command that gets overriden and its arguments, so that the custom command can call it.
By the way, please, let me know how it goes. I'm also going to override double-click to navigate the stack trace during debugging. Would be great if this sketch works.
[1] github.com/sublimescala/sublime ... me.py#L190
[2] github.com/sublimescala/sublime ... e-mousemap
Thanks, this worked like a charm!
The mouse overwrite ended as:
{
"button": "button1", "count": 2,
"press_command": "my_special_doubleclick",
"press_args": {"command": "drag_select", "args": {"by": "words"}}
}
And my plugin is:
class MySpecialDoubleclickCommand(sublime_plugin.TextCommand):
    def run_(self, args):
        if self.view.name() == "mySpecialBuffer":
            self.doMyStuff()
        else:
            system_command = args["command"] if "command" in args else None
            if system_command:
                system_args = dict({"event": args["event"]}.items() + args["args"].items())
                self.view.run_command(system_command, system_args)
thanks again for your quick reply
Dean
Aforementioned code apparently works only in ST2. I would like to get this functionality working in ST3 as well. Python 3.x treats dictionaries differently than 2.x so I changed the addition to union to achieve the same dictionary as a result.
However, my attempts to call the run_command(system_command, system_args) have been unsuccessful. I keep getting the following error:
File "C:\Program Files\Sublime Text 3\sublime.py", line 607, in run_command
sublime_api.view_run_command(self.view_id, cmd, args)
TypeError: Value required
Any help would be appreciated.
Scam, I did a little digging on bringing that broken dictionary syntax into Python 3, and here's what I got:
stackoverflow.com/questions/1336 ... dict-items
Turns out we need to replace the + operator with .update()
To put it in context, here's what it looks like, using the example from earlier in this thread:
class MySpecialDoubleclickCommand(sublime_plugin.TextCommand):
    def run_(self, view, args):
        if self.view.name() == "mySpecialBuffer":
            self.doMyStuff()
        else:
            system_command = args["command"] if "command" in args else None
            if system_command:
                system_args = dict({"event": args["event"]}.items())
                system_args.update(dict(args["args"].items()))
                self.view.run_command(system_command, system_args)
Thanks to everyone in this thread for the springboard - I hope the fix I found works for you too.
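The Python 2 vs Python 3 difference at the heart of this fix can be reproduced standalone, with no Sublime API involved; the dict contents are illustrative:

```python
# In Python 2, dict.items() returned lists, so a.items() + b.items() worked.
# In Python 3, items() returns view objects, which do not support +.
a = {"event": "double_click"}
b = {"by": "words"}

merged = dict(a)
merged.update(b)   # the .update() fix from this thread

assert merged == {"event": "double_click", "by": "words"}
```

On Python 3.5+ the literal form {**a, **b} gives the same result in one expression.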
VB.NET not only provides us with new OO features, but it also changes the way we implement some of the features we are used to from VB6. As we go through these features we’ll cover both the new capabilities and also explore the changes to existing features.
Creating Classes
When building classes in previous versions of VB, each class got its own file. While simple, this solution could cause a larger OO project to have many files. VB.NET allows us to put more than one class in a single source file. While we don’t have to take this approach, it can be nice since we can reduce the overall number of files in a project – possibly making it more maintainable.
Additionally, VB.NET provides support for the concept of .NET namespaces, as we discussed in Chapters 2 and 3. There are also changes to the syntax used to create Property methods, and we can overload methods in our classes. We’ll look at all these features shortly. First though, let’s look at how we add a class to a project.
This is the common dialog used for adding any type of item to our project – in this case it defaults to adding a class module. Regardless of which type of VB source file we choose (form, class, module, etc.) we’ll end up with a file ending in a .vb extension.
It is the content of the file that determines its type, not the file extension. The IDE creates different starting code within the file based on the type we choose.
We can name the class TheClass in this dialog and, when we click Open, a new file will be added to our project, containing very simple code:
Public Class TheClass
End Class
Though a .vb file can contain multiple classes, modules, and other code, the normal behavior from the IDE is the same as we’ve had in VB since its inception – one class, module, or form per file. We can manually add other code to the files created by the IDE with no problems, but when we ask the IDE to create a class for us it will always do so by adding a new file to the project.
At this point we’re ready to start adding code.
Class Keyword
As shown in this example, we now have a Class keyword along with the corresponding End Class. This new keyword is needed in order for a single source file to contain more than one class. Any time we want to create a class in VB.NET, we simply put all the code for the class within the Class ... End Class block. For instance:
Public Class TheClass
Public Sub DoSomething()
MsgBox("Hello world", MsgBoxStyle.Information, "TheClass")
End Sub
End Class
Within a given source file (any .vb file) we can have many of these Class ... End Class blocks, one after another.
Classes and Namespaces
We discussed the concept of a namespace thoroughly in Chapters 2 and 3. Namespaces are central to the .NET environment, as they provide a mechanism by which classes can be organized into logical groupings, making them easier to find and manage.
Namespaces in VB.NET are declared using a block structure. For example:
Namespace TheNamespace
Public Class TheClass
End Class
End Namespace
Any classes, structures, or other types declared within the Namespace ... End Namespace block will be addressed using that namespace. In this example, our class is referenced using the namespace, so declaring a variable would be done as follows:
Private obj As TheNamespace.TheClass
Because namespaces are created using a block structure, it is possible for a single source file to contain not only many classes, but also many namespaces.
Also, classes within the same namespace can be created in separate files. In other words, within a VB.NET project we can use the same namespace in more than one source file – and all the classes within those namespace blocks will be part of that same namespace.
For instance, if we have one source file with the following code:
Namespace TheNamespace
Public Class TheClass
End Class
End Namespace
And we have a separate source file in the project with the following code:
Namespace TheNamespace
Public Class TheOtherClass
End Class
End Namespace
Then we’ll have a single namespace – TheNamespace – with two classes – TheClass and TheOtherClass.
It is also important to remember that VB.NET projects, by default, have a root namespace that is part of the project’s properties. By default this root namespace will have the same name as our project. When we use the Namespace block structure, we are actually adding to that root namespace. So, in our example, if the project is named MyProject, then we could declare a variable as:
Private obj As MyProject.TheNamespace.TheClass
To change the root namespace, use the Project | Properties menu option. The root namespace can be cleared as well, meaning that all Namespace blocks become the root level for the code they contain.
Creating Methods
Methods in VB.NET are created just like they are in VB6 – using the Sub or Function keywords. A method created with Sub does not return a value, while a Function must return a value as a result.
Sub DoSomething()
End Sub
Function GetValue() As Integer
End Function
We retain the three scoping keywords we are used to, and gain two more:

Private – callable only by code within our class
Friend – callable only by code within our project/component
Public – callable by code outside our class
Protected – new to VB.NET; we’ll discuss this later when we cover inheritance
Protected Friend – callable only by code within our project/component and by code in our subclasses; we’ll discuss this later when we cover inheritance
Parameters to methods are now declared ByVal by default, rather than ByRef. We can still override the default behavior through explicit use of the ByRef keyword. We discussed these issues in more detail in Chapter 3.
Creating Properties
In Chapter 3 we discussed the changes to the way Property routines are created. In the past we’d create separate routines for Property Get and Property Let. Now these are combined into a single structure:
Private mstrName As String
Public Property Name() As String
Get
Return mstrName
End Get
Set
mstrName = Value
End Set
End Property
Refer to Chapter 3 for further discussion, including details on creating read-only and write-only properties.
Default Property
When creating classes in VB6 we could declare a default method, or property, for our class. This was done using the Tools | Procedure Attributes menu option and by setting the Procedure ID to (default). Not an entirely intuitive process, since we couldn’t look at the code to see what was going on.
VB.NET changes this behavior in a couple of ways. First off, creating a default property is done through the use of a Default keyword – making the declaration much more clear and intuitive. However, VB.NET introduces a new limitation on default properties – to be default, a property must be a property array.

A property array is a property that is indexed – much like an array. The Item property on a collection or list object is an example:
strText = MyList.Item(5)
The Item property doesn’t have a singular value, but rather is an array of properties accessed via an index.

By requiring default properties to be a property array, we allow the language to avoid ambiguities in the use of default properties. This is a key to the elimination of the Set keyword as we knew it in VB6. Consider the following code:
MyValue = MyObject
Does this refer to the object MyObject, or to its default property? In VB6 this was resolved by forcing us to use the Set command when dealing with the object, otherwise the default property was used. In VB.NET this statement always refers to the object, since a default property would be indexed. To get at a default property we’d have code such as:
MyValue = MyObject(5)
This is not ambiguous, since the index is a clear indicator that we’re referring to the default property rather than to MyObject itself.
This change means a property array procedure must accept a parameter. For example:
Private theData(100) As String
Default Public Property Data(ByVal Index As Integer) As String
Get
Data = theData(index)
End Get
Set
theData(index) = Value
End Set
End Property
In the end, this code is much clearer than its VB6 counterpart, but we lose some of the flexibility we enjoyed with default properties in the past. For instance, we’d often use default properties when working with GUI controls, such as the default Text property:
TextBox1 = "My text"
This is no longer valid in VB.NET, since the Text property is not a property array. Instead we must now use the property name in these cases.
Overloading Methods
One of the more exciting new polymorphic features in VB.NET is the ability to overload a method. Overloading means that we can declare a method of the same name more than once in a class – as long as each declaration has a different parameter list. This can be very powerful.
A different parameter list means different data types in the list. Consider the following method declaration:
Public Sub MyMethod(X As Integer, Y As Integer)
The parameter list of this method can be viewed as (integer, integer). To overload this method, we must come up with a different parameter list – perhaps (integer, double). The order of the types also matters, so (integer, double) and (double, integer) are different and would work for overloading.
Overloading cannot be done merely by changing the return type of a function. It is the data types of the actual parameters that must differ for overloading to occur.
As an example, suppose we want to provide a search capability – returning a set of data based on some criteria – so we create a routine such as:
Public Function FindData(ByVal Name As String) As ArrayList
' find data and return result
End Function
In VB6, if we wanted to add a new searching option based on some other criteria, we’d have to add a whole new function with a different name. In VB.NET however, we can simply overload this existing function:
Public Overloads Function FindData(ByVal Name As String) As ArrayList
' find data and return result
End Function

Public Overloads Function FindData(ByVal Age As Integer) As ArrayList
' find data and return result
End Function
Notice that both method declarations have the same method name – something that would be prohibited in VB6. Each has different parameters, which allows VB.NET to differentiate between them, and each is declared with the Overloads keyword.
When overloading a method we can have different scopes on each implementation – as long as the parameter lists are different, as we discussed earlier. This means we could change our FindData methods to have different scopes:
Public Overloads Function FindData(ByVal Name As String) As ArrayList
' find data and return result
End Function

Friend Overloads Function FindData(ByVal Age As Integer) As ArrayList
' find data and return result
End Function
With this change, only other code in our VB.NET project can make use of the FindData that accepts an Integer as its parameter.
#include <RTOp_SparseSubVector.h>
List of all members.
The stride member
vec.value_stride may be positive (>0), negative (<0) or even zero (0). A negative stride
vec.value_stride < 0 allows a reverse traversal of the elements in
vec.values[]. A sparse vector where
vec.value_stride == 0 is one where all of the nonzeros have the value
vec.values[0].

If
vec.sub_nz == 0 then it is allowed for
vec.indices == NULL. If
vec.sub_dim > vec.sub_nz > 0 then
vec.indices != NULL must hold. The member
vec.local_offset is used to shift the values in
vec.indices[] to be in range of the local sub-vector. In other words:

1 <= vec.local_offset + vec.indices[vec.indices_stride*(k-1)] <= vec.sub_nz, for k = 1...vec.sub_nz

The member
vec.indices_stride may be positive (>0) or negative (<0) but not zero (0). Allowing
vec.indices_stride == 0 would mean that a vector would have
vec.sub_nz nonzero elements with all the same value and all the same indexes, and non-unique indices are not allowed. Allowing non-unique indexes would make some operations (e.g. dot product) very difficult to implement and therefore can not be allowed.

Definition at line 118 of file RTOp_SparseSubVector.h.
Malformed URL exception using Web Start after upgrading to JRE 1.5.0_16
Hi people,
I seem to have encountered a problem using Web Start after upgrading my JRE from 1.5.0_15 to 1.5.0_16.
I have written a simple test program as shown below. This simply tries to load a java-help HelpSet file.
import java.net.URL;
import javax.help.HelpSet;
import javax.help.HelpSetException;
public class HelpTest {
public HelpTest() {
final ClassLoader cl = HelpTest.this.getClass().getClassLoader();
System.out.println("classloader = " + cl);
final URL url = HelpSet.findHelpSet(cl, "jhelpset.hs");
System.out.println("url = " + url);
try {
final HelpSet hs = new HelpSet(cl, url);
System.out.println("helpset = " + hs);
} catch (final HelpSetException e) {
e.printStackTrace();
}
}
public static void main(String[] args) {
new HelpTest();
}
}
(NOTE: I have also used Thread.currentThread().getContextClassLoader() to retrieve the classloader but I get the same result).
The following is the JNLP file
<?xml version="1.0" encoding="utf-8"?>
Help Test Demo
My Own Vendor
Help Test
Test loading of help
This is my JNLP file for the java-help
<?xml version="1.0" encoding="utf-8"?>
JavaHelp
Sun Microsystems, Inc.
The following jars, which are all signed, are contained in my WAR file.
helptest.jar (This just contains the class show above plus the files in the META-INF directory)
helpfiles.jar (This just contains 3 files jhelpset.hs, jhelpmap.jhm, jhelptoc.xml plus the files in the META-INF directory)
javahelp-2_0_05.jar (Java Help, This was already signed by SUN)
If the application is run through JRE 1.5.0._15 I get the following output on the Java Console:-
Java Web Start 1.5.0_15
Using JRE version 1.5.0
url = jar:file:I:/Documents%20and%20Settings/Application%20Data/Sun/Java/Deployment/cache/javaws/http/Dlocalhost/P8080/DMhelptest/java-XMhelpfiles.jar43748tmp!/jhelpset.hs
helpset = Help Test User Guide
If I then install JRE 1.5.0_16 and run the application I get the following error message on the Java Console:-
Java Web Start 1.5.0_16
Using JRE version 1.5.0
url = jar:jhelpset.hs
javax.help.HelpSetException: Could not parse
Malformed URL: jhelpmap.jhm.
Parsing failed for jar:jhelpset.hs
at javax.help.HelpSet.(HelpSet.java:154)
at com.snh.HelpApp.HelpTest.(HelpTest.java:24)
at com.snh.HelpApp.HelpTest.main(HelpTest.java)
I have noticed that somehow the URL in 1.5.0_15 was picking up the directory that Web Start uses
for its cache which then loads the HelpSet correctly.
However the URL that 1.5.0_16 picks up seems to be incomplete/incorrect.
Does anyone have any ideas as to why this error message is suddenly being produced?
Any light shed on this would be much appreciated.
Thanks,
Steve.
Does anyone have a workaround for this? We are encountering the same problem on Mac OS X and unfortunately Apple doesn't offer an easy way to roll back its JRE to 1.5.0_15. And Java 6 is only available for 64-bit computers currently. Thanks.
This seems to be a 1.5.0_16 Regression. We see the same here. And not only with Helpsets but also with a EntityResolver of Xerces.
My guess is that the URL parser got more strict in order to prevent some exploits related with this sun Alert: (the URL contains the jar: and ! symbols).
However the Bugs are not public, so it is hard to say. I guess I will file a bug about this.
We are also having a problem with JavaHelp on OS X with the 1.5.0_16 revision -
Have you reported the Bug? I looked for a place to report bugs with Apple but couldn't find anything - maybe I didn't look hard enough. I hope Apple fixes this - I don't want to make changes to our application just so it can run on OS X!
xosd_is_onscreen - Returns whether the XOSD window is shown
#include <xosd.h>
int xosd_is_onscreen (xosd *osd);
xosd_is_onscreen determines whether an XOSD window is currently being
shown (is mapped to the X display). Because XOSD displays data
asynchronously (see xosd_display(3) for details) it can be difficult to
know if data is being displayed; xosd_is_onscreen solves this problem.
Call xosd_show(3) or xosd_hide(3) to alter the visibility of the XOSD
window.
osd The XOSD window to query.
A 1 is returned if the window is onscreen (mapped), or 0 if it is
hidden (unmapped). On error, -1 is returned and xosd_error is set to
indicate the reason for the error.
char *xosd_error
A pointer to a text string describing the error, if one occurred.
The xosd_is_onscreen function first appeared in version 2.1 of the XOSD
library.
The XOSD library was originally written by André Renaud and is currently
maintained by Tim Wright, who also wrote the xosd_is_onscreen function.
Michael JasonSmith thinks he wrote this document, but is not sure; drop
Michael an email (<mike@ldots.org>) if you think he didn't write this
document.
There are no known bugs with xosd_is_onscreen. Bug reports can be sent
to <xosd@ignavus.net>.
xosd_display(3), xosd_show(3), xosd_hide(3).
XOSD_IS_ONSCREEN(3)
A simple tutorial to understand SOAP.
This tutorial will make you familiar with the SOAP implementation for Ruby (SOAP4R). This is a basic tutorial, so if you need deeper detail you will need to refer to other resources.
SOAP4R is the SOAP implementation for Ruby developed by Hiroshi Nakamura and can be downloaded from:
...............
# Handler methods
def add(a, b)
return a + b
end
def div(a, b)
return a / b
end
end
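Before wiring these handler methods into a SOAP server, they can be sanity-checked in plain Ruby. The class name `MyHandler` below is illustrative; only the method bodies mirror the handlers above:

```ruby
# Plain-Ruby check of the service handler methods, outside SOAP4R.
class MyHandler
  def add(a, b)
    a + b
  end

  def div(a, b)
    a / b
  end
end

h = MyHandler.new
puts h.add(2, 3)   # 5
puts h.div(10, 2)  # 5
```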
The next step is to add our defined methods to our server. The initialize method is used to expose service methods with one of the two following methods:
add_method(self, 'aMeth', [
%w(in inParam),
%w(inout inoutParam),
%w(out outParam),
%w(retval return)
])
The final step is to start your server by instantiating one instance of the derived class and calling the start method.
myServer = MyServer.new('ServerName',
'urn:ruby:ServiceName', hostname, port)
myServer.start
Here is the description of required parameters :
Now, using the above steps, let us write a standalone server:
$ ruby MyServer.rb&
The SOAP::RPC::Driver class provides support for writing SOAP client applications. This tutorial will describe this class and demonstrate its usage on the basis of an application.
Following is the bare minimum information you would need to call a SOAP service:
We create an instance of SOAP::RPC::Driver by calling its new method as follows:
SOAP::RPC::Driver.new(endPoint, nameSpace, soapAction)
To add a SOAP service method to a SOAP::RPC::Driver we can call the following method using SOAP::RPC::Driver instance:
driver.add_method(name, *paramArg)
The final step is to invoke the SOAP service using the SOAP::RPC::Driver instance as follows:
result = driver.serviceMethod(paramArg...)
Here serviceMethod is the actual web service method and paramArg... is the list of parameters required by the service method.
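The reason `driver.serviceMethod(...)` works like an ordinary Ruby call is that `add_method` defines a proxy method on the driver object. A dependency-free sketch of that pattern (this is not SOAP4R's actual implementation; `FakeDriver` and `sayHello` are illustrative names):

```ruby
# Illustrates how an add_method-style API can expose remote calls as
# ordinary Ruby methods, using define_method.
class FakeDriver
  def add_method(name, *param_names)
    self.class.send(:define_method, name) do |*args|
      # A real driver would marshal args into a SOAP envelope and send it;
      # here we just return what would be sent.
      { method: name, params: param_names.zip(args).to_h }
    end
  end
end

driver = FakeDriver.new
driver.add_method('sayHello', 'name')
puts driver.sayHello('Ruby').inspect
```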
Based on the above steps, we will write a SOAP client as follows:
#!
I have explained just the very basic concepts of Web Services with Ruby. If you want to drill down further, the following link provides more detail on Web Services with Ruby.
In this example, we'll explore a common approach that is particularly useful in real-world applications: take a pre-trained Caffe network and fine-tune the parameters on your custom data.
The advantage of this approach is that, since pre-trained networks are learned on a large set of images, the intermediate layers capture the "semantics" of the general visual appearance. Think of it as a very powerful generic visual feature that you can treat as a black box. On top of that, only a relatively small amount of data is needed for good performance on the target task.
First, we will need to prepare the data. This involves the following parts: (1) Get the ImageNet ilsvrc pretrained model with the provided shell scripts. (2) Download a subset of the overall Flickr style dataset for this demo. (3) Compile the downloaded Flickr dataset into a database that Caffe can then consume.
caffe_root = '../'  # this file should be run from {caffe_root}/examples (otherwise change this line)
import sys
sys.path.insert(0, caffe_root + 'python')
import caffe

caffe.set_device(0)
caffe.set_mode_gpu()

import numpy as np
from pylab import *
%matplotlib inline

import tempfile

# Helper function for deprocessing preprocessed images, e.g., for display.
def deprocess_net_image(image):
    image = image.copy()              # don't modify destructively
    image = image[::-1]               # BGR -> RGB
    image = image.transpose(1, 2, 0)  # CHW -> HWC
    image += [123, 117, 104]          # (approximately) undo mean subtraction

    # clamp values in [0, 255]
    image[image < 0], image[image > 255] = 0, 255

    # round and cast from float32 to uint8
    image = np.round(image)
    image = np.require(image, dtype=np.uint8)

    return image
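The per-pixel arithmetic in `deprocess_net_image` (channel reversal, re-adding the mean, clamping) can be checked on a single pixel in pure Python. This numpy-free sketch is for intuition only; `deprocess_pixel` is a hypothetical helper, not part of the notebook:

```python
def deprocess_pixel(bgr, mean=(123, 117, 104)):
    # Reverse BGR -> RGB, undo the (RGB-ordered) mean subtraction,
    # then clamp each channel to [0, 255] and round to an integer.
    rgb = list(reversed(bgr))
    return [int(round(min(max(v + m, 0), 255))) for v, m in zip(rgb, mean)]

# A preprocessed pixel with one channel driven below zero and one large:
print(deprocess_pixel([-20.0, 10.0, 200.0]))  # [255, 127, 84]
```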
Download data required for this exercise.
get_ilsvrc_aux.sh to download the ImageNet data mean, labels, etc.
download_model_binary.py to download the pretrained reference model
finetune_flickr_style/assemble_data.py downloads the style training and testing data
We'll download just a small subset of the full dataset for this exercise: just 2000 of the 80K images, from 5 of the 20 style categories. (To download the full dataset, set full_dataset = True in the cell below.)
# Download just a small subset of the data for this exercise.
# (2000 of 80K images, 5 of 20 labels.)
# To download the entire dataset, set `full_dataset = True`.
full_dataset = False
if full_dataset:
    NUM_STYLE_IMAGES = NUM_STYLE_LABELS = -1
else:
    NUM_STYLE_IMAGES = 2000
    NUM_STYLE_LABELS = 5

# This downloads the ilsvrc auxiliary data (mean file, etc),
# and a subset of 2000 images for the style recognition task.
import os
os.chdir(caffe_root)  # run scripts from caffe root
!data/ilsvrc12/get_ilsvrc_aux.sh
!scripts/download_model_binary.py models/bvlc_reference_caffenet
!python examples/finetune_flickr_style/assemble_data.py \
    --workers=-1 --seed=1701 \
    --images=$NUM_STYLE_IMAGES --label=$NUM_STYLE_LABELS
# back to examples
os.chdir('examples')
Downloading...
--2016-02-24 00:28:36--
Resolving dl.caffe.berkeleyvision.org (dl.caffe.berkeleyvision.org)... 169.229.222.251
Connecting to dl.caffe.berkeleyvision.org (dl.caffe.berkeleyvision.org)|169.229.222.251|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 17858008 (17M) [application/octet-stream]
Saving to: ‘caffe_ilsvrc12.tar.gz’
100%[======================================>] 17,858,008  112MB/s  in 0.2s
2016-02-24 00:28:36 (112 MB/s) - ‘caffe_ilsvrc12.tar.gz’ saved [17858008/17858008]
Unzipping... Done.
Model already exists.
Downloading 2000 images with 7 workers...
Writing train/val for 1996 successfully downloaded images.
Define weights, the path to the ImageNet pretrained weights we just downloaded, and make sure it exists.
import os
weights = os.path.join(caffe_root, 'models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel')
assert os.path.exists(weights)
Load the 1000 ImageNet labels from ilsvrc12/synset_words.txt, and the 5 style labels from finetune_flickr_style/style_names.txt.
# Load ImageNet labels to imagenet_labels
imagenet_label_file = caffe_root + 'data/ilsvrc12/synset_words.txt'
imagenet_labels = list(np.loadtxt(imagenet_label_file, str, delimiter='\t'))
assert len(imagenet_labels) == 1000
print 'Loaded ImageNet labels:\n', '\n'.join(imagenet_labels[:10] + ['...'])

# Load style labels to style_labels
style_label_file = caffe_root + 'examples/finetune_flickr_style/style_names.txt'
style_labels = list(np.loadtxt(style_label_file, str, delimiter='\n'))
if NUM_STYLE_LABELS > 0:
    style_labels = style_labels[:NUM_STYLE_LABELS]
print '\nLoaded style labels:\n', ', '.join(style_labels)
Loaded ImageNet labels:
n01496331 electric ray, crampfish, numbfish, torpedo
n01498041 stingray
n01514668 cock
n01514859 hen
n01518878 ostrich, Struthio camelus
...

Loaded style labels:
Detailed, Pastel, Melancholy, Noir, HDR
from caffe import layers as L
from caffe import params as P

weight_param = dict(lr_mult=1, decay_mult=1)
bias_param = dict(lr_mult=2, decay_mult=0)
learned_param = [weight_param, bias_param]

frozen_param = [dict(lr_mult=0)] * 2

def conv_relu(bottom, ks, nout, stride=1, pad=0, group=1,
              param=learned_param,
              weight_filler=dict(type='gaussian', std=0.01),
              bias_filler=dict(type='constant', value=0.1)):
    conv = L.Convolution(bottom, kernel_size=ks, stride=stride,
                         num_output=nout, pad=pad, group=group,
                         param=param, weight_filler=weight_filler,
                         bias_filler=bias_filler)
    return conv, L.ReLU(conv, in_place=True)

def fc_relu(bottom, nout, param=learned_param,
            weight_filler=dict(type='gaussian', std=0.005),
            bias_filler=dict(type='constant', value=0.1)):
    fc = L.InnerProduct(bottom, num_output=nout, param=param,
                        weight_filler=weight_filler,
                        bias_filler=bias_filler)
    return fc, L.ReLU(fc, in_place=True)

def max_pool(bottom, ks, stride=1):
    return L.Pooling(bottom, pool=P.Pooling.MAX, kernel_size=ks, stride=stride)

def caffenet(data, label=None, train=True, num_classes=1000,
             classifier_name='fc8', learn_all=False):
    """Returns a NetSpec specifying CaffeNet, following the original proto text
       specification (./models/bvlc_reference_caffenet/train_val.prototxt)."""
    n = caffe.NetSpec()
    n.data = data
    param = learned_param if learn_all else frozen_param
    n.conv1, n.relu1 = conv_relu(n.data, 11, 96, stride=4, param=param)
    n.pool1 = max_pool(n.relu1, 3, stride=2)
    n.norm1 = L.LRN(n.pool1, local_size=5, alpha=1e-4, beta=0.75)
    n.conv2, n.relu2 = conv_relu(n.norm1, 5, 256, pad=2, group=2, param=param)
    n.pool2 = max_pool(n.relu2, 3, stride=2)
    n.norm2 = L.LRN(n.pool2, local_size=5, alpha=1e-4, beta=0.75)
    n.conv3, n.relu3 = conv_relu(n.norm2, 3, 384, pad=1, param=param)
    n.conv4, n.relu4 = conv_relu(n.relu3, 3, 384, pad=1, group=2, param=param)
    n.conv5, n.relu5 = conv_relu(n.relu4, 3, 256, pad=1, group=2, param=param)
    n.pool5 = max_pool(n.relu5, 3, stride=2)
    n.fc6, n.relu6 = fc_relu(n.pool5, 4096, param=param)
    if train:
        n.drop6 = fc7input = L.Dropout(n.relu6, in_place=True)
    else:
        fc7input = n.relu6
    n.fc7, n.relu7 = fc_relu(fc7input, 4096, param=param)
    if train:
        n.drop7 = fc8input = L.Dropout(n.relu7, in_place=True)
    else:
        fc8input = n.relu7
    # always learn fc8 (param=learned_param)
    fc8 = L.InnerProduct(fc8input, num_output=num_classes, param=learned_param)
    # give fc8 the name specified by argument `classifier_name`
    n.__setattr__(classifier_name, fc8)
    if not train:
        n.probs = L.Softmax(fc8)
    if label is not None:
        n.label = label
        n.loss = L.SoftmaxWithLoss(fc8, n.label)
        n.acc = L.Accuracy(fc8, n.label)
    # write the net to a temporary file and return its filename
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(str(n.to_proto()))
        return f.name
Now, let's create a CaffeNet that takes unlabeled "dummy data" as input, allowing us to set its input images externally and see what ImageNet classes it predicts.
dummy_data = L.DummyData(shape=dict(dim=[1, 3, 227, 227]))
imagenet_net_filename = caffenet(data=dummy_data, train=False)
imagenet_net = caffe.Net(imagenet_net_filename, weights, caffe.TEST)
Define a function style_net which calls caffenet on data from the Flickr style dataset.
The new network will also have the CaffeNet architecture, with differences in the input and output:
the input is provided by an ImageData layer
the classifier layer is renamed from fc8 to fc8_flickr to tell Caffe not to load the original classifier (fc8) weights from the ImageNet-pretrained model
def style_net(train=True, learn_all=False, subset=None):
    if subset is None:
        subset = 'train' if train else 'test'
    source = caffe_root + 'data/flickr_style/%s.txt' % subset
    transform_param = dict(mirror=train, crop_size=227,
        mean_file=caffe_root + 'data/ilsvrc12/imagenet_mean.binaryproto')
    style_data, style_label = L.ImageData(
        transform_param=transform_param, source=source,
        batch_size=50, new_height=256, new_width=256, ntop=2)
    return caffenet(data=style_data, label=style_label, train=train,
                    num_classes=NUM_STYLE_LABELS,
                    classifier_name='fc8_flickr',
                    learn_all=learn_all)
Use the style_net function defined above to initialize untrained_style_net, a CaffeNet with input images from the style dataset and weights from the pretrained ImageNet model.

Call forward on untrained_style_net to get a batch of style training data.
untrained_style_net = caffe.Net(style_net(train=False, subset='train'),
                                weights, caffe.TEST)
untrained_style_net.forward()
style_data_batch = untrained_style_net.blobs['data'].data.copy()
style_label_batch = np.array(untrained_style_net.blobs['label'].data, dtype=np.int32)
Pick one of the style net training images from the batch of 50 (we'll arbitrarily choose #8 here). Display it, then run it through imagenet_net, the ImageNet-pretrained network, to view its top 5 predicted classes from the 1000 ImageNet classes.

Below we chose an image where the network's predictions happen to be reasonable, as the image is of a beach, and "sandbar" and "seashore" both happen to be ImageNet-1000 categories. For other images, the predictions won't be this good, sometimes due to the network actually failing to recognize the object(s) present in the image, but perhaps even more often due to the fact that not all images contain an object from the (somewhat arbitrarily chosen) 1000 ImageNet categories. Modify the batch_index variable by changing its default setting of 8 to another value from 0-49 (since the batch size is 50) to see predictions for other images in the batch. (To go beyond this batch of 50 images, first rerun the above cell to load a fresh batch of data into style_net.)
def disp_preds(net, image, labels, k=5, name='ImageNet'):
    input_blob = net.blobs['data']
    net.blobs['data'].data[0, ...] = image
    probs = net.forward(start='conv1')['probs'][0]
    top_k = (-probs).argsort()[:k]
    print 'top %d predicted %s labels =' % (k, name)
    print '\n'.join('\t(%d) %5.2f%% %s' % (i+1, 100*probs[p], labels[p])
                    for i, p in enumerate(top_k))

def disp_imagenet_preds(net, image):
    disp_preds(net, image, imagenet_labels, name='ImageNet')

def disp_style_preds(net, image):
    disp_preds(net, image, style_labels, name='style')
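The `(-probs).argsort()[:k]` idiom in `disp_preds` selects the indices of the k largest probabilities (negating makes the sort descending). The same selection in pure Python, as a quick illustration (`top_k_indices` is a hypothetical helper, not part of the notebook):

```python
def top_k_indices(probs, k=5):
    # Sort the indices by descending probability and keep the first k,
    # mirroring the numpy idiom (-probs).argsort()[:k].
    return sorted(range(len(probs)), key=lambda i: -probs[i])[:k]

probs = [0.05, 0.40, 0.10, 0.30, 0.15]
print(top_k_indices(probs, k=3))  # [1, 3, 4]
```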
batch_index = 8
image = style_data_batch[batch_index]
plt.imshow(deprocess_net_image(image))
print 'actual label =', style_labels[style_label_batch[batch_index]]
actual label = Melancholy
disp_imagenet_preds(imagenet_net, image)
top 5 predicted ImageNet labels =
	(1) 69.89% n09421951 sandbar, sand bar
	(2) 21.76% n09428293 seashore, coast, seacoast, sea-coast
	(3)  3.22% n02894605 breakwater, groin, groyne, mole, bulwark, seawall, jetty
	(4)  1.89% n04592741 wing
	(5)  1.23% n09332890 lakeside, lakeshore
We can also look at untrained_style_net's predictions, but we won't see anything interesting as its classifier hasn't been trained yet.

In fact, since we zero-initialized the classifier (see the caffenet definition -- no weight_filler is passed to the final InnerProduct layer), the softmax inputs should be all zero and we should therefore see a predicted probability of 1/N for each label (for N labels). Since we set N = 5, we get a predicted probability of 20% for each class.
disp_style_preds(untrained_style_net, image)
top 5 predicted style labels =
	(1) 20.00% Detailed
	(2) 20.00% Pastel
	(3) 20.00% Melancholy
	(4) 20.00% Noir
	(5) 20.00% HDR
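That uniform 20% output follows directly from the softmax definition: with all-zero inputs, every class gets e^0 / (N * e^0) = 1/N. A quick pure-Python check, no Caffe needed:

```python
import math

def softmax(xs):
    # softmax(x_i) = exp(x_i) / sum_j exp(x_j)
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Zero logits for N = 5 classes -> uniform 1/N = 20% per class.
print(softmax([0.0] * 5))  # [0.2, 0.2, 0.2, 0.2, 0.2]
```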
We can also verify that the activations in layer fc7 immediately before the classification layer are the same as (or very close to) those in the ImageNet-pretrained model, since both models are using the same pretrained weights in the conv1 through fc7 layers.
diff = untrained_style_net.blobs['fc7'].data[0] - imagenet_net.blobs['fc7'].data[0]
error = (diff ** 2).sum()
assert error < 1e-8
Delete untrained_style_net to save memory. (Hang on to imagenet_net as we'll use it again later.)
del untrained_style_net
Now, we'll define a function solver to create our Caffe solvers, which are used to train the network (learn its weights). In this function we'll set values for various parameters used for learning, display, and "snapshotting" -- see the inline comments for explanations of what they mean. You may want to play with some of the learning parameters to see if you can improve on the results here!
from caffe.proto import caffe_pb2

def solver(train_net_path, test_net_path=None, base_lr=0.001):
    s = caffe_pb2.SolverParameter()

    # Specify locations of the train and (maybe) test networks.
    s.train_net = train_net_path
    if test_net_path is not None:
        s.test_net.append(test_net_path)
        s.test_interval = 1000  # Test after every 1000 training iterations.
        s.test_iter.append(100)  # Test on 100 batches each time we test.

    # The number of iterations over which to average the gradient.
    # Effectively boosts the training batch size by the given factor, without
    # affecting memory utilization.
    s.iter_size = 1

    s.max_iter = 100000  # # of times to update the net (training iterations)

    # Solve using the stochastic gradient descent (SGD) algorithm.
    # Other choices include 'Adam' and 'RMSProp'.
    s.type = 'SGD'

    # Set the initial learning rate for SGD.
    s.base_lr = base_lr

    # Set `lr_policy` to define how the learning rate changes during training.
    # Here, we 'step' the learning rate by multiplying it by a factor `gamma`
    # every `stepsize` iterations.
    s.lr_policy = 'step'
    s.gamma = 0.1
    s.stepsize = 20000

    # Set other SGD hyperparameters. Setting a non-zero `momentum` takes a
    # weighted average of the current gradient and previous gradients to make
    # learning more stable. L2 weight decay regularizes learning, to help
    # prevent the model from overfitting.
    s.momentum = 0.9
    s.weight_decay = 5e-4

    # Display the current training loss and accuracy every 1000 iterations.
    s.display = 1000

    # Snapshots are files used to store networks we've trained. Here, we'll
    # snapshot every 10K iterations -- ten times during training.
    s.snapshot = 10000
    s.snapshot_prefix = caffe_root + 'models/finetune_flickr_style/finetune_flickr_style'

    # Train on the GPU. Using the CPU to train large networks is very slow.
    s.solver_mode = caffe_pb2.SolverParameter.GPU

    # Write the solver to a temporary file and return its filename.
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(str(s))
        return f.name
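With lr_policy = 'step', the effective learning rate at iteration i is base_lr * gamma ** (i // stepsize). A quick pure-Python check of the schedule defined above (`step_lr` is a hypothetical helper for illustration):

```python
def step_lr(i, base_lr=0.001, gamma=0.1, stepsize=20000):
    # 'step' policy: multiply the rate by gamma once every stepsize iterations.
    return base_lr * gamma ** (i // stepsize)

print(step_lr(0))      # base rate, 0.001
print(step_lr(20000))  # dropped by gamma, ~1e-4
print(step_lr(50000))  # dropped twice, ~1e-5
```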
Now we'll invoke the solver to train the style net's classification layer.
For the record, if you want to train the network using only the command line tool, this is the command:
build/tools/caffe train \
-solver models/finetune_flickr_style/solver.prototxt \
-weights models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel \
-gpu 0
However, we will train using Python in this example.
We'll first define run_solvers, a function that takes a list of solvers and steps each one in a round robin manner, recording the accuracy and loss values each iteration. At the end, the learned weights are saved to a file.
def run_solvers(niter, solvers, disp_interval=10):
    """Run solvers for niter iterations,
       returning the loss and accuracy recorded each iteration.
       `solvers` is a list of (name, solver) tuples."""
    blobs = ('loss', 'acc')
    loss, acc = ({name: np.zeros(niter) for name, _ in solvers}
                 for _ in blobs)
    for it in range(niter):
        for name, s in solvers:
            s.step(1)  # run a single SGD step in Caffe
            loss[name][it], acc[name][it] = (s.net.blobs[b].data.copy()
                                             for b in blobs)
        if it % disp_interval == 0 or it + 1 == niter:
            loss_disp = '; '.join('%s: loss=%.3f, acc=%2d%%' %
                                  (n, loss[n][it], np.round(100*acc[n][it]))
                                  for n, _ in solvers)
            print '%3d) %s' % (it, loss_disp)
    # Save the learned weights from both nets.
    weight_dir = tempfile.mkdtemp()
    weights = {}
    for name, s in solvers:
        filename = 'weights.%s.caffemodel' % name
        weights[name] = os.path.join(weight_dir, filename)
        s.net.save(weights[name])
    return loss, acc, weights
Let's create and run solvers to train nets for the style recognition task. We'll create two solvers -- one (style_solver) will have its train net initialized to the ImageNet-pretrained weights (this is done by the call to the copy_from method), and the other (scratch_style_solver) will start from a randomly initialized net.
During training, we should see that the ImageNet pretrained net is learning faster and attaining better accuracies than the scratch net.
niter = 200  # number of iterations to train

# Reset style_solver as before.
style_solver_filename = solver(style_net(train=True))
style_solver = caffe.get_solver(style_solver_filename)
style_solver.net.copy_from(weights)

# For reference, we also create a solver that isn't initialized from
# the pretrained ImageNet weights.
scratch_style_solver_filename = solver(style_net(train=True))
scratch_style_solver = caffe.get_solver(scratch_style_solver_filename)

print 'Running solvers for %d iterations...' % niter
solvers = [('pretrained', style_solver),
           ('scratch', scratch_style_solver)]
loss, acc, weights = run_solvers(niter, solvers)
print 'Done.'

train_loss, scratch_train_loss = loss['pretrained'], loss['scratch']
train_acc, scratch_train_acc = acc['pretrained'], acc['scratch']
style_weights, scratch_style_weights = weights['pretrained'], weights['scratch']

# Delete solvers to save memory.
del style_solver, scratch_style_solver, solvers
Running solvers for 200 iterations...
  0) pretrained: loss=1.609, acc=28%; scratch: loss=1.609, acc=28%
 10) pretrained: loss=1.293, acc=52%; scratch: loss=1.626, acc=14%
 20) pretrained: loss=1.110, acc=56%; scratch: loss=1.646, acc=10%
 30) pretrained: loss=1.084, acc=60%; scratch: loss=1.616, acc=20%
 40) pretrained: loss=0.898, acc=64%; scratch: loss=1.588, acc=26%
 50) pretrained: loss=1.024, acc=54%; scratch: loss=1.607, acc=32%
 60) pretrained: loss=0.925, acc=66%; scratch: loss=1.616, acc=20%
 70) pretrained: loss=0.861, acc=74%; scratch: loss=1.598, acc=24%
 80) pretrained: loss=0.967, acc=60%; scratch: loss=1.588, acc=30%
 90) pretrained: loss=1.274, acc=52%; scratch: loss=1.608, acc=20%
100) pretrained: loss=1.113, acc=62%; scratch: loss=1.588, acc=30%
110) pretrained: loss=0.922, acc=62%; scratch: loss=1.578, acc=36%
120) pretrained: loss=0.918, acc=62%; scratch: loss=1.599, acc=20%
130) pretrained: loss=0.959, acc=58%; scratch: loss=1.594, acc=22%
140) pretrained: loss=1.228, acc=50%; scratch: loss=1.608, acc=14%
150) pretrained: loss=0.727, acc=76%; scratch: loss=1.623, acc=16%
160) pretrained: loss=1.074, acc=66%; scratch: loss=1.607, acc=20%
170) pretrained: loss=0.887, acc=60%; scratch: loss=1.614, acc=20%
180) pretrained: loss=0.961, acc=62%; scratch: loss=1.614, acc=18%
190) pretrained: loss=0.737, acc=76%; scratch: loss=1.613, acc=18%
199) pretrained: loss=0.836, acc=70%; scratch: loss=1.614, acc=16%
Done.
Let's look at the training loss and accuracy produced by the two training procedures. Notice how quickly the ImageNet pretrained model's loss value (blue) drops, and that the randomly initialized model's loss value (green) barely (if at all) improves from training only the classifier layer.
plot(np.vstack([train_loss, scratch_train_loss]).T)
xlabel('Iteration #')
ylabel('Loss')
<matplotlib.text.Text at 0x7f75d49e1090>
plot(np.vstack([train_acc, scratch_train_acc]).T)
xlabel('Iteration #')
ylabel('Accuracy')
<matplotlib.text.Text at 0x7f75d49e1a90>
Let's take a look at the testing accuracy after running 200 iterations of training. Note that we're classifying among 5 classes, giving chance accuracy of 20%. We expect both results to be better than chance accuracy (20%), and we further expect the result from training using the ImageNet pretraining initialization to be much better than the one from training from scratch. Let's see.
def eval_style_net(weights, test_iters=10):
    test_net = caffe.Net(style_net(train=False), weights, caffe.TEST)
    accuracy = 0
    for it in xrange(test_iters):
        accuracy += test_net.forward()['acc']
    accuracy /= test_iters
    return test_net, accuracy
test_net, accuracy = eval_style_net(style_weights)
print 'Accuracy, trained from ImageNet initialization: %3.1f%%' % (100*accuracy, )
scratch_test_net, scratch_accuracy = eval_style_net(scratch_style_weights)
print 'Accuracy, trained from random initialization: %3.1f%%' % (100*scratch_accuracy, )
Accuracy, trained from ImageNet initialization: 50.0%
Accuracy, trained from random initialization: 23.6%
Finally, we'll train both nets again, starting from the weights we just learned. The only difference this time is that we'll be learning the weights "end-to-end" by turning on learning in all layers of the network, starting from the RGB conv1 filters directly applied to the input image. We pass the argument learn_all=True to the style_net function defined earlier in this notebook, which tells the function to apply a positive (non-zero) lr_mult value for all parameters. Under the default, learn_all=False, all parameters in the pretrained layers (conv1 through fc7) are frozen (lr_mult = 0), and we learn only the classifier layer fc8_flickr.
Note that both networks start at roughly the accuracy achieved at the end of the previous training session, and improve significantly with end-to-end training. To be more scientific, we'd also want to follow the same additional training procedure without the end-to-end training, to ensure that our results aren't better simply because we trained for twice as long. Feel free to try this yourself!
end_to_end_net = style_net(train=True, learn_all=True)

# Set base_lr to 1e-3, the same as last time when learning only the classifier.
# You may want to play around with different values of this or other
# optimization parameters when fine-tuning. For example, if learning diverges
# (e.g., the loss gets very large or goes to infinity/NaN), you should try
# decreasing base_lr (e.g., to 1e-4, then 1e-5, etc., until you find a value
# for which learning does not diverge).
base_lr = 0.001

style_solver_filename = solver(end_to_end_net, base_lr=base_lr)
style_solver = caffe.get_solver(style_solver_filename)
style_solver.net.copy_from(style_weights)

scratch_style_solver_filename = solver(end_to_end_net, base_lr=base_lr)
scratch_style_solver = caffe.get_solver(scratch_style_solver_filename)
scratch_style_solver.net.copy_from(scratch_style_weights)

print 'Running solvers for %d iterations...' % niter
solvers = [('pretrained, end-to-end', style_solver),
           ('scratch, end-to-end', scratch_style_solver)]
_, _, finetuned_weights = run_solvers(niter, solvers)
print 'Done.'

style_weights_ft = finetuned_weights['pretrained, end-to-end']
scratch_style_weights_ft = finetuned_weights['scratch, end-to-end']

# Delete solvers to save memory.
del style_solver, scratch_style_solver, solvers
Running solvers for 200 iterations...
  0) pretrained, end-to-end: loss=0.781, acc=64%; scratch, end-to-end: loss=1.585, acc=28%
 10) pretrained, end-to-end: loss=1.178, acc=62%; scratch, end-to-end: loss=1.638, acc=14%
 20) pretrained, end-to-end: loss=1.084, acc=60%; scratch, end-to-end: loss=1.637, acc= 8%
 30) pretrained, end-to-end: loss=0.902, acc=76%; scratch, end-to-end: loss=1.600, acc=20%
 40) pretrained, end-to-end: loss=0.865, acc=64%; scratch, end-to-end: loss=1.574, acc=26%
 50) pretrained, end-to-end: loss=0.888, acc=60%; scratch, end-to-end: loss=1.604, acc=26%
 60) pretrained, end-to-end: loss=0.538, acc=78%; scratch, end-to-end: loss=1.555, acc=34%
 70) pretrained, end-to-end: loss=0.717, acc=72%; scratch, end-to-end: loss=1.563, acc=30%
 80) pretrained, end-to-end: loss=0.695, acc=74%; scratch, end-to-end: loss=1.502, acc=42%
 90) pretrained, end-to-end: loss=0.708, acc=68%; scratch, end-to-end: loss=1.523, acc=26%
100) pretrained, end-to-end: loss=0.432, acc=78%; scratch, end-to-end: loss=1.500, acc=38%
110) pretrained, end-to-end: loss=0.611, acc=78%; scratch, end-to-end: loss=1.618, acc=18%
120) pretrained, end-to-end: loss=0.610, acc=76%; scratch, end-to-end: loss=1.473, acc=30%
130) pretrained, end-to-end: loss=0.471, acc=78%; scratch, end-to-end: loss=1.488, acc=26%
140) pretrained, end-to-end: loss=0.500, acc=76%; scratch, end-to-end: loss=1.514, acc=38%
150) pretrained, end-to-end: loss=0.476, acc=80%; scratch, end-to-end: loss=1.452, acc=46%
160) pretrained, end-to-end: loss=0.368, acc=82%; scratch, end-to-end: loss=1.419, acc=34%
170) pretrained, end-to-end: loss=0.556, acc=76%; scratch, end-to-end: loss=1.583, acc=36%
180) pretrained, end-to-end: loss=0.574, acc=72%; scratch, end-to-end: loss=1.556, acc=22%
190) pretrained, end-to-end: loss=0.360, acc=88%; scratch, end-to-end: loss=1.429, acc=44%
199) pretrained, end-to-end: loss=0.458, acc=78%; scratch, end-to-end: loss=1.370, acc=44%
Done.
Let's now test the end-to-end finetuned models. Since all layers have been optimized for the style recognition task at hand, we expect both nets to get better results than the ones above, which were achieved by nets with only their classifier layers trained for the style task (on top of either ImageNet pretrained or randomly initialized weights).
test_net, accuracy = eval_style_net(style_weights_ft)
print 'Accuracy, finetuned from ImageNet initialization: %3.1f%%' % (100*accuracy, )
scratch_test_net, scratch_accuracy = eval_style_net(scratch_style_weights_ft)
print 'Accuracy, finetuned from random initialization: %3.1f%%' % (100*scratch_accuracy, )
Accuracy, finetuned from ImageNet initialization: 53.6%
Accuracy, finetuned from random initialization: 39.2%
We'll first look back at the image we started with and check our end-to-end trained model's predictions.
plt.imshow(deprocess_net_image(image))
disp_style_preds(test_net, image)
top 5 predicted style labels =
	(1) 55.67% Melancholy
	(2) 27.21% HDR
	(3) 16.46% Pastel
	(4)  0.63% Detailed
	(5)  0.03% Noir
Whew, that looks a lot better than before! But note that this image was from the training set, so the net got to see its label at training time.
Finally, we'll pick an image from the test set (an image the model hasn't seen) and look at our end-to-end finetuned style model's predictions for it.
batch_index = 1
image = test_net.blobs['data'].data[batch_index]
plt.imshow(deprocess_net_image(image))
print 'actual label =', style_labels[int(test_net.blobs['label'].data[batch_index])]
actual label = Pastel
disp_style_preds(test_net, image)
top 5 predicted style labels =
	(1) 99.76% Pastel
	(2)  0.13% HDR
	(3)  0.11% Detailed
	(4)  0.00% Melancholy
	(5)  0.00% Noir
We can also look at the predictions of the network trained from scratch. We see that in this case, the scratch network also predicts the correct label for the image (Pastel), but is much less confident in its prediction than the pretrained net.
disp_style_preds(scratch_test_net, image)
top 5 predicted style labels =
	(1) 49.81% Pastel
	(2) 19.76% Detailed
	(3) 17.06% Melancholy
	(4) 11.66% HDR
	(5)  1.72% Noir
Of course, we can again look at the ImageNet model's predictions for the above image:
disp_imagenet_preds(imagenet_net, image)
top 5 predicted ImageNet labels =
	(1) 34.90% n07579787 plate
	(2) 21.63% n04263257 soup bowl
	(3) 17.75% n07875152 potpie
	(4)  5.72% n07711569 mashed potato
	(5)  5.27% n07584110 consomme
So we did finetuning and it is awesome. Let's take a look at what kind of results we are able to get with a longer, more complete run of the style recognition dataset. Note: the below URL might be occasionally down because it is run on a research machine.
GNAT User's Guide for Native Platforms / Unix and Windows
GNAT, The GNU Ada 95 Compiler
GCC version 3.4
This guide describes the use of GNAT, a compiler and software development toolset for the full Ada 95 programming language. It describes the features of the compiler and tools, and details how to use them to build Ada 95 applications.
This guide contains the following chapters:
gcc, the Ada compiler.
gnatbind, the GNAT binding utility.
gnatlink, a program that links compiled units with the GNAT run-time library to construct an executable.
gnatlink can also incorporate foreign language object units into the executable.
gnatmake, a utility that automatically determines the set of sources needed by an Ada compilation unit, and executes the necessary compilation, binding and linking steps.
gnatstub, a utility that generates empty but compilable bodies for library units.
gnathtml, a utility that generates HTML files from Ada sources.
This user's guide assumes that you are familiar with the Ada 95 language, as described in the International Standard ANSI/ISO/IEC-8652:1995, January 1995.
For further information about related tools, refer to the following documents:
Following are examples of the typographical and graphic conventions used in this guide:
Functions, utility program names, standard names, and classes are shown this way.
Commands that are entered by the user are preceded in this manual by the
characters “
$ ” (dollar sign followed by space). If your system
uses this sequence as a prompt, then the commands will appear exactly as
you see them in the manual. If your system uses some other prompt, then
the command will appear with the
$ replaced by whatever prompt
character you are using.
Full file names are shown with the “
/” character
as the directory separator; e.g., parent-dir/subdir/myfile.adb.
If you are using GNAT on a Windows platform, please note that
the “
\” character should be used instead.
This chapter describes some simple ways of using GNAT to build executable Ada programs. Running GNAT, through Using the gnatmake Utility, show how to use the command line environment. Introduction to Glide and GVD, provides a brief introduction to the visually-oriented IDE for GNAT. Supplementing Glide on some platforms is GPS, the GNAT Programming System, which offers a richer graphical “look and feel”, enhanced configurability, support for development in other programming languages, comprehensive browsing features, and many other capabilities. For information on GPS please refer to Using the GNAT Programming System.
Three steps are needed to create an executable file from an Ada source file:
All three steps are most commonly handled by using the
gnatmake
utility program that, given the name of the main program, automatically
performs the necessary compilation, binding and linking steps.
This section describes how to set breakpoints, examine/modify variables, and step through execution.
In order to enable debugging, you need to pass the -g switch
to both the compiler and to gnatlink. If you are using
the command line, passing -g to gnatmake will have
this effect. You can then launch GVD, e.g. on the
hello program,
by issuing the command:
$ gvd hello
If you are using Glide, then -g is passed to the relevant tools
by default when you do a build. Start the debugger by selecting the
Ada menu item, and then
Debug.
GVD comes up in a multi-part window. One pane shows the names of files comprising your executable; another pane shows the source code of the current unit (initially your main subprogram), another pane shows the debugger output and user interactions, and the fourth pane (the data canvas at the top of the window) displays data objects that you have selected.
To the left of the source file pane, you will notice green dots adjacent
to some lines. These are lines for which object code exists and where
breakpoints can thus be set. You set/reset a breakpoint by clicking
the green dot. When a breakpoint is set, the dot is replaced by an
X
in a red circle. Clicking the circle toggles the breakpoint off,
and the red circle is replaced by the green dot.
For this example, set a breakpoint at the statement where
Put_Line
is invoked.
Start program execution by selecting the
Run button on the top menu bar.
(The
Start button will also start your program, but it will
cause program execution to break at the entry to your main subprogram.)
Evidence of reaching the breakpoint will appear: the source file line will be
highlighted, and the debugger interactions pane will display
a relevant message.
You can examine the values of variables in several ways. Move the mouse
over an occurrence of
Ind in the
for loop, and you will see
the value (now
1) displayed. Alternatively, right-click on
Ind
and select
Display Ind; a box showing the variable's name and value
will appear in the data canvas.
Although a loop index is a constant with respect to Ada semantics,
you can change its value in the debugger. Right-click in the box
for
Ind, and select the
Set Value of Ind item.
Enter
2 as the new value, and press OK.
The box for
Ind shows the update.
Press the
Step button on the top menu bar; this will step through
one line of program text (the invocation of
Put_Line), and you can
observe the effect of having modified
Ind since the value displayed
is
2.
Remove the breakpoint, and resume execution by selecting the
Cont
button. You will see the remaining output lines displayed in the debugger
interaction window, along with a message confirming normal program
termination.
You may have observed that some of the menu selections contain abbreviations;
e.g.,
(C-x C-f) for
Open file... in the
Files menu.
These are shortcut keys that you can use instead of selecting
menu items. The <C> stands for <Ctrl>; thus
(C-x C-f) means
<Ctrl-x> followed by <Ctrl-f>, and this sequence can be used instead
of selecting
Files and then
Open file....
To abort a Glide command, type <Ctrl-g>.
If you want Glide to start with an existing source file, you can either
launch Glide as above and then open the file via
Files =>
Open file..., or else simply pass the name of the source file
on the command line:
$ glide hello.adb&
While you are using Glide, a number of buffers exist.
You create some explicitly; e.g., when you open/create a file.
Others arise as an effect of the commands that you issue; e.g., the buffer
containing the output of the tools invoked during a build. If a buffer
is hidden, you can bring it into a visible window by first opening
the
Buffers menu and then selecting the desired entry.
If a buffer occupies only part of the Glide screen and you want to expand it
to fill the entire screen, then click in the buffer and then select
Files =>
One Window.
If a window is occupied by one buffer and you want to split the window to bring up a second buffer, perform the following steps:
Select Files => Split Window; this will produce two windows each of which holds the original buffer (these are not copies, but rather different views of the same buffer contents).
Select the desired buffer from the Buffers menu.
To exit from Glide, choose Files => Exit.
Ada source programs are represented in standard text files, using Latin-1 coding. Latin-1 is an 8-bit code that includes the familiar 7-bit ASCII set, plus additional characters used for representing foreign languages (see Foreign Language Representation for support of non-USA character sets). The format effector characters are represented using their standard ASCII encodings, as follows:
VT  16#0B#  (vertical tabulation)
HT  16#09#  (horizontal tabulation)
CR  16#0D#  (carriage return)
LF  16#0A#  (line feed)
FF  16#0C#  (form feed)
Source files are in standard text file format. In addition, GNAT will
recognize a wide variety of stream formats, in which the end of
physical lines is marked by any of the following sequences:
LF,
CR,
CR-LF, or
LF-CR. This is useful
in accommodating files that are imported from other operating systems.
The end of a source file is normally represented by the physical end of
file. However, the control character
16#1A# (
SUB) is also
recognized as signalling the end of the source file. Again, this is
provided for compatibility with other operating systems where this
code is used to represent the end of file.
Each file contains a single Ada compilation unit, including any pragmas associated with the unit. For example, this means you must place a package declaration (a package spec) and the corresponding body in separate files. An Ada compilation (which is a sequence of compilation units) is represented using a sequence of files. Similarly, you will place each subunit or child unit in a separate file.
GNAT supports the standard character sets defined in Ada 95 as well as several other non-standard character sets for use in localized versions of the compiler (see Character Set Control).
GNAT allows wide character codes to appear in character and string literals, and also optionally in identifiers, by means of the following possible encoding schemes:
ESC a b c d
Where
a,
b,
c,
d are the four hexadecimal
characters (using uppercase letters) of the wide character code. For
example, ESC A345 is used to represent the wide character with code
16#A345#.
This scheme is compatible with use of the full Wide_Character set.
16#abcd#where the upper bit is on (in other words, “a” is in the range 8-F) is represented as two bytes,
16#ab#and
16#cd#. The second byte cannot be a format control character, but is not required to be in the upper half. This method can be also used for shift-JIS or EUC, where the internal coding matches the external coding.
16#ab#and
16#cd#, with the restrictions described for upper-half encoding as described above. The internal character code is the corresponding JIS character according to the standard algorithm for Shift-JIS conversion. Only characters defined in the JIS code set table can be used with this encoding method.
16#ab#and
16#cd#, with both characters being in the upper half. The internal character code is the corresponding JIS character according to the EUC encoding algorithm. Only characters defined in the JIS code set table can be used with this encoding method.
[ " a b c d " ]
Where
a,
b,
c,
d are the four hexadecimal
characters (using uppercase letters) of the wide character code. For
example, [“A345”] is used to represent the wide character with code
16#A345#. It is also possible (though not required) to use the
Brackets coding for upper half characters. For example, the code
16#A3# can be represented as
["A3"].
This scheme is compatible with use of the full Wide_Character set, and is also the method used for wide character encoding in the standard ACVC (Ada Compiler Validation Capability) test suite distributions.
Note: Some of these coding schemes do not permit the full use of the Ada 95 character set. For example, neither Shift JIS nor EUC allow the use of the upper half of the Latin-1 set.
In the previous section, we described the use of the
Source_File_Name
pragma to allow arbitrary names to be assigned to individual source files.
However, this approach requires one pragma for each file, and especially in
large systems can result in very long gnat.adc files, and also create
a maintenance problem.
GNAT also provides a facility for specifying systematic file naming schemes
other than the standard default naming scheme previously described. An
alternative scheme for naming is specified by the use of
Source_File_Name pragmas having the following format:
pragma Source_File_Name
  (Spec_File_Name => FILE_NAME_PATTERN
   [, Casing => CASING_SPEC]
   [, Dot_Replacement => STRING_LITERAL]);

pragma Source_File_Name
  (Body_File_Name => FILE_NAME_PATTERN
   [, Casing => CASING_SPEC]
   [, Dot_Replacement => STRING_LITERAL]);

pragma Source_File_Name
  (Subunit_File_Name => FILE_NAME_PATTERN
   [, Casing => CASING_SPEC]
   [, Dot_Replacement => STRING_LITERAL]);

FILE_NAME_PATTERN ::= STRING_LITERAL
CASING_SPEC ::= Lowercase | Uppercase | Mixedcase
The
FILE_NAME_PATTERN string shows how the file name is constructed.
It contains a single asterisk character, and the unit name is substituted
systematically for this asterisk. The optional parameter
Casing indicates
whether the unit name is to be all upper-case letters, all lower-case letters,
or mixed-case. If no
Casing parameter is used, then the default is all
lower-case.
The optional
Dot_Replacement string is used to replace any periods
that occur in subunit or child unit names. If no
Dot_Replacement
argument is used then separating dots appear unchanged in the resulting
file name.
Although the above syntax indicates that the Casing argument must appear
before the Dot_Replacement argument, it is also permissible to write
these arguments in the opposite order.
As indicated, it is possible to specify different naming schemes for
bodies, specs, and subunits. Quite often the rule for subunits is the
same as the rule for bodies, in which case, there is no need to give
a separate
Subunit_File_Name rule, and in this case the
Body_File_name rule is used for subunits as well.
The separate rule for subunits can also be used to implement the rather unusual case of a compilation environment (e.g. a single directory) which contains a subunit and a child unit with the same unit name. Although both units cannot appear in the same partition, the Ada Reference Manual allows (but does not require) the possibility of the two units coexisting in the same environment.
The file name translation works in the following steps:
If there is a specific Source_File_Name pragma for the given unit, then this is always used, and any general pattern rules are ignored.
If there is a pattern-form Source_File_Name pragma that applies to the unit, then the resulting file name will be used if the file exists. If more than one pattern matches, the latest one will be tried first, and the first attempt resulting in a reference to a file that exists will be used.
If there is no pattern-form Source_File_Name pragma that applies to the unit for which the corresponding file exists, then the standard GNAT default naming rules are used.
As an example of the use of this mechanism, consider a commonly used scheme in which file names are all lower case, with separating periods copied unchanged to the resulting file name, and specs end with .1.ada, and bodies end with .2.ada. GNAT will follow this scheme if the following two pragmas appear:
pragma Source_File_Name (Spec_File_Name => "*.1.ada");
pragma Source_File_Name (Body_File_Name => "*.2.ada");
The default GNAT scheme is actually implemented by providing the following default pragmas internally:
pragma Source_File_Name (Spec_File_Name => "*.ads", Dot_Replacement => "-");
pragma Source_File_Name (Body_File_Name => "*.adb", Dot_Replacement => "-");
Our final example implements a scheme typically used with one of the Ada 83 compilers, where the separator character for subunits was “__” (two underscores), specs were identified by adding _.ADA, bodies by adding .ADA, and subunits by adding .SEP. All file names were upper case. Child units were not present of course since this was an Ada 83 compiler, but it seems reasonable to extend this scheme to use the same double underscore separator for child units.
pragma Source_File_Name
  (Spec_File_Name => "*_.ADA",
   Dot_Replacement => "__",
   Casing => Uppercase);
pragma Source_File_Name
  (Body_File_Name => "*.ADA",
   Dot_Replacement => "__",
   Casing => Uppercase);
pragma Source_File_Name
  (Subunit_File_Name => "*.SEP",
   Dot_Replacement => "__",
   Casing => Uppercase);
This section describes how to develop a mixed-language program, specifically one that comprises units in both Ada and C.
Interfacing Ada with a foreign language such as C involves using
compiler directives to import and/or export entity definitions in each
language—using
extern statements in C, for instance, and the
Import,
Export, and
Convention pragmas in Ada. For
a full treatment of these topics, read Appendix B, section 1 of the Ada
95 Language Reference Manual.
There are two ways to build a program using GNAT that contains some Ada sources and some foreign language sources, depending on whether or not the main subprogram is written in Ada. Here is a source example with the main subprogram in Ada:
/* file1.c */
#include <stdio.h>
void print_num (int num)
{
  printf ("num is %d.\n", num);
  return;
}

/* file2.c */
/* num_from_Ada is declared in my_main.adb */
extern int num_from_Ada;
int get_num (void)
{
  return num_from_Ada;
}
-- my_main.adb
procedure My_Main is
   --  Declare then export an Integer entity called num_from_Ada
   My_Num : Integer := 10;
   pragma Export (C, My_Num, "num_from_Ada");

   --  Declare an Ada function spec for Get_Num, then use
   --  C function get_num for the implementation.
   function Get_Num return Integer;
   pragma Import (C, Get_Num, "get_num");

   --  Declare an Ada procedure spec for Print_Num, then use
   --  C function print_num for the implementation.
   procedure Print_Num (Num : Integer);
   pragma Import (C, Print_Num, "print_num");
begin
   Print_Num (Get_Num);
end My_Main;
gcc -c file1.c gcc -c file2.c
gnatmake -c my_main.adb
gnatbind my_main.ali
gnatlink my_main.ali file1.o file2.o
The last three steps can be grouped in a single command:
gnatmake my_main.adb -largs file1.o file2.o
If the main program is in a language other than Ada, then you may have more than one entry point into the Ada subsystem. You must use a special binder option to generate callable routines that initialize and finalize the Ada units (see Binding with Non-Ada Main Programs). Calls to the initialization and finalization routines must be inserted in the main program, or some other appropriate point in the code. The call to initialize the Ada units must occur before the first Ada subprogram is called, and the call to finalize the Ada units must occur after the last Ada subprogram returns. The binder will place the initialization and finalization subprograms into the b~xxx.adb file where they can be accessed by your C sources. To illustrate, we have the following example:
/* main.c */
#include <stdio.h>
extern void adainit (void);
extern void adafinal (void);
extern int add (int, int);
extern int sub (int, int);

int main (int argc, char *argv[])
{
  int a = 21, b = 7;
  adainit ();
  /* Should print "21 + 7 = 28" */
  printf ("%d + %d = %d\n", a, b, add (a, b));
  /* Should print "21 - 7 = 14" */
  printf ("%d - %d = %d\n", a, b, sub (a, b));
  adafinal ();
}
-- unit1.ads
package Unit1 is
   function Add (A, B : Integer) return Integer;
   pragma Export (C, Add, "add");
end Unit1;

-- unit1.adb
package body Unit1 is
   function Add (A, B : Integer) return Integer is
   begin
      return A + B;
   end Add;
end Unit1;

-- unit2.ads
package Unit2 is
   function Sub (A, B : Integer) return Integer;
   pragma Export (C, Sub, "sub");
end Unit2;

-- unit2.adb
package body Unit2 is
   function Sub (A, B : Integer) return Integer is
   begin
      return A - B;
   end Sub;
end Unit2;
gcc -c main.c
gnatmake -c unit1.adb gnatmake -c unit2.adb
gnatbind -n unit1.ali unit2.ali
gnatlink unit2.ali main.o -o exec_file
This procedure yields a binary executable called exec_file.
GNAT follows standard calling sequence conventions and will thus interface to any other language that also follows these conventions. The following Convention identifiers are recognized by GNAT:
Ada
Note that in the case of GNAT running on a platform that supports DEC Ada 83, all tasking operations must either be entirely within GNAT-compiled sections of the program, or entirely within DEC Ada 83-compiled sections of the program.
Assembler
Asm
COBOL
C
Default
External.
Stdcall
DLL
Win32.
Usually the linker of the C++ development system must be used to link mixed applications because most C++ systems will resolve elaboration issues (such as calling constructors on global class instances) transparently during the link phase. GNAT has been adapted to ease the use of a foreign linker for the last phase. Three cases can be considered:
c++. Note that this setup is not very common because it may involve recompiling the whole GCC tree from sources, which makes it harder to upgrade the compilation system for one language without destabilizing the other.
$ c++ -c file1.C $ c++ -c file2.C $ gnatmake ada_unit -largs file1.o file2.o --LINK=c++
$ gnatbind ada_unit $ gnatlink -v -v ada_unit file1.o file2.o --LINK=c++
If there is a problem due to interfering environment variables, it can be worked around by using an intermediate script. The following example shows the proper script to use when GNAT has not been installed at its default location and g++ has been installed at its default location:
$ cat ./my_script #!/bin/sh unset BINUTILS_ROOT unset GCC_ROOT c++ $* $ gnatlink -v -v ada_unit file1.o file2.o --LINK=./my_script
$ cat ./my_script #!/bin/sh CC $* `gcc -print-libgcc-file-name` $ gnatlink ada_unit file1.o file2.o --LINK=./my_script
Where CC is the name of the non-GNU C++ compiler.
The following example, provided as part of the GNAT examples, shows how to achieve procedural interfacing between Ada and C++ in both directions. The C++ class A has two methods. The first method is exported to Ada by means of an extern C wrapper function. The second method calls an Ada subprogram. On the Ada side, the C++ calls are modelled by a limited record with a layout comparable to the C++ class. The Ada subprogram, in turn, calls the C++ method. So, starting from the C++ main program, the process passes back and forth between the two languages.
Here are the compilation commands:
$ gnatmake -c simple_cpp_interface
$ c++ -c cpp_main.C
$ c++ -c ex7.C
$ gnatbind -n simple_cpp_interface
$ gnatlink simple_cpp_interface -o cpp_main --LINK=$(CPLUSPLUS) -lstdc++ ex7.o cpp_main.o
Here are the corresponding sources:
//cpp_main.C
#include "ex7.h"
extern "C" {
  void adainit (void);
  void adafinal (void);
  void method1 (A *t);
}
void method1 (A *t)
{
  t->method1 ();
}
int main ()
{
  A obj;
  adainit ();
  obj.method2 (3030);
  adafinal ();
}

//ex7.h
class Origin {
 public:
  int o_value;
};
class A : public Origin {
 public:
  void method1 (void);
  virtual void method2 (int v);
  A();
  int a_value;
};

//ex7.C
#include "ex7.h"
#include <stdio.h>
extern "C" { void ada_method2 (A *t, int v); }
void A::method1 (void)
{
  a_value = 2020;
  printf ("in A::method1, a_value = %d \n", a_value);
}
void A::method2 (int v)
{
  ada_method2 (this, v);
  printf ("in A::method2, a_value = %d \n", a_value);
}
A::A(void)
{
  a_value = 1010;
  printf ("in A::A, a_value = %d \n", a_value);
}

-- Ada sources
package body Simple_Cpp_Interface is
   procedure Ada_Method2 (This : in out A; V : Integer) is
   begin
      Method1 (This);
      This.A_Value := V;
   end Ada_Method2;
end Simple_Cpp_Interface;

package Simple_Cpp_Interface is
   type A is limited record
      O_Value : Integer;
      A_Value : Integer;
   end record;
   pragma Convention (C, A);

   procedure Method1 (This : in out A);
   pragma Import (C, Method1);

   procedure Ada_Method2 (This : in out A; V : Integer);
   pragma Export (C, Ada_Method2);
end Simple_Cpp_Interface;
GNAT offers the capability to derive Ada 95 tagged types directly from
preexisting C++ classes. See “Interfacing with C++” in the
GNAT Reference Manual. The mechanism used by GNAT for achieving
such a goal
has been made user configurable through a GNAT library unit
Interfaces.CPP. The default version of this file is adapted to
the GNU C++ compiler. Internal knowledge of the virtual
table layout used by the new C++ compiler is needed to configure
this unit properly. The interface of this unit is known by the compiler
and cannot be changed except for the value of the constants defining the
characteristics of the virtual table: CPP_DT_Prologue_Size, CPP_DT_Entry_Size,
CPP_TSD_Prologue_Size, CPP_TSD_Entry_Size. Read comments in the source
of this unit for more details.
The other major difference is the requirement for running the binder, which performs two important functions. First, it checks for consistency. In C or C++, the only defense against assembling inconsistent programs lies outside the compiler, in a makefile, for example.
This section is intended to be useful to Ada programmers who have previously used an Ada compiler implementing the traditional Ada library model, as described in the Ada 95 Language Reference Manual. If you have not used such a system, please go on to the next section.
gcc
This chapter discusses how to compile Ada programs using the
gcc
command. It also describes the set of switches
that can be used to control the behavior of the compiler.
gcc
The
gcc command accepts switches that control the
compilation process. These switches are fully described in this section.
First we briefly list all the switches, in alphabetical order, then we
describe the switches in more detail in functionally grouped sections.
gnat1, the Ada compiler) from dir instead of the default location. Only use this switch when multiple versions of the GNAT compiler are available. See the
gcc manual. See also -gnatn and -gnatN.
Pragma Assert and
pragma Debug to be activated.
gcc to redirect the generated object file and its associated ALI file. Beware of this switch with GNAT, because it may cause the object file and ALI file to have different names which in turn may confuse the binder and the linker.
Inline. This applies only to inlining within a unit. For details on control of inlining, see Subprogram Inlining Control.
gnatmake flag (see Switches for gnatmake).
gcc driver. Normally used only for debugging purposes or if you need to be sure what version of the compiler you are executing.
gcc version, not the GNAT version.
In addition to error messages, which correspond to illegalities as defined in the Ada 95 Reference Manual, the following switches are available to control the handling of warning messages:
pragma Elaborate_All statements. See the section in this guide on elaboration checking for details on when such pragmas are needed.
if statements,
while statements and
exit statements.
gcc back end. To suppress these back end warnings as well, use the switch -w in addition to -gnatws.
mprst, that is, all checking
options enabled with the exception of -gnatyo,
with an indentation level of 3. This is the standard
checking option that is used for the GNAT sources.
If you compile with the default options, GNAT:
pragma Suppress (all_checks) had been present in the source. Validity checks are also suppressed (in other words -gnatp also implies -gnatVn). Use this switch to improve the performance of the code at the expense of safety in the presence of invalid data or program bugs.
Constraint_Error as required by standard Ada semantics). These overflow checks correspond to situations in which the true value of the result of an operation may be outside the base range of the result type. The following example shows the distinction:
X1 : Integer := Integer'Last;
X2 : Integer range 1 .. 5 := 5;
X3 : Integer := Integer'Last;
X4 : Integer range 1 .. 5 := 5;
F  : Float := 2.0E+20;
...
X1 := X1 + 1;
X2 := X2 + 1;
X3 := Integer (F);
X4 := Integer (F);
Here the first addition results in a value that is outside the base range
of Integer, and hence requires an overflow check for detection of the
constraint error. Thus the first assignment to
X1 raises a
Constraint_Error exception only if -gnato is set.
The second increment operation results in a violation
of the explicit range constraint, and such range checks are always
performed (unless specifically suppressed with a pragma
Suppress
or the use of -gnatp).
The two conversions of
F both result in values that are outside
the base range of type
Integer and thus will raise
Constraint_Error exceptions only if -gnato is used.
The fact that the result of the second conversion is assigned to
variable
X4 with a restricted range is irrelevant, since the problem
is in the conversion, not the assignment.
Basically the rule is that in the default mode (-gnato not specified) the assignments
to
X1,
X2,
X3 all give results that are within the
range of the target variable, but the result is wrong in the sense that
it is too large to be represented correctly. Typically the assignment
to
X1 will result in wrap around to the largest negative number.
The conversions of
F will result in some
Integer value
and if that integer value is out of the
X4 range then the
subsequent assignment would generate an exception.
Note that the -gnato switch does not affect the code generated
for any floating-point operations; it applies only to integer
semantics).
For floating-point, GNAT has the
Machine_Overflows
attribute set to
False and the normal mode of operation is to
generate IEEE NaN and infinite values on overflow or invalid operations
(such as dividing 0.0 by 0.0).
The reason that we distinguish overflow checking from other kinds of range constraint checking is that a failure of an overflow check can generate an incorrect value, but cannot cause erroneous behavior. This is unlike the situation with a constraint check on an array subscript, where failure to perform the check can result in random memory destruction, or the range check on a case statement, where failure to perform the check can cause a wild jump.
Note again that -gnato is off by default, so overflow checking is
not performed in default mode. This means that out of the box, with the
default settings, GNAT does not do all the checks expected from the
language description in the Ada Reference Manual. If you want all constraint
checks to be performed, as described in this Manual, then you must
explicitly use the -gnato switch either on the
gnatmake or
gcc command.
The setting of these switches only controls the default setting of the
checks. You may modify them using either
Suppress (to remove
checks) or
Unsuppress (to add back suppressed checks) pragmas in
the program source.
For most operating systems,
gcc does not perform stack overflow
checking by default. This means that if the main environment task or
some other task exceeds the available stack space, then unpredictable
behavior will occur. When stack checking is activated, the compiler generates code to verify that any use of the stack (for procedure calls or for declared objects) does not exceed the available stack space.
If the space is exceeded, then a
Storage_Error exception is raised.
For declared tasks, the stack size is always controlled by the size
given in an applicable
Storage_Size pragma (or is set to
the default size if no pragma is used).
For the environment task, the stack size depends on system defaults and is unknown to the compiler. The stack may even grow dynamically on some systems, precluding the normal Ada semantics for stack overflow. In the worst case, unbounded stack usage causes unbounded stack expansion, resulting in the system running out of virtual memory.
1  ISO 8859-1 (Latin-1) letters allowed in identifiers
2  ISO 8859-2 (Latin-2) letters allowed in identifiers
3  ISO 8859-3 (Latin-3) letters allowed in identifiers
4  ISO 8859-4 (Latin-4) letters allowed in identifiers
5  ISO 8859-5 (Cyrillic) letters allowed in identifiers
9  ISO 8859-15 (Latin-9) letters allowed in identifiers
p  IBM PC letters (code page 437) allowed in identifiers
8  IBM PC letters (code page 850) allowed in identifiers
f  Full upper-half codes allowed in identifiers
n  No upper-half codes allowed in identifiers
w  Wide-character codes allowed in identifiers
See Foreign Language Representation, for full details on the
implementation of these character sets.
h  Hex encoding
u  Upper half encoding
s  Shift/JIS encoding
e  EUC encoding
8  UTF-8 encoding
b  Brackets encoding
UTF-8 encodings will be recognized. The units that are with'ed directly or indirectly will be scanned using the specified representation scheme, and so if one of the non-brackets schemes is used, it must be used consistently throughout the program. However, since brackets encoding is always recognized, it may be conveniently used in standard libraries, allowing these libraries to be used with any of the available coding schemes. If no -gnatW? parameter is present, then the default representation is Brackets encoding only.
Note that the wide character representation that is specified (explicitly or by default) for the main program also acts as the default encoding used for Wide_Text_IO files if not specifically overridden by a WCEM form parameter.
For the source file naming rules, see File Naming Rules. See also Inlining of Subprograms.
gcc when compiling multiple files indicates whether all source files have been successfully used to generate object files or not.
When -pass-exit-codes is used,
gcc exits with an extended
exit status and allows an integrated development environment to better
react to a compilation failure. These exit statuses are:
Debug unit in the compiler source file debug.adb.
The format of the output is very similar to standard Ada source, and is
easily understood by an Ada programmer. The following special syntactic
additions correspond to low level features used in the generated code that
do not have any exact analogies in pure Ada source form. The following
is a partial list of these special constructions. See the specification
of package
Sprint in file sprint.ads for a full list.
new xxx [storage_pool = yyy]
at end procedure-name;
(if expr then expr else expr)
x?y:z construction in C.
^(source
)
?(source
)
?^(source
)
#/y
#mody
#*y
#remy
free expr [storage_pool = xxx]
free statement.
freeze typename [actions]
reference itype
! (arg
,arg
,arg
)
: label
&&expr
&&expr
... &&expr
[constraint_error]
Constraint_Error exception.
'reference
!(source-expression
)
[numerator
/denominator
]
gcc -g switch will refer to the generated xxx.dg file. This allows you to do source-level debugging using the generated code, which is sometimes useful for complex code, for example to find out exactly which part of a complex construction raised an exception. This switch also suppresses generation of cross-reference information (see -gnatx).
GNAT sources for full details on the format of -gnatR3 output. If the switch is followed by an s (e.g. -gnatR2s), then the output is to a file with the name file.rep where file is the name of the corresponding source file.
gnatfind and
gnatxref. The -gnatx switch suppresses this information. This saves some space and may slightly speed up compilation, but means that these tools cannot be used.
The following switches can be used to control which of the two exception handling methods is used.
gnatmake. This option is rarely used. One case in which it may be advantageous is if you have an application where exception raising is common and the overall performance of the application is improved by favoring exception propagation.
gnatmake. This option can only be used if the zero cost approach is available for the target in use (see below).
GNAT sources may be preprocessed immediately before compilation:
gcc command line, in the order given.
ADA_INCLUDE_PATH environment variable. Construct this value exactly as the
PATH environment variable: a list of directory names separated by colons (semicolons when working with the NT version).
ADA_PRJ_INCLUDE_FILEenvironment variable.
ADA_PRJ_INCLUDE_FILE is normally set by gnatmake or by the gnat
driver when project files are used. It should not normally be set
by other means.
The following are some typical Ada compilation command line examples:
$ gcc -c xyz.adb
$ gcc -c -O2 -gnata xyz-def.adb
Assert/Debug statements enabled.
$ gcc -c -gnatc abc-def.adb
gnatbind
The form of the
gnatbind command is
$ gnatbind [switches] mainprog[.ali] [switches]
where mainprog.adb is the Ada file containing the main program
unit body. If no switches are specified, gnatbind constructs an Ada package in source form that can be compiled with GNAT.
The following switches are available with
gnatbind; details will
be presented in subsequent sections.
GNAT.Traceback and
GNAT.Traceback.Symbolic for more information. Note that on x86 ports, you must not use the -fomit-frame-pointer
gcc option.
gnatbind was invoked, and do not look for ALI files in the directory containing the ALI file named in the
gnatbind command line.
gnatmake flag (see Switches for gnatmake).
As described earlier, by default
gnatbind checks
that object files are consistent with one another and are consistent
with any source files it can locate. The following switches control binder
access to sources.
gnatmake because in this case the checking against sources has already been performed by
gnatmake in the course of compilation (i.e. before binding).
The following switches provide additional control over the elaboration order. For full details see Elaboration Order Handling in GNAT. Where needed, Elaborate_All pragmas are implicitly inserted.
These implicit pragmas are still respected by the binder in
-p mode, so a
safe elaboration order is assured.
The following switches allow additional control over the output generated by the binder.
gnatbind option.
gnatbind option.
gnatbind.
pragma Restrictions that could be applied to the current unit. This is useful for code audit purposes, and also may be used to improve code generation in some cases.
It is possible to have an Ada program which does not have a main subprogram. This program will call the elaboration routines of all the packages, then the finalization routines.
The following switch is used to bind programs organized in this manner:
The package
Ada.Command_Line provides access to the command-line
arguments and program name. In order for this interface to operate
correctly, the two variables
int gnat_argc; char **gnat_argv;
are declared in one of the GNAT library routines. These variables must
be set from the actual
argc and
argv values passed to the
main program. With no -n present,
gnatbind
generates the C main program to automatically set these variables.
If the -n switch is used, there is no automatic way to
set these variables. If they are not set, the procedures in
Ada.Command_Line will not be available, and any attempt to use
them will raise
Constraint_Error. If command line access is
required, your main program must set
gnat_argc and
gnat_argv from the
argc and
argv values passed to
it.

ADA_OBJECTS_PATH environment variable. Construct this value exactly as the
PATHenvironment variable: a list of directory names separated by colons (semicolons when working with the NT version of GNAT).
ADA_PRJ_OBJECTS_FILEenvironment variable.
ADA_PRJ_OBJECTS_FILE is normally set by gnatmake or by the gnat
driver when project files are used. It should not normally be set
by other means. In addition to simplifying access to the RTL, a major use of search paths is in compiling sources from multiple directories. This can make development environments much more flexible.
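As a sketch of how these colon-separated search-path variables are constructed from the shell (the directory names here are purely illustrative):

```shell
# Illustrative only: extend the Ada source and object search paths
# with two hypothetical project directories.
export ADA_INCLUDE_PATH="/projA/src:/projB/src"
export ADA_OBJECTS_PATH="/projA/obj:/projB/obj"
# The compiler and binder will search these directories in order.
echo "$ADA_INCLUDE_PATH"
```

On Windows, semicolons separate the entries instead of colons.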
gnatlink.
gnatlink
The following switches are available with the
gnatlink utility:
gnatlink that the binder has generated C code rather than Ada code.
gnatlink will generate a separate file for the linker if the list of object files is too long. The -f switch forces this file to be generated even if the limit is not exceeded. This is useful in some cases to deal with special situations where the command line length is exceeded. See the
gcc manual page for further details. You would normally use the -b or -V switch instead.
gcc'. You need to use quotes around compiler_name if
compiler_name contains spaces or other separator characters. As an example, --GCC="foo -x -y" will instruct
gnatlink to use
foo -x -y as your compiler. Note that switch -c is always inserted after your command name. Thus in the above example the compiler command that will be used by
gnatlink will be foo -c -x -y.
Under Windows systems, it is possible to specify the program stack size.
gnatlink.
gnatmake.
gnatmake
The.
gnatmake
You may specify any of the following switches to
gnatmake:
gcc'. You need to use quotes around compiler_name if
compiler_name contains spaces or other separator characters. As an example, --GCC="foo -x -y" will instruct
gnatmake to use
foo -x -y as your compiler. Note that switch -c is always inserted after your command name. Thus in the above example the compiler command that will be used by
gnatmake will be foo -c -x -y.

The default is `gnatbind'. You need to use quotes around binder_name if binder_name contains spaces or other separator characters. As an example, --GNATBIND="bar -x -y" will instruct
gnatmake to use
bar -x -y as your binder. Binder switches that are normally appended by
gnatmake to `
gnatbind' are now appended to the end of
bar -x -y.
gnatlink'. You need to use quotes around linker_name if linker_name contains spaces or other separator characters. As an example, --GNATLINK="lan -x -y" will instruct
gnatmake to use
lan -x -y as your linker. Linker switches that are normally appended by
gnatmake to `
gnatlink' are now appended to the end of
lan -x -y.
gnatmake does not check these files, because the assumption is that the GNAT internal files are properly up to date as part of the GNAT installation itself. The switch -a is also useful in conjunction with -f if you need to recompile an entire application, including run-time files, using special configuration pragmas, such as a
Normalize_Scalars pragma.
By default
gnatmake -a compiles all GNAT
internal files with
gcc -c -gnatpg rather than
gcc -c.
gnatmake will attempt binding and linking unless all objects are up to date and the executable is more recent than the objects.
gnatmake is invoked with this switch, it will create a temporary mapping file, initially populated by the project manager, if -P is used, otherwise initially empty. Each invocation of the compiler will add the newly accessed sources to the mapping file. This will improve the source search during the next invocation of the compiler.
This switch cannot be used when using a project file.
gnatmake will automatically maintain and update this organization. If no ALI files are found on the Ada object path, gnatmake will be forced to recompile the corresponding source file, and it will put the resulting object and ALI files in the directory where it found the dummy file.
gnatmake will give you the full ordered list of failing compiles at the end). If this is problematic, rerun the make process with n set to 1 to get a clean list of messages.
gnatmake terminates.
If
gnatmake is invoked with several file_names and with this
switch, if there are compilation errors when building an executable,
gnatmake will not attempt to build the following executables.
gnatmake ignores). Note that the debugging information may be out of date with respect to the sources if the -m switch causes a compilation to be skipped, so the use of this switch represents a trade-off between compilation time and accurate debugging information.
gnatmake are displayed.
This switch is recommended when Integrated Preprocessing is used.
gnatmake decides are necessary.
external(name) when parsing the project file. See Switches Related to Project Files.
gcc switches
gcc (e.g. -O, -gnato, etc.)
Source and library search path switches:
gnatmake to skip compilation units whose .ALI files have been located in directory dir. This allows you to have missing bodies for the units in dir and to ignore out of date bodies for the same units.
gnatmake was invoked.
The selected path is handled like a normal RTS path.
gcc. They will be passed on to all compile steps performed by
gnatmake.
gnatbind. They will be passed on to all bind steps performed by
gnatmake.
gnatlink. They will be passed on to all link steps performed by
gnatmake.
gnatmake, regardless of any previous occurrence of -cargs, -bargs or -largs.
How gnatmake Works

A source file is considered up to date if its time stamp predates that of the object file.
gnatmake Usage
gnatmake hello.adb
This will compile the body of Hello (in file hello.adb) and bind and link the resulting object files to generate an executable file hello.
gnatmake main1 main2 main3
This will compile main1.adb (containing unit
Main1), main2.adb (containing unit
Main2) and main3.adb (containing unit
Main3) and bind and link the resulting object files to generate three executable files main1, main2 and main3.
gnatmake -q Main_Unit -cargs -O2 -bargs -l
This will compile Main_Unit (from file main_unit.adb). All compilations will be done with optimization level 2 and the order of elaboration will be listed by the binder.
gnatmake will operate in quiet mode, not displaying commands it is executing.
By default, GNAT generates all run-time checks, except arithmetic overflow checking for integer operations. See Inlining of Subprograms.
Although it is possible to do a reasonable amount of debugging at non-zero optimization levels, the higher the level the more likely that source-level constructs will have been eliminated by optimization. For example, if a loop is strength-reduced, the loop control variable may be completely eliminated and thus cannot be displayed in the debugger. This can only happen at -O2 or -O3. Explicit temporary variables that you code might be eliminated at level -O1 or higher.
The use of the -g switch, which is needed for source-level debugging, affects the size of the program executable on disk, and indeed the debugging information can be quite large. However, it has no effect on the generated code (and thus does not degrade performance).
Since the compiler generates debugging tables for a compilation unit before it performs optimizations, the optimizing transformations may invalidate some of the debugging data. You therefore need to anticipate certain anomalous situations that may arise while debugging optimized code. These are the most common cases:
stepor
nextcommands show the PC bouncing back and forth in the code. This may result from any of the following optimizations:
goto, a
return, or a
breakin a C
switchstatement.
In general, when an unexpected value appears for a local variable or parameter you should first ascertain if that value was actually computed by your program, as opposed to being incorrectly reported by the debugger. Record fields or array elements in an object designated by an access value are generally less of a problem, once you have ascertained that the access value is sensible. Typically, this means checking variables in the preceding code and in the calling subprogram to verify that the value observed is explainable from other values (one must apply the procedure recursively to those other values); or re-running the code and stopping a little earlier (perhaps before the call) and stepping to better see how the variable obtained the value in question; or continuing to step from the point of the strange value to see if code motion had simply moved the variable's assignments later.
In light of such anomalies, a recommended technique is to use -O0 early in the software development cycle, when extensive debugging capabilities are most needed, and then move to -O1 and later -O2 as the debugger becomes less critical. Whether to use the -g switch in the release version is a release management issue. Note that if you use -g you can then use the strip program on the resulting executable, which removes both debugging information and global symbols.
gnatelim
This section describes gnatelim, a tool which detects unused subprograms and helps the compiler to create a smaller executable for your program.
gnatelim
gnatelim has the following command-line interface:
$ gnatelim [options] name
name should be a name of a source file that contains the main subprogram
of a program (partition).
gnatelim has the following switches:
gnatelim outputs to the standard error stream the number of program units left to be processed. This option turns this trace off.
gnatelim version information is printed as Ada comments to the standard output stream. Also, in addition to the number of program units left,
gnatelim will output the name of the current unit being processed.
gnatmake.
gnatelim not to look for sources in the current directory.
gnatelim to use a specific
gcc compiler instead of one available on the path.
gnatelim to use a specific
gnatmake instead of one available on the path.
Gnatelim unit in the compiler source file gnatelim.ads.
gnatelim sends its output to the standard output stream, and all the
tracing and debug information is sent to the standard error stream.
In order to produce a proper GNAT configuration file
gnat.adc, redirection must be used:
$ gnatelim main_prog.adb > gnat.adc
or
$ gnatelim main_prog.adb >> gnat.adc
in order to append the
gnatelim output to the existing contents of
gnat.adc.
gnatchop
This chapter discusses how to handle files with multiple units by using
the
gnatchop utility. This utility is also useful in renaming
files to meet the standard GNAT default file naming conventions.
gnatchop
The
gnatchop command has the form:
$ gnatchop switches file name [file name file name ...] [directory]
The only required argument is the file name of the file to be chopped. There are no restrictions on the form of this file name. The file itself contains one or more Ada units, in normal GNAT format, concatenated together. As shown, more than one file may be presented to be chopped.
When run in default mode,
gnatchop generates one output file in
the current directory for each unit in each of the files.
directory, if specified, gives the name of the directory to which the output files will be written. If it is not specified, all files are written to the current directory.
For example, given a file called hellofiles containing
the command
$ gnatchop hellofiles
When gnatchop is invoked on a file that is empty or that contains only empty lines and/or comments, gnatchop will not fail, but will not produce any new sources.
For example, given a file called toto.txt containing
the command
$ gnatchop toto.txt
will not produce any new file and will result in the following warnings:
toto.txt:1:01: warning: empty file, contains no compilation units
no compilation units found
no source files written

The Source_Reference pragma is used at the other end to reconstitute the original file names.
gnatchop file1 file2 file3 direc
In Ada 95, configuration pragmas include those pragmas described as
such in the Ada 95 Reference Manual, as well as
implementation-dependent pragmas that are configuration pragmas. See the
individual descriptions of pragmas in the GNAT Reference Manual for
details on these additional GNAT-specific configuration pragmas. Most
notably, the pragma
Source_File_Name, which allows
specifying non-default names for source files, is a configuration
pragma. The following is a complete list of configuration pragmas
recognized by
GNAT:
Ada_83 Ada_95 C_Pass_By_Copy Component_Alignment Discard_Names Elaboration_Checks Eliminate Extend_System Extensions_Allowed External_Name_Casing Float_Representation Initialize_Scalars License Locking_Policy Long_Float Normalize_Scalars Polling Propagate_Exceptions Queuing_Policy Ravenscar Restricted_Run_Time Restrictions Reviewable Source_File_Name Style_Checks Suppress Task_Dispatching_Policy Universal_Data Unsuppress Use_VADS_Size Warnings Validity_Checks
Configuration pragmas may either appear at the start of a compilation unit, in which case they apply only to that unit, or they may apply to all compilations performed in a given compilation environment.
GNAT also provides the
gnatchop utility to provide an automatic
way to handle configuration pragmas following the semantics for
compilations (that is, files with multiple units), described in the RM.
See Operating gnatchop in Compilation Mode for details.
However, for most purposes, it will be more convenient to edit the
gnat.adc file that contains configuration pragmas directly,
as described in the following section. You may specify one additional configuration pragmas file; however, only the last one on the command line will be taken into account.
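For instance, a gnat.adc file placed in the current directory might look like the following (the unit and file names are hypothetical; the pragmas shown appear in the list above):

```ada
--  gnat.adc: configuration pragmas applied to all
--  compilations performed in this directory
pragma Ada_95;
pragma Source_File_Name
  (Unit_Name => My_Main, Body_File_Name => "mymain.ada");
```

Each pragma applies to every compilation in the environment, exactly as if it appeared at the start of each source file.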
If you are using project files, a separate mechanism is provided using project attributes; see Specifying Configuration Pragmas for more details.
gnatname
This chapter describes GNAT's Project Manager, a facility that allows you to manage complex builds involving a number of source files, directories, and compilation options for different system configurations. In particular, project files allow you to specify:
gnatls,
gnatxref,
gnatfind); you can apply these settings either globally or to individual compilation units.
Project files are written in a syntax close to that of Ada, using familiar notions such as packages, context clauses, declarations, default values, assignments, and inheritance. Finally, project files can be built hierarchically from other project files, simplifying complex system integration and project reuse.
A project is a specific set of values for various compilation properties. The settings for a given project are described by means of a project file, which is a text file written in an Ada-like syntax. Property values in project files are either strings or lists of strings. Properties that are not explicitly set receive default values. A project file may interrogate the values of external variables (user-defined command-line switches or environment variables), and it may specify property settings conditionally, based on the value of such variables.
In simple cases, a project's source files depend only on other source files
in the same project, or on the predefined libraries. (Dependence is
used in
the Ada technical sense; as in one Ada unit
withing another.) However,
the Project Manager also allows more sophisticated arrangements,
where the source files in one project depend on source files in other
projects:
More generally, the Project Manager lets you structure large development efforts into hierarchical subsystems, where build decisions are delegated to the subsystem level, and thus different compilation environments (switch settings) used for different subsystems.
The Project Manager is invoked through the -Pprojectfile switch to gnatmake or to the gnat front driver. There may be zero, one or more spaces between -P and projectfile. If you want to define (on the command line) an external variable that is queried by the project file, you must use the -Xvbl=value switch. The Project Manager parses and interprets the project file, and drives the invoked tool based on the project settings.
The Project Manager supports a wide range of development strategies, for systems of all sizes. Here are some typical practices that are easily handled:
The destination of an executable can be controlled inside a project file
using the -o
switch.
In the absence of such a switch either inside
the project file or on the command line, any executable files generated by
gnatmake are placed in the directory
Exec_Dir specified
in the project file. If no
Exec_Dir is specified, they will be placed
in the object directory of the project.
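A minimal sketch of these attributes (the project and directory names are illustrative, not taken from the examples above):

```
project Tools is
   for Source_Dirs use (".");
   for Object_Dir use "obj";   --  objects and ALI files go here
   for Exec_Dir use "bin";     --  executables go here
   for Main use ("proc");
end Tools;
```

With no Exec_Dir, the executable would land in obj alongside the object files.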
You can use project files to achieve some of the effects of a source versioning system (for example, defining separate projects for the different sets of sources that comprise different releases) but the Project Manager is independent of any source configuration management tools that might be used by the developers.
The next section introduces the main features of GNAT's project facility through a sequence of examples; subsequent sections will present the syntax and semantics in more detail. A more formal description of the project facility appears in the GNAT Reference Manual.
This section illustrates some of the typical uses of project files and explains their basic structure and behavior.
Suppose that the Ada source files pack.ads, pack.adb, and
proc.adb are in the /common directory. The file
proc.adb contains an Ada main subprogram
Proc that
withs
package
Pack. We want to compile these source files under two sets
of switches:
The GNAT project files shown below, respectively debug.gpr and release.gpr in the /common directory, achieve these effects.
Schematically:
/common
   debug.gpr
   release.gpr
   pack.ads
   pack.adb
   proc.adb

/common/debug
   proc.ali, proc.o
   pack.ali, pack.o

/common/release
   proc.ali, proc.o
   pack.ali, pack.o
Here are the corresponding project files:
project Debug is
   for Object_Dir use "debug";
   for Main use ("proc");

   package Builder is
      for Default_Switches ("Ada") use ("-g");
      for Executable ("proc.adb") use "proc1";
   end Builder;

   package Compiler is
      for Default_Switches ("Ada")
         use ("-fstack-check", "-gnata", "-gnato", "-gnatE");
   end Compiler;
end Debug;
project Release is
   for Object_Dir use "release";
   for Exec_Dir use ".";
   for Main use ("proc");

   package Compiler is
      for Default_Switches ("Ada") use ("-O2");
   end Compiler;
end Release;
The name of the project defined by debug.gpr is
"Debug" (case
insensitive), and analogously the project defined by release.gpr is
"Release". For consistency the file should have the same name as the
project, and the project file's extension should be
"gpr". These
conventions are not required, but a warning is issued if they are not followed.
If the current directory is /temp, then the command
gnatmake -P/common/debug.gpr
generates object and ALI files in /common/debug,
as well as the
proc1 executable,
using the switch settings defined in the project file.
Likewise, the command
gnatmake -P/common/release.gpr
generates object and ALI files in /common/release,
and the
proc
executable in /common,
using the switch settings from the project file.
A GNAT tool that is integrated with the Project Manager is modeled by a
corresponding package in the project file. In the example above,
The
Debug project defines the packages
Builder
(for gnatmake) and
Compiler;
the
Release project defines only the
Compiler package.
The Ada-like package syntax is not to be taken literally. Although packages in project files bear a surface resemblance to packages in Ada source code, the notation is simply a way to convey a grouping of properties for a named entity. Indeed, the package names permitted in project files are restricted to a predefined set, corresponding to the project-aware tools, and the contents of packages are limited to a small set of constructs. The packages in the example above contain attribute definitions.
One of the specifiable properties of a project is a list of files that contain
main subprograms. This property is captured in the
Main attribute,
whose value is a list of strings. If a project defines the
Main
attribute, it is not necessary to identify the main subprogram(s) when
invoking gnatmake (see gnatmake and Project Files).
By default, the executable file name corresponding to a main source is
deduced from the main source file name. When no attribute
Executable applies, the value of attribute
Executable_Suffix, if defined, replaces the platform-specific executable suffix.
Attributes
Executable and
Executable_Suffix are the only ways to
specify a non-default executable file name when several mains are built at once
in a single gnatmake command.
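A hedged sketch of the two attributes in package Builder (the names chosen are illustrative):

```
package Builder is
   --  Name the executable built from proc.adb "server"
   --  rather than the default "proc"
   for Executable ("proc.adb") use "server";
   --  Give every other executable a .exe suffix
   for Executable_Suffix use ".exe";
end Builder;
```

Executable takes precedence for the mains it names; Executable_Suffix covers the rest.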
Since the project files above do not specify any source file naming
conventions, the GNAT defaults are used. The mechanism for defining source
file naming conventions – a package named
Naming –
is described below (see Naming Schemes).
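For reference, the GNAT default conventions could themselves be written out explicitly in such a package (a sketch; these defaults apply even when the package is absent):

```
package Naming is
   for Casing use "lowercase";
   for Dot_Replacement use "-";
   for Spec_Suffix ("Ada") use ".ads";
   for Body_Suffix ("Ada") use ".adb";
end Naming;
```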
Since the project files do not specify a
Languages attribute, by
default the GNAT tools assume that the language of the project file is Ada.
More generally, a project can comprise source files
in Ada, C, and/or other languages.
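For example, a project mixing Ada and C sources would declare this explicitly (a sketch; the project name and directory are hypothetical):

```
project Mixed is
   for Languages use ("Ada", "C");
   for Source_Dirs use ("src");
end Mixed;
```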
Instead of supplying different project files for debug and release, we can
define a single project file that queries an external variable (set either
on the command line or via an environment variable) in order to
conditionally define the appropriate settings. Again, assume that the
source files pack.ads, pack.adb, and proc.adb are
located in directory /common. The following project file,
build.gpr, queries the external variable named
STYLE and
defines an object directory and switch settings based on whether
the value is
"deb" (debug) or
"rel" (release), and where
the default is
"deb".
project Build is
   for Main use ("proc");

   type Style_Type is ("deb", "rel");
   Style : Style_Type := external ("STYLE", "deb");

   case Style is
      when "deb" =>
         for Object_Dir use "debug";
      when "rel" =>
         for Object_Dir use "release";
         for Exec_Dir use ".";
   end case;

   package Builder is
      case Style is
         when "deb" =>
            for Default_Switches ("Ada") use ("-g");
            for Executable ("proc") use "proc1";
      end case;
   end Builder;

   package Compiler is
      case Style is
         when "deb" =>
            for Default_Switches ("Ada")
               use ("-gnata", "-gnato", "-gnatE");
         when "rel" =>
            for Default_Switches ("Ada") use ("-O2");
      end case;
   end Compiler;
end Build;
Style_Type is an example of a string type, which is the project
file analog of an Ada enumeration type but whose components are string literals
rather than identifiers.
Style is declared as a variable of this type.
The form
external("STYLE", "deb") is known as an
external reference; its first argument is the name of an
external variable, and the second argument is a default value to be
used if the external variable doesn't exist. You can define an external
variable on the command line via the -X switch,
or you can use an environment variable
as an external variable.
Each
case construct is expanded by the Project Manager based on the
value of
Style. Thus the command
gnatmake -P/common/build.gpr -XSTYLE=deb
is equivalent to the gnatmake invocation using the project file debug.gpr in the earlier example. So is the command
gnatmake -P/common/build.gpr
since
"deb" is the default for
STYLE.
Analogously,
gnatmake -P/common/build.gpr -XSTYLE=rel
is equivalent to the gnatmake invocation using the project file release.gpr in the earlier example.
ADA_PROJECT_PATH is the same as the syntax of
ADA_INCLUDE_PATH and
ADA_OBJECTS_PATH: a list of directory names separated by colons (semicolons on Windows).
Thus, we can define ADA_PROJECT_PATH to include additional directories in which project files will be searched for.
In large software systems it is common to have multiple implementations of a common interface; in Ada terms, multiple versions of a package body for the same specification. GNAT's project facility supports this through project extension: a project can extend another project, inheriting its sources and optionally overriding some of them. This facility is the project analog of a type extension in Object-Oriented Programming. Project hierarchies are permitted (a child project may be the parent of yet another project), and a project that inherits one project can also import other projects.
As an example, suppose that directory /seq contains the project file seq_proj.gpr as well as the source files pack.ads, pack.adb, and proc.adb:
/seq
   pack.ads
   pack.adb
   proc.adb
   seq_proj.gpr
Note that the project file can simply be empty (that is, no attribute or package is defined):
project Seq_Proj is
end Seq_Proj;
implying that its source files are all the Ada source files in the project directory.
Suppose we want to supply an alternate version of pack.adb, in
directory /tasking, but use the existing versions of
pack.ads and proc.adb. We can define a project
Tasking_Proj that inherits
Seq_Proj:
/tasking
   pack.adb
   tasking_proj.gpr

project Tasking_Proj extends "/seq/seq_proj" is
end Tasking_Proj;
The version of pack.adb used in a build depends on which project file is specified.
Note that we could have obtained the desired behavior using project import
rather than project inheritance; a
base project would contain the
sources for pack.ads and proc.adb, a sequential project would
import
base and add pack.adb, and likewise a tasking project
would import
base and add a different version of pack.adb. The
choice depends on whether other sources in the original project need to be
overridden. If they do, then project extension is necessary, otherwise,
importing is sufficient.
In a project file that extends another project file, it is possible to indicate that an inherited source is not part of the sources of the extending project. This is necessary sometimes when a package spec has been overridden and no longer requires a body: in this case, it is necessary to indicate that the inherited body is not part of the sources of the project, otherwise there will be a compilation error when compiling the spec.
For that purpose, the attribute
Locally_Removed_Files is used.
Its value is a string list: a list of file names.
project B extends "a" is
   for Source_Files use ("pkg.ads");
   --  New spec of Pkg does not need a completion
   for Locally_Removed_Files use ("pkg.adb");
end B;
Attribute
Locally_Removed_Files may also be used to check if a source
is still needed: if it is possible to build using
gnatmake when such
a source is put in attribute
Locally_Removed_Files of a project P, then
it is possible to remove the source completely from a system that includes
project P.
This section describes the structure of project files.
A project may be an independent project, entirely defined by a single project file. Any Ada source file in an independent project depends only on the predefined library and other Ada source files in the same project.
A project may also depend on other projects, in either or both of the following ways:
The dependence relation is a directed acyclic graph (the subgraph reflecting the “extends” relation is a tree).
A project's immediate sources are the source files directly defined by that project, either implicitly by residing in the project file's directory, or explicitly through any of the source-related attributes described below. More generally, a project proj's sources are the immediate sources of proj together with the immediate sources (unless overridden) of any project on which proj depends (either directly or indirectly).
As seen in the earlier examples, project files have an Ada-like syntax.
Any name in a project file, such as the project name or a variable name, has the same syntax as an Ada identifier.
The reserved words of project files are the Ada reserved words plus
extends,
external, and
project. Note that the only Ada
reserved words currently used in project file syntax are:
case
end
for
is
others
package
renames
type
use
when
with
Comments in project files have the same syntax as in Ada: two consecutive hyphens through the end of the line.
A project file may contain packages. The name of a package must be one of the identifiers from the following list. A package with a given name may only appear once in a project file. Package names are case insensitive. The following package names are legal:
Naming
Builder
Compiler
Binder
Linker
Finder
Cross_Reference
Eliminate
gnatls
gnatstub
IDE.
An expression is either a string expression or a string list expression.
A string expression is either a simple string expression or a compound string expression.
A simple string expression is one of the following:
"comm/my_proj.gpr"
A compound string expression is a concatenation of string expressions,
using the operator
"&"
Path & "/" & File_Name & ".ads"
A string list expression is either a simple string list expression or a compound string list expression.
A simple string list expression is one of the following:
File_Names := (File_Name, "gnat.adc", File_Name & ".orig");
Empty_List := ();
A compound string list expression is the concatenation (using
"&") of a simple string list expression and an expression. Note that
each term in a compound string list expression, except the first, may be
either a string expression or a string list expression.
File_Name_List := () & File_Name;
--  One string in this list
Extended_File_Name_List := File_Name_List & (File_Name & ".orig");
--  Two strings
Big_List := File_Name_List & Extended_File_Name_List;
--  Concatenation of two string lists: three strings
Illegal_List := "gnat.adc" & Extended_File_Name_List;
--  Illegal: must start with a string list
A string type declaration introduces a discrete set of string literals. If a string variable is declared to have this type, its value is restricted to the given set of literals; these are also the only values that may be tested in a corresponding case construction (see Case Constructions).
The string literals in the list are case sensitive and must all be different. They may include any graphic characters allowed in Ada, including spaces.
A string type may only be declared at the project level, not inside a package.
A string type may be referenced by its name if it has been declared in the same project file, or by an expanded name whose prefix is the name of the project in which it is declared.
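A string type declaration and a typed variable of that type might be sketched as follows (names and values hypothetical):

```ada
project Build is
   type OS_Type is ("GNU/Linux", "Windows_NT");
   Target : OS_Type := "GNU/Linux";   --  only the listed literals are legal
end Build;
```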
A project (and its packages) may have attributes that define the project's properties. Some attributes have values that are strings; others have values that are string lists.
There are two categories of attributes: simple attributes and associative arrays (see Associative Array Attributes).
Legal project attribute names, and attribute names for each legal package, are listed below. Attribute names are case insensitive.
The following attributes are defined on projects (all are simple attributes):
The following attributes are defined for package
Naming
(see Naming Schemes):
The following attributes are defined for packages
Builder,
Compiler,
Binder,
Linker,
Cross_Reference, and
Finder
(see Switches and Project Files).
In addition, package
Compiler has a single string attribute
Local_Configuration_Pragmas and package
Builder has a single
string attribute
Global_Configuration_Pragmas.
Each simple attribute has a default value: the empty string (for string-valued attributes) and the empty list (for string list-valued attributes).
An attribute declaration defines a new value for an attribute.
Examples of simple attribute declarations:
for Object_Dir use "objects";
for Source_Dirs use ("units", "test/drivers");
The syntax of a simple attribute declaration is similar to that of an attribute definition clause in Ada.
Attribute references may appear in expressions.
The general form for such a reference is
<entity>'<attribute>:
Associative array attributes are functions. Associative
array attribute references must have an argument that is a string literal.
Examples are:
project'Object_Dir
Naming'Dot_Replacement
Imported_Project'Source_Dirs
Imported_Project.Naming'Casing
Builder'Default_Switches("Ada")
The prefix of an attribute reference may be project (denoting the current project), the name of a package of the current project, the name of an imported or extended project, or a project name followed by a package name, as in the examples above.
Some attributes are defined as associative arrays. An associative array may be regarded as a function that takes a string as a parameter and delivers a string or string list value as its result.
Here are some examples of single associative array attribute associations:
for Body ("main") use "Main.ada";
for Switches ("main.ada") use ("-v", "-gnatv");
for Switches ("main.ada") use Builder'Switches ("main.ada") & "-g";
Like untyped variables and simple attributes, associative array attributes may be declared several times. Each declaration supplies a new value for the attribute, and replaces the previous setting.
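For example, a second declaration for the same index replaces the earlier one (switch values hypothetical):

```ada
package Compiler is
   for Switches ("pkg.adb") use ("-O2");
   --  This declaration replaces the previous value for "pkg.adb"
   for Switches ("pkg.adb") use ("-O2", "-gnatwa");
end Compiler;
```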
An associative array attribute may also be declared as a whole, in a full associative array declaration, taking its value from the same attribute in an imported or extended project.
package Builder is
   for Default_Switches use Default.Builder'Default_Switches;
end Builder;
In this example,
Default must be either a project imported by the
current project, or the project that the current project extends. If the
attribute is in a package (in this case, in package
Builder), the same
package needs to be specified.
A full associative array declaration replaces any other declaration for the attribute, including other full associative array declarations. Single associative array associations may be declared after a full associative array declaration, modifying the value for a single association of the attribute.

A case construction selects among alternatives according to the value of a typed string variable. Each alternative may contain nested case constructions and attribute declarations. String type declarations, variable declarations and package declarations are not allowed.
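A case construction containing only attribute declarations, as permitted by these rules, might be sketched as follows (type, variable and directory names hypothetical):

```ada
type Mode_Type is ("Debug", "Release");
Mode : Mode_Type := "Debug";

case Mode is
   when "Debug" =>
      for Object_Dir use "obj/debug";
   when "Release" =>
      for Object_Dir use "obj/release";
end case;
```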
The value of the case variable is often given by an external reference (see External References in Project Files).
Each project has exactly one object directory and one or more source directories. The source directories must contain at least one source file, unless the project file explicitly specifies that no source files are present (see Source File Names).
A project file may contain references to external variables; such references are called external references.
An external variable is either defined as part of the environment (an environment variable in Unix, for example) or else specified on the command line via the -Xvbl=value switch. If both are present, the command line value is used.
The value of an external reference is obtained by means of the built-in
function
external, which returns a string value.
This function has two forms:
external (external_variable_name)
external (external_variable_name, default_value)
Each parameter must be a string literal. For example:
external ("USER")
external ("OS", "GNU/Linux")
In the form with one parameter, the function returns the value of the external variable given as parameter. If this name is not present in the environment, the function returns an empty string.
In the form with two string parameters, the second argument is
the value returned when the variable given as the first argument is not
present in the environment. In the example above, if
"OS" is not
the name of an environment variable and is not passed on
the command line, then the returned value is
"GNU/Linux".
An external reference may be part of a string expression or of a string list expression, and can therefore appear in a variable declaration or an attribute declaration.
type Mode_Type is ("Debug", "Release");
Mode : Mode_Type := external ("MODE");
case Mode is
   when "Debug" =>
      ...
end case;
Sometimes an Ada software system is ported from a foreign compilation
environment to GNAT, and the file names do not use the default GNAT
conventions. Instead of changing all the file names (which for a variety
of reasons might not be possible), you can define the relevant file
naming scheme in the
Naming package in your project file.
Note that the use of pragmas described in Alternative File Naming Schemes by means of a configuration pragmas file is not supported when using project files. You must use the features described in this section. You can, however, specify other configuration pragmas (see Specifying Configuration Pragmas).
For example, the following package models the Apex file naming rules:
package Naming is
   for Casing use "lowercase";
   for Dot_Replacement use ".";
   for Spec_Suffix ("Ada") use ".1.ada";
   for Body_Suffix ("Ada") use ".2.ada";
end Naming;
You can define the following attributes in package
Naming:
The value of Casing must be one of "lowercase", "uppercase" or "mixedcase"; these strings are case insensitive. If Casing is not specified, then the default is "lowercase".
The value of Dot_Replacement must not contain the character '.' except if the entire string is ".". If Dot_Replacement is not specified, then the default is "-".
If Spec_Suffix ("Ada") is not specified, then the default is ".ads".
If Body_Suffix ("Ada") is not specified, then the default is ".adb". The value of Body_Suffix ("Ada") must be different from the value of Spec_Suffix ("Ada").
If Separate_Suffix ("Ada") is not specified, then it defaults to the same value as Body_Suffix ("Ada").
Use Spec to define the source file name for an individual Ada compilation unit's spec. The array index must be a string literal that identifies the Ada unit (case insensitive). The value of this attribute must be a string that identifies the file that contains this unit's spec (case sensitive or insensitive depending on the operating system).
for Spec ("MyPack.MyChild") use "mypack.mychild.spec";
Use Body to define the source file name for an individual Ada compilation unit's body (possibly a subunit). The array index must be a string literal that identifies the Ada unit (case insensitive). The value of this attribute must be a string that identifies the file that contains this unit's body or subunit (case sensitive or insensitive depending on the operating system).
for Body ("MyPack.MyChild") use "mypack.mychild.body";
Library projects are projects whose object code is placed in a library. (Note that this facility is not yet supported on all platforms)
To create a library project, you need to define in its project file
two project-level attributes:
Library_Name and
Library_Dir.
Additionally, you may define the library-related attributes
Library_Kind,
Library_Version,
Library_Interface,
Library_Auto_Init,
Library_Options and
Library_GCC.
The Library_Name attribute has a string value. There is no restriction on the name of a library; it is the responsibility of the developer to choose a name that will be accepted by the platform.

The Library_Dir attribute designates the directory where the library will reside. This directory needs to be different from the project's object directory. It also needs to be writable.

The Library_Kind attribute may have the value "static", "dynamic" or "relocatable". If this attribute is not specified, the library is a static library, that is, an archive of object files that can potentially be linked into a static executable. Otherwise, the library may be dynamic or relocatable, that is, a library that is loaded only at the start of execution. Depending on the operating system, there may or may not be a distinction between dynamic and relocatable libraries. For Unix and VMS there is no such distinction.
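Putting these attributes together, a minimal library project might be sketched as follows (all names hypothetical):

```ada
project My_Lib is
   for Source_Dirs use ("src");
   for Object_Dir use "obj";
   for Library_Name use "mylib";
   for Library_Dir use "lib";
   for Library_Kind use "dynamic";
end My_Lib;
```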
When a library is built or rebuilt, an attempt is made to delete all files in the library directory. All ALI files will also be copied from the object directory to the library directory. To build executables, gnatmake will use the library rather than the individual object files. The copies of the ALI files are made read-only.
A Stand-alone Library is a library that contains the necessary code to elaborate the Ada units that are included in the library. A Stand-alone Library is suitable to be used in an executable when the main is not in Ada. However, Stand-alone Libraries may also be used with an Ada main subprogram.
A Stand-alone Library Project is a Library Project where the library is a Stand-alone Library.
To be a Stand-alone Library Project, in addition to the two attributes
that make a project a Library Project (
Library_Name and
Library_Dir, see Library Projects), the attribute
Library_Interface must be defined.
for Library_Dir use "lib_dir";
for Library_Name use "dummy";
for Library_Interface use ("int1", "int1.child");
Attribute Library_Interface has a non-empty string list value, each string in the list designating a unit contained in an immediate source of the project file.
When a Stand-alone Library is built, first the binder is invoked to build a package whose name depends on the library name (b~dummy.ads in the example above). This binder-generated package includes initialization and finalization procedures whose names depend on the library name (dummyinit and dummyfinal in the example above). The object corresponding to this package is included in the library.
A dynamic or relocatable Stand-alone Library is automatically initialized
if automatic initialization of Stand-alone Libraries is supported on the
platform and if attribute
Library_Auto_Init is not specified or
is specified with the value "true". A static Stand-alone Library is never
automatically initialized.
Single string attribute
Library_Auto_Init may be specified with only
two possible values: "false" or "true" (case-insensitive). Specifying
"false" for attribute
Library_Auto_Init will prevent automatic
initialization of dynamic or relocatable libraries.
For a Stand-Alone Library, only the ALI files of the Interface Units
(those that are listed in attribute
Library_Interface) are copied to
the Library Directory. As a consequence, only the Interface Units may be
imported from Ada units outside of the library. If other units are imported,
the binding phase will fail.
When a Stand-Alone Library is bound, the switches that are specified in
the attribute
Default_Switches ("Ada") in package
Binder are
used in the call to gnatbind.
The string list attribute Library_Options may be used to specify additional switches for the call to gcc that links the library.
The attribute Library_Src_Dir may be specified for a Stand-alone Library.
Library_Src_Dir is a simple attribute that has a
single string value. Its value must be the path (absolute or relative to the
project directory) of an existing directory. This directory cannot be the
object directory or one of the source directories, but it can be the same as
the library directory. The sources of the Interface
Units of the library, necessary to an Ada client of the library, will be
copied to the designated directory, called Interface Copy directory.
These sources include the specs of the Interface Units, but they may also include bodies and subunits, when pragmas Inline or Inline_Always are used, or when there are generic units in the spec. Before the sources are copied to the Interface Copy directory, an attempt is made to delete all files in the Interface Copy directory.
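The stand-alone library attributes described above might be combined as follows (unit and directory names hypothetical):

```ada
for Library_Name use "dummy";
for Library_Dir use "lib";
for Library_Interface use ("int1", "int1.child");
for Library_Auto_Init use "false";   --  callers must initialize explicitly
for Library_Src_Dir use "include";   --  the Interface Copy directory
```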
This section covers several topics related to gnatmake and
project files: defining switches for gnatmake
and for the tools that it invokes; specifying configuration pragmas;
the use of the Main attribute; building and rebuilding library project files.

The Default_Switches attribute is an associative array indexed by language name (case insensitive) whose value is a string list.
For example:
package Compiler is
   for Default_Switches ("Ada") use ("-gnaty", "-v");
end Compiler;
The
Switches attribute is also an associative array,
indexed by a file name (which may or may not be case sensitive, depending
on the operating system) whose value is a string list. For example:
package Builder is
   for Switches ("main1.adb") use ("-O2");
   for Switches ("main2.adb") use (...);
end Builder;

When gnatmake is invoked with a main project file that is a library project file, it is not allowed to specify one or more mains on the command line.
When a library project file is specified, switches -b and -l have special meanings.
A number of GNAT tools, other than gnatmake are project-aware: gnatbind, gnatfind, gnatlink, gnatls, gnatelim, and gnatxref. However, none of these tools can be invoked directly with a project file switch (-P). They must be invoked through the gnat driver.
The gnat driver is a front-end that accepts a number of commands and calls the corresponding tool. It was initially designed for VMS, to convert VMS style qualifiers to Unix style switches, but it is now available on all GNAT supported platforms.
On non VMS platforms, the gnat driver accepts the following commands (case insensitive):
Note that the compiler is invoked using the command gnatmake -f -u -c.
The command may be followed by switches and arguments for the invoked tool.
gnat bind -C main.ali
gnat ls -a main
gnat chop foo.txt
In addition, for command BIND, COMP or COMPILE, FIND, ELIM, LS or LIST, LINK, PP or PRETTY and XREF, the project file related switches (-P, -X and -vPx) may be used in addition to the switches of the invoking tool.
For each of these commands, there is optionally a corresponding package in the main project.
Binder for command BIND (invoking gnatbind)
Compiler for command COMP or COMPILE (invoking the compiler)
Finder for command FIND (invoking gnatfind)
Eliminate for command ELIM (invoking gnatelim)
Gnatls for command LS or LIST (invoking gnatls)
Linker for command LINK (invoking gnatlink)
Pretty_Printer for command PP or PRETTY (invoking gnatpp)
Cross_Reference for command XREF (invoking gnatxref)
Package Gnatls has a single attribute, Switches, a simple attribute with a string list value. It contains switches for the invocation of gnatls.
project Proj1 is
   package gnatls is
      for Switches use ("-a", "-v");
   end gnatls;
end Proj1;
All other packages have two attributes, Switches and Default_Switches.

Switches is an associative array attribute, indexed by the source file name, that has a string list value: the switches to be used when the tool corresponding to the package is invoked for the specific source file.

Default_Switches is an associative array attribute, indexed by the programming language, that has a string list value.
Default_Switches ("Ada") contains the
switches for the invocation of the tool corresponding
to the package, except if a specific
Switches attribute
is specified for the source file.
project Proj is
   for Source_Dirs use ("./**");

   package gnatls is
      for Switches use ("-a", "-v");
   end gnatls;

   package Compiler is
      for Default_Switches ("Ada") use ("-gnatv", "-gnatwa");
   end Compiler;

   package Binder is
      for Default_Switches ("Ada") use ("-C", "-e");
   end Binder;

   package Linker is
      for Default_Switches ("Ada") use ("-C");
      for Switches ("main.adb") use ("-C", "-v", "-v");
   end Linker;

   package Finder is
      for Default_Switches ("Ada") use ("-a", "-f");
   end Finder;

   package Cross_Reference is
      for Default_Switches ("Ada") use ("-a", "-f", "-d", "-u");
   end Cross_Reference;
end Proj;
With the above project file, commands such as
gnat comp -Pproj main
gnat ls -Pproj main
gnat xref -Pproj main
gnat bind -Pproj main.ali
gnat link -Pproj main.ali
will set up the environment properly and invoke the tool with the switches
found in the package corresponding to the tool:
Default_Switches ("Ada") for all tools,
except
Switches ("main.adb")
for
gnatlink.
project ::=
  context_clause project_declaration

context_clause ::=
  {with_clause}

with_clause ::=
  with path_name { , path_name } ;

path_name ::=
  string_literal

project_declaration ::=
  simple_project_declaration | project_extension

simple_project_declaration ::=
  project <project_>simple_name is
    {declarative_item}
  end <project_>simple_name;

project_extension ::=
  project <project_>simple_name extends path_name is
    {declarative_item}
  end <project_>simple_name;

declarative_item ::=
  package_declaration | typed_string_declaration | other_declarative_item

package_declaration ::=
  package_specification | package_renaming

package_specification ::=
  package package_identifier is
    {simple_declarative_item}
  end package_identifier ;

package_identifier ::=
  Naming | Builder | Compiler | Binder | Linker | Finder |
  Cross_Reference | gnatls | IDE | Pretty_Printer

package_renaming ::=
  package package_identifier renames <project_>simple_name.package_identifier ;

typed_string_declaration ::=
  type <typed_string_>simple_name is ( string_literal {, string_literal} );

other_declarative_item ::=
  attribute_declaration | typed_variable_declaration |
  variable_declaration | case_construction

typed_variable_declaration ::=
  <typed_variable_>simple_name : <typed_string_>name := string_expression ;

variable_declaration ::=
  <variable_>simple_name := expression;

expression ::=
  term {& term}

term ::=
  literal_string | string_list | <variable_>name |
  external_value | attribute_reference

string_literal ::=
  (same as Ada)

string_list ::=
  ( <string_>expression { , <string_>expression } )

external_value ::=
  external ( string_literal [, string_literal] )

attribute_reference ::=
  attribute_prefix ' <simple_attribute_>simple_name [ ( literal_string ) ]

attribute_prefix ::=
  project | <project_>simple_name | package_identifier |
  <project_>simple_name . package_identifier

case_construction ::=
  case <typed_variable_>name is
    {case_item}
  end case ;

case_item ::=
  when discrete_choice_list =>
    {case_construction | attribute_declaration}

discrete_choice_list ::=
  string_literal {| string_literal} | others

name ::=
  simple_name {. simple_name}

simple_name ::=
  identifier (same as Ada)
gnatxref and gnatfind
The compiler generates cross-referencing information (unless you set the `-gnatx' switch), which is saved in the .ali files. This information indicates where in the source each entity is declared and referenced. Note that entities in package Standard are not included, but entities in all other predefined units are included in the output.
Before using either of these tools, you must successfully compile your application, so that GNAT has a chance to generate the cross-referencing information.
To use these tools, you must not compile your application using the -gnatx switch on the gnatmake command line (see The GNAT Make Program gnatmake). Otherwise, cross-referencing information will not be generated.
The following switches are available:
gnatmake flag (see Switches for gnatmake).
gnatxref will output the parent type reference for each matching derived type.
gnatfind and gnatxref will then display every unused entity and 'with'ed package.
gnatxref will generate a tags file that can be used by vi. For examples of how to use this feature, see Examples of gnatxref Usage.
Project files allow a programmer to specify how to compile an application, where to find sources, etc. These files are used primarily by the Glide Ada mode, but they can also be used by the two tools gnatxref and gnatfind.
A project file name must end with .gpr. If a single one is
present in the current directory, then
gnatxref and
gnatfind will
extract the information from it. If multiple project files are found, none of
them is read, and you have to use the `-p' switch to specify the one
you want to use.
The following lines can be included, even though most of them have default values which can be used in most cases. The lines can be entered in any order in the file. Except for src_dir and obj_dir, you can only have one instance of each line. If you have multiple instances, only the last one is taken into account.
src_dir=DIR
[default: "./"] specifies a directory in which to look for source files. Multiple src_dir lines can be specified, and they will be searched in the order they are specified.

obj_dir=DIR
[default: "./"] specifies a directory in which to look for object and library files. Multiple obj_dir lines can be specified, and they will be searched in the order they are specified.

comp_opt=SWITCHES
[default: ""] creates a variable which can be referred to subsequently by using the `${comp_opt}' notation. This is intended to store the default switches given to gnatmake and gcc.

bind_opt=SWITCHES
[default: ""] creates a variable which can be referred to subsequently by using the `${bind_opt}' notation. This is intended to store the default switches given to gnatbind.

link_opt=SWITCHES
[default: ""] creates a variable which can be referred to subsequently by using the `${link_opt}' notation. This is intended to store the default switches given to gnatlink.

main=EXECUTABLE
[default: ""] specifies the name of the executable for the application. This variable can be referred to in the following lines by using the `${main}' notation.

comp_cmd=COMMAND
[default: "gcc -c -I${src_dir} -g -gnatq"] specifies the command used to compile a single file in the application.

make_cmd=COMMAND
[default: "gnatmake ${main} -aI${src_dir} -aO${obj_dir} -g -gnatq -cargs ${comp_opt} -bargs ${bind_opt} -largs ${link_opt}"] specifies the command used to recompile the whole application.

run_cmd=COMMAND
[default: "${main}"] specifies the command used to run the application.

debug_cmd=COMMAND
[default: "gdb ${main}"] specifies the command used to debug the application.
gnatxref and gnatfind only take into account the src_dir and obj_dir lines, and ignore the others.
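A minimal project file of this kind might look like the following (directory and executable names hypothetical):

```
src_dir=sources/
src_dir=sources/tests/
obj_dir=obj/
main=my_app
make_cmd=gnatmake ${main} -aI${src_dir} -aO${obj_dir}
```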
Examples of gnatxref Usage
For the following examples, we will consider the following units:
gnatxref main.adb
gnatxref generates cross-reference information for main.adb and every unit 'with'ed by main.adb.
The output would be:
B          Type: Integer
  Decl:  bar.ads          2:22
B          Type: Integer
  Decl:  main.ads         3:20
  Body:  main.adb         2:20
  Ref:   main.adb         4:13  5:13  6:19
Bar        Type: Unit
  Decl:  bar.ads          1:9
  Ref:   main.adb         6:8   7:8
         main.ads         1:6
C          Type: Integer
  Decl:  main.ads         4:5
  Modi:  main.adb         4:8
  Ref:   main.adb         7:19
D          Type: Integer
  Decl:  main.ads         6:5
  Modi:  main.adb         5:8
Foo        Type: Unit
  Decl:  main.ads         3:15
  Body:  main.adb         2:15
Main       Type: Unit
  Decl:  main.ads         2:9
  Body:  main.adb         1:14
Print      Type: Unit
  Decl:  bar.ads          2:15
  Ref:   main.adb         6:12  7:12
That is, the entity Main is declared in main.ads, line 2, column 9; its body is in main.adb, line 1, column 14, and it is not referenced anywhere.
gnatxref package1.adb package2.ads
gnatxref will generate cross-reference information for package1.adb, package2.ads and any other package 'with'ed by any of these.
gnatxref can generate a tags file output, which can be used
directly from vi. Note that the standard version of vi
will not work properly with overloaded symbols. Consider using another
free implementation of vi, such as vim.
$ gnatxref -v gnatfind.adb > tags
will generate the tags file for
gnatfind itself (if the sources
are in the search path!).
From vi, you can then use the command `:tag entity' (replacing entity by whatever you are looking for), and vi will display a new file with the corresponding declaration of entity.
Programs can be easier to read if certain constructs are vertically aligned. By default all alignments are set ON. Through the -A0 switch you may reset the default to OFF, and then use one or more of the other -An switches to activate alignment for specific constructs.
: in declarations
:= in initializations in declarations
:= in assignment statements
=> in associations
The -A switches are mutually compatible; any combination is allowed.
This group of gnatpp switches controls the layout of comments and complex syntactic constructs. See Formatting Comments, for details on their effect.
The -c1 and -c2 switches are incompatible. The -c3 and -c4 switches are compatible with each other and also with -c1 and -c2.
The -l1, -l2, and -l3 switches are incompatible.
These switches allow control over line length and indentation.
These switches control the inclusion of missing end/exit labels, and the indentation level in case statements.
To define the search path for the input source file, gnatpp uses the same switches as the GNAT compiler, with the same effects.
gnatpp Switches
The additional gnatpp switches are defined in this subsection.
The following subsections show how gnatpp treats “white space”, comments, program layout, and name casing. They provide the detailed descriptions of the switches shown above.
gnatpp always converts the usage occurrence of a (simple) name to the same casing as the corresponding defining identifier.
You control the casing for defining occurrences via the -n switch. With -nD (“as declared”, which is the default), defining occurrences appear exactly as in the source file where they are declared. The other values for this switch (-nU, -nL, -nM) convert defining occurrences to upper, lower, or mixed case, respectively (subject to the dictionary file mechanism described below). Usage occurrences then follow suit: gnatpp acts as though the -n switch had affected the casing for the defining occurrence of the name.
Some names may need to be spelled with casing conventions that are not covered by the upper-, lower-, and mixed-case transformations. You can arrange correct casing by placing such names in a dictionary file, and then supplying a -D switch. The casing of names from dictionary files overrides any -n switch.
To handle the casing of Ada predefined names and the names from GNAT libraries, gnatpp assumes a default dictionary file. The name of each predefined entity is spelled with the same casing as is used for the entity in the Ada Reference Manual. The name of each entity in the GNAT libraries is spelled with the same casing as is used in the declaration of that entity.
The default dictionary file can conflict with an explicit casing policy; for example, with the -nU switch some predefined names would still be rendered with their default dictionary casing. To ensure that even such names are rendered in uppercase, additionally supply the -D- switch (or else, less conveniently, place these names in upper case in a dictionary file).
A dictionary file is a plain text file; each line in this file can be either a blank line (containing only space characters and ASCII.HT characters), an Ada comment line, or the specification of exactly one casing schema.
A casing schema is a string that has the following syntax:
(The
[] metanotation stands for an optional part;
see -Dfile):
*, and if it does, the casing of this simple_identifier is used for this subword
*simple_identifier, and if it does, the casing of this simple_identifier is used for this subword
*simple_identifier
*, and if it does, the casing of this simple_identifier is used for this subword
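As an illustration, a dictionary file along these lines might contain the following casing schemas (entries hypothetical):

```
--  Casing exceptions
NASA
Ada_TCP_IP
*IO*
```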
For example, suppose we have the following source to reformat:
And suppose we have two dictionaries:
If gnatpp is called with the following switches:
gnatpp -nM -D dict1 -D dict2 test.adb
then the names in the gnatpp output are cased according to those dictionaries.
gnatkr
The default file naming rule in GNAT is that the file name must be derived from the unit name. The exact default rule is as follows: the unit name is converted to lower case, any dots in a child unit or subunit name are replaced by hyphens, and the suffix ".ads" is appended for a spec or ".adb" for a body.
The -gnatknn switch of the compiler activates a “krunching” circuit that limits file names to nn characters (where nn is a decimal integer). For example, using OpenVMS, where the maximum file name length is 39, the value of nn is usually set to 39, but if you want to generate a set of files that would be usable if ported to a system with some different maximum file length, then a different value can be specified. The default value of 39 for OpenVMS need not be specified.
The gnatkr utility can be used to determine the krunched name for a given file, when krunched to a specified maximum length. If no length is specified, the implied krunching length is eight characters.
The output is the krunched name. The output has an extension only if the original argument was a file name with an extension.
These system files have a hyphen in the second character position. That is why normal user files replace such a character with a tilde, to avoid confusion with system file names.
Examples of gnatkr Usage
$ gnatkr very_long_unit_name.ads          --> velounna.ads
$ gnatkr grandparent-parent-child.ads     --> grparchi.ads
$ gnatkr Grandparent.Parent.Child.ads     --> grparchi.ads
$ gnatkr grandparent-parent-child         --> grparchi
$ gnatkr very_long_unit_name.ads/count=6  --> vlunna.ads
$ gnatkr very_long_unit_name.ads/count=0  --> very_long_unit_name.ads
gnatprep
The
gnatprep utility provides
a simple preprocessing capability for Ada programs.
It is designed for use with GNAT, but is not dependent on any special
features of GNAT.
gnatprep
To call
gnatprep use
$ gnatprep [-bcrsu] [-Dsymbol=value] infile outfile [deffile]
where

infile is the full name of the input file, an Ada source file containing preprocessor directives.

outfile is the full name of the output file, an Ada source in standard Ada form.

deffile is the full name of a text file containing definitions of preprocessing symbols.
The following switches are available:
Comment lines may also appear in the definitions file, starting with the usual --, and comments may be added to the definitions lines.
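As an illustration, a definitions file and a fragment of an input file might look like this (symbol names and values hypothetical):

```
--  deffile: preprocessing symbol definitions
OS      := "linux"
Verbose := "True"
```

and, in the input file, a conditional section guarded by one of these symbols:

```
#if Verbose then
   Put_Line ("starting");
#end if;
```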
gnatls
gnatls is a tool that outputs information about compiled
units. It gives the relationship between objects, unit names and source
files. It can also be used to check the source dependencies of a unit
as well as various characteristics of the units.
gnatls
gnatls recognizes the following switches:
gnatmake flags (see Switches for gnatmake).
gnatmake flag (see Switches for gnatmake).
Preelaborable
No_Elab_Code
Pure
Elaborate_Body
Remote_Types
Shared_Passive
Predefined
Remote_Call_Interface
Example of gnatls Usage
Example of using the verbose switch. Note how the source and object paths are affected by the -I switch.
$ gnatls -v -I.. demo1.o

GNATLS 3.10w (970212) Copyright 1999 Free Software Foundation, Inc.

Source Search Path:
   <Current_Directory>
   ../
   /home/comar/local/adainclude/

Object Search Path:
   <Current_Directory>
   ../
   /home/comar/local/lib/gcc-lib/mips-sni-sysv4/2.7.2/adalib/
gnatclean
gnatclean is a tool that allows the deletion of files produced by the
compiler, binder and linker, including ALI files, object files, tree files,
expanded source files, library files, interface copy source files, binder
generated files and executable files.
gnatclean
gnatclean recognizes the following switches:
gnatclean.
external(name) when parsing the project file. See Switches Related to Project Files.
gnatclean was invoked.
Example of gnatclean Usage
This chapter addresses some of the issues related to building and using a library with GNAT. It also shows how the GNAT run-time library can be recompiled.
In the GNAT environment, a library has two components:
In order to use the packages of a library, the GNAT compilation model (see The GNAT Compilation Model) requires a certain number of sources to be available to the compiler. The minimal set of sources required includes the specs of all the packages that make up the visible part of the library, as well as all the sources upon which they depend. The bodies of all visible generic units must also be provided. Although not strictly mandatory, it is recommended that all sources needed to recompile the library be provided, so that users can make full use of inter-unit inlining and source-level debugging. This also makes life easier for users who need to upgrade their compilation toolchain and thus need to recompile the library from sources.
The compiled code can be provided in different ways. The simplest way is to provide directly the set of objects produced by the compiler during the compilation of the library. It is also possible to group the objects into an archive using whatever commands are provided by the operating system. Finally, it is also possible to create a shared library (see option -shared in the GCC manual).
There are various possibilities for compiling the units that make up the library: for example with a Makefile Using the GNU make Utility, or with a conventional script. For simple libraries, it is also possible to create a dummy main program which depends upon all the packages that comprise the interface of the library. This dummy main program can then be given to gnatmake, in order to build all the necessary objects. Here is an example of such a dummy program and the generic commands used to build an archive or a shared library.
with My_Lib.Service1;
with My_Lib.Service2;
with My_Lib.Service3;
procedure My_Lib_Dummy is
begin
   null;
end;
# compiling the library
$ gnatmake -c my_lib_dummy.adb

# we don't need the dummy object itself
$ rm my_lib_dummy.o my_lib_dummy.ali

# create an archive with the remaining objects
$ ar rc libmy_lib.a *.o
# some systems may require "ranlib" to be run as well

# or create a shared library
$ gcc -shared -o libmy_lib.so *.o
# some systems may require the code to have been compiled with -fPIC

# remove the object files that are now in the library
$ rm *.o

# Make the ALI files read-only so that gnatmake will not try to
# regenerate the objects that are in the library
$ chmod -w *.ali
When the objects are grouped in an archive or a shared library, the user needs to specify the desired library at link time, unless a pragma linker_options has been used in one of the sources:
pragma Linker_Options ("-lmy_lib");
Please note that the library must have a name of the form libxxx.a or libxxx.so in order to be accessed by the directive -lxxx at link time.
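The naming rule can be checked with a throwaway archive. The "object files" below are mere stand-ins created for the illustration, since ar archives its arguments regardless of their contents:

```shell
# Stand-in "object files" (ar does not inspect file contents)
printf 'stub' > service1.o
printf 'stub' > service2.o

# The archive must be named libmy_lib.a so that -lmy_lib can find it
# at link time
ar rc libmy_lib.a service1.o service2.o

# List the archive members to verify the result
ar t libmy_lib.a
```

As noted in the build example earlier, some systems additionally require ranlib to be run on the archive.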
In the GNAT model, installing a library consists in copying into a specific location the files that make up this library. It is possible to install the sources in a different directory from the other files (ALI, objects, archives) since the source path and the object path can easily be specified separately.
For general purpose libraries, it is possible for the system administrator to put those libraries in the default compiler paths. To achieve this, he must specify their location in the configuration files ada_source_path and ada_object_path that must be located in the GNAT installation tree at the same place as the gcc spec file. The location of the gcc spec file can be determined as follows:
$ gcc -v
The configuration files mentioned above have a simple format: each line must contain one unique directory name. These names are added to the corresponding path in their order of appearance in the file. The names can be either absolute or relative; in the latter case, they are relative to where these files are located.
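As a sketch, a hand-written ada_source_path could look like this. The directory names are invented for the example; the only rule is one directory per line, searched in the order listed:

```shell
# Hypothetical ada_source_path: one directory name per line.
# Relative names are interpreted relative to the file's own location.
cat > ada_source_path <<'EOF'
/opt/mylibs/adainclude
../extra_sources
EOF

# Each line is one search directory
wc -l < ada_source_path
```

An ada_object_path file would have exactly the same shape, listing the directories holding the objects and ALI files.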
ada_source_path and ada_object_path might actually not be present in a GNAT installation, in which case GNAT will look for its run-time library in the directories adainclude (for the sources) and adalib (for the objects and ALI files). When the files exist, the compiler does not look in adainclude and adalib at all, so the configuration files must then also list the location of the run-time library. You can select another runtime library at compilation time with the switch --RTS=rts-path, which makes it easy to choose and change the runtime.
It may be useful to recompile the GNAT library in various contexts, the
most important one being the use of partition-wide configuration pragmas
such as Normalize_Scalars. A special Makefile called
Makefile.adalib is provided to that effect and can be found in
the directory containing the GNAT library. The location of this
directory depends on the way the GNAT environment has been installed and can
be determined by means of the command:
$ gnatls -v
The last entry in the object search path usually contains the GNAT library. This Makefile contains its own documentation and in particular the set of instructions needed to rebuild a new library and to use it.
Complex project organizations can be handled in a very powerful way by using GNU make combined with gnatmake. For instance, here is a Makefile which allows you to build each subsystem of a big project into a separate shared library. Such a makefile allows you to significantly reduce the link time of very big applications while maintaining full coherence at each step of the build process.
The list of dependencies is handled automatically by
gnatmake. The Makefile is simply used to call gnatmake in each of
the appropriate directories.
Note that you should also read the example on how to automatically create the list of directories (see Automatically Creating a List of Directories) which might help you in case your project has a lot of subdirectories.
## This Makefile is intended to be used with the following directory
## configuration:
##  - The sources are split into a series of csc (computer software
##    components). Each of these csc is put in its own directory.
##    Their names are referenced by the directory names.
##    They will be compiled into shared libraries (although this would
##    also work with static libraries).
##  - The main program (and possibly other packages that do not belong
##    to any csc) is put in the top level directory (where the Makefile is).
##
##    toplevel_dir __ first_csc  (sources) __ lib (will contain the library)
##               \_  second_csc (sources) __ lib (will contain the library)
##               \_  ...
##
## Although this Makefile is built for shared libraries, it is easy to
## modify it to build partial link objects instead (modify the lines with
## -shared and gnatlink below).
##
## With this makefile, you can change any file in the system or add any new
## file, and everything will be recompiled correctly (only the relevant
## shared objects will be recompiled, and the main program will be
## re-linked).

# The list of computer software components for your project. This might be
# generated automatically.
CSC_LIST=aa bb cc

# Name of the main program (no extension)
MAIN=main

# If we need to build objects with -fPIC, uncomment the following line
#NEED_FPIC=-fPIC

# The following variable should give the directory containing libgnat.so
# You can get this directory through 'gnatls -v'. This is usually the last
# directory in the Object_Path.
GLIB=...
# The directories for the libraries
# (This macro expands the list of CSC to the list of shared libraries, you
# could simply use the expanded form:
#   LIB_DIR=aa/lib/libaa.so bb/lib/libbb.so cc/lib/libcc.so
LIB_DIR=${foreach dir,${CSC_LIST},${dir}/lib/lib${dir}.so}

${MAIN}: objects ${LIB_DIR}
        gnatbind ${MAIN} ${CSC_LIST:%=-aO%/lib} -shared
        gnatlink ${MAIN} ${CSC_LIST:%=-l%}

objects::
        # recompile the sources
        gnatmake -c -i ${MAIN}.adb ${NEED_FPIC} ${CSC_LIST:%=-I%}

# Note: In a future version of GNAT, the following commands will be
# simplified by a new tool, gnatmlib
${LIB_DIR}:
        mkdir -p ${dir $@ }
        cd ${dir $@ }; gcc -shared -o ${notdir $@ } ../*.o -L${GLIB} -lgnat
        cd ${dir $@ }; cp -f ../*.ali .

# The dependencies for the modules
# Note that we have to force the expansion of *.o, since in some cases
# make won't be able to do it itself.
aa/lib/libaa.so: ${wildcard aa/*.o}
bb/lib/libbb.so: ${wildcard bb/*.o}
cc/lib/libcc.so: ${wildcard cc/*.o}

# Make sure all of the shared libraries are in the path before starting
# the program
run::
        LD_LIBRARY_PATH=`pwd`/aa/lib:`pwd`/bb/lib:`pwd`/cc/lib ./${MAIN}

clean::
        ${RM} -rf ${CSC_LIST:%=%/lib}
        ${RM} ${CSC_LIST:%=%/*.ali}
        ${RM} ${CSC_LIST:%=%/*.o}
        ${RM} *.o *.ali ${MAIN}
In most makefiles, you will have to specify a list of directories, and store it in a variable. For small projects, it is often easier to specify each of them by hand, since you then have full control over what is the proper order for these directories, which ones should be included...
However, in larger projects, which might involve hundreds of subdirectories, it might be more convenient to generate this list automatically.
The example below presents two methods. The first one, although less
general, gives you more control over the list. It involves wildcard
characters, that are automatically expanded by
make. Its
shortcoming is that you need to explicitly specify some of the
organization of your project, such as for instance the directory tree
depth, whether some directories are found in a separate tree,...
The second method is the most general one. It requires an external
program, called
find, which is standard on all Unix systems. All
the directories found under a given root directory will be added to the
list.
# The examples below are based on the following directory hierarchy:
# All the directories can contain any number of files
#   ROOT_DIRECTORY ->  a  ->  aa  ->  aaa
#                          ->  ab
#                          ->  ac
#                  ->  b  ->  ba  ->  baa
#                          ->  bb
#                          ->  bc
# This Makefile creates a variable called DIRS, that can be reused any time
# you need this list (see the other examples in this section)

# The root of your project's directory hierarchy
ROOT_DIRECTORY=.

####
# First method: specify explicitly the list of directories
# This allows you to specify any subset of all the directories you need.
####
DIRS := a/aa/ a/ab/ b/ba/

####
# Second method: use wildcards
# Note that the argument(s) to wildcard below should end with a '/'.
# Since wildcards also return file names, we have to filter them out
# to avoid duplicate directory names.
# We thus use make's dir and sort functions.
# It sets DIRS to the following value (note that the directories aaa and baa
# are not given, unless you change the arguments to wildcard).
#   DIRS= ./a/a/ ./b/ ./a/aa/ ./a/ab/ ./a/ac/ ./b/ba/ ./b/bb/ ./b/bc/
####
DIRS := ${sort ${dir ${wildcard ${ROOT_DIRECTORY}/*/ ${ROOT_DIRECTORY}/*/*/}}}

####
# Third method: use an external program
# This command is much faster if run on local disks, avoiding NFS slowdowns.
# This is the most complete command: it sets DIRS to the following value:
#   DIRS= ./a ./a/aa ./a/aa/aaa ./a/ab ./a/ac ./b ./b/ba ./b/ba/baa
#         ./b/bb ./b/bc
####
DIRS := ${shell find ${ROOT_DIRECTORY} -type d -print}
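The behavior of the third method can be checked outside make, on a scratch tree created just for the demonstration:

```shell
# Build a scratch tree mirroring part of the hierarchy in the
# comments above
mkdir -p root/a/aa/aaa root/a/ab root/b/ba

# Same command that make runs via ${shell ...}: every directory under
# the root, at any depth, is listed -- including the root itself,
# 7 directories in total here.
find root -type d -print | sort
```

The equivalent make assignment would then be DIRS := ${shell find root -type d -print}, with the order of entries depending on the traversal rather than on sorting.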
This chapter describes the gnatmem tool, which can be used to track down “memory leaks”, and the GNAT Debug Pool facility, which can be used to detect incorrect uses of access values (including “dangling references”).
The
gnatmem utility monitors dynamic allocation and
deallocation activity in a program, and displays information about
incorrect deallocations and possible sources of memory leaks.
It provides three types of information: general statistics concerning memory usage, backtraces for incorrect deallocations, and the possible sources of memory leaks. The tool is only supported on some platforms, including x86 GNU/Linux and Solaris (sparc and x86). To use it, the program must be compiled with debugging options and linked with the gmem library (see Switches for gcc). For example, to build my_program:
$ gnatmake -g my_program -largs -lgmem
When running my_program the file gmem.out is produced. This file
contains information about all allocations and deallocations done by the
program. It is produced by the instrumented allocation and
deallocation routines and will be used by
gnatmem.
Example of gnatmem Usage
The following example shows the use of
gnatmem
on a simple memory-leaking program.
Suppose that we have the following Ada program:
The program needs to be compiled with debugging option and linked with
gmem library:
$ gnatmake -g test_gm -largs -lgmem
Then we execute the program as usual:
$ test_gm
Then
gnatmem is invoked simply with
$ gnatmem test_gm
which produces the following output (result may vary on different platforms):
Global information
------------------
   Total number of allocations        :  18
   Total number of deallocations      :   5
   Final Water Mark (non freed mem)   :  53.00 Kilobytes
   High Water Mark                    :  56.90 Kilobytes

Allocation Root # 1
-------------------
 Number of non freed allocations    :  11
 Final Water Mark (non freed mem)   :  42.97 Kilobytes
 High Water Mark                    :  46.88 Kilobytes
 Backtrace                          :
   test_gm.adb:11 test_gm.my_alloc

Allocation Root # 2
-------------------
 Number of non freed allocations    :   1
 Final Water Mark (non freed mem)   :  10.02 Kilobytes
 High Water Mark                    :  10.02 Kilobytes
 Backtrace                          :
   s-secsta.adb:81 system.secondary_stack.ss_init

Allocation Root # 3
-------------------
 Number of non freed allocations    :   1
 Final Water Mark (non freed mem)   :  12 Bytes
 High Water Mark                    :  12 Bytes
 Backtrace                          :
   s-secsta.adb:181 system.secondary_stack.ss_init
$ gnatmem 3 test_gm
which will give the following output:
Global information
------------------
   Total number of allocations        :  18
   Total number of deallocations      :   5
   Final Water Mark (non freed mem)   :  53.00 Kilobytes
   High Water Mark                    :  56.90 Kilobytes

Allocation Root # 1
-------------------
 Number of non freed allocations    :  10
 Final Water Mark (non freed mem)   :  39.06 Kilobytes
 High Water Mark                    :  42.97 Kilobytes
 Backtrace                          :
   test_gm.adb:11 test_gm.my_alloc
   test_gm.adb:24 test_gm
   b_test_gm.c:52 main

Allocation Root # 2
-------------------
 Number of non freed allocations    :   1
 Final Water Mark (non freed mem)   :  10.02 Kilobytes
 High Water Mark                    :  10.02 Kilobytes
 Backtrace                          :
   s-secsta.adb:81 system.secondary_stack.ss_init
   s-secsta.adb:283 <system__secondary_stack___elabb>
   b_test_gm.c:33 adainit

Allocation Root # 3
-------------------
 Number of non freed allocations    :   1
 Final Water Mark (non freed mem)   :   3.91 Kilobytes
 High Water Mark                    :   3.91 Kilobytes
 Backtrace                          :
   test_gm.adb:11 test_gm.my_alloc
   test_gm.adb:21 test_gm
   b_test_gm.c:52 main

Allocation Root # 4
-------------------
 Number of non freed allocations    :   1
 Final Water Mark (non freed mem)   :  12 Bytes
 High Water Mark                    :  12 Bytes
 Backtrace                          :
   s-secsta.adb:181 system.secondary_stack.ss_init
   s-secsta.adb:283 <system__secondary_stack___elabb>
   b_test_gm.c:33 adainit
The allocation root #1 of the first example has been split into two roots, #1 and #3, thanks to the more precise associated backtrace.
gnatstub creates body stubs, that is, empty but compilable bodies for library unit declarations.
gnatstub has a command-line interface of the form
$ gnatstub [switches] filename [directory]
where
This chapter discusses some other utility programs available in the Ada environment.
The object files generated by GNAT are in standard system format and in particular the debugging information uses this format. This means programs generated by GNAT can be used with existing utilities that depend on these formats.
In general, any utility program that works with C will also often work with
Ada programs generated by GNAT. This includes software utilities such as
gprof (a profiling program),
gdb (the FSF debugger), and utilities such
as Purify.
Glide
The Glide mode for programming in Ada (both Ada83 and Ada95) helps the user to understand and navigate existing code, and facilitates writing new code. It furthermore provides some utility functions for easier integration of standard Emacs features when programming in Ada.
Its general features include:
Some of the specific Ada mode features are:
Glide directly supports writing Ada code, via several facilities:
For more information, please refer to the online documentation
available in the
Glide =>
Help menu.
gnathtml
This
Perl script allows Ada source files to be browsed using
standard Web browsers. For the installation procedure, see Installing gnathtml.
Ada reserved keywords are highlighted in a bold font and Ada comments in a blue font. Unless your program was compiled with the gcc -gnatx switch to suppress the generation of cross-referencing information, user defined variables and types will appear in a different color; you will be able to click on any identifier and go to its declaration.
The command line is as follows:
$ perl gnathtml.pl [switches] ada-files
You can pass it as many Ada files as you want.
gnathtml will generate
an html file for every ada file, and a global file called index.htm.
This file is an index of every identifier defined in the files.
The available switches are the following:
with command, the latter will also be converted to html. Only the files in the user project will be converted to html, not the files in the run-time library itself.
gnathtml will number the html files every number line.
Using this switch, you can tell gnathtml to use these files. This allows
you to get an html version of your application, even if it is spread
over multiple directories.
gnathtml
Perl needs to be installed on your machine to run this script.
Perl is freely available for almost every architecture and
Operating System via the Internet.
On Unix systems, you may want to modify the first line of the script
gnathtml, to explicitly tell the operating system where Perl
is. The syntax of this line is:
#!full_path_name_to_perl
Alternatively, you may run the script using the following command line:
$ perl gnathtml.pl [switches] files
This chapter discusses how to debug Ada programs. An incorrect Ada program may be handled in three ways by the GNAT compiler. The form of the debugger invocation is
$ gdb program
where
program is the name of the executable file. This
activates the debugger and results in a prompt for debugger commands.
The simplest command is simply
run, which causes the program to run
exactly as if the debugger were not present. The following section
describes some of the additional commands that can be given to
GDB.
GDB contains a large repertoire of commands. The manual
Debugging with GDB
includes extensive documentation on the use
of these commands, together with examples of their use. Furthermore,
the command help invoked from within
GDB activates a simple help
facility which summarizes the available commands and their options.
In this section we summarize a few of the most commonly
used commands to give an idea of what
GDB is about. You should create
a simple program with debugging information and experiment with the use of
these
GDB commands on the program as you read through the
following section.
set args arguments
The set args command is not needed if the program does not require arguments.
run
The run command causes execution of the program to start from the beginning. If the program is already running, that is to say if you are currently positioned at a breakpoint, then a prompt will ask for confirmation that you want to abandon the current execution and restart.
break location
Set a breakpoint at the given location. When execution reaches this point the program stops and
GDB will await further commands. location is either a line number within a file, given in the format
file:linenumber, or it is the name of a subprogram. If you request that a breakpoint be set on a subprogram that is overloaded, a prompt will ask you to specify on which of those subprograms you want to breakpoint. You can also specify that all of them should be breakpointed. If the program is run and execution encounters the breakpoint, then the program stops and
GDB signals that the breakpoint was encountered by printing the line of code before which the program is halted.
breakpoint exception name
print expression
This will print the value of the given expression. Most simple Ada expression formats are properly handled by
GDB, so the expression can contain function calls, variables, operators, and attribute references.
continue
step
list
backtrace
up
GDB can display the values of variables local to the current frame. The command
up can be used to examine the contents of other active frames, by moving the focus up the stack, that is to say from callee to caller, one frame at a time.
down
Moves the focus of GDB down from the frame currently being examined to the frame of its callee (the reverse of the previous command),
frame n
The above list is a very short introduction to the commands that
GDB provides. Important additional capabilities, including conditional
breakpoints, the ability to execute command sequences on a breakpoint,
the ability to debug at the machine instruction level and many other
features are described in detail in Debugging with GDB.
Note that most commands can be abbreviated
(for example, c for continue, bt for backtrace).
You can set breakpoints that trip when your program raises selected exceptions.
break exception
break exception name
break exception unhandled
info exceptions
info exceptions regexp
The info exceptions command permits the user to examine all defined exceptions within Ada programs. With a regular expression, regexp, as argument, it prints out only those exceptions whose name matches regexp.
gcc with the -gnatf switch. This switch causes all errors on a given line to be reported. In its absence, only the first error on a line is displayed.
The -gnatdO switch causes errors to be displayed as soon as they are encountered, rather than after compilation is terminated. If GNAT terminates prematurely or goes into an infinite loop, the last error message displayed may help to pinpoint the culprit.
gcc with the -v (verbose) switch. In this mode,
gcc produces ongoing information about the progress of the compilation and provides the name of each procedure as code is generated. This switch allows you to find which Ada procedure was being compiled when it encountered a code generation problem.
gcc with the -gnatdc switch. This is a GNAT-specific switch that does for the front end what -v does for the back end. The system prints the name of each unit, either a compilation unit or nested unit, as it is being analyzed. backend indicates the source line at which the execution stopped, and
input_file name indicates the name of the source file. All the other .c files are modifications of common
gcc files.
Traceback is a mechanism to display the sequence of subprogram calls that leads to a specified execution point in a program. Often (but not always) the execution point is an instruction at which an exception has been raised. This mechanism is also known as stack unwinding because it obtains its information by scanning the run-time stack and recovering the activation records of all active subprograms. Stack unwinding is one of the most important tools for program debugging.
The first entry stored in traceback corresponds to the deepest calling level, that is to say the subprogram currently executing the instruction from which we want to obtain the traceback.
Note that there is no runtime performance penalty when stack traceback is enabled and no exceptions are raised during program execution.
Note: this feature is not supported on all platforms. See GNAT.Traceback spec in g-traceb.ads for a complete list of supported platforms.
A runtime non-symbolic traceback is a list of addresses of call instructions.
To enable this feature you must use the -E option of
gnatbind. With this option a stack traceback is stored as part
of exception information. It is possible to retrieve this information using the
standard
Ada.Exceptions.Exception_Information routine.
Let's have a look at a simple example:
$ gnatmake
As we see the traceback lists a sequence of addresses for the unhandled
exception
CONSTRAINT_ERROR raised in procedure P1. It is easy to
guess that this exception comes from procedure P1. To translate these
addresses into the source lines where the calls appear, the
addr2line tool, described below, is invaluable. The use of this tool
requires the program to be compiled with debug information.
$ gnatmake -g
$ addr2line --exe=stb 0x401373 0x40138b 0x40139c 0x401335 0x4011c4
   0x4011f1 0x77e892a4

00401373 at d:/stb/stb.adb:5
0040138B at d:/stb/stb.adb:10
0040139C at d:/stb/stb.adb:14
00401335 at d:/stb/b~stb.adb:104
004011C4 at /build/.../crt1.c:200
004011F1 at /build/.../crt1.c:222
77E892A4 in ?? at ??:0
addr2line has a number of other useful options:
--functions
--demangle=gnat
$ addr2line --exe=stb --functions --demangle=gnat 0x401373 0x40138b
   0x40139c 0x401335 0x4011c4 0x4011f1

00401373 in stb.p1 at d:/stb/stb.adb:5
0040138B in stb.p2 at d:/stb/stb.adb:10
0040139C in stb at d:/stb/stb.adb:14
00401335 in main at d:/stb/b~stb.adb:104
004011C4 in <__mingw_CRTStartup> at /build/.../crt1.c:200
004011F1 in <mainCRTStartup> at /build/.../crt1.c:222
From this traceback we can see that the exception was raised in stb.adb at line 5, which was reached from a procedure call in stb.adb at line 10, and so on. The b~stb.adb file is the binder file, which contains the call to the main program (see Running gnatbind). The remaining entries are assorted runtime routines, and the output will vary from platform to platform.
It is also possible to use
GDB with these traceback addresses to debug
the program. For example, we can break at a given code location, as reported
in the stack traceback:
$ gdb -nw stb
(gdb) break *0x401373
Breakpoint 1 at 0x401373: file stb.adb, line 5.

Note that this feature is not implemented inside Windows DLLs; only the non-symbolic traceback is reported in that case.
It is important to note that the stack traceback addresses do not change when debug information is included. This is particularly useful because it makes it possible to release software without debug information (to minimize object size), get a field report that includes a stack traceback whenever an internal bug occurs, and then be able to retrieve the sequence of calls with the same program compiled with debug information.
Non-symbolic tracebacks are obtained by using the -E binder argument.
The stack traceback is attached to the exception information string, and can
be retrieved in an exception handler within the Ada program, by means of the
Ada95 Ada.Exceptions.Exception_Information routine.
It is also possible to retrieve a stack traceback from anywhere in a
program. For this you need to
use the
GNAT.Traceback API. This package includes a procedure called
Call_Chain that computes a complete stack traceback, as well as useful
display procedures described below. It is not necessary to use the
-E gnatbind option in this case, because the stack traceback mechanism
is invoked explicitly.
In the following example we compute a traceback at a specific location in
the program, and we display it using
GNAT.Debug_Utilities.Image to
convert addresses to strings:
with Ada.Text_IO;
with GNAT.Traceback;
with GNAT.Debug_Utilities;

procedure STB is

   use Ada;
   use GNAT;
   use GNAT.Traceback;

   procedure P1 is
      TB  : Tracebacks_Array (1 .. 10);
      --  We are asking for a maximum of 10 stack frames.
      Len : Natural;
      --  Len will receive the actual number of stack frames returned.
   begin
      Call_Chain (TB, Len);

      Text_IO.Put ("In STB.P1 : ");

      for K in 1 .. Len loop
         Text_IO.Put (Debug_Utilities.Image (TB (K)));
         Text_IO.Put (' ');
      end loop;

      Text_IO.New_Line;
   end P1;

   procedure P2 is
   begin
      P1;
   end P2;

begin
   P2;
end STB;
$ gnatmake stb
$ stb

In STB.P1 : 16#0040_F1E4# 16#0040_14F2# 16#0040_170B# 16#0040_171C#
16#0040_1461# 16#0040_11C4# 16#0040_11F1# 16#77E8_92A4#
with Ada.Text_IO;
with GNAT.Traceback.Symbolic;

procedure STB is

   procedure P1 is
   begin
      raise Constraint_Error;
   end P1;

   procedure P2 is
   begin
      P1;
   end P2;

   procedure P3 is
   begin
      P2;
   end P3;

begin
   P3;
exception
   when E : others =>
      Ada.Text_IO.Put_Line (GNAT.Traceback.Symbolic.Symbolic_Traceback (E));
end STB;
$ gnatmake -g stb -bargs -E -largs -lgnat -laddr2line -lintl
$ stb

0040149F in stb.p1 at stb.adb:8
004014B7 in stb.p2 at stb.adb:13
004014CF in stb.p3 at stb.adb:18
004015DD in ada.stb at stb.adb:22
00401461 in main at b~stb.adb:168
004011C4 in __mingw_CRTStartup at crt1.c:200
004011F1 in mainCRTStartup at crt1.c:222
77E892A4 in ?? at ??:0
The exact sequence of linker options may vary from platform to platform. The above -largs section is for Windows platforms. By contrast, under Unix there is no need for the -largs section. Differences across platforms are due to details of linker implementation.

The alternate run-time libraries are installed in the GNAT installation tree, next to the default adainclude and adalib directories:

   $target
      +--- rts-fsu
      |       +--- adainclude
      |       +--- adalib
      +--- rts-sjlj
              +--- adainclude
              +--- adalib
If the rts-fsu library is to be selected on a permanent basis, these soft links can be modified with the following commands:
$ cd $target $ rm -f adainclude adalib $ ln -s rts-fsu/adainclude adainclude $ ln -s rts-fsu/adalib adalib
Alternatively, you can specify rts-fsu/adainclude in the file $target/ada_source_path and rts-fsu/adalib in $target/ada_object_path.
Selecting another run-time library temporarily can be achieved by the regular mechanism for GNAT object or source path selection:
$ ADA_INCLUDE_PATH=$target/rts-fsu/adainclude:$ADA_INCLUDE_PATH $ ADA_OBJECTS_PATH=$target/rts-fsu/adalib:$ADA_OBJECTS_PATH $ export ADA_INCLUDE_PATH ADA_OBJECTS_PATH
You can similarly switch to rts-sjlj.
When using a POSIX threads implementation, you have a choice of several
scheduling policies:
SCHED_FIFO,
SCHED_RR
and
SCHED_OTHER.
Typically, the default is
SCHED_OTHER, while using
SCHED_FIFO
or
SCHED_RR requires special (e.g., root) privileges.
By default, GNAT uses the
SCHED_OTHER policy. To specify
SCHED_FIFO,
you can use one of the following:
pragma Time_Slice (0.0)
pragma Task_Dispatching_Policy (FIFO_Within_Priorities)
To specify
SCHED_RR,
you should use
pragma Time_Slice with a
value greater than
0.0, or else use the corresponding -T
binder option.
This section addresses some topics related to the various threads libraries on Sparc Solaris and then provides some information on building and debugging 64-bit applications.
Starting with version 3.14, GNAT under Solaris comes with a new run-time library.
The FSU run-time library is based on the FSU threads.
Starting with Solaris 2.5.1, 64-bit applications are supported. To debug them, dwarf-2 debug information is required, so you have to add -gdwarf-2 to your gnatmake arguments. In addition, a special version of gdb, called gdb64, needs to be used.
To summarize, building and debugging a “Hello World” program in 64-bit mode amounts to:
$ gnatmake -m64 -gdwarf-2 --RTS=m64 hello.adb $ gdb64 hello
On SGI IRIX, the thread library depends on which compiler is used.
The o32 ABI compiler comes with a run-time library based on the
user-level
athread
library. Thus kernel-level capabilities such as nonblocking system
calls or time slicing can only be achieved reliably by specifying different
sprocs via the pragma
Task_Info
and the
System.Task_Info package.
See the GNAT Reference Manual for further information.
The n32 ABI compiler comes with a run-time library based on the kernel POSIX threads and thus does not have the limitations mentioned above.
The default thread library under GNU/Linux has the following disadvantages compared to other native thread libraries:
killpg().
This Appendix displays the source code for gnatbind's output file generated for a simple “Hello World” program. Comments have been added for clarification purposes.
--  The package is called Ada_Main unless this name is actually used
--  as a unit name in the partition, in which case some other unique
--  name is used.

with System;
package ada_main is

   Elab_Final_Code : Integer;
   pragma Import (C, Elab_Final_Code, "__gnat_inside_elab_final_code");

   --  The main program saves the parameters (argument count,
   --  argument values, environment pointer) in global variables
   --  for later access by other units including
   --  Ada.Command_Line.

   gnat_argc : Integer;
   gnat_argv : System.Address;
   gnat_envp : System.Address;

   --  The actual variables are stored in a library routine. This
   --  is useful for some shared library situations, where there
   --  are problems if variables are not in the library.

   pragma Import (C, gnat_argc);
   pragma Import (C, gnat_argv);
   pragma Import (C, gnat_envp);

   --  The exit status is similarly an external location

   gnat_exit_status : Integer;
   pragma Import (C, gnat_exit_status);

   GNAT_Version : constant String := "GNAT Version: 3.15w (20010315)";
   pragma Export (C, GNAT_Version, "__gnat_version");

   --  This is the generated adafinal routine that performs
   --  finalization at the end of execution. In the case where
   --  Ada is the main program, this main program makes a call
   --  to adafinal at program termination.

   procedure adafinal;
   pragma Export (C, adafinal, "adafinal");

   --  This is the generated adainit routine that performs
   --  initialization at the start of execution. In the case
   --  where Ada is the main program, this main program makes
   --  a call to adainit at program startup.

   procedure adainit;
   pragma Export (C, adainit, "adainit");

   --  This routine is called at the start of execution. It is
   --  a dummy routine that is used by the debugger to breakpoint
   --  at the start of execution.

   procedure Break_Start;
   pragma Import (C, Break_Start, "__gnat_break_start");

   --  This is the actual generated main program (it would be
   --  suppressed if the no main program switch were used).
   --  As required by standard system conventions, this program has
   --  the external name main.

   function main
     (argc : Integer;
      argv : System.Address;
      envp : System.Address)
      return Integer;
   pragma Export (C, main, "main");

   --  The following set of constants give the version
   --  identification values for every unit in the bound
   --  partition. This identification is computed from all
   --  dependent semantic units, and corresponds to the
   --  string that would be returned by use of the
   --  Body_Version or Version attributes.

   type Version_32 is mod 2 ** 32;

   u00001 : constant Version_32 := 16#7880BEB3#;
   u00002 : constant Version_32 := 16#0D24CBD0#;
   u00003 : constant Version_32 := 16#3283DBEB#;
   u00004 : constant Version_32 := 16#2359F9ED#;
   u00005 : constant Version_32 := 16#664FB847#;
   u00006 : constant Version_32 := 16#68E803DF#;
   u00007 : constant Version_32 := 16#5572E604#;
   u00008 : constant Version_32 := 16#46B173D8#;
   u00009 : constant Version_32 := 16#156A40CF#;
   u00010 : constant Version_32 := 16#033DABE0#;
   u00011 : constant Version_32 := 16#6AB38FEA#;
   u00012 : constant Version_32 := 16#22B6217D#;
   u00013 : constant Version_32 := 16#68A22947#;
   u00014 : constant Version_32 := 16#18CC4A56#;
   u00015 : constant Version_32 := 16#08258E1B#;
   u00016 : constant Version_32 := 16#367D5222#;
   u00017 : constant Version_32 := 16#20C9ECA4#;
   u00018 : constant Version_32 := 16#50D32CB6#;
   u00019 : constant Version_32 := 16#39A8BB77#;
   u00020 : constant Version_32 := 16#5CF8FA2B#;
   u00021 : constant Version_32 := 16#2F1EB794#;
   u00022 : constant Version_32 := 16#31AB6444#;
   u00023 : constant Version_32 := 16#1574B6E9#;
   u00024 : constant Version_32 := 16#5109C189#;
   u00025 : constant Version_32 := 16#56D770CD#;
   u00026 : constant Version_32 := 16#02F9DE3D#;
   u00027 : constant Version_32 := 16#08AB6B2C#;
   u00028 : constant Version_32 := 16#3FA37670#;
   u00029 : constant Version_32 := 16#476457A0#;
   u00030 : constant Version_32 := 16#731E1B6E#;
   u00031 : constant Version_32 := 16#23C2E789#;
   u00032 : constant
Version_32 := 16#0F1BD6A1#;
   u00033 : constant Version_32 := 16#7C25DE96#;
   u00034 : constant Version_32 := 16#39ADFFA2#;
   u00035 : constant Version_32 := 16#571DE3E7#;
   u00036 : constant Version_32 := 16#5EB646AB#;
   u00037 : constant Version_32 := 16#4249379B#;
   u00038 : constant Version_32 := 16#0357E00A#;
   u00039 : constant Version_32 := 16#3784FB72#;
   u00040 : constant Version_32 := 16#2E723019#;
   u00041 : constant Version_32 := 16#623358EA#;
   u00042 : constant Version_32 := 16#107F9465#;
   u00043 : constant Version_32 := 16#6843F68A#;
   u00044 : constant Version_32 := 16#63305874#;
   u00045 : constant Version_32 := 16#31E56CE1#;
   u00046 : constant Version_32 := 16#02917970#;
   u00047 : constant Version_32 := 16#6CCBA70E#;
   u00048 : constant Version_32 := 16#41CD4204#;
   u00049 : constant Version_32 := 16#572E3F58#;
   u00050 : constant Version_32 := 16#20729FF5#;
   u00051 : constant Version_32 := 16#1D4F93E8#;
   u00052 : constant Version_32 := 16#30B2EC3D#;
   u00053 : constant Version_32 := 16#34054F96#;
   u00054 : constant Version_32 := 16#5A199860#;
   u00055 : constant Version_32 := 16#0E7F912B#;
   u00056 : constant Version_32 := 16#5760634A#;
   u00057 : constant Version_32 := 16#5D851835#;

   --  The following Export pragmas export the version numbers
   --  with symbolic names ending in B (for body) or S
   --  (for spec) so that they can be located in a link. The
   --  information provided here is sufficient to track down
   --  the exact versions of units used in a given build.
   pragma Export (C, u00001, "helloB");
   pragma Export (C, u00002, "system__standard_libraryB");
   pragma Export (C, u00003, "system__standard_libraryS");
   pragma Export (C, u00004, "adaS");
   pragma Export (C, u00005, "ada__text_ioB");
   pragma Export (C, u00006, "ada__text_ioS");
   pragma Export (C, u00007, "ada__exceptionsB");
   pragma Export (C, u00008, "ada__exceptionsS");
   pragma Export (C, u00009, "gnatS");
   pragma Export (C, u00010, "gnat__heap_sort_aB");
   pragma Export (C, u00011, "gnat__heap_sort_aS");
   pragma Export (C, u00012, "systemS");
   pragma Export (C, u00013, "system__exception_tableB");
   pragma Export (C, u00014, "system__exception_tableS");
   pragma Export (C, u00015, "gnat__htableB");
   pragma Export (C, u00016, "gnat__htableS");
   pragma Export (C, u00017, "system__exceptionsS");
   pragma Export (C, u00018, "system__machine_state_operationsB");
   pragma Export (C, u00019, "system__machine_state_operationsS");
   pragma Export (C, u00020, "system__machine_codeS");
   pragma Export (C, u00021, "system__storage_elementsB");
   pragma Export (C, u00022, "system__storage_elementsS");
   pragma Export (C, u00023, "system__secondary_stackB");
   pragma Export (C, u00024, "system__secondary_stackS");
   pragma Export (C, u00025, "system__parametersB");
   pragma Export (C, u00026, "system__parametersS");
   pragma Export (C, u00027, "system__soft_linksB");
   pragma Export (C, u00028, "system__soft_linksS");
   pragma Export (C, u00029, "system__stack_checkingB");
   pragma Export (C, u00030, "system__stack_checkingS");
   pragma Export (C, u00031, "system__tracebackB");
   pragma Export (C, u00032, "system__tracebackS");
   pragma Export (C, u00033, "ada__streamsS");
   pragma Export (C, u00034, "ada__tagsB");
   pragma Export (C, u00035, "ada__tagsS");
   pragma Export (C, u00036, "system__string_opsB");
   pragma Export (C, u00037, "system__string_opsS");
   pragma Export (C, u00038, "interfacesS");
   pragma Export (C, u00039, "interfaces__c_streamsB");
   pragma Export (C, u00040, "interfaces__c_streamsS");
   pragma Export (C, u00041,
"system__file_ioB");
   pragma Export (C, u00042, "system__file_ioS");
   pragma Export (C, u00043, "ada__finalizationB");
   pragma Export (C, u00044, "ada__finalizationS");
   pragma Export (C, u00045, "system__finalization_rootB");
   pragma Export (C, u00046, "system__finalization_rootS");
   pragma Export (C, u00047, "system__finalization_implementationB");
   pragma Export (C, u00048, "system__finalization_implementationS");
   pragma Export (C, u00049, "system__string_ops_concat_3B");
   pragma Export (C, u00050, "system__string_ops_concat_3S");
   pragma Export (C, u00051, "system__stream_attributesB");
   pragma Export (C, u00052, "system__stream_attributesS");
   pragma Export (C, u00053, "ada__io_exceptionsS");
   pragma Export (C, u00054, "system__unsigned_typesS");
   pragma Export (C, u00055, "system__file_control_blockS");
   pragma Export (C, u00056, "ada__finalization__list_controllerB");
   pragma Export (C, u00057, "ada__finalization__list_controllerS");

   -- BEGIN ELABORATION ORDER
   --  ada (spec)
   --  gnat (spec)
   --  gnat.heap_sort_a (spec)
   --  gnat.heap_sort_a (body)
   --  gnat.htable (spec)
   --  gnat.htable (body)
   --  interfaces (spec)
   --  system (spec)
   --  system.machine_code (spec)
   --  system.parameters (spec)
   --  system.parameters (body)
   --  interfaces.c_streams (spec)
   --  interfaces.c_streams (body)
   --  system.standard_library (spec)
   --  ada.exceptions (spec)
   --  system.exception_table (spec)
   --  system.exception_table (body)
   --  ada.io_exceptions (spec)
   --  system.exceptions (spec)
   --  system.storage_elements (spec)
   --  system.storage_elements (body)
   --  system.machine_state_operations (spec)
   --  system.machine_state_operations (body)
   --  system.secondary_stack (spec)
   --  system.stack_checking (spec)
   --  system.soft_links (spec)
   --  system.soft_links (body)
   --  system.stack_checking (body)
   --  system.secondary_stack (body)
   --  system.standard_library (body)
   --  system.string_ops (spec)
   --  system.string_ops (body)
   --  ada.tags (spec)
   --  ada.tags (body)
   --  ada.streams (spec)
   --  system.finalization_root (spec)
   --  system.finalization_root (body)
   --  system.string_ops_concat_3 (spec)
   --  system.string_ops_concat_3 (body)
   --  system.traceback (spec)
   --  system.traceback (body)
   --  ada.exceptions (body)
   --  system.unsigned_types (spec)
   --  system.stream_attributes (spec)
   --  system.stream_attributes (body)
   --  system.finalization_implementation (spec)
   --  system.finalization_implementation (body)
   --  ada.finalization (spec)
   --  ada.finalization (body)
   --  ada.finalization.list_controller (spec)
   --  ada.finalization.list_controller (body)
   --  system.file_control_block (spec)
   --  system.file_io (spec)
   --  system.file_io (body)
   --  ada.text_io (spec)
   --  ada.text_io (body)
   --  hello (body)
   -- END ELABORATION ORDER

end ada_main;

--  The following source file name pragmas allow the generated file
--  names to be unique for different main programs. They are needed
--  since the package name will always be Ada_Main.

pragma Source_File_Name (ada_main, Spec_File_Name => "b~hello.ads");
pragma Source_File_Name (ada_main, Body_File_Name => "b~hello.adb");

--  Generated package body for Ada_Main starts here

package body ada_main is

   --  The actual finalization is performed by calling the
   --  library routine in System.Standard_Library.Adafinal

   procedure Do_Finalize;
   pragma Import (C, Do_Finalize, "system__standard_library__adafinal");

   -------------
   -- adainit --
   -------------

   procedure adainit is

      --  These booleans are set to True once the associated unit has
      --  been elaborated. It is also used to avoid elaborating the
      --  same unit twice.
      E040 : Boolean;
      pragma Import (Ada, E040, "interfaces__c_streams_E");
      E008 : Boolean;
      pragma Import (Ada, E008, "ada__exceptions_E");
      E014 : Boolean;
      pragma Import (Ada, E014, "system__exception_table_E");
      E053 : Boolean;
      pragma Import (Ada, E053, "ada__io_exceptions_E");
      E017 : Boolean;
      pragma Import (Ada, E017, "system__exceptions_E");
      E024 : Boolean;
      pragma Import (Ada, E024, "system__secondary_stack_E");
      E030 : Boolean;
      pragma Import (Ada, E030, "system__stack_checking_E");
      E028 : Boolean;
      pragma Import (Ada, E028, "system__soft_links_E");
      E035 : Boolean;
      pragma Import (Ada, E035, "ada__tags_E");
      E033 : Boolean;
      pragma Import (Ada, E033, "ada__streams_E");
      E046 : Boolean;
      pragma Import (Ada, E046, "system__finalization_root_E");
      E048 : Boolean;
      pragma Import (Ada, E048, "system__finalization_implementation_E");
      E044 : Boolean;
      pragma Import (Ada, E044, "ada__finalization_E");
      E057 : Boolean;
      pragma Import (Ada, E057, "ada__finalization__list_controller_E");
      E055 : Boolean;
      pragma Import (Ada, E055, "system__file_control_block_E");
      E042 : Boolean;
      pragma Import (Ada, E042, "system__file_io_E");
      E006 : Boolean;
      pragma Import (Ada, E006, "ada__text_io_E");

      --  Set_Globals is a library routine that stores away the
      --  value of the indicated set of global values in global
      --  variables within the library.

      procedure Set_Globals
        (Main_Priority            : Integer;
         Time_Slice_Value         : Integer;
         WC_Encoding              : Character;
         Locking_Policy           : Character;
         Queuing_Policy           : Character;
         Task_Dispatching_Policy  : Character;
         Adafinal                 : System.Address;
         Unreserve_All_Interrupts : Integer;
         Exception_Tracebacks     : Integer);
      pragma Import (C, Set_Globals, "__gnat_set_globals");

      --  SDP_Table_Build is a library routine used to build the
      --  exception tables. See unit Ada.Exceptions in files
      --  a-except.ads/adb for full details of how zero cost
      --  exception handling works. This procedure, the call to
      --  it, and the two following tables are all omitted if the
      --  build is in longjmp/setjump exception mode.
      procedure SDP_Table_Build
        (SDP_Addresses   : System.Address;
         SDP_Count       : Natural;
         Elab_Addresses  : System.Address;
         Elab_Addr_Count : Natural);
      pragma Import (C, SDP_Table_Build, "__gnat_SDP_Table_Build");

      --  Table of Unit_Exception_Table addresses. Used for zero
      --  cost exception handling to build the top level table.

      ST : aliased constant array (1 .. 23) of System.Address :=
        (Hello'UET_Address,
         Ada.Text_Io'UET_Address,
         Ada.Exceptions'UET_Address,
         Gnat.Heap_Sort_A'UET_Address,
         System.Exception_Table'UET_Address,
         System.Machine_State_Operations'UET_Address,
         System.Secondary_Stack'UET_Address,
         System.Parameters'UET_Address,
         System.Soft_Links'UET_Address,
         System.Stack_Checking'UET_Address,
         System.Traceback'UET_Address,
         Ada.Streams'UET_Address,
         Ada.Tags'UET_Address,
         System.String_Ops'UET_Address,
         Interfaces.C_Streams'UET_Address,
         System.File_Io'UET_Address,
         Ada.Finalization'UET_Address,
         System.Finalization_Root'UET_Address,
         System.Finalization_Implementation'UET_Address,
         System.String_Ops_Concat_3'UET_Address,
         System.Stream_Attributes'UET_Address,
         System.File_Control_Block'UET_Address,
         Ada.Finalization.List_Controller'UET_Address);

      --  Table of addresses of elaboration routines. Used for
      --  zero cost exception handling to make sure these
      --  addresses are included in the top level procedure
      --  address table.

      EA : aliased constant array (1 ..
23) of System.Address :=
        (adainit'Code_Address,
         Do_Finalize'Code_Address,
         Ada.Exceptions'Elab_Spec'Address,
         System.Exceptions'Elab_Spec'Address,
         Interfaces.C_Streams'Elab_Spec'Address,
         System.Exception_Table'Elab_Body'Address,
         Ada.Io_Exceptions'Elab_Spec'Address,
         System.Stack_Checking'Elab_Spec'Address,
         System.Soft_Links'Elab_Body'Address,
         System.Secondary_Stack'Elab_Body'Address,
         Ada.Tags'Elab_Spec'Address,
         Ada.Tags'Elab_Body'Address,
         Ada.Streams'Elab_Spec'Address,
         System.Finalization_Root'Elab_Spec'Address,
         Ada.Exceptions'Elab_Body'Address,
         System.Finalization_Implementation'Elab_Spec'Address,
         System.Finalization_Implementation'Elab_Body'Address,
         Ada.Finalization'Elab_Spec'Address,
         Ada.Finalization.List_Controller'Elab_Spec'Address,
         System.File_Control_Block'Elab_Spec'Address,
         System.File_Io'Elab_Body'Address,
         Ada.Text_Io'Elab_Spec'Address,
         Ada.Text_Io'Elab_Body'Address);

      --  Start of processing for adainit

   begin
      --  Call SDP_Table_Build to build the top level procedure
      --  table for zero cost exception handling (omitted in
      --  longjmp/setjump mode).

      SDP_Table_Build (ST'Address, 23, EA'Address, 23);

      --  Call Set_Globals to record various information for
      --  this partition. The values are derived by the binder
      --  from information stored in the ali files by the compiler.

      Set_Globals
        (Main_Priority => -1,
         --  Priority of main program, -1 if no pragma Priority used

         Time_Slice_Value => -1,
         --  Time slice from Time_Slice pragma, -1 if none used

         WC_Encoding => 'b',
         --  Wide_Character encoding used, default is brackets

         Locking_Policy => ' ',
         --  Locking_Policy used, default of space means not
         --  specified, otherwise it is the first character of
         --  the policy name.

         Queuing_Policy => ' ',
         --  Queuing_Policy used, default of space means not
         --  specified, otherwise it is the first character of
         --  the policy name.

         Task_Dispatching_Policy => ' ',
         --  Task_Dispatching_Policy used, default of space means
         --  not specified, otherwise first character of the
         --  policy name.
         Adafinal => System.Null_Address,
         --  Address of Adafinal routine, not used anymore

         Unreserve_All_Interrupts => 0,
         --  Set true if pragma Unreserve_All_Interrupts was used

         Exception_Tracebacks => 0);
         --  Indicates if exception tracebacks are enabled

      Elab_Final_Code := 1;

      --  Now we have the elaboration calls for all units in the partition.
      --  The Elab_Spec and Elab_Body attributes generate references to the
      --  implicit elaboration procedures generated by the compiler for
      --  each unit that requires elaboration.

      if not E040 then
         Interfaces.C_Streams'Elab_Spec;
      end if;
      E040 := True;
      if not E008 then
         Ada.Exceptions'Elab_Spec;
      end if;
      if not E014 then
         System.Exception_Table'Elab_Body;
         E014 := True;
      end if;
      if not E053 then
         Ada.Io_Exceptions'Elab_Spec;
         E053 := True;
      end if;
      if not E017 then
         System.Exceptions'Elab_Spec;
         E017 := True;
      end if;
      if not E030 then
         System.Stack_Checking'Elab_Spec;
      end if;
      if not E028 then
         System.Soft_Links'Elab_Body;
         E028 := True;
      end if;
      E030 := True;
      if not E024 then
         System.Secondary_Stack'Elab_Body;
         E024 := True;
      end if;
      if not E035 then
         Ada.Tags'Elab_Spec;
      end if;
      if not E035 then
         Ada.Tags'Elab_Body;
         E035 := True;
      end if;
      if not E033 then
         Ada.Streams'Elab_Spec;
         E033 := True;
      end if;
      if not E046 then
         System.Finalization_Root'Elab_Spec;
      end if;
      E046 := True;
      if not E008 then
         Ada.Exceptions'Elab_Body;
         E008 := True;
      end if;
      if not E048 then
         System.Finalization_Implementation'Elab_Spec;
      end if;
      if not E048 then
         System.Finalization_Implementation'Elab_Body;
         E048 := True;
      end if;
      if not E044 then
         Ada.Finalization'Elab_Spec;
      end if;
      E044 := True;
      if not E057 then
         Ada.Finalization.List_Controller'Elab_Spec;
      end if;
      E057 := True;
      if not E055 then
         System.File_Control_Block'Elab_Spec;
         E055 := True;
      end if;
      if not E042 then
         System.File_Io'Elab_Body;
         E042 := True;
      end if;
      if not E006 then
         Ada.Text_Io'Elab_Spec;
      end if;
      if not E006 then
         Ada.Text_Io'Elab_Body;
         E006 := True;
      end if;

      Elab_Final_Code := 0;
   end adainit;

   --------------
   -- adafinal --
   --------------
   procedure adafinal is
   begin
      Do_Finalize;
   end adafinal;

   ----------
   -- main --
   ----------

   --  main is actually a function, as in the ANSI C standard,
   --  defined to return the exit status. The three parameters
   --  are the argument count, argument values and environment
   --  pointer.

   function main
     (argc : Integer;
      argv : System.Address;
      envp : System.Address)
      return Integer
   is
      --  The initialize routine performs low level system
      --  initialization using a standard library routine which
      --  sets up signal handling and performs any other
      --  required setup. The routine can be found in file
      --  a-init.c.

      procedure initialize;
      pragma Import (C, initialize, "__gnat_initialize");

      --  The finalize routine performs low level system
      --  finalization using a standard library routine. The
      --  routine is found in file a-final.c and in the standard
      --  distribution is a dummy routine that does nothing, so
      --  really this is a hook for special user finalization.

      procedure finalize;
      pragma Import (C, finalize, "__gnat_finalize");

      --  We get to the main program of the partition by using
      --  pragma Import because if we try to with the unit and
      --  call it Ada style, then not only do we waste time
      --  recompiling it, but also, we don't really know the right
      --  switches (e.g. identifier character set) to be used
      --  to compile it.
      procedure Ada_Main_Program;
      pragma Import (Ada, Ada_Main_Program, "_ada_hello");

      --  Start of processing for main

   begin
      --  Save global variables

      gnat_argc := argc;
      gnat_argv := argv;
      gnat_envp := envp;

      --  Call low level system initialization

      Initialize;

      --  Call our generated Ada initialization routine

      adainit;

      --  This is the point at which we want the debugger to get
      --  control

      Break_Start;

      --  Now we call the main program of the partition

      Ada_Main_Program;

      --  Perform Ada finalization

      adafinal;

      --  Perform low level system finalization

      Finalize;

      --  Return the proper exit status

      return (gnat_exit_status);
   end;

   --  This section is entirely comments, so it has no effect on the
   --  compilation of the Ada_Main package. It provides the list of
   --  object files and linker options, as well as some standard
   --  libraries needed for the link. The gnatlink utility parses
   --  this b~hello.adb file to read these comment lines to generate
   --  the appropriate command line arguments for the call to the
   --  system linker. The BEGIN/END lines are used for sentinels for
   --  this parsing operation.

   --  The exact file names will of course depend on the environment,
   --  host/target and location of files on the host system.

   -- BEGIN Object file/option list
   --   ./hello.o
   --   -L./
   --   -L/usr/local/gnat/lib/gcc-lib/i686-pc-linux-gnu/2.8.1/adalib/
   --   /usr/local/gnat/lib/gcc-lib/i686-pc-linux-gnu/2.8.1/adalib/libgnat.a
   -- END Object file/option list

end ada_main;
The Ada code in the above example is exactly what is generated by the
binder. We have added comments to more clearly indicate the function
of each part of the generated
Ada_Main package.
The code is standard Ada in all respects, and can be processed by any
tools that handle Ada. In particular, it is possible to use the debugger
in Ada mode to debug the generated
Ada_Main package. For example,
suppose that for reasons that you do not understand, your program is crashing
during elaboration of the body of
Ada.Text_IO. To locate this bug,
you can place a breakpoint on the call:
Ada.Text_Io'Elab_Body;
and trace the elaboration routine for this package to find out where the problem might be (more usually of course you would be debugging elaboration code in your own application).
This chapter describes the handling of elaboration code in Ada 95 and in GNAT, and discusses how the order of elaboration of program units can be controlled in GNAT, either automatically or with explicit programming features.
will occur, but not the call to
Func.
In the previous section we discussed the rules in Ada 95 which ensure
that
Program_Error is raised if an incorrect elaboration order is
chosen. This prevents erroneous executions, but we need mechanisms to
specify a correct execution and avoid the exception altogether.
To achieve this, Ada 95 provides a number of features for controlling
the order of elaboration; these are described in the remainder of this
section.
If this rule is not followed, then a program may be in one of four
states:
Elaborate,
Elaborate_All, or
Elaborate_Body pragmas. In this case, an Ada 95 compiler must diagnose the situation at bind time, and refuse to build an executable program.
Program_Error will be raised when the program is run.
Note that one additional advantage of following our Elaborate_All rule is that the program continues to stay in the ideal (all orders OK) state even if maintenance changes the bodies of some subprograms.
The use of
pragma Elaborate
should generally be avoided in Ada 95 programs.
The reason for this is that pragma Elaborate guarantees only that the
body of the named unit is elaborated before the current unit; it says
nothing about the units on which that body itself depends, so pragma
Elaborate_All, which applies the requirement transitively, is
generally the safer choice. The behavior with
Elaborate_All pragmas is then exactly as specified in the Ada 95
Reference Manual, and the static model has the further benefit
of avoiding elaboration problems at run time. (GNAT also offers the
dynamic model through the -gnatE switch, described below.)
First, compile your program with the default options, using none of
the special elaboration control switches. If the binder successfully
binds your program, then you can be confident that, apart from issues
raised by the use of access-to-subprogram types and dynamic dispatching,
the program is free of elaboration errors. If it is important that the
program be portable, then use the
-gnatwl
switch to generate warnings about missing
Elaborate_All
pragmas, and supply the missing pragmas.
If the program fails to bind using the default static elaboration
handling, then you can fix the program to eliminate the binder
message, or recompile the entire program with the
-gnatE switch to generate dynamic elaboration checks,
and, if you are sure there really are no elaboration problems,
use a global pragma
Suppress (Elaboration_Check).
This is done with a helper package whose body performs the assignments:

   package Init_Constants is
      procedure P; -- require a body
   end Init_Constants;

   with Constants;
   package body Init_Constants is
      procedure P is
      begin
         null;
      end P;
      ...
   end Init_Constants;

A with of Init_Constants then ensures that the Init_Constants body is
elaborated before Main.
The assembler used by GNAT and gcc is based not on the Intel assembly language, but rather on a language that descends from the AT&T Unix assembler as (and which is often referred to as “AT&T syntax”). The following table summarizes the main features of as syntax and points out the differences from the Intel conventions. See the gcc as and gas (an as macro pre-processor) documentation for further information.
    as (AT&T) syntax      Intel syntax
    ----------------      ------------
    %eax                  eax           register names carry a % prefix
    $4                    4             immediate operands carry a $ prefix
    $loc                  loc           the address of symbol loc
    loc                   [loc]         the contents stored at symbol loc
    (%eax)                [eax]         a register-indirect memory reference
    0xA0                  A0h           hexadecimal constants
    movw                  mov           the operand size is an opcode suffix
                                        (movw to move a 16-bit word)
    rep                   rep stosl     an instruction prefix goes on its
    stosl                               own line in as
    movw $4, %eax         mov eax, 4    operand order is source then
                                        destination in as, but destination
                                        then source in Intel syntax
The example in this section illustrates how to specify the source operands for assembly language statements. The program simply increments its input value by 1:
with Interfaces;          use Interfaces;
with Ada.Text_IO;         use Ada.Text_IO;
with System.Machine_Code; use System.Machine_Code;

procedure Increment is

   function Incr (Value : Unsigned_32) return Unsigned_32 is
      Result : Unsigned_32;
   begin
      Asm ("incl %0",
           Inputs  => Unsigned_32'Asm_Input ("a", Value),
           Outputs => Unsigned_32'Asm_Output ("=a", Result));
      return Result;
   end Incr;

   Value : Unsigned_32;

begin
   Value := 5;
   Put_Line ("Value before is" & Value'Img);
   Value := Incr (Value);
   Put_Line ("Value after is" & Value'Img);
end Increment;
The
Outputs parameter to
Asm specifies
that the result will be in the eax register and that it is to be stored
in the
Result variable.
The
Inputs parameter looks much like the
Outputs parameter,
but with an
Asm_Input attribute.
The
"=" constraint, indicating an output value, is not present.
You can have multiple input variables, in the same way that you can have more than one output variable.
The operand numbering (%0, %1, and so on) starts with the output operands and continues with the input operands. In this example the input and output constraints name the same register ("a", that is, eax), so the template needs only the single reference %0.
Just as the
Outputs parameter causes the register to be stored into the
target variable after execution of the assembler statements, so does the
Inputs parameter cause its variable to be loaded into the register
before execution of the assembler statements.
Thus the effect of the
Asm invocation is:
1. load Value into eax;
2. execute the incl %eax instruction;
3. store eax into the Result variable.
The resulting assembler file (with -O2 optimization) contains:
_increment__incr.1:
        subl $4,%esp
        movl 8(%esp),%eax
#APP
        incl %eax
#NO_APP
        movl %eax,%edx
        movl %ecx,(%esp)
        addl $4,%esp
        ret

... enabled (-gnatpn instead of -gnatp).
Other Asm Functionality
This section describes two important parameters to the
Asm
procedure:
Clobber, which identifies register usage;
and
Volatile, which inhibits unwanted optimizations.
This section contains a complete program illustrating a realistic usage
of GNAT's Inline Assembler capabilities. It comprises a main procedure
Check_CPU and a package
Intel_CPU.
The package declares a collection of functions that detect the properties
of the 32-bit x86 processor that is running the program.
The main procedure invokes these functions and displays the information.
The Intel_CPU package could be enhanced by adding functions to detect the type of x386 co-processor, the processor caching options and special operations such as the SIMD extensions.
Although the Intel_CPU package has been written for 32-bit Intel compatible CPUs, it is OS neutral. It has been tested on DOS, Windows/NT and GNU/Linux.
Check_CPU Procedure
---------------------------------------------------------------------
--                                                                 --
--  Uses the Intel_CPU package to identify the CPU the program is  --
--  running on, and some of the features it supports.              --
--                                                                 --
---------------------------------------------------------------------

with Intel_CPU;         -- Intel CPU detection functions
with Ada.Text_IO;       -- Standard text I/O
with Ada.Command_Line;  -- To set the exit status

procedure Check_CPU is

   Type_Found : Boolean := False;
   --  Flag to indicate that processor was identified

   Features : Intel_CPU.Processor_Features;
   --  The processor features

   Signature : Intel_CPU.Processor_Signature;
   --  The processor type signature

begin
   -----------------------------------
   --  Display the program banner.  --
   -----------------------------------

   Ada.Text_IO.Put_Line (Ada.Command_Line.Command_Name &
                         ": check Intel CPU version and features, v1.0");
   Ada.Text_IO.Put_Line ("distribute freely, but no warranty whatsoever");
   Ada.Text_IO.New_Line;

   -----------------------------------------------------------------------
   --  We can safely start with the assumption that we are on at least  --
   --  a x386 processor. If the CPUID instruction is present, then we   --
   --  have a later processor type.                                     --
   -----------------------------------------------------------------------

   if Intel_CPU.Has_CPUID = False then

      --  No CPUID instruction, so we assume this is indeed a x386
      --  processor. We can still check if it has a FP co-processor.

      if Intel_CPU.Has_FPU then
         Ada.Text_IO.Put_Line
           ("x386-type processor with a FP co-processor");
      else
         Ada.Text_IO.Put_Line
           ("x386-type processor without a FP co-processor");
      end if;  -- check for FPU

      --  Program done
      Ada.Command_Line.Set_Exit_Status (Ada.Command_Line.Success);
      return;

   end if;  -- check for CPUID

   -----------------------------------------------------------------------
   --  If CPUID is supported, check if this is a true Intel processor,  --
   --  if it is not, display a warning.                                 --
   -----------------------------------------------------------------------

   if Intel_CPU.Vendor_ID /= Intel_CPU.Intel_Processor then
      Ada.Text_IO.Put_Line ("*** This is an Intel-compatible processor");
      Ada.Text_IO.Put_Line ("*** Some information may be incorrect");
   end if;  -- check if Intel

   ----------------------------------------------------------------------
   --  With the CPUID instruction present, we can assume at least a    --
   --  x486 processor. If the CPUID support level is < 1 then we have  --
   --  to leave it at that.                                            --
   ----------------------------------------------------------------------

   if Intel_CPU.CPUID_Level < 1 then

      --  Ok, this is a x486 processor. We still can get the Vendor ID
      Ada.Text_IO.Put_Line ("x486-type processor");
      Ada.Text_IO.Put_Line ("Vendor ID is " & Intel_CPU.Vendor_ID);

      --  We can also check if there is a FPU present
      if Intel_CPU.Has_FPU then
         Ada.Text_IO.Put_Line ("Floating-Point support");
      else
         Ada.Text_IO.Put_Line ("No Floating-Point support");
      end if;  -- check for FPU

      --  Program done
      Ada.Command_Line.Set_Exit_Status (Ada.Command_Line.Success);
      return;

   end if;  -- check CPUID level

   ---------------------------------------------------------------------
   --  With a CPUID level of 1 we can use the processor signature to  --
   --  determine its exact type.                                      --
   ---------------------------------------------------------------------

   Signature := Intel_CPU.Signature;

   ----------------------------------------------------------------------
   --  Ok, now we go into a lot of messy comparisons to get the        --
   --  processor type. For clarity, no attempt to try to optimize the  --
   --  comparisons has been made. Note that since Intel_CPU does not   --
   --  support getting cache info, we cannot distinguish between P5    --
   --  and Celeron types yet.                                          --
   ----------------------------------------------------------------------

   --  x486SL
   if Signature.Processor_Type = 2#00# and
      Signature.Family = 2#0100# and
      Signature.Model = 2#0100#
   then
      Type_Found := True;
      Ada.Text_IO.Put_Line ("x486SL processor");
   end if;

   --  x486DX2 Write-Back
   if Signature.Processor_Type = 2#00# and
      Signature.Family = 2#0100# and
      Signature.Model = 2#0111#
   then
      Type_Found := True;
      Ada.Text_IO.Put_Line ("Write-Back Enhanced x486DX2 processor");
   end if;

   --  x486DX4
   if Signature.Processor_Type = 2#00# and
      Signature.Family = 2#0100# and
      Signature.Model = 2#1000#
   then
      Type_Found := True;
      Ada.Text_IO.Put_Line ("x486DX4 processor");
   end if;

   --  x486DX4 Overdrive
   if Signature.Processor_Type = 2#01# and
      Signature.Family = 2#0100# and
      Signature.Model = 2#1000#
   then
      Type_Found := True;
      Ada.Text_IO.Put_Line ("x486DX4 OverDrive processor");
   end if;

   --  Pentium (60, 66)
   if Signature.Processor_Type = 2#00# and
      Signature.Family = 2#0101# and
      Signature.Model = 2#0001#
   then
      Type_Found := True;
      Ada.Text_IO.Put_Line ("Pentium processor (60, 66)");
   end if;

   --  Pentium (75, 90, 100, 120, 133, 150, 166, 200)
   if Signature.Processor_Type = 2#00# and
      Signature.Family = 2#0101# and
      Signature.Model = 2#0010#
   then
      Type_Found := True;
      Ada.Text_IO.Put_Line
        ("Pentium processor (75, 90, 100, 120, 133, 150, 166, 200)");
   end if;

   --  Pentium OverDrive (60, 66)
   if Signature.Processor_Type = 2#01# and
      Signature.Family = 2#0101# and
      Signature.Model = 2#0001#
   then
      Type_Found := True;
      Ada.Text_IO.Put_Line ("Pentium OverDrive processor (60, 66)");
   end if;

   --  Pentium OverDrive (75, 90, 100, 120, 133, 150, 166, 200)
   if Signature.Processor_Type = 2#01# and
      Signature.Family = 2#0101# and
      Signature.Model = 2#0010#
   then
      Type_Found := True;
      Ada.Text_IO.Put_Line
        ("Pentium OverDrive cpu (75, 90, 100, 120, 133, 150, 166, 200)");
   end if;

   --  Pentium OverDrive processor for x486 processor-based systems
   if Signature.Processor_Type = 2#01# and
      Signature.Family = 2#0101# and
      Signature.Model = 2#0011#
   then
      Type_Found := True;
      Ada.Text_IO.Put_Line
        ("Pentium OverDrive processor for x486 processor-based systems");
   end if;

   --  Pentium processor with MMX technology (166, 200)
   if Signature.Processor_Type = 2#00# and
      Signature.Family = 2#0101# and
      Signature.Model = 2#0100#
   then
      Type_Found := True;
      Ada.Text_IO.Put_Line
        ("Pentium processor with MMX technology (166, 200)");
   end if;

   --  Pentium OverDrive with MMX for Pentium (75, 90, 100, 120, 133)
   if Signature.Processor_Type = 2#01# and
      Signature.Family = 2#0101# and
      Signature.Model = 2#0100#
   then
      Type_Found := True;
      Ada.Text_IO.Put_Line
        ("Pentium OverDrive processor with MMX " &
         "technology for Pentium processor (75, 90, 100, 120, 133)");
   end if;

   --  Pentium Pro processor
   if Signature.Processor_Type = 2#00# and
      Signature.Family = 2#0110# and
      Signature.Model = 2#0001#
   then
      Type_Found := True;
      Ada.Text_IO.Put_Line ("Pentium Pro processor");
   end if;

   --  Pentium II processor, model 3
   if Signature.Processor_Type = 2#00# and
      Signature.Family = 2#0110# and
      Signature.Model = 2#0011#
   then
      Type_Found := True;
      Ada.Text_IO.Put_Line ("Pentium II processor, model 3");
   end if;

   --  Pentium II processor, model 5 or Celeron processor
   if Signature.Processor_Type = 2#00# and
      Signature.Family = 2#0110# and
      Signature.Model = 2#0101#
   then
      Type_Found := True;
      Ada.Text_IO.Put_Line
        ("Pentium II processor, model 5 or Celeron processor");
   end if;

   --  Pentium Pro OverDrive processor
   if Signature.Processor_Type = 2#01# and
      Signature.Family = 2#0110# and
      Signature.Model = 2#0011#
   then
      Type_Found := True;
      Ada.Text_IO.Put_Line ("Pentium Pro OverDrive processor");
   end if;

   --  If no type recognized, we have an unknown. Display what
   --  we _do_ know
   if Type_Found = False then
      Ada.Text_IO.Put_Line ("Unknown processor");
   end if;

   -----------------------------------------
   --  Display processor stepping level.  --
-- ----------------------------------------- Ada.Text_IO.Put_Line ("Stepping level:" & Signature.Stepping'Img); --------------------------------- -- Display vendor ID string. -- --------------------------------- Ada.Text_IO.Put_Line ("Vendor ID: " & Intel_CPU.Vendor_ID); ------------------------------------ -- Get the processors features. -- ------------------------------------ Features := Intel_CPU.Features; ----------------------------- -- Check for a FPU unit. -- ----------------------------- if Features.FPU = True then Ada.Text_IO.Put_Line ("Floating-Point unit available"); else Ada.Text_IO.Put_Line ("no Floating-Point unit"); end if; -- check for FPU -------------------------------- -- List processor features. -- -------------------------------- Ada.Text_IO.Put_Line ("Supported features: "); -- Virtual Mode Extension if Features.VME = True then Ada.Text_IO.Put_Line (" VME - Virtual Mode Extension"); end if; -- Debugging Extension if Features.DE = True then Ada.Text_IO.Put_Line (" DE - Debugging Extension"); end if; -- Page Size Extension if Features.PSE = True then Ada.Text_IO.Put_Line (" PSE - Page Size Extension"); end if; -- Time Stamp Counter if Features.TSC = True then Ada.Text_IO.Put_Line (" TSC - Time Stamp Counter"); end if; -- Model Specific Registers if Features.MSR = True then Ada.Text_IO.Put_Line (" MSR - Model Specific Registers"); end if; -- Physical Address Extension if Features.PAE = True then Ada.Text_IO.Put_Line (" PAE - Physical Address Extension"); end if; -- Machine Check Extension if Features.MCE = True then Ada.Text_IO.Put_Line (" MCE - Machine Check Extension"); end if; -- CMPXCHG8 instruction supported if Features.CX8 = True then Ada.Text_IO.Put_Line (" CX8 - CMPXCHG8 instruction"); end if; -- on-chip APIC hardware support if Features.APIC = True then Ada.Text_IO.Put_Line (" APIC - on-chip APIC hardware support"); end if; -- Fast System Call if Features.SEP = True then Ada.Text_IO.Put_Line (" SEP - Fast System Call"); end if; -- Memory 
Type Range Registers if Features.MTRR = True then Ada.Text_IO.Put_Line (" MTTR - Memory Type Range Registers"); end if; -- Page Global Enable if Features.PGE = True then Ada.Text_IO.Put_Line (" PGE - Page Global Enable"); end if; -- Machine Check Architecture if Features.MCA = True then Ada.Text_IO.Put_Line (" MCA - Machine Check Architecture"); end if; -- Conditional Move Instruction Supported if Features.CMOV = True then Ada.Text_IO.Put_Line (" CMOV - Conditional Move Instruction Supported"); end if; -- Page Attribute Table if Features.PAT = True then Ada.Text_IO.Put_Line (" PAT - Page Attribute Table"); end if; -- 36-bit Page Size Extension if Features.PSE_36 = True then Ada.Text_IO.Put_Line (" PSE_36 - 36-bit Page Size Extension"); end if; -- MMX technology supported if Features.MMX = True then Ada.Text_IO.Put_Line (" MMX - MMX technology supported"); end if; -- Fast FP Save and Restore if Features.FXSR = True then Ada.Text_IO.Put_Line (" FXSR - Fast FP Save and Restore"); end if; --------------------- -- Program done. -- --------------------- Ada.Command_Line.Set_Exit_Status (Ada.Command_Line.Success); exception when others => Ada.Command_Line.Set_Exit_Status (Ada.Command_Line.Failure); raise; end Check_CPU;
Intel_CPU Package Specification
-------------------------------------------------------------------------
--
--  file: intel_cpu.ads
--
--  *********************************************
--  * WARNING: for 32-bit Intel processors only *
--  *********************************************
--
--  This package contains a number of subprograms that are useful in
--  determining the Intel x86 CPU (and the features it supports) on
--  which the program is running.
--
--  The package is based upon the information given in the Intel
--  Application Note AP-485: "Intel Processor Identification and the
--  CPUID Instruction" as of April 1998. This application note can be
--  found on.
--
--  It currently deals with 32-bit processors only, will not detect
--  features added after april 1998, and does not guarantee proper
--  results on Intel-compatible processors.
--
--  Cache info and x386 fpu type detection are not supported.
--
--  This package does not use any privileged instructions, so should
--  work on any OS running on a 32-bit Intel processor.
--
-------------------------------------------------------------------------

with Interfaces;             use Interfaces;
-- for using unsigned types

with System.Machine_Code;    use System.Machine_Code;
-- for using inline assembler code

with Ada.Characters.Latin_1; use Ada.Characters.Latin_1;
-- for inserting control characters

package Intel_CPU is

   ----------------------
   -- Processor bits  --
   ----------------------

   subtype Num_Bits is Natural range 0 .. 31;
   -- the number of processor bits (32)

   --------------------------
   -- Processor register  --
   --------------------------

   -- define a processor register type for easy access to
   -- the individual bits
   type Processor_Register is array (Num_Bits) of Boolean;
   pragma Pack (Processor_Register);
   for Processor_Register'Size use 32;

   -------------------------
   -- Unsigned register  --
   -------------------------

   -- define a processor register type for easy access to
   -- the individual bytes
   type Unsigned_Register is record
      L1 : Unsigned_8;
      H1 : Unsigned_8;
      L2 : Unsigned_8;
      H2 : Unsigned_8;
   end record;

   for Unsigned_Register use record
      L1 at 0 range  0 ..  7;
      H1 at 0 range  8 .. 15;
      L2 at 0 range 16 .. 23;
      H2 at 0 range 24 .. 31;
   end record;

   for Unsigned_Register'Size use 32;

   ---------------------------------
   -- Intel processor vendor ID  --
   ---------------------------------

   Intel_Processor : constant String (1 .. 12) := "GenuineIntel";
   -- indicates an Intel manufactured processor

   ------------------------------------
   -- Processor signature register  --
   ------------------------------------

   -- a register type to hold the processor signature
   type Processor_Signature is record
      Stepping       : Natural range 0 .. 15;
      Model          : Natural range 0 .. 15;
      Family         : Natural range 0 .. 15;
      Processor_Type : Natural range 0 .. 3;
      Reserved       : Natural range 0 .. 262143;
   end record;

   for Processor_Signature use record
      Stepping       at 0 range  0 ..  3;
      Model          at 0 range  4 ..  7;
      Family         at 0 range  8 .. 11;
      Processor_Type at 0 range 12 .. 13;
      Reserved       at 0 range 14 .. 31;
   end record;

   for Processor_Signature'Size use 32;

   -----------------------------------
   -- Processor features register  --
   -----------------------------------

   -- a processor register to hold the processor feature flags
   type Processor_Features is record
      FPU    : Boolean;                 -- floating point unit on chip
      VME    : Boolean;                 -- virtual mode extension
      DE     : Boolean;                 -- debugging extension
      PSE    : Boolean;                 -- page size extension
      TSC    : Boolean;                 -- time stamp counter
      MSR    : Boolean;                 -- model specific registers
      PAE    : Boolean;                 -- physical address extension
      MCE    : Boolean;                 -- machine check extension
      CX8    : Boolean;                 -- cmpxchg8 instruction
      APIC   : Boolean;                 -- on-chip apic hardware
      Res_1  : Boolean;                 -- reserved for extensions
      SEP    : Boolean;                 -- fast system call
      MTRR   : Boolean;                 -- memory type range registers
      PGE    : Boolean;                 -- page global enable
      MCA    : Boolean;                 -- machine check architecture
      CMOV   : Boolean;                 -- conditional move supported
      PAT    : Boolean;                 -- page attribute table
      PSE_36 : Boolean;                 -- 36-bit page size extension
      Res_2  : Natural range 0 .. 31;   -- reserved for extensions
      MMX    : Boolean;                 -- MMX technology supported
      FXSR   : Boolean;                 -- fast FP save and restore
      Res_3  : Natural range 0 .. 127;  -- reserved for extensions
   end record;

   for Processor_Features use record
      FPU    at 0 range  0 ..  0;
      VME    at 0 range  1 ..  1;
      DE     at 0 range  2 ..  2;
      PSE    at 0 range  3 ..  3;
      TSC    at 0 range  4 ..  4;
      MSR    at 0 range  5 ..  5;
      PAE    at 0 range  6 ..  6;
      MCE    at 0 range  7 ..  7;
      CX8    at 0 range  8 ..  8;
      APIC   at 0 range  9 ..  9;
      Res_1  at 0 range 10 .. 10;
      SEP    at 0 range 11 .. 11;
      MTRR   at 0 range 12 .. 12;
      PGE    at 0 range 13 .. 13;
      MCA    at 0 range 14 .. 14;
      CMOV   at 0 range 15 .. 15;
      PAT    at 0 range 16 .. 16;
      PSE_36 at 0 range 17 .. 17;
      Res_2  at 0 range 18 .. 22;
      MMX    at 0 range 23 .. 23;
      FXSR   at 0 range 24 .. 24;
      Res_3  at 0 range 25 .. 31;
   end record;

   for Processor_Features'Size use 32;

   -------------------
   -- Subprograms  --
   -------------------

   function Has_FPU return Boolean;
   -- return True if a FPU is found
   -- use only if CPUID is not supported

   function Has_CPUID return Boolean;
   -- return True if the processor supports the CPUID instruction

   function CPUID_Level return Natural;
   -- return the CPUID support level (0, 1 or 2)
   -- can only be called if the CPUID instruction is supported

   function Vendor_ID return String;
   -- return the processor vendor identification string
   -- can only be called if the CPUID instruction is supported

   function Signature return Processor_Signature;
   -- return the processor signature
   -- can only be called if the CPUID instruction is supported

   function Features return Processor_Features;
   -- return the processors features
   -- can only be called if the CPUID instruction is supported

private

   ------------------------
   -- EFLAGS bit names  --
   ------------------------

   ID_Flag : constant Num_Bits := 21;
   -- ID flag bit

end Intel_CPU;
Intel_CPU Package Body
package body Intel_CPU is

   ---------------------------
   -- Detect FPU presence  --
   ---------------------------

   -- There is a FPU present if we can set values to the FPU Status
   -- and Control Words.
   function Has_FPU return Boolean is

      Register : Unsigned_16;
      -- processor register to store a word

   begin

      -- check if we can change the status word
      Asm (
           -- the assembler code
           "finit"              & LF & HT &    -- reset status word
           "movw $0x5A5A, %%ax" & LF & HT &    -- set value status word
           "fnstsw %0"          & LF & HT &    -- save status word
           "movw %%ax, %0",                    -- store status word

           -- output stored in Register
           -- register must be a memory location
           Outputs => Unsigned_16'Asm_output ("=m", Register),

           -- tell compiler that we used eax
           Clobber => "eax");

      -- if the status word is zero, there is no FPU
      if Register = 0 then
         return False;   -- no status word
      end if; -- check status word value

      -- check if we can get the control word
      Asm (
           -- the assembler code
           "fnstcw %0",   -- save the control word

           -- output into Register
           -- register must be a memory location
           Outputs => Unsigned_16'Asm_output ("=m", Register));

      -- check the relevant bits
      if (Register and 16#103F#) /= 16#003F# then
         return False;   -- no control word
      end if; -- check control word value

      -- FPU found
      return True;

   end Has_FPU;

   --------------------------------
   -- Detect CPUID instruction  --
   --------------------------------

   -- The processor supports the CPUID instruction if it is possible
   -- to change the value of ID flag bit in the EFLAGS register.
   function Has_CPUID return Boolean is

      Original_Flags, Modified_Flags : Processor_Register;
      -- EFLAG contents before and after changing the ID flag

   begin

      -- try flipping the ID flag in the EFLAGS register
      Asm (
           -- the assembler code
           "pushfl"               & LF & HT &    -- push EFLAGS on stack
           "pop %%eax"            & LF & HT &    -- pop EFLAGS into eax
           "movl %%eax, %0"       & LF & HT &    -- save EFLAGS content
           "xor $0x200000, %%eax" & LF & HT &    -- flip ID flag
           "push %%eax"           & LF & HT &    -- push EFLAGS on stack
           "popfl"                & LF & HT &    -- load EFLAGS register
           "pushfl"               & LF & HT &    -- push EFLAGS on stack
           "pop %1",                             -- save EFLAGS content

           -- output values, may be anything
           -- Original_Flags is %0
           -- Modified_Flags is %1
           Outputs =>
             (Processor_Register'Asm_output ("=g", Original_Flags),
              Processor_Register'Asm_output ("=g", Modified_Flags)),

           -- tell compiler eax is destroyed
           Clobber => "eax");

      -- check if CPUID is supported
      if Original_Flags(ID_Flag) /= Modified_Flags(ID_Flag) then
         return True;    -- ID flag was modified
      else
         return False;   -- ID flag unchanged
      end if; -- check for CPUID

   end Has_CPUID;

   -------------------------------
   -- Get CPUID support level  --
   -------------------------------

   function CPUID_Level return Natural is

      Level : Unsigned_32;
      -- returned support level

   begin

      -- execute CPUID, storing the results in the Level register
      Asm (
           -- the assembler code
           "cpuid",   -- execute CPUID

           -- zero is stored in eax
           -- returning the support level in eax
           Inputs => Unsigned_32'Asm_input ("a", 0),

           -- eax is stored in Level
           Outputs => Unsigned_32'Asm_output ("=a", Level),

           -- tell compiler ebx, ecx and edx registers are destroyed
           Clobber => "ebx, ecx, edx");

      -- return the support level
      return Natural (Level);

   end CPUID_Level;

   --------------------------------
   -- Get CPU Vendor ID String  --
   --------------------------------

   -- The vendor ID string is returned in the ebx, ecx and edx register
   -- after executing the CPUID instruction with eax set to zero.
   -- In case of a true Intel processor the string returned is
   -- "GenuineIntel"
   function Vendor_ID return String is

      Ebx, Ecx, Edx : Unsigned_Register;
      -- registers containing the vendor ID string

      Vendor_ID : String (1 .. 12);
      -- the vendor ID string

   begin

      -- execute CPUID, storing the results in the processor registers
      Asm (
           -- the assembler code
           "cpuid",   -- execute CPUID

           -- zero stored in eax
           -- vendor ID string returned in ebx, ecx and edx
           Inputs => Unsigned_32'Asm_input ("a", 0),

           -- ebx is stored in Ebx
           -- ecx is stored in Ecx
           -- edx is stored in Edx
           Outputs =>
             (Unsigned_Register'Asm_output ("=b", Ebx),
              Unsigned_Register'Asm_output ("=c", Ecx),
              Unsigned_Register'Asm_output ("=d", Edx)));

      -- now build the vendor ID string
      Vendor_ID( 1) := Character'Val (Ebx.L1);
      Vendor_ID( 2) := Character'Val (Ebx.H1);
      Vendor_ID( 3) := Character'Val (Ebx.L2);
      Vendor_ID( 4) := Character'Val (Ebx.H2);
      Vendor_ID( 5) := Character'Val (Edx.L1);
      Vendor_ID( 6) := Character'Val (Edx.H1);
      Vendor_ID( 7) := Character'Val (Edx.L2);
      Vendor_ID( 8) := Character'Val (Edx.H2);
      Vendor_ID( 9) := Character'Val (Ecx.L1);
      Vendor_ID(10) := Character'Val (Ecx.H1);
      Vendor_ID(11) := Character'Val (Ecx.L2);
      Vendor_ID(12) := Character'Val (Ecx.H2);

      -- return string
      return Vendor_ID;

   end Vendor_ID;

   -------------------------------
   -- Get processor signature  --
   -------------------------------

   function Signature return Processor_Signature is

      Result : Processor_Signature;
      -- processor signature returned

   begin

      -- execute CPUID, storing the results in the Result variable
      Asm (
           -- the assembler code
           "cpuid",   -- execute CPUID

           -- one is stored in eax
           -- processor signature returned in eax
           Inputs => Unsigned_32'Asm_input ("a", 1),

           -- eax is stored in Result
           Outputs => Processor_Signature'Asm_output ("=a", Result),

           -- tell compiler that ebx, ecx and edx are also destroyed
           Clobber => "ebx, ecx, edx");

      -- return processor signature
      return Result;

   end Signature;

   ------------------------------
   -- Get processor features  --
   ------------------------------

   function Features return Processor_Features is

      Result : Processor_Features;
      -- processor features returned

   begin

      -- execute CPUID, storing the results in the Result variable
      Asm (
           -- the assembler code
           "cpuid",   -- execute CPUID

           -- one stored in eax
           -- processor features returned in edx
           Inputs => Unsigned_32'Asm_input ("a", 1),

           -- edx is stored in Result
           Outputs => Processor_Features'Asm_output ("=d", Result),

           -- tell compiler that ebx and ecx are also destroyed
           Clobber => "ebx, ecx");

      -- return processor features
      return Result;

   end Features;

end Intel_CPU;
This chapter describes the compatibility issues that may arise between GNAT and other Ada 83 and Ada 95 compilation systems, and shows how GNAT can expedite porting applications developed in other Ada environments. The following subsections treat the most likely issues to be encountered.
Wide_Character;
abstract,
aliased,
protected,
requeue,
tagged, and
until are reserved in Ada 95. Existing Ada 83 code using any of these identifiers must be edited to use some alternative name.
A particular case is that representation pragmas
cannot be applied to a subprogram body. If necessary, a separate subprogram
declaration must be introduced to which the pragma can be applied.
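As a minimal sketch of this workaround (the package and procedure names here are hypothetical), the representation pragma is attached to an explicit declaration rather than to the body:

```ada
--  Hypothetical example: in Ada 83 some compilers accepted a
--  representation pragma placed directly on a subprogram body.
--  The portable form introduces a declaration and applies the
--  pragma there.
package Example is
   procedure Process;
   pragma Convention (C, Process);  --  applies to the declaration
end Example;

package body Example is
   procedure Process is
   begin
      null;  --  real work would go here
   end Process;
end Example;
```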
Requires_Body, which must then be given a dummy procedure body in the package body, which then becomes required. Another approach (assuming that this does not introduce elaboration circularities) is to add an
Elaborate_Body pragma to the package spec, since one effect of this pragma is to require the presence of a package body.
Numeric_Error is now the same as
Constraint_Error
Numeric_Error is a renaming of
Constraint_Error. This means that it is illegal to have separate exception handlers for the two exceptions. The fix is simply to remove the handler for the
Numeric_Error case (since even in Ada 83, a compiler was free to raise
Constraint_Errorin place of
Numeric_Error in all cases).
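For instance (the procedure name is hypothetical), an Ada 83 handler part with separate handlers for the two exceptions is collapsed into a single Constraint_Error handler:

```ada
--  Ada 83 code sometimes had distinct handlers:
--
--     when Constraint_Error => ...
--     when Numeric_Error    => ...   -- rejected in Ada 95: Numeric_Error
--                                    -- renames Constraint_Error
--
--  The fix is a single handler, which also covers everything Ada 83
--  was permitted to raise as Numeric_Error.
procedure Safe_Divide (X, Y : Integer; Result : out Integer) is
begin
   Result := X / Y;
exception
   when Constraint_Error =>
      --  division by zero raises Constraint_Error here in Ada 95
      Result := 0;
end Safe_Divide;
```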
pragma Interface and the floating point type attributes (Emax, Mantissa, etc.), among other items.
Ada compilers are allowed to supplement the language-defined pragmas, and
these are a potential source of non-portability. All GNAT-defined pragmas
are described in the GNAT Reference Manual, and these include several that
are specifically intended to correspond to other vendors' Ada 83 pragmas.
For migrating from VADS, the pragma
Use_VADS_Size may be useful.
For
compatibility with DEC Ada 83, GNAT supplies the pragmas
Extend_System,
Ident,
Inline_Generic,
Interface_Name,
Passive,
Suppress_All,
and
Volatile.
Other relevant pragmas include
External and
Link_With.
Some vendor-specific
Ada 83 pragmas (
Share_Generic,
Subtitle, and
Title) are
recognized, thus
avoiding compiler rejection of units that contain such pragmas; they are not
relevant in a GNAT context and hence are not otherwise implemented.
Analogous to pragmas, the set of attributes may be extended by an
implementation. All GNAT-defined attributes are described in the
GNAT Reference Manual, and these include several that are specifically
intended
to correspond to other vendors' Ada 83 attributes. For migrating from VADS,
the attribute
VADS_Size may be useful. For compatibility with DEC
Ada 83, GNAT supplies the attributes
Bit,
Machine_Size and
Type_Class.
Vendors may supply libraries to supplement the standard Ada API. If Ada 83 code uses vendor-specific libraries then there are several ways to manage this in Ada 95:
The implementation can choose any elaboration order consistent with the unit dependency relationship. This freedom means that some orders can result in Program_Error being raised due to an “Access Before Elaboration”: an attempt to invoke a subprogram before its body has been elaborated, or to instantiate a generic before the generic body has been elaborated. By default GNAT attempts to choose a safe order (one that will not encounter access before elaboration problems) by implicitly inserting Elaborate_All pragmas where needed. However, this can lead to the creation of elaboration circularities and a resulting rejection of the program by gnatbind. This issue is thoroughly described in Elaboration Order Handling in GNAT.
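A sketch of the kind of dependency involved (package names here are hypothetical): writing the Elaborate_All pragma explicitly documents the same requirement that GNAT would insert implicitly.

```ada
--  Client calls Service.Initial_Value while Client itself is being
--  elaborated, so Service's body must already be elaborated or
--  Program_Error ("Access Before Elaboration") may be raised.
--  Elaborate_All makes the binder elaborate Service's body, and the
--  bodies of everything Service depends on, first.
with Service;
pragma Elaborate_All (Service);

package Client is
   Setting : Integer := Service.Initial_Value;  --  elaboration-time call
end Client;
```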
MAY 1961
GEORGIA TECH
A Special
The American College Student: 1961
The Chairman of the Regents Speaks Up
OF LATE we have received a large amount of the "Letter-to-the-editor" variety of mail. This deluge (spring shower might be a better term) started with our November-December issue. But, the thunderclap that really set it off was Professor Glenn Rainey's powerful defense of the public schools, which appeared in the February issue. The ratio of pro to con letters on "Witness for the Defense" was over 8 to 1. Here is how some of them read:
Compliment from Chattanooga
Chattanooga, Tennessee—Professor Rainey's article, "Witness for the Defense" was the most informative, stimulating, and refreshing piece I have ever read on the subject of public schools. He is to be sincerely congratulated and thanked by all of us who have a stake in Georgia's public schools and their future. For my money the article is so tremendously worthwhile that I do hope some means will be found to bring it to the attention of the public in addition to its appearance in the Alumnus which, of course, is a very good start. You may be interested to know that even though the Westinghouse office where I work is located in downtown Chattanooga, my residence on Lookout Mountain is on Georgia soil, and that I, therefore, still reside in Georgia and pay Georgia taxes. My stake in the future of Georgia's education is no more or no less than any other parent who is fortunate enough to have three youngsters. Frank Willett
An Alumnus-Educator Concurs
North Carolina State College—I would like to congratulate you on the fine article, "Witness for the Defense," which was published in the February issue of The Georgia Tech Alumnus. I would like to add that the course in public speaking which I took under Professor Rainey many years ago was one of the most pleasant experiences I had at Georgia Tech. Please let me congratulate you again on this fine article and express the hope that its message will be taken to heart by our "fellow Georgians." F. S. Barkalow, Jr., Head, Department of Zoology
Rebuttal from Europe
Baden, Switzerland—I have studied Professor Rainey's article in the February issue with great interest. After having read many articles, especially in ASME publications about various insufficiencies in the U. S. schools it seems that I have to revise my opinion of this subject. But is it not dangerous to tell to those who are responsible for education in the U. S. that they can take it easy and rest on their achievements of the past? As an European, I am especially concerned about the chapter of Professor Rainey's article dealing with the quality of European education. As a matter of fact he does not really answer the basic question but speaks only of quantities, i.e. the number of children having the benefit of this education. But these isolated figures can impress only those not familiar with education abroad. Let us examine the statement that in U. S. about 3 times as many 16 years old children are in school than in some European countries. These figures certainly do not consider the large number of European children leaving public school at the age of 16 and attending compulsory evening schools during 3 to 4 years of apprenticeship. These children are, of course, not listed in statistics as "in school" though they get an excellent training in these evening schools. The comparison of the size of the European student body versus the number of U. S. students is highly offset by the definition of the word student and the restricted selections of subjects being taught at European universities. Only a small portion of the "children" in the age group of 16 to 23 attending school full time are called students, i.e. only those attending a university. Persons attending high schools and professional schools like for instance Engineering Schools (Technikum) are not included in this definition, thus a girl studying home economics in the U. S. is not a student in our countries. I am really curious whether everybody on the campus agrees with Professor Rainey's article. W. Leeman, Department 3A, Brown, Boveri & Company
Covington, Georgia—"Witness for the Defense" by Professor Glenn Rainey is one of the finest articles I have ever read. It expresses, in language which any layman can understand, some very practical facts about much discussed questions. The Saturday Review this week is devoted to a discussion of many educational problems and not one of them is as illuminating to me as is this article by Mr. Rainey. I believe The Saturday Evening Post would like to print this or a similar article. While I am writing, I would like to congratulate you upon the appearance and the content of your magazine. Robert O. Arnold, Chairman, Board of Regents
Another alumnus dissents strongly
New York City—I submit that an impartial judge and, indeed, the defendant must wonder in whose behalf Professor Glenn Rainey was testifying in the February issue. If I was opposed to public education, I believe that I would quote this article to show the lengths to which its defenders feel forced to go. Question 1.—Professor Rainey speaks of quantity. His only reference to quality is that a personal sampling (hearken all survivors of third year statistics) revealed that students are as well prepared as before—and possibly (not probably!) better prepared. I do not think this sufficient for today's standards or is this a premise whose only value lies in aiding fund raising? Question 2.—Professor Rainey again refers solely to how much and refers not to how well; to method and not content (and is this not the cry of "educationists who claim method to be the more important?). Question 3.—I ask Professor Rainey (cross examination) to prove that the alternatives he suggests are the only ones. And if they're not why he used them and how prevalent he thinks them to be. Question 4.—There are some great well-poisoning questions here (or aren't engineers expected to understand logical fallacy?). Is not the issue really "how much of the rearing of children should be attempted in the school system and how much in the home elsewhere? Should tax money be spent to permit parents to default on their moral obligations?" Question 5.—Does it really take a column in The Georgia Tech Alumnus to give a defense of progress—to engineers? And does Professor Rainey deny that public classrooms have been used to
allow children to do as they please? I refer him to any 1959-60 issue of The Atlantic which has aired the methods of John Dewey's advocates, pro and con. Question 6.—I agree that ideals are wonderful things. However, I contrast Professor Rainey's Dr. Byron C. Hollingshead to Harvard's Dr. James Conant and his report on the state of the American high school. There's certainly nothing wrong with patting ourselves on the back—but prematurely, or through numbers without an identified base? Question 7.—I submit that Russia is so far ahead of us because they reward the products of knowledge much better than we do. And that their low standard of living is the incentive which works. Until we offer to the men who own superior minds the same privileges we offer to men who own cement we will stay behind the Russians. Question 8.—I can only conclude from Professor Rainey's straw men that he agrees that Private Schools are superior to Public Schools. Perhaps the reasons are valid, but they are still superior. Question 9.—Is not the evidence for judging a curriculum the usefulness of it? Professor Rainey did not comment as to whether or not an education major has the necessary background for advanced study in any area other than education: Can a teacher get a master's degree in his subject specialty without having to take any undergraduate subjects? In summary I think that the prosecution has managed to sneak in one of its own witnesses. Let us defend public schools on whatever moral and philosophical principles we hold—but let us deal with the real problems—which do exist—and not try to whitewash them. Let us not use question begging or the stolen concept in hopes that our logical fallacies will not be discovered. Truth will out, but only when we do not allow our prejudices to suppress it. A. James Smith, Jr. P.S. These thoughts are offered in competition with Professor Rainey's reckoning, not his rhetoric. I hope they will not be dismissed because of their lack of style.
A football coach sends congratulations
Atlanta—I enjoyed Professor Rainey's article in the Alumnus in regards to our educational system in our nation. It was very informative and your presentation was very interesting. More of us who are in educational activities should adopt this approach to those who are critics of our system. I thoroughly enjoyed the article. John Robert Bell
A Public School Superintendent adds his thanks
Greensboro, N. C.—Mr. James Westbrook, an alumnus of Georgia Tech, has shared his February copy of The Georgia Tech Alumnus with me, and I have been most interested to read the article "Witness for the Defense." Please accept the humble and sincere appreciation of a North Carolina public school administrator for an excellent statement which I wish could have the publicity it deserves. I am sure I shall be quoting it many times in the future. P. J. Weaver
stitution long dedicated too exclusively to technology. Finally, Dr. Kenneth Wagner renders real service in calling us to look factually at the economic signs of our time. Sam T. Hurst, Dean, School of Architecture and Arts
Another Tiger heard from
Clemson College—Your March issue is one of the very finest alumni magazines I have ever seen—anywhere, any time. Can you spare me four more copies? Joe Sherman, Alumni Director
Agreement and disagreement from Virginia
Sterling, Virginia—This time I agree with Professor Rainey. Our public schools need our support. (They do not need, of course, federal money). But his defense seems rather weak in that the questions he asks, no responsible person would ask. By Mr. Rainey's broad, sweeping questions a straw man is allowed to be set up which Mr. Rainey neatly destroys. He reminds me of Ralph McGill, editor of The Atlanta Newspapers. Elroy Strickland
A faculty member agrees
Georgia Tech—I just had the great pleasure of reading Glenn Rainey's refreshing article in the February issue of The Georgia Tech Alumnus. After having read it, I felt proud to be a member of the same faculty to which its author belonged. Please accept my thanks for using your well-known communications skills to inform our alumni on such an important issue. Edward H. Loveland, Director, School of Psychology
THE FLURRY of letters from the February issue had just started slowing down when out came the March issue and a new flow of letters began:
The Face of the Institute is changing
Auburn, Alabama—As an alumnus and former faculty member, I salute you and your staff for the makeup and the content of the March edition. The quality of both photography and typography is fresh and imaginative. President Harrison's leadership as reported in "Approach to a Crisis" is most encouraging to those who love the institution and desire its reputation to continue and to grow. Your editorial and your presentation of "The Face of A Poet" and "Two for the Show" suggest a renewal of interest in academic affairs and in the arts and humanities which augurs well for an in-
President congratulated
Lakewood, Ohio—I have received your March issue of the Alumnus and thought the handling of the "crisis" forthcoming at Tech was handled in an excellent manner by President Harrison. Unfortunately, the Cleveland papers did not give this side of the story as much publicity as they did the Georgia riots. But from my observations here, the people up here were very proud of the stand of the President of Georgia in his handling of the admission problems. Through your notes on the particular classes I was able to get together with one of my fraternity brothers who I haven't seen in three years. He had been with Firestone in Akron, only 30 miles away, for two years. Bruce E. Warnock
Another country heard from
South Benfleet, Essex, England—I received my March issue a few days ago. I am so proud of the activity of the administration of our wonderful school that I must express to you, Mr. Harrison and all other persons concerned my appreciation of the wonderful way in which you are handling this crisis—as well as reporting it. I have always been very proud of being an alumnus of this institution, but I am even more proud today after reading your report which shows conclusively that we have a level-headed faculty and students at Georgia Tech today. Please accept my most hearty congratulations on a most wonderful job being done, and I am sure that all people connected with Tech can be proud of the way this crisis will be handled. Robert R. Gibson
THERE ARE FEW things in life as pleasing to an editor as reactions like these. Whether the letters are pro or con, we want to keep them coming. On page 10 of this issue, Glenn Rainey is at it again.
MAY, 1961
GEORGIA TECH
VOLUME 39, NUMBER 7
Y
CONTENTS
2. RAMBLIN'—we get letters and letters and here are just a few of them.
7. THE TECH PRIMER—a couple of Tech students take some pot shots at a few sacred cows and some not quite so sacred.
10. STUDENT LEADERSHIP—Professor Glenn Rainey returns for another encore.
12. THE COLLEGE STUDENT—a special national report on an important subject.
29. DEPARTMENTAL RESEARCH—a look at another facet of Tech's dynamic research program.
31. Staff

Bob Wallace, Jr., '49, Editor
Bill Diehl, Jr., Chief Photographer
Mary Jane Reynolds, Editorial Assistant
Tom Hall, '59, Advertising
Mary Peeks, Class Notes
THE COVER
There is no present generation as such. There is only a collection of individuals winding their own ways through the most important years of their lives—or so the theory goes, according to the special report which begins on page 13 of this issue. And, as the editors of this report point out, perhaps this is the way that it has always been.
YOU NEVER KNOW where Georgia Tech will turn up in print. For example, the February issue of the Auburn Alumnews (of all publications) features a front-page editorial entitled "A Georgia Tech Man Challenges the Auburn Spirit." I thought you might like to read its key paragraphs. Here they are: "Several weeks ago The Alumnews Editor, in search of a feature story, accompanied Alumni Field Secretary Herb White '55 on a trip to Huntsville. While there, the editor met a man from Georgia Tech—one will turn up almost anywhere you go. Like all Tech men this one had a word and a knowing smile. "The editor of The Alumnews liked the word, for it was challenge. But he didn't like the assurance of the smile, for it implied a negative answer to the challenge expressed in the form of a question: 'Do you think Auburn can bat in our league?' That was all he chose to say—that and all that he implied with the smile. "This Tech man wasn't talking about baseball, or football, or basketball, or even scholarship and research. He was talking about loyalty and school spirit. He was challenging the Auburn Spirit. "Tech men are proud because year after year Tech alumni rank among the top alumni groups in the nation in percentage of alumni participation in their loyalty program. "The Tech man at Huntsville had a little pamphlet on his desk in the engineering section at Brown Engineering. The pamphlet was a record of Georgia Tech's 'Loyalty Fund' success. He put the little pamphlet down before the editor of the Auburn alumnus, and he asked his question and smiled. "The editor of The Alumnews is not willing yet to accept the implication of that Tech man's smile. For he believes that the volunteer cards and the purchased shares in (Auburn's) Nuclear Science Center will come rolling in within the next few days.
He believes that Auburn alumni can and will answer any challenge to the Auburn Spirit (and he respects the challenge of a Georgia Tech man)—But if several weeks pass and his expectations are unfulfilled, he may print the words and music to 'Ramblin' Wreck' in this place and prepare to live forever haunted by that knowing smile of the Tech alumnus."
This is what I call putting Tech alumni on the spot. For instance, if our Roll Call should fall down this year, think what that Auburn editor might be able to say. If you haven't sent in your contribution for the 1960-61 Roll Call, I trust that you will do it now before the June 30 deadline rolls around.
TECH ALUMNUS
The Georgia Tech Student: 1961
THE GEORGIA TECH PRIMER The irritations of college life today are best reflected in satire. And, in the pages of The Rambler, Tech's successor to The Yellow Jacket, today's student gives vent to his frustrations in many articles like this:
If today's Georgia Tech were reported in the same style as a modern first-grade reader, here is how it might appear to the historian:
Artwork by David Cooper
THE PROFS See the profs. They are our leaders. Lead, lead, lead. Some profs are young. Some profs are old. All profs are sadists. What do they do for fun? They give grades. Bad grades. They think this funny. They don't have to take the course again. Profs are morally straight. They understand right and wrong. They are right. You are wrong.
STUDENTS These are students. These are students. How happy they are. Happy, happy, happy. Here is a physicist. He makes uranium. He is happy. He will be sterile in two days. But he is happy. Here is an EE. He makes electric chairs. He is very happy. Here is Fred. Fred is unhappy. He wants to string violins. Fred is a damn malcontent.
DRINKING This is a Tech man. He drinks. He drinks coffee, tea, or milk. He used to drink his whiskey clear. Damned intemperance! Now he is temperant. The WCTU is happy. The deans are happy. They have been in the communion wine. Joy, joy, joy!
FRATERNITIES This is a frat man. He is a social worker. He collects for the poor. Charity, charity, charity. Frats build school spirit. Frats build school spirit. Frats build displays. Frats build Ramblin' 'Recks. The school charges them to park their wrecks. Bureaucracy! Frat men love their frats. They also love their frats' parties. Get thee to a convent — frat man!
DEANS This is the dean. He is fairminded. Watch him play fair. Play, play, play. He is a straight shooter. He shoots straight at you and me. Bang, bang, bang. He is a good scout. He is aces. Cheer, cheer, cheer. We feel sorry for the dean. He missed his calling. He would have made a first-rate den mother.
B & G This means Buildings and Grounds. We have B & G men at Tech. They work on the buildings and the grounds. They work hard. They work very, very, very hard. They do their best. They do good type jobs. They clean rooms and halls. They mow the lawns. They keep the campus tidy and clean. They even work some freshmen's math. Something should be done for them. They should be allowed to work in the afternoons, too.
COEDS These are coeds. They live on the hill. They are in the minority. They love it. Coeds are easy to tell. They don't shave on Thursdays. Shave, shave, shave. You should be nice to coeds. You should not be gruff. You should not be rude. They will stomp you if you are. They are smart. They must be smart. They are nothing else.
SATCHEL-CARRIERS This is a satchel-carrier. He carries a satchel. Carries, carries, carries. The satchel holds books. The satchel is heavy. But the satchel-carrier doesn't mind. The satchel has a handle. Maybe even two handles. Books don't even have one handle. We like the satchel-carrier. He is a good influence on us. Good, good, good! So are our mothers.
COACHES This is a gym coach. He teaches gymnastics. He teaches us how to jump. He teaches us how to tumble. He teaches us how to break our necks in 16 different ways. He is very good at gymnastics. He thinks we should be too. What a mistake! He shows us how to do the stunts. He says they are very simple. He is very interested in them. It figures.
RATS This is a Rat. His hat is in his hand. His rat hat is a medieval tradition. It should be done away with. Ax, Ax, Ax. He is naive and innocent. He is incapable of malice. See him disassemble a police car. Such gleeful ingenuity. Glee, glee, glee. Rat caps make him conform. So do dorm counsellors. Rat caps are bad. So are dorm counsellors.
Professor Glenn Rainey returns to the pages of the Alumnus to talk about one of the great problems of today's college student. . .
The great challenge is to develop true leadership. LEADERSHIP ON A CAMPUS IS MORE THAN WINNING OFFICE THROUGH A STUDENT ELECTION.
TO TALK OF LEADERSHIP on the modern college campus is to come to grips at once with a whole set of problems and of issues that are not so much unprecedented as they are desperately urgent. If ever in the past, in defining leadership, we could be content to equate it with personal popularity, with success in attaining desirable offices, with scholastic or extra-scholastic achievement, with recognition and with honors—we cannot be content to do so now. Our needs for true leadership are so pressing that success and enviable position serve too often only to underscore the poverty of leadership in a person so spot-lighted. Our need is not for leaders but for men and women fit to take the lead—for generously motivated leaders ready to move ahead of us in the direction in which we dare not fail to go.

Our colleges exist in a world in which problems nearly insuperable are overshadowed by others yet more grave: First in gravity, one supposes, is the question of survival—not just of our way of life, not just of our country, not just of modern civilized society, but of intelligent life in our solar system, and perhaps of life itself. It is perfectly plain that unless men can invent ways to live together and to cabin in the forces of incalculable destruction, we face the wiping out of those god-like investments of consciousness and creativity which we have enjoyed for only a few seconds on our planet and which, so far as we can now know, are unique in space and time.

Second, we have the problem of winning a victory over Communist aggression and pretension as they threaten free institutions wherever such institutions survive—a victory which must be somehow effected without reconciling ourselves to the malignant counter-balance of fascist totalitarianism.

Third, we have the problem of working out the transitions in our own economy from one abundance to another without those anguishes, distortions, and paralyses which deprive us of the fruits of our genius. In the whole economic history of man, no operation has ever been attempted that is more delicate or more difficult than our effort to strike a wholesome balance between free economic enterprise on the one hand and governmental safeguards and complements on the other—safeguards and complements purposed not to hamper or destroy free enterprise but to sustain a climate in which our scientific, engineering, productive and distributive forces may thrive vigorously.

Fourth, we have the problem of working out constructively and humanely our internal problems of race, region, class, and religion—with the eyes of continents of skeptical spectators focussed upon us. A billion souls in the middle world watch while we struggle to revise old patterns of bigotry and discrimination in the image of our own incomparable ideal.

Fifth, and finally only in the present context, we have a problem of molding human beings worthy of the heritage of leisure and of gracious living which is now easily within the reach of all mankind if we can but discipline ourselves.

Against the backdrop of our needs and problems, if one should argue that it all boils down to a moral and a spiritual crisis, the answer is that indeed it does, and that it is just this crisis which confronts every home, every church, and every school. For the responsible agents of any institution of education to assume that they are completely performing their function when they turn out highly trained specialists—whom, to be sure, we must have in full supply—is to play a weary game of self-delusion and betrayal. The more effectively trained the specialist is, the more powerful an instrument or weapon he is in the hands of whatever force manages to exploit him or to win his allegiance. In a free society it is of first importance that the specialist equip himself to play a worthy role in the public processes which must control the framework in which his talents are to be employed. If he cannot think and perform in the arena of first-class citizenship and statesmanship, he yields by forfeit to more broadly equipped contestants. In a free society no man of stature has the right to beg off from the responsibilities of citizenship and of leadership.

One needs to correct the delusion, if one harbors it, that leadership on the college campus is something reserved for a special few. A college is a place where leadership is nurtured and concentrated. Every college student is inescapably a leader—a good one or a bad one! College is not preparation for life: it is one of the vital, central sectors of life, and the climate of our colleges and the character of our students are measures of our national vitality and maturity. What goes on in our students augurs the future of our society.

It follows that we must call upon our students for a devotion and a commitment. We need a leadership of integrity, of imagination, and of creativity. We cannot afford the sleazy luxury of degenerate posturing and complacency. We need a college community that is alert and informed and concerned. We need daring and idealism at work. At this crossroads of the ages we cannot consent that our best young people barter off the high calling of man's fulfillment in exchange for the mere vulgarities of thrill and self-expression. We cannot abandon our best young people to cheap exhibitionism, to self-indulgence or to irresponsible ambition. Least of all can we permit them to flounder into cynicism—that final atheism!
OVER THE PAST FOUR YEARS, this magazine has become the top winner in coverage of the college student in the national publications competition sponsored by the American Alumni Council. In 1957, The Alumnus took first place in the student category; in 1958 it came up with an honorable mention in this category; in 1959, it won both first place for student coverage and a special award for its issue on "The Georgia Tech Student: 1959"; and in 1960, another special award came its way for its issue on "The Georgia Tech Student: 1960." It stands to reason then that we are more than a mite interested in the special 16-page supplement on "The College Student" which begins on the page opposite. This supplement is the fourth such project undertaken by a national group of alumni editors called Editorial Projects for Education, Inc., dubbed "Operation Moonshooter" by the editors. This year, this group has really taken aim at the moon by attempting to portray the college student on a national level. We like their approach. They did not attempt to label or libel the present generation. They just let the students speak for themselves (a method used by The Alumnus in its two issues on "The Tech Student"). We know that you will get a better idea of the college student by reading the provocative and varied statements of 16 students in this supplement.
Times have changed. Have American college students?
THE COLLEGE STUDENT, they say, is a young person who will
. . . use a car to get to a library two blocks away, knowing full well that the parking lot is three blocks on the other side. . . . move heaven, earth, and the dean's office to enroll in a class already filled; then drop the course. . . . complain bitterly about the quality of food served in the college dining halls—while putting down a third portion. . . . declaim for four solid years that the girls at his institution or at the nearby college for women are unquestionably the least attractive females on the face of the earth; then marry one of them.
BUT there is a serious side. Today's students, many professors say, are more accomplished than the average of their predecessors. Perhaps this is because there is greater competition for college entrance, nowadays, and fewer doubtful candidates get in. Whatever the reason, the trend is important. For civilization depends upon the transmission of knowledge to wave upon wave of young people—and on the way in which they receive it, master it, employ it, add to it. If the transmission process fails, we go back to the beginning and start over again. We are never more than a generation away from total ignorance. Because for a time it provides the world's leaders, each generation has the power to change the course of history. The current wave is thus exactly as important as the one before it and the one that will come after it. Each is crucial in its own time.
Scott Thompson
Barbara Nolan
Robert Schloredt
Arthur Wortman
WHAT will the present student generation do? What are its hopes, its dreams, its principles? Will it build on our past, or reject it? Is it, as is so often claimed, a generation of timid organization people, born to be commanded? A patient band of revolutionaries, waiting for a breach? Or something in between? No one—not even the students themselves—can be sure, of course. One can only search for clues, as we do in the fourteen pages that follow. Here we look at, and listen to, college students of 1961—the people whom higher education is all about.
What are today's students like? To help find out, we invite you to join
A seminar
PHOTOS: HERB WEITMAN
Robert Thompson
Roy Muir
Ruth Vars
Galen Unger
Parker Palmer
Patricia Burgamy
Kenneth Weaver
David Gilmour
Martha Freeman
Dean Windgassen
THE fourteen young men and women pictured above come from fourteen colleges and universities, big and little, located in all parts of the United States. Some of their alma maters are private, some are state or city-supported, some are related to a church. The students' studies range widely—from science and social studies to agriculture and engineering. Outside the classroom, their interests are similarly varied. Some are athletes (one is an All-American quarterback), some are active in student government, others stick to their books. To help prepare this report, we invited all fourteen, as articulate representatives of virtually every type of campus in America, to meet for a weekend of searching discussion. The topic: themselves. The objective: to obtain some clues as to how the college student of the Sixties ticks. The resulting talk—recorded by a stenographer and presented in essence on the following pages—is a revealing portrait of young people. Most revealing—and in a way most heartening—is the lack of unanimity which the students displayed on virtually every topic they discussed. As the seminar neared its close, someone asked the group what conclusions they would reach about themselves. There was silence. Then one student spoke: "We're all different," he said. He was right. That was the only proper conclusion. Labelers, and perhaps libelers, of this generation might take note.
of students from coast to coast
Being a student is a wonderful thing.

STUDENT YEARS are exciting years. They are exciting for the participants, many of whom are on their own for the first time in their lives—and exciting for the onlooking adult. But for both generations, these are frequently painful years, as well. The students' competence, which is considerable, gets them in dutch with their elders as often as do their youthful blunders. That young people ignore the adults' soundest, most heartfelt warnings is bad enough; that they so often get away with it sometimes seems unforgivable. Being both intelligent and well schooled, as well as unfettered by the inhibitions instilled by experience, they readily identify the errors of their elders—and they are not inclined to be lenient, of course. (The one unforgivable sin is the one you yourself have never committed.) But, lacking experience, they are apt to commit many of the same mistakes. The wise adult understands this: that only in this way will they gain experience and learn tolerance—neither of which can be conferred.
"They say the student is an animal in transition. You have to wait until you get your degree, they say; then you turn the big corner and there you are. But being a student is a vocation, just like being a lawyer or an editor or a business man. This is what we are and where we are." "The college campus is an open market of ideas. I can walk around the campus, say what I please, and be a truly free person. This is our world for now. Let's face it—we'll never live in a more stimulating environment. Being a student is a wonderful and magnificent and free thing."
You go to college to learn, of course.
A STUDENT'S LIFE, contrary to the memories that alumni and alumnae may have of "carefree" days, is often described by its partakers as "the mill." "You just get in the old mill," said one student panelist, "and your head spins, and you're trying to get ready for this test and that test, and you are going along so fast that you don't have time to find yourself." The mill, for the student, grinds night and day—in classrooms, in libraries, in dining halls, in dormitories, and in scores of enterprises, organized and unorganized, classed vaguely as "extracurricular activities." Which of the activities—or what combination of activities—contributes most to a student's education? Each student must concoct the recipe for himself. "You have to get used to living in the mill and finding yourself," said another panelist. "You'll always be in the mill—all through your life."
But learning comes in many ways. "I'd like to bring up something I think is a fault in our colleges: the great emphasis on grades." "I think grades interfere with the real learning process. I've talked with people who made an A on an exam—but next day they couldn't remember half the material. They just memorized to get a good grade." "You go to college to learn, of course. But learning comes in many ways—not just from classrooms and books, but from personal relations with people: holding office in student government, and that sort of thing." "It's a favorite academic cliche, that not all learning comes from books. I think it's dangerous. I believe the greatest part of learning does come from books—just plain books."
"It's important to know you can do a good job at something."

"IT'S HARD to conceive of this unless you've been through it . . . but the one thing that's done the most for me in college is baseball. I'd always been the guy with potential who never came through. The coach worked on me; I got my control and really started going places. The confidence I gained carried over into my studies. I say extracurricular activities are worthwhile. It's important to know you can do a good job at something, whatever it is."
"The more you do, the more you seem to get done. You organize your time better."
• "No! Maybe I'm too idealistic. But I think college is a place for the pursuit of knowledge. If we're here for knowledge, that's what we should concentrate on." • "In your studies you can goof off for a while and still catch up. But in athletics, the results come right on the spot. There's no catching up, after the play is over. This carries over into your school work. I think almost everyone on our football team improved his grades last fall." • "This is true for girls, too. The more you have to do, the more you seem to get done. You organize your time better." • "I can't see learning for any other purpose than to better yourself and the world. Learning for itself is of no value, except as a hobby—and I don't think we're in school to join book clubs." SUSAN GREENHURG
• "For some people, learning is an end in itself. It can be more than a hobby. I don't think we can afford to be too snobbish about what should and what shouldn't be an end in itself, and what can or what can't be a creative channel for different people."
"In athletics, the results come right on the spot. There's no catching up, after the play."
"It seems to me you're saying that
COLLEGE is where many students meet the first great test of their personal integrity. There, where one's progress is measured at least partly by examinations and grades, the stress put upon one's sense of honor is heavy. For some, honor gains strength in the process. For others, the temptation to cheat is irresistible, and honor breaks under the strain.
Some institutions proctor all tests and examinations. An instructor, eagle-eyed, sits in the room. Others have honor systems, placing upon the students themselves the responsibility to maintain integrity in the student community and to report all violators. How well either system works varies greatly. "When you come right down to it," said one member of our student panel, "honor must be inculcated in the years before college —in the home."
"Maybe you need a Bin a test, or you donH get into medical school. And the guy ahead of you raises the average by cheating. That makes a real problem"
honor works only when it's easy." "I'm from a school with an honor system that works. But is the reason it works maybe because of the tremendous penalty that's connected with cheating, stealing, or lying? It's expulsion—and what goes along with that is that you can't get into another good school or even get a good job. It's about as bad a punishment as this country can give out, in my opinion. Does the honor system instill honor—or just fear?" "At our school the honor system works even though the penalties aren't that stiff. It's part of the tradition. Most of the girls feel they're given the responsibility to be honorable, and they accept it." "On our campus you can leave your books anywhere and they'll be there when you come back. You can even leave a tall, cold milkshake—I've done it—and when you come back two hours later, it will still be there. It won't be cold, but it will be there. You learn a respect for honor, a respect that will carry over into other fields for the rest of your life." "I'd say the minority who are top students don't cheat, because they're after knowledge. And the great majority in the middle don't cheat, because they're afraid to. But the poor students, who cheat to get by . . . The funny thing is, they're not afraid at all. I guess they figure they've nothing to lose." "Nobody is just honest or dishonest. I'm sure everyone here has been guilty of some sort of dishonest act in his lifetime. But everyone here would also say he's primarily honest. I know if I were really in the clutch I'd cheat. I admit it—and I don't necessarily consider myself dishonest because I would." "It seems to me you're saying that honor works only when it's easy." "Absolute honor is 150,000 miles out, at least. And we're down here, walking this earth with all our faults. You can look up at those clouds of honor up there and say, 'They're pretty, but I can't reach them.' Or you can shoot for the clouds. I think that's the approach I want to take.
I don't think I can attain absolute honor, but I can try—and I'd like to leave this world with that on my batting record."
"It's not how we feel about issues
"WE ARE being criticized by other people all the time, and they're stamping down on us. 'You're not doing anything,' they say. I've noticed an attitude among students: Okay, just keep criticizing. But we're going to come back and react. In some ways we're going to be a little rebellious. We're going to show you what we can really do."
Today's college students are perhaps the most thoroughly analyzed generation in our history. And they are acutely aware of what is being written about them. The word that rasps their nerves most sorely is "apathy." This is a generation, say many critics, that plays it cool. It may be casually interested in many things, but it is excited by none.
"Our student legislature fought most of the year about taking stands. The majority rationalized, saying it wasn't our place; what good would it do? They were afraid people ivould check the college in future years and if they took an unpopular stand they wouldn't get security clearance or tvouldnt get a job. I thought this was awful. But I see indications of an atvakening of interest. It isn't how we feel about issues, but whether we feel at all." "I'm sure if s practically the same everywhere. We have 5,500 full-time students, but only fifteen or twenty of us went on the sit-downs." "I think there is a great deal of student opinion about public issues. It isn't always rational, and maybe we don't talk about it, but I think most of us have definite feelings about most things."
Is the criticism deserved? Some college students and their professors think it is. Others blame the times —times without deprivation, times whose burning issues are too colossal, too impersonal, too remote— and say that the apparent student lassitude is simply society's lassitude in microcosm.
"I've felt the apathy at my school. The university is a sort of isolated little world. Students don't feel the big issues really concern them. The civil rights issue is close to home, but you'd have to chase a student down to get him to give his honest opinion."
The quotation that heads this column is from one of the members of our student panel. At the right is what some of the others think.
"We're quick to criticize, sloiv to act." "Do you think that just because students in America don't cause revolutions and riots and take active stands, this means . . .?"
"I'm not calling for revolution. I'm calling for interest, and I don't care what side the student takes, as long as he takes a side." "But even ivhen we went down to Woolworth's carrying a picket sign, what were some of the motives behind it? Was it just to get a day away from classes?"
but whether we feel at all." "I attended a discussion where Negro students presented their views. I have never seen a group of more dynamic or dedicated or informed students."
"But they had a personal reason." "That's just it. The only thing I can think of, where students took a stand on our campus, ivas when it was decided that it wasnt proper to have a brewery sponsor the basketball team on television. This caused a lot of student discussion, but it's the only instance I can remember." "Why is there this unwillingness to take stands?" "I think one big reason is that it's easier not to. It's much easier for a person just to go along." "I've sensed the feeling that unless it really burns ivithin you, unless there is something where you can see just what you have done, you might as well just let the world roll on as it is rolling along. After all, people are going to act in the same old way, no matter what we try to do. Society is going to eventually come out in the same way, no matter what I, as an individual, try to do." "A lot of us hang back, saying, 'Well, why have an idea now? If 11 probably be different when Tm 45.' " "And you ask yourself, Can I take time away from my studies? You ask yourself, Which is more important? Which is more urgent to me?" "Another reason is fear of repercussions—-fear of offending people. I went on some sit-doivns and I didn't sit uneasy just because the manager of the store gave me a dirty scowl—but because my friends, my grandparents, were looking at me with an uneasy scowl."
We need a purpose other than security and an $18,000 job."
"Perhaps 'waiting' is the attitude of our age—in every generation."
"Then there comes the obvious question, With all this waiting, what are we waiting for? Are we waiting for some disaster that will make us do something? Or are we waiting for some 'national purpose1 to come along, so we can jump on its bandwagon? So we are at a train station; what's coming?''''
"I GUESS one of the things that bother us is that there is no great issue we feel we can personally come to grips with." The panel was discussing student purposes. "We need a purpose," one member said. "I mean a purpose other than a search for security, or getting that $18,000-a-year job and being content for the rest of your life." "Isn't that the typical college student's idea of his purpose?" "Yes, but that's not a purpose. The generation of
the Thirties—let's say they had a purpose. Perhaps we'll get one, someday." "They had to have a purpose. They were starving, almost." "They were dying of starvation and we are dying of overweight. And yet we still should have a purpose—a real purpose, with some point to it other than selfish mediocrity. We do have a burning issue—just plain survival. You'd think that would be enough to make us react. We're not helpless. Let's do something."
Have students changed? — Some professors' opinions
"YES, indeed," a professor said recently, "I'd say students have changed greatly in the last ten years and—academically, at least—for the better. In fact, there's been such a change lately that we may have to revise our sophomore language course. What was new to students at that level three years ago is now old hat to most of them.
"But I have to say something negative, too," the professor went on. "I find students more neurotic, more insecure, than ever before. Most of them seem to have no goal. They're intellectually stimulated, but they don't know where they're going. I blame the world situation—the insecurity of everything today." "I can't agree with people who see big changes in students," said another professor, at another school. "It seems to me they run about the same, year after year. We have the bright, hard-working ones, as we have always had, and we have the ones who are just coasting along, who don't know why they're in school —just as we've always had." "They're certainly an odd mixture at that age—a combination of conservative and romantic," a third professor said. "They want the world to run in their way, without having any idea how the world actually
runs. They don't understand the complexity of things; everything looks black or white to them. They say, 'This is what ought to be done. Let's do it!'" "If their parents could listen in on their children's bull sessions, I think they'd make an interesting discovery," said another faculty member. "The kids are talking and worrying about the same things their fathers and mothers used to talk and worry about when they were in college. The times have certainly changed, but the basic agony—the bittersweet agony of discovering its own truths, which every generation has to go through—is the same as it's always been. "Don't worry about it. Don't try to spare the kids these pains, or tell them they'll see things differently when they're older. Let them work it out. This is the way we become educated—and maybe even civilized." "I'd add only one thing," said a professor emeritus who estimates he has known 12,000 students over the years. "It never occurred to me to worry about students as a group or a class or a generation. I have worried about them as individuals. They're all different. By the way: when you learn that, you've made a pretty profound discovery."
The material on this and the preceding 15 pages is the product of a cooperative endeavor in which scores of schools, colleges, and universities are taking part. It was prepared under the direction of the group listed below, who form EDITORIAL PROJECTS FOR EDUCATION, a non-profit organization associated with the American Alumni Council. All rights reserved; no part of this supplement may be reproduced without express permission of the editors. Copyright © 1961 by Editorial Projects for Education, Inc., 1785 Massachusetts Ave., N.W., Washington 6, D.C. Printed in U.S.A.
"The College Student"

DENTON BEAL, Carnegie Institute of Technology; DAVID A. BURR, The University of Oklahoma; DAN ENDSLEY, Stanford University; DAN H. FENN, JR., Harvard Business School; RANDOLPH L. FORT, Emory University; J. ALFRED GUEST, Amherst College; L. FRANKLIN HEALD, The University of New Hampshire; CHARLES M. HELMKEN, St. John's University; WALDO C. M. JOHNSTON, Yale University; JEAN D. LINEHAN, American Alumni Council; MARALYN ORBISON, Swarthmore College; ROBERT L. PAYTON, Washington University; FRANCES PROVENCE, Baylor University; ROBERT M. RHODES, The University of Pennsylvania; VERNE A. STADTMAN, The University of California; FREDERIC A. STOTT, Phillips Academy (Andover); FRANK J. TATE, The Ohio State University; ERIK WENSBERG, Columbia University; CHARLES E. WIDMAYER, Dartmouth College; REBA WILCOXON, The University of Arkansas; ELIZABETH B. WOOD, Sweet Briar College; CHESLEY WORTHINGTON, Brown University; CORBIN GWALTNEY, Executive Editor.
Today's word for Tech research is also DEPARTMENTAL

Chemistry's Spicer: "Without departmental research support, we would be in bad shape in our very important graduate programs."
MAY, 1961

WHAT MAKES a university outstanding? Obviously, there is no simple answer to this question, but university and college administrators would probably agree unanimously that the foundations of a first-rate academic institution must ultimately rest upon the strength of its academic departments. In turn, the strength of any academic department is dependent upon the way in which each faculty member meets his responsibility to the institution. The prime responsibilities of a faculty member at an academic institution come under these three categories: instruction of students, contributions to the general knowledge of a chosen discipline, and contributions to the operation and future planning of the institution. These responsibilities are often interrelated, and a contribution to one is frequently a contribution to one or both of the others. Teaching and basic research—the more generally applied names of the first two responsibilities—are the first obligations of a member of a departmental faculty. Though often thought of separately, they need not be. To teach is to pass on to new generations the best of the collective knowledge of the past. But the knowledge of the past is different each year, and the professor who teaches the same things next year as he teaches this will fall behind and be of less and less value with passing years. The professor is rightfully expected to be an authority in his field. In order to remain an authority he must engage in research on the frontiers of this field. Conversely, the responsible teacher who is continually seeking to make a contribution to the basic understanding of his field will be interested in passing on to his students his new-found understanding. Georgia Tech's academic departments all have similar basic research programs aimed at properly satisfying these responsibilities. One of its most active yet most typical departments is the School of Chemistry. Here's how Dr. Monroe Spicer, director of this school, describes its efforts in these areas: "Our permanent faculty now consists of twenty-one Ph.D.'s. Their degrees were obtained from many of the
great universities: Cal Tech, M.I.T., Illinois ( 2 ) , Duke, Michigan, Washington, Virginia ( 2 ) , California, Tennessee, Harvard, Northwestern, Stanford ( 2 ) , Kansas, Chicago, North Carolina, Ohio State, McGill (Canada), and Graz (Austria). Besides representing a broad educational background, this faculty has a broad subject matter interest ranging from biochemistry to theoretical chemistry. "At present the school is teaching 1931 undergraduates, 48 graduate students, and two post-doctorate students. Graduate students have come from such well known institutions as M.I.T., Minnesota, Illinois, N.Y.U., and, of course, Georgia Tech. But the real test of the reputation of a department in graduate work is whether it can attract postdoctorals, students who already have the highest earned degree. They continue merely because they are attracted to the work of a particularly outstanding professor. "At present, our two post-doctoral students in chemistry are from the University of Pennsylvania and Washington State. In recent years, others have been from Harvard, Cal Tech, and the University of London. "Faculty members in the department are expected to engage in research as part of their regular duties. Research is not separated from teaching. In the upper levels, the teaching consists of teaching the students how to do research. Since chemistry is a 'pure' science, the research done in the department is 'pure' or basic research. That is, it is directed towards the increase in the basic knowledge of chemistry. It is not programmatic, i.e., none of it is being done on problems suggested by outside agencies and directed towards a particular end result. (Tech, of course, is well equipped to handle programmatic, or sponsored research through its Engineering Experiment Station.) 
"How is the basic research financed in the department if the research is of such a nature that it is unlikely that the results of the research will be of immediate practical use to either industry or government? Financial support (Continued on page 30) 29
comes from three principal sources: the school through its regular budget, certain government agencies such as the National Science Foundation and the National Institutes of Health, and private foundations such as the American Chemical Society, the Sloan Foundation, and other foundations set up by industrial companies. Each of these sources recognizes the importance of the contributions of basic research and all have made, particularly in the past few years, increasing amounts of funds available to the department and to individual faculty members. "For example, our faculty members at the School of Chemistry have received the following grants during the last few months. From the National Science Foundation: (1) $29,900 to Dr. Jack Hine to investigate the "Polar Effects on Equilibria in Organic Chemistry." (2) $29,700 to James R. Ray for a study of "Thermodynamic Properties of Alkali and Alkaline Earth Nitrites and Nitrates." (3) $30,000 to Dr. W. M. Spicer to help with the purchase of a Nuclear Magnetic Resonance Spectrometer. (4) $95,400 to help with the construction of a third floor on the Chemistry Annex. This grant was made only because this space is to be used mainly for graduate research. From the Petroleum Research Fund of the American Chemical Society: $21,555 to Dr. Robert A. Pierotti for an investigation of "The Interaction of Gases with Boron Nitride." From the Atomic Energy Commission: $5,885 to Dr. Henry M. Neumann for a study of "Solvent Extraction of Halo-Complexes." Other active research grants in the School of Chemistry: From the Alfred P. Sloan Foundation, $25,000 to Dr. William H. Eberhardt and $22,000 to Dr. Jack Hine. A $45,000 grant from the National Institutes of Health, and a $15,000 grant from the National Science Foundation to Dr. John Dyer. A $19,700 grant from the National Science Foundation to Dr. D. K. Carpenter.
A $15,900 grant from the Petroleum Research Fund of the American Chemical Society to Dr. Erling Grovenstein. A $16,000 grant from the National Science Foundation to Dr. Herman A. Flaschka. A $10,800 grant from the National Science Foundation to Dr. Donald J. Royer. "The aggregate total of the above grants, excluding the $95,400 for building, amounts to the impressive sum of $285,240. These funds came to the institution to support basic research in chemistry conducted within the department. (They are over and above the much larger sums that come to our Engineering Experiment Station to support sponsored programs.) "It should be emphasized that this support is for the research being done by graduate students for their theses. It is research that we must be doing if we are to be a graduate department. If this outside support were to end, we would have one of two choices; we could try to support it with institutional funds or we could discontinue graduate work. "Some might wonder if research of this kind is really of any importance. Is anyone on the outside interested in it? Normally, scientists submit the results of their research to the scientific journals of national and international circulation for publication. These journals, due to their limited financial support, must be very discriminating in deciding what they will publish. Each article is first submitted to several reviewers who are authorities in the given field and these reviewers decide whether the research is worth publication. If the reviews are favorable, if the editor is impressed, and if space is available, the article is published. During the last three years, the faculty members of Tech's School of Chemistry have published 21 such articles, usually with one or more graduate students as co-authors. Such publications as these establish the reputation of the institution among scientists in industry as well as in the academic world. Many of these papers have attracted a great deal of attention. 
For example, one faculty member (Dr. Hine) received over 200 requests for reprints of his papers last year alone. These came from most of the civilized countries of the world. Of course, this merely proves that our research is important enough to be published in scientific journals and to be of interest to other scientists. But is it really important? Why is basic research important? One very obvious reason is that it is the life blood of applied research. Basic research furnishes the new facts and laws on which applied research lives. It has often been pointed out, for example, that the cure for cancer will likely not be found by those who are seeking a cure for cancer. The cure will much more likely result from fundamental advances in physiology, chemistry, etc. "Finally, we wish to reiterate that this particular research activity is not something separate from teaching. One of our reasons for engaging in research is to be able to give our students the very best instruction. We at Tech should not be satisfied with less than the best." TECH ALUMNUS
New Chemical-Ceramic Building approved
CLAUDE PETTY, head of Tech's Physical Plant Department, recently disclosed that plans for the new Chemical-Ceramic Engineering Building have been approved by the Board of Regents. The new four-story building, designed by Atlanta architects Finch, Alexander, Barnes, Rothschild, and Paschal, will be located on Fourth Street between Atlantic Drive and Hemphill Street. It is expected to be completed for the fall quarter of 1963.

New deck to be added to East Stands

THROUGH AN OPTION PLAN, Georgia Tech is finally going ahead with the double-decking of the East Stands of Grant Field, it was announced in Atlanta in mid-May. The new double-deck, which effectively adds over 4,000 good seats to the East Stands as well as replaces the old temporary stands with good seats, will be completed for the 1962 season. Alumni in Georgia and immediate surrounding areas are being offered by mail the opportunity to purchase 10-year options at the following rates: $250 per seat in the covered lower deck stands from the 25-yard lines to the 50; $200 per seat in the upper deck from the 50-yard line to the 30-yard line south and in the covered sections from the 25-yard lines to the 12-yard lines both north and south. Options are also being offered Atlanta football fans. All option purchases are on a first-come, first-served basis. The option purchaser must buy tickets in the new seats at the regular price in addition to purchasing the option. Complete information on the plan is available to any Tech alumnus by writing Bob Eskew, business manager, Georgia Tech Athletic Association, 190 3rd Street, N.W., Atlanta.

President finally gets an airplane

THE NEW Cessna 182 now being flown by President Harrison is a gift from an anonymous Tech alumnus. The airplane, which arrived in Atlanta on March 3, was donated to help ease the president's extremely heavy travel schedule. Of course, President Harrison will do his own piloting.

Buckingham heads subcommittee staff

DR. WALTER BUCKINGHAM, Director of the School of Industrial Management and Professor of Economics at Tech, has been appointed staff director of the Holland Congressional Subcommittee on Unemployment and the Impact of Automation. Dr. Buckingham has previously served as consultant to the U. S. Senate-House Economics Committee. Congressman Elmer J. Holland, Chairman of the Subcommittee on Unemployment and Automation, stated, "The members of my Committee are more than pleased by Dr. Buckingham's acceptance of our offer to serve as our Staff Director. "We feel very fortunate in having Dr. Buckingham join us, for he has made exhaustive studies on the subject of automation, and his book 'Automation—Its Impact on Business and People' is the latest publication on this problem now facing our people. "Dr. Buckingham has had considerable experience in business as he is Secretary and Director of the National Executive Life Insurance Company; Director of Georgia Tech's Public Utility Executive Course, 1954-58; Consultant to Southern Bell Telephone and Telegraph Company, Duke Power Company and other firms. "Dr. Buckingham has also served as an impartial arbitrator of labor-management disputes in steel, textile, automobile and paper industries and was former secretary of the Southern Economic Association." The Holland Subcommittee started Public Hearings on the subject of Unemployment and the Impact of Automation in early March. One of its early witnesses was Tech English Professor Glenn Rainey.

New student leaders elected

SENIOR Joe McCutchen of Dalton, Georgia was elected president of the Georgia Tech student body in the annual elections held in March. Dick Frame was elected vice president in the first open election for that post. McCutchen is the son of alumnus Joe McCutchen, a former member of the board of trustees of the National Alumni Association.

Two professors contribute to Encyclopedia
Two TECH professors are among the new contributors to the 1961 edition of the Encyclopedia Britannica. They are Edward Foster, associate professor of English, author of the article "Freeman, Mary Eleanor"; and Joseph P. Vidosic, professor of Mechanical Engineering, who wrote two articles, "Bearings and Lubrication." NASA scientist speaks at Tech
MR. S. S. MANSON, Chief of the Materials and Structures Division of the Lewis Research Center of the National Aeronautics and Space Administration, was a visitor to the campus on March 7 and March 8. He was the fourth top scientist or engineer brought to Tech through the Neely Visiting Professorship Fund, established by Mr. and Mrs. Frank H. Neely of Atlanta. Mr. Manson is nationally known as an authority in the fields of materials and stresses, with emphasis on high temperature strain gages; dynamic measurements under engine operating conditions; thermal stress; creep and stress-rupture data correlation; and the development of general methods for analyzing stress in high temperature parts in the elastic, plastic, and creep range. During his two-day stay at Tech, the Visiting Professor held various conferences with the instructional staff and graduate students in the Schools of Mechanical Engineering, Aeronautical Engineering, Civil Engineering, and Engineering Mechanics.
Sigma Xi speaker on campus
DR. SANBORN C. BROWN, associate professor
of physics at Massachusetts Institute of Technology, discussed plasma physics, the so-called "fourth state of matter" as a Sigma Xi national lecturer at Georgia Tech's Textile Auditorium Friday, March 17. Dr. Brown's lecture was sponsored by the Society of Sigma Xi Chapters at Georgia Tech and Emory University. Dr. Brown's lecture was one of 19 that he presented on this subject in the South during the month of March. Commencement set for June 10
TECH'S 78th commencement will be held at the Fox Theater on Saturday morning, June 10. Robert T. Stevens, president of J. P. Stevens & Co., Inc. and former secretary of the Army under the Eisenhower regime, will be the commencement speaker. Baccalaureate Services will be held on June 9 at the Alexander Memorial Coliseum with the Reverend Robert E. Lee, pastor of The Lutheran Church of the Redeemer presenting the sermon. Ceramic Engineering's Moody Honored
DR. WILLIS E. MOODY, associate professor
of Ceramic Engineering, was recently installed as vice president of the American Ceramic Society's Ceramic Educational Council. Moody was honored during the Society's 63rd annual meeting held April 23-27 in Toronto, Ontario, Canada. Over 2,000 ceramic scientists, plant operators, and engineers attended the meeting of the international organization devoted to the advancement of research and production methods in the ceramic field. Two Tech alumni named to award jury
Two GEORGIA TECH alumni were among the five prominent U. S. and South American architects named to select the recipient of the 1961 R. S. Reynolds Award for distinguished achievement in architecture. The
two Tech graduates are Samuel T. Hurst, dean of Auburn University's School of Architecture and Fine Arts, and Hugh A. Stubbins, Jr. of Cambridge, Massachusetts, a Fellow of the American Institute of Architects. The award jury was announced by the AIA, which administers the $25,000 annual award. Tech men named to Tech-Georgia Drive
TWENTY-EIGHT Tech alumni have been named to head up the Joint Tech-Georgia campaign in major cities in the State outside the Atlanta area. Here is the listing for the 1961 campaign: Charles Oxford, Albany; James A. Gantt, Americus; Newman Corker, Athens; J. Wm. Weltch, Augusta; Robert H. Higdon, Bainbridge; James T. Robeson, Brunswick; John Fountain, Carrollton; Fred F. Lester, Cartersville; Phil H. Brewster, Cedartown. Also James A. Byars and George Morris, Columbus; Thomas Jones, Dalton; Alfred Eubanks, Dublin; Richard M. Dillard, Gainesville; John Hammond, Griffin; Arthur B. Edge, LaGrange; Leland Jackson, Macon; Richard Watkins, Marietta; Don Johnson, Milledgeville; W. C. Vereen, Jr., Moultrie; Karl Nixon, Newnan; Harold Clotfelter, Rome; Lee Mingledorff, Jr., Savannah; Charlie J. Mathews, Statesboro; Robert R. Jinright, Thomasville; Conner Thomson, Valdosta; Herbert Bradshaw, Jr., Waycross; and Joe Jennings, West Point.
The Clubs

ATLANTA, GEORGIA — The annual spring "Hall of Fame" meeting of the Greater Atlanta Georgia Tech Club was held on April 27, the night before the "T-Night" game. Coach Bobby Dodd was the featured speaker at the meeting and told the large crowd in attendance about Tech's spring practice and his thoughts on the coming season. Dean George Griffin inducted the
following former Tech athletes into the "Hall of Fame" in his inimitable fashion: Homer Whelchel (track), J. Frank Willett (tennis), Watts Gunn (golf), General K. J. "Wooch" Fielder (baseball and football), Lewis "Automobile" Clark (football), Leon Hardeman (football), and the late Mack Tharpe (football). Bob Tharpe accepted his late brother's certificate of membership from Dean Griffin. General Fielder, who flew in from Hawaii for the ceremonies, received a special hand from the crowd. During the business meeting, reports were heard from committees including the "T-Night" ticket sales committee and the nominating committee. The following officers were elected to serve for the coming year: W. A. "Bill" Horne, president; Massey Clarkson, 1st vice president; Ewell Pope, 2nd vice president, and Allen Hardin, treasurer. CHARLOTTE, NORTH CAROLINA — A record
178 alumni and wives turned out for the March 16 meeting of the Charlotte Club to hear Coach Bobby Dodd talk about football and basketball. Vice president Harold Couch presided in the absence of President John Hill who was out of the city on a special business trip. Couch introduced the new board of directors for the club which included Elmore Camp, John Hill, and Couch for 1961; Howard Duvall, James Teat, and Austin Thies for 1962; and Jim Buchanon, J. Ed Council, and W. G. Thomas for 1963. • Special guests were Gene McEver and Lowell Mason, friends of Coach Dodd; Roane Beard, and Jesse Berry of the Tech coaching staff. Bill Therrell introduced Coach Dodd. Next meeting of the club will be the annual outing on May 13 at Lake James. The invitation was issued by Charles Witmar on behalf of the Mill Power Supply Company. CHATTANOOGA, TENNESSEE—Over 150 Geor-
gia Tech alumni and wives attended the
Each year, Tech holds a special day for high school counselors on the campus. Here are some scenes of this year's event where the counselors can talk with their former students and Tech officials about college.
TECH ALUMNUS
Chattanooga Club dinner dance on Saturday night, March 11. Bob Huffaker, '57, served as master of ceremonies for Lou Blanks, '38, who was in the hospital. Special guests of the club were Professor and Mrs. Robert E. Stiemke and Mr. and Mrs. Roane Beard from Atlanta; Coach and Mrs. Humpy Heywood of Baylor School, and Mr. and Mrs. Bob Sherman (Personnel Director for DuPont of Chattanooga). Marvin A. Turner, '59, gave a report on the scholarship program. J. Frank Willett, '45, Vice President of the Georgia Tech National Alumni Association, introduced the two men from Atlanta. COLUMBUS,
OHIO—The
Columbus,
Ohio
Georgia Tech Club had its first official visitor from the Georgia Tech campus on Friday night, February 24 when 23 alumni and wives heard from Executive Secretary Roane Beard. A question and answer period followed a talk on the Association and the Institution; campus slides and the 1960 football highlights were shown. President William M. McGrew, 52, presided at the meeting. The next meeting is tentatively set for May 19. GREENSBORO,
NORTH
CAROLINA — T h e
Greensboro Georgia Tech Club held its annual ladies night dinner meeting on March 30. Cecil Adamson, '24, president of the club, presided. Beautiful camellias were provided for all the ladies by Mr. M. S. Hill, '11. Officers elected for the coming year were: A. I. " G u s " Merkle, III, president; Hal Strickland, vice president; and James H. Perry, secretary-treasurer. Guest speaker was Roane Beard who talked on high spots of institutional and alumni activities and shov/ed "The 1960 Football Highlights." JACKSONVILLE, FLORIDA—The greater
Jack-
sonville Georgia Tech Club had its Annual Meeting February 6. The meeting featured a talk by Bobby Dodd who reviewed the 1960 season and outlined prospects for the 1961 season. Coach Dodd was enthusiasti-
cally received by the approximately 100 persons in attendance. The following officers were elected: W. Ashley Verlander, president; Warren Parker, vice president; Herb Coons, secretary; and D o n Zell, treasurer. NASHVILLE,
TENNESSEE — Over 50
alumni
and wives attended the Nashville Georgia Tech Club's social meeting following the Tech-Vanderbilt basketball game on March 4. Coach Whack Hyder and Bob Eskew were guests of the club. The only business item was the election of the officers and board for the new year. They include George T. Hicks, president; 5. E. Dyer, Jr., vice president; Wallace B. Rogers, secretary; and Warren C. Wynn, treasurer. Board of Directors: (serving until 1962) John Charles Wheeler, and George A. Volkert; (serving until 1963) Herbert L. Waters, and Marion W. Swint. N E W YORK, N E W Y O R K — T h e spring meet-
ing of the New York Club was held April 13 with Coach Allie Sherman of the New York Football Giants as featured speaker. Secretary Bill Stein, new president of the New York Touchdown Club, introduced the speaker. Tech's Dorothy Crosland, director of libraries, was on hand to brief the club members of its special project, "Operations Library." ST.
LOUIS,
MISSOURI—The
football
FLORIDA—George
Barron
presided
over a unique meeting of the Florida West Coast Georgia Tech Club on February 23. Feature of the program was a tribute to Florida's football coach, Ray Graves. Graves was presented a silver cigar box inscribed from the Tech Club and then ribbed unmercifully by the record crowd. Also on hand was ex-Tech back, Pepper Rodgers, now backfield coach for Graves. Needless to say, Graves and Rodgers held their own during the ribbing session. WINSTON-SALEM,
NORTH
CAROLINA — The
Winston-Salem Alumni Club held its annual business meeting on March 3, and the following officers were elected for this year: Robert G. Schultz, president; Maxwell F . Stowers, Jr., vice president; Robert S. Chafee, secretary; and Donald L. Champion, treasurer. Jim Hartnett, the Club's past president, challenged the Auburn Alumni to a game in any sport of their choosing. T h e annual picnic this summer will include this game. RICHMOND, VIRGINIA—The Richmond Geor-
Jacksonville Club President Verlander and a friend from the Georgia Tech campus. MAY, 1961
'QC**?. Felton Gibbons died November 26, 3 0 1960. H e built and operated the Norton Company's Bauxite Plant at Bauxite, Arkansas until his retirement in 1946. He had been with the company since 1913. Mr. Gibbons is survived by his widow, who lives at Bauxite, Arkansas. ' Q Q Word Leigh, M E , died January 20, * * v 1961. N o further information was available at this writing. 'f11 Wayne James Holman, Sr., E E , died "* February 16, 1961 after a brief illness. He was owner of the Troy Laundry in Paris, Tennessee. Earlier in his career he was connected with utilities companies in Tennessee. Mr. Holman is survived by two sons, Wayne J. Holman, Jr., '28, and William G. Holman, '34.
film,
"Highlights of 1960," was shown to the members and guests attending the March 14 meeting of the St. Louis Club. The club members also enjoyed a tour of the WhiteRogers Plant and elected a new slate of officers including Harry J. Abeln, president; John B. Powers, vice president; Melville M. Zemek, secretary; and Carol Freedenthal, treasurer. TAMPA,
' Q C Gaston C. Raoul, of Lookout Moun*»*» tain, Tennessee, died September 4, 1960. His widow died unexpectedly the following week.
gia Tech Club held its winter meeting on Monday night, February 20.
' f l l Edwin H. Underwood, CE,.of " * Florida, died March 4. H e vived by his widow; son, Edwin H. wood, Jr., '41 and brother, Joel C. wood, '14, of Atlanta.
Miami, is surUnderUnder-
W '11 ' Pope Barney> -Arch, recently re' ' tired from active architectural practice in Philadelphia and is now living on his mountain farm in East Sandwich, New Hampshire. H e has just been honored by being made a Life Fellow of the International Institute of Arts & Letters for his contribution to creative art. James Echard, Arch, has returned to the United States from England where he lived for several years after retiring. His home address is 333 Cumberland Avenue, Asheville, North Carolina. William L. Heinz died September 27, 1960. His widow lives at 842 Kilbourne Road, Columbia, South Carolina.
' 1 A ^ a " ' " " Turner, president and owner ' " of Turner Realty Company in Atlanta, died March 12 of cancer. H e is survived by his widow. Don M. Forester, C E , retired from the United States Bureau of Reclamation on December 3 1 , I960, after 29 years of professional engineering service in heavy construction (Hoover D a m , Imperial D a m and Desilting Works, Shadehill D a m ) and in planning and developing land "and water resources of the arid west. He is a life member (Fellow) of the American Society of Civil Engineers and for the past many years has been listed in the several editions of Who's W h o In Engineering and Who's Who 33
tJocestntf)eNews R. W. BeaM, '18, retired as supervising engineer in the Atlanta office of Southern Bell after 38 years of service with the company. He joined Western Electric in 1922 as an engineer, and in 1924 went to Southern Bell in a similar capacity. He was promoted to his final position in 1948. / . Cleve Allen, '31, has been nominated for the 39th "All Star" Honor Roll April issue of The Insurance Salesman, leading journal in the life insurance field. Allen is Miami general agent for the Atlantabased Piedmont Southern Life Insurance Co. He entered the insurance business nine years ago. Capt. Ivan Monk, '34, USN (R), has joined De Laval Steam Turbine Co., Trenton, N. J. as manager of the service and repair department. At the time of his recent retirement from the Navy, Capt. Monk was director of the machinery div. Bureau of Ships. George E. Bevis, '37, represented The Georgia Institute of Technology at the February 23 installation of the University of Minnesota's new chancellor, Meredith Wilson. Bevis, a mechanical engineering graduate of Tech, is executive vice president of the G. H. Tennant Company of Minneapolis, Minnesota.
34
NEWS BY CLASSES - continued In the West. Last year he presented, as a gift, his personal technical library to Tech's Price Gilbert Library. He now resides in the Denver, Colorado, metropolitan area and is practicing as an engineering consultant on the planning and developing land and water resources. Milton A. Sullivan, ME, of Enka, North Carolina, died December 12, 1960 in a Columbus, Ohio hospital after an illness of several weeks. He was a research engineer with American Enka prior to his retirement two years ago. Mr. Sullivan is survived by his widow who lives at One Hillcrest, Enka, North Carolina.
'16
'17 Jameson Calvin Jones, ME, president of the Corinth Machinery Company, Corinth, Mississippi, died of a heart attack March 16, 1961. He joined the company in 1919 as a shipping clerk and had served as credit and collection manager, secretary-treasurer, vice president and had been president since 1950. Mr. Jones had been very active in Boy Scout, Rotary and YMCA work. '19 Henry Rankin Dunwoody died February 13, 1961 at the Veterans Hospital in Columbia, South Carolina. He joined DuPont in 1925 and was transferred to the Savannah River Plant in 1952. At the time of his retirement in 1959 he was foreman of the service department. His widow's address is South Pittsburgh Municipal Hospital, South Pittsburgh, Tennessee. Edwin R. Merry, Arch, died January 11, 1961. His widow lives at 173 Cleveland Park Drive, Spartanburg, S. C.
'20
'24 Robert B. MacIntyre died February 28 in a Macon, Georgia hospital. His home was 175 Peachtree Circle, Atlanta. He was in the Canadian Army during World War I. Mr. MacIntyre is survived by a brother, David W. MacIntyre, of Atlanta.
Robert Westbrook, '41, has been appointed head of the support division, Livermore Mechanical Engineering Department of the University of California's Lawrence Radiation Laboratory in Livermore, California. Westbrook joined the Radiation Laboratory in 1953.
'25 Charles W. Anderson, of College Park, Georgia, died January 20, 1961. J. W. DuBose has been named a vice president of the First National Bank of Atlanta. Prior to this appointment he was manager of the bank's Brookwood Office in Atlanta.
C. Malcom Gailey, '43, A.I.A., was the structural engineer on the Red Bud Coliseum in Gordon County, Georgia, which was selected by the American Institute of Steel Construction as one of the 12 buildings in the country to receive its architectural award of excellence for 1960.
Moultrie H. Lanier died December 27, 1960. His widow's address is P. O. Box 1279, Richmond, Virginia.
Hugh C. Harris, EE, died in July.
'26 Harry E. Blakely, CE, died April 1 in an Atlanta hospital. He was office engineer with the U. S. Bureau of Public Roads. His widow lives at 5144 Timber Trail, N.E., Atlanta, Georgia. Julian Hoke Harris, Arch, was awarded the Ivan Allen trophy at the annual awards
dinner of the Georgia Chapter, American Institute of Architects in March. This trophy is presented each year to the architect who made the greatest contribution to his community. Mr. Harris is associate professor of architecture at Georgia Tech. '29 James C. Cook, CE, has been appointed general agent with the Southern Railway System with headquarters in Birmingham, Alabama. He had been in charge of industrial development activities in Alabama and Mississippi. Bob Shelley has been elected president of the Atlanta Retail Merchants Association. '30 T. D. Dunn, Jr., president of the Glenwood National Bank, died March 31 in an Atlanta hospital. He had been ill for several weeks. Mr. Dunn was active in real estate development in DeKalb County. Some of the projects he helped develop include the Glenwood-Candler Shopping District, Glenco Shopping District and Dunaire residential and commercial real estate. His widow lives at 830 W. Ponce de Leon Avenue, Decatur, Georgia. H. Griffith Edwards, Arch., has just had a revised edition of his book, Specifications, published by D. Van Nostrand Company of New York. The book, originally published in 1953, has been an accepted text for teaching specification writing at many schools of architecture and technical institutes throughout the states. In the new edition the text and tables have been revised, updated, and supplemented, and two completely new chapters have been added, one covering "Asphalt Paving" and another entitled "Lawns and Planting." The author is a part-time Associate Professor at the School of Architecture of Georgia Tech and a member of the Atlanta firm of Edwards and Portman, Architects. William L. Quinlen, Jr., Com., has been elected president of Shelby United Neighbors, a local civic organization which serves the Memphis area in many capacities. Mr. Quinlen is president of Choctow, Inc. He lives at 4151 Tuckahoe Road, Memphis, Tennessee. Daniel M. Lewis, Jr., ME, died unexpectedly March 2 of a heart attack. His widow lives at 832 Washington Street, Tallahassee, Florida.
'32
'33 William L. Avrett, ChE, has joined Socony Mobil Oil Company, New Canaan, Connecticut, as Industrial Hygiene Toxicologist. C. Eagle Southern, Com, died of a heart attack in December, 1960 in Nashville, Tennessee. Dr. Fred Stilson Perkerson, ChE, died March 10 of a heart attack. He was head of the research department at Cone Mills, Greensboro, North Carolina. Earlier in his career, he had been head of the research department at Callaway Mills. After World War II, he worked for a number of years in Germany in scientific research in conjunction with the rehabilitation of factories and plants while employed by the U. S. Government Department of State. He is survived by his widow and two daughters.
TECH ALUMNUS
'34
Braxton Blalock, Jr., GS, vice president of Blalock Machinery and Equipment Company, Inc. in Atlanta, has been named senior vice president of Associated Equipment Distributors, a national trade association of the construction equipment industry. Ian M. Davidson, CE, has been promoted to brigadier general in the Army Reserve. He is assistant division commander of the Reserve's 81st Division in Atlanta. Mr. Davidson is division engineering manager with American Mutual Liability Insurance Company in Atlanta. L. W. Robert, III has been appointed administrative assistant to the manager of the national sales department of the Coca-Cola Company. He was formerly national sales coordinator for Food Chains. He will remain in the Atlanta office. W. M. Teem, Com., retired president of American Finishing Company and Zell Manufacturing Company in Atlanta, has announced his partnership in the Tower Travel Service in Atlanta. '35
Henry D. Geigerman, Jr., ChE, of Atlanta, has been appointed to a three year term on an advisory education committee of the Life Underwriting Council. He is associated with the Harold T. Dillon Company, general agency of National Life Insurance Company of Vermont. '36 Richard Aeck, Arch, has been elected a fellow of the American Institute of Architects for his contributions and achievements in architectural design. His works have been included in exhibitions at the Museum of Modern Art, the Smithsonian Institution and National Gallery of Art in Washington, D. C. Mr. Aeck's structures include the Georgia Tech Alexander Memorial Coliseum. Chauncey W. Huth, ME, is chief of operations analysis office of Marshall Space Flight Center, Huntsville, Alabama. '37 Colonel Richard A. Beard, Jr., GS, has joined the Atlanta Real Estate Board's "Million Dollar Round Table" for 1960. Colonel Beard, USMC (ret.) is with Ward Wright Realty Company in Atlanta. '38 Dillard Munford, ME, has been elected national vice president of Young Presidents' Organization. Mr. Munford is president of The Munford Company Inc. and Munford Do-It-Yourself Stores, Inc. with headquarters in Atlanta. '40 Charles W. Carnes, USA, has been promoted to lieutenant colonel. He is stationed with the ROTC unit at Georgia Tech. Colonel Gordon B. Cauble, USA, ME, has been assigned as commander of Headquarters, U. S. Army Signal Brigade in Heidelberg, Germany.
MAY, 1961
Dan C. Kyker, '46, has been appointed manager of materials for the General Electric Company's outdoor lighting department in Hendersonville, N. C. In the newly created post, Kyker will be responsible for the department's purchasing, shipping, and receiving operations. James C. Sheehan, '45, has been named a product line manager at Mine Safety Appliances Company in Pittsburgh. He will coordinate development and sales of the firm's line of gas masks for a wide variety of applications. Sheehan has been associated with MSA as a sales engineer. G. R. L. Shephard, '47, MS, has been named assistant division head of Humble Oil & Refining Company's manufacturing research and development div. at Baytown, Texas. He is responsible for research on fuels. Shephard joined Humble in 1947. Arnall T. Connell, '53, assistant professor of the Ohio State University School of Architecture and Landscape Architecture, has received the $3,000 Arnold W. Brunner Scholarship Award from the New York chapter of the American Institute of Architects. He will use the grant for research. Robert S. Schenck, '53, has been appointed sales manager of Electronic Devices, Inc. of New Rochelle, N. Y. Before joining EDI, Schenck was associated with National Semiconductor Corp. as district sales manager and with Thermosen, Inc. as sales manager. G. B. Rosenberger, '54, has been appointed as an advisory engineer with the IBM federal systems division command center engineering laboratory, Kingston, N. Y. He joined IBM in 1954 as a technical engineer in early SAGE computer development and has been a staff engineer since 1958.
'47 Donald S. Ross, Ch.E, died January 6, 1961. He was with the research and development department of Continental Can Company in Chicago. Mr. Ross is survived by his widow and two children, who live at 9611 Castello, Melrose Park, Illinois. '48 Harold W. Harrison, EE, has been elected president of Menlo Park Engineering by the company's board of directors. He lives at 11790 Larnel Place, Los Altos, California. William W. Stein, Ph.E, has been elected president of the Touchdown Club of New York. He is past president of the Westchester Sports Forum and an Eastern Intercollegiate and Westchester County football official. Bill is a pension consultant with Mutual of New York. He lives at the Yorktown House, Scarsdale, New York. Tom C. Campbell has been named vice president of Davidson-Kennedy Company in Atlanta. He also will serve as president of Manufacturers Products Company, a wholly-owned subsidiary of Davidson-Kennedy.
'45
'49 Gordon H. Lewis, ME, has been named Manufacturing Division Product Manager with DuPont in Wilmington, Delaware. He lives at 616 Foulkstone Road in Wilmington. H. Ed Lindsey, Jr., IM, owner and president of the MWL Tool & Supply Company in Midland, Michigan, has announced the company's purchase of 50% of the Diamond Oil Well Drilling Company. Mr. Lindsey will serve as president and manager of both companies with headquarters in Midland. Engaged: George W. Mathews, Jr., IM, to Miss Jane Kerr. The wedding will take place May 13. George is with the Columbus Iron Works in Columbus, Georgia. Born to: Mr. and Mrs. F. L. Penn, IM, a daughter, Sharon Ree, February 25. Hugh is with Revere Copper and Brass, Inc. They live at 1121 McConnel Drive, Decatur.
'50 James C. Huckaby, EE, has been appointed manager of customer engineering for Eastern Louisiana, Mississippi, Alabama and Western Florida with IBM. He will work out of New Orleans. Jim Nolan, IE, has been named head football coach at Lanier High School in Macon, Georgia. While at Tech, Jim earned 11 varsity letters and served as captain of both the basketball and track teams. Born to: Mr. and Mrs. Gerald A. O'Shea, IE, a daughter, Kathleen Marie, December 5. Mr. O'Shea is a supervisor in the Light Vehicle Production Product & Service Engineering Section at Ford Motor Company. They live at 6153 Amboy Road, Dearborn, Michigan. '51 Gerald Geller, IE, has been named chief of the Management Systems Branch in the Control Office of the Army Ballistic Missile Agency at Redstone Arsenal in Huntsville, Alabama.
'52 Married: Oliver W. Reeves, EE, to Miss Helen V. Krofft, February 17. Mr. Reeves is attending Graduate School at the University of Colorado. They live at 1090 - 11th Street, Apartment 12, Boulder. '53 Adrian D. Bolch, Jr., ME, has been promoted to mechanical engineer in the Manufacturing Technical Division at Humble Oil in Baytown, Texas. He lives at 2227 Sheridan Street, Houston, Texas. Engaged: Martin Clark, EE, to Miss Julia Mitchell. The wedding will take place in late spring. They will live in Burlington, North Carolina where Mr. Clark is with the Bell Telephone Labs. Engaged: Thomas Ralph Grimes, ME, to Miss Carol Macon. The wedding is scheduled for April 22. Mr. Grimes is with the Coca-Cola Company in Atlanta. Born to: Mr. and Mrs. Thomas J. Hallyburton, IM, a daughter, Stacey Elizabeth, December 22. Mr. Hallyburton is with the American Bridge Division of U. S. Steel Corporation. His mailing address is P. O. Box 1107, Harrisburg, Pa. Harold C. McKenzie, Jr., IE, has become a member of the firm of Troutman, Sams, Schroder & Lockerman, with offices in the William Oliver Building in Atlanta. Howard R. Siggelko, ME, is the new superintendent of the Union Bag-Camp Paper Company's box plant in Spartanburg, South Carolina. '54 J. M. Fisher, Jr., CE, has joined the Duke Power Company in Charlotte, North Carolina as an industrial development engineer. Born to: Mr. and Mrs. John Hunsinger, IE, a daughter, March 28. Johnny is with Chemstrand in Pensacola. They live at 730 Copley Drive. Married: Glenn F. Kirk, Jr., ME, to Miss Alberta Moss, December 31. Mr. Kirk is a plant engineer with Western Electric. They live at 132-35 Sanford Avenue, Apartment 3-J, Flushing 55, New York. John G. Moss, Ch.E, has joined Texaco as a chemical engineer. He lives at 3220 Fifth Street, Port Arthur, Texas. Married: George M. Poole, Jr., IM, to Miss Gretna Peacock, March 18. George is owner of the George Poole Insurance Agency in Atlanta. '55 Married: A. F. (Bob) Blair, Jr., Arch, to Miss Susan Annette Blanchard, July 16, 1960. They live at 2011 Esplanade Avenue, New Orleans 16, Louisiana. Engaged: James Chamblee Meredith, Ch.E, to Miss Sylvia Lacey. The wedding will take place June 17. Mr. Meredith is with the U. S. Public Health Service in Dallas, Texas. George P. Reynolds, Ch.E, has been promoted to chemical engineer in the distillation and finishing section of Process Technical Division at Humble Oil's Baytown, Texas refinery. '56 E. H. Howell, Jr., has been appointed superintendent of the Texas Warehouse of U. S. Steel's Tennessee Coal and Iron Division. Born to: Mr. and Mrs. H. Gary Satterwhite, IE, a daughter, Kelly Ann, March 21. They live at 564 Frary Street, Alcoa, Tennessee. Married: Harry L. Tucker, ME, to Miss Eugenia Richardson, February 10. Mr. Tucker is attending the Emory University School of Medicine in Atlanta. '57 Born to: Mr. and Mrs. Robert A. Browne, IM, a son, James Wade, July 20, 1960. Mr. Browne is a technical representative, Photo Products Department, with DuPont. They live at 3258 El Morro Drive, Jacksonville 11, Florida. Born to: Mr. and Mrs. Theodore L. Edwards, IM, a daughter, Mary Elizabeth, March 17. Mr. Edwards is manager of the Weodee Manufacturing Company. They live at 516 Waller Street, Roanoke, Alabama. Philip W. Frick, Math, is a senior programmer in the Customer Services Department, Computer Department, Philco Corporation. His home address is 3128 Hedgerow Drive, Dallas 35, Texas. Engaged: Steven Harrison Fuller, Jr., CerE, to Miss Virginia Stone. Mr. Fuller is a sales representative in the Detroit, Michigan area for the Glasrock Products Corporation. Engaged: Lt. Joseph Leslie Jennings, Jr., USMCR, TE, to Miss Anne Martin. The wedding will take place June 10. Married: William Clinton Mann, Arch, to Miss Carolyn Becknell, April 2. Born to: Mr. and Mrs. Edward L. McGaughy, Ch.E, a son, David Daniel, January 6. Mr. McGaughy is technical assistant to the mill superintendent at International Paper Company's Moss Point (Mississippi) Mill. His home address is Route 2, Grand Bay, Alabama. Born to: Mr. and Mrs. Robert R. Propp, EE, a son, Robert, Jr., February 19. Mr. Propp recently completed a tour of duty with the Army and is now with Bell Telephone Labs in Burlington, North Carolina. Engaged: W. Howard Rogers, Ch.E, to
Miss Claire Seaman. The wedding will take place June 24. Mr. Rogers is with the Georgia Power Company in Brunswick, Georgia. His address is 1103 Ocean Boulevard, St. Simons, Georgia. Engaged: Lt. Joseph Ware Rumble, USN, IM, to Miss Julia Skelton. The wedding will take place this summer. Lt. Rumble is stationed at San Diego, California. Robert John Shornhorst, Jr., IE, died unexpectedly February 21 in a Jacksonville, Florida hospital. He was vice president of Automated Metals Company in Ocala, Florida. He is survived by his widow and one son. Married: Henry Boardman Stewart, III, IE, to Miss Lillian Campbell. The wedding took place in March. Mr. Stewart is attending Graduate School at Vanderbilt University. '58 Lt. Roy E. Brown, USA, IE, recently completed the officer orientation course at the Armor School, Fort Knox, Kentucky. Lt. Samuel H. S. Fleming, USN, is assigned to the USS Southerland (DDR-743) with home port in San Diego, California. He completed his tour of duty in June and will return to DuPont in New Jersey. His current address is 3933 Promontory Street, San Diego 9, California. Engaged: Lt. Edward Patrick Kadingo, USNR, IE, to Miss Carolyn Bloodworth. The wedding will take place June 24. Lt. Kadingo is serving aboard the USS Woodson in New Orleans, Louisiana. Engaged: E. Cody Laird, Jr., ME, to Miss Joanne Herbert. The wedding will take place May 13. Mr. Laird is with Draper Owens Company in Atlanta. Married: Wilbur Franklin Lowe, Jr., Ch.E, to Miss Judith Andrea Yancey, April 22. Mr. Lowe is with Procter and Gamble Company in Cincinnati, Ohio. Engaged: Baxter Smith Raine, III, IM, to Miss Suzanne Gammel. The wedding will take place in August. Mr. Raine is with Adams-Cates Company in Atlanta.
Engaged: M. C. Schaff, CE, to Miss Darlene Strove. The wedding will take place in June. Mr. Schaff is with Magnolia Mobile Homes Manufacturing Company in Scottsbluff, Nebraska. Married: Marvin Edward Wallace, Physics, to Miss Joan Coggen, April 15. Mr. Wallace is with the Sperry Microwave Electronics Company in Clearwater, Florida. Engaged: Hugh Pattison Whitehead, Jr., IM, to Miss Lucinda Goodrum. Mr. Whitehead is with Westinghouse in Athens, Georgia. Robert L. Barnes, IM, is now associated with Pacific Mutual Life Insurance Company, Emory Jenks Agency, Atlanta, Georgia. Born to: Mr. and Mrs. Max M. Browning, IM, a daughter, Angela, February 7. They live on Country Club Road, Dublin, Georgia. Engaged: David Guy Herr, EE, to Miss Kathleen Luke. The wedding will take place in June. Mr. Herr is attending graduate school at Georgia Tech. Engaged: Hilton Ready Johnson, IM, to Miss Nancy Garner. The wedding will take place in May. Mr. Johnson is with the Jervis B. Webb Company of Georgia. Married: Henry W. Riviere to Miss Rebecca Maude Farran, March 11. They live at 6242 North Broad Street, Philadelphia 41, Pennsylvania. Walter H. Sager, IM, has been appointed overseer, synthetic roving and spinning departments at McCormick Mill, McCormick, South Carolina. Owen Schweers, ME, has been promoted to Shift Foreman with Union Bag-Camp's box plant in Spartanburg, S. C. Lt. Gerald E. Speck, USA, Ch.E, is stationed with the U. S. Army's Berlin Command. He is a helicopter pilot with the command's aviation section. Prior to this assignment he was with the 4th Armored Division at Goeppingen, Germany. Bruce E. Warnock, IM, has been promoted to senior credit analyst for the National Bank of Cleveland. He has been with the bank since 1959. Mr. Warnock lives at 15031 Madison Avenue, Lakewood 7, Ohio. '60 Engaged: Ensign Walter D. Cain, Jr., USNR, IM, to Miss Beth Gunnin. The wedding will take place in July. Ensign Cain is presently undergoing advanced flight training in Pensacola. His mailing address is BOQ 1451, Room 235, NAAS, Whiting Field, Milton, Florida. Married: Henry Clay Halliday, Jr., IM, to Miss Judith Farkas, April 22. Mr. Halliday is with the Aetna Casualty and Surety Company in Atlanta. Max W. Harral, CP, is Planning Director for Ware County and the City of Waycross. His address is 201 State Street, Room 221, Waycross, Georgia. Engaged: Robert Brooks Harris, IM, to Miss Linda Nichols. The wedding will take place June 16. Mr. Harris is with Gulf Oil Corporation in Atlanta. Born to: Mr. and Mrs. Charles E. Hill, III, IM, a son, Andrew Charles, January 18. Mr. Hill is with DuPont's May Plant. They live at 705 Kirkwood Circle, Camden, South Carolina. Married: Lt. Harvard V. Hopkins, Jr., USMC, CE, to Miss Harriett Jo Hurt, September 2, 1960. Lt. Hopkins graduated from Marine Base School in March and is now attending the Combat Engineer Course at Camp Lejeune, North Carolina. Married: Lt. Robert R. Jackson, USA, ME, to Miss Patricia Lorraine Bragg, March 11. They live in Huntsville, Alabama where Lt. Jackson is stationed at Redstone Arsenal. Born to: Mr. and Mrs. Robert E. Johnson, IM, a son, Robert E. Johnson, Jr., February 20. Mr. Johnson is with Reynolds Metals in the Production Control Department of the Alloys Plant at Listerhill, Alabama. Married: Ralph Ewing Lawrence, IE, to Miss Virginia Carroll Tyson, November 19, 1960. Engaged: Charles Turner Lewis, Jr., Ch.E, to Miss Marian Foster. The wedding will take place June 10. Mr. Lewis is attending Graduate School at Georgia Tech. Married: Lt. Thomas Wayne Mewbourne, USAF, IM, to Miss Faye Smith, April 1.
They live at March Air Force Base, California. Lawrence Wood Robert, IV, IE, has been promoted to Lt. (j.g.) and is stationed with the U. S. Navy aboard the USS Decatur (DD-936), Fleet Post Office, New York, New York. He will enter Harvard Business School upon completion of his Navy duty in the summer of 1962.
the Air Force and is now with Carrier-Atlanta Corporation in Atlanta. Engaged: Thomas Harold Espy, Jr., CE, to Miss Eugenia Marks. The wedding will take place June 24. Mr. Espy is with the Alabama Highway Department in Montgomery, Alabama. Born to: Lt. and Mrs. Peter W. Gissing, USAF, IE, a daughter, Deborah Karen, March 19, 1961. Lt. Gissing is stationed at Moody AFB, Valdosta, Georgia. Pvt. Fred T. Gillespie, USA, IM, recently completed eight weeks of military police training at the Provost Marshal General's School at Fort Gordon, Georgia. Born to: Mr. and Mrs. E. George Hudson, Jr., IM, a son, Wayne Thomas, February 13. Mr. Hudson is a trainee with the Bell Telephone Company. They live at 11201 Lynlyn Drive, Wilmington 3, Delaware. Engaged: William Carl Lineberger, EE, to Miss Aileen Jeffries. The wedding will take place in August. Mr. Lineberger will enter Graduate School at Georgia Tech in the fall. Engaged: Walter Scott Martin, EE, to Miss Hannah Sutter. The wedding will take place June 3. Mr. Martin is with the C & S Bank of Atlanta. Edmund Augustine Stawarz, Ch.E, has joined the staff of the project engineering division of Esso Research & Engineering Company, affiliate of Standard Oil Company. He lives at 100 Franklin Street, Apartment 6B8, Franklin Village, Morristown, New Jersey.
HONORARY
Mills B. Lane, Jr., president of the C & S Bank, has received the "Salesman of the Year Award" of the Atlanta Sales Executive Club.
'61 Engaged: Lt. Allen S. Becker, USAF, Ch.E, to Miss Lynne Kaye. The wedding will take place June 25. Lt. Becker is stationed at Keesler Air Force Base, Biloxi, Mississippi. Married: Robert Lee Cannon, Jr., ME, to Miss Lynnette Ard, March 25. Mr. Cannon recently completed a tour of duty with
This configuration lets users create new sites and site collections on their own, directly under a defined URL namespace, without involving an administrator.
How to configure?
Let’s open SharePoint Central Administration. On your Windows desktop, click Start >> Administrative Tools >> SharePoint 2016 Central Administration.
The web application management settings are grouped into categories.
In this part, we will look at “Self-Service Site Creation” under the “Security” category.
Self-Service Site Creation
You have an option for the start link to,
Once you click OK, the configuration you selected is applied by default across all the sites within the web application, and users will be allowed or denied site creation according to that configuration. In this article, we saw how to allow Self-Service Site Creation on a web application. There are more features under managing web applications, which we will see in the next article. Until then, keep reading and keep learning.
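The same setting can also be toggled from PowerShell instead of the Central Administration page. The following is a minimal sketch, assuming an on-premises SharePoint 2016 farm and a hypothetical web application URL (http://sharepoint.contoso.local); the `SelfServiceSiteCreationEnabled` property on the web application object corresponds to the on/off switch shown above.

```shell
# Load the SharePoint snap-in if running from plain PowerShell rather than
# the SharePoint 2016 Management Shell.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Hypothetical web application URL -- replace with your own.
$webApp = Get-SPWebApplication "http://sharepoint.contoso.local"

# Turn Self-Service Site Creation on for this web application.
$webApp.SelfServiceSiteCreationEnabled = $true
$webApp.Update()

# Display the current value to confirm the change.
$webApp.SelfServiceSiteCreationEnabled
```

Setting the property back to `$false` disallows site creation again; as with the OK button in Central Administration, the change applies to all users of the web application.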
Dwarfland
"Diane, I'm holding in my hand a small box of chocolate bunnies."
Dwarfland is an XML content feed. It is intended to be viewed in a newsreader or syndicated to another site. Visit the original site to read it in context.

Lie to Me, and Tell Me Everything will be Alright (February 10, 2006)

Until just a couple of days ago, it has been an absolute no-no to state concerns that maybe, just maybe, Borland management wasn't all that interested in the IDE products anymore. Any such comment in public (such as in the non-tech newsgroup) was greeted with calls to tar-and-feather you as a pessimist-defeatist who just wanted to badmouth Delphi, even if your comments were out of care and genuine concern for Delphi.

Two days ago it made "snap," and the official line changed (and, as a loyal and faithful Borland ^H^H^H^H^H^H^H Delphi user, you better adopt the new party line, asap): Borland announced that it indeed is en vogue to bash Borland management now.

But, you might have expected, any mentioning of such concerns in the newsgroups will of course be brandmarked as defeatist pessimism again. After all, the tar pit is still hot, so why not make use of it.

The Borlanders on the newsgroup are quick to ascertain that, of course, the Borland board of directors has no intention of just selling the IDE division to the highest bidder. Oh no! A board of directors looking out only for the interests of the shareholder and maximum profit? That's unheard of!
Well, maybe not unheard of, but certainly Borland wouldn't do that; we can trust them*. How dare anyone even entertain such an idea? No, the board of directors' main goal, we are being told, is the interest of the Delphi customers: to find a buyer that will "stay true to Delphi". Of course.

Yeah, right. For anyone who believes that is going to happen, I have, as the saying goes, this bridge in Brooklyn that you might be interested in...

So, in summary, while everything changed in Borland two days ago, nothing has changed. The rosy sunglasses are still up. We're still pretending everything will be alright, even if it's yet again a slightly different "alright". We're doing what's best for you, now. Trust us. Would we lie to you?

(*Wait a minute. Didn't you just finally come out and say the Borland board does not?)

Wake up and Smell the Coffee, Delphites

There's a lot of buzz lately about Borland, Delphi and the lack of future of the latter. Most recently, Lino from CodeFez has just posted a piece titled To Delphi or Not to Delphi about how Delphi should be split off into a separate company, because it's just not feasible for Borland to succeed with Delphi.

The reasoning is that, while Borland would of course love to please its customers with high-quality releases, you see, they just don't have the resources for it. Three-digit millions of dollars in the bank just don't cut it.
The Delphi community would be best served, Lino argues (as others have in the weeks before), by spinning off Delphi into a separate private company, with Borland owning most of the shares. That way Borland could "get some serious revenue back while still allowing all expenses to be paid for by the private Delphi company".

Hmm. Let's think about that for a moment. Right now Delphi is doing crappy, because Borland cannot afford to pour resources into it [what it can do, though, is shrink the team down to half its size after each release, to make room for a $500k raise in the CEO's salary. That's no problem], and because on top of that, a lot of the profits made from Delphi sales (assuming such sales exist for Delphi 2005) are skimmed off the Delphi unit into ALM, SDO and other TLAs that nobody, not even the Borland employees directly involved with them, can properly define in a way that another human being could comprehend. So to fix that, we're going to spin off Delphi into a separate company that not only will have to live (pay salaries, office space, hardware, marketing and what not) on what little profit Delphi drives in on its own, but will also send a good chunk of earnings back to mother Borland. Yeah. That sounds like a fine plan to me.

Question: Assuming Delphi is actually quite the cash cow it is sometimes (when it pleases the person presenting the argument) made out to be, and only has too few resources because Borland skims them all off, why on earth would Borland be so stupid as to spin it off and give up a considerable chunk of that revenue (and if they don't, what is going to improve)? On the other hand, if Delphi is not bringing in enough return to actually make its continued development feasible (as, again, it is often made out to be, sometimes even by the same people), but instead needs injections from other Borland branches to live, how could it ever survive on its own?
Either way you look at it, it doesn't make sense; the only reason to spin off Delphi would be a "lose the worthless garbage" reaction.

But is spinning Delphi off or not really the important issue?

It's funny that for a society so bent on capitalism as the highest form of existence, it seems the majority of people grasp so little about how that system works and how their involvement in it plays into the big picture. It seems that most people still think that Borland is that nice childhood friend that's only out to please them in an equal relationship. After all, why else would Borland have those nice people with (Borland) tags in the newsgroups, chatting with them as their peers? Fact is, that mutual relationship is fiction. The only relationship between you and Borland is when you fork over your 3k for Delphi 200x and they hand over 4 disks of buggy software, sending you packing if you dare to complain.

Wake up and smell the coffee: Borland isn't worried about pleasing the "Delphi Lovers", as Lino calls them. Borland, as a capitalist company (not to mention a public company) in a capitalist system, couldn't care less about them. What Borland cares about (and as a true capitalist entity should care about) is its shareholders. Do not make the mistake of thinking that if dropping the computing business altogether right now and going into selling canned fruits would mean a $.50 rise in share value and a few-hundred-k-a-year bonus to the CEO, Borland would even hesitate for a split second and think: Oh, but what about those poor Delphi users that depend on our products? Of course switching to SDO, whatever that is, seems a lot more practical than switching to canned fruits; after all, you can keep using your existing hardware and office buildings. But whatever pleases the stock market.

So does that mean Borland wouldn't spin off Delphi into a separate company? No, of course not. They might just do that.
Actually the fact that there's so much talk about it lately might even mean it's in the plans and some of the talkers know more than they can admit. But make no mistake: <i>If</i> Delphi is spun off into "The Delphi Company", it will not happen to please the Delphi Lovers - it will happen to generate the maximum revenue stream for Borland shareholders. The "Delphi Lover" will not have any part in the equation.<br /><br />And why should it? That's what capitalism is all about. If you don't get it, you must have dozed off during your 12th grade American Economy course... get a copy of The Capital and catch up.<br /><br /></p>A selection from my DVD library provided <a href="" target="_blank">Antitrust</a> (2001) as the movie for tonight. <br /> <br />It's a very fun movie. Nice score. Nice cast. And it's of course utterly pathetic in its attempt to propagate Open Source and vilify Micros...eh...Nurv as the one big evil software corporation. But just Nurv. All the others are good, and Sun's Scott McNealy - who we see handing some prize to one of the protagonists in one scene - isn't a bad greedy monopolist like Bill...i mean...Gary Winston (awesomely played by Tim Robbins). Of course not. <i>He's</i> not just out to reap the benefits of what open source is laying in his lap. No Sir. The world is just so much easier if it's black and white. <br /> <br />What ticks me off about Open Source Freaks (aka Penguins) is that they take a perfectly great and honorable concept (yes, the big bad C word), and just utterly and completely fail to understand it. Thinking they can apply it to just one niche market (and software, of all things). Right to basic needs for living, nourishment and health care? Fuck that. We love America and we love capitalism. But source code! Hey, <i>that</i> has to be free for everyone.
What kind of world would we be living in, otherwise?<br /><br /><a target="_blank" href="">James Robertson</a> just complained about Enterprise and mentioned that "the original Star Trek did better in those situations". Very very true, and actually the reason i stopped watching Enterprise mid-Season 3, after the umpteenth dialog that went "i hate the Xindi as much as the next guy...". <br /> <br />Original Star Trek (as in, pre-Enterprise) tried to raise issues and bring balanced views on current political issues - Enterprise in comparison felt like a cheap and unquestioning advertisement for the American "War on Terror", including but not limited to the token race we're being taught to hate unconditionally (whether Xindi on Enterprise or Arabs/"Islamists" in the real world). <br /> <br />It used to be that Star Trek stood for enlightened views and a world without racism, not for propagating it. But it seems that just as the show is placed closer to the present time than previous Star Trek series, so have the creators and writers become stuck in the 20th century, far far behind Gene Roddenberry. Sad, but true. (Interestingly, the same tendencies tend to shimmer through in selected episodes of the otherwise excellent The Dead Zone, which is also in the hands of Michael Pillar. Coincidence?) <br /> <br />Here's to hoping that the franchise will recover from the mistake that was Enterprise, and the future will treat us to more gems like TNG or DS9. <br /> <br />That said, i fully agree with you on being sad to see the Buffyverse leave the screen (though i have yet to see the final Angel episode; later tonight ;-). It will be dearly missed, and few (if any) worthy replacements seem to be on the horizon...<br /><br />Don't Leave Home Without It<br /><br /><a target="_blank" href="">This one</a> would start to come in handy more and more, lately... <br /> <br />Thanx to <a target="_blank" href="">Smalltalk Tidbits, Industry Rants</a> for the link!<br /><br />Product Activation<br /><br />It's rant time.
<br /> <br />While preparing a new beta build of our upcoming product, it hits me that a good idea might be to bring the <a href="" target="_blank">Borland C#Builder</a> project files that we include with the installer up to date. So i fire up C#B, which i thankfully didn't have to touch in months, and what happens? Any guesses? Yes, our friend the "Registration Wizard" pops up to make my day. Just when i started to miss that friendly little guy. <br /> <br />Reluctantly, i cancel it, figuring since i activated C#B ca 378 times before, i might get away with it this time, but no luck: i get a friendly message from the <i>License Manager</i> that "No valid license information" can be found. <br /> <br />Ok, running the Registration Wizard again. Digging out my "Licensed Software.xls" file. Copy/pasting the serial number. Sending... "Your community account cannot be found". Double-checking 3 times. No avail. No C#Builder for me, today :-( <br /> <br /><a href="" target="_blank">Go figure</a>. <br /> <br />Luckily, using C#Builder doesn't matter to me <i>that</i> much, so opening the .bdsproj in notepad and adjusting it there will do. But imagine those poor fellows that actually rely on it to develop...<br /><br />Welcome to the dwarfland. <br /> <br />As with all blogs, over time this one will be filled with random rants and ramblings that nobody wants to know about, really. <br /> <br />It might also - once in a while - impose upon you unwanted words of wisdom from whatever work is keeping me busy at the moment, recommendations for movies you won't like anyway, or complaints about the state of the world in general and the lack of - say - namespace support in Delphi 8. <br /> <br />Currently reading: <a target="_blank" href="">Chomsky: Hegemony or Survival</a>, <a href="" target="_blank">West: Object Thinking</a>.
http://feeds.feedburner.com/dwarfland
"Daniel P. Berrange" <berrange redhat com> wrote: > On Wed, Dec 17, 2008 at 08:39:04AM +0100, Jim Meyering wrote: >> Daniel Veillard <veillard redhat com> wrote: >> ... >> >> All tests pass for me with that patch. Looks good. >> > >> > Same for me, +1 ! >> >> Thanks. >> Pushed with this comment: >> >> fix numa-related (and kernel-dependent) test failures >> This change is required on some kernels due to the way a change in >> the kernel's CONFIG_NR_CPUS propagates through the numa library. >> * src/qemu_conf.c (qemudCapsInitNUMA): Pass numa_all_cpus_ptr->size/8 >> as the buffer-length-in-bytes in the call to numa_node_to_cpus, since >> that's what is required on second and subseqent calls. >> * src/uml_conf.c (umlCapsInitNUMA): Likewise. > > This change has broken the compile on Fedora 9 and earlier where the > numa_all_cpus_ptr symbol does not exist. So it needs to have a test > in configure.ac added, and if not present, go with our original code > of a fixed mask size. Fortunately on Fedora 9's libnuma, they don't > have the annoying mask size checks - that's a new Fedora 10 thing Thanks for the heads-up. While normally I'd prefer an autoconf test, it might make sense to use an #if in this case. Maybe this will do it: #if LIBNUMA_API_VERSION <= 1 use old code #else use numa_all_cpus_ptr #endif > I also just noticed that its only touching the size param passed into > the numa_node_to_cpus, but not the actual size that's allocated for the > array. This is fairly harmless....until someone does a kernel build > with NR_CPUS > 4096 I'll deal with this, too.
http://www.redhat.com/archives/libvir-list/2008-December/msg00544.html
There are broadly two ways to answer this question:
- Looking at fundamental reasons why trend following is less likely to work
- Some kind of statistical analysis
Are there fundamental reasons why trend following won't work any more?
Some people spend their entire lives opining about why this or that strategy no longer makes sense (google 'is value/momentum/xxx dead' and see how many results come back). Personally I find that a very pointless occupation.
I've always felt it is very difficult to forecast the future, and your best bet is to maintain an exposure to a diversified set of different return factors. You'd be mad to have 100% of your capital exposed to trend following (it's about 15% across my whole portfolio). However you'd be equally mad to have 0% of your capital exposed to it because you think it is dead. In this oft-quoted recent interview with David Harding, he was cutting the exposure of his fund to the strategy in half, to 25%.
When I have occasionally checked to see if exogenous conditions can be used to predict trading strategy returns I've found very weak effects if anything at all, as in this post on the effect of QE on CTA returns. Also, trend following returns tend to have negative autocorrelation for annual returns (alluded to in this blog post). So bad years tend to be followed by better years.
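That negative autocorrelation claim is easy to check on any annual return series; a minimal sketch with made-up numbers (not real CTA data):

```python
import numpy as np

# Hypothetical annual returns (%) with a bad-year-follows-good-year pattern;
# purely illustrative, not real trend-following performance.
annual_returns = np.array([12.0, -4.0, 15.0, -2.0, 18.0, -6.0, 20.0, -1.0])

# Lag-1 autocorrelation: correlation of each year's return with the next year's
lag1 = np.corrcoef(annual_returns[:-1], annual_returns[1:])[0, 1]

# A negative value is consistent with bad years tending to be followed by
# better ones (and vice versa)
print(round(lag1, 2))
```

On real data you'd want to worry about the small sample size of annual observations before reading much into the number.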
Before my patience is tested to its limit, let me quickly discuss just two of the reasons why people think trend following is dead:
- Strategy is overcrowded: possibly, but trend following is mostly a self-reinforcing strategy; unlike, say, relative value strategies (where profits get squeezed out when investors rush in), having more trend followers causes trends to last longer. Having said that, overcrowding is potentially problematic when trends end and numerous investors rush to the exits, especially as there are other players, like risk parity investors, whose behaviour will closely resemble trend followers (see February 2018). It's worth reading the excellent work of Dr. Robert Hillman on this subject.
- World is unpredictable (see Trump, also Brexit): perhaps this is true, but this unpredictability also affects discretionary human traders - I doubt any human can predict what Trump is going to do next (that probably includes Trump). Also trend following as a strategy has been around a long time, and on average it's worked despite the fact that there have always been unpredictable factors in the world. I'd be more concerned about a strategy that worked really well in the Obama presidency, but hadn't been tested before that (especially as the Obama presidency was a strong bull market in stocks).
Can statistical analysis tell us if trend following is dead?
Statistical analysis is brilliant at telling us about the past. Less useful in telling us about the future. But perhaps it can tell us that trend following has definitely stopped working? I'm keener on this approach than thinking about fundamentals - because it's a useful exercise in understanding uncertainty. Let's find out.
First we need some data. I'm going to use (a) the SG Trend index, and (b) a back-test of trend following strategy returns. The advantage of (a) is that it represents actual returns by trend followers, whilst (b) goes back longer.
Here's the SG trend index (monthly values, cumulated % returns, equivalent to a log scale):
It certainly looks like things get rather ugly after the middle of 2009.
Here's a backtest:
And for comparison a zoom of the backtest since 2000, to match the SG index:
The backtest, by the way, is just an equal weight of three EWMAC rules, equally weighted across the 37-odd futures instruments in my dataset, generated using pysystemtrade with the following configuration.
In terms of answering the question we can reframe it as:
Is there statistical evidence that the performance of the strategy is negative in the last X years?
Two open questions then are (a) how do we measure performance, and (b) what is X?
I normally measure performance using Sharpe Ratio, but I think it's more appropriate to use return here. One characteristic of trend following is that the vol isn't very stable; it tends to be low when losing and high when making money. This results in very bad rolling Sharpe Ratios in down periods, and not so good rolling Sharpes in the good times. So just this once I'm going to use return as my performance measure.
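The point about unstable vol distorting Sharpe Ratios can be seen with a toy example (made-up monthly returns): a quiet losing spell produces a far more extreme Sharpe than a volatile winning spell, even though the mean returns are comparable in magnitude.

```python
import numpy as np

# Made-up monthly returns (%): a low-vol losing spell and a high-vol winning
# spell -- the pattern described above for trend following. Not real data.
losing = np.array([-0.5, -0.3, -0.6, -0.4, -0.5, -0.3])
winning = np.array([4.0, -2.0, 5.0, -1.0, 6.0, -2.0])

def sharpe(returns):
    # Simple (non-annualised) Sharpe Ratio with a zero risk-free rate
    return returns.mean() / returns.std(ddof=1)

# Mean returns are modest in both spells, but the low vol of the losing
# spell makes its Sharpe Ratio look catastrophically bad
print(round(losing.mean(), 2), round(winning.mean(), 2))
print(round(sharpe(losing), 2), round(sharpe(winning), 2))
```

Using raw returns as the performance measure avoids this asymmetry.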
In terms of the time period we have the usual two competing effects; short periods won't give us statistical significance, longer periods won't help test the hypothesis of whether trend following is recently dead. The recent period of poor performance started either in 2009 or 2015 depending on how far back you want to go. Let's use a 2 year, 3 year, 5 year and 10 year window.
What I'm going to be backing out is the rolling T-statistic, testing against the null hypothesis that returns were zero or negative. A high positive T-statistic indicates it's likely the returns are significantly positive (yeah!). A low negative T-statistic indicates that it's likely that returns are significantly negative (boo!). A middling T-statistic means we don't really know.
Here's the python:
from scipy.stats import ttest_1samp

# dull wrapper function as pandas apply functions have to return a float
def ttest_series(xseries):
    return ttest_1samp(xseries, 0.0).statistic

# given some account curve of monthly returns this will return the rolling
# 10 year series of t-statistics
acc.rolling(120).apply(ttest_series)
Incidentally I also tried bootstrapping the T-statistic, and it didn't affect the results very much.
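For completeness, the bootstrap version can be sketched like this (synthetic returns stand in for the backtest / index series):

```python
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(0)
# 10 years of synthetic monthly returns -- a stand-in for the real series
returns = rng.normal(0.005, 0.03, 120)

# Resample with replacement and recompute the t-statistic each time
boot_t = [
    ttest_1samp(rng.choice(returns, size=len(returns), replace=True), 0.0).statistic
    for _ in range(1000)
]

# The plain t-stat sits comfortably inside the bootstrap distribution,
# which is why bootstrapping doesn't change the conclusions much
t_plain = ttest_1samp(returns, 0.0).statistic
lo, hi = np.percentile(boot_t, [2.5, 97.5])
print(round(t_plain, 2), round(lo, 2), round(hi, 2))
```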
Here are the results. Firstly for my back test, 2 year rolling window:
Let's look at 3 years:
Surely five years must show significant results:
Okay, let's switch to the SG CTA index. Starting with two years:
Some may accuse me of straw-manning here; "listen Rob, we're not saying trend following is so broken it will lose money; just that it hasn't and won't do as well as in the past". Well, looking again at those rolling plots I see no evidence of that either.
Looking at the SG index there has perhaps been a slight degradation in performance after 2009, but taking the long term view over the backtest I'd say that over the last 30 years at least performance has been very similar and the current period of poor returns is by no means as bad as things have got in the past before recovering.
One more VERY IMPORTANT POINT: It's arguably silly to look at the performance of any trading strategy in isolation; like I said above only a moron would have 100% of their money in trend following. One of the arguments for trend following is that it provides 'crisis alpha' or to be more precise it has a negative correlation to other assets in a bear market. Unfortunately it's virtually impossible to say whether trend following still retains that property, since (he said wistfully) there hasn't been a decent crisis for 10 years.
You should be happy to invest in crisis alpha even if it has an expected return of zero over all history - arguably you should even be happy to pay for its insurance properties, and put up with a slightly negative return. Since 2009 trend following has delivered some modestly positive performance; arguably better than we have a right to expect. We won't know for sure if trend following can still deliver until the next big crisis comes along.
Summary
"Is trend following dead?" I don't know. Probably not. Now leave me alone and let us never speak of this again.
The next person who asks me this question will get a deep sigh in response. The one after that, a full eye roll. And with the third person I will have to resort to physical violence.
https://qoppac.blogspot.com/2018/11/
In this tutorial we’ll walk you through the creation of a simple client-side implementation of a RESTful library API. This shows you the basics of RestORM. This example is included in the source code of RestORM and has more features, see: restorm.examples.mock.
Let's examine the server-side of the library API. Normally you would read the documentation of a RESTful API, since there is no standard (yet) to describe a RESTful API in a way that lets a computer generate a proxy.
Below, you'll find an example of how the library API could be documented. The library contains books, and each book is of course written by an author. For the sake of this tutorial, it doesn't expose a lot of features:
Welcome to the documentation for our library API! All resources are available on. No authentication is required and responses are in JSON format.
GET book/ – Returns a list of available books in the library:
[ { "isbn": 1, "title": "Dive into Python", "resource_url": "" }, # ... ]
GET book/{id} – Returns a specific book, identified by its isbn number:
{ "isbn": 1, "title": "Dive into Python", "author": "" }
GET author/ – Returns a list of authors that wrote the books in our library:
[ { "id": 1, "name": "Mark Pilgrim", "resource_url": "" }, # ... ]
GET author/{id} – Returns a specific author, identified by its id:
{ "id": 1, "name": "Mark Pilgrim", }
POST search/ – Searches the library and returns matching books:
{ "query": "Python" }

[ { "isbn": 1, "title": "Dive into Python", "resource_url": "" }, # ... ]
A typical client that can talk to a RESTful API using JSON, is no more then:
from restorm.clients.jsonclient import JSONClient

client = JSONClient(root_uri='')
Since this tutorial uses a non-existent library API, the client doesn't work. We can however mock its intended behaviour.
In order to test your client, you can emulate a whole API using the MockApiClient. However, sometimes it’s faster or easier to use a single, predefined response, using the MockClient.
Since our library API is not that complex it is very straightforward to mock the entire API, so we'll do just that. The MockApiClient takes two arguments. The root_uri is the same as for regular clients but in addition, there is the responses argument. The responses argument takes a dict of available resource URLs, supported methods, response headers and data. It's best to just look at the example below to understand its structure.

from restorm.clients.mockclient import MockApiClient

mock_client = MockApiClient(
    responses={
        'book/': {'GET': ({'Status': 200}, [{'isbn': 1, 'title': 'Dive into Python', 'resource_url': ''}])},
        'book/1': {'GET': ({'Status': 200}, {'isbn': 1, 'title': 'Dive into Python', 'author': ''})},
        'author/': {'GET': ({'Status': 200}, [{'id': 1, 'name': 'Mark Pilgrim', 'resource_url': ''}])},
        'author/1': {'GET': ({'Status': 200}, {'id': 1, 'name': 'Mark Pilgrim'})},
        'search/': {'POST': ({'Status': 200}, [{'isbn': 1, 'title': 'Dive into Python', 'resource_url': ''}])},
    },
    root_uri=''
)
It’s worth mentioning that you are not creating an API here, you are mocking it. Simple and limited responses are usually fine. If the API would contain huge responses, you can also use the FileResponse class to read the mock response from a file.
We start with the most basic resource, the Author resource:
from restorm.resource import Resource

class Author(Resource):
    class Meta:
        list = r'^author/$'
        item = r'^author/(?P<id>\d+)$'
We subclass Resource and add an inner Meta class. In the Meta class we add two attributes that are internally used by the ResourceManager to perform get and all operations:
For our Book resource, it’s also possible to search for books. We can add this functionality with a custom ResourceManager:
from restorm.resource import ResourceManager

class BookManager(ResourceManager):
    def search(self, query, client=None):
        response = client.post('search/', '{ "query": "%s" }' % query)
        return response.content
No validation or exceptions in the request and response are handled in the above example, for readability reasons. In a production environment, you should handle them.
We also need to define the Book resource itself and add our custom manager by adding an instance of it to the objects attribute on the resource.
class Book(Resource):
    objects = BookManager()

    class Meta:
        list = r'^book/$'
        item = r'^book/(?P<isbn>\d)$'
You can access the Book resource and the related Author resource using the mock_client, or if the library API was real, use the client. We can pass the client to use as an argument to all manager functions (like get, all and also the search function we defined earlier).
>>> book = Book.objects.get(client=mock_client, isbn=1)
Our custom manager added a search function, let’s use it:
>>> Book.objects.search(query='python', client=mock_client)
[{'isbn': 1, 'title': 'Dive into Python', 'resource_url': ''}]
Since it’s mocked, we could search for anything and the same response would come back over and over.
Note
As you may have noticed, the response content contains actual Python objects. The MockApiClient simply returns the content as is. If you prefer using JSON, you can achieve the same behaviour with:
from restorm.clients.mockclient import BaseMockApiClient
from restorm.clients.jsonclient import JSONClientMixin

class MockJSONApiClient(BaseMockApiClient, JSONClientMixin):
    pass

client = MockJSONApiClient(
    responses={
        # Note the difference. The content is now JSON.
        'book/1': {'GET': ({'Status': 200, 'Content-Type': 'application/json'},
                           '{"id": 1, "title": "Dive into Python", "author": ""}')},
        # ...
    },
    root_uri=''
)
http://restorm.readthedocs.io/en/latest/tutorial.html
I am trying to create a scripted field that will calculate Risk based on Impact and Likelihood.
Both the Impact and Likelihood fields are single-select dropdown fields. To do the calculation, I need to convert the String to an Integer. In Jira Datacenter, I know I can create an if/else statement that looks something like this:
def likelihood = issue.getCustomFieldValue(customFieldManager.getCustomFieldObjectByName("Risk Likelihood"))
if ( "Rare".equals (likelihood.getValue()) ) {
likelihood = 1
} else if ( "Unlikely".equals (likelihood.getValue()) ) {
likelihood = 2
} else if ( "Moderate".equals (likelihood.getValue()) ) {
likelihood = 3
} else if ( "Likely".equals (likelihood.getValue()) ) {
likelihood = 4
} else if ( "Almost Certain".equals (likelihood.getValue()) ) {
likelihood = 5
} else {
return null
}
On Jira Cloud, I can not use .equals
Does anyone know the Cloud equivalent?
Hi Jeanne Howe and welcome,
Do you mind sharing with us how your actual code looks and the whole stacktrace of the error, to see if anyone here can help you with your problem? You can use a file sharing site to share those files with us.
Depending on the language used in your Cloud script you may or may not be able to use the .equals() method. Groovy / Java based languages have this method available for every variable in the code, since it is provided by Object.class, which every class in those languages extends from.
Best Regards
Hi Jeanne,
I can confirm that if the value returned in your script is a string, you can use the parseInt() method that Groovy provides to return it as an integer, as documented in the Groovy docs located here.
I hope this helps.
Regards,
Kristian
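As an aside, the if/else ladder from the question can be collapsed into a map lookup, which sidesteps the string comparison question entirely. A sketch only; the labels are taken from the question, and the code that reads the custom field value is omitted:

```groovy
// Hypothetical lookup table: option label -> score
def likelihoodScores = [
    'Rare'          : 1,
    'Unlikely'      : 2,
    'Moderate'      : 3,
    'Likely'        : 4,
    'Almost Certain': 5,
]

def likelihood = 'Moderate'              // would come from the custom field value
def score = likelihoodScores[likelihood] // null for any unknown label
assert score == 3
```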
@Kristian Walker _Adaptavist_ @Jack Nolddor _Sweet Bananas_
Thank you both for the responses. I have "cheated" a little bit here and added a numeric value to each of my fields, so I can parse the field instead of trying to convert the string to an integer.
The scripted field is working, but I am trying to change the calculation that is being performed. The current calculation adds the two field values:
def output = impact[0..0]+likelihood[0..0] as Integer
I would like to multiply the two field values:
def output = impact[0..0]*likelihood[0..0] as Integer
To do this I believe I need to declare a method - but have no idea what that method is.
This is a snippet of the error I see:
org.codehaus.groovy.runtime.InvokerInvocationException: groovy.lang.MissingMethodException: No signature of method: java.lang.String.multiply() is applicable for argument types: (String) values: [1]
Possible solutions: multiply(java.lang.Number)
at TestScriptedFieldExecution2_groovyProxy.run(Unknown Source)
Caused by: groovy.lang.MissingMethodException: No signature of method: java.lang.String.multiply() is applicable for argument types: (String) values: [1]
Possible solutions: multiply(java.lang.Number)
Here is the code that I have working so far:

def output = impact[0..0]+likelihood[0..0] as Integer
put("/rest/api/2/issue/${issue.key}")
.header("Content-Type", "application/json")
.body([
fields:[
(outputCfId1): output
]
])
.asString()
Any thoughts on how to modify this to multiply the two values instead of adding?
(would also appreciate any thoughts you had on the script formatting. I am very new to this)
Jeanne
I believe I have the Scripted field working. Would appreciate any feedback on the script formatting, I am still very new to this. Here is my scripted field:

def impactint = impact[0] as Integer
def likelihoodint = likelihood[0] as Integer
def output = impactint * likelihoodint as Integer
return output
put("/rest/api/2/issue/${issue.key}")
.header("Content-Type", "application/json")
.body([
fields:[
(outputCfId1): output
]
])
.asInteger
Hi Jeane,
The error message indicates the values are still stored as strings and you cannot perform mathematical operations on Strings.
This means you will need to parse the strings to convert them to integer values which are stored as integer values and then you can multiply these integer values in your script.
Regards,
Kristian
Thanks Kristian,
I have modified my comment above. I think I have it working.
Seems I still do not have a "number". When I attempt to PUT the Risk Score back to the issue, I get this message:
The scripted field ran successfully, but returned an invalid type. Please return a number.
Hi Jeane,
In your script above you just have the return statement of return output, and you need to change this to convert the value returned to an integer value by changing the line to be similar to return output as Integer.
Also, I can confirm that scripted fields just calculate values on an issue when it is loaded and you cannot make a put request on a scripted field.
If you wish to make a put request to update another field then you need to use a Script Listener script which is configured to fire on the issue updated event.
Regards,
Kristian
@Kristian Walker _Adaptavist_
I have modified the script to
return output as Integer
but I am still getting the same message:
The scripted field ran successfully, but returned an invalid type. Please return a number.
As before, if I run the script manually (using the Test option when creating the field) it runs successfully and will update the ticket, but if I edit a ticket and modify the Risk Impact or Risk Likelihood, the scripted field, Risk Score, is not updated.
Hi Jeanne,
As mentioned above, the reason is that you cannot run this code as a Scripted Field: as I have explained, Scripted Fields cannot run put requests, and they only update when an issue is reloaded in the browser, not on issue update.
Also, I can confirm that in your script you are attempting to make a rest call after the return statement. The way that Groovy works is that once the return keyword is reached the script will terminate, so you should have the return statement as the last line in your script, so that the rest calls are made before it is reached.
As mentioned previously, if you wish to have the script update field values after the issue is edited, then you will need to rewrite this script as a script listener that runs on the issue updated event, so that it updates the field values when the issue is updated, as Scripted Field values are only calculated when an issue is loaded.
https://community.atlassian.com/t5/Jira-questions/ScriptRunner-Jira-Cloud-Convert-String-to-Integrer/qaq-p/1611321
Math::LP::Solve - perl wrapper for the lp_solve linear program solver
use Math::LP::Solve qw(:ALL); # imports all functions and variables

# construct an LP with 0 initial constraints and 2 variables
$lp = make_lp(0,2);

# add the constraint x1 + 2 x2 <= 3
$coeffs = ptrcreate('double',0.0,2); # mallocs a C array
ptrset($coeffs,1.0,0);
ptrset($coeffs,2.0,1);
add_constraint($lp,$coeffs,$LE,3);
ptrfree($coeffs); # frees the C array

# set the objective function to x1+x2 and solve for a maximum
$obj = ptrcreate('double',1.0,2);
set_obj_fn($lp,$obj);
ptrfree($obj);
set_maxim($lp);

# solve the LP
solve($lp) == $OPTIMAL or die "No solution found";
$solution = lprec_best_solution_get($lp);

# extract the results from the solution array
$obj_fn_val = ptrvalue($solution,0);
$constr_val = ptrvalue($solution,1);
$x1 = ptrvalue($solution,2);
$x2 = ptrvalue($solution,3);
Math::LP::Solve is a wrapper around the freeware lp_solve library, which solves linear and mixed linear/integer programs. Most functions and data structures in the file lpkit.h of the lp_solve distribution are made available in the Math::LP::Solve namespace.
This document does not go into the details of how to setup and solve a linear program using the lp_solve library. For details on this you are referred to the documentation included in the source code for lp_solve.
That being said, a few details of the Perl wrappers around the underlying lp_solve library need explaining in order to be able to use them. (For those interested, the wrapping was done using SWIG, more info at) All symbols (functions and variables) are divided into 4 categories. All these symbols are in the Math::LP::Solve namespace and are not exported by default. They are however tagged so that you can easily import them into your own code. The following %EXPORT_TAGS are available:
- pointer library functions, needed to handle C-style arrays;
- pairs of get/set functions to access data fields of structs;
- wrappers for lp_solve library functions;
- perl scalar variables mapping #define'd constants in lpkit.h.

A 5th category named ALL is available in %EXPORT_TAGS, which includes all symbols of the 4 mentioned categories.
The pointer library functions are needed to pass arrays of coefficients etc. to and from the lp_solve functions and data structures. In the underlying C library, this is done using double* pointers, which are not available in Perl. The pointer library functions provide a Perl interface to get around this problem.
There are several pointer library functions, and they are fully explained in the SWIG documentation. However, the following is all you need to know to use them with lp_solve:
ptrcreate($type,$initval,$size)

Creates and returns a pointer to type $type, which is an array with $size fields initialized to $initval. E.g. an array of 2 doubles initialized to zero is created with the command

    $arr_double = Math::LP::Solve::ptrcreate('double',0.0,2);
ptrset($ptr,$val,$index)

Sets the value of the $index'th field of the array pointed to in $ptr to the value $val. E.g. the 2nd entry of an array of doubles is set to 3.14 using

    Math::LP::Solve::ptrset($arr_double,3.14,1);

Note that the 1st entry is denoted by index 0, as in C.
ptrvalue($ptr,$index)

Returns the $index'th entry of the array pointed to in $ptr. E.g. the 1st value of an array of doubles is requested using

    $d0 = Math::LP::Solve::ptrvalue($arr_double,0);
ptrfree($ptr)

Frees the memory allocated for $ptr. Always do this when you are finished with an array you allocated yourself using ptrcreate(), or you will end up with memory leaks. Also, take care not to invoke ptrfree() twice on the same pointer if it is not re-created.
The functions have the same name as in lpkit.h. Note however that double* parameters need to be handled with the aforementioned pointer library functions. The pointer library functions are not needed for the lprec* parameters, as their creation, manipulation and freeing is completely covered by the lpkit.h functions. E.g. an LP is created with

    $lp = Math::LP::Solve::make_lp(0,0);

subsequently manipulated with

    Math::LP::Solve::set_obj_fn($lp,$arr_double);

and finally freed using

    Math::LP::Solve::delete_lp($lp);
Some functions have been added to the ones available in lpkit.h to ease file manipulation and the handling of names of rows and columns:

- returns the name of the LP;
- sets the name of the LP to $name;
- lprec_row_name_get() resp. lprec_col_name_get(): returns the name of the row resp. column with index $i;
- lprec_row_name_set() resp. lprec_col_name_set(): sets the name of the row resp. column with index $i to $name;
- open_file(): opens the file $filename with mode $mode, which is specified as a string. Calls the C function fopen() internally;
- closes a filehandle obtained with open_file().
Following constants are available in the Math::LP::Solve namespace:
- $DEF_INFINITE
- $LE, $EQ, $GE and $OF
- $TRUE and $FALSE
- $OPTIMAL, $MILP_FAIL, $INFEASIBLE, $UNBOUNDED, $FAILURE and $RUNNING
- $FEAS_FOUND, $NO_FEAS_FOUND and $BREAK_BB
Each data field in struct lprec can be queried from a Perl variable holding an LP using Math::LP::Solve::lprec_FIELD_get($lp) and set using Math::LP::Solve::lprec_FIELD_set($lp).
Note that the row and column names are accessed using the functions lprec_row_name_get(), lprec_col_name_get(), lprec_row_name_set() and lprec_col_name_set() described above.
Wim Verhaegen <wimv@cpan.org>
Copyright (c) 2000-2001 Wim Verhaegen. All rights reserved. This program is free software; you can redistribute and/or modify it under the same terms as Perl itself.
Consult the lp_solve documentation for copyright information on the lp_solve library.
|
http://search.cpan.org/~wimv/Math-LP-Solve-3.03/Solve.pod
|
CC-MAIN-2017-17
|
refinedweb
| 900
| 53.31
|
Comments on Javarevisited: How to Check if Integer Number is Power of Two in Java - 3 examples

- Javin Paul: @Miguel Duran, Exactly: So if you started with if(number < 1) return false; ...
- Miguel Duran: Infinity is not a value.
- carbon14: In the third example (using the bit-shift operator) it checks to see if the number is less than 0 and raises an exception. Why!!
- Miguel Duran: 20 % 2 equals 0, but 20 is not a power of 2.
- Anonymous: you know, you could've just used modulo on a number by dividing it by 2 (num%2). if it equals to 0, then it's an even/power by 2 ;)
- Johnny Hardcode: Comments have a semantic error. Bit shift operators are <<, >>, and >>>. The solutions are using the bit AND operator &.
- Javin Paul: @TCool, you are correct, it's not <= but just less than; corrected it now.
- TCool: For the first and third method, I think there is a typo: if(number<=0). How can you display 0? And the result returns true?
- Ram (Anonymous): Is this possible?
      class two {
          public static void power(int num) {
              int dup = num;
              while (dup > 2) {
                  if (dup == 2)
                      System.out.print(num + " is a power of two");
                  else if (dup < 2)
                      System.out.print(num + " is not a power of two");
                  dup = dup / 2;
              }
          }
      }
- Miguel Duran: return x == 0 ? false : x & (x - 1) == 0;
- Wennie Sasoton: Thanks! Very good explanation. I can now get rid of the bitwise operator because of your brute force (easy to understand).
- Anonymous: Take logs and cast one side as int and check if the two sides are equal:
      return ((int)(Math.log(i+1)/Math.log(2)) - (Math.log(i+1)/Math.log(2))) == 0;
- Ravi: WOW, some clever tips for checking whether an integer is a power of two in the comments :)
- Anonymous: You may just use Integer.bitCount(number) == 1.
- Klāvs Priedītis:
      public static boolean isPowerOfTwo(int i) {
          if (i == 0) return false;
          int d = Math.abs(i);
          return (d & (d - 1)) == 0;
      }
- Anonymous: I often read and heard that power-of-2 numbers have special significance in the computer world. Why? Why do most HashMap and HashSet implementations use sizes which are powers of 2? WHY
- Programming Language Bhatia: Hi, I follow your blog while coding in my office. I really find your site useful. Recently I joined your site as I am also creating a blog for technical implementations. It's in a very early stage.
- saurabh chopra: What about this???
      public class PowerOftwo {
          public void powerOftwo(int number) {
              int num = number;
              int d;
              boolean flag = true;
              while (num > 1) {
                  d = num % 2;
                  if (d % 2 != 0) {
                      flag = false;
                      break;
                  }
                  num = num / 2;
              }
          }
      }
- Anonymous: Is the brute force case O(logN) complexity, since you are multiplying by 2 each time? So log base 2 of N?
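Distilled from the thread above, the check most commenters converge on relies on a positive power of two having exactly one set bit, so `n & (n - 1)` is zero. A standalone sketch (my own, not code from the blog post):

```java
public class PowerOfTwo {
    // A positive power of two has exactly one set bit;
    // n & (n - 1) clears the lowest set bit, leaving 0 only in that case.
    public static boolean isPowerOfTwo(int n) {
        return n > 0 && (n & (n - 1)) == 0;
    }

    public static void main(String[] args) {
        System.out.println(isPowerOfTwo(16)); // prints "true"
        System.out.println(isPowerOfTwo(20)); // prints "false": 20 % 2 == 0, yet 20 is not a power of two
    }
}
```

The `n > 0` guard also covers the zero and negative edge cases raised in the comments.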
|
http://javarevisited.blogspot.com/feeds/2303449542851015999/comments/default
|
CC-MAIN-2017-04
|
refinedweb
| 843
| 61.63
|
The other day I was digging through some source code and stumbled upon something similar to this:
public class ObjectFactory<T> where T : new()
{
public T Construct()
{
return new T();
}
}
I was curious what the generic constraint new() meant, but didn't have time to investigate, so I just scribbled it on the whiteboard in my office to look into over the weekend.
Fast forward to this weekend. Saturday (today) was an unbelievably beautiful day in Sarasota, Florida, and the kids and I decided to spend most of the day playing in the backyard. During my breaks I was reading How to Code .NET: Tips and Tricks for Coding .NET 1.1 and .NET 2.0 Applications Effectively.
There is a chapter in this book, called Using the new and class keywords with .NET Generics, which proceeded to explain that the new() constraint means the type must have a parameterless constructor. Sweet! Cross that question off the list.
The book then started to dive into IL and talk about the performance implications of including the class constraint as follows:
public class ObjectFactory<T> where T : class, new()
{
public T Construct()
{
return new T();
}
}
Although sadly I hadn't thought about it, without the class constraint, the compiler doesn't know if the type T is a value type or reference type and hence has to check for both.
So I am hanging out having fun with my kids, but there is a nagging desire in the back of my head to run ildasm and look at the IL for myself. And, yeah, there is a difference. Here is the IL when you don't include the class constraint:
.method public hidebysig instance !T Construct() cil managed
{
// Code size 38 (0x26)
.maxstack 2
.locals init ([0] !T CS$1$0000,
[1] !T CS$0$0001)
IL_0000: nop
IL_0001: ldloca.s CS$0$0001
IL_0003: initobj !T
IL_0009: ldloc.1
IL_000a: box !T
IL_000f: brfalse.s IL_001c
IL_0011: ldloca.s CS$0$0001
IL_0013: initobj !T
IL_0019: ldloc.1
IL_001a: br.s IL_0021
IL_001c: call !!0 [mscorlib]System.Activator::CreateInstance<!T>()
IL_0021: stloc.0
IL_0022: br.s IL_0024
IL_0024: ldloc.0
IL_0025: ret
} // end of method ObjectFactory`1::Construct
and here is the code when you do include the class constraint:
.method public hidebysig instance !T Construct() cil managed
{
// Code size 11 (0xb)
.maxstack 1
.locals init ([0] !T CS$1$0000)
IL_0000: nop
IL_0001: call !!0 [mscorlib]System.Activator::CreateInstance<!T>()
IL_0006: stloc.0
IL_0007: br.s IL_0009
IL_0009: ldloc.0
IL_000a: ret
} // end of method ObjectFactory`1::Construct
When you include the class constraint, we jump immediately into Activator.CreateInstance because we know the type T is not a value type. I won't lose sleep at night if I forget the class constraint when I only mean reference types, but certainly I am performance conscious and will include the class constraint in code where I am definitely assuming T is a reference type.
Also, if you have never played with SqlCommandBuilder.DeriveParameters, I also played with it this weekend:
by David Hayden
Thank you David. You just saved me some work this evening with this
|
http://codebetter.com/blogs/david.hayden/archive/2006/11/04/Using-the-new-and-class-keywords-with-.NET-Generics.aspx
|
crawl-002
|
refinedweb
| 525
| 68.87
|
The QWebDatabase class provides access to HTML 5 databases created with JavaScript. More...
#include <QWebDatabase>
This class is not part of the Qt GUI Framework Edition.
This class was introduced in Qt 4.5.
The QWebDatabase class provides access to HTML 5 databases created with JavaScript.
The upcoming HTML 5 standard includes support for SQL databases that web sites can create and access on a local computer through JavaScript. QWebDatabase is the C++ interface to these databases.
To get access to all databases defined by a security origin, use QWebSecurityOrigin::databases(). Each database has an internal name(), as well as a user-friendly name, provided by displayName().
WebKit uses SQLite to create and access the local SQL databases. The location of the database file in the local file system is returned by fileName(). You can access the database directly through the QtSql database module.
For each database the web site can define an expectedSize(). The current size of the database in bytes is returned by size().
For more information refer to the HTML 5 Draft Standard.
See also QWebSecurityOrigin.
Constructs a web database from other.
Destroys the web database object. The data within this database is not destroyed.
Returns the name of the database as seen by the user.
Returns the expected size of the database in bytes as defined by the web author.
Returns the file name of the web database.
The name can be used to access the database through the QtSql database module, for example:
    QWebDatabase webdb = ...
    QSqlDatabase sqldb = QSqlDatabase::addDatabase("QSQLITE", "myconnection");
    sqldb.setDatabaseName(webdb.fileName());
    if (sqldb.open()) {
        QStringList tables = sqldb.tables();
        ...
    }
Note: Concurrent access to a database from multiple threads or processes is not very efficient because SQLite is used as WebKit's database backend.
Returns the name of the database.
Returns the database's security origin.
Deletes all web databases in the configured offline storage path.
This function was introduced in Qt 4.6.
See also QWebSettings::setOfflineStoragePath().
Removes the database db from its security origin. All data stored in the database db will be destroyed.
Returns the current size of the database in bytes.
Assigns the other web database to this.
|
http://doc.qt.nokia.com/4.6-snapshot/qwebdatabase.html
|
crawl-003
|
refinedweb
| 360
| 61.43
|
in reply to Yet Another Variable Expansion Problem
Well, if you would change the match to something like the following and have him stick all the variables in a namespace other than main:: you'd be better off than you are. The key trick is requiring a real word character after the '$'.
$template =~ s/\[\$([a-zA-Z][\w\[\]\{\}'"]*)\]/'$Q::'.$1/eeg;
Basically, lop off the '$' too, make sure the first character is really a letter (and that one exists, yours allows [$] which isn't real good), and stuff that string onto a variable in another namespace.
If you are sure that every variable is a scalar you might just do s/\[\$([a-zA-Z]+)\]/$hash{$1}/eg; as well.
And yes, I'm ashamed that I'm on the Template-Toolkit list and still helping you do this the wrong way. =) =)
++chromatic
--
$you = new YOU;
honk() if $you->love(perl)
|
http://www.perlmonks.org/?node_id=158659
|
CC-MAIN-2016-26
|
refinedweb
| 174
| 65.96
|
There is something about XML that makes people go crazy, in particular people trying to make standards: it's that ol' tag fever agin, Maude. I think I know what that thing is: the emphasis on standards = good, combined with the desire for complete schemas and the idea that organizing schemas by namespace is the way to shoehorn requirements (rather than being a way of expressing results).
The result: vocabularies where unnecessary order and structuring constraints are given. You can tell when a standard schema is over-specified, because people using it will just snip out the low-level elements they need and plonk these in their own home-made container elements.
I have noticed this in a few schemas I have been working with recently: in fact, the trend I notice is that people start off with their own home-made schema, then “adopt” the standard by finding any elements that have close semantics to their home-made elements, and changing the name of the home-made element to the standard name. SVG in ODF looks like an example of this, and there is another standard I have been working with recently that has the same issue: when you adopt arbitrary portions of a cohesive standard, are you really using or abusing that standard?
I suppose there is a case to be made that transitional schemas should be treated seriously.
One software engineering idea that has stuck with me over the last years (which I wrote about in The XML & SGML Cookbook) is the twinning of cohesion and coupling. Basically, that when some information is highly coherent (think of Eve Maler’s Information Units) i.e., it belongs together semantically and would not make much sense in isolation, it deserves an official container.
Conversely, you should try to reduce coupling of information that is not cohesive.
A rule of thumb for many situations is that industry standard groups (and, indeed, inhouse schema developers) may be well advised to standardize data elements eagerly but container elements suspiciously: standardize the jellybeans, not the jars. The next bloke may like your jellybeans but have his own jars.
Various approaches to do this come to mind: think in terms of creating a vocabulary rather than a language; split your industry standard in two, with the tightly coupled elements in one normative section and the loosely-coupled elements in another non-normative section, perhaps with different namespaces even; use open content models and order-independence for loosely-coupled elements.
Another upside for this approach, is that it reduces the number of trivial issues for committee members to get excited about.
The unsurprising part of this is that many SGMLers came to these conclusions over a decade ago (lots of little schemas/DTDs), although that led to some of the wrapper approaches where entities were not well-supported (the Navy Work Package and the European cousins come to mind). I liked the frame approach, and that was later replaced with divs, oddly enough by the same people who disliked frames.
|
http://www.oreillynet.com/xml/blog/2007/11/standardize_the_jellybeans_not.html
|
crawl-001
|
refinedweb
| 504
| 50.7
|
In order to use Webdriver, you will need to have Maven installed and set up. Fortunately, the helpful people at Apache Software have created an excellent tutorial about how to set up Maven:
It probably won't take you five minutes as advertised, but it will take you less than a day. Don't worry too much about what you are doing as you are running these commands; once Maven is set up, we only have to think about the commands we need to run our automated tests.
Now let's return to Eclipse. The last time we were in Eclipse we created a project called MyFirstJavaProject. Right-click on this project and select "Configure-> Convert to Maven project". A popup window will appear. Click "Finish" on the window, and wait for the configuration to complete. When it's done, you'll notice that your project icon now has a little M next to the J. You'll also notice that you have a pom.xml file in your project folder. We'll learn more about the POM in a future blog post.
Let's create a JUnit test. First we'll run it from Eclipse, then we'll run it from the command line using Maven.
1. Right-click on the MyFirstJavaProject and select New->Source Folder
2. Give the new folder a name of src/test/java (click the "Update exclusion filters" checkbox if needed)
3. Right-click on the new folder and select New->Package
4. Give the package a name of myFirstTests
5. Now right-click on the myFirstTests package and select New->Class
6. Give the class a name of FirstTest
7. The FirstTest class should open in the center pane, and should look like this:
package myFirstTests;
public class FirstTest {
}
8. In between the brackets, add the following:
@Test
public void test() {
int x = 2;
int y = 3;
int total = x+y;
assertEquals(total, 5);
}
9. Right-click on the Project name (MyFirstJavaProject) and choose Build Path->Add Libraries
10. Select JUnit
11. Choose JUnit 4 and click Finish
12. Look again at the lines of code you added to the FirstTest class. There will be two lightbulb icons on the left side of the code.
13. Right-click on the one next to "@Test" and choose Add JUnit4 to Build Path
14. Right-click on the one next to "assertEquals(total,5)" and choose Import org.junit.Assert;
15. Now near the top of the page, you will see
import static org.junit.Assert.*;
import org.junit.Test;
16. Save your changes by clicking on the disk icon in the toolbar
17. Right-click on the FirstTest.java file in the left pane, and choose Run As-> JUnit Test
18. Your test should run, and a new tab named JUnit should appear in the left pane
19. Click on the left tab and notice that the window displays what test has been run, and that there is a green bar across the pane indicating that the test has passed
Now let's try running the test from the command line. First we'll need to make a change to the pom.xml file:
1. Double-click on the pom.xml file to open it in the center pane
2. Click the tab at the bottom of the pane that says pom.xml
3. You should now be viewing the file in xml format
4. Add this text to the xml file, right underneath </build>:
<dependencies>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.8.1</version>
</dependency>
</dependencies>
5. Save your changes
6. Now open the command window
7. Change directories until you are in your project folder (MyFirstJavaProject)
8. Run this command:
mvn clean test
9. Your project should build, your test should run from the command line, and you should be notified that the build is successful
Congratulations! You have run your first test from Maven!
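If you want to sanity-check the same arithmetic outside JUnit and Maven, here is a plain-Java version of the test body (a hypothetical sketch, not part of the tutorial) that runs with just `javac`/`java`:

```java
public class FirstTestStandalone {
    // Mirrors the JUnit test body above: 2 + 3 should equal 5.
    public static int total() {
        int x = 2;
        int y = 3;
        return x + y;
    }

    public static void main(String[] args) {
        if (total() != 5) {
            throw new AssertionError("expected 5, got " + total());
        }
        System.out.println("test passed"); // prints "test passed"
    }
}
```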
|
http://fearlessautomation.blogspot.com/2013/06/setting-up-maven.html
|
CC-MAIN-2019-22
|
refinedweb
| 656
| 84.07
|
Introduction
Thanks to recent advances in storage capacity and memory management, it has become much easier to create machine learning and deep learning projects from the comfort of your own home.
In this article, I will introduce you to different possible approaches to machine learning projects in Python and give you some indication of their trade-offs in execution speed. Some of the different approaches are:
- Using a personal computer/laptop CPU (Central processing unit)/GPU (Graphics processing unit).
- Using cloud services (Kaggle, Google Colab).
First of all, we need to import all the necessary dependencies:
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn import preprocessing
from xgboost import XGBClassifier
import xgboost as xgb
from sklearn.metrics import accuracy_score
For this example, I decided to fabricate a simple dataset using Gaussian Distributions consisting of four features and two labels (0/1):
# Creating a linearly separable dataset using Gaussian Distributions.
# The first half of the labels in Y is 0 and the other half 1. The first
# half of each of the 4 features is therefore drawn with means quite
# different from the second half, which makes the classification between
# the two classes quite simple (the data is linearly separable).
dataset_len = 40000000
dlen = int(dataset_len/2)
X_11 = pd.Series(np.random.normal(2,2,dlen))
X_12 = pd.Series(np.random.normal(9,2,dlen))
X_1 = pd.concat([X_11, X_12]).reset_index(drop=True)
X_21 = pd.Series(np.random.normal(1,3,dlen))
X_22 = pd.Series(np.random.normal(7,3,dlen))
X_2 = pd.concat([X_21, X_22]).reset_index(drop=True)
X_31 = pd.Series(np.random.normal(3,1,dlen))
X_32 = pd.Series(np.random.normal(3,4,dlen))
X_3 = pd.concat([X_31, X_32]).reset_index(drop=True)
X_41 = pd.Series(np.random.normal(1,1,dlen))
X_42 = pd.Series(np.random.normal(5,2,dlen))
X_4 = pd.concat([X_41, X_42]).reset_index(drop=True)
Y = pd.Series(np.repeat([0,1],dlen))
df = pd.concat([X_1, X_2, X_3, X_4, Y], axis=1)
df.columns = ['X1', 'X2', 'X3', 'X_4', 'Y']
df.head()
Finally, now we just have to prepare our dataset to be fed into a machine learning model (dividing it into features and labels, and training and test sets):
train_size = 0.80
X = df.drop(['Y'], axis = 1).values
y = df['Y']
# label_encoder object knows how to understand word labels.
label_encoder = preprocessing.LabelEncoder()
# Encode labels
y = label_encoder.fit_transform(y)
# Identify shape and indices
num_rows, num_columns = df.shape
delim_index = int(num_rows * train_size)
# Splitting the dataset into training and test sets
X_train, y_train = X[:delim_index, :], y[:delim_index]
X_test, y_test = X[delim_index:, :], y[delim_index:]
# Checking set dimensions
print('X_train dimensions: ', X_train.shape, 'y_train: ', y_train.shape)
print('X_test dimensions:', X_test.shape, 'y_validation: ', y_test.shape)
# Checking dimensions in percentages
total = X_train.shape[0] + X_test.shape[0]
print('X_train Percentage:', (X_train.shape[0]/total)*100, '%')
print('X_test Percentage:', (X_test.shape[0]/total)*100, '%')
The output train test split result is shown below:
X_train dimensions: (32000000, 4) y_train: (32000000,)
X_test dimensions: (8000000, 4) y_validation: (8000000,)
X_train Percentage: 80.0 %
X_test Percentage: 20.0 %
We are now ready to get started benchmarking the different approaches. In all the following examples, we will be using XGBoost (Gradient Boosted Decision Trees) as our classifier.
1) CPU
Training an XGBClassifier on my personal machine (without using a GPU), led to the following results:
%%time
model = XGBClassifier(tree_method='hist')
model.fit(X_train, y_train)
CPU times: user 8min 1s, sys: 5.94 s, total: 8min 7s
Wall time: 8min 6s
Once we've trained our model, we can now check its prediction accuracy:
sk_pred = model.predict(X_test)
sk_pred = np.round(sk_pred)
sk_acc = round(accuracy_score(y_test, sk_pred), 2)
print("XGB accuracy using Sklearn:", sk_acc*100, '%')
XGB accuracy using Sklearn: 99.0 %
In summary, using a standard CPU machine, it took about 8 minutes to train our classifier to achieve 99% accuracy.
2) GPU
I will now instead make use of an NVIDIA TITAN RTX GPU on my personal machine to speed up the training. In this case, in order to activate the GPU mode of XGB, we need to specify the tree_method as gpu_hist instead of hist.
%%time
model = XGBClassifier(tree_method='gpu_hist')
model.fit(X_train, y_train)
Using the TITAN RTX led in this example to just 8.85 seconds of execution time (about 50 times faster than using just the CPU!).
sk_pred = model.predict(X_test)
sk_pred = np.round(sk_pred)
sk_acc = round(accuracy_score(y_test, sk_pred), 2)
print("XGB accuracy using Sklearn:", sk_acc*100, '%')
XGB accuracy using Sklearn: 99.0 %
This considerable improvement in speed was possible thanks to the ability of the GPU to take the load off from the CPU, freeing up RAM memory and parallelizing the execution of multiple tasks.
3) GPU Cloud Services
I will now go over two examples of free GPU cloud services (Google Colab and Kaggle) and show you what benchmark score they are able to achieve. In both cases, we need to explicitly turn on the GPUs on the respective notebooks and specify the XGBoost tree_method as gpu_hist.
Google Colab
Using Google Colab NVIDIA TESLA T4 GPUs, the following scores have been registered:
CPU times: user 5.43 s, sys: 1.88 s, total: 7.31 s
Wall time: 7.59 s
Kaggle
Using Kaggle instead led to a slightly higher execution time:
CPU times: user 5.37 s, sys: 5.42 s, total: 10.8 s
Wall time: 11.2 s
Using either Google Colab or Kaggle both led to a remarkable decrease in execution time.
One downside of using these services is the limited amount of CPU and RAM available. In fact, slightly increasing the dimensions of the example dataset caused Google Colab to run out of RAM memory (which wasn't an issue when using the TITAN RTX).
One possible way to fix this type of problem when working with constrained memory devices is to optimize the code to consume the least amount of memory possible (using fixed point precision and more efficient data structures).
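As a concrete illustration of that last point (my own sketch, not from the benchmark itself), downcasting feature arrays from NumPy's default float64 to float32 halves their memory footprint before they are handed to a model:

```python
import numpy as np

# Hypothetical example: a 100,000 x 4 feature matrix, a small-scale
# stand-in for the dataset used above.
rng = np.random.default_rng(0)
X64 = rng.normal(size=(100_000, 4))   # float64 by default: 8 bytes per value
X32 = X64.astype(np.float32)          # fixed lower precision: 4 bytes per value

print(X64.nbytes, "bytes as float64")  # 3,200,000 bytes
print(X32.nbytes, "bytes as float32")  # 1,600,000 bytes
```

The trade-off is reduced numerical precision, which for a classifier like the one above rarely changes the final accuracy.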
4) Bonus Point: RAPIDS
As an additional point, I will now introduce you to RAPIDS, an open-source collection of Python libraries by NVIDIA. In this example, we will make use of its integration with the XGBoost library to speed up our workflow in Google Colab. The full notebook for this example (with instructions on how to set up RAPIDS in Google Colab) is available here or on my GitHub Account.
RAPIDS is designed to be the next evolutionary step in data processing. Thanks to its Apache Arrow in-memory format, RAPIDS can deliver around a 50x speed improvement compared to Spark in-memory processing. Additionally, it is able to scale from a single GPU to multiple GPUs.
All RAPIDS libraries are based on Python and are designed to have Pandas and Sklearn-like interfaces to facilitate adoption.
The structure of RAPIDS is based on different libraries in order to accelerate data science from end to end. Its main components are:
- cuDF = used to perform data processing tasks (Pandas-like).
- cuML = used to create machine learning models (Sklearn-like).
- cuGraph = used to perform graph analytics (NetworkX-like).
In this example, we will make use of its XGBoost integration:
dtrain = xgb.DMatrix(X_train, label=y_train)
dtest = xgb.DMatrix(X_test, label=y_test)

%%time
params = {}
booster_params = {}
booster_params['tree_method'] = 'gpu_hist'
params.update(booster_params)
clf = xgb.train(params, dtrain)
CPU times: user 1.42 s, sys: 719 ms, total: 2.14 s
Wall time: 2.51 s
As we can see above, using RAPIDS it took just about 2.5 seconds to train our model (decreasing time execution by almost 200 times!).
Finally, we can now check that we obtained exactly the same prediction accuracy using RAPIDS that we registered in the other cases:
rapids_pred = clf.predict(dtest)
rapids_pred = np.round(rapids_pred)
rapids_acc = round(accuracy_score(y_test, rapids_pred), 2)
print("XGB accuracy using RAPIDS:", rapids_acc*100, '%')
XGB accuracy using RAPIDS: 99.0 %
If you are interested in finding out more about RAPIDS, more information is available here.
Conclusion
Finally, we can now compare the execution time of the different methods used. As shown in Figure 2, using GPU optimization can substantially decrease execution time, especially if integrated with the use of RAPIDS libraries.
Figure 3 shows how many times faster the GPUs models are compared to our baseline CPU results.
Contacts
If you want to keep updated with my latest articles and projects, follow me on Medium and subscribe to my mailing list. These are some of my contacts details:
Cover photo from this article.
|
https://www.freecodecamp.org/news/benchmarking-machine-learning-execution-speeds/
|
CC-MAIN-2021-49
|
refinedweb
| 1,431
| 50.63
|
Java Exercises: Check a positive number is a palindrome or not
Java Basic: Exercise-115 with Solution
Write a Java program to check if a positive number is a palindrome or not.
Pictorial Presentation:
Sample Solution:
Java Code:
import java.util.*;

public class Exercise115 {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        System.out.print("Input a positive integer: ");
        int n = in.nextInt();
        System.out.printf("Is %d is a palindrome number?\n", n);
        System.out.println(palindrome(n));
    }

    private static boolean palindrome(int num) {
        String str = String.valueOf(num);
        int i = 0;
        int j = str.length() - 1;
        while (i < j) {
            if (str.charAt(i++) != str.charAt(j--)) {
                return false;
            }
        }
        return true;
    }
}
Sample Output:
Input a positive integer: 151
Is 151 is a palindrome number?
true
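An alternative that avoids the string conversion reverses the decimal digits arithmetically. This is a sketch of the same check, not part of the w3resource solution:

```java
public class PalindromeArithmetic {
    // Reverse the decimal digits of num and compare with the original.
    public static boolean isPalindrome(int num) {
        int reversed = 0;
        for (int n = num; n > 0; n /= 10) {
            reversed = reversed * 10 + n % 10;
        }
        return num > 0 && reversed == num;
    }

    public static void main(String[] args) {
        System.out.println(isPalindrome(151)); // prints "true"
        System.out.println(isPalindrome(152)); // prints "false"
    }
}
```

For very large inputs near Integer.MAX_VALUE the running `reversed` value could overflow; the string-based version above sidesteps that concern.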
Flowchart:
Java Code Editor:
Contribute your code and comments through Disqus.
Previous: Write a Java program to given a string and an offset, rotate string by offset (rotate from left to right).
Next: Write a Java program which iterates the integers from 1 to 100. For multiples of three print "Fizz" instead of the number and for the multiples of five print "Buzz". When number is divided by both three and five, print "fizz buzz".
|
https://www.w3resource.com/java-exercises/basic/java-basic-exercise-115.php
|
CC-MAIN-2019-18
|
refinedweb
| 224
| 50.43
|
The following is a list of the typographical conventions used in this book:
Used to indicate new terms, URLs, filenames, file extensions, and directories and to highlight comments in examples. For example, a path in the filesystem will appear as /Developer/Applications.
Used to show code examples, the contents of files, commands, or the output from commands.
Constant width bold
Used in examples and tables to show commands or other text that should be typed literally.
Used in examples and tables to show text that should be replaced with user-supplied values.
The second color is used to indicate a cross-reference within the text.
A carriage return (RETURN) at the end of a line of code is used to denote an unnatural line break; that is, you should not enter these as two lines of code, but as one continuous line. Multiple lines are used in these cases due to page width constraints.
When looking at the menus for any application, you will see some
symbols associated with keyboard shortcuts for a particular command.
For example, to open an old chat in iChat, you would go to the File
menu and select Open . . . (File → Open . . . ), or you
could issue the keyboard shortcut,
⌘-O. The ⌘
symbol corresponds to the ⌘
key (also known as the
"Command" key), located to the left
and right of the spacebar on any Macintosh keyboard.
You should pay special attention to notes set apart from the text with the following icons:
The thermometer icons, found next to each hack, indicate the relative complexity of the hack:
|
http://etutorials.org/Mac+OS/mac+os+hacks/Preface/Conventions+Used+in+This+Book/
|
CC-MAIN-2017-04
|
refinedweb
| 260
| 59.84
|
[newbie] scope of the variables
Discussion in 'Perl Misc' started by John, Sep 30, 2003.
|
http://www.thecodingforums.com/threads/newbie-scope-of-the-variables.882833/
|
CC-MAIN-2015-27
|
refinedweb
| 195
| 79.7
|