text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91 values | source stringclasses 1 value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
Handler for missing layers within project. More...
#include <qgsprojectbadlayerguihandler.h>
Handler for missing layers within project.
Gives user a chance to select path to the missing layers.
return the data source for the given layer
The QDomNode is a QgsProject Dom node corresponding to a map layer state.
Essentially dumps datasource tag.
returns data type associated with the given QgsProject file Dom node
The Dom node should represent the state associated with a specific layer.
find relocated data source for the given layer
This QDom object represents a QgsProject node that maps to a specific layer.
XXX Only implemented for file based layers. It will need to be extended for XXX other data source types such as databases.
find relocated data sources for given layers
These QDom objects represent QgsProject nodes that map to specific layers.
this is used to locate files that have moved or otherwise are missing
implementation of the handler
Implements QgsProjectBadLayerHandler..
set the datasource element to the new value
Flag to store the Ignore button press of MessageBox used by QgsLegend. | http://www.qgis.org/api/classQgsProjectBadLayerGuiHandler.html | CC-MAIN-2014-41 | refinedweb | 175 | 56.76 |
I have been searching for quite a while to no avail for this answer, and was hoping that someone may have the answer here.
I'm not new to c++; I have been working with it for about 3 years now, and recently moved into visual c++ (with Microsoft Visual Studio 2008), where I quickly ran into a problem I have yet to answer.
My question seems easy enough; is it possible to create objects (such as labels or picture boxes) on the fly from within functions, rather than in the namespace constructor?
I know the general creation code defined with the handles/pointers in the beginning of the form's namespace, but I'm wondering if - through a function - it is possible to create and destroy them?
And if so, what is the proper syntax, and how would I be able to - if at all possible - port it over to an external .cpp file?
Considering the diversity of the C++ language, I believe that the answer to my problems exist, but I have yet to find out precisely what it is.
Any potential help would be appreciated. | http://forums.codeguru.com/printthread.php?t=497972&pp=15&page=1 | CC-MAIN-2017-04 | refinedweb | 188 | 61.7 |
JavaScript Module Date Time Control Info
I have created a date/time control that you can add to your web pages…
It is in the form of a JavaScript Module that you can add to you web project.
Is it perfect? Nah! But it is usable. I will probably improve it here and there.
What can I say? I wanted to do this!
Try it out below!
This project is more of an experiment for future things to come (I hope).
What is it for? If you are writing web software (for example) where a user is picking an appointment date and time. See more info below…
- The control appears in your input “form.”
- If a date/time is already set for the control, it displays its value in the control.
- If no date/time has been set yet, the control displays a blank value.
- The control has an ellipsis button.
- When the user clicks this button, a calendar popup appears where the user can navigate to the date they are interested in and enter in a desired time in the form of hours/minutes and AM/PM.
- The user clicks the SET button to set the date and time.
- The calendar popup will disappear, and the date/time control will now show the date/time the user picked.
You can find the code in the Github Repo:
There is now a minified version of the file available.
To use Github as a CDN for the latest version this module, add the following line near the top of the JavaScript file that is going to use this module:
import DateTimeCtrl from '
or, better yet, use the minified version of this module. Right now, there is only one file (for version 1.0.1), so you would put:
import DateTimeCtrl from '
More About Your JavaScript File
About the JavaScript file that includes that import statement shown above…
- In your web page’s <script> tag for this file, it must have a: type=”module” in it, or the “import” line in your JavaScript file will cause a parse error!
- Any place in your pages HTML markup it has stuff like onclick= onselect=, etc., where it is calling JS functions in your JavaScript file, these will no longer work! Instead you will need to hook up event handlers using:
- myDomNode.addEventListener(‘theEventName’, theEventHandlerFunctionName);
- Note: This date/time control will handle wiring up all its own event handlers… you will not need to do anything to get this working yourself. Yay!
Adding Date/Time Controls to your Form
Let’s just say that the DIV container that you are sticking your form’s HTML markup’s Id is: “myForm”. And, it (the afore-mentioned DIV) is already in your page’s markup!
Here is some sample code you could possibly call:
function genSimpleForm() { const myFormNd = document.getElementById("myForm"); const dtCtrl = new DateTimeCtrl(); const s = []; s.push('<ul>'); s.push('<li>Appointment Name: '); s.push("<input id='appointmentName'></li>"); const sCtrlMarkup = dtCtrl.newCtrlMarkup( {field:"apptDate", pickDateCaption:"Pick Appointment Date", editTime:true}); s.push('<li>Appointment Date: '+sCtrlMarkup+'</li>'); s.push('</ul>'); s.push("<button id='btnSaveApptInfo'>Save Appointment</button>"); myFormNd.innerHTML = s.join(""); dtCtrl.activateControls(); // set up a click event handler on the 'Save Appointment' button: const btnSaveApptInfoNd = document.getElementById("btnSaveApptInfo"); btnSaveApptInfoNd.addEventListener('click', saveAppt); } // end of function genSimpleForm() /* */ function saveAppt(event) { const appointmentNameNd = document.getElementById("appointmentName"); const apptDateNd = document.getElementById("apptDate"); const sAppointmentName = appointmentNameNd.value; const sAppointmentDate = apptDateNd.value; // Do something with these values... // such as posting them via an Ajax call to an API end-point // that will save the values to a database! } // end of function saveAppt()
It is your job as the developer to generate your input form’s markup and add it to an element’s innerHTML property.
Once it is part of the the web page’s Document Object Model (DOM), then you can call the date control’s activateControls() method. This wires everything up for date/time controls with event handlers, etc.
NOTE: If you call the above method before the desired date controls are added to the web page’s DOM, it will not work properly.
This is the basics of how to get a working date time control operating on your web page.
newCtrlMarkup() Method’s Parameters
- the “field” parameter (string). This is the only required parameter. It must be a unique identifier of the date/time field that you are editing. It has to be a value that the web page could use just fine as a page element id.
- calling the: newCtrlMarkup() method will create a new hidden input tag that has an id set to a value of the field name you passed in.
- When a user picks a date/time, the value that they picked is placed in that input control.
- It is your job as the developer to check the “value” property of that control to get the value to do stuff like save the data in the form!
- the optional “pickDateCaption” parameter (string). The prompt you want to appear on the calendar popup. If you do not use this parameter, the prompt will be “Pick Date“
- The optional “editTime” parameter (boolean). Determines if you are allowed to not only edit the calendar date but the time as well. If you do not use this parameter, the default value will be false.
- The optional “dateValue” parameter (string). You would use this if you were building a form where you (for example) bringing up existing appointment info on an appointment saved previously and you wanted the value to be pre-populated.
Possible Future Parameters:
- minDate (string/date). Not allow user to pick a date earlier than this date.
- maxDate (string/date). Not allow user to pick a date later than this date.
- formCaption (string). Allow control to handle displaying its own caption to the left of the control. If not provided, it would assume that the developer’s own code would handle any caption.
- canClearDate (boolean). If true, user is allowed to clear any existing date value that is in the control. Clear button would only appear on the calendar popup when the date actually had a previously Set value.
- readOnly (boolean). If true, user could bring up calendar popup, but would not be able to change the date selection of the time.
Possible Future methods:
- setDateValue({field:’fieldName’,dateValue: sNewDateValue}). Allow for developer to set a controls current date value after the form has been generated.
- Give it its own AddEventListener method. Possible events:
- “datetimeset” – user clicks SET button on calendar popup.
- event.setDateTime could contain the date object set to value
- “datesel” – user selects a date on calendar
- event.selDate could contain the date user selected
- “datecleared” – user picked option to clear the date value out of the control.
- “invaliddate” – user picked a date outside of specified range.
Other Possible Future Improvements:
- Built in Localizable Date formatting used when displaying the date and time. Right now, it just supports what (I) want: MM/DD/YYYY @ H12:mm A/PM
- Be able to support 24 hour as opposed to just 12 hour time values.
- Improve input validation on editing hours and minutes.
- Improve CSS formatting of the control and give the ability to override the default CSS that is generated by the control.
- Convenience methods that do stuff like:
- return what week day a month begins on based on the month & year
- return the total number of days in a month based on month & year
- how many days/hours/minutes are between two date values | http://chomer.com/freeware/javascript-module-date-time-control-info/ | CC-MAIN-2022-21 | refinedweb | 1,254 | 65.12 |
> > > > > +-- NamespaceError (rename of NameError) > > > > > +-- UnboundFreeError (new) > > > > > +-- UnboundGlobalError (new) > > > > > +-- UnboundLocalError > > > > > > > > > > > > > What are these new exceptions for? Under what circumstances are they > > > > raised? Why is this necessary or an improvement? [James Y Knight] > > > Exceptions relating to when a name is not found in a specific > > > namespace (directly related to bytecode). So UnboundFreeError is > > > raised when the interpreter cannot find a variable that is a free > > > variable. UnboundLocalError already exists. UnboundGlobalError is to > > > prevent NameError from being overloaded. UnboundFreeError is to > > > prevent UnboundLocalError from being overloaded [Raymond] > > Do we have any use cases for making the distinctions. I have NEVER had > > a reason to write a different handler for the various types of > > NameError. > > > > Also, everyone knows what a Global is. Can the same be said for Free? > > I had thought that to be a implementation detail rather than part of the > > language spec. [Brett] > Perhaps then we should just ditch UnboundLocalError? Perhaps the hierarchy should be left unchanged unless there is shown to be something wrong with it. "just ditching" something is not a rationale that warrants a language change. What problem is being solved by making additions or deletions to subclasses of NameError? > If we just make > sure we have good messages to go with the exceptions the reasons for > the exception should be obvious. +1 Raymodn | https://mail.python.org/pipermail/python-dev/2005-August/055184.html | CC-MAIN-2018-26 | refinedweb | 216 | 58.58 |
Troubleshooting
While Kubernetes and the ArangoDB Kubernetes operator will automatically resolve a lot of issues, there are always cases where human attention is needed.
This chapter gives your tips & tricks to help you troubleshoot deployments.
Where to look
In Kubernetes all resources can be inspected using
kubectl using either
the
get or
describe command.
To get all details of the resource (both specification & status), run the following command:
kubectl get <resource-type> <resource-name> -n <namespace> -o yaml
For example, to get the entire specification and status
of an
ArangoDeployment resource named
my-arangodb in the
default namespace,
run:
kubectl get ArangoDeployment my-arango -n default -o yaml # or shorter kubectl get arango my-arango -o yaml
Several types of resources (including all ArangoDB custom resources) support events. These events show what happened to the resource over time.
To show the events (and most important resource data) of a resource, run the following command:
kubectl describe <resource-type> <resource-name> -n <namespace>
Getting logs
Another invaluable source of information is the log of containers being run
in Kubernetes.
These logs are accessible through the
Pods that group these containers.
To fetch the logs of the default container running in a
Pod, run:
kubectl logs <pod-name> -n <namespace> # or with follow option to keep inspecting logs while they are written kubectl logs <pod-name> -n <namespace> -f
To inspect the logs of a specific container in
Pod, add
-c <container-name>.
You can find the names of the containers in the
Pod, using
kubectl describe pod ....
Note that the ArangoDB operators are being deployed themselves as a Kubernetes
Deployment
with 2 replicas. This means that you will have to fetch the logs of 2
Pods running
those replicas.
What if
The
Pods of a deployment stay in
Pending state
There are two common causes for this.
1) The
Pods cannot be scheduled because there are not enough nodes available.
This is usually only the case with a
spec.environment setting that has a value of
Production.
Solution: Add more nodes.
1) There are no
PersistentVolumes available to be bound to the
PersistentVolumeClaims
created by the operator.
Solution: Use `kubectl get persistentvolumes` to inspect the available `PersistentVolumes` and if needed, use the [`ArangoLocalStorage` operator](deployment-kubernetes-storage-resource.html) to provision `PersistentVolumes`.
When restarting a
Node, the
Pods scheduled on that node remain in
Terminating state
When a
Node no longer makes regular calls to the Kubernetes API server, it is
marked as not available. Depending on specific settings in your
Pods, Kubernetes
will at some point decide to terminate the
Pod. As long as the
Node is not
completely removed from the Kubernetes API server, Kubernetes will try to use
the
Node itself to terminate the
Pod.
The
ArangoDeployment operator recognizes this condition and will try to replace those
Pods with
Pods on different nodes. The exact behavior differs per type of server.
What happens when a
Node with local data is broken
When a
Node with
PersistentVolumes hosted on that
Node is broken and
cannot be repaired, the data in those
PersistentVolumes is lost.
If an
ArangoDeployment of type
Single was using one of those
PersistentVolumes
the database is lost and must be restored from a backup.
If an
ArangoDeployment of type
ActiveFailover or
Cluster was using one of
those
PersistentVolumes, it depends on the type of server that was using the volume.
- If an
Agentwas using the volume, it can be repaired as long as 2 other agents are still healthy.
- If a
DBServerwas using the volume, and the replication factor of all database collections is 2 or higher, and the remaining dbservers are still healthy, the cluster will duplicate the remaining replicas to bring the number of replicas back to the original number.
- If a
DBServerwas using the volume, and the replication factor of a database collection is 1 and happens to be stored on that dbserver, the data is lost.
- If a single server of an
ActiveFailoverdeployment was using the volume, and the other single server is still healthy, the other single server will become leader. After replacing the failed single server, the new follower will synchronize with the leader. | https://www.arangodb.com/docs/stable/deployment-kubernetes-troubleshooting.html | CC-MAIN-2019-30 | refinedweb | 694 | 58.42 |
I have also used a 20×4 LCD which will display our heart beat value. You should download this New LCD Library for Proteus. I have counted the heart beat for ten seconds and then I have multiplied it with 6 to get the heart beat per minute which is abbreviated as bpm (beats per minute). So, let’s get started with Heart Beat Monitor using Arduino in Proteus ISIS.
Heart Beat Monitor using Arduino in Proteus
- First of all, click the below button to download this complete Proteus simulation & Arduino code for Heart Beat Monitor:
Download Proteus Simulation & Arduino Code
- Now let’s have a look at How we have designed this simulation and How it works.
- So, design a simple circuit in Proteus as shown in below figure:
- As you can see in the above figure, we have our Arduino UNO board along with LCD and Heart Beat Sensor.
- There’s also a Button attached to Pin # 2, so when we press this button our Arduino will start counting the Heart Beat and will update it on the LCD.
- Here’s the code which I have used for this Heart Beat Monitor using Arduino:
#include <LiquidCrystal.h> #include <TimerOne.h> LiquidCrystal lcd(13, 12, 11, 10, 9, 8); int HBSensor = 4; int HBCount = 0; int HBCheck = 0; int TimeinSec = 0; int HBperMin = 0; int HBStart = 2; int HBStartCheck = 0; void setup() { // put your setup code here, to run once: lcd.begin(20, 4); pinMode(HBSensor, INPUT); pinMode(HBStart, INPUT_PULLUP); Timer1.initialize(800000); Timer1.attachInterrupt( timerIsr ); lcd.clear(); lcd.setCursor(0,0); lcd.print("Current HB : "); lcd.setCursor(0,1); lcd.print("Time in Sec : "); lcd.setCursor(0,2); lcd.print("HB per Min : 0.0"); } void loop() { if(digitalRead(HBStart) == LOW){lcd.setCursor(0,3);lcd.print("HB Counting ..");HBStartCheck = 1;} if(HBStartCheck == 1) { if((digitalRead(HBSensor) == HIGH) && (HBCheck == 0)) { HBCount = HBCount + 1; HBCheck = 1; lcd.setCursor(14,0); lcd.print(HBCount); lcd.print(" "); } if((digitalRead(HBSensor) == LOW) && (HBCheck == 1)) { HBCheck = 0; } if(TimeinSec == 10) { HBperMin = HBCount * 6; HBStartCheck = 0; lcd.setCursor(14,2); lcd.print(HBperMin); lcd.print(" "); lcd.setCursor(0,3); lcd.print("Press Button again."); HBCount = 0; TimeinSec = 0; } } } void timerIsr() { if(HBStartCheck == 1) { TimeinSec = TimeinSec + 1; lcd.setCursor(14,1); lcd.print(TimeinSec); lcd.print(" "); } }
- In this code, I have used a TimerOne Library which creates an interrupt after every 1sec.
- On each interrupt, it executes timerIsr() function, in which I have placed a check that whenever this interrupt will call we will increment TimeinSec variable.
- So, when TimeinSec will become equal to 10 then I am simply multiplying it with 6 and updating it on the LCD.
- So, use the above code and get your Hex File from Arduino Software and update it in your Proteus Simulation.
- Now run your Proteus Simulation and you will get something as shown in below figure:
- Now click this HB button and it will start counting the HB as well as will count the Time in seconds.
- After ten seconds it will multiply the current HB with six and will give the Heart Beat Per Minute.
- Here’s a final image of the result:
- You can change the value of Heart Beat from the variable resistor connected with Heart Beat Sensor.
- Let’s change the value of variable resistance connected to Heart Beat sensor, and have a look at the results.
- You have to press the button again in order to get the value.
- Here’s the screenshot of the results obtained:
- So, now the heart is beating a little faster and we have got 108 bpm.
- If you run this simulation then you will notice that the second is quite slow which I think is because of Proteus.
- I have tested this code on hardware and it worked perfectly fine, although you need to change heart beat sensor’s values in coding.
- Here’s the video in which I have explained the working of this Heart Beat Monitor Simulation in detail.
1 Comment
Error compiling for board arduino/geniuo Uno. What i do sir | https://www.theengineeringprojects.com/2018/01/heart-beat-monitor-using-arduino-proteus.html | CC-MAIN-2020-10 | refinedweb | 678 | 62.48 |
Most current editors provide built in spell checking which validates user input. Spell checkers can be found in anything from word processors, to email clients, and even web browsers. Depending on the scenario, these spell checking utilities vary in complexity. Simple spell checkers compare the input against a default dictionary and then highlight mismatches. More complex spell checkers allow dynamic modification of the dictionary.
WPF includes a spell checker, but currently it only uses the OS provided dictionary for input validation. It lacks the functionality to allow the default dictionaries to be augmented with custom dictionaries provided by an application. This has been a major issue for apps which target specific industries with specialized lingo. Those apps are plagued by misspelling notifications.
A chat client is a classic example of an application that displays specialized words. In such an application, web slang such as lol, brb, or ttyl is frequently used. While these acronyms are not official words in English, in the context of a chat client these are not misspelled words.
This problem has been fixed in WPF 4.0. The ability to augment spelling support, via an application provided custom dictionary, has been added to the framework.
Custom Dictionaries API
The property CustomDictionaries has been added to the SpellCheck class.
· CustomDictionaries – This is a read-only dependency property which is an IList of URIs. Each URI points to a custom dictionary to be used for spell checking. If spell checking is disabled, SpellCheck.IsEnabled == false, no spell checking will occur, even if a custom dictionary is provided.
Custom Dictionary Format
Custom dictionaries are simple text files with single words on each line. Each line represents a legitimate word which should not be marked as misspelled.
The first line of this file can specify a language that the custom dictionary should correspond to (e.g. “#LID 1033” means that this custom dictionary applies to US English). If no language is set, the custom dictionary will apply to all default dictionaries.
· #LID 1033 – English
· #LID 3082 – Spanish
· #LID 1031 – German
· #LID 1036 – French
<Window x:Class=”CustomSpellerDictionaries.MainWindow”
xmlns=””
xmlns:x=””
xmlns:sys1=”clr-namespace:System;assembly=system”
Title=”Custom Dictionaries Example”>
<Grid>
<TextBox Text=”Finallly, my name, Chipalo, is not marked as misspelled.”
TextOptions.TextFormattingMode=”Display”>
<SpellCheck.IsEnabled>true</SpellCheck.IsEnabled>
<SpellCheck.CustomDictionaries>
<sys1:Uri>D:\TEMP\dictionary.lex</sys1:Uri>
</SpellCheck.CustomDictionaries>
</TextBox>
</Grid>
</Window>
Contents of D:\TEMP\dictionary.lex
Chipalo
TextBlock created with the above XAML. Notice that spell checking is enabled but “Chipalo” is not marked as misspelled.
Custom Dictionaries Suggested Usage
Custom dictionaries should be used when an application wants a set of words, which are not found in the default dictionary for a language, to not be marked as misspelled. Specifying a langue for a custom dictionary will limit the use of the dictionary to only the denoted language. A custom dictionary which is not marked as language specific will be used for spell checking in all languages which spell checking is supported.
Spelling support in WPF is limited to four languages: English, Spanish, French, and German. Custom dictionaries are designed to augment the default dictionaries for these languages and not to extend spelling support to other languages. Creating a custom dictionary which contains common Russian words, marking the dictionary as a Russian dictionary, and adding it to the SpellCheck object of a TextBox or RichTextbox will not add Russian spelling support. If the first line of a custom dictionary denotes that the dictionary is used for a language which is not supported, that dictionary will be ignored.
– Chipalo
Still no additional dictionaries? AND the CustomDictionaries API is off limits to other languages? That’s not a feature. That’s the LACK of a feature as far as I’m concerned.
I kinda agree with Ruben. I would like to add a custom dictionary of my words, in effect, my own language. It would have been great if I could create my own language with custom dictionaries.
Ruben – I would agree that additional dictionaries and customization of existing dictionaries are distinct features. Both features are targeted at specific scenarios and have different design considerations. For example, additional dictionaries (which support another language) are much larger than dictionary addons (which support industry specific lingo). The two need to be created, processed, and stored differently. We have done the work to support modification of existing dictionaries in WPF 4.0, and we would like to extend this work in future releases to support additional dictionaries.
Chipalo
I would like very much if you have a window that can spell like I had in XP
Eugene – Sorry I am not sure which feature you are referring to.
Good to hear that custom dictionaries are in .NET 4.0. However, where can I locate detailed information on what dictionaries are available?
I understand only 4 languages are supported, English, German, French and Spanish are available. But is only en-US supported, how about en-GB?
Documentation on this topic in MSDN docs. seems sparse at best. Where can I locate exact details on the languages supported?
I agree with Ruben, this is a missing feature for all people speaking other than english, french, german or spanish.
"…WPF includes a spell checker, but currently it only uses the OS provided dictionary for input validation…" this is exactly what is expected from a feature like this, italian dictionary for an italian OS…
I can’t see any valid reason to limit this to only 4 languages, having MS full dictionaries for all localized Office version…
I also agree with Ruben and Nicola, that looks to me like the key feature that users want is the dictionary on its own language… custom dictionarys like you have down now is also good but not what is most wanted.
Do you have that in plan for the final release of wpf4.0 ?
Luke – I’ve asked the team that makes the dictionaries about this issue and most dialects of the supported languages should be supported. Unfortunately, no good documentation exists that you can quickly reference. I threw together a quick prototype to test en-gb and it is supported. For now, you will have to test other dialects in a similar manner.
<Grid>
<Grid.RowDefinitions>
<RowDefinition/>
<RowDefinition/>
</Grid.RowDefinitions>
<TextBox xml:lang=”en-us” SpellCheck.IsEnabled=”True”>Arse is also a serious condition otherwise know as Arylsulfatase E</TextBox>
<TextBox Grid.Row=”1″ xml:lang=”en-gb” SpellCheck.IsEnabled=”True”>Arse is also a serious condition otherwise know as Arylsulfatase E</TextBox>
</Grid>
Rui & Nicola – I agree that the ability to extend spelling support to other languages is a very important feature that many customers want. We would like to provide this functionality, but the feature set that we chose for .Net4.0 did not allow time for us to do this. This is a feature will not be part of .Net4.0, but we are considering for the next release of WPF.
I wish that another languages would supported too. I have hope, is can be, example, open project for another dictionaries. But why nor create open web-service for every language with possibility add appropriate words. And then everybody developer and user would can update your’s dictionary at local PC? | https://blogs.msdn.microsoft.com/text/2009/10/02/custom-dictionaries/ | CC-MAIN-2017-04 | refinedweb | 1,211 | 56.35 |
In this post I'd like to introduce you to what I have been calling generic numeric programming.
What do we mean by generic numeric programming? Let's take a simple example; we
want to add 2 numbers together. However, we don't want to restrict ourselves to
a particular type, like
Int or
Double, instead we just want to work with
some generic type
A that can be added. For instance:
def add[A](x: A, y: A): A = x + y
Of course, this won't compile since
A has no method
+. What we are really
saying is that we want
A to be some type that behaves like a number. The
usual OO way to achieve this is by creating an interface that defines our
desired behaviour. This is less than ideal, but if we were to go this route,
our
add function might look like this:
trait Addable[A] { self: A => def +(that: A): A } def add[A <: Addable[A]](x: A, y: A): A = x + y
We've created an interface that defines our
+ method, and then bound our type
parameter
A to subsume this interface. The main problem with this is that we
can't directly use types out of our control, like those that come in the
standard library (ie.
Int,
Long,
Double,
BigInt, etc). The only option
would be to wrap these types, which means extra allocations and either explicit
or implicit conversions, neither of which are good options.
A better approach is to use type classes. A discussion on type classes is out
of the scope of this post, but they let us express that the type
A must have
some desired behaviour, without inheritence. Using the type class pattern, we
could write something like this:
trait Addable[A] { // Both arguments must be provided. Addable works with the type A, but // does not extend it. def plus(x: A, y: A): A } // This class adds the + operator to any type A that is Addable, // by delegating to that Addable's `plus` method. implicit class AddableOps[A](lhs: A)(implicit ev: Addable[A]) { def +(rhs: A): A = ev.plus(lhs, rhs) } // We use a context bound to require that A has an Addable instance. def add[A: Addable](x: A, y: A): A = x + y
We can then easily add implementations for any numeric type, regardless if we control it or not, or even if it is a primitive type:
implicit object IntIsAddable extends Addable[Int] { def plus(x: Int, y: Int): Int = x + y } add(5, 4)
This is, more or less, the approach Spire takes.
Why be generic? The flippant answer I could give is: why not? I do hope that after reading this, that is an acceptable answer to you, but I know that's not what you came here for.
The first reason is the obvious one; sometimes you want to run the same
algorithm, but with different number types. Euclid's GCD algorithm is the same
whether you are using
Byte,
Short,
Int,
Long, or
BigInt. Why implement
it only for 1, when you could do it for all 5? Worse; why implement it 5
times, when you need only implement it once?
Another reason is that you want to push certain trade-offs, such as speed vs
precision to the user of your library, rather than making the decision for
them.
Double is fast, but has a fixed precision.
BigDecimal is slow, but
can have much higher precision. Which one do you use? When in doubt, let
someone else figure it out!
A last great reason is that it let's you write less tests and can make testing much less hairy.
So, what does a generic version of Euclid's GCD algorithm look like? Spire strives to make generic numeric code look more or less like what you'd write for a direct implementation. So, let's let you compare; first up, the direct implementation:
def euclidGcd(x: Int, y: Int): Int = if (y == 0) x else euclidGcd(y, x % y)
With Spire, we can use the
spire.math.Integral type class to rewrite this as:
import spire.math.Integral import spire.implicits._ def euclidGcd[A: Integral](x: A, y: A): A = if (y == 0) x else euclidGcd(y, x % y)
The 2 methods are almost identical, save the
Integral context bound.
Integral gives us many methods we expect integers to have, like addition,
multiplication, and euclidean division (quotient + remainder).
Because Spire provides default implicit instances of
Integral for all of the
integral types that come in the Scala standard library, we can immediately use
euclidGcd to find the GCD of many integer types:
euclidGcd(42, 96) euclidGcd(42L, 96L) euclidGcd(BigInt(42), BigInt(96))
This is much better than writing 5 different versions of the same algorithm!
With Spire, you can actually do away with
euclidGcd altogether, as
gcd
comes with
Integral anyways:
spire.math.gcd(BigInt(1), BigInt(2))
Another benefit of generic numeric programming, is that you can push the choice
of numeric type off to someone else. Rather than hardcode a method or data
structure using
Double, you can simple require some
Fractional type.
I actually first found a need for generic numeric programming after I had
implemented a swath of algorithms with double precision floating point
arithmetic, only to find out that the minor precision errors were causing
serious correctness issues. The obvious fix was to just to use an exact type,
like
spire.math.Rational, which would've worked for many of my purposes.
However, many of the algorithms actually worked fine with doubles or even
integers, where as others required exact n-roots (provided by a number type
like
spire.math.Real). Being more precise meant everything got slower, even
when it didn't need to be. Being less precise meant some algorithms would
occasionally return wrong answers. Abstracting out the actual number type
used meant I didn't have to worry about these issues. I could make the choice
later, when I knew a bit more about my data, performance and precision
requirements.
We can illustrate this using a simple algorithm to compute the mean of some numbers.
import spire.math._ import spire.implicits._ // Note: It is generally better to use an incremental mean. def mean[A: Fractional](xs: A*): A = xs.reduceLeft(_ + _) / xs.size
Here, we don't care what type
A is, as long as it can be summed and divided.
If we're working with approximate measurements, perhaps finding the mean of a
list of
Doubles is good enough:
mean(0.5, 1.5, 0.0) // = 0.6666666666666666
Or perhaps we'd like an exact answer back instead:
import spire.math.Rational mean(Rational(1, 2), Rational(3, 2), Rational(0)) // = Rational(2, 3)
The main thing here is that as a user of the
mean function, I get to choose
whether I'd prefer the speed of
Double or the precision or
Rational. The
algorithm itself looks no different, so why not give the user the choice?
One of the best things is that if you write test code that abstracts over the number type, then you can re-use your tests for many different types. Spire makes great use of this, to ensure instances of our type classes obey the rules of algebra and that the number types in Spire (Rational, Complex, UInt, etc) are fundamentally correct.
There is another benefit though -- you can ignore the subtleties of floating
point arithmetic in your tests if you want! If your code works with any number
type, then you can test with an exact type such as
spire.math.Rational or
spire.math.Real. No more epsilons and NaNs. You shouldn't let this excuse you
from writing numerically stable code, but it may save you many false negatives
in your build system, while also making you more confident that the fundamentals
are correct.
This is a big topic, deserving of its own blog post (you know who you are), so I'll leave this here.
We've already seen
Integral, which can be used wherever you need something
that acts like an integer. We also saw the modulus operator,
x % y, but not
integer division. Spire differentiates between integer division and exact
division. You perform integer division with
x /~ y. To see it in action,
let's use an overly complicated function to negate an integer:
import spire.math._ import spire.implicits._ def negate[A: Integral](x: A) = -(42 * (x /~ 42) + x % 42)
Instances of
Integral exist for
Byte,
Short,
Int,
Long and
BigInt.
Another type class Spire provides is
Fractional[A], which is used for things
that have "exact" division. "Exact" is in quotes, since
Double or
BigDecimal division isn't really exact, but it's close enough that we give
them a pass.
Fractional also provides
x.sqrt and
x nroot k for taking the
roots of a number.
def distance[A: Fractional](x: A, y: A): A = (x ** 2 + y ** 2).sqrt
Note that
Fractional[A] <: Integral[A], so anything you can do with
Integral, you can do with
Fractional[A] too. Here, we can use
distance
to calculate the length of the hypotenuse with
Doubles,
Floats,
BigDecimals, or some of Spire's number types like
Real or
Rational.
Lastly, you often have cases where you just don't care if
/ means exact or
integer division, or whether you are taking the square root of an
Int or a
Double. For this kind of catch-all work Spire provides
Numeric[A].
If you've already hit the types of problems solved by generic numeric
programming, then you may have seen that
scala.math also provides
Numeric,
Integral, and
Fractional, so why use Spire? Well, we originally created
Spire largely due to the problems with the type classes as they exist in Scala.
To start, Scala's versions aren't specialized, so they only worked with boxed versions of primitive types. The operators in Scala also required boxing, which means you need to trade-off performance for readability. They also aren't very useful for a lot of numeric programming; what about nroots, trig functions, unsigned types, etc?
Spire also provides many more useful (and specialized) type classes. Some are
ones you'd expect, like
Eq and
Order, while others define more basic
algebras than
Numeric and friends, like
Ring,
Semigroup,
VectorSpace,
etc.
There are many useful number types that are missing from Scala in Spire, such
as
Rational,
Complex,
UInt, etc.
Spire was written by people who actually use it. I somewhat feel like Scala's
Numeric and friends weren't really used much after they were created, other
than for Scala's NumericRange support (ie.
1.2 to 2.4 by 0.1). They miss
many little creature comforts whose need becomes apparent after using Scala's
type classes for a bit.
One of Spire's goals is that the performance of generic code shouldn't suffer. Ideally, the generic code should be as fast as the direct implementation. Using the GCD implementation above as an example, we can compare Spire vs. Scala vs. a direct implementation. I've put the benchmarking code up as a Gist.
gcdDirect: 29.981 1.00 x gcdDirect gcdSpire: 30.094 1.00 x gcdDirect gcdSpireNonSpec: 36.903 1.23 x gcdDirect gcdScala: 38.989 1.30 x gcdDirect
For another example, we can look at the code to find the mean of an array.
meanDirect: 10.592 **1.00 x gcdDirect** meanSpire: 10.638 **1.00 x gcdDirect** meanSpireNonSpec: 13.434 **1.26 x gcdDirect** meanScala: 19.388 **1.83 x gcdDirect**
Spire achieves these goals fairly simply. All our type classes are
@specialized, so when using primitives types you can avoid boxing. We then use macros to remove
the boxing normally required for the operators by the implicit conversions.
Using
@specialized, both
gcdSpire and
meanSpire aren't noticably slower
than the direct implementation. We can see the slow down caused by dropping
@specialized in
gcdSpireNonSpec and
meanSpireNonSpec. The difference
between
gcdSpireNonSpec and
gcdScala is because Spire doesn't allocate an
object for the
% operator (using macros to remove the allocation). The
difference is even more pronounced between
meanSpireNonSpec and
meanScala.
Numeric,
Integral, and
Fractional
The 3 type classes highlighted in this post are just the tip of the iceberg.
Spire provides a whole slew of type classes in
spire.algebra. This package
contains type classes representing a wide variety of algebraic structures,
such as
Monoid,
Group,
Ring,
Field,
VectorSpace, and more. The 3 type
classes discussed above provide a good starting point, but if you use Spire in
your project, you will probably find yourself using
spire.algebra more and
more often. If you'd like to learn more, you can watch my talk on abstract
algebra in Scala.
As an example of using the algebra package,
spire.math.Integral is simply
defined as:
import spire.algebra.{ EuclideanRing, IsReal } trait Integral[A] extends EuclideanRing[A] with IsReal[A] // Includes Order[A] with Signed[A]. with ConvertableFrom[A] with ConvertableTo[A]
Whereas
spire.math.Fractional is just:
import spire.algebra.{ Field, NRoot } trait Fractional[A] extends Integral[A] with Field[A] with NRoot[A]
Spire also adds many new useful number types. Here's an incomplete list:
Spire also provides better operator integration with
Int and
Double. For
instance,
2 * x or
x * 2 will just work for any
x whose type has an
Ring. On the other hand, Scala requires something like
implicitly[Numeric[A]].fromInt(2) * x which is much less readable. This
also goes for working with fractions;
x * 0.5 will just work, if
x has a
Field.
Spire has a basic algebra that let's us work generically with numeric types. It does this without sacrificing readability or performance. It also provides many more useful abstractions and concrete number types. This means you can write less code, write less tests, and worry less about concerns like performance vs. precision. If this appeals to you, then you should try it out!
There is some basic information on getting up-and-running with Spire in SBT on
Spire's project page. If you have any further
questions, comments, suggestions, criticism or witticisms you can say what you
want to say on the Spire mailing list
or on IRC on Freenode in
#spire-math. | http://typelevel.org/blog/2013/07/07/generic-numeric-programming.html | CC-MAIN-2014-41 | refinedweb | 2,400 | 64.2 |
On Thu, 18 Feb 1999, Raul Miller wrote:> Alexander Viro <viro@math.psu.edu> wrote:> > OK, so you are doing lookup on foo from /bar/baz. You see that foo is a ^^^^^^> > directory and it has another parent. You see that this parent is nowhere> > near dcache/icache. Worse yet, actually it's the end of *long* chain of> > directories and none of them (except the root, that is) is in dcache. To> > make it even nastier, suppose that there are fan-ins in that chain (i.e> > some of those directories also have multiple parents). Your actions?> Bring at least one instance of each parent into dcache. I don't think> this should be significantly worse than having a long path for the first> reference to a directory. Erm??? *Their* parents should come into dcache too. > Note that I'm presuming that, within the filesystem, each directory must> have an internal unique id (perhaps block # -- if the file system doesn't> have inodes), and that the test you want to perform is: directory being> renamed is not in the set of {target directory, ancestors of target> directory in this same filesystem}. *All* ancestors, right? How would you recalculate this set onrename/rmdir/unlink? And besides, it's O(n^2) memory (n - numberof inodes in the game) in the worst case, O(n*depth) in best.> > Now, assume that another lookup goes through the alternative path in the> > same time from the other direction. Your actions wrt locking?> > Use a per-filesystem directory rename lock when renaming a directory> and parallel parents are an issue. [I guess, to be safe, we'd have to> use the lock to determine if parallel parents are an issue.] Erm... You'll need to do it on any lookup. Right now we lock theparent on each lookup step and release it before the next one. It is notrename-specific. Locking issues will become very tricky unless you aregoing to throttle lookups in the same way it is done with rename. I.e.single-threaded fs. Welcome back to Minix. Locking is used *not* only forrename/rename race prevention. Without it you'll get a race in about anypair of namespace-related operations.-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.rutgers.eduPlease read the FAQ at | http://lkml.org/lkml/1999/2/18/128 | CC-MAIN-2013-48 | refinedweb | 394 | 66.33 |
Technote (FAQ)
Question
Set an XML node to nil with xmlns:xsi="" xsi:nil="true" when using IBM Rational Integration Tester
Cause
An empty node in XML implicitly sends an empty string. In order to send a nil or null in XML it is necessary to set xsi:nil="true" as an attribute on the element where xsi refers to the namespace "". This document explains how to achieve the same effect from IBM Rational Integration Tester.
Answer
Imagine a node namded "offences" which currently has no value defined, and which should be nil but which should not be omitted from the XML generated.
Firstly we must make the stub sent null values. To do this open the stub, select the "Output" tab, right click the "text(String) {XML}" node and select "properties", and check "Send NULL values". This will send the output:
<offences/>
Now right click the "offences" node and select "Add Child" and "Text". A "(Text)" node will be added below "offences". Double click the new node to open the field editor and change the Action Type to "Null". The output will now be:
<offences xmlns:
This will need to be done for each node which should be nil. | http://www-01.ibm.com/support/docview.wss?uid=swg21644740 | CC-MAIN-2014-15 | refinedweb | 200 | 77.16 |
Reading Image Sequences with OpenCV
Reading Image Sequences with OpenCV
Computer Vision data is often in the form of an image sequence (think
img0.png, img1.png ... img999.png) whether it is originally from a video or maybe a medical imaging set. Now writing an image sequence reader is pretty trivial and most of us have probably written a few however what is the point of re-writing code if its already been done? Anyways one day I decided that I was going to write a really good sequence reader and contribute it to OpenCV, however while looking at how I needed to form my reader I noticed that this functionality was already in OpenCV! curses! the time I’d wasted!
My goals immediately changed from writing my own sequence reader to writing a proper sample to demonstrate the usage of this functionality. It is actually super self explanatory it uses the familiar cv::VideoCapture interface only instead of a path to a video you send it a path to an image sequence. You can send it a generic string (
%0D,
%02D etc. for the precision of the numbering) or the exact path of the first image and it figures out the rest!
This example code explains it better than words:
/* * starter_image_sequence_reader.cpp * * Created on: July 23, 2012 * Author: Kevin Hughes * * A simple example of how to use the built in functionality of cv::VideoCapture to handle * sequences of images. Image sequences are a common way to distribute data sets for various * computer vision problems, for example the change detection data set from CVPR 2012 * * */ #include <opencv2/core/core.hpp> #include <opencv2/highgui/highgui.hpp> #include <iostream> using namespace cv; using namespace std; void help(char** argv) { cout << "\nThis program gets you started reading a sequence of images using cv::VideoCapture.\n" << "Image sequences are a common way to distribute video data sets for computer vision.\n" << "Usage: " << argv[0] << " <path to the first image in the sequence>\n" << "example: " << argv[0] << " right%%02d.jpg\n" << "q,Q,esc -- quit\n" << "\tThis is a starter sample, to get you up and going in a copy paste fashion\n" << endl; } int main(int argc, char** argv) { if(argc != 2) { help(argv); return 1; } string arg = argv[1]; VideoCapture sequence(arg); if (!sequence.isOpened()) { cerr << "Failed to open Image Sequence!\n" << endl; return 1; } Mat image; namedWindow("Image | q or esc to quit", CV_WINDOW_NORMAL); for(;;) { sequence >> image; if(image.empty()) { cout << "End of Sequence" << endl; break; } imshow("image | q or esc to quit", image); char key = (char)waitKey(500); if(key == 'q' || key == 'Q' || key == 27) break; } return 0; } | http://kevinhughes.ca/tutorials/reading-image-sequences-with-opencv.html | CC-MAIN-2016-50 | refinedweb | 437 | 60.85 |
FREE PROXY SOFTWARE
Saturday, July 4, 2009
PROXYWAY EVOLUTION v.5.0
has been released!
Latest Release
ProxyWay Extra 5.0
- September 1, 2008
ProxyWay Pro 5.0
- September 1, 2008
ProxyWay 5.0
- September 1, 2008
ProxyWay Evolution 5.0 New Features
NEW!
Added advanced
Proxy List Filter
that helps you easily select proxies according to their anonymity, response time (proxy speed), protocol type, country, IP and port number.
NEW!
Added proxy filter for random proxies that allows you to set special rules (filters) for random proxies e.g. by proxy location, speed, etc
NEW!
Improved 'Check proxies' module - more sensitive proxy anonymity detection
NEW!
Improved 'Change PC info' module
NEW!
Significantly simplied creating proxy chains. Program automatically selects necessary proxy type from supported types.
NEW!
Updated
Country list
NEW!
Improved 'Proxy Finder' - country detection.
NEW!
Changed program interface.
NEW!
To export only proxies that meet your needs (e.g. from certain country, certain type of proxies, etc) you can easily filter proxy list and then export it.
What's new
ProxyWay Extra
ProxyWay Pro
ProxyWay server
.
Shredder
option helps you completely
delete history
tracks and previously deleted files without recovery
2 Privacy.com
- Information about on-line privacy, security, viruses. FREE tests to check IP address, get detailed IP Whois information, determine IP or Domain geographic location.
Clear All History
- deletes Internet surfing and other computer activity tracks from your PC
Delete Files Permanently
- advanced shredder
Encryption And Decryption
- text and file encryption software for personal and professional security. Safely send emails and store sensitive information on your PC
My Password Generator
- advanced password generator software
Freeware Download.org
- your source of free tools
Create free ringtone
-
Make your own ringtone
ProxyWay Extra 5.0 - Evolution in your Internet surfing!
Meet ProxyWay Evolution - the most powerful proxy software
What is ProxyWay Evolution? It's version 5.x of ProxyWay product family
ProxyWay Evolution - new view on proxies
Make anonymous surfing and proxy management easier than ever
ProxyWay Evolution! Discover It! New step in proxy evolution!
Get your FREE 15-day Trial of ProxyWay Evolution Now
ProxyWay software
Easy. Powerful. Multifunctional. Over 1'000'000 downloads!
ProxyWay
- free multifunctional proxy software.
ProxyWay Pro
- powerful proxy management software. More options for anonymous surfing
ProxyWay Extra
- advanced proxy software. Everything you need and even more for operating with proxies and anonymous surfing
Don't know what to choose?
Compare our solutions
Free proxy sites list : Best proxy sites : Top proxy sites
Here is a list of proxy sites - top proxy sites (HTTP proxy sites, HTTPS proxy sites, SOCKS 4 proxy sites, SOCKS 5 proxy sites).
Free proxy sites list
- one of the top proxy sites with
daily updated HTTP/HTTPS proxy lists
- one of the best proxy sites with
free proxy servers list
- free proxy site with
HTTP proxy list
- one of the top proxy sites with daily updates free proxy lists (
HTTP proxies,
SOCKS proxies
(
SOCKS4,
SOCKS5 proxy list),
free anonymous proxy list,
IRC proxy list
- proxy site with HTTP proxy list
- proxy site with fresh HTTP/HTTPS proxy lists
- one of proxy sites that offers
anonymous proxy lists,
SOCKS4, SOCKS4A, SOCKS5 proxy lists
- one of the top proxy sites with regulary updated HTTP/HTTPS proxies lists
Daily updated HTTP/HTTPS proxy list
- proxy site with regularly updated free proxy lists (HTTP, HTTPS and SOCKS proxies)
Free daily updates SOCKS proxy list
- one of best proxy sites with fresh
HTTP proxy list
- free proxy site with HTTP/HTTPS proxy lists, SOCKS proxy list (SOCKS 4 proxies, SOCKS 5 proxies)
- proxy site with free proxy servers list:
HTTP/HTTPS proxy lists,
SOCKS (SOCKS4, SOCKS5) proxies
Need more
free proxy sites
with proxy lists?
to find more free proxy sites (HTTP proxy sites, HTTPS proxy sites, SOCKS proxy sites)
Web proxy sites
(CGI proxy sites) lists
Proxy sites that offer list of web proxy serversWay Extra - Anonymous proxy software
Easy setup - 'Auto Configuration' option
Powerful Proxy Finder
Advanced Proxy Checker (10 threads, check proxy country, proxy region/state)
Supports CGI proxies (web proxies)
Extended
Proxy Management
System
NEW!
Advanced proxy filter
Hide Your Real IP
&
Change PC Info
Erases All Traces
of
Internet Surfing
Ads &
Popup &
Cookies
Management
Get your FREE 10-day Trial of ProxyWay Extra Now
ProxyWay Pro - multifunctional proxy software
Easy setup - 'Auto Configuration' option
Powerful Proxy Finder
Advanced Proxy Checker
Extended
Proxy Management
System
NEW!
Advanced proxy filter
Works with CGI proxies (web proxies)
Hides
IP &
Changes PC Info
Erases All Traces of
Internet Surfing
Ads &
Popup &
Cookies Management
Get your FREE 10-day Trial of ProxyWay Pro Now
How to use ProxyWay to work with proxy sites
Where can I find free proxy sites (open proxy sites, new proxy sites)?
How can I download proxy lists from proxy sites automatically?
What do I do to download free proxy lists from different proxy sites simultaneously?
How can I download proxy lists from proxy sites automatically?
Where can I find free web proxy sites list?
Where can I find free proxy sites (open proxy sites, new proxy sites)?
You can find free proxy sites with proxy lists using search engines or you can use our list of free proxy sites (best proxy sites list - HTTP proxy sites, HTTPS proxy sites list, SOCKS proxy sites list, high anonymous proxy sites list, anonymous proxy sites list, transparent proxy sites).
to find a list of proxy sites (free proxy sites) where you can find
free proxy server lists.
Need more open proxy sites with free proxy lists?
to find more proxy sites with free daily updated proxy lists - HTTP proxy sites, HTTPS proxy sites, SOCKS proxy sites)
How can I download proxy lists from proxy sites automatically?
Using ProxyWay Extra you can easily download proxy lists from different proxy sites (HTTP proxy sites, HTTPS proxy sites, SOCKS proxy sites, SOCKS4 proxy sites, SOCKS4A proxy sites, SOCKS5 proxy sites, SOCKS5 with Authentication proxy servers sites).
You can set the program to download proxies lists from different free proxy sites simultaneously. For more convenience you can download proxies from certain countries only.
ProxyWay Extra has advanced "Update program proxy list" scheduler that allows you to update proxy list on a regular basis.
Please
to learn how to download proxy lists from different proxy sites.
What do I do to download free proxy lists from different proxy sites simultaneously?
Using ProxyWay Extra you can easily download proxy lists from different proxy sites (HTTP proxy sites, HTTPS proxy sites, SOCKS proxy sites, SOCKS4 proxy sites, SOCKS4A proxy sites, SOCKS5 proxy sites, SOCKS5 with Authentication proxy servers sites, anonymous proxy sites).
You can set the program to download proxies lists from different proxy sites simultaneously - just put the check mark in "Use this url for proxy list update" column, for urls you want to use for proxy list Auto Update.
Using the
Update proxy list
option, you can automatically add new proxies to your proxy list from the proxy lists in:
the form of table - proxy list is formed as a table using special html tags;
IP:port format (e.g. 155.255.111.22:80);
"only IP" format (e.g. 155.255.111.22) - for proxy pages (in program
Settings
).
To update proxy list using Proxy Finder:
Select Proxy => Proxy list => Proxy Fnder.
You can update proxy list using different proxy sites simultaneously.
Specify what proxy site url(s) you want to use for proxy list updating:
-
import proxy sites urls list from file
-
add proxy sites url(s) manually
Import proxy sites urls list:
To simplify proxy list updating, you can use our list of third-party proxy sites. Just click "Import" in "Update proxy list" window and select SiteList.txt.
Add proxy sites url(s) for proxy list update manually:
Click the "Add to List" button and specify what proxy site url you want to use for proxy list updating.
To save your proxy sites list, click "Export" button to export urls list to *.txt file (in special ProxyWay format).
Select proxy sites url(s) you want to use for proxy list updating (PUT the check mark in
U
column). You can use "+" and "-" buttons to select/deselect all proxy sites urls in the list
- to update proxy list immediately, click the
Update
button.
- for proxy list auto update (to update proxy list on a regular basis) you can use "Update proxy list" scheduler
to find a list of proxy sites (free proxy sites) where you can find
free proxy server lists.
How can I download proxy lists from proxy sites automatically?
You can easily turn
Update Proxy list Scheduler
on/off to enable/disable download proxies from proxy sites automatically.
To download proxies automatically:
1. Select
Proxy -> Settings -> Proxy list scheduler
2. In
'Run proxy list update'
area turn the
Scheduler
on. By default,
Proxy list update Scheduler
is off.
3. Set when and how often you want to run proxy list auto update.
4. Click the
'Apply'
button.
Please
to learn how to download proxy lists from different proxy sites automatically.
Does ProxyWay allow to download proxy lists in different formats?
Using ProxyWay Extra you can use different proxy list sites for proxy list updating Different proxy sites offer proxy lists in different formats:
- proxy ip:port without country in plain text format
- proxy ip:port with country in plain text format (country is separated from ip:port by tab)
- proxy ip:port with proxy country in table format (ip:port in one column, country in another one)
- proxy ip, proxy port and proxy country in different columns in table format
- "only IP" format for proxy pages with the same port number for all proxies if port information isn't placed near each proxy IP nut is placed e.g. only on the top of the page
If no proxy port information was found while proxy list updating, added ability to set a default proxy port for certain url (in program Settings).
ProxyWay.
Where can I find free web proxy sites list?
Proxy sites that offer list of web proxy sites (cgi proxy sites): server FAQ : Free proxy server lists
What are web proxy sites?
Web proxy sites (CGI proxy sites) are like buffers between you and other web resources. When you request a web page directly, you are exposing your details to the server you connect to. Web proxies (CGI proxies) are web sites which allow a user to access other sites through them. A user visits a web proxy site and then requests a web page, a file or other resources available on other sites. Web proxy sends this request to the necessary site, gets "answer" and returns it to user. A web proxy allows a user to be anonymous and access web sites that may be restricted for users from certain countries or geographical regions. Web proxies are frequently used to gain access to web sites blocked by corporate or school proxies. As well as hiding user's IP address and providing a way to access blocked web sites, a web proxy can also block scripts, cookies, images, etc. Web proxy supports HTTP and sometimes HTTPS and FTP protocols.
Web proxy sites are free proxy sites that help to unlock web sites (access any blocked websites) like MySpace at school or work.
Top
ProxyWay - Free proxy surfing software (proxy freeware)
Free proxy surfing software
Easy setup - 'Auto Configuration' option
Proxy Finder
Proxy Checker
Proxy Management
System
NEW!
Advanced proxy filter
Works with CGI proxies (web proxies)
Hides
IP
Download ProxyWay Now
Recommended
Anonymizer® Anonymous Surfing
Hides IP address
Supports all Internet browsers
Protects the data you send over wireless
Uses 128-bit SSL encryption
Learn More...
How ProxyWay Works
Learn more how
provides
anonymous surfing
, protects your privacy,
hides IP
address
,
How to configure proxy settings
Internet Risks
Every time you surf the web you are subject to different
Internet risks
including hackers, identity thieves, spammers and other people who want to get access to your computer and monitor your
activities
Proxy Way Tip
of the Day | http://www.proxyway.com/www/free-proxy-sites/proxy-sites-list.html | crawl-002 | refinedweb | 1,998 | 60.04 |
!ENTITY sdot "⋅"> ]> ncurses.scripts.mit.edu Git - ncurses.git/blob - doc/hackguide.doc projects / ncurses.git / blob commit grep author committer pickaxe ? search: re fbc8f0725ae68d73841999d6cc1cac631f91d34a 's (a) well-adapted for on-line 124 browsing through viewers that are everywhere; (b) more easily readable 125 as plain text than most other mark-ups, if you don't have a viewer; 126 don've taken these steps at the head of our queue. This means 146 that if you don't, you'll probably end up at the tail end and have to 147're using a traditional 159 asynchronous terminal or PC-based terminal emulator, rather than 160 xterm or a UNIX console entry. 161 It's therefore extremely helpful if you can tell us whether or not 162 your problem reproduces on other terminal types. Usually you'll 163've, 182 it'll find terminfo problems at this stage by noticing that 188 the escape sequences put out for various capabilities are wrong. 189 If not, you're likely to learn enough to be able to characterize 190 any bug in the screen-update logic quite exactly. 191 4. Report details and symptoms, not just interpretations. 192 If you do the preceding two steps, it is very likely that you'll 193 discover the nature of the problem yourself and be able to send us 194 a fix. This will create happy feelings all around and earn you 195 good karma for the first time you run into a bug you really can't 196 characterize and fix yourself. 197 If you're still stuck, at least you'll doesn't 203 throw away any information (actually they're better than 204 un-munched ones because they're easier to read). 205 If your bug produces a core-dump, please include a symbolic stack 206 trace generated by gdb(1) or your local equivalent. 207 Tell us about every terminal on which you've reproduced the bug -- 208 and every terminal on which you can't. Ideally, sent us terminfo 209've got a 223 There's one other interactive tester, tctest, that exercises 232 translation between termcap and terminfo formats. If you have a 233 serious need to run this, you probably belong on our development team! 234 235 A Tour of the Ncurses Library 236 237 Library Overview 238 239 Most of the library is superstructure -- fairly trivial convenience 240 interfaces to a small set of basic functions and data structures used 241 to manipulate the virtual screen (in particular, none of this code 242 does any I/O except through calls to more fundamental modules 243 described below). The files 244 245 lib_addch.c lib_bkgd.c lib_box.c lib_chgat.c lib_clear.c 246 lib_clearok.c lib_clrbot.c lib_clreol.c lib_colorset.c lib_data.c 247 lib_delch.c lib_delwin.c lib_echo.c lib_erase.c lib_gen.c 248 lib_getstr.c lib_hline.c lib_immedok.c lib_inchstr.c lib_insch.c 249 lib_insdel.c lib_insstr.c lib_instr.c lib_isendwin.c lib_keyname.c 250 lib_leaveok.c lib_move.c lib_mvwin.c lib_overlay.c lib_pad.c 251 lib_printw.c lib_redrawln.c lib_scanw.c lib_screen.c lib_scroll.c 252 lib_scrollok.c lib_scrreg.c lib_set_term.c lib_slk.c 253 lib_slkatr_set.c lib_slkatrof.c lib_slkatron.c lib_slkatrset.c 254 lib_slkattr.c lib_slkclear.c lib_slkcolor.c lib_slkinit.c 255 lib_slklab.c lib_slkrefr.c lib_slkset.c lib_slktouch.c lib_touch.c 256 lib_unctrl.c lib_vline.c lib_wattroff.c lib_wattron.c lib_window.c 257 258 are all in this category. They are very unlikely to need change, 259 barring bugs or some fundamental reorganization in the underlying data 260 structures. 261 262 These files are used only for debugging support: 263 264 lib_trace.c lib_traceatr.c lib_tracebits.c lib_tracechr.c 265 lib_tracedmp.c lib_tracemse.c trace_buf.c 266 267 It is rather unlikely you will ever need to change these, unless you 268 want to introduce a new debug trace level for some reasoon. 269 270 There is another group of files that do direct I/O via tputs(), 271 computations on the terminal capabilities, or queries to the OS 272 environment, but nevertheless have only fairly low complexity. These 273 include: 274 275 lib_acs.c lib_beep.c lib_color.c lib_endwin.c lib_initscr.c 276 lib_longname.c lib_newterm.c lib_options.c lib_termcap.c lib_ti.c 277 lib_tparm.c lib_tputs.c lib_vidattr.c read_entry.c. 278 279 They are likely to need revision only if ncurses is being ported to an 280 environment without an underlying terminfo capability representation. 281 282 These files have serious hooks into the tty driver and signal 283 facilities: 284 285 lib_kernel.c lib_baudrate.c lib_raw.c lib_tstp.c lib_twait.c 286 287 If you run into porting snafus moving the package to another UNIX, the 288 problem is likely to be in one of these files. The file lib_print.c 289 uses sleep(2) and also falls in this category. 290 291 Almost all of the real work is done in the files 292 293 hardscroll.c hashmap.c lib_addch.c lib_doupdate.c lib_getch.c 294 lib_mouse.c lib_mvcur.c lib_refresh.c lib_setup.c lib_vidattr.c 295 296 Most of the algorithmic complexity in the library lives in these 297 files. If there is a real bug in ncurses itself, it's probably here. 298 We'll tour some of these files in detail below (see The Engine Room). 299 300 Finally, there is a group of files that is actually most of the 301 terminfo compiler. The reason this code lives in the ncurses library 302 is to support fallback to /etc/termcap. These files include 303 304 alloc_entry.c captoinfo.c comp_captab.c comp_error.c comp_hash.c 305 comp_parse.c comp_scan.c parse_entry.c read_termcap.c write_entry.c 306 307 We'll discuss these in the compiler tour. 308 309 The Engine Room 310 311 Keyboard Input 312 313 All ncurses input funnels through the function wgetch(), defined in 314 lib_getch.c. This function is tricky; it has to poll for keyboard and 315 mouse events and do a running match of incoming input against the set 316 of defined special keys. 317 318 The central data structure in this module is a FIFO queue, used to 319 match multiple-character input sequences against special-key 320 capabilities; also to implement pushback via ungetch(). 321 322 The wgetch() code distinguishes between function key sequences and the 323 same sequences typed manually by doing a timed wait after each input 324 character that could lead a function key sequence. If the entire 325 sequence takes less than 1 second, it is assumed to have been 326 generated by a function key press. 327 328 Hackers bruised by previous encounters with variant select(2) calls 329 may find the code in lib_twait.c interesting. It deals with the 330 problem that some BSD selects don't return a reliable time-left value. 331 The function timed_wait() effectively simulates a System V select. 332 333 Mouse Events 334 335 If the mouse interface is active, wgetch() polls for mouse events each 336 call, before it goes to the keyboard for input. It is up to 337 lib_mouse.c how the polling is accomplished; it may vary for different 338 devices. 339 340 Under xterm, however, mouse event notifications come in via the 341 keyboard input stream. They are recognized by having the kmous 342 capability as a prefix. This is kind of klugey, but trying to wire in 343 recognition of a mouse key prefix without going through the 344 function-key machinery would be just too painful, and this turns out 345 to imply having the prefix somewhere in the function-key capabilities 346 at terminal-type initialization. 347 348 This kluge only works because kmous isn't actually used by any 349 historic terminal type or curses implementation we know of. Best guess 350 is it's a relic of some forgotten experiment in-house at Bell Labs 351 that didn't leave any traces in the publicly-distributed System V 352 terminfo files. If System V or XPG4 ever gets serious about using it 353 again, this kluge may have to change. 354 355 Here are some more details about mouse event handling: 356 357 The lib_mouse()code is logically split into a lower level that accepts 358 event reports in a device-dependent format and an upper level that 359 parses mouse gestures and filters events. The mediating data structure 360 is a circular queue of event structures. 361 362 Functionally, the lower level's job is to pick up primitive events and 363 put them on the circular queue. This can happen in one of two ways: 364 either (a) _nc_mouse_event() detects a series of incoming mouse 365 reports and queues them, or (b) code in lib_getch.c detects the kmous 366 prefix in the keyboard input stream and calls _nc_mouse_inline to 367 queue up a series of adjacent mouse reports. 368 369 In either case, _nc_mouse_parse() should be called after the series is 370 accepted to parse the digested mouse reports (low-level events) into a 371 gesture (a high-level or composite event). 372 373 Output and Screen Updating 374 375 With the single exception of character echoes during a wgetnstr() call 376 (which simulates cooked-mode line editing in an ncurses window), the 377 library normally does all its output at refresh time. 378 379 The main job is to go from the current state of the screen (as 380 represented in the curscr window structure) to the desired new state 381 (as represented in the newscr window structure), while doing as little 382 I/O as possible. 383 384 The brains of this operation are the modules hashmap.c, hardscroll.c 385 and lib_doupdate.c; the latter two use lib_mvcur.c. Essentially, what 386 happens looks like this: 387 388 The hashmap.c module tries to detect vertical motion changes between 389 the real and virtual screens. This information is represented by the 390 oldindex members in the newscr structure. These are modified by 391 vertical-motion and clear operations, and both are re-initialized 392 after each update. To this change-journalling information, the hashmap 393 code adds deductions made using a modified Heckel algorithm on hash 394 values generated from the line contents. 395 396 The hardscroll.c module computes an optimum set of scroll, insertion, 397 and deletion operations to make the indices match. It calls 398 _nc_mvcur_scrolln() in lib_mvcur.c to do those motions. 399 400 Then lib_doupdate.c goes to work. Its job is to do line-by-line 401 transformations of curscr lines to newscr lines. Its main tool is the 402 routine mvcur() in lib_mvcur.c. This routine does cursor-movement 403 optimization, attempting to get from given screen location A to given 404 location B in the fewest output characters posible. 405 406 If you want to work on screen optimizations, you should use the fact 407 that (in the trace-enabled version of the library) enabling the 408 TRACE_TIMES trace level causes a report to be emitted after each 409 screen update giving the elapsed time and a count of characters 410 emitted during the update. You can use this to tell when an update 411 optimization improves efficiency. 412 413 In the trace-enabled version of the library, it is also possible to 414 disable and re-enable various optimizations at runtime by tweaking the 415 variable _nc_optimize_enable. See the file include/curses.h.in for 416 mask values, near the end. 417 418 The Forms and Menu Libraries 419 420 The forms and menu libraries should work reliably in any environment 421 you can port ncurses to. The only portability issue anywhere in them 422 is what flavor of regular expressions the built-in form field type 423 TYPE_REGEXP will recognize. 424 425 The configuration code prefers the POSIX regex facility, modeled on 426 System V's, but will settle for BSD regexps if the former isn't 427 available. 428 429 Historical note: the panels code was written primarily to assist in 430 porting u386mon 2.0 (comp.sources.misc v14i001-4) to systems lacking 431 panels support; u386mon 2.10 and beyond use it. This version has been 432 slightly cleaned up for ncurses. 433 434 A Tour of the Terminfo Compiler 435 436 The ncurses implementation of tic is rather complex internally; it has 437 to do a trying combination of missions. This starts with the fact 438 that, in addition to its normal duty of compiling terminfo sources 439 into loadable terminfo binaries, it has to be able to handle termcap 440 syntax and compile that too into terminfo entries. 441 442 The implementation therefore starts with a table-driven, dual-mode 443 lexical analyzer (in comp_scan.c). The lexer chooses its mode (termcap 444 or terminfo) based on the first `,' or `:' it finds in each entry. The 445 lexer does all the work of recognizing capability names and values; 446 the grammar above it is trivial, just "parse entries till you run out 447 of file". 448 449 Translation of Non-use Capabilities 450 451 Translation of most things besides use capabilities is pretty 452 straightforward. The lexical analyzer's tokenizer hands each 453 capability name to a hash function, which drives a table lookup. The 454 table entry yields an index which is used to look up the token type in 455 another table, and controls interpretation of the value. 456 457 One possibly interesting aspect of the implementation is the way the 458 compiler tables are initialized. All the tables are generated by 459 various awk/sed/sh scripts from a master table include/Caps; these 460 scripts actually write C initializers which are linked to the 461 compiler. Furthermore, the hash table is generated in the same way, so 462 it doesn't have to be generated at compiler startup time (another 463 benefit of this organization is that the hash table can be in 464 shareable text space). 465 466 Thus, adding a new capability is usually pretty trivial, just a matter 467 of adding one line to the include/Caps file. We'll have more to say 468 about this in the section on Source-Form Translation. 469 470 Use Capability Resolution 471 472 The background problem that makes tic tricky isn't the capability 473 translation itself, it's the resolution of use capabilities. Older 474 versions would not handle forward use references for this reason (that 475 is, a using terminal always had to follow its use target in the source 476 file). By doing this, they got away with a simple implementation 477 tactic; compile everything as it blows by, then resolve uses from 478 compiled entries. 479 480 This won't do for ncurses. The problem is that that the whole 481 compilation process has to be embeddable in the ncurses library so 482 that it can be called by the startup code to translate termcap entries 483 on the fly. The embedded version can't go promiscuously writing 484 everything it translates out to disk -- for one thing, it will 485 typically be running with non-root permissions. 486 487 So our tic is designed to parse an entire terminfo file into a 488 doubly-linked circular list of entry structures in-core, and then do 489 use resolution in-memory before writing everything out. This design 490 has other advantages: it makes forward and back use-references equally 491 easy (so we get the latter for free), and it makes checking for name 492 collisions before they're written out easy to do. 493 494 And this is exactly how the embedded version works. But the 495 stand-alone user-accessible version of tic partly reverts to the 496 historical strategy; it writes to disk (not keeping in core) any entry 497 with no use references. 498 499 This is strictly a core-economy kluge, implemented because the 500 terminfo master file is large enough that some core-poor systems swap 501 like crazy when you compile it all in memory...there have been reports 502 of this process taking three hours, rather than the twenty seconds or 503 less typical on the author's development box. 504 505 So. The executable tic passes the entry-parser a hook that immediately 506 writes out the referenced entry if it has no use capabilities. The 507 compiler main loop refrains from adding the entry to the in-core list 508 when this hook fires. If some other entry later needs to reference an 509 entry that got written immediately, that's OK; the resolution code 510 will fetch it off disk when it can't find it in core. 511 512 Name collisions will still be detected, just not as cleanly. The 513 write_entry() code complains before overwriting an entry that 514 postdates the time of tic's first call to write_entry(), Thus it will 515 complain about overwriting entries newly made during the tic run, but 516 not about overwriting ones that predate it. 517 518 Source-Form Translation 519 520 Another use of tic is to do source translation between various termcap 521 and terminfo formats. There are more variants out there than you might 522 think; the ones we know about are described in the captoinfo(1) manual 523 page. 524 525 The translation output code (dump_entry() in ncurses/dump_entry.c) is 526 shared with the infocmp(1) utility. It takes the same internal 527 representation used to generate the binary form and dumps it to 528 standard output in a specified format. 529 530 The include/Caps file has a header comment describing ways you can 531 specify source translations for nonstandard capabilities just by 532 altering the master table. It's possible to set up capability aliasing 533 or tell the compiler to plain ignore a given capability without 534 writing any C code at all. 535 536 For circumstances where you need to do algorithmic translation, there 537 are functions in parse_entry.c called after the parse of each entry 538 that are specifically intended to encapsulate such translations. This, 539 for example, is where the AIX box1 capability get translated to an 540 acsc string. 541 542 Other Utilities 543 544 The infocmp utility is just a wrapper around the same entry-dumping 545 code used by tic for source translation. Perhaps the one interesting 546 aspect of the code is the use of a predicate function passed in to 547 dump_entry() to control which capabilities are dumped. This is 548 necessary in order to handle both the ordinary De-compilation case and 549 entry difference reporting. 550 551 The tput and clear utilities just do an entry load followed by a 552 tputs() of a selected capability. 553 554 Style Tips for Developers 555 556 See the TO-DO file in the top-level directory of the source 557 distribution for additions that would be particularly useful. 558 559 The prefix _nc_ should be used on library public functions that are 560 not part of the curses API in order to prevent pollution of the 561 application namespace. If you have to add to or modify the function 562 prototypes in curses.h.in, read ncurses/MKlib_gen.sh first so you can 563 avoid breaking XSI conformance. Please join the ncurses mailing list. 564 See the INSTALL file in the top level of the distribution for details 565 on the list. 566 567 Look for the string FIXME in source files to tag minor bugs and 568 potential problems that could use fixing. 569 570 Don't try to auto-detect OS features in the main body of the C code. 571 That's the job of the configuration system. 572 573 To hold down complexity, do make your code data-driven. Especially, if 574 you can drive logic from a table filtered out of include/Caps, do it. 575 If you find you need to augment the data in that file in order to 576 generate the proper table, that's still preferable to ad-hoc code -- 577 that's why the fifth field (flags) is there. 578 579 Have fun! 580 581 Porting Hints 582 583 The following notes are intended to be a first step towards DOS and 584 Macintosh ports of the ncurses libraries. 585 586 The following library modules are `pure curses'; they operate only on 587 the curses internal structures, do all output through other curses 588 calls (not including tputs() and putp()) and do not call any other 589 UNIX routines such as signal(2) or the stdio library. Thus, they 590 should not need to be modified for single-terminal ports. 591 592 lib_addch.c lib_addstr.c lib_bkgd.c lib_box.c lib_clear.c 593 lib_clrbot.c lib_clreol.c lib_delch.c lib_delwin.c lib_erase.c 594 lib_inchstr.c lib_insch.c lib_insdel.c lib_insstr.c lib_keyname.c 595 lib_move.c lib_mvwin.c lib_newwin.c lib_overlay.c lib_pad.c 596 lib_printw.c lib_refresh.c lib_scanw.c lib_scroll.c lib_scrreg.c 597 lib_set_term.c lib_touch.c lib_tparm.c lib_tputs.c lib_unctrl.c 598 lib_window.c panel.c 599 600 This module is pure curses, but calls outstr(): 601 602 lib_getstr.c 603 604 These modules are pure curses, except that they use tputs() and 605 putp(): 606 607 lib_beep.c lib_color.c lib_endwin.c lib_options.c lib_slk.c 608 lib_vidattr.c 609 610 This modules assist in POSIX emulation on non-POSIX systems: 611 612 sigaction.c 613 signal calls 614 615 The following source files will not be needed for a 616 single-terminal-type port. 617 618 alloc_entry.c captoinfo.c clear.c comp_captab.c comp_error.c 619 comp_hash.c comp_main.c comp_parse.c comp_scan.c dump_entry.c 620 infocmp.c parse_entry.c read_entry.c tput.c write_entry.c 621 622 The following modules will use open()/read()/write()/close()/lseek() 623 on files, but no other OS calls. 624 625 lib_screen.c 626 used to read/write screen dumps 627 628 lib_trace.c 629 used to write trace data to the logfile 630 631 Modules that would have to be modified for a port start here: 632 633 The following modules are `pure curses' but contain assumptions 634 inappropriate for a memory-mapped port. 635 636 lib_longname.c 637 assumes there may be multiple terminals 638 639 lib_acs.c 640 assumes acs_map as a double indirection 641 642 lib_mvcur.c 643 assumes cursor moves have variable cost 644 645 lib_termcap.c 646 assumes there may be multiple terminals 647 648 lib_ti.c 649 assumes there may be multiple terminals 650 651 The following modules use UNIX-specific calls: 652 653 lib_doupdate.c 654 input checking 655 656 lib_getch.c 657 read() 658 659 lib_initscr.c 660 getenv() 661 662 lib_newterm.c 663 lib_baudrate.c 664 lib_kernel.c 665 various tty-manipulation and system calls 666 667 lib_raw.c 668 various tty-manipulation calls 669 670 lib_setup.c 671 various tty-manipulation calls 672 673 lib_restart.c 674 various tty-manipulation calls 675 676 lib_tstp.c 677 signal-manipulation calls 678 679 lib_twait.c 680 gettimeofday(), select(). 681 _________________________________________________________________ 682 683 684 Eric S. Raymond <esr@snark.thyrsus.com> 685 686 (Note: This is not the bug address!) ncurses, with patches starting at ncurses-5.6; new users should use RSS Atom | https://ncurses.scripts.mit.edu/?p=ncurses.git;a=blob;f=doc/hackguide.doc;h=fbc8f0725ae68d73841999d6cc1cac631f91d34a;hb=fbc8f0725ae68d73841999d6cc1cac631f91d34a | CC-MAIN-2022-40 | refinedweb | 3,902 | 62.78 |
Created on 2008-08-16 15:41 by cananian, last changed 2015-04-09 01:58 by martin.panter. This issue is now closed..
Not that we've removed the try one more time branch of the code,
because it was causing other problems.
Jeremy
On Sun, Aug 16, 2009 at 6:24 PM, Gregory P. Smith<report@bugs.python.org> wrote:
>
> Gregory P. Smith <greg@krypto.org> added the comment:
>
>.
>
> ----------
> priority: -> normal
>
> _______________________________________
> Python tracker <report@bugs.python.org>
> <>
> _______________________________________
> _______________________________________________
> Python-bugs-list mailing list
> Unsubscribe:
>
>
unassigning, i don't had time to look at this one.
Seems to affect 2.7 too.
I.
Trying to reproduce this on my own in 3.5, 2.7 and 2.5 yields a ConnectionResetError (ECONNRESET), which seems to be correct. That said, this could be due to varying TCP implementations on the server so might still be valid. It could also be due to an older kernel which has been fixed since this was originally reported. Is this still reproducible? If so, can an example be provided?
If the error as written is reproducible, I think that the error message should be fixed, but I'm not so sure that any more than that should be done.
As far as the RFC goes, I think the MUST clause pointed out can be left to the interpretation of the reader. You /could/ consider http.client as the client, but you could also consider the application that a user is interacting with as the client.
Offhand, I think that the library as it is does the right thing in allowing the application code to handle the exceptions as it sees fit. Because http.client in its current state doesn't allow for request pipelining, retries from calling code should be trivial, if that is what the caller intends to do.
I believe the BadStatusLine can still happen, depending on the circumstances. When I get a chance I will see if I can make a demonstration. In the meantime these comments from my persistent connection handler <> might be useful:
# If the server closed the connection,
# by calling close() or shutdown(SHUT_WR),
# before receiving a short request (<= 1 MB),
# the "http.client" module raises a BadStatusLine exception.
#
# To produce EPIPE:
# 1. server: close() or shutdown(SHUT_RDWR)
# 2. client: send(large request >> 1 MB)
#
# ENOTCONN probably not possible with current Python,
# but could be generated on Linux by:
# 1. server: close() or shutdown(SHUT_RDWR)
# 2. client: send(finite data)
# 3. client: shutdown()
# ENOTCONN not covered by ConnectionError even in Python 3.3.
#
# To produce ECONNRESET:
# 1. client: send(finite data)
# 2. server: close() without reading all data
# 3. client: send()
I think these behaviours were from experimenting on Linux with Python 3 sockets, and reading the man pages.
I think there should be a new documented exception, a subclass of BadStatusLine for backwards compatibility. Then user code could catch the new exception, and true bad status lines that do not conform to the specification or whatever won’t be caught. I agree that the library shouldn’t be doing any special retrying of its own, but should make it easy for the caller to do so.
Okay here is a demonstration script, which does two tests: a short basic GET request, and a 2 MB POST request. Output for me is usually:
Platform: Linux-3.15.5-2-ARCH-x86_64-with-arch
Normal request: getresponse() raised BadStatusLine("''",)
2 MB request: request() raised BrokenPipeError(32, 'Broken pipe'); errno=EPIPE
Sometimes I get a BadStatusLine even for the second request:
Platform: Linux-3.15.5-2-ARCH-x86_64-with-arch
Normal request: getresponse() raised BadStatusLine("''",)
2 MB request: getresponse() raised BadStatusLine("''",)
Spotted code in Python’s own library that maintains a persistent connection and works around this issue:
Lib/xmlrpc/client.py:1142
Hi Martin,
Thanks for the example code..
The reason that you're seeing different responses sometimes (varying between BadStatusLine and BrokenPipeError) is because of an understandable race condition between the client sending the requests and the server fully shutting down the socket and the client receiving FIN.
After digging into this, I'm not sure that there is a better way of handling this case. This exception can occur whether the client has issued a request prior to cleaning up and is expecting a response, or the server is simply misbehaving and sends an invalid status line (i.e. change your response code to an empty string to see what I mean).
I'll do some further digging, but I don't believe that there's really a good way to determine whether the BadStatusLine is due to a misbehaving server (sending a non-conforming response) or a closed socket. Considering that the client potentially has no way of knowing whether or not a server socket has been closed (in the case of TCPServer, it does a SHUT_WR), I think that BadStatusLine may be the appropriate exception to use here and the resulting action would have to be left up to the client implementation, such as in xmlrpc.client.
>.
Sorry, I mis-worded that. I'm /assuming/ that the misbehaving client is what you were intending on demonstrating as it shows the server closing the connection before the client expects it to do so.
Hi Demian, my intention is to demonstrate normal usage of Python’s HTTP client, whether or not its implementation misbehaves. I am trying to demonstrate a valid persistent server that happens to decide to close the connection after the first request but before reading a second request. Quoting the original post: “Servers may close a persistent connection after a request due to a timeout or other reason.” I am attaching a second demo script which includes short sleep() calls to emulate a period of time elapsing and the server timing out the connection, which is common for real-world servers.
The new script also avoids the EPIPE race by waiting until the server has definitely shut down the socket, and also demonstrates ECONNRESET. However this synchronization is artificial: in the real world the particular failure mode (BadStatusLine/EPIPE/ECONNRESET) may be uncertain.
If you are worried about detecting a misbehaving server that closes the connection before even responding to the first request, perhaps the HTTPConnection class could maintain a flag and handle the closed connection differently if it has not already seen a complete response.
If you are worried about detecting a misbehaving server that sends an empty status line without closing the connection, there will still be a newline code received. This is already handled separately by existing code: Lib/http/client.py:210 versus Lib/http/client.py:223.
I think there should be a separate exception, say called ConnectionClosed, for when the “status line” is an empty string (""), which is caused by reading the end of the stream. This is valid HTTP behaviour for the second and subsequent requests, so the HTTP library should understand it. BadStatusLine is documented for “status codes we don’t understand”. The new ConnectionClosed exception should probably be a subclass of BadStatusLine for backwards compatibility.
A further enhancement could be to wrap any ConnectionError during the request() stage, or first part of the getresponse() stage, in the same ConnectionClosed exception. Alternatively, the new ConnectionClosed exception could subclass both BadStatusLine and ConnectionError. Either way, code like the XML-RPC client could be simplified to:
try:
return self.single_request(...)
except http.client.ConnectionClosed:
#except ConnectionError: # Alternative
#retry request once if cached connection has gone cold
return self.single_request(...)
FWIW, I agree with the analysis here, its standard HTTP behaviour in the real world, and we should indeed handle it.
Sorry Martin, I should really not dig into issues like this first thing in the morning ;)
My concern about the proposed change isn't whether or not it isn't valid HTTP behaviour, it is. My concern (albeit a small one) is that the change implies an assumption that may not necessarily be true. No matter how valid based on the HTTP spec, it's still an assumption that /can/ potentially lead to confusion. I do agree that a change /should/ be made, I just want to make sure that all potential cases are considered before implementing one.
For example, applying the following patch to the first attachment:
52,57c52,53
< self.wfile.write(
< b"HTTP/1.1 200 Dropping connection\r\n"
< b"Content-Length: 0\r\n"
< b"\r\n"
< )
<
---
> self.wfile.write(b'')
>
Should the proposed change be made, the above would error out with a ConnectionClosed exception, which is invalid because the connection has not actually been closed and BadStatusLine is actually closer to being correct. Admittedly, this is a little contrived, doesn't adhere to the HTTP spec and is much less likely to happen in the real world than a connection unexpectedly closed by the server, but I don't think it's an unreasonable assumption for lesser used servers or proxies or those in development. In those cases, the proposed change would result in just as much confusion as the current behaviour with connections that are intended to be persistent.
In my mind, the one constant through both of these cases is that the response that the client has read is unexpected. In light of that, rather than a ConnectionClosed error, what about UnexpectedResponse, inheriting from BadStatusLine for backwards compatibility and documented as possibly being raised as a result of either case? I think this would cover both cases where a socket has been closed as well as an altogether invalid response.
Calling self.wfile.write(b"") should be equivalent to not calling write() at all, as far as I understand. Using strace, it does not seem to invoke send() at all. So the result will depend on what is written next. In the case of my code, nothing is written next; the connection is shut down instead. So I don’t believe this case is any different from “a connection unexpectedly closed by the server”. To be clear, I think the situation we are talking about is:
1. Client connects to server and sends short request; server accepts connection and possibly reads request
2. Server does not write any response, or just calls write(b""), which is equivalent
3. Server shuts down connection
4. Client reads end of stream (b"") instead of proper status line
But to address your concern in any case, see the third paragram in <>. I propose some internal flag like HTTPConnection._initial_response, that gets set to False after the first proper response is received. Then the code could be changed to something like:
if not line:
# Presumably, the server closed the connection before
# sending a valid response.
if self._initial_response:
raise BadStatusLine(line)
else:
raise ConnectionClosed("Stream ended before receiving response")
> Calling self.wfile.write(b"") should be equivalent to not calling write() at all, as far as I understand.
Right (or at least, as I understand it as well).
Really, this boils down to a philosophical debate: Should the standard library account for unexpected conditions where possible (within reason of course), or should it only account for conditions as described by specifications?
> 1. Client connects to server and sends short request; server accepts connection and possibly reads request
> [snip]
This flow makes sense and is well accounted for with your proposed change. However, misbehaving cases such as:
Granted, the result is unexpected and doesn't comply with HTTP RFCs. However, leading the user to believe that the connection has been closed when it actually hasn't is misleading. I've spent many an hour trying to hunt down root causes of issues like this and bashed my head against a keyboard in disbelief when I found out what the cause /really/ was. Because of those headaches, I still think that the introduction of an UnexpectedResponse, if well documented, covers both cases nicely, but won't heatedly argue it further :) If others (namely core devs) think that the introduction of ConnectionClosed exception is a better way to go, then I'll bow out. It would maybe be nice to have Senthil chime in on this.
> But to address your concern in any case, see the third paragram in <>.
I don't think that should be added at all as the issue that I'm describing can occur at any point, not only the first request.
On another note, were you interested in submitting a patch for this?
Yeah I’m happy to put a patch together, once I have an idea of the details.
I’d also like to understand your scenario that would mislead the user to believe that the connection has been closed when it actually hasn’t. Can you give a concrete example or demonstration?
Given your misbehaving flow:
I would expect the client would still be waiting to read the status line of the response that was never sent in step 2. Eventually the server _would_ probably drop the connection (so ConnectionClosed would not be misleading), or the client would time it out (a different error would be raised).
I think that in other stdlib networking modules, a connection closed error is raised when an operation is attempted on a closed connection. For example, in smtplib, the server may return an error code and then (contrary to the RFC) close the connection. We fixed a bug in smtplib where this was mishandled (the error code was lost and SMTPServerDisconnected was raised instead). Now we return the error code, and the *next* operation on the connection gets the connection closed error.
I think this is a good model, but I'm not sure if/how it can be applied here.).
I think the remote server writing a blank line to the socket is a very different thing from the remote server closing the connection without writing anything, so I may be misunderstanding something here. Note that handling this is potentially more complicated with https, since in that case we have a wrapper around the socket communication that has some buffering involved.
But also note that if a new exception is introduced this becomes a feature and by our normal policies can only go into 3.5.
TL;DR: Because HTTP is an application-level protocol, it's nearly impossible to gauge how a server will behave given a set of conditions. Because of that, any time that assumptions can be avoided, they should be.
@R. David Murray:
>).
In the typical case, this is exactly what happens. This issue is around a race condition that can occur between the client issuing a request prior to terminating the connection with the server, but the server terminating it prior to processing the request. In these cases, the client is left in a state where as far as it's concerned, it's in a valid state waiting for a response which the server will not issue as it has closed the socket on its side. In this case, the client reading an empty string from the receive buffer implies a closed socket. Unfortunately, it's not entirely uncommon when using persistent connections, as Martin's examples demonstrate.
> I think the remote server writing a blank line to the socket is a very different thing from the remote server closing the connection without writing anything, so I may be misunderstanding something here.
+1. What Martin is arguing here (Martin, please correct me if I'm wrong) is that a decently behaved server should not, at /any/ time write a blank line to (or effectively no-op) the socket, other than in the case where the socket connection has been closed. While I agree in the typical case, a blend of Postel and Murphy's laws leads me to believe it would be better to expect, accept and handle the unexpected.
@Martin:
Here's a concrete example of the unexpected behaviour. It's not specific to persistent connections and would be caught by the proposed "first request" solution, but ultimately, similar behaviour may be seen at any time from other servers/sources:.
Googling for "http empty response" and similar search strings should also provide a number of examples where unexpected behaviour is encountered and in which case raising an explicit "ConnectionClosed" error would add to the confusion.
Other examples are really hypotheticals and I don't think it's worth digging into them too deeply here. Unexpected behaviour (regardless of whether it's on the first or Nth request) should be captured well enough by now.
> Eventually the server _would_ probably drop the connection (so ConnectionClosed would not be misleading)
Sure, but you're raising an exception based on future /expected/ behaviour. That's my issue with the proposed solution in general. ConnectionClosed assumes specific behaviour, where literally /anything/ can happen server side.
Now I think I'd like to take my foot out of my mouth.
Previous quick experiments that I had done were at the socket level, circumventing some of the logic in the HTTPResponse, mainly the calls to readline() rather than simple socket.recv(<N>).
I've confirmed that the /only/ way that the HTTPConnection object can possibly get a 0-length read is, in fact, if the remote host has closed the connection.
In light of that, I have no objection at all to the suggested addition of ConnectionClosed exception and my apologies for the added confusion and dragging this issue on much longer than it should have been.
I've also attached my proof of concept code for posterity.
(Admittedly, I may also have been doing something entirely invalid in previous experiments as well)
Here is a patch, including tests and documentation. It ended up a bit more complicated than I anticipated, so I’m interested in hearing other ideas or options.
* Added http.client.ConnectionClosed exception
* HTTPConnection.close() is implicitly called for a persistent connection closure
* BadStatusLine or ConnectionError (rather than new exception) is still raised on first getresponse()
* request() raising a ConnectionError does not necessarily mean the server did not send a response, so ConnectionClosed is only raised by getresponse()
* ConnectionClosed wraps ECONNRESET from the first recv() of the status line, but not after part of the status line has already been received
With this I hope code for making idempotent requests on a persistent connection would look a bit like this:
def idempotent_request(connection)
try:
attempt_request(connection, ...)
response = connection.get_response()
if response.status == HTTPStatus.REQUEST_TIMEOUT:
raise ConnectionClosed(response.reason)
except ConnectionClosed:
attempt_request(connection, ...)
response = connection.get_response()
return response
def attempt_request(connection):
try:
connection.request(...)
except ConnectionError:
pass # Ignore and read server response
Updated v2 patch. This version avoids intruding into the HTTPConnection.connect() implementation, so that users, tests, etc may still set the undocumented “sock” attribute without calling the base connect() method. Also code style changes based on feedback to another of my patches.
Thanks for the patch Martin (as well as catching a couple issues that I'd run into recently as well). I've left a couple comments up on Rietveld.
Thanks for the reviews.
I agree about the new HTTPResponse flag being a bit awkward; the HTTPResponse class should probably raise the ConnectionClosed exception in all cases. I was wondering if the the HTTPConnection class should wrap this in a PersistentConnectionClosed exception or something if it wasn’t for the first connection, now I’m thinking that should be up to the higher level user, in case they are doing something like HTTP preconnection.
> now I’m thinking that should be up to the higher level user
+1. A closed connection is a closed connection, whether it's persistent or not. The higher level code should be responsible for the context, not the connection level.
I have changed my opinion of the “peek hack” from <>. It would be useful when doing non-idempotent requests like POST, to avoid sending a request when we know it is going to fail. I looked into how to implement it so that it works for SSL (which does not support MSG_PEEK), and the neatest solution I could think of would require changing the non-blocking behaviour of BufferedReader.peek(), as described in Issue 13322. So I will leave that for later.
Adding ConnectionClosed.v3.patch; main changes:
* Removed the connection_reused flag to HTTPResponse
* ConnectionClosed raised even for the first request of a connection
* Added HTTPConnection.closed flag, which the user may check before a request to see if a fresh connection will be made, or an existing connection will be reused
* ConnectionClosed now subclasses both BadStatusLine and ConnectionError
* Fixed http.client.__all__ and added a somewhat automated test for it
BTW these patches kind of depend on Issue 5811 to confirm that BufferedReader.peek() will definitely return at least one byte unless at EOF.
If it would help the review process, I could simplify my patch by dropping the addition of the HTTPConnection.closed flag, so that it just adds the ConnectionClosed exception. Looking forwards, I’m wondering if it might be better to add something like a HTTPConnection.check_remote_closed() method instead of that flag anyway.
My apologies for the delay, but I've now reviewed the proposed patch. With a fresh outlook after taking a bit of time off, I'm not sure anymore that this is the best way of going about solving this problem. The main reason being that we now have two errors that effectively mean the same thing: The remote socket has encountered some error condition.
I understand that ConnectionClosed was added to maintain backwards compatibility with the BadStatusLine error, but I'm beginning to think that what really should be done is that backwards compatibility /should/ be broken as (in my mind) it's one of those cases where the backwards compatible solution may introduce just as many issues as it solves.
The root issue here (or at least what it has turned into) is that BadStatusLine is incorrectly raised when EOF is encountered when reading the status line. In light of that, I think that simply raising a ConnectionError in _read_status where line is None is the right way to fix this issue. Not only is it consistent with other cases where the remote socket is closed (i.e. when reading the response body), but it's removing the need for the addition of a potentially confusing exception.
I'm not 100% what the policy is around backwards introducing backwards incompatible changes is, but I really think that this is one of those few cases where it really should be broken.
Thanks for helping with this Demian. The idea of raising the same exception in all cases is new to me. Initially I was opposed, but it is starting to make sense. Let me consider it some more. Here are some cases that could trigger this exception:
1. EOF before receiving any status line. This is the most common case. Currently triggers BadStatusLine.
2. EOF in the middle of the status line. Triggers BadStatusLine, or is treated as an empty set of header fields.
3. EOF in the middle of a header line, or before the terminating blank line. Ignored, possibly with HTTPMessage.defects set.
4. EOF after receiving 100 Continue response, but before the final response. Currently triggers the same BadStatusLine.
5. ConnectionReset anywhere before the blank line terminating the header section.
In all those cases it should be okay to automatically retry an idempotent request. With non-idempotent requests, retrying in these cases seems about equally dangerous.
For contrast, some related cases that can still be handled differently:
6. Connection reset or broken pipe in the request() method, since the server can still send a response
7. Unexpected EOF or connection reset when reading the response body. Perhaps this could also be handled with a similar ConnectionError exception. Currently IncompleteRead is raised for EOF, at least in most cases. IncompleteRead has also been suggested as an alternative to BadStatusLine in the past.
> Thanks for helping with this Demian.
No worries. This textual white boarding exercise has also been greatly
valuable in getting my own head wrapped around various low frequency
socket level errors that can be encountered when using the client. The
downside is that this issue is now quite likely very difficult for new
readers to follow :/
> The idea of raising the same exception in all cases is new to me.
This initially puzzled me. I then re-read my response and realized that
I was thinking one thing and wrote another. The exception that I was
intending to suggest using here is ConnectionResetError, rather than the
ConnectionError base class. To elaborate a little further, as I
understand it, the /only/ case where the empty read happens is when the
remote connection has been closed but where the TCP still allows for EOS
to be read. In this case, the higher level implementation (in this case
the client) /knows/ that the empty line is signifying that the
connection has been closed by the remote host.
To be clear, a rough example of what I'm proposing is this:
diff -r e548ab4ce71d Lib/http/client.py
--- a/Lib/http/client.py Mon Feb 09 19:49:00 2015 +0000
+++ b/Lib/http/client.py Wed Feb 11 06:04:08 2015 -0800
@@ -210,7 +210,7 @@
if not line:
# Presumably, the server closed the connection before
# sending a valid response.
- raise BadStatusLine(line)
+ raise ConnectionResetError
try:
version, status, reason = line.split(None, 2)
except ValueError:
Your example of the XML-RPC client would then use the alternative approach:
try:
return self.single_request(...)
except ConnectionError:
#retry request once if cached connection has gone cold
return self.single_request(...)
That all said, I'm also not tremendously opposed to the introduction of
the ConnectionClosed exception. I simply wanted to explore this thought
before the addition is made, although the comments in my patch review
would still hold true.
It is possible to break backward compatibility in a feature release if the break is fixing a bug. In this case I think it is in fact doing so, and that in fact in the majority of cases the change would either not break existing code or would even improve it (by making debugging easier). However, I have no way to prove that.
Often in the cases of compatibility breaks we will do a deprecation of the old behavior in a given release and make the change in the next release. I'm not convinced that is necessary (or even possible) here. It would be nice if we could get some data on what the actual impact would be on existing code. For example: how, if at all, would this affect the requests package? I *can* give one data point: in an application I wrote recently the affect would be zero, since every place in my application that I catch BadStatusLine I also catch ConnectionError.
I would want at least one other committer to sign off on a compatibility break before anything got committed.
Posting RemoteDisconnected.v4.patch with these changes:
* Renamed ConnectionClosed → RemoteDisconnected. Hopefully avoids confusion with shutting down the local end of a connection, or closing a socket’s file descriptor.
* Dropped the HTTPConnection.closed attribute
* Dropped special distinction of ECONNRESET at start versus in middle of response. It certainly makes the code and tests simpler again, and I realize that distinction is not the most important problem to solve right now, if ever. Also avoids relying on the poorly defined BufferedReader.peek() method.
I would like to retain the backwards compatibility with BadStatusLine if that is okay though.
Requests and “urllib3”: I’m not overly familiar with the internals of these packages (Requests uses “urllib3”). I cannot find any reference to BadStatusLine handling in “urllib3”, and I suspect it might just rely on detecting a dropped connection before sending a request; see <>. In my opinion this is a race condition, but it is helpful and works most of the time. So I suspect “urllib3” would not be affected by any changes made relating to BadStatusLine.
Left a few minor comments in Rietveld.
My only opposition to the RemoteDisconnected error is now we have two exceptions that effectively mean the same thing. It looks like asyncio.streams has similar behaviour:. I think that if it's acceptable to break backwards compatibility here, we should.
Browsing through some Github repos, it seems like this change /could/ potentially impact a few smaller projects. I can confirm, however, that neither urllib3 nor requests are dependent on BadStatusLine.
I guess you saying RemoteDisconnected effectively means the same thing as ConnectionResetError. Would it help if it was derived from ConnectionResetError, instead of the ConnectionError base class? Or are you also worried about the multiple inheritance or clutter of yet another type of exception?
I’m not really familiar with the “asyncio” streams/protocols/transports/thingies, but it looks like the code you pointed to is actually called when writing, via drain(), fails. Maybe the equivalent code for when reading hits EOF is <>.
> On Feb 19, 2015, at 8:08 PM, Martin Panter <report@bugs.python.org> wrote:
> I guess you saying RemoteDisconnected effectively means the same thing as ConnectionResetError.
Exactly.
> Would it help if it was derived from ConnectionResetError, instead of the ConnectionError base class? Or are you also worried about the multiple inheritance or clutter of yet another type of exception?
My concern is more about consistency of exceptions and exception handling when using the client. Thinking about it from a user’s standpoint, when I issue a request and the remote socket closes, I would hope to get consistent exceptions for all remote resets. If I’m handling the lowest level errors independently of one another rather than catch-all ConnectionError, I don’t want to do something like this:
except (RemoteDisconnected, ConnectionResetError)
I /should/ be able to simply use ConnectionResetError. Reading the docs, the only real reason for this exception at all is for backwards compatibility. If we have a case to break backwards compatibility here, then that eliminates the need for the new exception and potential (minor) confusion.
In this special case, the behaviour that we see at the client socket level indicates a remote reset, but it’s only artificially known immediately due to the empty read. In my mind, because the client /knows/ that this is an early indicator of a ConnectionResetError, that is exactly the exception that should be used.
Hope that makes sense.
Posting RemoteDisconnected.v5.patch:
* Rebased and fixed minor merge conflict
* Change RemoteDisconnected base class from ConnectionError to ConnectionResetError
* Minor tweaks to tests
It seems that having a separate RemoteDisconnected exception (as in this patch) has at least two benefits:
1. It would allow the user to distinguish between a true ConnectionResetError (due to TCP reset or whatever) from a clean TCP shutdown
2. Backwards compatibility with user code that only handles BadStatusLine
The only disadvantage seems to be the bloat of adding a new exception type. But if some other comitter agrees that merging them is better and dropping backwards compatibility is okay I am happy to adjust the patch to go along with that.
Pending review of the exceptions from another core dev, the patch looks good to me. Thanks for sticking with it :)
New changeset eba80326ba53 by R David Murray in branch 'default':
#3566: Clean up handling of remote server disconnects.
Thanks, Martin and Demian. I tweaked the patch slightly before commit, so I've uploaded the diff. After thinking about it I decided that it does indeed make sense that the new exception subclass both ConnectionResetError and BadStatusLine, exactly because it *isn't* a pure ConnectionError, it is a synthetic one based on getting a '' response when we are expecting a status line. So I tweaked the language to not mention backward compatibility. I also tweaked the language of the docs, comments and error message to make it clear that the issue is that the server closed the connection (I understand why you changed it to 'shut down', but I think 'the server closed the connection' is both precise enough and more intuitive).
If you have any issues with the changes I made, let me know.
Your tweaks look fine. Thanks everyone for working through this one. | https://bugs.python.org/issue3566 | CC-MAIN-2020-45 | refinedweb | 5,364 | 53.21 |
#Name: Herbert H. Lehman #Date: September 4, 2013 #This program prints: hello world def main(): print("Hello world") main()
Submit the following programs via Blackboard:
Eric's converting program Dollars: Euros: 1 0.77 2 1.53 3 2.3 4 3.07 5 3.84Your program should print the information for 1, 2, ..., 10 dollars. You do not need to worry about formatting (we will talk about that more in Chapter 5), but you do need to calculate all 10 entries.
import turtle def main(): daniel = turtle.Turtle() #Set up a turtle named "daniel" myWin = turtle.Screen() #The graphics window #Draw a square for i in range(4): daniel.forward(100) #Move forward 10 steps daniel.right(90) #Turn 90 degrees to the right myWin.exitonclick() #Close the window when clicked main()Modify this program to draw a 6-sided figure or hexagon. Make sure to include the standard introductory comments at the beginning of your program as well as to change the name of the turtle to your name.
Repeat 45 times: Walk forward 100 steps Turn right 92 degreesYour output should look similar to:
Please enter the number of days: 4 Month 1: $10 Month 2: $20 Month 3: $30 Month 4: $40
Please enter a string: Look behind, look here, look ahead You entered (shouting): LOOK BEHIND, LOOK HERE, LOOK AHEAD But better (whispering): look behind, look here, look ahead
Please enter a line: I love python You entered 13 characters.
Note: Use characters in your output, even if it is not grammatically correct (i.e. "1 characters"). We will discuss how to fix this by making decisions in Chapter 7.
Your program should print out your name to the screen and then ask the user to enter a string. You should then print out how long the string is in terms of the length of your name (that is, the length of the user's string divided by your length). For example,
The measuring string is "Kate" Please enter a string: Hello world Your string is 2.25 Kate's long.While, if your name was Daniel, your program would look like:
The measuring string is "Daniel" Please enter a string: Hello world Your string is 1.8333 Daniel's long.
A sample of Caesar cipher disk (from en.wikipedia.org/wiki/Caesar_cipher) with an offset of 13 (that is, every letter in plain text goes to one 13 letters to its right):
Please enter the prices: 2.34, .99, 100, 81.05, 90 Your receipt: 2.34 0.99 100.00 81.05 90.00 ---------------- Total: 274.38Hint: use the format() statement discussed in Chapter 5.
01234567890123456789012345 This line has more than 20 characters. This one has less And this one has lots, lots, lots, more than 20 characters!and the user entered the length of 20, all lines longer than 20 would be wrapped to the next line:
01234567890123456789 012345 This line has more t han 20 characters. This one has less And this one has lot s, lots, lots, more than 20 characters!Hint: break the problem in to parts: first write a program that will print lines from a file to the screen (see Lab 6). Then modify your initial program to only print lines up to the length entered. And, to finish the program, then add in the code that prints lines that are longer than the length entered.
Finish my python homework. Buy milk. Do laundry. Update webpage.Then the output file would be:
1. Finish my python homework. 2. Buy milk. 3. Do laundry. 4. Update webpage.
For example, if the file inputTemplate.txt contained:
New York, New York 11 October 2013 **INSERT NAME HERE** **INSERT ADDRESS HERE** Dear **INSERT NAME HERE**, Thank you for your service to New York City, and, in particular, to the education of its residents. Those in **INSERT ADDRESS HERE** appreciate it! Best wishes to **INSERT NAME HERE** and your family, --CUNY
A sample run of the program would be:
Please enter the name of the template file: inputTemplate.txt Please enter names of recipients: Herbert H. Lehman, Bernard M. Baruch, Fiorello H. LaGuardia Please list addresses: Bronx NY, New York NY, Queens NY Your customized letters are below: New York, New York 11 October 2013 Herbert H. Lehman Bronx NY Dear Herbert H. Lehman, Thank you for your service to New York City, and, in particular, to the education of its residents. Those in Bronx NY appreciate it! Best wishes to Herbert H. Lehman and your family, --CUNY New York, New York 11 October 2013 Bernard M. Baruch New York NY Dear Bernard M. Baruch, Thank you for your service to New York City, and, in particular, to the education of its residents. Those in New York NY appreciate it! Best wishes to Bernard M. Baruch and your family, --CUNY New York, New York 11 October 2013 Fiorello H. LaGuardia Queens NY Dear Fiorello H. LaGuardia, Thank you for your service to New York City, and, in particular, to the education of its residents. Those in Queens NY appreciate it! Best wishes to Fiorello H. LaGuardia and your family, --CUNY
insert into customer (first, last, address) values ('FirstName', 'LastName', 'Address')A sample input file is mayorsAddresses.txt.
Old MacDonald had a farm, Ei-igh, Ee-igh, Oh! And on that farm he had a cow, Ee-igh, Ee-igh, Oh! Whith a moo, moo here and a moo, moo there. Here a moo, there a moo, everywhere a moo, moo. Old MacDonald had a farm, Ei-igh, Ee-igh, Oh!(Hint: use a function with two input parameters one for the animal and the other for the related sound)
from turtle import * def main(): myWin = turtle.Screen() #The graphics window tristan = turtle.Turtle() #Tristan will be our turtle for this program drawStem(tristan) #Draw a green stem for i in range(20): drawPetal(tristan,"blue") #Draws a blue petal for our flower drawPetal(tristan,"purple") #Draws a purple petal for our flower myWin.exitonclick() #Close the window when clickedThat is, write the functions drawStem() and drawPetal(). Include all functions, including the main() above in the file you submit. Sample output of the program:
(Note: you can change the color that your turtle, using the function, color(). For example, if you turtle is called tess to change it's color in stringColor, write tess.color(stringColor).)
def main(): welcome() #Prints "Hello, world" to the screen x,y = userInput() #Asks user for 2 inputs and returns numbers entered d = calculate(x,y) #Returns the difference of the parameters displayResults(x,y,d) #Prints the two inputs, and d = calculate(age) #Using age, calculates year born displayResults(age,y) #Prints age and birth year = retire(age) #Using age, calculates retirement year (year turns 65) displayResults(age,y) #Prints age and retirement year main()(That is, write the functions welcome(), userInput(), retire() and displayResults().) Submit a .py file containing all four functions you wrote, in addition to the main() function above.
Here is an example of the program:
Enter your birth year: 2005 You are under 18. You are under 21.And another sample run:
Enter your birth year: 1995 You are over 18. You are under 21.Hint: Split the problem in half, and first write the code that tests if the birth year is greater than currentYear - 18 = 2014 - 18 = 1996, then work on the second part. See Lab 10.
You should include in the file a main() that calls your function several times to demonstrate that it works.
You should include in the file a main() that calls your function several times to demonstrate that it works.
Modify this program to allow the user also to specify with the following symbols:
Submit your modified program (including a comment at the top of the program with your name).
For example, the input file NY.TXT starts out:
NYF1910Mary,1922 NYF1910Helen,1290 NYF1910Rose,990 NYF1910Anna,951 NYF1910Margaret,926 NYF1910Dorothy,897 NYF1910Ruth,712 NYF1910Lillian,648 NYF1910Florence,604 NYF1910Frances,589 NYF1910Elizabeth,579 NYF1910Mildred,562Here a two sample runs of the program:
Please enter the file name: NY.txt Please enter the minimum length: 5 The first name with length 5 or more is: Helen Thank you for using my name length searching program!and
Please enter the file name: NY.txt Please enter the minimum length: 9 The first name with length 9 or more is: Elizabeth Thank you for using my name length searching program!
Hint: The program from the lab looks at every line of the file, but this program is a bit different in that it stops after it finds a long enough name. To do this, you need to change the loop to an indefinite loop (i.e. a while loop) or figure out how to break out of a definite loop (i.e. the for loop).
In your submitted file, include a main() function that demonstrates that the sort algorithm works. | https://stjohn.github.io/teaching/cmp/cis166/f14/ps.html | CC-MAIN-2022-27 | refinedweb | 1,489 | 73.68 |
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
[9.0]Field attribute default: how to assign the value of another field as default?
According to the documentation, it is possible to assign the result of a function as default value of a field. But is it possible to assign the value of another field of the same model as default value?
Example:
display_name = fields.Char(string='Display Name', default=name, translate=True)
Hi,
you can use
@api.onchange('<field whose value to be taken>')
def onchange_func(self):
if self.<field value to be taken>:
self. <field to which you need value> = self.<field whose value to be taken>
Thanks for your reply. How would my field definition look like then:
display_name = fields.Char(default=onchange_func(name)) ?
About This Community
Odoo Training Center
Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now | https://www.odoo.com/forum/help-1/question/9-0-field-attribute-default-how-to-assign-the-value-of-another-field-as-default-110882 | CC-MAIN-2017-17 | refinedweb | 173 | 52.05 |
Sequence comparinglooping array A 4 times would return it to its original state.
original 0 1 2 3 4 5 6 7
1st loop 7 5...
Sequence comparingIt should basically tell me how many times I would have to loop array B to get back to A. Or I guess...
Sequence comparingThis is the small version with preset arrays.
[code]
#include <iostream>
#include <string>
#include ...
Sequence comparing[code]//Monge's Shuffle
#include <iostream>
#include <iomanip>
#include <cmath>
using namespace std...
Sequence comparingSo I changed it to
[code]
int A[8] = {0,1,2,3,4,5,6,7}
int B[8] = {7,5,3,1,0,2,4,6}
int same = 0;... | http://www.cplusplus.com/user/alepxpl93/ | CC-MAIN-2014-35 | refinedweb | 112 | 68.16 |
Results 1 to 2 of 2
Thread: "swapping of memory" priority
- Join Date
- Mar 2010
- 21
"swapping of memory" priority
Code:
#include <malloc.h> #define SIZE 160000000 int main (void) { char *p; char *p0; int i; int a; p0 = malloc (SIZE); a = 0; if (p) { while (1) for (i = SIZE, p = p0; (i -= 1000) >= 0;) { p[i] = (char) a; a++; } } else { return (-1); } }
(Do not try N>8.) I think that this is because of intense swapping of memory of "use-memory". The problem is that other programs get swapped too.
I want "memory usage priority", like I/O priority ("ionice") and CPU priority ("chrt"). It should work like this: a program with lower priority may not force to swap memory of a program with higher priority.
- Join Date
- Apr 2009
- Location
- I can be found either 40 miles west of Chicago, in Chicago, or in a galaxy far, far away.
- 13,449
This is, AFAIK, not a capability in the current kernel(s) of Linux, prioritization of memory usage (swap, vs. in-core). Sounds like you need to think about some kernel modifications, at least on an experimental basis. Hitting the swapper is a known issue with system performance. In fact, real-time systems usually don't support swapping because it negatively affect the determinist performance behavior required by real-time systems.Sometimes, real fast is almost as good as real time.
Just remember, Semper Gumbi - always be flexible! | http://www.linuxforums.org/forum/kernel/162102-swapping-memory-priority.html | CC-MAIN-2017-04 | refinedweb | 240 | 53.71 |
The Managed Provider and the DataSet provide the core functionality
of ADO.NET.
1. The Managed Provider.
The Managed Provider supplies the following four classes.
The DataReader class provides read-only and forward-only access to the data source. We will benchmark
this object later. The final class of the Managed Provider component is the DataAdapter class. The
DataAdapter is the channel through which our second component, the DataSet, connects
to the Managed Provider.
2. The DataSet
The DataSet class consists of in-memory DataTables, columns, rows, constraints and relations. The
DataSet provides an interface to establish relationships between tables, retrieve and update data.
Perhaps one of the most important benefits of the DataSet is its ability to be synchronized with an
XmlDataDocument. The DataSet provides real-time hierarchical access to the XmlDataDocument object.
Managed Providers currently come in two flavors. Each is represented by its own namespace in the .NET
Framework. The first, System.Data.SQLClient provides access to SQL Server 7.0 and
higher databases. The second Managed Provider is found within the System.Data.OleDb
namespace, and is used to access any OleDb source. The SQLClient Provider takes advantage of
Microsoft SQL Server's wire format (TDS), and therefore should give better performance than accessing
data through the OleDb provider. The performance tests done later in this article will be performed
for both provider types.
If we were to create a diagram explaining the general structure of ADO.NET, it would look like this.
In the diagram above, two ways exist to retrieve data from a data source. The first is through the
DataReader class in the Managed Provider component. The second is through the DataSet, which accesses
the data source through the DataAdapter class of the Managed Provider.
The robust Dataset object gives the programmer the ability to perform functions such as establishing
relationships between tables. The DataReader provides read-only and forward-only retrieval of data.
The tests uncover what performance loss, if any, we encounter when using the dataset rather than the
DataReader.
Please enable Javascript in your browser, before you post the comment! Now Javascript is disabled.
Your name/nickname
Your email
WebSite
Subject
(Maximum characters: 1200). You have 1200 characters left. | http://www.devx.com/vb2themax/Article/19887 | CC-MAIN-2014-15 | refinedweb | 369 | 60.21 |
tv — tv shield driver¶
The
tv module is used for controlling the tv shield.
Example usage:
import sensor, tv # Setup camera. sensor.reset() sensor.set_pixformat(sensor.RGB565) sensor.set_framesize(sensor.SIF) sensor.skip_frames() tv.init() # Show image. while(True): tv.display(sensor.snapshot())
Functions¶
tv.
init([type=tv.TV_SHIELD[, triple_buffer=False]])¶
Initializes an attached tv output module.
typeindicates how the lcd module should be initialized:
tv.TV_NONE: Do nothing.
tv.TV_SHIELD: Initialize a TV output module. Uses pins P0, P1, P2, and P3.
triple_bufferIf True then makes updates to the screen non-blocking in
tv.TV_SHIELDmode at the cost of 3X the display RAM (495 KB).
lcd.
width()¶
Returns 352 pixels. This is the
sensor.SIFresolution.
lcd.
height()¶
Returns 240 pixels. This is the
sensor.SIFresolution.
tv.
channel([channel])¶
For the wireless TV shield this sets the broadcast channel between 1-8. If passed without a channel argument then this method returns the previously set channel (1-8). Default is channel 8.
tv.
display(image[, x=0[, y=0[, x_scale=1.0[, y_scale=1.0[, roi=None[, rgb_channel=-1[, alpha=256[, color_palette=None[, alpha_palette=None[, hint=0[, x_size=None[, y_size=None]]]]]]]]]]]])¶
Displays an
imagewhose top-left corner starts at location x, y. You may either pass x, y separately, as a tuple (x, y), or neither.
x_scalecontrols how much the displayed image is scaled by in the x direction (float). If this value is negative the image will be flipped horizontally.
y_scalecontrols how much the displayed image is scaled by in the y direction (float). If this value is negative the image will be flipped vertically.
roiis the region-of-interest rectangle tuple (x, y, w, h) of the image to display. This allows you to extract just the pixels in the ROI to scale.
rgb_channelis the RGB channel (0=R, G=1, B=2) to extract from an RGB565 image (if passed) and to render on the display. For example, if you pass
rgb_channel=1this will extract the green channel of the RGB565 image and display that in grayscale.
alphacontrols how opaque the image is. A value of 256 displays an opaque image while a value lower than 256 produces a black transparent image. 0 results in a perfectly black image.
color_paletteif not
-1can be
sensor.PALETTE_RAINBOW,
sensor.PALETTE_IRONBOW, or a 256 pixel in total RGB565 image to use as a color lookup table on the grayscale value of whatever the input image is. This is applied after
rgb_channelextraction if used.
alpha_paletteif not
-1can be a 256 pixel in total GRAYSCALE image to use as a alpha palette which modulates the
alphavalue of the input image being displayed at a pixel pixel level allowing you to precisely control the alpha value of pixels based on their grayscale value. A pixel value of 255 in the alpha lookup table is opaque which anything less than 255 becomes more transparent until 0. This is applied after
rgb_channelextraction if used.
hintcan be a logical OR of the flags:
image.AREA: Use area scaling when downscaling versus the default of nearest neighbor.
image.BILINEAR: Use bilinear scaling versus the default of nearest neighbor scaling.
image.BICUBIC: Use bicubic scaling versus the default of nearest neighbor scaling.
image.CENTER: Center the image image being displayed on (x, y).
image.EXTRACT_RGB_CHANNEL_FIRST: Do rgb_channel extraction before scaling.
image.APPLY_COLOR_PALETTE_FIRST: Apply color palette before scaling.
x_sizemay be passed if
x_scaleis not passed to specify the size of the image to display and
x_scalewill automatically be determined passed on the input image size. If neither
y_scaleor
y_sizeare specified then
y_scaleinternally will be set to be equal to
x_sizeto maintain the aspect-ratio.
y_sizemay be passed if
y_scaleis not passed to specify the size of the image to display and
y_scalewill automatically be determined passed on the input image size. If neither
x_scaleor
x_sizeare specified then
x_scaleinternally will be set to be equal to
y_sizeto maintain the aspect-ratio.
Not supported for compressed images. | https://docs.openmv.io/library/omv.tv.html?highlight=tv | CC-MAIN-2021-39 | refinedweb | 655 | 50.94 |
(90%): keyboard, mouse, touch (missing gamepad)
- loader (100%)
- math (100%)
- particles (100%)
- sound & music (100%)
- system (100%)
- time (100%)
- tilemap (100%)
- tween (100%) (use Mirrors API)
- utils (100%)
Examples:
please check the online demo or download examples from github repo
import "package:play_phaser/phaser.dart"; main() { Game game = new Game(800, 480, WEBGL, '', new basic_01_load_an_image()); } class basic_01_load_an_image extends State { Text text; Sprite image; preload() { //('car', 'assets/sprites/car.png'); } create() { // This creates a simple sprite that is using our loaded image and // displays it on-screen image = game.add.sprite(game.world.centerX, game.world.centerY, 'car'); // Moves the image anchor to the middle, so it centers inside the game properly image.anchor.set(0.5); } }
The number of examples for each class.
- animation * 4
- arcade physics
- audio * 2
- basics * 5
- camera * 3
- display * 5
- games * 7
- imput
- loader
- particles
- tilemaps * 2
- tweens
Change log
0.9.3 * Fix bugs.
TODO
- Build more examples to comprehensively test the play_phaser game engine.
- Build in-game UI so that all examples can be tested in one CocoonJS app.
- Refactor the code, to improve the scalibility and performance.
- Complete the Document in Dart style. | https://www.dartdocs.org/documentation/play_phaser/0.9.3/index.html | CC-MAIN-2017-47 | refinedweb | 191 | 55.84 |
On 07/24/11 03:53, Bruno Haible wrote: > You call it a "wraparound" situation. But there is no arithmetic operation, > just a cast. Isn't there a better word to denote this situation? Sorry, I don't know of one. A cast is an arithmetic operation, though, even though it often has zero runtime cost (in this sense it is like "n + 0"); so "overflow" and "wraparound" are not inappropriate words here. > I would leave the > return (long)offset; > statement as it is. The code does a conversion, and if the code is more > explicit about it, the better. Otherwise, when reading the code, I keep > wondering "what is this if statement good for?", and when searching for > casts, I have no chance to find it with 'grep'. I left the cast alone, but still, I don't like it. It's better to avoid casts when possible, as they can mask more-important errors, e.g., converting pointers to integers. Also, conversions happen all the time in C code, and it would overly clutter the code to mark each one with a cast. Finally, for this particular case, surely it's obvious what's going on even without a cast. I thought of making it even more obvious by adding a comment, like this: if (LONG_MIN <= offset && offset <= LONG_MAX) { /* Convert offset to the result type 'long', and return. */ return offset; } but I dunno, that's uncomfortably close to: index++; /* Add one to index. */ and clutter like this is best avoided. How about the following idea instead? It makes the conversion clearer, and it avoids the cast. static long convert_off_t_to_long (off_t n) { if (LONG_MIN <= n && n <= LONG_MAX) return n; else { errno = EOVERFLOW; return -1; } } long ftell (FILE *fp) { /* Use the replacement ftello function with all its workarounds. */ return convert_off_t_to_long (ftello (fp)); } | http://lists.gnu.org/archive/html/bug-gnulib/2011-07/msg00367.html | CC-MAIN-2016-40 | refinedweb | 302 | 74.39 |
Feature #14625open
yield_self accepts an argument, calling to_proc
Description
Currently, yield_self doesn't accept any argument other than a block.
But there are situations where I would like to pass a method object to yield_self.
e.g.
result = collection .yield_self(&method(:filter1)) .yield_self(&method(:filter2))
Of course, we can get the same result with
result = filter2(filter1(collection))
but the order of reading/writing doesn't match the order of thinking.
My request is for yield_self to accept a proc-ish object and call to_proc on it so that we can write the code as shown below, which is more readable.
result = collection .yield_self(method :filter1) .yield_self(method :filter2)
Updated by zverok (Victor Shepelev) over 3 years ago
Question 1. How is this (proposed):
result = collection .yield_self(method :filter1) .yield_self(method :filter2)
better than this (already works):
result = collection .yield_self(&method(:filter1)) .yield_self(&method(:filter2))
?
Question 2: what about all other methods that accepts blocks of code? If the syntax shown above is available for
#yield_self, shouldn't it become available for
#each,
#map and everything else?..
collection.yield_self(method :filter1) collection.map(method :filter1)
I believe that the real improvement of "passing the method" situation would be the #13581, so you can write something like:
collection .yield_self(&.:filter1) .yield_self(&.:filter2)
Updated by shevegen (Robert A. Heiler) over 3 years ago
I can't answer all questions zverok posed but in regards to:
.yield_self(method :filter2)
versus
.yield_self(&method(:filter1))
The first variant is cleaner IMO.
As for
(&.:filter1) I don't really like that suggestion and
I think it should not be connected to irohiroki's issue
request here since he did not suggest it. :)
While I agree that it would be great if we could have a
way to also pass in arguments rather than just invoke
method calls vie e. g.
array.map(&:strip) alone, I am
not convinced that
&.: should be the way to go. It
looks very perlish and the dots are not so easily
distinguishable (we use
:: for "namespace" separators
currently and
. for method calls).
So on that, I think we should keep towards the suggestion
itself given by irohiroki. And I think he meant it only
for
yield_self, not for any other method. Of course one
can argue that symmetry should exist for all methods
(though I am not sure as to why, other than thinking
that symmetry is more important even if it may be useless
for some methods).
Updated by irohiroki (Hiroki Yoshioka) over 3 years ago
zverok,
Answer 1.
.yield_self(method :filter1)
is shorter than
.yield_self(&method(:filter1))
and doesn't have nested parens.
Answer 2: I don't really know about other methods, but there is a method named
Enumerable#inject and actually it accepts a symbol as an argument that is special among methods having a block. What I mean is that there can be a special method, although I'm not sure it's really nice.
Regarding #13581, it can help me but anyway it's still open right now.
Updated by irohiroki (Hiroki Yoshioka) over 3 years ago
shevegen,
That's what I meant to say. Thank you.
Also available in: Atom PDF | https://bugs.ruby-lang.org/issues/14625 | CC-MAIN-2021-43 | refinedweb | 527 | 56.15 |
Can Get Gridview Button Click With JqueryJan 20, 2011
I have a input button in the gridview to display the details for each row.
<input
id="imgBtnDetail"
type="button" src="~/ArtWork/btn_x.png"
runat="server"
value="More Details"
/>
[code]...
I have a input button in the gridview to display the details for each row.
<input
id="imgBtnDetail"
type="button" src="~/ArtWork/btn_x.png"
runat="server"
value="More Details"
/>
[code]...
I am used the method javascript method to add row to g=rid view dyanamically in javascript.
but when i write something in my text box and then add row then text also copied but i want to add empty row to grid view
here is my grid view
<asp:GridView
<Columns>
<asp:TemplateField
[Code]....
row added but with their text i want to add empty textboxes !!
i have gridview in which i have a Image button with id (' ImageButton3 ') on click of which i want to hide that corresponding row , by first highlighting it by followed fadeIn effect ....
Here is code of gridview:
<asp:GridView
<RowStyle Height="50px" />
[Code] ....
This is code for javascript:
<script type="text/javascript">
$(document).ready(function() {
$("[id*=ImageButton3]").live("click", function() {
$(this).closest("tr").hide();
return false;
});
});
</script>
But the code i am using only hide the row , i am not able to get the effects of fadeIn and highlighting ..
Also i want i know that am i using right code to hide gridview row ?
I have a grid with 3 templetefield columns. Above that i have a checkbox list collection. i want to add no of rows equal to no of checkbox checked from list.
If I check 3 checkbox then 3 rows should be added to grid dynamically without postback.. I am trying to do it using javascript.]....
I have a button within a TabPanel and i have a button click function written in Jquery that is not executing when i click the button. If i remove the tab panel and container it works, but i would like to use the tab panel/container functionality. See code below:
<%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" %>
<%@ Register assembly="AjaxControlToolkit" namespace="AjaxControlToolkit" tagprefix="cc1" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "">
<html xmlns="">
<head id="Head1" runat="server">
new to Jquery and having trouble using ui-tab control. Thanks for helpI am able to initialize to tab 4 on page load.If I select a different tab and then click the edt button I want the new tab to remain selected. What actually happens is that tab 4. gets re-selected.
Protected Sub btnEdit_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles btnEdit.Click
Session("DealSheetMode") = "Edit"
ManageVisibility(Session("DealSheetMode"))
PopulateEditPanels()
Page.ClientScript.RegisterClientScriptBlock(GetType(Page), "TabScript1", "SelectTab();", True)
[code]...
I've got a .net button that has an href attribute value set to
javascript:WebForm_DoPostBackWithOptions(new WebForm_PostBackOptions("ctl00$cp1$ucInvoiceSearch$btnSearch", "", true, "", "", false, true))
I've got a textbox that when I press enter I want it to fire this event. Doing the 'Enter' event isn't an issue but I can't get the event on the href to fire using .click(). Here's my function so far:
$("[id*='tbInvNo']").keyup(function(event){
var $btn = $(".pnl-invoice-search");
if(event.keyCode == 13)
$btn.click();
});
I have a button and a dropdown list on my aspx page and I'd create a javascript function which will set a specific value of a dropdownlist present on the page. This function will be called on the click of the button.
[Code]....
I have a jQuery Pager control, for each page there is a set of textboxes and a submit button. When this submit button is clicked I want it to fire off some jQuery to update the controls on the front end (i.e. current page, set visibility of controls etc). The button is an asp.net web forms postback button.View 3 Replies
im running code while clik of anchor button from jquery, with this event i want to call another button click server side event how to do this?View 12 Replies
I have a simple form and i'm using a jquery to load a data (partial postback)...$("#divbody").load("OtherData.aspx?ID=12"); The OtherData.aspx contains a Data that shows in gridview and using a querystring..BUT the problem is when I click a button (default.aspx) I got an error...The state information is invalid for this page and might be corrupted.
[Code]....
I am new to JQuery and trying to display a Yes/No confirmation dialog box when the user clicks on an aspx button. When the dialog box gets displayed to the user he can click the Yes or No button and depending upon the user action i want to execute different code present in code behind file i.e. in aspx.cs
I have tried with the following code but not succeded in my objective.
Code present in aspx page:
<link href="jquery-ui-1.8.10.custom.css" rel="stylesheet" type="text/css" />
<script src="jquery-1.4.4.min.js" type="text/javascript"></script>
<script src="jquery-ui-1.8.10.custom.min.js" type="text/javascript"></script>
<script>
jQuery(document).ready(function() {
jQuery("#myButton").click(showDialog);
//variable to reference window
[Code]....
I have a dropdown list and a button. When you click the button it will set all the dropdowns to a value using JQuery. The drop down is inside a div with a unique id. Is there any way I can get JQuery to set a drop down list value that is in a div without giving it a div class or id? I've attached the html, its just three drop downs each one inside a div class.View 1 Replies
I have a panel at the top of my page which contains some cascading dropdown lists. Once the user selects from the lists, they will click a 'Search' asp.net button and a gridview will appear below showing the search results. What I want to do is use JQuery to toggle the visibility of the search panel. I tried using the Ajax collapsiblepanelextender but ran into all kinds of problems, because there are many updatepanels and ajax extension controls on this page. It sounds so simple, just hide or show, but I can't get it to work. When I click the button, the panel hides. When I click it again, the panel does not reappear. I have also tried having 2 buttons, 'hide' and 'show', and had the same results. right now here is my code:
javascript:
$(document).ready(function() {
$("[id$=btnHideSearch]").click(function() {
$("#<%= Panel1.ClientID %>").fadeOut('slow');
});
});
$(document).ready(function() {
[code]...
i have a link button in my page with atlas
<atlas:UpdatePanel
<ContentTemplate><asp:LinkButton
ID="btn_popup"
runat="server"
Font-Bold="True"
[code]...?
How to show/hide gif image when button click in asp.netView 3 Replies
I wanna call this jquery function in ASP.NET on button click event
var doRedirect = function() { location.href='' };
$("#button1").click(function() {
$("#label1").show();
window.setTimeout("$('#label1').fadeOut('slow', doRedirect)", 10000);
});
I am using a web form in vs2010 and in my form am having a gridview with button on footer. I have written the code to create a new row while click the button. If I click the button i got the following error
"unable to cast object of type 'system.web.ui.webcontrols.textbox' to type 'mysite.textbox"....
Below is the button click code.
private void AddNewRow() {
int rowIndex = 0;
if (ViewState["CurrentTable"] != null) {
dtCurrentTable = (DataTable)ViewState["CurrentTable"];
DataRow drCurrentRow = null;
if (dtCurrentTable.Rows.Count > 0)
[Code] .... | http://asp.net.bigresource.com/can-Get-gridview-button-click-with-jquery-mEs7Pdgjn.html | CC-MAIN-2018-39 | refinedweb | 1,276 | 67.25 |
The function nc__enddef takes an open netCDF dataset out of define mode. The changes made to the netCDF dataset while it was in define mode are checked and committed to disk if no problems occurred. Non-record variables may be initialized to a "fill value" as well. See nc_set_fill. The netCDF dataset is then placed in data mode, so variable data can be read or written.
This call may involve copying data under some circumstances. For a more extensive discussion see File Structure and Performance.
Caution: this function exposes internals of the netcdf version 1 file format. Users should use nc_enddef in most circumstances. This functionc_redef, nc_enddef by requesting that minfree bytes be available at the end of the section.
The align parameters allow one to set the alignment of the beginning of the corresponding sections. The beginning of the section is rounded up to an index which is a multiple of the align parameter. The flag value ALIGN_CHUNK tells the library to use the bufrsize (see above) as the align parameter.
The file format requires mod 4 alignment, so the align parameters are silently rounded up to multiples of 4. The usual call,
nc_enddef(ncid);
is equivalent to
n.
int nc__enddef(int ncid, size_t h_minfree, size_t v_align, size_t v_minfree, size_t r_align);
ncid
h_minfree
v_align
v_minfree
r_align
nc__enddef returns the value NC_NOERR if no errors occurred. Otherwise, the returned status indicates an error. Possible causes of errors include:
Here is an example using nc_enddef to finish the definitions of a new netCDF dataset named foo.nc and put it into data mode:
#include <netcdf.h> ... int status; int ncid; ... status = nc_create("foo.nc", NC_NOCLOBBER, &ncid); if (status != NC_NOERR) handle_error(status); ... /* create dimensions, variables, attributes */ status = nc_enddef(ncid); /*leave define mode*/ if (status != NC_NOERR) handle_error(status); | http://www.unidata.ucar.edu/software/netcdf/old_docs/docs_4_0_1/netcdf-c/nc_005f_005fenddef.html#nc_005f_005fenddef | crawl-003 | refinedweb | 295 | 66.13 |
Further questions about my site
Discussion in 'HTML' started by Josh, Jan 17, 2005.,597
- =?Utf-8?B?U3RldmUgSw==?=
- Jun 10, 2004
Stop further execution in Page_LoadGopal Krish, Oct 24, 2004, in forum: ASP .Net
- Replies:
- 2
- Views:
- 13,226
- Gopal Krish
- Oct 25, 2004
I am getting a little further, but still having issues...Brent White, Oct 26, 2005, in forum: ASP .Net
- Replies:
- 8
- Views:
- 700
- S. Justin Gengo
- Oct 26, 2005
Further questions on dictionaries, namespaces, etc.Talin, Aug 21, 2005, in forum: Python
- Replies:
- 1
- Views:
- 298
- Bruno Desthuilliers
- Aug 21, 2005
Need further comments\tips\hints or feedback for my siteMark C, Feb 2, 2007, in forum: ASP .Net
- Replies:
- 0
- Views:
- 386
- Mark C
- Feb 2, 2007 | http://www.thecodingforums.com/threads/further-questions-about-my-site.160639/ | CC-MAIN-2015-18 | refinedweb | 123 | 82.34 |
how to pre populate the fields using struts2 from the database
Struts2 Training
applications successful.
Struts2 Training Course Prerequisites
Before learning...
Struts2 Training
Apache Struts is an open-source framework that is used
to develop Java applications.
Struts tutorial
Struts tutorial Please send the Struts meterial and required jar files
Please visit the following links:
Struts Tutorials
Struts2 Tutorials
New to struts2
New to struts2 Please let me know the link where to start for struts 2 beginners
Struts 2 Tutorials 1 Tutorial and example programs
and reached end of life phase. Now you should start learning the
Struts 2 framework... framework then these struts 1 tutorials will help you in
learning the Struts 1... and struts2
Introduction to the Apache
| Site
Map | Business Software
Services India
Struts 2.18 Tutorial Section... Login
Application |
Struts 2 |
Struts1 vs
Struts2 |
Introduction... and notPresent Tags
Struts 2 Tutorial Section
Introduction
to Struts Actions
Struts2 Actions
When... generated by a Struts
Tag. The action tag (within the struts root node of ...1 vs Struts2
Struts1 vs Struts2
Struts2 is more... differences between struts1 and struts2
Feature
Struts 1
Struts 2
Action classes Tutorial: Struts 2 Tutorial for Web application development, Jakarta Struts Tutorial
Struts 2 Tutorials - Jakarta Struts Tutorial
Learn Struts 2... framework
you can actually start learning the concepts of Struts framework... validations are covered in this tutorial.
Here is the Tutorials Tiles Example
Struts2 Tiles Example
The Following are the steps for Stuts tiles plugin
1...;!DOCTYPE struts PUBLIC
"-//Apache Software Foundation//DTD Struts Configuration...;
<struts>
<package name="default" extends="struts
struts2
struts2 sir.... i am doing one struts2 application and i have to make pagination in struts2....how can i do
Why Struts 2
Why Struts 2
The new version Struts....
Better Tag
features - Struts 2 tags enables to add style sheet-driven... to the Struts 2.0.1 release announcement, some key features are:
Simplified tag
Struts2 tag function of hidden tag?
Hi Friend,
<s:hidden> tag create a hidden value field.It means it stores the value but cannot be visible.
For more information, visit the following link:
Struts2 Tutorial...problem in JSP..unable to get the values for menuTitle!!!
;/constant>
<package name="Basic" extends="struts-default" namespace="/">...Struts2...problem in JSP..unable to get the values for menuTitle!!! **Hello everyone...
i'm trying to make a dynamic menu from database in struts2
Advance Struts Action
Advance Struts2 Action
In struts framework Action is responsible... be a POJO also. The struts2
framework provides some classes and interfaces... is called this method executed automatically.
The action class in Struts2 framework
struts2 - Framework
struts2 thnx ranikanta
i downloaded example from below link
i m... roseindia tutorial.
But HelloWorld.jsp file didnt showing the current date and time Books
Apache
Struts
This free tutorial is an attempt to answer... of the various Struts books.
Also, this tutorial is not meant to be an evangelistic "Why everyone should use Struts and why MVC is impossible without
Struts Links - Links to Many Struts Resources
, this tutorial is not meant to be an evangelistic "Why everyone should use Struts... Tutorials - Jakarta Struts
Tutorial This complete reference of Jakarta Struts shows... in this tutorial.
Client Side Address Validation in Struts
In this lesson
Why it called Struts?
Why it called Struts? Why it called Struts
Struts 2.0- Deployment - Struts
Struts 2.0- Deployment Exception starting filter struts2... :
struts2
org.apache.struts2.dispatcher.FilterDispatcher
struts2
/*
For more information on Struts2 visit to :
Struts Book - Popular Struts Books
.
Learning
Jakarta Struts 1.2
Jakarta Struts is an Open Source Java..., and enables efficient object re-use.
Learning the New Jakarta Struts... techniques for using struts. It may be for getting basic concepts of struts
Why Struts in web Application - Struts
Why Struts in web Application Hi Friends, why struts introduced in to web application. Plz dont send any links . Need main reason for implementing struts. Thanks Prakash | http://www.roseindia.net/tutorialhelp/comment/83421 | CC-MAIN-2014-23 | refinedweb | 651 | 51.75 |
10 August 2010 06:07 [Source: ICIS news]
SINGAPORE (ICIS)--Jurong Aromatics Corp (JAC) is targeting to raise funds for the construction of its planned petrochemical complex in Singapore by the end of the year, an official of a company privy to the project said on Tuesday.
“The target is at the end of this year. We are working hard to meet that,” said Dongkeun Lee, an official of SK Energy International, which is the project leader in the planned 1.45m tonne/year aromatics complex in Jurong Island.
If everything went according to plan, plant construction would start next year, with the start-up due in 2014, said Lee.
He declined to say how much stake SK Energy now has in the project.
JAC intends to build a condensate splitter that will produce 200,000 tonnes/year of orthoxylene, 450,000 tonnes/year of benzene and 800,000 tonnes/year of paraxylene.
Based on previous estimates, the project would cost around $2.0bn (€1.52bn) to build. JAC has a capital of $550m.
Reuters reported that JAC was in the process of changing its equity structure, with SK Energy raising its stake to 30% and ?xml:namespace>
Major commodity trader Glencore is also among the foreign groups backing the project.
The project was originally due to start construction in late 2008 but funding was hit by the credit crunch following the collapse of Lehman Brothers in September of that year.
“Our position has always been it is important for JAC to seek financial closure,” the EDB said in an e-mailed statement.
“EDB will continue to facilitate the project in terms of its feasibility | http://www.icis.com/Articles/2010/08/10/9383490/jac-eyes-end-2010-to-raise-funds-for-singapore-aromatics-complex.html | CC-MAIN-2014-42 | refinedweb | 275 | 62.38 |
elementFormDefault is like a good character actor – you tend to overlook it until something isn’t quite right.
I Married Joan
Technically, Gilligan’s Island would be the same with someone other than Jim Backus playing Thurston Howell III just like that show about pompous Star Fleet officers is technically the same even when key crew are replaced by Po, Tinky-Winky, La-La and Dipsy. But it wouldn’t be quite right. The interpretation would be off.
The story is similar for elementFormDefault. People smarter than I know the full ramifications of “qualified” and “unqualified”. They also know the definition of the term: elementFormDefault specifies whether locally declared elements are namespace qualified, and can likely explain it so I and a group of 4 year olds could understand. For me, the definition is simple: when elementFormDefault is "unqualified", things aren't quite right with XML files output by BizTalk Server. Technically correct, but not quite right.
For example, given a flat file containing error messages with a header/body/trailer structure like so:
Los Angeles Facility
ERROR102|0|High|I can’t escape from new york|1981-05-31T13:20:00.000-05:00
ERROR16502|2|Low|The Love Boat hit an iceberg|1978-05-31T13:20:00.000-05:00
8675309
Create header and trailer schemas, then create a body schema that picked up the repeating error elements: ID, status, priority, description, dateTime. The first few lines of the body schema look like this:
<?xml version="1.0" encoding="utf-16"?>
<xs:schema xmlns:
Notice that elementFormDefault is “unqualified”. This is the default value. So what happens when I throw this schema into the Flat File Disassembler component and run it from the command line? Let’s find out:
set TOOLSDIR="C:\Program Files\Microsoft BizTalk Server 2006\SDK\Utilities\PipelineTools"
%TOOLSDIR%\FFDasm sample.txt -bs Error.xsd -hs Header.xsd -ts Trailer.xsd
Through the magic of the internet, two output files are produced (you did remember to set the Error node’s maxOccurs attribute to 1 right?). The files are easy to spot – they’re the ones with GUIDs. Opening one reveals:
<Error xmlns="">
<Error xmlns="">
<ID>16502</ID>
<Type>2</Type>
<Priority>Low</Priority>
<Description>The Love Boat hit an iceberg</Description>
<ErrorDateTime>1978-05-31T13:20:00.000-05:00</ErrorDateTime>
</Error>
</Error>
Notice the empty namespace declaration xmlns=””. For more complex schemas, the “extraneous” xmlns tags are more noticable. Probably not what you want but technically correct – yet something isn’t quite right. Now change elementFormDefault to “qualified”, run the flat file disassembler command-line tool again and bask in the difference:
<Error>
A parser would happily eat both versions but this later version is more pleasing. It seems more correct, just as it is correct that Lee Van Cleef was the banjo-playing deputy on the Andy Griffith Show.
The sample is attached as a zip.
Banjo-Playing Deputy?
So where does this leave us? How about a recommendation:
Always set elementFormDefault to “qualified”.
It won't hurt anything if you do, and your XML output will be more pleasing.
And if you are using the XML or Flat File disassembler component and have a body section that you want broken out, make sure the maxOccurs attribute of the root node for the body is set to 1. If you forget this and ask a newsgroup, you may look like a noob. | http://blogs.msdn.com/b/ebattalio/archive/2006/03/03/543154.aspx | CC-MAIN-2013-48 | refinedweb | 570 | 56.66 |
Universe: a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites and other applications.
Project description
Universe is a software platform for measuring and training an AI’s general intelligence across the world’s supply of games, websites and other applications. This is the universe open-source library, which provides a simple Gym interface to each Universe environment., and presenting the AI with the same interface a human uses: sending keyboard and mouse events, and receiving screen pixels. Our initial release contains over 1,000 environments in which an AI agent can take actions and gather observations.
Additionally, some environments include a reward signal sent to the agent, to guide reinforcement learning. We’ve included a few hundred environments with reward signals. These environments also include automated start menu clickthroughs, allowing your agent to skip to the interesting part of the environment.
We’d like the community’s help to grow the number of available environments, including integrating increasingly large and complex games.
The following classes of tasks are packaged inside of publicly-available Docker containers, and can be run today with no work on your part:
- Atari and CartPole environments over VNC: gym-core.Pong-v3, gym-core.CartPole-v0, etc.
- Flashgames over VNC: flashgames.DuskDrive-v0, etc.
- Browser tasks (“World of Bits”) over VNC: wob.mini.TicTacToe-v0, etc.
We’ve scoped out integrations for many other games, including completing a high-quality GTA V integration (thanks to Craig Quiter and NVIDIA), but these aren’t included in today’s release.
Contents of this document
Getting started
Installation
Supported systems
We currently support Linux and OSX running Python 2.7 or 3.5.
We recommend setting up a conda environment before getting started, to keep all your Universe-related packages in the same place.
Install Universe
To get started, first install universe:
git clone cd universe pip install -e .
If this errors out, you may be missing some required packages. Here’s the list of required packages we know about so far (please let us know if you had to install any others).
On Ubuntu 16.04:
pip install numpy sudo apt-get install golang libjpeg-turbo8-dev make
On Ubuntu 14.04:
sudo add-apt-repository ppa:ubuntu-lxc/lxd-stable # for newer golang sudo apt-get update apt-get install golang libjpeg-turbo8-dev make
On OSX:
You might need to install Command Line Tools by running:
xcode-select --install
Or numpy and libjpeg-turbo packages:
pip install numpy brew install golang libjpeg-turbo
Install Docker
The majority of the environments in Universe run inside Docker containers, so you will need to install Docker (on OSX, we recommend Docker for Mac). You should be able to run docker ps and get something like this:
$ docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Notes on installation
- When installing universe, you may see warning messages. These lines occur when installing numpy and are normal.
- You’ll need a go version of at least 1.5. Ubuntu 14.04 has an older Go, so you’ll need to upgrade your Go installation.
- We run Python 3.5 internally, so the Python 3.5 variants will be much more thoroughly performance tested. Please let us know if you see any issues on 2.7.
- While we don’t officially support Windows, we expect our code to be very close to working there. We’d be happy to take pull requests that take our Windows compatibility to 100%.
System overview
A Universe environment is similar to any other Gym environment: the agent submits actions and receives observations using the step() method.
Internally, a Universe environment consists of two pieces: a client and a remote:
- The client is a VNCEnv instance which lives in the same process as the agent. It performs functions like receiving the agent’s actions, proxying them to the remote, queuing up rewards for the agent, and maintaining a local view of the current episode state.
- The remote is the running environment dynamics, usually a program running inside of a Docker container. It can run anywhere – locally, on a remote server, or in the cloud. (We have a separate page describing how to manage remotes.)
- The client and the remote communicate with one another using the VNC remote desktop system, as well as over an auxiliary WebSocket channel for reward, diagnostic, and control messages. (For more information on client-remote communication, see the separate page on the Universe internal communication protocols.)
The code in this repository corresponds to the client side of the Universe environments. Additionally, you can freely access the Docker images for the remotes. We’ll release the source repositories for the remotes in the future, along with tools to enable users to integrate new environments in the future. Please sign up for our beta if you’d like early access.
Run your first agent
Now that you’ve installed the universe library, you should make sure it actually works. You can paste the example below into your python REPL. (You may need to press enter an extra time to make sure the while loop is executing.)()
The example will instantiate a client in your Python process, automatically pull the quay.io/openai/universe.flashgames image, and will start that image as the remote. (In our remotes documentation page, we explain other ways you can run remotes.)
It will take a few minutes for the image to pull the first time. After that, if all goes well, a window like the one below will soon pop up. Your agent, which is just pressing the up arrow repeatedly, is now playing a Flash racing game called Dusk Drive. Your agent is programmatically controlling a VNC client, connected to a VNC server running inside of a Docker container in the cloud, rendering a headless Chrome with Flash enabled:
You can even connect your own VNC client to the environment, either just to observe or to interfere with your agent. Our flashgames and gym-core images conveniently bundle a browser-based VNC client, which can be accessed at. If you’re on Mac, connecting to a VNC server is as easy as running: open vnc://localhost:5900.
(If using docker-machine, you’ll need to replace “localhost” with the IP address of your Docker daemon, and use openai as the password.)
Breaking down the example
So we managed to run an agent, but what did all the code actually mean? We’ll go line-by-line through the example.
- First, we import the gym library, which is the base on which Universe is built. We also import universe, which registers all the Universe environments.
import gym import universe # register the universe environments
- Next, we create the environment instance. Behind the scenes, gym looks up the registration for flashgames.DuskDrive-v0, and instantiates a VNCEnv object which has been wrapped to add a few useful diagnostics and utilities. The VNCEnv object is the client part of the environment, and it is not yet connected to a remote.
env = gym.make('flashgames.DuskDrive-v0')
- The call to configure() connects the client to a remote environment server. When called with configure(remotes=1), Universe will automatically create a Docker image running locally on your computer. The local client connects to the remote using VNC. (More information on client-remote communication can be found in the page on universe internal communication protocols. More on configuring remotes is at remotes.)
env.configure(remotes=1)
When starting a new environment, you call env.reset(). Universe environments run in real-time, rather than stepping synchronously with the agent’s actions, so reset is asynchronous and returns immediately. Since the environment will not have waited to finish connecting to the VNC server before returning, the initial observations from reset will be None to indicate that there is not yet a valid observation.
Similarly, the environment keeps running in the background even if the agent does not call env.step(). This means that an agent that successfully learns from a Universe environment cannot take “thinking breaks”: it must keep sending actions to the environment at all times.
Additionally, Universe introduces the vectorized Gym API. Rather than controlling a single environment at a time, the agent can control a fixed-size vector of n environments, each with its own remote. The return value from reset is therefore a vector of observations. For more information, see the separate page on environment semantics)
observation_n = env.reset()
At each step() call, the agent submits a vector of actions; one for each environment instance it is controlling. Each VNC action is a list of events; above, each action is the single event “press the ArrowUp key”. The agent could press and release the key in one action by instead submitting [('KeyEvent', 'ArrowUp', True), ('KeyEvent', 'ArrowUp', False)] for each observation.
In fact, the agent could largely have the same effect by just submitting ('KeyEvent', 'ArrowUp', True) once and then calling env.step([[] for ob in observation_n]) thereafter, without ever releasing the key using ('KeyEvent', 'ArrowUp', False). The browser running inside the remote would continue to statefully represent the arrow key as being pressed. Sending other unrelated keypresses would not disrupt the up arrow keypress; only explicitly releasing the key would cancel it. There’s one slight subtlety: when the episode resets, the browser will reset, and will forget about the keypress; you’d need to submit a new ArrowUp at the start of each episode.
action_n = [[('KeyEvent', 'ArrowUp', True)] for ob in observation_n]
After we submit the action to the environment and render one frame, step() returns a list of observations, a list of rewards, a list of “done” booleans indicating whether the episode has ended, and then finally an info dictionary of the form {'n': [{}, ...]}, in which you can access the info for environment i as info['n'][i].
Each environment’s info message contains useful diagnostic information, including latency data, client and remote timings, VNC update counts, and reward message counts.
observation_n, reward_n, done_n, info = env.step(action_n) env.render()
Testing
We are using pytest for tests. You can run them via:
pytest
Run pytest --help for useful options, such as pytest -s (disables output capture) or pytest -k <expression> (runs only specific tests).
Additional documentation
More documentation not covered in this README can be found in the doc folder of this repository.
What’s next?
- Get started training RL algorithms! You can try out the Universe Starter Agent, an implementation of the A3C algorithm that can solve several VNC environments.
- For more information on how to manage remotes, see the separate documentation page on remotes.
- Sign up for a beta to get early access to upcoming Universe releases, such as tools to integrate new Universe environments or a dataset of recorded human demonstrations.
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/universe/0.20.3/ | CC-MAIN-2021-17 | refinedweb | 1,833 | 53.92 |
11-08-2010 08:00 PM
Has anyone had any luck in setting the font of a control to a different color?
In the control "Label" has an attribute "format" of the type TextFormat that has a "color" attribute, but it does not seen to do anything. Same with size.
11-09-2010 02:45 PM
Yeah, I had to fiddle with it a bit to figure out the hierarchy of all the classes... but here's a snipet that should work for you.
import qnx.ui.text.Label;
/* A label in which to show the hello greeting. */
var helloLabel:Label = new Label();
helloLabel = new Label();
helloLabel.width = 800;
helloLabel.height = 30;
helloLabel.x = (stage.stageWidth - helloLabel.width) / 2;
helloLabel.y = 60;
var txtFormat:TextFormat = new TextFormat();
txtFormat.align = TextFormatAlign.CENTER;
txtFormat.font = "Arial";
txtFormat.color = 0x103f10;
txtFormat.size = 24;
helloLabel.format = txtFormat;
addChild(helloLabel);
Regards Brent
If I submitted something helpful, please give me a kudo. Thanks.
11-09-2010 02:51 PM
Ah, no direct manipulation to the object. Counter intuitive.
BB: Suggest the documentation be updated to reflext the issue (or better yet, allow for direct manipulation to cut down on the lines of code).
11-09-2010 03:04 PM
That's the way Actionscript handles everything, so I doubt they would change it and make it inconsistent.
11-09-2010 03:19 PM
Yes, it seems strange that the QNX Label which seems to support:
helloLabel.format.color = 0x103f10;
helloLabel.format.align = TextFormatAlign.CENTER;
helloLabel.format.font = "Arial";
helloLabel.format.color = 0x103f10;
helloLabel.format.size = 24;
wouldn't work in this fashion. But I do notice that it doesn't have nearly the same class as the spark.components.Label or the old mx.components.Label (which wouldn't allow you to do the above anyway as they don't support the .format methods). I had thought the qnx. classes were meant to be completely portable between spark and qnx (except for the import) but it doesn't appear that they are.
Regards
Brent
If my post was found to be helpful to you, please thank me with a Kudo. Thanks.
11-09-2010 03:30 PM
ActionScript as a language not not prevent direct manipulation of an object, the implementation of the class defines the behavior. In this case, if the Label.format was not allocated (assumed), then allocating a TextFormat and setting it to format is required. If Label allocated format in its construction, then direct manipulation of the format's attributes is possible.
Not the end of the world, just have to aware of it. This is similar the adding columns to a DataGrid. You cannot just push a new column to a DataGrid, you have to create your own array of columns, add them, and then set the DataGrids columns to this new array.
11-09-2010 03:39 PM
To be in violent agreement with you, at least you can manipulate MX styles individually, for example:
var label : Label = new Label();
label.setStyle( 'fontSize', 12 );
etc.
Now, I was never a big fan of meta driven style manipulation since is requires the developer to know them by heart (or look up in the documentation) instead of having the IDE (FB4) prompt you as your type. I like that QNX Label has the format attribute, its just unfortunete that you can not just change one of its attributes without having to instantiate the whole format class.
Again, not the end of the world. Just will extend to get the behavior I am looking for.
11-09-2010 03:55 PM
Yeah, I would have thought that when you instantiated a Label class into its own object, you'd also end up with an "internal" TextFormat object but I guess they are trying to save space. Of course the benefit to instantiating your own TextFormat class is that you can use that one instance for many different objects (Label 1, Label 2, Text 1, Text 2, etc.).
11-09-2010 07:44 PM
I ran into the label formatting problems earlier today. One way you can assign the formatting of a label is like this:
var label:Label = new Label(); label.format = new TextFormat(null, null, "0xFFFFFF");
You're using the regular AS3 TextFormat class, which has a constructor signature that looks like this:
TextFormat(font:String = null, size:Object = null, color:Object = null, bold:Object = null, italic:Object = null, underline:Object = null, url:String = null, target:String = null, align:String = null, leftMargin:Object = null, rightMargin:Object = null, indent:Object = null, leading:Object = null)
So in my example I'm keeping the default font and size, I'm just overriding the color to make it white. | http://supportforums.blackberry.com/t5/Adobe-AIR-Development/CheckBox-and-other-controls-label-font-color-change/td-p/632344 | CC-MAIN-2014-10 | refinedweb | 784 | 65.01 |
Red Hat Bugzilla – Bug 1479452
Exposing routes based on a sharded router using NAMESPACE_LABELS takes too long.
Last modified: 2017-08-09 14:20:41 EDT
Description of problem:
Sharded routers based on project namespace labels take too long to register a route. It can take around 7 minutes for the router to register a newly exposed route within a namespace.
Version-Release number of selected component (if applicable):
v3.6.131
How reproducible:
Always.
Steps to Reproduce:
Actual results:
Routes take too long to register based on NAMESPACE_LABELS.
Expected results:
Near "immediate" registration of an exposed route on a NAMESPACE_LABELS-sharded router, just as it happens when using ROUTE_LABELS.
Additional info:
You can fast forward to end of the recording to see how long it can take for a route to become exposed.
Can you confirm that it is only when route is created on a newly created project?
On an existing project (one that has synced in ~10 minutes) the routes appear as quickly as in a non-sharded router. If yes, this is a duplicate of
(In reply to Rajat Chopra from comment #1)
> Can you confirm that it is only when route is created on a newly created
> project?
Yes, this is correct. Also, if I wait for the routes to be exposed correctly (those max ~10 minutes), I delete the project and re-create the project under the same name, the routes are exposed correctly immediately. So this only seems to affect completely new namespaces for which routes have not been resynced with the router. So, yes a duplicate of.
Also duplicate of:
*** This bug has been marked as a duplicate of bug 1479295 *** | https://bugzilla.redhat.com/show_bug.cgi?id=1479452 | CC-MAIN-2018-22 | refinedweb | 279 | 62.88 |
I'm trying to find the root of the following equation using the bisection method:
x*(1 + (k*n)/(1 + k*x)) - L = 0.
the user inputs the values for k, n, and L and the program should solve for x. Can anyone tell me why my program isn't working? The problem is in the middle in my for loop because it's not calculatoin FP1 correctly but I don't know why. I inputted, k =1, n = 1, L = 3 and the root is 2.303 but my program said 0.0007, don't know why. Any help would be greatly appreciated! Thanks!
#include <iostream> #include <math.h> using namespace std; int main () { double k1; double n1; double tolerance; int N; double L; double a, b; cout << "Welcome to the Root Finding Calculator using the Bisection Method!" << endl << endl; cout << "please enter a value for k1: "; cin >> k1; cout << "please enter a value for n1: "; cin >> n1; cout << "please enter the value for L: "; cin >> L; cout << "please enter an endpoint, a: "; cin >> a; cout << "please enter the other endpoint, b: "; cin >> b; cout << "please enter the level of tolerance: "; cin >> tolerance; cout << "please enter the maximum number of iterations to be performed: "; cin >> N; double FA = (a * (1 + (k1*n1)/(1 + (k1*a)))) - L; double FB = (b * (1 + (k1*n1)/(1 + (k1*b)))) - L; if ((FA * FB) > 0) { cout << "Improper interval, try again!" << endl; return 0; } for (int i = 1; i <= N; i++) { double p1 = ((a + b)/2); double FP1 = (p1 * (1 + (k1*n1)/(1 + (k1*p1)))) - L; if (FP1 = 0 || fabs((b - a)/2) < tolerance) { cout << "The root is " << p1 << endl; return 0; } else { i += 1; } cout << "Iter " << i << ": p1 = " << p1 << ", dp = " << ((b - a)/2) << ", a = " << a << ", b = " << b << ", f(p) = " << FP1 << endl; if (FA * FP1 > 0) { a = p1; FA = FP1; } else { b = p1; FB = FP1; } } cout << "This method failed after " << N << " iterations." << endl; return 0; } | https://www.daniweb.com/programming/software-development/threads/42665/bisection-method-help-please | CC-MAIN-2017-43 | refinedweb | 320 | 73.92 |
Version 1.5.1¶
OEChem 1.5.1¶
New features¶
- Added new OEAssignFormalCharges function that operates on a single atom instead of just a version for the entire molecule.
- Renamed the OESystem::ParamVis namespace to OEParamVisibility to make it consistent with other OpenEye namespaces.
Mayor bug fixes¶
- Fixed two bugs in kekulization of large molecules. First, some large molecules would fail kekulization when they were actually ok. Second, even when they were kekulized correctly, the method would still return false.
- This tweaks the MDL mol file reader to use the test dimension != 3 instead of “dimension == 2” when deciding to honor the wedge/hash bonds or to determine the chirality from 3D coordinates. The subtle difference is that at the point this code is called, the dimension is not necessarily “2” or “3” if the (optional) header line is missing. If the header has been omitted, we treat the molfile like a 2D file (which it most probably is).
- Changed SMARTS parser to allow a TAB character (\t) to be treated as a separator following a SMARTS pattern. This reflects similar functionality in the SMILES parser and simplifies the task of writing “patty”-like applications. | https://docs.eyesopen.com/toolkits/python/oechemtk/releasenotes/version1_5_1.html | CC-MAIN-2021-21 | refinedweb | 195 | 55.54 |
I am curious about the MASSGIS data and related updates. I see there is no tag like the TIGER:reviewed=no such as the case with the TIGER import. With so many errors in the TIGER data, is there a higher level of accuracy with the MASSGIS data, so the review was not required? I would also like to know how updates to the MASSGIS database are getting pulled to OSM. There must have been many updates, since the original import in 2007. Is there a team associated with maintaining this data? Who is a good contact for questions about this part of the map?
asked
01 Sep '16, 20:44
mtc
411●20●22●29
accept rate:
0%
Short answer NO. Some MASSGIS stuff is OK, but masses of the earlier stuff also suffers from import practices which we'd throw our hands up now (for instance, the road network of each township was imported separately and not joined up: whereas for rest of US this was at county level)
For landcover type data the MASSGIS is not as accurate as one would wish. We've learnt the same with other datasets for Georgia, New Jersey & most of Europe (Corine Land Cover). In addition there are multiple overlapping polygons: open space, conservation etc. All really need rationalising & re-aligning with real features, then we'll have data which is more useful for ordinary people rather than planners in state or local government. Take a look at Gray's Beach/Bass Hole for an example.
I suspect the huge amount of data already in OSM for MA, and the complexity presented in the editors may deter people from improving it. That being said there's still an amazing amount of dross even in tourist destinations such as Nantucket. The Quakers worship silently because they've all drowned:
answered
02 Sep '16, 10:49
SK53 ♦
22.9k●46●232●360
accept rate:
20%
If I remember correctly, MASSGIS was considered higher quality than TIGER at the time. There is no automated mechanism to "pull" updates, nor do we want to have one, since OSM considers itself the authoritative database, and there's too much danger of things being broken by automatic updates. The import was meant to give the mapping community a head start but now it is down to individual mappers to keep OSM data up to date, just as it is practised in other parts of the world.
answered
01 Sep '16, 22:31:
import ×169
tiger ×27
review ×13
massgis ×1
question asked: 01 Sep '16, 20:44
question was seen: 1,251 times
last updated: 02 Sep '16, 10:49
Uploading a small bit of new TIGER data (and 3 other unrelated questions)
Error importing tiger 2011 data into nominatim
Issue importing tiger data into nominatim! | https://help.openstreetmap.org/questions/51858/does-massgis-data-get-double-checked?sort=oldest | CC-MAIN-2019-51 | refinedweb | 469 | 57.3 |
Walkthrough: Caching Application Data in a WPF Application
Caching enables you to store data in memory for rapid access. When the data is accessed again, applications can get the data from the cache instead retrieving it from the original source. This can improve performance and scalability. In addition, caching makes data available when the data source is temporarily unavailable.
The .NET Framework provides classes that enable you to use caching in .NET Framework applications. These classes are located in the System.Runtime.Caching namespace.
This walkthrough shows you how to use the caching functionality that is available in the .NET Framework as part of a Windows Presentation Foundation (WPF) application. In the walkthrough, you cache the contents of a text file.
Tasks illustrated in this walkthrough include the following:
Creating a WPF application project.
Adding a reference to the .NET Framework 4.
Initializing a cache.
Adding a cache entry that contains the contents of a text file.
Providing an eviction policy for the cache entry.
Monitoring the path of the cached file and notifying the cache instance about changes to the monitored item.
In order to complete this walkthrough, you will need:
Microsoft Visual Studio 2010.
A text file that contains a small amount of text. (You will display the contents of the text file in a message box.) The code illustrated in the walkthrough assumes that you are working with the following file:
c:\cache\cacheText.txt
However, you can use any text file and make small changes to the code in this walkthrough.
You will start by creating a WPF application project.
To create a WPF application
Start Visual Studio.
In the File menu, click New, and then click New Project.
The New Project dialog box is displayed.
Under Installed Templates, select the programming language you want to use (Visual Basic or Visual C#).
In the New Project dialog box, select WPF Application.
In the Name text box, enter a name for your project. For example, you can enter WPFCaching.
Select the Create directory for solution check box.
Click OK.
The WPF Designer opens in Design view and displays the MainWindow.xaml file. Visual Studio creates the My Project folder, the Application.xaml file, and the MainWindow.xaml file.
By default, WPF applications target the .NET Framework 4 Client Profile. To use the System.Runtime.Caching namespace in a WPF application, the application must target the .NET Framework 4 (not the .NET Framework 4 Client Profile) and must include a reference to the namespace.
Therefore, the next step is to change the .NET Framework target and add a reference to the System.Runtime.Caching namespace.
To change the target .NET Framework in Visual Basic
In Solutions Explorer, right-click the project name, and then click Properties.
The properties window for the application is displayed.
Click the Compile tab.
At the bottom of the window, click Advanced Compile Options….
The Advanced Compiler Settings dialog box is displayed.
In the Target framework (all configurations) list, select .NET Framework 4. (Do not select .NET Framework 4 Client Profile.)
Click OK.
The Target Framework Change dialog box is displayed.
In the Target Framework Change dialog box, click Yes.
The project is closed and is then reopened.
Add a reference to the caching assembly by following these steps:
In Solution Explorer, right-click the name of the project and then click Add Reference.
Select the .NET tab, select System.Runtime.Caching, and then click OK.
To change the target .NET Framework in a Visual C# project
In Solution Explorer, right-click the project name and then click Properties.
The properties window for the application is displayed.
Click the Application tab.
In the Target framework list, select .NET Framework 4. (Do not select .NET Framework 4 Client Profile.)
Add a reference to the caching assembly by following these steps:
Right-click the References folder and then click Add Reference.
Select the .NET tab, select System.Runtime.Caching, and then click OK.
Next, you will add a button control and create an event handler for the button's Click event. Later you will add code to so when you click the button, the contents of the text file are cached and displayed.
To add a button control
In Solution Explorer, double-click the MainWindow.xaml file to open it.
From the Toolbox, under Common WPF Controls, drag a Button control to the MainWindow window.
In the Properties window, set the Content property of the Button control to Get Cache.
Next, you will add the code to perform the following tasks:
Create an instance of the cache class—that is, you will instantiate a new MemoryCache object.
Specify that the cache uses a HostFileChangeMonitor object to monitor changes in the text file.
Read the text file and cache its contents as a cache entry.
Display the contents of the cached text file.
To create the cache object
Double-click the button you just added in order to create an event handler in the MainWindow.xaml.cs or MainWindow.Xaml.vb file.
At the top of the file (before the class declaration), add the following Imports (Visual Basic) or using (C#) statements:
In the event handler, add the following code to instantiate the cache object:
The ObjectCache class is a built-in class that provides an in-memory object cache.
Add the following code to read the contents of a cache entry named filecontents:
Add the following code to check whether the cache entry named filecontents exists:
If the specified cache entry does not exist, you must read the text file and add it as a cache entry to the cache.
In the if/then block, add the following code to create a new CacheItemPolicy object that specifies that the cache entry expires after 10 seconds.
If no eviction or expiration information is provided, the default is InfiniteAbsoluteExpiration, which means the cache entries never expire based only on an absolute time. Instead, cache entries expire only when there is memory pressure. As a best practice, you should always explicitly provide either an absolute or a siding expiration.
Inside the if/then block and following the code you added in the previous step, add the following code to create a collection for the file paths that you want to monitor, and to add the path of the text file to the collection:
Following the code that you added in the previous step, add the following code to add a new HostFileChangeMonitor object to the collection of change monitors for the cache entry:
The HostFileChangeMonitor object monitors the text file's path and notifies the cache if changes occur. In this example, the cache entry will expire if the content of the file changes.
Following the code that you added in the previous step, add the following code to read the contents of the text file:
The date and time timestamp is added so that you will be able to see when the cache entry expires.
Following the code that you added in the previous step, add the following code to insert the contents of the file into the cache object as a CacheItem instance:
You specify information about how the cache entry should be evicted by passing the CacheItemPolicy object that you created earlier as a parameter.
After the if/then block, add the following code to display the cached file content in a message box:
In the Build menu, click Build WPFCaching to build your project.
You can now test the application.
To test caching in the WPF application
Press CTRL+F5 to run the application.
The MainWindow window is displayed.
Click Get Cache.
The cached content from the text file is displayed in a message box. Notice the timestamp on the file.
Close the message box and then click Get Cache again.
The timestamp is unchanged. This indicates the cached content is displayed.
Wait 10 seconds or more and then click Get Cache again.
This time a new timestamp is displayed. This indicates that the policy let the cache entry expire and that new cached content is displayed.
In a text editor, open the text file that you created. Do not make any changes yet.
Close the message box and then click Get Cache again.
Notice the timestamp again.
Make a change to the text file and then save the file.
Close the message box and then click Get Cache again.
This message box contains the updated content from the text file and a new timestamp. This indicates that the host-file change monitor evicted the cache entry immediately when you changed the file, even though the absolute timeout period had not expired.
After you have completed this walkthrough, the code for the project you created will resemble the following example.
Imports System.Runtime.Caching Imports System.IO Class MainWindow Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.Windows.RoutedEventArgs) Handles Button1.Click Dim cache As ObjectCache = MemoryCache.Default Dim fileContents As String = TryCast(cache("filecontents"), _ String) If fileContents Is Nothing Then Dim policy As New CacheItemPolicy() policy.AbsoluteExpiration = _ DateTimeOffset.Now.AddSeconds(10.0) Dim filePaths As New List(Of String)() filePaths.Add("c:\cache\cacheText.txt") policy.ChangeMonitors.Add(New _ HostFileChangeMonitor(filePaths)) ' Fetch the file contents. fileContents = File.ReadAllText("c:\cache\cacheText.txt") & vbCrLf & DateTime.Now.ToString() cache.Set("filecontents", fileContents, policy) End If MessageBox.Show(fileContents) End Sub End Class | http://msdn.microsoft.com/en-us/library/vstudio/dd997362.aspx?cs-save-lang=1&cs-lang=vb | CC-MAIN-2014-23 | refinedweb | 1,563 | 67.86 |
In Python, like most modern programming languages, the function is a primary method of abstraction and encapsulation. You’ve probably written hundreds of functions in your time as a developer. But not all functions are created equal. And writing “bad” functions directly affects the readability and maintainability of your code. So what, then, is a “bad” function and, more importantly, what makes a “good” function?
A Quick Refresher
Math is lousy with functions, though we might not remember them, so let’s think back to everyone’s favorite topic: calculus. You may remember seeing formulas like the following
f(x) = 2x + 3. This is a function, called
f, that takes an argument
x, and "returns" two times
x + 3. While it may not look like the functions we're used to in Python, this is directly analogous to the following code:
def f(x):
return 2*x + 3
Functions have long existed in math, but have far more power in computer science. With this power, though, comes various pitfalls. Let’s now discuss what makes a “good” function and warning signs of functions that may need some refactoring.
Keys To A Good Function
What differentiates a “good” Python function from a crappy one? You’d be surprised at how many definitions of “good” one can use. For our purposes, I’ll consider a Python function “good” if it can tick off most of the items on this checklist (some are not always possible):
- Is sensibly named
- Has a single responsibility
- Includes a docstring
- Returns a value
- Is not longer than 50 lines
- Is idempotent and, if possible, pure
For many of you, this list may seem overly draconian. I promise you, though, if your functions follow these rules, your code will be so beautiful it will make unicorns weep. Below, I’ll devote a section to each of the items, then wrap things up with how they work in harmony to create “good” functions.
Naming
There’s a favorite saying of mine on the subject, often misatributed to Donald Knuth, but which actually came from Phil Karlton:
There are only two hard things in Computer Science: cache invalidation and naming things. -- Phil Karlton
As silly as it sounds, naming things well is difficult. Here’s an example of a “bad” function name:
def get_knn_from_df(df):
Now, I’ve seen bad names literally everywhere, but this example comes from Data Science (really, Machine Learning), where its practitioners typically write code in Jupyter notebooks and later try to turn those various cells into a comprehensible program.
The first issue with the name of this function is its use of acronyms/abbreviations. Prefer full English words to abbreviations and non-universally known acronyms. The only reason one might abbreviate words is to save typing, but every modern editor has autocomplete, so you’ll only be typing that full name once. Abbreviations are an issue because they are often domain specific. In the code above,
knn refers to "K-Nearest Neighbors", and
df refers to "DataFrame", the ubiquitous pandas data structure. If another programmer not familiar with those acronyms is reading the code, almost nothing about the name will be comprehensible to her.
There are two other minor gripes about this function’s name: the word “get” is extraneous. For most well-named functions, it will be clear that something is being returned from the function, and its name will reflect that. The
from_df bit is also unnecessary. Either the function's docstring or (if living on the edge) type annotation will describe the type of the parameter if it's not already made clear by the parameter's name.
So how might we rename this function? Simple:
def k_nearest_neighbors(dataframe):
It is now clear even to the lay person what this function calculates, and the parameter’s name (
dataframe) makes it clear what type of argument should be passed to it.
Single Responsibility
Straight from “Uncle” Bob Martin, the Single Responsibility Principle applies just as much to functions as it does classes and modules (Mr. Martin’s original targets). It states that (in our case) a function should have a single responsibility. That is, it should do one thing and only one thing. One great reason is that if every function only does one thing, there is only one reason ever to change it: if the way in which it does that thing must change. It also becomes clear when a function can be deleted: if, when making changes elsewhere, it becomes clear the function’s single responsibility is no longer needed, simply remove it.
An example will help. Here’s a function that does more than one “thing”:
def calculate_and print_stats(list_of_numbers):
sum = sum(list_of_numbers)
mean = statistics.mean(list_of_numbers)
median = statistics.median(list_of_numbers)
mode = statistics.mode(list_of_numbers)
print('-----------------Stats-----------------')
print('SUM: {}'.format(sum) print('MEAN: {}'.format(mean)
print('MEDIAN: {}'.format(median)
print('MODE: {}'.format(mode)
This function does two things: it calculates a set of statistics about a list of numbers and prints them to
STDOUT. The function is in violation of the rule that there should be only one reason to change a function. There are two obvious reasons this function would need to change: new or different statistics might need to be calculated or the format of the output might need to be changed. This function is better written as two separate functions: one which performs and returns the results of the calculations and another that takes those results and prints them. One dead giveaway that a function has multiple responsibilities is the word and in the functions name.
This separation also allows for much easier testing of the function’s behavior and also allows the two parts to be separated not just into two functions in the same module, but possibly live in different modules altogether if appropriate. This, too, leads to cleaner testing and easier maintenance.
Finding a function that only does two things is actually rare. Much more often, you’ll find functions that do many, many more things. Again, for readability and testability purposes, these jack-of-all-trade functions should be broken up into smaller functions that each encapsulate a single unit of work.
Docstrings
While everyone seems to be aware of PEP-8, defining the style guide for Python, far fewer seem to be aware of PEP-257, which does the same for docstrings. Rather than simply rehash the contents of PEP-257, feel free to read it at your leisure. The main takeaways, however, are:
- Every function requires a docstring
- Use proper grammar and punctuation; write in complete sentences
- Begins with a one-sentence summary of what the function does
- Uses prescriptive rather than descriptive language
This is an easy one to tick off when writing functions. Just get in the habit of always writing docstrings, and try to write them before you write the code for the function. If you can’t write a clear docstring describing what the function will do, it’s a good indication you need to think more about why you’re writing the function in the first place.
Return Values
Functions can (and should) be thought of as little self-contained programs. They take some input in the form of parameters and return some result. Parameters are, of course, optional. Return values, however, are not optional, from a Python internals perspective. Even if you try to create a function that doesn’t return a value, you can’t. If a function would otherwise not return a value, the Python interpreter “forces it” to return
None. Don't believe me? Test out the following yourself:
❯ python3
Python 3.7.0 (default, Jul 23 2018, 20:22:55)
[Clang 9.1.0 (clang-902.0.39.2)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> def add(a, b):
... print(a + b)
...
>>> b = add(1, 2)
3
>>> b
>>> b is None
True
You’ll see that the value of
b really is
None. So, even if you write a function with no
return statement, it's still going to return something. And it should return something. After all, it's a little program, right. How useful are programs that produce no output, including whether or not they executed correctly? But most importantly, how would you test such a program?
I’ll even go so far as to make the following statement: every function should return a useful value, even if only for testability purposes. Code that you write should be tested (that’s not up for debate). Just think of how gnarly testing the
add function above would be (hint: you'd have to redirect I/O and things go south from there quickly). Also, returning a value allows for method chaining, a concept that allows us to write code like this:
with open('foo.txt', 'r') as input_file:
for line in input_file:
if line.strip().lower().endswith('cat'):
# ... do something useful with these lines
The line
if line.strip().lower().endswith('cat'): works because each of the string methods (
strip(), lower(), endswith()) return a string as the result of calling the function.
Here are some common reasons people give when asked why a given function they wrote doesn’t return a value:
“All it does is [some I/O related thing like saving a value to a database]. I can’t return anything useful.”
I disagree. The function can return
True if the operation completed successfully.
“We modify one of the parameters in place, using it like a reference parameter.”””
Two points, here. First, do your best to avoid this practice. For others, providing something as an argument to your function only to find that it has been changed can be surprising in the best case and downright dangerous in the worst. Instead, much like the string methods, prefer returning a new instance of the parameter with the changes applied to it. Even when this isn’t feasible because making a copy of some parameter is prohibitively expensive, you can still fall back to the old “Return
True if the operation completed successfully" suggestion.
“I need to return multiple values. There is no single value I could return that would make sense.”
This is a bit of a straw-man argument, but I have heard it. The answer, of course, is to do exactly what the author wanted to do but didn’t know how to do: use a tuple to return more than one value.
And perhaps the most compelling argument for always returning a useful value is that callers are always free to ignore them. In short, returning a value from a function is almost certainly a good idea and very unlikely to break anything, even in existing code bases.
Function Length
I’ve said a number of times that I’m pretty dumb. I can only hold about 3 things in my head at once. If you make me read a 200 line function and ask what it does, my eyes are likely to glaze over after about 10 seconds. The length of a function directly affects readability and, thus, maintainability. So keep your functions short. 50 lines is a totally arbitrary number that seemed reasonable to me. Most functions you write will (hopefully) be quite a bit shorter.
If a function is following the Single Responsibility Principle, it is likely to be quite short. If it is pure or idempotent (discussed below), it is also likely to be short. These ideas all work in concert together to produce good, clean code.
So what do you do if a function is too long? REFACTOR! Refactoring is something you probably do all the time, even if the term isn’t familiar to you. It simply means changing a program’s structure without changing its behavior. So extracting a few lines of code from a long function and turning them into a function of their own is a type of refactoring. It’s also happens to be the fastest and most common way to shorten a long function in a productive way. And since you’re giving all those new functions appropriate names, the resulting code reads much more easily. I could write a whole book on refactoring (in fact it’s been done many times) and won’t go into specifics here. Just know that if you have a function that’s too long, the way to fix it is through refactoring.
Idempotency and Functional Purity
The title of this subsection may sound a bit intimidating, but the concepts are simple. An idempotent function always returns the same value given the same set of arguments, regardless of how many times it is called. The result does not depend on non-local variables, the mutability of arguments, or data from any I/O streams. The following
add_three(number) function is idempotent:
def add_three(number):
"""Return *number* + 3."""
return number + 3
No matter how many times one calls
add_three(7), the answer will always be
10. Here's a different take on the function that is not idempotent:
def add_three():
"""Return 3 + the number entered by the user."""
number = int(input('Enter a number: '))
return number + 3
This admittedly contrived example is not idempotent because the return value of the function depends on I/O, namely the number entered by the user. It’s clearly not true that every call to
add_three() will return the same value. If it is called twice, the user could enter
3 the first time and
7 the second, making the call to
add_three() return
6 and
10, respectively.
A real-world example of idempotency is hitting the “up” button in front of an elevator. The first time it’s pushed, the elevator is “notified” that you want to go up. Because the pressing the button is idempotent, pressing it over and over again is harmless. The result is always the same.
Why is idempotency important?
Testability and maintainability. Idempotent functions are easy to test because they are guaranteed to always return the same result when called with the same arguments. Testing is simply a matter of checking that the value returned by various different calls to the function return the expected value. What’s more, these tests will be fast, an important and often overlooked issue in Unit Testing. And refactoring when dealing with idempotent functions is a breeze. No matter how you change your code outside the function, the result of calling it with the same arguments will always be the same.
What is a “pure” function?
In functional programming, a function is considered pure if it is both idempotent and has no observable side effects. Remember, a function is idempotent if it always returns the same value for a given set of arguments. Nothing external to the function can be used to compute that value. However, that doesn’t mean the function can’t affect things like non-local variables or I/O streams. For example, if the idempotent version of
add_three(number) above printed the result before returning it, it is still considered idempotent because while it accessed an I/O stream, that access had no bearing on the value returned from the function. The call to
print() is simply a side effect: some interaction with the rest of the program or the system itself aside from returning a value.
Let’s take our
add_three(number) example one step further. We can write the following snippet of code to determine how many times
add_three(number) was called:
add_three_calls = 0
def add_three(number):
"""Return *number* + 3."""
global add_three_calls
print(f'Returning {number + 3}')
add_three_calls += 1
return number + 3
def num_calls():
"""Return the number of times *add_three* was called."""
return add_three_calls
We’re now printing to the console (a side effect) and modifying a non-local variable (another side effect), but since neither of these affect the value returned by the function, it is still idempotent.
A pure function has no side effects. Not only does it not use any “outside data” to compute its value, it has no interaction with the rest of the system/program other than computing and returning said value. Thus while our new
add_three(number) definition is still idempotent, it is no longer pure.
Pure functions do not have logging statements or
print() calls. They do not make use of database or internet connections. They don't access or modify non-local variables. And they don't call any other non-pure functions.
In short, they are incapable of what Einstein called “spooky action at a distance” (in a Computer Science setting). They don’t modify the rest of the program or system in any way. In imperative programming (the kind you’re doing when you write Python code), they are the safest functions of all. They are eminently testable and maintainable and, even more so than mere idempotent functions, testing them is guaranteed to basically be as fast as executing them. And the test(s) itself is simple: there are no database connections or other external resources to mock, no setup code required, and nothing to clean up afterwards.
To be clear, idempotency and purity are aspirational, not required. That is, we’d love to only write pure or idempotent functions because of the benefits mentioned, but that isn’t always possible. The key, though, is that we naturally begin to arrange our code to isolate side effects and external dependencies. This has the effect of making every line of code we write easier to test, even if we’re not always writing pure or idempotent functions.
Summing Up
So that’s it. The secret to writing good functions is not a secret at all. It just involves following a number of established best-practices and rules-of-thumb. I hope you found this article helpful. Now go forth and tell your friends! Let’s all agree to just always write great code in all cases :). Or at least do our best not to put more “bad” code into the world. I’d be able to live with that…
Posted on Oct 11, 2018 by Jeff Knupp
Originally published at jeffknupp.com on October 11, 2018. | https://hackernoon.com/write-better-python-functions-c3a9a36382a6 | CC-MAIN-2019-39 | refinedweb | 3,024 | 62.78 |
avi sinha wrote:no object is eligible here
avi sinha
Aakash Goel wrote:When String pool is used, String objects are not created, right?
sudipto shekhar wrote:
How many String objects are created in the code provided by you??
Aakash Goel wrote:
sudipto shekhar wrote:
How many String objects are created in the code provided by you??
Thats what I am not sure of. Can you help?
Ninad Kulkarni wrote:Hi Aakash,
See the program given below.
Source : I created the following java program for explaination purpose.
public class EvaluationOrder{
public static void main(String[] args){
String s1 = "a";
String s2 = "b";
String s3 = "ab";
String s4 = new String(s3);
if ("ab" == s1 + "b") {
System.out.println("equal");
}
else {
System.out.println("not equal");
}
if ("ab" == "a"+"b") {
System.out.println("equal");
}
else {
System.out.println("not equal");
}
if (s4 == s1 + "b") {
System.out.println("equal");
}
else {
System.out.println("not equal");
}
if (s4 == "a"+"b") {
System.out.println("equal");
}
else {
System.out.println("not equal");
}
}
}
Output :
not equal
equal
not equal
not equal
After seeing the output we can say that at line 13 there is concatanation of two string literals but JVM checks that if it is available in pool then assign reference stored in pool other wise create new string object.
so in your situation we can say that string "aakashgoel" is eligible for garbage collection.
Correct me if I am wrong.
so in your situation we can say that string "aakashgoel" is eligible for garbage collection.
Ninad Kulkarni wrote:My thinking is when JVM checks that "aakashgoel" string is not in the pool then it is newly created on heap and eligible for garbage collection after s = null.
1. String s="aakash";
2. s+="goel";
3. s=null;
Bert Bates wrote:Sorry to be late to this thread guys!
On the real exam, String objects will NEVER be used in GC-related questions.
So, this is an interesting topic, but IT'S NOT ON THE EXAM
hth,
Bert
Mo Jay wrote:
Aakash, the point is to know and learn Java the proper way not just to depend on what will be included in the exam and what's not.
knowing where the objects and strings reside, and where there references are is very important for you in order to manipulate them successfully, weither it is included in the exam or not.
Cheers!!
In the original code snippet,
1. String s="aakash";
2. s+="goel";
3. s=null;
how many String objects are getting created actually. My idea is as follows:
First object is 's' which stores 'aakash'.
Second Object is some anonymous object with value 'goel'
Third object is again 's' since a String object cannot modify its value, we will be automatically creating a new object during this += operation.
Any thoughts on this? Am I completely wrong?
Mo Jay wrote:Sorry Jain Jose but your interpretation is inaccurate as the above code will:
First: create a String object with the value aakash then make the reference s point to it from the stack.
Second: it will create ANOTHER string object with the value: aakashgoel and make the previous s reference point to it, so NOW there is no reference pointing to the first string aakash anymore.
Third: it will make s reference point to null(means point to nothing) and at this point there is no reference pointing to the string object created in part two(aakashgoal). In total, only 2 strings objects were created.
Mo Jay wrote:Hi,
First of all, ALL String objects created either through sting literal ".." OR new keyword are stored in the heap. There is NO object string or any other kind of object stored anywhere else BUT in the heap.
The string pool has ONLY references to string objects when the string is created using string literal ".." (for ex. String s="aakash"), if a string object is created using the new keyword THEN there is no reference to that string object in the pool.
Going back to the original code for the question:
1. String s="aakash";
2. s+="goel";
3. s=null;
In the above code there only two strings created, however none of them is garbage collector candidate because they have object references to them in the pool.
I hope this clarifies the issue.
Cheers!!
more from paul wheaton's glorious empire of web junk:
cast iron skillet
diatomaceous earth
rocket mass heater
sepp holzer
raised garden beds
raising chickens
lawn care
CFL
flea control
missoula
heat
permaculture | http://www.coderanch.com/t/451565/java-programmer-SCJP/certification/strings-string-pool-garbage-collection | crawl-003 | refinedweb | 751 | 72.97 |
The {{fn}} helper (3.11)
Learn about Ember's conceptual shift in binding actions at the source, rather than at the invocation site.
Summary
Traditionally in Ember, actions have referred to either the built-in action modifier, or the built-in action helper:
{{! action modifier }} <button {{action 'increment'}}>Add 1</button> {{! action helper }} <button onclick={{action 'increment'}}>Add 1</button>
Both forms are responsible for quite a few things:
- Look up the
increment()function on the
actionshash of the component
- In the case of the modifier, add a "click" event listener
- Accept & curry arguments
- Bind the correct
thiscontext
For all these reasons, Actions have historically been a little confusing to work with at times, and they can feel like one of the more "magical" parts of the framework.
This is why the next few features, starting with the
{{fn}} helper, are all about decomposing Actions into new primitives that each have their own responsibility.
Before we introduce these new primitives, though, we need to learn about a new method we have for creating Actions: the new
action function import:
import Component from "@ember/component"; import { action } from "@ember/object"; export default Component.extend({ count: 0, increment: action(function () { this.incrementProperty("count"); }) })
What does that
action import do? It binds the
this context of our component's
increment method. Note that this is one of the things the
{{action}} helper/modifier did before, but now instead of doing it in the template every time we reference a method, we can do it once from the component file. This is nice because it means that no matter who references our
increment method from a template, it will be "safe" to call – the
this context will always be bound to the component instance.
So, this is a bit of a conceptual shift from how we've created Actions before. Usually we create them in the template, but now we're creating them in our JavaScript component file. By doing it in the JS at the definition site of the function, we only have to worry about binding once. The template can now invoke the action n times, just by calling the function, and without worrying about binding or
this context.
Why is the JavaScript import called
action, if all it's doing is binding the context? You could imagine it being called
bind or
autobind. It's a bit confusing since the word "action" has become so loaded at this point in Ember, but the conceptual model here is: the
action import turns a method into an Action, and Actions are safe for use by templates.
So, if you need to invoke a method from a template, turn that method into an Action using the
action import.
Going forward, this will be the only notion of "actions" in your Ember code! That's because the other responsibilities of the classic action helper/modifier are being split up, starting with the ability to accept and curry arguments. And that's what this new
{{fn}} feature is all about.
Once we create an action in our component file using the new import, we can reference it in our template like this:
<button onclick={{this.increment}} />
and everything will work as we expect. But what if we want to pass in arguments?
<button onclick={{this.incrementBy 5}} />
This won't work, because Ember will expect
incrementBy here to be a helper that takes in an arglist. Instead, we can use the new
{{fn}} helper to pass an argument into our
incrementBy function, like this:
<button onclick={{fn this.incrementBy 5}} />
The
fn helper applies the arglist to the function we give it, so that when the event handler callback occurs the function will be invoked with the args that we passed in.
One way to think of it that might be helpful is
fn is like invoking an anonymous function:
{{! You can think of it like this --}} <button onclick={{ () => this.incrementBy(5) }} /> <button onclick={{fn this.incrementBy 5}} />
So, that's how
fn lets us pass in arguments directly to actions from our template.
More than that, though,
fn is actually currying these arguments, meaning it can be used multiple times allowing for better composition throughout your component hierarchy. If you've used curried actions in your Ember template code in the past, you can use
{{fn}} in exactly the same way.
So, we can see that the
{{fn}} helper takes on the responsibility of arg passing and currying that used to be handled by
{{action}}. Next we'll learn how
{{on}} lets us easily add event listeners. | https://embermap.com/topics/what-s-new-in-ember/the-fn-helper-3-11 | CC-MAIN-2020-24 | refinedweb | 762 | 60.14 |
ShellExecute function
Performs an operation on a specified file.
Syntax
Parameters
- hwnd [in, optional]
Type: HWND
A handle to the parent window used for displaying a UI or error messages. This value can be NULL if the operation is not associated with a window.
- lpOperation [in, optional]
Type: LPCTSTR
A pointer to a null-terminated string, referred to in this case as a verb, that specifies the action to be performed. The set of available verbs depends on the particular file or folder. Generally, the actions available from an object's shortcut menu are available verbs. The following verbs are commonly used:
-
Launches an editor and opens the document for editing. If lpFile is not a document file, the function will fail.
-
Explores a folder specified by lpFile.
-
Initiates a search beginning in the directory specified by lpDirectory.
-
Opens the item specified by the lpFile parameter. The item can be a file or folder.
-
Prints the file specified by lpFile. If lpFile is not a document file, the function fails.
-
The default verb is used, if available. If not, the "open" verb is used. If neither verb is available, the system uses the first verb listed in the registry.
- lpFile [in]
Type: LPCTSTR
A pointer to a null-terminated string that specifies the file or object on which to execute the specified verb. To specify a Shell namespace object, pass the fully qualified parse name. Note that not all verbs are supported on all objects. For example, not all document types support the "print" verb. If a relative path is used for the lpDirectory parameter do not use a relative path for lpFile.
- lpParameters [in, optional]
Type: LPCTSTR
If lpFile specifies an executable file, this parameter is a pointer to a null-terminated string that specifies the parameters to be passed to the application. The format of this string is determined by the verb that is to be invoked. If lpFile specifies a document file, lpParameters should be NULL.
- lpDirectory [in, optional]
Type: LPCTSTR
A pointer to a null-terminated string that specifies the default (working) directory for the action. If this value is NULL, the current working directory is used. If a relative path is provided at lpFile, do not use a relative path for lpDirectory.
- nShowCmd [in]
Type: INT
The flags that specify how an application is to be displayed when it is opened. If lpFile specifies a document file, the flag is simply passed to the associated application. It is up to the application to decide how to handle it. These values are defined in Winuser.h.
-
Hides the window and activates another window.
-
Maximizes the specified window.
-
Minimizes the specified window and activates the next top-level window in the z-order.
-
Activates and displays the window. If the window is minimized or maximized, Windows restores it to its original size and position. An application should specify this flag when restoring a minimized window.
-
Activates the window and displays it in its current size and position.
-
Sets the show state based on the SW_ flag specified in the STARTUPINFO structure passed to the CreateProcess function by the program that started the application. An application should call ShowWindow with this flag to set the initial show state of its main window.
-
Activates the window and displays it as a maximized window.
-
Activates the window and displays it as a minimized window.
-
Displays the window as a minimized window. The active window remains active.
-
Displays the window in its current state. The active window remains active.
-
Displays a window in its most recent size and position. The active window remains active.
-
Activates and displays a window. If the window is minimized or maximized, Windows restores it to its original size and position. An application should specify this flag when displaying the window for the first time.
Return value
Type: HINSTANCE
If the function succeeds, it returns a value greater than 32. If the function fails, it returns an error value that indicates the cause of the failure. The return value is cast as an HINSTANCE for backward compatibility with 16-bit Windows applications. It is not a true HINSTANCE, however. It can be cast only to an int and compared to either 32 or the following error codes below.
Remarks
Because ShellExecute can delegate execution to Shell extensions (data sources, context menu handlers, verb implementations) that are activated using Component Object Model (COM), COM should be initialized before ShellExecute is called. Some Shell extensions require the COM single-threaded apartment (STA) type. In that case, COM should be initialized as shown here:
There are certainly instances where ShellExecute does not use one of these types of Shell extension and those instances would not require COM to be initialized at all. Nonetheless, it is good practice to always initalize COM before using this function.
This method allows you to execute any commands in a folder's shortcut menu or stored in the registry.
To open a folder, use either of the following calls:
or
To explore a folder, use the following call:
To launch the Shell's Find utility for a directory, use the following call.
If lpOperation is NULL, the function opens the file specified by lpFile. If lpOperation is "open" or "explore", the function attempts to open or explore the folder.
To obtain information about the application that is launched as a result of calling ShellExecute, use ShellExecuteEx.
Requirements
See also
- Launching Applications (ShellExecute, ShellExecuteEx, SHELLEXECUTEINFO)
- ShellExecuteEx
- IShellExecuteHook
- CoInitializeEx | https://msdn.microsoft.com/en-gb/library/bb762153.aspx | CC-MAIN-2017-34 | refinedweb | 912 | 57.87 |
We provide UCSF Chimera extension for displaing the reference alignments. The extension can also be used for calculating pairwise and multiple alignments with our psc++ software. Note that psc++ is still under development. A (preliminary) manual is packed with the extension or can be obtained here. The extension requires chimera build 2540 or newer.
Quick installation guide:
Log in as the user who owns the Chimera-installation!
For Linux and Windows you need to know the name of the Chimera base directory. The save way to find the name is to start chimera, open the IDLE (Menu Favorites), and execute at the IDLE prompt:
import os ; print os.environ['CHIMERA']
To use the extension for viewing the reference structural alignments you need to download the xml files and the corresponding pdb files from this server. Unzip the xml files and the pdb files and move everything into a single directory. Start chimera, open the menu Tools -> Psc++ -> Load XML File. Use the Browse button to open an reference alignment XML file. Once the file is loaded, press Apply. For further details please consult the viewer extension manual. | https://pbwww.che.sbg.ac.at/?page_id=156 | CC-MAIN-2019-47 | refinedweb | 187 | 66.64 |
Odoo Help
This community is for beginners and experts willing to share their Odoo knowledge. It's not a forum to discuss ideas, but a knowledge base of questions and their answers.
How to display an error message: There are no articles at the click of a button ?
I try to add a error box on my function button but I do not know how to do it..
I try this :
def testError(self, cr, uid, ids, context=None): for line in self.browse(cr, uid, ids, context=context): if line.invoiced: raise osv.except_osv(_('Invalid Action!'), _('Test.')) return self.write(cr, uid, ids, {'state': 'draft'})
Try this. This will show a popup message.
def testError(self, cr, uid, ids, context=None): for line in self.browse(cr, uid, ids, context=context): if line.invoiced: warning = { 'title': 'Warning!', 'message' : 'Invalid action!'} return {'warning': warning} return self.write(cr, uid, ids, {'state': ) | https://www.odoo.com/forum/help-1/question/how-to-display-an-error-message-there-are-no-articles-at-the-click-of-a-button-45122 | CC-MAIN-2016-50 | refinedweb | 153 | 71 |
#include <MIkHandleGroup.h>
Group class for ik handles. Each group has an associated solver and priority. A single chain solver handle group has only one handle in its group.
Default constructor. The instance is set to contain no groups.
The class destructor.
Return the priority value of this handle group.
Return the solver id used by this handle group.
return the priority of the solver used by this handle group.
Set the priority of this handle group.
Set the solver id for this handle group.
Determines whether the end-effector at the handle(goal) location.
Do all ik solving steps for this group.
Return the total number of degrees of freedom of this handle group.
Return the number of handles in the handle list for this group.
Return the ith handle in the handle list for this group. | http://download.autodesk.com/us/maya/2009help/api/class_m_ik_handle_group.html | CC-MAIN-2017-17 | refinedweb | 138 | 70.8 |
Building (1 review) - 05 Dec 2013 01:18:51 GMT - Search in distribution
"IO::Pipe" provides an interface to creating pipes between processes. CONSTRUCTOR new ( [READER, WRITER] ) Creates an "IO::Pipe", which is a reference to a newly created symbol (see the "Symbol" package). "IO::Pipe::new" optionally takes two argument...GBARR/IO-1.25 - 14 May 2009 00:29:16 GMT - Search in distribution.360 (12 reviews) - 08 Jul 2014 12:34:51 GMT - Search in distribution
The PGP module allow a perl script to work with PGP related files. * PGP::new $pgp = new PGP [$pgppath], [$pgpexec]; Create the PGP encapsulation object. The standard location for the PGP executable is /usr/local/bin/pgp. * PGP::Exec $pid = Exec $pgp...GEHIC/PGP-0.3a - 14 Aug 1996 19:41:51 GMT - Search in distribution
This class is a factory for text pipes. A pipe has a "filter()" method through which input can pass. The input can be a string or a reference to an array of strings. Pipes can be stacked together using Text::Pipe::Stackable. The problem that this dis...MARCEL/Text-Pipe-0.10 - 18 Sep 2009 11:14:02 GMT - Search in distribution
This module is a Feed model that can mimic the functionality of standard UNIX pipe and filter style text processing tools. Instead of operating on lines from text files, it operates on entries from Atom (or RSS) feeds. The idea is to provide a high-l...VESELOSKY/Feed-Pipe-1.003 - 29 Dec 2009 15:36:09 GMT - Search in distribution
AUTHOR Ingy döt Net <ingy@cpan.org> COPYRIGHT Copyright (c) 2004-2005. Brian Ingerson. Copyright (c) 2006-2014. Ingy döt Net. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. See < (7 reviews) - 14 Jun 2014 19:50:29 GMT - Search in distribution
The ->csv() call can get a HASH reference parameter, the same parameter as Text::CSV would get. We pass it directly to that module. Split up lines of csv file and return an array reference for each line. TODO: use the first row as key names and on ev...SZABGAB/Pipe-Tube-Csv-0.04 - 21 May 2012 04:29:26 GMT - Search in distribution
This is a marker class; the actual pipe segment classes live in the "Text::Pipe::HTML::" namespace. INSTALLATION See perlmodinstall for information and options on installing Perl modules. BUGS AND LIMITATIONS No bugs have been reported. Please report...MARCEL/Text-Pipe-HTML-1.100880 - 29 Mar 2010 21:39:35 GMT - Search in distribution-1.6 - 16 Nov 2012 20:03:21 GMT - Search in distribution
In an event driven program, you must be careful with every Operation System call, because it can block the event mechanism, hence the program as a whole. Often you can be lazy with writes, because its communication buffers are usually working quite a...MARKOV/IOMux-0.12 - 27 Jan 2011 13:27:32 GMT - Search in distribution
This is a pipe-based logging engine that allows you to pipe your log output METHODS init This method is called when "->new()" is called. It opens the pipe for writing. _log Writes the log message to the pipe. AUTHOR Moshe Good LICENSE AND COPYRIGHT C...MOSHEGOOD/Dancer-Logger-Pipe-0.01 - 11 Jan 2011 19:50:26 GMT - Search in distribution
This is a plug-in format parser for the AnyData and DBD::AnyData modules. It will read column names from the first row of the file, or accept names passed by the user. In addition to column names, the user may set other options as follows: col_names ...SDOWIDEIT/AnyData-0.11 - 14 Dec 2012 05:44:30 GMT - Search in distribution
This package implements a PerlIO layer for reading files only. It exports, on request, a function "set_io_pipe" that you can use to set a Text::Pipe pipe. If you then use the "Pipe" layer as shown in the synopsis, the input gets filtered through the ...MARCEL/PerlIO-via-Pipe-1.100860 - 27 Mar 2010 12:34:20 GMT - Search in distribution
Web::Hippie::Pipe provides unified bidirectional communication over HTTP via websocket, mxhr, or long-poll, for your "PSGI" applications. SEE ALSO Web::Hippie AUTHOR Chia-liang Kao <clkao@clkao.org> LICENSE This library is free software; you can redi...CLKAO/Web-Hippie-0.40 - 23 Feb 2012 13:37:03 GMT - Search in distribution
This module sends events down a pipe, one line per event. The data is formatted in the scheme: begin-stage('build', '12450052') end-stage('build', '12452345', 'failed') CONFIGURATION Along with the standard configuration parameters for "Test::AutoBui...DANBERR/Test-AutoBuild-1.2.4 - 01 Sep 2011 21:11:37 GMT - Search in distribution | https://metacpan.org/search?q=Pipe | CC-MAIN-2014-23 | refinedweb | 790 | 65.73 |
This is mostly taken from the very helpful documentation on Stevedore; when I started working on Gnocchi I found myself wondering a lot about the functions of two modules in particular, stevedore and pecan. The other day I needed to use stevedore to load a plugin and so finally had the chance to use it in practice. These are some notes from the process – hopefully they can be useful for someone needing to use stevedore for the first time.
So my basic understanding of stevedore is that it is used for managing plugins, or pieces of code that you want to load into an application. The manager classes work with plugins defined through entry points to load and enable the code.
In practice, this is what the process looks like:
- Create a plugin
The documentation, which is authored by Doug Hellman I believe, recommends making a base class with the abc module, as good API practice. In my case, I wanted to make a class that would calculate the moving average of some data. So my base class, defined in the init file of my directory (/gnocchi/statistics) looked like this:
import abc import six @six.add_metaclass(abc.ABCMeta) class CustomStatistics(object): @abc.abstractmethod def compute(data): '''Returns the custom statistic of the data.'''
The code is implemented in the class MovingAverage (/gnocchi/statistics/moving_statistics.py):
from gnocchi import statistics class MovingAverage(statistics.CustomStatistics): def compute(self, data): ... do stuff ... return averaged_data
- Create the entry point
The next step is to define an entry point for the code in your setup.cfg file. The entry point format for the syntax is
plugin_namespace= name = module.path:thing_you_want_to_import_from_the_module
so I had
[entry_points] gnocchi.statistics = moving-average = gnocchi.statistics.moving_statistics:MovingAverage
The stevedore documentation on registering plugins has more information on how to package a library in general usng setuptools.
- Load the Plugins
You can either use drivers, hooks, or the extensions pattern to load your plugins. I ended up starting with drivers and then moving to extensions. The difference between them is whether you want to load a single plugin (use drivers) or multiple plugins at a time (extensions). I believe hooks also allows you to load many plugins at once but is meant to be used for multiple entry points with the same name. This allows you to invoke several functions with a single call…that’s about the limit of my knowledge on hooks.
The syntax for a driver is the following:
from stevedore import driver mgr = driver.DriverManager( namespace='gnocchi.statistics', name='moving-average', invoke_on_load=True, ) output = mgr.driver.compute(data)
The invoke_on_load argument lets you call the object when loaded. Here the object is an instance of the MovingAverage class. You access it with the driver property and then call the methods (in this case, compute). You can also pass in arguments in DriverManager; see the documentation for more detail.
I ended up going with extensions instead of drivers, as there were multiple statistical functions I had as plugins and I wanted to load all the entry points at once. The syntax is then
from stevedore import extension mgr = extension.ExtensionManager( namespace = 'gnocchi.statistics', invoke_on_load=True )
This loads all of the plugins in the namespace. In my case I wanted to make a dictionary of all the function names and the extension objects so I did:
configured_statistics = dict((x.name, x.obj) for x in mgr)
When a GET request to Gnocchi had a query for computing statistics on the data, the dict was consulted to see if there was a match with a configured statistics function name. If so, the extension object was called with the compute() method.
output = configured_statistics[user_query].compute(data)
The documentation shows an example using map() to call all the plugins. For the code below results would be a sequence of function names and the resulting data once the statistic is applied :
def compute_data(ext, data): return (ext.name, ext.obj.compute(data)) results = mgr.map(compute_data, data)
If you need the order to matter when loading the extension objects, you can use NamedExtensionManager.
That’s about it for my notes on stevedore – it’s a clean, well-designed module and I’m glad I got to learn about it. | http://amalagon.github.io/blog/2014/07/25/notes-on-stevedore/ | CC-MAIN-2016-44 | refinedweb | 707 | 55.24 |
mapps:
pillar_roots: base: - /srv/pillar
This example configuration declares that the base environment will be located
in the
/srv/pillar directory. It must not be in a subdirectory of the
state tree.
The top file used matches the name of the top file used for States, and has the same structure:
/srv/pillar/top.sls
base: '*': - packages
In the above top file, it is declared that in the
base environment, the
glob matching all minions will have the pillar data found in the
packages
pillar available to it. Assuming the
pillar_roots value of
/srv/pillar
taken from above, the
packages pillar would be located at
/srv/pillar/packages.sls..
Another example shows how to use other standard top matching types to deliver specific salt pillar data to minions with different properties.
Here is an example using the
grains matcher to target pillars to minions
by their
os grain:
dev: 'os:Debian': - match: grain - servers
/srv/pillar/packages.sls
{% if grains['os'] == 'RedHat' %} apache: httpd git: git {% elif grains['os'] == 'Debian' %} apache: apache2 git: git-core {% endif %} company: Foo Industries
Important
See Is Targeting using Grain Data Secure? for important security information.
The above pillar sets two key/value pairs. If a minion is running RedHat, then
the
apache key is set to
httpd and the
git key is set to the value
of
git. If the minion is running Debian, those values are changed to
apache2 and
git-core respectively. All minions that have this pillar
targeting to them via a top file will have the key of
company with a value
of
Foo Industries.
Consequently this data can be used from within modules, renderers, State SLS files, and more via the shared pillar dict:
apache: pkg.installed: - name: {{ pillar['apache'] }}
git: pkg.installed: - name: {{ pillar['git'] }}
Finally, the above states can utilize the values provided to them via Pillar. All pillar values targeted to a minion are available via the 'pillar' dictionary. As seen in the above example, Jinja substitution can then be utilized to access the keys and values in the Pillar dictionary.
Note that you cannot just list key/value-information in
top.sls. Instead,
target a minion to a pillar file and then list the keys and values in the
pillar. Here is an example top file that illustrates this point:
base: '*': - common_pillar
And the actual pillar file at '/srv/pillar/common_pillar.sls':
foo: bar boo: baz.
New in version 0.16.0.
Pillar SLS files may include other pillar files, similar to State files. Two syntaxes are available for this purpose. The simple form simply includes the additional pillar as if it were part of the same file:
include: - users
The full include form allows two additional options -- passing default values to the templating engine for the included pillar file as well as an optional key under which to nest the results of the included pillar:
include: - users: defaults: sudo: ['bob', 'paul'] key: users
With this form, the included file (users.sls) will be nested within the 'users' key of the compiled pillar. Additionally, the 'sudo' value will be available as a template variable to users.sls. a delimiter.
If a structure like this is in pillar:
foo: bar: baz: qux
Extracting it from the raw pillar in an sls formula or file template is done this way:
{{ pillar['foo']['bar']['baz'] }}
Now, with the new
pillar.get function the data
can be safely gathered and a default can be set, allowing the template to fall
back if the value is not available:
{{ salt['pillar.get']('foo:bar:baz', 'qux') }}
This makes handling nested structures much easier.
Note
pillar.get() vs
salt['pillar.get']()
It should be noted that within templating, the
pillar variable is just
a dictionary. This means that calling
pillar.get() inside of a
template will just use the default dictionary
.get() function which
does not include the extra
: delimiter functionality. It must be
called using the above syntax (
salt['pillar.get']('foo:bar:baz',
'qux')) to get the salt function, instead of the default dictionary
behavior.
Minion configuration options can be set on pillars. Any option that you want to modify, should be in the first level of the pillars, in the same way you set the options in the config file. For example, to configure the MySQL root password to be used by MySQL Salt execution module, set the following pillar variable:
mysql.pass: hardtoguesspassword
By default if there is an error rendering a pillar, the detailed error is hidden and replaced with:
Rendering SLS 'my.sls' failed. Please see master log for details.
The error is protected because it's possible to contain templating data which would give that minion information it shouldn't know, like a password!
To have the master provide the detailed error that could potentially carry
protected data set
pillar_safe_render_error to
False:
pillar_safe_render_error: False | https://docs.saltstack.com/en/latest/topics/pillar/index.html | CC-MAIN-2017-22 | refinedweb | 809 | 62.88 |
Object Oriented Programming (OOP) is the design process used in Java and many other programming languages. With OOP, programming is based on relationships and interactions between objects (classes). In OOP, classes define the properties and actions of an object and represents them as fields and methods. There are many other abilities used in OOP, but the concept of classes is the most basic and essential component of the subject.
Here is an example of a class Dog:
public class Dog { private Color c; private double weightLBS; private int age; private String name; public Dog() { ... } public static void bark() { ... } }
The benefits of this programming format are that with objects, once a library is built, every class’s information is kept and can be reused. OOP also is a much easier programming method for reusing information in both inheritance and class libraries. OOP is also easier to debug due to its modularity and easier to prevent errors with built in try-catch loops.
Lesson Quiz
1. Which is not a benefit of Object Oriented Programming?
Written by James Richardson
Notice any mistakes? Please email us at [email protected] so that we can fix any inaccuracies. | https://teamscode.com/learn/ap-computer-science/object-oriented-programming/ | CC-MAIN-2019-04 | refinedweb | 193 | 56.55 |
sh - executing system commands and applications in Python
sh is a wrapper around Python subprocess library. It offers a nice and easy to use API - it allows using programs or system commands like if they were a function.
The installation is typical: pip install sh. After installation you can import any system command or application from the sh module:
Arguments can also be passed as methods:
from sh import cd, ls, wesnoth cd('/var/log') print ls('./') print wesnoth('--version')
from sh import cd, ls cd('/var/log') print ls.nginx('./')
from sh import git git.checkout("master")
If you want to execute a command in background then use _bg=True. Data passed to standard output or error can be redirected to a file or function passed by using _out and _err arguments. You can also controll interactive commands.
RkBlog | https://rk.edu.pl/en/sh-executing-system-commands-and-applications-python/ | CC-MAIN-2021-31 | refinedweb | 140 | 56.66 |
using StackExchange.Redis;The connection to the Redis cache is managed by the
ConnectionMultiplexerclass instance. This instance is designed to be shared and reused throughout your client application, and does not need to be created on a per operation basis. To connect to an Azure Redis Cache and be returned an instance of a connected
ConnectionMultiplexer, call the static
Connectmethod and pass in the cache endpoint.
ConnectionMultiplexer connection = ConnectionMultiplexer.Connect("contoso5.redis.cache.windows.net,ssl=true,password=... ");Once the connection is established, return a reference to the Redis cache database, by calling the
ConnectionMultiplexer.GetDatabasemethod.
// connection refers to a previously configured ConnectionMultiplexer IDatabase cache = connection.GetDatabase();A Database in Redis is just a grouping of data and is typically helpful if the application needs to logically group related data. Items can be stored in and retrieved from a cache by using the
StringSetand
StringGetmethods.
// If key1 exists, it is overwritten. cache.StringSet("key1", "value1"); string value = cache.StringGet("key1");When calling
StringGet, if the object exists, it is returned, and if it does not, null is returned. In this case you can retrieve the value from the desired data source and store it in the cache for subsequent use.parameter of
StringSet.
cache.StringSet("key1", "value1", TimeSpan.FromMinutes(90));Azure Redis Cache can cache .NET objects as well as primitive data types, but before a .NET object can be cached it must be serialized. In StackExchange.Redis this is the responsibility of the application developer, as it gives the developer flexibility in the choice of serializer. For more information, see Work with .NET objects in the cache. We have now successfully connected to an Azure Redis Cache and performed a Set and Get operation against it. For an example of how to use Azure Redis Cache for storing ASP.NET Session State, please check out the Azure Redis Cache ASP.NET Session State Provider blog post. Microsoft Hosted Redis Cache Benefits Azure Redis Cache gives customers the ability to use a secure, dedicated Redis cache, managed by Microsoft. With this offer, you get to leverage the rich feature set and ecosystem provided by Redis, and reliable hosting and monitoring from Microsoft. Below, we will dig deeper into two key value adds on Microsoft Hosted Redis: - Turn-key replication support on Azure Redis Cache Standard SKUs. - Built in monitoring and alerting Replication The Redis cache engine supports master-slave replication, with very fast non-blocking first synchronization and auto-reconnection on net split. Azure Redis cache builds on top of this feature and offers replication on all standard tier caches. This helps greatly improve the availability of the cache; node failure recovery is fast, with zero (or minimal) data loss. Internally we provisions two nodes for each Standard SKU cache (master + slave) and use Redis replication to keep the two in sync. In addition, we maintain a heartbeat to the master, and as soon as we detect the master is down, we promote the slave to master. Typically, all the user of the cache notices is a small blip during which the cache will be unavailable, and then things are back to normal very quickly. Monitoring and Alerts Every Azure Redis cache has its key metrics monitored by default. In the initial Preview release we track Cache Hits, Cache Misses, Get/Set Commands, Evicted Keys, Expired Keys, Used Memory and Used CPU. Over the course of the next few months we will be adding to this set. The monitored data is displayed on the portal page for the cache instance, where the user can choose to view the data for the hour, day, week, or for a specific custom time range.
| https://azure.microsoft.com/es-es/blog/lap-around-azure-redis-cache-preview/ | CC-MAIN-2017-51 | refinedweb | 611 | 55.64 |
I am using the following code coming (with very little modifications) from this post. This script simply takes an array of hex color and writes a PNG image out of it. I am trying to adapt it to Py3 but something is going wrong.
import zlib, struct
def png_pack(png_tag, data):
chunk_head = png_tag + data
return (struct.pack("!I", len(data)) +
chunk_head +
struct.pack("!I", 0xFFFFFFFF & zlib.crc32(chunk_head)))
def write_png(buf, width, height):
# reverse the vertical line order and add null bytes at the start
width_byte_4 = width * 4
raw_data = b''.join(b'\x00' + buf[span:span + width_byte_4]
for span in range((height - 1) * width * 4, -1, - width_byte_4))
return b''.join([
b'\x89PNG\r\n\x1a\n',
png_pack(b'IHDR', struct.pack("!2I5B", width, height, 8, 6, 0, 0, 0)),
png_pack(b'IDAT', zlib.compress(raw_data, 9)),
png_pack(b'IEND', b'')])
def saveAsPNG(array, filename):
import struct
if any([len(row) != len(array[0]) for row in array]):
raise ValueError("Array should have elements of equal size")
#First row becomes top row of image.
flat = []
map(flat.extend, reversed(array))
#Big-endian, unsigned 32-byte integer.
buf = b''.join([struct.pack('>I', ((0xffFFff & i32)<<8)|(i32>>24) )
for i32 in flat]) #Rotate from ARGB to RGBA.
#print(type(buf))
data = write_png(buf, len(array[0]), len(array))
f = open(filename, 'wb')
f.write(data)
f.close()
saveAsPNG([[0xffFF0000, 0xffFFFF00],
[0xff00aa77, 0xff333333]], 'test.png')
It works perfectly with Python 2.7 and run without raising any error on Python 3. However the resulting image is empty... I cannot figure out what's the problem. I tried to substitute
map with
starmap but nothing changed. I have checked that
buf is
bytes instead of
string, and it is. I really don't know why it doesn't write the file correctly.
Any clue?
In Python2,
map returns a list. In Python3,
map returns a map object:
print(type(map(flat.extend, reversed(array)))) # <class 'map'>
The map object is an iterator. It does not call
flat.extend until it is iterated over. Since no variable is used to save the map object, it is discarded, and
flat remains an empty list.
So instead you need:
flat = [] for item in reversed(array): flat.extend(item)
or a list comprehension:
flat = [item for arr in reversed(array) for item in arr]
or itertools.chain.from_iterable:
import itertools as IT flat = IT.chain.from_iterable(reversed(array))
map should never be used for its side-effects. It should only be used for generating a list in Python2, or an iterator in Python3.
In Python2 using
map for its side-effects was a frowned-upon practice. In Python3, that policy is "enforced" to some degree by making
map an iterator. | http://www.dlxedu.com/askdetail/3/0b3db7ec89a0dffab30cf4dcddb70fc4.html | CC-MAIN-2019-09 | refinedweb | 452 | 61.12 |
Message-ID: <1712211218.9492.1422621504586.JavaMail.haus-conf@codehaus02.managed.contegix.com> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_9491_2062105272.1422621504586" ------=_Part_9491_2062105272.1422621504586 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
The debate is not over, scripting vs programming. In fact the de= finitions between these two still go blurry. In the context of Java, a prog= ram comprises of definition of a class and its structure. For example, the = simplest program may have just the class definition.=20
class ClassName { }=20
Obviously, this particular code doesn't do anything other than just defi= ning the class. A bare minimum useful program may contain a couple of more = lines of code in Java. Lets try to write a program that prints "hello = world"=20
class WorldGreeter { public static void Main(String args[]) { System.out.println ("hello world") ; } }=20
The same greeter program can be re-written using a static block or even = with the constructor. So, programs follow some strict guidelines in terms o= f structure.=20
On the other hand, scripts have a mixture of free-hand statements and se= mi-to-fully structured method elements. We will try to write the greeter pr= ogram in groovy.=20
println "hello world"=20
Thats it!!. No class definitions, no curly brackets, no strict syntax ch= ecks such as semicolons. Just a simple readable one line statement. All oth= er overheads have been "encapsulated". After all encapsulation is= one of the qualities of any Object Oriented Program (oops, Script), isn't = it?=20
Ok, lets not delve too much into the contradiction between script and pr= ogram, neither it is true 'encapsulation' or not. This article is to show t= he power of scripting using Groovy by walking you with a business example. = Enough has been said about the short-hand structure of groovy, how it shrin= ks coding elements into small fractions to achieve the same result as its c= ounterpart, Java. So, we will take a business problem and try to solve usin= g a groovy script. Think of groovy as a replacement to your perl scripting = or shell scripting, in this context. Groovy is not the competitor to Java, = it is just complementing it.=20
Feed files are general scenario in business applications. Applications r= eceive/fetch feed files from another application or location, to process an= d import into the application stream. There are many situations where we ne= ed to create a diff or delta between two successive feeds to make the feed = processing perform better. For our example, lets assume a feed with an XML = structure given below.=20
<WorkRequests> <WorkRequest id=3D'1'> <Client> <name>BigClient</name> <businessUnit>DotCom</businessUnit> </Client> <RequestType>Requested</RequestType> <RequestDate>2009-12-31</RequestDate> <ProductInfo>Create the plugin component for the upper deck</P= roductInfo> </WorkRequest> </WorkRequests>=20
Briefly on the structure, the external entities (customer applications) = send work requests to the enterprise application, through feed files. The a= pplication processes all the work request records by navigating the tree li= ke structure of different work request elements. Basically each request con= tains client information, request type, date, supporting product informatio= n and order processing information. The real functionality of the applicati= on is something out of scope for this example.=20
Our script needs exactly two input data to create the delta file, viz., = current feed file, last run feed file. The script has to validate and respo= nd to the caller based on the values of the input elements. The idea is to = build the script with three sections, first one to verify the input data, s= econd to define a reusable block of code that does the actual check and fin= d out the difference and finally, a small block that writes the output into= a new file. On the way we will handle some exception cases also.=20
Lets start building the script by validating the input data.=20
if (this.args.size() !=3D2 ) { println "this script expects exactly two arguments" // You can even print the usage here. return }=20
If the number of arguments is anything not equal to 2, then we will not = proceed. We can print the usage and exit too.=20
So, the arguments are string values of input file, file to compare and t= he output folder name.=20
def inputFile =3D new File(this.args[0]) def fileToCompare =3D new File(this.args[1]) // Some more validations. if (!inputFile.exists()) { // Nothing to process, return println "The input file doesn't exist. Process incomplete" return } if (!fileToCompare) { // invalid file for comparison println "File to compare doesn't exist. Process incomplete" return } // If we are here, then the input data is valid, we can start the main proc= essing.=20
Lets skip some of the sanity checks such as whether the input file is an= XML, is it following the XSD structure etc. Lets assume the well structure= d, required XML files, ready for processing. We have some special parser AP= I available in groovy for processing XML files as dotted notations. Lets lo= ad the input file using the API.=20
def todaysFile =3D new XmlSlurper().parse (inputFile)=20
With this single statement we have loaded the XML file into memory and a= ll the elements of the XML file are available as dotted notations and as XP= ATH query elements. All we are interested is to check whether each workrequ= est record in today's file is a new/modified record in comparison with the = "corresponding record" of the last run feed file. Here another as= sumption is that the id attribute of the work request is something like a p= rimary key and never changes.=20
As we loaded todaysFile into memory we will load the fileToCompare also.==20
def lastRunFile =3D new XmlSlurper().parse(fileToCompare)=20
Now we need to write a simple closure routine that fetches the WorkReque= st record from the last run file by searching the workrequest id.=20
def getLastRunWorkRequest =3D { wrID -> def lastRunWorkRequest =3D lastRunFile.WorkRequest.find(it. @id.text() = =3D=3D wrID) return lastRunWorkRequest }=20
The above routine searches the entire last run file, looking for a speci= fic work request using the xpath query. If it finds a matching record, then= the entire work request is returned. otherwise a null will be returned. No= w we need to write another small routine that verifies whether two elements= (of todays file and last run file) are changed.=20
def dataChanged =3D { todays, lastrun -> return todays.toString() !=3D lastrun.toString() }=20
Simple enough, isn't it?=20
With the two helper routines ready to fire, we need to loop through the = feed file to see which record is changed and which is not. The changed or t= he new records will have to be included in the delta file and the unchanged= records will be left out.=20
def deltaRequests =3D [] todaysFile.WorkRequest.each { def todays =3D it def lastrun =3D getLastRunWorkRequest (todays.@id.text()) if ( (!lastrun) || (dataChanged (todays, lastrun) ) ) { deltaRequests.add (todays) } }=20
Now we have collected all the records to be processed in the deltaReques= ts list. Only thing pending is to attach this list under workrequests eleme= nt of the XML and write the delta file. To write the XML output there is an= API by name groovy.xml.StreamingMarkupBuilder. We will use this to write t= he file.=20
def mb =3D new groovy.xml.StreamingMarkupBuilder() def doc =3D mb.bind { WorkRequests { mkp.yield deltaRequests } } def outputWriter =3D new FileWriter(new File(inputFile).name + '_Delta') outputWriter << doc=20
With very simple steps of processing, we have written a clean script tha= t processes two similar structured XML file and creates the delta file. Whe= n this script is plugged in into the application that processes redundant f= eeds on a daily process, we can significantly reduce the processing time by= just creating the delta file and making the delta file as the input to the= application. | http://docs.codehaus.org/exportword?pageId=136676114 | CC-MAIN-2015-06 | refinedweb | 1,340 | 64.91 |
[
]
Sebb commented on NET-499:
--------------------------
Yes.
But that was wrong, sorry.
setSendBufferSize(int) and setReceiveBufferSize(int) aren't the problem in 3.2.
The problem was that was overriding the socket buffer sizes if > 0.
The fix was supposed to be as you originally had it, i.e.;
I don't understand why that did not fix it for you.
Anyway, glad the snapshot works.
> FTP transfer to mainframe extremely slow
> ----------------------------------------
>
> Key: NET-499
> URL:
> Project: Commons Net
> Issue Type: Bug
> Components: FTP
> Affects Versions: 3.2
> Environment: Windows and OSX
> Reporter: Denis Molony
>
> FTPClient.storeFile() is incredibly slow. I have two example files, one FB (4MB) and
one in ravel VB (94K) format. Under 3.1 both files transfer in less than a second (FB:328ms,
VB:112ms). Under 3.2 the VB transfer takes 30,000ms, and the FB transfer takes too long to
find out (> 15 minutes).
> I have checked the FB file on the mainframe after cancelling the transfer and it is always
partly there. But the length varies, suggesting that it hasn't hit the same error each time.
> I have built two jar files, one with 3.1 and the other with 3.2. These jars are available.
The code is as follows:
> {code}
> public class FTPTransfer
> {
> public static void transfer (String name, FTPClient ftp, File file) throws IOException
> {
> FileInputStream fis = new FileInputStream (file);
> long start = System.currentTimeMillis ();
> if ( (name, fis))
> System.out.print ("File transferred");
> else
> System.out.print ("Transfer failed");
> System.out.printf (" in %d ms%n", (System.currentTimeMillis () - start));
> fis.close ();
> }
> public static void main (String[] args)
> {
> File file1 = null;
> File file2 = null;
> if (System.getProperty ("os.name").toLowerCase ().startsWith ("mac"))
> {
> file1 = new File ("/Users/Denis/comtest/DENIS-018.SRC"); // ravel file format
> file2 = new File ("/Users/Denis/comtest/MOLONYD.NCD"); // FB252 format
> }
> else
> {
> file1 = new File ("D:/comtest/DENIS-018.SRC"); // ravel file format
> file2 = new File ("D:/comtest/MOLONYD.NCD"); // FB252 format
> }
> FTPClient ftp = new FTPClient ();
> (new PrintCommandListener (new PrintWriter (System.out),
true));
> try
> {
> ("server");
> int reply = ();
> if (!FTPReply.isPositiveCompletion (reply))
> {
> ();
> System.err.println ("FTP server refused connection.");
> System.exit (1);
> }
> ("user", "pw");
> FTPFile[] files = ();
> System.out.printf ("%nListing contains %d files%n%n", files.length);
> ();
> ();
> transfer ("TEST.VB", ftp, file1);
> ();
> transfer ("TEST.FB", ftp, file2);
> ();
> }
> catch (IOException e)
> {
> e.printStackTrace ();
> }
> finally
> {
> if ( ())
> {
> try
> {
> ();
> }
> catch (IOException ioe)
> {
> }
> }
> }
> }
> }
> {code}
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: | http://mail-archives.apache.org/mod_mbox/commons-issues/201302.mbox/%3CJIRA.12630506.1359784488937.234714.1359803413761@arcas%3E | CC-MAIN-2014-49 | refinedweb | 414 | 62.44 |
[
]
Ning Zhang commented on HIVE-1488:
----------------------------------
Both MultiFileInputFormat and CombineFileInputFormat (on which CombineHiveInputFormat is based)
can combine multiple files into one split. In addition to the differences in locality, CFIF
also provide the interface to define pools and implement filters so that you can define which
files should be/should not be combined into one split. In CHIF, the logics of putting multiple
files in one directory (but not in different directories) in one split is implemented in CombineHiveInputFormat.CombineFilter.
It seems the MFIF support in hadoop 0.19 was added based on some external use cases in the
hive-user mailing list ().
I'm not sure whether anyone is still actively using it though.
> CombineHiveInputFormat for hadoop-19 is broken
> ----------------------------------------------
>
> Key: HIVE-1488
> URL:
> Project: Hadoop Hive
> Issue Type: Bug
> Components: Query Processor
> Reporter: Joydeep Sen Sarma
> Assignee: Ning Zhang
>
> I don't if anyone is using it. After making some recent testing related changes in HIVE-1408,
combine[12].q are no longer working when testing against 19. I have seen them fail earlier
as well and not investigated. Looking at the code, it seems pretty hokey:
> getInputPathsShim():
> Path[] newPaths = new Path[paths.length];
> // remove file:
> for (int pos = 0; pos < paths.length; pos++) {
> newPaths[pos] = new Path(paths[pos].toString().substring(5));
> }
> since we are no longer using 'file:' namespace for test warehouse, this is broke. But
this would be broken against any hdfs instance it would seem(?). Also not clear what we are
trying to do here.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online. | http://mail-archives.apache.org/mod_mbox/hive-dev/201007.mbox/%3C5910022.14231280193017570.JavaMail.jira@thor%3E | CC-MAIN-2018-13 | refinedweb | 272 | 57.37 |
If your application performs other types of tasks besides text processing, a skim of this module list can suggest where to look for relevant functionality. As well, readers who find themselves maintaining code written by other developers may find that unfamiliar modules are imported by the existing code. If an imported module is not summarized in the list below, nor documented elsewhere, it is probably an in-house or third-party module. For standard library modules, the summaries here will at least give you a sense of the general purpose of a given module.
Access to built-in functions, exceptions, and other objects. Python does a great job of exposing its own internals, but "normal" developers do not need to worry about this.
In object-oriented programming (OOP) languages like Python, compound data and structured data is frequently represented at runtime as native objects. At times these objects belong to basic datatypes?lists, tuples, and dictionaries?but more often, once you reach a certain degree of complexity, hierarchies of instances containing attributes become more likely.
For simple objects, especially sequences, serialization and storage is rather straightforward. For example, lists can easily be represented in delimited or fixed-length strings. Lists-of-lists can be saved in line-oriented files, each line containing delimited fields, or in rows of RDBMS tables. But once the dimension of nested sequences goes past two, and even more so for heterogeneous data structures, traditional table-oriented storage is a less-obvious fit.
While it is possible to create "object/relational adaptors" that write OOP instances to flat tables, that usually requires custom programming. A number of more general solutions exist, both in the Python standard library and in third-party tools. There are actually two separate issues involved in storing Python objects. The first issue is how to convert them into strings in the first place; the second issue is how to create a general persistence mechanism for such serialized objects. At a minimal level, of course, it is simple enough to store (and retrieve) a serialization string the same way you would any other string?to a file, a database, and so on. The various *dbm modules create a "dictionary on disk," while the shelve module automatically utilizes cPickle serialization to write arbitrary objects as values (keys are still strings).
Several third-party modules support object serialization with special features. If you need an XML dialect for your object representation, the modules gnosis.xml.pickle and xmlrpclib are useful. The YAML format is both human-readable/editable and has support libraries for Python, Perl, Ruby, and Java; using these various libraries, you can exchange objects between these several programming languages.
SEE ALSO: gnosis.xml.pickle 410; yaml 415; xmlrpclib 407;
A dbm-style database is a "dictionary on disk." Using a database of this sort allows you to store a set of key/val pairs to a file, or files, on the local filesystem, and to access and set them as if they were an in-memory dictionary. A dbm-style database, unlike a standard dictionary, always maps strings to strings. If you need to store other types of objects, you will need to convert them to strings (or use the shelve module as a wrapper).
Depending on your platform, and on which external libraries are installed, different dbm modules might be available. The performance characteristics of the various modules vary significantly. As well, some DBM modules support some special functionality. Most of the time, however, your best approach is to access the locally supported DBM module using the wrapper module anydbm. Calls to this module will select the best available DBM for the current environment without a programmer or user having to worry about the underlying support mechanism.
Functions and methods are documents using the nonspecific capitalized form DBM. In real usage, you would use the name of a specific module. Most of the time, you will get or set DBM values using standard named indexing; for example, db["key"]. A few methods characteristic of dictionaries are also supported, as well as a few methods special to DBM databases.
SEE ALSO: shelve 98; dict 24; UserDict 24;
Open the filename fname for dbm access. The optional argument flag specifies how the database is accessed. A value of r is for read-only access (on an existing dbm file); w opens an already existing file for read/write access; c will create a database or use an existing one, with read/write access; the option n will always create a new database, erasing the one named in fname if it already existed. The optional mode argument specifies the Unix mode of the file(s) created.
Close the database and flush any pending writes.
Return the first key/val pair in the DBM. The order is arbitrary but stable. You may use the DBM.first() method, combined with repeated calls to DBM.next(), to process every item in the dictionary.
In Python 2.2+, you can implement an items() function to emulate the behavior of the .items() method of dictionaries for DBMs:
>>> from __future__ import generators >>> def items(db): ... try: ... yield db.first() ... while 1: ... yield db.next() ... except KeyError: ... raise StopIteration ... >>> for k,v in items(d): # typical usage ... print k,v
Return a true value if the DBM has the key key.
Return a list of string keys in the DBM.
Return the last key/val pair in the DBM. The order is arbitrary but stable. You may use the DBM.last() method, combined with repeated calls to DBM.previous() , to process every item in the dictionary in reverse order.
Return the next key/val pair in the DBM. A pointer to the current position is always maintained, so the methods DBM.next() and DBM.previous() can be used to access relative items.
Return the previous key/val pair in the DBM. A pointer to the current position is always maintained, so the methods DBM.next() and DBM.previous() can be used to access relative items.
Force any pending data to be written to disk.
SEE ALSO: FILE.flush() 16;
Generic interface to underlying DBM support. Calls to this module use the functionality of the "best available" DBM module. If you open an existing database file, its type is guessed and used?assuming the current machine supports that style.
SEE ALSO: whichdb 93;
Interface to the Berkeley DB library.
Interface to the BSD DB library.
Interface to the Unix (n)dbm library.
Interface to slow, but portable pure Python DBM.
Interface to the GNU DBM (GDBM) library.
Guess which db package to use to open a db file. This module contains the single function whichdb.whichdb(). If you open an existing DBM file with anydbm, this function is called automatically behind the scenes.
SEE ALSO: shelve 98;
The module cPickle is a comparatively fast C implementation of the pure Python pickle module. The streams produced and read by cPickle and pickle are interchangeable. The only time you should prefer pickle is in the uncommon case where you wish to subclass the pickling base class; cPickle is many times faster to use. The class pickle.Pickler is not documented here.
The cPickle and pickle modules support a both binary and an ASCII format. Neither is designed for human readability, but it is not hugely difficult to read an ASCII pickle. Nonetheless, if readability is a goal, yaml or gnosis.xml.pickle are better choices. Binary format produces smaller pickles that are faster to write or load.
It is possible to fine-tune the pickling behavior of objects by defining the methods .__getstate__(), .__setstate__(), and .__getinitargs__(). The particular black magic invocations involved in defining these methods, however, are not addressed in this book and are rarely necessary for "normal" objects (i.e., those that represent data structures).
Use of the cPickle or pickle module is quite simple:
>>> import cPickle >>> from somewhere import my_complex_object >>> s = cPickle.dumps(my_complex_object) >>> new_obj = cPickle.loads(s)
Write a serialized form of the object o to the file-like object file. If the optional argument bin is given a true value, use binary format.
Return a serialized form of the object o as a string. If the optional argument bin is given a true value, use binary format.
Return an object that was serialized as the contents of the file-like object file.
Return an object that was serialized in the string s.
SEE ALSO: gnosis.xml.pickle 410; yaml 415;
Internal Python object serialization. For more general object serialization, use pickle, cPickle, or gnosis.xml.pickle, or the YAML tools at < marshal is a limited-purpose serialization to the pseudo-compiled byte-code format used by Python .pyc files.
The module pprint is similar to the built-in function repr() and the module repr. The purpose of pprint is to represent objects of basic datatypes in a more readable fashion, especially in cases where collection types nest inside each other. In simple cases pprint.pformat and repr() produce the same result; for more complex objects, pprint uses newlines and indentation to illustrate the structure of a collection. Where possible, the string representation produced by pprint functions can be used to re-create objects with the built-in eval() .
I find the module pprint somewhat limited in that it does not produce a particularly helpful representation of objects of custom types, which might themselves represent compound data. Instance attributes are very frequently used in a manner similar to dictionary keys. For example:
>>> import pprint >>> dct = {1.7:2.5, ('t','u','p'):['l','i','s','t']} >>> dct2 = {'this':'that', 'num':38, 'dct':dct} >>> class Container: pass ... >>> inst = Container() >>> inst.this, inst.num, inst.dct = 'that', 38, dct >>> pprint.pprint(dct2) {'dct': {('t', 'u', 'p'): ['l', 'i', 's', 't'], 1.7: 2.5}, 'num': 38, 'this': 'that'} >>> pprint.pprint(inst) <__main__.Container instance at 0x415770>
In the example, dct2 and inst have the same structure, and either might plausibly be chosen in an application as a data container. But the latter pprint representation only tells us the barest information about what an object is, not what data it contains. The mini-module below enhances pretty-printing:
from pprint import pformat import string, sys def pformat2(o): if hasattr(o,'__dict__'): lines = [] klass = o.__class__.__name__ module = o.__module__ desc = '<%s.%s instance at 0x%x>' % (module, klass, id(o)) lines.append(desc) for k,v in o.__dict__.items(): lines.append('instance.%s=%s' % (k, pformat(v))) return string.join(lines,'\n') else: return pprint.pformat(o) def pprint2(o, stream=sys.stdout): stream.write(pformat2(o)+'\n')
Continuing the session above, we get a more useful report:
>>> import pprint2 >>> pprint2.pprint2(inst) <__main__.Container instance at 0x415770> instance.this='that' instance.dct={('t', 'u', 'p'): ['l', 'i', 's', 't'], 1.7: 2.5} instance.num=38
Return a true value if the equality below holds:
o == eval(pprint.pformat(o))
Return a true value if the object o contains recursive containers. Objects that contain themselves at any nested level cannot be restored with eval().
Return a formatted string representation of the object o.
Print the formatted representation of the object o to the file-like object stream.
Return a pretty-printing object that will format using a width of width, will limit recursion to depth depth, and will indent each new level by indent spaces. The method pprint.PrettyPrinter.pprint() will write to the file-like object stream.
>>> pp = pprint.PrettyPrinter(width=30) >>> pp.pprint(dct2) {'dct': {1.7: 2.5, ('t', 'u', 'p'): ['l', 'i', 's', 't']}, 'num': 38, 'this': 'that'}
The class pprint.PrettyPrinter has the same methods as the module level functions. The only difference is that the stream used for pprint.PrettyPrinter.pprint() is configured when an instance is initialized rather than passed as an optional argument.
SEE ALSO: gnosis.xml.pickle 410; yaml 415;
The module repr contains code for customizing the string representation of objects. In its default behavior the function repr.repr() provides a length-limited string representation of objects?in the case of large collections, displaying the entire collection can be unwieldy, and unnecessary for merely distinguishing objects. For example:
>>> dct = dict([(n,str(n)) for n in range(6)]) >>> repr(dct) # much worse for, e.g., 1000 item dict "{0: '0', 1: '1', 2: '2', 3: '3', 4: '4', 5: '5'}" >>> from repr import repr >>> repr(dct) "{0: '0', 1: '1', 2: '2', 3: '3', ...}" >>> 'dct' "{0: '0', 1: '1', 2: '2', 3: '3', 4: '4', 5: '5'}"
The back-tick operator does not change behavior if the built-in repr() function is replaced.
You can change the behavior of the repr.repr() by modifying attributes of the instance object repr.aRepr.
>>> dct = dict([(n,str(n)) for n in range(6)]) >>> repr(dct) "{0: '0', 1: '1', 2: '2', 3: '3', 4: '4', 5: '5'}" >>> import repr >>> repr.repr(dct) "{0: '0', 1: '1', 2: '2', 3: '3', ...}" >>> repr.aRepr.maxdict = 5 >>> repr.repr(dct) "{0: '0', 1: '1', 2: '2', 3: '3', 4: '4', ...}"
In my opinion, the choice of the name for this module is unfortunate, since it is identical to that of the built-in function. You can avoid some of the collision by using the as form of importing, as in:
>>> import repr as _repr >>> from repr import repr as newrepr
For fine-tuned control of object representation, you may subclass the class repr.Repr. Potentially, you could use substitutable repr() functions to change the behavior of application output, but if you anticipate such a need, it is better practice to give a name that indicates this; for example, overridable_repr().
Base for customized object representations. The instance repr.aRepr automatically exists in the module namespace, so this class is useful primarily as a parent class. To change an attribute, it is simplest just to set it in an instance.
Depth of recursive objects to follow.
Number of items in a collection of the indicated type to include in the representation. Sequences default to 6, dicts to 4.
Number of digits of a long integer to stringify. Default is 40.
Length of string representation (e.g., s[:N]). Default is 30.
"Catch-all" maximum length of other representations.
Behaves like built-in repr(), but potentially with a different string representation created.
Represent an object of the type TYPE, where the names used are the standard type names. The argument level indicates the level of recursion when this method is called (you might want to decide what to print based on how deep within the representation the object is). The Python Library Reference gives the example:
class MyRepr(repr.Repr): def repr_file(self, obj, level): if obj.name in ['<stdin>', '<stdout>', '<stderr>']: return obj.name else: return 'obj' aRepr = MyRepr() print aRepr.repr(sys.stdin) # prints '<stdin>'
The module shelve builds on the capabilities of the DBM modules, but takes things a step forward. Unlike with the DBM modules, you may write arbitrary Python objects as values in a shelve database. The keys in shelve databases, however, must still be strings.
The methods of shelve databases are generally the same as those for their underlying DBMs. However, shelves do not have the .first(), .last(), .next(), or .previous () methods; nor do they have the .items () method that actual dictionaries do. Most of the time you will simply use name-indexed assignment and access. But from time to time, the available shelve.get(), shelve.keys(), shelve.sync(), shelve.has_key(), and shelve.close() methods are useful.
Usage of a shelve consists of a few simple steps like the ones below:
>>> import shelve >>> sh = shelve.open('test_shelve') >>> sh.keys() ['this'] >>> sh['new_key'] = {1:2, 3:4, ('t','u','p'):['l','i','s','t']} >>> sh.keys() ['this', 'new_key'] >>> sh['new_key'] {1: 2, 3: 4, ('t', 'u', 'p'): ['l', 'i', 's', 't']} >>> del sh['this'] >>> sh.keys() ['new_key'] >>> sh.close()
In the example, I opened an existing shelve, and the previously existing key/value pair was available. Deleting a key/value pair is the same as doing so from a standard dictionary. Opening a new shelve automatically creates the necessary file(s).
Although shelve only allows strings to be used as keys, in a pinch it is not difficult to generate strings that characterize other types of immutable objects. For the same reasons that you do not generally want to use mutable objects as dictionary keys, it is also a bad idea to use mutable objects as shelve keys. Using the built-in hash() method is a good way to generate strings?but keep in mind that this technique does not strictly guarantee uniqueness, so it is possible (but unlikely) to accidentally overwrite entries using this hack:
>>> '%x' % hash((1,2,3,4,5)) '866123f4' >>> '%x' % hash(3.1415) '6aad0902' >>> '%x' % hash(38) '26' >>> '%x' % hash('38') '92bb58e3'
Integers, notice, are their own hash, and strings of digits are common. Therefore, if you adopted this approach, you would want to hash strings as well, before using them as keys. There is no real problem with doing so, merely an extra indirection step that you need to remember to use consistently:
>>> sh['%x' % hash('another_key')] = 'another value' >>> sh.keys() ['new_key', '8f9ef0ca'] >>> sh['%x' % hash('another_key')] 'another value' >>> sh['another_key'] Traceback (most recent call last): File "<stdin>", line 1, in ? File "/sw/lib/python2.2/shelve.py", line 70, in __getitem__ f = StringIO(self.dict[key]) KeyError: another_key
If you want to go beyond the capabilities of shelve in several ways, you might want to investigate the third-party library Zope Object Database (ZODB). ZODB allows arbitrary objects to be persistent, not only dictionary-like objects. Moreover, ZODB lets you store data in ways other than in local files, and also has adaptors for multiuser simultaneous access. Look for details at:
<
SEE ALSO: DBM 90; dict 24;
The rest of the listed modules are comparatively unlikely to be needed in text processing applications. Some modules are specific to a particular platform; if so, this is indicated parenthetically. Recent distributions of Python have taken a "batteries included" approach?much more is included in a base Python distribution than is with other free programming languages (but other popular languages still have a range of existing libraries that can be downloaded separately).
Access to the Windows registry (Windows).
AppleEvents (Macintosh; replaced by Carbon.AE).
Conversion between Python variables and AppleEvent data containers (Macintosh).
AppleEvent objects (Macintosh).
Rudimentary decoder for AppleSingle format files (Macintosh).
Build MacOS applets (Macintosh).
Print calendars, much like the Unix cal utility. A variety of functions allow you to print or stringify calendars for various time frames. For example,
>>> print calendar.month(2002,11) November 2002 Mo Tu We Th Fr Sa Su 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30
Interfaces to Carbon API (Macintosh).
CD-ROM access on SGI systems (IRIX).
Code Fragment Resource module (Macintosh).
Interface to the standard color selection dialog (Macintosh).
Interface to the Communications Tool Box (Macintosh).
Call C functions in shared objects (Unix).
Basic Macintosh dialogs (Macintosh).
Access to Unix fcntl() and iocntl() system functions (Unix).
AppleEvents interface to MacOS finder (Macintosh).
Functions and constants for working with the FORMS library (IRIX).
Functions and constants for working with the Font Manager library (IRIX).
Floating point exception control (Unix).
Structured development of MacOS applications (Macintosh).
The module gettext eases the development of multilingual applications. While actual translations must be performed manually, this module aids in identifying strings for translation and runtime substitutions of language-specific strings.
Information on Unix groups (Unix).
Control the language and regional settings for an application. The locale setting affects the behavior of several functions, such as time.strftime() and string.lower(). The locale module is also useful for creating strings such as number with grouped digits and currency strings for specific nations.
Macintosh implementation of os module functionality. It is generally better to use os directly and let it call mac where needed (Macintosh).
Filesystem services (Macintosh).
Access to MacOS Python interpreter (Macintosh).
Locate script resources (Macintosh).
Interface to Speech Manager (Macintosh).
Easy access serial to line connections (Macintosh).
Create CodeWarrior projects (Macintosh).
Miscellaneous Windows-specific functions provided in Microsoft's Visual C++ Runtime libraries (Windows).
Interface to Navigation Services (Macintosh).
Access to Sun's NIS Yellow Pages (Unix).
Manage pipes at a finer level than done by os.popen() and its relatives. Reliability varies between platforms (Unix).
Wrap PixMap objects (Macintosh).
Access to operating system functionality under Unix. The os module provides more portable version of the same functionality and should be used instead (Unix).
Application preferences manager (Macintosh).
Pseudo terminal utilities (IRIX, Linux).
Access to Unix password database (Unix).
Preferences manager for Python (Macintosh).
Helper to create PYC resources for compiled applications (Macintosh).
Buffered, nonvisible STDOUT output (Macintosh).
Examine resource usage (Unix).
Interface to Unix syslog library (Unix).
POSIX tty control (Unix).
Widgets for the Mac (Macintosh).
Interface to the WorldScript-Aware Styled Text Engine (Macintosh).
Interface to audio hardware under Windows (Windows).
Implements (a subset of) Sun eXternal Data Representation (XDR). In concept, xdrlib is similar to the struct module, but the format is less widely used.
Read and write AIFC and AIFF audio files. The interface to aifc is the same as for the sunau and wave modules.
Audio functions for SGI (IRIX).
Manipulate raw audio data.
Read chunks of IFF audio data.
Convert between RGB color model and YIQ, HLS, and HSV color spaces.
Functions and constants for working with Silicon Graphics' Graphics Library (IRIX).
Manipulate image data stored as Python strings. For most operations on image files, the third-party Python Imaging Library (usually called "PIL"; see < is a versatile and powerful tool.
Support for imglib files (IRIX).
Read and write JPEG files on SGI (IRIX). The Python Imaging Library (< provides a cross-platform means of working with a large number of image formats and is preferable for most purposes.
Read and write SGI RGB files (IRIX).
Read and write Sun AU audio files. The interface to sunau is the same as for the aifc and wave modules.
Interface to Sun audio hardware (SunOS/Solaris).
Read QuickTime movies frame by frame (Macintosh).
Read and write WAV audio files. The interface to wave is the same as for the aifc and sunau modules.
Typed arrays of numeric values. More efficient than standard Python lists, where applicable.
Exit handlers. Same functionality as sys.exitfunc, but different interface.
HTTP server classes. BaseHTTPServer should usually be treated as an abstract class. The other modules provide sufficient customization for usage in the specific context indicated by their names. All may be customized for your application's needs.
Restricted object access. Used in conjunction with rexec.
List insertion maintaining sort order.
Mathematical functions over complex numbers.
Build line-oriented command interpreters.
Utilities to emulate Python's interactive interpreter.
Compile possibly incomplete Python source code.
Module/script to compile .py files to cached byte-code files.
Analyze Python source code and generate Python byte-codes.
Helper to provide extensibility for pickle/cPickle.
Full-screen terminal handling with the (n)curses library.
Cached directory listing. This module enhances the functionality of os.listdir().
Disassembler of Python byte-code into mnemonics.
Build and install Python modules and packages. distutils provides a standard mechanism for creating distribution packages of Python tools and libraries, and also for installing them on target machines. Although distutils is likely to be useful for text processing applications that are distributed to users, a discussion of the details of working with distutils is outside the scope of this book. Useful information can be found in the Python standard documentation, especially Greg Ward's Distributing Python Modules and Installing Python Modules.
Check the accuracy of _doc_ strings.
Standard errno system symbols.
General floating point formatting functions. Duplicates string interpolation functionality.
Control Python's (optional) cyclic garbage collection.
Utilities to collect a password without echoing to screen.
Access the internals of the import statement.
Get useful information from live Python objects for Python 2.1+.
Check whether string is a Python keyword.
Various trigonometric and algebraic functions and constants. These functions generally operate on floating point numbers?use cmath for calculations on complex numbers.
Work with mutual exclusion locks, typically for threaded applications.
Create special Python objects in customizable ways. For example, Python hackers can create a module object without using a file of the same name or create an instance while bypassing the normal .__init__() call. "Normal" techniques generally suffice for text processing applications.
A Python debugger.
Functions to spawn commands with pipes to STDIN, STDOUT, and optionally STDERR. In Python 2.0+, this functionality is copied to the os module in slightly improved form. Generally you should use the os module (unless you are running Python 1.52 or earlier).
Profile the performance characteristics of Python code. If speed becomes an issue in your application, your first step in solving any problem issues should be profiling the code. But details of using profile are outside the scope of this book. Moreover, it is usually a bad idea to assume speed is a problem until it is actually found to be so.
Print reports on profiled Python code.
Python class browser; useful for implementing code development environments for editing Python.
Extremely useful script and module for examining Python documentation. pydoc is included with Python 2.1+, but is compatible with earlier versions if downloaded. pydoc can provide help similar to Unix man pages, help in the interactive shell, and also a Web browser interface to documentation. This tool is worth using frequently while developing Python applications, but its details are outside the scope of this book.
"Compile" a .py file to a .pyc (or .pyo) file.
A multiproducer, multiconsumer queue, especially for threaded programming.
Interface to GNU readline (Unix).
Restricted execution facilities.
General event scheduler.
Handlers for asynchronous events.
Customizable startup module that can be modified to change the behavior of the local Python installation.
Maintain a cache of os.stat() information on files. Deprecated in Python 2.2+.
Constants for interpreting the results of os.statvfs() and os.fstatvfs().
Create multithreaded applications with Python. Although text processing applications?like other applications?might use a threaded approach, this topic is outside the scope of this book. Most, but not all, Python platforms support threaded applications.
Python interface to TCL/TK and higher-level widgets for TK. Supported on many platforms, but not on all Python installations.
Extract, format, and print information about Python stack traces. Useful for debugging applications.
Unit testing framework. Like a number of other documenting, testing, and debugging modules, unittest is a useful facility?and its usage is recommended for Python applications in general. But this module is not specific enough to text processing applications to be addressed in this book.
Python 2.1 added a set of warning messages for conditions a user should be aware of, but that fall below the threshold for raising exceptions. By default, such messages are printed to STDERR, but the warning module can be used to modify the behavior of warning messages.
Create references to objects that do not limit garbage collection. At first brush, weak references seem strange, and the strangeness does not really go away quickly. If you do not know why you would want to use these, do not worry about it?you do not need to.
Wichmann-Hill random number generator. Deprecated since Python 2.1, and not necessary to use directly before that?use the module random to create pseudorandom values. | https://etutorials.org/Programming/Python.+Text+processing/Chapter+1.+Python+Basics/1.3+Other+Modules+in+the+Standard+Library/ | CC-MAIN-2022-21 | refinedweb | 4,591 | 60.31 |
Racing Datalogger With an Arduino
Introduction: Racing Datalogger With an Arduino
This is an old project of mine that I got asked a couple of times during a trackday and figured I'd post it up for people interested. There are some current limitations such as data resolution and data syncing from different inputs, but it's a good way to get your feet wet into data logging. There are many ways to do this, the most popular one is dumping data from the OBD port. This can all be done nowadays with an OBD bluetooth module and a smartphone since it has GPS. But, my motorcycle doesn't have OBD and OBD is missing a key input which is the brakes. So we'll go straight to the source and do it manually. The 3 main sensor inputs that I believe are the most important to look at for a beginner are 1: Throttle Position, 2: Brakes and 3: Position (GPS). These are where the most improvements can be made to improve your laptime. There's a big problem with the brake sensor though, most cars/motorcycles don't come with a pressure transducer sensor in the brake line, but we can tap into the brake light circuit. The downside to this is that the brake light only has a one bit resolution (ie. digital signal), its either on or off. You won't be able to tell from the data how hard you're pressing the brakes, only when you started to press it or when you let go. Other then that, let's get started!*
*You must be knowledgeable with the electronic system of the car/motorcycle you are trying to data log! Unfortunately, I cannot help you in this area since it varies from car to car / motorcycle to motorcycle. However, I will try my best to describe it to you and point you in the right direction.
Step 1: Find Brake Light Wire and Throttle Position Sensor
Before we get started you have to be able to find these two points on your car/motorcycle otherwise we cannot continue any further. You're going to need a multimeter and whatever tools you need to get to these parts, hopefully just screwdriver and ratchet set. The Factory Service Manual for your car/motorcycle definitely helps also!
Brake Light:
This is the easiest of the two so lets get started on that. Just find the plug that goes directly in your rear brake light. There should only be 3 wires. One is ground, one is for illumination for night time, and 3rd one is the actual brake light. Press and hold down the brake somehow (either have someone else do it, a brick, etc.), set your multimeter to VDC and start probing with reference to chassis ground. The one with +12V with the brake pressed will be the one you're interested in. Mark it down and we'll get back to it later.
Throttle Position Sensor (TPS):
This one I would recommend looking at your Factory Service Manual to see how it operates and where its located. In general it will be by your Intake Manifold on the Throttle Body (obviously). If you your Throttle Body is cable driven, you should be able to find it fairly easy by pressing the gas / pulling the throttle and tracing the cable. The sensor nearest to cable lever on the throttle body should the TPS and should have 3 wires. One is the reference/supply voltage, one is ground, and the other throttle position output. The last one is the one you are interested in. With the Key On/Engine Off (KO/EF) probe the 3 wires with reference to chassis ground. The reference will be the highest voltage of the three, Ground should be 0, and the throttle position output will be the one in the middle, typically very low <1V. You can confirm this with the KO/EF, and have someone press the gas / pull the throttle and you'll see the voltage change. Still with the KO/EF, record the idle voltage, and then with the gas pressed / throttle pulled all the way down record it again. These will be your min and max throttle values. Mark this sensor down and we'll get back to this later when we start to wire things up.
Step 2: Bill of Materials
- 1 x Arduino Uno
- 1 x USB Cable for Arduino
- 1 x iTead Studio GPS Shield (NEO-6M model)
- 1 x MicroSD Card (1GB is fine)
- 1 x Protoshield
- 1 x Optocoupler (PC817)
- 1 x USB Power Port Socket for Motorcycles
- 1 x 750 Ohm Resistor
- 1 x 7.5 kOhm Resistor
- 1 x 150 Ohm Resistor
- 2 x Push Button Switches
- Wires
Tools:
- Multimeter
- Soldering Iron
- Electrical Crimpers
- Wire Stripper
- Screwdriver
- Ratchet set
You can ignore the rest below if you just want to just buy the parts and have a working setup without any headache, but I'll list some ways you can save money and buy generic parts. The iTead Studio GPS Shield is just a NEO-6M module, an SD Card module, a u.FL to SMA adapter, and a GPS SMA Antenna. The USB Power Port Socket for Motorcycles is just some terminal connectors to the battery, an inline fuse holder, and USB car charger. By ordering the parts individually from China you can probably cut the cost in half for this project, but YMMV depending on your technical skills so just giving you a heads up.
Step 3: Code
Before flashing the code to the Arduino Uno find this function:
void getSensorData() { // 0% Throttle reads approx 0.66 V = 135, 100% Throttle reads 3.87 V = 793 tpsValue = map(analogRead(TPS_PIN), 135, 793, 0 , 100); brakeValue = map(digitalRead(BRAKE_PIN), 0, 1, 0, 100); }
Divide your idle voltage value you recorded from the TPS by 1023 and replace 135 in the code with the value calculated. Now do same with maximum voltage and replace the value 793 with it.
After that's complete, compile and flash the code above to your Arduino Uno, make sure nothing else is connected. Once successfully go ahead and disconnect for now, I will explain the code later.
Step 4: Schematic & Wiring It Up
As you can see the schematic is fairly straightforward.
- The USB Power Adapter for the Motorcycle hooks up directly to the battery on your car/motorcycle. Internally it is fused to protect the circuit (not drawn in schematic). From the adapter you can now connect it to your Arduino via USB, but don't do it yet.
- Stack the iTead Studio GPS Shield on top of your Arduino. Make sure you toggle the switch to the correct operating voltage, chances are it is 5V as shown in schematic. Set Digital Pin 0(D0) as TX and Digital Pin 1(D1) as RX with the jumpers that came with the kit. See picture of shield for confirmation. This is an easy trap for beginners because on the Arduino, D0 is RX and D1 is TX. This just means that the iTead Studio Shield is Transmitting data Out of D0 and the Arduino is going to Receive it on D0, and the reverse for D1. Now, put MicroSD card into slot and make sure it is formatted to either FAT or FAT32.
- From here on out, figure out where you want to place your Arduino along with where you want to place your pushbutton switches, LED, and GPS antenna so you can plan your wire lengths accordingly. The pushbutton switches should be easy to access, LED should be easily visible, and GPS antenna exposed so it has better reception.
- Once you figured that all out we can get to the fun stuff. Tap into the TPS wire you marked earlier, you can either use a T-Tap Connector or cut and resolder, I'll leave that up to you. Do the same for the Brake Light wire.
- Stack the Protoshield on top of the iTead Studio GPS Shield and wire up the optocoupler and resistors, see pictures/schematics/documentation for reference.
- Now hook up the reset push button switch to the reset pin(RST) from the Arduino and Ground.
- Do the same for the stop push button switch but to Digital Pin 6(D6) and Ground.
- Then, hook up the resistor and LED to Digital Pin 4 (D4) and Ground.
- Lastly, screw in the GPS antenna.
Step 5: Plug It In!
Plug in the USB cable from the USB Power Adapter to the Arduino. If everything is wired up correctly and there is no magic smoke you should see the LED go on. This means that the GPS currently DOES NOT have a lock. Depending on environmental conditions and antenna placement it can take a couple minutes. When it goes off this baby is logging! Press the stop switch and you'll see the LED flash three times then pause. Remove the MicroSD and plug it into your computer. There should be a file called "LOGGER01.CSV". Open it up and you should see something like this:
MM/DD/YYYY,HH:MM:SS.CC,Latitude,Longitude,MPH,TPS,Brake 04/06/2016,20:00:54.50,XXXXXXXX,YYYYYYYYY,0,1,0 04/06/2016,20:00:54.60,XXXXXXXX,YYYYYYYYY,0,0,0 04/06/2016,20:00:54.70,XXXXXXXX,YYYYYYYYY,0,3,0
The "XXXXXXXX" and "YYYYYYYY" are you GPS coordinates in radians×10^6. So multiply each one by 10^-6 to get the correct GPS value and plug it into Google Maps to verify it is correct coordinate. If it is take your baby out for a spin and do some data logging!
Step 6: I Got Some Data Now What?
This is the actually the hard part, what to actually do with the data. Easiest is plotting graphs with something like Excel and checking out your trends, seeing how big the time gaps are from transitioning from brake to throttle, see how consistent you are with the throttle, etc. Don't let the data fool you though. If you take a look at the sample data I posted you can see a lot of spikes, you would think that's noise, but is it? Nah, well depends on what you're trying to look for, but that's actually me blipping the the throttle while I'm downshifting. :) If you add a sensor input for the clutch and neutral switch you can weed that noise out, but other then that you'll have to learn how to analyze the data.
You can also use something like RaceRender and see your time, position, and sensor data all at once. You can even sync up a camera with it!
There's a big science behind data acquisition and data analysis and the options are endless!
Step 7: Explaining the Code
The code procedure is pretty simple. Initialize the pins, initialize the serial port for GPS, initialize the SD card, and then create a file. Once that's complete if GPS has a lock it will enter it's loop cycle and read data from the TPS sensor, brake light, and GPS values and then write that to the SD card and loop again. Pretty simple right? There's a couple gotchas along the way, particularly with the GPS and compiler. I'll describe it below.
// Can comment out this statement if it compiles. Bug on certain Arduino IDE versions. // Read here for more info: <a href="" rel="nofollow">...</a> #if 1 __asm volatile("nop"); #endif
This one's a bit of a weird one. If you can comment this out and it compiles then that's good. You don't have to worry about it. Chances are you can comment this out since it's been fixed by newer Arduino versions, I believe. You can go to the link in the comment if you want to read about the problem.
// Debug Condition #define DUEMILANOVE true // Set to true if using Duemilanove #define DEBUG false // Set to true if want to debug
This one is for determining what type of Arduino you're using and if you plan on using the serial port for debug. The reason why is due to the serial port being occupied by the GPS during normal operation for the Duemilanove (old Arduino I know, like I said I did this project awhile back :)). The same thing occurs for the Uno since there's only one serial port which is why we left it the way it was here. If you plan on using the serial port to debug, set DEBUG to true and move the GPS RX and TX pins to Digital Pins 3, 2 respectively. If you're using the Mega which has 4 serial ports set DUEMILANOVE to false and set the GPS RX and TX pins to Digital Pins 19, and 20 respectively.
#define SERIAL_DEBUG if(DEBUG)Serial // If DEBUG is set to true then all SERIAL_DEBUG statements will work #if DUEMILANOVE #if DEBUG //If debugging on Duemilanove set GPS to 3,2 and use SoftwareSerial #include <SoftwareSerial.h> #define RX_GPS_PIN 3 #define TX_GPS_PIN 2 SoftwareSerial SERIAL_GPS(3,2); #else //If not debugging on Duemilanove put GPS pins on 0,1 #define RX_GPS_PIN 0 #define TX_GPS_PIN 1 #define SERIAL_GPS Serial #endif #else // If using Mega 2560 set GPS pins to Serial1 #define RX_GPS_PIN 19 #define TX_GPS_PIN 20 #define SERIAL_GPS Serial1 #endif
This is the logic for the Debug condition. You typically don't have to touch this, but as you can see if we're debugging on the single serial Arduino's you must use the SoftwareSerial library for the GPS, which actually slows down your data resolution significantly.
// Initialize GPS and Debugging // iTeadStudio GPS Shield set at 38400 bps @ 1 Hz by default (on my unit) // GPS Shield uses NEO-6M GPS Module which can be configured by downloading uBlox uCenter on computer. // Go to uBlox uCenter > Edit > Messages > NMEA > PUBX to change Baud Rate and get Hex. // Go to uBlox uCenter > Edit > Messages > UBX > CFG > Rate to change Measurement Period and get Hex. // Best results at 38400 bps @ 5 Hz on Arduino Hardware Serial. // If using SoftwareSerial MUST set GPS Module to 4800 bps @ 1 Hz. void setSerial() { char baudRate[] = {0x24, 0x50, 0x55, 0x42, 0x58, 0x2C, 0x34, 0x31, 0x2C, 0x31, 0x2C, 0x30, 0x30, 0x30, 0x37, 0x2C, 0x30, 0x30, 0x30, 0x33, 0x2C, 0x34, 0x38, 0x30, 0x30, 0x2C, 0x30, 0x2A, 0x31, 0x33, 0x0D, 0x0A}; // Hex to change to 4800 bps char fiveHz[] = {0xB5, 0x62, 0x06, 0x08, 0x06, 0x00, 0xC8, 0x00, 0x01, 0x00, 0x01, 0x00, 0xDE, 0x6A}; //Hex to change to 5Hz char tenHz[] = {0xB5, 0x62, 0x06, 0x08, 0x06, 0x00, 0x64, 0x00, 0x01, 0x00, 0x01, 0x00, 0x7A, 0x12}; // Hex to change to 10Hz #if DUEMILANOVE #if DEBUG SERIAL_DEBUG.begin(115200); SERIAL_GPS.begin(38400); delay(1000); SERIAL_GPS.write(baudRate,sizeof(baudRate)); SERIAL_GPS.begin(4800); // Set SoftwareSerial to 4800 #else SERIAL_GPS.begin(38400); delay(5000); SERIAL_GPS.write(tenHz,sizeof(tenHz)); SERIAL_GPS.flush(); #endif #else SERIAL_DEBUG.begin(115200); SERIAL_GPS.begin(38400); delay(5000); SERIAL_GPS.write(tenHz,sizeof(tenHz)); SERIAL_GPS.flush(); #endif }
This one sets the baudrate and measurement period for the GPS. The faster the better for us. The NEO-6M supports a measurement period up to 5Hz, but I've managed to get 10Hz and it works, but I think it might be interpolating the data. I haven't read the data sheet thoroughly about this though. Anyways, 10Hz translates one sample every 0.1 seconds. This is actually not that good when it comes to racing, however the fastest GPS units you can get off the shelf I believe are 20Hz. So regardless, you'll need to do some interpolating since the Arduino can sample a lot faster then that. I didn't however in this code, but it'll be a good add on for you!
If you plan on using debug mode on a single serial port Arduino (Duemilanove, Uno, Pro Mini, Nano, etc), you will notice that you will have to set the baudrate to 4800 bps and the measurement period to 1Hz. Moral of story? Use a Mega for developing.
You may be asking, where the hell did you get those Hex codes? Good question. A lot of Googling led me to the uBlox center program, documentation, and learning how to talk to it.
Everything else, I commented as much as I could so hopefully its self explanatory.
Step 8: Logging Limitations
The two biggest limitations with this current setup is data resolution and data syncing. I didn't go further into this, because I just felt like doing it for a proof of concept during the time.
Data Resolution
The current max data resolution is one sample for every 0.1 seconds. This can be further improved easily because the way the code is set up, it's limited by the measurement period of the GPS. A better way to write the code would be to log data regardless if we received a new GPS coordinate. Once the data is dumped, the data rows without GPS can be post processed and interpolated. Now the sampling rate will be limited to the Arduino and SD Card write speeds which should be more than fast enough for racing applications.
Data Syncing
An interesting one I noticed is the delay for the GPS. Mine was off by 1.5 seconds based off my throttle position. I noticed this because in the data I released the throttle and my MPH from the GPS kept on increasing. That doesn't make any sense! I just shifted my data accordingly and it matched up, but you may want to take a good look at the TinyGPS library and look at the fix_age parameter for further data evaluation.
Another one to consider is the delay between each procedure. Remember the Arduino is not capable of hardware threading so the time it takes to read one input, and then the next, and then actually write to the SD card does not all happen simultaneously. You must take this into consideration if you plan to add more sensor inputs and if you plan on having interrupt counters. If you plan on multi-threading you can have multiple Arduinos set to do a simple task and use a vehicle bus (I2C, SPI, CAN) to have them communicate between each other. A good read on this that I found was here. I would also have a look into FPGAs which excel in this.
Step 9: Addons, Improvements, and TODOs
Want to add more or improve on this design? Some easy ones I thought off my head are:
RPM/Gear Indicator
This one is fairly easy. Tap into the crankshaft position sensor and wheel speed sensor. From the crankshaft position sensor you can get RPM and with both you can calculate your gear ratio and figure out what gear you're in. See here for how:
Brake Pressure Transducer
Add a pressure transducer to your brake line to convert your brake signal from digital to analog and get much more resolution. This will behave the same as the TPS input essentially. Note however this sensor is not cheap ($100+) unless your are able to scavenge one from a vehicle that comes with one and improper installation can lead to a loss of brakes! So be careful with this one.
Add Shunt Regulator to the TPS Input Pin
Add a shunt regulator right before the TPS input pin to prevent the Arduino from getting damaged. The electrical system of any car/motorcycle is very noisy. I noticed while testing under real life conditions there were times the analog input would spike significantly. Best to put a resistor and a 5.1V zener diode in parallel just in case.
Neutral/Clutch Switch
Tapping into this can make it easier to analyze and filter out data depending on what you're trying to look at. It will be the same as setup as the brake light.
Calibration Mode
Have a separate function that gets triggered by an interrupt switch which goes into calibration mode. This can set your max and min values for the throttle position.
Stop Switch
This is an easy one, move the stop switch to an interrupt pin and set it up as an interrupt function. I didn't do this at the time since the interrupt pins were occupied by the SoftwareSerial when I was debugging.
Anymore? Feel free to leave in comments.
Step 10: References
Want to get more into datalogging? Here's some references.
-
Loguino by Clusterfsck is an awesome Arduino data logging library
-
TinyGPS library by Mikal Hart is a must if you plan on using GPS with the Arduino
- The Competition Car Data Logging Manual by Graham Templeman if you can get your hands on a copy. Was around $30-40 when I got it.
Firstly, great project! This is pretty much what I've been looking for as I currently use a phone with a bluetooth GPS for track day data logging and have been looking around for options to add throttle and brake to that as well.
It seems simple enough and though I'm an absolute novice when it comes to circuitry, I'm seriously giving some thought to trying to build this.
An todo I could request some way to segregate the individual laps or sectors using a GPS point at the Start/Finish line and/or sectors. I guess this could be done in the spreadsheeting software with a reasonably basic lookup but just a thought.
Just dive in! That's how I started.
For your todo there's a couple ways you can do this:
1. RaceRender already does this for you.
2. Setup your own infrared beacon at the track. You will learn a lot about noise, signal degradation, and power requirements this way.
3. Tap into the racetracks transponder setup. Most likely will be MYLAPS setup which fall under Title 47, Part 15 (47 CFR 15) FCC Rules. At its core root, its just an RFID signal. But figuring out what spectrum it's in is a whole different animal but it is an awesome way if you want to get into ham radio.
Hopefully that'll give you some ideas. Have fun!
Thank you!
I probably won't have this ready for my first track day this year, on the 30th, but will aim to have it at least by mid season.
I was also going to wire the push buttons into the left hand controls as my track bike doesn't really have or need headlights or indicators.
very good, Motorcycle what you are wearing | http://www.instructables.com/id/Racing-Datalogger-With-an-Arduino/ | CC-MAIN-2017-34 | refinedweb | 3,787 | 70.53 |
21 March 2012 13:37 [Source: ICIS news]
HOUSTON (ICIS)--Sherwin-Williams has won a multi-year supply deal with property management and apartment rental firm Riverstone Residential Group, the ?xml:namespace>
Sherwin-Williams said that under the deal it would supply Riverstone with paint, coatings and flooring products, and it would be Riverstone’s exclusive marketing partner for paint between 2012 and 2014.
Dallas, Texas-based Riverstone Residential is one of the largest US providers of rental apartments with more than 170,000 apartments under management.
Financial or volume details were not disclosed.
Sherwin-Williams added that such longer-term supply agreements can help standardise purchases and stabilise costs. At the same time, Sherwin-Williams can create product specifications, customise on-site training and provide other services to customers, | http://www.icis.com/Articles/2012/03/21/9543768/sherwin-williams-to-supply-us-property-management-firm-riverstone.html | CC-MAIN-2015-06 | refinedweb | 130 | 51.68 |
I recently upgraded my OS to High Sierra, and since then I have not been able to use the sqlanydb library with my flask app. I was using SQLAnywhere16 and recently upgraded to SQLAnywhere17 after reading that it might solve problem, but it did not.
I start by sourcing sa_config.sh located at /Applications/SQLAnywhere17/System/bin64/sa_config.sh. Then I run my flask app. In the initialization of my app, I create a connection using sqlanydb.connect(), and I am met with the following traceback.
File "/usr/local/lib/python3.6/site-packages/lib/python/site-packages/sqlanydb.py", line 522, in connect
return Connection(args, kwargs)
File "/usr/local/lib/python3.6/site-packages/lib/python/site-packages/sqlanydb.py", line 538, in __init__
parent = Connection.cls_parent = Root("PYTHON")
File "/usr/local/lib/python3.6/site-packages/lib/python/site-packages/sqlanydb.py", line 464, in __init__
'libdbcapi_r.dylib')
File "/usr/local/lib/python3.6/site-packages/lib/python/site-packages/sqlanydb.py", line 456, in load_library
raise InterfaceError("Could not load dbcapi. Tried: " + ','.join(map(str, names)))
sqlanydb.InterfaceError: (u'Could not load dbcapi. Tried: None,dbcapi.dll,libdbcapi_r.so,libdbcapi_r.dylib', 0)
When I run the python3.6 interpreter, I am able to import the sqlanydb and connect to my database just fine, with no error. But when I try to connect inside my flask app, I get this error.
asked
21 Feb '18, 18:29
midnight_mil...
11●1●2●5
accept rate:
100%
Update.
When I open a python interpreter, or just run a plain python script, I added these lines to make sure that sourcing sa_config.sh was definitely adding the paths to my environment.
print(os.environ['PATH'])
print(os.environ['NODE_PATH'])
print(os.environ['DYLD_LIBRARY_PATH'])
Running those lines in the python interpreter or in a plain python script, it prints out the correct paths. However, when I added those three lines to just above the line where my flask app was breaking, it prints out the correct paths for the first two, but a key error is raised on $DYLD_LIBRARY_PATH. Apparently flask is unable to see the library path added by sa_config.sh. When I run
echo $DYLD_LIBRARY_PATH
echo $DYLD_LIBRARY_PATH
in a terminal, it prints the path just fine. So then is flask not using the right environment? Why is flask unable to see the updated library path?
I'm posting this as a solution because it at least solves my current problem. However, I think this is worth looking into more, and I'll be taking this problem to the Flask contributors.
It seems that when I run flask the following way, I get all of the errors that I have mentioned.
export FLASK_APP=modules
export FLASK_DEBUG=1
flask run
This seems to wipe out the added key: $DYLD_LIBRARY_PATH. However, when starting flask the following way, by calling app.run(), the key is not wiped out and sqlanydb is able to correctly locate the library files.
$DYLD_LIBRARY_PATH
from myapp import app
if __name__== '__main__':
app.run(debug=True)
If anyone on this forum has an explanation for what I'm seeing, I'm all ears. This is all very confusing. Again, everything worked just fine until I upgraded to High Sierra, so I suspect that must have some part to play.
answered
22 Feb '18, 14:48
edited
22 Feb '18, 14:49
High Sierra will "purge" DYLD_LIBRARY_PATH now for protected processes - see the docs from Apple.
answered
23 Feb '18, 08:52
Jeff Albion
10.7k●1●71●175
accept rate:
24%
edited
23 Feb '18, 10:51
Graeme Perrow
9.4k●3●77●121
Ah, I see. Thank you for pointing me in the right direction.
Once you sign in you will be able to subscribe for any updates here
Answers
Answers and Comments
Markdown Basics
learn more about Markdown
Question tags:
python ×18
dbcapi ×10
question asked: 21 Feb '18, 18:29
question was seen: 1,586 times
last updated: 23 Feb '18, 16:06
Error loading dbcapi while connecting to sqlanydb
Problem loading DBCAPI in macOS Sierra
External environment support in Python language
Fetching data from IQ with python (sqlanydb) as unicode
SELECT ABS(-1.5) returning a string result via C API?
Error: Code: -2004 Msg: Can't initialize DBCAPI
python, sql anywhere 12/16
Django 1.8.5 & Sql Anywhere 12.0.1 Developer Edition
Could not load dbcapi
python3 cannot import sqlanydb
SQL Anywhere Community Network
Forum problems?
Maintenance log
and OSQA
Disclaimer: Opinions expressed here are those of the poster and do not
necessarily reflect the views of the company.
First time here? Check out the FAQ! | https://sqlanywhere-forum.sap.com/questions/31773/after-upgrading-mac-os-to-high-sierra-using-python36-sqlanydb-in-flask-app-gives-error-could-not-load-dbcapi | CC-MAIN-2020-29 | refinedweb | 774 | 66.94 |
I'm trying yo execute the RastertoPolygon conversion tool in ArcGIs 10.1. I'm executing it through a for loop, then it resulted to an error (Unable to open feature class Failed to execute (RasterToPolygon)) at the python console. As I checked for some output, the first raster from the list is successfully converted to shp file while the rest are not. Any suggestions? Note: (All of my rasters are already in integer data type)
#import the module
import arcpy
from arcpy.sa import *
from arcpy import env
arcpy.CheckOutExtension("Spatial")
env.overwriteOutput = True
#set the workspace
arcpy.env.workspace = r"C:\Users\Windows\Documents\JO_GIS_Analyst"
#Get a list of rasters and convert to shapefile
for raster in arcpy.ListRasters("nofpt*", "TIF"):
print raster #check the presence of rasters"
#convert the raster to polygon
arcpy.RasterToPolygon_conversion(raster, raster +".shp", "SIMPLIFY")
print "Finish converting the rasters to polygon"
Ok, does it print the other raster names and just not produce anything? if it prints the other names, have you looked on disk to confirm whether they are or are not there. I am not sure about the wildcard you are using either, | https://community.esri.com/thread/106258-unable-to-open-feature-class-failed-to-execute-rastertopolygon | CC-MAIN-2018-43 | refinedweb | 192 | 63.9 |
Linux Security Modules (LSM) is a hook-based framework for implementing security policies and Mandatory Access Control in the Linux kernel. Until recently users looking to implement a security policy had just two options. Configure an existing LSM module such as AppArmor or SELinux, or write a custom kernel module.
Linux 5.7 introduced a third way: LSM extended Berkeley Packet Filters (eBPF) (LSM BPF for short). LSM BPF allows developers to write granular policies without configuration or loading a kernel module. LSM BPF programs are verified on load, and then executed when an LSM hook is reached in a call path.
Let’s solve a real-world problem
Modern operating systems provide facilities allowing "partitioning" of kernel resources. For example FreeBSD has "jails", Solaris has "zones". Linux is different - it provides a set of seemingly independent facilities each allowing isolation of a specific resource. These are called "namespaces" and have been growing in the kernel for years. They are the base of popular tools like Docker, lxc or firejail. Many of the namespaces are uncontroversial, like the UTS namespace which allows the host system to hide its hostname and time. Others are complex but straightforward - NET and NS (mount) namespaces are known to be hard to wrap your head around. Finally, there is this very special very curious USER namespace.
USER namespace is special, since it allows the owner to operate as "root" inside it. How it works is beyond the scope of this blog post, however, suffice to say it's a foundation to having tools like Docker to not operate as true root, and things like rootless containers.
Due to its nature, allowing unpriviledged users access to USER namespace always carried a great security risk. One such risk is privilege escalation.
Privilege escalation is a common attack surface for operating systems. One way users may gain privilege is by mapping their namespace to the root namespace via the unshare syscall and specifying the CLONE_NEWUSER flag. This tells unshare to create a new user namespace with full permissions, and maps the new user and group ID to the previous namespace. You can use the unshare(1) program to map root to our original namespace:
$ id uid=1000(fred) gid=1000(fred) groups=1000(fred) … $ unshare -rU # id uid=0(root) gid=0(root) groups=0(root),65534(nogroup) # cat /proc/self/uid_map 0 1000 1
In most cases using unshare is harmless, and is intended to run with lower privileges. However, this syscall has been known to be used to escalate privileges.
Syscalls clone and clone3 are worth looking into as they also have the ability to CLONE_NEWUSER. However, for this post we’re going to focus on unshare.
Debian solved this problem with this "add sysctl to disallow unprivileged CLONE_NEWUSER by default" patch, but it was not mainlined. Another similar patch "sysctl: allow CLONE_NEWUSER to be disabled" attempted to mainline, and was met with push back. A critique is the inability to toggle this feature for specific applications. In the article “Controlling access to user namespaces” the author wrote: “... the current patches do not appear to have an easy path into the mainline.” And as we can see, the patches were ultimately not included in the vanilla kernel.
Our solution - LSM BPF
Since upstreaming code that restricts USER namespace seem to not be an option, we decided to use LSM BPF to circumvent these issues. This requires no modifications to the kernel and allows us to express complex rules guarding the access.
Track down an appropriate hook candidate
First, let us track down the syscall we’re targeting. We can find the prototype in the include/linux/syscalls.h file. From there, it’s not as obvious to track down, but the line:
/* kernel/fork.c */
Gives us a clue of where to look next in kernel/fork.c. There a call to ksys_unshare() is made. Digging through that function, we find a call to unshare_userns(). This looks promising.
Up to this point, we’ve identified the syscall implementation, but the next question to ask is what hooks are available for us to use? Because we know from the man-pages that unshare is used to mutate tasks, we look at the task-based hooks in include/linux/lsm_hooks.h. Back in the function unshare_userns() we saw a call to prepare_creds(). This looks very familiar to the cred_prepare hook. To verify we have our match via prepare_creds(), we see a call to the security hook security_prepare_creds() which ultimately calls the hook:
… rc = call_int_hook(cred_prepare, 0, new, old, gfp); …
Without going much further down this rabbithole, we know this is a good hook to use because prepare_creds() is called right before create_user_ns() in unshare_userns() which is the operation we’re trying to block.
LSM BPF solution
We’re going to compile with the eBPF compile once-run everywhere (CO-RE) approach. This allows us to compile on one architecture and load on another. But we’re going to target x86_64 specifically. LSM BPF for ARM64 is still in development, and the following code will not run on that architecture. Watch the BPF mailing list to follow the progress.
This solution was tested on kernel versions >= 5.15 configured with the following:
BPF_EVENTS BPF_JIT BPF_JIT_ALWAYS_ON BPF_LSM BPF_SYSCALL BPF_UNPRIV_DEFAULT_OFF DEBUG_INFO_BTF DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT DYNAMIC_FTRACE FUNCTION_TRACER HAVE_DYNAMIC_FTRACE
A boot option
lsm=bpf may be necessary if
CONFIG_LSM does not contain “bpf” in the list.
Let’s start with our preamble:
deny_unshare.bpf.c:
#include <linux/bpf.h> #include <linux/capability.h> #include <linux/errno.h> #include <linux/sched.h> #include <linux/types.h> #include <bpf/bpf_tracing.h> #include <bpf/bpf_helpers.h> #include <bpf/bpf_core_read.h> #define X86_64_UNSHARE_SYSCALL 272 #define UNSHARE_SYSCALL X86_64_UNSHARE_SYSCALL
Next we set up our necessary structures for CO-RE relocation in the following way:
deny_unshare.bpf.c:
… typedef unsigned int gfp_t; struct pt_regs { long unsigned int di; long unsigned int orig_ax; } __attribute__((preserve_access_index)); typedef struct kernel_cap_struct { __u32 cap[_LINUX_CAPABILITY_U32S_3]; } __attribute__((preserve_access_index)) kernel_cap_t; struct cred { kernel_cap_t cap_effective; } __attribute__((preserve_access_index)); struct task_struct { unsigned int flags; const struct cred *cred; } __attribute__((preserve_access_index)); char LICENSE[] SEC("license") = "GPL"; …
We don’t need to fully-flesh out the structs; we just need the absolute minimum information a program needs to function. CO-RE will do whatever is necessary to perform the relocations for your kernel. This makes writing the LSM BPF programs easy!
deny_unshare.bpf.c:
SEC("lsm/cred_prepare") int BPF_PROG(handle_cred_prepare, struct cred *new, const struct cred *old, gfp_t gfp, int ret) { struct pt_regs *regs; struct task_struct *task; kernel_cap_t caps; int syscall; unsigned long flags; // If previous hooks already denied, go ahead and deny this one if (ret) { return ret; } task = bpf_get_current_task_btf(); regs = (struct pt_regs *) bpf_task_pt_regs(task); // In x86_64 orig_ax has the syscall interrupt stored here syscall = regs->orig_ax; caps = task->cred->cap_effective; // Only process UNSHARE syscall, ignore all others if (syscall != UNSHARE_SYSCALL) { return 0; } // PT_REGS_PARM1_CORE pulls the first parameter passed into the unshare syscall flags = PT_REGS_PARM1_CORE(regs); // Ignore any unshare that does not have CLONE_NEWUSER if (!(flags & CLONE_NEWUSER)) { return 0; } // Allow tasks with CAP_SYS_ADMIN to unshare (already root) if (caps.cap[CAP_TO_INDEX(CAP_SYS_ADMIN)] & CAP_TO_MASK(CAP_SYS_ADMIN)) { return 0; } return -EPERM; }
Creating the program is the first step, the second is loading and attaching the program to our desired hook. There are several ways to do this: Cilium ebpf project, Rust bindings, and several others on the ebpf.io project landscape page. We’re going to use native libbpf.
deny_unshare.c:
#include <bpf/libbpf.h> #include <unistd.h> #include "deny_unshare.skel.h" static int libbpf_print_fn(enum libbpf_print_level level, const char *format, va_list args) { return vfprintf(stderr, format, args); } int main(int argc, char *argv[]) { struct deny_unshare_bpf *skel; int err; libbpf_set_strict_mode(LIBBPF_STRICT_ALL); libbpf_set_print(libbpf_print_fn); // Loads and verifies the BPF program skel = deny_unshare_bpf__open_and_load(); if (!skel) { fprintf(stderr, "failed to load and verify BPF skeleton\n"); goto cleanup; } // Attaches the loaded BPF program to the LSM hook err = deny_unshare_bpf__attach(skel); if (err) { fprintf(stderr, "failed to attach BPF skeleton\n"); goto cleanup; } printf("LSM loaded! ctrl+c to exit.\n"); // The BPF link is not pinned, therefore exiting will remove program for (;;) { fprintf(stderr, "."); sleep(1); } cleanup: deny_unshare_bpf__destroy(skel); return err; }
Lastly, to compile, we use the following Makefile:
Makefile:
CLANG ?= clang-13 LLVM_STRIP ?= llvm-strip-13 ARCH := x86 INCLUDES := -I/usr/include -I/usr/include/x86_64-linux-gnu LIBS_DIR := -L/usr/lib/lib64 -L/usr/lib/x86_64-linux-gnu LIBS := -lbpf -lelf .PHONY: all clean run all: deny_unshare.skel.h deny_unshare.bpf.o deny_unshare run: all sudo ./deny_unshare clean: rm -f *.o rm -f deny_unshare.skel.h # # BPF is kernel code. We need to pass -D__KERNEL__ to refer to fields present # in the kernel version of pt_regs struct. uAPI version of pt_regs (from ptrace) # has different field naming. # See: # deny_unshare.bpf.o: deny_unshare.bpf.c $(CLANG) -g -O2 -Wall -target bpf -D__KERNEL__ -D__TARGET_ARCH_$(ARCH) $(INCLUDES) -c $< -o [email protected] $(LLVM_STRIP) -g [email protected] # Removes debug information deny_unshare.skel.h: deny_unshare.bpf.o sudo bpftool gen skeleton $< > [email protected] deny_unshare: deny_unshare.c deny_unshare.skel.h $(CC) -g -Wall -c $< -o [email protected] $(CC) -g -o [email protected] $(LIBS_DIR) [email protected] $(LIBS) .DELETE_ON_ERROR:
Result
In a new terminal window run:
$ make run … LSM loaded! ctrl+c to exit.
In another terminal window, we’re successfully blocked!
$ unshare -rU unshare: unshare failed: Cannot allocate memory $ id uid=1000(fred) gid=1000(fred) groups=1000(fred) …
The policy has an additional feature to always allow privilege pass through:
$ sudo unshare -rU # id uid=0(root) gid=0(root) groups=0(root)
In the unprivileged case the syscall early aborts. What is the performance impact in the privileged case?
Measure performance
We’re going to use a one-line unshare that’ll map the user namespace, and execute a command within for the measurements:
$ unshare -frU --kill-child -- bash -c "exit 0"
With a resolution of CPU cycles for syscall unshare enter/exit, we’ll measure the following as root user:
- Command ran without the policy
- Command run with the policy
We’ll record the measurements with ftrace:
$ sudo su # cd /sys/kernel/debug/tracing # echo 1 > events/syscalls/sys_enter_unshare/enable ; echo 1 > events/syscalls/sys_exit_unshare/enable
At this point, we’re enabling tracing for the syscall enter and exit for unshare specifically. Now we set the time-resolution of our enter/exit calls to count CPU cycles:
# echo 'x86-tsc' > trace_clock
Next we begin our measurements:
# unshare -frU --kill-child -- bash -c "exit 0" & [1] 92014
Run the policy in a new terminal window, and then run our next syscall:
# unshare -frU --kill-child -- bash -c "exit 0" & [2] 92019
Now we have our two calls for comparison:
# cat trace # tracer: nop # # entries-in-buffer/entries-written: 4/4 #P:8 # # _-----=> irqs-off # / _----=> need-resched # | / _---=> hardirq/softirq # || / _--=> preempt-depth # ||| / _-=> migrate-disable # |||| / delay # TASK-PID CPU# ||||| TIMESTAMP FUNCTION # | | | ||||| | | unshare-92014 [002] ..... 762950852559027: sys_unshare(unshare_flags: 10000000) unshare-92014 [002] ..... 762950852622321: sys_unshare -> 0x0 unshare-92019 [007] ..... 762975980681895: sys_unshare(unshare_flags: 10000000) unshare-92019 [007] ..... 762975980752033: sys_unshare -> 0x0
unshare-92014 used 63294 cycles.
unshare-92019 used 70138 cycles.
We have a 6,844 (~10%) cycle penalty between the two measurements. Not bad!
These numbers are for a single syscall, and add up the more frequently the code is called. Unshare is typically called at task creation, and not repeatedly during normal execution of a program. Careful consideration and measurement is needed for your use case.
Outro
We learned a bit about what LSM BPF is, how unshare is used to map a user to root, and how to solve a real-world problem by implementing a solution in eBPF. Tracking down the appropriate hook is not an easy task, and requires a bit of playing and a lot of kernel code. Fortunately, that’s the hard part. Because a policy is written in C, we can granularly tweak the policy to our problem. This means one may extend this policy with an allow-list to allow certain programs or users to continue to use an unprivileged unshare. Finally, we looked at the performance impact of this program, and saw the overhead is worth blocking the attack vector.
“Cannot allocate memory” is not a clear error message for denying permissions. We proposed a patch to propagate error codes from the cred_prepare hook up the call stack. Ultimately we came to the conclusion that a new hook is better suited to this problem. Stay tuned! | https://blog.cloudflare.com/live-patch-security-vulnerabilities-with-ebpf-lsm/ | CC-MAIN-2022-33 | refinedweb | 2,061 | 55.34 |
Volume 1 of 9
Printed in USA
Legal Notices
The.
Use of this document and any supporting software media is restricted to
this product only. Additional copies of the programs may be made for
security and back-up purposes only. Resale of the programs, in their
present form or with alterations, is expressly prohibited.
Warranty
A copy of the specific warranty terms applicable to your Hewlett-Packard
product and replacement parts can be obtained from your local Sales and
Service Office.
Reproduction, adaptation, or translation of this document without prior
written permission is prohibited, except as allowed under the copyright
laws.
This document and the software it describes may also be protected under
one or more of the following copyrights. Additional copyrights are
acknowledged in some individual manpages.
of California.
ii
Trademark Notices
Intel and Itanium are registered trademarks of Intel Corporation in
the US and other countries and are used under license.
Java is a US trademark of Sun Microsystems, Inc.
Microsoft and MS-DOS are U.S. registered trademarks of Microsoft
Corporation.
OSF/Motif is a trademark of The Open Group in the US and other
countries.
UNIX is a registered trademark of The Open Group.
X Window System is a trademark of The Open Group.
iii
Revision History
This document’s printing date and part number indicate its edition. The
printing date changes when a new edition is printed. (Minor corrections
and updates which are incorporated at reprint do not cause the date to
change.) New editions of this manual incorporate all material updated
since the previous edition.
Part Number Date, Release, Format, Distribution
B2355-60103 August 2003. HP-UX release 11i version 2, one volume
HTML, docs.hp.com and Instant Information.
B2355-90779-87 August 2003. HP-UX release 11i version 2, nine
volumes PDF, docs.hp.com and print.
B9106-90010 June 2002. HP-UX release 11i version 1.6, one volume
HTML, docs.hp.com and Instant Information.
B9106-90007 June 2001. HP-UX release 11i version 1.5, seven
volumes HTML, docs.hp.com and Instant Information.
B2355-90688 December 2000. HP-UX release 11i version 1, nine
volumes.
B2355-90166 October 1997. HP-UX release 11.0, five volumes.
B2355-90128 July 1996. HP-UX release 10.20, five volumes, online
only.
B2355-90052 July 1995. HP-UX release 10.0, four volumes.
Conventions
We use the following typographical conventions.
audit (5) An HP-UX manpage. audit is the name and 5 is the
section in the HP-UX Reference. On the web and on the
Instant Information CD, it may be a hot link to the
iv
manpage itself. From the HP-UX command line, you
can enter “man audit” or “man 5 audit” to view the
manpage. See man (1).
Book Title The title of a book. On the web and on the Instant
Information CD, it may be a hot link to the book itself.
KeyCap The name of a keyboard key. Note that Return and Enter
both refer to the same key.
Emphasis Text that is emphasized.
Emphasis Text that is strongly emphasized.
ENVIRONVAR The name of an environment variable.
[ERRORNAME] The name of an error number, usually returned in the
errno variable.
Term The defined use of an important word or phrase.
ComputerOutput Text displayed by the computer.
UserInput Commands and other text that you type.
Command A command name or qualified command phrase.
Variable The name of a variable that you may replace in a
command or function or information in a display that
represents several possible values.
[ ] The contents are optional in formats and command
descriptions. If the contents are a list separated by |,
you may choose one of the items.
{ } The contents are required in formats and command
descriptions. If the contents are a list separated by |,
you must choose one of the items.
... The preceding element may be repeated an arbitrary
number of times.
| Separates items in a list of choices.
v
vi
Preface
HP-UX is the Hewlett-Packard Company’s implementation of an
operating system that is compatible with various industry standards. It
is based on the UNIX System V Release 4 operating system and
includes important features from the Fourth Berkeley Software
Distribution.
The nine volumes of this manual contain the system reference
documentation, made up of individual entries called manpages, named
for the man command that displays them on the system. The entries are
also known as manual pages or reference pages.
General For a general introduction to HP-UX and the structure and format of the
Introduction manpages, please see the introduction (9) manpage in volume 9.
Section The manpages are divided into sections that also have introduction
Introductions (intro) manpages that describe the contents. These are:
intro (1) Section 1: User Commands
(A-M in volume 1; N-Z in volume 2)
intro (1M) Section 1M: System Administration Commands
(A-M in volume 3; N-Z in volume 4)
intro (2) Section 2: System Calls
(in volume 5)
intro (3C) Section 3: Library Functions
(A-M in volume 6; N-Z in volume 7)
intro (4) Section 4: File Formats
(in volume 8)
intro (5) Section 5: Miscellaneous Topics
(in volume 9)
intro (7) Section 7: Device (Special) Files
(in volume 9)
intro (9) Section 9: General Information
(in volume 9)
vii
viii
Volume One
Table of Contents
Section 1
Volume One
Table of Contents
Section 1
Table of Contents
Volumes One and Two
User Commands
A-M
Section 1
Part 1
User Commands
A-M
intro(1) intro(1)
NAME
intro - introduction to command utilities and application programs
DESCRIPTION
This section describes commands accessible by users, as opposed to system calls in Section (2) or library
routines in Section (3), which are accessible by user programs.
Command Syntax
Unless otherwise noted, commands described in this section accept options and other arguments accord-
ing to the following syntax:
A A
name [ option ( s ) ] [ cmd_arg ( s ) ]
where the elements are defined as follows:
name Name of an executable file.
option One or more option s can appear on a command line. Each takes one of the following
forms:
-no_arg_letter
A single letter representing an option without an argument.
-no_arg_letters
Two or more single-letter options combined into a single command-line argu-
ment.
-arg_letter <>opt_arg
A single-letter option followed by a required argument where:
arg_letter
is the single letter representing an option that requires an argu-
ment,
opt_arg
is an argument (character string) satisfying the preceding
arg_letter ,
<> represents optional white space.
cmd_arg Path name (or other command argument) not beginning with -, or - by itself indicating
the standard input. If two or more cmd_arg s appear, they must be separated by white
space.
HP-UX 11i Version 2: August 2003 −1− Hewlett-Packard Company Section 1−−1
intro(1) intro(1)
RETURN VALUE
Upon termination, each command returns two bytes of status, one supplied by the system giving the
cause for termination, and (in the case of ‘‘normal’’ ‘‘exit code’’, ‘‘exit status’’, ‘‘return code’’, or ‘‘return value’’, and are described only where
special conventions are involved.
WARNINGS
Some commands produce unexpected results when processing files containing null characters. These
commands often treat text input lines as strings, and therefore become confused when they encounter a
null character (the string terminator) within a line.
SEE ALSO
getopt(1), exit(2), wait(2), getopt(3C), hier(5), introduction(9).
Web access to HP-UX documentation at.
Section 1−−2 Hewlett-Packard Company −2− HP-UX 11i Version 2: August 2003
adb(1) adb(1)
NAME
adb - absolute debugger
SYNOPSIS
adb [-h]
adb [-n-o] [-w] [-I path ] kernelfile memfile
adb [-n-o] [-w] [-I path ] kernelfile crashdir
adb [-n-o] [-w] [-I path ] crashdir
adb [-n-o] [-w] [-I path ] [objfile ] [corefile ] A aA
adb [-n-o] [-w] [-I path ] -P pid [execfile ]
DESCRIPTION
The adb command executes a general-purpose debugging program that is sensitive to the underlying
architecture of the processor and operating system on which it is run It can be used to examine files and
provide a controlled environment for executing HP-UX programs.
adb inspects exactly one object file, referred to as the current object file, and one memory file, referred
to as the current memory file. Either of these files can be the NULL file, specified by the - argument,
which is a file with no contents. The object file and the memory file are specified using the following
arguments:
kernelfile An HP-UX kernel, usually vmunix.
memfile /dev/mem or /dev/kmem. memfile is assumed to be on an HP-UX system running
kernelfile if kernelfile is specified. /dev/mem is supported only on PA-RISC platforms.
crashdir A directory containing an HP-UX system crash dump, which is assumed to be produced from
kernelfile if kernelfile is specified.
objfile Normally an executable program file. It can also be a relocatable object file, shared library
file or a DLKM module. The default for objfile is a.out.
corefile A core image file produced after executing objfile . The default for corefile is core.
execfile The executable file corresponding to pid , the process ID of the process to be adopted for
debugging by adb.
The current object file may be any one of kernelfile , the vmunix file in crashdir , objfile , or execfile . The
current object file preferably should contain a symbol table; if it does not, the symbolic features of adb
cannot be used, although the file can still be examined. The current memory file may be any one of
memfile , the system memory dump in crashdir , corefile , or the memory of process pid .
Requests to adb are read from standard input and adb responds on standard output. If the -w flag is
present, objfile is created (if necessary) and opened for reading and writing, to be modified using adb.
adb ignores QUIT; INTERRUPT causes return to the next adb command.
There are two modes of operation for adb: backward compatibility mode and normal mode. Backward
compatibility mode is the default on PA-RISC systems. Normal mode is the default on Itanium systems.
On startup adb executes adb commands from the file $HOME/.adbrc.
To debug a MxN process or the core, adb requires the MxN debug library, libmxndbg. Depending on
the application type, it loads /usr/lib/libmxndbg.sl (for 32 bit PA-RISC systems) or
/usr/lib/libmxndbg64.sl (for 64 bit PA-RISC systems) or /usr/lib/hpux32/libmxndbg.so
(for Itanium(R)-based systems). If the relevant library is not found in the specified path, you should set
the shell variable ADB_PATHMXNDBG to the path where the correct library can be found.
Options
adb recognizes the following command-line options , which can appear in any order but must appear
before any file arguments:
-h Print a usage summary and exit. If this option is used, all other options and arguments are
ignored.
-i Ignores $HOME/.adbrc.
-I path path specifies a list of directories where files read with < or << (see below) are sought. This
list has the same syntax as, and similar semantics to, the PATH shell variable; the default is
HP-UX 11i Version 2: August 2003 −1− Hewlett-Packard Company Section 1−−3
adb(1) adb(1)
.:/usr/lib/adb.
-n Specify the normal mode. This is the default on Itanium systems. This option is mutually
exclusive with the -o option. The last one specified takes effect.
-o Specify backward compatibility mode. This is the default on PA-RISC systems. This option is
mutually exclusive with the -n option. The last one specified takes effect.
-P pid Adopt process with process ID pid as a ‘‘traced’’ process; see ttrace (2). This option is helpful
for debugging processes that were not originally run under the control of adb.
A aA -w This option must be specified to enable the file write commands of adb. Objfile is opened for
reading and writing. It also enables writes to memfile if it is a kernel memory file.
The following command-line options to adb are obsolete and are no longer required. (If used they gen-
erate a warning.)
-k Previously adb required this option to recognize HP-UX crash dumps or /dev/mem.
-m Previously adb required this option to recognize multiple file HP-UX crash dumps.
Requests to adb follow either the traditional form:
[address ] [ ,count ] [command-char ] [command-arguments] [;]
or the new form:
keyword [command-arguments] [;]
Only the traditional form is available in backward compatibility mode.
If address is present, dot is set to address . dot is the adb state variable which keeps track of the
current address. dotincr is another state variable which keeps track of increments to dot as adb
steps through a format string; see Format String below. Initially dot and dotincr are set to 0. For
most commands, count specifies the number of times the command is to be executed. The default count is
1. address and count are expressions.
The interpretation of an address depends on the context in which it is used. If a subprocess is being
debugged, addresses are interpreted in the address space of the subprocess. (For further details see
Address Mapping below.)
The command-char and command-arguments specify the command to run. See Commands below.
Expressions
All adb expression primaries are treated as 64-bit unsigned integers and the expression also evaluates to
a 64-bit unsigned integer. The following primaries are supported:
integer A number. The prefixes 0 (zero), 0o and 0O force interpretation in octal radix; the
prefixes 0t, OT, 0d, and 0D force interpretation in decimal radix; the prefixes 0x and
0X force interpretation in hexadecimal radix; the prefixes 0b and 0B force interpreta-
tion in binary radix. Thus 020 = 0d16 = 0x10 = 0b1000 = sixteen. If no prefix
appears, the default radix is used; see the d command. The radix is initialized to hexa-
decimal. Note that a hexadecimal number whose most significant digit would other-
wise be an alphabetic character must have a 0x (or 0X) prefix.
’cccccccc ’ The ASCII value of up to 8 characters. If more than 8 characters are specified, the
value is undefined. A backslash (\) can be used to escape a single quote (’).
$register Register. The value of the register is obtained from the register set corresponding to
the current memory file. Register names are implementation dependent; see the r
command.
symbol A symbol is a sequence of uppercase or lowercase letters, underscores, or digits, not
starting with a digit. A backslash (\) can be used to escape other characters. The
value of the symbol is taken from the symbol table in the current object file.
variable A variable name consists of alphabets and numerals and always starts with $. Names
of registers in the target processor are reserved as variable names and can be used to
access registers in expressions.
In backward compatibility mode, a variable is a single numeral or alphabet except for
registers and the prefix letter is >.
Section 1−−4 Hewlett-Packard Company −2− HP-UX 11i Version 2: August 2003
adb(1) adb(1)
Commands
As mentioned above, adb commands may be specified in the traditional form or the keyword form.
In backward compatibility mode, only the traditional form is supported.
HP-UX 11i Version 2: August 2003 −3− Hewlett-Packard Company Section 1−−5
adb(1) adb(1)
• Keyword commands
• Process commands
• Thread commands
• Shell commands
In backward compatibility mode:
• Variable commands
File commands
A aA These commands operate on the current object file or the current memory file and are used to read, write,
etc.
file_selector [ modifier ] [ ,size | index ] [arglist ]
The file_selector can be one of these:
? The selected file is the current object file.
/ The selected file is the current memory file.
= This special symbol is only used for printing the value of dot.
The modifier specifies the operation on the file; modifier can be:
(no modifier)
It takes a single optional argument list which is a format string. adb prints data from
the selected file according to the format string. If a format string is not present and the
file selector is ? or / then adb uses the format string used by either of these earlier.
If the file selector is = and a format string is not present, then adb uses the format
string used by the previous = command.
/ [ , size ] value [ mask ]
Search the selected file. Words of size, size starting at dot are masked with mask and
compared with value until a match is found. If found, dot is set to that address of
masked object. If mask is omitted, no mask is used. dotincr is set to 0. Valid
values of size are 1, 2, 4, 8. If no size is specified then sizeof(int) is assumed.
value and mask are unsigned integers of size size bytes.
For example: expr?/,4 6 5. Search for 4 byte value, 4 ( 6 & 5 ) in the current object
file, starting at expr .
= [ , size ] value1 value2 ...
Write a size sized value at the addressed location. dot is incremented by size after
each write. dotincr is set to 0. Values of size and values are same as for /
modifier. For this operation, the file should be opened with -w option.
For example: expr?=,4 6 5. Write 6 & 5 in the current object file at addresses expr
and expr +4 respectively, starting at expr .
> [ , index ] b e f
Set the index th mapping triple parameters of the selected file to the corresponding
argument values in order. Refer to Address Maps . If fewer than three arguments are
given, remaining maps remain unchanged. The arguments are expressions. If not
specified, index is assumed to be 0. For example: ?>,0 1 2 3 Set b, e, f (index 0)
of the current object file to 1, 2, 3 respectively.
In backward compatibility mode the following modifiers are also present.
* It has same behavior as that when no modifier is present. However, it uses the second
mapping triple to locate the file address of data to be printed.
l It has same behavior as modifier / with an implicit size of 2. It sets dotincr to 2.
L It has same behavior as modifier / with an implicit size of 4. It sets dotincr to 4.
w It has same behavior as modifier = with an implicit size of 2. It sets dotincr to 2. It
increments dot by the total size of all the values written minus dotincr.
W It has same behavior as modifier / with an implicit size of 4. It sets dotincr to 4.
dot is set as for w.
Section 1−−6 Hewlett-Packard Company −4− HP-UX 11i Version 2: August 2003
adb(1) adb(1)
Keyword Commands
Run the Keyword Command Form using the traditional command form by prefixing the command with $.
Please refer to Keyword Form Commands for the complete list of keyword commands.
Process Commands
These commands deal with managing subprocesses. adb can run an object file as a subprocess. Also, it A aA
can adopt a subprocess given its pid. adb can debug multi-threaded and/or forked subprocesses. It can
also debug multiple subprocesses at the same time. However, at any time it focuses on a one subprocess
and one of its threads called the current subprocess and current thread respectively.
The command consists of : followed by the modifier and an optional list of arguments. They are:
r [ objfile ] Run objfile as a subprocess. If address is given explicitly, the program is entered at
this point; otherwise the program is entered at its standard entry point. The value
count specifies how many breakpoints are ignored before stopping. arguments to the
subprocess may be supplied on the same line as the command. Semicolon is not used
as a command separator. An argument starting with < or > causes the standard input
or output to be established for the command. All signals are turned on when entering
the subprocess. Such a subprocess is referred to as a created subprocess.
If there are other created subprocesses running, all are killed. It does not kill any
attached subprocesses. This becomes the current subprocess.
e [ objfile ] Set up a subprocess as in :r; no instructions are executed.
a [ objfile ] Causes adb to adopt process with pid as a traced subprocess. If the objfile is specified,
adb uses it to lookup symbol information. Count has same meaning as in :r. Such a
subprocess is referred to as an attached subprocess. This subprocess becomes the
current subprocess.
k [ pid | * ]
Kills a created subprocess. If no argument is specified it kills the current subprocess.
If a pid is given, it kills the subprocess with that pid . If * is given, it kills all created
subprocesses.
The current subprocess is chosen from the remaining subprocesses.
de [ pid | * ]
The arguments can be a pid or a *. Same as :k, however it applies to attached sub-
processes. adb detaches from them.
c [ signal ] Continues the current subprocess with signal signal . It continues all the threads of the
subprocess. If no signal is specified, the signal that caused it to stop is sent. If address
is specified, the current thread continues at this address. Breakpoint skipping is the
same as for :r.
s [ signal | arg1 arg2 ... ]
Step the current thread count times. If address is given, then the thread continues at
that address , else from the address where it had stopped. If no signal is specified, the
signal that caused it to stop is sent. If there is no current subprocess, object file is run
as a subprocess as for :r. In this case no signal can be sent; the remainder of the line
is treated as arguments to the subprocess.
b [ command ]
Sets breakpoint at address in the current subprocess The breakpoint is executed
count -1 times before causing a stop. Each time the breakpoint is encountered, the
command is executed. This breakpoint is a subprocess breakpoint. If any of the thread
executes the instruction at this address , it will stop. Multiple breakpoints can be set at
the same address .
d [ num | * ]
Deletes all breakpoints at address in the current subprocess, if it is specified. If * is
specified, it deletes all the current subprocess breakpoints. If num is specified, break-
point with number num is deleted.
HP-UX 11i Version 2: August 2003 −5− Hewlett-Packard Company Section 1−−7
adb(1) adb(1)
en [ num | * ]
Enables all breakpoints at address in the current subprocess, if it is specified. If * is
specified, it enables all the current subprocess breakpoints. If num is specified, break-
point with number num is enabled.
di [ num | * ]
Disables all breakpoints at address in the current subprocess, if it is specified. If * is
specified, it disables all the current subprocess breakpoints. If num is specified, break-
point with number num is disabled.
A aA z signum [ +s | -s | +r | -r | +d | -d ]
Changes signal handling for a specified signum for all the threads of the current sub-
process. Disposition can be specified as:
+s Stop subprocess when signum is received.
-s Do not stop subprocess when signum is received.
+r Report when signum is received.
-r Do not report when signum is received.
+d Deliver signum to the target subprocess .
-d Do not deliver signal to the target subprocess .
w [ pid ] Switches from the current subprocess to the subprocess with process ID pid . This pro-
cess becomes the current subprocess. This subprocess must be an already attached or
created subprocess. Both subprocesses are in stopped state after this command.
wc [ pid ] Same as w however the previous current subprocess is not stopped.
Thread Commands
These commands manage the threads in the current subprocess. The command consists of a ] followed
by a modifier and an optional argument list.
s [ signum ] Same as :s. However it is strictly for the current thread only.
c [ signum ] Same as :c. However it continues only the current thread. And count refers to the
breakpoint to skip for the current thread.
b [ command ] Same as :b. However it applies to the current thread only.
d [ num | * ] Same as :d. However it applies to current thread only.
en [ num | * ] Same as :en. However it applies to the current thread only.
di [ num | * ] Same as :di. However it applies to the current thread only.
z signum [ +s | -s | +r | -r | +d | -d ]
Same as :z. However it is meant for the current thread only. If a signum occurs in the
context of this thread this disposition value is used instead of that of the subprocess.
es [ signum ] Sets the flag for this signum for the current thread. It means that if this signal signum
occurs in the context of this thread’s signal disposition value is used instead of that of the
subprocess.
w [ pid ] Switch from the current thread to some other thread. Both the threads are in stopped
state after this, and the thread with threadid becomes the current thread. This com-
mand is also applicable to core file debugging. It switches from present thread to given
thread and makes the given thread as the current thread.
Shell Commands
This action consists of a ! character followed by a string . The string is passed unchanged to the shell
defined by the SHELL environment variable or to /bin/sh.
Variable Commands
This is supported in backward compatibility mode only. It consists of a > followed by a variable , var and
an optional value . This action assigns value to the variable or register named by var .
If not specified, value is assumed to be the value of dot. This behavior is deprecated.
Section 1−−8 Hewlett-Packard Company −6− HP-UX 11i Version 2: August 2003
adb(1) adb(1)
HP-UX 11i Version 2: August 2003 −7− Hewlett-Packard Company Section 1−−9
adb(1) adb(1)
A aA n [ nodenumber ]
Without arguments print node information on a CCNUMA machine. With a
nodenumber argument, change to that node.
p traditional_cmd
This keyword command takes a traditional command as argument and interprets it.
a var value Assign value to adb variable var .
pa Virtual_Offset
Prints the physical address for a given Virtual Offset in HEX format. Space ID is
taken from the adb variable space. You can set the adb variable space using the key-
word command a explained earlier.
The following commands can run only in backward compatibility mode.
newline Print the process ID and register values.
M Toggle the address mapping of memfile between the initial map set up or a valid
memory file and the default mapping pair which the user can modify with the file
action modifier >. If the memory file was invalid, only the default mapping is avail-
able.
N [ nodenumber ]
Print the number of nodes on V-class multinode machines and the current node
number. To switch to another node, enter $N nodenumber.
F Print double precision floating point registers.
R Print all registers.
U Print unwind tables.
Format String
A format string is used to specify the formatting to be done before data is printed by adb. There are
two types of format strings supported by adb: traditional style and printf style. A traditional style
format string is a sequence of format specifiers. A printf-style format string is always preceded by a
comma (,) and enclosed within double quotes (""), and is a sequence of format specifiers and other char-
acters. Each format specifier should be preceded by a % character. Characters other than format
specifiers are printed as is. If needed, % should be escaped by %. It supports C language style \ charac-
ter escape sequences.
While processing a format string, adb scans the format string from left to right and applies each conver-
sion specifier encountered to the object addressed by the sum of dot and dotincr. After each
conversion specifier is processed, dotincr is incremented by count times size (implicit or explicit) of
that conversion specifier. If the format string is used to print the value of dot (using action =), dot
and dotincr remain unchanged. For dotincr operator, dotincr is updated appropriately.
In backward compatibility mode, only the traditional style format string is supported.
Format Specifier
A format specifier can be a conversion specifier or a dot operator.
1. Conversion Specifier
Each conversion specifier consists of an optional count or pspec followed by an optional size specifier char-
acter , followed by a conversion specifier character .
count This is available only for the traditional style format string. The count specifies the number of
times this conversion specifier is to be repeated. If not specified, count is assumed to be 1.
Section 1−−10 Hewlett-Packard Company −8− HP-UX 11i Version 2: August 2003
adb(1) adb(1)
pspec This is available only for the printf-style format string. It is a sequence of flags, fieldwidth and
precision as in the printf (3S) library function.
size specifier character
This specifies the size of object to which this is applied. Size can be specified in two ways. One is
using absolute size specifier and other is relative size specifier. Absolute size specifiers are as fol-
lows.
b The size of the object is 1 byte.
e The size of the object is 2 bytes.
g The size of the object is 4 bytes. A aA
j The size of the object is 8 bytes.
k The size of the object is 16 bytes.
Relative size specifiers are as follows
w The size of the object is the size of a machine word of the target processor.
h The size of the object is half the size of a machine word of the target processor.
l The size of the object is double the size of a machine word of the target processor.
n The size of the object is the size of a pointer on the target processor. This will be different for
wide files and narrow files.
m The size of the object is the size of an instruction of the target processor. This will be sup-
ported only on processors where this is constant.
Conversion Specifier Character
The following characters are supported
a The value of dot is printed in symbolic form.
c The object is printed as a character.
o The object is printed as an unsigned octal number.
d The object is printed as a signed decimal number.
u The object is printed as an unsigned decimal number.
i The object is disassembled as an instruction and printed.
f The object is printed in a floating point format according to its size.
p The object is printed in symbolic form.
s The object is assumed to be a null terminated string and printed. This cannot be used to
print dot.
y The object is cast to type time_t and printed in the ctime (3C) format.
Here the printf-style format strings support only c, o, d, u, x, f, and s. If the size specifier
character is not specified, it is assumed to be b for conversion character c; w for conversion char-
acters d, u, x, o, and f; m for i; sizeof(time_t) for y; and w for everything else.
For example. 10=2bo, ’abc’=,"%s", main?4i
2. Dot Operator
A dot operator consists of an optional count , optional size specifier character , and a dot operator charac-
ter .
count count specifies the number of times this dot operator is to be repeated. If not specified, count
is assumed to be 1. The count is always 1 for printf-style format strings.
Size Specifier Character
Same as size specifier character of conversion specifier.
Dot operator character
This can be one of these
v Increment dotincr by count times size.
HP-UX 11i Version 2: August 2003 −9− Hewlett-Packard Company Section 1−−11
adb(1) adb(1)
Address Maps
In files like object files and application core files, the virtual memory address is the not the same as the
file offset. So adb keeps an array of address maps for these files to map a given virtual memory address
to a file offset. Each address map is a triple: start virtual address (b), end virtual address (e) and start
file offset (f). The triple specifies that all addresses from b to e - 1 occupy a contiguous region in the file
starting at f. Given a virtual address a such that b≤ a< e, the file offset of a can be computed as f+ a- b.
State variables
There are several variables which define the state of adb at any instant in time. They are:
dot Current address. Initial value is 0.
dotincr Current address increment. Initial value is 0.
prompt Prompt string used by adb. Initial value is ‘‘adb> ’’.
radix The current input radix. Initial value is as in the assembly language of the target proces-
sor.
maxwidth The maximum width of the display. Initial value is 80.
maxoffset If an address is within this limit from a known symbol, adb prints the address as
symbol_name +offset , else the address is printed. Initial value is 0xffffffff.
macropath List of directories to be searched for adb macros. Initial value is .:/usr/lib/adb.
pager Pager command used by adb. Initial value is more -c.
backcompat Set to 1 if adb is in backward compatibility mode. Initial value depends on the host pro-
cessor.
Note
adb64 is a symbolic link to adb. This symbolic link is maintained for backward compatibility with some
old scripts which may be using adb64.
EXTERNAL INFLUENCES
International Code Set Support
Single- and multi-byte character code sets are supported.
RETURN VALUE
A aA adb comments about inaccessible files, syntax errors, abnormal termination of commands, etc. Exit
status is 0 unless the last command failed or returned non-zero status.
AUTHOR
adb was developed by HP.
FILES
a.out
core
/dev/mem
/dev/kmem
SEE ALSO
ttrace(2), crt0(3), ctime(3C), end(3C), a.out(4), core(4), signal(5).
ADB Tutorial
NAME
adjust - simple text formatter
SYNOPSIS
adjust [-b] [-c|-j|-r ] [-m column ] [-t tabsize ] [ files ... ]
DESCRIPTION
The adjust command is a simple text formatter for filling, centering, left and right justifying, or only
right justifying text paragraphs, and is designed for interactive use. It reads the concatenation of input
files (or standard input if none are given) and produces on standard output a formatted version of its
input, with each paragraph formatted separately. If - is given as an input filename, adjust reads stan- A aA
dard input at that point (use - - as an argument to separate - from options.)
adjust reads text from input lines as a series of words separated by space characters, tabs, or newlines.
Text lines are grouped into paragraphs separated by blank lines. By default, text is copied directly to the
output, subject only to simple filling (see below) with a right margin of 72, and leading spaces are con-
verted to tabs where possible.
Options
The adjust command recognizes the following command-line options:
-b Do not convert leading space characters to tabs on output; (output contains no tabs, even
if there were tabs in input).
-c Center text on each line. Lines are pre- and post-processed, but no filling is performed.
-j Justify text. After filling, insert spaces in each line as needed to right justify it (except in
the last line of each paragraph) while keeping the justified left margin.
-r After filling text, adjust the indentation of each line for a smooth right margin (ragged
left margin).
-mcolumn
Set the right fill margin to the given column number, instead of 72. Text is filled, and
optionally right justified, so that no output line extends beyond this column (if possible).
If -m0 is given, the current right margin of the first line of each paragraph is used for
that and all subsequent lines in the paragraph.
By default, text is centered on column 40. With -c, the -m option sets the middle column
of the centering ‘‘window’’, but -m0 auto-sets the right side as before (which then deter-
mines the center of the ‘‘window’’).
-ttabsize Set the tab size to other than the default (eight columns).
Only one of the -c, -j, and -r options is allowed in a single command line.
Details
Before doing anything else to a line of input text, adjust first handles backspaces, rubbing out preced-
ing characters in the usual way. Next, it ignores all non-printable characters except tab. It then expands
all tabs to spaces.
For simple text filling, the first word of the first line of each paragraph is indented the same amount as in
the input line. Each word is then carried to the output followed by one space. ‘‘Words’’ ending in
terminal_character[quote ][closing_character] are followed by two spaces, where terminal_character is
any of ., :, ?, or !; quote is a single closing quote ( ’ ) character or double-quote character ( " ), and close
is any of ), ], or }. Here are some examples:
end. of? sentence.’ sorts!" of.) words?"]
(adjust does not place two spaces after a pair of single closing quotes ( ’’ ) following a
terminal_character).
adjust starts a new output line whenever adding a word (other than the first one) to the current line
would exceed the right margin.
adjust understands indented first lines of paragraphs (such as this one) when filling. The
second and subsequent lines of each paragraph are indented the same amount as the second line of the
input paragraph if there is a second line, else the same as the first line.
HP-UX 11i Version 2: August 2003 −1− Hewlett-Packard Company Section 1−−15
adjust(1) adjust(1)
* adjust also has a rudimentary understanding of tagged paragraphs (such as this one) when
filling. If the second line of a paragraph is indented more than the first, and the first line has
a word beginning at the same indentation as the second line, the input column position of the
tag word or words (prior to the one matching the second line indentation) is preserved.
Tag words are passed through without change of column position, even if they extend beyond the right
margin. The rest of the line is filled or right justified from the position of the first non-tag word.
When -j is given, adjust uses an intelligent algorithm to insert spaces in output lines where they are
most needed, until the lines extend to the right margin. First, all one space word separators are exam-
A aA ined. One space is added to each separator, starting with the one having the most letters between it and
the preceding and following separators, until the modified line reaches the right margin. If all one space
separators are increased to two spaces and more spaces must be inserted, the algorithm is repeated with
two space separators, and so on.
Output line indentation is held to one less than the right margin. If a single word is larger than the line
size (right margin minus indentation), that word appears on a line by itself, properly indented, and
extends beyond the right margin. However, if -r is used, such words are still right justified, if possible.
If the current locale defines class names ekinsoku and bkinsoku (see iswctype (3C)), adjust formats
the text in accordance with the ekinsoku/bkinsoku character classification and margin settings (see
-r, -j, and -m options).
EXTERNAL INFLUENCES
Environment Variables
LANG provides a default value for the internationalization variables that are unset or null. If LANG is
unset or null, the default value of "C" (see lang (5)) is used. If any of the internationalization variables
contains an invalid setting, adjusts for the processing of LC_MESSAGES.
DIAGNOSTICS
adjust complains to standard error and later returns a nonzero value if any input file cannot be opened
(it skips the file). It does the same (but quits immediately) if the argument to -m or -t is out of range, or
if the program is improperly invoked.
Input lines longer than BUFSIZ are silently split (before tab expansion) or truncated (afterwards). Lines
that are too wide to center begin in column 1 (no leading spaces).
EXAMPLES
This command is useful for filtering text while in vi (1). For example,
!}adjust
reformats the rest of the current paragraph (from the current line down), evening the lines.
The vi command:
:map ˆX {!}adjust -jˆVˆM
(where ˆ denotes control characters) sets up a useful ‘‘finger macro’’. Typing ˆX (Ctrl-X) reformats the
entire current paragraph.
adjust -m1 is a simple way to break text into separate words without white space, except for tagged-
paragraph tags.
Section 1−−16 Hewlett-Packard Company −2− HP-UX 11i Version 2: August 2003
adjust(1) adjust(1)
WARNINGS
This program is designed to be simple and fast. It does not recognize backslash to escape white space or
other characters. It does not recognize tagged paragraphs where the tag is on a line by itself. It knows
that lines end in newline or null, and how to deal with tabs and backspaces, but it does not do anything
special with other characters such as form feed (they are simply ignored). For complex operations, stan-
dard text processors are likely to be more appropriate.
This program could be implemented instead as a set of independent programs, fill, center, and justify
(with the -r option). However, this would be much less efficient in actual use, especially given the
program’s special knowledge of tagged paragraphs and last lines of paragraphs.
AUTHOR
A aA
adjust was developed by HP.
SEE ALSO
nroff(1).
HP-UX 11i Version 2: August 2003 −3− Hewlett-Packard Company Section 1−−17
admin(1) admin(1)
NAME
admin - create and administer SCCS files
SYNOPSIS
admin -i[name] [-n] [-b] [-a login ] ... [-d flag[flag-val ] ] ... [-f flag[flag-val ] ] ...
[-m mrlist ] ... [-r rel ] [-t[name] ] [-y[comment ] ] file ...
admin -n [-a login ] ... [-d flag[flag-val ] ] ... [-f flag[flag-val ] ] ... [-m mrlist ] ...
[-t[name] ] [-y[comment ] ] file ...
A aA admin [-a login ] ... [-e login ] ... [-d flag[flag-val ] ] ... [-m mrlist ] ...
[-r rel ] [-t[name] ] file ...
admin -h file ...
admin -z file ...
DESCRIPTION
The admin command is used to create new SCCS files and change the parameters of existing ones. Argu-
ments to admin, which may appear in any order, ( unless -- is specified as an argument, in which case
all arguments after -- are treated as files ) consist of option arguments, beginning with -, and named
file s (note that SCCS file names must begin with the characters s.). If a named file does not exist, it is
created and its parameters are initialized according to the specified option arguments. Parameters not
initialized by an option argument are assigned a default value. If a named file does exist, parameters
corresponding to specified option arguments are changed, and other parameters are left unaltered.
If directory is named instead of file , admin acts on each file in directory , except that non-SCCS files (the
last component of the path name does not begin with s.) and unreadable files are silently ignored. If a
name of - is given, the standard input is read, and each line of the standard input is assumed to be the
name of an SCCS file to be processed. Again, non-SCCS files and unreadable files are silently ignored.
The admin option arguments apply independently to all named file s, whether one file or many. In the
following discussion, each option is explained as if only one file is specified, although they affect single or
multiple files identically.
Options
The admin command supports the following options and command-line arguments:
-n This option indicates that a new SCCS file is to be created.
-i[name] The name of a file from which the contents for a new SCCS file is to be taken. (if
name is a binary file, then you must specify the -b option) The contents constitutes
the first delta of the file (see the -r option for the delta numbering scheme). If the
-i option is used but the file name is omitted, the text is obtained by reading the
standard input until an end-of-file is encountered. If this option is omitted, the
SCCS file is created with an empty initial delta. Only one SCCS file can be created
by an admin command on which the -i option is supplied. Using a single admin
to create two or more SCCS files requires that they be created empty (no -i option).
Note that the -i option implies the -n option.
-b Encode the contents of name, specified to the -i option. This keyletter must be used
if name is a binary file; otherwise, a binary file will not be handled properly by
SCCS commands.
-r rel The release (rel ) into which the initial delta is inserted. This option can be used
only if the -i option is also used. If the -r option is not used, the initial delta is
inserted into release 1. The level of the initial delta is always 1 (by default initial
deltas are named 1.1).
-t[name] The name of a file from which descriptive text for the SCCS file is to be taken. If
the -t option is used and admin is creating a new SCCS file (the -n and/or -i
options are also used), the descriptive text file name must also be supplied. In the
case of existing SCCS files:
• A -t option without a file name causes removal of descriptive text (if any)
currently in the SCCS file.
• A -t option with a file name causes text (if any) in the named file to
replace the descriptive text (if any) currently in the SCCS file.
Section 1−−18 Hewlett-Packard Company −1− HP-UX 11i Version 2: August 2003
admin(1) admin(1)
-f flag This option specifies a flag, and possibly a value for the flag, to be placed in the
SCCS file. Several -f options can be supplied on a single admin command line.
The allowable flags and their values are:
b Allows use of the -b option on a get command (see get (1)) to
create branch deltas.
cceil The highest release (i.e., "ceiling"), a number less than or equal to
9999, which can be retrieved by a get command for editing. The
default value for an unspecified c flag is 9999.
ffloor The lowest release (i.e., "floor"), a number greater than 0 but less
than 9999, which may be retrieved by a get command for editing.
A aA
The default value for an unspecified f flag is 1.
dSID The default delta number SID to be used by a get command (see
get (1)).
istr Causes the message:
No id keywords (cm7)
issued by get or delta to be treated as a fatal error (see delta (1)).
In the absence of this flag, the message is only a warning. The mes-
sage is issued if no SCCS identification keywords (see get (1)) are
found in the text retrieved or stored in the SCCS file. If a value is
supplied, the keywords must exactly match the given string. How-
ever, the string must contain a keyword, but must not contain
embedded newlines.
j Allows concurrent get commands for editing on the same SID of an
SCCS file. This allows multiple concurrent updates to the same
version of the SCCS file.
Only one user can perform concurrent edits. Access by multiple
users is usually accomplished by using a common login or a set user
ID program (see chmod(1) and exec (2)).
llist A list of releases to which deltas can no longer be made. (A get
-e against one of these locked releases fails). The list has the fol-
lowing syntax:
list ::= range | list , range
range ::= RELEASE NUMBER | a
The character a in the list is equivalent to specifying all releases
for the named SCCS file. Omitting any list is equivalent to a.
n Causes delta to create a null delta in each of those releases being
skipped (if any) when a delta is made in a new release (such as
when making delta 5.1 after delta 2.7, release 3 and release 4 are
skipped). These null deltas serve as anchor points so that branch
deltas can be created from them later. The absence of this flag
causes skipped releases to be nonexistent in the SCCS file, prevent-
ing branch deltas from being created from them in the future.
qtext User-definable text substituted for all occurrences of the %Q% key-
word in SCCS file text retrieved by get.
mmod The module name of the SCCS file substituted for all occurrences of
the %M% keyword in SCCS file text retrieved by get. If the m flag is
not specified, the value assigned is the name of the SCCS file with
the leading s. removed.
ttype The type of module in the SCCS file substituted for all occurrences
of %Y% keyword in SCCS file text retrieved by get.
HP-UX 11i Version 2: August 2003 −2− Hewlett-Packard Company Section 1−−19
admin(1) admin(1)
A aA be supplied on a single admin command line. See the -f option for allowable flag
names.
llist A list of releases to be unlocked. See the -f option for a descrip-
tion of the l flag and the syntax of a list .
-a login A login name, or numerical HP-UX group ID, to be added to the list of users
allowed to make deltas (changes) to the SCCS file. A group ID is equivalent to
specifying all login names common to that group ID. Several a options can be used
on a single admin command line. As many login s or numerical group IDs as
desired can be on the list simultaneously. If the list of users is empty, anyone can
add deltas. A login or group ID preceded by a ! denies permission to make deltas.
-e login A login name or numerical group ID to be erased from the list of users allowed to
make deltas (changes) to the SCCS file. Specifying a group ID is equivalent to
specifying all login names common to that group ID. Several e options can be used
on a single admin command line.
-y[comment] The comment text is inserted into the SCCS file as a comment for the initial delta in
a manner identical to that of delta (1). Omission of the -y option results in a
default comment line being inserted in the form:
date and time created YY / MM / DD / HH / MM / SS by login
The -y option is valid only if the -i and/or -n options are specified (i.e., a new
SCCS file is being created).
-m mrlist The list of Modification Request (MR) numbers is inserted into the SCCS file as the
reason for creating the initial delta, in a manner identical to delta (1). The v flag
must be set and the (MR) numbers are validated if the v flag has a value (the name
of an (MR) number validation program). Diagnostic messages occur if the v flag is
not set or (MR) validation fails.
-h Causes admin to check the structure of the SCCS file (see sccsfile (4)), and to com-
pare a newly computed checksum (the sum of all of the characters in the SCCS file
except those in the first line) with the checksum that is stored in the first line of the
SCCS file. Appropriate error diagnostics are produced.
This option inhibits writing on the file, thus canceling the effect of any other options
supplied, and therefore is only meaningful when processing existing files.
-z The SCCS file checksum is recomputed and stored in the first line of the SCCS file
(see -h, above).
Note that use of this option on a truly corrupted file can prevent future detection of
the corruption.
EXTERNAL INFLUENCES
Environment Variables
LC_CTYPE determines the interpretation of text as single- and/or multi-byte characters.
LC_MESSAGES determines the language in which messages are displayed.
Section 1−−20 Hewlett-Packard Company −3− HP-UX 11i Version 2: August 2003
admin(1) admin(1), admin behaves as if all internationalization variables are set to C.
See environ (5).
DIAGNOSTICS
Use sccshelp (1) for explanations. A aA
WARNINGS
SCCS files can be any length, but the number of lines in the text file itself cannot exceed 99 999 lines.
FILES
The last component of all SCCS file names must be of the form s.filename. New SCCS files are given
mode 444 (see chmod(1)). Write permission in the pertinent directory is required to create a file. All
writing done by admin is to a temporary x-file, called x.filename, (see get (1)), created with mode 444 if
the admin command is creating a new SCCS file, or with the same mode as the SCCS file if it exists.
After successful execution of admin, the SCCS file is removed (if it exists), and the x-file is renamed to
the name of the SCCS file. This ensures that changes are made to the SCCS file only if no errors
occurred.
It is recommended that directories containing SCCS files be mode 755 and that SCCS files themselves be
mode 444. The mode of any given directory allows only the owner to modify SCCS files contained in that
directory. The mode of the SCCS files prevents any modification at all except by SCCS commands.
If it should be necessary to patch an SCCS file for any reason, the mode can be changed to 644 by the
owner, thus allowing the use of vi or any other suitable editor. Care must be taken! The edited file
should always be processed by an admin -h to check for corruption followed by an admin -z to gen-
erate a proper checksum. Another admin -h is recommended to ensure the SCCS file is valid.
admin also makes use of a transient lock file called z.filename), which is used to prevent simultaneous
updates to the SCCS file by different users. See get (1) for further information.
SEE ALSO
delta(1), ed(1), get(1), sccshelp(1), prs(1), what(1), sccsfile(4), acl(5).
STANDARDS CONFORMANCE
admin: SVID2, SVID3, XPG2, XPG3, XPG4
HP-UX 11i Version 2: August 2003 −4− Hewlett-Packard Company Section 1−−21
answer(1) answer(1)
NAME
answer - phone message transcription system
SYNOPSIS
answer [-pu]
DESCRIPTION
The answer interactive program helps you to transcribe telephone (and other) messages into electronic
mail.
A aA The program uses your personal elm alias database and the system elm alias database, allowing you to
use aliases to address the messages.
Options
answer supports the following options:
-p Prompt for phone-slip-type message fields.
-u Allow addresses that are not aliases.
Operation
answer begins with the Message to: prompt. Enter a one-word alias or a two-word user name
("Words" are separated by spaces.) The user name is converted to an alias in the form f _lastword, where
f is the first character of the first word, lastword is the second word, and all letters are shifted to lower-
case. For example, Dave Smith is converted to the alias d_smith.
Without the -u option, the specified or converted alias must exist in the alias databases. With the -u
option, if the processed "alias" is not in the alias databases, it is used for the address as is.
The fully expanded address is displayed.
With the -p option, you are asked for typical message slip data:
Caller:
of:
Phone:
TELEPHONED -
CALLED TO SEE YOU -
WANTS TO SEE YOU -
RETURNED YOUR CALL -
PLEASE CALL -
WILL CALL AGAIN -
*****URGENT****** -
Enter the appropriate data. You can put just an X or nothing after the pertinent dash prompts, or you
can type longer comments. Whatever you enter becomes part of the message. Lines with no added text
are omitted from the message.
Finally, you are prompted for a message. Enter a message, if any, ending with a blank line. The mes-
sage is sent and the Message to: prompt is repeated.
To end the program, enter any one of bye, done, exit, or quit, at the Message to: prompt.
EXAMPLES
User input is in normal type.
With No Options
This example shows a valid alias, an invalid user name, and a valid user name. In the invalid case, the
converted alias is displayed in square brackets.
-----------------------------------------------------------------
Section 1−−22 Hewlett-Packard Company −1− HP-UX 11i Version 2: August 2003
answer(1) answer(1)
-----------------------------------------------------------------
-----------------------------------------------------------------
TELEPHONED - at 4:30pm
CALLED TO SEE YOU -
WANTS TO SEE YOU - X
RETURNED YOUR CALL -
PLEASE CALL - X
WILL CALL AGAIN -
*****URGENT****** - very very!
FILES
$HOME/.elm/aliases User alias database data table
$HOME/.elm/aliases.dir User alias database directory table
HP-UX 11i Version 2: August 2003 −2− Hewlett-Packard Company Section 1−−23
answer(1) answer(1)
AUTHOR
SEE ALSO
elm(1), newalias(1).
Section 1−−24 Hewlett-Packard Company −3− HP-UX 11i Version 2: August 2003
ar(1) ar(1)
NAME
ar - create and maintain portable archives and libraries
SYNOPSIS
ar [-]key [-][modifier ...] [posname ] afile [name ...]
DESCRIPTION. A aA:
• If the u modifier is used with the operation character r, only those files with modification
dates later than those of the corresponding member files are replaced.
•.
• ar creates afile if it does not already exist.
• If no name is specified and:
• the specified archive file does not exist, ar creates an empty archive file containing only
the archive header (see ar (4)).
• the archive contains one or more files whose names match names in the current direc-
tory, each matching archive file is replaced by the corresponding local file without con-
sidering con-
tents of all files are printed in the order that they appear in the archive.
m Move the named files. By default, the files are moved to the end of the archive. If a position-
ing.
HP-UX 11i Version 2: August 2003 −1− Hewlett-Packard Company Section 1−−25
ar(1) ar(1)
x Extract the named files. If no names are given, all files in the archive are extracted. In nei-
ther case does x alter entries from the archive file.
The following list describes the optional modifier characters:
a Position the files after the existing positioning file specified by posname .
b Place the new files before the existing positioning file specified by posname .
c Suppress the message normally produced when afile is created. For r and q operations, ar
normally creates afile if it does not already exist.
A aA use-
ful only to avoid long build times when creating a large archive piece-by-piece. If an existing
archive contains a symbol table, the z modifier will cause it to be invalidated. If a file name warn-
ing pro-
vided
Section 1−−26 Hewlett-Packard Company −2− HP-UX 11i Version 2: August 2003
ar(1) ar(1)
q: v, f, F, l, c, A, z, s
t: v, f, F, s
p: v, f, F, s
x: v, f, F, s, C, T
EXTERNAL INFLUENCES
Environment Variables
The following internationalization variables affect the execution of ar:
LANG
Determines the locale category for native language, local customs and coded character set in the
absence of LC_ALL and other LC_* environment variables. If LANG is not specified or is set to the
A aA, ar behaves as if all internationalization
variables are set to C. See environ (5).
In addition, the following environment variable affects ar:
TMPDIR
Specifies a directory for temporary files (see tmpnam (3S)). The l modifier overrides the TMPDIR
variable, and TMPDIR overrides /var/tmp, the default directory.
DIAGNOSTICS
Create a new file (if one does not already exist) in archive format with its constituents entered in the
order indicated:
HP-UX 11i Version 2: August 2003 −3− Hewlett-Packard Company Section 1−−27
ar(1) ar(1)
WARNINGS.
FILES
/var/tmp/ar* Temporary files
SEE
STANDARDS CONFORMANCE
ar: SVID2, SVID3, XPG2, XPG3, XPG4, POSIX.2
Section 1−−28 Hewlett-Packard Company −4− HP-UX 11i Version 2: August 2003
as(1) as(1)
(Itanium(R)-based System Only)
NAME
as - assembler (Itanium Processor Family)
SYNOPSIS
as [ option ...] [ file ]
DESCRIPTION
as assembles the named source file file , or the standard input if file is not specified. The output of the
assembler is an ELF relocatable object file that must be processed by ld before it can be executed.
Assembler output is stored in file outfile . If the -o outfile option is not specified, the assembler constructs
a default name. If no source file is specified, outfile will be a.out; otherwise the .s suffix (if present) is
A aA
stripped from the name of the source file and .o is appended to it. Any directory names are removed
from the name so that the object file is always written to the current directory.
as does not not perform any macro processing. Standard C preprocessor constructs can be used if the
assembler is invoked through the C compiler.
Options
as recognizes the following options.
+A32 Specify that the source file contains 32-bit ABI targeted code. This option is overridden by
the .psr abi64 assembler directive in the source file. The object file is a 32-bit ELF file
by default.
+A64 Specify that the source file contains 64-bit ABI targeted code. This option is overridden by
the .psr abi32 assembler directive in the source file. The object file is a 64-bit ELF file
by default.
+E32 Specify that the object file should be 32-bit ELF. This is the default (see also +A32). Note
that it is valid to write 64-bit ABI targeted code to a 32-bit ELF file. All 32-bit addresses in
the object file are zero-extended to 64-bit upon loading. Zero-extension, however, may
invalidate any negative addresses (such as with relocations).
-elf32 See +E32.
+E64 Specify that the object file should be 64-bit ELF (see also +A64).
-elf64 See +E64.
-o outfile Produce an output object file with the name outfile instead of constructing a default name.
EXTERNAL INFLUENCES
Environment Variables
NLSPATH determines the location of the message catalog for the processing of LC_MESSAGES.
SDKROOT controls which assembler to invoke and enables support for multiple (cross-) development kits.
The SDKROOT variable points to the root of a specific SDK. No provision has been made to validate the
value of the variable or the suitability of the assembler that’s being invoked.
WARNINGS
The assembler does not check dependencies.
DIAGNOSTICS
When syntactic or semantic errors occur, a single-line diagnostic is displayed on standard error, together
with the line number and the file name in which it occurred.
FILES
/usr/lib/nls/C/as.cat assembler error message catalog
a.out default assembler output file
SEE ALSO
cc(1), elf(3E), ld(1).
HP-UX 11i Version 2: August 2003 −1− Hewlett-Packard Company Section 1−−29
asa(1) asa(1)
NAME
asa - interpret ASA carriage control characters
SYNOPSIS
asa [files ]
DESCRIPTION
asa interprets the output of FORTRAN programs that utilize ASA carriage control characters. It
processes either the files whose names are given as arguments, or the standard input if - is specified or if
A aA no file names are given. The first character of each line is assumed to be a control character. The follow-
ing control characters are interpreted as indicated:
(blank) Output a single new-line character before printing.
(space) (XPG4 only.) The rest of the line will be output without change.
0 A <newline> shall be output, then the rest of the input line.
1 Output a new-page character before printing.
+ Overprint previous line.
+ (XPG4 only.) The <newline> of the previous line shall be replaced with one or more
implementation-defined characters that causes printing to return to column position 1, fol-
lowed by the rest of the input line. If the + is the first character in the input, it shall have
the same effect as <space>.
Lines beginning with other than the above characters are treated the same as lines beginning with a
blank. The first character of a line is not printed. If any such lines appear, an appropriate diagnostic is
sent to standard error. This program forces the first line of each input file to start on a new page.
(XPG4 only.) The action of the asa utility is unspecified upon encountering any character other than those
listed above as the first character in a line.
To view the output of FORTRAN programs which use ASA carriage control characters and have them
appear in normal form, asa can be used as a filter:
a.out | asa | lp
The output, properly formatted and paginated, is then directed to the line printer. FORTRAN output
previously sent to a file can be viewed on a user terminal screen by using:
asa file, asa behaves as if all internationalization
variables are set to "C". See environ (5).
SEE ALSO
efl(1), f77(1), fsplit(1), ratfor(1).
STANDARDS CONFORMANCE
asa: XPG4, POSIX.2
Section 1−−30 Hewlett-Packard Company −1− HP-UX 11i Version 2: August 2003
A aA
Enter commands from a file to run at a specified time:
at -f job-file [-m] [-q queue ] -t spectime
at -f job-file [-m] [-q queue ] time [date ] [next timeunit +count timeunit ]:
• From the keyboard on separate lines immediately after the at or batch command line, followed by
the currently defined eof (end-of-file) character to end the input. The default eof is Ctrl-D. It can be
redefined in your environment (see stty (1)).
• With the -f option of the at command to read input from a script file.
• From output piped from a preceding command.
HP-UX 11i Version 2: August 2003 −1− Hewlett-Packard Company Section 1−−31
at(1) at(1)
Section 1−−32 Hewlett-Packard Company −2− HP-UX 11i Version 2: August 2003
at(1) at(1)
EXTERNAL INFLUENCES
Environment Variables
LC_TIME determines the format and contents of date and time strings.
LC_MESSAGES determines the language in which messages are displayed.
HP-UX 11i Version 2: August 2003 −3− Hewlett-Packard Company Section 1−−33
at(1) at(1)
LC_MESSAGES also determines the language in which the words days, hours, midnight, minutes,
months, next, noon, now, today, tomorrow, weeks, years, and their singular forms can also be
specified.
IF LC_TIME, all internationalization variables default
to "C" (see environ (5)).
RETURN VALUE
The exit code is set to one of the following: mes-
sage stan-
dard:
Section 1−−34 Hewlett-Packard Company −4− HP-UX 11i Version 2: August 2003
at(1) at(1) A aA trail-
ing operator is silently ignored.
If you use both -t and time ... in the same command, the first specified is accepted and the second is
silently ignored.
If the FIFO used to communicate with cron fills up, at is suspended until cron has read sufficient mes-
sages
HP-UX 11i Version 2: August 2003 −5− Hewlett-Packard Company Section 1−−35
attributes(1) attributes(1)
NAME
attributes - describe an audio file
SYNOPSIS
/opt/audio/bin/attributes filename
DESCRIPTION
This command provides information about an audio file, including file format, data format, sampling rate,
number of channels, data length and header length.
A aA EXAMPLE
The following is an example of using attributes on an audio file supplied with HP-UX.
$ /opt/audio/bin/attributes /opt/audio/sounds/welcome.au
File Name: /opt/audio/sounds/welcome.au
File Type: NeXT/Sun
Data Format: Mu-law
Sampling Rate: 22050
Channels: Mono
Duration: 1.972 seconds
Bits per Sample: 8
Header Length: 40 bytes
Data Length: 43492 bytes
AUTHOR
attributes was developed by HP.
Sun is a trademark of Sun MicroSystems, Inc.
NeXT is a trademark of NeXT Computers, Inc.
SEE ALSO
audio(5), asecure(1M), aserver(1M), convert(1), send_sound(1).
Using the Audio Developer’s Kit
Section 1−−36 Hewlett-Packard Company −1− HP-UX 11i Version 2: August 2003
awk(1) awk(1)
NAME
awk - pattern-directed scanning and processing language
SYNOPSIS
awk [-Ffs ] [-v var =value ] [ program -f progfile ... ] [ file ... ]
DESCRIPTION
awk scans each input file for lines that match any of a set of patterns specified literally in program or in
one or more files specified as -f progfile . With each pattern there can be an associated action that is to
be performed when a line in a file matches the pattern. Each line is matched against the pattern portion
of every pattern-action statement, and the associated action is performed for each matched pattern. The A aA
file name - means the standard input. Any file of the form var =value is treated as an assignment, not a
filename. An assignment is evaluated at the time it would have been opened if it were a filename, unless
the -v option is used.
An input line is made up of fields separated by white space, or by regular expression FS. The fields are
denoted $1, $2, ...; $0 refers to the entire line.
Options
awk recognizes the following options and arguments:
-F fs Specify regular expression used to separate fields. The default is to recognize space
and tab characters, and to discard leading spaces and tabs. If the -F option is
used, leading input field separators are no longer discarded.
-f progfile Specify an awk program file. Up to 100 program files can be specified. The
pattern-action statements in these files are executed in the same order as the files
were specified.
-v var =value Cause var =value assignment to occur before the BEGIN action (if it exists) is exe-
cuted.
Statements
A pattern-action statement has the form:
pattern { action }
A missing { action } means print the line; a missing pattern always matches. Pattern-action statements
are separated by new-lines or semicolons..
delete array [ expression ] # delete an array element.
exit [ expression ] # exit immediately; status is expression.
Statements are terminated by semicolons, newlines or right braces. An empty expression-list stands for
$0. String constants are quoted (" "), with the usual C escapes recognized within. Expressions take on
string or numeric values as appropriate, and are built using the operators +, -, *, /, %, ˆ (exponentia-
tion), and concatenation (indicated by a blank). The operators ++, - -, +=, -=, *=, /=, %=, ˆ=, **=, >,
>=, <, <=, ==, !=, "" (double quotes, string conversion operator), and ?: are also available in expres-
sions. Variables can be scalars, array elements (denoted x [i ]) or fields. Variables are initialized to the
null string. Array subscripts can be any string, not necessarily numeric (this allows for a form of associa-
tive memory). Multiple subscripts such as [ i ,j ,k ] are permitted. The constituents are concatenated,
separated by the value of SUBSEP.
HP-UX 11i Version 2: August 2003 −1− Hewlett-Packard Company Section 1−−37
awk(1) awk(1)
The print statement prints its arguments on the standard output (or on a file if >file or >>file is
present or on a pipe if |cmd is present), separated by the current output field separator, and terminated
by the output record separator. file and cmd can be literal names or parenthesized expressions. Identical
string values in different statements denote the same open file. The printf statement formats its
expression list according to the format (see printf (3)).
Built-In Functions
The built-in function close(expr ) closes the file or pipe expr opened by a print or printf state-
ment or a call to getline with the same string-valued expr . This function returns zero if successful,
Patterns
Patterns are arbitrary Boolean combinations (with ! || &&) of regular expressions and relational
expressions. awk supports Extended Regular Expressions as described in regexp (5). Isolated regular
expressions in a pattern apply to the entire line. Regular expressions can also occur in relational expres-
sions, using the operators ˜ and !˜. /re / is a constant regular expression; any string (constant or vari-
able) can be used as a regular expression, except in the position of an isolated regular expression in a pat-
tern.
Section 1−−38 Hewlett-Packard Company −2− HP-UX 11i Version 2: August 2003
awk(1) awk(1)
A pattern can ˜ (matches) or !˜ (does
not match). A conditional is an arithmetic expression, a relational expression, or a Boolean combination
of the two.
A aA
The special patterns BEGIN and END can be used to capture control before the first input line is read
and after the last. BEGIN and END do not combine with other patterns.
Special Characters
The following special escape sequences are recognized by awk in both regular expressions and strings:
Escape Meaning
\a alert character
\b backspace character
\f form-feed character
\n new-line character
\r carriage-return character
\t tab character
\v vertical-tab character
\nnn 1- to 3-digit octal value nnn
\xhhh 1- to n-digit hexadecimal number
Variable Names
Variable names with special meanings are:
FS Input field separator regular expression; a space character by default; also sett-
able by option -Ffs.
NF The number of fields in the current record.
NR The ordinal number of the current record from the start of input. Inside a
BEGIN action the value is zero. Inside an END action the value is the number of
the last record processed.
FNR The ordinal number of the current record in the current file. Inside a BEGIN
action the value is zero. Inside an END action the value is the number of the last
record processed in the last file processed.
FILENAME A pathname of the current input file.
RS The input record separator; a newline character by default.
OFS The print statement output field separator; a space character by default.
ORS The print statement output record separator; a newline character by default.
OFMT Output format for numbers (default %.6g). If the value of OFMT is not a
floating-point format specification, the results are unspecified.
CONVFMT Internal conversion format for numbers (default %.6g). If the value of
CONVFMT is not a floating-point format specification, the results are unspecified.
Refer to the UNIX95 variable under EXTERNAL INFLUENCES for additional
information on CONVFMT.
SUBSEP The subscript separator string for multi-dimensional arrays; the default value is
"\034"
ARGC The number of elements in the ARGV array.
ARGV An array of command line arguments, excluding options and the program argu-
ment numbered from zero to ARGC-1.
HP-UX 11i Version 2: August 2003 −3− Hewlett-Packard Company Section 1−−39
awk(1) awk(1)
The arguments in ARGV can be modified or added to; ARGC can be altered. As
each input file ends, awk will treat the next non-null element of ARGV, up to the
current value of ARGC-1, inclusive, as the name of the next input file. Thus, set-
ting an element of ARGV to null means that it will not be treated as an input file.
The name - indicates the standard input. If an argument matches the format of
an assignment operand, this argument will be treated as an assignment rather
than a file argument.
ENVIRON Array of environment variables; subscripts are names. For example, if environ-
ment variable V=thing, ENVIRON["V"] produces thing.
A aA RSTART The starting position of the string matched by the match function, numbering
from 1. This is always equivalent to the return value of the match function.
RLENGTH The length of the string matched by the match function.
Functions can be defined (at the position of a pattern-action statement) as follows:
function foo(a, b, c) { ...; return x }
Parameters are passed by value if scalar, and by reference if array name. Functions can be called recur-
sively. Parameters are local to the function; all other variables are global.
Note that if pattern-action statements are used in an HP-UX command line as an argument to the awk
command, the pattern-action statement must be enclosed in single quotes to protect it from the shell. For
example, to print lines longer than 72 characters, the pattern-action statement as used in a script (-f
progfile command form) is:
length > 72
The same pattern action statement used as an argument to the awk command is quoted in this manner:
awk ’length > 72’
EXTERNAL INFLUENCES
Environment Variables
UNIX95 If defined, specifies to use the XPG4 behavior for this command. The changes for XPG4
include support for the entire behaviour specified above and include the following
behavioral change:
• If CONVFMT is not specified and UNIX95 is set, %d is used as the internal conversion
format for numbers by default.
LANG Provides a default value for the internationalization variables that are unset or null. If
LANG is unset or null, the default value of "C" (see lang (5)) is used. If any of the interna-
tionalization variables contains an invalid setting, awk will behave as if all internation-
alization variables are set to "C". See environ (5).
LC_ALL If set to a non-empty string value, overrides the values of all the other internationaliza-
tion variables.
LC_CTYPE Determines the interpretation of text as single and/or multi-byte characters, the
classification of characters as printable, and the characters matched by character class
expressions in regular expressions.
LC_NUMERIC Determines the radix character used when interpreting numeric input, performing
conversion between numeric and string values and formatting numeric output. Regard-
less of locale, the period character (the decimal-point character of the POSIX locale) is
the decimal-point character recognized in processing awk programs (including assign-
ments in command-line arguments).
LC_COLLATE Determines the locale for the behavior of ranges, equivalence classes and multi-character
collating elements within.
PATH Determines the search path when looking for commands executed by system(cmd), or
input and output pipes.
Section 1−−40 Hewlett-Packard Company −4− HP-UX 11i Version 2: August 2003
awk(1) awk(1)
In addition, all environment variables will be visible via the awk variable ENVIRON.
DIAGNOSTICS
awk supports up to 199 fields ($1, $2, ..., $199) per record.
EXAMPLES
Print lines longer than 72 characters: A aA command (see echo (1)):
BEGIN { # Simulate echo(1)
for (i = 1; i < ARGC; i++) printf "%s ", ARGV[i]
printf "\n"
exit }
AUTHOR
awk was developed by AT&T, IBM, OSF, and HP.
SEE ALSO
lex(1), sed(1).
A. V. Aho, B. W. Kernighan, P. J. Weinberger: The AWK Programming Language , Addison-Wesley, 1988.
STANDARDS CONFORMANCE
awk: SVID2, SVID3, XPG2, XPG3, XPG4, POSIX.2
HP-UX 11i Version 2: August 2003 −5− Hewlett-Packard Company Section 1−−41
banner(1) banner(1)
NAME
banner - make posters in large letters
SYNOPSIS
banner strings
DESCRIPTION
banner prints its arguments (each up to 10 characters long) in large letters on the standard output.
Each argument is printed on a separate line. Note that multiple-word arguments must be enclosed in
quotes in order to be printed on the same line.
A bA EXAMPLES
Print the message ‘‘Good luck Susan’’ in large letters on the screen:
banner "Good luck" Susan
The words Good luck are displayed on one line, and Susan is displayed on a second line.
WARNINGS
This command is likely to be withdrawn from X/Open standards. Applications using this command might
not be portable to other vendors’ platforms.
SEE ALSO
echo(1).
STANDARDS CONFORMANCE
banner: SVID2, SVID3, XPG2, XPG3
Section 1−−42 Hewlett-Packard Company −1− HP-UX 11i Version 2: August 2003
basename(1) basename(1)
NAME
basename, dirname - extract portions of path names
SYNOPSIS
basename string [ suffix ]
dirname [ string ]
DESCRIPTION
basename deletes any prefix ending in / and the suffix (if present in string ) from string , and prints the
result on the standard output. If string consists entirely of slash characters, string is set to a single slash
character. If there are any trailing slash characters in string , they are removed. If the suffix operand is
present but not identical to the characters remaining in string , but it is identical to a suffix of the charac- A bA
ters remaining in string , the suffix is removed from string . basename is normally used inside com-
mand substitution marks ( `... ` ) within shell procedures.
dirname delivers all but the last level of the path name in string . If string does not contain a directory
component, dirname returns ., indicating the current working directory.
EXTERNAL INFLUENCES
Environment Variables
LC_CTYPE determines the interpretation of string and, in the case of basename, suffix, basename and dirname behave as if all internationalization variables are set to "C".
See environ (5).
EXAMPLES
The following shell script, invoked with the argument /usr/src/cmd/cat.c, compiles the named file
and moves the output to a file named cat in the current directory:
cc $1
mv a.out ‘basename $1 .c‘
The following example sets the shell variable NAME to /usr/src/cmd:
NAME=‘dirname /usr/src/cmd/cat.c‘
RETURNS
basename and dirname return one of the following values:
0 Successful completion.
1 Incorrect number of command-line arguments.
SEE ALSO
expr(1), sh(1).
STANDARDS CONFORMANCE
basename: SVID2, SVID3, XPG2, XPG3, XPG4, POSIX.2
dirname: SVID2, SVID3, XPG2, XPG3, XPG4, POSIX.2
HP-UX 11i Version 2: August 2003 −1− Hewlett-Packard Company Section 1−−43
bc(1) bc(1)
NAME
bc - arbitrary-precision arithmetic language
SYNOPSIS
bc [-c] [-l] [ file ... ]
DESCRIPTION
bc is an interactive processor for a language that resembles C but provides unlimited-precision arith-
metic. It takes input from any files given, then reads the standard input.
Options:
A bA bc recognizes the following command-line options:
-c Compile only. bc is actually a preprocessor for dc which bc invokes automatically
(see dc(1)). Specifying -c prevents invoking dc, and sends the dc input to standard out-
put.
-l causes an arbitrary-precision math library to be predefined. As a side effect, the scale
factor is set.
Program Syntax:
L a single letter in the range a through z;
E expression;
S statement;
R relational expression.
Names:
Names include:
simple variables: L
array elements: L [ E ]
The words ibase,obase, and scale
stacks: L
Other Operands
Other operands include:
Arbitrarily long numbers with optional sign and decimal point.
(E)
sqrt ( E )
length ( E ) number of significant decimal digits
scale ( E ) number of digits right of decimal point
L ( E , ... , E )
Strings of ASCII characters enclosed in quotes ( " ).
Arithmetic Operators:
Arithmetic operators yield an E as a result and include:
+ - * / % ˆ ( % is remainder (not mod, see below); ˆ is power).
++ -- (prefix and append; apply to names)
= += -= *= /= %= ˆ=
Relational Operators
Relational operators yield an R when used as E op E:
== <= >= != < >
Section 1−−44 Hewlett-Packard Company −1− HP-UX 11i Version 2: August 2003
bc(1) bc(1)
Statements
E
{ S ; ... ; S }
if ( R ) S
while ( R ) S
for ( E ; R ; E ) S
null statement
break
quit
Function Definitions:
define L ( L ,..., L ) {
auto L, ... , L
A bA
S; ... S
return ( E )
}
EXAMPLES
Define
}
}
Print approximate values of the exponential function of the first ten integers.
HP-UX 11i Version 2: August 2003 −2− Hewlett-Packard Company Section 1−−45
bc(1) bc(1)
WARNINGS
There are currently no && (AND) or || (OR) comparisons.
The for statement must have all three expressions.
quit is interpreted when read, not when executed.
bc’s parser is not robust in the face of input errors. Some simple expression such as 2+2 helps get it back
into phase.
The assignment operators: =+ =- =* =/ =% and =ˆ are obsolete. Any occurences of these
A bA operators cause a syntax error with the exception of =- which is interpreted as = followed by a unary
minus.
Neither entire arrays nor functions can be passed as function parameters.
FILES
/usr/bin/dc desk calculator executable program
/usr/lib/lib.b mathematical library
SEE ALSO
bs(1), dc(1).
bc tutorial in Number Processing Users Guide
STANDARDS CONFORMANCE
bc: XPG4, POSIX.2
Section 1−−46 Hewlett-Packard Company −3− HP-UX 11i Version 2: August 2003
bdiff(1) bdiff(1)
NAME
bdiff - diff for large files
SYNOPSIS
bdiff file1 file2 [ n ] [-s]
DESCRIPTION
bdiff compares two files and produces output identical to what would be produced by diff (see
diff(1)), specifying changes that must be made to make the files identical. bdiff is designed for han-
dling files that are too large for diff, but it can be used on files of any length.
bdiff processes files as follows:
• Ignore lines common to the beginning of both files.
A bA
• Split the remainder of each file into n-line segments, then execute diff on corresponding seg-
ments. The default value of n is 3500.
Command-Line Arguments
bdiff recognizes the following command-line arguments:
file1
file2 Names of two files to be compared by bdiff. If file1 or file2 (but not both) is -, stan-
dard input is used instead.
n If a numeric value is present as the third argument, the files are divided into n-line
segments before processing by diff. Default value for n is 3500. This option is useful
when 3500-line segments are too large for processing by diff.
-s Silent option suppresses diagnostic printing by bdiff, but does not suppress possible
error messages from diff). If the n and -s arguments are both used, the n argument
must precede the -s option on the command line or it will not be properly recognized.diff behaves as if all internationaliza-
tion variables are set to "C". See environ (5).
DIAGNOSTICS
both files standard input (bd2)
Standard input was specified for both files. Only one file can be specified as standard input.
non-numeric limit (bd4)
A non-numeric value was specified for the n (third) argument.
EXAMPLES
Find differences between two large files: file1 and file2, and place the result in a new file named
diffs_1.2.
bdiff file1 file2 >diffs_1.2
Do the same, but limit file length to 1400 lines; suppress error messages:
bdiff file1 file2 1400 -s >diffs_1.2
WARNINGS
bdiff produces output identical to output from diff, and makes the necessary line-number corrections
so that the output looks like it was processed by diff. However, depending on where the files are split,
bdiff may or may not find a fully minimized set of file differences.
HP-UX 11i Version 2: August 2003 −1− Hewlett-Packard Company Section 1−−47
bdiff(1) bdiff(1)
FILES
/tmp/bd??????
SEE ALSO
diff(1).
A bA
Section 1−−48 Hewlett-Packard Company −2− HP-UX 11i Version 2: August 2003
bs(1) bs(1)
NAME
bs - a compiler/interpreter for modest-sized programs
SYNOPSIS
bs [ file [ args ] ]
DESCRIPTION
bs is a remote descendant of BASIC and SNOBOL4 with some C language added. bs is designed for pro-
gramming tasks where program development time is as important as the resulting speed of execution.
Formalities of data declaration and file/process manipulation are minimized. Line-at-a-time debugging,
the trace and dump statements, and useful run-time error messages all simplify program testing.
Furthermore, incomplete programs can be debugged; inner functions can be tested before outer functions
have been written, and vice versa.
A bA
If file is specified on the command-line, it is used for input before any input is taken from the keyboard.
By default, statements read from file are compiled for later execution. Likewise, statements entered from
the keyboard are normally executed immediately (see compile and execute below). Unless the final
operation is assignment, the result of an immediate expression statement is printed.
bs programs are made up of input lines. If the last character on a line is a \, the line is continued. bs
accepts lines of the following form:
statement
label statement
A label is a name (see below) followed by a colon. A label and a variable can have the same name.
A bs statement is either an expression or a keyword followed by zero or more expressions. Some key-
words (clear, compile, !, execute, include, ibase, obase, and run) are always executed as
they are compiled.
Statement Syntax:
expression The expression is executed for its side effects (value, assignment, or function call). The
details of expressions follow the description of statement types below.
break break exits from the innermost for/while loop.
clear Clears the symbol table and compiled statements. clear is executed immediately.
compile [expression]
Succeeding statements are compiled (overrides the immediate execution default). The
optional expression is evaluated and used as a file name for further input. A clear is
associated with this latter case. compile is executed immediately.
continue continue transfers to the loop-continuation of the current for/while loop.
dump [name] The name and current value of every non-local variable is printed. Optionally, only the
named variable is reported. After an error or interrupt, the number of the last statement
is displayed. The user-function trace is displayed after an error or stop that occurred
in a function.
edit A call is made to the editor selected by the EDITOR environment variable if it is present,
or ed(1) if EDITOR is undefined or null. If the file argument is present on the command
line, file is passed to the editor as the file to edit (otherwise no file name is used). Upon
exiting the editor, a compile statement (and associated clear) is executed giving that
file name as its argument.
exit [expression]
Return to system level. The expression is returned as process status.
execute Change to immediate execution mode (an interrupt has a similar effect). This statement
HP-UX 11i Version 2: August 2003 −1− Hewlett-Packard Company Section 1−−49
bs(1) bs(1)
Section 1−−50 Hewlett-Packard Company −2− HP-UX 11i Version 2: August 2003
bs(1) bs(1)
Expression Syntax:
name A name is used to specify a variable. Names are composed of a letter (uppercase or
lowercase) optionally followed by letters and digits. Only the first six characters of a
name are significant. Except for names declared in fun statements, all names are global
to the program. Names can take on numeric (double float) values, string values, or can
be associated with input/output (see the built-in function open( ) below).
name ( [expression [ , expression] ... ] )
Functions can be called by a name followed by the arguments in parentheses separated
by commas. Except for built-in functions (listed below), the name must be defined with a
fun statement. Arguments to functions are passed by value. If the function is undefined,
the call history to the call of that function is printed, and a request for a return value (as
an expression) is made. The result of that expression is taken to be the result of the
undefined function. This permits debugging programs where not all the functions are yet
defined. The value is read from the current input file.
name [ expression [ , expression ] ... ]
This syntax is used to reference either arrays or tables (see built-in table functions
below). For arrays, each expression is truncated to an integer and used as a specifier for
the name. The resulting array reference is syntactically identical to a name; a[1,2] is
the same as a[1][2]. The truncated expressions are restricted to values between 0 and
32 767.
number A number is used to represent a constant value. A number is written in Fortran style,
and contains digits, an optional decimal point, and possibly a scale factor consisting of an
e followed by a possibly signed exponent.
string Character strings are delimited by " characters. The \ escape character allows the dou-
ble quote (\"), new-line (\n), carriage return (\r), backspace (\b), and tab (\t) charac-
ters to appear in a string. Otherwise, \ stands for itself.
( expression ) Parentheses are used to alter the normal order of evaluation.
( expression , expression [ , expression ... ] ) [ expression ]
The bracketed expression is used as a subscript to select a comma-separated expression
from the parenthesized list. List elements are numbered from the left, starting at zero.
The expression:
( False, True )[ a == b ]
has the value True if the comparison is true.
? expression The interrogation operator tests for the success of the expression rather than its value.
At the moment, it is useful for testing end-of-file (see examples in the Programming Tips
section below), the result of the eval built-in function, and for checking the return from
user-written functions (see freturn). An interrogation ‘‘trap’’ (end-of-file, etc.) causes
an immediate transfer to the most recent interrogation, possibly skipping assignment
statements or intervening function levels.
- expression The result is the negation of the expression.
HP-UX 11i Version 2: August 2003 −3− Hewlett-Packard Company Section 1−−51
bs(1) bs(1)
++ name Increments the value of the variable (or array reference). The result is the new value.
- - name Decrements the value of the variable. The result is the new value.
!expression The logical negation of the expression. Watch out for the shell escape command.
expression operator expression Common functions of two arguments are abbreviated by the two
arguments separated by an operator denoting the function. Except for the assignment,
concatenation, and relational operators, both operands are converted to numeric form
before the function is applied.
Built-in Functions:
Dealing with arguments
arg(i ) is the value of the i-th actual parameter on the current level of function call. At level
zero, arg returns the i-th command-line argument (arg(0) returns bs).
narg( ) returns the number of arguments passed. At level zero, the command argument count is
returned.
Mathematical
abs(x ) is the absolute value of x.
atan(x ) is the arctangent of x. Its value is between −π/2 and π/2.
ceil(x ) returns the smallest integer not less than x.
cos(x ) is the cosine of x (radians).
exp(x ) is the exponential function of x.
floor(x ) returns the largest integer not greater than x.
log(x ) is the natural logarithm of x.
rand( ) is a uniformly distributed random number between zero and one.
sin(x ) is the sine of x (radians).
sqrt(x ) is the square root of x.
String operations
size(s ) the size (length in bytes) of s is returned.
format(f , a )
returns the formatted value of a. f is assumed to be a format specification in the style of
printf (3S). Only the % ... f, % ... e, and % ... s types are safe. Since it is not
Section 1−−52 Hewlett-Packard Company −4− HP-UX 11i Version 2: August 2003
bs(1) bs(1)
always possible to know whether a is a number or a string when the format call is
coded, coercing a to the type required by f by either adding zero (for e or f format) or
concatenating (_) the null string (for s format) should be considered.
index(x , y ) returns the number of the first position in x that any of the characters from y matches.
No match yields zero.
trans(s , f, t )
Translates characters of the source s from matching characters in f to a character in the
same position in t . Source characters that do not appear in f are copied to the result. If
the string f is longer than t , source characters that match in the excess portion of f do not
appear in the result.
substr(s , start, width ) A bA
returns the sub-string of s defined by the start ing position and width .
match(string , pattern )
mstring(n ) The pattern is a regular expression according to the Basic Regular Expression definition
(see regexp (5)). mstring returns the n-th (1 <= n <= 10) substring of the subject that
occurred between pairs of the pattern symbols \( and \) for the most recent call to
match . To succeed, patterns must match the beginning of the string (as if all patterns
began with ˆ). The function returns the number of characters matched. For example:
match("a123ab123", ".*\([a-z]\)") == 6
mstring(1) == "b"
File handling
open(name , file, function )
close(name )
name argument must be a bs variable name (passed as a string). For the open, the file
argument can be:
1. a 0 (zero), 1, or 2 representing standard input, output, or error output, respec-
tively;
2. a string representing a file name; or
3. a string beginning with an ! representing a command to be executed (via sh
-c). The function argument must be either r (read), w (write), W (write
without new-line), or a (append). After a close, name reverts to being an
ordinary variable. If name was a pipe, a wait() is executed before the close
completes (see wait (2)). The bs exit command does not do such a wait. The
initial associations are:
open("get", 0, "r")
open("put", 1, "w")
open("puterr", 2, "w")
Examples are given in the following section.
access(s , m )
executes access() (see access (2)).
ftype(s ) returns a single character file type indication: f for regular file, p for FIFO (i.e., named
pipe), d for directory, b for block special, or c for character special.
Tables
table(name , size )
A table in bs is an associatively accessed, single-dimension array. ‘‘Subscripts’’ ord-
erly progression of key values). Where the item function accesses values, the key
function accesses the ‘‘subscript’’ of the previous item call. It fails (or in the absence of
an interrogate operator, returns null) if there was no valid subscript for the previ-
ous item call. The name argument should not be quoted. Since exact table sizes are
HP-UX 11i Version 2: August 2003 −5− Hewlett-Packard Company Section 1−−53
bs(1) bs
A bA elements in the table. Null is, however, a legal ‘‘subscript’’.
iskey(name , word )
iskey tests whether the key word exists in the table name and returns one for true, zero
for false.
Section 1−−54 Hewlett-Packard Company −6− HP-UX 11i Version 2: August 2003
bs(1) bs(1)
EXTERNAL INFLUENCES
Environment Variables A bA
LC_COLLATE determines the collating sequence used in evaluating regular expressions.
LC_CTYPE determines the characters matched by character class expressions in regular expressions.s behaves as if all internationalization variables are set to "C".
See environ (5).
EXAMPLES
Using bs as a calculator ($ is the shell prompt):
$ bs
# Distance (inches) light travels in a nanosecond.
186000 * 5280 * 12 / 1e9
11.78496
...
# Compound interest (6% for 5 years on $1,000).
int = .06 / 4
bal = 1000
for i = 1 5*4 bal = bal + bal*int
bal - 1000
346.855007
...
exit
The outline of a typical bs program:
# initialize things:
var1 = 1
open("read", "infile", "r")
...
# compute:
while ?(str = read)
...
# clean up:
close("read")
...
# last statement executed (exit or stop):
exit
# last input line:
run
Input/Output examples:
# Copy file oldfile to file newfile.
open("read", "oldfile", "r")
open("write", "newfile", "w")
HP-UX 11i Version 2: August 2003 −7− Hewlett-Packard Company Section 1−−55
bs(1) bs(1)
...
while ?(write = read)
...
# close "read" and "write":
close("read")
close("write")
# Pipe between commands.
open("ls", "!ls *", "r")
open("pr", "!pr -2 -h ’List’", "w")
while ?(pr = ls) ...
...
A bA # be sure to close (wait for) these:
close("ls")
close("pr")
WARNINGS
The graphics mode (plot ...) is not particularly useful unless the tplot command is available on your
system.
bs is not tolerant of some errors. For example, mistyping a fun declaration is difficult to correct
because a new definition cannot be made without doing a clear. The best solution in such a case is to
start by using the edit command.
SEE ALSO
ed(1), sh(1), access(2), printf(3S), stdio(3S), lang(5), regexp(5).
See Section (3M) for a further description of the mathematical functions.
pow() is used for exponentiation — see exp (3M));
bs uses the Standard I/O package.
Section 1−−56 Hewlett-Packard Company −8− HP-UX 11i Version 2: August 2003
cal(1) cal(1)
NAME
cal - print calendar
SYNOPSIS
cal [ [ month ] year ]
DESCRIPTION
cal prints a calendar for the specified year. If a month is also specified, a calendar just for that month is
printed. If neither is specified, a calendar for the present month is printed. year can be between 1 and
9999. month is a decimal number between 1 and 12. The calendar produced is a Gregorian calendar.
EXTERNAL INFLUENCES
Environment Variables
LANG determines the locale to use for the locale categories when both LC_ALL and the corresponding A cA_TIME determines the format and contents of the calendar.
TZ determines the timezone used to calculate the value of the current month.
If any internationalization variable contains an invalid setting, cal behaves as if all internationalization
variables are set to "C". See environ (5).
EXAMPLES
The command:
cal 9 1850
prints the calendar for September, 1850 on the screen as follows:
September 1850
S M Tu W Th F S
1 2 3 4 5 6 7
8 9 10 11 12 13 14
15 16 17 18 19 20 21
22 23 24 25 26 27 28
29 30
However, for XPG4 the output looks like below:
Sep 1850
Sun Mon Tue Wed Thu Fri Sat
1 2 3 4 5 6 7
8 9 10 11 12 13 14
15 16 17 18 19 20 21
22 23 24 25 26 27 28
29 30
WARNINGS
The year is always considered to start in January even though this is historically naive.
Beware that cal 83 refers to the early Christian era, not the 20th century.
STANDARDS CONFORMANCE
cal: SVID2, SVID3, XPG2, XPG3, XPG4
HP-UX 11i Version 2: August 2003 −1− Hewlett-Packard Company Section 1−−57
calendar(1) calendar(1)
NAME
calendar - reminder service
SYNOPSIS
calendar [-]
DESCRIPTION
calendar consults the file calendar in the current directory and prints out lines containing today’s or
tomorrow’s date anywhere in the line. On weekends, ‘‘tomorrow’’ extends through Monday.
When a - command-line argument is present, calendar searches for the file calendar in each
user’s home directory, and sends any positive results to the user by mail (see mail (1)). Normally this is
done daily in the early morning hours under the control of cron (see cron (1M)). When invoked by
A cA cron, calendar reads the first line in the calendar file to determine the user’s environment.
Language-dependent information such as spelling and date format (described below) are determined by
the user-specified LANG statement in the calendar file. This statement should be of the form
LANG=language where language is a valid language name (see lang (5)). If this line is not in the
calendar file, the action described in the EXTERNAL INFLUENCES Environment Variable section is
taken.
calendar is concerned with two fields: month and day. A month field can be expressed in three
different formats: a string representing the name of the month (either fully spelled out or abbreviated), a
numeric month, or an asterisk (representing any month). If the month is expressed as a string represent-
ing the name of the month, the first character can be either upper-case or lower-case; other characters
must be lower-case. The spelling of a month name should match the string returned by calling
nl_langinfo() (see nl_langinfo (3C)). The day field is a numeric value for the day of the month.
Month-Day Formats
If the month field is a string, it can be followed by zero or more blanks. If the month field is numeric, it
must be followed by either a slash (/) or a hyphen (-). If the month field is an asterisk (*), it must be fol-
lowed by a slash (/). The day field can be followed immediately by a blank or non-digit character.
Day-Month Formats
The day field is expressed as a numeral. What follows the day field is determined by the format of the
month. If the month field is a string, the day field must be followed by zero or one dot (.) followed by
zero or more blanks. If the month field is a numeral, the day field must be followed by either a slash (/)
or a hyphen (-). If the month field is an asterisk, the day field must be followed by a slash (/).
EXTERNAL INFLUENCES
Environment Variables
LC_TIME determines the format and contents of date and time strings when no LANG statement is
specified in the calendar file.
LANG determines the language in which messages are displayed.
If, calendar behaves as if all internationalization variables are set to "C". See environ (5).
EXAMPLES
The following calendar file illustrates several formats recognized by calendar :
LANG=en_US.roman8
Friday, May 29th: group coffee meeting
meeting with Boss on June 3.
3/30/87 - quarter end review
4-26 Management council meeting at 1:00 pm
It is first of the month ( */1 ); status report due.
In the following calendar file, dates are expressed according to European English usage:
Section 1−−58 Hewlett-Packard Company −1− HP-UX 11i Version 2: August 2003
calendar(1) calendar(1)
LANG=en_GB.roman8
On 20 Jan. code review
Jim’s birthday is on the 3. February
30/3/87 - quarter end review
26-4 Management council meeting at 1:00 pm
It is first of the month ( 1/* ); status report due.
WARNINGS
To get reminder service, either your calendar must be public information or you must run calendar
from your personal crontab file, independent of any calendar - run systemwide. Note that if you
run calendar yourself, the calendar file need not reside in your home directory.
calendar’s extended idea of ‘‘tomorrow’’ does not account for holidays.
This command is likely to be withdrawn from X/Open standards. Applications using this command might A cA
not be portable to other vendors’ platforms.
AUTHOR
calendar was developed by AT&T and HP.
FILES
calendar
/tmp/cal*
/usr/lbin/calprog to figure out today’s and tomorrow’s dates
/usr/bin/crontab
/etc/passwd
SEE ALSO
cron(1M), nl_langinfo(3C), mail(1), environ(5).
STANDARDS CONFORMANCE
calendar: SVID2, SVID3, XPG2, XPG3
HP-UX 11i Version 2: August 2003 −2− Hewlett-Packard Company Section 1−−59
cat(1) cat(1)
NAME
cat - concatenate, copy, and print files
SYNOPSIS
cat [-benrstuv] file ...
DESCRIPTION
cat reads each file in sequence and writes it on the standard output. Thus:
cat file
prints file on the default standard output device;
cat file1 file2 > file3
A cA concatenates file1 and file2 , and places the result in file3 .
If - is appears as a file argument, cat uses standard input. To combine standard input and other files,
use a combination of - and file arguments.
Options
cat recognizes the following options:
-b Omit line numbers from blank lines when -n option is specified. If this option is specified, the
-n option is automatically selected.
-e Print a $ character at the end of each line (prior to the new-line). If this option is specified,
the -v option is automatically selected.
-n Display output lines preceded by line numbers, numbered sequentially from 1.
-r Replace multiple consecutive empty lines with one empty line, so that there is never more than
one empty line between lines containing characters.
-s Silent option. cat suppresses error messages about non-existent files, identical input and
output, and write errors. Normally, input and output files cannot have identical names unless
the file is a special file.
-t Print each tab character as ˆI and form feed character as ˆL. If this option is specified, the
-v option is automatically selected.
-u Do not buffer output (handle character-by-character). Normally, output is buffered.
-v Cause non-printing characters (with the exception of tabs, new-lines and form-feeds) to be
printed visibly. Control characters are printed using the form ˆX (Ctrl-X), and the DEL char-
acter (octal 0177) is printed as ˆ? (see ascii (5)). Single-byte control characters whose most
significant bit is set, are printed using the form M-ˆx, where x is the character specified by the
seven low order bits. All other non-printing characters are printed as M-x, where x is the
character specified by the seven low order bits. This option is influenced by the LC_CTYPE
environment variable and its corresponding code set.
EXTERNAL INFLUENCES
Environment Variables
LANG provides a default value for the internationalization variables that are unset or null. If LANG is
unset or null, the default value of "C" (see lang (5)) is used. If any of the internationalization variables
contains an invalid setting, cat60 Hewlett-Packard Company −1− HP-UX 11i Version 2: August 2003
cat(1) cat(1)
RETURN VALUE
Exit values are:
0 Successful completion.
>0 Error condition occurred.
EXAMPLES
To create a zero-length file, use any of the following:
cat /dev/null > file
cp /dev/null file
touch file A cA
The following prints ˆI for all the occurrences of tab character in file1
cat -t file1
To suppress error messages about files that do not exist, use:
cat -s file1 file2 file3 > file
If file2 does not exist, the above command concatenates file1 and file3 without reporting the error on file2 .
The result is the same if -s option is not used, except that cat displays the error message.
To view non-printable characters in file2 , use:
cat -v file2
WARNINGS
Command formats such as
cat file1 file2 > file1
overwrites the data in file1 before the concatenation begins, thus destroying the file. Therefore, be care-
ful when using shell special characters.
SEE ALSO
cp(1), more(1), pg(1), pr(1), rmnl(1), ssp(1).
STANDARDS CONFORMANCE
cat: SVID2, SVID3, XPG2, XPG3, XPG4, POSIX.2
HP-UX 11i Version 2: August 2003 −2− Hewlett-Packard Company Section 1−−61
cc_bundled(1) cc_bundled(1)
(Bundled C Compiler - Limited Functionality)
NAME
cc - bundled C compiler
SYNOPSIS
cc [options ] files
DESCRIPTION
This manual page describes the Bundled C compiler. cc invokes the HP-UX bundled C compiler. C
source code is compiled directly to object code.
The command uses the ctcom (Itanium(R)-based systems) or ccom (PA-RISC, Precision Architecture)
compiler for preprocessing, syntax and type checking, as well as for code generation.
cc accepts several types of arguments as files :
A cA (PA-RISC) or libx .a in an attempt to resolve currently unresolved
external references. Because a library is searched when its name is encountered, placement
of a -l is significant. If a file contains an unresolved external reference, the library contain-
ing.
Other Suffixes
All other arguments, such as those names ending with .o, .a, or .so are taken to be relo-
catable parame-
ters tem-
porary files, overriding the default directory /var/tmp.
Options).
Section 1−−62 Hewlett-Packard Company −1− HP-UX 11i Version 2: August 2003
cc_bundled(1) cc_bundled(1)
(Bundled C Compiler - Limited Functionality)
HP-UX 11i Version 2: August 2003 −2− Hewlett-Packard Company Section 1−−63
cc_bundled(1) cc_bundled(1)
(Bundled C Compiler - Limited Functionality)
Section 1−−64 Hewlett-Packard Company −3− HP-UX 11i Version 2: August 2003
cc_bundled(1) cc_bundled(1)
(Bundled C Compiler - Limited Functionality)
Options can-
not assure that location zero acts as if it was initialized to zero or is locked at zero, the hardware should
act as if the -z flag is always set.
A cA
EXTERNAL INFLUENCES
Environment Variables com-
ponents.
DIAGNOSTICS
The diagnostics produced by the compiler itself are intended to be self-explanatory. Occasional messages
may be produced by the assembler or the link editor.
If any errors occur before cc is completed, a non-zero value is returned. Otherwise, zero is returned.
DEPENDENCIES
file.c C input file
file.i previously preprocessed cc input file
file.o object file
file.so shared library, created with -b on Itanium-based systems
file.sl shared library, created with -b on PA-RISC
a.out linked executable output file
/var/tmp/* temporary files used by the compiler (Itanium-based systems)
/var/tmp/ctm* temporary files used by the compiler (PA-RISC)
HP-UX 11i Version 2: August 2003 −4− Hewlett-Packard Company Section 1−−65
cc_bundled(1) cc_bundled(1)
(Bundled C Compiler - Limited Functionality)
/usr/ccs/bin/cc C driver
/usr/ccs/bin/cc_bundled C driver
/usr/ccs/lbin/ctcom C Compiler (Itanium-based systems)
/usr/ccs/lbin/ccom C Compiler (PA-RISC)
/usr/ccs/lbin/cpp preprocessor, to assemble .s files
/usr/lib/nls/msg/$LANG/aCC.cat
C compiler message catalog (Itanium-based systems)
/usr/lib/nls/msg/$LANG/cc.cat
C compiler message catalog (PA-RISC)
A cA /usr/ccs/bin/as
/usr/ccs/bin/ld
assembler, as (1)
link editor, ld(1)
/usr/ccs/lib/crt0.o Runtime startup (PA-RISC)
/usr/include Standard directory for #include files
Other Libraries
Sec-
tion (3). (PA-RISC)
/usr/lib/libc.sl Standard C library (shared version), see HP-UX Reference Sec-
tion (3). (PA-RISC)
/usr/lib/hpux32/libm.a Math Library (Itanium-based system)
/usr/lib/hpux64/libm.a Math Library (Itanium-based systems)
/usr/lib/libm.a Math Library (PA-RISC)
/usr/lib/hpux32/libdld.so Dynamic loader library (Itanium-based systems)
/usr/lib/hpux64/libdld.so Dynamic loader library (Itanium-based systems)
/usr/lib/libdld.sl Dynamic loader library (PA-RISC)
/usr/lib/hpux32/dld.so Dynamic loader (Itanium-based systems)
/usr/lib/hpux64/dld.so Dynamic loader (Itanium-based systems)
/usr/lib/dld.so Dynamic loader (PA-RISC)
SEE ALSO
Online help
The online help can be displayed using a default HTML browser, or you can invoke your own HTML
browser with the URL file:/opt/ansic/html/$LANG/guide/index.htm (Itanium-based sys-
tems) or file:/opt/ansic/html/guide/$LANG/index.htm (PA-RISC)
Other topics available are: Compiler Pragmas, Floating Installation and Implementation Defined aspects
of the compiler.
Information is also available on the web at:
Section 1−−66 Hewlett-Packard Company −5− HP-UX 11i Version 2: August 2003
cc_bundled(1) cc_bundled(1)
(Bundled C Compiler - Limited Functionality)
System Tools
as (1) translate assembly code to machine code
cpp (1) invoke the C language preprocessor
cc(1) C compiler
ld(1) invoke the link editor
Miscellaneous
strip (1) strip symbol and line number information from an object file
crt0 (3) execution startup routine
end(3C) symbol of the last locations in program
exit (2) termination of a process
HP-UX 11i Version 2: August 2003 −6− Hewlett-Packard Company Section 1−−67
cd(1) cd(1)
NAME
cd - change working directory
SYNOPSIS
cd [ directory ]
DESCRIPTION
If directory is not specified, the value of shell parameter HOME is used as the new working directory. If
directory specifies a complete path starting with /, ., or .., directory becomes the new working direc-
tory. If neither case applies, cd tries to find the designated directory relative to one of the paths
specified by the CDPATH shell variable. CDPATH has the same syntax as, and similar semantics to, the
PATH shell variable. cd must have execute (search) permission in directory .
A cA cd exists only as a shell built-in command because a new process is created whenever a command is exe-
cuted, making cd useless if written and processed as a normal system command. Moreover, different
shells provide different implementations of cd as a built-in utility. Features of cd as described here
may not be supported by all the shells. Refer to individual shell manual entries for differences.
If cd is called in a subshell or a separate utility execution environment such as:
find . -type d -exec cd {}; -exec foo {};
(which invokes foo on accessible directories) cd does not affect the current directory of the caller’s
environment. Another usage of cd as a stand-alone command is to obtain the exit status of the com-
mand.
EXTERNAL INFLUENCES
International Code Set Support
Single- and multi-byte character code sets are supported.
Environment Variables
The following environment variables affect the execution of cd:
HOME The name of the home directory, used when no directory operand is specified.
CDPATH is set to the first
matching directory found. An empty string in place of a directory pathname
represents the current directory. If CDPATH is not set, it is treated as if it was an
empty string.
EXAMPLES
Change the current working directory to the HOME directory from any location in the file system:
cd
Change to new current working directory foo residing in the current directory:
cd foo
or
cd ./foo
Change to directory foobar residing in the current directory’s parent directory:
cd ../foobar
Change to the directory whose absolute pathname is /usr/local/lib/work.files:
cd /usr/local/lib/work.files
Change to the directory proj1/schedule/staffing/proposals relative to home directory:
cd $HOME/proj1/schedule/staffing/proposals
RETURN VALUE
Upon completion, cd exits with one of the following values:
0 The directory was successfully changed.
Section 1−−68 Hewlett-Packard Company −1− HP-UX 11i Version 2: August 2003
cd(1) cd(1)
SEE ALSO
csh(1), pwd(1), ksh(1), sh-posix(1), sh(1), chdir(2).
STANDARDS CONFORMANCE
cd: SVID2, SVID3, XPG2, XPG3, XPG4, POSIX.2
A cA
HP-UX 11i Version 2: August 2003 −2− Hewlett-Packard Company Section 1−−69
cdc(1) cdc(1)
NAME
cdc - change the delta commentary of an SCCS delta
SYNOPSIS
cdc -r SID [-m[ mrlist ] ] [-y[ comment ] ] files
DESCRIPTION
The cdc command changes the delta commentary, for the SID specified by the -r option, of each
named SCCS file.
Delta commentary is defined to be the Modification Request (MR) and comment information normally
specified via the delta (1) command (-m and -y options).
If a directory is named, cdc behaves as if each file in the directory were specified as a named file, except
A cA that non-SCCS files (last component of the path name does not begin with s.) and unreadable files are
silently ignored. If a name of - is given, the standard input is read (see WARNINGS); each line of the
standard input is taken to be the name of an SCCS file to be processed.
Options
Arguments to cdc, which can appear in any order, consist of option arguments and file names.
All of the described option arguments apply independently to each named file:
-rSID Used to specify the S CCS IDentification (SID) string of a delta for which the delta
commentary is to be changed.
-m[mrlist] If the SCCS file has the v option set (see admin (1)), a list of MR numbers to be
added and/or deleted in the delta commentary of the SID specified by the -r option
may be supplied. A null MR list has no effect.
MR entries are added to the list of MRs in the same manner as that of delta (1). To
delete an MR, precede the MR number with the character ! (see EXAMPLES). If the
MR to be deleted is currently in the list of MRs, it is removed and changed into a
‘‘comment’’ line. A list of all deleted MRs is placed in the comment section of the
delta commentary and preceded by a comment line stating that they were deleted. -y option).
MRs in a list are separated by blanks and/or tab characters. An unescaped new-line
character terminates the MRs list.
Note that if the v option has a value (see admin (1)), it is treated as the name of a
program (or shell procedure) that validates the correctness of the MR numbers. If a
non-zero exit status is returned from the MR number validation program, cdc ter-
minates and the delta commentary remains unchanged.
-y[comment] Arbitrary text used to replace the comment or comment s already existing for the
delta specified by the -r option. Previous comments are kept and preceded by a
comment line stating that they were changed. A null comment has no effect.
If -y is not specified and the standard input is a terminal, the prompt comments?
is issued on the standard output before standard input is read; if standard input is
not a terminal, no prompt is issued. An unescaped new-line character terminates
the comment text.
The exact permissions necessary to modify the SCCS file are documented in get (1). Simply stated, they
are either:
• If you made the delta, you can change its delta commentary, or
• If you own the file and directory, you can modify the delta commentary.
EXTERNAL INFLUENCES
Environment Variables
LANG determines the language in which messages are displayed.
Section 1−−70 Hewlett-Packard Company −1− HP-UX 11i Version 2: August 2003
cdc(1) cdc(1)
DIAGNOSTICS
Use sccshelp (1) for explanations.
EXAMPLES
Add bl78-12345 and bl79-00001 to the MR list, remove bl77-54321 from the MR list, and add the
comment trouble to delta 1.6 of s.file:
cdc -r1.6 -m"bl78-12345 !bl77-54321 bl79-00001" -ytrouble s.file
The following does the same thing:
cdc -r1.6 s.file
MRs? !bl77-54321 bl78-12345 bl79-00001 A cA
WARNINGS
If SCCS file names are supplied to the cdc command via the standard input (- on the command line), the
-m and -y options must also be used.
FILES
x-file See delta (1).
z-file See delta (1).
SEE ALSO
admin(1), delta(1), get(1), sccshelp(1), prs(1), sccsfile(4), rcsfile(4), acl(5), rcsintro(5).
HP-UX 11i Version 2: August 2003 −2− Hewlett-Packard Company Section 1−−71
chacl(1) chacl(1)
NAME
chacl - add, modify, delete, copy, or summarize access control lists (ACLs) of files
SYNOPSIS
/usr/bin/chacl acl file ...
chacl -r acl file ...
chacl -d aclpatt file ...
chacl -f fromfile tofile ...
chacl - [ z Z F ] file ...
DESCRIPTION
A cA chacl extends the capabilities of chmod(1), by enabling the user to grant or restrict file access to addi-
tional specific users and/or groups. Traditional file access permissions, set when a file is created, grant or
restrict access to the file’s owner, group, and other users. These file access permissions (eg., rwxrw-r--)
are mapped into three base access control list entries: one entry for the file’s owner (u.%, mode), one for
the file’s group (%.g, mode), and one for other users (%.%, mode).
chacl enables a user to designate up to thirteen additional sets of permissions (called optional access
control list (ACL) entries) which are stored in the access control list of the file.
To use chacl , the owner (or superuser) constructs an acl , a set of (user.group, mode) mappings to associ-
ate with one or more files. A specific user and group can be referred to by either name or number; any
user (u), group (g), or both can be referred to with a % symbol, representing any user or group. The @
symbol specifies the file’s owner or group.
Read, write, and execute/search (rwx) modes are identical to those used by chmod; symbolic operators
(op) add (+), remove (-), or set (=) access rights. The entire acl should be quoted if it contains whitespace
or special characters. Although two variants for constructing the acl are available (and fully explained in
acl (5)), the following syntax is suggested:
entry [, entry ] ...
where the syntax for an entry is
u.g op mode[ op mode ] ...
By default, chacl modifies existing ACLs. It adds ACL entries or modifies access rights in existing ACL
entries. If acl contains an ACL entry already associated with a file, the entry’s mode bits are changed to
the new value given, or are modified by the specified operators. If the file’s ACL does not already contain
the specified entry, that ACL entry is added. chacl can also remove all access to files. Giving it a null
acl argument means either ‘‘no access’’ (when using the -r option) or ‘‘no changes.’’
For a summary of the syntax, run chacl without arguments.
If file is specified as -, chacl reads from standard input.
Options
chacl recognizes the following options:
-r Replace old ACLs with the given ACL. All optional ACL entries are first deleted from the
specified files’s ACLs, their base permissions are set to zero, and the new ACL is applied.
If acl does not contain an entry for the owner (u.%), the group (%.g), or other (%.%)
users of a file, that base ACL entry’s mode is set to zero (no access). The command affects
all of the file’s ACL entries, but does not change the file’s owner or group ID.
In chmod(1), the ‘‘modify’’ and ‘‘replace’’ operations are distinguished by the syntax
(string or octal value). There is no corollary for ACLs because they have a variable
number of entries. Hence chacl modifies specific entries by default, and optionally
replaces all entries.
-d Delete the specified entries from the ACLs on all specified files. The aclpatt argument can
be an exact ACL or an ACL pattern (see acl (5)). chacl -d updates each file’s ACL only
if entries are deleted from it.
If you attempt to delete a base ACL entry from any file, the entry remains but its access
mode is set to zero (no access). If you attempt to delete a non-existent ACL entry from a
file (that is, if an ACL entry pattern matches no ACL entry), chacl informs you of the
Section 1−−72 Hewlett-Packard Company −1− HP-UX 11i Version 2: August 2003
chacl(1) chacl(1)
EXTERNAL INFLUENCES
Environment Variables
LANG determines the language in which messages are displayed.
If LANG is not specified or is set to the empty string, a default of "C" (see lang (5)) is used instead of
LANG. If any internationalization variable contains an invalid setting, chacl behaves as if all interna-
tionalization variables are set to "C". See environ (5).
RETURN VALUE
If chacl succeeds, it returns a value of zero.
If chacl encounters an error before it changes any file’s ACL, it prints an error message to standard
error and returns 1. Such errors include invalid invocation, invalid syntax of acl (aclpatt ), a given user
name or group name is unknown, or inability to get an ACL from fromfile with the -f option.
If chacl cannot execute the requested operation, it prints an error message to standard error, contin-
ues, and later returns 2. This includes cases when a file does not exist, a file’s ACL cannot be altered,
more ACL entries would result than are allowed, or an attempt is made to delete a non-existing ACL entry.
EXAMPLES
The following command adds read access for user jpc in any group, and removes write access for any
user in the files’s groups, for files x and y.
chacl "jpc.%+r, %.@-w" x y
This command replaces the ACL on the file open as standard input and on file test with one which only
allows the file owner read and write access.
HP-UX 11i Version 2: August 2003 −2− Hewlett-Packard Company Section 1−−73
chacl(1) chacl(1)
WARNINGS
An ACL string cannot contain more than 16 unique entries, even though converting @ symbols to user or
group names and combining redundant entries might result in fewer than 16 entries for some files.
DEPENDENCIES
chacl will fail when the target file resides on a file system which does not support ACLs.
NFS
Only the -F option is supported on remote files.
AUTHOR
chacl was developed by HP.
SEE ALSO
chmod(1), getaccess(1), lsacl(1), getacl(2), setacl(2), acl(5), glossary(9).
Section 1−−74 Hewlett-Packard Company −3− HP-UX 11i Version 2: August 2003
chatr(1) chatr(1)
NAME
chatr - change program’s internal attributes
SYNOPSIS
Format 1: for files with a single text segment and a single data segment
chatr [-s] [-z] [-Z] [-l library ] [-B mode ] [+as mode ] [+b flag ] [+cd flag ] [+ci flag ]
[+dbg flag ] [+es flag ] [+gst flag ] [+gstsize size ] [+id flag ] [+k flag ] [+l library ]
[+md flag ] [+mergeseg flag ] [+mi flag ] [+o flag ] [+pd size ] [+pi size ] [+s flag ] [+z
flag ] [+I flag ] file ...
Options
-l library Indicate that the specified shared library is subject to run-time path lookup if directory
path lists are provided (see +s and +b).
-s Perform its operation silently.
-z Enable run-time dereferencing of null pointers to produce a SIGSEGV signal. (This is
the complement of the -Z option.)
-B mode Select run-time binding behavior mode of a program using shared libraries. You must
specify one of the binding modes immediate or deferred. See the HP-UX Linker
and Libraries User’s Guide for a description of binding modes.
-Z Disable run-time dereferencing of null pointers. (This is the complement of the -z
option.)
+as mode should have been built with the -N compiler option to
ensure that the text and data segments are contiguous.
+b flag. See the +s
option. You can use the +b option to enable the embedded path for filter libraries.
HP-UX 11i Version 2: August 2003 −1− Hewlett-Packard Company Section 1−−75
chatr(1) chatr(1)
+c flag (Format 2 only.) Enable or disable the code bit for a specified segment. If this is enabled,
it is denoted by the c flag for the segment listing in the chatr output.
+cd flag Enable or disable the code bit for the file’s data segment(s). If this is enabled, it is
denoted by the c flag for the segment listing in the chatr output.
+ci flag Enable or disable the code bit for the file’s text segments(s). If this is enabled, it is
denoted by the c flag for the segment listing in the chatr output.
+dbg flag Enable or disable the ability to run a program, and, after it is running, attach to it with a
debugger and set breakpoints in its dependent shared libraries.
+dz flag (Format 2 only.) Enable or disable lazy swap allocation for dynamically allocated seg-
ments (such as the stack or heap).
A cA +es flag Control the ability of user code to execute from stack with the flag values, enable and
disable. See the Restricting Execute Permission on Stacks section below for additional
information related to security issues.
+gst flag Control whether the global symbol table hash mechanism is used to look up values of
symbol import/export entries. The two flag values, enable and disable, respectively
enable and disable use of the global symbol table hash mechanism. The default is dis-
able.
+gstsize size
Request a particular hash array size using the global symbol table hash mechanism. The
value can vary between 1 and MAXINT. The default value is 1103. Use this option with
+gst enable. This option works on files liked with the +gst option.
+id flag Controls the preference of physical memory for the data segment. This is only important
on ccNUMA (Cache Coherent Non-Uniform Memory Architecture) systems. The flag
value may be either enable or disable. When enabled, the data segment will use inter-
leaved memory. When disabled (the default), the data segment will use cell local
memory. This behavior will be inherited across a fork(), but not an exec().
For more information regarding ccNUMA, see pstat_getlocality(2).
+k flag Request kernel assisted branch prediction. The flags enable and disable turn this
request on and off, respectively.
+l library Indicate that the specified shared library is not subject to run-time path lookup if direc-
tory path lists are provided (see +s and +b).
+m flag (Format 2 only.) Enable or disable the modification bit for a specified segment. If this is
enabled, it is denoted by the m flag for the segment listing in the chatr output.
+md flag Enable or disable the modification bit for the file’s data segment(s). If this is enabled, it
is denoted by the m flag for the segment listing in the chatr output.
+mergeseg flag
Enable or disable the shared library segment merging features. When enabled, all data
segments of shared libraries loaded at program startup are merged into a single block.
Data segments for each dynamically loaded library will also be merged with the data seg-
ments of its dependent libraries. Merging of these segments increasesrun-time perfor-
mance by allowing the kernel to use larger size page table entries.
+mi flag Enable or disable the modification bit for the file’s text segment(s). If this is enabled, it is
denoted by the m flag for the segment listing in the chatr output.
+o flag Enable or disable the DF_ORIGIN flag to control use of $ORIGIN $ORIGIN. The loader then uses this path for all occurrences
of $ORIGIN in the dependent libraries.
If there are no occurrences of $ORIGIN, you should disable the DF_ORIGIN flag, to
avoid calculating the absolute path. By default, if $ORIGIN is not present, the
DF_ORIGIN flag is disabled.
Section 1−−76 Hewlett-Packard Company −2− HP-UX 11i Version 2: August 2003
chatr(1) chatr(1)
+p size (Format 2 only.) Set the page size for a specified segment.
+pd size Request a particular virtual memory page size that should.
+pi size Request a particular virtual memory page size that should be used for text (instructions).
See the +pd option for additional information.
+r flag Request static branch prediction when executing this program. The flags enable and
disable turn this request on and off, respectively. If this is enabled, it is denoted by
the r flag for the segment listing in the chatr output.
+s flag Control whether the directory path list specified with the LD_LIBRARY_PATH and
SHLIB_PATH environment variable can be used to locate shared libraries needed by the A cA
program. The two flag values, enable and disable, respectively enable and disable
use of the environment variable. If both +s and +b are used, their relative order on the
command line indicates which path list will be searched first. See the +b option.
+sa address (Format 2 only.) Specify a segment using an address for a set of attribute modifications.
+sall (Format 2 only.) Use all segments in the file for a set of attribute modifications.
+si index (Format 2 only.) Specify a segment using a segment index number for a set of attribute
modifications.
+z flag Enable or disable lazy swap on all data segments (using FORMAT 1) or on a specific seg-
ment (using 2). The flags enable and disable turn this request on or off respec-
tively. May not be used with non-data segments.
+I flag Enable or disable dynamic instrumentation by /opt/langtools/bin/caliper. If
enabled, the dynamic loader (see dld.so (5)) will automatically invoke caliper upon
program execution to collect profile information.
HP-UX 11i Version 2: August 2003 −3− Hewlett-Packard Company Section 1−−77
chatr(1) chatr(1)
An alternate method is setting the kernel tunable parameter, executable_stack, to set a system-
wide default for whether stacks are executable. Setting the executable_stack parameter to 1 (one)
with sam (see sam(1M)) tells the HP-UX kernel to allow programs to execute on the program stack(s).
Use this setting if compatibility with older releases is more important than security. Setting the
executable_stack parameter to 0 (zero), the recommended setting, is appropriate if security is more
important than compatibility. This setting significantly improves system security with minimal, if any,
negative effects on legitimate applications.
Combinations of these settings may be appropriate for many applications. For example, after setting
executable_stack res-
trictive system default while still letting these specific applications run correctly, set
A cA executable_stack to 0, and run chatr +es enable on the specific binaries that need to execute
code from their stack(s). These binaries can be easily identified when they are executed, because they
will print error messages referring to this manual page.
The possible settings for executable_stack are as follows:
executable_stack = 0
A setting of 0 causes stacks to be non-executable and is strongly preferred from a security per-
spective.
executable_stack = 1 (default)
A setting of 1 (the default value) causes all program stacks to be executable, and is safest from
a compatibility perspective but is the least secure setting for this parameter.
executable_stack = 2 combinations of chatr +es and
executable_stack when executing from the program’s stack. Running chatr +es disable relies
solely on the setting of the executable_stack kernel tunable parameter when deciding whether or
not to grant execute permission for stacks and is equivalent to not having run chatr +es on the binary.
chatr +es executable_stack Action
enable 1 program runs normally
disable or 1 program runs normally
chatr is not run
enable 0 program runs normally
disable or 0 program is killed
chatr is not run
enable 2 program runs normally
disable or 2 program runs normally
chatr is not run with warning displayed
RETURN VALUE
chatr returns zero on success. If the command line contents is syntactically incorrect, or one or more of
the specified files cannot be acted upon, chatr returns information about the files whose attributes
could not be modified. If no files are specified, chatr returns decimal 255.
Illegal options
If you use an illegal option, chatr returns the number of non-option words present after the first illegal
option. The following example returns 4:
chatr +b enable +xyz enable +mno enable +pqr enable file
Invalid arguments
If you use an invalid argument with a valid option and you do not specify a filename, chatr returns 0,
as in this example:
chatr +b <no argument>
Section 1−−78 Hewlett-Packard Company −4− HP-UX 11i Version 2: August 2003
chatr(1) chatr(1)
If you specify a file name (regardless of whether or not the file exists), chatr returns the number of files
specified. The following example returns 3:
chatr <no argument> file1 file2 file3
Invalid files
If the command cannot act on any of the files given, it returns the total number of files specified (if some
option is specified). Otherwise it returns the number of files upon which it could not act. If a2 does not
have read/write permission, the first of the following examples returns 4 and the second returns 1:
chatr +b enable a1 a2 a3 a4
chatr a1 a2 a3 a4
EXTERNAL INFLUENCES
Environment Variables A cA
The following internationalization variables affect the execution of chatr: diagnos-
tic messages written to standard error.
LC_NUMERIC Determines the locale category for numeric formatting.
NLSPATH Determines the location of message catalogues for the processing of LC_MESSAGES.
If any internationalization variable contains an invalid setting, chatr behaves as if all internationaliza-
tion variables are set to C. See environ (5).
In addition, the following environment variable affects chatr:
TMPDIR Specifies a directory for temporary files (see tmpnam (3S)).
EXAMPLES
Change a.out to demand-loaded
chatr -q a.out
Change binding mode of program file that uses shared libraries to immediate and nonfatal. Also enable
usage of SHLIB_PATH environment variable:
chatr -B immediate -B nonfatal +s enable a.out
Disallow run-time path lookup for the shared library /usr/lib/libc.sl that the shared library
libfoo.sl depends on:
chatr +l /usr/lib/libc.sl libfoo.sl
Given segment index number 5 from a previous run of chatr, change the page size to 4 kilobytes:
chatr +si 5 +p 4K average64
HP-UX 11i Version 2: August 2003 −5− Hewlett-Packard Company Section 1−−79
chatr(1) chatr(1)
WARNINGS
This release of the chatr command no longer supports the following options:
• -n
• -q
• -M
• -N
• +getbuckets size
• +plabel_cache flag
• +q3p flag
• +q4p flag
AUTHOR
chatr was developed by HP.
SEE ALSO
System Tools
ld(1) invoke the link editor
dld.so (5) dynamic loader
Miscellaneous
a.out (4) assembler, compiler, and linker output
magic (4) magic number for HP-UX implementations
sam(1M) system administration manager
Section 1−−80 Hewlett-Packard Company −6− HP-UX 11i Version 2: August 2003
checknr(1) checknr(1)
NAME
checknr - check nroff/troff files
SYNOPSIS
checknr [-s] [-f] [-a.x1 .y1.x2 .y2 ... .xn .yn ] [-c.x1 .x2 .x3 ...c .xn ] [ file ... ]
DESCRIPTION
checknr searches a list of nroff or troff input files for certain kinds of errors involving
mismatched opening and closing delimiters and unknown commands. If no files are specified, checknr
searches the standard input. checknr looks for the following:
• Font changes using \fx ... \fP.
• Size changes using \sx ... \s0.
• Macros that come in open ... close forms, such as the .TS and .TE macros, which must appear
A cA
in matched pairs.
checknr knows about the ms and me macro packages.
Options
checknr recognizes the following options:
-a Define additional macro pairs in the list. -a is followed by groups of six characters, each
group defining a pair of macros. Each six characters consist of a period, the first macro name,
another period, and the second macro name. For example, to define the pairs .BS and .ES,
and .XS and .XE, use:
-a.BS.ES.XS.XE
No spaces are allowed between the option and its arguments.
-c Define commands that checknr would otherwise interpret as undefined.
-f Ignore \fx font changes.
-s Ignore \sx size changes.
EXTERNAL INFLUENCES
International Code Set Support
Single-byte character code sets are supported.
DIAGNOSTICS
checknr complains about unmatched delimiters, unrecognized commands, and bad command syntax.
EXAMPLES
Check file sorting for errors that involve mismatched opening and closing delimiters and unknown
commands, but disregard errors caused by font changes:
checknr -f sorting
WARNINGS
checknr is designed for use on documents prepared with the intent of using checknr, much the same
as lint is used. It expects a certain document writing style for \f... and \s... commands, in which
each \fx is terminated with \fP and each \sx is terminated with \s0. Although text files format prop-
erly when the next font or point size is coded directly instead of using \fP or \s0, such techniques pro-
duce complaints from checknr . If files are to be examined by checknr , the \fP and \s0 delimiting con-
ventions should be used.
-a cannot be used to define single-character macro names.
checknr does not recognize certain reasonable constructs such as conditionals.
AUTHOR
checknr was developed by the University of California, Berkeley.
SEE ALSO
checkeq(1), lint(1), nroff(1).
HP-UX 11i Version 2: August 2003 −1− Hewlett-Packard Company Section 1−−81
chfn(1) chfn(1)
NAME
chfn - change user information; used by finger
SYNOPSIS
chfn [login-name ]
chfn -r files [login-name ]
chfn -r nis [login-name ]
chfn -r nisplus [login-name ]
chfn -r dce [login-name ]
A cA DESCRIPTION
The chfn command changes the user information that is stored in the repository for the current
logged-in user or for the user specified by login-name (see passwd (1)).
The information is organized as four comma-separated subfields within the reserved (5th) field of the
password file entry. It consists of the user’s full name, location code, office phone number, and home
phone number, in that order. This information is used by the finger command and other programs (see
finger(1)).
chfn prompts you for each subfield. The prompt includes a default value, which is enclosed in brackets.
Accept the default value by pressing the Return key. To enter a blank subfield, type the word none.
The DCE repository (-r dce) is only available if Integrated Login has been configured, see
auth.adm (1M). If Integrated Login has been configured, other considerations apply. A user with
appropriate DCE privileges is capable of modifying a user’s finger (gecos) information; this is not depen-
dent upon superuser privileges.
If the repository is not specified; i.e., chfn [login-name ], the finger information is changed in the passwd
file only.
Run finger after running chfn to make sure the information was processed correctly.
Options
The following option is recognized:
-r Specify the repository to which the operation is to be applied. Supported reposi-
tories include files, nis, nisplus, and dce.
Subfield Values
Name Up to 1022 printing characters.
The finger command and other utilities expand an & found anywhere in this
subfield by substituting the login name for it and shifting the first letter of the
login name to uppercase. (chfn does not alter the input &.)
Location Up to 1022 printing characters.
Office Phone Up to 25 printing characters.
finger inserts appropriate hyphens if the value is all digits.
Home Phone Up to 25 printing characters.
finger inserts appropriate hyphens if the value is all digits.
Security Restrictions
You must have appropriate privileges to use the optional login-name argument to change another user’s
information.
EXAMPLES
The following is a sample run. The user’s input is shown in regular type.
Name [Tracy Simmons]:
Location (Ex: 47U-P5) []: 42L-P1
Office Phone (Ex: 1632) [77777]: 71863
Home Phone (Ex: 9875432) [4085551546]: none
Section 1−−82 Hewlett-Packard Company −1− HP-UX 11i Version 2: August 2003
chfn(1) chfn(1)
WARNINGS
The encoding of office and extension information is installation-dependent.
For historical reasons, the user’s name, etc., are stored in the /etc/passwd file. This is an inappropri-
ate place to store the information.
Because two users may try to write the passwd file at once, a synchronization method was developed.
On rare occasions, chfn prints a message that the password file is busy. When this occurs, chfn sleeps
for a short time, then tries to write to the passwd file again.
AUTHOR
chfn was developed by the University of California, Berkeley.
FILES
/etc/passwd A cA
/etc/ptmp
NOTES
The chfn command is a hard link to passwd command. When chfn is executed, actually the passwd
command gets executed with appropriate arguments to change the user gecos information in the reposi-
tory specified in command line. If no repository is specified the gecos information is changed in
/etc/passwd file.
SEE ALSO
chsh(1), finger(1), passwd(1), passwd(4).
HP-UX 11i Version 2: August 2003 −2− Hewlett-Packard Company Section 1−−83
chkey(1) chkey(1)
NAME
chkey - change user’s secure RPC key pair
SYNOPSIS
chkey [ -p ] [ -s nisplus | nis | files ]
DESCRIPTION
chkey is used to change a user’s secure RPC public key and secret key pair. chkey prompts for the old
secure-rpc password and verifies that it is correct by decrypting the secret key. If the user has not
already keylogged in,.
A cA chkey ensures that the login password and the secure-rpc password are kept the same.
The key pair can be stored in the /etc/publickey file, (see publickey (4)), NIS publickey map or
NIS+ cred.org_dir table. If a new secret key is generated, it will be registered with the local
keyserv (1M) daemon.. How-
ever, if multiple name services are listed, chkey can not decide which source to update and will display
an error message. The user should specify the source explicitly with the -s option.
Non root users are not allowed to change their key pair in the /etc/publickey file.
Options
-p Re-encrypt the existing secret key with the user’s login password.
-s nisplus Update the NIS+ database.
-s nis Update the NIS database.
-s files Update the files database.
AUTHOR
chkey was developed by Sun Microsystems, Inc.
FILES
/etc/nsswitch.conf
/etc/publickey
SEE ALSO
keylogin(1), keylogout(1), keyserv(1M), newkey(1M), nisaddcred(1M), nsswitch.conf(4), publickey(4).
Section 1−−84 Hewlett-Packard Company −1− HP-UX 11i Version 2: August 2003
chmod(1) chmod(1)
NAME
chmod - change file mode access permissions
SYNOPSIS
/usr/bin/chmod [-A] [-R] symbolic_mode_list file ...
Obsolescent form:
/usr/bin/chmod [-A] [-R] numeric_mode file ...
DESCRIPTION
The chmod command changes the permissions of one or more file s according to the value of
symbolic_mode_list or numeric_mode . You can display the current permissions for a file with the ls -l
command (see ls (1)).
HP-UX 11i Version 2: August 2003 −1− Hewlett-Packard Company Section 1−−85
chmod(1) chmod(1)
Options
-A Preserve any optional access control list (ACL) entries associated with the file (HFS file sys-
tems).
RETURN VALUE
Upon completion, chmod returns one of the following values:
0 Successful completion.
>0 An error condition occurred.
EXAMPLES
Deny write permission to others:
chmod o-w file
Section 1−−86 Hewlett-Packard Company −2− HP-UX 11i Version 2: August 2003
chmod(1) chmod(1)
DEPENDENCIES
The -A option causes chmod to fail on file systems that do not support ACLs.
AUTHOR
chmod was developed by AT&T and HP.
SEE ALSO
chacl(1), ls(1), umask(1), chmod(2), acl(5), aclv(5).
STANDARDS CONFORMANCE
chmod: SVID2, SVID3, XPG2, XPG3, XPG4, POSIX.2
HP-UX 11i Version 2: August 2003 −3− Hewlett-Packard Company Section 1−−87
chown(1) chown(1)
NAME
chown, chgrp - change file owner or group
SYNOPSIS
chown [-h] [-R] owner[:group ] file ...
chgrp [-h] [-R] group file ...
DESCRIPTION
The chown command changes the owner ID of each specified file to owner and optionally the group ID of
each specified file to group .
The chgrp command changes the group ID of each specified file to group .
A cA owner can be either a decimal user ID or a login name found in the /etc/passwd file.
group can be either a decimal group ID or a group name found in the /etc/group file.
In order to change the owner or group, you must own the file and have the CHOWN privilege (see
setprivgrp (1M)). If either command is invoked on a regular file by other than the superuser, the set-
user-ID and set-group-ID bits of the file mode (04000 and 02000 respectively) are cleared. Note that a
given user’s or group’s ability to use this command can be restricted by setprivgrp (see
setprivgrp (1M)).
Access Control Lists − HFS File Systems Only
Users can permit or deny specific individuals and groups to access a file by setting optional ACL entries
in the file’s access control list (see acl (5)). When using chown in conjunction with HFS ACLs, if the new
owner and/or group of a file does not have an optional ACL entry corresponding to user .% and/or
%.group in the file’s access control list, the file’s access permission bits remain unchanged. However, if
the new owner and/or group is already designated by an optional ACL entry of user .% and/or %.group in
the file’s ACL, chown sets the corresponding file access permission bits (and the corresponding base ACL
entries) to the permissions contained in that entry.
Options
chown and chgrp recognize the following options:
-h Change the owner or group of a symbolic link.
By default, the owner or group of the target file that a symbolic link points to is changed. With
-h, the target file that the symbolic link points to is not affected. If the target file is a direc-
tory, and you specify -h and -R, recursion does not take place.
-R Recursively change the owner or group. For each file operand that names a directory, the
owner or group of the directory and all files and subdirectories in the file hierarchy below it
are changed., chown behaves as if all internationaliza-
tion variables are set to "C". See environ (5).
Section 1−−88 Hewlett-Packard Company −1− HP-UX 11i Version 2: August 2003
chown(1) chown(1)
RETURN VALUE
chown and chgrp return the following values:
0 Successful completion.
>0 An error condition occurred.
EXAMPLES
The following command changes the owner of the file jokes to sandi:
chown sandi jokes
The following command searches the directory design_notes and changes each file in that directory to
owner mark and group users:
chown -R mark:users design_notes
WARNINGS
A cA
The default operation of chown and chgrp for symbolic links has changed as of HP-UX release 10.0.
Use the -h option to get the former default operation.
FILES
/etc/group
/etc/passwd
SEE ALSO
chmod(1), setprivgrp(1M), chown(2), group(4), passwd(4), acl(5), aclv(5).
STANDARDS CONFORMANCE
chown: SVID2, SVID3, XPG2, XPG3, XPG4, POSIX.2
chgrp: SVID2, SVID3, XPG2, XPG3, XPG4, POSIX.2
HP-UX 11i Version 2: August 2003 −2− Hewlett-Packard Company Section 1−−89
chsh(1) chsh(1)
NAME
chsh - change default login shell
SYNOPSIS
chsh login-name [shell ]
chsh -r files login-name [shell ]
chsh -r nisplus login-name [shell ]
chsh -r nis login-name [shell ]
chsh -r dce login-name [shell ]
A cA DESCRIPTION
The chsh command changes the login-shell for a user’s login name in the repository (see passwd (1)).
The DCE repository (-r dce) is only available if Integrated Login has been configured, see
auth.adm (1M). If Integrated Login has been configured, other considerations apply. A user with
appropriate DCE privileges is capable of modifying a user’s shell; this is not dependent upon superuser
privileges.
If the repository is not specified; i.e., chsh [login-name ], the login shell is changed in passwd file only.
Run finger after running chsh to make sure the information was processed correctly.
Arguments
login-name A login name of a user.
shell The absolute path name of a shell. If the file /etc/shells exists, the new login shell
must be listed in that file. Otherwise, you can specify one of the standard shells listed
in the getusershell (3C) manual entry. If shell is omitted, it defaults to the POSIX shell,
/usr/bin/sh.
Options
The following option is recognized:
-r Specify the repository to which the operation is to be applied. Supported reposi-
tories include files, nis, nisplus, and dce.
Security Restrictions
You must have appropriate privileges to use the optional login-name argument to change another user’s
login shell.
NETWORKING FEATURES
NFS
File /etc/passwd can be implemented as a Network Information Service (NIS) database.
EXAMPLES
To change the login shell for user voltaire to the default:
chsh voltaire
To change the login shell for user descartes to the C shell:
chsh descartes /usr/bin/csh
To change the login shell for user aristotle to the Korn shell in the DCE registry:
chsh -r dce aristotle /usr/bin/ksh
WARNINGS
As many users may try to write the /etc/passwd file simultaneously, a passwd locking mechanism was
deviced. If this locking fails after subsequent retrying, chsh terminates.
AUTHOR
chsh was developed by HP and the University of California, Berkeley.
Section 1−−90 Hewlett-Packard Company −1− HP-UX 11i Version 2: August 2003
chsh(1) chsh(1)
NOTES
The chsh command is a hard link to passwd command. When chsh is executed actually the passwd
command gets executed with appropriate arguments to change the user login shell in the repository
specified in command line. If no repository is specified the login shell is changed in /etc/passwd file.
FILES
/etc/shells
/etc/ptmp
SEE ALSO
chfn(1), csh(1), ksh(1), passwd(1), sh(1), sh-posix(1), getusershell(3C), pam(3), passwd(4), shells(4).
A cA
HP-UX 11i Version 2: August 2003 −2− Hewlett-Packard Company Section 1−−91
ci(1) ci(1)
NAME
ci - check in RCS revisions
SYNOPSIS
ci [ options ] file ...
DESCRIPTION
ci stores new revisions into RCS files. Each file name ending in ,v is treated as an RCS file; all others
are assumed to be working files. ci deposits the contents of each working file into the corresponding
RCS file (see rcsintro (5)).
If the RCS file does not exist, ci creates it and deposits the contents of the working file as the initial
revision. The default number is "1.1". The access list is initialized to empty. Instead of the log message,
A cA ci requests descriptive text (see the -t option below).
An RCS file created by ci inherits the read and execute permissions from the working file. If the RCS
file exists, ci preserves its read and execute permissions. ci always turns off all write permissions of
RCS files.
The caller of the command must have read/write permission for the directories containing the RCS file
and the working file, and read permission for the RCS file itself. A number of temporary files are created.
A semaphore file is created in the directory containing the RCS file. ci always creates a new RCS file
and unlinks the old one; therefore links to RCS files are useless.
For ci to work, the user’s login must be in the access list unless the access list is empty, the user is the
owner of the file, or the user is super-user.
Normally, ci checks whether the revision to be deposited is different from the preceding one. If it is not
different, ci either aborts the deposit (if -q is given) or asks whether to abort (if -q is omitted). A
deposit can be forced with the -f option.
If sufficient memory is not available for checking the difference between the revision to be deposited and
the preceding one, then either swap or maxdsiz values can be increased.
For each revision deposited, ci prompts for a log message. The log message should summarize the
change and must be terminated with a line containing a single "." or a control-D. If several files are being
checked in, ci asks whether or not to reuse the log message from the previous file. If the standard input
is not a terminal, ci suppresses the prompt and uses the same log message for all files (see -m option
below.
The number of the deposited revision can be given with any of the options -r, -f, -k, -l, -u, or -q (see
-r option below).
To add a new revision to an existing branch, the head revision on that branch must be locked by the
caller. Otherwise, only a new branch can be created. This restriction is not enforced for the owner of the
file, unless locking is set to strict (see rcs (1)). A lock held by someone else can be broken with the
rcs command (see rcs (1)).
Options
-f[ rev ] Forces a deposit. The new revision is deposited even if it is not different from the preced-
ing one.
-k[ rev ] -k option at these
sites to preserve its original number, date, author, and state.
-l[ rev ] Works like -r, except it performs an additional co -l for the deposited revision. Thus,
the deposited revision is immediately checked out again and locked. This is useful for
saving a revision although one wants to continue editing it after the check-in.
-m"msg" Uses the string msg as the log message for all revisions checked in.
-n"name" Assigns the symbolic name name to the checked-in revision. ci prints an error message
if name is already assigned to another number.
Section 1−−92 Hewlett-Packard Company −1− HP-UX 11i Version 2: August 2003
ci(1) ci(1)
DIAGNOSTICS
For each revision, ci prints the RCS file, the working file, and the number of both the deposited and the
preceding revision. The exit status always refers to the last file checked in, and is 0 if the operation was
successful, 1 if unsuccessful.
EXAMPLES
If the current directory contains a subdirectory RCS with an RCS file io.c,v, all of the following com-
mands deposit the latest revision from io.c into RCS/io.c,v:
ci io.c
ci RCS/io.c,v
ci io.c,v
ci io.c RCS/io.c,v
ci io.c io.c,v
ci RCS/io.c,v io.c
ci io.c,v io.c
Check in version 1.2 of RCS file foo.c,v, with the message Bug fix:
ci -r1.2 -m"Bug Fix" foo.c,v
WARNINGS
The names of RCS files are generated by appending ,v to the end of the working file name. If the result-
ing RCS file name is too long for the file system on which the RCS file should reside, ci terminates with
an error message.
The log message cannot exceed 2046 bytes.
HP-UX 11i Version 2: August 2003 −2− Hewlett-Packard Company Section 1−−93
ci(1) ci(1)
A file with approximately 240 revisions may cause a hash table overflow. ci cannot add another revi-
sion to the file until some of the old revisions have been removed. Use the rcs -o (obsolete) command
option to remove old revisions.
RCS is designed to be used with TEXT files only. Attempting to use RCS with non-text (binary) files
results in data corruption.
AUTHOR
ci was developed by Walter F. Tichy.
SEE ALSO
co(1), ident(1), rcs(1), rcsdiff(1), rcsmerge(1), rlog(1), rcsfile(4), acl(5), rcsintro(5).
A cA
Section 1−−94 Hewlett-Packard Company −3− HP-UX 11i Version 2: August 2003
ckconfig(1) ckconfig(1)
NAME
ckconfig - verify the path names of all the FTP configuration files.
SYNOPSIS
/usr/bin/ckconfig [-V]
DESCRIPTION
The ckconfig utility is used to verify the path names of the FTP configuration files,
/etc/ftpd/ftpusers, /etc/ftpd/ftpaccess, /etc/ftpd/ftpconversions,
/etc/ftpd/ftpgroups, /etc/ftpd/ftphosts, /var/adm/syslog/xferlog, and
/etc/ftpd/pids/*.
This utility checks to see that all the FTP configuration files are in the path specified. If it is not able to
find the configuration files in the path, it will give out an error message to the system administrator about
the same.
A cA
The -V option causes the program to display copyright and version information, then terminate.
FILES
/usr/bin/ckconfig
AUTHOR
ckconfig was developed by the Washington University, St. Louis, Missouri.
SEE ALSO
ftpusers(4), ftpconversions(4), ftpaccess(4), ftphosts(4), ftpgroups(4), xferlog(5).
HP-UX 11i Version 2: August 2003 −1− Hewlett-Packard Company Section 1−−95
cksum(1) cksum(1)
NAME
cksum - print file checksum and sizes
SYNOPSIS
cksum [file ...]
DESCRIPTION file.
A cA x
32
+x
26
+x
23
+x
22
+x
16
+x
12
+x
11
+x
10 7 5 4 2
+x +x +x +x +x +x
1 0
The results of the calculation are truncated to a 32-bit value. The number of bytes in the file is also
printed.
Standard input is used if no file names are given.
cksum is typically used to verify data integrity when copying files between systems._MESSAGES determines the language in which messages are displayed.
If any internationalization variable contains an invalid setting, cksum behaves as if all internationaliza-
tion variables are set to "C". See environ (5).
RETURN VALUE
Upon completion, cksum returns one of the following values:
0 All files were processed successfully.
>0 One or more files could not be read or another error occurred.
If an inaccessible file is encountered, cksum continues processing any remaining files, but the final exit
status is affected.
SEE ALSO
sum(1), wc(1).
STANDARDS CONFORMANCE
cksum: XPG4, POSIX.2
Section 1−−96 Hewlett-Packard Company −1− HP-UX 11i Version 2: August 2003
clear(1) clear(1)
NAME
clear - clear terminal screen
SYNOPSIS
clear
DESCRIPTION
clear clears the terminal screen if it is possible to do so. It reads the TERM environment variable for
the terminal type, then reads the appropriate terminfo database to determine how to clear the screen.
FILES
/usr/share/lib/terminfo/?/* terminal database files
AUTHOR
clear was developed by the University of California, Berkeley.
A cA
SEE ALSO
terminfo(4).
HP-UX 11i Version 2: August 2003 −1− Hewlett-Packard Company Section 1−−97
cmp(1) cmp(1)
NAME
cmp - compare two files
SYNOPSIS
cmp [-l] [-s] file1 file2 [skip1 [skip2 ]]
DESCRIPTION or decimal; the form of the
number is determined by the environment variable LC_NUMERIC (in the C locale, a leading 0 denotes an
octal number. See LANG on environ (5) and strtol (3C)).
A cA cmp recognizes the following options:
-l Print the byte number (decimal) and the differing bytes (octal) for each difference (byte
numbering begins at 1 rather than 0).
-s Print nothing for differing files; return codes only.
EXTERNAL INFLUENCES
Environment Variables
LANG determines the language in which messages are displayed. If LANG is not specified or is set to the
empty string, a default of "C" (see lang (5)) is used instead of LANG. If any internationalization variable
contains an invalid setting, cmp behaves as if all internationalization variables are set to "C". See
environ (5).
DIAGNOSTICS
cmp returns the following exit values:
0 Files are identical.
1 Files are not identical.
2 Inaccessible or missing argument.
cmp prints the following warning if the comparison succeeds till the end of file of file1(file2) is reached.
cmp: EOF on file1(file2)
SEE ALSO
comm(1), diff(1).
STANDARDS CONFORMANCE
cmp: SVID2, SVID3, XPG2, XPG3, XPG4, POSIX.2
Section 1−−98 Hewlett-Packard Company −1− HP-UX 11i Version 2: August 2003
co(1) co(1)
NAME
co - check out RCS revisions
SYNOPSIS
co [ options ] file ...
DESCRIPTION reading or processing (e.g., compiling) need not be locked. A revision
checked out for editing and later checked in must normally be locked. Locking a revision currently
locked by another user fails (a lock can be broken with the rcs command, but poses inherent risks when
A cA
independent changes are being made simultaneously (see rcs (1)). co with locking requires the caller to
be on the access list of the RCS file unless: he is the owner of the file, a user with appropriate privileges,
or the access list is empty. co without locking is not subject to access list restrictions.
A revision is selected by number, check-in date/time, author, or state. If none of these options are
specified, the latest revision on the trunk is retrieved. When the options are applied in combination, the
latest revision that satisfies all of them is retrieved. The options for date/time, author, and state retrieve
a revision on the selected branch. The selected branch is either derived from the revision number (if
given), or is the highest branch on the trunk. A revision number can be attached to the options -l, -p,
-q, or -r.
The caller of the command must have write permission in the working directory, read permission for the
RCS file, and either read permission (for reading) or read/write permission (for locking) in the directory
that contains the RCS file.
The working file inherits the read and execute permissions from the RCS file. In addition, the owner
write permission is turned on, unless the file is checked out unlocked and locking is set to strict (see
rcs (1)).
If a file with the name of the working file exists already and has write permission, co aborts the check
out if -q is given, or asks whether to abort if -q is not given. If the existing working file is not writable,
it is deleted before the check out.
A number of temporary files are created. A semaphore file is created in the directory of the RCS file to
prevent simultaneous update.
A co command applied to an RCS file with no revisions creates a zero-length file. co always performs
keyword substitution (see below).
Options
-l[ rev ] Locks the checked out revision for the caller. If omitted, the checked out revision is not
locked. See option -r for handling of the revision number rev .
-p[ rev ] Prints the retrieved revision on the standard output rather than storing it in the working
file. This option is useful when co is part of a pipe.
-q[ rev ] Quiet mode; diagnostics are not printed.
-ddate Retrieves the latest revision on the selected branch whose check in date/time is less than
or equal to date . The date and time may be given in free format and are converted to
local time. Examples of formats for date :
Tue-PDT, 1981, 4pm Jul 21 (free format)
Fri April 16 15:52:25 EST 1982 (output of ctime(3C))
4/21/86 10:30am (format: mm/dd/yy hh:mm:ss)
Most fields in the date and time can be defaulted. co determines the defaults in the
order year, month, day, hour, minute, and second (from most- to least-significant). At
least one of these fields must be provided. For omitted fields that are of higher
significance than the highest provided field, the current values are assumed. For all
other omitted fields, the lowest possible values are assumed. For example, the date 20,
10:30 defaults to 10:30:00 of the 20th of the current month and current year. Date/time
fields can be delimited by spaces or commas. If spaces are used, the string must be sur-
rounded by double quotes.
HP-UX 11i Version 2: August 2003 −1− Hewlett-Packard Company Section 1−−99
co(1) co(1)
For 2-digit year input (yy) without the presence of the century field, the following
interpretation is taken: [70-99, 00-69 (1970-1999, 2000-2069)].
-r[ rev ] Retrieves the latest revision whose number is less than or equal to rev . If rev indicates a
branch rather than a revision, the latest revision on that branch is retrieved. rev is com-
posed of one or more numeric or symbolic fields separated by . . The numeric equivalent
of a symbolic field is specified with the ci -n and rcs -n commands (see ci (1) and
rcs (1)).
-sstate Retrieves the latest revision on the selected branch whose state is set to state .
-w[ login ] Retrieves the latest revision on the selected branch that was checked in by the user with
login name login . If the argument login is omitted, the caller’s login is assumed.
-jjoinlist Generates a new revision that is the result of the joining of the revisions on joinlist . join-
A cA list is a comma-separated list of pairs of the form rev2:rev3, where rev2 and rev3 are
(symbolic or numeric) revision numbers. For the initial pair, rev1 denotes the revision
selected by the options -l, ..., -w. For all other pairs, rev1 denotes the revision gen-
erated ances-
tor. If rev1 < rev2 < rev3 on the same branch, joining generates a new revision that is
similar to rev3 , but with all changes that lead from rev1 to rev2 undone. If changes from
rev2 to rev1 overlap with changes from rev2 to rev3 , co prints a warning and includes
the overlapping sections, delimited as follows:
<<<<<<<
rev1
=======
rev3
>>>>>>>
For the initial pair, rev2 can be omitted. The default is the common ancestor. If any of
the arguments indicate branches, the latest revisions on those branches are assumed. If
the -l option is present, the initial rev1 is locked.
Keyword Substitution check out, co replaces these strings with
strings of the form $keyword : value $. If a revision containing strings of the latter form is checked back
in, the value fields are replaced during the next checkout. Thus, the keyword values are automatically
updated on checkout.
Keywords and their corresponding values:
$Author$ The login name of the user who checked in the revision.
$Date$ The date and time the revision was checked in.
$Header$ A standard header containing the RCS file name, the revision number, the date, the
author, and the state.
$Locker$ The login name of the user who locked the revision (empty if not locked).
$Log$ The log message supplied during checkin, preceded by a header containing the RCS file
name, the revision number, the author, and the date. Existing log messages are not
replaced. Instead, the new log message is inserted after $Log:... $. This is useful for
accumulating a complete change log in a source file.
$Revision$ The revision number assigned to the revision.
$Source$ The full pathname of the RCS file.
$State$ The state assigned to the revision with rcs -s or ci -s.
Section 1−−100 Hewlett-Packard Company −2− HP-UX 11i Version 2: August 2003
co(1) co(1)
DIAGNOSTICS
The RCS file name, the working file name, and the revision number retrieved are written to the diagnostic
output. The exit status always refers to the last file checked out, and is 0 if the operation was successful,
1 if unsuccessful.
EXAMPLES
Assume the current directory contains a subdirectory named RCS with an RCS file named io.c,v.
Each of the following commands retrieves the latest revision from RCS/io.c,v and stores it into io.c:
co io.c
co
co
RCS/io.c,v
io.c,v A cA
co io.c RCS/io.c,v
co io.c io.c,v
co RCS/io.c,v io.c
co io.c,v io.c
co -r1.1 foo.c,v
Check out version 1.1 of RCS file foo.c,v to the standard output:
co -p1.1 foo.c,v
Check out the version of file foo.c,v that existed on September 18, 1992:
co -d"09/18/92" foo.c,v
WARNINGS
The co command generates the working file name by removing the ,v from the end of the RCS file
name. If the given RCS file name is too long for the file system on which the RCS file should reside, co
terminates with an error message.
There is no way to suppress the expansion of keywords, except by writing them differently. In nroff
and troff, this is done by embedding the null-character \& into the keyword.
The -d option gets confused in some circumstances, and accepts no date before 1970.
The -j option does not work for files containing lines consisting of a single . .
RCS is designed to be used with text files only. Attempting to use RCS with non-text (binary) files results
in data corruption.
AUTHOR
co was developed by Walter F. Tichy.
SEE ALSO
ci(1), ident(1), rcs(1), rcsdiff(1), rcsmerge(1), rlog(1), rcsfile(4), acl(5), rcsintro(5).
HP-UX 11i Version 2: August 2003 −3− Hewlett-Packard Company Section 1−−101
col(1) col(1)
NAME
col - filter reverse line-feeds and backspaces
SYNOPSIS
col [-blfxp]
DESCRIPTION com-
mand, and output resulting from use of the tbl preprocessor (see nroff(1) and tbl (1)).
If the -b option is given, col assumes that the output device in use is not capable of backspacing. In
A cA this case, if two or more characters are to appear in the same place, only the last one read is output.
If the -l option is given, col assumes the output device is a line printer (rather than a character
printer) and removes backspaces in favor of multiply overstruck full lines. It generates the minimum
number of print operations necessary to generate the required number of overstrikes. (All but the last
print operation on a line are separated by carriage returns (\r); the last print operation is terminated by
a newline ( converts white space to tabs on output wherever possible to shorten
printing time.
The ASCII control characters SO (\016) and SI (\01 accepted are space, backspace, tab, return, new-line, SI , SO , and VT
, (\013), and ESC followed by 7, 8, or 9. The VT character is an alternate form of full reverse line-feed,
included for compatibility with some earlier programs of this type. All other non-printing characters are
ignored.
Normally, col ignores any unrecognized escape sequences found in its input; the -p option can be used
to cause col to output these sequences as regular characters, subject to overprinting from reverse line
motions. The use of this option is highly discouraged unless the user is fully aware of the textual position
of the escape sequences.
EXTERNAL INFLUENCES
Environment Variables
LANG provides a default value for the internationalization variables that are unset or null. If LANG is
unset or null, the default value of "C" (see lang (5)) is used. If any of the internationalization variables
contains an invalid setting, col102 Hewlett-Packard Company −1− HP-UX 11i Version 2: August 2003
col(1) col(1)
EXAMPLES
col is used most often with nroff and tbl. A common usage is:
tbl filename | nroff -man | col | more -s
(very similar to the usual man(1) command). This command allows vertical bars and outer boxes to be
printed for tables. The file is run through the tbl preprocessor, and the output is then piped through
nroff, formatting the output using the -man macros. The formatted output is then piped through col,
which sets up the vertical bars and aligns the columns in the file. The file is finally piped through the
more command, which prints the output to the screen with underlining and highlighting substituted for
italic and bold typefaces. The -s option deletes excess space from the output so that multiple blank lines
are not printed to the screen.
SEE ALSO
nroff(1), tbl(1), ul(1), man(5). A cA
NOTES
The input format accepted by col matches the output produced by nroff with either the -T37 or -
Tlp options. Use -T37 (and the -f option of col) if the ultimate disposition of the output of col is a
device that can interpret half-line motions, and -Tlp otherwise.
BUGS
Cannot back up more than 128 lines. Cannot back up across page boundaries.
There is a maximum limit for the number of characters, including backspaces and overstrikes, on a line.
The maximum limit is at least 800 characters.
Local vertical motions that would result in backing up over the first line of the document are ignored. As
a result, the first line must not have any superscripts.
WARNINGS
This command is likely to be withdrawn from X/Open standards. Applications using this command might
not be portable to other vendors’ systems.
STANDARDS CONFORMANCE
col: SVID2, SVID3, XPG2, XPG3
HP-UX 11i Version 2: August 2003 −2− Hewlett-Packard Company Section 1−−103
comb(1) comb(1)
NAME
comb - combine SCCS deltas
SYNOPSIS
comb [-p SID ] [-clist ] [-o] [-s] file ...
DESCRIPTION pro-
A cA cessed; non-SCCS files and unreadable files are silently ignored. The generated shell procedure is written
on the standard output.
Options
comb recognizes the following options. Each is explained as if only one named file is to be processed, but
the effects of any option apply independently to each named file.
-pSID The S CCS origi-
nal file.
-s This option causes comb to generate a shell procedure which, when run, produces a
report giving, for each file: the file name, size (in blocks) after combining, original
size (also in blocks), and percentage change computed by:
100 × (original − combined) / original
It is recommended that this option be used before any SCCS files are actually com-
bined to determine exactly how much space is saved by the combining process.
If no options are specified, comb preserves only leaf deltas and the minimal number of ancestors needed
to preserve the tree.
EXTERNAL INFLUENCES
International Code Set Support
Single- and multi-byte character code sets are supported.
DIAGNOSTICS
Use sccshelp (1) for explanations.
EXAMPLES
comb may rearrange the shape of the tree of deltas. Combining files may or may not save space; in fact,
it is possible for the reconstructed file to actually be larger than the original.
Section 1−−104 Hewlett-Packard Company −1− HP-UX 11i Version 2: August 2003
comb(1) comb(1)
FILES
s.COMB????? Temporary file
comb????? Temporary file
SEE ALSO
admin(1), delta(1), get(1), sccshelp(1), prs(1), sh(1), sccsfile(4).
A cA
HP-UX 11i Version 2: August 2003 −2− Hewlett-Packard Company Section 1−−105
comm(1) comm(1)
NAME
comm - select or reject lines common to two sorted files
SYNOPSIS
comm [-[123] ] file1 file2
DESCRIPTION
comm reads file1 and file2 , which should be ordered in increasing collating sequence (see sort (1) and
Environment Variables below), and produces a three-column output:
Column 1: Lines that appear only in file1 ,
Column 2: Lines that appear only in file2 ,
Column 3: Lines that appear in both files.
A cA If - is used for file1 or file2 , the standard input is used.
Options 1, 2, or 3 suppress printing of the corresponding column. Thus comm -12 prints only the lines
common to the two files; comm -23 prints only lines in the first file but not in the second; comm -123
does nothing useful.
EXTERNAL INFLUENCES
Environment Variables
LC_COLLATE determines the collating sequence comm expects from the input files.
LC_MESSAGES determines the language in which messages are displayed.
If LC_MESSAGES is not specified in the environment or is set to the empty string, the value of LANG
determines the language in which messages are displayed. inter-
nationalization variable contains an invalid setting, comm behaves as if all internationalization variables
are set to ‘‘C’’. See environ (5).
EXAMPLES
The following examples assume that file1 and file2 have been ordered in the collating sequence
defined by the LC_COLLATE or LANG environment variable.
Print all lines common to file1 and file2 (in other words, print column 3):
comm -12 file1 file2
Print all lines that appear in file1 but not in file2 (in other words, print column 1):
comm -23 file1 file2
Print all lines that appear in file2 but not in file1 (in other words, print column 2):
comm -13 file1 file2
SEE ALSO
cmp(1), diff(1), sdiff(1), sort(1), uniq(1).
STANDARDS CONFORMANCE
comm: SVID2, SVID3, XPG2, XPG3, XPG4, POSIX.2
Section 1−−106 Hewlett-Packard Company −1− HP-UX 11i Version 2: August 2003
command(1) command(1)
NAME
command - execute a simple command
SYNOPSIS
command command_name [ argument ... ]
DESCRIPTION
command enables the shell to treat the arguments as a simple command, suppressing the shell function
lookup.
If command_name is not the name of the function, the effect of command is the same as omitting com-
mand.
OPERANDS
command recognizes the following operands: A cA
command_name The name of a HP-UX command or a shell built-in command.
argument One or more strings to be interpreted as arguments to command_name .
The command command is necessary to allow functions that have the same name as a command to call
the command (instead of a recursive call to the function).
Nothing in the description of command is intended to imply that the command line is parsed any
differently than any other simple command. For example,
command a | b ; c
is not parsed in any special way that causes | or ; to be treated other than a pipe operator or semicolon
or that prevents function lookup on b or c.
EXTERNAL INFLUENCE
Environment Variables
PATH determines the search path used during the command search.
RETURN VALUE
command exits with one of the following values:
• If command fails:
126 The utility specified by the command_name is found but not executable.
127 An error occurred in the command utility or the utility specified by
command_name is not found.
• If command does not fail:
The exit status of command is the same as that of the simple command specified by the
arguments:
command_name [ argument ... ]
EXAMPLES
Create a version of the cd command that always prints the name of the new working directory whenever
it is used:
cd() {
command "$@" >/dev/null
pwd
}
Circumvent the redefined cd command above, and change directories without printing the name of the
new working directory:
command cd
SEE ALSO
getconf(1), sh-posix(1), confstr(3C).
HP-UX 11i Version 2: August 2003 −1− Hewlett-Packard Company Section 1−−107
command(1) command(1)
STANDARDS CONFORMANCE
command: XPG4, POSIX.2
A cA
Section 1−−108 Hewlett-Packard Company −2− HP-UX 11i Version 2: August 2003
compact(1) compact(1)
NAME
compact, uncompact, ccat - compact and uncompact files, and cat them
SYNOPSIS
compact [ name ...]
uncompact [ name ...]
ccat [ file ...]
DESCRIPTION
compact compresses the named files using an adaptive Huffman code. If no file names are given, stan-
dard input is compacted and sent to the standard output. compact operates as an on-line algorithm.
Each time a byte is read, it is encoded immediately according to the current prefix code. This code is an
optimal Huffman code for the set of frequencies seen so far. It is unnecessary to attach a decoding tree in A cA
front of the compressed file because the encoder and the decoder start in the same state and stay syn-
chronized. Furthermore, compact and uncompact can operate as filters. In particular,
... | compact | uncompact | ...
operates as a (very slow) no-op.
When an argument file is given, it is compacted, the resulting file is placed in file .C, and file is unlinked.
The first two bytes of the compacted file code the fact that the file is compacted. These bytes are used to
prohibit recompaction.
The amount of compression to be expected depends on the type of file being compressed. Typical file size
reduction (in percent) through compression are: Text, 38%; Pascal Source, 43%; C Source, 36%; and
Binary, 19%.
uncompact restores the original file from a file compressed by compact. If no file names are specified,
standard input is uncompacted and sent to the standard output.
ccat cats the original file from a file compressed by compact, without uncompressing the file.
WARNINGS
On short-filename systems, the last segment of the file name must contain 12 or fewer characters to allow
space for the appended .C.
DEPENDENCIES
NFS
Access control list entries of networked files are summarized (as returned in st_mode by stat()), but
not copied to the new file (see stat (2)).
AUTHOR
compact was developed by Colin L. Mc Master.
FILES
*.C compacted file created by compact, removed by uncompact
SEE ALSO
compress(1), pack(1), acl(5), aclv(5).
Gallager, Robert G., ‘‘Variations on a Theme of Huffman,’’ I.E.E.E. Transactions on Information Theory ,
vol. IT-24, no. 6, November 1978, pp. 668 - 674.
HP-UX 11i Version 2: August 2003 −1− Hewlett-Packard Company Section 1−−109
compress(1) compress(1)
NAME
compress, uncompress, zcat, compressdir, uncompressdir - compress and expand data
SYNOPSIS
Compress Files
compress [-d] [-f|-z] [-z] [-v] [-c] [-V] [-b maxbits ] [ file ... ]
uncompress [-f] [-v] [-c] [-V] [ file ... ]
zcat [-V] [ file ... ]
DESCRIPTION
The following commands compress and uncompress files and directory subtrees as indicated:
compress Reduce the size of the named file s using adaptive Lempel-Ziv coding. If reduc-
tion file s to original form. Resulting files have the original
filename, ownership, and permissions, and the .Z filename suffix is removed.
If no file is specified, or if - is specified, standard input is uncompressed to the
standard output.
zcat Restore the compressed file s-
bits ) per code, and the distribution of common substrings. Typically, text such as source code or English
is reduced by 50-60 percent. Compression is generally much better than that achieved by Huffman cod-
ing .
Section 1−−110 Hewlett-Packard Company −1− HP-UX 11i Version 2: August 2003
compress(1) compress(1), compress, uncompress, and zcat
behave as if all internationalization variables are set to "C". See environ (5).
RETURN VALUE
These commands return the following values upon completion:
0 Completed successfully.
2 Last file is larger after (attempted) compression.
1 An error occurred.
DIAGNOSTICS
Usage: compress [-f|-z] [-dvcV] [ sys-
tem on which the source file resides. Make the source file name shorter and try again.
HP-UX 11i Version 2: August 2003 −2− Hewlett-Packard Company Section 1−−111
compress(1) compress(1).
SEE ALSO
compact(1), pack(1), acl(5).
STANDARDS CONFORMANCE
compress: XPG4
Section 1−−112 Hewlett-Packard Company −3− HP-UX 11i Version 2: August 2003
compress(1) compress(1)
A cA
HP-UX 11i Version 2: August 2003 −4− Hewlett-Packard Company Section 1−−113
convert(1) convert(1)
NAME
convert - convert an audio file
SYNOPSIS
/opt/audio/bin/convert [source_file ] [target_file ] [-sfmt format ] [-dfmt format ]
[-ddata data_type ] [-srate rate ] [-drate rate ]
[-schannels number] [-dchannels number]
DESCRIPTION
This command converts audio files from one supported file format, data format, sampling rate, and
number of channels to another. The unconverted file is retained as a source file.
-sfmt format -dfmt format
A cA are the file formats for the source and destination files. Each format can be one of these:
au Sun file format
snd NeXT file format
wav Microsoft RIFF Waveform file format
u MuLaw format
al ALaw
l16 linear 16-bit format
lo8 offset (unsigned) linear 8-bit format
l8 linear 8-bit format
If you omit -sfmt, convert uses the header or filename extension in the source file. You can
omit -dfmt if you supply a filename extension for the destination file.
-ddata data_type
is the data type for the destination files. data_type can be one of these:
u MuLaw
al ALaw
l16 linear 16-bit
lo8 offset (unsigned) linear 8-bit data
l8 linear 8-bit data
If you omit -ddata, convert uses an appropriate data type, normally the data type of the source
file.
-srate rate -drate rate
are the number of samples per second for the source and destination file. Typical sampling rates
range from 8 to 11k (for voice quality) to 44,100 (for CD quality). You can use k to indicate
thousands. For example, 8k means 8,000 samples per second.
If you omit -srate, convert uses a rate defined by the source file header or its filename exten-
sion. For a raw file with no extension, 8,000 is used. By playing the file, you can determine if 8,000
samples is too fast or too slow.
If you omit -drate, convert uses a sampling rate appropriate for the destination file format; if
possible, it matches the sampling rate of the source file.
-schannels number -dchannels number
are the number of channels in the source and destination files. Use 1 for mono; 2 for stereo. If -
schannels is omitted, convert uses the information in the header; for raw data files, it uses
mono.
If -dchannels is omitted, convert matches what was used for the source file (through the
header or -schannels option); for raw data files, it uses mono.
EXAMPLES
Convert a raw data file to a headered file.
Section 1−−114 Hewlett-Packard Company −1− HP-UX 11i Version 2: August 2003
convert(1) convert(1)
cd /opt/audio/bin
convert beep.l16 beep.au
Convert a raw data file to a headered file when the source has no extension, was sampled at 11,025 per
second, and has stereo data.
cd /opt/audio/bin
convert beep beep.au -sfmt l16 -srate 11025 -schannels 2
To save disk space, convert an audio file with CD quality sound to voice quality sound.
cd /opt/audio/bin
convert idea.au idea2.au -ddata u -drate 8k -dchannels 1
AUTHOR
convert was developed by HP. A cA
Sun is a trademark of Sun MicroSystems, Inc.
NeXT is a trademark of NeXT Computers, Inc.
Microsoft is a trademark of Microsoft Corporation.
SEE ALSO
audio(5), asecure(1M), aserver(1M), attributes(1), send_sound(1).
Using the Audio Developer’s Kit
HP-UX 11i Version 2: August 2003 −2− Hewlett-Packard Company Section 1−−115
cp(1) cp(1)
NAME
cp - copy files and directory subtrees
SYNOPSIS
cp [-f-i] [-p] [-e extarg ] file1 new_file
cp [-f-i] [-p] [-e extarg ] file1 [file2 ... ] dest_directory
cp [-f-i] [-p] [-R-r] [-e extarg ] directory1 [ directory2 ... ] dest_directory
DESCRIPTION
cp copies:
• file1 to new or existing new_file ,
•
A cA •
file1 to existing dest_directory,
file1 , file2 , ... to existing dest_directory,
• directory subtree directory1 , to new or existing dest_directory. or
• multiple directory subtrees directory1 , directory2 , ... to new or existing dest_directory.
cp fails if file1 and new_file are the same (be cautious when using shell metacharacters). When destina-
tion is a directory, one or more files are copied into that directory. If two or more files are copied, the des-
tination must be a directory. When copying a single file to a new file, if new_file exists, its contents are
destroyed.
If the access permissions of the destination dest_directory or existing destination file new_file forbid writ-
ing, cp aborts and produces an error message ‘‘cannot create file ’’.
To copy one or more directory subtrees to another directory, the -r option is required. The -r option is
ignored if used when copying a file to another file or files to a directory.
If new_file is a link to an existing file with other links, cp overwrites the existing file and retains all
links. If copying a file to an existing file, cp does not change existing file access permission bits, owner,
or group.. The last modification time of new_file (and last access time, if new_file did not
exist) and the last access time of the source file1 are set to the time the copy was made.
Options
-i (interactive copy) Cause cp to write a prompt to standard error and wait for a response before
copying a file that would overwrite an existing file. If the response from the standard input is
affirmative, the file is copied if permissions allow the copy. If the -i (interactive) and -f
(forced-copy) options are both specified, the -i option is ignored.
-f Force existing destination pathnames to be removed before copying, without prompting for
confirmation. This option has the effect of destroying and replacing any existing file whose name
and directory location conflicts with the name and location of the new file created by the copy
operation.
-p (preserve permissions) Causes cp to preserve in the copy as many of the modification time,
access time, file mode, user ID, and group ID as allowed by permissions.
-r (recursive subtree copy) Cause cp to copy the subtree rooted at each source directory to
dest_directory. If dest_directory exists, it must be a directory, in which case cp creates a direc-
tory within dest_directory with the same name as file1 and copies the subtree rooted at file1 to
dest_directory/file1 . An error occurs if dest_directory/file1 already exists. If dest_directory does
not exist, cp creates it and copies the subtree rooted at file1 to dest_directory. Note that cp
-r cannot merge subtrees.
Usually normal files and directories are copied. Character special devices, block special devices,
network special files, named pipes, symbolic links, and sockets are copied, if the user has access
to the file; otherwise, a warning is printed stating that the file cannot be created, and the file is
skipped.
dest_directory should not reside within directory1 , nor should directory1 have a cyclic directory
structure, since in both cases cp attempts to copy an infinite amount of data.
Section 1−−116 Hewlett-Packard Company −1− HP-UX 11i Version 2: August 2003
cp(1) cp(1)
-R (recursive subtree copy) The -R option is identical to the -r option with the exception that
directories copied by the -R option are created with read, write, and search permission for the
owner. User and group permissions remain unchanged.
With the -R and -r options, in addition to regular files and directories, cp also copies FIFOs,
character and block device files and symbolic links. Only superusers can copy device files. All
other users get an error. Symbolic links are copied so the target points to the same location that
the source did.
Warning: While copying a directory tree that has device special files, use the -r option; other-
wise, an infinite amount of data is read from the device special file and is duplicated as a special
file in the destination directory occupying large file system space.
-e extarg
Specifies the handling of any extent attributes of the file[s] to be copied. extarg takes one of the
following values.
A cA
warn Issues a warning message if extent attributes cannot be copied, but copies the
file anyway.
ignore Does not copy the extent attributes.
force Fails to copy the file if the extent attribute can not be copied.
Extent attributes can not be copied if the files are being copied to a file system which does not
support extent attributes or if that file system has a different block size than the original. If -e
is not specified, the default value for extarg is warn.
EXTERNAL INFLUENCES
Environment Variables
LC_CTYPE determines the interpretation of text as single and/or multi-byte characters.
LANG and LC_CTYPE determine the local language equivalent of y (for yes/no queries)., cp behaves as if all internationalization variables are set to "C". See environ (5).
EXAMPLES
The following command moves the directory sourcedir and its contents to a new location (targetdir ) in the
file system. Since cp creates the new directory, the destination directory targetdir should not already
exist.
cp -r sourcedir targetdir && rm -rf sourcedir
The -r option copies the subtree (files and subdirectories) in directory sourcedir to directory targetdir .
The double ampersand (&&) causes a conditional action. If the operation on the left side of the && is
successful, the right side is executed (and removes the old directory). If the operation on the left of the
&& is not successful, the old directory is not removed.
This example is equivalent to:
mv sourcedir targetdir
To copy all files and directory subtrees in the current directory to an existing targetdir , use:
cp -r * targetdir
HP-UX 11i Version 2: August 2003 −2− Hewlett-Packard Company Section 1−−117
cp(1) cp(1)
DEPENDENCIES
NFS
Access control lists of networked files are summarized (as returned in st_mode by stat()), but not
A cA copied to the new file. When using mv or ln on such files, a + is not printed after the mode value when
asking for permission to overwrite a file.
AUTHOR
cp was developed by AT&T, the University of California, Berkeley, and HP.
SEE ALSO
cpio(1), ln(1), mv(1), rm(1), link(1M), lstat(2), readlink(2), stat(2), symlink(2), symlink(4), acl(5), aclv(5).
STANDARDS CONFORMANCE
cp: SVID2, SVID3, XPG2, XPG3, XPG4, POSIX.2
Section 1−−118 Hewlett-Packard Company −3− HP-UX 11i Version 2: August 2003
cpio(1) cpio(1)
NAME
cpio - copy file archives in and out; duplicate directory trees
SYNOPSIS
cpio -o [-e extarg ] [achvxABC]
cpio -i[bcdfmrstuvxBPRSU6] [pattern ...]
cpio -p [-e extarg ] [adlmruvxU] directory
DESCRIPTION
The cpio command saves and restores archives of files on magnetic tape, other devices, or a regular file,
and copies files from one directory to another while replicating the directory tree structure. When cpio
completes processing the files, it reports the number of blocks written.
cpio -o (copy out, export) Read standard input to obtain a list of path names, and copy those files to
A cA
standard output together with path name and status information. The output is padded to a
512-byte boundary.
cpio -i (copy in, import) Extract files from standard input, which is assumed to be the result of a
previous cpio -o.
If pattern ..., is specified, only the files with names that match a pattern according to the
rules of Pattern Matching Notation (see regexp (5)) are selected. A leading ! on a pattern
indicates that only those names that do not match the remainder of the pattern should be
selected. Multiple patterns can be specified. The patterns are additive. If no pattern is
specified, the default is * (select all files). See the f option, as well.
Extracted files are conditionally created and copied into the current directory tree, as deter-
mined by the options described below. The permissions of the files match the permissions of
the original files when the archive was created by cpio -o unless the U option is used.
File owner and group are that of the current user unless the user has appropriate privileges,
in which case cpio retains the owner and group of the files of the previous cpio -o.
cpio -p (pass through) Read standard input to obtain a list of path names of files which are then
conditionally created and copied into the destination directory tree as determined by the
options described below. directory must exist. Destination path names are interpreted rela-
tive to directory .
With the -p option, when handling a link, only the link is passed and no data blocks are
actually read or written. This is especially noteworthy with cpio -pl, where it is very
possible that all the files are created as links, such that no blocks are written and "0 blocks"
is reported by cpio. (See below for a description of the -l option.)
Options
cpio recognizes the following options, which can be appended as appropriate to -i, -o, and -p. White
space and hyphens are not permitted between these options and -i, -o, or -p.
a Reset access times of input files after they are copied.
b Swap both bytes and half-words. Use only with -i. See the P option for details; see also
the s and S options.
c Write or read header information in ASCII character form for portability.
d Create directories as needed.
-e extarg
Specifies the handling of any extent attributes of the file(s) to be archived or copied.
extarg takes one of the following values.
warn Archive or copy the file and issue a warning message if extent attributes can-
not be preserved.
ignore Do not issue a warning message even if extent attributes cannot be preserved.
force Any file(s) with extent attributes will not be archived and a warning message
will be issued.
When using the -o option, extent attributes are not preserved in the archive. Further-
more, the -p option will not preserve extent attributes if the files are being copied to a
HP-UX 11i Version 2: August 2003 −1− Hewlett-Packard Company Section 1−−119
cpio(1) cpio(1)
file system that does not support extent attributes. If -e is not specified, the default
value for extarg is warn.
f Copy in all files except those selected by pattern ....
h Follow symbolic links as though they were normal files or directories. Normally, cpio
archives the link.
l Whenever possible, link files rather than copying them. This option does not destroy
existing files. Use only with -p.
m Retain previous file modification time. This option does not affect directories that are
being copied.
r Rename files interactively. If the user types a null line, the file is skipped.
A cA s Swap all bytes of the file. Use only with -i. See the P option for details; see also the s
and S options.
t Print only a table of contents of the input. No files are created, read, or copied.
u Copy unconditionally (normally, an older file does not replace a newer file with the same
name).
v Print a list of file names as they are processed. When used with the modified, and filename is the path name of the file as recorded in
the archive.
x Save or restore device special files. Since mknod() is used to recreate these files on a
restore, -ix and -px can be used only by users with appropriate privileges (see
mknod(2)). This option is intended for intrasystem (backup) use only. Restoring device
files from previous versions of the OS, or from different systems can be very dangerous.
cpio may prevent the restoration of certain device files from the archive.
A Suppress warning messages regarding optional access control list entries. cpio does not
back up optional access control list entries in a file’s access control list (see acl (5)). Nor-
mally, a warning message is printed for each file that has optional access control list
entries.
B Block input/output at 5120 bytes to the record (does not apply to cpio -p). This option
is meaningful only with data directed to or from devices that support variable-length
records such as magnetic tape.
C Have cpio checkpoint itself at the start of each volume. If cpio is writing to a stream-
ing tape drive with immediate-report mode enabled and a write error occurs, it normally
aborts and exits with return code 2. With this option specified, cpio instead automati-
cally restarts itself from the checkpoint and rewrites the current volume. Alternatively,
if cpio is not writing to such a device and a write error occurs, cpio normally continues
with the next volume. With this option specified, however, the user can choose to either
ignore the error or rewrite the current volume.
P Read a file written on a PDP-11 or VAX system (with byte-swapping) that did not use the
c option. Use only with -i. Files copied in this mode are not changed. Non-ASCII files
are likely to need further processing to be readable. This processing often requires
knowledge of file contents, and thus cannot always be done by this program. The b, s,
and S options can be used when swapping all the bytes on the tape (rather than just in
the headers) is appropriate. In general, text is best processed with P and binary data
with one of the other options.
R Resynchronize automatically when cpio goes "out of phase", (see the DIAGNOSTICS
section).
S Swap all half-words in the file. Use only with -i. See the P option for details; see also
the b and s options.
Section 1−−120 Hewlett-Packard Company −2− HP-UX 11i Version 2: August 2003
cpio(1) cpio(1)
U Use the process’s file-mode creation mask (see umask(2)) to modify the mode of files
created, in the same manner as creat (2).
6 Process a UNIX Sixth-Edition-format file. Use only with -i.
Note that cpio archives created using a raw device file must be read using a raw device file.
When the end of the tape is reached, cpio prompts the user for a new special file and continues.
If you want to pass one or more metacharacters to cpio without the shell expanding them, be sure to
precede each of them with a backslash (\).
Device files written with the -ox option (such as /dev/tty03) do not transport to other implementa-
tions of HP-UX.
EXTERNAL INFLUENCES
Environment Variables
A cA
LC_COLLATE determines the collating sequence used in evaluating pattern matching notation for file
name generation.
LC_CTYPE determines the interpretation of text as single and/or multi-byte characters, and the charac-
ters matched by character class expressions in pattern matching notation.
LC_TIME determines the format and content of date and time strings output when listing the contents of
an archive with the v option.
LANG determines the language in which messages are displayed.
If LC_COLLATE, LC_CTYPE, or inter-
nationalization variable contains an invalid setting, cpio behaves as if all internationalization variables
are set to "C". See environ (5).
RETURN VALUE
cpio returns the following exit codes:
0 Successful completion. Review standard error for files that could not be transferred.
1 Error during resynchronization. Some files may not have been recovered.
2 Out-of-phase error. A file header is corrupt or in the wrong format.
DIAGNOSTICS
Out of phase--get help
Perhaps the "c" option should[n’t] be used
cpio -i could not read the header of an archived file. The header is corrupt or it was written in a
different format. Without the R option, cpio returns an exit code of 2.
If no file name has been displayed yet, the problem may be the format. Try specifying a different
header format option: null for standard format; c for ASCII; b, s, P, or S, for one of the byte-
swapping formats; or 6 for UNIX Sixth Edition.
Otherwise, a header may be corrupt. Use the R option to have cpio attempt to resynchronize the
file automatically. Resynchronizing means that cpio tries to find the next good header in the
archive file and continues processing from there. If cpio tries to resynchronize from being out of
phase, it returns an exit code of 1.
Other diagnostic messages are self-explanatory.
EXAMPLES
Copy the contents of a directory into a tape archive:
ls | cpio -o > /dev/rmt/c0t0d0BEST
Duplicate a directory hierarchy:
HP-UX 11i Version 2: August 2003 −3− Hewlett-Packard Company Section 1−−121
cpio(1) cpio(1)
cd olddir
find . -depth -print | cpio -pd newdir
The trivial case
find . -depth -print | cpio -oB >/dev/rmt/c0t0d0BEST
can be handled more efficiently by:
find . -cpio /dev/rmt/c0t0d0BEST
WARNINGS
Because of industry standards and interoperability goals, cpio does not support the archival of files
larger than 2 GB or files that have user/group IDs greater than 60 K. Files with user/group IDs greater
than 60 K are archived and restored under the user/group ID of the current process.
A cA Do not redirect the output of cpio to a named cpio archive file residing in the same directory as the ori-
ginal files belonging to that cpio archive. This can cause loss of data.
cpio strips any leading ./ characters in the list of file names piped to it.
Path names are restricted to PATH_MAX characters (see <limits.h> and limits (5)). If there are too
many unique linked files, the program runs out of memory to keep track of them. Thereafter, linking
information is lost. Only users with appropriate privileges can copy special files.
cpio tapes written on HP machines with the -ox[c] options can sometimes mislead (non-HP) versions of
cpio that do not support the x option. If a non-HP (or non-AT&T) version of cpio happens to be
modified so that the (HP) cpio recognizes it as a device special file, a spurious device file might be
created.
If /dev/tty is not accessible, cpio issues a complaint and exits.
The -pd option does not create the directory typed on the command line.
The -idr option does not make empty directories.
The -plu option does not link files to existing files.
POSIX defines a file named TRAILER!!! as an end-of-archive marker. Consequently, if a file of that
name is contained in a group of files being written by cpio -o, the file is interpreted as end-of-archive,
and no remaining files are copied. The recommended practice is to avoid naming files anything that
resembles an end-of-archive file name.
To create a POSIX-conforming cpio archive, the c option must be used. To read a POSIX-conforming
cpio archive, the c option must be used and the b, s, S, and 6 options should not be used. If the user
does not have appropriate privileges, the U option must also be used to get POSIX-conforming behavior
when reading an archive. Users with appropriate privileges should not use this option to get POSIX-
conforming behavior.
DEPENDENCIES
If the path given to cpio contains a symbolic link as the last element, this link is traversed and path
name resolution continues. cpio uses the symbolic link’s target, rather than that of the link.
SEE ALSO
ar(1), find(1), tar(1), cpio(4), acl(5), environ(5), lang(5), regexp(5).
STANDARDS CONFORMANCE
cpio: SVID2, SVID3, XPG2, XPG3
Section 1−−122 Hewlett-Packard Company −4− HP-UX 11i Version 2: August 2003
cpp(1) cpp(1)
NAME pre-
ferred way to invoke cpp is through the cc command, since the functionality of cpp may someday be
moved elsewhere. See m4(1) for a general macro processor. A cA 188 bytes. This option serves to eliminate ‘‘Macro param too large’’,
‘‘Macro invocation too large’’, ‘‘Macro param too large after substitution’’, ‘‘Quoted macro
param too large’’, ‘‘Macro buffer too small’’, ‘‘Input line too long’’, and ‘‘Cat direc-
tory argu-
ment.
HP-UX 11i Version 2: August 2003 −1− Hewlett-Packard Company Section 1−−123
cpp(1) cpp(1)
A cA−#endif pairs. This
makes it easier, when reading the source, to match #if, #ifdef, and #ifndef direc-
tives with their associated #endif directive.
#elif constant-expression
Equivalent to:
#else
#if constant-expression
#else Reverses the notion of the test directive that matches this directive. Thus, if lines previ-
ous.
Section 1−−124 Hewlett-Packard Company −2− HP-UX 11i Version 2: August 2003
cpp(1) cpp(1)
#ifdef name The lines following appear in the output if and only if name has been the subject of a pre-
vious .
A cA
#undef name Cause the definition of name (if any) to be forgotten from now on.
The test directives and the possible #else directives can be nested. cpp supports names up to 255 char-
acters in length.
Notes
The macro substitution scheme has been changed. Previous versions of cpp saved macros in a macro
definition table whose table size is 128 000 bytes by default. The current version of cpp replaces this
macro definition table with several small buffers. The default size of the small buffers is 8 188 bytes.
EXTERNAL INFLUENCES
Environment Variables
LC_CTYPE determines the interpretation of comments and string literals as single- or multibyte charac-
ters.,
it defaults to "C" (see lang (5)). If any internationalization variable contains an invalid setting, cpp
behaves as if all internationalization variables are set to "C". See environ (5).
Workstation.)
HP-UX 11i Version 2: August 2003 −3− Hewlett-Packard Company Section 1−−125
cpp(1) cpp(1)
FILES
/usr/include Standard directory for #include files
SEE ALSO
m4(1).
STANDARDS CONFORMANCE
cpp: SVID2, SVID3, XPG2
A cA
Section 1−−126 Hewlett-Packard Company −4− HP-UX 11i Version 2: August 2003
cront:
A cA
crontab [file] Create or replace your crontab file by copying the specified file , or stan-
dard input if file is omitted or - is specified as file , into the crontab direc-
tory, on-
tab:
minute The minute of the hour, 0−59
hour The hour of the day, 0−23
monthday The day of the month, 1−31
month The month of the year, 1−12
weekday The day of the week, 0−6, exe-
cuted.
HP-UX 11i Version 2: August 2003 −1− Hewlett-Packard Company Section 1−−127
crontab(1) crontab(1)
Blank lines and those whose first non-blank character is # will be ignored.
cron invokes the command from the user’s HOME directory with the POSIX shell, (/usr/bin/sh). It
runs in the c queue (see queuedefs (4))..
A cA, crontab behaves as if all international-
ization variables are set to "C". See environ (5). EDITOR determines the editor to be invoked when -e
option is specified. The default editor is vi.
WARNINGS
Be sure to redirect the standard output and standard error from commands. If this is not done, any gen-
erated
Section 1−−128 Hewlett-Packard Company −2− HP-UX 11i Version 2: August 2003
crypt(1) crypt(1)
NAME
crypt - encode/decode files
SYNOPSIS
crypt [ password ]
DESCRIPTION. A cA
Files encrypted by crypt are compatible with those treated by the ed editor in encryption mode (see
ed(1)).
Security of encrypted files depends on three factors: the fundamental method must be hard to solve;
direct search of the key space must be infeasible; ‘‘sneak paths’’ by which keys or clear text can become
visible must be minimized.
crypt implements a one-rotor machine designed along the lines of the German Enigma, but with a 256-
element rotor. Methods of attack on such machines are widely known; thus crypt provides minimal secu-
rity.
The transformation of a key into the internal settings of the machine is deliberately designed to be expen-
sive; i.e., to take a substantial fraction of a second to compute. However, if keys are restricted to, for
example, three lowercase letters, then encrypted files can be read by expending only a substantial frac-
tion of five minutes of machine time.
Since the key is an argument to the crypt command, it is potentially visible to users executing the ps
or a derivative (see ps (1)). The choice of keys and key security are the most vulnerable aspect of crypt.
EXAMPLES
The following example demonstrates the use of crypt to edit a file that the user wants to keep strictly
confidential:
$ crypt <plans >plans.x
key: violet
$ rm plans
...
$ vi -x plans.x
key: violet
...
:wq
$
...
$ crypt <plans.x | pr
key: violet
Note that the -x option is the encryption mode of vi, and prompts the user for the same key with which
the file was encrypted.
WARNINGS
/dev/tty for typed key
SEE ALSO
ed(1), makekey(1), stty(1).
HP-UX 11i Version 2: August 2003 −1− Hewlett-Packard Company Section 1−−129
csh(1) csh(1)
NAME
csh - a shell (command interpreter) with C-like syntax
SYNOPSIS
csh [-cefinstvxTVX] [ command_file ] [ argument_list ... ]
DESCRIPTION
csh is a command language interpreter that incorporates a command history buffer, C-like syntax, and
job control facilities.
Command Options
Command options are interpreted as follows:
A cA -c Read commands from the (single) following argument which must be present. Any
remaining arguments are placed in argv.
-e C shell exits if any invoked command terminates abnormally or yields a non-zero exit
status.
-f Suppress execution of the .cshrc file in your home directory, thus speeding up shell
start-up time.
-i Force csh to respond interactively when called from a device other than a computer ter-
minal (such as another computer). csh normally responds non-interactively. If csh is
called from a computer terminal, it always responds interactively, regardless of which
options are selected.
-n Parse but do not execute commands. This is useful for checking syntax in shell scripts.
All substitutions are performed (history, command, alias, etc.).
-s Take command input from the standard input.
-t Read and execute a single line of input.
-v Set the verbose shell variable, causing command input to be echoed to the standard
output device after history substitutions are made.
-x Set the echo shell variable, causing all commands to be echoed to the standard error
immediately before execution.
-T Disable the tenex features which use the ESC key for command/file name completion and
CTRL-D for listing available files (see the CSH UTILITIES section below)
-V Set the verbose variable before .cshrc is executed so that all .cshrc commands
are also echoed to the standard output.
-X Set the echo variable before .cshrc is executed so that all .cshrc commands are
also echoed to the standard output.
After processing the command options, if arguments remain in the argument list, and the -c, -i, -s, or
-t options were not specified, the first remaining argument is taken as the name of a file of commands to
be executed.
COMMANDS
A simple command is a sequence of words, the first of which specifies the command to be executed. A
sequence of simple commands separated by vertical bar (|) characters forms a pipeline. The output of
each command in a pipeline becomes the input for the next command in the pipeline. Sequences of pipe-
lines can be separated by semicolons (;) which causes them to be executed sequentially. A sequence of
pipelines can be executed in background mode by adding an ampersand character (&) after the last entry.
Any pipeline can be placed in parentheses to form a simple command which, in turn, can be a component
of another pipeline. Pipelines can also be separated by | | or && indicating, as in the C language, that
the second pipeline is to be executed only if the first fails or succeeds, respectively.
Jobs
csh associates a job with each pipeline and keeps a table of current jobs (printed by the jobs com-
mand) and assigns them small integer numbers. When a job is started asynchronously using &, the shell
prints a line resembling:
Section 1−−130 Hewlett-Packard Company −1− HP-UX 11i Version 2: August 2003
csh(1) csh(1)
[1] 1234
indicating that the job which was started asynchronously was job number 1 and had one (top-level) pro-
cess, whose process id was 1234.
If you are running a job and want to do something else, you can type the currently defined suspend char-
acter (see termio (7)) which sends a stop signal to the current job. csh then normally indicates that the
job has been ‘Stopped’, and prints another prompt. You can then manipulate the state of this job, putting
it in the background with the bg command, run some other commands, and then eventually bring the job
back into the foreground with the foreground command fg. A suspend takes effect immediately and is
like an interrupt in that pending output and unread input are discarded when it is typed. There is a
delayed suspend character which does not generate a stop signal until a program attempts to read (2) it.
This can usefully be typed ahead when you have prepared some commands for a job which you want to
stop after it has read them.
A job being run in the background stops if it tries to read from the terminal. Background jobs are nor-
A cA
mally allowed to produce output, but this can be disabled by giving the command stty tostop (see
stty (1)). If you set this tty option, background jobs stop when they try to produce output, just as they do
when they try to read input. Keyboard signals and line-hangup signals from the terminal interface are
not sent to background jobs on such systems. This means that background jobs are immune to the effects
of logging out or typing the interrupt, quit, suspend, and delayed suspend characters (see termio (7))., typing %1 & resumes job 1
in the background. Jobs can also be named by prefixes of the string typed in to start them if these
prefixes are unambiguous; thus %ex normally restarts a suspended ex(1) job, if there is only one
suspended job whose name begins with the string ex. It is also possible to say %?string which
specifies a job whose text contains string , if there is only one such job.
csh maintains a notion of the current and previous jobs. In output pertaining to jobs, the current job is
marked with a + and the previous job with a -. The abbreviation %+ refers to the current job and %-
refers to the previous job. For close analogy with the syntax of the history mechanism (described below),
%% is also a synonym for the current job.
csh learns immediately whenever a process changes state. It normally informs you whenever a job
becomes blocked so that no further progress is possible, but only just before printing a prompt. This is
done so that it does not otherwise disturb your work. If, however, you set the shell variable notify,
csh notifies you immediately of changes in status of background jobs. There is also a csh built-in com-
mand called notify which marks a single process so that any status change is immediately reported.
By default, notify marks the current process. Simply type notify after starting a background job to
mark it.
If you try to leave the shell while jobs are stopped, csh sends the warning message: You have
stopped jobs. Use the jobs command to see what they are. If you do this or immediately try to exit
again, csh does not warn you a second time, and the suspended jobs are terminated (see exit (2)).
Built-In Commands
Built-in commands are executed within the shell without spawning a new process. If a built-in command
occurs as any component of a pipeline except the last, it is executed in a subshell. The built-in commands
are:
alias
alias name
alias name wordlist
The first form prints all aliases. The second form prints the alias for name. The third form
assigns the specified wordlist as the alias of name. Command and file name substitution
are performed on wordlist . name cannot be alias or unalias.
bg [ %job ... ]
Put the current (job not specified).
HP-UX 11i Version 2: August 2003 −2− Hewlett-Packard Company Section 1−−131
csh(1) csh(1)
breaksw
Causes a break from a switch, resuming after the endsw.
case label :
A label in a switch statement as discussed below.
cd
cd directory_name
chdir
chdir directory_name
Change the shell’s current working directory to directory_name. If not specified,
directory_name defaults to your home directory.
If directory_name is not found as a subdirectory of the current working directory (and does
not begin with /, ./, or ../), each component of the variable cdpath is checked to see if it
A cA has a subdirectory directory_name. Finally, if all else fails, csh treats directory_name as
a shell variable. If its value begins with /, this is tried to see if it is a directory. See also
cd(1).
continue
Continue execution of the nearest enclosing while or foreach. The rest of the com-
mands on the current line are executed.
default:
Labels the default case in a switch statement. The default should come after all other
case labels.
dirs Prints the directory stack; the top of the stack is at the left; the first directory in the stack
is the current directory.
echo wordlist
echo -n wordlist
The specified words are written to the shell’s standard output, separated by spaces, and ter-
minated with a new-line unless the -n option is specified. See also echo (1).
else
end
endif
endsw See the descriptions of the foreach, if, switch, and while statements below.
eval arguments ...
(Same behavior as sh(1).) arguments are read as input to the shell and the resulting
command(s) executed. This is usually used to execute commands generated as the result of
command or variable substitution, since parsing occurs before these substitutions.
exec command
The specified command is executed in place of the current shell.
exit
exit (expression )
csh exits either with the value of the status variable (first form) or with the value of the
specified expression (second form).
fg [ %job ... ]
Brings the current (job not specified) or specified jobs into the foreground, continuing them
if they were stopped.
foreach name (wordlist )
...
end The variable name is successively set to each member of wordlist and the sequence of com-
mands between this command and the matching end are executed. (Both foreach and
end must appear alone on separate lines.)
The built-in command continue can be used to continue the loop prematurely; the built-
in command break to terminate it prematurely. When this command is read from the
terminal, the loop is read once, prompting with ? before any statements in the loop are
executed. If you make a mistake while typing in a loop at the terminal, use the erase or
line-kill character as appropriate to recover.
Section 1−−132 Hewlett-Packard Company −3− HP-UX 11i Version 2: August 2003
csh(1) csh(1)
glob wordlist
Like echo but no \ escapes are recognized and words are delimited by null characters in
the output. Useful in programs that use the shell to perform file name expansion on indicating how effective the internal hash table has been at locating
commands (and avoiding execs). An exec is attempted for each component of the path
where the hash function indicates a possible hit, and in each component that does not begin
with a /. A cA
history [-h] [-r] [ n ]
Displays the history event list. If n is given, only the n most recent events are printed. The
-r option reverses the order of printout to be most recent first rather than oldest first. The
-h option prints the history list without leading numbers for producing files suitable for the
source command.
if (expression ) command
If expression evaluates true, the single command with arguments is executed. Variable
substitution on command happens early, at the same time it does for the rest of the if
command. command must be a simple command; not a pipeline, a command list, a
parenthesized command list, or an aliased command. Input/output redirection occurs even
if expression is false, meaning that command is not executed (this is a bug).
if (expression1 ) then
...
else if (expression2 ) then
...
else
...
endif If expression1 is true, all commands down to the first else are executed; otherwise if
expression2 is true, all commands from the first else down to the second else are exe-
cuted, etc. Any number of else-if pairs are possible, but only one endif is needed.
The else part is likewise optional. (The words else and endif must appear at the
beginning of input lines. The if must appear alone on its input line or after an else.)
jobs [-l]
Lists active jobs. The -l option lists process IDs in addition to the usual information.
kill % job
kill - sig % job ...
kill pid
kill - sig pid ...
kill -l
Sends either the TERM (terminate) signal or the specified signal to the specified jobs or
processes. Signals are either given by number or by names (as given in
/usr/include/signal.h, stripped of the SIG prefix (see signal (2)). The signal
names are listed by kill -l. There is no default, so kill used alone does not send a
signal to the current job. If the signal being sent is TERM (terminate) or HUP (hangup),
the job or process is sent a CONT (continue) signal as well. See also kill (1).
limit[-h][ resource ][ maximum_use ]
Limits the usage by the current process and each process it creates not to (individually)
exceed maximum_use on the specified resource . If maximum_use is not specified, then the
current limit is displayed; if resource is not specified, then all limitations are given.
If the -h flag is specified,:
HP-UX 11i Version 2: August 2003 −4− Hewlett-Packard Company Section 1−−133
csh(1) csh(1)
Section 1−−134 Hewlett-Packard Company −5− HP-UX 11i Version 2: August 2003
csh(1) csh(1)
the n th entry in the stack. The elements of the directory stack are numbered from 0 start-
ing at the top. A synonym for popd, called rd, is provided for historical reasons. Its use is
not recommended because it is not part of the standard BSD csh and may not be sup-
ported in future releases.
pushd [ name ] [ +n ]
With no arguments, pushd exchanges the top two elements of the directory stack. Given a
name argument, pushd changes to the new directory (using cd) and pushes the old current
working directory (as in csw) onto the directory stack. With a numeric argument, pushd
rotates the n th argument of the directory stack around to be the top element and changes
to that directory. The members of the directory stack are numbered from the top starting
at 0. A synonym for pushd , called gd, is provided for historical reasons. Its use is not
recommended since it is not part of the standard BSD csh and may not be supported in
future releases.
A cA one of the system direc-
tories. set shows the value of all shell variables. Variables whose value is other
than a single word . In all cases the value is command and file-name expanded.
These arguments can be repeated to set multiple values in a single set command. Note,
however, that variable expansion happens for all arguments before any setting occurs.
setenv name value
Sets the value of environment variable name to be value , a single string. The most com-
monly used environment variables, USER, TERM, and PATH, are automatically imported to
and exported from the csh variables user , term , and path ; there is no need to use
setenv for these.
shift [ variable ]
If no argument is given, the members of argv are shifted to the left, discarding argv[1].
An error occurs if argv is not set or has less than two strings assigned to it. When vari-
able is specified, shift performs the same function on the specified variable .
source [-h] name
csh reads commands from name. source commands can be nested, but if nested too
deeply the shell may run out of file descriptors or reach the max stack size (see maxssiz (5)).
An error in a source at any level terminates all nested source commands. Normally,
input during source commands is not placed on the history list. The -h option can be
used to place commands in the history list without being executing them.
stop [ %job ... ]
Stops the current (no argument) or specified jobs executing in the background.
suspend
Causes csh to stop as if it had been sent a suspend signal. Since csh normally ignores
suspend signals, this is the only way to suspend the shell. This command gives an error
message if attempted from a login shell.
HP-UX 11i Version 2: August 2003 −6− Hewlett-Packard Company Section 1−−135
csh(1) csh(1)
switch (string )
case str1 :
...
breaksw
...
default:
...
breaksw
endsw Each case label (str1 ) is successively matched against the specified string which is first
command and file name expanded. The form of the case labels is the Pattern Matching
Notation with the exception that non-matching lists in bracket expressions are not sup-
ported (see regexp (5)). If none of the labels match before a default label is found, the
execution begins after the default label. Each case label and the default label
A cA must appear at the beginning of a line. The breaksw command causes execution to con-
tinue after the endsw. Otherwise, control may fall through case labels and default
labels as in C. If no label matches and there is no default, execution continues after the
endsw.
time [ command ]
When command is not specified, a summary of time used by this shell and its children is
printed. If specified, the simple command is timed and a time summary as described under
the time variable is printed. If necessary, an extra shell is created to print the time
statistic when the command completes.
umask [ value ]
The current file creation mask is displayed (value not specified) or set to the specified
value . The mask is given in octal. Common values for the mask are 002, which gives all
permissions to the owner and group and read and execute permissions to all others, or 022,
which gives all permissions to the owner, and only read and execute permission to the
group and all others. See also umask(1).
unalias pattern
All aliases whose names match the specified pattern are discarded. Thus, all aliases are
removed by unalias *. No error occurs if pattern does not match an existing alias.
unhash
Use of the internal hash table to speed location of executed programs is disabled.
unset pattern
All variables whose names match the specified pattern are removed. Thus, all variables are
removed by unset *; this has noticeably undesirable side-effects. No error occurs if pat-
tern matches nothing.
unsetenv pattern
Removes all variables whose names match the specified pattern from the environment. See
also the setenv command above and printenv (1).
wait Waits for all background jobs to terminate. If the shell is interactive, an interrupt can dis-
rupt the wait, at which time the shell prints names and job numbers of all jobs known to be
outstanding.
while (expression )
...
end While the specified expression evaluates non-zero, the commands between the while and
the matching end are evaluated. break and continue can be used to terminate or
continue the loop prematurely. (The while and end must appear alone on their input
lines.) If the input is a terminal (i.e., not a script), prompting occurs the first time through
the loop as for the foreach statement.
%job Brings the specified job into the foreground.
%job & Continues the specified job in the background.
@
@ name =expression
@ name [index ]=expression
The first form prints the values of all the shell variables. The second form sets the specified
name to the value of expression . If the expression contains <, >, &, or |, at least this part
Section 1−−136 Hewlett-Packard Company −7− HP-UX 11i Version 2: August 2003
csh(1) csh(1)
of the expression must be placed within parentheses. The third form assigns the value of
expression to the index th argument of name. Both name and its index th component must
already exist.
The operators *=, +=, etc., are available as in C. White space can optionally separate the
name from the assignment operator. However, spaces are mandatory in separating com-
ponents of expression which would otherwise be single words.
Special postfix ++ and - - operators increment and decrement name, respectively (e.g.,
@ i++).
History Substitutions
History substitutions enable you to repeat commands, use words from previous commands as portions of
new commands, repeat arguments of a previous command in the current command, and fix spelling or
typing mistakes in an earlier command.
History substitutions begin with an exclamation point (!). Substitutions can begin anywhere in the input
stream, but cannot be nested. The exclamation point can be preceded by a backslash to cancel its special
meaning. For convenience, an exclamation point is passed to the parser unchanged when it is followed by
a blank, tab, newline, equal sign, or left parenthesis. Any input line that contains history substitution is
echoed on the terminal before it is executed for verification.
Commands input from the terminal that consist of one or more words are saved on the history list. The
history substitutions reintroduce sequences of words from these saved commands into the input stream.
The number of previous commands saved is controlled by the history variable. The previous com-
mand is always saved, regardless of its value. Commands are numbered sequentially from 1.
HP-UX 11i Version 2: August 2003 −8− Hewlett-Packard Company Section 1−−137
csh(1) csh(1)
You can refer to previous events by event number (such as !10 for event 10), relative event location
(such as !-2 for the second previous event), full or partial command name (such as !d for the last event
using a command with initial character d), and string expression (such as !?mic? referring to an event
containing the characters mic).
These forms, without further modification, simply reintroduce the words of the specified events, each
separated by a single blank. As a special case, !! is a re-do; it refers to the previous command.
To select words from a command, use a colon (:) and a designator for the desired words after the event
specification. The words of an input line are numbered from zero. The basic word designators are:
0 First word (i.e., the command name itself).
n nth word.
Section 1−−138 Hewlett-Packard Company −9− HP-UX 11i Version 2: August 2003
csh(1) csh(1)
text of the previous line. Thus ˆlbˆlib fixes.
Alias Substitution
csh, significant to the
parser: -, &, |, <, >, (, and ).
Also available in expressions as primitive operands are command executions enclosed in curly braces
( { } ) and file enquiries of the form -l filename, where l is one of:
r read access
w write access
x execute access
e existence
o ownership
z zero size
f plain file
d directory
The specified filename is command- and file-name expanded then tested to see if it has the specified rela-
tionship to the real user. If the file does not exist or is inaccessible, all inquiries return false (0). Com-
A cA flow of control in command files
.
CSH VARIABLES
csh maintains a set of variables. Each variable has a value equal to zero or more strings (words). Vari-
ables have names consisting of up to 80 letters and digits starting with a letter. The underscore character
is considered a letter. The value of a variable may be displayed and changed by using the set and
unset commands. Some of the variables are Boolean, that is, the shell does not care what their value is,
only whether they are set or not.
Some operations treat variables numerically. The at sign (@) command permits numeric calculations to
be performed and the result assigned to a variable. The null string is considered to be zero, and any sub-
sequent words of multi-word values are ignored.
After the input line is aliased and parsed, and before each command is executed, variable expansion is
performed keyed by the dollar sign ($) character. Variable expansion can be prevented by preceding the
dollar sign with a backslash character (\) except within double quotes (") where substitution always
occurs. Variables are never expanded if enclosed in single quotes. Strings quoted by single quotes are A cA
interpreted later (see Command Substitution ) so variable substitution does not occur there until later, if
at all. A dollar sign is passed unchanged if followed by a blank, tab, or end-of-line.
Input/output redirections are recognized before variable expansion, and are variable expanded
separately. Otherwise, the command name and entire argument list are expanded together.
Unless enclosed in double quotes or given the :q modifier, the results of variable substitution may even-
tually be command and file name substituted. Within double quotes, a variable whose value consists of
multiple words expands to a portion of a single word, with the words of the variable’s shell input. Except as
noted, it is an error to reference a variable that is not set.
$variable_name
${variable_name }
When interpreted, this sequence is replaced by the words of the value of the variable
variable_name , each separated by a blank. Braces insulate variable_name from subse-
quent characters that would otherwise be interpreted to be part of the variable name itself.
If variable_name is not a csh variable, but is set in the environment, that value is used.
Non- csh variables cannot be modified as shown below.
$variable_name[selector]
${variable_name[selector] }
This modification selects only some of the words from the value of variable_name. The
selector is subjected to variable substitution, and can consist of a single number or two
numbers separated by a dash. The first word of a variable’s value is numbered 1. If the
first number of a range is omitted it defaults to 1. If the last member of a range is omitted
it defaults to the total number of words in the variable ($#variable_name). An asterisk
metacharacter used as a selector selects all words.
$#variable_name
${#variable_name }
This form gives the number of words in the variable, and is useful for forms using a [selec-
tor ] option.
$0 This form substitutes the name of the file from which command input is being read. An
error occurs if the file name is not known.
$number
${number }
This form is equivalent to an indexed selection from the variable argv ($argv[number]).
$* This is equivalent to selecting all of argv ($argv[*]).
The modifiers :h, :t, :r, :q, and :x can be applied to the substitutions above, as can :gh, :gt, and
:gr. If curly braces ({ }) appear in the command form, the modifiers must appear within the braces.
The current implementation allows only one : modifier on each $d expansion .
The following substitutions cannot be modified with : modifiers:
$?variable_name
${?variable_name }
Substitutes the string 1 if variable_name is set, 0 if it is not.
$?0 Substitutes 1 if the current input file name is known, 0 if it is not.
$$ Substitutes the (decimal) process number of the (parent) shell.
$< Substitutes a line from the standard input, with no further interpretation thereafter. It can
be used to read from the keyboard in a shell script.
noglob If set, file name expansion is inhibited. This is most useful in shell scripts that are
not dealing with file names, or after a list of file names has been obtained and
further expansions are not desirable.
nonomatch If set, it is no longer an error for a file name expansion to not match any existing
files. If there is no match, the primitive pattern is returned. It is still an error for
the primitive pattern to be malformed. For example, ’echo [’ still gives an
error.
notify If set, csh notifies you immediately (through your standard output device) of back-
ground job completions. The default is unset (indicate job completions just before
printing a prompt).
path Each word of the path variable specifies a directory in which commands are to be
sought for execution. A null word specifies your current working directory. If there
is no path variable, only full path names can be executed. When path is not set and
A cA
when users do not specify full path names, csh searches for the command through
the directories . (current directory) and /usr/bin. A csh which is given nei-
ther the -c nor the -t option normally hashes the contents of the directories in
the path variable after reading .cshrc, and each time the path variable is
reset. If new commands are added to these directories while the shell is active, it is
necessary to execute rehash for csh to access these new commands.
prompt This variable lets you select your own prompt character string. The prompt is
printed before each command is read from an interactive terminal input. If a !
appears in the string, it is replaced by the current command history buffer event
number unless a preceding \ is given. The default prompt is the percent sign (%)
for users and the # character for the super-user.
savehist The number of lines from the history list that are saved in ˜/.history when the
user logs out. Large values for savehist slow down the csh during startup.
shell This variable contains the name of the file in which the csh program resides. This
variable is used in forking shells to interpret files that have their execute bits set
but which are not executable by the system. (See the description of Non-Built-In
Command Execution ).
status This variable contains the status value returned by the last command. If the com-
mand terminated abnormally, 0200 is added to the status variable’s value. Built-in
commands which terminated abnormally return exit status 1, and all other built-in
commands set status to 0.
time This variable contains a numeric value that controls the automatic timing of com-
mands. If set, csh prints, for any command taking more than the specified
number of cpu seconds, a line of information to the standard output device giving
user, system, and real execution times plus a utilization percentage. The utilization
percentage is the ratio of user plus system times to real time. This message is
printed after the command finishes execution.
verbose This variable is set by the -v command line option. If set, the words of each com-
mand are printed on the standard output device after history substitutions have
been made.
Command Substitution
Command substitution is indicated by a command enclosed in grave accents (‘... ‘). The output from
such a command is normally broken into separate words at blanks, tabs and newlines, with null words
being discarded; this text then replacing the original string. Within double quotes, only newlines force
new words; blanks and tabs are preserved.
In any case, the single final newline does not force a new word. Note that it is thus possible for a com-
mand substitution to yield only part of a word, even if the command outputs a complete line.
Input/Output
The standard input and standard output of a command can be redirected with the following syntax:
< name Open file name (which is first variable, command and file name expanded) as the
standard input.
<< word Read the shell input up to a line which is identical to word. word is not subjected to
variable, file name or command substitution, and each input line is compared to
word before any substitutions are done on this input line. Unless a quoting \, ’, or
‘ appears in word, variable and command substitution is performed on the inter-
vening lines, allowing \ to quote $, \ and ‘., it is created; if
the file exists, it is truncated, and its previous contents are lost.
If the variable noclobber is set, the file must not exist or be a character special
file (e.g., a terminal or /dev/null) or an error results. This helps prevent
accidental destruction of files. In this case the exclamation point (!) forms can be
used to suppress this check.
The forms involving the ampersand character (&) route the standard error into the
specified file as well as the standard output. name is expanded in the same way as
< input file names are.
>> name
>>& name
>>! name
>>&! name Uses file name as standard output the same as >, but appends output to the end of
the file. If the variable noclobber is set, it is an error for the file not to exist
unless one of the ! forms is given. Otherwise, it is similar to >.
A command receives the environment in which the shell was invoked as modified by the input-output
parameters and the presence of the command in a pipeline. Thus, unlike some previous shells, com-
mands executed from a shell script have no access to the text of the commands by default; rather they
receive the original standard input of the shell. The << mechanism should be used to present inline
data. This permits shell scripts to function as components of pipelines and allows the shell to block-read
its input.
Diagnostic output can be directed through a pipe with the standard output. Simply use the form |&
rather than | by itself.
CSH UTILITIES
File Name Completion
In typing file names as arguments to commands, it is no longer necessary to type a complete name, only a
unique abbreviation is necessary. When you want the system to try to match your abbreviation, press the
ESC key. The system then completes the file name for you, echoing the full name on your terminal. If
A cA
the abbreviation does not match an available file name, the terminal’s bell is sounded. The file name may
be partially completed if the prefix matches several longer file names. In this case, the name is extended
up to the ambiguous deviation, and the bell is sounded.
File name completion works equally well when other directories are addressed. In addition, the tilde (˜)
convention for home directories is understood in this context.
Autologout
A new shell variable has been added called autologout. If the terminal remains idle (no character
input) at the shell’s top level for a number of minutes greater than the value assigned to autologout,
you are automatically logged off. The autologout feature is temporarily disabled while a command is
executing. The initial value of autologout is 600. If unset or set to 0, autologout is entirely dis-
abled.
Sanity
C shell restores your terminal to a sane mode if it appears to return from some command in raw, cbreak,
or noecho mode.
EXTERNAL INFLUENCES
Environment Variables
LC_COLLATE determines the collating sequence used in evaluating pattern matching notation for file
name substitution.
LC_CTYPE determines the interpretation of text as single and/or multi-byte characters, the classification
of characters as letters, and the characters matched by character class expressions in pattern matching
notation.
LANG determines the language in which messages are displayed. vari-
able contains an invalid setting, csh behaves as if all internationalization variables are set to "C". See
environ (5).
WARNINGS
The .cshrc file should be structured such that it cannot generate any output on standard output or
standard error, including occasions when it is invoked without an affiliated terminal. rcp (1) causes
.cshrc to be sourced, and any output generated by this file, even to standard error causes problems.
Commands such as stty (1) should be placed in .login, not in .cshrc, so that their output cannot affect
rcp (1).
csh has certain limitations. Words or environment variables can be no longer than 10240 characters.
The system limits argument lists to 10240 characters. The number of arguments to a command which.
When a command is restarted from a stop, csh prints the directory it started in if it is different from the
current directory; this can be misleading (i.e., wrong) because the job may have changed directories inter-
nally.
Shell built-in functions are not stoppable/restartable. Command sequences of the form a ; b ; c are
also not handled gracefully when stopping is attempted. If you interrupt b, the shell then immediately
executes c. This is especially noticeable if this expansion results from an alias. It suffices to place the
sequence of commands in parentheses to force it into a subshell; i.e., ( a ; b ; c ).
Because of the signal handling required by csh, interrupts are disabled just before a command is exe-
cuted, and restored as the command begins execution. There may be a few seconds delay between when a
command is given and when interrupts are recognized., prompted for by ?, are not placed in the history list. Control structure should be
parsed rather than being recognized as built-in commands. This would allow control commands to be
placed anywhere, to be combined with |, and to be used with & and ; metasyntax.
It should be possible to use the : modifiers on the output of command substitutions. All and more than
one : modifier should be allowed on $ substitutions.
Terminal type is examined only the first time you attempt recognition.
To list all commands on the system along PATH, enter [Space]-[Ctrl]-[D].
The csh metasequence !˜ does not work.
In an international environment, character ordering is determined by the setting of LC_COLLATE, rather A cA
than by the binary ordering of character values in the machine collating sequence. This brings with it
certain attendant dangers, particularly when using range expressions in file name generation patterns.
For example, the command,
rm [a-z]*
might be expected to match all file names beginning with a lowercase alphabetic character. However, if
dictionary ordering is specified by LC_COLLATE, it would also match file names beginning with an
uppercase character (as well as those beginning with accented letters). Conversely, it would fail to match
letters collated after z in languages such as Norwegian.
The correct (and safe) way to match specific character classes in an international environment is to use a
pattern of the form:
rm [[:lower:]]*
This uses LC_CTYPE to determine character classes and works predictably for all supported languages
and codesets. For shell scripts produced on non-internationalized systems (or without consideration for
the above dangers), it is recommended that they be executed in a non-NLS environment. This requires
that LANG, LC_COLLATE, etc., be set to "C" or not set at all.
csh implements command substitution by creating a pipe between itself and the command. If the root
file system is full, the substituted command cannot write to the pipe. As a result, the shell receives no
input from the command, and the result of the substitution is null. In particular, using command substi-
tution for variable assignment under such circumstances results in the variable being silently assigned a
NULL value.
Relative path changes (such as cd ..), when in a symbolically linked directory, cause csh’s knowledge
of the working directory to be along the symbolic path instead of the physical path.
Prior to HP-UX Release 9.0, csh, when getting its input from a file, would exit immediately if unable to
execute a command (such as if it was unable to find the command). Beginning at Release 9.0, csh con-
tinues on and attempts to execute the remaining commands in the file. However, if the old behavior is
desired for compatibility purposes, set the environment variable EXITONERR to 1.
AUTHOR
csh was developed by the University of California, Berkeley and HP.
FILES
˜/.cshrc A csh script sourced (executed) at the beginning of execution by each shell.
See WARNINGS
˜/.login A csh script sourced (executed) by login shell, after .cshrc at login.
˜/.logout A csh script sourced (executed) by login shell, at logout.
/etc/passwd Source of home directories for ˜name.
/usr/bin/sh Standard shell, for shell scripts not starting with a #.
/etc/csh.login A csh script sourced (executed) before ˜/.cshrc and ˜/.login when
starting a csh login (analogous to /etc/profile in the POSIX shell).
/tmp/sh* Temporary file for <<.
SEE ALSO
cd(1), echo(1), kill(1), nice(1), sh(1), umask(1), access(2), exec(2), fork(2), pipe(2), umask(2), wait(2),
tty(7), a.out(4), environ(5), lang(5), regexp(5).
C Shell tutorial in Shells Users Guide .
A cA
NAME
csplit - context split
SYNOPSIS
csplit [-s] [-k] [-f prefix ] [-n number] file arg1 [ ... argn ]
DESCRIPTION
csplit reads file , separates it into n+1 sections as defined by the arguments arg1 ... argn , and places
the results in separate files. The maximum number of arguments (arg1 through argn ) allowed is 99
unless the -n number option is used to allow for more output file names. If the -f prefix option is
specified, the resulting filenames are prefix 00 through prefix NN where NN is the two-digit value of n
using a leading zero if n is less than 10. If the -f prefix option is not specified, the default filenames
xx00 through xxNN are used. file is divided as follows:
A cA
Default Prefixed
Filename Filename Contents
xx00 prefix00 From start of file up to (but not including) the line
referenced by arg1.
xx01 prefix01 From the line referenced by arg1 up to the line
referenced by arg2.
.
.
.
xxNN prefixNN From the line referenced by argn to end of file.
If the file argument is -, standard input is used.
csplit supports the Basic Regular Expression syntax (see regexp (5)).
Options
csplit recognizes the following options:
-s Suppress printing of all character counts (csplit normally prints the character
counts for each file created).
-k Leave previously created files intact (csplit normally removes created files if an
error occurs).
-f prefix Name created files prefix 00 through prefixNN (default is xx00 through xxNN.
-n number The output file name suffix will use number digits instead of the default 2. This
allows creation of more than 100 output files.
Arguments (arg1 through argn ) to csplit can be any combination of the following:
/regexp / Create a file containing the section from the current line up to (but not including)
the line matching the regular expression regexp . The new current line becomes the
line matching regexp .
/regexp /+n
/regexp /-n Create a file containing the section from the current line up to (but not including)
the nth before (-n) or after (+n) the line matching the regular expression regexp .
(e.g., /Page/-5). The new current line becomes the line matching regexp ±n lines.
%regexp % equivalent to /regexp /, except that no file is created for the section.
line_number Create a file from the current line up to (but not including) line_number . The new
current line becomes line_number .
{num } Repeat argument. This argument can follow any of the above argument forms. If it
follows a regexp argument, that argument is applied num more times. If it follows
line_number , the file is split every line_number lines for num times from that point
until end-of-file is reached or num expires.
{* } Repeats previous operand as many times as necessary to finish input.
Enclose in appropriate quotes all regexp arguments containing blanks or other characters meaningful to
the shell. Regular expressions must not contain embedded new-lines. csplit does not alter or remove
HP-UX 11i Version 2: August 2003 −1− Hewlett-Packard Company Section 1−−149
csplit(1) csplit(1)
EXTERNAL INFLUENCES
Environment Variables
LC_COLLATE determines the collating sequence used in evaluating regular expressions.
LC_CTYPE determines the characters matched by character class expressions in regular expressions.
LC_MESSAGES determines the language in which messages are displayed.
If LC_COLLATE orsplit behaves as if all internationalization
A cA variables are set to "C". See environ (5).
DIAGNOSTICS
Messages are self explanatory except for:
arg - out of range
which means that the given argument did not reference a line between the current position and the end of
the file. This warning also occurs if the file is exhausted before the repeat count is.
EXAMPLES
Create four files, cobol00 through cobol03. After editing the ‘‘split’’ files, recombine them back into
the original file, destroying its previous contents.
csplit -f cobol file ’/procedure division/’ /par5./ /par16./
Perform editing operations
cat cobol0[0-3] > file
Split a file at every 100 lines, up to 10,000 lines (100 files). The -k option causes the created files to be
retained if there are fewer than 10,000 lines (an error message is still printed).
csplit -k file 100 ’{99}’
Assuming that prog.c follows the normal C coding convention of terminating routines with a } at the
beginning of the line, create a file containing each separate C routine (up to 21) in prog.c.
csplit -k prog.c ’%main(%’ ’/ˆ}/+1’ ’{20}’
SEE ALSO
sh(1), split(1), environ(5), lang(5), regexp(5).
STANDARDS CONFORMANCE
csplit: SVID2, SVID3, XPG2, XPG3, XPG4
Section 1−−150 Hewlett-Packard Company −2− HP-UX 11i Version 2: August 2003
ct(1) ct(1)
NAME
ct - spawn getty to a remote terminal (call terminal)
SYNOPSIS
ct [-w n] [-x n] [-h] [-v] [-s speed ] telno ...
DESCRIPTION con-
A cA
nected use-
ful value is -x9.
-h Prevent ct from disconnecting ("hanging up") the current tty line. This option is
necessary if the user is using a different tty line than the one used by ct to spawn
the getty.
/var/adm/ctlog
/etc/uucp/Devices
SEE ALSO
cu(1), login(1), uucp(1), getty(1M), uugetty(1M).
HP-UX 11i Version 2: August 2003 −1− Hewlett-Packard Company Section 1−−151
ctags(1) ctags(1)
NAME
ctags - create a tags file
SYNOPSIS
ctags [-xvFBatwu] files ...
DESCRIPTION
ctags makes a tags file for ex(1) (or vi (1)) from the specified C, Pascal and FORTRAN sources. A tags file
gives the locations of specified objects (for C, functions, macros with argments, and typedefs; Pascal, pro-
cedures,.
A cA Specifiers are given in separate fields on the line, separated by spaces or tabs. Using the tags file, ex
can quickly find these objects’ definitions.
-x Cause ctags to print a simple function index. This is done by assembling a list of func-
tion begin-
ning of name of the file, with any trailing .c removed, and leading pathname components also removed.
This makes use of ctags practical in directories with more than one program.
EXTERNAL INFLUENCES
Environment Variables
LC_COLLATE determines the order in which the output is sorted.
LC_CTYPE determines the interpretation of the single- and/or multi-byte characters within comments
and string literals. ‘‘C’’ (see lang (5)) is used instead of LANG. If any internationalization
variable contains an invalid setting, ctags behaves as if all internationalization variables are set to ‘‘C’’.
See environ (5).
Section 1−−152 Hewlett-Packard Company −1− HP-UX 11i Version 2: August 2003
ctags(1) ctags(1)
DIAGNOSTICS.
Duplicate entry in files file1 and file2 : name (Warning only).
The same name was detected in two different files. A tags entry was made only for the first name A cA
found.
EXAMPLES editing a
tags file.
If more than one (function) definition appears on a single line, only the first definition is indexed.
AUTHOR
ctags was developed by the University of California, Berkeley.
FILES
tags output tags file
OTAGS temporary file used by -u
SEE ALSO
ex(1), vi(1).
HP-UX 11i Version 2: August 2003 −2− Hewlett-Packard Company Section 1−−153
ctags(1) ctags(1)
STANDARDS CONFORMANCE
ctags: XPG4
A cA
Section 1−−154 Hewlett-Packard Company −3− HP-UX 11i Version 2: August 2003
cu(1) cu(1)
NAME
cu - call another (UNIX) system; terminal emulator
SYNOPSIS
cu [-s speed ] [-l line ] [-h] [-q] [-t] [-d level ] [-e-o] [-m] [-n] [ telno systemname dir ]
XPG4 Syntax:
cu [-s speed ] [-l line ] [-h] [-q] [-t] [-d] [-e-o] [-m] [-n] [ telno systemname dir ]
DESCRIPTION
cu calls up another system, which is usually a UNIX operating system, but can be a terminal or a non-
UNIX operating system. cu manages all interaction between systems, including possible transfers of
ASCII files.
Options
A cA over-
ride dev-
ice is usually a directly connected asynchronous line (such as /dev/ttyapb). In
this case, a telephone number is not required, but the string dir can be used to
specify that a dialer is not required. If the specified device is associated with an
auto-dialer, a telephone number must be provided.
-h Emulate local echo, supporting calls to other computer systems that expect termi-
nals to be set to half-duplex mode.
-q Use ENQ/ACK handshake (remote system sends ENQ, cu sends ACK.)
-t Used when dialing an ASCII terminal that has been set to auto-answer. Appropri-
ate mapping of carriage-return to carriage-return-line-feed pairs is set.
-dlevel Print diagnostic traces. level is a number from 0-9, where higher level s tele-
phone number or direct line for systemname in the Systems file until a connection
is made or all the entries are tried.
dir Using dir ensures that cu uses the line specified by the -l option.
After making the connection, cu runs as two processes:
• transmit process reads data from the standard input and, except for lines beginning with ˜,
passes it to the remote system;
• receive process accepts data from the remote system and, except for lines beginning with ˜,
passes it to the standard output.
HP-UX 11i Version 2: August 2003 −1− Hewlett-Packard Company Section 1−−155
cu(1) cu(1)
Normally, an automatic DC3/DC1 protocol is used to control input from the remote to ensure that the
buffer is not overrun. "Prompt handshaking" can be used to control transfer of ASCII files to systems that
have no type-ahead capability but require data to be sent only after a prompt is given. This is described
in detail below. Lines beginning with ˜ have special meanings.
Section 1−−156 Hewlett-Packard Company −2− HP-UX 11i Version 2: August 2003
cu(1) cu(1)
Ctrl-X form where a circumflex (ASCII 94) precedes the character, as in ˆX. A null
character can be specified with ˆ@. (A null first character in the prompt implies a
"null" prompt, which always appears to be satisfied.) A circumflex is specified by
^ ˆ.
~%>[>]file Divert output from the remote system to the specified file until another ˜%> com-
mand is given. When an output diversion is active, typing ˜%> terminates it,
whereas ˜%> anotherfile terminates it and begins a new one. The output diversion
remains active through a ˜& subshell, but unpredictable results can occur if
input/output diversions are intermixed with ˜%take or ˜%put. The ˜%>> com-
mand appends to the named file. Note that these commands, which are interpreted
by the transmit process, are unrelated to the ˜> commands described below, which
are interpreted by the receive process.
~susp Suspend the cu session. susp is the suspend character set in the terminal when A cA
cu was invoked (usually ˆZ — see stty (1)). As in all other lines starting with tilde,
a ˜susp line must be terminated by pressing Return.
Receive Process
The receive process normally copies data from the remote system to its standard output. A line from the
remote that begins with ˜> initiates an output diversion to a file. The complete sequence is:
~>[>]: file
zero or more lines to be written to file
~>
Data from the remote is diverted (or appended, if >> is used) to file . The trailing ˜> terminates the
diversion.
The use of ˜%put requires stty (1) and cat (1) on the remote side. It also requires that the current erase
and kill characters on the remote system be identical to the current ones on the local system.
Backslashes are inserted at appropriate places.
The use of ˜ ˜˜ is used. For example, using the keyboard on sys-
tem X, uname can be executed on Z, X, and Y as follows where lines 1, 3, and 5 are keyboard commands,
and lines 2, 4, and 6 are system responses:
uname
Z
~!uname
X
~˜!uname
Y
In general, ˜ causes the command to be executed on the original machine; ˜˜ causes the command to be
executed on the next machine in the chain.
EXTERNAL INFLUENCES-
ization variables are set to "C". See environ (5).
DIAGNOSTICS
Exit code is zero for normal exit; non-zero (various values) otherwise.
HP-UX 11i Version 2: August 2003 −3− Hewlett-Packard Company Section 1−−157
cu(1) cu(1)
EXAMPLES:
A cA cu -l/dev/culXpX 2015551212
To use a system name (yyyzzz):
cu yyyzzz
To connect directly to a modem:
cu -l/dev/culXX -m dir cu -l/dev/cu1XX -m dir
WARNINGS
cu buffers input internally.
AUTHOR
cu was developed by AT&T and HP.
FILES
/etc/uucp/Systems
/etc/uucp/Devices
/etc/uucp/Dialers
/var/spool/locks/LCK ..(tty-device)
/dev/null
SEE ALSO
cat(1), ct(1), echo(1), stty(1), uname(1), uucp(1), uuname(1).
STANDARDS CONFORMANCE
cu: SVID2, SVID3, XPG2, XPG3, XPG4
Section 1−−158 Hewlett-Packard Company −4− HP-UX 11i Version 2: August 2003
cut(1) cut(1)
NAME
cut - cut out (extract) selected fields of each line of a file
SYNOPSIS
cut -c list [ file ... ]
cut -b list [-n] [ file ... ]
cut -f list [-d char ] [-s] [ file ... ]
DESCRIPTION). A cA
Options are interpreted as follows:
list A comma-separated list of integer byte (-b option), character (-c option), or field (-f
option) numbers, in increasing order, with optional - to indicate ranges. For exam-
ple: char-
acter (see -d); for example, -f 1,7 copies the first and seventh field only. Lines
with no field delimiters will be passed through intact (useful for table subheadings),
unless -s is specified.
-d char The character following -d is the field delimiter (-f option only). Default is tab .
Space or other characters with special meaning to the shell must be quoted. Adja-
cent field delimiters delimit null fields. char may be an international code set char-
acter.
-n."
-s Suppresses lines with no delimiter characters when using -f option. Unless -s is
specified, lines with no delimiters appear in the output without alteration.
Hints
Environment Variables
LC_CTYPE determines the interpretation of text,
HP-UX 11i Version 2: August 2003 −1− Hewlett-Packard Company Section 1−−159
cut(1) cut(1)
a default of "C" (see lang (5)) is used instead of LANG. If any internationalization variable contains an
invalid setting, cut behaves as if all internationalization variables are set to "C". See environ (5).
EXAMPLES
Password file mapping of user ID to user names:
cut -d : -f 1,5 /etc/passwd
Set environment variable name to current login name:
A cA name=‘who am i | cut -f 1 -d " "‘
cut does not expand tabs. Pipe text through expand (1) if tab expansion is required.
Backspace characters are treated the same as any other character. To eliminate backspace characters
before processing by cut, use the fold or col command (see fold (1) and col (1)).
AUTHOR
cut was developed by OSF and HP.
SEE ALSO
grep(1), paste(1).
STANDARDS CONFORMANCE
cut: SVID2, SVID3, XPG2, XPG3, XPG4, POSIX.2
Section 1−−160 Hewlett-Packard Company −2− HP-UX 11i Version 2: August 2003
date. A dA adjust-
ment direc-
tive for all languages except the C default language. See Formatting Directives and EXAM-
PLES below.
date [-u] +format
Display the current date and time according to formatting directives specified in format ,
which is a string of zero or more formatting directives and ordinary characters. If it con-
tains blanks, enclose it in apostrophes or quotation marks.,
HP-UX 11i Version 2: August 2003 −1− Hewlett-Packard Company Section 1−−161.
A dA exam-
ple, 02.
Section 1−−162 Hewlett-Packard Company −2− HP-UX 11i Version 2: August 2003
date(1) date(1)
Obsolescent Directives
The following directives are provided for backward compatibility. It is recommended that the preceding
A dA.
HP-UX 11i Version 2: August 2003 −3− Hewlett-Packard Company Section 1−−163
date(1) date.
DIAGNOSTICS
The following messages may be displayed.
bad conversion
The date/time specification is syntactically incorrect. Check it against the usage and for the
correct range of each of the digit-pairs.
bad format character - c
The character c is not a valid format directive, field width specifier, or precision specifier.
do you really want to run time backwards?[yes/no]
The date/time you specified is earlier than the current clock value. Type yes (or the equivalent
Section 1−−164 Hewlett-Packard Company −4− HP-UX 11i Version 2: August 2003
date(1) date(1) → Fri Aug 20 15:03:37 PDT 1993 ← C (default)
date -u → Fri Aug 20 22:03:37 UTC 1993 ← C (default)
date → Fri, Aug 20, 1993 03:03:37 PM ← en_US.roman8 (U.S. English)
date → Fri. 20 Aug, 1993 03:03:37 PM ← en_GB.roman8 (U.K. English)
date → 20/08/1993 15.47.47 ← pt_PT.roman8 (Portuguese)
Set Date
A dA
Set the date to Oct 8, 12:45 a.m.
date 10080045
WARNINGS
The former HP-UX format directive A has been changed to W for ANSI compatibility.
Changing the date while the system is running in multiuser mode should be avoided to prevent disrupt-
ingtmps
SEE ALSO
locale(1), stime(2), ctime(3C), strftime(3C), tztab(4), environ(5), lang(5), langinfo(5).
STANDARDS CONFORMANCE
date: SVID2, SVID3, XPG2, XPG3, XPG4, POSIX.2
HP-UX 11i Version 2: August 2003 −5− Hewlett-Packard Company Section 1−−165
dc(1) dc(1)
NAME
dc - desk calculator
SYNOPSIS
dc [ file ]
DESCRIPTION stan-
dard input. An end of file on standard input or the q command stop dc. The following constructions are
recognized:
A dA number The value of the number is pushed on the stack. A number is an unbroken string of
the digits 0-9 or A-F. It can be preceded by an underscore (_) to input a negative
number. Numbers can contain decimal points.
+ - / * % ˆ
The top two values on the stack are added (+), subtracted (-), multiplied (*),
divided (/), remaindered (%), or exponentiated (ˆ). The two entries are popped off
the stack; the result is pushed on the stack in their place. Any fractional part of an
exponent is ignored and a warning generated. The remainder is calculated accord-
ing regis-
ters inter-
prets the top of the stack as an ASCII string, removes it, and prints it.
f All values on the stack are printed.
q exits the program. If executing a string, the recursion level is popped by two. If q
is capitalized,. Strings can be nested by
using nested pairs of brackets.
<x >x =x
! an HP-UX system command (unless the next charac-
ter is <, >, or =, in which case appropriate relational operator above is used).
c All values on the stack are popped.
i The top value on the stack is popped and used as the number radix for further
input.
Section 1−−166 Hewlett-Packard Company −1− HP-UX 11i Version 2: August 2003
dc(1) dc(1)
DIAGNOSTICS.
Nesting Depth There are too many levels of nested execution.
EXAMPLES
This example prints the first ten values of n! (n factorial):
[la1+dsa*pla10>y]sy
0sa1
lyx
SEE ALSO
bc(1).
DC: An Interactive Desk Calculator tutorial in Number Processing Users Guide .
HP-UX 11i Version 2: August 2003 −2− Hewlett-Packard Company Section 1−−167
dd(1) dd(1)
NAME
dd - convert, reblock, translate, and copy a (tape) file
SYNOPSIS
dd [option =value ] ...
DESCRIPTION
dd copies the specified input file to the specified output file with possible conversions. The standard
input and output are used by default. Input and output block size can be specified to take advantage of
raw physical I/O. Upon completion, dd reports the number of whole and partial input and output
records.
Options
dd recognizes the following option =value pairs:
A dA if=file Input file name; default is standard input.
of=file Output file name; default is standard output. The output file is created using the
same owner and group used by creat().
ibs=n Input block size is n bytes; default is 512.
obs=n Output block size is n bytes; default is 512.
bs=n Set both input and output block size to the same size, superseding ibs and obs.
This option is particularly efficient if no conversion (conv option) is specified,
because no in-core copy is necessary.
cbs=n Conversion buffer size is n bytes.
skip=n Skip n input blocks before starting copy.
iseek=n Skip n input blocks before starting copy. (This is an alias for the skip option.)
seek=n Skip n blocks from beginning of output file before copying.
oseek=n Skip n blocks from beginning of output file before copying. (This is an alias for the
seek option.)
count=n Copy only n input blocks.
files=n Copy and concatenate n input files. This option should be used only when the input
file is a magnetic tape device.
conv=value [,value ...]
Where value s are comma-separated symbols from the following list.
ascii Convert EBCDIC to ASCII.
ebcdic Convert ASCII to EBCDIC.
ibm Convert ASCII to EBCDIC using an alternate conversion table.
The ascii, ebcdic, and ibm values are mutually exclusive.
block Convert each newline-terminated or end-of-file-terminated input
record to a record with a fixed length specified by cbs. Any
newline character is removed, and space characters are used to
fill the block to size cbs. Lines that are longer than cbs are
truncated; the number of truncated lines (records) is reported
(see DIAGNOSTICS below).
The block and unblock values are mutually exclusive.
unblock Convert fixed-length input records to variable-length records.
For each input record, cbs bytes are read, trailing space char-
acters are deleted, and a newline character is appended.
lcase Map upper-case input characters to the corresponding lower-
case characters.
The lcase and ucase values are mutually exclusive.
Section 1−−168 Hewlett-Packard Company −1− HP-UX 11i Version 2: August 2003
dd(1) dd(1)
EXTERNAL INFLUENCES
International Code Set Support
Single- and multi-byte character code sets are supported.
Environment Variables
The following environment variables affect execution of dd:
LANG determines the locale when LC_ALL and a corresponding variable (beginning with LC_) do not
specify a locale.
LC_ALL determines the locale used to override any values set by LANG or any environment variables
beginning with LC_.
The LC_CTYPE variable determines the locale for the interpretation of sequences of bytes of text data as
characters (single-byte/multi-byte characters, upper-case/lower-case characters).
The LC_MESSAGES variable determines the language in which messages are written.
RETURN VALUE
Exit values are:
0 Successful completion.
>0 Error condition occurred.
DIAGNOSTICS
Upon completion, dd reports the number of input and output records:
f +p records in Number of full and partial blocks read.
f +p records out Number of full and partial blocks written.
When conv=block is specified and there is at least one truncated block, the number of truncated
records is also reported:
n truncated records
EXAMPLES
Read an EBCDIC tape blocked ten 80-byte EBCDIC card images per block into an ASCII file named x:
HP-UX 11i Version 2: August 2003 −2− Hewlett-Packard Company Section 1−−169
dd(1) dd(1)
WARNINGS
Some devices, such as 1/2-inch magnetic tapes, are incapable of seeking. Such devices may be positioned
prior to running dd by using mt(1) or some other appropriate command. The skip, seek, iseek and
oseek
ibm conversion, while less widely accepted as a standard, corresponds better to certain IBM print train
conventions. There is no universal solution.
Newline characters are inserted only on conversion to ASCII; padding is done only on conversion to
A dA EBCDIC. These should be separate options.
If if or of refers to a raw disk, bs should always be a multiple of the sector size of the disk. By default,
bs is 512 bytes. If the sector size of the disk is different from 512 bytes, bs should be specified using a
multiple of sector size. The character special (raw) device file should always be used for devices.
It is entirely up to the user to insure there is enough room in the destination file, file system and/or device
to contain the output since dd cannot pre-determine the required space after conversion.
SEE ALSO
cp(1), mt(1), tr(1), disk(7), mt(7).
STANDARDS CONFORMANCE
dd: SVID2, SVID3, XPG2, XPG3, XPG4, POSIX.2
Section 1−−170 Hewlett-Packard Company −3− HP-UX 11i Version 2: August 2003
delta(1) delta(1)
NAME
delta - make a delta (change) to an SCCS file
SYNOPSIS
delta [-r SID ] [-s] [-n] [-g list ] [-m mrlist ] [-y comment ] [-p] files
DESCRIPTION
The delta command is used to permanently introduce into the named SCCS file changes that were made
to the file retrieved by get (called the g-file , or generated file). See get (1).
delta makes a delta to each named SCCS file. If a directory is named, delta behaves as though each
file in the directory was specified as a named file, except that non- SCCS files (last component of the path
name does not begin with .s) and unreadable files are silently ignored. If a name of - is given, the stan-
dard input is read (see WARNINGS). Each line of the standard input is taken to be the name of an SCCS
file to be processed.
delta may issue prompts on the standard output, depending upon certain options specified and flags A dA
(see admin (1)) that may be present in the SCCS file (see the -m and -y options below).
Options
Option arguments apply independently to each named file.
-rSID Uniquely identifies which delta is to be made to the SCCS file. Use of this option is
necessary only if two or more outstanding gets for editing (get -e) on the same
SCCS file were done by the same person (login name). The SID value specified with
the -r option can be either the SID specified on the get command line or the SID to
be made as reported by the get command (see get (1)). A diagnostic results if the
specified SID is ambiguous, or, if necessary and omitted on the command line.
-s Suppresses issuing, on the standard output, of the created delta’s SID as well as the
number of lines inserted, deleted and unchanged in the SCCS file.
-n Specifies retention of the edited g-file (normally removed at completion of delta pro-
cessing).
-glist Specifies a list (see get (1) for the definition of list ) of deltas which are to be ignored
when the file is accessed at the change level (SID ) created by this delta.
-m[mrlist] If the SCCS file has the v flag set (see admin (1)), a Modification Request (MR)
number must be supplied as the reason for creating the new delta. the -y option).
MRs in a list are separated by blanks and/or tab characters. An unescaped new-line
character terminates the MR list.
Note that if the v flag has a value (see admin (1)), it is assumed to be the name of a
program (or shell procedure) that is to validate the correctness of the MR numbers.
If a non-zero exit status is returned from the MR number-validation program,
delta assumes that the MR numbers were not all valid and terminates.
-y[comment] Arbitrary text used to describe the reason for making the delta. A null string is
considered a valid comment .
If -y is not specified and the standard input is a terminal, the prompt comments?
is issued on the standard output before the standard input is read. If the standard
input is not a terminal, no prompt is issued. An unescaped new-line character ter-
minates the comment text.
-p Causes delta to print (on the standard output in a diff(1) format) the SCCS file
differences before and after the delta is applied.
EXTERNAL INFLUENCES
Environment Variables
LC_CTYPE determines the interpretation of text as single- and/or multi-byte characters.
HP-UX 11i Version 2: August 2003 −1− Hewlett-Packard Company Section 1−−171
delta(1) delta(1)
DIAGNOSTICS
Use sccshelp (1) for explanations.
WARNINGS
A dA SCCS files can be any length, but the number of lines in the text file itself cannot exceed 99 999 lines.
Lines beginning with an ASCII SOH character (octal 001) cannot be placed in the SCCS file unless the SOH
is escaped. This character has special meaning to SCCS (see sccsfile (4)) and will cause an error.
A get of many SCCS files, followed by a delta of those files, should be avoided when the get generates
a large amount of data. Instead, multiple get/delta sequences should be used.
If the standard input (-) is specified on the delta command line, the -m (if necessary) and -y options
must also be present. Omission of these options causes an error.
Comments can be of multiple lines. The maximum length of the comment (total length of all comment
lines) cannot exceed 1024 bytes. No line in a comment should have a length of more than 1000 bytes.
FILES
All of the auxiliary files listed below, except for the g-file , are created in the same directory as the s-file
(see get (1)). The g-file is created in the user’s working directory.
g-file Existed before the execution of delta; removed after completion of delta
(unless -n was specified).
p-file Existed before the execution of delta; may exist after completion of delta.
q-file Created during the execution of delta; removed after completion of delta.
x-file Created during the execution of delta; renamed to SCCS file after completion of
delta.
z-file Created during the execution of delta; removed during the execution of
delta.
d-file Created during the execution of delta; removed after completion of delta.
/usr/bin/bdiff Program to compute differences between the file retrieved by get and the g-file .
SEE ALSO
admin(1), bdiff(1), cdc(1), get(1), sccshelp(1), prs(1), rmdel(1), sccsfile(4).
STANDARDS CONFORMANCE
delta: SVID2, SVID3, XPG2, XPG3, XPG4
Section 1−−172 Hewlett-Packard Company −2− HP-UX 11i Version 2: August 2003
deroff(1) deroff(1)
NAME
deroff - remove nroff, tbl, and neqn constructs
SYNOPSIS
deroff [-mx ] [-w] [-i] [ file ... ]
DESCRIPTION
deroff reads each file in sequence and removes all nroff requests, macro calls, backslash constructs,
neqn constructs (between .EQ and .EN lines, and between delimiters — see neqn(1)), and tbl descrip-
tions (see tbl (1)), replacing them with white space (blanks and blank lines), and writes the remainder of
the file on the standard output. deroff follows chains of included files (.so and .nx nroff/troff
formatter commands); if a file has already been included, a .so naming that file is ignored and a .nx
naming that file terminates execution. If no input file is given, deroff reads the standard input.
The -m option can be followed by an m, s, or l. The -mm option causes the macros be interpreted such
that only running text is output (that is, no text from macro lines). The -ml option forces the -mm
option and also causes deletion of lists associated with the mm macros.
A dA
If the -w option is given, the output is a word list, one ‘‘word’’ per line, with all other characters deleted.
Otherwise, the output follows the original, with the deletions mentioned above. In text, a ‘‘word’’ is any
multi-byte character string or any string that contains at least two letters and is composed of letters,
digits, ampersands (&), and apostrophes (’); In a macro call, however, a ‘‘word’’ is a multi-byte character
string or a string that begins with at least two letters and contains a total of at least three letters. Delim-
iters are any characters other than letters, digits, apostrophes, and ampersands. Trailing apostrophes
and ampersands are removed from ‘‘words.’’
If the -i option is specified, deroff ignores the .so and .nx nroff/troff commands.
EXTERNAL INFLUENCES
Environment Variables
LC_CTYPE determines the interpretation of text and filenames as single and/or multi-byte characters.
Note that multi-byte punctuation characters are not recognized when using the -w option., deroff behaves as if all internationali-
zation variables are set to "C". See environ (5).
WARNINGS
deroff is not a complete nroff interpreter; thus it can be confused by subtle constructs. Most such
errors result in too much rather than too little output.
The -ml option does not handle nested lists correctly.
AUTHOR
deroff was developed by the University of California, Berkeley.
SEE ALSO
neqn(1), nroff(1), tbl(1).
HP-UX 11i Version 2: August 2003 −1− Hewlett-Packard Company Section 1−−173
dhcpv6client_ui(1) dhcpv6client_ui(1)
NAME
dhcpv6client_ui - DHCPv6 client interface for requesting configuration parameters from the DHCPv6
server.
SYNOPSIS
dhcpv6client_ui is the interface through which a user contacts the client daemon to obtain IP
A dA addresses and other configuration parameters from the server. The default configuration parameters are
specified as command line options when the DHCPv6 client daemon is invoked.
When dhcpv6client_ui requests for IP addresses or other configuration parameters, the client dae-
mon inter-
face instead of requesting new IP addresses from the server.
This option must be used in conjunction with the -m option.
Section 1−−174 Hewlett-Packard Company −1− HP-UX 11i Version 2: August 2003
dhcpv6client_ui(1) dhcpv6client_ui(1)
RETURN VALUES
dhcpv6client_ui returns 0 on success and 1 on failure.
EXAMPLES
dhcpv6client_ui obtains two IP addresses for the lan0 interface:
dhcpv6client_ui -m lan0 -n 2
dhcpv6client_ui obtains two IP addresses for the lan0 interface and additional configuration
parameters:
dhcpv6client_ui -m lan0 -n 2 -o dns_sa dns_sx
FILES
/etc/dhcpv6client.data All the data obtained from the server daemon is saved to this file. A dA
AUTHOR
dhcpv6client_ui was developed by Hewlett-Packard.
SEE ALSO
dhcpv6clientd(1M), dhcpv6d(1M).
HP-UX 11i Version 2: August 2003 −2− Hewlett-Packard Company Section 1−−175
diff reg-
ular file diff algorithm (described below) on text files that have the same name in each directory but
are different. Binary files that differ, common subdirectories, and files that appear in only one directory
are listed. When comparing directories, the following options are recognized:
A dA
When run on regular files, and when comparing text files that differ during directory comparison, diff
tells what lines must be changed in the files to bring them into agreement. diff usually finds a smal-
lest fol-
lowing: mutu-
ally
Section 1−−176 Hewlett-Packard Company −1− HP-UX 11i Version 2: August 2003
diff(1) diff(1) con-
trols included so that a compilation of the result without defining string is equivalent to
compiling file1 , while compiling. A dA
inter-
nationalization variables are set to "C". See environ (5).
RETURN VALUE
Upon completion, diff returns with one of the following exit values:
0 No differences were found.
1 Differences were found.
>1 An error occurred.
EXAMPLES.
HP-UX 11i Version 2: August 2003 −2− Hewlett-Packard Company Section 1−−177
diff(1) diff(1)
diff -wi x1 x2
WARNINGS.
A dA
diff was developed by AT&T, the University of California, Berkeley, and HP.
Section 1−−178 Hewlett-Packard Company −3− HP-UX 11i Version 2: August 2003
diff3(1) diff3(1)
NAME
diff3 - 3-way differential file comparison
SYNOPSIS
diff3 [-exEX3] file1 file2 file3
DESCRIPTION
diff3 compares three versions of a file, and prints disagreeing ranges of text flagged with these codes:
==== all three files differ
====1 file1 is different
====2 file2 is different
====3 file3 is different
The type of change required to convert a given range of a given file to some other is indicated in one of
these ways:
f :n1 a Text is to be appended after line number n1 in file f, where f = 1, 2, or 3.
A dA
f :n1 ,n2 c Text is to be changed in the range line n1 through line n2. If n1 = n2, the range
can be abbreviated to n1.
The original contents of the range follows immediately after a c indication. When the contents of two
files are identical, the contents of the lower-numbered file is suppressed.
-e diff3 Produces a script for the ed editor that can be used to incorporate into file1 all
changes between file2 and file3 (see ed(1)); i.e., the changes that normally would be
flagged ==== and ====3.
-x Produces a script to incorporate only changes flagged ====
-3 Produces a script to incorporate only changes flagged ====3
-E Produces a script that will incorporate all changes between file2 and file3 , but treat over-
lapping changes (that is, changes that would be flagged with ==== in normal listing)
differently. The overlapping lines in both files will be inserted by the edit script bracketed
by <<<<<< and >>>>>> lines.
-X Produces a script that will incorporate only changes flagged ==== , but treat these
changes in the manner of -E option.
The following command applies the resulting script to file1 .
(cat script; echo ’1,$p’) | ed - file1
EXTERNAL INFLUENCES
International Code Set Support
Single- and multi-byte character code sets are supported.
WARNINGS
Text lines that consist of a single period (.) defeat -e.
Files longer than 64K bytes do not work.
FILES
/var/tmp/d3*
/usr/lbin/diff3prog
SEE ALSO
diff(1).
HP-UX 11i Version 2: August 2003 −1− Hewlett-Packard Company Section 1−−179
diffmk(1) diffmk(1)
NAME
diffmk - mark changes between two different versions of a file
SYNOPSIS
diffmk prevfile currfile markfile
DESCRIPTION
diffmk compares the previous version of a file with the current version and creates a file that includes
nroff/troff ‘‘change mark’’ commands. prevfile is the name of the previous version of the file and
currfile is the name of the current version of the file. diffmk generates markfile which contains all the
lines of the currfile plus inserted formatter ‘‘change mark’’ (.mc) requests. When markfile is formatted,
changed or inserted text is shown by a | character at the right margin of each line. The position of
deleted text is shown by a single *.
If the characters | and * are inappropriate, a copy of diffmk can be edited to change them because
A dA diffmk is a shell script.
EXTERNAL INFLUENCES
International Code Set Support
Single- and multi-byte character code sets are supported.
EXAMPLES
A typical command line for comparing two versions of an nroff/troff file and generating a file with
the changes marked is:
diffmk prevfile currfile markfile; nroff markfile | pr
diffmk can also be used to produce listings of C (or other) programs with changes marked. A typical
command line for such use is:
diffmk prevfile.c currfile.c markfile.c; nroff macs markfile.c | pr
where the file macs contains:
.pl 1
.ll 77
.nf
.eo
The .ll request can specify a different line length, depending on the nature of the program being
printed. The .eo request is probably needed only for C programs.
WARNINGS
Aesthetic considerations may dictate manual adjustment of some output.
diffmk does not differentiate between changes in text and changes in formatter request coding. Thus,
file differences involving only formatting changes (such as replacing .sp with .sp 2 in a text source file)
with no change in actual text can produce change marks.
Although unlikely, certain combinations of formatting requests can cause change marks to either disap-
pear or to mark too much. Manual intervention may be required because the subtleties of various format-
ting macro packages and preprocessors is beyond the scope of diffmk. tbl cannot tolerate .mc com-
mands in its input (see tbl (1)), so any .mc request that would appear inside a .TS range is silently
deleted. The script can be changed if this action is inappropriate, or diffmk can be run on two files that
have both been run through the tbl preprocessor before any comparisons are made.
diffmk uses diff, and thus has the same limitations on file size and performance that diff may
impose (see diff(1)). In particular the performance is nonlinear with the size of the file, and very large
files (well over 1000 lines) may take extremely long to process. Breaking the file into smaller pieces may
be advisable.
diffmk also uses the ed(1) editor. If the file is too large for ed, ed error messages may be embedded in
the file. Again, breaking the file into smaller pieces may be advisable.
SEE ALSO
diff(1), nroff(1).
Section 1−−180 Hewlett-Packard Company −1− HP-UX 11i Version 2: August 2003
dircmp(1) dircmp(1)
NAME
dircmp - directory comparison
SYNOPSIS
dircmp [-d] [-s] [-wn ] dir1 dir2
DESCRIPTION
dircmp examines dir1 and dir2 and generates various tabulated information about the contents of the
directories. Sorted listings of files that are unique to each directory are generated for all the options. If
no option is entered, a sorted list is output indicating whether the filenames common to both directories
have the same contents.
-d Compare the contents of files with the same name in both directories and output a list tel-
ling what must be changed in the two files to bring them into agreement. The list format is
described in diff(1).
-s Suppress messages about identical files. A dA
-wn Change the width of the output line to n characters. The default width is 72.
EXTERNAL INFLUENCES
Environment Variables
LC_COLLATE determines the order in which the output is sorted. internationalization variable contains an invalid setting, dircmp behaves
as if all internationalization variables are set to ‘‘C’’ (see environ (5)).
EXAMPLES
Compare the two directories slate and sleet and produce a list of changes that would make the
directories identical:
dircmp -d slate sleet
WARNINGS
This command is likely to be withdrawn from X/Open standards. Applications using this command might
not be portable to other vendors’ systems. As an alternative diff -R is recommended.
SEE ALSO
cmp(1), diff(1).
STANDARDS CONFORMANCE
dircmp: SVID2, SVID3, XPG2, XPG3
HP-UX 11i Version 2: August 2003 −1− Hewlett-Packard Company Section 1−−181
dmpxlt(1) dmpxlt(1)
NAME
dmpxlt - dump iconv translation tables to a readable format
SYNOPSIS
/usr/bin/dmpxlt [-f output_filename ] [input_filename ]
DESCRIPTION
dmpxlt dumps the compiled version of the iconv codeset conversion tables into an ASCII-readable for-
mat that can be modified and used as input to genxlt (1) to regenerate the table for iconv (1).
Options
dmpxlt recognizes the following options:
-f output_filename If this option is not selected, the data will be sent to standard output.
dmpxlt will create an output file in the prescribed format, giving the filecode mapping between the two
A dA code sets, which can be edited and reused by genxlt (1) to create new tables for iconv (1). The entries are
in hexadecimal.
EXTERNAL INFLUENCES
Environment Variables
LANG provides a default value for the internationalization variables that are unset or null. If LANG is
unset or null, the default value of "C" (see lang (5)) is used. If any of the internationalization variables
contains an invalid setting, dmpxlt will behave as if all internationalization variables are set to "C".
See environ (5).
LC_ALL If set to a non-empty string value, overrides the values of all the other internationalization vari-
ables.
LC_MESSAGES determines the locale that should be used to affect the format and contents of diagnostic
messages written to standard error and informative messages written to standard output.
NLSPATH determines the location of message catalogues for the processing of LC_MESSAGES.
RETURN VALUE
The following are exit values:
0 Successful completion.
>0 Error condition occurred.
EXAMPLES
This example creates the source file genxlt_input from the table roma8=iso81:
dmpxlt -f genxlt_input /usr/lib/nls/iconv/tables/roma8=iso81
FILES
/usr/lib/nls/iconv/tables All tables must be installed in this directory.
SEE ALSO
iconv(1), genxlt(1), iconv(3C), environ(5) lang(5).
Section 1−−182 Hewlett-Packard Company −1− HP-UX 11i Version 2: August 2003
dnssec-keygen(1) dnssec-keygen(1)
NAME
dnssec-keygen - key generation tool for DNSSEC
SYNOPSIS
dnssec-keygen [-a algorithm ] [-b keysize ] [-e] [-g generator ] [-h] [-n nametype ]
[-p protocol-value] [-r randomdev ] [-s strength-value] [-t type ] [-v level ] name
DESCRIPTION
dnssec-keygen generates keys for Secure DNS (DNSSEC) as defined in RFC2535. It also generates
keys for use in Transaction Signatures (TSIG) which is defined in RFC2845.
Argument
name Specifies the domain name for which the key is to be generated.
Options
-a algorithm This option is used to specify the encryption algorithm. The algorithm can be
RSAMD5, DH, DSA or HMAC-MD5. RSA can also be used, which is equivalent to
A dA
RSAMD5.
The algorithm argument identifying the encryption algorithm is case-insensitive.
DNSSEC specifies DSA as a mandatory algorithm and RSA as a recommended one.
Implementations of TSIG must support HMAC-MD5.
-b keysize This option is used to determine the number of bits in the key. The choice of key
size depends on the algorithm that is used.
If RSA algorithm is used, keysize must be between 512 and 2048 bits.
If the DH (Diffie-Hellman) algorithm is used, keysize must be between 128 and 4096
bits.
If the DSA (Digital Signature Algorithm) is used, keysize must be between 512 and
1024 bits and a multiple of 64.
If the HMAC-MD5 algorithm is used, keysize should be between 1 and 512 bits.
-e This option is used for generating RSA keys with a large exponent value.
-g generator This option is used when creating Diffie-Hellman keys. The -g option selects the
Diffie-Hellman generator that is to be used. The only supported values for genera-
tor are 2 and 5. If no Diffie-Hellman generator is supplied, a known prime from
RFC2539 will be used if possible; otherwise, 2 will be used as the generator.
-h A summary of the options and arguments to dnssec-keygen is printed by this
option.
-n nametype This option specifies how the generated key will be used.
nametype can be either ZONE, HOST, ENTITY, or USER to indicate that the key will
be used for signing a zone, host, entity or user; respectively. In this context HOST
and ENTITY are identical. nametype is case-insensitive.
-p protocol-value
This option sets the protocol value for the generated key to protocol-value. The
default is 2 (email) for keys of the type USER and 3 (DNSSEC) for all other key
types. Other possible values for this argument are listed in RFC2535 and its suc-
cessors.
-r randomdev This option overrides the behaviour of dnssec-keygen to use random numbers to
seed the process of generating keys when the system does not have a
/dev/random device to generate random numbers. The dnssec-keygen pro-
gram will prompt for keyboard input and use the time intervals between keystrokes
to provide randomness. With this option it will use randomdev as a source of ran-
dom data.
-s strength-value
This option is used to set the key’s strength value. The generated key will sign DNS
resource records with a strength value of strength-value. It should be a number in
the range 0-15. The default strength is zero. The key strength field currently has
no defined purpose in DNSSEC.
HP-UX 11i Version 2: August 2003 −1− Hewlett-Packard Company Section 1−−183
dnssec-keygen(1) dnssec-keygen(1)
-t type This option indicates if the key is used for authentication or confidentiality. type
can be either AUTHCONF, NOAUTHCONF, NOAUTH or NOCONF. The default is
AUTHCONF. If type is AUTHCONF, the key can be used for authentication and
confidentiality. Setting type to NOAUTHCONF indicates that the key cannot be used
for authentication or confidentiality. A value of NOAUTH means the key can be used
for confidentiality but not for authentication. Similarly, NOCONF defines that the
key cannot be used for confidentiality though it can be used for authentication.
-v level This option can be used to make dnssec-keygen more verbose. As the
debugging/tracing level increases, dnssec-keygen generates increasingly
detailed reports about what it is doing. The default level is zero.
Generated Keys
When dnssec-keygen completes, it prints a string in the form Knnnn.+aaa+iiiii on the standard out-
put. This is an identification string for the key it has generated. These strings can be supplied as argu-
A dA ments to the dnssec-makekeyset utility.
The nnnn part is the dot-terminated domain name given by name. The DNSSEC algorithm identifier is
indicated by aaa : 001 for RSA, 002 for Diffie-Hellman, 003 for DSA, or 157 for HMAC-MD5. iiiii is a
five-digit number identifying the key.
dnssec-keygen creates two files. The file names are adapted from the key identification string above.
They have names in the form:
Knnnn.+aaa+iiiii. key and
Knnnn.+aaa+iiiii. private.
These contain the public and private parts of the key respectively. The files generated by dnssec-
keygen follow this naming convention to make it easy for the signing tool dnssec-signzone to iden-
tify which file(s) have to be read to find the necessary key(s) for generating or validating signatures.
The .key file contains a KEY resource record that can be inserted into a zone file with a $INCLUDE
statement. The private part of the key is in the .private file. It contains details of the encryption
algorithm that was used and any relevant parameters: prime number, exponent, modulus, subprime, etc.
For obvious security reasons, this file does not have general read permission. The private part of the key
is used by dnssec-signzone to generate signatures and the public part is used to verify the signa-
tures. Both .key and .private key files are generated by symmetric encryption algorithm such as
HMAC-MD5, even though the public and private key are equivalent.
EXAMPLE
To generate a 768-bit DSA key for the domain example.com, the following command would be issued:
dnssec-keygen -a DSA -b 768 -n ZONE example.com
dnssec-keygen has printed the key identification string Kexample.com.+003+26160, indicating a
DSA key with identifier 26160. It would have created the files
Kexample.com.+003+26160.key and
Kexample.com.+003+26160.private
containing the public and private keys respectively for the generated DSA key.
FILES
/dev/random
SEE ALSO
dnssec-makekeyset(1), dnssec-signkey(1), dnssec-signzone(1), RFC2535, RFC2845, RFC2539.
BUGS
The naming convention for the public and private key files is a little clumsy. It won’t work for domain
names that are longer than 236 characters because the .+aaa+iiiii .private suffix results in filenames
that are too long for most UNIX systems.
Section 1−−184 Hewlett-Packard Company −2− HP-UX 11i Version 2: August 2003
dnssec-makekeyset(1) dnssec-makekeyset(1)
NAME
dnssec-makekeyset - used to produce a set of DNSSEC keys
SYNOPSIS
dnssec-makekeyset [-a] [-h help ] [-s start-time ] [-e end-time ] [-t TTL] [-r randomdev ]
[-p] [-v level ] keyfile ...
DESCRIPTION
dnssec-makekeyset generates a key set from one or more keys created by dnssec-keygen. It
creates a file containing KEY and SIG records for some zone which can then be signed by the zone’s
parent if the parent zone is DNSSEC-aware.
keyfile should be a key identification string as reported by dnssec-keygen; such as, Knnnn.+aaa+iiiii,
where nnnn is the name of the key, aaa is the encryption algorithm and iiiii is the key identifier. Multi-
ple keyfile arguments can be supplied when there are several keys to be combined by dnssec-
makekeyset into a key set. A dA
Options
-a This option is used to verify all generated signatures.
-e end-time The expiration date for the SIG records can be set by the -e option. Note that in
this context, the expiration date specifies when the SIG records are no longer valid,
not when they are deleted from caches on name servers..
When no expiration date is set for the SIG records, dnssec-makekeyset
defaults to an expire time of 30 days from the start time of the SIG records.
-h help This option is used to display a short summary of the options provided with
dnssec-makekeyset.
-p This option is used to instruct dnssec-makekeyset to use pseudo-random data
when self-signing the keyset. This is faster, but less secure, than using genuinely
random data for signing. This option may be useful when the entropy source is lim-
ited.
-r randomdev An alternate source of random data can be specified with the -r option. randomdev
is the name of the file to use to obtain random data. By default, /dev/random is
used if this device is available. If this file is not provided by the operating system
and no -r option is used, dnssec-makekeyset will prompt the user for input
from the keyboard and use the time between keystrokes to derive some random
data.
-s start-time For any SIG records that are in the key set, the start time when the SIG records
become valid is specified with the -s option. -s option is supplied, the current date and time is used for the start time of
the SIG records.
-t TTL The -t option is followed by a time-to-live argument TTL which indicates the TTL
value that will be assigned to the assembled KEY and SIG records in the output file.
TTL is expressed in seconds. If no -t option is provided, dnssec-makekeyset
prints a warning and uses a default TTL of 3600 seconds.
-v level This option can be used to make dnssec-makekeyset more verbose. As the
debugging/tracing level level increases, dnssec-makekeyset generates increas-
ingly detailed reports about what it is doing. The default level is zero.
HP-UX 11i Version 2: August 2003 −1− Hewlett-Packard Company Section 1−−185
dnssec-makekeyset(1) dnssec-makekeyset(1)
If dnssec-makekeyset is successful, it creates a file name of the form nnnn .keyset. This file
contains the KEY and SIG records for domain nnnn, the domain name part from the key file
identifier produced when dnssec-keygen created the domain’s public and private keys. The
.keyset file can then be transferred to the DNS administrator of the parent zone for them to sign
the contents with dnssec-signkey.
EXAMPLE
The following command generates a key set for the DSA key for example.com that was shown in the
dnssec-keygen man page. (Note the backslash is simply a line continuation character and not part of
the dnssec-makekeyset command syntax.)
dnssec-makekeyset -t 86400 -s 20000701120000 -e +2592000 \
Kexample.com.+003+26160
dnssec-makekeyset will create a file called example.com.keyset containing a SIG and KEY
A dA record for example.com. These records will have a TTL of 86400 seconds (1 day). The SIG record
becomes valid at noon UTC on July 1st 2000 and expires 30 days (2592000 seconds) later.
The DNS administrator for example.com could then send example.com.keyset to the DNS
administrator for .com so that they could sign the resource records in the file. This assumes that the
.com zone is DNSSEC-aware and the administrators of the two zones have some mechanism for authen-
ticating each other and exchanging the keys and signatures securely.
FILES
/dev/random
SEE ALSO
dnssec-keygen(1), dnssec-signkey(1), dnssec-signzone(1), RFC2535.
Section 1−−186 Hewlett-Packard Company −2− HP-UX 11i Version 2: August 2003
dnssec-signkey(1) dnssec-signkey(1)
NAME
dnssec-signkey - DNSSEC keyset signing tool
SYNOPSIS
dnssec-signkey [-a] [-c class ] [-e end-time ] [-h] [-p] [-r randomdev ] [-s start-time ]
[-v level ] keyset keyfile ...
DESCRIPTION
dnssec-signkey is used to sign a key set for a child zone. Typically this would be provided by a
.keyset file generated by the dnssec-makekeyset utility. This provides a mechanism for a
DNSSEC-aware zone to sign the keys of any DNSSEC-aware child zones. The child zone’s key set gets
signed with the zone keys for its parent zone.
keyset will be the pathname of the child zone’s .keyset file.
Each keyfile argument will be a key identification string as reported by dnssec-keygen for the parent
zone. This allows the child’s keys to be signed by more than one parent zone key. A dA
Options
-a This option verifies all generated signatures.
-c class This option specifies the DNS class of the key sets. Currently only IN class is sup-
ported.
-e end-time This option specifies the date and time when the generated-SIG records expire.. If no end-time is specified, 30 days from the
start time is used as a default.
-h This option makes dnssec-signkey print a summary of its command line
options and arguments.
behavior of dnssec-signkey to use random numbers to
seed the process of generating keys when the system does not have a
/dev/random device to generate random numbers. The dnssec-signkey pro-
gram will prompt for keyboard input and use the time intervals between keystrokes
to provide randomness. With this option, it will use randomdev as a source of ran-
dom data.
-s start-time This option specifies the date and time when the generated SIG records become
valid. start-time is specified, the current time is
used.
-v level This option can be used to make dnssec-signkey more verbose. As the
debugging/tracing level increases, dnssec-signkey generates increasingly
detailed reports about what it is doing. The default level is zero.
When dnssec-signkey completes successfully, it generates a file called nnnn .signedkey containing
the signed keys for child zone nnnn. The keys from the keyset file would have been signed by the
parent zone’s key or keys which were supplied as keyfile arguments. This file should be sent to the
DNS administrator of the child zone. They arrange for its contents to be incorporated into the zone file
HP-UX 11i Version 2: August 2003 −1− Hewlett-Packard Company Section 1−−187
dnssec-signkey(1) dnssec-signkey(1)
when it next gets signed with dnssec-signzone. A copy of the generated signedkey file should be
kept by the parent zone’s DNS administrator, since it will be needed when signing the parent zone.
EXAMPLE
The DNS administrator for a DNSSEC-aware .com zone would use the following command to make
dnssec-signkey sign the .keyset file for example.com created in the example shown in the man
page for dnssec-makekeyset:
dnssec-signkey example.com.keyset Kcom.+003+51944
where Kcom.+003+51944 was a key file identifier that was produced when dnssec-keygen gen-
erated a key for the .com zone.
dnssec-signkey will produce a file called example.com.signedkey which has the keys for
example.com signed by the com zone’s zone key.
A dA FILES
/dev/random
SEE ALSO
dnssec-keygen(1), dnssec-makekeyset(1), dnssec-signzone(1), RFC2535.
Section 1−−188 Hewlett-Packard Company −2− HP-UX 11i Version 2: August 2003
dnssec-signzone(1) dnssec-signzone(1)
NAME
dnssec-signzone - DNSSEC zone signing tool
SYNOPSIS
dnssec-signzone [-a] [-c cycle-time ] [-d directory ] [-e end-time ] [-f output-file ] [-h]
[-i interval ] [-n ncpus ] [-o origin ] [-p] [-r randomdev ] [-s start-time ] [-t]
[-v level ] zonefile keyfile ....
DESCRIPTION
dnssec-signzone is used to sign a zone. Any .signedkey files for the zone to be signed should be
present in the current directory, along with the keys that will be used to sign the zone.
Arguments
zonefile This is the name of the unsigned zone file.
keyfile If no keyfile arguments are supplied, the default behaviour is to use all of the zone’s keys
that are present in the current directory. Providing specific keyfile arguments constrains
A dA
dnssec-signzone to only use those keys for signing the zone. Each keyfile argument
would be an identification string for a key created with dnssec-keygen.
If the zone to be signed has any secure subzones, the .signedkey files for those subzones need to be
available in the current working directory used by dnssec-signzone.
Options
-a This option is used to force verification of the signatures generated by dnssec-
signzone. By default the signature files are not verified.
-c cycle-time
This option is used to configure the cycle period which is used for resigning records when
a previously signed zone is passed as input to dnssec-signzone. The cycle period is
an offset from the current time (in seconds). If a SIG record expires after the cycle
period, it is retained. Otherwise, it is considered to be expiring soon, and dnssec-
signzone will remove it and generate a new SIG record to replace it.
-d directory
This option is used to look for signedkey files in directory as the directory.
-e end-time
This option is used to set the expiration time for the SIG records. The expiration time
specifies when the SIG records are no longer valid, not when they are deleted from caches
on name servers. end-time can represent an absolute or relative date.
The YYYYMMDDHHMMSS notation is used to indicate an absolute date and time.
When end-time is +N, it indicates that the SIG records will expire in N seconds after
their start time.
-f output-file
This option is used to override the use of the default signed zone file,
zonefile.signed by dnssec-signzone.
-h This option is used to print a short summary of the options and arguments to dnssec-
signzone.
-i interval
When a previously signed zone is passed as input, records may be resigned. The interval
option specifies the cycle interval as an offset from the current time (in seconds). If a SIG
record expires after the cycle interval, it is retained. Otherwise, it is considered to be
expiring soon, and it will be replaced.
The default cycle interval is one quarter of the difference between the signature end and
start times. So if neither end-time nor start-time is specified, dnssec-signzone gen-
erates signatures that are valid for 30 days, with a cycle interval of 7.5 days. Therefore,
if any existing SIG records are due to expire in less than 7.5 days, they would be
replaced.
-n ncpus This option can be used to create worker threads equal to ncpus to take advantage of
multiple CPUs. If no option is given, named will try to determine the number of CPUs
present and create one thread per CPU.
HP-UX 11i Version 2: August 2003 −1− Hewlett-Packard Company Section 1−−189
dnssec-signzone(1) dnssec-signzone(1)
-o origin This option specifies the zone origin. If not specified, the name of the zone file is assumed
to be the origin .
behaviour of dnssec-signzone to use random numbers to
seed the process of signing the zone. If the system does not have a /dev/random device
to generate random numbers, the dnssec-signzone program will prompt for key-
board input and use the time intervals between keystrokes to provide randomness. With
EXAMPLE
The example below shows how dnssec-signzone could be used to sign the example.com zone with
the key that was generated in the example given in the man page for dnssec-keygen. The zone file for
this zone is example.com, which is the same as the origin, so there is no need to use the -o option to
set the origin. This zone file contains the key set for example.com that was created by dnssec-
makekeyset. The zone’s keys are either appended to the zone file or incorporated using a $INCLUDE
statement. If there was a .signedkey file from the parent zone; i.e., example.com.signedkey, it
should be present in the current directory. This allows the parent zone’s signature to be included in the
signed version of the example.com zone.
dnssec-signzone example.com Kexample.com.+003+26160
dnssec-signzone will create a file called example.com.signed, the signed version of the
example.com zone. This file can then be referenced in a zone{} statement in /etc/named.conf so
that it can be loaded by the name server.
FILES
/dev/random
SEE ALSO
dnssec-keygen(1), dnssec-makekeyset(1), dnssec-signkey(1), RFC2535.
Section 1−−190 Hewlett-Packard Company −2− HP-UX 11i Version 2: August 2003
domainname(1) domainname(1)
NAME
domainname - set or display name of Network Information Service domain
SYNOPSIS
domainname [ name_of_domain ]
DESCRIPTION
Network Information Service (NIS) uses domain names to refer collectively to a group of hosts. Without
an argument, domainname displays the name of the NIS domain. Only superuser can set the domain
name by providing name_of_domain. The domain name is usually set in the configuration file
/etc/rc.config.d/namesvrs, by setting the NIS_DOMAIN variable.
DEPENDENCIES
NIS servers use the NIS domain name as the name of a subdirectory of /var/yp. For this,
name_of_domain should not be a . or .. and it should not contain /. Since the NIS domain name can
be as long as 64 characters, name_of_domain may exceed the maximum file name length allowed on a A dA
given file system. If that length is exceeded, the subdirectory name becomes a truncated version of the
NIS domain name.
The first 14 characters of all NIS domains on the network must be unique: truncated names should be
checked to verify that they meet this requirement.
AUTHOR
domainname was developed by Sun Microsystems, Inc.
SEE ALSO
ypinit(1M), getdomainname(2), setdomainna(2).
HP-UX 11i Version 2: August 2003 −1− Hewlett-Packard Company Section 1−−191
dos2ux(1) dos2ux(1)
NAME
dos2ux, ux2dos - convert ASCII file format
SYNOPSIS
dos2ux file ...
ux2dos file ...
DESCRIPTION
dos2ux and ux2dos read each specified file in sequence and write it to standard output, converting to
HP-UX format or to DOS format, respectively. Each file can be either DOS format or HP-UX format for
either command.
A DOS file name is recognized by the presence of an embedded colon (:) delimiter; see dosif (4) for DOS
file naming conventions.
A dA If no input file is given or if the argument - is encountered, dos2ux and ux2dos read from standard
input. Standard input can be combined with other files.
EXAMPLES
Print file myfile on the display:
dos2ux myfile
Convert file1 and file2 to DOS format then concatenate them together, placing them in file3.
ux2dos file1 file2 > file3
RETURN VALUE
dos2ux and ux2dos return 0 if successful or 2 if the command failed. The only possible failure is the
inability to open a specified file, in which case the commands print a warning.
WARNINGS
Command formats resembling:
dos2ux file1 file2 > file1
overwrite the data in file1 before the concatenation begins, causing a loss of the contents of file1.
Therefore, be careful when using shell special characters.
SEE ALSO
doschmod(1), doscp(1), dosdf(1), dosls(1), dosmkdir(1), dosrm(1), dosif(4).
Section 1−−192 Hewlett-Packard Company −1− HP-UX 11i Version 2: August 2003
doschmod(1) doschmod(1)
(TO BE OBSOLETED)
NAME
doschmod - change attributes of a DOS file
SYNOPSIS
doschmod [-mu] mode device : file ...
DESCRIPTION
The doschmod command is targeted for removal from HP-UX; see the WARNINGS below.
doschmod is the DOS counterpart of chmod (see chmod(1)).
Options
doschmod recognizes one option:
-m If an ordinary file with the same name as volume label exists, operation will be performed on
the file instead of.
The attributes of each named file are changed according to mode, which is an octal number in the range
000 to 0377. mode is constructed from the logical OR of the following modes:
200 Reserved. Do not use.
100 Reserved. Do not use.
040 Archive. Set whenever the file has been written to and closed.
020 Directory. Do not modify.
010 Volume Label. Do not modify.
004 System file. Marks files that are part of the DOS operating system.
002 Hidden file. Marks files that do not appear in a DOS directory listing using the DOS DIR
command.
001 Read-Only file. Marks files as read-only.
WARNINGS
Use of doschmod is discouraged because it is targeted for removal from HP-UX.
Specifying inappropriate mode values can make files and/or directories inaccessible, and in certain cases
can damage the file system. To prevent such problems, do not change the mode of directories and volume
labels.
Normal users should have no need to use mode bits other than 001, 002, and 040.
EXAMPLES
Mark file /dev/rfd9122:memo.txt as a hidden file:
doschmod 002 /dev/rfd9122:memo.txt
Mark file driveC:autoexec.bat read-only:
doschmod 001 driveC:autoexec.bat
SEE ALSO
chmod(1), dos2ux(1), doscp(1), dosdf(1), dosls(1), dosmkdir(1), dosrm(1), chmod(2), dosif(4).
HP-UX 11i Version 2: August 2003 −1− Hewlett-Packard Company Section 1−−193
doscp(1) doscp(1)
(TO BE OBSOLETED)
NAME
doscp - copy to or from DOS files
SYNOPSIS
doscp [-fmvu] file1 file2
doscp [-fmvu] file1 [ file2 ... ] directory
DESCRIPTION
The doscp command is targeted for removal from HP-UX; see the WARNINGS below.
doscp is the DOS counterpart of cp (see cp (1)). doscp copies a DOS file to a DOS or HP-UX file, an
HP-UX file to an HP-UX or DOS file, or HP-UX or DOS files to an HP-UX or DOS directory. The last name in
the argument list is the destination file or directory.
A DOS file name is recognized by the presence of an embedded colon (:) delimiter; see dosif (4) for DOS
A dA file naming conventions.
Metacharacters *, ?, and [ ... ] can be used when specifying both HP-UX and DOS file names. These
must be quoted when specifying a DOS file name, because file name expansion must be performed by the
DOS utilities, not by the shell. DOS utilities expand file names as described in regexp (5) under PATTERN
MATCHING NOTATION.
The file name - (dash) is interpreted to mean standard input or standard output depending upon its
position in the argument list.
Options
doscp recognizes the following options:
-f Unconditionally write over an existing file. In the absence of this option, doscp asks per-
mission to overwrite an existing HP-UX file.
-v Verbose mode. doscp prints the source name.
-u Disable argument case conversion. In the absence of this option, all DOS file names are con-
verted to upper case.
-m In this case you may have a filename same as DOS volume label.
RETURN VALUE
doscp returns 0 if all files are copied successfully. Otherwise, it prints a message to standard error and
returns with a non-zero value.
EXAMPLES
Copy the files in the HP-UX directory abc to the DOS volume stored as HP-UX file hard_disk:
doscp abc/* hard_disk:
Copy DOS file /backup/log through the HP-UX special file /dev/rfd9127 to HP-UX file logcopy
located in the current directory:
doscp /dev/rfd9127:/backup/log logcopy
Copy DOS file zulu on the volume stored as HP-UX file bb to standard output:
doscp bb:zulu -
Copy all files in directory /dameron with extension txt in the DOS volume /dev/rdsk/c1t2d0 to
the HP-UX directory abacus located in the current directory:
doscp ’/dev/rdsk/c1t2d0:/dameron/*.txt’ abacus
WARNINGS
Use of doscp is discouraged because it is targeted for removal from HP-UX. Use dos2ux (1) instead.
doscp works more reliably if you use a raw device special file (/dev/rdsk/) than a block device special
file.
To use SCSI floppy disk devices, the sflop device driver must be configured into the kernel. (You can
use the ioscan command to verify the configuration.)
Section 1−−194 Hewlett-Packard Company −1− HP-UX 11i Version 2: August 2003
doscp(1) doscp(1)
(TO BE OBSOLETED)
SEE ALSO
cp(1), dos2ux(1), doschmod(1), dosdf(1), dosls(1), dosmkdir(1), dosrm(1), ioscan(1M) dosif(4).
A dA
HP-UX 11i Version 2: August 2003 −2− Hewlett-Packard Company Section 1−−195
dosdf(1) dosdf(1)
(TO BE OBSOLETED)
NAME
dosdf - report number of free disk clusters
SYNOPSIS
dosdf device[:]
DESCRIPTION
The dosdf command is targeted for removal from HP-UX; see the WARNINGS below.
dosdf is the DOS counterpart of the df command (see df(1)). It prints the cluster size in bytes and the
number of free clusters on the specified DOS volume.
WARNINGS
Use of dosdf is discouraged because it is targeted for removal from HP-UX.
A dA SEE ALSO
df(1), dos2ux(1), doschmod(1), doscp(1), dosls(1), dosmkdir(1), dosrm(1), dosif(4).
Section 1−−196 Hewlett-Packard Company −1− HP-UX 11i Version 2: August 2003
dosls(1) dosls(1)
(TO BE OBSOLETED)
NAME
dosls, dosll - list contents of DOS directories
SYNOPSIS
dosls [-aAmudl] device :[ file ] ...
dosll [-aAmudl] device :[ file ] ...
DESCRIPTION
A dA con-
verted.ls and dosll is discouraged because they are targeted for removal from HP-UX.
EXAMPLES
These examples assume that a DOS directory structure exists on the device accessed through HP-UX spe-
cial
dos2ux(1), doschmod(1), doscp(1), dosdf(1), dosmkdir(1), dosrm(1), ls(1), dosif(4).
HP-UX 11i Version 2: August 2003 −1− Hewlett-Packard Company Section 1−−197
dosmkdir(1) dosmkdir(1)
(TO BE OBSOLETED)
NAME
dosmkdir - make a DOS directory
SYNOPSIS
dosmkdir [-mu] device :directory ...
DESCRIPTION
The dosmkdir command is targeted for removal from HP-UX; see the WARNINGS below.
dosmkdir is the DOS counterpart of the mkdir command (see mkdir (1)). It creates specified direc-
tories. The standard entries, . for the directory itself and .. for its parent, are made automatically.
There is one option:
-m In this case you may have a directory name same as DOS.
DIAGNOSTICS
dosmkdir returns 0 if all directories were successfully created. Otherwise, it prints a message to stan-
dard error and returns non-zero.
WARNINGS
Use of dosmkdir is discouraged because it is targeted for removal from HP-UX.
EXAMPLES
Create an empty subdirectory named numbers under the directory /math/lib on the device accessed
through HP-UX special file /dev/rfd9122:
dosmkdir /dev/rfd9122:/math/lib/numbers
SEE ALSO
dos2ux(1), doschmod(1), doscp(1), dosdf(1), dosls(1), dosrm(1), mkdir(1), dosif(4).
Section 1−−198 Hewlett-Packard Company −1− HP-UX 11i Version 2: August 2003
dosrm(1) dosrm(1)
(TO BE OBSOLETED)
NAME
dosrm, dosrmdir - remove DOS files or directories
SYNOPSIS
dosrm [-fmriu] device :file ...
dosrmdir [-mu] device :file ...
DESCRIPTION
The dosrm and dosrmdir commands are targeted for removal from HP-UX; see the WARNINGS
below.
dosrm and dosrmdir are DOS counterparts of rm and rmdir (see rm(1) and rmdir (1), respectively).
dosrm removes the entries for one or more files from a directory. If a specified file is a directory, an
error message is printed unless the optional argument -r is specified (see below).
dosrmdir removes entries for the named directories, provided they are empty. A dA
Options
dosrm and dosrmdir recognize the following options:
-f (force) Unconditionally remove the specified file, even if the file is marked read-only.
-r Cause dosrm to recursively delete the entire contents of a directory, followed by the direc-
tory itself. dosrm can recursively delete up to 17 levels of directories.
-i (interactive) Cause dosrm to ask whether or not to delete each file. If -r is also specified,
dosrm asks whether to examine each directory encountered.
-m If an ordinary file with the same name as volume label exists, operation will be performed on
the file instead of volume label.
-u Disable argument case conversion. In the absence of this option, all DOS file names are con-rm and dosrmdir is discouraged because they are targeted for removal from HP-UX.
EXAMPLES
These examples assume that a DOS directory structure exists on the device accessed through the HP-UX
special file /dev/rfd9122.
Recursively comb through the DOS directory /tmp and ask if each DOS file should be removed forcibly
(that is, with no file mode checks):
dosrm -irf /dev/rfd9122:/tmp
Remove the DOS directory doug from the DOS volume stored as HP-UX file hard_disk:
dosrmdir hard_disk:doug
SEE ALSO
dos2ux(1), doschmod(1), doscp(1), dosdf(1), dosls(1), dosmkdir(1), rm(1), rmdir(1), dosif(4).
HP-UX 11i Version 2: August 2003 −1− Hewlett-Packard Company Section 1−−199
du(1) du(1)
NAME
du - summarize disk usage
SYNOPSIS
du [-a-s] [-bkrx] [-t type ] [ name ... ]
DESCRIPTION
The du command gives the number of 512-byte blocks allocated for all files and (recursively) directories
within each directory and file specified by the name operands. The block count includes the indirect
blocks of the file. A file with two or more links is counted only once. If name is missing, the current
working directory is used.
By default, du generates an entry only for the name operands and each directory contained within those
hierarchies.
A dA Options
The du command recognizes the following options:
-a Print entries for each file encountered in the directory hierarchies in addition to the
normal output.
-b For each name operand that is a directory for which file system swap has been
enabled, print the number of blocks the swap system is currently using.
-k Gives the block count in 1024-byte blocks.
-r Print messages about directories that cannot be read, files that cannot be accessed,
etc. du is normally silent about such conditions.
-s Print only the grand total of disk usage for each of the specified name operands.
-x Restrict reporting to only those files that have the same device as the file specified
by the name operand. Disk usage is normally reported for the entire directory
hierarchy below each of the given name operands.
-t type Restrict reporting to file systems of the specified type . (Example values for type are
hfs, cdfs, nfs, etc.) Multiple -t type options can be specified. Disk usage is
normally reported for the entire directory hierarchy below each of the given name
operands.
EXAMPLES
Display disk usage for the current working directory and all directories below it, generating error mes-
sages for unreadable directories:
du -r
Display disk usage for the entire file system except for any cdfs or nfs mounted file systems:
du -t hfs /
Display disk usage for files on the root volume (/) only. No usage statistics are collected for any other
mounted file systems:
du -x /
WARNINGS
Block counts are incorrect for files that contain holes.
SEE ALSO
df(1M), bdf(1M), quot(1M).
STANDARDS CONFORMANCE
du: SVID2, SVID3, XPG2, XPG3, XPG4
Section 1−−200 Hewlett-Packard Company −1− HP-UX 11i Version 2: August 2003
echo(1) echo(1)
NAME
echo - echo (print) arguments
SYNOPSIS
echo [ arg ] ...
DESCRIPTION
echo writes its arguments separated by blanks and terminated by a new-line on the standard output. It
also understands C-like escape conventions; beware of conflicts with the shell’s use of \:
\a write an alert character
\b backspace
\c print line without appending a new-line
\f form-feed
\n new-line
\r carriage return
\t tab
\v vertical tab A eA
\\ backslash
\n the 8-bit character whose ASCII code is the 1-, 2-, 3- or 4-digit octal number n, whose first
character must be a zero.
\0num write an 8-bit value that is the zero-, one-, two- or three-digit octal number num
echo is useful for producing diagnostics in command files and for sending known data into a pipe.
Notes
Berkeley echo differs from this implementation. The former does not implement the backslash escapes.
However, the semantics of the \c escape can be obtained by using the -n option. The echo command
implemented as a built-in function of csh follows the Berkeley semantics (see csh (1)).
EXTERNAL INFLUENCES
Environment Variables
LC_CTYPE determines the interpretation of arg, echo behaves as if all internationalization variables are set to "C". See environ (5).
AUTHOR
echo was developed by OSF and HP.
SEE ALSO
sh(1).
BUGS
No characters are printed after the first \c. This is not normally a problem.
STANDARDS CONFORMANCE
echo: SVID2, SVID3, XPG2, XPG3, XPG4, POSIX.2
HP-UX 11i Version 2: August 2003 −1− Hewlett-Packard Company Section 1−−201
ed(1) ed(1)
NAME
ed, red - line-oriented text editor
SYNOPSIS
ed [-p string ] [-s-] [-x] [file ]
red [-p string ] [-s-] [-x] [file ]
DESCRIPTION
The ed command executes a line-oriented text editor. It is most commonly used in scripts and nonin-
ter
A eA
<:t5,10,15 s72:>
the tab stops would be set at columns 5, 10, and 15, and a maximum line length of 72 would be imposed.
Note: When you input text, ed expands tab characters as they are typed to every eighth column as a
default.
Regular Expressions
ed supports the Basic Regular Expression (RE) syntax (see regexp (5)), with the following additions:
• The null RE (for example, //) is equivalent to the last RE encountered.
• If the closing delimiter of an RE or of a replacement string (for example, /) would be the last
character before a newline, that delimiter can be omitted, in which case the addressed line is
printed. The following pairs of commands are equivalent:
Section 1−−202 Hewlett-Packard Company −1− HP-UX 11i Version 2: August 2003
ed(1) ed(1) accord-
ing to the following rules:
1. The character . refers to the current line.
2. The character $ refers to the last line of the buffer. the first line found by searching forward from the
line following the current line toward the end of the buffer and stopping at the first line contain- A eA
ing a string matching the RE. If necessary, the search wraps around to the beginning of the
buffer and continues up to and including the current line, so that the entire buffer is searched.
(Also see WARNINGS below.)
6. An RE enclosed by question marks (?RE ?) addresses the first line found by searching back-
ward from the line preceding the current line toward the beginning of the buffer and stopping at
the first line containing a string matching the RE. If necessary, (ˆ) and - charac-
ters.
Addresses are usually separated from each other by a comma (,). They can also be separated by a semi-
colon (;),, num-
bered, or printed, as discussed below under the l, n, and p commands.
(.)a The a (append) command reads text and appends it after the addressed line. Upon com-
text pletion, the new current line is the last inserted line, or, if no text was added, at the
. addressed line. Address 0 is legal for this command, causing the appended text to be
placed at the beginning of the buffer.
HP-UX 11i Version 2: August 2003 −2− Hewlett-Packard Company Section 1−−203
ed(1) ed(1)
(.,.)c The c (change) command deletes the addressed lines then accepts input text to replace
text the current
file name.
Section 1−−204 Hewlett-Packard Company −3− HP-UX 11i Version 2: August 2003
ed(1) ed(1) A eA
printed. The n command can be appended to any command other than e, f, r, or com-
mand com-
mands)..
HP-UX 11i Version 2: August 2003 −4− Hewlett-Packard Company Section 1−−205
ed(1) ed(1)
When the character % is the only character in replacement , the replacement used in the
most recent substitute command is used as the replacement in the current substitute com-
mand. The % loses its special meaning when it is in a replacement string containing
more than one character or when preceded by a \.
A line can be split by substituting a newline character into it. The newline in replace-
ment
A eA line is written in the format specified for the n command.
p Write to standard output the final line in which a substitution was made. The
line is written in the format specified for the p command.
(.,. any-
thing in the buffer, that is, the most recent a, c, d, g, G, i, j, m, r, s, t, v, or V com-
mand. immedi-
ately expli-
citly empty key turns off encryption.
($)= The line number of the addressed line is displayed. The current line address is
unchanged by this command.
com-
mand,.
Section 1−−206 Hewlett-Packard Company −5− HP-UX 11i Version 2: August 2003
ed(1) ed(1) vari-
able set-
ting, all internationalization variables default to "C". See environ (5).
If LC_ALL is set to a nonempty string value, it overrides the values of all the other internationalization
variables, including LANG.
A eA
LC_CTYPE determines the interpretation of text as single- and/or multibyte. ‘w’,
ed(1) allows a Maximum Line Length of 4096 characters. Attempting to create lines longer than
the allowable limit causes ed(1) to produce a Line too long error message.)).
HP-UX 11i Version 2: August 2003 −6− Hewlett-Packard Company Section 1−−207
ed(1) ed(1)
If the editor input is coming from a command file (e.g., ed file < ed-cmd-file), the editor exits at
the first failure of a command in the command file.
When reading a file, ed discards ASCII NUL characters and all characters after the last newline. This
can cause unexpected behavior when using regular expressions to search for character sequences contain-
ing).
A eA The ed section in Text Processing: User’s Guide .
STANDARDS CONFORMANCE
ed: SVID2, SVID3, XPG2, XPG3, XPG4, POSIX.2
red: SVID2, SVID3, XPG2, XPG3
Section 1−−208 Hewlett-Packard Company −7− HP-UX 11i Version 2: August 2003
elfdump(1) elfdump(1)
NAME
elfdump - dump information contained in object files.
SYNOPSIS
elfdump [-acCdfghHjkLopqrsStuUvV] [-dc] [-dl] [-tx] [-tv] [-D num ] [+D num2 ]
[+interp] [+linkmap] [+linkmap_bss] [+linkmap_file] [-n name ] [+objdebug] [+s
section ] [-T num ] [+T num2 ] files ...
DESCRIPTION
elfdump takes one or more object files or libraries and dumps information about them. The following
options are supported:
-a Dumps archive headers from an archive library.
-c Dumps the string table(s).
-C (Modifier) Demangles C++ symbol names before printing them. This modifier is valid
with -c, -r, -s, and -t. If specified with -H, this modifier is ignored. If specified with
-n name, the symbol whose unmangled name matches name will be printed, and its sym- A eA
bol name will be printed as a demangled name.
-d Prints the .note section which contains the compilation unit dictionary and linker foot-
print. This option has the same effect as elfdump -dc -dl.
-dc Prints the compilation unit dictionary of the .notes section.
-dl Prints the linker footprint of the .notes section. The linker footprint has information on
the linker used to generate the file as well as the link time.
-D num (Modifier) Prints the section whose index is num.
+D num2 (Modifier) Prints the sections in the range 1 to num2. If used with -D, the sections in the
range num to num2 are printed. Valid with -h, -r, -s. If used with -r, only the reloca-
tions which apply to the section(s) in the range are printed.
-f Dumps the file header (ELF header).
-g Dumps global symbols from an archive.
-h Dumps the section headers.
-H (Modifier) Dumps output information in hexadecimal, octal, or decimal format, with all
options.
+interp Displays the run-time interpreter path name for a.out (usually the location of the
dynamic loader and microloader). Only shared bound executables have this string. To
change the setting, use the ld +interp command.
-j Prints the object dictionary for one or more executable files, if the source file was com-
piled with the +objdebug option. The object dictionary entry contains the name of the
object file that contributed to a particular section, the relative offset within the section,
size of the object file’s contribution, and attributes of the entry.
-k Prints the CTTI section headers according to the directory member relationship.
-L Dumps the .dynamic section in shared libraries and dynamically linked program files.
+linkmap Prints the .linkmap section, which is only created when the incremental linker is used
(with the ld +ild command), or when the linker option +objdebug is used (which is
the default), along with the compiler option, -g (which is NOT the default).
+linkmap_bss
Prints the .linkmap_bss section, which is only created when the incremental linker is
used (with the ld +ild command), or when the linker option +objdebug is used
(which is the default), along with the compiler option, -g (which is NOT the default).
+linkmap_file
Prints the .linkmap_file section, which is only created when the incremental linker is
used (with the ld +ild command), or when the linker option +objdebug is used
(which is the default), along with the compiler option, -g (which is NOT the default).
-n name (Modifier) Dumps information about the specified section or symbol name. This option is
valid with -h, -r, -s, and -t. If used with -t, name pertains to a symbol name and
HP-UX 11i Version 2: August 2003 −1− Hewlett-Packard Company Section 1−−209
elfdump(1) elfdump(1)
elfdump will only dump the symbol entry whose name matches name. If used with the
other options, name pertains to a section name and elfdump will only dump the section
whose name matches it.
-o Dumps the optional headers (program headers).
-p (Modifier) Do not print titles, with all options.
-q (Modifier) Suppresses printing CTTI section headers. Valid with -k option.
-r Dumps the relocations.
-s Dumps the section contents.
+objdebug Dumps any section beginning with .objdebug_ as a string table.
+s name (Modifier) Dumps the section specified by name. Valid with -c and -t only.
-S (Modifier) Dumps output information in short format. Valid with the -h and -o options.
-t Dumps the symbol table entries.
A eA -tx Dumps the value of st_shndx in symbol table, in addition to information dump from -t
option. This option is useful to verify the data stored in the symbol table.
-T num Prints the symbol whose index is num.
+T num2 (Modifier) Prints the symbols in the range 0 to num2. If used with -T, print the symbols
in the range num to num2. Valid with -t.
-tv Prints versioned symbols.
-u Prints the usage menu.
-U Prints the unwind table.
-v (Modifier) Verifies the CTTI section headers before printing. Valid with the -k option.
-V Prints the version number for elfdump.
EXAMPLES
To see the functions exported from a shared library:
$ elfdump -s -n .dynsym libsubs.so | grep ’FUNC GLOB’ | grep -v UNDEF
To see the global data items exported from a shared library:
$ elfdump -s -n .dynsym libsubs.so | grep ’OBJT GLOB’ | grep -v UNDEF
To display string table information (.strtab):
$ elfdump -c subs.o
To list the shared libraries (.sl) linked with a program or shared library (dependent libraries):
$ elfdump -L a.out | grep Needed
$ chatr a.out # shared library list
To list the embedded path for shared libraries (.so) opened by a program:
$ elfdump -L a.out | grep Rpath # or
$ elfdump -s -n .dynamic a.out | grep Rpath
$ chatr a.out # embedded path
SEE ALSO
System Tools
ld(1) Invoke the link editor
Miscellaneous
a.out (4) Assembler, compiler, and linker output
elf (3E) Executable and Linking Format
Section 1−−210 Hewlett-Packard Company −2− HP-UX 11i Version 2: August 2003
elfdump(1) elfdump(1)
A eA
HP-UX 11i Version 2: August 2003 −3− Hewlett-Packard Company Section 1−−211
elm(1) elm(1)
NAME
elm - process electronic mail through a screen-oriented interface
SYNOPSIS
elm [-aKkmtVz] [-f folder ]
elm [-s subject ] address-list
elm -c [alias-list ]
elm -h
elm -v
DESCRIPTION
The elm program is a screen-oriented electronic mail processing system. It supports the industry-wide
MIME standard for nontext mail, a special forms message and forms reply mechanism, and an easy-to-
use alias system for individuals and groups. elm operates in three principal modes:
A eA • Interactive mode, running as an interactive mail interface program. (First syntax.)
• Message mode, sending a single interactive message to a list of mail addresses from a shell com-
mand line. (Second syntax.)
• File mode, sending a file or command output to a list of mail addresses via a command-line pipe or
redirection. (Second syntax.)
In all three cases, elm honors the values that are set in your elmrc initialization file, in your elm alias
database, and in the system elm alias database.
The modes are described below in inverse order (shortest description to longest).
Options
The following options are recognized:
-a Set arrow=ON. Use the arrow (->) instead of the inverse bar to mark the current
item in the various indexes. This overrides the setting of the arrow boolean vari-
able (see the ELM CONFIGURATION section).
-c Check alias. Check the aliases in alias-list against your personal elm alias data-
base and the system elm alias database. The results are written to standard out-
put. Errors are reported first, in the form:
(alias "alias" is unknown)
Successes are reported in a header-entry format, with group aliases replaced by
their members, in the form:
Expands to: alias-address (fullname ),
alias-address (fullname ),
...
alias-address (fullname )
If there is no fullname , the " (fullname )" portion is omitted.
-f folder Folder file. Read mail from the folder file rather than from the incoming mailbox.
A folder file is in the standard mail file format, as created by the mail system or
saved by elm itself.
-h Help. Display an annotated list of command-line options.
-k Set softkeys=OFF. Disable the use of softkeys (HP 2622 function keys). This
overrides the setting of the softkeys boolean variable (see the ELM CONFI-
GURATION section).
-K Set keypad=OFF and softkeys=OFF. Disable the use of softkeys and arrow cur-
sor keys. If your terminal does not have the HP 2622 function key protocols, this
option is required. This overrides the settings of the keypad and softkeys
boolean variables (see the ELM CONFIGURATION section).
Section 1−−212 Hewlett-Packard Company −1− HP-UX 11i Version 2: August 2003
elm(1) elm(1)
-m Set menu=OFF. Do not display the command menus on several Interactive Mode
screens. This overrides the setting of the menu boolean variable (see the ELM
CONFIGURATION section).
-s subject Subject. Specify the subject for a File Mode or Message Mode message.
-t Set usetite=OFF. Do not use the termcap ti/te and terminfo cup cursor-
positioning entries. This overrides the setting of the usetite boolean variable
(see the ELM CONFIGURATION section).
-V Verbose transmission. Pass outbound messages to the sendmail mail transport
agent using the -v option (see sendmail (1M)).
-v Version. Print out the elm version information. This displays the version number
and the compilation features that were specified or omitted.
-z Zero. Do not enter elm if there is no mail in the incoming mailbox.
Operands
The following operands are recognized: A eA
address-list A blank-separated list of one or more mail addresses, your elm user aliases, or elm
system aliases.
alias-list A blank-separated list of one or more of your elm user aliases or elm system
aliases.
Terminology
The following terms are used throughout this manpage.
blank A space or a tab character, sometimes known as linear white space.
body The body of a message. See message.
boolean variable
See configuration variable.
configuration variable
A boolean, numeric, or string variable that defines default behavior in the elm mail system.
See the ELM CONFIGURATION section.
elm system alias text file
The source file, /var/mail/.elm/aliases.text, for the elm system alias database.
elm user alias text file
The source file , $HOME/.elm/aliases.text, for a user’s own elm alias database.
elm user headers file
A file, $HOME/.elm/elmheaders, where a user can specify special header entries that are
included in all outbound messages.
elmrc configuration file
A file, $HOME/.elm/elmrc, that defines the initial values for elm configuration variables.
environment variable
A global variable set in the shell that called elm. See the EXTERNAL INFLUENCES section.
folder A file that contains mail messages in the format created by sendmail or elm.
full name
The first and last name of a user, as extracted from an alias text file or from the
/etc/passwd file.
header The header of a message. See message.
header entry
An entry in the header portion of a message, sometimes called a header field.
incoming mailbox
The mailbox where you receive your mail, usually /var/mail/loginname.
mail directory
The directory, defined by the maildir string variable, where a user normally stores mail
messages in folders.
HP-UX 11i Version 2: August 2003 −2− Hewlett-Packard Company Section 1−−213
elm(1) elm(1)
FILE MODE
If standard input is connected to a pipe or to a file, and an address-list is specified, elm operates in File
Mode.
The output of the previous command in the pipe, or the content of the file, is mailed to the members of the
address-list . The address-list is expanded, based on your elm alias database and the system elm alias
database, and placed in the To: header entry.
If -s is omitted or subject is null, subject defaults to:
no subject (file transmission)
The expressed or default value of subject is placed in the Subject: header entry.
See the EXAMPLES section.
MESSAGE MODE
If standard input is connected to your terminal, and an address-list is specified, elm operates in Message
Mode.
The address-list is expanded, based on your elm alias database and the system elm alias database, and
placed in the To: header entry. The To: header entry is displayed, in the same form as for the Message
Menu m (mail) command in Interactive Mode.
The value of subject , if nonnull, or a null string, is placed in the Subject: header entry and the Sub-
ject: line is displayed for modification.
If askcc is ON in your elmrc file, you are prompted for Copies to:.
Then the editor defined by the editor string variable (if a signature file is not added) or the altedi-
tor string variable (if a signature file is added) is started so that you can write your message.
Section 1−−214 Hewlett-Packard Company −3− HP-UX 11i Version 2: August 2003
elm(1) elm(1)
When you leave your editor, you enter the Send Menu, as described for Interactive Mode.
If you choose the Send Menu s (send) command, the message is sent and the program terminates. If you
select the Send Menu f (forget) command, the message is stored in $HOME/Canceled.mail and the
program terminates. If you select other commands, the appropriate action occurs.
See the EXAMPLES section.
INTERACTIVE MODE
If standard input is connected to your terminal, and there is no address-list , elm operates in a screen-
oriented Interactive Mode.
If you do not have a $HOME/.elm directory, or if you do not have a mail directory, defined by the mail-
dir string variable, you are asked in turn if they should be created. You can answer y for yes , n for no,
or q for quit . For y or n, the directories are created or not, as appropriate, and the program continues.
For q, the program terminates.
Overview
When invoked, elm reads customized variables from file $HOME/.elm/elmrc (if it exists) to initialize A eA
parameters. This file can be saved from within elm and some of these variables can also be modified
with the Message Menu o (option) command.
elm first displays the Main or Message Menu, which shows index entries for the messages in your incom-
ing mailbox or selected mail folder. Among other options, you can read, print, reply to, and forward these
messages, as well as initiate new mail messages to other users.
You can also move to the Alias Menu, where you can create, modify, and delete your personal aliases.
From the Alias Menu, you can select one or more of your aliases and send a message to the corresponding
users.
When you send a message, you can include attachments in a number of formats, such as PostScript,
images, audio, and video, as well as plain text. The attachments are managed separately, which can be
convenient both for you and your correspondents.
Sending Messages
When you send a message, you use the editor defined by the editor or alteditor string variable. If
builtin is your editor, a set of commands described in the Built-In Editor subsection is available while
composing your message
If the elmheaders file exists (see the HEADER FILE section), all nonblank lines in the file are copied to
the headers of all outbound mail. This is useful for adding special information headers such as X-
Organization:, X-Phone:, and so forth.
MIME Support
elm supports the MIME protocols for headers and messages (RFC 1521 and RFC 1522) enabling it to
view and send mail containing other than normal ASCII text. For example, the mail contents can be
audio, video, images, etc., or a combination of these.
This also enables conformance with SMTP (RFC 821), which allows only 7-bit characters in the message,
by using MIME-encoding (base64 and quoted-printable) to convert 8-bit data to 7-bit.
elm also provides a facility to view multipart MIME messages. If elm receives a message whose type is
not text/plain, it invokes metamail, which invokes the appropriate utility (for example, ghost-
view, xv, an audio editor, mpeg) to display the different mail parts according to the content type (for
example, application/postscript, image, audio, video).
Aliases
elm has its own alias system that supports both personal and system-wide aliases. Personal aliases are
specific to a single user; system aliases are available to everyone on the system where the system aliases
reside (see newalias (1)). You can access the Alias Menu by executing the Message Menu a (alias) com-
mand. You can then create and save an alias for the current message, create and check other aliases, and
send messages to one or more aliases.
Aliases are limited to 2500 bytes. If you wish to create a group alias that is longer than 2500 bytes,
please ask your system administrator to create it for you in the sendmail system alias file,
/etc/mail/aliases (see sendmail (1M)).
HP-UX 11i Version 2: August 2003 −4− Hewlett-Packard Company Section 1−−215
elm(1) elm(1)
Message Menu
The Message Index is displayed on the Message Menu. You can use the following commands to manipu-
late and send messages. Some commands use a series of prompts to complete their action. You can use
Ctrl-D to cancel their operations.
The commands are:
!command Shell Escape. Send command to the shell defined by the shell string variable
without leaving elm.
# Display all known information about the current message.
$ Resynchronize the messages without leaving elm. If there are any messages
marked for deletion, you are asked if you want to delete them. If any messages are
A eA deleted or any status flags have changed, the messages are written back to the mail-
box file. All tags are removed.
% Display the computed return address of the current message.
* Set the current message pointer to the last message.
+ Display the next message index page, when applicable.
- Display the previous message index page, when applicable.
/pattern Pattern match. Search for pattern in the from and subject fields of the current mes-
sage index. The search starts at the current message and wraps around to the
beginning of the index. The current message pointer is set to the first message that
matches. Uppercase and lowercase are treated as equivalent.
//pattern Pattern match. Search for pattern through all the lines of the current folder. The
search starts at the current message and wraps around to the beginning of the
folder. The current message pointer is set to the first message that matches.
Uppercase and lowercase are treated as equivalent.
< Calendar. Scan message for calendar entries and add them to your calendar file. A
calendar entry is defined as a line whose first nonblank characters are ->, as in:
->calendar-entry
The delimiter -> and surrounding blanks are removed before the entry is added to
the calendar file. Resultant blank lines are ignored. You can define the calendar
file name in your elmrc file or with the Options Menu.
= Set the current message pointer to the first message.
> Save in folder. Same as the Message Menu s (save) command.
?key ... Help on key. Display a one-line description of what each key does. ? displays a
summary listing for each command available. A period (.) returns you to the Mes-
sage Menu.
@ Display a summary of the messages indexed on the current screen.
| Pipe the current message or the set of tagged messages through other filters as
desired. Use the shell defined by the shell string variable.
n New current message. Change the current message pointer to the one indexed as n.
If the message is not on the current page of headers, the appropriate page
displayed.
Return Read current message. The screen is cleared and the current message is displayed
by the pager defined by the pager string variable.
a Alias. Switch to the Alias Menu.
b Bounce mail. This is similar to forwarding a message, except that you do not edit
the message and the return address is set to the original sender’s address, rather
than to your address.
Section 1−−216 Hewlett-Packard Company −5− HP-UX 11i Version 2: August 2003
elm(1) elm(1)
c Change folder. This command is used to change the file whose messages are
displayed on the Message Menu. You are asked for a file name. The file must be in
message format; otherwise, elm aborts. You can use the customary wildcards for
your shell, as well as the following special names:
! Your incoming mail folder.
> Your received folder, defined by the receivedmail string vari-
able.
< Your sent folder, defined by the sentmail string variable.
. The previously used folder.
@alias The default folder for the login name associated with the alias
alias.
=filename A file in the directory defined by the maildir string variable.
C Copy message. Save the current message or the set of tagged messages to a folder.
You are prompted for a file name with a default value. The default value is a file in A eA
the maildir directory with the user name of the sender of the first message in the
set being saved. Any tags are cleared. Unlike the > and s commands, the messages
are not marked for deletion and the current message pointer is not moved.
d Delete. Mark the current message for deletion. See also Ctrl-D, u, and Ctrl-U.
Ctrl-D Delete. Mark all messages for deletion that contain a specified pattern in the
From: and Subject: header entries. See also d, u, and Ctrl-U.
e Edit. Allows you to physically edit the current mail folder using the editor defined
by the editor string variable. When you exit from your editor, elm resynchron-
izes your mail folder (see the $ command).
f Forward the current message. You are asked if you want to edit the outbound mes-
sage. If you answer y, the characters defined by the prefix string variable are
prefixed to each line of the message and the editor defined by the editor string
variable will be invoked to allow you to edit the message. If you answer n, the char-
acters are not prefixed and the editor will not be invoked. In either case, you are
prompted for To: recipients, allowed to edit the Subject: header entry, and, if
the askcc boolean variable is ON, you are prompted for Cc: recipients.
If the userlevel numeric variable is 1 (intermediate) or 2 (expert), and there
was a previous sent or forgotten message in this session, you are asked if you would
like to
Recall last kept message instead? (y/n)
If you answer y, the previous message is returned to the send buffer. If you answer
n, the current message is copied into the send buffer and your signature file (if any)
is appended.
Then the editor is invoked if you chose to edit the outbound message (above). When
you leave the editor, or if it was not invoked, the Send Menu is displayed.
g Group reply. The reply is automatically sent To: the sender of the message, with
Cc: to all the original To: and Cc: recipients. Otherwise, the action is the same
as for the r command.
h Same as Return, except that the message is displayed with all headers.
j Move down. Move the current message pointer down to the next message.
J Move down. Move the current message pointer down to the next undeleted mes-
sage.
k Move up. Move the current message pointer up to the previous message.
K Move up. Move the current message pointer up to the previous undeleted message.
l (ell) Limit the displayed messages to those that contain certain string values. You are
prompted with Enter criteria:. To set, add to, or clear the limiting criteria,
type one of:
HP-UX 11i Version 2: August 2003 −6− Hewlett-Packard Company Section 1−−217
elm(1) elm(1)
all Clear all the criteria and restore the normal display.
from string Restrict to entries that contain string in the From: header.
subject string Restrict to entries that contain string in the Subject:
header.
to string Restrict to entries that contain string in the To: header.
You can add limiting criteria by repeating the l command.
Ctrl-L Redraw the screen.
m Mail. Send mail to one or more addresses. You are prompted for To: recipients, a
Subject: and, if the askcc boolean variable is ON, Cc: recipients.
If the userlevel numeric variable is 1 (intermediate) or 2 (expert), and there
was a previous sent or forgotten message in this session, you are asked if you would
like to
Section 1−−218 Hewlett-Packard Company −7− HP-UX 11i Version 2: August 2003
elm(1) elm(1)
provided by the alwayskeep boolean variable (ON means y (yes) and OFF
means n (no)).
If you answer y, all undeleted unread (new and old) messages are returned to
your incoming mailbox.
If you answer n, all undeleted unread messages will be moved to the folder
defined by the receivedmail string variable.
If the ask boolean variable is OFF, the answers to the questions (which are not
displayed) are taken automatically from the values of the alwaysdelete,
alwaysstore, and alwayskeep boolean variables, respectively.
Q Quick quit. This is equivalent to executing the q command with the ask boolean
variable set to OFF.
r Reply to the sender of the current message. If the autocopy boolean variable is
OFF, you are asked if the source message should be copied into the edit buffer. If it
is ON, the message is copied automatically. If copied in, all lines from the message
are preceded by the prefix string defined by the prefix string variable. The To: A eA
header is set to the sender of the message (or the address in the Reply-To:
header, if one was set), the Subject: is set to the subject of the message, preceded
by Re:, and presented for you to edit. If the askcc boolean variable is ON, you are
prompted for Cc: recipients. Then, the editor defined by the editor string vari-
able is invoked. After you exit from your editor, the Send Menu is displayed.
s Save in folder (same as >). Save the current message or the set of tagged messages
to a folder. You are prompted for a file name with a default value. The default
value is a file in the maildir directory with the login name of the sender of the
first message in the set being saved. Any tags are cleared and the messages are
marked for deletion. The current message pointer is moved to the first undeleted
message after the last saved message.
t Tag toggle. Tag the current message for a later operation and move the current
message pointer to the next undeleted message. The operation can be one of |, C,
p, and s.
Or, remove the tag from a tagged message. See also the Ctrl-T command.
T Tag toggle. Tag the current message for a later operation and remain at the current
message. The operation can be one of |, C, p, and s.
Or, remove the tag from a tagged message. See also the Ctrl-T command.
Ctrl-T Tag all messages containing the specified pattern. Or remove the tags from all
tagged messages.
If any messages are currently tagged, you are asked if the tags should be removed.
Answer y to remove the old tags; answer n to keep them. In either case, you are
prompted for a string to match in either the From: or Subject: line of each mes-
sage. All messages that match the criterion are tagged. If you enter a null string
(carriage-return alone), no more messages are tagged.
u Undelete. Remove the deletion mark from the current message. See also d, Ctrl-
D, and Ctrl-U.
Ctrl-U Undelete. Remove any deletion mark from all messages that contain a specified
pattern in the From: and Subject: header entries. See also d, Ctrl-D, and u.
v View attachments. Invoke the Attachment View Menu for the current message.
x Exit. Exit without changing the mailbox. If changes are pending, such as deletions,
you are asked if they can be abandoned. If you answer y, the changes are aban-
doned and the program terminates. If you answer n the exit is abandoned and you
return to the Message Menu command prompt.
X Exit immediately without changing the mailbox. All pending changes are aban-
doned.
HP-UX 11i Version 2: August 2003 −8− Hewlett-Packard Company Section 1−−219
elm(1) elm(1)
Message Index
The messages in the current folder are indexed on the Message Menu, one per line, in the format:
sssnum mmm d from (lines ) subject
defined as:
sss A three-character status field, described in the Message Status subsection.
num The ordinal message index number.
mmm The month from the last Date: header entry, or from the From message header.
d The day from the last Date: header entry, or from the From message header.
from Either the sender name from the last From: header entry or from the From message
header.
lines The number of lines in the message.
subject The subject description from the first Subject: header entry, truncated to fit your
A eA screen.
The current message index entry is either highlighted in inverse video or marked in the left margin with
an arrow (->). See the -a option in the Options subsection and the arrow string variable in the ELM
CONFIGURATION section.
Message Status
The first three characters of each message index entry describe the message status. Each can be blank or
one of the values described below in descending order of precedence.
When a message has more than one status flag of a particular type set, the highest-precedence indicator
is displayed on the index line. For example, if a forms message (F) is also marked as company
confidential (C), the C rather than the F status character is displayed.
Section 1−−220 Hewlett-Packard Company −9− HP-UX 11i Version 2: August 2003
elm(1) elm(1)
? MIME. The message or its attachments is in a MIME format whose version is not supported.
Blank. Normal status.
Built-In Editor
When you are composing an outbound message with the builtin built-in editor, it prompts you for text
lines with an empty line. Enter a period (.) to end the message and continue with the Send Menu.
Built-in editor commands are lines that begin with an escape character, defined by the escape string
variable. The default escape character is tilde (˜).
Note: Some remote login programs use tilde as their default escape character when it is the first charac-
ter on a line. (You can tell, because the tilde does not print.) Usually, the tilde is transmitted when you A eA
enter a second character that is not recognized by the program or when you enter a second tilde. See the
program documentation for further information.
The built-in editor commands are:
~! [command] Execute the shell command, if one is given (as in ˜!ls), or start an interac-
tive shell, using the shell defined by the shell string variable.
~< command Execute the shell command and place the output of the command into the edi-
tor buffer. For example, "˜< who" inserts the output of the who command in
your message.
~? Print a brief help menu.
~˜ Start a line with a single tilde (˜) character.
~b Prompt for changes to the Blind-Carbon-Copy (Bcc:) list.
~c Prompt for changes to the Carbon-Copy (Cc:) list.
~e Invoke the editor defined for the easyeditor string variable on the mes-
sage, if possible.
~f [options] Add the specified list of messages or the current message. This uses read-
mail which means that all readmail options are available (see read-
mail (1)).
~h Prompt for changes to all the available headers (To:, Cc:, Bcc:, and Sub-
ject:).
~m [options] Same as ˜f, but each line is prefixed with the current prefix. See the prefix
string variable.
~o Prompt for the name of an editor to use on the message.
~p Print out the message as typed in so far.
~r filename Include (read in) the contents of the specified file.
~s Prompt for changes to the Subject: line.
~t Prompt for changes to the To: list.
~v Invoke the editor defined for the visualeditor string variable on the mes-
sage, if possible.
Alias Menu
The Alias Menu is invoked with the Message Menu a command. The source text for your alias file is
stored in the file $HOME/.elm/aliases.text. You can edit this file directly or with the following
commands.
The aliases currently compiled into your database and the system database are displayed in an indexed
list similar to the Message Menu. The entry format is described in the Alias Index subsection. The index
is sorted in the order defined by the aliassortby string variable. | https://de.scribd.com/document/19344828/HPUX-Command-A-M | CC-MAIN-2019-26 | refinedweb | 62,095 | 64.3 |
Deploying Machine Learning Model on Docker
In this blog, we are going to deploy a Machine Learning Model on the top of Docker container. For this, we need various software as a prerequisite. So let's start this by installing Docker.
To install Docker first we need to configure yum for docker. So paste the below code in your repo dir
cd /etc/yum.repos.d
yum install docker-ce --nobest
systemctl enable docker --now
These commands will install and start the container services.
After that, we need to launch a docker container on which we will run the Machine Learning model.
To run a container we can run the following command
docker run -it --name ml centos:latest
After running the docker run command we will directly land into the docker container. Now, we need to install some packages like CLI text editor- vim, and python3.
yum install vim python3-pip -y
After installing python3 we need to install some necessary py libraries to train our Machine Learning Model.
pip3 install numpy pandas sklearn joblib
After installing all the required libraries we need to copy the dataset which we are going to use to train our model.
docker cp SalaryData.csv os1:/
Now, we can write our Machine Learning code in our docker container using vim as a text editor.
vim ml.pyimport pandas
ds= pandas.read_csv('SalaryData.csv')
from sklearn.linear_model import LinearRegression
mind= LinearRegression()
x= ds['YearsExperience'].values.reshape(30, 1)
y= ds['Salary'] mind.fit(x,y)
mind.fit(x,y)import joblib
joblib.dump(mind, 'model.pk1')
This line shows that our code is a success and our model is created( model.pk1).
We need another program to run the pre-created model. Hence, I have created another program for running the model.
vim final.pyimport joblib
mind= joblib.load('model.pk1')
x= input("Enter the experiance in years:")
x=[[x]]
op =mind.predict(x)
print(op)
This code will first load our pre-trained model(model.pk1) and ask the number of years of experience then it will predict the estimation of the salary based on the years of experiance.
Output
| https://gauravtank2203.medium.com/deploying-machine-learning-model-on-docker-366490347417?source=post_page-----366490347417----------------------------------- | CC-MAIN-2022-05 | refinedweb | 357 | 58.69 |
Opened 3 years ago
Closed 3 years ago
Last modified 3 years ago
#22337 closed Bug (fixed)
makemigrations not working when Field takes FileSystemStorage parameter
Description
FileSystemStorage is not serializable. Because of this running
./manage.py makemigrations for an app that contains fields that use such storage results in an exception:
ValueError: Cannot serialize: <django.core.files.storage.FileSystemStorage object at 0x109c02310>
Background:
When
manage.py makemigrations is run on a new app the
deconstruct method is invoked for all fields. According to the django documentation every value in the deconstructed
kwargs dictionary "should itself be serializable". However, this is not the case when one passes a
FileSystemStorage object as
storage parameter to the
FileField constructor. The
deconstruct method of the
FileFIeld class adds the
FileSystemStorage storage object to
kwargs without it being serializable.
Fix:
I attach a patch that adds a
deconstruct method to the
django.core.files.storage.FileSystemStorage class.
Reproducing
The problem can be reproduced by starting a new app using the following models.py file and running
./manage.py makemigrations <appname>
from django.db import models from django.core.files.storage import FileSystemStorage class MyModel(models.Model): myfile = models.FileField('File', upload_to='myfiles', storage=FileSystemStorage(location='/tmp'))
Attachments (2)
Change History (15)
comment:1 Changed 3 years ago by
comment:2 Changed 3 years ago by
Changed 3 years ago by
Add deconstruct method to django.core.files.storage.FileSystemStorage
Changed 3 years ago by
Tests for FileSystemStorage decode method
comment:3 Changed 3 years ago by
I modified the original patch slightly. The
decode method now avoids adding items to kwargs where the value is the default value. Furthermore I added some test code to
tests/file_storage/tests.py in the second patch.
comment:4 Changed 3 years ago by
comment:5 Changed 3 years ago by
I propose that the storage should be omitted from
FileField.deconstruct(). It's an optional kwarg and it has no impact on the database. Changes in custom storages could potentially break past migrations. In south the various states of each model were stored as dicts of strings so that changes in python code could not break schema. I cover this in more detail in several related tickets: #22373, #22351, and #22436.
comment:6 Changed 3 years ago by
comment:7 Changed 3 years ago by
+1 on the excluding storage from FileField.deconstruct as chriscauley proposes. This affects migrations with custom storages such as S3BotoStorage from django-storages. Unless there is an expectation that all storage backends will implement deconstruct (to no benefit from what I can tell, as it does not actually affect the migration), it seems like an unnecessary burden on updating projects.
comment:8 Changed 3 years ago by
Why not using
®deconstructible from
django.utils.deconstruct? That should work with most subclasses of storage without extra efforts from their authors'.
Regarding skipping storage in
deconstruct(), that means
FileField wouldn't work in
RunPython operation, I'd rather keep the fake ORM as capable as possible.
comment:9 Changed 3 years ago by
Yes, we have to keep all parameters on fields, even if they're not database-affecting; the RunPython method still sees those fields and uses them from Python.
I'll fix this now using @deconstructible, as Loic is correct that it's safe to use here.
The patch looks sensible and it does seem to fix the issue but it needs some tests too.
Thanks. | https://code.djangoproject.com/ticket/22337 | CC-MAIN-2017-30 | refinedweb | 573 | 56.35 |
RationalWiki:Saloon bar/Archive118
Contents
- 1 To Train Up a Child
- 2 Question for the veggies
- 3 Rights of Man - grammar
- 4 Spam
- 5 Obama the traitor honors mean old indian who killed whites (Fox)
- 6 Idea of the decade
- 7 Organizing a sports competition that's ostensibly about bringing the peoples of the world together in the spirit of brotherhood and fair competition...
- 8 The Great Recession and the death of manufacturing
- 9 Should we capitalize "internet?"
- 10 What the fuck has Obama done so far?
- 11 You're driving down the road late at night...
- 12 1421 exposed!
- 13 Chillis!
- 14 Dear Potheads...
- 15 Very Special Epsiode
- 16 Job Hunt Update
- 17 A request for a favor.
- 18 Project Blue beam
- 19 Links in long articles
- 20 Posting nudity/pornography on other's user talk pages
- 21 Priest Church employee. Pedophile. Lather, rinse, repeat.
- 22 Richard Dawkins - Beware the Believers (Expelled promo)
- 23 Any Haitians in the house?
- 24 The poor get poorer and the rich get pumped full of Vitamin C
- 25 Yes or No on RW-Tan.
- 26 The difference between policy and enforcement (or, crap political arguments)
- 27 Crossword help
- 28 The evolution of Republicans
- 29 General site news
- 30 Fat too much time on their hands
- 31 Hiding comments in "recent changes" view
- 32 News?
- 33 Stoned lemurs
- 34 All the woo you want
- 35 The Conspiracy Files
- 36 Something I stumbled upon in my readings of stuff
- 37 Hurricane freakin Irene
- 38 Gay Nazis
- 39 Holy shit, Obama makes a good advisory appointment!
- 40 Links to hate sites
To Train Up a Child[edit]
NZ looks to censor and remove the book from sale which is probably good considering the vicious child abuse problem NZ has (it's often considered NZ's "dark secret") but it puts me in a bit of a position because I don't believe any books should be censored. Thoughts anyone? Abuse? Sexual innuendo? Aceof Spades 23:02, 23 August 2011 (UTC)
- You can 'abuse' me... with fuzzy handcuffs. Ah no, but seriously I think it's a fucking great idea. I don't advocate removing the book, more like putting it in the hands of psychologists, therapists, other such people who can read it and understand what sort of people and what sort of trauma they'll be dealing with. It can be a learning tool... just not the kind it was created for. HollowWorld (talk) 23:10, 23 August 2011 (UTC)
- EC Not too familiar with the book, and while I'm against censorship, I'm also for appropriate places and times (which is why I don't expect the local library to carry Hustler); what's this book on about that some people think Kiwis shouldn't read it? Also, fuck you, toad. Also, I'm wet for you right now. B♭maj7 So let us be loving, hopeful and optimistic. And we’ll change the world. 23:13, 23 August 2011 (UTC)
- It's best to fight lies with the truth, and not censorship.--User:Brxbrx/sig 23:17, 23 August 2011 (UTC)
- It advocates raising children the "biblical way". How to administer proper discipline and it has been implicated in a couple of murders. In NZ we have a serious child abuse problem so its not the sort of thing we want on the shelves. But, that said, it shouldn't be banned. Aceof Spades 23:22, 23 August 2011 (UTC)
Sounds like banning the book is deflecting attention away from the real issues here. B♭maj7 So let us be loving, hopeful and optimistic. And we’ll change the world. 23:29, 23 August 2011 (UTC)
- No it's not that - the book is more seen as "one of things we don't need considering the problem we have". Aceof Spades 23:32, 23 August 2011 (UTC)
- I was thinking along the lines of "we have this problem. I know, let's blame this book, instead of addressing social issues." Anyway, yeah, can't see a case for banning it. B♭maj7 So let us be loving, hopeful and optimistic. And we’ll change the world. 23:36, 23 August 2011 (UTC)
- Its been a social policy nightmare in NZ for many years. But I am more concerned about AD's beer drinking habits. Aceof Spades 23:41, 23 August 2011 (UTC)
- Tui is cheap!--talk 00:23, 24 August 2011 (UTC)
- Wouldn't banning the book just make it more legitimate in the eyes of the type of people who would follow the advice within anyway? I mean, it's not like you can *actually* ban a book these days. At the most it'd make it slightly harder to obtain, but in reality it'd just be a symbolic gesture. Within hours there'd be scans/full text of it on the internet. X Stickman (talk) 00:24, 24 August 2011 (UTC)
- I do not think it should be banned, because (1) sunlight is the best disinfectant, and (2) I think the book just appeals to the kind of wretch who is inclined to child abuse anyway. ListenerXTalkerX 03:29, 24 August 2011 (UTC)
- Bleach is the best disinfectant. I say the book should be bleached. ONE / TALK 08:12, 24 August 2011 (UTC)
- Sunlight, bleach, fire, sulfuric acid, are weak dilutions; the surest disinfectant is time. Markedc 64.28.251.41 (talk) 16:04, 25 August 2011 (UTC)
Question for the veggies[edit]
I have friends coming over on Saturday for a meal, and one is a veggie which means basically all the food (well, bar one dish) will have to be veggie, but I'm struggling for a good indian veggie starter. I thought maybe I could use some of the Quorn fillets and do them tandoori style, but I have no idea if that would actually work or not. Any vegetarians here who use the Quorn fillets often and can tell me if it would work OK? CrundyTalk nerdy to me 14:52, 24 August 2011 (UTC)
- Oh, maybe I should learn to JFGI. CrundyTalk nerdy to me 14:58, 24 August 2011 (UTC)
- tell 'em to bring sarnies. Pippa (talk) 15:00, 24 August 2011 (UTC)
- Veggies get quite arsy if you invite them for dinner and don't throw out every bit of animal-based food in the house, hide all leather clothing / furniture, and stop buggering your cat. CrundyTalk nerdy to me 15:10, 24 August 2011 (UTC)
- Not all. Some. But when you do get one like that you really want to just punch them in the face. There are a few I know who were in an uproar about an "abuse of trust" because chips were being fried in the same fryer that some chicken samosas were fried in a few days previously. I mean, get a grip, that's like homeopathic meat. ADK...I'll redeem your centrifuge! 16:49, 24 August 2011 (UTC)
- Of course if you use beef dripping in your fryer they may have a valid point. ГенгисRationalWiki GOLD member 17:00, 24 August 2011 (UTC)
I think Crundy needs to meet a better class of vegetarian; it's one thing for me to say "no, thanks, I'll just have the salad and some of the rice," which is what I do when in somebody else's home--it's another to rag on somebody for their food choices. The Qurn looks like it might go nice in a tandoori sort of thing, yes. I would eat it. B♭maj7 So let us be loving, hopeful and optimistic. And we’ll change the world. 17:07, 24 August 2011 (UTC)
- but I'm struggling for a good indian veggie starter - Crundy needs a better Indian cookbook. I'm a committed carnivore and but I never miss the meat when I'm eating authentic Indian. I mean, apart from veg samosas and Onion Bahjis (sp?) how about some puris with an interesting filling. The choice is nigh on endless. Jack Hughes (talk) 17:51, 24 August 2011 (UTC)
- As a vegetarian who loves Indian food (and who's not a dick about it) I have to agree with Jack. At least when eating out I can always find tons of good appetizers. Pakoras or samosas are some of my favorites. DickTurpis (talk) 18:17, 24 August 2011 (UTC)
- Veggie here too, we're not evil you know, even the dickish ones are dickish because of their compassion. Not an excuse to be rude, of course, but they're just misguided. FairyCupcake (talk) 23:08, 24 August 2011 (UTC)
- I would advise against the Quorn. Most of my veggie friends are divided on fake meat things, with some liking the texture of TVP in food and others reviling the chunkiness of it. I myself don't eat much of any meat substitute things, because it just reminds me of what I'm not eating in all the worst ways. The best veggie patties, to me, are not the ones that try to imitate beef but simply replace it with something different yet delicious in its own way.
- That said, Quorn would "work" okay for tandoori, if you do decide to go that route anyway. I have actually had a similar sort of thing at an Indian restaurant, and they also have it at some Loving Hut franchises.--talk 00:01, 25 August 2011 (UTC)
- Heh, my friend isn't one of "those" veggies. I was making a joke. It was my decision to make everything as meat free as possible. Re other starters, I dislike onion bhagies, and the mother-in-law never taught me how to make proper "bhajia" (diced mixed veg made up with gram flour and spices and then fried) which are awesome. I do like pakoras, but in general I get banned from any kind of deep frying because tehwife hates the lingering smell. I did think of doing puris, but (1) I can't think of a nice veggie filling, and (2) I'm only used to making "breakfast puri" which are very small and puff up like UFOs. Oh @AD: I'm pretty sure he likes quorn because at a BBQ a few years ago he made a quorn chilli con carne (which was actually really nice). I'll give the quorn a go and report back :) CrundyTalk nerdy to me 08:08, 25 August 2011 (UTC)
Rights of Man - grammar[edit]
dumb question, but i'm doing one of those "is this plural or singular" issues. The sentence - The old testament is troubling for those who think that the Rights of Man is important. -- even though "rights" is plural, it's a single document, so the copula should be singular, right? En attendant Godot 17:16, 24 August 2011 (UTC)
- Go with human rights. Also, it's "are" for both cases.--User:Brxbrx/sig 17:26, 24 August 2011 (UTC)
- Rights of Man is a French document (formally, Déclaration des droits de l'Homme et du Citoyen), which is why I used italics in it. And why i said it's a single document. En attendant Godot 17:29, 24 August 2011 (UTC)
- If talking about the document it's singular (obviously) if talking about "rights" then it's plural (equally obviously) Pippa (talk) 17:33, 24 August 2011 (UTC)
- The Rights of Man is a document about the rights of man which are undeniable. Pippa (talk) 17:38, 24 August 2011 (UTC)
- I guess i knew that should be correct, but each time i wrote it, I would change it cause it sounded wrong. I do that with sports teams, too. ;-)En attendant Godot 17:39, 24 August 2011 (UTC)
- Yeah, being French, I know at least a little bit about the French Revolution. But why pull this one out? Say human rights. The rights of all humans. This old manifesto ignores women, and human rights encompasses all of its contents anyways.--User:Brxbrx/sig 17:58, 24 August 2011 (UTC)
- Sports teams are an exception here, I think -- nobody would say "The Yankees has signed a new pitcher", it sounds silly. Stranger still are teams with a singular noun form like the Miami Heat: "The Heat has won five straight games"? Still odd to me (I prefer "have won"), but better than "The Yankees has won five straight." I think there's a Safire column about this somewhere. --Benod (talk) 18:01, 24 August 2011 (UTC)
- Some words are ambiguous or context-dependent though. Words like "headquarters" and "government". Depends on whether you conceptualise them as a group or a unit.--BobSpring is sprung! 18:11, 24 August 2011 (UTC)
- Oft-times we omit words which can help sort these things out. When you say "the Rights of Man is important" you are really saying that "the Rights of Man is [an] important [book]" or "the Yankees [team] has won five straight games" but "the Yankees [players] are great sportsmen". ГенгисRationalWiki GOLD member 18:38, 24 August 2011 (UTC)
- Yeah, when it ever sounds really wrong, I just add the qualifier. But then it messes with my style. ;-) As for why "The Rights of Man" specifically, cause it is one of the single most ground breaking texts we have, (along with some declarations of independence, maybe some constitutions and bills of rights), etc. that really puts into action, the ideas of Enlightenment. Those ideas, and that set of documents are the first time we are able to point to something and say "human rights matters". Up until then, Human rights was only important if/when someone bothered to care that given day. And why DdH and not the Bill of Rights? It's more expansive, more general and probably more influential world wide.En attendant Godot 18:48, 24 August 2011 (UTC)
- GK just made me more confused about sports teams. "The Yankess have won 10 straight games". BUT "The Yankees defines itself as one of the oldest franchises in sports". (maybe you'd only see that if they add the qualifer "itself", but you dont' say "themselves". heh. ah, grammar, how i loathe/love thee.En attendant Godot 18:51, 24 August 2011 (UTC)
- At the risk of repeating myself it depends on whether you are conceptualising it (or them) as a unit or a group. Some entities can be conceptualised as one or the other depending on the context.--BobSpring is sprung! 18:58, 24 August 2011 (UTC)
- Thinking about the case in question, "The Rights of Man", I'm finding it difficult to think of a context where one wouldn't be referring to the name of the book - in which case it would be singular. As been mentioned above, we would probably say "human rights" if were were speaking about the rights themselves.--BobSpring is sprung! 19:03, 24 August 2011 (UTC)
- Rights of Man is a book by Thomas Paine.
- Declaration of the Rights of Man and of the Citizen is a document of the French Revolution.
- The Rights of Man is a fictional ship, named after the book by Paine, from which Billy Budd was pressed in 1797.
- The Rights of Man is a hornpipe in E minor which is the first thing I think of when hearing the phrase "rights of man." Oddly enough, the Fiddler's Companion link leads to some historical perspective on the eponymous book and other contemporary writings. Sprocket J Cogswell (talk) 19:22, 24 August 2011 (UTC)
- Then here's the tricky bit, Cogs: how do you refer to these as a group? "The Rights of Mans"?--talk 00:04, 25 August 2011 (UTC)
- Why would anyone in their right minds want to refer to such a group? Certainly not me. Rightses of Man? Sprocket J Cogswell (talk) 01:25, 25 August 2011 (UTC)
- I just confirmed with a composition postgrad that no possible construction exists. Just a gap in the language because no one ever needs to say it.--talk 04:04, 25 August 2011 (UTC)
- There are three books on the table. Three Rights of Mans. I use "peoples" a lot, which makes people think I'm strange... a plural plural, but in teh context of native american peoples, it's correct.En attendant Godot 04:08, 25 August 2011 (UTC)
- The difference, I think, is that "people" is the plural of "persons," when it comes to talking about individual humans, but in another sense it is singular when it comes to talking about grouped sets of those humans. The two uses of "people" have a different functional definition: one is a generic group of humans and the other is a specific group of humans organized around a social or political framework. Even further, "peoples" operates on existing and accepted conventions. That's in distinction to the proper noun "Rights of Man", which is a singular label containing both a plural label modified by a singular label, and the various forms of which cannot be easily pluralized under traditional conventions. That's why we have to use italics to make the distinction - Rights of Mans works on a read, but Rights of Mans looks like a different book unless you have already been working through this discussion. Another complication is the audibility factor - we're inclined to write things in a way that is functional when spoken, which is why people who are feeling freed from the 18th century rules on plural possessive are shifting towards "Jesus's" rather than "Jesus'." When I, at least, say "Rights of Mans," I actually say "Rights of Manses," which works even less. That might just be me, of course, but it introduces an additional disconnect to make it difficult to naturally pluralize Rights of Man as Rights of Mans, even if would practically function that way if it was forced.
- tl;dr: You can force it but there are reasons no convention dictates a solution.--talk 05:05, 25 August 2011 (UTC)
There are some hoops not worth jumping through. In casual speech, nonverbal cues are available if needed. In careful writing, strategic parsimonious circumlocution can help keep it clear. Sprocket J Cogswell (talk) 07:08, 25 August 2011 (UTC)
- If you really wanted to force a plural wouldn't it be "Rights of Men"? But the book is not about "man" but (in an age before political correctness) "mankind".
- Furthermore, "Rights of Man" is is grammatically equivalent to "Men's Rights" - a concept which now has different meaning. All the more reason to simply refer to "Human rights". --BobSpring is sprung! 08:32, 25 August 2011 (UTC)
- There are three books on the table. Three Rights of Mans. Nah, I don't think you can pluralise the name of the book. You would simply say "Three copies of...". Otherwise you get problems:
- Three On the Origin of Speciess
- Three The Infinite Books
- Three A Brief History of Times
- I'm not buying it. ONE / TALK 08:50, 25 August 2011 (UTC)
- I have no problem with "Rights of Mans" when talking about several editions or copies of the book, yet acknowledge it could confuse someone not up to speed on the context. Pedantic opinion will differ, no doubt.
- I would not use that construction in writing, nor in front of someone with a limited grasp of playful English. Sprocket J Cogswell (talk) 14:02, 25 August 2011 (UTC)
- Man here is a generic singular . "Rights of men" suggests rights that all men possess collectively, "rights of man" suggests rights that each man possesses individually. I prefer the term human to man, since it is indisputably gender neutral; and while rights of humans sounds right, rights of human sounds a bit odd (makes me think of the rights of User:Human, such as the right to control his own talk page which he has recently been so pettily denied). I think the answer is, that the generic singular is no longer very productive in English, so it sounds fine in set phrases established back when it was productive, but attempting to use it in newer formulations just doesn't sound right. (((Zack Martin)))™ 10:16, 25 August 2011 (UTC)
Spam[edit]
For the first time ever, I checked my Gmail "Spam" folder. Horrors: Loads of not spam in there including some I'd been waiting for. Pippa (talk) 16:26, 25 August 2011 (UTC)
- I pretty much never have anything fall into my junk folder. I also keep an eye on it regularly for the laughs. So many messages pretending to be from Facebook. ADK...I'll model your ax murderer! 16:31, 25 August 2011 (UTC)
- Damn, now i'm singing that song.En attendant Godot 16:34, 25 August 2011 (UTC)
- Pippa (talk) 16:40, 25 August 2011 (UTC)For the unaware.
- Did someone say... spam? HollowWorld (talk) 16:56, 25 August 2011 (UTC)
- SPAMSPAMSPAMSPAMSPAMBACON--Dumpling (talk) 17:10, 25 August 2011 (UTC)
- Scrapple. The unbusinesslikeman of business 21:14, 25 August 2011 (UTC)
Obama the traitor honors mean old indian who killed whites (Fox)[edit]
You all know I have a particular tie to things Native, especially Lakota, so this article inflamed me. Please remember, sitting bull was killed during a prayer services. A bunch of old men (all teh young bucks were dead, too dangerous too keep on the rez), women and of course children. Slaughtered at Wounded Knee by an ancient "machine gun" type weapon... that ripped though the camp with devastating power. The Ghost Shirt dancers were praying. Just praying. But fox thinks that attention should not be given to them, or Sitting Bull. --En attendant Godot 03:42, 26 August 2011 (UTC)
- eh, what great outrage. what old news. damn you facebook for recycling news and tricking me into thinking they are new articles. evil. (hum, i could of course, just read teh DATE.... but thatwould be silly now).En attendant Godot 03:54, 26 August 2011 (UTC)
- "Comments for this page are closed." Heh, indeed. Nebuchadnezzar (talk) 04:07, 26 August 2011 (UTC)
Idea of the decade[edit]
Via Facebook:
If academia fails, I want to open a hairdressers called the Socially Awkward Salon, which will have a sign promising no inane questions or unnecessary talking. There'll be books, comics and maybe even a games console below the mirror. And you could get a hair cut without having to interact with anyone (except to clarify the haircut). Hell, maybe even a poster of various standard hair cuts, like a menu, so you could go "that one".
ADK...I'll terrorize your lighting! 13:34, 21 August 2011 (UTC)
- What's a haircut? Тytalk 14:57, 21 August 2011 (UTC)
- Fucking cutters always asking me questions about my life...--User:Brxbrx/sig 17:13, 21 August 2011 (UTC)
Something about... I have no fucking clue[edit]
- wow..... you gals and guys are good.
- If I may interject at this point..... as I have slowly found myself descending to the bottom of this page I have been
- intrinsically enveloped in the wide thought 'pattern's of each person talking. Listening as intently as I could I found myself
- sinking ever so deeply into thought ..... I think I found my brain!
- As we process the 'qualia' of cause vs effect or experience either physical or emotional sensations, the 'mind' is but the
- body/physical brain 'We' use to process all of the 'intricate stuff' that makes up our Universes. Therefore 'We' as the energy,
- that is divine/spiritual wasteland of ever-living, ongoing, never-dying space, 'We' guide these experiences/qualia ..... yes???
- Splinter groups of religion after man-made religious folklore ebb and flow across the seas of time, expressing individual ideas
- that are of want to be followed, whether or not there may or may not be a truth to that particular idea. Us/We as the One who
- exists in this realm of choice drive this body/physical 'vehicle' for lack of a better word/analogy.
- But as to tasty ponies, since I could only suppose that the pony would be more tender than a horse, I would only wonder if a
- pinto would be sweeter than a mustang, ( oil not included for digestion ).....
- Our minds are just that ..... the keeper of the knowledge that 'We' throw into it and hopefully something sticks! ( I really
- hope that sometimes ).....
- Evil is the product of Abuses..... our bodily illnesses are of the bodies life breakdown and maladies that affect all bodies at
- one juncture or another. If 'One' was not brainwashed from childhood with any sort of abuses then the evil that comes from
- those abused would therefore not exist, quite literally. As for a 'Goddess/God' (lets be well rounded here and say that you
- can't have a female without a male and visa versa ) - Whatever made or created all of this had one thing in mind..... to give
- Us something so Beautifully coordinated that when we look up through the universal Eye we can see forever.
- And as far as cutting One's hair - I prefer to not even walk near a person trained or not who bears a tool remotely shaped like
- a pair of scissors...... for I feel that nothing good could ever come of it ..... So if I were to see something of that type
- floating out in cyberspace ..... I personally would hit the delete key and take the train North!!! It just reminds me of that
- commercial where a piece of dust lives between the keys of K and L and ends up spending to much time at the Space Bar!
- This is a really interesting site. Thanks for letting a Canadian into your lair, it really is very appreciated. Oh, last note:
- don't worry - we got your backs up here ..... although I don't know about our feds either at this point - they're not exactly
- the brightest lightbulbs in the package. TLOS 10:20, 21 August 2011 (UTC)
- I'm sure that was really very witty, but without my Concerta it's just tl;dr--User:Brxbrx/sig 18:36, 21 August 2011 (UTC)
- Didn't mean to be..... but what is Concerta??? and does tl;dr mean total drivel??? ..... for in that case..... ok!
- and I'm not a hair dresser so I surely won't be asking you questions about your life any time soon either. TLOS 11:51, 21 August 2011
- Concerta is nickname for methylphenidate, a drug taken against the effects of ADHD. "tl:dr" is internet slang for "too long; didn't read". So both together make sense... --★uːʤɱ atheist 19:45, 21 August 2011 (UTC)
- Well, sorry but first I wasn't writing it to you - it was to the page - second - if you are refering to a malady of ADHD
- from a simple little paragraph written in expose - pretty bad - and third if you've really got nothing nice to say, why
- bother writing anything - you don't know who I am and are certainly not in any position to assume - for that just makes an
- ass out of you then - doesn't it - I would certainly not assume that you where an ass - that would just make me like you-
- something I surely wouldn't want to be - but thank you for your input - I'm sure you had fun writing it. TLOS 1:23, 21 August 2011
- What I was saying is that I'm out of my medication and that makes it a chore to read long posts.--User:Brxbrx/sig 20:33, 21 August 2011 (UTC)
- Then please accept my misunderstanding of what you said - I felt as if I'd stepped on toes here and there is no way I
- would even consider doing that. I came across this 'community' very accidentally - I was just here to comment on a page I
- had come across - and then found myself writing introductory excerpts - sorry for the length - I'm a programmer not a
- good writer and this is my first community - I'll do better in future. TLOS 1:42, 21 August 2011
- Hello TLOS, I read your post, I understand what you are saying. Hey, people here love to call me "TLDR" too ("too long didn't read"). Then again, I think yours and my "TLDR" styles are very different. Yours seems much more diffuse, like a painting... mine is sometimes more like hyperrational, hyperdetails oriented, like I'm trying to write out a mathematical theorem in prose... (((Zack Martin)))™ 10:13, 23 August 2011 (UTC)
- Everytime I read "tl;dr" without a good excuse (like "have to go to work" or "forgot to take my pills") I assume the peson has lost the debate or at least any interest, But hey, still better than people moaning about your "walls of text" (equally bull) or "rants"... --★uːʤɱ sinner 11:32, 23 August 2011 (UTC)
- If you've lost interest, the best thing to do is not respond at all. If you are still interested enough to respond, even if just to say TLDR, you haven't really lost interest. Polite responses include, (1) not responding at all, (2) That sounds interesting, sorry I don't have time to respond to it in full now, but I will later if I get the chance (even if you never get around to doing it, so long as you had some genuine possibility of doing so when you said that, you aren't being impolite or untruthful)... (((Zack Martin)))™ 11:43, 23 August 2011 (UTC)
- I'm glad UHM accepts that me running out of my medication is a valid excuse. Me and Maratrean, however, have a score to settle now...--User:Brxbrx/sig 03:28, 24 August 2011 (UTC)
- Brxbrx, that wasn't really directed at you. If someone says Too long to read because I haven't taken my ADHD medication, I don't think there is anything impolite about that. Such a statement is too honest about your own limitations for me to consider it impolite. It is those who respond with TLDR in a dismissive tone whom I am addressing. (((Zack Martin)))™ 10:23, 25 August 2011 (UTC)
- I have no grudge with you. I was joking. It's hard for me to get that across at times.--User:Brxbrx/sig 20:05, 26 August 2011 (UTC)
Organizing a sports competition that's ostensibly about bringing the peoples of the world together in the spirit of brotherhood and fair competition...[edit]
...while a billion of those people are in the middle of a month-long fast. Discuss. B♭maj7 Define "talk." Define "page." 19:48, 21 August 2011 (UTC)
- Ramadan moves around quite a bit, the Olympics less so. And as it lasts a month, if you were to pick a random time in the year to do something you have a 1 in 12 chance of doing it when Muslims are fasting. So. Fucking. What. If they want to put religious fasting above the Olympics that would be their problem, not anyone else's. They made a choice, well we assume they made a free informed choice, to follow a religion and actively hold it above absolutely everything else in their lives so they should deal with it. ADK...I'll bescumber your deity of personal preference! 02:23, 22 August 2011 (UTC)
- Besides, a billion people are already going hungry for that month, and the month after, and the month after that, and not out of choice or religious compulsion. ADK...I'll voice your rope! 02:26, 22 August 2011 (UTC)
- That second argument is weak -- Famine-afflicted states make sure that their athletes (as well as just about everyone living in the cities, food experts call it "urban bias.") As for the first part of your argument, I find it cold. Belief exists. Belief matters to actual, living, breathing people, and dismissing it with a "fuck them" is rarely productive. Moving the Olympics forward by a couple of weeks would have allowed the games to do in deed what they say they're all about. B♭maj7 Define "talk." Define "page." 02:33, 22 August 2011 (UTC)
- Unless countries are forcing their athletes to adhere to the fast during the games, it must be regarded as a personal decision on their part; they need to consider whether they believe more in fasting or in winning a medal at the games, then make their decision and face the consequences. ListenerXTalkerX 02:47, 22 August 2011 (UTC)
- It's putting people in conflict between two really important values, and it's the organizers thumbing their noses at a billion or so people worldwide, no small number of whom, coincidentally, lived under the colonial rule of the country putting the games on (its the local organizers and not the IOC that fixes the dates). It runs totally contradictory to the values espoused by the Olympic movement (for whatever those are worth) B♭maj7 Define "talk." Define "page." 02:52, 22 August 2011 (UTC)
- I assume that the argument that "No season, week, month or day is any more relevant than any other" (as one of our anti-theistic editors put it a while back) would carry no weight with you in this question?
- Also, do you know if any of the Muslim countries that are participating in these games, such as Iran, have lodged any complaints about this matter? ListenerXTalkerX 04:00, 22 August 2011 (UTC)
- I don't know one way or the other, and maybe therefore, I'm making a tempest in a teapot. I wonder what this guy might say. Amazing how humans can never quite make up their minds, eh? B♭maj7 Define "talk." Define "page." 04:05, 22 August 2011 (UTC)
- I am of the "who gives a flying fuck" persuasion. Aceof Spades 04:10, 22 August 2011 (UTC)
- There are always other things going on, in the world, in a particular country, and in people's individual lives. What I see from Islamic voices is mostly a fairly moderate position, that it's unfortunate but can't really be helped, the Summer Games is held in the summer, and Ramadan moves so that an overlap is very nearly inevitable in some years (there was literally no way to prevent the 2012 Games overlapping with at least some interpretations of Ramadan, without violating the IOC's rules on timing of the summer games). The requirement to fast is not strong enough to utterly compel a moderate Muslim, a person could argue to himself or herself that representing their country and Islam to the world was of overriding importance, and then fast after they cease competition for the required time. Only the strictest observers would find this unacceptable (the same kind of people who would think the Olympic swimming events were inherently immodest and therefore no Muslim should watch let alone participate - fundies are killjoys everywhere). 82.69.171.94 (talk) 15:24, 22 August 2011 (UTC)
- Also I saw some quite positive thinking, suggesting that the inconvenience itself is a positive attribute, that Muslims benefit from living in societies that tolerate their belief but don't rearrange everything to make it as convenient as possible (as almost inevitably happens in a majority Muslim country, secular state or not). The idea here will be familiar to Jews of a certain stripe, that confronting obstacles that are unique to your religion reminds you how important your religion is, and not to take it for granted. Ramadan doesn't make competition impossible just more difficult. 82.69.171.94 (talk) 15:33, 22 August 2011 (UTC)
- This has been an issue that Muslim sports men and women have had to deal with for ages. Some footballers take a pragmatic approach to fasting, in that they'll eat normally the day before and on a match day and then make up the fasting time at a later date. See here. Bondurant (talk) 12:40, 23 August 2011 (UTC)
- Cold? Really? It's plain fact. We could bung the Olympics at any point in the year and have a 1 in 12 chance of bumping into Ramadan. The world doesn't stop for a month just because some religion demands people stop eating. The Earth turns, the sun fuses hydrogen, and people still eat, excrete and watch TV. The universe is in the same camp as Ace here, it doesn't give a flying fuck. Adhering to a religion is a personal choice. Millions of people go out each Friday night after work, get absolutely slammed and wake up on a Saturday afternoon with a mental hangover, should we move the Olympics away from the weekend so those people also have a chance to compete without it conflicting with their personal choices, desires and traditions? No. That would be crazy. But because it's religion it's suddenly special and magical and we can't even take a dump without offending one of them. To put it in polite terms: Fuck. That. Shit. ADK...I'll graphitize your equestrian! 13:36, 23 August 2011 (UTC)
- Sorry, made a miscalculation there, as the complaint is that the Olympics is oraganised in the weeks after Ramadan rather than during it, it's actually closer to 1 in 10 or 1 in 9 chance of it clashing, not 1 in 12. ADK...I'll bescumber your embryo! 13:39, 23 August 2011 (UTC)
Would it be utterly tasteless to suggest that in light of their judicial dismembering the likes of Saudi Arabia could send a team to the Paralympics which are post Ramadan? Also it should not be forgotten that Eric Liddell did not run in his best event at the 1924 Olympics because as a devout Christian he refused to race on a Sunday. ГенгисRationalWiki GOLD member 16:19, 23 August 2011 (UTC)
- Seems to me like looking for reasons to be offended. If you don't like the rules, don't play the game. See also Johnathon Edwards who refused to jump on Sundays. (He's now atheist) Pippa (talk) 16:36, 23 August 2011 (UTC)
- Given that Muslims are involved, this could become an explosive controversy LOL! 74.89.192.173 (talk) 03:30, 26 August 2011 (UTC)
- Doesn't 'LOL' mean something is supposed to be funny? AMassiveGay (talk) 03:42, 26 August 2011 (UTC)
- Yeah, it does, and it IS funny, at least, in my opinion. And the opinions of some of my friends, who love these kinds of politically-incorrect jokes. 74.89.192.173 (talk) 16:04, 26 August 2011 (UTC)
The Great Recession and the death of manufacturing[edit]
I came across this IMF paper recently, which ties together income inequality, debt, and financial instability. It's an interesting analysis of the spike in all three that occurred in the 1920s and 2000s. Another similarity not mentioned in the paper is the erosion of the economic base over these periods -- in the 1920s it was agriculture that was wiped out (and even more so in the '30s) and manufacturing in the 2000s (as well as years prior). Interesting to speculate that there may be a single underlying cause of these economic disasters, but it's always best to remember Mencken's words: "For every complex problem, there is an answer that is clear, simple--and wrong." Nebuchadnezzar (talk) 21:15, 23 August 2011 (UTC)
- These industries aren't dead, they were just relegated to developing and undeveloped nations. Faced with unions and the labor rights movement in general, the bourgeoisie simply moved the proletariat to places where they weren't organized. Now, the US and other developed countries are mostly just masses of petit bourgeois unaware or uncaring of the fact that while they enjoy their easy lives the very shirts of off their back were made under grueling conditions by foreign laborers that can only be thankful for the few dollars a week they make, because protesting the miserable conditions would simply mean that production would move to another location free from regulation or labor syndicates. The classes are still quite real, we just don't see it because the lower classes are far away and out of sight.--User:Brxbrx/sig 21:37, 23 August 2011 (UTC)
- And our rivers get less of the dyestuff and other nastiness, while the airborne pollution has a chance to spread out or precipitate before it reaches our shores. Even less reason to keep the EPA bureaucracy in business. I remember the eighties, when the suits got all breathless about the service economy. Yeah, right, we can all make our living by shining one another's shoes. Sprocket J Cogswell (talk) 22:00, 23 August 2011 (UTC)
- Well it doesn't matter that our rivers get less dye stuff because we're going to be shoving a giant fucking oil pipeline through, from the tar sands in Alberta to the Gulf of Mexico... which is apparently bad news for the environment. Who knew? HollowWorld (talk) 22:08, 23 August 2011 (UTC)
- I should have added the qualifier "American." Globalization is one reason for the erosion of manufacturing, though I imagine technology played a role as well. And both of these changes coincide with a shift in business cycle and employment trends. Nebuchadnezzar (talk) 22:19, 23 August 2011 (UTC)
- Will there ever be a time when even the robots are getting laid off? It's hard to get a job where I live, because employers are firing, or 'hardening up' by keeping the employees they have now. Apparently, everyone here thinks we're at the tip of a roller coaster and we're about to plummet hard. HollowWorld (talk) 22:24, 23 August 2011 (UTC)
- This is the future! Nebuchadnezzar (talk) 22:39, 23 August 2011 (UTC)
- Offshoring has played some role in "de-industrialization," but automation has made the remaining industry much less visible. I read somewhere that Sheffield is now producing more steel than at any other time in its history, except that the industry is now automated, so large numbers of steel workers are not needed anymore. ListenerXTalkerX 03:32, 24 August 2011 (UTC)
- Yup. The problem is that automation destroys the low skill / low responsibility jobs because those are the jobs we most readily get machines to do. The economics works out OK (socialism lets you re-arrange the numbers so that some of the benefits of the automation are siphoned off to feed, house and clothe those whose jobs were destroyed) but the psychology is tricky. We have been teaching people that working for a living is a virtue for a long time. If all the jobs a particular person can do no longer exist, how are they to achieve this virtue? In my informal surveys although a fair number of people had no trouble with the idea of spending the rest of their life living modestly with no fear of poverty but no job, many more felt they would have to have some kind of job to give structure to their lives. There is definitely potential for violence as a society tries to adjust to permanent low employment -- both from those who want jobs and don't have them, and from those who have jobs and resent the fact that other people live comfortably without -- and it is perhaps for this reason that governments around the world have tried hard since the industrial revolution to keep people employed. 82.69.171.94 (talk) 13:37, 24 August 2011 (UTC)
- I doubt we will ever run out of work to do. --145.94.77.43 (talk) 22:58, 25 August 2011 (UTC)
- We don't have to run out of work, we just have to run out of work that some people can do. For every firefighter, tax accountant and train driver, there are hundreds of people filling sandwiches and sewing T-shirts. One day a robot derived from work on Robocup Rescue might replace one firefighter per shift, the government might simplify tax regulations, and a few more of the world's railways might switch to driverless operation, but meanwhile all the sandwich fillers can lose their jobs in a single day when a machine does their job for half the money. The sandwich fillers probably don't have the physical stamina of the firefighter, the head for figures of the accountant, or the ability to work alone and low risk health record of the train driver so they won't be competing for any of those jobs, that's why they became sandwich fillers in the first place. 82.69.171.94 (talk) 08:22, 26 August 2011 (UTC)
Should we capitalize "internet?"[edit]
A number of publications do not capitalize the noun "internet." I personally think the word's headed for common noun usage. Blue Talk 00:25, 24 August 2011 (UTC)
- Are you volunteering to go through the wiki and make our usage consistent? B♭maj7 So let us be loving, hopeful and optimistic. And we’ll change the world. 00:27, 24 August 2011 (UTC)
- No, by the royal "we" I meant humanity in general, not RationalWiki. Blue Talk 00:28, 24 August 2011 (UTC)
- No more than we capitalise or even hyphenate email nowadays, in my opinion. E-Mail just looks weird now. X Stickman (talk) 00:30, 24 August 2011 (UTC)
- An internet is a network of networks. The Internet is one particular internet, built on the TCP/IP protocol suite. The Internet is getting close to being the only internet, although arguably historically some other networks have qualified, such as the global X.25 network.
- I would expect in the future, with space colonisation, the current TCP/IP Internet will fragment into many separate TCP/IP internets, since the protocols that are optimal for intraplanetary use are highly non-optimal for interplanetary and interstellar use, although then I'd expect we'll probably have an interplanetary/interstellar DTN internet of TCP/IP internets.
- In conclusion the Internet is an internet, and is only one of many internets past, present and future, although it is close to being the only internet right now. (((Zack Martin)))™ 00:32, 24 August 2011 (UTC)
- Your conclusion there is a good summary of it, I agree. Generally speaking, either Internet or internet is acceptable for those reasons.--talk 02:18, 24 August 2011 (UTC)
- Uh, right; similarly, the Norwegian Nazi collaborator's name may be scribed "Vidkun quisling." ListenerXTalkerX 03:26, 24 August 2011 (UTC)
- A proper noun used to refer to that person is rather a different matter. As you probably see, it introduces confusion about the referent - "Are we speaking of the generic quisling or the actual Quisling or some hybrid?" - which is the opposite of what communication is intended to do. In contrast, the near-complete identification of the mega-internet that is the Internet with the term itself is more of a form of a "generic trademark": a specific example that comes to stand for the whole set because of its ubiquity, like xerox or kleenex. So when you say "internet" you almost always mean "Internet," and that's understood.--talk 03:42, 24 August 2011 (UTC)
- An internet, the Internet. The latter refers to a unique entity, hence is a proper noun and should be capitalized. The analogies with genericized trademarks are invalid, since there is more than one photostatic copier and more than one facial tissue. ListenerXTalkerX 03:52, 24 August 2011 (UTC)
- There is more than one Internet, as well. My school's Student Internet is a link of local computers networked together. But no one calls it the Internet or even internet, really, because that word has been so thoroughly generalized.
- I'm not a proscriptivist, so I don't hold much with the idea of rules in English. I do agree that formal use should probably prefer Internet, but functionally they are identical and both correct.--talk 04:17, 24 August 2011 (UTC)
- I think your school's network is technically an "intranet."
- I grew much more enthusiastic in my prescriptivism after a couple of semesters grading student homework. You could tell the foreigners apart from the Americans in that the former wrote legible English and the latter, well-stuffed with descriptivist dreck, often did not. ListenerXTalkerX 04:31, 24 August 2011 (UTC)
- I was thinking, of what other internets are there, or have there been, other than the TCP/IP Internet we all know and love. The global Signaling System 7 network (also known as the public telephone system) is arguably an internet (a global network of networks), although due to VoIP and other factors it is slowly merging into the TCP/IP Internet. Past examples of internets would include FidoNet, BitNet, UUCPnet, and I've already mentioned X.25. Other possible present day examples of internets include the global AMHS system used for international communication between air traffic control authorities, and MMHS networks used for communication between allied military forces (including NATO). The later two are interesting, as being networks which for security reasons are kept quite separate from the public Internet, whether or not they are running on TCP/IP. (They are both X.400 email systems, so they could run either over the TCP/IP or OSI TP stacks, not sure what they use in practice, I suspect quite possibly a combination of both.) (((Zack Martin)))™ 04:38, 24 August 2011 (UTC)
- I never realised it was supposed to be capitalised until a spell checker started shouting at me for it. Generally I think it's a bit "meh". ADK...I'll forsake your flagella! 16:44, 24 August 2011 (UTC)
- Gonna have to agree. I honestly don't give a shit. If that makes me a bad person, I don't give a shit about that either. ONE / TALK 09:05, 25 August 2011 (UTC)
- Military networks are separate from "the" Internet, but use TCP/IP. Their IPv4 version used part of the public address space, as they are far too large to use RFC1918. Most of the telephone network, including the higher layers of SS7, run over TCP/IP (e.g., IETF SIGTRAN). Howard C. Berkowitz (talk) 01:59, 27 August 2011 (UTC)
- Yes. TCP/IP military networks are a TCP/IP internet, separate from the TCP/IP Internet. Most SS7 now may well be over SIGTRAN, but that is a relatively recent development, for a long time most of the global SS7 internet did not run on top of TCP/IP. And, most current X.400 email systems (i.e. MMHS and AMHS) probably are running over TCP/IP, but I would expect many of them would have run over OSI TP instead originally. (((Zack Martin)))™ 02:02, 27 August 2011 (UTC)
What the fuck has Obama done so far?[edit]
What the fuck has Obama done so far? Dan Savage posted this. Too bad the expletive means you can't link to it in a bunch of places. Of course, I still don't like the president,
he's black he promised change but I see nothing but politics. He supported DOMA, FFS!--User:Brxbrx/sig 22:25, 24 August 2011 (UTC)
- An interesting counter-point--User:Brxbrx/sig 22:26, 24 August 2011 (UTC)
- I don't care, primary his ass! ARGLE BARGLE /Firebagger Nebuchadnezzar (talk) 22:50, 24 August 2011 (UTC)
- Wat?--User:Brxbrx/sig 23:15, 24 August 2011 (UTC)
- 47 positive things, according to the first website. I'm still meh towards him. Тytalk 23:19, 24 August 2011 (UTC)
- See here. I'm still pretty much on the Firebagger side, but the whole "Primary Obama!" and the Hamsher-Norquist thing are just fucking idiotic. Nebuchadnezzar (talk) 23:28, 24 August 2011 (UTC)
- Yeah, I'm far left of Obama, who is a moderate conservative, but even I recognize that it's about getting the best possible President in a practical sense, and that shouldn't be sacrificed to some hypothetical perfect candidate. Of the possibilities, Obama really is the best - the best just doesn't happen to be very good because it continues the extension of executive power and the security state, and only half-heartedly fights Austrian economic nonsense and the Judeo-Christian dominionist crowd.--talk 23:54, 24 August 2011 (UTC)
- Ah, you see, that's the brilliance of the Democrats -- get elected, ignore your base, then come next election, point to the GOP candidate and go "Look, do you want this batshit crazy wingnut instead?!" Wash, rinse, repeat. It helps even more now that the 'baggers will essentially primary out all the "RINOs." Also, even the wingnuts aren't insane enough for the Austrian school stuff except for a few, and crediting them with having any knowledge of economic theory is probably giving them too much credit. Nebuchadnezzar (talk) 00:46, 25 August 2011 (UTC)
- He
is blackstill collects taxes, therefore he is a douche. That's all his opponents will be thinking. Balls to what actual stuff he has managed to get through the fragmented clusterfuck that passes for government in the US. ADK...I'll admonish your noseblower! 12:27, 25 August 2011 (UTC)
- I really don't think Obama is the problem here. If he had the majorities he would do great things for America - the thing is he doesn't. And so all he can do is push has hard as he can, whch considering that he has batshit insane opponents that would go so far to shut the whole country down and then blame it on him. I mean the whole debt seiling deal made it pretty clear, you have wingnuts that make up half of the other party pushing for less spending and being basically completely unreasonable in their demands, on the other side you have a guy who really isn't an agenda pusher and neither is he equally insane. I think if I would be in his position I would have long ago quit office and tell America they can fuck up their own country if they insist so much. The man is basically caught between a rock and hard place, so even if I'm much much more left then he is (hell I'm left wing in Europe, they'd probably call me a radical socialist in the US), I try to go easy on the guy because he didn't actively create
these insane idiotsthe other side.
- I think the only thing that could help right now is a huge propaganda deal of the progressives using the same style as the Teabaggers, forcing the Teabaggers to go completely off track - basically trying to turn the othe side into raving lunatics that even in the midwest loose all grounds, until the left wing of the Republicans either splits up or strikes out their Rs and change into Ds - which would resonate in the Republicans splitting their base and the Democrats winning elections because the other side isn't in any form for contest. But then again, that isn't the style of liberals. So basically, the guy is fucked no matter what he does. --★uːʤɱ constructivist 13:57, 25 August 2011 (UTC)
- I love obama, cause i love smart, sharp people. so understand that when I say this, it's while sobbing in my cereal. He HAD the majority, for 2 years. and the party (and i too do not blame obama, i blame all of them) acted like pussies, just wallowing in their own incompetence over simple disagreements. And they wouldn't do anything if they did not have 60% and they were SO FUCKING SCARED of the voters. The republicans do something that I both despise and admire. they say, plain and simple "vote with us, or get the fuck out cause we will spend every time we have replacing your sorry ass". they threaten within their own party, and end up with a line that is all but lock step. (a few too-powerful-to-remove descenters like Olympia Snow are still in the Senate). We need to learn to do that. but we like to be so inclusive and all. ;-)En attendant Godot 14:39, 25 August 2011 (UTC)
- They had the majority, but also staggering levels of obstructionism that continue today. Despite that, they passed the Affordable Care Act, which alone makes him the most successful recent Democratic president. It's not a perfect bill, but it is pretty great and provides a good foundation for later further reforms once it (hopefully!) starts to see real results in 2014 and 2015.--talk 13:01, 26 August 2011 (UTC)
- In reference to the OP, that site has a link to a clean version, if you want to link to it somewhere with profanity restrictions. άλφαTalk 12:42, 26 August 2011 (UTC)
You're driving down the road late at night...[edit]
what would be the scariest thing to see on the side of the road? An alien, a guy in a hockey mask, or a clown?--Thanatos (talk) 05:18, 25 August 2011 (UTC)
- Ed Poor. ГенгисRationalWiki GOLD member 06:46, 25 August 2011 (UTC)
-
- We need a description of the alien.--BobSpring is sprung! 06:58, 25 August 2011 (UTC)
- I agree with Bob, it depends on the alien. The other two I can likely take.The unbusinesslikeman of business 21:12, 25 August 2011 (UTC)
- If it's a proper Stanisław Lem alien it's going to be totally incomprehensible. Like, maybe there are forty things that look like cat-sized floating pyramids, and they're painstakingly disassembling a tree into component atoms. As you slow down, six of them float over to you, project a holographic image that vaguely resembles an octopus, dematerialise the car stereo and your glasses, then vanish. If it's not stranger than any dream you've ever had, it's not a proper alien. The Clown and Mr Hockey Mask are just people, whatever they do is at least going to be largely consistent with your previous understanding of the world. 82.69.171.94 (talk) 07:54, 26 August 2011 (UTC)
- Well, there have been attractive aliens out there. Osaka Sun (talk) 06:20, 27 August 2011 (UTC)
1421 exposed![edit]
For anyone interested in Chinese history, I just happened to come across this site. Apparently some crank is claiming that China discovered the entire world in 1421. Nebuchadnezzar (talk) 07:09, 25 August 2011 (UTC)
- "For anyone interested in Chinese history"... uh, I think maybe the Chinese would be interested in Chinese history! There's 1.3 billion of them and they have thousands of years of history behind them! Or don't you think they're smart enough to be interested in history? Are those Asians getting out of control and too uppity for you? Maybe they should go back to work building railroads? You are so racist.
- This unjust accusation has been brought to you by the Department of Irrational Offense-Taking.--talk 08:58, 25 August 2011 (UTC)
- That's really old news. No offense, nebby.--User:Brxbrx/sig 15:36, 25 August 2011 (UTC)
- I must confess, however, that both books were a fairly interesting read, in a Fingerprints of the Gods way. --PsyGremlinSermā! 15:42, 25 August 2011 (UTC)
- I remember reading about 1421 in a Chinese history class I took in secondary school, and my teacher was partly "this would be really cool if it were actually true!" but mostly "this is a load of crap, but it just goes to show you that anyone can make up/distort evidence." άλφαTalk 12:44, 26 August 2011 (UTC)
Chillis![edit]
Huzah! It's that time of the year again where I buy extremely hot chillis and get my arsehole burnt off. May consider sending seeds if anyone's interested? CrundyTalk nerdy to me 12:51, 25 August 2011 (UTC)
- Arsehole seeds sound terrible.--talk 12:56, 25 August 2011 (UTC)
- We're running short of arseholes here so it would be productive to grow a few more. I'm slightly concerned that in 2 out of my 3 latest SB posts the conversation has been more about my arse than the topic in hand :-\ CrundyTalk nerdy to me 13:03, 25 August 2011 (UTC)
- Dorset Nagas at the bottom? What is it with you? BTW my own nagas from the seeds you sent me are pretty pathetic after 2 months of neglect because I was stuck on a coral reef in the middle of the Timor Sea and have only just got home. ГенгисRationalWiki GOLD member 13:31, 25 August 2011 (UTC)
- Nagas are pretty hard to grow. They seem to be obsessed with getting as tall as possible rather than making good chillis. CrundyTalk nerdy to me 14:00, 25 August 2011 (UTC)
- As a Jalapeño/Cayenne farmer, I'd love some seeds, I need something spicier. The unbusinesslikeman of business 21:13, 25 August 2011 (UTC)
- OK, I'll start drying some seeds out. CrundyTalk nerdy to me 08:11, 26 August 2011 (UTC)
Dear Potheads...[edit]
Please report to your friendly local police station.... we need your help, we really do... to solve a murder.... Trust us! Yours Sincerely, Your Friends at Victoria Police. (((Zack Martin)))™ 13:37, 25 August 2011 (UTC)
- It's a trap! CrundyTalk nerdy to me 07:58, 26 August 2011 (UTC)
Very Special Epsiode[edit]
I tucked this comment into the "moral panic" talk page, - but who knows if you all read the talk pages. heh. anyhow, do we have a page about "A Very Special Episode" under a different title? I linked to a non page, making a dreaded redline, and before i pull the link, i wanted to make sure it's not that I just mis-remembered what "a very special episode" meme is really called.En attendant Godot 16:25, 25 August 2011 (UTC)
- It's the proper phrase from what I can gather [1] [2]. Any particular reason you need it? ADK...I'll toast your guru! 16:27, 25 August 2011 (UTC)
- Don't know of it here (UK) but I don't watch much telly. Pippa (talk) 16:33, 25 August 2011 (UTC)
- I'm trying to decide if i care to write it or not. (I loath stubs more than I loath redlines), but in effect it was this tagline added to mostly comedy 1/2 type shows (family focus shows) that dealt with "serious" topics. I remember it most from Different strokes when the littlest boy was approached by a (yes, gay) man with the intent of "touching" him. Before the show started, this deep, serious and grave voice announces "Tonight's Espiode of Different strokes is a very special episode." hence the name of the meme. All the shows did it in that era. Different strokes, Facts of life, Family Ties, etc. Drinking and driving, drugs, aggressive dates (never got as far as rape or anything, but you know, he pushed you and touched your boobie, or something).En attendant Godot 16:38, 25 August 2011 (UTC)
- Totally out of fashion now. Thank fuck. ADK...I'll subvocalise your Honda! 17:34, 26 August 2011 (UTC)
Job Hunt Update[edit]
So far, so good.
Three phone interviews so far, one has led to a definite in person interview next Monday. (That could get postponed, because some of the people have property that's in the path of Hurricane Irene.) With another one, the recruiter said there would be an in person interview, but two of the people I'd interview with are on vacation, so she's setting up another phone interview with one of their engineers.
There's a third that's probably my favorite. The phone interview went very well. I'm still waiting to hear about a face-to-face for that one. This one is a little more difficult because I was contacted by a talent search company, not by the employer themselves. My suspicion is that it would probably be the lowest salary, but the commute is the best, and their 401K match... the term I've used for it is "jaw-dropping." (It's a financial services company. Which leads to another plus -- for the first time in my entire professional career, I would not be doing government work.) MDB (talk) 16:53, 25 August 2011 (UTC)
- Excellent. En attendant Godot 21:27, 25 August 2011 (UTC)
- Financial services doesn't have anything to do with mortgage-backed securities, does it? Anyway, good luck. Nebuchadnezzar (talk) 04:08, 26 August 2011 (UTC)
- This is a large, diversified company, and one that seems to have come through the turmoil. I'd be in their data center anyway, not doing financial work myself. MDB (talk) 10:18, 26 August 2011 (UTC)
A request for a favor.[edit]
One of the facebook pages I am subscribed to posted a link to a MotherJones.com article titled something along the lines of "The Greatest 110 Words About Dick Cheney, Ever" or something like that. However, I can't even access MotherJones.com for some reason. Would someone please copy and paste the article into an e-mail and send it to me? THe comments on the post have me intrigued as all hell. Thanks in advance. The Foxhole Atheist (talk) 10:34, 26 August 2011 (UTC)
- If.
- Thanks, One. The Foxhole Atheist (talk) 10:48, 26 August 2011 (UTC)
Project Blue beam[edit]
Found this while rooting round the web. " This RationalWiki article has itself been redigested into a Romanian Wikipedia article (translation), which was translated back for English Wikipedia and then deleted.". Interest? Pippa (talk) 17:29, 26 August 2011 (UTC)
- ??? Someone reposted one of our articles on a crank site? Nebuchadnezzar (talk) 17:31, 26 August 2011 (UTC)
- Creative Commons, what can you do? But really, it looks like a spam blog to me. Things get copy/pasted by bots to try and link farm to other places for whatever purposes they want. You find all sorts of random stuff copied to them that it barely makes any sense. ADK...I'll wash your Suzuki! 17:37, 26 August 2011 (UTC)
Links in long articles[edit]
If i'm editing a long article, with lots of initial content links at the top, i tend to make linkable references several times in an article. ie., if a page on gay rights had 5 longish content (edit) sections, I will make "marriage" linkable the first time it's cited, but also one or two other times near the end of the article. To me, it seems more rational, but i wanted to make sure i wasn't stepping on toes doing it.En attendant Godot 19:05, 26 August 2011 (UTC)
- I must admit that repeated wikilinks of common terms (like "marriage!) irk me. I'd tend to stick them all in the "see also" at the end,if at all. Pippa (talk) 19:10, 26 August 2011 (UTC)
- You mean from article to article, or just in one article? And i admit that where i do this most is on Wiki, when talking a techinical issue about native americans, like a link to Wavoka or Paiute, where people actually say "what is that" or "who is that person", rather than links to "gay" or "marriage". but the habbit sticks. ;-)--En attendant Godot 19:16, 26 August 2011 (UTC)
- I hate having to hunt down the first mention of something in long articles on WP just so I can click it. It's okay for common terms, but for more obscure stuff it should probably be linked more than once. -- Nx / talk 19:21, 26 August 2011 (UTC)
- If the article has subsections which are directly linked to from elsewhere, then yeah. Otherwise, no. Occasionaluse (talk) 19:23, 26 August 2011 (UTC)
- NX, me too. when i'm editing WP, i make sure that any somewhat obsure reference is lined in any sub "edit" section. (I don't know the technical name for it). the other reason i do this is a cop out. I edit by clicking on that sub section, and really have no idea if I'm using a word that has or has not been used before, so i wikilink it anyhow. Here, not as much, cause it's not generally as technical (or rather, where it's technical we are not likely to have an article and that's ok) and our sub sections are not as long. but yeah, i hate reading something I'm not well educated in (oh, the history of boullibase) and finding some ingredient i've never heard of, and having to look it up higher on the page.En attendant Godot 19:29, 26 August 2011 (UTC)
- Is the policy there to link the first mention in each section, or am I imagining that? People there also seem to adhere to rigid protocols at times when it comes to things like that. Really common words shouldn't be linked, such as "time", so people will remove such links even when the article is on some time-related concept, apparently believing that policy dictates the time article should be an orphan, which is ridiculous. DickTurpis (talk) 19:31, 26 August 2011 (UTC)
- MOS: "Link the first time a term is used (unless it's in a header), not every time." Pippa (talk) 19:34, 26 August 2011 (UTC)
- That is a rule I see lightly enforced at WP. I have probably been guilty myself once or twice, but I do it with my eyes open. If it is in long article and the previous link was more than a pageUp away, I am willing to link it again. In this case, being considerate to the reader trumps rigid sticking to rules. Sprocket J Cogswell (talk) 21:16, 26 August 2011 (UTC)
- The more complete WP becomes as an encyclopedia, the less well it is served by conventional hyperlinks internally. Before the web came along and distorted everybody's idea of how to do hypermedia, it was very common for hypermedia systems to provide so-called "generic links" where the user can select any arbitrary thing and get links to related material. So you see that WP's entry for "King Arthur" mentions something about "knights" and you can select that and get a dictionary definition of the word 'knight' or the WP article on knights without some editor having to manually specify this for that particular instance of that particular word. Sadly the web doesn't really provide generic links, and attempts to retro-fit them have met with little success. In the early days of WP this was good because manually authoring links causes "red links" that helped editors find useful work to do. But today the red links are mostly for nonsense, or the most obscure things possible e.g. "Popularity of beverages in Southern Australia" and so generic links would be way better. 82.69.171.94 (talk) 21:54, 26 August 2011 (UTC)
- Do I understand you? Generic unfiltered ambiguous links are not what I go to Wikipedia looking for. I go to WP looking for information and hyperlinks that have been selected as relevant and useful, selected by a crowd of editors who, for the most part, are knowledgeable enough about their subject to choose relevant reliable sources. The process is transparent enough that sources of bias are mostly visible, and most silliness is swiftly reverted. The manual specification of the appropriate link for each instance of a particular word, taken in context, is the value that crowd adds, and I am not sure I would want it any other way. Sprocket J Cogswell (talk) 00:34, 27 August 2011 (UTC)
- I'm not sure, did you miss the word internally in what I wrote I above? Because what you're describing sounds like it might be useful or relevant for external links, but I don't think it's really "value that crowd adds" within Wikipedia itself. Sure, a generic link isn't going to pick up the fact that "All Tomorrow's Parties" in one context is a reference to the song, in another context it's a reference to the festival. But situations where that's a major problem are the exception rather than the rule, in the ATP case they're a disambiguation page away and nothing stops WP from still having manually authored links, just they shouldn't be the burden they are now. 82.69.171.94 (talk) 05:45, 27 August 2011 (UTC)
- Yes, I meant internal links as well. (By the way, in the ATP example, there is also Gibson's 1999 novel, the last one in the Bridge trilogy.) I watch slightly fewer than two thousand pages on WP, and a fair bit of my minor editing has to do with targeting wikilinks more accurately, in aid of building the web. Based on that experience, I cannot agree that "situations where that's a major problem are the exception rather than the rule." A "disambiguation page away" is still a miss, in my estimation. Worse cases happen with words like "property," where describing something as a physical property can link to an article on real estate.
- Manual linking more burdensome than generic? I think you are greatly underestimating the magnitude of the task of vetting and fixing an encyclopedia's worth of inaccurately targeted machine-generated links. Smartening up the link-generating algorithm to a usable standard would be even more Sisyphean.
- In the WP context, I still prefer hyper-links selected for usefulness by human editors. Sprocket J Cogswell (talk) 13:30, 27 August 2011 (UTC)
Posting nudity/pornography on other's user talk pages[edit]
Seriously, I can't be the only one here that finds this objectionable. If people want to see that stuff, they can post it on their own talk page all they want. But, putting it on another user's talk page, when you know they don't want to see it there, surely others would object to this. I am particularly interested to hear the opinions of our female editors, since I suspect many of them may have a somewhat different perspective on this issue from what many of our male editors do. (((Zack Martin)))™ 11:32, 27 August 2011 (UTC)
- If you KNOW that the other editor doesn't want it there it's harassment. If it happened to me I'd just remove it and block the other editor for 5 minutes to send the message. Rinse and repeat until HCM. Senator Harrison (talk) 12:33, 27 August 2011 (UTC)
- I personally don't find it offensive, but if a user says he or she finds it offensive and the other user doesn't stop it - it's trolling. I would only delete hardcore pornography from my pages, but that's me. --★uːʤɱ soviet 12:45, 27 August 2011 (UTC)
- The images are not displayed by default, they are turned into links (which now have warnings), you have to either click the link or enable display of filtered images. Both of those actions count as wanting to see it -- Nx / talk 12:50, 27 August 2011 (UTC)
- (EC)Well, the issue for me is not just whether I personally feel uncomfortable with the image, I want others who might find such images make them uncomfortable to feel welcome on my talk page. So I don't want to have those images displayed, or linked to without any fair warning of their content. Different things offend different people, but discomfort at nudity or pornography are common enough that such discomfort deserves some respect. You don't have to agree with the people who feel that way feeling that way, but please show them some respect. (((Zack Martin)))™ 12:50, 27 August 2011 (UTC)
Priest Church employee. Pedophile. Lather, rinse, repeat.[edit]
Ho-hum. B♭maj7 “We are moving too fast for any label to stick.”-CLRJ 15:34, 26 August 2011 (UTC)
- Jesus. Because of these people (disclaimer: it's the Mirror - it might be horseshit), it's almost getting to the stage now that you're automatically suspected of being a paedophile if you want to work with children. Sorry state of affairs. Ajkgordon (talk) 15:41, 26 August 2011 (UTC)
- I've just finished "Beyond Belief" by the excellent David Yallop. I'm not sure which is worse - the scope of the abuse, or the way in which the church has covered it up for so long. Either way, it's horrific. --PsyGremlin講話 15:47, 26 August 2011 (UTC)
- The church cover-up is a result of that noted Christian doctrine, "forgiveness;" a good deal of the hysteria over the abuse scandal is aggravated by wretches who want to have their cake (be forgiven their debts) and eat it also (not forgive their debtors). ListenerXTalkerX 15:53, 26 August 2011 (UTC)
To be fair, this guy is C of E, not RC. Has four kids of his own, and the C of E seems to be co-operating with the po-pop more than the Catholics have, historically B♭maj7 “We are moving too fast for any label to stick.”-CLRJ 15:54, 26 August 2011 (UTC)
- The article says he is a Catholic. ListenerXTalkerX 15:57, 26 August 2011 (UTC)
- Here Pippa (talk) 16:09, 26 August 2011 (UTC)
- Is there a project anywhere that keeps a tally on the number of reported paedophiles with religious connections (priests and so on) compared to the number of reported paedophiles with no specific religious connection (i.e. they may be religious but they aren't a member of the clergy)? X Stickman (talk) 16:50, 26 August 2011 (UTC)d
- Joe.my.god does not keep tallys, but he does do a week by week "over view" of the religious (including sexual) crimes done by religious people. En attendant Godot 17:40, 26 August 2011 (UTC)
So ListenerX is right, we are talking Catholic here. I skimmed the article really quickly and saw the guy was married with four kids and therefore assumed he wasn't RC. He was an RC employee, not a priest. B♭maj7 “We are moving too fast for any label to stick.”-CLRJ 21:01, 26 August 2011 (UTC)
- And there are plenty of abusers out there with no connection with religious groups. I'd even say the majority of abusers have no links to religious groups. Obviously, if abusers are everywhere in society, you'll find them in religious groups too. People seem to me here to be picking up on the Catholic church connection even though it is irrelevant. It would be relevant if this was a case of the church trying to cover up this man's offences, but in this case, there is no evidence it has done so. And, even though the Catholic Church is guilty of covering up abuse, so are to my knowledge Anglicans, Jews and Buddhists, and probably quite a few other religious groups I'm sure, and secular organisations also. Many people, including those in authority, religious or secular, find it easier to discredit or hush up abuse allegations than to actually acknowledge them publicly. I know someone who was abused as a child by a family member, and the rest of her family refuse to accept it happened, and so she has just given up on that — just like church cover ups, except on a smaller scale. (((Zack Martin)))™ 22:54, 26 August 2011 (UTC)
- The Catholic church has always touted itself as a moral arbiter. For them to knowingly conceal such behaviour is much worse than a single person, or family, doing so. Hypocrisy in the extreme. Alright, they haven't hidden this guy but they facilitated him in the past and one would suspect that if this had come to attention of the church ten years ago we'd never have heard about it. True, everyone's as fallible as everyone else but we can't all hide it. Pippa (talk) 23:07, 26 August 2011 (UTC)
- PS. Marathing: you're a total idiot. Pippa (talk) 23:09, 26 August 2011 (UTC)
- The Catholic Church being hypocritical? Nothing new there. Plenty of cases of that going back centuries that have nothing to do with abuse. I don't deny the Catholic Church has done a lot of wrong things, both involving this issue, and many others. I just can't agree with people who want to paint it as some kind of problem exclusive to the Catholic Church, when other denominations/religions, and secular institutions, are guilty of the same things. And, it is sad that you feel the need to resort to personal attacks, it is a poor substitute for discussing the issues. (((Zack Martin)))™ 23:13, 26 August 2011 (UTC)
- " I just can't agree with people who want to paint it as some kind of problem exclusive to the Catholic Church." Okay, name one other/employees. B♭maj7 “We are moving too fast for any label to stick.”-CLRJ 23:27, 26 August 2011 (UTC)
- EC) Not a personal attack: a statement of fact. I cannot think of one word that you have written on this site that would dissuade anyone from agreeing with me. You are an overeducated simpleton. Pippa (talk) 23:29, 26 August 2011 (UTC)
- @BBMaj, in terms of covering up abuse, other religious groups have problems too - see e.g. the Rabbi who is opposed to reporting abuse revelations to the police. See what I wrote on Talk:Anti-Catholicism. Pointing out the Catholic Church's wrongdoings while ignoring those of other churches/religions is a form of Anti-Catholicism. (((Zack Martin)))™ 23:33, 26 August 2011 (UTC)
So, Maratrean, than you can't name a/employee, then. So there is something particularly noteworthy about Catholic sexual abuse then. Thanks. B♭maj7 “We are moving too fast for any label to stick.”-CLRJ 23:58, 26 August 2011 (UTC)
- Yes, the Catholic Church is bigger and wealthier and more centralised than other religions/denominations. And it has the same abuse coverup problem the others do. You are pointing out things that make the Catholic Church special, not what makes its abuse coverup special. It's abuse coverup is not essentially different from that of Anglicans or Jews or Buddhists or other religious groups. (((Zack Martin)))™ 00:00, 27 August 2011 (UTC)
- Let me add, it is only in more developed Western countries that people feel free to bring up these sort of issues in the open. I'm sure in lots of third world or less free countries, there is lots of abuse that gets covered up (both by religious and non-religious authorities), that never sees the light of day. Do you think, if an imam in Saudi Arabia is abusing kids, will it get covered up? Quite possibly. Will you ever hear about the coverup? Probably not. Do you think that Roman Catholic priests are more likely to abuse children than imams in Saudi Arabia? Who could know? We certainly hear more about the misdeeds of Roman Catholic priests than those of imams in Saudi Arabia, but that could well be because in the Western cultures in which the RCC is having this problem, these kinds of issues are more likely to become public. (((Zack Martin)))™ 00:18, 27 August 2011 (UTC)
"You are pointing out things that make the Catholic Church special, not what makes its abuse coverup special." No, those are the things that make the abuse coverup most odius. Men will always abuse children, I get that. But most men do not enjoy the protection, if not the facilitation, of a wealthy, transnational organization that provides them with easy access to potential victims, stymies law enforcement efforts, and uses its moral imprimatur as a way to keep the dogs at bay. You're argument about other religions has no evidence whatsoever to support a contention that a similar kind of coordinated, centralized decades-long pattern repeated itself with another group that is plugged in to hospitals, schools, social services and other target-rich environments. B♭maj7 “We are moving too fast for any label to stick.”-CLRJ 00:31, 27 August 2011 (UTC)
- The Catholic Church isn't a single organisation anyway. It is a whole collective of many different organisations. It isn't as centralised as you think. In many ways, it is less centralised than many other religions, due to factors such as the enormous proliferation of religious orders. Do the Franciscans take orders from the Jesuits? Or the Sisters of Mercy from the Sisters of Charity? Has there been one big coverup? There have been lots of little coverups, independently arrived at. Sure, the Vatican has in some cases made policy decisions that would made these coverups possible, but its not like there is some secret Coverup Committee deep inside the Vatican calling all the shots.
- Are you ignorant of the long and odious history of Anti-Catholicism? (((Zack Martin)))™ 02:46, 27 August 2011 (UTC)
- Most anti-Catholicism was nothing more than a cover for ethnic animosities, as in Ireland after the Cromwellian conquest, or in the U.S. during the large waves of Irish immigration. ListenerXTalkerX 03:36, 27 August 2011 (UTC)
- There is an element of truth in that statement, but it is more complicated than that. Look at the history of persecution of the English recusants — a history of native English Protestants persecuting native English Catholics. Or consider the Anti-Catholicism of Jack Chick, which has nothing to do with ethnic conflict that I can see. (((Zack Martin)))™ 10:00, 27 August 2011 (UTC)
- Yeah, bashing the Vatican for the failings of its sub-units is like bashing corporate Shell for its record in Nigeria (where SPDC is 55% owned by the Nigerian government). It's not at all fair. ГенгисRationalWiki GOLD member 10:10, 27 August 2011 (UTC)
- Don't know enough about Shell to comment about the validity of that comparison... but when you bash the Catholic Church, you are bashing all Catholics, a lot of whom actually strongly disagree with the Vatican. Most of the Catholics in my family strongly disagree with the Vatican — and that includes religious brothers/sisters in my extended family... actually, the majority of lay Catholics disagree with the Vatican on many issues (like contraception)... some religious orders are well known for being a thorn in the Vatican's side... e.g. a lot of well known Jesuits, like Father Frank Brennan, are much too liberal for the Vatican's liking. I've seen Fr. Brennan, on national television, speaking positively of a lesbian relationship — although he was careful not to go so far as to openly endorse same-sex marriage, which was the topic, but he wasn't very strident in his opposition to it either... — I really doubt the Vatican would be keen on his endorsement of a lesbian relationship, or the meekness of his opposition to same-sex marriage, but the reality is the Vatican can't control him, because the Catholic Church is not actually as hierarchical/authoritarian as many non-Catholics believe it to be, or as many conservative Catholics wish it was. Most Catholics understand the difference between Catholicism and the Vatican, something which many non-Catholics don't get. (((Zack Martin)))™ 10:20, 27 August 2011 (UTC)
- I'd suggest that if you don't agree with what the Pope says then you'd be better off leaving the Catholic Church and starting your own religion; then you don't get to be associated with them. ГенгисRationalWiki GOLD member 14:55, 27 August 2011 (UTC)
- Which is exactly what I've done... but unlike me, the vast majority of Catholics don't see things that way. (((Zack Martin)))™ 14:57, 27 August 2011 (UTC)
- My experience of Catholic flocks is that the style has a lot to do with obedience to authority. Some of them know how to party as well, but the reflexive obedience is still there when Fr. Riordan or Fr. Vespucci shows up. I would suggest that a sane course for a recovering Catholic is not to make up another set of hymns and prayers of their own, but see if the local Unitarian-Universalist congregation is at all simpatico. When my children were in grade school, we went to the UU church on Sunday, for some civilizing influence on the little savages, and some hymn-singing for Dad. The fellowship was supportive too. Sprocket J Cogswell (talk) 16:25, 27 August 2011 (UTC)
Richard Dawkins - Beware the Believers (Expelled promo)[edit]
I present to RationalWiki this old but hilarious video, which was released sometime in 2008 as promotional material for Expelled. But according to PZ Meyers, it was made by Michael Edmondson, who apparently didn't care too much about Stein's message and just wanted to make something funny. Is it a parody of the New Atheists? A parody of creationist critiques of the New Atheists? Or is it a parody of a parody? Who knows! Tetronian you're clueless 21:25, 26 August 2011 (UTC)
- I've seen that, I always thought it was an attempt to capitalize on the popularity of MC Hawking. Nebuchadnezzar (talk) 02:11, 27 August 2011 (UTC)
- MC Hawking? Nah. Epic Rap Battles of History is how you do it. Osaka Sun (talk) 06:34, 27 August 2011 (UTC)
- Two things: 1) Why does he sound more like David Attenborough? 2) I am never going to be able to look at Eugenie Scott in the same way again. ADK...I'll reiterate your businessman! 14:35, 27 August 2011 (UTC)
- ADK, your #2 was pretty much what I thought as well. I'm still a bit unnerved by the way they portrayed her. Tetronian you're clueless 18:46, 28 August 2011 (UTC)
- Generally everything in that video is unnerving. ADK...I'll revolt your fact tag! 00:23, 29 August 2011 (UTC)
Any Haitians in the house?[edit]
Or anybody who speaks Haitian Creole (Kréyol)? B♭maj7 “We are moving too fast for any label to stick.”-CLRJ 03:12, 27 August 2011 (UTC)
- Not here. ГенгисRationalWiki GOLD member 10:13, 27 August 2011 (UTC)
The poor get poorer and the rich get pumped full of Vitamin C[edit]
I have been looking through *hangs head in shame* Fox News, when I happened through this: Now, I get, kind of, why someone would like to turn himself into a low-entropy popsickle for the unlife after, but what is the rational dirt with the intravenous B-12, magnesium, vitamin C? Is it bull as suspected or have I been missing out in a whole lot of fun to be had with needles and oranges? Sen (talk) 11:54, 28 August 2011 (UTC)
- I've heard people say that taking large amounts of vitamin C can help you recover from small illnesses like the common cold faster than you would otherwise (though I've never actually tried it). I don't know what the effects of taking it every day would be though. Tetronian you're clueless 12:05, 28 August 2011 (UTC)
- Ah, here we go. Tetronian you're clueless 12:12, 28 August 2011 (UTC)
- That's the Ray Kurzweil diet. Sounds like old Simon has fallen for his bullshit hook, line and sinker. --JeevesMkII The gentleman's gentleman at the other site 13:22, 28 August 2011 (UTC)
- It won't do much besides darkening your urine. And it's all Linus Pauling's fault. Nebuchadnezzar (talk) 17:06, 28 August 2011 (UTC)
- It was a plot point in House that vit C was supposed to cure TB. I won't spoil it, just in case, but suffice to say the punchline is "it's total bollocks". ADK...I'll pull your ox! 00:22, 29 August 2011 (UTC)
- No, it was a plot point on House that large doses of vitamin C had once been shown to cure TB but that the study had never been followed up because nobody rich cared any more. The point of the episode was that the chosen method of gaining support for testing the hypothesis was so outrageous that even House wouldn't support it. –SuspectedReplicant retire me 00:26, 29 August 2011 (UTC)
- The House episode was about curing polio with vitamin C, not TB. Still really good though.
- "You gotta get over here. [The CIA's] got a satellite aimed directly into Cuddy's vagina." Osaka Sun (talk) 04:02, 29 August 2011 (UTC)
Yes or No on RW-Tan.[edit]
There. I kept it simple. Now whether or not you like it...is none of my concern.
--Dumpling (talk) 23:05, 25 August 2011 (UTC)
- That... That… looks like a victim of pedobear. Also that's the RWW logo... --★uːʤɱ structuralist 23:09, 25 August 2011 (UTC)
- HAHAHAHA! XD Well, it was mentioned before (The RWW logo, I mean, not the pedobear victim). The logo can easily be changed. And minor edits I can do. Starting from scratch is a no. Unless I have more free time.--Dumpling (talk) 23:11, 25 August 2011 (UTC)
- I don't usually comment on such things but I feel it necessary now: Fuck no. The whole concept of a 'tan' is just so much arse. (Very nice work none the less) AMassiveGay (talk) 23:12, 25 August 2011 (UTC)
- Yes. HollowWorld (talk) 23:15, 25 August 2011 (UTC)
- Eh. I don't really care if there is/isn't a 'tan'. I just drew on command. (Thank you though.)--Dumpling (talk) 23:19, 25 August 2011 (UTC)
- WHOO! HOO! Now I can knock this off my "shit to draw" list! ADK...I'll revolve your igneous protrusion! 17:32, 26 August 2011 (UTC)
- OHDEARGODWHATISTHATTHING? The unbusinesslikeman of business 15:00, 29 August 2011 (UTC)
Different idea[edit]
How about this "Tan"? I have zero art skills though. --★uːʤɱ atheist 23:23, 25 August 2011 (UTC)
- That can be done rather easily. Any debate as to what color the goat should be?--Dumpling (talk) 23:25, 25 August 2011 (UTC)
- Actually that was a joke. --★uːʤɱ digital native 23:26, 25 August 2011 (UTC)
- This one cannot sense sarcasm very well and is very gullible.--Dumpling (talk) 23:28, 25 August 2011 (UTC)
- You can't be gullible - the word has been removed from the Oxford English Dictionary. --PsyGremlinKhuluma! 12:00, 28 August 2011 (UTC)
- This one too — the gullible at least. --★uːʤɱ anti-communist 23:35, 25 August 2011 (UTC)
- No, no, I really like the idea of having a manga-style goat as our tan...~SuperHamster Talk 23:30, 25 August 2011 (UTC)
- YAY! Me too actually! I much rather prefer a cute goat. Unless you're being sarcastic as well...NO MATTER! I'm not.--Dumpling (talk) 23:32, 25 August 2011 (UTC)
- Why is a tan needed or why is a tan even desirable? AMassiveGay (talk) 23:34, 25 August 2011 (UTC)
- People that comment in this section really should say if they are serious or not, 'cause I really can't tell. --★uːʤɱ libertarian 23:35, 25 August 2011 (UTC)
- I'm very much serious. And to answer your question (AMassiveGay)...It isn't. Or at least I don't think it's needed, but it'd be something interesting to have.--Dumpling (talk) 23:41, 25 August 2011 (UTC)
- I"m 40. i'm slow, old, and get off my lawn. What the fuck is a "tan".En attendant Godot 23:47, 25 August 2011 (UTC)
- Aw~ You're not slow or old. And Yes ma'am! Anyways, a "tan" is basically like a Japanese-manga drawn mascot that a lot of wikis have.--Dumpling (talk) 23:48, 25 August 2011 (UTC)
- By 'manga', read 'hideously generic'. They are an unnecessary abomination. AMassiveGay (talk) 23:52, 25 August 2011 (UTC)
- I don't think we'll be officially adopting any tans, like via a vote, but if you make a cute goat one, then that can just be the RW tan. It doesn't have to be an official thing or anything and it's interesting to have. Awesome work by the way!--talk 00:06, 26 August 2011 (UTC)
- Fine by me. :3 And...thanks.--Dumpling (talk) 00:11, 26 August 2011 (UTC)
- Not even knowing what a TAN is used for, if it's cute, and it's a goat - i'll vote. but only if it's cute. like this but in goat shape.En attendant Godot 00:20, 26 August 2011 (UTC)
- Its the cuteness that I find so displeasing. Also, that parrot isn't cute. It is a rapist AMassiveGay (talk) 00:26, 26 August 2011 (UTC)
- Yes, but Mark was clearly all for it. I mean he went out of his way to make sure he had a parrot scar teh scene before. Little did he know that the cute guy would try to rape him. and Fry wasn't about to help, he was dying of laughter (as were my husband and I). En attendant Godot 17:43, 26 August 2011 (UTC)
- I'll voat for a gote if it is sickeningly cute. I'm talking massive-eyes cute. ONE / TALK 11:01, 26 August 2011 (UTC)
- How about this shit? It's something called "Pleasant Goat" (ʞlɐʇ) ɹǝɯɯɐHʍoƆ 18:40, 26 August 2011 (UTC)
- Is this maybe a case of Wikipedia Jealousy Complex (WJC)? I disapprove about every Manga-crap except ɹǝɯɯɐHʍoƆ's proposal, which is so incredibly amateurish as to qualify as a parody. Rursus dixit (yada³!) 09:52, 27 August 2011 (UTC)
- I must be getting old. Tan?--BobSpring is sprung! 11:23, 28 August 2011 (UTC)
- Cute anime-esque mascot. WP has one. CP has one, courtesy of ED. The "tan" is a childish honourific from Japan. Supposed to be really cute, comes from small kids who apparently can't say "san". Teddy bears are kuma-tan. So our mascot - when not being rogered by Pedobear - would be Rational-tan. Or something.
- And I would oppose having a goat as our tan until my dying breath. Jerboas on the other hand basically define the word "cute". My dictionary even says (ok, in my handwriting) "See jerboa" under "cute". --PsyGremlinSermā! 11:57, 28 August 2011 (UTC)
- How about a girl with goat horns? The unbusinesslikeman of business 15:00, 29 August 2011 (UTC)
The difference between policy and enforcement (or, crap political arguments)[edit]
PolitiFact slips up a bit here, as anyone who's been following the creationist movement knows the "analyze and critique" language is just an attempt to slip creationism in through the cracks. The article notes that to some extent at least, but glosses over it. Assuming law and reality are consistent needs to be its own political fallacy. It tends to lead to idiocy like this. Hey, murder can never happen because it's illegal! Nebuchadnezzar (talk) 21:47, 27 August 2011 (UTC)
- Reminds me of an old Monty Python sketch - If you want to bring down the number of crimes you just need to reduce the number of offences. ГенгисRationalWiki GOLD member 08:15, 29 August 2011 (UTC)
Crossword help[edit]
Stuck on one clue on the everyman crossword in the Observer. 22 Lot, husband of biblical character (4) _ E _ H. I think those letter are correct but I could be wrong. I have no clue as the answer. AMassiveGay (talk) 03:24, 28 August 2011 (UTC)
- Perhaps 'Leah' who was a biblical character. Lot & lea are both words used to describe land, with 'h' for husband. RagTopGone sailing 03:43, 28 August 2011 (UTC)
- May be something useful in here. Also, is that what crosswords are like in the UK? They could use a Real Man's Crossword. Nebuchadnezzar (talk) 03:47, 28 August 2011 (UTC)
- UK crosswords are prettier. And the you linked appeared to be a concise crossword. They are for pussys AMassiveGay (talk) 03:52, 28 August 2011 (UTC)
- And you can get half the answers just by solving the other clues. You only need to do half of it. AMassiveGay (talk) 03:55, 28 August 2011 (UTC)
- Yeah, Leah is right. ГенгисRationalWiki GOLD member 10:22, 28 August 2011 (UTC)
- Also yes, once Torquemada's cryptic crosswords appeared all the non-cryptic ones should have vanished except from puzzles aimed at primary school children. It's as if instead of Poker, some people played "Go Fish" for money. Or there was a "pro gamer" league playing Pong instead of Starcraft for cash prizes with a live audience. 82.69.171.94 (talk) 21:53, 29 August 2011 (UTC)
The evolution of Republicans[edit]
[3] What paper is this from? - David Gerard (talk) 22:15, 28 August 2011 (UTC)
- I did a tineye.com search and then looked around on Google... must be some student newspaper or something.--talk 03:37, 29 August 2011 (UTC)
- It's an old joke, "the descent of American presidents from George Washington to Ulysses S. Grant was enough to discredit the theory of evolution." First appeared in the 1870s about the second Republican president. Henry Adams, U.S. Grant, and Evolution: Practicing History in the Age of Darwin. [4] nobsI am a fugitive from an ideological fever swamp 03:39, 29 August 2011 (UTC)
Ah, if Lincoln was alive to see what's happening to his party. Osaka Sun (talk) 04:05, 29 August 2011 (UTC)
- Here is a highly amusing quote from a Southern crank in support of the Dominionist Michael Peroutka, 2004 Constitution Party candidate (capitalization in the original, emphasis mine):
- ListenerXTalkerX 06:21, 29 August 2011 (UTC)
- Pardon me for asking a dumb question, but isn't the Constitution Party not the Republican Party? nobsI am a fugitive from an ideological fever swamp 16:31, 29 August 2011 (UTC)
General site news[edit]
I just noticed on Rc: (Intercom log) . . Blue (Talk | contribs | block) sent a message to General site news (Vote closing soon). What is this "General site news" and where may it be read? Pippa (talk) 04:12, 29 August 2011 (UTC)
- Click on "message" -- Nx / talk 05:49, 29 August 2011 (UTC)
- Yeah, but if I don't happen to notice it on Rc, what's the point? Seems silly to me. Pippa (talk) 05:58, 29 August 2011 (UTC)
- I'm not sure if you're aware of this, but intercom messages appear at the top of the page, similar to the "you have new messages" orange box, so you'll probably notice them. However they also have an expiration date, after which they won't appear there. In this case there's no point in bothering you with the message if the vote is over. -- Nx / talk 06:02, 29 August 2011 (UTC)
- (EC) Go to the Intercom page, click on the "Configure groups" link there, and make sure you are a member of the "General site news" group. ListenerXTalkerX 06:03, 29 August 2011 (UTC)
- Thanks LX, Probably worth telling newbies about that, or making "General site news" also default. Pippa (talk) 06:12, 29 August 2011 (UTC)
- "Urgent" appears to be the default. The unbusinesslikeman of business 15:02, 29 August 2011 (UTC)
Fat too much time on their hands[edit]
The SCP Foundation For when TV Tropes gets boring. --PsyGremlinZungumza! 14:50, 29 August 2011 (UTC)
- Maybe we should slip this ( into Andy's homeschool classroom.........--Lefty (talk) 18:40, 29 August 2011 (UTC)
- I've loved the SCP site for years now. My favourites are this one (especially the testing logs at the bottom) and this one, again because of the test log, even if that does go against the spirit of the site I guess. Word of advice, though; *seriously* think through *any* ideas you possibly have for SCPs if you plan on trying to add one to the site. The community will completely and totally rip your second asshole a new asshole (your first asshole will have had a second asshole ripped in it when you try to join the site in the first place, unless they've changed it recently) when criticising your attempts. It's how they keep the site relatively well written and interesting and make sure it doesn't turn into a huge list of SCPs like "Stickman, The Man Who Gets All The Women And Is Totally Awesome All Of The Time". Oh! Also for fun, TV Tropes has a page for the foundation and the foundation has an entry for TV Tropes. X Stickman (talk) 19:25, 29 August 2011 (UTC)
- WARNING!!
Don't read uncensored SPC reports.
You will never sleep again.
They all have something in common.
It isn't what you think, and it is about 10 times worse than the average Nightmare Fuel in horror movies.--Lefty (talk) 20:51, 29 August 2011 (UTC)
- "Fat too much time"? You've been spending too much time with Kenny-baby, Psy. ГенгисRationalWiki GOLD member 21:36, 29 August 2011 (UTC)
Hiding comments in "recent changes" view[edit]
I know we can hide things like "m" or group all of the changes to a document into one - but is there any way to hide (for a given user, like, oh - me) from having to have particular pages or people listed? There are some
trollspeople I'd rather not read, if only cause i can't control myself and have to say "are you frigging kidding me? Science does not work like this, and i'm not even a scientists!". ;-)En attendant Godot 18:33, 29 August 2011 (UTC)
- You can filter a namespace, but I don't think you can filter users. Тytalk 18:36, 29 August 2011 (UTC)
- shucks. I have no impulse control (says the obese woman who is sitting here eating chocolate). En attendant Godot 18:42, 29 August 2011 (UTC)
- Don't use Rc: use your own, editable, "watchlist" and check Rc once a visit or every hour? Pippa (talk) 18:49, 29 August 2011 (UTC)
- Smart woman, that pippa. even though she makes me think of "pippi?" longstocking. :-)En attendant Godot 19:05, 29 August 2011 (UTC)
- There is also watchuser... Тytalk 19:09, 29 August 2011 (UTC)
- That script serves the opposite function as she inquired about, though it could easily be modified to accommodate hiding a particular user's edits from RC as well. Blue Talk 19:15, 29 August 2011 (UTC)
- I know, I was wondering if that was possible. Тytalk 19:20, 29 August 2011 (UTC)
- If I had that script on hand (the follow user), i'd not make one useful edit on this page. I'd just follow those I deem stupid enough to deserve my vitriol, making whiny comments. ;-)En attendant Godot 19:22, 29 August 2011 (UTC)
- Some people have talked of a script or something to ignore a certain editor at Conservapedia. I think Nx might be able to help.--User:Brxbrx/sig 19:42, 29 August 2011 (UTC)
- I think it's Night Jaguar's, but I have a feeling it might need to be updated. Occasionaluse (talk) 19:56, 29 August 2011 (UTC)
- This may or may not work. It's hardcoded to hide Theemperor, so you have to change that. -- Nx / talk 20:07, 29 August 2011 (UTC)
- Completely off topic, but when are browsers going to support tag selectors? Occasionaluse (talk) 20:14, 29 August 2011 (UTC)
News?[edit]
BBCBreaking BBC Breaking News Judge temporarily blocks a tough new immigration law in the US state of #Alabama after challenge from the #Obama administration.
That's all I've got off Twitter. Anyone got any more? Pippa (talk) 20:20, 29 August 2011 (UTC)
Stoned lemurs[edit]
Sounds like one of Punky's bands, but it's true.
Yo, man, gonna get me some 'pede man... --PsyGremlinSpeak! 16:49, 29 August 2011 (UTC)
- I had a friend that would say "Do bugs, not drugs!" He would then follow by taking up a sleazy sounding voice you'd imagine coming from a drug dealer or pusher and say "I've got centipeeedes..." Centipedes was on a higher note than the rest of the sentence. This friend of mine never failed to make me laugh.--User:Brxbrx/sig 03:53, 30 August 2011 (UTC)
- You should read 'Naked Lunch'. People are getting high on centipedes all the time in that. AMassiveGay (talk) 10:05, 30 August 2011 (UTC)
- I started the movie, but I got bored and I stopped watching--User:Brxbrx/sig 11:27, 30 August 2011 (UTC)
- The film is not like the book at all. It is still good though, you should give it another try. The book is cool too. AMassiveGay (talk) 13:10, 30 August 2011 (UTC)
All the woo you want[edit]
The Quickening - your recommended annual dose of woo in one easy package. My brain hurts. Everything you ever wanted to know about the end of the Myan calendar... which is Oct this year, not Dec 2012, complete with Comet Elanin, collective consciousness and god knows what else.
All narrated by a stoned chick, over repetitive visuals. Why can't woo-meisters make interesting movies?? It's like Terrance McKenna - just listening to 5 minutes of him speak and you want to jab fondue forks in your ears. --PsyGremlinSprich! 15:31, 29 August 2011 (UTC)
- What's worse? They copied their name from one of my favorite action movies. Lord of Reckless Noise Hooray! I'm helping! 16:35, 29 August 2011 (UTC)
- Whats even worse is is that you list Highlander II as one of your favourite movies. For Shame AMassiveGay (talk) 10:40, 30 August 2011 (UTC)
The Conspiracy Files[edit]
For the Brits in the house who can get iPlayer, BBC2 is showing The Conspiracy Files with a special on 9/11 ten years on. ADK...I'll hurt your squibble! 23:04, 29 August 2011 (UTC)
- Really, how are people this stupid?!!? ADK...I'll stride your liquid goo! 23:42, 29 August 2011 (UTC)
- I saw that program. It doesn't matter how much evidence/proof you provide, they will always think it was conspiracy. They've already made up there minds and they are not budging. AMassiveGay (talk) 10:16, 30 August 2011 (UTC)
- Also, the chap who made that 9/11 conspiracy film, keep saying he and his friends were just civilians, but kept refering to them with psuedo military terms. He also had a big picture of Christ with the caption 'employee of the month'. I always find strange to see young people with what I assume to be strong religious conviction. AMassiveGay (talk) 10:19, 30 August 2011 (UTC)
- Those Loose Change guys promised me a version narrated by Charlie Sheen. I am disappointed. Nebuchadnezzar (talk) 11:51, 30 August 2011 (UTC)
- The Bullshit! episode on conspiracies (I think it was 9/11 specifically) had a guy and this open-mic night for conspiracy theorists who uttered the immortal line "no one can convince me [that the government wasn't involved]". No one can convince me. No one can convince me. No one can convince me. If you want proof that conspiracy theorists are full of shit, look no further than that one little phrase. ADK...I'll stride your kumquat! 15:17, 30 August 2011 (UTC)
- Yeah, I think the basic structure of all conspiracy theories can be boiled down to the equation: Morton's fork + paranoia. Nebuchadnezzar (talk) 15:25, 30 August 2011 (UTC)
Something I stumbled upon in my readings of stuff[edit]
I found this as a citation on Wikipedia- actually, the citation was for Fox News, which had a summary and linked to the Daily Mail for the complete article. Naturally, an anti-union piece pimped by both the Daily Mail and Fox aroused my suspicions. Can anyone offer some insight into the context of the controversial comment in question? Thank you, RationalWiki--User:Brxbrx/sig 06:49, 30 August 2011 (UTC)
- The article itself seems confused as to what it's about. Both the article and the alleged quote use an example of an 18 yo sixth former but in reality an 18 yo is an adult, and the law they're talking about (the article itself eventually explains) only covers minors. Obviously it wouldn't be unusual for a school to fire a teacher who has sex with older pupils, but that's a long way from them being convicted of a crime, or even arrested let alone put on the sex offenders list. But anyway, it's a press release to generate interest in a commercial TV show. 82.69.171.94 (talk) 09:08, 30 August 2011 (UTC)
- There was change to the law in 2000(?) with the Sexual Offences (Amendment) Bill (1999-2000)/Age of Consent and Abuse of Trust Bill which makes it an offence for professionals in education and social care who regularly come into contact with people 'under the age of 19' to engage in a sexual relationship (which includes taking 'indecent' photographs) with those under their care. This is a link to a parliamentary briefing paper but I have been unable to actually find the bill's exact wording. So yes, it is a crime but whether it merits being put on the sex-offenders register is a different matter. ГенгисRationalWiki GOLD member 10:26, 30 August 2011 (UTC)
- Where does this under the age of 19 that you've quoted come from? It clearly doesn't come from the Bill itself, since you admit that you were somehow unable to find that. Here you can read the relevant text for yourself starting here: -- so, no, as I already explained it's not a crime for a teacher and a consenting 18 yo pupil to have sex. 82.69.171.94 (talk) 11:34, 30 August 2011 (UTC)
- "1)Subject to subsections (2) and (3) below,.". So anyone below under 18. AMassiveGay (talk) 13:08, 30 August 2011 (UTC)
- Working with persons 'under the age of 19' actually refers to the Education Reform Act 1988 which defines who is covered by the legislation. While AMG has pointed out that the law applies to sexual activity with persons under 18 the Secretary of State for Education has powers to enforce regulations that apply to the wider abuse of positions of trust which would not be restricted to the lower age limit and relevant institutions are expected to have internal rules governing this. However, this would then be disciplinary issue rather than a criminal one. ГенгисRationalWiki GOLD member 13:59, 30 August 2011 (UTC)
- BTW the Mail article doesn't even cite the case of an 18 year old it is just a hypothetical scenario so it appears to be another bit of scare-mongering. ГенгисRationalWiki GOLD member 14:05, 30 August 2011 (UTC)
Hurricane freakin Irene[edit]
I'm right in its path (so is the ASchlafly). There was a time where I loved stuff like this and wanted it to happen but now I'm anxietying out over losing power for a week or my workplace blowing away or something. If I don't post for a week after this Sunday afternoon, I either died or lost power. On the plus side, the Conservapedia server might get flooded. Senator Harrison (talk) 01:21, 26 August 2011 (UTC)
- Yep, it's going to be interesting. Can't be as bad as Floyd was back in 1999, though. My whole town was knee-deep in water for a few days. Tetronian you're clueless 02:18, 26 August 2011 (UTC)
- I think it's supposed to be worse than Floyd. They're also making evacuations in Cape May county mandatory. Senator Harrison (talk) 03:33, 26 August 2011 (UTC)
- I'm in central Maryland, between DC and Baltimore. When the leaves are off the trees, you can see I-95 from my front yard. I'm under a tropical storm warning, though I think I saw I-95 is the extent of the tropical storm conditions. I've got bottled water and food I can cook on my gas stove.
- If someone asks nicely, I'll tell you how the last hurricane (Isabel, I think) to hit this area prevented me from meeting Sarah Ferguson. MDB (talk) 10:17, 26 August 2011 (UTC)
- Oh, and as far as Andy being in the path of the storm -- Andy, if you're reading this... we make fun of you here, we insult you, but no one here wants to see you harmed. Keep you and your family safe. MDB (talk) 10:29, 26 August 2011 (UTC)
- I'm sure that he's praying to God and when he, his family and his property emerge unscathed it will all be down to the power of prayer. ГенгисRationalWiki GOLD member 10:35, 26 August 2011 (UTC)
- I remember Katrina and Gustav. Massive power-outage for about 2-3 weeks. Everyone in the neighborhood started gathering their meats and such from the freezers and started to BBQ. It was hot. Humid. Mosquito-infested. And only very few people would actually have electricity...or even internet.Good luck. D:--Dumpling (talk) 15:17, 26 August 2011 (UTC)
- I guess we'll be seeing a stream of Lead Belly references in the headlines soon. Nebuchadnezzar (talk) 17:38, 26 August 2011 (UTC)
I'll keep you all updated until my power goes out which I fully expect to happen. When it does I'll post from my phone if I can but I want to conserve the battery obviously. And yes I hope that Andy and his family make it through just fine, I just won't be shedding any tears if the server doesn't. Senator Harrison (talk) 21:01, 26 August 2011 (UTC)
- Well, I doubt it will be worse than Floyd was for me. That one knocked over a tree that smashed a transformer, causing fireballs to shoot out of it and set our lawn on fire. No shit. Nebuchadnezzar (talk) 02:38, 27 August 2011 (UTC)
- Fireballs shooting all over your lawn in way cooler than the drain backing up and flooding your basement, which is the only kind of disaster we get around here. Doctor Dark (talk) 03:34, 27 August 2011 (UTC)
- Or a horse flying through your living room window. Senator Harrison (talk) 12:34, 27 August 2011 (UTC)
- I hope that everybody is all right. It occurs to me though that if this one is big enough to climb the entire east coast of the US without seeming to lose a great deal of energy then it'll presumably still have some life left in it when it crosses the Atlantic and hits the dear old United Kingdom. (Otherwise know as "England".)--BobSpring is sprung! 09:37, 28 August 2011 (UTC)
- Reading some of the tweets on the BBC reminds me that I was in NY when Agnes hit in 1972. I don't remember much in the way of wind but a hell of a lot of rain. ГенгисRationalWiki GOLD member 10:48, 28 August 2011 (UTC)
- In my town (central western NJ), we had minor wind damage but major flooding. Most of central NJ is flooded. Rt 1 is under six feet of water. Senator Harrison (talk) 02:21, 29 August 2011 (UTC)
My home has been without electricity since sometime between 2:30 and 6:00 AM Sunday. Believe me when I tell you it is a challenge trying to make sure you're clean-shaven for a job interview when the only light you have is flashlight and candles. MDB (talk) 10:26, 29 August 2011 (UTC)
- Power was out here in Cape Cod, Massachusetts (easternmost part of the US) until about 10:30 PM Monday. We have a generator, but the power really needs to be filtered better before we put electronics on it. Howard C. Berkowitz (talk) 22:33, 30 August 2011 (UTC)
Gay Nazis[edit]
So I've seen this masterpiece being cited to back up the "gay Nazi" meme. But has anyone read it/know anything about it? I have to say, I'm rather tempted to read it if I could find a free copy. Nebuchadnezzar (talk) 12:48, 30 August 2011 (UTC)
- You can probably find a torrent of it some whereAMassiveGay (talk) 12:55, 30 August 2011 (UTC)
- hereAMassiveGay (talk) 12:56, 30 August 2011 (UTC)
- No seeds. Anyway, stop posting communism. Nebuchadnezzar (talk) 15:44, 30 August 2011 (UTC)
- Viva revolution, comrade AMassiveGay (talk) 16:09, 30 August 2011 (UTC)
- Try this - looks tiny tho - 15k. Also here here and here --PsyGremlinParla! 16:18, 30 August 2011 (UTC)
- The online versions don't have any images which is why they are quite small but nearer 920kb than 15kb. I looked at the review quotes at "The Pro-Family Resource Center" (Abiding Truth Ministries) and ... well the name of the site should probably tell you all you need to know. ГенгисRationalWiki GOLD member 16:54, 30 August 2011 (UTC)
- Why does 'pro-family' invariably mean 'anti-gay'? AMassiveGay (talk) 21:43, 30 August 2011 (UTC)
- Why does a bear shit in the woods? Anyway, thanks for the links, quite awesome. Nebuchadnezzar (talk) 05:05, 31 August 2011 (UTC)
Holy shit, Obama makes a good advisory appointment![edit]
As WIGO'ed. And Galt almighty, he's not a total Wall Street hack (cf. Larry Summers, Bob Rubin, Gene Sperling). Cue wingnut outrage at the SOCIALIZMZ of the Card and Krueger studies. Nebuchadnezzar (talk) 15:41, 30 August 2011 (UTC)
- Yeah, he's pretty good. An appointment to be happy about, in the face of the swelling ranks of unfilled positions, and the judgeships staffed by strangely elderly appointees.--talk 04:39, 31 August 2011 (UTC)
Links to hate sites[edit]
Some of what I edit deals with ultra-racist or otherwise hateful bilge. I was just wondering if there's an official policy on this. It seems like it's mostly older articles that have links to these sites broken up by spaces, hyphens, etc. Nebuchadnezzar (talk) 06:39, 29 August 2011 (UTC)
- I am not sure about written policy, but precedent is probably on your side: at one time we had a vandal, "Fred," who kept getting blocked for linking to Fred Phelps's site from the article on Phelps. He wound up creating about 70 separate accounts, but when we later allowed the link to Phelps's site in the article, he stopped vandalizing. ListenerXTalkerX 06:48, 29 August 2011 (UTC)
- No official policy - just use your own good judgment. If you think linking to a site will do more harm than good, don't do it.--talk 06:50, 29 August 2011 (UTC)
- I think one of the reasons for that semi-policy was that people thought it would improve their pagerank. But external links are all nofollow, so don't worry about that. -- Nx / talk 06:56, 29 August 2011 (UTC)
- Ah, I see. Call me crazy, but I've always been of the opinion that people can't truly understand hateful bullshit if they aren't actually allowed to, you know, read it. Nebuchadnezzar (talk) 07:14, 29 August 2011 (UTC)
- A lot of the more rabid advocates for censorship of hate-propaganda (Searchlight in the U.K., for example) are openly communist. Such people often view themselves as locked in a propaganda battle with a fascist/capitalist machine that pulled the wool over the eyes of non-communists, partly through the use of "fascist" propaganda. Unfortunately, these censorship policies have the double effect of giving the propaganda "forbidden fruit" status and giving the spewers of it something legitimate to gripe about. ListenerXTalkerX 07:30, 29 August 2011 (UTC)
I think you should be allowed to link to whatever you want, so long as it is somehow relevant. If a site is a legitimate target of criticism, then it is legitimate to link to it for the purposes of that criticism, no matter how odious its content. (((Zack Martin)))™ 10:28, 29 August 2011 (UTC)
- ORLY? NDSP 10:55, 29 August 2011 (UTC)
- NDSP, personally I have no qualms about linking to stuff if I feel doing so adds value, but I know Philip etc. have a different perspective, so I try to show some respect for that... (((Zack Martin)))™ 10:57, 29 August 2011 (UTC)
- I seem to remember us having at least two full-scale debates about this, during which I was down and out with a case of IDGAF... I don't see a problem with linking to hate sites, as long as there's a little NSFW-type warning. Blue Talk 10:47, 29 August 2011 (UTC)
- Maybe mostly a "will make your skin crawl" warning. NSFW as well because I suppose that your boss may not view you browsing StormFront too kindly.--User:Brxbrx/sig 11:55, 29 August 2011 (UTC)
- Yeah, I try to put NSFW warnings on that kind of stuff anyway, or it's something that's pretty obvious, like a ref to White Power magazine. Nebuchadnezzar (talk) 17:35, 29 August 2011 (UTC)
- This does pop up occasionally because the ruling is only informal and mostly maintained as more a community meme than an actual law. The rationale against linking directly basically revolves around depriving such sites of their hits and search engine rankings through links, and not on grounds of censorship and not wanting to link to offensive things - as Neb rightly says, how can you understand it without reading it? Screen grabs help with this (as well as recording any potential dynamic changes), and this is what we have capturebot for. But this has never been an official or written policy. The Fred vandalism pretty much put and end to the informal ban on linking to Westboro Baptist. Stormfront is also now linked to directly as a result, but Metapedia still mostly consists of screenshot links and no directly links. ADK...I'll erect your railing! 00:38, 30 August 2011 (UTC)
- But nofollow makes that irrelevant, right? Nebuchadnezzar (talk) 15:29, 30 August 2011 (UTC)
- Pardon me for stepping in doo doo (I just love the feel of shit squishing between my toes), but what is a hate site? Are the wp:American Enterprise Institute, the wp:Bradley Foundation, and the wp:Ludwig von Mises Institute hate sites? nobsI am a fugitive from an ideological fever swamp 03:18, 30 August 2011 (UTC)
- Don't be obtuse, Rob. You'll know it when you see it.--User:Brxbrx/sig 03:35, 30 August 2011 (UTC)
- The term "hate" as used in "hate group" and "hate site" does differ from the usual sense a bit. It generally refers only to hatred based on some immutable characteristic (usually ethnicity, or a religion that is the external representation of that ethnicity). Also, it only applies to hatred of those groups designated as underdogs (cf., "persons of color cannot be racists") — unless the hate is too obvious to be swept under the rug, as with Farrakhan's Nation of Islam. ListenerXTalkerX 05:31, 30 August 2011 (UTC)
- ^What he said. If you want to get pissy about definitions, that's fine, but as this is only an informal rule and casually applied very haphazardly partly out of tradition, partly because the individual examples came up and people rolled with it, and certainly not a hard and fast legally binding Law, then "if I think it's a hate site then it's a hate site" is adequate justification. Same reason that the political statement "I know what pornography is when I see it" is fine when someone is installing Net Nanny or whatever, but not when prosecuting and jailing people for possessing it. ADK...I'll withstand your paper! 15:23, 30 August 2011 (UTC)
- Unless you want to link to some absurdity in a sort of FSTDT way then I can never see a need to. Those who want further clarification can use Google or whatever to go and look for themselves. If I say to you (I just made that up, btw) is a hate site against those with red hair then that's all most people want to know. The curious, should they desire, can google it. Bob Soles (talk) 15:32, 30 August 2011 (UTC)
- wouldn't be a hate site. It would be purely factual. It is why judas is always depicted with red hair.AMassiveGay (talk) 16:07, 30 August 2011 (UTC)
- True, but <ref>JFGI</ref> is rather poor form at the very least. When we're talking about linking to sites we're really talking about the intellectual honesty to prove what one is saying is true. That's true whether a politician has said something or someone has said something on a forum post and whether it's easy to find or quite obscure. If I was to cite a scientific fact, I would put the authors, journal, date and page/volume numbers in the footnote, not instructions on what key words to stick into Web Of Knowledge to find the right thing. Making any kind of work for a potential read, to make them jump through hoops to understand you, is no different than what multiple creationists and conspiracy theorists do on forums all the fucking time where they say "there's plenty of evidence if you Google it!!!11". The same standard of citation should be expected regardless of the content of that citation. So these full URLs should be given. Additionally, in an examples I have just recalled, we had the case of Christopedia where I failed to include the URL in the first version of the article (indeed, that place was worse than Metapedia by a good order of magnitude) and subsequently someone mistook it for a different Christopedia which was an obvious parody site and the article was mashed up between the two for a time. Now, that's a very specific example that is unlikely to be repeated, but shows the ambiguity that can occur if we use "JFGI" as an actual policy against linking. ADK...I'll curate your cuddly toy! 17:03, 30 August 2011 (UTC)
- The example given is taken from the true life files of Wikipedia. According to the reputable and verifiable Southern Poverty Law Center, the Bradley Foundation is a hate group, meaning because of its extremist nature, the Bradley Foundation, like Stormfront, can only be used as a source about itself and in no other mainspace article. While it's interesting that the PBS's News Hour has been sponsored by a hate group, no one would dare challenge the integrity and scholarship of a reputable and verifiable source like SPLC, especially now because BLP, as well. nobsI am a fugitive from an ideological fever swamp 19:33, 30 August 2011 (UTC)
OT. but still wrong. Not on the list. Seems like the only mention of Bradley on SPLC is criticism of their funding of The Bell Curve. Nebuchadnezzar (talk) 19:48, 30 August 2011 (UTC)
- Bingo. You're right. But Wikipedia did suffer growing pains earlier. Simply cause the SPLC designates something a hate group, does that qualify Wikipedia using "an authoritive" source, like the SPLC, and citing it likewise? As you've rightly shown, Wikipedia does not. nobsI am a fugitive from an ideological fever swamp 23:17, 30 August 2011 (UTC)
- Wait, what? My point was that SPLC doesn't classify it as a hate group. Nebuchadnezzar (talk) 18:08, 31 August 2011 (UTC)
- The National Review covered this:
If they’re too promiscuous with the “hate group” label, they would lose what little credibility they have left... as they reserve the designation mainly for groups that at least sound scary — Aryan Nations, Supreme Ferret of the Ku Klux Klan — even if they’re just P.O. boxes with no members, they can get away with it. But labeling AEI, for instance, or the Bradley Foundation as “hate groups” would strain the credulity of even a lot of gullible lefties...instead, the SPLC includes such targets ... in [lists] of those “spreading bigotry,” or whatever, along with others they have decided to label “hate groups,” secure in the knowledge that the SPLC’s allies further down the leftist food chain will apply the label for them...
- and you'll recall, this discusion began not about "hate groups", but about "hate sites". nobsI am a fugitive from an ideological fever swamp 19:40, 31 August 2011 (UTC) | https://rationalwiki.org/wiki/RationalWiki:Saloon_bar/Archive118 | CC-MAIN-2022-21 | refinedweb | 22,179 | 70.02 |
Imagine you build an API only to realize later down the road, your documentation for it isn’t kept up to date and you are now spending more time trying to figure how it works than it took to build. Sound familiar? This is one of the common pitfalls of a code-first approach to APIs. There are two key approaches to building APIs:
If you were to follow a design-first approach, you can design your API in API Management and export it to an OpenAPI specification file, which could be used to bring organizational alignment and serve as a guidance for implementing backend services that support the API logic and client applications that consume the API.
To support the design-first approach, we recently released an update to our Azure Functions and API Management Visual Studio Code extensions that lets you generate Azure Function apps by importing an OpenAPI specification. The same functionality is available via the command line.
Functions provide not only ready-made scaffolding for writing an API but all the necessary hosting.
Functions allow your API to dynamically scale up or down depending on the number of requests. Since it is consumption-based, you're only charged when an HTTP endpoint is called, and you aren’t paying for idle resources as you would if you were using a VM as a host.
Imagine a scenario where you are creating an API for a food delivery service. After gathering the business requirements, you begin developing the API. Once the API is built, dependent teams start testing, documenting, and providing feedback. But while gathering feedback you realize you need to pivot and rewrite a significant portion of your API which is going to push back the project. That’s where the code-first approach falls short.
What if instead of developing the API, you started with an API description to establish a shared understanding of how the API is supposed to function. During the process of creating the description you start asking questions like: What functionality will the API have? What will the developer experience be like? What data will it expose and in what format? How will we add new functionality in the future? To answer some of these questions you need to bring in other teams to provide input. You can now get earlier validation on the API functionality from various stakeholders while the cost of making changes are low.
In addition to gathering feedback earlier in the API development process, teams that are consumers of the API can begin testing and integration before the business logic is complete. Dependent teams are not blocked by another team’s progress and can work in parallel during API development process.
Determining what approach to use will depend on your business needs and use case. The code-first approach is a more traditional approach to building APIs, but the popularity of API definition formats has led to the growing support of the design-first methodology.
Design-first allows consumers to better understand of how to use your API and ensures consistency across APIs.
Key benefits of a design-first approach are:
In three easy steps you can design and build your API’s without writing a bunch of lines of code and documentation.
Step 1: Create an API definition using API Management
In API Management, choose API’s from the left menu, and then select OpenAPI.
In the Create from OpenAPI specification window, select Full. Fill in your API values in the necessary fields and select create. Here is a sample Open API file you can use.
Step 2: Generate function apps from Open API Specifications
Now that you have created an OpenAPI specification file you can import the specification via the API Management or Azure Functions VS Code extensions to create the scaffolding for your API.
Getting started with VS Code
Before you get started there’s a few things you will need:
From API Management Extension
In your VS code settings make sure you enable the API Management extension to create Functions.
In the API management VS code extension right click on the API definition you would like to use and select scaffold Azure Functions.
Select your language of choice and where you would like to create the functions.
Watch as it starts to generate function apps from your specification file!
From Azure Functions Extension
You can also use the Azure Functions extension to import a specification to create the scaffolding for your API.
Create a new function app and choose your language of choice (TypeScript, Python, C#).
Select the HTTP trigger(s) from OpenAPI V2/V3 Specification template.
Select your OpenAPI specification file (You can find some samples to try here).
Step 3: Add your business logic
Once it completes you will see it has automatically generated all the necessary functions for your API, preconfigured based on the OpenAPI specification. All you need to do now is add your business logic to each function/route and deploy to Azure.
Getting started with CLI
You can also use the CLI as well to create the scaffolding for your API’s.
autorest --azure-functions-csharp \ --input-file:/path/to/spec.json \ --output-folder:./generated-azfunctions \ --version:3.0.6314 \ --namespace:Namespace
Check out the GitHub repo for more information on how to get started.
The design-first approach to building APIs comes with many benefits, saving you time and effort in bringing your APIs to market. Now you can reap the benefits even more by generating the Azure Functions scaffolding for your APIs from OpenAPI definition in Visual Studio Code. If you have an existing API hosted on Azure and you’re interested in exploring a serverless architecture or consumption-based billing model, this new capability is a fast track for getting started with Azure Functions.
To learn more about building serverless APIs with Azure Functions and Azure API Management, check out a new workshop we recently published on GitHub. | https://techcommunity.microsoft.com/t5/apps-on-azure-blog/build-rest-apis-in-three-steps-with-api-management-and-azure/ba-p/1869627 | CC-MAIN-2022-21 | refinedweb | 994 | 51.28 |
Module development – Bindings Martin Hejtmanek — Jan 9, 2015 modulerelationshipapi and internals During custom development, you sometimes just need to create relationships between objects, rather than defining a child object with a full set of data fields. This article will lead you through the options you have when building M:N relationships in Kentico Hi there, Welcome to a new year and let me continue with my module development article series. We are still just at the beginning of all the great features Kentico can provide in this regard. I have already covered the following topics in this module development series of articles: Defining custom classes and the management UI Leveraging and proper configuration of foreign keys Building parent-child hierarchy in your data Today, I would like to discuss the building of M:N relationships, which we call Bindings in Kentico. I will be using the Binding terminology, because that is the name we use throughout the system. In the last articles, we built a list of Industries and Occupations and extended Contacts to have two properties pointing to these objects. Because both the industry and occupation of a person may change and we only have one field to store the information in a contact, it may be good to store the history of this information. Having the industry and occupation history could help when communicating with contacts during the sales or support process. Imagine that you have a former developer who for whatever unlikely reason switched their focus to be a volunteer nurse in a hospital. If this person contacts you for advice on how to solve a critical problem with a faulty CPR machine, having knowledge of their occupation history could even save a life – you could give advice on a proper technical level rather than just suggesting turning the machine off and on. Let’s build this support using Kentico and the customizations we already have from the previous articles. I am going to build a binding between contacts and industries, with the relationship meaning “this contact has worked in this industry”. Defining data and API We will need to store some data again, therefore we need to start by building a module class, as in the previous articles. While Kentico technically supports bindings without a primary key for historical reasons, the current best practice is to build bindings that have a regular primary key. This has several advantages: Better performance on SQL server due to an ever increasing clustered index based on a single primary key Ability to later add and manage additional data for a binding, such as the role expiration date that we provide within the membership module So start by creating a new class with just three fields – an ID as the primary key and two additional foreign keys. Each of the foreign keys must point to one of the sides that you want to bind together. You will eventually need to decide which of the sides will be the parent of the binding for general operations such as staging etc. The general rule of thumb is to select the side from which the number of bindings will be lower. The reason for this is the inclusion to parent data that I explained in my parent/child article. In our case, we have contacts on one side, and there can be many of them, let’s say tens of thousands. On the other side we have industries, and I expect no more than several tens of them. Now imagine how the complete parent data would look like in both scenarios. Always think about the worst case while planning for scalability and performance. If the parent object is industry and each contact has a binding to each industry, the complete data of an industry will contain several tens of thousands of records for individual contacts, and the system will need to regenerate this full set of data for every new binding, making staging tasks enormously big. If the parent object is contact and each contact has a binding to each industry, the contact data will not include more than several tens of records for all industries belonging to a given contact. When the bindings change, the new staging task for updating the contact will always stay at a sustainable size, not causing any significant overhead. In reality, there will be no more than a few industries assigned to each contact, which makes the data even smaller. So the choice here is clear, the parent object of our binding is going to be contact to keep the individual sets of parent data small enough. Note that if the previous exercise indicates an enormous amount of data on both sides, it may be better to set up synchronization using separate staging tasks for the binding itself, instead of the default inclusion to the parent data. This is however beyond the scope of this article. When defining your data, I suggest that you always use naming with the parent object first so that the hierarchy can clearly be recognized from the names in in your API, allowing developers to have a clear indication of what behavior to expect. For this reason, I will create a class named “MHM.ContactIndustry” (contact first) with fields: ContactIndustryID – integer, primary key (contact first) ContactID – integer, foreign key to contact, reference type Binding (contact first) IndustryID – integer, foreign key to industry, reference type Binding Note that in this case I didn’t use prefixes such as “ContactIndustryContactID”, because it would make the API too overwhelming. I just used the regular names of the target primary keys, which is completely OK in this case, as it is unlikely that we will need to provide just these IDs in a database join with the target objects. Generate the API for ContactIndustryInfo and the corresponding provider on the Code tab. If you look at the code of the generated provider, you can see that the system recognized the binding foreign keys, and generated methods for getting objects by both contact and industry IDs. Because our object has a regular primary key, we need to tell the system that our object type is a binding. Set the IsBinding property of its type info to true as shown in this example: public class ContactIndustryInfo : AbstractInfo<ContactIndustryInfo> { ... public static ObjectTypeInfo TYPEINFO = new ObjectTypeInfo(...) { ... IsBinding = true }; } Bindings are different from regular objects in this way, because you work with them using the target foreign keys that bind objects together, not the primary key. The primary key is only used internally to update potential binding data if needed. The system also automatically ensures that only one binding between two specific objects exists. Even if you attempt to create multiple bindings between two objects, the result is only one. The code generator also automatically sets the parent object based on the first binding foreign key, and defines the second one as a foreign key. As I mentioned in my foreign keys article, the binding configuration in the field editor is just used for code generation, so if you are not happy with it later, you can redefine everything directly in the info code. Notice that the generated code by default uses string constants such as “om.contact” and “mhm.industry”. If your code has a reference to the corresponding libraries and you want to have the code cleaner (and more upgrade-proof), you can use ContactInfo.OBJECT_TYPE and IndustryInfo.OBJECT_TYPE constants instead. Creating a UI for bindings Like other general pages, even binding editing pages can be easily built using a predefined UI template. I am going to show you how to create a UI on both sides. We will create the following tabs: Industries tab in Contact properties with a UI element called ContactIndustries Contacts tab in Industry properties with a UI element called IndustryContacts We already have tabs in both locations, so it will be very easy. If you skipped the parent/child article and are not sure how to set up tabs, please read it first and create the tabs. In both cases, create a new UI element under the tabs using the “Edit bindings” page template. Now navigate to the Properties tab and select our new binding object type in the Binding object type property. Like in the previous article, the object type name is not localized by default, and you need to provide the localization. Note that if you don’t see your object type in that listing, you probably didn’t set the IsBinding property or didn’t recompile your web application project. You also need to set a condition for the parent object as we did when we built the parent/child relationship. As I explained in that article, bindings are technically just a special kind of child object, so the same rules that apply to child objects apply to bindings. Set the where condition to the following: IndustryID = {% ToInt(UIContext.ObjectID) %} Note that the target object type is recognized automatically, so we don’t need to set it. It is simply the side of the binding opposite from the currently edited object. You would only need to manually set it in more complex scenarios where the system could be confused by the context settings. If you follow my hierarchy guidelines, you don’t need to worry about that. To make the resulting UI easier to understand, also set the “List label” property. This text appears above the binding listing to explain to users what the purpose of the page is. I used the following text: “The following contacts have worked in this industry:” The resulting UI displays the following, and we are now able to manage the bindings. One thing I would like to mention is that the editing UI you see is a Uni selector control in multiple selection mode. You can see that it currently only displays the last name of contacts, since that is the display name column of the contact object type. That is not very convenient in this case. I will show you in my next article how you can leverage extenders to customize UI page templates. Let’s now set up the UI from the other side. Repeat the previous steps for the UI element on the other side. To summarize: Create the UI element ContactIndustries under the contact property tabs. Configure it to use the “Edit bindings” page template. On the Properties tab, select “Contact industry” as the binding object type. Set the where condition to restrict the listing based on the parent object. Optionally provide a listing label to explain the context to the user. In this case my where condition is the following: ContactID = {% ToInt(UIContext.ObjectID) %} And I used the following text to explain the content of my new tab: “This contact has worked in the following industries:” The result is the following UI that lets me easily manage the industries of contacts. My marketers can now also update this information manually based on phone conversation or other communication with clients. The choice on which side of the relationship you provide the editing interface is yours, you have full control over it. Imagine typical scenarios that you will need to cover, and decide based on that, or simply based on specific requirements from your clients. Automatic population of the relationship I mentioned that we want to keep the history of contact industries at the beginning of this article. Keeping track of the history manually would be too complicated, so we are going to write a piece of code that will handle that for us automatically. We will leverage object event handlers. I mentioned them in my article about handling foreign keys. Start by creating a new class which will represent our module and define its initialization. I will create it as ~/AppCode/CMSModules/MHM.ContactManagement/MHMContactManagementModule.cs. Here is my code: using CMS; using CMS.DataEngine; using CMS.Helpers; using CMS.OnlineMarketing; using MHM.ContactManagement; [assembly: RegisterModule(typeof(MHMContactManagementModule))] namespace MHM.ContactManagement { /// <summary> /// MHM Contact management module entry /// </summary> public class MHMContactManagementModule : Module { /// <summary> /// Initializes module metadata /// </summary> public MHMContactManagementModule() : base("MHM.ContactManagement") { } /// <summary> /// Fires at the application start /// </summary> protected override void OnInit() { base.OnInit(); ContactInfo.TYPEINFO.Events.Insert.After += InsertOnAfter; ContactInfo.TYPEINFO.Events.Update.Before += UpdateOnBefore; } /// <summary> /// Ensures that a newly inserted object ensures its binding to industry /// </summary> private void InsertOnAfter(object sender, ObjectEventArgs e) { EnsureBinding((ContactInfo)e.Object); } /// <summary> /// Ensures that when a contact industry field changes, the system ensures proper corresponding binding /// </summary> private void UpdateOnBefore(object sender, ObjectEventArgs e) { var contact = (ContactInfo)e.Object; if (contact.ItemChanged("ContactIndustryID")) { e.CallWhenFinished(() => EnsureBinding(contact)); } } /// <summary> /// Ensures that the binding between the contact and its current industry exists /// </summary> private void EnsureBinding(ContactInfo contact) { var industryId = ValidationHelper.GetInteger(contact.GetValue("ContactIndustryID"), 0); if (industryId > 0) { var binding = new ContactIndustryInfo { ContactID = contact.ContactID, IndustryID = industryId }; binding.Insert(); } } } } Let me explain the code in more detail, in the order as the parts appear in the class code: The module class must inherit from the class CMS.DataEngine.Module and must be registered within the system using the RegisterModule assembly attribute. Do not forget to use the attribute, the application won’t know about the module code without it. The module name provided as metadata in the constructor must match the module code name defined when we registered the module. The module has two initialization methods: OnPreInit and OnInit. OnPreInit is always called, even for applications that aren’t yet connected to the database. OnInit is called right after the application connects to the database if the database is available. Both methods are called once at the application start and can provide module initialization code. In our case we are working with data, so it makes more sense to do these actions only when the database is available. That is why I chose OnInit. We attach two object event handlers, one after insert, and one before update of the object. These two event handlers ensure that the system creates corresponding bindings in the database for any industry references defined in our contacts. I chose after insert because that is the point where the contact is already saved and the data is consistent in the DB. I chose before update, because I perform the action based on detection of changes to the field “ContactIndustryID“ to keep maximum performance. This detection must always be done in the before handler. In the after handler, the object change status is already reset. I however perform the actual action after the update is finished using the CallWhenFinished method for the same reason as with the insert handler. Creating the actual binding is simple. You just create a new object with corresponding IDs and insert it. As I explained earlier, bindings have automatic detection of redundancy, so the system automatically performs an upsert operation to maintain only one such object. That is all. Once you have this code present in your application, the system will automatically maintain all industry history of your contacts in the form of M:N bindings. Bindings in macros and API I already mentioned in my parent/child article that bindings are available in a similar way as child objects. The only difference is that they are available in a collection named “Bindings” under their main parent: But also in a collection named “OtherBindings” from the other target object: The same rules as for child objects also apply for the regular API. Use the corresponding collections to access bindings or the binding info provider directly. Wrap up Today, you learned how to set up an editable M:N relationship between two object types. We went through: Creating a binding object type and its API Leveraging a predefined UI template to manage bindings Writing module initialization code Attaching event handlers to automatically maintain data Accessing bindings in macros and the API As I mentioned earlier in this article, I will show you how to customize UI templates using extenders in the next article. Mar 13, 2015 Hi Dan,If you enable staging / export for that binding explicitly, then it either has to have code name or GUID. If you just want to use default staging / export support with parent, you should not configure these at all in the binding class and it should work automatically.If you still struggle with it, please contact support and they will help you. Dan commented on Mar 9, 2015 I created binding class using some of the information in your article. But when trying to export this binding class I'm getting an error that states "Missing code name column information". Do you need to assign a code name field for binding class? If so what should the code name value be and will the user need to enter this? What else is needed for this to support export and staging?Thank you. MartinH commented on Jan 29, 2015 No, DON'T select "Is M:N table" if you want to proceed according to my examples. That is an option for the other case without ID column I mentioned and is more complicated. jkrill-janney commented on Jan 26, 2015 So are you, or are you not supposed to select "Is M:N table" when creating the class? Because it seems that when I do that, my primary doesn't get set to be auto-incrementing. And it seems there's no way to change it once you've gone past that step? So I have to delete everything and start over from scratch? Or am I missing something? MartinH commented on Jan 22, 2015 I am doing all my examples on Module development so far on 8.1, when I switch to 8.2, I will mention it in the articles. Alex commented on Jan 22, 2015 What version of Kentico is this in? I'm using 8.0.14 and I don't see a "Multiple object binding control" form control. Is this in a newer version of Kentico? MartinH commented on Jan 21, 2015 Hi Alex,Here is how you should be able to do it, I will explain it on my examples (just map it to yours):1) Create an integer field named "ContactIndustries" in Contact class2) Set the field up with form control "Multiple object binding control" with the following properties:Binding object type: "mhm.contactindustry"Target object type: "mhm.industry"Display name format: "{%IndustryDisplayName%}"At this point, you are able to see the Uni selector in multiple mode on the editing form, and be able to view and add bindings. Note that in this case changes are not saved immediately, but with the whole form.However you won't be able to remove them, and get following errors in event log while trying it: "ObjectBinding BindObject Message: [BaseInfo.Delete]: Object ID (ContactIndustryID) is not set, unable to delete this object."That is because the underlying API that this control uses requires knowledge of object ID to be able to delete it, but the control was built and tested for our legacy bindings without ID column. To fix that, change these lines in ~/CMSFormControls/System/MultiObjectBindingControl.ascx.csBaseInfo bindingObj = SetBindingObject(item.ToInteger(0), resolvedObjectType);bindingObj.Delete();to:BaseInfo bindingObj = SetBindingObject(item.ToInteger(0), resolvedObjectType);bindingObj = bindingObj.Generalized.GetExisting();bindingObj.Delete();Let me know if that works for you. Alex commented on Jan 21, 2015 How can we use this "many to many" binding in a form field. For example.I have a car object and a color object and i can have cars of different colorsCar TableCarIDNameColor TableColorIDNameCarColor TableCarColorIDCarIDColorIDNow i need a form field to show this and save it. How would I do this? | http://devnet.kentico.com/articles/module-development-bindings | CC-MAIN-2016-44 | refinedweb | 3,280 | 52.39 |
New and Improved
Coming changes to unittest in Python 2.7 & 3.2
The Pycon Testing Goat.
Note
This article started life as a presentation at PyCon 2010. You can watch a video of the presentation:
Since that presentation lots of features have been added to unittest in Python 2.7 and unittest2.
This article also introduces a backport of the new features in unittest to work with Python 2.4, 2.5 & 2.6:
For a more general introduction to unittest see: Introduction to testing with unittest.
There are now ports of unittest2 for both Python 2.3 and Python 3. The Python 2.3 distribution is linked to from the unittest2 PyPI page. The Python 3 distribution is available from:
New and improved: Coming changes to unittest
- Introduction
- unittest is changing
- New Assert Methods
- Deprecations
- Type Specific Equality Functions
- Set Comparison
- Unicode String Comparison
- Add New type specific functions
- assertRaises
- Command Line Behaviour
- Test Discovery
- load_tests
- Cleanup Functions with addCleanup
- Test Skipping
- More Skipping
- As class decorator
- Class and Module Level Fixtures
- Minor Changes
- The unittest2 Package
- The Future
Introduction
unittest is the Python standard library testing framework. It is sometimes known as PyUnit and has a rich heritage as part of the xUnit family of testing libraries.
Python has the best testing infrastructure available of any of the major programming languages, but by virtue of being included in the standard library unittest is the most widely used Python testing framework.
unittest has languished whilst other Python testing frameworks have innovated. Some of the best innovations have made their way into unittest which has had quite a renovation. In Python 2.7 and 3.2 a whole bunch of improvements to unittest will arrive.
This article will go through the major changes, like the new assert methods, test discovery and the load_tests protocol, and also explain how they can be used with earlier versions of Python.
unittest is changing
The new features are documented in the Python 2.7 development documentation at: docs.python.org/dev/library/unittest.html. Look for "New in 2.7" or "Changed in 2.7" for the new and changed features.
An important thing to note is that this is evolution not revolution, backwards compatibility is important. In particular innovations are being brought in from other test frameworks, including test frameworks from large projects like Zope, Twisted and Bazaar, where these changes have already proved themselves useful.
New Assert Methods
The point of assertion methods in unittest is to provide useful messages on failure and to provide ready made methods for common assertions. Many of these were contributed by google or are in common use in other unittest extensions.
- assertGreater / assertLess / assertGreaterEqual / assertLessEqual
- assertRegexpMatches(text, regexp) - verifies that regexp search matches text
- assertNotRegexpMatches(text, regexp)
- assertIn(value, sequence) / assertNotIn - assert membership in a container
- assertIs(first, second) / assertIsNot - assert identity
- assertIsNone / assertIsNotNone
And even more...
- assertIsInstance / assertNotIsInstance
- assertDictContainsSubset(subset, full) - Tests whether the key/value pairs in dictionary full are a superset of those in superset.
- assertSequenceEqual(actual, expected) - ignores type of container but checks members are the same
- assertItemsEqual(actual, expected) - ignores order, equivalent of assertEqual(sorted(first), sorted(second)), but it also works with unorderable types
It should be obvious what all of these do, for more details refer to the friendly manual.
As well as the new methods a delta keyword argument has been added to the assertAlmostEqual / assertNotAlmostEqual methods. I really like this change because the default implementation of assertAlmostEqual is never (almost) useful to me. By default these methods round to a specified number of decimal places. When you use the delta keyword the assertion is that the difference between the two values you provide is less than (or equal to) the delta value. This permits them to be used with non-numeric values:
import datetime delta = datetime.timedelta(seconds=10) second_timestamp = datetime.datetime.now() self.assertAlmostEqual(first_timestamp, second_timestamp, delta=delta)
Deprecations
unittest used to have lots of ways of spelling the same methods. The duplicates have now been deprecated (but not removed).
- assert_ -> use assertTrue instead
- fail* -> use assert* instead
- assertEquals -> assertEqual is the one true way
New assertion methods don't have a fail... alias as well. If you preferred the fail* variant, tough luck.
Not all the 'deprecated' methods issue a PendingDeprecationWarning when used. assertEquals and assert_ are too widely used for official deprecations, but they're deprecated in the documentation. In the next version of the documentation the deprecated methods will be expunged and relegated to a 'deprecated methods' section.
Methods that have deprecation warnings are:
failUnlessEqual, failIfEqual, failUnlessAlmostEqual, failIfAlmostEqual, failUnless, failUnlessRaises, failIf
Type Specific Equality Functions
More import new assert methods are the type specific ones. These provide useful failure messages when comparing specific types.
- assertMultiLineEqual - uses difflib, default for comparing unicode strings
- assertSetEqual - default for comparing sets
- assertDictEqual - you get the idea
- assertListEqual
- assertTupleEqual
The nice thing about these new assert methods is that they are delegated to automatically by assertEqual when you compare two objects of the same type.
Add New type specific functions
- addTypeEqualityFunc(type, function)
Functions added will be used by default for comparing the specified type. For example if you wanted to hookup assertMultiLineEqual for comparing byte strings as well as unicode strings you could do:
self.addTypeEqualityFunc(str, self.assertMultiLineEqual)
addTypeEqualityFunc is useful for comparing custom types, either for teaching assertEqual how to compare objects that don't define equality themselves, or more likely for presenting useful diagnostic error messages when a comparison fails.
Note that functions you hook up are only used when the exact type matches, it does not use isinstance. This is because there is no guarantee that sensible error messages can be constructed for subclasses of the registered types.
assertRaises
The changes to assertRaises are one of my favourite improvements. There is a new assertion method and both methods can be used as context managers with the with statement. If you keep a reference to the context manager you can access the exception object after the assertion. This is useful for making further asserts on it, for example to test an error code:
# as context manager with self.assertRaises(TypeError): add(2, '3') # test message with a regex msg_re = "^You shouldn't Foo a Bar$" with self.assertRaisesRegexp(FooBarError, msg_re): foo_the_bar() # access the exception object with self.assertRaises(TypeError) as cm: do_something() exception = cm.exception self.assertEqual(exception.error_code, 3)
Command Line Behaviour
python -m unittest test_module1 test_module2 python -m unittest test_module1.suite_name python -m unittest test_module.TestClass python -m unittest test_module.TestClass.test_method
The unittest module can be used from the command line to run tests from modules, suites, classes or even individual test methods. In earlier versions it was only possible to run individual test methods and not modules or classes.
If you are running tests for a whole test module and you define a load_tests function, then this function will be called to create the TestSuite for the module. This is the load_tests protocol.
You can run tests with more detail (higher verbosity) by passing in the -v flag:
python -m unittest -v test_module
For a list of all the command line options:
python -m unittest -h
There are also new verbosity and exit arguments to the main() function. Previously main() would always sys.exit() after running tests, making it not very useful to call programmatically. The new parameters make it possible to control this:
>>> from unittest import main >>> main(module='test_module', verbosity=2, ... exit=False)
Passing in verbosity=2 is the equivalent of the -v command line option.
failfast, catch and buffer command line options
There are three more command line options for both standard test running and test discovery. These command line options are also available as parameters to the unittest.main() function.
-f / --failfast
Stop the test run on the first error or failure.
-c / --catch
Control-c during the test run waits for the current test to end and then reports all the results so far. A second control-c raises the normal KeyboardInterrupt exception.
There are a set of functions implementing this feature available to test framework writers wishing to support this control-c handling. See Signal Handling in the development documentation [1].
-b / --buffer
The standard out and standard error streams are buffered during the test run. Output during a passing test is discarded. Output is echoed normally on test fail or error and is added to the failure messages.
The command line can also be used for test discovery, for running all of the tests in a project or just a subset.
Test Discovery
Test discovery has been missing from unittest for a long time, forcing everyone to write their own test discovery / collection system.
python -m unittest discover
The options can also be passsed in as positional arguments. The following two command lines are equivalent:
python -m unittest discover -s project_directory -p '*_test.py' python -m unittest discover project_directory '*_test.py'
There are a few rules for test discovery to work, these may be relaxed in the future. For test discovery all test modules must be importable from the top level directory of the project.
Test discovery also supports using dotted package names instead of paths. For example:
python -m unittest discover package.test
There is an implementation of just the test discovery (well, plus load_tests) to work with standard unittest. The discover module:
pip install discover python -m discover
load_tests
If a test module defines a load_tests function it will be called to create the test suite for the module.
This example loads tests from two specific TestCases:
def load_tests(loader, tests, pattern): suite = unittest.TestSuite() case1 = loader.loadTestsFromTestCase(TestCase1) case2 = loader.loadTestsFromTestCase(TestCase2) suite.addTests(case1) suite.addTests(case2) return suite
The tests argument is the standard tests that would be loaded from the module by default as a TestSuite. If you just want to add extra tests you can just call addTests on this. pattern is only used in the __init__.py of test packages when loaded from test discovery. This allows the load_tests function to continue (and customize) test discovery into the package. In normal test modules pattern will be None.
Cleanup Functions with addCleanup
This is an extremely powerful new feature for improving test readability and making tearDown obsolete! Push clean-up functions onto a stack, at any point including in setUp, tearDown or inside clean-up functions, and they are guaranteed to be run when the test ends (LIFO).
def test_method(self): temp_dir = tempfile.mkdtemp() self.addCleanup(shutil.rmtree, temp_dir) ...
No need for nested try: ... finally: blocks in tests to clean up resources.
The full signature for addCleanup is: self.addCleanup(function, *args, **kwargs). Any additional args or keyword arguments will be passed into the cleanup function when it is called. It saves the need for nested try:..finally: blocks to undo actions performed by the test.
If setUp() fails, meaning that tearDown() is not called, then any cleanup functions added will still be called. Exceptions raises inside cleanup functions will cause the test to report an error, but all cleanup functions will still run.
If you want to manually clear out the cleanup stack you can call doCleanups().
Test Skipping
Decorators that work as class or method decorators for conditionally or unconditionally skipping tests:
@skip("skip this test") def test_method(self): ... @skipIf(sys.version_info[2] < 5, "only Python > 2.5") def test_method(self): ... @skipUnless(sys.version_info[2] < 5, "only Python < 2.5") def test_method(self): ...
More Skipping
def test_method(self): self.skipTest("skip, skippety skip") def test_method(self): raise SkipTest("whoops, time to skip") @expectedFailure def test_that_fails(self): self.fail('this *should* fail')
Ok, so expectedFailure isn't for skipping tests. You use it for test that are known to fail currently. If you fix the problem, so the test starts to pass, then it will be reported as an unexpected success. This will remind you to go back and remove the expectedFailure decorator.
Skipped tests appear in the report as 'skipped (s)', so the number of tests run will always be the same even when skipping.
As class decorator
If you skip an entire class then all tests in that class will be skipped.
# for Python >= 2.6 @skipIf(sys.platform == 'win32') class SomeTest(TestCase) ... # Python pre-2.6 class SomeTest(TestCase) ... SomeTest = skipIf(sys.platform == 'win32')(SomeTest)
Class and Module Level Fixtures
You can now define class and module level fixtures; these are versions of setUp and tearDown that are run once per class or module. in the suite have run the final tearDownClass and tearDownModule are run.
setUpClass and tearDownClass..
The Details.
If there are any exceptions raised during one of these functions / methods then.
Caution!
Note that shared fixtures do not play well with features like test parallelization and they also break test isolation. They should be used with care.
A setUpModule or setUpClass that raises a SkipTest exception will be reported as skipped instead of as an error.
Minor Changes
There are a host of other minor changes, some of them steps towards making unittest more extensible. For full details on these see the documentation:
- unittest is now a package instead of a module
- Better messages with the longMessage class attribute
- TestResult: startTestRun and stopTestRun
- TextTestResult public and the TextTestRunner takes a runnerclass argument for providing a custom result class (you used to have to subclass TextTestRunner and override _makeResult)
- TextTestResult adds the test name to the test description even if you provide a docstring
setuptools test command
Included in unittest2 is a test collector compatible with the setuptools test command. This allows you to run:
python setup.py test
and have all your tests run. They will be run with a standard unittest test runner, so a few features (like expected failures and skips) don't work fully, but most features do. If you have setuptools or distribute installed you can see it in action with the unittest2 test suite.
To use it specify test_suite = 'unittest2.collector' in your setup.py. This starts test discovery with the default parameters from the directory containing setup.py, so it is perhaps most useful as an example (see the unittest2/collector.py module).
The unittest2 Package
To use the new features with earlier versions of Python:
pip install unittest2
-
-
- Tested with Python 2.4, 2.5 & 2.6
- This is the documentation...
Replace import unittest with import unittest2. An alternative pattern for conditionally using unittest2 where it is available is:
try: import unittest2 as unittest except ImportError: import unittest
python -m unittest ... works in Python 2.7 even though unittest is a package. In Python 2.4-2.6 this doesn't work (packages can't be executed with -m).
The unittest2 command line functionality is provided with the unit2 / unit2.py script..
There is also the discover module if all you want is test discovery: python -m discover (same command line options).
The Future
The big issue with unittest is extensibility. This is being addressed in an experimental "plugins branch" of unittest2, which is being used as the basis of a new version of nose:
Please:
- Use unittest2 and report any bugs or problems
- Make feature requests on the Python issue tracker: bugs.python.org
- Join the Testing in Python mailing list
For buying techie books, science fiction, computer hardware or the latest gadgets: visit The Voidspace Amazon Store.
Last edited Tue Aug 2 00:51:34 2011.
Counter... | http://www.voidspace.org.uk/python/articles/unittest2.shtml | CC-MAIN-2017-04 | refinedweb | 2,568 | 56.45 |
#include "resource_slot.h"
A slot is a place in a web-site resource a URL is found, and may be rewritten. Types of slots include HTML element attributes and CSS background URLs. In principle they could also include JS ajax requests, although this is NYI.
Returns true if DirectSetUrl is supported by this slot (html and css right now).
Reimplemented in net_instaweb::HtmlResourceSlot, and net_instaweb::CssResourceSlot.
Detaches a context from the slot. This must be the first or last context that was added. in net_instaweb::HtmlResourceSlot, net_instaweb::AssociationSlot, and net_instaweb::CssResourceSlot.
Called after all contexts have had a chance to Render. This is especially useful for cases where Render was never called but you want something to be done to all slots.
Reimplemented in net_instaweb::CssResourceSlot.
Return the last context to have been added to this slot. Returns NULL if no context has been added to the slot so far.
Returns a human-readable description of where this slot occurs, for use in log messages..
Determines whether rendering the slot deletes the HTML Element. For example, in the CSS combine filter we want the Render to rewrite the first <link href>="">, but delete all the other <link>s.
Calling RequestDeleteElement() also forces set_disable_further_processing(true);
If disable_further_processing is true, no further filter taking this slot as input will run. Note that this affects only HTML rewriting (or nested rewrites) since fetch-style rewrites do not share slots even when more than one filter was involved. For this to persist properly on cache hits it should be set before RewriteDone is called. (This also means you should not be using this when partitioning failed). Only later filters are affected, not the currently running one.
If disable_rendering is true, this slot will do nothing on rendering, neither changing the URL or deleting any elements. This is intended for use of filters which do the entire work in the Context.
If this is true, input info on all inputs affecting this slot will be collected from all RewriteContexts chained to it.
Disables changing the URL of resources (does nothing if slot is not associated with a URL (for example, InlineResourceSlot).
Note that while slots can be mutated by multiple threads; they are implemented with thread-safety in mind – only mainline render their results back into the DOM.
For example, SetResource may be run from a helper-thread, but we would not want that threaded mutation to propagate instantly back into the HTML or CSS DOM. We buffer the changes in the ResoureSlot and then render them in the request thread, synchronous to the HTML filter execution.
Returns true if any of the contexts touching this slot optimized it successfully. This in particular includes the case where a call to RewriteContext::Rewrite() on a partition containing this slot returned kRewriteOk. Note in particular that was_optimized() does not tell you whether your filter optimized the slot! For this you should check output_partition(n)->optimizable(). | https://www.modpagespeed.com/psol/classnet__instaweb_1_1ResourceSlot.html | CC-MAIN-2017-47 | refinedweb | 488 | 65.01 |
In this Programme, you’ll learn Program to Find GCD Using Recursion in C++.
To nicely understand this example to Find GCD Using Recursion, you should have the knowledge of following C++ programming topics:
- Recursion
- C++ if, if…else and Nested if…else
- Functions
- Types of User-defined Functions
Here in this program, we have to provide 2 positive numbers and we’ll get GCD of those two numbers.
Program to Find GCD Using Recursion
#include <iostream> using namespace std; int hcf(int n1, int n2); int main() { int n1, n2; cout << "Enter two positive integers: "; cin >> n1 >> n2; cout << "H.C.F of " << n1 << " & " << n2 << " is: " << hcf(n1, n2); return 0; } int hcf(int n1, int n2) { if (n2 != 0) return hcf(n2, n1 % n2); else return n1; }
Output:
Enter two positive integers: 200 500 H.C.F of 200 & 500 is: 100
- Find Sum of Natural Numbers using Recursion
- Calculate Factorial of a Number Using Recursion
- Check Whether a Number can be Express as Sum of Two Prime Numbers
- Calculate Sum of Natural Numbers
- Check Whether a Number is Prime or Not
Ask your questions and clarify your/others doubts on how to Find G.C.D Using Recursion by commenting. Documentation | https://coderforevers.com/cpp/cpp-program/find-gcd-using-recursion/ | CC-MAIN-2020-16 | refinedweb | 204 | 52.63 |
I am working in a Server for ArcGIS 10.3 environment with a Versioned Enterprise Oracle 12.g gdb.
We have a number of spatial tables/queries that are renewed every day. The Spatial tables are created joining several tables. We want to create spatial Views on the Default gdb.
If I export the required Spatial table/query to a Spatial table they lose the Joined data??
I want to use ArcPY/Python to export the Joined Spatial Table to a Spatial table (SpTable_JN), then create a relationship with a table (Data_JN). This should be a simple task, yet I keep getting an error that the tables cannot be Registered with the database.
Has anyone experienced a similar problem and found a solution?? I would appreciate any advice.
Kind regards,
Clive
Can you please post some of the Python that you are using to export the table and create the relationship? This will make troubleshooting easier. Also post a screenshot of the exact error message.
Thanks!
Hi George,
Thanks for the reply. I am on holiday this week, don't have access to the code at work.
The code below is a rough copy of the logic used. The only error that I get from Create Relationship is:
"the feature is not Registered with the db"
# Import system modules
import arcpy
from arcpy import env
# Set environment settings
env.workspace = "D:/users/user"
## Copy Data
## Copy the Spatial View to Featyre
database = "TESTDB Connection.sde"
arcpy.CopyFeatures_management("TESTDB.SpTable_JNS_VW", "TESTDB.SpTable_JN_VW")
##Create Relationship
SptTbl = "TESTDB.SpTable_JN_VW"
SptData = "TESTDB.Data_JN"
# Create simple relationship class between SpTable_JN layer
# and Data_JN table with additional parcel owner information
SpTable = "SptTbl"
relClass = "TESTDB.EQUIP_RelClass"
forLabel = "SpTable.EQUIP_ID"
backLabel = "SptData.SW_ID"
primaryKey = "SpTable.EQUIP_ID"
foreignKey = "SptData.SW_ID"
arcpy.CreateRelationshipClass_management(SpTable,
SptData,
relClass,
"SIMPLE",
forLabel,
backLabel,
"BACKWARD",
"ONE_TO_MANY",
"NONE",
primaryKey,
foreignKey) | https://community.esri.com/t5/geodatabase-questions/spatial-views-and-relationships/td-p/864909 | CC-MAIN-2020-50 | refinedweb | 304 | 52.05 |
I have a fairly simple Openstack setup for a PoC. 2 nodes, both running Nova, and everything else on node 1. It is running CentOS 6 and was set up using RDO. Importantly I am using Neutron for the networking, with GRE tenant networks set up from the RDO docs for an existing network.
Periodically (every few days I reckon) I lose all communication with Openvswitch (and thus my instances). I know it OVS, because I can SSH into node 2, then connect to node 1 via their private network. The most telling thing I see in the logs is this:
unix:/var/run/openvswitch/db.sock: database connection failed (Protocol error)
In addition OVS is using HUGE amounts of CPU (800% on my 16-core boxes), and when I try and do a clean shutdown, it just never happens because it cannot kill ovsdb-server.
I have done some Googling and found some old suggestions based on older Openstack releases where people had OVS/kernel version mismatches. As I am running the versions from RDO I reckon I can discount that (unless Red Hat have made a massive screw up).
Anyone else seen this? have any suggestions?
PS: Do not tell me to recompile Openvswitch, for various reasons that is not happening in the immediate future.
Which version OpenStack, which version RDO repo are you using? I'm merely guessing with such little detail, but looks as you indicate some kind of issue with OpenvSwitch and your kernel, a runaway OVS process. Could likely be database or messaging agent related.
Check your qpid logs: /var/log/messages for something that shows a reason for disconnect at the time of your instance communication loss. This could reveal as to why there may be messaging disconnects and whether caused by messaging connect failure (external/tertiary cause); or the other way around, caused by OVS disconnect (likely OVS/kernel build issue).
Since RDO is "...tested on a RHEL 6.4", I would be using CentOS 6.4 minimum, rather than 6 as you state. Even better use 6.5 as there are a number of components included in the kernel, rather than patched as required with RDO.
Additional troubleshooting on your behalf is difficult without logs and details of your config, but after you have assessed this, suffice to say that there are known Neutron configuration challenges to overcome with GRE and MTU settings.
In any case for a successful OpenStack build (no matter how basic, it is complicated), you need to start with a supported and up to date build of OS, kernel and OVS. How can you be sure that you can discount "OVS/kernel version mismatch", what versions are you using?
I'd suggest you configure with latest CentOS 6.5 and RDO, then re-post if issue persists (with updated details, logfiles, etc) additionally on RDO forum: as then you will get the distro specific details that you may need.
EDIT: Check dhcp.ini and dnsmask config via these articles for MTU settings, apparrently 1454 should be about right for guest instances when running GRE:
Don't forget there could still be issues with MTU and GRE depending on your kernel and OVS versions, so please advise what versions you have and update your post, so you can assist with others having similar issues as well, On both nodes show results for:
uname -a
rpm -qpi | grep openvswitch
Also take a look at your OVS GRE flows and run some tcpdumps in the relevant qrouter namespace when you are making your large 20G transfer, this guide from RDO will help, take a look at Joe Talerico's great GRE debugging on two node explanation at 60 minutes onwards:
And finally you also need to check you aren't being affected by Generic Receive Offload config as per post #24:
By posting your answer, you agree to the privacy policy and terms of service.
asked
2 years ago
viewed
1319 times
active
1 year ago | http://serverfault.com/questions/584863/openstack-neutron-stabilty-problems | CC-MAIN-2016-18 | refinedweb | 666 | 67.99 |
- Author:
- DrMeers
- Posted:
- April 27, 2010
- Language:
- Python
- Version:
- 1.1
- middleware ssl https redirection href
- Score:
- 2 (after 2 ratings)
See docstrings for details. To use, add to
MIDDLEWARE_CLASSES in
settings.py, and in your
views.py:
from path.to.this.middleware import secure
- Decorate SSL views with
@secure
More like this
- SSL Middleware for Webfaction by parlar 8 years, 4 months ago
- Enable AWS ELB with SSL Termination by zvikico 4 years, 3 months ago
- SSL Middleware by sjzabel 8 years, 7 months ago
- HTTPS redirections middleware with updated URL template tag by xlq 2 years, 11 months ago
- Fake SSL Middleware for Tests and Local Development by DrMeers 5 years, 5 months ago
Very nice, but I had a problem where the /admin/ application was being forced to (insecure) http: even where I entered https:
My solution was to revise the _ _correct__protocol routine:
#
Please login first before commenting. | https://djangosnippets.org/snippets/1999/ | CC-MAIN-2015-40 | refinedweb | 152 | 55.27 |
16 March 2011 09:35 [Source: ICIS news]
SINGAPORE (ICIS)--?xml:namespace>
The maintenance will last 15-20 days, he added. Xianglu Petrochemical had initially planned to roll out maintenance in the second half of the year.
“PX supply has been severely tightened due to major shutdowns in
Xianglu would meanwhile cancel its seven-day maintenance plan due in late March, the source said.
The company had informed its customers that it would cut 50% of contracted commitments in June because of the shutdown.
Discussion levels for Asia spot PX surged to $1,810-1,820/tonne (€1,285-1,292) CFR
( | http://www.icis.com/Articles/2011/03/16/9444290/Chinas-Xianglu-to-start-PTA-turnaround-earlier-on-tight-PX.html | CC-MAIN-2013-48 | refinedweb | 102 | 64.71 |
On 22-Apr-2005, Keith Goodman <address@hidden> wrote: | Is there a clean and simple way for octave to automatically configure | readline (without changing the behavior of readline outside of octave) | so that the up and down arrows search through history for matches to | the characters already on the command line? | | If so, I think it would be a great addition to octave. Try adding $if Octave "\e[A": history-search-backward "\e[B": history-search-forward $endif to your ~/.inputrc file. The "\e[A" and "\e[B" sequences should match up/down arrow in an xterm window. You may need something else for other terminals. jwe | https://lists.gnu.org/archive/html/octave-maintainers/2005-04/msg00055.html | CC-MAIN-2021-25 | refinedweb | 107 | 52.19 |
Combining WCF Data Services, JSONP and jQuery
During Mike Flasko’s session at MIX11,
he showed how to create a JSONP aware WCF Data Service with a JSONPSupportBehavior attribute that is available for download from MSDN code gallery (and is supposed to be a part of Microsoft.Data.Services.Extensions namespace). In this post I’ll show a simple example that uses the attribute and jQuery in order to make a JSONP cross domain call for a WCF Data Service.
Setting up the Environment
First I started by creating two different ASP.NET web applications. The first application includes the calling page and the second includes the WCF Data Service. Then, I created in the second web application an Entity Framework model and the WCF Data Service from that model. I also added the JSONPSupportBehavior.cs class that exists in the link I supplied earlier. The class includes the implementation of JSONPSupportBehavior which implements the WCF IDispatchMessageInspector interface. Also it includes the JSONPSupportBehaviorAttribute which I use in my code. The code is simple and looks like:
[JSONPSupportBehavior]
public class SchoolDataService : DataService<SchoolEntities>
{
// This method is called only once to initialize service-wide policies.
public static void InitializeService(DataServiceConfiguration config)
{
config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
}
}
Making the JSONP Call
In the second web application I’ve created a web form that will hold the JSONP call example. Here is the code that makes the call:
<!DOCTYPE html>
<html>
<head runat="server">
<title>JSONP Call</title>
<script src="" type="text/javascript"></script>
</head>
<body>
<form id="form1" runat="server">
<output id="result">
</output>
</form>
<script type="text/javascript">
$.getJSON('?',
function (response) {
$.each(response.d, function (index, value) {
var div = document.createElement('div');
div.innerHTML = value.Title;
$('#result').append(div);
})
});
</script>
</body>
</html>
Lets explore the web form code:
At first I use Microsoft CDN in order to retrieve the jQuery library. Then, I’ve created a HTML5 output element in order to append to it the output of the call. In the main script I use jQuery’s getJSON function which is calling the WCF Data Service. Pay attention that in order to get a JSON response from the WCF Data Service you need to use the $format=json query string parameter. After I retrieve the data I iterate and create a div element for each course title that was retrieved. This is done in the success function that I wired in the getJSON function call. Here is the output of running the code:
Summary
In the post I supplied a simple example of making a JSONP call to a WCF Data Service using jQuery. This sort of solution can help you to consume WCF Data Services that exists in other domains from your client side. In a follow up post I’ll show the same example using the new datajs library. | http://blogs.microsoft.co.il/gilf/2011/04/24/combining-wcf-data-services-jsonp-and-jquery/ | CC-MAIN-2013-48 | refinedweb | 474 | 53.81 |
A Configuration and Tweak SystemSetting up a configuration system for a game sounds like a trivial task. What exactly is there to store beyond some graphics, sound and controller information? For the final released game, those items cover most of the needs of a simple game, but during development it is handy to store considerably more information. Simple items such as auto loading a specific level you are working on, if bounding boxes are displayed and other items can greatly speed debugging. Of course it is possible to write different systems, one for the primary configuration and another for tweakables, but that is a duplication of effort and not required. Presented in this article is configuration system which supports both standard configuration and development time configuration. This article builds on the CMake environment presented in the articles here and extends from the updated version presented with the SIMD articles here. The code for the article can be found here starting at the tag 'ConfigSystem' and contained in the "Game" directory.
GoalsThe primary goals of the configuration system are fairly simple: provide configuration serialization without getting in the way of normal game programming or requiring special base classes. While this seems simple enough, there are some tricky items to deal with. An example could be the choice of a skin for the in-game UI. The configuration data will be loaded when the game starts up in order to setup the main window's position, size and if it is fullscreen or not, but the primary game UI is created later after the window is setup. While it is possible to simply set a global flag for later inspection by the UI, it is often preferable to keep the data with the objects which use them. In order to solve this sort of delayed configuration, the system maintains a key value store of the configuration such that access is available at any point during execution. Keeping the solution as simple and non-intrusive as possible is another important goal. It should take no more than a minute or two to hook up configuration file persistence without requiring multiple changes. If it is needed to change a local variable to be attached to a configuration item it should not be required to change the local to a global or move it to a centralized location. The system should work with the local value just as well as member variables and globals in order to remain non-intrusive. Finally, while not a requirement of the configuration data directly, it should be possible to control and display the items from within the game itself. For this purpose, a secondary library is supplied which wraps the open source library AntTweakBar () and connects it to variables in the game. This little library is a decent enough starting point to get going after hacking it a bit to fit into the CMake build environment. Eventually the library will likely be replaced with the chosen UI library for the game being written as part of these articles. For the time being though, it serves the purpose as something quick to use with some nice abilities.
Using the SystemA basic overview of using the configuration system is presented through a series of small examples. For full examples the current repository contains an application called XO which is the beginnings of a game and includes a number of useful additions beyond the scope of this article. It currently builds off of SFML 2.0 and includes a test integration of the 'libRocket' UI framework in addition to a number of utilities for logging and command line argument parsing. For more detailed examples, please see the application being created.
The SingletonThe configuration system requires a singleton object to store the in memory database. While there are many ways to implement singletons, the choice of singleton style here is to use a scoped object somewhere within the startup of the game in order to explicitly control creation and shutdown. A very simple example of usage:
#includeCompared to other solutions, such as the Pheonix singleton, there are no questions as to the lifetime of the object which is very important in controlling when data is loaded and saved.
int main( int argc, char** argv ) { Config::Initializer configInit( "config.json" ); return 0; }
Simple Global Configuration ItemsThe first use of the configuration system will show a simple global configuration item. This will be shown without any of the helpers which ease usage in order to provide a general overview of the classes involved. The example is simply a modification of the standby "Hello World!" example:
#includeWhen you run this the first time, the output is as expected: "Hello World!". Additionally, the executable will create the file "HelloWorld.json" with the following contents:
#include #include std::string gMessage( "Hello World!" ); Config::TypedItem< std::string > gMessageItem( "HelloWorld/Message", gMessage ); int main( int argc, char** argv ) { Config::Initializer configInit( "HelloWorld.json" ); // Initialize the message from the configuration. // If the item does not exist, the initial value is retained. Config::Instance->Initialize( gMessageItem ); std::cout << gMessage; // Update the message in the configuration registry. // This example never changes the value but it is // possible to modify it in the json file after // the first run. Config::Instance->Store( gMessageItem ); return 0; }
{ "Registry" : [ { "Name" : "HelloWorld\/Message", "Value" : "Hello World!" } ] }If you edit the value string in the JSON file to be "Goodbye World!" and rerun the example, the output will be changed to the new string. The default value is overwritten by the value read from the configuration file. This is not in itself all that useful, but it does show the basics of using the system.
Auto Initialization and MacrosIn order to ease the usage of the configuration system there is a utility class and a set of macros. Starting with the utility class, we can greatly simplify the global configuration item. Rewrite the example as follows:
#includeThe InitializerList class works with any global or static item and automates the initialization and storage of the item. This system uses a safe variation of static initialization in order to build a list of all items wrapped with the InitializerList class. When the configuration initializer is created and the configuration is loaded, items in the list are automatically configured. At shutdown, the configuration data is updated from current values and as such the user does not need to worry about the global and static types when the InitializerList is in use.
#include #include std::string gMessage( "Hello World!" ); Config::InitializerList gMessageItem( "HelloWorld/Message", gMessage ); int main( int argc, char** argv ) { Config::Initializer configInit( "HelloWorld.json" ); std::cout << gMessage; return 0; }
Using the static initializer from a static library in release builds will generally cause the object to be stripped as it is not directly referenced. This means that the item will not be configured in such a scenario. While this can be worked around, it is not currently done in this implementation. If you require this functionality before I end up adding it, let me know and I'll get it added.A further simplification using macros is also possible. The two lines defining the variable and wrapping it with an initializer list are helpfully combined into the CONFIG_VAR macro. The macro takes the type of the variable, the name, the key to be used in the registry and the default starting value. The macro expands identically to the two lines and is not really needed, but it does make things more readable at times.
Scoped ConfigurationOther forms of configuration such as local scoped variables and members can be defined in similar manners to the global items. Using the InitializerList class is safe in the case of locals and members as it will initialize from the registry on creation and of course store to the registry on destruction. The class automatically figures out if the configuration is loaded and deals with the referenced item appropriately. So, for instance the following addition to the example works as intended:
#includeThe currently configured message will be printed out followed by ": 1234" and the configuration JSON will reflect the new variable. Changing the variable in the configuration file will properly be reflected in a second run of the program and if you changed the value within the program it would be reflected in the configuration file.
#include #include CONFIG_VAR( std::string, gMessage, "HelloWorld/Message", "Hello World!" ); int main( int argc, char** argv ) { Config::Initializer configInit( "HelloWorld.json" ); CONFIG_VAR( int32_t, localVar, "HelloWorld/localVar", 1234 ); std::cout << gMessage << " : " << localVar; return 0; }
Class Member ConfigurationConfiguring classes using the system is not much more difficult than using globals, the primary difference is in splitting the header file declaration from the implementation initialization and adding some dynamic key building abilities if appropriate. Take the following example of a simple window class:
class MyWindow { public: MyWindow( const std::string& name ); private: const std::string mName; Math::Vector2i mPosition; Math::Vector2i mSize; }; MyWindow::MyWindow( const std::string& name ) : mName( name ) , mPosition( Math::Vector2i::Zero() ) , mSize( Math::Vector2i::Zero() ) { }In order to add configuration persistence to this class simply make the following modifications:
class MyWindow { public: MyWindow( const std::string& name ); private: const std::string mName; CONFIG_MEMBER( Math::Vector2i, mPosition ); CONFIG_MEMBER( Math::Vector2i, mSize ); }; MyWindow::MyWindow( const std::string& name ) : mName( name ) , CONFIG_INIT( mPosition, name+"/Position", Math::Vector2i::Zero() ) , CONFIG_INIT( mSize, name+"/Size", Math::Vector2i::Zero() ) { }Each differently named window will now have configuration data automatically initialized from and stored in the configuration registry. On first run, the values will be zero'd and after the user moves the windows and closes them, they will be properly persisted and restored next use. Once again, the macros are not required and are simply a small utility to create the specific type instance and the helper object which hooks it to the configuration system. While it would be possible to wrap the actual instance item within the configuration binding helpers and avoid the macro, it was preferable to leave the variables untouched so as not to affect other pieces of code. This was a tradeoff required to prevent intrusive behavior when converting items to be configured.
Adding New TypesAdding new types to the configuration system is intended to be fairly simple. It is required to add a specialized template class for your type and implement three items: the constructor, a load and save function. The following structure will be serialized in the example:
struct MyStruct { int32_t Test1; uint32_t Test2; float Test3; std::string Test4; };While prior examples only dealt with single types, it is quite simple to deal with composites such as this structure given the underlying nature of the JSON implementation used for serialization; the outline of the implementation is as follows:
#includeThat is everything you need to do to handle your type, though of course we need to fill in the _Load and _Save functions which is also quite simple. The _Load function is:
namespace Config { template<> struct Serializer< MyStruct > : public Config::Item { Serializer( const std::string& key ) : Item( key ) {} protected: bool _Load( MyStruct& ms, const JSONValue& inval ); JSONValue* _Save( const MyStruct& inval ); }; }
inline bool Serializer< MyStruct >::_Load( MyStruct& ms, const JSONValue& inval ) { if( inval.IsObject() ) { const JSONObject& obj = inval.AsObject(); ms.Test1 = (int32_t)obj.at( L"Test1" )->AsNumber(); ms.Test2 = (uint32_t)obj.at( L"Test2" )->AsNumber(); ms.Test3 = (float)obj.at( L"Test3" )->AsNumber(); ms.Test4 = string_cast< std::string >( obj.at( L"Test4" )->AsString() ); return true; } return false; }Obviously this code does very little error checking and can cause problems if the keys do not exist. But other than adding further error checks, this code is representative of how easy the JSON serialization is in the case of reading value data. The save function is just as simplistic:
inline JSONValue* Serializer< MyStruct >::_Save( const MyStruct& inval ) { JSONObject obj; obj[ L"Test1" ] = new JSONValue( (double)inval.Test1 ); obj[ L"Test2" ] = new JSONValue( (double)inval.Test2 ); obj[ L"Test3" ] = new JSONValue( (double)inval.Test3 ); obj[ L"Test4" ] = new JSONValue( string_cast< std::wstring >( inval.Test4 ) ); return new JSONValue( obj ); }This implementation shows just how easy it is to implement new type support thanks to both the simplicity of the library requirements and the JSON object serialization format in use.
The JSON library used works with L literals or std::wstring by default, the string_cast functions are simple helpers to convert std::string to/from std::wstring. These conversions are not code page aware or in anyway safe to use with real unicode strings, they simply trim/expand the width of the char type since most of the data in use is never intended to be presented to the user.
The Tweak UIAs mentioned in the usage overview, the UI for tweaking configuration data is currently incorporated into the beginnings of a game application called XO. The following image shows a sample of configuration display, some debugging utility panels and a little test of the UI library I incorporated into the example. While this discussion may seem related to the configuration system itself, that is only a side affect of both systems going together. There is no requirement that the panels refer only to data marked for persistence, the tweak UI can refer to any data persisted or not. This allows hooking up panels to debug just about anything you could require.
A Basic PanelCreating a basic debug panel and hooking it up for display in the testbed is fairly easy, though it uses a form of initialization not everyone will be familiar with. Due to the fact that AntTweakBar provides quite a number of different settings and abilities, I found it preferable to wrap up the creation in a manner which does not require a lot of default arguments, filling in structures or other repetitive details. The solution is generally called chained initialization which looks rather odd at first but can reduce the amount of typing in complicated initialization scenarios. Let's create a simple empty panel in order to start explaining chaining:
TweakUtils::Panel myPanel = TweakUtils::Panel::Create( "My Panel" );If that were hooked into the system and displayed it would be a default blue colored panel with white text and no content. Nothing too surprising there, but let's say we want to differentiate the panel for quick identification by turning it bright red with black text. In a traditional initialization system you would likely have default arguments in the Create function which could be redefined. Unfortunately given the number of options possible in a panel, such default arguments become exceptionally long chains. Consider that a panel can have defaults for position, size, color, font color, font size, iconized or not, and even a potential callback to update or read data items it needs to function; the number of default arguments gets out of hand. Chaining the initialization cleans things up, though it looks a bit odd as mentioned:
TweakUtils::Panel myPanel = TweakUtils::Panel::Create( "My Panel" ) .Color( 200, 40, 40 ) .DarkText();If you look at the example and say "Huh???", don't feel bad, I said the same thing when I first discovered this initialization pattern. There is nothing really fancy going on here, it is normal C++ code. Color is a member function of Panel with the following declaration:
Panel& Color( uint8_t r, uint8_t g, uint8_t b, uint8_t a=200 );Because Panel::Create returns a Panel instance, the call to Color works off the returned instance and modifies the rgba values in the object, returning a reference to itself which just happens to be the originally created panel instance. DarkText works off the reference and modifies the text color, once again returning a reference to the original panel instance. You can chain such modifiers as long as you want as long as they all return a reference to the panel. At the end of the chain, the modified panel object is assigned to your variable with all the modifications in place. When you have many possible options, this chaining is often cleaner than definition structures or default arguments. This is especially apparent with default arguments where you may wish to add only one option but if that option were at the end of the defaults, you would have to write all the intermediates just to get to the final argument.
Adding a ButtonWith the empty panel modified as desired, it is time to add something useful to it. For the moment, just adding a simple button which logs when it is pressed will be enough. Adding a button also uses the initializer chaining though there is one additional requirement I will discuss after the example:
TweakUtils::Panel myPanel = TweakUtils::Panel::Create( "My Panel" ) .Color( 200, 40, 40 ) .DarkText(); .Button( "My Button", []{ LOG( INFO ) << "My Button was pressed."; } ).End();Using a lambda as the callback for button press, we simply log event. But you may be wondering what the End function is doing. When adding controls to the panel the controls don't return references to the panel, they return references to the created control. In this way, if it were supported, you could add additional settings to the button such as a unique color. The initialization chain would affect the control being defined and not the panel itself even if the functions were named the same. When you are done setting up the control, End is called to return the owning Panel object such that further chaining is possible. So, adding to the example:
static uint32_t sMyTestValue = 0; TweakUtils::Panel myPanel = TweakUtils::Panel::Create( "My Panel" ) .Color( 200, 40, 40 ) .DarkText(); .Button( "My Button", []{ LOG( INFO ) << "My Button was pressed."; } ).End() .Variable< uint32_t >( "My Test", &someVariable ).End();Adds an editable uint32_t variable to the panel. Panel variables can be most fundamental types, std::string and the math library Vector2i, Vector3f and Quaternionf types. With the Vector3f and Quaternionf types, AntTweakBar displays a graphical representation of direction and orientation respectively which helps when debugging math problems. Further and more detailed examples exist in the XO application within the repository.
The current implementation of the TweakUI is fairly preliminary which means that it is both dirty and subject to rapid change. As with the configuration system, it gets the required job done but it is missing some features which would be nice to have. An additional note, in order to prevent direct dependencies, the Vector and Quaternion types are handled in a standalone header. If you do not desire them, simply do not include the header and there will be no dependencies on the math library.
Adding a Panel to the ScreenAdding a panel to the system can be done in a number of ways. The obvious method is to create a panel, hook up a key in the input processing and toggle it from there. This is perfectly viable and is in fact how the primary 'Debug Panel' works. I wanted something a little easier though and as such I added a quick (and dirty) panel management ability to the application framework itself. The function RegisterPanel exists in the AppWindow class where you can hand a pointer to your panel over and it will be added to the 'Debug Panel' as a new toggle button. At the bottom of the 'Debug Panel' in the screen shot, you see the 'Joystick Debugger' button, that button is the result of registering the panel in this manner. It is a quick and simple way to add panels without having to bind specific keys to each panel.
Currently the 'Debug Panel' is bound to the F5 function key and all panels default to hidden. Pressing F5 will open/close the panel, which the first time will be very small in the upper left corner. Move it, resize it and exit the application. The next run it will be shown or not at the location and size of the last exit. Additionally worth noting, the current panel creation may change to better integrate with other sections of code. The chained initialization is likely to remain unchanged but the returned types may be modified a bit.
You need to be a member in order to leave a comment
Sign up for a new account in our community. It's easy!Register a new account
Already have an account? Sign in here.Sign In Now | https://www.gamedev.net/articles/programming/general-and-gameplay-programming/configuration-and-tweaking-r3154/ | CC-MAIN-2018-39 | refinedweb | 3,375 | 50.67 |
JRuby Roundup: Java Integration and Debugging (JSR-45) Improvements
- |
-
-
-
-
-
-
Read later
Reading List
Charles Nutter shows some of the recent improvements in JRuby's Java Integration:
0. Obviously, there's been a lot of performance work.
1. Closures can be passed to any method with an interface as its last argument; the closure will be converted to the target interface type. thread = java.lang.Thread.new { puts 'here' }
2. Interfaces can be implemented using Ruby-cased (underscored) names for all methods now.
class Foo3. Interfaces with bean-like methods can be implemented using attr*. [..]
include java.awt.event.ActionListener
def action_performed(event)
...
end
end
4. Interfaces with boolean methods can be implemented using the question-marked version of the method name.
Another improvement is the use of JSR-45 features to improve debugging (Note: link seems to be broken at the time of publication). JSR-45 allows to map source files and source lines to class files. JSR-45 defines class attributes (SourceDebugExtension) that contain metadata in the SMAP format defined in the JSR to define which input source file a class file comes from. A look at the compiler code shows that line number information is already being added to the generated class files. ASM, used for generating the class files, allows to set line numbers via the
visitLineNumbermethod.
With the addition of SMAP files, regular Java debuggers, eg jdb, can step through Ruby code compiled to bytecode (sample pastie showing how to step through a Ruby file using jdb).
The new capabilities are another step towards a fast debugger for JRuby - however it's important to note that this method only works for code that gets compiled to bytecode. Code that is interpreted still needs to be handled by the existing trace or hook based debuggers. How much of an applications code is turned into bytecode depends on a couple of factors. For instance, there's the risk of running out of PermGen space, which usually ends in a JVM crash/termination. To solve that, the JRuby JIT limits the amount of compiled methods (the limit can be configured).
Rate this Article
- Editor Review
- Chief Editor Action | https://www.infoq.com/news/2008/08/jruby-java-integration-jsr45-dbg | CC-MAIN-2017-43 | refinedweb | 359 | 55.03 |
DESCRIPTION
Today I am going to discuss ‘how to measure input frequency using LPC2148. This is helpful in case when u don’t have an oscilloscope and still want to measure frequency.
The process is very straight forward, you have to measure the number of times input pin goes high and low in 1 second. This measure will give you the frequency.
HOW TO
1.) Set a pin as input and make sure you connect a pull-up register to it. Pull-up is necessary otherwise pin will not go to high state.
2.) Give in the frequency input to the pin and display the frequency on the serial terminal via UART.
Some Insight into the Code:–
IODIR1 |= (1<<18); // will select the Port1.18 as input while (1) { if (!(IOPIN1&(1<<18))) // if the pin is low { while (!(IOPIN1&(1<<18))) // wait until the pin is low } else if (IOPIN1&(1<<18)) // if the pin is high { while (IOPIN1&(1<<18)) // wait until the pin is high } counter++; // increment the counter by 1 freq = increment/2; counter = 0; }
CODE
#include <lpc214x.h> #include <stdint.h> #include "pll.h" #include "timer0.h" #include "uart0.h" #include <stdio.h> static uint32_t increment = 0; uint32_t time, freq; char converted[6]; int main () { PINSEL1 = 0; IODIR0|= (0<<18); // set P0.18 as input pll_init (); // initialsise pll clocks VPBDIV = 0x00; // set pclk = 15 MHz timer0_init (15000000); // initialize timer with 1 sec base Uart0_init (); // initialize uart while (1) { T0TC=0; // set 0 to TC counter T0TCR=(1<<1); //reset timer T0TCR=(1<<0); // enable timer while (T0TC<1) // while loop for a delay of 1 second { if (!(IOPIN0&(1<<18))) // if the pin is low { while(!(IOPIN0&(1<<18))); // wait until the pin is low } else if ((IOPIN0&(1<<18))) // if the pin is high { while (IOPIN0&(1<<18)); // wait until the pin is high } increment++; // increment the counter } freq = increment/2; increment = 0; sprintf (converted, "%u", freq); Uart0_Tx_string (converted); // send via UART Uart0_Tx_string (" Hz n"); while (*converted) { int i =0; converted[i++] = 0; } } }
RESULT
100%
DOWNLOAD
You can buy me a
coffee sensor 🙂
download the CODE below | https://controllerstech.com/frequency-measurement-using-lpc2148/ | CC-MAIN-2019-51 | refinedweb | 353 | 81.73 |
The SDK manager automatically closes in visual studio 2017
recently I installed visual studio 2017 community with Xamarin. I selected everything needed, however when I try to open the SDK manager from the root folder or from the same visual opens and closes immediately.
Here the images of what I selected: enter image description here
- How to run opengl example project on QT 5.9 and Visual studio 2017
I installed QT 5.9 online and then I run an opengl example project named "boxes". It showed error when I built.
:-1: error: This example requires Qt to be configured with -opengl desktop How can I fix it. Thanks.
I tried to fixed it following the link QT and native OpenGL support in MS Windows, but failed. The site said that 'different versions of Qt, for different targets, with or without OpenGL support.'. I'm using QT 5.9 and I can't find any options msvc201x xx-bit OpenGL
- TFS: shared class library and continuous integration
I have lots of projects that share a common internally developed class library.
We use "continuous integration", where checkins trigger builds, tests and deployment.
We currently don't do anything fancy with branching or things like that.
But what we want to do really is
- a) automatically update our client projects with any changes made to the shared library
- b) trigger the build of those client projects, whenever a change is made to shared library.
For us simplicity is the key.
I notice when we set up builds, we can trigger builds from different workspaces, I assume that we can use this feature to trigger the build of the client?
But we would need to somehow distribute the new class library to these client projects, before the build was triggered....so that probably isn't the way to do it.
We can potentially include the share class library as code in all our client apps...does that work? Or is it a REALLY stupid thing to do?...it doesn't feel right (it feels like copy/paste code), but it is simple.
Any ideas...simple is good....text book "correct" is a nice to have.....
- TFS Authenticode Sign Error: Not found signtool.exe
My idea are build my project in TFS using MSBUILD to get .msi files. After that i need to sign .msi file with .pfx file. Then i try to add task with Authenticode Sign which i found in i get an error :
node:4568) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 1): Error: Not found signtool.exe: d:\a\_tasks\authenticode-sign_752fe535-ed47-4c2c-afcf-0778adb0bb12\0.0.2\x64\signtool.exe, my .pfx file is in project dir. my configuration of this tool are :
Maybe whom know how can i fix it ? Please suggest!
- Xamarin.Android with MvvmCross has frequest MvxLayoutInflater.java java.lang.OutOfMemoryError Crashes
I am working with Xamarin.Android with MvvmCross 5.1.1 . My App mostly consumes REST Api and uses ListViews & RecylerViews. I am getting frequest crash reports from stores and the frequest crashes Says
MvxLayoutInflater.java with java.lang.OutOfMemoryError
I have also set the
Java Max Heap sizeto
1G.
but the same is also not fixing this issue. Is there a better way to handle or fix this?
- In activity OnDestroy is it necessary to remove event handlers for buttons and set to null for memory allocation?
Currently have the following code within an activity
//declaration public Android.Widget.Button logInButton; //assigning logInButton = FindViewById<Android.Widget.Button>(Resource.Id.loginButton); logInButton.Click += logInButton_Click;
Then on my OnDestroy method doing the following for memory allocation.
logInButton.Click -= logInButton_Click; logInButton = null; GC.Collect();
Is what I am doing in my OnDestroy really necessary or is it over kill ?
Would just setting the button to null and calling the garbage collector achieve the same result ?
- iOS 11 - UITableview - Delete row animation - Xamarin
I am experiencing weird animation glitches with UITableview on iOS 11 GoldMaster and Xcode 8 (havent updated to Xcode 9 GM) when deleting rows.
The first glitch appears to be when near the bottom of a section or at the end of the tableview the footer or next set of cells appears to slide down and then back up to replace the cell that is being deleted Or the footer appears to blink in. When running on iOS 10 this does not happen.
I have created a sample app to reproduce this.
The application is fairly simple -
Simple tableview with one type of cell that has a label and autolayout constraints (top bottom left and right)
Datasource is a list of lists< strings > for multiple sections.
Below is my delete code in the datasource.
public override void CommitEditingStyle(UITableView tableView, UITableViewCellEditingStyle editingStyle, NSIndexPath indexPath) { Strings[indexPath.Section].RemoveAt(indexPath.Row); tableView.DeleteRows(new NSIndexPath[] { indexPath }, UITableViewRowAnimation.Left); }
I have tried with and without TableView.BeginUpdate() and TableView.EndUpdate() around the lines above
Also I have another slightly more complicated application which i am unable to share code of which appears to have a similar issue with deleting rows where if you delete a row it may cause all the cells to disappear and then reappear. (only mentioning this as it could be related to either the way im deleting rows and handling the update of the datasource in both applications). Both use autolayout as well - the more complicated application modifies constraints during the getCell method of the datasource.
I am wondering has anyone seen any of these issues or could potentially suggestions places to look?
Cheers in advance.
-..
- Connect to MySQL Db in Visual Studio 2017 with F#
I'm trying to connect to a test database I created for far too long and I finally decided to come for help. I am using F#, Visual Studio Community 2017 and MySQL Database. The server is local (Wamp).
I have this for now:
#r "\...\packages\SQLProvider.1.1.8\lib\FSharp.Data.SqlProvider.dll" open FSharp.Data.Sql open FSharp.Data.Sql.Providers open System.Collections.Generic open System [<Literal>] let connString = "Server=localhost; Database=test1; Uid=root" let resPath = __SOURCE_DIRECTORY__ + @"..\packages\MySql.Data.8.0.8-dmr\lib\net452" type sql = SqlDataProvider< Common.DatabaseProviderTypes.MYSQL, connString, ResolutionPath = resPath, UseOptionTypes = true> let db = sql.GetDataContext()
And I can't make it work.. 'resPath' gives the error "This is not a valid constant expression or custom attribute value". If I insert a literal '[]' in front of it (see it below), it gives another error: "Strong name validation failed. (Exception from HRESULT: 0x8013141A)".
let [<Literal>] resPath = __SOURCE_DIRECTORY__ + @"..\packages\MySql.Data.8.0.8-dmr\lib\net452"
After connection with the db I want to add information in it and analyze the data of it using F# still.
I don't know what else should I try. All packages are updated, I installed everything through the NuGet package manager and they are all in the references.. I appreciate any ideas / suggestions.
Thank a lot.
- visual studio 2017 command line fatal error C1083
I just installed the latest version of visual studio 2017 (15.3.4), the "Desktop development with C++" package. I didn't install any other tools/packages as I'm not 100% sure what they're for (I'm just trying to write a simple C++ program).
I was able to compile a simple "hello world" program using the IDE, but when I tried to compile the code using the "Development Command Prompt for VS 2017" I keep getting an error saying
fatal error C1083: Cannot open include file: whatever.h: No such file or directory. NOTE:
whatever.his just a place holder for the header file name.
For example, building the following c++ code
#include <iostream> using namespace std; void main() { cout << "Hello, world, from Visual C++!" << endl; }
with
cl \EHsc hello.cppresults in not being able to find
corecrt.h. I can see the header file under
C:\Program Files (x86)\Windows Kits\10\Include\10.0.15063.0\ucrt. I even tried to include the path with the
\Ioption and sill no luck. Again, I did nothing but the default installation.
This error occurs with all command line build attempts. Specifically, I'm trying to build the latest
boostlibrary (1.65) because I need
filesystemin the program I'm working with. Following their "Getting started guide", I couldn't even get past the first step, executing "bootstrap", as it spit out a bunch of
C1083errors complaining about missing header files.
Anyway, the failed command line build of
hello.cpp(as stated above) suggests something went wrong with my default install of VS2017. I'm using 64-bit Windows 7 Professional SP1. I didn't get any errors during the installation process and I even uninstalled and reinstalled the program to arrive at the same issue. Is there something I'm missing on how to use the command line build approach? | http://codegur.com/44608402/the-sdk-manager-automatically-closes-in-visual-studio-2017 | CC-MAIN-2017-39 | refinedweb | 1,477 | 58.28 |
Hello everyone,
I have a C program and when compiling it in TURBO C i'm getting the following errors. I'm attaching the program below along with the errors.. Kindly help out
# include <math.h>
# include "udf.h"
# include "dynamesh_tools.h"
# define DEFINE_CG_MOTION(wing,dt,vel,omega,time,dtime) {\
double Freq, pi, w, degree, phimax, thmax, phi, dphi, dtheta;\
degree = 20; /*degree step set by journal file*/ \
Freq=1.0/(dtime*degree); /*dtime is the physical time step defined in the journal file*/ \
pi = 3.141592654;\
w=2*pi*Freq; /*Omega (radians)*/\
phimax = 30.0*pi/180; /*flapping amplitude set by journal file*/\
thmax = 30.0*pi/180; /*feathering amplitude set by journal file*/\
phi = -phimax*sin(w*time); /*flapping angle*/\
dphi = -phimax*w*cos(w*time); /*flapping speed*/\
dtheta = thmax*w*sin(w*time); /*feathering speed*/\
vel[0] = 0;\
vel[1] = 0;\
vel[2] = 0;\
omega[0] = dphi;\
omega[1] = dtheta*cos(phi);\
omega[2] = dtheta*sin(phi);\
}
the errors are as follows
Error C:\ TC\ Airfoil 1.c : unable to open include file ' DYNAMESH_TOOLS.H'
Error C:\ TC\ include\udf.h : unexpected end of file in conditional started on line 0
thanks
sam | http://forums.devshed.com/programming/940104-program-help-last-post.html | CC-MAIN-2015-14 | refinedweb | 196 | 60.11 |
Issue
The problem
I am facing a problem as I am managing a dataset each of which entry has associated a dictionary in the form of
dictionary = { 'Step_1': { 'Q':'123', 'W':'456', 'E':'789' }, 'Step_2': { 'Q':'753', 'W':'159', 'E':'888' } }
please note that the dicts have a variable number of
Steps
So I am organising the data into a pandas df like:
dicts 0 {'Step_1': {'Q': '123', 'W': '456', ... 1 {'Step_1': {'Q': '123', 'W': '456', ... 2 {'Step_1': {'Q': '123', 'W': '456', ...
and would like now to do some row-wise operations, like getting each
dict['Step_1']['Q'] value.
I know that it’s generally suggested to not work with dicts as df values, so I’d like to use a good, pythonic (read: fast) solution.
How would you proceed to get each
dict['Step_1']['Q'] row-wise?
What I tried
A simple solution that came to my mind was:
df[dicts]['Step_1']['Q'], but it doesn’t seem to work. (Why? Might it be because this way pandas doesn’t "unpack" the row values, hence cannot access the dicts?)
A more complex solution that I found to work is to use a function to access the data, as follows:
def access(x): return (x["Step_1"]["V"]) df['new_col'] = df['dicts'].apply(lambda x: access(x))
but I don’t quite like this solution. As far as I know, using the apply method is not the optimal way to tackle the problem.
Solution
I think you should reshape your dataset. Check this out:
# Let's say we have this dictionary = { "Step_1": {"Q":"123", "W":"456", "E":"789"}, "Step_2": {"Q":"753", "W":"159", "E":"888"}, } dicts = [dictionary, dictionary] # This would be your dataset
Do this:
better = [] for i, d in enumerate(dicts): # d is a dictionary # Iterate over the keys and values of the dictionary for k, v in d.items(): # Get the step from the key step = k.split("_")[1] # Add this step to the new list better.append({"id": i, "step": step, **v})
This is what
better is now:
[{'id': 0, 'step': '1', 'W': '456', 'Q': '123', 'E': '789'}, {'id': 0, 'step': '2', 'W': '159', 'E': '888', 'Q': '753'}, {'id': 1, 'step': '1', 'W': '456', 'Q': '123', 'E': '789'}, {'id': 1, 'step': '2', 'W': '159', 'E': '888', 'Q': '753'}]
Now you can re-build your DataFrame and perform all kind of row-wise operations:
df = pd.DataFrame(better) id step W Q E 0 0 1 456 123 789 1 0 2 159 753 888 2 1 1 456 123 789 3 1 2 159 753 888
For example, get all of the "Step 1 W" values:
df[df.step == "1"].W # Output: 0 456 2 456 Name: W, dtype: object
Note: it’s probably a good idea to turn the columns to
ints, right now they are stored as
strs
This Answer collected from stackoverflow, is licensed under cc by-sa 2.5 , cc by-sa 3.0 and cc by-sa 4.0 | https://errorsfixing.com/accessing-dictionaries-when-having-dictionary-dataframe-values/ | CC-MAIN-2022-27 | refinedweb | 497 | 74.42 |
Hello,
I've got a model in which I'd like to minimize the entropy of a given text. As soon as I launch the model however, the memory gets exhausted.
using CP; int encrypted_len = ...; range encrange = 0..encrypted_len-1; int encrypted[0..encrypted_len-1] = ...; int xor[0..255][0..255]; execute { for (var i = 0; i < 256; i++) { for (var j = 0; j < 256; j++) { xor[i][j] = i^j; } } } dvar int ckey[0..511] in 0..255; dexpr int plain[i in encrange] = xor[ckey[i % 512]][encrypted[i]]; // shannon dexpr int frequency[o in 0..255] = sum(i in encrange) plain[i] == o; dexpr int freqsum = sum(o in 0..255) frequency[o]; dexpr float prob[o in 0..255] = frequency[o]/freqsum; dexpr float comb[o in 0..255] = prob[o] * lg(prob[o]); dexpr float entropy = -sum(o in 0..255) comb[o]; minimize entropy;
The data file contains the data as an array of ints which are the ordinals for the characters in the string.
I imagine that this model is far from optimal:
- can it better be represented as an LP?
- how can a XOR be done more efficiently (rather than precalculating a XOR table...)
At this moment I won't get any results at all, regardless of the size of the data file. Any help on what is going wrong would be welcome :-)
Kind regards,
Cedric | https://www.ibm.com/developerworks/community/forums/html/topic?id=551b6bff-a43c-4e64-b0da-1b1739ef8161&ps=25 | CC-MAIN-2018-13 | refinedweb | 234 | 68.47 |
Type: Posts; User: Gener4tor
I cant change to xmlFormater because I have already serialized things. I try to explain it in another way:
* I serialized Class "StayedClass" whitch has a member called "Moved" which is of the...
For various reasons I moved a class (MovedClass) from assamblyA to assamblyB and also changed the namespace it was in. Everything else stayed the same.
But now I have a problem reading the Data I...
Is there any solution to this without have to write the string twice in the Code?
Hi...I have a question concerning const/literal-functions and members
c++:
struct StyleConstants
{
static const CString TestConst="UniqueStringWhichShouldBeOnlyWrittenOnceInTheCode";
}
... | http://forums.codeguru.com/search.php?s=b257b80583c185bc7fdc2ec8e591fe1b&searchid=5373885 | CC-MAIN-2014-42 | refinedweb | 107 | 56.55 |
#include <closejob.h>
Inherits KIMAP2::Job.
Detailed Description
Closes the current mailbox.
This job can only be run when the session is in the selected state.
Permanently removes all messages that have the \Deleted flag set from the currently selected mailbox, and returns to the authenticated state from the selected state.
The server will not provide any notifications of which messages were expunged, so this is quicker than doing an expunge and then implicitly closing the mailbox (by selecting or examining another mailbox or logging out).
No messages are removed if the mailbox is open in a read-only state, or if the server supports ACLs and the user does not have the Acl::Expunge right on the mailbox.
Definition at line 53 of file closejob.h.
The documentation for this class was generated from the following files:
Documentation copyright © 1996-2019 The KDE developers.
Generated on Sun Sep 15 2019 05:25:12 by doxygen 1.8.11 written by Dimitri van Heesch, © 1997-2006
KDE's Doxygen guidelines are available online. | https://api.kde.org/kdepim/kimap2/html/classKIMAP2_1_1CloseJob.html | CC-MAIN-2019-39 | refinedweb | 173 | 55.13 |
Hello,
I would like to modify template of PHPUnit test file. Each time I uses right click, "New->PHPUnit" I enter class name that I want to test. PhpStorm does the rest - creates a file and writes a class code that extends PHPUnit base class. My problem is that all the time I need to provide: "use my/company/namespace/classname/etc/;" manualy - in fact I have already put it into dialog so it should be available in a tempalte file via some symbol/variable.
Could anyone please tell me if it is possible to get this value in template file somehow - to automate generation of test classes?
Best regards
Arek
Hi there,
Settings | Editor | File and Code Templates | Templates | PHPUnit Test
Hi,
Thanks for answer but this is an editor of templates - I already know it.
Question is: what is a variable name that I should use to have tested class name that I have specified in a dialog (like on previous screenshot).
Best regards
Arek
I see.
Currently such variable does not exist -- when you type your class name (code completion) first time and have auto import enabled .. it will be added to the use list automatically. -- watch this ticket (star/vote/comment) to get notified on progress. | https://intellij-support.jetbrains.com/hc/en-us/community/posts/206327249-PHPUnit-Test-Template-file-modification?page=1 | CC-MAIN-2020-24 | refinedweb | 210 | 69.72 |
Feature Selection Techniques in Machine Learning
This article was published as a part of the Data Science Blogathon
Introduction
In this article, we will be discussing feature selection and the techniques that are used in feature selection. Let’s see about the curse of dimensionality.
Suppose in a particular dataset if we have many features, this may increase the threshold value which in turn decreases the accuracy of the model. Whenever we give those data to train our model, the model gets confused because it is learning too much data.
To resolve this situation, what we do is that we do not select all the features from a particular dataset. Instead, we apply various techniques of feature selection. In this article, I will be discussing some of the techniques including
- Univariate selection
- Feature importance
- A correlation matrix with Heatmap
These techniques are very efficient and let’s see how to implement them practically.
Before we start coding, let me explain some basic techniques of feature selection.
1. Filter method
First, we will see about the filter method.
In the filter method, we have three sub-components. The first component is that suppose I have all the set of features I will be selecting the best subset.
How I will be selecting the best subset?
We can apply various techniques. Some of the techniques I would like to tell you are the ANOVA test which is a statistical method and other one is the CHI SQUARE test and one more method I would specify is correlation coefficient. These are the three techniques we use to select some important features. The important features mean that these features will be much correlated with the target output.
Let’s take an example. Here I am having an independent variable X and a target variable Y.
In this scenario, you can see that as X increases, Y also increases. So, concerning the correlation coefficient, you can say that X and Y are highly correlated. We have two terms. One is covariance and the other one is a correlation. Covariance maps the value between 0 and 1. Correlation is between -1 to +1. This correlation is for the Pearson correlation coefficient.
The second technique is the wrapper method.
2. Wrapper method
The wrapper method is quite simple when compared to the filter method. Here, you don’t need to apply any statistical kinds of stuff. You have to apply only a simple mechanism. There are three basic mechanisms in this.
Let me explain it.
2.1 Forward selection
This method is used to select the best important features from the particular dataset concerning the target output. Forward selection works simply. It is an iterative method in which we start having no feature in the model. In each iteration, it will keep adding the feature.
Let me explain this with an example.
I am considering A, B, C, D, and E as my independent features. Let F be the output or target feature.
Initially, the model will train with feature A only and record the accuracy. In the next iteration, it will take A and B and train and record accuracy. If this accuracy is better than the previous accuracy, it will be considering adding B in its features set. Likewise, in each iteration, it will be adding different features until it reaches better accuracy.
This is what forward selection is.
Next, we will see about backward selection.
2.2 Backward elimination
This works slightly differently. Let’s discuss the same example. A, B, C, D, and E are independent features. F is the target variable. Now, I will take all the independent features and train the model. Before training the model, I will just apply a statistical test. This test will say that which feature is having the lowest impact on the target variable. This is how backward elimination is implemented.
Let me explain the recursive feature elimination.
2.3 Recursive feature elimination
It is a greedy optimization algorithm. The main aim of this method is to select a best-performing feature subset. It will not randomly select any feature. Rather than, it will find out which is the most useful feature. And in the next iteration, it will add the next useful feature concerning the target variable. Finally, it will rank all the features and eliminate the lower ones.
Remember that the above-mentioned techniques are useful when the dataset is small.
But in reality, you will get a large dataset.
Let’s try to understand the third technique called embedded methods.
3. Embedded methods
Let me start with an example. I am having A, B, C, D, and E as independent variables. F is the target variable. The embedded technique creates a lot of subsets from the particular dataset. Sometimes, it may give A to the model and find the accuracy. It may give AB to the model and find the accuracy. It will try to do all the permutations and combinations. Whichever subset is having the maximum accuracy, that will be selected as a subset of features which will later be given to the dataset for training. That is how an embedded method works.
Let’s go and find out how univariate selection is done.
4. Univariate selection
Univariate selection is a statistical test and it can be used to select those features that have the strongest relationship with the target variable.
Here, I am using the SelectKBest library. Suppose if you give K value as 5. It will find out the best 5 attributes concerning the target variable.
I am using a mobile price classification dataset. you can download it here.
import pandas as pd import numpy as np from sklearn.feature_selection import SelectKBest from sklearn.feature_selection import chi2 data = pd.read_csv("train.csv") X = data.iloc[:,0:20] y = data.iloc[:,-1]
The dataset has many features. We have to select the best one. Because as you know in the curse of dimension, if I increase the number of features after a particular threshold value, the accuracy of the model will decrease.
For that, I am using univariate selection and the SelectKBest.
bestfeatures = SelectKBest(score_func=chi2, k=10) fit = bestfeatures.fit(X,y)
After fitting, I will get two different parameters. One is fit.scores which will calculate the score with respect to the chi-square test value.
dfscores = pd.DataFrame(fit.scores_) dfcolumns = pd.DataFrame(X.columns)
I am concatenating in the next statement for better visualization and I am renaming the column as Specs and Score.
featureScores = pd.concat([dfcolumns,dfscores],axis=1) featureScores.columns = ['Specs','Score']
Here, you can see all the features. The higher the score, the more important the feature is. here, the ram has the highest score.
featureScores
I am printing the top 10 features.
print(featureScores.nlargest(10,'Score'))
These 10 best features can be used to train the model.
Let’s look into the next technique called feature importance.
5. Feature importance
Here, you can get the feature importance of every feature. The higher the score, the more important the feature is. An inbuilt classifier called Extra Tree Classifier is used here to extract the best 10 features.
from sklearn.ensemble import ExtraTreesClassifier import matplotlib.pyplot as plt model = ExtraTreesClassifier() model.fit(X,y)
After fitting, you can see the scores of the features.
print(model.feature_importances_)
The best 10 features can be seen like this.
feat_importances = pd.Series(model.feature_importances_, index=X.columns) feat_importances.nlargest(10).plot(kind='barh') plt.show()
Let me explain the last technique.
6. Correlation matrix with Heatmap
Here, we are checking each and every feature. The correlation can be plotted like this.
import seaborn as sns corrmat = data.corr() top_corr_features = corrmat.index plt.figure(figsize=(20,20)) g=sns.heatmap(data[top_corr_features].corr(),annot=True,cmap="RdYlGn")
Here, the correlation value ranges from 0 to 1. The correlation between price_range and ram is very high and between battery and price_range is low.
Endnotes
These are basic techniques of feature selection. Now, you know that you just have to choose which features are important with respect to the target output. You can use those features to train your model. This is all about feature selection. I hope you enjoyed this article. Start practicing.
The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.
Leave a Reply Your email address will not be published. Required fields are marked * | https://www.analyticsvidhya.com/blog/2021/06/feature-selection-techniques-in-machine-learning-2/ | CC-MAIN-2022-33 | refinedweb | 1,409 | 60.31 |
The trick with Tornado is to understand where this blocking behavior occurs, and how that contrasts to the threaded-approach of handling many client connections concurrently. With a threaded-approach, the concept of a request handler, the code the application developer writes to fulfill user requests, doesn't need to do anything special. The framework takes care of maintaining thread management for any given request. With IO loops, we need to be conscientious of the fact that only one request handler is handled at any given time. So the trick is this — exit from the request handler as quickly as possible.
Serving large blobs helps to illustrate this idea because they typically take a while to complete a request. Imagine you have a request handler that takes care of serving these blobs, and you have 10 users all asking for one at the same time. The request handler can only do one thing at a time, so the first download will happen really fast, while the user at the end of the line isn't so impressed. Tornado has some asynchronous facilities to help mitigate scenarios such as these. Remember, we want to exit the request handler as quickly as possible. Take a look at this really simple blob server.
import os.path import httplib from tornado.options import define, options from tornado.ioloop import IOLoop from tornado.web import Application,\ RequestHandler,\ asynchronous define( 'blob', default = '', type = str ) define( 'chunk_size', default = 1024 * 1024, type = int ) class FileHandler(RequestHandler): content_type = 'application/octet-stream' @asynchronous def get(self, *args): IOLoop.instance().add_callback(self.init_blob) def init_blob(self): try: self.blob = open(options.blob) except IOError, exc: self.set_status(httplib.NOT_FOUND) self.finish(str(exc)) return blob_name = os.path.basename(self.blob.name) blob_size = os.path.getsize(self.blob.name) self.set_header('Content-Type', self.content_type) self.set_header('Content-Length', blob_size) self.set_header( 'Content-Disposition', 'inline; filename="%s"' % blob_name ) IOLoop.instance().add_callback(self.send_chunk) def send_chunk(self): try: chunk = self.blob.read(options.chunk_size) except IOError, exc: self.set_status(httplib.INTERNAL_SERVER_ERROR) self.finish(str(exc)) return if chunk: self.write(chunk) self.flush() IOLoop.instance().add_callback(self.send_chunk) else: self.blob.close() self.finish() def main(): options.parse_command_line() Application([ (r'/', FileHandler), ]).listen(8888) IOLoop.instance().start() if __name__ == "__main__": main()
Notice that the main job of the FileHandler.get() method is to add a callback handler, after which, we return immediately. This allows the IO loop to move onto the next request. The callback we've registered is FileHandler.init_blob(), which prepares the blob file that we're to serve in this request. Once the blob is prepared, we register yet another callback — FileHandler.send_chunk(). This is the meat of the file serving where each IO loop iteration sends a chunk of the blob back to the client. We repeatedly add new callbacks that send chunks to the client till the entire blob has been transferred.
A little awkward compared to a more monolithic request handler? Maybe. This approach does, however, allow you to "chunk" rather than "thread". Just another way of looking at the problem. | http://www.boduch.ca/2012/11/tornado-and-blob-chunking.html | CC-MAIN-2020-05 | refinedweb | 513 | 51.44 |
I finally got Orcas Professional downloaded (I am SO switching to Verizon FiOS on May 1st from Comcast). I got it installed on my Alienware laptop with Vista Ultimate RTM. It imported my settings from VS2005 it seems good so far. The first place I headed, of course, is Walkthrough: Simple Object Model and Query (C#) (LINQ to SQL).
While, my first experience came to a halt fast -) because it can't find the file alink.dll. I see from the Principal Engineer that this is some MDAC or SQL issue that has been around forever. Nope, its happening on any Orcas programs I try to compile on Vista RTM. I'm still parsing how to fix it. Anyone know? The error is:
Required file 'alink.dll with IAlink3' could not be found.
The program is:
using System;using System.Collections.Generic;using System.Linq;using System.Text;using System.Data.Linq;namespace LinqConsoleApp { [Table(Name="Customers")] public class Customer { } class Program { static void Main(string[] args) { } }}
| http://codebetter.com/blogs/sam.gentile/archive/2007/04/21/orcas-professional-beta-1-installed-on-vista.aspx | crawl-002 | refinedweb | 167 | 58.89 |
Modular Farm Management on the NetBeans Platform
The Java Zone is brought to you in partnership with ZeroTurnaround. Discover how you can skip the build and redeploy process by using JRebel by ZeroTurnaround.
I'm Timon Veenstra and I live in Groningen, in the Netherlands. Currently, I am working at Ordina as a software engineer and technical architect.
Below follows an overview of "AgroSense", which is a rich-client platform providing a highly modular farm management system on the NetBeans Platform:
Overview of "AgroSense"
Agriculture is rapidly becoming a very data intensive industry. A multitude of agricultural specialities is needed to transform all this data into useful farm management information, which can lead to higher productivity, less use of pesticides, etc. Specific agricultural knowledge is often owned by small companies, not capable of developing an overall competitive farm management system.
AgroSense provides an open source modular framework in which modules from several sources are combined into one integrated farm management system. Modules can be provided by governmental departments, universities, or commercial businesses. Some will be free of charge, while others will require some form of payment. In the end, the
farmer decides which modules contribute to the business:
History of the Application
This was my first rich-client application based on Maven. We had created some Ant-based NetBeans Platform applications previously. At first, it seemed the Ant library wrappers were no longer needed. Adding things we needed as dependencies of the module worked fine, until we created our second module and started using the API of the first. It took us several days of gremlin hunting before we concluded that we still needed the module wrappers. After adding those to our module architecture and sticking with the designed dependencies, we solved that problem.
The second problem to solve was our persistence layer. We wanted to support multiple persistence units, but still stay in control of our database. We solved this puzzle by creating three database modules, plus a Hibernate/hsqldb library wrapper.
The database API module contains an interface describing how modules should make themselves known, i.e., as wanting to do something with persistence (persistence unit name, etc.), and an interface to obtain an "EntityManager". The second module contains our test database implementation, it provides an "EntityManager" connection to an in-memory Derby database and assumes a persistence unit named "test". The third module contains our local database.
The local database "EntityManagerProvider" creates an "EntityManager" factory for each module that provides persistence information through the service provider mechanism, i.e., implementations of the "PersistingModule" interface in the database API module. This results in a connection per persistence unit. Since Derby allows only one connection to its database, we had to switch to hsqldb instead.
With a little utility class we managed to make Hibernate play along as well:
public class JavaAssistHelper {
public static ClassLoaderProvider createJavaAssistClassLoader() {
ProxyFactory.classLoaderProvider = new ProxyFactory.ClassLoaderProvider() {
public ClassLoader get(ProxyFactory pf) {
return Thread.currentThread().getContextClassLoader();
}
};
return ProxyFactory.classLoaderProvider;
}
}
In our project, we want to support 3rd party modules which should be able to communicate with other modules. For example, a module implementing a business process for crop harvesting needs to be able to retrieve weather information from a weather module implementation. Not only do we need these modules to interact with each other, a lot of data needs to be stored in a database as well.
We basically had two options, the first option was to split up the data model and define the entities in the API module, for example, the "WeatherReport" entity in the Weather API module and the "SensorRun" entity in the Sensor API module. At first, this seemed to be the way to go. We love modularity. But, while we where designing this aproach, we ran into some problems.
Sensor data needs to be linked to a plot. The "Plot" entity was in the area-plot module, as well as in another persistence unit. We foresaw a highly fragmented data model with all kinds of integrity and performance issues. In fact, we think those problems will be bigger than the "big knot in the middle" problem of the second approach, which was a common domain model. Since a lot of modules will be implementing business processes, they will need other modules to provided data.
Whichever option we chose, upgrading to a new model will be difficult. We implemented the second approach by creating a model module containing the core domain entities and their relations. Simplified, we have a "Plot", which can have zero or more "SensorRuns", which can have zero or more "SensorReadings". The sensor-cropcircle module contains a cropcircle-specific sensor implementation. It does not know anything about plots or runs. It does want to store some cropcircle specific attributes which are not in the more generic "SensorReading" entity. So we create a "CropcircleSensorReading" entity and connect it to the "SensorReading" entity from the model module.
The "persistence.xml" in the sensor-cropcircle module only contains two entities, "SensorReading" and "CropcircleSensorReading". The area-plot module imports plots from a shape file, storing them in the "Plot" table, the "persistence.xml" only contains the "Plot" entity. The link-sensor-plot module links sensor data to plots by creating sensor runs, the "persistence.xml" contains "Plot", "SensorRuns", and "SensorReadings".
The service layer has a similar hierarchy, an abstract base service in the model module with some functionality to query, store common domain model entities, with specific services in persiting modules wich extend the base service. The API only exposes domain entities and not implementation specific entities like "CropcircleSensorReading".
Decision to Use the NetBeans Platform
A few years ago we were developing an application for a major European harbor. One of the requirements was that the application be web-based. We started to build some parts of the required components in a proof of concept. Soon we discovered that the amount of JavaScript required to implement the amount of user interaction the customer wanted was blowing up the browser. With loads of modules still to come, there was no way we could build a smooth interacting web-based application.
We successfully challenged the web-based requirement. We narrowed down the choices for a rich-client platform to NetBeans and Eclipse. Our team consisted at least half of junior members. The low learning curve of the NetBeans IDE, combined with the usage of standard Swing in the NetBeans Platform made us choose NetBeans. After a short ramp up period, we reached a pretty high development velocity.
Being an architect as well as a developer, the component-based strategy of the NetBeans Platform fitted like a glove. Architectural problems could be adressed and solved very quickly, not leading to any development delay.
Unfortunately, we never got to finish that application, for other reasons.
After being on another project for almost three years, a new development opportunity arose for a Java rich-client application. With the very good experience from the first NetBeans Platform application, we decided to use it again. We were pleasantly surprised with the leaps of progress the NetBeans team had made on the Maven integration, which we decided to use as our build tool.
Advantages of the NetBeans Platform
I really like structured/modular applications. Especially when things get big and complex, structure is required
to keep a decent level of quality. The NetBeans Platform is not only structured and modular itself, but also more or less forces you to structure your own application in that way as well.
As a technical architect, I am responsible for the quality and the structure of the code. The NetBeans Platform structure puts the developers on the team in the right direction, making my job a lot easier. If, for example, a developer asks how he should document his API, I point him to one of the NetBeans API's as reference. Many advantages spin off from the modular design of the NetBeans Platform. The lookup, loose coupling, public packages, module classloaders, etc.
An active community helps with solving problems.
Another advantage is the NetBeans Platform support for Maven. I really started to like and appreciate Maven as a build tool in my previous project. There are still some things to improve, but the foundation is sound.
Technical Tips
Make a design of your modules and stick to it. That will save you a lot of trouble. Everytime I took an architectural shortcut because of time pressure or whatever reason, it backfired.
We created an event bus in our first NetBeans Platform application. We used it in most applications since and it has proven its worth. The Lookup is not always suitable for fire & forget events. Sometimes you just want to shout something into oblivion, not caring if anyone does something with it. Our event bus provides this for us. I could/should probably write a dedicated article about it because it is a bit beyond the scope of this interview.
If you want to peek at our event bus, go here, and consider the article pending. :)
Future Plans
Currently we only have a few "apps" in our store. With the support of public and commercial parties, we want to evolve it into a full flex "farmers IDE". Weather reports, tax forms, fertilizer shops, automated driverless tractors... the sky is the limit.
If anyone is interested in the AgroSense project or has an idea for a related "app", visit the homepage.
You could even make some money with it in the future if farmers like it. And finally, don't worry, be happy, and have great fun building NetBeans rich-client applications! }} | https://dzone.com/articles/nb-modular-farm-management | CC-MAIN-2015-48 | refinedweb | 1,598 | 56.05 |
In day 2 we will look into the following four labs:
Get links of other parts of MVC article as below: -
In case you are completely a fresher I will suggest to start with the below 4 videos which are 10 minutes approximately so that you can come to MVC quickly.
Lab description
So let’s start with the above 4 labs one by one.
When we started this whole MVC series (Day 1) we started with two concerns regarding behind code:
In this section, let’s concentrate on the first point, i.e., Unit Testing.
Just a quick recap if we need to unit test the below method btngenerateinvoices_click in ASP.NET behind code, we have the following problems:
btngenerateinvoices_click
sender
eventargs
HttpContext
FYI: Many developers would talk about mock tests, rhino mocks, etc., but still it's cryptic and the complication increases with session variables, view data objects, ASP.NET UI controls, creating further confusion.
So what we will do in this section is we will create a simple MVC application and we will do unit tests on the ASP.NET application using the VSTS unit test framework.
The first step is to create a simple MVC project. We will use the same project which we have discussed in MVC (Model View Controller) day 1 LearnMVC.aspx. So in case you do not have any sample projects, please create one using the link above.
The controller class at the end of the day is a simple .NET class. For instance, if you watch your project code closely, you can easily see the customer controller class as shown below:
public class CustomerController : Controller
{
...
....
public ViewResult DisplayCustomer()
{
Customer objCustomer = new Customer();
objCustomer.Id = 12;
objCustomer.CustomerCode = "1001";
objCustomer.Amount = 90.34;
return View("DisplayCustomer",objCustomer);
}
}
In simple words because this is a simple .NET class we can easily instantiate the class and create automated unit tests for the same. That’s is exactly what we are going to do in our next steps.
Let’s use our VSTS unit test framework to test the controller class. In case you are a complete fresher to VSTS unit testing, see this article to get a hang of unit testing:. Add a new project to your solution using the test project solution template.
We need to add a reference to the MVC application in our unit test project so that we can get hold of the controller class.
Once you add the references, you should see the MVC application in your project references as shown in the below figure:
Once you have added the references, open the unit test class, i.e., UnitTest1.cs. In this class create a simple test method called DisplayCustomer which is attributed by the TestMethod attribute as shown in the below code snippet.
DisplayCustomer
TestMethod
If you see the below code snippet we are creating an object of the controller class, invoking the controller action, i.e., DisplayCustomer and then checking if the view name is DisplayCustomer. If they are equal that means the test passes, or else it fails.
[TestMethod]
public void DisplayCustomer()
{
CustomerController obj = new CustomerController();
var varresult = obj.DisplayCustomer();
Assert.AreEqual("DisplayCustomer", varresult.ViewName);
}
Once you have written your test case it’s time to run the test case by clicking on test, windows, and then clicking test view.
On the test view, right click on the test and run the selected test case as shown below:
If everything goes well you should see a green color indicating that the test has passed, or else you should see a red color with details regarding why the test failed.
In the next lab we will discuss about MVC routing. MVC is all about connecting the actions to the controllers and MVC routing helps us achieve this. So be ready to get routed in our next tutorial.
At the end of the day, MVC is nothing but a URL mapped to controllers and controllers mapped to actions.
For example when a user sends a request URL like from the browser, these actions are mapped with MVC controllers, and MVC controllers finally invokes those functions.
Below is a simple table which shows how the whole thing looks like:
Adding further to the complication we can have multiple URLs mapped to one controller or you can have more than one controller mapped to a single URL. For instance you can have and mapped to a single controller called AboutUsController.
It would be great if we have some kind of mechanism by which we can configure these mappings. That’s what exactly MVC routing is meant for. MVC routing helps to easily configure and map the URL with the controllers.
Let’s take the same customer project we had discussed in the previous section.
All route mappings are stored in the “global.asax.cs” code-behind file. So the first step is we need to go and change this file.
All routing mappings are stored into a collection called routes. This collection belongs to the namespace System.Web.Routing. To add a route you need to call the MapRoute method and pass three parameters name, url, and defaults.
routes
System.Web.Routing
MapRoute
name
url
defaults
Below is a print screen of the snippet of the maproute function.
maproute
Name
Url
Defaults
Customer
In case your controller takes parameters, you can use the { brackets. For instance in the below code snippet we have used { to specify that we can have an id parameter. If you want to define the parameter as optional you can use the UrlParameter.Optional enum.
id
UrlParameter.Optional
The first thing is comment the default mapping code. We will explain the default mapping code later.
//routes.MapRoute(
// "Default", // Route name
// "{controller}/{action}/{id}", // URL with parameters
// new { controller = "Home", action = "Index", id =
UrlParameter.Optional }); // Parameter defaults
Put the below code, which means when we call it will invoke the customer controller and will call the displaycustomer function.
displaycustomer
routes.MapRoute(
"View", // Route name
"View/ViewCustomer/{id}", // URL with parameters
new { controller = "Customer", action = "DisplayCustomer",
id = UrlParameter.Optional }); // Parameter defaults
Below is the action function DisplayCustomer which will be invoked.
public ViewResult DisplayCustomer()
{
Customer objCustomer = new Customer();
objCustomer.Id = 12;
objCustomer.CustomerCode = "1001";
objCustomer.Amount = 90.34;
return View("DisplayCustomer",objCustomer);
}
If you run the application you should see the below display.
If you remember we commented the default entry route. Let’s understand what exactly this default code meant.
"{controller}/{action}/{id}" defines that the URL will be automatically named with the convention of controller name / function action name / value. So if you have a controller class with Customer and action function as Search then the URL will be structured as automatically.
In the next lab, we will discuss how to validate an MVC URL. All actions to MVC come via MVC URL and even the data is fed via MVC URL. So in the next section we will see how we can validate the data passed in the MVC URL.
MVC is all about action which happens via URL and data for those actions is also provided by the URL. It would be great if we can validate data which is passed via these MVC URLs.
For instance let’s consider the MVC URL. If anyone wants to view customer details for the 1001 customer code, he needs to enter.
The customer code is numeric in nature. In other words anyone entering a MVC URL like is invalid. The MVC framework provides a validation mechanism by which we can check on the URL itself if the data is appropriate. In this lab we will see how to validate data which is entered on the MVC URL.
The first step is to create a simple customer class model which will be invoked by the controller.
public class Customer
{
public int Id { set; get; }
public string CustomerCode { set; get; }
public double Amount { set; get; }
}
The next step is to create a simple controller class which has a collection of the customer model object which was created in step 1.
public class CustomerController : Controller
{
List<Customer> Customers = new List<Customer>();
//
// GET: /Customer/
public CustomerController()
{
Customer obj1 = new Customer();
obj1.Id = 12;
obj1.CustomerCode = "1001";
obj1.Amount = 90.34;
Customers.Add(obj1);
obj1 = new Customer();
obj1.Id = 11;
obj1.CustomerCode = "1002";
obj1.Amount = 91;
Customers.Add(obj1);
}
[HttpGet]
public ViewResult DisplayCustomer(int id)
{
Customer objCustomer = Customers[id];
return View("DisplayCustomer",objCustomer);
}
}
The controller has a simple DisplayCustomer function which displays the customer using the id value. This function takes the id value and looks up through the customer collection. Below is the downsized reposted code of the function.
[HttpGet]
public ViewResult DisplayCustomer(int id)
{
Customer objCustomer = Customers[id];
return View("DisplayCustomer",objCustomer);
}
If you look at the DisplayCustomer function it takes an id value which is numeric. We would like to put a validation on this ID field with the following constraints:
We want the above validations to fire when the MVC URL is invoked with data.
The validation described in the step 2 can be achieved by applying a regular expression on the route map. If you go to the global.asax file and see the maproute function, the input to this function is the constraint as shown in the below figure.
In case you are new to regular expressions, we would advise you to go through this video on regular expressions:.
So in order to accommodate the numeric validation we need to the specify the regex constraint, i.e., ‘\d{1,2}’ in the maproute function as shown below. ‘\d{1,2}’ in regex means that the input should be numeric and should be a maximum of length 1 or 2 , i.e., between 0 to 99.
You can specify the default values by saying id=0 as shown in the below code snippet. So just in case if some one does not specify the value to the ID, it will take the value as zero by default.
routes.MapRoute(
"View", // Route name
"View/ViewCustomer/{id}", // URL with parameters
new { controller = "Customer", action = "DisplayCustomer",
id = 0 }, new { id = @"\d{1,2}" }); // Parameter defaults
So now that we are done with the validation using the maproute functions, it’s time to test if these validations work. So in the first test we have specified valid 1 and we see that the controller is hit and the data is displayed.
If you try to specify value more than 100 you would get an error as shown below. Please note that the error is confusing but it’s the effect of the regex validation which is specified on the maproute function.
If you try to specify a non-numeric value you should again get the same error which confirms that our regex validation is working properly.
The most important point to note is that these validations are executed even before the request reaches the controller functions.
One of the crucial things in any website development is defining navigations from one page to another page. In MVC everything is an action and those actions invoke the views or pages. We can not specify direct hyperlinks like, this would defeat the purpose of MVC. In other words we need to specify actions and these actions will invoke the URLs.
In the next lab we will look into how to define an outbound URL in MVC views which will help us navigate from one page to another page.
When we talk about web applications, end users would like to navigate from one page to another page. So as a simple developer your first thought would be to just give page names as shown in the below figure.
So for example if you want to go and browse from home.aspx to about.aspx give the anchor a hyper link page name and things should be fine.
By doing that you are violating MVC principles. MVC principles say that the hit should first come to the controller, but by specifying <a href="Home.aspx"> the first hit comes to the view. This bypasses your controller logic completely and your MVC architecture falls flat.
<a href="Home.aspx">
Ideally the actions should direct which page should be invoked. So the hyperlink should have actions in the anchor tags and not the page names, i.e., direct view name.
Let's create three views as shown in the below figure Home, About, and Product.
Let’s create a simple navigation between these three pages as shown below. From the home view we would like to navigate to the about and product views. From the about and product views we would like to navigate back to the home view.
The next step is to define controller actions which will invoke these views. In the below code snippet we have defined three actions: GotoHome (this invokes the home view), Aboutus (this invokes the about view), and SeeProduct (this invokes the product view).
GotoHome
SeeProduct URLs are pointing to the actions rather than absolute page names like home.aspx, aboutus.aspx, etc., which violates the complete MVC principle.
In the third session, we will talk about Partial views, Validation using data annotations, Razor (MVC 3), MVC Windows Authentication, MVC Forms Authentication and a lot more. The next lab will be a bit more advanced as compared to the second day. Below is the link for the third day: Click here for the day third MVC step by step.
A final note: you can watch my .NET interview questions and answers videos on various topics like WCF, Silverlight, LINQ, WPF, Design Patterns, Entity Framework etc.. | http://www.codeproject.com/Articles/259560/Learn-MVC-Model-view-controller-Step-by-Step-in?msg=4387680 | CC-MAIN-2014-52 | refinedweb | 2,265 | 64 |
XAML Usage Syntax
Types and members in the .NET Framework Class Library for Silverlight include a Syntax section. This Syntax section contains the syntax for Visual Basic, C#, and possibly other languages. Some types and members also include XAML usage syntax and XAML values. This topic helps you understand the XAML usage syntax and XAML Values sections.
This topic contains the following sections.
In the .NET Framework Class Library for Silverlight, some types and members have a XAML usage section. The XAML usage is included whenever there is a XAML usage that is technically possible, and also when there is a clear scenario for using XAML.
There are different types of XAML usage sections based on the XAML language specification. For example, there can be XAML Object Element Usage, XAML Attribute Usage, XAML Property Element Usage, and XAML Implicit Collection Usage sections. The following shows an example of a XAML Object Element Usage section for the Button control:
The following shows an example of a XAML Attribute Usage section for the Image.Source property:
The following shows an example of a XAML Property Element Usage section for the UIElement.Clip property:
XAML usage sections include styles, such as the following:
The element that is relevant to a particular XAML usage is typically bold.
Placeholders are typically italic.
The surrounding text typically indicates a literal part of the usage.
For C# and other languages, the syntax shown in the Syntax section is a definition syntax. A definition syntax is a syntax that emulates the code that defines the type or its members within the compiled assembly. For XAML, no equivalent definition syntax exists. This is because the fundamentals of XAML rely on markup elements being backed by existing defined types in assemblies, and showing a "definition" for XAML would be inaccurate. Therefore, XAML syntax is shown as usage syntax. (Visual Basic also shows a usage syntax, but shows the definition syntax as well.)
In many cases, the placeholders and strings in a XAML usage section of the .NET Framework Class Library for Silverlight are defined in a XAML Values section below the XAML usage section. This is similar to the Parameters section that describes the parameters you must pass to a method, based on the parameter names in the Syntax section.
The following shows an example of a XAML Values section for the Button control:
Not all of the placeholders in a XAML usage section are defined in the XAML Values section. Some of the terms are generalized. For more information, see the following sections later in this topic: Generalized Placeholders, Native Text Syntaxes, and Prefixes and Mappings for Silverlight Libraries.
Many of the UI elements in Silverlight support a content model. The content model defines how other UI elements can be placed as child elements of a parent element. For a broader example of a content model, consider the <table> element in HTML 1.0. The intended content model for an HTML table is that the child elements of <table> might include elements such as <tr>, <td>, <th>, or <caption>. Basic HTML does not enforce its content model strongly. However, XAML generally enforces the content models based on the underlying types and properties. Introducing an element that violates the content model will typically result in a XAML parse exception.
The XAML usage for object elements that support a content model presents some simplification of the content model. Most content models definitions use some base type in the underlying code, meaning that the content is a placeholder because it can be any XAML-usable object element that derives from a particular base class. For example, the following is the XAML usage for the Border control:
singleChild is a placeholder that is defined in the XAML Values section. Even without looking at XAML Values, you already know something about the content model because the placeholder starts with single. This means that Border can support a maximum of one object element child. Therefore the following is invalid markup that violates the content model for Border:
Not all XAML elements fit the Border content model. As explained in its XAML Values section, the single child for Border must be an object element that derives from the base class UIElement.
A similar convention is oneOrMore. This string indicates that the class supports a content model where the content property is a collection property. Collections are explained in the next section.
Because XAML is tied to an underlying type system, whenever there is a content model that supports more than one child element, the property that backs the content model is a collection type. Because of XAML language features that optimize the markup form, neither the content property nor any element that represents the collection type are present in many XAML collection syntaxes. Instead, you often see a XAML usage such as the following for DoubleAnimationUsingKeyFrames:
This XAML usage indicates that you can include multiple key frames, each of which are added to an underlying collection, as shown in the following example (this example is actual markup and does not have placeholders):
<DoubleAnimationUsingKeyFrames Storyboard. <LinearDoubleKeyFrame Value="500" KeyTime="0:0:3" /> <DiscreteDoubleKeyFrame Value="400" KeyTime="0:0:4" /> <SplineDoubleKeyFrame KeySpline="0.6,0.0 0.9,0.00" Value="0" KeyTime="0:0:6" /> </DoubleAnimationUsingKeyFrames>
XAML usage for the collections is generally shown in the form that omits implicit syntax elements. There might be a valid syntax where objects are specified explicitly, but this is generally not shown for simplicity and for best-practice reasons.
For more information about collections in XAML, see XAML Overview.
Markup extensions are a XAML language concept whereby the XAML parser can escape the typical string processing of XAML and instead invoke a self-contained string syntax. Each syntax is supported by a particular markup extension class or implementation. The following are specific reference topics for the markup extensions that exist in Silverlight XAML:
For more information about markup extensions, see "Markup Extensions" section of XAML Overview.
A few APIs in the .NET Framework Class Library for Silverlight do include a markup extension as part of their XAML usage. These APIs are cases where setting the property with a particular extension is the primary intended usage scenario, and therefore showing the extension in the usage is more helpful for the beginning user. For example, the following is the XAML usage presented for the FrameworkElement.DataContext property:
FrameworkElement.DataContext is a property that takes an object and thus might conceivably use a property element syntax that wraps an object value to set it. However, the scenario for FrameworkElement.DataContext is that you would never define a "new" object as an object element here. Instead, you would use one of two possible techniques that are hinted at by these placeholders and then explained in the XAML Values in more detail. The following are the descriptions in the XAML Values section of the FrameworkElement.DataContext topic:
binding: A binding expression that can reference an existing data context, or a property within within the ResourceDictionary.
Type converters are a key part of XAML, because type converters enable simple string attribute usages instead of complicated object element usages. Type converters for XAML generally process a string and convert the string to produce an object. This technique is similar to markup extensions. The difference is that type converters are either associated with a particular type (a type that is used as a property value), or with a particular property, whereas a markup extension can conceivably provide a value anywhere in markup, regardless of which property it sets or the related types.
Consider the DataGridLength structure and its type conversion behavior. The following shows the XAML Attribute Usage section for DataGridLength:
The XAML Attribute Usage section lists the possible values for properties. absolutePixelValue is a placeholder that is defined in the XAML Values section. The other values are literal strings that map to behaviors. A XAML parser accesses type converter classes while constructing the object tree to determine which strings are valid for conversion to the relevant object type.
Another type converter to consider is the one used for all values that use the Point type. A Point is simply a data structure that reports an object position in a two-dimensional coordinate space. In other words, a Point reports X and Y coordinate values. The following shows the XAML Attribute Usage section for Point:
The purpose of the XAML usage shown on Point is to show that there are X and Y components to a Point provided in XAML, and that either a comma or space can be the delimiter between X and Y. More specifics (such as that the numeric types for X and Y can have decimal separators) are available in the XAML Values section in the Point topic.
Notice the object property placeholders. There is no realistic way to convey the full range of properties that can use the Point type, so object and property are shown as generalized placeholders.
XAML usage sections in the .NET Framework Class Library for Silverlight use various native text syntaxes. A native text syntax is a string-conversion behavior that is not necessarily represented by a CLR TypeConverter / class-level TypeConverterAttribute and is instead built in to the Silverlight XAML parser's native implementation. These native type syntaxes identify the type that the item will be converted to. The following table describes the double, int, bool, string, xamlName, and uriString placeholders.
Some members in the .NET Framework Class Library for Silverlight use enumerations. The following shows the XAML usage format for all enumerations that have a XAML scenario:
This XAML usage is entirely placeholders. The object property placeholders are used because there is no accurate way to determine which property is being set. However, some enumerations are intended for properties with a particular name. The enumMemberName placeholder indicates that you should specify a string that is the name of one of the enumeration's named constants. The following shows a XAML example that sets the Visibility property of a button to the Visible enumeration constant:
When using enumerations in XAML, be sure to remember the following:
Do not use the qualified form. Instead, use the unqualified constant member name. For example, the following usage does not work: <Button Visibility="Visibility.Visible"/>.
Do not use the value of the constant. In other words, do not rely on the integer value of the enumeration. Although this technique sometimes works, it is not a recommended practice either in XAML or in code. For example, the following usage is not recommended: <Button Visibility="1"/>.
Enumerations in Silverlight can be flagwise, meaning that they are attributed with FlagsAttribute. If you need to specify a combination of values for these enumerations as a XAML attribute value, use the name of each enumeration constant, with a comma (,) between each name, and no intervening space characters.
For property members and event members of a type that is mainly intended as a base type for other practical UI types, the XAML usage sometimes use an italicized placeholder for the parent type, with an initial lowercase letter.
For example, the UIElement type is a very common base type, which literally hundreds of types in Silverlight derive from. The XAML Attribute Usage section for the UIElement.Visibility property looks like the following:
uiElement is a placeholder that represents all of the possible classes that inherit from UIElement and that can support the object element usage for the member being documented. The Button class is one example. The uiElement placeholder is used because all of the classes that inherit the UIElement.Visibility property point to this topic in the .NET Framework Class Library for Silverlight, so some kind of placeholder is necessary. (Note that the literal <UIElement Visibility="Visible"/> with that capitalization and formatting is not accurate and usable XAML, because UIElement is abstract.)
Other base types that are indicated by placeholders in XAML usage include frameworkElement and control.
With the exception of certain base classes as mentioned in the previous section, XAML usage for members of a type are generally shown as if that member was used on an instance of the defining type. For example, the XAML usage in documentation for Grid.ShowGridLines is <Grid ShowGridLines="True"/>, which is markup that you could paste directly into XAML. However, because Grid is an open type, the object element where you might set the Grid.ShowGridLines property is not necessarily literally Grid. The object element could be a class that derives from Grid. This Grid-derived class might appear in future versions of the Silverlight libraries, as part of the Silverlight Toolkit, or as a component of third-party libraries. The Grid-derived class then inherits the existing Grid.ShowGridLines, just like Button inherits Visibility. Therefore, you may also need to consider the capitalized Grid usage as a possible placeholder for any Grid-derived type, if the member is then inherited by other classes. The derived class is typically not in the default XAML namespace, and would require a XAML namespace definition and a prefix.
Open types that are in a XAML namespace that already requires a prefix have similar considerations. In this case, the prefix shown in XAML usage might only apply to the usage of the defining type. Derived types might not be in the same XAML namespace. Using the inherited member first requires that the derived type itself can be resolved by a prefix, which might be a different prefix that is shown in the usage. Once the type is resolved, the inherited member can be resolved based on that type; there is no need to apply a specific prefix to the attributes. For example, HeaderedItemsControl defines members that are inherited by the Legend class that is defined in the Silverlight Toolkit. The usage shown for these members is shown with the object element as sdk:HeaderedItemsControl per the base class definition, for example <sdk:HeaderedItemsControl. But when used on a Legend, the true usage might be <toolkit:Legend (note the toolkit: prefix).
For some properties, it is technically possible to set the value by using either a property element syntax or an attribute syntax. Generally speaking, the XAML usage in .NET Framework Class Library for Silverlight shows the recommended usage. The recommended usage is the least verbose usage, the usage that best aligns with real-world XAML scenarios, or both. If attribute usage is possible, generally only the attribute usage is shown.
You could still attempt to use a more verbose property element form for many properties, and there are legitimate reasons for doing so. One reason is that you can place a Name attribute on an object element that provides a property element's value, which you cannot do for an attribute, and this might make it easier to reference the object value at run time. Also, you might prefer the verbose form as a matter of markup style, or you might use and modify XAML produced by a tool that emits a verbose XAML form for some properties.
There are certain properties where both property element and attribute usages are shown because one of those usages involves a type converter. The type converter usage uses the attribute syntax, other possible usages that do not involve the type converter use the property element syntax. For example, this is the case with any of the properties that use the Brush type.
There are certain properties where both property element and attribute usages are shown because the attribute usage shows a markup extension involved in the usage and that usage is clearly one of the major XAML usage scenarios for the property.
The .NET Framework Class Library for Silverlight includes references for assemblies and types that are outside the core Silverlight assemblies. These types are usable in XAML, but using them requires that you map the XAML namespaces and assemblies to a prefix, or that you rely on Visual Studio design surface behavior that maps the XAML namespace to a prefix automatically. These prefixes are often part of the XAML usages sections and typically follow a convention. If you follow that same convention for consistent mappings at the root level, you can copy and paste the syntax from .NET Framework Class Library for Silverlight topics into your XAML page. The actual XAML namespace mapping is not typically shown in XAML usages; typically the mapping is on the root only, and the root is not shown.
The following shows an example of the XAML Object Element Usage section for the TreeView control, which has the sdk prefix. The sdk prefix is defined to map the XAML namespace, which includes the assembly and CLR namespace where TreeView is defined. Thus, the XAML usage as shown in the TreeView topic is <sdk:TreeView .../>.
The prefix conventions and mappings are described in a separate topic. For more information, see Prefixes and Mappings for Silverlight Libraries. | http://msdn.microsoft.com/en-us/library/dd638655(v=vs.95).aspx | CC-MAIN-2014-42 | refinedweb | 2,814 | 53.21 |
Available with Spatial Analyst license.
Available with Image Analyst license.
Summary
Determines which values from the first input are contained in a set of other inputs, on a cell-by-cell basis.
For each cell, if the value of the first input raster is found in any of the list of other inputs, that value will be assigned to the output raster. If it is not found, the output cell will be NoData.
Illustration
Usage
If all of the inputs are integer, the output raster will be integer. If any of the inputs are floating point, the output will be floating point.
In the list of input rasters, the order is not relevant to the outcome of this tool.
If the Process as multiband parameter is unchecked (process_as_multiband is set to SINGLE_BAND in Python), only the first band of a multiband Input raster or constant value (input_raster_or_constant raster or constant value parameter. If the input raster is a single band or a constant, the number of bands in the output raster will be the same as the maximum number of bands of all multiband rasters from the Input rasters or constant values. If the input raster is a multiband, the output raster will have the same number of bands as the input raster.
If any of the Input rasters or constant values is a raster with a smaller number of bands than the output raster, the missing bands will be interpreted as a band filled with NoData. If any of the entries in the input list is a constant, it will be interpreted as a multiband raster in which the cell values of all bands are the same as the constant and have the same number of bands as the output raster.
Parameters
InList(in_raster_or_constant, in_raster_or_constants, {process_as_multiband})
Return Value
Code sample
This example determines which cell values in the first input are found in the set of other input rasters.
import arcpy from arcpy import env from arcpy.ia import * env.workspace = "C:/iapyexamples/data" outInList = InList("redlandsc1", ["redlandsc2", "redlandsc3"]) outInList.save("C:/iapyexamples/output/outinlist.tif")
This example determines which cell values in the first input are found in the set of other input rasters.
# Name: InList_Ex_02.py # Description: Determines which values from the first input are # contained in the other inputs # Requirements: Image Analyst Extension # Import system modules import arcpy from arcpy import env from arcpy.ia import * # Set environment settings env.workspace = "C:/iapyexamples/data" # Set local variables inRaster1 = "redlandsc1" inRaster2 = "redlandsc2" inRaster3 = "redlandsc3" # Check out the ArcGIS Image Analyst extension license arcpy.CheckOutExtension("ImageAnalyst") # Execute InList outInList = InList(inRaster1, [inRaster2, inRaster3]) # Save the output outInList.save("C:/iapyexamples/output/outinlist")
Environments
Licensing information
- Basic: Requires Image Analyst or Spatial Analyst
- Standard: Requires Image Analyst or Spatial Analyst
- Advanced: Requires Image Analyst or Spatial Analyst | https://pro.arcgis.com/en/pro-app/latest/tool-reference/image-analyst/inlist.htm | CC-MAIN-2022-33 | refinedweb | 468 | 51.99 |
Forum Index
We've just completed our monthly meeting for the month of May 2021. One decision we made is to start providing summaries of the topics discussed. Hence this forum post.
The participants: Walter, Atila, Andrei, Razvan, Max, and me.
The primary topic on our agenda for this meeting was a goal-oriented task list. In place of the old vision documents, we want to start maintaining a list of our current major long-term goals and some more minor short-term goals, broken down into specific tasks. This serves both as the current vision and as a task list for community members looking to make an impact with their contributions. For example, Bugzilla issues that fall under a goal will be labeled as such, so contributors can more effectively focus their attention. The list will also be used to guide some of the work our new strike teams will be doing.
We've got a preliminary list of high-level goals that we will flesh out with specific tasks over the next couple of weeks. For example, major long-term goals are memory safety (e.g., specific bugs, fully enabling DIP 1000 support) and Phobos v2. There were other goals discussed, such as implementing named arguments, improving compile-time introspection, improving Phobos @safety, and more. I don't know yet what the initial version of the final list will look like, but I hope to publish it in the next two or three weeks.
We discussed how to improve error messages. Walter exhorts everyone to please raise a Bugzilla issue for specific error messages you encounter that you think need improvement. Walter also said he is open to accepting the implementation of a command-line switch that enables URLs in error messages to provide more information.
Our next monthly meeting will take place on June 25th. We haven't yet set the agenda, but a portion of it will be devoted to following up on some of the topics discussed in today's meeting.
On Friday, 28 May 2021 at 14:56:08 UTC, Mike Parker wrote:
And Ali!
[...]
Splendid! Communication is king
We've ...
Good,maybe we can raise our sounding slogan "Better C++" again!
On Saturday, 29 May 2021 at 00:26:54 UTC, zjh wrote:
Good,maybe we can raise our sounding slogan "Better C++" again!
There are too many talents in C++
We must attract them.
Only them can make d great!Because they are library writer. They can make d great.
On Saturday, 29 May 2021 at 00:30:38 UTC, zjh wrote:
If I'm a marketer of rust, I'll say rust is abnormal C++.would you like a try?
If I were a marketing person of D, I would say d is a Better c++.will you try?
We can make use of the C++ fame to make us famous.
C++ is always our good friend.
rust
abnormal C++
d
Better c++
C++
always
For example, major long-term goals are memory safety (e.g., specific bugs, fully enabling DIP 1000 support) and Phobos v2.
Phobos v2 is an official plan? That was news for me! Any chance to get a glimpse of what's planned for it?
On Wednesday, 2 June 2021 at 11:10:36 UTC, Dukc wrote:
The overall goal is that it doesn't replace the current Phobos, but sits alongside it. Changed/improved/new functionality goes in the std.v2 namespace (or whatever it looks like in the end) and you can import that alongside existing std packages.
Andrei has talked about it a little here in the forums, and Steve did some preliminary work a while back. Beyond that, I have no details about plans. We'll have more after the workgroup gets going.
I disagree. Attracting C++ folks doesn't seem to work. You may try to lure them with promises of nicer templates and no header files, but after a while they will complain about the garbage collector and go back to their new C++ draft standard.
If you advertise yourself as Better C++, you are instantly setting yourself up for comparison and are turning away everyone who dislikes C++ in the first place.
Rust doesn't advertise itself as "Safe C++", Go doesn't advertise itself as "Native Java", Zig doesn't advertise itself as "Better C".
On Thursday, 3 June 2021 at 22:40:50 UTC, JN wrote:
On Saturday, 29 May 2021 at 00:26:54 UTC, zjh wrote:
OK, how do you position "d"?
What kind of programmers do you want to attract? beginner?pythoner?scripter?
How to attract them and why and what feature attact them?
What slogan of "d", Can d occupy a bigger market?
Does d still occupy the field of system programming
The GC of D is a burden.in the speaking of AA.
D does not owns the advantages of GC , but all the disadvantages of GC .Why not discard it? | https://forum.dlang.org/thread/iqtdfbbbidqoeqyxgqax@forum.dlang.org | CC-MAIN-2021-49 | refinedweb | 828 | 74.29 |
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
How to send SMS to mobile from openerp SMS Gateway
I have install sms_client module in my data base. i am using openerp 7. i configured Gateway List in General tab Username,Password,Receipt No,SMS Mesage all this parameters. i am getting SMS in Message Queue status as Queued so, some one tell me how can i set automated action and schedular to get SMS to mobile.
Hello Mozib Khan,
There are no. of script available in python to send message to mobile.
Here you go! for the sample script.
The thing is you need to get register any message service provider, which might be not free.
Once you will have that service provider credential you can develop your own module and scheduler for this.
Hope this will help you.
Rgds,
Anil.
import urllib2
import cookielib
from getpass import getpass
import sys
username = raw_input("Enter your login Username: ")
passwd = getpass()
message = raw_input("Enter Message: ")
number = raw_input("Enter Mobile number:")
message = "+".join(message.split(' '))
# Logging into the SMS Site
url = '?'
data = 'username=' + username + '&password=' + passwd + '&Submit=Sign+in'
# For Cookies:
cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
# Adding Header detail:
opener.addheaders = [('User-Agent', 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2062.120 Safari/537.36')]
try:
usock = opener.open(url, data)
except IOError:
print "Error while logging in."
sys.exit(1)
jession_id = str(cj).split('~')[1].split(' ')[0]
send_sms_url = '?'
send_sms_data = 'ssaction=ss&Token=' + jession_id + '&mobile=' + number + '&message=' + message + '&msgLen=136'
opener.addheaders = [('Referer', '' + jession_id)]
try:
sms_sent_page = opener.open(send_sms_url, send_sms_data)
except IOError:
print "Error while sending message"
sys.exit(1)
print "SMS has been canny service provider credentials.with out creating new module i can't perform this action??. by using sms_client Module? and i have created server and automated action also. message are coming in to openerp sms gateway->message queue. i want that message to come to my mobile. how can i do that? help me in that | https://www.odoo.com/forum/help-1/question/how-to-send-sms-to-mobile-from-openerp-sms-gateway-82151 | CC-MAIN-2017-09 | refinedweb | 360 | 54.29 |
9656/what-is-hf-registrar-roles-in-fabric-ca-client-command
I am working on a tutorial and i have the following command there:
fabric-ca-client register --id.name admin2 --id.type user --id.affiliation org1.department1 --id.attrs '"hf.Registrar.Roles=peer,user",hf.Revoker=true'
I have a few questions.. Why is admin2 used and not admin? what are the roles of admin2 compared to admin? What is admin and hf.Registrar.Roles?
The "hf.Registrar.Roles" attribute is used to control the type of identity that can be registered by an identity. The "hf.Revoker" attribute is used to control which identities can revoke certificates. admin2 is not a role, its the name of the user.. you can use other names instead of admin2(but it should be defined). As admin2 is a user, its roles compared to admin depends on the privileges it has.. in this case, it is just a normal user.
An admin has special privileges, ex: it can enroll other users.. adminWithoutRoles is a user with no special privileges.
admin2 is just a name given to the admin. You can give any name to the admin.
An admin user is allowed to register certain nodes in the network. So while registering the admin, you need to specify which nodes the admin can register. To specify this, the flag hf.Registrar.Roles is used. In the above command, it is specified that admin2 can register peers and users.
Suppose you want to revoke a certificate or an identity, then any random node cannot do this. There are particular admin nodes in the network that have permission to do this. There are different types of identities: user, peer, orderer, etc. And different admins can be assigned to register/revoke different types of identities. To specify which admin has permission to revoke which type of identity, hf.Registrar.Roles is used. The admin can revoke or register only those types of identities that is allowed to.
Hyperledger fabric includes a modular Certificate Authority ...READ MORE
Let me explain with an example. Suppose ...READ MORE
'o' indicates has-a relationship
'-->' indicates pass by ...READ MORE
You need to setup the Fabric CA ...READ MORE
Summary: Both should provide similar reliability of ...READ MORE
This will solve your problem
import org.apache.commons.codec.binary.Hex;
Transaction txn ...READ MORE
To read and add data you can ...READ MORE
You can find the details you want ...READ MORE
No, there is no restriction of he ...READ MORE
OR
At least 1 upper-case and 1 lower-case letter
Minimum 8 characters and Maximum 50 characters
Already have an account? Sign in. | https://www.edureka.co/community/9656/what-is-hf-registrar-roles-in-fabric-ca-client-command | CC-MAIN-2022-21 | refinedweb | 444 | 61.33 |
Python – Uploading Data
We can upload data to a serer using python’s module which handle ftp or File Transfer Protocol.
We need to install the module ftplib to acheive this.
pip install ftplib
Using ftplib
In the below example we use FTP method to connect to the server and then supply the user credentials. Next we mention the name of the file and the storbinary method to send and store the file in the server.
import ftplib ftp = ftplib.FTP("127.0.0.1")("username", "password") file = open('index.html','rb')("STOR " + file, open(file, "rb")) file.close()
When we run the above program, we observer that a copy of the file has been created in the server.
Using ftpreety
Similar to ftplib we can use ftpreety to connect securely to a remote server and upload file. We can aslo download file using ftpreety. The below program illustraits the same.
from ftpretty import ftpretty # Mention the host host = "127.0.0.1" # Supply the credentisals f = ftpretty(host, user, pass ) # Get a file, save it locally f.get('someremote/file/on/server.txt', '/tmp/localcopy/server.txt') # Put a local file to a remote location # non-existent subdirectories will be created automatically f.put('/tmp/localcopy/data.txt', 'someremote/file/on/server.txt')
When we run the above program, we observer that a copy of the file has been created in the server. | https://scanftree.com/tutorial/python/python-network-programming/python-uploading-data/ | CC-MAIN-2022-40 | refinedweb | 234 | 58.08 |
core team talking about a transition period of at least five years. Currently, Python 2.7 is scheduled to receive official security support until 2020, with Red Hat and probably other vendors providing third-party support for even longer.
But although a lot of things changed going from Python 2 to Python 3, most of them have been accepted as positive minor cleanups. The standard library got a bit better organized. Python 2.2’s “new-style” classes, which are what you want even if you don’t know what that means, are now the default. The
print() function (OK, some people gripe about that, but it’s mostly just sensible consistency).
The big one, the one that people argue endlessly about, the one that causes endless consternation, is the change to strings.
I’d been bitten more than once by the weirdness of Python 2’s string handling, so I was in favor of cleaning that up. But for a long time I was mostly on the sidelines of the debate; my previous job was Python 2 only, and I didn’t have a ton of free time to look into porting open-source stuff I’d released. That changed this past summer, when I suddenly had (thanks to being job-hunting) the time to start porting my personal projects, and more recently when I started a my new job where I’m using Python 3 every day. So I figure it’s a good time to look over this again and say… that I’m still 100% in favor of Python 3, and the string changes.
Secure the perimeter
Are there things that are harder to do in Python 3 because of the string changes? Absolutely. People who write command-line Unix utilities, some types of file handling applications, and code in some other domains suddenly have a tougher time writing code Python will accept than they did when
bytes was named
str. I’m not going to downplay that at all, and in fact it’s the biggest relevant-to-actually-writing-code complaint there is about the change. But as with so many things, it’s a trade-off, and programming in Python didn’t, as a whole, get harder. What happened was more of a shuffling around of where the hard bits are: quite a lot of people who used to have to worry constantly about strings and bytes and encodings and the whole mess of baggage that comes with them now don’t have to worry about that as much. In exchange, some people who used to rarely have to worry about it when writing their code now have to worry about it all the time.
Of course, if that was all there was to the change, then it would have been bad: arbitrarily shifting difficulty from one group of programmers to another isn’t really a good reason for a major change in a language. Even if we can quantify how many people have it easier vs. how many have it harder, it still might be difficult to justify depending on the exact numbers and which groups got disproportionately hit (for example, making life much harder for brand-new programmers in exchange for ease for experienced people might not be a great trade to make, but dumping all the difficulty on the developers of vital low-level tools for everyone else is also a problem).
But the more I’ve explored and thought about it, the more I think arguing about particular domains or numbers of people is actually the wrong way to think about this. The real questions that matter are:
- At what point should string encoding be an issue a developer has to think about?
- Is that the point where developers now have to think about it?
And I think Python 3 produced the right answers to those questions, and baked them into the implementation. The place where developers should have to think about strings is at boundaries, at the points where data leaves or comes under the control of the developer (or the framework or the library being used by the developer).
Digging back into history a bit, this was the policy Django settled on, well before Python 3 came along: in code that you, an application developer, will write with Django, you’ll be working with Unicode everywhere. Text data coming in through HTTP requests gets turned into Unicode. Text data coming out of your database gets turned into Unicode. Functions and methods that work on text are built for Unicode. And when you push a response back out, or send data back to your database, or do whatever thing your application will actually do, it gets turned back into the appropriate bytes on the way out the door.
This was incredibly difficult to set up initially, was backwards-incompatible with how Django had worked previously, and had a large cost in developer time and sanity to rebuild the framework around Unicode. But it was worth it, because it was the right thing. These boundaries simply are the places where encoding and bytes-vs.-Unicode issues should happen. Everywhere else, string data should simply be Unicode, period, and tools should always be built to ensure that.
Unicode and its discontents
And for the most part, if you look at complaints coming from people who say Python 3’s string handling made their lives harder, you’ll see that what really happened was exposure of a boundary they previously weren’t thinking about.
Now, I should point out here that I’m not really knocking the people who were writing, say, command-line and file-handling utilities in Python. For years, Python sort of accepted the status quo of the Unix world, which was mostly to stick its fingers in its ears and shout LA LA LA I CAN’T HEAR YOU I’M JUST GOING TO SET
LC_CTYPE TO
C AGAIN AND GO BACK TO MY HAPPY PLACE. A bit later on it changed to “just use UTF-8 everywhere, UTF-8 is perfectly safe”, which really meant “just use UTF-8 everywhere because we can continue pretending it’s ASCII up until the time someone gives us a non-ASCII or multi-byte character, at which point do the fingers-in-ears-can’t-hear-you thing again”.
So a lot of what you’ll see in terms of complaints about string handling are really complaints that Unix’s pretend-everything-is-ASCII-until-it-breaks approach was never very good to begin with and just gets worse with every passing year.
As of Python 3, suddenly you do have to think about the boundary between your program and the filesystem (or the socket, or other Unix constructs) and think about what encoding text will be in when it arrives and what encoding it needs to be as it departs. To people who are used to just arbitrarily grabbing a file handle and doing
read() and
write() on it this is kind of a rude awakening. But it’s also the right place to require thinking about that. I’m aware some smart people disagree with me on this; Armin seems to think Python is fighting a losing battle by trying to ask Unix to become Unicode-aware, for example.
But even if we set aside the Unix problem, we’re increasingly forced to accept that Unicode isn’t going away, and that acting like everything is ASCII until we learn otherwise doesn’t work as a strategy.
A phone-screen question
When I was job hunting last year, I did a lot of phone screens (and toward the end I adopted a policy, for my own sanity, of hanging up on certain types of phone screens, but that’s another story). There really aren’t that many phone-screen problems out there, so I saw some of them multiple times, including the palindrome question. It comes in a number of variations, but most of them center around proving that you know how to detect a palindrome — a string which reads the same forwards and backwards. Some of the utility here is in getting the candidate to ask questions about things like spacing and punctuation and capitalization, since some famous multi-word palindromes only work when ignoring those (like “Able was I ere I saw Elba” or “A man, a plan, a canal — Panama!”).
On about the third phone screen I’d gotten with a palindrome problem, I decided to throw in a wrinkle none of the interviewers had ever mentioned, asked about or expected to be asked about: OK, so what about Unicode?
The naïve way to check for a palindrome is simply to compare the sequence of characters in both directions. For example:
def is_palindrome(s): return s == s[::-1]
And that’s the sort of solution most phone-screen palindrome questions are going for (you can do it a few different ways, but the slice-with-negative-step version is the most concise and “native” implementation in Python). But let’s throw a wrench in this. Suppose I have two strings I’ve named
palindrome1 and
palindrome2, and throw them through the function above:
>>> print(palindrome1) aña >> is_palindrome(palindrome1) True >>> print(palindrome2) aña >>> is_palindrome(palindrome2) False
Oops.
These strings are both palindromes when considered in terms of graphemes, but only one of them is a palindrome when considered as a sequence of Unicode characters. Here’s what those strings actually are:
>>> palindrome1 = u"a\u00f1a" >>> palindrome2 = u"an\u0303a"
The first one is three Unicode characters long. The second character in it is
U+00F1, officially known as “LATIN SMALL LETTER N WITH TILDE”. The second string is four characters long; its second character is just a lowercase “n”, but its third character is
U+0303 — that’s “COMBINING TILDE” to you. The combining tilde goes with the character it comes after, so when we reverse that string we get:
>>> print(palindrome2[::-1]) ãna
since the combining tilde came after the “a” this time, not after the “n”.
If you want to impress your next phone screener, here’s a better (but still not perfect) version of
is_palindrome():
import unicodedata def is_palindrome(s): if any(unicodedata.combining(c) for c in s): s = unicodedata.normalize('NFC', s) return s == s[::-1]
Yes, writing a correct palindrome checker requires (at least) knowledge of Unicode character classes and normalization. Getting it as correct as possible requires more (especially since the solution above only works for combining characters which reliably have a composed form), but this is a good start and will probably distract any interviewer you’re dealing with long enough to let you move on to something actually relevant to your job qualifications. Just make sure you can answer or bluff your way through any questions you’re asked about it, like why it uses normalization form NFC instead of NFKC.
Yeah, but everybody around here speaks English!
You probably already know that if your company ever offers something for sale in Canada, you’ll need to get bilingual in a hurry. Or that if any part of your infrastructure or supply chain passes through Mexico you’re going to need the ability to work in Spanish.
But even if you live in the United States, even if your full supply chain is in the United States, even if all your co-workers, customers and users are in the United States, and even if you never plan to expand beyond the United States, you’ll still run into some inconvenient facts:
- One out of every five people in the US speaks a language other than English at home.
- In at least five states I know of — two of which might surprise you — languages other than English have official or otherwise specially-recognized legal status.
- Of languages with more than one million speakers in the US, two don’t use any variation of Latin script, and one uses Latin characters but with so many diacritics your head will swim.
- And even English text often contains loanwords or intersperses names or words from other languages, requiring non-ASCII characters to represent.
And there are entire industries in the US which are required by law to be accessible to their customers in the customers’ preferred language. I work in one of them!
Plus, of course, there’s this thing you may have heard of, called “emoji”. Emoji are the real torture test since even some vaguely Unicode-aware approaches fall over when they encounter characters outside the Basic Multilingual Plane, or which make use of some of the combining features required to make modern emoji work correctly.
The result is that every piece of code you write which handles text but doesn’t think up-front about character sets and encoding is a ticking time bomb. Every piece of code you write that assumes everything is ASCII, or assumes everything is UTF-8 staying in the one-byte-per-character range, is a ticking time bomb.
And that’s without getting into the security implications of handling Unicode incorrectly. You have read Unicode Technical Report 36, right? Right?
Rant over
I could write a lot more about the adventures I’ve had with text handling over the course of my career. So could many of my colleagues and co-workers, both present and former (ask Jannis about the time he got to set up and install a new MySQL collation! On second thought, don’t — just buy him a drink or give him a hug). But there’s a common theme to most of those adventures, and that theme is: not asking questions up-front about how text was really being handled in code I was working on or with. I got into those situations, over and over again, because I was working in Python 2, and Python 2 didn’t make me ask those questions.
Python 3 does. Text-handling problems in Python 3 are up-front and in my face. And in your face. When you’re doing something with text that could be unsafe, Python 3 often tells you right away instead of letting you get away with it for the moment, meaning you discover many of those vectors for bugs immediately instead of at 3AM on a Sunday when your pager goes off.
Does being forced to think about these problems up-front make the process of writing code harder? Yes. Are some domains now objectively more difficult to write code for due to the requirement to think about these problems up-front? Yes. But is it better to think about and solve these problems up-front? Yes. Do I think the tradeoff of having to handle text correctly, at the cost of more up-front work, is worth making? Absolutely. Would I ever willingly go back to how I used to write code, littered with text-handling land mines that I might not find until weeks, months or even years later, usually at the least convenient time possible? Not a chance.
So yeah. I’m still in favor of Python 3, string changes and all. | https://www.b-list.org/weblog/2016/jun/10/python-3-again/ | CC-MAIN-2018-17 | refinedweb | 2,531 | 63.12 |
QSqlDatabase - memory leak
qt 5.7 mingw 32bits
#include <qcoreapplication.h> #include <QSqlDatabase> #include <qdebug.h> #include <quuid.h> #include <qsqldatabase.h> #include <qsqlquery.h> bool gExit = false; void handleSignal(int sig) { gExit = true; } void test1() { QString name = QUuid::createUuid().toString(); QSqlDatabase *db = new QSqlDatabase(QSqlDatabase::addDatabase("QOCI", name)); db->setDatabaseName("test"); db->setHostName("localhost"); db->setPort(1521); db->setUserName("system"); db->setPassword("123456"); for (int i = 0; i < 1000; ++i) { if(!db->open()) { qDebug("db->open failed"); delete db; return; } db->close(); qDebug("done %d", i); } delete db; //QSqlDatabase::removeDatabase(name); //unless enable this } int main(int argc, char *argv[]) { signal(SIGINT, handleSignal); QCoreApplication app(argc, argv); test1(); int ret = app.exec(); return ret; }
memory malloc in db.open, but not free in db.close.
- jsulm Qt Champions 2018
@Tancen It would be nice to describe the problem instead simply throwing some code...
You don't delete db if you do a return in
if(!db->open()) { qDebug("db->open failed"); return; }
Why do you create db on the stack?
QSqlDatabase db;
would be enough.
i forget it. but the problem is not about it. you can try and repeat open / close more times.
@Tancen said in QSqlDatabase - memory leak:
you can try and repeat open / close more times.
He knows it.
It tells you that if the base is not opened, you're doing 'return' and this is exactly the problem, 'return' - has finished function and the base itself will not be removed.
for (int i = 0; i < 1000; ++i) { if(!db->open()) { qDebug("db->open failed"); delete db; return; //stop function } db->close(); qDebug("done %d", i); } delete db; //It will not be fulfilled.
thanks.
i know he means.
i tested and get memory leak about open/close
my code is only a mini show
- jsulm Qt Champions 2018
i repeat the for 1000 times, i see the memory increase in windows task manager and not decrease on the for finished
- SGaist Lifetime Qt Champion
Hi,
No OS guaranties that when you call delete, the memory is returned instantly to said OS. All the more if you are using a tight loop like that.
Note that you are also not using the QSqlDatabase API correctly.
it's same as wait it some time.
i tested the memory free at QSqlDatabase::removeDatabase be called | https://forum.qt.io/topic/87062/qsqldatabase-memory-leak/4 | CC-MAIN-2019-09 | refinedweb | 386 | 68.16 |
Wang,
On Wed, 2005-04-06 at 09:45, Wang Jian wrote:
> Hi jamal,
>
>
> On 06 Apr 2005 08:12:50 -0400, jamal <hadi@xxxxxxxxxx> wrote:
>
> > Essentially rewrite the classid/flowid. In the action just set
> > skb->tc_classid and u32 will make sure the packet goes to the correct
> > major:minor queue. This already works with kernels >= 2.6.10.
> >
>
> One question:
>
> Can I set skb->tc_classid in a qdisc and pass this skb to the specified
> class to handle?
>
Well, remember the qdisc asks the filter what class a packet belongs to
every time. The best place to change things is at that point. Why not
tell the qdisc whatever class you want it to know instead of passing
metadata like skb->tc_classid for it to further intepret?
>
> My idea is that the action itself dynamically creates classes. So you
> don't need any other rules. It is looks like #2 but the work is done in
> dynfwmark action. The workflow
>
> 0. setup, and create flow entry hash;
> 1. when a packet arrives, check if it is a flow or should be a new flow;
> 2. alloc a class id for this flow;
so far so good. But you probably wanna drop the packet after a certain
upper bound for number of flows is reached.
> 3. dynamically create a htb class using the <flow parameter>
> 4. skb->tc_classid = allocated_ht_class_id
>
> 1.5 if can't create flow, skb->tc_classid = default_class_id
>
This may be a little tricky with locks etc. Maybe you could send a
netlink message to the qdisc. Perhaps a workthread which creates these
classes etc. My suggestion was:
a) qdisc asks filter "what class is this packet?"
b) filter responds "class 1:234"
c) qdisc says "hrm, dont have that class. call create_default_class"
d) qdisc::create_default_class()
create class and set default parameters. Default parameters are
passed from user space (eg 10kbps rate etc).
e) enqueue packet on new queue.
Also now you have to create the dynamic class rewriter action and change
a qdisc ;-> What i was saying earlier is that the create_default_class
maybe a good addition to qdiscs.
The question is do you keep those queues forver or kill them after a
period of no activity?
The idea of creating them permanently upto a certain max numbers is
probably ok. It will be no different than creating them via a script.
> > One thing i noticed is that you dont have multiple queues actually in
> > your code - did i misread that? i.e multiple classes go to the same
> > queue.
> >
>
> Didn't you notice that it is a classless qdisc? The queue is per qdisc,
> not per class :)
Sorry missed that ;->
> It is the parent class's duty to define what kind of
> flow this qdisc handle. It is very generic, you can even mix UDP/TCP
> flows together but I don't think it is good idea.
>
Doesnt that defeat the purpose of it being "per flow queue" ;->
> Think the scenario
>
> 1. You have VoIP flows (UDP)
> 2. You have pptp vpn flows (eh, per flow can't handle it at this time, I
> think)
>
> You create HTBs for them, and use filter to classify them. And then, use
> per flow qdisc as the only child of these HTBs. per flow qdisc will
> guarantee the per flow QoS.
>
I got this. Of course you understand creating all these filters and
queues is taking system resources. It may be sufficient to create
heuristics like a simple priority qdisc where you put voip as first
class citizen, vpn as second and rest as best effort.
> > The only unfriendliness to user space is in #1 where you end up having a
> > script creating as many classes as you need. It is _not_ bloat because
> > you dont add any code at all. It is anti-bloat actually ;->
> >
>
> In this way, it is very hard to write good user interface in userspace.
> My current implementation takes good user-friendly (or user space
> scripter/programmer friendly) into serious consideration.
>
It is not hard - its annoying in user space ;->
You could write a for loop to produce them. such as:
for i stating from 1 to 1000
tc class add .... rate 10Kbps classid 1:$i
endfor
Trying to list those classes will list at least a 1000 lines of course.
But this listing issue will happen even when you dynamically create
these classes.
So annoyance is more descriptive.
> The 'bloated' comment is on the
>
> "If it carries an unique id or something like in its own namespace, then
> it can be clean and friendly for userspace"
>
I am not disagreeing with you. If you want to go that path instead what
i am saying is its more coding.
> I think too much and my thought is jumpy ;) You even didn't notice that
> I gave another suggestion on implementation in this sentence.
>
What? You did?
just kidding;->
> > I am suprised no one has compared all the rate control schemes.
> >
> > btw, would policing also suffice for you? The only difference is it will
> > drop packets if you exceed your rate. You can also do hierachical
> > sharing.
>
> policy suffices, but doesn't serve the purpose of per flow queue.
>
Policing will achieve the goal of rate control without worrying about
any queueing. I like the idea of someone trying to create dynamic queues
though ;->
cheers,
jamal | http://oss.sgi.com/projects/netdev/archive/2005-04/msg01459.html | CC-MAIN-2015-22 | refinedweb | 890 | 74.08 |
This article was written and submitted by an external contributor. The Google App Engine team thanks Christopher O'Donnell for his time and expertise.Christopher O'Donnell (@markitecht)
Technology product designer and developer from Cambridge, Massachusetts
August 2010
Modern Funnel Analytics Using the Mapper API
The announcement of the Mapper API at Google I/O 2010 was a welcome one to the community, to say the least. The Mapper API represents a big step toward being able to boil down significant amounts of data into reports efficiently, in addition to scores of other types of parallel manipulation of large data sets. Though the 'reduce' component of the traditional Map/Reduce recipe remains to be built, there are already many compelling uses for this exciting library.
One problem space that immediately called out for a Map/Reduce-based solution was the area of online marketing analytics, specifically multi-step conversion tracking, also known as funnel analytics. My goal was to track the life cycle of each user, ultimately tying the average 'value' of different types of users to my advertising spend in AdWords. What other tools lacked, was the ability to see multi-level outcomes for users who first came to our site within a specific time period.
Here is a snapshot of the kind of report I wanted to be able to produce:
Note: For clarity this sample report only shows the first three steps of the funnel. Also, the production system is able to support much larger data sets and targets than shown above.
Requirements and Insights
My requirements were:
- provide weekly snapshot reports
- allow drill-down to see daily performance
- provide a mechanism to set targets for each step, in order to scale my funnel visualization to represent relative, not absolute, performance
To deliver the relative performance numbers, my third requirement, I simply need to allow the user to enter historically realistic target values for each step in the funnel process. I then scale the resulting visualization to the provided target values.
The key insight to addressing the first two requirements is that having daily statistics amounts to having weekly statistics, since summing daily numbers gives us weekly numbers:
In order to produce daily numbers, which will feed into the weekly numbers, we can sum hourly data:
Secret Sauce
You might be wondering at this point why we simply don't determine the daily statistics on demand. In order to determine the number of conversions in any given step of the funnel, we have to count. However, we're limited to how many entities we can practically count in a single query or in a single request. For example, if we were to count up to, say, 1,500 entities per query, then, in any given time period, we could count at most 1,500 visitors. In other words, if we tried to determine daily visitor counts on demand, our system would be limited to tracking about 45,000 unique visitors a month (30 days x 1,500 / day)
I want to support around 500,000 monthly visitors, with room to grow in the future. To achieve this, I count the number unique visitors in each hour, allowing the system to count well over 500,000 monthly visitors (30 days x 24 hours x 1,000 / hour). To support even more visitors, I could count at a finer grained time period. For example, counting unique visitors at a ten second interval would support over 250 million monthly visitors.
The secret sauce is to not iterate over all the entities which represent unique visits to the site, but rather, map over a set of entities which represent time slices. As we map over each time slice entity we count the number of unique visitor in that time slice, updating the entity counter as we go.
Implementation
First, we must create these time slice entities ahead of time, so that the Mapper API will have entities to map over. There are many ways to do this. I chose the first that came to mind, which is simply to use a cron job to ensure that Hour, Day and Week time slice entities are created before the Mapper runs. Here's an example of a cron handler, which is set to run every 30 minutes, that creates
Hour entities for a trailing 24-hour period.
class EnsureHoursExist(webapp.RequestHandler): def get(self): today = datetime.datetime.today() for hour in range(24): val = today - datetime.timedelta(hours=hour) # Ensure the existence of, or trigger the creation of any missing Hour entities # This, and any similar entitiy creation methods, must be idempotent get_or_create_hour(val)
All time slice classes extend my
Slice class, which itself extends db.Model:
class Slice(db.Model): value = db.DateTimeProperty() leads = db.IntegerProperty(default=0) registrants = db.IntegerProperty(default=0) downloads = db.IntegerProperty(default=0) # ... additional funnel steps here ...
Once all the time slice entities are prepared, it's time to run the Mapper job to populate the empty slices with up-to-date counts. For background on how to add the Mapper API to your app and get started with defining and running jobs, read the excellent documentation provided by the team.
Note: It's important to ensure that all your tasks are idempotent, because tasks may execute more than once. To build my Hour statistics, my Mapper job handler looks like the following code snippet.
def build_hourly_stats(hour): start = hour.value end = start + datetime.timedelta(hours=1) hour.leads = count_all_leads(start, end) hour.registrants = count_registrants(start, end) hour.downloads = count_downloads(start, end) # ... additional funnel steps counted here ... # Use yield here as Mapper is iterating over all
Hourentities # Actual datastore transactions will be performed in bulk mutations automatically yield op.db.Put(hour)
If the yield statement is unfamiliar to you, you can check out the Python generator documentation and yield statement documentation, or read this useful explanation.
You'll notice I've factored out my counting helper functions; here's an example of one:
def count_registrants(start, end): return Lead.all().filter('created >=', start).filter('created <=', end).filter('registered', True).count()
Once the
Hour entities have been updated by the Mapper job to create 'Hour' statistics, I run Mapper jobs to summarize 'Day,' and then 'Week' statistics. Through each subsequent iteration, this approach requires no additional counting of the original entities, and is therefore extremely fast, regardless of how many visitor entities were counted in the first iteration.
Conclusion
Thanks to the App Engine team for inviting me to share my experience using Mapper to build multi-stage conversion reports. I hope some of these approaches are applicable to your projects! | https://cloud.google.com/appengine/articles/mr/mapper | CC-MAIN-2015-40 | refinedweb | 1,104 | 51.38 |
Editor’s Note: This blog post was updated 7 August 2022 to include sections on why you should use mixins in Vue and the drawbacks of doing so.
If you are a Vue lover (like me) and are looking for a way to extend your Vue application, you’ve come to the right place. Vue mixins and directives are a powerful combination and a great way to add more reusable functions across parts of your application.
If you are from an object-oriented programming background, you’ll see Vue mixins as an imitation of parent classes. You will also see that directives are similar to helper functions.
If you do not have an OOP background, then think of mixins as a utility that you design to be shared by multiple people. If you are thinking about an office, it would be the photocopier. If you are thinking about a mall, it would be mall security. Basically, mixins are resources that multiple parts of your application share.
- Prerequisites
- What are Vue mixins?
- Why should I use mixins in Vue?
- Using mixins in Vue
- Global vs. local mixins
- Directives in Vue
- Filters in Vue
- Bringing it together
- Hardships with mixins
Prerequisites
Below are a few prerequisites that you’ll need before moving forward in this article.
- Knowledge of JavaScript
- You have, at the very least, built a Vue application. One with more than five components is a plus
- If you have shared the photocopier in the office, you can take a seat in front here
What are Vue mixins?
The Vue documentation has a really simple and straightforward explanation for what mixins are and how they work. According to the docs, mixins are a flexible way to distribute reusable functionalities for Vue components. A mixin object can contain any component options. When a component uses a mixin, all options in the mixin will be “mixed” into the component’s own options.
In simpler terms, it means that we can create a component with its data, methods, and life-cycle components, as well as have other components extend it. Now, this is different from using components inside other components where you can have a custom component with a name like
<vue-custom></vue-custom> inside of your template.
For example, we can build a normal Vue component to hold basic configurations for our app, such as:
- App name
- Greeter method
- Company name for copyright at the footer
Why should I use mixins in Vue?
Mixins allow us to reuse functionalities and logic within our Vue components. This means we can use the same logic or functionality in multiple components without having to manually rewrite the logic in each one.
This provides us a level of flexibility and allows us to reduce code duplication, making sure we abide by the popular DRY (Don’t Repeat Yourself) principle.
One other thing that makes mixins important is that they don’t have any effect outside the defined scope.
Using mixins in Vue
Let’s create a simple mixin:
<template> <div> <div>{{title}}</div> <div>{{copyright}}</div> </div> </template> <script> export default { name: "HelloWorld", data() { return { title: 'Mixins are cool', copyright: 'All rights reserved. Product of super awesome people' }; }, created: function() { this.greetings(); }, methods: { greetings: function() { console.log('Howdy my good fellow!'); } } }; </script>
Interestingly, we can refactor the logic in this component with mixins. This comes in handy in cases where you need to repeat this exact logic in multiple components.
Let’s create a simple mixin in a
myMixin.js file within our project:
export const myMixin = { data() { return { title: 'Mixins are cool', copyright: 'All rights reserved. Product of super awesome people' }; }, created: function() { this.greetings(); }, methods: { greetings: function() { console.log('Howdy my good fellow!'); } } };
Okay, that’s as simple as it gets for a mixin. Now, if we use this in our component, you will see the magic in it.
And to use this, we can import it and do the following in our template:
<template> <div> <div>{{title}}</div> <div>{{copyright}}</div> </div> </template> <script> import myMixin from "/myMixin"; export default { name: "HelloWorld", mixins: [myMixin] }; </script>
Global vs. local mixins
One important thing to note is that there are two types of mixins – global and local.
Local mixins are what we’ve explained above with the
myMixin.js file. The mixin is defined in an individual
.js file and imported for use within individual components in our Vue project.
On the other hand, global mixins allow us to do even more. Similar to local mixins, we also have our
myMixin.js file. This time, we import it directly into our
main.js file, making it globally available to all components within our project
For instance, once we’ve created our
myMixin.js file, we go to our
main.js file and import it as shown below:
import { createApp } from 'vue' import App from './App.vue' import myMixin from './myMixin' const app = createApp(App); app.mixin(GlobalMixin); app.mount('#app')
Now, any component within our Vue component can access the functionality in our mixin file without needing to import individually.
Directives in Vue
Directives are methods like
v-for that we can create to modify elements on our template. You know how
v-if hides a component if a condition is not met? How about if we underline a long sentence with a directive?
We can even change the text a little as a way to highlight it. We can have global directives that we register so that all of the components in our Vue application can use them. We also have local directives that are specific to that particular component. Awesome, right?
Let’s create a global directive in our
main.js now.
Register a global custom directive called
v-highlight:
// It's app.directive in Vue 3 Vue.directive('highlight', { // When the bound element is inserted into the DOM... // Use "mounted" instead of "inserted" in Vue 3 inserted: function(el, binding) { // set the colour we added to the bind el.style.backgroundColor = binding.value ? binding.value : 'blue'; el.style.fontStyle = 'italic'; el.style.fontSize = '24px'; } });
Here, we’re changing the style of the element attached to this directive. Plus, if there’s a color attached as a
value to the directive, we set it as the background color. If not, we set the background color to
blue.
Now, if we use this directive, you should see that parts of the text have changed.
To use this, we can do the following in our template:
<template> <div> <p v-highlight>Hello There!</p> <p v-This is a red guy</p> </div> </template>
Filters in Vue
This is another customization helper we will look at. Filters help us in many ways (you might get angry that you didn’t know about these earlier if this is your first time encountering them). We can define filters globally or locally, just like directives.
Filters can be used to apply common formatting to text or heavy filtration to an array or object. They are JavaScript functions, so we can define them to take as many arguments as possible. Also, we can chain them and use multiple filters as well. Cool, right?
Let’s define a simple filter to capitalize the first word of the body of text (this is really useful when displaying things like names supplied by your user):
Vue.filter('capitalize', function(value) { if (!value) return ''; value = value.toString(); return value.charAt(0).toUpperCase() + value.slice(1); });
And to use this, we can do the following in our template:
Now, anywhere we use this filter, the first character will always be capitalized.
N.B., Filters are still applicable in Vue 2 but have been deprecated in Vue 3.
Bringing it together
We are going to compose a simple Vue application using everything we’ve learned.
First, let’s define our mixin:
const myMixin = { data() { return { title: 'mixins are cool' }; }, created: function() { alert('Howdy my good fellow!'); } };
Then we define our directive in our
main.js:
Vue.directive('highlight', { inserted: function (el, binding) { el.style.color = binding.value ? binding.value : "blue" el.style.fontStyle = 'italic' el.style.fontSize = '24px' } })
Now, let’s define our filter in our
main.js:
Vue.filter('capitalize', function (value) { if (!value) return '' value = value.toString() return value.charAt(0).toUpperCase() + value.slice(1) })
Finally, the simple template to see if these things actually work:
<template> <div id="app"> <p v-highlight>{{title | capitalize}}</p> <p v-This is a red guy</p> <p>{{'this is a plain small guy' | capitalize}}</p> <div> </template> <script> export default { name: "HelloWorld", mixins: [myMixin] }; </script>
And that’s it!
Hardships with mixins
Vue 3 now provides other means of sharing functionalities that provide a better developer experience, such as using composables.
Mixins are still as efficient while working with the Options API, but you might experience some drawbacks, including:
Naming conflicts
While working with mixins, there’s a higher chance of having conflicting names within our components. This might be a challenge, especially in cases where a new developer is inheriting a legacy codebase and they’re not exactly familiar with the existing property names within the mixin.
This can end up causing unwanted behaviors in our Vue app.
Difficult to understand and debug
Similar to the potential naming issues I’ve highlighted above, it’s somewhat stressful for a new developer to figure out mixins and how they might be affecting the components, especially if they’re dealing with global mixins.
In general, figuring out all the functionalities within the components can be difficult.
Conclusion
Everything we mentioned here comes in handy when building applications that are likely to grow in complexity. You want to define many reusable functions or format them in a way that can be reused across components, so you do not have to define the same thing over and over again.
Most importantly, you want to have a single source of truth. Dedicate one spot to make changes.
I’m excited by the thought of the cool things you can now build with these features. Please do share them with. | https://blog.logrocket.com/how-use-mixins-custom-functions-vue/ | CC-MAIN-2022-40 | refinedweb | 1,680 | 56.55 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.