text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91 values | source stringclasses 1 value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
A Developer.com Site
An Eweek.com Site
Hi! Is it possible to read from specific rows from a notepad? I'm using Borland C++
For Example:
Jay
Mark
Christian
I only want to display the word Christian which is on the third row, what will I do? can you please show me? Thanks! Here's my code:
#include <iostream.h>
#include <iomanip.h>
#include <fstream.h>
#include <cstring.h>
#include <string.h>
#include <conio.h>
//using namespace std;
int main()
{
char account[80];
char string1[] = ".txt";
char string2[17];
cout<<endl;
cout<<"Enter Account Number: ";
cin>>account;
strcpy(string2, account);
strcat(string2, string1);
int bal = 0;
int x;
ifstream inFile;
inFile.open(string2);
if (!inFile) {
cout << "Invalid Account";
exit(1); // terminate with error
}
while (inFile >> x)
{
bal = bal + x;
}
inFile.close();
clrscr();
cout<<endl;
cout<< bal << endl;
getch ();
return 0;
}
A file is really nothing more than an array of characters to a computer. It is not possible to "skip" to a given line, unless you know its start index. What you can do though is read the file and count the lines until you reach the line you want. In particular, you can use getline n times.
That said, you need to use a newer compiler. iostream.h ain't no header I've ever heard of (amongst other.
I don't understand what the code you posted has to do with your question. They seem unrelated. One thing I'll point out is that your Borland compiler must be over 10 years old. Library headers that end with ".h" are not a part of the C++ standard. Conio.h is very much depreciated as it comes from the days of DOS, and was never a part of any standard. I suggest you get a modern compiler, they aren't hard to find and generally cost nothing.
Code:
#include <iostream>
#include <fstream>
#include <vector>
using namespace std;
int main()
{
ifstream myFile("myfile.txt");
vector<string> lines;
string line;
while(getline(myFile, line))
{
lines.push_back(line);
}
cout << lines[2] << endl;
return 0;
}
#include <iostream>
#include <fstream>
#include <vector>
using namespace std;
int main()
{
ifstream myFile("myfile.txt");
vector<string> lines;
string line;
while(getline(myFile, line))
{
lines.push_back(line);
}
cout << lines[2] << endl;
return 0;
}
I know Borland C++ is an old compiler, but as much as i want to use another program, i cannot, because this what my professor is using, we can only choose from Borland and Turbo c++
Looks like your professor is going to teach you how he did tricks in the stone age. Using the same tools.
Nobody is actually forcing you to use it at your own time. You can always compile your code with both the old and current compiler and see what fails where. At least you'll learn what changed.
Originally Posted by jaydee_ado
I know Borland C++ is an old compiler, but as much as i want to use another program, i cannot, because this what my professor is using, we can only choose from Borland and Turbo c++
Well if your professor isn't going to teach using an ANSI C++ compiler, then you probably shouldn't be taking his class at all. I think you should bring up this fact with him as it's pretty serious. Would you stick around if the professor was teaching earth science out of a book that said the world was flat?
Last edited by Chris_F; March 2nd, 2011 at 11:13 PM.
1st of all use code tags. and for reading from particular line you need to set file pointer to that particular point and then start reading.
read char by char and if '\n' found ie: the line break increment an int counter by 1. now set the file pointer on the desired count ie: line no.
and really there is no link between your qus and posted. | http://forums.codeguru.com/showthread.php?509440-Reading-from-specific-row-on-notepad&p=2001094 | CC-MAIN-2019-47 | refinedweb | 649 | 74.9 |
- Portable Asphalt Plant For Sale | Mobile Asphalt Plant
small asphalt plant of capacity 20 30 tph for sale. mobile asphalt plant gives consistent production. fast installation and setup for quick hot mix production
- Portable Asphalt Mixing Plant Atlas Industries
portable asphalt drum mix plant many customers have seen tempting advertisements of portable asphalt plants for sale and mobile asphalt drum mix plant
- Technical Parameters Of Qlb Mobile Asphalt Mixing Plant
asphalt drum mixing plant; mobile asphalt mixing plant .. main structure of qlb mobile asphalt mixing plant .. mobile asphalt plant, asphalt hot mix plant,
- Used Asphalt Plants For Sale. Vögele Equipment & More
search 637 listings for used asphalt plants. find vögele, dynapack, a, xuetao, bitelli, abg, a, bomag, a, a for sale on machinio.
- Mobile Asphalt Plant | Asphalt Mixing Plant Manufacturer
atlas has one concrete solution for customers who are looking for quality mobile asphalt plant for sale .. hot mix plant to its fullest. by mobile drum mix
- Mobile Asphalt Drum Mix Plant Atlas Industries
atlas industries is a provider of mobile asphalt drum mix plant which is available in mobile asphalt plants in the mini hot mix plant is
- Machinerytrader | Asphalt / Pavers / Concrete
peru uruguay adm asphalt / pavers / concrete equipment for sale. number of matches 11 adm spl 30 portable asphalt drum mix plant,
- Speedcrafts Road Maintenance Machines, Road
speedcrafts limited, patna pothole repairing machine, mechanical broom, mobile hot mix plant, asphalt drum mixing plants
- Stansteel
full service manufacturers of hot mix asphalt plants. supplies and provides parts, components, and service for a wide range of permanent and mobile requirements. site
- Asphalt Drum Mix Plant Supplier, Hot Asphalt Drum Mix
asphalt drum mix plant manufacturer in china. supplier of mobile asphalt drum mix plant, drum mix asphalt plant. high quality and good price.
- Asphalt Plants. Batch & Drum Mix | A Plant
hot mix storage; bitumen a plant's range of batch and drum mix asphalt plants can meet the super roadmix fulfils all the requirements of a truly mobile
- Asphalt Plant For Kazakhstan Market Asphalt Mixing Plant
feb 26, 2014· plant /mobile hot asphalt drum mixing plant mix asphalt plant; panama, ecuador, peru, asphalt batching plant for sale in china hot
- Asphalt Drum Mix Plants Supply Supplier, Find Best Asphalt
find best asphalt drum mix plants supply dhb40 hot asphalt plant, mobile asphalt drum mix a rd175x 175tph stationary asphalt drum mix plant for sale
- Cap40 Continuous Asphalt Mixing Plant 40t/h Asphalt
cap40 continuous asphalt plant for slb8 small mobile asphalt cap40 asphalt plant in peru a group our 2015 hot sale mobile asphalt drum mix plant,cap40
- High Quality Asphalt Plant As Per American Standers
mobile hot mix plant; self loading concrete mixer; asphalt plant (american specs) new generation drum mix asphalt plants
- Asphalt Plant Drum Mixer For Sale & Rental New &
asphalt plant drum mixer for sale & rental | rock & dirt. search from 1000's of listings for new & used drum mixers for sale or rental updated daily from 100's of
- Heavy Construction Equipment Of Mobile Asphalt
hot mix plant mini asphalt plant mobile asphalt peru | gulin solutions. asphalt plant for sale, asphalt mixing plant,mobile asphalt plant,asphalt drum mix.
- Professional Manufacture Lb800 Small Asphalt Mixing Plant
peru asphalt mixing plant asphalt drum mix plant asphalt mixing plant exporter in peru, cusco, projects hzs25 mobile concrete batching plant for sale.
- Portable Asphalt Drum Mix Plant Supplier, Find Best
find best portable asphalt drum mix plant supplier on asphalt drum mix plant also searched mobile asphalt plant,hot mix asphalt plant for sale.
- Rsl Mobile Asphalt Recycling Plant
rsl mobile asphalt recycling plant mobile asphalt drum mix developed the hot mix asphalt recycler range and have plants, asphalt plant for sale,
- Speedcrafts Limited Asphalt Mixing Plants And Other
speedcrafts limited indias leading manufacturer & exporter of asphalt mixing plants, drum mix plant, asphalt batch mix plants, hot mix plants, bitumen sprayers, etc.
- Supply Mobile Asphalt Plant Rock Crusher Equipment
thus the name hot mix asphalt with the remaining 1,300 are drum mix plants. the asphalt pavement xsm mobile asphalt crushing plant for sale can be
- Manufacturers Of Continuous And Batch Process Hot A
a manufacturer of hot mix asphalt plants.
- Concrete Mixing Plants Irsil.org
asphalt plants, hot mix plants, supplier of mobile asphalt plant, asphalt drum mix plant. best concrete & asphalt plant for sale.
- Portable Asphalt Recycling Plant For Sale
portable asphalt recycling plant for sale .. a model pvm 10 portable drum mix plant .. asphalt plant for sale | mobile portable mix
- Used Portable Asphalt Plants For Sale
used portable asphalt plants for sale supplier of mobile asphalt plant, asphalt drum mix plant .. asphalt drum mix plant | hot mix asphalt plant
- Efficient Asphalt Mixing Plant Road Machine In Kazakhstan
asphalt drum mix plant india, hot mix plant vinayak equipments is peru , chile, india an efficient high quality mixing plant hot sale ready mobile plant
- Mobile Asphalt Plant For Sale In Pakistan
asphalt drum mix plant | hot mix asphalt algeria, peru, indonesia,pakistan xsm mobile asphalt hot sale appunti al volo mobile asphalt plant for sale in
- Bitumen Hot Mix Plant For Sale In Eritrea
bitumen hot mix plant for sale in addis ababa qlb20 mobile asphalt mixing plant hot sale us .. drum mix plant, hot mix plant hot mix asphalt mixing plant
- Statistical Process Control In Asphalt Plant | Mining
4.19 statistical techniques 4.20 verification of asphalt drum mix plant statistical process control in asphalt plant peru 100 tph river stone mobile
- Asphalt Plant Cost In Guinea Bissau
hot mix asphalt for county roads used asphalt plants for sale. your session mobile asphalt mix plant and mobile asphalt drum mix plant from india. we
- Mobile Asphalt Plant For Sale In Pakistan Rrcser.in
mobile asphalt plant for sale in pakistan hot sale panama, papua new guinea, paraguay, peru, philippines, pitcairn, drum mix asphalt plant for sale
- Asphalt Plant Parts Wilsonswharf.co.za
asphalt plants. batch & drum mix kazakhstan, egypt, asphalt plants peru etc a 240t/h asphalt mobile batch plant. hot sale 20m3/h small concrete mixing
- Lb2500 Asphalt Mixing Plant 200t/h In India
batch mix asphalt plants, asphalt drum mix plant for sale hot selling, we supply asphalt drum mix plant drum mix plant. slb mobile asphalt
- Asphalt Mixing Plant Manufacturer
asphalt drum mixing plant, mobile asphalt asphalt hot mix plant for sale; small mobile concrete batch plant for sale; a xap160 asphalt mixing plant asphalt
- Batching Mixing Asphalt Station Structure With High
mixed concrete batching plant in the peru; plant from 80tph mobile hot mix asphalt batching mixing plant h drum asphalt mixing plant dhb60 for sale
- Go Aggregates Asphalt Plant Co Philippines Evam.co.in
buy hot asphalt drum mix plant hot brazil , philippines , indonesia mobile asphalt drum mix plant brand continuous asphalt mixing plant for sale chinese
- Factory Directly Sell Asphalt Blending Plant In Mongolia
view 80tph hot mix mobile asphalt plant for road construction asphalt drum mix plant. we manufacture asphalt plant a asphalt plant lb800 64tph hot sale
- Hsr Concrete Mixing Plant For Sale In Republic Of Poland
concrete batching plant, hot sale ready concrete mixing plant hzs60 provider of asphalt drum mix plant mobile asphalt mixing norway; panama; peru .
- Lb1000/2000 Asphalt Batch Mix Plant Asphalt Mixing Plant
lb2000 hot mix asphalt batching plant for. lb2000 hot mix asphalt batching plant for sale, peru ce eveirmential asphalt mixing plant lb1000 asphalt drum mix.
- Machinerytrader | Asphalt / Pavers / Concrete
peru uruguay erie strayer asphalt / pavers / concrete equipment for sale. number of matches 5 mobile gravity central mix plant
- Asphalt Recycling Equipment For Sale 13 Listings
we have 13 asphalt recycling equipment for sale .. 2005 bomag mp1300 milling machine. 52 drum, folding 2003 marathon tra rem asphalt/hot oil trailer
- Asphalt Plant Component For Sale & Rental New &
asphalt plant component for sale & rental | rock & dirt. search from 1000's of listings for new & used asphalt plant components for sale or rental updated daily from
- Asphalt / Pavers / Concrete Equipment For Sale
post a free for sale listing .. this a as4252c 8ft expandable asphalt screed is unused mobile concrete batching plant twin shaft mixer cap. 2.25m3
- Cost Of 80 To 100 Tph Hot Mix Plant In India
cost of 80 to 100 tph hot mix plant in india. hot mix posted at 8 39 in news by sme machinery asphalt drum mix plants mobile crusher for sale.
- Hot Sale Good Quality Simple Concrete Mixing Plant
mixing plant,asphalt mixing plant,bitumen hot mix plant batching plant hot sale used mobile plant plant machinery with js1000 hot sale drum mix
- Concrete Plants | A Roadbuilding
asphalt plants. counter flow drum range of dry and wet mix concrete plants from 4 yd highly in high production mobile concrete batch plants for the
- Saudi Aramco Buy Asphalt Plant 3 Ton Construction
saudi aramco buy asphalt plant 3 ton pollution control unit, wet mix plant, drum mix mobile plant, line of continuous and batch process hot mix asphalt
- Used Concrete Batching Plant On Sale | Mobile Concrete
used concrete batching plant on sale mobile and portable ready mix plants; used concrete batching plant asphalt plant.
- Iran Asphalt Plant, Iranian Asphalt Plant Manufacturers
manufacturing no. of employees 51 100 tags hot mix asphalt plant mobile asphalt plant portable asphalt plant spl110. refurbished asphalt drum mi
- China Batch Asphalt Plant Suppliers, Batch Asphalt Plant
import china batch asphalt plant from various high quality chinese batch asphalt plant suppliers asphalt hot mix plant drum mobile asphalt mixing plant.
- Self Erecting Storage Silos | A Roadbuilding
asphalt plants. counter flow drum mixers. e3 rap star series all a self erecting hot mix storage systems come complete with all necessary lifting and
- Asphalt Mixing Plant,china Asphalt Mixing Plant
hot sale qlb40 mobile asphalt mixing plant specifications asphalt hot mix plant 1.productivity 160t/h 2 integrated design of drying drum and
- Asphalt Pavers For Sale 359 Listings Page 4 Of 15
we have 359 asphalt pavers for sale .. complete 250 tph cedarapids semi portable asphalt plant price used parallel flow 8828 cedarapids drum and baghouse.
- Mixing Plant Importers & Mixing Plant Buyers Tradekey
instantly connect with verified mixing plant buyers & mixing plant importers from mobile hot mix plant hot mix plant mobile| asphalt drum mix plant
- Used Concrete Batching Plant For Sale In India Codep
used concrete batching plant for sale in high quality hzs25 concrete batching plant for sale · mobile batching plants for sale, . click used ready mix | http://www.aboutastrology.co.in/china_made/32242/mobile-hot-asphalt-drum-mix-plant-for-sale-in-peru-cusco.html | CC-MAIN-2017-47 | refinedweb | 1,723 | 53.04 |
Hi everyone.
Basically, I am creating a game based on two dices, and a guess box. The user inputs the guess of what the dice outcome will be, and then clicks the roll button. This generates a random number in each dice. If the user is correct, a win message comes up. If wrong, a lose message comes up.
I need to add a bit of code to the game, to allow the user to have an amount of 'credit' such as 10, and be able to place a bet of whatever amount they wish, within there credit range. If the user is correct at guessing the die outcome, this amount is 'added' to there 'credit'. If they are incorrect, it is subtracted. When the credit reaches 0, I need the game to end and reset.
Can anyone help me out please? I have the code for my game up to now, Which I will paste here for anyone wanting to help to take a look at.
First, main class:
DiceRoller
import java.applet.Applet; import java.awt.Button; import java.awt.Color; import java.awt.Graphics; import java.awt.TextField; import java.awt.event.ActionEvent; import java.awt.event.ActionListener; /** * @author snk2012 * */ public class DiceRoller extends Applet implements ActionListener{ // We need 2 dice private Die d1,d2; // a button to roll the dice private Button btnRoll; private TextField txfGuess; //a field to enter a guess of the addition of the two dice values private int guess; // a field for the guess to be stored and compared with actual result private int total; // a total field to find the addition of the two dice values private boolean win; //a boolean value to display win/loose message. /** * Initialise the applet */ public void init(){ setSize(320,200); //sets the size of the window setBackground(Color.red); //sets the background color setLayout(null); /*gets rid of the standard layout precedure of placing from centre outwards. Allows for user to input coordinates for objects to be placed */ // Create the 2 dice d1 = new Die(20,40); //creates a new dice, dice1, using the attributes from the dice class d2 = new Die(220,40); // does as before, creates a new dice // Create the button and add it to the GUI btnRoll = new Button("ROLL THE DICE"); //This adds a button named roll the dice to the program btnRoll.addActionListener(this); btnRoll.setBounds(10, 10, 100, 20); //sets its 'boundaries' or position on the screen. add(btnRoll); //adds the actual button, 'btnroll', to the program. txfGuess = new TextField(""); /* adds a textfield to enter the numerical guess. As its stored as a string it will need to be converted to a integer using the 'Integer parseInt' command. */ txfGuess.setBounds(200,10,40,20); //sets the position of the Guess box. add(txfGuess); //adds the guess textbox to the program. } /** * Display the current values of the 2 dice */ public void paint(Graphics g){ /* in the dice class the graphics g is used to set the dimensions of the dice, set the colour of the dice, set the fonts used on the numbering for the dice, draws the value from the mathrandom calculation to the dice, */ d1.display(g); //using the paint method, dice1 & 2 are drawn to the applet window d2.display(g); //using the boolean command, we can set the win or loose to display based on weather the statement is true or false. if(win == true){ //display win if equal to true g.drawString("Win", 20,180); } else { g.drawString("Lose", 20,180); //otherwise, display lose. } } /** * When the button is clicked roll the dice. * */ public void actionPerformed(ActionEvent ae) { d1.roll(); /*this is linked with the actionlistener. when the roll the dice button is clicked, is uses d1.roll() statement. This is listed in the dice class, as a mathrandom calculation. This means it will work out a random number as displayed it as the result for the dice roll.*/ d2.roll(); total = d1.getValue() + d2.getValue(); /*This is showing that the 'total' is created from the addition of the two dice roll values. using the getValue command. In the dice class the value is declared as 'Value'. then linked with a mathrandom calculation, to produce a random number on the dice when rolled.*/ guess = Integer.parseInt(txfGuess.getText()); //This converts a string character '2' to an integer numerical value '2'. if(guess == total){ System.out.println("WIN"); win = true; // this is a boolean statement. If the guess input by the user is EQUAL to that of the total of the two dice, it is classed as TRUE, and WIN is displayed as a string on the screen } else { System.out.println("LOSE"); win = false; // or else, (guess is not equal to total) the string Lose is displayed as win is returned false. } {} repaint(); } }
Next class, Die class:
import java.awt.Color; import java.awt.Font; import java.awt.Graphics; /** * Simple class to demonstrate a Die * * @author Peter lager * */ public class Die { private int value; private int posX, posY; private Font font; /** * Create a new die at the given position and * set the initial value to 1 * * @param posX * @param posY */ public Die(int posX, int posY) { this.posX = posX; this.posY = posY; value = 1; font = new Font("Courier", Font.BOLD, 60); } /** * Set the top left position to display the die * @param posX * @param posY */ public void setPos(int posX, int posY){ this.posX = posX; this.posY = posY; } /** * @return the value */ public int getValue() { return value; } /** * @param value the value to set */ public void setValue(int value) { this.value = value; } /** * @param font the font to set */ public void setFont(Font font) { this.font = font; } /** * Roll the dice to get a random value between 1& 6 */ public void roll(){ value = (int)(Math.random()*4 + 1 + 1); } public void display(Graphics g){ g.setColor(Color.yellow); g.fillRect(posX, posY, 80, 80); g.setColor(Color.black); g.drawRect(posX, posY, 80, 80); g.setFont(font); g.drawString(""+value, posX + 20, posY+60); } }
Can anyone help me implement some code to allow me to use a gambling like function within this game?
Thanks alot for any help, Greatly appreciated!
G | http://www.javaprogrammingforums.com/java-applets/1592-need-help-creating-simple-credit-system-game.html | CC-MAIN-2015-35 | refinedweb | 1,021 | 74.29 |
11.21.3. Trio Student Solution 2.
For example, assume that the menu includes the following items. The objects listed under each heading are instances of the class indicated by the heading.).
11.21.3.1. Grading Rubric¶
Below is the grading rubric for the Trio class problem.
11.21.3.2. Practice Grading¶
The following is the second sample student response.
Apply the grading rubric shown above as you answer the following questions.
Apply the Grading Rubric
- Yes
- This declares the class correctly as
public class Trio implements MenuItem
- No
- What do you think is wrong with the class declaration?
10-21-1: Should the student earn 1 point for the correct declaration of the
Trio class?
- Yes
- Remember that all instance variables should be declared private so that the class controls access to the variables.
- No
- The student did not make the instance variables private, so the student does not get this point.
10-21-2: Should the student earn 1 point for declaring the private instance variables (sandwich, salad, and drink or name and price)?
- Yes
- This solution declares the constructor as
public Trio(Sandwich s, Salad sa, Drink d)
- No
- What do you think is wrong with the constructor declaration?
10-21-3: Should the student earn 1 point for declaring the the constructor correctly?
- Yes
- This solution initializes the instance variables (sandwich, salad, and drink) correctly with the values from the parameters (s, sa, and d).
- No
- What do you think is wrong with the initialization of the instance variables in the constructor?
10-21-4: Should the student earn 1 point for correctly initializing the appropriate instance variables in the constructor?
- Yes
- This solution contains correct declarations for
public String getName()and
public double getPrice().
- No
- To implement an interface the class must have a getName and getPrice method as defined by the MenuItem interface.
10-21-5: Should the student earn 1 point for correctly delcaring the methods in the
MenuItem interface (getName and getPrice)?
- Yes
- Look at what
getNameis supposed to return.
- No
- This solution doesn't include the "/" between the sandwich and salad and between the salad and the drink and is also missing the "Trio" at the end of the name, so it loses this point.
10-21-6: Should the student earn 1 point for correctly constructing the string to return from
getName and making it available to be returned?
- Yes
- This solution does return the constructed string, even if the string is not completely correct.
- No
- Even though the string is not correct it was constructed and returned.
10-21-7: Should the student earn 1 point for returning a constructed string from
getName?
- Yes
- What if b is equal to c but both are less than a?
- No
- This does not always compute the price correctly (when b is equal to c and they are both less than a).
10-21-8: Should the student earn 1 point for correctly calculating the price and making it available to be returned from
getPrice?
- Yes
- This solution does return the calculated price, even if that price is not always correct.
- No
- This point is earned if the student attempted to calculate the price and returned what was calculated.
10-21-9: Should the student earn 1 point for returning the calculated price in
getPrice?
10-21-10: What should the total score be for this student response (out of 9 points)? Enter it as a number (like 3). | https://runestone.academy/runestone/books/published/apcsareview/OOBasics/TrioScore2.html | CC-MAIN-2019-35 | refinedweb | 579 | 63.09 |
I am excited to announce the availability of March 2007 CTP version for Sandcastle. The latest version is now available for download at. My sincere thanks to the Sandcastle user community and to Eric Woodruff for providing us with valuable feedback.
What's new in this version:
Changes in this version:
Issues fixed in this version (Reported by Customers):
1. Sandcastle: MSHelp Index, apiname attributes: namespace and type name of members isn't showing up
2. Sandcastle: MSHelp F Index does not have an entry for typename.membername
3. Sandcastle:Issues with Sandcastle and .NETCF
4. Sandcastle: ApplyVsDocModel - Missing items: constructorsTopicTitle, membersTopicTitle, etc.
5. Sandcastle: not showing the "obsolete" messages for items with an ObsoleteAttribute.
6. Sandcastle: The <br/> and <hr/> HTML tags are getting stripped in Prototype transforms
7. Sandcastle: error message of ResolveReferenceLinksComponent is not correct
8. Sandcastle: property keyword is missing in managed cpp syntax
9. Sandcastle: apiIcon template issue
10. Sandcastle: Javascript error on members page in vs2005 transform
11. Sandcastle: December CTP - Wrong icons shown on member pages
12. Sandcastle: suggestion for GetMsdnUrl() method
13. Sandcastle: MrefBuilder fails when /internal+ switch is used on
14. Sandcastle: The reference_content.xml file is missing some keys
15. Sandcastle: syntax for Indexed Properties is not correct in managed cpp
16. Sandcastle: When using firefox the method signatures is being displayed in very small font size
17. Sandcastle: Is it possible to get rid of fully qualified type names?
18. Sandcastle: February CTP should have a version number higher than 2.2.61208.0
19. Sandcastle:Split the big framework reflection data file per assembly for shipping.
20. Sandcastle:CHM-table of contents does not show class members when applying the ApplyVSDocModel.xls
21. Sandcastle: Ignore CompilerGenerated attribute
22. Sandcastle: The border of empty <td> elements are not rendered (IE issue).
23. Sandcastle: parameter types not shown if it is an array (vs2005 theme)
24. Sandcastle: Some sandcastle images don't have a transparent background.
25. Sandcastle: extended character not supported in productTitle
26. Sandcastle: Update the setup script for MSI to get new files and changes.
27. Sandcastle: bug in K-keyword generation logic
28. Sandcastle: tables with icons have "Icon" as column header
29. Sandcastle: member Syntax section: Visual Basic (Usage) is not showing (looks like there is no generator for that, yet)
30. Sandcastle: MRefBuilder Namespace Ripping Still Broken
31. Sandcastle: performance issue related to ReflectionToChmContents.xsl
32. Sandcastle: Mref ripping feature doesn't work
33. Sandcastle: SandcastleGUI changes the table of contents displaying all public innerclasses at the same level as the parent class.
34. Sandcastle: html page is too wide in firefox
35. Sandcastle: Provide support for code Example
36. Sandcastle: MSHelp DevLang, codelang, Technology attributes are not present
37. Sandcastle:F1 context sensitive help does not work for sandcastle.
38. Sandcastle: Check in changeset of TocToChmContents.xsl into main tree
39. Sandcastle: Inherited from self info is showing up (vs2005 theme)
40. Sandcastle: Transform changes for Prototype and VS2005
Issues fixed in this version (Features and Bugs as a part of Orcas Beta 1):
1. MrefBuilder fails against the Orcas Cpref Assemblies
2. # in pathname makes HHC.EXE generate bad CHM-file
3. DCR: Change "C++" to "Visual C++" in managed reference topic syntax blocks, language filter and code blocks
4. BuildAssembler: If the output path does not exist as specified in the .config, tool should create it for you
5. VTP: K keywords containing angle brackets < or > do not appear in Index
6. VTP: Code displayed incorrectly (on one line)
7. Overloads pages do not include boilerplate text for overloads supported by the .NET Compact Framework
8. Dynamic generated links show up twice, with one working and one broken link
9. Index item broken in half; second half appears as subindex of first half; first half goes to Empty Index Entry
10. Internal Only boilerplate text does not cascade to members when set at type level
11. Sandcastle:Split the big framework reflection data file per assembly for shipping.
12. Square brackets are not showing properly in syntax section for VB
13. Mref Attached Event syntax is wrong in all non XAML syntaxes (regression)
14. Mref Attached Property syntax is wrong in all non XAML syntaxes (regression)
15. Index shows "unattached" properties
16. Index does not always support navigation to type members
17. J# compiler options and operators are missing from index
18. Link text is not correct in 'see also/other resource'
19. See Also section links not sorted under subheadings
20. Authored “XAML Attribute Usage” sections of Attached Property and Attached Event pages do not appear
21. No reflection going on for NETCF Orcas assemblies
22. See Also does not have subdivisions
23. Text should be bold in the "Inheritance Hierarchy" sections of managed ref
24. The word "Namespace" appearing in TOC instead of the actual namespace name.
25. URL of topics have GUID in them.
26. Classes in TOC appear out of order
27. "Inheritance Hierarchy" appearing after "Thread Safety" in managed ref
28. Attributes appearing above declarations in managed ref.
29. Bold appearing in Syntax section of managed ref
30. copy code text missing
31. Remove Single Bullet
32. Missing Header text for a section
33. Link Text in table appears bold
34. in-topic links are not displaying as links
35. Gray box with link options appears after clicking link
36. & in VB sample code is replaced with &
37. < and > in Code Reference in List 1 are replaced with < and >
38. Icon display problems
39. Clicking Link creates Popup
40. Interface member declarative syntax now has "abstract" keywording - bug or feature?
41. Method member appearing twice in cpref TOC
42. Graphic icons missing on the page
43. Property members appearing twice in cpref TOC
44. Dynamic links don't work
45. Members tables are not showing protected internal members
46. Keyword "true" should be "True" for VB
47. Missing graphics on some topics
48. XAML snippets appear twice
49. Titles of member tables have swapped text for "Icon" and "Type" columns
50. Index entries for members are missing for many CLR types
51. In managed ref C++ code examples, member-selection operator -> renders as ->
52. Manifest: language filter not appearing
53. When filter is set to Visual C#, the System.Linq* namespaces disappear from the Class Library Reference
54. Microsoft.VisualStudio.Profiler Namespace missing protected methods and properties
55. Configuration schema hierarchies not displaying hierarchically
56. Unable to distinguish between topics in Index Results pane
57. Incorrect Version Information for new types
58. Managed reference missing running header text
59. Attached property and attached event topics appear twice in TOC
60. event table missing from Events topics
61. wrong column headings in member list topics
62. Angle brackets for C# generic types are incorrectly displayed in code snippets
63. All APIs supported by 3.5 package should indicate 3.5 in Version section
64. mref topics need frlrf A keywords
65. link resolution failure warnings should identify topic in which the error occurred
66. Parameters: type line is not correct for generics
67. Bold text has taken over topic
68. HostProtectionAttribute boilerplate
69. Indices for some classes are missing
70. "Obsolete" words are gone for all obsolete types.
71. Rendering Empty XAML syntax blocks
72. Assembly information is missing on namespace pages
73. Version info incorrect for WPF assemblies on all pages
74. Odd indentation in rendering of XAML syntax when it is text vs. code
75. BuildAssembler: Namespace top-level pages are dispaying the
76. Constructor node incorrectly displays at the end of member list topics
77. problem with constructors in the TOC
78. XSLTTransform: When building the HXT for Orcas, System.Data.Entity.dll and members are not being added
79. Manifold test Builds: brackets in code samples rendering as escape characters
80. Manifold builds: copy code produces script error on hover
81. NSR "Example" link throws script error
82. namespace wording in topic titles
83. nested types not in correct alpha location
84. method names always include param list
85. param lists use param name, should be param type
86. generic apis have no indicator of genericness
87. HxComp: Error HXC3031: A group of keywords for a single Help link or KTable exceeds 4,096 bytes.
88. XML Namespace info missing
89. Version Info section: missing data
90. Namespace/Assembly lines incorrect on type topics and missing on member topics
91. In Parameters section, param names followed by empty parens
92. Return Value section is missing on method topics
93. missing Property Value section
94. mref topics need MSHelp:RLTitle for search results
95. Phoenix Reference CHM - Class name not listed
96. See Also autogenerated links should have descriptor, e.g. "System.Window Namespace", not just "System.Window"
97. Specialized type pages (derived from generics) not showing
98. Inheritance Hierarchy for classes derived from generic specialization shows incorrect link text for ancestor class
99. Method names in links and NSR title should show param list only for overloads
100. param lists in titles and tocs should use param type, not param name
101. Manifold Build produces only part of hyperlink text in a hrefs
102. Syntax boilerplate tokens not resolved, e.g. UnsupportedGeneric_JSharp
103. Intro boilerplate missing on member list topics
104. Member lists missing CF and XNA icons
105. Missing Type Parameter section for generic types
106. XSLTransform: XLSTs: Default Namespace should be labelled
107. BuildAssembler.SharedComponent: If a USC file is not well-formed, we may want to log it and move on to at least get a build
108. BuildAssembler: SharedContentComponent: Tool is not expanding the PlatformNote and Assembly SSC items
109. BuildAssembler: CopyFromIndexComponent does not recursively search for the files as specified via the ./index/data@files
110. BuildAssembler: XSLTs: Syntax block for Visual Basic (Declaration) missing "Visual Basic" text
111. BuildAssembler / VersionBuilder: XSLT: Running the steps for versioning, it appears that basic text is appearing on the built pages
112. Sandcastle: DXROOT is not system environment variable
113. Sandcastle: virtual keyword is not in the right place for managed cpp property syntax
114. MrefBuilder addin creates attached property node with incorrect content
115. MrefBuilder addin creates attached event node with incorrect content
116. XSLTransform: XSLT (DocModelTOCFormat) for building HXT for CHM is causing Out of Memory for Orcas (WinSDK will be bigger)
117. XAML in page filtering behavior
118. Dependency Property Information sections
119. Routed Event Information sections
120. autogenerated XAML syntax boilerplate for classes and properties
121. BuildAssembler: XSLT -> Type page headings should be qualified (ex: BuildPropertyGroupCollection.CopyTo Method)
122. BuildAssembler: XSLT -> Namespace and Assembly information missing from top of member pages
123. need custom sections (XAML Syntax, Dependency Property, Event Routing)
124. need to support Attached Properties
125. need to support Attached Events
126. Syntax color coding applied oddly to text in syntax blocks
127. Missing technology qualifiers for the index
128. BuildAssembler: ResolveArtLinksComponent should not terminate if output directory does not eixt
129. BuildAssembler: TransformComponent -> Support for VisualBasicDeclarationSyntax needed
130. BuildAssembler: TransformComponent -> Support for CPlusPlusDeclarationSyntax
131. BuildAssembler: What exactly does Warn: ResolveReferenceLinksComponent message mean?
132. need links in NSR to See Also, Example, Members
133. Need multiple assembly info for duplicates like Theme APIs
134. Need to support all required keywords (xml data island attributes)
135. Terms in Inheritance Hierarchy section need to have fully qualified names
136. Need support for "Version Information" section
137. Namespace topics have a syntax block section; not part of the original design
138. UCER Style not appearing in build
139. XAML Syntax Autogeneration & Authored Section Overrides
140. No separate Derived Classes topic for inheritance hierarchy section
141. Missing Visual Basic (Usage) syntax
142. Missing JScript syntax
143. no Index entries for mref build
144. no member list topics, e.g. Methods, Properties, etc.
145. no member list filtering, e.g. for inherited, CF, XNA, protected
146. Manifold BuildAssembler: If the output path does not exist as specified in the .config, tool should create it for you
147. TextureLoader.FromFile has not been updating in any of the builds
148. Environment variable is not parsed in ExampleComponent
149. Sandcastle Nov CTP - <list type="number"> generated as bullet list
150. Examples code is not well formatted in vs2005 transform
151. Escape Characters in Codesnippet.xml are not converted back to original characters
152. Manifold ExampleComponent drops every other code sample
153. Manifold ExampleComponent double escapes HTML entities
154. DS Bug: Enumeration members are not ordered by value in the built page, they are ordered alphabetically.
155. Manifold Explicit Interface Implements rendering behavior does not follow CPLT guidelines
156. The requirements section on WPF reference pages should include XML namespace information
157. SandCastle:Presentation\Prototype\Scripts\StyleUtilities.js file that causes pages to throw JavaScript errors when it tries to access a style sheet with an ms-help URL
158. Sandcastle: CHM tab Index are untranslated.
159. Ability to rip NS, Types and members from Reflection.xml
160. BuildAssembler: .config (BuildComponents.dll) - SyntaxGenerators.dll - Support for XAML syntax
161. Syntax Builder VB Syntax: Class name in Interface implementation example should be in PascalCase.
162. Syntax Builder VB Syntax: There should be a newline between the parameter declarations and method call for Method Usage Syntax
163. Syntax Builder UETP: Syntax for generic types that implement generic interfaces is lacking the generic interfaces
164. Syntax Builder Generics: Property marked as virtual by class implementing a generic interface
165. Syntax Builder UETP: VB, C#, and C++ syntax for properties of explicit interface implementations on generic classes is incorrect
166. Syntax Builder NDP: UETP: Bad link in VB usage syntax for some explicit interface implementations on System.Windows.Forms.Control class
167. Syntax Attribute filtering not working
168. Visual Basic Usage syntax is missing boilerplate for static classes
169. Syntax Builder interface implementation should be marked virtual only if the keywork "final" does not appear in its IL;
170. Syntax Builder Add support for Operator and Type Conversion methods
171. Syntax Builder ConditionalAttribute should not be displayed in C++ syntax section because it is ignored by the compiler
172. Syntax Builder - links to HIS instead of MSDN for DataRelationCollection and DataTableCollection
173. Syntax Builder generates incorrect MSDN type links for ADODB types and MSXMLLib.DOMDocument
174. Syntax Builder Generics: Missing syntax block for type that inherits from generic ICollection
175. Span class=UI needs to be bolded
176. Tables too wide
177. The Abstract/Description attribute is missing
178. Investigate missing index keywords in dv_fxtools?
179. Blue text in topic
180. "Copy Code" Text missing next to the copy code icon
181. Content Dropout: Namespace topics need to include text from both Summary and Remarks
182. namespace/Assembly note should not be in namespace topics
183. Poor XAML "Not Applicable" notes layout
184. K and F Index transforms for managed ref content are not rendering specific enough information
185. See Also section has formatting problem on namespaces pages
186. need XAML F-keywords for F1 Help
187. Some art in tables drops out
Issues not fixed in this version:
1. CHM is missing state -
2. Bug: Interihance Hierarchy VS2005 incorrect when using inner classes due to issue in BuildComponents.
I would love to hear your feedback about this CTP. Cheers.
Anand..
Excellent! It looks really nice.
It took only 2 hours to upgrade all scripts and files for the new releases ( is upgraded as well).
A (minor) problem: the TOC entry of methods that have overloads do not display correctly. The 'intrinsic' parameter types are not displayed at all and maybe in other cases.
Thank you!
Gael
Unless I'm doing something wrong, I no longer see method or parameter comments? I do see parameter names and types however.
Awesome!!!. Thank you.
But I have a problem. I tried to run the batch file with "/internal+" and I get "BuildAssembler" error and when I click "Don't Send" I get "XSLTransformer" error and the help file is created with empty stuff.
I added a private method so that I can see the documentation. But I'm not.
Am I missing something here? Any help would be appreciated.
The method and parameter comments should appear and I am very sure there is some issue with your config files. Are you including comments file location in your config?
Anand.
It looks like we have a regression in Build Assembler with /internal+ option. I am sorry about this and we will provide a technical refresh with a fix for this.
Got it fixed, thank you. -- Looks great!
I'm still not getting the method and parameter comments. Would you please tell me in detail what needs to be done to have those comments pop up.
Kid.
Got the Method and parameter comments fixed. Thank you.
What is Sandcastle? Sandcastle documentation compilers enable managed class library developers throughout
Une nouvelle CTP de Sandcastle est disponible . Au menu : What's new in this version: Added 4 new transforms
I kind of got the impression from some of the comments here that the /internal+ option has been fixed, but I still cannot create help files using that option. I get the BuildAssembler and XSLTransformer errors. Any timeframe for a fix?
Richard
I've just upgraded to the Sandcastle March CTP and Sandcastle Help File Builder 1.4.0.0, and in general, they're great, but I'm seeing a bunch of new warnings.
First, if I have "show missing summaries" turned on in SHFB, I get:
"Warn: ShowMissingComponent: Missing <summary> documentation..."
for every class for the new AllMembers, Methods, and Properties pages. Those pages pick up the class summary, so the warnings appear to be spurious, but I have to turn off "show missing summaries" to keep the "missing" messages from appearing in my help output.
Then, when generating the HxS help, for each of my enums, I get:
Warning HXC6042: File (html filename) Line 1, Char 1137: <MSHelp:Keyword> tag requires attribute Term.
Warning HXC6031: File (html filename) Line 1, Char 1137: No term defined in <MsHelp:Keyword>.
I haven't been able to find anything that I can add to my enum XML doc tags to address this.
(Hmm, I also just noticed that the table heading for enum value names now reads "Class", whereas before it was the correct "Member".)
None of these are showstoppers, but I'd prefer not to have to ignore warnings.
Thanks!
I stumbled across some information on "Sandcastle" this morning. Sandcastle creates documentation for
When I execute MRefBuilder it give me errors of other assemblies references by my current assembly.
Error: Unresolved assembly reference: Radixx.Types (Radixx.Types, Version=2.0.0.0, Culture=neutral, PublicKeyToken=null)
required by ASSEMBLYNAME.
What Am I missing???
josestromberg@yahoo.com
Announcing March 2007 Sandcastle CTP
Trademarks |
Privacy Statement | http://blogs.msdn.com/sandcastle/archive/2007/03/06/announcing-march-2007-sandcastle-ctp.aspx | crawl-002 | refinedweb | 3,111 | 60.92 |
Parameters specifying how to draw text. More...
#include <StelRenderer.hpp>
Parameters specifying how to draw text.
These are passed to StelRenderer::drawText().
This is a builder-style struct. Parameters can be specified like this:
// Note that text position and string must be always specified, so they // are in the constructor. // Default parameters (no rotation, no shift, no gravity, 2D projection). TextParams a(16, 16, "Hello World"); // Rotate by 30 degrees. TextParams b = TextParams(16, 16 "Hello World!").angleDegrees(30.0f); // Rotate by 30 degrees and shift by (8, 4) in rotated direction. TextParams c = TextParams(16, 16 "Hello World!").angleDegrees(30.0f).shift(8.0f, 4.0f);. | http://stellarium.org/doc/0.12.0/structTextParams.html | CC-MAIN-2014-42 | refinedweb | 107 | 53.78 |
MapTransformMapTransform
Map and transform objects with mapping definitions.
Behind this boring name hides a powerful object transformer.
Some highlighted features:
- You pretty much define the transformation by creating the JavaScript object you want as a result, setting paths and transformation functions, etc. where they apply.
- There's a concept of a transform pipeline, that your data is passed through, and you define pipelines anywhere you'd like on the target object.
- By defining a mapping from one object to another, you have at the same time defined a mapping back to the original format (with some gotchas).
Let's look at a simple example:
const { mapTransform } = require('map-transform') // You have this object const source = { data: [ { content: { name: 'An interesting piece', meta: { author: 'fredj', date: 1533750490952 } } } ] } // You describe the object you want const def = { title: 'data[0].content.name', author: 'data[0].content.meta.author', date: 'data[0].content.meta.date' } // You feed it to mapTransform and get a map function const mapper = mapTransform(def) // Now, run the source object through the mapper and get what you want const target = mapper(source) // --> { // title: 'An interesting piece', // author: 'fredj', // date: 1533750490952 // } // And run it in reverse to get to what you started with: const source2 = mapper.rev(target) // -> { data: [ { content: { name: 'An interesting piece' }, meta: { author: 'fredj', date: 1533750490952 } }, ] }
You may improve this with pipelines, expressed through arrays. For instance,
retrieve the
content object first, so you don't have to write the entire
path for every attribute:
const def2 = [ 'data[0].content', { title: 'name', author: 'meta.author', date: 'meta.date' } ] const target2 = mapTransform(def2)(source) // --> { // title: 'An interesting piece', // author: 'fredj', // date: 1533750490952 // }
Maybe you want the actual date instead of the microseconds since the seventies:
const { mapTransform, transform } = require('map-transform') // .... // Write a transform function, that accepts a value and returns a value const msToDate = ms => new Date(ms).toISOString() const def3 = [ 'data[0].content', { title: 'name', author: 'meta.author', date: ['meta.date', transform(msToDate)] } ] const target3 = mapTransform(def3)(source) // --> { // title: 'An interesting piece', // author: 'fredj', // date: '2018-08-08T17:48:10.952Z' // } // You may also reverse this, as long as you write a reverse version of // `msToDate` and provide as a second argument to the `trasform()` function.
... and so on.
Getting startedGetting started
PrerequisitsPrerequisits
Requires node v8.6.
InstallingInstalling
Install from npm:
npm install map-transform --save
UsageUsage
The transform objectThe transform object
Think of the transform object as a description of the object structure you want.
Keys on the transform objectKeys on the transform object
In essence, the keys on the transform object will be the keys on the target object. You may, however, specify a key with dot notation, which will be split out to child objects on the target. You can also specify the child objects directly on the transform object, so in most cases this is just a matter of taste.
const def1 = { 'data.entry.title': 'heading' } const def2 = { data: { entry: { title: 'heading' } } } // def1 and def2 are identical, and will result in an object like this: // { // data: { // entry: { // title: 'The actual heading' // } // } // }
If MapTransform happens upon an array in the source data, it will map it and
set an array where each item is mapped according to the mapping object. But to
ensure that you get an array, even when the source data contains only an object,
you may suffix a key with brackets
[].
const def3 = { 'data.entries[]': { title: 'heading' } } // def3 will always give you entries as an array: // { // data: { // entries: [ // {title: 'The actual heading'} // ] // } // }
Values on the transform objectValues on the transform object
The values on the transform objects define how to retrieve and transform data from the source object, before it is set on the target object.
As you have already seen, you may set a transform object as the value, which will result in child objects on the target, but at some point, you'll have to define how to get data from the source object.
The simplest form is a dot notation path, that describes what prop(s) to pick from the source object for this particular target key. It will retrieve whatever is at this path on the source object.
const def4 = { title: 'data.item.heading' } const source1 = { data: { item: { id: 'item1', heading: 'The actual heading', intro: 'The actual intro' } } } // `mapTransform(def4)(source1)` will transform to: // { // title: 'The actual heading' // }
The target object will only include values from the source object that is "mentioned" by the mapping object.
The paths for the source data may also include brackets to indicate arrays in the data. It is usually not necessary, as MapTransform will map any array it finds, but it may be good to indicate what you expect from the source data, and it may be important if you plan to reverse transform the mapping object.
To pass on the value on the pipeline, use an empty path
''or a dot
'.'.
Another feature of the bracket notation, is that you may pick a single item from an array by indicating the array index in the brackets.
const def5 = { title: 'data.items[0].heading' } // def5 will pull the heading from the first item in the `items` array, and will // not return any array: // { // title: 'The actual heading' // }
Finally, a transform object value may be set to a transform pipeline, or one function that could go in the transform pipeline (which the dot notation path really is, and – come to think of it – the transform object itself too). This is explained in detail below.
Transform pipelineTransform pipeline
The idea of the transform pipeline, is that you describe a set of transformations that will be applied to the data given to it, so that the data will come out on the other "end" of the pipeline in another format. You may also insert data on the other end of the pipeline, and get it out in the original format again (although with a potential loss of data, if not all properties are transformed). This is what you do in a reverse mapping.
One way to put it is that the pipeline describes the difference between the two possible states of the data, and allows you to go back and forth between them. Or you can just view it as operations applied in the order they are defined – or back again.
You define a pipeline as an array that may hold dot notation paths, transform objects and transform operations of different kinds (see below). If the pipeline holds only one of these, you may actually skip the array. This is a handy shortcut in some cases.
Here's an example pipeline that will retrieve an array of objects from the path
data.items[], map each object to an object with the props
id,
title, and
sections (
title is shortened to max 20 chars and
sections will be an array
of ids pulled from an array of section objects), and finally filter away all
items with no values in the
sections prop.
import { transform, filter } from 'map-transform' const def6 = [ 'data.items[]', { id: 'articleNo', title: ['headline', transform(maxLength(20))], sections: 'meta.sections[].id' }, filter(onlyItemsWithSection) ]
(Note that in this example, both
maxLength and
onlyItemsWithSection are
custom functions for this case, but their implementations are not provided.)
transform(fn, fnRev) operation
The simple beauty of the
transform() operation, is that it will apply whatever
function you provide it with to the data at that point in the pipeline. It's
completely up to you to write the function that does the transformation.
You may supply a second function (
fnRev), that will be used when
reverse mapping. If you only supplies one function, it will
be used in both directions. You may supply
null for either of these, to make
it uni-directional, but it might be clearer to use
fwd() or
rev() operations
for this.
The functions you write for the transform operation should accept the source data as its only argument, and return the result of the relevant transformation. The data may be an object, a string, a number, a boolean, or an array of these. It's really just up to you to write the appropriate function and use it at the right place in a transform pipeline.
A simple transform function could, for instance, try to parse an integer from whatever you give it. This would be very useful in the pipeline for a property expecting numeric values, but MapTransform would not protest should you use it on an object. You would probably just not get the end result you expected.
import { mapTransform, transform } from 'map-transform' const ensureInteger = data => Number.parseInt(data, 10) || 0 const def7 = { count: ['statistics.views', transform(ensureInteger)] } const data = { statistics: { view: '18' // ... } } mapTransform(def7)(data) // --> { // count: 18 // }
This is also a good example of a transformation that only makes sense in one
direction. This will still work in reverse, ending in almost the same object
that was provided, but with a numeric
view property. You may supply a
reverse transform function called
ensureString, if it makes sense in your
particular case.
The functions you provide for the transform operation are expected to be pure, i.e. they should not have any side effects. This means they should
- not alter the data their are given, and
- not rely on any state outside the function
Principle 1 is an absolute requirement, and principle 2 should only be violated
when it is what you would expect for the particular case. As an example of the
latter, say you write the function
toAge, that would return the number of
years since a given year or date. You would have to use the current date to be
able to do this, even though it would be a violation of principle 2.
That said, you should always search for ways to satisfy both principles. Instead
of a
toAge function, you could instead write a curried
yearsSince function,
that would accept the current date (or any date) as the first argument. This
would be a truly pure function.
Example transformation pipeline with a
yearsSince function:
const def8 = { age: ['birthyear', yearsSince(new Date())] }
You may also define a transform operation as an object:
import { mapTransform } from 'map-transform' const ensureInteger = operands => data => Number.parseInt(data, 10) || 0 const customFunctions = { ensureInteger } const def7asObject = { count: ['statistics.views', { $transform: 'ensureInteger' }] } const data = { statistics: { view: '18' // ... } } mapTransform(def7asObject, { customFunctions })(data) // --> { // count: 18 // }
Note that the function itself is passed on the
customFunctions object. When
you provide the custom function this way, it should be given as a function
accepting an object with operands / arguments, that returns the actual function
used in the transform. Any properties given on the operation object, apart from
$transform, will be passed in the
operands object.
filter(fn) operation
Just like the transform operation, the filter operation will apply whatever
function you give it to the data at that point in the transform pipeline, but
instead of transformed data, you return a boolean value indicating whether to
keep the data or not. If you return
true the data continues through the
pipeline, if you return
false it is removed.
When filtering an array, the function is applied to each data item in the array,
like a normal filter function, and a new array with only the items that your
function returns
true for. For data that is not in an array, a
false value
from your function will simply mean that it is replaced with
undefined.
The filter operation only accepts one argument, which is applied in both
directions through the pipeline. You'll have to use
fwd() or
rev()
operations to make it uni-directional.
Functions passed to the filter operation, should also be pure, but could, when it is expected and absolutely necessary, rely on states outside the function. See the explanation of this under the transform operation above.
Example of a filter, where only data of active members are returned:
import { mapTransform, filter } from 'map-transform' const onlyActives = (data) => data.active const def9 = [ 'members' { name: 'name', active: 'hasPayed' }, filter(onlyActives) ]
Defining a filter operation as an object:
import { mapTransform } from 'map-transform' const onlyActives = (data) => data.active const customFunctions = { onlyActives } const def9asObject = [ 'members' { name: 'name', active: 'hasPayed' }, { $filter: 'onlyActives' } ]
See the
transform() operation on how defining as an object works.
value(data) operation
The data given to the value operation, will be inserted in the pipeline in place of any data that is already present at that point. The data may be an object, a string, a number, a boolean, or an array of any of those.
This could be useful for:
- Setting a fixed value on a property in the target data
- Providing a default value to the alt operation
Example of both:
import { value, alt } from 'map-transform' const def10 = { id: 'data.customerNo', type: value('customer'), name: ['data.name', alt(value('Anonymous'))] }
The operation will not set anything when mapping with
.onlyMappedValues().
fixed(data) operation
The data given to the fixed operation, will be inserted in the pipeline in place of any data that is already present at that point. The data may be an object, a string, a number, a boolean, or an array of any of those.
This is exactly the same as
value(), except that the value set with
fixed()
will be included when mapping with
.onlyMappedValues() as well.
alt(pipeline) operation
The alt operation will apply the function or pipeline it is given when the data
already in the pipeline is
undefined. This is how you provide default values
in MapTransform. The pipeline may be as simple as a
value() operation, a dot
notation path into the source data, or a full pipeline of several operations.
import { alt, transform, value } from 'map-transform' const currentDate = data => new Date() const formatDate = data => { /* implementation not included */ } const def11 = { id: 'data.id', name: ['data.name', alt(value('Anonymous'))], updatedAt: [ 'data.updateDate', alt('data.createDate'), alt(transform(currentDate)), transform(formatDate) ] }
In the example above, we first try to set the
updatedAt prop to the data found
at
data.updateDate in the source data. If that does not exist (i.e. we get
undefined), the alt operation kicks in and try the path
data.createDate. If
we still have
undefined, the second alt will call a transform operation with
the
currentDate function, that simply returns the current date as a JS object.
Finally, another transform operation pipes whatever data we get from all of this
through the
formatDate function.
You may also define an alt operation as an object:
const def11asObject = { id: 'data.id', name: ['data.name', { $transform: 'alt', value: 'Anonymous' }] }
For now, only the value operand is supported. In the example above, the value
'Anonymous' will be used when
data.name is undefined.
concat(pipeline, pipeline, ...) operation
The
concat() operation will flatten the result of every pipeline it is given
into one array. A pipeline that does not return an array will simple have its
return value appended to the array.
This operation will always return an array, even when it is given only one
pipeline that does not return an array. Pipelines that does not result in a
value (i.e. return
undefined) will be filtered away.
fwd(pipeline) and
rev(pipeline) operation
All operations in MapTransform will apply in both directions, although some of
them will behave a bit different dependending on the direction. If you want an
operation to only apply to one direction, you need to wrap it in a
fwd() or
rev() operation. The
fwd() operation will only apply its pipeline when we're
going forward, i.e. mapping in the normal direction, and its pipeline will be
skipped when we're mapping in reverse. The
rev() operation will only apply its
pipeline when we're mapping in reverse.
import { fwd, rev, transform } from 'map-transform' const increment = data => data + 1 const decrement = data => data - 1 const def12 = { order: ['index', fwd(transform(increment)), rev(transform(decrement))] }
In the example above, we increment a zero-based index in the source data to get a one-based order prop. When reverse mapping, we decrement the order prop to get back to the zero-based index.
Note that the
order pipeline in the example above could also have been written
as
['index', transform(increment, decrement)], as the transform operation
supports seperate forward and reverse functions, when it is given two functions.
In this case you may choose what you think is clearer, but in other cases, the
fwd() and
rev() operations are your only friends.
divide(fwdPipeline, revPipeline) operation
divide() is
fwd() and
rev() operations combined, where the first argument
is a pipeline to use when going forward and the second when going in reverse.
See
fwd() and
rev() for more details.
get(path) and
set(path) operation
Both the
get() and
set() operations accepts a dot notation path to act on.
The get operation will pull the data at the path in the source data, and insert
it in the pipeline, while the set operation will take what's in the pipeline
and set it on the given path at a new object.
One reason they come as a pair, is that they will switch roles for reverse mapping. Their names might make this a bit confusing, but in reverse, the get operation will set and the set operation will get.
import { get, set } from 'map-transform' const def13 = [get('data.items[].content'), set('content[]')]
In the example above, the get operation will return an array of whatever is in
the
content prop at each item in the
data.items[] array. The set operation
will then create a new object with the array from the pipeline on the
content
prop. Reverse map this end result, and you'll get what you started with, as the
get and set operations switch roles.
You may notice that the example above could have been written with a transform object, and you're absolutely right. The transform object is actually an alternative to using get and set operations, and will be converted to get and set operations behind the curtains.
This example results in the exact same pipeline as the example above:
const def14 = { 'content[]': 'data.items[].content' }
It's simply a matter of taste and of what's easiest in each case. We believe that the transform object is best in cases where you describe a target object with several properties, while get and set operations is best suited to define root paths for objects or arrays.
The get operation also has a shortcut in transform pipelines: Simply provide the
path as a string, and will be treated as
get(path).
root(pipeline) operation
When you pass a pipeline to the root operation, the pipeline will be apply to the data that was original passed to the pipeline. Note that the result of a root pipeline will still be applied at the point you are in the parent pipeline, so this is not a way to alter data out of the pipeline.
Let's look at an example:
import { mapTransform, root } from 'map-transform' const def15 = [ 'articles[]', { id: 'id', title: 'headline', section: root('meta.section') } ] const data = { articles: [{ id: '1', headline: 'An article' } /* ... */], meta: { section: 'news' } } mapTransform(def15)(data) // --> [ // { id: '1', title: 'An article', section: 'news' } // /* ... */ // ]
As you see, every item in the
articles[] array, will be mapped with the
section property from the
meta object. This would not be available to the
items without the root operation.
There's also a shortcut notation for root, by prefixing a dot notation path with
get() and
set() operations, and in transformation objects.
$. This only works when the path is used for getting a value, and it will be plugged when used as set (i.e., it will return no value). This may be used in
In the following example,
def16 and
def17 is exactly the same:
const def16 = get('$meta.section') const def17 = divide(root('meta.section'), plug())
plug() operation
All the
plug() operation does is set clear the value in the pipeline - it
plugs it. The value will be set to
undefined regardless of what has happened
before that point. Any
alt() operations etc. coming after the plug will still
have an effect.
This main use case for this is to clear the value going one way. E.g. if you
need a value when you map in reverse, but don't want it going forward, plug it
with
fwd(plug()). You will also need it in a pipeline where the only operation
is uni-directional (i.e. using
fwd() or
rev()). An empty pipeline (which is
what a uni-directional pipeline will be in the other direction), will return
the data you give it, which is usually not what you want in these cases.
The solution is to plug it in the other direction.
You could have accomplished the same with
value(undefined), but this will not
work for
onlyMappedValues().
plug() will do its trick in all cases.
lookup(arrayPath, propPath) operation
lookup() will take the value in the pipeline and replace it with the first
object in the
arrayPath array with a value in
propPath matching the value.
In reverse, the
propPath will simply be used as a get path. (In the future,
MapTransform might support setting the items back on the
arrayPath in
reverse.)
Example:
const def18 = ['content.meta.authors[]', lookup('$users[]', 'id')] const data = { content: { meta: { authors: ['user1', 'user3'] } }, users: [ { id: 'user1', name: 'User 1' }, { id: 'user2', name: 'User 2' }, { id: 'user3', name: 'User 3' } ] } const mapper = mapTransform(def18) const mappedData = mapper(data) // --> [ // { id: 'user1', name: 'User 1' }, // { id: 'user3', name: 'User 3' } // ] mapper.rev(mappedData) // --> { content: { meta: { authors: ['user1', 'user3'] } } }
compare(path, value) helper
This is a helper intended for use with the
filter() operation. You pass a dot
notation path and a value (string, number, boolean) to
compare(), and it
returns a function that you can pass to
filter() for filtering away data that
does not not have the value set at the provided path. If the path points to an
array, the value is expected to be one of the values in the array.
Here's an example where only data where role is set to 'admin' will be kept:
import { filter, compare } from 'map-transform' const def19 = [ { name: 'name', role: 'editor' }, filter(compare('role', 'admin')) ]
validate(path, schema) helper
This is a helper for validating the value at the path against a
JSON Schema. We won't go into details of JSON Schema
here, and the
validate() helper simply retrieves the value at the path and
validates it according to the provided schema.
Note that if you provide a schema that is always valid, it will be valid even when the data has no value at the given path.
import { filter, validate } from 'map-transform' const def20 = [ 'items', filter(validate('draft', { const: false })), { title: 'heading' } ]
not(value) helper
not() will return
false when value if truthy and
true when value is falsy.
This is useful for making the
filter() operation do the opposite of what the
filter function implies.
Here we filter away all data where role is set to 'admin':
import { filter, compare } from 'map-transform' const def21 = [ { name: 'name', role: 'role' }, filter(not(compare('role', 'admin'))) ]
Reverse mappingReverse mapping
When you define a transform pipeline for MapTransform, you also define the reverse transformation, i.e. you can run data in both direction through the pipeline. This comes "for free" for simple mappings, but might require some extra work for more complex mappings with transform operations, alt operations, etc.
You should also keep in mind that, depending on your defined pipeline, the mapping may result in data loss, as only the data that is mapped to the target object is kept. This may be obvious, but it's an important fact to remember if you plan to map back and forth between two states – all values must be mapped to be able to map back to the original data.
Let's see an example of reverse mapping:
import { mapTransform, alt, value } from 'map-transform' const def22 = [ 'data.customers[]', { id: 'customerNo', name: ['fullname', alt(value('Anonymous'))] } ] const dataInTargetState = [ { id: 'cust1', name: 'Fred Johnsen' }, { id: 'cust2', name: 'Lucy Knight' }, { id: 'cust3' } ] const dataInSourceState = mapTransform(def22).rev(dataInTargetState) // --> { // data: { // customers: [ // { customerNo: 'cust1', fullname: 'Fred Johnsen' }, // { customerNo: 'cust2', fullname: 'Lucy Knight' }, // { customerNo: 'cust3', fullname: 'Anonymous' } // ] // } // }
Transform objects allow the same property on the source data to be mapped to several properties on the target object, but to this in reverse, you have to use a special syntax, as object properties need to be unique. By suffixing a key with a slash and a number, you tell MapTransform to use it in reverse, but skipping it going forward.
For example:
import { mapTransform, transform } from 'map-transform' const username = name => name.replace(/\s+/, '.').toLowerCase() const def23 = [ 'data.customers[]', { id: 'customerNo', name: 'fullname', 'name/1': ['username', rev(transform(username))] } ] const dataInTargetState = [{ id: 'cust1', name: 'Fred Johnsen' }] const dataInSourceState = mapTransform(def23).rev(dataInTargetState) // --> { // data: { // customers: [ // { customerNo: 'cust1', fullname: 'Fred Johnsen', username: 'fred.johnsen' } // ] // } // }
Mapping without fallbacksMapping without fallbacks
MapTransform will try its best to map the data it gets to the state you want,
and will always set all properties, even though the mapping you defined result
in
undefined. You may include
alt() operations to provide default or fallback
values for these cases.
But sometimes, you want just the data that is actually present in the source
data, without defaults or properties set to
undefined. MapTransform's
onlyMappedValues() method gives you this.
Note that
value() operations will also be skipped when mapping with
onlyMappedValues(), to honor the request for only the values that comes from
the data source. To override this behavior, use the
fixed() operation instead,
which will set a value also in this case.
import { mapTransform, alt, value } from 'map-transform' const def24 = { id: 'customerNo', name: ['fullname', alt(value('Anonymous'))] } const mapper = mapTransform(def24) mapper({ customerNo: 'cust4' }) // --> { id: 'cust4', name: 'Anonymous' } mapper.onlyMappedValues({ customerNo: 'cust4' }) // --> { id: 'cust4' } mapper.onlyMappedValues({ customerNo: 'cust5', fullname: 'Alex Troy' }) // --> { id: 'cust5', name: 'Alex Troy' } // The method is also available for reverse mapping mapper.rev.onlyMappedValues({ id: 'cust4' }) // -> { customerNo: 'cust4' }. | https://www.npmjs.com/package/map-transform | CC-MAIN-2022-21 | refinedweb | 4,336 | 52.09 |
An ANTLR-based Dart parser.
Eventual goal is compliance with the ECMA Standard.
Right now, it will need a lot more tests to prove it works.
Special thanks to Tiago Mazzutti for this port of ANTLR4 to Dart.
dependencies: dart_parser: ^1.0.0-dev
This will automatically install
antlr4dart as well. addition, I had to change the way strings are lexed. This is out of
line with the specification. The
stringLiteral now looks like this:
stringLiteral: (SINGLE_LINE_STRING | MULTI_LINE_STRING)+;
To handle the contents of strings, you will have to do it manually, like via Regex. Sorry!
In addition, I modified the rules of external declarations, so that
you could include metadata before the keyword
external. The rule
defined in the spec didn't permit that, although that's accepted by
dartanalyzer, dart2js, etc.(parser.compilationUnit()); }
This package includes a web app that diagrams your parse trees.
Alternatively, you can use
grun, which ships with the ANTLR tool
itself.?).
As always, thanks for using this.
Feel free to follow me on Twitter, or to check out my blog.
Add this to your package's pubspec.yaml file:
dependencies: dart_parser: "^1.0.0-dev+7"
You can install packages from the command line:
with pub:
$ pub get
with Flutter:
$ flutter packages get
Alternatively, your editor might support
pub get or
packages get.
Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:dart_parser/dart_parser.dart';
We analyzed this package, and provided a score, details, and suggestions below.
Detected platforms: Flutter, web, other
No platform restriction found in primary library
package:dart_parser/dart_parser.dart.
Fix
lib/src/dartlang_parser.dart.
Strong-mode analysis of
lib/src/dartlang_parser.dartfailed with the following error:
line: 1197 col: 27
The method 'adaptivePredict' isn't defined for the class 'AtnSimulator'.
Maintain
CHANGELOG.md.
Changelog entries help clients to follow the progress in your code.
dart_parser.dart. | https://pub.dartlang.org/packages/dart_parser | CC-MAIN-2018-09 | refinedweb | 316 | 59.9 |
I finally figured out a solution and I am posting it here in case someone else faces this issue. As I had suspected in my original post, this turned out to be an Arduino issue and to a lesser extent a Windows thing. In Arduino when you create a custom characteristic you can choose the option to read/write/notify/indicate. These are the standard 4 characteristic properties. There is also another property 'BLEWriteWithoutResponse', which hasn't been discussed a lot on forums. And it is this property that needs to used in your Arduino Nano33 BLE sketch.
The difference between BLEWrite and BLEWriteWithoutResponse is that the latter doesn't expect an acknowledgement or approval for MATLAB's request to write to the characteristic. If your Arduino is setup to send acknowledgements in response to Write request, then BLEWrite will work too (I think, but I haven't tried this). Now, the other weird thing is (which is why I said its a Windows thing), it didn't matter on a MAC computer whether the Arduino was configured with BLEWrite or BLEWriteWithoutResponse. It worked perfectly well on a MAC computer running MATLAB. But, on a Windows machine, it did make a difference.
Finally, here is a snippet of my Arduino sketch, just to show clearly how the custom characteristic must be setup -
#include <ArduinoBLE.h>
#include <Arduino_LSM9DS1.h> // 9 axis IMU on Nano 33 BLE Sense board
// BLE Service
BLEService imuService("917649A0-D98E-11E5-9EEC-0002A5D5C51B"); // Custom UUID
// Matlab writes to this characteristic:
BLEByteCharacteristic WriteChar("917649A1-D98E-11E5-9EEC-0002A5D5C51C", BLERead | BLEWriteWithoutResponse);
void setup(){
...}
void loop(){
...} | https://www.mathworks.com/matlabcentral/answers/556798-ble-write-to-arduino-nano-33-ble-is-unsuccessful?s_tid=prof_contriblnk | CC-MAIN-2022-27 | refinedweb | 266 | 62.07 |
Strut
Strut example how to use validators in struts
Hi.. - Struts
Hi.. Hi,
I am new in struts please help me what data write.......
How about... |
Hibernate Tutorial |
Spring Framework Tutorial
| Struts Tutorial
hi... - Struts
also its very urgent Hi Soniya,
I am sending you a link. I hope...hi... Hi Friends,
I am installed tomcat5.5 and open the browser and type the command but this is not run please let me
Capturing JSP Content Within My Strut Action Servlet - JSP-Servlet
Capturing JSP Content Within My Strut Action Servlet My end goal... or Struts Action ... */
BufferedHttpResponseWrapper wrapper = new... by: deeeeeean0hhh on Mar 5, 2009 9:05 AM Hi friend,
For solving
broad band case study
broad band case study Broadband Case Study
LKV Company is a new firm... the billing is done on 20th
of very month. The payment has to be made within 10 days... and also change the plans.
The customer can call anytime to en-quire about his details
Know About Outsourcing, More About Outsourcing, Useful Information Outsourcing
Everything you need to Know about Outsourcing
Introduction
Let us start at the beginning with a definition of what is outsourcing. Outsourcing can... and offshoring. For them, both terms mean the same, but this is not very accurate
Struts Warnings ...About FormBeanConfig & about Cancel Forward - Struts
Struts Warnings ...About FormBeanConfig & about Cancel Forward Hi Friends...
I am trying a very small code samples of Contact Application i...)WARNING: Unable to find 'cancel' forward.
the contents of struts-config.xml
Index | About-us | Contact Us
|
Advertisement |
Ask
Questions | Site
Map | Business Software
Services India
Tutorial Section ... Tutorials |
WAP Tutorial
|
Struts
Tutorial |
Spring
Explain about Cross site scripting?
Explain about Cross site scripting? Explain about Cross site scripting
Struts - Struts
Struts Dear Sir ,
I am very new in Struts and want to learn about validation and custom validation. U have given in a such nice way... provide the that examples zip.
Thanks and regards
Sanjeev. Hi friend
Hi.. - Struts
Hi..
Hi Friends,
I am new in hibernate please tell me.....if i am using hibernet with struts any database pkg is required or not.....without any database package using maintain data in struts+hiebernet....please help
About Struts processPreprocess method - Struts
About Struts processPreprocess method Hi java folks,
Help me... is not valid? Can I access DB from processPreprocess method.
Hi... will abort request processing.
For more information on struts visit
HI!!!!!!!!!!!!!!!!!!!!!
HI!!!!!!!!!!!!!!!!!!!!! import java.awt.*;
import java.sql....);
label2=new JLabel("Year of study");
text2=new JTextField(10...;Hi Friend,
Try this:
import java.awt.*;
import java.sql.*;
import javax.swing.
hi.......
hi....... import java.awt.;
import java.sql.*;
import javax.swing....");
text=new JPasswordField(10);
label2=new JLabel("Year of study");
text2=new...){
}
}
can anyone tell wts wrong with this code??
Hi,
Check it:
import
Hi... - Struts
Hi... Hi,
If i am using hibernet with struts then require... of this installation Hi friend,
Hibernate is Object-Oriented mapping tool... more information,tutorials and examples on Struts with Hibernate visit
Offshore Outsourcing Tips,Useful Offshore Outsourcing Tips,Helpful Outsourcing Tips
way to go about offshore outsourcing?
There are no definite answers... very little of. Selection of associates can be
very tricky... These
First things first, have a detailed study conducted on
all
Useful Negotiation Tips on Outsourcing, Helpful Negotiation Tips
about what you want and keep that in focus while negotiating
terms...
requirements. This can be useful, as the third party can
bring in a lot of objectivity. However the organization
needs to study the consultant well. Some
Hi... - Struts
Hi... Hello Friends,
installation is successfully
I am instaling jdk1.5 and not setting the classpth in enviroment variable please write the classpath and send classpath command Hi,
you set path = C
WEB SITE
WEB SITE can any one plzz give me some suggestions that i can implement in my site..(Some latest technology)
like theme selection in orkut
like... Technical Subject if u have knowledge about PHP, MySQL, JavaScript, Jquery, and CSS
struts i have no any idea about struts.please tell me briefly about struts?**
Hi Friend,
You can learn struts from the given link:
Struts Tutorials
Thanks 2 version 2.3.16.3 released
Struts 2 is very elegant and highly extensible web application
development...Struts 2 version 2.3.16.3 released - Here is the latest maven dependencies
code
Latest version of Struts 2: Struts 2 version 2.3.16.3 released
struts
struts hi
Before asking question, i would like to thank you for clarifying my doubts.this site help me a lot to learn more & more technologies like servlets, jsp,and struts.
i am doing one struts application where i
About struts
About struts How will we configure the struts - Framework
/struts/". Its a very good site to learn struts.
You dont need to be expert... struts application ?
Before that what kind of things necessary to learn
and can u tell me clearly sir/madam? Hi
Its good
Interceptors in Struts 2
in Struts 2 but
custom interceptors can also be integrated. Lets discuss about...: The autowiring interceptor is very
useful as it auto wires action classes to Spring...Interceptors in Struts 2
Interceptors are conceptually analogous to Servlet
struts - Struts
struts hi,
what is meant by struts-config.xml and wht are the tags... of struts, u understand wverything
goto google type for strut-blank.jar and search you get the jar file Hi friend,
struts.config.xml : Struts has
Program Very Urgent.. - JSP-Servlet
Program Very Urgent.. Respected Sir/Madam,
I am R.Ragavendran.... its most urgent..
Thanks/Regards,
R.Ragavendran..
Hi friend... the correct value of empid and empname and run once again.
Read about ajax
Struts Books
covers everything you need to know about Struts and its supporting technologies...: you will learn to use Struts very
effectively.John Carnell... the Jakarta Struts Framework
Companion Web site provides electronic
Hi... - Java Beginners
Hi... Hi friends,
I hv two jsp page one is aa.jsp & bb.jsp... but this not working
Upload Record
please tell me Hi Friend,
Please clarify your question.
Thanks Hi frnd,
Your asking
Hi... - Hi All,
Can we have more than one struts-config.xml... in Advance.. Yes we can have more than one struts config files..
Here we use SwitchAction. So better study to use switchaction class
Hi.... - Java Beginners
me its very urgent....
Hi Friend,
Plz give full details...Hi.... Hi Friends,
Thanks for reply can send me sample....
For example : Java/JSP/JSF/Struts 1/Struts 2 etc....
Thanks
web site creation
web site creation Hi All ,
i want to make a web site , but i am using Oracle data base for my application .
any web hosting site giving space for that with minimum cost .
please let me know which site offering all
que - Struts
que how can i run a simple strut programm?
please explain with a proper example.
reply soon. Hi Friend,
Please visit the following link: roseindia - Java Beginners
hi roseindia what is java? Java is a platform independent..., Threading, collections etc.
For Further information, you can go for very good...).
Thanks/Regards,
R.Ragavendran... Hi deepthi,
Read for more
struts - Struts
struts Hi ,
I have been asked in one of the technical interviews if struts framework is stateless or stateful . I could not answer. Please answer and explain a bit about it how it is achieved.
thanks
kanchan Upload Site Online?
How to Upload Site Online? After designing and developing your... program is free for uploading a website?
How to Upload Site Online?
Thanks
Hi,
After that you will get the user name and password to access the server
What is Struts?
What is a Struts? Understand the Struts framework
This article tells you "What is a Struts?". Tells you how
Struts learning is useful in creating...
Struts is very elegant framework based on the latest technologies of Java
very urgent - Java Server Faces Questions
very urgent Hi sir,
yesterday i send total my code to find where i... output is not coming.I explain about the output in previeous question.
Please check it once and give me a correct solution.
It is very urgent for me.
Thanks
We have organized our site map for easy access.
You can browser though Site Map to reach the tutorials and information
pages. We will be adding the links to our site map as and when new pages
are added
struts - Struts
application,and why would you use it? Hi mamatha,
The main aim of the MVC...
struts - Struts
struts Hi,
i want to develop a struts application,iam using eclipse... you. hi,
to add jar files -
1. right click on your project.
2... projects so you need not worry about them. have you read document
About the project
About the project
... ,Struts
and Hibernate. This helps people that want to develop... this
application for your own site look here for documentation.
Book - Popular Struts Books
sophisticated Struts
1.1.This book covers everything you need to know about... and a case-study illustrates the 1.0 to 1.1 transition.
Struts Survival Guide... the "Struts way." The hot topics in the construction of any Web site such as initial
a competitive study on cryptography techniques over block cipher
a competitive study on cryptography techniques over block cipher i need "a competitive study on cryptography techniques over block cipher" project source code... plz reply to my post...
for more information about this project
Struts 2 Architecture - Detail information on Struts 2 Architecture
Architecture
Struts 2 is a very elegant and flexible front controller
framework....
In this section we have learnt about the Architecture
of Struts 2 Framework.
...
Struts 2 Architecture
Struts
turorials for struts - Struts
turorials for struts hi
till now i dont about STRUTS. so want beginers struts tutorials notes. pls do
It?s Easy to See Why You Should Learn PHP
. PHP is one such type of coding system that can be used. It can be very useful... is used.
Another point about using PHP is that it can work very well on any type...The code of a website is important. It is used to get the site to run properly
Developing Struts Application
-about ActionServlet?
That is provided by Struts Framework itself.
We do.... This is not JSTL
but Struts Tag Library. We should note the following very carefully...Developing Struts Application
struts internationalisation - Struts
struts internationalisation hi friends
i am doing struts iinternationalistaion in the site... problem its urgent Hi friend,
Plz give full details and tools update site
Hibernate tools update site
Are you looking for hibernate tools update site...
download the latest version from Hibernate Tools site.
You can also run Hibernate... productivity.
Hibernate tools update site is
Help Very Very Urgent - JSP-Servlet
Help Very Very Urgent Respected Sir/Madam,
I am sorry..Actually... requirements..
Please please Its Very very very very very urgent...
Thanks/Regards,
R.Ragavendran..
Hi
Very Very Urgent -Image - JSP-Servlet
Very Very Urgent -Image Respected Sir/Madam,
I am... with some coding, its better..
PLEASE SEND ME THE CODING ASAP BECAUSE ITS VERY VERY URGENT..
Thanks/Regards,
R.Ragavendran... Hi friend,
Code
about db - Struts
About DB in Struts I have one problem about database. i am using netbeans serveri glassfish. so which is the data source struts config file should be? Please help in industry,
struts 1 and struts 2. which is the best?
which is useful as a professuional
Have a look at the following link:
Struts Tutorials
SEO Tips,Latest SEO Tips,Free SEO Tips & Tricks,Useful Search Engine Optimization Tips
designed site becomes easy to use and also get back the traffic on the site.
4... for getting the information about a
page. Put your all important information... and directories. It
will take some time to list your site url.
7. Link Building
The last
Struts Links - Links to Many Struts Resources
covers Struts 1.2. The course is usually taught on-site at customer locations..., JSP, Struts, and JSF training course page. To inquire about a customized training...
Struts Links - Links to Many Struts Resources
Jakarta
hi - Ajax
hi hi, i am doing Web surfing project. i used start and end... hour result. waiting for yours reply.. Hi vinoth
Here is the some project where u may found the solution about your problem
http
Detailed introduction to Struts 2 Architecture
the action very easily.
Expression Language
Struts 2 expression language...
Detailed introduction to Struts 2 Architecture
Struts 2 Framework Architecture
In the previous section we learned -
Outsourcing Communication Tips,Useful Cultural Tips in Offshore Outsourcing,Communication and Culture Tips
Communication and Culture Tips in Offshore Outsourcing Relationships
Introduction
Many offshore outsourcing and other business arrangements run into difficulties due to communication problems. The trend is very prominent in both
Advertisements
If you enjoyed this post then why not add us on Google+? Add us to your Circles | http://roseindia.net/tutorialhelp/comment/3624 | CC-MAIN-2015-35 | refinedweb | 2,143 | 68.47 |
Structures Versus Classes
So what’s the difference between a class and a structure? Structures are value types like int, double, and string. When you copy a value of structures to other variables, it copies its values to the other variable and not its address or reference. Classes are reference types like all the classes in the .NET Framework. Let’s demonstrate the difference of the two by looking at an example.
using System; namespace StructureVsClass { struct MyStructure { public string Messageone { get; set; } } class MyClass { public string Messageone { get; set; } } class Program { static void Main() { MyStructure structure1 = new MyStructure(); MyStructure structure2 = new MyStructure(); structure1.Messageone = "ABC"; structure2 = structure1; Console.WriteLine("Showing that structure1 " + "was copied to structure2."); Console.WriteLine("structure2.Messageone = {0}", structure2.Messageone); Console.WriteLine(" Modifying the value of structure2.Messageone..."); structure2.Messageone = "123"; Console.WriteLine(" Showing that structure1 was not affected " + "by the modification of structure2"); Console.WriteLine("structure1.Messageone = {0}", structure1.Messageone); MyClass class1 = new MyClass(); MyClass class2 = new MyClass(); class1.Messageone = "ABC"; class2 = class1; Console.WriteLine(" Showing that class1 " + "was copied to class2."); Console.WriteLine("class2.Messageone = {0}", class2.Messageone); Console.WriteLine(" Modifying the value of class2.Messageone..."); class2.Messageone = "123"; Console.WriteLine(" Showing that class1 was also affected " + "by the modification of class2"); Console.WriteLine("class1.Messageone = {0}", class1.Messageone); } } }
Example 1 – Structures Vs Classes
Showing that structure1 was copied to structure2. structure2.Message = ABC Modifying the value of structure2.Message... Showing that structure1 was not affected by the modification of structure2 structure1.Message = ABC Showing that class1 was copied to class2. class2.Message = ABC Modifying the value of class2.Message... Showing that class1 was also affected by the modification of class2 class1.Message = 123
We made a structure and a class that we will use to demonstrate their differences. We provided a Message property for each of the two. We then create two instances for each of them. We assigned a value for the Message property of structure1. We then assign the value of structure1 to structure2 so everything inside structure1 will be copied to structure2. To prove that everything was copied from structure1, we showed the value of structure2‘s Message property and you can see that it is the same as the structure1‘s Message property value. To show that structures are value types, we assigned another message to the Message property of structure2. The Message property of structure1 is not affected because the structure2 is a separate copy of structure1.
Now to demonstrate the behavior of reference types and classes. Classes pass their address and not their value when they’re being assigned to other variables. Therefore, when you edit a property of the object that receive the address of the original object, the property of that original object is also modified. When you’re passing an object as an argument to a method, only the address of that object is passed. Any modifications to that object inside that method will also reflect the original object that passed its address. | https://compitionpoint.com/structures-versus-classes/ | CC-MAIN-2021-21 | refinedweb | 499 | 59.8 |
<br />
#ifdef _WINSOCKAPI_<br />
int<br />
WI_NOBLOCKSOCK(long sock)<br />
{<br />
int err;<br />
int option = TRUE;<br />
<br />
err = ioctlsocket((int)sock, FIONBIO, (u_long *)&option);<br />
return(err);<br />
}
static char * cMyClass::getDisplayStringXYZ();
Display Sting XYZ goes just here: <!--#include file="XYZ" -->
XYZ -e u_long x
fsbuilder -g wsfcode.c filelist
#include "MyClass.h"
extern "C" {
int
wi_cvariables(wi_sess * sess, int token)
{
switch(token)
{
}
return 0;
}
}
case 43:
wi_printf( sess, cMyClass::getDisplayStringXYZ() );
break;
char * getDisplayStringXYZ();
cWebio::SetLink( "XYZ", &getDisplayStringXYZ );
error = wi_cvariables(sess, ssifname );
int
wi_cvariables(wi_sess * sess, char * idname)
{
return cWebio::ResolveLink( sess, idname );
}
ravenspoint wrote:The issue seems to be that without running fsbuilder with the string identifier specified in filelist, line 729 in webio.c is not executed. Perhaps John Bartas could explain this mystery to me?
Display Sting XYZ goes just here: <!--#include file="XYZ.cex" -->
if( strstr( ssifname, ".cex" ) )
{
*(ssifname + strlen(ssifname) - 4 ) = '\0';
wi_cvariables(sess, ssifname );
return 0;
}
#include "stdafx.h"
#include "cWebio.h"
char * fun3()
{
return "CCCC";
}
class cMyClass
{
public:
char * Display6()
{
return "well done!";
}
};
int _tmain(int argc, _TCHAR* argv[])
{
cMyClass theClass;
cWebio webio;
// specify port
if( argc == 2 )
webio.setPort( _wtoi( argv[1] ) );
// make arrangements so that when webio encounters XYZ.cex
// global method fun3 is called
webio.setLink(
"XYZ", // display string id
&fun3); // global function
// make arrangements so that when webio encounters HAHA.cex
// method Display6 is called on theClass an instance of cMyClass
webio.setLink(
"HAHA", // display string id
boost::bind(
&cMyClass::Display6, // member function
&theClass ) ); // instance of class
// start the web server
webio.Start();
if( webio.myError ) {
printf("ERROR: Could not start web server\n%s\n",webio.myErrorMsg );
return 1;
}
printf("OK, web server is up and running\n");
// keep busy
while( 1 ) {
Sleep(5000);
printf("I'm not dead yet\n");
}
return 0;
}
#include "stdafx.h"
#include "cWebio.h"
int _tmain(int argc, _TCHAR* argv[])
{
// start the web server, listening on port 1570
cWebio webio;
webio.setPort( 1570 );
webio.Start();
// keep busy
while( 1 ) {
Sleep(5000);
printf("I'm not dead yet\n");
}
return 0;
}
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | https://www.codeproject.com/Articles/27950/Webio-An-embedded-web-server?msg=2650793 | CC-MAIN-2017-43 | refinedweb | 369 | 58.79 |
Package, a future replacement for setup.rb and mkmf.rb
Sat Jun 04 23:50:30 CEST 2005
(This blog post will go to ruby-talk as soon as Gmail fixes its SMTP.)
Hello,
During a discussion on IRC, I started to wonder if Ruby’s install scripts are state of the art, what could be done better and how.
Ruby’s
mkmf.rb and Aoki’s
setup.rb probably have their roots in the
oldest pieces of Ruby source still in use. While
setup.rb had some
changes in the latter time,
mkmf.rb more or less stayed the same.
I have looked into how other languages install source and compile
extensions, and the library I liked best so far is Python’s
distutils.
I’m not very familiar with Python, but I like the general approach and
the essence of API. Basically, you create a file,
setup.py, like
this:
from distutils.core import setup setup (name = "Distutils", version = "0.1.1", description = "Python Module Distribution Utilities", author = "Greg Ward", author_email = "gward@python.net", url = "", packages = ['distutils', 'distutils.command'])
In Ruby, this would maybe look like that:
require 'package' setup { name "Distutils" version "0.1.1" description "Python Module Distribution Utilities" author "Greg Ward" author_email "gward@python.net" url "" packages ['distutils', 'distutils/command'] }
Given this file, we can simply run:
python setup.py install
and the files will get installed where they belong to. distutils can
also handle different prefixes, installing into home directories, and
complex cases like putting scripts to
/usr/bin, but libraries to
/opt/local and whatever.
Python’s distutils also handles compiling extensions:
name = 'DateTime.mxDateTime.mxDateTime' src = 'mxDateTime/mxDateTime.c' setup ( ... ext_modules = [(name, { 'sources': [src] 'include_dirs': ['mxDateTime'] } )] )
Here, something like this would be possible in Ruby (I’m not yet sure about exact semantics of the Python version):
setup { # ... extension("DateTime/mxDateTime/mxDateTime") { sources "mxDateTime/mxDateTime.c" include_dirs "mxDateTime" } }
Of course, more complex build descriptions can be represented too:
extension("foolib") { sources "foo.c", "bar.c" if have_library("foo", "fooble") define "HAVE_FOO_H" cflags << `foo-config --cflags` ldflags << `foo-config --libs` else fail "foolib is needed" end }
Whether this will generate a
Makefile (like
mkmf.rb), a
Rakefile
or compile directly (like distutils) is still an open question.
To allow for an easy conversion of setup.rb usage, Package will provide convenience methods that will make it behave like setup.rb with respect to the directory structure.
Package doesn’t try to conquer the world, however, it aims to be just a tool that would be useful if it was standard and everyone could build on due to it’s policy-neutrality
What advantages will Package have over setup.rb and mkmf.rb, as they are now?
- simple, clean and consistent working
- unified library to handle both extensions and libraries
- lightweight approach (if included in the standard library)
- easy adaption
- more flexible directory layout: especially small projects profit from this, as setup.rb’s directory layout is quite bulky by default and not very customizable
- easier packaging by third-party packagers due to simple but flexible and standardized invocation
What do we need to get a wide adoption of Package?
- inclusion in the standard library so it doesn’t need to be shipped with every package (as setup.rb unfortunately is).
- backing from the community to make use of Package.
- acceptance from packaging projects like RPA, RubyGems and distributions like Debian, FreeBSD and PLD.
Coding of Package has not started yet (the name is also not set into stone yet, so if you have better ideas, please tell me) because it would be pointless without a strong feedback from the community. I expect to get a first version done rather quickly, possibly borrowing code from setup.rb and mkmf.rb, but Package will not depend on these both. If anyone is interested in helping development, please mail me; helpful hands are always of use. Also, there will be need for testers on all and even the most weird platforms.
But now, I’ll ask you: Are you satisfied with the way installing Ruby extensions and libraries works? Do you think there is a place for Package? Do you have further improvements or can provide alternative ideas?
NP: Neil Young—My My, Hey Hey (Out Of The Blue) | http://chneukirchen.org/blog/archive/2005/06/package-a-future-replacement-for-setup-rb-and-mkmf-rb.html | crawl-001 | refinedweb | 713 | 57.98 |
Chapter 2 — Handling Data examines the various considerations for handling data on the client, including data caching, data concurrency, and the use of datasets and Windows Forms data binding.
Contents
Types of Data
Caching Data
Data Concurrency
Using ADO.NET DataSets to Manage Data
Windows Forms Data Binding
Summary
In smart clients, application data is available on the client. If your smart clients are to function effectively, it is essential that this data be managed appropriately to make sure that it is kept valid, consistent, and secure.
Application data can be made available to the client by a server-side application (for example, through a Web service), or the application can use its own local data. If the data is provided by a server application, the smart client application may cache the data to improve performance or to enable offline usage. In this case, you need to decide how the client application should handle data that is out of date with respect to the server.
If your smart client application provides the ability to modify data locally, the client changes have to be synchronized with the server-side application at a later time. In this case, you have to decide how the client application can handle data conflicts and how to keep track of the changes that need to be sent to the server.
You need to carefully consider these and a number of other issues when designing your smart client application. This chapter examines the various considerations for handling data on the client, including:
- Types of data.
- Caching data.
- Data concurrency.
- Using ADO.NET datasets to manage data.
- Windows Forms data binding.
A number of other issues related to handling data are not discussed in this chapter. In particular, data handling security issues are discussed in Chapter 5: Security Considerations and offline considerations are discussed in Chapter 4: Occasionally Connected Smart Clients
Types of Data
Smart clients generally have to handle two categories of data:
- Read-only reference data
- Transient data
Typically, these types of data need to be handled in different ways, so it is useful to examine each of them in more detail.
Read-Only Reference Data
Read-only reference data is data that is not changed by the client and that is used by the client for reference purposes. Therefore, from the client's point of view, the data is read-only data, and the client performs no update, insert, or delete operations on it. Read-only reference data is readily cached on the client. Reference data has a number of uses in a smart client application, including:
- Providing static reference or lookup data. Examples include product information, price lists, shipping options, and prices.
- Supporting data validation, allowing data entered by the user to be checked for correctness. An example is checking entered dates against a delivery schedule.
- Helping to communicate with remote services. An example is converting a user selection to a product ID locally and then sending the information to a Web service.
- Presenting data. Examples include presenting Help text or user interface labels.
By storing and using reference data on the client, you can reduce the amount of data that needs to travel from client to server, improve the performance of your application, help enable offline capabilities, and provide early data validation, which increases the usability of your application.
Although read-only reference data cannot be changed by the client, it can be changed on the server (for example, by an administrator or supervisor). You need to determine a strategy for updating the client when changes to the data occur. Such a strategy could involve pushing changes out to the client when a change occurs or pulling changes from the server at certain time intervals or prior to certain actions on the client. However, because the data is read-only at the client, you do not need to keep track of client-side changes. This simplifies the way in which read-only reference data needs to be handled.
Transient Data
Transient data can be changed on the client as well as the server. Generally, transient data changes as a direct or indirect result of user input and manipulation. In this case, changes that are made on either the client or server need to be synchronized at some point. This type of data has a number of uses in a smart client, including:
- Adding new information. Examples include adding banking transactions or customer details.
- Modifying existing information. An example is updating customer details.
- Deleting existing information. An example is removing a customer from a database.
One of the most challenging aspects of dealing with transient data on smart clients is that it can generally be modified on multiple clients at the same time. This problem is exacerbated when the data is very volatile, because changes are more likely to conflict with one another.
You need to keep track of any client-side changes that you make to transient data. Until the data is synchronized with the server and any conflicts have been resolved, you should not consider transient data to be confirmed. You should be very careful not to rely on unconfirmed data to make important decisions or use it as the basis for other local changes without carefully considering how data consistency can be guaranteed even in the event of a synchronization failure.
For more details about the issues surrounding handling data when offline and how to handle data synchronization, see Chapter 4: Occasionally Connected Smart Clients
Caching Data
Smart clients often need to cache data locally, whether it is read-only reference data or transient data. Caching data has the potential to improve performance in your application and provide the data necessary to work offline. However, you need to carefully consider which data is cached on the client, how that data is to be managed, and the context in which that data can be used.
To enable data caching, your smart client application should implement some form of caching infrastructure that can handle the data caching details transparently. Your caching infrastructure should include one or both of the following caching mechanisms:
- Short-term data caching. Caching data in memory is good for performance but is not persistent, so you may need to pull data from the source when the application is re-run. Doing so may prevent your application from operating when offline.
- Long-term data caching. Caching data in a persistent medium, such as isolated storage or the local file system, allows you to use the application when there is no connectivity to the server. You may choose to combine long-term storage with short-term storage to improve performance.
Regardless of the caching mechanisms you adopt, you should ensure that only data to which the user has access is made available to the client. Also, sensitive data cached on the client requires careful handling to ensure that it is kept secure. Therefore, you may need to encrypt the data as it is transferred to the client and as it is stored on the client. For more information, see "Handling Sensitive Data" in Chapter 5: Security Considerations
As you design your smart client to support data caching, you should consider providing a mechanism for your client to request fresh data, regardless of the state of the cache. This means that you can be sure that the application is ready to perform new transactions without using stale data. You may also configure your client to pre-fetch data so that it can mitigate the risk of being offline when cached data expires.
Wherever possible, you should associate some form of metadata with the data to enable the client to manage the data in an intelligent way. Such metadata can be used to specify the data's identity and any constraints or desired behaviors associated with the data. Your client-side caching infrastructure should consume this metadata and use it to handle the cached data appropriately.
All data that is cached on the client should be uniquely identifiable (for example, through a version number or date stamp), so that it can be properly identified when determining whether it needs to be updated. Your caching infrastructure is then able to ask the server whether the data that it has is currently valid and determine if any updates are required.
Metadata can also be used to specify constraints or behaviors that relate to the usage and handling of the cached data. Examples include:
- Temporal constraints. These constraints specify the time or date range in which the cached data can be used. When the data becomes stale or expires, it can be dropped from the cache or automatically refreshed by obtaining the latest data from the server. In some cases, it may be appropriate to let the client use out-of-date reference data and map it to up-to-date data when it is synchronized with the server.
- Geographic constraints. Some data may be appropriate only for a particular region. For example, you may have different price lists for different locations. Your caching infrastructure can be used to access and store data on a per-location basis.
- Security requirements. Data that is specifically intended for a particular user can be encrypted to ensure that only the appropriate user can access it. In this case, the data is provided already encrypted, and the user has to provide the credentials to the caching infrastructure to allow the data to be decrypted.
- Business rules. You may have business rules associated with your cached data that dictate how it should be used. For example, your caching infrastructure may take into consideration the role of the user to determine what data is provided to him or her and how it is handled.
The metadata associated with the data enables your caching infrastructure to handle the data appropriately so that your application does not have to be concerned with data caching issues or implementation details. You can pass the metadata associated with the reference data within the data itself, or you can use an out-of-band mechanism. The exact mechanism used to transport the metadata to the client depends on how your application communicates with the network services. When using Web services, using SOAP headers to communicate the metadata to the client is a good solution.. If the data has changed on the server in the meantime, a data conflict occurs and needs to be handled appropriately.. Also, you may want to restrict the application or the user from making important decisions or initiating important actions based on tentative data. Such data should not be relied on until it has been synchronized with the server. By using an appropriate caching infrastructure, you can keep track of tentative and confirmed data.
The Caching Application Block
The Caching Application Block is a Microsoft® .NET Framework extension that allows developers to easily cache data from service providers. It was built and designed to encapsulate Microsoft's recommended practices for caching in .NET Framework applications as described in Caching Architecture Guide for .NET Framework Applications at.
The overall architecture of the caching block is shown in Figure 2.1.
Figure 2.1 Caching block workflow
The caching workflow consists of the following steps:
- A client or service agent makes a request to the CacheManager for cached data items.
- If the item is already cached, the CacheManager retrieves the item from storage and returns it as a CacheItem object. If the item is not already cached, the client is notified.
- After retrieving noncached data from a service provider, the client sends the data to the CacheManager. The CacheManager adds a signature (that is, metadata), such as a key, expiration, or priority, to the item and loads it into the cache.
- The CacheService monitors the lifetime of CacheItems. When a CacheItem expires, the CacheService removes it and, optionally, calls a callback delegate.
- The CacheService can also flush all items from the cache.
The caching block offers a variety of caching expiration options, which are described in Table 2.1.
Table 2.1 Caching Block Expiration Options
The following storage mechanisms are available for the caching block:
- Memory-mapped file (MMF). MMFs are best suited for a client-based, high-performance caching scenario. You can use MMFs to develop a cache that can be shared across multiple application domains and processes within the same computer. The .NET Framework does not support MMFs, so any implementation of an MMF cache runs as unmanaged code and does not benefit from any .NET Framework features, including memory management features (such as garbage collection) and security features (such as code access security).
- Singleton object. A .NET remoting singleton object can be used to cache data that can be shared across processes in one or several computers. This is done by implementing a caching service using a singleton object that serves multiple clients through .NET remoting. Singleton caching is simple to implement, but it lacks the performance and scalability provided by solutions based on Microsoft SQL Server.
- Microsoft SQL Server 2000 database. SQL Server 2000 storage is best suited to an application that requires high durability or when you need to cache a very large amount of data. Because the cache service needs to access SQL Server over a network and the data is retrieved using database queries, data access is relatively slow.
- Microsoft SQL Server Desktop Engine (MSDE). MSDE is a lightweight database alternative to SQL Server 2000. It provides reliability and security features but has a smaller client footprint than SQL Server, so it requires less setup and configuration. Because MSDE supports SQL, developers also gain much of the power of a database. You can migrate an MSDE database to a SQL Server database if necessary.
Data Concurrency
As mentioned earlier, one problem with using smart clients is that changes to the data held on the server can occur before any client-side changes are synchronized with the server. You need a mechanism to ensure that when the data is synchronized, any data conflicts are handled appropriately and the resultant data is consistent and correct. The ability for data to be updated by more than one client is known as data concurrency.
There are two approaches that you could use to handle data concurrency:
- Pessimistic concurrency. Pessimistic concurrency allows one client to maintain a lock over the data to prevent any other clients from modifying the data until the client's own changes are complete. In such cases, if another client attempts to modify the data, the attempt fails or is blocked until the lock's owner releases the lock.
- Pessimistic concurrency can be problematic, because a single user or client may hold on to a lock for a significant period of time, possibly inadvertently. Therefore, the lock could prevent important resources, such as database rows or files, from being released in a timely manner, which can seriously affect the scalability and usability of the application. However, pessimistic concurrency may be appropriate when you need to have complete control over changes made to important resources. Note that it cannot be used if your clients are to work offline, because they are not able to put a lock on data.
- Optimistic concurrency. Optimistic concurrency does not lock the data .To decide whether an update is actually required, the original data can be sent along with the update request and the changed data. The original data is then checked against the current data to see if it has been updated in the meantime. If the original data and the current data match, the update is executed; otherwise, the request is denied, producing an optimistic failure. To optimize this process, you can use a time stamp or an update counter in the data instead of sending the original data, and in this case only the time stamp or counter needs to be checked.
Optimistic concurrency provides a good mechanism for updating master data that does not change very often, such as a customer's phone number or address. Optimistic concurrency allows everyone to read the data, and in situations where updates are less likely than read operations, the risk of an optimistic failure may be acceptable. Optimistic concurrency may not be suitable in situations where the data is changed often and where the optimistic updates are likely to fail often.
In most smart client scenarios, including those in which clients are to work offline, optimistic concurrency is the correct approach because it allows multiple clients to work on data at the same time without unnecessarily locking data and affecting all other clients.
For more information about optimistic and pessimistic concurrency, see "Optimistic Concurrency" in the .NET Framework Developer's Guide at.
Using ADO.NET DataSets to Manage Data
A DataSet is an object that represents one or more relational database tables. Datasets store data in a disconnected cache. The structure of a dataset is similar to that of a relational database: It exposes a hierarchical object model of tables, rows, and columns. In addition, it contains constraints and relationships defined for the dataset.
An ADO.NET DataSet contains a collection of zero or more tables represented by DataTable objects..
Datasets can be strongly typed or untyped. A typed DataSet inherits from the DataSet base class but adds strong typed language functionality to the DataSet, allowing users to access content in a more strongly typed programmatic manner. Either type can be used when building applications. However, the Microsoft Visual Studio® development system has more support for typed datasets, and they make programming with the dataset easier and less error prone.
Datasets are particularly useful in a smart client environment, because they offer functionality that helps clients to work with data while offline. They can keep track of local changes made to the data, which helps to synchronize the data with the server and reconcile data conflicts, and they can be used to merge data from different sources.
For more information about working with datasets, see "Introduction to Datasets" in Visual Basic and Visual C# Concepts at.
Merging Data with Datasets
Datasets have the ability to merge the contents of DataSet, DataTable, or DataRow objects into existing datasets. This functionality is particularly useful for keeping track of changes on the client and merging with updated content from the server. Figure 2.2 shows a smart client requesting an update from the Web service, and the new data being returned as a data transfer object (DTO). A DTO is an enterprise pattern that allows you to package all the data required to communicate with a Web service into one object. Using a DTO often means that you can make a single call to a Web service rather than multiple calls.
Figure 2.2 Merging data on the client by using datasets
In this example, when the DTO is returned to the client, the DTO is used to create a new dataset locally on the client.
Note After a merge operation, ADO.NET does not automatically change the row state from modified to unchanged. Therefore, after merging the new dataset with the local client dataset, you need to invoke the AccceptChanges method on your dataset to reset the RowState property to unchanged.
For more information about using datasets, see "Merging DataSet Contents" in the .NET Framework Developer's Guide at.
Increasing the Performance of Datasets
Datasets can often contain a large amount of data, which, if passed over the network, can lead to performance problems. Fortunately, with ADO.NET DataSets, you can use the GetChanges method on your datasets to ensure that only the data that is changed in a dataset is communicated between the client and the server, packaging the data in a DTO. Then the data is merged into the dataset at its destination.
Figure 2.3 shows a smart client that makes changes to local data and uses the GetChanges method on a dataset to submit only changed data to the server. The data is transferred to a DTO for performance reasons.
Figure 2.3 Using a DTO to improve performance
The GetChanges method can be used for smart client applications that need to go offline. When an application is again online, you can use the GetChanges method to determine what information has changed and then generate a DTO to communicate with the Web service to ensure that the changes are submitted to a database.
Windows Forms Data Binding
Windows Forms data binding enables you to connect the user interface of your application to the application's underlying data. Windows Forms data binding supports bidirectional binding so you can bind a data structure to the user interface, display the current data values to the user, allow the user to edit the data, and then update the underlying data automatically, using the values entered by the user.
You can use Windows Forms data binding to bind virtually any data structure or object to any property of the user interface controls. You can bind a single item of data to a single property of a control, or you can bind more complex data (for example, a collection of data items or a database table) to the control so it can display all of the data in a data grid or list box.
Note You can bind any object that supports one or more public properties. You can bind only to public properties of your classes and not to public members.
Windows Forms data binding allows you to provide a flexible, data-driven user interface with your applications. You can use data binding to provide customizable control over the look and feel of your user interface (for example, by binding to control properties such as the background or foreground color, size, image, or icon).
Data binding has many uses. For example, it can be used to:
- Display read-only data to users.
- Allow users to update data from the user interface.
- Provide master-detail views on data.
- Allow users to explore complex related data items.
- Provide lookup table functionality, allowing the user interface to connect user-friendly display names.
This section examines some features of data binding and discusses some of the data binding features that you frequently need to implement in a smart client application.
For in-depth information about data binding, see "Windows Forms Data Binding and Objects" at.
Windows Forms Data Binding Architecture
Windows Forms data binding provides a flexible infrastructure to bidirectionally connect data to the user the interface. Figure 2.4 shows a schematic representation of the overall architecture of Windows Forms data binding.
Figure 2.4 Architecture of Windows Forms data binding
Windows Forms data binding uses the following objects:
- Data source. Data sources are the objects that contain the data to be bound to the user interface. Data providers can be any object that has public properties, an array or a collection that supports the IList interface or an instance of a complex data class (for example, DataSet or DataTable).
- CurrencyManager. The CurrencyManager object keeps track of the current position of the data within an array, collection, or table that is bound to the user interface. The CurrencyManager allows you to bind a collection of data to the user interface and to navigate through that data, updating the user interface to reflect the currently selected item within the collection.
- PropertyManager. The PropertyManager object is responsible for maintaining the current property of an object that is bound to a control. Both the PropertyManager and CurrencyManager classes inherit from a common base class, BindingManagerBase. All data providers bound to a control to have an associated CurrencyManager or PropertyManager object.
- BindingContext. Each Windows Form has a default BindingContext object that keeps track of all of the CurrencyManager and PropertyManager objects on the form. The BindingContext object allows you to easily retrieve the CurrencyManager or PropertyManager object for a specific data source. You can assign a specific BindingContext object to a container control (such as a GroupBox, Panel, or TabControl) that contains data-bound controls. Doing so allows each part of your form to be managed by its own CurrencyManager or PropertyManager objects.
- Binding. The Binding objects are used to create and maintain a simple binding between a single property of a control and either the property of another object or the property of the current object in a list of objects.
Binding Data to Windows Forms Controls
There are a number of properties and methods that you can use to bind to specific Windows Forms controls. Table 2.2 shows some of the more important ones.
Table 2.2 Properties and Methods for Binding to Windows Forms Controls
Note If the DataSource is a DataTable, DataView, collection, or array, setting the DataMember property is not required.
You can also use the DataBindings collection property available on all Windows Forms control objects to add Binding objects explicitly to any control object. Binding objects are used to bind a single property on the control to a single data member of the data provider. The following code example adds a binding between the Text property of a text box control to the customer name in the customers table of a data set.
textBox1.DataBindings.Add( new Binding( "Text", dataset, "customers.customerName" ) );
When you construct a Binding instance with the Binding constructor, you must specify the name of the control property to bind to, the data source, and the navigation path that resolves to a list or property in the data source. The navigation path can be an empty string, a single property name, or a period-delimited hierarchy of names. You can use a hierarchical navigation path to navigate through data tables and relations in a DataSet object, or over an object model where an object's properties return instances to other objects. If you set the navigation path to an empty string, the ToString method is called on the underlying data source object.
Note If a property is read-only (that is, the object does not support a set operation for that property), data binding does not by default make the bound Windows Forms control read-only. This can lead to confusion for the user, because the user can edit the value in the user interface, but the value in the bound object will not be updated. Therefore, make sure the read-only flags are set to true for all Windows Forms controls that are bound to read-only properties.
Binding Controls to DataSets
It is often useful to bind controls to datasets. Doing so allows you to display the dataset data in a data grid, and it allows the user to easily update the data. You can bind a data grid control to a DataSet using the following code.
DataSet newDataSet = webServiceProxy.GetDataSet(); this.dataGrid.SetDataBinding( newDataSet, "tableName" );
Sometimes you need to replace the contents of your dataset after all of the bindings with your controls have already been established. However, when you replace existing sets with new ones, the bindings all remain with the old data set.
Rather than manually recreating the data bindings with the new data source, you can use the Merge method of the DataSet class to bring the data from the new data set into the existing one, as shown in the following code example.
DataSet newDataSet = myService.GetDataSet(); this.dataSet1.Clear(); this.dataSet1.Merge( newDataSet );
Note To avoid threading issues, you should only update bound data objects on the UI thread. For more information, see Chapter 6: Using Multiple Threads
Navigating Through a Collection of Data
If your data sources contain a collection of items, you can bind the data collection to your Windows Forms controls and navigate through the collection of data one item at a time. The user interface is automatically updated to reflect the current item in the collection.
You can bind to any collection object that supports the IList interface. When you bind to a collection of objects, you can allow the user to navigate through each item in the collection, automatically updating the user interface for each item. Many of the collection and complex data classes provided by the .NET Framework already support the IList interface, so you can easily bind to arrays or complex data such as data rows or data views. For example, any array object that is an instance of the System.Array class implements the IList interface by default, and so can be bound to the user interface. Many ADO.NET objects also support the IList interface, or a derivative of it, allowing these objects to be easily bound too. For example, the DataViewManager, DataSet, DataTable, DataView, and DataColumn classes all support data binding in this way.
Data sources that implement the IList interface are managed by the CurrencyManager object. This object maintains an index into the data collection though its Position property. The index is used to ensure that all controls bound to the data source read and write to the same item in the data collection.
If your form contains controls bound to multiple data sources, it will have multiple CurrencyManager objects, one for each distinct data source. The BindingContext object provides easy access to all CurrencyManager objects on the form. The following code example shows how to increment the current position within a collection of customers.
this.BindingContext[ dataset, "customers" ].Position += 1;
You should use the Count property on the CurrencyManager object as shown in the following code example to ensure that an invalid position is not set.
if ( this.BindingContext[ dataset, "customer" ].Position < ( this.BindingContext[ dataset, "customer" ].Count 1 ) ) { this.BindingContext[ dataset, "customers" ].Position += 1; }
The CurrencyManager object also supports a PositionChanged event. You can create a handler for this event so that you can update your user interface to reflect the current binding position. The following code example displays a label to show the current position and the total number of records.
this.BindingContext[ dataset, "customers" ].PositionChanged += new EventHandler( this.BindingPositionChanged );
The method BindingPositionChanged is implemented as follows.
private void BindingPositionChanged( object sender, System.EventArgs e ) { positionLabel.Text = string.Format( "Record {0} of {1}", this.BindingContext[dsPubs1, "authors"].Position + 1, this.BindingContext[dsPubs1, "authors"].Count ); }
Custom Formatting and Data Type Conversion
You can provide custom formatting for data bound to a control using the Format and Parse events of the Binding class. These events allow you to control how data is displayed in the user interface and how data is taken from the user interface and parsed, so that the underlying data can be updated. These events can also be used to convert data types so that the source and destination data types are compatible.
Note If the data type of the bound property on the control does not match the data type of the data in the data source, an exception is thrown. If you need to bind incompatible types, you should use the Format and Parse events on the Binding object.
The Format event occurs when data is read from the data source and displayed in the control, and when the data is read from the control and used to update the data source. When the data is read from the data source, the Binding object uses the Format event to display the formatted data in the control. When the data is read from the control and used to update the data source, the Binding object parses the data using the Parse event.
The Format and Parse events allow you to create custom formats for displaying data. For example, if the data in a table is of type Decimal, you can display the data in the local currency format by setting the Value property of the ConvertEventArgs object to the formatted value in the Format event. You must consequently format the displayed value in the Parse event.
The following code sample binds an order amount to a text box. The Format and Parse events are used to convert between the string type expected by the text box and the decimal types expected by the data source.
private void BindControl() { Binding binding = new Binding( "Text", dataset, "customers.custToOrders.OrderAmount" ); // Add the delegates to the event. binding.Format += new ConvertEventHandler( DecimalToCurrencyString ); binding.Parse += new ConvertEventHandler( CurrencyStringToDecimal ); text1.DataBindings.Add( binding ); } private void DecimalToCurrencyString( ); }
Using the Model-View-Controller Pattern to Implement Data Validation
Binding a data structure to a user interface element allows the user to edit the data and ensures that these changes are then written back to the underlying data structure. Often, you will need to check the changes that the user makes to the data to ensure that the values entered are valid.
The Format and Parse events described in the previous section provide one way to intercept the changes the user makes to the data, so that the data can be checked for validity. However, this approach requires that the data validation logic be implemented together with the custom formatting code, typically at the form level. Implementing these two responsibilities together in the event handlers can make your code difficult to understand and maintain.
A more elegant approach is to design your code so that it uses the Model-View-Controller (MVC) pattern. The pattern provides natural separation of the various responsibilities involved with editing and changing data through data binding. You should implement custom formatting within the form that is responsible for presenting the data in a certain format, and then associate the validation rules with the data itself, so that the rules can be reused across multiple forms.
In the MVC pattern, the data itself is encapsulated in a model object. The view object is the Windows Forms control that the data is bound to. All changes to the model are handled by an intermediary controller object, which is responsible for providing access to the data and for controlling any changes made to the data through the view object. The controller object provides a natural location for validating changes made to the data, and all user interface validation logic should be implemented here.
Figure 2.5 depicts the structural relationship between the three objects in the MVC pattern.
Figure 2.5 Objects in Model-View-Controller pattern
Using a controller object in this way has a number of advantages. You can configure a generic controller to provide custom validation rules, which are configurable at run time according to some contextual information (for example, the role of the user). Alternatively, you can provide a number of controller objects, with each controller object implementing specific validation rules, and then select the appropriate object at run time. Either way, because all validation logic is encapsulated in the controller object, the view and model objects do not need to change.
In addition to separating data, validation logic, and user interface controls, the MVC model gives you a simple way to automatically update the user interface when the underlying data changes. The controller object is responsible for notifying the user interface when changes to the data have occurred by some other programmatic means. Windows Forms data binding listens for events generated by the objects that are bound to the controls so that the user interface can automatically respond to changes made to the underlying data.
To implement automatic updates of the user interface, you should ensure that the controller implements a change notification event for each property that may change. Events should follow the naming convention <property>Changed, where <property> is the name of the property. For example, if the controller supports a Name property, it should also support a NameChanged event. If the value of the name property changes, this event should be fired so Windows Forms data binding can handle it and update the user interface.
The following code example defines a Customer object, which implements a Name property. The CustomerController object handles the validation logic for a Customer object and supports a Name property, which in turn represents the Name property on the underlying Customer object. This controller fires an event whenever the name is changed.
public class Customer { private string _name; public Customer( string name ) { _name = name; } public string Name { get { return _name; } set { _name = value; } } } public class CustomerController { private Customer _customer = null; public event EventHandler NameChanged; public Customer( Customer customer ) { this._customer = customer; } public string Name { get { return _customer.Name; } set { // TODO: Validate new name to make sure it is valid. _customer.Name = value; // Notify bound control of change. if ( NameChanged != null ) NameChanged( this, EventArgs.Empty ); } } }
Note Customer data source members need to be initialized when they are declared. In the preceding example, the customer.Name member needs to be initialized to an empty string. This is because the .NET Framework does not have a chance to interact with the object and set the default setting of an empty string before the data binding occurs. If the customer data source member is not initialized, the attempt to retrieve a value from an uninitialized variable causes a run-time exception.
In the following code example, the form has a TextBox object, textbox1, which needs to be bound to the customer's name. The code binds the Text property of the TextBox object to the Name property of the controller.
_customer = new Customer( "Kelly Blue" ); _controller = new CustomerController( _customer ); Binding binding = new Binding( "Text", _controller, "Name" ); textBox1.DataBindings.Add( binding );
If the name of the customer is changed (using the Name property on the controller), the NameChanged event is fired and the text box is automatically updated to reflect the new name value.
Updating the User Interface When the Underlying Data Changes
You can use Windows Forms data binding to automatically update the user interface when the corresponding underlying data changes. You do this by implementing a change notification event on the bound object. Change notification events are named according to the following convention.
public event EventHandler <propertyName>Changed;
So, for example, if you bind an object's Name property to the user interface and then that object's name changes as a result of some other processing, you can automatically update the user interface to reflect the new Name value by implementing the NameChanged event on the bound object.
Summary
There are many different considerations involved in determining how to handle data in your smart clients. You need to determine whether and how to cache your data, and how to handle data concurrency issues. You will often decide to use ADO.NET datasets to handle your data, and you will probably also decide to take advantage of the Windows Forms data binding functionality.
In many cases, read-only reference data and transient data needs to be dealt with differently. Because smart clients typically use both types of data, you need to determine the best way to handle each category of data in your application. | https://msdn.microsoft.com/en-us/library/ff647254.aspx | CC-MAIN-2015-48 | refinedweb | 6,445 | 51.68 |
Post your Comment
Java variable
Java variable To what value is a variable of the String type automatically initialized
Local Variable and Instant Variable
they compile fine. But in the case of local variable in Java, local variable in Java do...Local Variable and Instant Variable difference between Local Variable and Instant Variable
Difference between local variable
Converting jsp variable to java variable
Converting jsp variable to java variable Hi how to convert java script variable to java variable on same jsp page
environment variable
environment variable what is environment variable in java?
Environment variables are used by the Operating System to pass configuration information to applications
Static Variable
Static Variable What is the basic use of introducing static variable type in java?Please explain clearly....
The Static means..., it means that the static variable is shared among all instances of the class
Java Final Variable
Java Final Variable I have some confusion --Final variable has single copy for each object so can we access final variable through class name,or it is necessary to create object for accessing final variable
java Transient variable - Java Beginners
java Transient variable Hi,
What is the use of transient variables in java? Hi friend,
The transient is a keyword defined in the java programming language. Keywords are basically reserved words which
have
Referance variable by using which reference variable we declare instance variable as local variable in java Hi friend,Local variables... variable and parameter. When the method is called, the parameter slots are initialized
JSP Create Variable
JSP Create Variable
JSP Create Variable is used to create a variable in jsp. The scriptlets
include a java code to be written as <%! %>
Static variable in java
Static variable in java.
Static is a keyword in java used to create static methods, variable
inside a class and static class.
Static variable is also called class variable which belongs to class not
to object.
Static
Passing a java property to a javascript variable
Passing a java property to a javascript variable i want to assign a java property to a js variable:
action class:
collegename have a property.../javascript"); %>
var collegename = '';
Java Script code:
alert("collegename
Using a variable filename - Java Beginners
Using a variable filename Dear sir, my program a JFrame containes of 3 JTextFields and 2 different JButtons intend to sends data to 2 Input files... the buttons global variable and using the code e.getSource()== or by associating
Extracting variable from jsp
Extracting variable from jsp how to Write a java program which will be extracting the variables and putting them in an excel format?
The given code allow the user to enter some fields and using the POI API
What is the purpose of declaring a variable as final?
What is the purpose of declaring a variable as final? Hi,
What is the purpose of declaring a variable as final?
Thanks
Hi,
In Java when we declare a final variable as a variable which has been initialized
Java Transient Variable
Java Transient Variables
Before knowing the transient variable you should firs know about the
Serialization in java. Serialization means making a object... mark that object as
persistent. You can also say that the transient variable
Get Environment Variable Java
Get Environment Variable Java
In this example we are getting environment variable. To
get the environment variable we use getenv() method.
Map is an interface
Dynamic variable in java - Java Interview Questions
Dynamic variable in java My interview asked what is dynamic variable in java and where we use them. Hi Friend,
The variables that are initialized at run time is called as dynamic variable.You can use it anywhere
Local Variable ,Package & import
Variable
public class LocalVar {
public static void main(String args... the Class Variable
public class ClassVar {
private static int...Local Variable ,Package & import
A local variable has a local scope.Such
Example of a class variable (static variable)
Example of a class variable (static variable)
... you
can define the static class variable in a class. When a number of objects... of a static variable first of all create
a class StaticVariable. Define one static
Post your Comment | http://roseindia.net/discussion/48457-Static-variable-in-java.html | CC-MAIN-2013-48 | refinedweb | 689 | 52.9 |
#include <deal.II/lac/solver.h>
A base class for iterative linear solvers. This class provides interfaces to a memory pool and the objects that determine whether a solver has converged.
In general, iterative solvers do not rely on any special structure of matrices or the format of storage. Rather, they only require that matrices and vectors define certain operations such as matrix-vector products, or scalar products between vectors. Consequently, (although they also provide many more interfaces that solvers do not in fact need – for example, element access). In addition, you may want to take a look at step-20, step-22, or a number of classes in the LinearSolvers namespace for examples of how one can define matrix-like classes that can serve as linear operators for linear solvers.
Concretely, matrix and vector classes that can be passed to a linear solver need to provide the following interfaces:Base one of the following blocks:Base:
thisargument has been bound using a lambda function (see the example below).Base a lambda function 333 51938 of file solver.h.57 of file sol63 of file solver.h. | https://www.dealii.org/developer/doxygen/deal.II/classSolverBase.html | CC-MAIN-2019-47 | refinedweb | 187 | 51.99 |
Oh, this is "suppressed at threshold 3". It is (1). Did it get a default 1 when the system certified these entries to Apprentice? And how is it supposed to go up if nobody can read anything because of the threshold suppression?
Oh, this is "suppressed at threshold 3". It is (1). Did it get a default 1 when the system certified these entries to Apprentice? And how is it supposed to go up if nobody can read anything because of the threshold suppression?
Some people are writing blogs on Advogato about general
topics, but recently I turned to the opinion that you
should only, or at least mostly, write about your
open-source projects at Advogato. So I will try not to
write anymore about politics, science in general, or
personal things. Unfortunately that means there is
nothing to write now since I have not been working much
on my open-source projects for various reasons. One
reason is that I have to do so much "serious" work, you
know what I mean. Another is that some of the projects
have reached a stage where a boring programming task at
hand. And perhaps the most important reason is that I
just can't find the right people to join a project.
Working alone on open-source is a thing to avoid to my
opinion. You need another project team member who is willing to read
your source code (Python in my case) and who preferably
has a more or less the same programming style as you
do; somebody to discuss things with; somebody who can
test your ideas; somebody who can keep you from diving
into something silly; and most important of all
somebody to join the fun with, to share the proud
feeling when you've created something that works.
Oh well, enough serious work to do. of all I am totally disgusted by the attack on Madrid. Don't know how to do something about it, support the Spanish, write about it, buy Spanish wine, whatever.
I decided not to put here a big anti-arab and anti-terrorist rant I wrote. But things here in Europe are starting to look grim. In the USA you don't have that many arab people, but in Europe it is really running out of control. There is a *huge* difference in culture.
Back to software development etc.
via Martin
Fowler to a Ruby library
for XML.
There is this example from Java
for (Enumeration e=parent.getChildren(); e.hasMoreElements(); ) { Element child = (Element)e.nextElement(); // Do something with child }
and then a Ruby evangelist writes that in Ruby it is so much better because it is
parent.each_child{ |child| # Do something with child }
and then there is "Can't you feel the peace and contentment in this block of code?. Ruby is the language Buddha would have programmed in."
Talk about a difference in culture. If Ruby is still for you then you are very different from me.
Next time you develop a tool call it a "Distributed mobile data-driven XML application". Interoperable. Buzzwords compliant. Great.
But now there is a new problem whenever a bug is detected: where is the bug.?
I read several writers who wrote books back in the 1970's. They did not have editors then, just typewriters. Seeing a smart remark of one of those writers recently in the newspaper, it made me wonder if today they would even write better. It is so hard to write without an editor! Would their books be even better when written with an editor? Are we writing better books these days?.
XML
On second thought, maybe XML isn't all that bad. What about handling tabs, unicode, namespaces, processing instructions, schema declarations, ...
But recently I had to "code" a lot of small XML snippets with an editor. Not just configurations files, all sorts of stuff. Maybe I can come up with a simple translator tool, from human-readable-editable to XML and back. Should not be too difficult. Except for all the fancy XML stuff mentioned above.
If.! | http://www.advogato.org/person/wspace/diary.html?start=32 | CC-MAIN-2014-41 | refinedweb | 681 | 74.19 |
Halloween Gremlins
In this tutorial, we'll create a Silverlight pinball game using Behaviors, a new addition to Expression Blend 3 & Silverlight that allows you to create interactivity with little or no coding.
Below you'll find a video and step-by-step walkthrough.
First you have to setup Physics Helper controls into Expression Blend. Follow these steps:
Next, you create the controller and the pinball. Here's how.
In a pinball game, the Flippers are controlled by the player and are used to move the ball up the playfield. Standard pinball games have one right and one left flipper, but many games have three or more flippers located in various locations on the playfield. We'll create two user controls to represent the flippers – a right flipper and a left flipper – that way we can easily add as many flippers as we want to our main playfield.
<UserControl
xmlns=""
xmlns:x=""
xmlns:d=""
xmlns:mc=""
mc:Ignorable="d"
xmlns:i="clr-namespace:System.Windows.Interactivity;assembly=System.Windows.Interactivity"
xmlns:pb="clr-namespace:Spritehand.PhysicsBehaviors;assembly=Spritehand.PhysicsBehaviors"
x:Class="PinballGame.RightFlipper"
d:
Now let's create more of a playfield for our Pinball game and add a Camera control so that the game will scroll and always follow the pinball. Feel free to tweak the design as you wish.
Pinball games have lots of different types of targets – some are simple sensors which give points on contact, others are little toys that gobble up the ball for a few seconds. In this step, we'll create a Kicking Target, which gives points when hit by the ball but also kicks the ball back in the opposite direction.
Our pinball game is looking up, but the performance could be a lot better. The startup time for the game is taking quite awhile because the Physics Helper Library is determining the outline of all of the shapes. Also, the frame rate is too low.
By default, Silverlight has a target frame rate of 60 frames per second. This is great for a lot of casual games, but Pinball requires a bit more speed. Additionally, Silverlight 3 introduces GPU Acceleration which can greatly increase the performance of our game by offloading graphics operations to the Video Card.
<object data="data:application/x-silverlight," type="application/x-silverlight-2" width="800" height="600">
<param name="source" value="ClientBin/PinballGame.xap"/>
<param name="onerror" value="onSilverlightError" />
<param name="background" value="#010141" />
<param name="minRuntimeVersion" value="3.0.40624.0" />
<param name="MaxFrameRate" value="160" />
<param name="EnableGPUAcceleration" value="true" />
<param name="EnableCacheVisualization" value="false" />
<param name="autoUpgrade" value="true" />
<a href="" style="text-decoration: none;">
<img src="" alt="Get Microsoft Silverlight" style="border-style: none"/>
</a>
</object>
<Rectangle x:
<i:Interaction.Behaviors>
<pb:PhysicsObjectBehavior
</i:Interaction.Behaviors>
</Rectangle>
public partial class MainPage : UserControl
{
PhysicsControllerMain _physicsController;;
BoundaryCache.ReadBoundaryCache(_physicsController);
}
}
In this section, we'll embellish our game with a Score.
public int Score
{
get
{
return Convert.ToInt32(txtScore.Text);
}
set
{
txtScore.Text = value.ToString();
}
}
void MainPage_Loaded(object sender, RoutedEventArgs e)
{
_physicsController = LayoutRoot.GetValue(PhysicsControllerMain.PhysicsControllerProperty) as PhysicsControllerMain;
BoundaryCache.ReadBoundaryCache(_physicsController);
_physicsController.Collision += new PhysicsControllerMain.CollisionHandler(_physicsController_Collision);
}
void _physicsController_Collision(string sprite1, string sprite2)
{
if (sprite1 == "ellBall" && sprite2.StartsWith("ellKicker"))
Score += 10;
}
When the ball collides with the “rectPlatform” obstacle at the bottom, the ball has been lost and we should launch a new ball.
void _physicsController_Collision(string sprite1, string sprite2)
{
if (sprite1 == "ellBall" && sprite2.StartsWith("cnvKicker"))
{
Score += 10;
}
if (_lostTheBall == null &&sprite1 == "ellBall" && sprite2 == "rectPlatform")
{
_lostTheBall = new LostTheBall();
_lostTheBall.Closed += new EventHandler(dialog_Closed);
_lostTheBall.Show();
}
}
void dialog_Closed(object sender, EventArgs e)
{
_lostTheBall = null;
PhysicsSprite ball = _physicsController.PhysicsObjects["ellBall"];
ball.BodyObject.Position = new Vector2(460, 430);
}
We can easily add buffered sound effects using the PhysicsSoundBehavior.
Andy Beaulieu is a software developer and trainer who is well versed in many Microsoft technologies including Silverlight, ASP.NET, ADO.NET and WindowsForms. Visit Andy's Blog for more fun and games with Silverlight.
@Michael Washington, Thanks man, we try our best.
This is a really great tutorial. This covers a lot of advanced stuff but it is clear and easy to follow. Top notch work.
@Rich Alger, you need to do the "Setup and Prerequisites" step
I don't see the "Physics Controller Behavior" in my list of behaviors. I am using Microsoft Expression Blend 3 (Free Trial)
@Rich Alger - I was on one computer and I could not get the Behaviors to show up. However, I was unable to get any behaviors to show up. I had to uniinstall my Expression Blend and Expression SDK ect. and reinstall everything to get them to show up properly.
This tutorial does work, but your Expression Blend set-up may not work correctly and re-installing should help.
@Keith Simmons thanks for the heads up, yeah, I'm going to have to fix the video links.
The video on this site doesn't work. It times out when you try to click on it.
Big fan I wrote a small ragdoll program using these steps at my site (My name is a link to it)and it all worked smoothly except that when I tried to use any of the water or magnet behaviors it decides that non of the physics should work and freezes up. But as long as I use the basic stuff it works fine.
Keep up the cool posts!
@Keith Simmons, took a bit but got the videos on Channel 9
@ge-force watch video 3 for how to do that.
I don't know how to round the corners of the rectangle for the left flipper. And when you said "Expression Blend's main menu" I got a little lost.
That looks like a library I need to spend some time with. Nice video series too. | https://channel9.msdn.com/coding4fun/articles/Creating-a-Pinball-Game-in-Silverlight-Using-the-Physics-Helper-Library--Farseer-Physics | CC-MAIN-2019-04 | refinedweb | 961 | 57.27 |
This is a simple ATtiny13 project that controls LED RGB using Software PWM. In result It gives really nice and colourful light effect. In our circuit a LED cathodes are connected to PB0, PB1 and PB2 while common anode is connected to VCC. The code is on Github, click here.
Parts Required
- ATtiny13 – i.e. MBAVR-1 (Minimalist Development Board)
- Resistors R1, R2, R3 – 560Ω, see LED Resistor Calculator
- LED RGB (common anode)
Circuit Diagram
Firmware
This code is written in C and can be compiled using the avr-gcc. More details on how compile this project is here.
#include <avr/io.h> #include <util/delay.h> /* LED RBG pins */ #define LED_RED PB0 #define LED_GREEN PB1 #define LED_BLUE PB2 /* Rainbow settings */ #define MAX (512) #define STEP (4) /* Fading states */ #define REDtoYELLOW (0) #define YELLOWtoGREEN (1) #define GREENtoCYAN (2) #define CYANtoBLUE (3) #define BLUEtoVIOLET (4) #define VIOLETtoRED (5) /* Global variables */ uint16_t red = MAX; uint16_t green = 0; uint16_t blue = 0; uint16_t state = 0; void rainbow(int n) { switch (state) { case REDtoYELLOW: green += n; break; case YELLOWtoGREEN: red -= n; break; case GREENtoCYAN: blue += n; break; case CYANtoBLUE: green -= n; break; case BLUEtoVIOLET: red += n; break; case VIOLETtoRED: blue -= n; break; default: break; } if (red >= MAX || green >= MAX || blue >= MAX || red <= 0 || green <= 0 || blue <= 0) { state = (state + 1) % 6; // Finished fading a color so move on to the next } } int main(void) { uint16_t i = 0; /* --- setup --- */ DDRB = 0b00000111; PORTB = 0b00000111; /* --- loop --- */ while (1) { /* Rainbow algorithm */ if (i < red) { PORTB &= ~(1 << LED_RED); } else { PORTB |= 1 << LED_RED; } if (i < green) { PORTB &= ~(1 << LED_GREEN); } else { PORTB |= 1 << LED_GREEN; } if (i < blue) { PORTB &= ~(1 << LED_BLUE); } else { PORTB |= 1 << LED_BLUE; } if (i >= MAX) { rainbow(STEP); i = 0; } i++; } return (0); }
19 thoughts on “ATtiny13 – controlling LED RGB | fancy light effects”
Łukasz ,
I like your code! So straight! Good job! I made a Rainbow for my 6 year old daughter out of it, she drawed the picture and in the center of the rainbow there ist the RGB LED.
Greetings
Josi
Hi, good to hear that! Thank you! 🙂
Hi
Could you explain this:
state = (state + 1) % 6;
Because statement is true every six states so how does it prevent compiler to go above values which you defined in /*fading states*/?
Hi, this counter is counting to 5 then resets.
Well I tried it anyway….connecting the a common cathode RGB LED (instead of a common anode RGB LED). Works fine as well!
Hi, good! The signals will be inverted but it should work. L
Hi there, been a while!
Am revisiting this wonderful project of yours. However this time I got a lot of common cathode RGB LEDs given by a friend.
What do I need to edit to make your code adaptable to common cathode RGB LEDs?
Hope to receive from you soon
Hi, really want to try this project. I see 2 includes at the top of your code – where are they from? Other code snippets? And AVRDUDESS is wanting fuse settings. Any advice on these and clock settings?
The complete code is on github.
Can it changed to be a random cycle of random colours? I’ve tried modifying but not with much success. Thanks for all of your work though – I’ve switched to the atiiny13 and learned a lot through your code – keep it up!
It require redesign of the algorithm, probably. What do you mean by random colours and what light effect you would like to get ?
Thanks for visiting my website. I’m glad you can find useful info here. Due to busy days at work I have not much time for creating news posts but I have several ATtiny13 projects to go. Hope to publish it soon!
Hi Łukasz. I am trying to get a smooth random transition of colours – so that the colour varies randomly in the amount of time it is “on” and also a random intensity. So far I have made this ball () with this code () which results in this video ().
I’m happy so far with the result except:
1. It’s not very smooth (colours jump a bit – which is not such a big problem)
2. I’d like to have less cpu time (less intense calculations)
3. I want to lower the clock speed for improved battery life (ideally 1.2MHz).
That’s the plan! Any help/hints would be really useful.
Hello. Thanks for the great tutorial. Tried replicating this and it works! However I’ve been out of my wits trying to modify the rate of change between colors. Tried changing osc. freq, putting delays on the loop, to no avail. Hope you can point me to the right direction…. Thanks again
Hi Ting, to change loop delay please set other STEP value, i.e. 2.
Hi, thanks for your advise. Tried setting #define STEP (1) to 2 (or even 4), it made the rate of change faster. I wanted the rate of change to get slower. Sorry for not pointing it out in my original post. Setting #define STEP to 0.5 was not good either… (actually it was bad, no rate of change). Maybe there is another way…perhaps? Thanks again!
Hi, to make a rate of change faster or slower change the STEP or MAX value:
Have a good prototyping!
I have set bascom avr and arduino))))
Handsomely! You can make a compilation – hex file? And, ideally, it would be desirable option program for overall cathode.
Hi! I really prefer to not share a hex files on a blog posts but I can send it via email if you want. Good hint. Next version of this project could handle two types of LED RBG (w/ common cathode and /w common anode). Thanks! | https://blog.podkalicki.com/attiny13-controlling-led-rgb-fancy-light-effects/ | CC-MAIN-2020-40 | refinedweb | 952 | 82.14 |
0
def random(fileName) setup = open(fileName, "r" ) z = setup.readline() waiting = stack() for line in setup: customerNum, arrivalTime, serviceTime = line.split() plane = Customer(customerNum, arrivalTime, serviceTime) push(waiting, plane) setup.close() print "The data file specifies a", z, "-server simulation." print "There are", len(waiting) , "arrival events in the data file."
This is the output, I tried a bunch of stuff but I get that line break after the variable z every time!
The data in the text file is ran through a clas I created but that has not effect with this print statement??
Please Enter a Filename. test.txt
The data file specifies a 5
-server simulation.
There are 1000 arrival events in the data file.
linux2[23]% | https://www.daniweb.com/programming/software-development/threads/441416/how-do-i-get-rid-of-this-line-break | CC-MAIN-2018-43 | refinedweb | 121 | 60.21 |
Every script I've tried isn't able to copy the feature dataset in the top image to the bottom feature dataset (in an SDE).
It's probably something simple, but is there a known function for this? I can't change anything in the SDE as I don't have administrative rights to it.
I've tried Copy_management. But, here the in_data and out_data variables must have the same file extension to work properly.
I've also tried CopyFeatures_management. Here's the syntax and error message for that:
# Import modules
import os
import arcpy
# set workspace
env.workspace = r"T:\Departments\E911\Transfer Files\E911WeeklyUpdate\E911WeeklyUpdate.gdb"
#create a list of feature classes in the current workspace
fclist = arcpy.ListFeatureClasses()
#copy each feature class to a fgdb
for fc in fclist:
fcdesc = arcpy.Describe(fc)
arcpy.CopyFeatures_management(fc, os.path.join("Database Connections\
Jared to plainfield.sde\gisedit.DBO.MGU_Will", fcdesc.basename))
Try 'print fclist'. It looks like there's nothing there.
Also, there's something fishy going on. You should get an error at 'env.workspace' since you haven't imported env. | https://community.esri.com/thread/187208-copy-feature-dataset-to-feature-dataset | CC-MAIN-2018-34 | refinedweb | 184 | 61.93 |
A Look at the $100,000 Stretch Goal for Shadows of Esteren: Occultism
Warning: A non-numeric value encountered in /nfs/c12/h02/mnt/222827/domains/diehardgamefan.com/html/wp-includes/functions.php on line 64
If you’re unaware, the next book for the multi-award winning Shadows of Esteren line is currently crowdfunding over at Kickstarter. It’s currently raised $92,000 thanks to 772 backers. Nelyhann and his team saw I was a backer (as I always am) and asked if they could debut the $100K stretch goal here at Diehard GameFAN. Of course I said yes…especially once I saw what the goal was.
If Occultism is able to raise $100,000, we’ll get a chance to explore a totally new aspect of Shadows of Esteren – the cuisine. The goal is to create a full menu with recipes for a Shadows of Esteren dinner. Now this idea is not totally new. In 2013 we saw the Werewolf: The Apocalypse Cookbook, but frankly it wasn’t very good nor thematically correct. That won’t be the case with the Shadows of Esteren menu. Here the goal is actually create a full medieval menu based on Shadows of Esteren lore. To ensure an authentic feel and results, the Shadows of Esteren team will be pairing with Thibaud Villanova and Chef Maxime Leonard who some of you might know as the authors of Gastronogeek, whose website is currently down until March. You CAN visit their Facebook page though.
For those of you who aren’t francophones, Gastronogeek was a popular pop culture based cookbook published in September 2014 was an unexpected hit over in France. If you collect cookbooks like I do (Most kids wanted to be Optimus Prime or Mike Schmidt when I was growing up. I wanted to be Joel Robuchon), you’ll find a lot of them are really stuffy and dry. There are a few that are entertaining to read but generally don’t have very good recipes in return (Like the WWF cookbook I picked up in the 90s). Very few are a mix of information and entertainment. Those are books like Alton Brown’s Feasting on Asphalt, The Frasier themed cookbook Café Nervosa and one I have by Chen Kenichi. Now I don’t own Gastronogeek. but I’ve been given samples of what they have done and it definitely one of those cookbooks with excellent recipes but still fun for a non-cook to read. There are five sections to the cookbook: Sci-Fi, Horror, Fantasy, Comics, and Manga. Each section of the cookbook then features references based on pop culture. For example under the Horror section, you’ll find a recipe for Chicken Paprikash which Jonathan Harker ate on his fateful trip to Castle Dracula (Not actually Castle Bran. Damn Romanian tourism board).Under comics, you’ll find Martha Kent’s Apple Pie. Things like that. The photography is fantastic and the pairing of real, delicious recipes to concepts like Conan, Hellboy and Lord of the Rings is pretty fantastic. To learn more about Gastronogeek, click here for an English Press Release PDF. Hopefully this means the book will be in English someday for those of you on this side of the Atlantic that don’t speak Francais (I can’t make a Cedilla or the back end of this site goes INSANE). I’ve also included a gallery of pictures below.
So as a person who loved cooking, cookbooks, and role-playing games – this is a fantastic idea. When I was a teenager we used to do those dinner & murder mysteries you used to see in book stores during the 90s. I’m not sure if this will be a free addition or an add-on (Nelyhann didn’t share that with me), but it definitely has me considering upping my pledge to ensure this happens (I’ve gone all digital with SoE after my late lamented rabbit Mr. Chewie Biteums lived up to his name and ate my limited edition of Shadows of Esteren, Book One.) because it’s an idea that strongly appeals to me, although I can understand if it’s not for every gamer. As the Horror on the Orient Express Kickstarter showed though, people love stretch goals that help to create a full immerse experience. Chaosium gave those backers placemats, napkins, matchboxes and even mugs to help create the feel of actually being on the Orient Express in the 1920s. Of course it’s much easier to create a realistic experience with something that actually existed in our own world. It is quite another to create something for a fantasy world like Shadows of Esteren. If anyone is up to this terrific challenge though, It’s going to be the Gastronogeek team.
Shadows of Esteren needs less than eight thousand dollars to achieve this stretch goal, so it’s fairly safe to assume they will make it. What do you think Gastronogeek will make for the starter, main course and dessert selections if the stretch goal is achieved. What sorts of dishes and food do you think best fit the Shadows of Esteren world? Sound off here and over at the Kickstarter page for Shadows of Esteren: Occultism. If you’re new to Shadows of Esteren, visiting the Kickstarter page is a great way to learn more about the game and also take the plunge and purchase some books for it. You can also learn more by visiting the official website for the game and the game’s Facebook page.
Good luck reaching the 100K stretch goal Nelyhann. Allez Cuisine!
Tags: Shadows of Esteren
Thank you very much for this interesting article. Maybe I have a nice tipfor you as a pasionate collector of cookboks.
Many, many years ago TSR pubished a book named “Leaves from the Inn of the Last Home”. It was not alone a cookbook but a largesection of the book is devoted to extracts from Tika Waylan’s cookbook. But I habe to admit I never read this section. I was more interested in other prts of the book.
Best wishes
Alexander
———————————————————–
Visit my virtual museum of The Dark Eye (Germany’s biggest role-playing game) and see a selection of extraordinary, one-of-a-kind and curious items out of its 30 year old history: | https://diehardgamefan.com/2015/01/13/a-look-at-the-100000-stretch-goal-for-shadows-of-esteren-occultism/ | CC-MAIN-2021-10 | refinedweb | 1,056 | 69.92 |
2. Getting Started¶
2.1. How to use HORTON?¶
HORTON is essentially a Python library that can be used for performing electronic structure calculations as well as interpreting these calculations (i.e. post-processing). There are two different ways to use HORTON. The most versatile approach is to write Python scripts that use HORTON as a Python library. This gives you full access to all the features available in HORTON; however, this requires some knowledge of the Python programming language. Alternatively, part of HORTON’s functionality is accessible through built-in python scripts whose performance can be controlled through command line arguments. Obviously, this requires less programming knowledge.
2.1.1. Running HORTON as a Python library¶
There will be many examples in the following sections demonstrating how HORTON can be used when writing your own Python scripts. These scripts should all start with the following lines:
#!/usr/bin/env python # Import the HORTON library from horton import * # Import some other stuff (optional) import numpy as np, h5py as h5, matplotlib.pyplot as pt # Actual Python script
This header is then followed by some Python code that does the actual computation of interest. In such a script, you basically write the main program to do your calculations, exploiting the components that HORTON offers. The HORTON library is designed such that all its features are as modular as possible, allowing you to combine them in various ways.
Before running your script, say
run.py, we recommend that you to make it
executable (this needs to be done only once for every script):
chmod +x run.py
Now, when your script has completed, you can run it as follows:
./run.py
Do not use
horton.py as your script name; this will cause trouble when loading
the
horton library (due to a namespace collision).
2.1.2. Running HORTON as
horton-*.py scripts¶
The built-in HORTON scripts all have the
horton-*.py filename pattern.
Through command line arguments, one can control the actual calculations performed by these scripts. Basic
information on how to use each built-in script can be obtained by using the
--help flag. For example,
horton-convert.py --help
2.2. Writing a basic HORTON Python script¶
HORTON scripts just run with a regular Python interpreter (like ASE and unlike PSI4, which uses a modified Python interpreter). This means that you need to have a basic knowledge of Python. In addition, it will be helpful to be familiar with popular Python packages for scientific computing. The links below provide some resources to broaden your Python knowledge:
- Python, Python documentation, Getting started with Python
- NumPy (array manipulation), Getting started with NumPy
- SciPy (scientific computing library), The SciPy tutorial
- Matplotlib (plotting)
- H5Py (load/dump arrays from/to binary files)
- Programming Q&A
- Python code snippets
The following sections go through some basic features that will appear in many other examples in the documentation.
2.2.1. Atomic Units¶
Internally, HORTON works exclusively in atomic units. If you want to convert a
value from a different unit to atomic units, multiply it with the appropriate unit
constant, e.g. the following snippet sets
length to 5 Angstrom and prints it in
atomic units:
length = 5*angstrom # recording 5 angstrom in atomic units print length
Conversely, if you want to print a value in a different unit than atomic units, divide it by the appropriate constant. For example, the following prints an energy of 0.001 Hartree in kJ/mol:
energy = 0.001 # recording energy in Hartree print energy/kjmol # printing energy in kJ/mol
An overview of all units can be found in
horton.units.
There are two special cases:
- Angles are in radians, but you can use the
degunit to work with degrees, for example,
90*degand
np.pi/2are equivalent.
- Temperatures are in Kelvin.
2.2.2. Array Indexing¶
All arrays and list-like objects in Python use zero-based indexing. This means that the first element of a vector is accessed as follows:
vector = np.array([1, 2, 3]) print vector[0]
This convention also applies to all array-like (and list-like) objects in HORTON, e.g. the first orbital in a Slater determinant has index 0.
2.2.3. Loading/Dumping Data from/to a File¶
All input and output of data in HORTON is managed through the
IOData
class. To load data, call the
from_file() method
of the
IOData class, e.g.:
mol = IOData.from_file('water.xyz')
The information read from the file is accessible through attributes of the
mol object. For
example, the following prints the coordinates of the nuclei in Angstrom:
print mol.coordinates/angstrom
To write data into a file, first create an instance of the
IOData
class, then set the appropriate attributes, and write the content into a file by
calling the
to_file() method of the
IOData class. For example, the following snippet creates a
.xyz
file for a Neon atom:
mol = IOData(title='Neon') mol.coordinates = np.array([[0.0, 0.0, 0.0]]) mol.numbers = np.array([10]) mol.to_file('neon.xyz')
For a complete list of supported input/output file formats please refer to
Data file formats (input and output); this includes a list of
IOData
attributes supported by each file format. A definition of all possible
IOData attributes can be found in
horton.io.iodata.IOData.
2.2.4. Periodic Table¶
HORTON has a periodic table of elements alongside several atomic properties that may
come in handy in computations. For more details please refer to
horton.periodic.
The following example prints some information for Carbon atom:
print periodic[6].mass/amu # The mass in atomic mass units print periodic['C'].cov_radius/angstrom # The covalent radius in angstrom print periodic['c '].c6 # The C6 coefficient in atomic units
As demonstrated above, you can be relatively sloppy with the index when referring to elements of the periodic table.
2.2.5. A Complete Example¶
This first example is kept very simple in order to illustrate the basics of a HORTON
Python script. (It neither performs an electronic calculation nor does post-processing.) This example
loads a
.xyz file and computes the molecular mass. Finally, it writes the
data read from the
.xyz file and the calculated mass into a
.h5 file, using HORTON’s
internal data format.
#!/usr/bin/env python from horton import * # Load the molecule from an ``.xyz`` file. mol = IOData.from_file(context.get_fn('test/water.xyz')) # Compute the molecular mass mass = 0.0 # Loop over all atomic numbers for number in mol.numbers: mass += periodic[number].mass # Print the mass in the amu unit print 'MOLECULAR MASS [amu]: %.5f' % (mass/amu) # Store the mass in the IOData instance, in order to write it to the file mol.mass = mass # Write data in the mol object to a file in HORTON's internal HDF5-based # file format. mol.to_file('water.h5')
Note that the
context.get_fn('test/water.xyz') expression is used to look up a data
file from the HORTON data directory. If you want to use your own file, load the
molecule as follows:
mol = IOData.from_file('your_file.xyz') | http://theochem.github.io/horton/2.0.2/user_getting_started.html | CC-MAIN-2022-05 | refinedweb | 1,184 | 56.86 |
CodeGuru Forums
>
Visual C++ & C++ Programming
>
C++ (Non Visual C++ Issues)
> simple C++ problem
PDA
Click to See Complete Forum and Search -->
:
simple C++ problem
rimby
November 3rd, 2001, 07:08 PM
I can't figure this out. What am I missing in this simple little program? I can build & compile this and there are no errors or warnings. But as soon as I enter a string and press enter everthing goes away.
#include <iostream>
using namespace std;
int main()
{
char s[80];
cout << "Enter a character string" << endl;
cin >> s;
cout << "The string you entered is: " << s << endl;
return 0;
}
Andreas Masur
November 3rd, 2001, 07:19 PM
Hi,
There is nothing wrong with your code. The only problem I see will occur if you write over the boundaries which means typing in more than 80 characters...but besides that the program will work fine...
Did you debug your code?? If you still have the problem it would be useful to see some debug output like the function its crashes etc.
Ciao, Andreas
"Software is like sex, it's better when it's free." - Linus Torvalds
rimby
November 3rd, 2001, 07:38 PM
Thank you for replying. I was able to compile and run my program and it does display the first cout statement. I get to type in a string of characters and once I press enter the screen disappears. I wish there was a way to pause after I type in a string and press enter so that my program will go to the 2nd cout statement. I hope this makes sense.
Thanks Rimby.
Paul McKenzie
November 3rd, 2001, 09:12 PM
I compiled and ran your program. There is nothing wrong with entering characters until you enter a space.
If you enter a string with spaces, operator >> will stop at the first space. Use getline() if you want to make sure that the entire string is read in:
cin.getline(s,79);
Regards,
Paul McKenzie
rimby
November 3rd, 2001, 10:44 PM
Thank you so much for replying. I made that change. It compiled w/out errors or warnings but as soon as I typed in a string of characters and then pressed enter, the box or screen went away. Perhaps it is a software issue.
I really like your suggestion!
Thanks Rimby
James Curran
November 3rd, 2001, 11:21 PM
You are writing this under Windows, correct? If so, there's nothing wrong with your program --- it's doing exactly what you told it to do.
When you run a "Console-mode" program like this under Windows, a "console window" is created just for it. Once the program ends, the windows is destroyed. So, after you enter your text, the program print the line in the window, and then the programs ends, the window closes. It probably happened so fast that you didn't see it print the line.
If you open a Dos Session manually, and then run the program by typing it's name at the prompt, the window should stay open.
Truth,
James
I don't do it for the points (OK, maybe I do), but rating a post is a good way for me to know if I helped.
rimby
November 4th, 2001, 12:06 AM
Thank you so much, I was going insane trying to figure out why? Now I can rest about it. Your tip has been a great help.
Thanks again,
Rimby
Andreas Masur
November 4th, 2001, 12:08 PM
Hi,
Your second 'cout' will be executed but since you exit your application right after printing the second line via the 'return' statement you will probably not see the output very well. You can add a function like 'getchar()' (in Windows you can use '_getch()', defined in conio.h) which is defined in stdio.h. It will wait for another keystroke. So
#include <iostream>
#include <stdio.h> // in Windows use conio.h
using namespace std;
int main()
{
char s[80];
cout << "Enter a character string" << endl;
cin >> s;
cout << "The string you entered is: " << s << endl;
getchar(); // in Windows use _getch()
return 0;
}
The way James described works of course as well...
Ciao, Andreas
"Software is like sex, it's better when it's free." - Linus Torvalds
rimby
November 4th, 2001, 03:01 PM
I like both ideas. I tried your idea just now and that window still disappears. I can watch it by stepping into and over the function and it works. I just would really like it, if the window would just stay after I enter my string of characters w/out spaces.
I will keep trying!
Thanks you again,
Rimby
Andreas Masur
November 4th, 2001, 11:09 PM
Hmmm, well my description was not totally clear. 'getchar()' and '_getch()' don't wait for the next keystroke. It's just not the proper description but I was a little bit in hurry. Both functions are reading one character from the input stream in your case 'stdin' (Standard input). Normally you can use them to wait for one keystroke as well. BUT, if some character is still waiting in the input streambuffer this function won't wait, it will immediately read the next character and in your case close the window.
As Paul already mentioned, if you type in some kind of string with a space, operator '>>' will stop at the space. So the rest of your string will stay in the input streambuffer and if you call 'getchar()' or '_getch()' it will notice that there are characters waiting inside the buffer and will read the first character from it. If you type in
"This is a string"
's' will only contain "This ". If you then call 'getchar()' or '_getch()' they will return 'i' since this is the next character which is still inside the buffer.
That might be the problem, I don't know if your program is running under Windows but normally '_getch()' shouldn't make bigger problems.
Ciao, Andreas
"Software is like sex, it's better when it's free." - Linus Torvalds
Oliver Wraight
November 5th, 2001, 05:59 AM
Try pressing <Ctrl> + F5 instead of just F5 when you run the program. This will make Visual Studios ask for a keypress before closing the console window when the program exits.
Hope this helps,
Oliver
rimby
November 10th, 2001, 11:07 PM
Hello,
Someone suggested to me to use:
system ("PAUSE");
prior to the return and it worked perfectly. Someone else suggested using "ignore" but I wasn't sure how to do this.
Rosee
codeguru.com | http://forums.codeguru.com/archive/index.php/t-179749.html | crawl-003 | refinedweb | 1,092 | 80.21 |
Setting up a simple CI/CD flow with k3s and gitlab
Yeah, it’s a new year and it’s time for a new setup! I have been using docker swarm as my personal cluster for the past year. As I’m exposed to kubernetes more and more in my job, I just thought that maybe it’s time for me to actually do something with kubernetes in my free time to learn more about it. Besides, having knowledge about kubernetes benefits me in many ways considering how popular kubernetes has become.
In this blog post, I’m going to describe how I set up my personal cluster using k3s (a Lightweight kubernetes distribution) and rewrite the CI/CD pipeline to deploy to the new cluster instead of the old docker swarm cluster.
I’m assuming that you all have the basic knowledge of Kubernetes (pod, service, deployment, ingress etc…), and will not go into details when it comes to the k8s manifests
What is this k3s thing?⌗
As opposed to k8s, which is the first + last letters of kubernetes and 8 truncated characters in between, k3s doesn’t have any long form (or at least not that I’m aware of). K3s is capable of nearly everything k8s has to offer, meaning that you can write your manifests normally and it’s highly possible that you can apply them to either a full-fledged k8s cluster or a k3s cluster. The main reason why I chose k3s instead of the “standard” k8s distribution is because of the hardware constraints. According to the offical guide, a master node in a k8s cluster requires at least 2GB RAM, my tiny cloud instance in hetzner has exactly 2GB but I’m running other stuff there as well.
Another reason is the learning curve, it will probably take a lot more time for me to learn how to set up a proper k8s cluster compared to a 5-min 1 command run to set up a k3s cluster with everything I need to kickstart my journey with k8s (well technically not k8s but a k8s compliant cluster).
Set up a k3s cluster⌗
First thing first, we need to set up a k3s cluster. For the sake of simplicity, I’m gonna set up a simple 1 master node cluster. k3s is capable of having a multi master setup for high availability.
There are several ways to set up a k3s cluster, the most straightforward (and official) way is to run this command in your master node
curl -sfL | sh -
and you are good to go. There are several environment variables that you set to configure the setup process, but the command itself by default should set up everything.
And to join a cluster, simply run this in your worker node
curl -sfL | K3S_URL= K3S_TOKEN=mynodetoken sh -
I shamelessly copied it from the official docs. The value used for
K3S_TOKEN can be found from
/var/lib/rancher/k3s/server/node-token in the master node.
Alternatively, you can also use k3sup to set up the cluster and your local kubectl at the same time. Just need to run this in your local machine
k3sup install --ip $IP --user root
This assumes that you have added your SSH key to the server. The user can be something other than
root. There are other options as well, you can check them out here
After running this, you will also have access to
kubectl, and then you can see what nodes are in the cluster. I have only 1 here because I don’t need to care about high availability for my setup (yet)
kubectl get nodes NAME STATUS ROLES AGE VERSION dev-1 Ready master 4d1h v1.19.5+k3s2
To join a cluster, run this
k3sup join --ip $AGENT_IP --server-ip $SERVER_IP --user $USER
Notice that you don’t even have to get the token, k3sup will do all of that for you.
Connect to the k3s cluster from the gitlab CI/CD pipeline⌗
Although Gitlab has a nice k8s integration (which can be used with any k3s clusters because they are compatible), we can do this in a generic way so that it can be applied to any other CI/CD platforms.
The main idea is to use
kubectl to manage the cluster from within the CI/CD pipeline deployment step. In order to connect to the cluster, we need 2 things
- The Certificate Authority (CA) of the cluster so that our connections are secured by TLS
- A credential to with proper permissions to access the cluster
Let’s see how it works first and then we can go into details
kubectl config set-cluster k8s --server= --certificate-authority=$KUBE_API_CERT kubectl config set-credentials k8s-deployer --token=$KUBE_API_TOKEN kubectl config set-context k8s --cluster k8s --user k8s-deployer kubectl config use-context k8s
Pretty standard setup, first we need to define the cluster (its IP adddress and the CA used for TLS). Then, we need to set a credential (in form of a Bearer token) to connect to the cluster. After that, it’s just normal stuff to make sure we are running the correct context.
And I happen to have all the necessary commands to do get/create all the necessary data (make sure you are using the
default namespace when running these commands)
SERVICE_ACCOUNT=blog-deployer kubectl create serviceaccount $SERVICE_ACCOUNT kubectl create clusterrolebinding $SERVICE_ACCOUNT --clusterrole cluster-admin --serviceaccount default:$SERVICE_ACCOUNT KUBE_DEPLOY_SECRET_NAME=`kubectl get serviceaccount $SERVICE_ACCOUNT -o jsonpath='{.secrets[0].name}'` KUBE_HOST=`kubectl get ep -o jsonpath='{.items[0].subsets[0].addresses[0].ip}'` KUBE_API_TOKEN=`kubectl get secret $KUBE_DEPLOY_SECRET_NAME -o jsonpath='{.data.token}'|base64 --decode` KUBE_API_CERT=`kubectl get secret $KUBE_DEPLOY_SECRET_NAME -o jsonpath='{.data.ca\.crt}'|base64 --decode`
There are several things here, first we need to create a service account and then we need to add (bind) that service account to the
cluster-admin cluster role. The
cluster-admin role is a bit too much because it has access to the entire cluster. But for the sake of simplicity I just use the most powerful one to avoid any permission problems during the process.
Then we need to get the secret name of the service account which contains all the necessary credentials to connect to the cluster using that service account, namely the (bearer) token and the certificate authority to secure the connection. With all that available, we can now set some environment variables in gitlab (Settings -> CI/CD -> Variables)
One tip here is to create the
KUBE_API_CERT as a “file” variable which is a neat feature of Gitlab CI/CD (more details can be found here). Without it, we would have to do 2 steps
echo $KUBE_API_CERT > /tmp/ca.crt kubectl config set-cluster k8s --server= --certificate-authority=/tmp/ca.crt
and then we always need to clean it up (
rm /tmp/ca.crt) after we are done. With the file variable in Gitlab, we can just use the environment variable.
Now that we have a working kubectl configuration, to actually do the deployment, the good old
kubectl apply -f kubernetes works just fine (with some drawbacks).
Here is the manifests that I’m using to deploy this blog. It’s for krane but the overall structure is the same as regular manifests. We will go into details why simly doing
kubectl apply -f is not enough.
And here is a simple CI/CD pipeline that builds the docker image, and then publishes it to my k3s cluster.
Why is
kubectl apply not enough?⌗
Unlike docker swarm’s compose files where you can at least refer to environment variables, the standard k8s manifests are just static files (as far as I know). You can’t for example change your image tag to refer to the latest deployment tag, what you can do is to always use
latest which is not a good thing because you can’t roll back (what does previous
latest even mean). For example here is the deployment manifest written normally.
--- apiVersion: apps/v1 kind: Deployment metadata: name: blog-site labels: app: blog-app spec: replicas: 2 selector: matchLabels: app: blog-app template: metadata: labels: app: blog-app spec: containers: - name: blog image: registry.gitlab.com/tanqhnguyen/blog:latest resources: requests: memory: "64Mi" cpu: "250m" limits: memory: "96Mi" cpu: "500m" ports: - containerPort: 80
With this deployment, the best we can do is to deploy the latest image. And if there are some problems with the latest image, we can’t roll back (which is fine for my blog but not really for a real production environment). There are 3 solutions (that I know of at the moment) to solve this particular problem.
- Use
sedto replace the fixed
latesttag with the actual latest image tag (in Gitlab, it’s
$CI_COMMIT_SHORT_SHAvariable) before calling
kubectl apply
- Use helm and add the manifests into a helm chart with proper variables. Then we can use helm to install the package with custom values when installing/updating the chart
- Use krane and change the manifest to be an ERB template. Then we can use krane to render the manifest templates and provide custom bindings
All solutions have their pros/cons, I won’t go into details because it’s outside the scope of this blog post. After evaluating these options, I decided to go with (3) because in addition to the manifest template, krane also has better support for managing secrets (via EJSON) among other things.
Now with
krane we can change this line to make it possible to deploy using the commit hash as image tag instead of
latest
image: <%= registry_image %>:<%= commit_short_sha %>
And instead of running
kubectl apply -f, we need to render the templates first and then pipe the output to
kubectl apply
krane render -f kubernetes --bindings=registry_image=$CI_REGISTRY_IMAGE,commit_short_sha=$CI_COMMIT_SHORT_SHA | krane deploy ${KUBE_NAMESPACE} k8s -f -
And that’s it, it’s my first time setting up a k3s cluster and expose myself more to k8s workflow so there might be some weird stuff here and there. You can also check out the real setup in my gitlab repository | https://tannguyen.org/2021/01/setting-up-a-simple-ci/cd-flow-with-k3s-and-gitlab/ | CC-MAIN-2022-05 | refinedweb | 1,677 | 54.26 |
#Goose - Article Extractor
Goose fork published on Maven Central.
This is a forkThis is a fork
If you haven't guessed already, this is a fork of the wonderful Goose library by Gravity Labs. The original repo hasn't been updated for 2 years now, and there have been quite a few nice pull requests that are lying dormant.
The project now uses sbt, and is hosted on Sonatype. Add the following to to your
build.sbt to pull it in:
libraryDependencies ++= Seq("com.gravity" %% "goose" % "2.1.25-SNAPSHOT") resolvers += Resolver.sonatypeRepo("public")
##Intro
Goose was originally an article extractor written in Java that has most recently (aug2011) converted to a Scala project. wiki has the full details on how to use Goose
Goose was open sourced by Gravity.com in 2011
Lead Programmer: Jim Plush (Gravity.com)
Contributers: Robbie Coleman (Gravity.com)
Try it out online!
##Licensing If you find Goose useful or have issues please drop me a line, I'd love to hear how you're using it or what features should be improved
Goose is licensed by Gravity.com under the Apache 2.0 license, see the LICENSE file for more details
##Environment Prerequisites
The default behaviour is by using java image processing capabilities.
ImageMagickImageMagick
You will need to have ImageMagick installed for Goose to work correctly.
On osx, you can install with brew: $ brew install imagemagick
Update Configuration.scala with the location of identify and convert (eg /usr/local/bin)
##Take it for a spin
SBTSBT
To use goose from the command line:
cd into the goose directory sbt "run-main com.gravity.goose.TalkToMeGoose
MVNMVN
cd into the goose directory mvn compile MAVEN_OPTS="-Xms256m -Xmx2000m"; mvn exec:java -Dexec.mainClass=com.gravity.goose.TalkToMeGoose -Dexec.args=" -e -q > ~/Desktop/gooseresult.txt
##Testing To run the junit tests, kick off the sbt test target:
sbt test
Note that there are currently problems in the tests. (8 failures in 41 tests on 2014-07-10 - raisercostin)
##Usage as a maven dependency
Last version (goose_2.10-2.2.0.jar) is hosted at Goose is hosted on Sonatype's OSS repository,
<dependency> <groupId>com.gravity</groupId> <artifactId>goose</artifactId> <version>2.1.22</version> </dependency>
##Regarding the port from Java to Scala
Here are some of the reasons for the port to Scala:
- Gravity has moved more towards Scala development internally so maintenance started to become an issue
- There wasn't enough contribution to warrant keeping it in Java
- The packages were all namespaced under a person's name and not the company's name
- Scala is more fun
##Issues
It was a pretty fast Java to Scala port so lots of the nicities of the Scala language aren't in the codebase yet, but those will come over the coming months as we re-write alot of the internal methods to be more Scalesque. We made sure it was still nice and operable from Java as well so if you're using goose from java you still should be able to use it with a few changes to the method signatures.
##Goose is now language aware
The stopword lists introduced in the Python-Goose project have been incorporated into Goose.
Deploy libraries to bintrayDeploy libraries to bintray
configure your ~/.m2/settings.xml as
<servers> <server> <id>raisercostin-releases</id> <username>svn-user</username> <password>svn-pass</password> </server> </servers>
deploy for scala 2.11
mvn -f pom_scala211.xml deploy -DskipTests -Prelease
deploy for scala 2.10
mvn -f pom_scala210.xml deploy -DskipTests -Prelease | https://index.scala-lang.org/raisercostin/goose | CC-MAIN-2022-21 | refinedweb | 588 | 56.15 |
I'm still very new to java and I'm working on a school assignment that's working on cryptography. When I was searching for methods to read from files I saw many had a try and catch block. I'm not very familiar with the use of these and I want to try and avoid using them in my code but when I remove them I have two exceptions reported at the new in
new FileReader
reader.readLine()
import java.io.*;
import java.util.*;
public class Encrypter {
public static void main(String[] args) {
File input = null;
if (1 < args.length) {
input = new File(args[1]);
} else {
System.err.println("Invalid arguments count:" + args.length);
System.exit(0);
}
String key = args[0];
BufferedReader reader = null;
try {
int i = 0;
String[] inputText = new String[20];
String[] encryptText = new String[20];
reader = new BufferedReader(new FileReader(input));
while ((inputText[i] = reader.readLine()) != null) {
encryptText[i] = inputText[i];
System.out.println(inputText[i]);
++i;
}
int hash = key.hashCode();
Random random = new Random(hash);
String alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ ";
String alphabetPerm = alphabet;
char temp;
for (int j = 0; j < 100; j++) {
int n1 = random.nextInt(27) + 0;
int n2 = random.nextInt(27) + 0;
char[] swapper = alphabet.toCharArray();
temp = swapper[n1];
swapper[n1] = swapper[n2];
swapper[n2] = temp;
String alphaSwap = new String(swapper);
alphabet = alphaSwap;
}
System.out.println(alphabet);
for (int k = 0; k < inputText.length; k++) {
encryptText[k] = inputText[k].replaceAll("[^A-Za-z0-9 ]+", " ");
for (int j = 0; j < inputText[k].length(); j++) {
int index = alphabetPerm.indexOf(encryptText[k].charAt(j));
encryptText[k] = alphabetSwapper(encryptText[k], alphabet, index, j);
System.out.println(encryptText[k]);
}
}
} catch (Exception e) {
System.err.println("Caught Exception: " + e.getMessage());
}
}
public static String alphabetSwapper(String s, String alpha, int index, int value) {
char toSwap = s.charAt(value);
char[] inputChars = s.toCharArray();
inputChars[value] = alpha.charAt(index);
String swapped = new String(inputChars);
return swapped;
}
}
You are better off not catching the exception the way you are because you are discarding the two most useful pieces of information, which exception was thrown and where did it happen.
Instead of catching the exception, you can add the exceptions to the
throws clause of
main()
e.g.
public static void main(String[] args) throws IOException { // ... }
Can anyone explain what is happening?
When you read a file you might be an
IOException e.g. if the file is not present. You might have to catch this exception but for now you can let the caller to
main print it out if it happens.
Also when using the catch and try I get an exception Null when my encoding is done
This means you are triggering an exception without a message. If you print the exception and where it happened(or let the JVM do it for you) you will see where.
To determine why this is happening I suggest stepping through the code in your debugger so you can understand what each line does. | https://codedump.io/share/4GuRG7A4VWwl/1/program-reports-exceptions-when-not-using-try-and-catch | CC-MAIN-2018-17 | refinedweb | 488 | 59.4 |
The Event Hubs team is happy to announce the general availability of our integration with Apache Spark. Now, Event Hubs users can use Spark to easily build end-to-end streaming applications..
Setting up a stream is easy, check it out:
import org.apache.spark.eventhubs._ import org.apache.spark.sql.SparkSession val eventHubsConf = EventHubsConf("{EVENT HUB CONNECTION STRING FROM AZURE PORTAL}") .setStartingPosition(EventPosition.fromEndOfStream) // Create a stream that reads data from the specified Event Hub. val spark = SparkSession.builder.appName("SimpleStream").getOrCreate() val eventHubStream = spark.readStream .format("eventhubs") .options(eventHubsConf.toMap) .load()
It's as easy as that! Once your events are streaming into Spark, you can process them as you wish. Spark provides a variety of processing options, such as graph analysis and machine learning. Our documentation has more details on linking our connector with your project!
The project is open source and available on GitHub. All details and documentation can be found there. Any and all community involvement is welcome, come say hello! If you like what you see, please star the repo to show your support!
Finally, if you have any questions, comments, feedback, please join our gitter chat. Contributors are in the channel to chat and answer questions as they come up, Enjoy the connector!
Next steps | https://azure.microsoft.com/en-in/blog/azure-event-hubs-integration-with-apache-spark-now-generally-available/ | CC-MAIN-2018-39 | refinedweb | 213 | 52.26 |
FAQ - What is the
tend interval ?
Detail
Within the client policy for Aerospike Clients, there is a
tend interval configuration. What does this do?
Answer
Aerospike spreads the load across the cluster by using intelligent clients that direct transactions to the node holding the appropriate records. In addition to efficient and evenly balanced operation (assuming an evenly balanced workload from the application), this negates the need for a single ‘gatekeeper’ node meaning that cluster resilience is orders of magnitude greater than traditional cluster frameworks. Clients direct transactions based on a partition map held within each client object. The partition map holds locations for the master and one or many replica copies of the partitions which make up a namespace. As the partition to which a record is assigned is determinstic, when the location of the partition is known, the location of the record is also known.
The partition map is not pushed out from the cluster to the clients, it is pulled from each node by the client on a periodic basis. The client tends to the nodes in the cluster and the nodes report back which partitions they own. The time period between tending calls to each node is the
tendInterval and can be configured within the client policy.
As the client has a map of partitions by node, transactions can be sent directly to the node that holds the record meaning that there is no load balancing required. The load balancing is implicit and performed by the partition hashing algorithm at a cluster level. It then follows that if there is an imbalance of connections to any particular node in the cluster then there is a problem of sorts with that node or with the non uniformity of the application workload.
If there has been a change of partition ownership within the cluster, as soon as a new master is assigned, transactions will proceed correctly. There may be a gap of up to the configured
tendInterval before the client updates the partition map, during that period the transactions will be proxied to the new master and will complete successfully.
Notes
- If a node leaves the cluster unexpectedly there will be a period mainly defined by (
timeoutx
interval) to which a grace period called
Quantum Intervalis added before a new master takes ownership.
- The
Quantum Intervalis the period of time the cluster allows before rebalancing after a node leaves unexpectedly to avoid multiple rebalances over a short space of time.
- If the node removal is expected the
quiesceinfo command should be used to minimise operational impact to clients and manage the removal gracefully.
- Most clients hold master partitions and N replica partitions in the partition map where N is replication factor. The C client only holds the master and first replica.
Keywords
TEND INTERVAL CLIENT CONNECTION LOAD BALANCE
Timestamp
March 2020 | https://discuss.aerospike.com/t/faq-what-is-the-tend-interval/7276 | CC-MAIN-2020-40 | refinedweb | 472 | 58.11 |
when i am creating new c# script it is showing errors that namespace not found(although these namespace are default namespaces). i have recently install unity 5.3 . restarted monobehaviour many time,change language from default to english. selected monobehaviour in unity editor --> edit>preferences..>external tools. reinstalled unity 5.3......but nothing solved my issue.
You mean MonoDevelop? Are you sure that you actually opened the solution / project in MonoDevelop and not just a single file? Usually when you double click on a script in Unity it should automatically open the project. Are you on a mac ? Or why are you using MonoDevelop?
ooops ! its too late bunny i have solved that problem ! the problem was occurred due to improper installation of unity. but thanks for.
Asset does not correspond to file location, should be Assets.
2
Answers
Monobehaviour Update not working in namespace
2
Answers
List of namespaces?
1
Answer
my system missing monobehavior
0
Answers
Simple question about single or multiple Monobehaviours
1
Answer | https://answers.unity.com/questions/1533347/namespace-not-found.html | CC-MAIN-2019-43 | refinedweb | 167 | 61.93 |
#include <dlfcn.h>
#define _GNU_SOURCE #include <dlfcn.h>
The.
On success, these functions return the address associated
with
symbol. On
failure, they return NULL; the cause of the error can be
diagnosed using dlerror(3).
For an explanation of the terms used in this section, see attributes(7).
There are several scenarios when the address of a global
symbol is NULL. For example, a symbol can be placed at zero
address by the linker, via a linker script or with
−−defsym command-line
option. Undefined weak symbols also have NULL value. Finally,
the symbol value may be the result of a GNU indirect function
(IFUNC) resolver function that returns NULL as the resolved
value. In the latter case,
dlsym() also returns NULL without error.
However, in the former two cases, the behavior of GNU dynamic
linker is inconsistent: relocation processing succeeds and
the symbol can be observed to have NULL value, but
dlsym() fails and
dlerror() indicates a lookup error.
dl_iterate_phdr(3), dladdr(3), dlerror(3), dlinfo(3), dlopen(3), ld.so(8) | https://manpages.courier-mta.org/htmlman3/dlsym.3.html | CC-MAIN-2021-21 | refinedweb | 173 | 56.66 |
Martin Kosek wrote:
On 10/18/2012 12:04 AM, Rob Crittenden wrote:Martin Kosek wrote:Hello,
AdvertisingI was investigating global unit test failure on Fedora 18 for most of today, I would like to share results I found so far. Unit test and its related scripts on F18 now reports NSS BUSY exception, just like this one: # ./make-testcert Traceback (most recent call last): File "./make-testcert", line 134, in <module> sys.exit(makecert(reqdir)) File "./make-testcert", line 111, in makecert add=True) File "./make-testcert", line 68, in run result = self.execute(method, *args, **options) File "/root/freeipa-master2/ipalib/backend.py", line 146, in execute raise error #pylint: disable=E0702 ipalib.errors.NetworkError: cannot connect to '': [Errno -8053] (SEC_ERROR_BUSY) NSS could not shutdown. Objects are still in use. Something In F18 must have changed, this worked before... But leaked NSSConnection objects without proper close() now ends with the exception above. In case of make-testcert script, the exception is raised because the script does the following procedure: 1) connect, do one command 2) disconnect 3) connect, do second command However, during disconnect, NSSConnection is leaked which makes NSS very uncomfortable during second connection atempt (and nss_shutdown()). I managed to fix this issue with attached patch. ./make-testcert or "./make-test tests/test_xmlrpc/test_group_plugin.py" works fine now. But global "./make-test" still fails, I think there is some remaining NSSConnection leak, I suspect there is something wrong with how we use our context (threading.local object). It looses a connection or some other thread invoked in ldap2 module may be kicking in, here is my debug output: CONTEXT[xmlclient] = <ipalib.request.Connection object at 0x9a1f5ec> Test a simple LDAP bind using ldap2 ... SKIP: No directory manager password in /root/.ipa/.dmpw Test the `ipaserver.rpcserver.jsonserver.unmarshal` method. ... ok tests.test_ipaserver.test_rpcserver.test_session.test_mount ... CONTEXT 150714476: GET languages CONTEXT[xmlclient] = None The connection is in the context, but then something happens and it is gone. Then, unit tests try to connect again and NSS fails. I would be really glad if somebody with a knowledge of NSS or how threads in Python/IPA work could give me some advice... Thanks! MartinI built upon your patch and have something that seems to work at least somewhat. I'm getting some unexpected test failures when running the entire suite but no NSS shutdown errors. I haven't had a chance to really investigate everything yet, sending this out as a work-in-progress in case you want to take a look. robYeah, this is great! I tested a fresh build+install on Fedora 18 with your patch and all tests succeeded. So as for F18, I am inclined to ACK the patch as is. I am just not sure that this will work on platforms with Python version < 2.7, xmlrpclib is different there. Martin
Here is my first crack at fixing that too. It requires a bunch of run time juggling though.Here is my first crack at fixing that too. It requires a bunch of run time juggling though.
rob
>From c53e283986f2b00db53e28009829ba09d62930aa Mon Sep 17 00:00:00 2001 From: Rob Crittenden <rcrit...@redhat.com> Date: Wed, 17 Oct 2012 16:58:54 -0400 Subject: [PATCH] Close connection after each request, avoid NSS shutdown problem. The unit tests were failing when executed against an Apache server in F-18 due to dangling references causing NSS shutdown to fail. --- ipalib/rpc.py | 30 +++++++++++++++++++++++++----- ipapython/nsslib.py | 6 ++++++ 2 files changed, 31 insertions(+), 5 deletions(-) diff --git a/ipalib/rpc.py b/ipalib/rpc.py index e97536d9de5c455d3ff58c081fca37f16d087370..8389396e0e23623b5edb60d634041949f95711ce 100644 --- a/ipalib/rpc.py +++ b/ipalib/rpc.py @@ -257,16 +257,24 @@ class SSLTransport(LanguageAwareTransport): # If we an existing connection exists using the same NSS database # there is no need to re-initialize. Pass thsi into the NSS # connection creator. + if sys.version_info > (2, 6): + if self._connection and host == self._connection[0]: + return self._connection[1] + dbdir = '/etc/pki/nssdb' no_init = self.__nss_initialized(dbdir) - (major, minor, micro, releaselevel, serial) = sys.version_info - if major == 2 and minor < 7: + if sys.version_info < (2, 7): conn = NSSHTTPS(host, 443, dbdir=dbdir, no_init=no_init) else: conn = NSSConnection(host, 443, dbdir=dbdir, no_init=no_init) self.dbdir=dbdir + conn.connect() - return conn + if sys.version_info < (2, 7): + return conn + else: + self._connection = host, conn + return self._connection[1] class KerbTransport(SSLTransport): @@ -331,6 +339,13 @@ class KerbTransport(SSLTransport): return (host, extra_headers, x509) + + def single_request(self, host, handler, request_body, verbose=0): + try: + return SSLTransport.single_request(self, host, handler, request_body, verbose) + finally: + self.close() + def parse_response(self, response): session_cookie = response.getheader('Set-Cookie') if session_cookie: @@ -371,7 +386,8 @@ class xmlclient(Connectible): """ if not hasattr(self.conn, '_ServerProxy__transport'): return None - if type(self.conn._ServerProxy__transport) in (KerbTransport, DelegatedKerbTransport): + if (isinstance(self.conn._ServerProxy__transport, KerbTransport) or + isinstance(self.conn._ServerProxy__transport, DelegatedKerbTransport)): scheme = "https" else: scheme = "http" @@ -493,7 +509,11 @@ class xmlclient(Connectible): return serverproxy def destroy_connection(self): - pass + if sys.version_info > (2, 6): + conn = getattr(context, self.id, None) + if conn is not None: + conn = conn.conn._ServerProxy__transport + conn.close() def forward(self, name, *args, **kw): """ diff --git a/ipapython/nsslib.py b/ipapython/nsslib.py index 06bcba64895b0ba7a6b814ed6748eff8bf5ff9b3..7afccd5685baccdb8e9eff737cb7dd4b11d46630 100644 --- a/ipapython/nsslib.py +++ b/ipapython/nsslib.py @@ -238,6 +238,12 @@ class NSSConnection(httplib.HTTPConnection, NSSAddressFamilyFallback): def connect(self): self.connect_socket(self.host, self.port) + def close(self): + """Close the connection to the HTTP server.""" + if self.sock: + self.sock.close() # close it manually... there may be other refs + self.sock = None + def endheaders(self, message=None): """ Explicitly close the connection if an error is returned after the -- 1.7.12.1
_______________________________________________ Freeipa-devel mailing list Freeipa-devel@redhat.com | https://www.mail-archive.com/freeipa-devel@redhat.com/msg13548.html | CC-MAIN-2018-34 | refinedweb | 946 | 52.46 |
ilang Ilhami12,045 Points
String formatting with dictionaries
I tried doing the loop, but i just can't get it, am i missing something?
dicts = [ {'name': 'Michelangelo', 'food': 'PIZZA'}, {'name': 'Garfield', 'food': 'lasanga'}, {'name': 'Walter', 'food': 'pancakes'}, {'name': 'Galactus', 'food': 'worlds'} ] string = "Hi, I'm {name} and I love to eat {food}!" def string_factory(dicts, string): new_list = [] for words in dicts: new_list = string.append(sting.format(**dicts)) return new_list print (string_factory(dicts, string))
2 Answers
Steven Parker217,443 Points
You're closer, but you have a few syntax issues yet.
Ignoring the print statement at the end, which is not part of the challenge but doesn't hurt it, your remaining issues are all in the statement inside the loop:
new_list = string.append(sting.format(**dicts))
- you can't "append" a string, you probably want to append the new_list (instead of assigning to it)
- you spelled "sting" (without the "r") where you apply format to it.
- your loop extracts the individual dictionaries as "words", but the format still attempts to use dicts.
Gilang Ilhami12,045 Points
Thank you steven, i was wondering if you can help me witht he 4th callenge?
def courses(my_dict): single = [] for value in my_dict.values(): single.append(value) return value | https://teamtreehouse.com/community/string-formatting-with-dictionaries-4 | CC-MAIN-2022-40 | refinedweb | 205 | 61.36 |
use of wait() is ambiguous concerning exception FindFailed
Bug Description
Not really a bug --- to evaluate together with renewal of api
We need a companion for wait(), that internally does not throw FindFailed and returns None independent of setThrowExeception if not found.
Additionallly it would be very convenient to have a getThrowException() to get the current status. Same goes for getAutoWaitTime
Why?
looking along a workflow, getting a FindFailed normally means, that something on the screen is not as it should be. Therefor the standard, that find() and all the companions throw exception FindFailed if not found is a good support, to get a workflow running in a short time, with a minimum of needed statements.
If you want some special handling, you have setThrowException() and <try: except:> to do so.
But there is a "convenience gap":
if you want a wait(), to be intentionally used, to check, wether something is there within a specific time, to decide, how to go on with the workflow, its rather inconvenient to get it go.
example: the script does something on a web page. if its not already there, you have to bring it up first.
to be consistent I always use wait() for theses checks.
switchApp(browser)
if not wait(pic, timeout=0): # something to check, wether web page already there, short timeout
myGetNew(url) # privat def: to get a new window with the url opened
wait(pic) # normal or extended timeout
for the first wait(), we need a None returned, but if not there we get FindFailed and the script stops.
second try:
setThrowExcepti
if not wait(pic, timeout=0): # something to check, wether web page already there, short timeout
myGetNew(url) # privat def: to get a new window with the url opened
setThrowExcepti
wait(pic) # normal or extended timeout
This works, but may impact whats happening inside myGetNew(). Since in the moment there is no getThrowException, you have to define a global yourself, to keep track.
another user made up his own _wait() to get around this:
def _wait(img, timeout=3000):
try:
s = wait(img, timeout)
except:
s = []
return s
for me this is not a solution, since its too much depending on the current api.
my current solution is:
global mySetExc # set accordingly by the excO..()
excOn() # throw
excOff() # don't throw
myGetExc() # gets current from mySetExc
with this in every def(): I always know, whats the current situation about FindFailed
check() # private that does a wait without throwing and resets to mySetExc
so my solution for the example:
switchApp(browser)
if not check(pic, timeout=0):
myGetNew(url)
wait(pic)
Version 0.10: new method exits() returns False instead of raising exception FindFailed | https://bugs.launchpad.net/sikuli/+bug/529025 | CC-MAIN-2015-14 | refinedweb | 446 | 54.46 |
This site uses strictly necessary cookies. More Information
Hi,
I switched over to Visual Studio recently and I always get an error whenever I have a public class referenced in another script. The error/warning message is:
Warning CS0649: Field 'Script.referencedScript' is never assigned to, and will always have its default value null (CS0649) (Assembly-CSharp).
even though in the script there are functions called from the referenced script. Actually everything works just fine when running the game but I am still curious what these warnigns mean exactly and how I can get rid of them.
Thanks in advance :)
Answer by JVene
·
Sep 17, 2018 at 01:36 PM
Without a code example of the compiler warning we can only answer in generalities, but this warning is part of what Visual Studio brings to the development toolset (and there are lots of ways to get this feature).
What you're seeing is the result of some code checking / analysis, and some tools go much deeper into studying the code we write to make suggestions and observations known to create bugs that aren't obvious when we write. It isn't always a good idea to ignore warnings, professional developers work to write code that passes all inspection from analysis tools, but here I speak of development outside of Unity in all manner of applications, many of which are critical. If you're studying, consider the warnings of this type as an introduction into the kinds of things known by professionals and academics to be noteworthy, historically the source of bugs, but for you they may occasionally be ignored until you have a problem (crashes, errors, etc.). Sometimes these warnings give you some indication of why a crash has happened.
It would seem the compiler things there is a member variable that starts uninitialized, where the default for all class and struct types is null. If you access such a variable it would create a crash (null exception), but if you know you do initialize before accessing the value, or you know that you test for null before trying to use it, you're generally covered.
As you become more advanced, these warnings should reduce in volume. They're worth understanding and avoiding, and as you select options to analyze code or increase the warning level (an option to the compiler), the compiler starts to list a lot of warnings you may never have seen before. Full code analysis can even read to be insulting if you didn't realize it was generated by a computer program ;)
Thanks for this in depth answer!
To make it more clear: I basically have two scripts. ScriptA and ScriptB. ScriptA has some type of function let's say functionA.
In ScriptB I have a reference to ScriptA and for example in ScriptB I have an On$$anonymous$$ouseDown() function in which I am also calling functionA from ScriptA. In code it would look something like this.
using UnityEngine;
public class ScriptB : $$anonymous$$onoBehaviour
{
[SerializeField]
ScriptA SA;
void On$$anonymous$$ouseDown()
{
SA.functionA();
}
}
The error message of the original post is displayed next to the "ScriptA SA;" part. Like I said the script doesn't seem to be affected by this at all since functionA is called without any problems but I really would like to know why this warning appears and how to get rid of it.
Answer by dan_wipf
·
Sep 17, 2018 at 01:12 PM
Have you downloaded any extions for the visual studio? like c# completer. sometimes a restart helps aswell.
but the error means that you have properties which are never used.
Script Error
1
Answer
Visual studio says no issues found even though there are errors.
1
Answer
Visual Studio 2019 stopped working with Unity 2019.1.10f1
0
Answers
Error : GPGSUpgrader.cs(194,80): warning CS0162: Unreachable code detected
0
Answers
Visual Studio Code doesn't work for with unity
2
Answers
EnterpriseSocial Q&A | https://answers.unity.com/questions/1553951/visual-studio-error-for-referenced-scripts.html | CC-MAIN-2021-49 | refinedweb | 658 | 59.94 |
Dear reader, whats_really_hot wrote: > > This is the second post of mine. First, I would like to thank all u that > responded to first post. Here I repeat some of the questions of my first > post, because they were not properly answered or cause I didn't get straight > answers. > In the first post u answered me that in Unix based platforms "cat" command > starts the creation of a module. OK, now I go on with some more queries. > On Unix I'd prefer to use vi over cat. Others use other editors like emacs, or the editor in the Python development environment. But this subject is off topic here. > 1. When we type "dir()" in the interactive command line prompt get among all > the others the module "__name__" which really tell us the name of the module > that we are . But now we are in the interactive command line prompt, which > is a module with the name "__main__". Now, my question is, why isn't > __name__ = __main__? (Why isn't __name__ substituted by __main__? > This is simply a difference between interactive interpreter and the 'main python program' interpreter. The difference is not specified by the language, it is just that these environments are not completely the same: any python interpretation may start with some variables predefined. > Sorry for repeating this question but I didn't really get a clear answer. > Even the docs are bit terse about this. > 2. What is the difference between "!=" and "<>" (which really mean > "different")? > I think only their origins are different: != comes from C, <> comes from Pascal (who remembers Jensen and Wirth nowadays?). In python they are equivalent, but it is possible that one of them is more favoured. > 3. Are the built-in tools (such as "len", "sort" etc) maintained in a folder > into the Python directory, so as I can access and edit them? > They are implemented as functions in the implementation language, currently C or Java. You can access them by extracting the sources during python installation. In case you need the precise names of the equivalent functions, just ask here. > 4. What is the difference between "%d" and "%i" (which really point > integer values)? > Pass, my C usage is too far in the past. See the docs on the % operator. > 5. What does it mean that Python is a "high-level language" and other > languages, such as C, are low-level? > In python you don't need to declare variables and a variable can be set to a reference to a value of any type. When the parser does not detect syntax error in Python, everything is executed until an uncaught exception occurs. Execution of a declaration or import has the effect of adding name(s) to the namespace used by the interpreter. Other statements change the references/values of the names in the applicable namespace. > Sorry for the simplicity of some of the above questions, but are crucial for > me for the full understanding of Python. > > Thanks again for ur attention and immediate responds. My pleasure. > > Mail me: whats_really_hot at hotmail.com By request. Please see the newsgroup for more answers. Good luck, Ype -- email at xs4all.nl | https://mail.python.org/pipermail/python-list/2001-August/063957.html | CC-MAIN-2014-15 | refinedweb | 528 | 75 |
1. DPDK Coding Style
1.1. Description
This document specifies the preferred style for source files in the DPDK source tree. It is based on the Linux Kernel coding guidelines and the FreeBSD 7.2 Kernel Developer’s Manual (see man style(9)), but was heavily modified for the needs of the DPDK.
1.2. General Guidelines
The rules and guidelines given in this document cannot cover every situation, so the following general guidelines should be used as a fallback:
- The code style should be consistent within each individual file.
- In the case of creating new files, the style should be consistent within each file in a given directory or module.
- The primary reason for coding standards is to increase code readability and comprehensibility, therefore always use whatever option will make the code easiest to read.
Line length is recommended to be not more than 80 characters, including comments. [Tab stop size should be assumed to be 8-characters wide].
Note
The above is recommendation, and not a hard limit. However, it is expected that the recommendations should be followed in all but the rarest situations.
1.3. C Comment Style
1.3.1. Usual Comments
These comments should be used in normal cases. To document a public API, a doxygen-like format must be used: refer to Doxygen Guidelines.
/* * VERY important single-line comments look like this. */ /* Most single-line comments look like this. */ /* * Multi-line comments look like this. Make them real sentences. Fill * them so they look like real paragraphs. */
1.3.2. License Header
Each file should begin with a special comment containing the appropriate copyright and license for the file. Generally this is the BSD License, except for code for Linux Kernel modules. After any copyright header, a blank line should be left before any other contents, e.g. include statements in a C file.
1.4. C Preprocessor Directives
1.4.1. Header Includes
In DPDK sources, the include files should be ordered as following:
- libc includes (system includes first)
- DPDK EAL includes
- DPDK misc libraries includes
- application-specific includes
Include files from the local application directory are included using quotes, while includes from other paths are included using angle brackets: “<>”.
Example:
#include <stdio.h> #include <stdlib.h> #include <rte_eal.h> #include <rte_ring.h> #include <rte_mempool.h> #include "application.h"
1.4.2. Header File Guards
Headers should be protected against multiple inclusion with the usual:
#ifndef _FILE_H_ #define _FILE_H_ /* Code */ #endif /* _FILE_H_ */
1.4.3. Macros
Do not
#define or declare names except with the standard DPDK prefix:
RTE_.
This is to ensure there are no collisions with definitions in the application itself.
The names of “unsafe” macros (ones that have side effects), and the names of macros for manifest constants, are all in uppercase.
The expansions of expression-like macros are either a single token or have outer parentheses. If a macro is an inline expansion of a function, the function name is all in lowercase and the macro has the same name all in uppercase. If the macro encapsulates a compound statement, enclose it in a do-while loop, so that it can be used safely in if statements. Any final statement-terminating semicolon should be supplied by the macro invocation rather than the macro, to make parsing easier for pretty-printers and editors.
For example:
#define MACRO(x, y) do { \ variable = (x) + (y); \ (y) += 2; \ } while(0)
Note
Wherever possible, enums and inline functions should be preferred to macros, since they provide additional degrees of type-safety and can allow compilers to emit extra warnings about unsafe code.
1.4.4. Conditional Compilation
- When code is conditionally compiled using
#ifdefor
#if, a comment may be added following the matching
#endifor
#elseto permit the reader to easily discern where conditionally compiled code regions end.
- This comment should be used only for (subjectively) long regions, regions greater than 20 lines, or where a series of nested
#ifdef‘s may be confusing to the reader. Exceptions may be made for cases where code is conditionally not compiled for the purposes of lint(1), or other tools, even though the uncompiled region may be small.
- The comment should be separated from the
#endifor
#elseby a single space.
- For short conditionally compiled regions, a closing comment should not be used.
- The comment for
#endifshould match the expression used in the corresponding
#ifor
#ifdef.
- The comment for
#elseand
#elifshould match the inverse of the expression(s) used in the preceding
#ifand/or
#elifstatements.
- In the comments, the subexpression
defined(FOO)is abbreviated as “FOO”. For the purposes of comments,
#ifndef FOOis treated as
*/
Note
Conditional compilation should be used only when absolutely necessary, as it increases the number of target binaries that need to be built and tested.
1.5. C Types
1.5.1. Integers
For fixed/minimum-size integer values, the project uses the form uintXX_t (from stdint.h) instead of older BSD-style integer identifiers of the form u_intXX_t.
1.5.2. Enumerations
- Enumeration values are all uppercase.
enum enumtype { ONE, TWO } et;
- Enum types should be used in preference to macros #defining a set of (sequential) values.
- Enum types should be prefixed with
rte_and the elements by a suitable prefix [generally starting
RTE_<enum>_- where <enum> is a shortname for the enum type] to avoid namespace collisions.
1.5.3. Bitfields
The developer should group bitfields that are included in the same integer, as follows:
struct grehdr { uint16_t rec:3, srr:1, seq:1, key:1, routing:1, csum:1, version:3, reserved:4, ack:1; /* ... */ }
1.5.4. Variable Declarations.
For example:
int *x; /* no space after asterisk */ int * const x; /* space after asterisk when using a type qualifier */
- All externally-visible variables should have an
rte_prefix in the name to avoid namespace collisions.
- Do not use uppercase letters - either in the form of ALL_UPPERCASE, or CamelCase - in variable names. Lower-case letters and underscores only.
1.5.5. Structure Declarations
- In general, when declaring variables in new structures, declare them sorted by use, then by size (largest to smallest), and then in alphabetical order. Sorting by use means that commonly used variables are used together and that the structure layout makes logical sense. Ordering by size then ensures that as little padding is added to the structure as possible.
- For existing structures, additions to structures should be added to the end so for backward compatibility reasons.
- Each structure element gets its own line.
- Try to make the structure readable by aligning the member names using spaces as shown below.
- Names following extremely long types, which therefore cannot be easily aligned with the rest, should be separated by a single space.
struct foo { struct foo *next; /* List of active foo. */ struct mumble amumble; /* Comment for mumble. */ int bar; /* Try to align the comments. */ struct verylongtypename *baz; /* Won't fit with other members */ };
- Major structures should be declared at the top of the file in which they are used, or in separate header files if they are used in multiple source files.
- Use of the structures should be by separate variable declarations and those declarations must be extern if they are declared in a header file.
- Externally visible structure definitions should have the structure name prefixed by
rte_to avoid namespace collisions.
1.5.6. Queues
Use queue(3) macros rather than rolling your own lists, whenever possible. with other members */ }; LIST_HEAD(, foo) foohead; /* Head of global foo list. */
DPDK also provides an optimized way to store elements in lockless rings. This should be used in all data-path code, when there are several consumer and/or producers to avoid locking for concurrent access.
1.5.7. Typedefs
Avoid using typedefs for structure types.
For example, use:
struct my_struct_type { /* ... */ }; struct my_struct_type my_var;
rather than:
typedef struct my_struct_type { /* ... */ } my_struct_type; my_struct_type my_var
Typedefs are problematic because they do not properly hide their underlying type; for example, you need to know if the typedef is the structure itself, as shown above,.
Note that #defines used instead of typedefs also are problematic (since they do not propagate the pointer type correctly due to direct text replacement).
For example,
#define pint int * does not work as expected, while
typedef int *pint does work.
As stated when discussing macros, typedefs should be preferred to macros in cases like this.
When convention requires a typedef; make its name match the struct tag.
Avoid typedefs ending in
_t, except as specified in Standard C or by POSIX.
Note
It is recommended to use typedefs to define function pointer types, for reasons of code readability. This is especially true when the function type is used as a parameter to another function.
For example:
/** * Definition of a remote launch function. */ typedef int (lcore_function_t)(void *); /* launch a function of lcore_function_t type */ int rte_eal_remote_launch(lcore_function_t *f, void *arg, unsigned slave_id);
1.6. C Indentation
1.6.1. General
- Indentation is a hard tab, that is, a tab character, not a sequence of spaces,
Note
Global whitespace rule in DPDK, use tabs for indentation, spaces for alignment.
- Do not put any spaces before a tab for indentation.
- If you have to wrap a long statement, put the operator at the end of the line, and indent again.
- For control statements (if, while, etc.), continuation it is recommended that the next line be indented by two tabs, rather than one, to prevent confusion as to whether the second line of the control statement forms part of the statement body or not. Alternatively, the line continuation may use additional spaces to line up to an appropriately point on the preceding line, for example, to align to an opening brace.
Note
As with all style guidelines, code should match style already in use in an existing file.
while (really_long_variable_name_1 == really_long_variable_name_2 && var3 == var4){ /* confusing to read as */ x = y + z; /* control stmt body lines up with second line of */ a = b + c; /* control statement itself if single indent used */ } if (really_long_variable_name_1 == really_long_variable_name_2 && var3 == var4){ /* two tabs used */ x = y + z; /* statement body no longer lines up */ a = b + c; } z = a + really + long + statement + that + needs + two + lines + gets + indented + on + the + second + and + subsequent + lines;
- Do not add whitespace at the end of a line.
- Do not add whitespace or a blank line at the end of a file.
1.6.2. Control Statements and Loops
- Include a space after keywords (if, while, for, return, switch).
- Do not use braces (
{and
}) for control statements with zero or just a single statement, unless that statement is more than a single line in which case the braces are permitted.
for (p = buf; *p != '\0'; ++p) ; /* nothing */ for (;;) stmt; for (;;) { z = a + really + long + statement + that + needs + two + lines + gets + indented + on + the + second + and + subsequent + lines; } for (;;) { if (cond) stmt; } if (val != NULL) val = realloc(val, newsize);
- Parts of a for loop may be left empty.
for (; cnt < 15; cnt++) { stmt1; stmt2; }
- Closing and opening braces go on the same line as the else keyword.
- Braces that are not necessary should be left out.
if (test) stmt; else if (bar) { stmt; stmt; } else stmt;
1.6.3. Function Calls
- Do not use spaces after function names.
- Commas should have a space after them.
- No spaces after
(or
[or preceding the
]or
)characters.
error = function(a1, a2); if (error != 0) exit(error);
1.6.4. Operators
- Unary operators do not require spaces, binary operators do.
- Do not use parentheses unless they are required for precedence or unless the statement is confusing without them. However, remember that other people may be more easily confused than you.
1.6.5. Exit
Exits should be 0 on success, or 1 on failure.
exit(0); /* * Avoid obvious comments such as * "Exit 0 on success." */ }
1.6.6. Local Variables
- Variables should be declared at the start of a block of code rather than in the middle. The exception to this is when the variable is
constin which case the declaration must be at the point of first use/assignment.
- When declaring variables in functions, multiple variables per line are OK. However, if multiple declarations would cause the line to exceed a reasonable line length, begin a new set of declarations on the next line rather than using a line continuation.
- Be careful to not obfuscate the code by initializing variables in the declarations, only the last variable on a line should be initialized. If multiple variables are to be initialized when defined, put one per line.
- Do not use function calls in initializers, except for
constvariables.
int i = 0, j = 0, k = 0; /* bad, too many initializer */ char a = 0; /* OK, one variable per line with initializer */ char b = 0; float x, y = 0.0; /* OK, only last variable has initializer */
1.6.7. Casts and sizeof
- Casts and sizeof statements are not followed by a space.
- Always write sizeof statements with parenthesis. The redundant parenthesis rules do not apply to sizeof(var) instances.
1.7. C Function Definition, Declaration and Use
1.7.1. Prototypes
- It is recommended (and generally required by the compiler) that all non-static functions are prototyped somewhere.
- Functions local to one source module should be declared static, and should not be prototyped unless absolutely necessary.
- Functions used from other parts of code (external API) must be prototyped in the relevant include file.
- Function prototypes should be listed in a logical order, preferably alphabetical unless there is a compelling reason to use a different ordering.
- Functions that are used locally in more than one module go into a separate header file, for example, “extern.h”.
- Do not use the
__Pmacro.
- Functions that are part of an external API should be documented using Doxygen-like comments above declarations. See Doxygen Guidelines for details.
- Functions that are part of the external API must have an
rte_prefix on the function name.
- Do not use uppercase letters - either in the form of ALL_UPPERCASE, or CamelCase - in function names. Lower-case letters and underscores only.
- When prototyping functions, associate names with parameter types, for example:
void function1(int fd); /* good */ void function2(int); /* bad */
- Short function prototypes should be contained on a single line. Longer prototypes, e.g. those with many parameters, can be split across multiple lines. The second and subsequent lines should be further indented as for line statement continuations as described in the previous section.
static char *function1(int _arg, const char *_arg2, struct foo *_arg3, struct bar *_arg4, struct baz *_arg5); static void usage(void);
Note
Unlike function definitions, the function prototypes do not need to place the function return type on a separate line.
1.7.2. Definitions
- The function type should be on a line by itself preceding the function.
- The opening brace of the function body should be on a line by itself.
static char * function(int a1, int a2, float fl, int a4) {
- Do not declare functions inside other functions. ANSI C states that such declarations have file scope regardless of the nesting of the declaration. Hiding file declarations in what appears to be a local scope is undesirable and will elicit complaints from a good compiler.
- Old-style (K&R) function declaration should not be used, use ANSI function declarations instead as shown below.
- Long argument lists should be wrapped as described above in the function prototypes section.
/* * All major routines should have a comment briefly describing what * they do. The comment before the "main" routine should describe * what the program does. */ int main(int argc, char *argv[]) { char *ep; long num; int ch;
1.8. C Statement Style and Conventions
1.8.1. NULL Pointers
- NULL is the preferred null pointer constant. Use NULL instead of
(type *)0or
(type *)NULL, except where the compiler does not know the destination type e.g. for variadic args to a function.
- Test pointers against NULL, for example, use:
if (p == NULL) /* Good, compare pointer to NULL */ if (!p) /* Bad, using ! on pointer */
- Do not use ! for tests unless it is a boolean, for example, use:
if (*p == '\0') /* check character against (char)0 */
1.8.2. Return Value
- Functions which create objects, or allocate memory, should return pointer types, and NULL on error. The error type should be indicated may setting the variable
rte_errnoappropriately.
- Functions which work on bursts of packets, such as RX-like or TX-like functions, should return the number of packets handled.
- Other functions returning int should generally behave like system calls: returning 0 on success and -1 on error, setting
rte_errnoto indicate the specific type of error.
- Where already standard in a given library, the alternative error approach may be used where the negative value is not -1 but is instead
-errnoif relevant, for example,
-EINVAL. Note, however, to allow consistency across functions returning integer or pointer types, the previous approach is preferred for any new libraries.
- For functions where no error is possible, the function type should be
voidnot
int.
- Routines returning
void *should not have their return values cast to any pointer type. (Typecasting can prevent the compiler from warning about missing prototypes as any implicit definition of a function returns int, which, unlike
void *, needs a typecast to assign to a pointer variable.)
Note
The above rule about not typecasting
void * applies to malloc, as well as to DPDK functions.
- Values in return statements should not be enclosed in parentheses.
1.8.3. Logging and Errors
In the DPDK environment, use the logging interface provided:
/* register log types for this application */ int my_logtype1 = rte_log_register("myapp.log1"); int my_logtype2 = rte_log_register("myapp.log2"); /* set global log level to INFO */ rte_log_set_global_level(RTE_LOG_INFO); /* only display messages higher than NOTICE for log2 (default * is DEBUG) */ rte_log_set_level(my_logtype2, RTE_LOG_NOTICE); /* enable all PMD logs (whose identifier string starts with "pmd.") */ rte_log_set_level_pattern("pmd.*", RTE_LOG_DEBUG); /* log in debug level */ rte_log_set_global_level(RTE_LOG_DEBUG); RTE_LOG(DEBUG, my_logtype1, "this is a debug level message\n"); RTE_LOG(INFO, my_logtype1, "this is a info level message\n"); RTE_LOG(WARNING, my_logtype1, "this is a warning level message\n"); RTE_LOG(WARNING, my_logtype2, "this is a debug level message (not displayed)\n"); /* log in info level */ rte_log_set_global_level(RTE_LOG_INFO); RTE_LOG(DEBUG, my_logtype1, "debug level message (not displayed)\n");
1.8.4. Branch Prediction
- When a test is done in a critical zone (called often or in a data path) the code can use the
likely()and
unlikely()macros to indicate the expected, or preferred fast path. They are expanded as a compiler builtin and allow the developer to indicate if the branch is likely to be taken or not. Example:
#include <rte_branch_prediction.h> if (likely(x > 1)) do_stuff();
Note
The use of
likely() and
unlikely() should only be done in performance critical paths,
and only when there is a clearly preferred path, or a measured performance increase gained from doing so.
These macros should be avoided in non-performance-critical code.
1.8.5. Static Variables and Functions
- All functions and variables that are local to a file must be declared as
staticbecause it can often help the compiler to do some optimizations (such as, inlining the code).
- Functions that should be inlined should to be declared as
static inlineand can be defined in a .c or a .h file.
Note
Static functions defined in a header file must be declared as
static inline in order to prevent compiler warnings about the function being unused.
1.8.6. Const Attribute
The
const attribute should be used as often as possible when a variable is read-only.
1.8.7. Inline ASM in C code
The
asm and
volatile keywords do not have underscores. The AT&T syntax should be used.
Input and output operands should be named to avoid confusion, as shown in the following example:
asm volatile("outb %[val], %[port]" : : [port] "dN" (port), [val] "a" (val));
1.8.8. Control Statements
- Forever loops are done with for statements, not while statements.
- Elements in a switch statement that cascade should have a FALLTHROUGH comment. For example:
switch (ch) { /* Indent the switch. */ case 'a': /* Don't indent the case. */ aflag = 1; /* Indent case body one tab. */ /* FALLTHROUGH */ case 'b': bflag = 1; break; case '?': default: usage(); /* NOTREACHED */ }
1.9. Dynamic Logging
DPDK provides infrastructure to perform logging during runtime. This is very
useful for enabling debug output without recompilation. To enable or disable
logging of a particular topic, the
--log-level parameter can be provided
to EAL, which will change the log level. DPDK code can register topics,
which allows the user to adjust the log verbosity for that specific topic.
In general, the naming scheme is as follows:
type.section.name
- Type is the type of component, where
lib,
pmd,
busand
userare the common options.
- Section refers to a specific area, for example a poll-mode-driver for an ethernet device would use
pmd.net, while an eventdev PMD uses
pmd.event.
- The name identifies the individual item that the log applies to. The name section must align with the directory that the PMD code resides. See examples below for clarity.
Examples:
- The virtio network PMD in
drivers/net/virtiouses
pmd.net.virtio
- The eventdev software poll mode driver in
drivers/event/swuses
pmd.event.sw
- The octeontx mempool driver in
drivers/mempool/octeontxuses
pmd.mempool.octeontx
- The DPDK hash library in
lib/librte_hashuses
lib.hash
1.9.1. Specializations
In addition to the above logging topic, any PMD or library can further split logging output by using “specializations”. A specialization could be the difference between initialization code, and logs of events that occur at runtime.
An example could be the initialization log messages getting one specialization, while another specialization handles mailbox command logging. Each PMD, library or component can create as many specializations as required.
A specialization looks like this:
- Initialization output:
type.section.name.init
- PF/VF mailbox output:
type.section.name.mbox
A real world example is the i40e poll mode driver which exposes two
specializations, one for initialization
pmd.net.i40e.init and the other for
the remaining driver logs
pmd.net.i40e.driver.
Note that specializations have no formatting rules, but please follow
a precedent if one exists. In order to see all current log topics and
specializations, run the
app/test binary, and use the
dump_log_types
1.10. Python Code
All Python code should work with Python 2.7+ and 3.2+ and be compliant with PEP8 (Style Guide for Python Code).
The
pep8 tool can be used for testing compliance with the guidelines.
1.11. Integrating with the Build System
DPDK supports being built in two different ways:
- using
make- or more specifically “GNU make”, i.e.
gmakeon FreeBSD
- using the tools
mesonand
ninja
Any new library or driver to be integrated into DPDK should support being
built with both systems. While building using
make is a legacy approach, and
most build-system enhancements are being done using
meson and
ninja
there are no plans at this time to deprecate the legacy
make build system.
Therefore all new component additions should include both a
Makefile and a
meson.build file, and should be added to the component lists in both the
Makefile and
meson.build files in the relevant top-level directory:
either
lib directory or a
driver subdirectory.
1.11.1. Makefile Contents
The
Makefile for the component should be of the following format, where
<name> corresponds to the name of the library in question, e.g. hash,
lpm, etc. For drivers, the same format of Makefile is used.
# pull in basic DPDK definitions, including whether library is to be # built or not include $(RTE_SDK)/mk/rte.vars.mk # library name LIB = librte_<name>.a # any library cflags needed. Generally add "-O3 $(WERROR_FLAGS)" CFLAGS += -O3 CFLAGS += $(WERROR_FLAGS) # the symbol version information for the library EXPORT_MAP := rte_<name>_version.map # all source filenames are stored in SRCS-y SRCS-$(CONFIG_RTE_LIBRTE_<NAME>) += rte_<name>.c # install includes SYMLINK-$(CONFIG_RTE_LIBRTE_<NAME>)-include += rte_<name>.h # pull in rules to build the library include $(RTE_SDK)/mk/rte.lib.mk
1.11.2. Meson Build File Contents - Libraries
The
meson.build file for a new DPDK library should be of the following basic
format.
sources = files('file1.c', ...) headers = files('file1.h', ...)
This will build based on a number of conventions and assumptions within the DPDK itself, for example, that the library name is the same as the directory name in which the files are stored.
For a library
meson.build file, there are number of variables which can be
set, some mandatory, others optional. The mandatory fields are:
- sources
- Default Value = []. This variable should list out the files to be compiled up to create the library. Files must be specified using the meson
files()function.
The optional fields are:
- build
- Default Value = true Used to optionally compile a library, based on its dependencies or environment. When set to “false” the
reasonvalue, explained below, should also be set to explain to the user why the component is not being built. A simple example of use would be:
if not is_linux build = false reason = 'only supported on Linux' endif
- cflags
- Default Value = [<-march/-mcpu flags>]. Used to specify any additional cflags that need to be passed to compile the sources in the library.
- deps
- Default Value = [‘eal’]. Used to list the internal library dependencies of the library. It should be assigned to using
+=rather than overwriting using
=. The dependencies should be specified as strings, each one giving the name of a DPDK library, without the
librte_prefix. Dependencies are handled recursively, so specifying e.g.
mempool, will automatically also make the library depend upon the mempool library’s dependencies too -
ringand
eal. For libraries that only depend upon EAL, this variable may be omitted from the
meson.buildfile. For example:
deps += ['ethdev']
- ext_deps
- Default Value = []. Used to specify external dependencies of this library. They should be returned as dependency objects, as returned from the meson
dependency()or
find_library()functions. Before returning these, they should be checked to ensure the dependencies have been found, and, if not, the
buildvariable should be set to
false. For example:
my_dep = dependency('libX', required: 'false') if my_dep.found() ext_deps += my_dep else build = false endif
- headers
- Default Value = []. Used to return the list of header files for the library that should be installed to $PREFIX/include when
ninja installis run. As with source files, these should be specified using the meson
files()function.
- includes:
- Default Value = []. Used to indicate any additional header file paths which should be added to the header search path for other libs depending on this library. EAL uses this so that other libraries building against it can find the headers in subdirectories of the main EAL directory. The base directory of each library is always given in the include path, it does not need to be specified here.
- name
- Default Value = library name derived from the directory name. If a library’s .so or .a file differs from that given in the directory name, the name should be specified using this variable. In practice, since the convention is that for a library called
librte_xyz.so, the sources are stored in a directory
lib/librte_xyz, this value should never be needed for new libraries.
Note
The name value also provides the name used to find the function version
map file, as part of the build process, so if the directory name and
library names differ, the
version.map file should be named
consistently with the library, not the directory
- objs
- Default Value = []. This variable can be used to pass to the library build some pre-built objects that were compiled up as part of another target given in the included library
meson.buildfile.
- reason
- Default Value = ‘<unknown reason>’. This variable should be used when a library is not to be built i.e. when
buildis set to “false”, to specify the reason why a library will not be built. For missing dependencies this should be of the form
'missing dependency, "libname"'.
- use_function_versioning
- Default Value = false. Specifies if the library in question has ABI versioned functions. If it has, this value should be set to ensure that the C files are compiled twice with suitable parameters for each of shared or static library builds.
1.11.3. Meson Build File Contents - Drivers
For drivers, the values are largely the same as for libraries. The variables supported are:
- build
- As above.
- cflags
- As above.
- deps
- As above.
- ext_deps
- As above.
- includes
- Default Value = <driver directory> Some drivers include a base directory for additional source files and headers, so we have this variable to allow the headers from that base directory to be found when compiling driver sources. Should be appended to using
+=rather than overwritten using
=. The values appended should be meson include objects got using the
include_directories()function. For example:
includes += include_directories('base')
- name
- As above, though note that each driver class can define it’s own naming scheme for the resulting
.sofiles.
- objs
- As above, generally used for the contents of the
basedirectory.
- pkgconfig_extra_libs
- Default Value = [] This variable is used to pass additional library link flags through to the DPDK pkgconfig file generated, for example, to track any additional libraries that may need to be linked into the build - especially when using static libraries. Anything added here will be appended to the end of the
pkgconfig --libsoutput.
- reason
- As above.
- sources [mandatory]
- As above
- version
- As above | https://doc.dpdk.org/guides/contributing/coding_style.html | CC-MAIN-2020-29 | refinedweb | 4,865 | 56.15 |
This is a URI defined in the
This document describes the WSDL 2.0 HTTP Binding namespace. It also contains a directory of links to these related resources, using Resource Directory Description Language.
This URI always points to the latest schema (including errata) for the WSDL 2.0 HTTP Binding namespace. The resource at this location may change as new errata are incorporated.
This URI points to the schema for the WSDL 2.0 HTTP Binding namespace corresponding to the 2005-05-10 Web Services Description Language (WSDL) Version 2.0 Part 2: Adjuncts specification.
Comments on this document may be sent to the public public-ws-desc-comments@w3.org mailing list (public archive). | http://www.w3.org/2005/05/wsdl/http | crawl-002 | refinedweb | 115 | 53.37 |
The Python/XML community has an unfortunately long tradition of dodgy benchmarks. I had a lot to say about probably the most egregious example in my article on PyRXP. PyRXP is called an XML parser, and its developers benchmark it as such against other Python/XML parsers. The problem is that it turns out PyRXP is not an XML parser. It fails the most fundamental conformance to the most important aspect of XML: Unicode support. As a result, a benchmark of PyRXP against an XML parser is ludicrously unfair. In my article I had a lot to say about how poisonous such unfair benchmarks are.
On the less egregious end are benchmarks of libxml2's default Python binding, which is in many ways so gnomic (no pun intended) and trecherous that it's also an unfair comparison against most Pythonic XML tools. It sounds as if Martijn Faassen's lxml is making decent progress towards rectifying this.
But I must say that the benchmarks that were the last straw for me came from an old friend. Fredrik Lundh ("/F") is IMO one of the few XML package developers in the Python community who really understand both Python and XML. This has been generally borne out in his ElementTree library, about which I've always had a lot of good things to say. cElementTree came along and suddenly raised the Python/XML benchmark sweepstakes once again. As part of promotion of cElementTree, /F posted a benchmark on the home page. The benchmarks are very flattering to cElementTree, and it's probably deserving of some such flattery, but as I examined the performance issue a bit more, I've come to conclude that his benchmarks are pretty much useless.
The problem is that besides a performance bug in my own Amara 0.9.2, which /F brought to my notice, and that was fixed in the subsequent release, I was unable to reproduce under real-world conditions anything like the proportions implied in /F's benchmarks. Well, /F pretty much admits that all he's doing in his benchmark is reading in a file using each library. Hmm. This is not the stuff of which useful benchmarks are made. Nobody reads in a 3MB XML document just to throw all the data away, least of all Python developers who have long been vocal of their desire to do as little with XML as possible. Of course of I can't be 100% sure in this complaint because I haven't seen the benchmark code, but then again that's just another complaint.
I set out to run at least one real-world benchmark, in order to determine whether there is anything to the no-op benchmarks /F uses. The basics come from
this article, where I introduce the Old Testament test. The idea is simply to print all verses containing the word 'begat' Jon Bosak's Old Testament in XML, a 3.3MB document. A quick note on the characteristics of the file: it contains 23145 v elements containing each Bible verse and only text: no child elements. The
v elements and their content represent about 3.2 of the file's total 3.3MB. In the rest of this article I present the code and results.
I'm working on a Dell Inspiron 8600 notebook with 2GB RAM. It's a Centrino 1.7GHz, which is about equivalent to a P4-3GHz (modulo the equally wacky world of CPU benchmarks). The OS is Fedora Core 3 Linux, and I've tuned DMA and the like. I'm running Python 2.3.2. The following are my pystone results:
$ python /home/uogbuji/lib/lib/python2.3/test/pystone.py Pystone(1.1) time for 50000 passes = 2.99 This machine benchmarks at 16722.4 pystones/second
I ran each case 5 times and recorded the high and low run times, according to the UNIX
time command. In understand very well that this is not quite statistically thorough, but It's well ahead of all the other such benchmarks I've seen in terms of reproduceability (I present all my code) and usefulness (this is a real-world use-case for XML processing).
First up: plain old PySAX. Forget the performance characteristics for a moment: this code was just a pain in the arse to write.
from xml import sax class OtHandler(sax.ContentHandler): def __init__(self): #Yes, all this rigmarole *is* required, otherwise #you could miss The word "begat" split across #multiple SAX events self.verse = None return def startElementNS(self, (ns, local), qname, attrs): if local == u'v': self.verse = u'' return def endElementNS(self, name, qname): if (self.verse is not None and self.verse.find(u'begat') != -1): print self.verse self.verse = None return def characters(self, text): if self.verse is not None: #Yeah yeah, probably a tad faster to use the #''.join(fragment_list) trick, but not worth #the complication with these small verse chunks self.verse += text return handler = OtHandler() parser = sax.make_parser() parser.setContentHandler(handler) parser.setFeature(sax.handler.feature_namespaces, 1) parser.parse("ot.xml")
I get numbers ranging from 2.32 - 3.97 seconds.
Next up is PySAX using a filter to normalize text events, and thus simplify the SAX code a great deal. The filter,
amara.saxtools.normalize_text_filter is basically the one I
posted here, with some improvements. The code is much less painful than the PySAX example above, but it still demonstrates why SAX turns off people used to Python's simplicity.
from xml import sax from amara import saxtools class OtHandler(sax.ContentHandler): def characters(self, text): if text.find(u'begat') != -1: print text return handler = OtHandler() parser = sax.make_parser() normal_parser = saxtools.normalize_text_filter(parser) normal_parser.setContentHandler(handler) normal_parser.setFeature(sax.handler.feature_namespaces, 1) normal_parser.parse("ot.xml")
I get numbers ranging from 2.66 - 4.88 seconds.
Next up is Amara pushdom, which tries to combine some of the performance advantages of SAX with the (relative) ease of DOM.
from amara import domtools for docfrag in domtools.pushdom(u'v', source='ot.xml'): text = docfrag.childNodes[0].firstChild.data if text.find(u'begat') != -1: print text
I get numbers ranging from 5.83 - 7.11 seconds.
Next up is Amara pushbind, which tries to combine some of the performance advantages of SAX with the most Pythonic (and thus easy) API I can imagine.
from amara import binderytools for v in binderytools.pushbind(u'v', source='ot.xml'): text = unicode(v) if text.find(u'begat') != -1: print text
I get numbers ranging from 10.46 - 11.40 seconds.
Next up is Amara bindery chunker, which is the basis of pushbind.
from xml import sax from amara import binderytools def handle_chunk(docfrag): text = unicode(docfrag.v) if text.find(u'begat') != -1: print text xpatterns = 'v' handler = binderytools.saxbind_chunker(xpatterns=xpatterns, chunk_consumer=handle_chunk ) parser = sax.make_parser() parser.setContentHandler(handler) parser.setFeature(sax.handler.feature_namespaces, 1) parser.parse("ot.xml")
I get numbers ranging from 9.44 - 10.27 seconds.
Finally, I look at /F's cElementTree.
import cElementTree as ElementTree tree = ElementTree.parse("ot.xml") for v in tree.findall("//v"): text = v.text if text.find(u'begat') != -1: print text
I get numbers ranging from 1.53 - 3.18 seconds.
So what do I conclude from these numbers? As I've said before, the speed of cElementTree amazes, me, but it's advantage in the real world is nowhere near as dramatic as /F's benchmarks claim. More relevant to my own vanity, Amara 0.9.3's disadvantage in the real world is nowhere as dramatic as /F's benchmarks claim. IMHO, it's close enough in performance to all the other options, and offers so many advantages in areas besides performance, that it's a very respectable alternative to any Python/XML library out there.
But the point of this exercise goes far beyond all that. We really need to clean up our act in what is a very strange political battleground in the Python/XML space. If we've decided that MIPS wars are what we're going to be all about in development, then let's benchmark properly. Let's gather some real-world use-cases and normalized test conditions. Let's make sure all our benchmarks are transparent (at least release all the code used), and let's put some statistical rigor behind them (not an easy thing to do, and not something I claim to have done in this article). Let's do all this as a community.
While we're at it, I'd like to repeat my call for test case diversity from my PyRXP article: [R]un the tests on a variety of hardware and operating systems, and [don't] focus on a single XML file, but rather examine a variety of XML files. Numerous characteristics of XML files can affect parsing and processing speed, including:
And if we're not willing to do things rightly, let's stop deceiving users with meaningless benchmarks.
Uche Ogbuji is a Partner at Zepheira, LLC, a solutions firm specializing in the next generation of Web technologies.
oreillynet.com Copyright © 2006 O'Reilly Media, Inc. | http://www.oreillynet.com/lpt/wlg/6291 | crawl-002 | refinedweb | 1,532 | 66.13 |
Keeping servers from knowing your true identity
Marc is an Assistant Professor in the Computer Information Systems Department at Manhattan College. He can be contacted at marc.waldmanmanhattan.edu. Stefan is a Ph.D. student at the Institute for System Architecture, Dresden University of Technology, Germany. He can be contacted at sk13inf.tu-dresden.de.
The need for Internet-based anonymous communication has been well documented in the media. Unfortunately, the Internet currently lacks a deployed, general-purpose infrastructure that supports anonymous communication between distributed applications. With a little work, however, you can use existing web-based anonymity tools to support anonymous communication. In this article, we describe how to create a general-purpose request-reply anonymous communication channel by utilizing currently deployed web-based "anonymizing" tools. Channels such as these let clients and servers exchange messages in such a way that servers don't know the true identity of clients.
Admittedly, anonymous Internet-based communication is a controversial topic. As with a lot of software, there is always concern that individuals could abuse anonymity for malicious purposes. However, today's Internet-based applications make it relatively easy for third parties to track and catalog an individual's online activity, including web-browsing history, online purchases, news postings, and the like. Anonymizing tools provide individuals with some form of protection against these third parties. For more information about anonymizing tools and the need for them, see "Privacy-enhancing Technologies for the Internet," by Ian Goldberg, David Wagner, and Eric Brewer ( .berkeley.edu/~daw/papers/privacy- compcon97-www/privacy-html.html). In this article, we examine the anonymity provided by various web-based anonymizing tools, and show how you can interface with an existing web-based anonymizing tool to create an anonymous communication channel between distributed applications. While all source code is implemented in Java, the techniques we present can be used with any programming language that supports socket-based communication.
Web-Based Anonymity
Over the past few years, a number of tools have been developed that let you anonymously retrieve web pages. By anonymously we mean that the web server does not learn the true IP address of the web-browsing client. Roughly speaking, these tools can be classified as being either mix-net-based or proxy-based. Proxy-based designs are simpler than mix-net-based designs, but usually offer a weaker form of anonymity.
Figure 1 shows the typical architecture of proxy-based web anonymity tools such as Anonymizer () and Rewebber (). The web-browsing client sends its web-page request to the proxy server. The proxy server makes the web-page request on behalf of the client. The web page that results from this request is then sent back to the requesting client. The server only learns the proxy's IP address. Clearly, this is not a strong form of anonymity, as the proxy itself knows the client and the requested web page. This information could be logged or shared with third parties. Mix-net-based anonymity tools address this problem.
Figure 2 shows the typical architecture of mix-net-based web-anonymity tools. A mix-net consists of a collection of computers that store-and-forward encrypted messages. Each computer in the mix-net is called a "mix." The web-browsing client uses a layered encryption technique to construct a web-page request message, suitable for sending over the mix-net. This message is then sent to the first mix in the mix-net. When a mix receives an encrypted message, it decrypts the message. This decryption reveals only the name of the next mix that the decrypted message should be forwarded to. All other information contained in the message is unreadable due to the layered encryption. When the last mix in the mix-net receives a message, it also decrypts it. However, this decryption reveals the URL that the client wishes to retrieve. The last mix then contacts the appropriate web server to retrieve the document specified by the URL. The retrieved document is sent back to the client over the same mix-net path that the client request tookthe last mix sends the document to the next-to-last mix, and so on.
The anonymity that can be provided by mix-net-based tools is much stronger than that of proxy-based tools because of the encryption/decryption performed by the client and the mixes. The client encrypts the initial request in such a way that each mix can fully decrypt and interpret only part of the requestthe part of the request that specifies the next mix that the request should be forwarded to. Only the first mix in the mix-net learns the IP address of the client and only the last mix in the mix-net learns the URL that the client has requested. As the document makes its way back to the requesting client, each mix in the mix-net encrypts the document, hiding the content from the other mixes.
Interfacing with Web Anonymity Tools
Although the architecture of the proxy-based and mix-net-based anonymity tools are quite different, they do share an important design characteristicthey both carry HTTP-based messages. HTTP is the protocol that governs how web browsers and web servers talk to each other. All messages sent between browsers and servers must be formatted according to the HTTP specification. All of the web-based anonymity tools we just described speak the HTTP protocolthe tools expect HTTP-formatted messages. Therefore, if you want distributed applications to utilize these anonymity tools, your applications must be capable of sending/receiving HTTP-based messages.
Figure 3 shows the HTTP message a web browser might send to Amazon.com to retrieve the file "index.html." The first line specifies the type of action the browser wishes the server to perform. The word GET is called the HTTP "method." In the context of HTTP, a method is essentially the name of a command. This HTTP method tells the web server to send the file "index.html." The second and third line of Figure 3 specify the HTTP headers, which are essentially just a collection of name-value pairs that provide additional information to the recipient of the message. The HTTP header name appears to the left of the colon and the value appears to the right. The server responds with an HTTP-based message similar to that in Figure 4. Every HTTP response begins with a line, called the "status line," which specifies the result of the corresponding request. Requests can succeed or fail. A numeric code, called the "response code," specifies the result of the request. In Figure 4, the response code is 200. A set of numeric codes have been defined by the HTTP standard. For example, the response code 200 means that the request succeeded. Immediately following the response code is a text string, called the "reason phrase," that specifies (in natural language) what the response code means. In Figure 4, the reason phrase is "OK," meaning that the request succeeded. Immediately following the status line are the HTTP headers, which serve the same purpose as those in the HTTP request messagenamely to provide information to the recipient of the message.
Following the HTTP response headers is a blank linemore specifically it is a carriage return character followed by a linefeed character (CR/LF). This line marks the end of the HTTP headers and the beginning of the response body. The response body usually contains the item that the browser requested. In Figure 4, the response body contains the HTML code that is stored in the server's index.html file (the file requested in Figure 3). The size (in bytes) of the response body must be specified in the "Content-Length" HTTP header (Figure 4). The "Content-Type" HTTP header identifies the type of data being stored in the response body. This header essentially tells the browser how to interpret the data that is stored in the response body.
The POST method is similar to the GET method in that it lets browsers request content from servers. However, it is typically used when browsers need to send data, usually from an HTML form, to servers. The data to be sent to the server follows the request HTTP headers. This data is referred to as the "request body" and is separated from the HTTP headers by a blank line (more specifically, a CR/LF). The content of the request body is not limited to ASCII textany sequence of bytes can be sent in the request body. The size (in bytes) of the request body must be specified in the "Content-Length" HTTP header.
The HTTP specification defines a number of standard HTTP headers. However, the spec does not exclude the inclusion of additional headers that might be application specific. For example, you can develop a web-based application that requires new HTTP header values. Web servers, browsers, or other programs that read HTTP-based messages are supposed to ignore headers they don't understand. Ignored headers are simply meant to be passed to the next application that processes the HTTP-based message. As the HTTP message is sent over the Internet from one computer to another, it may gain or lose HTTP headersthe exact action taken depends on the headers and the applications that process them. A header that is unknown to all computers in this forwarding chain is simply left untouched as it moves to its destination. You can use this aspect of HTTP to transport application-specific messages over web-based anonymity tools.
To send messages over web-based anonymity tools, your applications need to send/receive HTTP-based messages. However, we are only interested in a small subset of HTTPnamely, the POST method and a few headers and response codes.
Assume you are developing a system that lets students anonymously send/receive reviews of college courses. Clearly, students would be more forthcoming in their reviews if they believed the reviews could be sent anonymously. The system consists of two components: the server that stores the reviews, and the client that can send reviews to the server and retrieve reviews from the server. You want the client to contact the server using a web-based anonymizing tool. Therefore, both the server and client must send/receive HTTP-based messages.
Figure 5 shows the HTTP that the client generates to store a review on the server. The client simply creates an HTTP-based POST message that contains the server's name () and listening port (4321) in the URL. A colon must separate the server name from the port. The client knows that the server is listening for connections on port 4321. We are not utilizing port 80the port traditionally utilized by web servers. By specifying the port in the URL, we can accommodate any number of applications running on the same server, each awaiting connections on a different port. (You may need to use port 80 if the other server ports are blocked by a firewall.)
The client generates three HTTP headers. The first two are standard HTTP headers that specify the size and type of the request body. The "Anon-Message-ID" header was created specifically for our application and is not a standard HTTP header. Its purpose is to identify the type of message we are sending. This would be unnecessary if the client could only send one type of message to the server, but that is rather limiting. The request body consists of a serialized version of the object that stored the course comments. How the comments are stored in the request body is immaterialthey could just as easily be sent as ASCII text. The "Content-Length" header specifies the size, in bytes, of the request body. Therefore, the size of the request body must be computed before the HTTP headers can be generated. The client sends this HTTP-based message to the appropriate web-based anonymizing tool. This is typically done by opening a socket connection to the tool and sending the message.
Once the client's message has been received by the server, via the anonymizing tool, the server parses the HTTP message and prepares a response. Figure 6 shows the server's HTTP-based response. The first line of the response indicates that the client's request was successfully processed (response code 200). The HTTP headers indicate the size and type of the response body. We once again utilize the "Anon-Message-ID" header to identify the type of message the server is sending back to the client. The response body holds a serialized version of an object that will be sent back to the client. For example, this object could hold a username and password that clients could later use to update their course review. The response body is optionalit is presented simply to show the HTTP necessary to return an object to clients. The anonymizing tool sends the entire response back to clients.
Implementation
The Java code we present interfaces with JAP, a freely available mix-based web anonymizing tool developed by Technische Universitt Dresden ( .inf.tu-dresden.de/index_en.html). Although we are using Java in our example implementation, you can use any programming language that supports sockets. Since our program will be sending/receiving HTTP-based messages, it should be relatively easy to alter the code so that it can be used with other web-based anonymizing tools.
JAP is a free mix-based anonymizing tool that lets you browse the Web anonymously by configuring your browser to use JAP as an HTTP web proxy. Setting a web proxy is a standard option on almost all web browsers. However, our programs will not be using JAP in this way. Instead, our Java program generates HTTP-based messages and sends these requests to JAP. JAP delivers these messages to recipients, using a mix network, and returns the recipient's reply.
Listing One is used to connect to JAP. This source code assumes that JAP is running on the client (localhost) and that JAP is listening for client connections on port 4000. The port number that JAP listens on can be set using JAP's user interface. The source code simply creates a socket connection to JAP.
Listing Two shows the HTTPMessageHeader class. This class contains two static methods that generate HTTP messages. The prepareRequestMessageHeader method generates the POST method and associated HTTP headers are used when a client is sending a message to a server. The terms "client" and "server" could easily be replaced by "sender" and "receiver," respectively; the methods described here are meant for peer-to-peer communication. The prepareReplyMessageHeader method is used to generate the status line and HTTP headers needed for a response message. As required by the HTTP specification, CR/LF characters are placed at the end of every line. The size (in bytes) of the request and reply body must be known at the time the headers are created. This is because the "Content-Length" HTTP header appears before the request and reply body.
Listing Three is the sendMessage method, which sends an HTTP-based message to the recipient via JAP. The code sends a serialized copy of an object of type SendObject to the recipient. First, a socket connection is opened to JAP. Next, the object to be sent to the recipient is serializedconverted to a series of bytes that can be used by the recipient to reconstruct the object. Once the object has been serialized, its size, in bytes, can be determined. The intended recipient of the message is "courseserver.edu." The port number must be appended to the server's namethe port number follows the colon. This host and port format is specified by the HTTP standard. Finally, the HTTP-based message is sent to JAP. JAP will read the message and format it for transport over the mix-net. The last mix in the mix-net sends the HTTP-based message to the intended recipient (courseserver.edu, port 1765).
The intended recipient of the message (courseserver.edu) must be listening for mix-net connections on port 1765 (based on our example code). Once the mix-net delivers the message to courseserver.edu, the HTTP-based message must be parsed to determine the message ID and the size of the enclosed object. These items can be found using Java's regular expression matching library. Given the size of the enclosed object, the recipient can read the serialized bytes and reconstruct the object. The recipient is then free to send a reply to the sender using the HTTPMessageHeader.prepareReplyMessageHeader method (in Listing Two). An example Java program that utilizes these methods is available electronically from DDJ (see "Resource Center, page 3) and at ~marc.waldman/DDJ_JavaAnon.zip.
DDJ
SocketChannel japConnection; try{ japConnection=SocketChannel.open(); japConnection.connect(new InetSocketAddress("localhost",4000)); } catch(Exception e){ System.err.println("Unable to connect to JAP - exiting"); System.exit(1); }Back to article
Listing Two
public class HTTPMessageHeader{ private static final String CONTENT_LENGTH="Content-Length: "; private static final String MESSAGE_ID="Anon-Message-ID: "; private static final String CRLF="\r\n"; private static final String OCTET_CONTENT="Content-Type: octet/stream"; private static final String HTTP_OK_RESPONSE="HTTP/1.0 200 OK"; public static ByteBuffer prepareRequestMessageHeader(String recipient, int messageID, int contentLength){ StringBuffer sb=new StringBuffer("POST http://"); sb.append(recipient); // recipient string should include a colon and port number sb.append(" HTTP/1.0"+CRLF); sb.append(CONTENT_LENGTH+contentLength+CRLF); sb.append(OCTET_CONTENT+CRLF); sb.append(MESSAGE_ID+messageID); sb.append(CRLF+CRLF); try{ return ByteBuffer.wrap((sb.toString()).getBytes("US-ASCII")); } catch(Exception e){ return null; } } public static ByteBuffer prepareReplyMessageHeader(int messageID, int contentLength){ StringBuffer sb=new StringBuffer(HTTP_OK_RESPONSE+CRLF); sb.append(OCTET_CONTENT+CRLF); sb.append(MESSAGE_ID); sb.append(messageID+CRLF); sb.append(CONTENT_LENGTH); sb.append(contentLength); sb.append(CRLF+CRLF); try{ return ByteBuffer.wrap((sb.toString()).getBytes("US-ASCII")); } catch(UnsupportedEncodingException e){ return null; } } }Back to article
Listing Three
public void sendMessage(Object sendObject) throws Exception{ String recipient="courseserver.edu:1765"; SocketChannel japConnection=null; int messageID=1; // connect to JAP try{ japConnection=SocketChannel.open(); japConnection.connect(new InetSocketAddress("localhost",4000)); } catch(Exception e){ System.err.println("Unable to connect to JAP - exiting"); System.exit(1); } // serialize sendObject ByteArrayOutputStream baos=new ByteArrayOutputStream(); ObjectOutputStream oos=new ObjectOutputStream(baos); oos.writeObject(sendObject); oos.close(); ByteBuffer bb=HTTPMessageHeader.prepareRequestMessageHeader(recipient, messageID, baos.size()); if (bb==null){ System.err.println("Unable to create HTTP Header"); System.exit(1); } ByteBuffer objectBB=ByteBuffer.wrap(baos.toByteArray()); ByteBuffer bbArray[]=new ByteBuffer[2]; bbArray[0]=bb; bbArray[1]=objectBB; // send the header and object to JAP japConnection.write(bbArray); }Back to article | http://www.drdobbs.com/web-development/http-based-anonymous-communication-chann/184405682 | CC-MAIN-2016-22 | refinedweb | 3,105 | 56.05 |
Self Join
- Apr 14 • 6 min read
- Key Terms: self join, pandas merge, python, pandas
In SQL, a popular type of join is a self join which joins a table to itself. This is helpful for comparing rows to one another, based on their values in columns, in a single table.
In this article, I'll walk through two examples in which self joins can be helpful.
Import Modules
import pandas as pd
Example 1: Basic Real Estate Transactions
Create Dataset
I'll create a small dataset of 5 real estate transactions that include a unique transaction id for each purchase, a close date for each sale, the buyer's name and seller's name.
Notice how Julia was the buyer for transaction id
1 and later a seller for transaction id
2.
data = {'transaction_id': [1, 2, 3, 4, 5], 'close_date': ["2012-08-01", "2012-08-02", "2012-08-03", "2012-08-04", "2012-08-04"], 'buyer_name': ["Julia", "Joe", "Jake", "Jamie", "Jackie"], 'seller_name': ["Lara", "Julia", "Barbara", "Emily", "Mason"] } df = pd.DataFrame(data)
View entire
df.
df
Find People Who Were Both Buyers and Sellers
Often times, people buy homes and then later sell that homes. In this dataset, I'm curious, which people both bought and sold a home? We noticed earlier Julia bought a home and later sold one so Julia's name should be the only result.
One method of finding a solution is to do a self join. In pandas, the DataFrame object has a
merge() method. Below, for
df, for the
merge method, I'll set the following arguments:
right=dfso that the first
dflisted in the statement merges with another DataFrame,
df
left_on='buyer_name'is the column to join from the left
df
right_on='seller_nameis the column to join from the right
df
By default, these arguments are also set in the
merge method:
how='inner'so returned results only show records in which the left
dfhas a value in
buyer_nameequivalent to the right
dfwith a value of
seller_name.
suffixes=(‘_x’, ‘_y’)so
_xis appended to the end of column names from our left
dfif those column names originally match the right
df.
_yis appended to the end of column names from our right
dfif those column names originally match the left
df.
df2 = df.merge(right=df, left_on='buyer_name', right_on='seller_name') df2df2 = df.merge(right=df, left_on='buyer_name', right_on='seller_name') df2
Our output of
df2 shows in a single record, the details of Julia who bought a home and sold a home.
We can find all unique values in the
buyer_name_x field to programmatically arrive at our result.
df2['buyer_name_x'].unique()
array(['Julia'], dtype=object)
Example 2: Intermediate Real Estate Transactions
Append New Row to Dataset
Below, I create a new row for another real estate transaction in which Julia buys a 2nd home.
df.loc[5] = [6, "2012-08-05", "Julia", "Mary"]
View new
df with additional row.
df
Find People Who Are Both Buyers and Sellers
This is the same ask as with Example 1. However, our dataset is slightly different so a self join will return different results.
I'll use the same code to perform a self join but assign the output to
df instead.
df3 = df.merge(right=df, left_on='buyer_name', right_on='seller_name') df3
There are two records!
The first record indicates Julia's purchase for
transaction_id of
1 and later a sale with
transaction_id of
2.
The second record indicates Julia's purchase for
transaction_id of
6 and later a sale with
transaction_id of
2.
This is the correct output as I wanted all rows of
df to be joined with
df in which a
buyer_name from the left
df is equivalent to a
seller_name from the right
df.
I can find all unique values of the
buyer_name_x field to programmatically arrive at our result.
df3['buyer_name_x'].unique()
array(['Julia'], dtype=object) | https://dfrieds.com/data-analysis/self-join-python-pandas | CC-MAIN-2019-26 | refinedweb | 644 | 59.43 |
Solving the classic “aaaabbbcca” coding challenge
I’ve seen many versions of this problem in coding interviews, so I decided to solve it here, just for fun.
The problem goes like this:
Input: “aaaabbbcca”
Output: [(“a”, 4), (“b”, 3), (“c”, 2), (“a”, 1)]Write a function that converts the input to the output
One important thing to note here is that the input can have the same letter in different groups, making the solution a little bit tricky if, for example, you try to use dictionaries.
Here is my proposed solution:
def count_in_a_row(word)
counter = 1
result = Hash.new.compare_by_identity
for i in (1..word.length)
if word[i] == word[i-1]
counter += 1
else
result[word[i-1]] = counter
counter = 1
end
end
result
end
Pay attention to the line with
Hash.new.compare_by_identity, that gives us the ability to use a
hash with duplicated keys.
You can read the algorithm like this:
# Iterate the letters in the word, starting from the second one# # If the current letter is equal to the previous one, increase the counter# # If the current letter is different to the previous one,
# # save the key (the letter) with the value (the counter) in the hash,
# # and then reset the counter.# Return the hash
But calling that function, the output would look a little bit different than required.
puts count_in_a_row("aaaabbbcca")Output: {"a"=>4, "b"=>3, "c"=>2, "a"=>1}
Expected: [("a", 4), ("b", 3), ("c", 2), ("a", 1)]
I would ask the interviewer if she is ok with the output or if the format is a hard requirement, and she wants me to implement it. In that case, I would do something like this:
def format_output(raw_output)
formatted_output = "["
raw_output.each do |key, value|
formatted_output += "(\"" + key + "\", " + value.to_s + "), "
end
formatted_output.delete_suffix!(", ")
formatted_output += "]"
endputs format_output(count_in_a_row("aaaabbbcca"))Output: [("a", 4), ("b", 3), ("c", 2), ("a", 1)]
Expected: [("a", 4), ("b", 3), ("c", 2), ("a", 1)]
You can find the source code of this post here. | https://gepser.medium.com/solving-the-classic-aaaabbbcca-coding-challenge-d025b31b9037?source=user_profile---------1------------------------------- | CC-MAIN-2022-05 | refinedweb | 329 | 56.39 |
Laura3 morph for Aiko3?
Has anyone done this? Created a morph to set Aiko3's head to the same physiognomy as Laura3, to then make her workable with some of the character morphs available for Laura (or just use Laura's head as-is)? I like Laura's head much better than the 'realistic' Aiko head, but the current option is to try to parent Laura's body to Aiko, usually at the neck, and make all of her invisible below the neck, while making Aiko's head and eyes invisible, but the necks don't match up.
I've tried exporting Laura's head as a morph target, but because of the difference in their heights, the head descends down into her chest. The vertices are compatible - they're just wayyyy too offset to use.
Any suggestions? Any successes out there that someone has shared already? Thanks! (BTW, if this has been accomplished with V3, instead, that would be a useable option, too)
There used to be a product called shape shifter that would let you do this with the Gen3 characters.
If you can't find it you can do this (Poser instructions, the same should be do-able in DS but I don't know how)
Export Laura's head as a wavefront object.
Load Aiko
Select Aiko's head
import the laura head obj as a morph (The old style morph)
If the head is too far out of place, move it with a big magnet.
If there is too much distortion in the neck, you can try repeating the steps with the neck or you can play around with magnets to adjust it
Or just hide it with a collar.
Laura 3 is really based off of David 3, so you'd be better off using that to convert.
I'm not really sure what the relevance of using David 3 would be, since the goal is to create a more realistic head for Aiko.
I've checked out the ShapeShifter thread here on DAZ, and went to PhilC's site, but haven't located the script, yet. In the meantime, I did pretty much exactly as pwiecek described: after a couple of abortive attempts at exporting Laura's head as a morph and getting some really unusable results (the difference in their heights means Laura's head ends up in Aiko's chest, even with exporting it as a morph target only), I loaded Laura along with Aiko and lined up their heads in the scene as closely as possible, then exported Laura's head as a regular OBJ, and loaded that as a morph target. Since I wanted to use Aiko's realistic body, and I don't see any "realistic" morph control for Aiko's head alone, I set Aiko to realistic, then exported her head as a morph target. I also had to unhide the translation controls for Aiko's head, so I could adjust its location once the Laura morph was loaded, as there was some offset in both the Y- and Z-axes. I loaded the 'Aiko realistic' head morph, and after dialing the overall 'realistic' to 1.0, I dialed her head back to -1.0, and set the Laura head morph to 1.0. It looks pretty good, although Aiko's neck is still much too slender for Laura's head; I had to use a lot of magnetic manipulation to get it closer - it's still not perfect, but with hair, it's fairly unnoticeable..
Ahhhh, okay.
However, I've already used a couple of Laura morph sets on the 'Laiko' head and haven't seen any more problems than I would have expected, given that underneath, Aiko is is lurking ;). These were character morphs, though - I haven't tried loading Laura's standard features or expression controls.
Thanks for the info!
Damn! Not bad! Now if only that could be done with Genesis, or can Gen X do that after the shagbag presented here?.
There are only two gen 3 groupings - one for Michael 3 and the Freak, and one for all the other figures.
Someone wrote a tutorial on how to put one character's head onto another character's body. I found a reference link, but the actual daz tutoirail seems to be gone.
Here's the reference link.
And here's teh 'dead' daz link
If I remember, the process involved loading the character whose head you wanted to use (in this case Laura), zero the body pose, hide everything except the head, then save the visible head as a scene. Open a new empty scene, and load the character for the body you want to use (Aiko). Zero the body pose, then hide the head. Use file merge to bring the 'head only' file into the body only file.
Position the head onto the body, then use the scene tab to parent the head to the neck.
If it's just a matter of the head morphs, exporting Laura's head (in position for A3's head) should work. I've gotten Capsces' "Hitoro" morphs to work on Luke (but I may have transferred them from the D3 cr2, so you'd have to test it).
WOOHOO!!
The mighty Wayback Machine comes through again
Unfortunately, no pictures
Shapeshifter is a python based script. It will not work in DS at all and has issues with newer versions of Python. | http://www.daz3d.com/forums/viewreply/293732/ | CC-MAIN-2017-43 | refinedweb | 908 | 73.92 |
Hey guys,
Maybe simple question but I dont find any solution right now...
I have a number counting up from 0 to 86400.
When I'm pressing a key, I want to get the CURRENT number to be saved into a new variable. This new variable has to be static (not counting).
But the Basic counting have to go on.
Hope you know what I mean and can help me =)
To get a key to trigger something you need to make use of a KeyboardEvent listener
stage.addEventListener(KeyboardEvent.KEY_UP, assignValue);
function checkText(event:KeyboardEvent):void {
yourVar = currentCount;
}
getting the key pressing work is not the problem, my problem is: How to get this Number.
The Number is counting up, and anytime I press the key i get the value in a new variable
Maybe you should show your code then, because if you do not know how to assign a new variable but can deal with the keyboard end of things, there is a disconnect with what your ability appears to be and what you say it is.
Ok here is my code:
import flash.display.Sprite;
import flash.events.TimerEvent;
import flash.utils.Timer;
import com.greensock.*;
import com.greensock.easing.*;
////////////////////////DAYTIME IN SECONDS
var today:Date;
var current:Date;
var time:Date;
var millisecond:int;
var second:int;
var minute:int;
var hour:int;
var day:int;
var month:int;
var secs:int;
var timer:Timer = new Timer(500);
timer.addEventListener(TimerEvent.TIMER, timerHandler);
timer.start();
function timerHandler(event:TimerEvent):void
{
time = new Date();
millisecond = time.getMilliseconds();
second = time.getSeconds();
minute = time.getMinutes();
hour = time.getHours();
day = time.getDate();
month = time.getMonth();
today = new Date(2012,5,28,0,0,0,0);
current = new Date(2012,month+1,day,hour,minute,second,millisecond);
secs = (current.getTime() - today.getTime())/1000;
trace(secs); //"secs" is the current daytime in seconds
stage.addEventListener(KeyboardEvent.KEY_DOWN, HerdOn);
////////////////////////EVENT
function HerdOn(event:KeyboardEvent):void
{
var xPos:Number = -411.10;
var yPos:Number = -8.05;
var pixWidth:int = 0;
if (event.charCode == 49) // key "1"
{
trace ("start");
//So, when I have pressed the key "1", here I want to have a new Variable which gives me a static number
//of "secs"
currentCount
trace ("One is pressed");
bob_mc.x = xPos;
bob_mc.y = yPos;
bob_mc.width = pixWidth;
TweenLite.to(bob_mc, 15.25, {x:xPos, y:yPos, scaleX:2, ease:Linear.easeNone});
}
if (event.charCode == 50) //key "2"
{
trace("stop");
TweenLite.to(bob_mc, 15.25, {paused:true, ease:Linear.easeNone});
}
}
}
I answered your question yeaterday regarding how to get the seconds (you should mark that posting as answered). You should clean up that code so that it is not doing things that you don't need done - which appears to be most of the date functions you run.
Also, you should never nest function within other functions. So you should separate the HerdOn function from being inside the timeHandler function.
If you want another variable, then outside any function declare a new variable, and then assign the secs value to it inside your keyboard handler function.
Ok I understand, maybe this is my big problem, so
How do I call a function or a variable in another function, even the variable is local. I thought it only works with global variables.
For me it's a bit cunfusing, why I put function into function.
Now, when I seperate the function, where I calculate the daytime in seconds, I have to call it again when I press my key, right?
How? (sorry for my inexperience xD)
I think you could use a Global Array.
var laps:Array=new Array(); //out of all functions
laps.push(value); // inside your "HerdOn"
Each time you press a correct key is that mark would add to the array, you would have ordered marks and delete would use:
laps=new Array();
Declare your variable outside any functions so that it is available outside of any function if you need it to be, juist like the rest of the vars you declared. If you need to store different values at different times and want to retain them all, use an array to store them in.
Thanks for your help^^ Works fine =)
you're welcome | http://forums.adobe.com/message/4444693 | CC-MAIN-2014-15 | refinedweb | 701 | 66.94 |
Hubble Flash<<
September 10, 2009 at 4:00 pm
My goodness! It used to be just a plain blue sky from my end. Now I can see beyond! Amazing! Whoever painted these wonderful things you see up there must have known perfection! Awesome! It is sooooooooo beautiful!
Thank you for making me feel good by seeing all these.
September 14, 2009 at 5:08 pm
[…] As a very quick postscript to my previous post about the amazing performance of Hubble’s spanking new camera, let me just draw attention to a […]
September 14, 2009 at 10:03 pm
The refurbishment of the Hubble Space Telescope indeed appears to have been a great success. This is a triumph for NASA and for science.
Of course, the HST has been a magnificent success over most of the 19 years since its launch. This is down to a number of factors. One of most important has been the refurbishment missions, which have seen the telescope repaired and upgraded, and instruments replaced. The telescope has therefore operated in a manner rather similar to conventional ground-based observatories, rather than the having the short lives of most space observatories. The scientific results have been magnificient and in research fields ranging from planets within our Solar System to observational cosmology.
I have been rather sceptical of the benefits of manned space activities. The International Space Station – the main objective of human spaceflight for the past decade – has been extremely expensive to build and maintain, while the science carried out onboard does not strike me as of being of critical importance. In contrast, the HST refurbishment missions by the space shuttle have had profoundly important research results: they have been by far the most significant accomplishments of manned spaceflight over the past twenty years. They were magnificent triumphs in science, engineering and technology. Science was the primary motivation behind the refurbishment missions, not adventure or political considerations, and the consequences were, and will be, profound.
June 16, 2012 at 6:21 am
Blessed is He in whose hand is the dominion, and He is over all things competent.
(He) who created death and life to test you (as to) which of you is best in deed – and He is the Exalted in Might, the Forgiving –
And who created seven heavens in layers (one covering or fitting over the other). You do not see in the creation of the Merciful any inconsistency. So return (your) vision (to the sky) do you see any breaks?
Then return your vision twice again (repeayedly). Your vision will return to you humbled while it is fatigued.
And We have certainly beautified the nearest heavens with lamps (i.e., stars) and have made from them what is thrown at the devils (driving them from the heavens and preventing them eavesdrooping). and have prevented them the punishment from the Blaze.
Quran 67:15 | https://telescoper.wordpress.com/2009/09/09/hubble-flash/ | CC-MAIN-2015-32 | refinedweb | 479 | 62.27 |
NAME
epoll_ctl - control interface for an epoll descriptor
SYNOPSIS
#include <sys/epoll.h> int epoll_ctl(int epfd, int op, int fd, struct epoll_event *event);
DESCRIPTION
Control an epoll descriptor, epfd, by requesting that the operation op be performed on the target file descriptor, fd. The event set composed using the following available event types: EPOLLIN The associated file is available for read(2) operations. EPOLLOUT The associated file is available for write(2) operations. EPOLLRDHUP (since kernel 2.6.17) Stream socket peer closed connection, or shut down writing half of connection. (This flag is especially useful for writing. EPOLLET Sets the Edge Triggered behavior for the associated file descriptor. The default behavior for epoll is Level Triggered. See epoll(7) for more detailed information about Edge and Level Triggered event distribution architectures. EPOLLONESHOT (since kernel in epfd. EINVAL epfd is not an epoll file descriptor, or fd is the same as epfd, or the requested operation op is not supported by this interface. ENOENT op was EPOLL_CTL_MOD or EPOLL_CTL_DEL, and fd is not in epfd. ENOMEM There was insufficient memory to handle the requested op control operation. EPERM The target file fd does not support epoll.
CONFORMING TO
epoll_ctl() is Linux-specific, and was introduced in kernel 2.5.44.), epoll_wait(2), poll(2), epoll(7)
COLOPHON
This page is part of release 2.77 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at. | http://manpages.ubuntu.com/manpages/hardy/en/man2/epoll_ctl.2.html | CC-MAIN-2015-48 | refinedweb | 246 | 59.19 |
This is a C++ program to generate a graph for a given fixed degree sequence.This algorithm generates a undirected graph for the given degree sequence.It does not include self-edge and multiple edges.
Examples:
Input : degrees[] = {2, 2, 1, 1} Output : (0) (1) (2) (3) (0) 0 1 1 0 (1) 1 0 0 1 (2) 1 0 0 0 (3) 0 1 0 0 Explanation : We are given that there are four vertices with degree of vertex 0 as 2, degree of vertex 1 as 2, degree of vertex 2 as 1 and degree of vertex 3 as 1. Following is graph that follows given conditions. (0)----------(1) | | | | | | (2) (3)
Approach :
1- Take the input of the number of vertexes and their corresponding degree.
2- Declare adjacency matrix, mat[ ][ ] to store the graph.
3- To create the graph, create the first loop to connect each vertex ‘i’.
4- Second nested loop to connect the vertex ‘i’ to the every valid vertex ‘j’, next to it.
5- If the degree of vertex ‘i’ and ‘j’ are more than zero then connect them.
6- Print the adjacency matrix.
Based on the above explanation, below are implementations:
C++
Python3
# Python3 program to generate a graph
# for a given fixed degrees
# A function to prthe adjacency matrix.
def printMat(degseq, n):
# n is number of vertices
mat = [[0] * n for i in range(n)]
for i in range(n):
for j in range(i + 1, n):
# For each pair of vertex decrement
# the degree of both vertex.
if (degseq[i] > 0 and degseq[j] > 0):
degseq[i] -= 1
degseq[j] -= 1
mat[i][j] = 1
mat[j][i] = 1
# Prthe result in specified form
print(” “, end = ” “)
for i in range(n):
print(” “, “(“, i, “)”, end = “”)
print()
print()
for i in range(n):
print(” “, “(“, i, “)”, end = “”)
for j in range(n):
print(” “, mat[i][j], end = “”)
print()
# Driver Code
if __name__ == ‘__main__’:
degseq = [2, 2, 1, 1, 1]
n = len(degseq)
printMat(degseq, n)
# This code is contributed by PranchalK
Output:
(0) (1) (2) (3) (4) (0) 0 1 1 0 0 (1) 1 0 0 1 0 (2) 1 0 0 0 0 (3) 0 1 0 0 0 (4) 0 0 0 0 0
Time Complexity: O(v*v).
1 Comments | https://tutorialspoint.dev/data-structure/graph-data-structure/construct-graph-given-degrees-vertices | CC-MAIN-2021-17 | refinedweb | 380 | 71.07 |
In some programming languages, the main function is where a program starts execution.
It is generally the first user-written function run when a program starts (some system-specific software generally runs before the main function), though some languages (notably C++ with global objects that have constructors) can execute user-written functions before main runs. The main function usually organizes at a high level the functionality of the rest of the program. The main function typically has access to the command arguments given to the program at the command-line interface.
Variants
C and C++
In C and C++, the function prototype of the main function looks like one of the following:
int main(void) int main(int argc, char *argv[])
The parameters
argc, argument count, and
argv, argument vector[1], respectively give the number and value of the program's command-line arguments. The names of
argc and
argv may be any valid identifier, but it is common convention to use these names. Other platform-dependent formats are also allowed by the C and C++ standards; for example, Unix (though not POSIX.1) and Microsoft Visual C++:[2]
int main(int argc, char *argv[], char *envp[], char *apple[])
The value returned from the main function becomes the exit status of the process, though the C standard only ascribes specific meaning to two values:
EXIT_SUCCESS (traditionally zero) and
EXIT_FAILURE. The meaning of other possible return values is implementation-defined.
By convention, the command-line arguments specified by
argc and
argv include the name of the program as the first element; if a user types a command of "
rm file", the shell will initialise the
rm process with
argc = 2 and
argv = ["rm", "file"]. As
argv[0] is the name that processes appear under in
ps,
top etc., some programs, such as daemons or those running within an interpreter or virtual machine (where
argv[0] would be the name of the host executable), may choose to alter their argv to give a more descriptive
argv[0], usually by means of the
exec system call.
The name
main is special; normally every C and C++ program must define exactly one function with that name.
main must be declared as if it has external linkage; it cannot be declared
static.
In C++,
main must be in the global namespace (i.e.
::main) and cannot be a (class or instance) member function.
Clean
Clean is a functional programming language based on graph rewriting. The initial node is called
Start and is of type
*World -> *World if it changes the world or some fixed type if the program only prints the result after reducing
Start.
Start :: *World -> *World Start world = startIO ...
Or even simpler
Start :: String Start = "Hello, world!"
One tells the compiler which option to use to generate the executable file.
C#
When executing a program written in C#, the CLR searches for a static method marked with the
.entrypoint IL directive, which takes either no arguments, or a single argument of type
string[], and has a return type of
void or
int, and executes it.[3]
static void Main(); static void Main(string[] args); static int Main(); static int Main(string[] args);
Command-line arguments are passed in
args, similar to how it is done in Java. For versions of
Main returning an integer, similar to both C and C++, it is passed back to the environment as the exit status of the process.
GNAT
Using GNAT, the programmer is not required to write a function called
main; a source file containing a single subprogram can be compiled to an executable. The binder will however create a package
ada_main, which will contain and export a C-style main function.
Haskell
A Haskell program must contain a name called
main bound to a value of type
IO t, for some type
t[4]; which is usually
IO ().
IO is a monad, which organizes side-effects in terms of purely functional code.[5] The
main value represents the side-effects-ful computation done by the program. The result of the computation represented by
main is discarded; that is why
main usually has type
IO (), which indicates that the type of the result of the computation is
(), the unit type, which contains no information.
Command line arguments are not given to
main; they must be fetched using another IO action, such as
System.Environment.getArgs.
Java
Java programs start executing at the main method, which has the following method heading:
public static void main(String[] args) public static void main(String... args)
Command-line arguments are passed in
args. As in C and C++, the name "
main" is special. Java's main methods do not return a value directly, but one can be passed by using the
System.exit() function.
Unlike C, the name of the program is not included in
args, because the name of the program is exactly the name of the class that contains the main method called, so it is already known.
Pascal
In Pascal, the main procedure is the only unnamed procedure in the program. Because Pascal programs have the procedures and functions in a more rigorous top-down order than C, C++ or Java programs, the main procedure is usually the last procedure in the program. Pascal does not have a special meaning for the name "
main" or any similar name.
program Hello; procedure HelloWorld; begin writeln('Hello, world!'); end; begin HelloWorld; end.
Command-line arguments are counted in
ParamCount and accessible as strings by
ParamStr(n), with n between 0 and
ParamCount.
Perl
In Perl, there is no main function. Statements are executed from top to bottom.
Command-line arguments are available in the special array
@ARGV. Unlike C,
@ARGV does not contain the name of the program, which is
$0.
Pike
In Pike syntax is similar to that of C and C++. The execution begins at
main. The "
argc" variable keeps the number of arguments passed to the program. The "
argv" variable holds the value associated with the arguments passed to the program.
Example:
int main(int argc, array(string) argv)
Python
In Python a function called
main doesn't have any special significance. However, it is common practice to organize a program's main functionality in a function called
main and call it with code similar to the following:
def main(): # the main code goes here if __name__=="__main__": main()
When a Python program is executed directly (as opposed to being imported from another program), the special global variable
__name__ has the value "
__main__".[6]
Some programmers use the following, giving a better look to exits:
import sys def main(*args): try: # some code here except: # handle some exceptions else: return 0 # exit errorlessly if __name__ == '__main__': sys.exit(main(*sys.argv))
REALbasic
In REALbasic, there are two different project types, each with a different main entry point. Desktop (GUI) applications start with the
App.Open event of the project's
Application object. Console applications start with the
App.Run event of the project's
ConsoleApplication object. In both instances, the main function is automatically generated for you, and cannot be removed from your project.
Ruby
In Ruby, there is no distinct main function. The code written without additional "
class .. end", "
module .. end" enclosures is executed directly, step by step, in context of special "
main" object. This object can be referenced using:
self # => main
and contain the following properties:
self.class # => Object self.class.ancestors # => [Object, Kernel]
Methods defined without additional classes/modules are defined as private methods of the "
main" object, and, consequentally, as private methods of almost any other object in Ruby:
def foo 42 end foo # => 42 [].foo # => private method `foo' called for []:Array (NoMethodError) false.foo # => private method `foo' called for false:FalseClass (NoMethodError)
Number and values of command-line arguments can be determined using the single
ARGV constant array:
ARGV # => ["foo", "bar"] ARGV.size # => 2
Note that first element of
ARGV,
ARGV[0], contains the first command-line argument, not the name of program executed, as in C. The name of program is available using
$0 or
$PROGRAM_NAME.[7]
Similar to Python, one could use:
if __FILE__ == $PROGRAM_NAME # Put "main" code here end
LOGO
In FMSLogo, the procedures when loaded do not execute. To make them execute, it is necessary to use this code:
to procname ... ; Startup commands (such as print [Welcome]) end
make "startup [procname]
Note that the variable
startup is used for the startup list of actions, but the convention is that this calls another procedure that runs the actions. That procedure may be of any name.
AHLSL
In AIGE's AHLSL, the main function, by default, is defined as:
[main]
References
- ^ argv: the vector term in this variable's name is used in traditional sense to refer to strings.
- ^ The
char *appleArgument Vector
- ^
- ^
- ^ Some Haskell Misconceptions: Idiomatic Code, Purity, Laziness, and IO — on Haskell's monadic IO>
- ^ Python
main()functions
- ^ Programming Ruby: The Pragmatic Programmer's Guide, Ruby and Its World — on Ruby
ARGV
External links
This entry is from Wikipedia, the leading user-contributed encyclopedia. It may not have been reviewed by professional editors (see full disclaimer) | http://www.answers.com/topic/main-function | crawl-002 | refinedweb | 1,519 | 52.39 |
Hi Laurie,
And thanks for your quick answer! Here are my comments.
I tried that first, changing the default encoding (in struts.xml) to utf-8.
That works fine, in java and in our web application. The problem is our
Sybase database which is configured to ISO-8859-1. And as our JDBC driver
(jconn2) does not convert from utf-8 to iso-8859-1, it will throw an
exception when trying to update or insert the characters it does not
understand.
So therefore I had to convert them myself. I can also add that there is a
special case when it comes to the Euro (€) character. It did not exist when
iso-8859-1 was created, but added as part of iso-8859-15. But our Sybase
database still only understands iso-8859-1, so a conversion needs to take
place. What I did was first convert it from utf-8 to iso-8859-15, then from
iso-8859-15 to iso-8859-1. Here is the code:
byte[] characters = charsBeforeConvert.getBytes("iso-8859-15");
for (int i = 0; i < characters.length; i++) {
if (characters[i] == (byte) 0xa4) {
//0x80 is control character and has no symbol in iso-8859-1. It
is used for € in windows-1252
characters[i] = (byte) 0x80;
}
}
return new String(characters, "iso-8859-1");
Kind of a hassle, but it works.
It was a good idea to override the setCharacterEncoding method. This would
open the opportunity to move my converting logic from the filter to an
interceptor. But then another problem occurs. If I do the conversion in an
interceptor, I would need to know exactly which parameters that would need
to be converted. We are working with a solution for maintaining CV’s. I
would then have to do something like (pseudocode):
- String firstName = Request.getParamater(“firstName”);
- get CV object from the value stack
- firstName = performConversion(firstName)
- cv.setFirstName(firstName)
- put cv back on the value stack
In some cases this would work fine, but I have so many parameters I need to
retrieve and convert that it would not work as a proper solution. My filter
takes care of all requests parameters without the need of specifying which
parameter it is.
To improve my code, I will move the converting logic to a utility class, so
the filter can stay as thin as possible.
I will post the entire code if you like to take a look at it. Any comments
would be appreciated!
Thanks
import com.google.common.collect.Maps;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletRequestWrapper;
import java.io.IOException;
import java.io.UnsupportedEncodingException;
import java.util.Map;
/**
* Filter to fix utf-8 to iso-8859-1 conversion
*
* @author Asgaut Mjolne
* @version $Revision: 1.6 $, 05.feb.2008, modified by: $Author: fiasmjol
*/
public class CharsetEncodingFilter implements Filter {
@Override
public void init(FilterConfig filterConfig) throws ServletException {
}
@Override
public void doFilter(ServletRequest servletRequest, ServletResponse
servletResponse, FilterChain filterChain) throws IOException,
ServletException {
HttpServletRequest req = (HttpServletRequest) servletRequest;
if ("utf-8".equalsIgnoreCase(req.getCharacterEncoding())) {
req = new CharsetRequestWrapper(req);
req.getParameter("foo"); //Needed to fill params. Do not remove
}
filterChain.doFilter(req, servletResponse);
}
@Override
public void destroy() {
}
static class CharsetRequestWrapper extends HttpServletRequestWrapper {
private static final byte ISO_8859_15_EURO_CODE_POINT = (byte) 0xa4;
/**
* Not in use in ISO-8859-1
*/
private static final byte CP_1252_EURO_CODE_POINT = (byte) 0x80;
public CharsetRequestWrapper(HttpServletRequest httpServletRequest)
{
super(httpServletRequest);
}
@Override
public String getParameter(String s) {
return super.getParameter(s);
}
Map<String, String[]> iso88591EncodedParams = null;
/**
* Looping through all parameters on the request, checking for
special characters.
* If any found, convert them with the fixCharset method
*/
@Override
public Map<String, String[]> getParameterMap() {
if (iso88591EncodedParams == null) {
iso88591EncodedParams = Maps.newHashMap();
Map<String, String[]> params = super.getParameterMap();
for (String key : params.keySet()) {
String[] values = params.get(key);
for (int j = 0; j < values.length; j++) {
values[j] = fixCharset(values[j]);
}
iso88591EncodedParams.put(key, values);
}
}
return iso88591EncodedParams;
}
/**
* Converting special chars from utf-8 to iso-8859-1
* Add more convertions here when needed
*/
static String fixCharset(String charsBeforeConvert) {
try {
byte[] characters =
charsBeforeConvert.getBytes("iso-8859-15");
for (int i = 0; i < characters.length; i++) {
if (characters[i] == ISO_8859_15_EURO_CODE_POINT) {
characters[i] = CP_1252_EURO_CODE_POINT;
}
}
return new String(characters, "iso-8859-1");
} catch (UnsupportedEncodingException e) {
return charsBeforeConvert;
}
}
@Override
public String[] getParameterValues(String s) {
return super.getParameterValues(s);
}
}
}
Laurie Harper wrote:
>
> Asgaut wrote:
>> I have recently been struggling with a utf-8 to ISO-8859-1 problem with
>> Ajax
>> and Struts2.
>>
>> The problem is basically that our application requires iso-8859-1
>> characters
>> and Ajax is configured to only post utf-8 (ajax is utf-8 either way, can
>> not
>> be changed). So some kind of conversion has to take place at some level.
>>
>> My problem can be divided into two parts:
>> 1. Make Struts2 understand that there is a incoming utf-8 POST, even
>> though
>> struts.xml (which set the struts2 default encoding) is configured to use
>> iso-8859-1
>> 2. Convert the characters from utf-8 to iso-8859-1
>
> 3. Change your default encoding to utf-8, which should have no effect on
> any of your code but will allow greater flexibility in the range of
> characters you can display and read. Is there any reason you must use
> iso-8859-1?
>
>> [...]
>>
>> If you take a look at this piece of code, you can see that it overrides
>> the
>> encoding if it is set as defaultEncoding (from struts.xml). This is OK,
>> the
>> problem is this check:
>> if (encoding != null) {
>> try {
>> request.setCharacterEncoding(encoding);
>> } catch (Exception e) {
>> LOG.error("Error setting character encoding to '" +
>> encoding
>> + "' - ignoring.", e);
>> }
>> }
>>
>> I think the correct thing would be to also do a check if the
>> request.getCharacterEncoding was already set. I should look like this:
>> if (encoding != null && request.getCharacterEncoding() == null ) {
>> try {
>> request.setCharacterEncoding(encoding);
>> } catch (Exception e) {
>> LOG.error("Error setting character encoding to '" +
>> encoding
>> + "' - ignoring.", e);
>> }
>> }
>> With this change utf-8 would be kept as the request character encoding
>> and I
>> could do my conversion in my interceptor.
>> This would solve my problem number 1. Am I correct when I say this is a
>> bug?
>
> I don't know if I'd call that a bug, but it does seem like a reasonable
> enhancement. It would probably require some testing with different
> browsers to make sure getCharacterEncoding() really is returning null in
> the 'normal' cases, but assuming that's true you could open a ticket in
> JIRA and attach a patch.
>
>> The way I went around it was to create a filter which is executed before
>> FilterDispatcher in struts2. In this filter I check if it is a uft-8 post
>> and if it is, I wrap the HttpServletRequest into my own
>> CharsetRequestWrapper. In my wrapper I will override getParameterMap
>> which
>> converts my characters, put them back into the map and return them. I
>> also
>> run a req.getParameter("foo"); after my wrapping to populate the
>> parameters
>> on the request.
>>
>> It works, but it took me a couple of days to work it out.
>>
>> Any comments on this?
>
> It might be simpler for your filter to call
> setCharacterEncoding("utf-8") and use a trivial request wrapper that
> delegates all calls to the wrapped request *except*
> setCharacterEncoding(), making that a no-op. It would make it clearer
> what the filter was acutaly doing with less code :-) Otherwise, seems
> like a reasonable work-around.
>
> L.
>
>
> ---------------------------------------------------------------------
> | http://mail-archives.us.apache.org/mod_mbox/struts-user/200802.mbox/%3C15497775.post@talk.nabble.com%3E | CC-MAIN-2020-16 | refinedweb | 1,211 | 50.23 |
GNU Readline is a powerful line editor with support for fancy editing commands, history, and tab completion. Even if you’re not familiar with the name Readline you might still be using it: it’s integrated into all kinds of tools including GNU Bash, various language REPLs, and our own gitsh project.
This post will talk you through the more advanced Readline tab completion features gitsh uses and show you how to use them in your own programs.
To avoid getting lost in the details of the gitsh code1, we’ll use a simplified example application for this post.
Basic tab completion
To get us started, here’s the simplest Readline program I can think of. It uses Readline to get input from the user, echoes that input back, and then exits.
#include <stdio.h> #include <stdlib.h> #include <readline/readline.h> int main(int argc, char *argv[]) { char *buffer = readline("> "); if (buffer) { printf("You entered: %s\n", buffer); free(buffer); } return 0; }
main.cat revision 9b8c3e6
Hiding among the boiler-plate code is our first invocation of a GNU Readline function:
char *buffer = readline("> ");
The
readline function prompts the user for input,
with all of Readline’s power behind it.
This includes tab completion for file system paths.
If you don’t want to complete anything more than filenames
you don’t need to go any further than this.
Custom completion options
In gitsh—and many other programs that use Readline—it’s useful to be able to complete things other than paths. In gitsh, we’re interested in completing things like Git commands, branch names, and remotes. For the purpose of this example, let’s say we’re only interested in completing values from a fixed list of the names of some characters from The Hitchiker’s Guide to the Galaxy.
Here’s our expanded program with custom tab completion:
#include <stdio.h> #include <stdlib.h> #include <string.h> #include <readline/readline.h> char **character_name_completion(const char *, int, int); char *character_name_generator(const char *, int); char *character_names[] = { "Arthur Dent", "Ford Prefect", "Tricia McMillan", "Zaphod Beeblebrox", NULL }; int main(int argc, char *argv[]) { rl_attempted_completion_function = character_name_completion; printf("Who's your favourite Hitchiker's Guide character?\n"); char *buffer = readline("> "); if (buffer) { printf("You entered: %s\n", buffer); free(buffer); } return 0; } char ** character_name_completion(const char *text, int start, int end) { rl_attempted_completion_over = 1; return rl_completion_matches(text, character_name_generator); } char * character_name_generator(const char *text, int state) { static int list_index, len; char *name; if (!state) { list_index = 0; len = strlen(text); } while ((name = character_names[list_index++])) { if (strncmp(name, text, len) == 0) { return strdup(name); } } return NULL; }
main.cat revision ef33b0b
We’re making use of three new Readline features here.
First, we set
rl_attempted_completion_function:
rl_attempted_completion_function = character_name_completion;
When the user hits their tab key
Readline will invoke the function we’ve assigned to
rl_attempted_completion_function.
The partial argument we’re completing
and the positions where it starts and ends in the current line of input
will be passed as arguments.
If we modify our
character_name_completion function
to print its arguments,
we’d see something like this:
Who's your favourite Hitchiker's Guide character? > I like Arth⇥ text="Arth", start=7, end=11
character_name_completionmodified to print arguments
Note that we’re only passed
"Arth",
and not the whole input.
Given this information,
we need to return the possible completions:
- If there are no possible completions, we should return
NULL.
- If there is one possible completion, we should return an array containing that completion, followed by a
NULLvalue.
- If there are two or more possibilities, we should return an array containing the longest common prefix of all the options, followed by each of the possible completions, followed by a
NULLvalue.
Rather than building this array by hand,
including all of the complexity of finding the longest common prefix,
we can use the helpful
rl_completion_matches function
with a generator function:
return rl_completion_matches(text, character_name_generator);
The generator function—in our case
character_name_generator—is called with
the
text that was passed to
rl_completion_matches,
and a
state value that will be zero on the first call
and non-zero on subsequent calls
(we’re using the fact that
state is zero on the first call
to initialise some static variables,
but otherwise ignoring it).
Each time it’s called,
character_name_generator returns a completion that matches the given text.
When it can’t find any more options
it returns
NULL.
If our
character_name_completion function
returned no matches
(i.e.
character_name_generator returned
NULL on the first call),
Readline’s default behaviour would be to
fall back to its default path completion.
In this case
we don’t want that to happen,
so we added one more line to
character_name_completion
to tell it our list of completions is final,
even when it’s empty,
by setting
rl_attempted_completion_over
to a non-zero value:
rl_attempted_completion_over = 1;
Quoting and escaping
Our current implementation works well enough when the user is entering the name of a single character. But what would happen if they needed to enter a list of characters, separated by spaces? How would we know if we were seeing a space between a character’s first name and last name, or a space between two different characters?
Shells like bash, zsh, and gitsh get around this with quoting and escaping.
We could quote each character’s name:
"Arthur Dent" "Ford Prefect"
Or we could escape the spaces that don’t indicate the start of a new character’s name:
Arthur\ Dent Ford\ Prefect
Quoting and escaping are important for tab completion. As we’ve seen, Readline passes only the last argument of the user’s input to our completion function. If we want to support quoting and escaping we need some way of telling Readline if the space separating two words counts as the start of a new argument. We also need to make sure that when we complete an argument containing a space that it is appropriately escaped.
The cases we need to cover are:
Adding quoting support
Quoting is easier than escaping, so let’s tackle that first.
All we need to do
is tell Readline
which characters our program uses
as delimiters for quoted strings,
by setting
rl_completer_quote_characters:
rl_completer_quote_characters = "\"'";
Now, when we press tab within a single- or double-quoted string, Readline will pass everything after the opening quote to our completion function.
It’ll even close the quotes for us if there’s only one possible completion, or leave them open if there are several to choose from.
Adding escaping support
The first thing we need to do to support escaping is to make sure that the completion options we return are properly escaped.
We’d expect unquoted input to produce escaped output, and quoted input to produce unescaped but quoted output:
Conveniently,
we’ve already set
rl_completer_quote_characters,
so Readline is aware of whether or not we are
completing a quoted string.
We can modify our
character_name_generator function
to read the
rl_completion_quote_character
variable then
produce escaped character names
if we’re not completing a quoted argument:
char * character_name_generator(const char *text, int state) { static int list_index, len; char *name; if (!state) { list_index = 0; len = strlen(text); } while ((name = character_names[list_index++])) { if (rl_completion_quote_character) { name = strdup(name); } else { name = escape(name); } if (strncmp(name, text, len) == 0) { return name; } else { free(name); } } return NULL; } char * escape(const char *original) { char *escaped; // ... return escaped; }
The important bit of new functionality here is that we conditionally escape our options:
if (rl_completion_quote_character) { name = strdup(name); } else { name = escape(name); }
If Readline has seen an un-closed quote
it will set
rl_completion_quote_character
to the appropriate quote character
(in our case
' or
",
since those are the characters we listed in
rl_completer_quote_characters).
If
rl_completion_quote_character is zero,
we know we’re not completing a quoted argument.
The
escape function I’ve written for this example
allocates a new character array on the heap,
so we don’t need to use
strdup
if we’ve already used
escape2.
I’ve omitted the full implementation of
escape here
because it’s rather long,
but you can see the
full example code on GitHub.
Detecting escaped word breaks
This is getting pretty good, but we’re still left with one case we can’t handle. If the user input contains a space that’s escaped:
Readline will still see the space
as an argument boundary.
Our completion function will be passed
"D",
when we want it to be passed
"Arthur\ D".
To handle this,
we need to give Readline
a pointer to a function that can tell it
if the space between words is escaped,
which we can do with the
rl_char_is_quoted_p
setting:
rl_char_is_quoted_p = "e_detector;
main.cat revision 5219206
Our
quote_detector function
takes the whole line of input
and the index of the space
that might indicate a break between arguments,
or a quote character that might indicate the start of a quoted string.
It should return zero
if the character isn’t quoted,
and a non-zero value
if it is quoted:
int quote_detector(char *line, int index) { return ( index > 0 && line[index - 1] == '\\' && !quote_detector(line, index - 1) ); }
quote_detectorfrom
main.cat revision 5219206
It’s worth noting that this implementation is recursive.
In many shells,
it’s possible to escape the
\ character with another
\ character.
The sequence
\\ represents a literal
\
and doesn’t escape the character that follows it.
The recursion makes sure we handle
any number of
\ characters before a space,
and always do the right thing.
When is
rl_char_is_quoted_p called?
The Readline documentation would have us believe that there’s nothing else we need to do. The reality is a little more complex.
Readline won’t make use of
rl_char_is_quoted_p
unless it believes some kind of quoting or escaping
is being used in the user’s input.
Remember our old friend
rl_completion_quote_character?
We used it to determine if we needed to escape our completion options.
Readline does something similar
with the closely related
rl_completion_found_quote
variable to determine
if it needs to call
rl_char_is_quoted_p3.
There are several practical implications of this:
rl_completion_found_quoteis only ever set if
rl_completer_quote_charactersis set. Therefore, without
rl_completer_quote_characters,
rl_char_is_quoted_pdoes nothing.
rl_completion_found_quoteis only ever set if the input contains an unclosed quoted string, or a literal
\character. This limits the kind of escaping schemes
rl_char_is_quoted_pcan implement to those that use a
\in some way.
Which characters separate arguments?
Readline will only invoke
rl_char_is_quoted_p
with characters that would,
if unescaped,
indicate a break between arguments.
For our
quote_detector implementation to work,
we need to customise the list of word break characters:
rl_completer_word_break_characters = " ";
main.cat revision 5219206
Notice that we’ve been happily completing space-separated arguments from the very first example, so why do we need to explicitly specify this now?
The default value of
rl_completer_word_break_characters
includes the
\ character,
which we use for escaping.
If encountering a
\ indicated a word break,
we wouldn’t get very far with escaped spaces;
Readline would include the space in the value
passed to our completion function,
but stop at the
\.
An alternative solution to this problem
would be to decrement
rl_point
in our
rl_char_is_quoted_p function,
but since we don’t need
\ characters
to act as word breaks,
we can happily remove them
from
rl_completer_word_break_characters.
That’s all, folks
So far, that’s everything we’re using in gitsh. But we’re still only scratching the surface of what GNU Readline can do.
Update: Tab completion in Ruby
When I wrote this post,
many of the features it covered weren’t available
via Ruby’s
Readline module.
A couple of patches later, and all of this is possible in Ruby.
Check out the Ruby edition of this post to see the same example without a single line of C.
[1]
gitsh is mostly implemented in Ruby, and
until very recently we used
Ruby’s built-in
Readline module.
The default Ruby bindings only expose a subset of
Readline’s functionality—it’s a very useful subset,
but gitsh has now outgrown it.
In the gitsh source,
we expose the features discussed in this post via a Ruby extension,
and then make use of them from Ruby.
To keep things simple
I’ll stick to C in this post,
but you can see the full Ruby implementation
in
gitsh’s
line_editor.c file.
[2]
We could be more memory efficient here,
and avoid calling
strdup for strings that don’t match the user input,
but the code would be harder to read.
I’m generally in favour of sacrificing a little efficiency
for readability,
and doubly so in examples.
[3]
To be more precise,
the value of a local variable called
found_quote is used
to determine if
rl_char_is_quoted_p should be called
before it’s assigned to
the externally accessible
rl_completion_found_quote.
See the
_rl_find_completion_word function definition
in
lib/readline/complete.c in the GNU Bash source code for details. | https://thoughtbot.com/blog/tab-completion-in-gnu-readline | CC-MAIN-2020-40 | refinedweb | 2,121 | 50.77 |
An IoT Moisture sensor that sends moisture data from an Arduino Nano 33 IoT to the Arduino IoT Cloud
Things used in this project
Hardware components
Software apps and online services
Arduino IoT Cloud
Arduino Web Editor
Story
Earlier this year, my parents had our lawn redone. To keep our lawn fresh and green, my dad had to water the lawn every day but had to make sure he was not overwatering the soil, otherwise, the grass would die. To help my dad, I created this small device that could sense the moisture level of the soil and send messages through the Arduino IoT Cloud to my computer, so we could know when to water the lawn.
Notes:
This project is battery powered, so to achieve a secured fit to the Nano 33 IoT, you may need to solder headers or jumper wires onto the battery clip
You must be careful when water is poured over this. If you plan on using it in an outdoor environment as I did, make sure you put a static-proof plastic cover over your nano 33 IoT, moisture sensor, and battery to ensure the safety of your device. If you plan on using this indoors, like the cover picture, you do not need to have a cover, but you must be careful when pouring water over the moisture sensor as there are components fixed to it. Water away from the moisture sensor and keep the soil underneath the white line for safety purposes, else you risk damaging your sensor and board.
Link to Arduino's Getting Started guide for Arduino IoT Cloud if you need:
Building Sequence:
1. Connect a moisture sensor to the Nano 33 IoT board. The red wire goes to 3.3V, the black wire goes to GND, and the yellow wire goes to Analog Pin A1 on the board. You can choose any analog pin you like, however, make sure you change the pin number inside the code. Change the number inside the analogRead function inside the code. A snippet of the lines is below.
//Read data from Analog Pin 1 on Nano 33 IoT and print to Serial Monitor for debugging soilMoistureLevel = analogRead(1); Serial.println(soilMoistureLevel);
2. Next, connect a cable to your Nano 33 IoT board and create the variables listed below inside your thing
-Class Variable
-string message
-int moistureLevel
-CloudPercentage moisturePercent
3. Then, set up your Nano 33 IoT as the device for your thing and enter your network credentials in the fields below.
4. Copy and Paste the code below into the sketch tab of your thing and upload it to your Arduino. Once it has been uploaded, open the serial monitor to check whether the moisture sensor is functioning or not. If it is not, try troubleshooting by resetting the code back to its original state or checking the moisture sensor's pin if you had changed the moisture sensor's pin
5. Next, create a dashboard with a few widgets. Below is the dashboard I use.
Here is the list that shows what variable each widget is connected to
-PLANT MOISTURE widget ==> moisturePercent
-MOISTURE VALUE widget ==> moistureValue
-the graph in the bottom left ==> moisturePercent
-MESSENGER widget ==> message
6. Reload the dashboard. if you see the data in the moisture value widget fluctuate by a few digits, then you have completed this project. Congratulations! If you have any problems, questions, or advice, please comment below and I will get back to you as soon as possible.
A demonstration of the project is below.
Thanks for reading!
Schematics
Circuit Diagram using Paint 3D (since I don't have fritzing)
Code
#include "thingProperties.h" int soilMoistureLevel; int soilMoisturePercent; int airValue = 830; int waterValue = 360; void setup() { // Initialize serial and wait for port to open: Serial.begin(9600); //(); // Update the Cloud's data //Read data from Analog Pin 1 on Nano 33 IoT and print to Serial Monitor for debugging soilMoistureLevel = analogRead(1); Serial.println(soilMoistureLevel); //Convert soilMoistureLevel integer into percentage by using air and water values defined above and mapping with 0 - 100 soilMoisturePercent = map(soilMoistureLevel, airValue, waterValue, 0, 100); Serial.println(soilMoisturePercent); //Print to Serial Monitor for debugging //Update Arduino IoT Cloud values moistureLevel = soilMoistureLevel; moisturePercent = soilMoisturePercent; //If the moisture percentage is less than 30% if(moisturePercent < 30) { //Send message to Arduino Cloud messenger on dashboard message = "Water Levels Low. Watering Required!"; } //If the moisture percentage is greater than 50% if(moisturePercent > 50) { //Send message to Arduino Cloud messenger on dashboard message = "Water Levels Sufficient."; } } //End of Code
The article was first published in hackster, August 31, 2021
cr:
author: buddhimaan
| https://community.dfrobot.com/makelog-312024.html | CC-MAIN-2022-05 | refinedweb | 768 | 58.32 |
According to Diego Elio Pettenò on 2/24/2010 8:04 AM: > +dnl AFX to get a unique namespace before it is submitted somewhere > +dnl (like autoconf-archive). > +AC_DEFUN([AFX_LIB_DLOPEN], [ Just curious: Why the choice of AFX_, and not something like LV_ for libvirt? > + DLOPEN_LIBS= > + > +m4_pushdef([action_if_not_found], > + [m4_default([$2], > + [AC_MSG_ERROR([Unable to find the dlopen() function])] > + ) > + ]) > + > + old_libs=$LIBS > + AC_SEARCH_LIBS([dlopen], [dl], > + [DLOPEN_LIBS=${LIBS#${old_libs}} The shell expansion ${a#b} is not portable to Solaris /bin/sh. Is that a problem for libvirt, or are we Linux-centric enough to not care? > + $1], action_if_not_found) Underquoted; if the AC_MSG_ERROR were ever changed to contain a comma, then this would be passing an unintentional extra argument to AC_SEARCH_LIBS. It should work to pass [action_if_not_found] rather than action_if_not_found, or even avoid the temporary macro altogether and inline the m4_default directly as the argument to AC_SEARCH_LIBS. Is this something that would be worth wrapping in an AC_CACHE_CHECK, to avoid repeating the check if config.cache exists? -- Eric Blake eblake redhat com +1-801-349-2682 Libvirt virtualization library
Attachment:
signature.asc
Description: OpenPGP digital signature | https://www.redhat.com/archives/libvir-list/2010-February/msg00846.html | CC-MAIN-2015-14 | refinedweb | 180 | 53.41 |
23 October 2013 14:36 [Source: ICIS news]
HOUSTON (ICIS)--?xml:namespace>
Methanex is relocating two methanol plants from
Entergy Gulf States Louisiana said that under the deal it will supply Methanex's Geismar site with up to 30 megawatts of power.
The companies’ power contract is for an initial 10-year term and will continue thereafter on a year-to-year basis. Entergy plans to upgrade its electric transmission system to meet the increased demand from Geismar, it added.
Gustavo Parra, Methanex’s vice president for “Global Excellence and Projects”, added: “Methanex selected Entergy as the sole source for their power needs because of Entergy’s ability to provide safe, clean, reliable and affordable power within the required timeframe.”
Financial details were not disclosed.
Entergy added that Methanex’s first plant is expected to be operational by the end of 2014, and the second plant is expected to be operational | http://www.icis.com/Articles/2013/10/23/9718196/methanex-secures-power-supply-deal-for-louisiana-methanol-site.html | CC-MAIN-2015-06 | refinedweb | 151 | 51.07 |
how to search bin values that contains or starts with the to be searched value
How to search bin values that contains or starts with the to be searched value
The way to do this (AFAIK) is to use a UDF and the Lua string functions. So, a.) set up some sort of primary key to get a subset of your data, and b.) write a UDF filter to operate on the stream using Lua’s .find or similar.
Hi amitvengs,
Depending on your application/use case Andrew’s suggestion is one way of going about it.
If for some reason you are unable to limit the number of records first by way of creating a secondary index on a bin and applying equality or range filter on it, as of Aerospike Server release 3.4.1 you have the option to perform Scan Aggregations coupled with Streaming UDFs. They work very similar to Query Aggregations except you don’t apply any filter. In other words, Scan Aggregations will result in entire set scan – which may be OK for some use cases and not for others. Please use your best judgement.
Here’s some sample code.
Assumptions: There’s a namespace called ‘test’ and within it is set ‘users’ that is populated with user records. Each record has at least one bin ‘username’ and in some records those values start with ‘usr22’.
Goal: You’d like to retrieve usernames that start with ‘usr22’.
Code using Aerospike Node.js Client v1.0.31 (this can also be done in the latest releases of Java and C# clients):
// Note: UDF registration is done here for convenience. In production env, it should be done via AQL. client.udfRegister('startsWith.lua', function(err) { if ( err.code === aerospike.status.AEROSPIKE_OK ) { console.error('startsWith registeration complete!'); var statement = {aggregationUDF: {module: 'startsWith', funcname: 'find', args: ['usr22']}}; var query = client.query('test', 'users', statement); var stream = query.execute(); stream.on('data', function(result) { var users = result.users.split(",").filter(function(e){return e}); console.log('Users: ', users); console.log('Count: ', users.length); }); stream.on('error', function(err) { console.log('scanAggregate Error: ',err); }); stream.on('end', function() { // console.log('scanAggregate: ', '!done!'); }); } else { // An error occurred console.error('startsWith registeration error: ', err); } });
Streaming UDF – startsWith.lua:
local function starts_with(map,rec) if rec.username:find('^' .. map['chars']) ~= nil then map['users'] = map['users'] .. ',' .. rec.username end -- Return accumulated map return map end local function reduce_stats(a,b) -- Merge values from map b into a a.users = a.users .. b.users -- Return updated map return a end function find(stream,chars) -- Process incoming record stream and pass it to aggregate function, then to reduce function -- NOTE: aggregate function starts_with accepts two parameters: -- 1) A map that contains usernames and initial chars to match username -- 2) function name starts_with -- which will be called for each record as it flows in -- Return reduced value of the map generated by reduce function reduce_stats return stream : aggregate(map{users='',chars=chars},starts_with) : reduce(reduce_stats) end
Let me know if you need clarification on any of the details.
For more on Scan Aggregations, click here
For more on Streaming UDFs, click here
For more on Aggregations, click here
I hope this helps.
Help debugging Error: code: 100, message: 'UDF: Execution Error 1 | https://discuss.aerospike.com/t/how-to-search-bin-values-that-contains-or-starts-with-the-to-be-searched-value/985 | CC-MAIN-2018-30 | refinedweb | 546 | 58.08 |
A function is a block of statements which executes only when it is called somewhere in the program. A function provides re-usability of same code for different inputs, hence saves time and resources. There are some in-built functions in C and one of the common function is main() which is used to execute code. Users can also create their own function which is also termed as user-defined functions.
In C, creating a function starts with defining return type of function followed by function's name and parenthesis containing function's parameter(s), if it has any. At last, it contains block of statements which is also called body of the function. Please see the syntax below:
//Defining function return_type function_name(parameters) { statements; }
return_type: A function can return value(s). The return_type is the data type of the value the function returns. If a function does not return anything in that case void is used as return_type.
In the below example, a function called MyFunction is created to print Hello World!. The function requires no parameters to run and has no return type, hence void keyword is used.
#include <stdio.h> void MyFunction(){ printf("%s\n","Hello World!."); } int main (){ MyFunction(); return 0; }
Hello World!.
After defining the function, it can be called anywhere in the program with it's name followed by parenthesis containing function's parameter(s), if it has any and semicolon (;). As in the above example, the function is called inside main() function using the following statement:
MyFunction();
In C, declaration of a function starts with return type of function followed by function's name and parenthesis containing function's parameter(s), if it has any. Please see the syntax below:
//declaration of a function return_type function_name(parameters);
If the function is defined after it is called in the program, the C program will raise an exception. To avoid such situation, the function is declared before calling it, and the function can be defined anywhere in the program.
In the below example, the function called MyFunction is defined after it is called in the program. Since the function is defined after it is called in the program, program will raise an exception. To avoid such situation, function must be declared before calling it.
#include <stdio.h> //function declaration void MyFunction(); int main (){ //function calling MyFunction(); return 0; } //function definition void MyFunction(){ printf("%s\n","Hello World!."); }
Hello World!.
A parameter (or also known as argument) is a variable which is used to pass information inside a function. In above example, the function does not has any parameter. But a user can create a function with single or multiple parameters. Value of a parameter can be further used by the function to achieve desired result.
In the below example, the function called MyFunction is created which requires two integer numbers as parameters and print sum of two numbers in desired style. Please note that, the function returns nothing hence void is used as return type.
#include <stdio.h> void MyFunction(int x, int y); int main (){ int a = 15, b = 10; MyFunction(a, b); return 0; } void MyFunction(int x, int y){ printf("Sum of %i and %i is: %i\n", x, y, x+y); }
Sum of 15 and 10 is: 25
A function can be used to return values. To achieve this, user must have to define return type in definition and declaration of the function. In the below example, the return type is int.
#include <stdio.h> int MyFunction(int x, int y); int main (){ int a = 15, b = 10; int sum = MyFunction(a, b); printf("%i\n", sum); return 0; } int MyFunction(int x, int y){ return x+y; }
25
A function which can call itself is known as recursive function. A recursive function generally ends with one or more boundary conditions.
A function for calculating factorial using recursion method is described below.
#include <stdio.h> int factorial(int x); int main (){ printf("%i\n", factorial(3)); printf("%i\n", factorial(5)); return 0; } int factorial(int x){ if(x==0) {return 1;} else {return x*factorial(x-1);} }
6 120 | https://www.alphacodingskills.com/c/c-functions.php | CC-MAIN-2019-51 | refinedweb | 689 | 52.39 |
#45 — Pickle Objects in Acquistion WrappersReturn to tracker
Last modified on Jan 08, 2009 by Matthew Wilkes
I was using the 1.5 beta with Zope2.9.3/Plone2.5 and with a new blog if I a keyword I get the following error.
beren
2006-07-26 22:56:29 ERROR Zope.SiteErrorLog
le/jeff-s-palantir/test/base_edit
Traceback (most recent call last):
File "C:\Zope\lib\python\ZPublisher\Publish.py", line 121, in publish
transactions_manager.commit()
File "C:\Zope\lib\python\Zope2\App\startup.py", line 240, in commit
transaction.commit()
File "C:\Zope\lib\python\transaction\_manager.py", line 96, in commit
return self.get().commit(sub, deprecation_wng=False)
File "C:\Zope\lib\python\transaction\_transaction.py", line 380, in commit
self._saveCommitishError() # This raises!
File "C:\Zope\lib\python\transaction\_transaction.py", line 378, in commit
self._commitResources()
File "C:\Zope\lib\python\transaction\_transaction.py", line 433, in _commitRes
ources
rm.commit(self)
File "C:\Zope\lib\python\ZODB\Connection.py", line 484, in commit
self._commit(transaction)
File "C:\Zope\lib\python\ZODB\Connection.py", line 526, in _commit
self._store_objects(ObjectWriter(obj), transaction)
File "C:\Zope\lib\python\ZODB\Connection.py", line 553, in _store_objects
p = writer.serialize(obj) # This calls __getstate__ of obj
File "C:\Zope\lib\python\ZODB\serialize.py", line 407, in serialize
return self._dump(meta, obj.__getstate__())
File "C:\Zope\lib\python\ZODB\serialize.py", line 416, in _dump
self._p.dump(state)
TypeError: Can't pickle objects in acquisition wrappers.
Added byMaurits van ReesonAug 14, 2006 09:02 PMI can't confirm this. I'm not the most knowledgeable Quills user though, so don't despair. :)
Target release: 1.6 → None
Responsible manager: justizin → (UNASSIGNED)
But it seems there is a word missing in your report, which may be important. So can you clarify?
Quoting:
"I was using the 1.5 beta with Zope2.9.3/Plone2.5 and with a new blog if I [add? remove? something else?] a keyword I get the following error."
Also the keywords system for Quills has changed recently, going from its own system to using the usual plone keyword system. So maybe you caught Quills at a bad time. Can you try again please? Tim or someone else may have inadvertently solved this already. ;-)
Added byBeren ErchamiononAug 15, 2006 01:49 AMSorry - its when I "add" a keyword.
I think I have the latest available release on the plone.org site. Is there a more recent release in svn? Can you supply the svn repo url?
beren
Added byTim HicksonAug 15, 2006 11:54 AMTry a checkout from (or .../Quills/bundles/with-friends-trunk if you want to get hold of all the dependencies as well).
Please do report back if this problem persists even on the newer code.
Added byJustin RyanonAug 28, 2006 03:03 PMTwo things:
Issue state: unconfirmed → postponed
Target release: None → 1.5
Responsible manager: (UNASSIGNED) → justizin
(a) We're about to cut what we hope to be the final RC of Quills 1.5, which is targeted at Zope 2.9 and Plone 2.5, and should work in Zope 2.8.6+ and Plone 2.1.
(b) point yourself at this bundle, rather than with-friends-trunk:[…]/latest-known-working
If you are still having this on a supported configuration of Plone, we'd like some details before we cut a 1.5 release.
Added byBeren ErchamiononAug 30, 2006 01:43 AMI checked the svn spot and I didn't see any files there? did you do the merge already?
beren
Added byJustin RyanonAug 30, 2006 02:38 AMIt's a bundle, it pulls Quills and some dependencies via the svn:external property.
Just grab the RC3 tarball. ;)
Added byBeren ErchamiononAug 30, 2006 10:53 AMAh - duh - sorry. Not too familiar with the externals thing, but looks interesting. I grabbed the .tar and will give it a shot today.
beren
Added byJustin RyanonSep 08, 2006 12:51 AMBeren - any progress?
Added byBeren ErchamiononSep 09, 2006 10:33 PMI'll test ASAP. I've been busy with PAS and Active Directory...
beren
Added byRomain onSep 14, 2006 07:04 PMI have the same problem with an image field inside an AGX generated content-type when creating the object(not when modify) in plone-2.5.
If you have any idea what the problem could be...
Added byKiran JonnalagaddaonSep 17, 2006 08:43 AMCan confirm the "Can't pickle objects in acquisition wrappers" is still there with Python 2.4.3, Zope 2.9.4, Plone 2.5.1 and Quills 1.5.0.a3-dev10 off SVN Quills-with-friends (except Five, which is Plone 2.5.1's included version).
Previously, the error existed under Python 2.3.5, Zope 2.8, Plone 2.5 with included Five and Quills from svn (-dev3, IIRC, about two months old).
Added byKiran JonnalagaddaonSep 17, 2006 05:13 PMHere's my traceback, this time using latest-known-working (but Five 1.3.7) on Plone 2.5.1 and Zope 2.9.4. 96, in __call__
Module Products.CMFFormController.BaseControllerPageTemplate, line 39, in _call
Module Products.CMFFormController.ControllerBase, line 245, in getNext
- __traceback_info__: ['id = base_edit', 'status = success', 'button=None', 'errors={}', 'context=<WeblogEntry at weblogentry.2006-09-17.9237427638>', "kwargs={'portal_status_message': 'Changes saved.'}", 'next_action=None', '']
Module Products.CMFFormController.Actions.TraverseTo, line 36, in __call__
Module ZPublisher.mapply, line 88, in mapply
Module ZPublisher.Publish, line 41, in call_object
Module Products.CMFFormController.FSControllerPythonScript, line 108, in __call__
Module Products.CMFFormController.Script, line 141, in __call__ 1, in content_edit
- <FSControllerPythonScript at /sites/jace.seacrow.com/content_edit used for /sites/jace.seacrow.com/blog/weblogentry.2006-09-17.9237427638>
- Line 1 11, in content_edit_impl
- <FSPythonScript at /sites/jace.seacrow.com/content_edit_impl used for /sites/jace.seacrow.com/blog/weblogentry.2006-09-17.9237427638>
- Line 11
Module Products.Archetypes.BaseObject, line 648, in processForm
Module Products.Archetypes.BaseObject, line 750, in _renameAfterCreation
Module transaction._manager, line 110, in savepoint
Module transaction._transaction, line 295, in savepoint
Module transaction._transaction, line 292, in savepoint
Module transaction._transaction, line 675, in __init__.
Added byKevin TeagueonSep 23, 2006 06:59 AMI am getting "Can't pickle Object in acquisition wrappers" as well. Using Zope 2.9.3 with Plone 2.5.1-rc1 and latest-known-working (except Five). I was also getting pickle errors with 1.5-RC3 tarball.
I've worked around the issue by commenting out the self.migrateCategoriesForEntry(entry) call in the migrateWeblog method, and then doing an Uninstall/Reinstall. I can now add and publish posts.
Added byKevin TeagueonSep 23, 2006 07:58 AMIt looks like the bug might be caused by this:
# Add the two together
subjects = existingSubjects[:] # Hard copy
This only creates a shallow copy of the entryCategories attribute. The following seems to work for me:
# Add the two together
import copy
subjects = copy.deepcopy(existingSubjects)
Added byTim HicksonSep 23, 2006 03:28 PMkteague, thanks for digging into this issue.
Your proposed fix - which I assume is applied in Quills.migrations.quills09to15 - is interesting. It's not clear to me why the existing code::
existingSubjects = list( entry.Subject() )
subjects = existingSubjects[:]
should result in 'subjects' being Acquisition-wrapped.
I'm happy to apply the change, but I'd like to understand the reasoning for it first. Can you elaborate?
Further, it seems unlikely that your proposed fix will actually solve the issue for the original poster, Beren Erchamion, as he reports the problem on a "new blog", not a migrated one. Beren, can you verify that kteague's fix doesn't solve your problem? In fact, can you verify that you are still seeing this issue on 1.5RC3 (or a more recent svn checkout)?
Added byBeren ErchamiononSep 23, 2006 05:09 PMYes I'm still seeing the issue as I originally posted with the latest Quills bundle from plone.org
-beren
Added byKevin TeagueonSep 23, 2006 06:25 PMYeah, I don't see how that list would get acquisition wrapped, so maybe my 'fix' is of no help. I was Reinstalling the Quills product a lot (and fiddling with various bits here and there). Also, I did not fully test my upgraded Quills, I am getting the same bug as Beren where I can not add a keyword.
I do have a clean instance of my Plone before upgrading to Quills so maybe I will start over with the latest svn version of Quills.
- Kevin
Added byKevin TeagueonSep 23, 2006 10:22 PMI have just tried doing an Uninstall/Reinstall with Quills running latest-known-working from svn on a Plone 2.5.1 with Zope 2.9.3 (again using the Five 1.3.7). I get the following error upon install:
Traceback (innermost last):
Module ZPublisher.Publish, line 121, in publish
Module Zope2.App.startup, line 240, in commit
Module transaction._manager, line 96, in commit
Module transaction._transaction, line 370, in commit
Module transaction._transaction, line 250, in _prior_operation_failed
TransactionFailedError: An operation previously failed, with traceback:
File "/opt/Zope-2.9.3/lib/python/ZServer/PubCore/ZServerPublisher.py", line 23, in __init__
response=response)
File "/opt/Zope-2.9.3/lib/python/ZPublisher/Publish.py", line 393, in publish_module
environ, debug, request, response)
File "/opt/Zope-2.9.3/lib/python/ZPublisher/Publish.py", line 194, in publish_module_standard
response = publish(request, module_name, after_list, debug=debug)
File "/var/zope/instances/gsc_gin/Products/PlacelessTranslationService/PatchStringIO.py", line 34, in new_publish
File "/opt/Zope-2.9.3/lib/python/ZPublisher/Publish.py", line 115, in publish
request, bind=1)
File "/opt/Zope-2.9.3/lib/python/ZPublisher/mapply.py", line 88, in mapply
if debug is not None: return debug(object,args,context)
File "/opt/Zope-2.9.3/lib/python/ZPublisher/Publish.py", line 41, in call_object
result=apply(object,args) # Type s<cr> to step into published object.
File "/var/zope/instances/gsc_gin/Products/CMFQuickInstallerTool/QuickInstallerTool.py", line 454, in installProducts
File "/var/zope/instances/gsc_gin/Products/CMFQuickInstallerTool/QuickInstallerTool.py", line 326, in installProduct
File "/opt/Zope-2.9.3/lib/python/transaction/_manager.py", line 96, in commit
return self.get().commit(sub, deprecation_wng=False)
File "/opt/Zope-2.9.3/lib/python/transaction/_transaction.py", line 366, in commit
self._subtransaction_savepoint = self.savepoint(optimistic=True)
File "/opt/Zope-2.9.3/lib/python/transaction/_transaction.py", line 295, in savepoint
self._saveCommitishError() # reraises!
File "/opt/Zope-2.9.3/lib/python/transaction/_transaction.py", line 292, in savepoint
savepoint = Savepoint(self, optimistic, *self._resources)
File "/opt/Zope-2.9.3/lib/python/transaction/_transaction.py", line 675, in __init__
savepoint = savepoint()
File "/opt/Zope-2.9.3/lib/python/ZODB/Connection.py", line 1012, in savepoint
self._commit(None)
File "/opt/Zope-2.9.3/lib/python/ZODB/Connection.py", line 526, in _commit
self._store_objects(ObjectWriter(obj), transaction)
File "/opt/Zope-2.9.3/lib/python/ZODB/Connection.py", line 553, in _store_objects
p = writer.serialize(obj) # This calls __getstate__ of obj
File "/opt/Zope-2.9.3/lib/python/ZODB/serialize.py", line 407, in serialize
return self._dump(meta, obj.__getstate__())
File "/opt/Zope-2.9.3/lib/python/ZODB/serialize.py", line 416, in _dump
self._p.dump(state)
TypeError: Can't pickle objects in acquisition wrappers.
If I comment out the migrateCategoriesForEntry() call in Quills.migrations.quills09to15.py then the Installer runs without errors. Not that none of the WeblogEntry objects that I am migrating have any categories set, however the "# nothing to do here." return statement never gets called, it seems to try and migrate the empty categories anyways.
Added byTim HicksonSep 24, 2006 01:54 PMKevin and Beren,
Can you try a trunk checkout of Quills now, please? I just made check-in that may help (but is untested by me), ().
Tim
Added byKevin TeagueonSep 25, 2006 02:26 AMI thought perhaps entryCategories might be being acquired from elsewhere in my Plone, so I tried the latest svn verion, but no luck.
Further poking around inicates that I am simply not able to call setSubject(value) on a Weblog Entry. For example, I can get migration to work by just commenting out the "entry.setSubject(subjects)" line in Quills/migrations/quills09to15.py. Post migration I can add and edit blog entries, but I can not assign them Keywords. If I do Zope has kittens (can't pickle error).
Looking through the source I have no idea why this happens, but it does :(
Added byKevin TeagueonSep 25, 2006 02:38 AMIt looks like this is caused by some interaction between Subjects and portal catalog. For example I can call the following Python script:
# bingoballs is a Weblog Entry
entry = context.bingoballs
entry.setSubject('Cat and Dog')
This adds 'Cat and Dog' as a Keyword and works fine. However, if I try and reindex that Weblog Entry in the portal catalog, for example:
# bingoballs is a Weblog Entry
entry = context.bingoballs
entry.reindexObject()
I get the Can't pickle error.
Added byKevin TeagueonSep 25, 2006 03:05 AMFixed it!
It seems my problem was that there was a getCategories Keyword Index in my portal catalog (with 0 items in the index). I removed this from the indexes and metadata tabs of my portal catalog, and I can now add Keywords to Weblog Entries.
Note sure what added this index to my portal_catalog, although the Plone I was testing on has been around for a lot time so it's had a lot of Products added/removed to it over time.
Added byKiran JonnalagaddaonSep 26, 2006 05:32 AMRemoving getCategories from indexes and metadata fixed it for me too.
Added byJustin RyanonJan 09, 2007 02:22 AMthis worked for me as well, where is getCatalogs coming from? An old / beta / alpha version of Quills, perhaps?
Issue state: postponed → open
Added byEmyr ThomasonMar 01, 2007 11:49 AMDo you have PloneSoftwareCenter installed? This product adds getCategories to the catalog metadata (and catalog indexes too).
Added byBeren ErchamiononMar 01, 2007 12:05 PMYes I do - one of the nest products for sure!
Does PSC require this catalog entry? I can try deleting it on a scratch instance and see what happens.
Added byBeren ErchamiononMar 14, 2007 02:00 AMI did some testing tonight and the getCategories index is required for PloneSoftwareCenter. If I delete this entry in the catalog then the main page will no longer display and I get a key error complaining that the index is missing. If I add the key back then it works again. The key is for the category field associated with projects and is used to build the main page just like on the products area here.
beren
Added byMatthew X. EconomouonJun 10, 2007 01:37 PMI'm getting the same TypeError, but upon migration (0.9 to 1.5-branch) and with a very different traceback:
Traceback (innermost last): 476, in commit.
I tried modifying the migration script to perform a deep copy, but that doesn't fix the error.
Added byLucie LejardonAug 08, 2007 02:41 PMI am using Zope 2.10.4-final, python 2.4.4 and Plone3-rc2.
I tried Quills today and, after fixing a typo [1], I have the "can't pickle object" error too. It happens when, as an admin, i try to add a weblog entry into my weblog:
2007-08-08 10:26:00 ERROR Zope.SiteErrorLog[…]/createObject
Traceback (innermost last): 555, in _commit
Module ZODB.Connection, line 582, in _store_objects
Module ZODB.serialize, line 407, in serialize
Module ZODB.serialize, line 416, in _dump
TypeError: Can't pickle objects in acquisition wrappers.
[1] In WeblogEntry.py, line 147, we have to use a capital "C" for Creators:
return [AuthorTopic(each).__of__(weblog) for each in self.Creators()]
Added byTom LazaronSep 17, 2007 06:56 PMi definitely cannot reproduce this bug with current trunk and plone 3.0.1.
Issue state: open → postponed
Target release: 1.5 → 1.6
can we close this?
Added byChristian LedermannonSep 24, 2007 10:07 AMthis only occurs together with plone softwarecenter which adds the getCategories Keyword Index.
I do not know if this is present in the current version too, please check.
Added byTim HicksonSep 24, 2007 02:00 PMAs of 1.6, Quills no longer uses getCategories as it uses getTopics instead. This should no longer be an issue.
Added byClayton ParkeronFeb 16, 2008 05:34 AMI was still receiving this error on a Plone 3 site I was developing against the Quills trunk (1.6). Here is the fix that ended up working for me:
Added byClayton ParkeronFeb 23, 2008 11:56 PMAfter talking to tim2p in irc I have retracted the change I made. Turns out I had a catalog index named getAuthors that was trying to index the method on the WeblogEntry. This made the catalog mad because there were items with acquisition wrappers on them being returned.
Basically if you have an index that lines up with one of the methods on a Quills objects that is returning items in acquisition wrappers then you are going to receive this error. While discussing the issue tim2p had some ideas about avoiding this behavior. But he and I both agreed that these solutions would be pretty ugly. So for now just be aware that this **could** happen depending on the indexes created by add-on products or custom products you are developing.
Add response
If you can, please log in before submitting a reaction. | http://plone.org/products/quills/issues/45 | crawl-002 | refinedweb | 2,910 | 52.56 |
select, pselect, FD_CLR, FD_ISSET, FD_SET, FD_ZERO - synchronous I/O multiplexing
/*);
#define _XOPEN_SOURCE 600
#include <sys/select.h>
int pselect(int nfds, fd_set *readfds, fd_set *writefds,
fd_set *exceptfds, const struct timespec *timeout,
const sigset_t *sigmask););. zero, and a non-NULL timeout as a fairly portable way to sleep with subsecond precision.
On Linux, select() modifies timeout to reflect the amount of time not slept; most other implementations do not do this. (POSIX.1-2001 permits either behaviour.));
/* Dont rely on the value of tv now! */
if (retval == -1)
perror("select()");
else if (retval)
printf("Data is available now.\n");
/* FD_ISSET(0, &rfds) will be true. */
else
printf("No data within five seconds.\n");
return 0;
}
select() conforms to POSIX.1-2001 and 4.4BSD
pselect() is defined in POSIX.1g, and in POSIX.1-2001.
Advertisements | http://www.tutorialspoint.com/unix_system_calls/_newselect.htm | CC-MAIN-2016-36 | refinedweb | 135 | 53.98 |
GenStage behaviour (gen_stage v1.1.1) View Source
Stages are data-exchange steps that send and/or receive data from other stages.
When a stage sends data, it acts as a producer. When it receives data, it acts as a consumer. Stages may take both producer and consumer roles at once.
Stage types
Besides taking both producer and consumer roles, a stage may be called "source" if it only produces items or called "sink" if it only consumes items.
For example, imagine the stages below where A sends data to B that sends data to C:
[A] -> [B] -> [C]
we conclude that:
- A is only a producer (and therefore a source)
- B is both producer and consumer
- C is only a consumer (and therefore a sink)
As we will see in the upcoming Examples section, we must specify the type of the stage when we implement each of them..
A consumer may have multiple producers and a producer may have
multiple consumers. When a consumer asks for data, each producer
is handled separately, with its own demand. When a producer
receives demand and sends data to multiple consumers, the demand
is tracked and the events are sent by a dispatcher. This allows
producers to send data using different "strategies". See
GenStage.Dispatcher for more information.
Many developers tend to create layers of stages, such as A, B and C, for achieving concurrency. If all you want is concurrency, starting multiple instances of the same stage is enough. Layers in GenStage must be created when there is a need for back-pressure or to route the data in different ways.
For example, if you need the data to go over multiple steps but without a need for back-pressure or without a need to break the data apart, do not design it as such:
[Producer] -> [Step 1] -> [Step 2] -> [Step 3]
Instead it is better to design it as:
[Consumer] / [Producer]-<-[Consumer] \ [Consumer]
where "Consumer" are multiple processes running the same code that subscribe to the same "Producer".
Example
Let's define the simple pipeline below:
[A] -> [B] -> [C]
where A is a producer that will emit items starting from 0, B is a producer-consumer that will receive those items and multiply them by a given number and C will receive those events and print them to the terminal.
Let's start with A. Since A is a producer, its main
responsibility is to receive demand and generate events.
Those events may be in memory or an external queue system.
For simplicity, let's implement a simple counter starting
from a given value of
counter received on
init/1:
defmodule A do use GenStage def start_link(number) do GenStage.start_link(A, number) end def init(counter) do {:producer, counter} end def handle_demand(demand, counter) when demand > 0 do # If the counter is 3 and we ask for 2 items, we will # emit the items 3 and 4, and set the state to 5. events = Enum.to_list(counter..counter+demand-1) {:noreply, events, counter + demand} end end
B is a producer-consumer. This means it does not explicitly handle the demand because the demand is always forwarded to its producer. Once A receives the demand from B, it will send events to B which will be transformed by B as desired. In our case, B will receive events and multiply them by a number given on initialization and stored as the state:
defmodule B do use GenStage def start_link(multiplier) do GenStage.start_link(B, multiplier) end def init(multiplier) do {:producer_consumer, multiplier} end def handle_events(events, _from, multiplier) do events = Enum.map(events, & &1 * multiplier) {:noreply, events, multiplier} end end
C will finally receive those events and print them every second to the terminal:
defmodule C do use GenStage def start_link(_opts) do GenStage.start_link(C, :ok) end def init(:ok) do {:consumer, :the_state_does_not_matter} end def handle_events(events, _from, state) do # Wait for a second. Process.sleep(1000) # Inspect the events. IO.inspect(events) # We are a consumer, so we would never emit items. {:noreply, [], state} end end
Now we can start and connect them:
{:ok, a} = A.start_link(0) # starting from zero {:ok, b} = B.start_link(2) # multiply by 2 {:ok, c} = C.start_link([]) # state does not matter GenStage.sync_subscribe(c, to: b) GenStage.sync_subscribe(b, to: a)
Typically, we subscribe from bottom to top. Since A will start producing items only when B connects to it, we want this subscription to happen when the whole pipeline is ready. After you subscribe all of them, demand will start flowing upstream and events downstream.
When implementing consumers, we often set the
:max_demand and
:min_demand on subscription. The
:max_demand specifies the
maximum amount of events that must be in flow while the
:min_demand
specifies the minimum threshold to trigger for more demand. For
example, if
:max_demand is 1000 and
:min_demand is 750,
the consumer will ask for 1000 events initially and ask for more
only after it processes at least 250.
In the example above, B is a
:producer_consumer and therefore
acts as a buffer. Getting the proper demand values in B is
important: making the buffer too small may make the whole pipeline
slower, making the buffer too big may unnecessarily consume
memory.
When such values are applied to the stages above, it is easy to see the producer works in batches. The producer A ends-up emitting batches of 50 items which will take approximately 50 seconds to be consumed by C, which will then request another batch of 50 items.
init and
In the example above, we have started the processes A, B, and C
independently and subscribed them later on. But most often it is
simpler to subscribe a consumer to its producer on its
init/1
callback. This way, if the consumer crashes, restarting the consumer
will automatically re-invoke its
init/1 callback and resubscribe
it to the producer.
This approach works as long as the producer can be referenced when
the consumer starts - such as by name for a named process. For example,
if we change the process
A and
B to be started as follows:
# Let's call the stage in module A as A GenStage.start_link(A, 0, name: A) # Let's call the stage in module B as B GenStage.start_link(B, 2, name: B) # No need to name consumers as they won't be subscribed to GenStage.start_link(C, :ok)
We can now change the
init/1 callback for C to the following:
def init(:ok) do {:consumer, :the_state_does_not_matter, subscribe_to: [B]} end
Subscription options as outlined in
sync_subscribe/3 can also be
given by making each subscription a tuple, with the process name or
pid as first element and the options as second:
def init(:ok) do {:consumer, :the_state_does_not_matter, subscribe_to: [{B, options}]} end
Similarly, we should change
B to subscribe to
A on
init/1. Let's
also set
:max_demand to 10 when we do so:
def init(number) do {:producer_consumer, number, subscribe_to: [{A, max_demand: 10}]} end
And we will no longer need to call
sync_subscribe/2.
Another advantage of using
to leverage concurrency by simply starting multiple consumers that subscribe
to their producer (or producer-consumer). This can be done in the example above
by simply calling start link multiple times:
# Start 4 consumers GenStage.start_link(C, :ok) GenStage.start_link(C, :ok) GenStage.start_link(C, :ok) GenStage.start_link(C, :ok)
In a supervision tree, this is often done by starting multiple workers. Typically
we update each
start_link/1 call to start a named process:
def start_link(number) do GenStage.start_link(A, number, name: A) end
And the same for module
B:
def start_link(number) do GenStage.start_link(B, number, name: B) end
Module
C does not need to be updated because it won't be subscribed to.
Then we can define our supervision tree like this:
children = [ {A, 0}, {B, 2}, Supervisor.child_spec({C, []}, id: :c1), Supervisor.child_spec({C, []}, id: :c2), Supervisor.child_spec({C, []}, id: :c3), Supervisor.child_spec({C, []}, id: :c4) ] Supervisor.start_link(children, strategy: :rest_for_one)
Having multiple consumers is often the easiest and simplest way to leverage concurrency in a GenStage pipeline, especially if events can be processed out of order.
Also note that we set the supervision strategy to
:rest_for_one. This
is important because if the producer A terminates, all of the other
processes will terminate too, since they are consuming events produced
by A. In this scenario, the supervisor will see multiple processes shutting
down at the same time, and conclude there are too many failures in a short
interval. However, if the strategy is
:rest_for_one, the supervisor will
shut down the rest of tree, and already expect the remaining process to fail.
One downside of
:rest_for_one though is that if a
C process dies, any other
C process after it will die too. You can solve this by putting them under
their own supervisor.
Another alternative to the scenario above is to use a
ConsumerSupervisor
for consuming the events instead of N consumers. The
ConsumerSupervisor
will communicate with the producer respecting the back-pressure properties
and start a separate supervised process per event. The number of children
concurrently running in a
ConsumerSupervisor is at most
max_demand and
the average amount of children is
(max_demand + min_demand) / 2.
Usage guidelines
As you get familiar with GenStage, you may want to organize your stages according to your business domain. For example, stage A does step 1 in your company workflow, stage B does step 2 and so forth. That's an anti- pattern.
The same guideline that applies to processes also applies to GenStage: use processes/stages to model runtime properties, such as concurrency and data-transfer, and not for code organization or domain design purposes. For the latter, you should use modules and functions.
If your domain has to process the data in multiple steps, you should write
that logic in separate modules and not directly in a
GenStage. You only add
stages according to the runtime needs, typically when you need to provide back-
pressure or leverage concurrency. This way you are free to experiment with
different
GenStage pipelines without touching your business rules.
Finally, if you don't need back-pressure at all and you just need to process
data that is already in-memory in parallel, a simpler solution is available
directly in Elixir via
Task.async_stream/2. This function consumes a stream
of data, with each entry running in a separate task. The maximum number of tasks
is configurable via the
:max_concurrency option.
Buffering
In many situations, producers may attempt to emit events while no consumers
have yet subscribed. Similarly, consumers may ask producers for events
that are not yet available. In such cases, it is necessary for producers
to buffer events until a consumer is available or buffer the consumer
demand until events arrive, respectively. As we will see next, buffering
events can be done automatically by
GenStage, while buffering the demand
is a case that must be explicitly considered by developers implementing
producers.
Buffering events
Due to the concurrent nature of Elixir software, sometimes a producer
may dispatch events without consumers to send those events to. For example,
imagine a
:consumer B subscribes to
:producer A. Next, the consumer B
sends demand to A, which starts producing events to satisfy the demand.
Now, if the consumer B crashes, the producer may attempt to dispatch the
now produced events but it no longer has a consumer to send those events to.
In such cases, the producer will automatically buffer the events until another
consumer subscribes. Note however, all of the events being consumed by
B in its
handle_events at the moment of the crash will be lost.
The buffer can also be used in cases where external sources only send events in batches larger than asked for. For example, if you are receiving events from an external source that only sends events in batches of 1000 and the internal demand is smaller than that, the buffer allows you to always emit batches of 1000 events even when the consumer has asked for less.
In all of those cases when an event cannot be sent immediately by
a producer, the event will be automatically stored and sent the next
time consumers ask for events. The size of the buffer is configured
via the
:buffer_size option returned by
init/1 and the default
value is
10_000. If the
buffer_size is exceeded, an error is logged.
See the documentation for
init/1 for more detailed information about
the
:buffer_size option.
Buffering demand
In case consumers send demand and the producer is not yet ready to fill in the demand, producers must buffer the demand until data arrives.
As an example, let's implement a producer that broadcasts messages to consumers. For producers, we need to consider two scenarios:
- what if events arrive and there are no consumers?
- what if consumers send demand and there are not enough events?
One way to implement such a broadcaster is to simply rely on the internal
buffer available in
GenStage, dispatching events as they arrive, as explained
in the previous section:
defmodule Broadcaster do use GenStage @doc "Starts the broadcaster." def start_link() do GenStage.start_link(__MODULE__, :ok, name: __MODULE__) end @doc "Sends an event and returns only after the event is dispatched." def sync_notify(event, timeout \\ 5000) do GenStage.call(__MODULE__, {:notify, event}, timeout) end def init(:ok) do {:producer, :ok, dispatcher: GenStage.BroadcastDispatcher} end def handle_call({:notify, event}, _from, state) do {:reply, :ok, [event], state} # Dispatch immediately end def handle_demand(_demand, state) do {:noreply, [], state} # We don't care about the demand end end
By always sending events as soon as they arrive, if there is any demand,
we will serve the existing demand, otherwise the event will be queued in
GenStage's internal buffer. In case events are being queued and not being
consumed, a log message will be emitted when we exceed the
:buffer_size
configuration. This behavior can be customized by implementing the optional
format_discarded/2 callback.
While the implementation above is enough to solve the constraints above,
a more robust implementation would have tighter control over the events
and demand by tracking this data locally, leaving the
GenStage internal
buffer only for cases where consumers crash without consuming all data.
To handle such cases, we will use a two-element tuple as the broadcaster state where the first element is a queue and the second element is the pending demand. When events arrive and there are no consumers, we will store the event in the queue alongside information about the process that broadcast the event. When consumers send demand and there are not enough events, we will increase the pending demand. Once we have both data and demand, we acknowledge the process that has sent the event to the broadcaster and finally broadcast the event downstream.
defmodule QueueBroadcaster do use GenStage @doc "Starts the broadcaster.", pending_demand}) do queue = :queue.in({from, event}, queue) dispatch_events(queue, pending_demand, []) end def handle_demand(incoming_demand, {queue, pending_demand}) do dispatch_events(queue, incoming_demand + pending_demand, []) end defp dispatch_events(queue, 0, events) do {:noreply, Enum.reverse(events), {queue, 0}} end defp dispatch_events(queue, demand, events) do case :queue.out(queue) do {{:value, {from, event}}, queue} -> GenStage.reply(from, :ok) dispatch_events(queue, demand - 1, [event | events]) {:empty, queue} -> {:noreply, Enum.reverse(events), {queue, demand}} end end end
Let's also implement a consumer that automatically subscribes to the
broadcaster on
init/1. The advantage of doing so on initialization
is that, if the consumer crashes while it is supervised, the subscription
is automatically re-established when the supervisor restarts it.
defmodule Printer do use GenStage @doc "Starts the consumer." def start_link() do GenStage.start_link(__MODULE__, :ok) end def init(:ok) do # Starts a permanent subscription to the broadcaster # which will automatically start requesting items. {:consumer, :ok, subscribe_to: [QueueBroadcaster]} end def handle_events(events, _from, state) do for event <- events do IO.inspect {self(), event} end {:noreply, [], state} end end
With the broadcaster in hand, now let's start the producer as well as multiple consumers:
# Start the producer QueueBroadcaster.start_link() # Start multiple consumers Printer.start_link() Printer.start_link() Printer.start_link() Printer.start_link()
At this point, all consumers must have sent their demand which we were not
able to fulfill. Now by calling
QueueBroadcaster.sync_notify/1, the event
shall be broadcast to all consumers at once as we have buffered the demand
in the producer:
QueueBroadcaster.sync_notify(:hello_world)
If we had called
QueueBroadcaster.sync_notify(:hello_world) before any
consumer was available, the event would also have been buffered in our own
queue and served only when demand had been received.
By having control over the demand and queue, the broadcaster has full control on how to behave when there are no consumers, when the queue grows too large, and so forth.
Asynchronous work and
handle_subscribe
Both
:producer_consumer and
:consumer stages have been designed to do
their work in the
handle_events/3 callback. This means that, after
handle_events/3 has been executed, both
:producer_consumer and
:consumer
stages will immediately send demand upstream and ask for more items. It is
assumed that events have been fully processed by
handle_events/3.
Such default behaviour makes
:producer_consumer and
:consumer stages
unfeasible for doing asynchronous work. However, given
GenStage was designed
to run with multiple consumers, it is not a problem to perform synchronous or
blocking actions inside
handle_events/3 as you can then start multiple
consumers in order to max both CPU and IO usage as necessary.
On the other hand, if you must perform some work asynchronously,
GenStage comes with an option that manually controls how demand
is sent upstream, avoiding the default behaviour where demand is
sent after
handle_events/3. Such can be done by implementing
the
handle_subscribe/4 callback and returning
{:manual, state}
instead of the default
{:automatic, state}. Once the consumer mode
is set to
:manual, developers must use
GenStage.ask/3 to send
demand upstream when necessary.
Note that
:max_demand and
:min_demand must be manually respected when
asking for demand through
GenStage.ask/3.
For example, the
ConsumerSupervisor module processes events
asynchronously by starting a process for each event and this is achieved by
manually sending demand to producers.
ConsumerSupervisor
can be used to distribute work to a limited amount of
processes, behaving similar to a pool where a new process is
started for each event. See the
ConsumerSupervisor docs for more
information.
Setting the demand to
:manual in
handle_subscribe/4 is not
only useful for asynchronous work but also for setting up other
mechanisms for back-pressure. As an example, let's implement a
consumer that is allowed to process a limited number of events
per time interval. Those are often called rate limiters:
defmodule RateLimiter do use GenStage def init(_) do # Our state will keep all producers and their pending demand {:consumer, %{}} end def handle_subscribe(:producer, opts, from, producers) do # We will only allow max_demand events every 5000 milliseconds pending = opts[:max_demand] || 1000 interval = opts[:interval] || 5000 # Register the producer in the state producers = Map.put(producers, from, {pending, interval}) # Ask for the pending events and schedule the next time around producers = ask_and_schedule(producers, from) # Returns manual as we want control over the demand {:manual, producers} end def handle_cancel(_, from, producers) do # Remove the producers from the map on unsubscribe {:noreply, [], Map.delete(producers, from)} end def handle_events(events, from, producers) do # Bump the amount of pending events for the given producer producers = Map.update!(producers, from, fn {pending, interval} -> {pending + length(events), interval} end) # Consume the events by printing them. IO.inspect(events) # A producer_consumer would return the processed events here. {:noreply, [], producers} end def handle_info({:ask, from}, producers) do # This callback is invoked by the Process.send_after/3 message below. {:noreply, [], ask_and_schedule(producers, from)} end defp ask_and_schedule(producers, from) do case producers do %{^from => {pending, interval}} -> # Ask for any pending events GenStage.ask(from, pending) # And let's check again after interval Process.send_after(self(), {:ask, from}, interval) # Finally, reset pending events to 0 Map.put(producers, from, {0, interval}) %{} -> producers end end end
Let's subscribe the
RateLimiter above to the
producer we have implemented at the beginning of the module
documentation:
{:ok, a} = GenStage.start_link(A, 0) {:ok, b} = GenStage.start_link(RateLimiter, :ok) # Ask for 10 items every 2 seconds GenStage.sync_subscribe(b, to: a, max_demand: 10, interval: 2000)
Although the rate limiter above is a consumer, it could be made a
producer-consumer by changing
init/1 to return a
:producer_consumer
and then forwarding the events in
handle_events/3.
Callbacks
GenStage is implemented on top of a
GenServer with a few additions.
Besides exposing all of the
GenServer callbacks, it also provides
handle_demand/2 to be implemented by producers and
handle_events/3 to be
implemented by consumers, as shown above, as well as subscription-related
callbacks. Furthermore, all the callback responses have been modified to
potentially emit events. See the callbacks documentation for more
information.
By adding
use GenStage to your module, Elixir will automatically
define all callbacks for you except for the following ones:
init/1- must be implemented to choose between
:producer,
:consumer, or
:producer_consumerstages
handle_demand/2- must be implemented by
:producerstages
handle_events/3- must be implemented by
:producer_consumerand
:consumerstages
use GenStage also defines a
child_spec/1 function, allowing the
defined module to be put under a supervision tree in Elixir v1.5+.
The generated
child_spec/1 can be customized with the following options:
:id- the child specification id, defaults to the current module
:start- how to start the child process (defaults to calling
__MODULE__.start_link/1)
:restart- when the child should be restarted, defaults to
:permanent
:shutdown- how to shut down the child
For example:
use GenStage, restart: :transient, shutdown: 10_000
See the
Supervisor docs for more information.
Although this module exposes functions similar to the ones found in
the
GenServer API, like
call/3 and
cast/2, developers can also
rely directly on GenServer functions such as
GenServer.multi_call/4
and
GenServer.abcast/3 if they wish to.
Name registration
GenStage is bound to the same name registration rules as a
GenServer.
Read more about it in the
GenServer docs.
Message protocol overview
This section will describe the message protocol implemented by stages. By documenting these messages, we will allow developers to provide their own stage implementations.
Back-pressure
When data is sent between stages, it is done by a message protocol that provides back-pressure. The first step is for the consumer to subscribe to the producer. Each subscription has a unique reference.
Once subscribed, the consumer may ask the producer for messages for the given subscription. The consumer may demand more items whenever it wants to. A consumer must never receive more data than it has asked for from any given producer stage.
A consumer may have multiple producers, where each demand is managed
individually (on a per-subscription basis). A producer may have multiple
consumers, where the demand and events are managed and delivered according to
a
GenStage.Dispatcher implementation.
Producer messages
The producer is responsible for sending events to consumers based on demand. These are the messages that consumers can send to producers:
{:"$gen_producer", from :: {consumer_pid, subscription_tag}, {:subscribe, current, options}}- sent by the consumer to the producer to start a new subscription.
Before sending, the consumer MUST monitor the producer for clean-up purposes in case of crashes. The
subscription_tagis unique to identify the subscription. It is typically the subscriber monitoring reference although it may be any term. Once sent, the consumer MAY immediately send demand to the producer. The
currentfield, when not
nil, is a two-item tuple containing a subscription that must be cancelled with the given reason before the current one is accepted. Once received, the producer MUST monitor the consumer. However, if the subscription reference is known, it MUST send a
:cancelmessage to the consumer instead of monitoring and accepting the subscription.
{:"$gen_producer", from :: {consumer_pid, subscription_tag}, {:cancel, reason}}- sent by the consumer to cancel a given subscription.
Once received, the producer MUST send a
:cancelreply to the registered consumer (which may not necessarily be the one received in the tuple above). Keep in mind, however, there is no guarantee such messages can be delivered in case the producer crashes before. If the pair is unknown, the producer MUST send an appropriate cancel reply.
{:"$gen_producer", from :: {consumer_pid, subscription_tag}, {:ask, demand}}- sent by consumers to ask demand for a given subscription (identified by
subscription_tag).
Once received, the producer MUST send data up to the demand. If the pair is unknown, the producer MUST send an appropriate cancel reply.
Consumer messages
The consumer is responsible for starting the subscription and sending demand to producers. These are the messages that producers can send to consumers:
{:"$gen_consumer", from :: {producer_pid, subscription_tag}, {:cancel, reason}}- sent by producers to cancel a given subscription.
It is used as a confirmation for client cancellations OR whenever the producer wants to cancel some upstream demand.
{:"$gen_consumer", from :: {producer_pid, subscription_tag}, events :: [event, ...]}- events sent by producers to consumers.
subscription_tagidentifies the subscription. The third argument is a non-empty list of events. If the subscription is unknown, the events must be ignored and a cancel message must be sent to the producer.
Link to this section Summary
Types
Option values used by the
init* common to
:consumer and
:producer_consumer types
Option values used by the
init* functions when stage type is
:consumer
Option values used by the
init* common to
:producer and
:producer_consumer types
Option values used by the
init* functions when stage type is
:producer_consumer
Option values used by the
init* specific to
:producer type
Option values used by the
init* functions when stage type is
:producer
Option used by the
subscribe* functions
Options used by the
subscribe* functions
The term that identifies a subscription.
Callbacks
Invoked when items are discarded from the buffer.
The same as
GenServer.format_status/2.
Invoked when a consumer is no longer subscribed to a producer.
Invoked on
:producer stages.
Invoked on
:producer_consumer and
:consumer stages to handle events.
Invoked to handle all other messages.
Invoked when a consumer subscribes to a producer.
Invoked when the server is started.
The same as
GenServer.terminate/2.
Functions
Asks the given demand to the producer.
Asynchronously queues an info message that is delivered after all currently buffered events.
Cancels
subscription_tag with
reason and resubscribe
to the same stage with the given options.
Asks the consumer to subscribe to the given producer asynchronously.
Makes a synchronous call to the
stage and waits for its reply.
Cancels the given subscription on the producer.
Sends an asynchronous request to the
stage.
Returns the demand mode for a producer.
Sets the demand mode for a producer.
Returns the estimated number of buffered items for a producer.
Starts a producer stage from an enumerable (or stream).
Replies to a client.
Stops the stage with the given
reason.
Creates a stream that subscribes to the given producers and emits the appropriate messages.
Queues an info message that is delivered after all currently buffered events.
Cancels
subscription_tag with
reason and resubscribe
to the same stage with the given options.
Asks the consumer to subscribe to the given producer synchronously.
Link to this section Types
consumer_and_producer_consumer_option()View Source
Specs
consumer_and_producer_consumer_option() :: {:subscribe_to, [atom() | pid() | {GenServer.server(), subscription_options()}]}
Option values used by the
init* common to
:consumer and
:producer_consumer types
consumer_option()View Source
Specs
consumer_option() :: consumer_and_producer_consumer_option()
Option values used by the
init* functions when stage type is
:consumer
from()View Source
Specs
from() :: {pid(), subscription_tag()}
The term that identifies a subscription associated with the corresponding producer/consumer.
producer_and_producer_consumer_option()View Source
Specs
producer_and_producer_consumer_option() :: {:buffer_size, non_neg_integer() | :infinity} | {:buffer_keep, :first | :last} | {:dispatcher, module() | {module(), GenStage.Dispatcher.options()}}
Option values used by the
init* common to
:producer and
:producer_consumer types
producer_consumer_option()View Source
Specs
producer_consumer_option() :: producer_and_producer_consumer_option() | consumer_and_producer_consumer_option()
Option values used by the
init* functions when stage type is
:producer_consumer
producer_only_option()View Source
Specs
producer_only_option() :: {:demand, :forward | :accumulate}
Option values used by the
init* specific to
:producer type
producer_option()View Source
Specs
producer_option() :: producer_only_option() | producer_and_producer_consumer_option()
Option values used by the
init* functions when stage type is
:producer
stage()View Source
Specs
The stage.
subscription_option()View Source
Specs
subscription_option() :: {:cancel, :permanent | :transient | :temporary} | {:to, GenServer.server()} | {:min_demand, integer()} | {:max_demand, integer()} | {atom(), term()}
Option used by the
subscribe* functions
subscription_options()View Source
Specs
subscription_options() :: [subscription_option()]
Options used by the
subscribe* functions
subscription_tag()View Source (opaque)
Specs
subscription_tag()
The term that identifies a subscription.
type()View Source
Specs
type() :: :producer | :consumer | :producer_consumer
The supported stage types.
Link to this section Callbacks
code_change(old_vsn, state, extra)View Source (optional)
Specs
code_change(old_vsn, state :: term(), extra :: term()) :: {:ok, new_state :: term()} | {:error, reason :: term()} when old_vsn: term() | {:down, term()}
The same as
GenServer.code_change/3.
format_discarded(discarded, state)View Source (optional)
Specs
format_discarded(discarded :: non_neg_integer(), state :: term()) :: boolean()
Invoked when items are discarded from the buffer.
It receives the number of excess (discarded) items from this invocation. This callback returns a boolean that controls whether the default error log for discarded items is printed or not. Return true to print the log, return false to skip the log.
format_status(arg1, list)View Source (optional)
Specs
format_status(:normal | :terminate, [ pdict :: {term(), term()} | (state :: term()), ... ]) :: status :: term()
The same as
GenServer.format_status/2.
handle_call(request, from, state)View Source (optional)
Specs
handle_call(request :: term(), from :: GenServer.from(), state :: term()) :: {:reply, reply, [event], new_state} | {:reply, reply, [event], new_state, :hibernate} | {:noreply, [event], new_state} | {:noreply, [event], new_state, :hibernate} | {:stop, reason, reply, new_state} | {:stop, reason, new_state} when [reply: term(), new_state: term(), reason: term(), event: term()]
Invoked to handle synchronous
call/3 messages.
call/3 will block until a reply is received (unless the call times out or
nodes are disconnected).
request is the request message sent by a
call/3,
from is a two-element tuple
containing the caller's PID and a term that uniquely identifies the call, and
state is the current state of the
GenStage.
Returning
{:reply, reply, [events], new_state} sends the response
reply
to the caller after events are dispatched (or buffered) and continues the
loop with new state
new_state. In case you want to deliver the reply before
processing events, use
reply/2 and return
{:noreply, [event], state}.
Returning
{:noreply, [event], new_state} does not send a response to the
caller and processes the given events before continuing the loop with new
state
new_state. The response must be sent with
reply/2.
Hibernating is also supported as an atom to be returned from either
:reply and
:noreply tuples.
Returning
{:stop, reason, reply, new_state} stops the loop and
terminate/2
is called with reason
reason and state
new_state. Then the
reply is sent
as the response to the call and the process exits with reason
reason.
Returning
{:stop, reason, new_state} is similar to
{:stop, reason, reply, new_state} except that no reply is sent to the caller.
If this callback is not implemented, the default implementation by
use GenStage will return
{:stop, {:bad_call, request}, state}.
handle_cancel(cancellation_reason, from, state)View Source (optional)
Specs
handle_cancel( cancellation_reason :: {:cancel | :down, reason :: term()}, from(), state :: term() ) :: {:noreply, [event], new_state} | {:noreply, [event], new_state, :hibernate} | {:stop, reason, new_state} when [event: term(), new_state: term(), reason: term()]
Invoked when a consumer is no longer subscribed to a producer.
It receives the cancellation reason, the
from tuple representing the
cancelled subscription and the state. The
cancel_reason will be a
{:cancel, _} tuple if the reason for cancellation was a
GenStage.cancel/2
call. Any other value means the cancellation reason was due to an EXIT.
If this callback is not implemented, the default implementation by
use GenStage will return
{:noreply, [], state}.
Return values are the same as
handle_cast/2.
handle_cast(request, state)View Source (optional)
Specs
handle_cast(request :: term(), state :: term()) :: {:noreply, [event], new_state} | {:noreply, [event], new_state, :hibernate} | {:stop, reason :: term(), new_state} when [new_state: term(), event: term()]
Invoked to handle asynchronous
cast/2 messages.
request is the request message sent by a
cast/2 and
state is the current
state of the
GenStage.
Returning
{:noreply, [event], new_state} dispatches the events and continues
the loop with new state
new_state.
Returning
{:noreply, [event], new_state, :hibernate} is similar to
{:noreply, new_state} except the process is hibernated before continuing the
loop. See the return values for
GenServer.handle_call/3 for more information
on hibernation.
Returning
{:stop, reason, new_state} stops the loop and
terminate/2 is
called with the reason
reason and state
new_state. The process exits with
reason
reason.
If this callback is not implemented, the default implementation by
use GenStage will return
{:stop, {:bad_cast, request}, state}.
handle_demand(demand, state)View Source (optional)
Specs
handle_demand(demand :: pos_integer(), state :: term()) :: {:noreply, [event], new_state} | {:noreply, [event], new_state, :hibernate} | {:stop, reason, new_state} when [new_state: term(), reason: term(), event: term()]
Invoked on
:producer stages.
This callback is invoked on
:producer stages with the demand from
consumers/dispatcher. The producer that implements this callback must either
store the demand, or return the amount of requested events.
Must always be explicitly implemented by
:producer stages.
Examples
def handle_demand(demand, state) do # We check if we're able to satisfy the demand and fetch # events if we aren't. events = if length(state.events) >= demand do state.events else # fetch_events() end # We dispatch only the requested number of events. {to_dispatch, remaining} = Enum.split(events, demand) {:noreply, to_dispatch, %{state | events: remaining}} end
handle_events(events, from, state)View Source (optional)
Specs
handle_events(events :: [event], from(), state :: term()) :: {:noreply, [event], new_state} | {:noreply, [event], new_state, :hibernate} | {:stop, reason, new_state} when [new_state: term(), reason: term(), event: term()]
Invoked on
:producer_consumer and
:consumer stages to handle events.
Must always be explicitly implemented by such types.
Return values are the same as
handle_cast/2.
handle_info(message, state)View Source (optional)
Specs
handle_info(message :: term(), state :: term()) :: {:noreply, [event], new_state} | {:noreply, [event], new_state, :hibernate} | {:stop, reason :: term(), new_state} when [new_state: term(), event: term()]
Invoked to handle all other messages.
message is the message and
state is the current state of the
GenStage. When
a timeout occurs the message is
:timeout.
If this callback is not implemented, the default implementation by
use GenStage will return
{:noreply, [], state}.
Return values are the same as
handle_cast/2.
handle_subscribe(producer_or_consumer, subscription_options, from, state)View Source (optional)
Specs
handle_subscribe( producer_or_consumer :: :producer | :consumer, subscription_options(), from(), state :: term() ) :: {:automatic | :manual, new_state} | {:stop, reason, new_state} when [new_state: term(), reason: term()]
Invoked when a consumer subscribes to a producer.
This callback is invoked in both producers and consumers.
producer_or_consumer will be
:producer when this callback is
invoked on a consumer that subscribed to a producer, and
:consumer
if when this callback is invoked on producers a consumer subscribed to.
For consumers, successful subscriptions must return one of:
{:automatic, new_state}- means the stage implementation will take care of automatically sending demand to producers. This is the default.
{:manual, state}- means that demand must be sent to producers explicitly via
ask/3.
:manualsubscriptions must be cancelled when
handle_cancel/3is called.
:manualcan be used when a special behaviour is desired (for example,
ConsumerSupervisoruses
:manualdemand in its implementation).
For producers, successful subscriptions must always return
{:automatic, new_state}.
:manual mode is not supported.
If this callback is not implemented, the default implementation by
use GenStage will return
{:automatic, state}.
Examples
Let's see an example where we define this callback in a consumer that will use
:manual mode. In this case, we'll store the subscription (
from) in the
state in order to be able to use it later on when asking demand via
ask/3.
def handle_subscribe(:producer, _options, from, state) do new_state = %{state | subscription: from} {:manual, new_state} end
init(args)View Source
Specs
init(args :: term()) :: {:producer, state} | {:producer, state, [producer_option()]} | {:producer_consumer, state} | {:producer_consumer, state, [producer_consumer_option()]} | {:consumer, state} | {:consumer, state, [consumer_option()]} | :ignore | {:stop, reason :: any()} when state: any()
Invoked when the server is started.
start_link/3 (or
start/3) will block until this callback returns.
args is the argument term (second argument) passed to
start_link/3
(or
start/3).
In case of successful start, this callback must return a tuple where the first element is the stage type, which is one of:
:producer
:consumer
:producer_consumer(if the stage is acting as both)
For example:
def init(args) do {:producer, some_state} end
The returned tuple may also contain 3 or 4 elements. The third
element may be the
:hibernate atom or a set of options defined
below.
Returning
:ignore will cause
start_link/3 to return
:ignore
and the process will exit normally without entering the loop or
calling
terminate/2.
Returning
{:stop, reason} will cause
start_link/3 to return
{:error, reason} and the process to exit with reason
reason
without entering the loop or calling
terminate/2.
Options
This callback may return options. Some options are specific to the chosen stage type while others are shared across all types.
:producer options
:demand- when
:forward, the demand is always forwarded to the
handle_demand/2callback. When
:accumulate, demand is accumulated until its mode is set to
:forwardvia
demand/2. This is useful as a synchronization mechanism, where the demand is accumulated until all consumers are subscribed. Defaults to
:forward.
:producer and
:producer_consumer options
:buffer_size- the size of the buffer to store events without demand. Can be
:infinityto signal no limit on the buffer size. Check the "Buffer events" section of the module documentation. Defaults to
10_000for
:producer,
:infinityfor
:producer_consumer.
:buffer_keep- returns whether the
:firstor
:lastentries should be kept on the buffer in case the buffer size is exceeded. Defaults to
:dispatcher- the dispatcher responsible for handling demands. Defaults to
GenStage.DemandDispatch. May be either an atom representing a dispatcher module or a two-element tuple with the dispatcher module and the dispatcher options.
:consumer and
:producer_consumer options
:subscribe_to- a list of producers to subscribe to. Each element represents either the producer module or a tuple with the producer module and the subscription options (as defined in
sync_subscribe/2).
terminate(reason, state)View Source (optional)
Specs
terminate(reason, state :: term()) :: term() when reason: :normal | :shutdown | {:shutdown, term()} | term()
The same as
GenServer.terminate/2.
Link to this section Functions
ask(producer_subscription, demand, opts \\ [])View Source
Specs
ask(from(), demand :: non_neg_integer(), [:noconnect | :nosuspend]) :: :ok | :noconnect | :nosuspend
Asks the given demand to the producer.
producer_subscription is the subscription this demand will be asked on; this
term could be for example stored in the stage when received in
handle_subscribe/4.
The demand is a non-negative integer with the amount of events to
ask a producer for. If the demand is
0, this function simply returns
:ok
without asking for data.
This function must only be used in the cases when a consumer
sets a subscription to
:manual mode in the
handle_subscribe/4
callback.
It accepts the same options as
Process.send/3, and returns the same value as
Process.send/3.
async_info(stage, msg)View Source
Specs
Asynchronously queues an info message that is delivered after all currently buffered events.
If the stage is a consumer, it does not have buffered events, so the message is queued immediately.
This call returns
:ok regardless if the info has been successfully
queued or not. It is typically called from the stage itself.
async_resubscribe(stage, subscription_tag, reason, opts)View Source
Specs
async_resubscribe( stage(), subscription_tag(), reason :: term(), subscription_options() ) :: :ok
Cancels
subscription_tag with
reason and resubscribe
to the same stage with the given options.
This is useful in case you need to update the options in which you are currently subscribed to in a producer.
This function is async, which means it always returns
:ok once the request is dispatched but without waiting
for its completion.
Options
This function accepts the same options as
sync_subscribe/2.
async_subscribe(stage, opts)View Source
Specs
async_subscribe(stage(), subscription_options()) :: :ok
Asks the consumer to subscribe to the given producer asynchronously.
This function is async, which means it always returns
:ok once the request is dispatched but without waiting
for its completion. This particular function is usually
called from a stage's
init/1 callback.
Options
This function accepts the same options as
sync_subscribe/2.
call(stage, request, timeout \\ 5000)View Source
Specs
Makes a synchronous call to the
stage and waits for its reply.
The client sends the given
request to the stage and waits until a reply
arrives or a timeout occurs.
handle_call/3 will be called on the stage
to handle the request.
stage stage is just late
with the reply, such reply may arrive at any time later into the caller's message
queue. The caller must in this case be prepared for this and discard any such
garbage messages that are two-element tuples with a reference as the first
element.
cancel(producer_subscription, reason, opts \\ [])View Source
Specs
Cancels the given subscription on the producer.
The second argument is the cancellation reason. Once the
producer receives the request, a confirmation may be
forwarded to the consumer (although there is no guarantee
as the producer may crash for unrelated reasons before).
The consumer will react to the cancellation according to
the
:cancel option given when subscribing. For example:
GenStage.cancel({pid, subscription}, :shutdown)
will cause the consumer to crash if the
:cancel given
when subscribing is
:permanent (the default) but it
won't cause a crash in other modes. See the options in
sync_subscribe/3 for more information.
The
cancel operation is an asynchronous request. The
third argument are same options as
Process.send/3,
allowing you to pass
:noconnect or
:nosuspend which
is useful when working across nodes. This function returns
the same value as
Process.send/3.
cast(stage, request)View Source
Specs
Sends an asynchronous request to the
stage.
This function always returns
:ok regardless of whether
the destination
stage (or node) exists. Therefore it
is unknown whether the destination stage successfully
handled the message.
handle_cast/2 will be called on the stage to handle
the request. In case the
stage is on a node which is
not yet connected to the caller one, the call is going to
block until a connection happens.
demand(stage)View Source
Specs
Returns the demand mode for a producer.
It is either
:forward or
:accumulate. See
demand/2.
demand(stage, mode)View Source
Specs
Sets the demand mode for a producer.
When
:forward, the demand is always forwarded to the
handle_demand
callback. When
:accumulate, demand is accumulated until its mode is
set to
:forward. This is useful as a synchronization mechanism, where
the demand is accumulated until all consumers are subscribed. Defaults
to
:forward.
This command is asynchronous.
estimate_buffered_count(stage, timeout \\ 5000)View Source
Specs
estimate_buffered_count(stage(), timeout()) :: non_neg_integer()
Returns the estimated number of buffered items for a producer.
from_enumerable(stream, opts \\ [])View Source
Specs
from_enumerable( Enumerable.t(), keyword() ) :: GenServer.on_start()
Starts a producer stage from an enumerable (or stream).
This function will start a stage linked to the current process that will take items from the enumerable when there is demand. Since streams are enumerables, we can also pass streams as arguments (in fact, streams are the most common argument to this function).
The enumerable is consumed in batches, retrieving
max_demand
items the first time and then
max_demand - min_demand the
next times. Therefore, for streams that cannot produce items
that fast, it is recommended to pass a lower
:max_demand
value as an option.
It is also expected the enumerable is able to produce the whole
batch on demand or terminate. If the enumerable is a blocking one,
for example, because it needs to wait for data from another source,
it will block until the current batch is fully filled. GenStage and
Flow were created exactly to address such issue. So if you have a
blocking enumerable that you want to use in your Flow, then it must
be implemented with GenStage and integrated with
from_stages/2.
When the enumerable finishes or halts, the stage will exit with
:normal reason. This means that, if a consumer subscribes to
the enumerable stage and the
:cancel option is set to
:permanent, which is the default, the consumer will also exit
with
:normal reason. This behaviour can be changed by setting
the
:cancel option to either
:transient or
:temporary
at the moment of subscription as described in the
sync_subscribe/3
docs.
Keep in mind that streams that require the use of the process
inbox to work most likely won't behave as expected with this
function since the mailbox is controlled by the stage process
itself. As explained above, stateful or blocking enumerables
are generally discouraged, as
GenStage was designed precisely
to support exchange of data in such cases.
Options
:link- when false, does not link the stage to the current process. Defaults to
true.
:dispatcher- the dispatcher responsible for handling demands. Defaults to
GenStage.DemandDispatch. May be either an atom or a tuple with the dispatcher and the dispatcher options.
:demand- configures the demand to
:forwardor
:accumulatemode. See
init/1and
demand/2for more information.
All other options that would be given for
start_link/3 are
also accepted.
reply(client, reply)View Source
Specs
reply(GenServer.from(), term()) :: :ok
Replies to a client.
This function can be used to explicitly send a reply to a client that
called
call
GenStage
that originally received the call (as long as that
GenStage communicated the
from argument somehow).
This function always returns
:ok.
Examples
def handle_call(:reply_in_one_second, from, state) do Process.send_after(self(), {:reply, from}, 1_000) {:noreply, [], state} end def handle_info({:reply, from}, state) do GenStage.reply(from, :one_second_has_passed) end
start(module, args, options \\ [])View Source
Specs
start(module(), term(), GenServer.options()) :: GenServer.on_start()
Starts a
GenStage process without links (outside of a supervision tree).
See
start_link/3 for more information.
start_link(module, args, options \\ [])View Source
Specs
start_link(module(), term(), GenServer.options()) :: GenServer.on_start()
Starts a
GenStage process linked to the current process.
This is often used to start the
GenStage as part of a supervision tree.
Once the server is started, the
init/1 function of the given
module is
called with
args as its arguments to initialize the stage. To ensure a
synchronized start-up procedure, this function does not return until
init/1
has returned.
Note that a
GenStage started with
start_link/3 is linked to the
parent process and will exit in case of crashes from the parent. The
GenStage
will also exit due to the
:normal reason in case it is configured to trap
exits in the
init/1 callback.
Options
:name- used for name registration as described in the "Name registration" section of the module documentation
:debug- if present, the corresponding function in the
:sysmodule is invoked
This function also accepts all the options accepted by
GenServer.start_link/3.
Return values
If the stage is successfully created and initialized, this function returns
{:ok, pid}, where
pid is the pid of the stage. If a process with the
specified name already exists, this function returns
{:error, {:already_started, pid}} with the pid of that process.
If the
init/1 callback fails with
reason, this function returns
{:error, reason}. Otherwise, if
init/1 returns
{:stop, reason}
or
:ignore, the process is terminated and this function returns
{:error, reason} or
:ignore, respectively.
stop(stage, reason \\ :normal, timeout \\ :infinity)View Source
Specs
Stops the stage with the given
reason.
The
terminate/2 callback of the given
stage.
stream(subscriptions, options \\ [])View Source
Specs
stream([stage() | {stage(), keyword()}], keyword()) :: Enumerable.t()
Creates a stream that subscribes to the given producers and emits the appropriate messages.
It expects a list of producers to subscribe to. Each element
represents the producer or a tuple with the producer and the
subscription options as defined in
sync_subscribe/2:
GenStage.stream([{producer, max_demand: 100}])
If the producer process exits, the stream will exit with the same
reason. If you want the stream to halt instead, set the cancel option
to either
:transient or
:temporary as described in the
sync_subscribe/3 docs:
GenStage.stream([{producer, max_demand: 100, cancel: :transient}])
Once all producers are subscribed to, their demand is automatically
set to
:forward mode. See the
:demand and
:producers
options below for more information.
GenStage.stream/1 will "hijack" the inbox of the process
enumerating the stream to subscribe and receive messages
from producers. However it guarantees it won't remove or
leave unwanted messages in the mailbox after enumeration
unless one of the producers comes from a remote node.
For more information, read the "Known limitations" section
below.
Options
:demand- sets the demand in producers to
:forwardor
:accumulateafter subscription. Defaults to
:forwardso the stream can receive items.
:producers- the processes to set the demand to
:forwardon initialization. It defaults to the processes being subscribed to. Sometimes the stream is subscribing to a
:producer_consumerinstead of a
:producer, in such cases, you can set this option to either an empty list or the list of actual producers so their demand is properly set.
Known limitations
from_enumerable/2
This module also provides a function called
from_enumerable/2
which receives an enumerable (like a stream) and creates a stage
that emits data from the enumerable.
Given both
GenStage.from_enumerable/2 and
GenStage.stream/1
require the process inbox to send and receive messages, it is
impossible to run a
stream/2 inside a
from_enumerable/2 as
the
stream/2 will never receive the messages it expects.
Remote nodes
While it is possible to stream messages from remote nodes
such should be done with care. In particular, in case of
disconnections, there is a chance the producer will send
messages after the consumer receives its DOWN messages and
those will remain in the process inbox, violating the
common scenario where
GenStage.stream/1 does not pollute
the caller inbox. In such cases, it is recommended to
consume such streams from a separate process which will be
discarded after the stream is consumed.
sync_info(stage, msg, timeout \\ 5000)View Source
Specs
Queues an info message that is delivered after all currently buffered events.
This call is synchronous and will return after the stage has queued
the info message. The message will be eventually handled by the
handle_info/2 callback.
If the stage is a consumer, it does not have buffered events, so the messaged is queued immediately.
This function will return
:ok if the info message is successfully queued.
sync_resubscribe(stage, subscription_tag, reason, opts, timeout \\ 5000)View Source
Specs
sync_resubscribe( stage(), subscription_tag(), reason :: term(), subscription_options(), timeout() ) :: {:ok, subscription_tag()} | {:error, :not_a_consumer} | {:error, {:bad_opts, String.t()}}
Cancels
subscription_tag with
reason and resubscribe
to the same stage with the given options.
This is useful in case you need to update the options in which you are currently subscribed to in a producer.
This function is sync, which means it will wait until the subscription message is sent to the producer, although it won't wait for the subscription confirmation.
See
sync_subscribe/2 for options and more information.
sync_subscribe(stage, opts, timeout \\ 5000)View Source
Specs
sync_subscribe(stage(), subscription_options(), timeout()) :: {:ok, subscription_tag()} | {:error, :not_a_consumer} | {:error, {:bad_opts, String.t()}}
Asks the consumer to subscribe to the given producer synchronously.
This call is synchronous and will return after the called consumer
sends the subscribe message to the producer. It does not, however,
wait for the subscription confirmation. Therefore this function
will return before
handle_subscribe/4 is called in the consumer.
In other words, it guarantees the message was sent, but it does not
guarantee a subscription has effectively been established.
This function will return
{:ok, subscription_tag} as long as the
subscription message is sent. It will return
{:error, :not_a_consumer}
when the stage is not a consumer.
subscription_tag is the second element
of the two-element tuple that will be passed to
handle_subscribe/4.
Options
:cancel-
:permanent(default),
:transientor
:temporary. When permanent, the consumer exits when the producer cancels or exits. When transient, the consumer exits only if reason is not
:normal,
:shutdown, or
{:shutdown, reason}. When temporary, it never exits. In case of exits, the same reason is used to exit the consumer. In case of cancellations, the reason is wrapped in a
:canceltuple.
:min_demand- the minimum demand for this subscription. See the module documentation for more information.
:max_demand- the maximum demand for this subscription. See the module documentation for more information.
Any other option is sent to the producer stage. This may be used by
dispatchers for custom configuration. For example, if a producer uses
a
GenStage.BroadcastDispatcher, an optional
:selector function
that receives an event and returns a boolean limits this subscription to
receiving only those events where the selector function returns a truthy
value:
GenStage.sync_subscribe(consumer, to: producer, selector: fn %{key: key} -> String.starts_with?(key, "foo-") end) | https://hexdocs.pm/gen_stage/1.1.1/GenStage.html | CC-MAIN-2022-21 | refinedweb | 8,674 | 54.73 |
This blog will focus on Object Oriented Programming concepts like Inheritance, Polymorphism and writing Packages, Interfaces and Abstract classes in Java.
Packages
Packages make a project organized. It creates a level of security by letting only the classes within that package have access, and it also provides name-scoping.
- Some of them we have already come across are System (System.out.println), String, Math. They are from the package java.lang that is automatically included without having to import.
- An import statement is written at the beginning of the code and is accessible throughout. An alternate way is to type the full name on each use in the code., i.e <pkg_name>.<class_name>
- Java doesn’t compile the imported class. An import statement only saves time typing the full name of every class.
Inheritance
Public, Private, Protected and Default are called access controls. They control whether a variables or a methods can be inheritance or not.
- Public members are inherited.
- Private members are not inherited
- Protected members can be access only by subclasses
- Default can be accessed only within the same class. It doesn’t require a keyword.
Java does not support multiple inheritance, as it leads to the “Deadly Diamond of Death” problem. Lets see what that means.
The diamond problem is an ambiguity that arises when two classes B and C inherit from A and then when D inherits from B and C. Now, if there is a method in A that has been overridden by both B and C then which version of the method does D inherit? that of B or that of C? To avoid this problem, Java does not allow multiple inheritance.
Few other important aspects of inheritance:
- A non-public class can be sub-classed only by the classes in the same package.
- A class declared as final cannot be sub-classed anymore.
- A class which has all private constructors cannot be sub-classed.
- The keyword used is “extends”
Polymorphism
Polymorphism is having many forms of a method.
It is not a valid overloading if only the return type of a given method has been changed. To overload a method, the argument list has to be changed.
Abstract Class
- Abstract Classes cannot be instantiated. The compiler will restrict instantiation of an abstract class. Whereas, concrete classes can be instantiated.
- Abstract Methods must be overridden by the inherited class. Abstract methods do not have a body. It keeps a check to ensure that the inherited class implement them.
- An abstract method should always be put in an abstract class.
Lets see how they are implemented in Java:
abstract class Pan extends Utensils { public abstract void Material(); //an abstract method. } Pan p; //works p = new Spoon; //works p = new Pan; //Compiler throws an error: Pan is abstract; cannot be instantiated
It is similar to that in Python. Lets see the implementation:
from abc import ABCMeta, abstractmethod #IMPORTANT class animal: __metaclass__ = ABCMeta def delegate(self): self.action() @abstractmethod def action(self): pass class tiger(animal): pass A=animal() t = tiger()
Interfaces
An interface can be presumed to be an extra feature that are not inherited from a parent class. This kind of helps overcome the drawback of not being able to inherit multiple classes.
- An interface contains all “abstract” methods that the subclass is forced to implement. It can be perceived as a 100% abstract class
- The keyword used is “implements”
- Interfaces are always “public”. Methods in the interface are always “abstract”
- A class can implement multiple interfaces which allows adding a number of features to a class making it more meaningful, in other words more real-word than the parent class.
Here’s an example:
interface
Pet {
// all are the abstract methods.
abstract
void
wagTail(
int
a);
abstract
void
RunAround(
int
a);
abstract
void
EatFood(
int
a);
} class Dog extends Animal implements Pet {}
Conclusion:
In this blogpost we learnt some Object Oriented Programming concepts in Java. We also learnt how to implement Packages, Abstract Classes and Interfaces. In the next blog will be a deep dive into Memory management and Garbage Collection.
The previous blog was about getting started with basic concepts like using Variables, write Classes, conditional statements, loops and built-in libraries. Java for Python Developers 1 – Basics | https://kriyavikalpa.com/2019/03/06/java-for-python-developers-2-oop/ | CC-MAIN-2020-10 | refinedweb | 705 | 57.27 |
Query Expressions
Query expressions are special statements used for querying a knowledge supply exploitation the LINQ. LINQ square measure simply extension ways that you simply decision and returns the info that you simply need. These ways square measure placed in the System.Linq namespace thus {you must|you need to|you need to} embody it once you want to use LINQ in your project. question expressions square measure translated into their equivalent technique syntax that may be understood by CLR. you’ll find out about exploitation the tacticsyntax for querying information exploitation LINQ within the next lesson.
Let’s take a look at the first example of using LINQ query expression to query values from a collection. Note that we are using LINQ to Objects so we will simply use a simple array as the source.
using System; using System.Linq; namespace LinqExample { class Program { static void Main(string[] args) { int[] numbers = { 1, 2, 3, 4, 5 }; var result = from n in numbers select n; foreach (var n in result) { Console.Write(n + " "); } } } }
Example 1
1 2 3 4 5
Line 2 imports the System.Linq namespace so that we can use LINQ in our program. Line 10 declares an array of 5 integers containing some values. Line 12-13 is the simplest query expression you can make although it is currently useless right now, but we will be studying more forms of query expressions. This query expression simply gets every number from the numbers array which can then be accessed using the results variable. The structure of a basic query expression is as follows:
var query = from rangeVar in dataSource <other operations> select <projection>;
Example 2 – Structure of a basic query expression
Note that you can write a query expression in a single line, but a good practice is to separate each clause into multiple lines. Each line of the formatted query expression above is called a clause. There are seven types of clauses you can use in a query expression which includes, from, select, where, orderby, let, join, and group-by clauses. For now, we will be only looking at the from and selectclauses.
Query expressions begin with the a from clause. The from clause uses a range variable (rangeVar) which will temporarily hold a value from the data Source, followed by the in a contextual keyword and then the data source itself. This can be compared to the mechanisms of the foreach loop wherever a spread variable can hold every price retrieved from the supply. however the vary variable in a fromclause is totally different because it solely acts as a regard to every consecutive part of the information supply. this is often due to the postponed execution mechanism that you’ll learn later. The vary variable mechanically observe its kind mistreatment kind reasoning supported the sort of each part from the information supply.
After a from clause, you can insert one or more where, orderby, let, or join clauses. You can even add one or more from clauses which will be demonstrated in a later lesson.
At the tip of a question expression is a select clause. Following the select keyword could be a projection which can confirm the form or variety of every came part. For example, if a worth following the select clause is of type int, then the kind of the question are a assortment of integers. you’ll even perform additional styles of projection and transformation techniques which can be incontestible in a very later lesson. Note that a question expression may also be all over with a group-by clause, however a separate lesson are dedicated for it.
So to wrap up, a typical query expression starts with a from clause with a range variable and a data source, then followed by any of the from, where, orderby, let, or join clauses, and finally the select or group-by clause at the end of the query expression.
If you know SQL, then the syntax of the query expression might look weird to you because the from clause is placed first and the select clause is placed last. This was done so that Visual Studio can use the Intellisense feature by knowing what type an item of the data source is in advance.
The keywords used in a query expression such as from and select are examples of contextual keywords. They are only treated as keywords during specific events and locations such as a query expression. For example, you can simply use the word select as a variable name if it will not be used in a query expression. To see the complete list of contextual keywords
The result of the query is of type IEnumerable<T>. If you will look at our example, the result was placed in a variable of type var which means it uses type inference to automatically detect the type of the queried data. You can, for example, explicitly indicate the type of the query result like this:
IEnumerable<int> result = from n in numbers select n;
but it requires you to know the type of the result in advance. It is recommended to use var instead to take advantage of a lot of its features.
Line 15-18 of Example 1 shows the value of every data from the query. We simply used a foreach loop, but you must also take note that we used var as the type of the range variable. This allows the compiler to simply detect the type of every data in the query result.
The LINQ has a feature called deferred execution. It means the question expression or LINQ methodology won’t execute till the program starts to browse or access AN item from the results of the question. The question expression really simply returns a computation. the particular sequence of information are retrieved once the user asks for it. to Illustrate, the question expression are dead after you access the results victimization a foreach loop. you may learn a lot of concerning postponed execution a bit bit later.
We have successfully written our very first query expression, but as you can see, it does nothing but to query every data from the data source. Later lessons will show you more techniques such as filtering, ordering, joining, and grouping results. We will look at each of the seven query expression clauses in more depth. | https://compitionpoint.com/query-expressions/ | CC-MAIN-2021-25 | refinedweb | 1,063 | 60.35 |
(Note: These solutions were written for Advanced Event 5 of the 2010 Scripting Games.) workshops, Exchange Risk Assessments (ExRAP), various custom Exchange and Windows PowerShell knowledge transfers, and the occasional critical situation onsite assistance. He maintains a blog for Exchange and Windows PowerShell topics at.
To tackle this event, as in any data gathering related script, the first hurdle for me was where to get this piece of information. WMI is normally a great place to look, but it can be tricky to find what you are looking for. I actually decided to use my normal Windows PowerShell tricks to attempt to find this information. I know this is a VBScript solution, but I use Windows PowerShell quite a bit and doing the research on Windows PowerShell wasn’t cheating in my book. If I am in a confession mood, I also used Windows PowerShell to repeatedly test launch the script as I was composing and debugging it. I tried out Adersoft’s VBSEdit to compose the script, though I didn’t use its built-in debugging features. I am not a regular VBScripter any more, so this script posed an interesting challenge for me. I will further confess that I completely wrote the solution in Windows PowerShell before even touching VBScript. Why did I do this? Because I can compose code very quickly in Windows PowerShell. I also suspected that WMI would be the key to this solution. Knowing that, I realized that a WMI Query is a WMI query regardless of if it’s used from Windows PowerShell or VBScript.
Suspecting that WMI is the most logical source for this information, I explored WMI through Windows PowerShell where WMI is very discoverable. I was able to find a WMI class called Win32_VideoController in the default root/CIMv2 namespace, which contained a property called AdapterRAM. This property contained a value in bytes of the amount of RAM on the system’s video controllers. It is possible that a system can have more than one video adapter, and because this challenge doesn’t specify specific details about multiple cards, I chose to simply report on a card that had greater than “0” adapterRAM. The WMI query that I built specifically—and remember that WMI query construction is irrelevant of the calling technology (in this case VBScript)—was this:
SELECT SystemName,AdapterRAM,Name FROM Win32_VideoController WHERE AdapterRAM>0
The above query returns only three properties that are of interest to me, and filters out instances where the adapterRAM is not greater than “0”. This query, and the use of WMI, is the key to this solution. Everything else about the solution is simply a wrapper for presenting the information returned by this query.
The other key parts of this script was code to automatically size the amount of RAM, creating logic to operate this script against multiple machines that might be remote, and ensuring that this script was only run with Windows Script Host’s CScript.exe.
I created the function, AutoByteUnitSizer, to make for nicely reusable code to perform the automatic unit sizing. I agonized for a bit on the best logic to use for this, but in the end, I simply check if the value is larger than 1 GB, then 1 MB, and then 1 KB to determine the best unit to use. The function came out quite simple in the end and through all the numbers I could throw at it to test it out, it worked very well.
I also created the function, CheckScriptHost, to check if this script was running in CScript.exe. Rather than just hard-coding the script to only check for CScript, I decided to make the function take a single argument, being either WScript.exe or CScript.exe, and it would output the correct error message and halt the script.
To allow this script to work against multiple machines, I didn’t bother to create a function; I just created a loop on an array of machine names. The script attempts a basic WMI connection to the name, and then only when the connection was successful does the script go on and actually perform the WMI query.
The rest of the script was relatively routine script flow control and output using several wscript.echo and If statements and such. The output of the script is simple text as shown in the following image. For this test run, I gave the script three machine names: a “.”, “localhost”, and a bogus name to ensure it ran against multiple machines and handled a server that is “not responding.”
I tried to comment the code well enough so that the flow of is should be easy enough to follow. Thanks for reading my version of a solution to this fun event.
Here is my script.
' Inventory of Video Adapter RAM for Windows 7 Aero Suitability
Option Explicit
On Error Resume Next
Dim arrComputerNames, strWQLQuery, strComputer, colItems
Dim objItem, objWMIService, strScriptHost
Const KB = 1024 '1KB is 1024Bytes
Const MB = 1048576 '1MB is 1024 * 1024 Bytes
Const GB = 1073741824 '1GB is 1024 * 1024 * 1024 Bytes
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
''''''The following line is editable, simply''''''''''''''''''''''''''''
''''''add resolvable machine names separated''''''''''''''''''''''''''''
''''''by commas and in quotes.''''''''''''''''''''''''''''''''''''''''''
arrComputerNames = Array(".","localhost","bogus")
'This is the query to be used to query WMI for AdapterRAM in the Win32_
'VideoController class.
strWQLQuery = "SELECT SystemName,AdapterRAM,Name " & _
"FROM Win32_VideoController WHERE AdapterRAM>0"
'Because we have the potential for many lines of output, let’s ensure
'we are running with CScript so we don't have to hit "OK" a ton of
'times. This is a function.
CheckScriptHost("cscript.exe")
'Now we simply loop through the computer names we have in the array to allow
'this script to work against multiple machines in one run.
For Each strComputer In arrComputerNames
objWMIService = vbNull
colItems = vbNull
objItem = 0
Err.Clear
'Connect to the WMI service on the target machine
Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\cimv2")
If Err.Number > 0 Then
WScript.Echo "ERROR: " & Err.Description & " - " & strComputer & vbCrLf
Else
'As long as we don’t have an error making the connection, we go
'and query the machine for the video memory
Set colItems = objWMIService.ExecQuery(strWQLQuery,,48)
For Each objItem in colItems
WScript.Echo "SystemName: " & objItem.SystemName
WScript.Echo "Name: " & objItem.Name
WScript.Echo "AdapterRAM: " & AutoByteUnitSizer(objItem.AdapterRAM)
'Let’s verify if we have enough RAM for Win7 Aero
If objItem.AdapterRAM > (128 * MB) Then
WScript.Echo "Video Adapter is ready for upgrade"
Else
WScript.Echo "Video Adapter Requires Upgrade"
End If
WScript.Echo ""
End If
'This function automatically returns a string presentation of the
'number passed to it. It will determine the best unit size, round
'the number to two decimal places, and append the unit abbreviation
'to the end of the string.
Function AutoByteUnitSizer (intRawValue)
Dim strUnit,strValue
If ((intRawValue / GB) >= 1) Then
strUnit = "GB"
strValue = Round((intRawValue / GB),2)
ElseIf ((intRawValue / MB) >= 1) Then
strUnit = "MB"
strValue = Round((intRawValue / MB),2)
ElseIf ((intRawValue / KB) >= 1) Then
strUnit = "KB"
strValue = Round((intRawValue / KB),2)
strUnit = "Bytes"
strValue = intRawValue
AutoByteUnitSizer = (strValue & strUnit)
End Function
'This subroutine checks to see if the script is running under the correct
'host, simple pass the sub the either "cscript.exe" or "wscript.exe".
Sub CheckScriptHost(strDesiredHost)
Dim strScriptHost, strCurrentHost
strScriptHost = LCase(Wscript.FullName)
strCurrentHost = Right(strScriptHost, 11)
If strCurrentHost <> strDesiredHost Then
WScript.Echo "This script is currently running under " & strCurrentHost & _
" and " & _
"will now exit. Please run this script only under " & strDesiredHost & "."
WScript.Quit
End | http://blogs.technet.com/b/heyscriptingguy/archive/2010/05/21/expert-solutions-advanced-event-5-of-the-2010-scriping-games.aspx | CC-MAIN-2013-20 | refinedweb | 1,244 | 61.87 |
Excessive heap usage
With
Main.hs:
module Main (main) where import InputOutput main :: IO () main = do let content1 = concat (replicate 1000000 "1x") ++ "0" let i1 = fst $ input content1 view i1 let content2 = concat (replicate 1000001 "1y") ++ "0" let i2 = fst $ input content2 view i2 view :: [Char] -> IO () view [] = return () view (i : is) = i `seq` view is
and
InputOutput.hs:
module InputOutput (input) where class InputOutput a where input :: String -> (a, String) instance InputOutput Char where input (x : bs) = (x, bs) instance InputOutput a => InputOutput [a] where input ('0':bs) = ([], bs) input ('1':bs) = case input bs of (x, bs') -> case input bs' of ~(xs, bs'') -> (x : xs, bs'')
according to
ghc -O -prof -auto-all --make Main.hs -fforce-recomp ./Main +RTS -h
heap usage goes up to about 20M with the HEAD, but only about 200 bytes with 6.8.2.
This is with 6.11.20081108, but I started investigating with the HEAD when I saw similar problems with more-or-less 6.10.1. | https://gitlab.haskell.org/ghc/ghc/issues/2762 | CC-MAIN-2019-51 | refinedweb | 167 | 62.72 |
Open House Math PowerPoint. August 23, 2011. Welcome to JJ Daniell MS. In middle school, the student is responsible for his/her own work. If your child is absent, he/she will need to see the teachers to get make-up work.By adonai
Academic Tracker Instructions. The following slides show examples of how to use the Student P ortal and your Academic F older. . Student Portal. Login. User Name: Four letters of last name. (Capitalize first letter) & grade in.By analu
Preparing for the Junior Year. January 2014. The Hexagon. Higher and Standard Level. Students will select 3 SL and 3HL IB courses The selection will be for both junior and senior year courses.By ora
Introduction to Computer Programming. Looping Around Loops II : Conditional Loops. The Problem with Counting Loops. Many jobs involving the computer require repetition, and that this can be implemented using loops.By kira
3-5 Rational numbers and equations. Miss battaglia – algebra 1 CP Objective: Solve equations involving rational numbers. Warm up. A rectangular field is 120 yd long and 53 yd 1 ft wide. How much longer is the length of the field than the width?. 1 yd =3 ft. nutrition.By ian-valentine
College Algebra K /DC Friday, 05 September 2014. OBJECTIVE TSW review for Monday’s test covering algebra review. ASSIGNMENTS DUE MONDAY WS Formulas WS Assorted Problems Pick up papers from side tables. √ = 100. WS Fractions & Proportions. 2) 6 7 / 8 yd 2 4) 4 / 3 6) 51 1 / 6 lbBy gwendolyn-hogan
Objective- To solve problems involving data analysis and measures of central tendency. Data Analysis. 1) Range 2) Measures of Central Tendency Mean Median Mode. 10 20 20 30 40. Range = Highest Value - Lowest Value. Range = 40 - 10. Range = 30. 10 20 20 30 40.By jasmine-whitaker
Introduction to Computer Programming. Decisions If/Else Booleans. Your old Average friend. import java.util.Scanner; public class AverageN { //AverageN - Find the average of N values public static void main(String[] args) { Scanner keyb = new Scanner(System.in);By zenaida-buck
Conditionals : function which returns max and min in right order. f unction [ max , min] = maxmin (x, y) max = ( x+y )/2 + abs (x-y)/2 min = ( x+y )/2 - abs (x-y )/ 2 End Short but not “natural”, the following is much more comprehensible function [ max , min] = maxmin (x, y)By derossett
CSC 170 Computing: Science and Creativity. Big IDEAs: Data and Information, Algorithms, PROGRAMMING LIST OF LISTS. Institute for Personal Robots in Education (IPRE). Data in rows and columns. Data is sometimes conveniently viewed as rows and columnsBy josephinem
A more natural solution def highest (x,y): if x > y : return x else: return y ¿meaning? If x is bigger than y, then return the value of x, else , which means, if x is smaller or equal than y return the value of y. Problem. Toss a coin and get head or tailBy akiefer
View Test average PowerPoint (PPT) presentations online in SlideServe. SlideServe has a very huge collection of Test average PowerPoint presentations. You can view or download Test average presentations for your school assignment or business presentation. Browse for the presentations on every topic that you want. | https://www.slideserve.com/search/test-average-ppt-presentation | CC-MAIN-2022-05 | refinedweb | 531 | 60.41 |
NEW: Learning electronics? Ask your questions on the new Electronics Questions & Answers site hosted by CircuitLab.
Basic Electronics » Python questions
I am using python to log measurements from the temperature sensor over the COM port (a addition to the first project). I want to take measurements for a undefined amount of time, the problem is that I am stuck in the while loop and unable to close the text file using my current code. Is there a way to use a break statement to exit the while loop (for example if I hit the escape key on the keyboard)? Or maybe a different way to do this altogether?
Also is there a serial function that takes a line of code versus a number of bits? I am also sending a time in seconds over the com line and as the number become 10 to 100 etc the number of bits I will no longer get a nicely formatted text file.
#!/usr/bin/env python
#import module
import serial
#communicate with serial port
s = serial.Serial('COM3', 115200)
#write to text file
f = open('/Python27/temperature Readings/temperature.txt', 'w')
x = 1
while x == 1:
f.write(s.read(13))
f.close()
Thanks
Tom
I found a work around for my second question, however I am still stuck on the infinite while loop dilemma....
In my C program I specified the number of the number of digits before the decimal place to compensate for the longest time I may need (about a day):
printf_P(PSTR("%.1f %7.1f\n"), temp_avg, (int32_t) the_time / 100.0);
Hi tkopriva,
The serial module has a readline() method that will read characters until it encounters a newline character. That might be what you want to use instead of having a specific number of bytes read. You might need to specifically append another newline character to the line in your python code, because I think it would get stripped on the readline() call, but I'm not sure.
Your second problem is a little trickier. Any calls you make to get keyboard input from the command line are going to be blocking calls, so that isn't really an option. One solution is to just not worry about it, you can exit the Python program with Ctrl+D and any files it had opened will get closed. It just might close in a bad state with half a line written out. A second more elegant solution is to have a button on the MCU side that will cause your C code to send a special message to the python code. Python can check for this and break out of the while loop and properly close the file. If you really want to do this from the python side you can use a GUI library like PyGame. This will let you check for keypresses in a non-blocking manner and you can just exit out of the loop when a specific key is pressed. Hope that helps.
Humberto
Thanks Humberto, that helps a lot!
Tom
Please log in to post a reply. | http://www.nerdkits.com/forum/thread/1091/ | CC-MAIN-2019-30 | refinedweb | 514 | 70.53 |
A Look Inside: Tracking Bugs By User Pain
Continuing.
Related posts
18 replies on “A Look Inside: Tracking Bugs By User Pain”
I think that one of the problems is that you Guys don’t develop games on your own engine so you can’t actually tell what is worth fixing and what is not. I know that you have game demos and tutorials but this does not equal a published game that went through full development pipeline. I’d like to see Unity Games by Unity3D which would provide feedback for the engine dev team. It works really well for in-house game engines and companies like Unreal and Crytek. Cheers!
I agree.
that’s true! UnityBug3D ! has reduced lots its’ Programmer’s lifetime .
is it possible charge unity3d in law ? for so many bug kill Programmer slowly
Today my boss ask me to learn UE4. it is really piss me off. hey! Unity! when will make your light and shader better than UE4 ,I am count on it !
I’m still skeptical. It’s not the first time Unity tries to tackle bugs and fails miserably, there are years old bugs that have not been addressed. If this changes how Unity prioritizes their development to include more fixing and polish them I’m all for it. But while I like the idea that this re-prioritizes the fixing to be more “user-aware” they have failed far too many times before.
I’ll be believe when I see it.
Now you need to open up the voting. Having a limited number of votes doesn’t accomplish anything useful. Being able to dump multiple votes into an issue is also a needless complication. Let users apply a single vote to any issue. That will give you a much more accurate read on what is important.
I have all my votes dumped into a couple of items that I just hope will get attention some day, and it’s extremely frustrating to run across a new issue and not be able to raise my hand and indicate there’s at least one more project out there impacted by the problem.
Opening up voting won’t dilute the value of votes either. If somebody sits down and votes on every one of the top 1000 items, there is net zero effect. A rising tide lifts all boats, etc.
Votes aren’t a scarce resource and artificial scarcity just reduces the value of voting, probably to the point that most people don’t even bother.
You should include a time parameter to prevent bugs “starvation”
Thanks to this bug, I spent x10 time aligning all the objects in all the levels of my game while thinking that I was using prehistoric software (having to zoom all the way in the scene in order the vertex spanning to work fine, and then zooming all the way out, doing that for every new object I wanted to position in the scene). (If this would’ve worked as expected, I would’ve saved a few days work) Btw… it’s going to be one year and the bug isn’t fixed yet. :(
Would really like to see a editor dev category in prevalence – in place of 4 or at least 3.
Because what we editor devs are able to make with the abilities you give us does in the end affect almost everyone. And there are still some ceavats that are really, really annoying and sometimes double developement time for us editor devs, trying to develop workarounds…
Now that you have [User Pain] you want to evaluate [Effort] as well.
Effort is the amount of man-hours that will be spent on reproducing, fixing, testing the bug (including the risk of introducing secondary bugs)
Total effort is limited (available resources) so you want to select bugs that the sum of User Pain reduced as large as possible.
This will be some sort of [User Pain]/[Effort] metric.
tl;dr if you can fix 10 cosmetic bugs in the same time as one serious bug then it sometimes worth it.
I love how you guys give info, about how you actually are running things. Few other companies are open about it. A lot of companies hide this information behind closed doors + NDA’s.
I really love these kind of posts! :) Please keep making them.
Still waiting for a 3 month old bug that’s been fixed in 5.3 but yet to make it in to 5.4, after 3 e-mails and no replies.
So I’m all for much needed changes to this bug tracking system.
LOL. yes. and is not only a crash. Is not only pain; is n° user * time * pain)square since taking out time from making other. Imagine the ripple effects of 11,000 bugs per year + bugs that for Unity are not consider as bugs but users thinks are bugs. One example is Y up and no way to change it in Unity settings. A bug is a lack an omission. And each developer producing 1000 lacks x Yr. Is it because are running too fast? If you go fast in a light car and then you stop for a coffee, probably that slow clever old truck driver will pass you.
“some stuff + Votes”
So an issue with 5000 votes trumps everything and will be on-top of your list all the time?
I’m sure they would re-address that section in the future if they were getting many issues with 5000 votes. Right now the most voted issue has 241 votes (which is fixed) so it’s probably a good measurement for them right now.
yeah I can see how this could be a problem. Highly-voted issues in Unity Feedback tend to be stuff like “We want a Make MMORPG button”
Such a system of ‘user pain’ will only lead to exaggerated claims of disasters by users looking to jump the queue and not solve the problem at hand.
A better approach would be to improve the performance metrics gathered by the Unity Editor and concisely send them with the error report.
Also, most of the time I have a bug it’s due to running out of memory or a namespace collision or an absence of a needed library or a dll inclusion problem – all those things happen frequently if a use imports a the wrong combination of Unity asset packages into their project – so I’m doubting the validity of many of those filed bug reports or at least the conjectured cause of the bug.
5 = it crashes
4 = it runs like shit
3 = usability
2 = feature
1 = polish
new content:
5 = it crashes (bug report and Unity Internal Discussion)
4 = it runs like shit (bug report and Unity Beta Forum)
3 = usability (bug report and Documentation)
2 = feature (bug report and Video explaining the new feature )
1 = polish (bug report and Video Tutorial or Live Training about it and how to use it!)
0 = clean if (bug report = 0)
I put maya as example: Maya has old boolean bugs since 90´Alias and will be there for ever at this point; That makes a software old. Is better to fixing old bugs before making more. Makes the software stable and ready before the author goes or moves on. Only experts users can find in the stage 1 problems. but including that problems must be fix till there is 0 bug reports. In this way expert users can make much more. I can´t use Maya booleana because I can´t push them up to my limite. The limitation in this particular example is the bug. But usually the software developer how make the new feature thinks is not important to has 0 bugs. Thinks is more important to have new content. | https://blogs.unity3d.com/2016/08/17/a-look-inside-tracking-bugs-by-user-pain/ | CC-MAIN-2021-04 | refinedweb | 1,308 | 69.52 |
qdstat man page
qdstat — show status and statistics for a running 'qdrouterd'
Synopsis
qdstat [Options]
Description
An AMQP monitoring tool that
-e, --edge : Show edge connections
-a, --address : Show Router Addresses
-m, --memory : Show Router Memory Stats
--autolinks : Show Auto Links
--linkroutes : Show Link Routes
-v, --verbose : Show maximum detail
--log : Show recent log entries
--limit=LIMIT : Limit number of output rows.
--ssl-password-file=SSL-PASSWORD-FILE : Certificate password, will be prompted if not specifed.
--sasl-mechanisms=SASL-MECHANISMS : Allowed sasl mechanisms to be supplied during the sasl handshake.
--sasl-username=SASL-USERNAME : User name for SASL plain authentication
--sasl-password=SASL-PASSWORD : Password for SASL plain authentication
--sasl-password-file=SASL-PASSWORD-FILE : Password for SASL plain authentication
--ssl-disable-peer-name-verify : Disables SSL peer name verification. WARNING - This option is insecure and must not be used in production environments
Output Columns
qdstat -c
- id
The connection’s unique identifier.
- host
The hostname or internet address of the remotely-connected AMQP container.
- container
The container name of the remotely-connected AMQP container.
- role
The connection’s role:
- normal - The normal connection from a client to a router.
- inter-router - The connection between routers to form a network.
- route-container - The connection to or from a broker or other host to receive link routes and waypoints.
- dir
The direction in which the connection was established:
- in - The connection was initiated by the remote container.
- out - The connection was initiated by this router.
- security
The security or encryption method, if any, used for this connection.
- authentication
The authentication method and user ID of the connection’s authenticated user.
- tenant
If the connection is to a listener using multi-tenancy, this column displays the tenant namespace for the connection.
qdstat -l
- type
The type of link:
- router-control - An inter-router link that is reserved for control messages exchanged between routers.
- inter-router - An inter-router link that is used for normal message-routed deliveries.
- endpoint - A normal link to an external endpoint container.
- dir
The direction that messages flow on the link:
- in - Deliveries flow inbound to the router.
- out - Deliveries flow outbound from the router.
- conn id
The unique identifier of the connection over which this link is attached.
- id
The unique identifier of this link.
- peer
For link-routed links, the unique identifier of the peer link. In link routing, an inbound link is paired with an outbound link.
- class
The class of the address bound to the link:
- local - The address that is local to this router (temporary).
- topo - A topological address used for router control messages.
- router - A summary router address used to route messages to a remote router’s local addresses.
- mobile - A mobile address for an attached consumer or producer.
- link-in - The address match for incoming routed links.
- link-out - The address match for outgoing routed links.
- addr
The address bound to the link.
- phs
The phase of the address bound to the link.
- cap
The capacity, in deliveries, of the link.
- pri
The priority of the link. Priority influences the order in which links are processed within a connection. Higher numbers represent higher priorities.
- undel
The number of undelivered messages stored on the link’s FIFO.
- unsett
The number of unsettled deliveries being tracked by the link.
- del
The total number of deliveries that have transited this link.
- presett
The number of pre-settled deliveries that transited this link.
- psdrop
The number of pre-settled deliveries that were dropped due to congestion.
- acc
The number of deliveries on this link that were accepted.
- rej
The number of deliveries on this link that were rejected.
- rel
The number of deliveries on this link that were released.
- mod
The number of deliveries on this link that were modified.
- admin
The administrative status of the link:
- enabled - The link is enabled for normal operation.
- disabled - The link is disabled and should be quiescing or stopped (not yet supported).
- oper
The operational status of the link:
- up - The link is operational.
- down - The link is not attached.
- quiescing - The link is in the process of quiescing (not yet supported).
- idle - The link has completed quiescing and is idle (not yet supported).
- name
The link name (only shown if the -v option is provided).
qdstat -n
- router-id
The router’s ID.
- next-hop
If this router is not a neighbor, this field identifies the next-hop neighbor used to reach this router.
- link
The ID of the link to the neighbor router.
- cost
The topology cost to this remote router (with -v option only).
- neighbors
The list of neighbor routers (the router’s link-state). This field is available only if you specify the -v option.
- valid-origins
The list of origin routers for which the best path to the listed router passes through this router (available only with the -v option).
qdstat -a
- class
The class of the address:
- local - The address that is local to this router.
- topo - The topological address used for router control messages.
- router - A summary router address used to route messages to a remote router’s local addresses.
- mobile - A mobile address for an attached consumer or producer.
- addr
The address text.
- phs
For mobile addresses only, the phase of the address. Direct addresses have only a phase 0. Waypoint addresses have multiple phases, normally 0 and 1.
- distrib
One of the following distribution methods used for this address:
- multicast - A copy of each message is delivered once to each consumer for the address.
- closest - Each message is delivered to only one consumer for the address. The closest (lowest cost) consumer will be chosen. If there are multiple lowest-cost consumers, deliveries will be spread across those consumers.
- balanced - Each message is delivered to only one consumer for the address. The consumer with the fewest outstanding (unsettled) deliveries will be chosen. The cost of the route to the consumer is a threshold for delivery (that is, higher cost consumers will only receive deliveries if closer consumers are backed up).
- flood - Used only for router-control traffic. This is multicast without the prevention of duplicate deliveries.
- pri
The priority of the address. If the address prefix/pattern is configured with a priority, that priority will appear in this column. Messages for addresses configured with a priority will be forwarded according to the address’s priority.
- in-proc
The number of in-process consumers for this address.
- local
For this router, the number of local address prefix of the link route.
- dir
The direction of matching links (from this router’s perspective).
- distrib
The distribution method used for routed links. This value should always be linkBalanced, which is the only supported distribution for routed links.
- status
The operational status of the link route:
- active - The route is actively routing attaches (it is ready for use).
- inactive - The route is inactive, because no local destination is connected.
qstat --autolinks
- addr
The auto link’s address.
- dir
The direction that messages flow over the auto link:
- in - Messages flow in from the route-container to the router network.
- out - Messages flow out to the route-container from the router network.
- phs
The address phase for this auto link.
- link
The ID of the link managed by this auto link.
- status
The operational status of this auto link:
- inactive - There is no connected container for this auto link.
- attaching - The link is attaching to the container.
- failed - The link-attach failed.
- active - The link is operational.
- quiescing - The link is quiescing (not yet supported).
- idle - The link is idle (not yet supported).
- lastErr
The description of the last attach failure that occurred on this auto link.
See Also
qdrouterd(8), qdmanage(8), qdrouterd.conf(5)
Referenced By
qdmanage(8), qdrouterd(8). | https://www.mankier.com/8/qdstat | CC-MAIN-2019-09 | refinedweb | 1,287 | 59.09 |
With, I will provide some practical guidance and best practices on how to reuse your web assets, and how you can use your Web skills to build deeply-integrated Windows apps.
First we should understand some difference between web and windows 8. Windows 8 style app built using HTML has direct access to the underlying platform and is able to share information with other applications.
Local and web context
To understand some of the differences between how your markup and code behave in the browser and how they behave in a Windows Store app using JavaScript, you need to first understand the difference between the local context and the web context..
How to Enable Some Access
You can find more information from HTML, CSS, and JavaScript features and differences (Windows Store apps). Also, if you can find which content is considered safe, and which is not, from article Making HTML safer: details for toStaticHTML.");
Automatic filtering prevents script injection into DOM elements. For example, setting innerHTML, outerHTML, document.write, DOMParser.parseFromString, etc are typical example of automatic filtering prevention.
Automatic filtering prevents script injection into DOM elements. For example, setting innerHTML, outerHTML, document.write, DOMParser.parseFromString, etc are typical example of automatic filtering prevention.
If you really trust what you are bringing in, disables the safe HTML filtering for the specified function. You can create a function that inserts content that would normally be blocked and use MSApp.execUnsafeLocalFunction to execute that function.
If you really trust what you are bringing in,>' } );
var someElement = document.getElementById('someElementID'); MSApp.execUnsafeLocalFunction( function() { someElement.innerHTML = '<div onclick="console.log(\"hi\");">hi</div>' } );
Writes the specified HTML without using safe HTML filtering. (These functions are a part of the Windows Library for JavaScript.) If you really trust what you are bringing in, you can use WinJS.Utilities.setInnerHTMLUnsafe, setOuterHTMLUnsafe, and insertAdjacentHTMLUnsafe to serve as wrappers for calling DOM methods that would otherwise strip out risky content.
Tips and Best Practices
Organize Your Pages
Organizing Your code with WinJS.Namespace
By default, there is no namespace concept in JavaScript. The WinJS.Namespace.define function creates a public namespace. This function allows you to create your own namespace as a way of organizing your code. When you create a type or other element inside a namespace, you reference it from outside the namespace by using its qualified name: Namespace.Type. The following code shows how to create a Robotics namespace and define the Robot type inside it, and how to use the type outside the namespace:
Robotics
Robot
WinJS.Namespace.define("Robotics", { Robot: WinJS.Class.define( function(name) { this.name = name; }, { modelName: "" }, { harmsHumans: false, obeysOrders: true }) }); var myRobot = new Robotics.Robot("Mickey");
myRobot.modelName = "4500"; var harm = Robotics.Robot.harmsHumans;
You can add elements that are defined elsewhere to a namespace by defining a field and setting the field value to it. In the following code, the getAllRobots function that is defined outside the WinJS.Namespace.define function is added to the Robotics namespace:
getAllRobots
function getAllRobots() { // Get the total number of robots. return _allRobots; } ... WinJS.Namespace.define("Robotics", { getAllRobots: getAllRobots });
var _allRobots = [ new Robotics.Robot("mike"), new Robotics.Robot("ellen") ];
Important If you define an element outside a namespace, and that element depends on the existence of the namespace (like the _allRobots array), you must place the declaration of the dependent element after the namespace definition. If you try to add the definition before the namespace definition, you get the error "namespace name is undefined."
_allRobots
You can add properties to a namespace multiple times if you need to. You might want to define a function in a different file from the one in which the rest of the namespace elements are defined. Or you might want to reuse a function in several namespaces.
Unique ID
In general, it’s always good practice to always use unique ID for each element across all HTML/CSS pages. It is especially important to make sure each element has an unique ID across all HTML/CSS pages in Windows 8 platform due to the nature of Windows 8.
The recommended way to attach an event handler is to use JavaScript to retrieve the control, then use the addEventListener method to register the event. The question is, when should you retrieve the control? You could just add it anywhere to your JavaScript code, but then there's a chance it might get called before the control exists.
The answer for most HTML files is to provide a then or done function for the Promise returned by the WinJS.UI.processAll method. If it’s for Page controls, you can use the ready function instead.
What's a Promise? To provide responsive user experience, many Windows Library for JavaScript.
In JavaScript
(function () { "use strict";
WinJS.Binding.optimizeBindingReferences =().done(function () { var button1 = document.getElementById("button1"); button1.addEventListener("click", button1Click,(). };
// The click event handler for button1 function button1Click(mouseEvent) { var button1Output = document.getElementById("button1Output"); button1Output.innerText = mouseEvent.type + ": (" + mouseEvent.clientX + "," + mouseEvent.clientY + ")";
}
var namespacePublicMembers = { clickEventHandler: button1Click }; WinJS.Namespace.define("startPage", namespacePublicMembers);
app.start(); })();
In HTML
<!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>BasicAppExample</title>
<!-- WinJS references --> <link href="//Microsoft.WinJS.1.0/css/ui-dark.css" rel="stylesheet"> <script src="//Microsoft.WinJS.1.0/js/base.js"></script> <script src="//Microsoft.WinJS.1.0/js/ui.js"></script>
<!-- BasicAppExample references --> <link href="/css/default.css" rel="stylesheet" /> <script src="/js/default.js"></script> </head> <body> <button id="button1">An HTML button</button> <p id="button1Output"></p> </body> </html>
Pages and Navigation
The Blank Application template works well for a very simple app, but when you write a more complex app, you'll probably want to divide your content into multiple files.
A traditional website might have a series of pages that you navigate between using hyperlinks. Each page has its own set of JavaScript functions and data, a new set of HTML to display, style info, and so on. This navigation model is known as multi-page navigation.
Unlike a traditional website, a Windows Store app using JavaScript works best when it uses the single-page navigation model. In this model, you use a single page for our app and load additional data into that page as needed.
That means that your app never navigates away from its default.html page. Always make the default.html and default.js files your app's start up page. They define the outmost UI for your app (such as the AppBar) and handle the application lifecyle.
If you never navigate away from default.html, how do you bring in content from other pages? There are a few different ways.
For more info about navigation, see Supporting navigation.
In my next blog, I will show some examples on how to transform HTML5/CSS3 apps into Windows 8 app. | http://blogs.msdn.com/b/dorischen/archive/2012/10/02/transform-your-html-css-javascript-apps-into-windows-8-application.aspx | CC-MAIN-2014-15 | refinedweb | 1,134 | 50.02 |
WaitWindow with Python
Hi,
I want the script (python) to wait for a task manager window to close before to go further. I think to use a waitwindow method, but I don't know why it says after running that, also the object is mapped.
"Python runtime error.
so far the script is this:
def Test1():
TestedApps.ECU_TEST.Run()
TestedApps.ECU_TEST.Run(1, True)
ecu_test = Aliases.ECU_TEST2
ecu_test.dlgECUTEST20204.btnOk.ClickButton()
ecu_test.dlgTaskManager.Click(242, 19)
p = Aliases.ECU_TEST2.dlgTaskManager()
while p.WaitWindow('#32770','Task Manager','-1',5000).Exist:
pass
Thank you!
Solved! Go to Solution.
Hi,
It is my guess that the error is thrown on this line:
while p.WaitWindow('#32770','Task Manager','-1',5000).Exist:
isn't it?
If my guess is correct, then does
Aliases.ECU_TEST2.dlgTaskManager()
object exist at this moment?
If it does not, then obviously you cannot call .WaitWindow() method of the object that does not exist.
.
================================
First of all thank you,
Well the object doesn't exists at that moment.
For a big picture, I'm trying to open a tool, choose the workspace and after that a task manager window appears. And I am trying to make the script to wait for that window to appear and wait until this window is closing. Is about a few seconds, I can use a Delay but I need this kind of function (I don't know which is the correct term) also in the near future, when the tool opens another tools and I want to wait for them to be open and then close after a period of time when their jobs are finished.
I attached the task manager window. This appears random, in 1-2 sec after the workspace is set. It's uploading some stuff than is closing (all by itself). I want to specify that the picture is changing, I don't know if it's mapped like the first picture after is changing in second the script will recognize the object? Sorry for all of this newbie stuff, but it's my first week with this tool.
Again Thank you!
Hi,
> This appears random, in 1-2 sec after the workspace is set.
Probably the main problem with windows like this is that they appear at non-predictable moment of time and it may take unpredictable time to close. I.e., depending on a bunch of different factors like network, memory and CPU performance, volume of work to be done, etc. the window may open, say, within 1-10 seconds time frame from the moment of initial workspace initialization. Depending on the volume of work to be done the window may close almost immediately or remain visible on the desktop for some time. Additionally, the window may not appear at all if the volume of work is too small or is absent.
On the other hand, it takes some noticeable time for TestComplete to detect the new window and to process it.
All the above leads to a problem that if the window was not detected within, for example 1 second, you can not be sure whether this is because all the work was done and the window will not be displayed or you need to wait more because of, say, congested network or overloaded CPU.
The best strategy for the situation like this depends on your goal: whether you need to interact with such window or need just to postpone test execution until the window is closed.
Probably the best option is if you can talk to developers and they add some property to the main application window that is set to false initially and to true after the window was closed. In this case you will be able reliably wait until the window is closed without spending extra time by just waiting in a loop until this property becomes true.
Check help topic if the above is an option.
If you need to interact with the window then you should use one of the WaitXXX methods. WaitWindow() or WaitAliasChild(). Use of Aliases is a good practice in TestComplete world but be aware and remember about caching:.... Regardless of whether you are using .WaitWindow() or .WaitAliasChild(), you need to work with the window itself, but not with some of its child UI elements. Considering the code that you provided, this is the window with class name '#32770' and 'Task Manager' caption. But you may ask developers to be on the safe side.
In any case, remember, that if window may not appear or appears for too short period of time (consult with developers as for this) then you must consider some timeout in order to prevent endless wait. Selection of timeout is the major tradeoff here - you must wait long enough to let window appear and be detected by TestComplete, but not too long in order not to waste time on wait when it becomes evident that the window (most probably) will not appear.
If the window we are talking about is of modal type you may consider to use the OnUnexpectedWindow event provided by TestComplete (). In the case of using events, you should not wait for the window but just proceed with test flow. If the event is triggered, then you should check if the event was triggered by the window you are interested in (Task Manager) and either wait until it closes or interact with it if needed.
P.S. I understand that all the above might look too abstract for the one who just started to use TestComplete. Feel free to ask here if something is not clear enough.
P.P.S. May 26 SmartBear holds TestComplete Introductory Training () which might appear to be useful.
<< | https://community.smartbear.com/t5/TestComplete-Questions/WaitWindow-with-Python/m-p/217830 | CC-MAIN-2022-40 | refinedweb | 951 | 70.73 |
This appendix is a brief tour of the new capabilities added to RPG IV as of V5R1, which makes it easier to use Java Native Interface to call Java from RPG and to call RPG from Java. Appendix A discusses how to use the AS/400 Toolbox for Java to call RPG from Java, using either hand-crafted code to the ProgramCall and ServiceProgramCall classes or using the Program Call Markup Language (PCML).
While the Toolbox remains the option of choice for calling RPG from Java for many people, there is now an alternative. Specifically, the Toolbox calls RPG programs and service programs in a separate job from that running the Java Virtual Machine. The advantage of this is that there are no worries about thread safety, and it is a tried and trusted application architecture. The alternative is to use Java Native Interface, as Chapter 14 hints at, to call ILE procedures inside service programs, within the same job as the Java Virtual Machine. The advantage to this approach is strictly performance, as it saves the overhead of starting that second job. If your service program does not turn on LR, then the advantage is perhaps minor. The downside of JNI for RPG calls has always been coding complexity, but as of V5R1, RPG has been enhanced to reduce this complexity.
On the other hand, if your goal is to call Java from RPG, you really are tied to JNI. Remember, JNI comes with C APIs for starting the JVM, instantiating objects, calling methods on those objects, and translating data between compiled languages and Java. Thus, it can be used to go either way. The historical problem with using JNI to call into Java has always been that of complexity, which is increased when using RPG versus C. Once again, however, the enhancements in V5R1 of RPG IV have significantly reduced this complexity, hiding much of it behind new RPG-native syntax.
One alternative to using JNI to call Java from RPG is to launch the Java application by calling the JAVA CL command from the RPG program, and then use a data queue or data area to communicate between the two programs. Another alternative is to use an embedded SQL call to invoke the Java class as a stored procedure, leveraging the database. Both come with their own advantages and disadvantages, but this appendix focuses exclusively on the new RPG IV built-in support to call Java directly via Java Native Interface.
To call Java from RPG, the following enhancements needed to be added to the RPG language syntax:
You might also expect the ability to access public variables in an object, but this support was not added to the RPG language. If you need to access a variable, you must create a Java class with methods that return the variables you are interested in, and optionally create methods to set them, if you want write access.
The enhancements added to V5R1 of RPG IV to enable calling of Java includes the following:
The first parameter, *JAVA, again identifies this as a prototype for a Java method. The second parameter is a character literal or constant field that identifies the package-qualified class that contains the method. The third parameter is a character literal or constant field that identifies the name of the method being prototyped. Both the second and third parameters are case-sensitive. When you prototype a method, use a D-spec with PR in positions 24 and 25, exactly as you do when prototyping a procedure. On the PR spec, specify the EXTPROC keyword, and also specify the return type of the method, if it returns something. The sub-sequent D-specs that have blanks in columns 24 and 25 identify the parameters of the method, again just as with regular RPG procedure prototypes. For the return type and the parameters, if the data type is a primitive, use the corresponding native RPG data type, as shown in Table B.1. You must also specify the VALUE keyword on the parameter D-specs, as all Java primitives are passed by value. If the return type or parameter type is an object, use the new o data type and specify the CLASS keyword in the keyword area of the D-spec. This keyword has the syntax described in the previous step.
Prior to discussing the important topic of data type mapping, let's see an example of this new syntax. Imagine you want to use a Vector object from within RPG. You want to instantiate an instance of Vector and access the methods addElement, size, and elementAt within that object. To keep the example simple, instantiate and store two Integer objects in the vector. Then walk the vector, extracting each Integer object and converting it first to a String object and then to an RPG character field for the purposes of displaying its value on the console.
This example requires the following fields:
This example also requires the following prototypes:
Once the fields are declared and the constructors and methods prototyped, you can write RPG logic to do the following:
Start with the field declarations and method prototypes, as shown in Listing B.1. This declares prototypes for two Java constructors and five methods. You do not need to prototype the String constructor because you will not be instantiating a String object directly. Instead, you will get a String object back from a call to the toString method of the Integer class.
Listing B.1: Declaring Object Fields and Java Method Prototypes
D* Declare fields to hold Vector and Integer objects... D VectorObj S O CLASS(*JAVA : 'java.util.Vector') D IntegerObj1 S O CLASS(*JAVA : 'java.lang.Integer') D IntegerObj2 S O CLASS(*JAVA : 'java.lang.Integer') D* Declare field to hold a String object... D StringObj S O CLASS(*JAVA : 'java.lang.String') D* Declare primitive fields to hold int values... D IntegerField S 10I 0 INZ D Idx S 10I 0 INZ D VectorSizeFld S 10I 0 INZ D* Declare primitive field to hold RPG version of String values... D StringField S 10A VARYING D*----------------------------------------------------------------------- D* D* Declare Vector constructor prototype... D VectorCtor PR O EXTPROC(*JAVA : 'java.util.Vector' : D *CONSTRUCTOR) D CLASS(*JAVA : 'java.util.Vector' ) D* Declare Integer constructor prototype... D IntegerCtor PR O EXTPROC(*JAVA : 'java.lang.Integer' D : *CONSTRUCTOR) D CLASS(*JAVA : 'java.lang.Integer' ) D** Parameter prototype declaration for Java type: int D IntegerCtorParm1... D 10I 0 VALUE D*----------------------------------------------------------------------- D* D* Prototype for addElement method in Vector class... D VectorAddElementMethod... D PR EXTPROC(*JAVA : 'java.util.Vector' : D 'addElement') D** Parameter prototype declaration for Java type: java.lang.Object D addElementParm1... D O CLASS(*JAVA : 'java.lang.Object' ) D* Prototype for size method in Vector class D VectorSizeMethod... D PR 10I 0 EXTPROC(*JAVA : 'java.util.Vector' : D 'size') D* Prototype for elementAt method in Vector class... D VectorElementAtMethod... D PR O EXTPROC(*JAVA : 'java.util.Vector' : D 'elementAt') D CLASS(*JAVA : 'java.lang.Object' ) D** Parameter prototype declaration for Java type: int D elementAtParm1... D 10I 0 VALUE D* Prototype for toString method in Integer class... D IntegerToStringMethod... D PR O EXTPROC(*JAVA : 'java.lang.Integer' D : 'toString') D CLASS(*JAVA : 'java.lang.String' ) D* Prototype for getBytes method in String class... D StringGetBytesMethod... D PR 10A EXTPROC(*JAVA : 'java.lang.String' D : 'getBytes') D VARYING
The hard part is prototyping the constructor and method calls, and declaring fields to hold object references. Once that is done, simply write code to call the constructors and methods as though they were procedures written in RPG. The mainline code to do this is shown in Listing B.2.
Listing B.2: Instantiating, Populating, and Traversing a Java Vector from RPG IV
C* Instantiate Vector object... C EVAL VectorObj = VectorCtor() C* Instantiate first Integer object, with 10 for the value... C EVAL IntegerField = 10 C EVAL IntegerObj1 = IntegerCtor(IntegerField) C* Instantiate second Integer object, with 20 for the value... C EVAL IntegerField = 20 C EVAL IntegerObj2 = IntegerCtor(IntegerField) C* Add the two Integer objects to the Vector object... C CALLP VectorAddElementMethod(VectorObj:IntegerObj1) C CALLP VectorAddElementMethod(VectorObj:IntegerObj2) C* Walk all elements of the Vector object, displaying each element... C EVAL VectorSizeFld = VectorSizeMethod(VectorObj) C FOR Idx = 0 to (VectorSizeFld-1) C** Retrieve Integer object using elementAt method of Vector... C EVAL IntegerObj1 = C VectorElementAtMethod(VectorObj:Idx) C** Convert Integer object into String object using toString method... C EVAL StringObj = C IntegerToStringMethod(IntegerObj1) C** Convert String object into RPG character field using getBytes method C EVAL StringField = C StringGetBytesMethod(StringObj) C** Display converted String field on console... C StringField DSPLY C* End FOR loop C ENDFOR C* Exit C EVAL *INLR = *ON
This code starts by calling the Vector constructor method to instantiate a Vector object. It next calls the Integer constructor method twice, to instantiate two integer objects. It passes an RPG integer field as a parameter to the constructor, first with the number 10 and then the number 20. It next calls the addElement method of the Vector object twice to add each Integer object to that Vector object. (For non-constructor and non-static method calls, you must pass the target object as the first parameter, even though you do not prototype this first parameter. It is implicit, and simply a 3GL alternative to Java's "dot" operator.)
An RPG FOR loop visits each element in the Vector object, from zero to the size minus one, as the Vector class uses zero-based element access. The code calls the size method of the Vector object to get the element count, and stores the result in an RPG field of type integer. Each iteration of the loop first calls the elementAt method to retrieve a reference to the Integer object at that index, and then calls the toString method of the Integer object to convert the Integer to a String object, to which it is given a reference. This String object is converted to an RPG character field by calling the toBytes method on the String object, which returns a byte array of individual characters. When you assign a Java byte array to an RPG character field, RPG takes care of converting the data to an EBCDIC RPG character field. Finally, this value is displayed to the console. The result of running this program is the values 10 and 20, as you would expect.
This example illustrates how to deal with casting Java objects when calling Java from RPG. The methods in the Vector class accept and return objects of type java.lang.Object, as shown in the prototypes of the addElement and elementAt methods. However, the code passed an object of type java.lang.Integer when calling the addElement method, and assigned the result of calling elementAt to an object field of type java.lang.Integer. This is legal in Java, but the assignment would require you to cast the result, using java.lang.Integer. There is no such syntax in RPG because the casting is done for you implicitly.
The code for accessing Java from RPG can be a bit tedious to write, due to the requirement to prototype all the constructors and methods and fully describe the class types of object fields. To help with this drudgery, a wizard in the CODE editor will generate these declarations for you, given the package, class, and method you wish to call. Another wizard in CODE converts RPG IV fixed-form logic to the new free-form style. Listing B.3 shows the logic from Listing B.2 in free-form style.
Listing B.3: The Logic from Listing B.2 in Free-Form RPG style
// Insttantiate Vector object... /FREE VectorObj = VectorCtor(); // Instantiae first Integer object, with 10 for the value... IntegerField = 10; IntegerObj1 = IntegerCtor(IntegerField); // Instantiate second Integer object, with 20 for the value... IntegerField = 20; IntegerObj2 = IntegerCtor(IntegerField); // Add the two Integer objects to the Vector object... VectorAddElementMethod(VectorObj:IntegerObj1); VectorAddElementMethod(VectorObj:IntegerObj2); // Walk all elements of the Vector object, displaying each element... VectorSizeFld = VectorSizeMethod(VectorObj); FOR Idx = 0 to (VectorSizeFld-1); //* Retrieve Integer object using elementAt method of Vector... IntegerObj1 = VectorElementAtMethod(VectorObj:Idx); //* Convert Integer object into String object using toString method... StringObj = IntegerToStringMethod(IntegerObj1); //* Convert String object into RPG character field using getBytes method StringField = StringGetBytesMethod(StringObj); //* Display converted String field on console... DSPLY StringField; // End FOR loop ENDFOR; // Exit *INLR = *ON; /END-FREE
When prototyping method and constructor calls, you specify RPG data types for the parameters and the return type, not Java data types. For each of the eight primitive data types in Java, there is a corresponding RPG data type you should use, and the RPG runtime takes care of the data mapping between the two languages. For objects, use the object o data type together with the CLASS keyword to specify the class type of the object. For arrays, use RPG arrays of fields with the appropriate type for each element. Table B.1 shows the mappings from each of the Java types to their corresponding RPG types.
Some Java data types, such as byte, map to more than one RPG data type. The one you use depends on your knowledge of the contents. If the byte variable or array contains a character or characters that are the result of calling toBytes on a String object, then use a character field in RPG. On assignment, RPG will do the necessary codepage mappings. If the Java byte variable contains numeric data, assign it to an RPG three-digit integer field, so that no such mapping is done.
In Java, you can only convert characters or strings to single-byte variables and arrays if the Unicode contents can, in fact, be converted to a single-byte codepage. This will not be the case if the character or string contains true double-byte data, such as Chinese or Japanese characters. In these cases, you need to assign the Java character field to an RPG Unicode character field. For Java strings, you first need to use the toCharArray method to convert the String object into a Java character array. This, in turn, can be assigned to an RPG Unicode character field (data type c) or an array of RPG Unicode characters.
When assigning to a field versus an array, set the length to be as big as the string might possibly be (or 32,767 if you don't know how long the Java string might be), and then specify the VARYING keyword to indicate that the length may vary. Keep in mind that, while RPG Unicode characters are two bytes long internally, you specify the length in terms of number of characters, not bytes.
When you call a Java method from RPG, that method might throw an exception. For example, when RPG does its implicit casting from one object type to another, if the object types are not compatible (as discussed in Chapter 9), you will get a ClassCastException thrown. RPG intercepts all Java exceptions and converts them to standard RPG runtime errors, with one of the program status codes shown in Table B.2. (For a complete and up-to-date list of status codes, see the "Program Status Data Structure" section of the ILE RPG Reference manual.)
When calling Java from RPG, the RPG runtime takes care of starting the Java Virtual Machine, if it is not already started. To explicitly start or explicitly destroy the JVM, call the JNI (Java Native Interface) procedures JNI_CreateJavaVM or JNI_DestroyJavaVM, respectively. (You first have to call JNI_GetCreatedJavaVMs.) Calling operating-system JNI methods like these requires you to use /COPY for the copy member JNI in file QSYSINC/QRPGLESRC. The details of making these calls is beyond this book, but are well documented in the RPG Programmer's Guide, in the section on RPG and Java.
At the time of this writing, a problem exists when the JVM is used in an interactive job. Specifically, the JVM for the job is destroyed when an ILE activation group ends, and it does not start again cleanly for the same job. This is a known problem, and work is being done to address it in V5R2, and possibly V5R1 via a PTF.
When RPG starts the JVM or you start it explicitly using a JNI call, the default CLASSPATH is used. This allows access to all the Java-supplied classes. To access other classes, you need to set your CLASSPATH prior to running your RPG program. This is most easily done using the ADDENVVAR CL command, specifying ENVVAR(CLASSPATH) and a colon-separated list of IFS folders for the VALUE parameter. This command sets the CLASSPATH for the life of this job only.
In addition to calling Java from RPG, there is also RPG support for calling RPG from Java. This support simplifies the effort to code and call Java native methods that are written in RPG as ILE procedures inside service programs. The Java language standard defines a common way to access C functions from Java, by defining a method signature with the keyword native. Such a method has no body, much like methods defined with the keyword abstract. Rather, the method is implemented in C, as a function, within a DLL or service program on iSeries. The DLL or service program containing the function (with the same signature as the Java method definition) is identified using a static initializer, as described in Chapter 14.
While it has always been possible to implement these functions using ILE RPG procedures, the effort required has been daunting without the special syntax and runtime support added to RPG IV as of V5R1. It is very easy, on the other hand, to call any iSeries program or ILE procedure within a service program using the Program Call Markup Language (PCML) supplied in the AS/400 Toolbox for Java. While easy, this means of calling RPG from Java does involve the overhead of starting a new job for the program or service program on the first call, and marshalling the parameter data between the jobs. In many cases, it can be more efficient to use RPG native method support, as the RPG procedures are called within the same job. You might or might not consider it easy.
The first important note about RPG native method support is that you must remember to specify the THREAD(*SERIALIZE) keyword on your RPG H-spec at the top of each module in which you wish to call procedures. Because Java is a threaded language, and you are calling RPG within the same job and possibly within multiple threads, you must use this keyword to avoid corrupting your data if two threads use the same database- accessing RPG logic simultaneously.
Writing an RPG native method is very easy. Simply code a RPG IV procedure as normal, being sure to specify the EXPORT keyword on the P-spec, and export it when creating the service program. (For example, use EXPORT(*ALL) on CRTSRVPGM.) You must also specify the EXTPROC keyword on your procedure prototype, with *JAVA for the first parameter, the package-qualified Java class containing the Java native method definition for the second parameter, and the Java name by which you want to refer to this method as the third parameter. When you create your service program, it is best to specify a named activation group such as QILE or your own name, versus using the default activation group or even *CALLER, since the caller in this case is Java.
To be able to access the procedure from your Java code, simply code a Java native method signature with exactly the same name (including case) as specified on the EXTPROC keyword of your procedure, and the same number of parameters. Each parameter and return type should be the Java equivalent of the RPG data type, as described in Table B.1. To tell Java the name of the service program containing the RPG native method, code the following static initializer at the top of your Java class:
static { System.loadLibrary ("MYSRVPGM"); }
Replace MYSRVPGM with the name of your service program. Repeat the System.loadLibrary statement for each service program containing native methods you wish to call.
With this, your Java code can now call that Java method as though it were written in Java! All data-mapping of the parameters will be done for you by the RPG runtime. At runtime, ensure that the library containing your service program is on your library list.
Let's look at an example, adapted from the ILE RPG Programmer's Guide. It starts with an RPG module that contains a single, simple procedure named checkCust. Given a customer ID, it does a chain operation to that record and returns true if the record was found, and false if it was not. This uses the CUSTDB database file from Listing A.3 in Appendix A, which contains a record format named CUSTREC. The key is the customer ID field, CUSTID, which has six integer digits and zero decimal places.
In the procedure in Listing B.4, the input parameter is defined as a 10-digit integer field, which maps to the int primitive data type in Java, as shown in Table B.1. The keyword CONST for the parameter indicates the parameter value does not change in the procedure. The procedure simply does a chain to that key, in the database file, and returns an indicator value of one if the chain was successful. The RPG indicator data type maps to the boolean primitive data type in Java. Notice that EXTPROC is specified, but the class name is not qualified with the package name because for this simple example, the class is in the unnamed package.
Listing B.4: An RPG Procedure to Be Used as an RPG Native Method
H NOMAIN THREAD(*SERIALIZE) DFTACTGRP(*NO) ACTGRP('QILE') ALWNULL(*USRCTL) FCUSTDB UF E DISK * -------------- * PROCEDURE checkCust prototype * -------------- D checkCust PR N EXTPROC(*JAVA:'MyClass':'checkCust') D custId 10I 0 CONST D* * --------- * PROCEDURE checkCust * --------- P checkCust B EXPORT D checkCust PI N D custId 10I 0 CONST /free chain custId custREC; return %found; /end-free P checkCust E
The Java code to call this procedure as a native method is shown in Listing B.5. A static initializer identifies the service program, and there is a very simple native method declaration. The main method tests instantiating the class and calling the native method.
Listing B.5: A Java Native Method to Call an RPG Procedure via JNI
public class MyClass { static { System.loadLibrary ("RPGNTVMTD"); } /** * The declaration of the RPG native method. * Calling this calls the RPG procedure checkCust * in service program RPGNTVMTD in library list. */ native boolean checkCust (int custId); /** * Command line control. Tests the RPG native method. */ public static void main(String args[]) { MyClass testObj = new MyClass(); // call the native method boolean found = false; int custId = 123; found = testObj.checkCust(custId); System.out.println("Result of native method = " + found); } }
To other Java code, a native method is no different than a regular Java method. Very nice! When dealing with character strings, it is best to define your RPG procedure to accept either a character or Unicode field, with the VARYING keyword. From the Java side, declare the native method to accept a byte array or a character array, respectively. Then, when you call the method with a String object, use the getBytes or getCharArray method, respectively. The RPG runtime will handle all the codepage conversions. Also very nice!
If your RPG procedure wants to "call back" and execute Java methods within the same object, use the %THIS built-in function to return a reference to the current object. This can then be passed as the first parameter to the Java method, using the syntax described for RPG calling Java.
There are some considerations when writing native methods. One is the CLASSPATH, which must be set properly to find your class, as usual. Another consideration involves exceptions. If your RPG native method ends in an error for some reason, the RPG runtime will throw a Java exception of class type java.lang.Exception, and getMessage on that exception object will return a string of the form "RPG nnnn" where nnnn is the status code from the RPG runtime.
Another consideration regards telling Java when you are done using a Java object, so that the JVM can cleanly dispose of that object, reducing memory leaks. This is done by calling the JNI API DeleteLocalRef. These and other considerations are described well in the ILE RPG Programmers Guide, in the section about RPG and Java. We leave them to your additional reading pleasure when you need them.
We hope you take these RPG enhancements as an indication of IBM's commitment not only to Java, but also to RPG. Indeed, IBM believes both languages have a long future and will live happily together for a long time to come! | http://flylib.com/books/en/2.163.1/appendix_b_mixing_rpg_and_java.html | CC-MAIN-2018-05 | refinedweb | 4,158 | 52.7 |
Python Data Structures and Algorithms: Bubble sort
Python Search and Sorting : Exercise-4 with Solution
Write a Python program to sort a list of elements using the bubble sort algorithm.
Note :.
Step by step pictorial presentation :
Sample Solution:-
Python Code:
def bubbleSort(nlist): for passnum in range(len(nlist)-1,0,-1): for i in range(passnum): if nlist[i]>nlist[i+1]: temp = nlist[i] nlist[i] = nlist[i+1] nlist[i+1] = temp nlist = [14,46,43,27,57,41,45,21,70] bubbleSort(nlist) print(nlist)
Sample Output:
[14, 21, 27, 41, 43, 45, 46, 57, 70]
Flowchart:
Python Code Editor :
Contribute your code and comments through Disqus.
Previous: Write a Python program for binary search for an ordered list.
Next: Write a Python program to sort a list of elements using the selection | https://www.w3resource.com/python-exercises/data-structures-and-algorithms/python-search-and-sorting-exercise-4.php | CC-MAIN-2019-47 | refinedweb | 137 | 52.23 |
Thanks for asking your question! I'm sorry to hear about your tax issue and I'm going to try my best to help you understand or resolve it.
How are you today?
I had to do some research, and what I found is that the IRS will issue the check in the name of both spouses on the tax refund check, because the surviving spouse can still negotiate the check.
You can read about this, HERE
You will be looking at option #3 is what looks like your situation is
So, unfortunately there is no form you could fill out to specify that the refund check should come to only one spouse
Link not working ....
copy and paste that into your browser
It's a PDF file, so it might open differently
pdf files aren't opening properly on my computer.
Will have to reboot and access this again.
Okay, I'm sorry for the inconvenience.
It just says what I just told you - that a refund check would be issued in the name of both spouses.
A different expert here - Welcome and thank you for giving me the opportunity to assist you with your tax question.
Please note that. A surviving spouse can sign by themselves if no personal representative has been appointed, or along with the personal representative if they will be filing a joint tax return with the deceased taxpayer. Here is the link to Form 1310:
Form 1310 will be attached to your brother's tax return. You will also want to be sure that the word "DECEASED" is written across the top of the joint return (1040) as well as the date of death for your brother's spouse. I hope this additional information is useful to you.
Thank you.
Original expert here. I was waiting for you to return after restarting your computer. I hope you were able to open the PDF file that I sent you.Basically, the IRS position is that you can issue the check in the name of both spouses, because your brother can still negotiate the check even after his wife's death. However, you can attach form 1310 to the tax return as a surviving spouse. Please be sure that the tax return notes that your brother's wife is deceased, along with the date of death for easier processing.Thanks again for using JustAnswer.com and have a great day. Please don't forget to rate as "excellent" so that I may receive credit for assisting you today. | http://www.justanswer.com/tax/7wrid-brother-s-spouse-passed-away-last-year-helping.html | CC-MAIN-2014-41 | refinedweb | 422 | 71.14 |
Capturing the view close "x"
- polymerchm
So, can you capture the event of the "x" in the upper left hand corner of a presented ui.view being touched to allow for a cleanup before the whole script stops? I tried the obvious "finally", but that didn't fire, only the console upper right hand X did that.
Two techniques to look at:
while View.on_screen:and
View.wait_modal().
I assume this is a custom view, in which case, did you try
def will_close(self): # This will be called when a presented view is about to be dismissed. # You might want to save data here. pass
However, I vaguely recall this working for some types of presentation modes (say, popover), but not others, but I could be wrong about that.
Another alternative to the blocking options that ccc suggests would be a
threading.Threador
Timerthat polls the
on_screenat some reasonably slow interval (few seconds).
One other option... hide the title bar and provide your own "X". Doesn't protect against two finger swipes, but most people don't know about that anyway. | https://forum.omz-software.com/topic/1495/capturing-the-view-close-x | CC-MAIN-2017-43 | refinedweb | 181 | 73.88 |
What's New in the .NET Framework
This article summarizes key new features and improvements in the following versions of the .NET Framework:
with the .NET Framework. For supported platforms, see System Requirements. For download links and installation instructions, see Installing the .NET Framework.
The .NET Framework 4.6.1 builds on the .NET Framework 4.6 by adding many new fixes and several new features while remaining a very stable product.
You can download the.NET Framework 4.6.1 from the following locations:.
You can target the .NET Framework 4.6.1 in Visual Studio 2012 or later by installing the .NET Framework 4.6.1 Dev Pack.
The .NET Framework 4.6.1 includes new features in the following areas:
For more information on the .NET Framework 4.6.1, see the following topics:. includes a number of improvements and changes.
-...
.NET 2015 introduces the .NET Framework 4.6 and .NET Core. Some new features apply to both, and other features are specific to .NET Framework 4.6 or .NET Core.
ASP.NET 5
.NET 2015 includes ASP.NET 5, .NET Framework on the same server. It includes a new environment configuration system that is designed for cloud deployment.
MVC, Web API, and Web Pages are unified into a single framework called MVC 6. You build ASP.
Model binding supports task-returning methods
In the .NET Framework 4.5, ASP.NET added the Model Binding feature that enabled an extensible, code-focused approach to CRUD-based data operations in Web Forms pages and user controls. The Model Binding system now supports Task-returning model binding methods. This feature allows Web Forms developers to get the scalability benefits of async with the ease of the data-binding system when using newer versions of ORMs, including the Entity Framework.
Async default. It can be enabled by setting the configuration setting to true.
HTTP/2 Support (Windows 10). HTTP/2 support has been added to ASP.NET in the .NET Framework 4.6. Because networking functionality exists at multiple layers, new features were required in Windows, in IIS, and in ASP.NET to enable HTTP/2. You must be running on Windows 10 to use HTTP/2 with ASP.NET.
HTTP/2 is also supported and on by default for Windows 10 Universal Windows Platform (UWP) apps that use the System.Net.Http.HttpClient API.
In order to provide a way to use the PUSH_PROMISE feature in ASP.NET applications, a new method with two overloads, PushPromise(String) and PushPromise(String, String, NameValueCollection), has been added to the HttpResponse class.
The browser and the web server (IIS on Windows) do all the work. You don't have to do any heavy-lifting for your users.
Most of the major browsers support HTTP/2, so it's likely that your users will benefit from HTTP/2 support if your server supports it.
Support for the Token Binding Protocol
Microsoft and Google have been collaborating on a new approach to authentication, called the Token Binding Protocol. The premise is that authentication tokens (in your browser cache) can be stolen and used by criminals to access otherwise secure resources (e.g. your bank account) without requiring your password or any other privileged knowledge. The new protocol aims to mitigate this problem.
The Token Binding Protocol will be implemented in Windows 10 as a browser feature. ASP.NET apps will participate in the protocol, so that authentication tokens are validated to be legitimate. The client and the server implementations establish the end-to-end protection specified by the protocol.
Randomized string hash algorithms
The .NET Framework 4.5 introduced a randomized string hash algorithm. However, it was not supported by ASP.NET because of some ASP.NET features depended on a stable hash code. In .NET Framework 4.6, randomized string hash algorithms are now supported. To enable this feature, use the aspnet:UseRandomizedStringHashAlgorithm config setting.
ADO.NET
ADO .NET now supports the Always Encrypted feature available in SQL Server 2016 Community Technology Preview 2 (CTP2). details, see Always Encrypted (Database Engine) and Always Encrypted (client development).
64-bit JIT Compiler for managed code
The .NET Framework 4.6 features a new version of the 64-bit JIT compiler (originally code-named RyuJIT). The new 64-bit compiler provides significant performance improvements over the older 64-bit JIT compiler. The new 64-bit compiler is enabled for 64-bit processes running on top of the .NET Framework 4.6. Your app will run in a 64-bit process if it is compiled as 64-bit or AnyCPU and is running on a 64-bit operating system. While care has been taken to make the transition to the new compiler as transparent as possible, changes in behavior are possible. We would like to hear directly about any issues encountered when using the new JIT compiler. Please contact us through Microsoft Connect if you encounter an issue that may be related to the new 64-bit JIT compiler.
The new 64-bit JIT compiler also includes hardware SIMD acceleration features when coupled with SIMD-enabled types in the System.Numerics namespace, which can yield good performance improvements.
Assembly loader improvements
The assembly loader now uses memory more efficiently by unloading IL assemblies after a corresponding NGEN image is loaded. This change decreases virtual memory, which is particularly beneficial for large 32-bit apps (such as Visual Studio), and also saves physical memory.
Base class library changes
Many new APIs have been added around to .NET Framework 4.6 to enable key scenarios. These include the following changes and additions:
IReadOnlyCollection<T> implementations
Additional collections implement IReadOnlyCollection<T> such as Queue<T> and Stack<T>.
CultureInfo.CurrentCulture and CultureInfo.CurrentUICulture.
Enhancements to garbage collection (GC)
The GC class now includes TryStartNoGCRegion and EndNoGCRegion methods that allow you to disallow garbage collection during the execution of a critical path.
A new overload of the GC.Collect(Int32, GCCollectionMode, Boolean, Boolean) method allows you to control whether both the small object heap and the large object heap are swept and compacted or swept only.
SIMD-enabled types
The System.Numerics namespace now includes a number of SIMD-enabled types, such as Matrix3x2, Matrix4x4, Plane, Quaternion, Vector2, Vector3, and Vector4.
Because the new 64-bit JIT compiler also includes hardware SIMD acceleration features, there are especially significant performance improvements when using the SIMD-enabled types with the new 64-bit JIT compiler.
Cryptography updates
The System.Security.Cryptography API is being updated to support the Windows CNG cryptography APIs. Previous versions of the .NET Framework have relied entirely on an earlier version of the Windows Cryptography APIs as the basis for the System.Security.Cryptography implementation. We have had requests to support the CNG API, since it supports modern cryptography algorithms, which are important for certain categories of apps.
The .NET Framework 4.6 includes the following new enhancements to support the Windows CNG cryptography APIs:
A set of extension methods for X509 Certificates, System.Security.Cryptography.X509Certificates.RSACertificateExtensions.GetRSAPublicKey(System.Security.Cryptography.X509Certificates.X509Certificate2) and System.Security.Cryptography.X509Certificates.RSACertificateExtensions.GetRSAPrivateKey(System.Security.Cryptography.X509Certificates.X509Certificate2), that return a CNG-based implementation rather than a CAPI-based implementation when possible. (Some smartcards, etc., still require CAPI, and the APIs handle the fallback).
The System.Security.Cryptography.RSACng class, which provides a CNG implementation of the RSA algorithm.
Enhancements to the RSA API so that common actions no longer require casting. For example, encrypting data using an X509Certificate2 object requires code like the following in previous versions of the .NET Framework.
Code that uses the new cryptography APIs in the .NET Framework 4.6 can be rewritten as follows to avoid the cast.
Support for converting dates and times to or from Unix time
The following new methods have been added to the DateTimeOffset structure to support converting date and time values to or from Unix time:
Compatibility switches
The new AppContext class adds a new compatibility feature that enables library writers to provide a uniform opt-out mechanism for new functionality for their users. It establishes a loosely they only alter it (that is, they provide the previous functionality) if the switch is set.
An application (or a library) can declare the value of a switch (which is always a Boolean value) that a dependent library defines. The switch is always implicitly false. Setting the switch to true enables it. Explicitly setting the switch to false provides the new behavior.
The library must check if a consumer has declared the value of the switch and then appropriately act on it.
if (!AppContext.TryGetSwitch("Switch.AmazingLib.ThrowOnException", out shouldThrow)) { // This is the case where the switch value was not set by the application. // The library can choose to get the value of shouldThrow by other means. // If no overrides nor default values are specified, the value should be 'false'. // A false value implies the latest behavior. } // The library can use the value of shouldThrow to throw exceptions or not. if (shouldThrow) { // old code } else { // new code } }
It's beneficial to use a consistent format for switches, since they are a formal contract exposed by a library. The following are two obvious formats.
Switch.namespace.switchname
Switch.library.switchname
Changes to the task-based asynchronous pattern (TAP)
For apps that target the .NET Framework 4.6, Task and Task<TResult> objects inherit the culture and UI culture of the calling thread. The behavior of apps that target previous versions of the .NET Framework, or that do not target a specific version of the .NET Framework, is unaffected. For more information, see the "Culture and task-based asynchronous operations” section of the CultureInfo class topic.
The System.Threading.AsyncLocal<T><T>.Value property was explicitly changed, or because the thread encountered a context transition.
Three convenience methods, Task.CompletedTask, Task.FromCanceled, and Task.FromException, have been added to the task-based asynchronous pattern (TAP) to return completed tasks in a particular state.
The NamedPipeClientStream class now supports asynchronous communication with its new ConnectAsync. method.
EventSource now supports writing to the Event log
You now can use the EventSource class to log administrative or operational messages to the event log, in addition to any existing ETW sessions created on the machine. In the past, you had to use the Microsoft.Diagnostics.Tracing.EventSource NuGet package for this functionality. This functionality is now built-into the .NET Framework 4.6.
Both the NuGet package and the .NET Framework 4.6 have been updated with the following features:
Dynamic events
Allows events defined "on the fly" without creating event methods.
Rich payloads
Allows specially attributed classes and arrays as well as primitive types to be passed as a payload
Activity tracking
Causes Start and Stop events to tag events between them with an ID that represents all currently active activities.
To support these features, the overloaded Write method has been added to the EventSource class.
Windows Presentation Foundation (WPF)
HDPI improvements
HDPI support in WPF is now better in the .NET Framework 4.6. Changes have been made to layout rounding to reduce instances of clipping in controls with borders. By default, this feature is enabled only if your TargetFrameworkAttribute is set to .NET 4.6. Applications that target earlier versions of the framework but are running on the .NET Framework 4.6 can opt in to the new behavior by adding the following line to the <runtime> section of the app.config file:
WPF windows straddling multiple monitors with different DPI settings (Multi-DPI setup) are now completely rendered without blacked-out regions. You can opt out of this behavior by adding the following line to the <appSettings> section of the app.config file to disable this new behavior:
Support for automatically loading the right cursor based on DPI setting has been added to System.Windows.Input.Cursor.
Touch is better
Customer reports on Connect that touch produces unpredictable behavior have been addressed in the .NET Framework 4.6. The double tap threshold for Windows Store applications and WPF applications is now the same in Windows 8.1 and above.
Transparent child window support
WPF in the .NET Framework 4.6 supports transparent child windows in Windows 8.1 and above. This allows you to create non-rectangular and transparent child windows in your top-level windows. You can enable this feature by setting the HwndSourceParameters.UsesPerPixelTransparency property to true.
Windows Communication Foundation (WCF)
SSL support
WCF now supports SSL version TLS 1.1 and TLS 1.2, in addition to SSL 3.0 and TLS 1.0, when using NetTcp with transport security and client authentication. It is now possible to select which protocol to use, or to disable old lesser secure protocols. This can be done either by setting the SslProtocols property or by adding the following to a configuration file.
Sending messages using different HTTP connections
WCF now allows users to ensure certain messages are sent using different underlying HTTP connections. There are two ways to do this:
Using a connection group name prefix
Users can specify a string that WCF will use as a prefix achieve for the connection group name. Two messages with different prefixes are sent using different underlying HTTP connections. You set the prefix by adding a key/value pair to the message's Message.Properties property. The key is "HttpTransportConnectionGroupNamePrefix"; the value is the desired prefix.
Using different channel factories
Users can also enable a feature that ensures that messages sent using channels created by different channel factories will use different underlying HTTP connections. To enable this feature, users must set the following appSetting to true:
Windows Workflow Foundation (WWF)
You can now specify the number of seconds a workflow service will hold on to an out-of-order operation request when there is an outstanding “non-protocol” bookmark before timing out the request. A “non-protocol” bookmark is a bookmark that is not related to outstanding Receive activities. Some activities create non-protocol bookmarks within their implementation, so it may not be obvious that a non-protocol bookmark exists. These include State and Pick. So if you have a workflow service implemented with a state machine or containing a Pick activity, you will most likely have non-protocol bookmarks. You specify the interval by adding a line like the following to the appSettings is non-zero, there are non-protocol bookmarks, and the timeout interval expires, the operation fails with a timeout message.
Transactions
You can now include the distributed transaction identifier for the transaction that has caused an exception derived from TransactionException to be thrown. You do this by adding the following key to the appSettings section of your app.config file:
The default value is false.
Networking
Socket reuse,384, the default size of the dynamic port range), which could limit the scalability of a service by causing port exhaustion when under load.
In the .NET Framework 4.6, two new APIs have been added to enable port reuse, which effectively removes the 64K limit on concurrent connections:
The SocketOptionName.ReuseUnicastPort enumeration value.
The ServicePointManager.ReusePort property.
By default, the ServicePointManager.ReusePort property to true. This causes all outgoing TCP socket connections from HttpClient and HttpWebRequest to use a new Windows 10 socket option, SO_REUSE_UNICASTPORT, that enables local port reuse.
Developers writing a sockets-only application can specify the SocketOptionName.ReuseUnicastPort option when calling a method such as Socket.SetSocketOption so that outbound sockets reuse local ports during binding.
Support for international domain names and PunyCode
A new property, IdnHost, has been added to the Uri class to better support international domain names and PunyCode.
Resizing in Windows Forms controls.
This feature has been expanded in .NET Framework 4.6 to include the DomainUpDown, NumericUpDown, DataGridViewComboBoxColumn, DataGridViewColumn and ToolStripSplitButton types and the rectangle specified by the Bounds property used when drawing a UITypeEditor.
This is an opt-in feature. To enable it, set the EnableWindowsFormsHighDpiAutoResizing element to true in the application configuration (app.config) file:
Support for code page encodings
.NET Core primarily supports the Unicode encodings and by default provides limited support for code page encodings. You can add support for code page encodings available in the .NET Framework but unsupported in .NET Core by registering code page encodings with the Encoding.RegisterProvider method. For more information, see System.Text.CodePagesEncodingProvider.
.NET Native
Windows apps for Windows 10 that target .NET Core and are written in C# or Visual Basic can take advantage of a new technology that compiles apps to native code rather than IL. They produce apps characterized by faster startup and execution times. For more information, see Compiling Apps with .NET Native. For an overview of .NET Native that examines how it differs from both JIT compilation and NGEN and what that means for your code, see .NET Native and Compilation.
Your apps are compiled to native code by default when you compile them with Visual Studio 2015. For more information, see Getting Started with .NET Native.
To support debugging .NET Native apps, a number of new interfaces and enumerations have been added to the unmanaged debugging API. For more information, see the Debugging (Unmanaged API Reference) topic.
Open-source .NET Framework packages
.NET Core packages such as the NIB: Immutable Collections, SIMD APIs, and networking APIs such as those found in the System.Net.Http namespace are now available as open source packages on GitHub. To access the code, see NetFx on GitHub. For more information and how to contribute to these packages, see .NET Core and Open-Source, .NET Home Page on GitHub..
Promoting a transaction and converting it to a durable enlistment
Transaction.PromoteAndEnlistDurable is a new API added to the .NET Framework 4.5.2 and 4.6:
[System.Security.Permissions.PermissionSetAttribute(System.Security.Permissions.SecurityAction.LinkDemand, Name = "FullTrust")] public Enlistment PromoteAndEnlistDurable(Guid resourceManagerIdentifier, IPromotableSinglePhaseNotification promotableNotification, ISinglePhaseNotification enlistmentNotification, EnlistmentOptions enlistmentOptions)
The method may be used by an enlistment that was previously created by Transaction.EnlistPromotableSinglePhase in response to the ITransactionPromoter.Promote method. It asks System.Transactions to promote the transaction to an MSDTC transaction and to “convert” the promotable enlistment to a durable enlistment. After this method completes successfully, the IPromotableSinglePhaseNotification interface will no longer be referenced by System.Transactions, and any future notifications will arrive on the provided ISinglePhaseNotification interface. The enlistment in question must act as a durable enlistment, supporting transaction logging and recovery. Refer to Transaction.EnlistDurable for details. In addition, the enlistment must support ISinglePhaseNotification. This method can only be called while processing an ITransactionPromoter.Promote call. If that is not the case, a TransactionException exception is thrown.
April 2014 updates:
Visual Studio 2013 Update 2 includes updates to the Portable Class Library templates to support these scenarios:
You can use Windows Runtime APIs in portable libraries that target Windows 8.1, Windows Phone 8.1, and Windows Phone Silverlight 8.1.
You can include XAML (Windows.UI.XAML types) in portable libraries when you target Windows 8.1 or Windows Phone 8.1. The following XAML templates are supported: Blank Page, Resource Dictionary, Templated Control, and User Control.
You can create a portable Windows Runtime component (.winmd file) for use in Store apps that target Windows 8.1 and Windows Phone 8.1.
You can retarget a Windows Store or Windows Phone Store class library like a Portable Class Library.
For more information about these changes, see Cross-Platform Development with the Portable Class Library.
The .NET Framework content set now includes documentation for .NET Native, which is a precompilation technology for building and deploying Windows apps. .NET Native compiles your apps directly to native code, rather than to intermediate language (IL), for better performance. For details, see Compiling Apps with .NET Native.
The .NET Framework Reference Source provides a new browsing experience and enhanced functionality. You can now browse through the .NET Framework source code online, download the reference for offline viewing, and step through the sources (including patches and updates) during debugging. For more information, see the blog entry A new look for .NET Reference Source.
Core new features and enhancements in the .NET Framework 4.5.1 include:
Automatic binding redirection for assemblies. Starting with Visual Studio 2013, when you compile an app that targets the .NET Framework 4.5.1, binding redirects may be added to the app configuration file if your app or its components reference multiple versions of the same assembly. You can also enable this feature for projects that target older versions of the .NET Framework. For more information, see How to: Enable and Disable Automatic Binding Redirection.
Ability to collect diagnostics information to help developers improve the performance of server and cloud applications. For more information, see the WriteEventWithRelatedActivityId and WriteEventWithRelatedActivityIdCore methods in the EventSource class.
Ability to explicitly compact the large object heap (LOH) during garbage collection. For more information, see the GCSettings.LargeObjectHeapCompactionMode property.
Additional performance improvements such as ASP.NET app suspension, multi-core JIT improvements, and faster app startup after a .NET Framework update. For details, see the .NET Framework 4.5.1 announcement and the ASP.NET app suspend blog post.
Improvements to Windows Forms include:
Resizing in Windows Forms controls. You can use the system DPI setting to resize components of controls (for example, the icons that appear in a property grid) by opting in with an entry in the application configuration file (app.config) for your app. This feature is currently supported in the following Windows Forms controls:
PropertyGrid
TreeView
Some aspects of the DataGridView (see new features in 4.5.2 for additional controls supported)
To enable this feature, add a new <appSettings> element to the configuration file (app.config) and set the EnableWindowsFormsHighDpiAutoResizing element to true:
Improvements when debugging your .NET Framework apps in Visual Studio 2013 include:
Return values in the Visual Studio debugger. When you debug a managed app in Visual Studio 2013, the Autos window displays return types and values for methods. This information is available for desktop, Windows Store, and Windows Phone apps. For more information, see Examine return values of method calls in the MSDN Library.
Edit and Continue for 64-bit apps. Visual Studio 2013 supports the Edit and Continue feature for 64-bit managed apps for desktop, Windows Store, and Windows Phone. The existing limitations remain in effect for both 32-bit and 64-bit apps (see the last section of the Supported Code Changes (C#) article).
Async-aware debugging. To make it easier to debug asynchronous apps in Visual Studio 2013, the call stack hides the infrastructure code provided by compilers to support asynchronous programming, and also chains in logical parent frames so you can follow logical program execution more clearly. A Tasks window replaces the Parallel Tasks window and displays tasks that relate to a particular breakpoint, and also displays any other tasks that are currently active or scheduled in the app. You can read about this feature in the "Async-aware debugging" section of the .NET Framework 4.5.1 announcement.
Better exception support for Windows Runtime components. In Windows 8.1, exceptions that arise from Windows Store apps preserve information about the error that caused the exception, even across language boundaries. You can read about this feature in the "Windows Store app development" section of the .NET Framework 4.5.1 announcement.
Starting with Visual Studio 2013, you can use the Managed Profile Guided Optimization Tool (Mpgo.exe) to optimize Windows 8.x Store apps as well as desktop apps.
For new features in ASP.NET 4.5.1, see ASP.NET 4.5.1 and Visual Studio 2013 on the ASP.NET site.. See the Background Server Garbage Collection section of the Fundamentals of Garbage Collection topic.
Background just-in-time (JIT) compilation, which is optionally available on multi-core processors to improve application performance. See ProfileOptimization.
Ability to limit how long the regular expression engine will attempt to resolve a regular expression before it times out. See the Regex.MatchTimeout property. System.Globalization.
Type reflection support split between Type and TypeInfo classes. See Reflection in the .NET Framework for Windows Store Apps.
In the .NET Framework 4.5, the Managed Extensibility Framework (MEF) provides the following new features:
Support for generic types.
Convention-based programming model that enables you to create parts based on naming conventions rather than attributes.
Multiple scopes.
A subset of MEF that you can use when you create Windows 8.x Store apps. This subset is available as a downloadable package from the NuGet Gallery. To install the package, open your project in Visual Studio,, Resource File Generator (Resgen.exe) enables you to create a .resw file for use in Windows 8.x Store apps from a .resources file embedded in a .NET Framework assembly. For more information, see Resgen.exe (Resource File Generator).
Managed Profile Guided Optimization (Mpgo.exe) enables you to improve application startup time, memory utilization (working set size), and throughput by optimizing native image assemblies. The command-line tool generates profile data for native image application assemblies. See Mpgo.exe (Managed Profile Guided Optimization Tool). Starting with Visual Studio 2013, you can use Mpgo.exe to optimize Windows 8.x.
The .NET Framework 4.5 provides a new programming interface for HTTP applications. For more information, see the new System.Net.Http and System.
In the .NET Framework 4.5, several new features were added to Windows Workflow Foundation (WF), including: Document.
For more information, see What's New in Windows Workflow Foundation.
Windows 8.x Store apps are designed for specific form factors and leverage the power of the Windows operating system. A subset of the .NET Framework 4.5 or 4.5.1 is available for building Windows 8.x Store apps for Windows by using C# or Visual Basic. This subset is called .NET for Windows 8.x Store apps and is discussed in an overview in the Windows Dev Center.
The Portable Class Library project in Visual Studio 2012 (and later versions) enables you to write and build managed assemblies that work on multiple .NET Framework platforms. Using a Portable Class Library project, you choose the platforms (such as Windows Phone and .NET for Windows 8.x Store apps) to target. The available types and members in your project are automatically restricted to the common types and members across these platforms. For more information, see Cross-Platform Development with the Portable Class Library. | https://msdn.microsoft.com/library/ms171868(v=vs.110) | CC-MAIN-2016-07 | refinedweb | 4,395 | 51.65 |
These are chat archives for TypeStrong/atom-typescript
import * as controllers from './controllers';in my app.ts file and I try to use one of the controllers:
var controller1 = controllers.Controller1. I get an error
Controller1 does not exist on typeof controllers.js, although I get autocompletion for Controller1 on controllers.
I get an error Controller1 does not exist on typeof controllers.js
TypeScript error?
although I get autocompletion for Controller1 on controllers.
If its a completion provided by typescript you should see
(badge) next to it. Otherwise its from the fuzzy finder and not reliable
is there something wrong with what I am trying to do?
Nothing beyond the fact that I haven't done it personally so my advice might not be useful :D
import * as controllers from './controllers'and then
controllers.Controller1should work
export class Controller1in
controllers
export class
export class Controller1 from './controller1';gives me errors
controller1. But whatever works :rose:
In ES6, you nearly always want to use named exports
I also find them much easier to refactor ;); }
declare module "knockout" { var ko: KnockoutStatic; export default ko; }
=with
default | https://gitter.im/TypeStrong/atom-typescript/archives/2015/05/13 | CC-MAIN-2019-04 | refinedweb | 184 | 68.26 |
kstars
#include <timezonerule.h>
Detailed Description
This class provides the information needed to determine whether Daylight Savings Time (DST; a.k.a.
"Summer Time") is currently active at a given location. There are (at least) 25 different "rules" which govern DST around the world; a string identifying the appropriate rule is attached to each city in citydb.sqlite.
The rules themselves are stored in the TZrulebook.dat file, which is read on startup; each line in the file creates a TimeZoneRule object.
TimeZoneRule consists of QStrings identifying the months and days on which DST starts and ends, QTime objects identifying the time at which the changes occur, and a double indicating the size of the offset in hours (probably always 1.00).
Month names should be the English three-letter abbreviation, uncapitalized. Day names are either an integer indicating the calendar date (i.e., "15" is the fifteenth of the month), or a number paired with a three-letter abbreviation for a weekday. This indicates the Nth weekday of the month (i.e., "2sun" is the second Sunday of the Month). Finally, the weekday string on its own indicates the last weekday of the month (i.e., "mon" is the last Monday of the month).
The isDSTActive(KStarsDateTime) function returns true if DST is active for the DateTime given as an argument.
The nextDSTChange(KStarsDateTime) function returns the KStarsDateTime of the moment at which the next DST change will occur for the current location.
- Version
- 1.0
Definition at line 58 of file timezonerule.h.
Constructor & Destructor Documentation
Default Constructor.
Makes the "empty" time zone rule (i.e., no DST correction)
Definition at line 25 of file timezonerule.cpp.
Constructor.
Create a TZ rule according to the arguments.
- Parameters
-
Definition at line 30 of file timezonerule.cpp.
Member Function Documentation
- Returns
- the current Timezone offset, compared to the timezone's Standard Time. This is typically 0.0 if DST is inactive, and 1.0 if DST is active.
Definition at line 91 of file timezonerule.h.
- Returns
- true if this rule is the same as the argument.
- Parameters
-
Definition at line 568 of file timezonerule.cpp.
Determine whether DST is in effect for the given DateTime, according to this rule.
- Parameters
-
Definition at line 294 of file timezonerule.cpp.
- Returns
- true if the rule is the "empty" TZ rule.
Definition at line 82 of file timezonerule.h.
- Returns
- computed value for next DST change in universal time.
Definition at line 106 of file timezonerule.h.
- Returns
- computed value for next DST change in local time.
Definition at line 109 of file timezonerule.h.
Recalculate next dst change and if DST is active by a given local time with timezone offset and time direction.
- Parameters
-
There are some problems using local time for getting next daylight saving change time.
- The local time is the start time of DST change. So the local time doesn't exists and must corrected.
- The local time is the revert time. So the local time exists twice.
- Neither start time nor revert time. There is no problem.
Problem #1 is more complicated and we have to change the local time by reference. Problem #2 we just have to reset status of DST.
automaticDSTchange should only set to true if DST status changed due to running automatically over a DST change time. If local time will changed manually the automaticDSTchange should always set to false, to hold current DST status if possible (just on start and revert time possible).
Definition at line 443 of file timezonerule.cpp.
Toggle DST on/off.
The
activate argument should probably be isDSTActive()
- Parameters
-
Definition at line 72 of file timezonerule. | https://api.kde.org/extragear-api/edu-apidocs/kstars/html/classTimeZoneRule.html | CC-MAIN-2019-30 | refinedweb | 609 | 60.21 |
This articles exlains why lazy loading is useful and how it helps to develop a high-performance application.
Lazy loading is a nice and very important concept in the programming world. Sometimes it helps to improve performance and adapt best practices in application design. Let's discuss why lazy loading is useful and how it helps to develop a high performance application.Lazy loading is essential when the cost of object creation is very high and the use of the object is vey rare. So, this is the scenario where it's worth implementing lazy loading.The fundamental idea of lazy loading is to load object/data when needed.At first we will implement a traditional concept of loading (it's not lazy loading) and then we will try to understand the problem in this. Then we will implement lazy loading to solve the problem.Have a look at the following code.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace ConsoleAPP
{ float LoanAmount { get; set; }
public bool IsLoanApproved { get; set; }
public Loan(string accountNumber)
Console.WriteLine("Loan loading started");
this.LoanAmount = 1000;
this.IsLoanApproved = true;
class Program
static void Main(string[] args)
PersonalLoan p = new PersonalLoan("123456");
Console.ReadLine();
}
//Load Detail started to load
p.LoanDetail = new Loan("123456");
Console.WriteLine(p.LoanDetail.AccountNumber);
Console.WriteLine(p.LoanDetail.IsLoanApproved);
Console.WriteLine(p.LoanDetail.LoanAmount);
Implement lazy loading using Lazy<T> classAs we know, lazy loading is a nice feature of applications, not only to improve the performance of the application but also it helps to manage memory and other resource efficiently. Basically we can use lazy initialization when a large object is created or the execution of a resource- intensive task in particular when such creation or execution might not occur during the lifetime of the program. AndThis property will tell us whether or not the value is initializing in a lazy class.ValueIt gets the lazy initialized value of the current Lazy<T> instance.Fine. We will now implement one simple class and we will see how lazy<T> works with it. Have a look at the following example.
public class Test
private List<string> list = null;
public Test()
Console.WriteLine("List Generated:");
list = new List<string>() {
"Sourav","Ram"
};
public List<string> Names
get
{
return list;
}
Lazy<Test> lazy = new Lazy<Test>();
Console.WriteLine("Data Loaded : " + lazy.IsValueCreated);
Test t = lazy.Value;
foreach (string tmp in t.Names)
Console.WriteLine(tmp);
View All | https://www.c-sharpcorner.com/uploadfile/dacca2/implement-lazy-loading-in-c-sharp-using-lazyt-class/ | CC-MAIN-2021-21 | refinedweb | 413 | 51.75 |
Opened 20 months ago
Closed 19 months ago
Last modified 19 months ago
#21355 closed Bug (fixed)
Try importing _imaging from PIL namespace first.
Description
Patch available at. Fixes problem described in #20964.
On an OS X system with a Homebrew version of Python/PIL installed, the django/utils/image module encounters a hash collision error when trying to import _imaging, as described in #20964. Although that ticket was closed for not being a problem with Django, the ticket it references in the Homebrew project () only resulted in removing PIL from the core Homebrew libraries without actually fixing the original import issue. Everyone in the Homebrew ticket discussion agrees that pillow should be used in place of PIL, but because Django isn't officially removing support for PIL until 1.8, I believe that this problem deserves to be fixed in time for 1.6.
In my patch, I attempt to import _imaging from the PIL namespace first, which prevents the hash collision error because the _imaging module will imported using the same filesystem path as when imported through the earlier from PIL import Image statement. If _imaging isn't available under PIL, then it attempts to import _imaging directly just as did before. This allows PIL to be properly imported in my setup (OS X 10.8, Homebrew Python/PIL) and should still be compatible with environments in which the PIL namespace isn't available.
Change History (10)
comment:1 Changed 20 months ago by ptone
- Cc preston@… added
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset
comment:2 Changed 20 months ago by akaariai
comment:3 Changed 20 months ago by akaariai
- Resolution set to wontfix
- Status changed from new to closed
comment:4 Changed 20 months ago by leo+django@…
- Resolution wontfix deleted
- Status changed from closed to new
We actually have the exact same issue on a CentOS 6.4 with python-imaging (for PIL) and mod_wsgi both from CentOS. We are trying to use Django 1.6, but when using a django.forms.ImageField we are getting the error message:
ImproperlyConfigured: The '_imaging' module for the PIL could not be imported: No module named _imaging
The fix by richardxia (trying "from PIL import _imaging as PIL_imaging" first) works for us. I don't see why this shouldn't go into the official release as it seems harmless and works exactly the same as the PILImage import just a few lines above. Considering our pretty clean CentOS 6 simple setup I don't think we will be the only ones that run into this as more people migrate to Django 1.6.
Can the "wontfix" decision be re-evaluated?
comment:5 Changed 20 months ago by anonymous
- Version changed from 1.6-beta-1 to 1.6
comment:6 Changed 20 months ago by akaariai
OK, so it seems this isn't Homebrew issue. I'll mark this as release blocker for 1.6. I am not sure what we should do about this. If the patch really is safe (I can't tell) then we should likely backpatch.
By marking this as release blocker I want to assure that we at least consider doing something to this before 1.6.1.
comment:7 Changed 20 months ago by akaariai
- Severity changed from Normal to Release blocker
- Triage Stage changed from Unreviewed to Accepted
comment:8 Changed 20 months ago by akaariai
- Owner changed from nobody to akaariai
- Status changed from new to assigned
I think I will just commit the fix. The fix seems safe, and it seems there are some setups where the fix is needed.
Any objections?
comment:9 Changed 19 months ago by Anssi Kääriäinen <akaariai@…>
- Resolution set to fixed
- Status changed from assigned to closed
I'll close this as wontfix. If you must use Homebrew, then use Pillow.
ptone: I see you are in cc - if you are planning on doing something to this, just revert my wontfix. | https://code.djangoproject.com/ticket/21355 | CC-MAIN-2015-27 | refinedweb | 660 | 59.64 |
Barcode Software
how to generate barcode in c# net with example
Configuring DNS Servers and Clients in .NET
Deploy barcode standards 128 in .NET Configuring DNS Servers and Clients
1. What attributes did you add to your assembly 2. What command did you use to view the token of the public key in your assembly 3. What command did you use to verify that your assembly does not yet have a strong name
java barcode reader sample code
using bitmaps j2ee to develop bar code for asp.net web,windows application
BusinessRefinery.com/barcode
how to print barcode in rdlc report
generate, create barcode full none on .net projects
BusinessRefinery.com/barcode
Exam Highlights
use j2ee barcode encoder to paint bar code on java unity
BusinessRefinery.com/ bar code
use sql server 2005 reporting services bar code encoder to incoporate barcodes for .net list
BusinessRefinery.com/barcode
Table 9-1 Supported SMTP Commands
generate, create barcode product none for java projects
BusinessRefinery.com/ barcodes
use sql database bar code drawer to produce bar code in visual basic script
BusinessRefinery.com/ bar code
Occurs after the return values are serialized into XML, but before they are sent across the network to the client.
ssrs 2016 qr code
using barcode encoder for sql 2008 control to generate, create qr image in sql 2008 applications. documentation
BusinessRefinery.com/Quick Response Code
java qr code generator example
use jar quick response code printing to deploy denso qr bar code for java picture
BusinessRefinery.com/qr-codes
Table 18-3
qrcode image developers in word microsoft
BusinessRefinery.com/QR Code JIS X 0510
qr code font for crystal reports free download
use visual .net denso qr bar code creation to create qrcode for .net column,
BusinessRefinery.com/QR Code
Interviews
to encode denso qr bar code and qr data, size, image with office excel barcode sdk manage
BusinessRefinery.com/QR Code JIS X 0510
qr code 2d barcode image syntax with .net
BusinessRefinery.com/qr codes
Before you can create a table, you need a schema in which to create the table .A schema is similar to a namespace in many other programming languages; however, there can be only one level of schemas (that is, schemas cannot reside in other schemas). There are already several schemas that exist in a newly created database: the dbo, sys, and information_schema schemas. The dbo schema is the default schema for new objects, while the sys and information_schema schemas are used by different system objects.. Before SQL Server 2005, schemas did not exist. Instead of the object residing in a schema the object was owned by a database user (however, the syntax was the same: <owner>.<object>) In these versions, dbo was recommended to own all objects, but this is not true anymore. Starting with SQL Server 2005, all objects should be created within a user-defined schema. Schemas are created using the CREATE SCHEMA statement, as shown in the following example of creating a schema and a table within that schema:
ssrs pdf 417
use reporting services 2008 pdf 417 maker to create pdf-417 2d barcode for .net email
BusinessRefinery.com/PDF 417
.net code 128 reader
Using Barcode decoder for function VS .NET Control to read, scan read, scan image in VS .NET applications.
BusinessRefinery.com/barcode standards 128
21
use excel barcode 3 of 9 drawer to insert bar code 39 for excel restore
BusinessRefinery.com/USS Code 39
code 39 barcode font crystal reports
use .net vs 2010 crystal report 3 of 9 generator to insert barcode code39 in .net abstract
BusinessRefinery.com/Code 3 of 9
End Sub
rdlc data matrix
using customized rdlc report to print data matrix on asp.net web,windows application
BusinessRefinery.com/datamatrix 2d barcode
.net data matrix reader
Using Barcode decoder for softwares Visual Studio .NET Control to read, scan read, scan image in Visual Studio .NET applications.
BusinessRefinery.com/2d Data Matrix barcode
Estimated lesson time: 10 minutes
vb.net data matrix generator vb.net
using barcode integrated for .net vs 2010 control to generate, create data matrix ecc200 image in .net vs 2010 applications. bit
BusinessRefinery.com/Data Matrix 2d barcode
vb.net pdf417
using button .net to create pdf 417 for asp.net web,windows application
BusinessRefinery.com/pdf417 2d barcode
Case Scenario 1: Validating Input
As you learned in 1, Active Directory is the tool used to manage, organize, and locate resources on your network. DNS Server service is integrated into the design and implementation of Active Directory, making them a perfect match.
SPatiaL MetHODS
" Display the value returned from the Web service to
Lesson Review
Step 1: Confirm Group Membership
rithm s key pair. Pass true to this method to export both the private and public key, or pass false to export only the public key.
understanding SSAS Processing options
Prompt for a username and password
B. Alex can e-mail a remote assistance invitation to the administrator using the
4. Other Log Shipping Monitor Settings options include history retention, which determines how long the log shipping configuration will retain history information about the task, and the name and schedule for the alert job that raises an alert if there are problems in any log shipping jobs. You should use the same schedule as the schedule for the log shipping backup task.
Subnet mask (required)
ChAPTER 3
There is also the option to make a folder private. When you make a folder private, only the owner of the folder can access its contents. You can make folders private only if they are in the user s personal user profile (and only if the disk is formatted with NTFS, the native file system for Windows XP; you will learn more about NTFS in 11). A personal user profile defines customized desktop environments, display settings, and network and printer connections, among other things. Personal user profile folders include My Documents and its subfolders, Desktop, Start Menu, Cookies, and Favorites. To locate the list of local user profiles, right-click My Computer, select Properties, and, from the Advanced tab, in the User Profiles section, select Settings. To view a personal user profile, browse to C:\Documents And Settings\User Name, as shown in Figure 9-3.
More Code 128 on .NET
Articles you may be interested
how to generate barcode in c# net with example: MORE INFO in .NET Deploy Denso QR Bar Code in .NET MORE INFO
generate barcode c#: Installing and Configuring Office Applications in vb Integrate QR Code in vb Installing and Configuring Office Applications
create barcode image using c#: Site A Site B in visual C# Implementation QR in visual C# Site A Site B
java barcode reader library download: Lesson 1: Configuring the Client Endpoint in C# Encoder gs1 datamatrix barcode in C# Lesson 1: Configuring the Client Endpoint
barcode generator c# wpf: Creating Serviced Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 509 in c sharp Assign EAN-13 in c sharp Creating Serviced Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 509
c# qr code library open source: Objective 6.5 in C#.net Draw QR Code in C#.net Objective 6.5
how to generate barcode in asp net c#: Real-time Sampling Service Monitors OWA servers (and other Web servers) in in .NET Render Denso QR Bar Code in .NET Real-time Sampling Service Monitors OWA servers (and other Web servers) in
vb.net free barcode component: Lesson 1: Creating and Managing Groups in .NET Compose Code-128 in .NET Lesson 1: Creating and Managing Groups
Using Additional Query Techniques in c sharp Assign qr bidimensional barcode
zxing create qr code c#: 6: Case Scenario Answers in .net C# Integrated qrcode in .net C# 6: Case Scenario Answers
barcode generator c# source code: g15im03 in visual C#.net Implementation qr bidimensional barcode in visual C#.net g15im03
generate barcode c# free: Adding counters to a Performance log in .NET Creation Quick Response Code in .NET Adding counters to a Performance log
read barcode from image c#.net: Monitoring Printers in .net C# Integrating 3 of 9 in .net C# Monitoring Printers
zxing c# qr code example: Objective 4.2 in visual C# Printer qrcode in visual C# Objective 4.2
vb.net generate 2d barcode: Explain SQL Server s index structure. in .NET Integrated barcode pdf417 in .NET Explain SQL Server s index structure.
barcode recognition vb.net: Windows Vista Upgrades and Migrations in .NET Use USS Code 39 in .NET Windows Vista Upgrades and Migrations
generate barcode c#.net: Lesson 2: Programming Transactions in .net C# Render ECC200 in .net C# Lesson 2: Programming Transactions
open source qr code library vb.net: Review in .NET Encoder QR Code JIS X 0510 in .NET Review
c# qr code reader webcam: Ping-of-death attack Port scan in .net C# Compose qr-codes in .net C# Ping-of-death attack Port scan
how to read data from barcode scanner in c#: Lesson 2: Creating Report Schedules and Subscriptions in C#.net Printer code-128b in C#.net Lesson 2: Creating Report Schedules and Subscriptions | http://www.businessrefinery.com/yc2/284/74/ | CC-MAIN-2021-49 | refinedweb | 1,503 | 58.38 |
#include <iostream>
#include <time.h>
#include <stdlib.h>
#include "/home/theodore/lotto"
using namespace std;
main ()
{
srand(time(0));
int One, Two, Three, Four, Five;
One = rand() % 35 + 1;
Two = rand() % 35 + 1;
Three = rand() % 35 + 1;
Four = rand() % 35 + 1;
Five = rand() % 35 + 1;
while (One == Two || One == Three || One == Four) {
One = rand() % 35 + 1;
}
while (Two == Three | Two == Four) {
Two = rand() % 35 + 1;
}
while (Three == Four) {
Three = rand() % 35 +1;
}
cout << "Here is your Texas Two Step quick pick: " << One << " " << Two << " " << Three << " " << Four ;
cout <<" (" << Five << ") \n";
cout << Lotto ;
}
// Having a hard time, first semester in programming..
//I don't understand for one, the proper command to read from the "Lotto" file
//Which I am trying to read random lines from, and second, the form in which the lines //should be in that are in the Lotto file
//PLEASE HELP...yes, i'm very new to this.. | https://www.daniweb.com/programming/software-development/threads/317804/random-out-from-file | CC-MAIN-2018-13 | refinedweb | 148 | 77.1 |
class Solution(object): def sumOfLeftLeaves(self, root): def dfs(root, left): if not root: return if left and not root.left and not root.right: cache[0] += root.val dfs(root.left, True) dfs(root.right, False) cache = [0] dfs(root, False) return cache[0]
Complexity is
O(n) time and
O(n) space. This is because we use DFS so it takes stack space proportional to height of tree (which is
n in the worst case).
@StefanPochmann said in Short Python:
lev? Is that debug stuff you forgot to take out? :-)
Haha yes, thank you
Bit different, including one of my favorite tricks:
def sumOfLeftLeaves(self, root): def dfs(root, left=False): if not root: return 0 if left and root.left is root.right: return root.val return dfs(root.left, True) + dfs(root.right) return dfs(root)
@agave Nice! But why
cache can deliver its value in the recursive process? Is this the property of a
list?
Your solution seem very nice.
Could you explain
what is the meaning of
if left and root.left is root.right:
here, could you explain it?
Looks like your connection to LeetCode Discuss was lost, please wait while we try to reconnect. | https://discuss.leetcode.com/topic/60926/short-python | CC-MAIN-2017-51 | refinedweb | 202 | 79.97 |
Stephen StewartPro Student 4,840 Points
Stuck on this bad boy!
I tried different attempts of trying to crack this and my head is about to pop :/ I can't think of what I'm doing wrong or it is the lack of oxygen going to my brain to think.
using Treehouse.Models; namespace Treehouse.Data { public class VideoGamesRepository { public VideoGame GetVideoGame(int id) { foreach(var videoGame in _videoGames) { if (videoGame.Id == id) { return videoGame; } } return null; } + ")"; } } } }
1 Answer
Steven Parker179,649 Points
Don't forget the caution given at the start of the challenge: "Important: In each task of this code challenge, the code you write should be added to the code from the previous task.".
I see your task 2 code, but I don't see the function you must have written to pass task 1. Perhaps you replaced it instead of adding to it while working on task 2.
It might help to leave the two comment lines in the code, and add each task's function below the related comment. | https://teamtreehouse.com/community/stuck-on-this-bad-boy | CC-MAIN-2020-05 | refinedweb | 174 | 76.25 |
Android Format date with time zone
android date format dd/mm/yyyy
android simpledateformat timezone
android date to string
android get utc time
convert date to utc android
android get timezone programmatically
how to change time format in android programmatically
I need to format the date into a specific string.
I used
SimpleDateFormat class to format the date using the pattern "
yyyy-MM-dd'T'HH:mm:ssZ" it returns current date as
"
2013-01-04T15:51:45+0530" but I need as
"
2013-01-04T15:51:45+05:30".
Below is the coding used,
Calendar c = Calendar.getInstance(); SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ssZ", Locale.ENGLISH); Log.e(C.TAG, "formatted string: "+sdf.format(c.getTime()));
Output: formatted string:
2013-01-04T15:51:45+0530
I need the format as
2013-01-04T15:51:45+05:30 just adding the colon in between gmt time.
Because I'm working on Google calendar to insert an event, it accepts only the required format which I have mentioned.
You can use Joda Time instead. Its
DateTimeFormat has a
ZZ format attribute which does what you want.
Big advantage: unlike
SimpleDateFormat,
DateTimeFormatter is thread safe. Usage:
DateTimeFormatter fmt = DateTimeFormat.forPattern("yyyy-MM-dd'T'HH:mm:ssZZ") .withLocale(Locale.ENGLISH);
SimpleDateFormat, To understand the concept, let us consider the below scenarios where we get the time stamp from the server in GMT format (assuming), then we� This example demonstrates how do I format date and time in android. Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project.
You can also use "ZZZZZ" instead of "Z" in your pattern (according to documentation). Something like this
Calendar c = Calendar.getInstance(); SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ssZZZZZ", Locale.ENGLISH); Log.e(C.TAG, "formatted string: "+sdf.format(c.getTime()));
Converting Date/Time Considering Time Zone — Android, In this tutorials, We explain How to Format DateTime in Android using SimpleDateFormat. Example Include These days most of the project required date and time formatting in Android. So we are setTimeZone(TimeZone. Android 10 deprecates the APK-based time zone data update mechanism (available in Android 8.1 and Android 9) and replaces it with an APEX-based module update mechanism. AOSP continues to include the platform code necessary for OEMs to enable APK-based updates, so devices upgrading to Android 10 can still receive partner-provided time zone data
What you can do is just add the ":" manually using substring(). I have faced this earlier and this solution works.
Format DateTime in Android, @{inheritDoc} */ public StringBuffer format(Date date, StringBuffer toAppendTo, FieldPosition fieldPosition) { dateFormat.setTimeZone(TimeZone.getDefault())� Get current time and date on Android Android Mobile Development Apps/Applications As per Oracle documentation, SimpleDateFormat is a concrete class for formatting and parsing dates in a locale-sensitive manner.
Why not just do it manually with regexp?
String oldDate = "2013-01-04T15:51:45+0530"; String newDate = oldDate.replaceAll("(\\+\\d\\d)(\\d\\d)", "$1:$2");
Same result, with substring (if performance is an issue).
String oldDate = "2013-01-04T15:51:45+0530"; int length = oldDate.length(); String newDate = oldDate.substring(0, length - 2) + ':' + oldDate.substring(length - 2);
java.text.DateFormat.setTimeZone java code examples, I tried above code in eclipse android to convert date according to timezone but I am getting same date and time for PST, IST, GMT. can you please say why I am not�
Try this
Calendar c = Calendar.getInstance(); SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ssZ", Locale.ENGLISH); System.out.println("formatted string: "+sdf.format(c.getTime())); String text = sdf.format(c.getTime()); String result = text.substring(0, 22) + ":" + text.substring(22); System.out.println("result = " + result);
How to convert Java Date into Specific TimeZone format, Formatting Date and Time. Format a date using the pattern specified with format() . Pass in Date and Time With Hour, Minute, and Time Zone. Android Question Formatting date and time into a string. Thread starter nibbo; I want to make a string = to the current date and time in the format yyyy-MM-dd HH
A Guide to Java's SimpleDateFormat, z shown timezone in abbreviated format e.g. IST or PST while Z shows relative to GMT e.g. GMT +8.00 or GMT -8.00. This helps to see on which timezone date is� Java SimpleDateFormat, Java Date Format, Java Date Time Format, java date format string, java time format, java calendar format, java parse date, Java DateFormat class, java.text.SimpleDateFormat, Java Date Format Example, java.text.DateFormat tutorial code.
How to display date in multiple timezone in Java with Example, Locale; import java.util.TimeZone; public class Main{ public static String formatDate(Date date, String formatString) { String result = ""; SimpleDateFormat sdf� Todays Day:THURSDAY Todays Date:October 19,2017 AD Current Time:12:15:44.232 PM Eastern Daylight Time Steps involved: The basic steps in the calculation of day/date/time in a particular time zone: Obtain a new Date object; A date format object is created in Simple Date Format
format Date by time zone GMT-0:00 - Android java.util, Date and time formats are specified by date and time pattern strings. Within date and X, Time zone, ISO 8601 time zone, -08 ; -0800 ; -08:00�
- I don't think you can with a simpledateformat. So you will need to insert the
:manually. Java 7 has introduced the
Xmarker that has more formatting options than
Zbut it is not available on android (yet?).
- If formatting manually is the task to be done(in reference to the above comment), you can search for the last
+and then add a
:after 2 indices.
- @KazekageGaara It could also be a
-...
- While in 2013 it was reasonable to use
Calendarand
SimpleDateFormat, don’t do that anymore. Those classes are poorly designed and now long outdated, the latter in particular notoriously troublesome. Instead use
OffsetDateTimeand
DateTimeFormatter, both from java.time, the modern Java date and time API. Or just
OffsetDateTime.toStringand no formatter.
- I found a slightly different way of doing this, given the: DateTime my_date = new DateTime(); ... Set the date to something my_date = ... Then format it with: String curr_date = my_date.toLocalDate().toString("yyyy-MM-dd");
- This is correct answer, @fargath you should mark it up.
- This should be the correct answer. I find it ridiculous when people refer to libraries as a primary solution instead of it being a secondary suggestion. It's like threading cloth with a sword.
- The documentation says it supports X.
- For me this doesn't work. I get this string:
2018-07-06T11:46:43+0200. The colon in the timezone is missing. So it's not ISO8601.
- I fail to see the linked documentation mentioning the possibility of 5
Z(
ZZZZZ). Is that just me? Also on my Java 8 I get
formatted string: 2019-03-21T16:06:30+0100without colon, but the outcome on Android may be different.
- Apart from
SimpleDateFormatand
TimeZonebeing poorly designed and long outdated: On my Java 8
TimeZone.getAvailableIDs()[358]is
Australia/Yancowinna, and who knows what it will be in future Java versions? I wouldn’t be too sure about Android either, but haven’t tried.
- you are right - the getAvailableID's is unreliable. I edited accordingly | http://thetopsites.net/article/55280484.shtml | CC-MAIN-2021-10 | refinedweb | 1,222 | 50.73 |
virtualkeyboard_show()
Display the virtual keyboard.
Synopsis:
#include <bps/virtualkeyboard.h>
BPS_API void virtualkeyboard_show()
Since:
BlackBerry 10.0.0
Arguments:
Library:libbps (For the qcc command, use the -l bps option to link against this library)
Description:
The virtualkeyboard_show() function causes the virtual keyboard to be displayed (if it is not already visible). When this function is called, the VIRTUALKEYBOARD_EVENT_VISIBLE event is sent unless the virtual keyboard was already visible.
When the device is connected to a keyboard (e.g., via Bluetooth), the virtual keyboard will not be shown unless the user swipes up with two fingers from the bottom bezel. This also applies to the simulator, which interprets the PC keyboard as being connected to the virtual device.
Devices that have a built-in keyboard do not support the use of the virtual keyboard. The virtual keyboard can't be displayed on such devices.
Returns:
Nothing.
Last modified: 2014-05-14
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus | https://developer.blackberry.com/native/reference/core/com.qnx.doc.bps.lib_ref/topic/virtualkeyboard_show.html | CC-MAIN-2016-50 | refinedweb | 167 | 50.84 |
Hi,
How do I customize a C# control..say I want rounded corners for all my text boxes..
In Web apps I know you can add it and use the class property of a asp textbox control. How do I achieve it in Windows forms?
Thanks.
View Complete Post
Hi All,
I am using a browser control in my WPF application. The users of this application will be using this control for viewing content that could contain malicious scripts. I want to set the Trust Level so that none of the scripts should be executed. I have added
the Attribute to the method that creates the Browser control. I dont think its working though. Any thoughts on how this can be done?
[
PermissionSet(SecurityAction.Assert,
Name = "Nothing")]
public System.Windows.Forms.WebBrowser
CreateBrowserControl(
PermissionSet
(
SecurityAction .Assert, Name =
"Nothing"
)]
public
System.Windows.Forms.
Dynamically Centering a Windows Forms Label Control I'm working on an athletic workout journal application that will include an Interval / Countdown Timer. I plan to build several versions of this application in different technologies (WPF, Web, Phone) but I'm building the first version in Windows Forms. I want the user to be able to dynamically set the size of the "Stopwatch" display font, then resize the form and have the text re-center in the form, taking into consideration the new font size. Stuff like this is easy in HTML, but not so much in Windows Forms. Though I'm still in the early stages I thought I'd share the bit of math that I've come up with and invite any suggestions you might have. Code Snippet using System.Text; using System.Windows.Forms; namespace WorkOutTimer { public partial...(read more)
public
System.Windows.Forms.
Is Dispatcher.BeginInvoke really NOT thread safe, while the old System.Windows.Forms.Control.BeginIn
Hall of Fame Twitter Terms of Service Privacy Policy Contact Us Archives Tell A Friend | http://www.dotnetspark.com/links/38867-c-sharp-windows-forms-customize-control.aspx | CC-MAIN-2017-30 | refinedweb | 320 | 58.99 |
I'm wondering whether there is any way to shorten anonymous function declaration in JavaScript through the utilization of preprocessor/compiler like Google Closure. I figure it'd be quite neat for callbacks.
For example, normally I'd write a qunit test case this way:
test("Dummy test", function(){ ok( a == b );});
test("Dummy test", #(ok a b));
Without worrying about preprocessors or compilers, you could do the following which shortens the callback syntax. One thing with this is that the scope of "this" isn't dealt with...but for your use case I don't think that's important:
var ok = function(a,b) { return (a==b); }; var f = function(func) { var args = Array.prototype.slice.call(arguments, 1); return function() { return func.apply(undefined,args); }; }; /* Here's your shorthand syntax */ var callback = f(ok,10,10); console.log(callback()); | https://codedump.io/share/Jn3rK47vD7GG/1/anonymous-function-declaration-shorthand-javascript | CC-MAIN-2017-09 | refinedweb | 141 | 54.93 |
Layout in ASP.NET Core
By Steve Smith and Dave Brock
Pages and views frequently share visual and programmatic elements. This article demonstrates how to:
- Use common layouts.
- Share directives.
- Run common code before rendering pages or views.
This document discusses layouts for the two different approaches to ASP.NET Core MVC: Razor Pages and controllers with views. For this topic, the differences are minimal:
- Razor Pages are in the Pages folder.
- Controllers with views uses a Views folder for views..
By convention, the default layout for an ASP.NET Core app is named _Layout.cshtml. The layout files for new ASP.NET Core projects created with the templates are:
Razor Pages: Pages/Shared/_Layout.cshtml
Controller with views: Views/Shared/_Layout.cshtml
The layout defines a top level template for views in the app. Apps don't require a layout. Apps can define more than one layout, with different views specifying different layouts.
The following code shows the layout file for a template created project with a controller and views:
<!DOCTYPE html> <html> <head> <meta charset="utf-8" /> <meta name="viewport" content="width=device-width, initial-scale=1.0" /> <title>@ViewData["Title"] - WebApplication1</title> > </head> <body> -WebApplication1</a> </div> <div class="navbar-collapse collapse"> <ul class="nav navbar-nav"> <li><a asp-Home</a></li> <li><a asp-About</a></li> <li><a asp-Contact</a></li> </ul> </div> </div> </nav> <partial name="_CookieConsentPartial" /> <div class="container body-content"> @RenderBody() <hr /> <footer> <p>© 2018 - WebApplication1</p> </footer> <> <script src="~/js/site.min.js" asp-</script> </environment> @RenderSection("Scripts", required: false) </body> </html>
Specifying a Layout
Razor views have a
Layout property. Individual views specify a layout by setting this property:
@{ Layout = "_Layout"; }
The layout specified can use a full path (for example, /Pages/Shared/_Layout.cshtml or /Views/Shared/_Layout.cshtml) or a partial name (example:
_Layout). When a partial name is provided, the Razor view engine searches for the layout file using its standard discovery process. The folder where the handler method (or controller) exists is searched first, followed by the Shared folder. This discovery process is identical to the process:
<script type="text/javascript" src="~/scripts/global.js"></script> @RenderSection("Scripts", required: false)
If a required section isn't found, an exception is thrown. Individual views specify the content to be rendered within a section using the
@section Razor syntax. If a page or view defines a section, it must be rendered (or an error will occur).
An example
@section definition in Razor Pages view:
@section Scripts { <script type="text/javascript" src="~/scripts/main.js"></script> }
In the preceding code, scripts/main.js is added to the
scripts section on a page or view. Other pages or views in the same app might not require this script and wouldn't define a scripts section.
The following markup uses the Partial Tag Helper to render _ValidationScriptsPartial.cshtml:
@section Scripts { <partial name="_ValidationScriptsPartial" /> }
The preceding markup was generated by scaffolding Identity.
Sections defined in a page and pages can use Razor directives to importing namespaces and use Pages (or Views) folder. A _ViewImports.cshtml file can be placed within any folder, in which case it will only be applied to pages or views within that folder and its subfolders.
_ViewImports files are processed starting at the root level and then for each folder leading up to the location of the page or view itself.
_ViewImports settings specified at the root level may be overridden at the folder level.
For example, suppose:
- The root level _ViewImports.cshtml file includes
@model MyModel1and
@addTagHelper *, MyTagHelper1.
- A subfolder _ViewImports.cshtml file includes
@model MyModel2and
@addTagHelper *, MyTagHelper2.
Pages and views in the subfolder will have access to both Tag Helpers and the
MyModel2 model.
If multiple _ViewImports.cshtml files are found in the file hierarchy, the combined behavior of the directives are:
Code that needs to run before each view or page should be placed in the _ViewStart.cshtml file. By convention, the _ViewStart.cshtml file is located in the Pages (or Views) folder. The statements listed in _ViewStart.cshtml are run before every full view (not layouts, and not partial views). Like ViewImports.cshtml, _ViewStart.cshtml is hierarchical. If a _ViewStart.cshtml file is defined in the view or pages folder, it will be run after the one defined in the root of the Pages (or Views) folder (if any).
A sample _ViewStart.cshtml file:
@{ Layout = "_Layout"; }
The file above specifies that all views will use the _Layout.cshtml layout.
_ViewStart.cshtml and _ViewImports.cshtml are not typically placed in the /Pages/Shared (or /Views/Shared) folder. The app-level versions of these files should be placed directly in the /Pages (or /Views) folder.
Feedback | https://docs.microsoft.com/en-us/aspnet/core/mvc/views/layout?view=aspnetcore-2.0 | CC-MAIN-2020-10 | refinedweb | 782 | 59.4 |
PdfMasher--E-Book Conversion
If you've had problems reading PDF files on various devices (like mobile phones), PdfMasher may be just what you're looking for. According to the Web site:
PdfMasher is a tool to convert PDF files containing text in ready-for-e-book HTML files. Most e-book readers support PDF files natively, but it's often a real pain to read those documents, because we don't have font-size control over the document like we have with native e-books. In many cases, we have to use the zooming feature, and it's just a pain. Another drawback of PDFs on e-book readers is that annotations are not supported.
There are already tools to convert PDFs to e-books, like Calibre, but what they do is try to guess the role of each piece of text in the PDF (and that's if you're lucky). I think that in all but the simplest cases, it's a mistake to think that anything short of an AI can do that kind of guessing.
Using PdfMasher, PDF files like these can be manipulated manually for conversion into other formats.
With the original PDF on the left and outputted HTML on the right, this e-book now can be read on any device without readability woes.
Installation
If you can install this with a binary, by all means do so. Available on the site are 32- and 64-bit Linux .deb packages for the ubiquitous Intel x86 architecture. For masochists, or those who don't have an Intel-based CPU, there is the obligatory source.
In order to grab the latest source, first you need to install hg, which was under the package name "mercurial" on my Kubuntu system. Once that's installed, grab the latest source by entering the command:
$ hg clone
Once that has finished downloading, keep this terminal open where it is, because next you'll need to sort out the library requirements, and then you'll return to this terminal and continue the installation. As far as dependencies are concerned, the documentation lists the following:
Python 3.2
pdfminer3k
jobprogress 1.0.0
Sphinx 1.0.7
pytest 2.0.3 to run unit tests
Markdown 2.0.3
PyQt 4.7.5
With the dependencies out of the way, re-open the terminal from before and enter the following commands:
$ cd pdfmasher $ python configure.py $ python build.py
Then, run the program with:
$ python run.py
If you're lucky enough to have the binary installed, you simply can run the program with the command:
$ pdfmasher
Usage
Before I try to explain how to use PdfMasher myself, I should include the following from the Web site:.
Before changing things under PdfMasher, I recommend having your PDF open to one side in another program so you can cross-check bits of text as you're culling sections. When you're ready to start, click on Open File and choose the PDF you want to "mash".
Once open, the pane below fills up in a manner that at first glance is overwhelming and incomprehensible. However, on a very basic level, each line is a section of text in your PDF. If you explore each line, you can check which part of the PDF is being examined, and if it's redundant, you can choose to ignore it in the conversion.
Looking at these PdfMasher lines in detail, each line has an X and Y axis reference, as well as font size, text length and page number. Whenever you click a line, the full text content of its section in the PDF is shown in the pane below.
If you've decided on which sections to remove, click Ignore to cut out the text from the final product. Click Normal to reinstate the text for inclusion. Depending on which device you'll be reading the resulting e-book, the header and footer information may be something you want to cut out of the page.
For example, in the screenshot, I'm removing the beginning references and page headers in a psychology paper that otherwise would leave a hard-to-navigate, garbled mess if I translated it into something I could read on my phone.
However, if what you're preparing is intended to be something like a public Web page instead of a trimmed-down e-book, you might want to use the Title and Footnote buttons. Title will result in an H1 title header in the outputted HTML. The Footnote button will move the text to the bottom of the document, and PdfMasher will try to make one of the cool hyperlinks mentioned earlier.
Once you've finished editing your document, click on the Build tab below, and then click on the Generate Markdown button. A raw text file will be generated in the same folder as the original PDF. Click on Reveal Markdown, and the source folder will be opened in your default file manager. Edit Markdown will open the actual text file in your default text editor, and View HTML will show the end product in a Web browser.
If you've made any errors, the output will reveal them quickly, and you can go back and simply start the Build process again. From here, you either can leave your output as is or convert your files into specific e-book formats.
Either way, PdfMasher uses some very simple methods to create something very clever and is a must-have for any regular e-book reader.
Learn is a great feature and I
This is a great feature and I am sure that there will be a lot of people interested to use it, I think I am going to install too this on my phone.
Asigurare
cannot build pdfmasher on Ubuntu 10.04 after all sorts packages
I've tried loading all sorts of packages from all sorts of sources and continue to fail at a pdfmasher build. I resorted to 'build' after I could not satisfy the DEB file dependencies.
Running python ./configure.py --ui=qt results in:
Traceback (most recent call last):
File "./configure.py", line 11, in
from argparse import ArgumentParser
ImportError: No module named argparse
Another time the same command gave me:
File "./configure.py", line 15
if ui not in {'cocoa', 'qt'}:
^
SyntaxError: invalid syntax
So then I tried to use 'pip' to install things:
prompt$ pip install requirements-lnx.txt
Unknown or unsupported command 'install'
I'm new to python development, but I've meed a code slinger for year, so I suspect that I'm missing something very obvious.
~~~ 0;-Dan
Ubuntu 10.04 and Python 3.2
The .deb file available on the pdfmasher site has a dependency for >=python3.2. Ubuntu 10.04 does not have this in the repos. Anyone have a howto or know of a PPA with the dependencies to get this going on Ubuntu 10.04? The IRIE Shinsuke PPA has python 3.2 but not all the dependencies.
Thank you
Tools like this make computers computers again.
Now if we can just stop people making PDF's and other non-computer shaped documents...
PDFMasher is a good tool, but not a replacement of Calibre
hi John,
Please make it clear that this tool, is not, a replacement of Calibre. Maybe it's a better PDF converter, but it is definitely not an eBook
catalog.
Thanks a lot for the introduction of this tool, it is very handy.
And why not to mail Kovid Goyal, Calibre author, and present this tool? Maybe there's a way to use them in the same interface. They are both Python devs.
Best,
Eugenio
The Calibre comments are the
The Calibre comments are the author's, not mine. You may wish to send your objections their way.
John Knight is the New Projects columnist for Linux Journal. | http://www.linuxjournal.com/content/pdfmasher-e-book-conversion?quicktabs_1=1 | CC-MAIN-2015-40 | refinedweb | 1,317 | 70.94 |
Results 1 to 1 of 1
- Join Date
- Dec 2012
- 4
- Thanks
- 1
- Thanked 0 Times in 0 Posts
misoproject/dataset - how do write the data to the browser screen?
EDIT - I have updated the code so all the js files link to remote copies. Previously the miso.ds.deps.ie.0.4.1.js file was a locally referenced file. This should allow you to run the code.
Hi,
I have managed to import information from a google spreadsheet into my page using the misoproject/dataset scripts.
The dataset is named as the var = ds This is the whole spreadsheet.
I have then created a subset of just one row of ds and this is named var = subset
I can see this subset in the Firefox Console.
Next I want to place this subset into a html table viewable in the browser.
How do I make that next step? I have been trying document.write and other methods but I cannot get it to work.
Eventually I want to create a mobile friendly web site that has many tables on it, each one populated by different data subsets, each one of those derived from the original larger dataset ds.
So if you know of a way to create an html table and populate it with subset data, I can then repeat that method for each table I need to create. This might not be the most efficient way but this is only for a site I want to create for a few friends to use. It's to track our sports results predicting contest that I have until now just run off a a spreadsheet.
Thanks. My code is below...
Code:
<!DOCTYPE html> <html> <head> <meta charset="utf-8"> <link rel="stylesheet" href="" /> <script src=""> </script> <script src=""> </script> <script src=""> </script> </head> <title>testing data feeds</title> </head> <body> <script language="javascript" type="text/javascript"> var ds = new Miso.Dataset({ importer: Miso.Dataset.Importers.GoogleSpreadsheet, parser: Miso.Dataset.Parsers.GoogleSpreadsheet, key: "0Ap-cOp8vqNpadG5GXzJsbmtRNlpObzhuRjIwanZfRWc", worksheet: "1" }); _.when(ds.fetch()).then(function(){ // create subset of just one row var subset = ds.rows(function(row){ return (row.Qcode === 201205); }); console.log("Subset length", subset.length); console.log("Question number", subset.column("Qcode").data); console.log("Matt ", subset.column("Matt").data); console.log("Rob ", subset.column("Rob").data); console.log("Paul ", subset.column("Paul").data); console.log("Ross ", subset.column("Ross").data); }); </script> </body> </html>
Last edited by masmedia; 01-03-2013 at 05:07 PM. | http://www.codingforums.com/javascript-programming/285172-misoproject-dataset-how-do-write-data-browser-screen.html | CC-MAIN-2016-07 | refinedweb | 415 | 66.84 |
16 April 2012 10:08 [Source: ICIS news]
SINGAPORE (ICIS)--?xml:namespace>
GDP growth is expected to accelerate to 4.2% in 2013, the Bank of Korea (BOK) said in a statement.
An easing of uncertainties over the eurozone sovereign debt crisis will be positive for economic growth, but the global slowdown and higher oil import prices will weigh down on the South Korean economy, it said.
“Export growth is forecast to slow somewhat, owing to the cooling of world trade growth as a result of the economic recession in the euro area,” the BOK said.
Meanwhile, the import prices of oil are expected to increase to $118/bbl in 2012 from $108/bbl last year, it added.
In 2013, the BOK forecasts a narrowing of the country’s current account surplus to around $12.5bn.
Current account measures an economy’s trade in goods, services, tourism and investment with the rest of the | http://www.icis.com/Articles/2012/04/16/9550504/south-korea-central-bank-trims-2012-gdp-growth-forecast-to-3.5.html | CC-MAIN-2014-42 | refinedweb | 154 | 70.02 |
Hi there folks. In this post I am going to teach you how to increase the recursion depth in python. Most of us at some time get this error :
RuntimeError: maximum recursion depth exceeded
If you want to fix this error just increase the default recursion depth limit but how to do it ? Just import the sys module in your script and in the beginning of the script type this :
sys.setrecursionlimit(1500)
This will increase the default limit to 1500. For the record the default limit is 1000. I hope that you found this post useful. Do share this on facebook and twitter and stay tuned for our next post.
Advertisements
3 thoughts on “Fixing error – maximum recursion depth reached”
That’s a band-aid though. Instead of going into these crazy-deep (at least for Python, it would be another story in LISP or Haskell) recursions, you should rewrite your algorithm to an iterative one. If it is tail-recursive, a loop will suffice. In other cases, you can simply keep a list of tasks. For example, instead of
def fib(x):
if x < 2: return x
return fib(x-1) + fib(x-2)
you can write it iteratively like
import collections
def fib(x):
tasks = collections.deque([x])
res = 0
while tasks:
v = tasks.pop()
if v < 2:
res += v
continue
tasks.append(v-1)
tasks.append(v-2)
return res
As you can see, the iterative variant is quite messy, but it makes the state explicit. (Full code at )
not working babamunini
Thank you for that. I had a perfectly working algorithm that required deeper stack to handle longer input. this worked – thanks again | https://pythontips.com/2013/08/31/fixing-error-maximum-recursion-depth-reached/ | CC-MAIN-2019-30 | refinedweb | 278 | 74.69 |
How to get the extension of a file in C++
In this tutorial, we will learn how to get the extension of a file in C++. We will create a function and pass the name of the file as an argument and give the output as the extension of the file. Let’s discuss it in detail.
We need to follow a few steps to get the extension of a file in C++. These are discussed here.
- We create a function extension() and pass the filename which is a string as an argument in the function.
- This function stores the position of the last “.” in the passed string.
- Then it stores the substring after “.” in a different string variable.
- That’s it. Now, we can print the substring which is the extension of the given file.
Also read: Rename a File in C++?
C++ program to get the extension of a file by the file name
Below you can see the C++ code implementation of the above algorithm:
#include <iostream> #include <string.h> using namespace std; void extension(string file_name); int main() { cout<<"Example program to find file extension.\n\n\n"; extension("this_file.txt"); extension("thisfile.txt"); extension("this\\file.txt"); extension("\\this\\file.txt"); extension("\\this.file.txt"); return 0; } void extension(string file_name) { //store the position of last '.' in the file name int position=file_name.find_last_of("."); //store the characters after the '.' from the file_name string string result = file_name.substr(position+1); //print the result cout<<"The file "<< file_name<<" has <." << result << "> extension."<<endl; }
And the output of the above program will be:
Example program to find file extension. The file this_file.txt has <.txt> extension. The file thisfile.txt has <.txt> extension. The file this\file.txt has <.txt> extension. The file \this\file.txt has <.txt> extension. The file \this.file.txt has <.txt> extension.
In our C++ program, you can see, we have used find_last_of() function of std::string class to find the rightmost ‘.’ in the file name. And then we have stored the following substring using the substr() function. This is the resultant string which returns us the extension of the file.
The resultant string is printed as the output before the return statement. Note that while calling the function we have used “//” instead of ‘/’ to specify the directories because ‘/’ is a special character.
Thank you. | https://www.codespeedy.com/get-the-extension-of-a-file-in-cpp/ | CC-MAIN-2022-27 | refinedweb | 391 | 68.36 |
Hi,
Still learning about python, I just figured there's actually a csv module which would help a lot for my CSV plugin... BUT, as I try to use it I get:
AttributeError: 'module' object has no attribute 'reader'
At first I thought it was because my plugin was named csv.py, so for some namespace issues it would try to find reader from my plugin, but I've tried it in another of my plugins and I still get the error.
According to the csv module and reader object are available since version 2.3, and if I'm not mistaken Sublime Text uses version 2.6?
I'm not sure if I misunderstand the language / its import system or what is made available to us but any help would be greatly appreciated!
Cheers
Eric | http://www.sublimetext.com/forum/viewtopic.php?p=25686 | CC-MAIN-2015-18 | refinedweb | 136 | 66.57 |
Whaa?
chan-split is a haskell library that is a wrapper around
Control.Concurrent.Chans
that separates a Chan into a readable side and
a writable side.
We also provide two other modules:
Data.Cofunctor (because there didn’t seem
to be one anywhere), and
Control.Concurrent.Chan.Class. The latter creates
two classes:
ReadableChan and
WritableChan, making the fundamental chan
functions polymorphic, and defining an instance for standard
Chan as well as
MVar (an MVar is a singleton bounded channel, wanna fight about it?).
Why?
Having separate read/write sides makes it easier to reason about your code,
supports doing some cool things (defining
Functor and
Cofunctor
instances), and makes more sense (e.g. the function of
dupChan
in the base library is much easier to
understand as an operation that happens on an
OutChan).
Also I use it in (the coming new, less stupid version of) my module simple-actors.
Where?
You can get it with a:
cabal install chan-split
And check out the docs on hackage. Or check out the source on github and send me pull requests.
Usage
Let’s write the numbers 1 - 10 to the InChan and read them as a stream in the OutChan:
module Main where import Control.Concurrent.Chan.Split import Control.Concurrent(forkIO) main = do -- Instead of a single Chan, we initialize a pair of (InChan,OutChan) (inC,outC) <- newSplitChan -- fork a writer on the InChan forkIO $ writeStuffTo inC [1..10] -- read from the OutChan in the main thread: getStuffOutOf outC >>= mapM_ print . take 10
And we’ll make
writeStuffTo and
getStuffOutOf, simply be the standard
functions. Note that
writeListToChan is actually polymorphic.
writeStuffTo :: InChan Int -> [Int] -> IO () writeStuffTo = writeList2Chan getStuffOutOf :: OutChan Int -> IO [Int] getStuffOutOf = getChanContents
Now I’ll demonstrate the use of our Functor and Cofunctor instances by re- defining those two functions above, after importing our Cofunctor class
import Data.Cofunctor ... -- we could convert the [Int] to [String] here but will instead demonstrate the -- Cofmap instance: writeStuffTo :: InChan String -> [Int] -> IO () writeStuffTo = writeList2Chan . cofmap show -- likewise this demonstrates the Functor instance of OutChan: getStuffOutOf :: OutChan String -> IO [Int] getStuffOutOf = getChanContents . fmap read
All right, leave your love or hate and stay tuned for the new (less stupider) simple-actors lib.
Update: 10/26/2012
The functor and contravariant functor instances have long since been removed; check out the simple actors package for a simple approach if you want that.
As of v0.4 we implement
Chan from scratch (i.e. not as a simple wrapper
around
Chan) and in 0.5 we extend that to
TChan in
STM. The latter chan
type in particular is made much more sensible and understandable by splitting
into read and write sides. | http://brandon.si/code/module-chan-split-released/ | CC-MAIN-2015-14 | refinedweb | 452 | 63.49 |
To anyone using xorg.conf.d, this seems to break X. It'll add a xorg.conf file in /etc/X11 containing:
```
Section "ServerLayout"
EndSection
```
I assume it's trying to append it to a non-existent xorg.conf file.
Search Criteria
Package Details: wacom-utility 1.21-5
Dependencies (5)
Required by (0)
Sources (2)
Latest Comments
Noid commented on 2015-12-13 04:04
To anyone using xorg.conf.d, this seems to break X. It'll add a xorg.conf file in /etc/X11 containing:
capoeira commented on 2014-06-13 18:45
my cth470 is not recognized by the software.
do I have to run this as root?
what is the name of the program? couldn't find it in terminal
caleb commented on 2014-05-27 10:42
@hexadecagram Try the rel-5 package I just updated this to. Also check that /usr/share/applications is a directory and not a file on your system. If it is a a file, look at its contents and figure out which package wrote it (pacman -Qo /usr/share/applications) and un-install that.
hexadecagram commented on 2014-05-27 04:31
During install:
error: failed to commit transaction (conflicting files)
wacom-utility: /usr/share/applications exists in filesystem
MilanKnizek commented on 2014-03-25 20:35
Added Anonymous' PKGBUILD and disowned.
Aelius commented on 2014-03-25 04:20
deb2targz was removed from official repos in 2011, flagging as out of date.
Anonymous comment on 2012-02-07 20:08
Here is my PKGBUILD using the .tar.gz:
pkgname=wacom-utility
pkgver=1.21
pkgrel=3
pkgdesc="Graphical tablet configuration utility"
arch=('i686' 'x86_64')
url=""
license=('GPL')
depends=('gtk2' 'python2' 'pygtk' 'xf86-input-wacom' 'gksu')
source=("{pkgname}/${pkgname}_${pkgver}-3.tar.gz" wacom-utility.desktop)
md5sums=('51ff9257b6e0c511ee57d40cd76742ec'
'1d44b3571fd5e48b80b2dec5209fcf47')
build() {
tar xvf ${pkgname}_${pkgver}-3.tar.gz
rm ${pkgname}_${pkgver}-3.tar.gz
rm -r ${srcdir}/${pkgname}/*.pyc
rm -r ${srcdir}/${pkgname}/debian
rm ${srcdir}/${pkgname}/wacom-utility.desktop
mkdir -p ${pkgdir}/usr/share/applications
cp wacom-utility.desktop ${pkgdir}/usr/share/applications
cp -r ${srcdir}/${pkgname} ${pkgdir}/usr/share
}
giniu commented on 2012-01-28 09:59
Again, why use .deb for debian/ubuntu when there is cross-distribution .tar.gz ?
worldwise001 commented on 2012-01-28 02:23
deb2targz is no longer in the official repo:
Here's an updated PKGBUILD using ar instead:
pkgname=wacom-utility
pkgver=1.21
pkgrel=3
pkgdesc="Graphical tablet configuration utility"
arch=('i686' 'x86_64')
url=""
license=('GPL')
depends=('gtk2' 'python2' 'xf86-input-wacom' 'gksu')
makedepends=('binutils')
source=("{pkgname}_${pkgver}-3_all.deb" wacom-utility.desktop)
md5sums=('179a3a33cd5c2592f5cb0211203df940'
'1d44b3571fd5e48b80b2dec5209fcf47')
build() {
ar xv ${pkgname}_${pkgver}-3_all.deb
cd ${srcdir}
tar xvf data.tar.gz
tar xvf control.tar.gz
cd ..
cp -r ${srcdir}/usr ${pkgdir}/usr
cp ${srcdir}/wacom-utility.desktop ${pkgdir}/usr/share/applications
}
biginoz commented on 2011-09-13 05:13
updated & It's work!
mrenn commented on 2011-07-22 00:05
v1.21:
Traceback (most recent call last):
File "/usr/share/wacom-utility/wacom_utility.py", line 6, in <module>
import gtk
ImportError: No module named gtk
It must call python2 instead of python
mrenn commented on 2011-07-21 19:20
v1.21:
Traceback (most recent call last):
File "/usr/share/wacom-utility/wacom_utility.py", line 6, in <module>
import gtk
ImportError: No module named gtk
Anonymous comment on 2011-05-17 14:40
PKGBUILD for the latest version (1.21).
Still using the .deb here
pkgname=wacom-utility
pkgver=1.21
pkgrel=1
pkgdesc="Graphical tablet configuration utility"
arch=('i686' 'x86_64')
url=""
license=('GPL')
depends=('gtk2' 'python' 'xf86-input-wacom')
makedepends=('deb2targz')
source=("{pkgname}_${pkgver}-3_all.deb")
md5sums=('179a3a33cd5c2592f5cb0211203df940')
build() {
deb2targz ${pkgname}_${pkgver}-3_all.deb
tar xvf ${pkgname}_${pkgver}-3_all.tar.gz
rm ${pkgname}_${pkgver}-3_all.tar.gz
cp -r ${srcdir}/usr ${pkgdir}/usr
}
giniu commented on 2011-02-28 19:44
Maybe better use as source and not depend on deb2targz?
dgellow commented on 2011-01-17 20:21
PKGBUILD for the last version.
pkgname=wacom-utility
pkgver=1.20
pkgrel=1
pkgdesc="Graphical tablet configuration utility"
arch=('i686' 'x86_64')
url=""
license=('GPL')
depends=('gtk2' 'python' 'xf86-input-wacom')
makedepends=('deb2targz')
source=("{pkgname}_${pkgver}-1_all.deb")
md5sums=('e54eee0d27697f79a720f062befa4d24')
build() {
deb2targz ${pkgname}_${pkgver}-1_all.deb
tar xvf ${pkgname}_${pkgver}-1_all.tar.gz
rm ${pkgname}_${pkgver}-1_all.tar.gz
cp -r ${srcdir}/usr ${pkgdir}/usr
}
glaville commented on 2010-12-10 22:12
Version 1.20 is out :
(source code in tar.gz form is available in the same directory).
giniu commented on 2010-11-16 13:49
Try and maybe add some of it to your autostart or profile.
Anonymous comment on 2010-11-16 13:14
yes it was easy to add the tablet to the wacom_data.py and all the stuff, but the application doesn't work fine (if I try to change the button1 setting to "Scroll wheel up" it changes this setting to button4 and it's the same for the other buttons) maybe I should edit some parameters of .xml file, furthermore I use X without xorg.conf and when I run the application it creates a new one that I have to remove manually each time. Anyway I think I have to learn to configure it manually without a GUI, could you suggest me a way to do it (i.e. where are the setting files or a tutorial,etc...)??? Again thanks for your help, without it maybe I'd have thrown it away!!!
giniu commented on 2010-11-15 20:23
right, mine (fun 2fg) is black (I again mistaken the serial number, mine is CTH460). Afaik the silver tablet isn't supported yet (check in file /usr/share/wacom-utility/wacom_data.py - there is 0xd1,0xd3 and 0xd4). You can try adding it - I think for now you can use the image & xml description of CTH460 (just copy it) and just add one line to file above with your model. pacman -Ql wacom-utility will tell you which files to check. It should be quite easy to add it I believe.
Anonymous comment on 2010-11-15 19:57
@giniu: first thank you very much for the help; then this is what I got:
[dede@casagrande ~]$ xidump -l
Virtual core pointer disabled
Virtual core keyboard keyboard
Virtual core XTEST pointer extension
Virtual core XTEST keyboard extension
Power Button extension
Video Bus extension
Power Button extension
AT Translated Set 2 keyboard extension
SynPS/2 Synaptics TouchPad extension
Wacom Bamboo Craft Finger pad extension
Wacom Bamboo Craft Finger touch extension
Wacom Bamboo Craft Pen eraser extension
Wacom Bamboo Craft Pen stylus extension
I found out on the web that both Bamboo Craft and Bamboo Fun are "pen&touch" the only difference is the size (Fun is bigger); so, because mine (in Italy) it's Bamboo Fun "small", I guess I have the Bamboo Craft and so the computer detects it the right way!! But now, why can't I configure it with wacom-utility or kcm-wacomtablet???
giniu commented on 2010-11-15 17:06
what does xidump -l return? Do you have all 4:
Wacom BambooFun 2FG 4x5 Pen eraser extension
Wacom BambooFun 2FG 4x5 Pen stylus extension
Wacom BambooFun 2FG 4x5 Finger pad extension
Wacom BambooFun 2FG 4x5 Finger touch extension
?
Anonymous comment on 2010-11-15 15:43
@giniu: I installed manually the wacom.ko driver module downloaded from the linuxwacom package on SourceForge and then I installed xf86-input-wacom
today I removed the wacom.ko module and disinstalled xf86-input-wacom with pacman, then I installed from AUR linuxwacom-bamboo-cth-ctl and I can use it (tapping,zooming,scrolling work) but:
- wacom-utility says "No graphics tablets detected" so I can't configure it
- kcm-wacomtablet recognizes it as a Bamboo Craft and behaves very strange infact it doesn't allow me to save a settings profile (if I try to add a profile "foo" when I click on "ok" I get the error: Profile "foo" doesn't exist)
I have a BAMBOO FUN Pen&Touch/model:CTH-461/product ID:00d2
giniu commented on 2010-11-14 17:00
( linuxwacom-bamboo-cth-ctl is xf86-input-wacom like in extra + newer kernel driver with more supported devices + more tools like xidump, wacdump good for debugging + custom .conf file )
giniu commented on 2010-11-14 16:32
well, that's same version as I have and it works for me. Do you use linuxwacom-bamboo-cth-ctl or other version of driver?
Anonymous comment on 2010-11-14 16:01
thanks it worked! but the application doesn't detect the tablet (it's a CTH-461) even if the drivers are installed correctly (it works with GIMP)!!! What can I do?
giniu commented on 2010-11-13 18:03
just do python2 /usr/share/wacom-utility/wacom_utility.py - of course you have to use python2 version of PKGBUILD, for example one posted below by nuno
Anonymous comment on 2010-11-13 12:32
Hi,
I installed the package but I can't find a way to run it. Please can someone tell me how to do it?
russo79 commented on 2010-11-09 01:22
Hi
Here is a "python3" compatible PKGBUILD.
russo79 commented on 2010-11-09 01:19
Hi
Here is a "python3" compatible PKGBUILD.
mathieu.clabaut commented on 2010-11-03 11:05
Python 3 is now the default. You should probably replace python by python2 in desktop file and in the python main file.
Anonymous comment on 2010-09-16 20:35
This package should have gksu as a dependency. I suppose most people using this already have gnome but some might not.
big_gie commented on 2010-09-03 19:16
Here's an updated PKGBUILD for v1.18-2
MilanKnizek commented on 2010-07-03 21:03
I do not know python, but an ugly hack solved the problem for me:
The original line no. 131 of /usr/share/wacom-utility/wacom_xorg.py:
newdata = ["Section \"ServerLayout\"\n","EndSection"]
adjusted to create just an empty file:
newdata = ["\n"]
MilanKnizek commented on 2010-07-03 20:47
The current X.org configuration does not create /etc/X11/xorg.conf by default, but it prefers to have the snippets in /etc/X11/xorg.conf.d instead.
The wacom utility (namely /usr/share/wacom-utility/wacom_xorg.py) tries parse non-existing xorg.conf, to back it up (fails), and installs a new xorg.conf. The trouble is that it has only two lines:
Section "ServerLayout"
EndSection
And starting GDM fails after reboot!
I assume that after installing the package xf86-input-wacom (which creates also xorg configuration file: /etc/X11/xorg.conf.d/50-wacom.conf), it is not necessary that wacom-utility touches /etc/X11/xorg.conf at all.
It is still usefull, since it creates $HOME/.wacom_utility, which is run on startup. | https://aur.archlinux.org/packages/wacom-utility/?comments=all | CC-MAIN-2016-30 | refinedweb | 1,822 | 57.57 |
Join the community to find out what other Atlassian users are discussing, debating and creating.
Hi All-
We are trying to calculate the age of each issue created in Jira. For this we have created a custom field of type Script Field. I am new to groovy.
Can some one help me in getting the age of the issue (current time - created date).
Any help is much appreciated.
Thanks in Advance,
Venkat
This is pretty straightforward:
import com.atlassian.core.util.DateUtils DateUtils.getDurationString(((new Date().getTime() - issue.getCreated().time) / 1000) as Long)
However, it may not do what you expect, as the value for a field is calculated at the time an issue is displayed, or updated. It's not updated in realtime.
NB - the code is suitable for the text template.
How can we do issue.getresolutionDate().time - issue.getCreated().time? Is that correct to plug into new Date().getTime() - issue.getCreated().time?
This is a great straight forward solution. Is there another solution that could calculate in realtime though?
This works great a displaying the age, but when you add the "Age" column to a list of issues, it sorts alphabetically rather than by actual "Age". Any suggestion on how to account for the proper sorting?
Then you need to convert the age to days, use the number searcher, and the number template...
Hello @Jamie Echlin @David Webb Can you please help with the formula to get the number of days? I have changed the template to Number field and the searcher template to Number Searcher but it gives me incorrect output in terms of days and error DateUtils.getDurationString(((new Date().getTime() - issue.getCreated().time)/(1000*60*60*24)) as Long) Error==> The indexer for this field expects a java.lang.Double but the script returned a java.lang.String - this will cause problems. Any help will be appreciated Thanks!!
getDurationString returns something like "3 weeks", so if that's what you want you need to change the indexer, probably change it to None.
Changing searcher to None and selecting Custom Template with a $value field works BUT I can't get it to parse down to days.
import com.atlassian.core.util.DateUtils
DateUtils.getDurationString(((new Date().getTime() - issue.getCreated().time) / (1000)) as Long)
This works, shows d,h,m
import com.atlassian.core.util.DateUtils
DateUtils.getDurationString(((new Date().getTime() - issue.getCreated().time) / (1000*60*60*24)) as Long)
wipes all values entirely. | https://community.atlassian.com/t5/Marketplace-Apps-Integrations/Age-of-the-Issue/qaq-p/275028 | CC-MAIN-2019-39 | refinedweb | 409 | 58.18 |
How to enable touch whit this code :QWSServer::setCursorVisible( true ) on friendly arm.
as follow this link:
I have problem for hide mouse pointer and I use this code for hiding that:
#include <QWSServer> int main(int argc, char *argv[]) { QApplication a(argc, argv); #ifdef Q_WS_QWS QWSServer::setCursorVisible( false); #endif ........
after this code my pointer is hide, BUT the touch is not worked for me. then I change the
falseto
truebut nothing changed just show the cursor at first but when i touch the screen the cursor hide and the buttons in my App not work, how can I active my touch on my arm device?
this is my device profile:
export QWS_MOUSE_PROTO=Tslib:/dev/touchscreen-1wire export QWS_KEYBOARD=TTY:/dev/tty1 | https://forum.qt.io/topic/72582/how-to-enable-touch-whit-this-code-qwsserver-setcursorvisible-true-on-friendly-arm | CC-MAIN-2019-22 | refinedweb | 122 | 65.86 |
0
Hi everyone,
I have worked some more on my code and this time I am getting some error that I am not able to resolve. Please help me.
The error says: "initialization skipped by case label." Please show me how to resolve this.
Here is the code. The error has a remark in red.
// Hw6_Bhasin.cpp : Defines the entry point for the console application. // #include "stdafx.h" #include <iostream> #include<fstream> #include<iomanip> #include<stdlib.h> using namespace std; int CarParking(int); int main() { int minutes = 0; char option = '\0', user; ifstream input; ofstream output; cout<< setiosflags(ios::left)<< "\t \t \t UH Visitors Parking "<<endl; cout<< setiosflags(ios::left)<< "_______________________________________________________________"<<endl; cout<<"\n"<<setiosflags(ios::right)<< "\t Help \t Car \t MotorCycle \t SeniorCitizen \t Quit"; cout<<"\n"<<setiosflags(ios::left)<<"_______________________________________________________________"<<endl; cout<<"Please select an option from above."<<endl; cin>> option; switch(option) { case 'h': case 'H': ifstream input("Help.txt"); break; case 'c':// Error C2360: initialization skipped by case label. case 'C': // same error here as well. cout<<"Please input the number of minutes you were parked in the lot."<<endl; cin>> minutes; CarParking( minutes); ofstream.output("Parking Charges.txt", ios::out); break; } return 0; } int CarParking(int min) { int total = 0, time = 0, fees = 0; ofstream output; time = (min/60); total = (time%2); fees = time + total; cout<< " Hours parked: "<<time<<endl; cout<<"Your parking fees: "<<fees<<endl; cout<<"Thankyou for using UH Visitors Parking.\n"; cout <<" Have a nice day."<<endl; output<< " \t UH Visitors Parking"<<endl; output<< " Hours parked: "<<time<<endl; output<<"Your parking fees: "<<fees<<endl; output<<"Thankyou for using UH Visitors Parking.\n"; output<<" Have a nice day."<<endl; return fees; }
Really appreciate your help.
Thanks. | https://www.daniweb.com/programming/software-development/threads/159820/initialization-skipped-by-case-label | CC-MAIN-2017-17 | refinedweb | 285 | 67.35 |
This page explains how to create Cloud Identity and Access Management (Cloud IAM) policies for authorization in Google Kubernetes Engine (GKE).
Overview
Every Google Cloud, GKE, and Kubernetes API call requires that the account making the request has the necessary permissions. By default, no one except you can access your project or its resources. You can use Cloud IAM to manage who can access your project and what they are allowed to do. Cloud IAM permissions work alongside Kubernetes RBAC, which provides granular access controls for specific objects in a cluster or namespace. Cloud IAM has a stronger focus on permissions at the level of the Google Cloud project and organization, though it does provide several predefined roles specific to GKE.
To grant users and service accounts access to your Google Cloud project, you add them as project team members, then assign roles to the team members. Roles define which Google Cloud resources an account can access and which operations they can perform.
In GKE, you can use Cloud IAM to manage which users and service accounts can access, and perform operations in,
Interaction with Kubernetes RBAC
Kubernetes' native role-based access control (RBAC) system also manages access to your cluster. RBAC controls access on a cluster and namespace level, while Cloud IAM works on the project level.
Cloud IAM and RBAC can work in concert, and an entity must have sufficient permissions at either level to work with resources in your cluster.
Cloud IAM Roles
The following sections describe the Cloud IAM Roles available in Google Cloud.
Predefined GKE Roles
Cloud IAM provides predefined Roles that grant access to specific Google Cloud resources and prevent unauthorized access to other resources.
Cloud IAM offers the following predefined roles for GKE:
To learn about permissions granted by each Cloud IAM role, refer to Permissions granted by Cloud IAM roles.
Primitive Cloud IAM roles
Primitive Cloud IAM roles grant users global, project-level access to all Google Cloud resources. To keep your project and clusters secure, use predefined Roles whenever possible.
To learn more about primitive roles, refer to Primitive roles in the Cloud IAM documentation.
Service Account User role
Service Account User grants a Google Cloud user account the permission to perform actions as though a service account were performing them.
Granting the
iam.serviceAccountUserrole to a user for a project gives the user all of the roles granted to all service accounts in the project, including service accounts that may be created in the future.
Granting the
iam.serviceAccountUserrole to a user for a specific service account gives a user all of the roles granted to that service account.
This role includes the following permissions:
iam.serviceAccounts.actAs
iam.serviceAccounts.get
iam.serviceAccounts.list
resourcemanager.projects.get
resourcemanager.projects.list
For more information about the ServiceAccountUser role, see ServiceAccountUser in the Cloud IAM documentation.
The following command shows the syntax for granting the Service Account User role:
gcloud iam service-accounts add-iam-policy-binding \ sa-name@project-id.iam.gserviceaccount.com \ --member=user:user \ --role=roles/iam.serviceAccountUser
Host Service Agent User role
The Host Service Agent User role is only used in Shared VPC clusters. This role includes the following permissions:
compute.firewalls.get
container.hostServiceAgent.*
Custom roles
If predefined roles don't meet your needs, you can create custom roles with permissions that you define.
To learn how to create and assign custom roles, refer to Creating and managing custom roles.
Viewing permissions granted by Cloud IAM roles
You can view the permissions granted by each Role using the
gcloud
command-line tool or Cloud Console.
gcloud
To view the permissions granted by a specific role, run the following command:
gcloud iam roles describe roles/role
where role is any Cloud IAM role.
GKE roles are prefixed with
roles/container.:
For example:
gcloud iam roles describe roles/container.admin
Console
To view the permissions granted by a specific Role, perform the following steps:
Visit the Roles section of Cloud Console's Cloud IAM page.
To see the roles for GKE, in the Filter table field, enter
Kubernetes Engine.
Select the desired role. The description of the role, and a list of assigned permissions displays.
Managing Cloud IAM roles
To learn how to manage Cloud IAM roles and permissions for human users, refer to Granting, changing, and revoking access to project members in the Cloud IAM documentation.
For service accounts, refer to Granting roles to service accounts.
Examples
Here are a few examples of how Cloud IAM works with GKE:
- A new employee has joined a company. They need to be added to the Google Cloud project, but they only need to view the project's clusters and other Google Cloud resources. The project owner assigns them the project-level Compute Viewer role. This role provides read-only access to get and list nodes, which are Compute Engine resources.
- The employee is working in operations, and they need to update a cluster using
gcloudor Google Cloud Console. This operation requires the
container.clusters.updatepermission, so the project owner assigns them the Kubernetes Engine Cluster Admin role. The employee now has the permissions granted by both the Kubernetes Engine Cluster Admin and Compute Viewer roles.
- The employee needs to investigate why a Deployment is having issues. They need to run
kubectl get podsto see Pods running in the cluster. The employee already has the Compute Viewer role, which is not sufficient for listing Pods. The employee needs the Kubernetes Engine Viewer role.
- The employee needs to create a new cluster. The project owner grants the employee the Service Account User role for the
project-number-compute@developer.gserviceaccount.comservice account, so that the employee's account can access Compute Engine's default service account. This service account has the Editor role, which provides a broad set of permission. | https://cloud.google.com/kubernetes-engine/docs/how-to/iam?hl=ru | CC-MAIN-2020-29 | refinedweb | 969 | 54.83 |
When I wanted to make my portfolio site fast and easy to maintain, I landed on the following solution to create a static site using Contentful as my content manager, Next.js to display the data, and Netlify for hosting.
Requirements of the site
Fast ⚡️
Secure 🔒
Maintainable 🏗
Easy to deploy 🚀
Service Worker ⚙️
Goal
Color.
mkdir new-thing && cd new-thing:
npm install --save next react react-dom
I’m going to run that script to install the dependencies; that will also automatically save those dependencies to my
package.json file.
I’ve gone ahead here, put in the scripts to build for production, and develop the project locally.
{ "scripts": { "dev": "next", "build": "next build", "start": "next start" } }.
mkdir pages && touch pages/index.js
Now that we have that, let’s run the project locally and visit localhost:3000 in the browser:
npm run dev.
{ "scripts": { "dev": "next", "build": "next build", "start": "next start", "postinstall": "npm run getcontent", "getcontent": "babel-node helpers/getcontent.js install" } }
We need to use a custom
.babelrc file here to utilize the
import /
export tokens available to us in that
getcontent.js file.
{ "presets": [ "env", "next/babel" ] }
{: .img}
Create a new folder from the root of the project for the JSON file to be written to—we will call that data:
mkdir data
The last step here before we can run our
postinstall script would be installing the depedencies:
npm i --save babel-cli contentful
Phew, ok, let’s run it!
npm run postinstall && next build && next export
Excellent, we have data from Contentful, written to JSON locally:
Now, we will display this data using a few React Components. To do that, let's create a components folder, enter it, and create the three main components we will be using:
mkdir components && cd components && touch WorkFeed.js && touch WorkItem.js && touch BackgroundImage.js
Back to
index.js, let's render our WorkFeed component and give it the data from Contentful:
import { Component } from 'react' import data from './../data/pageHome' import WorkFeed from './../components/WorkFeed' export default () => <WorkFeed data={data.work} />
Inside WorkFeed, we will loop our data and render a WorkItem for every case-study we have:
import WorkItem from './WorkItem' export default ({ data }) => ( <section className='work-feed'> {data.map((item, i) => { <WorkItem key={i} item={item} /> })} </section> ):
npm run postinstall && next build && next export.
To start using Contentful yourself, request a demo and go.
| https://www.contentful.com/blog/2018/05/09/building-portfolio-website-contentful-nextjs-netlify/ | CC-MAIN-2021-21 | refinedweb | 399 | 65.22 |
I've been playing around with Partial Types in Visual Studio “Whidbey“ after a developer asked me how the Extract Interface refactoring will work when applied to partial types. First a quick primer on partial types -
- Partial types lets you split the definition of a type (like a class) into multiple files. Visual Studio “Whidbey“ will, for example, divide VS designer generated code into one class and user written code into another.
- When declaring a partial class, use the “partial“ keyword in the class definition: partial class MyPartialClass.
- Partial types behave like just like regular types, except that the class is now divided into multiple files.
- A partial type can be compiled without need all of its matching files, so multiple developers could divide a large class into working chunks and not interfere with each other (although you would obviously need the other partial class files if you reference code that is declared in another partial class).
- Partial types are not limited to two files. In the example below, the class MyPartialClass is divided into three files - class1.cs, class2.cs and foo.cs.
- If you're using the command line compiler, there is no messy linking, to use partial classes, just point to the files and add all the references you would need as if the class was declared in one file. Ex: “csc class1.cs, class2.cs, foo.cs“.
- Visual Studio “Whidbey“ has full support for partial types (as you'll read below).
Refactoring partial classes
- Invoking the Extract Interface refactoring will work the same for partial classes as it will for regular classes, meaning public non-static methods will be available to extract into an interface. For example, I ran the Extract Interface refactoring on MyPartialClass and the dialog window will list *both* overloads for the foo method, even though they are each declared in separate files. For convenience sake, I copy/pasted the extracted interface into Class1.cs.
Interfaces and partial classes
- You can declare the interface in one partial class, and implement the interface across multiple partial classes. If you do not fully implement the interface in all of the partial classes, you will get an error saying that you haven't fully implemented the interface, which is again the same behavior as if the class was declared in one file. In the example below, the file class2.cs declares the IPartialClass interface and the IPartialClass interface is implemented in multiple files - class2.cs and foo.cs.
IntelliSense for variables declared in partial classes
- A variable declared in one partial class is available through IntelliSense across all of the partial classes (assuming correct scope). In the example below, Class1.cs declares a variable, private string s, and both the methods in the files Class2.cs and foo.cs can use this variable.
Method overloads in partial classes
- You can split method overloads into separate partial classes, but, just like a regular class, the method signature must be different for each overload our you'll get an error saying that you have already defined the method in the given class. In the example below, two partial classes, class2.cs and foo.cs each define a method named foo, but with different method signatures - foo(string PrintString) & foo().
- IntelliSense fully understands method overloads even if they are declared in multiple files. In the example below, Class1.cs has a class named TestClass that calls the overloaded method foo. Using IntelliSense in the Main() method, the developer will see both method overloads just like a regular class.
Class1.cs
using System;
interface IPartialClass
{
void foo(string s);
void foo();
}
partial class MyPartialClass
{
private string s = "Hello World";
}
class TestClass
{
[STAThread]
static void
{
MyPartialClass MyClass = new MyPartialClass();
MyClass.foo("Print this");
MyClass.foo();
Console.ReadLine();
}
}
using System; public partial class MyPartialClass : IPartialClass { public void foo(string PrintString) { Console.WriteLine(PrintString + " " + s); } }
Class2.cs
using System;
public partial class MyPartialClass : IPartialClass
{
public void foo(string PrintString)
{
Console.WriteLine(PrintString + " " + s);
}
}
foo.cs using System;
partial class MyPartialClass
{
public void foo()
{
Console.WriteLine(s);
}
}
Cool. Looks like no more ugly #region block around those WinFORM code generated from VS.
The most useful case does seem to be the winform code generated by the VS.net designer.
But having that code generated was ugly in the first place. Why not have a xml-based UI description language with a runtime-generated assembly before Longhorn/XAML?
If ASP.net can do it, why can’t winforms do it?
My *personal* view is that it’s a lot easier to do with ASP.NET because ASP.NET controls/pages are already designed as abstractions for a declarative language. Declarative markup language is what XAML is all about, and because this is so new the LH team wants to make sure we get it right and that we incorporate customer feedback into the Longhorn implementation. Creating a new format that would be immediately replaced by XAML probably wouldn’t be a good idea.
Realy realy bad feature!
When writing VSTO projects the developers are asked to write their startup code in an innocuous event… | https://blogs.msdn.microsoft.com/danielfe/2004/02/02/playing-with-partial-types/ | CC-MAIN-2019-22 | refinedweb | 849 | 55.95 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.