This tool will grow a polygon selection based on "Edge Angle", i.e., the angle between the normals of two polygons sharing a particular edge. In other words: you've got one or more polygons selected; adjacent polygons will be added to the selection (the selection will be grown) if the edge angle between the polygons is within a range set in the UI. The growing proceeds until there are no valid polygons left to grow into. A similar tool, called "Select by angle", can be found in 3ds Max.

Features:
- Handy interface
- Fully interactive: tweak controls and immediately see results in the viewports
- An implementation of 3ds Max's "Select by angle"

Installation: put this script in your PYTHONPATH directory. In Maya, run the script by typing the following in the "Python" tab of the Script Editor, or make a shelf button containing:

import fx_growSelectionByEdgeAngle
fx_growSelectionByEdgeAngle.run()

Please use the Feature Requests to give me ideas. Please use the Support Forum if you have any questions or problems. Please rate and review in the Review section.
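The core test described above, the angle between the normals of the two faces sharing an edge, can be sketched in plain Python. This is an illustration only, not the script's actual code; the names edge_angle and should_grow are made up for the example:

```python
import math

def edge_angle(n1, n2):
    """Angle in degrees between two face normals sharing an edge."""
    dot = sum(a * b for a, b in zip(n1, n2))
    norm = math.sqrt(sum(a * a for a in n1)) * math.sqrt(sum(b * b for b in n2))
    # Clamp to avoid math domain errors from floating-point noise
    cos_angle = max(-1.0, min(1.0, dot / norm))
    return math.degrees(math.acos(cos_angle))

def should_grow(n1, n2, min_angle, max_angle):
    """Grow the selection across the edge only if the angle is inside the UI range."""
    return min_angle <= edge_angle(n1, n2) <= max_angle

# Coplanar faces: angle 0; perpendicular faces: angle 90
print(edge_angle((0, 0, 1), (0, 0, 1)))  # -> 0.0
print(edge_angle((0, 0, 1), (1, 0, 0)))  # -> 90.0
```

Repeatedly applying a test like should_grow to the neighbors of the current selection, until no new faces pass, is what "growing until there are no valid polygons" amounts to.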
https://ec2-34-231-130-161.compute-1.amazonaws.com/maya/script/fx-grow-selection-by-edge-angle-for-maya
CC-MAIN-2022-27
refinedweb
155
60.51
We have been using Mock for Python for a while. Now we have a situation in which we want to mock a function:

def foo(self, my_param):
    # do something here, assign something to my_result
    return my_result

Normally, the way to mock this would be (assuming foo is part of an object):

self.foo = MagicMock(return_value="mocked!")

Even if I call foo() a couple of times, I can use:

self.foo = MagicMock(side_effect=["mocked once", "mocked twice!"])

Now I am facing a situation in which I want to return a fixed value when the input parameter has a particular value. So if, let's say, "my_param" is equal to "something", then I want to return "my_cool_mock". This seems to be available in mockito for Python:

when(dummy).foo("something").thenReturn("my_cool_mock")

I have been searching for how to achieve the same with Mock, with no success. Any ideas?

If side_effect is a function, then whatever that function returns is what calls to the mock return. As indicated at "Python Mock object with method called multiple times", a solution is to write my own side_effect:

def my_side_effect(*args, **kwargs):
    if args[0] == 42:
        return "Called with 42"
    elif args[0] == 43:
        return "Called with 43"
    elif kwargs['foo'] == 7:
        return "Foo is seven"

mockobj.mockmethod.side_effect = my_side_effect

That does the trick.

side_effect takes a function (which can also be a lambda), so for simple cases you may use:

m = MagicMock(side_effect=(lambda x: x + 1))

Just to show another way of doing it:

def mock_isdir(path):
    return path in ['/var/log', '/var/log/apache2', '/var/log/tomcat']

with mock.patch('os.path.isdir') as os_path_isdir:
    os_path_isdir.side_effect = mock_isdir

Tags: function, input, python, sed
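A self-contained sketch of the side_effect approach described above, using only the standard library's unittest.mock (the argument and return values are illustrative):

```python
from unittest.mock import MagicMock

def my_side_effect(*args, **kwargs):
    # Return a canned value that depends on the input argument
    if args and args[0] == "something":
        return "my_cool_mock"
    return "default_mock"

foo = MagicMock(side_effect=my_side_effect)

print(foo("something"))      # -> my_cool_mock
print(foo("anything else"))  # -> default_mock
# The mock still records calls as usual
print(foo.call_count)        # -> 2
```

Because the return value is computed per call, this behaves like mockito's when(...).thenReturn(...) keyed on the argument.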
https://exceptionshub.com/mocking-python-function-based-on-input-arguments.html
CC-MAIN-2021-21
refinedweb
284
50.26
I'm trying to use a helper method to determine the value of an attribute for several records. Here is the basic call I am trying to get working (from the view):

<% if Baseline.where(subject_id: sub.subject_id).first.crf_status(crf) == 1 %>
<td bgcolor="#98FB98" >
<% else %>

The helper:

def crf_status(crf)
  case crf
  when Baseline then 'baseline_status'
  when FollowUp3Week then 'follow_up_3_week'
  ...
  end
end

What I effectively want is the equivalent of:

<% if Baseline.where(subject_id: sub.subject_id).first.baseline_status == 1 %>
<td bgcolor="#98FB98" >
<% else %>

If you want to call the method whose name your helper returns, use the .send method:

Baseline.where(subject_id: sub.subject_id).first.send(crf_status(crf))

The method whose name is returned from your helper will be called. This is a great metaprogramming example. Note that in Ruby, case/when compares with the === operator, so a class constant in a when clause already matches instances of that class; keep case crf (do not switch on crf.class, which would not match). It's cleaner to return symbols rather than strings, so do this:

def crf_status(crf)
  case crf
  when Baseline then :baseline_status
  when FollowUp3Week then :follow_up_3_week
  else :default
  end
end
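A minimal stand-alone sketch of this dispatch pattern, with plain Ruby stand-ins for the ActiveRecord models (the classes and return values here are invented for illustration):

```ruby
# Stand-in classes for the models in the question
class Baseline
  def baseline_status
    1
  end
end

class FollowUp3Week
  def follow_up_3_week
    0
  end
end

# case/when uses ===, so a class literal matches instances of that class
def crf_status(crf)
  case crf
  when Baseline then :baseline_status
  when FollowUp3Week then :follow_up_3_week
  else :default
  end
end

record = Baseline.new
# send calls the method whose name crf_status returned
puts record.send(crf_status(record))  # prints 1
```

The same send call works unchanged on a real ActiveRecord instance, since send dispatches by name at runtime.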
https://codedump.io/share/tGKFo9oI6MXe/1/ruby-on-rails-user-helper-method-to-read-attribute
CC-MAIN-2017-43
refinedweb
166
58.48
Joined: 3/10/2017 Last visit: 1/18/2018 Posts: 3 Rating: (0)

Hey guys, I already did a few tutorials which you posted in this forum, and basically all of them were functional. Now I wanted to read the value of a sensor which provides an analog voltage between 0-5 V, but I always get the same error: "Invalid AIO pin specified - do you have an ADC?" I use exactly the same code as in the tutorial document, which you can see in the attachment. Please can you help me? Google isn't really helpful and I always get the same error. I just tried it on a second device - same error. Thanks a lot, guys

Error_AIO.PNG (126 Downloads)

Joined: 11/4/2016 Last visit: 4/27/2017 Posts: 71 (16)

Hello, can you post your full code? Does the problem only occur with analog pin 0, or with other ones also?

#include <iostream>
#include "mraa.hpp"

using namespace std;

int main()
{
    // initialize GPIO pin 12 for the motor channel
    mraa::Gpio* Motor_Channel_A;
    Motor_Channel_A = new mraa::Gpio(12, true, false);
    Motor_Channel_A->dir(mraa::DIR_OUT);

    // initialize GPIO pin 13 for the brake channel
    mraa::Gpio* Break_Channel_A;
    Break_Channel_A = new mraa::Gpio(13, true, false);
    Break_Channel_A->dir(mraa::DIR_OUT);

    // initialize PWM pin 3 for speed control
    mraa::Pwm* Speed_A;
    Speed_A = new mraa::Pwm(3);
    Speed_A->enable(1); // enable the pin for PWM

    // initialize AIO pin 2 for the potentiometer input
    mraa::Aio* PotentiometerIn;
    PotentiometerIn = new mraa::Aio(2);

    while (true)
    {
        float value;
        value = PotentiometerIn->read();  // read the analog value from the potentiometer
        value = value / 1023;             // map into a value for PWM output (0-1, i.e. 0-100%)
        Motor_Channel_A->write(1);        // establish forward direction of channel A
        Break_Channel_A->write(0);        // disengage the brake for channel A
        Speed_A->write(value);            // write the mapped analog value to the speed control pin
        cout << value << endl;            // display the value written to the speed control pin
    }
    return 0;
}

Well, the problem occurs with every pin. Last time I tested it with Python, and I used analog input channels 0 to 100 (yes, I know there are not so many pins on the board, but just to demonstrate it), and it still doesn't work. The error pops up in the line "PotentiometerIn = new mraa::Aio(2);"... I have three instances of the IoT2040 in front of me and it doesn't work on any of them, so I think it can't be a hardware error. I also thought that there was a jumper or something like that to activate the analog inputs, but I still wasn't able to fix the issue.

Joined: 6/3/2015 Last visit: 9/18/2017 Posts: 215 (6)

There is a little jumper inside the IOT that needs to be moved. See this document on page 14 for further information.

I want "Goodbye World" to be carved into my headstone

Joined: 4/28/2015 Last visit: 9/12/2017 Posts: 40 (2)

JDarius: There is a little jumper inside the IOT that needs to be moved. See this document on page 14 for further information.

This jumper has NOTHING to do with the analog inputs of the board! It is responsible for the VIN pin, which affects the I/Os on the SHIELD mentioned in this document. The problem is already solved. I got an engineering sample, and there are a few differences to the normal model. The error only pops up when using the prototype. Thanks for your help!

Joined: 6/19/2017 Last visit: 5/22/2020 Posts: 3934 (61)

New question published by Matty2020 is split to a separate thread with the subject "How can I get a PWM signal on the Input?". Best regards, Jen_Moderator
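The scaling step inside the loop above, mapping a 10-bit ADC reading (0-1023) to a PWM duty cycle (0.0-1.0), can be checked on its own. Python here, since mraa also has Python bindings (which the poster mentions trying); the clamp is an extra safety measure not present in the original code:

```python
def adc_to_duty(raw, adc_max=1023):
    """Map a raw ADC reading to a PWM duty cycle in [0.0, 1.0]."""
    duty = raw / adc_max
    # Clamp, in case a reading ever falls outside the expected range
    return max(0.0, min(1.0, duty))

print(adc_to_duty(0))     # -> 0.0
print(adc_to_duty(1023))  # -> 1.0
```

On the device the resulting duty value is what gets passed to the PWM write call.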
https://support.industry.siemens.com/tf/ww/en/posts/aio-reading-problem/163273/?page=0&pageSize=10
CC-MAIN-2020-24
refinedweb
623
58.92
An example of creating a two-dimensional matrix on a form: an analogue of the TStringGrid component in Delphi. Tasks often require entering numbers or other data into a two-dimensional array (matrix) and operating on them. This work implements an analogue of the TStringGrid component, which is used in Delphi to view data as a two-dimensional table of strings. To do this, a two-dimensional array of TextBox controls is used.

The task: create a program that computes the product of two matrices of dimension n. The matrices must be entered from the keyboard in a separate form and saved in internal data structures. The user has the ability to see the resulting matrix. It is also possible to save the result matrix to the text file "Res_Matr.txt".

Performing: save the project under some name.

- Creating the main form Form1. Create the form as shown in Figure 1. Place controls of the following types on the form:
– four controls of type Button; four objects (variables) named "button1", "button2", "button3", "button4" will be created automatically;
– three controls of type Label, named "label1", "label2", "label3";
– a control of type TextBox, named "textBox1".
Set the properties of the Button and Label controls:
– in button1, property Text = "Input of matrix 1 …";
– in button2, property Text = "Input of matrix 2 …";
– in button3, property Text = "Result …";
– in button4, property Text = "Save to file "Res_Matr.txt"";
– in label1, property Text = "n = ".
To set up the view and behavior of the form:
– set the form title: property Text = "The product of matrices";
– property StartPosition = "CenterScreen" (the form is placed at the center of the screen);
– property MaximizeBox = "false" (hide the form's maximize button).
Fig. 1. The form of the application

- Building the secondary form Form2.
In the secondary form Form2, data will be entered into the matrices and the results will be displayed. An example of creating a new form in MS Visual Studio – C# is described here. Add the new form to the application using the command Project -> Add Windows Form… In the opened window select "Windows Form". Leave the file name as proposed, "Form2.cs". Place a control of type Button on the form, in any position (Figure 2). As a result, a new object named "button1" will be created. In the control "button1" you need to change the following properties:
– property Text = "OK";
– property DialogResult = "OK" (Figure 3). This means that when the user clicks button1, the window will be closed, returning the code "OK";
– property Modifiers = "Public". This means that the button "button1" will be visible from other modules (from form Form1).
Set up the properties of form Form2:
– property Text = "Input of matrix";
– property StartPosition = "CenterScreen" (the form is placed at the center of the screen);
– property MaximizeBox = "false" (hide the maximize button).
Fig. 2. The form "Form2" after setting
Fig. 3. The property "DialogResult" of control "button1" on the form "Form2"

- Entering the internal variables. The next step is entering the internal variables into the text of module "Form1.cs". To do this, activate module "Form1.cs". In the text of module "Form1.cs", add the following code:
...
namespace WindowsFormsApplication1
{
    public partial class Form1 : Form
    {
        const int MaxN = 10; // the maximum allowable dimension of the matrix
        int n = 3; // the current dimension of the matrix
        TextBox[,] MatrText = null; // the matrix of TextBox-type elements
        double[,] Matr1 = new double[MaxN, MaxN]; // matrix 1 of floating point numbers
        double[,] Matr2 = new double[MaxN, MaxN]; // matrix 2 of floating point numbers
        double[,] Matr3 = new double[MaxN, MaxN]; // the matrix of results
        bool f1; // flag indicating that data were entered into matrix Matr1
        bool f2; // flag indicating that data were entered into matrix Matr2
        int dx = 40, dy = 20; // width and height of cells in MatrText[,]
        Form2 form2 = null; // an instance (object) of the form class "Form2"

        public Form1()
        {
            InitializeComponent();
        }
    }
}
...

Let's explain some of the variables:
– MaxN – the maximum allowable dimension of the matrix;
– n – the dimension of the matrix, which the user types into control textBox1;
– MatrText – a two-dimensional matrix of TextBox controls. The cells of a matrix are entered into it as strings; data entry happens in form Form2;
– Matr1, Matr2 – matrices of elements of type double. Data will be copied from MatrText into Matr1 and Matr2;
– Matr3 – the resulting matrix, equal to the product of matrices Matr1 and Matr2;
– f1, f2 – variables that determine whether data has been entered into Matr1 and Matr2, respectively;
– dx, dy – the dimensions of one TextBox cell in the MatrText matrix;
– form2 – an object of the form class Form2, through which we will access this form.

- Programming the Load event of form Form1. The process of programming any event in Microsoft Visual Studio – C# is described here in detail. The code listing of the Load event handler of form Form1 is as follows:

private void Form1_Load(object sender, EventArgs e)
{
    // I. Initializing of controls and internal variables
    textBox1.Text = "";
    f1 = f2 = false; // matrices are not yet filled
    label2.Text = "false";
    label3.Text = "false";

    // II. Memory allocation and configuring MatrText
    int i, j;

    // 1. Memory allocation for Form2
    form2 = new Form2();

    // 2. Memory allocation for the whole matrix (not for cells)
    MatrText = new TextBox[MaxN, MaxN];

    // 3. Memory allocation for each cell of the matrix and its setting
    for (i = 0; i < MaxN; i++)
        for (j = 0; j < MaxN; j++)
        {
            // 3.1. Allocate memory
            MatrText[i, j] = new TextBox();
            // 3.2. Set the value to zero
            MatrText[i, j].Text = "0";
            // 3.3. Set the position of the cell in Form2
            MatrText[i, j].Location = new System.Drawing.Point(10 + i * dx, 10 + j * dy);
            // 3.4. Set the size of the cell
            MatrText[i, j].Size = new System.Drawing.Size(dx, dy);
            // 3.5. Hide the cell
            MatrText[i, j].Visible = false;
            // 3.6. Add MatrText[i, j] to form2
            form2.Controls.Add(MatrText[i, j]);
        }
}

Let's explain some parts of the method Form1_Load(). The "Load" event is generated (called) when the form is loading. Since Form1 is the main form of the application, the "Load" event of Form1 is called immediately after the application starts to run, so this is the right place for the initial setup of global controls and internal variables of the program. These controls can then be used from other methods of the class. In the event handler Form1_Load(), memory is allocated for the two-dimensional matrix MatrText only once; it will be freed automatically when the application finishes. The memory is allocated in two stages:
– for the whole matrix MatrText as a two-dimensional array;
– for every element of the matrix, each of which is an object of type TextBox.
After allocation, each object's main properties are set (position, size, text and visibility).
Each newly created cell is also added (placed) on the form Form2 using the Add() method of the Controls collection. A new cell can be added to any other form of the application in the same way.

- Developing an additional method for resetting the data of matrix MatrText. To avoid repeating the reset code in many places, create your own method (for example, Clear_MatrText()) that implements it. The listing of method Clear_MatrText() is as follows:

private void Clear_MatrText()
{
    // Set the cells of MatrText to zero
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            MatrText[i, j].Text = "0";
}

- Programming the event of clicking on button1 ("Input of matrix 1 …"). When button1 is clicked, the window for entering a new matrix must be shown. The matrix size depends on the value of n. The listing of the click event handler of button1 is as follows:

private void button1_Click(object sender, EventArgs e)
{
    // ... (steps 1-6 elided) ...

    // 7. Copy the strings from MatrText into Matr1
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            if (MatrText[i, j].Text != "")
                Matr1[i, j] = Double.Parse(MatrText[i, j].Text);
            else
                Matr1[i, j] = 0;

    // 8. Data were entered into the matrix
    f1 = true;
    label2.Text = "true";
}

In the listing above, the value of n is read. After that, the cells of matrix MatrText are set up. Based on the entered value of n, the size of form2 and the position of its button are formed. If, in form Form2, the user presses the "OK" button, the strings from MatrText are copied into the two-dimensional matrix Matr1 of floating point numbers. Converting a string to the corresponding real number is performed by the Parse() method of the Double class. The variable f1 is also set, indicating that data were entered into matrix Matr1.

- Programming the event of clicking on button "button2" ("Input of matrix 2 …"). The code listing of the click event handler of button2 is similar to that of button1. It differs only in steps 7-8, where the matrix Matr2 and the variable f2 are formed.
private void button2_Click(object sender, EventArgs e)
{
    // ... (steps 1-6 elided) ...

    // 7. Copy the strings from MatrText into Matr2
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            Matr2[i, j] = Double.Parse(MatrText[i, j].Text);

    // 8. Matrix Matr2 is formed
    f2 = true;
    label3.Text = "true";
}

- Programming the loss of input focus in the control textBox1. A situation may arise in which the user changes n to a new value. In this case, the flags f1 and f2 must be reset, and the size of matrix MatrText must be changed. You can track changes of the value n using the "Leave" event of control textBox1. The "Leave" event is generated when control textBox1 loses input focus (Figure 4).

Fig. 4. The Leave event of the control textBox1

The code listing of the event handler is as follows:

private void textBox1_Leave(object sender, EventArgs e)
{
    int nn;
    nn = Int16.Parse(textBox1.Text);
    if (nn != n)
    {
        f1 = f2 = false;
        label2.Text = "false";
        label3.Text = "false";
    }
}

- Programming the event of clicking on button3 ("Result …"). The result will be displayed in the same form that was used to enter the matrices Matr1 and Matr2. First, the product of these matrices is formed in matrix Matr3. After that, the values from Matr3 are moved into MatrText and displayed on form Form2. The listing of the event handler is as follows:

private void button3_Click(object sender, EventArgs e)
{
    // 1. Check whether data were entered into both matrices
    if (!((f1 == true) && (f2 == true))) return;

    // 2. Calculate the product of the matrices; the result is in Matr3
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
        {
            Matr3[j, i] = 0;
            for (int k = 0; k < n; k++)
                Matr3[j, i] = Matr3[j, i] + Matr1[k, i] * Matr2[j, k];
        }

    // 3. Move the data into MatrText
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
        {
            // 3.1. Tab order
            MatrText[i, j].TabIndex = i * n + j + 1;
            // 3.2. Convert the number to a string
            MatrText[i, j].Text = Matr3[i, j].ToString();
        }

    // 4.
// Show the form
    form2.ShowDialog();
}

- Programming the event of clicking on button4 ("Save to file "Res_Matr.txt""). To save the result matrix Matr3, use the capabilities of the FileStream class. The FileStream class is declared in the namespace System.IO, so at the beginning of module "Form1.cs" you need to add the following code (the Encoding class used below comes from System.Text, which the default Form1.cs already imports):

using System.IO;

The listing of the click event handler of button4 is as follows:

private void button4_Click(object sender, EventArgs e)
{
    FileStream fw = null;
    string msg;
    byte[] msgByte = null; // array of bytes

    // 1. Open the file for writing
    fw = new FileStream("Res_Matr.txt", FileMode.Create);

    // 2. Save the result matrix to the file
    // 2.1. Save the dimension of the matrix Matr3
    msg = n.ToString() + "\r\n";
    // convert the string msg into the byte array msgByte
    msgByte = Encoding.Default.GetBytes(msg);
    // save the array msgByte into the file
    fw.Write(msgByte, 0, msgByte.Length);

    // 2.2. Now save the matrix
    msg = "";
    for (int i = 0; i < n; i++)
    {
        // form a string based on the matrix
        for (int j = 0; j < n; j++)
            msg = msg + Matr3[i, j].ToString() + " ";
        msg = msg + "\r\n"; // new line
    }

    // 3. Convert the strings into a byte array
    msgByte = Encoding.Default.GetBytes(msg);

    // 4. Save the strings into the file
    fw.Write(msgByte, 0, msgByte.Length);

    // 5. Close the file
    if (fw != null) fw.Close();
}

- Running the application. Now you can run the application.
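Note that the listing stores the element at row j, column i in cell [i, j] (i is the horizontal offset in MatrText), so the triple loop in button3_Click is an ordinary matrix product written in a column-first layout. A quick way to convince yourself of this, purely as a checking sketch and not part of the C# project (Python here for brevity):

```python
def product_col_major(A, B, n):
    """Matrix product using the article's [column][row] index convention."""
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            C[j][i] = 0.0
            for k in range(n):
                # Same index pattern as Matr3[j, i] += Matr1[k, i] * Matr2[j, k]
                C[j][i] += A[k][i] * B[j][k]
    return C

# Multiplying by the identity must give the other matrix back
M = [[1.0, 2.0], [3.0, 4.0]]
I = [[1.0, 0.0], [0.0, 1.0]]
print(product_col_major(I, M, 2))  # -> [[1.0, 2.0], [3.0, 4.0]]
```

Under this convention the index swap is deliberate, not a bug.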
http://www.bestprog.net/en/2016/04/29/011-c-an-example-of-creating-of-two-dimensional-matrix-on-the-form-the-analogue-of-tstringgrid-component-in-delphi/
CC-MAIN-2017-39
refinedweb
2,099
65.73
This article demonstrates how to write Ruby scripts that work like typical, well-behaved Unix commands. To make it more fun and useful, we'll write a command-line tool for processing data stored in the comma-separated values (CSV) file format. CSV (not CVS) is used to exchange data between databases, spreadsheets, and securities analysis software, as well as between some scientific applications. The format is also used by payment processing sites that provide downloadable sales data to vendors who use their services. CSV files are plain-text ASCII files in which one line of text represents one row of data and columns are separated with commas. A sample CSV file is shown below.

ticker,per,date,open,high,low,close,vol
XXXX,D,3-May-02,83.01,83.58,71.13,78.04,9645300
XXXX,D,2-May-02,82.47,85.76,82.05,83.84,7210000
XXXX,D,1-May-02,86.80,90.83,81.74,85.50,14253300

The script, csvt, will extract selected columns of data from a CSV file. The output will also be a CSV file, and the user will be able to specify the order in which the columns of data are printed. A simple data integrity test will make csvt fail when the number of columns in one line differs from the number of columns in the previous line. The source of data will be either a file or standard input (STDIN), as is customary for many Unix command-line tools.

The utility will support the following options:

--extract col[,col][...], to print selected columns from input. Numbers are separated with commas, and numbering starts with 0. For example,

$ csvt --extract 1,5,2 file

prints columns 1, 5 and 2 (in that order) from file:

per,low,date
D,71.13,3-May-02
D,82.05,2-May-02
D,81.74,1-May-02

It will be possible to list the same column more than once:

$ csvt --extract 0,1,5,2,0 file

which produces the following output:

ticker,per,low,date,ticker
XXXX,D,71.13,3-May-02,XXXX
XXXX,D,82.05,2-May-02,XXXX
XXXX,D,81.74,1-May-02,XXXX

--remove col[,col][...], to print everything but the selected columns. Numbers are separated with commas, and numbering starts with 0. For example,

$ csvt --remove 1,5,2 file

will print all columns except 1, 5 and 2 from file:

ticker,open,high,close,vol
XXXX,83.01,83.58,78.04,9645300
XXXX,82.47,85.76,83.84,7210000
XXXX,86.80,90.83,85.50,14253300

Listing the same column number more than once will have no effect. The tool will also support -h/--help, -u/--usage, and -v/--version. When csvt finds an unsupported option, or when it is run without any options, it will default to printing the help screen.

To complete this tutorial you will need an OS capable of running the Ruby interpreter, the Ruby interpreter itself, and a text editor. The operating system can be any POSIX-compatible system, either commercial (AIX, Solaris, QNX, Microsoft NT/2000, Mac OS X, and others) or free (Linux, FreeBSD, NetBSD, OpenBSD, or Darwin). The Ruby interpreter should be the latest release of Ruby. You can check whether Ruby has been installed on your system with the following command:

$ ruby --version

When the system reports that there is no such file or directory, you can either download the latest Ruby binaries from the Ruby site or from one of the repositories of ports and packages for your operating system (check the list of resources at the end of this article). If ready-made binaries are not available, you can always build Ruby from the original sources found at the Ruby site.
Detailed instructions for building Ruby can be found in the README file in the interpreter's source archive. If you get stuck, support is available on comp.lang.ruby as well as on the Ruby-talk mailing list (subscription details are on the Ruby site). The choice of text editor is largely a matter of personal preference. The author is a devoted vi user, but any text editor will do.

Every tool, no matter how small, should come with a manual or, at the very least, it should print a short help screen that explains its usage. It is a good habit to write documentation before writing the first line of code. Since csvt is a simple tool with only five options, you can be forgiven for not writing a manual, but you should embed basic documentation in the script itself. This should be mandatory even for a short script you are writing for your own use, because chances are good that you will forget what it does in two weeks. The help screen shown below will be printed by csvt after the user makes a mistake or runs csvt without specifying any options. Since it can only occupy one standard text terminal screen (80 by 25 characters), it must be terse but informative. Ideally, it should present the tool's name and purpose, its usage and options, some examples, and contact information. Your help screen could look like this (and it's okay just to type this stuff in a text editor and wrap it in code later):

csvt -- extract columns of data from a CSV (Comma-Separated Values) file
Usage: csvt [POSIX or GNU style options] file ...

POSIX options              GNU long options
 -e col[,col][,col]...     --extract col[,col][,col]...
 -r col[,col][,col]...     --remove col[,col][,col]...
 -h                        --help
 -u                        --usage
 -v                        --version

Examples:
csvt -e 1,5,6 file            print columns 1, 5 and 6 from file
csvt --extract 4,1 file       print columns 4 and 1 from file
csvt -r 2,7,1 file            print all columns except 2, 7 and 1 from file
csvt --remove 6,0 file        print all columns except 6 and 0 from file
cat file | csvt --remove 6,0  print all columns except 6 and 0 from file

Send bug reports to bugs@foo.bar
For licensing terms, see source code

Because there are several cases where it might be necessary to display the help screen, you will need to put the code that displays it in a separate method. We'll call it printusage(). (It helps to have the source code of csvt handy.)

def printusage(error_code)
  print "csvt -- extract columns of data from a CSV (Comma-Separated Values) file\n"
  print "Usage: csvt [POSIX or GNU style options] file ...\n\n"
  print "POSIX options              GNU long options\n"
  print " -e col[,col][,col]...     --extract col[,col][,col]...\n"
  print " -r col[,col][,col]...     --remove col[,col][,col]...\n"
  print " -h                        --help\n"
  print " -u                        --usage\n"
  print " -v                        --version\n\n"
  print "Examples: \n"
  print "csvt -e 1,5,6 file            print columns 1, 5 and 6 from file\n"
  print "csvt --extract 4,1 file       print columns 4 and 1 from file\n"
  print "csvt -r 2,7,1 file            print all columns except 2, 7 and 1 from file\n"
  print "csvt --remove 6,0 file        print all columns except 6 and 0 from file\n"
  print "cat file | csvt --remove 6,0  print all columns except 6 and 0 from file\n\n"
  print "Send bug reports to bugs@foo.bar\n"
  print "For licensing terms, see source code\n"
  exit(error_code)
end

printusage() takes one argument, error_code, which is later passed to exit(), a built-in Ruby method used to stop the script and return an error code. In your script, printusage() will be called in two cases: when the user makes a mistake and when csvt is run without any options. You should always remember to write code that returns appropriate error codes. When your script returns meaningful error codes, it is much easier to write scripts that can handle critical situations.
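The extraction logic the article builds toward can be sketched in a few lines of plain Ruby. This is a simplified illustration of --extract, not the finished csvt script, and the naive split(',') only handles CSV without quoted commas:

```ruby
# Extract the given column indexes, in order, from one CSV line.
def extract_columns(line, cols)
  fields = line.chomp.split(',')
  cols.map { |c| fields[c] }.join(',')
end

row = 'XXXX,D,3-May-02,83.01,83.58,71.13,78.04,9645300'
puts extract_columns(row, [1, 5, 2])  # prints "D,71.13,3-May-02"
```

Applying this to every line of a file or STDIN, after printing the header row the same way, gives exactly the --extract behavior shown in the examples above.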
http://www.linuxdevcenter.com/pub/a/linux/2003/09/18/ruby_csv.html?page=1
CC-MAIN-2015-27
refinedweb
1,361
56.69
The smallest CommonMark compliant markdown parser with positional info and concrete tokens. - compliant (100% to CommonMark) - extensions (GFM, directives, footnotes, frontmatter, math, MDX.js) - safe (by default) - small (smallest CM parser that exists) - robust (1700+ tests, 100% coverage, fuzz testing) Intro. It’s in open beta: up next are CMSM and CSTs. - for updates, see Twitter - for more about us, see unifiedjs.com - for questions, see Discussions - to help, see contribute or sponsor below Contents - Install - Use - API - Extensions - Syntax tree - CommonMark - Test - Size & debug - Comparison - Version - Security - Contribute - Sponsor - Origin story - License Install npm install micromark Use Typical use (buffering): var micromark = require('micromark') console.log(micromark('## Hello, *world*!')) Yields: <h2>Hello, <em>world</em>!</h2> The same can be done with ESM (in Node 10+, browsers that support it, or with a bundler), in an example.mjs file, like so: import micromark from 'micromark' console.log(micromark('## Hello, *world*!')) You can pass extensions (in this case micromark-extension-gfm): var micromark = require('micromark') var gfmSyntax = require('micromark-extension-gfm') var gfmHtml = require('micromark-extension-gfm/html') var doc = '* [x] contact@example.com ~~strikethrough~~' var result = micromark(doc, { extensions: [gfmSyntax()], htmlExtensions: [gfmHtml] }) console.log(result) Yields: <ul> <li><input checked="" disabled="" type="checkbox"> <a href="mailto:contact@example.com">contact@example.com</a> <del>strikethrough</del></li> </ul> Streaming interface: var fs = require('fs') var micromarkStream = require('micromark/stream') fs.createReadStream('example.md') .on('error', handleError) .pipe(micromarkStream()) .pipe(process.stdout) function handleError(err) { // Handle your error here! throw err } API This section documents the API. The parts can be used separately, but this isn’t documented yet. 
micromark(doc[, encoding][, options]) Compile markdown to HTML. Parameters doc Markdown to parse ( string or Buffer) encoding Character encoding to understand doc as when it’s a Buffer ( string, default: 'utf8'). options.defaultLineEnding Value to use for line endings not in doc ( string). options.extensions Array of syntax extensions ( Array.<SyntaxExtension>, default: []). options.htmlExtensions Array of HTML extensions ( Array.<HtmlExtension>, default: []). Returns string — Compiled HTML. micromarkStream(options?) Streaming interface of micromark. Compiles markdown to HTML. options are the same as the buffering API above. Available at require('micromark/stream'). Extensions There are two types of extensions for micromark: SyntaxExtension and HtmlExtension. They can be passed in extensions or htmlExtensions, respectively. SyntaxExtension A syntax extension is an object whose fields are the names of hooks, referring to where constructs “hook” into: content (a block of, well, content: definitions and paragraphs), document (containers such as block quotes and lists), flow (block constructs such as ATX and setext headings, HTML, indented and fenced code, thematic breaks), string (things that work in a few places such as destinations, fenced code info, etc: character escapes and -references), or text (rich inline text: autolinks, character escapes and -references, code, hard breaks, HTML, images, links, emphasis, strong). The fields at such objects are character codes, mapping to constructs as values. The built-in constructs are an extension. See it and the existing extensions for inspiration. HtmlExtension An HTML extension is an object whose fields are either enter or exit (reflecting whether a token is entered or exited). The values at such objects are names of tokens mapping to handlers. See the existing extensions for inspiration.
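The shape described above can be sketched as a plain object. The token name `strikethrough` and the handler bodies are illustrative only (micromark’s own HTML extensions, such as the GFM one, use `this.tag` to emit markup); this is not a registered extension on its own:

```javascript
// Shape of an HtmlExtension: `enter`/`exit` map token names to handlers.
const htmlExtension = {
  enter: {
    strikethrough() {
      this.tag('<del>')
    }
  },
  exit: {
    strikethrough() {
      this.tag('</del>')
    }
  }
}

console.log(Object.keys(htmlExtension)) // -> [ 'enter', 'exit' ]
```

Such an object would be passed in `options.htmlExtensions`, paired with a syntax extension that actually produces the token.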
List of extensions micromark/micromark-extension-directive— support directives (generic extensions) micromark/micromark-extension-footnote— support footnotes Syntax tree A higher level project, mdast-util-from-markdown, can give you an AST. var fromMarkdown = require('mdast-util-from-markdown'). Common. Test micromark is tested with the ~650 CommonMark tests and more than 1000 extra tests confirmed with CM reference parsers. These tests reach all branches in the code, thus this project has 100% coverage. Finally, we use fuzz testing to ensure micromark is stable, reliable, and secure. To build, format, and test the codebase, use $ npm test after clone and install. The $ npm run test-api and $ npm run test-coverage scripts check the unit tests and their coverage, respectively. The $ npm run test-types script checks TypeScript definitions. The $ npm run test-fuzz script does fuzz testing for 15 minutes. The timeout is provided by GNU coreutils timeout(1), which might not be available on your system. Either install it or remove it from the script. Size &, which is published in the dist/ folder and has entries in the root. While developing or debugging, you can switch to use the source, which is published in the lib/ folder, and comes instrumented with assertions and debug messages. To see debug messages, run your script with a DEBUG env variable, such as with DEBUG="micromark" node script.js. To generate the codebase, use $ npm run generate after clone and install. The $ npm run generate-dist script specifically takes lib/ and generates dist/. The $ npm run generate-size script checks the bundle size of dist/. Comparison There are many other markdown parsers out there, and maybe they’re better suited to your use case! Here is a short comparison of a couple of ’em micromark can be used in two different ways. It can either be used, optionally with existing extensions, to get HTML pretty. If you’re looking for fine grained control, use micromark. remark. 
Transforming the tree is relatively easy: it’s a JSON object that can be manipulated directly. remark is stable, widely used, and extremely powerful for handling complex data. If you’re looking to inspect or transform lots of content, use remark. marked, use marked. mark’re in Node and have CommonMark-compliant (or funky) markdown and want to turn it into HTML, use markdown-it. Others There are lots of other markdown parsers! Some say they’re small, or fast, or that they’re CommonMark compliant — but that’s not always true. This list is not supposed to be exhaustive. This list of markdown parsers is a snapshot in time of why (not) to use (alternatives to) micromark: they’re all good choices, depending on what your goals are. Version The open beta of micromark starts at version 2.0.0 (there was a different package published on npm as micromark before). micromark will adhere to semver at 3.0.0. Use tilde ranges for now: "micromark": "~2.10.1". Security The typical security aspect discussed for markdown is cross-site scripting (XSS) attacks. It’s safe to compile markdown to HTML if it does not include embedded HTML nor uses dangerous protocols in links (such as javascript: or data:). micromark is safe by default when embedded HTML or dangerous protocols are used too, as it encodes or drops them. Turning on the allowDangerousHtml or allowDangerousProtocol options for user-provided markdown opens you up to XSS attacks. Another. Contribute See contributing.md in micromark/.github for ways to get started. See support.md for ways to get help. This project has a code of conduct. By interacting with this repository, organisation, or community you agree to abide by its terms. Sponsor Support this effort and give back by sponsoring on OpenCollective! Origin.
http://unifiedjs.com/explore/package/micromark/
Last line is still not readmeandmycode Dec 11, 2013 3:50 PM Hi, I fixed error in last post and rewrote the code using ProcessBuilder instead. Output from my unix tool is: Command is a shell application' /home/myid/test/myshell/bin/run' that starts a ascii gui that looks like ( see below). It waits for input from user. ******************************************* * My shell Application * ******************************************* HELP: h COMMAND: c QUIT:q 135.19.45.18> And still the last line is not showing when I read the text. I have tried with readLine() and I get the same error. Note: When I debug, after characters in line: OUIT:q have been read I can see that debugger hangs on br.read() And now the code looks like: import java.io.BufferedReader; import java.io.BufferedWriter; import java.io.IOException; import java.io.InputStream; import java.io.InputStreamReader; import java.io.OutputStream; import java.util.ArrayList; import java.util.List; public class RunShellCmd { private BufferedReader processInputStream; private BufferedWriter processWriter; public static void main(String args[]) throws Exception { RunShellCmd cmd = new RunShellCmd(); cmd.runCmd(); } public void runCmd() throws Exception { Process process = null; List<String> cmd = new ArrayList<String>(); cmd.add("/bin/bash"); cmd.add("/home/myid/test/myshell/bin/run"); ProcessBuilder processBuilder = new ProcessBuilder(cmd); processBuilder.redirectErrorStream(true); try { System.out.println("start process ...."); process = processBuilder.start(); InputStream is = process.getInputStream(); InputStreamReader isr = new InputStreamReader(is); BufferedReader br = new BufferedReader(isr); StringBuilder sb = new StringBuilder(); int intch; while ((intch = br.read()) != -1) { char ch = (char) intch; sb.append(ch); System.out.println(sb.toString()); } } catch (IOException e) { System.out .println("An error occourd: " + e.toString()); } finally { processInputStream.close(); process.destroy(); 
System.out.println("exit value: " + process.exitValue()); processWriter.close(); } } } 1. Re: Last line is still not readrp0428 Dec 11, 2013 4:02 PM (in response to meandmycode) DUPLICATE THREAD! Please don't create duplicate threads. You already have a thread for this same issue. I fixed error in last post and rewrote the code using ProcessBuilder instead. Good - then post your progress in the original thread and keep using it until the problem is resolved. That way the people that tried to help you before can see the entire context of what you are trying. 2. Re: Last line is still not readrukbat Dec 11, 2013 4:08 PM (in response to meandmycode) Moderator Action: Duplicate locked.
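A language-independent way to see the thread's underlying issue is that a prompt such as `135.19.45.18> ` is written without a trailing newline (and may also sit unflushed in the child's output buffer when stdout is piped), so a line-oriented read never completes. Reading the stream character by character and stopping at a known sentinel avoids the block once the bytes arrive. The following Python sketch uses hypothetical helper names (not from the thread) and a stand-in child process:

```python
import subprocess
import sys

def read_until(stream, sentinel):
    """Read one byte at a time until `sentinel` appears or EOF.

    A line-oriented read (readline/readLine) would block on the final
    prompt, because the prompt is written without a trailing newline.
    """
    buf = bytearray()
    while not buf.endswith(sentinel):
        ch = stream.read(1)  # returns b"" at end of stream
        if not ch:
            break
        buf += ch
    return bytes(buf)

# Stand-in for the interactive shell tool: prints a banner line,
# then a prompt with no trailing newline (and flushes explicitly).
child = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; print('MENU'); sys.stdout.write('10.0.0.1> '); sys.stdout.flush()"],
    stdout=subprocess.PIPE)

output = read_until(child.stdout, b"> ")
print(output.decode())
child.stdout.close()
child.wait()
```

Note that if the real child block-buffers its output when piped, character-wise reading only helps once the bytes actually arrive; forcing a flush from outside generally requires a pseudo-terminal.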
https://community.oracle.com/thread/2612205
Melrose SDK - license manager

Hi Subhashini, This has already been discussed earlier. Please refer to this link for more information. Regards Rooven Intel AppUp(SM) Center Intel AppUp(SM) Developer Program Intel Technical Support

Thanks for a valuable reply. My process is below:
1. Make the .air application using Flash Builder (Flex SDK 4.1 and AIR 2.0).
2. Import the Melrose SDK (licensing.swc).
3. Write the code below in my application:

import com.adobe.licensing.LicenseManager;

private static const MY_UNIQUE_32_HEX_NUM:String = "0xD2315FC6-0xB309425F-0xB3085EE2-0xBD4D7748";
private static var UPDATE_MODE: Boolean = false;
private static var DEBUG_MODE: Boolean = true;

protected function initApp():void {
    var licenseManager:LicenseManager = new LicenseManager();
    licenseManager.checkLicense(this, MY_UNIQUE_32_HEX_NUM, UPDATE_MODE, DEBUG_MODE);
}

Here the "MY_UNIQUE_32_HEX_NUM" value is my GUID (example: 0xD2315FC6,0xB309425F,0xB3085EE2,0xBD4D7748).
4. Publish my .air application.

I'm getting the following error: "Your license can not be validated". Are all the above steps correct or wrong? If wrong, please guide me through the correct steps. Awaiting your reply!

Hi Subhashini, It appears that you need to change Debug mode to False to match up with Praveen's example. Also, Adobe states "Once the application is ready to release, you must set debug to false to publish your application or it will be rejected." Regards Hal G. Technical Support Team Intel AppUp(SM) Developer Program Intel AppUp(SM) Center

Hi Hal, Thanks for your reply. If I set DEBUG_MODE to false I get the same error, "Your license can not be validated", and if I set DEBUG_MODE to true the check-license pop-up has 3 buttons: "Use Current License", "Skip license check" and "Delete Current License". What do I do now? Awaiting your reply!

Hi SubhashiniBalaji, Thank you for your reply.
Based on the code that you provided above, you should also use null or leave the MY_UNIQUE_32_HEX_NUM string empty as shown below, in addition to what Hal said above.

private static const MY_UNIQUE_32_HEX_NUM:String = "";

Please let us know if this helps. Regards Rooven Intel AppUp(SM) Center Intel AppUp(SM) Developer Program Intel Technical Support

Finally, were you able to succeed, or do you still need some help?
https://software.intel.com/fr-fr/forums/topic/322018
Details Description It is high time that gremlin-python comes packaged with a real driver. After watching the community discussion, it seems that the way to go will be to use the concurrent.futures module with multithreading to provide asynchronous I/O. While the default underlying websocket client library will remain Tornado due to Python 2/3 compatibility issues, this should be decoupled from the rest of the client and easy to replace. With this is mind, I created a baseline client implementation with this commit in a topic branch python_driver. Some things to note: - All I/O is performed using the concurrent.futures module, which provides a standard 2/3 compatible future interface. - The implementation currently does not include the concept of a cluster, instead it assumes a single host. - The transport interface makes it easy to plug in client libraries by defining a simple wrapper. - Because this is an example, I didn't fix all the tests to work with the new client implementation. Instead I just added a few demo tests. If we decide to move forward with this I will update the original tests. The resulting API looks like this for a simple client: client = Client('ws://localhost:8182/gremlin', 'g') g = Graph().traversal() t = g.V() future_result_set = client.submitAsync(t.bytecode) result_set = future_result_set.result() results = result_set.all().result() client.close() Using the DriverRemoteConnection: conn = DriverRemoteConnection('ws://localhost:8182/gremlin', 'g') g = Graph().traversal().withRemote(conn) t = g.V() results = t.toList() conn.close() If you have a minute to check out the new driver code that would be great, I welcome feedback and suggestions. If we decide to move forward like this, I will proceed to finish the driver implementation. Issue Links - links to - Activity - All - Work Log - History - Activity - Transitions I gave it a quick look. I see that you moved a couple of public classes around. I assume that is a breaking change. 
If you feel like some reorganization was needed here, I think you should "deprecate" the old classes and keep them where they are (i'm assuming there is a way to mark code as deprecated in python) then introduce the new ones elsewhere. I think it's good that we have an actual "driver" now for python. It will be good to start to see the python driver begin to get similar features to the java one. Thanks for doing that. Good point. I moved the DriverRemoteConnection class to a submodule to mimic the file structure of the Java driver, which will break import statements. I could either provide a deprecation warning and leave things as they are, or move the class back to the original spot. Either way would be fine. Assuming that there are no objections, I will move the driver code to a TINKERPOP-1599 branch and work on a PR from there. Regarding providing "similar" features to the Java driver, what are the priorities here? Things I can think of off the top of my head are: - Implementing Cluster and Host to allow for multiple hosts. - Allowing configuration from a config file. - More advanced pooling/connection logic, like allowing a connection to be borrowed multiple times with a max_in_process setting. - Heartbeating. All of those are good ideas. From my perspective, I think you generally listed them in order of what I'd like to see first. GitHub user davebshow opened a pull request: TINKERPOP-1599 implement real gremlin-python driver This PR adds a better driver implementation for gremlin-python: - Uses a multi-threaded solution to provide asynchronous I/O - Decouples the underlying websocket client implementation from driver code, making it easy to plug in a different client - Makes it easy to plug in different protocols, like the Gremlin Server HTTP protocol. - Adds simple connection pooling used in concurrent requests, which increases driver performance with slow traversals - Improves driver tests by adding `pytest` fixtures and removing unneeded `unittest` code. 
This driver still isn't full featured compared to the Java driver, for example, it doesn't implement `Cluster` to use multiple hosts. But, it is considerably better than the old implementation, and as this was becoming a big PR, I figure we can add more features with subsequent PRs. Also note, I know we are in code freeze right now, so I don't expect this to be reviewed/merged for 3.2.4, but I'm busy and I wanted to get this done so I can move on. You can merge this pull request into a Git repository by running: $ git pull TINKERPOP-1599 Alternatively you can review and apply these changes as the patch at: To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #554 commit b10d6048ced6de08c307b8b9382d67e97e2cb0ef Author: davebshow <davebshow@gmail.com> Date: 2017-01-30T22:22:36Z added code for new driver, updated driver tests Github user spmallette commented on the issue: @davebshow now that code freeze is lifted i think you should probably rebase this PR. Also, two documentation related items: 1. This works needs a changelog entry - it was probably better you didn't add that yet since you would have had to move it anyway to 3.2.5. 2. This change seems sufficiently large and important that it should have some documentation on usage in the reference docs and should likely have an entry in the upgrade docs as well so that users know that there as an actual driver that can be used for python now. Github user davebshow commented on the issue: Yeah I figured we would need to add docs. Where should I add to the reference docs? Maybe in [Gremlin Applications]() after the [Connecting via Java]() section? Github user spmallette commented on the issue: I think that make sense - just add a "Connecting via Python" section perhaps. Github user davebshow commented on the issue: Ok, I have now rebased ad added docs to both reference and upgrading. 
I also reviewed the old implementation and made a couple small fixes for consistency. I also added standard op processor message serialization. I think this one is getting close to being ready to go. Github user spmallette commented on a diff in the pull request: — Diff: CHANGELOG.asciidoc — @@ -29,6 +29,11 @@ TinkerPop 3.2.5 (Release Date: NOT OFFICIALLY RELEASED YET) - Refactor `SparkContext` handler to support external kill and stop operations. - Fixed an optimization bug in `LazyBarrierStrategy` around appending barriers to the end of a `Traversal`. - `TraverserIterator` in GremlinServer is smart to try and bulk traversers prior to network I/O. +* Improved Gremlin-Python Driver implementation by adding a threaded client with basic connection pooling and support for pluggable websocket clients. + +Improvements +^^^^^^^^^^^^ + TINKERPOP-1599implement real gremlin-python driver - End diff – you don't need to add those manually - they get added on release (we generate a report out of jira for it). Github user davebshow commented on the issue: Well, I've made my last review of this code and pushed a bit of cleanup and a small fix. IMHO, this is ready to be merged. I'll wait to see if there are any more comments or feedback before upvoting. Github user spmallette commented on the issue: All tests pass with `docker/build.sh -t -n -i` VOTE +1 Github user davebshow commented on the issue: I think this is ready. Anyone want to be a third reviewer for this? VOTE +1 Github user okram commented on the issue: VOTE +1. Github user davebshow commented on the issue: Merged. Github user davebshow closed the pull request at: I've made a few small fixes since I first pushed this code. It would probably be better to see the changes in comparison mode.
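The concurrent.futures pattern described in this issue — submitAsync returns a future, and the resolved result set's all() itself returns a future — can be illustrated with a self-contained toy sketch. The class and method names below mirror the example usage in the issue but are otherwise hypothetical; this is not the real gremlin_python.driver code:

```python
from concurrent.futures import ThreadPoolExecutor

class ToyResultSet:
    """Stand-in for a driver result set whose all() resolves off-thread."""
    def __init__(self, executor, results):
        self._executor = executor
        self._results = results

    def all(self):
        # Like result_set.all() in the issue: returns a Future of a list.
        return self._executor.submit(lambda: list(self._results))

class ToyClient:
    """Minimal submitAsync-style client backed by a thread pool."""
    def __init__(self, max_workers=4):
        self._executor = ThreadPoolExecutor(max_workers=max_workers)

    def _execute(self, message):
        # A real driver would serialize `message` and send it over a
        # websocket; here we simply echo it back as the "result".
        return ToyResultSet(self._executor, message)

    def submitAsync(self, message):
        # Returns a Future that resolves to a result set.
        return self._executor.submit(self._execute, message)

    def close(self):
        self._executor.shutdown()

client = ToyClient()
future_result_set = client.submitAsync([1, 2, 3])
result_set = future_result_set.result()
print(result_set.all().result())  # [1, 2, 3]
client.close()
```

The shape of the calls matches the `submitAsync` / `result()` / `all().result()` sequence shown in the issue description, which is the point of the multi-threaded futures design.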
https://issues.apache.org/jira/browse/TINKERPOP-1599
I messed up my class assignment. Please help me; I have done some of it so far. File Ship:

public class Ship {
    // fields
    private String name;
    private double speed;

    // constructors
    public Ship() {
    }

    // methods
    public String getName() {
        return name;
    }

    public double getSpeed() {
        return speed;
    }

    public void setName(String newName) {
        name = newName;
    }

    public void setSpeed(double s) {
        if (s >= 0)
            speed = s;
    }

    public double timeToCrossEnglishChannel(double s) {
        return speed * 2;
    }
}

So in file Ship, how do I declare a public method called timeToCrossEnglishChannel? It takes no parameter and returns the number of hours it takes to cross the channel. The return value is of type double. And in file ShipTest, how do I create an instance of type Ship? Call the variable of type Ship paddleStreamer. The name of the paddleStreamer is Rob Roy and the speed is 6 knots. Use this information to assign the fields.
http://www.javaprogrammingforums.com/whats-wrong-my-code/18100-pl-help-my-class-assingment.html
Developer forums (C::B DEVELOPMENT STRICTLY!) > CodeCompletion redesign Requirements / Guidelines for re-writing the Code Completion (1/6) > >> rickg22:. For those who want to rewrite CodeCompletion using an existing C++ parser, here's what we should do: (These guidelines are subject to change without notice) * It SHOULD be designed around the class browser: Classes, Functions, Variables, Enums, Preprocessor. * The code completion plugin MUST hold an in-memory structure (maps, arrays, whatever is necessary) to contain all the tokens. For example, a class would include member variables, methods, and the methods would include the local variables. * This structure MUST be independent from the GUI. It MUST be a container. All code MUST be done by another class, but it MAY include saving, reading / writing from cache, and necessary functions. * There SHOULD be a structure per development language in the project. It's no use mixing C++ and Perl tokens, it would only create confusion. * The codecompletion plugin SHOULD have modules for different languages, these modules would consist of thread-safe functions to add the tokens into the main structure. * The ParserThread (this part we'll keep, but with the necessary adaptations) will call the modules to parse the different files. * For classes, use of STL classes (like vector and map) is RECOMMENDED, but you MAY NOT use std::string. You MUST use wxString instead. Use of only the std::string compatibility functions is RECOMMENDED. (Contributions to these ideas are welcome) David Perfors: Should it use wxWidgets functions? like wxString and wxArray, etc.? thomas: I would use string and abstain from using wx containers to maximum extend possible. There are three reasons for this: 1. wxString has std::string compatibility functions. 
If you use string, then you can do: --- Code: ---#if USE_WXWIDGETS #include <wx/wx.h> #define string wxString #else #include <string> #endif --- End code --- This will compile anywhere, with wx or without. 2. wx containers suck really bad. You cannot typedef them, you cannot declare them inside a class, they use obscure macros... Standard containers work really well. Wait... did I say three? :lol: Michael: --- Quote from: thomas on December 21, 2005, 12:04:50 pm ---2. wx containers suck really bad. You cannot typedef them, you cannot declare them inside a class, they use obscure macros... Standard containers work really well. --- End quote --- That's interesting :). Just a question. There are not problems by using standard containers with wxWidgets? Otherside, is not the use of std::string not adviced to be used with wxWidgets? Michael takeshi miya:. Navigation [0] Message Index [#] Next pageGo to full version
https://forums.codeblocks.org/index.php/topic,1715.0/wap2.html?PHPSESSID=732236a4c1872149216a187a0384c2da
Some time back I had read Peter Norvig's beautiful post Solving Every Sudoku Puzzle, and tried to convert his program into Haskell. His idea, in short, was to use constraint propagation to fill in as many values as possible, and then ultimately search. Though the resulting Haskell program was not as succinct as his original Python, I thought it wasn't too bad either. Perhaps using some monad trick (that I have yet to learn!) all the checks against 'Nothing' can be eliminated. One (only?) answer, got in under a second, is:

7 8 9 |1 3 5 |6 2 4
6 2 3 |9 4 7 |8 1 5
4 5 1 |2 8 6 |3 9 7
------+------+------
2 3 7 |4 1 8 |5 6 9
8 4 5 |6 9 3 |2 7 1
9 1 6 |7 5 2 |4 8 3
------+------+------
1 7 8 |5 2 4 |9 3 6
5 6 2 |3 7 9 |1 4 8
3 9 4 |8 6 1 |7 5 2

And my Haskell translation is (gist):

import List (elem, nub, filter, delete, intersperse, replicate)
import Data.Map (Map, fromList, (!), insert, keys, elems, toList)
import Data.List (intercalate)
import Data.Maybe

type Val = Char          -- Value of a cell
type Square = String     -- two-character square ID
type Board = Maybe (Map Square String)  -- The board state

cross xs ys = [[x,y] | x <- xs, y <- ys]

rows = "ABCDEFGHI"
cols = "123456789"
digits = "123456789"
squares = cross rows cols
unitlist = [cross rows [c] | c <- cols]
        ++ [cross [r] cols | r <- rows]
        ++ [cross rs cs | rs <- ["ABC", "DEF", "GHI"], cs <- ["123","456","789"]]
units sq = filter (elem sq) unitlist
peers sq = delete sq . nub . concat $ units sq

-- Check for places where d appears in the units of sq
checkplaces :: Board -> Square -> Val -> Board
checkplaces b0 sq d = foldl f b0 (units sq)
  where
    f b u
      | isNothing b || len == 0 = Nothing
      | len == 1                = assign b (head dplaces) d
      | otherwise               = b
      where
        -- dplaces is all squares in the unit u possibly containing d
        dplaces = [s | s <- u, elem d ((fromJust b)!s)]
        len = length dplaces

eliminate :: Board -> Square -> Val -> Board
eliminate Nothing _ _ = Nothing
eliminate (Just b0) sq d
  | notElem d v    = Just b0              -- Already Eliminated
  | length v' == 0 = Nothing              -- Contradiction: Removed last value
  | length v' == 1 = checkplaces b'' sq d -- Only 1 left: Remove from peers
  | otherwise      = checkplaces (Just b') sq d
  where
    v   = b0 ! sq
    v'  = delete d v
    b'  = insert sq v' b0
    h   = (head v')
    b'' = foldl (\b p -> eliminate b p h) (Just b') (peers sq)

assign :: Board -> Square -> Val -> Board
assign Nothing _ _ = Nothing
assign b0 sq d0 = foldl f b0 ((fromJust b0) ! sq)
  where f b d = if d0 == d then b else eliminate b sq d

parsegr :: String -> Board
parsegr s = foldl f b0 (zip squares s')
  where
    s' = filter (\x -> elem x "0.-123456789") s
    b0 = Just (fromList [(s, digits) | s <- squares])
    f b (sq, d)
      | isNothing b      = Nothing
      | notElem d digits = b
      | otherwise        = assign b sq d

search :: Board -> Board
search Nothing = Nothing
search (Just b) =
    if all ((==1) . length) $ elems b then Just b else search' (b!s)
  where
    minl = minimum [l | v <- elems b, let l = length v, l > 1]
    s = head [sq | (sq, v) <- toList b, length v == minl]
    search' [] = Nothing
    search' (d:ds) = if isJust b' then b' else search' ds
      where b' = search $ assign (Just b) s d

printgr :: Board -> String
printgr Nothing = "Unsolvable"
printgr (Just b) = concat [(fr r) ++ (if elem r "CF" then line else "") ++ "\n" | r <- rows]
  where
    w = 1 + maximum [length (b!sq) | sq <- squares]
    line = (++) "\n" $ intercalate "+" $ replicate 3 $ replicate (3 * w) '-'
    fr r = concat [(fmt (b![r,c])) ++ (if elem c "36" then "|" else "") | c <- cols]
    fmt s = s ++ replicate (w - (length s)) ' '

solve :: String -> IO ()
solve s = putStrLn $ printgr $ search $ parsegr s

-- Instance from
grid = "7..1......2.....15.....639.2...18....4..9..7....75...3.785.....56.....4......1..2"
main = solve grid

Here's my implementation, based on the Dlx algorithm: Implementation of DLX in PLT Scheme. This solves the sample problem in about 20ms on my machine.

Solving sudoku is interesting. How about designing sudoku puzzles that have exactly one solution? Does anyone have an idea how to do that? I can think of the following:

0: Initially all fields are unoccupied.
1: Randomly choose an unoccupied field.
2: Take all consistent digits in arbitrary order.
3: If there are no consistent digits backtrack from step 1.
4: Try the digits in arbitrary order.
5: For each trial solve the puzzle to a maximum of two solutions.
6: If there is no solution, try the next consistent digit or backtrack from step 1 if no digits are left.
7: If there is one solution we are ready.
9: If there is more than one solution, try to complete the puzzle from step 1.

Jos

This is a solution in php that uses backtracking. You start with a first empty cell and the least number that fits there (i.e. with no conflict from other numbers that are already supplied in the puzzle).
And apply the same logic to the next empty cell, until you're not able to fill any number in an empty cell – at which time you backtrack to the previous cell, change the number there and try again. So for the above Sudoku puzzle, since the first empty cell is at (1,2), the function below will first be invoked as solve(1,1,2). $game_table is just a 9x9 array that already contains the numbers supplied in the puzzle; $complete is set to FALSE before calling solve() for the first time.

function solve($number, $row, $column) {
    global $game_table;
    global $complete;
    while ($number <= 9 && !$complete) {
        // check if this number can fit into this cell
        if (check_horizontal($number,$row,$column) && check_vertical($number,$row,$column) && check_square($number,$row,$column)) {
            $game_table[$row][$column] = $number; // no conflict, so fill the cell with this number
            // find the next blank cell
            $i = 1; $j = 1; $found = false;
            for ($i = 1; $i <= 9; $i++) {
                for ($j = 1; $j <= 9; $j++) {
                    if ($game_table[$i][$j] == 0) {
                        $found = true;
                        break;
                    }
                }
                if ($found) break;
            }
            if ($found)
                solve(1, $i, $j); // recursive call
            else {
                $complete = true;
                return;
            }
        }
        // the recursive call returns here - so either the puzzle is
        // complete or there was no way to fill the next cell
        if (!$complete) $number++;
    }
    if (!$complete) $game_table[$row][$column] = 0; // reset this cell
}

function check_horizontal($n, $r, $c) {
    global $game_table;
    for ($j = 1; $j <= 9; $j++)
        if ($game_table[$r][$j] == $n) return false;
    return true;
}

function check_vertical($n, $r, $c) {
    global $game_table;
    for ($i = 1; $i <= 9; $i++)
        if ($game_table[$i][$c] == $n) return false;
    return true;
}

function check_square($n, $r, $c) {
    global $game_table;
    $a = $r % 3; $b = floor($r / 3);
    if ($a == 0) { $r1 = (3*($b-1))+1; $r2 = $r; }
    else         { $r1 = $r-$a+1;      $r2 = 3*($b+1); }
    $a = $c % 3; $b = floor($c / 3);
    if ($a == 0) { $c1 = (3*($b-1))+1; $c2 = $c; }
    else         { $c1 = $c-$a+1;      $c2 = 3*($b+1); }
    for ($i = $r1; $i <= $r2; $i++)
        for ($j = $c1; $j <= $c2; $j++)
            if ($game_table[$i][$j] == $n) return false;
    return true;
}
http://programmingpraxis.com/2009/02/19/sudoku/?like=1&source=post_flair&_wpnonce=9233933e84
CC-MAIN-2014-42
refinedweb
1,239
70.57
Creating Flipable UI in Qt Quick From Nokia Developer Wiki This article shows how to use the QML Flipable element. Article Metadata Code Example Source file: Media:FlipableUI.zipCompatibility Created: jaydipNokia (19 Jun 2011) Last edited: hamishwillee (13 Jun 2012) Introduction The Flipable item provides a surface that can be flipped. Flipable is an item that can be visibly "flipped" between its front and back sides, like a card. It is used together with Rotation, State and Transition elements to produce a flipping effect. The following example shows a Flipable item that flips whenever it is clicked. Screen Shot Main.qml File The front and back properties are used to hold the items that are shown respectively on the front and back sides of the flipable item. You can add more control on Front and back sides like Button, Text, and Image Element etc. import QtQuick 1.0 Rectangle { id: rect width: 640; height: 320 color: "lightsteelblue" Flipable { id: sign property bool frontSide: true anchors.centerIn: parent width: 243; height: 230 MouseArea { anchors.fill: parent onClicked: sign.frontSide = !sign.frontSide z: -1 } transform: Rotation { origin.x: sign.width / 2; origin.y: sign.height / 2 axis.x: 1; axis.y: 0; axis.z: 0 angle: sign.frontSide ? 0 : 180 Behavior on angle { RotationAnimation { direction: RotationAnimation.Clockwise easing.type: Easing.InOutCubic; duration: 300 } } } front: Image { anchors.fill: parent source: "img/FN12.png" smooth: true // you can add button control here // you can also add text QMl Element here } back: Image { anchors.fill: parent source: "img/ND1.png" smooth: true // you can add button control here // you can also add text QMl Element here } } } Source Code You can download Sample code from File:FlipableUI.zip.
http://developer.nokia.com/community/wiki/index.php?title=Creating_Flipable_UI_in_Qt_Quick&oldid=151512
CC-MAIN-2014-15
refinedweb
282
61.12
MonoidK MonoidK is a universal monoid which operates on kinds. This type class is useful when its type parameter F[_] has a structure that can be combined for any particular type, and which also has an “empty” representation. Thus, MonoidK is like a Monoid for kinds (i.e. parametrized types). A MonoidK[F] can produce a Monoid[F[A]] for any type A. Here’s how to distinguish Monoid and MonoidK: Monoid[A]allows Avalues to be combined, and also means there is an “empty” A value that functions as an identity. MonoidK[F]allows two F[A]values to be combined, for any A. It also means that for any A, there is an “empty” F[A]value. The combination operation and empty value just depend on the structure of F, but not on the structure of A. Let’s compare the usage of Monoid[A] and MonoidK[F]. First some imports: import cats.{Monoid, MonoidK} import cats.implicits._ Just like Monoid[A], MonoidK[F] has an empty method, but it is parametrized on the type of the element contained in F: Monoid[List[String]].empty // res0: List[String] = List() MonoidK[List].empty[String] // res1: List[String] = List() MonoidK[List].empty[Int] // res2: List[Int] = List() And instead of combine, it has combineK, which also takes one type parameter: Monoid[List[String]].combine(List("hello", "world"), List("goodbye", "moon")) // res3: List[String] = List(hello, world, goodbye, moon) MonoidK[List].combineK[String](List("hello", "world"), List("goodbye", "moon")) // res4: List[String] = List(hello, world, goodbye, moon) MonoidK[List].combineK[Int](List(1, 2), List(3, 4)) // res5: List[Int] = List(1, 2, 3, 4) Actually the type parameter can usually be inferred: MonoidK[List].combineK(List("hello", "world"), List("goodbye", "moon")) // res6: List[String] = List(hello, world, goodbye, moon) MonoidK[List].combineK(List(1, 2), List(3, 4)) // res7: List[Int] = List(1, 2, 3, 4) MonoidK extends SemigroupK, so take a look at the SemigroupK documentation for more examples.
https://typelevel.org/cats/typeclasses/monoidk.html
CC-MAIN-2018-17
refinedweb
332
58.28
Lecture 8 — Tuples, Modules, Images

Overview

- While most of this lecture is not covered in our textbook, the lecture serves as an introduction to using more complex data types like lists.
- We will first learn a simple data type called tuples which allows us to work with multiple values together - including returning two or more values from a function.
- We will then revisit modules, and how functions you write can be used in other programs.
- Most of the class we will be learning how to use a new module for manipulating images.
- We will introduce a new data type - an image - which is much more complex than the other data types we have learned so far.
- We will study a module called pillow which is specifically designed for this data type.
- Class will end with a review for Monday's Exam 1, so it will be a bit long...

Tuple Data Type

A tuple is a simple data type that puts together multiple values as a single unit. Much like a list, a tuple allows you to access individual elements: the first value starts at zero (this "indexing" will turn into a big Computer Science thing!)

>>> x = (4, 5, 10) # note the parentheses
>>> print(x[0])
4
>>> print(x[2])
10
>>> len(x)
3

As we will explore in class, tuples and strings are similar in many ways.

>>> s[0]
'a'
>>> s[1]
'b'

Just like strings, you cannot change a part of the tuple; you can only change the entire tuple!

>>> x[1] = 2
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'tuple' object does not support item assignment
>>> s[1] = 'A'
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'str' object does not support item assignment

What are tuples good for?

Tuples are Python's way of making multiple assignments.

>>> 2,3
(2, 3)
>>> x = 2,3
>>> x
(2, 3)
>>> a,b=x
>>> a
2
>>> b
3
>>> c,d=3,4
>>> c
3
>>> d
4

You can write functions that return multiple values.
def split(n): ''' Split a two-digit number into its tens and ones digit ''' tens = n // 10 ones = n % 10 return (tens, ones) x = 83 ten,one = split(x) print( x, "has tens digit", ten, "and ones digit", one ) outputs 83 has tens digit 8 and ones digit 3 We can do the reverse, passing a tuple to a function. def combine( digits ): return digits[0]*100 + digits[1]*10 + digits[2] d = (5, 2, 7) print( combine(d)) outputs 527 Basics of modules¶ Recall that a module is a collection of Python variables, functions and objects, all stored in a file. Modules allow code to be shared across many different programs. Before we can use a module, we need to import it. The import of a module and use of functions within the module have the follow general form. >>> import module_name >>> module_name.function(arugments) Area and Volume Module¶ Here are a number of functions from the area calculations we’ve been developing so far, gathered in a single Python file called lec08_area.py: import math def circle(radius): ''' Compute and return the area of a circle ''' return math.pi * radius**2 def cylinder(radius,height): ''' Compute and return the surface area of a cylinder ''' circle_area = circle(radius) height_area = 2 * radius * math.pi * height return 2*circle_area + height_area def sphere(radius): ''' Compute and return the surface area of a sphere ''' return 4 * math.pi * radius**2 Now we can write another program that imports this code and uses it: import lec08_area r = 6 h = 10 a1 = lec08_area.circle(r) # Call a module function a2 = lec08_area.cylinder(r,h) # Call a module function a3 = lec08_area.sphere(r) # Call a module function print("Area circle {:.1f}".format(a1)) print("Surface area cylinder {:.1f}".format(a2)) print("Surface area sphere {:.1f}".format(a3)) We will review this in class. 
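As a side note not in the original handout: the area functions above are called with the module_name.function(...) form. Python also supports from module import name, which drops the module prefix. A quick self-contained sketch using the standard math module (so it does not depend on the lec08_area file existing):

```python
import math
from math import pi, sqrt

# Qualified access, as in lec08_area.circle(r):
area1 = math.pi * 6**2

# Unqualified access after a from-import:
area2 = pi * 6**2

# Both forms refer to the same value.
assert area1 == area2
print(round(sqrt(area1), 3))
```

Both styles work; the qualified form used in the handout makes it clearer where each function comes from, which is why many style guides prefer it.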
PIL/PILLOW — Python Image Library¶

PILLOW is a series of modules built around the Image type, our first object type that is not part of the main Python language - We have to tell Python about this type through import

We will use images as a continuing example of what can be done in programming beyond numbers and beyond text. See for more details.

Images¶
An image is a two-dimensional matrix of pixel values. The origin is in the upper left corner.

Pixel values stored in an image can be:
- RGB — a "three-tuple" consisting of the red, green and blue values, all non-negative integers
- L — a single "gray-scale" integer value representing the brightness of each pixel

Some important image modules¶
- Image module contains the main functions to manipulate images: open, save, resize, crop, paste, create new images, change pixels, etc.
- ImageDraw module contains functions to touch up images by adding text, drawing ellipses, rectangles, etc.
- ImageFont contains functions to create images of text for a specific font.
- We will only use the Image module in this lecture.

Our First Image Program¶
We'll start by working through the following example which you can save as lec08_images_init.py:

from PIL import Image

filename = "chipmunk.jpg"
im = Image.open(filename)

print('\n' '********************')
print("Here's the information about", filename)
print(im.format, im.size, im.mode)

gray_im = im.convert('L')
scaled = gray_im.resize( (128,128) )

print("After converting to gray scale and resizing,")
print("the image information has changed to")
print(scaled.format, scaled.size, scaled.mode)

scaled.show()
scaled.save(filename + "_scaled.jpg")

Image Type and Methods¶
Let us now see some very useful image methods. You need to be very careful with the image functions.
- Some functions do change the image and return nothing (like the sort function for lists).
- Some functions do not change the image and return a value, which is sometimes a new image.
It is crucial that you use each function correctly.

im = Image.open(filename) reads an image with the given filename and returns an image object (which we are associating with the variable im).
- Because we only give the file name, and not a more complete path, the Python script and the image must be stored in the same folder.

Images are complex objects. They have associated properties that you can print or use. For example

>>> im = Image.open('swarm.jpg')
>>> im.size
(600, 800)
>>> im.format
'JPEG'
>>> im.mode
'RGB'

You can see that im.format and im.mode are strings, while im.size is a tuple. All of these are values associated with an image object.

im.show() is a function that displays the image.

im.save(filename) saves the image in the given file name.

You can create an empty new image with given dimensions using: Image.new("RGB",(width,height)).

>>> im5 = Image.new('RGB', (200,200))
>>> im5.show()

You can also create a new image by cropping a part of a given image:

>>> im.crop((w1,h1,w2,h2))

which will crop a box from upper left corner (w1, h1) to lower right corner (w2,h2). You can see that the box is entered as a tuple. The image object im is not changed by this function, but a new image is returned. So, we must assign it to a new variable. Try this:

>>> im2 = im.crop((100,100,300,400))
>>> im2.show()
>>> im.show()

You can get a new image that is a resized version of an existing image. The new size must be given as a tuple: im.resize((width,height))

>>> im3 = im.resize((300,200))
>>> im3.save('resized.jpg')

im.convert(mode) creates a copy of an image with a new mode - gray scale ('L') in the following example:

>>> im4 = im.convert('L')
>>> im4.show()

Something new, functions that change an image¶
The functions we have seen so far return a new result, but never change the object that they apply to. More complex types often provide methods that change the object itself, for efficiency reasons. You just have to remember how each function works, the same as we did for lists.
Here is our first function with this property:

im1.paste(im2, (x,y)) pastes one image (im2) into the first image (im1) starting at the top left coordinates (x,y). The first image is changed as a result, but not the second one. Note that the second image must fit in the first image starting with these coordinates; otherwise, the pasted image will be cropped.

How we call such a function is different:

>>> im1 = Image.open('sheep.jpg')
>>> im1.size
(600, 396)
>>> im = Image.new('RGB', (600, 396*2))
>>> im.paste( im1, (0,0))  ## not assigning the result of paste to a new variable
>>> im.show()
>>> im.paste( im1, (0, 396))
>>> im.show()

The fact that the function paste changes an image is an implementation decision made by the designers of PIL, mostly because images are so large and copying is therefore time consuming. Later in the semester, we will learn how to write such functions.

Example 2: Cut and pasting parts of an image¶
This example crops three boxes from an image, creates a new image and pastes the boxes at different locations of this new image.

from PIL import Image

im = Image.open("lego_movie.jpg")
w,h = im.size

## Crop out three columns from the image
## Note: the crop function returns a new image
part1 = im.crop((0,0,w//3,h))
part2 = im.crop((w//3,0,2*w//3,h))
part3 = im.crop((2*w//3,0,w,h))

## Create a new image
newim = Image.new("RGB",(w,h))

## Paste the image in different order
## Note: the paste function changes the image it is applied to
newim.paste(part3, (0,0))
newim.paste(part1, (w//3,0))
newim.paste(part2, (2*w//3,0))

newim.show()

Summary¶
- Tuples are similar to strings and numbers in many ways. You cannot change a part of a tuple. However, unlike other simple data types, tuples allow access to the individual components using the indexing notation [ ].
- Modules contain a combination of functions, variables, object definitions, and other code, all designed for use in other Python programs and modules.
- PILLOW provides a set of modules that define the Image object type and associated methods.

Reviewing for the exam: topics and ideas¶
Here are crucial topics to review before the exam.
- Syntax: can you find syntax errors in code?
- Correct variable names, assigning a value to a variable
- Output: can you predict the output of a piece of code?
- Expressions and operator precedence
- The distinction between integer and float division
- The distinction between division (4//5) and modulo (4%5) operators, and how they work for positive and negative numbers
- Remember shorthands: +=, -=, /=, *=.
- Functions: defining functions and using them
- Distinguish between variables local to functions and variables that are global
- Modules: how to import and call functions that are from a specific module (math is the only one we learned so far)
- How to access variable values defined in a module (see math.pi for example)
- Strings: how to create them, how to escape characters, multi-line strings
- How to use input(): remember it always returns a string
- Boolean data type: distinguish between expressions that return integer/float/string/Boolean
- Remember the distinction between = and ==
- Boolean value of conditions involving AND/OR/NOT
- if/elif/else: how to write them. Understand what parts are optional and how they work
- Creating, indexing and using lists
- Remember the same function may work differently and do different things when applied to a different data type.
- Review all about the different ways to call the print function for multiple lines of input
- Operators: + (concatenation and addition), * (replication and multiplication), /, %, **
- Functions: int(), float(), str(), math.sqrt(), min(), max(), abs(), round(), sum(), etc.
- Functions applied to string objects using the dot notation, where string is a string object, such as "car" or the name of a string variable: string.upper(), string.lower(), string.replace(), string.capitalize(), string.title(), string.find(), string.count(), len()
- Functions applied to list objects using the dot notation, where list is a list object, such as [2, 3, ["a", "b"], 4] or the name of a list variable: list.append(), list.pop(), list.insert(), list.remove(), plus the built-in functions len() and sum()
- Distinguish between the different types of functions we have learned in this class:
- Functions that take one or more values as input and return something (input objects/values are not modified)

>>> min(3,2,1)
1
>>> mystr = 'Monty Python'
>>> len(mystr)
12

- Functions that take one or more values as input and return nothing (input objects/values are not modified)

>>> def print_max(val1, val2):
...     print("Maximum value is", max(val1, val2))
...
>>> x1 = 10
>>> x2 = 15
>>> print_max(x1, x2)
Maximum value is 15

- Functions that apply to an object, like a string, and return a value (but do not modify the object that they are applied to)

>>> mystr = 'Monty Python'
>>> mystr.replace('o','x')
'Mxnty Pythxn'
>>> mystr
'Monty Python'
>>> mystr.upper()
'MONTY PYTHON'
>>> mystr
'Monty Python'

- Functions that are applied to an object, like an Image, and modify it (but do not return anything); we have only learned Image.paste so far (and images will NOT be on the exam).

>>> im.paste( im2, (0,0) )

- Local vs. global variables: Can you tell what each of the print statements prints and explain why?

def f1(x,y):
    return x+y

def f2(x):
    return x+y

x = 5
y = 10
print('A:', f1(x,y))
print('B:', f1(y,x))
print('C:', f2(x))
print('D:', f2(y))
print('E:', f1(x))

Reviewing for the exam: problem solving¶
In the remaining time we will go through several practice questions to demonstrate how we approach these problems. While our immediate concern is the exam, you will be developing your problem solving skills and programming abilities.
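Before the practice questions, one distinction flagged in the topics list above is worth re-checking at the interpreter: how // and % behave for negative numbers. Python's floor division rounds toward negative infinity, and the result of % always takes the sign of the divisor:

```python
# Floor division rounds toward negative infinity,
# so for negative operands it does NOT just drop the fraction.
print(7 // 2, 7 % 2)      # 3 1
print(-7 // 2, -7 % 2)    # -4 1
print(7 // -2, 7 % -2)    # -4 -1

# The invariant that ties the two operators together:
# a == (a // b) * b + (a % b) for any nonzero b
for a, b in [(7, 2), (-7, 2), (7, -2), (-7, -2)]:
    assert a == (a // b) * b + (a % b)
```

This is exactly the kind of detail the "predict the output" questions like to test.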
Most of these questions have appeared on previous exams in CS 1. What is the exact output of the following Python code? What are the global variables, the function arguments, the local variables, and the parameters in the code? x=3 def do_something(x, y): z=x+y print(z) z += z print(z) z += z * z print(z) do_something(1,1) y=1 do_something(y,x) Write a Python function that takes two strings as input and prints them together on one 35-character line, with the first string left-justified, the second string right-justified, and as many periods between the words as needed. For example, the function calls print_left_right( 'apple', 'banana') print_left_right( 'syntax error', 'semantic error') should output apple........................banana syntax error.........semantic error You may assume that the lengths of the two strings passed as arguments together are less than 35 characters. In the United States, a car’s fuel efficiency is measured in miles driven per gallon used. In the metric system it is liters used per 100 kilometers driven. Using the values 1.609 kilometers equals 1 mile and 1 gallon equals 3.785 liters, write a Python function that converts a fuel efficiency measure in miles per gallon to one in liters per 100 kilometers and returns the result. Write a program that reads Erin’s height (in cm), Erin’s age (years), Dale’s height (in cm) and Dale’s age (years) and tells the name of the person who is both older and taller or tells that neither is both older and taller.
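For self-checking, here are possible solution sketches for two of the practice problems above: the left/right printing and the fuel-efficiency conversion. These are one way to solve them, not official answers, and the function names are chosen here for illustration.

```python
def print_left_right(left, right):
    # 35-character line: left word, filler periods, right word.
    dots = 35 - len(left) - len(right)
    print(left + '.' * dots + right)

def mpg_to_l_per_100km(mpg):
    # 1 mile = 1.609 km, 1 gallon = 3.785 liters.
    # miles/gallon -> km per liter -> liters per 100 km.
    km_per_liter = mpg * 1.609 / 3.785
    return 100 / km_per_liter

print_left_right('apple', 'banana')          # apple........................banana
print(round(mpg_to_l_per_100km(23.5), 2))    # 10.01
```

Note how print_left_right computes the number of periods from the two lengths instead of hard-coding it; that is what makes the 35-character constraint hold for any inputs.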
http://www.cs.rpi.edu/~sibel/csci1100/fall2017/lecture_notes/lec08_modules_images.html
07 March 2006 14:56 [Source: ICIS news]

Correction: In the ICIS news article headlined "German chems growth to continue in H1 2005 - VCI", please read the headline as "German chems growth to continue in H1 2006 - VCI" instead of 2005.

LONDON (ICIS news)--The upswing in the German chemicals industry will continue at least in the first half of 2006, although the high growth rates seen in 2005 will not be repeated, chemical industry association VCI said on Tuesday.

Chemicals production in 2005 rose by 7.1%, the highest growth rate for 20 years. VCI said that in 2006 output growth of chemicals was still expected to slow to 2.5%, in line with previous forecasts. With continued rises in selling prices, however, turnover in the industry could increase by 4.5%.

But growth prospects could be diminished by high crude prices, VCI warned. The increase in costs for the industry has again intensified but could be absorbed by economic growth. Private consumption, in view of high unemployment and scarce cash availability, could also affect the continuation of growth, VCI said.

Chemicals prices rose significantly in the fourth quarter, by an average of 2.3% compared with the corresponding period of 2004, VCI said. Prices were up 2% from the previous three-month period. Turnover in the fourth quarter rose 0.8% from the previous quarter to Euro38.3bn ($46bn), mainly driven by growth in trade with other European countries. Exports to the European Union (EU) rose 8.6% to Euro14.9bn ($17.8bn) in Q4 2005 compared with the year earlier period.

The crude oil price sank 7.5% compared with the record value of the previous quarter to an average of $56.94/bbl. However, crude prices were still 30% higher than during Q4 2004. In January and February 2006, crude oil quotations again climbed over $60/bbl. Naphtha cost on average Euro428/tonne in Q4, 3.6% higher than in Q3.
Prices for naphtha, the key raw material for chemicals production, increased 23% from the second quarter this year. Contract prices for ethylene rose by 28.9% from the third quarter to Euro825/tonne in Q4; propylene was up 26.6% to Euro810/tonne and orthoxylene (OX) increased by 26.2% to Euro770/tonne. However, ethylene and propylene prices fell by 4.8% and 3.1% respectively in Q1 this year compared with Q4 2005.

German chemical industry output and prices: Q4 2005 versus Q4 2004
http://www.icis.com/Articles/2006/03/07/1047208/corrected+german+chems+growth+to+continue+in+h1+2006-vci.html
In this article, we will be working through how to use Font Awesome in an Angular app and how we can use Font Awesome's icon animation and styling. Before we move further, let's talk about what Font Awesome is.

Font Awesome

Font Awesome is an icon toolkit with over 1,500 free icons that are incredibly easy to use. The icons are created as scalable vectors and inherit CSS size and color when styles are applied to them. This makes them high-quality icons that work well on any screen size.

Before the release of Angular 5, developers had to install the Font Awesome package and reference its CSS in an Angular project before it could be used. But the release of Angular 5 made it easy to implement Font Awesome in our project through the creation of the Angular component for Font Awesome. With this feature, Font Awesome can be integrated into our projects cleanly. Font Awesome icons blend well with inline text due to their scalability. In this article, we are going to explore more on how to use animation, coloring, and sizing for Font Awesome icons.

Creating a demo Angular app

Let's create a demo Angular app for this tutorial. Open your terminal, cd to the project directory, and run the command below. Before you run the command, make sure you have Node.js installed in your system and also have Angular CLI installed, too:

ng new angular-fontawesome

Installing Font Awesome dependencies

For those who have an existing project, we can follow up from here.
Once the above command is done, cd to the project directory and run the commands below to install the Font Awesome packages:

npm install @fortawesome/angular-fontawesome
npm install @fortawesome/fontawesome-svg-core
npm install @fortawesome/free-brands-svg-icons
npm install @fortawesome/free-regular-svg-icons
npm install @fortawesome/free-solid-svg-icons

# or

yarn add @fortawesome/angular-fontawesome
yarn add @fortawesome/fontawesome-svg-core
yarn add @fortawesome/free-brands-svg-icons
yarn add @fortawesome/free-regular-svg-icons
yarn add @fortawesome/free-solid-svg-icons

Using Font Awesome icons in Angular applications

There are two steps to using Font Awesome in an Angular project. We're going to look at both of these:
- How to use Font Awesome icons at the components level
- How to use the Font Awesome icons library

How to use Font Awesome icons at the components level

This approach uses Font Awesome icons at the component level. It is not a good approach in general, because it involves importing icons into each component that needs one, and importing the same icons multiple times.

We are still going to take a look at how we can use icons in a component in case we are building an application that requires us to use an icon in just one component.

After installing Font Awesome, open the app.module.ts and import the FontAwesomeModule just like the one below:

import { FontAwesomeModule } from '@fortawesome/angular-fontawesome'

imports: [
  BrowserModule,
  AppRoutingModule,
  FontAwesomeModule
],

After that, open app.component.ts and import the icon name that you want to use. Let's say we want to make use of faCoffee:

import { faCoffee } from '@fortawesome/free-solid-svg-icons';

Next, we create a variable called faCoffee and assign the imported icon to it so we can use it in app.component.html.
If we don't do that, we can't use it:

faCoffee = faCoffee;

Now, in the app.component.html, write the code below:

<div>
  <fa-icon [icon]="faCoffee"></fa-icon>
</div>

Run the command below to serve our app and check that our icon displays:

ng serve

If we look at our webpage, we will see the coffee icon displayed on the screen. That shows that the icon was installed and imported successfully.

How to use the Font Awesome icons library

This is the best approach to using Font Awesome in our applications, especially when we want to use it across all components without re-importing icons or importing one icon multiple times. Let's take a look at how we can achieve that.

Open app.module.ts and write the code below:

import { FaIconLibrary } from '@fortawesome/angular-fontawesome';
import { faStar as farStar } from '@fortawesome/free-regular-svg-icons';
import { faStar as fasStar } from '@fortawesome/free-solid-svg-icons';

export class AppModule {
  constructor(library: FaIconLibrary) {
    library.addIcons(fasStar, farStar);
  }
}

After that, we can use it directly inside app.component.html without declaring a variable and passing it to that variable before using it:

<div>
  <fa-icon [icon]="['fas', 'star']"></fa-icon>
  <fa-icon [icon]="['far', 'star']"></fa-icon>
</div>

If we load the webpage now, we are going to see the star icons being displayed.

Icon styling in Font Awesome

Font Awesome has four different styles, and we'll look at the free icons — minus the Pro light icons, which use the prefix 'fal' and a professional license:
- The solid icons use the prefix 'fas' and are imported from @fortawesome/free-solid-svg-icons
- The regular icons use the prefix 'far' and are imported from @fortawesome/free-regular-svg-icons
- The brand icons use the prefix 'fab' and are imported from @fortawesome/free-brands-svg-icons

Moving forward, let's look at what more we can do with Font Awesome icons.
Changing icon color and size without writing a CSS style

Let's look at how we can change Font Awesome icon colors without writing a CSS style in Angular. This approach is handy at the component level, where an icon is used in only one place: instead of creating a CSS class and styling the icon in a stylesheet, we style it inline. It is less useful for icons shared across many components; there, a single CSS class or style attribute lets us change the color once for all of them.

So let's look at how we are going to do this in an Angular project. By default, the icon below is black and we want to change it to red:

// from black
<fa-icon [icon]="['fab', 'angular']" ></fa-icon>

// to red
<fa-icon
  [icon]="['fab', 'angular']"
  [styles]="{ stroke: 'red', color: 'red' }"
></fa-icon>

When changing icon colors and strokes using inline styling, we have to make use of the styles input on the fa-icon component.

Next, we are going to increase the icon size from small to large using inline styling in Angular. To do this, we have to use the size property of the fa-icon:

<fa-icon [icon]="['fab', 'angular']" [styles]="{ stroke: 'red', color: 'red' }" size="xs"></fa-icon>
<fa-icon [icon]="['fab', 'angular']" [styles]="{ stroke: 'red', color: 'red' }" size="sm"></fa-icon>
<fa-icon [icon]="['fab', 'angular']" [styles]="{ stroke: 'red', color: 'red' }" size="lg"></fa-icon>
<fa-icon [icon]="['fab', 'angular']" [styles]="{ stroke: 'red', color: 'red' }" size="5x"></fa-icon>
<fa-icon [icon]="['fab', 'angular']" [styles]="{ stroke: 'red', color: 'red' }" size="10x"></fa-icon>

By default, the Font Awesome icons inherit the size of the parent container. This lets them match any text we might use them with, but we have to give them an explicit size if we don't like the default. We use the classes xs, sm, lg, 5x, and 10x.
These classes increase and decrease the icon size to what we want.

Animating Font Awesome icons

Let's also look at how we can animate Font Awesome icons without using CSS or animation libraries in Angular. As a developer, when a user clicks a submit button or when the page is loading, we may want to show a loader or spinner effect telling the user that something is loading. We can use Font Awesome icons to achieve that purpose.

Instead of importing an external CSS animation library, we can just add the Font Awesome spin attribute to the icon tag. Doing this saves us from downloading a full CSS animation library just to end up using a spinner effect, or from writing a long CSS animation using keyframes.

So let's look at how we can achieve this by using a React icon:

<fa-icon
  [icon]="['fab', 'react']"
  [styles]="{ stroke: 'blue', color: 'blue' }"
  size="10x"
></fa-icon>

We have just imported the React icon, and now we are going to animate it. On the React icon component, add the Font Awesome spin attribute:

<fa-icon
  [icon]="['fab', 'react']"
  [styles]="{ stroke: 'blue', color: 'rgb(0, 11, 114)' }"
  size="10x"
  [spin]="true"
></fa-icon>

When we load the webpage, we are going to see the React icon rotating infinitely. This is because we set the spin attribute to true.

Conclusion

In this article, we were able to look at how we can use Font Awesome icons in an Angular project, how to add some basic styling that comes with the icon library, and how to animate icons. There is still more we can do with Font Awesome icons, things like Fixed Width Icons, Rotating Icons, Power Transforms, and combining two icons. Font Awesome's tutorials can teach you more about how you can use these tools in your projects.

If you found this article helpful, share it with your friends.
https://blog.logrocket.com/how-to-add-font-awesome-angular-project/
same language pairs, you may have to convert certain documents to intermediate formats, or even resort to manual translation. All these issues add extra cost, and create unnecessary complexity in building consistent and automated translation workflows. Amazon Translate aims at solving these problems in a simple and cost effective fashion. Using either the AWS console or a single API call, Amazon Translate makes it easy for AWS customers to quickly and accurately translate text in 55 different languages and variants. Earlier this year, Amazon Translate introduced batch translation for plain text and HTML documents. Today, I’m very happy to announce that batch translation now also supports Office documents, namely .docx, .xlsx and .pptx files as defined by the Office Open XML standard. Introducing Amazon Translate for Office Documents The process is extremely simple. As you would expect, source documents have to be stored in an Amazon Simple Storage Service (Amazon S3) bucket. Please note that no document may be larger than 20 Megabytes, or have more than 1 million characters. Each batch translation job processes a single file type and a single source language. Thus, we recommend that you organize your documents in a logical fashion in S3, storing each file type and each language under its own prefix. Then, using either the AWS console or the StartTextTranslationJob API in one of the AWS language SDKs, you can launch a translation job, passing: - the input and output location in S3, - the file type, - the source and target languages. Once the job is complete, you can collect translated files at the output location. Let’s do a quick demo! Translating Office Documents Using the Amazon S3 console, I first upload a few .docx documents to one of my buckets. Then, moving to the Translate console, I create a new batch translation job, giving it a name, and selecting both the source and target languages. 
Then, I define the location of my documents in Amazon S3, and their format, .docx in this case. Optionally, I could apply a custom terminology, to make sure specific words are translated exactly the way that I want. Likewise, I define the output location for translated files. Please make sure that this path exists, as Translate will not create it for you.

Finally, I set the AWS Identity and Access Management (IAM) role, giving my Translate job the appropriate permissions to access Amazon S3. Here, I use an existing role that I created previously, and you can also let Translate create one for you. Then, I click on 'Create job' to launch the batch job.

The job starts immediately. A little while later, the job is complete. All three documents have been translated successfully. Translated files are available at the output location, as visible in the S3 console. Downloading one of the translated files, I can open it and compare it to the original version.

For small scale use, it's extremely easy to use the AWS console to translate Office files. Of course, you can also use the Translate API to build automated workflows.

Automating Batch Translation

In a previous post, we showed you how to automate batch translation with an AWS Lambda function. You could expand on this example, and add language detection with Amazon Comprehend. For instance, here's how you could combine the DetectDominantLanguage API with the Python-docx open source library to detect the language of .docx files.

import boto3
from docx import Document

document = Document('blog_post.docx')
text = document.paragraphs[0].text
comprehend = boto3.client('comprehend')
response = comprehend.detect_dominant_language(Text=text)
top_language = response['Languages'][0]
code = top_language['LanguageCode']
score = top_language['Score']
print("%s, %f" % (code,score))

Pretty simple! You could also detect the type of each file based on its extension, and move it to the proper input location in S3.
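The batch job itself can also be started from code, via the boto3 Translate client's start_text_translation_job call. Below is a minimal sketch: the bucket paths, role ARN, and job name are placeholders, and the request is built by a helper so its shape can be inspected without AWS credentials (the actual call is left commented out).

```python
def build_batch_job_params(input_uri, output_uri, role_arn, source, targets):
    # Parameters for translate.start_text_translation_job(**params).
    # ContentType tells Translate the input files are .docx documents.
    return {
        "JobName": "office-batch-demo",  # placeholder name
        "InputDataConfig": {
            "S3Uri": input_uri,
            "ContentType": "application/vnd.openxmlformats-officedocument"
                           ".wordprocessingml.document",
        },
        "OutputDataConfig": {"S3Uri": output_uri},
        "DataAccessRoleArn": role_arn,
        "SourceLanguageCode": source,
        "TargetLanguageCodes": targets,
    }

params = build_batch_job_params(
    "s3://my-bucket/input/docx/",                       # placeholder bucket
    "s3://my-bucket/output/",                           # placeholder bucket
    "arn:aws:iam::123456789012:role/TranslateS3Role",   # placeholder role
    "en", ["fr"],
)

# With credentials configured, launching the job would look like:
# import boto3
# translate = boto3.client("translate")
# job = translate.start_text_translation_job(**params)
```

One batch job handles a single file type and a single source language, as noted earlier, so a workflow would typically build one parameter set per S3 prefix.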
Then, you could schedule a Lambda function with CloudWatch Events to periodically translate files, and send a notification by email. Of course, you could use AWS Step Functions to build more elaborate workflows. Your imagination is the limit!

Getting Started

You can start translating Office documents today in the following regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (London), Europe (Frankfurt), and Asia Pacific (Seoul).

If you've never tried Amazon Translate, did you know that the free tier offers 2 million characters per month for the first 12 months, starting from your first translation request?

Give it a try, and let us know what you think. We're looking forward to your feedback: please post it to the AWS Forum for Amazon Translate, or send it to your usual AWS support contacts.

- Julien
https://aws.amazon.com/blogs/aws/amazon-translate-now-supports-office-documents/
We're proud to announce the release of Cerbero Suite 5 and Cerbero Enterprise Engine 2!

All of our customers can upgrade their licenses at a 50% discount for the next 3 months!

We value our customers, and everyone who bought a license in August should have already received a free upgrade to Cerbero Suite 5! If you fall in that category and haven't received a new license, please check your spam folder and, if necessary, contact us at sales@cerbero.io. Everyone who acquired a license before August can upgrade at the 50% discount mentioned above!

Speed

We introduced many core optimizations, while maintaining the same level of security. Cerbero Suite has always been fast, so these changes may not be too apparent. They are, however, noticeable in our benchmarks!

The scanning of certain file formats like PE and the disassembly of binaries using Carbon show a decent performance boost. However, in the case of certain file formats like PDF the performance boost is massive!

Documentation

For this release we created beautiful documentation for our SDK, which can be found at:.

The documentation of each module comes with an introduction detailing essential concepts. Other sections provide code examples with explanations. The API documentation contains the prototype of each method and function and it comes with code examples. Related constants, classes, methods and functions all contain references to each other. The documentation contains notes and hints in case there are things to be aware of.

The documentation is searchable. Entering the name of a constant, class, method or function brings you directly to its documentation.

The documentation of the UI module will enable you to create complex user interfaces. It even explains how to create entire workspaces with dock views, menus and toolbars.

While there remain dozens of modules to document, the Core and UI modules represent a great part of the functionality of Cerbero Suite and Cerbero Enterprise Engine.
We will release the documentation of more modules and topics over the course of the 5.x series.

Python

This release comes with the latest Python 3.9.6!

We update Python only between major versions and for the release of Cerbero Suite 4 we didn't have the time to upgrade. So the previous series remained with Python 3.6.

This series not only comes with the very latest Python version, but we also managed to keep compatibility with all our older supported systems, including Windows XP!

Scan Data Hooks

We introduced a new type of hook extension: scan data hooks. Using this type of hooks, it's trivial to customize the scan results of existing scan providers, for example adding a custom entry during the scan of a PE file and then providing the view to display it in the workspace.

The following is a small example. Add these lines to your user 'hooks.cfg' file.

[ExtScanDataTest_1]
label = External scan data test
file = ext_data_test.py
scanning = scanning
scandata = scandata

Create the file 'ext_data_test.py' in your 'plugins/python' directory and paste the following code into it.

from Pro.Core import *

def scanning(sp, ud):
    e = ScanEntryData()
    e.category = SEC_Info
    e.type = CT_VersionInfo
    e.otarget = "This is a test"
    sp.addHookEntry("ExtScanDataTest_1", e)

def scandata(sp, xml, dnode, sdata):
    sdata.setViews(SCANVIEW_TEXT)
    sdata.data.setData("Hello, world!")
    return True

Activate the extension from Extensions -> Hooks. Now when scanning a file an additional entry will be shown in the report. Clicking on the entry will display the data provided by the extension!

This type of extension is extremely powerful and we'll show some real use cases soon.

What Next?

Among the many things we introduced over the course of the previous 4.x series there were:
- ARM32/ARM64 disassembly and decompiling.
- Decompiling and emulation of Excel macros.
- Support for Microsoft Office document decryption.
- Disassembly of Windows user address space.
- Disassembly of Windows DMP files.
- Support of XLSB and XLSM formats.
- Support of CAB format.
- Hex editing of processes, disk and drives on Windows.
- Updated native UI for Ghidra 10.
- Improved decompiler.
- Improved macOS support.

So in the last series we spent a lot of time focusing on Microsoft technology. In particular, Excel malware required supporting its decryption, the various file formats used to deliver it (XLS, XLSB, XLSM) and creating a decompiler and an emulator for its macros. Also, in June we launched our Cerbero Enterprise Engine, which diverted some of our development resources, but it gave us the opportunity to clean up and improve our SDK.

This series will be focused mostly on non-Microsoft-specific technology and hence will appeal to a broader audience. We can’t wait to show you some of the things we have planned and we hope you enjoy this new release!

Happy hacking!
https://cerbero-blog.com/?p=2167
Rest in peace – 21-1! The grandest stage of all, WrestleMania XXX, recently happened, and with it came one of the biggest heartbreaks for WWE fans around the world: The Undertaker’s undefeated streak was finally over. Now, as an Undertaker fan, you’re disappointed, disheartened and shattered to pieces. And Little Jhool doesn’t want to upset you in any way possible. (After all, you are his only friend, true friend!) Little Jhool knows that you’re still sensitive to the loss, so he decides to help you out.

Every time you come across a number, Little Jhool carefully manipulates it. He doesn’t want you to face numbers which have “21” as a part of them, or, in the worst case possible, are divisible by 21. If you end up facing such a number you feel sad, and no one wants that, because you start chanting “The streak is broken!”. If the number doesn’t make you feel sad, you say, “The streak lives still in our heart!”

Help Little Jhool so that he can help you!

Input Format: The first line contains a number, t, denoting the number of test cases. After that, for t lines there is one number in every line.

Output Format: Print the required string, depending on how the number will make you feel.

Constraints:
1 ≤ t ≤ 100
1 ≤ n ≤ 1000000

#include <stdio.h>
#include <string.h>

int main()
{
    int test, flag, i, len;
    char num[32]; /* n <= 1000000 has at most 7 digits, so a small buffer suffices */
    unsigned long long int number;
    scanf("%d", &test);
    while (test--)
    {
        scanf("%llu", &number);
        sprintf(num, "%llu", number);
        len = strlen(num);
        flag = 0;
        for (i = 0; i < len - 1; i++)
        {
            if (num[i] == '2' && num[i + 1] == '1')
            {
                flag++;
            }
        }
        if (flag == 0 && (number % 21) != 0)
        {
            printf("The streak lives still in our heart!\n");
        }
        else
        {
            printf("The streak is broken!\n");
        }
    }
    return 0;
}

Competitive coding – HackerEarth problem
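For reference, the same check is easy to express in a higher-level language. Here is a possible Python sketch of the logic (not part of the original C solution; the function name is mine):

```python
def streak_status(n):
    # Sad if the decimal form contains "21" or if n is divisible by 21.
    if "21" in str(n) or n % 21 == 0:
        return "The streak is broken!"
    return "The streak lives still in our heart!"

for n in [120, 121, 42, 210]:
    print(n, "->", streak_status(n))
```

Note that both conditions must fail for the happy message: 42 contains no "21" but is divisible by 21, so it still makes you sad.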
https://coderinme.com/rest-peace-21-1-hackerearth-problem-coderinme/
Before You Waste Too Much Time The standard Perl distribution comes with a debugger, although it’s really just another Perl program, perl5db.pl. Since it is just a program, I can use it as the basis for writing my own debuggers to suit my needs, or I can use the interface perl5db.pl provides to configure its actions. That’s just the beginning, though. I can write my own debugger or use one of the many debuggers created by other Perl masters. Before I get started, I’m almost required to remind you that Perl offers two huge debugging aids: strict and warnings. I have the most trouble with smaller programs for which I don’t think I need strict and then I make the stupid mistakes it would have caught. I spend much more time than I should have tracking down something Perl would have shown me instantly. Common mistakes seem to be the hardest for me to debug. Learn from the master: don’t discount strict or warnings for even small programs. Now that I’ve said that, you’re going to look for it in the examples in this chapter. Just pretend those lines are there, and the book costs a bit less for the extra half a page that I saved by omitting those lines. Or if you don’t like that, just imagine that I’m running every program with both strict and warnings turned on from the command line: $ perl -Mstrict -Mwarnings program Along with that, I have another problem that bites me much more than I should be willing to admit. Am I editing the file on the same machine I’m running it on? I have login accounts on several machines, and my favorite terminal program has tabs so I can have many sessions in one window. It’s easy to checkout source from a repository and work just about anywhere. All of these nifty features conspire to get me into a situation where I’m editing a file in one window and trying to run it in another, thinking I’m on the same machine. 
If I’m making changes but nothing is changing in the output or behavior, it takes me longer than you’d think to figure out that the file I’m running is not the same one I’m editing. It’s stupid, but it happens. Discount nothing while debugging!

That’s a bit of a funny story, but I included it to illustrate a point: when it comes to debugging, humility is one of the principal virtues of a maintenance programmer. My best bet in debugging is to think that I’m the problem. That way, I don’t rule out anything or try to blame the problem on something else, like I often see in various Perl forums under titles such as “Possible bug in Perl.” When I suspect myself first, I’m usually right. Appendix B is my guide to solving any problem, which people have found useful for at least figuring out what might be wrong even if they can’t fix it.

The Best Debugger in the World

No matter how many different debugger applications or integrated development environments I use, I still find that plain ol’ print is my best debugger. I could load source into a debugger, set some inputs and breakpoints, and watch what happens, but often I can insert a couple of print statements and simply run the program normally. I put braces around the variable so I can see any leading or trailing whitespace:

print "The value of var before is [$var]\n";
#... operations affecting $var;
print "The value of var after is [$var]\n";

I don’t really have to use print because I can do the same thing with warn, which sends its output to standard error:

warn "The value of var before is [$var]";
#... operations affecting $var;
warn "The value of var after is [$var]";

Since I’ve left off the newline at the end of my warn message, it gives me the filename and line number of the warn:

The value of var before is [$var] at program.pl line 123.

If I have a complex data structure, I use Data::Dumper to show it.
It handles hash and array references just fine, so I use a different character, the angle brackets in this case, to offset the output that comes from Data::Dumper:

use Data::Dumper qw(Dumper);

warn "The value of the hash is <\n" . Dumper( \%hash ) . "\n>\n";

Those warn statements showed the line number of the warn statement. That’s not very useful; I already know where the warn is since I put it there! I really want to know where I called that bit of code when it became a problem. Consider a divide subroutine that returns the quotient of two numbers. For some reason, something in the code calls it in such a way that it tries to divide by zero:

sub divide {
    my( $numerator, $denominator ) = @_;

    return $numerator / $denominator;
    }

I know exactly where in the code it blows up because Perl tells me:

Illegal division by zero at program.pl line 123.

I might put some debugging code in my subroutine. With warn, I can inspect the arguments:

sub divide {
    my( $numerator, $denominator ) = @_;

    warn "N: [$numerator] D: [$denominator]";

    return $numerator / $denominator;
    }

I might divide in many, many places in the code, so what I really need to know is which call is the problem. That warn doesn’t do anything more useful than show me the arguments. Although I’ve called print the best debugger in the world, I actually use a disguised form in the carp function from the Carp module, part of the standard Perl distribution. It’s like warn, but it reports the filename and line number from the bit of code that called the subroutine:

#!/usr/bin/perl

use Carp qw(carp);

printf "%.2f\n", divide( 3, 4 );
printf "%.2f\n", divide( 1, 0 );
printf "%.2f\n", divide( 5, 4 );

sub divide {
    my( $numerator, $denominator ) = @_;

    carp "N: [$numerator] D: [$denominator]";

    return $numerator / $denominator;
    }

The output changes to something much more useful.
Not only do I get my error message, but carp adds some information about the line of code that called it, and it shows me the argument list for the subroutine. I see that the call from line 4 is fine, but the call on line 5 is the last one before Perl kills the program:

$ perl show-args.pl
N: [3] D: [4] at show-args.pl line 11
        main::divide(3, 4) called at show-args.pl line 4
0.75
N: [1] D: [0] at show-args.pl line 11
        main::divide(1, 0) called at show-args.pl line 5
Illegal division by zero at show-args.pl line 13.

The carp function is the better-informed version of warn. If I want to do the same thing with die, I use the croak function. It gives the same message as carp, but just like die, croak stops the program once it prints its message.

Doing Whatever I Want

I can change the warn and die functions myself by messing with %SIG. I like to use these to peer into code I’m trying to figure out, but I don’t use these to add features to code. It’s just part of my debugging toolbox. The pseudokeys __WARN__ and __DIE__ hold the functions that perform those actions when I use the warn or die functions. I can use a reference to a named subroutine or an anonymous subroutine:

$SIG{__DIE__} = \&my_die_handler;
$SIG{__DIE__} = sub { print "I'm about to die!" };

Without going through the entire code base, I can change all of the die calls into the more informative croak calls. In this example, I preface the subroutine call with an & and no parentheses to trigger Perl’s feature to pass on the current argument list to the next subroutine call so croak gets all of the arguments I pass:

use Carp;

$SIG{__DIE__} = sub { &Carp::croak };

die "I'm going now!"; # really calls croak now

If I only want to do this for part of the code, I can use local (since %SIG is a special variable always in main::).
My redefinition stays in effect until the end of the scope:

local $SIG{__DIE__} = sub { &Carp::croak };

After either of my customized routines runs, the functions do what they would otherwise do; warn lets the program continue, and die continues its exception processing and eventually stops the program. Since croak reports each level of the call stack and I called it from an anonymous subroutine, I get an artifact in my output:

use Carp;

print "Starting program...\n";

$SIG{__DIE__} = sub {
    local $Carp::CarpLevel = 0;
    &Carp::croak;
    };

foo(); # program dies here

sub foo { bar() }
sub bar { die "Dying from bar!\n"; }

In the stack trace, I see a subroutine call from __ANON__ followed by the subroutine calls I expect to bar() and foo():

Starting program...
Dying from bar! at die.pl line 12
        main::__ANON__('Dying from bar!\x{a}') called at die.pl line 20
        main::bar() called at die.pl line 18
        main::foo() called at die.pl line 16

I change my anonymous subroutine to adjust the position in the stack where croak starts its report. I set the value of $Carp::CarpLevel to the number of levels I want to skip, in this case just 1:

$SIG{__DIE__} = sub {
    local $Carp::CarpLevel = 1;
    &Carp::croak;
    };

Now I don’t see the unwanted output:

Starting program...
Dying from bar! at die.pl line 12
        main::bar() called at die.pl line 18
        main::foo() called at die.pl line 16

For a real-life example of this in action, check out the CGI::Carp module. Lincoln Stein uses the %SIG tricks to redefine warn and die in a web-friendly way. Instead of an annoying “Server Error 500” message, I can get useful error output by simply loading the module. While loading, CGI::Carp sets $SIG{__WARN__} and $SIG{__DIE__}:

use CGI::Carp qw(fatalsToBrowser);
Of course, I don’t recommend that you use this in production code. I don’t want users to see the program’s errors. They can be handy when I have to debug a program on a remote server, but once I figure out the problem, I don’t need it anymore. By leaving it in there I let the public figure out how I’m doing things, and that’s bad for security.{mospagebreak title=Program Tracing} The Carp module also provides the cluck and confess subroutines to dump stack traces. cluck is akin to warn (or carp) in that it prints its message but lets the program continue. confess does the same thing, but like die, stops the program once it prints its mess.# Both cluck and confess print stack traces, which show the list of subroutine calls and their arguments. Each subroutine call puts a frame with all of its information onto the stack. When the subroutine finishes, Perl removes the frame for that subroutine, and then Perl looks on the stack for the next frame to process. Alternately, if a subroutine calls another subroutine, that puts another frame on the stack. Here’s a short program that has a chain of subroutine calls. I call the do_it function, which calls multiply_and_divide, which in turn calls the divide. Now, in this situation, I’m not getting the right answer for dividing 4 by 5. 
In this short example, you can probably spot the error right away, but imagine this is a huge mess of arguments, subroutine calls, and other madness:

#!/usr/bin/perl

use warnings;

use Carp qw(cluck);

print join " ", do_it( 4, 5 ), "\n";

sub do_it {
    my( $n, $m ) = @_;

    my $sum = $n + $m;

    my( $product, $quotient ) = multiply_and_divide(
        [ $n, $m ], 6, { cat => 'buster' }
        );

    return ( $sum, $product, $quotient );
    }

sub multiply_and_divide {
    my( $n, $m ) = @{$_[0]};

    my $product  = $n * $m;
    my $quotient = divide( $n, $n );

    return ( $product, $quotient );
    }

sub divide {
    my( $n, $m ) = @_;

    my $quotient = $n / $m;
    }

I suspect that something is not right in the divide subroutine, but I also know that it’s at the end of a chain of subroutine calls. I want to examine the path that got me to divide, so I want a stack trace. I modify divide to use cluck, the warn version of Carp’s stack tracing, and I put a line of hyphens before and after the cluck() to set apart its output to make it easier to read:

sub divide {
    print "-" x 73, "\n";
    cluck();
    print "-" x 73, "\n";

    my( $n, $m ) = @_;

    my $quotient = $n / $m;
    }

The output shows me the list of subroutine calls, with the most recent subroutine call first (so, the list shows the stack order). The stack trace shows me the package name, subroutine name, and the arguments. Looking at the arguments to divide, I see a repeated 4. One of those arguments should be 5. It’s not divide’s fault after all:

-------------------------------------------------------------------------
 at confess.pl line 68
-------------------------------------------------------------------------
        main::divide(4, 4) called at confess.pl line 60
        main::multiply_and_divide('ARRAY(0x180064c)') called at confess.pl line 49
        main::do_it(4, 5) called at confess.pl line 41
9 20 1

It’s not a problem with divide, but with the information I sent to it. That’s from multiply_and_divide, and looking at its call to divide I see that I passed the same argument twice.
If I’d been wearing my glasses, I might have been able to notice that $n might look like $m, but really isn’t:

my $quotient = divide( $n, $n ); # WRONG
my $quotient = divide( $n, $m ); # SHOULD BE LIKE THIS

This was a simple example, and still Carp had some problems with it. In the argument list for multiply_and_divide, I just get 'ARRAY(0x180064c)'. That’s not very helpful. Luckily for me, I know how to customize modules (Chapters 9 and 10), and by looking at Carp, I find that the argument formatter is in Carp::Heavy. The relevant part of the subroutine has a branch for dealing with references:

package Carp; # This is in Carp/Heavy.pm

sub format_arg {
    my $arg = shift;

    ...
    elsif (ref($arg)) {
        $arg = defined($overload::VERSION) ? overload::StrVal($arg) : "$arg";
        }
    ...

    return $arg;
    }

If format_arg sees a reference, it checks for the overload module, which lets me define my own actions for Perl operations, including stringification. If Carp sees that I’ve somehow loaded overload, it tries to use the overload::StrVal subroutine to turn the reference into something I can read. If I haven’t loaded overload, it simply interpolates the reference in double quotes, yielding something like the ARRAY(0x180064c) I saw before.

The format_arg function is a bit simple-minded, though. I might have used the overload module in one package, but that doesn’t mean I used it in another. Simply checking that I’ve used it once somewhere in the program doesn’t mean it applies to every reference. Additionally, I might not have even used it to stringify references. Lastly, I can’t really retroactively use overload for all the objects and references in a long stack trace, especially when I didn’t create most of those modules. I need a better way.

I can override Carp’s format_arg to do what I need. I copy the existing code into a BEGIN block so I can bend it to my will. First, I load its original source file, Carp::Heavy, so I get the original definition loaded first.
I replace the subroutine definition by assigning to its typeglob. If the subroutine argument is a reference, I pull in Data::Dumper, set some Dumper parameters to fiddle with the output format, then get its stringified version of the argument:

BEGIN {
    use Carp::Heavy;

    no warnings 'redefine';
    *Carp::format_arg = sub {
        package Carp;
        my $arg = shift;

        if( not defined $arg ) { $arg = 'undef' }
        elsif( ref $arg ) {
            use Data::Dumper;
            local $Data::Dumper::Indent = 0; # salt to taste
            local $Data::Dumper::Terse  = 0;
            $arg = Dumper( $arg );
            $arg =~ s/^\$VAR\d+\s*=\s*//;
            $arg =~ s/;\s*$//;
            }
        else {
            $arg =~ s/'/\\'/g;
            $arg = str_len_trim($arg, $MaxArgLen);
            $arg = "'$arg'" unless $arg =~ /^-?[\d.]+\z/;
            }

        $arg =~ s/([[:cntrl:]]|[[:^ascii:]])/sprintf("\\x{%x}",ord($1))/eg;

        return $arg;
        };
    }

I do a little bit of extra work on the Dumper output. It normally gives me something I can use in eval, so it’s a Perl expression with an assignment to a scalar and a trailing semicolon. I use a couple of substitutions to get rid of these extras. I want to get rid of the Data::Dumper artifacts on the ends:

$VAR1 = ... ; # leave just the ...

Now, when I run the same program I had earlier, I get better output. I can see the elements of the anonymous array that I passed to multiply_and_divide:

--------------------------------------------------------------------------
 at confess.pl line 65
        main::divide(4, 4) called at confess.pl line 57
        main::multiply_and_divide([4,5]) called at confess.pl line 46
        main::do_it(4, 5) called at confess.pl line 38
9 20 1

The best part of all of this, of course, is that I only had to add cluck in one subroutine to get all of this information. I’ve used this for very complex situations with lots of arguments and complex data structures, giving me a Perl-style stack dump. It may be tricky to go through, but it’s almost painless to get (and to disable, too).

Safely Changing Modules

In the previous section I changed &Carp::format_arg to do something different.
The general idea is very useful for debugging since I’m not only going to find bugs in the code that I write, but most often in the modules I use or in code that someone else wrote. When I need to debug these things in other files, I want to add some debugging statements or change the code somehow to see what happens. However, I don’t want to change the original source files; whenever I do that I tend to make things worse no matter how careful I am to restore them to their original state. Whatever I do, I want to erase any damage I do and I don’t want it to affect anyone else.

I do something simple: copy the questionable module file to a new location. I set up a special directory for the debugging session just to ensure that my mangled versions of the modules won’t infect anything else. Once I do that, I set the PERL5LIB environment variable so Perl finds my mangled version first. When I’m done debugging, I can clear PERL5LIB to use the original versions again.

For instance, I recently needed to check the inner workings of Net::SMTP because I didn’t think it was handling the socket code correctly. I choose a directory to hold my copies, in this case ~/my_debug_lib, and set PERL5LIB to that path. I then create the directories I need to store the modified versions, then copy the module into it:

$ export PERL5LIB=~/my_debug_lib
$ mkdir -p ~/my_debug_lib/Net/
$ cp `perldoc -l Net::SMTP` ~/my_debug_lib/Net/.

Now, I can edit ~/my_debug_lib/Net/SMTP.pm, run my code to see what happens, and work toward a solution. None of this has affected anyone else. I can do all the things I’ve already shown in this chapter, including inserting confess statements at the right places to get a quick dump of the call stack. Every time I wanted to investigate a new module, I copied it into my temporary debugging library directory.

Please check back next week for the conclusion to this article.
http://www.devshed.com/c/a/perl/debugging-perl/2/
Movement detection and RF comms are the last two components for the minimum setup of a node in DomPi. The RF comms allow each of the remote nodes (Arduino based) to send and receive data from the Command Center (Raspberry Pi 3 based). The movement detection enables the project to build up three additional features: the alarm, determining presence at home and automatic lights. I will start with the easier component, the movement detection, but first, let me share the features I will be developing in this post:

Movement detection - PIR sensor

This component shall help the Command Center (CC) to deliver the three features mentioned above, meaning that the remote nodes will just update the CC on the status - is motion detected or not - and it will be the CC who decides what action to trigger. The "intelligence" will reside in the CC. Let's see the details of the hardware and software pieces.

Hardware and wiring

The sensor I will leverage for detecting movement at home, in the garden or garage is the HC-SR501, a passive infrared sensor. It works at +5V source and at +3.3V TTL. I have connected its Data pin (see picture below) to pin 2 of the Arduino Nano. The sensor allows some configuration via hardware:

- sensing distance: with the potentiometer T2 you can adjust the distance from 3 to 7 meters (10 to 23 ft). I will set up all of the PIRs at their maximum distance,
- trigger approach: you can select the repetitive triggering (the output remains high while it detects presence in its range) or the non-repetitive (after some seconds the output goes low and the sensor starts scanning again). This can be selected via the jumpers L (non-repetitive) and H (repetitive). I will set up all of the PIRs in the non-repetitive position.
There is no special reason for this, but the main usage of the PIRs at the beginning will be for the alarm, and with the non-repetitive mode it will be easier to avoid false positives - in the end I doubt that a burglar would stand still, not moving, for minutes and minutes at home... If however I get a false positive - e.g. a bad initial reading of the PIR - the output will go down after a few seconds, allowing the CC to interpret this as a false alarm.

- time delay adjust: you can adjust the seconds it waits before forcing a low output, between 5s and 300s. I'm adjusting it almost to the minimum, around a 5-10s delay

Pictures sources: PIR1, PIR2

An important note is that this sensor has a broad sensing angle, 110º, making it a good fit for DomPi where I want to control rooms and corridors, but maybe not that suitable if you intend to cover a narrower space. An interesting note on the PIR is that it is a passive sensor "that measures infrared (IR) light radiating from objects in its field of view. (...) the temperature (...) in the sensor's field of view will rise from room temperature to body temperature. The sensor converts the resulting change in the incoming infrared radiation into a change in the output voltage, and this triggers the detection", source Wikipedia.

Software

There are two main alternatives to detect movement: with interrupts and by periodically polling the sensor. I did start with the interrupts, and attached an interrupt to be called each time there was a change in the PIR status (from nothing detected to something detected and the other way round). Below is part of the code that would allow the first approach. With the Arduino Nano you can attach interrupts to pins 2 and 3 - in my case the PIR is on pin 2.

#define PIR_PIN_LIVING 2
int PIRstatus = 0;
...
pinMode(PIR_PIN_LIVING, INPUT);
attachInterrupt(digitalPinToInterrupt(PIR_PIN_LIVING), processPIRchange, CHANGE);
...
void processPIRchange() {
  PIRstatus = digitalRead(PIR_PIN_LIVING);
}

After several tests and a couple of hours invested, I realized that the interrupts were not working properly. It did detect me correctly, but the rest of the code was misbehaving. Right now I have several components in the living room node that I use for testing: the PIR component, the RF light control, the environment measurement, the IR receiver (see posts 2 and 3) and the RF 2.4GHz piece. My feeling is that some other library may be using the interrupts and it is not getting along well with my PIR. The responsiveness of the node went down dramatically, making it not fit for purpose... So I decided to avoid interrupts and go with the second best option, polling the sensor periodically. For the DomPi project, in terms of usability, it won't change anything: the main loop is fast enough to read the PIR quickly and detect any movement in time. Let me share with you the complete code in the next post, hopefully some minor issues will be sorted out!

RF 2.4GHz comms - the NRF24L01

This is a key component of DomPi. Since there will be five remote nodes (living room, two in the bedrooms, garden and garage), the best solution is to connect them with the Command Center via wireless - I don't see myself making holes to reach the garage in my building of flats... I could use some WiFi dongle for the nodes, but the solution would not be that light nor cheap. Potentially, an RF 433MHz transmitter and receiver per node could do the trick. I finally opted for the NRF24L01: it is a transceiver (it transmits and receives with the same circuitry) and there are very nice libraries allowing a sort of RF network.

Hardware and wiring

The first thing to note is that there are some known problems with powering this chip directly from Arduinos such as the Nano or Mega that I intend to use in DomPi. Since the power of these is limited to 50mA, they may not deliver enough current to support the NRF24L01.
There are a couple of workarounds to solve it: use an independent power unit, put two capacitors (10uF and 0.1uF) between its Vcc and Gnd, or insert a base module between the Arduino and the NRF24L01 to power it up. To avoid further delays in the project, I will start by using the base module (see pic below). The good news is that the NRF24L01 seems to work ok with the Raspberry Pi.

Pictures: NRF24_1, Base_module, NRF24+Base_module

The wiring is not very complex but some care is needed. There are 8 pins on the base; one of them, the IRQ, is not required for the DomPi project. Of the other 7, we have Gnd and Vcc, which should go to +5V - note that if you don't use the base module then you have to connect the NRF24 to +3.3V or it can be damaged. The 5 remaining pins allow SPI communication with the Arduino; the downside is that 3 of the pins are fixed and you need to connect them to the right pins on the Arduino. These are MISO, MOSI and SCK, which on the Nano should connect to pins 12, 11 and 13 respectively. I foresee some difficulties when working on the Garage node, as the TFT base will connect to these pins and it will be difficult to physically access them. The last 2 pins, CE and CSN, can be selected via software when creating the object. In my case I left them at 7 and 8 respectively. I found this page very helpful while setting up the module.

Software

To use the NRF24L01, besides the Arduino SPI.h library, I am leveraging the library written by TMRh20. The great thing about this library, besides the support and forums you can get, is that I can use it on the Arduinos as well as on the Raspberry Pi, so... it is a great fit for DomPi! As usual, to use the C++ library in Arduino, you just need to import the .zip file via the Arduino IDE (menu Sketch->Include Library->Add zip library); alternatively you can just paste the uncompressed folder into the Arduino->libraries folder and next time you start the IDE, there it will be.
A bit below I have included part of the Arduino code related to this component. It follows the examples included in the library: I use two objects, the RF24 to set up the radio functions, and the RF24Network which enables a network based on the radio object. There are two interesting points in the code. First, you need to set the id of the parent_node - the node this one will talk to - and also the id of this node. For DomPi I have reserved the following ids:

- 0 for the Command Center, RPi 3
- 1 for the kids' bedroom
- 2 for the parents' bedroom
- 3 for the living room
- 4 for the garden
- 5 for the garage

The second point is lines 13 to 27. They define the message structure that will be sent to the master or received from it. Since these are the remote nodes, they will send the temperature, humidity, luminosity, motion status and one more char for future expansion. On the other hand, the remote nodes will receive from the CC a command - which determines what needs to be done, like "turn on light" - and an info char - with additional information like the number of the light to execute the command on. With each loop, we update the network and if there is a packet received from the Command Center we call the function receive_data() to process it.
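Looking ahead to the Raspberry Pi side, the Command Center will have to unpack exactly this byte layout. Here is a possible sketch in Python using the standard struct module (the helper names are mine; it assumes the Arduino packs the struct little-endian with no padding, which holds for the 8-bit AVR boards used here):

```python
import struct

# Layout of the node's outgoing message: int16 temperature (value x100),
# then humidity, light, motion and dooropen as unsigned bytes.
MESSAGE_1 = struct.Struct("<h4B")  # little-endian, no padding, 6 bytes total

def decode_message_1(payload):
    temp_x100, humidity, light, motion, dooropen = MESSAGE_1.unpack(payload)
    return {
        "temperature": temp_x100 / 100.0,  # undo the x100 trick used to avoid floats
        "humidity": humidity,
        "light": light,
        "motion": motion,
        "dooropen": dooropen,
    }

# Example payload: 23.45 C, 40% humidity, light level 200, motion detected, door closed
raw = MESSAGE_1.pack(2345, 40, 200, 1, 0)
print(decode_message_1(raw))
```

The actual radio reads on the Pi would come from the TMRh20 library, which also offers Raspberry Pi support; this sketch only covers turning the received bytes back into values.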
#include <RF24Network.h>
#include <RF24.h>
#include <SPI.h>

// Radio with CE & CSN connected to pins 7 & 8
RF24 radio(7, 8);
RF24Network network(radio);

// Constants that identify this node and the node to send data to
const uint16_t this_node = 3;
const uint16_t parent_node = 0;

struct message_1 {              // Structure of our message to send
  int16_t temperature;          // Temperature is sent as int16: I multiply the float value of the
                                // sensor by 100; this way I avoid transmitting floats over RF
  unsigned char humidity;
  unsigned char light;
  unsigned char motion;
  unsigned char dooropen;
};
message_1 message_tx;

struct message_action {         // Structure of our message to receive
  unsigned char cmd;
  unsigned char info;
};
message_action message_rx;

RF24NetworkHeader header(parent_node);  // The network header initialized for this node

void setup() {
  // Initialize all radio related modules
  SPI.begin();
  radio.begin();
  delay(5);
  radio.setPALevel(RF24_PA_MAX);    // This can lead to power issues:
                                    // use radio.setPALevel(RF24_PA_LOW); if there are problems
  delay(5);
  radio.setChannel(108);            // Set channel above the WiFi channels
  delay(5);
  radio.setDataRate(RF24_250KBPS);  // Decrease speed and improve range. Other values: RF24_1MBPS and RF24_2MBPS
  delay(5);
  network.begin(90, this_node);
}

void loop() {
  // Update network data
  network.update();

  // Receive RF data
  while (network.available()) {
    receive_data();
  }

  // Additional code for IR, sensor measurement, etc.
}

There are also three important notes.

- Line 39 sets the wireless channel to 108. This module operates at 2.4GHz, which is the same band as IEEE 802.11, the standard WiFi at home (besides the 5GHz band). By selecting channel 108, we should be out of the WiFi band and therefore have less interference.
- Line 36 sets the emitting power to the maximum that the NRF24L01 can provide. Although this looks like the right thing to do, due to the power issues mentioned before, the standard recommendation is to set it to the minimum power.
In my case, with the base module it seems to be OK with the max power.
- The setDataRate call sets the transmission speed to the lowest one allowed. The first reason is that we are transmitting just a few bits, below 100 bits including any headers, and the project does not require responses down to the microsecond. The second reason is the Shannon theorem. There is a lot of maths and communications theory behind it, but the summary is that the higher the speed, the "cleaner" the environment has to be (higher signal-to-noise ratio, SNR). In general, the further apart both nodes are, the "dirtier" the environment is (lower SNR) and therefore the lower the maximum speed. If we force a higher speed than that maximum, we lose information and the nodes won't communicate with each other. By selecting the minimum speed possible we can achieve longer distances or cope with a "dirtier" environment. Since I want to communicate with the garage, the SNR will be quite low, so it makes sense to decrease the speed. PS: This is also the reason why it takes a long time to transmit data and pictures from the spacecraft that visit the planets (like Pluto recently): the SNR is so low that, according to the Shannon theorem, the speed needs to be really low - sorry for the digression.
All in all, I am testing the communications with a second Arduino and after some fine tuning it is working quite well. I'm curious to check whether I can communicate with the garage! Hope to be able to share the complete code of the living room in the next post!
Nodes' Dashboard
After this week, most of the basic components of the living room node are finished and several of them will be easily replicable in all the remote nodes, so hope to see more greens coming in shortly! Any comments, suggestions or ideas are more than welcome!
https://community.element14.com/challenges-projects/design-challenges/pi-iot/b/blog/posts/piiot---dompi-04-movement-detection-and-rf2-4ghz-comms
I want to create multiple users in Django, and I want to know which method would be best.

class Teachers(models.Model):
    user = models.ForeignKey(User)
    is_teacher = models.BooleanField(default=True)
    .......

or should I use

class Teacher(User):
    is_teacher = models.BooleanField(default=True)
    .......

or do I have to make a custom user model? Which is the right approach for creating multiple types of users?

Django doesn't have multiple users – it only has one user, and then based on permissions users can do different things. So, to start off with – there is only one user type in Django. If you use the default authentication framework, the model for this user is called User, from django.contrib.auth.models. If you want to customize user behavior in Django, there are three things you can do:
Customize how you authenticate them. By default, authentication is done using a database where passwords are stored. You can authenticate against Facebook/Google etc. or against your existing user database – for example, with ActiveDirectory if you are on a Windows network.
Create custom permissions, and based on these permissions, restrict what functions users can execute. By default, on every model Django will add the basic permissions "can edit", "can delete", "can read". You can create your own and then check if the user has these specific permissions.
Store extra information about the user, along with whatever is normally stored by Django. There are two ways to do this, depending on how much customization you need. If everything Django provides by default works for you, and all you want to do is store extra information about the user, you can extend the user model – in previous versions this was called creating a custom profile. The other option is to create your own User model, if you want deeper customization. The most common use of a custom user model is if you want to use an email address as the username.
You don't have to do all three; in fact, sometimes all you want to do is store some extra information or have users authenticate using their email address. In some applications you have to modify all three places. In your case, since all you want to do is store extra information about a user, you would need to extend the user model by creating a model that references User (note: you don't inherit from User):

class Profile(models.Model):
    user = models.OneToOneField(User)
    department = models.CharField(max_length=200, default="Computer Science")
    is_teacher = models.BooleanField(default=False)
    is_student = models.BooleanField(default=True)
    # .. etc. etc.

One approach I was following with Django 1.7 (works with 1.6 too) is to subclass AbstractUser:

from django.db import models
from django.contrib.auth.models import AbstractUser

class User(AbstractUser):
    balance = models.DecimalField(default=0.0, decimal_places=2, max_digits=5)

To use your model you need to set it as the one used for authentication in settings.py:

AUTH_USER_MODEL = 'your_app.User'

Also note that you will now have to use settings.AUTH_USER_MODEL when referencing your new User model in a relation in your models:

from django.db import models
from django.conf import settings

class Transaction(models.Model):
    user = models.ForeignKey(settings.AUTH_USER_MODEL)  # ForeignKey(User) will not work
https://techstalking.com/programming/python/django-best-approach-for-creating-multiple-type-users/
Solutions will be available when this assignment is resolved, or after a few failing attempts. Time is over! You can keep submitting your assignments, but they won't count toward the score of this quiz.

Make Some Cars

Create a class Car and create two instances, car1 and car2. Then set three attributes for the instances: color, make, and model. Set whatever value you want for those attributes, but make sure they're different from car1 to car2 (i.e. car1 can't have the same color as car2).

Test Cases

test car1 attributes - Run Test

def test_car1_attributes():
    assert isinstance(car1, Car) is True, 'car1 is not created'
    assert isinstance(car1, object) is True
    assert hasattr(car1, 'color') is True, "car1 doesn't have a color attribute"
    assert hasattr(car1, 'make') is True, "car1 doesn't have a make attribute"
    assert hasattr(car1, 'model') is True, "car1 doesn't have a model attribute"

test attributes are different - Run Test

def test_attributes_are_different():
    assert car1.color != car2.color
    assert car1.make != car2.make
    assert car1.model != car2.model

test car2 attributes - Run Test

def test_car2_attributes():
    assert isinstance(car2, Car) is True, 'car2 is not created'
    assert isinstance(car2, object) is True
    assert hasattr(car2, 'color') is True, "car2 doesn't have a color attribute"
    assert hasattr(car2, 'make') is True, "car2 doesn't have a make attribute"
    assert hasattr(car2, 'model') is True, "car2 doesn't have a model attribute"
https://learn.rmotr.com/python/base-python-track/intro-to-oop/make-some-cars
wcsrchr - Man Page

search a wide character in a wide-character string

Synopsis

#include <wchar.h>

wchar_t *wcsrchr(const wchar_t *wcs, wchar_t wc);

Description

The wcsrchr() function is the wide-character equivalent of the strrchr(3) function. It searches the last occurrence of wc in the wide-character string pointed to by wcs.

Return Value

The wcsrchr() function returns a pointer to the last occurrence of wc in the wide-character string pointed to by wcs, or NULL if wc does not occur in the string.

Attributes

For an explanation of the terms used in this section, see attributes(7).

Conforming to

POSIX.1-2001, POSIX.1-2008, C99.

See Also

strrchr(3), wcschr(3)

Colophon

This page is part of release 5.13 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at

Referenced By

signal-safety(7), strchr(3), wcschr(3).
https://www.mankier.com/3/wcsrchr
If you are interested in Arduino, you have possibly heard about matrix keypads. They are widely used for entering numeric data in Arduino projects. An example of a matrix keypad (which I have) is here, and the official Arduino library for matrix keypads is here. I didn't like the library provided by Arduino, because it is unnecessarily complicated. A matrix keypad is actually very simple. As an example, assume that you have a "4 rows X 3 columns" keypad. It has 12 keys and 7 pins. These pins correspond to rows and columns. You can find the datasheet for my keypad here. Keys and their corresponding <rowpin, columnpin> pairs for my keypad are the following:

Key: RowPin + ColumnPin
1: 2 + 3
2: 2 + 1
3: 2 + 5
4: 7 + 3
5: 7 + 1
6: 7 + 5
7: 6 + 3
8: 6 + 1
9: 6 + 5
*: 4 + 3
0: 4 + 1
#: 4 + 5

When you press a key, let's say "1", its corresponding pins are shorted. That means that when you signal the row pin of the key, you read the same value from the column pin of that key, and vice versa. If you press "1" on the keypad, you will see that pins 2 & 3 are shorted. As you notice, keys [1..3] share the same row pin, which is pin 2, and keys [1,4,7,*] share the same column pin, which is pin 3. Your keypad may differ in terms of number of keys, pins and their mapping onto keys. Therefore, if you have one and don't have any documentation, you should first find out the relation between keys & pins by using a multimeter. In this section, I will try to explain how to connect your matrix keypad to the Arduino board by taking my keypad as an example. Your keypad structure may differ from mine, but the same rules apply. I have 7 pins on my keypad and therefore I have dedicated 7 digital pins of my Arduino board, starting from 2 to 8. First, connect the row pins of the keypad to the Arduino. But pay attention to the order of pins. Row1 pin (which is pin 2) of the keypad will be connected to digital pin 2 of the Arduino, and so on.
All connections for my keypad:

Keypad Pin => Arduino Digital Pin
2 (row1) => 2
7 (row2) => 3
6 (row3) => 4
4 (row4) => 5
3 (col1) => 6
1 (col2) => 7
5 (col3) => 8

Now you are ready to test the library. Don't forget that the default configuration of the library matches my keypad, as explained just above. If your keypad is different from mine, your hardware setup and software configuration must be adapted. How to make a new software configuration is explained below the basic usage.

What do you expect from a matrix keypad? Simple. Which key is pressed, which key is released? Let's start. When you place the library in the libraries folder of your Arduino IDE, you are ready to use it. First, include the library and create an instance.

#include <Keypad2.h>

Keypad2 kp(1); // create default keypad

In the setup, specify your callback function for the press & release events.

void setup() {
  Serial.begin(9600);                    // initialize Serial
  kp.setPressCallback(kpPressCallback);  // set keypad callback function
}

And the callback function implementation:

void kpPressCallback(char c, uint8_t state) {
  Serial.print(c);
  Serial.print(" ");
  Serial.print(state == KP2_KEYUP ? "up" : "down");
  Serial.println();
}

Don't forget to monitor your keypad in the main loop.

void loop() {
  kp.update();
}

Your keypad may not have the same layout (number of keys, pins, key&pin relations) as the default one, of course. In this case, you have to provide a new configuration by calling the setLayout function.
Its prototype in the header file:

// initialize hardware layout, both keypad & arduino
// if not called, default layout should be used in constructor
// parameters:
// 1- row count
// 2- column count
// 3- keypad layout
// 4- key states (2 bits for each key)
// 5- row pins
// 6- column pins
// 7- internal pullup usage (0 for external pullups)
void setLayout(int rowc, int colc, char **layout, uint8_t *keyState, \
               int *rowp, int *colp, uint8_t useInternalPullup);

And the default configuration of the library (which describes my keypad):

// default layout
const char kp2DefaultLayout[4][3] = {
  {'1','2','3'},
  {'4','5','6'},
  {'7','8','9'},
  {'*','0','#'}
};
// 2 bits for each key to store its state
// 12*2/8 = 3 bytes for default layout
const uint8_t kp2DefaultStateHolder[3] = {0, 0, 0};
const int kp2RowPins[4] = {2,3,4,5}; // connect to keypad pins 2,7,6,4 respectively
const int kp2ColPins[3] = {6,7,8};   // connect to keypad pins 3,1,5 respectively
// end of default layout

When you call the constructor with 1 (true), as we did at the beginning of the section, what is done behind the scenes is a call to the setLayout function with the default parameters:

if(useDefaultLayout) {
  this->setLayout(4, 3, (char **)kp2DefaultLayout, (uint8_t *)kp2DefaultStateHolder, \
                  (int *)kp2RowPins, (int *)kp2ColPins, 1);
}

If your configuration is different from the default one, you have to call the setLayout function with proper parameters. What I would do in such a case is:
- copy the default values from the header file & paste them into my code
- edit the default values according to the new keypad
- rename the parameter names

Let's study an example. Assume that our keypad is a matrix keypad with hexadecimal values as keys. A 4x4 hexadecimal matrix keypad would be something like this:

1 | 2 | 3 | 4
5 | 6 | 7 | 8
9 | 0 | A | B
C | D | E | F

Therefore there must be 8 pins for 4 rows & 4 columns.
Let's say the relation between pins and rows&columns: Pin | RowCol 1 | row1 2 | row3 3 | row2 4 | row4 5 | col2 6 | col4 7 | col1 8 | col3 We have to dedicate 8 digital pins on Arduino board. Let's say pins [2..9] are dedicated. Our parameters should be: // hex keypad layout const char layout[4][4] = { {'1','2','3','4'}, {'5','6','7','8'}, {'9','0','A','B'}, {'C','D','E','F'} }; // 2 bits for each key to store its state // 16*2/8 = 4 bytes for states const uint8_t stateHolder[4] = {0, 0, 0, 0}; const int rowPins[4] = {2,3,4,5}; // connect to keypad pins 1,3,2,4 respectively const int colPins[4] = {6,7,8,9}; // connect to keypad pins 7,5,8,6 respectively // end of hex keypad layout We have prepared our hardware, made the connections, prepared layout configuration, and we are ready to create an instance: #include <Keypad2.h> Keypad2 kp(0); // will be configured as hex keypad In the setup function, call setLayout: this->setLayout(4, 4, (char **)layout, (uint8_t *)stateHolder, \ (int *)rowPins, (int *)colPins, 1); That is it. You are ready to use your hex keypad. As a note, stateHolder is for internal usage. You just give some memory to the library to trace the keys. stateHolder I have seen many discussions about pullups and where to put them on the circuitry related to matrix keypads. Actually, you don't need them, just use built-in pullups! The algorithm for the key-press detection is 'scanning rows&columns'. As I said in the introduction section, row and column pins are shorted when you press a key. By setting row pins as INPUT and column pins as OUTPUT, one can try signalling each column and read from each row. If you read the same value from the row, that means their corresponding key is pressed, else released. In fact, I could use dynamic memory allocation for the state holder to lessen the number of parameters passed to setLayout function. But I don't use this technique if I don't really need it. 
It is very error-prone and if you make a mistake, very hard to detect. What I hate about a matrix keypad is that it is an enemy for your Arduino board pins. You need to allocate too many digital pins to use it. It would be very nice to have a serial version of it, which informs my Arduino when a key is pressed. Do you.
http://www.codeproject.com/Articles/710475/Using-matrix-keypad-with-Arduino
CC-MAIN-2015-48
refinedweb
1,361
68.1
std::shared_ptr::shared_ptr Constructs new shared_ptr from a variety of pointer types that refer to an object to manage. An optional deleter d can be supplied that is later used to destroy the object when no shared_ptr objects own it. By default, a delete-expression for type Y is used as the deleter. shared_ptrwith no managed object, i.e. empty shared_ptr shared_ptrwith ptras the pointer to the managed object. Ymust be a complete type and ptrmust be convertible to T*. Additionally: das the deleter. Deletermust be callable for the type Y, i.e. d(ptr) must be well formed, have well-defined behavior and not throw any exceptions. Deletermust be CopyConstructible. The copy constructor and the destructor must not throw exceptions. allocfor allocation of data for internal use. Allocmust be a Allocator. The copy constructor and destructor must not throw exceptions. shared_ptrwith no managed object, i.e. empty shared_ptr. shared_ptrwhich shares ownership information with r, but holds an unrelated and unmanaged pointer ptr. Even if this shared_ptris the last of the group to go out of scope, it will call the destructor for the object originally managed by r. However, calling get()on this will always return a copy of ptr. It is the responsibility of the programmer to make sure that this ptrremains valid as long as this shared_ptr exists, such as in the typical use cases where ptris a member of the object managed by ror is an alias (e.g., downcast) of r.get() shared_ptrwhich shares ownership of the object managed by r. If rmanages no object, *thismanages no object too. This overload doesn't participate in overload resolution if Y*is not implicitly convertible to T*. shared_ptrfrom r. After the construction, *this contains a copy of the previous state of r, ris empty. This overload doesn't participate in overload resolution if Y*is not implicitly convertible to T*. shared_ptrwhich shares ownership of the object managed by r. Y*must be convertible to T*. 
Note that r.lock() may be used for the same purpose: the difference is that this constructor throws an exception if the argument is empty, while std::weak_ptr<T>::lock() constructs an empty std::shared_ptrin that case. shared_ptrthat stores and owns the object formerly owned by r. Y*must be convertible to T*. After construction, ris empty. shared_ptrwhich manages the object currently managed by r. The deleter associated to ris stored for future deletion of the managed object. rmanages no object after the call. If Dis a reference type, equivalent to shared_ptr(r.release(), std::ref(r.get_deleter()). Otherwise, equivalent to shared_ptr(r.release(), r.get_deleter()) [edit] Notes When constructing a shared_ptr from a raw pointer to an object of a type derived from std::enable_shared_from_this, the constructors of shared_ptr update the private weak_ptr member of the std::enable_shared_from_this base so that future calls to shared_from_this() would share ownership with the shared_ptr created by this raw pointer constructor. Constructing a shared_ptr using the raw pointer overload for an object that is already managed by a shared_ptr leads to undefined behavior, even if the object is of a type derived from std::enable_shared_from_this (in other words, raw pointer overloads assume ownership of the pointed-to object). 
[edit] Parameters [edit] Exceptions [edit] Example #include <memory> #include <iostream> struct Foo { Foo() { std::cout << "Foo...\n"; } ~Foo() { std::cout << "~Foo...\n"; } }; struct D { void operator()(Foo* p) const { std::cout << "Call delete for Foo object...\n"; delete p; } }; int main() { { std::cout << "constructor with no managed object\n"; std::shared_ptr<Foo> sh1; } { std::cout << "constructor with object\n"; std::shared_ptr<Foo> sh2(new Foo); std::shared_ptr<Foo> sh3(sh2); std::cout << sh2.use_count() << '\n'; std::cout << sh3.use_count() << '\n'; } { std::cout << "constructor with object and deleter\n"; std::shared_ptr<Foo> sh4(new Foo, D()); } } Output: constructor with no managed object constructor with object Foo... 2 2 ~Foo... constructor with object and deleter Foo... Call delete for Foo object... ~Foo...
http://en.cppreference.com/w/cpp/memory/shared_ptr/shared_ptr
CC-MAIN-2015-35
refinedweb
653
58.99
Hi, I am still new to C++ and I am having a little trouble compiling the following code, and couldn't figure out what went wrong: #include <iostream.h> #include <string.h> using namespace std; float x = 5.0f; int _65Num = 65; int mian() { string str = "Hello World!"; cout << str << endl; cout << x << "" << str << endl; cout << "65Num = " << _65Num << endl; } The question ask me to identify the problem which I tried to correct as much as I can but I still get some problem while compiling... The original code was this: #include <iostream.h> #include <string.h> int mian() { string str = "Hello World!" cout << str << endl; cout << float x = 5.0f * str << end; int 65Num = 65; cout << "65Num = " < 65Num << endl; } Hope someone can point me in the right direction.
https://www.daniweb.com/programming/software-development/threads/266387/compile-error
CC-MAIN-2018-22
refinedweb
128
84.57
SYNOPSIS #include <pthread prior- ity of a thread.) RETURN VALUE On success, this function returns 0; on error, it returns a non-zero error number. If pthread_setschedprio() fails, the scheduling priority of thread is not changed. ERRORS EINVAL prio is not valid for the scheduling policy of the specified thread. EPERM The caller does not have appropriate privileges to set the spec- ified priority. ESRCH No thread with the ID thread could be found. POSIX.1-2001 also documents an ENOTSUP ("attempt was made to set the priority to an unsupported value") error for pthread_setschedparam(). VERSIONS This function is available in glibc since version 2.3.4. CONFORMING TO POSIX.1-2001. NOTES For a description of the permissions required to, and the effect of, changing a thread's scheduling priority, and details of the permitted ranges for priorities in each scheduling policy, see sched_setsched- uler(2). SEE ALSO getr(3), pthreads(7) COLOPHON This page is part of release 3.23 of the Linux man-pages project. A description of the project, and information about reporting bugs, can
http://www.linux-directory.com/man3/pthread_setschedprio.shtml
crawl-003
refinedweb
179
58.48
Introduction to Break in Scala Break in scala is a programming language that has seamlessly combined the object-oriented paradigm with the functional paradigm. The name Scala was coined from combining the two terms SCAlable and LAnguage. But one of the aspects to note about the language is the exclusion of the most basic flow constructs like Break and Continue. Do you have any idea why the Scalable Language has done this? No, you say! Well, worry not, In this article, we touch upon exactly this. We try to analyze how to use break if we need to in our program with some examples. Later we will also touch upon the design behind not adding break as an implicit flow constructs in the language. So although the language does not support the Break keyword as a language construct we can use the util.control.Breaks._ package to implement similar functionality. Let us take a look at the syntax for implementing break, Syntax: import util.control.Breaks._ breakable { <LOOP> { <statements> <condition> break; //Conditionally break out of loop } } As seen from the syntax few things that are important here are, - Importing the package and, - enclosing the loop in the breakable function. - Also, the obvious… a break statement. A fun fact btw, the break statement here is a function call. But more on that later. Flow Diagram Moving forward, Let us now try to further understand by looking at how the execution works with the help of a flowchart. The Flowchart below figure depicts how a normal for loop functions. It starts with an initialization block. Below the block is a decision box that verifies if the specified conditions are met. Based on the output of the decision box one of the two routes is selected. We are interested in what happens inside the “In loop code execution” block. A dotted line from the block represents a change of program flow due to the break statement. The below figure shows the internals of how the “In loop code execution” functions in Scala. 
In the flowchart from Figure 2, there is a call to break function which in turn raises an exception. This exception is then handled by the breakable which alters the program flow. Alternatively, In Java break is a predefined construct in the language and instead of throwing an exception it will change the code flow similar to how goto does. As seen in the flowchart control from the decision box (in Figure 1) comes to A in Figure 2. Also, control from B goes to the “Out of loop code execution” block above figure. Working Let us further look at how the break works in Scala by looking at the function internals. Breakable is a function that has a case to handle a breakException. Also, the break function throws a breakException. Code: def breakable(op: => Unit) { try { op } catch { case ex: BreakControl => if (ex ne breakException) throw ex } } def break(): Nothing = { throw breakException } One might start to wonder why is this done in the first place. Almost all languages have a break statement inbuilt in the language. So why have the language developers decided to not have break as a part of the arsenal? Well, there are a few reasons for that, - other language constructs. - Also, they create problems while interacting with closures. - They do not mesh well with the function literals. “You can read more on this in ‘Programming in Scala, Second Edition’, by Martin Odersky, Lex Spoon, and Bill Venners.” Examples of Break in Scala Let us take a look at a few examples to get an understanding of how this is done. Example #1 This example demonstrates the usage of a break in a single loop. Code: import scala.util.control.Breaks._ object BreakExample extends App { breakable { for (index <- 1 to 10) { println(index) if (index >= 5) break } } } Output: Explanation: As seen in the example, the for-loop is enclosed in the breakable block. 
For every iteration of the for loop, if loop checks if the index is less than or equal to 5, if the condition is valid then a call to break will be made which makes the program exit from the for loop. Output will be all the values of the index from 1 to 4. Example #2 This example demonstrates the use of break-in Nested loops. Code: import scala.util.control.Breaks._ object BreakTests extends App { val listA = List(1, 2, 3); val listB = List(5, 10, 15); var itemA = 0; var itemB = 0; breakable { for (itemA <- listA) { println("" + itemA); breakable { for (itemB <- listB) { println("- " + itemB); if (itemB == 10) break; } } } } } Output: Explanation: The program encounters a break statement inside the inner for loop. The usage is pretty similar to that in Example 1. In this case, when a break is encountered, the control will only exit the inner loop and not the outer loop. The functioning of the outer for loop will continue as expected. Observing the output makes it clear that whenever the value of itemB is 10 the for loop will be exited. Outer for loop goes through a complete iteration. Example #3 How the break statement works in a while loop explained below. Code: import scala.util.control.Breaks._ object BreakTests extends App { var index = 0; var sum = 0 breakable { while (index < 1000) { sum += index index +=1 println(sum); if (sum > 10) break; } } } Output: Explanation: This example is similar to Example 1, except a while loop has been used here to demonstrate working of break inside a while loop. Here a sum is computed and whenever the sum variable value goes above 10, a break is initiated. Conclusion So, in conclusion, Scala does not encourage to use explicit break statements. There are a lot of different mechanisms of how we can avoid using break statements. However, if the break is unavoidable you can use the Break package to get the task done. Recommended Articles This is a guide to Break in Scala. 
Here we discuss the syntax, flowchart, and working of break in scala along with the examples and its implementation. You may also look at the following articles to learn more-
https://www.educba.com/break-in-scala/
CC-MAIN-2021-25
refinedweb
1,027
63.59
On Wed, Apr 02, 2003 at 07:32:18PM -0500, Aahz wrote: > In article <3E8B7D49.829034FF at engcorp.com>, > Peter Hansen <peter at engcorp.com> wrote: > > > >I hope Greg's comments about being able to disable this are on the > >mark, because otherwise this would be a clear case of a desire for > >(unnecessary, IMHO) optimization being given higher priority than > >maintaining one of Python's strongest advantages. Speed will *never* > >be Python's strongest suit, but its dynamicism clearly is; let's not > >bugger that up! > > Maybe I'm misunderstanding something, but I was under the impression > that disabling the ability to munge another module's namespace was at > least partly driven by the desire to enable secure code. I don't think this was a consideration. 13 days, 22:00, 4 users, load average: 1.56, 1.51, 1.46
http://mail.python.org/pipermail/python-list/2003-April/203150.html
CC-MAIN-2013-20
refinedweb
142
63.9
Posted 21 Aug 2012 Link to this post Posted 23 Aug 2012 Link to this post I believe that the best approach here would be to create your own theme based on Office Black. You have to copy the files you need and modify it as described in our documentation. Since you want to change something contained in the template of the control you should modify its XAML. It would be much more appropriate to create your own, rather than to predefine the templates of all controls one by one. Currently the resources in our styles are gathered per assembly, so it would not be so difficult to find desired ones. If I can be of further assistance do not hesitate to contact us! Explore the entire Telerik portfolio by downloading Telerik DevCraft Ultimate. Posted 24 Aug 2012 Link to this post Thank you for getting back to us! Please follow these steps when dealing with custom theme in our newest version: 1. Create a new Class LIbrary named MyTheme and add the following piece of code within MyTheme.cs: [ThemeLocation(ThemeLocation.BuiltIn)] public class MyTheme : Theme { MyTheme() this .Source = new Uri( "/MyTheme;component/themes/Generic.xaml" , UriKind.RelativeOrAbsolute); } 2. Add a new Themes folder with the corresponding ResourceDictionaries for RadGridView (located on..\Q2 2012 SP1\ Official\Silverlight\Themes\..\Expression_Dark): System.Windows.xaml, Telerik.Windows.Controls.xaml, Telerik.Windows.Controls.Input.xaml, Telerik.Windows.Controls.GridView.xaml. 3.Specify the namespace for this theme and the Key for your theme in all ResourceDictionaries as follows: xmlns:external="clr-namespace:MyTheme" <local :MyTheme x: Posted 28 Aug 2012 Link to this post ControlBackground_Active_Stop0 ..ControlBackground_Active_Stop3 in Telerik.Windows.Controls.xaml does not change the RadComboBox I got to a point where I was frustrated with all the XAML I had to copy into my project just to change a couple of colors. 
I wanted to try another approach so I attempted to modify the Office_Black common.xaml, compile that theme and tell my project to use that theme. Unfortuantely it didn't look like it actually compiled the common.xaml. I think this had something to do with build action of the _x.xamls being set to PreprocessedXaml. Is there some trick I'm missing to modifying the common.xaml of a built-in Telerik theme? Posted 03 Sep 2012 Link to this post Rather unfortunately we are not quite sure what might be causing this behavior. Generally if you change a resource used by in control template it will be handled correspondingly. Would it be possible to isolate the problem in a small runnable project and send it back to us as an attachment to a new support thread? Posted
http://www.telerik.com/forums/custom-theme-with-only-common-xaml
CC-MAIN-2017-13
refinedweb
449
56.15
Employees app with XML parsing and messaging in WP This code example shows how to load, parse and display (employee) XML data hosted on a server in Windows Phone. The code example also shows how to send the employees an SMS or email, or make a phone call. Windows Phone 8 Windows Phone 7.5 Introduction This article shows how to create a Windows Phone app which loads Employee information from server in XML format. Loaded Employees XML data will be parsed and displayed (image 1) in a ListBox Control. Selected Employee's details will be shown in a new Employee Page (image 2). You can call, send SMS or Email messages to selected Employee. XML data on server In this example Employees data will be loaded from XML file. Prepare XML data and images on server in following format: Of course you also will need some nice pictures of your Employees. Save those pictures to same folder with XML file. Windows Phone Application To start creating a new Windows Phone Application using the Windows Phone SDK, start Microsoft Visual studio then create a new Project and select Windows Phone Application Template. In this example I have used C# as code behind language. XML Parsing in Windows Phone There are many different ways to load and parse XML data in Windows Phone ...." Employees Class Employees information will be load from XML data. All information will be stored to Employees and Employee Classes. To create a Employees Class in to your project: - Right click your project in Solutions Explorer - Select Add and then Class... Name: Employees.cs and click Add. 
using System;
using System.Xml.Serialization;
using System.Collections.ObjectModel;

namespace Employees
{
    [XmlRoot("root")]
    public class Employees
    {
        [XmlArray("employees")]
        [XmlArrayItem("employee")]
        public ObservableCollection<Employee> Collection { get; set; }
    }
}

As you can see in the XML file, there is a root element, and here in the Employees class we map that same root element, which contains all the employees (you have to add a using statement for Serialization). After that we have an array of employees which contains the employee items. All the employee items will be stored in an ObservableCollection (you have to add a using statement for ObjectModel).

Employee Class

In the XML data each employee has elements for firstName, lastName, title and so on. You have to map them using [XmlElement] attributes. Remember to add a using statement for Serialization. To create an Employee class in your project:
1. Right click your project in Solution Explorer
2. Select Add and then Class... Name it Employee.cs and click Add.

using System;
using System.Xml.Serialization;

namespace Employees
{
    public class Employee
    {
        [XmlElement("firstName")]
        public string firstName { get; set; }
        [XmlElement("lastName")]
        public string lastName { get; set; }
        [XmlElement("title")]
        public string title { get; set; }
        [XmlElement("phone")]
        public string phone { get; set; }
        [XmlElement("email")]
        public string email { get; set; }
        [XmlElement("room")]
        public string room { get; set; }
        [XmlElement("picture")]
        public string picture { get; set; }
        [XmlElement("info")]
        public string info { get; set; }
    }
}

All the employees will be shown in a ListBox on the application's main page.

Loading images

There can be small issues when you try to load a lot of images from the web at the same time in Windows Phone. A few helper assemblies exist for this; this example uses PhonePerformance.dll. To add a reference, right click References in Solution Explorer and browse to your PhonePerformance.dll file. Open the file properties and Unblock it.
Application and Page Title

Modify your application and page title to suit your purpose.

Use ListBox to show Employees

Each employee is one row in a ListBox. Each row is 130 px high. The image takes 100 px, and the employee's name, title and room are shown at the right side of the image. Data binding from the code to the ListBox is used (see MainPage.xaml.cs later for more details). PhonePerformance.dll is used to load the images (note the Image element: delay:LowProfileImageLoader.UriSource="{Binding picture}"). Remember to add the new XML namespace xmlns:delay="clr-namespace:Delay;assembly=PhonePerformance" to your phone:PhoneApplicationPage element. When the user clicks a row in the ListBox, a new Employee page is displayed (it calls the employeesList_SelectionChanged method). EmployeePage.xaml is described later in this article.

<!-- Some attribute values (x:Name, SelectionChanged, Grid.Column and the
     title/room bindings) were truncated in this copy; they are
     reconstructed here from the code-behind and the description above. -->
<phone:PhoneApplicationPage x:Class="Employees.MainPage"
    ....
    xmlns:delay="clr-namespace:Delay;assembly=PhonePerformance">

<ListBox x:Name="employeesList" SelectionChanged="employeesList_SelectionChanged">
    <ListBox.ItemTemplate>
        <DataTemplate>
            <Grid Height="130">
                <Grid.ColumnDefinitions>
                    <ColumnDefinition Width="100"/>
                    <ColumnDefinition Width="*"/>
                </Grid.ColumnDefinitions>
                <Image delay:LowProfileImageLoader.UriSource="{Binding picture}" Grid.Column="0"/>
                <StackPanel Margin="10,15,0,0" Grid.Column="1" Orientation="Horizontal">
                    <TextBlock Text="{Binding lastName}" FontSize="30" />
                    <TextBlock Text=" " />
                    <TextBlock Text="{Binding firstName}" FontSize="30"/>
                </StackPanel>
                <StackPanel Margin="0,50,0,0" Grid.Column="1">
                    <TextBlock Text="{Binding title}"/>
                    <TextBlock Text="{Binding room}"/>
                </StackPanel>
            </Grid>
        </DataTemplate>
    </ListBox.ItemTemplate>
</ListBox>

Code behind

The MainPage code-behind loads the employees XML data from the server, parses it into the Employees and Employee classes and finally binds the data to the UI.
Load XML data from Server

In the MainPage constructor, after InitializeComponent() is done, we first check whether a network connection is available and then start loading the XML data.

public MainPage()
{
    InitializeComponent();

    // is there a network connection available
    if (!System.Net.NetworkInformation.NetworkInterface.GetIsNetworkAvailable())
    {
        MessageBox.Show("No network connection available!");
        return;
    }

    // start loading XML data
    WebClient downloader = new WebClient();
    Uri uri = new Uri("", UriKind.Absolute);
    downloader.DownloadStringCompleted += new DownloadStringCompletedEventHandler(EmployeesDownloaded);
    downloader.DownloadStringAsync(uri);
}

When all the XML data has been loaded, it is time to parse it.

void EmployeesDownloaded(object sender, DownloadStringCompletedEventArgs e)
{
    if (e.Result == null || e.Error != null)
    {
        MessageBox.Show("There was an error downloading the XML-file!");
    }
    else
    {
        // Deserialize if download succeeds
        XmlSerializer serializer = new XmlSerializer(typeof(Employees));
        XDocument document = XDocument.Parse(e.Result);

        // get all the employees
        Employees employees = (Employees) serializer.Deserialize(document.CreateReader());

        // bind data to ListBox
        employeesList.ItemsSource = employees.Collection;
    }
}

When the user clicks a row in the ListBox, the employeesList_SelectionChanged method will be called. Here we first find our Application instance and store the selected employee there. This is one way to share data between views in Windows Phone applications. After that a new page will be displayed. This EmployeePage.xaml will be created later in this article.

// selection in EmployeeList is changed
private void employeesList_SelectionChanged(object sender, SelectionChangedEventArgs e)
{
    var app = App.Current as App;
    app.selectedEmployee = (Employee) employeesList.SelectedItem;
    this.NavigationService.Navigate(new Uri("/EmployeePage.xaml", UriKind.Relative));
}

Now all the employees should be visible on the MainPage.
Remember to add the following two namespaces in MainPage.xaml.cs:

using System.Xml.Serialization;
using System.Xml.Linq;

App.xaml.cs

In the previous chapter we stored the selected employee in the App class, so we need to change the App.xaml.cs file a little (add selectedEmployee):

// selected employee from EmployeeList
public Employee selectedEmployee { get; set; }

EmployeePage.xaml

Create a new page in your project. The employee's detailed information is shown on a new Employee page. To create a new page, right click your project in Solution Explorer and select Add, New Item... Select Windows Phone Portrait Page and name it EmployeePage.xaml.

Employee Page Design

On the right you can see the Employee detail page in design mode. The company name and page title are shown as they are on the main page; you can modify those to your own needs.

Next there are the employee's image, last and first name, title and room. A StackPanel is used to organize the controls.

<!-- Some x:Name and Grid.Column attribute values were truncated in this
     copy; they are reconstructed here from the code-behind below. -->
<Grid Height="130" VerticalAlignment="Top">
    <Grid.ColumnDefinitions>
        <ColumnDefinition Width="100"/>
        <ColumnDefinition Width="*"/>
    </Grid.ColumnDefinitions>
    <Image x:Name="image" Grid.Column="0"/>
    <StackPanel Margin="10,15,0,0" Grid.Column="1" Orientation="Horizontal">
        <TextBlock x:Name="lastName" FontSize="30"/>
        <TextBlock Text=" " />
        <TextBlock x:Name="firstName" FontSize="30"/>
    </StackPanel>
    <StackPanel Margin="0,50,0,0" Grid.Column="1">
        <TextBlock x:Name="title"/>
        <TextBlock x:Name="room"/>
    </StackPanel>
</Grid>

The user can call, or send SMS and email messages to, the selected employee by pressing the visible phone numbers. When the end user presses the numbers, ManipulationStarted will be launched. The event handling is programmed in the EmployeePage.xaml.cs file.
<!-- x:Name values reconstructed from the code-behind below -->
<Grid Height="50" VerticalAlignment="Top" Margin="0,150" ManipulationStarted="phone_ManipulationStarted">
    <Grid.ColumnDefinitions>
        <ColumnDefinition Width="*"/>
    </Grid.ColumnDefinitions>
    <TextBlock x:Name="phone"/>
    <Image Source="Images/call.png" HorizontalAlignment="Right"/>
</Grid>

<Grid Height="50" VerticalAlignment="Top" Margin="0,210" ManipulationStarted="sms_ManipulationStarted">
    <Grid.ColumnDefinitions>
        <ColumnDefinition Width="*"/>
    </Grid.ColumnDefinitions>
    <TextBlock x:Name="sms"/>
    <Image Source="Images/msg.png" HorizontalAlignment="Right"/>
</Grid>

<Grid Height="50" VerticalAlignment="Top" Margin="0,270" ManipulationStarted="mail_ManipulationStarted">
    <Grid.ColumnDefinitions>
        <ColumnDefinition Width="*"/>
    </Grid.ColumnDefinitions>
    <TextBlock x:Name="mail"/>
    <Image Source="Images/mail.png" HorizontalAlignment="Right"/>
</Grid>

Detailed information about the selected employee is displayed at the bottom of the screen.

<TextBlock x:Name="info"/>

In this example I have used a few images for the Call, SMS and Email actions. You can add new images to your project as well. First create a new folder for the images in Solution Explorer (name it Images, for example). Copy your images to that folder in Windows. Right click your Images folder in Solution Explorer and select Add, Existing Item... Browse to your images and add them to the project.

EmployeePage.xaml.cs

The code behind the Employee page first gets the selected employee from the App class and then displays the employee's information in the UI.
// selected Employee
Employee employee;

public EmployeePage()
{
    InitializeComponent();

    // get selected employee from App Class
    var app = App.Current as App;
    employee = app.selectedEmployee;

    // show employee details in page
    lastName.Text = employee.lastName;
    firstName.Text = employee.firstName;
    title.Text = employee.title;
    room.Text = "Room: " + employee.room;
    image.Source = new BitmapImage(new Uri(employee.picture, UriKind.RelativeOrAbsolute));
    mail.Text = "Email: " + employee.email;
    sms.Text = "SMS: " + employee.phone;
    phone.Text = "Call: " + employee.phone;
    info.Text = employee.info;
}

When the user wants to call the employee, the phone_ManipulationStarted method will be called (remember to add using Microsoft.Phone.Tasks; to your code).

private void phone_ManipulationStarted(object sender, ManipulationStartedEventArgs e)
{
    MessageBoxResult result = MessageBox.Show("Make a phone call?", employee.lastName + " " + employee.firstName, MessageBoxButton.OKCancel);
    if (result == MessageBoxResult.OK)
    {
        // make a phone call
        PhoneCallTask phoneTask = new PhoneCallTask();
        phoneTask.DisplayName = employee.lastName + " " + employee.firstName;
        phoneTask.PhoneNumber = employee.phone;
        phoneTask.Show();
    }
}

When the user wants to send an SMS message to the employee, the sms_ManipulationStarted method will be called.

private void sms_ManipulationStarted(object sender, ManipulationStartedEventArgs e)
{
    MessageBoxResult result = MessageBox.Show("Send SMS message?", employee.lastName + " " + employee.firstName, MessageBoxButton.OKCancel);
    if (result == MessageBoxResult.OK)
    {
        // sms
        SmsComposeTask composeSMS = new SmsComposeTask();
        composeSMS.Body = "Edit your message";
        composeSMS.To = employee.phone;
        composeSMS.Show();
    }
}

When the user wants to send an email to the employee, the mail_ManipulationStarted method will be called.
private void mail_ManipulationStarted(object sender, ManipulationStartedEventArgs e)
{
    MessageBoxResult result = MessageBox.Show("Send Email?", employee.lastName + " " + employee.firstName, MessageBoxButton.OKCancel);
    if (result == MessageBoxResult.OK)
    {
        // email (this block was garbled in the source; reconstructed here
        // with the standard EmailComposeTask from Microsoft.Phone.Tasks)
        EmailComposeTask composeEmail = new EmailComposeTask();
        composeEmail.To = employee.email;
        composeEmail.Show();
    }
}

Summary

Working with XML data in Windows Phone applications is quite easy. I hope you find this article useful and that it helps you work with XML in WP7. And finally, here is the source code for this article: File:PTM Employees.zip

Phuc@realcom - Great post! Thank you! This is a great post. I have a question. Could rendering Employees to screen process comes along with downloading process? phuc@realcom 15:14, 17 January 2012 (EET)

Pasi.manninen - to Phuc@realcom
XML file will be loaded first and then EmployeesDownloaded method will be called. After that, in EmployeesDownloaded method, data will be binded to ListBox and then all images will be loaded. pasi.manninen 17:58, 18 January 2012 (EET)

Paoki - Previous Next Employee
Thanks for your post. I have a question. On the Employee Details page I want to have 2 buttons: next and previous, so that if the user clicks on the next button it will automatically bring the next employee data on the employee details page (same way by clicking on the previous button). Could you please let me know how to do that? Thank you. paoki 17:39, 19 February 2012 (EET)

Pasi.manninen - Prev/Next Employee
Hi Paoki, you have to move employees from MainPage.xaml.cs to App.xaml.cs (same way as selectedEmployee is). In Employee Details page you have to set selectedEmployee to previous or next employee from Employees Collection when buttons are clicked. And finally update view with new employee data. Pasi pasi.manninen 21:51, 28 February 2012 (EET)

Etnad - Great post!!
Great post, thank you for the example. Is it possible to change the email link to link to a pivot page?
For an example, if employee is in a category of Day Shift, Night Shift or Swing Shift.. when the user clicks "Night Shift" it takes them to basically an about page that explains the Night Shift? I hope that made sense... anyway great post thanks for sharing. That was a silly question.. I figured it out. :P New question.. I can't figure out how to that the xml as a resource instead of requiring an internet connection. I've tried adding it as a resource then I thought it'd be as simple as changing this, but no luck. I've tried several different things but cant figure it out. any ideas? Etnad 07:16, 30 March 2012 (EEST) EDIT: I figured that out too... Here's the code if there's other newbies out there. Place the employees.xml directly under the project and set as Content. Change MainPage.xaml.cs to: Then completely comment out the "void EmployeesDownloaded" section. Woohoo it works! Okay.. next project, switch this to an LLS :) Anyone help with that? Kavit Patel - Create Data classes using tool Hi, Great article. I want to add something here. You have created ""Employees"" & ""Employee"" Classes manually to parse the XML response using XmlSerializer, creating this class manually requires lots of attention and hence some lengthy process. We can use the Visual Studio's tool named ""XSD"" to create these classes automatically. I have written article for the same @ Kavit. Regards, Kavit Patel 15:59, 25 April 2012 (EEST) Ekzotik - j2meHi is it possible to have such a code in j2me.please i request u post a j2me code too. ekzotik 14:25, 22 May 2012 (EEST) Javabak - Problem with the feed Hey there, nice explaination, works great but not in my case :( I have this application that shows events in my city, every event got an image and a title and if I click an event I'll go to the detail page that show me the larger image, the title and the description, everything similar to this example and in fact it works pretty fine, the problem? 
it shows me only one event, it seem that the deserialization stop workingafter the first one :\ here you can see the feed that i use: this is my Events class: and this is my Event class any help would be apreciated, tnx in advice!Diego. Javabak 14:30, 13 August 2012 (EEST) Hamishwillee - You might want to private message Pasi direclty Hi Guys If you're asking about something directly related to this article then by all means post here. However I suggest you also send a private message Pasi by hovering over his name up in the ArticleMetaData. This increases the likelihood he will get the message - as he may not be monitoring the article anymore. If however your question is only peripherally related to the article (for example a request for java me code that does the same thing!) then I suggest you raise a discussion board post on the Windows Phone forum - you can of course cross link to the article. This increases the chance that you will get support. Regards Hamish(Community Manager, Nokia) hamishwillee 07:16, 15 August 2012 (EEST) Nayana Bingi - Problem in ObservableCollection part Hi, I have tried to retrieve the XML data from the site "".. but the result i am getting is a blank page . The following is the content of xml file <?xml version="1.0" encoding="UTF-8"?> <articles><![CDATA[ The Indian Institute of Management, Bangalore (IIM-B) will refund the fees of around Rs 8 lakh that students pay for their two-year Post Graduate Programme (PGP) if they take up a job in any non-profit entity (NGO).]]> </articles> This is the Articles.cs file -- namespace t4 { } This is the Article.cs file-- namespace t4 { } This is my MainPage.xaml.cs file-- namespace t4 { There are no errors .. but the output that i get is a blank page .. when i debugged the NewsDownloaded () fuction the contents of locals stack are as shown DEFAULT_NAMESPACES Cannot fetch the value of field 'DEFAULT_NAMESPACES' because information about the containing class is unavailable. 
System.Xml.Serialization.XmlSerializerNamespaces in serializer variable. Please Suggest solution for this problem Thanks and RegardsNayana Nayana Bingi 08:21, 16 February 2013 (EET) Pooja 1650 - Corrected Broken Link Hi Passi, The link to PhonePerformance.dll was broken so I had just replaced it with the one mentioned in your other article "Weather in Windows Phone". Please check it once and correct me If I am wrong. Thanks,Pooja pooja_1650 08:06, 4 March 2013 (EET) Pasi.manninen - Pooja - Correct link Hi, now it is correct again. Br,Pasi. pasi.manninen 10:13, 18 March 2013 (EET) Kunal Chowdhury - Great post. Thank you for sharing. Regards,Kunal Chowdhury Kunal Chowdhury (talk) 21:36, 11 August 2013 (EEST) Ardit dine - Error Changing Uri Hello, great post.I'm haveing problem when i navigate to different employee all data are display correct but when i go back where are all the employee list and press a button that change the uri of the xml it gave me a error on EmployeePage.xaml ardit_dine (talk) 13:37, 6 November 2013 (EET) Hamishwillee - Ardit dine - thanks for posting Hi Ardit dine If you cab debug this yourself and update it would be great. If you don't get a reply in a few days you might want to sent a private message to the author pasi.manninen (just hover over his name to see option to send) RegardsHamish hamishwillee (talk) 05:23, 7 November 2013 (EET) Chintandave er - Is there easy way to search, filter data ? Hi, Thanks for this article.Is there easy way to search employee by his name and other details ? Chintandave er 09:19, 9 February 2014 (EET) Microasif - How can we search ? hey, Great article;How can we search from this Employees collection ?? Microasif (talk) 13:06, 2 March 2014 (EET)
http://developer.nokia.com/community/wiki/Employees_app_with_XML_parsing_and_messaging_in_WP
> > You need it in the default (no security) version of security_file_mmap()
> > in security.h not hard coded into do_mmap_pgoff, and leave the one in
> > cap_* alone.
>
> But that would still leave it up to the security "models" to check
> for basic security issues.

Correct. You have no knowledge of the policy at the higher level. In the SELinux case security labels are used to identify code which is permitted to map low pages. That means the root/RAW_IO security sledgehammer can be replaced with a more secure labelling system. Other policy systems might do it on namespaces (perhaps /bin and /usr/bin mapping zero OK, /home not, etc).
https://lkml.org/lkml/2009/6/3/450
Last post 06-15-2009 7:56 AM by andyste1. 53 replies.

I posted this on the 'Hosting Open Forums', but didn't get any answer so I'll try again here... I have a .NET 2.0 application that is hosted on Godaddy.com. When you setup your MySQL database, Godaddy supplies you with several connection strings, depending on the connector you wish to use (ODBC, OLEDB, and MySQL Connector .NET). Since they provide a connection string for Connector .NET, I assume this means I could use this connector to access MySQL, and from what I read, doing so would be preferable to using ODBC or OLEDB. I setup my page and plopped the MySql.Data.dll into my bin directory as suggested. Since Godaddy sets the trust level to medium on shared hosts, I got the following error (I've seen many people bring up this problem due to running under a medium trust level in .NET 2.0):

System.Security.SecurityException: That assembly does not allow partially trusted callers.

So, I did some research and it was suggested that if you add [assembly:AllowPartiallyTrustedCallers] to AssemblyInfo.cs in the MySql Connector .NET source, recompile, then use the new .dll, the trust issue will be resolved. I did this and placed the new .dll in my /bin folder and although the original error went away, I now get the following error:

MySql.Data.MySqlClient.MySqlException: Unable to connect to any of the specified MySQL hosts ---> System.Security.SecurityException: Request for the permission of type 'System.Net.SocketPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed

For what it's worth, I am able to get the ODBC connector working (but intermittently 'lose connection during query' somewhat rarely, but enough to be a problem), but would prefer using the Connector .NET to access the MySQL database. Anybody experience this and know what is going on, or how to fix this?
Is using the MySQL Connector .NET just not possible with Godaddy.com? Thanks for any info. Wyck I might not have every detail correct here, but basically: reason for failure: your ASP.NET 2.0 application is running under "medium trust" - a context not allowing calls to unsigned assemblies. You need to either a) ask GoDaddy to put your app to run under full trust (which they might not want to - but you should tell them that it's false advertising to let you think you can run MySQL from .NET 2.0 and then not allow it) or b) ask GoDaddy to put MySQL.Data.dll in gac and then change a file called web_mediumtrust.config to allow the calling of MySQL.Data despite the "security"problem. Not sure about the change needed - google might help you or maybe c) ask MySQL.com or whoever has created the MySQL.Data.Dll driver to sign the assembly correctly. Not sure about this however, havn't read up on it but I do think it is a simple signing of the assembly missing... Alternative b) should be best from all perspectives and I really think webhosts running websites under medium trust (which btw should be ok and recommended) should implement the change MAYBE A SOLUTION? Just found out more about it. Please read more here Seems like some heroic guy has recompiled the MySql.Data.Dll with correct permissions requested Ah yes, I was able to recompile the MySql.Data.Dll to allow partially trusted callers and I placed the new .dll in my /bin folder (see my first post). Although this did get rid of the "That assembly does not allow partially trusted callers" problem, a new problem arose. It appears that MySql.Data.Dll uses sockets and under the default permissions of the medium trust settings(web_mediumtrust.config), sockets aren't allowed. When I try to use open a connection using MySql.Data.Dll, I get the following error: When I spoke to technical support, they eventually sent me a link which described how to use medium trust under .NET 2.0. 
In this link, one of the things it describes is how to create a custom medium trust config file that will enable what you need, like System.Net.SocketPermission. However, the big problem with that is since this is their host, and they want things to run under medium trust since it is a shared host, they have naturally locked the trust level. And, of course, I cannot override the trust level to any custom settings that would enable sockets, or anything else for that matter. So, if the MySql Connector .NET uses sockets (which it appears it does), and sockets aren't allowed by default in medium trust (which it appears it isn't), and they have the trust config locked down (which they do), the only way for me to use MySql.Data.Dll with a shared hosting plan with Godaddy.com is for Godaddy.com to enable this themselves. Unless someone somewhere knows of a trick to work around this?? I'm not sure I have a lot of confidence that Godaddy will enable this. I can fall back to using ODBC, which I've gotten to work, but I just preferred to use MySql Connector .NET... Thanks for the replies. whebert, Ah yes, I was able to recompile the MySql.Data.Dll to allow partially trusted callers Ah yes, I was able to recompile the MySql.Data.Dll to allow partially trusted callers really sorry for not reading your question correctly. Or I think I first did but then mixed it up with a clients problem - cause this question was presented to me by a client with a similar problem and therefore I might have jumped to conclusions in your particular case... While my link might be of help for some people I'm sorry it didn't help you out with GoDaddy... As I have said before I really think it is low not to change the medium trust config file to make MySQL would work - false advertising I'd say - but me sympatizing for you doesn't help you much, does it? Let me know if you find out something cause this is big deal for many software developers as well. 
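For reference, the web_mediumtrust.config change being discussed would look roughly like the following — a sketch based on the standard ASP.NET 2.0 trust-policy file format, not GoDaddy's actual file (and, as noted above, only the host can apply it):

```xml
<!-- 1. Register the permission class (inside the <SecurityClasses> element): -->
<SecurityClass Name="SocketPermission"
               Description="System.Net.SocketPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/>

<!-- 2. Grant it inside the "ASP.Net" named permission set: -->
<IPermission class="SocketPermission" version="1" Unrestricted="true"/>
```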
Update: My problem was eventually sent up as a trouble ticket to the "Advanced Hosting Support" on Godaddy.com. This is the response I received: Dear Sir/Madam,Thank you for contacting hosting support. The MySql.Data.MySqlClient is not installed on the hosting server. You may upload it to your bin directory and use it from there. However, the provider doesn't work in medium trust, so because you are using .NET 2.0 you will be unable to use it. We recommend switching to the ODBC .NET providers. Thank you for your understanding in this matter. So, it would appear that as of this writing, Godaddy.com does not support the MySql Connector/NET provider for .NET 2.0 on their shared hosts which run under medium trust. I assume it works with .NET 1.1, since everything runs under full trust (or so I've heard), but I have not tested that out. This is unfortunate. All other things aside, I really do like Godaddy.com hosting - especially the price. Like I've mentioned before, I was able to get the ODBC provider working, so I can continue with that for the time being; they give you one MS SQL database so the default .NET 2.0 Membership and Roles is trivial to setup and use; and with a little tweaking I was able to get the SMTP email relay server working (had to figure out how to pass it credentials). It would be nice to see Godaddy.com fully support .NET 2.0 development with the MySql Connector/NET provider. From the forums I read, it seems to be a rather large problem for many developers. From what I gather, all they'd have to do is add the System.Net.SocketPermission namespace to their web_mediumtrust.config file and we'd all be set (assuming you have a recompiled MySql.Data.Dll assembly that allows partially trusted callers, of course). :) Alek, just wanted to say: great, well done. hi whebert, i have the same problems also, currently trying to solve it. 
i have the error {"Request for the permission of type 'System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed."} but i had set it in the mediumtrust web config, don't know why it show this out. whebert can you show how you set the web config and how you set the connection to mysql using mysql connector and also how to recompiled MySql.Data.Dll assembly that allows partially trusted callers. cause i just put in the trust = medium in to my web application.... to used it in web hosting company server. thanks Great news! Thanks Alek and the Godaddy Hosting Team. I just got back from vacation and haven't had time to test this yet. As soon as I do I'll post my results - I'm sure it all works fine now. FrancisFoo, as soon as I get a chance to test this I'll post how I setup everything and how I recompiled the MySql.Data.Dll. Well, I tested it and sure enough, it works now. So thanks again to Alek and the Godaddy Hosting Team for taking the time to add support for the MySql Connector/Net provider under .NET 2.0. As mentioned, I did have to place the recompiled assembly in the /bin folder to get it to work, but that is easy to do. For those who haven't already done so or haven't read about it elsewhere, this is how I recompiled the MySql.Data.Dll assembly. 
I first downloaded the Connector/NET 1.0 provider from the MySql site: The Connector/NET provider (binaries and source code) were installed to the following location on my hard drive: C:\Program Files\MySQL\MySQL Connector Net 1.0.7

I then went into the \src folder and noticed they provide you with solution (.sln) and project (.csproj) files for Visual Studio, so I downloaded the free version of MS Visual C# 2005 Express Edition at:

I opened the solution file with Visual Studio C# 2005 Express (I think VS did some conversion to the solution to open it in the new version, no biggy) and simply added the following to the AssemblyInfo.cs file:

using System.Security;
[assembly: AllowPartiallyTrustedCallers]

Clicked 'Build'->'Build Solution' and voila, it built the new assembly. I located the new .dll file in the C:\Program Files\MySQL\MySQL Connector Net 1.0.7\src\bin\net-1.1\Release folder. I know it recompiled it to the 'net-1.1' under \bin, but I don't think that matters - it seems to work for .NET 2.0 as well. If somebody knows differently, please let me know - I didn't see anywhere within VS Studio to change the compile for 2.0 specifically... Anyway, I placed this recompiled MySql.Data.Dll in the /bin folder on my website and everything seems to be working now. I didn't have to change anything in my web.config file, just used the connection string Godaddy provided to me and tested a few queries, everything is working as it should now.
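A minimal Connector/NET smoke test along these lines (not taken from the thread; the server, user and database values are placeholders — use the connection string your host generates for your database):

```csharp
using System;
using MySql.Data.MySqlClient;

public class MySqlSmokeTest
{
    public static void Main()
    {
        // Placeholder connection string: substitute your own values.
        string connStr = "server=mysqlXX.secureserver.net;" +
                         "user id=myuser;password=mypass;database=mydb";

        using (MySqlConnection conn = new MySqlConnection(connStr))
        {
            // Open() is where the SocketPermission exception surfaces
            // if the trust policy is still blocking the provider.
            conn.Open();
            MySqlCommand cmd = new MySqlCommand("SELECT VERSION()", conn);
            // ExecuteScalar returns the first column of the first row.
            Console.WriteLine(cmd.ExecuteScalar());
        }
    }
}
```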
Advertise | Ads by BanManPro | Running IIS7 Trademarks | Privacy Statement © 2009 Microsoft Corporation.
http://forums.asp.net/p/999456/1314486.aspx
AWS News Blog

Instead of fighting for time on a cluster that must be shared with other researchers, they accelerate their work by launching clusters on demand, running their jobs, and then shutting the cluster down shortly thereafter, paying only for the resources that they consume. They replace tedious RFPs, procurement, hardware builds and acceptance testing with cloud resources that they can launch in minutes. As their needs grow, they can scale the existing cluster or launch a new one. This self-serve, cloud-based HPC approach favors science over servers and accelerates the pace of research and innovation. Access to shared, cloud-based resources can be granted to colleagues located on the same campus or halfway around the world, without having to worry about potential issues at organizational or network boundaries.

Alces Flight in AWS Marketplace

Today we are making Alces Flight available in AWS Marketplace. This is a fully-featured HPC environment that you can launch in a matter of minutes. It can make use of On-Demand or Spot Instances and comes complete with a job scheduler and hundreds of HPC applications that are all set up and ready to run. Some of the applications include built-in collaborative features such as shared graphical views — for example, the Integrative Genomics Viewer (IGV). You can launch a small cluster (up to 8 nodes) for evaluation and testing or a larger cluster for research. If you subscribe to the product, you can download the AWS CloudFormation template from the Alces site. This template powers all of the products, and is used to quickly launch all of the AWS resources needed to create the cluster. EC2 Spot Instances give you access to spare AWS capacity at up to a 90% discount from On-Demand pricing and can significantly reduce your cost per core.
You simply enter the maximum bid price that you are willing to pay for a single compute node; AWS will manage your bid, running the nodes when capacity is available at the desired price point. Running Alces Flight In order to get some first-hand experience with Alces Flight, I launched a cluster of my own. Here are the settings that I used: I set a tag for all of the resources in the stack as follows: I confirmed my choices and gave CloudFormation the go-ahead to create my cluster. As expected, the cluster was all set up and ready to go within 5 minutes. Here are some of the events that were logged along the way: Then I SSH’ed in to the login node and saw the greeting, all as expected: After I launched my cluster I realized that this post would be more interesting if I had more compute nodes in my cluster. Instead of starting over, I simply modified my CloudFormation stack to have 4 nodes instead of 1, applied the change, and watched as the new nodes came online. Since I specified the use of Spot Instances when I launched the cluster, Auto Scaling placed bids automatically. Once the nodes were online I was able to locate them from within my PuTTY session: Then I used the pdsh (Parallel Distributed Shell command) to check on the up-time of each compute node: Learn More This barely counts as scratching the surface; read Getting Started as Quickly as Possible to learn a lot more about what you can do! You should also watch one or more of the Alces videos to see this cool new product in action. If you are building and running data-intensive HPC applications on AWS, you may also be interested in another Marketplace offering. The BeeGFS (self-supported or support included) parallel file system runs across multiple EC2 instances, aggregating the processing power into a single namespace, with all data stored on EBS volumes. The self-supported product is also available on a 14 day free trial. 
You can create a cluster file system using BeeGFS and then use it as part of your Alces cluster.
https://aws.amazon.com/blogs/aws/new-in-aws-marketplace-alces-flight-effortless-hpc-on-demand/
CC-MAIN-2018-51
refinedweb
681
56.59
18 Responses for "Accessing displayObjects on the timeline after a gotoAndStop or gotoAndPlay"

Very nice explanation! You are in my Favorites now. Thank you. I am trying to rebuild this and I am interested in the graphics/clips you used.

This approach has been around since AS 2.0. It allows you to make buttons with “normal_up”, “normal_over”, “normal_down”, “selected_up”, “selected_over”, “selected_down”, “normal_disabled”, “selected_disabled” states. Also it wraps nicely into a package with a button class and a button-event class. It works more stably when you set up private vars that store button states, plus there are some bullet-proof tricks to handle disabled states and restoring events with only making the hotspot timeline shorter. Best regards.

@name Yes, the timeline approach has been around since AS2, even AS1 for that matter. The point of this article is to show the key difference and known bug in AS3: that you can’t reference descendant displayObjects when you move the timeline to one of the states along the timeline. The fact that the numChildren property is correct and you can’t access the displayObjects proves that there is something wrong. You shouldn’t have to count ADDED_TO_STAGE events to make sure all sibling clips exist before accessing them. Thanks for your comment though.

Sure, no problem. I am glad to share. Also, I am absolutely in agreement that AS 3.0 has some really annoying bugs (we all expect that things should simply work when you buy them, don’t we?). BTW I really enjoy reading your articles. This is a great help for the development community. Thank you.

Yes, this is a terrible bug I have been fighting a lot, and I really think it must be fixed. I have found one solution for easy problems like the example of the button, where you have only one external timeline. If you put the trace('the instance name of the overclip is: ' + this.overClip.name) in the “over” label, it will work fine, but of course you don't want to put any code in frames. So you can add code to frames dynamically:

this.addFrameScript(numFrame - 1, codeFrameTrace);

public function codeFrameTrace():void {
    trace('the instance name of the overclip is: ' + this.overClip.name);
}

Now this code will be executed after the mc is registered, and the problem is solved. I think this is not the best thing to do, but it is the best and easiest solution I have found. Thanks for your comments.

Thanks for this extensive research; it would be great if Adobe picks it up. We’re in the same (deep) shit, as we’re having a tight workflow back and forth between designers and developers, and it didn’t take more than a few days experimenting with CS3 & AS3 to run into this bug. And it actually doesn’t end there. If you link a class to any of the siblings on stage (via the library), the behaviour of the player changes. Usually it makes them known earlier, so instead of having to wait one or two enterFrames, they can be known at once. There’s also a huge difference with code inside a class linked to an object on stage: if you do the same there (gotoAndStop() etc.) it’s a lot more accurate and predictable. So it looks to me that the timeline code approach has gotten considerably less attention in the overhaul to AS3. I tend to praise Adobe for the fact that they manage to produce a tool usable for both fully code-based developers and timeline developers (aka scripters :D), but I guess this shows how difficult it has become to please both worlds. And even though I’m a fully code-based developer myself, and not very fond of timeline code, I really hope they fix this!

I have used your described technique in AS2 for a few years and it became a basic tool for me. Switching to AS3, I started using the BaseButton class for interactive objects and bound movieclips to the button states, for example:

myButton.setStyle("upSkin", scrubButton_upSkin);
myButton.setStyle("overSkin", scrubButton_overSkin);
myButton.setStyle("downSkin", scrubButton_downSkin);
myButton.setStyle("disabledSkin", scrubButton_disabledSkin);

This method is definitely less designer friendly, however. The timeline is useful! Plus it seems to me that there is a fair burden of code compared to the timeline technique. I can’t recall the exact numbers, but it seemed that using BaseButton right off the bat causes 25K to be added. I meant to port my old method over to compare but have not had a chance. What it seems to me, Scott, is that there is now some loss of synchronicity here. What if you commanded gotoAndPlay("over") and then on the second frame after the over frame added a callback or dispatched an event. Oh the h@ckage! That’s just ugly thinking about it. - Randy

Hi there, I have been coding in an environment where most movieclip buttons are created by designers, and have found that even in AS2 Flash can bug out when executing code to manipulate frame actions. My final best practice is to create all (or move all) instances to frame one and hide them with code until needed. I know it's not the most fantastic idea, but it eliminates many possible scripting/frame-action issues. Greetings from South Africa

The simplest catch solution! Matías Ini, you are my hero. Here is my implementation of your idea.

My sample class:

public class TestClass {
    private var mc1:MovieClip;
    private var mc2:MovieClip;

    public function TestClass() {
        // stop playhead at frame 2
        gotoAndStop(2);
        workAroundFlashBug();
        // CONSTRUCTOR CONTINUES AT INIT() due to workaround
        // clipCCW = ccw;
        // clipCW = cw;
        // selectRotationAnimation();
    }

    private function init() {
        // FINISHES CONSTRUCTOR
        // these stage instances are present on the current frame
        // we save them to local variables
        mc1 = stageInstance1;
        mc2 = stageInstance2;
        // do whatever other constructor actions
    }

    private function workAroundFlashBug() {
        this.addFrameScript(currentFrame - 1, init);
    }
}

Original post: Matías Ini wrote:

this.addFrameScript(numFrame - 1, codeFrameTrace);

public function codeFrameTrace():void {
    trace('the instance name of the overclip is: ' + this.overClip.name);
}

Sorry, forgot to remove the frameScript. Do this in the init() function. Cheers, Pete

private function init() {
    // FINISHES CONSTRUCTOR
    // remove dirty frame script
    this.addFrameScript(currentFrame - 1, null);
    // these stage instances are present on the current frame
    // we save them to local variables
    mc1 = stageInstance1;
    mc2 = stageInstance2;
    // do whatever other constructor actions
}

private function workAroundFlashBug() {
    this.addFrameScript(currentFrame - 1, init);
}

Edit: by "// these stages are present on the current frame" I mean, of course, "// these stage instances are present on the current frame". A little messy today. Pete

Can anyone help me? I am very green at this and am just trying to do a simple, simple, simple Flash site, but I cannot get around this 1009 bug. It’s awful and stopping me from getting anywhere. Basically, how do you get the program to recognize the button as not null? I guess I mean, how do you get it to read the button and then apply the ActionScript in the proper order? I can’t believe Adobe designed this flaw into the program. Horrible! And when I say simple, I mean I hardly understand any of the code above. I’m really depressed over this - it's ruining my efforts to get a site up - I’d made such good progress til now. The code I am using is below:

stop();

FsAs.addEventListener(MouseEvent.CLICK, FestsAwards);
function FestsAwards(event:MouseEvent):void {
    gotoAndPlay(4);
}

CsBs.addEventListener(MouseEvent.CLICK, ContsBios);
function ContsBios(event:MouseEvent):void {
    gotoAndPlay(35);
}

Imgs.addEventListener(MouseEvent.CLICK, Images);
function Images(event:MouseEvent):void {
    gotoAndPlay(61);
}

CstCrw.addEventListener(MouseEvent.CLICK, CastCrew);
function CastCrew(event:MouseEvent):void {
    gotoAndPlay(111);
}

BlgNws.addEventListener(MouseEvent.CLICK, BlogNews);
function BlogNews(event:MouseEvent):void {
    navigateToURL(new URLRequest(""));
}

Hi, in AS2 there is a similar bug. Let me explain it: if I manually place an mc in frame 1, and that mc has a function inside (let's call it "myFunction"), and in frame 1 I use this:

trace(mc.myFunction)

it will return "undefined". Sometimes I used the code you described to wait for the second onEnterFrame to occur, but then I started to use a similar approach where I create a MovieClip.prototype.onload, and inside this prototype I create the code to wait for that second frame and fire a custom event. So instead of calling mc.myFunction first, I declare the listener, something like this:

mc.loadedDone = function() {
    // this function is called by MovieClip.prototype.onload
    trace(mc.myFunction)
}

So far the designers have no problems using this code.

Was this ever fixed? Do Flash Player 10 and the CS4 IDE address this bug?

I found that while Flash doesn’t receive the instance names, it does know that it has X number of children. Based on that, I created a function that uses the Event.ADDED listener to check that all children are not null before proceeding. For the most part it seems to work for me, but I’m always looking for better and more efficient code. Details and code are here:

Hello- I am a designer (not a developer) and I am having a similar issue. I am using a button to navigate the timeline, but the first time I click on it, it goes to the wrong frame. But if I go to some other areas of the timeline and then back to that button, it goes to the correct frame. Any suggestions? It’s very basic code, just your generic:

on (press, release) {
    gotoAndPlay("metro");
}

Thanks!

Hey Scott, I’ve created a class that you can use in place of the standard flash.display.MovieClip object. GotoClip stores commands you place on a MovieClip and then applies them to the MovieClip when the Flash Player has gotten around to rendering the MovieClip. Check it out: Now includes an archive with all source and example FLA.
http://www.scottgmorgan.com/blog/index.php/2008/03/06/accessing-displayobjects-on-the-timeline-after-a-gotoandstop-or-gotoandplay
crawl-002
refinedweb
1,657
63.7
When compressing an entire directory (folder) into a zip file in Python, you can use os.scandir() or os.listdir() to create a list of files and use the zipfile module to compress them, but it is easier to use make_archive() from the shutil module. In addition to zip, other formats such as tar are also supported.

For more information on compressing and uncompressing zip files using the zipfile module, please refer to the following article.

- Related Articles: zipfile to compress and uncompress ZIP files in Python

Compress a directory (folder) into a zip file: shutil.make_archive()

The first argument, base_name, specifies the name of the zip file to be created (without extension), and the second argument, format, specifies the archive format. The following can be selected for the argument format:

- 'zip'
- 'tar'
- 'gztar'
- 'bztar'
- 'xztar'

The third argument, root_dir, specifies the path of the root directory of the directory to be compressed, and the fourth argument, base_dir, specifies the path of the directory to be compressed relative to root_dir. Both are set to the current directory by default. If base_dir is omitted, the entire root_dir will be compressed.

For example, suppose we have a directory with the following structure under data/temp:

dir
├── dir_sub
│   └── test_sub.txt
└── test.txt

import shutil

shutil.make_archive('data/temp/new_shutil', 'zip', root_dir='data/temp/dir')

The new_shutil.zip compressed with the above settings, omitting base_dir, will be decompressed as follows:

new_shutil
├── dir_sub
│   └── test_sub.txt
└── test.txt

Then, if the directory in root_dir is specified for base_dir:

shutil.make_archive('data/temp/new_shutil_sub', 'zip', root_dir='data/temp/dir', base_dir='dir_sub')

The new_shutil_sub.zip compressed with the above settings will be decompressed as follows:

dir_sub
└── test_sub.txt
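As a runnable sanity check of the behavior described above, the following sketch builds the sample tree in a temporary directory, archives it, and inspects the resulting zip. The paths are created on the fly with tempfile rather than under data/temp.

```python
import os
import shutil
import tempfile
import zipfile

base = tempfile.mkdtemp()

# Recreate the article's sample tree: dir/test.txt and dir/dir_sub/test_sub.txt
src = os.path.join(base, 'dir')
os.makedirs(os.path.join(src, 'dir_sub'))
open(os.path.join(src, 'test.txt'), 'w').close()
open(os.path.join(src, 'dir_sub', 'test_sub.txt'), 'w').close()

# Archive the whole of root_dir (base_dir omitted); the extension is
# added automatically and the full archive path is returned.
archive = shutil.make_archive(os.path.join(base, 'new_shutil'), 'zip', root_dir=src)

with zipfile.ZipFile(archive) as zf:
    names = zf.namelist()
# names contains 'test.txt' and 'dir_sub/test_sub.txt' (directory
# entries may also appear depending on the Python version)
```

Note that make_archive returns the full path of the created archive, which is convenient when base_name is built dynamically.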
https://from-locals.com/python-zip-dir-shutil-make-archive/
CC-MAIN-2022-27
refinedweb
289
58.18
CHI::Stats - Record and report per-namespace cache statistics

version 0.60

SYNOPSIS

    # Turn on statistics collection
    CHI->stats->enable();

    # Perform cache operations

    # Flush statistics to logs
    CHI->stats->flush();

    ...

    # Parse logged statistics
    my $results = CHI->stats->parse_stats_logs($file1, ...);

DESCRIPTION

CHI can record statistics, such as number of hits, misses and sets, on a per-namespace basis and log the results to your Log::Any logger. You can then parse the logs to get a combined summary. A single CHI::Stats object is maintained for each CHI root class, and tallies statistics over any number of CHI::Driver objects. Statistics are reported when you call the "flush" method. You can choose to do this once at process end, or on a periodic basis.

enable, disable, enabled

Enable, disable, and query the current enabled status. When stats are enabled, each new cache object will collect statistics. Enabling and disabling does not affect existing cache objects. e.g.

    my $cache1 = CHI->new(...);
    CHI->stats->enable();
    # $cache1 will not collect statistics
    my $cache2 = CHI->new(...);
    CHI->stats->disable();
    # $cache2 will continue to collect statistics

flush

Log all statistics to Log::Any (at Info level in the CHI::Stats category), then clear statistics from memory. There is one log message for each distinct triplet of root class, cache label, and namespace. Each log message contains the string "CHI stats:" followed by a JSON encoded hash of statistics. e.g.

    CHI stats: {"absent_misses":1,"label":"File","end_time":1338410398,
      "get_time_ms":5,"namespace":"Foo","root_class":"CHI",
      "set_key_size":6,"set_time_ms":23,"set_value_size":20,"sets":1,
      "start_time":1338409391}

parse_stats_logs

Accepts one or more stats log files as parameters. Parses the logs and returns a listref of stats hashes by root class, cache label, and namespace. e.g.

    [
        {
            root_class => 'CHI',
            label      => 'File',
            namespace  => 'Foo',
            absent_misses => 100,
            avg_compute_time_ms => 23,
            ...
        },
        {
            root_class => 'CHI',
            label      => 'File',
            namespace  => 'Bar',
            ...
        },
    ]

Lines with the same root class, cache label, and namespace are summed together. Non-stats lines are ignored. The parser will ignore anything on the line before the "CHI stats:" string, e.g. a timestamp. Each parameter to this method may be a filename or a reference to an open filehandle.

The following statistics are tracked in the logs:

- absent_misses - Number of gets that failed due to item not being in the cache
- compute_time_ms - Total time spent computing missed results in compute, in ms (divide by number of computes to get average), i.e. the amount of time spent in the code reference passed as the third argument to compute()
- computes - Number of compute calls
- expired_misses - Number of gets that failed due to item expiring
- get_errors - Number of caught runtime errors during gets
- get_time_ms - Total time spent in get operation, in ms (divide by number of gets to get average)
- hits - Number of gets that succeeded
- set_key_size - Number of bytes in set keys (divide by number of sets to get average)
- set_value_size - Number of bytes in set values (divide by number of sets to get average)
- set_time_ms - Total time spent in set operation, in ms (divide by number of sets to get average)
- sets - Number of sets
- set_errors - Number of caught runtime errors during sets

The following additional derived/aggregate statistics are computed by parse_stats_logs:

- misses = absent_misses + expired_misses
- gets = hits + misses
- avg_compute_time_ms = compute_time_ms / computes
- avg_get_time_ms = get_time_ms / gets
- avg_set_time_ms = set_time_ms / sets
- avg_set_key_size = set_key_size / sets
- avg_set_value_size = set_value_size / sets
- hit_rate = hits / gets

AUTHOR

Jonathan Swartz <swartz@pobox.com>

COPYRIGHT AND LICENSE

This software is copyright (c) 2012 by Jonathan Swartz. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.
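Since the flush log format is just a fixed "CHI stats:" prefix followed by a JSON hash, the summing that parse_stats_logs performs can be sketched in a few lines. The following is an independent illustration in Python, not the module's own Perl implementation; it sums the numeric counters per (root_class, label, namespace) triplet and, like the real parser, ignores anything on the line before the marker. Timestamp fields such as start_time/end_time would need separate handling in real use.

```python
import json
from collections import defaultdict

MARKER = "CHI stats: "

def aggregate_stats_lines(lines):
    """Sum numeric counters per (root_class, label, namespace) triplet."""
    totals = defaultdict(lambda: defaultdict(int))
    for line in lines:
        idx = line.find(MARKER)
        if idx == -1:
            continue  # non-stats lines are ignored
        stats = json.loads(line[idx + len(MARKER):])
        key = (stats.pop("root_class"), stats.pop("label"), stats.pop("namespace"))
        for name, value in stats.items():
            if isinstance(value, (int, float)):
                totals[key][name] += value
    return totals

log = [
    '2012-05-31 CHI stats: {"root_class":"CHI","label":"File","namespace":"Foo","hits":2,"sets":1}',
    'not a stats line',
    '2012-05-31 CHI stats: {"root_class":"CHI","label":"File","namespace":"Foo","hits":3,"sets":4}',
]
result = aggregate_stats_lines(log)
# result[("CHI", "File", "Foo")] now holds hits=5, sets=5
```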
http://search.cpan.org/dist/CHI/lib/CHI/Stats.pm
CC-MAIN-2017-26
refinedweb
584
52.8
1 hour migrations #1: SQS to GCP’s Cloud Pub/Sub

Message queues play an important part in today’s modern scalable, distributed infrastructure. Amazon Web Services (AWS) offers SQS (Simple Queue Service) and Google Cloud Platform (GCP) offers Cloud Pub/Sub. These are both managed, scalable, reliable services for publishing and consuming messages. In this post I would like to discuss, at a high level, how to migrate your application from Amazon’s SQS to Google’s Pub/Sub.

Dude, where is my queue?

Before we dive a bit deeper, let’s map some terms from SQS to Pub/Sub. While in SQS you work with queues, in Pub/Sub you work with topics and subscriptions. You might ask: wait, why did my queue become two separate entities? That has to do with the way Pub/Sub works. The actual queue-like paradigm in Pub/Sub is represented by a topic, but you won’t be able to do much without consumers creating and subscribing to one (or more) subscriptions for that topic.

Let’s look at the following SQS queue:

- Name: myqueue
- Region: us-east-1
- Message retention: 12 days
- FIFO: no
- Redrive policy: No (no dead letter queue)
- Maximum payload size: 256KB
- Message delivery delay: None

The first migration steps to Cloud Pub/Sub would be:

- Create a topic with the name "myqueue"
- Create a subscription with the name "mysubscription" for that topic.

Notice that:

- There is no need to define a region, since in Pub/Sub topics are global.
- Message retention in Pub/Sub (at the date of publication of this post) is 7 days, so we can’t get to 12 days like the configuration we had on SQS.

Publishing messages (i.e. Let them have it!)

In SQS, publishing a message is as easy as calling the SendMessage API call with as little as the queue URL and the message payload. In Cloud Pub/Sub, you can call the "publish" method with just the topic name and the payload.
Easy, but pay attention: the Cloud Pub/Sub APIs return a future that you need to wait on before you can get the ID of the sent message.

Receiving messages (i.e. Incoming!)

To poll messages from SQS you can use the AWS API in an easy manner:

response = client.receive_message(
    QueueUrl="",
    MaxNumberOfMessages=1,
    VisibilityTimeout=10,
)

Now, in Cloud Pub/Sub you can use message polling or message pushing (read more in the Cloud Pub/Sub docs). For simplicity purposes we will use the poll mechanism, but notice I am using the asynchronous flavor:

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(
    'myproject', 'mysubscription')

def message_callback(message):
    # Do something with the message here
    message.ack()

subscriber.subscribe(subscription_path, callback=message_callback)

Because the polling here is non-blocking, if this is the only thing the process does, you will need a main loop that keeps it alive.

Conclusion

In this post I covered a migration scenario from a simple SQS use to Cloud Pub/Sub. Of course there are much more complicated use cases, architectures and best practices, which I might cover in upcoming posts.

More reading:
Cloud Pub/Sub:
SQS and Cloud Pub/Sub:
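The message_callback above is a plain function, so its logic can be tried locally by handing it a stand-in object that mimics the received message's interface (a data payload plus an ack() method). The FakeMessage class below is purely illustrative and is not part of the google-cloud-pubsub library:

```python
class FakeMessage:
    """Illustrative stand-in for a received Pub/Sub message."""
    def __init__(self, data):
        self.data = data      # payload bytes, as on the real message object
        self.acked = False

    def ack(self):
        self.acked = True

def message_callback(message):
    # Decode the payload, do the work, then acknowledge the message
    # so the subscription does not redeliver it.
    text = message.data.decode('utf-8')
    print('received:', text)
    message.ack()

msg = FakeMessage(b'hello from SQS land')
message_callback(msg)
```

Keeping the callback free of client-library references like this makes the consuming side of the migration easy to unit-test.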
https://medium.com/google-cloud/1-hour-migrations-1-sqs-to-gcps-cloud-pub-sub-105bcac63318
CC-MAIN-2019-47
refinedweb
529
62.27
Initialization options for the column. Gets or sets the horizontal alignment of cells in the column or row. The default value for this property is null, which causes the grid to select the alignment automatically based on the column's dataType (numbers are right-aligned, Boolean values are centered, and other types are left-aligned). If you want to override the default alignment, set this property to 'left', 'right', 'center', or 'justify'. Gets or sets a value that indicates whether the user can move the column or row to a new position with the mouse. The default value for this property is true. Gets or sets a value that indicates whether cells in the column or row can be merged. The default value for this property is false. Gets or sets a value that indicates whether the user can resize the column or row with the mouse. The default value for this property is true. Gets or sets a value that indicates whether the user can sort the column by clicking its header. Gets or sets the name of the property the column is bound to. Gets or sets an ICellTemplateFunction or a template string to be used for generating the HTML content of data cells in this Column. Cell template strings use template literal syntax. The content string is generated using a scope of type ICellTemplateContext. ICellTemplateFunction functions take an argument of type ICellTemplateContext and return the HTML content to be displayed in the cell. For example: // simple/default rendering with template string col.cellTemplate = '${value}:${col.format}'; // conditional formatting with template string col.cellTemplate = '<span class=${value > 40000 ? "big-val" : "small-val"}>${text}</span>'; // conditional formatting with ICellTemplateFunction: col.cellTemplate = (ctx: ICellTemplateContext) => { return '<span class="{cls}">{val}</span>' .replace('{cls}', ctx.value > 40000 ? 
'big-val' : 'small-val') .replace('{val}', ctx.text); }; Notice that the cell templates are regular strings, not actual JavaScript templates. Therefore, they are defined using regular quotes (single or double) as oppsed to the back-quotes used by JavaScript template strings. The cellTemplate property provides a simpler (but less powerful) alternative than the formatItem event or the cell templates available in the Wijmo interop modules. When using cell templates, you should still set the column's binding and format properties. They will be used in edit mode and to support copy/paste/export operations. Cell templates are used only to render cell data, and have no effect on editing. If you want to customize the cell editors, use the editor property. Gets the ICollectionView bound to this column or row. Gets or sets a CSS class name to use when rendering data (non-header) cells in the column or row. Gets or sets a CSS class name to use when rendering all cells (data and headers) in the column or row. Gets a string that describes the current sorting applied to the column. Possible values are '+' for ascending order, '-' for descending order, or null for unsorted columns. Gets the index of this column in the sort descriptions array for the grid's collection view. By default, data-mapped cells have drop-down lists that can be used for quick editing. You can change the type of editor by setting the column's dataMapEditor property. The default editor type, DataMapEditor.DropDownList, requires the wijmo.input module to be loaded. Gets or sets a value that indicates the type of editor to use when editing data-mapped cells in this column or row. The DataMapEditor.DropDownList setting (the default value) adds drop-down buttons to cells to columns that have a dataMap and are not read-only. Clicking on the drop-down buttons causes the grid to show a list where users can select the value for the cell. 
The DataMapEditor.RadioButtons setting causes the grid to show radio buttons for each option. The buttons can be clicked with the mouse or keyboard (by pressing each option's initial letter or the space key to cycle through the options.) The default value for this property is DataMapEditor.DropDownList. The drop-down list is enabled only if the wijmo.input module to be loaded. Gets or sets the type of value stored in the column or row. Values are coerced into the proper type when editing the grid. Gets or sets the ID of an element that contains a description of the column. The ID is used as the value of the aria-describedby attribute for the column header element. Gets or sets a CSS class name to add to drop-downs in this column or row. The drop-down buttons are shown only if the column has a dataMap set and is editable. Clicking on the drop-down buttons causes the grid to show a list where users can select the value for the cell. Cell drop-downs require the wijmo.input module to be loaded. The input control is typically one of the Wijmo input controls. It should be compatible with the column's data type. For example, this code replaces the built-in editor for all date columns on a grid with a single InputDate control: import { InputDate } from '@grapecity/wijmo.input'; let inputDate = new InputDate(document.createElement('div')); theGrid.columns.forEach(col => { if (col.DataType == DateType.Date) { col.editor = inputDate; } }) And this code replaces the built-in editor for all data-mapped columns on a grid with AutoComplete controls: import { AutoComplete } from '@grapecity/wijmo.input'; theGrid.columns.forEach(col => { let map = col.dataMap; if (map) { col.editor = new AutoComplete(document.createElement('div'), { itemsSource: map.collectionView, displayMemberPath: map.displayMemberPath, selectedValuePath: map.selectedValuePath }); } }); Notice how the example above uses the column's dataMap property to initialize the AutoComplete. 
In many cases you may also want to use column properties such as format and isRequired to initialize your custom editors. The example below shows how you can use the editor property to edit grid items with various Wijmo input controls: Gets or sets the text displayed in the column header. Gets the index of the column or row in the parent collection. Gets or sets the "type" attribute of the HTML input element used to edit values in this column or row. By default, this property is set to "tel" for numeric columns, and to "text" for all other non-boolean column types. The "tel" input type causes mobile devices to show a numeric keyboard that includes a negative sign and a decimal separator. Use this property to change the default setting if the default does not work well for the current culture, device, or application. In these cases, try setting the property to "number" or simply "text." Gets or sets a value that indicates whether cells in this column or row contain HTML content rather than plain text. This property only applies to regular cells. Row and column header cells contain plain text by default. If you want to display HTML in column or row headers, you must use the FlexGrid.formatItem event and set the cell's innerHTML content in code. The default value for this property is false. Gets or sets a value that indicates whether cells in the column or row can be edited. The default value for this property is true. Gets or sets a value that determines whether values in this column or row are required. By default, this property is set to null, which means values are required, but non-masked string columns may contain empty strings. When set to true, values are required and empty strings are not allowed. When set to false, null values and empty strings are allowed. Gets or sets a value that indicates whether the column or row is selected. Gets or sets a mask to use while editing values in this column or row. 
The mask format is the same used by the wijmo.input.InputMask control. If specified, the mask must be compatible with the value of the format property. For example, the mask '99/99/9999' can be used for entering dates formatted as 'MM/dd/yyyy'. Gets or sets the maximum number of characters that the can be entered into cells in this column or row. This property is set to null by default, which allows entries with any number of characters. Gets or sets the maximum width (in pixels) of the column. This property is set to null by default, which means there is no maximum width. Gets or sets the minimum width of the column. This property is set to null by default, which means there is the minimum width is zero. Gets or sets a value that indicates whether the content of cells in this column or row should wrap at new line characters (\n). The default value for this property is false. Gets or sets the name of the column. The column name can be used to retrieve the column using the FlexGrid.getColumn method. Gets the position of the column or row in pixels. Gets or sets a value that determines whether the grid should optimize performance over precision when auto-sizing this column. Setting this property to false disables quick auto-sizing for this column. Setting it to true enables the feature, subject to the value of the grid's wijmo.grid.FlexGrid.quickAutoSize property. Setting it to null (the default value) enables the feature for columns that display plain text and don't have templates. Gets the render size of the column or row. This property accounts for visibility, default size, and min and max sizes. Gets the render width of the column. The value returned takes into account the column's visibility, default size, and min and max sizes. Gets or sets the size of the column or row. Setting this property to null or negative values causes the element to use the parent collection's default size. Gets or sets the name of the property to use when sorting this column. 
Use this property in cases where you want the sorting to be performed based on values other than the ones specified by the binding property. Setting this property is null causes the grid to use the value of the binding property to sort the column. Gets or sets a value that indicates whether the column or row is visible. Gets or sets the width of the column. Column widths may be positive numbers (sets the column width in pixels), null or negative numbers (uses the collection's default column width), or strings in the format '{number}*' (star sizing). The star-sizing option performs a XAML-style dynamic sizing where column widths are proportional to the number before the star. For example, if a grid has three columns with widths "100", "*", and "3*", the first column will be 100 pixels wide, the second will take up 1/4th of the remaining space, and the last will take up the remaining 3/4ths of the remaining space. Star-sizing allows you to define columns that automatically stretch to fill the width available. For example, set the width of the last column to "*" and it will automatically extend to fill the entire grid width so there's no empty space. You may also want to set the column's minWidth property to prevent the column from getting too narrow. Gets or sets a value that indicates whether the content of cells in this column or row should wrap to fit the available column width. The default value for this property is false. Gets the actual alignment if items in the column or row. Returns the value of the align property if it is not null, or selects the alignment based on the column's dataType. Row that contains the cell being checked. A string representing the cell alignment. Gets a value that determines whether values in the column/row are required. Returns the value of the isRequired property if it is not null, or determines the required status based on the column's dataType. 
By default, string columns are not required unless they have an associated dataMap or mask; all other data types are required. Row that contains the cell being checked. True if the value is required, false otherwise. Raises the gridChanged event. Marks the owner list as dirty and refreshes the owner grid. Represents a column on the grid.
https://www.grapecity.com/wijmo/api/classes/wijmo_grid.column.html
CC-MAIN-2020-34
refinedweb
2,047
65.93
6634/python-whitespace-for-indenting

Related answers (truncated):

- Try the UUID module of Python. For example, ...READ MORE
- Hope this will help you...Python Tutorial READ MORE
- The slice notation is [start:end:step]. Step = ...READ MORE
- You could use a dictionary: def f(x): ...READ MORE
- You can also use the random library's ...READ MORE
- Syntax : list.count(value) Code: colors = ['red', 'green', ...READ MORE
- Enumerate() method adds a counter to an ...READ MORE
- You can simply use the built-in function in ...READ MORE
- Python has no unary operators. So use ...READ MORE
- A more pythonic way would be this: while ...READ MORE
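Two of the truncated snippets above mention the slice notation [start:end:step] and the list.count(value) syntax; a short runnable illustration of both:

```python
colors = ['red', 'green', 'blue', 'yellow', 'purple']

print(colors[1:4])    # ['green', 'blue', 'yellow'] - indexes 1 through 3
print(colors[::2])    # ['red', 'blue', 'purple'] - every second element
print(colors[::-1])   # ['purple', 'yellow', 'blue', 'green', 'red'] - reversed copy
print(colors.count('red'))  # 1 - occurrences of 'red' in the list
```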
https://www.edureka.co/community/6634/python-whitespace-for-indenting?show=6635
CC-MAIN-2022-27
refinedweb
126
78.85
Contributing to Open Source: Gatekeeper Case Study
By Bruno Skvorc. An example of the library's use is clearly demonstrated in our post about [the skeleton no-framework project][nowf], which is a sample app composed entirely of Composer packages and acting like a framework-powered app, but completely free of any framework coupling. This post isn't about Gatekeeper per se, though. It's about contributing to open source, and going about it the right way. In this tutorial, we'll extend Gatekeeper with a count feature. Currently, in order to find out the total number of users in the database one would have to fetch them all, then count them – either that or write a query to do so manually. But it might be better if this were built into the adapter interface so that it's not only a native feature, but also a requirement for future database engines to be added. Step 1: Ask the owner The first step of contributing to open source is doing due diligence. This can be as simple as asking the repo owner about the status of this feature, in order to make sure it isn't already planned and is, in fact, desirable. An issue in the repo is often enough, as evident in this case. Step 2: Fork, clone, test Note: if you'd like to follow along, please clone an older version of Gatekeeper which doesn't have this feature yet. This one should do. First, let's fork the repo so we can start working on it. Next, we need to set up a development environment in which to work on the package. Naturally, we use our trusty Homestead Improved for this. Once the VM has been set up and SSHed into, we can clone our fork, install dependencies and run tests: git clone cd gatekeeper composer install vendor/bin/phpunit All the tests should be green: At this point, it's preferred to make a separate branch for all the changes we'll be making.
git checkout -b "feature-count" Step 3: Plan of Action Gatekeeper currently only supports MySQL – this makes our job a bit easier, but still not trivial. Despite only supporting a single type of datasource (for now), abstract and interface classes still need to be updated, seeing as they’re written with future compatibility with different data sources in mind. We will, thus, need to modify: Gatekeeper/DataSource– the abstract DataSource class DataSource/MySQL– the MySQL datasource which contains the actual methods we use DataSource/Stub– to upgrade the stub with which to write other datasources, so other contributors know they need a countmethod, too We also need to create a new Count handler, because Gatekeeper uses magic static calls to create, find, update and delete entities, forwarding them to the appropriate handler depending on the name of the invoked method. For an example, see the __callStatic magic method in Gatekeeper/Gatekeeper.php, and how it defers method calls to handlers like Handler/Create.php or Handler/FindBy.php, etc. Step 4: Just Do It ™ Delegating the static call To prepare the foundation for our custom Count handler, we delegate the static call to it and pass forward the argument and the data source. 
This is all done by simply adding another elseif block to Gatekeeper::__callStatic: } elseif ($action == 'count') { $action = new \Psecio\Gatekeeper\Handler\Count($name, $args, self::$datasource); } Since we added a new action, we need to modify the static property $actions as well: /** * Allowed actions * @var array */ private static $actions = array( 'find', 'delete', 'create', 'save', 'clone', 'count' ); Count handler We then create the handler in the file Psecio/Gatekeeper/Handler/Count.php: <?php namespace Psecio\Gatekeeper\Handler; class Count extends \Psecio\Gatekeeper\Handler { /** * Execute the object/record count handling * * @throws \Psecio\Gatekeeper\Exception\ModelNotFoundException If model type is not found * @return int Count of entities */ public function execute() { $args = $this->getArguments(); $name = $this->getName(); $model = '\\Psecio\\Gatekeeper\\' . str_replace('count', '', $name) . 'Model'; if (class_exists($model) === true) { $instance = new $model($this->getDb()); $count = (!$args) ? $this->getDb()->count($instance) : $this->getDb()->count($instance, $args[0]); return (int)$count['count']; } else { throw new \Psecio\Gatekeeper\Exception\ModelNotFoundException( 'Model type ' . $model . ' could not be found' ); } } } It’s almost identical to the Create handler, except for the unreachable return statement at the bottom which I’ve removed, small changes in the body, and a minor alteration to the class’ description. DataSource and Stub Next, let’s get the easy ones out of the way. 
In Psecio/Gatekeeper/DataSource/Stub.php, we add a new blank method: /** * Return the number of entities in DB per condition or in general * * @param \Modler\Model $model Model instance * @param array $where * @return bool Success/fail of action * @internal param array $where "Where" data to locate record */ public function count(\Modler\Model $model, array $where = array()){} We then add a similar signature to the abstract: /** * Return the number of entities in DB per condition or in general * * @param \Modler\Model $model Model instance * @param array $where * @return bool Success/fail of action * @internal param array $where "Where" data to locate record */ public abstract function count(\Modler\Model $model, array $where = array()); With all this out of the way, it's time to write the actual logic that takes care of counting. MySQL It's time to change DataSource/MySQL.php now. We'll add the count method right under the find method: /** * Find count of entities by where conditions. * All where conditions applied with AND * * @param \Modler\Model $model Model instance * @param array $where Data to use in "where" statement * @return array Fetched data */ public function count(\Modler\Model $model, array $where = array()) { $properties = $model->getProperties(); list($columns, $bind) = $this->setup($where); $update = array(); foreach ($bind as $column => $name) { // See if we need to transfer it over to a column name if (array_key_exists($column, $properties)) { $column = $properties[$column]['column']; } $update[] = $column.' = '.$name; } $sql = 'select count(*) as `count` from '.$model->getTableName(); if (!empty($update)) { $sql .= ' where '.implode(' and ', $update); } $result = $this->fetch($sql, $where, true); return $result; }
see UserModel::$properties) - Separate the values as passed in via $whereinto columns and their values - Build the WHEREpart of the query by looking into the properties, seeing if any of them have different names to those requested (e.g. requested FirstNamehas a database counterpart of - Build whole query - Execute with forced singlemode on true(see fetchmethod) because we only expect to get a single value back – an integer indicating the count. - Return the count. Step 5: Testing Ordinarily, there would be a unit testing stage. This is out of the scope of this tutorial, and I encourage you to look at this tutorial instead. If there’s sufficient interest in seeing unit tests developed for this package, we will, of course, accommodate. Let us know in the comments. Implementing the Experimental Version Let’s do a manual test. First, we’ll commit and push our work online. git add -A git commit -m "Adding count feature" git push origin feature-count The changes will now be in our fork, online. Then, let’s go ahead and create a brand new project in another folder with the following composer.json file: { "require": { "psecio/gatekeeper": "dev-master" }, "repositories": [ { "type": "vcs", "url": "" } ] } Using Composer’s Repositories feature, we can make sure that Composer fetches our copy of Gatekeeper instead of the original, while still thinking it has the original – this allows us to test our changes as if in a real project using Gatekeeper – arguably a more realistic testing scenario than unit tests would be at this point. Save and exit this file, and then run: composer require symfony/var-dumper --dev This will both install the above defined custom package, and Symfony’s VarDumper for easier debugging. You might get asked for a Github token during installation – if that’s the case, just follow the instructions. Lo and behold, if we look inside the Gatekeeper main class now, we’ll see that our count updates are all there. 
Next, let’s follow the typical Gatekeeper installation procedure now by executing vendor/bin/setup.sh and following instructions. If you’re using Homestead Improved, just enter the default values ( localhost, homestead, homestead, secret). Testing Now let’s create an index.php file which we’ll use for testing: <?php require_once 'vendor/autoload.php'; use Psecio\Gatekeeper\Gatekeeper; Gatekeeper::init('./'); $groups = [ 'admin' => 'Administrators', 'users' => 'Regular users' ]; foreach ($groups as $name => $description) { if (!Gatekeeper::findGroupByName($name)) { Gatekeeper::createGroup([ 'name' => $name, 'description' => $description ]); } } We activate the autoloader, initialize Gatekeeper (it uses the .env file from the root of our folder for credentials), and set up two default groups. Then, let’s go ahead and test counting on the groups: dump(Gatekeeper::countGroup()); dump(Gatekeeper::countGroup(['id' => 1])); Sure enough, it works. Let’s test users now. Gatekeeper::countUser(); This accurately produces a count of 0. A common use case for out-of-the-box apps is seeing if there are no users in the database when a user is being created, and then giving that new user Admin privileges. The first user of a system is often considered its owner, so it’s a convenience for setting up super-user accounts. Let’s do that. Gatekeeper::createUser([ 'username' => 'bruno-admin', 'first_name' => 'Bruno', 'last_name' => 'Skvorc', 'email' => 'bruno.skvorc@sitepoint.com', 'password' => '12345', 'groups' => (Gatekeeper::countUser()) ? ['users'] : ['admin'] ]); Gatekeeper::createUser([ 'username' => 'reg-user', 'first_name' => 'Reggie', 'last_name' => 'User', 'email' => 'reg@example.com', 'password' => '12345', 'groups' => (Gatekeeper::countUser()) ? 
['users'] : ['admin'] ]); dump(Gatekeeper::findUserByUsername('bruno-admin')->groups[0]->name); dump(Gatekeeper::findUserByUsername('reg-user')->groups[0]->name); Sure enough, the output is accurate – the first group printed on screen is admin, and the second is users. With the count mechanics correctly implemented, it’s time to submit a pull request. Submitting a PR First, we go to our own fork of the project. Then, we click the “New Pull Request” button. On the next screen everything should be green – the UI should say “Able to merge”: Once we click the “Create Pull Request” button, we should add a title and a description as detailed as possible, preferably referencing the issue from Step 1 above. That’s it – pressing the “Create Pull Request” button wraps things up – all we can do now is wait for feedback from the project owner. Conclusion This was a case study of contributing to a relatively popular PHP package. I hope it was useful as a learning material and a basic guide into giving back and the process of adding features into something you use. It’s important to note that while this is a common process, as I mentioned before, it’s rather uncommon in that it had no unit tests. However, the original library doesn’t test the handlers either, and making a mocking framework for the handlers and the data they can interact with would be too out of scope for this particular post. Again, if you’re curious about that process and would like us to cover it in depth, please do let us know in the comments below that like button. It should also be noted that while this was a very easy feature upgrade, sometimes things don’t go according to plan and the changes won’t work immediately in the new project where they’re being tested. In that case, changes can, for example, be made in the vendor folder of the test project until everything works, copied over to the development project, pushed online and then re-installed into the test project to make sure everything still works. 
Have your own contribution stories to share? Let us know – we’d love to explore them! - 1 Make Your Own Social Network, Game Server, or Knowledgebase! - Sourcehunt - 2 Yarn vs npm: Everything You Need to Know - 3 What Are the Workflows of Prominent PHP Community Members? - 4 How Your Company Can Benefit from Contributing to Open Source - 5 Sourcehunt 2016.8 - Contribute to Regression, Regex, ORMs, and More
https://www.sitepoint.com/contributing-to-open-source-gatekeeper-case-study/
I have tried compiling and installing from the 2.0.0 release source, the SVN head, and from a PPA. After each, I still get the following from a Python console: >>> import cv Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: No module named cv Starting with the "new" Python interface in 2.0, that import is supposed to work. I downloaded the python-opencv package in Lucid Lynx and this worked: from opencv import cv from opencv import highgui Then you should be able to check what functions are available by using dir(cv) and dir(highgui).
http://superuser.com/questions/113577/installing-opencv-2-0-on-ubuntu-karmic-still-get-importerror-from-import-cv-i/138987#138987
What the Heck is Qt Quick? Hey Brad, I keep hearing about "Qt Quick"; what the heck is that? Qt Quick is to Qt as WPF is to .NET. Qt Quick is a GUI architecture that extends the Qt Framework. Hey wait a minute, I could make user interfaces in Qt before Qt Quick, so why do I need a new architecture to make UIs? Well you're right; just like in stock .NET (that is, not WPF), Qt comes with a GUI architecture which, for lack of a better name, let's call Forms and Controls. That is, in Qt you can load up pre-designed GUI elements and re-arrange them to suit your needs. Think forms like in construction; if you're building something out of concrete you first need to create a form that you can pour the concrete into so it will hold its shape. The Forms and Controls architecture you get in Qt is much the same; that is, you get access to a number of pre-existing forms (Qt calls these Widgets) that you can pour your business logic into. Sure, these forms are highly customizable, but at the end of the day you're starting with a template and customizing it. Sometimes there isn't a pre-existing form that suits your needs and you have to start from scratch. Since there is no shortage of companies whose whole business model is built around developing custom controls from scratch, I think it's safe to say building your own control for a Forms and Controls GUI architecture isn't the easiest thing in the world. Enter Qt Quick, a whole new GUI architecture which I like to call a Declarative GUI Architecture. In this environment you're not starting with pre-existing templates but rather you have a blank canvas and a number of elements with which to craft your own controls/widgets. What differs here from Forms and Controls is that the elements are primitives and, most importantly, just about every element can be placed inside any other element to create new elements (sort of like an artist's palette of primary colours). In simple terms, you can put stuff inside other stuff to make new stuff.
This nonrestrictive style of building GUIs really lends itself to one's creative side. Want a button that, instead of text or an icon, has a check box? Why not; you can make that unique custom control in literally a few lines of code. Why you would want such a control I have no idea, but the point is you can make it really easily in this type of GUI architecture. Ok so you explained the what, now how about the how? Qt Quick is the architecture you must live within to take advantage of the P-S-I-O-S (Put-Stuff-In-Other-Stuff) declarative nature of this architecture; how you actually declare your design is with a scripting language called QML (Qt Meta Language or Qt Modeling Language). QML QML has a JavaScript styling to it where each UI element is defined by a name, an open curly brace, optional attributes, then a closing curly brace. Ending each line with a semi-colon is optional but required if you want to write your QML in one line. More than having a JavaScript syntax, it actually extends a subset of the JavaScript engine; that is, you can use and define JavaScript functions right inside your QML. I was thrown a bit before I realized this; I had a Qt QDate object being passed to QML and I needed to define the date formatting in the QML. I didn't know what kind of object it was inside the QML as it wasn't a QDate object anymore. FYI, it becomes a JavaScript Date object. The basic UI element in QML is called Item. Item { id: myItem } An Item actually has nothing to render but is good for grouping or limiting scope (more on scope in my Qt Quick 101 tutorial). If I want something to show up in the UI I could use a Rectangle, which really is your basic visual building block.
Item {
    id: myItem
    width: 100
    height: 75

    Rectangle {
        anchors.fill: parent
        color: "blue"

        Text {
            id: myText
            anchors.left: parent.left
            anchors.leftMargin: 10
            anchors.top: parent.top
            anchors.topMargin: 10
            color: "yellow"
            text: "hello world"
        }
    }
}

Here if I were to render this I would see a blue rectangle 100 pixels by 75 pixels with the yellow text "hello world" 10 pixels down the y axis and 10 pixels in along the x axis, in the top left corner. QML being a script, it does not get compiled. You save your QML in a plain text file with the extension *.qml. In your C++ main method you initialize the Qt Quick engine and point it at a QML file to parse. In your main QML script you can reference other QML files with the import statement. By using special macros in your C++ you can expose properties, methods, and even enumerations to your QML. Signals and Slots are automatically exposed to your QML (i.e. you can connect to them in your QML without any special syntax to let your QML know about them). With this you can put your backend business logic in your C++ and your UI logic in your QML. Here is a simple sample application to get a sense of Qt Quick. It's not terribly creative but it will show you some basics. You can download the sample app and see it run (you will need OpenGL 2.0 or greater and the Visual C++ redistributable for Visual Studio 2012 x64 in order to run it). To learn more check out my Qt Quick 101 tutorial.
Download the QMLSampleApplication

QMLSampleApp.qml:

import QtQuick 2.0

Rectangle {
    width: 200
    height: 100
    color: "black"

    Column {
        anchors.fill: parent
        anchors.margins: 20
        spacing: 10

        Rectangle {
            anchors.left: parent.left
            anchors.right: parent.right
            height: myText.implicitHeight + 10
            border.color: "gray"
            color: "white"

            Text {
                id: myText
                anchors.left: parent.left
                anchors.leftMargin: 10
                anchors.right: parent.right
                anchors.verticalCenter: parent.verticalCenter
                color: "black"
            }
        }

        Rectangle {
            anchors.left: parent.left
            anchors.right: parent.right
            height: buttonText.implicitHeight + 10
            border.color: "darkgray"
            color: buttonMouseArea.pressed ? Qt.darker("gray", 1.5) : "gray"

            Text {
                id: buttonText
                anchors.verticalCenter: parent.verticalCenter
                anchors.horizontalCenter: parent.horizontalCenter
                text: "Click Me!"
            }

            MouseArea {
                id: buttonMouseArea
                anchors.fill: parent
                onClicked: {
                    if (myText.text === "how are you?") {
                        myText.text = "hello world"
                    } else if (myText.text === "hello world") {
                        myText.text = "how are you?"
                    } else {
                        myText.text = "hello world"
                    }
                }
            }
        }
    }
}

main.cpp:

#include <QtGui/QGuiApplication>
#include "qtquick2applicationviewer.h"

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);

    QtQuick2ApplicationViewer viewer;
    viewer.setMainQmlFile(QStringLiteral("qml/QMLSampleApp/QMLSampleApp.qml"));
    viewer.showExpanded();

    return app.exec();
}

So that in short is what the heck Qt Quick is: a GUI architecture that extends the Qt framework and, in my humble opinion, the only GUI architecture to choose when developing a Qt application. If you have any questions or comments feel free to leave them below and I'll respond when time permits. Until next time, think imaginatively and design creatively.
http://imaginativethinking.ca/heck-qtquick/
Graphics::ColorNames - defines RGB values for common color names use Graphics::ColorNames 2.10; $po = new Graphics::ColorNames(qw( X )); $rgb = $po->hex('green'); # returns '00ff00' $rgb = $po->hex('green', '0x'); # returns '0x00ff00' $rgb = $po->hex('green', '#'); # returns '#00ff00' $rgb = $po->rgb('green'); # returns '0,255,0' @rgb = $po->rgb('green'); # returns (0, 255, 0) $rgb = $po->green; # same as $po->hex('green'); tie %ph, 'Graphics::ColorNames', (qw( X )); $rgb = $ph{green}; # same as $po->hex('green'); This module provides a common interface for obtaining the RGB values of colors by standard names. The intention is to (1) provide a common module that authors can use with other modules to specify colors by name; and (2) free module authors from having to "re-invent the wheel" whenever they decide to give the users the option of specifying a color by name rather than RGB value. For example, use Graphics::ColorNames 2.10; use GD; $pal = new Graphics::ColorNames; $img = new GD::Image(100, 100); $bgColor = $img->colorAllocate( $pal->rgb('CadetBlue3') ); Although this is a little "bureaucratic", the meaning of this code is clear: $bgColor (or background color) is 'CadetBlue3' (which is easier for one to understand than 0x7A, 0xC5, 0xCD). The variable is named for its function, not form (i.e., $CadetBlue3) so that if the author later changes the background color, the variable name need not be changed. You can also define "Custom Color Schemes" for specialised palettes for websites or institutional publications: $color = $pal->hex('MenuBackground'); As an added feature, a hexadecimal RGB value in the form of #RRGGBB, 0xRRGGBB or RRGGBB will return itself: $color = $pal->hex('#123abc'); # returns '123abc' The standard interface (prior to version 0.40) is through a tied hash: tie %pal, 'Graphics::ColorNames', @schemes; where %pal is the tied hash and @schemes is a list of color schemes.
A valid color scheme may be the name of a color scheme (such as X or a full module name such as Graphics::ColorNames::X), a reference to a color scheme hash or subroutine, or to the path or open filehandle for a rgb.txt file. As of version 2.1002, one can also use Color::Library dictionaries: tie %pal, 'Graphics::ColorNames', qw(Color::Library::Dictionary::HTML); This is an experimental feature which may change in later versions (see "SEE ALSO" for a discussion of the differences between modules). Multiple schemes can be used: tie %pal, 'Graphics::ColorNames', qw(HTML Netscape); In this case, if the name is not a valid HTML color, the Netscape name will be used. One can load all available schemes in the Graphics::ColorNames namespace (as of version 2.0): use Graphics::ColorNames 2.0, 'all_schemes'; tie %NameTable, 'Graphics::ColorNames', all_schemes(); When multiple color schemes define the same name, then the earlier one listed has priority (however, hash-based color schemes always have priority over code-based color schemes). When no color scheme is specified, the X-Windows scheme is assumed. Color names are case insensitive, and spaces or punctuation are ignored. So "Alice Blue" returns the same value as "aliceblue", "ALICE-BLUE" and "a*lICEbl-ue". (If you are using color names based on user input, you may want to add additional validation of the color names.) use Graphics::ColorNames 0.40; $obj->load_scheme( $scheme ); Loads a scheme dynamically. The scheme may be any hash or code reference. $hex = $obj->hex($name, $prefix); Returns a 6-digit hexadecimal RGB code for the color. If an optional prefix is specified, it will prefix the code with that string. For example, $hex = $obj->hex('blue', '#'); # returns "#0000ff" @rgb = $obj->rgb($name); $rgb = $obj->rgb($name, $separator); If called in a list context, returns a triplet. If called in a scalar context, returns a string separated by an optional separator (which defaults to a comma).
For example, @rgb = $obj->rgb('blue'); # returns (0, 0, 255) $rgb = $obj->rgb('blue', ','); # returns "0,0,255" Since version 2.10_02, the interface will assume method names are color names and return the hex value, $obj->black eq $obj->hex("black") Method names are case-insensitive, and underscores are ignored. These functions are not exported by default, so must be specified to be used: use Graphics::ColorNames qw( all_schemes hex2tuple tuple2hex ); @schemes = all_schemes(); Returns a list of all available color schemes installed on the machine in the Graphics::ColorNames namespace. The order has no significance. ($red, $green, $blue) = hex2tuple( $colors{'AliceBlue'}); $rgb = tuple2hex( $red, $green, $blue ); The following schemes are available by default: About 750 color names used in X-Windows (although about 90+ of them are duplicate names with spaces). 16 common color names defined in the HTML 4.0 specification. These names are also used with older CSS and SVG specifications. (You may want to see Graphics::ColorNames::SVG for a complete list.) 100 color names associated with Netscape 1.1 (I cannot determine whether they were once usable in Netscape or were arbitrary names for RGB values -- many of these names are not recognized by later versions of Netscape). This scheme may be deprecated in future versions, but available as a separate module. 16 common color names used with Microsoft Windows and related products. These are actually the same colors as the "HTML" scheme. (Schemes with a different base namespace will require the full namespace to be given.) The color names must be in all lower-case, and the RGB values must be 24-bit numbers containing the red, green, and blue values in most-significant to least-significant byte order.
An example naming schema is below: package Graphics::ColorNames::Metallic; sub NamesRgbTable() { use integer; return { copper => 0xb87333, gold => 0xcd7f32, silver => 0xe6e8fa, }; } You would use the above schema as follows: tie %colors, 'Graphics::ColorNames', 'Metallic'; The behavior of specifying multiple keys with the same name is undefined as to which one takes precedence. As of version 2.10, case, spaces and punctuation are ignored in color names. So a name like "Willy's Favorite Shade-of-Blue" is treated the same as "willysfavoriteshadeofblue". (If your scheme does not include duplicate entries with spaces and punctuation, then the minimum version of Graphics::ColorNames should be 2.10 in your requirements.) An example of an additional module is the Graphics::ColorNames::Mozilla module by Steve Pomeroy. Since version 1.03, NamesRgbTable may also return a code reference: package Graphics::ColorNames::Orange; sub NamesRgbTable() { return sub { my $name = shift; return 0xffa500; }; } See Graphics::ColorNames::GrayScale for an example.
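The 24-bit packing convention that NamesRgbTable relies on (red in the most significant byte, blue in the least) is easy to verify outside Perl. Here is a Python equivalent of the module's tuple2hex/hex2tuple helpers, written only to illustrate that convention, not to replace the module:

```python
def tuple2hex(red, green, blue):
    """Pack an (R, G, B) triplet into an RRGGBB hex string."""
    return f"{red:02x}{green:02x}{blue:02x}"

def hex2tuple(value):
    """Unpack a 24-bit RRGGBB value (int or hex string) into a triplet."""
    if isinstance(value, str):
        value = int(value, 16)
    return (value >> 16) & 0xFF, (value >> 8) & 0xFF, value & 0xFF

print(tuple2hex(0xB8, 0x73, 0x33))  # b87333 (the "copper" entry above)
print(hex2tuple(0xB87333))          # (184, 115, 51)
```

Any value in a NamesRgbTable hash, such as the Metallic example's 0xb87333, round-trips through these two functions.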
Steve Pomeroy <xavier at cpan.org>, "chemboy" <chemboy at perlmonk.org> and "magnus" <magnus at mbox604.swipnet.se> who pointed out issues with various color schemes. Feedback is always welcome. Please use the CPAN Request Tracker at to submit bug reports. There is a Sourceforge project for this package at. If you create additional color schemes, please make them available separately in CPAN rather than submit them to me for inclusion into this module. Copyright (c) 2001-2008 Robert Rothenberg. All rights reserved. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
http://search.cpan.org/~rrwo/Graphics-ColorNames/lib/Graphics/ColorNames.pm
This is such a simple issue, it's embarrassing to have to ask... but I've been having trouble with this theoretical issue of defining string arrays (and even integers!) outside of their class. Here's some code:

#include <iostream>
#include <string>
using namespace std;

class desu
{
public:
    string ok[4]; // Give string 4 elements of space
} s;

s.ok[] = {"one", "two", "three", "four"}; // Calling upon object "s" to access class "desu"

int main()
{
    cout << "Write some generic stuff here";
    return 0;
}

Now, do I actually need to define a public function WITHIN class 'desu' (privatizing the variables, of course) and, outside the class, call upon that function just to fill my string array? Granted, I've even tried this with integers (just to make sure that I'm not screwing up with string arrays) and I still get compile errors, e.g.:

#include <iostream>
using namespace std;

class desu
{
public:
    int lol;
} s;

s.lol = 5;

int main()
{
    cout << "Write some generic stuff here";
    return 0;
}

Thank you dearly for any help.
https://www.daniweb.com/programming/software-development/threads/169244/defining-string-arrays-outside-of-their-class
09-03-2021 11:02 AM Hello I am trying to use event structures for a user interface controlled program. I have simple next buttons to go to the next stage which are working fine. However, I am having an issue when trying to implement an option button. So the user doesn't have to use this functionality (to open a separate settings page) but it's available should they wish to. I have attached a simplified VI to show how I am trying to achieve it. I have the option OK button which if pressed, will execute and then the VI will complete when STOP is pressed. If the user should not wish to use the OK button, they can press stop and the VI will end because the STOP button ends the first event, and I have created a second case in the second event structure which does nothing when the STOP is pressed. Without this, the VI would not end because it would wait for the optional button to be pressed. This works, however there are 2 issues: 1) I believe it's bad practice 2) It is freezing the front panel, even though the structure has completed (I know this as the next set of code starts functioning) and even though I have unticked the "freeze front panel whilst executing" option. NB: these errors do not show in the simplified version but I just wanted to demonstrate the code I am using. Thanks in advance for any guidance, this is my first time working with event structures and I have not been able to find any good resources to explain them for more advanced use cases. 09-03-2021 11:15 AM You should only use a single event structure. You also need to put it in a while loop. Do not run your VI using the "continuous Run" button. That is actually running your VI over and over, restarting it every time. It is not running it until you stop the program like you would like. 09-03-2021 01:58 PM @of2006 wrote: NB: these errors do not show in the simplified version but I just wanted to demonstrate the code I am using. You obviously simplified it "ad absurdum".
This is no longer viable code and since it no longer shows the error, it is pointless to troubleshoot. Right? Please attach a fully functional version of your code that shows the problem. Preferably, it should be a simple state machine with one toplevel while loop and an event structure (possibly containing several event cases). Then tell us the exact steps for how you are using it, what you expect to happen, and what happens instead. I also recommend the learning resources listed at the top of the forum. I recommend getting more familiar with the basics of dataflow and architecture and postponing the use of event structures until you have more experience. 09-06-2021 03:27 AM - edited 09-06-2021 03:28 AM I have attached the actual code and have removed unrelated functionality. The functionality is such that the user can continue without accessing the admin settings, they can optionally click to view settings but they are locked and then they have the option to unlock them with a password. Case 1: User presses next without viewing admin settings, Result: front panel is frozen, Next cannot be pressed to end the program Case 2: User selects to view admin settings, without unlocking them, returns to setup and clicks next, Result: works fine, next can be pressed to end program Case 3: User selects to view admin settings and then unlocks them (I have set password to 'test'), returns to setup and clicks next, Result: front panel is frozen, next cannot be pressed to end the program. I hope that helps 09-13-2021 02:25 AM Is anyone able to offer any guidance or is this functionality not possible? Thank you 09-13-2021 03:48 AM - edited 09-13-2021 03:51 AM The "lock UI" option locks the UI at the moment the event is GENERATED, not when it is handled. Otherwise it would make no sense. You cannot "stop" an event structure from enqueuing statically registered UI events (Events registered within the dialog of the Event Structure itself). Even if the Event structure is "stopped", i.e.
will never be called again, the events are still going to be queued up, and this, in combination with the "lock UI" option, will automatically freeze your UI until the event is handled, which is never. Congratulations LabVIEW, you played yourself.

I use dynamic event registration for this. (Drop a "Register for Events" node and wire in any UI elements you want to respond to, then wire this to the dynamic terminal of the Event Structure.) That way, when you are finished with the event structure, you can wire a null reference into the event registration refnum (this can even be done from within the event structure handling the event, if you wish). This will then prevent any enqueuing of events in the second event handler and essentially side-step the issue you are seeing.

I generally disagree with the "only use one event structure" idea. Instead I would say: only use one Event structure with statically defined events; everything else (even if handling events of UI elements) should use dynamic registration. Heck, if I had my way, ALL event registration would be dynamic (or at least the ability to "switch off" static events at run-time would be implemented).

09-13-2021 09:49 AM

@of2006 wrote: Is anyone able to offer any guidance or is this functionality not possible?

You need to learn the basics of dataflow! At this point you don't understand it. Currently, you have five event structures: three in parallel, one inside another event structure, and one in another case. This is total garbage code and will never work right. The above case can only complete once all event structures have fired, which is currently a nightmare. As we said, you need to scrap that program and write a proper state machine with exactly one (or even no) event structures. If the admin/non-admin switching exclusively deals with disabling/enabling some controls, that could also be done "out of band" in a parallel loop, independent of the main code.
That loop could have a second event structure (exactly one!) and will do nothing unless the admin state is switched or the program terminated. In fact, most of my programs have such a parallel loop that only deals with UI issues. It will instantly react, no matter what happens in the main state machine.

09-13-2021 10:21 AM

Oh lord. Yeah, what Altenbach said. I hadn't even looked at your code but was arguing from an abstract point of view. Fix what Altenbach has pointed out and forget my advice until then. You have bigger problems.

09-13-2021 11:35 AM

To expand a bit on why your code absolutely will not work as is: all of your event structures are in the same loop. Dataflow says that no construct (loop, subVI, whatever) will return until everything inside of it or feeding it returns. This means two things:

1: Event structures will NOT return until they have processed an event or, if configured to have a timeout case, have timed out. Event structures will execute exactly one frame. In the timeout case, this is the "timeout" frame.
2: Outer loops will NOT return until all event structures inside them return.

Thus, your main loop won't return (i.e., move on) until ALL event structures have processed an event. The above posters are correct: you need to use only one single event structure in a given loop, or really in a given VI*. If you want different cases handled, use different cases in one structure, not multiple structures. I hope that helps.

*There are definitely times when you want or need multiple event structures in a given VI, but I can't think of a time I've ever needed two in the same *loop*. I would advise that you avoid using multiple event structures in the same VI until you really feel like you "get" single event structures and dynamic event registration. Event registration is a complicated topic, so it's best to just use a single event structure and let LabVIEW handle it automatically when you're first learning.
09-17-2021 05:34 AM - edited 09-17-2021 05:37 AM

Thank you Bert for providing a clear explanation. I am aware of why this doesn't work, as you have explained, but what I don't know is how to implement it so that the functionality can work. I hadn't thought of not using event structures, as it's an event-driven process. There is no set "data flow" because the user can choose any one of these buttons at any time; that is what I am trying to implement, and I am not forcing a set "flow". I will have a look at whether case structures are more suitable for this application; I'd imagine it would be an event structure with multiple cases inside a loop, providing the input to a case structure.

altenbach, yes, you're right that I could put the admin functionality outside of the loop, but I still have the same issue of "ending" the event structure if the user has chosen not to use this functionality.
https://forums.ni.com/t5/LabVIEW/Ending-Optional-Event-Structure/m-p/4176428?profile.language=en
Summary So I’ve been posting a lot about NHibernate over the past few months (Fluent NHibernate to be exact). Mostly motivated by frustrations with Entity Framework. I have spent the past week working with NHibernate in a production environment as a proof of concept example. The first problem with NHibernate is the fact that you have to manually generate the classes and mappings. So I came up with a simple and dirty solution to this problem… The Table Dump Utility I threw together a console application that can be used to dump all the tables of designated databases into directories. Each table generates a “.cs” file with the class and mapping code in it. You can download the console application here: GenerateNHibernateTables.zip To use the utility, change the “ConnectionString” at the top to point to your database. Then you can change the parameter of the “DumpDatabase()” method to contain the name of the database you wish to generate tables for. You can add additional “DumpDatabase()” methods if you want to dump a bunch of databases. Each database will be contained in a subdirectory named the same as the database name. All the subdirectories will be contained inside a directory on your C: drive named “NHDatabase”. Obviously, you can hack up this program to suite your needs. What this program will generate This program will generate a “.cs” file for each table that will contain the class and mapping for the table. I have quickly thrown in some data types and what the matching class virtual getter will be (i.e. “int”, “string”, “DateTime”, etc). Composite keys are recognized and the “Equals” and “GetHashCode” methods are automatically inserted into the class with the proper field names. The “Table” mapping is included for convenience and can be altered. This is setup for MS SQL server, containing the database name as well as the table name. Nullable and not Nullable fields are accounted for. String lengths are accounted for. 
What this program will not generate

No foreign key constraints are generated. No one-to-many or many-to-many mappings are set up. Only the most generic field types are recognized. No context code is generated.

What can you use this program for?

I wrote this program to reduce some of my tedious data-entry work. You can get about 90% of the code you need for your Fluent mappings and table classes with this utility, then spend a few minutes cleaning up anything that doesn't quite fit. If a field type is not recognized, then the program will spit out an "unknown(sql field name)" field name. You can decide what to name this variable and type over the unknown name. Also, don't forget to correct the namespace. I threw in a generic "databasename.NameSpace" text for the namespace. You should change this to match your project namespace. You can do it in the program before you generate tables to save yourself some typing.

Also, you can rerun this program to write over the tables that exist. Make sure you don't have a directory open before you do it (otherwise you might lock the program out from making changes).

To use the files, just drag the ".cs" files into your project, tweak the namespace, correct any syntax errors, and add your tables to your own context class. That's it!
http://blog.frankdecaire.com/2014/05/17/fluent-nhibernate-table-dump-utility/
Counting roots

Posted February 27, 2013 at 10:13 AM | categories: nonlinear algebra | Updated February 27, 2013 at 02:27 PM

Matlab post

The goal here is to determine how many roots there are in a nonlinear function we are interested in solving. For this example, we use a cubic polynomial because we know there are three roots.

$$f(x) = x^3 + 6x^2 - 4x - 24$$

1 Use roots for this polynomial

This only works for a polynomial; it does not work for any other nonlinear function.

import numpy as np
print np.roots([1, 6, -4, -24])

[-6.  2. -2.]

Let us plot the function to see where the roots are.

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-8, 4)
y = x**3 + 6 * x**2 - 4*x - 24

plt.plot(x, y)
plt.savefig('images/count-roots-1.png')

Now we consider several approaches to counting the number of roots in this interval. Visually it is pretty easy: you just look for where the function crosses zero. Computationally, it is trickier.

2 Method 1

Count the number of times the sign changes in the interval. What we have to do is multiply neighboring elements together and look for negative values. That indicates a sign change. For example, the product of two positive or two negative numbers is a positive number. You only get a negative number from the product of a positive and a negative number, which means the sign changed.

import numpy as np

x = np.linspace(-8, 4)
y = x**3 + 6 * x**2 - 4*x - 24

# a negative product of neighboring elements marks a sign change
print np.sum(y[:-1] * y[1:] < 0)

3

This method gives us the number of roots, but not where the roots are.

3 Method 2

Using events in an ODE solver. python can identify events in the solution to an ODE, for example, when a function has a certain value, e.g. f(x) = 0. We can take advantage of this to find the roots and number of roots in this case. We take the derivative of our function, integrate it from an initial starting point, and define an event function that counts zeros.
$$f'(x) = 3x^2 + 12x - 4$$

with f(-8) = -120

import numpy as np
from pycse import odelay

def fprime(f, x):
    return 3.0 * x**2 + 12.0 * x - 4.0

def event(f, x):
    value = f  # we want f = 0
    isterminal = False
    direction = 0
    return value, isterminal, direction

xspan = np.linspace(-8, 4)
f0 = -120

X, F, TE, YE, IE = odelay(fprime, f0, xspan, events=[event])

for te, ye in zip(TE, YE):
    print 'root found at x = {0: 1.3f}, f={1: 1.3f}'.format(te, ye)

root found at x = -6.000, f=-0.000
root found at x = -2.000, f=-0.000
root found at x = 2.000, f= 0.000

Copyright (C) 2013 by John Kitchin. See the License for information about copying.
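A third approach, not in the original post, combines the sign-change test of Method 1 with simple bisection: each sign change on the grid brackets exactly one root, which bisection can then refine. This sketch is plain Python, so it runs without numpy or pycse:

```python
def bisect(f, lo, hi, tol=1e-12):
    """Narrow a bracketing interval [lo, hi] (f changes sign on it) to a root."""
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        fmid = f(mid)
        if flo * fmid <= 0:
            hi = mid             # root is in [lo, mid]
        else:
            lo, flo = mid, fmid  # root is in [mid, hi]
    return 0.5 * (lo + hi)

def count_and_locate_roots(f, a, b, n=200):
    """Count sign changes of f on an n-point grid over [a, b], refining each root."""
    xs = [a + (b - a) * i / (n - 1.0) for i in range(n)]
    ys = [f(x) for x in xs]
    return [bisect(f, xs[i], xs[i + 1])
            for i in range(n - 1) if ys[i] * ys[i + 1] < 0]

f = lambda x: x**3 + 6 * x**2 - 4 * x - 24
print(count_and_locate_roots(f, -8, 4))  # three roots near -6, -2, and 2
```

This gives both the count and the locations, at the cost of assuming the grid is fine enough that no interval contains more than one root.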
https://kitchingroup.cheme.cmu.edu/blog/2013/02/27/Counting-roots/
Today there are a large number of books covering .NET and Windows Forms. While most of these books discuss the essentials of working with Windows Forms and guide you well on your way to becoming proficient in developing Windows Forms applications, very few books cover a vital and much-needed topic: the sequence of events that are triggered for a Form. Knowing the lifecycle of a Form can help you place important bits of code in relevant events. If you look at ASP.NET tutorials and books, you will find many references to the Web Page lifecycle, but what about the Windows Forms lifecycle? Sadly, there's not much concrete information about this. The aim of this article is, therefore, to delve into this topic and provide insightful knowledge about Form events.

The events in the lifecycle of a Form from the time it is launched to the time it is closed are listed below (these are the events wired up in the example that follows): Move, Load, Activated, VisibleChanged, Shown, Paint, FormClosing, FormClosed and Deactivate.

Let us view this through an example.

1. First, launch the Visual Studio IDE (2005 or 2008) and create a Windows Forms application.

Figure 1: New Project Dialog Box

2. Give it a suitable name and click OK. This will create the application and open it in Design view.

Figure 2: Application in the Design view

3. Open the Form properties window. The easiest way to do this is: select the Form in the design view and press the F4 key.

4. Click the Events tab and select the Move event.

Figure 3: Form events

5. Double click on it. This will cause the event handler to be auto-generated in the Code View.

6. Switch back to the Design View and, in the Form properties window, double click the Load event.

7. Likewise, repeat this procedure for all the events that were listed earlier.

8. Open the Code View of Form1.Designer.cs and add the code marked in bold.

using System.IO;

namespace LifecycleDemo
{
    partial class Form1
    {
        StreamWriter sr = new StreamWriter("D:\\formevents.txt");

        /// <summary>
        /// Required designer variable.
        /// </summary>
        private System.ComponentModel.IContainer components = null;

        /// <summary>
        /// Clean up any resources being used.
        /// </summary>
        /// <param name="disposing">true if managed resources should be disposed; otherwise, false.</param>
        protected override void Dispose(bool disposing)
        {
            if (disposing && (components != null))
            {
                components.Dispose();
            }
            base.Dispose(disposing);
            sr.Close();
        }

        #region Windows Form Designer generated code
        #endregion
    }
}

public partial class Form1 : Form
{
    public Form1()
    {
        InitializeComponent();
    }

    private void Form1_Move(object sender, EventArgs e)
    {
        sr.WriteLine("1 - Move event");
    }

    private void Form1_Load(object sender, EventArgs e)
    {
        sr.WriteLine("2 - Load event");
    }

    private void Form1_Activated(object sender, EventArgs e)
    {
        sr.WriteLine("3 - Activated event");
    }

    private void Form1_VisibleChanged(object sender, EventArgs e)
    {
        sr.WriteLine("4 - VisibleChanged event");
    }

    private void Form1_Shown(object sender, EventArgs e)
    {
        sr.WriteLine("5 - Shown event");
    }

    private void Form1_Paint(object sender, PaintEventArgs e)
    {
        sr.WriteLine("6 - Paint event");
    }

    private void Form1_FormClosed(object sender, FormClosedEventArgs e)
    {
        sr.WriteLine("7 - FormClosed event");
    }

    private void Form1_FormClosing(object sender, FormClosingEventArgs e)
    {
        sr.WriteLine("8 - FormClosing");
    }

    private void Form1_Deactivate(object sender, EventArgs e)
    {
        sr.WriteLine("9 - Deactivate");
    }
}

Figure 4: Form shown during execution of the application

11. Switch to some other application, such that the form is no longer in focus.

12. Switch back to the Windows Form so that it regains focus.

13. Exit the Windows Form application.

14. Open the text file, formevents.txt. You will observe output similar to the one shown in Figure 5. (Output may vary if you perform some other actions in between, causing additional events to be raised.)

Figure 5: Text file contents showing event lifecycle

The Move, Load, VisibleChanged and Activated events are raised even before the Form is shown. Then the Shown event takes place.
This is followed by the Paint event. These events are common for any application and are always standard. When you switch the focus to some other application, the Deactivate event occurs. When the form regains focus, the Activated event is raised. Then the form is painted again because it has regained focus. When you attempt to close the form, the FormClosing and FormClosed events are raised. Finally, after the form is closed, Deactivate is raised once more. If you had inserted a WriteLine for Dispose as well (which has not been written as of now) you would see that statement appearing after the Deactivate. Conclusion: Thus, you learned about the lifecycle of events in a Windows Form.
http://www.c-sharpcorner.com/uploadfile/mamta_m/windows-forms-events-lifecycle/
Abstract Model to manage SQL Requests

Project description

This module provides an abstract model to manage SQL SELECT requests on the database. It is not useful by itself. You can see an example of implementation in the 'sql_export' module (same repository).

Implemented features

- Add some restrictions in the sql request:
  - you can only read data. No update, deletion or creation is possible.
  - some tables are not allowed, because they could contain clear passwords or keys. For the time being ('ir_config_parameter').
- The request can be in a 'draft' or a 'SQL Valid' status. To be valid, the request has to be cleaned, checked and tested. All of these operations can be disabled in the inherited modules.
- This module adds two new groups:
  - SQL Request / User: can see all the sql requests by default and execute them, if they are valid.
  - SQL Request / Manager: has full access on sql requests.

Usage

Inherit the model:

from openerp import models

class MyModel(models.Model):
    _name = 'my.model'
    _inherit = ['sql.request.mixin']

    _sql_request_groups_relation = 'my_model_groups_rel'
    _sql_request_users_relation = 'my_model_users_rel'

Bug Tracker

Bugs are tracked on GitHub Issues. In case of trouble, please check there if your issue has already been reported. If you spotted it first, help us smash it by providing detailed and welcome feedback.

Credits

Contributors

- Florian da Costa <florian.dacosta@akretion.com>
- Sylvain LE GAL ()

Funders

The development of this module has been financially supported by:

- Akretion (<>)
- GRAP, Groupement Régional Alimentaire de Proxim.
https://pypi.org/project/odoo8-addon-sql-request-abstract/
Pt-D-5

Dave says:
You can do this using an if statement

<% if @count > 5 -%>
  You've viewed this page <%= pluralize(@count, "time") %>, mon ami.
<% end -%>

Combusean says:
Isn't the idea to keep the "display the counter" business logic in the store controller, and then display it in the view?

@display_count = nil
if @count > 5
  @display_count = @count
end

The '-' in '-%>' just makes the HTML output nicer. It suppresses the newline character after the '%>' :)

Scott says:

store_controller.rb

def index
  . .
  @count = index_access_counter
  @count_minimum_to_display = true if @count > 5
end

def add_to_cart
  . .
  else
    @cart = find_cart
    @cart.add_product(product)
    session[:counter] = nil
  end
end

def index_access_counter
  session[:counter] ||= 0
  session[:counter] += 1
end

store/index.rhtml

<%= %>

Erykwalder says:
I just did:

<%= 'Viewed ' + pluralize(@counter, 'time') if @counter > 5 %>

The reason why I wouldn't put the logic for deciding whether or not to display the count in the controller is because the logic relates directly to the view.
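Outside of Rails, the logic in these snippets reduces to a small plain-Ruby method. The pluralize below is a simplified stand-in for the Rails helper (naive "add an s" pluralization), not the real implementation:

```ruby
# Simplified stand-in for Rails' pluralize helper.
def pluralize(count, word)
  "#{count} #{count == 1 ? word : word + 's'}"
end

# Returns the message to show, or nil when the count is not above the minimum.
def view_count_message(count, minimum = 5)
  return nil unless count > minimum
  "You've viewed this page #{pluralize(count, 'time')}, mon ami."
end

puts view_count_message(7)  # => You've viewed this page 7 times, mon ami.
```

Returning nil for low counts mirrors the `if @count > 5` guard in the view snippets above.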
https://pragprog.com/wikis/wiki/Pt-D-5/version/2
As I mentioned in the other post, this functionality is the basis for web mapping servers but could also be used to quickly generate image renderings of shapefiles for documents, presentations, e-mail, or metadata catalogs. You'll notice this script is very similar to the PIL script I posted. Swapping out PIL for PNGCanvas required minimal changes. As I did last time, I also create a world file, which allows this image to be layered in most GIS systems, albeit only at a single scale.

import shapefile
import pngcanvas

# Read in a shapefile and write a png image
r = shapefile.Reader("mississippi")
xdist = r.bbox[2] - r.bbox[0]
ydist = r.bbox[3] - r.bbox[1]
iwidth = 400
iheight = 600
xratio = iwidth / xdist
yratio = iheight / ydist
pixels = []

# Only using the first shape record
for x, y in r.shapes()[0].points:
    px = int(iwidth - ((r.bbox[2] - x) * xratio))
    py = int((r.bbox[3] - y) * yratio)
    pixels.append([px, py])

c = pngcanvas.PNGCanvas(iwidth, iheight)
c.polyline(pixels)
f = file("mississippi.png", "wb")
f.write(c.dump())
f.close()

# Create a world file
wld = file("mississippi.pgw", "w")
wld.write("%s\n" % (xdist / iwidth))
wld.write("0.0\n")
wld.write("0.0\n")
wld.write("-%s\n" % (ydist / iheight))
wld.write("%s\n" % r.bbox[0])
wld.write("%s\n" % r.bbox[3])
wld.close()

You can download the shapefile used in this example here:

You can download the script featured above here:

Hi All, I am using the same script to draw a polygon on an image, and I have it working, but I want to increase the polygon area and center it in the image. Please help me.
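The world-file block above can be factored into a small helper. This sketch is pure Python (no shapefile libraries needed), which makes the six-line format easy to see: x pixel size, two rotation terms, negative y pixel size, and the x/y of the upper-left corner, exactly as written in the script:

```python
def world_file_lines(bbox, width, height):
    """Return the six lines of a world file for an image of width x height
    pixels covering bbox = (xmin, ymin, xmax, ymax)."""
    xmin, ymin, xmax, ymax = bbox
    return [
        "%s" % ((xmax - xmin) / float(width)),    # pixel size in x
        "0.0",                                    # rotation (row)
        "0.0",                                    # rotation (column)
        "-%s" % ((ymax - ymin) / float(height)),  # negative pixel size in y
        "%s" % xmin,                              # x of upper-left corner
        "%s" % ymax,                              # y of upper-left corner
    ]

# Example with hypothetical bounds; with a real shapefile you would pass r.bbox.
with open("mississippi.pgw", "w") as wld:
    wld.write("\n".join(world_file_lines((0.0, 0.0, 4.0, 6.0), 400, 600)) + "\n")
```

The bbox values here are made up for illustration; the original script derives them from the shapefile's `r.bbox`.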
http://geospatialpython.com/2010/12/rasterizing-shapefiles-2-pure-python.html
Announcing Entity Framework Code-First (CTP5 release)

This week the data team released the CTP5 build of the new Entity Framework Code-First library. EF Code-First enables a pretty sweet code-centric development workflow for working with data. It enables you to:

- Develop without ever having to open a designer or define an XML mapping file
- Define model objects by simply writing "plain old classes" with no base classes required
- Use a "convention over configuration" approach that enables database persistence without explicitly configuring anything
- Optionally override the convention-based persistence and use a fluent code API to fully customize the persistence mapping

I'm a big fan of the EF Code-First approach, and wrote several blog posts about it this summer:

- Code-First Development with Entity Framework 4 (July 16th)
- EF Code-First: Custom Database Schema Mapping (July 23rd)
- Using EF Code-First with an Existing Database (August 3rd)

Today's new CTP5 release delivers several nice improvements over the CTP4 build, and will be the last preview build of Code First before the final release of it. We will ship the final EF Code First release in the first quarter of next year (Q1 of 2011). It works with all .NET application types (including both ASP.NET Web Forms and ASP.NET MVC projects).

Installing EF Code First

You can install and use EF Code First CTP5 in one of two ways:

Approach 1) By downloading and running a setup program. Once installed you can reference the EntityFramework.dll assembly it provides within your projects.

or:

Approach 2) By using the NuGet Package Manager within Visual Studio to download and install EF Code First within a project.
To do this, simply bring up the NuGet Package Manager Console within Visual Studio (View->Other Windows->Package Manager Console) and type “Install-Package EFCodeFirst”: Typing “Install-Package EFCodeFirst” within the Package Manager Console will cause NuGet to download the EF Code First package, and add it to your current project: Doing this will automatically add a reference to the EntityFramework.dll assembly to your project: NuGet enables you to have EF Code First setup and ready to use within seconds. When the final release of EF Code First ships you’ll also be able to just type “Update-Package EFCodeFirst” to update your existing projects to use the final release. EF Code First Assembly and Namespace The CTP5 release of EF Code First has an updated assembly name, and new .NET namespace: - Assembly Name: EntityFramework.dll - Namespace: System.Data.Entity These names match what we plan to use for the final release of the library. Nice New CTP5 Improvements The new CTP5 release of EF Code First contains a bunch of nice improvements and refinements. Some of the highlights include: - Better support for Existing Databases - Built-in Model-Level Validation and DataAnnotation Support - Fluent API Improvements - Pluggable Conventions Support - New Change Tracking API - Improved Concurrency Conflict Resolution - Raw SQL Query/Command Support The rest of this blog post contains some more details about a few of the above changes. Better Support for Existing Databases EF Code First makes it really easy to create model layers that work against existing databases. CTP5 includes some refinements that further streamline the developer workflow for this scenario. 
Below are the steps to use EF Code First to create a model layer for the Northwind sample database: Step 1: Create Model Classes and a DbContext class Below is all of the code necessary to implement a simple model layer using EF Code First that goes against the Northwind database: EF Code First enables you to use “POCO” – Plain Old CLR Objects – to represent entities within a database. This means that you do not need to derive model classes from a base class, nor implement any interfaces or data persistence attributes on them. This enables the model classes to be kept clean, easily testable, and “persistence ignorant”. The Product and Category classes above are examples of POCO model classes. EF Code First enables you to easily connect your POCO model classes to a database by creating a “DbContext” class that exposes public properties that map to the tables within a database. The Northwind class above illustrates how this can be done. It is mapping our Product and Category classes to the “Products” and “Categories” tables within the database. The properties within the Product and Category classes in turn map to the columns within the Products and Categories tables – and each instance of a Product/Category object maps to a row within the tables. The above code is all of the code required to create our model and data access layer! Previous CTPs of EF Code First required an additional step to work against existing databases (a call to Database.Initializer<Northwind>(null) to tell EF Code First to not create the database) – this step is no longer required with the CTP5 release. Step 2: Configure the Database Connection String We’ve written all of the code we need to write to define our model layer. Our last step before we use it will be to setup a connection-string that connects it with our database. 
To do this we’ll add a “Northwind” connection-string to our web.config file (or App.Config for client apps) like so: <connectionStrings> <add name="Northwind" connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|\northwind.mdf;User Instance=true" providerName="System.Data.SqlClient" /> </connectionStrings> EF “code first” uses a convention where DbContext classes by default look for a connection-string that has the same name as the context class. Because our DbContext class is called “Northwind” it by default looks for a “Northwind” connection-string to use. Above our Northwind connection-string is configured to use a local SQL Express database (stored within the \App_Data directory of our project). You can alternatively point it at a remote SQL Server. Step 3: Using our Northwind Model Layer We can now easily query and update our database using the strongly-typed model layer we just built with EF Code First. The code example below demonstrates how to use LINQ to query for products within a specific product category. This query returns back a sequence of strongly-typed Product objects that match the search criteria: The code example below demonstrates how we can retrieve a specific Product object, update two of its properties, and then save the changes back to the database: EF Code First handles all of the change-tracking and data persistence work for us, and allows us to focus on our application and business logic as opposed to having to worry about data access plumbing. Built-in Model Validation EF Code First allows you to use any validation approach you want when implementing business rules with your model layer. This enables a great deal of flexibility and power. Starting with this week’s CTP5 release, EF Code First also now includes built-in support for both the DataAnnotation and IValidatorObject validation support built-into .NET 4. 
This enables you to easily implement validation rules on your models, and have these rules automatically be enforced by EF Code First whenever you save your model layer. It provides a very convenient “out of the box” way to enable validation within your applications. Applying DataAnnotations to our Northwind Model The code example below demonstrates how we could add some declarative validation rules to two of the properties of our “Product” model: We are using the [Required] and [Range] attributes above. These validation attributes live within the System.ComponentModel.DataAnnotations namespace that is built-into .NET 4, and can be used independently of EF. The error messages specified on them can either be explicitly defined (like above) – or retrieved from resource files (which makes localizing applications easy). Validation Enforcement on SaveChanges() EF Code-First (starting with CTP5) now automatically applies and enforces DataAnnotation rules when a model object is updated or saved. You do not need to write any code to enforce this – this support is now enabled by default. This new support means that the below code – which violates our above rules – will automatically throw an exception when we call the “SaveChanges()” method on our Northwind DbContext: The DbEntityValidationException that is raised when the SaveChanges() method is invoked contains a “EntityValidationErrors” property that you can use to retrieve the list of all validation errors that occurred when the model was trying to save. This enables you to easily guide the user on how to fix them. Note that EF Code-First will abort the entire transaction of changes if a validation rule is violated – ensuring that our database is always kept in a valid, consistent state. 
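The code listings in this section were images in the original post and did not survive extraction. As a rough reconstruction (property names, namespaces, and messages are assumptions based on the surrounding text and the EF 4.1 API line that shipped from this CTP), the annotated model and the SaveChanges error handling might look like:

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Data.Entity;
using System.Data.Entity.Validation;
using System.Linq;

public class Product
{
    public int ProductID { get; set; }

    [Required(ErrorMessage = "The product name is required")]
    public string ProductName { get; set; }

    [Range(0, 5000, ErrorMessage = "The unit price must be between 0 and 5000")]
    public decimal? UnitPrice { get; set; }

    public virtual Category Category { get; set; }
}

public class Category
{
    public int CategoryID { get; set; }
    public string CategoryName { get; set; }
    public virtual ICollection<Product> Products { get; set; }
}

public class Northwind : DbContext
{
    public DbSet<Product> Products { get; set; }
    public DbSet<Category> Categories { get; set; }
}

class Demo
{
    static void Main()
    {
        using (var db = new Northwind())
        {
            // LINQ query returning strongly-typed Product objects
            var beverages = (from p in db.Products
                             where p.Category.CategoryName == "Beverages"
                             select p).ToList();

            // Violate the [Required] rule, then try to save
            var product = db.Products.First();
            product.ProductName = null;

            try
            {
                db.SaveChanges();
            }
            catch (DbEntityValidationException ex)
            {
                // Walk all validation failures collected by EF
                foreach (var entityResult in ex.EntityValidationErrors)
                    foreach (var error in entityResult.ValidationErrors)
                        Console.WriteLine(error.ErrorMessage);
            }
        }
    }
}
```

As the text above notes, the whole transaction is aborted when any rule fails, so the database is never left partially updated.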
EF Code First’s validation enforcement works both for the built-in .NET DataAnnotation attributes (like Required, Range, RegularExpression, StringLength, etc), as well as for any custom validation rule you create by sub-classing the System.ComponentModel.DataAnnotations.ValidationAttribute base class. UI Validation Support A lot of our UI frameworks in .NET also provide support for DataAnnotation-based validation rules. For example, ASP.NET MVC, ASP.NET Dynamic Data, and Silverlight (via WCF RIA Services) all provide support for displaying client-side validation UI that honor the DataAnnotation rules applied to model objects. The screen-shot below demonstrates how using the default “Add-View” scaffold template within an ASP.NET MVC 3 application will cause appropriate validation error messages to be displayed if appropriate values are not provided: ASP.NET MVC 3 supports both client-side and server-side enforcement of these validation rules. The error messages displayed are automatically picked up from the declarative validation attributes – eliminating the need for you to write any custom code to display them. Keeping things DRY The “DRY Principle” stands for “Do Not Repeat Yourself”, and is a best practice that recommends that you avoid duplicating logic/configuration/code in multiple places across your application, and instead specify it only once and have it apply everywhere. EF Code First CTP5 now enables you to apply declarative DataAnnotation validations on your model classes (and specify them only once) and then have the validation logic be enforced (and corresponding error messages displayed) across all applications scenarios – including within controllers, views, client-side scripts, and for any custom code that updates and manipulates model classes. This makes it much easier to build good applications with clean code, and to build applications that can rapidly iterate and evolve. 
Other EF Code First Improvements New to CTP5

EF Code First CTP5 includes a bunch of other improvements as well. Below are a few short descriptions of some of them:

- Fluent API Improvements

EF Code First allows you to override an "OnModelCreating()" method on the DbContext class to further refine/override the schema mapping rules used to map model classes to underlying database schema. CTP5 includes some refinements to the ModelBuilder class that is passed to this method which can make defining mapping rules cleaner and more concise. The ADO.NET Team blogged some samples of how to do this here.

- Pluggable Conventions Support

EF Code First CTP5 provides new support that allows you to override the "default conventions" that EF Code First honors, and optionally replace them with your own set of conventions.

- New Change Tracking API

EF Code First CTP5 exposes a new set of change tracking information that enables you to access Original, Current & Stored values, and State (e.g. Added, Unchanged, Modified, Deleted). This support is useful in a variety of scenarios.

- Improved Concurrency Conflict Resolution

EF Code First CTP5 provides better exception messages that allow access to the affected object instance and the ability to resolve conflicts using current, original and database values.

- Raw SQL Query/Command Support

EF Code First CTP5 now allows raw SQL queries and commands (including SPROCs) to be executed via the SqlQuery and SqlCommand methods exposed off of the DbContext.Database property. The results of these method calls can be materialized into object instances that can be optionally change-tracked by the DbContext. This is useful for a variety of advanced scenarios.

- Full Data Annotations Support

EF Code First CTP5 now supports all standard DataAnnotations within .NET, and can use them both to perform validation as well as to automatically create the appropriate database schema when EF Code First is used in a database creation scenario.
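Two of the items above can be sketched briefly. The method names follow this post's own wording (OnModelCreating with a ModelBuilder parameter, and SqlQuery/SqlCommand on DbContext.Database); treat the exact signatures as approximate, since they changed between the CTPs and the final release:

```csharp
using System.Data.Entity;
using System.Linq;

public class Northwind : DbContext
{
    public DbSet<Product> Products { get; set; }

    // Fluent API: override the convention-based mapping for one property
    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Product>()
                    .Property(p => p.ProductName)
                    .HasMaxLength(40);
    }
}

class RawSqlDemo
{
    static void Run()
    {
        using (var db = new Northwind())
        {
            // Raw SQL query materialized into Product instances
            var discontinued = db.Database
                .SqlQuery<Product>("SELECT * FROM Products WHERE Discontinued = 1")
                .ToList();

            // Raw SQL command with no result set
            db.Database.SqlCommand("UPDATE Products SET Discontinued = 0");
        }
    }
}
```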
Summary

EF Code First provides an elegant and powerful way to work with data. I really like it because it is extremely clean and supports best practices, while also enabling solutions to be implemented very, very rapidly. The code-only approach of the library means that model layers end up being flexible and easy to customize.

This week’s CTP5 release further refines EF Code First and helps ensure that it will be really sweet when it ships early next year. I recommend using NuGet to install and give it a try today. I think you’ll be pleasantly surprised by how awesome it is.

Hope this helps,

Scott
http://weblogs.asp.net/scottgu/announcing-entity-framework-code-first-ctp5-release
Hi and welcome to our site!

I assume that was a living trust set up by your mother and that you became the successor trustee after your mother passed away. If that is correct, the trust became irrevocable just after your mother (the grantor) died, and as such it is treated as a separate legal and taxing entity. According to the filing requirements (see the Form 1041 instructions, page 4, “Who Must File”), because the gross income realized from the sale of the house is above $600, a tax return is required.

To determine the capital gain realized by the trust we need to start with the basis. For inherited property, the basis is the fair market value of the property at the time the decedent passed away. If the house was valued at $300k when she died, that is your basis. The basis should be adjusted by selling expenses and improvement expenses, so your adjusted basis will be $400k as you estimated. That will be a $50k gain, which is classified as long-term capital gain.

I assume the proceeds are to be distributed to beneficiaries, so the trust will pass the taxable gain to the beneficiaries as well, and the beneficiaries will include their share of taxable income on their individual tax returns. The trust should issue a Schedule K1 to each beneficiary. If that is the only property held in the trust and all funds are distributed, the trust would be dissolved.

When I typed in my question, I was unaware where this was going, so I made up numbers. We estimate the actual gain to be about $100K, if that makes any difference. The two main questions we have right now are:

1. How much do we need to allocate for the Capital Gain tax payment? We hear 15% and 10%.

2. Because the total estate was under $1M, I have issued no Schedule K1 documents. Many small payouts have been made. Am I now in a bind? I just intended to file one 2013 federal tax return, pay the CG tax and that's all. I know about the K1, but declined to issue any. We have made smaller payouts (Trust included $150K or so in cash) for four years.
How much do we need to allocate for the Capital Gain tax payment? We hear 15% and 10%.

As I mentioned above, the gain most likely will be passed to the beneficiaries, so the tax rate will be determined for each beneficiary based on his/her taxable income, filing status, deductions, etc. The tax rate will be determined individually.

3. Can I file to pay the CG taxes with no K1s issued? Also: 4. Is there any State tax (VA)? thanks... So, you are saying the Trust does not pay any CG tax?

Because the total estate was under $1M, I have issued no Schedule K1 documents. Many small payouts have been made. Am I now in a bind?

I think you meant estate taxes - there are NO estate taxes. But if the trust has income above the filing requirements, an INCOME tax return is needed. K1 is to report distribution of taxable income to beneficiaries - thus, if there is taxable income and it is distributed, K1 is required as well.

OK, but if the gain is all distributed, the Trust effectively has no 2013 income? Can I just issue a K1 specific to the gain?

Can I file to pay the CG taxes with no K1s issued?

If none of the income was distributed to beneficiaries, and if the trust is not required to distribute income to beneficiaries (that should be verified with the trust's documents), then no K1 is required and income taxes are paid by the trust.

Is there any State tax (VA)?

Yes - because the house is in VA, the gain is considered income from VA sources, and it is taxable for the state regardless of where the beneficiaries are residents. Same tax treatment as on the federal level - the gain is passed to beneficiaries, and the tax liability is determined based on the total income.

I'm trying to determine how the federal government would be able to determine what portion of the distributions reported on K1s is taxable. If only the portion of the distribution that is capital gain is in fact taxable, and those funds are mixed into the Trust account, who can tell what portion of the Trust is taxable?
Why not distribute the taxable amount to the children, who will pay no taxes?

OK, but if the gain is all distributed, the Trust effectively has no 2013 income? Can I just issue a K1 specific to the gain?

If the house was sold in 2013, that means the trust has gross income of more than $600 in 2013 and is required to file the tax return. When the gain is distributed to beneficiaries, it is deducted on the trust's income tax return (Form 1041), so the trust will not have any taxable income - still, the tax return is needed. K1's are not issued separately - they are issued as part of the tax return and sent to the IRS as attachments to the 1041. In addition, copies of the K1's are sent to beneficiaries, so they will use them for their individual tax returns.

The process is starting to make sense, but I still see no way the government (IRS) can possibly determine what distributions are taxable. Forget for a second that we have been distributing funds over four tax years. If a Trust has a total value of $600K, and out of that total $100K was capital gains, and the entire $600K is distributed to seven people (in varying amounts), how on earth would the government determine which of these seven people received a portion of the capital gains and who got only the untaxed inheritance?

First of all, having taxable income and being required to file are different issues. If the GROSS income is above $600 - not $600K - the trust is required to file the tax return. Second, the taxable income is calculated on the tax return, in the same manner as for individuals.
So you will identify the gain - $100k as in your example - as taxable long-term capital gain. Then you deduct the distribution to beneficiaries and attach K1's with their names and SSN's, so the IRS will track that taxable income to these individuals. After that deduction, the trust will not have any tax liability.

Another issue - there are two types of distribution: the distribution of income and the distribution of corpus. Income is generally distributed first, so if there is any income and any distribution, it is assumed that income is distributed. Distribution of the corpus doesn't generate any taxable income for beneficiaries, and as such is not required to be reported on the K1. You may mention such a distribution as other information on the K1 or in an additional statement to beneficiaries, but because that is not taxable, the IRS would not worry about it. The proportion of the distribution is based on the trust document, according to the grantor - as a trustee you should follow the grantor's wishes.

I appreciate the information. I'm still not clear on all of this, but I have taken enough of your time. I was not anticipating a dialog or I'd be a little better prepared. This service is new to me. When we close this, are further questions allowed, or is that a new exchange?

My intention is to provide EXCELLENT service, and I appreciate it if you take time and rate my work accordingly. You may come back to this page any time. Here is the address - it might not be immediately available, but it will surely address all your tax related issues. You are welcome to come back and ask for clarification. Thank you.
http://www.justanswer.com/tax/81puz-trustee-mother-left-house-valued-300-000.html
Ideas for a Programming Language Part 3: No Shadow Worlds

by Malte Skarupke

This post is inspired by this text by Gilad Bracha. I recommend that you read it before reading this blog post. The idea of Shadow Worlds is that people keep inventing these constructs that are powerful enough that you want to use them as programming languages, but they are not powerful enough to have the features of high level programming languages that make programming enjoyable. Before reading that blog post I knew that I wanted a macro system for my language. Looking at my old notes I had written down that a macro system is necessary, and that templates are not enough. However I wanted a type safe macro system from modern languages like Nim or Rust. Gilad Bracha’s blog post made me realize that you actually never want macros because they are just another Shadow World. Instead you want to do everything in your high level language. My syntax for this is inspired by D and Scala, where every function has two argument lists: one with compile time arguments, and one with runtime arguments. In D it looks like this: foo!(compile_time_args)(run_time_args) and in Scala it looks like this: foo[compile_time_args](run_time_args). I slightly prefer Scala’s syntax, even though it does mean that operator[] is not available for map lookups or array indexing. Scala uses operator() for that instead which actually is kinda nice once you get used to it. In this post I will use Scala’s syntax. (I also tried just using the template syntax from C++, but since the template brackets <> are placed before the function declaration instead of where the normal arguments are, everything looks kinda weird)

That syntax was invented for templates, and it works great for that in those languages, but I think it should also be used for macros. A macro is simply a function that has no runtime arguments.
A common example where you need to use macros is for asserts, because you can’t get the file and line number of your function arguments. But in theory that’s an easy problem:

#define ASSERT(expr) if (!(expr))\
{\
    assert_callback(#expr, __FILE__, __LINE__);\
}\
else\
    static_cast<void>(0)

becomes

void assert[expression[bool] to_check]()
{
    if (!to_check.execute())
    {
        assert_callback(to_check.text, to_check.location.filename, to_check.location.line_number);
    }
}

Which requires no special syntax. Expressions are templatized by the value that they produce, and they just have their textual representation and location as members.

Another common source of macros is when you just want to turn something into a string. Your codebase probably has at least two cases where this is done for types, and at least five cases where this is done for enums. It usually looks something like this:

#define REGISTER_CLASS(type) template<> const char * GetClassName<type>() { return #type; }
bool operator<(const SharedString & other) const { return *str < *other.str; } bool operator<(const std::string & other) const { return *str < other; } bool operator<(const char * other) const { return *str < other; } }; bool operator<(const std::string & lhs, const SharedString & rhs) { return lhs < rhs.get(); } bool operatpr<(const char * lhs, const SharedString & rhs) { return lhs < rhs.get(); } I want to add a macro that adds the remaining operators for SharedString, std::string and const char *. Using macros or templates we have to execute the same macro three times or inherit three times using the CRTP, because we have to overload the operator for three types. We have to do this three times because both macros and templates are shadow worlds that don’t provide a looping construct or object oriented programming where you can ask questions about objects. You can kinda emulate loops, but once you’ve heard about shadow worlds you feel a bit stupid every time that you are trying to emulate a loop in templates when you have got a high level programming language called C++ available which has perfectly good support for loops. 
So let’s write this in a non-shadow world language: bool generic_greater[type t, type o](const t & lhs, const o & rhs) { return rhs < lhs; } bool generic_less_equal[type t, type o](const t & lhs, const o & rhs) { return !(rhs < lhs); } bool generic_greater_equal[type t, type o](const t & lhs, const o & rhs) { return !(lhs < rhs); } void complete_operators[type t] { const set[function] & less_thans = t.methods("operator<"); namespace ts_space = t.namespace for (const function & less_than : less_thans) { type other = less_than.arguments(1).type; t.methods("operator>").insert(&generic_greater[t, other]); t.methods("operator<=").insert(&generic_less_equal[t, other]); t.methods("operator>=").insert(&generic_greater_equal[t, other]); ts_space.functions("operator>").insert(&generic_greater[other, t]); ts_space.functions("operator<=").insert(&generic_less_equal[other, t]); ts_space.functions("operator>=").insert(&generic_greater_equal[other, t]); } } complete_operators[SharedString]; The type has a map “methods” of type map[string, set[function]] which I can use to iterate over all methods on a type. Each function in the set then can tell me about its arguments and the type of those arguments. Once I have all the types I can generate all the required functions using a pretty normal syntax. All of this happens at compile time, as if I had used a macro for doing it. Using a macro here would have lead to code that’s harder to read and harder to debug, but it probably would also compile more slowly because you have to go through the whole string processing apparatus of the compiler. Templates have less downsides, except that they use a modified syntax (this is one reason why people use macros when they could use templates: no need to learn a special syntax) and looping is annoying. But also templates use duck typing, which makes error message worse. 
If you get errors in the above function, those errors will hopefully be easier to reason about than macro- or template-error messages would be because everything is statically typed, you get a normal call stack and state is represented in stack local variables that you can examine in the watch window of your debugger. (I expect that to happen often, where if you get rid of shadow worlds, your existing tools become more useful) However if you have worked in languages that allow modifications to classes like the above, you’ll probably not be satisfied at this point. Modifying classes like this actually introduces a whole new set of problems, mainly related to maintenance. So I looked back at templates and realized that they have one really nice property: you write normal structs, and you just replace a bit in the middle. To write a template like std::vector I can mostly write a normal struct and then only the pointers to the storage get replaced at compile time. It’s a style of programming that doesn’t really have a name. Declarative Programming is the closest thing I can name, but it’s too vague. But basically it’s the opposite of pattern matching, so maybe we should call it pattern expansion. And as you look at other shadow worlds, you find that this style is actually common: In printf you write a normal string and only one bit in the middle is replaced. Of the three shadow worlds in Gilad Brachas article, the HTML template system also has this property. So I think a crucial step in getting rid of shadow worlds is supporting this style of programming in the language so that templates, macros, printf and regular expressions can all use the same language feature. And it would have to be a normal language feature that can be abstracted, extended and composed. And I think we’re almost there, but I actually can’t figure out the last step. 
So the blog post will have to end with an unsatisfying “I’m still trying to figure it out.” My current idea is that I use the same syntax for pattern matching that I use for pattern expansion. The idea is that like in boost.serialization you use one syntax to do both. So if you can pattern-expand a template struct, you can use the same syntax to pattern-match it. Similarly if you can pattern-match a string, you can pattern-expand using the same syntax instead of using printf. To be able to use the same syntax for templates and for strings the idea is to not have quotation marks “” around strings. They are not necessary when you store the source code in a database instead of a text file. And then all you have to do is find a way to support normal loops, normal conditionals and normal abstractions like classes and functions. It doesn’t sound that difficult, but whenever I work on it I can’t quite fit the pieces together and it’s very easy to accidentally enter a shadow world. But I think the idea is solid and I’ll solve the issues once I start implementing this. But the accidental creation of shadow worlds is an interesting observation. One example of this that I still hope to influence is Jonathan Blow’s new language in which he demonstrates many of the same features that I am thinking about. However it also shows how easily you can accidentally introduce shadow worlds. For example the feature of “notes” on members that he showed in a recent video is clearly a shadow world. And several people immediately complain about it in the Q&A after the talk. But people complain about it only seeing specific problems that you might run into using the feature, not seeing that the feature is fundamentally flawed because it works differently than everything else in the language: There will be a lot of code duplication by users of the language simply because they will have to invent similar code once for notes and once for normal parts of the language. 
Me personally I will try to solve the issues that notes are trying to address by having compile time properties on variables, which are normal properties that you could also have on a struct. But I will talk about that when I talk about dependent typing. Another thing that Jonathan Blow demonstrated in an early video was a bit of special syntax for functions that are called at compile times, like #check_call or #load. If those were just normal functions, I can use all the higher order functions that are already available for me in normal functions. Have you seen how people reimplement std::transform, std::for_each or more complicated features for macros or templates? It’s not pretty. (often it’s not obvious because the reimplementations will not be put into a reusable, named piece of code. So you have to read the code to see that a pattern is reimplemented several times) The fact that functions like #check_call or #load can only be called at compile time shouldn’t change how you can manipulate them. I’m not sure if he still has that special syntax since it hasn’t shown up in recent videos, but I think that once you’re aware of shadow worlds you tend to notice faster when you’re inventing a special feature when an existing feature could be adapted, which allows you to benefit from all the supporting features for that existing feature. This post is mostly about macros and templates because those are the biggest shadow worlds in C++. But once you recognize shadow worlds, you find them all over the place. I’m seriously considering to not have regular expressions and to have an object oriented way of building state machines instead. How hard can it be to come up with a better design than regular expressions, eh? Jonathan Blow also already realized that the build process is usually a shadow world. There are also debug visualizers and in serialization Protobuf is more of a shadow world than boost::serialization. 
String formatting is another shadow world that I started addressing with format_it, and we’re creating even more new ones all the time. Part 4 of this series, Dependent Typing, will also be challenging to write because I must not accidentally create a shadow world. I think that shadow worlds get created accidentally. The problem is that they make the implementer’s life a lot easier because then she can ignore all the complexities of the language by for example not supporting all types. Also whenever you create something new there is a strong desire to keep it simple this time. But it doesn’t work. You’re programming, and you always end up emulating the features that your high level language would have given you if there hadn’t been a shadow world. And if you add up all the time that has been spent trying to emulate loops in templates and macros by using recursion, or trying to print complex types using printf, the initial time savings for the original implementer have long been used up. So I am committing to having no shadow worlds in my language. Everything will use the normal high level language features that the language itself provides. If you can’t do conditionals, loop over something or encapsulate something in a reusable object or algorithm, I will change the design until you can.

I completely agree. When “shadow worlds” are created to simplify some task, you often instead want continuous granularity—simple shortcuts that always have a way out that bypasses exactly one level of abstraction. (I stole this term from Casey Muratori:) For compile-time versus run-time I think key to doing this right is to not have a distinction between compile-time and run-time. Which is kind of terrifying to set up in a performant manner. It’s also definitely a Big Idea, which would preclude Jai from doing it.
But in my head it’s the only fully-consistent way to not end up trapped… Another interesting thing is that pattern matching or pattern expansion are shadow worlds almost by definition. I can picture a language with a pattern match operator and a pattern expand operator that are inverses and can execute arbitrary code, which lets you escape the shadow world, but just because you can escape it doesn’t mean that it wasn’t there. The other thing that I find interesting to think about is: there is a magical abstract world where you look at a possible solution and think “someone else might want to do it another way, this isn’t general enough” and then there’s the adjoining concrete world where a specific problem is being solved and you can think “no, I’m doing it only this way” and throw away every abstraction that doesn’t directly help you solve your problem. I am not completely convinced that the barrier between those two worlds is the barrier between the compiler/language and the program implemented in it. Part of the attraction of these “shadow worlds” is looking at them and saying “yes, that makes useful design choices and throws away abstraction in a manner that makes solving my problem simpler.” Which is super good! The problem is, in a sense, the inverse of the law of leaky abstractions—you will always find a place where your initial choices were not exactly right and you need to escape the sandbox. But having that sandbox is fundamentally useful: for example, data formats which do not allow arbitrary code execution are the backbone of Internet security. (I also wish there was another term for this instead of “shadow world.” I think I’ve heard some other definition of “shadow world” which was a way to talk about OOP: encouraging you to create a bunch of objects which model some “real” objects and act as proxies for them, which is scary because, really, It’s All About The Data.) 
Re: having no difference between compile time and runtime: I agree, and maybe I have thought about it the wrong way. For me the border between compile time and run time is when parameterizing structs or functions. So for example T min[type T](T a, T b) { return a <= b ? a : b; } in here the T argument is "compile time" where the a and b arguments are "runtime." But of course this function can also be called at compile time. So maybe I'm confusing myself by using the wrong terminology. It seems to me like I can't get rid of this distinction. It is important for type safety. So maybe I just need to find a better name for it. Maybe it's the distinction between code generation and code execution… As for pattern matching and pattern expansion being shadow worlds: I had the same initial thought. When I first wrote this post it was all about how to get rid of macros, templates and printf and regular expressions and all other code where pattern expansion is happening. (I didn't recognize them as the same style, I just recognized it as a piece of code where I can't abstract and compose normally) And it turned out badly. I didn't like it. Just like I don't actually like the "methods(…).insert(…)" from above. So I was floating the idea among coworkers and some people were strongly defending templates and I had to understand what I was missing. At the same time I was using a libary called Knockout JS to program some stuff at home. () And that library is brilliant. Watch the video and when he says at 16:09 "It's pretty hard to see how this code could be any simpler" that is absolutely what I found when using it. There's two reasons for that: 1. Dependency tracking and reactive programming, and 2. Pattern expansion in HTML. 
At the time I didn't call it pattern expansion but at some point I realized that all these different things are doing the same pattern: C++ templates, knockouts data-bind=for_each (and friends), printf, regular expressions are all doing the same thing and no programming language has ever fully supported that. And my hope is that the main reason why it always ends up being a shadow world is that nobody has ever fully supported it, so I intend to fully support it. And that part has been difficult but I intend to solve it. As for shadow worlds being about getting rid of unnecessary abstractions: Me personally I tend to identify shadow worlds by code duplication. In several ways. One: if there just is a lot of code duplication then the shadow world probably doesn't support loops or loops are too much work (macros), or it doesn't support abstractions. (printf) And two: if there is a lot of code that had to be reimplemented for the shadow world even though it already exists for other parts of the language. (templates) And then it's often just about how you present things. Graph scripting can be a shadow world of high level programming languages, but it doesn't have to be. It depends on what features are available and how they interact with the rest of the engine. The json data format can be a shadow world of the C++ type system, but it doesn't have to be. It depends on what libraries you use to convert between the two. For example if you have a SIMD Vec4 type, how often does your code explain to the conversion library how to convert it to and from Json? I have seen libraries where you only have to do it once (boost.serialization) and I have seen libraries where you have to do it every time. (protobuf) I don’t think a compile-time/run-time split is required for type safety. Nothing about type parameterization requires compile-time work, it’s just set up to be extremely simple to do all the work at compile-time with no runtime overhead/JIT. 
I guess the sticking point for me is: let’s say you “fully support” pattern matching/pattern expansion. Does that mean you end up with an alternate syntax for absolutely everything in the language? Does it mean that you end up with the Perl (?{ }) syntax for regular expressions that lets you embed arbitrary code? Turing-complete patterns? I don’t really understand what the target is. I also am not sure you can get away from things like JSON being shadow worlds, though maybe I’m using a different definition than you are. For example, you can’t represent NaN in JSON, so data can’t necessarily round-trip through JSON. That case isn’t code duplication, it’s the other space being less-featured than the “main” one… I don’t think it’s necessarily about feature parity. If JSON doesn’t support NaN that’s fine. It’s not really a problem. (in fact it might be a feature, how often have you wanted to have NaN in an object without it indicating an error?) Shadow worlds become a problem when you have to do more work because of them just because they don’t support commonly used patterns well. Especially when those commonly used patterns are used in the shadow world. Like imagine if JSON didn’t support lists. If it only supported maps, people would be emulating the list feature using maps. It makes the difference between “you just get back your object” and having wrapper code all over the place. The difference between zero lines of code and one line of code is huge. But yeah, still not sure where to draw the line between when not supporting a feature is problematic and when it’s OK. (or even good) I think code duplication is a good guide. If lots of code was written to shoehorn support for NaN into json (turn it into a string?) then there would be a problem. Well, an example feature that JSON doesn’t support is DAGs, you have to emulate that. 
I didn’t respond for a while because your comment is correct and points out a valid flaw in my reasoning that I have actually run into before. So your comment could just stand on its own. But for some reason I was thinking of it again and I was wondering: Are you aware of a file format which supports graphs (or just DAGs) well? I could probably come up with a binary format of my own, but maybe somebody has thought about this more than I have and has already solved the problems that I’d run into. Also maybe somebody has come up with a text format that supports graphs and also allows merging. That would be something I would use.

Thanks for writing this. It has changed the way I think about programming. However, I feel that your initial example is, itself, a shadow world. In C++, as in other languages, the compile-time arguments are split from the run-time: we have template parameters and function parameters. These thoughts have led to my proposal for a new feature in C++: constexpr function parameters. These would be regular function parameters that you can mark as being required to be compile-time constants. You can read the latest draft of it here: (some of the formatting is no good on GitHub). I would appreciate it if you (and your readers) would take a look over the proposal and let me know what you think.
https://probablydance.com/2015/02/16/ideas-for-a-programming-language-part-3-no-shadow-worlds/?replytocom=3534
Sorry for the newbish question... I searched the forums and googled it but I got different answers =\

Code:
#include <iostream>
using namespace std;

class Base
{
private:
    int Num;
public:
    Base() { Num = 3; }
    Base( int _Num ) { Num = _Num; }
    int GetNum() { return Num; }
};

class Derived : public Base
{
};

int main()
{
    Derived Mine;
    cout << Mine.GetNum() << endl;
}

Outputs 3.

So I have a Derived called Mine... and it inherits from Base. Base has a private integer called Num. But Num is private! How can Derived have it also, if it's private? Private members shouldn't be inherited! Some sources say that private members are not inherited, and not accessible from the outside (except for friends). Yet if they're not inherited, then why does Derived have it? Thanks for your help! - Evan
https://cboard.cprogramming.com/cplusplus-programming/66441-inheritance.html
Qt is one of the most robust cross-platform application development frameworks. Some well-known applications developed with it are VLC media player, Google Earth, Skype and Maya. It was first released in May 1995 and is dual licensed, which means it can be used for creating open source applications as well as commercial ones. Qt comes with the Qt toolkit, which is a very powerful utility for development of applications. A large number of open source developers use Qt all over the world on various projects.

In this tutorial, it is assumed that the reader is familiar with the fundamentals of C++ and Linux commands.

Installing Qt in Debian Based Linux

I am not covering the installation of Qt on other operating systems as it is well documented on Qt's main website. However, for ease of use I will explain how to install Qt on your debian based linux system. Since we are talking about debian based linux, we will be using the apt command for installation of Qt. The command for installing Qt in your linux system through your terminal is:

$ sudo apt install qt5-default

This will install all the necessary packages required for our simple work of making a Qt based C++ program. At the time of writing this article, qt5 was used, hence the package name is qt5-default.

Making our first Qt based C++ program

To start off, we will make a simple console based program that will only print out the version of Qt used during compilation. We will name this program source code file version.cpp. Also, we will keep this file in a separate (and empty) folder named "version". Here is the source code of that file:

#include <QtCore>
#include <iostream>

using namespace std;

int main(){
    cout << "Qt Version : " << qVersion() << endl;
    return 0;
}
QtCore is a class based library of Qt whose function named qVersion() is called in the main() function of the program which gives the version number of the current Qt. In order to compile this program properly, we need to make it into a project format that will build all the necessary configuration files for compilation. To make this into a project format simply go in your terminal then, go to the directory where you have placed your source file and put this command: $ qmake -project The above command will generate one file with .pro extension. The project files contains the details of projects such as what should be the template name, what should be the target name for the compilation of the binary file, and some other configurations such as which API to disable or not. Next, we will have to generate a Makefile so that the compiler could finally compile the program with the instructed Makefile commands. This could be done with the following command: $ qmake As we run the above command the qmake tool will generate the Makefile in accordance with the details given in project file generated before. After this is done, the last step to do is to compile. Since the Makefile was already generated by qmake, we will just have to run the make command to compile the whole project. Keep in mind that, whatever is the folder name in which the source file resides, that will be the binary file’s name after compilation. For me, this folder’s name was “version”. $ make After this, you could run the program with its binary file’s name, such as: $ ./version It will give an output as such: Qt Version : 5.9.5 And that is your first Qt based C++ program.
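For reference, the .pro file that qmake -project generates for this example usually looks something like the following. Treat this as an illustrative sketch rather than exact output, since the generated contents vary between Qt versions:

```
TEMPLATE = app
TARGET = version
INCLUDEPATH += .

# Input
SOURCES += version.cpp
```

qmake reads this file to produce the Makefile, so if you later add more source files to the project you would list them under SOURCES.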
https://mr-kumar-abhishek.github.io/blog/2020/02/29/making-first-program-with-qt-and-c-plus-plus
CC-MAIN-2022-05
refinedweb
622
66.17
With the release of React Hooks I have seen a lot of posts comparing class components to functional components. Functional components are nothing new in React, however it was not possible before version 16.8.0 to create a stateful component with access to lifecycle hooks using only a function. Or was it?

Call me a pedant (many people already do!) but when we talk about class components we are technically talking about components created by functions. In this post I would like to use React to demonstrate what is actually happening when we write a class in JavaScript.

First, I would like to very briefly show how what are commonly referred to as functional and class components relate to one another. Here's a simple component written as a class:

class Hello extends React.Component {
  render() {
    return <p>Hello!</p>;
  }
}

And here it is written as a function:

function Hello() {
  return <p>Hello!</p>;
}

Notice that the functional component is just a render method. Because of this, these components were never able to hold their own state or perform any side effects at points during their lifecycle. Since React 16.8.0 it has been possible to create stateful functional components thanks to hooks, meaning that we can turn a component like this:

class Hello extends React.Component {
  state = {
    sayHello: false
  };

  componentDidMount = () => {
    fetch('greet')
      .then(response => response.json())
      .then(data => this.setState({ sayHello: data.sayHello }));
  };

  render = () => {
    const { sayHello } = this.state;
    const { name } = this.props;
    return sayHello ? <p>{`Hello ${name}!`}</p> : null;
  };
}

Into a functional component like this:

function Hello({ name }) {
  const [sayHello, setSayHello] = useState(false);

  useEffect(() => {
    fetch('greet')
      .then(response => response.json())
      .then(data => setSayHello(data.sayHello));
  }, []);

  return sayHello ? <p>{`Hello ${name}!`}</p> : null;
}

The purpose of this article isn't to get into arguing that one is better than the other; there are hundreds of posts on that topic already!
The reason for showing the two components above is so that we can be clear about what React actually does with them. In the case of the class component, React creates an instance of the class using the new keyword:

const instance = new Component(props);

This instance is an object; when we say a component is a class, what we actually mean is that it is an object. This new object component can have its own state and methods, some of which can be lifecycle methods (render, componentDidMount, etc.) which React will call at the appropriate points during the app's lifetime.

With a functional component, React just calls it like an ordinary function (because it is an ordinary function!) and it returns either HTML or more React components. Methods with which to handle component state and trigger effects at points during the component's lifecycle now need to be imported if they are required. These work entirely based on the order in which they are called by each component which uses them; they do not know which component has called them. This is why you can only call hooks at the top level of the component and they can't be called conditionally.

JavaScript doesn't have classes. I know it looks like it has classes, we've just written two! But under the hood JavaScript is not a class-based language, it is prototype-based. Classes were added with the ECMAScript 2015 specification (also referred to as ES6) and are just a cleaner syntax for existing functionality. Let's have a go at rewriting a React class component without using the class syntax.
Here is the component which we are going to recreate:

class Counter extends React.Component {
  constructor(props) {
    super(props);
    this.state = {
      count: 0
    };
    this.handleClick = this.handleClick.bind(this);
  }

  handleClick() {
    const { count } = this.state;
    this.setState({ count: count + 1 });
  }

  render() {
    const { count } = this.state;
    return (
      <>
        <button onClick={this.handleClick}>+1</button>
        <p>{count}</p>
      </>
    );
  }
}

This renders a button which increments a counter when clicked, it's a classic! The first thing we need to create is the constructor function; this will perform the same actions that the constructor method in our class performs, apart from the call to super because that's a class-only thing.

function Counter(props) {
  this.state = {
    count: 0
  };
  this.handleClick = this.handleClick.bind(this);
}

This is the function which React will call with the new keyword. When a function is called with new it is treated as a constructor function; a new object is created, the this variable is pointed to it and the function is executed with the new object being used wherever this is mentioned.

Next, we need to find a home for the render and handleClick methods and for that we need to talk about the prototype chain. JavaScript allows inheritance of properties and methods between objects through something known as the prototype chain. Well, I say inheritance, but I actually mean delegation. Unlike in other languages with classes, where properties are copied from a class to its instances, JavaScript objects have an internal prototype link which points to another object. When you call a method or attempt to access a property on an object, JavaScript first checks for the property on the object itself; if it can't find it there then it checks the object's prototype (the link to the other object); if it still can't find it then it checks the prototype's prototype and so on up the chain until it either finds it or runs out of prototypes to check.
Generally speaking, all objects in JavaScript have Object at the top of their prototype chain; this is how you have access to methods such as toString and hasOwnProperty on all objects. The chain ends when an object is reached with null as its prototype, which is normally at Object. Let's try to make things clearer with an example.

const parentObject = { name: 'parent' };
const childObject = Object.create(parentObject, { name: { value: 'child' } });
console.log(childObject);

First we create parentObject. Because we've used the object literal syntax this object will be linked to Object. Next we use Object.create to create a new object using parentObject as its prototype. Now, when we use console.log to print our childObject we should see:

The object has two properties: there is the name property which we just set and the __proto__ property. __proto__ isn't an actual property like name, it is an accessor property to the internal prototype of the object. We can expand these to see our prototype chain:

The first __proto__ contains the contents of parentObject, which has its own __proto__ containing the contents of Object. These are all of the properties and methods that are available to childObject. It can be quite confusing that the prototypes are found on a property called __proto__! It's important to realise that __proto__ is only a reference to the linked object. If you use Object.create like we have above, the linked object can be anything you choose; if you use the new keyword to call a constructor function then this linking happens automatically to the constructor function's prototype property. Ok, back to our component.
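That last claim is easy to verify outside of React. This standalone snippet (runnable in Node, and not part of the original article) shows new linking an instance to the constructor's prototype, and a method being found via the chain rather than on the instance itself:

```javascript
function Counter() {
  this.count = 0;
}

// Methods placed on the prototype are shared by every instance.
Counter.prototype.increment = function () {
  this.count += 1;
};

const instance = new Counter();
instance.increment();

console.log(instance.count); // 1
console.log(Object.getPrototypeOf(instance) === Counter.prototype); // true
console.log(instance.hasOwnProperty('increment')); // false: found via the chain
```

Nothing was copied onto instance; the increment call walks up the chain to Counter.prototype, which is exactly the mechanism the component below relies on.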
Since React calls our function with the new keyword, we now know that to make the methods available in our component's prototype chain we just need to add them to the prototype property of the constructor function, like this:

Counter.prototype.render = function() {
  const { count } = this.state;
  return (
    <>
      <button onClick={this.handleClick}>+1</button>
      <p>{count}</p>
    </>
  );
};

Counter.prototype.handleClick = function() {
  const { count } = this.state;
  this.setState({ count: count + 1 });
};

This seems like a good time to mention static methods. Sometimes you might want to create a function which performs some action that pertains to the instances you are creating but it doesn't really make sense for the function to be available on each object's this. When used with classes they are called Static Methods; I'm not sure if they have a name when not used with classes! We haven't used any static methods in our example but React does have a few static lifecycle methods and we did use one earlier with Object.create. It's easy to declare a static method on a class, you just need to prefix the method with the static keyword:

class Example {
  static staticMethod() {
    console.log('this is a static method');
  }
}

And it's equally easy to add one to a constructor function:

function Example() {}

Example.staticMethod = function() {
  console.log('this is a static method');
};

In both cases you call the function like this:

Example.staticMethod()

Our component is almost ready, there are just two problems left to fix. The first problem is that React needs to be able to work out whether our function is a constructor function or just a regular function because it needs to know whether to call it with the new keyword or not. Dan Abramov wrote a great blog post about this, but to cut a long story short, React looks for a property on the component called isReactComponent.
We could get around this by adding isReactComponent: {} to Counter.prototype (I know, you would expect it to be a boolean, but isReactComponent's value is an empty object; you'll have to read his article if you want to know why!) but that would only be cheating the system and it wouldn't solve problem number two. In the handleClick method we make a call to this.setState. This method is not on our component, it is "inherited" from React.Component along with isReactComponent. If you remember the prototype chain section from earlier, we want our component instance to first inherit the methods on Counter.prototype and then the methods from React.Component. This means that we want to link the properties on React.Component.prototype to Counter.prototype.__proto__. Fortunately there's a method on Object which can help us with this:

Object.setPrototypeOf(Counter.prototype, React.Component.prototype);

That's everything we need to do to get this component working with React without using the class syntax. Here's the code for the component in one place if you would like to copy it and try it out for yourself:

function Counter(props) {
  this.state = {
    count: 0
  };
  this.handleClick = this.handleClick.bind(this);
}

Counter.prototype.render = function() {
  const { count } = this.state;
  return (
    <>
      <button onClick={this.handleClick}>+1</button>
      <p>{count}</p>
    </>
  );
};

Counter.prototype.handleClick = function() {
  const { count } = this.state;
  this.setState({ count: count + 1 });
};

Object.setPrototypeOf(Counter.prototype, React.Component.prototype);

As you can see, it's not as nice to look at as before! In addition to making JavaScript more accessible to developers who are used to working with traditional class-based languages, the class syntax also makes the code a lot more readable.
I'm not suggesting that you should start writing your React components in this way (in fact, I would actively discourage it!), I only thought it would be an interesting exercise which would provide some insight into how JavaScript inheritance works. Although you don't need to understand this stuff to write React components, it certainly can't hurt, and I expect there will be occasions when you are fixing a tricky bug where understanding how prototypal inheritance works will make all the difference.

I hope you have found this article interesting and/or enjoyable; if you have any thoughts on the subject then please let me know. If you've found this helpful then let me know with a clap or two!
https://blog.matt-thorning.dev/react-object-components
CC-MAIN-2021-04
refinedweb
1,914
55.64
I've been studying up on virtual destructors and how they work and I wrote this little program. I've been adding and removing the word "virtual" on lines 9, 16, and 23 to see what happens.

#include <iostream>
using namespace std;

class a {
public:

    a(){cout << "a::a()" << endl;}
    virtual ~a(){cout << "a::~a()" << endl;}
};

class b: public a {
public:

    b(){cout << "b::b()" << endl;}
    virtual ~b(){cout << "b::~b()" << endl;}
};

class c: public b {
public:

    c(){cout << "c::c()" << endl;}
    virtual ~c(){cout << "c::~c()" << endl;}
};

int main()
{
    {
        a* x = new c();
        delete x;
    }
    return 0;
}

Here are my findings...
- When "virtual" is on all the lines, all destructors are called. Makes sense to me.
- When "virtual" is on none of the lines, only a's destructor is called. Also makes sense to me.
- When a's destructor is not virtual and either b's or c's IS virtual, the program crashes. I'm not sure why this one happens.
- When a's destructor is virtual and b's and c's destructors are not virtual, all three destructors are called. This one is really puzzling me. I thought that you started at the base destructor and stopped after the first non-virtual destructor. In this case, b's destructor would not be virtual, so c's should not have been called? But it was called.

Finally, is there ever a time when you would not want to define all three destructors as virtual? If b and c had no dynamic memory to delete, it would cause no problems to not call the destructors, but at the same time, what would the harm be? Any guidance on what that magical word "virtual" does in these cases would be appreciated, particularly what is happening in list items 3 and 4.
https://www.daniweb.com/programming/software-development/threads/426495/when-should-i-use-a-virtual-destructor
CC-MAIN-2017-26
refinedweb
305
71.65
How To Connect To A MySql Database in VB.net
By: Syed M Hussain

Connector/Net

As well as downloading the MySql database, you will need to download the MySql Connector/Net driver. This driver enables developers to easily create .NET applications. Developers can build applications using their choice of .NET languages. MySql Connector/Net is a fully-managed ADO.NET driver written in 100% pure C#. You will need to download this driver from the MySql website. Once downloaded, simply go through the installation process.

Console Application

It's time to look at some code. Load the Visual Studio IDE and select a new Visual Basic Console Application. Once your new project has loaded you should have an empty module. The first thing we need to do is add a reference to the MySql assembly. Click Project from the menu and then select Add Reference. Under the .NET tab, browse for the MySql.Data assembly. Now that the reference has been added, we need to use the Imports directive to import the MySql.Data.MySqlClient namespace. Your Imports directives should look like the following:

Imports System.Data
Imports MySql.Data.MySqlClient

To connect to the MySql database, we need to use the MySqlConnection class. This class has two constructors. The default constructor takes no arguments. The second constructor takes a connection string as an argument. If you use the default constructor, you can specify the connection string later in your code by using the ConnectionString property. Below in listing 1.1 we use the second constructor.

Listing 1.1

con = New MySqlConnection("Server=" + _host + ";User Id=" + _user + ";Password=" + _pass + ";")

In listing 1.1 a MySqlConnection object is created. This object is then used to connect to the database. Listing 1.2 below shows the complete code to connect to a MySql database and query a table. The MySql database is used for this example.
Listing 1.2

Imports System.Data
Imports MySql.Data.MySqlClient

Module Module1

    Private con As New MySqlConnection
    Private cmd As New MySqlCommand
    Private reader As MySqlDataReader

    Private _host As String = "localhost" ' Connect to localhost database
    Private _user As String = "root"      ' Enter your username, default is root
    Private _pass As String = ""          ' Enter your password

    Sub Main()
        con = New MySqlConnection("Server=" + _host + ";User Id=" + _user + ";Password=" + _pass + ";")
        Try
            con.Open()
            ' Check if the connection is open
            If con.State = ConnectionState.Open Then
                con.ChangeDatabase("MYSQL") ' Use the MYSQL database for this example
                Console.WriteLine("Connection Open")
                Dim Sql As String = "SELECT * FROM USER" ' Query the USER table to get user information
                cmd = New MySqlCommand(Sql, con)
                reader = cmd.ExecuteReader()
                ' Loop through all the users
                While reader.Read()
                    Console.WriteLine("HOST: " & reader.GetString(0)) ' Get the host
                    Console.WriteLine("USER: " & reader.GetString(1)) ' Get the username
                    Console.WriteLine("PASS: " & reader.GetString(2)) ' Get the password
                End While
            End If
        Catch ex As Exception
            Console.WriteLine(ex.Message) ' Display any errors.
        End Try
        Console.ReadKey()
    End Sub

End Module
https://java-samples.com/showtutorial.php?tutorialid=1019
CC-MAIN-2022-21
refinedweb
769
69.18
Introduction There is new technology all around us and only more coming every day. Our micro controllers are getting faster, are phones getting smarter, and the cloud is becoming stronger. With all this new technology everyone is asking themselves: How can I utilize it all? Well, today we will do just that. This guide will walk you through using a Raspberry Pi, Microsoft Azure,! Goals For this guide, our primary goal is to show off a proof-of-concept device cycle that is functional and really lets you see the power of combing all of these technological resources. We won't be making production ready code, and won't be utilizing every device/technology to its fullest ability. This guide is already fairly long given all the ground we have to cover, and doing a full dive into each would give us quite the novel. Instead, we're focusing on something that most "tinkers" could make as long as they have the hardware and a limited knowledge of programming. We want to give you a working project and inspire you to push it even further. This project is your 'getting started' guide and the real fun begins when you pick up after that! Step 1: Requirements Knowledge Requirements We have tried to make this guide as 'friendly' as possible to the general audience, but in order for us to move quickly and condense the guide down from a three hour epic to a 1 hour project, we have to assume you know a little something before hand. *Don't know something? Don't worry! Throughout the guide we include lots of links to other sources where you can study up beforehand and catch up quickly! 1. Limited Python and C# Knowledge Basic programming knowledge will be needed. We won't be diving into anything too complicated, but it's good to have a basic understanding of these two languages 2. Raspberry Pi Basics Because it's a bit more popular compared to Windows Phone and Azure, I won't be going too in depth on setting up your Raspberry Pi and wiring it all together. 
There are also a lot of great tutorials out there already to help you get started. Make sure you can ssh into one, know your basic Linux commands, and feel comfortable putting a circuit together (a very basic one). 3.. - A Windows 8 Computer - Windows Phone with 8.1 software update (Must have Cortana!) - Raspberry Pi - SD Card with Raspbian for Rasberrry Pi (NOOBS works just fine) - 3 Wires for your simple circuit - 1 LED - 1 270 ohm resistor - Breadboard for your LED circuit Step 2: Setting Up Azure 1: a Crash Course in Azure. This will allow you to expand on it in the future, and also understand the purpose behind why we are using Azure versus another service. Why Azure? Why not Node.js? That's probably a question a lot of you will be asking, and it's a very legitimate question. Why not just create a basic REST server that we can hit to command our devices? Well the issue is that we live in the future, and the future is all about the Internet of Things (IoT). IoT deals with tens if not hundreds of little devices all around your home, all connected giving you unparalleled control. Will a simple Node server running on a Pi be able to handle all of that? Isn't dealing with all of that funky server code another guide (if not an entire book) in it's own right? Yes, yes it is, and that's why Azure is here to the rescue. While we really won't see the benefits of Azure in this initial guide because we are only hooking up one device, once we begin to hook up more and more devices we will be able to see the true benefits. What Will We Be Making? For our project, we're going to make a Service Bus that will process Topics and Subscriptions. Don't worry, I know we're throwing a lot of fancy words around early, but I assure you that it doesn't take long to get a basic understanding. A Service Bus, in a nutshell, provides a highly robust messaging framework that serves as a relay between two (or more) endpoints. It is essentially the magical 'cloud' that we hear so much about. 
Something sends it a message, it decides where that message should go, sends it, and another device gets that message. The service bus is our mail sorting facility, make sense? So what about these Topics and Subscriptions? Why can't we just call them messages? Well, because it's not quite that simple. A Topic contains a message, but you can't say that a Topic is a message. It's just incorrect.. Furthermore, how much of a pain is it that every time we add an IoT device we need to re-code our entire Cortana logic? Instead, we publish a message to the Topic "LightControls" and that Topic now publishes to all subscribers (which would be every IoT device that controls a lightswitch) to go to the "OFF" position. Still Confused? Don't worry, this isn't something that's easy to pick up (let alone explain) in a paragraph or two. If you still want to learn more, here are some great resources: Introducing Queues and Topics in Azure Service Bus - Code Magazine How to User Service Bus Topics/Subscriptions - Microsoft Windows Azure Service Bus Topics and Subscriptions - Neudesic In a Nut Shell... Cortana is going to send a message to a Topic on the Service Bus (the cloud). Our Cloud will then send that message to every device that has "subscribed" to that topic. So when we send "DeskLightsOff" to the "LightControl" Topic, our DeskLights will have Subscribed to it, will receive it, and then will process that command. Step 3: Setting Up Azure 2: Creating Your Azure Service Bus Time to Code! Well...not yet. First, we have to set up our Azure Service Bus. Microsoft has provided a pretty slick online interface that actually lets us create the whole thing without typing a single line of code. Pretty Cool. Let's get started! 1. Create Your Azure Account Chances are you don't have an Azure Account, so you will need to sign up for their free trial. It's fairly straight forward, although you will need to enter a credit card Worried about paying monthly for this? Don't. 
The service bus we will set up will likely receive less than 1 million calls a month. It currently costs about $1.00 for 30 million. You'll be paying a few dimes at most. If you want to continue past the free trial, however, be prepared to shell out about $11 a month. 2. Create A Service Bus - Log into the Azure Management Portal after you have created your account. - Figure 1: Click on 'Service Bus' to go to the 'Service Bus' Dashboard. - Figure 2: Click on 'Create' In the lower pane. - Figure 3: Enter in a name for your namespace (Must be Unique!). Note that my Region is Central US because that's where I am. Azure will likely put in whatever region it thinks is best, and you should just leave it. Click on the Checkmark in the lower right corner to continue. - Your Namespace will be "activating" for a short bit, and then Boom! Your Service Bus has been Created! - ***NOTE***For this guide, I have named my Namespace "CustomNamespace". Anywhere you see that String, you should replace it with your own Namespace name. Recap So basically all we did here was get started with Azure. We created an account and then a Service Bus. Remember that a Service Bus provides us with a Cloud "Mail Sorting Facility", but right now that Facility doesn't really have any direction. In our next step we will work on adding actual logic with Topics and Subscriptions. Step 4: Setting Up Azure 3: Creating Topics and Subscriptions Now that we've created the Service Bus, it's time to add our Topics and Subscriptions. Remember from our previous write-up that we send messages to a Topic, which is then relayed to one or more subscriptions. Conceptual Setup Before we just start hacking away, let's take a step back and remember what exactly we are doing. Look at Figure 1 and make sure you have an idea of what's going on. We won't be creating that exact model today, but it is definitely something you could do further down the road. 
We have a LightTopic which is going to be where we send commands dealing with turning off the lights. We then have our LightSubscription which we will label DeskLightSubscription because it's what we want the Desk lights (the led we have hooked up to the Pi) to listen to. 1. Create the 'LightTopic' Topic - Figure 2. Click on your Service Bus to go to the Service Bus Dashboard - Figure 3. Click on 'TOPICS' on the upper panel to go to the Topics Page - Figure 4. Click on 'CREATE A NEW TOPIC' - Figure 5. Type in 'LightTopic' (Or whatever you want to call it) Then Click 'CREATE A NEW TOPIC' Great! We've created our first Topic. This is where we will send all of our commands from Cortana. 2. Create the 'LightSubscription' Subscription - Figure 6. Click on 'New Subscription' At the bottom of your Topics page (Where you should have left off) - Figure 7. Enter 'LightSubscription' into the TextBox and click on the Arrow in the Lower Right Corner. - Figure 8. You'll be taken to a 'Details' Page. Leaving everything at the Defaults will be fine, so simply click on the arrow again. - Figure 9. Once you go back to the Topic Dashboard, you'll notice that there is now 1 Subscription attached to the topic. Recap Yup. We're done. It really is that simple. Now, navigating through Azure can be a little daunting at first, especially with all the different terms which we may not all be familiar with. As we saw though, the process is actually pretty simple. I know it doesn't really feel like we've done anything yet because there isn't really anything tangible or code, and unfortunately there won't be for a while. But we have created an essential and important part of our project! Step 5: Setting Up Your Raspberry Pi 1: Environment Setup Now that we've got our Azure Service Bus setup, it's time to setup our IoT device, AKA our Raspberry Pi. What Our Raspberry Pi Will Do: To Recap, our Raspberry Pi is simply going to be a IoT slave. 
It will do what it is told to do which is either turn on or off the lights. In the future, this could be expanded to sending messages regarding different variables you want to measure, but for now we're going to keep it simple for the sake time and length of this guide. The Pi will be attached to our 'LightSubscription' and listen for anytime it gets a message from that subscription, which is of course triggered by our topic. Hardware Setup Setting up the circuit, dealing with GPIO, and general Raspberry Pi shenanigans is a bit beyond the scope of this guide, simply because it's quite literally an instructable in itself. Basically to get a general setup you will need to create a very basic "Blinky" circuit with your Raspberry Pi. If you need help with that, I would highly suggest checking out a few of these guides that cover the subject. Software Setup Now that you have a basic led circuit working on your Pi, we will need to install the software packages needed to interact with our Azure Service Bus. Lucky for us, there is a Python SDK for Azure which works quite well. We can simply use git to clone the repo down and install it on our machine: >>>>git clone >>>>cd azure-sdk-for-python >>>>python setup.py install If you're having any trouble, make sure that you have done an 'update' and 'upgrade' recently. >>>> sudo apt-get update >>>> sudo apt-get upgrade To test and make sure that the azure SDK installed correctly, do the following commands and make sure the output is the same: >>>> python Python 2.7.3 >>>> import azure >>>> >>>> exit() If it installed correctly, typing the line `import azure'` should result in a blank line following it. Simply type `exit()` to leave the python terminal. Step 6: Setting Up Your Raspberry Pi 2: Coding the Initial Setup Alright! Time to actually get down and dirty with some code! The code for this is actually pretty straightforward, and it's only about 50 lines of code. 
That being said, we'll try and walk through it slowly so you can get a better conceptual understanding of how it works. If you just want to "Grab and Go" so to speak, the entire file is attached for those who wish to do so. Import Libraries We have quite a few libraries we need to import for all these moving parts to work. Remember that we are also controlling the pins, we will need to import the GPIO functionality as well. In total, the top of your import statements should look like the following: import RPi.GPIO as GPIO #For Controlling the Pins<br>import threading #To Run Async import sys import select from azure.servicebus import * import os The 3 oddball ones are obviously the GPIO, Threading, and Azure libraries. The middle one, threading, might seem a little strange at first. Essentially we need it to make sure we are actively "listening" to the server subscription. To do this effectively, we run that "listening" on a separate thread. Create Constant Variables Yes, this can seem a bit trivial, but it will also help a lot with understanding the connection between Azure and Python. The code is pretty straight forward: # Make sure you set the following: AZURE_SERVICEBUS_NAMESPACE='CustomNamespace' AZURE_SERVICEBUS_SHARED_KEY_NAME='RootManageSharedAccessKey' AZURE_SERVICEBUS_ACCESS_KEY_VALUE='<INSERT_YOUR_ACCESS_KEY_HERE>' GPIO_BCM_PIN = 11 #The Pin your LED is controlled by The 'namespace' and 'GPIO' variables should be pretty obvious, but the middle two could cause some confusion. Essentially, this is your special "login Key' that will grant access to your Azure Service Bus. For now, don't worry about it (We'll find out where to get this Key in the next step!). Set up Lights on Start-up Whenever we start our program, we want to give our Lights (or in our case, our little LED) a specific state. 
In this case, we will set it to 'OFF':

# setup the GPIO for the LED
GPIO.setmode(GPIO.BCM)
GPIO.setup(GPIO_BCM_PIN, GPIO.OUT)

# Initially turn off the LED
GPIO.output(GPIO_BCM_PIN, 0)

Start the Incoming Messages Thread

Here is where we start to work the magic. We will create a thread and have it target a new function (which we have not created yet) called process_messages. We will create this function in the next step. For now, let's create and start the thread.

# start a thread listening for incoming messages
t = threading.Thread(target=process_messages)  # we will create 'process_messages' next step
t.daemon = True
t.start()

Wait and Clean Up

We will then 'wait' for any raw_input from our user. Essentially, don't end this program unless somebody hits a key. Finally, we'll release any GPIO resources to ensure a safe exit from our program.

# wait until the user enters something
char = raw_input("Press enter to exit program")

# release any GPIO resources
GPIO.cleanup()

Recap

So what does this code do? Well, not much. In fact it won't even compile right now (we're missing that process_messages function!). But we've set up the structure for how our IoT device will work. We will set the light to 'OFF', then listen for any command from the Service Bus on a separate thread. The next step will show us how to do that.

Step 7: Setting Up Your Raspberry Pi 3: Coding the Subscription

This step completely revolves around the process_messages function that we talked about in the previous step. We will use the Python Azure SDK to actively listen for messages from the Azure subscription and update our LED accordingly.

Initialize the Service Bus Object

First things first, we will create this process_messages() function and then the service bus object.
def process_messages():
    # Initialize the service bus
    service_bus = ServiceBusService(
        service_namespace=AZURE_SERVICEBUS_NAMESPACE,
        shared_access_key_name=AZURE_SERVICEBUS_SHARED_KEY_NAME,
        shared_access_key_value=AZURE_SERVICEBUS_ACCESS_KEY_VALUE)

Pretty simple, right? Notice that we used our namespace, key name, and key value to hook up to the Service Bus. These are the credentials you need to actually log in and interact with your service bus. We will find out how to add the key name and value at the end of this step.

Get the Topic and Subscription

Pretty straightforward. We're going to get our 'LightTopic' topic and 'LightSubscription' subscription. This way our Service Bus knows who to interact with.

service_bus.get_topic("lighttopic")
service_bus.get_subscription("lighttopic", "lightsubscription")

Looping and Logic

Now comes the interesting part. We will use a basic 'while True' loop (an infinite loop) to actively listen to our subscription.

while True:
    msg = service_bus.receive_subscription_message('lighttopic', 'lightsubscription', peek_lock=False)
    if msg.body is not None:
        print(msg.body)
        if msg.custom_properties["led"] == 1:
            print("turning on the LED")
            GPIO.output(GPIO_BCM_PIN, 1)
        else:
            print("turning off the LED")
            GPIO.output(GPIO_BCM_PIN, 0)

You can see that we have a few debugging statements in here (printing the message body to the console, along with the LED commands). You are free to delete them if you wish. This code is pretty straightforward. Our 'msg' object holds whatever message came from the subscription, and then we look at its custom_properties. If there is a property named 'led' that equals 1, we turn the LED on. Otherwise, we turn it off. When we start to program Cortana and deal more with the publishing side of things, we will see how exactly we interact with this 'led' custom property.

Find and Insert Your Azure Key

Yes, I know, I'm a tease. It was the first thing I started with and now I'm ending with it.
The process is pretty straightforward:

- Go to your 'CustomNameSpace' dashboard on the Azure Management Portal
- Click on 'Configure'
- You will see a section labeled 'shared access key generator'
- The 'POLICY NAME' is your Key Name
- The 'PRIMARY KEY' is your Key Value
- Insert those two things into your code in their respective spots
- See Figure 1 for help

Run Your Code!

And now you're ready to run! Everything should compile and work. If it doesn't, be sure to check the attached code sample to see if you missed any parts. That being said, it really doesn't do much. That's because right now it's only listening, and we're not sending anything! It's just sitting there, happily listening to an empty cloud. Next, we'll dive into Cortana and Windows Phone to publish a message to the cloud so that our Pi actually has a message to hear!

Step 8: Creating Your Windows Phone App Part 1: Intro to Windows Phone 8.1

Introduction to Windows Phone 8.1

Despite its low market share, Windows Phone really is a great phone that gives you a lot of power as a developer. Because I know that it may not be the most frequently developed platform, and not everyone here is familiar with C#, I'll be trying to go a bit slower and in a bit more detail in this section of the instructable. If you are already comfortable with the Windows Phone platform, speech markup (SSML), and C#, you will probably be able to zip through this pretty quickly. Otherwise, grab a cup of coffee, find a comfy chair, and buckle up. We've got an app to write!

Setting Up Your Environment

First things first, as with all coding projects, is setting up our environment. If you've never developed with C# before, chances are you might not even have Visual Studio installed. Let's get it all set up.

- Make sure you have a computer that is running Windows 8. You must have this!
- If you don't have Visual Studio, download Visual Studio Community 2013 with Update 4
- The Windows Phone 8.1 SDK should be installed with Visual Studio Community
- If you have a Windows Phone, you'll want to register it for development

Not too hard, right? Visual Studio is a big download and quite a large software package. It may take some time to install, but it usually works out of the box pretty well. Next we'll get started on actually creating our Windows Phone app!

Step 9: Creating Your Windows Phone App Part 2: Configure and Navigating Your Project

Creating Your Application

Let's go ahead and create our Windows Phone 8.1 project, then we can walk through some of the nitty gritty and discover what's inside.

- Figure 1 - Click on 'New Project...'
- Figure 2 - Look under Templates > Visual C# > Store Apps, and you should see a project template named 'Blank App (Universal Apps)'. Select it, and type in a name for your home automation app.

*NOTE* - I named my app 'BACH', which stands for 'Badass Automated Cloud Home'. You're free to name your project whatever you please, but any reference to 'BACH' in my code or pictures should be replaced with your own project name.

Configuring Your Project

If you have never used Visual Studio before, this may look a little daunting (and I also apologize that you've missed out on what I believe to be the best IDE, hands down). I'll try to walk you through the thousands of different knobs and buttons pretty specifically, so don't worry. An important first step is to take a look at Figure 3 and understand a bit of what is going on. Remember how we clicked "Universal App" back on that project template? Well, that's because this app could actually be deployed on both Windows Phone and Windows Desktop (8.1 versions and higher, of course). That's why we see two different projects within our 'Solution Explorer' on the right-hand side. Everything under BACH.Windows (Windows 8.1)? Just ignore it. We won't be developing a desktop app in this tutorial.
Another thing to keep in mind is that within our code files we will see a lot of "platform-specific code". That's code that looks like this:

#if WINDOWS_PHONE_APP
    private TransitionCollection transitions;
#endif

That's how the compiled app knows whether it should use a piece of code or not, depending on whether it is deployed as a Windows 8.1 app or as a Windows Phone 8.1 app. Why does this matter? Well, because Visual Studio is pretty slick, and we can actually tell it what we are currently working on. In Figure 3 you'll notice I have also circled something in the upper left corner: a tab that currently says BACH.Windows. That's basically us telling Visual Studio that we are currently working on the Windows app, except that we don't want it to say that! Instead, let's switch it over to say BACH.WindowsPhone (Figure 4). Now, we also need to tell Visual Studio that when we click "Run Program" we want it to run the WindowsPhone version, not the Windows version. To do that, we will simply right-click on the BACH.WindowsPhone (Windows Phone 8.1) project and select 'Set as StartUp Project' (Figure 5).

Navigating the Project Structure

Now that we've configured everything to run in 'WindowsPhone' mode, let's do a very quick overview of your project structure (Figure 6):

- Properties - Beyond the scope of this guide
- References - Other libraries you might use; we'll add some libraries later to connect to our Azure service
- Assets - Where you store all those pretty pictures. You'll see some already in there by default. We won't be dealing with this folder within this guide.
- MainPage.xaml > MainPage.xaml.cs - Your "design" logic and "code" logic, respectively. We could make an entire guide just about how to code these two documents. Basically, what you need to know is that this is the "front page" of your app.
- Package.appxmanifest - App name, requirements, packages, etc. We will be turning on the "Internet" permissions later on.
- BACH.SHARED - App.xaml > App.xaml.cs - What's all this 'Shared' nonsense?
Well, remember this app is 'Universal', so this is code that is shared between your desktop and phone apps. This is where we will put all of the Azure service calls. Hopefully that makes you a bit more comfortable with the Windows Phone structure; don't feel overwhelmed. If you do, or would like to learn more, I would highly suggest you check out Channel9's video tutorial: Windows Phone 8 Development for Absolute Beginners - Channel9

Step 10: Creating Your Windows Phone App Part 3: Azure on Windows Phone 8.1

Where Things Get Complicated...

That super nice, easy-to-explain format we've had for so long? Yup. Say goodbye. This specific step, while not hard, can be really confusing for anyone not familiar with basic REST services and network protocols. In the interest of not blowing this instructable into a full-on novel, we will treat the Azure service call here within Windows Phone as essentially a "black box". That means we will copy and paste some code and just compile it. We won't walk through it, and some of you may have no clue what it even does, but that doesn't really matter, because it works. There is a great list of resources at the end of this guide that can help you understand it better, but for now, let's just bite our lip and press forward.

Installing Libraries and Permissions

We will be using some fancy internet protocol libraries to access Azure, since unfortunately there is not (yet) a specific Azure SDK for Windows Phone. In order to do that, we will have to install a package (a library) from the NuGet Package Manager.

- Figure 1 - Right-click on the WindowsPhone project and select 'Manage NuGet Packages...'
- Figure 2 - Search for 'Json' in the search bar on the upper right, click on 'Json.NET', then 'Install'
- Close out of the Package Manager

Great, we've got the libraries we need. Now we need to enable the 'Microphone' and 'Internet' capabilities for our app.
This will allow our app to use Cortana and post messages to our Azure Service Bus.

- Double-click on 'Package.appxmanifest'
- Figure 3 - Switch to the Capabilities tab
- Figure 3 - Make sure that 'Internet (Client and Server)' and 'Microphone' are checked

Now, at the top of your file, insert the following code to allow us to reference all these new fancy packages within our code:

using Newtonsoft.Json;
using Windows.Security.Cryptography;
using Windows.Security.Cryptography.Core;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;

Coding the Azure Call

The attached code file AzureCall.cs is not the entire App.xaml.cs file; it is simply 3 functions that you should copy and paste into your App.xaml.cs file. I would recommend pasting these 3 functions underneath the OnSuspending function. Let's do a very brief overview of them:

- SendSBMessage - The primary function you will call. We will dive deeper into this function below.
- SASTokenHelper - This function helps us encode our SAS token to log in to our Azure Service Bus.
- HmacSha256 - This function is purely there for cryptography. If you don't understand it, don't worry.

Briefly Looking at SendSBMessage()

Let's just very briefly peek into this function. Specifically, I want us to look at this piece of logic within it:

HttpContent content = new StringContent(json, Encoding.UTF8);
content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
content.Headers.Add("led", message);
string path = "/lighttopic/messages";
var response = client.PostAsync(path, content).Result;

Specifically, let's look at these two lines:

content.Headers.Add("led", message);
string path = "/lighttopic/messages";

Remember that 'lighttopic' is our topic name, and that in the Python script on our Raspberry Pi we checked for the custom 'led' property carried by this header. Hopefully this is starting to come together a little bit.
Essentially, this function takes in a 'message', which is going to be a "0" or "1" for either "OFF" or "ON". We attach that value to our 'led' header and then send it off to our Azure Service Bus.

Step 11: Creating Your Windows Phone App Part 4: Cortana

Introducing Cortana

Cortana is the personal assistant within Windows Phone, and soon the entire Windows ecosystem. She is Microsoft's answer to Apple's 'Siri', and a very strong answer at that. One of the most important features of Cortana is the third-party support for app developers. This means that we can actually use Cortana within our app and interact with her through our own app, rather than being restricted to just the commands she is programmed with.

How We Will Use Cortana: An Overview

For our app, we won't get too deep into all the different ways we can leverage Cortana, simply because there isn't enough time. We'll hard-code Cortana to respond to 2 distinct phrases:

"Turn my lights off" - Will send a message to the Azure Service Bus to turn off our lights
"Turn my lights on" - Will send a message to the Azure Service Bus to turn on our lights

I've provided some more resources for learning about Cortana and the different ways we can develop with her at the end of this guide, but without dissecting every little detail of Cortana, let's learn just enough to get by:

SSML - Speech Synthesis Markup Language is how we tell Cortana what phrases to listen for and her basic responses.

Installing - When we first install our app, we won't be able to interact with it through Cortana until we manually start the app for the first time. This is because opening our app installs the voice commands Cortana needs to recognize it.

Pick Your App Name Wisely - We can't directly interact with our app through Cortana; we have to tell Cortana that we want to use the commands from a specific app, rather than her general list.
For example, if I made a sports app and told Cortana "What is the score of the Packer game?" she wouldn't use the information from my app, because she doesn't know that my app can provide such information (she would instead look it up herself and give you the correct answer anyway). So instead, we have to say "SportsApp, what is the score of the Packer game?". This tells Cortana that the command 'what is the score of the Packer game?' belongs to 'SportsApp' and she should consult that app to give the proper feedback. Want to learn more about Cortana and her features? I highly recommend Channel9's excellent video lecture on her.

Step 12: Creating Your Windows Phone App Part 4: Coding Cortana - SSML

SSML

For the first part of coding Cortana we will focus on the SSML, or Speech Synthesis Markup Language, document. This XML document, also called the Voice Command Definition (VCD) file, will allow us to program Cortana and tell her what to listen for and what kinds of phrases should open our app. First things first, let's go ahead and create our SSML document.

- Figure 1 - Right-click on the WP8.1 project, click Add and then New Item
- Figure 2 - Scroll to find the Voice Command Definition template, and create it with a name. I named mine ControlCommands.xml
- Click OK to create the document.

Navigating SSML

This might look overwhelming at first glance, but that's just because the template is nice enough to fill it with tons of examples. In reality, we don't need a lot of this code, and once you get the hang of it, it is actually pretty simple. Let's take a look at the first block at the top (Figure 3):

<CommandSet xml:
  <CommandPrefix> Contoso Rodeo </CommandPrefix>
  <Example> play a new game </Example>

The 'CommandPrefix' tag is what we talked about in the last step: your app's name as known by Cortana. This doesn't even have to be your actual app's name; it could be anything. I suggest something easy that you are going to remember.
Remember that I named this app 'B.A.C.H', so I am going to insert 'bach' as my CommandPrefix. The 'Example' tag is a suggestion to the user: when they scroll through Cortana and see all the available apps, it can show them suggested things to say. Now erase the other large tag blocks in the file! What? Why did I have you do that? Well, because we're using KISS in this project (Keep It Simple, Stupid). At the risk of boring you to death with all the different SSML write-ups we could use for Cortana, we will keep it simple and just use what we need (also, inserting XML code on Instructables is very difficult due to their editor!). So what do we need to put in? Let's insert the following command block:

<Command Name="DeskLightsOn">
  <Example> turn on my desk lights </Example>
  <ListenFor> turn on my desk lights </ListenFor>
  <Feedback> Turning On Your Desk Lights... </Feedback>
  <Navigate />
</Command>

So what the heck is going on here?

- Command - This tag block is what we reference in the actual code. When we refer to it later on, we'll call it 'DeskLightsOn', after its Name attribute.
- Example - This is, again, the suggested input to the user for this command.
- ListenFor - The phrase that 'triggers' this command to 'fire'. Pretty self-explanatory, hopefully.
- Feedback - What Cortana responds with (she actually speaks it out loud).
- Navigate - This would be used if we were going to a specific page in the app, but we're not, so let's just leave it blank for now.

Take a look at Figure 4 to see the XML code for both the "ON" and "OFF" commands for the desk lights. If you want to check out the entire code file, you can download it from the attached file.

A Final Note

The above usage of Cortana is very primitive, and I will fully acknowledge there are quite a few things we could do to improve this code. However, this isn't an instructable about Cortana (coming soon, though!); it is simply an introduction to her.
If you wish to learn how to leverage and improve this SSML even more, I highly suggest you check out the resources guide at the back of this instructable!

Step 13: Creating Your Windows Phone App Part 4: Coding Cortana - Logic

Cortana's Logic

Now that Cortana can hear you, we need her to understand you. This part of the guide goes into the 'backend' of Cortana and pairing her with our app's logic. We want to program her so that when she hears one of these commands, she sends a message to our Azure Service Bus and turns the lights on or off.

Installing the Voice Commands

Don't forget! This is a problem a lot of people have when first using Cortana, simply because it is easily overlooked. Let's do it right away to make sure it is done and over with. Put the following code in your MainPage.xaml.cs document. First, import the correct references at the top of your file:

using System.Threading.Tasks;
using Windows.Media.SpeechRecognition;
using Windows.Storage;

Then the code to install our voice commands can be written as a function like so:

private async Task InstallVoiceCommandsAsync()
{
    var storageFile = await StorageFile.GetFileFromApplicationUriAsync(new Uri("ms-appx:///ControlCommands.xml"));
    await VoiceCommandManager.InstallCommandSetsFromStorageFileAsync(storageFile);
}

Not familiar with async and await? Read up on the MSDN documentation. Now that we have our function, let's make sure to call it! A good spot to put it is within the OnNavigatedTo function. Because our install is an asynchronous task, we will have to turn OnNavigatedTo into an async function by simply changing its signature:

protected override async void OnNavigatedTo(NavigationEventArgs e)

Then we just check that this is the first time the page has been navigated to and install our voice commands:

if (e.NavigationMode == NavigationMode.New)
{
    await InstallVoiceCommandsAsync();
}

Great! Now our voice commands will install as soon as we open the app for the first time!
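Before we write the voice command logic, it can help to see the whole round trip in one place: the phone turns a command name into the string "0" or "1", and the Pi's loop from Step 7 treats a 'led' property of 1 as ON. The sketch below (plain Python, with names of my own choosing; it is not part of the app) mirrors that mapping so you can sanity-check it without any hardware or network:

```python
# Hypothetical sketch: mirrors the phone-side command mapping and the Pi-side check.
PHONE_SWITCH = {"DeskLightsOn": "1", "DeskLightsOff": "0"}  # command name -> message

def pi_led_state(custom_properties):
    # Same branch as the Step 7 listener: 'led' == 1 turns the LED on.
    return custom_properties.get("led") == 1

# Simulate the round trip for both commands.
for command in ("DeskLightsOn", "DeskLightsOff"):
    message = PHONE_SWITCH[command]       # what the phone would send
    properties = {"led": int(message)}    # what the Pi reads from the message
    print(command, pi_led_state(properties))
```

Running it prints `DeskLightsOn True` and `DeskLightsOff False`, which is exactly the behavior we are about to wire up in C#.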
Voice Command Logic

Now we will use the command the user spoke to figure out which message to send to our Azure Service Bus. Open up your App.xaml.cs file and scroll to just below the RootFrame_FirstNavigated function. We will put the following function under there:

protected override void OnActivated(IActivatedEventArgs args)
{
    base.OnActivated(args);
    if (args.Kind == ActivationKind.VoiceCommand)
    {
        var commandArgs = args as VoiceCommandActivatedEventArgs;
        if (commandArgs != null)
        {
            SpeechRecognitionResult speechRecognitionResult = commandArgs.Result;
            var voiceCommandName = speechRecognitionResult.RulePath[0];
            switch (voiceCommandName)
            {
                case "DeskLightsOff":
                    SendSBMessage("0");
                    break;
                case "DeskLightsOn":
                    SendSBMessage("1");
                    break;
            }
        }
    }
    Window.Current.Activate();
}

This function is called every time somebody uses our command prefix (BACH) when giving Cortana an instruction. While it may seem a little confusing at first, the really important part is the following:

switch (voiceCommandName)
{
    case "DeskLightsOff":
        SendSBMessage("0");
        break;
    case "DeskLightsOn":
        SendSBMessage("1");
        break;
}

We use a switch statement to jump between the different commands in our SSML document, checking the string against the Name attribute of the Command blocks in the SSML doc. Once we know which command we have received, we can act accordingly by firing SendSBMessage, which sends a message to our Azure Service Bus: for "OFF" we send a "0" and for "ON" we send a "1". Code not working 100%? Don't worry, I've attached both a MainPage.xaml.cs and App.xaml.cs that should give you more clarity when following along.

Step 14: The Final Product!

You Did It!

With a few simple clicks, you can deploy your app to your Windows Phone device and you are all set! Initially, you'll simply see a black screen (Figure 1), but that is nothing to be alarmed about.
Remember, we didn't really code any GUI for this project and simply based it entirely around the voice commands with Cortana. By holding down the search button you can activate Cortana. Simply say "BACH, turn on my desk lights" and Cortana should return with the message that she is turning them on (Figure 2). Hopefully, this will trigger your LED to go on! Congrats!

Moving Forward...

In this guide, we've only just barely scratched the surface of what this technology is capable of. Here are just a few ideas that you can take going forward to really flesh out your automated home with Cortana and Azure:

- Hook up your music player to Cortana, set wake-up alarms!
- Hook up a relay with your RPi to control large lights
- Use Cortana's geofencing feature to have the lights turn off when you leave your house (guide on this coming soon!)
- Create a GUI interface on your Windows Phone app to control your lights
- Create an Azure Storage Table, keep the 'status' of different IoT devices around your home, and send them to your app!

Got more ideas? Post them in the comments and I will add them to the list!

Step 15: Troubleshooting and References

Troubleshooting

I keep getting the error: Error 2 The type or namespace name 'SpeechRecognition' does not exist in the namespace 'Windows.Media' (are you missing an assembly reference?)

Yes, this is a strange one. I actually made two of these projects (one to test it, the other as a code-along). On one project I did have this error, and on the other I didn't, despite identical code. The problem is that 'SpeechRecognition' is a library only available to Windows Phone, and not Windows in general. Because this code is in our App.xaml.cs file, it's technically shared code. To get around this, simply wrap it in a Windows Phone IF:

#if WINDOWS_PHONE_APP
using Windows.Media.SpeechRecognition;
#endif

That will make it exclusive to your Windows Phone project, and it should compile.
I keep getting "Authorization Denied" when trying to connect to my Azure Service Bus through the Windows Phone app

Let me guess, you were the rebellious one who decided to name all their variables on their own? Double-check that you have substituted all my instructable names with your own custom names. Doing a 'CTRL-F' on certain variables works well.

It takes almost 20 seconds from giving the command to my phone to the light actually turning on/off. What gives?

Yes, I've encountered this too. I've seen the delay range anywhere from 10-30 seconds, with an average of around 17 seconds. My initial thought is that this slow-down is coming from the code on the RPi, and that it isn't 'listening' to the subscription as fast as we would hope. If I find a fix for it in the future, I will add it to this instructable. I will post more troubleshooting tips as people comment on various problems!

References

Raspberry Pi and Azure
- Brokered Messaging REST Tutorial
- Azure Service Bus and IoT
- How to Use Service Bus Topics/Subscriptions
- Service Bus Queues, Topics and Subscriptions Overview
- Service Bus Topic and Subscription Tutorial
- Python Azure SDK
- Shout out to 'mlinnen' for his work in developing this awesome RPi Service Bus example!

Windows Phone 8.1 and Cortana
- Windows Phone 8.1 for Absolute Beginners
- Windows Phone SDK Documentation
- Windows Phone 8.1 Development Center
- Universal Windows App Development with Cortana and the Speech SDK
- Launch a foreground app with voice commands in Cortana
- Integrating your Application to work with Cortana in Windows Phone 8.1
- Building Text to Speech Applications using Windows Phone 8.1 and Cortana
- What Developers Need to Know about the Cortana API

Runner Up in the Home Automation Contest

23 Discussions

4 years ago on Introduction

This should have made it to the finals of the Coded Contest and should have won the Judges prize! I cannot believe that it did not get picked!
Reply 4 years ago on Introduction

Thanks for the support, MadDocks. I will say I am a little blindsided that I put so much effort into this guide and failed to even get a nod as a finalist. This was an experiment (my first time posting an instructable), and since there aren't many advantages to posting on Instructables (if this kind of effort isn't rewarded), I think I will just be posting these guides to my own blog from now on.

Reply 4 years ago on Introduction

There are several images and diagrams in your ible that you didn't create, which can taint an otherwise stellar project. Sadly, those stolen images may have affected your chances of winning.

Reply 4 years ago on Introduction

Oh, I'm sure it could have. I guess I'm just disappointed that the entry was even accepted into the contest then. Had that detail been highlighted prior to the closing, that's a fix that could have easily been made. You don't tell sports players that they are fine to use steroids for the season but then invalidate their playoff run.

2 years ago

can I just use my computer instead of a windows phone? I just want it for around-the-house automation

3 years ago on Introduction

Wow! So much detail!

3 years ago

I think the likes and views just show how much your work is appreciated; hope you add more instructables in the future. :) Awesome project! :)

4 years ago on Introduction

Error 101. Text is disabled to Mazziz due to a unresponding crash. Sorry for any Incoveince caused. By Instructables.

4 years ago on Introduction

Good work. I like the name Windows Azure.

Reply 4 years ago on Introduction

I got something funny. The Computer, Windows Azure. Ha! That is funny

4 years ago

This no work. U says iPhone but it no compatable... It no integrate

Reply 4 years ago on Introduction

I'm sorry that you misread the guide to think that iPhone would be compatible. This guide is exclusively for Windows Phone.

Reply 4 years ago on Introduction

Oh.
Sorry

Reply 4 years ago on Introduction

Don't be rude

4 years ago on Introduction

This is the best ever! 10 out of 10!

4 years ago on Introduction

will there be a guide/howto on adding more devices, such as an oven to preheat, or little custom devices that are little more than a motor in a circuit, etc.?

Reply 4 years ago on Introduction

That's a fine idea. It becomes tricky because a lot of those kinds of solutions are very custom. There isn't really a strong standard interface for ovens, microwaves, washing machines, etc. Making a guide on how to hook it up to my oven might be totally different from what you would need.

4 years ago on Introduction

Nice speech at the front. Like this instructible!

4 years ago on Introduction

There's no way I'll put my home automation server in the cloud, but the tutorial is good. Thanks for sharing.

Reply 4 years ago on Introduction

I would agree that if you have the networking skills, it makes far more sense to just create your own local server. I used Azure mainly so that people without strong networking skills could get something up and running quickly.
Variables

You are encouraged to solve this task according to the task description, using any language you may know. Demonstrate the language's methods of variable declaration, initialization, assignment, datatypes, scope, referencing, and other variable-related facilities.

Ada

declare
   X : String := "Hello"; -- Create and initialize a local variable
   Y : Integer;           -- Create an uninitialized variable
   Z : Integer renames Y; -- Rename Y (creates a view)
begin
   Y := 1; -- Assign to the variable
end;      -- End of the scope

ALGOL 68

Local variables are generally called local variables in ALGOL 68. Variables must be declared before use. In traditional ALGOL 68, variables must be declared before any labels in a compound clause. The declaration of a variable, without assigning a value, takes the form:

<typename> <variablename>;

INT j;

Some common types are CHAR, STRING, SHORT INT, INT, LONG INT, REAL, LONG REAL, BITS, and BYTES. Multiple variables may be defined in a single statement as follows:

LONG REAL double1, double2, double3;

It is possible to initialize variables with expressions having known values when they are defined. The syntax follows the form:

<typename> <variablename> := <initializing expression>;

SHORT INT b1 := 2500;
LONG INT elwood = 3*bsize, jake = bsize - 2;

Strings in ALGOL 68 are flexible arrays of CHAR. To declare initial space for a string of exactly 20 characters, the following declaration is used:

FLEX [20] CHAR mystring;

All arrays are structures that include both the lower bound (lwb) and upper bound (upb) of the array. Hence strings in ALGOL 68 may safely contain null characters and can be reassigned with longer or shorter strings. To declare an initialized string that won't be changed, the following declaration may be used:

[] CHAR mytext = "The ALGOL 68 Language";

There are more rules regarding arrays, variables containing pointers, dynamic allocation, and initialization that are too extensive to cover here.
[edit] AppleScript

Variables are untyped in AppleScript, but they must be instantiated before use. Example:

set x to 1

Scope may be explicitly defined before instantiation using either the global or local declarations.

global x

If undeclared, AppleScript will automatically set the scope based on the following rule: variables declared at the top level of any script will be (implicit) globals, variables declared anywhere else will be (implicit) locals. Scope cannot be changed after being explicitly or implicitly defined.

set x to 1
local y
set y to 2

Where a variable has both local and global instances, it is possible to use the my modifier to access the global (top-level) instantiation.

on localx()
	set x to 0 -- implicit local
	return x
end localx

on globalx()
	set x to 0 -- implicit local
	return my x
end globalx

on run
	set x to 1 -- top-level implicit global
	return {localx(), globalx()}
end run
--> RETURNS: {0, 1}

AppleScript also supports top-level entities known as properties that are global to that script.

property x : 1

Properties behave exactly as global variables except that they are persistent. Their most recent values are retained between script executions (or until the script is recompiled).

[edit] AutoHotkey

x = hello ; assign verbatim as a string
z := 3 + 4 ; assign an expression
if !y ; uninitialized variables are assumed to be 0 or "" (blank string)
Msgbox %x% ; variable dereferencing is done by surrounding '%' signs
fx() {
  local x ; variable default scope in a function is local anyways
  global y
  static z=4 ; initialized once, then value is remembered between function calls
}

[edit] AWK

In awk, variables are dynamically typed, and do not need declaration prior to use.

a = 1 # Here we declare a numeric variable
fruit = "banana" # Here we declare a string variable

In awk multiple assignments are possible from within a single statement:

x = y = z = 3

Variables have global scope, and there is no way to make a variable local to a block.
However, function arguments are local, so it is possible to make a variable local to a function by listing the variable as an additional dummy function argument after the required arguments:

function foo(j, k) {
  # j is an argument passed from caller
  # k is a dummy not passed by caller, but because it is in the
  # argument list, it will have a scope local to the function
  k = length(j)
  print j " contains " k " characters"
}

[edit] BASIC

In BASIC, variables are global and there is no scope. However, it is an error to reference a variable before it has been assigned.

10 LET A=1.3
20 LET B%=1.3: REM The sigil indicates an integer, so this will be rounded down
30 LET C$="0121": REM The sigil indicates a string data type; the leading zero is not truncated
40 DIM D(10): REM Create an array of 10 numbers
50 DIM E$(5,10): REM Create an array of 5 strings, with a maximum length of 10 characters
60 LET D(1)=1.3: REM Assign the first element of D
70 LET E$(3)="ROSE": REM Assign a value to the third string
80 PRINT D(3): REM Unassigned array elements have a default value of zero
90 PRINT E$(3): REM Printed in a field of ten characters because string arrays are not dynamic
100 PRINT E$(3);"TTA CODE": REM There will be spaces between ROSE and TTA CODE
110 DIM F%(10): REM Integers use less space than floating point values
120 PRINT G: REM This is an error because G has not been defined
130 PRINT D(0): REM This is an error because elements are numbered from one
140 LET D(11)=6: REM This is an error because D only has 10 elements
150 PRINT F%: REM This is an error because we have not provided an element number
160 END

[edit] Applesoft BASIC

In Applesoft BASIC, variables are global and there is no scope. And, it is not an error to reference a variable before it has been assigned. The LET keyword is optional. Almost all math is done using floating point numbers by default.
Using floating point variables is almost always faster than using integer variables, which require extra conversion between floating point and integer. Integers use less space than floating point values. Applesoft BASIC array indexes start at zero.

10 A = 1.7: REM LET IS NOT REQUIRED
20 LET B% = 1.7: REM THE PERCENT SIGN INDICATES AN INTEGER; THIS GETS TRUNCATED DOWN
30 LET C$ = "0121": REM THE DOLLAR SIGN INDICATES A STRING DATA TYPE. THE LEADING ZERO IS NOT TRUNCATED
40 DIM D(20): REM CREATE AN ARRAY OF 21 FLOATING POINT NUMBERS
50 DIM E$(5,10): REM CREATE A TWO DIMENSIONAL ARRAY OF 66 STRINGS
60 LET D(1) = 1.3: REM ASSIGN THE SECOND ELEMENT OF D
70 Y$(3) = "ROSE": REM ASSIGN A VALUE TO THE FOURTH STRING
80 PRINT X: REM UNASSIGNED FLOATING POINT AND INTEGER VARIABLES HAVE A DEFAULT VALUE OF ZERO
90 PRINT Y$(2): REM UNASSIGNED STRING VARIABLES ARE EMPTY
100 PRINT Y$(3);"TTA CODE": REM THERE WON'T BE SPACES BETWEEN ROSE AND ETTA
110 F%(10) = 0: REM IF ARRAYS ARE NOT DECLARED THEY HAVE 11 ELEMENTS BY DEFAULT; IE. DIM F%(10)
120 PRINT G: REM THIS PRINTS 0 AND IS NOT AN ERROR EVEN THOUGH G HAS NOT BEEN DEFINED
130 PRINT D(0): REM THIS IS NOT AN ERROR BECAUSE ELEMENTS ARE NUMBERED FROM ZERO
140 PRINT F%: REM THIS PRINTS 0 BECAUSE F% IS A DIFFERENT VARIABLE THAN THE ARRAY F%(10)
150 LET D(21) = 6: REM THIS IS AN ERROR BECAUSE D ONLY HAS 21 ELEMENTS INDEXED FROM 0 TO 20

[edit] Batch File

Batch file variables are not limited to data types and they do not need to be initialized before use.

@echo off
::setting variables in different ways
set myInt1=5
set myString1=Rosetta Code
set "myInt2=5"
set "myString2=Rosetta Code"
::Arithmetic
set /a myInt1=%myInt1%+1
set /a myInt2+=1
set /a myInt3=myInt2 + 5
set myInt
set myString
pause>nul

[edit] BBC BASIC

REM BBC BASIC (for Windows) has the following scalar variable types;
REM the type is explicitly indicated by means of a suffix character.
REM Variable names must start with A-Z, a-z, _ or `, and may contain REM any of those characters plus 0-9 and @; they are case-sensitive. A& = 123 : REM Unsigned 8-bit byte (0 to 255) A% = 12345678 : REM Signed 32-bit integer (-2147483648 to +2147483647) A = 123.45E6 : REM Variant 40-bit float or 32-bit integer (no suffix) A# = 123.45E6 : REM Variant 64-bit double or 32-bit integer A$ = "Abcdef" : REM String (0 to 65535 bytes) REM Scalar variables do not need to be declared but must be initialised REM before being read, otherwise a 'No such variable' error is reported REM The static integer variables A% to Z% are permanently defined. REM BBC BASIC also has indirection operators which allow variable-like REM entities to be created in memory: DIM addr 7 : REM Allocate 8 bytes of heap ?addr = 123 : REM Unsigned 8-bit byte (0 to 255) !addr = 12345 : REM Signed 32-bit integer (-2147483648 to +2147483647) |addr = 12.34 : REM Variant 40-bit or 64-bit float or 32-bit integer $addr = "Abc" : REM String terminated by CR (0 to 65535 bytes) $$addr = "Abc": REM String terminated by NUL (0 to 65535 bytes) REM The integer indirection operators may be used in a dyadic form: offset = 4 addr?offset = 12345678 : REM Unsigned 8-bit byte at addr+offset addr!offset = 12345678 : REM Signed 32-bit integer at addr+offset REM All variables in BBC BASIC have global scope unless they are used REM as a formal parameter of a function or procedure, or are declared REM as LOCAL or PRIVATE. This is different from most other BASICs. [edit] Bracmat Variable declaration. Variables local to a function ( i and j in the example below) can be declared just before the body of a function. (myfunc=i j.!arg:(?i.?j)&!i+!j) Global variables are created the first time they are assigned a value. Initialization. Local variables are initialised to 0. Assignment. There are two ways. To assign unevaluated code to a variable, you normally would use the <variable>=<unevaluated expression> syntax. 
To assign an evaluated expression to a variable, you use pattern matching, as in

<evaluated expression>:?<variable>

Datatypes. There are no datatypes. The nature of an expression is observed by pattern matching.

Scope. Local variables have dynamic scope.

Referencing. Variables are referenced using the !<variable> syntax.

Other variable related facilities. Global variables (name as well as value) can be removed from memory with the built-in tbl function. The names of built-in functions such as put and lst can be used as variable names without adverse effects on the built-in function. It is not possible to redefine built-in functions to do something different.

[edit] C

Local variables are generally called auto variables in C. Variables must be declared before use. The declaration of a variable, without assigning a value, takes the form:

<typename> <variablename>;

int j;

Some common types are: char, short, int, long, float, double and unsigned.

Multiple variables may be defined in a single statement as follows:

double double1, double2, double3;

It is possible to initialize variables with expressions having known values when they are defined. The syntax follows the form:

<typename> <variablename> = <initializing expression>;

short b1 = 2500;
long elwood = 3*BSIZE, jake = BSIZE - 2;

Strings in C are arrays of char terminated by a 0 or NUL character. To declare space for a string of up to 20 characters, the following declaration is used:

char mystring[21];

The extra length leaves room for the terminating 0. To declare an initialized string that won't be changed, the following declaration may be used:

const char * mytext = "The C Language";

There are more rules regarding arrays, variables containing pointers, dynamic allocation, and initialization that are too extensive to cover here.

[edit] C++

Much like C, C++ variables must be declared before use; unlike C89, however, they may be declared at any point in a block, not only at its start.
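A short sketch of our own (not from the task page) illustrating that point: in standard C++, declarations may appear mid-block, and template types take their parameters in angle brackets. The helper name sum_with_size is hypothetical, invented for this example.

```cpp
#include <string>
#include <vector>

// sum_with_size is an illustrative helper: it shows that in C++
// (unlike C89) declarations may appear at any point in a block.
int sum_with_size(int a) {
    a += 1;                       // assignment to an existing variable
    std::vector<int> v{1, 2, 3};  // declared mid-block; template parameters in angle brackets
    int b = a + static_cast<int>(v.size());  // another mid-block declaration
    return b;
}
```

For example, sum_with_size(1) increments its argument to 2 and adds the vector's size of 3, giving 5.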
To declare a as an integer, you write the type of the variable, then the variable name, followed by a semicolon ";":

int a;

Variables of template types are specified with template parameters in angle brackets after the class name:

std::vector<int> intVec;

[edit] C#

Variables in C# are very flexible, in that they can be declared practically anywhere, with any scope. As in other languages, commonly used types are int, string, double etc. They are declared with the type first, as in C:

int j;

Multiple variables may be defined in a single line as follows:

int p, a, d;

It is also possible to assign variables, either while declaring or in the program logic:

int a = 4;
int b;
int c = Func(a);
b = 5;

[edit] COBOL

For example usage of array variables in COBOL, see Arrays#COBOL.

[edit] Assignment

Variables can be assigned values using either MOVE or SET. SET is used for assigning values to indexes, pointers and object references, and MOVE is used for everything else.

MOVE 5 TO x
MOVE FUNCTION SOME-FUNC(x) TO y
MOVE "foo" TO z
MOVE "values 1234" TO group-item
SET some-index TO 5

One of COBOL's better-known features is MOVE CORRESPONDING, where variables subordinate to a group item are assigned the values of variables with the same names in a different group item. The snippet below uses this to reverse the date:

01  normal-date.
    03 year   PIC 9(4).
    03 FILLER PIC X VALUE "-".
    03 month  PIC 99.
    03 FILLER PIC X VALUE "-".
    03 dday   PIC 99. *> Misspelling is intentional; day is a reserved word.

01  reversed-date.
    03 dday   PIC 99.
    03 FILLER PIC X VALUE "-".
    03 month  PIC 99.
    03 FILLER PIC X VALUE "-".
    03 year   PIC 9(4).
...
PROCEDURE DIVISION.
    MOVE "2012-11-10" TO normal-date
    MOVE CORR normal-date TO reversed-date
    DISPLAY reversed-date *> Shows '10-11-2012'

[edit] Declaration

Variables in COBOL are declared in the DATA DIVISION.
In standard COBOL, they can be declared within it in: - the FILE SECTION, where sort-files/file records are defined and associated with their respective file/sort descriptions. - the WORKING-STORAGE SECTION, where static data is declared. - the LOCAL-STORAGE SECTION, where automatic data is declared. - the LINKAGE SECTION, where parameters are defined. - the REPORT SECTION, where reports are defined and associated with report descriptions. - the SCREEN SECTION, where the screens used for terminal I/O are described. Variables are defined in the following format: level-number variable-name clauses. Variable type is defined in a PICTURE clause and/or a USAGE clause. PICTURE clauses can be used like so: 01 a PIC X(20). *> a is a string of 20 characters. 01 b PIC 9(10). *> b is a 10-digit integer. 01 c PIC 9(10)V9(5). *> c is a decimal number with a 10-digit integral part and a 5-digit fractional part. 01 d PIC 99/99/99. *> d is an edited number, with a slash between each pair of digits in a 6-digit integer. The USAGE clause is used to define pointers, floating-point numbers, binary numbers, packed decimals and object references amongst others. Each variable has a level-number, which is a number from 1 to 49, or 77, which goes before the variable name. Level-numbers indicate how data is grouped together. Variables with higher level-numbers are subordinate to variables with lower level-numbers. The 77 level-number indicates the variable has no subordinate data and is therefore not a group item. Group items can include FILLER items which are parts of a group item which are not directly accessible. *> Group data items do not have a picture clause. 01 group-item. 03 sub-data PIC X(10). 03 more-sub-data PIC X(10). [edit] Initialization Initialization is done via the VALUE clause in the DATA DIVISION or via the INITIALIZE statement in the PROCEDURE DIVISION. 
The INITIALIZE statement will set a variable back to either the value in its VALUE clause (if INITIALIZE has a VALUE clause) or to the appropriate value out of NULL, SPACES or ZERO. DATA DIVISION. WORKING-STORAGE SECTION. 01 initialized-data PIC X(15) VALUE "Hello, World!". 01 other-data PIC X(15). ... PROCEDURE DIVISION. DISPLAY initialized-data *> Shows 'Hello, World!' DISPLAY other-data *> Will probably show 15 spaces. Group items can be initialized, but they are initialized with a string like so: 01 group-item VALUE "Hello!12345". 03 a-string PIC X(6). *> Contains "Hello!" 03 a-number PIC 9(5). *> Contains '12345'. [edit] Reference Modification Reference modification allows a range of characters to be taken from a variable. some-str (1:1) *> Gets the first character from the string some-num (1:3) *> Get the first three digits from the number another-string (5:) *> Get everything from the 5th character/digit onwards. *> To reference modify an array element some-table (1) (5:1) *> Get the 5th character from the 1st element in the table [edit] Scope Variables by default are local to the subprogram/class/etc. (source element) they are defined in. The GLOBAL clause allows the variable to be accessed in any nested source units as well. To be accessed from those inner source elements, the variable must be redeclared exactly as it was in the outer one, complete with GLOBAL clause, otherwise the variable in the inner one will shadow the global variable from the outer one. [edit] Common Lisp [edit] Declaration Special variables are more or less like globals in other languages: Special variables may be defined with defparameter. (defparameter *x* nil "nothing") Here, the variable *x* is assigned the value nil. Special variables are wrapped with asterisks (called 'earmuffs'). The third argument is a docstring. We may also use defvar, which works like defparameter except that defvar won't overwrite the value of the variable that has already been bound. 
(defvar *x* 42 "The answer.")

It does, however, overwrite the docstring, which you can verify:

(documentation '*x* 'variable)

defconstant works in the same way, but binds a constant. Constants are wrapped with '+' signs rather than earmuffs. Common Lisp symbol names are read case-insensitively, so saying

(equal +MoBy-DicK+ +moby-dick+)

will return t no matter the combination of upper and lower-case letters used in the symbol names. By convention, symbols are hyphenated and lower-case.

For non-special variables, we use let:

(let ((jenny (list 8 6 7 5 3 0 9))
      hobo-joe)
  (apply #'+ jenny))

The symbols 'jenny' and 'hobo-joe' are lexically bound, meaning that they are valid within the scope of the let block. If we move the apply form out of scope, the compiler will complain that 'jenny' is unbound. The let macro binds an arbitrary number of variables: 'jenny' is bound to a list of numbers, whereas hobo-joe is bound to nil because we haven't provided a value. jenny and hobo-joe have the same scope.

Common Lisp prefers lexical over dynamic scope for ordinary variables. If we've defined *x* as a special variable, then a let of *x* creates a new dynamic binding that is undone when the let exits, leaving the outer value of *x* untouched.

(progn
  (let ((*x* 43))
    (print *x*))
  (print *x*))

If *x* has previously been bound to the value 42, then this example will output 43, then 42.

[edit] Mutation

We use setf to modify the value of one or more variables.

(setf *x* 625)

[edit] Types

Common Lisp is dynamically typed, so we don't have to explicitly tell it what type of value a symbol holds. But we can if we want to:

(declaim (ftype (function (fixnum) fixnum) frobnicate))

(defun frobnicate (x)
  (declare (type fixnum x))
  (the fixnum (+ x 128)))

A fixnum is like a signed machine integer in C, but slightly narrower. The size of a fixnum can vary by architecture, but because the Lisp implementation typically reserves a couple of bits for its own tagging purposes, a fixnum can only represent about 2^62 different values on a 64-bit platform.
The keyword the tells the compiler the type returned by an expression; frobnicate returns a fixnum. The declare statement applies to the function in which the statement appears. In the example, we assert to the compiler that x is a fixnum. Whereas declare describes a function, declaim is used to inform the compiler about the program globally. The usage above gives type information about one or more functions. The fixnum in parentheses tells the compiler that the function takes one argument, which is a fixnum, and that the function returns a fixnum. We can provide any number of function names.

[edit] Delphi

var
  i: Integer;
  s: string;
  o: TObject;
begin
  i := 123;
  s := 'abc';
  o := TObject.Create;
  try
    // ...
  finally
    o.Free;
  end;
end;

[edit] D

float bite = 36.321; /// Defines a floating-point number (float), "bite", with a value of 36.321
float[3] bites; /// Defines a static array of 3 floats
float[] more_bites; /// Defines a dynamic array of floats

[edit] DWScript

See Delphi for "classic" declaration. In DWScript, variables have to be declared before use, but can be declared inline, and their type can also be inferred.

var i := 123; // inferred type of i is Integer
var s := 'abc'; // inferred type of s is String
var o := TObject.Create; // inferred type of o is TObject
var s2 := o.ClassName; // inferred type of s2 is String as that's the type returned by ClassName

[edit] E

E is an impure, lexically scoped language. Variables must be defined before use (they are not created on assignment). Definition of variables is a special case of pattern matching. An identifier occurring in a pattern is a simple non-assignable variable. The def operator is usually used to define local variables:

def x := 1
x + x # returns 2

Assignment

The pattern var x makes x an assignable variable, and := is the assignment operator.

def var x := 1
x := 2
x # returns 2

(As a shorthand, var x := ... is equivalent to def var x := ....)
There are update versions of the assignment operator, in the traditional C style ( +=, -=, |=, etc.), but also permitting any verb (method name) to be used: def var x := 1 x += 1 # equivalent to x := x + 1, or x := x.add(1) x # returns 2 def var list := ["x"] list with= "y" # equivalent to list := list.with("y") list # returns ["x", "y"] Patterns Since variable definition is part of pattern matching, a list's elements may be distributed into a set of variables: def [hair, eyes, var shirt, var pants] := ["black", "brown", "plaid", "jeans"] However, assignment to a list as in Perl or Python is not currently supported. [shirt, pants] := ["white", "black"] # This does not do anything useful. Scoping In E, a variable is visible from the point of its definition until the end of the enclosing block. Variables can even be defined inside expressions (actually, E has no statement/expression distinction): def list := [def x := timer.now(), x] # two copies of the current time list[0] == x # x is still visible here; returns true Slots The difference between assignable and non-assignable variables is defined in terms of primitive operations on non-primitive slot objects. Slots can also be employed by programmers for effects such as variables which have an effect when assigned (e.g. backgroundColor := red) or automatically change their values over time, but that is beyond the scope of this task. 
For example, it is possible to transfer a variable between scopes by referring to its slot:

def makeSum() {
    var a := 0
    var b := 0
    return [&a, &b, fn { a + b }]
}
def [&x, &y, sum] := makeSum()
x := 3
y := 4
sum() # returns 7

As suggested by the & syntax, the use of slots is somewhat analogous in effect to C pointers or C++ references, allowing the passing of locations and not their values, and "pass-by-reference" or "out" parameters:

def getUniqueId(&counter) {
    counter += 1
    return counter
}
var idc := 0
getUniqueId(&idc) # returns 1
getUniqueId(&idc) # returns 2

[edit] Ela

Strictly speaking, Ela doesn't have variables. Instead Ela provides support for the declaration of names that can be bound to values. Unlike variables, names are immutable - it is not possible to change a value bound to a name.

Global declarations:

x = 42

sum x y = x + y

Local declarations:

sum x y = let z = x + y in z

sum x y = z
          where z = x + y

[edit] Erlang

Variables spring into life upon assignment, which can only happen once (single assignment). Their scope is only the local function and they must start with a capital letter.

two() ->
    A_variable = 1,
    A_variable + 1.

[edit] F#

Variables in F# bind a name to a value and are, by default, immutable. They can be declared nearly anywhere and are normally local to the block/assembly they are defined in. They are declared in the form: let name parameters = expression.

let x = 5 // Int
let mutable y = "mutable" // Mutable string
let recordType = { foo = 6; bar = 6 } // Record
let intWidget = new Widget<int>() // Generic class
let add2 x = 2 + x // Function value

Types are normally inferred from the values they are initialised with. However, types can be explicitly specified using type annotations.

let intEqual (x : int) (y : int) = x = y
let genericEqual (x : 'a) (y : 'a) = x = y

Mutable variables are set using the <- operator.

sum <- sum + 1

[edit] Factor

The SYMBOL: declaration defines a new symbol word which is used to identify variables.
use-foo shows how one would modify and get the contents of the variable. named-param-example is an example of using :: to define a word with named inputs, similar to the way other languages do things. Last, but not least, local-example shows how to use [let to define a group of lexically scoped variables inside of a word definition. SYMBOL: foo : use-foo ( -- ) 1 foo set foo get 2 + foo set ! foo now = 3 foo get number>string print ; :: named-param-example ( a b -- ) a b + number>string print ; : local-example ( -- str ) [let "a" :> b "c" :> a a " " b 3append ] ; [edit] Forth Historically, Forth has preferred open access to the parameter stack over named local variables. The 1994 standard however added a cell-sized local variable facility and syntax. The semantics are similar to VALUEs: locals are initialized from stack contents at declaration, the name retrieves the value, and TO sets the value of the local name parsed at compile time ("value TO name"). : hypot ( a b -- a^2 + b^2 ) LOCALS| b a | \ note: reverse order from the conventional stack comment b b * a a * + ; Modern Forth implementations often extend this facility in several ways, both for more convenient declaration syntax and to be more compatible with foreign function interfaces. Curly braces are used to replace the conventional stack comment with a similar looking local variable declaration. : hypot { a b -- a^2 + b^2 } \ text between "--" and "}" remains commentary a a * b b * + ; Modern systems may also allow different local data types than just integer cells. 
: length { F: a F: b F: c -- len } \ floating point locals
  a a F* b b F* F+ c c F* F+ FSQRT ;

[edit] Fortran

program test
  implicit none
  integer :: i !scalar integer
  integer,dimension(10) :: ivec !integer vector
  real :: r !scalar real
  real,dimension(10) :: rvec !real vector
  character(len=:),allocatable :: char1, char2 !fortran 2003 allocatable strings
  !assignments:
  !-- scalars:
  i = 1
  r = 3.14
  !-- vectors:
  ivec = 1 !(all elements set to 1)
  ivec(1:5) = 2
  rvec(1:9) = 0.0
  rvec(10) = 1.0
  !-- strings:
  char1 = 'hello world!'
  char2 = char1 !copy from one string to another
  char2(1:1) = 'H' !change first character
end program test

[edit] GAP

# At top level, global variables are declared when they are assigned, so one only writes
global_var := 1;

# In a function, local variables are declared like this
func := function(n)
  local a;
  a := n*n;
  return n + a;
end;

# One can test whether a variable is assigned
IsBound(global_var); # true

# And destroy a variable
Unbind(global_var);

# This works with list elements too
u := [11, 12, , 14];
IsBound(u[4]); # true
IsBound(u[3]); # false
Unbind(u[4]);

[edit] Go

Simplest and most common

While Go is statically typed, it provides a "short variable declaration" with no type explicitly stated, as in,

x := 3

This is the equivalent of,

var x int // declaration
x = 3     // assignment

The technique of not stating the type is known as type inference. (It is sometimes loosely called duck typing, but that term properly refers to dynamic typing; in Go the inferred type is fixed at compile time.) The right hand side can be any expression. Whatever type it represents is used as the type of the variable. More examples:

y := x+1 // y is int, assuming declaration above
same := x == y // same declared as bool
p := &same // type of p is pointer to bool
pi := math.Floor(math.Pi) // math.Floor returns float64, so that is the type of pi

Nothing goes uninitialized

Variables declared without initializer expressions are initialized to the zero value for the type.

var x, y int // two variables, initialized to zero.
var p *int // initialized to nil Opposite C While putting the variable before the type feels “backwards” to programmers familiar with certain other languages, it succinctly allows multiple variables to be declared with arbitrarily complex type expressions. List syntax Variables can be declared in a list with the keyword var used only once. The syntax visually groups variables and sets the declaration off from surrounding code. var ( x, y int s string ) Multiple assignment Multiple values can be assigned in a single assignment statement, with many uses. x, y = y, x // swap x and y sinX, cosX = math.Sincos(x) // Sincos function returns two values // map lookup optionally returns a second value indicating if the key was found. value, ok = mapObject[key] Other kinds of local variables Parameters and named return values of functions, methods, and function literals also represent assignable local variables, as in, func increase (x int) (more int) { x++ more = x+x return } Parameter x and return value more both act as local variables within the scope of the function, and are both assignable. When the function returns, both go out of scope, although the value of more is then returned as the value of the function. While assignment of return values is highly useful, assignment of function parameters is often an error. Novice programmers might think that modifying a parameter inside the function will affect a variable used as an argument to the function call. It does not. Method receivers also represent assignable local variables, and as with function parameters, assigning them inside the method is often a mistake. Other common errors Short declarations can involve multiple assignment, as in x, y := 3, 4 But there are complications involving scope and variables already defined that confuse many programmers new to Go. 
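The shadowing pitfall just described can be sketched as follows (our own illustrative example, not from the specification): a := inside an inner block declares a brand-new variable that shadows, rather than assigns, the outer one.

```go
package main

// shadowDemo shows the scope complication described above: the := inside
// the inner block declares a NEW x that shadows the outer x.
func shadowDemo() (inner, outer int) {
	x, y := 3, 4 // short declaration of two variables
	{
		x, z := 10, 20 // := here creates a new x, local to this block
		inner = x + z  // uses the inner x: 10 + 20
	}
	outer = x + y // the outer x still holds 3, so 3 + 4
	return
}
```

Had the inner statement been a plain assignment (x, z = 10, 20 with z declared separately), the outer x would have been overwritten instead.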
A careful reading of the language specification is definitely in order, and a review of misconceptions as discussed on the mailing list is also highly recommended. Programmers new to the concept of closures often fail to distinguish between assigning free and bound variables. Function literals in Go are closures, and a common novice error is to start multiple goroutines from function literals, and fail to understand that multiple goroutines are accessing the same free variables.

[edit] Haskell

You can define a variable at the top (module) level or in a where, let, or do construct.

foobar = 15

f x = x + foobar
  where foobar = 15

f x = let foobar = 15 in x + foobar

f x = do
  let foobar = 15
  return $ x + foobar

One particular feature of do notation looks like assignment, but actually, it's just syntactic sugar for the >>= operator and a unary lambda.

main = do
  s <- getLine
  print (s, s)

-- The above is equivalent to:
main = getLine >>= \s -> print (s, s)

Pattern matching allows for multiple definitions of the same variable, in which case each call uses the first applicable definition.

funkshun True x = x + 1
funkshun False x = x - 1

foobar = funkshun True 5 + funkshun False 5 -- 6 + 4

case expressions let you do pattern-matching on an arbitrary expression, and hence provide yet another way to define a variable.

funkshun m = case foo m of
    [a, b]           -> a - b
    a : b : c : rest -> a + b - c + sum rest
    a                -> sum a

Guards are a kind of syntactic sugar for if-else ladders.

signum x | x > 0     = 1
         | x < 0     = -1
         | otherwise = 0

A definition can be accompanied by a type signature, which can request a less general type than the compiler would've chosen on its own. (Because of the monomorphism restriction, there are also some cases where a type signature can request a more general type than the default.) Type signatures are also useful even when they make no changes, as a kind of documentation.
dotProduct :: [Int] -> [Int] -> Int
dotProduct ns ms = sum $ zipWith (*) ns ms
-- Without the type signature, dotProduct would
-- have a more general type.

foobar :: Num a => a
foobar = 15
-- Without the type signature, the monomorphism
-- restriction would cause foobar to have a less
-- general type.

Since Haskell is purely functional, most variables are immutable. It's possible to create mutable variables in an appropriate monad. The exact semantics of such variables largely depend on the monad. For example, STRefs must be explicitly initialized and passed between scopes, whereas the implicit state of a State monad is always accessible via the get function.

[edit] HicEst

! Strings and arrays must be declared.
! Everything else is 8-byte float, READ/WRITE converts
CHARACTER str="abcdef", str2*345, str3*1E6/"xyz"/
REAL, PARAMETER :: named_constant = 3.1415
REAL :: n=2, cols=4, vec(cols), mtx(n, cols)
DATA vec/2,3,4,5/, mtx/1,2,3.1415,4, 5,6,7,8/

named = ALIAS(alpha, beta, gamma) ! gamma == named(3)
ALIAS(vec,n, subvec,2) ! share subvec and vec(n...n+1)
ALIAS(str,3, substr,n) ! share substr and str(3:3+n-1)

a = EXP(b + c) ! assign/initialize a=1, b=0, c=0
str = "blahblah" ! truncate/expand if needed
beta = "blahblah" ! illegal

CALL noArguments_noUSE ! global scope SUBROUTINE
CALL Arguments_or_USE(a) ! local scope SUBROUTINE
t = func() ! local scope FUNCTION

SUBROUTINE noArguments_noUSE() ! all global
  vec2 = $ ! 1,2,3,...
END

SUBROUTINE Arguments_or_USE(var) ! all local
  USE : vec ! use global object
  var = SUM(vec)
  t = TIME() ! local, static, USEd by func()
END

FUNCTION func() ! all local
  USE Arguments_or_USE : t ! use local object
  func = t
END

[edit] Icon and Unicon

Icon/Unicon data types are implemented as type safe self-descriptive values and as such do not require conventional type declarations.
See Introduction to Unicon and Icon about declarations. Declarations are confined to scope and use and include local, static, global, procedure parameters, and record definitions. Additionally Unicon has class definitions. Undeclared variables are local by default.

global gvar # a global

procedure main(arglist) # arglist is a parameter of main
local a,b,i,x # a, b, i, x are locals within main
static y # a static (silly in main)

x := arglist[1]
a := 1.0
i := 10
b := [x,a,i,b]
# ... rest of program
end

[edit] Icon
[edit] Unicon

This Icon solution works in Unicon.

[edit] J

val=. 0

J has two assignment operators. The =. operator declares, initializes, assigns, etc. a local variable. The =: operator does the same for a "global" variable.

fun =: 3 :0
  val1 =: 0
  val1 =. 2
  val2 =. 3
  val1, val2
)
   fun''
2 3
   val1
0
   val2
|value error

Note that the language forbids assigning a "global" value in a context where the name has a local definition.

fun1 =: 3 :0
  val3=. 0
  val3=: 0
)
   fun1''
|domain error

But the purpose of this rule is to help people catch mistakes. If you have reason to do this, you can easily set up another execution context.

fun2 =: 3 :0
  val4=. 0
  3 :'val4=:y' y
)
   fun2 ''

Variables are referred to by name, and exist in locales (which may be used as classes, closures or other stateful references). FIXME (working on good illustrative examples that would make sense to someone used to different languages) That said, it is possible and not uncommon to write an entire J application without using any variables (J has a functional, "point free" style of coding known as tacit). Names are optional (though often convenient). And, it can be possible to build code using names and then remove them using f. -- this is somewhat analogous to compiling code though the implementation of f. does not have to compile anything.
Java

Variables in Java are declared before their use with explicit types:

int a;
double b;
AClassNameHere c;

Several variables of the same type can be declared together:

int a, b, c;

Variables can be assigned values on declaration or afterward:

int a = 5;
double b;
int c = 5, d = 6, e, f;
String x = "test";
String y = x;
b = 3.14;

Variables can have scope modifiers, which are explained here. final variables can only be assigned once, but if they are Objects or arrays, they can be modified through methods (for Objects) or element assignment (for arrays):

final String x = "blah";
final String y;
final double[] nums = new double[15];
y = "test";
x = "blahblah";             //not legal
nums[5] = 2.5;              //legal
nums = new double[10];      //not legal
final Date now = new java.util.Date();
now.setTime(1234567890);    //legal
now = new Date(1234567890); //not legal

JavaScript

Information lifted from Stack Overflow (credit to krosenvold and triptych): variables declared with var are scoped to the enclosing function, and a nested function can read the variables of its parent scope. When resolving a variable, JavaScript starts at the innermost scope and searches outwards. For example (an illustrative sketch):

var a = 1;            // function/global scope
function outer() {
  var b = 2;          // local to outer
  function inner() {
    return a + b;     // both a and b resolved via the scope chain
  }
  return inner();     // 3
}

Joy

JOY does not have variables. Variables essentially name locations in memory, where values are stored. JOY also uses memory to store values, but has no facility to name these locations. The memory that JOY uses is commonly referred to as "the stack".

Initializing

The JOY stack can be initialized:

[] unstack

Assignment

Values can be pushed on the stack:

42

pushes the value 42 of type integer on top of the stack.

Stack

Calling the stack by name pushes a copy of the stack on the stack. To continue the previous example:

stack

pushes the list [42] on top of the stack. The stack now contains: [42] 42.

Liberty BASIC

'In Liberty BASIC variables are either string or numeric.
'A variable name can start with any letter and it can contain both letters and numerals, as well as dots (for example: user.firstname).
'There is no practical limit to the length of a variable name... up to ~2M characters.
'The variable names are case sensitive. 'assignments: -numeric variables. LB assumes integers unless assigned or calculated otherwise. 'Because of its Smalltalk heritage, LB integers are of arbitrarily long precision. 'They lose this if a calculation yields a non-integer, switching to floating point. i = 1 r = 3.14 'assignments -string variables. Any string-length, from zero to ~2M. t$ ="21:12:45" flag$ ="TRUE" 'assignments -1D or 2D arrays 'A default array size of 10 is available. Larger arrays need pre-'DIM'ming. height( 3) =1.87 dim height( 50) height( 23) =123.5 potential( 3, 5) =4.5 name$( 4) ="John" 'There are no Boolean /bit variables as such. 'Arrays in a main program are global. 'However variables used in the main program code are not visible inside functions and subroutines. 'They can be declared 'global' if such visibility is desired. 'Functions can receive variables by name or by reference. [edit] Julia Certain constructs in the language introduce scope blocks, which are regions of code that are eligible to be the scope of some set of variables. The scope of a variable cannot be an arbitrary set of source lines, but will always line up with one of these blocks. The constructs introducing such blocks are: function bodies (either syntax) while loops for loops try blocks catch blocks let blocks type blocks. 
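A minimal sketch of two of these scope blocks (the names f, x, and y are illustrative, not from the original text):

```julia
function f()          # a function body introduces a scope
    x = 1             # x is local to f
    let y = 2         # a let block introduces a nested scope
        x + y         # both x and y are visible here
    end               # y goes out of scope
end

f()                   # returns 3
```

Inner scopes can read and assign variables of enclosing scopes, which is why x remains visible inside the let block.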
#declaration/assignment, declaration is optional x::Int32 = 1 #datatypes are inferred dynamically, but can also be set thru convert functions and datatype literals x = 1 #x is inferred as Int, which is Int32 for 32-bit machines, Int64 for 64-bit machines #variable reference julia>x 1 x = int8(1) #x is of type Int8 x = 1.0 #x is Float64 x = y = 1 #assign both x and y to 1 global x = 1 #assigns 1 to global variable x (used inside scope blocks) local x = 1 #assigns 1 to local variable x (used inside scope blocks) x = 'a' #x is a 'Char' type, designated by single quotes x = "a" #x is a 1-element string, designated by double quotes [edit] Lasso // declare thread variable, default type null var(x) $x->type // null // declare local variable, default type null local(x) #x->type // null // declare thread variable, initialize with a type, in this case integer var(x = integer) // declare local variable, initialize with a type, in this case integer local(x = integer) // assign a value to the thread var x $x = 12 // assign a value to the local var x $x = 177 // a var always has a data type, even if not declared - then it's null // a var can either be assigned a type using the name of the type, or a value that is by itself the type local(y = string) local(y = 'hello') '\r' // demonstrating asCopyDeep and relationship between variables: local(original) = array('radish', 'carrot', 'cucumber', 'olive') local(originalaswell) = #original local(copy) = #original->asCopyDeep iterate(#original) => { loop_value->uppercase } #original // modified //array(RADISH, CARROT, CUCUMBER, OLIVE) '\r' #originalaswell // modified as well as it was not a deep copy //array(RADISH, CARROT, CUCUMBER, OLIVE) '\r' #copy // unmodified as it used ascopydeep //array(RADISH, CARROT, CUCUMBER, OLIVE) [edit] Logo Historically, Logo only had global variables, because they were easier to access when stepping through an algorithm. Modern variants have added dynamic scoped local variables. 
make "g1 0 name 2 "g2 ; same as make with parameters reversed global "g3 ; no initial value to func :x make "g4 4 ; still global localmake "L1 6 local ["L2 "L3] ; local variables, collection syntax func2 :g4 print :L2 ; 9, modified by func2 print :L3 ; L3 has no value, was not modified by func2 end to func2 :y make "g3 :y make "L2 :L1 + 3 ; dynamic scope: can see variables of callers localmake "L3 5 ; locally override L3 from caller (print :y :L1 :L2 :L3) ; 4 6 9 5 end print :g4 ; 4 print :L1 ; L1 has no value print name? "L1 ; false, L1 is not bound in the current scope [edit] LotusScript Sub Click() 'a few declarations as example Dim s as New NotesSession ' declaring a New NotesSession actually returns the current, active NotesSession Dim i as Integer ' i = 0 Dim s as String ' s= "" Dim v as Variant ' v is nothing Dim l as Long ' l = 0 Dim doc as NotesDocument 'doc is EMTPY '... End Sub [edit] Lua In lua, variables are dynamically typecast, and do not need declaration prior to use. a = 1 -- Here we declare a numeric variable fruit = "banana" -- Here we declare a string datatype needspeeling = True -- This is a boolean local b = 2 -- This variable declaration is prefixed with a scope modifier The lua programming language supports multiple assignments from within a single statement: A, B, C, D, E = 2, 4, 6, 8, "It's never too late" [edit] Mathematica x=value assign a value to the variable x x=y=value assign a value to both x and y x=. or Clear[x] remove any value assigned to x lhs=rhs (immediate assignment) rhs is evaluated when the assignment is made lhs:=rhs (delayed assignment) rhs is evaluated each time the value of lhs is requested Atomic Objects All expressions in Mathematica are ultimately made up from a small number of basic or atomic types of objects. Symbol / String / Integer / Real / Rational / Complex These. 
Symbols are the basic named objects in Mathematica aaaaa user-defined symbol Aaaaa system-defined symbol $Aaaa global or internal system-defined symbol aaaa$ symbol renamed in a scoping construct aa$nn unique local symbol generated in a module Contexts aaaa`x is a symbol with short name x, and context aaaa.. Scoping Constructs With[] evaluate with specified variables replaced by values Module[] localize names of variables (lexical scoping) Block[] localize values of variables (dynamic scoping) DynamicModule[] localize names of variables in dynamic interface constructs Other Forms of Scoping Begin, End localize symbol namespace Throw, Catch localize exceptions Quiet, Check localize messages BlockRandom localize pseudorandom variables [edit] MATLAB / Octave a = 4; % declare variable and initialize double value, s = 'abc'; % string i8 = int8(5); % signed byte u8 = uint8(5); % unsigned byte i16 = int16(5); % signed 2 byte u16 = uint16(5); % unsigned 2 byte integer i32 = int32(5); % signed 4 byte integer u32 = uint32(5);% unsigned 4 byte integers i64 = int64(5); % signed 8 byte integer u64 = uint64(5);% unsigned 8 byte integer f32 = float32(5); % single precission floating point number f64 = float64(5); % double precission floating point number , float 64 is the default data type. c = 4+5i; % complex number colvec = [1;2;4]; % column vector crowvec = [1,2,4]; % row vector m = [1,2,3;4,5,6]; % matrix with size 2x3 Variables within functions have local scope, except when they are declared as global global b [edit] Modula-3 MODULE Foo EXPORTS Main; IMPORT IO, Fmt; VAR foo: INTEGER := 5; (* foo is global (to the module). *) PROCEDURE Foo() = VAR bar: INTEGER := 10; (* bar is local to the procedure Foo. *) BEGIN IO.Put("foo + bar = " & Fmt.Int(foo + bar) & "\n"); END Foo; BEGIN Foo(); END Foo. 
For procedures, the formal parameters create local variables unless the actual parameter is prefixed by VAR: PROCEDURE Foo(n: INTEGER) = Here, n will be local to the procedure Foo, but if we instead wrote: PROCEDURE Foo(VAR n: INTEGER) = Then n is the global variable n (if it exists). [edit] Nimrod var x: int = 3 var y = 3 # type inferred to int var z: int # initialized to 0 let a = 13 # immutable variable var b, c: int = 10 s = "foobar" [edit] Objeck Different ways to declare and initialize an integer. a : Int; b : Int := 13; c := 7; [edit] OCaml The default handlers for values in OCaml are not variables strictly speaking, because as OCaml is a functional language these values can't vary (so are not variable). Strictly speaking these are bindings. An identifier is bound to a value in an immutable way. The standard way to bind an identifier to a value is the let construct: let x = 28 This stated, ocaml programmers most often use the word variable when they refer to bindings, because in the programming world we usually use this word for the default values handlers. Now to add confusion, real variables also exist in OCaml because it is an impure functional language. They are called references and are defined this way: let y = ref 28 References can then be accessed and modified this way: !y (* access *) y := 34 (* modification *) An identifier can not be declared uninitialised, it is always defined with an initial value, and this initial value is used by the OCaml type inference to infer the type of the binding. Inside an expression, bindings are defined with the let .. in construct, and we can also define multiple bindings with the let .. and .. 
in construct (here the expression can be the definition of a new identifier or the definition of a function): let sum = (* sum is bound to 181 *) let a = 31 and b = 150 in (a + b) let sum () = (* sum is a function which returns 181 *) let a = 31 and b = 150 in (a + b) [edit] Openscad mynumber=5+4; // This gives a value of nine [edit] Oz Variable names in Oz always start with an uppercase letter. Oz variables are dataflow variables. A dataflow variable can basically be free (unbound) or determined (has a value). Once a value has been assigned, it can not be changed. If we assign the same value again, nothing happens. If we assign a different value to an already determined variable, an exception is raised: declare Var %% new variable Var, initially free {Show Var} Var = 42 %% now Var has the value 42 {Show Var} Var = 42 %% the same value is assigned again: ok Var = 43 %% a different value is assigned: exception In the Emacs-based interactive environment, declare creates a new open scope in which variables can be declared. The variables are visible for the entire rest of the session. Most operations on free variables block until the variables have been bound (but not Show as used above). Assignment to dataflow variables is also called unification. It is actually a symmetric operation, e.g. the following binds B to 3: declare A = 3 B in A = B {Show B} However, variables can only be introduced at the left side of the = operator. So this is a syntax error: declare A = 3 A = B %% Error: variable B not introduced in {Show B} It is possible to introduce multiple variables in a single statement: declare [A B C D] = [1 2 3 4] %% unification of two lists In a module definition, toplevel variables can be introduced between the keywords define and end without the need for declare. The range between these two keywords is also their scope. Toplevel variables can optionally be exported. 
functor export Function define ToplevelVariable = 42 fun {Function} 42 end end Function and class definitions introduce a new variable with the name of the function/class and assign the new function/class to this variable. Most Oz statement introduce a new scope and it is possible to introduce local variables at the top of this scope with the in keyword. fun {Function Arg} LocalVar1 in LocalVar1 = if Arg == 42 then LocalVar2 in LocalVar2 = yes LocalVar2 else LocalVar3 = no %% variables can be initialized when declared in LocalVar3 end LocalVar1 end Here, LocalVar1 is visible in the whole body of Function while LocalVar2 is only visible in the then branch and LocalVar3 is only visible in the else branch. Additionally, new local variables can be introduced everywhere using the keyword local. if {IsEven 42} then {System.showInfo "Here, LocalVar is not visible."} local LocalVar = "Here, LocalVar IS visible" in {System.showInfo LocalVar} end end New variables are also introduced in pattern matching. case "Rosetta code" of First|_ then {Show First} end %% prints "R" _ creates a new nameless variable that is initially unbound. It is usually pronounced "don't care". It is possible to create a read-only view of a variable with the !! operator. This is called a "future". We can wait for such a variable to become bound by another thread and we can read its value, but we can never set it. declare A B = !!A %% B is a read-only view of A in thread B = 43 %% this blocks until A is known; then it fails because 43 \= 42 end A = 42 Additional operations on variables: declare V = 42 in {Wait V} %% explicitly wait for V to become determined if {IsDet V} then %% check whether V is determined; not recommended {Show determined} elseif {IsFree V} then %% check whether V is free; not recommended {Show free} end IsFree and IsDet are low-level functions. If you use them, you code is no longer declarative and prone to race conditions when used in a multi-threaded context. 
To have mutable references like in imperative languages, use cells: declare A = {NewCell 42} OldVal in {Show @A} %% read a cell with @ A := 43 %% change its value OldVal = A := 44 %% read and write at the same time (atomically) A is an immutable dataflow variable that is bound to a mutable reference. [edit] PARI/GP There are two types of local variables, local (mostly deprecated) and my. Variables can be used without declaration or initialization; if not previously used such a variable is a pure variable: technically, a monomial in a variable with name equal to the variable name. This behavior can be forced with the apostrophe operator: regardless of the value (if any) currently stored in x, 'x displays as (and is treated internally as) x. This is useful when you want to use it as a variable instead of a number (or other type of object). For example, 'x^3+7 is a cubic polynomial, not the number 8, even if x is currently 1. [edit] Pascal [edit] Perl In perl, variables are global by default and can be manipulated from anywhere in the program. Variables can be used without first being declared, providing that the strict pragmatic directive is not in effect: sub dofruit { $fruit='apple'; } dofruit; print "The fruit is $fruit"; Variables can be declared prior to use and may be prefixed with scope modifiers our, my, or local see scope modifiers for the differences. Variables which haven't been assigned to have the undefined value by default. The undefined value acts just like 0 (if used as a number) or the empty string (if used as a string), except it can be distinguished from either of these with the defined function. If warnings are enabled, perl will print a message like "Use of uninitialized value $foo in addition (+)" whenever you use the undefined value as a number or string. Initialization and assignment are the same thing in Perl: just use the = operator. Note that the rvalue's context (scalar or list) is determined based on the lvalue. 
my $x = @a; # Scalar assignment; $x is set to the # number of elements in @a. my ($x) = @a; # List assignment; $x is set to the first # element of @a. my @b = @a; # List assignment; @b becomes the same length # as @a and each element becomes the same. my ($x, $y, @b) = @a; # List assignment; $x and $y get the first # two elements of @a, and @b the rest. my ($x, $y, @b, @c, $z) = @a; # Same thing, and also @c becomes empty # and $z undefined. The kind of value a variable can hold depends on its sigil, "sigil" being a slang term for "funny character in front of a variable name". $dollarsigns can hold scalars: the undefined value, numbers, strings, or references. @atsigns can hold arrays of scalars, and %percentsigns can hold hashes of scalars (associative arrays mapping strings to scalars); nested data structures are constructed by making arrays or hashes of references to arrays or hashes. There are two other sigils, but they behave quite unlike the others. A token of the form &foo refers to a subroutine named foo. In older versions of Perl, ampersands were necessary for calling user-defined subroutines, but since they no longer are, they have only a handful of obscure uses, like making references to named subroutines. Note that you can't assign to an ampersand-marked name. But you can assign to a typeglob, a kind of object represented with the notation *var. A typeglob *foo represents the symbol-table entry for all of the otherwise independent variables $foo, @foo, %foo, and &foo. Assigning a string "bar" to *foo makes these variables aliases for $bar, @bar, %bar, and &bar respectively. Alternatively, you can assign a reference to a typeglob, which creates an alias only for the variable of the appropriate type. In particular, you can say *twiddle = sub {...} to change the definition of the subroutine &twiddle without affecting $twiddle and friends. 
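As a sketch of the typeglob aliasing described above (the names $speak, *talk, and *shout are chosen for illustration; run without strict):

```perl
$speak = "hello";
*talk  = *speak;             # $talk, @talk, %talk, &talk now alias the speak variables
print "$talk\n";             # prints "hello"
*shout = sub { uc $speak };  # assign a code reference: aliases only the subroutine slot
print shout(), "\n";         # prints "HELLO"
```

Assigning a whole glob aliases every slot at once, while assigning a reference (here a code reference) aliases only the slot of the matching type.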
The strict pragmatic directive

If the strict pragmatic directive is in effect, then variables need explicit scope declaration, so should be prefixed with a my or our keyword depending on the required level of scope:

use strict;
our $fruit;           # declare a variable as global
our $veg = "carrot";  # declare a global variable and define its value

Local and global variables

The following example shows the use of local and global variables:

$fruit="apple";   # this will be global by default
sub dofruit {
  print "My global fruit was $fruit,";  # use the global variable
  my $fruit="banana";                   # declare a new local variable
  print "and the local fruit is $fruit.\n";
}
dofruit;
print "The global fruit is still $fruit";

Perl 6

Much of what is true for Perl 5 is also true for Perl 6. Some exceptions:

There are no typeglobs in Perl 6.

Assigning an array to a scalar variable now makes that scalar variable a reference to the array:

my @y = <A B C D>;  # Array of strings 'A', 'B', 'C', and 'D'
my $x = @y;         # $x is now a reference for the array @y
say $x[1];          # prints 'B' followed by a newline character

Types and constraints can also be applied to variables in Perl 6:

# $x can contain only Int objects
my Int $x;
# $x can only contain native integers (not Int objects)
my int $x;

(Includes code modified from. See this reference for more details.)

(Much more can and should be said here).

PL/I

/* The PROCEDURE block and BEGIN block are used to delimit scopes. */
declare i float;  /* external, global variable, excluded from the */
                  /* local area (BEGIN block) below.              */
begin;
  declare (i, j) fixed binary;  /* local variable */
  get list (i, j);
  put list (i,j);
end;

/* Examples of initialization. */
declare p fixed initial (25);
declare q(7) fixed initial (9, 3, 5, 1, 2, 8, 15);
   /* sets all elements of array Q at run time, on block entry. */
declare r(7) fixed initial (9, 3, 5, 1, 2, 8, 15);
   /* sets all elements of array R at compile time. */
p = 44; /* run-time assignment.
*/ q = 0; /* run-time initialization of all elements of Q to zero. */ q = r; /* run-time assignment of all elements of array R to */ /* corresponding elemets of S. */ [edit] PicoLisp You can control the local bindings of symbols with functions like 'use' or 'let': (use (A B C) (setq A 1 B 2 C 3) ... ) This is equivalent to (let (A 1 B 2 C 3) ... ) The parentheses can be omitted if there is only a single variable (use A (setq A ..) ... ) (let A 1 ...) Other functions that handle local bindings are 'let?', 'bind', 'job', 'with' or 'for'. [edit] PHP <?php /** * @author Elad Yosifon */ /* * PHP is a weak typed language, * no separation between variable declaration, initialization and assignment. * * variable type is defined by the value that is assigned to it. * a variable name must start with a "$" sign (called "sigil", not a dollar sign). * variable naming rules: * + case-sensitive. * + first character after $ must not be a number. * + after the first character all alphanumeric chars and _(underscore) sign is allowed, e.g. 
$i_am_a_new_var_with_the_number_0 * */ # NULL typed variable $null = NULL; var_dump($null); // output: null # defining a boolean $boolean = true; var_dump($boolean); // output: boolean true $boolean = false; var_dump($boolean); // output: boolean false /* * casting is made using (TYPE) as a prefix */ # bool and boolean is the same $boolean = (bool)1; var_dump($boolean); // output: boolean true $boolean = (boolean)1; var_dump($boolean); // output: boolean true $boolean = (bool)0; var_dump($boolean); // output: boolean false $boolean = (boolean)0; var_dump($boolean); // output: boolean false # defining an integer $int = 0; var_dump($int); // output: int 0 # defining a float, $float = 0.01; var_dump($float); // output: float 0.01 # which is also identical to "real" and "double" var_dump((double)$float); // output: float 0.01 var_dump((real)$float); // output: float 0.01 # casting back to int (auto flooring the value) var_dump((int)$float); // output: int 0 var_dump((int)($float+1)); // output: int 1 var_dump((int)($float+1.9)); // output: int 1 # defining a string $string = 'string'; var_dump($string); // output: string 'string' (length=6) # referencing a variable (there are no pointers in PHP). $another_string = &$string; var_dump($another_string); // output: string 'string' (length=6) $string = "I'm the same string!"; var_dump($another_string); // output: string 'I'm the same string!' (length=20) # "deleting" a variable from memory unset($another_string); $string = 'string'; /* * a string can also be defined with double-quotes, HEREDOC and NOWDOC operators. * content inside double-quotes is being parsed before assignment. 
* concatenation operator is .= */ $parsed_string = "This is a $string"; var_dump($parsed_string); // output: string 'This is a string' (length=16) $parsed_string .= " with another {$string}"; var_dump($parsed_string); // output: string 'This is a string with another string' (length=36) # with string parsing $heredoc = <<<HEREDOC This is the content of \$string: {$string} HEREDOC; var_dump($heredoc); // output: string 'This is the content of $string: string' (length=38) # without string parsing (notice the single quotes surrounding NOWDOC) $nowdoc = <<<'NOWDOC' This is the content of \$string: {$string} NOWDOC; var_dump($nowdoc); // output: string 'This is the content of \$string: {$string}' (length=42) # as of PHP5, defining an object typed stdClass => standard class $stdObject = new stdClass(); var_dump($stdObject); // output: object(stdClass)[1] # defining an object typed Foo class Foo {} $foo = new Foo(); var_dump($foo); // output: object(Foo)[2] # defining an empty array $array = array(); var_dump($array); // output: array {empty} /* * an array with non-integer key is also considered as an associative array(i.e. 
hash table) * can contain mixed variable types, can contain integer based keys and non-integer keys */ $assoc = array( 0 => $int, 'integer' => $int, 1 => $float, 'float' => $float, 2 => $string, 'string' => $string, 3 => NULL, // <=== key 3 is NULL 3, // <=== this is a value, not a key (key is 4) 5 => $stdObject, 'Foo' => $foo, ); var_dump($assoc); // output: // ======= // array // 0 => int 0 // 'integer' => int 0 // 1 => float 0.01 // 'float' => float 0.01 // 2 => string 'string' (length=6) // 'string' => string 'string' (length=6) // 3 => null // 4 => int 3 // 5 => // object(stdClass)[1] // 'Foo' => // object(Foo)[2] /* * all variables are "global" but not reachable inside functions(unless specifically "globalized" inside) */ function a_function() { # not reachable var_dump(isset($foo)); // output: boolean false global $foo; # "global" (reachable) inside a_function()'s scope var_dump(isset($foo)); // output: boolean true } a_function(); /** * there is another special type of variable called (Resource). * for more info regarding Resources: * @url * @url */ [edit] PureBasic ; Variables are initialized when they appear in sourcecode with default value of 0 and type int Debug a ; or value "" for a string, they are not case sensitive Debug b$ ; This initializes a double precision float, if type is following the dot Debug c.d ; They can be initialized with define (double precision float, string, integer) Define d.d = 3.5, e$ = "Test", f.i = a + 2 ; Define can have a default type (all bytes except j which is long): Define.b g, h, j.l ; Define without following variables sets default type. 
In this case to single precision float Define.f ; So this will be an single precision float and no integer Debug k ; EnableExplicit forces declaration of used variables with define EnableExplicit ; Will throw an error because L isn't initialized Debug L DisableExplicit ; Global Variables are available in Procedures and Threads too Global M = 3, N = 2 Procedure Dummy(parameter1, parameter2 = 20) ; Parameter contain values which where used when calling the function, ; their types have to be specified in the above Procedure header. ; The last ones can have default values which get applied if this parameter is not given. ; Variables in Procedures are separate from those outside, ; so d can be initialized again with another type ; which would otherwise lead to an error d.i ; Protected makes a variable local even if another one with same name is declared as global (see above) Protected M = 2 ; Shares a variable with main program like it was declared by global Shared a ; prevents a variable to be initialized with default value again when procedure is called a second time, ; could be used for example as a counter, which contains the number of times a function was called Static a ; N here also would have a value of 2, while for example ; f would, when named, initialize a new variable, and so have a value of 0 EndProcedure ; finally there are constants which are prefixed by an #: #Test = 1 ; Their value cannot be changed while program is running #String_Constant = "blubb" ; In constrast to variables, a constant has no types except an (optional) $ sign to mark it as string constant #Float_Constant = 2.3 ; Maps, LinkedLists , Arrays and Structures are not handled here, because they are no elemental variables [edit] PowerShell Variables in PowerShell start with a $ character, they are created on assignment and thus don't need to be declared: $s = "abc" $i = 123 Uninitialized variables expand to nothing. 
This may be interpreted for example as an empty string or 0, depending on context: 4 + $foo # yields 4 "abc" + $foo + "def" # yields "abcdef" Variables all show up in the Variable: drive and can be queried from there with the usual facilities: Get-ChildItem Variable: Since Variables are provided via a flat filesystem, they can be manipulated using the common cmdlets for doing so. For example to delete a variable one can use Remove-Item Variable:foo as if it were a file or a registry key. There are, however, several cmdlets dealing specifically with variables: Get-Variable # retrieves the value of a variable New-Variable # creates a new variable Set-Variable # sets the value of a variable Clear-Variable # deletes the value of a variable, but not the variable itself Remove-Variable # deletes a variable completely [edit] Python Names in Python are not typed, although all the objects referred to by them, are. Names are lexically scoped by function/method/class definitions, and must be defined before use. Names in global statements are looked up in the outermost context of the program or module. Names in a nonlocal statement are looked up in the order of closest enclosing scope outwards. [edit] R Variables are dynamically typed, so they do not need to be declared and instantiated separately. <- and = are both used as the assignment operator, though <- is preferred, for compatibility with S-Plus code. foo <- 3.4 bar = "abc" It is possible to assign multiple variables with the same value, and to assign values from left to right. baz <- quux <- 1:10 TRUE -> quuux There are also global assignment operators, <<- and ->>. From their help page: The operators '<<-' and '->>' cause a search to made through the environment for an existing definition of the variable being assigned. If such a variable is found (and its binding is not locked) then its value is redefined, otherwise assignment takes place in the global environment. 
In practice, this usually means that variables are assigned in the user workspace (global environment) rather than a function. a <- 3 assignmentdemo <- function() { message("assign 'a' locally, i.e. within the scope of the function") a <- 5 message(paste("inside assignmentdemo, a = ", a)) message(paste("in the global environment, a = ", get("a", envir=globalenv()))) message("assign 'a' globally") a <<- 7 message(paste("inside assignmentdemo, a = ", a)) message(paste("in the global environment, a = ", get("a", envir=globalenv()))) } assignmentdemo() assign 'a' locally, i.e. within the scope of the function inside assignmentdemo, a = 5 in the global environment, a = 3 assign 'a' globally inside assignmentdemo, a = 5 in the global environment, a = 7 Finally, there is also the assign function, where you choose the environment to assign the variable. assign("b", TRUE) #equivalent to b <- TRUE assign("c", runif(10), envir=globalenv()) #equivalent to c <<- runif(10) [edit] Racket #lang racket ;; define a variable and initialize it (define foo 0) ;; increment it (set! foo (add1 foo)) ;; Racket is lexically scoped, which makes local variables work: (define (bar) (define foo 100) (set! foo (add1 foo)) foo) (bar) ; -> 101 ;; and it also makes it possible to have variables with a global value ;; that are accessible only in a specific local lexical scope: (define baz (let () ; used to create a new scope (define foo 0) (define (bar) (set! foo (add1 foo)) foo) bar)) ; this is the value that gets bound to `baz' (list (baz) (baz) (baz)) ; -> '(1 2 3) ;; define a new type, and initialize a variable with an instance (struct pos (x y)) (define p (pos 1 2)) (list (pos-x p) (pos-y p)) ; -> '(1 2) ;; for a mutable reference, a struct (or some specific fields in a ;; struct) can be declared mutable (struct mpos (x y) #:mutable) (define mp (mpos 1 2)) (set-mpos-x! mp 11) (set-mpos-y! 
mp 22) (list (mpos-x mp) (mpos-y mp)) ; -> '(11 22) ;; but for just a mutable value, we have boxes as a builtin type (define b (box 10)) (set-box! b (add1 (unbox b))) (unbox b) ; -> 11 ;; (Racket has many more related features: static typing in typed ;; racket, structs that are extensions of other structs, ;; pattern-matching on structs, classes, and much more) [edit] Rascal The effect of a variable declaration is to introduce a new variable Name and to assign the value of expression Exp to Name. A variable declaration has the form Type Name = Exp; A mention of Name later on in the same scope will be replaced by this value, provided that Name’s value has not been changed by an intermediate assignment. When a variable is declared, it has as scope the nearest enclosing block, or the module when declared at the module level. There are two rules you have to take into account. Double declarations in the same scope are not allowed. Additionally, the type of Exp should be compatible with Type, i.e., it should be a subtype of Type. As a convenience, also declarations without an initialization expression are permitted inside functions (but not at the module level) and have the form Type Name; and only introduce the variable Name. Rascal provides local type inference, which allows the implicit declaration of variables that are used locally in functions. There are four rules that apply when doing so. (1) An implicitly declared variable is declared at the level of the current scope, this may the whole function body or a block nested in it. (2) An implicitly declared variable gets as type the type of the first value that is assignment to it. (3) If a variable is implicitly declared in different execution path of a function, all these implicit declarations should result in the same type. (4) All uses of an implicitly declared variable must be compatible with its implicit type. 
Examples

Two explicit variable declarations:

rascal>int max = 100;
int: 100
rascal>min = 0;
int: 0

An implicit variable declaration:

rascal>day = {<"mon", 1>, <"tue", 2>, <"wed",3>,
>>>>>>>        <"thu", 4>, <"fri", 5>, <"sat",6>, <"sun",7>};
rel[str, int]: {
  <"thu",4>,
  <"mon",1>,
  <"sat",6>,
  <"wed",3>,
  <"tue",2>,
  <"fri",5>,
  <"sun",7>
}

Variable declaration and assignment leading to a type error:

rascal>int month = 12;
int: 12
rascal>month ="December";
|stdin:///|(7,10,<1,7>,<1,17>): Expected int, but got str

Pitfalls

Local type inference for variables always uses the smallest possible scope for a variable; this implies that a variable introduced in an inner scope is not available outside that scope. Here is how things can go wrong:

rascal>if( 4 > 3){ x = "abc"; } else { x = "def";}
str: "abc"
rascal>x;
|stdin:///|(0,1,<1,0>,<1,1>): Undeclared variable, function or constructor: x

REXX

assignments via =

REXX has only one type of variable: a (character) string. There is no need to declare anything (indeed, there is no way to declare anything at all). All unassigned REXX variables have a default value of the variable name (in uppercase). There is no way to initialize a variable except to assign it a value (by one of the methods below); however, there is a way to "initialize" a (stemmed) array in REXX with a default value — see the 6th section below, default value for an array.

To assign some data (value) to a variable, one method is to use the assignment operator, an equal sign (=):

aa = 10      /*assigns chars  10      ───► AA */
bb = ''      /*assigns a null value   ───► BB */
cc = 2*10    /*assigns chars  20      ───► CC */
dd = 'Adam'  /*assigns chars  Adam    ───► DD */
ee = "Adam"  /*same as above          ───► EE */
ff = 10.     /*assigns chars  10.     ───► FF */
gg = '10.'   /*same as above          ───► GG */
hh = "+10"   /*assigns chars  +10     ───► HH */
ii = 1e1     /*assigns chars  1e1     ───► II */
jj = +.1e+2  /*assigns chars  +.1e+2  ───► JJ */

Variables aa, ff, gg, hh, ii, and jj will all be considered numerically equal in REXX (but not exactly equal, except for ff and gg).

assignments via PARSE

kk = '123'x        /*assigns hex   00000123  ───► KK */
kk = 'dead beaf'X  /*assigns hex   deadbeaf  ───► KK */
ll = '0000 0010'b  /*assigns blank ───► LL  (ASCII)  */
mm = '0000 0100'B  /*assigns blank ───► MM  (EBCDIC) */

xxx = '11 2. 333 -5'
parse var xxx nn oo pp qq rr  /*assigns 11     ───► NN */
                              /*assigns 2.     ───► OO */
                              /*assigns 333    ───► PP */
                              /*assigns ─5     ───► QQ */
                              /*assigns "null" ───► RR */
/*a "null" is a string of length zero (0),   */
/*and is not to be confused with a null char.*/

cat = 'A cat is a lion in a jungle of small bushes.'  /*assigns a literal ───► CAT */

assignments via VALUE

Assignments can be made via the value BIF [Built-In Function] (which also has other capabilities); the capability used here is to create a variable name programmatically (normally using concatenation or abuttal).

call value 'CAT', "When the cat's away, the mice will play."
                              /*assigns a literal ───► CAT */
yyy='CA'
call value yyy'T', "Honest as the Cat when the meat's out of reach."
                              /*assigns a literal ───► CAT */
yyy = 'CA'
call value yyy || 'T', "Honest as the Cat when the meat's out of reach."
                              /*assigns a literal ───► CAT */

unassigned variables

There are methods to catch unassigned variables in REXX.

/*REXX pgm to do a "bad" assignment (with an unassigned REXX variable).*/
signal on noValue              /*usually, placed at pgm start.  */
xxx=aaaaa                      /*tries to assign aaaaa ───► xxx */
say xxx 'or somesuch'
exit

noValue:                       /*this can be dressed up better. */
badLine  =sigl                 /*REXX line number that failed.  */
badSource=sourceline(badLine)  /*REXX source line  ···          */
badVar   =condition('D')       /*REXX var name that's ¬ defined.*/
say
say '*** error! ***'
say 'undefined variable' badVar "at REXX line number" badLine
say
say badSource
say
exit 13

Note: the value (result) of the condition BIF can vary in different implementations of a REXX interpreter.

output using Regina (various versions), PC/REXX, and Personal REXX:

*** error! ***
undefined variable AAAAA at REXX line number 5

xxx=aaaaa    /*tries to assign aaaaa ───► xxx */

output using R4 REXX:

Error 46 : Invalid variable reference (NOVALUE)
Information: Variable aaaaa does not have an assigned value
Error occurred in statement# 5
Statement source: xxx= aaaaa
Statement context: D:\variabl4.rex, procedure: variabl4

scope of variables

REXX subroutines/functions/routines/procedures can have their own "local" variables if the procedure statement is used. Variables can be shared with the main program if the variables are named on the procedure statement with the expose keyword.

/*REXX pgm shows different scopes of a variable:  "global" and "local".*/
q = 55     ;  say ' 1st q=' q   /*assign a value ───► "main" Q.  */
call sub   ;  say ' 2nd q=' q   /*call a procedure subroutine.   */
call gyro  ;  say ' 3rd q=' q   /*call a procedure with EXPOSE.  */
call sand  ;  say ' 4th q=' q   /*call a subroutine or function. */
exit                            /*stick a fork in it, we're done.*/
/*──────────────────────────────────SUB subroutine──────────────────────*/
sub: procedure                  /*use PROCEDURE to use local vars*/
q = -777      ; say ' sub q=' q /*assign a value ───► "local" Q. */
return
/*──────────────────────────────────GYRO subroutine─────────────────────*/
gyro: procedure expose q        /*use EXPOSE to use global var Q.*/
q = "yuppers" ; say 'gyro q=' q /*assign a value ───► "exposed" Q*/
return
/*──────────────────────────────────SAND subroutine─────────────────────*/
sand:                           /*all REXX variables are exposed.*/
q = "Monty"   ; say 'sand q=' q /*assign a value ───► "global" Q */
return

output

 1st q= 55
 sub q= -777
 2nd q= 55
gyro q= yuppers
 3rd q= yuppers
sand q= Monty
 4th q= Monty

Programming note: there is also a method in REXX to expose a list of variables.

default value for an array

There is a way in REXX to assign a default value (or an initial value, if you will) to a (stemmed) array.

aaa. = '───────nope.'      /*assign this string as a default */
aaa.1=1                    /*assign  1      to first element.*/
aaa.4=4.                   /*   "    4       "  fourth   "   */
aaa.7='lucky'              /*   "    7       "  seventh  "   */
do j=0 to 8                /*go through a bunch of elements. */
  say 'aaa.'||j '=' aaa.j  /*display element # and its value */
end                        /*we could've started J at -100.  */

dropping a variable

In REXX, dropping a variable can be thought of as deallocating it, or setting its value back to its "default". Note that the storage used by the variable's (old) value is not truly deallocated, but its storage is returned to the pool of storage available for allocation of other REXX variables (and their values). This action isn't mandatory for the REXX language (or for that matter, not even specified), but it's apparently what all (Classic) REXX interpreters do at the time of this writing.

radius=6.28           /*assign a value to a variable.  */
say 'radius =' radius
drop radius           /*now, "undefine" the variable.  */
say 'radius =' radius

Note: the value of an undefined (or deallocated) REXX variable is the uppercased name of the REXX variable.
output

radius = 6.28
radius = RADIUS

compound variables

In addition to the (simple) variables described above, REXX defines compound variables that consist of a stem (symbol.) followed by a list of period-separated simple variables and constants. The compound variable's tail is computed by concatenating the variables' values (which can be any string) with the constants and the intervening periods.

var.='something'  /* sets all possible compound variables of stem var. */
x='3 '
var.x.x.4='something else'
Do i=1 To 5
  a=left(i,2)
  Say i var.a.a.4 "(tail is '"a||'.'||a||'.'||'4'"')"
End

Output:

1 something (tail is '1 .1 .4')
2 something (tail is '2 .2 .4')
3 something else (tail is '3 .3 .4')
4 something (tail is '4 .4 .4')
5 something (tail is '5 .5 .4')

Ruby

Information taken from the Variables page of the Ruby User's Guide. One special name is a variable maintained by the interpreter, and nil is really a constant. As these are the only two exceptions, they don't confuse things too much. Referencing an undefined global or instance variable returns nil. Referencing an undefined local variable throws a NameError exception.

$a_global_var = 5

class Demo
  @@a_class_var = 6
  A_CONSTANT = 8
  def initialize
    @an_instance_var = 7
  end
  def incr(a_local_var)
    @an_instance_var += a_local_var
  end
end

Scala

Variables in Scala can have three different scopes depending on the place where they are being used: they can exist as fields, as method parameters, and as local variables. Below are the details about each type of scope:

- Fields are variables that belong to an object and are accessible from inside every method of the object. Fields can be both mutable or immutable types and can be defined using respectively var or val.
- Method parameters are variables used to pass values into a method when it is called; they are always immutable and defined by the val keyword. In algebraic datatypes (Scala's case classes), think records, the parameters look like method parameters. In this case they are fields.
- Local variables are variables declared inside a method. Local variables are only accessible from inside the method, but the objects you create may escape the method if you return them from the method. Local variables can be both mutable or immutable types and can be defined using respectively var or val.

Seed7

Seed7 variables must be defined with type and initialization value before they are used. There are global variables and variables declared local to a function.

$ include "seed7_05.s7i";

var integer: foo is 5;          # foo is global

const proc: aFunc is func
  local
    var integer: bar is 10;     # bar is local to aFunc
  begin
    writeln("foo + bar = " <& foo + bar);
  end func;

const proc: main is func
  begin
    aFunc;
  end func;

SNOBOL4

Local variables in Snobol are declared in a function definition prototype string:

        define('foo(x,y)a,b,c') :(foo_end)
foo     a = 1; b = 2; c = 3
        foo = a * ( x * x ) + b * y + c :(return)
foo_end

This defines a function foo( ) taking two arguments x,y and three localized variables a,b,c. Both the argument parameters and vars are dynamically scoped to the function body, and visible to any called functions within that scope. The function name also behaves as a local variable, and may be assigned to as the return value of the function. Any variable initialization or assignment is done explicitly within the function body. Unassigned variables have a null string value, which behaves as zero in numeric context. Snobol does not support static or lexical scoping, or module-level namespaces. Any variables not defined in a prototype are global to the program.

Tcl

Tcl's variables are local to procedures, lambdas and methods by default, and there is no initialization per se: only assignment when the variable previously did not exist.
Demonstrating:

namespace eval foo {
    # Define a procedure with two formal arguments; they are local variables
    proc bar {callerVarName argumentVar} {
        ### Associate some non-local variables with the procedure
        global globalVar;       # Variable in global namespace
        variable namespaceVar;  # Variable in local (::foo) namespace
        # Access to variable in caller's context; may be local or global
        upvar 1 $callerVarName callerVar

        ### Reading a variable uses the same syntax in all cases
        puts "caller's var has $callerVar"
        # But global and namespace vars can be accessed by using qualified names
        puts "global var has $globalVar which is $::globalVar"

        ### Writing a variable has no special syntax
        ### but [set] is by far the most common command for writing
        set namespaceVar $globalVar
        incr globalVar

        ### Destroying a variable is done like this
        unset argumentVar
    }
}

The main thing to note about Tcl is that the "$" syntax is a language-level operator for reading a variable and not just general syntax for referring to a variable.

TI-83 BASIC

Variables will remain global, even after the program is complete. Global variables persist until deleted (or reset or power loss, unless they are archived). Variables may be assigned a value with the → operator:

:1→A

TI-89 BASIC

A variable not declared local (to a program or function) is global. Global variables are grouped into folders, of which one is current at any given time. Global variables persist until deleted (or reset or power loss, unless they are archived).

Local mynum, myfunc

Variables may be assigned with the → or Define statements, both of which assign a new value to a variable. → is typically used interactively, but only Define can assign programs or multi-statement functions.
Define mynum = 1                © Two ways to assign a number
1 → mynum

Define myfunc(x) = (sin(x))^2   © Two ways to assign a function
(sin(x))^2 → myfunc(x)

Define myfunc(x) = Func         © Multi-statement function
  If x < 0 Then
    Return –x
  Else
    Return x
  EndIf
EndFunc

TUSCRIPT

$$ MODE TUSCRIPT,{}
var1=1, var2="b"
PRINT "var1=",var1
PRINT "var2=",var2
basket=*
DATA apples
DATA bananas
DATA cherry
LOOP n,letter="a'b'c",fruit=basket
 var=CONCAT (letter,n)
 SET @var=VALUE(fruit)
 PRINT var,"=",@var
ENDLOOP

Output:

var1=1
var2=b
a1=apples
b2=bananas
c3=cherry

TXR

Variables have a form of pervasive dynamic scope in TXR. Each statement ("directive") of the query inherits the binding environment of the previous, invoking, or surrounding directive, as the case may be. The initial contents of the binding environment may be initialized on the interpreter's command line.

The environment isn't simply a global dictionary. Each directive which modifies the environment creates a new version of the environment. When a subquery fails and TXR backtracks to some earlier directive, the original binding environment of that directive is restored, and the binding environment versions generated by backtracked portions of the query turn to garbage.

Simple example: a variable. Whichever clause of the cases is successful will pass both its environment modifications and input position increment to the next element of the query.

Under some other constructs, environments may be merged:

@(maybe)
@a bar
@(or)
foo @b
@(end)

The maybe directive matches multiple clauses such that it succeeds no matter what, even if none of the clauses succeed. Clauses which fail have no effect, but the effects of all successful clauses are merged. This means that if the input which faces the above maybe is the line "foo bar", the first clause will match and bind a to foo, and the second clause will also match and bind b to bar. The interpreter integrates these results together and the environment which emerges has both bindings.
UNIX Shell

#!/bin/sh
# The unix shell uses typeless variables
apples=6
# pears=5+4             # Some shells cannot perform addition this way
pears=`expr 5 + 4`      # We use the external expr to perform the calculation
myfavourite="raspberries"

XPL0

There are only three variable types: 32-bit signed integers (in the 32-bit version), IEEE-754 64-bit reals, and characters, which are actually 32-bit addresses of (or pointers to) strings. When a 'char' variable is subscripted, it accesses a single byte. All variable names must be declared before they are used, for example:

int  I, J, K;
real X, Y, Array(10);
char String(80);

Variables (as well as all declared names, such as for procedures) must start with a capital letter or underline. Names may contain digits or underlines and be any length, but only the first 16 characters are significant. Names are case-insensitive by default, unless the /c switch is used when compiling.

Variables are usually assigned values like this: X:= 3.14, but the first declared variables in a procedure can have argument values passed into them. Variables other than arguments are only initialized by explicit assignments in the body of the code.

Local variables are encapsulated in the procedures that declare them and are not visible (are out of scope) outside those procedures. Procedures can be (statically) nested several levels deep. The deepest procedure can "see" all variables declared in the procedures in which it is nested. If two variables are declared with identical names, the most local one is used.

XSLT

Although called variables, XSLT "variable" elements are single-assignment, and so behave more like constants. They are valid in the node scope in which they are declared.

<xsl:variable name="foo" select="..."/>
<xsl:if test="$foo = ...">...</xsl:if>
<!-- prepend '$' to reference a variable or parameter -->

zkl

There are two variable types in zkl: register (auto in C) and var (instance variable).
vars are global to a class [instance]; registers are visible to their scope and enclosed scopes. In addition, a function can have vars (static var in C), which are actually [hidden] instance data. vars have no type.

var v;             // global to the class that encloses this file
class C{ var v }   // global to class C, each instance gets a new v
class C{ fcn f { var v=123; } }   // v can only be seen by f, initialized when C is
class C{ fcn init { var [const] v=5; } }
   // init is part of the constructor, so vars are promoted to class scope.
   // This allows const vars to be created at construction time
var v=123; v="hoho";      // not typed
class C{ var v }          // C.v OK, but just v is not found
class C{ var[const]v=4 }  // C.v=3 illegal (compile or run time, depending)
class C{ var[mixin]v=4 }  // the compiler treats v as an int for type checking
class C{ var[proxy]v=f;
   fcn f { println("my name is ",self.fcn.name) }
}  // v acts like a property to run f, so C.v is the same as C.f()
class C{ reg r }   // C.r is compile time error
r:=5;   // := syntax is same as "reg r=5", convenience
http://rosettacode.org/wiki/Variables
PLaneT is PLT Scheme's centralized package distribution system. PLaneT provides automatic run-time module distribution and caching.

Cute, and be sure to check out the available packages on the PLaneT website, as well as the implementation details.

One might feel that this is outside the scope of normal language design, as it is related to environment facilities that may be non-portable or not available at all on some systems. On the other hand, there are clear benefits to built-in language support: it can provide clean and consistent distribution semantics, it ensures source code portability (as opposed to implementation portability) and universal availability, and, perhaps most importantly from a practical point of view these days, it makes distribution by source easier, since required modules will be fetched automatically. I am sure opinions differ.

"most importantly from a practical point of view, these days, it makes distribution by source easier, since required modules will be fetched automatically"

This is 99% of the reason I wrote PLaneT in the first place. People were always making little PLT scripts for various tasks, but there was no good mechanism for sharing them: you can download and install somebody's .plt file for your own use, but your users will have to install the same libraries to be able to get your program working; if you think they'll object to having to install 80 libraries just to get your Pokemon card sorter working, you're just going to do without the libraries. This happened over and over again, to the point where it was folk wisdom that every serious PLTer had built their own redundant library for every possible task, and that's no good for anybody. I figured an automatic installer was a workable answer.

As for why it's incorporated directly into the language rather than being an app that goes on the side: it tied in perfectly with the already-existing require form, it made usage much easier (ease of use was our #1 design priority), and nobody to my knowledge had done it before, so I wanted to try it out and see what'd happen. PLaneT hasn't been out long enough for me to tell you whether that turned out to be a good idea or a bad one. I can tell you that we've found it great for distributing libraries but less great for distributing more application-like pieces of software; that's been the biggest problem we've run into so far.

Oh, it's cool alright. Else I wouldn't have linked to it, would I? :-) My question was of a more general nature. Should other languages and module systems attempt to solve this issue (thus having to deal perhaps with things like versioning, security, network access, etc.), or should this sort of thing be left outside the scope of language design?

I still think the gold standard for these things is CPAN, which is a Perl module that's external to perl. I haven't used PLaneT enough to tell if I'd want to use it in real code. The versioning feature is nice, but none of the PLaneT modules have multiple versions available yet, so it's not real useful right now. The docs claim there's a separate module implementing PLaneT, but I can't find it. I'd think you'd be able to roll a CPAN-like interface out of it, though.

Only the most recent version is listed on the page, though. For instance, Neil Van Dyke's webscraperhelper.plt has a version 1.0 and a version 1.1 available (those are PLaneT package version numbers, not library version numbers, so they follow a very consistent pattern, and if there's a 1.1 you can be sure a 1.0 exists as well).

This extension mechanism (hook) is not PLaneT specific, I presume.

The way the module system works is that if you say (require [whatever]), that [whatever] gets shipped off to an ordinary Scheme function called current-module-name-resolver that turns it into a lower-level thing and handles namespace-attach-module issues. PLaneT works by having an entry in the default current-module-name-resolver, and users are free to override that function entirely to do whatever they want. However, if you write all your Scheme code in modules (as PLT hopes you will!), you find a problem: by the time the code that overrides the default module-name-resolver gets executed, all the requires are already done, so you can't really install a new resolver usefully (at least not within the context of a module). That's why PLaneT had to be built into the interpreter rather than just being a pure user library.

I think a better way to describe the real choices involved is: (a) large language; (b) smaller language, with large standard library; (c) small language with some mechanism for module distribution and management. All this assuming, of course, the language is supposed to have more than a single implementation.

Language size and library size and/or module distribution seem pretty orthogonal to me -- witness asdf-install for Common Lisp, a module distribution system for a large language (or small language with a large very-standard library, depending on how you want to slice it) with multiple implementations. I'm not sure those categories are exhaustive, but I think one of the things we've learned about programming languages in the past several years is that for practical use, having access to large libraries is more important than the expressiveness of the underlying language. For instance, a lot of people choose to program in Java even when they like OCaml or something better just because of Java's high-quality standard library. And of course CPAN is widely touted as one of Perl's best features.

I think academic types tend to sort of recognize that, but since it doesn't feel like an academic problem, they dismiss it and work on problems that are more likely to lead to publication and that incidentally might help real programmers. That may just be a fundamental divide between academic language design and commercial language design, but I'd like to think that at least those of us on the academic side can put some kind of thought into how a language can best facilitate libraries. Having a big standard library (Python, Java, Common Lisp) is one way to go; automatic distribution (Perl, Common Lisp's asdf-install, PLT) is another. But all these attempts are just stabs at something that seems like it might work; and they all do work, though it's not clear which ones are better or even what the relevant dimensions of comparison are. To my knowledge nobody's done a really thoughtful study of the subject.

I think that's because: Nobody outside academia cares about static expressivity, because they don't see the value in it.

Gee, Frank, that sounds an awful lot like resigning the game to the "pragmatists". You feeling OK? ;-)

I did not say there was no value in it, only that its value is largely unrecognized.

I didn't think you were saying there was no value in it, just that you were giving up the fight, leaving the benighted non-academics to their fate.

For me the ongoing challenge is to find ways to make advanced PL concepts usable and relevant for practical applications. Algorithmic complexity and theorem-proving may not be in big demand, but modularity, readability of code and cooperative coding are. Making a "sales pitch" based on these factors just might make inroads... This is certainly what drives me.

It makes me nuts that folks in academia who are striving to improve the state of the art in software development are dismissed as ivory tower theoreticians while the "real world" is stuck with C++, Java, C#, Visual Basic... more days than not, I get up in the morning to sling C++ and ask myself why I bother, apart from the paycheck. Not to reiterate the entire rant again, but the gap between the quality of what I can do when I get to choose the tools and what I can do when I don't get to choose is simply mind-boggling.

At the moment, I am too benighted by my thesis to care deeply about the fates of benighted non-academics.
http://lambda-the-ultimate.org/node/229
book is invaluable for anyone wishing to understand the recent religious resurgence that has caught so many educated people by surprise, and shown up their tone deafness about all that is happening. Os Guinness, author of The Case for Civility As Pope Benedict constantly reminds us, Christians today face a new situation where other religious traditions once more challenge Christian belief just as they did in the days of the apostles. This book helps us understand our non-Christian neighbors and as such is a valuable tool for all Catholic educators. H enry Rosenbaum, SAC Former Director of Education for the Roman Catholic Diocese of Calgary This textbook-type tour of world religions, spiced with personal close-ups, fully merits a place on thoughtful Christians bookshelves. Informal and informative, learned, wise, chatty, and sometimes provocative, it is a very impressive performance. J.I. Packer, Professor of Theology, Regent College My former colleague from Regent College days, and long-time friend, Irving Hexham has written an absolutely fascinating book on world religions which ref lects a balance and level of scholarship rarely found in introductory works. Therefore, I enthusiastically endorse this book. Bruce Waltke, Professor of Old Testament, Knox Theological Seminary 0310259442_understanding_int_CS4.indd 1 2/1/11 10:14 AM Irving Hexham writes that bland approaches [to the study of religion] produce bland students. Irving Hexham is not bland, and by combining authoritative knowledge of the worlds religions with a keen eye for current events, he has given us a textbook that will not produce bland students. Instead it will produce students who know about religion and who know how religious people the world over relate to the crucial issues of the day. TerryC. 
Muck, Dean and Professor of Mission and World Religion, E.Stanley Jones School of World Mission and Evangelism, Asbury Theological Seminary Irving Hexham is well known to his many readers through his publications on religious studies, both as a general field of research, as well as represented in various movements, both local and worldwide. In Understanding World Religions, his provocative work, especially on African and also Indian religious views, is worth the price of the volume. We need to examine these often neglected areas of study. GaryR. Habermas, Distinguished Research Professor and Chair of the Department of Philosophy and Theology, Liberty University Often it is just scholars who take real interest in world religions and new religious movements. The only time most of us lift our heads is when we hear of some tragic event that shows us other people believe differently than we do. But this is the world we live in and Irving Hexhams book is a resource that brings clarity to this vast world of religious beliefs. This book needs to be read and kept available on the bookshelf of every Christian leader. Carson Pue, President, Arrow Leadership 0310259442_understanding_int_CS4.indd 2 2/1/11 10:14 AM ZONDERVAN Understanding World Religions Requests for information should be addressed to: Zondervan, Grand Rapids, Michigan 49530 Library of Congress Cataloging-in-Publication Data Hexham, Irving. Understanding world religions / Irving Hexham. p. cm. Includes bibliographical references (p. 477). and index. ISBN 978-0-310-25944-2 (hardcover, printed) 1. ReligionsTextbooks. I. Title. BL80.3.H49 2011 200dc22 2010013103 All Scripture quotations, unless otherwise indicated, are taken from the Holy Bible, New International Version, NIV. Copyright 1973, 1978, 1984 by Biblica, Inc. Used by permission of Zondervan. All rights reserved worldwide. Maps by International Mapping. Copyright 2011 by Zondervan. 
All rights reserved.

Cover design: Christopher Tobias
Cover photography: Kazuyoshi Nomachi/Corbis
Interior design: Publication Services, Inc.

Printed in the United States of America

Contents

Acknowledgments
Introduction

Part 1: Studying Religion
1. Introductory Issues in the Study of Religion
2. A Biased Canon

Part 2: African Traditions
3. African Religious Traditions
4. Witchcraft and Sorcery
5. God in Zulu Religion
6. The Case of Isaiah Shembe

Part 3: The Yogic Tradition
7. The Origins of Yogic Religions
8. The Richness of the Hindu Tradition
9. Rethinking the Hindu Tradition
10. Gandhi the Great Contrarian
11. Buddhism
12. The Development of Buddhist Belief and Practice
13. The Moral Quest of Edward Conze
14. Other Yogic-Type Traditions

Part 4: The Abramic Tradition
15. Early Judaism
16. Rabbinic and Other Judaisms
17. Jewish Faith and Practice
18. Martin Buber's Zionist Spirituality
19. Christianity
20. Christian History
21. Christian Faith and Practice
22. Christian Politics according to Abraham Kuyper
23. The Challenge of Islam
24. Muslim Beliefs and Practices
25. Muslim Piety
26. Sayyid Qutb and the Rebirth of Contemporary Islam

Conclusion: Whither Religious Studies?
Suggestions for Further Reading
Credits
Index
Acknowledgments

For a book like this, the most appropriate place to begin in acknowledging all the help I have received over the years is with my entrance to Lancaster University in 1967. Therefore, I must begin by thanking Colin Lyas for accepting a former gas fitter into a degree program even though I did not fit the normal category of incoming students straight out of an English grammar school. Next, I need to thank the excellent religious studies professors who taught me at Lancaster, beginning with Ninian Smart, Edward Conze, Adrian Cunningham, Bob Morgan, James Richardson, Stuart Mews, and James Dickie (Yaqub Zaki). The education provided by the department at Lancaster was second to none.

Clark Pinnock and Francis Schaeffer also deserve mentioning for encouraging me to go to university in the first place. Their recognition of my academic potential and real support changed my life from that of a manual worker to a scholar.

After leaving Lancaster, I went to the University of Bristol, where I studied with two excellent Africanists, Fred Welbourn and Kenneth Ingham, to whom I owe a great deal. Fred taught me to understand African thought and encouraged my interest in new religious movements, while Kenneth insisted that I become a competent historian. Later G. C. "Pippin" Oosthuizen and Hans-Jürgen Becken deepened my knowledge of African religion, as did Gerald Pillay and the Right Reverend Londa Shembe. More recently, Ulrich van der Heyden also played an important role in encouraging my study of religion in both Africa and Germany.

Michael Hahn of the University of Marburg is to be thanked for kindly allowing me to sit in on his graduate course on Buddhism at the University of Calgary and for later proving to be a true friend. So too Tony Barber deserves thanks for his friendship and insights into Chinese religion and culture. Niri Pillay and her mother also provided vivid insights into the Hindu tradition.
Samerah Barett and Nastaran Naskhi need to be thanked, alongside my former neighbor Ishmael Bavasah, for correcting my understanding of Islamic culture. Samerah took time out of her busy schedule in law school to carefully read and comment on my chapters on Islam, which I appreciate. So too did Gordon Nickle, who graciously corrected any academic errors he found in the text. Katrine Brix provided scholarly comments on my chapter on Christianity, while Trevor Watts and Kristen Andruchuk also looked it over and gave their views as Christian readers. Henry Srebrnik advised with the chapter on Judaism, and I also benefited from regular conversations with my colleagues Eliezer Segal, Leslie Kawamura, and Elizabeth Rohlman.

My thanks go to both the staff at Research Services of the University of Calgary and the Social Sciences Research Council of Canada for their generous support of my research over many years. Without them I would have produced far less. In particular the grants I received to visit Africa and Europe helped greatly with this book.

Finally, I must thank a host of students from numerous religious traditions who took my courses at the University of Calgary. They both inspired me and corrected my misunderstandings of their traditions.

Introduction

often called the new atheism,2 but one cannot deny that they make some interesting points. From experience, I know that students are far more likely to take an issue seriously and become fascinated by a topic if they are presented with different opinions that challenge their way of thinking or the work of other academics. Bland approaches produce bland students.
Therefore, when I present controversial topics or points of view that are normally ignored by introductory texts, I am not necessarily presenting my own views. What I am doing is presenting ideas and arguments that I believe will stimulate debate and draw students into serious discussions about the study of religion.

For example, in chapter 9 I present various arguments about the origins of Indian, or what I call yogic, religions, which are related to the Indus Valley civilization, and I raise issues surrounding the Aryan invasions. Then, in chapter 10 I introduce Nirad C. Chaudhuri's ideas about the development of the Hindu tradition. Not everyone will agree with what is said here; Chaudhuri is an important commentator who makes a convincing argument that deserves attention even though some people regard his work as controversial. By presenting the Hindu tradition in this way, I intend for students to be stimulated to find out more and to seriously study the issues involved. Therefore, while some people will disagree with the ideas and arguments, the way they are presented allows professors and students alike to enter into meaningful debates. In this way the book is a teaching tool rather than a set text that preserves a received tradition.

Another area where some readers will feel disquiet is in my discussions of various modern thinkers at the end of each main section that outlines a major religious tradition. For example, my treatment of Gandhi as a contrarian will surprise many readers and annoy some. Others will complain that I spend far too little time discussing Gandhi's personal religious beliefs and far too much time on his views about imperialism and cultural issues. My approach is intended to make people see beyond stereotypes.
What I hope to achieve is a presentation that makes the reader want to read what Gandhi, and the others I discuss, actually said and that develops in the reader a fascination with these important and, each in their own way, fascinating people.

As a student of Ninian Smart (1927–2001), I place great value on understanding a religion as it is understood by its practitioners. To borrow Smart's words, I believe the student of religion must learn to walk in another person's moccasins, or, to put it another way, see the world through the interpretive lenses, or tinted glasses, worn by true believers. My other mentors include Edward Conze (1904–1979), James Dickie (or Yaqub Zaki, to give him his Arabic name), Kenneth Ingham, and Fred Welbourn (1912–1986), all of whom took very different approaches to the study of religion and never avoided controversy. I also admire, among others, the work of Walter Kaufmann (1921–1980) and Karl Popper (1902–1994), to whom I owe an intellectual debt. From all of these people I learned that understanding precedes criticism, which expresses the spirit of this book.

The book is divided into four parts: (1) Studying Religion, (2) African Traditions, (3) The Yogic Tradition, including Hindu religions and Buddhism, and (4) The Abramic Tradition, which includes Christianity, Judaism, and Islam; these are followed by a short conclusion reflecting on Christian approaches to other religious traditions. Each part deals with a major religious tradition and some of its manifestations. These sections are designed to provide basic information about various religions in an engaging way. Thus, the last chapter dealing with each major religion focuses on one particular individual.

2. "The New Atheism," Wired, November 2006.
In this way students are given insights into the work of people who embrace specific religious traditions, and they are helped to see that very often such people adopt positions that seem contradictory to outsiders. This is done to help students grasp the complexity of religious life and recognize that engagement with an individual's life and work brings with it the realization that religious beliefs are never as neat and clear-cut as they appear in most textbooks.

Initially I had planned to make each part and each section within it identical. As a result I intended to produce a uniform manuscript which allows students to compare the history, teachings, and practices of each major religious tradition with those of other traditions systematically. As I wrote the book, I found that this neat scheme did not work. Religions and religious traditions are different. Each is unique. Therefore, a different approach was demanded for each tradition if I hoped to capture its essence in a few short chapters. As a result the book lacks the obvious cohesion of the earlier plan but is, I believe, more authentic and useful to the student of religions.

Some readers will no doubt find the inclusion of footnotes tiresome and inappropriate for an introductory textbook. Although I can understand this reaction, I believe it is wrong. Therefore, I compromised by providing footnotes (and at times there are a lot of them) only for those sections and arguments where serious questions exist about the claims I make. In this way I provide professors and students with a means of checking things out for themselves and seeing why I say the things I do on controversial or little-known topics. Similarly, maps were used only when they added to the text and dealt with unfamiliar issues. Therefore, there are no biblical maps, which are easily found in other books. The pictures also were chosen in the hope that they will communicate something of the ethos of a religion in a particular time and place.
As a result they are rather eclectic and not the standard glossy photo. Hopefully they make the book interesting and are informative.

Other readers may be surprised that I have retained the essentially Christian designations of dates, BC and AD, instead of the increasingly popular Common Era or CE and BCE. This is because the Common Era is common to Jews and Christians but still excludes Buddhists, Hindus, and Muslims. It is therefore a very misleading term. For this reason I prefer the traditional Western usage to a modern innovation which does not even have the saving grace that it supposedly developed in a homogeneous society.

In conclusion, I hope that this book will stimulate debate and encourage a new generation of students to become involved in the study of religion and religious traditions.

Part 1
Studying Religion

Chapter 1
Introductory Issues in the Study of Religion

What Is Religion?

Most people have a clear idea of what they mean by religion and can usually identify religious behavior when they see it. Nevertheless, when we have to define religion, we soon discover that the task is quite difficult, because religion is manifested in many different ways in our world. Thus, while for most people religion involves a belief in God, this is not true for certain forms of Buddhism. Indeed, to the educated Buddhist, God is quite unimportant. Yet for many peasants living in Buddhist countries, the role of gods in their daily lives is important. Therefore, a distinction has to be made between Buddhism as a great tradition1 and the many little traditions embedded in a predominantly Buddhist culture. The educated Buddhist does not seek God, but his peasant neighbor, while acknowledging the importance of Buddhism for liberation, does worship various gods.

1. See, e.g., Robert Redfield, Peasant Society and Culture: An Anthropological Approach to Civilization (Chicago: Univ. of Chicago Press, 1956), 70–104.

Because of the difficulties created by movements, such as Buddhism, that are clearly religious, many students turn to experts for a definitive definition of religion. What they find is a bewildering series of definitions. For example, sociologist Émile Durkheim (1858–1917) defined religion as "a unified system of beliefs and practices relative to sacred things, that is to say, things set apart and forbidden – beliefs and practices which unite into one single moral community called a Church, all those who adhere to them."2 Another author who is often cited for his definition of religion is the philosopher Immanuel Kant (1724–1804), who defined religion as "the recognition of all duties as divine commands."3 Similarly, Max Müller (1823–1900), whom many regard as the true founder of religious studies,4 gave a twofold definition of religion as "a body of doctrines handed down by tradition, or in canonical books, and containing all that constitutes the faith of Jew, Christian, or Hindu" and as "a faculty of faith ... which distinguishes man from animals."5

Photo 1.2: Max Müller is generally known as the father of religious studies. Although he concentrated his work on the religions of India, he had wide interests and an appreciation for African and other religious traditions neglected by later scholars.

Reflecting on these and many similar definitions of religion, one soon sees that most of them reflect both the complexity of the subject and the interests of the person making the definition. Thus Durkheim writes as a sociologist, Kant as a philosopher, and Müller as a historian-linguist influenced by theological discussions.

Faced with this bewildering variety, Ninian Smart suggested that rather than attempting a single definition we should look for certain general characteristics found in similar phenomena which we also call religions.
We can say we are in the presence of a religion, he suggested, when we discover "a set of institutionalized rituals, identified with a tradition and expressing and/or evoking sacral sentiments directed at a divine or trans-divine focus seen in the context of the human phenomenological environment and at least partially described by myths or by myths and doctrines."6 Each of the key terms in this model for identifying religion can be discussed at great length. All we will do here is briefly discuss their key characteristics.

First of all, when we attempt to study a religion, or religions, all we can really do is look at their institutional manifestations. We can observe behavior, but we can never really know what goes on in a person's head. Therefore, for practical purposes, studying religion means studying religious institutions or institutions identified as religious. This means the study of religion is the study of religious movements which are observable within society and therefore are a form of social movement.

The next question is whether a movement is religious or secular. Many secular movements appear religious. For example, a crowd at a hockey game or watching American football often acts in ways that look like those of a religious group. But although some people argue that such actions are religious at heart, there is a big difference between a secular and a religious gathering. Political parties, the fans mobbing rock stars, and the veneration of nationalist leaders all have similarities to religion, but none are religious in themselves. Therefore, they need to be excluded from our study. This is why the other characteristics indicated by Smart are important.

Let us begin by considering ritual behavior. Rituals are repetitive behavior fixed by tradition. In the study of religion they are, as Smart says, traditional religious behavior or actions.
Probably the most obvious form of ritual is the Roman Catholic Mass, which contains a lot of color, carefully ordered actions, a fixed order of words, particular smells, and what is in many ways a carefully orchestrated theatrical performance. In other religious traditions, things like pilgrimage to Mecca, for Muslims, or sacrifices and ritual bathing, for Hindus, are good examples of ritual action.

Photo 1.3: Ninian Smart at a conference in Washington, D.C., shortly before his death. He pioneered modern religious studies at Lancaster University, in England, before moving to the University of Santa Barbara in the U.S.

Some religious traditions, especially those associated with religious movements such as the Protestant Reformation, react strongly against what they call dead rituals. Such groups fail to recognize their own ritual actions while identifying the rituals of other religious movements as somehow unspiritual or false. For example, the Plymouth Brethren strongly reject rituals like those of the Catholic Mass or High Anglican services on theological grounds. Yet, in fact, their own services have many rituals, even though the participants usually fail to recognize them as rituals. The very order and arrangement of the Brethren service actually make it a ritual action. Consequently, rituals need to be understood in terms of the convictions of the worshipers and the relationship between them and the divine, or, as Smart argues in some cases, the trans-divine. Seeing ritual in this way, one can argue that a football game is a ritual act, but not a religious one, for very clearly the divine, or trans-divine, element is missing.

6. Ninian Smart, "Towards a Definition of Religion," unpublished paper, Lancaster University, 1970. Cf. Ninian Smart, The World's Religions (Englewood Cliffs, N.J.: Prentice-Hall, 1989), 10–21.

This is why, Smart insists, religious rituals need to be identified with a tradition. Traditions are those things that add meaning to action. For students, probably the best example of a tradition is the act of graduation after they complete their degree. On such occasions people dress up in peculiar clothes, make speeches, and do all sorts of unusual things. While cynics might say that such actions are a waste of time, they serve a useful purpose. They remind people that the awarding of a degree conveys certain privileges and responsibilities that gain their validity from the fact that they are not some new, fly-by-night invention. Tradition assures the student that their degree is valid because the institution awarding it has stood the test of time. Thus a degree from Harvard University is immediately recognized because of the tradition associated with Harvard, while one from Upper Backwoods College may have little value.

Smart then notes that these institutions, and the rituals and traditions associated with them, have an impact upon the people involved. This he describes as expressing and/or evoking sacral sentiments. In other words, participating in religious activities within the framework of a traditional institution not only expresses a certain commitment to spiritual values but often has the remarkable effect of creating or evoking a sense of the sacred in the worshiper and sometimes even in people who simply attend the ceremony without really intending to worship. This sense of the sacred, Smart explains, is directed at a divine or trans-divine focus.

Figure 1.1: Ninian Smart's suggestion about recognizing a religion. The diagram links institutions, rituals, traditions, myths, doctrines, and sacral sentiments to religion. When all of these features are found together in society, then we are probably dealing with a religion.
That is, the participant directs their feeling of worship, awe, and respect toward either the divine or something beyond the divine. By the divine he means God, gods, or, as in the case of Buddhism, something beyond or at least separate from the divine. This latter option Smart identifies as the trans-divine focus, a term for such things as Nirvana in Buddhism, or the veneration of ancestors in African and other primal traditions.

Next, Smart reminds his readers that religion takes place within the human phenomenological environment, by which he means the totality of human social life and individual experience within which religion exists. Finally, Smart raises the important point that religious people describe their beliefs and practices in terms of myths, or, as he says, myths and doctrines.

Photo 1.4: Mircea Eliade portrayed on a Moldovian postage stamp, which shows the esteem in which he is held in the land of his birth.

Here it is important to understand what Smart means by myth. For many people a myth is a story that is simply untrue. Essentially, this is the way the German theologian Rudolf Bultmann (1884–1976) used myth when he developed his theories about the necessity of what he called demythologising, which he believed was necessary to make the New Testament acceptable to the modern world. In Bultmann's view the New Testament is a product of a prescientific age, many of whose stories, such as accounts of miracles, are therefore unacceptable to people living in an age of science. Therefore, in his view, these stories need to be reinterpreted to explain what they really mean in terms of their message and not regarded as literal accounts of what happened. In other words, Bultmann says that the stories he identifies as myths in the New Testament are simply untrue.
On the other hand, some religious writers in the tradition of Mircea Eliade (1907–1986), Joseph Campbell (1904–1987), and Carl Gustav Jung (1875–1961) define myth as some sort of special story containing unique insights into religious truth, archaic insights often lost to humans living in industrial societies. The task of the scholar is to probe these myths to get at their inner truths. The problem with this view is not that it represents myths as untrue but that it represents them as in some sense containing a supertruth not located in any other type of story.

Against these two extremes stands the anthropological understanding of myth, which appears to be close to Smart's view. Initially developed by Bronislaw Malinowski (1884–1942), who reacted to abstract theorizing by nineteenth-century writers about the meaning of myth, the anthropological approach looks at myths in terms of their function in society.7 This definition emphasizes that a myth is any story which affects the way people live. It is not necessarily either unhistorical, as Eliade tends to argue, or historical. Rather, a story which becomes a myth can be true or false, historical or unhistorical, fact or fiction. What is important is not some special feature of the story itself but the function which it serves in the life of an individual, a group, or a whole society.

Myths, in the anthropological understanding, enable members of different societies and subgroups within societies to make sense of their lives and their world. As anthropologist John Middleton puts it, a myth is "a statement about society and man's place in it and in the surrounding universe ... Myths and cosmological notions are concerned with the relationship of a people with other peoples, with nature and with the supernatural."8 The importance of a myth lies not in its particular qualities as a story but in the use made of it.
When a story acts upon the imagination of an individual or group in such a powerful way that it begins to shape their lives, molding their thoughts and directing their actions, then that story has become a myth. Thus what makes a story a myth is not its content, as the rationalists thought, but the way the story takes on a life of its own in the thought of an individual or an entire society.

However one defines myth, the success of a myth depends upon people accepting and acting upon what they consider to be its message. Nevertheless, most people who act upon what they consider to be the message contained in a myth do so because they believe it is true. In other words, they accept the story as true before it becomes a myth in their lives. Questions of historic, philosophic, or any other form of verifiable truth are therefore important in the creation and maintenance of mythologies. In fact, such questions often precede the acceptance of myths. What matters is not simply the power of myths to inspire belief and to enable believers to make sense of their experiences but the prior belief that the story is true.

Smart also points out that while all religions have their own mythic cores, the myths of a single religion often appear to conflict with one another or with the teachings of the group. For example, the Hebrew Bible makes it very clear that there is only one God. Yet Jesus spoke and acted in such a way that he appeared to accrue to himself certain powers and attributes that belong to God alone. For his early Jewish followers such statements and actions created a crisis of understanding.

7. I have developed aspects of this argument in various papers and books, including Irving Hexham and Karla Poewe, New Religions as Global Cultures (Boulder, Colo.: Westview, 1997), 79–98.
8. John Middleton, ed., Myth and Cosmos (New York: Natural History Press, 1967), x.
If there is only one God, how could Jesus possibly do and say things that only God can do and say? Similarly, in Buddhism there seem to be contradictory stories about the nature of the Buddha, about whether he has one or many bodies, and about how these bodies, some of which are spiritual, relate to each other. There are also questions about how many Buddhas there actually are and how they relate to the man known to history as the Buddha.

Questions like these gave rise to the development of Christian doctrine and Buddhist philosophizing. Within both religions, and indeed within all other major religious traditions, the sometimes apparently contradictory nature of mythic stories creates the need to develop doctrine. Doctrines are articulate expressions of logical reasoning applied to the bedrock mythic substructure found in all religions. In doctrines, theologians and philosophers attempt to show how the stories cohere despite apparent contradictions.

essence of religion, which involves the totality of personal existence.9 The argument against seeing religion as a way of life is that it is too inclusive to be of any value to scholars. If, as Tillich argues, everyone has an ultimate concern, how can anyone study such a vague concept? Likewise, if all of life is religion, what is not religious? And how can we ever talk about religion as an identifiable aspect of reality?

The answer of people like Tillich and Welbourn was that perhaps we should recognize that religious institutions must be studied as institutions and not as a special category of institution labeled religious.10 Once this is done, Welbourn argued, we must recognize that in an ontological sense all institutions can have a religious dimension. Therefore, to thoroughly study religion we must consider both the institutional forms of piety and ontological commitments.
Many scholars object that the ontological approach to religion results in a definition of religion that includes everything. Welbourn counters that unless we are prepared to include ontological definitions we can never really understand the religion of the Baganda, Islam, or even certain forms of Christianity.11 To reinforce his point he cites the example of some Marxist students he encountered when he was a student at Cambridge in the 1930s. These people, he notes, were not intellectual Marxists but ontological believers. Their commitment to Marxism was as total as his own to Christianity or that of the Baganda to their traditional way of life. We can, he suggests, say that these commitments are examples of pseudo-religion, but such a judgment, in his view, does an injustice to both the facts and the people involved in such movements.12

Photo 1.6: Paul Tillich, one of the pioneers of existential theology. He fled Nazi Germany to teach in the U.S., where he made his reputation as an original thinker. Key to Tillich's understanding of religion is the concept of ultimate concern. This he based on the biblical concept of idolatry, asserting that whatever a person makes the most important thing in their life expresses their true religious faith. If someone claims to be a Christian but their life revolves around their family or some other created thing, and not God, they are an idolater.

Religion, Welbourn argues, should be viewed in terms of each person's implicit ontological commitments, what really motivates them in their day-to-day lives, rather than in terms of explicit religiosity, with its unimportant institutional rituals. Welbourn also argues that while it is easy to recognize myths and rituals, once they have ceased to function as living realities, we are rarely aware of our own myths and rituals.

9. Fred B. Welbourn, "Towards Eliminating the Concept of Religion," unpublished paper given at the Colloquium on the Concept of Religion, Lancaster University, 15–18 December 1969; cf. Paul Tillich, Systematic Theology (Chicago: Univ. of Chicago Press, 1973; first published 1951), 1:12–15.
10. Welbourn, "Towards Eliminating the Concept of Religion," 13.
11. Ibid., 13–14.
12. Ibid., 7–8.

we cannot fully grasp the essence of religion at a personal level. To attempt to grasp an individual's ontological commitments, he claims, is futile. Therefore, while we cannot precisely identify an individual's ontological commitments, their ultimate concerns, we can identify such concerns historically and socially when they are expressed by groups of individuals living in community. In this way Dooyeweerd seems to be coming close to the views of Smart while at the same time paying attention to the complex issues of commitment.

A religious community, Dooyeweerd argues, is maintained by a common spirit, which as a dynamis, as a central motive power, is active in the concentration-point of human existence. This spirit of community, he claims, works through what he calls a religious ground idea or ground motive, which gives content to the entire life and thought of a society. Thus it can be seen in the historical development of human societies, where it takes on particular forms that are historically determined.15

Photo 1.8: Herman Dooyeweerd was a Dutch Christian philosopher who developed the ideas of Abraham Kuyper in relation to the philosophy of law. This led him to spend a lot of time pondering the nature of religion. As a result he developed ideas similar to those of Welbourn and Tillich.

These insights lead Dooyeweerd to argue that the ultimate ontological commitments of individuals find expression historically and socially in various religious or faith communities that can be studied. Having recognized this, he argues, we must also recognize that because individuals are often born and raised in a faith community and die in it, the commitments expressed in the community are capable of molding both individual members and the community as a whole. By studying these communities, then, it is possible to study the ontological commitments of their members. In this way, Dooyeweerd appears to combine the institutional and ontological definitions of religion while seeking to overcome common objections to both.

Figure 1.2: Because religion is a living entity, the academic field of religion is much like African studies, women's studies, and a host of other scholarly enterprises that study a multifaceted social reality. Therefore, it can be legitimately studied from many different perspectives. (Diagram text: There are as many ways of studying religion as there are academic disciplines. So which methods are the most useful?)

are usually taken at face value. Thus theology is based on the analysis of beliefs as they are found in the Bible, devotional works, and books of theology. Theologians are trained to read texts, which they then interpret according to established exegetical techniques.

From experience it is clear that people trained in theological and other literary disciplines often find it very difficult to appreciate the methods of social scientists. Therefore, it is not surprising that when theologians discuss sociology they tend to think of it in terms of the highly philosophical work of figures like Karl Marx (1818–1883), Max Weber (1864–1920), and Peter L. Berger (1929–).16 Rarely does one find a theologian engaging in serious dialogue with sociologists like Charles Glock (1924–),17 Rodney Stark (1934–),18 or Reginald Bibby (1943–),19 whose work is based on survey research, statistical analysis, and empirical observations. Similarly, although such a thing as theological anthropology20 exists, few Christians are seriously engaged in social anthropology as an academic discipline. Occasionally the works of an anthropologist like Mary Douglas (1921–2007) may catch the imagination of a theologian, but generally little or no effort is made by theologians, or people trained in classical religious studies, to engage in or understand the discipline except for apologetic purposes.

16. Cf. Charles Villa-Vicencio, Trapped in Apartheid: A Socio-Theological History of the English-Speaking Churches (Maryknoll, N.Y.: Orbis, 1988).
17. Charles Y. Glock and Rodney Stark, Religion and Society in Tension (Chicago: Rand McNally, 1965).
18. Rodney Stark and William Sims Bainbridge, The Future of Religion: Secularization, Revival and Cult Formation (Berkeley: Univ. of California Press, 1985).
19. Reginald Bibby, Fragmented Gods: The Poverty and Potential of Religion in Canada (Toronto: Irwin, 1987).
20. See, e.g., Charles H. Kraft, Christianity in Culture (Maryknoll, N.Y.: Orbis, 1979), for one of the better examples of this genre.

One exception to this general picture of disciplinary apartheid is the Annual Conference on Implicit Religion, organized by Welbourn's student, Dr. Edward Bailey. For almost thirty years he has encouraged the production of numerous papers and books examining what he calls implicit religion. By this he means the actual practices and beliefs of people as discovered by others through careful observation of their actions and not simply by taking their words at face value. Bailey argues that it is important to observe what people actually do, not simply analyze what they say.21

Photo 1.9: Karl Marx (left) and his close collaborator Friedrich Engels. This bronze monument was erected during the cold war by the German Democratic Republic (GDR) in a park near Alexanderplatz in former East Berlin.
nities that can be studied. Having recognized this, he argues, we must also recognize that because individuals are often born and raised in a faith community and die in it, the commitments expressed in the community are capable of molding both individual members and the community as a whole. By studying these communities, then, it is possible to study the ontological commitments of their members. In this way, Dooyeweerd appears to combine the institutional and ontological definitions of religion while seeking to overcome common objections to both. 0310259442_understanding_int_CS4.indd 24 2/1/11 10:14 AM 25 Religion There are as many ways of studying religion as there are academic disciplines. So which methods are the most useful? Figure 1.2 Because religion is a living entity, the academic field of religion is much like African stud ies, womens studies, and a host of other scholarly enterprises that study a multifaceted social reality. Therefore, it can be legitimately studied from many different perspectives. are usually taken at face value. Thus theology is based on the analysis of beliefs as they are found in the Bible, devotional works, and books of theology. Theologians are trained to read texts, which they then interpret according to established exegetical techniques. From experience it is clear that people trained in theological and other literary disciplines often find it very difficult to appreciate the methods of social scientists. Therefore, it is not surprising that when theologians discuss sociology they tend to think of it in terms of the highly philosophical work of figures like Karl Marx (18181883), Max Weber (18641920), and Peter L. Berger (1929).16 Rarely does one find a theologian engaging in serious dialogue with sociologists like Charles Glock (1924),17 Rodney Stark (1934),18 or Reginald Bibby (1943),19 whose work is based on survey research, statistical analysis, and empirical observations. 
Similarly, although such a thing as theological anthropology20 exists, few Christians are seriously engaged in social anthropology as an academic discipline. Occasionally the works of an anthropologist like Mary Douglas (1921-2007) may catch the imagination of a theologian, but generally little or no effort is made by theologians, or people trained in classical religious studies, to engage in or understand the discipline except for apologetic purposes.

16. Cf. Charles Villa-Vicencio, Trapped in Apartheid: A Socio-Theological History of the English-Speaking Churches (Maryknoll, N.Y.: Orbis, 1988).
17. Charles Y. Glock and Rodney Stark, Religion and Society in Tension (Chicago: Rand McNally, 1965).
18. Rodney Stark and William Sims Bainbridge, The Future of Religion: Secularization, Revival and Cult Formation (Berkeley: Univ. of California Press, 1985).
19. Reginald Bibby, Fragmented Gods: The Poverty and Potential of Religion in Canada (Toronto: Irwin, 1987).
20. See, e.g., Charles H. Kraft, Christianity in Culture (Maryknoll, N.Y.: Orbis, 1979), for one of the better examples of this genre.

One exception to this general picture of disciplinary apartheid is the Annual Conference on Implicit Religion, organized by Welbourn's student, Dr. Edward Bailey. For almost thirty years he has encouraged the production of numerous papers and books examining what he calls implicit religion. By this he means the actual practices and beliefs of people as discovered by others through careful observation of their actions and not simply by taking their words at face value. Bailey argues that it is important to observe what people actually do, not simply analyze what they say.21 By observing their actions, the way they live, he claims, it is possible to get at the nature of people's actual ontological commitments, that is, to detect their implicit religion. Today, several British universities offer courses in the area. Both MA and PhD degrees are available on the topic of implicit religion, although the notion has not really caught on in North America.

Photo 1.9 Karl Marx (left) and his close collaborator Friedrich Engels. This bronze monument was erected during the cold war by the German Democratic Republic (GDR) in a park near Alexanderplatz in former East Berlin. Unlike many other monuments from that period, it has survived. For the GDR, Marxism was a pseudoreligion, as the German film Goodbye Lenin brilliantly shows. Yet although most Americans have no problem rejecting Marxism as a political philosophy, the ideas of Marx play an important role in theories about the origins and nature of religion.

... definitions on the grounds that meaningful empirical research, based on archival evidence or fieldwork, demands a clear definition that distinguishes religion from other forms of social life.22 Therefore, they defined religions as "human organizations primarily engaged in providing general compensators based on supernatural assumptions."23 Later, they refined their theory24 to define religion in terms of "systems of general compensators based on supernatural assumptions" (emphasis added).25 Using this definition, Stark and Bainbridge provide their readers with five dimensions of religiousness.26 These dimensions are based on a scale devised by Stark and Glock27 and are not unlike the indicators of religion proposed by Ninian Smart.28 Stark and Bainbridge describe these factors as belief, practice, experience, knowledge, and consequences. They then claim that by studying these dimensions of social institutions and movements, scholars are able to measure and examine religion from a variety of perspectives that allow them to generate hypotheses and create usable research instruments. To many people this definition appears reductionist because of the use of the term "general compensators."
Stark and Bainbridge are at pains to point out that this is not the case. They are not proposing a crude deprivation model of religion. Rather, they carefully explain, they use the terms to refer to clearly religious expectations such as the promise of a triumph over death.29 Subsequently, in a series of books like For the Glory of God,30 Stark has attempted to show how his theories work by providing many complex examples. It should be noted that while Stark and Bainbridge's understanding of religion excludes certain types of ontological definition such as Tillich's description of an ultimate concern and Welbourn's analysis of individual commitments, it includes the type of ontological commitment discussed by Dooyeweerd.31 More importantly, however, Stark and Bainbridge show that by utilizing their definition it is possible to construct testable hypotheses, general theories, and definite research programs, which enliven the study of religion.

Figure 1.3 Practical approaches to studying religion based on five key disciplines. (Surviving diagram labels: Religion; Social Anthropology and Sociology; History.)

31. Stark and Bainbridge, The Future of Religion, 51-53.

... argued that it is best to limit the study of religion, at least in its initial stages, using three main academic traditions: philosophy and logic, history, and the social sciences, particularly anthropology and sociology. Thus anyone who wishes to understand a religious movement needs to approach it both anthropologically/sociologically and historically. Then they need to analyze their findings using logic and the tools of philosophy. Later it may be important to take into account geographic, economic, and other methods to understand the nature of a particular religion.
The approach taken here is to view religion as an area of human life that needs to be studied using various well-established academic disciplines. Thus, to really study a religion, students need to examine its history and beliefs as well as its current cultural context. Such a multidisciplinary approach is both challenging and very exciting, as I hope you will discover in reading this book.

Chapter 2
A Biased Canon

Introduction

Recognizing bias is the first step toward critical thinking in academic work, and you can do it if you develop confidence in your own judgment. By their very nature biases are ingrained. Therefore, most people do not recognize a bias even when it is staring them in the face. Yet once this is pointed out and you begin looking for biases, they are relatively easy to find. You can discover the truth of this statement the next time you read a book. All you have to do is ask yourself, What is the author's bias? Then begin looking for it. You will be surprised how much this simple question reveals. To help you see how biases can be recognized with a little effort, this chapter provides one example of the way racism has clearly affected modern thinking about the nature of religious studies.

The pervasiveness of biases and their ability to distort our understanding are not something new, discovered by postmodern philosophers in the last fifty years. It has been known for centuries. Recognizing bias has always formed the basis of a good education. For example, in 1873, Herbert Spencer (1820-1903) wrote a wonderful book exposing the role of bias in human thinking. The book, The Study of Sociology,1 is a classic that discusses in great detail the numerous ways bias can enter into our thinking. It reminds us that recognizing bias is nothing new. Yet it is something we must learn.
To demonstrate how frequently unrecognized biases appear in textbooks, this chapter will examine the attitude toward Africa and Africans in religious studies textbooks. Even though most of the authors of the books we will examine are self-proclaimed liberals who would be horrified at the suggestion that their books are riddled with racism, there can be no doubt that textbooks dealing with African religions suffer from a racist heritage. But until this is pointed out to students, few people consciously recognize the fact, even though many have a subconscious feeling that something is wrong with what they are reading.

Photo 2.1 Herbert Spencer, whose work on bias is relevant today and exposes the modern tendency to think that serious criticism began only a few years ago.

1. This is available for free in electronic form from The Liberty Fund. It may be downloaded from http://oll.libertyfund.org/?option=com_staticxt&staticfile=show.php%3Ftitle=1335. The main website of The Liberty Fund is found at.

To show how deep the problem of bias runs in religious studies textbooks, we will survey a number of different works from the late 1960s to the present. Ninian Smart, in his popular The Religious Experience of Mankind (1969), devoted exactly 5 out of 576 pages to a consideration of African religion, while the British writer Trevor Ling, in A History of Religions East and West (1979), managed to avoid the discussion of African religions altogether. Robert S. Ellwood, in Many Peoples, Many Faiths (1982), and David S. Noss, in The World's Religions (1984), make no mention of African religions, nor, more recently, do John L. Esposito, Darrell J. Fasching, and Todd Lewis, in their 550-page World Religions Today (2002). In Willard G.
Oxtoby's massive two-volume, 1100-page edited work, World Religions (2002), only 4 pages are devoted to African religions, with another 4 pages to African religions in the Americas. Yet 6 pages are devoted to Baha'i and 5 to the New Age. Warren Matthews is slightly better in his World Religions (2004), including 22 pages on African religions; yet of these he devotes 10 pages to ancient Egyptian religions, weakening his treatment of contemporary religions, especially those practiced south of the Sahara. In like manner, Christopher Partridge, in his edited work Introduction to World Religions (2005), treats African religions in a mere 8 pages, while devoting 14 to the Baha'i and 22 to the Zoroastrian tradition, even though it is so small as to be virtually extinct. More recently, Theodore M. Ludwig, in The Sacred Paths: Understanding the Religions of the World (2006), classifies African religions under the heading "Among Indigenous Peoples." In this section of his book he weaves African religions together with the native religions of Australia, North and South America, Indonesia, the South Pacific, and various other areas where he finds similar patterns of myth and ritual. Not to be outdone, Willard G. Oxtoby and Alan F. Segal, in their Concise Introduction to World Religions (2007), classify African religions under "Indigenous Religious Traditions," devoting only 17 of the section's 48 pages to Africa. Although in terms of space this is an improvement, it is hard to justify in terms of the sheer size and diversity of African religious traditions and clearly shows the insensitivity of scholars to this issue.

Books of readings containing sacred texts are no better. For example, Sacred Texts of the World: A Universal Anthology, edited by Ninian Smart and Richard D. Hecht, devotes a mere 5 out of 408 pages to things African. In Lessa and Vogt's classic anthropological Reader in Comparative Religion (1979), only 22 out of 488 pages are devoted to African religions.
The one minor exception to this almost total boycott of African religions by Western scholars is the 48 pages Whitfield Foy gives to the subject in his 725-page selection of readings for the British Open University entitled Man's Religious Quest (1978). But even there the attention is limited and disproportionate to that devoted to other traditions, such as Zoroastrianism, which receives 60 pages.

If one surveys academic journals in religious studies, one finds a dearth of articles on African religions and few reviews of books about Africa. The uninitiated might attribute this lack of attention to a lack of scholarship dealing with African religions. But this is not the case. When Hans-Jürgen Becken and Londa Shembe translated some of the works of Isaiah Shembe, the founder of one of the most important new religious movements in Africa, into English, the book received very few reviews even though it was published by an established academic press. This is all the more surprising because at the time of their publication the Shembe texts were the only English translation of the scriptures of a major contemporary African religious movement. Instead of being welcomed as the major breakthrough that they were, they were ignored by journals. Even Religious Studies Review, which is devoted to reviewing books on religion, pays virtually no attention to books on African religion and acts as though many of them do not exist.

One could go on providing example after example of the almost total neglect of African religious traditions in standard religious studies texts, but the examples cited make the point. Finally, before looking at the history of the study of African religions it is important to remember that all of the authors mentioned above would probably see themselves as liberal, or even very liberal, and none are even remotely racist.
Nevertheless, the ethos of religious studies in which they work has blinded them to the importance and complexity of African religious traditions.

... century. When Louis Henry Jordan published his book Comparative Religion2 in 1905, he provided the following details about the state of religion in the world: [table not preserved in this copy]. More recent figures show that over the twentieth century the relative distribution of world religions changed as follows: [table not preserved in this copy].

2. Louis Henry Jordan, Comparative Religion: Its Genesis and Growth (Edinburgh: T&T Clark, 1905).

From these charts it is clear that African religions form a significant segment of world religions. Further, while the number of people practicing traditional African religions fell over the century, the influence of African traditions on other religions, such as Christianity and African Islam, remains great. Therefore, there is no excuse for ignoring African religions in religious studies textbooks. Yet African religions are ignored, and they are usually treated in a very dismissive way when they are actually included in such texts or mentioned in academic journals.

3. Ninian Smart, The World's Religions (Cambridge: Cambridge Univ. Press, 1989).
4. This argument was briefly raised in my "African Religions: Some Recent and Lesser Known Works," Religion 20 (1990): 361-72. I also discussed it at greater length in my chapter "African Religions and the Nature of Religious Studies," in Religious Studies: Issues, Prospects and Proposals (Atlanta, Ga.: Scholars Press, 1991), 361-79, which was based on a conference paper given at the University of Manitoba in 1988.

... religions.5 Second, he notes that Indians direct their worship toward a large number of gods because God is described ...
as taking many forms, with the result that the numerous gods become manifestations of the One Divine Being. Yet, although similar practices may be observed in Africa, Smart says Africans enjoy a refracted theism, which he clearly considers an inferior form of religious consciousness.6 Third, Smart says Indians possess a mythic system with a thousand themes. On the other hand, equally rich African mythologies are reduced to myths of death and disorder, to which trickster myths are added, as though these three themes exhausted African mythic consciousness.7 Fourth, when discussing sacrifice in the Indian context, Smart tells us it is a central ritual which must be interpreted as part of a vast system of interrelated beliefs. But in the African context he dismisses sacrifice: "as elsewhere in the world, it is a gesture of communication with god."8 Recognizing that it was not Smart's intention to denigrate African religions, we must nevertheless observe that his use of words is unfortunate, suggesting very clearly that African ritual sacrifices are really not worth serious consideration because they simply duplicate things that happen more interestingly elsewhere. Fifth, Smart says Indian expressions of anthropomorphism represent a splendid act of imagination, but he sees African societies as possessing anthropomorphic religions, which in the context of his discussion appear rather limited and simplistic.9 Of course some people will object to these comparisons

Photo 2.4 Smart sees the many gods of India through the lens of Vedanta philosophy, which reduced them to one essential essence, but when he looks at African religions, he sees many different gods. This is simply inconsistent, because not all Hindus think that all the gods are the same god. The above picture, from a Hindu temple, is of a small altar with pictures. While it is permissible to look behind the gods of Hinduism to one God, the same courtesy ought to be extended to African religions.
... all-embracing reaffirmation of values, helped too by the interpretation of Aborigine religion created by writers on them, such as Mircea Eliade.12 Yet, for some reason, he feels that African societies are on the whole too small to be able to bear the full impact of modern social change.13 Seventh, while Smart acknowledges that Christianity had a long history in Africa and that dynamic Christian movements have developed on the African continent, he gives no hint that such church fathers as St. Augustine of Hippo and Tertullian were African, and in all probability Black. As a result the encounter between Christianity and African religion is seen as essentially a one-way transaction, with Africans adapting Christianity to their needs but not really influencing the outside world.14 Yet it can be argued that the impact of Africa on Christianity is as great as the impact of Christianity on Africa. For example, there is considerable evidence that the modern charismatic movement was of African origin and that without an appreciation of African culture one cannot really understand either classical or contemporary Christianity.

Taking these considerations into account, it is clear that in religious studies texts, like Smart's, African religions get a very raw deal. To understand this general neglect and disparagement of African religion in the West, we need to look at the treatment of Africa and Africans in European history and European thought generally.

... be naturally inferior to the whites.
"There never was a civilized nation of any other complexion than white."22 Similarly, while Jean-Jacques Rousseau (1712-1778) is remembered for his attack on slavery, it is forgotten that he also spoke quite freely of negroes and savages. In fact, when Rousseau's views are examined in detail, his assessment of the noble savage mirrors modern racism. In his essay "What Is the Origin of Inequality among Men, and Is It Authorized by Natural Law?"23 he wrote: "We should beware, therefore, of confounding the savage man with the men we have daily before our eyes. Nature treats all the animals left to her care with a predilection ... By becoming domesticate they lose half these advantages ... there is still a greater difference between savage and civilised man than between wild and tame beasts ..."24 These comments lead to the view that "... they [savages] go naked, have no dwellings, and lack all superfluities which we think so necessary ... Their children are slowly and with difficulty taught to walk."25 Such racist comments are followed by the observation that, "Solitary, indolent, and perpetually accompanied by danger, the savage cannot but be fond of sleep; his sleep too must be light, like that of the animals ... Such in general is the animal condition, and such, according to travellers, is that of most savage nations ..."26 And again: "Savage man ... must accordingly begin with purely animal functions ... being destitute of every species of intelligence ... his desires never go beyond his physical wants ... food, a female, and sleep."27 Moving from the savage in particular to people in general, Rousseau says, Everything seems removed from savage man ... He is so far from having the knowledge which is needful to make him want more, that he can have neither foresight nor curiosity ...
He has not understanding enough to wonder at the great miracles; nor is it in his mind that we can expect to find that philosophy man needs.28 After all of this, Rousseau makes it quite clear that his savage is no abstract entity but can be identified with Africans in particular.29

The philosopher Immanuel Kant (1724-1804) was more cautious in his essay "On the Different Races of Mankind."30 Nevertheless, he did appear to think that racial mixture was to be discouraged and laid a highly theoretical basis for segregation. With such biased philosophical judgements behind him, the later German philosopher Georg Wilhelm Friedrich Hegel (1770-1831) had no hesitation in saying, The peculiarly African character is difficult to comprehend ... In Negro life the characteristic point is the fact that

22. David Hume, Essays (London: Routledge and Sons, 1906), 152.
23. Jean-Jacques Rousseau, "A Discourse on a Subject Proposed by the Academy of Dijon: What Is the Origin of Inequality among Men, and Is It Authorized by Natural Law?" in The Social Contract and Discourses, trans. G. D. H. Cole, Everyman's Library 660 (1913; New York: Dutton, 1966), 165.
24. Ibid., 168.
25. Ibid.
26. Ibid., 169.
27. Ibid., 171.
28. Ibid., 172.
29. Ibid.
30. Immanuel Kant, Immanuel Kants Werke, Band 2, Vorkritische Schriften (Berlin: Bruno Cassirer, 1922).

Early-Nineteenth-Century European Reactions to India

31. Georg Wilhelm Friedrich Hegel, The Philosophy of History (New York: Willey Book Co., 1944).
32. Kenneth Ingham, Reformers in India (New York: Octagon Books, 1973), 154.
33. James Mill, The History of British India, abridged by William Thomas (Chicago: Univ. of Chicago Press, 1975), 137-89; Raghavan Iyer, ed., The Glass Curtain between Asia and Europe (London: Oxford Univ. Press, 1965), 211.
34. Hegel, Philosophy of History, 139-41, 155, 157-58, 167.
Yet by the mid-nineteenth century the outlook of many Europeans had changed, and India began to benefit from a growing appreciation of its religious and cultural heritage. Clearly, Kantian philosophy, Hegelian dialectics, and other, similar forms of philosophical idealism affected Western scholarly views of Indian religions,35 but Hegel's disciples, and thinkers such as Arthur Schopenhauer (1788-1860), who detested Hegel, were enthusiastic about Indian thought.36

Africa Abandoned

No parallel appreciation of African values developed during the nineteenth century. In fact, if anything, the descriptions of Africa and Africans written by European writers caused Black Africans to sink lower and lower on the scale of humanity.37 Once again, it would be easy to explain this devaluation of African life in terms of its primitive state as compared to the richness of Indian culture, especially Indian philosophy. Such an explanation overlooks the fact that American Indians and similar groups did not suffer the same negative reactions by nineteenth-century writers as Africans did.38 Therefore it is increasingly difficult not to see an element of racism in the neglect of African religions.39 The truth is that the more one probes the treatment of Blacks and Black religions by Western scholars, the more disturbing the issue becomes.40

... religions altogether.43 Later still (1883), James Clarke's Ten Great Religions shows a typical disrespect for African religions. Unlike modern writers, Clarke does not hesitate to tell his readers, The negroes of Africa have been charged with all sorts of vices and crimes ... But it must be remembered that the negroes of whom we have usually heard have been for centuries corrupted by the slave-traders ... Travellers who have penetrated the interior ... have met with warm hospitality ...
They have, in short, found the rudimentary forms of the kingly and queenly virtues of truth and love, justice and mercy, united in the hearts of these benighted heathens ... Such are the virtues which already appear in primitive man, rudimentary virtues, indeed ...44

No wonder that by the time of the World Congress of Religions, in 1893, African religions had completely disappeared from the vision of progressive scholars. As a result the proceedings of the Congress give no attention whatsoever to African religions. Early-twentieth-century descriptions of African religions are equally prejudiced. Edwin W. Smith, for example, in his tellingly entitled book The Religion of Lower Races, as Illustrated by the African Bantu, describes African religion as elementary and a religion of fear.45

Clearly, the neglect of African religion and religions has a long history. Modern textbooks, which almost totally neglect African religion, are simply continuing a two-hundred-year tradition deeply rooted in European racism. Consequently, when students read popular textbook accounts of African religion or encounter its almost total neglect, they quickly form the opinion that African religions are unworthy of serious study. Thus existing textbooks confirm old prejudices and lead to the further neglect of Africa by anyone interested in the serious study of religion.

Photo 2.11 The proceedings of the World Congress of Religions, held in Chicago in 1893. It claimed to represent all the religions of the world, but totally ignored African religions.

... based upon oral traditions, were translated into first Latin and then German and English during the early part of the nineteenth century, no similar translations were made of African traditions. Indeed, often all that Western scholars knew about African religions were sensational accounts of primitive practices by traders and missionaries.
That Africa could have its own epics that might rival the Mahabharata, and that apparently irrational behavior, such as witchcraft, might have a logical basis simply did not occur to nineteenth-century scholars.46 One need not argue that African epics are better or worse than Indian epics. All that needs to be recognized is that in the nineteenth century very few people, in Europe at least, took African oral traditions seriously.47

Indian religions, on the other hand, attained a respectability never attained by African faiths.48 While James Mill could see Indian rituals as essentially expressions of barbaric superstitions, scholars studying Indian beliefs slowly began to recognize an underlying order behind the rituals. Indologists therefore began to attribute meaning to these apparently meaningless acts, thus weakening Mill's arguments.49 Later intellectual movements like Vedanta and, at a more popular level, Theosophy, which was founded in America in the 1870s by Helena Petrovna Blavatsky (1831-1891), allowed even the crudest ritual acts to be reinterpreted in sophisticated ways. But this is not all. The very fact of interpretation led to further refinement and produced schools of apologists who saw in Indian religions an alternative to the spiritual bankruptcy of the West.50 That the Buddhism of C. A. F. Rhys-Davids (1857-1942) is far removed from Buddhism as actually practiced by traditional Buddhists is unimportant.51 To rephrase the well-known comment by Karl Barth (1886-1968) on the famous German liberal theologian Adolf von Harnack (1851-1930), in the nineteenth century Western orientalists looked deep and long into the well of Indian spirituality and saw their own reflection. One result was the development of what we now know as religious studies, which highly prizes Indian religions while almost totally disregarding African religions.
Conclusion

The examples presented in this chapter are so gross that it seems unbelievable that Black Americans are not protesting loud and clear about the prejudice found in religious studies texts. Yet they are not. This is probably because these prejudices are so deep and appear so scholarly that no one really notices them. Instead they lie on the edge of the reader's consciousness. Yet once the way African religions are treated in textbooks is realized, it becomes possible to look out for similar biases elsewhere, and then it quickly becomes clear that textbooks are full of bias and prejudice. We all notice these things at a subliminal level, but few of us really trust our own judgment enough to point out the biased nature of textbooks. Yet this is what we all must learn to do if scholarship in religious studies is to advance. So the task is now handed over to you the reader. What biases can you find in the textbooks you read?

Part 2
African Traditions

Chapter 3
African Religious Traditions

Introduction

Although scholars may disagree about the exact nature of various religious traditions, there is general agreement in religious studies textbooks about the existence of what may be called the Great Traditions, or World Religions. These are usually listed as Buddhism, Christianity, Confucianism, Hinduism, Islam, and Judaism, all of which have long histories and written texts. Apart from these major traditions there are numerous smaller religious traditions that are sometimes called traditional or primal religions because they are usually found in non-Western societies that lack written scriptures. Probably the greatest concentration of this type of religion is found in Africa. Writing about African religions is like writing about European religions or Indian religions.
There are many very different African religious traditions; therefore, it is impossible to speak about African religion without qualification. Here I offer an overview of some common features of many different African religions. These shared beliefs and practices, it should be noted, are often found in other traditional religions throughout the world and are not exclusively African, although they take on a particular form in Africa.1 Such religions lack written scriptures and recorded histories and often share a belief in evil power identified with sorcery or witchcraft, specialized healers, psychic events, and the importance of ancestors. Recognizing the similarity between such religions, John Taylor identified them as primal religions, because in his view they draw on deep-rooted primal, or basic, experiences common to all humans, experiences capable of being formed into coherent ways of seeing the world. Although often very different from each other in detail, Taylor argued, African religious traditions share many common features that can be seen as a worldview which may be identified as the primal vision.2

Map 3.1 Map of Africa showing today's political divisions and the cities of Cape Town, Durban, Dar es Salaam, Nairobi, Kampala, Lagos, and Cairo. The borders of today's African states are outlined, and the name of each state is given. Looking at this map, one gets an idea of the vastness of Africa in relation to Europe and the rest of the world.

The study of African religions is difficult also because of the relative lack of interest in the topic among religious studies scholars, as was shown in the last chapter. Consequently, at present we simply cannot write an introductory section, like the one on the yogic tradition, outlining the history of African religions because at present the material for such a chapter does not exist. As pointed out earlier, this is a result of the bias against things African in both scholarly discourse and popular culture. Hopefully, it will change in the future.

* Shortly before his final illness and death, Fred Welbourn and I planned to rewrite Atoms and Ancestors. For a variety of reasons this was never done. This and the following chapter make extensive use of Welbourn's work.
1. For a critique of European scholarship on African religions, see Okot p'Bitek, African Religions in Western Scholarship (Nairobi: East African Literature Bureau, n.d.). Although it was written over thirty years ago, little has changed since then in the area of religious studies.
2. John V. Taylor, The Primal Vision: Christian Presence amid African Religion (London: SCM Press, 1963).
Photo 3.1 This famous painting, The Monk by the Sea (1808–1810) by Caspar David Friedrich (1774–1840), captures the essence of the power of the primal over the human being. The monk standing on the sea shore contemplating the vastness of the ocean and the sky evokes a sense of the finitude of life. The original may be seen in the Old National Gallery, Unter den Linden, Berlin, Germany.

Primal experiences are important for African religious movements because they affirm the reality of traditional mythologies and the foundation myths of new religious movements like the amaNazaretha. Before a person has a primal experience, he or she may view the traditional mythology, or myths of a particular new religious movement, as unbelievable fairy tales which only uneducated traditionalists believe. Following a primal experience, the old ways, or teachings, of a new religion become a reality. As it turns out, primal experiences are remarkably common among humans. In the 1970s David Hay became interested in the phenomenon when some postgraduate students at the University of Nottingham, England, responding to a social survey, admitted that they had had primal experiences that profoundly affected their outlook. The majority of these students said that they had no adequate explanation for their experience and would welcome one. Following this initial survey, Hay and Ann Morisy arranged a statistically valid national survey of the British population. In this more qualified survey they found that 36.4 percent of those included in the random sample reported having had such experiences. Significantly, 45 percent of those who had these experiences had no real contact with churches or organized religions.3 In a national survey in the United States, some 30 percent of Americans responded positively to questions about primal experiences.
A much higher figure was obtained by Robert Wuthnow in his survey of the San Francisco Bay area population. There Wuthnow's positive

Figure 3.1 Religious Traditions and Primal Experiences. Primal religions are those that generally lack strong written traditions or rigidly organized priesthoods. Instead they rely on direct experiences of the supernatural, which, it must be stressed, is always seen as a natural continuation of this life. (The diagram links primal traditions, such as African religions, Confucianism, shamanism, new religions, and revitalization movements, with primal experiences such as prophecies, healings, revelations, miracles, voices, and ghosts.)

3. David Hay, Reports of Religious Experiences by a Group of Postgraduate Students: A Pilot Study, and Religious Experiences among a Group of Postgraduate Students: A Qualitative Survey, unpublished papers presented at the Colloquium on Psychology and Religion, Lancaster University, 1975; David Hay and Ann Morisy, Reports of Ecstatic Paranormal or Religious Experiences in Great Britain and the United States: A Comparison of Trends, Journal for the Scientific Study of Religion 1/7 (1978): 255–65.

Understanding Traditional Societies

Explaining Primal Experiences

In his book Atoms and Ancestors,8 Fred Welbourn pointed out that until recently few Europeans or North Americans had seen bacteria. In fact, he argued, we rarely think about bacteria unless there is an outbreak of disease that we believe they have caused. It is equally significant that even today many people do not realize that bacteria not only cause illness, but are also essential to healthy organic life. The truth is that if we have to think about bacteria at all, whether we want to know how to kill the malign variety that cause dysentery, or to increase their benign activity in a compost heap, we do so because we see them at work.
Normally, however, we simply take the existence of bacteria on trust. Yet for the last hundred years most people in the West, if challenged, would have said that bacteria are a natural and inescapable part of life. They are something which pervades our environment, yet which we normally do not think about and only very rarely see. Only when they begin to cause problems do we consult specialists who can heal illness or tell us why our compost heap is not working properly. Similarly, people living in traditional societies, like the Zulu in South Africa, often claim to have seen an Ancestor, or what in the West we call a ghost. Among the Zulu, and many other African groups, Ancestors are not expected to be seen. They work in other ways and are experienced as part of everyday life. Yet they are rarely thought about unless, like bacteria, they begin to cause problems. Among the Ganda, sometimes called the Baganda, of Uganda, custom directs that shrines to the dead should be tended regularly, yet they are usually neglected unless the ghost of the deceased causes trouble. Likewise, people in Zulu homesteads only think about deceased Ancestors when problems arise in daily life. The benign activity of ghosts, or Ancestors, as both the Ganda and Zulu call them, is taken for granted in most traditional societies. Therefore, Ancestors are hardly mentioned except on very special occasions. When an outbreak of disease occurs or misfortunes continually arise, a specialist is consulted.

7. William Shakespeare, Hamlet, 1.5.166–67.
8. Fred B. Welbourn, Atoms and Ancestors (London: Edwin Arnold, 1968). This and the next chapter are a revision of Welbourn's work, as agreed with him before he died. Welbourn's East African Rebels (London: SCM Press, 1961) is a classic study of African Independent Churches that also throws light on traditional religions.
Like the Western medical specialist or horticultural expert, Africans who know how to communicate with the Ancestors to bring healing or good fortune are highly prized. They are specialists, or people with a calling who have entered their profession after a long training and many years of study. And as with the Western specialist, the proof of the pudding is in the eating. If health and good fortune are restored, then it is clear that the specialist has placated the Ancestors, whose activity is beyond doubt. Sometimes, however, Ancestors are experienced in a much more frightening way. When interviewed by the author, Estelle Nxele, a Zulu woman in Natal, South Africa, described her encounter with them as follows: In nineteen sixty-six my spirit came up very strong. At work I used to have a bad, sharp headache. In one minute, it would come up like a balloon. Just like a balloon. I couldn't see. The doctor was frightened to give me an injection, gave me pills for pain. They sent me home and I slept. The next day, I was all right. Then when I was sleeping here, I could hear people talking, but I was sleeping like a dream. I used to see them when I was sleeping. They talked to me, my grandfathers and my granny too. This is how you, you're going to help people. That is what they told me. As a result of this and many other very frightening experiences, Estelle eventually sought the services of a specialist who could deal with her African disease. It got its name from the fact that it had defied the efforts of Western doctors to find a cure.

Photo 3.5 Estelle Nxele dressed in preparation for a healing ceremony at her home near Durban, South Africa.

Even though she was a baptized Christian, she went to a traditional Zulu diviner. The diviner explained that Estelle had received a call from the Ancestors and that in consequence she must not wear Western shoes or enter a church building. She also had to undergo years of rigorous training before she too became a well-respected diviner who practiced the old ways of healing through communication with the Ancestors. On the other hand, she could remain a Christian provided she did not go into a church building, which the Ancestors found frightening. Despite this restriction on Estelle's behavior, her children and grandchildren were encouraged to go to church.

Photo 3.6 Sociologist Émile Durkheim made the sacred–secular distinction basic to his definition of religion. In doing so he appealed to the religion of Australian Aborigines, even though he had never visited Australia or had direct contact with Aborigines. As a result, he constructed an appealing theory on the basis of secondhand observations and theoretical speculation.

approach is essentially similar to that of Eliade, who clearly states in the introduction to his Shamanism: Archaic Techniques of Ecstasy 20 that he is not interested in history as practiced by historians but rather in the history of religion,21 which he defines as a hierophany realized at a certain historical moment [which] is structurally equivalent to a hierophany a thousand years earlier or later.22 The problem with all of these approaches is that they impose a preexisting theory on the empirical data. Durkheim, Otto, van der Leeuw, and Eliade knew what they would find with regard to the sacred before they ever opened a book to prove their theories. And none of them, despite Durkheim's reputation as a sociologist, did fieldwork. Commenting on Durkheim's study, the great British social anthropologist E. E. Evans-Pritchard writes, Durkheim's theory is more than just neat; it is brilliant and imaginative, almost poetical ...
While various logical and philosophical objections could be raised, I would rather base the case for the prosecution on ethnographical evidence. Does this support the rigid dichotomy he makes between the sacred and profane? I doubt it. Surely what he calls sacred and profane are on the same level of experience, and, far from being cut off from one another, they are so closely intermingled as to be inseparable.23 What is more, Evans-Pritchard points to Durkheim's selective use of Australian evidence and his clear misunderstanding of how sacred objects are treated in practice. Jim Bellis illustrated this issue very well in relation to the sacred drums of the Ashanti of West Africa. According to many accounts, the Ashanti regard their drums with awe because the Ancestors speak through them. But Bellis points out that in everyday life the drums are neglected and often treated quite badly until the occasion arises when they are needed. Then, and only then, do they become objects of power. As soon as communication with the Ancestors is broken, however, the drums revert to their former low status. An extreme example of Ashanti disregard for the sacred nature of the drums is found in the following story.24 During a battle between the Ashanti and another group of warriors, a group of Ashanti were cut off from the main force. Things looked desperate, so the Ashanti used their drums to appeal for help from the Ancestors. When no help was forthcoming, the Ashanti urinated over the drums to show their contempt for the stubborn refusal of their Ancestors to come to their aid; then the warriors fought their way through enemy lines to rejoin the main force. Such coarse treatment of sacred objects does not fit any model created by Durkheim, Otto, van der Leeuw, or Eliade.
From these comments it seems safe to conclude that if the reality of African religion contradicts some pet theories current in religious studies departments, it is because those theories are flawed in their very essence. The problem, simply stated, is that Durkheim, Otto, van der Leeuw, and Eliade were armchair theorists. They analyzed written texts. But to study African religions meaningfully we must move beyond the text to the life experience of living people.

20. Ibid., xvi.
21. Ibid., xvii.
22. E. E. Evans-Pritchard, Theories of Primitive Religion (Oxford: Oxford Univ. Press, 1965), 64–65.
23. The preceding passage and following story are based on a lecture given by Bellis during his visit to the University of Calgary in 1988.
24. Note that traditional African cosmologies refer to God as him or it; they do not see God in feminine terms. See, e.g., Gabriel M. Setiloane, The Image of God among the Sotho-Tswana (Rotterdam: Balkema, 1976).

Patterns of Power

At other times the power is inherent. Among the Buganda a man who killed the animal after which his clan is named was believed to have killed his clan totem, and when he died immediately afterward, his death was regarded as punishment. If a pregnant woman laughed at a lame person, her child would be born lame. If a sheep, a goat, or a dog got onto the roof of a house, the inhabitants would leave it at once, saying it was unlucky to live there. All these things were taboo and in many ways reflect similar folk traditions in Europe and America. Traditional Africans describe this power as an all-pervasive psychic force behaving very much like electricity is believed to behave in our society. People and things which are positively charged with power can pass it on by contact to anyone who is negatively charged. Unless this process is properly controlled, damage will result.
A positively

This analogy illustrates the way traditional Africans and other traditional peoples understand the power which operates through their Ancestors, ghosts, and the spirits. Such power underlies all life, but common sense normally disregards it. When out-of-the-way things happen, or when a person needs special power for a particular purpose, for example, to deal with misfortune or to seek unusual success, he or she becomes aware, as we become aware of electricity, of something which he or she believes to be around him or her and available all the time.

Photo 3.10 A picture of the computer screen linked to the computer on which this book was written. We all know how computers work. But how many people really know how they work? Very few people understand the complex physics and mathematical calculations that make computers possible.

To appreciate what this means in practice, it is necessary to recognize that when ghost stories are told in Western society, and we must remember that there are sane and intelligent Westerners who believe in ghosts, there is usually an atmosphere of horror. In the West ghosts are usually malicious. But in many traditional societies, ghosts are felt to be an integral part of society, deeply concerned for its welfare, interfering, it is true, if they do not receive the attention which is their due, but expecting to play their part in its smooth running. Most traditional African, and other, ghosts carry over into the next world the characteristics that they acquired in this. Thus, a man caught thieving might ask to be killed rather than have his hand cut off, lest he should enter maimed into the world of ghosts.

Photo 3.11 Marley's ghost is probably the best-known ghost in English literature, with the possible exception of the ghost of Hamlet's father. What do these stories tell us about European attitudes to ghosts?

Therefore, among the Ganda, it was not surprising that the ghost of a paternal aunt, always in life an
Therefore, among the Ganda, it was not surprising that the ghost of a paternal aunt, always in life an Photo 3.11 Marleys ghost is probably the best-known ghost in English literature, with the possible exception of the ghost of Hamlets father. What do these stories tell us about European attitudes to ghosts? 0310259442_understanding_int_CS4.indd 62 2/1/11 10:15 AM 63 oppressive, authoritarian figure, was frequently thought to be the cause of sickness. Among the Zulu the ghosts of fathers and grandfathers, who in life expected respect, also became angry if their memory and wishes were not respected. In a similar vein, special precautions were taken to avoid the ghosts of people with abnormalities and people who had been social misfits. Ghosts, in other words, easily become the source of danger and evil. 0310259442_understanding_int_CS4.indd 63 2/1/11 10:15 AM 64 African Traditions the past, Pretorius and the doctors at the hospital acknowledged the value of local healers and traditional medicines. They also conceded the psychological value of visits to traditional healers. Therefore, they embarked on a campaign to win over the healers through mutual respect and cooperation. The essence of this campaign was to admit that African healers could help TB victims with psychic problems caused by the anger of ghosts, but that the white doctors were able to cure the symptoms. This approach to a deadly disease worked remarkably well, as the healers learned to identify the symptoms of TB and send their patients to the hospital for further help. For this approach to work, the docPhoto 3.12 An African patient with severe TB is exam ined by a Dutch doctor at Madlawani Hospital, Transkei, tors had to set aside their skepticism about South Africa, in 1974. The scars on the mans skin were ghosts and other psychic forces. Instead they made over many months as the man was treated by an allowed their patients to believe whatever African healer. 
In this case the delay in his receiving West they wanted about the ultimate cause of ern medical attention created a grave risk to the mans their illnesses and concentrated on treating health. their medical causes. For most of the Africans who came for help, ghosts and psychic agents were the primary cause of their sickness. Yet even traditional healers admitted that although ghosts and other forces accounted for sickness, and their rituals freed patients from the power of such evil forces, there was still a need to cure the material expression of such attacks. It was here that Western medicine could be useful. To appreciate the implications of these ideas we must attempt to understand the role of witchcraft and sorcery in traditional societies. It is to this task we now turn. 0310259442_understanding_int_CS4.indd 64 2/1/11 10:15 AM
https://www.scribd.com/document/77233778/Understanding-World-Religions-by-Irving-Hexham
Cons - A Software Construction System

A guide and reference for version 2.

Cons uses file signatures to decide if a derived file is out-of-date and needs rebuilding. In essence, if the contents of a file change, or the manner in which the file is built changes, the file's signature changes as well. This allows Cons to decide with certainty when a file needs rebuilding, because Cons can detect, quickly and reliably, whether any of its dependency files have been changed. Cons uses the MD5 (Message Digest 5) algorithm to compute file signatures. The MD5 algorithm computes a strong cryptographic checksum for any given input string. Cons can, based on configuration, use two different MD5 signatures for a given file:

The content signature of a file is an MD5 checksum of the file's contents. Consequently, when the contents of a file change, its content signature changes as well.

The build signature of a file is a combined MD5 checksum of:

    - the signatures of all the input files used to build the file
    - the signatures of all dependency files discovered by source scanners (for example, .h files)
    - the signatures of all dependency files specified explicitly via the Depends method
    - the command-line string used to build the file

The build signature is, in effect, a digest of all the dependency information for the specified file. Consequently, a file's build signature changes whenever any part of its dependency information changes: a new file is added, the contents of a file on which it depends change, there's a change to the command line used to build the file (or any of its dependency files), etc.
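The two signature kinds can be modeled in a few lines. The sketch below is a Python illustration of the idea, not Cons's actual Perl implementation; in particular, how Cons concatenates signatures and command text before hashing is an internal detail, so the digests produced here will not match Cons's own.

```python
import hashlib

def content_signature(data: bytes) -> str:
    """Content signature: an MD5 checksum of the file's contents."""
    return hashlib.md5(data).hexdigest()

def build_signature(input_signatures, command_line: str) -> str:
    """Build signature: a combined digest of the signatures of every input
    and dependency file plus the command-line string used to build the file."""
    digest = hashlib.md5()
    for sig in input_signatures:
        digest.update(sig.encode("ascii"))
    digest.update(command_line.encode("ascii"))
    return digest.hexdigest()

# Changing any input's signature *or* the command line changes the result.
src_sig = content_signature(b"int main() { return 0; }\n")
sig_plain = build_signature([src_sig], "cc -c %< -o %>")
sig_optimized = build_signature([src_sig], "cc -O2 -c %< -o %>")
```

Because the command line is folded into the digest, sig_plain and sig_optimized differ even though the source file is unchanged, which is exactly why a changed build command forces a rebuild.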
For example, in the previous section, the build signature of the world.o file will include:

    - the signature of the world.c file
    - the signatures of any header files that Cons detects are included, directly or indirectly, by world.c
    - the text of the actual command line that was used to generate world.o

Similarly, the build signature of the libworld.a file will include all the signatures of its constituents (and hence, transitively, the signatures of their constituents), as well as the command line that created the file. Note that there is no need for a derived file to depend upon any particular Construct or Conscript file. If changes to these files affect a file, then this will be automatically reflected in its build signature, since relevant parts of the command line are included in the signature. Unrelated Construct or Conscript changes will have no effect.

Before Cons exits, it stores the calculated signatures for all of the files it built or examined in .consign files, one per directory. Cons uses this stored information on later invocations to decide if derived files need to be rebuilt. After the previous example was compiled, the .consign file in the build/peach/world directory looked like this:

    world.h:985533370 - d181712f2fdc07c1f05d97b16bfad904
    world.o:985533372 2a0f71e0766927c0532977b0d2158981
    world.c:985533370 - c712f77189307907f4189b5a7ab62ff3
    libworld.a:985533374 69e568fc5241d7d25be86d581e1fb6aa

After the file name and colon, the first number is a timestamp of the file's modification time (on UNIX systems, this is typically the number of seconds since January 1st, 1970). The second value is the build signature of the file (or "-" in the case of files with no build signature, that is, source files). The third value, if any, is the content signature of the file.

When Cons is deciding whether to build or rebuild a derived file, it first computes the file's current build signature. If the file doesn't exist, it must obviously be built.
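The .consign format, and the rebuild decision Cons bases on it (a missing file, a timestamp mismatch, or a build-signature mismatch all force a rebuild), can be sketched in Python. This is an illustrative model, not code from Cons itself; treating a file that has no .consign entry at all as needing a rebuild is an assumption added here for safety.

```python
def parse_consign(text):
    """Map each .consign entry to (timestamp, build_sig, content_sig).
    build_sig is None for source files (recorded as '-'); content_sig is
    None when no content signature was stored."""
    entries = {}
    for line in text.splitlines():
        if not line.strip():
            continue
        name, rest = line.split(":", 1)
        fields = rest.split()
        build_sig = None if fields[1] == "-" else fields[1]
        content_sig = fields[2] if len(fields) > 2 else None
        entries[name] = (int(fields[0]), build_sig, content_sig)
    return entries

def needs_rebuild(file_exists, file_mtime, current_build_sig, entry):
    """Rebuild when the file is missing, nothing is recorded for it, its
    timestamp disagrees with the recorded one, or, when the timestamps
    agree, the freshly computed build signature differs from the stored one."""
    if not file_exists or entry is None:
        return True
    stored_mtime, stored_build_sig, _ = entry
    if file_mtime != stored_mtime:
        return True
    return current_build_sig != stored_build_sig

sample = """\
world.h:985533370 - d181712f2fdc07c1f05d97b16bfad904
world.o:985533372 2a0f71e0766927c0532977b0d2158981
libworld.a:985533374 69e568fc5241d7d25be86d581e1fb6aa"""

entries = parse_consign(sample)
```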
If, however, the file already exists, Cons next compares the modification timestamp of the file against the timestamp value in the .consign file. If the timestamps match, Cons compares the newly-computed build signature against the build signature in the .consign file. If the timestamps do not match or the build signatures do not match, the derived file is rebuilt. After the file is built or rebuilt, Cons arranges to store the newly-computed build signature in the .consign file when it exits.

Cons provides a SourceSignature method that allows you to configure how the signature should be calculated for any source file when its signature is being used to decide if a dependent file is up-to-date. The arguments to the SourceSignature method consist of one or more pairs of strings:

    SourceSignature 'auto/*.c' => 'content',
                    '*'        => 'stored-content';

The first string in each pair is a pattern to match against source file path names. The pattern is a file-globbing pattern, not a Perl regular expression; the pattern *.l will match all Lex source files. The * wildcard will match across directory separators; the pattern foo/*.c would match all C source files in any subdirectory underneath the foo subdirectory. The second string in each pair contains one of the following keywords to specify how signatures should be calculated for source files that match the pattern. The available keywords are:

content
    Use the content signature of the source file when calculating signatures of files that depend on it. This guarantees correct calculation of the file's signature for all builds, by telling Cons to read the contents of a source file to calculate its content signature each time it is run.

stored-content
    Use the source file's content signature as stored in the .consign file, provided the file's timestamp matches the cached timestamp value in the .consign file.
This optimizes performance, with the slight risk of an incorrect build if a source file's contents have been changed so quickly after its previous update that the timestamp still matches the stored timestamp in the .consign file even though the contents have changed. The Cons default behavior of always calculating a source file's signature from the file's contents is equivalent to specifying:

    SourceSignature '*' => 'content';

The '*' will match all source files. The content keyword specifies that Cons will read the contents of a source file to calculate its signature each time it is run. A useful global performance optimization is:

    SourceSignature '*' => 'stored-content';

This specifies that Cons will use pre-computed content signatures from .consign files, when available, rather than re-calculating a signature from the source file's contents each time Cons is run. In practice, this is safe for most build situations, and only a problem when source files are changed automatically (by scripts, for example). The Cons default, however, errs on the side of guaranteeing a correct build in all situations. Cons tries to match source file path names against the patterns in the order they are specified in the SourceSignature arguments:

    SourceSignature '/usr/repository/objects/*' => 'stored-content',
                    '/usr/repository/*'         => 'content',
                    '*.y'                       => 'content',
                    '*'                         => 'stored-content';

In this example, all source files under the /usr/repository/objects directory will use .consign file content signatures, source files anywhere else underneath /usr/repository will not use .consign signature values, all Yacc source files (*.y) anywhere else will not use .consign signature values, and any other source file will use .consign signature values.

Cons provides a SIGNATURE construction variable that allows you to configure how signatures are calculated for any derived file when its signature is being used to decide if a dependent file is up-to-date.
The value of the SIGNATURE construction variable is a Perl array reference that holds one or more pairs of strings, like the arguments to the SourceSignature method. The first string in each pair is a pattern to match against derived file path names. The pattern is a file-globbing pattern, not a Perl regular expression; the pattern *.obj will match all (Win32) object files. The * wildcard will match across directory separators; the pattern foo/*.a would match all (UNIX) library archives in any subdirectory underneath the foo subdirectory. The second string in each pair contains one of the following keywords to specify how signatures should be calculated for derived files that match the pattern. The available keywords are the same as for the SourceSignature method, with an additional keyword:

build
    Use the build signature of the derived file when calculating signatures of files that depend on it. This guarantees correct builds by forcing Cons to rebuild any and all files that depend on the derived file.

content
    Use the content signature of the derived file when calculating signatures of files that depend on it. This guarantees correct calculation of the file's signature for all builds, by telling Cons to read the contents of a derived file to calculate its content signature each time it is run.

stored-content
    Use the derived file's content signature as stored in the .consign file, provided the file's timestamp matches the cached timestamp value in the .consign file. This optimizes performance, with the slight risk of an incorrect build if a derived file's contents have been changed so quickly after a Cons build that the file's timestamp still matches the stored timestamp in the .consign file.

The Cons default behavior (as previously described) for using derived-file signatures is equivalent to:

    $env = new cons(SIGNATURE => ['*' => 'build']);

The * will match all derived files.
The build keyword specifies that all derived files' build signatures will be used when calculating whether a dependent file is up-to-date. A useful alternative default SIGNATURE configuration for many sites:

    $env = new cons(SIGNATURE => ['*' => 'content']);

In this configuration, derived files have their signatures calculated from the file contents. This adds slightly to Cons' workload, but has the useful effect of "stopping" further rebuilds if a derived file is rebuilt to exactly the same file contents as before, which usually outweighs the additional computation Cons must perform. For example, changing a comment in a C file and recompiling should generate the exact same object file (assuming the compiler doesn't insert a timestamp in the object file's header). In that case, specifying content or stored-content for the signature calculation will cause Cons to recognize that the object file did not actually change as a result of being rebuilt, and libraries or programs that include the object file will not be rebuilt. When build is specified, however, Cons will only "know" that the object file was rebuilt, and proceed to rebuild any additional files that include the object file. Note that Cons tries to match derived file path names against the patterns in the order they are specified in the SIGNATURE array reference:

    $env = new cons(SIGNATURE => ['foo/*.o' => 'build',
                                  '*.o'     => 'content',
                                  '*.a'     => 'cache-content',
                                  '*'       => 'content']);

In this example, all object files underneath the foo subdirectory will use build signatures, all other object files (including object files underneath other subdirectories!) will use .consign file content signatures, libraries will use .consign file build signatures, and all other derived files will use content signatures.
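The first-match scan over these glob patterns behaves much like Python's fnmatch, whose * also matches across directory separators. The sketch below is an illustration of the lookup order, not Cons internals, using the SIGNATURE example above (the 'cache-content' keyword string is kept verbatim from that example):

```python
from fnmatch import fnmatchcase

def signature_keyword(path, rules):
    """Return the keyword paired with the first pattern that matches path,
    scanning the rules in order; None if nothing matches."""
    for pattern, keyword in rules:
        if fnmatchcase(path, pattern):
            return keyword
    return None

# Rules taken from the SIGNATURE example above, in the same order.
rules = [("foo/*.o", "build"),
         ("*.o", "content"),
         ("*.a", "cache-content"),
         ("*", "content")]
```

Ordering matters: foo/bar.o is caught by the first pattern, while bar/baz.o falls through to *.o because the earlier, more specific pattern does not match it.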
The only other package (currently) available is an md5::debug package that prints out detailed information about the MD5 signature calculations performed by Cons:

    % cons -S md5::debug hello
    sig::md5::srcsig(hello.c) => |52d891204c62fe93ecb95281e1571938|
    sig::md5::collect(52d891204c62fe93ecb95281e1571938) => |fb0660af4002c40461a2f01fbb5ffd03|
    sig::md5::collect(52d891204c62fe93ecb95281e1571938, fb0660af4002c40461a2f01fbb5ffd03, cc -c %< -o %>) => |f7128da6c3fe3c377dc22ade70647b39|
    sig::md5::current(|| eq |f7128da6c3fe3c377dc22ade70647b39|)
    cc -c hello.c -o hello.o
    sig::md5::collect() => |d41d8cd98f00b204e9800998ecf8427e|
    sig::md5::collect(f7128da6c3fe3c377dc22ade70647b39, d41d8cd98f00b204e9800998ecf8427e, cc -o %> %< ) => |a0bdce7fd09e0350e7efbbdb043a00b0|
    sig::md5::current(|| eq |a0bdce7fd09e0350e7efbbdb043a00b0|)

In order to shorten the command lines as much as possible, Cons will remove -I flags for any directories, locally or in the repositories, which do not actually exist. (Note that the -I flags are not included in the MD5 signature calculation for the target file, so the target will not be recompiled if the compilation command changes due to a directory coming into existence.)

Because Cons relies on the compiler's -I flags to communicate the order in which repository directories must be searched, Cons' handling of repository directories is fundamentally incompatible with using double-quotes on #include directives in any C source (.c or .h) files that you plan to modify locally; use angle brackets instead:

    #include <file.h>  /* USE ANGLE-BRACKETS INSTEAD */

Code that will not change can still safely use double quotes on #include lines.

As previously mentioned, a construction environment is an object that has a set of keyword/value pairs and a set of methods, and which is used to tell Cons how target files should be built.
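The srcsig/collect calls in the trace above can be mimicked with any MD5 library. This Python sketch borrows the names from the trace; exactly how Cons joins multiple signatures before hashing is an assumption here, but the trace's one directly checkable value holds: collect with no arguments yields the MD5 of the empty string.

```python
import hashlib

def srcsig(content: bytes) -> str:
    # the signature of a source file is the MD5 of its contents
    return hashlib.md5(content).hexdigest()

def collect(*sigs: str) -> str:
    # combine any number of signatures into one by hashing their
    # concatenation (the separator Cons really uses is an assumption)
    return hashlib.md5(''.join(sigs).encode()).hexdigest()
```

Note how collect() with no arguments reproduces the |d41d8cd98f00b204e9800998ecf8427e| value printed in the trace before the link step.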
This section describes how Cons uses and expands construction environment values to control its build behavior. Construction variables from a construction environment are expanded by preceding the keyword with a % (percent sign): Construction variables: XYZZY => 'abracadabra', The string: "The magic word is: %XYZZY!" expands to: "The magic word is: abracadabra!" A construction variable name may be surrounded by { and } (curly braces), which are stripped as part of the expansion. This can sometimes be necessary to separate a variable expansion from trailing alphanumeric characters: Construction variables: OPT => 'value1', OPTION => 'value2', The string: "%OPT %{OPT}ION %OPTION %{OPTION}" expands to: "value1 value1ION value2 value2" Construction variable expansion is recursive--that is, a string containing %-expansions after substitution will be re-expanded until no further substitutions can be made: Construction variables: STRING => 'The result is: %FOO', FOO => '%BAR', BAR => 'final value', The string: "The string says: %STRING" expands to: "The string says: The result is: final value" If a construction variable is not defined in an environment, then the null string is substituted: Construction variables: FOO => 'value1', BAR => 'value2', The string: "%FOO <%NO_VARIABLE> %BAR" expands to: "value1 <> value2" A doubled %% will be replaced by a single %: The string: "Here is a percent sign: %%" expands to: "Here is a percent sign: %" When you specify no arguments when creating a new construction environment: $env = new cons(); Cons creates a reference to a new, default construction environment. This contains a number of construction variables and some methods. 
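The expansion rules above (simple %VAR, braced %{VAR}, recursion, the empty string for undefined variables, and %% for a literal percent sign) can be modeled compactly. The sketch below is Python, not Cons code; it is only an illustration of the stated semantics:

```python
import re

def expand(s, env):
    """Expand %VAR and %{VAR} from env, recursively, per the rules above.

    Undefined variables become the empty string; %% yields a literal %.
    (No cycle detection -- a self-referential variable would loop.)
    """
    pattern = re.compile(r'%(%|\{(\w+)\}|(\w+))')

    def substitute(match):
        if match.group(1) == '%':
            return '\x00'                      # protect literal percent
        name = match.group(2) or match.group(3)
        return str(env.get(name, ''))          # undefined -> empty string

    previous = None
    while previous != s:                       # re-expand until stable
        previous, s = s, pattern.sub(substitute, s)
    return s.replace('\x00', '%')
```

Run against the examples in this section, it produces exactly the expansions shown.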
At the present writing, the default construction variables on a UNIX system are:

    CC => 'cc',
    CFLAGS => '',
    CCCOM => '%CC %CFLAGS %_IFLAGS -c %< -o %>',
    CXX => '%CC',
    CXXFLAGS => '%CFLAGS',
    CXXCOM => '%CXX %CXXFLAGS %_IFLAGS -c %< -o %>',
    INCDIRPREFIX => '-I',
    INCDIRSUFFIX => '',
    LINK => '%CXX',
    LINKCOM => '%LINK %LDFLAGS -o %> %< %_LDIRS %LIBS',
    LINKMODULECOM => '%LD -r -o %> %<',
    LIBDIRPREFIX => '-L',
    LIBDIRSUFFIX => '',
    AR => 'ar',
    ARFLAGS => 'r',
    ARCOM => ['%AR %ARFLAGS %> %<', '%RANLIB %>'],
    RANLIB => 'ranlib',
    AS => 'as',
    ASFLAGS => '',
    ASCOM => '%AS %ASFLAGS %< -o %>',
    LD => 'ld',
    LDFLAGS => '',
    PREFLIB => 'lib',
    SUFLIB => '.a',
    SUFLIBS => '.so:.a',
    SUFOBJ => '.o',
    SIGNATURE => [ '*' => 'build' ],
    ENV => { 'PATH' => '/bin:/usr/bin' },

And on a Win32 system (Windows NT), the default construction variables are (unless the default rule style is set using the DefaultRules method):

    CC => 'cl',
    CFLAGS => '/nologo',
    CCCOM => '%CC %CFLAGS %_IFLAGS /c %< /Fo%>',
    CXXCOM => '%CXX %CXXFLAGS %_IFLAGS /c %< /Fo%>',
    INCDIRPREFIX => '/I',
    INCDIRSUFFIX => '',
    LINK => 'link',
    LINKCOM => '%LINK %LDFLAGS /out:%> %< %_LDIRS %LIBS',
    LINKMODULECOM => '%LD /r /o %> %<',
    LIBDIRPREFIX => '/LIBPATH:',
    LIBDIRSUFFIX => '',
    AR => 'lib',
    ARFLAGS => '/nologo ',
    ARCOM => "%AR %ARFLAGS /out:%> %<",
    RANLIB => '',
    LD => 'link',
    LDFLAGS => '/nologo ',
    PREFLIB => '',
    SUFEXE => '.exe',
    SUFLIB => '.lib',
    SUFLIBS => '.dll:.lib',
    SUFOBJ => '.obj',
    SIGNATURE => [ '*' => 'build' ],

A construction environment also provides a number of default construction methods. One of the simplest is the Objects method:

    Objects $env 'foo.c', 'bar.c';

This will arrange to produce, if necessary, foo.o and bar.o. The command invoked is simply %CCCOM, which expands, through substitution, to the appropriate external command required to build each object. The substitution rules will be discussed in detail in the next section.

The INCDIRPREFIX and INCDIRSUFFIX variables specify option strings to be appended to the beginning and end, respectively, of each include directory so that the compiler knows where to find .h files.
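The role of the INCDIRPREFIX/INCDIRSUFFIX (and the analogous LIBDIRPREFIX/LIBDIRSUFFIX) pairs can be illustrated with a tiny helper. This Python sketch is not how Cons builds its internal %_IFLAGS and %_LDIRS variables; it is just a model of the prefix/suffix idea, using the default values from the listings above:

```python
def dir_flags(dirs, prefix, suffix=''):
    # wrap each directory in the toolchain-specific option syntax
    return ' '.join(f'{prefix}{d}{suffix}' for d in dirs)
```

The same directory list yields different option strings depending only on the configured prefix.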
Similarly, the LIBDIRPREFIX and LIBDIRSUFFIX variables specify the option strings to be appended to the beginning and end, respectively, of each library directory so that the linker knows where to find libraries.

Within a construction command, construction variables will be expanded according to the rules described above. In addition to normal variable expansion from the construction environment, construction commands also expand the following pseudo-variables to insert the specific input and output files in the command line that will be executed:

%>
    The target file name. (In a multi-target command, this expands to the first target mentioned.)

%0
    Same as %>.

%1, %2, ..., %9
    These refer to the first through ninth input file, respectively.

%<
    The full set of input file names. If any of these have been used anywhere else in the current command line (via %1, %2, etc.), then they are omitted from the list produced by %<.

There are additional % elements which affect the command line(s). Cons includes the text of the command line in the MD5 signature for a build, so that targets get rebuilt if you change the command line (to add or remove an option, for example). Command-line text in between %( and %), however, will be ignored for MD5 signature calculation. Internally, Cons uses %( and %) around include and library directory options ( -I and -L on UNIX systems, /I and /LIBPATH on Windows NT) to avoid rebuilds just because the directory list changes. Rebuilds occur only if the changed directory list causes any included files to change, and a changed include file is detected by the MD5 signature calculation on the actual file contents.

Cons expands construction variables in the source and target file names passed to the various construction methods according to the expansion rules described above:

    $env = new cons(
        DESTDIR => 'programs',
        SRCDIR  => 'src',
    );
    Program $env '%DESTDIR/hello', '%SRCDIR/hello.c';

This allows for flexible configuration, through the construction environment, of directory names, suffixes, etc.

Cons supports several types of build actions that can be performed to construct one or more target files.
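The pseudo-variable substitution described above can be sketched in a few lines. This is Python for illustration only (the helper name is mine); it covers just %>, %0, %1–%9, and %<, including %<'s rule of dropping inputs already referenced by number:

```python
import re

def expand_cmd(template, target, sources):
    # which inputs are referenced individually via %1..%9?
    used = {int(n) for n in re.findall(r'%([1-9])', template)}

    def substitute(match):
        token = match.group(1)
        if token in ('>', '0'):
            return target                       # %> and %0: the target
        if token == '<':                        # %<: all remaining inputs
            return ' '.join(s for i, s in enumerate(sources, 1) if i not in used)
        return sources[int(token) - 1]          # %1 .. %9

    return re.sub(r'%([<>0-9])', substitute, template)
```

For example, a template that names its first input explicitly gets the rest from %< without duplication.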
Usually, a build action is a construction command--that is, a command-line string that invokes an external command. Cons can also execute Perl code embedded in a command-line string, and even supports an experimental ability to build a target file by executing a Perl code reference directly. A build action is usually specified as the value of a construction variable:

    $env = new cons(
        CCCOM   => '%CC %CFLAGS %_IFLAGS -c %< -o %>',
        LINKCOM => '[perl] &link_executable("%>", "%<")',
        ARCOM   => sub {
            my($env, $target, @sources) = @_;
            # code to create an archive
        }
    );

A build action may be associated directly with one or more target files via the Command method; see below.

A construction command goes through expansion of construction variables and %- pseudo-variables, as described above, to create the actual command line that Cons will execute to generate the target file or files. After substitution occurs, strings of white space are converted into single blanks, and leading and trailing white space is eliminated. It is therefore currently not possible to introduce variable-length white space in strings passed into a command.

If any command (even one within a multi-line command) begins with [perl], the remainder of that command line will be evaluated by the running Perl instead of being forked by the shell. If an error occurs in parsing the Perl code, or if the Perl expression returns a false value, the command is considered to have failed.

Cons also supports the ability to create a derived file by directly executing a Perl code reference. This feature is considered EXPERIMENTAL and subject to change in the future. A code reference may either be a named subroutine referenced by the usual \& syntax:

    sub build_output {
        my($env, $target, @sources) = @_;
        print "build_output building $target\n";
        open(OUT, ">$target");
        foreach $src (@sources) {
            if (! open(IN, "<$src")) {
                print STDERR "cannot open '$src': $!\n";
                return undef;
            }
            print OUT <IN>;
        }
        close(OUT);
        return 1;
    }

    Command $env 'output', \&build_output;

or the code reference may be an anonymous subroutine:

    Command $env 'output', sub {
        my($env, $target, @sources) = @_;
        print "building $target\n";
        open(FILE, ">$target");
        print FILE "hello\n";
        close(FILE);
        return 1;
    };

To build the target file, the referenced subroutine is passed, in order: the construction environment used to generate the target; the path name of the target itself; and the path names of all the source files necessary to build the target file. The code reference is expected to generate the target file, of course, but may manipulate the source and target files in any way it chooses. The code reference must return a false value ( undef or 0) if the build of the file failed. Any true value indicates a successful build of the target.

Building target files using code references is considered EXPERIMENTAL due to the following current limitations:

- Cons does not print anything to indicate the code reference is being called to build the file. The only way to give the user any indication is to have the code reference explicitly print some sort of "building" message, as in the above examples.

- Cons does not generate any signatures for code references, so if the code in the reference changes, the target will not be rebuilt.

- Cons has no public method to allow a code reference to extract construction variables. This would be good to allow generalization of code references based on the current construction environment, but would also complicate the problem of generating meaningful signatures for code references.

Support for building targets via code references has been released in this version to encourage experimentation and the seeking of possible solutions to the above limitations.
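For comparison, here is the same concatenating builder written in Python. This is an illustration only — Cons builders are Perl code references — but it keeps the calling convention described above (environment, target, then sources) and the convention that a false return value signals a failed build:

```python
def build_output(env, target, sources):
    """Concatenate the source files into target.

    Returns a true value (1) on success and a false value (None) on
    failure, mirroring the undef/0 convention for Perl code references.
    """
    try:
        with open(target, 'w') as out:
            for src in sources:
                try:
                    with open(src) as f:
                        out.write(f.read())
                except OSError as e:
                    print(f"cannot open '{src}': {e}")
                    return None          # false value -> build failed
    except OSError:
        return None
    return 1                             # any true value -> success
```

As with the Perl version, the builder itself is responsible for reporting progress; nothing else announces that it ran.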
AfterBuild method

The AfterBuild method evaluates the specified perl string after building the given file or files (or finding that they are up to date). The eval will happen once per specified file. AfterBuild is called as follows:

    AfterBuild $env 'foo.o', qq(print "foo.o is up to date!\n");

The perl string is evaluated in the script package, and has access to all variables and subroutines defined in the Conscript file in which the AfterBuild method is called.

Command method

The Command method is a catchall method which can be used to arrange for any build action to be executed to update the target. For this command, a target file and list of inputs is provided. In addition, a build action is specified as the last argument. The build action is typically a command line or lines, but may also contain Perl code to be executed; see the section above on build actions for details. The Command method is called as follows:

    Command $env <target>, <inputs>, <build action>;

The target is made dependent upon the list of input files specified, and the inputs must be built successfully or Cons will not attempt to build the target. The target, inputs, and build action may contain construction variable expansions; see the section above on construction variable expansion for details.

RuleSet method

The RuleSet method returns the construction variables for building various components with one of the rule sets supported by Cons. The currently supported rule sets are:

msvc
    Rules for the Microsoft Visual C++ compiler suite.

unix
    Generic rules for most UNIX-like compiler suites.

On systems with more than one available compiler suite, this allows you to easily create side-by-side environments for building software with multiple tools:

    $msvcenv   = new cons(RuleSet("msvc"));
    $cygnusenv = new cons(RuleSet("unix"));

In the future, this could also be extended to other platforms that have different default rule sets.
DefaultRules method

The DefaultRules method sets the default construction variables that will be returned by the new method to the specified arguments:

    DefaultRules(CC     => 'gcc',
                 CFLAGS => '',
                 CCCOM  => '%CC %CFLAGS %_IFLAGS -c %< -o %>');
    $env = new cons();   # $env now contains *only* the CC, CFLAGS,
                         # and CCCOM construction variables

Combined with the RuleSet method, this also provides an easy way to set explicitly the default build environment to use some supported toolset other than the Cons defaults:

    # use a UNIX-like tool suite (like cygwin) on Win32
    DefaultRules(RuleSet('unix'));
    $env = new cons();

Note that the DefaultRules method completely replaces the default construction environment with the specified arguments; it does not simply override the existing defaults. To override one or more variables in a supported RuleSet, append the variables and values:

    DefaultRules(RuleSet('unix'), CFLAGS => '-O3');
    $env1 = new cons();
    $env2 = new cons();
    # both $env1 and $env2 have 'unix' defaults
    # with CFLAGS set to '-O3'

The cons command line accepts the following options and arguments:

<targets>
    Build the specified target. If target is a directory, then recursively build everything within that directory.

+<pattern>
    Limit the Conscript files considered to just those that match pattern, which is a Perl regular expression. Multiple + arguments are accepted.

<name>=<val>
    Sets name to value val in the ARG hash passed to the top-level Construct file.

-cc
    Show command that would have been executed, when retrieving from cache. No indication that the file has been retrieved is given; this is useful for generating build logs that can be compared with real build logs.

-cd
    Disable all caching. Do not retrieve from cache nor flush to cache.

-cr
    Build dependencies in random order. This is useful when building multiple similar trees with caching enabled.

-cs
    Synchronize existing build targets that are found to be up-to-date with cache. This is useful if caching has been disabled with -cc or just recently enabled with UseCache.

-d
    Enable dependency debugging.
-f<file>
    Use the specified file instead of Construct (but first change to containing directory of file).

-h
    Show a help message local to the current build if one such is defined, and exit.

-k
    Keep going as far as possible after errors.

-o<file>
    Read override file file.

-p
    Show construction products in specified trees. No build is attempted.

-pa
    Show construction products and associated actions. No build is attempted.

-pw
    Show products and where they are defined. No build is attempted.

-q
    Make the build quiet. Multiple -q options may be specified. A single -q option suppresses messages about Installing and Removing targets. Two -q options suppress build command lines and target up-to-date messages.

-r
    Remove construction products associated with <targets>. No build is attempted.

-R<repos>
    Search for files in repos. Multiple -R repos directories are searched in the order specified.

-S<pkg>
    Use the sig::<pkg> package to calculate signatures. Supported <pkg> values include "md5" for MD5 signature calculation and "md5::debug" for debug information about MD5 signature calculation. If the specified package ends in ::debug, signature debug information will be printed to the file name specified in the CONS_SIG_DEBUG environment variable, or to standard output if the environment variable is not set.

-t
    Traverse up the directory hierarchy looking for a Construct file, if none exists in the current directory. Targets will be modified to be relative to the Construct file. Internally, cons will change its working directory to the directory which contains the top-level Construct file and report:

        cons: Entering directory `top-level-directory'

    This message indicates to an invoking editor (such as emacs) or build environment that Cons will now report all file names relative to the top-level directory. This message cannot be suppressed with the -q option.

-v
    Show cons version and continue processing.

-V
    Show cons version and exit.

-wf<file>
    Write all filenames considered into file.
-x
    Show a help message similar to this one, and exit.

QuickScan method

QuickScan allows simple target-independent file scanners to be set up for source files. Only one QuickScan scanner may be associated with any given source file and environment, although the same scanner may (and should) be used for multiple files of a given type. A QuickScan scanner is only ever invoked once for a given source file, and it is only invoked if the file is used by some target in the tree (i.e., there is a dependency on the source file).

A final example, which scans a different type of input file, takes over the file scanning rather than being called for each input line:

    $env->QuickScan(
        sub {
            my(@includes) = ();
            do {
                push(@includes, $3)
                    if /^(#include|import)\s+(\")(.+)(\")/ && $3
            } while <SCAN>;
            @includes
        },
        "$idlFileName",
        "$env->{CPPPATH};$BUILD/ActiveContext/ACSCLientInterfaces"
    );
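The per-line scanning style that QuickScan supports can be modelled compactly. The sketch below is Python with a name of my own choosing, not the Cons API; it does what the simpler scanner style describes — examine each line of a source file and return the dependencies it names:

```python
import re

def quick_scan(text):
    """Return the files named by #include "..." lines, in order."""
    deps = []
    for line in text.splitlines():
        # match only quoted includes; angle-bracket includes are skipped
        m = re.match(r'\s*#\s*include\s+"([^"]+)"', line)
        if m:
            deps.append(m.group(1))
    return deps
```

A real scanner would also resolve each returned name against a search path, as the CPPPATH argument in the example above suggests.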
http://search.cpan.org/~knight/cons-2.3.0/cons
Introduction to the New MVC 1.0 (Ozark RI)

MVC 1.0 is one of the new Java EE 8 specifications, available under JSR 371. It is designed to be an action-oriented framework layered on top of the JAX-RS API (at one point, the Servlet API was also under discussion) that aims to be an alternative, not a replacement, to the component-oriented JSF. In other words, MVC 1.0 is a different standardized approach for building Web applications on the Java EE platform (rather close to Spring MVC). If you want the controller logic in your hands and full control of the URI space, MVC 1.0 may be the right choice.

The M, V, and C in MVC 1.0

We assume that you know by now that, generally speaking, the model refers to the application's data, the view to the application's data presentation, and the controller to the part of the system responsible for managing input, calling business logic, updating the model, and deciding which view should be rendered. In MVC 1.0, they are related as shown in Figure 1.

Figure 1: MVC 1.0

The Model (M)

The model imposed by MVC 1.0 is basically a simple HashMap defined in javax.mvc.Models, as demonstrated below. Most commonly, you will manipulate this map from your controllers (JAX-RS classes) to provide a mapping between names and objects:

    // MVC 1.0 - source code of javax.mvc.Models
    public interface Models extends Map<String, Object>, Iterable<String> {
    }

This HashMap will be manipulated from the controller, and will update the view. Although the Models model must be supported by all view engines, MVC comes with another model based on CDI (@Named). The CDI model is RECOMMENDED over Models, but OPTIONAL! As you will see later, the official implementation of MVC 1.0 also supports the CDI model. Other implementations are not forced to have the CDI model!
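To make the model-to-view data flow concrete, here is a toy renderer. The sketch is Python, not the MVC API — it only shows the role the Models map plays: a controller-populated name-to-object map driving placeholder substitution in a template, with dotted paths walking nested entries, in the style of the EL examples that follow:

```python
import re

def render(template, models):
    """Substitute ${path.to.value} placeholders from a name->object map."""
    def resolve(path):
        obj = models
        for part in path.split('.'):
            # walk dict keys first, then plain attributes
            obj = obj[part] if isinstance(obj, dict) else getattr(obj, part)
        return str(obj)
    return re.sub(r'\$\{([\w.]+)\}', lambda m: resolve(m.group(1)), template)
```

A controller would put objects into the map under chosen names; the view engine then merges that map with the template at render time.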
Each placeholder will be replaced by the pointed data from the model. For example, if you provide a Book object that has an author property to the following views, you will obtain the same result (most commonly you will use EL to point data from the model): JSP view: ${book.author} Facelets view: #{book.author} Handlebars view: {{book.author}} Thymeleaf view: #{book.author} As you saw in Figure 1, the view is represented by the ViewEngine interface listed below: // MVC 1.0 - source code of javax.mvc.engine.ViewEngine public interface ViewEngine { String VIEW_FOLDER = "javax.mvc.engine.ViewEngine.viewFolder"; String DEFAULT_VIEW_FOLDER = "/WEB-INF/views/"; boolean supports(String view); void processView(ViewEngineContext context) throws ViewEngineException; } ViewEngine's task is to merge model and view, and, as you will see, the official MVC 1.0 implementation comes with several view engines (the mandatory view engines are for JSP and Facelets). In other words, the view engine is capable of fetching data from different models and render (produce the HTML markup) the required view. Developers also can write their own engines, and, as you will see in this article, it is not a very difficult task. The Controller (C) The controller is the "brain" of the application; it is responsible for combining data models and views to serve the user requests (display application's pages). In MVC 1.0, the controllers are implemented in JAX-RS style. This seriously reduces the learning curve for both newcomers and experienced Java EE developers. Once you learn how to write JAX-RS resources, you will also learn how to write MVC 1.0 controllers and vice versa. However, there are a few major differences and similarities between them. MVC 1.0 and JAX-RS Major Differences The major differences between MVC 1.0 and JAX-RS are as follows: -, and so forth). This restriction is true for hybrid classes, also. Such classes must be CDI-managed beans. 
- A String returned by an MVC controller is interpreted as a view path rather than text content (for example, pointing to a JSP page). So, pay attention to this aspect because a JAX-RS resource may return content text, whereas MVC controllers don't. - The default media type for a response is assumed to be text/html, but otherwise can be declared using @Produces just like in JAX-RS. - A MVC controller that returns void must be decorated with the type/method @View annotation to indicate the view to display. - We can encapsulate points to the default view for the controller. The default view will be used ONLY if you return null from your non-void controller. MVC 1.0 and JAX-RS Major Similarities The major similarities between MVC 1.0 and JAX-RS are: - All parameter types injectable in JAX-RS resources are also available in MVC controllers. - The default resource class instance lifecycle is per-request in JAX-RS and MVC (implementations may support other lifecycles via CDI). - The same caveats that apply to JAX-RS classes in other lifecycles also apply to MVC classes. The MVC Annotations Résumé MVC 1.0 comes with the following annotations: - @Controller (javax.mvc.annotation.Controller): Applied to class level, it defines an MVC controller. Applied to method level, it defines a hybrid class (controller and JAX-RS resource). - @View (javax.mvc.annotation.View): Applied to class level, it points the view for all void controller methods. Applied to method level, it points the view for that void controller method, or for that non-void controller method when it returns null (default view). - @CsrfValid (javax.mvc.annotation.CsrfValid): Can be applied only at method level and requires that a CSRF token must be validated before invoking the controller. Validation failure is signaled via ForbiddenException. - @RedirectScoped (javax.mvc.annotation.RedirectScoped): Can be applied at type, method, or field level; it points that a certain bean is in redirect scope. 
Action-based vs. Component-based

When we say action-based vs. component-based, we are referring to MVC 1.0 vs. JSF. Obviously, JSF is a mature technology with several major releases, whereas MVC 1.0 is just one of the new specs that will debut in Java EE 8. Both are MVC based, with different flavors. Well, we won't insist too much on an MVC 1.0 and JSF comparison, but maybe it is good to keep in mind a few facts, like the following figure and table. First, it is important to distinguish among model, view, and controller from an MVC 1.0 and a JSF perspective. Figure 2 speaks for itself.

Figure 2: JSF MVC

Table 1: Action-based vs. component-based

At a closer look, the MVC 1.0 design is as in Figure 3:

Figure 3: MVC 1.0 style

Each user request will be assigned by MVC to the proper controller, based on the specified path (an instance of a controller class is instantiated and initialized on every request). The controllers are written by the developer, and there can be more than one in the same application. These can be pure MVC controllers or hybrid classes that act as MVC controllers and JAX-RS resources, depending on the incoming request path. Moreover, the path may point to the action method that should deal with the request. Commonly, the action method will manipulate the model and will point to the view that should be rendered for the user. The view will reflect the current model data (most probably via EL).

The JSF design is as shown in Figure 4:

Figure 4: JSF-MVC style

In JSF, the controller is a servlet named FacesServlet, which the user cannot modify or extend. It is responsible for dealing with all user requests that should pass through the JSF lifecycle or that point to JSF resources. FacesServlet distinguishes between request types and delegates the tasks accordingly. The JSF lifecycle is a complex "machinery" based on two stages: execution and rendering.
The execution stage has five phases, as follows: Restore View, Apply Request Values, Process Validations, Update Model Values, and Invoke Application. The rendering stage consists of one phase, named Render Response. This phase is responsible for rendering views for the user. Each request will pass through all or a subset of the phases from the execution stage and through the rendering stage.

Ozark RI Overview

The reference implementation of MVC 1.0 is named Ozark. In Figure 5, you can notice the main features of Ozark:

Figure 5: MVC 1.0 RI (Ozark)

Ozark comes with two models: the mandatory implementation of Models and the optional CDI model. Moreover, Ozark comes with three view engines: ServletViewEngine, JspViewEngine, and FaceletsViewEngine. Of course, you can implement more views by implementing ViewEngine or extending the existing view engines.

The HelloWorld Example

We start this section with a very simple Hello World example. Basically, we will have an HTML start page/view (index.html) that will contain a link (path) pointing to an action of an Ozark controller. This action will simply return another page/view (hello.html). First, we need to configure the Ozark application (like a JAX-RS application), which means setting the base URI from which an application's controllers respond to requests. Later, we will dissect this aspect, but for now let's take the simplest way to accomplish this task:

    @ApplicationPath("resources")
    public class HelloApplication extends Application {
    }

Further, we can write the HTML pages, which are very simple. The index.html page's relevant code is:

    <a href="resources/hello">Hello World!</a>

The value of the href attribute represents a path that points to our controller/action.
This path consists of the base URI (resources) and a relative URI (hello) separated by a slash, "/": 1: import javax.mvc.annotation.Controller; 2: import javax.ws.rs.GET; 3: import javax.ws.rs.Path; 4: 5: @Controller 6: @Path("hello") 7: public class HelloController { 8: 9: @GET 10: public String helloAction() { 11: return "/hello.html"; 12: } 13: } Line 5: At this line, you can notice the presence of a @Controller annotation. This annotation is specific to MVC 1.0 and is the landmark that this is an MVC 1.0 controller. You also can use it at method level, and annotate only the helloAction() method, but that practice is more common for hybrid classes (Ozark controllers/JAX-RS resources). Line 6: Further, we use the @Path annotation to point the controller-relative URI. Because we have a single method (action), we can say that the @Path from class level will be resolved in the end to the helloAction() method. When you have multiple methods (actions) in the controller, you need to use @Path at method (action) level also. Each method (action) will have its own path. We will talk later about this topic. Line 9: Because we intend to reach helloAction() via the HTML <a/> tag, we indicate that this action is capable of dealing only with GET requests. The HTML <a/> fires a GET request. Line 11: There's an interesting detail at this line. Here, we indicate the view that should be rendered to the user after this action effect is complete. In our case, this view is named hello.html. Because the view is stored in the same folder with index.html (in the webapp folder), we need to prefix its path with a slash, "/". Without this slash, hello.html will not be located/found. This is another topic discussed later. So, the hello.html is embarrassingly simple: <h1>Hello World! (HTML page)</h1> The complete application is named HelloWorld. In this example, we have used HTML pages, but as we said earlier, Ozark also has dedicated view engines for Servlets, JSP, and Facelets. 
This means that the HelloWorld application can be re-written using JSP, Facelets, or a Servlet in a very easy manner. In the JSP approach, we simply rename index.html and hello.html as index.jsp and hello.jsp. That's it! Just for fun, you can write the index.jsp like this (the complete application is named HelloWorldJSP):

    <body>
      <% String hello = "Hello World!"; %>
      <a href="resources/hello"><%= hello %></a>
    </body>

In the Servlet approach, the content that should be displayed to the user is returned by HelloServlet:

    @WebServlet("/HelloServlet")
    public class HelloServlet extends HttpServlet {
        ...
        protected void processRequest(HttpServletRequest request,
                HttpServletResponse response)
                throws ServletException, IOException {
            response.setContentType("text/html;charset=UTF-8");
            try (PrintWriter out = response.getWriter()) {
                ...
                out.println("<h1>Hello World! (Servlet)</h1>");
                ...
            }
        }
        ...
    }

And the controller simply points to this Servlet:

    @GET
    public String helloAction() {
        return "/HelloServlet";
    }

The complete application is named HelloWorldServlet. Now, the last case is reserved for Facelets. You have to be aware that Facelets support is not enabled by default. MVC applications that use Facelets are required to package a web.xml deployment descriptor with an entry mapping the extension *.xhtml to the Faces servlet (you can use prefix mapping also, e.g. /faces/*):

    <servlet>
      <servlet-name>Faces Servlet</servlet-name>
      <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
    </servlet>
    <servlet-mapping>
      <servlet-name>Faces Servlet</servlet-name>
      <url-pattern>*.xhtml</url-pattern>
    </servlet-mapping>

If your application starts from a Facelets page, you need to configure the welcome file, also. For example:

    <welcome-file-list>
      <welcome-file>index.xhtml</welcome-file>
    </welcome-file-list>

If your application doesn't start from a Facelets page (for example, it starts from a JSP/HTML page), you don't need this part and can still use Facelets for the rest of the pages, except the starting page. The complete application is named HelloWorldFacelets.
http://www.developer.com/java/ent/introduction-to-the-new-mvc-1.0-ozark-ri.html
Data.Record.hs

(comment: my preferences would be (1) we should try to implement as many useful record operations, predicates, and invariants as we can, (2) we should try to unify the sets of operations into a coherent whole, (3) we should identify to what extent and in what form we need to have language and implementation support, and (4) users, not library providers, will decide which subsets of operations they use)

Label Namespace

The proposals which are implemented as libraries put labels in conid (at the value level) and tycon (at the type level). In other words, they must begin with capital letters, not clash with any other constructor or type, and be declared before use. If we want to support labels as first-class objects, this is essential so that we can distinguish them from other objects. The other proposals allow labels to be arbitrary strings, and distinguish them from other objects by context. This is related to the problem of Label Sharing: if the label L is declared in two different modules M1 and M2, both of which are imported, do we have one label L or two labels M1.L and M2.L? Should there be a mechanism for identifying labels on import?

The Heterogeneous Collections system introduces user-defined Label Namespaces. All the labels in a record must live in the same namespace, and within a namespace, labels are defined in a way which forces a linear ordering. This gives a kind of modularity which interacts with the module system in a complex way: it is not clear (for example) what happens when two imported modules extend a namespace independently.

- permutativity: The easiest way to implement permutativity of field labels is to sort them by some total ordering. Although this can be implemented using functional dependencies, it's complex and inefficient. Compiler support for a global order on tycons (based on fully qualified name, perhaps) would be very helpful. I have submitted a feature request #1894. Does this conflict with type sharing?
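The sort-based approach to permutativity can be pictured with an untyped sketch (Python here, since the type-level Haskell encoding is precisely the hard part under discussion): given any total order on labels, a record is normalized by sorting its fields, so two permutations of the same fields compare equal:

```python
def canonicalize(record):
    """Sort (label, value) fields under a total ordering on label names."""
    return sorted(record, key=lambda field: field[0])
```

This is what the `record` function in the Data.Record example below does at the type level — permute the supplied fields into the locally expected order.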
Here is the example translated into Data.Record, and slightly expanded to demonstrate the flexibility with which one may specify precisely as much detail as wanted, without requiring a global label ordering (if that is wanted, we simply permute to the locally expected order). norm1 has the type asked for above (no extra fields, specific order for expected fields), norm2 has the most flexible type (only expected fields are specified), and norm3 shows a middle ground (no extra fields permitted, but order is irrelevant):

data X = X deriving Show; instance Label X where label = X
data Y = Y deriving Show; instance Label Y where label = Y
data Z = Z deriving Show; instance Label Z where label = Z

type Point = (X := Float) :# (Y := Float) :# ()

-- a record is a list of fields, sorted to match expected order
record r = r !#& undefined

-- a point is a Point, if re-ordered
point = record ((Y := (3.0::Float)) :# (X := (4.0::Float)) :# ()) :: Point

norm1 :: Point -> Float
norm1 p = sqrt (p #? X * p #? X + p #? Y * p #? Y)

-- this is a type error, because X and Y are not in the expected order
-- test1a = norm1 ((Y := 3.0) :# (X := 4.0) :# ())

-- this works, because X and Y will be permuted into the expected order
test1b = norm1 (record $ (Y := (3.0::Float)) :# (X := (4.0::Float)) :# ())

norm2 :: (Select X Float rec, Select Y Float rec) => rec -> Float
norm2 p = sqrt ((p #? X) * (p #? X) + (p #? Y) * (p #? Y))

-- this is OK, because norm2 doesn't care about field order
test2a = norm2 ((Y := 3.0) :# (X := 4.0) :# ())

-- this is OK, because norm2 doesn't care about unused fields
test2b = norm2 ((Y := 3.0) :# (X := 4.0) :# (Z := True) :# ())

norm3 :: Project' rec Point => rec -> Float
norm3 p' = sqrt ((p #? X) * (p #? X) + (p #? Y) * (p #? Y))
  where p = (record p') :: Point

-- this is OK, because norm3 doesn't care about field order
test3a = norm3 ((Y := (3.0::Float)) :# (X := (4.0::Float)) :# ())

-- this is a type error, because norm3 doesn't accept unused fields
-- test3b = norm3 ((Y := 3.0) :# (X := 4.0) :# (Z := True) :# ())

The more complex systems support first class labels. Here is an example using the Type Families system:

labelZip :: ({n :: a} `Disjoint` {m :: b}) => n -> m -> [a] -> [b] -> [{n :: a, m :: b}]
labelZip n m = zipWith (\x y -> {n = x, m = y})

Libraries could then implement mkUnderlyingRecord, underlyingEmptyRecord, MkUnderlyingRecord, UnderlyingEmptyRecord and viewUnderlyingRecord in whatever way is best. What do you think?
https://ghc.haskell.org/trac/ghc/wiki/ExtensibleRecords?version=31
CC-MAIN-2017-43
refinedweb
742
61.46
Jeremy Medford wrote: Thank you all, I feel like I'm starting to make some progress here. I now have the following code structure in VideoServlet and I have a question: (Hopefully this isn't too much code being displayed.)

public class VideoServlet extends ActionServlet {

    ( code removed for brevity's sake )

    public void init() {
        addActionHandler( "addnewvideo", new AddSubmitHandler() );
        addActionHandler( "addvideo", new AddHandler() );
        addActionHandler( "default", new Default() );
    }
}

1. When I begin fresh at my index.html and select 'List Videos', I am taken to the Video List but I have one row that shouldn't be there. It is filled with nulls for the strings and a couple of default "no's" for the booleans. It's as if the AddSubmitHandler() is being used instead of my Default(). Why might this be happening? I thought Default would be used if the action does not match the others inside init()?
http://www.coderanch.com/t/476117/Cattle-Drive/Servlets-blocked
CC-MAIN-2014-52
refinedweb
149
72.46
Created on 2014-04-30 08:49 by Yinon, last changed 2014-09-18 02:11 by berker.peksag. This issue is now closed.

If you meant to supply a patch, it is missing. And in any event, you need to describe the issue.

Use the 'abspath' shortcut instead of 'os.path.abspath'. See the attached patch (sorry, forgot to attach before).

Wouldn't it be better to switch uses of abspath to be os.path.abspath? os.path is used elsewhere in the file, after all. Brett added "from os.path import abspath" in but I think that import should be deleted and os.path.abspath used directly.

IMO either change would not improve the code at all. Suggest closing this.

I disagree. It took me longer than I'd like to admit to track down the file history and understand it. I'd like to prevent other people from having to try and understand why it works this way. On the other hand, it looks like people have discovered it: so getting rid of it isn't so simple. If we are going to keep it, we should add a test for it (which actually might exist, I haven't checked yet).

I'd prefer to get rid of it, otherwise we might get requests to add all the other os.path functions to the shutil namespace, and I don't think having that kind of "more than one way to do it" serves anyone. I suppose we'll have to deprecate it first if we do get rid of it :(.

Here is a patch to deprecate the shutil.abspath function.

Shouldn't the existing calls to abspath() be changed to os.path.abspath()? Or are both patches meant to be applied? I don't think the first patch applies cleanly any more. In any event: the deprecation and test look good to me. So assuming we get rid of the import and get rid of direct calls to abspath(), I'm +1.

Now that I think about it, maybe we don't need a deprecation warning. says: ." abspath isn't in __all__, so it's arguably not part of the public API, anyway.

I'm for getting rid of "from import" without deprecation. Definitely this is not part of the API.
New changeset ab369d809200 by Berker Peksag in branch 'default': Issue #21391: Use os.path.abspath in the shutil module.

Done. Thanks for the reviews!
https://bugs.python.org/issue21391
CC-MAIN-2017-43
refinedweb
407
85.08
Overview of Preprocessor

The C preprocessor is exactly what its name implies. It is a program that processes our source program before it is passed to the compiler. Preprocessor commands (often known as directives) form what can almost be considered a language within the C language. They begin with a hash symbol (#). It must be the first nonblank character, and for readability, a preprocessor directive should begin in the first column.

Example:

#include <stdio.h>
#define PI 3.14

int main()
{
    int radius;
    float area;
    printf("Enter the radius=");
    scanf("%d", &radius);
    area = PI * radius * radius;
    printf("Area =%f", area);
    return 0;
}

Output:
Enter the radius=5
Area =78.500000
http://www.cseworldonline.com/tutorial/c_Preprocessor.php
CC-MAIN-2019-18
refinedweb
110
58.69
BaseSort member variable?

On 14/07/2016 at 12:06, xxxxxxxx wrote:

User Information:
Cinema 4D Version: 17
Platform: Windows ; Mac OSX ;
Language(s) : C++ ;

---------

Hi, I'm looking for a way to implement a variable sorting criteria in my BaseSort class, but I'm not getting it to work. I just want to sort an array with vectors using a tolerance value (eps):

class SortVector : public maxon::BaseSort<SortVector>
{
public:
    static Bool LessThan(Vector a, Vector b)
    {
        if (a.z < b.z - eps) return true;
        if (a.z > b.z + eps) return false;
        return (a.x < b.x);
    }

    SortVector(Float d) { eps = d; }

public:
    static Float eps;
};

//---------------------------------
maxon::BaseArray<Vector> points;
//... filling the array ...
SortVector sort(10);
sort.Sort(points);

I want to start the sorting using the tolerance value, but the compiler gives me an unresolved symbol "eps" error because of the static declaration. But I can't omit static, because LessThan wouldn't know "eps". And finally I can't omit static for LessThan(). It happens frequently that I need additional variable parameters for sorting. What is the intended way to do this? Thanks for any help.

On 14/07/2016 at 17:08, xxxxxxxx wrote:

Did you forget to add the definition of SortVector::eps also in your actual code? Something like this in a translation unit (.cpp file):

Float SortVector::eps = 1.0e-7; // just an example value

On 15/07/2016 at 01:45, xxxxxxxx wrote:

You only have the declaration of the static member but you also need to give it a definition (i.e. what Niklas provided) to actually make it exist.

On 15/07/2016 at 03:21, xxxxxxxx wrote:

Thanks. Have to read a chapter about static stuff I guess.
https://plugincafe.maxon.net/topic/9592/12877_basesort-member-variable
CC-MAIN-2019-22
refinedweb
290
67.76
Dec 28, 2014 03:48 PM | Anyhan

Hi, I am developing a project in Visual Studio 2013 with ASP.NET Identity and Visual C#. For this project, I have added an extra field in the registration page to allow for the entry of a username - for example "Anyhan". I have changed the code in the AccountController.cs class under the registration method in the following way:

Old code:
var user = new ApplicationUser { UserName = model.Email, Email = model.Email };

New code:
var user = new ApplicationUser { UserName = model.UserName, Email = model.Email };

The change assigns a username, for example "Anyhan", to the UserName field in the database, along with the entered email address being added to the Email field. My trouble is, when a user is registered and they forget their password and wish to reset the password, the email is sent using the following code in the IdentityConfig.cs class in the AppStart folder:

var mail = new System.Net.Mail.MailMessage(sentFrom, message.Destination);

Now, I have configured the email settings and tested them and they work. However, when I changed the registration code to add a username and not an email address to the UserName column in the database, the emails don't send. (There is also no error thrown.) My belief is that ASP.NET Identity uses the value in the UserName column as the recipient of the email. Is there a way of altering this to use the Email column values as the recipients so that usernames do not have to be email addresses? I have looked through the code and cannot find where one would change where message.Destination would look to the Email column. Thank you in advance for any help on this issue as it has baffled me for some time. Anthony

anas (All-Star, 69276 Points, Moderator) Jan 01, 2015 05:58 PM

Anyhan wrote: My belief is that ASP.NET Identity uses the value in the UserName column as the recipient of the email.

I don't think ASP.NET Identity configures this automatically. There must be something in the code which is still treating the userName as the email, and you need to debug this to find it. Can you please show the part that sends the email? As far as I know, ASP.NET Identity doesn't have built-in functions that send emails, so there must be code written in your solution which does this. One way to find all the code parts that reference the UserName property is to go to this property, right click on it in the class declaration and select "Find All References"; this should show you a list with all the areas in your project that are referencing this property. You need to review each one and replace the locations with the Email property when needed (like the case of sending the emails).

gtscdsi Jan 02, 2015 01:47 AM

Hi Anyhan, Here is an article for your reference. I have tried the sample in this tutorial and set a breakpoint at Task SendAsync(IdentityMessage message). The value of message.Destination shows the email address.

public class EmailService : IIdentityMessageService
{
    public Task SendAsync(IdentityMessage message)
    {
        // Plug in your email service here to send an email.
        return Task.FromResult(0);
    }
}

You may set a breakpoint at var mail = new System.Net.Mail.MailMessage(sentFrom, message.Destination); to check the value in both working and not working scenarios. Hope useful to you!

Best Regards

2 replies Last post Jan 02, 2015 01:47 AM by gtscdsi
https://forums.asp.net/t/2026993.aspx?C+Identity+forgot+password+email
CC-MAIN-2019-18
refinedweb
592
65.22
Registering a user defined regexp function with Sqlite from CherryPy

Multithreading can be tricky and can actually trip you in unexpected ways. And in some situations multithreading is almost unavoidable, for example when using CherryPy, as it usually instantiates a fair number of threads to efficiently serve multiple HTTP requests in parallel. The situation that caught me unawares was the combination with Sqlite. Sqlite can be used in a multithreaded fashion but you must make sure that each thread has its own connection object to communicate with the database. This is easily accomplished by registering a function with CherryPy that will be called for each newly started thread. This might look as follows:

import sqlite3
import threading

data = None
db = '/tmp/example.db'

def initdb():
    global data, db
    sql = 'create table if not exists mytable (col_a, col_b);'
    conn = sqlite3.connect(db)
    c = conn.cursor()
    c.execute(sql)
    conn.commit()
    conn.close()
    data = threading.local()

def connect(thread_index):
    global data, db
    data.conn = sqlite3.connect(db)
    data.conn.row_factory = sqlite3.Row

if __name__ == "__main__":
    initdb()
    cherrypy.engine.subscribe('start_thread', connect)
    # <... code to start the cherrypy engine ...>

There are two functions in the example above. The first one, initdb(), is used to initialize the database, that is, to create any tables necessary if they are not defined yet and to prepare some storage that is unique for each thread. Normally, all global data is shared between threads so we have to take special measures to provide each thread with its own private data. This is accomplished by the call to threading.local(). The resulting object can be used to store data as attributes and this data is private to each thread. initdb() needs to be called only once before starting the CherryPy engine. The second function, connect(), should be called once for every thread. It creates a database connection and stores a reference to this connection in the conn attribute of the global variable data. Because this was set up to be private data for each thread, we can use it to store a separate connection object. In the main section of the code we simply call initdb() once and use the cherrypy.engine.subscribe() function to register our connect() function to be executed at the start of a new thread. The code to actually start CherryPy is not shown in this example.

User defined functions

Now how can this simple setup cause any trouble? Well, most database configuration actions in Sqlite are performed on connection objects, and when we want them to work in a consistent way we should apply them to each and every connection. In other words, those configuration actions should be part of the connect() function. An example of that is shown in the last line of the connect() function where we assign the sqlite3.Row factory to the row_factory attribute of a connection object. Because we do it here we make sure that we may consistently access columns by name in any record returned from a query. What I failed to do, and what prompted this post, was registering a user defined function for each connection. Somehow it seemed logical to do it only once when initializing the database, but even if that connection wasn't closed it was impossible to use that function in a query. And user defined functions are not a luxury but a bare necessity if you want to use regular expressions in Sqlite! Sqlite supports the REGEXP operator in queries so you may use a query like:

select * from mytable where a regexp '^a.*b$';

This will select any record that has a value in its a column that starts with an a and ends with a b. However, although the syntax is supported, it still raises a sqlite3.OperationalError exception because the regexp function that is called by the REGEXP operator is not defined. If we want to use regular expressions in Sqlite we have to supply an implementation of the regexp function ourselves. Fortunately this is quite simple; a possible implementation is shown below:

import re

def regex(pattern, string):
    if string is None:
        string = ''
    return re.search(pattern, str(string)) is not None

Note that this isn't a very efficient implementation, as we compile a pattern again and again each time the function is called, even when it may be called hundreds of times with the same pattern in a single query. It does the job however. All that is left to do now is register this function. Not, as I did, as part of the initdb() function, but as part of the connect() function that is called for each thread:

def connect(thread_index):
    global data, db
    data.conn = sqlite3.connect(db)
    data.conn.row_factory = sqlite3.Row
    data.conn.create_function('regexp', 2, regex)

The create_function() method will make our newly defined function available. It takes a name, the number of arguments and a reference to our new function as arguments. Note that despite what the Sqlite documentation states, our regular expression function should be registered with the name regexp (not regex!).

A side note on multiprocessing

If you have a multiprocessor or multicore machine, multithreading will in general not help you to tap into the full processing power of your server. In this article I explore ways to use Python's multiprocessing module in combination with Sqlite.
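Since the article itself points out that the simple implementation recompiles the pattern on every call, one refinement (my own sketch, not from the article) is to cache the compiled patterns with functools.lru_cache, so each distinct pattern is compiled only once even when a query calls the function hundreds of times:

```python
import re
import sqlite3
from functools import lru_cache

@lru_cache(maxsize=128)
def _compiled(pattern):
    # Compile each distinct pattern only once per process.
    return re.compile(pattern)

def regexp(pattern, string):
    if string is None:
        string = ''
    return _compiled(pattern).search(str(string)) is not None

# Demonstrate it against an in-memory database.
conn = sqlite3.connect(':memory:')
conn.create_function('regexp', 2, regexp)
conn.execute('create table mytable (a)')
conn.executemany('insert into mytable values (?)',
                 [('alpha b',), ('beta',), ('ab',)])
rows = conn.execute("select a from mytable where a regexp '^a.*b$'").fetchall()
print([r[0] for r in rows])  # -> ['alpha b', 'ab']
```

The function is still registered under the name regexp with create_function(), exactly as in the article; only the internals change.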
http://michelanders.blogspot.com/2010/10/sqlite-multithreading-woes.html
CC-MAIN-2017-22
refinedweb
887
53.51
Cover image by Mikael Kristenson on Unsplash

This is Part 3 in the Python Scripting Toolbox series. It's a three-part survey of the tools available to us for Python scripting. I'm showing off the functionality by creating three scripts that show off different parts of the standard library. In this one, we'll use sys, os, shutil, pathlib, and argparse. Keep in mind that I'm using Python 3.6 here, so you'll see some shwanky features like "f-strings" and the pathlib module that aren't necessarily available in earlier versions. Try to use a 3.6 (or at least 3.4) if you can while following along.

- In Part 1, we built shout.py: a script that shouts everything you pass into it.
- In Part 2, we created make_script.py: a script that generates a starter script from a template, for use in things like Project Euler or Rosalind.
- In Part 3, we are going to create project_setup.py: a script that creates a basic project skeleton, and add a couple new tools to our toolbox.

Now, let's get started.

project_setup.py

Once again, before we get started, it's important to say that there are a number of great libraries out there that already do this for you. Cookiecutter is probably the most popular. It's fully featured, has great docs, and a huge ecosystem of plug-ins and pre-made templates. Now that we've got that out of the way, it's time for us to boldly go and reinvent the wheel using only the Standard Library! Onward!

Here are the requirements for project_setup.py. It needs to take an input that will point it towards a source template to copy, possibly with some sane defaults and error checking. It will also need to know where to put the copy. Most importantly, it must copy the source template to the target location.

First Draft: Basic Functionality

We're going to need one old friend and two new ones to start: sys, pathlib, and shutil.
We'll use sys to handle our input arguments (at least for now), pathlib to handle file paths in a humane way, and shutil to do the heavy lifting in terms of copying. Here's our first cut.

# project_skeleton.py
import pathlib
import shutil
import sys

def main(source_template, destination):
    """
    Takes in a directory as a template and copies it
    to the requested destination.
    """
    # We'll use 'expanduser' and 'resolve' to handle any '~' directories
    # or relative paths that may cause any issues for `shutil`
    src = pathlib.Path(source_template).expanduser().resolve()
    dest = pathlib.Path(destination).expanduser().resolve()

    if not src.is_dir():
        exit("Project Skeleton: Source template does not exist.")
    if dest.is_dir():
        exit("Project Skeleton: Destination already exists.")

    shutil.copytree(src, dest)
    print(f"Project Skeleton: Complete! {src} template copied to {dest}")

if __name__ == "__main__":
    if len(sys.argv) != 3:
        # Bail out here so the unpacking below always has two values.
        exit(f"Usage: {sys.argv[0]} SOURCE_TEMPLATE_PATH DESTINATION_PATH")
    source, destination = sys.argv[1:]
    main(source, destination)

pathlib is a super powerful library, and it's probably my favorite of the ones we'll cover in this tutorial. It has a lot of high-level easy-to-read methods, it's cross-platform without much fuss, and it makes it so you never have to type another darn os.path.join ever again (mostly). My favorite, favorite thing about it is that it overloads the division slash (/) so that "dividing" two paths or a path and a string creates a new path, regardless of what platform you're on:

p = pathlib.Path("/usr")
b = p / "bin"
print(b)
# => /usr/bin

As you can see, we really only use shutil for one thing: copying the directory tree from one place to another. shutil is going to be your go-to module for copying, moving, renaming, overwriting, and getting data about files and directories. Lastly, we use good ole' sys to do some rudimentary input argument validation.

Pass 2: Argument Parsing

Just like last time, I want to add some argument parsing, via argparse. Not nearly as many options are required as last time.

import argparse

# ... The other imports and main function don't change.

if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="Generate a project skeleton from a template.")
    parser.add_argument(
        "source", help="The directory path of the source template.")
    parser.add_argument("destination", help="The location of the new project.")
    args = parser.parse_args()
    main(args.source, args.destination)

Now our command has nice argument counting, usage messages, and a help message!

Pass 3: Having a Templates directory

This script will work great, but we have to type the full path to our template skeleton every time we use it, like some kind of peasant! Let's use a handy trick to implement some sane, cascading defaults.

"""Generates a project skeleton from a template."""
import argparse
import os  # <= Adding an 'os' import.  You'll see why below.
import pathlib
import shutil

def main(source_template, destination):
    """
    Takes in a directory (string) as a template and copies it
    to the requested destination (string).
    """
    src = pathlib.Path(source_template).expanduser().resolve()
    dest = pathlib.Path(destination).expanduser().resolve()

    if not src.is_dir():
        exit(f"Project Skeleton: Source template at {src} does not exist.")
    if dest.is_dir():
        exit(f"Project Skeleton: Destination at {dest} already exists.")

    shutil.copytree(src, dest)
    print(f"Project Skeleton: Complete! {src} template copied to {dest}")

if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="Generate a project skeleton from a template.")
    # I want to tweak the 'source' argument, since now all we want
    # is for the user to provide the name of the template.
    parser.add_argument(
        "template", help="The name of the template to use.")
    parser.add_argument("destination", help="The location of the new project.")
    # This is the magic.  We're going to add an argument that specifies where
    # our templates live.  If the user doesn't specify, we'll see if they have
    # an environment variable set that tells us where to look.  If they don't
    # have that, we'll use a sane default.
    parser.add_argument(
        "-d", "--template-dir",
        default=os.environ.get("SKELETON_TEMPLATE_DIR") or "~/.skeletons",
        help="The directory that contains the project templates.  "
             "You can also set the environment var SKELETON_TEMPLATE_DIR "
             "or use the default of ~/.skeletons."
    )
    args = parser.parse_args()
    # One last tweak: We want to append the name of the template skeleton
    # to the root templates directory.
    source_dir = pathlib.Path(args.template_dir) / args.template
    main(source_dir, args.destination)

And there you have it! You can call your script in a number of ways:

$ python project_skeleton.py big_project code/new_project
# This assumes you have a template called big_project in your ~/.skeletons dir

$ SKELETON_TEMPLATE_DIR="~/code/templates/" python project_skeleton.py big_project code/new_project
# Using an environment variable.  For long-term use, put this variable
# declaration in your .bashrc :)

$ python project_skeleton.py -d ~/my/favorite/secret/place/ big_project code/new_project

Wrap Up

I was going to go wild with interpolated variables into directories, filenames, and files (similar to how Cookiecutter does it), cloning from remote URLs, and more, but that seems like a bit much for one article. I think I'll leave it here, and maybe put those into a future article. If anybody wants to give it a shot, send me a link to your solution for bonus internet points!

Discussion (1)

Aw man! I love the use of f-strings the most. Beautiful. I still do os.path.join though 🙈
https://dev.to/rpalo/python-scripting-toolbox-part-3---project-skeleton-generator-4f7f
CC-MAIN-2022-05
refinedweb
1,227
59.3
* 2 3 4 5 6 7 8 9
1 * 3 4 5 6 7 8 9
1 2 * 4 5 6 7 8 9
1 2 3 * 5 6 7 8 9
1 2 3 4 * 6 7 8 9
1 2 3 4 5 * 7 8 9

I have a lot of problems to do something simple like this. Here is my code:

#include <stdio.h>

int main()
{
    int number;
    int star = 1, star_location = 1;
    int numberzcounter;
    for (numberzcounter = 1; numberzcounter < 7; numberzcounter++)
    {
        for (number = 1; number < 10; number++)
        {
            if (star == star_location)
            {
                star_location++;
                printf("* ");
            }
            else
            {
                printf("%d ", number);
                star++;
            }
        }
        printf("\n");
    }
    return 0;
}

and it produced this output which is obviously wrong:

* 2 * 4 * 6 * 8 *
1 * 3 * 5 * 7 * 9
* 2 * 4 * 6 * 8 *
1 * 3 * 5 * 7 * 9
* 2 * 4 * 6 * 8 *
1 * 3 * 5 * 7 * 9

Can anyone help me please? I am still unable to produce the desired pattern. Thanks in advance.
https://www.dreamincode.net/forums/topic/66888-help-with-nested-loop-to-display-patterns-using-c/
CC-MAIN-2018-30
refinedweb
169
74.42
Details
- Type: Improvement
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Affects Version/s: 1.6.0
- Labels: None
- Hadoop Flags: Reviewed
- Release Note: IDL parsing updated to support capturing doc strings for both Protocol and Message entities.
- Tags: IDL, Code generation

Description

/** method=@AccessControl(source="MyService") */
string echoString(string msg) throws com.aol.interfaces.error.ServiceError;
Message echoMessage(Message msg) throws com.aol.interfaces.error.ServiceError;
void publishMessage(string msg) oneway;
}

Activity

This is looking great. A few nits:
- The changes to output/unicode.avpr were garbled.
- There were some whitespace/indentation changes in Protocol.java that weren't needed.
I've attached a new version of a patch that fixes these details. I'll commit this tomorrow unless there are objections.

This time with ASF license grant

Ok, new patch file (avro-doc-v3.patch) uploaded. Simplified JSON string escaping logic (thanks Doug!)

I'm happy to update my patch to include this simplification.

George: you're right, there's a bug in the escaping of schemas included in specific compiler output. I've attached a one-line change that fixes this. With this, I don't believe you'll need to make all the escape-related changes in your patch. Should I file a separate issue for this, or would you just like to include this change in your patch?

For good measure, if you make the comment of the Span record in the avroTrace.avdl file look like this

/**
 * An individual span is the basic unit of testing.
 * The record is used by both \"client\" and \"server\".
 */
record Span {

you will trigger the other bug I ran into while adding doc support for Protocols and Messages. Basically, the SpecificCompiler.java in its javaEscape() method doesn't ignore already escaped double-quotes and hence replaces \" with \\" which causes an improperly escaped string in the generated java sources. In that case the code doesn't even compile.
The resulting generated avroTrace.java file contains this statement. public static final org.apache.avro.Protocol PROTOCOL = org.apache.avro.Protocol.parse("{\"protocol\":\"AvroTrace\",\"namespace\":\"org.apache.avro.ipc.trace\",\"types\":[{\"type\":\"enum\",\"name\":\"SpanEvent\",\"symbols\":[\"SERVER_RECV\",\"SERVER_SEND\",\"CLIENT_RECV\",\"CLIENT_SEND\"]},{\"type\":\"fixed\",\"name\":\"ID\",\"size\":8},{\"type\":\"record\",\"name\":\"TimestampedEvent\",\"fields\":[{\"name\":\"timeStamp\",\"type\":\"long\"},{\"name\":\"event\",\"type\":[\"SpanEvent\",\"string\"]}]},{\"type\":\"record\",\"name\":\"Span\",\"doc\":\"* An individual span is the basic unit of testing.\n * The record is used by both \\\\"client\\\\" and \\\\"server\\\\".\",\"fields\":[{\"name\":\"traceID\",\"type\":\"ID\"},{\"name\":\"spanID\",\"type\":\"ID\"},{\"name\":\"parentSpanID\",\"type\":[\"ID\",\"null\"]},{\"name\":\"messageName\",\"type\":\"string\"},{\"name\":\"requestPayloadSize\",\"type\":\"long\"},{\"name\":\"responsePayloadSize\",\"type\":\"long\"},{\"name\":\"requestorHostname\",\"type\":[\"string\",\"null\"]},{\"name\":\"responderHostname\",\"type\":[\"string\",\"null\"]},{\"name\":\"events\",\"type\":{\"type\":\"array\",\"items\":\"TimestampedEvent\"}},{\"name\":\"complete\",\"type\":\"boolean\"}]}],\"messages\":{\"getAllSpans\":{\"request\":[],\"response\":{\"type\":\"array\",\"items\":\"Span\"}},\"getSpansInRange\":{\"request\":[{\"name\":\"start\",\"type\":\"long\"},{\"name\":\"end\",\"type\":\"long\"}],\"response\":{\"type\":\"array\",\"items\":\"Span\"}}}}"); Scroll until you find the "doc" string for the "Span" record. I used a fresh checkout of 1.6.0 for this test that did NOT contain my patch. I'm sorry that my initial comments weren't very clear. Argh... looks like I need wiki help The first sentence of my last comment should read... Right and that is the problem I ran into with multi-line comments in the IDL. 
In the patch you provided, the newline (\n) is escaped as in \\n

Right and that is the problem I ran into with multi-line comments in the IDL. In the patch you provided, the newline (\n) is escaped as in n. As long as the JSON string has the newlines escaped everything is fine. The problem that I found is that the IDL parsing logic does NOT escape the newlines in a multi-line comment. This is true for enum, fixed and record schemas. The code generation phase worked fine, but when it came time to compile the generated java code, the Protocol.parse() method failed because the "doc" string in the JSON structure did not have the newlines properly escaped. For example, try adding the following multi-line comment to the avroTrace.avdl file:

/**
 * An individual span is the basic unit of testing.
 * The record is used by both client and server.
 */
record Span {

Some of the tests in the ipc module will fail because the doc string contains an un-escaped newline. I've uploaded the simple patch for the avroTrace.avdl file. Hence in my patch, I explicitly escape any newlines in the doc strings when generating the JSON format, and convert them back to newlines when "getting" the doc (protocol.getDoc()) so that comments in the generated code look much better.

> JSON doc strings need newlines to be escaped.

Jackson does not usually emit JSON it cannot parse. Here's a patch that adds a test of parsing newlines in a doc string. The modified test passes for me. Where are you seeing failures?

Not sure how to delete attached files, so if interested in the patch, please use avro-doc-v2.patch to pick up all the latest tweaks. Thanks.

Minor update to the patch that restores newlines in the multi-line doc strings when written by the velocity templates.

Attached file avro-doc.patch to this jira ticket

Patch to avro 1.6.0 that improves the IDL parser to capture doc strings for both protocols and messages.
Did not attach the patch

So I believe I've got code working against 1.6.0-SNAPSHOT (revision 1170735) to handle doc strings for protocol and message entities. I'll submit a patch. I ran into a couple of interesting issues that are probably undiscovered bugs in the existing implementations.

1. The SpecificCompiler.java javaEscape() method is currently escaping double-quotes even if they are already escaped in the string. I changed the replace() call to replaceAll("([\\\\])\"", "$1\\\\\"").replaceFirst("\"", "\\\\\"") which basically will replace a double-quote with \" as long as it is not already escaped.

2. JSON doc strings need newlines to be escaped. If a doc string was multi-line and contained newlines, the JSON parser complained. I fixed these by escaping the newlines whenever generating the JSON objects (e.g. toJson()).

The JSON is meant to be extensible: "Attributes not defined in this document are permitted as metadata, but must not affect the format of serialized data." from So no changes are required to other implementations if we start using a "java-annotations" property.

Yes, that makes a lot of sense. Ideally, I'd like to see both options implemented. I realize that leveraging the doc block as a way to get annotations is a workaround (a.k.a. hack)... when I started down that path, I was hoping it was something I could do without changing the avro code. So it seems there are really two things to tackle...

1. Modify the IDL parser to capture doc for both the protocols and messages and save it.
2. Modify the IDL parser to allow properties for the protocol and message definitions and then expose these properties via the parsed code.

I'm assuming that the property information would need to appear in the JSON version of the protocol? This then would affect all languages that depend on the JSON encoding (e.g. python). Correct?
That said, if generating protocol annotations is the desired end-goal, then it does not seem appropriate to use documentation strings. Rather, I'd suggest that protocols and messages in Java be extended to support arbitrary properties (like schemas and fields already do). Then the IDL parser would need to be changed to parse these (as it already does for schemas and fields). Finally, we could alter the templates to emit annotations from the value of the "java-annotations" property of protocols, messages, schemas and fields. Does that make sense?

I committed this. Thanks, George!
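The escaping behavior at the heart of this ticket can be demonstrated outside of Avro: JSON requires control characters such as newlines inside strings to be escaped, which is exactly what a well-formed serializer does automatically. A minimal sketch (Python is used here purely for illustration; Avro's tooling is Java, and the doc text below is just the example comment from this thread):

```python
import json

# A multi-line doc string, as the IDL parser would capture it from
# a /** ... */ comment block.
doc = ("An individual span is the basic unit of testing.\n"
       "The record is used by both client and server.")

# A compliant JSON serializer escapes the raw newline as the two
# characters '\' and 'n', so the emitted document stays parseable.
encoded = json.dumps({"doc": doc})
assert "\\n" in encoded      # escaped newline is present
assert "\n" not in encoded   # no raw newline survives

# Round-tripping restores the original newline, which is what lets
# generated code render the comment across multiple lines again.
decoded = json.loads(encoded)
assert decoded["doc"] == doc
```

This is why emitting the doc string without escaping breaks Protocol.parse(): the raw newline makes the JSON document itself invalid.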
Text summarization involves condensing a piece of text into a shorter version, reducing the size of the original text while preserving its key information and meaning. Since manual summarization is a long and laborious task, automating it is gaining popularity and is therefore a strong motivation for academic research. In this article, I will take you through the task of Natural Language Processing to summarize text with Machine Learning.

Text summarization has important applications in various Natural Language Processing tasks such as text classification, question answering, legal text synthesis, news synthesis, and headline generation. The intention of summarizing a text is to create an accurate and fluent summary containing only the main points described in the document.

Types of Approaches to Summarize Text

Before I dive into showing you how we can summarize text using machine learning and Python, it is important to understand the types of text summarization, so that we can use logic while applying machine learning techniques to summarize the text. Generally, text summarization is classified into two main types: the extractive approach and the abstractive approach. Let's go through both these approaches before we dive into the coding part.

The Extractive Approach

The extractive approach takes sentences directly from the document according to a scoring function to form a cohesive summary. This method works by identifying the important sections of the text, then cropping and assembling parts of the content to produce a condensed version.
The Abstractive Approach

The abstractive approach aims to produce a summary by interpreting the text using advanced natural language techniques to generate a new, shorter text, parts of which may not appear in the original document, that conveys the most information.

In this article, I will be using the extractive approach to summarize text using Machine Learning and Python. I will use the TextRank algorithm, which is an extractive and unsupervised machine learning algorithm for text summarization.

Summarize Text with Machine Learning

So now, I hope you know what text summarization is and how it works. Now, without wasting any time, let's see how we can summarize text using machine learning. The dataset that I will use in this task can be downloaded from here. Let's import the necessary packages that we need to get started with the task:

import pandas as pd
import numpy as np
import nltk
nltk.download('punkt')
nltk.download('stopwords')  # needed later for stop word removal
import re
from nltk.corpus import stopwords

Now, as I have imported the necessary packages, the next step is to look at the data to get some idea of what we are going to work with:

from google.colab import files
uploaded = files.upload()
df = pd.read_csv("tennis.csv")
df.head()

df['article_text'][1]

"BASEL, Switzerland (AP), Roger Federer advanced to the 14th Swiss Indoors final of his career by beating seventh-seeded Daniil Medvedev 6-1, 6-4 on Saturday. Seeking a ninth title at his hometown event, and a 99th overall, Federer will play 93rd-ranked Marius Copil on Sunday. Federer dominated the 20th-ranked Medvedev and had his first match-point chance to break serve again at 5-1. He then dropped his serve to love, and let another match point slip in Medvedev's next service game by netting a backhand.
He clinched on his fourth chance when Medvedev netted from the baseline. Copil upset expectations of a Federer final against Alexander Zverev in a 6-3, 6-7 (6), 6-4 win over the fifth-ranked German in the earlier semifinal. The Romanian aims for a first title after arriving at Basel without a career win over a top-10 opponent. Copil has two after also beating No. 6 Marin Cilic in the second round. Copil fired 26 aces past Zverev and never dropped serve, clinching after 2 1/2 hours with a forehand volley winner to break Zverev for the second time in the semifinal. He came through two rounds of qualifying last weekend to reach the Basel main draw, including beating Zverev's older brother, Mischa. Federer had an easier time than in his only previous match against Medvedev, a three-setter at Shanghai two weeks ago."

Now, I will split the articles in the data into sentences by tokenizing them:

from nltk.tokenize import sent_tokenize
sentences = []
for s in df['article_text']:
    sentences.append(sent_tokenize(s))
sentences = [y for x in sentences for y in x]

Now I am going to use the GloVe method for word representation. It is an unsupervised learning algorithm developed at Stanford University that generates word embeddings by aggregating the global word-word co-occurrence matrix from a corpus.
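To make the GloVe file format concrete before loading the full file, here is how a single embedding line parses. The line below is made up for illustration and has only 4 dimensions; real glove.6B.100d.txt lines hold a word followed by 100 float values:

```python
import numpy as np

line = "tennis 0.1 -0.25 0.7 0.05"  # hypothetical 4-d embedding line

# Same parsing as the loading loop: first token is the word,
# the remaining tokens form the embedding vector.
values = line.split()
word = values[0]
coefs = np.asarray(values[1:], dtype='float32')

assert word == "tennis"
assert coefs.shape == (4,)
```

The loading loop in the next step simply repeats this parse for every line and stores the result in the word_embeddings dictionary.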
To implement this method you have to download a file from here and store it in the same directory as your Python file:

word_embeddings = {}
f = open('glove.6B.100d.txt', encoding='utf-8')
for line in f:
    values = line.split()
    word = values[0]
    coefs = np.asarray(values[1:], dtype='float32')
    word_embeddings[word] = coefs
f.close()

clean_sentences = pd.Series(sentences).str.replace("[^a-zA-Z]", " ")
clean_sentences = [s.lower() for s in clean_sentences]
stop_words = stopwords.words('english')

def remove_stopwords(sen):
    sen_new = " ".join([i for i in sen if i not in stop_words])
    return sen_new

clean_sentences = [remove_stopwords(r.split()) for r in clean_sentences]

Now, I will create vectors for the sentences:

sentence_vectors = []
for i in clean_sentences:
    if len(i) != 0:
        v = sum([word_embeddings.get(w, np.zeros((100,))) for w in i.split()])/(len(i.split())+0.001)
    else:
        v = np.zeros((100,))
    sentence_vectors.append(v)

Finding Similarities to Summarize Text

The next step is to find similarities between the sentences, and I will use the cosine similarity approach for this task.
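Cosine similarity itself is simple: it is the dot product of two vectors divided by the product of their norms. A minimal NumPy sketch on toy 2-d vectors (not the 100-d sentence vectors used in this article) shows the behavior we rely on:

```python
import numpy as np

def cosine(u, v):
    # cos(theta) = (u . v) / (|u| * |v|)
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

a = np.array([1.0, 0.0])
b = np.array([1.0, 0.0])
c = np.array([0.0, 1.0])

assert abs(cosine(a, b) - 1.0) < 1e-9  # identical direction -> similarity 1
assert abs(cosine(a, c)) < 1e-9        # orthogonal vectors -> similarity 0
```

Sentences with similar word content produce sentence vectors pointing in similar directions, so their cosine similarity approaches 1; unrelated sentences score near 0.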
Let's create an empty similarity matrix for this task and fill it with the cosine similarities of the sentences:

sim_mat = np.zeros([len(sentences), len(sentences)])
from sklearn.metrics.pairwise import cosine_similarity
for i in range(len(sentences)):
    for j in range(len(sentences)):
        if i != j:
            sim_mat[i][j] = cosine_similarity(sentence_vectors[i].reshape(1,100), sentence_vectors[j].reshape(1,100))[0,0]

Now I am going to convert the sim_mat similarity matrix into a graph. The nodes in this graph will represent the sentences and the edges will represent the similarity scores between the sentences:

import networkx as nx
nx_graph = nx.from_numpy_array(sim_mat)
scores = nx.pagerank(nx_graph)

Now, let's summarize the text:

ranked_sentences = sorted(((scores[i],s) for i,s in enumerate(sentences)), reverse=True)
for i in range(5):
    print("ARTICLE:")
    print(df['article_text'][i])
    print('\n')
    print("SUMMARY:")
    print(ranked_sentences[i][1])
    print('\n')

Output:

ARTICLE:
Maria Sharapova has basically no friends as tennis players on the WTA Tour. The Russian player has no problems in openly speaking about it and in a recent interview she said: 'I don't really hide any feelings too much.. Uhm, are so many other things that we're interested in, that we do.'

SUMMARY:

So, congratulations: you have built a text summarization system using machine learning that will reduce the effort needed to find the most important and relevant information on any topic. I hope you liked this article on how to summarize text with machine learning. Feel free to ask your valuable questions in the comments section below. You can also follow me on Medium to learn every topic of Machine Learning.
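As a closing note, the ranking step used above (sorting sentences by their PageRank scores before printing the top ones) can be checked in isolation on a toy example with made-up scores:

```python
# Hypothetical PageRank scores keyed by sentence index, as nx.pagerank
# would return them, plus matching toy sentences.
scores = {0: 0.2, 1: 0.5, 2: 0.3}
sentences = ["Sentence A.", "Sentence B.", "Sentence C."]

# Same pattern as the article: pair each score with its sentence,
# then sort descending so the highest-scoring sentence comes first.
ranked = sorted(((scores[i], s) for i, s in enumerate(sentences)), reverse=True)
print(ranked[0][1])
# Sentence B.
```

The summary is simply the top of this ranking, so the sentence the graph considers most central is printed first.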
Build: NetBeans IDE 7.4 (Build 201310012201)
VM: Java HotSpot(TM) 64-Bit Server VM, 24.0-b56, Java(TM) SE Runtime Environment, 1.7.0_40-b43
OS: Windows 7

MackSix:
1. Source Level is 1.8 in Java Freeform Project while JDK is JDK 1.4.
2. Changed source level to 1.6 and this exception was thrown.

Stacktrace:
org.xml.sax.SAXParseException; cvc-complex-type.2.4.a: Invalid content was found starting with element 'compilation-unit'. One of '{"":compilation-unit}' is expected.
at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:198)
at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.error(ErrorHandlerWrapper.java:134)
at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:437)
at com.sun.org.apache.xerces.internal.impl.XM)

Created attachment 140865 [details] stacktrace

Steps to reproduce:
1. Open attached free-form project.
2. Open context menu and go to Properties >> Java Source Classpath >> Java Platform and change JDK platform from 1.8 to 1.7.
3. This exception is thrown.
4. Open context menu again and JDK platform is set back to 1.8.
5. Go to Java Source and change the Source Level to 1.7 or 1.6.
6. This exception is thrown.

Workaround: None that I know of.

Product Version: NetBeans IDE 7.4 (Build 201310012201)
Java: 1.7.0_40; Java HotSpot(TM) 64-Bit Server VM 24.0-b56
Runtime: Java(TM) SE Runtime Environment 1.7.0_40-b43
System: Windows 7 version 6.1 running on amd64; Cp1252; en_US (nb)

Created attachment 140866 [details] Project to use for reproduction.

> How am I supposed to change Java Platform to 1.6?

I can't even change Java Platform from one version of JDK 7 to another now after changing the source level entries in project xml. I was able to do it a couple of times and now it throws the exception and won't change.

I rebooted NetBeans and it now allows me to change Java Platforms between different JDK 7 versions again.
(In reply to Tomas Danek from comment #4)
> Plus you may have to reboot NetBeans to allow the Java Platform to be changed from 1.8 to 1.7, between 1.7 versions or a 1.6 version.

If I have JDK Platform set to JDK1.6.0_45 and try to change source level to 1.5, it throws this exception too.

org.xml.sax.SAXParseException; cvc-complex-type.2.4.a: Invalid content was found starting with element 'compilation-unit'. One of '{"":compilation-unit}' is expected.

jlahoda: there's a duplicate issue that you had assigned to yourself?

I can't believe this isn't a patch1 candidate. ;)

Created attachment 142295 [details] stacktrace

This is a FreeForm Java project. I was trying to add and remove jars from the Project Properties src class path. The last time I did this, it forgot my new settings and reverted back to the old ones. This was more or less the first thing I did when I started up NetBeans. The project was already opened.

This bug already has 5 duplicates, see

Integrated into 'releases/release74', will be available in build *201312042201* or newer. Wait for official and publicly available build.
Changeset:
User: Milos Kleint <mkleint@netbeans.org>
Log: #236846 once negotiated what namespace is necessary and what is used, always consistently use the existing namespace when creating elements

verified in patch 2
Azure DevOps Server 2019 Update 1 Release Notes

Developer Community | System Requirements and Compatibility | License Terms | DevOps Blog | SHA-1 Hashes

In this article, you will find information regarding the newest release for Azure DevOps Server 2019 Update 1. To download Azure DevOps Server products, visit the Azure DevOps Server Downloads page. To learn more, see Azure DevOps Server Requirements. Visit the visualstudio.com/downloads page to download Team Foundation Server products. Direct upgrade to Azure DevOps Server 2019 Update 1 is supported from Azure DevOps Server 2019 or Team Foundation Server 2012 and newer. If your TFS deployment is on TFS 2010 or earlier, you need to perform some interim steps before upgrading to Azure DevOps Server 2019 Update 1. Please see the Install page for more information.

Azure DevOps Server 2019 Update 1.1 Patch 11

Release Date: September 14, 2021

Patch 11 for Azure DevOps Server 2019 Update 1.1 includes fixes for the following.

- Resolve the issue reported in this Developer Community feedback ticket.

Azure DevOps Server 2019 Update 1.1 Patch 10

Release Date: August 10, 2021

Patch 10 for Azure DevOps Server 2019 Update 1.1 includes fixes for the following.

- Fix issue with email delivery jobs for some work item types.

Azure DevOps Server 2019 Update 1.1 Patch 9

Release Date: June 15, 2021

Patch 9 for Azure DevOps Server 2019 Update 1

Azure DevOps Server 2019 Update 1.1 Patch 8

Release Date: April 13, 2021

We have released a patch for Azure DevOps Server 2019 Update 1.1 that fixes the following.

- CVE-2021-27067: Information disclosure
- Resolve the issue reported in this Developer Community feedback ticket | Unable to register test result iteration details on Azure DevOps Server 2019

To implement fixes for this patch you will have to follow the steps listed below for general patch installation and AzureResourceGroupDeploymentV2 task installation.
General patch installation

If you have Azure DevOps Server 2019 Update 1.1, you should install Azure DevOps Server 2019 Update 1.1 Patch 8.

Verifying Installation

Option 1: Run devops2019.1.1patch8.exe CheckInstall; if patch 8 has been installed, the version will be 17.153.31129.2.

Azure DevOps Server 2019 Update 1.1 Patch 7

Release Date: January 12, 2021

We have released a patch for Azure DevOps Server 2019 Update 1.1.

Azure DevOps Server 2019 Update 1.1 Patch 6

Release Date: December 8, 2020

We have released a patch for Azure DevOps Server 2019 Update 1.1 that fixes the following. Please see the blog post for more information.

- CVE-2020-1325: Azure DevOps Server Spoofing Vulnerability
- CVE-2020-17135: Azure DevOps Server Spoofing Vulnerability
- CVE-2020-17145: Azure DevOps Server and Team Foundation Services Spoofing Vulnerability
- Fix issue with TFVC not processing all results

Important: Please read the full instructions provided below before installing this patch.

General patch installation

If you have Azure DevOps Server 2019 Update 1.1, you should install Azure DevOps Server 2019 Update 1.1 Patch 6.

Verifying Installation

Option 1: Run devops2019.1.1patch6.exe CheckInstall; if patch 6 has been installed, the version will be 17.153.30723.5.

AzurePowerShellV4 task installation

Note: All the steps mentioned below need to be performed on a Windows machine.

Prerequisites

Install the Azure PowerShell Az module (Azure PowerShell) on your private agent machine. Create a pipeline with the AzurePowerShellV4 task. You will only see one Fail on Standard Error in the task.

Install

Extract the AzurePowerShellV4.zip package to a folder named AzurePowerShellV4. The path of the extracted package will be D:\tasks\AzurePowerShellv4.

~$ tfx build tasks upload --task-path <Path of the extracted package>

Azure DevOps Server 2019 Update 1.1 Patch 5

Release Date: September 8, 2020

We have released a patch for Azure DevOps Server 2019 Update 1.1 that fixes the following. Please see the blog post for more information.
- DTS 1713492 - Unexpected behavior while adding AD groups to security permissions.

Azure DevOps Server 2019 Update 1.1 Patch 4

Release Date: July 14, 2020

We have released a patch for Azure DevOps Server 2019 Update 1.1 that fixes the following. Please see the blog post for more information.

- CVE-2020-1326: Cross-site Scripting Vulnerability
- Build pipeline shows incorrect connection for unauthorized users when selecting Other Git source.
- Fix error when changing Inheritance to On or Off in XAML build definition.

Azure DevOps Server 2019 Update 1.1 Patch 3

Release Date: June 9, 2020

We have released a patch for Azure DevOps Server 2019 Update 1.1 that fixes the following. Please see the blog post for more information.

- CVE-2020-1327: Ensure that Azure DevOps server sanitizes user inputs.

Azure DevOps Server 2019 Update 1.1 Patch 2

Release Date: April 14, 2020

We have released a patch for Azure DevOps Server 2019 Update 1.1 that fixes the following. Please see the blog post for more information.

- SVN commits do not trigger pipeline
- Adding support for SHA2 in SSH on Azure DevOps

Azure DevOps Server 2019 Update 1.1 Patch 1

Release Date: March 10, 2020

We have released a security patch for Azure DevOps Server 2019 Update 1.1 that fixes the following bugs. Please see the blog post for more information.

- CVE-2020-0700: Cross-site Scripting Vulnerability
- CVE-2020-0758: Elevation of Privilege Vulnerability
- CVE-2020-0815: Elevation of Privilege Vulnerability

Azure DevOps Server 2019 Update 1.1 RTW

Release Date: December 10, 2019

Azure DevOps Server 2019 Update 1.1 is a roll-up of bug fixes and security updates. It includes all fixes in the Azure DevOps Server 2019 Update 1 patches previously released. You can directly install Azure DevOps Server 2019 Update 1.1 or upgrade from Azure DevOps Server 2019 or Team Foundation Server 2012 or newer.
Note: The Data Migration Tool will be available for Azure DevOps Server 2019 Update 1.1 about three weeks after this release. You can see our list of currently supported versions for import here.

This release includes fixes for the following bugs:

Azure Boards

- When creating a new work item from the product backlog, the Title field is not initialized with the default value in the process template.
- Slowness and timeouts when using Azure Boards.
- The Revised By value is incorrect on work item links.

Azure Pipelines

- In Pipelines notifications, fields such as Duration may be null in some locales.
- Template path may not point to a valid JSON file in a Pipeline that includes an Azure Resource Group Deployment.
- The collection-level retention settings page appears in the project settings pages.

Azure Test Plans

- Editing fields in Test Plans is slow.
- In a Test Case, when opening from Boards (as opposed to Test Plans), the Shared Step details do not open.

General Administration

- High memory usage.
- Servers with load balancer configurations had to explicitly add their public origin to the AllowedOrigins registry entry.
- Customers who install on SQL Azure do not see the Complete Trial dialog.
- Installing extensions gives the error "Error message Missing contribution (ms.vss-dashboards-web.widget-sdk-version-2)".
- When setting up Elastic Search, there is an error: "User is unauthorized".
- Indexing and query failures in Elastic Search when upgrading from TFS 2018 Update 2 or newer.
- "Create Warehouse" step fails when configuring Azure DevOps Server.

This release includes the following update:

- Support for SQL Server 2019.

Azure DevOps Server 2019 Update 1 Patch 1

Release Date: September 10, 2019

We have released a security patch for Azure DevOps Server 2019 Update 1 that fixes the following bug. Please see the blog post for more information.
- CVE-2019-1306: Remote code execution vulnerability in Wiki

Azure DevOps Server 2019 Update 1

Release Date: August 20, 2019

Note: The Data Migration Tool will be available for Azure DevOps Server 2019 Update 1 about three weeks after this release. You can see our list of currently supported versions for import here.

RC2

Release Date: July 23, 2019

RC2 includes several bug fixes since RC1 and is the final planned prerelease.

RC1

Release Date: July 2, 2019

Summary of What's New in Azure DevOps Server 2019 Update 1

You can also jump to individual sections to see the new features:

General

Dark Theme

The dark theme has been a popular feature on Azure DevOps Services and it is now available in Azure DevOps Server. You can turn on dark theme by selecting Theme from the menu underneath your avatar in the top right of every page.

Boards

New Basic process

We recommend that you use Issues to track things like user stories, bugs, and features while using Epics to group Issues together into larger units of work. As you make progress on your work, move items along a simple state workflow of To Do, Doing, and Done. See the track issues and tasks documentation to help you get started with your new project.

State value order on work item form

Previously, the state value on the work item form was ordered alphabetically. With this update, we changed how the state values are ordered to match the workflow order in the process settings. You can also change the order of the states in each category in the state customization settings.

Feature Enablement is no longer available

Customers will need to manually update the XML for each project in order to enable new features after upgrading their collection. Refer to the documentation to learn how to enable specific features.

Edit and delete discussion comments

We're excited to announce the availability of a highly voted Developer Community feature, edit and delete of comments in your work item's discussion in Azure Boards.
To edit your comment, simply hover over any comment that you own, and you will see two new buttons. If you click the pencil icon, you will enter into edit mode and can simply make your edits and press the "Update" button to save them.

When you click the overflow menu, you will see the option to delete your comment. Once you click this, you will be prompted again to confirm that you want to delete this comment, and the comment will be deleted. You will have a full trace of all the edited and deleted comments in the History tab on the work item form. You will also see that we've updated the UI of our discussion experience to make it feel more modern and interactive. We've added bubbles around comments to make it clearer where individual comments start and end.

Export query results to a CSV file

You can now export query results directly to a CSV format file from the web.

Navigate to Azure Boards work items directly from mentions in any GitHub comment

See the Azure Boards GitHub integration documentation for more information.

Accept and execute on issues in GitHub while planning in Azure Boards

Now you can link work items in Azure Boards with related issues in GitHub. With this new type of linking, several other scenarios are now possible. If your team wants to continue accepting bug reports from users, for example, as issues within GitHub, but relate and organize the team's work overall in Azure Boards, now you can. The same mention syntax your team uses for commits and pull requests still applies, and of course you can link manually in Azure Boards with the issue URL. See the GitHub & Azure Boards documentation for more information.

Quickly view linked GitHub activity from the Kanban board

When reviewing the Kanban board yourself or as a team, you often have questions such as "has this item started development yet?" or "is this item in review yet?"
With the new GitHub annotations on the Kanban board, now you can get a quick sense of where an item is and directly navigate to the GitHub commit, pull request, or issue for more detail. See the Customize cards documentation for more information about this and the other annotations for Tasks and Tests.

Repos

Rerun expired build for auto-complete pull requests

Azure Repos will now automatically queue expired builds that have been triggered by a pull request policy. This applies to pull requests that have passed all other policies and are set to auto-complete.

Previously, when pull requests had policies like required reviewers, the approval process could take too long and an associated build could expire before a reviewer approved the pull request. If the pull request was set to auto-complete it would remain blocked until a user manually queued the expired build. With this change the build will be queued automatically so that the pull request can auto-complete after a successful build.

Note: This automation will only queue up to five expired builds per pull request and will only attempt to re-queue each build once.
With this update, we added an extensibility point that allows extensions to add syntax highlighting and autocomplete to the file explorer and pull requests views. You can find an example of an extension demonstrating this feature here. In addition, we added support for Kusto language syntax highlighting. Repository creation extension point We've added an extension point to allow you to add new items to the repository picker. This extension point will let you add custom actions (redirects, popups, etc) to the repository picker menu, enabling flows like alternate repository creation scenarios. Improved encoding support Previously, editing and saving files on the web would only save as UTF-8 encoding and we did not prompt you when the file encoding changed. Now, we will give you a warning when you try to save a file that is not UTF encoded via the web (which only supports UTF encoding). In addition, we added support for UTF-16 and UTF-32 encoding via the web pushes endpoint. This means that we will preserve the encoding type so you don't have to rewrite them as UTF-8. The following screenshot shows and example of the dialog that you will see when you introduce encoding changes by a web push. Go get command support in Azure Repos Go is an open source programming language, also referred to as Golang. In Go, you can use the get command to download and install packages and dependencies. With this update, we've added support for go get within an Azure DevOps repository. With go get, you will be able to download packages with their dependencies named by the import paths. You can use the import key word to specify the import path. Pipelines Web editor with IntelliSense for YAML pipelines If you use YAML to define your pipelines, you can now take advantage of the new editor features introduced with this release. Whether you are creating a new YAML pipeline or editing an existing YAML pipeline, you will be able to edit the YAML file within the pipeline web editor. 
Use Ctrl+Space for IntelliSense support as you edit the YAML file. You will see the syntax errors highlighted and also get help on correcting those errors. Task assistant for editing YAML files We continue to receive a lot of feedback asking to make it easier to edit YAML files for pipelines, so: - main - releases/* autoCancel: false Choose the directory of checked out code in YAML pipelines Previously, we checked out repos to the s directory under $(Agent.BuildDirectory). Now you can choose the directory where your Git repo will be checked out for use with YAML pipelines. Use the path keyword on checkout and you will be in control of the folder structure. Below is an example of the YAML code that you can use to specify a directory. steps: - checkout: self path: my-great-repo In this example, your code will be checked out to the my-great-repo directory in the agent's workspace. If you don't specify a path, your repo will continue to be checked out to a directory called s. New Azure App Service tasks optimized for YAML We now support four new tasks which provide an easy yet powerful way to deploy Azure App Services with modern developers in mind. These tasks have an optimized YAML syntax making it simple and intuitive to author deployments to Azure AppServices, including WebApps, FunctionApps, WebApps for Containers and FunctionApp for Containers on both Windows and Linux platforms. We also support a new utility task for file transformation and variable substitution for XML and JSON formats. Changes to default permissions for new projects Up until now, project contributors could not create pipelines unless they are explicitly given "Create build definition" permission. For new projects, your team members can readily create and update pipelines. This change will reduce the friction for new customers that are onboarding to Azure Pipelines. You can always update the default permissions on the Contributors group and restrict their access. 
Manage GitHub releases using pipelines

GitHub releases are a great way to package and provide software to users. We are happy to announce that you can now automate this using the GitHub Release task in Azure Pipelines. Using the task you can create a new release, modify existing draft/published releases or discard older ones. It supports features like uploading multiple assets, marking a release as pre-release, saving a release as draft and many more. This task also helps you create release notes. It can also automatically compute the changes (commits and associated issues) that were made in this release and add them to the release notes in a user-friendly format.

Here is the simple YAML for the task:

- task: GithubRelease@0
  displayName: 'Create GitHub Release'
  inputs:
    githubConnection: zenithworks
    repositoryName: zenithworks/pipelines-java
    assets: $(build.artifactstagingdirectory)/*.jar

A sample GitHub release created using this task:

Links to specific lines in a build log

You can now share a link to specific lines in the build log. This will help you when collaborating with other team members in diagnosing build failures. Simply select the lines of a log from the results view to get a link icon.

New extension contribution points in the Pipelines Test tab

Agent pool user interface update

The agent pools management page in project settings has been updated with a new user interface. Now you can easily see all the jobs that are running in a pool. In addition, you can learn why a job is not running.

Automatically redeploy on failure

When a deployment to a stage fails, Azure Pipelines can now automatically redeploy the last successful deployment. You can configure the stage to automatically deploy the last successful release by configuring the Auto-redeploy trigger in the post-deployment conditions. We plan to add additional triggered events and actions to the auto-redeploy configuration in a future sprint. See the Deployment groups documentation for more information.
Grafana annotations service hook

We now support a new service hook that lets you add Grafana annotations for Deployment Completed events to a Grafana dashboard. This allows you to correlate deployments with the changes in application or infrastructure metrics that are being visualized in a Grafana dashboard.

Query Azure Monitor alerts tasks

The previous version of the Query Azure Monitor task supported querying alerts only on the classic monitoring experience. With this new version of the task, you can query alerts on the unified monitoring experience recently introduced by Azure Monitor.

Inline input of spec file in Deploy to Kubernetes task

Previously, the Kubernetes deployment task required you to provide a file path for the configuration. Now you can add the configuration inline as well.

Docker CLI Installer task

This task allows installation of any version of the Docker CLI on the agents, as specified by the user.

Publish to Azure Service Bus session queues

We've extended the Agentless job build task to include the ability to publish messages to session queues. This option has been added to the Publish to Azure Service Bus task.

Duffle tool installer task in build and release pipeline

Duffle is a command line tool that allows you to install and manage Cloud Native Application Bundles (CNAB). With CNABs, you can bundle, install and manage container-native apps and their services. In this update, we added a new task for build and release pipelines that allows you to install a specific version of the Duffle binary.

Improvements to ServiceNow integration

A key capability for cross-team collaboration is to enable each team to use a service of their choice and have effective end-to-end delivery. With this update, we enhanced the ServiceNow integration.

Red Hat Enterprise Linux 6

With this update, we added agent support for Red Hat Enterprise Linux 6.
You can now configure agents targeting the Red Hat Enterprise Linux 6 platform for build and release job execution.

Azure Active Directory (AD) authentication support for Azure SQL task

The Azure SQL task has been enhanced to support connecting to a database using Azure AD (Integrated & Password) and a connection string, in addition to the existing support for SQL Server authentication.

Publish build artifacts with long file paths

Until now, there was a limitation that prevented uploading build artifacts with paths longer than 233 characters. This could prevent you from uploading code coverage results from Linux and macOS builds with file paths longer than the limit. The limit has been updated to support long paths.

Skip continuous integration (CI) for a commit

Test Plans

Test result trend (Advanced) widget

Share test run results via URL

You can configure automated tests to run as part of a build or release. The published test results can be viewed in the Tests tab in the build or release summary. With this update, we added a Copy results URL feature so you can share a single test run's results with others on your team. The sharing levels include:

- Run level
- Result level
- Individual tab selected within the test run
- Sharing is also compatible with any extension tabs configured

When you share the URL, viewers will see the test run results in the full screen view.

Artifacts

NuGet packages with SemVer 2.0.0 version numbers

Previously, Azure Artifacts did not support NuGet packages with SemVer 2.0.0 version numbers (generally, version numbers that contain the build metadata portion of the version, which is signified by a +). Now you can save packages from nuget.org that contain build metadata and push your own packages with build metadata. Per the SemVer spec and NuGet.org policy, build metadata cannot be used to order packages.
So, you cannot publish both 1.0.0+build1 and 1.0.0+build2 to Azure Artifacts (or nuget.org), as those versions will be considered equivalent and thus subject to the immutability constraints.

Provenance information on packages

With this update, we've made it a bit easier to understand the provenance of your packages: who or what published them and what source code commit they came from. This information is populated automatically for all packages published using the NuGet, npm, Maven, and Twine Authenticate (for Python) tasks in Azure Pipelines.

Package usage stats

Until now, Azure Artifacts didn't provide a way to gauge the usage or popularity of packages. With this update, we added a count of Downloads and Users to both the package list and package details pages. You can see the stats on the right side of either page.

Support for Python Packages

Azure Artifacts can now host Python packages: both packages you produce yourself and upstream packages saved from the public PyPI. For more details, see the announcement blog post and the docs. You can now host all of your NuGet, npm, Maven, and Python packages in the same feed.

Upstream sources for Maven

Upstream sources are now available for Maven feeds. This includes the primary Maven Central repository and Azure Artifacts feeds. To add Maven upstreams to an existing feed, visit Feed settings, select the Upstream sources pivot, then select Add upstream source.

Proxy support for Artifacts-related tasks

Until now, many Artifacts-related build tasks didn't provide full support for Azure Pipelines' proxy infrastructure, which led to challenges using the tasks from on-premises agents.
With this update, we've added support for proxies to the following tasks:

- Npm@1 ('npm' in the designer)
- NuGetCommand@2 ('NuGet' in the designer): restore and push commands only
- DotNetCoreCLI@2 ('.NET Core' in the designer): restore and nuget push commands only
- NpmAuthenticate@0, PipAuthenticate@0, and TwineAuthenticate@0 ('[type] Authenticate' in the designer): These tasks support proxies during the acquisition of auth tokens, but it is still necessary to configure any subsequent tasks/scripts/tools to also use the proxy. Put another way, these tasks do not configure the proxy for the underlying tool (npm, pip, twine).
- NuGetToolInstaller@0, NodeTool@0, DotNetCoreInstaller@0 ('[type] Installer' in the designer)

All Artifacts package types supported in releases

Until now, only NuGet packages have been supported in the Azure Artifacts artifact type in Pipelines releases. With this update, all Azure Artifacts package types - Maven, npm, and Python - are now supported.

Delegate who can manage feeds

In Azure Artifacts, Project Collection Administrators (PCAs) have always been able to administer all feeds in an Azure DevOps server. With this update, PCAs can also give this ability to other users and groups, thus delegating the ability to manage any feed.

Wiki

Markdown templates for formulas and videos

There is no longer a need to remember markdown syntax for adding formulas, videos, and YAML tags when editing a Wiki. You can now click on the context menu in the toolbar and select the option of your choice.

Permalinks for Wiki pages

Until now, shared Wiki page links broke if the linked page was renamed or moved. We've now added permalinks for Wiki pages, so shared links keep working even when a page is renamed or moved.

Reporting

Analytics extension no longer needed to use Analytics

Analytics is increasingly becoming an integral part of the Azure DevOps experience. It is an important capability for customers to help them make data-driven decisions. For Update 1, we're excited to announce that customers no longer need the Analytics extension to use Analytics.
Customers can now enable Analytics under Project Collection Settings. It's a simple process that's right within the product. Here is how customers can enable Analytics:

- Navigate to Project Collection Settings
- Click Enable Analytics

And that's it! Analytics-powered experiences will be turned on for the collection. New collections created in Update 1, and Azure DevOps Server 2019 collections that had the Analytics extension installed when they were upgraded, will have Analytics enabled by default.

To learn more about Analytics and the experiences it enables:

- Read more about enabling Analytics.
- Read the Analytics Overview documentation.
- Read up on the key features: Analytics Widgets, Top Failing Test Report, Power BI Integration, and the OData Endpoint.
- Watch this Channel 9 video on Azure DevOps Analytics.

Feedback

We would love to hear from you! You can report a problem or provide an idea and track it through Developer Community, and get advice on Stack Overflow.
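One of the key features listed above is the OData endpoint, which can be queried with any HTTP client. As a sketch of what composing such a query looks like from code (the URL layout below, server/collection/project/_odata/version, and the default version string are assumptions; verify them against your server's Analytics documentation):

```python
from urllib.parse import quote

def analytics_odata_url(server, collection, project, entity,
                        filter_expr=None, version="v2.0"):
    """Compose an Analytics OData query URL for an on-premises Azure DevOps server.

    The path layout and the default version are assumptions; check your deployment.
    """
    url = f"https://{server}/{collection}/{project}/_odata/{version}/{entity}"
    if filter_expr:
        url += "?$filter=" + quote(filter_expr)  # OData filter, percent-encoded
    return url

print(analytics_odata_url("devops.example.com", "DefaultCollection", "Fabrikam",
                          "WorkItems", "State eq 'Active'"))
```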
https://docs.microsoft.com/en-us/azure/devops/server/release-notes/azuredevops2019u1?view=azure-devops
Playing with the Arduino is definitely fun, especially when you start looking into fun little add-ons like the Arduino ENC28J60 Ethernet shield/module we looked at earlier. Now sending a "Hello World!" message might be cool, but it's hardly functional, and since the Arduino is great with all kinds of sensors (like the DS18B20 temperature sensor), why not combine the two to retrieve Arduino data over a network connection? With that comes the question … how the heck do I get my data? Obviously "see it in my web browser" is one option, but how about a server application that is supposed to store the sensor data? In this article we will look at a trick we can use to "Pull" data to our server, server application, or even web browser. This is probably the easiest way to get your data.

Hi, I tried using the code you provide here. However, the code just hangs. I use a Nano v3 + Ethernet shield. I like the idea of having some extra details in the XML, so I tried your code. The serial reports it launches, but that's it. Any suggestions on how to find the problem? Thanks, Robert

Hi Robert, I'm assuming you mean the W5100 Ethernet shield? The one Arduino sells? In that case you'd need the regular Ethernet library and not the UIPEthernet library (I assume you already tried that). Unfortunately I'm not experienced with the regular Ethernet library, but I'm planning on testing that in the near future (I won't be able to test it within the next 2 months). Both libraries should be interchangeable, but I can imagine that there are minor differences. I'd add a few "Serial.println("xyz");" lines to see where it hangs, if it hangs. If anything, I'd check out lines 35–77. hans

Hi Hans, No, I am actually using an Arduino Nano V3.0 + ENC28J60 Ethernet shield.
For some reason, however, the code hangs the moment it connects to the Arduino; somehow the sensor is not read out. If I run a standalone test for the TCP web server, it works fine. If I run the temp sensor code separately, it runs fine. I will try to do some debugging with your suggestions. I want this Nano + shield to work, because it's so darn nice and small. Thanks for your help, Robert

Hi Robert, Oh OK … I'm planning to use the Nano for the exact same reason, just never got to it. Unlikely, but could the power supply be too low? I've seen quite a few people using this setup without issues, and somehow I doubt the Nano is to blame. hans

Hi Hans, OK. I will go and try. If I find a solution I will let you know. As far as I can see your code should just work, even though there is little memory space left (almost 30k of code). L8r, Robert

Weird, this is the serial output: The sketch does not print any sensor data at all. But when I run the sensor-only sketch: The weird thing is that the code does not respond to the "ping" command either, which it should. Hmmm, any suggestions on next steps are welcome. Robert

Seems "TemperatureToXML()" is not doing anything … that's weird. Are you using the same pins as in the example above – I assume you wired it exactly as described, keeping in mind the physical differences of the Nano of course? What's the wattage/amps of the 5V power supply you're using (although I doubt that would be the problem)? Can you try a strong 5V power supply? I've had some cheap Chinese PSUs in the past that claim a certain amperage but in the end do not even get close to what was claimed, which caused the Uno (and Raspberry Pi) to become unstable at times.
I’ve tried to find difference (electronically) between the Uno and the Nano,but could not find anything that would explain this … (I checked this list and the Arduino Forums and even Google for random blogs and such) hans Hans, Found the problem, sometimes you just need to put down your work for a while and then look again. I reread your responses and it became clear right away. The issue is that you cannot use pin 10 twice :-( I should have been more carefull when reading all the articles. It became clear right away after I looked at it again. The TCP server worked. The temperature reading worked. Just not at the same time. Because you can only use a pin once. So leasson learned: only use every pin just once… Thanks for you pointers, it helped. Robert Glad to hear that it works. The “reading too quick” problem happens to most of us ;-) hans Hi Hans, Thanks for your work! I started to build something similar and this helps a lot in the beginning.. I just have one improvement (from my point of view :-)). If you have only DS18B20 sensors on your 1wire bus, it takes a lot of time from opening the Webpage til get all the data. My advise is to let the sensors start conversion in the begining all at once: after that you can run your original while(ds.search(addr)) {…} without conversion and the big delay. This will help when you have 8 and plan for more sensors (my case) :-) and again, Thanks for practical example with perfect description! Michal Michal Hi Michal! Thank you very much for the suggested improvements! I do not have the chance to test it yet (no equipment nearby, since I’m not home), but I most certainly will add it to the “to do” list. Thank you for the compliment as well of course … hans hi Hans, First,Thanks for this work u have done here.It was very helpful for my project.I encountered a problem. 
Everything seems to work very well. I'm pulling the data from Ethernet as an HTML file. I want to write PHP code to get the HTML content and save it in my database. However, when I tried to use the simple_html_dom function, which extracts the contents of HTML, I seem to get the error "file_get_contents(): send of 20 bytes failed with errno=10053 An established connection was aborted by the software in your host machine. in C:\wamp\www\mytry\simple_html_dom.php on line 1081". How do I solve this problem? The way I want to do it is: in PHP, when I give an HTTP request to the IP address of the Ethernet shield, it should reply with an HTML file which I would then store in a database. Please help. Thanks :)

This is the PHP code:

<html>
<head>
</head>
</body>
<?php
include 'simple_html_dom.php';
$html = new simple_html_dom();
$html->load_file(''); // This is the IP address of the Ethernet shield. (This is where the error occurs)
?>
</body>
</html>

This is the Arduino code:

#include <SPI.h>
#include <Ethernet.h>

// Ethernet MAC address - must be unique on your network
byte mac[] = { 0x54, 0x34, 0x41, 0x30, 0x30, 0x31 };
// ethernet interface IP address (unique in your network)
IPAddress ip(192, 168, 1, 99);
// ethernet interface IP port (80 = http)
EthernetServer server(80);
EthernetClient client;

void setup() {
  // Open serial communications:
  Serial.begin(9600);
  // start the Ethernet connection and the server:
  Ethernet.begin(mac, ip);
  server.begin();
  Serial.println("Tweaking4All.com - Temperature Drone - v1.0");
  Serial.println("-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n");
  Serial.print("IP Address: ");
  Serial.println(Ethernet.localIP());
  Serial.println();
}

void loop() {
  // listen for incoming clients
  client = server.available();
  if (client) {
    Serial.println("-> New Connection\n");
    // an HTTP request ends with a blank line
    boolean currentLineIsBlank = true;
    while (client.connected()) {
      if (client.available()) {
        char c = client.read();
        if (c == '\n' && currentLineIsBlank) {
          client.println("<html>");
          Serial.println(" Collecting Sensor Data:");
          TemperaturesToXML();
          client.println("</html>");
          Serial.println("\n Done Collecting Sensor Data");
          break;
        }
        if (c == '\n') {
          // you're starting a new line
          currentLineIsBlank = true;
        } else if (c != '\r') {
          // you've gotten a character on the current line
          currentLineIsBlank = false;
        }
      }
    }
    // give the web browser time to receive the data
    delay(10);
    // close the connection:
    client.stop();
    Serial.println(" Disconnected\n");
  }
}

void TemperaturesToXML(void) {
  client.println("<head>");
  client.println("</head>");
  client.println("<body");
  client.println("<div class=\"sensorsdata\">");
  client.println("99"); // I wrote a sample HTML to check parsing of values to an HTML variable in PHP
  client.println("</div>");
  client.println("</body>");
  // Get Serial number
  delay(1000); // maybe 750ms is enough, maybe not
}

Eswaran

Hi Eswaran, sorry for the late reply – I had to think about this one, or better said: think about how I can help. You could try changing this part of the code: to: Maybe this way it will pick up the HTML correctly? Just a thought, I did not test this. hans

Hai Hans, Sorry for a long and messed-up comment. Actually, the problem was found to be the K9 Security software that I have installed on my system. It somehow did not allow the computer to communicate with the Arduino server. Thanks for the help and reply :) Eswaran

Good to hear you've got it resolved! And … thanks for posting your solution – others will benefit from it!
hans

First of all, thanks so much for the great code you posted. I need the basic keywords of UIPEthernet so that I can write code to send and receive data through Ethernet; I mean I need to deal with the ENC28J60 like serial ports. Moustafa NAbil

Hi Moustafa, I have no experience with using the ENC28J60 like a serial port … maybe one of the users/visitors here might know how and what is possible. Sorry I cannot be of more help … hans

It is like any other Ethernet driver; I just need to know how to code with the UIPEthernet library, exactly as if it were the Arduino shield. Moustafa NAbil

As far as I know, the UIPEthernet library is an almost 100% compatible drop-in for the standard Ethernet library that comes with the Arduino IDE (i.e. the "standard" Ethernet Shield). So it should mostly work the same as the standard shield. hans

I don't know how to use any Ethernet library. Moustafa NAbil

Then I'm not quite sure what you're asking …? hans

I have written 3 articles that deal with the ENC28J60 … the Generic Arduino Ethernet (ENC28J60 included) article, How to Push Data from Arduino to a server, and How to Pull data from the Arduino from a computer. They all come with examples; I'd start reading through those. (Sorry if this is not answering your question.) hans

Hi, I am a beginner in this field. I am now using the HTU21D temperature sensor for my project, so please send the TemperaturesToXML() function code for the HTU21D sensor. My project task is to save the sensed temperature values into a text file on my local PC. sivasankari

Hi Sivasankari, I wish I could help – I do not have that model sensor. Maybe others here have used this one and are willing to post their conversion code? By the way, I noticed that AdaFruit has a fully functional library for this sensor: AdaFruit HTU21D.
Maybe that helps. hans

When working with a Raspberry, I had to replace the following line:

client.println("<?xml version='1.0'?>\n<sensordata>");

with the code below:

client.println("HTTP/1.1 200 OK");
client.println("Content-Type: text/xml");
client.println("Connection: close"); // the connection will be closed after completion of the response
client.println();
client.println("<sensordata>");

The Arduino was sending the XML file correctly, but the Raspberry was not able to interpret this as XML. Thomas

Thanks Thomas! Good to know, and thank you for posting the workaround. Could it be that the following lines screw it up? I used "client.println" for the 3 cases, which would start a new line for the "</chip>" (the one with "default" should remain a println) – maybe using "client.print" resolves this issue? So the code could produce cleaner results by doing this: Just doing a stab in the dark here of course, but who knows. hans

I just modified the example code to make the XML look cleaner – of course, I do not know if this helps the issue you ran into. hans

UPDATE: Bug in the code (for a leading "0" for the address bytes, I used <10; this should however be <0x10) and cleaner XML output (<chip> and </chip> on the same line now). hans

Hello, Thanks for the nice tutorial. I've tried the code and successfully sent data from the Arduino to localhost, but I am unable to receive the content from the server.
I’ve modified the Arduino code as followed: #include <UIPEthernet.h> // Used for Ethernet // #define DEBUG // **** ETHERNET SETTING **** // Arduino Uno pins: 10 = CS, 11 = MOSI, 12 = MISO, 13 = SCK // Ethernet MAC address – must be unique on your network – MAC Reads T4A001 in hex (unique in your network) byte x; byte mac[] = { 0x54, 0x34, 0x41, 0x30, 0x30, 0x31 }; // For the rest we use DHCP (IP address and such) #define DEBUG true EthernetClient client; char server[] = “192.168.1.4”; // IP Adres (or name) of server to dump data to void setup() { Serial.begin(9600); // only use serial when debugging Ethernet.begin(mac); #ifdef DEBUG Serial.print(“IP Address : “); Serial.println(Ethernet.localIP()); Serial.print(“Subnet Mask : “); Serial.println(Ethernet.subnetMask()); Serial.print(“Default Gateway IP: “); Serial.println(Ethernet.gatewayIP()); Serial.print(“DNS Server IP : “); Serial.println(Ethernet.dnsServerIP()); #endif } void loop() { String inData; // if you get a connection, report back via serial: if (client.connect(server, 80)) { Serial.println(“-> Connected”); // only use serial when debugging // Make a HTTP request: client.print( “GET /quizee.club/gateway/arduino.php?”); client.print(“test”); // print: sensorx= client.print(“=”); client.print(x++); client.print( ” HTTP/1.1″); client.println( “Host: 192.168.1.3” ); client.println( “Host: “); client.print( server ); client.println( “Connection: close” ); client.println(); boolean currentLineIsBlank = true; String http_response; int response_start, response_end; while (client.connected()) { if (client.available()) { char c = client.read(); http_response += c; // if you’ve gotten to the end of the line (received a newline // character) and the line is blank, the http request has ended, // so you can send a reply if (c == ‘\n’ && currentLineIsBlank) { Serial.println( http_response ); break; } if (c == ‘\n’) { // you’re starting a new line currentLineIsBlank = true; } else if (c != ‘\r’) { // you’ve gotten a 
character on the current line currentLineIsBlank = false; } } } client.stop(); } else { Serial.println(“–> connection failed !!”); // only use serial when debugging } } This code sends the data from Arduino to the PHP server but in response the PHP server sends the message as following: -> Connected HTTP/1.1 200 OK Date: Tue, 25 Apr 2017 06:47:23 GMT Server: Apache/2.2.6 (Win32) PHP/5.2.5 X-Powered-By: PHP/5.2.5 Set-Cookie: PHPSESSID=8jj3eidsfaub76md3t0jq1kr76; path=/ Expires: Thu, 19 Nov 1981 08:52:00 GMT Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Pragma: no-cache Content-Length: 77 Content-Type: application/json; charset=utf-8 Everything is fine here, except I missed the content actually send by the server. What’s wrong with this? Bright
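One thing worth noting about the symptom above (headers arrive, body doesn't): in an HTTP response the headers are separated from the body by a single blank line, so a read loop that stops at the first blank line will print the headers and never reach the content. A small Python sketch of that structure, using a made-up response rather than the real server's output:

```python
def split_http_response(raw):
    """Split a raw HTTP response into (header_lines, body)."""
    head, _, body = raw.partition("\r\n\r\n")  # the blank line ends the headers
    return head.split("\r\n"), body

raw = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: application/json; charset=utf-8\r\n"
    "Content-Length: 14\r\n"
    "\r\n"
    '{"temp": 21.5}'
)
headers, body = split_http_response(raw)
print(headers[0])  # HTTP/1.1 200 OK
print(body)        # {"temp": 21.5}
```

In other words, the content the server sends is on the other side of that blank line; a reader has to keep consuming after the headers end to see it.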
https://www.tweaking4all.com/hardware/arduino/arduino-ethernet-data-pull/
In Scala, you can perform mixin-class composition. That is, you can reuse new member definitions of a class that represent the delta with its superclass in the definition of the new class. I'll dive deeper on traits and mixin-class composition with clear examples later. Now that I've defined traits, let me go back to our sample FirstApplication singleton object. If you add a line that starts with the override keyword to the FirstApplication object definition, you will override the main method defined in the App trait with your own implementation: object FirstApplication extends App { override def main(args: Array[String]) = Console.println("Let's put some text on the Console output.") } While you write the new line of code in the editor, you can take advantage of auto-complete features. For example, if you enter " Console." and wait a moment, the IDE displays all members for Console. Press p and the context menu will display all members for Console that start with " p" (see Figure 8). You can also press Ctrl + Space in the editor to show Scala completions (the same shortcut you use in Visual Studio). Figure 8: Using the auto-complete features to filter the members for Console that start with "p." As you might guess, Console is a singleton object. Console implements functionality for printing Scala values on the terminal and for reading specific values. You can right-click on Console and select Open Declaration in the context menu. The IDE will open a new code tab and display the declaration for the Console singleton object (see Figure 9). Figure 9: The declaration of the Console singleton object. In Scala, it is necessary to use the def keyword to declare a method. Because the main method in the FirstApplication object overrides the main method defined in the App trait, it is necessary to include the override keyword. You will notice that a green arrow appears at the left-hand side of the line that overrides the main method. 
When you hover the mouse over the green arrow, a tooltip displays the label " overrides scala.App.main," and the mouse pointer changes to a hand (Figure 10). Figure 10: The code editor displaying details about the overridden main method. You can click on the green arrow and the editor will navigate to the App trait and display the code for the main method (see Figure 11) that the FirstApplication trait is overriding. Both Eclipse and the Scala IDE include many features that would require you to install or buy plug-ins in Visual Studio 2012. As a result, you won't be disappointed by the IDE feature set when you start working with Scala. Figure 11: The code editor displaying the code for the main method in the App trait. If you've worked with Delphi, the way you specify the method arguments and types in Scala will feel familiar. The argument name goes first, then the type, separated by a colon. The main method receives an args argument of type Array[String]. Array[String] is equivalent to Array<String> in C#. Thus, Array[T] is equivalent to Array<T> in C#, where T is the desired type for the array elements. Arrays are zero based, as in C#, but you have to specify an index in parenthesis instead of square brackets. For example, you can use args(0) to access the first element in the Array[String] argument named args. The method doesn't return any value, so it isn't necessary to specify the return type. By default, a method that doesn't specify a return type has a Unit return type, which is equivalent to the well-known void in C#. You can specify a return type of Unit if you want to, so the following method declaration would be equivalent to the previous declaration: override def main(args: Array[String]): Unit Notice that you specify the return type, separated by a colon, after the closing parenthesis. 
If you are curious about the rationale for types appearing after the arguments, variables, and methods, I suggest you watch Martin Odersky's talk at FOSDEM 2009, where he describes the design principles of the Scala programming language. Odersky, who developed Scala, also explained this design in his interview with Dr. Dobb's. In both cases, he provides a clear example of the problems that modern complex types generate for type-inference mechanisms. Many developers coming from C, C++, C#, or Java backgrounds hate reading type declarations in a different way. If you listen to Odersky's detailed explanation, you may not fall victim to this hatred for type declarations in Scala. For example, in modern applications, it is common to find complex types such as Services.EnterpriseLibrary.Common.UpdateCustomerRequest. When you put the type before the method declaration, argument, or variable, you aren't just including a short word such as int, byte, or long, as was traditional in early C programs. Once you get used to the way Scala has been designed, it is easier to read and understand code containing complex type names, and the type-inference mechanism can work more efficiently.

After the main method declaration, you will notice an equal sign (=) and then a single line of code that calls the println method for the Console singleton object. Because there is just one statement in the code block for the method, the usage of curly braces is optional. Thus, the following two code blocks produce the same results and are valid:

// Option 1: Single statement without curly braces
object FirstApplication extends App {
  override def main(args: Array[String]) =
    Console.println("Let's put some text on the Console output.")
}

// Option 2: Single statement with curly braces
object FirstApplication extends App {
  override def main(args: Array[String]) = {
    Console.println("Let's put some text on the Console output.")
  }
}
However, Scala has the ability to infer semicolons, so they are optional. You only need them when multiline statement separations might not work as you expect, and you don't want to reorganize your lines based on the way the inference mechanism works. The following code block uses the optional semicolon and makes the code similar to what you'd expect when working with C#. Obviously, it is convenient to reduce semicolon usage in Scala because you won't find too many semicolons when you read Scala code written by the language experts. object FirstApplication extends App { override def main(args: Array[String]) = { Console.println("Let's put some text on the Console output."); } } Executing a Simple Scala Application and Running Code in the Interpreter Let's return to our sample FirstApplication. Select Run | Run or just press Ctrl + F11 and FirstApplication will execute its main method, which displays text on the Console output. If there are problems in the build, you will see the list in the Problems view and the errors highlighted in the code editor for the different source files. You can check the Problems view by selecting Window | Show View | Problems or pressing Alt + Shift + Q, X (see Figure 12). Figure 12: The Problems view displaying details for three errors. If you run the previously shown code (which doesn't have errors), the IDE will display "Let's put some text on the Console output" in the Console view (see Figure 13). If you don't see the Console view, you just need to select Window | Show View | Console or press Alt + Shift + Q, C. Then, make sure that the "Show console when standard out changes" option button is pressed (see Figure 14). This way, the IDE will automatically display the Console whenever your application writes text to it. Figure 13: The Console view displaying the results of running the simple FirstApplication Scala application. Figure 14: The "Show console when standard out changes" option button displaying its tooltip text. 
One of the best features of the Scala IDE is that it allows you to run a stand-alone piece of code in the integrated Scala interpreter. If you've worked with F#, you already know about the F# interactive window. You can take advantage of this REPL (Read-Evaluate-Print Loop) mechanism to test different pieces of code without having to build the entire application. For example, if you select the following piece of code " Console.println("Let's put some text on the Console output.");" in the editor for the FirstApplication trait, and then press Ctrl + Shift + X or click the Run selection in Scala interpreter button, the IDE will display the Scala Interpreter view, evaluate the selected code, and print the results of the evaluation. In this case, the result of the evaluation will be displaying the text (see Figure 15). Figure 15: The Scala interpreter displaying the results of executing the selected code. Conclusion In this article, I've provided a brief overview about the Scala IDE for those used to spending their days with Visual Studio. In addition, I've gotten you up and running with Scala. Obviously, I still need to dive deep into the differences between Scala and C#, but I wanted to offer a quick introduction and help you start working with a completely new IDE. In the next article, I'll discuss the unique way Scala works with immutable and mutable variables, method names, operators, and type inference, and you'll see why Scala is becoming so popular. Gaston Hillar is a frequent contributor to Dr. Dobb's. Related Article Scala for C# Developers: Useful Features
http://www.drdobbs.com/mobile/mobile/scala-for-c-developers-a-tutorial/240156877?pgno=2
ornek is an instance of a class; ornek is my object that I defined to use, and on that line one of its values has been set. I think I found the error point; there is a part of the code below. I need to...

356 gidilen=0;
357 ornek.set_g(gidilen);

These are snippets of my code. Yes, it has a zero value, not null; I need to set a zero value for the zero step in my algorithm. Why does Java suppose that it...

Sorry for the huge exceptions :( Yes, it is working after I changed Arraylist to ArrayList and deleted my Arraylist class :)

Now I finished my artificial intelligence lecture homework project, but I'm getting these exceptions. ... Thanks for the great answer o:-):o :)>- It is OK; so sorry, the problem is that I wrote Arraylist, not ArrayList X_X

Exception in thread "main" java.lang.NullPointerException
at yuzswing.denem.Arraylist.add(Arraylist.java:10)
at yuzswing.denem.ArrayListDeneme.main(ArrayListDeneme.java:10)

When I click... First I tried to create a type of String before using my object, but I couldn't do it. What is my problem in my code? My class is: import java.util.ArrayList; public class ArrayListDeneme {...

I have an algorithm problem and I have created a class for my objects. The hard problem is: how can I create a matrix that has increasing size, where every row in the matrix is also increasing, for...

I have created a matrix of buttons and I want to listen to events of this matrix. I need to take the coordinates of the matrix (X, Y) when a button is clicked, and I'll use these coordinates for my other operations in...

Ovv, it is OK! Now I made it. Problem was solved :) It is needed to define every button in a for loop like this: for (int i = 0; i < 5; i++) { // butonlar[i]=new JButton[5]; for...
My codes now like that:

import java.awt.Dimension;
import java.awt.GridLayout;
import java.awt.Toolkit;
...

Hi java programmers! I got a run time problem with my code in below

import java.awt.Dimension;
import java.awt.GridLayout;
import java.awt.Toolkit;
import javax.swing.JButton;
import...

Sometimes i write a code in netbeans and try to run but it doesnt. i delete one letter and write it again after that it is working For example piece line of code is like that

ArrayList<String>...

thanks alot it is working
http://www.javaprogrammingforums.com/search.php?s=8b01c5e970d561f8a58feddcde28e266&searchid=1813554
Created on 2012-10-25 11:56 by msmhrt, last changed 2018-07-05 15:47 by p-ganssle.

OS: Windows 7 Starter Edition SP1 (32-bit) Japanese version
Python: 3.3.0 for Windows x86 (python-3.3.0.msi)

time.tzname on Python 3.3.0 for Windows is decoded with the wrong encoding.

>>> import time
>>> time.tzname[0]
'\x93\x8c\x8b\x9e (\x95W\x8f\x80\x8e\x9e)'
>>> time.tzname[0].encode('iso-8859-1').decode('mbcs')
'東京 (標準時)'
>>>

'東京 (標準時)' means 'Tokyo (Standard Time)' in Japanese.

time.tzname on Python 3.2.3 for Windows works correctly.

C:\Python32>python.exe
Python 3.2.3 (default, Apr 11 2012, 07:15:24) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import time
>>> time.tzname[0]
'東京 (標準時)'
>>>

I see in 3.3 PyUnicode_DecodeFSDefaultAndSize() was replaced by PyUnicode_DecodeLocale(). What do sys.getdefaultencoding(), sys.getfilesystemencoding(), and locale.getpreferredencoding() show?

Looking at the CRT source code, tznames should be decoded with mbcs. See also.

As I understand, the OP has a UTF-8 locale.

> I see in 3.3 PyUnicode_DecodeFSDefaultAndSize() was replaced
> by PyUnicode_DecodeLocale().

Related changes:
- 8620e6901e58 for the issue #5905
- 279b0aee0cfb for the issue #13560

I wrote 8620e6901e58 for Linux, when the wcsftime() function is missing. The problem is the changeset 279b0aee0cfb: it introduces a regression on Windows. It looks like PyUnicode_DecodeFSDefault() and PyUnicode_DecodeLocale() use a different encoding on Windows. I suppose that we need to add an #ifdef MS_WINDOWS to use PyUnicode_DecodeFSDefault() on Windows, and PyUnicode_DecodeLocale() on Linux.

See also the issue #10653: time.strftime() uses strftime() (bytes) instead of wcsftime() (unicode) on Windows, because wcsftime() and tzname format the timezone differently.

> What do sys.getdefaultencoding(), sys.getfilesystemencoding(), and locale.getpreferredencoding() show?
>>> import sys
>>> sys.getdefaultencoding()
'utf-8'
>>> sys.getfilesystemencoding()
'mbcs'
>>> import locale
>>> locale.getpreferredencoding()
'cp932'
>>>

'cp932' is the same as 'mbcs' in the Japanese environment.

> >>> sys.getfilesystemencoding()
> 'mbcs'
> >>> import locale
> >>> locale.getpreferredencoding()
> 'cp932'
> >>>
>
> 'cp932' is the same as 'mbcs' in the Japanese environment.

And what is the value of locale.getpreferredencoding(False)?

> And what is the value of locale.getpreferredencoding(False)?

>>> import locale
>>> locale.getpreferredencoding(False)
'cp932'
>>>

See also the issue #836035.

According to the CRT source code:
- tzset() uses WideCharToMultiByte(lc_cp, 0, tzinfo.StandardName, -1, tzname[0], _TZ_STRINGS_SIZE - 1, NULL, &defused) with lc_cp = ___lc_codepage_func().
- wcsftime("%z") and wcsftime("%Z") use _mbstowcs_s_l() to decode the time zone name

I tried to call ___lc_codepage_func(): it returns 0. I suppose that it means that mbstowcs() and wcstombs() use the ANSI code page.

Instead of trying to guess what is the correct encoding, it would be simpler (and safer) to read the Unicode version of the tzname array: StandardName and DaylightName of GetTimeZoneInformation(). If anything is changed, time.strftime(), time.strptime(), datetime.datetime.strftime() and time.tzname must be checked (with the "%Z" format).

"Instead of trying to guess what is the correct encoding, it would be simpler (and safer) to read the Unicode version of the tzname array: StandardName and DaylightName of GetTimeZoneInformation()."

GetTimeZoneInformation() formats timezone names correctly, but it reintroduces issue #10653: time.strftime("%Z") formats the timezone name differently. See also issue #13029, which is a duplicate of #10653 but contains useful information.

--

Example on Windows 7 with a French setup configured to Tokyo's timezone. Using GetTimeZoneInformation(), time.tzname is ("Tokyo", "Tokyo (heure d\u2019\xe9t\xe9)"). U+2019 is the "RIGHT SINGLE QUOTATION MARK".
This character is usually replaced with U+0027 (APOSTROPHE) in ASCII. time.strftime("%Z") gives "Tokyo (heure d'\x81\x66ete)" (if it is implemented using strftime() or wcsftime()).

--

If I understood correctly, Python 3.3 has two issues on Windows:
* time.tzname is decoded from the wrong encoding
* time.strftime("%Z") gives an invalid output

The real blocker issue is a bug in strftime() and wcsftime() in the Windows CRT. A solution is to replace "%Z" with the timezone name before calling strftime() or wcsftime(), i.e. working around the Windows CRT bug.

Is there any progress on this issue?

Could somebody respond to the originator, please.

I have just observed the same behaviour for the Czech locale. I tried to avoid collisions with stdout encoding, writing the strings into a file using UTF-8 encoding:

tzname_bug.py
--------------------------------------------------
#!python3
import time
import sys

with open('tzname_bug.txt', 'w', encoding='utf-8') as f:
    f.write(sys.version + '\n')
    f.write('Should be: Střední Evropa (běžný čas) | Střední Evropa (letní čas)\n')
    f.write('but it is: ' + time.tzname[0] + ' | ' + time.tzname[1] + '\n')
    f.write(' types: ' + repr(type(time.tzname[0])) + ' | ' + repr(type(time.tzname[1])) + '\n')
    f.write('Should be as ascii: ' + ascii('Střední Evropa (běžný čas) | Střední Evropa (letní čas)') + '\n')
    f.write('but it is as ascii: ' + ascii(time.tzname[0]) + ' | ' + ascii(time.tzname[1]) + '\n')
--------------------------------------------------

It creates tzname_bug.txt with the following content (copy/pasted from a Unicode-capable editor, Notepad++; the indicator at the right bottom corner shows UTF-8):
-----------------------------------
3.5.0 (v3.5.0:374f501f4567, Sep 13 2015, 02:27:37) [MSC v.1900 64 bit (AMD64)]
Should be: Střední Evropa (běžný čas) | Střední Evropa (letní čas)
but it is: Støední Evropa (bìný èas) | Støední Evropa (letní èas)
 types: <class 'str'> | <class 'str'>
Should be as ascii: 'St\u0159edn\xed Evropa (b\u011b\u017en\xfd \u010das) | St\u0159edn\xed Evropa (letn\xed \u010das)'
but it is as ascii: 'St\xf8edn\xed Evropa (b\xec\x9en\xfd \xe8as)' | 'St\xf8edn\xed Evropa (letn\xed \xe8as)'
-----------------------------------

To decode the tzname strings, Python calls mbstowcs, which on Windows uses Latin-1 in the "C" locale. However, in this locale the tzname strings are actually encoded using the system ANSI codepage (e.g. 1250 for Central/Eastern Europe). So it ends up decoding ANSI strings as Latin-1 mojibake. For example:

>>> s
'Střední Evropa (běžný čas) | Střední Evropa (letní čas)'
>>> s.encode('1250').decode('latin-1')
'Støední Evropa (bì\x9ený èas) | Støední Evropa (letní èas)'

You can work around the inconsistency by calling setlocale(LC_ALL, "") before anything imports the time module. This should set a locale that's not "C", in which case the codepage should be consistent. Of course, this won't help if you can't control when the time module is first imported. The latter wouldn't be an issue if time.tzset were implemented on Windows. You can at least use ctypes to call the CRT's _tzset function. This solves the problem with time.strftime('%Z'). You can also get the CRT's tzname by calling the exported __tzname function.
Here's a Python 3.5 example that sets the current thread to use Russian and creates a new tzname tuple:

import ctypes
import locale

kernel32 = ctypes.WinDLL('kernel32')
ucrtbase = ctypes.CDLL('ucrtbase')

MUI_LANGUAGE_NAME = 8
kernel32.SetThreadPreferredUILanguages(MUI_LANGUAGE_NAME, 'ru-RU\0', None)
locale.setlocale(locale.LC_ALL, 'ru-RU')

# reset tzname in current locale
ucrtbase._tzset()
ucrtbase.__tzname.restype = ctypes.POINTER(ctypes.c_char_p * 2)
c_tzname = ucrtbase.__tzname()[0]
tzname = tuple(tz.decode('1251') for tz in c_tzname)

# print Cyrillic characters to the console
kernel32.SetConsoleOutputCP(1251)
stdout = open(1, 'w', buffering=1, encoding='1251', closefd=0)

>>> print(tzname, file=stdout)
('Время в формате UTC', 'Время в формате UTC')

I have worked around it a bit differently -- a snippet from the code:

result = time.tzname[0]  # simplified version of the original code.

# Because of the bug in Windows libraries, Python 3.3 tried to work around
# some issues. However, the shit hit the fan, and the bug bubbled here.
# The `time.tzname` elements are (unicode) strings; however, they were
# filled with bad content. See for details.
# Actually, wrong characters were passed instead of the good ones.
# This code should be skipped later by versions of Python that will fix
# the issue.
import platform
if platform.system() == 'Windows':
    # The concrete example for the Czech locale:
    # - cp1250 (windows-1250) is used as the native encoding
    # - the time.tzname[0] should start with 'Střední Evropa'
    # - the ascii('Střední Evropa') should return "'St\u0159edn\xed Evropa'"
    # - because of the bug it returns "'St\xf8edn\xed Evropa'"
    #
    # The 'ř' character has unicode code point `\u0159` (that is hex)
    # and the `\xF8` code in cp1250. The `\xF8` was wrongly used
    # as a Unicode code point `\u00F8` -- this is for the Unicode
    # character 'ø' that is observed in the string.
    #
    # To fix it, the `result` string must be reinterpreted with a different
    # encoding.
    # When working with Python 3 strings, it can probably be
    # done only through the string representation and `eval()`. Here
    # the `eval()` is not very dangerous because the string was obtained
    # from the OS library, and the values are limited to a certain subset.
    #
    # The `ascii()` literal is prefixed by the `binary` type prefix character,
    # `eval`uated, and the binary result is decoded to the correct string.
    local_encoding = locale.getdefaultlocale()[1]
    b = eval('b' + ascii(result))
    result = b.decode(local_encoding)

> local.

@eryksun: I see. In my case, I can set the locale before importing the time module. However, the code (asciidoc3.py) will be used as a module, and I cannot know if the user imported the time module or not. Instead of your suggestion

result = result.encode('latin-1').decode('mbcs')

I was thinking to create a module, say workaround16322.py, like this:

---------------
import locale
locale.setlocale(locale.LC_ALL, '')

import importlib
import time
importlib.reload(time)
---------------

I thought that reloading the time module would be the same as importing it later, after setting the locale. If that worked, the module could be simply imported wherever it was needed. However, it does not work when imported after importing time. What is the reason? Does reload() work only for modules coded as Python sources?

Is there any other approach that would implement the workaroundXXX.py module?

> import locale
> locale.setlocale(locale.LC_ALL, '')
>
> import importlib
> import time
> importlib.reload(time)
>
> it does not work when imported after importing time.
> What is the reason? Does reload() work only for
> modules coded as Python sources?

The import system won't reinitialize a builtin or dynamic extension module. Reloading just returns a reference to the existing module. It won't even reload a PEP 489 multi-phase extension module. (But you can create and exec a new instance of a multi-phase extension module.)
> Is there any other approach that would implement the
> workaroundXXX.py module?

If the user's default locale and the current thread's preferred language are compatible with the system ANSI encoding [1], then you don't actually need to call _tzset nor worry about time.tzname. Call setlocale(LC_CTYPE, ''), and then call time.strftime('%Z') to get the timezone name.

If you use Win32 directly instead of the CRT, then none of this ANSI business is an issue. Just call GetTimeZoneInformation to get the standard and daylight names as wide-character strings. You have that option via ctypes.

[1]: A user can select a default locale (language) that's unrelated to the system ANSI locale (the ANSI setting is per machine, located under Region -> Administrative). Also, the preferred language can be selected dynamically by calling SetThreadPreferredUILanguages or SetProcessPreferredUILanguages. All three could be incompatible with each other, in which case you have to explicitly set the locale (e.g. "ru-RU" instead of an empty string) and call _tzset.

@eryksun: Thanks for your help. I have finally ended up with your...

"Call setlocale(LC_CTYPE, ''), and then call time.strftime('%Z') to get the timezone name."

bpo-31549 has been marked as a duplicate of this issue.

Formatting the timezone on Windows in the right encoding is an old Python (especially Python 3) issue:
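The cp1250/Latin-1 mojibake discussed in this thread, and both repair strategies mentioned above, can be reproduced on any platform with a few lines (a sketch using the Czech strings from this report; no Windows is needed, because the bad decode is simulated explicitly):

```python
# Simulate the bug: cp1250 bytes mis-decoded as Latin-1.
s = 'Střední Evropa'
mojibake = s.encode('cp1250').decode('latin-1')
print(mojibake)  # Støední Evropa

# Repair 1: encode back with the codec that was wrongly used for
# decoding (Latin-1), recovering the raw bytes, then decode correctly.
print(mojibake.encode('latin-1').decode('cp1250') == s)  # True

# Repair 2: the ascii()/eval() trick from this thread -- rebuild the
# raw bytes from the string's escaped representation, then decode.
b = eval('b' + ascii(mojibake))
print(b.decode('cp1250') == s)  # True
```

Repair 1 works because every cp1250 byte mis-decoded as Latin-1 maps to a code point below U+0100, so the Latin-1 re-encode is lossless.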
https://bugs.python.org/issue16322
Programming is mainly about handling data. As a Python developer, you'll find ways to store data in a manner that is consistent with your objectives. Sometimes, you'll need to preserve the order of data insertion in a set, for example, if you are handling bank transactions. Each transaction has to be unique, and it is important to preserve the order in which transactions are created. Python's ordered sets help you to do just that. In this article, we will explain the programming concept of an ordered set, before showing you how to create one in a Python program.

What Is a Set in Python?

In the Python programming language, a set is a collection of unique elements. It is a hash table-based data structure with undefined element ordering. You can browse a set's elements, add or remove them, and perform the standard set operations of union, intersection, complement, and difference. Unlike lists, ordinary sets do not preserve the order in which we insert the elements. This is because the elements in a set are usually not stored in the order in which they appear.

What Is an Ordered Set?

Unlike in a standard set, the order of the data in an ordered set is preserved. We use ordered sets when we need the order in which we entered the data to be maintained over the course of the program. Iterating over an ordered set always yields the elements in insertion order, which a standard set does not guarantee.

How To Create an Ordered Set in Python

Python allows you to create ordered sets in your programs. Below we'll demonstrate two ways to do so: using Python's ordered-set package, and a manual method. But first, let's establish a context. Let's say you're developing an app for a bank in which you need to record transaction numbers one after another in a summary document. Each bank transaction operation is unique. Also, you want the order in which transactions are made to reflect exactly in your data set.
This is a perfect opportunity for you to use the OrderedSet class included in Python's ordered-set package.

Python's Ordered Set Class

The simplest way to create an ordered set in Python is to use the OrderedSet class. Note that this class is not included by default. You first need to make sure you have the ordered-set package installed. Note that ordered-set is a third-party package, and its functionality can change independently of the version of Python that you're using. To install the package, type the following command in your terminal:

pip install ordered-set

This will enable you to use the OrderedSet class. Now, you can create a Python program that uses the OrderedSet class. Let's see what a simple ordered set looks like:

from ordered_set import OrderedSet
setTest = OrderedSet(["First", "Second", "Second", "Third"])
print(setTest)

First, we import the freshly installed ordered_set package. Then, we create an object of the OrderedSet class, passing the members as parameters. The print statement in this example outputs the following:

OrderedSet(['First', 'Second', 'Third'])

The string 'Second' that we entered twice when creating the set is now gone, while the order in which we entered data is maintained.

Now, let's create an ordered set of bank transactions. In a real-world scenario, you would want to keep the order of insertion in place, to allow you to analyze the transactions, check for fraud, and so forth. Here is how the program might look:

from ordered_set import OrderedSet
bankStatement = OrderedSet(["BK0001","BK0002","BK0003","BK0004","BK0005"])

The ordered set is created. Now, if you want to access a given transaction, you can select it using its index:

print("Transaction no",bankStatement[1],"has been recorded successfully")

This gives you the following output:

Transaction no BK0002 has been recorded successfully

But what if someone wanted to add a transaction that has already been recorded, such as "BK0004"?
If we had used a list, this action would have been possible. Fortunately, the ordered set does not allow it. Let's run the following code:

bankStatement.add("BK0004")
print(bankStatement)

The result of the print statement remains unchanged, proving that the ordered set disregarded the action:

OrderedSet(['BK0001', 'BK0002', 'BK0003', 'BK0004', 'BK0005'])

This feature proves particularly useful in this case. As a programmer, you won't have to worry about ensuring that each data member is unique.

The ordered-set package contains other noteworthy features. It allows you to perform useful operations like difference, intersection and union using the operators -, & and |.

Set Operations

Let's rewrite the program to create two different ordered sets that could represent two bank statements:

from ordered_set import OrderedSet
bankStatement1 = OrderedSet(["BK0001","BK0002","BK0003","BK0004","BK0005"])
bankStatement2 = OrderedSet(["BK0004","BK0005","BK0006","BK0007","BK0008"])

We deliberately included the transactions BK0004 and BK0005 in both statements. That could be the case if the first and the second statement partially cover the same time period. If you want to see the transactions that exist only in bankStatement1, just run the following bit of code:

diff = bankStatement1 - bankStatement2
print("The transactions unique to the first summary are",diff)

This gives us the following result:

The transactions unique to the first summary are OrderedSet(['BK0001', 'BK0002', 'BK0003'])

For readability purposes, we can enclose the ordered set within a list when displaying the data, using this code:

diff = bankStatement1 - bankStatement2
print("The transactions unique to the first summary are",list(diff))

Now, if you need to retrieve only the transactions that exist in both statements, use the intersection operation like so:

inter = bankStatement1 & bankStatement2
print("The transactions common to both summaries are",list(inter))

You'll get the intended result:

The transactions common to both summaries are ['BK0004', 'BK0005']

Finally, if you wish to see all the transactions of both statements, simply perform the union operation:

union = bankStatement1
| bankStatement2
print("Here are all the transactions of these summaries:",list(union))

This will give you the following output:

Here are all the transactions of these summaries: ['BK0001', 'BK0002', 'BK0003', 'BK0004', 'BK0005', 'BK0006', 'BK0007', 'BK0008']

The ordered_set package makes creating and manipulating ordered sets in Python simple and effective.

The Manual Method

It is also possible to create an ordered set of data entirely manually. In case you are not able to use the ordered-set package, you can still use this workaround. Let's see how this method works. First, we'll create a string array containing our set of data:

bankStatement=["BK0001","BK0002","BK0003","BK0004","BK0004","BK0005","BK0006"]

Then, we create a for loop that checks each element, looking for duplicates. If there are any, they will be removed from the set. To test this, we'll deliberately include a duplicate element in the array.

for string in range(len(bankStatement), 1, -1):
    if bankStatement[string-1] in bankStatement[:string-1]:
        bankStatement.pop(string-1)

The for loop iterates from the back of the list, that is, from the last element. It takes the element at index string-1 and checks whether it is already present in the sublist before it, bankStatement[:string-1]. If it is, we remove that later occurrence and keep the original mention of the element, closer to the front of the list.

Now, when we print the array content, there are no duplicates and the order is maintained:

['BK0001', 'BK0002', 'BK0003', 'BK0004', 'BK0005', 'BK0006']

This allows us to create an ordered set even if we cannot use Python's dedicated feature!

Learn To Code Online

Python is a versatile programming language with a few options for creating ordered sets. You can use the OrderedSet class to get the job done, or you can do so manually if needed. Want to go beyond ordered set creation in Python?
Udacity's expert-designed Introduction to Programming Nanodegree program is your next step. By the end of this course, you'll know the basics of coding and have the skills to confidently manage real-world programming scenarios using HTML, CSS, Python, and more!

Complete Code Examples

Example 1: Bank Transaction Ordered Set Creation

from ordered_set import OrderedSet
bankStatement = OrderedSet(["BK0001","BK0002","BK0003","BK0004","BK0005"])
print("Transaction no",bankStatement[1],"has been recorded successfully")
bankStatement.add("BK0004")
print(bankStatement)

Example 2: Difference, Union, Intersection

from ordered_set import OrderedSet
bankStatement1 = OrderedSet(["BK0001","BK0002","BK0003","BK0004","BK0005"])
bankStatement2 = OrderedSet(["BK0004","BK0005","BK0006","BK0007","BK0008"])
diff = bankStatement1 - bankStatement2
print("The transactions unique to the first summary are",list(diff))
inter = bankStatement1 & bankStatement2
print("The transactions common to both summaries are",list(inter))
union = bankStatement1 | bankStatement2
print("Here are all the transactions of these summaries:",list(union))

Example 3: The Manual Method

bankStatement=["BK0001","BK0002","BK0003","BK0004","BK0004","BK0005","BK0006"]
for string in range(len(bankStatement), 1, -1):
    if bankStatement[string-1] in bankStatement[:string-1]:
        bankStatement.pop(string-1)
print(bankStatement)
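A footnote to Example 3: since Python 3.7, the built-in dict preserves insertion order, so dict.fromkeys() plus list comprehensions can stand in for both the manual deduplication and the package's set operations, without any third-party install (a sketch; the data mirrors the examples above):

```python
# Order-preserving dedupe: dict.fromkeys keeps the first occurrence of
# each key, in insertion order (guaranteed since Python 3.7).
bankStatement = ["BK0001","BK0002","BK0003","BK0004","BK0004","BK0005","BK0006"]
print(list(dict.fromkeys(bankStatement)))
# ['BK0001', 'BK0002', 'BK0003', 'BK0004', 'BK0005', 'BK0006']

# Order-preserving difference, intersection and union on two statements.
bankStatement1 = ["BK0001","BK0002","BK0003","BK0004","BK0005"]
bankStatement2 = ["BK0004","BK0005","BK0006","BK0007","BK0008"]
diff  = [t for t in bankStatement1 if t not in bankStatement2]
inter = [t for t in bankStatement1 if t in bankStatement2]
union = list(dict.fromkeys(bankStatement1 + bankStatement2))
print(diff)   # ['BK0001', 'BK0002', 'BK0003']
print(inter)  # ['BK0004', 'BK0005']
print(union)  # ['BK0001', 'BK0002', 'BK0003', 'BK0004', 'BK0005', 'BK0006', 'BK0007', 'BK0008']
```

Note that the `in` membership tests are linear scans, so this sketch is fine for small lists like these but slower than real sets for large data.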
https://www.udacity.com/blog/2021/11/python-ordered-sets-an-overview.html
Elixir provides excellent interoperability with Erlang libraries. In fact, Elixir discourages simply wrapping Erlang libraries in favor of directly interfacing with Erlang code. In this section we will present some of the most common and useful Erlang functionality that is not found in Elixir. As you grow more proficient in Elixir, you may want to explore the Erlang STDLIB Reference Manual in more detail.

The built-in Elixir String module handles binaries that are UTF-8 encoded. The binary module is useful when you are dealing with binary data that is not necessarily UTF-8 encoded.

iex> String.to_charlist "Ø"
[216]
iex> :binary.bin_to_list "Ø"
[195, 152]

The above example shows the difference; the String module returns Unicode codepoints, while :binary deals with raw data bytes.

Elixir does not contain a function similar to printf found in C and other languages. Luckily, the Erlang standard library functions :io.format/2 and :io_lib.format/2 may be used. The first formats to terminal output, while the second formats to an iolist. The format specifiers differ from printf, refer to the Erlang documentation for details.

iex> :io.format("Pi is approximately given by:~10.3f~n", [:math.pi])
Pi is approximately given by:     3.142
:ok
iex> to_string :io_lib.format("Pi is approximately given by:~10.3f~n", [:math.pi])
"Pi is approximately given by:     3.142\n"

Also note that Erlang's formatting functions require special attention to Unicode handling.

The crypto module contains hashing functions, digital signatures, encryption and more:

iex> Base.encode16(:crypto.hash(:sha256, "Elixir"))
"3315715A7A3AD57428298676C5AE465DADA38D951BDFAC9348A8A31E9C7401CB"

The :crypto module is not part of the Erlang standard library, but is included with the Erlang distribution. This means you must list :crypto in your project's applications list whenever you use it.
To do this, edit your mix.exs file to include:

def application do
  [extra_applications: [:crypto]]
end

The digraph module (as well as digraph_utils) contains functions for dealing with directed graphs built of vertices and edges. After constructing the graph, the algorithms in there will help finding, for instance, the shortest path between two vertices, or loops in the graph.

Given three vertices, find the shortest path from the first to the last.

iex> digraph = :digraph.new()
iex> coords = [{0.0, 0.0}, {1.0, 0.0}, {1.0, 1.0}]
iex> [v0, v1, v2] = (for c <- coords, do: :digraph.add_vertex(digraph, c))
iex> :digraph.add_edge(digraph, v0, v1)
iex> :digraph.add_edge(digraph, v1, v2)
iex> :digraph.get_short_path(digraph, v0, v2)
[{0.0, 0.0}, {1.0, 0.0}, {1.0, 1.0}]

Note that the functions in :digraph alter the graph structure in-place; this is possible because they are implemented as ETS tables, explained next.

The modules ets and dets handle storage of large data structures in memory or on disk respectively. ETS lets you create a table containing tuples. By default, ETS tables are protected, which means only the owner process may write to the table but any other process can read. ETS has some functionality to be used as a simple database, a key-value store or as a cache mechanism.

The functions in the ets module will modify the state of the table as a side-effect.

iex> table = :ets.new(:ets_test, [])
# Store as tuples with {name, population}
iex> :ets.insert(table, {"China", 1_374_000_000})
iex> :ets.insert(table, {"India", 1_284_000_000})
iex> :ets.insert(table, {"USA", 322_000_000})
iex> :ets.i(table)
<1> {<<"India">>,1284000000}
<2> {<<"USA">>,322000000}
<3> {<<"China">>,1374000000}
The math module contains common mathematical operations covering trigonometry, exponential, and logarithmic functions.

iex> angle_45_deg = :math.pi() * 45.0 / 180.0
iex> :math.sin(angle_45_deg)
0.7071067811865475
iex> :math.exp(55.0)
7.694785265142018e23
iex> :math.log(7.694785265142018e23)
55.0

The queue is a data structure that implements (double-ended) FIFO (first-in first-out) queues efficiently:

iex> q = :queue.new
iex> q = :queue.in("A", q)
iex> q = :queue.in("B", q)
iex> {value, q} = :queue.out(q)
iex> value
{:value, "A"}
iex> {value, q} = :queue.out(q)
iex> value
{:value, "B"}
iex> {value, q} = :queue.out(q)
iex> value
:empty

rand has functions for returning random values and setting the random seed.

iex> :rand.uniform()
0.8175669086010815
iex> _ = :rand.seed(:exs1024, {123, 123534, 345345})
iex> :rand.uniform()
0.5820506340260994
iex> :rand.uniform(6)
6

The zip module lets you read and write ZIP files to and from disk or memory, as well as extracting file information.

This code counts the number of files in a ZIP file:

iex> :zip.foldl(fn _, _, _, acc -> acc + 1 end, 0, :binary.bin_to_list("file.zip"))
{:ok, 633}

The zlib module deals with data compression in zlib format, as found in the gzip command.

iex> song = "
...> Mary had a little lamb,
...> His fleece was white as snow,
...> And everywhere that Mary went,
...> The lamb was sure to go."
iex> compressed = :zlib.compress(song)
iex> byte_size song
110
iex> byte_size compressed
99
iex> :zlib.uncompress(compressed)
"\nMary had a little lamb,\nHis fleece was white as snow,\nAnd everywhere that Mary went,\nThe lamb was sure to go."

© 2012–2017 Plataformatec Licensed under the Apache License, Version 2.0.
http://docs.w3cub.com/elixir~1.5/erlang-libraries/
Maybe it's just me, but I would prefer to think about what arduino the language *could* be, rather than weigh it down with preconceptions about C. We do not need to make it a war, a debate would be nice.

Not really. A lot of this is stuff that the "serious" programmers have long ago agreed does not have a right or wrong way, and the best to hope for is consistency, at least within a single program, and hopefully from a single programmer, and maybe, if you're lucky, from an entire company. I gotta say that "LED_Reform01" is a pretty awful name for anything

Or better yet do it for them? Anyone disagree with me wanting to change all the ints to something more space efficient?

I think I see more people running into problems with the limited range of "small" datatypes (too many of the examples need a "bit" datatype (pin or LED state, right?), which C doesn't have (C++ doesn't have it either, does it?))

struct SPinStates {
  byte bPinState1 : 1;
  byte bPinState2 : 1;
  byte bPinState3 : 1;
  byte bPinState4 : 1;
  byte bPinState5 : 1;
  byte bPinState6 : 1;
  byte bPinState7 : 1;
  byte bPinState8 : 1;
};

SPinStates pinStates;
pinStates.bPinState1 = HIGH;
digitalWrite(13, pinStates.bPinState1); //turn on led

...Have some predefined aliases for pins/functions. Lazy initialize the pin and serial modes based on the function request, ie set(statusLed, low) (or high or pullup or ), combine common function combinations into more powerful single commands.

#include "Pins.h"

pin(13);

void setup(){
  set(pin13, OUTPUT);
}

void loop(){
  delay(1000);
  toggle(pin13);
}

/*
|| File:
|| Pins.h
|| Author:
|| Alexander Brevig
|| Created:
|| 2009-02-06
|| Last Update:
|| 2009-02-06
||
|| Description:
|| Implement functionality for pin data wrapped in one byte.
|| Information include:
|| -Pin number
|| -Pin mode
|| -Pin state
||
|| License:
|| GNU Lesser General Public License 2.1 or later.
||
*/

#include "WProgram.h"

#define pin(x) SPin pin##x={x}

struct SPin{
  byte pin : 5;
  byte mode : 1;
  byte state : 1;
};

void set(SPin& pin, boolean mode, boolean state=false){
  pin.mode = mode;
  pin.state = state;
  pinMode(pin.pin, pin.mode);
  digitalWrite(pin.pin, pin.state);
}

void setHigh(SPin& pin){
  pin.state = true;
  digitalWrite(pin.pin, pin.state);
}

void setLow(SPin& pin){
  pin.state = false;
  digitalWrite(pin.pin, pin.state);
}

void toggle(SPin& pin){
  pin.state ? setLow(pin) : setHigh(pin);
}

set (pins.LED, ON)
print ("Hello World")

set(pins.analog1, PULLUP)
loop:
  print ("Temperature is " + get(pins.analog1) / 1110)
  wait (0.5)

loop:
  print ("pin 2 is " + get(2))
  wait (1)

loop:
  set (pins.LED, ON)
  wait (1)
  set (pins.LED, OFF)
  wait (1)

I'll stop hijacking, I promise
http://forum.arduino.cc/index.php?topic=46314.30
Attendees
Present: kcoyle, Workshop_room, DaveReynolds, hhalpin, dbs, Workshop_room.a, tbaker, DBooth
Chair: Arnaud Le Hors and Harold Solbrig
Scribe: DavidBooth, PhilA, Anamitra, TimCole

Contents - Topics
- Validating statistical Index Data represented in RDF using SPARQL Queries - Jose Labra Gayo
- Stardog ICV - Evren Sirin
- Bounds: Expressing Reservations about incoming Data - Martin Skjaeveland
- OSLC Resource Shape: A Linked Data Constraint Language
- Description Set Profiles - Tom Baker
- Experiences with the Design of the W3C XML Schema Definition Language - Noah Mendelsohn
- Next Steps
- Commitments

Validating statistical Index Data represented in RDF using SPARQL Queries - Jose Labra Gayo

Jose: Motivation - Webindex Project
<ericP> [slide 2]
<ericP> jose: developed for web index
<ericP> ... we developed the data portal for web index

Visualization and data portal
<ericP> [slide 3]
<ericP> jose: the workflow involves:
<ericP> ... .. get data from external sources
<ericP> ... .. statisticians produce index
<ericP> ... .. we map that to RDF and provide visualizations

Conversion is from Excel to RDF
<kcoyle> tbaker: the piratepad has quite a bit of content

jose: Technical details. 61 countries, 85 indicators. > 1 megatriple, linked to DBpedia, etc.

WebIndex slide
<ericP> [slide: WebIndex computation process (1)]
<PhilA> Jose Emilio is talking about and its use of

Jose: Used SPARQL CONSTRUCT instead of ASK
... empty graph if ok, else RDF graph with error

[slide: SPARQL queries RDF Data Cube]
[slide: limitations of SPARQL expressivity]

Jose: Challenge computing series computation on RDF collections
... Idea of RDF Profiles for dataset families

could it be the mouse?

Jose: ... Source code in Scala (site on slides)
... - demo site
... Webindex as use case, SPARQL as implementation, RDF Profiles (declarative, Turtle)

<danbri> (somewhat related, SKOS validation - - also just a structure)

jose: you can check e.g.
that one observation is in one slice, but not much more expressivity than that
<ashok:> if SPARQL works for a fairly complicated situation, why are we thinking about anything else?
<jose:> SPARQL is hard to debug
... we need to differentiate validating the graph vs. a dataset
... with SPARQL, we can test specific values in a particular graph
... though we could compile ShEx to SPARQL
<Jose> A couple of interesting, albeit unrelated ideas here...
... signing RDF - how do you generate a reproducible MD5 w/o order?
... functional patterns for RDF lists. Should there be "best practices"?
<PhilA:> is slide 11 a candidate profile?
... if so, i see it as too complicated
... we have two reqs: validation and form creation. too complex for the latter
<ericP:> is that 'cause of the expressivity, or 'cause it's in RDF?
<PhilA:> i suppose 'cause it's in RDF
<evren:> re: UI generation, the issue is not the syntax, it's the SPARQL query. that's where the shape of the data is described
PhilA: EU reqs are "don't make me need to speak SPARQL to generate a UI"
gjiang: did you use SPARQL extensions?
jose: we weren't happy when we had to use jena:sqrt
gjiang: maybe there can be a link from SPARQL to some statistical package

Stardog ICV - Evren Sirin

<gjiang> [at slide 8] Semantics in OWL are for inference, not suitable for validation
<gjiang> [at slide 11] Rule syntax for constraints
Evren: [at slide 13] if each person must have two parents, but only one was specified, inference can determine that there is another parent, and then the validation can be applied after inference.
(Evren talks to a slide that is not in the uploaded slides)
Evren: the tool figures out the explanation of a validation.
evren: i agree with the folks that said we need good explanations of errors, but don't believe the constraints author should have to write the explanation. that should be the tool.
... we have definitions of constraints in W3C specs so we should capture them.
Evren: Yes.
Arnaud: The question is whether the language allows you to specify that entailment should be used.
... the question is "does the language you use allow you to specify the entailment?"
... Initially you proposed to just change the OWL namespace. Is that what you use now?
Evren: No, that would require using all the tool chains. You just execute it through the validation. That's why at the tool level you need to separate the axioms from the constraints.
Arthur: How would you associate the constraints with a graph?
<PhilA> PhilA: A proposal (from Paul Davidson) is to add a property to VoID that links a dataset to a profile (constraint).
EricP: What if someone interprets constraints as inference rules accidentally?
Evren: under OWA it would just infer that person085 is a manager, instead of determining (under CWA) that there is an error because person085 is not a manager.
@@1: how can i read this to learn about the graph to e.g. generate a form?
evren: you can think about it as the SPARQL BGP describes the graph
... so we see "someValuesFrom" and we'll create a text box, ...
[Evren explains how a constraint can be represented in SPARQL]
_: What about optional properties?
@@1: how would i describe optional properties
evren: right, you wouldn't write that in the constraints language
Arthur: It's not really a constraint, it's a graph description.
<DaveReynolds> +1 to last speaker, optional properties are needed for describing the data "shape" as part of a publish/consume contract, even though they are not part of validation
Arthur: You want to describe a contract with a service, and part of the contract is that a property can appear 0+ times.
evren: we added "min 0" to our OWL constraint. it's not actionable during constraints checking but it describes the graph

Bounds: Expressing Reservations about incoming Data - Martin Skjaeveland

(slides:, report summary)
Arthur: [on slide 6] What do you mean by element? Does it depend on its position?
Martin: By element I mean an S, P, or O in a graph.
Evren: [on slide 13] What kind of use cases for ontology hijacking?
Martin: Can check if you are adding domain and range axioms.
Evren: OWL RL only allows things that can be expressed with one triple. Cannot have someValuesFrom, allValuesFrom (and some others).
<danbri> if I say MyNewType is a subClassOf, versus MyNewType is a superTypeOf ... people tend to see the latter as weirder, the former as acceptable and non-hijack-y
gjiang: Re ontology hijacking that adds statements. What about removing statements? what effect does it have?
martin: no, only considered use cases of receiving data and protecting an existing dataset.
Eric: Use case came from practical considerations or theoretical?
Martin: We did prior work on managing RDF transformations. This is transforming by adding.
EricP: SADI project is all about inferring extra triples. Their rules are written in OWL DL.

Coffee
<dbs> coffee++

OSLC Resource Shape: A Linked Data Constraint Language

(slides, paper, report summary)
slide 1 - 1 slide intro to OSLC
Arthur: IBM customers want tools that cover the product life cycle and beyond
... core specs delivered to W3C, being worked on in LDP WG. More domain-specific specs gone to OASIS
... customers bothered by lack of an XML Schema analogue
... came up with a minimal language
[on slide 6] ... RDF/XML snippet shown is a resource shape for a bug report
[on slide 7] ... Does data have to be in the graph or is it externally referenced, e.g.?
[on slide 8] ... Creation factory is the data source, query capability is the endpoint (scribe paraphrase)
... data can link to its description (its shape)
[on slide 10] ... example of a declarative list of properties etc. Encoded in Turtle
... OSLC is just a vocabulary, it's not an ontology.
How you use it is up to you
slides 11 - 16 show the spec
Arthur: [on slide 17] SPARQL seems good for the task of testing against the resource shape
ericP: I notice people favour returning True if there's a failure (the inverse of the OSLC model)
Arthur: OK, but you want data to be returned so you can fix it
[on slide 20 - Summary] ... OSLC has been around about 3 years
hsolbrig: How does this relate to WSDL?
Arthur: It's in the same spirit
... you can check for properties, cardinalities etc. ...
hsolbrig: Is there a spec for the semantics of OSLC?
Arthur: The semantics would be formalised using SPARQL
[Discussion of what 'read-only' means]
<Zakim> evrensirin, you wanted to ask about deletes
evrensirin: you said something about not needing to do anything about DELETE?
arthur: you might want to specify a pre-condition for a delete
... that's a good point. The context of the constraint is important

Description Set Profiles - Tom Baker

(slides, paper, report summary)
tbaker: [Gives background on DC.] Application Profiles date from 2000
[on slide 6] ... Looks more like a record format
<kcoyle> [on slide 8] description set document
slide 10 shows the same data in XML
tbaker: So can we validate the extracted data
... Defined a small set of constraints that we saw being used in the DC community in their app profiles
... being produced as natural language text
tbaker: [on slide 13] Just flash this up - it's the entire set of templates defined in the description set profile constraint language
tbaker: [on slide 16] The motivation was to help people author application profiles in a consistent way
... here's a screenshot from an experiment that sadly no longer exists, although there is some Python code I can share
... it shows a tabular presentation of a profile - a style people are used to
[on slide 17] ... constraints are being embedded in the source of the wiki page in a controlled way
[on slide 19] ...
vision was that the profile could be used to configure editors as well as validators
[on slide 20] ... We found that people were designing APs without looking at functional requirements
... so this is an attempt from 2007 to put the APs in context
... the yellow box is the AP - a set of documentation about the content of your metadata
... you can also document the domain model it was based on
<gjiang> ... distinction among foundation standards, domain standards, application profile
tbaker: we had some syntax definitions based on the abstract model
[on slide 21] ... I'm really offering this as a set of requirements that were gathered in the DC community up to 2008
[on slide 22] ... we wanted to encourage people to base their APs on functional requirements
[on slide 23] ... wanted to encourage people to model reality but with a light touch
[on slide 24] ... then we wanted to constrain the data - important for consistency and quality control
... bridging the gap between people who see the world as a series of records and those who see unbounded graphs
... record people, used to XML, just saw it as the latest validation syntax. Some APs were then written as OWL ontologies. Wanted to get people to constrain the data, not the vocabulary (scribe note - hope I got that right)
[on slide 22] ...
PhilA: [on slide 26] +1 to the 'Authored in an idiom usable by normal people' requirement
tbaker: Before questions - can I ask kcoyle to comment? Anything to add?
kcoyle: My only comment is that I've been doing a back-of-the-envelope on what we have and do not have in the DSP language
... when the requirements are completed, what we might want to do is to look at the existing languages and techniques and see which ones cover what ...
my gut feeling is that there may not be a single solution because diff communities have diff contexts
Arnaud: I hope you'll be able to join us after lunch, as that's when we'll step back from the reqs and look at use cases, diff technologies etc., whether they match or not
... the challenge in standards is always to decide on the use cases
... that's all for after lunch
TimCole: Thinking about APs.... XML Schema always seems pretty powerful. Does anything on DSP provide any guidance on how we might make a language from what we have?
... One application might ask for foaf:name, another might want foaf:givenName and foaf:familyName - can I define a constraint doc in some way so that I can add an extra requirement?
tbaker: We refer to a specific DSP, or a set of them - they're cookie cutters for data
... in the example from FOAF - those distinctions are defined in the FOAF vocab - the DSP would say what to use, but I don't see how that eg would impact the design of the constraint language itself
TimCole: You've defined a profile with lots of things and I want to change one thing. Do I have to repeat the whole thing or can I just define the difference?
tbaker: We did discuss having a layered approach so people can define a basic profile and then just add a layer on. So that's in the same thought process, but we decided not to solve that
Arnaud: Anything else?

Experiences with the Design of the W3C XML Schema Definition Language - Noah Mendelsohn

(slides, paper, report summary)
Arnaud: Noah was involved in XML Schema and so he's here to share his experiences of that
Noah: We went through a lot of things when designing XML Schema - it has a lot right and a few problems
Noah: [on slide 3] These topics match those in the paper
[on slide 4] ... People came with very different assumptions and ideas, and diff ideas about validation
some people wanted to know that data matched a type and why (data binding) ... following the 80/20 rule is good but one person's 80 is another's 20 [on slide 5] ... discussed diff between validating doc as a whole or at the element level ... RDF folks better at idea that serialisations are diff versions of same abstract model. That doesn't work so well for all XML folks [on slide 6] ... No surprise that XML folks write their schemas in XML ... It's possible that there were better ways of encoding a schema <Arnaud> Noah is talking about the example on page 3 of his paper Noah: So the warning is - don't automatically write your schemas in RDF [on slide 9] ... Anticipate versioning you're likely to need an answer ... people find that their previous work needs updating. May need to reinterpret something ...' ericP: Drills down a little. <mgh> In GS1 standards that provide XSD artefacts, we use this mechanism to represent an extension point (wildcard) <xsd:any Noah: Point is that how to handle such cases is essentially app-specifc ericP: Did you consider creating a compact syntax? Noah: I guess you'll want your abstract model to map to RDF - you're used to that ... we do have the abstract model for XML, it's there. Arnaud: Thank you for coming Noah <tbaker> Thank you, Noah! <kcoyle> could someone post here when things start up again, for those of us on the phone? thx Arnaud: you're touching points that have been raised Lunch - will be 25 minutes <Anamitra> scribe-Anamitra <dbs> aside: thanks to everyone for being so good to us remote attendees :) <kcoyle> it's hard to hear - we may need some structure to be able to get participation of the phone people <SteveS> Scribe: Anamitra Next Steps <Zakim> PhilA, you wanted to talk about queuing <kcoyle> +1 dbs -- keeping track of the slides is a big help up next Alignment of requirements and technology Arnaud: questions regarding what we want to do ... capture use cases ... 
... it's just not about validation - it's abt describing the Resource too
<mgh> XSD can also be used to generate an instance XML document example from an XSD. Do we need that kind of capability? - to generate a set of triples from a description?
PhilA: describe and validation are different
<Zakim> arthur, you wanted to describe scope
harold: [you want to] want to publish what you expect without going to SPARQL
... if i import data from an RDB with a good model, all i need from our language is to publish the description
<PhilA> +1 to Harold
Arthur: just calling this workshop validation is not accurate
ericP: let's call it validation and description
+1 ericP
<hsolbri> Characterization?
ericP: constraints is not a clear way to describe a resource
<Zakim> kcoyle, you wanted to ask about defining description
<ericP> kcoyle: when we talk about validation description, or do we have a broader view of description?
hsolbri: testcases for RDF and sw that produces RDF is to be considered
Arnaud: Resource shape serves the dual purpose of describe and validation
<Zakim> evrensirin, you wanted to comment about being careful about descriptions
evrensirin: define the scope - main goal validation - side goal is to describe the resource
<Zakim> PhilA, you wanted to talk about the likely new CSV on the Web WG which, in some ways, is closely related
gjiang: low level user should be able to define the constraints like UML
<PhilA> -> CSV on the Web
<hsolbri> gjiang: we may need an OCL for the description language as well
philA: similar to csv metadata - like headers and data type
<Zakim> arthur, you wanted to discuss how resources can use existing vocabularies in a novel way
arthur: we need to describe resources/documents - you can describe that without inventing any new RDF terms
... we should avoid inventing vocab terms if we can
... and re-use as much as we can
<Zakim> hsolbri, you wanted to say CSV is on our radar itself. We started with UML / XML Schema, need to produce RDF equiv and CSV
hsolbri: omv - schema for describing an ontology - modeled in RDF
... started with UML
... UML -> XML schema
... we need to be able to exchange constraints between different modeling frameworks - UML, RDF
<PhilA> +1 to hsolbri
Ashok_Malhotra: UML is useful - let's focus on just RDF validation - and then build tooling later for covering exchange between models - keep the scope small
Arnaud: can define a transformation from csv to RDF and then validate using the RDF validator
<Zakim> hsolbri, you wanted to rebut
hsolbri: UML and xml schema community has already done the groundwork - let's start with that - as relevant to RDF
sandro: there is too much mismatch between these models
hsolbri: RDF type analogous to UML class and UML attribute to RDF predicate
<Zakim> arthur, you wanted to say UML has a different perspective
arnaud: guided by UML - makes sense
arthur: fundamental mismatch between UML and RDF ...
RDF class is a classification - a resource can have many classifications
... UML and RDF have an intersection - so you can do an OO model as RDF - but not the other way
hsolbri: lossy in both directions
arthur: oo is abt info hiding -
<Zakim> kcoyle, you wanted to caution about starting with UML or XML or ??
kcoyle: agree with Arthur -
... UML and other models come with baggage
<Zakim> ericP, you wanted to say that it's probable that the info that we care about for shape/pattern description is largely covered by UML
SteveS: UML has evolved
sandro: we should have a way to produce the RDF constraints as UML diagrams
arthur: ER diagrams precede UML
<Zakim> hsolbri, you wanted to change the subject.
<kcoyle> SteveS: flow-charting
<kcoyle> can't we start as a community group?
<Zakim> DavidBooth, you wanted to say I think it would be helpful if we roughly ranked our use cases and requirements
arthur: we need to plan - have at least 2 stages -
... stage 1: extremely simple spec - then follow that up with stage 2
Ashok_Malhotra: easy declarative stuff for 80% of stuff - and the SPARQL for the rest of it
<Zakim> ericP, you wanted to ask if the description and validation of the issue tracking document in seems useful to all of us here
+1 Ashok_Malhotra
<kcoyle> sandro: start with a spec, get all of the right people in the room
<Zakim> hsolbri, you wanted to ask eric a question about pushback
hsolbri: do we have a political issue for validating RDF -
<TimCole> +q
Reaction may depend on definition of validation.
sandro: consumers need to know about what they are consuming - that argument works - as opposed to a triple store needing that info
<Zakim> TimCole, you wanted to suggest that reaction may depend on definition of validation
<SteveS> Based on past discussions within the context of LDP: I think Tim and Henry see the need/motivation for this thing we called validation
TimCole: want to stay away from just a binary result - valid or not - give information about the result
_: the simple declarative format will lend itself to autogenerate SPARQL
<Zakim> evrensirin, you wanted to comment on simplicity
ericP: if the simple format is not able to define something - we will need to re-look at whether we can improve it to cover that
<kcoyle> can't hear - pls scribe! thx
Arthur: disjoint constraint can be added to resource shape
... should be driven by use cases
Ashok_Malhotra: do we have people who would like to start on this spec?
Arthur: I would
Arnaud: is it a requirement to make this language RDF
<kcoyle> DCMI can offer the constraints in DSP - Arthur, I will do that
ericP: the primary language should be RDF
<kcoyle> +1 needs to be demonstrable
<tbaker> +1 agree that should be representable, not necessarily represented, in RDF - who am I agreeing with (is this being scribed)?
hsolbri: description should exist in SPARQL query form
arthur: the more declarative the language is - the easier it is to define in RDF
<Zakim> evrensirin, you wanted to talk about RDF representation
evrensirin: at least have a way to specify sparql as a literal in the constraint language
Ashok_Malhotra: schema for schemas never worked
<Zakim> ericP, you wanted to say that the requirement we're discussing is whether the expression in RDF is *interoperable*
ericP: interoperable RDF representation
hsolbri: represent in RDF as much as possible - should be able to publish a standard representation form
<kcoyle> pls scribe
<tbaker> didn't catch the point about SKOS - or who was talking...
<arthur> Eric was talking about SKOS
<tbaker> thnx
<Arnaud> scribe: TimCole

Can we define next steps
Do we agree that a Working Group should be formed to make a new declarative language with fallback to SPARQL
To speed things along we should start from a preliminary spec? Who would do this?
<kcoyle> I offered DSP structure and constraints to Arthur
Candidates: ResourceShape, Shape Expressions, DSP
Arthur: There will have to be a call...
<roger> can you send a link to your "shape expressions" information please Eric?
<tbaker> wondering whether there is consensus that a working group is needed as opposed to a community group (as Karen suggested)
Arnaud: The working group will be chartered to use a spec as a starting point, but the WG can throw the spec out and start again.
evrensirin: Could the WG start with multiple specs?
Arnaud: There are IP issues which make this approach more difficult.
<Zakim> PhilA, you wanted to talk a little about W3C process
PhilA: To get a WG chartered, need bums on seats
<kcoyle> TimCole: :-)
<Zakim> SteveS, you wanted to talk about doing a joint submission
SteveS: Submission (of a starting spec) can be collaborative.
Arnaud: Charters need to be approved by W3C mgmt, and then by members.
... A draft charter is developed on the mailing list. Responses feed the process of moving the charter forward.
... If interested in submitting a spec to serve as a starting point, need to submit to W3C to clear IP issues.
... process takes a few months.
PhilA: at least.
TimCole: Do we need to do any winnowing or prioritizing of the list developed yesterday?
<kcoyle> TimCole: list needs a fair amount of work
Arnaud: Have we done enough for now? Let the Chairs move forward, form a mailing list, etc.
... Or we can work a little longer?
evrensirin: We need to say a little more about needs and priorities
ericP: The example implies some things about expressivity and interface
evrensirin: Wants to talk more about higher-level aspects of the use case. Who's this for?
ericP: Wants to keep concrete though. Not too hi-level.
<Zakim> tbaker, you wanted to ask if a WG is really needed. Why not a Community Group?
tbaker: Given the lack of really strong agreement on the task needed, do we want to start with a Community Group
Arnaud: Community Groups are recent. More of a forum to work together. No resources or formal endorsement by the W3C.
<Zakim> PhilA, you wanted to answer Tom
Arnaud: At best a CG creates a spec which would need to be submitted, go through a WG, and then be ratified.
PhilA: Some commercial entities reluctant to implement a CG spec.
Arnaud: Some success stories, but though the startup is faster, it's not really faster in the end.
PhilA: if we can get a WG charter that tends to be better
<tbaker> +1 depends on how mature the concept is, and easier to involve people with a CG
arthur: I think this is a mature area, and so appropriate for a WG

Commitments

Arnaud: Going back to having people commit. Do we have a critical mass?
... Who here would commit?
<arthur> +1
<hsolbri> +1
<ericP> +1
<roger> +0.6
<labra> +1
<evrensirin> +1?
<kcoyle> ~1 (unsure)
<nmihindu> +1
<tbaker> +0.6
<ssimister> 0.5 not sure yet
<Ashok_Malhotra> 0
<mSkjaeveland> 0
<mesteban> +0.5
<mgh> +0.5 not sure yet
TimCole: harder to join a WG if your institution is not part of W3C
... -1 since Illinois not part of W3C
<ddolan> +0.5
<SteveS> +0.1 I will participate through Arthur/Arnaud, definitely support it
<Ashok_Malhotra> Community Groups cannot create standards
tbaker: Not sure we are really ready to write a good charter yet.
Arnaud: Let's get back to what problem we're trying to solve.
... Let's focus on the use cases.
<Arnaud>
<kcoyle> pad has requirements, but not use cases. need to gather use cases
<kcoyle> most of the talks represented one or more use cases
moving to pirate pad now.
<Zakim> dbooth, you wanted to suggest roughly prioritizing use cases and requirements
<SteveS> How about an X-day effort to build the list of requirements and/or use-cases, then a Y-day effort to prioritize them (using a surveying tool)?
<Zakim> dbooth, you wanted to say it is important to be able to apply different schemas to the same datasets
http://www.w3.org/2013/09/11-rdfval-minutes
This is newly written code for the students of YRC, run under Vidyasagar Academy, giving the world's fastest robot, the Pololu 3pi, the ability to spin while following the black line. This code is written by our expert faculty, Prof. Dattaraj Vidyasagar, for the students. The code gives the robot the ability to trace the direction, i.e. the bends of the black line, at every rotation and thus follow it. This idea is taken from the spinning robot designed by "Byon". This code is available freely for the students of YRC Akola. Those who want the code can contact us.

Watch this video of the Spinning Pololu

Part of the source code:

#include <pololu/3pi.h>

#define FORWARD_OFFSET 0xA0  // Offset (0..255) of forward from the front line
#define MAX_SPEED 255        // Maximum speed the wheels will go
#define MIN_SPEED 200        // Minimum speed the wheels will go

void Spinning_Line_Follow( void )
{
    unsigned short phase_start = get_ms();   // Start time of this rotation
    unsigned short last_phase_len = 100;     // Duration of the last rotation
    char last_line_side = 0;                 // Which side was the line on?
    char line_count = 0;                     // Is this the front or back line?
    char led_duration = 0;                   // How much longer should the LED be on

    while ( 1 )
    {
        unsigned short cur_time = get_ms();  // Grab the current time in ms
        unsigned int sensors[5];

        // Is the line left or right?
        char line_side = (read_line(sensors, IR_EMITTERS_ON) < 2000);

        left_led( 0 );                       // Turn off the "FRONT" LED
        if (line_side & !last_line_side) {   // If it just changed,
            if ( ++line_count & 1 ) {        // and if this is the front line
                left_led( 1 );               // Turn on the "FRONT" LED
                last_phase_len = cur_time - phase_start;  // save the last rotation duration
                phase_start = cur_time;      // and start counting this rotation
            }
        }
        last_line_side = line_side;          // Remember where the line was

        unsigned short cur_phase = cur_time - phase_start;  // How far are we into the current rotation?
        cur_phase <<= 8;                     // Multiply by 256
        cur_phase /= last_phase_len;         // based on the last rotation duration
        cur_phase += FORWARD_OFFSET;         // offset by which direction is "FORWARD"

        short left = cur_phase & 0xFF;       // Wrap back to 0 .. 255
        if ( left >= 128 ) {                 // Convert to 0 .. 127 .. 0
            left = 256 - left;
        }
        // Scale the wheel speed to be MIN at 0, MAX at 127
        left = (((left * (MAX_SPEED - MIN_SPEED)) >> 7) + MIN_SPEED);
        short right = MAX_SPEED + MIN_SPEED - left;  // the right is 180 degrees out of phase from the left
        set_motors(left, -right);            // and the right goes backwards
    }
}
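The heart of the spin is the phase-to-speed mapping at the bottom of the loop. That arithmetic can be sanity-checked off the robot by re-running it in plain Python; the function name and the harness here are ours, not part of the Pololu library:

```python
MAX_SPEED = 255
MIN_SPEED = 200

def wheel_speeds(cur_phase):
    """Mirror the loop's integer math: wrap the phase to 0..255,
    fold it into a 0..127..0 triangle wave, scale it into the
    MIN_SPEED..MAX_SPEED band, and give the right wheel the
    complementary speed (it is driven backwards on the robot)."""
    left = cur_phase & 0xFF            # wrap back to 0..255
    if left >= 128:                    # convert to 0..127..0
        left = 256 - left
    left = ((left * (MAX_SPEED - MIN_SPEED)) >> 7) + MIN_SPEED
    right = MAX_SPEED + MIN_SPEED - left
    return left, right
```

Because the two speeds always sum to MAX_SPEED + MIN_SPEED, the robot's net drift over one rotation comes only from where in the spin each wheel is fastest, which is exactly what FORWARD_OFFSET shifts.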
https://vsagar.org/spinning-3pi-line-following-robot/
Learn how easy it is to sync an existing GitHub or Google Code repo to a SourceForge project! See Demo You can subscribe to this list here. Showing 4 results of 4 Thanks I am not as up to speed on boost as I should be. This looks very interesting I will see what I can do with it. Martin Casado wrote: > > To follow up on this, I've found it quite natural to use > boost::intrusive_pointer, boost::bind and boost::function to manage the > python callable object as it is being tossed around C++. Take for > example the following C++ method which accepts an arbitrary python > argument to be passed in when the method is called, and a python callable: > > void > some_cpp_class::add_callback(PyObject* blob, PyObject* callback) > { > using namespace std; > PyObject *args; > > if (!callback || !PyCallable_Check(callback)) { > std::cerr << "Type error! type not callable" > << std::endl; > return; > } > > args = PyTuple_New(1); > PyTuple_SetItem(args, 0, blob); > Py_INCREF(blob); > > boost::intrusive_ptr<PyObject> cptr(callback, true); > boost::intrusive_ptr<PyObject> aptr(args, false); > > boost::function<void ()> f = boost::bind(call_python_function, > cptr, aptr); > > // Do something with f here to be called later. > > } > > void > call_python_function(boost::intrusive_ptr<PyObject> callable, > boost::intrusive_ptr<PyObject> args) > { > using namespace std; > PyObject *pyret; > > pyret = PyObject_CallObject(callable.get(), args.get()); > if (pyret != NULL) { > Py_DECREF(pyret); > } else { > cerr << "python callobject failed " << endl; > } > } > > Note that this is just a simple example, there is no reason you couldn't > pass C++ types into f(..) or bind them to be passed on > call_python_function(..) when it is eventually called. Also note that > this assume intrusive_ptr<PyObject> has been correctly defined to > INCREF/DECREF the objects. > > hth, > .martin > > > >> It can be done, and I've done it multiple times, but it isn't exactly >> the easiest thing to do. 
I haven't seen a good discussion of this >> anywhere: >> >> My typical approach is to write a C++ class that implements the listener >> interface(es). My C++ object has a captive python object delegate, and >> the C++ callbacks simply use the python C-API to determine if the the >> delegate object has a function of a specific name, and if it exists, the >> callback calls the python function. >> >> A further wrinkle is that event callbacks can be done on threads other >> than the main python interpreter thread. In that case, every C++ >> callback that modifies python variables has to lock the global >> interpreter lock using >> PyGILState_STATE gstate; >> gstate = PyGILState_Ensure(); >> .... >> PyGILState_Release(gstate); >> >> >> Likewise, the functions that start processing the event loop (or need to >> block while other events are processed) have to release the global >> interpreter lock using >> >> Py_BEGIN_ALLOW_THREADS >> .. >> Py_END_ALLOW_THREADS >> >> >> I use it through a swig exception declaration: >> %define THREAD_WRAP(x) >> %exception x >> { >> /* Generated by THREAD_WRAP x */ >> Py_BEGIN_ALLOW_THREADS >> $action >> Py_END_ALLOW_THREADS >> } >> %enddef >> >> >> example: >> THREAD_WRAP(A::b); >> >> >> >> As a side effect of this, I never have to wrap the actual callback >> functions -- I just wrap the constructor for the C++ class that >> implements the callback functions, and the function that accepts a >> python object delegate, as well as the the functiont that starts the >> event loop. >> >> >> So I might have a script that does this: >> >> >> import msgCallback >> >> class MyCallback(object): >> def onMsg(arg1, arg2): >> print "OnMsg called!" 
>> >> >> cobj=msgCallback.PythonCCallbackWrapper() >> cobj.setDelegate(MyCallback()) >> msgCallback.startEventLoop() >> >> >> To do this correctly requires a very good working knowledge of the >> target API, SWIG, and the Python C-API, but when done properly, it gives >> you the flexibility of writing extremely flexible event-driven programs. >> >> ------------------------------ >> >> Message: 5 >> Date: Wed, 12 Dec 2007 15:54:08 -0800 (PST) >> From: mattdavis <someoneinjapan@...> >> Subject: [Swig-user] new to swig and callbacks wondering if swig can >> do this and any pointers. >> To: swig-user@... >> Message-ID: <14306927.post@...> >> Content-Type: text/plain; charset=us-ascii >> >> >> Hi everyone, >> >> I am new to swig and have some questions about callbacks. I would love >> any >> help, hints, or links to pages that might contain more information for >> me. >> So here is my current situation. I have 5 function that I am trying to >> wrap >> in swig. There are 2 RegisterForNotification function which accept an >> EventID and an either an EventResponder or an EventListner. The >> EventListener and EventResponder are functions which get called when an >> event is raised RaiseEvent notifies all the listeners first and then >> calls >> the event responder. There may be multiple listeners, but only one >> responder. The listeners may do some processing, but do not provide a >> result. The responder handles the event and returns a result. the other >> 2 >> are UnregisterForNotification which takes an EventID and either an >> EventListener or EventResponder. >> >> The other question I have is do I need to wrap the EventListner and >> EventResponder? The following code is in the .h file >> >> typedef CBFunctor3wRet<EventId, const String&, const Argument&, Result> >> EventListener; >> typedef CBFunctor3wRet<EventId, const String&, Argument&, Result> >> EventResponder; >> >> does that mean that it will be wrapped or do I have to specifically >> include >> it? 
>> I have also already wrapped my String and Result classes but haven't
>> yet wrapped the Argument class. I am assuming that I will need to wrap
>> the Argument class as well.
>>
>> What I want to be able to do is call these functions from a python
>> script. So for example if a REQUIREPASSWORD event pops up then I can
>> send it a password from the script and I get a result back. I have done
>> a little digging around, and from what I can understand, to do
>> callbacks I have to have the callback functions defined in C and wrap
>> them with swig. Please correct me if I am wrong. For what I am trying
>> to do that doesn't fit very well. I want to be able to write functions
>> in python and be able to pass them in. Is this possible? I have also
>> seen some posts and links to typemaps. This was a little deeper into
>> swig than I thought I would have to go. I am not opposed to doing this;
>> I was just hoping to know if that is the right approach to this problem
>> or if there is a more simple approach to this.
>>
>> Any help is greatly appreciated.

--
Sent from the swig-user mailing list archive at Nabble.com.

Not exactly. At the C/C++ level, when an event is raised, the code
typically does something like:

    // dispatch event
    (*pfunc)(arg1, arg2, ...);

(i.e., it calls a function through a function pointer). Your problem (and
the problem with compiled languages in general) is that while you can
create function pointers, you cannot create functions dynamically -- the
actual called function has to be available at compile time.
What you can do, however, is get SWIG to create a python object that
wraps a pointer to one or more actual C functions, and pass that
underlying C function pointer to your callback registration function.
This can be somewhat useful if, for instance, you only have a few
different callback functions. SWIG can help you do this -- wrap the
pointer-to-function signature, wrap the actual function signature, and
figure out how to set the pointer to point to the function. Then wrap
the callback function registration, pass your pointer-to-function, and
you are set.

A more flexible approach, however, is to write a callback function that
calls back into your python code -- do this the way that was discussed
last week (i.e. have your callback call a function in a 'well known'
python object).

-----Original Message-----
Message: 1
Date: Mon, 17 Dec 2007 13:21:35 -0800 (PST)
From: mattdavis <someoneinjapan@gmail.com>
Subject: [Swig-user] Is this possible with swig?
To: swig-user@lists.sourceforge.net
Message-ID: <14374048.post@talk.nabble.com>
Content-Type: text/plain; charset=us-ascii

I have some callback functions as part of an api that I am using. I have
wrapped them up in swig but I am stuck at a point. What I want to be
able to do is write python functions and pass them in as parameters to
the swig-wrapped functions. So what I don't know is: if I pass the
functions in as parameters, and an event gets raised, and the function
called to handle the event is a python function, will it run? If not,
can swig handle this case? If so, what do I need to read up on?

Thanks,
Matt
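The "well known" delegate pattern recommended in this thread can be sketched from the Python side. The names PythonCCallbackWrapper and onMsg follow the earlier example but are illustrative only -- in a real extension the dispatch step happens in C++ through the Python C-API (PyObject_GetAttrString plus a call), not in Python itself:

```python
# Python-side sketch of the delegate pattern: the wrapper holds a captive
# delegate object, and each callback checks whether the delegate defines a
# method of a well-known name before calling it.

class PythonCCallbackWrapper:
    """Stands in for the SWIG-wrapped C++ object that holds the delegate."""

    def __init__(self):
        self._delegate = None

    def setDelegate(self, obj):
        self._delegate = obj

    def dispatch(self, name, *args):
        # What the C++ callback does: if the delegate defines a method of
        # this well-known name, call it; otherwise ignore the event.
        handler = getattr(self._delegate, name, None)
        if callable(handler):
            return handler(*args)
        return None


class MyCallback:
    def onMsg(self, arg1, arg2):
        return "OnMsg called with %s, %s" % (arg1, arg2)


cobj = PythonCCallbackWrapper()
cobj.setDelegate(MyCallback())
print(cobj.dispatch("onMsg", 1, 2))   # OnMsg called with 1, 2
print(cobj.dispatch("onError", "x"))  # None: delegate doesn't handle it
```

Because unhandled event names simply return None, the Python delegate only needs to implement the callbacks it cares about, which is exactly why the C++ side probes with the C-API instead of requiring a full interface.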
Running swigwin 1.3.33 over the functest and simple example.i for Lua
creates an example_wrap.c that I'm unable to compile on Visual Studio
2005, with two errors on the same line:

    Error 1  error C2275: 'swig_module_info' : illegal use of this type
        as an expression  d:\swigwin\examples\lua\functest\example_wrap.c  1216
    Error 2  error C2065: 'module' : undeclared identifier
        d:\swigwin\examples\lua\functest\example_wrap.c  1216

along with the following compiler output:

    Compiling...
    example_wrap.c
    d:\swigwin\examples\lua\functest\example_wrap.c(1216) : error C2275: 'swig_module_info' : illegal use of this type as an expression
    d:\swigwin\examples\lua\functest\example_wrap.c(329) : see declaration of 'swig_module_info'
    d:\swigwin\examples\lua\functest\example_wrap.c(1216) : error C2065: 'module' : undeclared identifier
    d:\swigwin\examples\lua\functest\example_wrap.c(1615) : warning C4013: 'add1' undefined; assuming extern returning int
    d:\swigwin\examples\lua\functest\example_wrap.c(1643) : warning C4013: 'add2' undefined; assuming extern returning int
    d:\swigwin\examples\lua\functest\example_wrap.c(1671) : warning C4013: 'add3' undefined; assuming extern returning int
    d:\swigwin\examples\lua\functest\example_wrap.c(1697) : warning C4013: 'add4' undefined; assuming extern returning int

The line is this one:

    swig_module_info* module = SWIG_GetModule(L);

the complete generated example_wrap.c is pasted here:

Any ideas would be appreciated.
When creating managed code that will reside within SQL Server as a stored procedure, you are basically creating a static method on a class. That static method is then decorated with the Microsoft.SqlServer.Server.SqlProcedure attribute. When your assembly is deployed to SQL Server and stored within the database, this attribute allows SQL to create a CLR routine for the method.

By default, SQL Server 2005 doesn't allow you to execute CLR code, so you'll have to enable it by executing the following command inside a SQL query window (make sure you're connected with sufficient privileges to perform this command):

    sp_configure 'clr enabled', 1

After executing this, SQL Server will inform you that the option has changed, but it will not take effect until you issue the following command:

    reconfigure

Now you're ready to start coding. Ordinarily, you would have to create an assembly and then go over to SQL Server and issue several commands within the query window to deploy the assembly and then create a managed stored procedure. However, with Visual Studio 2005, you can create a special type of project called a SQL Server project. Before you create a SQL Server project, you will need to have an instance of SQL Server handy, as well as a database against which you are planning on developing. When you first create a SQL Server project, you will be asked for a database reference if you haven't already created one, as shown in Figure 21.1.

With a new SQL Server project in your solution, you are ready to go. Simply right-click the project and highlight the Add submenu. You will see the following list of SQL Server objects appear:

    User-Defined Function
    Stored Procedure
    Aggregate
    Trigger
    User-Defined Type

Select Stored Procedure and call it TestProcedure.
Visual Studio will create a stored procedure stub that looks as follows:

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.Data.SqlTypes;
    using Microsoft.SqlServer.Server;

    public partial class StoredProcedures
    {
        [Microsoft.SqlServer.Server.SqlProcedure]
        public static void TestProcedure()
        {
            // Put your code here
        }
    };

Note that the class is a partial class called StoredProcedures. Whenever you add a new stored procedure to your SQL Server project, it will be part of the partial class StoredProcedures and the static method representing the procedure will be in its own file. When building C# static methods that will be used as stored procedures, you need to remember the following rules:

    - The return type of the method is used as the return value of the stored procedure or function.
    - The parameter list of the method is the parameter list of the stored procedure. As such, you should only use data types from the System.Data.SqlTypes namespace.
    - Keep in mind that your method has no user interface, so any debugging or tracing you do can't go to a console window. You can still print debug messages the same way you could with stored procedures, however.

Now we make a small modification to the "stub" method provided for us, and we're left with this:

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.Data.SqlTypes;
    using Microsoft.SqlServer.Server;

    public partial class StoredProcedures
    {
        [Microsoft.SqlServer.Server.SqlProcedure]
        public static void TestProcedure(out SqlString outVal)
        {
            // Put your code here
            outVal = "Hello World";
        }
    };

When you build this project, the assembly is compiled into a DLL, but that's it. In order to get your stored procedure onto the server, you need to debug your application. This will deploy your assembly to SQL Server, register your stored procedure, and then execute a test script found in the Test Scripts folder of your solution called Test.sql.
To execute just this stored procedure without running a test script, open your Server Explorer pane, browse to the stored procedure you just created, right-click the procedure, and choose Execute. Finally, select <NULL> for the input to the @outVal parameter. When you execute the stored procedure, the following text will appear in your output window:

    Running [dbo].[TestProcedure] ( @outVal = <NULL> ).
    No rows affected.
    (0 row(s) returned)
    @outVal = Hello World
    @RETURN_VALUE =
    Finished running [dbo].[TestProcedure].

This is just the beginning. As you will see in the section on utilizing the new server-side SQL library, accessing data and returning data to the caller are both extremely easy tasks managed by powerful tools.
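Outside Visual Studio, the same procedure can be invoked from a client program. The sketch below (not from this text) shows one common approach from Python with pyodbc, which has no direct API for OUTPUT parameters: run a small T-SQL batch that declares a local variable, executes the procedure with OUTPUT, and SELECTs the variable back. The driver and server names in the commented connection string are placeholder assumptions:

```python
# Hypothetical sketch: capturing a stored procedure's OUTPUT parameter via a
# T-SQL batch, the usual pyodbc workaround.

def output_param_batch(proc, param, sql_type="nvarchar(100)"):
    """Build a T-SQL batch that captures one OUTPUT parameter."""
    return ("DECLARE @result %s; "
            "EXEC %s @%s = @result OUTPUT; "
            "SELECT @result;" % (sql_type, proc, param))

batch = output_param_batch("dbo.TestProcedure", "outVal")
print(batch)

# With a live connection (connection details are assumptions):
# import pyodbc
# cnxn = pyodbc.connect("DRIVER={SQL Server};SERVER=localhost;"
#                       "DATABASE=Test;Trusted_Connection=yes")
# row = cnxn.cursor().execute(batch).fetchone()
# print(row[0])  # would print the procedure's output, "Hello World"
```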
noid - nice opaque identifier generator commands

noid [ -f Dbdir ] [ -vh ] Command Arguments

A noid minter is a lightweight database designed for efficiently generating, tracking, and binding identifiers and other ephemera, having application in identifier resolution. Noid minters are very fast, scalable, easy to create and tear down, and have a relatively small footprint. They use BerkeleyDB as the underlying database. An identifier generated by a noid minter is also known generically as a "noid" (standing for nice opaque identifier and rhyming with void).

The form, number, and intended longevity of a minter's identifiers are given by a Template and a Term supplied when the generator database is created. A supplied Term of "long" establishes extra restrictions and logging appropriate for the support of persistent identifiers. Across successive minting operations, the generator "uses up" its namespace (the pool of identifiers it is capable of minting) such that no identifier will ever be generated twice unless the supplied Term is "short" and the namespace is finite and completely exhausted. The default Term is "medium".

The noid utility parameters -- flags, Dbdir (database location), Command, Arguments -- are described later under COMMANDS AND MODES. There are also sections covering persistence, templates, rule-based mapping, URL interface, and name resolution.

Once the noid utility is installed, the command,

    noid dbcreate s.zd

will create a minter for an unlimited number of identifiers. It produces a generator for medium term identifiers (the default) with the Template, s.zd, governing the order, number, and form of minted identifier strings. These identifiers will begin with the constant part s and end in a digit (the final d), all within an unbounded sequential ( z) namespace. The TEMPLATES section gives a full explanation. This generator will mint the identifiers, in order, s0, s1, s2, ..., s9, s10, ..., s99, s100, ... and never run out.
To mint the first ten identifiers,

    noid mint 10

When you're done, on a UNIX platform you can remove that minter with

    rm -fr NOID

Now let's create a more complex minter.

    noid dbcreate f5.reedeedk long 13030 cdlib.org oac/cmp

This produces a generator for long term identifiers that begin with the constant part 13030/f5. Exactly 70,728,100 identifiers will be minted before running out. The 13030 parameter is the registered Name Assigning Authority Number (NAAN) for the assigning authority known as "cdlib.org", and "oac/cmp" is a string chosen by the person setting up this minter to identify the project that will be operating it. This particular minter generates identifiers that start with the prefix f5 in the 13030 namespace. If long term information retention is within the mission of your organization (this includes national and university libraries and archives), you may register for a globally unique NAAN by sending email to ark at cdlib dot org.

Identifiers will emerge in "quasi-random" order, each consisting of six characters matching up one-for-one with the letters eedeed.

    noid mint 1

The first identifier should be 13030/f54x54g11, with the namespace ranging from a low of 13030/f5000000s to a high of 13030/f5zz9zz94. You can create a "locations" element under a noid and bind three URLs to it with the command,

    noid bind set 13030/f54x54g11 locations \
        ''

The template's final k causes a computed check character to be added to the end of every generated identifier. It also accounts for why the lowest and highest noids look a little odd on the end. The final check character allows detection of the most common transcription errors, namely, incorrect entry of one character and the transposition of two characters. The next command takes three identifiers that someone might ask you about and determines that, despite appearances, only the first is in the namespace of this minter.
    noid validate - 13030/f54x54g11 13030/f54y54g11 \
        13030/f54x45g11

To make way for creation of another minter, you can move the entire minter into a subdirectory with the command,

    mkdir f57 ; mv NOID f57

A minter may be set up on a web server, allowing the NAA organization easily to distribute name assignment to trusted parties operating from remote locations. The URL INTERFACE section describes the procedure in detail. Once set up, you could mint one identifier by entering a URL such as the following into your web browser:

Using a different procedure, you can also make your identifier bindings (e.g., location information) visible to the Internet via a few web server configuration directives. The NAME RESOLUTION section explains this further.

To claim that a string identifies an object is to assert an association between them. What lends your claim credibility is a set of verifiable assertions, or metadata, about the object, such as age, height, title, or number of pages. Verifiability is outside the scope of the noid utility, but you can use a minter to record assertions supporting an association by binding arbitrary named elements and values to the identifier. Noid database elements can be up to 4 gigabytes in length, and one noid minter is capable of recording billions of identifiers.

You don't have to use the noid binding features at all if you prefer to keep track of your metadata elsewhere, such as in a separate database management system (DBMS) or on a sheet of paper. In any case, for each noid generated, the minter automatically stores its own lightweight "circulation" record asserting who generated it and when. If most of your metadata is maintained in a separate database, the minter's own records play a back up role, providing a small amount of redundancy that may be useful in reconstructing database bindings that have become damaged. An arbitrary database system can complement a noid minter without any awareness or dependency on noids.
On computers, identifier bindings are typically managed using methods that at some point map identifier strings to database records and/or to filesystem entries (effectively using the filesystem as a DBMS). The structures and logistics for bindings maintenance may reside entirely with the minter database, entirely outside the minter database, or anywhere in between. An individual organization defines whatever maintenance configuration suits it. A persistent identifier is an identifier that an organization commits to retain in perpetuity. Associations, the sine qua non of identifiers, last only as long as they (in particular, their bindings) are maintained. Often maintaining identifiers goes hand in hand with controlling the objects to which they are bound. No technology exists that automatically manages objects and associations; persistence is a matter of service commitment, tools that support that commitment, and information that allows users receiving identifiers to make the best judgment regarding an organization's ability and intention to maintain them. It will be normal for organizations to maintain their own assertions about identifiers that you issue, and vice versa. In general there is nothing to prevent discrepancies among sets of assertions. Effectively, the association -- the identifier -- is in the eye of the beholder. As a simple example, authors create bibliography entries for cited works, and in that process they make their claims, often with small errors, about such things as the author and title of the identified thing. It is common for a provider of an identifier-driven service such as digital object retrieval to allow users to review its own, typically better-maintained sets of identifier assertions (i.e., metadata), even if it minted none of the identifiers that it services. We call such an organization a Name Mapping Authority (NMA) because it "maps" identifiers to services. 
It is possible for an NMA to service identifiers even if it neither hosts nor controls any objects. It will also be normal for archiving organizations to maintain their own peculiar ideas about what persistence means. Different flavors will exist even within one organization, where, for example, it may be appropriate to apply corrections to persistent objects of one category, to never change objects of another, and to remove objects of a third category with a promise never to re-assign those objects' identifiers. One institution will guarantee persistence for certain things, while the strongest commitment made by some prominent archives will be "reasonable effort". Given the range of possibilities, a memory organization will need to record not only the identities but also the support policies for objects in its care. Any database, including a noid minter, can be used for this purpose. For persistence across decades or centuries, opinions regarding an object's identity and commitments made to various copies of it will tend naturally to become more diverse. An object may have been inherited through a chain of stewardship, subtle identity changes, and peaks of renewed interest stretching back to a completely unrelated and now defunct organization that created and named it. For its original identifier to have persisted across the intervening years, it must look the same as when first minted. At that particular time, global uniqueness required the minted identifier to bear the imprint of the issuing organization (the NAA, or Name Assigning Authority), which long ago ceased to have any responsibility for its persistence. There is thus no conflict in a mapping authority (NMA) servicing identifiers that originate in many different assigning authorities. These notions of flavors of persistence and separation of name authority function are built into the ARK (Archival Resource Key) identifier scheme that the noid utility was partly created to support. 
By design, noid minters also work within other schemes in recognition that persistence has nothing to do with identifier syntax. Opaque identifiers can be used by any application needing to reduce the liability created when identifier strings contain linguistic fragments that, however appropriate or even meaningless they are today, may one day create confusion, give offense, or infringe on a trademark as the semantic environment around us and our communities evolves. If employed for persistence, noid minters ease the unavoidable costs of long term maintenance by having a small technical footprint and by being implemented completely as open source software. For more information on ARKs, please see .

Once again, the overall utility summary is

    noid [ -f Dbdir ] [ -vh ] Command Arguments

In all invocations, output is intended to be both human- and machine-readable. Batch operations are possible, allowing multiple minting and binding commands within one invocation. In particular, if Command is given as a "-" argument, then actual Commands are read in bulk from the standard input.

The string, Dbdir, specifies the directory where the database resides. To protect database coherence, it should not be located on a filesystem such as NFS or AFS that doesn't support POSIX file locking semantics. Dbdir may be given with the NOID environment variable, overridable with the -f option. If those strings are empty, the name or link name of the noid executable (argv[0] for C programmers) is checked to see if it reveals Dbdir. If that check (described next) fails, Dbdir is taken to be the current directory.
This mechanism is designed for cases when it is inconvenient to specify Dbdir (such as in the URL interface) or when you are running several minters at once. As an example, /usr/bin/noid_fk9 specifies a Dbdir of fk9. All files associated with a minter will be organized in a subdirectory, NOID, of Dbdir; this has the consequence that there can be at most one minter in a directory. To allow noid to create a new minter in a directory already containing a NOID subdirectory, remove or rename the entire NOID subdirectory. The noid utility may be run as a URL-driven web server application, such as in a CGI that allows name assignment via remote operator. If the executable begins noidu..., the noid URL mode is in effect. Input parameters, separated by a "+" sign, are expected to arrive embedded in the query part of a URL, and output will be formatted for display on an ordinary web browser. An executable of noidu_xk4, for example, would turn on URL mode and set Dbdir to xk4. This is further described under URL INTERFACE. The noid utility may be run as a name resolver running behind a web server. If the executable begins noidr..., the noid resolver mode is in effect, which means that commands will be read from the standard input (as if only the "-" argument had been given) and the script output will be unbuffered. This mode is designed for machine interaction and is intended to be operated by rewriting rules listed in a web server configuration file as described later under NAME RESOLUTION AND REDIRECTION INTERFACE. At minter creation time, a report summarizing its properties is produced and stored in the file, NOID/README. This report may be useful to the organization articulating the operating policy of the minter. In a formal context, such as the creation of a minter for long term identifiers, that organization is the Name Assigning Authority. The -v option prints the current version of the noid utility and -h prints a help message. 
In the Command list below, capitalized symbols indicate values to be replaced by the caller. Optional arguments are in [brackets] and (A|B|C) means one of A or B or C.

dbcreate [ Template [ Term [ NAAN NAA SubNAA ] ] ]

    Create a database that will mint (generate) identifiers according to the given Template and Term. As a side-effect this causes the creation of a directory, NOID, within Dbdir. If you have several generators, it may be convenient to operate each from within a Dbdir that uniquely identifies each Template; for example, you might change to a directory that you named fk6 after the Template fk.rdeedde ("fk" followed by 6 variable characters) of the minter that resides there.

    The Term declares whether the identifiers are intended to be "long", "medium" (the default), or "short". A short term identifier minter is the only one that will re-mint identifiers after the namespace is exhausted, simply returning the oldest previously minted identifier. As mentioned earlier, however, some namespaces are unbounded and never run out of identifiers.

    If Term is "long", the arguments NAAN, NAA, and SubNAA are required, and all minted identifiers will be returned with the NAAN and a "/" prepended to them. The NAAN is a namespace identifier and should be a globally unique Name Assigning Authority (NAA) number. Apply for one by email to ark@cdlib.org, or for testing purposes, use "00000" as a non-unique NAAN. The NAA argument is the character string equivalent for the NAAN; for example, 13960 corresponds to the NAA, "archive.org". The SubNAA argument is also a character string, but is a locally determined and possibly structured subauthority string (e.g., "oac", "ucb/dpg", "practice_area") that is not globally registered.

    If Template is not supplied, the minter freely binds any identifier that you submit without validating it first. In this case it also mints medium term identifiers under the default Template, .zd.

mint N [ Element Value ]

    Generate N identifiers.
    If other arguments are specified, for each generated noid, add the given Element and bind it to the given Value. [Element-Value binding upon minting is not implemented yet.]

    There is no "unmint" command. Once an identifier has been circulated in the outside world, it may be hard to withdraw because external users and systems will have bound it with their own assertions. Even within the minting organization, removing all of the identifier's supporting bindings could entail actions such as file deletion that are outside the scope of the minter. While there is no command capable of withdrawing a circulated identifier, it is nonetheless easy to queue an identifier for reminting and to hold it against the possibility of minting at all. Identifiers that are long term should be treated as non-renewable resources except when you are absolutely sure about recycling them. [This command is not implemented yet.]

peppermint N [ Element Value ]

    Generate N "peppered" identifiers. A peppered identifier is a regular identifier concatenated with a "!" character and a randomly generated cookie -- the pepper -- which serves as a kind of per-identifier password. (Salt is a technical term for some extra data that makes it harder to crack encrypted values; we use pepper for some extra data that makes it harder to crack unencrypted values.) To provide an extra level of database security, the base identifier, which is everything up to the "!", should be used in all public communication, but the complete peppered identifier is required for all noid operations that would change values in the database. As with the mint command, if other arguments are specified, for each generated noid, add the given Element and bind it to the given Value.

bind How Id Element Value

    For the given Id, bind the Element to Value according to How. The Element and Value may be arbitrary strings.
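The peppered-identifier shape described above can be sketched as follows. The cookie length and its alphabet are assumptions made for illustration -- the text only specifies that the pepper is a randomly generated cookie joined to the base identifier by "!":

```python
# Illustrative sketch of a peppered identifier: <base>!<cookie>.

import secrets

EXTENDED_DIGITS = "0123456789bcdfghjkmnpqrstvwxz"

def peppermint(base_id, cookie_len=8):
    cookie = "".join(secrets.choice(EXTENDED_DIGITS) for _ in range(cookie_len))
    return base_id + "!" + cookie

def public_form(peppered_id):
    # Everything up to the "!" is the form used in all public communication;
    # the full peppered form is needed for operations that change the database.
    return peppered_id.split("!", 1)[0]

p = peppermint("13030/f54x54g11")
print(p)               # e.g. 13030/f54x54g11!7m2q0dxs (cookie is random)
print(public_form(p))  # 13030/f54x54g11
```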
    There are two reserved Element names allowing Values to be entered that are too large or syntactically inconvenient (depending on the calling environment's quoting restrictions) to pass in as command-line tokens. If the Element is ":" and no Value is present, lines are read from the standard input up to a blank line; they will contain Element-colon-Value pairs in essentially email header format, with long values continued on indented lines. If the Element is ":-" and no Value is present, lines are read from the standard input up to end-of-file; the first non-comment, non-blank line must have an Element-colon to specify an Element name, and all the remaining input (up to EOF) is taken as its corresponding Value. Lines beginning with "#" are considered "comment" lines and are skipped.

    The How argument specifies one of the following kinds of binding. Of these, the set, add, insert, and purge kinds "don't care" if there is no current binding.

    new      Only if Element does not exist, create a new binding.
    replace  Only if Element exists, undo any old bindings and create a new binding.
    set      Means new or, failing that, replace.
    append   Only if Element exists, place Value at the end of the old binding.
    add      Means new or, failing that, append.
    prepend  Only if Element exists, place Value at the beginning of the old binding.
    insert   Means new or, failing that, prepend.
    delete   Remove any trace of Element, returning an error if it did not exist to begin with.
    purge    Remove any trace of Element, returning success whether or not it existed to begin with.
    mint     Means new, but ignore the Id argument (actually, confirm that it was given as new) and mint a new Id first. [This kind of binding is not implemented yet.]
    peppermint
             Means new, but ignore the Id argument (new) and peppermint a new Id first.

    The RULE-BASED MAPPING section explains how to set up retrieval using non-stored values.

fetch Id [ Element ... ]

    For the noid, Id, print with labels all bindings for the given Elements. If no Element is given, find and print all bindings for the given Id.
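The binding kinds just described -- new, replace, set, append, add, prepend, insert, delete, and purge -- can be modeled with a plain dictionary. This is a toy illustration only; a real minter stores its bindings in BerkeleyDB:

```python
# Toy model of noid's binding kinds over a dict of element -> value.

def bind(bindings, how, element, value=None):
    exists = element in bindings
    if how == "new" and not exists:
        bindings[element] = value
    elif how == "replace" and exists:
        bindings[element] = value
    elif how == "set":                    # new or, failing that, replace
        bindings[element] = value
    elif how == "append" and exists:
        bindings[element] += value
    elif how == "add":                    # new or, failing that, append
        bindings[element] = bindings.get(element, "") + value
    elif how == "prepend" and exists:
        bindings[element] = value + bindings[element]
    elif how == "insert":                 # new or, failing that, prepend
        bindings[element] = value + bindings.get(element, "")
    elif how == "delete" and exists:
        del bindings[element]
    elif how == "purge":                  # succeeds whether or not it existed
        bindings.pop(element, None)
    else:
        raise KeyError("%s: existence precondition failed for %r"
                       % (how, element))
    return bindings

b = {}
bind(b, "set", "locations", "http://a.example ")
bind(b, "append", "locations", "http://b.example")
print(b["locations"])  # http://a.example http://b.example
```

Note how set, add, insert, and purge never raise: those are exactly the kinds the text says "don't care" whether a current binding exists.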
    This is the verbose version of the get command, in that it prints headers and labels for everything it finds.

get Id [ Element ... ]

    For the noid, Id, print without labels all bindings for the given Elements. If no Element is given, find and print all bindings for the given Id. This is the quiet version of the fetch command, in that it suppresses all headers and labels. Between each Element requested, the output will be separated by a blank line.

hold ( set | release ) Id ...

    Place or remove a hold on one or more Ids. A hold placed on an Id that has not been minted will cause it to be skipped when its turn to be minted comes around. A hold placed on an Id that has been minted will make it impossible to queue (typically for recycling). Minters of long term identifiers automatically place a hold on every minted noid. Holds can be placed or removed manually at any time.

queue ( first | lvf | Time ) Id ...

    Queue one or more Ids for minting. Time is a number followed by units, which can be d for days or s for seconds (the default units). This can be used to recycle noids now or after a delay period. With first, the Id(s) will be queued such that they will be minted before any of the time-delayed entries. With lvf (Lowest Value First), the lowest valued identifier (intended for use with numeric identifiers) will be taken from the queue for minting before all others. [ needs testing ]

validate Template Id ...

    Validate one or more Ids against a given Template, which, if given as "-", causes the minter's native Template to be used.

TEMPLATES

A Template is a coded string of the form Prefix.Mask that is given to the noid dbcreate command to govern how identifiers will be minted. The Prefix, which may be empty, specifies an initial constant string. For example, upon database creation, in the Template tb7r.zdd the Prefix says that every minted identifier will begin with the literal string tb7r. Each identifier will end in at least two digits ( dd), and because of the z they will be sequentially generated without limit. Beyond the first 100 mint operations, more digits will be added as needed.
The minted noids will be, in order, tb7r00, tb7r01, ..., tb7r100, tb7r101, ..., tb7r1000, ... The period (".") in the Template does not appear in the identifiers but serves to separate the constant first part (Prefix) from the variable second part (Mask). In the Mask, the first letter determines either random or sequential ordering and the remaining letters each match up with characters in a generated identifier. Perhaps the best way to introduce templates is with a series of increasingly complex examples. .rddd to mint random 3-digit numbers, stopping after 1000th .sdddddd to mint sequential 6-digit numbers, stopping after millionth .zd sequential numbers without limit, adding new digits as needed bc.rdddd random 4-digit numbers with constant prefix bc 8rf.sdd sequential 2-digit numbers with constant prefix 8rf .se sequential extended-digits (from 0123456789bcdfghjkmnpqrstvwxz) h9.reee random 3-extended-digit numbers with constant prefix h9 .zeee unlimited sequential numbers with at least 3 extended-digits .rdedeedd random 7-char numbers, extended-digits at chars 2, 4, and 5 .zededede unlimited mixed digits, adding new extended-digits as needed sdd.sdede sequential 4-mixed-digit numbers with constant prefix sdd .rdedk random 3 mixed digits plus final (4th) computed check character .sdeeedk 5 sequential mixed digits plus final extended-digit check char .zdeek sequential digits plus check char, new digits added as needed 63q.redek prefix plus random 4 mixed digits, one of them a check char The first letter of the Mask, the generator type, determines the order and boundedness of the namespace. For example, in the Template .sddd, the Prefix is empty and the s says that the namespace is sequentially generated but bounded. 
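For the all-digit case, the z-type sequence just described is easy to sketch (illustrative only; masks containing extended digits count in a larger alphabet, which this sketch does not model):

```python
# Sketch of 'z'-type sequential minting for an all-digit mask like tb7r.zdd:
# sequential numbers are padded to the mask's minimum width and simply grow
# longer once the padded range runs out.

def mint_z(prefix, min_width, n):
    return [prefix + str(i).zfill(min_width) for i in range(n)]

ids = mint_z("tb7r", 2, 101)
print(ids[0], ids[99], ids[100])  # tb7r00 tb7r99 tb7r100
```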
The generator type may be one of:

  r   quasi-randomly generated identifiers
  s   sequentially generated identifiers, limited in length and number by the length of the Mask
  z   sequentially generated identifiers, unlimited in length or number, re-using the most significant mask character (the second character of the Mask) as needed

Although the order of minting is not obvious for r type minters, it is "quasi-random" in the sense that on your machine a minter created with the same Template will always produce the same sequence of noids over its lifetime. Quasi-random is a shade more predictable than pseudo-random (which, technically, is as random as computers get). This is a feature designed to help noid managers in case they are forced to start minting again from scratch; they simply process their objects over in the same order as before to recover the original assignments.

After the generator type, the rest of the Mask determines the form of the non-Prefix part, matching up letter-for-character with each generated noid character (an exception for the z case is described below). In the case of the Template xv.sdddd, the last four d Mask characters say that all identifiers will end with four digits, so the last identifier in the namespace is xv9999. When z is used, the namespace is unbounded and therefore identifiers will eventually need to grow in length. To accommodate the growth, the second character (e or d) of the Mask will be repeated as often as needed; for instance, when all 4-digit numbers are used up, a 5th digit will be added.
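To make the Mask mechanics concrete, here is a small illustrative sketch (not the noid implementation) that expands an unbounded sequential template of the z type with pure-digit (d) mask characters; the r, s, e, and k features are not modeled:

```python
# Hypothetical helper: expand a Prefix.zMask template such as "tb7r.zdd"
# into its first `count` minted identifiers.  Handles only 'z' generators
# with pure-digit ('d') masks.
def mint_sequential(template, count):
    prefix, mask = template.split(".")
    assert mask.startswith("z"), "sketch covers only unbounded sequential masks"
    width = len(mask) - 1                    # minimum number of digits
    for n in range(count):
        yield prefix + str(n).zfill(width)   # grows past `width` as needed

ids = list(mint_sequential("tb7r.zdd", 102))
print(ids[0], ids[99], ids[100], ids[101])   # tb7r00 tb7r99 tb7r100 tb7r101
```

Note how the identifiers grow a digit once the two-digit space is exhausted, matching the tb7r00 ... tb7r100 ... sequence described above.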
After the generator type character, Mask characters have the following meanings:

  d   a pure digit, one of { 0123456789 }
  e   an "extended digit", one of { 0123456789bcdfghjkmnpqrstvwxz } (lower case only)
  k   a computed extended-digit check character; if present, it must be the final Mask character

The set of extended digits is designed to help create more compact noids (a larger namespace for the same length of identifier) and to discourage "accidental semantics", namely, the introduction of strings that have unintended but commonly recognized meanings. Opaque identifiers are desirable in many situations, and the absence of vowels in extended digits is a step in that direction. To reduce visual mismatches, there is also no letter "l" (ell) because it is often mistaken for the digit "1".

The optional k Mask character, which may only appear at the end, enables detection of cases when a single character is mistyped and when two adjacent characters have been transposed -- the most common transcription errors. A final k in the Mask will cause a check character to be appended after first computing it on the entire identifier generated so far, including the NAAN if one was specified at database creation time. For example, the final digit 1 in 13030/f54x54g11 was first computed over the string 13030/f54x54g1 and then added to the end.

Any Element may be bound to a class of Ids such that retrieval against that Element for any Id in the class returns a computed value when no stored value exists. The class of Ids is specified via a regular expression (Perl-style) that will be checked for a match against Ids submitted via a retrieval operation (get or fetch) that names any Element bound in this manner. If the match succeeds, the element Value that was bound with the Id class is used as the right-hand side of a Perl substitution, and the resulting transformation is returned. We call this rule-based mapping, and it is probably best explained by working through the examples below.
To set up rule-based mapping for an Id class, construct a bind operation with an Id of the form :idmap/Idpattern, where Idpattern is a Perl regular expression. Then choose an Element name that you wish to have trigger the pattern match check whenever that Element is requested via a retrieval operation and a stored value does NOT exist; any Element will work as long as you use it for both binding and retrieving. Finally, specify a Value to be used as replacement text that transforms matching Ids into computed values via a Perl s/// substitution.

As a simple example,

  noid bind set :idmap/^ft redirect g7h

would cause any subsequent retrieval request against the Element named "redirect" to try pattern matching when no stored value is found. If the Id begins with "ft", it would then try to replace the "ft" with "g7h" and return the result as if it were a stored value. So if the Id were ft89xr2t, the command "noid get ft89xr2t redirect" would return g7h89xr2t.

Fancier substitutions are possible, including replacement patterns that reference subexpressions in the original matching Idpattern. For example, after the first command below,

  noid bind set ':idmap/^ft([^x]+)x(.*)' my_elem '$2/g7h/$1'
  noid get ft89xr2t my_elem

the second command would return r2t/g7h/89. For ease of implementation, internally this kind of binding is stored and reported (which can be confusing) as the special noid, :idmap/Element, under element name Idpattern.

Any number of minters can be operated behind a web server from a browser or any tool that activates URLs. This section describes a one-time set up procedure to make your server aware of minters, followed by another set up procedure for each minter. The one-time procedure involves creating a directory in your web server document tree where you will place one or more noid minter databases. In this example, the directory is htdocs/nd and we'll assume the noid script was originally installed in /usr/local/bin.
  mkdir htdocs/nd
  cp -p /usr/local/bin/noid htdocs/nd/

The second command above creates an executable copy of the noid script that will be linked to for each minter you intend to expose to the web. To make your server recognize such links, include the line

  ScriptAliasMatch ^/nd/noidu(.*) "/srv/www/htdocs/nd/noidu$1"

in your server configuration file and restart the server before trying the commands that follow. If you did not install the supporting Noid.pm module normally, you may also have to store a copy of it next to the script. This completes the one-time server set up.

Thereafter, each minter that you wish to expose must first be allowed to write to its own database when invoked via the web server. Because it will be running under a special user at that time, become that user (the user your server runs under) before you create the minter. In this example that user is "wwwrun".

  cd htdocs/nd
  su wwwrun
  noid dbcreate kt.reeded
  mkdir kt5
  mv NOID kt5/
  ln noid noidu_kt5

The third command above creates a minter for noids beginning with kt followed by 5 characters. The minter is then moved into its own directory within htdocs/nd. Finally, the last command makes a hard link (not a soft link) to the noid script, which for this minter will be invoked under the name noidu_kt5.

The URL interface is similar to the command line interface, but commands are passed in via the query string of a URL, where by convention a plus sign ("+") is used instead of spaces to separate arguments. You will likely want to set up access restrictions (e.g., with an .htaccess file) so that only the parties you designate can generate identifiers. There is also no dbcreate command available from the URL interface.

To mint one identifier, you could enter the following URL into your web browser (replace "foo.ucop.edu" with your server's name):

  http://foo.ucop.edu/nd/noidu_kt5?mint+1

Reload to mint again. If you change the 1 to 20, you get twenty new and different noids.
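The same minting URL can be driven from a script rather than a browser. The following Python sketch is hypothetical convenience code, not part of noid; the host name foo.ucop.edu is the placeholder used in the example above, and mint() requires a live minter behind the web server:

```python
import urllib.request

def mint_url(host, minter, count=1):
    # "+" separates query-string arguments, per the convention described above
    return f"http://{host}/nd/noidu_{minter}?mint+{count}"

def mint(host, minter, count=1):
    # Requires a running noid minter behind the named web server
    with urllib.request.urlopen(mint_url(host, minter, count)) as resp:
        return resp.read().decode()

print(mint_url("foo.ucop.edu", "kt5", 20))
# http://foo.ucop.edu/nd/noidu_kt5?mint+20
```

Calling mint("foo.ucop.edu", "kt5", 20) would then return the minter's output for twenty new noids.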
To bind some data to an element called "myGoto" under one of the noids already minted, activate a URL of the form

  http://foo.ucop.edu/nd/noidu_kt5?bind+set+13030/kt639k9+myGoto+

followed by the value to store. In this case we stored a URL in "myGoto". This kind of convention can underlie a redirection mechanism that is part of an organization's overall identifier resolution strategy. To retrieve that stored data, activate

  http://foo.ucop.edu/nd/noidu_kt5?get+13030/kt639k9+myGoto

Bulk operations can be performed over the web by invoking the URL with a query string of just "-", which will cause the minter to look for noid commands, one per line, in the POST data part of the HTTP request. If you put noid commands in a file myCommands and run the Unix utility

  curl --data-binary @myCommands 'http://foo.ucop.edu/nd/noidu_kt5?-'

you could, for example, change the "myGoto" bindings for 500 noids in that one shell command. The output from each command in the file will be separated from the next (on the standard output) by a blank line.

In a URI context, name resolution is a computation, sometimes multi-stage, that translates a name into associated information of a particular type, often another name or an address. A resolver is a system that can perform one or more stages of a resolution. Noid minters can be set up as resolvers. In our case, we're interested in automatically translating access requests for each of a number of identifiers into requests for another kind of identifier. This is one tool in the persistent access strategy for naming schemes such as URL, ARK, PURL, Handle, DOI, and URN. You can use a noid minter to bind a second name to each identifier, even to identifiers that the minter did not generate. In principle, this will work with names from any scheme. With web browsers, a central mechanism for name resolution is known as the server redirect, and mainstream web servers can easily be configured to redirect a half million different names without suffering in performance.
You might choose not to use native web server redirects if you require resolution of several million names, or if you require software and infrastructure for non-URL-based names. Whatever your choice, maintaining a table that maps the first name to the second is an unavoidable burden.

As with the URL interface, any number of resolvers (minters underneath) can be operated behind a web server from a browser or a tool that activates URLs. This section describes a one-time set up procedure to make your server aware of resolvers, followed by another set up procedure for each resolver. The one-time procedure involves creating a directory in your web server document tree where you will place one or more noid resolver databases. In this example (and in the previous example), we use htdocs/nd:

  mkdir htdocs/nd
  cp -p /usr/local/bin/noid htdocs/nd/

The second command above creates an executable copy of the noid script that will be linked to for each resolver you intend to expose. To make your server recognize such links, include the line (slightly different from the similar line in the previous section)

  ScriptAliasMatch ^/nd/noidr(.*) "/srv/www/htdocs/nd/noidr$1"

in your server configuration file. If you did not install the supporting Noid.pm module normally, you may also have to store a copy of it next to the script. Then include the following lines in the configuration file; they form the start of a rewriting rule section that you will add to later for each resolver that you set up.

  RewriteEngine on
  # These next two files and their containing
  # directory should be owned by "wwwrun".
  RewriteLock /var/log/rewrite/lock
  RewriteLog /var/log/rewrite/log
  ## RewriteLogLevel 9

The non-comment lines above initialize the rewriting system, identify the lock file used to synchronize access to the resolver, and identify the log file, which can help in finalizing the exact rewrite rules that you use; disable logging with the default RewriteLogLevel value of 0, or set it as high as 9, with higher numbers producing more detailed information. This completes the one-time server set up for resolvers.

Thereafter, for each resolver that you wish to run, you need to set up a noid database and create a link of the form noidr... so that the noid script can be invoked in resolution mode. Unlike the URL interface, the resolution interface does not itself mint from the underlying minter. A separate URL interface may still be set up to mint and bind identifiers in the resolver database, or minting and binding can take place off the net.

In what follows, we will assume that you have set up a noid database with the same location and template as in the previous section. As before, the server is assumed to run under the user "wwwrun" and the database resides in htdocs/nd/kt5. As if our intentions included persistent identification, the minter in this example is for generating long term identifiers.

  cd htdocs/nd
  noid dbcreate kt.reeded long 13030 cdlib.org dpg
  mkdir kt5
  mv NOID kt5/
  ln noid noidr_kt5

The last command makes a new hard link (not a soft link) to the noid script, which for this resolver will be invoked under the name noidr_kt5. The resolution interface is not called by a URL directly, but is invoked once upon server startup, where the noidr... prefix tells it to run in resolution mode. In this mode it loops, waiting for and responding to individual resolution attempts from the server itself. To set up an individual resolver, define a Rewrite Map followed by a set of Rewrite Rules. This is done using server configuration file lines as shown in the next example.
As with any change to the file, you will need to restart the server before it will have the desired effect.

  # External resolution; start program once on server start
  RewriteMap rslv prg:/srv/www/htdocs/nd/noidr_kt5

  # Main lookup; add artificial prefix for subsequent testing
  RewriteRule ^/ark:/(13030/.*)$ "_rslv_${rslv:get $1 myGoto}"

  # Test: redirect [R] if it looks like a redirect
  RewriteRule ^_rslv_([^:]*://.*)$ $1 [R]

  # Test: strip prefix; pass through [PT] if intended for us
  RewriteRule ^_rslv_(/.*)$ $1 [PT]

  # Test: restore value if lookup failed; let come what may
  RewriteRule ^_rslv_$ %{REQUEST_URI}

  # Alternative: redirect failed lookup to a global resolver

When a request received by the server matches a Rewrite Rule, an attempt to resolve it via the running noidr... script is made. In this example, we will need to have bound a string representing a URL to the value for the fixed element name "myGoto" under each identifier that we wish to be resolvable. Building on the example from the previous section, assume the element "myGoto" holds the same URL as before for the noid 13030/kt639k9. A browser retrieval request made by entering or clicking on

  http://foo.ucop.edu/ark:/13030/kt639k9

would then result in a server redirect to the URL stored under "myGoto". The resolution result for an identifier is whatever the get returns, which could as easily have retrieved a stored value as a rule-based value (allowing you to redirect many similar identifiers with one rule).

This approach to resolution does not address resolver discovery. An identifier found in the wild need not easily reveal whether it is actionable or resolvable, let alone which system or resolver to ask. The usual strategy for contemporary (web era) identifier schemes relies on well-known, scheme-dependent resolvers and web proxying of identifiers embedded in URLs. For example, global resolution for a non-proxied URN or Handle uses an undisclosed internet address, hard-coded into the resolver program, from which to start the resolution process.
An ARK, PURL, or proxied Handle or URN tends to rely on a disclosed starting point. Whatever method is used for discovery, a noid resolver can in principle be used to resolve identifiers from any scheme.

The following describes the Noid Check Digit Algorithm (NCDA). Digits in question are actually "extended digits", or xdigits, which form an ordered set of R digits and characters. This set has radix R. In the examples below, we use a specific set of R=29 xdigits. When applied to substrings of well-formed identifiers, where the length of the substring is less than R, the NCDA is "perfect" for single digit and transposition errors, by far the most common user transcription errors (see David Bressoud and Stan Wagon, "Computational Number Theory", Key College Publishing, 2000). The NCDA is complemented by well-formedness rules that confirm the placement of constant data, including fixed labels and any characters that are not extended digits.

After running the NCDA on the selected substring, the resulting check digit (an xdigit, actually) is used either for comparing with a received check digit or for appending to the substring prior to issuing the identifier that will contain it. For the algorithm to work, the substring in question must be less than R characters. The extended digit set used in the current instance is a sequence of R=29 printable characters defined as follows:

  xdigit:  0  1  2  3  4  5  6  7  8  9  b  c  d  f  g
  value:   0  1  2  3  4  5  6  7  8  9 10 11 12 13 14

  xdigit:  h  j  k  m  n  p  q  r  s  t  v  w  x  z
  value:  15 16 17 18 19 20 21 22 23 24 25 26 27 28

Each xdigit in the identifier has the corresponding ordinal value shown. Any character not in the xdigit set is considered in the algorithm to have an ordinal value of zero. A check digit is an xdigit computed from the base substring and then appended to form the "checked substring" (less than R+1 characters long).

To determine if a received identifier has been corrupted by a single digit or transposition error, the relevant substring is extracted and its last character is compared to the result of the same computation performed on the preceding substring characters. The computation has three steps. Consider a base substring (no check digit appended) such as

  13030/xf93gt2        (base substring)

Step 1. Check that the substring is well-formed, that is, that all non-xdigit characters (often constant character data) are exactly where expected; if not, the substring is not well-formed and the computation aborts. (This step is required to accommodate characters such as "/" that contribute nothing to the overall computation.)

Step 2. Multiply each character's ordinal value by its position number (starting at position 1), and sum the products. For example,
To determine if a received identifier has been corrupted by a single digit or transposition error, the relevant substring is extracted and its last character is compared to the result of the same computation performed on the preceding substring characters. The computation has two steps. Consider a base substring (no check digit appended) such as 13030/xf93gt2 (base substring) Step 1. Check that the substring is well-formed, that is, that all non-xdigit characters (often constant character data) are exactly where expected; if not, the substring is not well-formed and the computation aborts. (This step is required to accommodate characters such as "/" that contribute nothing to the overall computation.) Step 2. Multiply each character's ordinal value by its position number (starting at position 1), and sum the products. For example, char: 1 3 0 3 0 / x f 9 3 g t 2 ord: 1 3 0 3 0 0 27 13 9 3 14 24 2 pos: 1 2 3 4 5 6 7 8 9 10 11 12 13 prod: 1 + 6 + 0 +12 + 0 + 0+189+104 +81 +30+154+288 +26=891 Step 3. The check digit is the xdigit whose ordinal value is that sum modulo R (divide the sum by R and take the remainder). In the example, 891 = 21 mod R (29) and so the check digit is q. This is appended to obtain the "checked substring", which is 13030/xf93gt2q (substring with check digit appended) What follows is a two-part proof that this algorithm is "perfect" with respect to single digit and transposition errors. Lemma 1: The NCDA is guaranteed against single-character errors. Proof: We must prove that if two strings differ in one single character, then the check digit (xdigit) also differs. If the n-th xdigit's ordinal is d in one string and e in another, the sums of products differ only by (... + nd + ...) - (... + ne + ...) = n(d - e) The check digits differ only if n(d - e) is not 0 mod R. Assume (contrapositively) that n(d - e) does equal 0 mod R. First, we know that n(d - e) is not zero because n is positive and d is different from e. 
Therefore, there must be at least one positive integer i such that

  n(d - e) = Ri  =>  (n/i)(d - e) = R

Now, because R is prime, either

  (a) n/i = 1 and d - e = R, or
  (b) n/i = R and d - e = 1

But (a) cannot hold because xdigit ordinals differ by at most R-1. This leaves (b), which implies that there is an integer i = n/R. But since R is prime and n (a position number) is a positive integer less than R, then 0 < i < 1, which cannot be true. So the check digits must differ.

Lemma 2: The NCDA is guaranteed against transposition of two single characters.

Proof: Non-contributing characters (non-xdigits) transposed with other characters will be detected in Step 1 when checking the constraints for well-formedness (e.g., the "/" must be at position 6 and only at position 6). Therefore we need only consider transposition of two xdigit characters. We must prove that if one string has an xdigit of ordinal e in position i and an xdigit of ordinal d in position j, and if another string is the same except for having d in position i and e in position j, then the check digits also differ. The sums of the products differ by

  (... + ie + ... + jd + ...) - (... + id + ... + je + ...)
      = (ie + jd) - (id + je)
      = e(i - j) + d(j - i)
      = d(j - i) - e(j - i)
      = n(d - e)

where n = j - i > 0 and n < R. The check digits differ only if n(d - e) is not 0 mod R. This reduces to the central statement of Lemma 1, which has been proven.

Add features that are documented but not implemented yet: Element-Value binding upon minting; the peppermint command.

The append and prepend kinds of binding currently have string-level semantics (new data is added as characters to an existing element); should there also be list-level semantics (new data added as an extra subelement)?

Add extra options for dbcreate: an option to specify one or more identifier labels to strip from requests, and one canonical label to add upon minting and reporting; an option to set the initial seed for quasi-random ordering.
Utilize the granular BerkeleyDB transaction and locking protection mechanisms.

Extend the Template Mask to allow for other character repertoires with prime numbers of elements. These would trade some eye-friendliness for much more compact identifiers (cf. UUID/GUID), possibly also a way of asking that the last character of the repertoire only appear in the check character (e.g., for i and x below).

  { 0-9 x }                      cardinality 11, mask char i
  { 0-9 a-f _ }                  cardinality 17, mask char x
  { 0-9 a-z _ }                  cardinality 37, mask char v
  { 1-9 b-z B-Z } - {l, vowels}  cardinality 47, mask char E
  { 0-9 a-z A-Z # * + @ _ }      cardinality 67, mask char w
  Visible ASCII - { % - . / \ }  cardinality 89, mask char c

Add support for simple file management associated with identifiers. For example, minting (and reminting) the noid xv8t984 could result in the creation (and re-creation) of a corresponding canonical directory xv/8t/98/4/.

This utility is in the beta phase of development. It is open source software written in the Perl scripting language with strictest type, value, and security checking enabled. While its readiness for long term application is still being evaluated, it comes with a growing suite of regression tests (currently about 250). Under case-insensitive file systems (e.g., Mac OS X), there is a chance for conflicts between the directory name NOID, script name noid, and module documentation requested (via perldoc) as Noid. Not yet platform-independent. Please report bugs to jak at ucop dot edu.

Files:
  the directory containing all database files related to a minter
  the BerkeleyDB database file at the heart of a minter
  the creation record containing minter analysis details

See also: dbopen(3), perl(1), uuidgen(1)

Authors: John A. Kunze, Michael A. Russell

Perl Modules: Noid, BerkeleyDB, Config, Text::ParseWords, Getopt::Long, Fcntl, Sys::Hostname

Script Categories: CGI, UNIX: System_administration, Web
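The check digit computation described in the algorithm section above can be sketched in Python. This is illustrative only, not the Noid.pm implementation, and it omits the Step 1 well-formedness check:

```python
# Illustrative sketch of the NOID Check Digit Algorithm (NCDA).
XDIGITS = "0123456789bcdfghjkmnpqrstvwxz"   # radix R = 29
R = len(XDIGITS)

def ordinal(c):
    """Ordinal value of a character; non-xdigits (e.g. '/') count as zero."""
    i = XDIGITS.find(c)
    return i if i >= 0 else 0

def check_digit(base):
    """Steps 2 and 3: position-weighted sum of ordinals, taken mod R."""
    assert len(base) < R, "substring must be shorter than R characters"
    total = sum(pos * ordinal(c) for pos, c in enumerate(base, start=1))
    return XDIGITS[total % R]

print(check_digit("13030/xf93gt2"))   # q  (sum is 891; 891 mod 29 = 21)
```

Appending the returned xdigit reproduces the checked substring 13030/xf93gt2q from the worked example.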
This project provides a 101 tour of the Windows Forms DataGrid control, with emphasis on easy-to-use (and easy-to-understand) customizations. You don't have database connectivity? No problem, this project's for you. We build a simple, memory-resident database using classes provided by ADO.NET; no external database is required. Next, we employ a DataGrid object to display the contents of a table within our database. Finally, we move along to customizing the columns in the DataGrid. And yes, if you want a custom combobox column, then look no further. It's robust, uncomplicated and it works!

This article was not developed in a vacuum. I would like to credit the excellence of authors Kristy K. Saunders and Dino Esposito. I'm going to elaborate on their work, tempered by personal experience, to present an article for the novice user.

We are going to model telephone numbers in just two database tables; one describes a set of countries, the other contains the phone numbers themselves. For the purpose of this exercise, let's assume any telephone number is a combination of a Country-Code, Area-Code, Office-Code, Phone Number and Extension. With the exception of Country-Code, each of these elements is stored under a corresponding column in the Phone table. The columns are PhAreaCode, PhOfficeCode, PhPhoneNo and PhExtension. Each telephone number is represented by a row in the Phone table.

The Phone table also contains a PhCountryId column that we can use to look up a matching entry in the Country table. From this table, we can extract the name of the country, and the country-code. The relationship is modeled as an arrow pointing from the PhCountryId column (in Phone) to the CyId column (in Country). For this to work, two conditions must be met: every value in the PhCountryId column must match a value in the Country table's CyId column, and each CyId value must be unique. Beyond these rules, we do want to allow for an empty value in the PhCountryId column.
This can happen when, for instance, the phone is part of an International Satellite Phone System. Clearly it makes no sense to assign a 'Country-Code' to this type of phone! In database parlance this is referred to as a null index (or DBNull in ADO.NET). We can accommodate this value in the Country table by inserting a row with a corresponding DBNull value in the CyId column. If you look into the code that generates the memory-resident database, you will see this is the very first row I add to the Country table.

Now, ADO.NET is based on the concept of a 'Connectionless Database'. The name is really a misnomer because we obviously must be connected when we read, write or update the underlying database. However, in-between times, we are working out of database subsets that reside purely in application memory space. Contrast this with classic ADO (Visual Studio 6), which puts heavy emphasis on a continuous connection to the underlying database!

As a part of this approach, ADO.NET provides classes that correspond to database tables, columns and rows. ADO.NET (within the System.Data namespace) also provides a rich set of database-related classes to manage filters, constraints, relationships, etc. However, to keep our example manageable, we'll focus primarily on just the DataSet, DataTable, DataColumn and DataRow classes.

If we have a live database (SQL Server, Oracle, Access), we can use ADO.NET-aware components to auto-generate these objects directly from the database structure and content. However, it is instructive to perform this task as a manual exercise.

Please note: the DataGrid has a visual representation that allows us to navigate between related tables. However, we are trying, instead, to portray a merger of the Phone and Country columns into a spreadsheet-like view.
For this reason, I chose not to make use of the available ADO.NET DataRelation class or the DataSet.Relations collection that DataGrid can hook into. However, you can uncomment the following lines of code (in PhoneDataSet.cs) if you would like to explore inter-table navigation:

// code to create a parent-child data relationship
// and add it to the DataSet object
DataColumn parentCol;
DataColumn childCol;

// get a handle to the parent-child data columns
parentCol = _dsInfo.Tables["Phone"].Columns["PhCountryId"];
childCol = _dsInfo.Tables["Country"].Columns["CyId"];

// Add the relationship to the DataSet but do not create constraints
// (because not all child table entries are used by the parent table)
_dsInfo.Relations.Add("ByCountry", parentCol, childCol, false);
// end of code to create a parent-child relationship

First, we generate structure for each DataTable using DataColumn objects. Only then can we populate our DataTable objects with actual data stored in DataRow objects. Finally, we move both DataTable objects (Phone and Country) into a container defined by the DataSet class. There is no obligation to perform this step; we are not making extensive use of DataSet. However, since most database applications employ this class, I thought it was representative to include DataSet in my example.

Database construction is handled in the PhoneDataSet.cs class file. It's quite straightforward and heavily commented. I've created a few helper routines to assemble columns, tables, rows and primary keys (which incidentally don't get used in this project). The only property exposed by this class is the DataSet object (_dsInfo) that contains our DataTables. Except for the helper functions, this is mostly throw-away code, but if you've never tried manual construction of a database, you might enjoy a quick look at the PhoneDataSet class.
The DataColumn class has two properties which are of interest to us:

// corresponds to a column name in a database table
aColumn.ColumnName;

// supposed to be a 'friendly' name for the column
aColumn.Caption;

Now, to quote from the Microsoft™ Help pages, "You can use the Caption property to display a descriptive or friendly name for a DataColumn.". Okay, what I'm hoping is that ColumnName corresponds to the title of a column in a database table. Likewise, I expect Caption will be grabbed for the displayed DataGrid column header. So in my database, I've set a friendly name for the Caption property and a hostile database descriptor for the ColumnName. As we shall shortly see, my optimism is once again to be dashed against the rocks.

Connecting our memory-resident Phone table to the DataGrid is simplicity itself; we use two properties of the DataGrid called DataSource and DataMember. However, we do get choices on how we use these properties. First, we can connect directly to the Phone DataTable like this:

grdPhone.DataSource = _pdsPhone._dsInfo.Tables["Phone"];

Alternately, we can connect to the DataSet container, then identify the contained DataTable by name:

this.grdPhone.DataSource = _pdsPhone._dsInfo;
this.grdPhone.DataMember = "Phone";

Both approaches work equally well, which hints at the true versatility of the DataGrid control. However, our DataGrid looks quite sad and is, frankly, less than I was hoping for. At the very least, I thought the DataGrid would pick up and display the Caption property from the DataColumn objects. Instead, what I see is the ColumnName property, which is not what I wanted. In addition, I really don't have much use for the index representation of a country (PhCountryId). I want to see the actual name of the country. So it's time to beautify our DataGrid. The DataGrid control has a property called TableStyles.
TableStyles is a collection of DataGridTableStyle objects, indexed by something called the MappingName. This name is used because the DataGrid can bind to many collection types; hence calling it "TableName" might be considered quite inappropriate in some circumstances. However, in our case, this will be the name we gave to a DataTable object.

After binding the DataSet and DataTable to the DataGrid, I set a breakpoint to explore the TableStyles property to see how it was constructed. Unfortunately, the DataGrid is running off an internal, default collection of DataGridTableStyles that we are not intended to access (it's protected!). Fortunately, there is a published trick to exposing this collection:

DataGridTableStyle GridTableStyle = new DataGridTableStyle();
GridTableStyle.MappingName = "Phone";

// adding the table style corresponding to the Phone table induces the grid to
// populate our DataGridTableStyle object with the corresponding column styles
grdPhone.TableStyles.Add(GridTableStyle);

Amazingly, internal code within the DataGrid has kindly populated my DataGridTableStyle object with all the information about DataGrid columns that I could reasonably wish for. Here is what I get:

TableStyles[]        each entry is a DataGridTableStyle exposing a GridColumnStyles collection
GridColumnStyles[]   each entry is a DataGridColumnStyle

A diagram makes these relationships a little clearer. Note, these objects can navigate up the hierarchy, as indicated by the arrows.

Finally, we have a set of objects (class DataGridColumnStyle) that describe the appearance and performance of each column that appears in our DataGrid! Once I have navigated down to this object, I can change just about anything I want. We can also delete or add columns, re-order them or even add custom columns. But let's not get ahead of ourselves here! My project contains a single button which is labeled "Press me".
The first time you press the button, it re-labels the columns using the Caption property of the DataColumn objects that we used to create the database itself. This task is performed by the method:

    // revise the column headers to match the Caption field of each table column
    CopyCaptionToHeader(grdPhone);

To demonstrate how simple it is to change column properties, I have also expanded the width of the PhCountryId column to 90 pixels. Finally, I've chosen to set the first column (PhIndex or IdxPhone) as read-only and centre-aligned.

We are making progress but I want to see, and select, the country name for each telephone entry that appears in the DataGrid. Unfortunately, all I have at the moment is an index value (for instance, the value "501" represents "America"). So what to do?

I've talked about the DataGridColumnStyle object which governs the appearance and performance of a single column in the DataGrid. However, in reality, DataGridColumnStyle is an abstract class. Unfortunately (and I would love to know why), we are only offered two concrete subclasses which we can actually use. These are:

- DataGridBoolColumn, for displaying and editing Boolean values
- DataGridTextBoxColumn, which edits cell contents through a TextBox

Oops! Neither of these is going to help me much! What I really want is a ComboBox which magically appears whenever I click within a cell under the "Country" column header. Okay, I've read several articles which offered a custom ComboBox column and right here I'm offering you my interpretation of this useful class. Due to a failure in my imaginative-naming subroutines, I've called my class MyComboColumn and you can view the code in MyComboColumn.cs. If you want to understand how I arrived at this class, then read on. However if you simply want to use the class "as is", then skip to the section entitled "Using the DataGridComboBoxColumn".

Now, the Visual Studio .NET Help files invite us to sub-class DataGridColumnStyle to create our own custom columns.
I'm told which methods I need to override, but when do they get called and why? What are my responsibilities as the coder of a new, robust sub-class? I spent several hours experimenting, then decided to take the path of least resistance. I simply sub-classed the DataGridTextBoxColumn as others have done before me! What I discovered during my experimenting centred on three things: the DataGrid.Controls collection, the Paint() overrides and the Edit() overrides.

My first action was to simply override the Paint() methods and put a breakpoint in each before calling the base.Paint() method. In this way, I was able to determine which signature was in use. I could equally well have drawn a picture instead of a string! How cool, we are half way to a DataGridPictureColumn!

Next I override the Edit() methods. Again there are multiple signatures but only one appears to be in use within the DataGridTextBoxColumn. So I can now construct an override method to create and display a ComboBox within the boundaries supplied on the Edit() parameter list. I must also remember to add the ComboBox to the DataGrid.Controls property. I cannot overstress the importance of this step! I attach a Leave event handler, so whenever the ComboBox loses focus, we execute code to make the ComboBox invisible. And voila! A DataGridComboBoxColumn control. Well almost. We have a few more tasks to take care of.

We do have one additional problem, and it is quite significant. I would like to thank my (almost) tame testers, Dave (I can break anything) and Baldev (I can terrorise any coder) for pointing out this issue to me. The ComboBox control may be implemented using either a DropDownList or a DropDown style. The two styles result in quite different behaviors.

With the DropDown style, the navigation keys (up-arrow or down-arrow) select the previous or next row from the ComboBox. We can also edit the selected entry (which in most cases is undesirable).
With the DropDownList style, the navigation keys (up-arrow or down-arrow) select the previous or next row in the DataGrid. Editing of the selected entry is not enabled; however, if you press a key, such as the letter 'A', the next entry in the ComboBox that starts with the same letter is selected. This can be very useful! However, the drawbacks derive from behaviors inherited from the super-class (DataGridTextBoxColumn): when we navigate using the up-arrow or down-arrow keys, we do NOT get a 'Leave' event generated on the ComboBox control.

The constructor for MyComboColumn supports selection of either a DropDownList or DropDown style; both have virtue in specific circumstances, although I suspect the DropDownList style is the preferred choice.

The solution for the DropDownList style is almost as bizarre as the problem itself. After much thought (and some experiments) I discovered that setting the ReadOnly property of the super-class restores the missing 'Leave' events. Ouch!

To block editing on the DropDown style I have added an event handler for the KeyPress ComboBox event. The handler does not impact the navigation keys, nor does it impact the 'delete' key. However, editing with ASCII characters is now blocked. The value of retaining the 'delete' key is that it can be used to select the DBNull object.

To see how the two styles are accommodated in code, look at the constructor for MyComboColumn. The rest of the code is oblivious to the style we choose. And that, ultimately, is about all it takes to create a custom ComboBox column. I've stripped the code to a minimum so don't be outraged by the absence of parameter validation and error recovery code (try-catch). I felt the subject matter was complex enough without including extraneous code that might cause confusion. But the code does work without throwing exceptions, provided it's used as intended.
Which brings me to the next topic: using the DataGridComboBoxColumn. Here is all the code you need to prepare the MyComboColumn object:

    // define my custom combobox column style
    MyComboColumn aCboCol = new MyComboColumn(
        _pdsPhone._dsInfo.Tables["Country"], "CyName", "CyId", true);
    aCboCol.Width = 129;
    aCboCol.MappingName = "PhCountryId";
    aCboCol.HeaderText = "Country";

The constructor takes four parameters: the DataTable that feeds the ComboBox (here "Country"), the name of the display column ("CyName"), the name of the value column ("CyId"), and a boolean selecting the style (when true, the DropDownList style is employed).

To allow either the DropDownList or DropDown style to be chosen, I have provided a CheckBox on the Form. When checked, the DropDownList is employed.

We must also bind the MyComboColumn itself to the appropriate column in the DataTable that currently underlies the DataGrid itself. In this case, we are binding to the PhCountryId column in the "Phone" DataTable. The Width property is set purely for aesthetic value.

I've tried to illustrate the bindings in the following diagram. I hope it helps:

Bindings (a) and (b) and (c) are responsible for populating the ComboBox control and are established in the constructor for MyComboColumn. Binding (a) links to the ValueMember property of the ComboBox while binding (c) links to the DisplayMember property.

Binding (d) connects MyComboColumn to the PhCountryId column in the Phone table and is established through the MappingName property of MyComboColumn.

Binding (e) is provided by code in MyComboColumn and synchronizes the CyId and PhCountryId columns. On the Edit() method, this binding is used to select the initial entry shown in the ComboBox control when it receives focus. On the corresponding "lose-focus" event, the ValueMember property from the current row of the ComboBox is written back to the corresponding row and column in the Phone table.

Before we insert MyComboColumn, we must first remove the existing DataGridTextBoxColumn that binds to PhCountryId in the "Phone" DataTable.
We simply cannot bind a new DataGridColumnStyle to the same DataColumn in the same DataTable:

    // remove old column containing the unhelpful index value
    grdPhone.TableStyles["Phone"].GridColumnStyles.RemoveAt(1);

    // and add my custom column at the same location
    this.InsertColumnAt(grdPhone.TableStyles["Phone"], aCboCol, 1);

The RemoveAt() method works on a zero-based item array. Consequently, we are actually removing the second DataGridColumnStyle from the collection, and not the first. Now you would think that a collection which implements a RemoveAt() method would have a corresponding InsertAt() method. Well you'd be wrong. Instead I've had to kludge my own method to perform this onerous task. InsertColumnAt() makes a copy of the current DataGridColumnStyle collection. Then it clears the existing collection and repopulates it by sequentially adding objects across from the copy. At the appropriate point in this re-construction sequence, the new DataGridColumnStyle is added. Simple, inelegant, but it works.

To see the results of adding a new ComboBox column, press the button (now labeled) "Press again". If you find problems related to my implementation, please let me know. I will fix errors in the code (if I can). Changes will be incorporated if they have merit in the context of an article written for novices. Now on to a few points that might interest you:

When you've finished adding DataRow objects to your DataTable objects, remember to call AcceptChanges() on the DataSet or DataTable objects! Otherwise you may find they suddenly disappear en masse if you reject recent updates using RejectChanges().

In both the Edit() and Paint() methods that I have overridden, we must deal with the possibility of encountering a null value (System.DBNull) for the "Country" index. This is a normal occurrence when we are adding a new row to the underlying "Phone" DataTable.
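The copy-clear-readd trick behind InsertColumnAt() applies to any collection that only exposes append-style Add and whole-collection Clear. Here is a language-neutral sketch of the same idea in Python (the function name and sample column names are illustrative, not part of the article's source):

```python
def insert_at(styles, new_style, index):
    """Insert new_style at `index` into a collection that only supports
    appending and clearing, mirroring the article's InsertColumnAt():
    snapshot the items, clear the collection, then re-add them with the
    newcomer spliced in at the requested position."""
    snapshot = list(styles)          # copy the current collection
    styles.clear()                   # stands in for GridColumnStyles.Clear()
    for i, item in enumerate(snapshot):
        if i == index:
            styles.append(new_style)
        styles.append(item)
    if index >= len(snapshot):       # index past the tail: append at the end
        styles.append(new_style)
    return styles

print(insert_at(["IdxPhone", "PhCountryId", "PhNumber"], "Country", 1))
# ['IdxPhone', 'Country', 'PhCountryId', 'PhNumber']
```

The final bounds check matters: without it, an index at or beyond the old length would silently drop the new element, since the loop never reaches that position.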
In both cases, I default to using the first entry from the "Country" DataTable.

You might be wondering why I delay binding to the parent DataGrid object until the first time the Edit() method is called. This is because the DataGrid object is not available until MyComboColumn is added into the DataGrid.TableStyles collection. As the Edit() method only gets called after this necessary step has occurred, I can safely bind at this point.

Another issue which arose was an eye-opener! I discovered the ComboBox does not get populated until the ComboBox.Visible property is set for the first time. Consequently, the code to make the ComboBox visible, in Edit(), is called BEFORE we select an item in the ComboBox. To avoid multiple Paint events, I use the BeginUpdate() and EndUpdate() methods.

It is important to note that a ComboBox is taller than a TextBox which uses the same font. Consequently, you should set the PreferredRowHeight to a suitable value. I do this by creating a temporary ComboBox populated with the same font used for the current DataGridTableStyle.

Quite a few implementations I've seen appear to intercept the Scroll event for the DataGrid. But if you bind the ComboBox to the DataGrid's Controls collection, I don't see this as a necessary step. The DataGrid scrolls quite nicely even when the ComboBox is visible.

Another issue I encountered relates to the color of individual columns. The DataGrid control provides support for alternate-row coloring, however I wanted to color the columns. So I've added code which allows me to override (if I choose) the default colors for the MyComboColumn control, both background and foreground.
To use the code, uncomment the following two lines in Form1.cs:

    // uncomment these next two lines if you would like
    // some south-west colors in your column
    aCboCol.backgroundColour = System.Drawing.Color.Aquamarine;
    aCboCol.foregroundColour = System.Drawing.Color.RoyalBlue;

You can question the South-Western color scheme, but what you should get is this:

We've arrived. You can RemoveAt() or Remove() the IdxPhone column, but this was as much as I set out to achieve. Incidentally, some people think the way to remove a column is to set the width of the column to zero (0). However the column still exists and the "Tab" key will require a second press to skip over the invisible column.

I hope you can now see how to build your own columns in a DataGrid. A column class to display graphics (such as items from a Parts Bin) should now be within your grasp.

Tip of the Day: If you are buying books on C# and you're a novice, consider also buying a book or two written for Visual Basic. Because Visual Basic is often regarded as "The People's Program Language", authors are expected to write super-friendly material. So often times I can get easy-to-read guidance from Visual Basic books. C# and Visual Basic are truly convergent languages!

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below. A list of licenses authors might use can be found here.

A few supporting code fragments round things out. The first appears to keep all key strokes with the control rather than letting the grid consume them:

    internal class ModifiedControl : Control
    {
        protected override bool ProcessKeyMessage(ref Message m)
        {
            // Keep all the keys for the Control.
            return ProcessKeyEventArgs(ref m);
        }
    }

The next pair shows the DBNull-aware row lookup, once for numeric keys and once for string keys:

    if (aType != typeof(System.DBNull))
    {
        aRowA = ((DataTable)_objSource).Select(_strValue + " = " + anObj.ToString());
    }

    if (aType != typeof(System.DBNull))
    {
        aRowA = ((DataTable)_objSource).Select(_strValue + " = " + "'" + anObj.ToString() + "'");
    }

And finally, the heart of InsertColumnAt(), which re-adds the saved column styles, splicing in the new column at the requested index, with a final check intended to append the new column if it was not inserted inside the loop:

    foreach (object anObj in aColA)
    {
        if (iIdx == index)
        {
            gridStyle.GridColumnStyles.Add(newColumn);
        }
        gridStyle.GridColumnStyles.Add((DataGridColumnStyle)anObj);
        iIdx++;
    }

    if (aColA.Length > gridStyle.GridColumnStyles.Count)
        gridStyle.GridColumnStyles.Add(newColumn);
http://www.codeproject.com/Articles/5817/DataGrid-Zen-Novice?msg=1958253
- Abort initialization if temp dir for icon cache cannot be created (bug #874447).
- Fix animated icons causing unity-panel-service to eat all CPU (bug #865601).
- Prefix log output with pid of application.
- Use a placeholder menu if there is none defined yet (fix half of bug #860395).
- Make sure GTK notices icon updates (bug #812884).
- Prevent applications from stealing icons from each other (bug #850139).
- Turn debug into a runtime option.
- Log message when an "Activate" entry is added.
- Load Qt translations ourselves for the "Activate" entry.
- Make sure we fall back to the English version of "Activate".
- Move the config file from canonical/sni-qt.conf to sni-qt.conf.
- Revert namespace'ification.
- Add the ability to show an Activate entry to the menu if the app does not provide it.
- Survive a restart of the StatusNotifierWatcher process.
- Build with -fvisibility=hidden and move all classes into a SniQt namespace to reduce the risk of symbol clashes.
- Rename _qt_sni_category to _sni_qt_category.
- Touch the icon theme dir, so that GTK looks into it and finds new icons.
- Add build-time option to enable debug output.
- Document the _qt_sni_property hack.
- Add a trimming feature to the icon cache to avoid flooding the tmp dir.
- Add support for scroll events.
- Add manual tests, with a test program based on the Qt systray example.
- Add LGPL license and Nokia LGPL exception.
- Change license to LGPL 3.
- Relicense to GPLv3.
https://launchpad.net/sni-qt/+download
Can someone explain to me an efficient way of finding all the factors of a number in Python (2.7)? I can create algorithms to do this job, but I think they are poorly coded and take too long to produce a result for large numbers.

    def factors(n):
        return set(reduce(list.__add__,
                    ([i, n//i] for i in range(1, int(n**0.5) + 1) if n % i == 0)))

This will return all of the factors, very quickly, of a number n.

Why square root as the upper limit? sqrt(x) * sqrt(x) = x. So if the two factors are the same, they're both the square root. If you make one factor bigger, you have to make the other factor smaller. This means that one of the two will always be less than or equal to sqrt(x), so you only have to search up to that point to find one of the two matching factors. You can then use x / fac1 to get fac2.

The reduce(list.__add__, ...) is taking the little lists of [fac1, fac2] and joining them together in one long list. The [i, n//i] for i in range(1, int(n**0.5) + 1) if n % i == 0 yields a pair of factors whenever the remainder when you divide n by the smaller one is zero (it doesn't need to check the larger one too, it just gets that by dividing n by the smaller one). The set(...) on the outside is getting rid of duplicates, which only happens for perfect squares. For n = 4, this will return 2 twice, so set gets rid of one of them.

Edit: sqrt is actually faster than **0.5, but I'll leave it out as it's nice as a self-contained snippet.
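A Python 3 adaptation of the same snippet can be sanity-checked directly (reduce now lives in functools, and math.isqrt avoids the floating-point round-trip; the logic is otherwise unchanged):

```python
from functools import reduce
from math import isqrt

def factors(n):
    """Return the set of all positive divisors of n.

    Each trial divisor i <= sqrt(n) contributes the pair (i, n // i),
    so the loop never has to run past the integer square root."""
    return set(
        reduce(
            list.__add__,
            ([i, n // i] for i in range(1, isqrt(n) + 1) if n % i == 0),
        )
    )

print(sorted(factors(36)))   # [1, 2, 3, 4, 6, 9, 12, 18, 36] -- 6 appears once thanks to set()
print(sorted(factors(101)))  # [1, 101] -- a prime has only the trivial divisors
```

The perfect-square case (36 above) is exactly where the set() matters: the pair (6, 36 // 6) would otherwise put 6 into the result twice.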
https://codedump.io/share/IuRu3ddKdg00/1/what-is-the-most-efficient-way-of-finding-all-the-factors-of-a-number-in-python
On Tue, 18 Aug 2009 06:06:29 +0300, Alex Grönholm <alex.gronholm@nextday.fi> wrote:

> That scheme does not allow me to say "This dependency is required unless
> platform is X". A practical example of this is Beaker, where pycryptopp is
> required for cookie encryption, but works without external dependencies on
> Jython.

Yes, I follow.. That's why I'm so keen on having a pre_setup() and a post_setup() user routine, where the environment and dependencies can be exposed. I can't see any easy way to do conditional logic in a config file. imho it just has to go in code.. and it is as simple as that - haha

for example (demonstration code - not runnable):

    """
    from distutils.core import setup

    def pre_setup():
        if sys.platform() == "linux2":
            self.dependencies.append("pycryptopp")
        return

    setup()
    """

So, one could modify the dependencies dynamically based on the underlying platform. Dependencies is a list of packages within the setup class. So the user can add or remove items before the actual setup() is done.

> Also, how do I define that this package has a minimum (or maximum) required
> Python version?

    [Python_versions]
    minimum_supported=2.3
    maximum_supported=2.7

We can't tell a user it won't work - only that it isn't supported.

> Or that the distribution is only applicable to a certain platform (say,
> win32 or java)?

    [Platforms]
    supported1=linux2
    supported2=mac
    supported3=win32,win64

I think we can let the developer specify what *is* supported, and if it doesn't match then give the user the choice to proceed or not. (At their own risk)

David
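The conditional-dependency idea in the post can be expressed as an ordinary function over a declared base list. This is a sketch of the concept only, not real distutils API; the function name, platform strings and package names are illustrative:

```python
import sys

def resolve_dependencies(base, platform=None):
    """Return the dependency list adjusted for the target platform.

    Mirrors the pre_setup() idea from the post: start from the declared
    dependencies, then add or drop entries based on the platform string."""
    platform = platform or sys.platform
    deps = list(base)  # never mutate the declared list in place
    if platform.startswith("linux"):
        deps.append("pycryptopp")   # needed for cookie encryption on CPython
    elif platform == "java":
        pass                        # Jython works without external crypto deps
    return deps

print(resolve_dependencies(["beaker"], platform="linux2"))
print(resolve_dependencies(["beaker"], platform="java"))
```

The point of the thread stands: this branching is trivial in code and awkward to express in a static config file, which is why the poster wants a pre_setup() hook rather than more INI sections.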
https://mail.python.org/archives/list/distutils-sig@python.org/thread/NUGZBMBQMJZ5GXPPU6KJCKRTWVJDB63F/
Poor default for API workers in Neutron

Bug Description

The default value of 0 for the NeutronWorkers results in different behavior in Newton than it did in Mitaka. In Mitaka, neutron checked the api_worker count like so:

    workers = cfg.CONF.
    if not workers:
        workers = processutils.

Since we default to 0, this has the result of setting it to the number of threads. It is now:

    def _get_api_workers():
        workers = cfg.CONF.
        if workers is None:
            workers = processutils.
        return workers

Resulting in the value actually being treated as 0. This has a significant impact on performance. This was changed here: https:/

Upping to critical. Some perf testing has illustrated that API performance takes a huge hit.

Reviewed: https:/
Committed: https:/
Submitter: Jenkins
Branch: master

commit 959e87238877fa0
Author: Brent Eagles <email address hidden>
Date: Thu Sep 1 14:58:59 2016 -0230

    Change NeutronWorkers default to result in previous behavior

    Neutron's behavior changed in Newton with respect to the default value
    of 0 for NeutronWorkers. Instead of effectively treating it as
    "processor count", it no longer spawns any workers, resulting in only
    a single process for handling API requests. This change alters the
    default value to regain the previous behavior.

    This change also extends the worker count to the RPC worker setting -
    which was possibly an oversight in previous releases.

    The default behavior will have no result for most systems.

    Closes-Bug: 1619383
    Change-Id: Id6e3ee54416037

Should read "this has the result of setting it to the number of processors"
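The whole regression comes down to `if not workers:` versus `if workers is None:`. A toy sketch makes the difference concrete (the constant stands in for processutils.get_worker_count(), and the function names are mine, not Neutron's):

```python
DEFAULT_WORKER_COUNT = 8  # stand-in for the per-processor worker count

def mitaka_workers(configured):
    # `if not workers:` treats both None and 0 as "unset",
    # so an explicit 0 still falls back to one worker per processor
    workers = configured
    if not workers:
        workers = DEFAULT_WORKER_COUNT
    return workers

def newton_workers(configured):
    # `if workers is None:` only falls back when the option is absent,
    # so an explicit 0 is passed through: a single API process
    workers = configured
    if workers is None:
        workers = DEFAULT_WORKER_COUNT
    return workers

print(mitaka_workers(0))   # 8
print(newton_workers(0))   # 0  <- the regression described above
```

With a deployment default of 0, the Mitaka check silently masked the bad default; the stricter Newton check exposed it, which is why the fix landed in the default value rather than in the check.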
https://bugs.launchpad.net/tripleo/+bug/1619383
mScript is a .NET class that allows you to dynamically script portions of your code using VBScript. You can pass multiple variables to your script, do any necessary processing, and return multiple variables from the script.

After adding a reference to the mScriptable.dll assembly, you can use/import the namespace:

    using mScriptable;

You can then begin by creating an instance of the mScript class found in the mScriptable namespace.

    mScript script = new mScript();

Next, you need to supply the variables your script will need access to.

    script.addScriptInput("foo", "Hello");
    script.addScriptInput("bar", "World");
    script.addScriptInput("x", 100);
    script.addScriptInput("y", 13);

And also assign your script code to the script object.

    string myScriptCode = "...";
    script.setScript(myScriptCode);

Your script code must be valid VBScript. Any error in the script will be caught by the Windows Scripting Host and not the control. Currently, the return value from the Windows Scripting Host process is not being monitored to determine if a script completed successfully, so it's important to catch your own errors.

Your VBScript can retrieve the values supplied to it using either a provided inpVal(varName) function, or an abbreviated wrapper function iv(varName). Values can be returned to the .NET caller using a provided return varName, varVal subroutine. A complete sample script appears later in this article.

You can execute your script code using the runScript() method of the mScript object. Upon completion of the script, this method will return a Hashtable containing all of your script's return values.

    Hashtable rValues = script.runScript();
In the case of our example, your Hashtable would contain three entries: hwString, calc1 and calc2, holding the values the script handed back.

Back in your .NET application, you can then retrieve these values using the Hashtable.

    string returned = "";
    foreach (string rVar in rValues.Keys)
    {
        returned += rVar + " = " + rValues[rVar] + "\r\n";
    }
    MessageBox.Show(returned);

That's pretty much all there is to it. The supplied demo project shows a working example of the class in use, allowing you to supply the inputs, modify the VBScript, execute, and then view the outputs.

mScriptable relies on the Windows Scripting Host for its VBScript functionality. As such, with each call to runScript(), you incur the overhead of starting a Windows Scripting Host process. The communication between the .NET component and the scripting host is fairly crude, but workable. For each run, a new file (called [timestamp].vbs) is created. This file includes your provided code as well as some header code that provides the basic variable value retrieval and value return functionality as well as file I/O. Each call your script makes to the return subroutine spools a tab-delimited name/value pair out to another file called [timestamp].txt. When the script exits, the .NET module reads the values from this file and makes them available in a Hashtable. After the script completes and the values are retrieved, both the .vbs and .txt files are removed.

At user request, mScriptable has been modified so it can now run a user's standalone scripts. If you want to use a VBScript directly with WSH and also via a .NET application using mScriptable, you now can. The changes you'll need to make to your script: supply your own implementations of the inpVal() or iv() functions and the return() subroutine, so the script has something to call when WSH runs it directly.

To prevent duplicate declarations of functions/subroutines in the VBScript code, mScriptable will remove any user-defined functions/subroutines named inpVal(), iv(), or return().
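The tab-delimited handshake file described above is simple enough to model. Here is a hypothetical Python sketch of the parsing step the .NET side performs when the script exits; the file layout (one name/value pair per line, tab-separated) is taken from the description, while the sample names and values are invented:

```python
def parse_return_file(text):
    """Parse the contents of the [timestamp].txt file written by the
    script's 'return' subroutine.

    Each non-empty line is a tab-delimited name/value pair; a later write
    for the same name overwrites the earlier one, matching the semantics
    of loading the pairs into a Hashtable."""
    values = {}
    for line in text.splitlines():
        if not line.strip():
            continue
        name, _, value = line.partition("\t")
        values[name] = value
    return values

sample = "hwString\tHello World\ncalc1\t113\ncalc2\t87\n"
print(parse_return_file(sample))
```

Note that everything comes back as a string: a file-based handshake like this has no type information, so the .NET caller (or this sketch's consumer) must convert numeric values itself.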
A sample VBScript that can run standalone as well as via mScriptable might look like the following:

    ' My function to allow script to run via WSH directly
    Function iv(myVariable)
        Dim retVal
        Select Case myVariable
            Case "foo"
                retVal = "John"
            Case "bar"
                retVal = "Doe"
            Case "x"
                retVal = 9
            Case "y"
                retVal = 3
            Case Else
                retVal = ""
        End Select
        iv = retVal
    End Function

    Sub return(varName, varVal)
        ' this is just a dummy function
        MsgBox varName & " = " & varVal
    End Sub

When executed via the WSH directly, this script would run as you would expect. When loaded and run via mScriptable, the iv() function and the return() subroutine will be removed and replaced with mScriptable's own code that supplies the script inputs.
http://www.codeproject.com/Articles/15741/Scripting-NET-applications-with-VBScript