| text | url | dump | source | word_count | flesch_reading_ease |
|---|---|---|---|---|---|
Ok so, I'm working on a program that is supposed to solve a function:
(1+ (x^2) / sqrt(3x+6)) / (x^2 - 5x)
As you can tell from the function, the denominator cannot be equal to 0.
So here's my code that I'm having trouble with.
#include <stdio.h>
#include <math.h>

int main(void)
{
    int x;
    printf("Enter a number x: ");
    scanf("%lf", &x);
    double d = ((x * x) - (5 * x));
    if (d == 0) {
        printf("f(x) is not defined\n");
    } else {
        double n = 1 + ((x * x) / sqrt(3 * x + 6));
        double f = n / d;
        printf("f(x) = %lf\n", f);
    }
    return 0;
}
It seems like it should be doing everything it's supposed to, but unfortunately it's not. Basically, the program should begin by calculating the denominator. If the denominator is 0, the program prints that the function is not defined. If the denominator is not 0, it continues to evaluate the function.
However, when I run the program the answers come out incorrect.
For example, when I type in 2.75 for my value x I get "f(x) is not defined", even though the denominator for 2.75 is not 0.
For the value 2.74, I get -0.00011, which is wrong. The answer should be -0.48299.
I'm not sure what I am doing wrong. It seems very random. Certain values come back as not defined when the denominator is not zero, or certain values come back entirely wrong.
|
https://www.daniweb.com/programming/software-development/threads/389828/solving-a-function-where-x-needs-to-be-tested
|
CC-MAIN-2018-13
|
refinedweb
| 253
| 76.72
|
#include <FEMMetaData.hpp>
Users who are interested in a FEM based mesh database should use FEMMetaData rather than MetaData.
Invariants for FEM MetaData:
1. Each cell topology has one and only one root cell topology part. The root cell topology part for a cell topology is a unique part that can be subsetted to induce cell topology on the subset part. (Enforced by register_cell_topology being the only function that modifies the PartCellTopologyVector private data on FEMMetaData.)
2. Root cell topology parts cannot be subsets of parts with cell topologies. (Enforced by declare_part_subset.)
3. Incompatible cell topologies are prohibited, i.e. parts with different cell topologies of the same rank (as the part) cannot be subsets of each other. (Enforced by declare_part_subset.)
Definition at line 54 of file FEMMetaData.hpp.
Initialize the spatial dimension and an optional list of entity rank names associated with each rank.
FEMMetaData-specific functions begin here. This function can only be called once. To determine if a FEMMetaData class has been initialized, call the is_FEM_initialized function.
Definition at line 61 of file FEMMetaData.cpp.
This function is used to register new cell topologies and their associated ranks with FEMMetaData. Currently, several shards Cell Topologies are registered with appropriate ranks at initialization time. See: internal_declare_known_cell_topology_parts for the whole list.
Note: This function also creates the root cell topology part which is accessible from get_cell_topology_root_part
Definition at line 176 of file FEMMetaData.cpp.
Return the cell topology associated with the given part. The cell topology is set on a part through part subsetting with the root cell topology part.
Note: This function only uses the PartCellTopologyVector to look up the cell topology for a given part. This depends on declare_part_subset to update this vector correctly. If a cell topology is not defined for the given part, then an invalid Cell Topology object will be returned.
Definition at line 237 of file FEMMetaData.cpp.
Get an existing part by its application-defined text name.
Return NULL if not present and required_by == NULL. If required and not present then throws an exception with the 'required_by' text.
Definition at line 310 of file FEMMetaData.hpp.
Declare a part of the given name and entity rank. Redeclaration returns the previously declared part.
This part does not have an entity type rank.
Definition at line 318 of file FEMMetaData.hpp.
Declare an entity-relationship between parts.
If entity e1 is a member of root_part and there exists an entity relation from e1 to e2 that satisfies the relation stencil, then e2 must be a member of the target_part.
Definition at line 336 of file FEMMetaData.hpp.
Get a field, return NULL if it does not exist.
Definition at line 407 of file FEMMetaData.hpp.
Declare a field of the given field_type, test name, and number of states.
A compatible redeclaration returns the previously declared field.
Definition at line 423 of file FEMMetaData.hpp.
Definition at line 445 of file FEMMetaData.hpp.
Commit the part and field declarations so that the meta data manager can be used to create mesh bulk data.
Verifies consistency of the meta data and cleans out redundant field-data allocation rules. Once committed, no further part or field declarations can be made.
Definition at line 462 of file FEMMetaData.hpp.
|
http://trilinos.sandia.gov/packages/docs/r10.8/packages/stk/doc/html/classstk_1_1mesh_1_1fem_1_1FEMMetaData.html
|
CC-MAIN-2014-35
|
refinedweb
| 547
| 50.43
|
24 August 2011 11:29 [Source: ICIS news]
SINGAPORE (ICIS)--
FPCC restarted one of the three CDUs at its 540,000 bbl/day refinery in Mailiao on 21 August. The other two CDUs are expected to be back on stream in the coming days, according to sources familiar with the situation.
The company’s No 1 residual fluid catalytic cracker (RFCC) and its olefins conversion unit (OCU) at Mailiao will restart “very soon”, but the date has yet to be confirmed, said the FPCC source.
The RFCC can produce around 325,000 tonnes/year of propylene, while the OCU has a propylene capacity of 250,000 tonnes/year.
The shutdown of the refinery and propylene facilities was prompted by an end-July fire at the Mailiao petrochemical complex.
The No 2 RFCC had been slated for maintenance this month before the latest in a string of fire incidents at the site.
|
http://www.icis.com/Articles/2011/08/24/9487480/Taiwans-Formosa-Petrochemical-eyes-restart-of-propylene.html
|
CC-MAIN-2014-52
|
refinedweb
| 153
| 59.33
|
The bpmn2 process editor and the guvnor web process editor can`t be compatible freely? ylq2250 kun Oct 4, 2011 11:05 AM
I designed a process using the bpmn2 process editor (not the bpmn2 visual editor) which contains a Multiple Instances node, and added it into Guvnor.
Then I logged into Guvnor and opened the process with the web process editor; I just changed the start node's location and saved it. When I went back to Eclipse and opened the process, the problem appeared: the Multiple Instances node had become an Embedded Sub-Process node. How did this happen?
Why? Has anybody come across this problem? Or is this a bug of the web process designer?
1. Re: The bpmn2 process editor and the guvnor web process editor can`t be compatible freely? Adam Bach Oct 4, 2011 11:41 AM (in response to ylq2250 kun)
The web editor isn't usable yet... None of my processes from Guvnor run under jBPM. The newer versions that are coming are even worse.
2. Re: The bpmn2 process editor and the guvnor web process editor can`t be compatible freely? ylq2250 kun Oct 4, 2011 1:41 PM (in response to Adam Bach)
Thanks for your reply.
If the bpmn2 process editor and the web process editor aren't freely compatible, then Guvnor just manages the process definitions and the rules assets, and it is too heavy to bring Guvnor into our project. The web process designer is such a wonderful tool; if it is as you say, I feel very sad!
Can anybody give me some suggestions about the problem I described above? I want to use both the web process editor and the jBPM process editor to design one process; if they work compatibly, a developer can modify the process definition by opening a browser rather than using Eclipse.
3. Re: The bpmn2 process editor and the guvnor web process editor can`t be compatible freely? Adam Bach Oct 4, 2011 3:02 PM (in response to ylq2250 kun).
4. Re: The bpmn2 process editor and the guvnor web process editor can`t be compatible freely? Tihomir Surdilovic Oct 4, 2011 4:44 PM (in response to ylq2250 kun)
Hi ylq2250 kun,
the jBPM Eclipse Editor and the Web Designer both produce valid BPMN2 that is per the BPMN2 specification. They do however use different APIs: the Eclipse editor uses the jBPM built-in parser, whereas the web designer uses eclipse.bpmn2. The produced BPMN2 of those editors is somewhat different (different namespace used, different id-creation strategy) but they should be able to be used interchangeably... we have put, and keep putting, a lot of work into it, so having guys like Adam above just spew FUD about it is a little discouraging, but he has the right to his opinions.
Can you please post the BPMN2 with which you are running into issues so we can test locally and provide fixes if necessary?
Currently, the Web Designer is strongly tied to Guvnor. You can however use Guvnor just as an asset repository and embed the editor into your own application(s); for more info see the blog and video. If there is enough user interest to create a stand-alone version of the designer, then that will happen too.
Hope this helps.
5. Re: The bpmn2 process editor and the guvnor web process editor can`t be compatible freely? Tihomir Surdilovic Oct 4, 2011 4:42 PM (in response to Adam Bach)
Hi Adam,
>> The web editor isn't usable yet... None of my processes from Guvnor run under jBPM. The newer versions that are coming are even worse. <<
What you are saying has no weight to it unless you can provide some specific examples. If you can gist or pastebin the processes you are running into trouble with, it would help get them fixed. We have frequent releases of the designer and would like it if you can help us make it better for the future...
The eclipse.bpmn2 API has its own id-creation strategy. I agree that the default ids created are not very human-readable, which can be changed (please raise a Jira for it). However, existing ids of uploaded processes are not overwritten.
Again, your statement makes no sense (other than sounding off like someone who does not know what he is talking about) unless you can provide specific examples, which would help us make the web designer better for future releases.
You are of course, as is anyone else interested, more than welcome to contribute to the jBPM Web Designer if you wish.
Thanks.
Thanks.
6. Re: The bpmn2 process editor and the guvnor web process editor can`t be compatible freely? ylq2250 kun Oct 5, 2011 2:25 AM (in response to Tihomir Surdilovic)
Thanks for Tihomir Surdilovic's reply.
I came across another problem while trying to make a demo process for you:
I used the jBPM Eclipse Editor to design a demo process which contains a Multiple Instances node, and the Multiple Instances node contains a Reusable Sub-Process node. Here are the screenshots:
The sub-process is very simple; it contains only a script node:
So I added the two processes into Guvnor, and then tried to edit the testMainProcess in the web process editor (just renaming the Multiple Instances node). When I executed the "save changes" operation, the modification didn't apply, and the background server program (a Tomcat application) produced the following errors:
Is this Guvnor's problem or the web process designer's? I cannot modify the testMainProcess in the web designer because of this error, so I didn't see the Multiple Instances node changed into the Embedded Sub-Process node.
The problem I reported yesterday, where the two nodes are swapped for each other, still exists. Here are the process designer screenshots:
The test11 process still has the problem I reported yesterday.
I'll upload these three process definitions as attachments; would you please test these processes and help me fix the problems? Thank you very much.
- test11.bpmn.zip 1.3 KB
- testSubProcess.bpmn.zip 928 bytes
- testMainProcess.bpmn.zip 1.2 KB
7. Re: The bpmn2 process editor and the guvnor web process editor can`t be compatible freely? Chris Melas Oct 5, 2011 5:49 AM (in response to Tihomir Surdilovic)
Hello everyone... I totally agree with Tihomir.
We use the Eclipse Editor and the Web Designer nicely together; the web designer has become very stable and the fixes lately have been very rapid.
8. Re: The bpmn2 process editor and the guvnor web process editor can`t be compatible freely? Tihomir Surdilovic Oct 5, 2011 6:03 AM (in response to ylq2250 kun)
Thanks, I will take a look at your example and raise Jiras if needed. The error in the console you pasted is Guvnor not being able to find the web designer under localhost:8080/designer. This is the default configuration; if you tell us about your deployment setup, maybe designer is running on a different host/port/subdomain in your case? Check the latest docs on the designer; they include the configuration options on both the Guvnor and the Designer side which may help (10.4 Configuring Designer, specifically). What versions of Guvnor/Web Designer are you using? Are you using the jbpm-installer?
9. Re: The bpmn2 process editor and the guvnor web process editor can`t be compatible freely? Adam Bach Oct 5, 2011 6:38 AM (in response to Tihomir Surdilovic)
There was one of my topics where I posted 2 example differences. But they existed when using the installer versions.
When I changed to some newer web designer version (something like 56, I remember), I think it couldn't even save the gateway type. So after a few such experiences I stopped using it.
Every stable version should be attached to the installer etc. so we know which one to use.
Sorry for being discouraging, but I'm writing an MA thesis on BPM, and long ago I decided to use jBPM, but for many months every release has continuously discouraged me.
- No WebService task as a standard one, which is in the specification as standard...
- jbpm-console doesn't support the HumanTask WS specification; there is no JPA logging attached to the console, and no logging of started/stopped human tasks, which is kind of essential. There is strange Ant manipulation with the server war package because somebody committed the wrong persistence.xml to the console server. I got advice not to use it as it is only a demonstration product... come on, that sucks; every real BPMS should have such a console working. I am currently trying to modify it, but as I dwell deeper I get to the gwt-console* packages that jbpm-console depends on, and I can't find any project page for them, so I need to download sources and try to build it all manually... jbpm-console depends on different jars which should be available as projects alongside the jbpm-console sources so they can be freely modified...
- The eclipse bpmn2 plugin (not the one from the installer, and now I know it's you who is responsible for this plugin!) is not compatible although it looks great! The team seems to have abandoned further work in February this year, which is when I saw the last commit.
- Web Designer problems: no custom tasks (I saw them in some Guvnor tutorial lately, but this is not compatible with jbpm-console as far as I know).
This all makes me very discouraged about further work, but since I have spent so much time on it already I can't really change to another product. Writing my own jBPM console makes no sense, as it would take much time and duplicate current functionality, enhancing it in only a very few places...
I know the developers are doing all they can in the time they have, but I'm frustrated with the situation itself.
10. Re: The bpmn2 process editor and the guvnor web process editor can`t be compatible freely? ylq2250 kun Oct 6, 2011 11:11 PM (in response to Tihomir Surdilovic)
Thanks for Tihomir Surdilovic's reply again.
Guvnor's version is "5.3.0.CR1" for Tomcat 6.0, and the Web Designer's version is "1.0.0.055-jboss". I just renamed Guvnor's war to "drools-guvnor.war" and the web designer's war to "designer.war", put the two wars into the webapps folder of Tomcat 6.0, then started the Tomcat server and logged into Guvnor with the "" address. I didn't do any configuration for them.
By the way, can I use the Web Designer to design a process like the "testMainProcess" process I described in my earlier reply? We have a requirement to design a process containing a Multiple Instances node with a Reusable Sub-Process node inside it.
I looked through all the node elements in the Web Designer but I can't find a node like the Multiple Instances node element in the Eclipse jBPM Editor. How can I meet this requirement? Can you teach me how to design this process with the Web Designer?
Are there any documents or user guides about the Web Designer? I find there are so many properties on a node element in the Web Designer, and I don't know which properties I should set to make the process definition valid and executable by the jBPM engine.
|
https://developer.jboss.org/thread/173194
|
CC-MAIN-2018-39
|
refinedweb
| 1,936
| 60.65
|
window.console differs a lot depending on DevTools tools being open or not
Confirmed Issue #10868613 • Assigned to Jeff F.
Steps to reproduce
window.console differs a lot depending on DevTools tools being open or not. This should not happen; there should be no observable differences to web-facing content.
Quick testing led me to two differences (I'd expect more):
console.log.toString() is
function __BROWSERTOOLS_CONSOLE_SAFEFUNC() { [native code] }
when DevTools are open and
function log() { [native code] }
when they're closed. The same applies to all the other methods.
console is a namespace object only if DevTools is open. That means the following:
var log = console.log;
log('message');
works only if DevTools are open. Making console a namespace object is tracked in so maybe this one will be resolved once the other one is, but I'd pay attention to whether everything matches (the proto value, property descriptors, etc.).
Microsoft Edge Team
Changed Assigned To to “Leo L.”
Changed Status to “Confirmed”
Changed Assigned To from “Leo L.” to “Jeff F.”
|
https://developer.microsoft.com/en-us/microsoft-edge/platform/issues/10868613/
|
CC-MAIN-2017-13
|
refinedweb
| 185
| 68.06
|
Generic Methods
If you are creating multiple methods with the same functionality that vary only in the data types they accept or operate on, generic methods can save you some typing. The general structure of a generic method is as follows:
returnType methodName<type>(type argument1)
{
    type someVariable;
}
You can see that a type was indicated after the method name, wrapped inside angle brackets. Now every occurrence of this type will be substituted by the type that was indicated inside these brackets. The program below shows an example of using a generic method.
using System;

namespace GenericMethodDemo
{
    public class Program
    {
        public static void Show<X>(X val)
        {
            Console.WriteLine(val);
        }

        public static void Main()
        {
            int intValue = 5;
            double doubleValue = 10.54;
            string stringValue = "Hello";
            bool boolValue = true;

            Show(intValue);
            Show(doubleValue);
            Show(stringValue);
            Show(boolValue);
        }
    }
}
Example 1 – Using a Generic Method
5
10.54
Hello
True
We created a generic method that can accept any kind of data and show its value. We then pass different data to the same method. The method changes the type of X depending on the type of data passed as an argument. For example, when we pass an int value, every occurrence of X in the method is replaced by int, so the method will look like this if an int value is passed:
public static void Show(int val)
{
    Console.WriteLine(val);
}
You can also explicitly indicate the type to be used by the generic method when you call it (although it is not necessary). For example, the calls to the method above can be rewritten like this:
Show<int>(intValue);
Show<double>(doubleValue);
Show<string>(stringValue);
Show<bool>(boolValue);
A note about using generic methods: we cannot include computations such as adding two values inside the code, because the compiler cannot determine the actual types of the operands or whether they can be added to each other. We simply show the values inside our method because the compiler can print any type of data using the Console.WriteLine() method.
public static void Show<X>(X val1, X val2)
{
    Console.WriteLine(val1 + val2); // ERROR
}
You can also specify multiple type parameters for a generic method. Simply separate each type with a comma.
public static void Show<X, Y>(X val1, Y val2)
{
    Console.WriteLine(val1);
    Console.WriteLine(val2);
}
You can pass two different values to the method like this:
Show(5, true);
// OR
Show<int, bool>(5, true);
Therefore, X will be of type int and Y will be substituted with bool. You can also pass two arguments with the same type.
Show(5, 10);
// OR
Show<int, int>(5, 10);
|
https://compitionpoint.com/generic-methods/
|
CC-MAIN-2021-21
|
refinedweb
| 449
| 60.95
|
When people are trying to learn neural networks with TensorFlow they usually start with the handwriting database. This builds a model that predicts what digit a person has drawn based upon handwriting samples obtained from thousands of persons. To put that into features-labels terms, the combinations of pixels in a grayscale image (white, black, grey) determine what digit is drawn (0, 1, .., 8, 9).
Here we use other data.
Prerequisites
Before reading this TensorFlow Neural Network tutorial, you should first study these three blog posts:
Introduction to TensorFlow and Logistic Regression
What is a Neural Network? Introduction to Neural Networks Part I
Introduction to Neural Networks Part II
Then you need to install TensorFlow. The easiest way to do that on Ubuntu is to follow these instructions and use virtualenv.
Then install the Python pandas, NumPy, scikit-learn, and SciPy packages.
The Las Vegas Strip Hotel Dataset from Trip Advisor
Programmers who are learning to use TensorFlow often start with the iris-data database. That dataset predicts the type of Iris flower from combinations of petal and sepal measurements. But we want to do something original here instead of using the Iris dataset. So we will use the Las Vegas Strip Data Set, cited in the paper "Moro, S., Rita, P., & Coelho, J. (2017). Stripping customers' feedback on hotels through data mining: The case of Las Vegas Strip. Tourism Management Perspectives, 23, 41-52." and see if we can wrap a neural network around it.
In their paper, the authors wrote a model using the R programming language and used Support Vector Machines (SVMs) as their algorithm. That is a type of non-linear regression problem. It uses the same approach as solving regular LR problems, which is to find a line that reduces the MSE (mean squared error) to its lowest point to build a predictive model. But SVMs take that up a notch in complexity by working with multiple, nonlinear inputs, finding a plane in n-dimensional space rather than a line on the XY Cartesian plane.
Here we take the same data but use a neural network instead of an SVM. We will present this in 3 blog posts:
- Put data into numeric format.
- Train neural network.
- Make prediction.
The data and code for this tutorial is located here.
The Data
Click here to see the data in Google Sheets format. The data is too wide to fit on one screen so we show it below in two screen prints. If you read the paper cited above you can get more details about the data but basically it is TripAdvisor data for 21 Hotels along the Las Vegas Strip. The goal is to build a model that will predict what score an individual is likely to give to which hotel.
The score is 1 to 5 and the input are 20 variables described in the spreadsheet below.
The authors of the paper say that certain data, like whether the hotel has a casino, pool, number of stars, or free internet, does not have much bearing on the score given by the hotel guest on TripAdvisor. Rather, the factors that most heavily predict the score are the number of reviews the reviewer has written and how long they have been writing reviews. Other factors that influence the score are the day of the week and the month of the year.
Convert Values to Integers
You can download the code below from this iPython notebook.
First we need to convert all of those values to integers as machine learning uses arrays of numbers as input. We adopt three approaches:
- If the number is already an integer leave it.
- If the number is a YES or NO then change it to 1 or 0.
- If the element is a string, then use Python's ord() function to change each character to an integer, and sum those integers.
import pandas as pd

def yesNo(x):
    if x == "YES":
        return 1
    else:
        return 0

def toOrd(str):
    x = 0
    for l in str:
        x += ord(l)
    return int(x)

cols = ['User country', 'Nr. reviews', 'Nr. hotel reviews', 'Helpful votes',
        'Score', 'Period of stay', 'Traveler type', 'Pool', 'Gym', 'Tennis court',
        'Spa', 'Casino', 'Free internet', 'Hotel name', 'Hotel stars', 'Nr. rooms',
        'User continent', 'Member years', 'Review month', 'Review weekday']

df = pd.read_csv('/home/walker/TripAdvisor.csv', sep=',', header=0)

# Note: the original listing applied yesNo to 'Casino' twice; a second pass
# would turn the already-converted 1s back into 0s, so it is applied once here.
df['Casino'] = df['Casino'].apply(lambda x: yesNo(x))
df['Gym'] = df['Gym'].apply(lambda x: yesNo(x))
df['Pool'] = df['Pool'].apply(lambda x: yesNo(x))
df['Tennis court'] = df['Tennis court'].apply(lambda x: yesNo(x))
df['Free internet'] = df['Free internet'].apply(lambda x: yesNo(x))
df['Spa'] = df['Spa'].apply(lambda x: yesNo(x))

cols2 = ['Period of stay', 'Hotel name', 'User country',
         'Traveler type', 'User continent', 'Review month', 'Review weekday']

for y in cols2:
    df[y] = df[y].apply(lambda x: toOrd(x))

df.to_csv('tripAdvisorFL.csv')
Here we change every string to an integer. You would have to save the string-integer combinations in some data structure so that later you could see which integer corresponds to which string value.
Here is what our data looks like now.
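The two helper functions can be sketched and sanity-checked on their own (a standalone sketch, no pandas needed). One caveat worth knowing: summing character codes maps anagrams to the same integer, so the encoding is lossy.

```python
def yes_no(x):
    # "YES" -> 1, anything else -> 0, mirroring the yesNo helper above
    return 1 if x == "YES" else 0

def to_ord(s):
    # sum of character codes, mirroring the toOrd helper above
    return sum(ord(c) for c in s)

print(yes_no("YES"), yes_no("NO"))      # 1 0
print(to_ord("YES"))                    # 89 + 69 + 83 = 241
print(to_ord("abc") == to_ord("cba"))   # True: anagrams collide
```

Because of those collisions, this encoding is fine for a quick tutorial but a one-to-one mapping (e.g. a category-to-index dictionary) would be safer for a real model.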
|
http://www.bmc.com/blogs/using-tensorflow-to-create-neural-network-with-tripadvisor-data-part-i/
|
CC-MAIN-2018-26
|
refinedweb
| 939
| 55.54
|
Here is a listing of advanced C++ interview questions on “Resource Management” along with answers, explanations and/or solutions:
1. What can go wrong in resource management in C++?
a) Leakage
b) Exhaustion
c) Dangling
d) All of the mentioned
View Answer
Explanation: If there is any mishap in memory or resource management, the problems mentioned above can happen.
2. When do we say that a resource is leaked?
a) Arise of compile time error
b) It cannot be accessed by any standard means.
c) Arise of runtime error
d) none of the mentioned
View Answer
Explanation: A resource is said to be leaked when it can no longer be accessed by any standard means.
3. What kind of error can arise when there is a problem in memory?
a) Segmentation fault
b) Produce an error
c) Both a & b
d) none of the mentioned
View Answer
Explanation:None
4. What is the output of this program?
#include <iostream>
#include <new>
using namespace std;
int main ()
{
int i, n;
int * p;
i = 2;
p= new (nothrow) int[i];
if (p == 0)
cout << "Error: memory could not be allocated";
else
{
for (n=0; n<i; n++)
{
p[n] = 5;
}
for (n = 0; n < i; n++)
cout << p[n];
delete[] p;
}
return 0;
}
a) 5
b) 55
c) 555
d) Error: memory could not be allocated
View Answer
Explanation: As we gave i the value 2, it will print 5 two times.
Output:
$ g++ res.cpp
$ a.out
55
5. What is the output of this program?
#include <iostream>
using namespace std;

int main(void)
{
    const char *one = "Test";
    cout << one << endl;
    const char *two = one;
    cout << two << endl;
    return 0;
}
a) Test
b) TestTest
c) Te
d) none of the mentioned
View Answer
Explanation: We copy the pointer from one variable to the other, so both variables print "Test", producing TestTest.
Output:
$ g++ res1.cpp
$ a.out
TestTest
6. What is the output of this program?
#include <iostream>
using namespace std;

int funcstatic(int)
{
    int sum = 0;
    sum = sum + 10;
    return sum;
}

int main(void)
{
    int r = 5, s;
    s = funcstatic(r);
    cout << s << endl;
    return 0;
}
a) 10
b) 15
c) error
d) none of the mentioned
View Answer
Explanation: Even though we passed a value, the parameter is unnamed so the function never uses it; sum is always 0 + 10, so it prints 10.
Output:
$ g++ res2.cpp
$ a.out
10
7. What is the output of this program?
#include <iostream>
#include <string.h>
using namespace std;

int main()
{
    try
    {
        char *p;
        strcpy(p, "How r u");
    }
    catch (const exception& er)
    {
    }
}
b) segmentation fault
c) error
d) runtime error
View Answer
Explanation: As we are copying a string into an uninitialized pointer, the program crashes at runtime with a segmentation fault.
Output:
$ g++ res3.cpp
$ a.out
segmentation fault
8. What is meant by garbage collection?
a) Form of manual memory management
b) Form of automatic memory management
c) Used to replace the variables
d) None of the mentioned
View Answer
Explanation: Garbage collection attempts to reclaim memory occupied by objects that are no longer in use by the program.
9. What are the operators available in dynamic memory allocation?
a) new
b) delete
c) compare
d) both a & b
View Answer
Explanation: The new and delete operators are mainly used to allocate and deallocate memory at runtime.
10. Which is used to solve the memory management problem in c++?
a) Smart pointers
b) arrays
c) stack
d) none of the mentioned
View Answer
Explanation:None.
Sanfoundry Global Education & Learning Series – C++ Programming Language.
Here’s the list of Best Reference Books in C++ Programming Language.
To practice all features of the C++ programming language, here is the complete set of 1000+ Multiple Choice Questions and Answers on C++.
|
http://www.sanfoundry.com/advanced-cpp-interview-questions-resource-management/
|
CC-MAIN-2016-50
|
refinedweb
| 613
| 60.95
|
Thread: import DVD to imovie?
hey all. i have a set of training DVDs we use at work to train our new employees. they're each about 20 to 30 minutes in length, and there are about 5 DVDs total. it would make everyone's lives so much easier if i could condense the information to one disc, but i can't figure out how to get the info into iMovie without hooking up a DVD player to a capture device. could someone please help me out here?
TIA!
You'll have to rip the DVDs first before you can edit them. I would suggest using Handbrake to do this. You can then import each of the files into iMovie to "stitch" them together, then export the movie to iDVD and either create a menu system to point to each individual movie or just have one giant video.
Source: http://www.mac-forums.com/forums/movies-video/67143-import-dvd-imovie.html
|
int main (void)
{
    board_init();
    sysclk_init();
    delay_init(sysclk_get_cpu_hz());
    pio_set_output(PIOB, LED0_GPIO, HIGH, DISABLE, DISABLE);

    while (1) {
        gpio_set_pin_high(LED0_GPIO);
        delay_ms(1000);
        gpio_set_pin_low(LED0_GPIO);
        delay_ms(1000);
    }
    // Insert application code here, after the board has been initialized.
}
#include <asf.h>

#define MY_LED IOPORT_CREATE_PIN(PIOB, 27)

int main (void)
{
    board_init();
    sysclk_init();
    delay_init(sysclk_get_cpu_hz());
    ioport_init();
    ioport_set_pin_dir(MY_LED, IOPORT_DIR_OUTPUT);

    while (1) {
        ioport_set_pin_level(MY_LED, true);
        delay_ms(2500);
        ioport_set_pin_level(MY_LED, false);
        delay_ms(2500);
    }
}
@C:\arduino-1.5.2\hardware\tools\bossac.exe --port=COM5 -U false -e -w -v -b $(TargetDir)\$(TargetName).bin -R
ioport_init();
ioport_set_pin_dir(MY_LED, IOPORT_DIR_OUTPUT);
ioport_set_pin_level(MY_LED, true);
I am using the Atmel Studio 6.1 in order to better learn the Cortex M3 ARM microcontroller.
gpio_set_pin_high(LED0_GPIO) is no closer to the hardware than digitalWrite() in terms of learning anything; it's just another name for a function to set a pin. As noted, even if you get down and dirty with the port control registers, that's nothing to do with "ARM". You could start writing code in assembler; that really is learning ARM. Apart from that, offhand I think the only ARM-specific things you can play with are the NVIC or the SysTick counter.
I may have to go back to the modest days of assembly ... in order to really learn the hardware.
say I know how to program the ARM to prospective employers.
implicit declaration of function 'delay_init' implicit declaration of function 'delay_ms'
Source: http://forum.arduino.cc/index.php?topic=173679.msg1304039
|
Smart, pythonic, ad-hoc, typed polymorphism for Python.
mypy, PEP561 compatible
pip install classes
You also need to configure mypy correctly and install our plugin:
# In setup.cfg or mypy.ini: [mypy] plugins = classes.contrib.mypy.classes:
json is parseable by the given schema?
There should be a better way of solving this problem! And typeclasses are a better way!
How would new API look like with this concept?
from typing import Union
from classes import typeclass

@typeclass
def to_json(instance) -> str:
    """This is a typeclass definition to convert things to json."""

Once instance implementations are registered for the relevant types, calling it produces JSON strings:

>>> to_json(True)
'true'
>>> to_json(1)
'1'
>>> to_json([False, 1, 2.5])
'[false, 1, 2.5]'

See the full docs to learn more.
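If the classes package itself is not at hand, the dispatch idea behind a typeclass can be sketched in a few lines of plain Python. This is an illustration of the concept only, not the library's actual implementation, and all helper names here are made up:

```python
def typeclass(default):
    """Minimal single-dispatch sketch of the typeclass idea."""
    registry = {}

    def dispatch(instance, *args, **kwargs):
        # Look up the implementation registered for this exact type.
        impl = registry.get(type(instance))
        if impl is None:
            raise NotImplementedError(type(instance))
        return impl(instance, *args, **kwargs)

    def instance(tp):
        def register(func):
            registry[tp] = func
            return func
        return register

    dispatch.instance = instance
    return dispatch

@typeclass
def to_json(instance) -> str:
    """Convert supported types to a JSON string."""

@to_json.instance(bool)
def _bool(instance):
    return 'true' if instance else 'false'

@to_json.instance(int)
def _int(instance):
    return str(instance)

print(to_json(True))  # -> true
print(to_json(1))     # -> 1
```

The real library adds what this sketch lacks: mypy-checked type safety for each registered instance.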
Want more? Go to the docs! Or read these articles:
— ⭐️ —
Drylabs maintains dry-python and helps those who want to use it inside their organizations.
Read more at drylabs.io
Source: https://openbase.com/python/classes
|
C++ Problem solving
Description of ATM program
In this ATM program we have to calculate the remaining balance after any transaction. If the balance is sufficient for the transaction, we print the remaining balance after the transaction. But if the balance is insufficient, then we print the current balance unchanged.
There is also a condition in this ATM program: the ATM machine accepts only withdrawal requests that are multiples of $5. If the user places a withdrawal request which is not a multiple of 5, then we print the current balance, as the withdrawal is not successful.
This problem is taken from CodeChef for the practice and learning purposes of our readers. In this C++ practice article we will discuss the solution of this problem with source code.
Solution and source code of ATM program
We will take two inputs from the user: the amount the user wants to withdraw and the current balance of the account. As the bank charges $0.50 for each successful transaction, we have to check that the total balance is sufficient for the withdrawal plus the bank charge.
Imagine the user gives two numbers, 30 and 120.00.
Then we have to print the remaining balance:
120.00 – (30 + 0.50)
= 89.50
But if the user gives two numbers such as 300 and 120.00, then it is not possible to withdraw the money, as the remaining balance would not be sufficient for this withdrawal request.
Again, any withdrawal request which is not a multiple of 5 will not be successful, and we have to print the current balance instead.
Now, let’s see the source code for solving the ATM problem in C++.
// Solving the ATM problem in C++
#include <iostream>
#include <iomanip>
using namespace std;

int main() {
    int tran;
    float newBal;
    cin >> tran >> newBal;

    // Withdrawal succeeds only if the amount is a multiple of 5
    // and the balance covers the amount plus the 0.50 bank charge.
    if (tran % 5 == 0 && newBal >= (tran + 0.5)) {
        newBal = newBal - (tran + 0.5);
    }
    cout << fixed << setprecision(2) << newBal << endl;
}
Try giving different inputs to test the program. You will see output like this.
30 120.00
89.50

42 120.00
120.00

300 120.00
120.00
For each case above, the first line is the input: the amount the user wants to withdraw, followed by the current balance. The second line contains the output for that input.
Source: https://worldtechjournal.com/cpp-practice/atm-program-in-cpp/
|
04 September 2013 16:52 [Source: ICIS news]
HOUSTON (ICIS)--
That puts the assessed range for non-oil grade SBR 1502 unchanged at 80.5-90.5 cents/lb ($1,775-1,995/tonne, €1,349-1,516/tonne), while SBR oil-extended grade 1712 rose 11 cents/lb to 74.5-84.5 cents/lb.
The big development, sources said, was the narrowing of the spread between SBR 1502 and SBR 1712. The difference is usually about 5-10 cents/lb less for 1712. But with 1712 more subject to the vagaries of crude oil prices, the recent uptick in crude futures has caused the spread to narrow to just 5-7 cents/lb, sources said.
Both US SBR and BD are lower on weakness in the replacement-tyre market, which accounts for about 80% of US total tyre sales. Tyres are a key downstream market for both SBR and BD.
Replacement tyres are not selling, sources say, because of weak economies worldwide. Last week, Ford Europe said it estimated that the European auto market had hit bottom, but it did not see it recovering to pre-recession levels for at least five years. In the near term, sources had expected the replacement-tyre market to recover sometime in the second half of 2013, but now estimates are for no recovery before second quarter 2014.
Source: http://www.icis.com/Articles/2013/09/04/9703211/us-sbr-15021712-price-spread-tightens-on-higher-crude-oil.html
|
On 9/07/2003 14:37 Reinhard Pötz wrote:
> [3] I'm aware of Sylvain's and Marc's proposal on changing
> the scope of available controllers. I contacted Sylvain off-list
> and the said that they want to come up with a concrete
> implementation
> of their proposal in the future and this should *not* influence
> the release of Cocoon 2.1 as the proposed changes would only have
> a small impact on the *public* interfaces.
+1 under the assumption that:
- we can reasonably expect that Sylvain & Marc will try to minimize
the impact on public interfaces (i.e. sitemap syntax and friends) of
their planned future contributions
- the 'original flow' is considered to be but one implementation of
the aspect of flow handling in Cocoon, especially in the naming sense.
It should be possible that sitemap constructs can be reconsidered, and
it should also be possible that 'Flow' is considered to be a generalized
service in Cocoon, with a number of realisations, without the 'original
flow' claiming the entire Flow concept thought- and namespace
It's good to see progress in this matter, thank you for shepherding the
discussion into a vote, Reinhard!
</Steven>
--
Steven Noels
Outerthought - Open Source, Java & XML Competence Support Center
Read my weblog at
stevenn at outerthought.org stevenn at apache.org
Source: http://mail-archives.apache.org/mod_mbox/cocoon-dev/200307.mbox/%3C3F0C21F0.40706@outerthought.org%3E
|
3.4
Raw dump of the main novelties and improvements that will be part of the 3.4 release compared to the 3.3 branch:
New features
- New versatile API for sub-matrices, slices, and indexed views [doc]. It basically extends A(.,.) to let it accept anything that looks like a sequence of indices with random access. To make it usable, this new feature comes with new symbols: Eigen::all, Eigen::last, Eigen::lastp1, and the functions Eigen::seq, Eigen::seqN, Eigen::lastN.
- Reshaped views through the new members reshaped() and reshaped(rows,cols). This feature also comes with new symbols: Eigen::AutoOrder, Eigen::AutoSize. [doc]
- A new helper Eigen::fix<N> to pass compile-time integer values to Eigen's functions [doc]. It can be used to pass compile-time sizes to .block(...), .segment(...), and all variants, as well as the first, size and increment parameters of the seq, seqN, and lastN functions introduced above. You can also pass "possibly compile-time values" through Eigen::fix<N>(n). Here is an example comparing the old and new way to call .block with fixed sizes:

template<typename MatrixType,int N>
void foo(const MatrixType &A, int i, int j, int n) {
  A.block(i,j,2,3);                       // runtime sizes
  // compile-time nb rows and columns:
  A.template block<2,3>(i,j);             // 3.3 way
  A.block(i,j,fix<2>,fix<3>);             // new 3.4 way
  // compile-time nb rows only:
  A.template block<2,Dynamic>(i,j,2,n);   // 3.3 way
  A.block(i,j,fix<2>,n);                  // new 3.4 way
  // possibly compile-time nb columns
  // (use n if N==Dynamic, otherwise we must have n==N):
  A.template block<2,N>(i,j,2,n);         // 3.3 way
  A.block(i,j,fix<2>,fix<N>(n));          // new 3.4 way
}
- A new namespace indexing allowing to exclusively import the subset of functions and symbols that are typically used within A(.,.), that is: all, seq, seqN, lastN, last, lastp1. [doc]
- Misc
- Add templated subVector<Vertical/Horizontal>(Index) aliases to col/row(Index) methods, and subVectors<>() aliases to rows()/cols().
- Add innerVector() and innerVectors() methods.
- Add diagmat +/- diagmat operators (bug 520)
- Add specializations for res ?= dense +/- sparse and res ?= sparse +/- dense. (see bug 632)
- Add support for SuiteSparse's KLU sparse direct solver (LU-based solver tailored for problems coming from circuit simulation).
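For readers coming from Python, the semantics of the new indexed views above roughly mirror NumPy slicing. This is an analogy only, not Eigen code:

```python
import numpy as np

# NumPy analogue of Eigen 3.4 indexed views:
#   Eigen: A(seq(1,2), all)       <->  NumPy: A[1:3, :]
#   Eigen: A(last, seqN(0,2,2))   <->  NumPy: A[-1, 0:4:2]
A = np.arange(16).reshape(4, 4)

rows_1_2 = A[1:3, :]        # rows 1..2 (inclusive bounds in Eigen's seq), all columns
every_other = A[-1, 0:4:2]  # entries 0 and 2 of the last row
```

Note the one semantic difference: Eigen's seq(first,last) takes an inclusive last index, whereas Python's slice stop is exclusive.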
Performance optimizations
- Vectorization of partial-reductions along outer-dimension, e.g.: colmajor.rowwise().mean()
- Speed up evaluation of HouseholderSequence to a dense matrix, e.g., MatrixXd Q = A.qr().householderQ();
- Various optimizations of matrix products for small and medium sizes when using large SIMD registers (e.g., AVX and AVX512).
- Optimize evaluation of small products of the form s*A*B by rewriting them as s*(A.lazyProduct(B)) to save a costly temporary. Measured speedup from 2x to 5x (see bug 1562).
- Improve multi-threading heuristic for matrix products with a small number of columns.
- 20% speedup of matrix products on ARM64
- Speed-up reductions of sub-matrices.
- Optimize extraction of factor Q in SparseQR.
- SIMD implementations of math functions (exp,log,sin,cos) have been unified as a generic implementation compatible over all supported SIMD engines (SSE,AVX,AVX512,NEON,Altivec,VSX,MSA).
- Workaround a performance regression in matrix product with gcc>=6.0 and SSE/AVX only (no-fma). We are still working on a similar issue with clang>=6.0 and AVX+FMA.
Hardware support
- AVX512 support is now complete (including complex scalars) and enabled by default when enabled on compiler side.
- Generalization of the CUDA support to CUDA/HIP for AMD GPUs.
- Add explicit SIMD support for MSA instruction set (MIPS).
- Add explicit SIMD support for ZVector instruction set (IBM).
Source: https://eigen.tuxfamily.org/index.php?title=3.4&direction=prev&oldid=2403
|
In this lesson, we will discuss the fundamentals of the pyplot framework in matplotlib.
The Imports For This Lesson
For this lesson, we will need to run our standard matplotilb import as well as the magic function that allows us to display plots within the Jupyter Notebook:
import matplotlib.pyplot as plt
%matplotlib inline
I will also run the command to set my display to ‘Retina Mode’:
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('retina')
Lastly, we will import the NumPy numerical computing library:
import numpy as np
Pyplot’s Stateful Interface
Pyplot operates using what is called a stateful interface. This is essentially a fancy term that means that you can change the state of a plot after it is created.
In practice, this means that we generally create a plot in one step, and then modify the plot over time using a series of additional steps until it has the appearance and characteristics that we desire.
To see an example of this, let's create three random datasets using NumPy's randn function:

data1 = np.random.randn(10)
data2 = np.random.randn(10)
data3 = np.random.randn(10)
Note that since randn is a random number generator, each of these data sets will be different despite being generated using the same command.
Now let's plot each of these datasets using the plt.plot() method that we used earlier in this course. To do this, simply run multiple plt.plot() methods one after the other, like this:

plt.plot(data1)
plt.plot(data2)
plt.plot(data3)
Here’s the output of that code (note that your plot may look slightly different since your randomly-generated datasets will be different than mine):
This is an excellent example of pyplot’s stateful interface - instead of trying to plot all three data sets on one line, you can plot them one-by-one onto the same canvas.
The same principle applies with other plot characteristics like titles and axis labels. You can use the title, xlabel, and ylabel methods to place titles on your chart:

plt.plot(data1)
plt.plot(data2)
plt.plot(data3)
plt.title("Some randomly generated datasets")
plt.xlabel("These are the x labels")
plt.ylabel("These are the y labels")
Here is the new output of this code:
Pyplot’s Interactive Mode
Pyplot has an attribute called interactive mode that changes whether a plot is displayed after modifying it.

When interactive mode is turned on, the plot is displayed whenever it is modified. You can turn interactive mode on using plt.ion().

When interactive mode is turned off, the plot is not displayed whenever it is modified. In this case, you would display the plot using the plt.show() method. You can turn interactive mode off using plt.ioff().
If you are not sure whether or not you are currently operating in interactive mode, you can test this using plt.isinteractive(). It will return True if interactive mode is enabled and False if interactive mode is disabled.
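As a quick sanity check, the three mode functions can be exercised together in a plain script. This is a minimal sketch; the non-interactive Agg backend is selected here so it runs headless, outside a notebook:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, so no display window is needed
import matplotlib.pyplot as plt

plt.ioff()                       # turn interactive mode off
mode_off = plt.isinteractive()   # now False
plt.ion()                        # turn interactive mode on
mode_on = plt.isinteractive()    # now True
plt.ioff()                       # leave it off, as scripts usually want
print(mode_off, mode_on)
```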
If you test this in your Jupyter Notebook, you will notice that your plots will still display even if interactive mode is disabled. This is because of the %matplotlib inline command we executed at the beginning of this lesson.
You can disable this forced plot display by executing the following code:

shell = get_ipython()
from ipykernel.pylab.backend_inline import flush_figures
shell.events.unregister('post_execute', flush_figures)
Moving On
In this lesson, we learned about the basics of matplotlib's pyplot interface. After working through some practice problems in the next section, we will explore pyplot's plot method in more detail.
Source: https://nickmccullum.com/python-visualization/pyplot/
|
I've got a namespace query that amounts to this: How can an imported function see data in the parent custom namespace? I have read through numerous posts which skirt this issue without answering it.

To illustrate, create plugin.py with a couple of functions. The second will obviously fail.

----
def Hello():
    print 'hello'

def ViewValuable():
    print VALUABLE
----

Then create master.py which loads the plugin at runtime, later running various code fragments against it.

----
# location of plugin module
filespec = '/path/to/plugins/plugin.py'
filepath, filename = os.path.split(filespec)
filename = os.path.splitext(filename)[0]

# add to system path
if filepath not in sys.path:
    sys.path.append(filepath)

# import into our namespace
space = __import__(filename, globals(), locals(), [])
namespace = space.__dict__

# sometime later in the code... define a new function
def _plus():
    print 'plus'

# add that to our namespace
namespace.update({'Plus': _plus, 'VALUABLE': 'gold'})

# run custom code
code = """
Hello()
Plus()
Valuable()
"""
exec code in namespace
----

This code will echo the lines:

hello
plus

Followed by a traceback for:

NameError: global name 'VALUABLE' is not defined

The question is: How do I get a function in plugin.py to see VALUABLE? Using external storage of some sort is not viable since many different instances of plugin.py, all with different values of VALUABLE, might be running at once. (In fact VALUABLE is to be a key into a whole whack of data stored in a separate module space.) Extensive modifications to plugin.py is also not a viable approach, since that module will be created by users. Rather, I need to be able to pass something at execution time to make this happen. Or create an access function along the lines of _plus() that I can inject into the namespace.

Any help, please? I've been losing sleep over this one.

-- robin
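One approach that answers the question (a minimal, self-contained sketch in modern Python 3, not taken from the thread): a function defined in a module resolves its global names in that module's own __dict__, so the value must be injected there, not only into the namespace you exec user code in.

```python
import types

# Build a stand-in "plugin" module the way __import__ would,
# so this sketch needs no plugin.py file on disk.
plugin = types.ModuleType("plugin")
exec("def ViewValuable():\n    return VALUABLE\n", plugin.__dict__)

# ViewValuable() looks up VALUABLE in plugin.__dict__ (its globals),
# so injecting the value there makes it visible to the function.
plugin.__dict__["VALUABLE"] = "gold"

print(plugin.ViewValuable())  # prints: gold
```

Since each separately created module object carries its own __dict__, several plugin instances can each hold a different VALUABLE at the same time.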
Source: https://mail.python.org/pipermail/python-list/2007-July/432468.html
|
500W electric scooter control and instrumentation with Arduino mega
1. Introduction
DC 500W motor control with an Arduino Mega to limit the starting current and to vary the speed of the scooter. The battery is 24V, 10A.h. The following table summarizes their characteristics:
2. Bibliography:
Link download :
sketch_escooter_feed_back_reel_V1.ino
escooter_ampli_SIMULINK.mdl
escooter feed back ISIS.DSN
youtube : "study trotinette electric e-scooter 100W et 350W, wiring" youtube
Article: «Study of electric scooters 100W and 500W (Arduino), Revue 3EI 2017»
Pdf?
Book «I realize my electric vehicle» at DUNOD
3. Open loop program
To test the programming, we simulate the program in ISIS, as can be seen in the following figure. In addition, an LCD display shows the data (the duty cycle of the 32kHz PWM, the motor current, the motor voltage, and the state of the push buttons). 4 push buttons are used:
BP1 manually increments the duty cycle, BP2 decrements it, and BP3 sets the duty cycle to 0, corresponding to the brake contact.
The speed of the motor is practically proportional to the duty cycle
We made our own current amplifier, called a step-down chopper, but it is also possible to buy a shield.
There are many cards for Arduino to control DC motors especially of low powers and also of great powers as can be observed on the following links.
But all these chopper shields measure the current internally but there is no current limitation.
In order to have a current limitation, an analog current loop is required using specialized AOP or IC or a fast digital current loop.
But what should be the value of the limitation current?
The choice of the current value is normally for the 1-hour operation service in order to be able to carry out relatively long climbs without reaching the critical temperature of the engine.
In our case, the limitation current must be:
I_limit = P_motor / U_battery = 500W / 24V ≈ 20A
In addition, the power transistor of the chopper can only support 50A in our case.
But in open loop there is no current regulation, so to avoid exceeding the maximum current, a ramp on the duty cycle will be used.
A 0.1 second interrupt routine will be used to sample the voltage and the current. This sampling time is arbitrary, but the regulation cannot be faster than the rise time of the current, because the electrical time constant of the motor is L/R = 1.5 ms.
Open loop operation with a 25.5s (8bit) ramp and 0.1s interrupt routine provides a good understanding of the operation of a DC motor drive.
The display will only be done every 0.2s to have a stability of the digits on the screen. In addition, a digital filtering will be done on the current and the voltage on 4 values therefore on 0.4s.
Open-loop algorithm
Interrupt routine every 0.1s:
    Read voltage and current
Main loop (push button scan):
    If BP1 = 1 then increment PWM
    If BP2 = 1 then decrement PWM
    If BP3 = 1 then PWM = 0
    Display the variables every 0.2s
Code: [Select]
// include the library code:
#include <LiquidCrystal.h>
#include <SoftwareSerial.h>
#include <TimerOne.h>
#define SERIAL_PORT_LOG_ENABLE 1
#define Led 13 // 13 for the yellow led on the board
#define BP1 30 // 30 BP1
#define BP2 31 // 31 BP2
#define BP3 32 // 32 BP3
#define LEDV 33 // 33 led
#define LEDJ 34 // 34 led
#define LEDR 35 // 35 led
#define relay 36 // 36 relay
#define PWM10 10 //11 timer2
LiquidCrystal lcd(27, 28, 25, 24, 23, 22); // RS=12, Enable=11, D4=5, D5=4, D6= 3, D7=2, BPpoussoir=26
// Configuring variables
unsigned int UmoteurF = 0; // variable to store the value coming from the sensor
unsigned int Umoteur = 0;
unsigned int Umoteur2 = 0;
unsigned int Umoteur3 = 0;
unsigned int Umoteur4 = 0;
unsigned int ImoteurF = 0;
unsigned int Imoteur = 0;
unsigned int Imoteur2 = 0;
unsigned int Imoteur3 = 0;
unsigned int Imoteur4 = 0;
byte Rcy=0 ; // 8bit duty cycle
unsigned int temps;
// the setup function runs once when you press reset or power the board
void setup() {
pinMode(Led, OUTPUT); // Arduino card
pinMode(LEDV, OUTPUT);
pinMode(LEDR, OUTPUT);
pinMode(LEDJ, OUTPUT);
pinMode (PWM10,OUTPUT); // Pin (10) output timer2
// digitalWrite(LEDV,LOW);
Timer1.initialize(100000); // initialize timer1, and set a 0,1 second period => 100 000
Timer1.attachInterrupt(callback); // attaches callback() as a timer overflow interrupt
lcd.begin(20, 4);
Serial1.begin(9600);
TCCR2B = (TCCR2B & 0b11111000) | 0x01; //pin 10 32khz
//
// analogWriteResolution(bits)
lcd.setCursor(0,1);
lcd.print("Rcy");
lcd.setCursor(10,1);
lcd.print("Um");
lcd.setCursor(5,1);
lcd.print("Im");
lcd.setCursor(10,1);
lcd.print("Um");
lcd.setCursor(20,1); // 4 lines display * 20 characters
lcd.print("BP1+");
lcd.setCursor(25,1);
lcd.print("BP2-");
lcd.setCursor(29,1);
lcd.print("BP3=0");
}
// Interruptions tous les 0.1s
void callback() {
temps++;
//toogle state ledv for check
if ( digitalRead(LEDV)== 1 ) {digitalWrite(LEDV,LOW);}
else {digitalWrite(LEDV,HIGH);}
analogWrite(PWM10,Rcy); // frequency
Umoteur=analogRead(A0);
Imoteur=analogRead(A1);
// 4-sample moving average: average the new sample with the three
// previous ones, then shift the sample history
ImoteurF=(Imoteur+Imoteur2+Imoteur3+Imoteur4)/4 ;
Imoteur4=Imoteur3;
Imoteur3=Imoteur2;
Imoteur2=Imoteur;
UmoteurF=(Umoteur+Umoteur2+Umoteur3+Umoteur4)/4 ;
Umoteur4=Umoteur3;
Umoteur3=Umoteur2;
Umoteur2=Umoteur;
}// End routine
// Loop corresponding to main function
void loop() {
// BP + LED
if ((digitalRead(BP1))==1) {
lcd.setCursor(20,0); // Column line
lcd.print("BP1");
digitalWrite(LEDR, LOW);
digitalWrite(LEDJ, LOW);
Rcy++; // PWM incrementation
if ( Rcy>254) {Rcy=254;}
delay(100); //8bits * 100ms = 25S increment 25ssecond slope
}
if ((digitalRead(BP2))==1) {
lcd.setCursor(20,0);
lcd.print("BP2");
Rcy--;
if ( Rcy<2) {Rcy=2;} // PWM almost at 0, engine stop
delay(100);
digitalWrite(LEDR, HIGH);
digitalWrite(LEDJ, HIGH);
}
if ((digitalRead(BP3))==1) {
lcd.setCursor(20,0);
lcd.print("BP3");
Rcy=2; // PWM almost at 0, engine stop
}
if (temps>=2) {
lcd.setCursor(0,0);
lcd.print(" "); // Erase line
lcd.setCursor(0,0);
lcd.print(Rcy);
lcd.setCursor(5,0);
ImoteurF=(ImoteurF)/20; //resistance (5/1024)*(10/0.25ohm) if ACS712 66mV/A
// For resistance 1ohm (ImoteurF) / 20; Simulation 5/25
lcd.print(ImoteurF);
lcd.setCursor(10,0);
UmoteurF=UmoteurF*10/38; //10/38 10/30 simula
if (Umoteur>ImoteurF){UmoteurF=UmoteurF-ImoteurF; } //U-R*I
lcd.print(UmoteurF);
temps=0;
}// End if time
} // End loop
Since there is a limit of 9000 characters in the forum below
Characteristics of the previous open-loop program
The interrupt routine lasts only 250 microseconds, the main loop that scans the push buttons takes 13 microseconds, and displaying all the data takes 11 ms. Thus, it is possible to shorten the sampling period and therefore speed up the current regulation.
The Arduino also makes it possible to instrument the scooter: measure the power, the consumption in A.h and W.h, measure the speed, compute the consumption in Wh/km, measure the motor temperature, and ensure safe operation.
But for now we will see how to limit the current
4. Closed loop program, limited current control
The sampling period will be reduced to 0.01 seconds (interrupt routine), i.e. the sampling rate increases.
If the current is less than the desired value, then the duty cycle can be increased or decreased to the desired value which is the setpoint.
On the other hand, if the motor current is greater than the limiting value, there is a rapid decrease in the duty cycle.
To keep the duty cycle within range, it is saturated at a maximum of 254 and a minimum of 6.
Code: [Select]
if (Imoteur<4000) // No current limitation at (20A * 10) * 20 = 4000
{if (consigne>Rcy) {Rcy=Rcy+1;} // Pwm ramp + 1 * 0.01second pure integrator
if (consigne<Rcy && Rcy!=0) {Rcy=Rcy-1;} // The decrementing is done only for the acceleration grip or with BP2
if ( Rcy>254) {Rcy=254;} // Limitation of duty cycle
analogWrite(PWM10,Rcy); // Frequency 32kHz timer2}
}
if (Imoteur>4000) { Rcy=Rcy-5; // No current filtering, to be faster
if ( Rcy<6) {Rcy=5;} // Rcy is not signed, nor the PWM therefore Rcy minimum must not be less than 6
analogWrite(PWM10,Rcy); // Frequency 32kHz timer2}
}
5. Closed Loop Program, Limited Current Control with Acceleration Handle
An acceleration handle provides a 0.8V voltage when not operated and a 4.5V voltage when the handle is fully engaged.
Instead of using pushbuttons to increase or decrease the speed setpoint, an acceleration handle will be used
Code: [Select]
Upoignee=analogRead(A3); // Relation between the throttle voltage Upoignee and the setpoint (duty cycle):
if (Upoignee>100) { consigne=(Upoignee/2); //0=a*200+b and 255=a*800+b
consigne= consigne-100;
}
else { consigne=0; }
if (Upoignee<100) { consigne=0; } // redundancy
6. Temperature and safety program of the motor with the current measurement
The outdoor temperature measurement can be easily performed by the LM35 component which charges 0.01V by degrees Celsius
Code: [Select]
temperature=analogRead(A2); //lm35 0.01V/°C
temperature=temperature/2; // Temperature coefficient
lcd.setCursor(5,2);
lcd.print(" ");
lcd.setCursor(5,2);
lcd.print(temperature); // Display in ° C
lcd.setCursor(9,2); // Erasing secu display
lcd.print(" ");
if (temperature>80 ) {lcd.setCursor(9,2); // If motor external temperature is above 80 ° C
lcd.print("secuT");
Rcy=0;}
In addition, a thermal safety based on the motor current measurement will be added.
If the current stays at the limitation value for more than 10s, then the motor will no longer be powered for 30s.
A "secu" display will appear on the LCD display.
This safety makes it possible to cut the motor on too steep a slope or when the motor stalls, but in the latter case it would also be necessary to measure the speed.
Code: [Select]
if (timesecurite>=10000 ) {flagarret=1;
// If limitation current for a current of more than 10s
timerepos=0;
consigne=0;
Rcy=0;
timesecurite=0;} // Then stop engine during a downtime
if (flagarret==1 ) {lcd.setCursor(9,2); // If limiting current for a current of more than 20s
lcd.print("secU"); } // Then stopping the motor for a stop time and display
if (timerepos>=30000 && flagarret==1) {flagarret=0;
lcd.setCursor(9,2); // After a rest time here of 30s
lcd.print(" "); }
The display can be observed if the temperature is above 80 ° C
A digital thermal relay based on the motor current measurement, giving an image of the internal temperature of the motor, would be ideal. But for this, the thermal model of the motor must be well known.
7. Measurement of the energy capacity of the battery
The energy capacity of a battery is expressed in A.h; we will display the value in mA.h for higher accuracy. In the following equation the capacity is accumulated in A.s, so to obtain A.h it must be divided by 3600:
Capacity (A.s): Cn = I*Te + Cn-1, with Te = 0.01s (and I multiplied by 10 in the code)
Code: [Select]
capacity=ImoteurF+capacity ;
And in the display
Code: [Select]
lcd.setCursor(0,3); // Display of energy capacity
lcd.print("C mA.h=");
capacity1=capacity/(18000); //18000=3600*5 5=> Current measurement coefficient
lcd.print(capacity1);
To check: with a current of 10A set using an adjustable resistor, after 30s the displayed capacity must be 83mA.h.
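The worked check above can be reproduced numerically. This is a plain-Python sketch of the same accumulation, not the Arduino code:

```python
# Integrate a constant 10 A over 30 s with Te = 0.01 s,
# then convert ampere-seconds to mA.h (divide by 3600, scale x1000).
Te = 0.01            # sampling period (s), as in the interrupt routine
current_A = 10.0     # constant test current
capacity_As = 0.0    # accumulated capacity in ampere-seconds
for _ in range(int(30 / Te)):
    capacity_As += current_A * Te

capacity_mAh = capacity_As * 1000 / 3600
print(round(capacity_mAh))  # prints: 83
```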
8. Power and modeling with SIMULINK
Modeling helps to understand the vehicle and its control. In addition, it is possible to compile the control part directly into the Arduino program from simulation under Simulink. But it will not be possible to simulate the instrumentation with the LCD display.
The following figure shows the Simulink simulation of the chopper control with current limitation: the green box shows the duty cycle control used to vary the speed, and the red box the current limitation. The controller here is a simple integrator, but many other control strategies are possible.
In the previous figure, it can be observed that the current is well limited to 25A from 2s to 9.5s. Then the current settles at 10.8A in steady state at 22.5km/h. The dynamics are similar to those measured in the tests.
With a slope of 5%, the duty cycle saturates at 100%, as can be seen in the following figure. The speed painfully reaches 19km/h, with a current of 24A and a motor power of 580W.
See article: Study of electric scooters 100W and 500W (Arduino),
9. First conclusion
It is easy to control a 500W DC motor with an Arduino and a few components, and therefore to repair many scooters that use DC motors.
But it takes some knowledge (control theory, motors) to manage the motor properly and to limit its current so as not to damage it.
The display of the speed, the distance and the operating time, in order to know the Wh/km, can also be realized with a second menu.
The .ino program as an attached file,
But it is not possible to put an attached file in ISIS electronic labcenter?
What is this forum?
It would be desirable that the compiler could generate the.cof to debug in Isis and test the program line by line ....
Arduino still has to make a lot of effort to be on the same level as other microcontrollers
10. speed measurement (tachometer)
Velocity measurement is carried out using a Hall effect sensor (SS495 or A1324) which detects each revolution of the wheel. It is enough to enter the perimeter of the scooter's wheel (130mm radius, therefore 0.816m perimeter in this case).
To obtain the speed, it is enough to count the number of wheel revolutions over an arbitrary time of 1s, giving a minimum measurable speed of 0.81m/s, i.e. 2.93km/h. In addition, an averaging filter over 3 values will be used to display the speed. At 25km/h, there will be 8.5 revolutions per second.
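The figures above can be checked with a quick calculation (plain Python, assuming the 0.816 m perimeter stated in the text):

```python
perimeter_m = 0.816                      # wheel perimeter (130 mm radius)
kph_per_rev_per_s = perimeter_m * 3.6    # km/h per wheel revolution per second

# One revolution per second:
print(round(kph_per_rev_per_s, 2))       # prints: 2.94 (the text truncates to 2.93)

# Revolutions per second at 25 km/h:
print(round(25 / kph_per_rev_per_s, 1))  # prints: 8.5
```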
To count the turns, an external interrupt routine will be used on input INT0 21 of the mega card.
To simulate the speed, a pulse on input 21 will be used with a duty cycle of 10%.
Code: [Select]
volatile unsigned long Tspeed;   // revolution counter, incremented by the ISR

void INT0b21() {
  Tspeed++;  // External interrupt: count the number of wheel revolutions
}

// In setup(): attach the interrupt routine on the rising edge of the magnet detection
attachInterrupt(digitalPinToInterrupt(21), INT0b21, RISING); // External interrupt

// In loop()
if (temps09 >= 5) {              // 1-second loop
  lcd.setCursor(13, 2);          // Erase the previous speed
  lcd.print("kph   ");
  lcd.setCursor(16, 2);
  speed3 = speed2;               // Shift the moving-average history first
  speed2 = speed1;
  speed1 = Tspeed * 2937;        // 1 rev * 0.816 m * 3.6 / 1 s = 2.937 km/h (scaled x1000)
  speedF = (speed1 + speed2 + speed3) / 3000.0; // 3-value average, back to km/h
  lcd.print(speedF, 1);          // Display to the nearest tenth
  Tspeed = 0;                    // Reset the revolution counter
  temps09 = 0;                   // Reset the timer
}
To improve the accuracy of the speed measurement, the sampling time of the measurement can be made dependent on the speed itself.
For example:
For speeds below 10 km/h, sample every 1 second; above 10 km/h, sample every 2 seconds.
11. Distance measurement for range
The distance corresponds to the total number of wheel revolutions multiplied by the perimeter of the wheel.
Therefore, this total revolution counter must not be reset to 0 at each sample.
The distance itself, on the other hand, is reset when the Arduino Mega's reset button is pressed.
The distance display is refreshed every second.
At 32 km/h, it takes about 2 minutes to cover 1 km, as can be seen in the following figure:
Code: [Select]
void INT0b21() {
  Tspeed++;   // External interrupt: count for the speed measurement
  nbrRate++;  // Total revolution count, never reset, for the distance
}

// nbrRate should be an unsigned long so the product below cannot overflow
lcd.setCursor(13, 4);
lcd.print("km   ");
distance = (nbrRate * 816.0) / 1000;  // Distance in m (816 mm per revolution)
distance = distance / 1000;           // Distance in km
lcd.setCursor(15, 4);
lcd.print(distance, 1);
The electrical installation, with the chopper, the Arduino, and the display, can be observed once the program is running.
12. Synthesis
Only 4% of the RAM and 3% of the flash memory are used on the Arduino Mega, so a somewhat smaller Arduino could be used.
However, there are 8 LiPo cells making up the 24 V supply that powers the motor through the chopper. The voltage of each cell is therefore measured by the Arduino via a JST connector. This measurement makes it possible to detect a cell whose internal resistance is starting to cause problems, and to check that the balancing of each cell has actually been carried out.
It is also possible to switch to 36 V with 12 cells, still with the Arduino Mega, without using an external shield that multiplexes 24 analog inputs onto input A0.
All the data can be sent to a smartphone via a Bluetooth HC-06 module on pins 20 and 21 (RX1 and TX1). However, the Android application, written in Java Studio, cannot be shared on this forum, so this part will not be explained.
After instrumenting this scooter, a study should be carried out on the accuracy of the measurements; see
"Instrumentation of a low-power electric vehicle of the 'eco-marathon' type", Revue 3EI No. 81, July 2015.
Discussions
https://www.instructables.com/topics/500W-electric-scooter-control-and-instrumentation-/
Published by Kyree Cast; modified about 1 year ago.
1/21 Long Run Stock Returns Following R&D Increases - A Test of R&D Spillover Effect. Yanzhi Wang, Yuan Ze University, Taiwan. By Sheng-Syan Chen, Wei-Ju Huang and Yanzhi Wang.
2/21 R&D and Firm Valuation What is research and development (R&D)? –The investments on innovations and patents. –In accounting treatment, R&D is expensed. –Related to intangible assets –R&D has the externality (spillover effect) ASKEY ( 亞旭 ) obtained the technologies via patents from ATEONIX NETWORKS in 2004. These patents are about –Method and system for providing remote storage for an internet appliance –Method and system for providing a modulized server on board ASUS ( 華碩 ) obtained the technologies via patents from ASKEy via acquisition in 2008. Then, ASUS sued IBM.
3/21 R&D and Stock Return R&D level is positively related to future stock return (Lev and Sougiannis, 1996; Chan, Lakonishok, and Sougiannis, 2001). –Given that R&D generates intangible assets which are difficult to value, R&D is normally underreacted to. The R&D level affects future long-term stock return. R&D increase is also positively related to future stock return (Chan, Martin and Kensinger, 1990; Eberhart, Maxwell and Siddique, 2004). –R&D increase indicates an improvement in profitability that investors react to slowly. –The R&D outlay reduces current earnings while R&D is beneficial to future profitability. Investors extrapolate the R&D increase firm's future earnings too low, and this biased estimation causes the positive return.
4/21 Eberhart, Maxwell and Siddique (2004) The firms with R&D intensity over 5% and R&D increase over 5% are investigated. The R&D information is obtained from the annual report. The five-year long-run abnormal return is about 0.74% and 0.53% per month based on the equal- and value-weighted Carhart (1997) four-factor models, respectively. The operating performance improves following the R&D increase.
5/21 Some Comparisons

Event | Article | Methods | Abnormal Return per Month
IPO | Ritter and Welch (2002) | Three-factor: EW | -0.21% (t = -1.23)
SEO | Loughran and Ritter (2000) | Three-factor: EW / VW | -0.47% (t = -5.42) / -0.32% (t = -3.00)
Repurchase | Chan, Ikenberry and Lee (2007) | Four-factor: EW / VW | 0.28% (t = 4.24) / 0.25% (t = 3.35)
M&A | Mitchell and Stafford (2000) | Three-factor: EW / VW | -0.20% (t = -3.70) / -0.03% (t = -0.48)
R&D increase | Eberhart, Maxwell and Siddique (2004) | Four-factor: EW / VW | 0.74% (p = 0.001) / 0.53% (p = 0.001)
6/21 The Issue Why does the R&D increase, a non-timing event, experience such a high abnormal return? There could be another factor affecting the long-run abnormal return of the R&D increase. The economics literature has widely discussed the spillover effect of R&D over the past decades (Arrow, 1962; Griliches, 1979; Bernstein and Nadiri, 1988; Hanel and St-Pierre, 2002; Agarwal, Echambadi, Franco and Sarkar, 2004; Hunt, 2006), but few papers mention it in the finance literature. This could be the factor.
7/21 R&D Spillover Effect R&D spillover describes the fact that a privately owned firm does not (or cannot) fully appropriate the outcome of its R&D investment. Due to industrial competition, rival firms may follow and increase their R&D after the EMS sample firm increases its R&D.
8/21 Hypothesis Eberhart, Maxwell and Siddique (2004) find a significantly high abnormal return for R&D increase firms. We hypothesize that this result is related to the R&D spillover effect: for an Eberhart, Maxwell and Siddique (2004) sample firm with more R&D followers, the abnormal return of the sample firm should be higher.
9/21 Sample Collection Our sample is collected from U.S. companies listed on the NYSE/Amex/Nasdaq during January 1974 to December 2006. We start our R&D sample collection from 1974 because the requirement to report R&D became effective in 1974. We mainly follow EMS and set up five criteria for our sample, which includes firm-year observations with significant increases in R&D: (i) the ratio of R&D expenditures to sales is over 5%, (ii) the ratio of R&D expenditures to average total assets is over 5%, (iii) the change in the ratio of R&D expenditures to sales is over 5%, (iv) the ratio of the change in R&D expenditures to average total assets is over 5%, and (v) the ratio of the change in R&D expenditures to R&D expenditures is over 5%. As a result, the sample with significant increases in R&D expenditures includes 10,280 U.S. firm-year observations.
10/21 Methodology - Return From July 1976 to December 2006, each R&D increase portfolio p is formed by including sample firms that were R&D increase firms within the past 60 months. For example, the R&D increase calendar-time portfolio is composed of firms classified as R&D increase firms in any of the past 5 years. The R&D increase portfolio monthly returns are then regressed on the Carhart (1997) four factors. Coefficient tests are adjusted with Newey-West autocorrelation-heteroskedasticity estimation. [Timeline figure: ΔR&D is computed from December of year t-1 to December of year t and is publicly available thereafter; the holding period runs from July of year t+1 to the end of June of year t+6.]
11/21 Methodology - Operating Performance We use EBITDA/Assets as ROA and as the measure of operating performance (Barber and Lyon, 1996). We look at the median of the changes in ROA. We compute abnormal operating performance using a matching-firm approach: 1. Minimize |OP_t(sample) - OP_t(match)|, with the match's OP_t within 80%-120% of the sample firm's OP_t and the same 2-digit SIC code. 2. If no match is found at step 1, minimize |OP_t(sample) - OP_t(match)| within the 80%-120% band with the same 1-digit SIC code. 3. If still no match at step 2, minimize |OP_t(sample) - OP_t(match)| within the 80%-120% band without any industry requirement. 4. For all remaining sample firms, minimize |OP_t(sample) - OP_t(match)| without industry or band requirements.
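The tiered matching rules above can be sketched as follows (the field names and toy data are illustrative, not the paper's dataset):

```python
def find_match(sample, candidates):
    """Tiered matching on operating performance (OP = EBITDA/Assets)."""
    def closest(pool):
        return min(pool, key=lambda c: abs(c["op"] - sample["op"])) if pool else None

    # Candidates whose OP_t lies within 80%-120% of the sample firm's OP_t
    lo, hi = sorted((0.8 * sample["op"], 1.2 * sample["op"]))
    band = [c for c in candidates if lo <= c["op"] <= hi]

    # Step 1: same 2-digit SIC code inside the band
    m = closest([c for c in band if c["sic"][:2] == sample["sic"][:2]])
    if m is None:  # Step 2: relax to the same 1-digit SIC code
        m = closest([c for c in band if c["sic"][:1] == sample["sic"][:1]])
    if m is None:  # Step 3: keep the band, drop the industry requirement
        m = closest(band)
    if m is None:  # Step 4: closest OP overall, no filters
        m = closest(candidates)
    return m

sample = {"op": 0.10, "sic": "2834"}
candidates = [{"op": 0.11, "sic": "2811"}, {"op": 0.10, "sic": "3559"}]
print(find_match(sample, candidates))  # same 2-digit SIC inside the band wins
```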
12/21 Summary Statistics R&D increase firms are small growth firms R&D increase firms are high R&D firms About 30% of R&D increasing firms are followed by their industry peers. R&D increasing firms cluster in two industries: manufacturing and pharmaceutical. Table 1
13/21 Abnormal Return of R&D Increase The RD-followed ratio is the percentage of rival firms that follow the EMS sample firms by increasing R&D by at least 1%. A high RD-followed ratio implies higher R&D spillover effects. Get a closer look at the result.
14/21 Some Robust Checks Fama and French (1993) three-factor model Control the time-varying risk betas in factor model Control the delisting return in factor model Remove the repeating R&D increase events Change the definitions of the R&D increase follower All these approaches appear consistent result.
15/21 Operating Performance We use the Fama and French (2000) earnings regression, where β1 and β2 are the coefficients on the RD-followed rank and the RD-followed ratio, respectively:

AbnormalROA(t+5) - AbnormalROA(t) = β0 + β1·RDfollowedRank(t) + β2·RDfollowedRatio(t)
  + (γ1 + γ2·NDFED(t) + γ3·NDFED(t)·DFE(t) + γ4·PDFED(t)·DFE(t))·DFE(t)
  + (λ1 + λ2·NCED(t) + λ3·NCED(t)·CE(t) + λ4·PCED(t)·CE(t))·CE(t) + ε(t)
16/21 R&D Mimicking Rival firms engage in R&D mimicking to undo the negative effect of the sample firm's R&D investment. In a concentrated industry, strategic reactions are more active, so the benefit of the R&D increase can be offset by rivals' following R&D increases. R&D increasing firms therefore earn lower returns in a more concentrated industry.
17/21 Stock Return and Industry Concentration Firms in high-concentration industries earn lower returns. This confirms the mimicking hypothesis.
18/21 Fama and MacBeth Regression For monthly stock returns from July of year t+1 to June of year t+2, we include the monthly stocks with significant R&D increases in any of the past five years (t to t-4) in the regression model. We regress the monthly returns on independent variables including the RD-followed ratio. The Fama-MacBeth estimates are obtained as the time-series average and tested using the time-series volatility.
19/21 Fama-MacBeth (1973) Regression
20/21 Industry R&D Growth as a Spillover Proxy The R&D spillover describes the impact of rivals' R&D inputs on sample firms' outputs, so the aggregate industrial R&D inputs (excluding the sample firm) could be an alternative proxy. We use the industry-wide R&D growth. See Table 9.
21/21 Conclusion The long-term positive abnormal stock return following a firm's R&D increase is documented by Eberhart, Maxwell and Siddique (2004). In this paper, we propose an economic hypothesis, the R&D spillover effect, to account for the long-term stock return following the R&D increase. As a firm increases its R&D investment, its industry peers may follow and increase their R&D investments in a competitive industry. Given that R&D investment has a spillover effect, the followers' R&D investment is beneficial to the firm that has significantly invested in R&D projects. Hence we argue, and find, that a firm with significant R&D increases and sufficient R&D investment followers tends to outperform one with few R&D followers. This economic explanation also helps to answer the puzzle of why the R&D increase, a non-timing event, is followed by a significant abnormal stock return.
http://slideplayer.com/slide/4137769/
I previously alluded to a solution to the problem I have had with creating relative complex object structures for the purpose of testing. I've now released the Python version of this library under the name NonMockObjects, which is now available from the Python Cheeseshop (sort of a CPAN equivalent).
Full documentation is in the package, but here's a more conversational introduction: This library allows me to create layered test functions, which can build on each other. By way of demonstration, here's a couple of snippets of my real testing code. Here's the simplest case, where I create a new "text hunk", which is the root text unit of my system that comments, entries, and everything else will be built on:
@register
def texthunk(data, content = "Test Content%(inc)s", author = author,
             format = TEXT_HUNK_FORMATS.HTML):
    return make(TextHunk, all_args())
@register uses the Python decorator syntax to register this function with the object that centralizes all the access to the test functions. The content uses an automatic "unique ID" incrementer, accessed via %(inc)s, which is standard Python text interpolation syntax. (It's unique per run, not globally; so far this has been adequate.) author = author is the real magic of the system; author is another @registered function, and when you call texthunk, you get one of three choices:
- Leave the author parameter blank, in which case the author function will be called to create a new author and the TextHunk gets that author.
- Pass in a use_author={} dict, which will cause a new Author to be created, but allows you to pass in parameters to affect the author. This allows you to retain the flexibility of creating a new author easily, while allowing you to specify what may be one or two attributes that are important to this test. This works recursively; author itself has further references, and you can pass in use_author={'use_django_user': {'first': 'Bob'}} to make sure your Author has the first name "Bob". You want to minimize this because this introduces coupling, although this coupling may be mitigated in later releases with some support for forwarding parameters, allowing you to re-write the forwarding if something changes later.
- Pass in a pre-existing author object with author=my_author, which completely skips creating an author object.
The upshot is that if I just want a text hunk, I call data.texthunk(), but I can trivially customize that text hunk further. (Usually with the "author" parameter as I test various permission things.)
("Make" is just a simple function that wraps a ".save()" call around the Django object creation; in many apps you may directly return the results of an object construction.)
On the flip side, when you write the test function, you don't need to worry about which of the three scenarios above is in play; you get an "Author" object and you do whatever with it.
So, the solution to my previous long list of objects that need to be created just to test whether, say, a new author can post a comment, is reduced to:
import testfuncs # this has all my test function declarations
import nonmockobjects

data = nonmockobjects.Objects() # I'm used to calling it 'data' from work
author = data.author()  # creates a 'new user' by default
entry = data.entry()    # creates an entry; note the entry will use a new author
comment = data.add_comment_to_entry(author=author, entry=entry)
# Note the contents of the comment are defaulted elsewhere, and
# for this test I don't care what they are.
You could actually simplify that down to a comment = data.add_comment_to_entry() and then extract the relevant new author and entry, but I prefer to be explicit about it. (I may end up being wrong here, and introducing coupling, but it seems likely to me that a comment is always going to need an author and a target entry.)
The decoupling that this introduces comes from the fact that when you don't care about something, and you don't specify it, your test is no longer coupled to it. When you say entry = data.entry(), you don't have to care about any irrelevant future changes to what may constitute an entry; you'll continue to get a correctly-structure Entry object back from that call, regardless. And hopefully should any relevant changes occur, that will manifest as a breaking test; one could argue that if it doesn't that's a bug in the tests.
I've had months of experience with this structure, if you count my original Perl experience (which isn't as easy to use but has the same effects in the end), and so far it's stood up to everything I've thrown at it; I think this is because the creation function really are functions and you can do whatever you need to do with them. I think if I tried to implement this with some clever metadata or something it'd fail; you really need the functions.
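The core mechanism is small enough to sketch; the post doesn't show the library's internals, so everything below other than the `@register`/`Objects` names is an illustrative reimplementation of the three-way parameter behavior, not NonMockObjects' actual code:

```python
REGISTRY = {}

def register(func):
    """Register a creation function by name."""
    REGISTRY[func.__name__] = func
    return func

class Objects:
    def __getattr__(self, name):
        func = REGISTRY[name]
        defaults = func.__defaults__ or ()
        # parameter names after the leading 'data' argument
        pnames = func.__code__.co_varnames[1:1 + len(defaults)]

        def creator(**kwargs):
            args = {}
            for pname, default in zip(pnames, defaults):
                if pname in kwargs:
                    args[pname] = kwargs[pname]          # pre-built object passed in
                elif callable(default) and default in REGISTRY.values():
                    sub = kwargs.get("use_" + pname, {}) # params for the sub-object
                    args[pname] = getattr(self, default.__name__)(**sub)
                else:
                    args[pname] = kwargs.get(pname, default)
            return func(self, **args)
        return creator

@register
def author(data, name="author"):
    return {"name": name}

@register
def entry(data, author=author, content="Test Content"):
    return {"author": author, "content": content}

data = Objects()
print(data.entry(use_author={"name": "Bob"})["author"])  # {'name': 'Bob'}
```

The `%(inc)s` unique-ID interpolation is omitted for brevity; the point is that a default that is itself a registered creation function gets called lazily unless overridden.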
And to be fair, the texthunk above is the simplest possible case. At work we've got some slightly more complicated code than this, but in practice complicated code in your test creation functions needs to be factored up into your objects. The final, factored creation functions are generally concerned with simple data massaging and structure management, almost more data than code, which you can see in following, my most complicated creation function, for entries. The purpose of this function is to massage the act of creating an entry to go through the exact same function that entries created by the user in the normal use of the system goes through, which in this case is not the default Django-provided constructor, but a class method I added. (One of the helpful side effects of this library can be to make it easier to go through the same functions in your test code that the user will go through, by making it much easier to match the interface of those functions, which may assume complicated data structures exist.)
@register
def entry(data,
          categories = category, # can take a list, too
          content = "Test Content %(inc)s",
          author = author,
          format = TEXT_HUNK_FORMATS.HTML,
          status = ENTRY_STATUSES.NORMAL,
          link = None,
          title = None,
          summary = None):
    # We process this to go through the new_entry class method.
    if isinstance(categories, Category):
        categories = [categories]
    args = {'status': status, 'link': link, 'title': title,
            'summary': summary, 'content': content, 'author': author.id,
            'format': format, 'categories': [x.id for x in categories]}
    return Entry.new_entry(args)
http://www.jerf.org/iri/post/2514
Lesson 24: How to Send SMS
Written by Jonathan Sim
You can find this lesson and more in the Arduino IDE (File -> Examples -> Andee). If you are unable to find them, you will need to install the Andee Library for Arduino IDE.
Did you know that you can use the Annikken Andee to send an SMS through the connected smartphone to another recipient?
This is very useful if you want to monitor certain processes and be notified via SMS while you're out.
Here's how you can get Annikken Andee to send the SMS!
Unfortunately, due to the strict security restrictions imposed by Apple, you are unable to send an SMS through an iOS device.
You could program the Arduino x Annikken Andee to automatically send an SMS when sensor readings reach a certain level. But if you intend to do that, be sure to use a flag to prevent the Arduino x Annikken Andee from sending repeatedly.
#include <SPI.h>
#include <Andee.h>

// We'll use a button to send the message
AndeeHelper sendMessage;
AndeeHelper SMSobject;

// The message and recipient must be declared as an object.
// You can use the text input button to get the user to set the
// recipient number and even the message itself!
char messageRecipient[] = "+6587654321";
char message[] = "Hello World!";
void setup()
{
  Andee.begin();    // Setup communication between Annikken Andee and Arduino
  Andee.clear();    // Clear the screen of any previous displays
  setInitialData(); // Define object types and their appearance
}
void setInitialData()
{
  sendMessage.setId(0);
  sendMessage.setType(BUTTON_IN);
  sendMessage.setLocation(0, 0, FULL);
  sendMessage.setTitle("Send SMS");

  SMSobject.setId(1);
  SMSobject.setType(SMS_SENDER); // Sets object as an SMS object
  SMSobject.setRecipient(messageRecipient);
  SMSobject.setMessage(message);
}
void loop()
{
  if (sendMessage.isPressed()) // When user presses the send button on phone
  {
    sendMessage.ack(); // Acknowledge button press
    SMSobject.send();  // Sends the SMS to the recipient
  }
  sendMessage.update(); // Do not update SMS objects!
  delay(500); // Always leave a short delay for Bluetooth communication
}
http://resources.annikken.com/index.php?title=Lesson_24:_How_to_Send_SMS
Introduction to Python isinstance
This function checks whether an object is an instance of a given class, or of a subclass of that class. It always returns a boolean value: if the object we pass is of the given type, isinstance returns True; otherwise it returns False. For example, suppose we have the number 25 and want to check whether it is an integer; we can use isinstance(25, int), and it will return True, as 25 is an int.
Syntax:
isinstance(object, classinfo)
Examples of Python isinstance
Examples of python isinstance are given below:
Example #1
Code:
lst = [1,2,3,'a']
print(isinstance(lst, list))
Output:
We have created a list and stored it in the variable lst. We pass lst as the object to isinstance, with list as the class. They match, so isinstance returns True.
print(isinstance(lst, tuple))
Output:
In the above example, this time we passed tuple as the class, and isinstance returns False, as lst is not a tuple. The isinstance function can also check a variable against multiple data types at once.
Example #2
Code:
lst = [1,2,3,'a']
dist = {'A':[1,2,3], 'B':['a','b','c']}
print(isinstance(dist, tuple))
Output:
We have created a dictionary consisting of two keys. Now if we pass this dictionary to isinstance and check it against the tuple class, it returns False. As we know, a dictionary is a distinct object type.
lst = [1,2,3,'a']
dist = {'A':[1,2,3], 'B':['a','b','c']}
print(isinstance(dist, (tuple, dict)))
Output:
This time we have passed both tuple and dict as a tuple of classes, and isinstance returns True. The isinstance function is very useful when we are working with different classes and writing more functional programs.
Example #3
Code:
class test1:
    a = 1

class test2:
    b = 2

t1 = test1()
t2 = test2()
print(isinstance(t1, test1))
Output:
In the above program, we created two classes, test1 and test2, with class variables a = 1 and b = 2. t1 is an instance of test1 and t2 is an instance of test2. We then check t1 against the class test1; isinstance returns True because t1 is an instance of that class.
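One point worth adding, since it is standard Python behaviour: isinstance also returns True for instances of subclasses, which a plain type() comparison does not:

```python
class Animal:
    pass

class Dog(Animal):  # Dog inherits from Animal
    pass

d = Dog()
print(isinstance(d, Dog))     # True
print(isinstance(d, Animal))  # True: subclass instances count
print(type(d) is Animal)      # False: type() reports the exact class only
```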
Example #4
Code:
a = 5
print('Is a an int?: ', isinstance(a, int))
b = 2.5
print('Is b a float?: ', isinstance(b, float))
c = 'lol'
print('Is c a string?: ', isinstance(c, str))
e = (1, 2, 3)
print('Is e a tuple?: ', isinstance(e, tuple))
f = []
print('Is f a list?: ', isinstance(f, list))
g = {}
print('Is g a dict?: ', isinstance(g, dict))
Output:
In the above example, we created variables of different data types and checked each one with the isinstance method, producing the outputs shown.
Example #5
Code:
def multiplication(p1, p2):
    if isinstance(p1, int) and isinstance(p2, int):
        return f'Params OK! Result: {p1*p2}'
    else:
        return 'Params must be of type int'

print(multiplication(2, 'f'))
Output:
In the above program, we created a method multiplication that takes two parameters, p1 and p2. We then use the isinstance method to check whether both parameters are of type int. If either parameter is not an integer, isinstance returns False and we return an error message. We call the method with print, passing one integer and one character, and the function returns the error message as expected.
Now we are passing both the parameters as an integer and this time we get the multiplication of both the numbers.
Code:
def multiplication(p1, p2):
    if isinstance(p1, int) and isinstance(p2, int):
        return f'Params OK! Result: {p1*p2}'
    else:
        return 'Params must be of type int'

print(multiplication(2, 3))
Output:
Difference between isinstance and Type Function
The purpose of both functions is the same: to check the data type of an object. But the implementation differs, with different use cases. The type function returns the data type of the object, while isinstance returns a boolean based on the object and the class we pass.
If you just want to know the data type of a value, the type function is a good choice; if you want to check whether a value is an instance of a particular class, the isinstance function is recommended, since it returns a boolean value.
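A concrete illustration of that difference, using the built-in fact that bool is a subclass of int:

```python
x = True
print(type(x))              # type() reports exactly bool
print(type(x) is int)       # False
print(isinstance(x, int))   # True: bool is a subclass of int
```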
Conclusion
The isinstance method provides very useful functionality in Python programs, particularly functional-style programs, because we often need to check whether an object is an instance of a class, and this function returns a boolean: if the object we pass is of the given type, isinstance returns True; otherwise it returns False.
Recommended Articles
This is a guide to Python isinstance. Here we discuss the Introduction and difference between isinstance and type function along with different examples and its code implementation. You may also have a look at the following articles to learn more –
https://www.educba.com/python-isinstance/?source=leftnav
The usecase run command runs the script from the specified entry point.
The basic syntax for running a use case script from the command-line is:
usecase run script_name [options]
The options that you can use are defined in the method with the $Options$ tag of the current use case within the script. You can get more information about them using the usecase help command.
The syntax for setting options on the command line is --options.name=value or --options.name value. If you do not specify a value for an option explicitly, the option takes the default value defined in the script.
Note: options is the default name of the top level group of options. However, you can give a different name to the top level group of options in the use case script.
When issuing the usecase run command, rather than specifying an absolute path to a script, specify a -p or -s flag before the script name. For example, issue:
usecase run -p script_name [options]
to run a use case script for the current platform.
If there is more than one use case defined in a single script, the entry point or main method to use when running the script must be defined. For scripts with multiple use cases the syntax is:
usecase run script_name entry_point [options]
If the entry point to the use case accepts positional arguments, you must specify them on the command-line when the script is run. For example, if the main method in the use case script positional.py in the current working directory is defined as follows:
... $Run$ main ...
def main(script, filename, value):
    print("Running the main method")
The syntax to run the script is:
usecase run positional.py [options] filename value
usecase run myscript.py --options.enableETM=True --options.enableETM.timestamping=True --options.traceCapture "DSTREAM"
Runs a use case script called myscript.py in the current working directory, setting the options defined for this use case.
usecase run multipleEntry.py mainOne --options.traceCapture "ETR"
Runs a use case script called multipleEntry.py in the current working directory. The entry point to this use case script is mainOne. A single option is specified after the entry point.
usecase run -s multipleScript.py mainTwo filename.txt 100
Runs a use case script called multipleScript.py from the /Scripts/usecase/ directory in the configuration database. The entry point to this use case script is mainTwo, which defines two positional arguments. No options are supplied to the script.
On the command-line, providing a long list of options might be tedious to type in every time the script is run over different connections. A solution to this is to use the built-in functionality --save-options.
For example, you can run the script:
usecase run script_name --option1=... --option2=... --save-options=/path/to/options.txt
where options.txt is the file in which to save the options. This saves the options to this use case script, at runtime, to options.txt.
If you do not specify an option on the command-line, its default value is saved to the specified file.
After saving options to a file, there is a similar mechanism for loading them back in. Issuing:
usecase run script_name --load-options=path/to/options.txt
loads the options in from options.txt and, if successful, runs the script.
You can combine options by providing options on the command-line and loading them from a file. Options from the command-line override those from a file.
Example:
The options file options.txt for myscript.py contains two option values: options.a = 10 and options.b = 20.
Running usecase run myscript.py --load-options=options.txt results in options.a having the value 10 and options.b having the value 20, loaded from the specified file. If an option is also set on the command-line, for example usecase run myscript.py --options.b=99 --load-options=options.txt, it overrides the options retrieved from the file: options.a takes the value 10, but options.b takes the new value 99 provided on the command-line and not the one stored in the file. This is useful for storing a standard set of options for a single use case and modifying only those necessary at runtime.
When running a script, the user might want to see what options are being used, especially if the options are loaded from an external file. The built-in option --show-options displays the name and value of all options being used in the script when the script is run.
usecase run script_name --show-options
Prints out a list of the default options for this script.
usecase run script_name --option1=x --option2=y --show-options
Prints out a list of options for the script, with updated values for option1 and option2.
usecase run script_name --load-options=file --show-options
Prints out a list of options taking their values from the supplied file. If an option is not defined in the loaded file, its default value is printed.
http://infocenter.arm.com/help/topic/com.arm.doc.dui0446z/vvi1443623644016.html
Brian Patterson, 19,588 Points
Why isn't this working?
def create_shopping_list
  hash = { "name" => name }
  return hash
end

Not sure why this does not work?
1 Answer
Rick BuffingtonCourses Plus Student 8,037 Points
From what I can tell, it looks like you are trying to assign the key "name" with a value that Ruby thinks is a variable. Ruby is looking for a variable named "name" because you have not enclosed the value in quotes - or have not assigned a value to name.
hash = { "name" => "name" }
If what you are trying to do is assign the value with the "name" variable, you'd need to declare and assign it before using it in the hash:
name = "John"
hash = { "name" => name }
Hope that helps.
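If the intent was for the caller to supply the name, the method could take it as a parameter instead (a sketch, not the course's official solution):

```ruby
def create_shopping_list(name)
  { "name" => name }  # the last expression is returned implicitly
end

list = create_shopping_list("Groceries")
puts list["name"]  # => Groceries
```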
Brian Patterson, 19,588 Points
Thanks now stuck on the next bit.
https://teamtreehouse.com/community/why-isnt-this-working-22
NfcShareFilesContent
Since: BlackBerry 10.0.0
#include <bb/system/NfcShareFilesContent>
To link against this class, add the following line to your .pro file: LIBS += -lbbsystem
Defines a request to share local files over NFC.
Clients specify the files they wish to share by creating an instance of NfcShareFilesContent, populating it with local file paths in URI form, and passing the object to NfcShareManager::setShareContent(const NfcShareFilesContent &).
Note that a request must contain at least one file to be valid.
Overview
Public Functions

QList<QUrl>
    Returns the list of local file paths to be shared using NFC.

NfcShareFilesContent &
    Copies the data of an existing NfcShareFilesContent object to this object.
    Returns the NfcShareFilesContent instance.
    Since: BlackBerry 10.0.0

void
    Sets the list of local file paths to be shared using NFC.
    Local files are specified as URIs with a scheme of "file://" and a path to a file on the local file system.
    To be valid, there must be at least one file in the list.
    Required: YES.
    Since: BlackBerry 10.0.0
Key ideas: Logistic regression, cross validation, sensitivity, specificity, receiver operating characteristics curve
This notebook shows how to use cross-validation and logistic regression in Statsmodels to assess how well a group of predictor variables can be used to predict a binary outcome. A receiver operating characteristics (ROC) curve is used to describe the strength of the predictive relationship.
We start with the usual import statements:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt  # needed for the ROC plot below
from statsmodels.discrete.discrete_model import Logit
The data are from a study of Pima indians in the US. Each individual is assessed as either having, or not having type 2 diabetes. Several predictors such as age, BMI, and blood pressure can be used to predict diabetes status. The data have been split into training and testing sets, but since we are using cross-validation here, we merge the two data sets.
data_url = ""
data_tr = pd.read_csv(data_url)
data_url = ""
data_te = pd.read_csv(data_url)
data = pd.concat((data_tr, data_te), axis=0)
Next we split the data into a vector of outcomes (endog), and a matrix of predictors (exog). We also add an intercept (a column of 1's) to exog. Note that an alternative approach would be to use formulas to fit the logistic regression model. In this case, these two steps would not be necessary.
endog = np.asarray([int(x.lower() == "yes") for x in data["type"]])
xnames = ["npreg", "glu", "bp", "skin", "bmi", "ped", "age"]
exog = np.asarray(data[xnames])
exog = np.concatenate((np.ones((exog.shape[0], 1)), exog), axis=1)
Many of the predictors show moderately strong marginal associations with diabetes status, so there is hope that we can predict the outcome fairly well by combining the information in all the predictors.
x = [np.corrcoef(endog, x)[0,1] for x in exog[:,1:].T]
x = pd.Series(x, index=xnames)
print x
npreg    0.252586
glu      0.503614
bp       0.183432
skin     0.254874
bmi      0.300901
ped      0.233074
age      0.315097
dtype: float64
Next we do the cross validation. Each subject is held out in turn during the model building. We construct a predicted outcome (on a probability scale) for the held-out observation and place it into the vector named "scores".
n = data.shape[0]
scores = np.zeros(n, dtype=np.float64)
for k in range(n):
    ii = range(n)
    ii.pop(k)
    mod = Logit(endog[ii], exog[ii,:])
    rslt = mod.fit(disp=False)
    scores[k] = rslt.predict(exog[k,:])
The ROC curve is a plot of senstivity against 1 - specificity. We calculate sensitivity and specificity here (note that this is not a very efficient algorithm, but the calculation is fast so it doesn't matter much here).
uscores = np.unique(scores)
n1 = np.sum(endog)
Sens = np.zeros_like(uscores)
Spec = np.zeros_like(uscores)
for j, u in enumerate(uscores):
    Sens[j] = np.sum((scores >= u) * endog) / float(n1)
    Spec[j] = np.sum((scores <= u) * (1 - endog)) / float(n - n1)
Now we make the ROC plot.
plt.plot(1 - Spec, Sens, '-')
plt.plot([0,1], [0,1], '-', color='grey')
plt.xlabel("1 - Specificity", size=17)
plt.ylabel("Sensitivity", size=17)
<matplotlib.text.Text at 0x483be50>
We can calculate the area under the curve (AUC) using the trapezoidal method:
auc = 0.
for i in range(len(Spec)-1):
    auc += (Spec[i+1] - Spec[i]) * (Sens[i+1] + Sens[i]) / 2
print auc
0.846749423092
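The same trapezoidal rule can be cross-checked against NumPy's built-in trapz. This standalone sketch uses a tiny made-up ROC curve (synthetic numbers, not the Pima results above) just to show that the manual loop and np.trapz agree:

```python
import numpy as np

# A tiny synthetic ROC curve: x = 1 - specificity, y = sensitivity,
# both sorted in increasing x order.
fpr = np.array([0.0, 0.2, 0.5, 1.0])  # 1 - Spec
tpr = np.array([0.0, 0.6, 0.9, 1.0])  # Sens

# Manual trapezoid rule, the same calculation as the loop above.
auc = 0.0
for i in range(len(fpr) - 1):
    auc += (fpr[i+1] - fpr[i]) * (tpr[i+1] + tpr[i]) / 2.0

print(auc, np.trapz(tpr, fpr))  # the two values agree: 0.76
```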
JIRA (3.13.3 - Windows Platform)
Hi,
We would like to run an executable with parameters in a post-function. We've tried several ways, one as follows: with waitFor() the workflow transition takes a long time to complete, and without waitFor() it completes faster, but in neither case does the command run successfully.
We've tested the command.text out of JIRA, in command line, and it works well.
{code}
def command = """C:\\JIRA-Enterprise-3.13.1\\datadir\\groovyscripts\\InsertPBI.exe "" "web" "$title1" "$descrip1" "$myIssue.key" "$myIssue.assignee" """ //string
log.debug command.toString()
Process process = command.execute()
process.waitFor()
{code}
Is it possible to do what we are trying to accomplish?
We are very grateful for script runner, it has helped us a lot, we were able to automatize many tasks with it. It's a great tool.
Thank you!
Hi,
We could solve the issue, it was something related to authentication in the external procedure, now is working perfectly.
Thank you for this great plugin!
Regards
Hi Maria,
I'm having the same issue while trying to run an external program in a post-function using script runner (code is almost similar to yours). The external program is supposed to return a string value that will be used further in the post-function script. The program runs perfectly in groovy console (returns the expected output). However, the same program does not produce any output when run from the post-function. I even tried capturing the process output (consumeProcessOutput) but no luck!
It would be great if you could advise on how you fixed yours.
Parsing FASTA files
What, again?
FASTA files are a bit more complex than what I talked about a couple weeks ago. That FASTA parser won't handle some FASTA files. It assumes there is a blank line between each record. I made that assumption because it's easier to write a parser when there is a well-defined "end of record" identifier. Real FASTA files don't always have that blank line, as with the following.
>YAL059W-36310.36514 Putative promoter sequence
AAATAATATTTGGGGCCCCTCGCGGCTCATTTGTAGTATCTAAGATTATGTATTTTCTTT
TATAATATTTGTTGTTATGAAACAGACAGAAGTAAGTTTCTGCGACTATATTATTTTTTT
TTTTCTTCTTTTTTTTTCCTTTATTCAACTTGGCGATGAGCTGAAAATTTTTTTGGTTAA
GGACCCTTTAGAAGTATTGAATGTG
>YAL058W-37154.37469 Putative promoter sequence
TTTCATATGAAAGGTCCTAGGAATACACGATTCTTGTACGCATTCTTCTTTTTTCTATCT
TCTTTCATTCTTTGTACATTAGATAACATGGTTTTAGCTTAGTTTTATTTTATTTTTTAT
ATATCTGGATGTATACTATTATTGAAAAACTTCATTAATAGTTACAACTTTTTCAATATC
AAGTTGATTAAGAAAAAGAAAATTATTATGGGTTAGCTGAAAACCGTGTGATGCATGTCG
TTTAAGGATTGTGTAAAAAAGTGAACGGCAACGCATTTCTAATATAGATAACGGCCACAC
AAAGTAGTACTATGAA
>YAL056W-38979.39264 Putative promoter sequence
AAGTGTTAGTTTATAACATGGTCTCAATAATTGCACCACAACGGCTTCTCTTTTATAGAT
GGTTAACATTATAGTATCAATATTATCATCATGATTAAATGATGATGTATAATACTTACC
CGATGTTAAATCTTATTTTTTCATGCAGTAAGTAATCATGCAACAAGAAAAACCCGTAAT
TAAGCGAACATAGAACAACTAGCATCCCCGATAAGACGGAATAGAATAGTAAAGATTGTG
ATTCATTGGCAGGTCCATTGTCGCATTACTAAATCATAGGCATGGA

Less often you'll come across FASTA files with extra blank lines between records or at the beginning or end of the file, but I've never seen one with an extra blank line inside of a record. I expect that many parsers won't handle that case correctly.
What happens if I use the old FASTA parser to read these FASTA records? I'll save them to the file "test.fasta" and use fasta_reader.py as a library to read the first record in the file.
>>> import fasta_reader
>>> rec = fasta_reader.read_fasta_record(open("test.fasta"))
>>> rec.title
'YAL059W-36310.36514 Putative promoter sequence'
>>> rec.sequence
'AAATAATATTTGGGGCCCCTCGCGGCTCATTTGTAGTATCTAAGATTATGTATTTTCTTTTATAATATT
TGTTGTTATGAAACAGACAGAAGTAAGTTTCTGCGACTATATTATTTTTTTTTTTCTTCTTTTTTTTTCC
TTTATTCAACTTGGCGATGAGCTGAAAATTTTTTTGGTTAAGGACCCTTTAGAAGTATTGAATGTG>
YAL058W-37154.37469 Putative promoter sequenceTTTCATATGAAAGGTCCTAGGAAT
ACACGATTCTTGTACGCATTCTTCTTTTTTCTATCTTCTTTCATTCTTTGTACATTAGATAACATGGTTT
TAGCTTAGTTTTATTTTATTTTTTATATATCTGGATGTATACTATTATTGAAAAACTTCATTAATAGTTA
CAACTTTTTCAATATCAAGTTGATTAAGAAAAAGAAAATTATTATGGGTTAGCTGAAAACCGTGTGATGC
ATGTCGTTTAAGGATTGTGTAAAAAAGTGAACGGCAACGCATTTCTAATATAGATAACGGCCACACAAAG
TAGTACTATGAA>YAL056W-38979.39264 Putative promoter sequenceAAGTGTTA
GTTTATAACATGGTCTCAATAATTGCACCACAACGGCTTCTCTTTTATAGATGGTTAACATTATAGTATC
AATATTATCATCATGATTAAATGATGATGTATAATACTTACCCGATGTTAAATCTTATTTTTTCATGCAG
TAAGTAATCATGCAACAAGAAAAACCCGTAATTAAGCGAACATAGAACAACTAGCATCCCCGATAAGACG
GAATAGAATAGTAAAGATTGTGATTCATTGGCAGGTCCATTGTCGCATTACTAAATCATAGGCATGGA'
>>>

There is no blank line, so the code that reads the sequence lines kept on reading until it reached the end of the file. All of the rest of the file was interpreted as sequence, so the other two title lines were included in the sequence output.
You know how I keep talking about sanity checking? I should have done a bit more checking to make sure that I didn't use a line starting with a ">" as a sequence line. The change would occur in the following part of the existing code
sequence_lines = []
while 1:
    line = infile.readline().rstrip()
    if line == "":
        break
    sequence_lines.append(line)

and the code with the change would look like
sequence_lines = []
while 1:
    line = infile.readline().rstrip()
    if line == "":
        break
    elif line.startswith(">"):
        raise TypeError("Read title line when expecting sequence line")
    sequence_lines.append(line)

I'm not going to show the full program with this change because I'm instead going to handle that case correctly.
Look-ahead
The problem with this more complex FASTA format is that there's no way to identify the end of a record except by reading the start of the next record. If I read a line after the title line it could be sequence data or it could be a blank line (marking the end of the sequence data) or it could be the next record's title line (implicitly marking the end of the sequence data and the start of the next record). In computer science terms, it needs to look-ahead to see if it's at the end of a record.
My read_fasta_record() function reads a record and leaves the input file positioned ready to read the next record. After it reads a record it's ready to read the next one. If there's no blank line then it will read the first line of the next record, notice it's finished with the original record and return it to the caller. But in this case it's read one line too many. The file pointer will be on the sequence data line after the title line of the next record.
Possible solution #1 - seek
Files support a seek function, which moves the location of the file pointer. Here's how to move backwards in the file:
title = infile.readline()
if not title:
    return None  # End-of-file?
data = []
while 1:
    line = infile.readline()
    if line.isspace():
        return FastaRecord(title, "".join(data))
    if line.startswith(">"):
        infile.seek(-len(line), 1)  # the 1 means "relative to current position"
        return FastaRecord(title, "".join(data))
    data.append(line.rstrip("\n"))
Python supports many file-like objects but not all are seekable. One example is a network connection
>>> import urllib2
>>> f = urllib2.urlopen("")
>>> f.read(40)
'>gi|2765658|emb|Z78533.1|CIZ78533 C.irap'
>>> f.read(40)
'eanum 5.8S rRNA gene and ITS1 and ITS2 D'
>>> f.read(40)
'NA\nCGTAACAAGGTTTCCGTAGGTGAACCTGCGGAAGGAT'
>>> f.read(40)
'CATTGATGAGACCGTGGAATAAACGATCGAGTG\nAATCCG'
>>> f.seek(-5, 1)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
AttributeError: addinfourl instance has no attribute 'seek'
>>>

while another is program input from a terminal or a unix pipe connected to sys.stdin.
Because of the limitations in different file-like objects it's best to have a solution which works without the ability to seek.
Possible solution #2 - a new file-like object
It's easy to make a file-like object. Actually, it's easy to make a sufficiently file-like object. The FASTA record reader only uses the readline method of the file object, so I can implement one as a wrapper around an existing file. Here's one which is useful for debugging.
class DebugFile(object):
    def __init__(self, infile):
        self.infile = infile
    def readline(self):
        line = self.infile.readline()
        print "Read:", repr(line)
        return line

>>> f = DebugFile(open("test.fasta"))
>>> s = f.readline()
Read: '>YAL059W-36310.36514 Putative promoter sequence\n'
>>> s
'>YAL059W-36310.36514 Putative promoter sequence\n'
>>>
I'll modify this to have an UndoFile with the new method saveline which takes a string. The string is saved so the next readline returns that string instead of reading the next line of the file. Once returned it's reset to None to mean that no line is saved.
class UndoFile:
    def __init__(self, infile):
        self.infile = infile
        self._saved = None
    def readline(self):
        if self._saved is not None:
            s = self._saved
            self._saved = None
            return s
        return self.infile.readline()
    def saveline(self, s):
        if self._saved is not None:
            raise TypeError("Only one line may be saved")
        self._saved = s

Here it is in action on a file-like object which doesn't support seeks.
>>> import urllib2
>>> infile = urllib2.urlopen("")
>>> f = UndoFile(infile)
>>> f.readline()
'>YAL059W-36310.36514 Putative promoter sequence\n'
>>> f.readline()
'AAATAATATTTGGGGCCCCTCGCGGCTCATTTGTAGTATCTAAGATTATGTATTTTCTTT\n'
>>> f.readline()
'TATAATATTTGTTGTTATGAAACAGACAGAAGTAAGTTTCTGCGACTATATTATTTTTTT\n'
>>> f.readline()
'TTTTCTTCTTTTTTTTTCCTTTATTCAACTTGGCGATGAGCTGAAAATTTTTTTGGTTAA\n'
>>> f.readline()
'GGACCCTTTAGAAGTATTGAATGTG\n'
>>> f.readline()
'>YAL058W-37154.37469 Putative promoter sequence\n'
>>> s = _
>>> s
'>YAL058W-37154.37469 Putative promoter sequence\n'
>>> f.saveline(s)
>>> f.readline()
'>YAL058W-37154.37469 Putative promoter sequence\n'
>>>

and here's how to use it to read a FASTA record
def read_fasta_record(infile):
    line = infile.readline()
    if not line:
        return  # End-of-file
    # Double-check that it's a title line
    if not line.startswith(">"):
        raise TypeError(
            "The title line must start with a '>': %r" % line)
    title = line[1:].rstrip("\n")
    sequences = []
    while 1:
        line = infile.readline()
        if line.isspace():
            break
        if line.startswith(">"):
            # Read the first line of the next record.
            # Save it for next time and go with what we have.
            infile.saveline(line)
            break
        sequences.append(line.rstrip("\n"))
    return FastaRecord(title, "".join(sequences))
This is a good general-purpose solution. You can use the UndoFile with anything that parses files using readline, and you can use it with anything that understands the new "saveline" protocol. Protocol is the Python word which says that there's an agreement on what a given object must do to be labeled "*-like". The more an object implements the file protocol, the more it can be used any place that a file can be used.
Some of the Biopython parsers use this approach. The biggest problem is the performance. Every readline() first goes to the UndoHandle (that's the Biopython version of UndoFile) and nearly all then go on to the actual file object. Function calls in Python are somewhat slow.
There's also the complication in the caller of wrapping the actual file into an undo-able object.
The iterator protocol
The for-loop works on lists, dictionaries, strings, files, generator objects and other data types. The technical term for the thing you iterate over is a container; you iterate over the elements in the container.
You can make your own objects work in a for-loop by implementing the iterator protocol. The for-loop starts by asking the object for its iterator. The object can be a container (like a string or dictionary) or an iterator. If it's an iterator it should return itself. The for-loop gets the iterator by calling the special method __iter__().
The iterator must implement two methods: __iter__() and next(). The first almost certainly returns itself. The second is more interesting. When the for-loop needs the next element from the iterator it gets it by calling next() and using the return value. If there are no more values then next() must raise an exception named StopIteration.
To see that in action, here's an iterator which counts from a given number down to 0.
>>> class Countdown(object):
...     def __init__(self, value):
...         self.value = value
...     def __iter__(self):
...         return self
...     def next(self):
...         if self.value < 0:
...             raise StopIteration
...         value = self.value
...         self.value = value - 1
...         return value
...
>>> for i in Countdown(5):
...     print i
...
5
4
3
2
1
0
>>>

In detail, here's what the for-loop uses from the container and the iterator.
>>> container = Countdown(5)
>>> iter(container)
<__main__.Countdown object at 0x554f0>
>>> iterobj = iter(container)
>>> iterobj.next()
5
>>> iterobj.next()
4
>>> iterobj.next()
3
>>> iterobj.next()
2
>>> iterobj.next()
1
>>> iterobj.next()
0
>>> iterobj.next()
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "<stdin>", line 8, in next
StopIteration
>>>
Possible solution #3 - a FASTA record iterator
I could make an iterator over the records in a FASTA file pretty easily if I have a read_fasta_record function which reads a record and has the file handle set up for the next record.
import fasta_reader

class FastaRecordReader(object):
    def __init__(self, infile):
        self.infile = infile
    def __iter__(self):
        return self
    def next(self):
        x = fasta_reader.read_fasta_record(self.infile)
        if x is None:
            raise StopIteration
        return x

>>> for rec in FastaRecordReader(open("simple.fasta")):
...     print rec.title
...
first sequence record
second sequence record
third sequence record
>>>

This is effectively identical to the generator solution of
import fasta_reader

def read_fasta_records(infile):
    while 1:
        rec = fasta_reader.read_fasta_record(infile)
        if rec is None:
            break
        yield rec
What's useful about the FastaRecordReader class solution is that the object provides a place to store information needed to read the next record, so long as there isn't lookahead information which needs to be shared with other parsers which expect a standard file object.
Instead of saving the look-ahead information in its own class, I'll merge it with the record reading code to get a new FastaRecordReader. The interface is unchanged, except that this will correctly handle cases where there is no blank line to end a record.
class FastaRecordReader(object):
    def __init__(self, infile):
        self.infile = infile
        self._saved = None
    def __iter__(self):
        return self
    def next(self):
        # The first line could come from the lookahead
        if self._saved is not None:
            line, self._saved = self._saved, None
        else:
            line = self.infile.readline()
        if not line:
            # end-of-file
            raise StopIteration
        # Double-check that it's a title line
        if not line.startswith(">"):
            raise TypeError(
                "The title line must start with a '>': %r" % line)
        title = line[1:].rstrip("\n")
        sequences = []
        while 1:
            line = self.infile.readline()
            if not line or line.isspace():
                break
            if line.startswith(">"):
                self._saved = line
                break
            sequences.append(line.rstrip("\n"))
        return FastaRecord(title, "".join(sequences))
Possible solution #4 - a generator
I showed how the first FastaRecordReader class was "effectively identical to the generator implementation of read_fasta_records." The same is true with the improved FastaRecordReader – I can make an improved generator version. The code only changes in a few places; I use the local variable "saved" instead of the instance attribute "self._saved" and I "yield" items in a while loop instead of "return"ing items each time the function is called.
def read_fasta_records(infile):
    saved = None
    while 1:
        # Look for the start of a record
        if saved is not None:
            line = saved
            saved = None
        else:
            line = infile.readline()
        if not line:
            return
        # skip blank lines
        if line.isspace():
            continue
        # Double-check that it's a title line
        if not line.startswith(">"):
            raise TypeError(
                "The title line must start with a '>': %r" % line)
        title = line[1:].rstrip("\n")
        # Read the sequence lines until the next record, a blank
        # line, or the end of file
        sequences = []
        while 1:
            line = infile.readline()
            if not line or line.isspace():
                break
            if line.startswith(">"):
                # The start of the next record
                saved = line
                break
            sequences.append(line.rstrip("\n"))
        yield FastaRecord(title, "".join(sequences))
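To sanity-check the look-ahead logic, here is a self-contained version of the generator run against an in-memory FASTA file with no blank line between records — exactly the case the old parser broke on. It is ported to Python 3 for convenience, and uses a minimal FastaRecord stand-in since that class comes from the earlier article:

```python
from io import StringIO

# Minimal stand-in for the FastaRecord class from the earlier article.
class FastaRecord(object):
    def __init__(self, title, sequence):
        self.title = title
        self.sequence = sequence

def read_fasta_records(infile):
    saved = None
    while 1:
        # Either consume the look-ahead line or read a new one.
        line = saved if saved is not None else infile.readline()
        saved = None
        if not line:
            return
        if line.isspace():
            continue
        if not line.startswith(">"):
            raise TypeError("The title line must start with a '>': %r" % line)
        title = line[1:].rstrip("\n")
        sequences = []
        while 1:
            line = infile.readline()
            if not line or line.isspace():
                break
            if line.startswith(">"):
                saved = line   # look-ahead: first line of the next record
                break
            sequences.append(line.rstrip("\n"))
        yield FastaRecord(title, "".join(sequences))

# Two records with no blank line between them.
text = ">first seq\nACGT\nTTTT\n>second seq\nGGGG\n"
records = list(read_fasta_records(StringIO(text)))
for rec in records:
    print(rec.title, rec.sequence)
```

The second record's title line is read while finishing the first record, stashed in `saved`, and picked up again on the next pass through the outer loop.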
The only major difference is in how they save state. The class approach uses instance variables while the generator uses local variables. The technical term for saving state in local variables is closure. Some people prefer closures over objects and classes.
I've found that the generator code for examples like this is easier to write than a class-based solution, but mostly because everything is in one function instead of in two methods of a class. In theory anything in one form has a solution in the other form and the translation from one to the other is mechanical. My experience says that the differences are also very minor.
(The more complex translations - though still mechanical - occur when the closure solution yields when inside of several loops. Bioinformatics data files are made of records and they can be processed a record at a time so this doesn't happen frequently. I've only needed the more complex class solution once.)
Which is best?
That depends on your style and what you need it for. I usually prefer the generator function first (easiest for me to use), the look-ahead solution second (very general and easiest to understand), and the class-based one last.
This plugin allows Grails developers to expose methods defined in Grails Service classes as Web Services. The latest version of the plugin is 0.6. This version supports Grails v1.1. NOTE: Please use version 0.5 for Grails 1.0.4 and earlier versions.
Installation

Type the following command in your Grails application directory to install the Apache Axis2 plugin.

$> grails install-plugin axis2

Alternatively, you can use the following command if you have a plugin archive locally.

$> grails install-plugin /path/to/grails-axis2-<version>.zip
Dependencies

This plugin is based on the WSO2 WSF/Spring framework, which integrates the Apache Axis2 Web services engine into Spring.
Getting Started

Just add the following line to a Grails service class to expose it as a web service. This will expose all the methods of the service class as web service operations.

static expose=['axis2']

For more information on service classes refer to the section on Services in the Grails user guide. The following code illustrates a sample service class exposed as an Apache Axis2 web service.

import javax.jws.WebParam;

class TestService {
    static expose=['axis2']

    String sayHello(String yourName) {
        return "Hello ${yourName}!"
    }

    def availableBooks() {
        return Book.list()
    }

    Book getBookById(int id) {
        return Book.get(id)
    }

    void addBook(@WebParam(name="name") String pName,
                 @WebParam(name="author") String pAuthor) {
        new Book(name:pName, author:pAuthor).save();
    }
}

You can use Java or Groovy classes (including domain classes) as parameters and return types.

After running the application (grails run-app), the EPR of the web service will be:

And the WSDL will be available at:

You can browse the Axis2 web interface at:
Roadmap

The following features are planned to be implemented in the near future.
- Custom WSDLs
- WS-* support
Source Code

The source code is available at.
Report Bugs

Please use the JIRA issue tracker available at. Report bugs under the "Grails-axis2" component of the "Grails Plugins" project. If you do not have an existing JIRA account, please sign up at.
Version History

v. 0.6
- Supports Grails 1.1
- Updated to WSF/Spring v1.5
- Partial support for @WebParam annotation ('name' attribute)
- Groovy classes (including Grails domain classes) for parameters and return values.
- Grails v1.0.1 support
- Minor changes and bug fixes
- Initial release
Tutorial
Solve CORS once and for all with Netlify Dev.
Access-Control-Allow-Headers and CORS
Say you’re a budding young (or young-at-heart!) frontend developer. You’d really love to smush together a bunch of third party APIs for your next Hackathon project. This API looks great:! We’ll build the next great Dad Joke app in the world! Excitedly, you whip up a small app (these examples use React, but the principles are framework agnostic):
function App() {
  const [msg, setMsg] = React.useState("click the button")
  const handler = () =>
    fetch("", {
      headers: { accept: "Accept: application/json" }
    })
      .then((x) => x.json())
      .then(({ msg }) => setMsg(msg))
  return (
    <div className="App">
      <header className="App-header">
        <p>message: {msg}</p>
        <button onClick={handler}> click meeee</button>
      </header>
    </div>
  )
}
You excitedly run yarn start to test your new app locally, and…
Access to fetch at '' from origin '' has been blocked by CORS policy: Request header field accept is not allowed by Access-Control-Allow-Headers in preflight response.
Oh no, you think, I’ve seen this before but how do I fix this again?
You google around and find this browser plugin and this serverside fix and this way too long MDN article, and it's all much too much for just a simple API request. First of all, you don't have control of the API, so adding a CORS header is out of the question. Second of all, this problem is happening because you're hitting an https:// API from, which doesn't have SSL, so the problem could go away once you deploy onto an https-enabled domain, but that still doesn't solve the local development experience. Last of all, you just wanted to get something up and running and now you're stuck Googling icky security stuff on step 1.
Super frustrating. Now that more and more of the web is HTTPS by default, you’re just going to run into this more and more as you work on clientside apps (one reason server-side frameworks actually don’t even face CORS problems because they are run in trusted environments).
Netlify Dev, the Local Proxy Solution
If you’ve looked around long enough you’ll notice that CORS is a browser protection that completely doesn’t apply if you just made the request from a server you control. In other words, spin up a proxy server and all your problems go away. The only problem has been spinning up this proxy has been too hard and too costly. And that’s just for local dev; the deployed experience is totally different and just adds more complexity to the setup.
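To make that concrete, a bare-bones proxy can be surprisingly small. This is just an illustrative Python sketch, not part of Netlify or this tutorial's stack, and UPSTREAM is a placeholder for whatever API you want to wrap; it shows why a server-side hop sidesteps the browser's CORS checks — the proxy fetches the upstream response itself and re-serves it with the headers the browser wants:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

UPSTREAM = "https://example.com"  # hypothetical upstream API base URL

def cors_headers(origin="*"):
    # The headers that satisfy the browser's preflight checks.
    return {
        "Access-Control-Allow-Origin": origin,
        "Access-Control-Allow-Headers": "Accept, Content-Type",
    }

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Forward the request server-to-server (no CORS applies here),
        # then return the body with permissive CORS headers attached.
        body = urlopen(UPSTREAM + self.path).read()
        self.send_response(200)
        for key, value in cors_headers().items():
            self.send_header(key, value)
        self.end_headers()
        self.wfile.write(body)

def serve(port=8888):
    # Blocking call; run this to start the local proxy.
    HTTPServer(("localhost", port), ProxyHandler).serve_forever()
```

The browser only ever talks to your proxy, which happily sends `Access-Control-Allow-Origin` on every response.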
For the past few months I’ve been working on Netlify Dev, which aims to be a great proxy server for exactly this kind of usecase. It comes embedded in the Netlify CLI, which you can download:
$ npm i -g netlify-cli
Now in your project, if it is a popular project we support like create-react-app, Next.js, Gatsby, Vue-CLI, Nuxt and so on, you should be able to run:
# provide a build command and publish folder
# specific to your project,
# create a Netlify site instance, and
# initialize netlify.toml config file if repo connected to git remote
$ netlify init # or `ntl init`

# start local proxy server
$ netlify dev # or `ntl dev`
And you should see the proxy server run on localhost:8888 if that port is available.
If your project isn’t supported, you can write and contribute your own config, but it should be a zero config experience for the vast majority of people.
As of right now it is a local proxy server that just blindly proxies your project, nothing too impressive. Time to spin up a serverless function!
Creating a Netlify Function
At this point you should have a netlify.toml file with a functions field. You can handwrite your own if you wish, but it should look like this:

[build]
command = "yarn run build"
functions = "functions"
publish = "dist"
You can configure each one of these to your needs, just check the Netlify docs. But in any case, now when you run:
$ netlify functions:create
the CLI shows you the list of function templates. Pick node-fetch and it will scaffold a new serverless function for you in /functions/node-fetch by default, including installing any required dependencies. Have a look at the generated files, but the most important one will be functions/node-fetch/node-fetch.js. By convention the folder name must match the file name for the function entry point to be recognized.
Great, so we now have a serverless Node.js function making our call to the API. The only remaining thing to do is to modify our frontend to ping our function proxy instead of directly hitting the API:
const handler = () =>
  fetch("/.netlify/functions/node-fetch", {
    headers: { accept: "Accept: application/json" }
  })
    .then((x) => x.json())
    .then(({ msg }) => setMsg(msg))
Getting rid of CORS in local development
Now when we run the proxy server again:
$ netlify dev # or ntl dev
And head to the proxy port (usually), and click the button…
message: Why can't a bicycle stand on its own? It's two-tired.
Funny! and we can laugh now that we have got rid of our CORS issues.
Deploying and Getting rid of CORS in production
When deploying, we lose the local proxy, but gain the warm embrace of the production environment, which, by design, is going to work the exact same way.
$ npm run build ## in case you need it
$ netlify deploy --prod ## this is the manual deploy process
And head to the deployed site (run netlify open:site).
Note: if you are deploying your site via continuous deployment from GitHub, GitLab or BitBucket, you will want to modify your netlify.toml build command to install function dependencies:

[build]
command = "yarn build && cd functions/node-fetch && yarn"
functions = "functions"
publish = "dist"
Now you know how to spin up a function to proxy literally any API, together with using confidential API keys (either hardcoded, although don’t do this if your project is Open Source, or as environment variables) that you don’t want to expose to your end user, in minutes. This helps to mitigate any production CORS issues as well, although those are more rare.
If you have simple endpoints and files to proxy, you may also choose to use Netlify Redirect Rewrites to accomplish what we just did in one line, however it is of course less customizable.
That’s all there is to solving your CORS problems once and for all! Note that Netlify Dev is still in beta, if you ran into any hiccups or have questions, please file an issue!
If you are new to the Java programming language then this tutorial will be helpful in understanding the program.
In this section we will explain, with the help of a program, how to find the area and perimeter of a rectangle. Finding the area and perimeter of a rectangle is a simple program in Java, and if you are new to the Java programming language this example will help you understand the concepts.

Code Description: First of all, create a class Program. Then define two integer variables, length and breadth, and initialize them. As the program is based on input provided by the user, always try to enter correct input, otherwise it may cause an exception in the program. Next create a BufferedReader object; the InputStreamReader reads the input and stores it in a buffer. The input provided is of String type, so it is converted to integer type using the Integer.parseInt() method. Then we create an object of the class; through the object, with the dot operator, we invoke the method area(), which calculates the area and perimeter and prints them to the console. (Alternatively, making the method static would let us call it directly by class name.) Compile and run the program and provide correct input so that you get the expected result.
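The arithmetic itself is just perimeter = 2 × (length + breadth) and area = length × breadth. As a quick sanity check of those formulas, here is the same calculation expressed in Python (illustrative only; the article's program below is in Java):

```python
def rectangle_stats(length, breadth):
    # Same arithmetic as the Java area() method below.
    perimeter = 2 * (length + breadth)
    area = length * breadth
    return perimeter, area

print(rectangle_stats(5, 3))  # (16, 15)
```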
The program using a function to find the area and perimeter of a rectangle is as follows:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class Program {
    static int length = 0;
    static int breadth = 0;

    public static void main(String args[]) {
        try {
            BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
            System.out.println(" ");
            System.out.println("Please enter length of a rectangle");
            length = Integer.parseInt(br.readLine());
            System.out.println("Please enter breadth of a rectangle");
            breadth = Integer.parseInt(br.readLine());
            Program ob = new Program();
            ob.area();
        } catch (NumberFormatException ob) {
            System.out.println("Invalid input" + ob);
            System.exit(0);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public void area() {
        int perimeter = 2 * (length + breadth);
        int area = length * breadth;
        System.out.println("Perimeter of a rectangle is = " + perimeter);
        System.out.println("Area of a rectangle is = " + area);
    }
}
Output from the program :
Posted on: June 25, 2013
http://roseindia.net/java/beginners/java-RecArea.shtml
algorhythmic (Member)
Content Count: 337
Joined
Last visited
Community Reputation: 176 Neutral
About algorhythmic
- Rank: Member
- I dunno about the other games (I assume not) but FEAR certainly has VSync disabled..
- yup bridge installed
- There isn't one on the A8N32-SLI Deluxe
SLI Nightmare
algorhythmic posted a topic in GDNet Lounge:
Hi, I'm having real problems with my SLI rig. Basically the thing is behaving like a single gpu system. First some specs:
AMD Athlon(tm) 64 X2 Dual Core Processor 4800+
ASUSTeK A8N32-SLI-Deluxe Motherboard
Corsair TWIN2X1024-3200C2 - 2Gb (2x 1024Mb)
Enermax Liberty 600W PSU
Dual Leadtek WinFast PX7800 GTX TDH (256Mb) with Arctic Cooling Silencers (5rev3) installed
Western Digital SATA Raptor
Creative SoundBlaster X-Fi Elite Pro
PLEXTOR DVDR PX-716A DVD Burner
Antec P180 Case (front-middle hdd fan intaking at low, rear-cpu-level fan out at medium)
Needless to say that all the drivers are up to date and that Multi-GPU rendering is enabled. I have Arctic Cooling Silencers (5R3) installed on both graphic cards and these idle at 42deg (GPU1) and 37deg (GPU2), the difference in temperature no doubt due to their position within the case. The CPU and Motherboard are both between 35deg and 40deg.
When running F.E.A.R., HalfLife2 and Quake4 at full detail and resolution, I notice that GPU1 can go up to around 60deg but GPU2 remains at 38deg. Also my overall 3DMark06 score is 4323, which according to ASUS support is way too low (should be around 7000, which I confirmed checking similar configs). Again when running 3DMark06 I notice that GPU1 heats up but GPU2 doesn't. At first I thought maybe something was wrong with the second graphics card but upon swapping the two on the suggestion of Leadtek's technical support, GPU1 (formerly GPU2) climbed to around 62deg and GPU2 (formerly GPU1) remained around 36deg.
I tried to find some kind of demo which would prove that SLI isn't active (rather than simply basing myself on GPU temperature readouts) but couldn't find anything. Interestingly the only game for which I can see the load balancing readout (World of Warcraft) suggests that the rendering load is equally distributed, but this again goes against my GPU temperature readouts.
So my question to you is what the heck is going on? Is something wrong with my motherboard? I'd rather exhaust every possibility before taking this thing back to the company I bought it from (in separate parts, which doesn't make returning it easier it would seem), so if anyone has any bright ideas (configuration, further testing, etc.) I'm all ears. Many thanks in advance
- It looks like I'm going for the Antec P180. However, I heard (and read, but it's in French so I won't post it) that the Enermax Noisetakers have slight flaws. I was thinking of going for the Enermax Liberty 620W as long as the cables reach (the P180 has an unorthodox PSU configuration)
- Haven't added it up (plus there have been some changes - better mother board). I'll post it when it's final.
- gah I'm getting even more confused now :(
- Well I don't intend to overclock at all and I believe the a8n-sli premium has a heatpipe system. So I assume I just need a ventilator for the 64x2 and a few for the box. I was thinking of a coolermaster cavalier t03 but I'm not sure how many fans that's going to need..
- Exactly what I keep thinking.
To water cool or not to water cool
algorhythmic posted a topic in GDNet Lounge:
Hi, I'm finally getting a new PC (specs below) and this time I'll be building it myself (no more DELLs for me!). I've got pretty much everything covered except for the actual box and the issue of cooling. Some people recommended getting a water cooling system but I'm not sure. Any advice/recommendations would be most appreciated (including normal heatsink/fan setup advice cos I'm pretty clueless as to what I need to keep this setup reasonably cool).
Specs:
AMD Athlon 64 X2 4800+
Asustek A8N-SLI Premium – Socket 939
Corsair DDR-TWINX (2x1024) PC3200 (CL2) [Ref : TWINX2048-3200C2]
Seagate Barracuda 7200.9 300Go 16Mo Cache [Ref : ST3300622AS]
Asus GeForce 7800GTX 256Mo DDR3 VIVO [Ref : ASUS_EN7800GTX/2DHTV] (possibly x 2)
Creative Sound Blaster X-Fi Elite Pro
NEC ND4550 DVD burner (DVD+R/RW + DL)
PSU Enermax Noisetaker 600W [Ref : EG701AX-VE(w)]
Many thanks
Managed Directx is too slow could the problem be in the settings ... (Corrected)
algorhythmic replied to jad_salloum's topic in Graphics and GPU Programming:
Tom Miller's Game Loop
An implementation of the above
Managed Directx is too slow could the problem be in the settings ... (Corrected)
algorhythmic replied to jad_salloum's topic in Graphics and GPU Programming:
I'm using the Miller approach for my windowed dx app but it uses 95% cpu even when minimized and with the entire game loop commented out. What to do??? Cheers
- Of course. How stupid of me not to see it. I thought it would be some stupid omission. Thanks for the answers, in particular Enigma's.
- From what I understood a float/double can implicitly convert to complex<double>.
Guru of the Week
algorhythmic posted a topic in General and Gameplay Programming:
Hi, my C++ is a bit rusty so I was browsing through GOW and came across this.

#include <iostream>
#include <complex>
using namespace std;

class Base {
public:
    virtual void f( int ) { cout << "Base::f(int)" << endl; }
    virtual void f( double ) { cout << "Base::f(double)" << endl; }
    virtual ~Base() {}
};

class Derived: public Base {
public:
    void f( complex<double> ) { cout << "Derived::f(complex)" << endl; }
};

int main() {
    Base* pb = new Derived;
    pb->f(1.0);
    delete pb;
}

(from)

Apparently this results in Base::f(double) being called but I just don't understand. I don't see why pb->f(1.0) calls the base version of f() despite the base pointer pointing to a derived object. The other f()s are hidden, aren't they? I don't quite see what he means by "overload resolution is done on the static type (here Base), not the dynamic type (here Derived)". Why is this the case? Any help would be most appreciated. Thanks
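For illustration (not from the thread, and class names here are made up): the compiler picks among overloads using only the static type of the expression; virtual dispatch then decides whose body runs. Name lookup also stops at the most derived class that declares any f, which is why Derived's complex overload hides the base ones; a using-declaration un-hides them:

```cpp
#include <complex>
#include <string>

struct B {
    virtual std::string f(int)    { return "B::f(int)"; }
    virtual std::string f(double) { return "B::f(double)"; }
    virtual ~B() {}
};

struct D : B {
    using B::f;  // bring B's f overloads into D's scope (un-hide them)
    std::string f(std::complex<double>) { return "D::f(complex)"; }
};

// Through a B*, only B's overloads are candidates: f(double) wins,
// and since D doesn't override it, B::f(double) runs.
std::string via_base()    { D d; B* pb = &d; return pb->f(1.0); }

// Through a D, thanks to `using B::f;`, f(double) is visible and is a
// better match for 1.0 than the complex overload.
std::string via_derived() { D d; return d.f(1.0); }
```

Without the using-declaration, `d.f(1.0)` would convert 1.0 to complex<double> and call D's overload, because B's overloads would be hidden.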
https://www.gamedev.net/profile/65300-algorhythmic/
> From: Aaron Bentley <address@hidden>
> [complaints about the builder algorithm I sketched for
> summary deltas.]

Ok, you're right, I oversimplified. The builder should use the framework I hope you've built for backbuilding. Specifically, it should use multiple calls to ancestor-tracing recursive functions to trace out a few different possible build paths; then, from among those that actually work, choose the shortest one that is consistent with the demands of the user's revision libraries.

You named two tracing algorithms:

1. Backwards from the closest same-version descendent we have.
2. Backwards to the closest ancestor we have.

And of course there's also cacherevs to terminate those traces.

The general way to write the trace-gathering algorithm should be something like this:

    {
      forwards-starter = the revision i want to build;
      backwards-starter = the nearest namespace-descendent i have of that;
      summary-starter = the corresponding summary revision;

      paths-in-progress = the set of traces {forwards-starter, backwards-starter, summary-starter};
      winning-paths = the empty set;

      while (paths-in-progress is not the empty set)
        {
          remaining-paths-in-progress = the empty set;
          t = pick one trace from paths-in-progress;

          for each way of continuing t, bringing it closer to the revision we want
            {
              make t' which is that way of extending t
              if t' reaches the goal, add it to winning-paths
              otherwise, add it to remaining-paths-in-progress
            }

          paths-in-progress = remaining-paths-in-progress;
        }

      use the "best" of winning-paths;
    }

Your code should be able to generalize to that easily and _that_, unlike my earlier mistake, will provide what I promised greek0.

> (Also, note that building backwards using a summary delta has the
> interesting property that you must *add* a patchlog so that
> reverse-applying the changeset works. Good thing the patchlogs can be
> retrived separately.)

Yup.

> > (Another reason is that it's deterministic and
> > predictable, and hence a reliable tool.)
> Here's an alternative deterministic approach:
> Take both the build paths (from step 4. above). For each of them:
> 1. find all the deltas whose START-REVISION and END-REVISION are in the
>    path.
> 2. select the delta with the largest revision ordinal.
> 3. delete all revisions in the path from START-REVISION to END-REVISION,
>    and replace with the delta
> 4. repeat until no deltas have both START-REVISION and END-REVISION in
>    the path
> This will always improve performance. It's deterministic and simple, and
> scales O(n) with the number of deltas. It fits nicely with the
> backbuilder's design.
> I haven't sorted out the best way to use external deltas, but I think a
> similar approach could apply (probably as a first step). The tricky bit
> is determining when to use an external delta whose START-REVISION isn't
> in a revlib.

I like the way mine generalizes better. It's got that step of "for each way of continuing [the path] t", whereas your approach would handle _only_ summary deltas.

Plus I still don't agree that adding a new free-form subdir for whatever deltas people care to add is really all that useful an idea. Setting parameters on my style of summary deltas can always get you close to the best you could do with arbitrary deltas, and with less fuss and fewer unanswered questions. Arbitrary deltas are flexibility that nobody really needs.

-t
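As an illustration only (this is not arch code), the trace-gathering loop sketched above can be written in Python; here `extend` stands in for "each way of continuing a trace", and every in-progress path is advanced each round:

```python
def find_build_path(goal, starters, extend):
    """Return a shortest path from any starter to goal, or None.

    `extend(rev)` yields the revisions reachable from `rev` by one
    delta/patch step (the "ways of continuing" a trace).
    """
    winners = [[s] for s in starters if s == goal]
    in_progress = [[s] for s in starters if s != goal]
    while in_progress:
        remaining = []
        for path in in_progress:
            for nxt in extend(path[-1]):
                if nxt in path:          # guard against cycles
                    continue
                extended = path + [nxt]
                if nxt == goal:
                    winners.append(extended)
                else:
                    remaining.append(extended)
        in_progress = remaining
    return min(winners, key=len) if winners else None
```

Picking the "best" winner here just means shortest; a real builder would also weigh the user's revision-library constraints.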
http://lists.gnu.org/archive/html/gnu-arch-users/2004-06/msg00409.html
For an application I am developing, I needed to get all functions and variables declared in JavaScript code. Because the application I am developing is in Java, I started looking for a readily available JavaScript parser written in Java. I found Mozilla Rhino to be a good fit for my requirements and decided to use it.
Parsing with Rhino is quite simple and well documented.
private AstRoot parse(String src, int startLineNum) throws IOException {
    CompilerEnvirons env = new CompilerEnvirons();
    env.setRecoverFromErrors(true);
    env.setGenerateDebugInfo(true);
    env.setRecordingComments(true);
    StringReader strReader = new StringReader(src);
    IRFactory factory = new IRFactory(env);
    return factory.parse(strReader, null, startLineNum);
}
The AstNode class has methods like getSymbols and getSymbolTable, which I presumed would give me all variables and functions declared in the code. However, these methods returned only functions, not variables. So I had to traverse the tree and collect all symbols myself. I thought I would use the visit method of AstRoot to process all nodes.
root.visit(new NodeVisitor() {
    @Override
    public boolean visit(AstNode node) {
        int nodeType = node.getType();
        //TODO: process the node based on node type
        return true; //process children
    }
});
However, the above code threw a ClassCastException in the visit method of the ScriptNode class.
java.lang.ClassCastException: org.mozilla.javascript.Node cannot be cast to org.mozilla.javascript.ast.AstNode
    at org.mozilla.javascript.ast.ScriptNode.visit(ScriptNode.java:312)
And the test code was a simple JS script –
function test() {
    var k = 10;
}
i = 100;
So finally I decided to traverse the AST from the root and process each node. That is when I realized Rhino creates the AST differently from ASTs I had worked with earlier. The ASTs I had worked with had nodes whose children were stored as an array of nodes. If you start with the root and visit each child, you are sure to traverse the entire tree; hence the code above.
However Rhino does not create nodes with array of child nodes. Each node has methods to access the first node and then you can traverse remaining child nodes by accessing ‘next’ member. So instead of an array, it creates linked list of child nodes, which is not such a big problem. However a few things surprised me when I traversed the tree.
I started with the root node, and for each node I followed these rules –
- Get first child and traverse the linked list by calling next till the next is null
- Call next on the parent node and repeat the loop.
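To make that order concrete, here is a tiny self-contained mock (these are not Rhino classes) of the first-child/next-sibling layout, walked with exactly the two rules above:

```java
// Minimal stand-in for Rhino's node layout: each node knows only its
// first child and its next sibling.
class SibNode {
    final String name;
    SibNode firstChild, next;
    SibNode(String name) { this.name = name; }
}

class SibWalk {
    // Rule 1: descend into the first child and follow the `next` links;
    // Rule 2: after a node's children, move on to its next sibling.
    static void walk(SibNode n, StringBuilder out) {
        while (n != null) {
            out.append(n.name).append(' ');
            walk(n.firstChild, out);
            n = n.next;
        }
    }

    static String order(SibNode root) {
        StringBuilder sb = new StringBuilder();
        walk(root, sb);
        return sb.toString().trim();
    }
}
```

With Rhino's real AST the same walk applies, but as noted below it will not reach every node (function bodies in particular).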
Here is what I found –
- When you traverse as above, you will only visit the function name, not the function body
- If you want to visit the function body, you will have to call the getFunctions (or getFunctionNode) method of ScriptNode. Both AstRoot and FunctionNode are of type ScriptNode.
- If you traverse the node as I described above (children first and then next node), you are not guaranteed to visit all AST nodes.
For the simple snippet of JS code above, the AST created is as follows
As you can see, only the function Name node is in the tree. AstRoot has two children: Name (Function) and Node (Expr_Result). The Expr_Result node has one child, Node (setName). SetName has two children: Name (BindName) and NumberLiteral. This tree represents the entire code except the body of the function test. As mentioned earlier, you can get the body of the function by calling getFunctionNode on AstRoot.
Now, if you inspect the Name (BindName) node, you will see that its parent is an Assignment node. Assignment has Name (BindName) as its left node and NumberLiteral as its right node. However, Assignment is not part of the tree if you traverse it as described above.
So you also need to consider the type (type constants are declared in the org.mozilla.javascript.Token class) and parent of each node when processing the AST created by Rhino.
-Ram Kulkarni
15 Replies to “Understanding AST created by Mozilla Rhino parser”
Hello. Nice article. I am currently struggling with the Rhino AST. You state that it is quite well documented, but I have not found much in the way of documentation apart from the JavaDocs, which have some gaps. Do you have a link to some useful documentation? Thanks again for the helpful article.
When I said that parsing with Rhino was well documented, I was talking about documentation for using parser APIs, not how AST was structured. However I can’t seem to find where I saw the documentation about using Parser APIs now. I am sure I saw it before. I might have also looked at test cases under testsrc/org folder, when you download Rhino.
Hi, thanks for the clear example. I was wondering if you know a way to change a function's body. I currently want to wrap a function's body with a "with(some_object){ }"
I haven’t done something like this before, but one possible solution could be that after you parse JS code, get all FunctionNode. You can get the start offset of each function by calling getAbsolutePosition method. The end offset could be startOffset + node.getText().length(). Once you know start and end positions, then insert wrapper code at those positions in the source file.
I am new to the Rhino parser. I need to find out how to fetch the words present in a given script. I have tried doing this but could retrieve only the names of variables and functions. But I need to extract every single word (which could become a bit difficult in an obfuscated script). Here is my approach to finding variable and function names in a script:
import java.io.FileReader;
import java.io.IOException;
import java.io.Reader;
import org.mozilla.javascript.Parser;
import org.mozilla.javascript.ast.AstNode;
import org.mozilla.javascript.ast.FunctionCall;
import org.mozilla.javascript.ast.NodeVisitor;
public class JavascriptParser {
public static void main(String[] args) throws IOException {
class Printer implements NodeVisitor {
public boolean visit(AstNode node) {
/* Here I am using Name instead of FunctionCall because it gives errors for a few functions present in script */
if (node instanceof Name) {
Name name = (Name) node;
System.out.println(name.getIdentifier());
/* this prints names of variables as well as functions present in the script */
}
return true;
}
}
String file = "/dss2.js";
Reader reader = new FileReader(file);
try {
AstNode node = new Parser().parse(reader, file, 1);
node.visit(new Printer());
} finally {
reader.close();
}
}
}
What do you mean by capturing every single word in JS script? With the script you posted, you will be able to capture all functions and variables. Can you give example of what you are not able to capture with your code?
Basically I wanted to find the ratio of keywords to word and probability of shell code being present in JS script (part of implementation of research paper) to check if script is vulnerable. So considering a dummy script like this :
var emptyFun = || {});
while (length1--) {
method = methods[length];
if (!console[method1]) {
console[method] = emptyFun;
}
}
In this case, from the previous code I will be able to extract [ emptyfun, methods, length, length1, console, window, method ], but I will not be able to extract words like [ function, assert, clear, etc ]. Even though I have the script, I can't separate words based on where a space character occurs (that will not work in the case of an obfuscated script).
Just processing Name is not going to give you all words/keywords. You will have to handle each type of node separately. For example –
You will need to handle more types depending on what keywords you want to process. You can find more type constants in Token.java. If you have downloaded Rhino, you will find this class in rhino1_7R4/src/org/mozilla/javascript/Token.java
Thanks a lot. It worked. 🙂
I want to use Rhino to get the AST of a piece of code and make some transformations on this tree. Then I want to generate the corresponding JavaScript code from the transformed AST; is there any API provided by Rhino?
Not sure. I was only interested in parsing JS code, not regenerating it from AST – so did not look for such API. But if I find any such API, I will post it here
Thank you sir… your printnode() method in the above comment is quite useful for me.
But I have a problem: while running the code, if my JavaScript has a statement like var a=10;
then the switch goes to the default case and prints the final int value, like 145. I want to print the variable name.
The other problem is that if my script contains function code, then various parse exceptions occur.
So please guide me.
If the statement is like var a=eval(c+d);
then please give me a hint on how to reach eval() while parsing the script.
The code snippet I posted in one of the comments is just an example of how to process nodes; I did not handle all the cases. To evaluate function calls like eval, you can either check the parent of the node in the Token.NAME case (in the case of a function call, the parent would be of type FunctionCall) or handle a new case, Token.CALL. Use a debugger to see the details of each node. Also refer to the source of Mozilla's Token class to learn what the node type constants mean.
Hello Ram,
Your article is very helpful, and thank you for writing it.
Where can I get all the types of AST nodes? I have searched but had no luck.
Could you please help me check whether the regular expression below is present in my JS file after I have parsed it into an AST?
JS:
steal(function($) {
  init: function(){
    this.validate("irefNo", function () {
      var test = "";
      test = this.iReferenceNumber.match(/[$+,:=?@#|!`^()~[]><\%*_=+]/g);
      return "Invalid characters' " + test + " 'found in field";
    }
  }
}
http://ramkulkarni.com/blog/understanding-ast-created-by-mozilla-rhino-parser/
I see that, according to the Node.js website, Node.js now supports 93% of ES6 language features. Can I use import?
No, you can't use import at the moment, and don't expect it any time soon.
ES6 modules fall into the missing 7%. It's still not clear how, or even if, Node will support them.
The V8 feature/bug for modules is also open. The module syntax/parser is defined in ECMAScript 2015; the loader isn't, which is what Google is using for its implementation.
There is a compatibility table detailing Node.js support for ES6 features, both protected and unprotected by --harmony; modules aren't in the table yet.
https://codedump.io/share/EXZwSLmCuCie/1/what-es6-can-node-js-6-use-now
NAME
setjmp, sigsetjmp - save stack context for nonlocal goto
SYNOPSIS
#include <setjmp.h>

int setjmp(jmp_buf env);
int sigsetjmp(sigjmp_buf env, int savesigs);
DESCRIPTION
setjmp() and longjmp(3) are useful for dealing with errors and interrupts encountered in a low-level subroutine of a program. setjmp() saves the stack context/environment in env for later use by longjmp(3). The stack context will be invalidated if the function which called setjmp() returns. sigsetjmp() is similar to setjmp(). If, and only if, savesigs is nonzero, the process's current signal mask is saved in env and will be restored if a siglongjmp(3) is later performed with this env.
RETURN VALUE
setjmp() and sigsetjmp() return 0 if returning directly, and nonzero when returning from longjmp(3) or siglongjmp(3) using the saved context.
CONFORMING TO
C89, C99, and POSIX.1-2001 specify setjmp(). POSIX.1-2001 specifies sigsetjmp().
NOTES
POSIX does not specify whether setjmp() will save the signal mask. In System V it will not. In 4.3BSD it will, and there is a function _setjmp that will not. By default, Linux/glibc follows the System V behavior, but the BSD behavior is provided if the _BSD_SOURCE feature test macro is defined and none of _POSIX_SOURCE, _POSIX_C_SOURCE, _XOPEN_SOURCE, _XOPEN_SOURCE_EXTENDED, _GNU_SOURCE, or _SVID_SOURCE is defined. If you want to portably save and restore signal masks, use sigsetjmp() and siglongjmp(3). setjmp() and sigsetjmp() make programs hard to understand and maintain. If possible an alternative should be used.
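EXAMPLE
A hedged sketch (not part of the original page) of the usual pattern: a low-level routine reports an error by longjmp()ing back to a context saved with setjmp().

```c
#include <setjmp.h>

static jmp_buf env;

/* Low-level routine: on error, jump back instead of returning. */
static void low_level(int fail)
{
    if (fail)
        longjmp(env, 1);        /* setjmp() will appear to return 1 */
}

/* Returns 0 on success, -1 if low_level() reported an error. */
static int run(int fail)
{
    if (setjmp(env) == 0) {     /* direct return: context saved */
        low_level(fail);
        return 0;
    }
    return -1;                  /* got here via longjmp() */
}
```

Note that the function which called setjmp() has not returned when longjmp() fires, so the saved stack context is still valid.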
SEE ALSO
longjmp(3), siglongjmp(3)
COLOPHON
This page is part of release 3.35 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at. 2009-06-26 SETJMP(3)
http://manpages.ubuntu.com/manpages/precise/man3/setjmp.3.html
I tried to create a Morse decoder. It replaces Latin letters with their Morse codes. There is one space between letters and three spaces between words.
def decodeMorse(morseCode)
morse_dict = {
"a" => ".-","b" => "-...","c" => "-.-.","d" => "-..","e" => ".","f" => "..-.","g" => "--.","h" => "....","i" => "..","j" => ".---","k" => "-.-","l" => ".-..","m" => "--","n" => "-.","o" => "---","p" => ".--.","q" => "--.-","r" => ".-.","s" => "...","t" => "-","u" => "..-","v" => "...-","w" => ".--","x" => "-..-","y" => "-.--","z" => "--.."," " => " ","1" => ".----","2" => "..---","3" => "...--","4" => "....-","5" => ".....","6" => "-....","7" => "--...","8" => "---..","9" => "----.","0" => "-----"
}
wordList = morseCode.split(" ")
wordList.each do |word|
word = word.downcase
word.split("").each do |letter|
a = ' ' + morse_dict[letter].to_s + ' '
word.gsub! letter a
end
end
sentence = wordList.join(' ')
return sentence.lstrip
end
puts decodeMorse("Example from description")
NoMethodError: undefined method `letter' for main:Object
from codewars.rb:12:in `block (2 levels) in decodeMorse'
from codewars.rb:10:in `each'
from codewars.rb:10:in `block in decodeMorse'
from codewars.rb:8:in `each'
from codewars.rb:8:in `decodeMorse'
The problem is here:
word.gsub! letter a
It is being interpreted from right to left: since there is no comma between letter and a, it is treated as a letter(a) function call. You want both letter and a to be passed as parameters to the call, so separate them with a comma:
# ⇓ HERE
word.gsub! letter, a
By the way, gsub can take a hash as a second parameter to make substitutions:
word.gsub(/./, morse_dict)
would change all letters to their Morse representations. To deal with spaces, one might use the block form of gsub:
word.gsub(/./) { |l| " #{morse_dict[l]} " }.squeeze(' ')
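Putting the pieces together, a corrected sketch (note the posted function actually encodes text to Morse despite its name; the dictionary is trimmed here for brevity, the full one from the question works the same way):

```ruby
# Trimmed dictionary — extend with the full mapping from the question.
MORSE = { "s" => "...", "o" => "---", "e" => ".", "t" => "-" }

def encode_morse(text)
  text.downcase.split(" ").map { |word|
    # Replace every character with " <code> ", then collapse the
    # doubled spaces between letters and trim the edges.
    word.gsub(/./) { |l| " #{MORSE[l]} " }.squeeze(" ").strip
  }.join("   ")  # three spaces between words
end
```

This avoids mutating the word while iterating over it, which the original gsub!-in-a-loop approach did.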
https://codedump.io/share/0XMfHJIB19Mo/1/ruby-morse-decoder
I'm trying to solidify my understanding of some vector concepts.
1) iterators are "smart access" in that they can keep track of the array position and size.
2) the access element operator [] is the "dumb" access to the vector.
Also, I was experimenting with ways to dump the contents of a vector into an array of the same size.
Here's how i got it to happen:
#include <iostream>
#include <vector>
using namespace std;

vector<int> vint;
int input = 1;

int main()
{
    cout << "Enter an integer (Enter 0 to exit) ";
    while (input)
    {
        cin >> input;
        vint.push_back(input);
    }
    size_t n = vint.size();
    int arr[n];
    cout << "the vector content is: ";
    for (int j = 0; j < n; j++)
    {
        arr[j] = vint[j];
        cout << vint[j] << " ";
    }
    cout << endl;
    cout << "The vector size is: " << vint.size() << endl;
    cout << "The content of the array is: " << endl;
    for (int j = 0; j < n; j++)
        cout << arr[j] << " ";
    cout << endl;
    return 0;
}
I tried to do direct assignment:
arr=vint // why won't this work? essentially they're both arrays- or so i thought.
Then I got busted trying to do this:
for (it = iterator vint.begin; it < iterator vint.end() ; it++)
arr[it] = *it; // OK, I guess the iterator is a different animal than an integer? Not to mention the possible scoping problem?
It didn't work if I used an actual integer either like this:
arr[x] = *it // where int x is part of a nested loop. I dont understand why this didn't work since I CAN do:
cout << *it // this will provide the dereferenced pointer data
What's the story here?
I'm sure this all seems twisted to folks who are comfortable with the nuances, but I would appreciate some clarifications.
I learn best by tinkering...
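A sketch of what the loops above were reaching for: `arr = vint` fails because a raw array is not assignable, and `arr[it]` fails because an iterator is not an integer index — but with a random-access iterator you can derive an index by subtraction (the function name here is made up):

```cpp
#include <numeric>
#include <vector>

// Copy a vector into a heap array element by element, using the
// iterator's offset from begin() as the array index, then sum the
// array to verify the copy.
int sum_via_array(const std::vector<int>& v)
{
    int* arr = new int[v.size()];
    for (std::vector<int>::const_iterator it = v.begin(); it != v.end(); ++it)
        arr[it - v.begin()] = *it;  // random-access iterators support subtraction
    int total = std::accumulate(arr, arr + v.size(), 0);
    delete[] arr;
    return total;
}
```

Dereferencing (`*it`) gives the element, which is why `cout << *it` worked in the original attempt; only the use of the iterator itself as an index was ill-formed.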
https://www.daniweb.com/programming/software-development/threads/81240/vector-iterators-vs-operator
If you are a software vendor, you might have faced the problem of fixing crashes (critical errors, exceptions) of your app on a user's machine located on the other side of the globe. For example, assume a user writes you an email describing an error in your software. Of course you are interested in making the user happy, so you start asking him to provide more information, to give a screenshot or an error message. The problem is that the user has no technical knowledge and usually can't provide you with as many details as you need to reproduce the crash. Typical users would just give up using your application if it crashes frequently and start using your competitor's software (sounds sad).

So, what can be done to collect technical information about errors more easily? The answer is distributing a crash reporting library with your software, which would collect all required information about the problem and send it to you (the user just needs to provide his consent by pressing the "Send report" button). The technical information the crash reporting library collects for you includes the following: the crash minidump file containing the call stack at the moment of the crash, helping you see the line of code where the exception happened; and a screenshot of the desktop at the moment of the crash, helping you see what button the user clicked and reproduce the problem.
In this tutorial, I will show how to integrate one such open-source crash reporting library, called CrashRpt, with your application. You may already be wondering whether CrashRpt works with your application. If your application is written in C/C++ using Visual C++ .NET 2003, 2005, 2008, 2010 or Visual C++ Express, and it is WinAPI/ATL/WTL/MFC-based, the answer is yes. CrashRpt supports the Win32 and Win64 processor architectures. It works on Windows 2000, XP, Vista and Windows 7.
For the demonstration, I use an MFC application, because I figured out that using CrashRpt with MFC may cause some confusion. For example, one problem MFC users experience is determining the right place in the code where to initialize the crash reporting library.
This article is aimed at beginners. The source code and binary files I use for demonstration are attached to the article. The code of the CrashRpt library (v.1.3.0) is also attached, but it is recommended that you download the latest version from the external site.
At the end of the tutorial, I give some links for readers who want to become familiar with more advanced topics: using CrashRpt in a multi-threaded application, sending error reports over the Internet to an HTTP server, postprocessing error reports with a command-line tool, and so on.
Note: You may also refer to the Add Crash Reporting to Your Applications with the CrashRpt Library article by Mike Carruth, which is a little obsolete, but contains many interesting details of CrashRpt usage.
First of all, we will create a simple document-view MFC application which always crashes when saving the document through the File->Save menu.
In this tutorial, we will create a very simple MFC application from scratch in Visual Studio 2005. To do this, we open Visual Studio, then open the File menu and select New->Project….
In the appeared New Project dialog, we choose MFC Application from the templates list and enter the SimpleApp name into the Name field and press the OK button. Then, when the MFC Application Wizard – SimpleApp dialog appears, we just click the Finish button to generate project files for the new SimpleApp application.
Now, we will add code that crashes the application. Of course, we could insert such bad code into any place in the application (as happens in real life), but to keep things simple, we will make the app crash when saving the document. To do so, in the SimpleAppDoc.cpp file, modify the CSimpleAppDoc::Serialize() method the following way:
// CSimpleAppDoc serialization
void CSimpleAppDoc::Serialize(CArchive& ar)
{
if (ar.IsStoring())
{
// TODO: add storing code here
int* p = NULL;
*p = 13;
}
else
{
// TODO: add loading code here
}
}
The method above contains a NULL pointer assignment which, when executed, will raise an access violation exception.
Note: If you are interested in knowing what other code may crash your application, you can refer to the Making Your C++ Code Robust article.
Finally, we press F5 to compile and run the created application. The application window should appear (see the figure below).
Figure 1 – SimpleApp MFC Application Window
To see what happens on a crash, in the SimpleApp window we open the File menu and choose Save. We enter a file name and press the Save button. The application will terminate with an error message.
You can find the application we’ve just created attached to this tutorial (SimpleApp.zip archive).
Now, we will install the CrashRpt library. In this demo, we will use the latest version of CrashRpt available at the time of writing this tutorial - v.1.3.0.
First, we should get the CrashRpt library source code. You can download the latest CrashRpt distribution archive from here. The archive uses the 7z format, so we can unpack it with the 7zip tool. We unpack it to some folder, for example C:\CrashRpt.
Note: The CrashRpt v.1.3.0 source code is also attached to this article, but it is recommended that you get the latest version from CrashRpt project website.
Let’s look inside the CrashRpt folder. It contains several subfolders and files.
The bin subfolder contains compiled CrashRpt binaries (CrashRpt1300.dll, CrashSender1300.exe and so on). CrashRpt consists of two core modules: CrashRpt1300.dll and CrashSender1300.exe. CrashRpt1300.dll contains functionality for intercepting exceptions in client software. CrashSender1300.exe contains functionality for compressing and sending error reports to the software's support team. CrashRpt is split into these modules so that the application that has crashed can be closed while the error report continues to be sent by CrashSender1300.exe in the background.
The include and lib subfolders contain header files and import library files. We will need these files later when compiling and linking the SimpleApp application.
The lang_files subfolder contains language INI files named like crashrpt_lang_XX.ini, where XX is a language abbreviation. The INI files contain localized strings for CrashRpt dialogs, so you can localize it to your favourite language.
The top-level folder contains file CrashRpt_vs2010.sln, which is the CrashRpt solution file for Visual Studio 2010. If you use Visual Studio 2010, you can double-click this file and refer to the Compiling CrashRpt section below. But in this tutorial, I will show how to generate CrashRpt solution file for an older version, Visual Studio 2005.
We will use CMake cross-platform make system to generate CrashRpt solution file very easily. If you don’t have CMake, download its installer from here and install CMake on your computer. I use the latest version at the moment, CMake 2.8.7.
Next, we will run the CMake-GUI wizard by opening menu Start and choosing CMake 2.8->CMake (cmake-gui). The CMake dialog should appear. In the dialog, we need to provide the path to folder where we've unpacked the CrashRpt archive (in our case, C:\CrashRpt). Enter this path into the “Where is the source code:” and “Where to build the binaries:” fields as shown in the figure below and press the Configure button.
Figure 2 – Generating CrashRpt Project Files for Visual Studio 2005 with CMake
The generator selection dialog appears where we need to select Visual Studio 8 2005 from the drop-down list and press the Finish button. Then press the Generate button. If everything is OK, you should see the “Generating done” message. Now, go to C:\CrashRpt folder and you should be able to see the CrashRpt.sln file. Double-click the file to open it in Visual Studio.
Compiling CrashRpt yourself is strongly recommended if you want CrashRpt to handle exceptions that may occur in C run-time (CRT) libraries. CrashRpt distribution archive already contains compiled CrashRpt binaries, but it is not recommended to use them with your software, because your software may use different C run-time DLLs, and CrashRpt won't be able to intercept exceptions in your C run-time libraries.
Compiling CrashRpt is very straightforward – you just need to select the Release configuration and press F7. If everything is OK, you should be able to find the CrashRpt binaries in the bin subfolder.
To be able to use CrashRpt API functions, we include CrashRpt.h header file in the beginning of the SimpleApp.cpp file.
#include <CrashRpt.h>
Next, we override the CWinApp::Run() method of the application class:
int CSimpleAppApp::Run()
{
    BOOL bRun;
    BOOL bExit = FALSE;
    while(!bExit)
    {
        bRun = CWinApp::Run();
        bExit = TRUE;
    }
    return bRun;
}
We have overridden the CWinApp::Run() method. The CWinApp::Run() method is called when an MFC application starts, so this is the right place to initialize CrashRpt.
Next, we initialize CrashRpt by calling the crInstall() function and pass it the configuration parameters through the CR_INSTALL_INFO structure. Our Run() method with the inserted CrashRpt API calls is presented below:
int CSimpleAppApp::Run()
{
    // Install crash reporting
    CR_INSTALL_INFO info;
    memset(&info, 0, sizeof(CR_INSTALL_INFO));
    info.cb = sizeof(CR_INSTALL_INFO);
    info.pszAppName = _T("SimpleApp");  // Define application name.
    info.pszAppVersion = _T("1.0.0");   // Define application version.
    // URL for sending error reports over HTTP.
    info.pszUrl = _T("");
    // Install all available exception handlers.
    info.dwFlags |= CR_INST_ALL_POSSIBLE_HANDLERS;
    // Use binary encoding for HTTP uploads (recommended).
    info.dwFlags |= CR_INST_HTTP_BINARY_ENCODING;
    // Provide privacy policy URL
    info.pszPrivacyPolicyURL = _T("");

    int nResult = crInstall(&info);
    if(nResult != 0)
    {
        TCHAR buff[256];
        crGetLastErrorMsg(buff, 256);
        MessageBox(NULL, buff, _T("crInstall error"), MB_OK);
        return 1;
    }

    // Take a screenshot of the app window at the moment of crash
    crAddScreenshot2(CR_AS_MAIN_WINDOW|CR_AS_USE_JPEG_FORMAT, 95);

    BOOL bRun;
    BOOL bExit = FALSE;
    while(!bExit)
    {
        bRun = CWinApp::Run();
        bExit = TRUE;
    }

    // Uninstall crash reporting
    crUninstall();

    return bRun;
}
In the code above, we install all available exception handlers into the application. We specify the application name and version, because this info is needed to identify the application that sends the report. We provide a URL for transferring the error report to an HTTP server using binary transfer encoding. We also provide a privacy policy URL.
We also call the crAddScreenshot2() function. We tell it to add a screenshot of the application window to the error report. The screenshot image will use compressed JPEG format with 95% quality to reduce image size.
Finally, before exiting the Run() method, we uninstall the exception handlers by calling the crUninstall() function.
Ok, now we need to add CrashRpt1300.lib to the list of input libraries for the project. In the Solution Explorer window, right-click the project node and choose the Properties item from the context menu. Then open Configuration Properties->Linker->Input->Additional Dependencies and add CrashRpt1300.lib to the list of libraries.
Finally, we copy the following files to the folder where the SimpleApp.exe file is located:
CrashRpt1300.dll
CrashSender1300.exe
dbghelp.dll
crashrpt_lang.ini
These files are required for CrashRpt to work properly. The files CrashRpt1300.dll and CrashSender1300.exe are core CrashRpt modules. The dbghelp.dll file is the Microsoft Debug Help Library, CrashRpt depends on this module. The crashrpt_lang.ini file contains localized strings for CrashRpt dialogs, so you can localize it to your favourite language.
When the files have been copied, run the SimpleApp.exe file. In the appeared SimpleApp window, open menu File and choose Save from the menu. Enter some file name and press the Save button. When the access violation happens, you should see a nice-looking CrashRpt Error Report window as shown in the figure below.
Figure 3 – Error Report Window
Let's review what is displayed on the Error Report window.
The Privacy Policy link allows the user to see the privacy policy we follow when collecting their data. The privacy policy typically states that we use the data to improve the software and that we do not sell or otherwise transfer the data to third parties.
We see that the report contains 165 KB of data. This is rather small and acceptable for transferring over the Internet. The CrashRpt library can transfer the data as a request to an HTTP server or as an E-mail message with attachments. The recipient receives the error report as a compressed ZIP archive containing several files.
To send the generated error report to the HTTP server you have specified, press the Send report button. If you do not want to send the report, press the Close the program button. Note that in this tutorial I do not show how to send the error report over the Internet; you should provide a real recipient's address to send error reports by E-mail and/or configure a server-side script to receive error reports over an HTTP connection.
By clicking the What does this report contain? link, you may review the contents of the generated error report. In the figure below, you can see that the report we’ve generated contains three files: crashdump.dmp (crash minidump), crashrpt.xml (crash description XML) and screenshot0.jpg (desktop screenshot).
Figure 4 – Error Report Details Dialog
The crash minidump file can be used to debug the crash. You can double-click the crashdump.dmp file to open it in Visual Studio and see the line of code where the exception happened.
There is also a nice-looking HEX, text or image preview for each file in the error report. The figure below displays a preview of the JPEG screenshot of our app's window that may be useful to reproduce the crash.
Figure 5 - Desktop Screenshot Preview
Crash reporting allows you to automatically collect technical information about errors in your software and later postprocess it on the developer's side. In this tutorial, I've demonstrated how to integrate the CrashRpt crash reporting library into an MFC application.
This tutorial is very simple, and intended for beginners. In this tutorial, I haven’t covered the advanced topics of installing CrashRpt into a multi-threaded application, adding custom files to the error report, sending error reports over the Internet as an E-mail message or as a request to an HTTP server and analyzing arriving error reports using a command-line tool. An interested reader may find more information on these and other topics in CrashRpt online documentation.
For those, who want to better understand exception handling in Visual C++, I would recommend the article Effective Exception Handling in Visual C++, where I describe in details how CrashRpt catches exceptions in a C++ program. If you are interested to learn how the HEX file preview shown on the figure above works, you can refer to the article FilePreviewCtrl – Preview Files in Text, HEX and Image Format.
January 1st 2012 - Initial release
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
Hi Experts,
There is an SWCV in the Basis Objects list under a particular SWCV. I need to transport an individual object from a namespace of that SWCV, which is under the Basis Objects list of another SWCV.
thanks.
Hi
SE10 --> create transport of copies --> then click on the TR and click include objects icon --> select "freely selected objects" --> radio button "Selected Object"
Now you can select the objects you need
Hi Mohamed Buhari,
Thank You
But I am running these interfaces in PO 7.4.
thanks.
Subject: Re: [boost] [outcome] non-interface-related concerns
From: Niall Douglas (s_sourceforge_at_[hidden])
Date: 2017-05-26 23:20:09
>>> 1. The current GitHub repository consists of sub-repos. Do you (Niall)
>> plan
>>> to have it like this when the library is accepted? Is it relevant?
>>
>> A lot of the opposition to git subrepos appears to stem from lack of
>> awareness of how they and git work. Some are opposed to boost-lite
>> substituting for Boost, but as I have mentioned a fair few times now, I
>> am following precedent for C++ 14 mandatory libraries here which doesn't
>> have a hard dependency on monolithic Boost, and whether the internal
>> platform abstraction layer is in a subrepo or not isn't hugely important
>> for the end user.
>>
>
> Out of curiosity, what precedent is it?
I believe Boost.Hana has no hard dependency on the rest of Boost..
>> I mean, they just #include and go. If that works, does any of the rest
>> of it matter?
>>
>
> As a matter of fact, my questions were motivated by the fact that I
> #included but it didn't work. But apparently, I incorrectly attributed the
> problem to sub-modules.
Mingw is not a regularly tested platform, and hence is not advertised on
the documented compiler support page. Last time I tested it was well
before Christmas..
>>> 2. This macro BOOST_OUTCOME_DISABLE_PREPROCESSED_INTERFACE_FILE, it
>>> controls whether I use this "preprocessed" file versus if I use... what?
>>> Normal header files? What is gained by this preprocessed file? Not having
>>> to include more than one header, or something else? Can I get different
>>> results when compiling with and without this macro?
>>
>> The partially preprocessed edition is auto generated per commit. It
>> saves the compiler from doing all the recursive #includes per compiland.
>> It reduces compile times for end users a lot, I measured about half a
>> second per compiland. Say for a project of 200 compilands, that's 10
>> seconds.
>>
>
> Are you saying the gain stems form includingg 1 versus N files?
That's part of it. But avoiding all the recursive macro expansion
certainly helps too.
>> Several other Boost libraries also pre-preprocess header files into the
>> source tree and when you #include that library, you are including the
>> pre-preprocessed edition, not the original source code. It's pretty
>> conventional for libraries doing a lot preprocessor work to use this
>> technique to reduce the compile time impact on end users.
>>
>
> Could you tell me which libraries?
Is it Boost.Geometry and Boost.Serialisation or something? Back in the
day where we emulated variadic templates using preprocessor recursive
includes, that's what I'm thinking of.
> Also, if the contents are identical, what do I need the non-preprocessed
> version for?
Well, it's the source of the source as it were.
> Ok, let me change your explanation a bit. Going back to your exable, I have
> two version namespaces:
>
> ```
> namespace boost { namespace outcome { namespace v1 { ... } } }
> namespace boost { namespace outcome { inline namespace v2 { ... } } }
> ```
> Now, if I want to use outcome from v1, I can type:
>
> ```
> namespace outcome = boost::outcome::v1;
> ```
>
> And I didn't use any macro. I guess my question is, why do you need a macro
> for it?
Whilst Outcome is in an unstable state, the actual namespace used is:
namespace boost { namespace outcome { inline namespace v1_GITSHA { ... } } }
So there is no boost::outcome::v1 namespace yet, but there will be one
day. Hence the macro.
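To make the scheme concrete, here is a minimal sketch (the namespace names are invented placeholders, not Outcome's real ones) of how an inline namespace exposes the current version to unqualified lookup while older versions stay reachable through an alias:

```cpp
#include <cassert>

// Versioning sketch: v1_abc123 stands in for an old "v1_GITSHA" namespace,
// v2_def456 for the current one. The inline namespace is what unqualified
// lookup finds.
namespace outcome {
    namespace v1_abc123 {                  // old, still-reachable version
        inline int version() { return 1; }
    }
    inline namespace v2_def456 {           // current (inline) version
        inline int version() { return 2; }
    }
}

// A user can pin the old version explicitly with an alias...
namespace outcome_v1 = outcome::v1_abc123;
// ...while plain outcome::version() resolves into the inline namespace.
```

This is why a macro is convenient: it lets the library spell the alias target (`v1_GITSHA`) without users ever typing the SHA themselves.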
> Yes, I get that. Ok, now I understand why I got confused. I think the fix
> simply did not remove all errors on my version of MinGW.
Yes, it's a mystery. I would assume that your mingw-w64 is older than
mine. They may have since implemented the integer to string conversion
routines the compile error is complaining about recently.
If you can figure out a way of detecting mingw-w64 version, I can use
the non-_s editions of those conversion functions. That would get you up
and running on your version of mingw.
It might be worth, to help you with your review, to try fixing your
Outcome edition and send me a pull request to try out the bug reporting.
You can either fix the partially preprocessed header, or dive in and fix
the original boost-lite source if you compile your program with
BOOST_OUTCOME_DISABLE_PREPROCESSED_INTERFACE_FILE.
Niall
-- ned Productions Limited Consulting
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
A library for inserting a piece of code into another piece of code
Project description
Description
This library allows you to insert one piece of code into another piece of code secretly, without changing line numbers.
The current version supports only Python 3.6. Also, it allows inserting only functions without arguments.
Usage
The library has a useful function, insert_code, which takes the original code, the code to insert, and the line number for insertion. The simplest example of usage, from the examples directory:
from bytesinsert import insert_code

def hello():
    print("1")
    print("3")

def new_code():
    print("2")

code_orig = hello.__code__
code_to_insert = new_code.__code__

success, result = insert_code(code_orig, code_to_insert, 6)
if success:
    exec(result)
The resulting output will be:
1
2
3
The insert_code function inserted code from the function new_code into the function hello by updating its bytecode.
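For intuition, here is a hand-rolled sketch of the same kind of code-object surgery. This is not the library's mechanism — it swaps a constant rather than inserting bytecode, and it relies on CodeType.replace(), available since Python 3.8:

```python
def hello():
    x = "1"
    y = "3"
    return x + y

code = hello.__code__
# Rebuild the code object with the constant "1" replaced by "2"
new_consts = tuple("2" if c == "1" else c for c in code.co_consts)
hello.__code__ = code.replace(co_consts=new_consts)

assert hello() == "23"
```

The library performs a much more delicate version of this on the raw co_code bytes, patching the line-number table so that line numbers stay unchanged.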
Install
This library can be easily installed with pip:
pip install bytesinsert
Connection Pooling
Connection pooling is a technique of creating and managing a pool of connections that are ready for use, which greatly increase the performance of your applications by reducing the connection creation time.
The way of using connection pooling in Connector/Python with the X Protocol, is
by calling the
mysqlx.get_client() function as follows:
import mysqlx

connection_str = 'mysqlx://mike:s3cr3t!@localhost:33060'
options_string = '{}'  # An empty document

client = mysqlx.get_client(connection_str, options_string)
session = client.get_session()

# (...)

session.close()
client.close()
The connection settings and options can also be a dict:
import mysqlx

connection_dict = {
    'host': 'localhost',
    'port': 33060,
    'user': 'mike',
    'password': 's3cr3t!'
}
options_dict = {}

client = mysqlx.get_client(connection_dict, options_dict)
session = client.get_session()

# (...)

session.close()
client.close()
All sessions created by mysqlx.Client.get_session() have a pooled connection, which after being closed by mysqlx.Session.close() returns to the pool of connections, so it can be reused.
Until now we didn’t supply any configuration for mysqlx.Client. We can set the pooling options by passing a dict or a JSON document string in the second parameter. The available options for mysqlx.Client are:
options = {
    'pooling': {
        'enabled': (bool),        # [True | False], True by default
        'max_size': (int),        # Maximum connections per pool
        "max_idle_time": (int),   # milliseconds that a connection will remain active
                                  # while not in use. By default 0, means infinite.
        "queue_timeout": (int),   # milliseconds a request will wait for a connection
                                  # to become available. By default 0, means infinite.
    }
}
To disable pooling in the client we can set the enabled option to False:
client = mysqlx.get_client(connection_str, {'pooling':{'enabled': False}})
To define the pool maximum size we can set the max_size in the pooling options. In the following example 'max_size': 5 sets 5 as the maximum number of connections allowed in the pool.
connection_dict = {
    'host': 'localhost',
    'port': 33060,
    'user': 'mike',
    'password': 's3cr3t!'
}
options_dict = {'pooling': {'max_size': 5, 'queue_timeout': 1000}}

client = mysqlx.get_client(connection_dict, options_dict)

for _ in range(5):
    client.get_session()

# This will raise a pool error:
# mysqlx.errors.PoolError: pool max size has been reached
client.get_session()
The queue_timeout option sets the maximum number of milliseconds a request will wait for a connection to become available. The default value is 0 (zero), which means infinite.
The following example shows the usage of threads that will have to wait for a session to become available:
import mysqlx
import time
import random
from threading import Thread

connection_dict = {
    'host': 'localhost',
    'port': 33060,
    'user': 'mike',
    'password': 's3cr3t!'
}
options_dict = {'pooling': {'max_size': 6, 'queue_timeout': 5000}}

schema_name = 'test'
collection_name = 'collection_test04'

def job(client, worker_number):
    """This method keeps the tasks for a thread.

    Args:
        client (Client): to get the sessions to interact with the server.
        worker_number (int): the id number for the worker.
    """
    rand = random.Random()
    worker_name = "Worker_{}".format(worker_number)
    print("starting Worker: {} \n".format(worker_name))
    # Take a nap before doing the job (gives a chance to other threads to start)
    time.sleep(rand.randint(0, 9) / 10)
    # Get a session from client
    session1 = client.get_session()
    # Get a schema to work on
    schema = session1.get_schema(schema_name)
    # Get the collection to put some documents in
    collection = schema.get_collection(collection_name)
    # Add 10 documents to the collection
    for _ in range(10):
        collection.add({'name': worker_name}).execute()
    # close session
    session1.close()
    print("Worker: {} finish\n".format(worker_name))

def call_workers(client, job_thread, workers):
    """Create threads and start them.

    Args:
        client (Client): to get the sessions.
        job_thread (method): the method to run by each thread.
        workers (int): the number of threads to create.
    """
    workers_list = []
    for n in range(workers):
        workers_list.append(Thread(target=job_thread, args=[client, n]))
    for worker in workers_list:
        worker.start()

# Get a client to manage the sessions
client = mysqlx.get_client(connection_dict, options_dict)

# Get a session to create a schema and a collection
session1 = client.get_session()
schema = session1.create_schema(schema_name)
collection = schema.create_collection(collection_name)

# Close the session to have another free connection
session1.close()

# Invoke call_workers with the client, the method to run by the thread and
# the number of threads, on this example 18 workers
call_workers(client, job, 18)

# Give some time for the workers to do the job
time.sleep(10)

session1 = client.get_session()
schema = session1.get_schema(schema_name)
collection = schema.get_collection(collection_name)
print(collection.find().execute().fetch_all())
The output of the last print will look like the following:
[{'_id': '00005b770c7f0000000000000389', 'name': 'Worker_2'},
 {'_id': '00005b770c7f000000000000038a', 'name': 'Worker_2'},
 {'_id': '00005b770c7f000000000000038b', 'name': 'Worker_2'},
 {'_id': '00005b770c7f000000000000038c', 'name': 'Worker_2'},
 {'_id': '00005b770c7f000000000000038d', 'name': 'Worker_2'},
 {'_id': '00005b770c7f000000000000038e', 'name': 'Worker_2'},
 {'_id': '00005b770c7f000000000000038f', 'name': 'Worker_2'},
 {'_id': '00005b770c7f0000000000000390', 'name': 'Worker_2'},
 {'_id': '00005b770c7f0000000000000391', 'name': 'Worker_2'},
 {'_id': '00005b770c7f0000000000000392', 'name': 'Worker_2'},
 {'_id': '00005b770c7f0000000000000393', 'name': 'Worker_1'},
 {'_id': '00005b770c7f0000000000000394', 'name': 'Worker_4'},
 {'_id': '00005b770c7f0000000000000395', 'name': 'Worker_1'},
 {'_id': '00005b770c7f0000000000000396', 'name': 'Worker_4'},
 {'_id': '00005b770c7f0000000000000397', 'name': 'Worker_7'},
 {'_id': '00005b770c7f0000000000000398', 'name': 'Worker_1'},
 {'_id': '00005b770c7f0000000000000399', 'name': 'Worker_4'},
 {'_id': '00005b770c7f000000000000039a', 'name': 'Worker_7'},
 {'_id': '00005b770c7f000000000000039b', 'name': 'Worker_1'},
 {'_id': '00005b770c7f000000000000039c', 'name': 'Worker_4'},
 {'_id': '00005b770c7f000000000000039d', 'name': 'Worker_7'},
 {'_id': '00005b770c7f000000000000039e', 'name': 'Worker_1'},
 {'_id': '00005b770c7f000000000000039f', 'name': 'Worker_8'},
 {'_id': '00005b770c7f00000000000003a0', 'name': 'Worker_4'},
 {'_id': '00005b770c7f00000000000003a1', 'name': 'Worker_7'},
 ...
 {'_id': '00005b770c7f000000000000043c', 'name': 'Worker_9'}]
The 18 workers took random turns to add their documents to the collection, sharing only 6 active connections, given by 'max_size': 6 in the options_dict passed to the client instance when it was created with mysqlx.get_client(connection_dict, options_dict).
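Since every session must be closed for its connection to return to the pool, a small generic context manager (not part of Connector/Python — just a sketch over any client object exposing get_session()) can make that automatic, even when the work inside raises:

```python
from contextlib import contextmanager

@contextmanager
def pooled_session(client):
    """Yield a session and guarantee it is closed (returned to the pool)."""
    session = client.get_session()
    try:
        yield session
    finally:
        session.close()

# Usage with a real client would look like:
# with pooled_session(client) as session:
#     schema = session.get_schema('test')
```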
A simple histogram can be a great first step in understanding a dataset. Earlier, we saw a preview of Matplotlib's histogram function (see Comparisons, Masks, and Boolean Logic), which creates a basic histogram in one line, once the normal boiler-plate imports are done:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn-white')

data = np.random.randn(1000)
plt.hist(data);
The hist() function has many options to tune both the calculation and the display; here's an example of a more customized histogram:
plt.hist(data, bins=30, normed=True, alpha=0.5,
         histtype='stepfilled', color='steelblue',
         edgecolor='none');
The plt.hist docstring has more information on other customization options available. I find this combination of histtype='stepfilled' along with some transparency alpha to be very useful when comparing histograms of several distributions:
x1 = np.random.normal(0, 0.8, 1000)
x2 = np.random.normal(-2, 1, 1000)
x3 = np.random.normal(3, 2, 1000)

kwargs = dict(histtype='stepfilled', alpha=0.3, normed=True, bins=40)

plt.hist(x1, **kwargs)
plt.hist(x2, **kwargs)
plt.hist(x3, **kwargs);
If you would like to simply compute the histogram (that is, count the number of points in a given bin) and not display it, the np.histogram() function is available:
counts, bin_edges = np.histogram(data, bins=5)
print(counts)
[ 12 190 468 301 29]
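As a standalone sanity check of the binning convention (independent of the plotting code above, and using NumPy's newer Generator API for reproducibility): np.histogram returns one more bin edge than bin count, and with the default range every sample lands in some bin:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=1000)

counts, bin_edges = np.histogram(data, bins=5)

assert len(bin_edges) == len(counts) + 1  # edges bracket the bins
assert counts.sum() == data.size          # every sample is counted
```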
Just as we create histograms in one dimension by dividing the number-line into bins, we can also create histograms in two dimensions by dividing points among two-dimensional bins. We'll take a brief look at several ways to do this here.
We'll start by defining some data—an x and y array drawn from a multivariate Gaussian distribution:
mean = [0, 0]
cov = [[1, 1], [1, 2]]
x, y = np.random.multivariate_normal(mean, cov, 10000).T
plt.hist2d(x, y, bins=30, cmap='Blues')
cb = plt.colorbar()
cb.set_label('counts in bin')
Just as with plt.hist, plt.hist2d has a number of extra options to fine-tune the plot and the binning, which are nicely outlined in the function docstring. Further, just as plt.hist has a counterpart in np.histogram, plt.hist2d has a counterpart in np.histogram2d, which can be used as follows:
counts, xedges, yedges = np.histogram2d(x, y, bins=30)
For the generalization of this histogram binning in dimensions higher than two, see the np.histogramdd function.
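A quick shape check of both functions (a standalone sketch; with the default range, all points are counted):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=1000)
y = rng.normal(size=1000)

counts, xedges, yedges = np.histogram2d(x, y, bins=30)
assert counts.shape == (30, 30)
assert counts.sum() == x.size

# The n-dimensional generalization takes an (N, D) sample array
hist, edges = np.histogramdd(np.stack([x, y], axis=1), bins=(30, 30))
assert hist.shape == (30, 30)
```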
plt.hexbin: Hexagonal binnings
The two-dimensional histogram creates a tessellation of squares across the axes. Another natural shape for such a tessellation is the regular hexagon. For this purpose, Matplotlib provides the plt.hexbin routine, which represents a two-dimensional dataset binned within a grid of hexagons:
plt.hexbin(x, y, gridsize=30, cmap='Blues')
cb = plt.colorbar(label='count in bin')
plt.hexbin has a number of interesting options, including the ability to specify weights for each point, and to change the output in each bin to any NumPy aggregate (mean of weights, standard deviation of weights, etc.).
Another common method of evaluating densities in multiple dimensions is kernel density estimation (KDE). This will be discussed more fully in In-Depth: Kernel Density Estimation, but for now we'll simply mention that KDE can be thought of as a way to "smear out" the points in space and add up the result to obtain a smooth function. One extremely quick and simple KDE implementation exists in the scipy.stats package. Here is a quick example of using the KDE on this data:
from scipy.stats import gaussian_kde

# fit an array of size [Ndim, Nsamples]
data = np.vstack([x, y])
kde = gaussian_kde(data)

# evaluate on a regular grid
xgrid = np.linspace(-3.5, 3.5, 40)
ygrid = np.linspace(-6, 6, 40)
Xgrid, Ygrid = np.meshgrid(xgrid, ygrid)
Z = kde.evaluate(np.vstack([Xgrid.ravel(), Ygrid.ravel()]))

# Plot the result as an image
plt.imshow(Z.reshape(Xgrid.shape),
           origin='lower', aspect='auto',
           extent=[-3.5, 3.5, -6, 6],
           cmap='Blues')
cb = plt.colorbar()
cb.set_label("density")
KDE has a smoothing length that effectively slides the knob between detail and smoothness (one example of the ubiquitous bias–variance trade-off). The literature on choosing an appropriate smoothing length is vast: gaussian_kde uses a rule-of-thumb to attempt to find a nearly optimal smoothing length for the input data. Other KDE implementations are available within the SciPy ecosystem, each with its own strengths and weaknesses; see, for example, sklearn.neighbors.KernelDensity and statsmodels.nonparametric.kernel_density.KDEMultivariate.
For visualizations based on KDE, using Matplotlib tends to be overly verbose.
The Seaborn library, discussed in Visualization With Seaborn, provides a much more terse API for creating KDE-based visualizations.
Error when trying to assign a string to the title attribute.
I am in the process of taking old javafx code and moving it to the compiled version. In one section of my code I have a tab class that is called from the main form. When moving things around, trying to make it compliant with the compiled JavaFx code, I get the following error when trying to compile my project (in NetBeans 6.1):
"error: Cannot override TabExample.title default initializer in title subclass."
In the interpreted version, the following was outside my tab class and it worked fine:
TabExample.attribute title="Other";
In the compiled version I moved it into the class (because I was getting an error) so it looks like this:
class TabExample extends Tab{
    attribute title = "Other";
}
Any thoughts what the error means and how I can get around it?
Hi, JavaFX overrides a little bit differently than Java. Use the "override" keyword to override the attribute.
import javafx.ui.*;
public class TabExample extends Tab{
    override attribute title = "Other";
}
Hi,
I am trying to configure 802.1ad on ixgbe vfs with a goal of moving resultant q-in-q interface into a container i.e. have the vf in default namespace and have all the vf's vlan interfaces being moved to different namespaces. Theoretically in this configuration, packets would be double-tagged coming into pf, be directed to correct vf (and outer tag stripped) and then delivered to linux with single-tag.
I tried this:
# echo 32 > /sys/class/net/eth1/device/sriov_numvfs
# ip link set dev eth1 up
# ip link set dev eth1 vf 1 vlan 2 proto 802.1ad
RTNETLINK answers: Protocol not supported
Is there a way I can do this ?
TIA,
Don.
Hi Donlobete
Thank you for posting at Wired Communities. Can you share the information below:
1) What is the network adapter model used here?
2) What is the exact Linux operating system?
3) What is the exact ixgbe driver version used?
Thanks,
sharon
Embedded application using Broadwell CPU (Intel(R) Xeon(R) CPU D-1527 @ 2.20GHz) with integrated 2x10Gbps - vendor 8086 ("Intel Corporation"), device 15ab ("Ethernet Connection X552 10 GbE Backplane").
Linux kernel v4.11 and the ixgbe driver for that kernel. "modinfo ixgbe" says version: 5.0.0-k.
Latest iproute2.
Thanks
Hi Donlobete,
Thank you for the information. I will have to further investigate on this.
Rgds,
sharon
Hi Donlobete,
S-tag is not supported by X552, please refer to page 743 of this document -
regards,
Vince
Pin 13 was my first guess as to the cause of the problem as there's an LED attached to it. I've been suggesting to people to try different pin assignments and changed the default sketch to use pin 12 and 11 for a few weeks now.
With that said, I didn't ask you specifically not to use pin 13 for two reasons. First, pin 13 works fine on the Uno and we both assumed you were using the Uno at the beginning. Secondly, the sketch you linked to at also uses pins 12 and 13. So, to totally close this case, were you using the exact trollmaker sketch including using pins 12 and 13 or did you use that sketch but with different pins?
The TestF library is basically exactly the same as the trollmaker sketch. If the Duemilanove really can't use pin 13 as a input, it shouldn't be able to with either the NewPing library or the trollmaker sketch (as it uses pin 13 for input).
I don't ever remember seeing false positives or false negatives. For example, if I put an object a foot away from the sensor it will show a distance every time. It may fluctuate a bit as far as the distance, but will never give a 0 reading. Likewise, if I set the maximum distance at 200cm and have it ping out at a wall that's 15 feet away, it will always get a 0 reading. Some have interpreted this as incorrect. But, this is the correctly designed behavior. You wouldn't want the sensor to report back 200cm when there was something 400cm away as you've set the sensor to only listen for objects 200cm away or closer. A reading of 200cm would tell you that something is 200cm away, when there is not. Any object beyond the specified maximum distance is returned as 0, which means "all clear". It's slightly different with the timer interrupt method as we don't create an event unless there's something in-range.
I used the below sketch with your HC-SR04 sensor. It does VERY fast pings and displays a "." if it gets a ping within 200cm or an "0" if it gets no ping (zero). It displays it in 80 columns so a lot of data can quickly be looked after the test is complete. Each line ends with the ping microseconds for the final ping (kind of a test to make sure you're understanding what it's showing).
So, you may want to try the above sketch and a similar test before we try to filter false positives or false negatives that I can't even duplicate.
(Sample output: lines of 80 '.' and '0' characters — '.' for an in-range ping, '0' for no ping — with each line ending in the final ping's microseconds, e.g. 611, 607, 583.)
Preliminary results, tested against NewPing v1.4:

Uno R3, pins 12 & 13, SRF05 sensor. Other hardware (LEDs, servos, 1 5V-triggered relay) attached but not activated/powered. Tons of false negatives (0 appearing when something is in range). I tried increasing the delay to 50, and while it did help a little, it didn't resolve the false negatives.

Duemilanove, pins 12 & 11, SR04 sensor. No other hardware attached. Tons of false negatives (0 appearing when something is in range). Same behavior as above with the delay change.
When I do the same test, I get nothing but periods. Maybe try it while disconnecting the other hardware to see if it's a possible noise issue.
I would still not use pin 13 for input, even on the Uno. Not that it's causing the problem, it's just probably not a good idea. If you're out of pins, it would be better to swap pins 12 and 13 and use 13 for the trigger and 12 for the echo.
Also, it's possible that it's giving the zeros because the ping never initiated.
While the sensor specs say it only needs a 10uS high notch to trigger a ping, maybe some sensors need a little more. I haven't seen this, but I'm thinking it's very possible. Try changing the delayMicroseconds in lines 46 and 48 of NewPing.cpp to 1000 each. That should give it plenty of time to initiate the ping.
Even if the false negatives are going to happen for you in any case; If you only want to know the distance, why not just filter out the zeros and just deal with the successful pings? If you only want to know a distance, just ping frequently and ignore all the zeros.
Finally, I've been thinking of some type of averaging method in NewPing. My thought was to do an odd number of pings like 3, 5 or 7, throw out the high and low value, and average the rest. Obviously, taking into consideration zero results as well. Then, it would give that average as the result. It would be a poor man's standard deviation, creating smaller and faster code but yielding basically the same result. This would also fix your false negative results. I'd rather locate the source of your problem instead, as that's a LOT of false negatives considering I get none.
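The proposed "poor man's standard deviation" filter — take an odd number of pings, drop failed pings (zeros), throw out the high and low value, and average the rest — can be sketched in Python for illustration (this is the idea described above, not NewPing code):

```python
def filtered_ping(samples):
    """Drop failed pings (zeros), discard the single highest and lowest
    reading, and return the integer average of the rest.
    Returns 0 ("all clear") if too few valid samples remain."""
    valid = sorted(s for s in samples if s > 0)
    if len(valid) < 3:
        return 0  # not enough data to trim and average
    trimmed = valid[1:-1]  # throw out the high and low value
    return sum(trimmed) // len(trimmed)

# Five pings: one false negative (0) and one outlier (900).
print(filtered_ping([583, 0, 611, 900, 607]))  # -> 609
```

This both smooths jitter and absorbs occasional false negatives, at the cost of a few extra pings per reading.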
Quote from: teckel
Are all these zeros the same thing you get when you use a different sensor library? As other libraries may not return zero for a failed ping, you'd probably need to do an additional command to make the out of range a zero and in range a period.

I'll re-test with that other implementation, but I began to code for false positives and negatives based on my experience with it, not NewPing. Still, I'll see if I can get a comparison going.
#include <NewPing.h>

#define trigPin 7
#define echoPin 8

NewPing sonar(trigPin, echoPin, 200);

byte i = 0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  unsigned int us = sonar.ping();
  if (us == 0) {
    Serial.print("0");
  } else {
    Serial.print(".");
  }
  i++;
  if (i == 80) {
    Serial.print(" ");
    Serial.println(us);
    i = 0;
  }
  delay(10);
}
#define trigPin 7
#define echoPin 8

byte i = 0;

void setup() {
  Serial.begin(9600);
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
}

void loop() {
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(1000);
  digitalWrite(trigPin, LOW);
  unsigned int us = pulseIn(echoPin, HIGH);
  if (us == 0) {
    Serial.print("0");
  } else {
    Serial.print(".");
  }
  i++;
  if (i == 80) {
    Serial.print(" ");
    Serial.println(us);
    i = 0;
  }
  delay(10);
}
My testing indicates that a 10ms delay was always too short and a 20ms delay was too short for close objects (which is counter-intuitive to me) - at least for my SR04 sensor. Increasing the delays in my larger code base reduced but did not eliminate my false negatives. Testing with my actual head while as stationary as I could make it produced lots of false negatives, depending on what it bounced off. In some cases, there were no false negatives. Moving my head while not walking was problematic, too. I suspect I will get more false readings while walking because this adds sensor vibration.

To sum up: Minimum delay between pings for my hardware was 40ms. I have some inherent false negatives due to the target shape, target motion, and sensor vibration in my application. I'm going to keep using my digital filter, unless I've missed some other solution. I'd be happy to share my filter with others if it's of general use.
Okay, so really the only problem is the delay between pings.
[...]Set the delay to 29ms in the sketch and see if you get any false negatives. I would suspect you get basically none.
So the question is, were you all along trying to do pings too quickly which is why you concluded that the SR04 sensor gave many false negative readings? All of my sample sketches show delays in the 33 to 35ms range with comments not to go quicker than 29ms.
For me, any loop() delay >= 21ms yielded zero false negatives when testing stationary objects between 1cm and 50cm from the sensor's emitter.

Is there any reason this should be? Just a variation between specific hardware and the spec? I don't mind using 39ms to be safe, but there seems to be a very harsh cutoff between the behavior at a 20ms delay and the behavior at a 21ms delay.
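The ~29ms minimum quoted in the library comments has a physical basis: the sensor listens for an echo out to roughly 500cm, and the pulse must travel out and back at the speed of sound before the next ping can safely start. A quick back-of-the-envelope check (assumed values: 343 m/s at room temperature, 500cm worst-case range):

```python
SPEED_OF_SOUND_CM_PER_US = 0.0343  # ~343 m/s at roughly 20 degrees C
MAX_RANGE_CM = 500                 # worst-case range an HC-SR04 style sensor can hear

# Round trip: the pulse travels to the object and back again.
round_trip_us = 2 * MAX_RANGE_CM / SPEED_OF_SOUND_CM_PER_US
print(round(round_trip_us / 1000, 1))  # -> 29.2 (milliseconds)
```

Pinging faster than this risks the sensor hearing the previous ping's echo, which would explain hardware-dependent cutoffs somewhat below 29ms.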
I am interested in using the new ping library but am concerned about the interrupt driven version. I am using an arduino uno, 2 parallax ping sensors mounted on servo motors (from Parallax). I want to drive each servo 180 deg to make a full 360 sweep as fast as I can. After each servo move, I want to ping a distance and use that value to play a small wav file. I put an Adafruit wav shield on my arduino and found that it uses the only 16 bit timer. The servo library from arduino uses it too. So I found another servo library that uses an 8 bit timer on the AT328 and I can play wavs and mover servos. I cannot get good pings all the time with the standard ping code. It appears that the new ping libary uses the 16 bit timer, too. But, in the revision history, it appears that an earlier version does not. Is that correct? I would appreciate any suggestions. Thank you.
#include <NewPing.h>

#define SONAR_NUM 3       // Number of sensors.
#define MAX_DISTANCE 200  // Maximum distance (in cm) to ping.
#define PING_INTERVAL 100 // Milliseconds between pings.

unsigned long pingTimer[SONAR_NUM]; // When each sensor's next ping should happen.
unsigned int cm[SONAR_NUM];         // Where the ping distances are stored.
uint8_t currentSensor = 0;          // Which sensor is active.

NewPing sonar[SONAR_NUM] = {     // Sensor object array.
  NewPing(5, 6, MAX_DISTANCE),   // Each sensor's trigger pin, echo pin, and max distance to ping.
  NewPing(8, 9, MAX_DISTANCE),
  NewPing(11, 12, MAX_DISTANCE)
};

void setup() {
  Serial.begin(115200);
  pingTimer[0] = millis() + 75;            // First ping starts at 75ms.
  for (uint8_t i = 1; i < SONAR_NUM; i++)  // Set the starting time for each sensor.
    pingTimer[i] = pingTimer[i - 1] + PING_INTERVAL;
}

void loop() {
  for (uint8_t i = 0; i < SONAR_NUM; i++) {      // Loop through all the sensors.
    if (millis() >= pingTimer[i]) {              // Is it this sensor's time to ping?
      pingTimer[i] += PING_INTERVAL * SONAR_NUM; // Set next time this sensor will be pinged.
      if (i == 0 && currentSensor == SONAR_NUM - 1) oneSensorCycle();
      sonar[currentSensor].timer_stop();         // Cancel the previous timer.
      currentSensor = i;                         // Sensor being accessed.
      cm[currentSensor] = 0;                     // Zero in case there's no echo.
      sonar[currentSensor].ping_timer(echoCheck); // Do the ping; the interrupt calls echoCheck.
    }
  }
}

void echoCheck() { // If ping received, set the sensor distance to array.
  if (sonar[currentSensor].check_timer())
    cm[currentSensor] = sonar[currentSensor].convert_cm(sonar[currentSensor].ping_result);
}

void oneSensorCycle() { // Sensor ping cycle complete, do something with the results.
  for (uint8_t i = 0; i < SONAR_NUM; i++) {
    Serial.print(i);
    Serial.print("=");
    Serial.print(cm[i]);
    Serial.print("cm ");
  }
  Serial.println();
}
https://forum.arduino.cc/index.php?topic=106043.msg859240
I'm using ASP.NET MVC 3.0. I'm using the OutputCache attribute to cache a partial view result, which is rendered on my master page using Html.RenderAction. The caching part works fine, but I cannot invalidate the cache by calling HttpResponse.RemoveOutputCacheItem. (The partial view remains cached and the action doesn't get called anymore.) I've been fighting this for a day, I've read all the forum posts on this, and I can't figure out what I'm doing wrong. There seems to be no documentation on the parameter that gets passed into RemoveOutputCacheItem, and the API gives no indication that something failed.
My partial view looks like this ...
public class HomeController : Controller
{
[OutputCache(Duration=3600)]
[ChildActionOnly]
public ActionResult NavBar()
{
...
return PartialView(model);
}
}
In my master page view, I'm using this to render the partial:
<% Html.RenderAction("NavBar", "Home"); %>
When the user performs an action that would cause the results of /Home/NavBar to change, I call this from within that action:
HttpResponse.RemoveOutputCacheItem("/Home/NavBar");

I have set the output cache for 5 pages (5 minutes).
I want to clear all these pages from cache on some button click.
Hello,
I am using the EF data model for my database activities and I recently figured out that using views was hurting the performance of the web application. After some research I decided to pre-generate my views, and one of the initial steps was to change the Metadata Artifact Processing property to: Copy to Output Directory. But when I right click on my DB model designer and check the properties for Metadata Artifact Processing, I only see one option, and that is "Embed in Output Assembly". I don't know why I don't see the option "Copy to Output Directory". Any ideas are appreciated.
I am on Windows 7, 64-bit, VS 2010 Ultimate, .Net Framework 4.0
Thanks for your time,
Uday.
http://www.dotnetspark.com/links/61358-unable-to-invalidate-output-cache-using.aspx
Theodor Fahami1,034 Points
Help TreeHouse Challenge, char and indexOf
This is my task:
I have modeled an assistant for a tech conference. Since there are so many attendees, registration is broken up into two lines. Lines are determined by last name, and they will end up in line 1 or line 2. Please help me fix the getLineFor method.
NEW INFO: chars can be compared using the > and < symbols. For instance 'B' > 'A' and 'R' < 'Z'
This is the pre-writen code:
public class ConferenceRegistrationAssistant {
public int getLineFor(String lastName) {
  /* If the last name is between A thru M send them to line 1
     Otherwise send them to line 2 */
  int line = 0;
  return line;
}
}
I do not understand how I am supposed to do this! Please help me understand how I can complete this challenge.
2 Answers
Unsubscribed User427 Points
In this challenge you are asked to :
- check what is the first letter of lastName ;
- see if this letter is between A and M or N and Z ;
- if this letter is between A and M set line to 1 ;
- else set line to 2.
Try to solve this challenge with these explanations but if it is too confusing here's the solution with explanations :
public class ConferenceRegistrationAssistant {

  public int getLineFor(String lastName) {
    // line must end up as 1 or 2 according to the first letter of lastName.
    int line = 0;
    // Check whether the first letter of lastName comes before 'N',
    // i.e. whether it is between A and M.
    if (lastName.charAt(0) < 'N') {
      // The first letter is between A and M, so use line 1.
      line = 1;
    } else {
      // Otherwise the first letter is between N and Z, so use line 2.
      line = 2;
    }
    // Return the chosen line, 1 or 2.
    return line;
  }
}
https://teamtreehouse.com/community/help-treehouse-challenge-char-and-indexof
ioctl(2) BSD System Calls Manual ioctl(2)
NAME
ioctl -- control device
SYNOPSIS
#include <sys/ioctl.h>

int
ioctl(int fildes, unsigned long request, ...);
DESCRIPTION
The ioctl() function manipulates the underlying device parameters of special
files. In particular, many operating characteristics of character special
files (e.g. terminals) may be controlled with ioctl() requests. The argument
fildes must be an open file descriptor.

ERRORS
     ioctl() will fail if:

     [EBADF]       fildes is not a valid descriptor.

     [EINVAL]      Request or argp is not valid.

     [ENOTTY]      fildes is not associated with a character special device.

     [ENOTTY]      The specified request does not apply to the kind of object
                   that the descriptor fildes references.
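As an illustration of the ENOTTY case described above, Python's standard fcntl module exposes ioctl(2) directly. Here a TIOCGWINSZ (terminal window size) request succeeds only on a terminal and fails with ENOTTY on other descriptors (a POSIX-only sketch):

```python
import errno
import fcntl
import os
import struct
import termios

def terminal_size(fd):
    """Return (rows, cols) for a terminal fd, or None when the request
    does not apply to the descriptor (the ENOTTY error above)."""
    try:
        buf = fcntl.ioctl(fd, termios.TIOCGWINSZ, b"\x00" * 8)
    except OSError as e:
        if e.errno in (errno.ENOTTY, errno.EINVAL):
            return None
        raise
    rows, cols = struct.unpack("hhhh", buf)[:2]
    return rows, cols

# A pipe is not a character special device, so the request fails with ENOTTY.
r, w = os.pipe()
print(terminal_size(r))  # -> None
os.close(r)
os.close(w)
```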
SEE ALSO
cdio(1), chio(1), mt(1), execve(2), fcntl(2), intro(4), tty(4)
HISTORY
An ioctl() function call appeared in Version 7 AT&T UNIX.

4th Berkeley Distribution       December 11, 1993       4th Berkeley Distribution
Mac OS X 10.9.1 - Generated Mon Jan 6 08:01:01 CST 2014
http://www.manpagez.com/man/2/ioctl/
I'm looking for best practices in regards to having a class which has a flag which decides whether an operation can be performed on that class or not. This entire class will be returned in WCF REST services as well used in a Silverlight GUI via RIA Services in order to set certain action buttons to be enabled or disabled for the user. I want to know best practices for setting up a class in this way. For example:
public class SomeCustomerClass
{
    private bool _canResetCustomer;

    public bool CanResetCustomer
    {
        get { return _canResetCustomer; } //TODO: Place GET logic here from DB if it's null
        set { _canResetCustomer = value; }
    }

    if (this._canResetCustomer)
    {
        ResetCustomer();
    }
    ...
See my "TODO"? I need to determine if the bool has been set. If it hasn't been set, I need to get the eligibility of this customer for reset from a list of data-based rules from the database. The way I see it, there are two options, both of which I've used before:
Define another bool which tracks whether CanReset has been set or not, like so:
public bool _canResetSet { get; set; }
Change my bool to a nullable type. I think in the constructor I'd have to instantiate the object with _canResetCustomer = null. Maybe?
Not sure why I'm asking this question except that maybe I'm just shy of nullable types. Does this entire dilemma speak to other issues with the way I design things? I commonly use this method of flagging in webforms applications as well.
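For comparison, the nullable-type option describes a common tri-state pattern: None means "not yet determined", and the value is fetched lazily on first access. A sketch in Python terms (not the actual C# class; `load_from_db` is a hypothetical stand-in for the database rules lookup):

```python
class SomeCustomer:
    def __init__(self, load_from_db):
        # None means "not yet determined"; True/False once resolved.
        self._can_reset = None
        self._load_from_db = load_from_db

    @property
    def can_reset(self):
        if self._can_reset is None:  # unset: fetch the rule lazily, once
            self._can_reset = self._load_from_db()
        return self._can_reset

    @can_reset.setter
    def can_reset(self, value):
        self._can_reset = value

c = SomeCustomer(load_from_db=lambda: True)
print(c.can_reset)  # -> True (fetched lazily on first access)
```

The nullable value replaces the second "has it been set?" flag, so there is only one piece of state to keep consistent.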
http://www.howtobuildsoftware.com/index.php/how-do/6kD/c-aspnet-wcf-silverlight-ria-best-practice-for-setting-boolean-flags
To install PyTorch follow the instructions on the official website:
pip install torch torchvision
We aim to gradually expand this series by adding new articles and keep the content up to date with the latest releases of PyTorch API. If you have suggestions on how to improve this series or find the explanations ambiguous, feel free to create an issue, send patches, or reach out by email.
PyTorch is one of the most popular libraries for numerical computation and currently is amongst the most widely used libraries for performing machine learning research. In many ways PyTorch is similar to NumPy, with the additional benefit that PyTorch allows you to perform your computations on CPUs, GPUs, and TPUs without any material change to your code. PyTorch also makes it easy to distribute your computation across multiple devices or machines. One of the most important features of PyTorch is automatic differentiation. It allows computing the gradients of your functions analytically in an efficient manner which is crucial for training machine learning models using gradient descent method. Our goal here is to provide a gentle introduction to PyTorch and discuss best practices for using PyTorch.
The first thing to learn about PyTorch is the concept of Tensors. Tensors are simply multidimensional arrays. A PyTorch Tensor is very similar to a NumPy array with some magical additional functionality.
A tensor can store a scalar value:
import torch

a = torch.tensor(3)
print(a)  # tensor(3)
or an array:
b = torch.tensor([1, 2])
print(b)  # tensor([1, 2])
a matrix:
c = torch.zeros([2, 2])
print(c)  # tensor([[0., 0.], [0., 0.]])
or any arbitrary dimensional tensor:
d = torch.rand([2, 2, 2])
Tensors can be used to perform algebraic operations efficiently. One of the most commonly used operations in machine learning applications is matrix multiplication. Say you want to multiply two random matrices of size 3x5 and 5x4, this can be done with the matrix multiplication (@) operation:
import torch

x = torch.randn([3, 5])
y = torch.randn([5, 4])
z = x @ y

print(z)
Similarly, to add two vectors, you can do:
z = x + y
To convert a tensor into a numpy array you can call Tensor's numpy() method:
print(z.numpy())
And you can always convert a numpy array into a tensor by:
x = torch.tensor(np.random.normal(size=[3, 5]))
The most important advantage of PyTorch over NumPy is its automatic differentiation functionality which is very useful in optimization applications such as optimizing parameters of a neural network. Let's try to understand it with an example.
Say you have a composite function which is a chain of two functions: g(u(x)). To compute the derivative of g with respect to x we can use the chain rule, which states that dg/dx = dg/du * du/dx. PyTorch can analytically compute the derivatives for us.
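Before reaching for autograd, the chain rule can be checked numerically for the functions used below (u(x) = x², g(u) = -u) with a central finite difference; pure Python, no PyTorch required:

```python
def u(x):
    return x * x      # inner function

def g(u_val):
    return -u_val     # outer function

def dg_dx(x):
    # chain rule: dg/dx = dg/du * du/dx = (-1) * (2x)
    return -1.0 * (2.0 * x)

# Central finite-difference approximation of d g(u(x)) / dx at x = 1.
h = 1e-6
x = 1.0
numeric = (g(u(x + h)) - g(u(x - h))) / (2 * h)
print(round(dg_dx(x), 6), round(numeric, 6))  # -> -2.0 -2.0
```

The analytic and numeric values agree, which is exactly what autograd computes for us below.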
To compute the derivatives in PyTorch, first we create a tensor and set its requires_grad attribute to true. We can use tensor operations to define our functions. We assume u is a quadratic function and g is a simple linear function:
x = torch.tensor(1.0, requires_grad=True)

def u(x):
    return x * x

def g(u):
    return -u
In this case our composite function is g(u(x)) = -x*x, so its derivative with respect to x is -2x. At the point x=1, this is equal to -2.
Let's verify this. It can be done using the grad function in PyTorch:
dgdx = torch.autograd.grad(g(u(x)), x)[0]
print(dgdx)  # tensor(-2.)
To understand how powerful automatic differentiation can be, let's look at another example. Assume that we have samples from a curve (say f(x) = 5x^2 + 3) and we want to estimate f(x) based on these samples. We define a parametric function g(x, w) = w0 x^2 + w1 x + w2, a function of the input x and latent parameters w; our goal is then to find the latent parameters such that g(x, w) ≈ f(x). This can be done by minimizing the mean squared error between f(x) and g(x, w) over a set of sample points using gradient descent. Here's how it can be done in PyTorch:
import numpy as np
import torch

# Assuming we know that the desired function is a polynomial of 2nd degree, we
# allocate a vector of size 3 to hold the coefficients and initialize it with
# random noise.
w = torch.tensor(torch.randn([3, 1]), requires_grad=True)

# We use the Adam optimizer with learning rate set to 0.1 to minimize the loss.
opt = torch.optim.Adam([w], 0.1)

def model(x):
    # We define yhat to be our estimate of y.
    f = torch.stack([x * x, x, torch.ones_like(x)], 1)
    yhat = torch.squeeze(f @ w, 1)
    return yhat

def compute_loss(y, yhat):
    # The loss is defined to be the mean squared error distance between our
    # estimate of y and its true value.
    loss = torch.nn.functional.mse_loss(yhat, y)
    return loss

def generate_data():
    # Generate some training data based on the true function.
    x = torch.rand(100) * 20 - 10
    y = 5 * x * x + 3
    return x, y

def train_step():
    x, y = generate_data()
    yhat = model(x)
    loss = compute_loss(y, yhat)

    opt.zero_grad()
    loss.backward()
    opt.step()

for _ in range(1000):
    train_step()

print(w.detach().numpy())
By running this piece of code you should see a result close to this:
[4.9924135, 0.00040895029, 3.4504161]
Which is a relatively close approximation to our parameters.
This is just tip of the iceberg for what PyTorch can do. Many problems such as optimizing large neural networks with millions of parameters can be implemented efficiently in PyTorch in just a few lines of code. PyTorch takes care of scaling across multiple devices, and threads, and supports a variety of platforms.
In the previous example we used bare-bone tensors and tensor operations to build our model. To make your code slightly more organized it's recommended to use PyTorch's modules. A module is simply a container for your parameters and encapsulates model operations. For example, say you want to represent a linear model y = ax + b. This model can be represented with the following code:
import torch

class Net(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.a = torch.nn.Parameter(torch.rand(1))
        self.b = torch.nn.Parameter(torch.rand(1))

    def forward(self, x):
        yhat = self.a * x + self.b
        return yhat
To use this model in practice you instantiate the module and simply call it like a function:
x = torch.arange(100, dtype=torch.float32)

net = Net()
y = net(x)
Parameters are essentially tensors with requires_grad set to true. It's convenient to use parameters because you can simply retrieve them all with the module's parameters() method:
for p in net.parameters(): print(p)
Now, say you have an unknown function y = 5x + 3 + some noise, and you want to optimize the parameters of your model to fit this function. You can start by sampling some points from your function:
x = torch.arange(100, dtype=torch.float32) / 100
y = 5 * x + 3 + torch.rand(100) * 0.3
Similar to the previous example, you can define a loss function and optimize the parameters of your model as follows:
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.01)

for i in range(10000):
    net.zero_grad()
    yhat = net(x)
    loss = criterion(yhat, y)
    loss.backward()
    optimizer.step()

print(net.a, net.b)  # Should be close to 5 and 3
PyTorch comes with a number of predefined modules. One such module is torch.nn.Linear, which is a more general form of the linear function we defined above. We can rewrite our module using torch.nn.Linear like this:
class Net(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(1, 1)

    def forward(self, x):
        yhat = self.linear(x.unsqueeze(1)).squeeze(1)
        return yhat
Note that we used squeeze and unsqueeze since torch.nn.Linear operates on batches of vectors as opposed to scalars.
By default calling parameters() on a module will return the parameters of all its submodules:
net = Net()
for p in net.parameters():
    print(p)
There are some predefined modules that act as containers for other modules. The most commonly used container module is torch.nn.Sequential. As its name implies, it's used to stack multiple modules (or layers) on top of each other. For example, to stack two Linear layers with a ReLU nonlinearity in between you can do:
model = torch.nn.Sequential(
    torch.nn.Linear(64, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 10),
)
PyTorch supports broadcasting of elementwise operations. Normally, when you perform operations like addition on two tensors, their shapes have to match. Broadcasting relaxes this requirement: PyTorch implicitly tiles the tensor across its singular dimensions to match the shape of the other operand. So it's valid to add a tensor of shape [3, 2] to a tensor of shape [3, 1].
import torch

a = torch.tensor([[1., 2.], [3., 4.]])
b = torch.tensor([[1.], [2.]])
# c = a + b.repeat([1, 2])
c = a + b

print(c)

Broadcasting allows us to do implicit tiling, which makes the code shorter and more memory efficient since we don't need to store the result of the tiling operation. One place this can be used is when combining features of varying length: a common pattern is to tile one tensor, concatenate the two, and apply a nonlinearity:

a = torch.rand([5, 3, 5])
b = torch.rand([5, 1, 6])

linear = torch.nn.Linear(11, 10)

# concat a and b and apply nonlinearity
tiled_b = b.repeat([1, 3, 1])
c = torch.cat([a, tiled_b], 2)
d = torch.nn.functional.relu(linear(c))

print(d.shape)  # torch.Size([5, 3, 10])
But this can be done more efficiently with broadcasting. We use the fact that f(m(x + y)) is equal to f(mx + my). So we can do the linear operations separately and use broadcasting to do implicit concatenation:
a = torch.rand([5, 3, 5])
b = torch.rand([5, 1, 6])

linear1 = torch.nn.Linear(5, 10)
linear2 = torch.nn.Linear(6, 10)

pa = linear1(a)
pb = linear2(b)
d = torch.nn.functional.relu(pa + pb)

print(d.shape)  # torch.Size([5, 3, 10])
In fact this piece of code is pretty general and can be applied to tensors of arbitrary shape as long as broadcasting between tensors is possible:
class Merge(torch.nn.Module):
    def __init__(self, in_features1, in_features2, out_features, activation=None):
        super().__init__()
        self.linear1 = torch.nn.Linear(in_features1, out_features)
        self.linear2 = torch.nn.Linear(in_features2, out_features)
        self.activation = activation

    def forward(self, a, b):
        pa = self.linear1(a)
        pb = self.linear2(b)
        c = pa + pb
        if self.activation is not None:
            c = self.activation(c)
        return c
So far we discussed the good part of broadcasting. But what’s the ugly part you may ask? Implicit assumptions almost always make debugging harder to do. Consider the following example:
a = torch.tensor([[1.], [2.]])
b = torch.tensor([1., 2.])
c = torch.sum(a + b)

print(c)
What do you think the value of c would be after evaluation? If you guessed 6, that's wrong. It's going to be 12. This is because when the ranks of two tensors don't match, PyTorch automatically expands the tensor with the lower rank in its first dimensions before the elementwise operation, so the result of the addition would be [[2, 3], [3, 4]], and summing over all elements gives 12.

The way to avoid this problem is to be as explicit as possible. Had we specified which dimension we wanted to reduce across, catching this bug would have been much easier:

a = torch.tensor([[1.], [2.]])
b = torch.tensor([1., 2.])
c = torch.sum(a + b, 0)

print(c)
Here the value of c would be [5, 7], and we would immediately guess based on the shape of the result that there's something wrong. A general rule of thumb is to always specify the dimensions in reduction operations and when using torch.squeeze.
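The rank-alignment rule behind this pitfall can be made explicit with a small pure-Python helper that computes the broadcast result shape the way PyTorch and NumPy do — pad the lower-rank shape with leading 1s, then expand singleton dimensions (a sketch for intuition, not PyTorch internals):

```python
def broadcast_shape(a, b):
    """Compute the broadcast result shape of two shapes, or raise ValueError."""
    rank = max(len(a), len(b))
    # Pad the shorter shape with leading 1s so both have equal rank.
    a = (1,) * (rank - len(a)) + tuple(a)
    b = (1,) * (rank - len(b)) + tuple(b)
    result = []
    for da, db in zip(a, b):
        if da == db or da == 1 or db == 1:
            result.append(max(da, db))  # singleton dimensions get expanded
        else:
            raise ValueError(f"incompatible dimensions {da} and {db}")
    return tuple(result)

# The example above: shapes [2, 1] and [2] broadcast to [2, 2], hence sum() == 12.
print(broadcast_shape((2, 1), (2,)))  # -> (2, 2)
```

Running candidate shapes through a helper like this is a quick way to sanity-check what an elementwise op will actually produce.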
Just like NumPy, PyTorch overloads a number of python operators to make PyTorch code shorter and more readable.
The slicing op is one of the overloaded operators that can make indexing tensors very easy:
z = x[begin:end]  # z = torch.narrow(x, 0, begin, end - begin)
Be very careful when using this op though. The slicing op, like any other op, has some overhead. Because it's a common op and innocent looking it may get overused a lot which may lead to inefficiencies. To understand how inefficient this op can be let's look at an example. We want to manually perform reduction across the rows of a matrix:
import torch
import time

x = torch.rand([500, 10])
z = torch.zeros([10])

start = time.time()
for i in range(500):
    z += x[i]
print("Took %f seconds." % (time.time() - start))
This runs quite slow and the reason is that we are calling the slice op 500 times, which adds a lot of overhead. A better choice would have been to use the torch.unbind op to slice the matrix into a list of vectors all at once:
z = torch.zeros([10])
for x_i in torch.unbind(x):
    z += x_i
This is significantly (~30% on my machine) faster.
Of course, the right way to do this simple reduction is to use the torch.sum op to do it in one call:
z = torch.sum(x, dim=0)
which is extremely fast (~100x faster on my machine).
PyTorch also overloads a range of arithmetic and logical operators:
z = -x      # z = torch.neg(x)
z = x + y   # z = torch.add(x, y)
z = x - y
z = x * y   # z = torch.mul(x, y)
z = x / y   # z = torch.div(x, y)
z = x // y
z = x % y
z = x ** y  # z = torch.pow(x, y)
z = x @ y   # z = torch.matmul(x, y)
z = x > y
z = x >= y
z = x < y
z = x <= y
z = abs(x)  # z = torch.abs(x)
z = x & y
z = x | y
z = x ^ y   # z = torch.logical_xor(x, y)
z = ~x      # z = torch.logical_not(x)
z = x == y  # z = torch.eq(x, y)
z = x != y  # z = torch.ne(x, y)
You can also use the augmented version of these ops. For example, x += y and x **= 2 are also valid. Note that Python doesn't allow overloading the and, or, and not keywords.
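The reason and/or/not cannot be overloaded is that Python routes them through __bool__ and short-circuiting rather than a dunder method. A minimal pure-Python class (not a PyTorch type) makes the contrast with & visible:

```python
class Flags:
    def __init__(self, bits):
        self.bits = bits

    def __and__(self, other):
        # Overloads the & operator: a bitwise combination.
        return Flags(self.bits & other.bits)

    def __bool__(self):
        # What `and`/`or`/`not` consult: only truthiness.
        return self.bits != 0

a, b = Flags(0b1100), Flags(0b1010)
print(bin((a & b).bits))  # -> 0b1000 (bitwise, via __and__)
print((a and b) is b)     # -> True: `and` just returned the second operand
```

This is why elementwise logical ops on tensors must use &, |, and ~ (or the torch.logical_* functions) instead of the keywords.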
PyTorch is optimized to perform operations on large tensors. Doing many operations on small tensors is quite inefficient in PyTorch. So, whenever possible you should rewrite your computations in batch form to reduce overhead and improve performance. If there's no way you can manually batch your operations, using TorchScript may improve your code's performance. TorchScript is simply a subset of Python functions that are recognized by PyTorch. PyTorch can automatically optimize your TorchScript code using its just in time (jit) compiler and reduce some overheads.
Let's look at an example. A very common operation in ML applications is "batch gather". This operation can simply be written as output[i] = input[i, index[i]], and implemented in PyTorch as follows:
import torch

def batch_gather(tensor, indices):
    output = []
    for i in range(tensor.size(0)):
        output += [tensor[i][indices[i]]]
    return torch.stack(output)
To implement the same function using TorchScript simply use the torch.jit.script decorator:
@torch.jit.script
def batch_gather_jit(tensor, indices):
    output = []
    for i in range(tensor.size(0)):
        output += [tensor[i][indices[i]]]
    return torch.stack(output)
On my tests this is about 10% faster.
But nothing beats manually batching your operations. A vectorized implementation in my tests is 100 times faster:
def batch_gather_vec(tensor, indices):
    shape = list(tensor.shape)
    flat_first = torch.reshape(
        tensor, [shape[0] * shape[1]] + shape[2:])
    offset = torch.reshape(
        torch.arange(shape[0]).cuda() * shape[1],
        [shape[0]] + [1] * (len(indices.shape) - 1))
    output = flat_first[indices + offset]
    return output
In the last lesson we talked about writing efficient PyTorch code. But to make your code run with maximum efficiency you also need to load your data efficiently into your device's memory. Fortunately PyTorch offers a tool to make data loading easy: the DataLoader. A DataLoader uses multiple workers to simultaneously load data from a Dataset and optionally uses a Sampler to sample data entries and form a batch.

If you can randomly access your data, using a DataLoader is very easy: you simply need to implement a Dataset class with __getitem__ (to read each data item) and __len__ (to return the number of items in the dataset) methods. For example, here's how to load images from a given directory:
import glob
import os

import cv2
import torch

class ImageDirectoryDataset(torch.utils.data.Dataset):
    def __init__(self, path, pattern):
        self.paths = list(glob.glob(os.path.join(path, pattern)))

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, index):
        # Read the image at the given index as a BGR array.
        return cv2.imread(self.paths[index], 1)
To load all jpeg images from a given directory you can then do the following:
dataloader = torch.utils.data.DataLoader(
    ImageDirectoryDataset("/data/imagenet", "*.jpg"), num_workers=8)
for data in dataloader:
    # do something with data
Here we are using 8 workers to simultaneously read our data from the disk. You can tune the number of workers on your machine for optimal results.
Using a DataLoader to read data with random access may be OK if you have fast storage or if your data items are large. But imagine having a network file system with a slow connection: requesting individual files this way can be extremely slow and would probably end up becoming the bottleneck of your training pipeline.

A better approach is to store your data in a contiguous file format which can be read sequentially. For example, if you have a large collection of images you can use tar to create a single archive and extract files from the archive sequentially in Python. To do this you can use PyTorch's IterableDataset. To create an IterableDataset class you only need to implement an __iter__ method which sequentially reads and yields data items from the dataset.
A naive implementation would look like this:
import tarfile

import cv2
import numpy as np
import torch

def tar_image_iterator(path):
    tar = tarfile.open(path, "r")
    for tar_info in tar:
        file = tar.extractfile(tar_info)
        content = file.read()
        # cv2.imdecode expects the raw bytes as a uint8 buffer.
        yield cv2.imdecode(np.frombuffer(content, np.uint8), 1)
        file.close()
        # Clear tarfile's member cache to keep memory bounded.
        tar.members = []
    tar.close()

class TarImageDataset(torch.utils.data.IterableDataset):
    def __init__(self, path):
        super().__init__()
        self.path = path

    def __iter__(self):
        yield from tar_image_iterator(self.path)
But there's a major problem with this implementation. If you try to use DataLoader to read from this dataset with more than one worker you'd observe a lot of duplicated images:
dataloader = torch.utils.data.DataLoader(TarImageDataset("/data/imagenet.tar"), num_workers=8) for data in dataloader: # data contains duplicated items
The problem is that each worker creates a separate instance of the dataset, and each starts from the beginning of the dataset. One way to avoid this is, instead of having one tar file, to split your data into num_workers separate tar files and load each with a separate worker:
class TarImageDataset(torch.utils.data.IterableDataset):
    def __init__(self, paths):
        super().__init__()
        self.paths = paths

    def __iter__(self):
        worker_info = torch.utils.data.get_worker_info()
        # For simplicity we assume num_workers is equal to the number of tar files.
        if worker_info is None or worker_info.num_workers != len(self.paths):
            raise ValueError("Number of workers doesn't match number of files.")
        # Note: get_worker_info() exposes the worker index as `id`.
        yield from tar_image_iterator(self.paths[worker_info.id])
This is how our dataset class can be used:
dataloader = torch.utils.data.DataLoader(
    TarImageDataset(["/data/imagenet_part1.tar", "/data/imagenet_part2.tar"]),
    num_workers=2)
for data in dataloader:
    # do something with data
We discussed a simple strategy to avoid the duplicated-entries problem. The tfrecord package uses slightly more sophisticated strategies to shard your data on the fly.
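One common on-the-fly strategy is round-robin sharding: every worker walks the same record stream but only yields every num_workers-th record, offset by its worker id. Here is a minimal, framework-free sketch of that idea (the list of integers stands in for a tar or tfrecord iterator; the function name is my own):

```python
def shard_iterator(records, worker_id, num_workers):
    # Yield only the records assigned to this worker:
    # record i goes to worker (i % num_workers).
    for i, record in enumerate(records):
        if i % num_workers == worker_id:
            yield record

# Toy record stream standing in for a tar/tfrecord iterator.
records = list(range(10))
shards = [list(shard_iterator(records, w, 3)) for w in range(3)]
print(shards)  # [[0, 3, 6, 9], [1, 4, 7], [2, 5, 8]]
```

Together the three shards cover every record exactly once, which is the property the DataLoader workers need.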
When using any numerical computation library such as NumPy or PyTorch, it's important to note that writing mathematically correct code doesn't necessarily lead to correct results. You also need to make sure that the computations are stable.
Let's start with a simple example. Mathematically, it's easy to see that
x * y / y = x for any non-zero value of y. But floating-point arithmetic has limited range: very small intermediate results underflow to zero and very large ones overflow to infinity, which can turn a mathematically well-defined expression into nan or inf. The softmax function is a classic example in PyTorch. A naive implementation exponentiates the logits directly:

import torch

def unstable_softmax(logits):
    exp = torch.exp(logits)
    return exp / torch.sum(exp)

print(unstable_softmax(torch.tensor([1000., 0.])).numpy())  # prints [nan, 0.]

Here exp(1000) overflows to infinity, so the result becomes nan. Softmax is invariant to shifting all logits by a constant c, since exp(x - c) / Σ exp(x - c) = exp(x) / Σ exp(x). Subtracting the maximum logit keeps every exponent non-positive and the computation stable:

import torch

def softmax(logits):
    exp = torch.exp(logits - torch.max(logits))
    return exp / torch.sum(exp)

print(softmax(torch.tensor([1000., 0.])).numpy())  # prints [1., 0.]

Now consider the cross entropy between a target distribution p and a predicted distribution q, defined as xe(p, q) = -Σ p_i log(q_i). So a naive implementation of the cross entropy would look like this:
def unstable_softmax_cross_entropy(labels, logits):
    logits = torch.log(softmax(logits))
    return -torch.sum(labels * logits)

labels = torch.tensor([0.5, 0.5])
logits = torch.tensor([1000., 0.])

xe = unstable_softmax_cross_entropy(labels, logits)
print(xe.numpy())  # prints inf
Note that in this implementation as the softmax output approaches zero, the log's output approaches infinity which causes instability in our computation. We can rewrite this by expanding the softmax and doing some simplifications:
def softmax_cross_entropy(labels, logits, dim=-1):
    scaled_logits = logits - torch.max(logits)
    normalized_logits = scaled_logits - torch.logsumexp(scaled_logits, dim)
    return -torch.sum(labels * normalized_logits)

labels = torch.tensor([0.5, 0.5])
logits = torch.tensor([1000., 0.])

xe = softmax_cross_entropy(labels, logits)
print(xe.numpy())  # prints 500.0
We can also verify that the gradients are also computed correctly:
logits.requires_grad_(True)
xe = softmax_cross_entropy(labels, logits)
g = torch.autograd.grad(xe, logits)[0]
print(g.numpy())  # prints [0.5, -0.5]
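The max-subtraction used in both stable implementations above is an instance of the log-sum-exp identity, log Σ exp(x_i) = m + log Σ exp(x_i - m) with m = max(x). It is framework-independent; this stdlib-only sketch (names are mine, not from the text) shows it working where the naive form overflows:

```python
import math

def logsumexp(xs):
    # Stable log(sum(exp(x))) via the max-shift identity.
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def naive_logsumexp(xs):
    # Overflows (math.exp raises OverflowError) for large inputs.
    return math.log(sum(math.exp(x) for x in xs))

print(logsumexp([1000.0, 0.0]))  # 1000.0, where the naive version fails
```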
By default, tensors and model parameters in PyTorch are stored in 32-bit floating point precision. Training neural networks with 32-bit floats is usually stable and doesn't cause major numerical issues; however, neural networks have been shown to perform quite well in 16-bit and even lower precisions. Computation in lower precision can be significantly faster on modern GPUs. It also has the extra benefit of using less memory, enabling training of larger models and/or larger batch sizes, which can boost performance further. The problem, though, is that training in 16 bits often becomes very unstable, because the precision is usually not enough to perform some operations, like accumulations.
To help with this problem PyTorch supports training in mixed precision. In a nutshell mixed-precision training is done by performing some expensive operations (like convolutions and matrix multplications) in 16-bit by casting down the inputs while performing other numerically sensitive operations like accumulations in 32-bit. This way we get all the benefits of 16-bit computation without its drawbacks. Next we talk about using Autocast and GradScaler to do automatic mixed-precision training.
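To see why 16-bit accumulation is the sensitive part, note that half precision has an 11-bit significand, so near 2048 the spacing between representable values is 2 and adding 1.0 has no effect. A quick check, using NumPy's float16 as a stand-in (my choice here, not the text's):

```python
import numpy as np

# fp16 has an 11-bit significand, so near 2048 the gap between
# representable values is 2: adding 1.0 is rounded away every time.
acc = np.float16(2048.0)
for _ in range(100):
    acc = np.float16(acc + np.float16(1.0))

print(acc)  # 2048.0: the accumulator never moved
```

This is exactly the failure mode that keeping accumulations in 32-bit avoids.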
autocast helps improve runtime performance by automatically casting down data to 16-bit for some computations. To understand how it works let's look at an example:
import torch

x = torch.rand([32, 32]).cuda()
y = torch.rand([32, 32]).cuda()

with torch.cuda.amp.autocast():
    a = x + y
    b = x @ y

print(a.dtype)  # prints torch.float32
print(b.dtype)  # prints torch.float16
Note that both x and y are 32-bit tensors, but autocast performs the matrix multiplication in 16-bit while keeping the addition in 32-bit. What if one of the operands is already in 16-bit?
import torch

x = torch.rand([32, 32]).cuda()
y = torch.rand([32, 32]).cuda().half()

with torch.cuda.amp.autocast():
    a = x + y
    b = x @ y

print(a.dtype)  # prints torch.float32
print(b.dtype)  # prints torch.float16
Again, autocast casts the 32-bit operand down to 16-bit to perform the matrix multiplication, but it doesn't change the addition operation. By default, adding tensors of different precisions in PyTorch results in a cast to the higher precision.
In practice, you can trust autocast to do the right casting to improve runtime efficiency. The important thing is to keep all your forward-pass computations under the autocast context:
model = ...
loss_fn = ...

with torch.cuda.amp.autocast():
    outputs = model(inputs)
    loss = loss_fn(outputs, targets)
This may be all you need if you have a relatively stable optimization problem and use a relatively low learning rate. Adding this one line of extra code can cut your training time by up to half on modern hardware.
As we mentioned at the beginning of this section, 16-bit precision may not always be enough for some computations. One particular case of interest is representing gradient values, a great portion of which are usually small. Representing them with 16-bit floats often leads to underflow (i.e. they'd be represented as zeros), which makes training neural networks very unstable.
GradScaler is designed to resolve this issue. It takes your loss value and multiplies it by a large scale factor, inflating the gradient values and therefore making them representable in 16-bit precision. It then scales the gradients back down during the gradient update to ensure parameters are updated correctly. That is generally what GradScaler does. But under the hood GradScaler is a bit smarter than that: inflating the gradients may actually result in overflows, which is equally bad. So GradScaler monitors the gradient values, and if it detects overflows it skips the update and scales down the scale factor according to a configurable schedule. (The default schedule usually works, but you may need to adjust it for your use case.)
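The loss-scaling idea itself is plain arithmetic and can be sketched without any framework. This toy model is entirely my own; real fp16 flushes much smaller magnitudes (around 6e-8) than the exaggerated threshold used here:

```python
def flush_to_zero(g, tiniest=1e-4):
    # Toy model of limited-precision storage: magnitudes below
    # `tiniest` are flushed to zero, like fp16 underflow.
    return 0.0 if abs(g) < tiniest else g

grad = 3e-5                           # true gradient, too small to store
lost = flush_to_zero(grad)            # 0.0: the update is silently dropped

scale = 1024.0                        # a power of two, so scaling is exact
scaled = flush_to_zero(grad * scale)  # survives storage
recovered = scaled / scale            # unscale before the optimizer step
print(lost, recovered)                # 0.0 3e-05
```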
Using GradScaler is very easy in practice:
scaler = torch.cuda.amp.GradScaler()
loss = ...
optimizer = ...  # an instance of torch.optim.Optimizer

scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
Note that we first create an instance of GradScaler. In the training loop we call scaler.scale to scale the loss before calling backward, which produces inflated gradients. We then use scaler.step, which (may) update the model parameters, and finally call scaler.update, which adjusts the scale factor if needed. That's all!
The following sample code showcases mixed-precision training on a synthetic problem of learning to generate a checkerboard from image coordinates. You can paste it into a Google Colab notebook, set the backend to GPU, and compare the single- and mixed-precision performance. Note that this is a small toy example; in practice, with larger networks, you may see larger performance boosts from mixed precision.
import time

import matplotlib.pyplot as plt
import torch

def grid(width, height):
    hrange = torch.arange(width).unsqueeze(0).repeat([height, 1]).div(width)
    vrange = torch.arange(height).unsqueeze(1).repeat([1, width]).div(height)
    output = torch.stack([hrange, vrange], 0)
    return output

def checker(width, height, freq):
    hrange = torch.arange(width).reshape([1, width]).mul(freq / width / 2.0).fmod(1.0).gt(0.5)
    vrange = torch.arange(height).reshape([height, 1]).mul(freq / height / 2.0).fmod(1.0).gt(0.5)
    output = hrange.logical_xor(vrange).float()
    return output

# Note the inputs are grid coordinates and the target is a checkerboard.
inputs = grid(512, 512).unsqueeze(0).cuda()
targets = checker(512, 512, 8).unsqueeze(0).unsqueeze(1).cuda()
class Net(torch.jit.ScriptModule):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Conv2d(2, 256, 1),
            torch.nn.BatchNorm2d(256),
            torch.nn.ReLU(),
            torch.nn.Conv2d(256, 256, 1),
            torch.nn.BatchNorm2d(256),
            torch.nn.ReLU(),
            torch.nn.Conv2d(256, 256, 1),
            torch.nn.BatchNorm2d(256),
            torch.nn.ReLU(),
            torch.nn.Conv2d(256, 1, 1))

    @torch.jit.script_method
    def forward(self, x):
        return self.net(x)
net = Net().cuda()
loss_fn = torch.nn.MSELoss()
opt = torch.optim.Adam(net.parameters(), 0.001)

start_time = time.time()
for i in range(500):
    opt.zero_grad()
    outputs = net(inputs)
    loss = loss_fn(outputs, targets)
    loss.backward()
    opt.step()
print(loss)
print(time.time() - start_time)

plt.subplot(1, 2, 1); plt.imshow(outputs.squeeze().detach().cpu())
plt.subplot(1, 2, 2); plt.imshow(targets.squeeze().cpu())
plt.show()
net = Net().cuda()
loss_fn = torch.nn.MSELoss()
opt = torch.optim.Adam(net.parameters(), 0.001)
scaler = torch.cuda.amp.GradScaler()

start_time = time.time()
for i in range(500):
    opt.zero_grad()
    with torch.cuda.amp.autocast():
        outputs = net(inputs)
        loss = loss_fn(outputs, targets)
    scaler.scale(loss).backward()
    scaler.step(opt)
    scaler.update()
print(loss)
print(time.time() - start_time)

plt.subplot(1, 2, 1); plt.imshow(outputs.squeeze().detach().cpu().float())
plt.subplot(1, 2, 2); plt.imshow(targets.squeeze().cpu().float())
plt.show()
I am looking to learn C++, I have no prior knowledge of any other language. I've tried using the 'Learn C++ in 21 Days' book which is on the internet but it does not really explain anything. I have made the 'Hello World' program but I cannot compile it because I always get errors.
Here is the code I use:
#include <iostream>

int main();

int main()
{
    cout << "Hello World!\n";
    return 0;
}
The guide doesn't specify any particular software or compiler it just says I need a raw text editor (notepad) and some sort of compiler. I downloaded 'Cygwin' and use it to compile but I always get the following messages:
hello.cpp: In function 'int main()':
hello.cpp:6: error: 'cout' undeclared (first use of this function)
hello.cpp:6: error: (Each undeclared identifier is reported only once for each function it appears in.)
The code I used is copy and pasted straight from the guide but it still doesn't work. If anyone could explain what is going wrong I would really appreciate it. I am more annoyed at the fact that following the guide word for word doesn't work and I would be even more happy if someone could point me in the right direction or show me a working guide which can help me get started on learning this code.
This document is placed in the public domain.
Abstract
This document can be considered a companion to the tutorial. It shows how to use Python, and even more importantly, how not to use Python.
Language Constructs You Should Not Use¶
While Python has relatively few gotchas compared to other languages, it still has some constructs which are only useful in corner cases, or are plain dangerous.
from module import *¶
Inside Function Definitions¶

Using from module import * inside function definitions is invalid. Even in versions of Python that accepted it, it made function execution slower, because the compiler could not be certain which names were local and which were global. In Python 2.1 this construct causes warnings, and sometimes even errors.
At Module Level¶

While it is valid to use from module import * at module level, it is usually a bad idea. For one thing, you lose an important property Python otherwise has: you can know where each top-level name is defined by a simple "search" in your favourite editor. It also puts you at risk of name clashes: a frequently asked question is why code calling the builtin open() suddenly misbehaves after from os import *, which shadows the builtin with os.open().
When It Is Just Fine¶
There are situations in which
from module import * is just fine:
- The interactive prompt. For example,
from math import * makes Python an amazing scientific calculator.
- When extending a module in C with a module in Python.
- When the module advertises itself as
from module import * safe.
Unadorned exec, execfile() and friends¶

The word "unadorned" refers to use without an explicit dictionary, in which case these constructs evaluate code in the current environment. This is dangerous for the same reasons from module import * is: it might step over variables you are counting on and mess things up for the rest of your code. Simply do not do that.
from module import name1, name2¶
This is a “don’t” which is much weaker than the previous “don’t”s, but is still something you should not do without good reason. The reason it is usually a bad idea is that you suddenly have an object which lives in two separate namespaces; when the binding in one namespace changes, the binding in the other does not, so the two can fall out of sync.
except:¶

Python has the except: clause, which catches all exceptions. Since every error in Python raises an exception, using except: can make many programming errors look like runtime problems, which hinders the debugging process.
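As a tiny illustration of that bug class, here a misspelled call (opne, the HOWTO's classic example of the typo) raises NameError, which the bare except: swallows and mislabels as a file problem. The surrounding function is my own sketch:

```python
def read_first_line(path):
    try:
        return opne(path).readline()   # typo: should be open()
    except:
        return "could not open file!"  # the NameError is swallowed too

print(read_first_line("some-file.txt"))  # prints the misleading message
```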
The best version of this function uses the
open() call as a context
manager, which will ensure that the file gets closed as soon as the
function returns:
def get_status(file):
    with open(file) as fp:
        return fp.readline()
Using the Batteries¶
Using Backslash to Continue Statements¶
import "github.com/r9y9/gossp/stft"
Package stft provides support for Short-Time Fourier Transform (STFT) Analysis.
STFT represents Short Time Fourier Transform Analysis.
New returns a new STFT instance.
DivideFrames returns overlapping divided frames for STFT.
FrameAt returns frame at specified index given an input signal. Note that it doesn't make copy of input.
func (s *STFT) ISTFT(spectrogram [][]complex128) []float64
ISTFT performs inverse STFT signal reconstruction and returns the reconstructed signal.
NumFrames returns the number of frames that will be analyzed in STFT.
func (s *STFT) STFT(input []float64) [][]complex128
STFT returns complex spectrogram given an input signal.
Package stft imports 2 packages. Updated 2016-07-19. This is an inactive package (no imports and no commits in at least two years).
JBoss.org Community Documentation
JBoss.org Community Documentation
Presents jQuery JavaScript framework functionality
Able to apply onto JSF components and other DOM objects.
Works without conflicts with prototype.js library
<rich:jQuery> can be used in two main modes:
as a one-time query applied immediately or on a document ready event
as a JavaScript function that can be invoked from the JavaScript code
The mode is chosen with "timing" attribute that has the following options:
immediate — applying a query immediately;
onload — applying a query when a document is loaded;
onJScall — applying a query by invoked JavaScript function defined with the "name" attribute.
Definition of the "name" attribute is mandatory when the value of "timing" attribute is "onJScall". If the "name" attribute is defined when "timing" value equals to "immediate" or "onload", the query is applied according to this value, but you still have an opportunity to invoke it by a function name.
The "selector" attribute defines an object or a list of objects. The query is defined with the "query" attribute.
Here is an example of how to highlight odd rows in a table:
<style>
.odd {background-color: #FFC;}
</style>
<rich:dataTable id="customList" ...>
...
</rich:dataTable>
<rich:jQuery
The "selector" attribute uses defined by w3c consortium syntax for CSS rule selector with some jQuery extensions
Those are typical examples of using selector in the <rich:jQuery> component.
In addition, RichFaces allows using either a component id or a client id if you apply the query to a JSF component. When you define a selector, RichFaces examines its content and tries to replace an id defined in the selector with the corresponding component's client id, if one is found.
For example, you have the following code:
<h:form
<h:panelGrid
<h:graphicImage
<h:graphicImage
</h:panelGrid>
</h:form>
The actual id of the
<h:panelGrid>
table in the browser DOM is
"form:menu". However, you still can reference to images inside this table using the following selector:
...
<rich:jQuery
...
You can define the exact id in the selector if you want. The following code references the same set of DOM objects:
...
<rich:jQuery
...
Pay attention to double slashes that escape a colon in the id.
When the "name" attribute is defined, <rich:jQuery> generates a JavaScript function that can be used from any place in the JavaScript code on a page.
Here is an example of how to enlarge a picture smoothly on a mouseover event and return it to normal size on mouseout:
...
<h:graphicImage
<h:graphicImage
...
<rich:jQuery
<rich:jQuery
...
The generated JavaScript function can take two parameters. The first parameter is a replacement for the selector attribute; thus, you can share the same query, applying it to different DOM objects. You can use a literal value or a direct reference to an existing DOM object. The second parameter can be used to pass a specific value into the query. The JSON syntax is used for the second parameter, and the "param." namespace is used for referencing data inside the parameter value.
<rich:jQuery> adds styles and behavior to DOM objects dynamically. This means that if you replace something on a page during an Ajax response, the applied artifacts are overwritten, but you are allowed to apply them again after the Ajax response is complete.
Usually, this can be done by re-rendering the <rich:jQuery> components in the same Ajax interaction as the components these queries are applied to. Note that queries with the "timing" attribute set to "onload" are not invoked even if the query is re-rendered, because the DOM document is not fully reloaded during an Ajax interaction. If you need to re-apply a query with the "onload" value of the "timing" attribute, define the "name" attribute and invoke the query by name in the "oncomplete" attribute of the Ajax component.
RichFaces includes the jQuery JavaScript framework. You can use the features of jQuery directly, without defining the <rich:jQuery> component on a page, if that is more convenient for you. To start using the jQuery features on the page, include the library into a page with the following code:
...
<a4j:loadScript
...
Refer to the jQuery documentation for the right syntax. Remember to use the jQuery() function instead of $(), since jQuery is configured to work without conflicts with prototype.js.
Table of <rich:jQuery> attributes.
Visit the jQuery page at RichFaces LiveDemo for examples of component usage and their sources.
More information about the jQuery framework can be found in the official jQuery documentation.
See also:
"Using jQuery with Other Libraries" in jQuery official documentation.
local_time_hw_device Struct Reference
#include <
local_time_hal.h
>
Detailed Description
Definition at line 57 of file local_time_hal.h .
Field Documentation
A method used to collect low-level sync data in a lab environment. Most HAL implementations will simply set this member to NULL, or return -EINVAL to indicate that this functionality is not supported. Production HALs should never support this method.
Definition at line 98 of file local_time_hal.h .
Returns the nominal frequency (in hertz) of the system wide local time counter
Definition at line 77 of file local_time_hal.h .
Returns the current value of the system wide local time counter
Definition at line 70 of file local_time_hal.h .
Sets the HW slew rate of oscillator which drives the system wide local time counter. On success, platforms should return 0. Platforms which do not support HW slew should leave this method set to NULL.
Valid values for rate range from MIN_INT16 to MAX_INT16. Platform implementations should attempt to map this range linearly to the min/max slew rate of their hardware.
Definition at line 89 of file local_time_hal.h .
The documentation for this struct was generated from the following file:
- hardware/libhardware/include/hardware/ local_time_hal.h
The Nomad Web UI offers a web experience for inspecting a Nomad cluster. It's built into the Nomad binary and is served alongside the API. With zero additional configuration, you can use the Web UI instead of the CLI to inspect cluster state and submit and manage jobs.
»Open with a browser
The Nomad Web UI is served alongside the API. If you visit the Nomad server
address in a web browser, you will be redirected to the Web UI, which is served
under
/ui. If you are unsure what port the Nomad HTTP API is running on, try
the default port:
4646.
The first page you arrive at is a listing of all Jobs for the default namespace.
The entire Web UI sitemap is documented as an API.
»Open from a terminal
In order to make it as seamless as possible to jump between the CLI and UI, the
Nomad CLI has a
ui subcommand. This command can take any identifier and open
the appropriate web page.
»Begin at a job
If your cluster contained a job named "redis-job", you could open the UI directly at the job page with the following command:
$ nomad ui redis-job
»Begin at an allocation
You can also use the
nomad ui command to open directly to a specific
allocation. This example opens an allocation ID starting with "d4005969".
$ nomad ui d4005969 Opening URL ""
By default, all features—read and write—are available to all users of the Web UI. By using Access Control Lists, it is possible to lock down what users get access to which features.
»Access an ACL-enabled UI
»Use the Nomad UI without a token
When a user browses the Web UI without specifying an access control token, they assume the rules of the anonymous policy. Since Nomad ACLs use a default-deny model, if ACLs are enabled and no anonymous policy is authored, the Web UI will show unauthorized messages on every page other than the settings page.
»Provide a token to browse the UI
From the ACL Tokens page, which is accessible from the top-right menu, you can set your access control token via the token Secret ID.
This token is saved to local storage and can be manually cleared from the ACL Tokens page.
SIMD details will be forthcoming from the .NET blog, but Soma's already showed some sample code, and everything's available, so until those details are available here are directions about how to kick the SIMD tires:
- Go get RyuJIT CTP3 and install it (requires x64 Windows 8.1 or Window Server 2012R2, same as before)
- Set the "use RyuJIT" environment variable: set COMPLUS_AltJit=*
You can take a gander at some sample code sitting over here. It's pretty well commented, at least the VectorFloat.cs file in the Mandelbrot demo is. I spent almost as much time making sure comments were good as writing those 24 slightly different implementations of the Mandelbrot calculation. Take a look at the Microsoft.Numerics.Vectors namespace: there's some stuff in Vector<T>, some stuff in VectorMath, and some stuff in plain Vector. And there are also "concrete types" for 2-, 3-, and 4-element floats.
One quick detail, for those of you that are trying this stuff out immediately: our plan (and we've already prototyped this) is to have Vector<T> automatically use AVX SIMD types on hardware where the performance of AVX code should be better than SSE2. Doing that properly, however, requires some changes to the .NET runtime. We're only able to support SSE2 for the CTP release because of that restriction (and time wasn't really on our side, either).
I’ll try to answer questions down below. I know this doesn’t have much detail. We’ve been really busy trying to get CTP3 out the door (it was published about 45 seconds before Jay’s slide went up). Many more details are coming. One final piece of info: SIMD isn’t the only thing we’ve been doing over the past few weeks. We also improved our benchmarks a little bit:
The gray bars are standard deviation, so my (really horrible) understanding of statistics means that deltas within the gray bars are pretty much meaningless. I'm not sure if we really did regress mono-binarytrees or not, but I'm guessing probably not. I do know that we improved a couple of things that should help mono-fasta and mono-knucleotide, but the mono-chameneos-redux delta was unexpected.
Anyway, I hope this helps a little. I'm headed home for the evening, as my adrenaline buzz from trying to get everything done in time for Jay's talk is fading fast, and I want to watch Habib from the comfort of my couch, rather than be in traffic. Okay, I'm exaggerating: my commute is generally about 10 minutes, but I'm tired, and whiny, and there are wolves after me.
-Kev
The SIMD stuff is very exciting. If I can find some time I'll try adding some experimental support for it in MonoGame.
@Tom Spilman: I'd love to see this stuff supported in Mono!
Sweet jesus, I think I might faint from excitement!
Great news! Very exiting times!
@Nicholas: I know that RyuJIT with SIMD is exciting. Please just breathe deeply, and perhaps lie down. I don't want any of RyuJIT's rabid fans injuring themselves 🙂
Wicked! Thank you!
It is not SIMD support itself that excites me the most but the fact that MS has started paying attention to the quality of the machine code the CLR generates.
Kevin, is it possible for the JIT to treat readonly fields as JIT constants? For example, when non-inlined methods (not ctors) are jitted. That would allow skipping the jitting of whole branches and enable other types of optimizations.
If it is not possible right now some mechanism (an attribute?) could be introduced that new code could use.
What am I missing. I install that package and none of the namespaces start with Microsoft. They all start with System e.g System.Numerics.Vector<T>. 1.0.1-Beta is the right version, isn't it?
@OmariO: const fields are generally the only things that will benefit from that sort of work. If it can be const, it should be const. readonly could allow us to do some type-specific optimizations & potentially better inlining (of methods off the readonly field), but not much else. It's still a reasonable idea, though…
@Keith, yes that's the right one. Once you've installed that, you still need to set the environment variables & use the type in your class constructor. Check out the sample for more details: code.msdn.microsoft.com/SIMD-Sample-f2c8c35a
tirania.org/…/Nov-03.html
Awesome! I'm very happy to see SIMD in .Net. I'm super-excited that MS is making an effort to improve the code that JIT'r produces.
Here's a test for you….write a function and mark it as unsafe. Make sure to not use anything that requires garbage collection. Have it do something that is computationally expensive and time consuming. Like a function that takes an matrix and computes the Levenshtein distance between all of the rows. Write exactly the same thing in C++. Compare them. Compare their speed. Compare the emitted assembly. In most cases .Net is slower and does more. There's often no reason it couldn't have produced exactly the same, fast, code.
One suggestion on the blog: while it's cool to see the difference between CTP2 and CTP3, what really matters is how RyuJIT compares to JIT64, so we can see how much better it is than its predecessor.
I see 2 useful cases where readonly fields as JIT constants could be beneficial:

1.

{
    static readonly bool etwLoggingEnabled = Configuration.GetEtwLoggingEnabled();

    public void SomeMethod()
    {
        // begin region
        if (etwLoggingEnabled)
        {
            logger.SomeMethodCalled();
        }
        // end
        …
    }
}

JIT could potentially completely skip jitting that region.
2.

class Buffer
{
    readonly int[] buffer;
    readonly int max; // must be <= buffer.Length when the methods below are jitted
    uint pointer = 0;

    public void WriteNext(int value)
    {
        …
        buffer[pointer % max] = value;
        // or buffer[pointer % buffer.Length] = value;
        pointer++;
    }

    public void Loop(int value)
    {
        for (uint i = …; … ; i++)
        {
            buffer[i % max] = value;
            // or buffer[i % buffer.Length] = value;
        }
    }
}

If the reference to the array is a JIT-time constant then buffer.Length is a JIT constant too. This would let the JIT eliminate array bounds checks and even replace % with & if, at JIT time, max (or buffer.Length) is a power of two.
@Laci: yup, we were very well aware of that work. We actually talked to the Xamarin folks about the design (as well as MVPs) before we finalized it. Our design is significantly less linked at the hip with x86 than Mono's design.
@David: You probably won't believe me, but there's a fair bit of code coming out of RyuJIT that is better than what the C++ compiler emits. It's not a 50-50 thing (admittedly, we are slightly worse off than the C++ compiler, but it's not a slam dunk like it once was), but having more language constraints results in some scenarios where things work better.
@Omario_O: Ah: write-once (readonly really means write-once) turned into compile time constants: kind of cool. We can definitely take a look at it from that angle!
This has been around for a decade in C++. Plus, true cross-platform portability (C++ again). Did I mention the ability to do many useful things you cannot do in C# and you can in C++???
@Amity Games: By "this" are you referring to the new Vector<T> type that is sized according to the width of you machine's CPU? Because if so, then you are wrong: C++ doesn't have this ability. If you're referring to a bunch of other stuff, then I'll agree with you. C++ can do many useful things you cannot do in C#, just like C# can do many useful things that you cannot do in C++. I'm a big fan of choosing the right tool for the job. I think choosing the right tool for the job is far more important that trolling. But that's just me 😛
Just tested RyuJIT and the SIMD Samples, ok. Now I've already a BIG Math Library with lots of functions (plus almost all Vertex/PixelShader syntax reproduced) and almost all my struct are SequentialLayout like the "internal Register" , How to let RyuJIT generate SIMD on my struct/class/Methods instead of the one in BCL.Simd ? Is that possible or I need to extend Vector<T> ? And with extension methods can RyuJit generate SIMD ?
I'm experimenting with vectorization of operations on large arrays, and I find the performance of Vector<T>.CopyTo() isn't very good at all. Here's a snippet of my sample code to do vectorization:
var scaledData1 = new float[rawData.Length];
var scaledData2 = new float[rawData.Length];
var stopwatch = Stopwatch.StartNew();
for (int i = 0; i < rawData.Length; i++)
{
scaledData1[i] = rawData[i] * Scale + Offset;
}
stopwatch.Stop();
Console.WriteLine("Regular scaling takes " + stopwatch.ElapsedMilliseconds);
var vecScaleFactor = new Vector<float>(Scale);
var vecOffset = new Vector<float>(Offset);
stopwatch.Reset();
stopwatch.Start();
for (int i = 0; i < rawData.Length; i += 4)
{
var simdVector = new Vector<float>(rawData, i);
var result = (simdVector * vecScaleFactor) + vecOffset;
//scaledData2[i] = result[0];
//scaledData2[i + 1] = result[1];
//scaledData2[i + 2] = result[2];
//scaledData2[i + 3] = result[3];
result.CopyTo(scaledData2, i);
}
Console.WriteLine("Vector scaling takes " + stopwatch.ElapsedMilliseconds);
This approach takes longer than just processing the array using regular (non-SIMD) scaling. However if I comment out the CopyTo() and uncomment the copy of the individual registers then the performance is better than the non-SIMD approach – usually taking about 2/3 to 1/2 the time.
Another concerning issue is that I seem to get different results when SIMD is not enabled and I'm running a Release config. One would hope that the code behaves the same whether SIMD is enabled or not. Of course, the performance of using Vector<T> when SIMD is not enabled is so much worse that you should use VectorMath.IsHardwareAccelerated to pick the right code path. All of this plumbing work begs for having auto-vectorization built into the C# compiler. 🙂
@MrReset: We've had a number of requests for automatically detecting (or allowing attributes) user types and promoting them to vector types. The primary problem with this is the amount of JIT-time validation we'd need to do: we'd need to make sure that element layout is compatible, validate which methods correspond to which hardware operations, etc… It's an interesting problem that lands somewhere between full auto-vectorization, and what we have today. The approach that is likely to be most achievable in the short term is for RyuJIT to do a great job of optimizing the case where you have struct that contains a Vector4f (or other Vector type), and you can just modify the internal representation to forward to the contained Vector type. Does that make sense?
@Keith:
Most likely the results differ because Intel's 32-bit float SIMD implementations perform their computations at 32-bits precision, whereas the default behavior of their scalar unit is to take a 32-bit float, convert internally to 80-bit extended precision, perform the calculation at the 80-bit precision level, and then either truncate or round the result and return as a 32-bit value.
"I'm finding the performance of Vector<T>.CopyTo() isn't very good at all"
CopyTo isn't an intrinsic in the released version. I find it hard to believe that it will stay like this, it's more likely that they didn't have time to finish it. Vector<T>'s constructor is an intrinsic so this kind of stuff is certainly doable.
Yes, Vector<T>.CopyTo() isn't an intrinsic yet in CTP due to lack of time. It will definitely be made into an intrinsic.
What exactly is meant by "instrinsic"? Does this mean there is canned code that doesn't need to be jitted?
@Keith: "Intrinsic" means "built-in": the function isn't really a function, but is something that the JIT understands and can directly generate optimal code for, without generating a function call. Sorry for the compiler-lingo 🙂
Also YOU HAVE A BUG! Please don't use the constant '4' in your loop! Yes, it will work today, but it will stop working when you run it on an AVX machine, when using an AVX-capable JIT. This line:
for (int i = 0; i < rawData.Length; i += 4)
should be
for (int i = 0; i < rawData.Length; i += Vector<float>.Length)
And actually, you either need to ensure that rawData.Length is a multiple of Vector<float>.Length, or you're going to walk off the end of the array, so you'll need to handle the final N elements separately…
@KevinFrei, yeah, I know about the issue with using a const 4. The length was 2 when I used type double. 🙂 The point was to do a quick demonstration for our engineering team. In fact, I was using Vector<float>.Length along with result.CopyTo(scaledData2, i). That worked succinctly and "seemed" to be length independent (see below). But when I found the perf was worse, I just hacked the register copy (didn't want to add another inner loop to handle the length independence). So when CopyTo() is made intrinsic, this should simplify to the length-independent:
<pre>
for (int i = 0; i < rawData.Length; i += Vector<float>.Length)
{
var simdVector = new Vector<float>(rawData, i);
var result = (simdVector * vecScaleFactor) + vecOffset;
result.CopyTo(scaledData2, i);
}
</pre>
Well, except for the case where rawData.Length is not a multiple of vector length. Hopefully the Vector<T>( T[] values, int index) and CopyTo(T[] array, int index) can handle the final N elements. Hmm, just tried it and they don't! Doh! It seems like these two methods could be made smarter i.e. the ctor should only fill registers up to the end of array passed in. And CopyTo() should copy register values up to the end of the array passed in and not beyond. Otherwise, we could make sure our array size align on Vector<T>.Length boundaries but that is a major spew. Or handle the last loop iteration doing a manual register copy inside a loop. That's a smaller spew. Hmm, no wonder folks would prefer that the compiler do this work for them. 😉
BTW, thanks for the definition of "intrinsic".
BTW, if I change the code to be truly length independent like so:
Vector<float> vector;
int vectorLength = Vector<float>.Length;
float[] temp = new float[vectorLength];
for (int i = 0; i < rawData.Length; i += Vector<float>.Length)
{
int numElements = rawData.Length - i;
if (numElements < vectorLength)
{
for (int j = 0; j < numElements; j++)
{
temp[j] = rawData[i + j];
}
vector = new Vector<float>(temp);
}
else
{
numElements = vectorLength;
vector = new Vector<float>(rawData, i);
}
var result = (vector * vecScaleFactor) + vecOffset;
for (int j = 0; j < numElements; j++)
{
scaledData2[i + j] = result[j];
}
}
My performance is often worse than the non-vectorized loop. So unless the Vector<T> ctor and CopyTo() methods are updated to handle lengths less than current number of registers, the only performant approach I can see is to ensure my arrays are always a multiple of vector length and to ignore any padding at the end of the array.
This sort of vectorization has to be very common, right? Surely there is some canonical implementation documented somewhere?
Another thought. Ideally I'd like Vector to just handle this for me. It might very handy to have method like:
static void Vectorize<T>(T[] srcArray, T[] dstArray, int index, Func<Vector<T>, Vector<T>> operation)
Then my loop becomes:
for (int i = 0; i < rawData.Length; i += Vector<float>.Length)
{
Vector<float>.Vectorize(rawData, scaledData, i, vec => vec * vecScaleFactor + vecOffset);
}
Even more ideally, Vector<T> could handle type conversions. For instance, AD converter data is stored as integer values of varying widths. To convert the digital quantization value to its analog equivalent you multiply the integer value by a floating point scale factor and then add a floating point offset to compute the final value, which is a floating point number. Often the quantization value bit width of the ADC is small – say 8 to 16 bits. If you have digitized lots of data (GBs) it is more efficient to store the data in a byte or short array than in a double array. Hence the desire to do Vector<byte> * Vector<double> + Vector<double> gives Vector<double>.
"My performance is often worse than the non-vectorized loop."
There's really no point in talking about the performance of this kind of code until CopyTo becomes an intrinsic. Extracting the elements of a vector to store them into the float array has a significant performance penalty.
"This sort of vectorization has to be very common, right? Surely there is some canonical implementation documented somewhere?"
The usual approach is to have a second scalar loop that deals with the remaining array elements. A native C/C++ compiler (VC++ for example) generates something like the following code:
int i = 0;
for (; i <= rawData.Length - Vector<float>.Length; i += Vector<float>.Length) {
var v = new Vector<float>(rawData, i);
v = v * scale + offset;
v.CopyTo(rawData, i);
}
for (; i < rawData.Length; i++) {
rawData[i] = rawData[i] * scale[0] + offset[0];
}
"It might very handy to have method like:…"
Probably not, at least in the case of " * scale + offset". The delegate call would cost you an arm and a leg in such cases. This would work properly only if they ever decide to inline delegate calls.
Using vectors for the scalar loop like you are trying to do is not ideal, because packing scalars into a vector is inefficient, just as extracting values from a vector is inefficient.
"Even more ideally, Vector<T> could handle type conversions."
That would be cool but it's quite tricky. For example a Vector<int> could be converted to a Vector<float> but what about a Vector<short> to Vector<float>? A Vector<short> could have 8 elements and a Vector<float> could have only 4.
@MikeDanes – thanks for the reply! All good points for me to consider.
Kevin, when you say add a reference in the class constructor, are you talking about a static method constructor or the parameterless constructor?
@Keith & Mike:
We're prepping an update that includes an intrinsified (that's clearly not a real word) version of .CopyTo. The edge-of-the-array cases are still being discussed. I really like the Vector<T>.Vectorize usability, but it would require a tremendous amount of work to get it all to actually work properly. I like to think most of it could be done at compile time, not JIT/Runtime, but we're just really getting started here…
@Rob:
class MyClass {
// This here is the class constructor:
static MyClass() { Vector<int> foo = Vector<int>.Zero; }
}
Does that make it clear?
Is it possible to do shuffle operations with this? I was looking to port a series of fast Y'CbCr (Y'UV) packed/planar and colorspace conversion functions I wrote with SSE2/AVX2 intrinsics in C++. They can convert a 1080p 24bpp framebuffer nearly as fast as it can be copied (sans conversion) with memcpy. Without shuffles, it'd be many times slower.
I've written a research video codec in C# that I'd like to actually be able to use one day. Yea, I'm a glutton for punishment, apparently. ;P
jclary – I would love to see support for shuffles in our vector types. I think that for the fixed-size types (e.g. Vector4), the API design could be fairly straightforward (e.g. Vector4.WZYX() might reverse the elements). However it would be more challenging to design a general purpose API for shuffling Vector<T>. There is an issue on the corefx github repo: github.com/…/1168 that you could add comments to – and I would especially appreciate hearing your specific requirements.
Hello speed geeks.
Is there a roadmap for incorporating the SIMD support into the .Net runtimes and native support in Visual Studio?
Cheers,
Rob
https://blogs.msdn.microsoft.com/clrcodegeneration/2014/04/03/ryujit-ctp3-how-to-use-simd/
Well, I came back yesterday from a weekend at my girlfriend's city and the GIMP bug was already solved! There are great people in the free software community.
Today.
Obviously I am not going to give a final answer to a question like which one is faster. It depends on what you are programming and its requirements, and if you need maximum speed you may need to use C++, C or even assembler. But I want to share my little experiment with you.
I wrote a small C++ library for handling tables of data of diverse types. I did it aiming for clarity and usability, and so I used a lot of inheritance and virtual methods. I recognize efficiency was not my priority.
Then I needed to convert ASCII files of tabbed data in different formats to mine. It would have required writing too much code in C++ (and, I must confess, I wanted to learn Python), so I wrote a Python script to do it (with a Gtk interface thanks to lgs).
Fascinated by the simplicity of Python code (as I told you on the 29th), I decided to rewrite in Python a feature selection algorithm I had previously written in C++. Today I decided to test their speed, and these are the results:
Well, it is not a very scientific experiment, and I was unfair to C++ because it loads the table 32 times (which takes just a few seconds). But, hey! I expected C++ to win by a large margin.
Good news (coding is simpler now) and bad news (it took a long time to write the C++ lib) for me. But definitely a BIG surprise!
I am a bit upset with the media saying that we, the free software advocates, are creating viruses (MyDoom). There is no doubt someone wants us to be seen as dangerous terrorists.
SCO, and all companies that try to make money by suing other companies instead of really producing things, have many enemies. Why don't they say that IBM workers are developing viruses? Is it because IBM can take action against being defamed? Should we take legal action against the media defaming us?
The virus was probably created by one person or two. Why blame an entire community for that?
If I knew the person who created MyDoom, I would tell him/her clearly that s/he is doing much harm to free software, that s/he is supporting MS's war on free software just like terrorists support George W. Bush's antidemocratic politics.
Rarely a free software supporter may want to do that...
Yesterday I finished reading "Stupid white men ... and other sorry excuses for the state of the nation!" by Michael Moore. Great book.
I wrote this code a few weeks ago, but I am still fascinated by how simple and clean it looks in Python with iterators:
def powerset(set):
    for size in range(len(set)+1):
        for subset in powersetOfSize(set, size):
            yield subset
def powersetOfSize(set, size):
    if size > 1:
        for i in range(len(set) - (size-1)):
            for subset in powersetOfSize(set[i+1:], size-1):
                yield [set[i]] + subset
    elif size == 1:
        for i in set:
            yield [i]
    else:
        yield []
As you may have guessed, given a set of elements as a python list, this code allows you to iterate through the elements of its power set in increasing set size order.
You may be thinking this guy is mad if he thinks that recursive iterator code is simple. But hey! If you look at an example and think how you would write it in another programming language, you will probably agree with me.
The promised clarification example:
>>> l = [1, 2, 3]
>>> for i in powerset(l):
...     print i
...
[]
[1]
[2]
[3]
[1, 2]
[1, 3]
[2, 3]
[1, 2, 3]
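For comparison, the same increasing-size traversal can be produced with Python's standard itertools module; a short sketch (the function name is mine, not from the diary):

```python
from itertools import combinations

def powerset_by_size(items):
    # Same order as the generator above: for each subset size,
    # combinations() yields all subsets of that size in index order.
    for size in range(len(items) + 1):
        for subset in combinations(items, size):
            yield list(subset)

# list(powerset_by_size([1, 2, 3]))
# -> [[], [1], [2], [3], [1, 2], [1, 3], [2, 3], [1, 2, 3]]
```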
The Advogato project seems interesting, or at least curious. Well, this is a first diary entry in what seems to be a blog...
http://www.advogato.org/person/arauzo/diary.html?start=6
Basic C program structure
Every C program is composed of one or more functions in the following format. The function receives a number of variable parameters, runs the commands in the function, and then returns a variable of the given type. Note that you can omit the return value by using
void as your function type.
type function (parameters)
{
    local variables
    commands
}
Here is a practical example (don't worry if you don't understand it yet):
int my_function (int a)
{
    char b;
    b = 'r';
    printf ("%d %c", a, b);
    return 0;
}
Another example function is shown below. The
main() function is the only one that is required by every C program. It receives two parameters (
argc and
argv), which hold the number of command-line parameters and the parameters themselves. The function returns an integer as the program's exit status. The following code will print out "
x = 0 and y = 1". (The
printf() function will be covered later in this section!)
#include <stdio.h>

int main (int argc, char *argv[])
{
    int x, y;
    x = 0;
    printf ("x = %d ", x);
    y = 1;
    printf ("and y = %d", y);
    return 0;
}
Compiling your programs
If you are reading this article, you will probably need to compile your programs. Assuming that you call your file
result.c, you can compile your programs by typing:
gcc -o result result.c
You will now be able to run the program by typing
./result in the directory where the program was compiled. It's handy to have a console open with the program, and a second console with the command line above, ready to compile the code.
Variable types
C provides a number of variable types. You may notice that these are very basic types, all holding numbers, but they can be used to represent any piece of data. The following table gives an overview of the types that are available to you.
C Data Types
It's important to think about where you place the declaration when creating variables. The scope of a variable refers to which parts of the application can access the variable. In general, variables use the following scope rules in C:
- You can only access a variable that has been declared before referencing it. In other words, the declaration of a variable must appear above its use.
- A variable cannot be accessed outside the set of brackets where it was declared. For example, if a variable was declared inside of a
for loop, it cannot be accessed outside of that loop.
You can declare variables outside any function, such as above the
main() function. This is called a global variable. Also, a variable declared within a function is local to that function.
Arrays
The variable types previously mentioned are useful, but what if you want a thousand integers? It would take quite some time to create a variable for each of these integers. To solve this problem, you could create an array of integers. The following program creates an array, and then fills it up with the numbers 1 to 1000. Notice that arrays are indexed beginning from zero.
int arrayOfInt[1000], i;

for (i = 0; i < 1000; i++)
    arrayOfInt[i] = i + 1;
It is also possible to create arrays with multiple dimensions. The following applications creates an array that has 1000 columns and 1000 rows, and fills it up with the sum of the indexes.
int multiArray[1000][1000], i, j;

for (i = 0; i < 1000; i++)
    for (j = 0; j < 1000; j++)
        multiArray[i][j] = i + j;
The maximum number of dimensions is defined by the compiler, but most applications do not use more than three at the most. If you reach the maximum (GCC's maximum is 29), you should consider rethinking your approach to the problem at hand!
Outputting (printing) data
Although GTK+ is a graphical toolkit, it is still useful to be able to output debugging information to the terminal. To do this in C, you would use
printf(), which you already saw in a previous example. You can use additional parameters in this function to embed numbers and strings into the output text. In order for this function to work, you must include the header file
stdio.h.
The following example would print out "
x = 5 and y = 10.7". The
%d is replaced by the first variable, which is an integer. Then,
%4.1f is replaced by the second floating point variable with a maximum length of 4 characters and one character following the decimal point.
int x = 5; double y = 10.74; printf ("x = %d and y = %4.1f", x, y);
There are a large number of options available for
printf(). It would be pointless to enumerate them here since the reference is available on thousands of sites around the Internet. To read more on this function, take a look at this page.
Conditionals and loops
C provides a number of methods for controlling the flow of your program. In order to understand these, you need to understand a few logical operators that are available. They are introduced in the following table.
Logical Operators in C
If/else comparisons
The
if statement is a comparison command that can be used to run code only if a condition is met. In addition, you can include optional
else if statements, that will only be evaluated if the previous statements were found to be false. Lastly, an
else statement can be used at the end to catch all other cases.
The following example shows you how to use an
if. Note, the curly brackets can be omitted if only one command is run after a conditional statement.
int x;

if (x > 0)
{
    printf ("x is positive");
}
else if (x == 0)
{
    printf ("x is equal to zero");
}
else
{
    printf ("x is negative");
}
In this example, what would be printed if x was equal to -5, 0, or 5? The application would print "
x is negative", "
x is equal to zero", and "
x is positive" respectively.
The switch statement
The
switch can be used as a cleaner style in place of some
if statements. It compares the variable or expression in the
switch to each
case value. If the correct value is found, all of the commands will be run until a
break or the end of the statement is reached.
In the following example,
white space will be printed if
ch is a space, tab, or new line character. If the letter is an uppercase vowel,
vowel will be printed. Otherwise, the output will show
other character.
char ch;

switch (ch)
{
    case ' ':
    case '\t':
    case '\n':
        printf ("white space");
        break;
    case 'A':
    case 'E':
    case 'I':
    case 'O':
    case 'U':
        printf ("vowel");
        break;
    default:
        printf ("other character");
}
The
default case is not required, but it can be used to catch all other cases not previously specified. Note that each case value must be constant, meaning that they cannot be variables.
While loops
The
while loop will continue to run its contained commands over and over until its condition is evaluated as false.
while (condition)
{
    commands
}
In the following example, the
while will continue running until
x is greater than or equal to ten. The
break statement can be used to exit the loop before it is completed. What will this code print out?
int x = 0;

while (x < 10)
{
    printf ("%d ", x);
    x += 2;
    if ((x % 3) == 0)
        break;
}
The code above will print
0 2 4 . The
x variable is incremented by 2 every time, but when it reaches 6, the
if statement evaluates to true and exits the loop. Note: The percent sign (%) represents the modulus operator, which returns the remainder of the division.
For loops
The
for loop allows you to perform initialization, comparison, and incrementing all at once. You can omit one or all of these steps by leaving it blank, although leaving out the comparison will make an infinite loop. In this case, you could use the
break command to exit the loop when necessary.
for (initialize; comparison; increment)
{
    commands
}
In the following example, the integer
i is initialized to zero at the beginning of the loop. It will continue running until
i is greater than or equal to ten, incrementing by one every time the loop is run. What will this code output?
int i;

for (i = 0; i < 10; i++)
{
    if ((i % 3) == 0)
        continue;
    printf ("%d ", i);
}
The above code will output
1 2 4 5 7 8 . If
i is divisible by three,
continue will skip the rest of the loop's iteration, so the values 0, 3, 6, and 9 will not be printed. You can also use the
break command with
for loops.
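As a small illustration (my own sketch, not from the article), break can be used to stop a for loop as soon as a search succeeds:

```c
/* Scan arr for value; break leaves the loop as soon as a match
 * is found, so the rest of the array is never examined. */
int find_index (const int *arr, int len, int value)
{
    int i, found = -1;

    for (i = 0; i < len; i++)
    {
        if (arr[i] == value)
        {
            found = i;
            break;  /* exit the for loop early */
        }
    }
    return found;
}
```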
Pointers
One of the most important parts of the C programming language is the pointer. A pointer is basically a variable that stores the memory address of a variable, array, function, etc. There are two operators that are used with pointers:
- The ampersand (&) symbol is called the monadic or unary operator. It returns the address where the variable is located.
- The asterisk (*) symbol is used for dereferencing, and returns the object that is at the location stored by a pointer.
To help you understand this concept, look at the following example. The second line creates a pointer that stores the memory location of
x. The next line dereferences
ptr, printing out the integer at that memory location (
1). Then, the value at the memory location
ptr holds is set to 7. Since
ptr points to
x, the last statement prints out "
7".
If you find this confusing, slowly re-read the sentence above (yes, it does make sense!) and experiment with the code.
int x = 1;
int *ptr = &x;

printf ("%d", *ptr);
*ptr = 7;
printf ("%d", x);
Pointers are very powerful, because they can be used with arrays. The following example creates an array of one hundred characters, and then uses a pointer to traverse the array, setting each to the value of its index.
char chArray[100];
char *ptr;
int i;

for (i = 0, ptr = &chArray[0]; i < 100; i++, ++ptr)
    *ptr = i;
This example shows a few important concepts. First, you should notice that you can provide multiple commands in the first and third parts of the
for statement by separating them with commas. This allows you to initialize multiple variables, or provide more than one increment.
In terms of pointers, you can increment or decrement a pointer by using standard integer operations. It is legal to move a pointer with any of the following:
++ptr,
--ptr,
ptr += 10,
ptr -= intVariable, etc.
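A short sketch of this pointer arithmetic in action (the function name is mine): summing an array by advancing a pointer from its first element to one past its last.

```c
/* Walk a pointer across the array; ++ptr moves to the next int. */
int sum_with_pointer (const int *arr, int len)
{
    int total = 0;
    const int *ptr = arr;
    const int *end = arr + len;   /* one past the final element */

    while (ptr < end)
    {
        total += *ptr;  /* dereference the current position */
        ++ptr;          /* advance to the next element */
    }
    return total;
}
```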
Strings
Strings are very important to any programming language, because human interaction would be very difficult otherwise. A string is defined in C as an array of char variables. Each string in C must end with the null character ('\0' or 0) so that the end can be found. This is because most strings are stored as pointers.
The following example shows one way to create a string. A pointer called
text is created that points to nothing at first. It is then initialized to "Hello world!" and printed to the screen.
char *text;

text = "Hello world!";
printf ("%s", text);
This string was initialized by setting the text directly, but this is generally not a good idea. Instead, you should use a function defined in
string.h called
strdup() to dynamically allocate a copy of the string like the following example. Once the application is done with the string,
free() should be called so that the memory taken up by that string can be used by other parts of the program.
char *text;

text = strdup ("Hello world!");
printf ("%s", text);
free (text);
There are a number of other functions available in
string.h for manipulating strings. The following application shows a few of them, but you should reference the header file for a full list of functions.
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

int main ()
{
    char *text1, *text2;

    text1 = strdup ("+");
    text2 = (char*) malloc (11 * sizeof (char));
    strcpy (text2, text1);

    while (strlen (text2) < 10)
    {
        if (strcmp (text2, "+++++") == 0)
            printf ("Pluses: ");
        strcat (text2, text1);
    }

    printf ("%s", text2);
    free (text1);
    free (text2);

    return 0;
}
The example above is a bit frivolous, but it shows some of the most common functions used for string manipulation. First, the
malloc() function was used to allocate 11 bytes of memory (enough for ten characters plus the terminating null character). This function requires you to include
stdlib.h and will be covered in more detail in the next section.
Then, the contents of
text1 are copied into
text2 with
strcpy(). The loop continues until the length of
text2 reaches ten characters. During each iteration of the loop, a single plus is concatenated to the variable with
strcat().
The conditional statement uses
strcmp() to compare two strings. It will return a negative number if the first string should be sorted before the second, zero if the two strings are equal, or a positive number otherwise. You should note that you cannot use
==,
!=, or any other operator to compare strings, since using those operators would just compare the pointer locations!
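A minimal sketch (the helper name is mine) of wrapping strcmp() for equality tests:

```c
#include <string.h>

/* Return 1 when the two strings contain the same characters.
 * Comparing a == b would only test whether the two pointers
 * refer to the same memory location. */
int same_text (const char *a, const char *b)
{
    return strcmp (a, b) == 0;
}
```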
Dynamic memory allocation
One of the main reasons for pointers is to enable dynamic memory allocation, or the ability to allocate memory while the application is running. Memory is allocated in C with
malloc(), which uses the following syntax. It accepts the number of bytes to allocate, returning a pointer to the newly allocated memory location. When you are done with the memory, you must call
free(), or that space will be leaked by your application!
int *data;

data = (int*) malloc (100 * sizeof (int));
In addition to this function, you can use
calloc(), which allocates an array of data with the given size. In the following example, an array of 100 elements with a size of 4 bytes is allocated. This code does essentially the same thing as the previous example.
int *data;

data = (int*) calloc (100, sizeof (int));
As with
malloc(), everything allocated with
calloc() should be freed when you are done using it. Also, one of the most common problems encountered with pointers is not allocating memory before using it, so make sure to avoid using uninitialized pointers! Basically, if you get a Segmentation Fault, you are misusing pointers.
Structures
The
struct type allows you to create your own data types in C. For example, let us assume you want to store information about an animal. You could use the following structure to do this:
typedef struct
{
    char *name;
    char *sound;
    int legs;
} animal;
The structure definition above creates a new data type called
animal. This data type holds two strings and one integer. This is useful because you can create as many instances of
animal as you want. The following code shows two ways you can use the new structure, one without and one with pointers.
animal cow;
animal *chicken;

cow.name = strdup ("Cow");
cow.sound = strdup ("Moo");
cow.legs = 4;

chicken = (animal*) malloc (sizeof (animal));
chicken->name = strdup ("Chicken");
chicken->sound = strdup ("Cluck");
chicken->legs = 2;

free (chicken);
In this example, an instance of
animal is created called
cow. Since this is not a pointer, you can use the period character to reference the members of the structure. In the second part of this code, the memory is allocated for another
animal instance. Notice that, with a pointer, you must use
-> to access the members of the structure!
It may not be immediately apparent from this example why you would want to use a pointer. However, if you wanted to pass the structure to a function, it would be much more convenient. Passing
chicken would just create a copy of the pointer. This is why pointers are preferred in C.
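For illustration, here is a sketch of passing the article's animal structure to functions by pointer (the function names are mine, not from the article):

```c
typedef struct
{
    char *name;
    char *sound;
    int legs;
} animal;

/* The function receives only the address of the caller's struct,
 * so none of the fields are copied. */
int leg_count (const animal *a)
{
    return a->legs;
}

/* Changes made through the pointer are visible to the caller. */
void set_legs (animal *a, int legs)
{
    a->legs = legs;
}
```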
Conclusion
By now, you should be familiar with the basics of the C programming language. While there are other aspects of the language that you may need to learn when programming with GTK+, these basics will allow you to use all but the most advanced portions of the libraries.
If you are interested in learning GTK+, I would encourage you to check out my book on the subject: Foundations of GTK+ Development. This book covers everything from the basics of the libraries to creating your own widgets, and much more in between.
C?
Yuk, you don't get C by reading an article.
And you don't need to, either.
Be sensible and use a scripting language.
I'm a bit disappointed to
I'm a bit disappointed to see that all you have done is given a crash-course in C.
I was expecting some more info regarding GTK+ and how to get started.
Thanks,
Raseel
In your "practical example"
In your "practical example" above I believe you made an error.
int my_function(int a)
{
char b;
b='r';
printf("%d %c",a,c);
}
You will get an error because the variable 'c' has been neither declared nor set.
It should read:
int my_function(int a)
{
char b;
b='r';
printf("%d %c",a,b);
}
Above I just changed the 'printf' statement to use 'b' instead of 'c'
A more noteworthy example might be:
int my_function(int a)
{
char b;
b='r';
printf("Your argument as a decimal %d and as a character %c",a,a);
printf("Your function variable b as a decimal %d and as a character %c",b,b);
}
That way new users can see how easy it is to display variables as different types and their associated values.
R/S
Derek
@skypher - I took a couple
@skypher - I took a couple of computer science classes in college (C, C++, and visual *barf* basic), but was too interested in partying to absorb and retain what I learned. Then several years later I found I have a knack for programming, and I made a career of it, learning 99% of what I know through various articles, mailing lists, API documentation, and digesting the source code of other projects. Most of my knowledge is in VHL or scripting languages (PHP, Python, Ruby, VB, C#, ColdFusion, Java), but as I've hit some limitations (specifically with Python) with what I can do, I have a renewed interest (as in the last week) in C as it can do anything, just with a lot more coding.
Where would your scripting languages come from without C anyhow? I don't think your favorite scripting language would be nearly as encompassing and could do 1/10th of what it does if it was written in assembly. Though it probably would be damn fast.
@raseel - Try programming with Python and GTK+ first as you will get a grasp on how to create, modify, and access GTK objects without needing to worry about a strongly typed language and lower level problems (like pointers, memory allocation, etc.) that you do when learning with C. I suggest looking over the Exaile source code as it is my favorite audio player for GTK, and it is written in pure, well coded python. The PyGTK API mirrors, almost verbatim, the GTK C API, so things make a lot more sense when you start coding GTK with C after learning the API in Python. That is if you already know Python. If not, I suggest you learn.
@Andrew Krause - Thanks for the article! I've been trying to get caught up on C as I haven't done much with it in almost 10 years, and last night I (re)learned pointers and pointer math, and tonight, your article answered a question I had been trying to figure out for the last couple of hours - what "->" is in C - I know in OO PHP, it is how to reference an object's property or method, and I figured it had something to do with struct's in C, but you were the only page out of about 40 or 50 I looked at that gave me an answer (google doesn't include special characters in searches, but I have seen that operator in a lot of GTK+ code - this article was on the 4th page of results when searching for "gtk c operators"). Thank you, and your article is a perfect crash course for someone who hasn't done C in 9 years.
Microcontroller Programming » Very simple assembly problem
So I'm trying to write a simple avr assembly program to turn the led on and off. There's like 20 million demos out there of people doing this. But I think my problem may be a hardware problem (as in I connected something wrong).
Anyway, I can get the led to turn on but not blink. If I make the Main section not be a loop, the led will flash briefly.
So anyway I'm trying to use Port B, so I have an LED attached to pin 19 and another to pin 18. The other ends go to ground. That's the way it should be, right? 19=PB5. Something like that. Anyway, I calculated that 28,000 NOPs should be a half second delay. The main program bit just turns the LED on, waits half a sec (approximately) then turns it off, waits, then loops. The delay is two nested loops with 28 NOPs in the middle.
So what am I doing wrong? I calculated the correct delay so it wouldn't be a case of the lights blinking faster or slower than my ability (or patience) to view them.
So here's the program I'm trying to use:
The forum was garbling it so I put it on one of my sites.
It's still rather frustrating, but after another day futzing with it I finally got one of the two leds (a red one :-p) to feebly blink in an approximation of what it should be doing. Except much dimmer.
But I'm calling it a success.
In case anyone with the same problem wants to see what's different about the "working" version:
Of course, virtually the same thing took all of 15 minutes in C:
The clock rate for the ATMEGA168 is 14.74MHz -- that means it can execute at least 14.74 MILLION assembly instructions per second. 28,000 NOPs is equivalent to 0.00189 s (1.9 ms).
maas15, the frequency of the MCU's system clock is controlled by the internal or external oscillator used to drive it. The crystal that ships with NerdKits oscillates at 14.74MHz -- well within the range that ATmega168 works at when powered by +5V.
Since ATmega is a RISC family microcontroller, most of its instructions take one clock cycle to execute. See datasheet, chapter 31: "Instruction Set Summary", pp.347, for a list of instructions and their cycle times. NOP does take one cycle and your calculations, apart from missing the actual frequency of the crystal, were correct in that regard. :)
I made a blinky LED. For your pin 19 you want something like this (WinAVR includes avrdude, which works great for flashing). Here's the source code for blinking/flashing a LED.
/*
by
This code blinks an LED on then off every 100 ms.
This code will loop infinitely.
*/
#include <avr/io.h>
#include <util/delay.h>
void sleep(uint8_t millisec)
{
while(millisec)
{
_delay_ms(1);
millisec--;
}
}
int main(void)
{
DDRB |= 1<<PB5; //set PB5 (pin 19) as output
while(1)
{
PORTB &= ~(1<<PB5); //drive pin 19 low: LED on
sleep(100);
PORTB |= (1<<PB5); //drive pin 19 high: LED off
sleep(100);
}
return 0;
}
Btw, the LED positive (longest wire) goes to the red + power rail and the -/gnd goes to pin 19 on your MCU (that's why driving the pin low turns the LED on). When using PWM, two different MCU pins would get the + and - (pulse-modulated speeds, forward and reversing), but this simple project just turns the pin off and back on to make a flashing effect.
wayward
If a NOP instruction takes one clock cycle, then at 14.74 MHz about 14.74 million cycles can be executed in one second. So again, 28,000 NOPs is only 1.9 ms. He won't even see it. Half a second would require something on the order of 7 MILLION NOPs to be issued.
Am I missing something?
BobaMosfet, no, you're spot on :) I just wanted to show that the idea he had throughout the calculations was good (one instruction = one cycle = 1/F [s]), but he probably used an incorrect starting frequency, and somehow ended up at 28 K instructions.
Hey BobaMosfet,
I followed the calculation you did for the 28,000 NOPS equaling a 1.9 ms delay. I think I'm missing something as the delay code in libnerdkits only has 9 NOPS for delaying 1 microsecond given the 14.74M clockspeed.
Hi sigkill0,
Part of the delay is in executing the loop counter itself. We can look at the disassembled version of the delay_us function to see how it gets compiled into assembly instructions that are executed by the microcontroller. For example, running "make tempsensor.ass" in the tempsensor directory will generate assembly code that basically reads like this (cleaned up a bit):
delay_us:
1368 LDI R18, 0x00
136a LDI R19, 0x00
136c RJMP to 0x1384
136e NOP
1370 NOP
1372 NOP
1374 NOP
1376 NOP
1378 NOP
137a NOP
137c NOP
137e NOP
1380 SUBI R18, 0xFF
1382 SBCI R19, 0xFF
1384 CP R18, R24
1386 CPC R19, R25
1388 BRCS to 0x136E
The left column is the memory address in the flash memory, and the right side describes which instruction and its parameters. Then, you can look at the AVR instruction set and try to decode what's going on. I will admit that is not the most newbie-friendly technique, but it is a useful exercise in understanding how things are happening inside the microcontroller!
The first two LDI instructions are basically doing "uint16_t i=0" -- setting two bytes of registers to zero. Instructions 136e through 137e are the 9 NOP instructions. Instructions 1380 and 1382 are a subtraction and subtraction with carry -- which is really just a confusing way of doing "i++". Instructions 1384 and 1386 compare the variable "i" (in registers R18 and R19) to the parameter (passed in via registers R24 and R25), which sets the carry flag "C" if the parameter is greater than the variable "i". Instruction 1388 is a branch to repeat the loop if the carry flag is set. Anyway, these instructions all take 1 cycle each, except for the BRCS instruction which takes 2 cycles when it branches, so in total, one iteration of the loop (instructions 136e through 1388) takes 15 cycles. (There are a few extra instructions at the beginning and end, but if the loop is run multiple times, only the 15 cycles are repeated.) At a crystal oscillator frequency of 14.7456 MHz, these cycles will take 15/14745600 = 1.017 microseconds, so the delay_us function is about 1.7% slow compared to nominal.
Hope that helps!
Mike
Mike,
Thanks makes perfect sense.
BobaMosfet: Shoot you're right. I think I was thinking MHz = ~1000Hz, but wolframalpha tells me that it's 1.474 * 10^7 Hz. Whoops. That explains all my trials that had the light stuck on permanently... I think the controller was doing exactly what I told it to do.
Dealing with such small bytes is driving me nuts! (Just an irrelevant aside).
Thanks to all for pointing out the exact problem.
"Dealing with such small bytes is driving me nuts! (Just an irrelevant aside)."
I suppose that's better than such small nuts driving you bytes...
Alex's Corner - Week 6 - Best Practices for Deployment
Hi Everyone,
I will be running the next webinar at 2 pm (BST) Thursday the 18th of May, discussing best practices for deploying Pycom devices. This live stream slightly breaks the usual schedule as the Pytrack livestream was delayed a week.
The format will be as following:
- Introduction
- Discussion about deployment
- Explanations/Examples
Please do ask any questions (either here or in the livestream chat), I'll try to answer them live!
Visit this link next week, on the 18th of May to join in!
Cheers,
Alex
@tenspot You're welcome. And if you like video tutorials, I can also recommend the ones by Tony DiCola from Adafruit. Although they are not LoPy/Pycom specific (he often uses the Adafruit boards) they are very informative. That is how I learned microPython:
Hi @tenspot, apologies for the delay getting back to you, I have been busy with events this week! Thanks @RobTuDelft and @PiAir for answering these questions!
@PiAir Hi, Thanks for your reply, I didn't realise it was that simple! Thanks to Rob also for your comment on capitalisation of constants. Knowledge of these things all helps to improve my code readability and style.
- RobTuDelft last edited by
@tenspot have a look at this config.py and this file that uses it.
Basically the config.py is a .py file where you declare the variables:
GATEWAY_ID = '11aa334455bb7788'
SERVER = 'router.eu.thethings.network'
PORT = 1700
NTP = "pool.ntp.org"
NTP_PERIOD_S = 3600
WIFI_SSID = 'my-wifi'
WIFI_PASS = 'my-wifi-password'
Then, if you want to use them in another python file, you import it using:
import config
After that you can address any variable that you have defined in that config.py file like (for example):
config.WIFI_SSID
etc
Hi Alex,
Thank you for posting these tutorial videos, I find them very useful.
You refer to using a configuration file for wifi password, UUID etc., I would be grateful if you would outline the structure of this file as I would like to use this technique, but am unsure how to create such a file.
Regards,
Mervyn
@PiAir
You can OR the two flags together and fire in both cases :-)
e.g.
Pin.IRQ_FALLING | Pin.IRQ_RISING
are there specific use cases?
That depends, e.g. on how fast the signal is rising/falling.
@livius Ok...but when would I prefer using IRQ_HIGH_LEVEL over IRQ_RISING in an interrupt? Is it just there for redundancy or are there specific use case that would make you use one or the other?
@PiAir
these refer to the voltage level/state of the pin:
IRQ_LOW_LEVEL - level 0 (GND)
IRQ_HIGH_LEVEL - level 1 (3V3)
IRQ_FALLING - the falling edge of the signal
IRQ_RISING - the rising edge of the signal
OK, last one for this video. You create the interrupt handler using:
def pin_handler(arg):
From this page I understand that arg contains the Pin object. So I would be able to create one handler that handles interrupts from multiple pins, correct?
On that page, also Pin.IRQ_LOW_LEVEL and Pin.IRQ_HIGH_LEVEL are mentioned. Would I be able to, for example, use that to set an interrupt that triggers a function if the temperature provided by a DHT11 gets to a low or high level?
The documentation here is a bit outdated? It states that only the Alarm class and the Pin provide the .callback() method while at least the Bluetooth class also supports it.
Sorry about all these questions after the livestream itself!
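On the multi-pin question above: yes, since the handler receives the Pin object that fired, one handler can serve several pins by dispatching on it. Here is a sketch with the hardware stubbed out so the dispatch logic can run anywhere; `pin.id()` follows the Pycom Pin API, but verify it against your firmware version:

```python
# Map pin ids to actions; one shared handler looks up whichever pin fired.
actions = {}

def pin_handler(pin):
    # `pin` is the Pin object that triggered the interrupt.
    action = actions.get(pin.id())
    if action:
        action()

# A stand-in Pin class so the logic can be exercised off-device.
class FakePin:
    def __init__(self, name):
        self._name = name

    def id(self):
        return self._name

events = []
actions["P10"] = lambda: events.append("button")
actions["P11"] = lambda: events.append("motion")

pin_handler(FakePin("P10"))
pin_handler(FakePin("P11"))
print(events)  # ['button', 'motion']
```

On a real board you would register the same `pin_handler` as the callback for each pin instead of calling it directly.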
Ah, you explain the PWRON_RESET and SOFT_RESET later. Solves one question.
Still not sure why I wouldn't want to init the wifi in case of SOFT_RESET?
A little later you look at
while not wlan.isconnected():
    machine.idle()  # save power while waiting
Since the topic is deployment:
This loop caused me problems once, because I used it on one of my LoPy devices. At home, there was no problem. But then I took the node with me and had to unplug its power and reboot it while being nowhere near the range of my WiFi network...
I now use this code in my boot.py:
nets = wlan.scan()
for net in nets:
    if net.ssid == config.WIFI_SSID:
        if not wlan.isconnected():
            wlan.connect(config.WIFI_SSID, auth=(WLAN.WPA2, config.WIFI_PASS), timeout=5000)
            while not wlan.isconnected():
                machine.idle()  # save power while waiting
            pycom.rgbled(0x007f00)  # green
            time.sleep(3)
            pycom.rgbled(0)
        break
(forgot where I've seen this, can't imagine I made it up myself)
This then checks to see if there is a WiFi network available with the same SSID as the one in the config.py file. If so, it connects and shows the green LED for 3 seconds to indicate me that it has connected.
So, if I start the node outside of the reach of my WiFi network it no longer tries to connect.
In the livestream you mention that
if machine.reset_cause() != machine.SOFT_RESET:
checks for a crash. But after a crash wouldn't you still want to setup the wifi?
If I look at this list, it lists
- machine.PWRON_RESET
- machine.HARD_RESET
- machine.WDT_RESET
- machine.DEEPSLEEP_RESET
- machine.SOFT_RESET
But without description of what could cause any of these and also without examples of what I would want to different for each of these.
The Pycom documentation only lists:
- machine.PWRON_RESET
- machine.SOFT_RESET
Again without any further information.
Could you explain these a bit more and possibly give examples of how you would want to handle them in boot.py?
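One possible pattern, since the docs are thin here: decide what to (re)initialize in boot.py based on the reset cause. Roughly, PWRON_RESET is a cold power-up, HARD_RESET comes from the reset pin, WDT_RESET means the watchdog fired (likely after a hang), DEEPSLEEP_RESET is a wake from deep sleep, and SOFT_RESET is machine.reset() or Ctrl-D at the REPL. The sketch below keeps the decision as a plain function so it can be tested off-device; the step names are hypothetical placeholders, not a Pycom API:

```python
def boot_actions(cause, causes):
    """Return the set of init steps to run for a given reset cause.

    `causes` is a mapping of name -> constant so this logic can run
    without the MicroPython `machine` module.
    """
    if cause == causes["DEEPSLEEP_RESET"]:
        # Waking from deep sleep: skip slow setup, restore saved state.
        return {"restore_state"}
    if cause == causes["SOFT_RESET"]:
        # Soft reset: peripherals often survive, so Wi-Fi init can be skipped.
        return {"reload_config"}
    # PWRON_RESET, HARD_RESET, WDT_RESET: treat as a cold start.
    return {"init_wifi", "reload_config", "sync_time"}

# Off-device demo with fake constants; on the board you would pass
# machine.reset_cause() and the machine.* constants instead.
fake = {"PWRON_RESET": 0, "HARD_RESET": 1, "WDT_RESET": 2,
        "DEEPSLEEP_RESET": 3, "SOFT_RESET": 4}
print(boot_actions(fake["SOFT_RESET"], fake))  # {'reload_config'}
```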
It was. :)
(or is it "It is" ?)
- jmarcelino last edited by jmarcelino
@bucknall
I am between timezones at the moment but 18th of May was yesterday?
Sorry ignore... got confused because I saw this posted an hour ago, but the stream was yesterday..
package B::Lint; use if $] > 5.017, 'deprecate'; our $ is used. Any occurrence of any of these variables in your program can slow your whole program down. See L<perlre> for details. =item B<all> Turn all warnings on. =item B<none> Turn all warnings off. =back =head1 NON LINT-CHECK OPTIONS =over 8 =item B<-u Package> Normally, Lint only checks the main code of the program together with all subs defined in package main. The B<-u> option lets you include other package names whose subs are then checked by Lint. =back =head1 EXTENDING LINT Lint can be extended by with plugins. Lint uses L<Module::Pluggable> to find available plugins. Plugins are expected but not required to inform Lint of which checks they are adding. The C<< B::Lint->register_plugin( MyPlugin => \@new_checks ) >> method adds the list of C<@new_checks> to the list of valid checks. If your module wasn't loaded by L<Module::Pluggable> then your class name is added to the list of plugins. You must create a C<match( \%checks )> method in your plugin class or one of its parents. It will be called on every op as a regular method call with a hash ref of checks as its parameter. The class methods C<< B::Lint->file >> and C<< B::Lint->line >> contain the current filename and line number. package Sample; use B::Lint; B::Lint->register_plugin( Sample => [ 'good_taste' ] ); sub match { my ( $op, $checks_href ) = shift @_; if ( $checks_href->{good_taste} ) { ... } } =head1 TODO =over =item while(<FH>) stomps $_ =item strict oo =item unchecked system calls =item more tests, validate against older perls =back =head1 BUGS This is only a very preliminary version. =head1 AUTHOR Malcolm Beattie, mbeattie@sable.ox.ac.uk. =head1 ACKNOWLEDGEMENTS Sebastien Aperghis-Tramoni - bug fixes =cut use strict; use B qw( walkoptree_slow main_root main_cv walksymtable parents OPpOUR_INTRO OPf_WANT_VOID OPf_WANT_LIST OPf_WANT OPf_STACKED SVf_POK SVf_ROK ); use Carp 'carp'; # The current M::P doesn't know about .pmc files. 
use Module::Pluggable ( require => 1 ); use List::Util 'first'; ## no critic Prototypes sub any (&@) { my $test = shift @_; $test->() and return 1 for @_; return 0 } BEGIN { # Import or create some constants from B. B doesn't provide # everything I need so some things like OPpCONST_BARE are defined # here. for my $sym ( qw( begin_av check_av init_av end_av ), [ 'OPpCONST_BARE' => 64 ] ) { my $val; ( $sym, $val ) = @$sym if ref $sym; if ( any { $sym eq $_ } @B::EXPORT_OK, @B::EXPORT ) { B->import($sym); } else { require constant; constant->import( $sym => $val ); } } } my $file = "unknown"; # shadows current filename my $line = 0; # shadows current line number my $curstash = "main"; # shadows current stash my $curcv; # shadows current B::CV for pad lookups sub file {$file} sub line {$line} sub curstash {$curstash} sub curcv {$curcv} # Lint checks my %check; my %implies_ok_context; map( $implies_ok_context{$_}++, qw(scalar av2arylen aelem aslice helem hslice keys values hslice defined undef delete) ); # Lint checks turned on by default my @default_checks = qw(context magic_diamond undefined_subs regexp_variables); my %valid_check; # All valid checks for my $check ( qw(context implicit_read implicit_write dollar_underscore private_names bare_subs undefined_subs regexp_variables magic_diamond ) ) { $valid_check{$check} = __PACKAGE__; } # Debugging options my ($debug_op); my %done_cv; # used to mark which subs have already been linted my @extra_packages; # Lint checks mainline code and all subs which are # in main:: or in one of these packages. sub warning { my $format = ( @_ < 2 ) ? "%s" : shift @_; warn sprintf( "$format at %s line %d\n", @_, $file, $line ); return undef; ## no critic undef } # This gimme can't cope with context that's only determined # at runtime via dowantarray(). sub gimme { my $op = shift @_; my $flags = $op->flags; if ( $flags & OPf_WANT ) { return ( ( $flags & OPf_WANT ) == OPf_WANT_LIST ? 
1 : 0 ); } return undef; ## no critic undef } my @plugins = __PACKAGE__->plugins; sub inside_grepmap { # A boolean function to be used while inside a B::walkoptree_slow # call. If we are in the EXPR part of C<grep EXPR, ...> or C<grep # { EXPR } ...>, this returns true. return any { $_->name =~ m/\A(?:grep|map)/xms } @{ parents() }; } sub inside_foreach_modifier { # TODO: use any() # A boolean function to be used while inside a B::walkoptree_slow # call. If we are in the EXPR part of C<EXPR foreach ...> this # returns true. for my $ancestor ( @{ parents() } ) { next unless $ancestor->name eq 'leaveloop'; my $first = $ancestor->first; next unless $first->name eq 'enteriter'; next if $first->redoop->name =~ m/\A(?:next|db|set)state\z/xms; return 1; } return 0; } for ( [qw[ B::PADOP::gv_harder gv padix]], [qw[ B::SVOP::sv_harder sv targ]], [qw[ B::METHOP::sv_harder meth_sv targ]], [qw[ B::SVOP::gv_harder gv padix]] ) { # I'm generating some functions here because they're mostly # similar. It's all for compatibility with threaded # perl. Perhaps... this code should inspect $Config{usethreads} # and generate a *specific* function. I'm leaving it generic for # the moment. # # In threaded perl SVs and GVs aren't used directly in the optrees # like they are in non-threaded perls. The ops that would use a SV # or GV keep an index into the subroutine's scratchpad. I'm # currently ignoring $cv->DEPTH and that might be at my peril. my ( $subname, $attr, $pad_attr ) = @$_; my $target = do { ## no critic strict no strict 'refs'; \*$subname; }; *$target = sub { my ($op) = @_; my $elt; if ( not $op->isa('B::PADOP') ) { $elt = $op->$attr; } return $elt if eval { $elt->isa('B::SV') }; my $ix = $op->$pad_attr; my @entire_pad = $curcv->PADLIST->ARRAY; my @elts = map +( $_->ARRAY )[$ix], @entire_pad; ($elt) = first { eval { $_->isa('B::SV') } ? $_ : (); } @elts[ 0, reverse 1 .. 
$#elts ]; return $elt; }; } sub B::OP::lint { my ($op) = @_; # This is a fallback ->lint for all the ops where I haven't # defined something more specific. Nothing happens here. # Call all registered plugins my $m; $m = $_->can('match'), $op->$m( \%check ) for @plugins; return; } sub B::COP::lint { my ($op) = @_; # nextstate ops sit between statements. Whenever I see one I # update the current info on file, line, and stash. This code also # updates it when it sees a dbstate or setstate op. I have no idea # what those are but having seen them mentioned together in other # parts of the perl I think they're kind of equivalent. if ( $op->name =~ m/\A(?:next|db|set)state\z/ ) { $file = $op->file; $line = $op->line; $curstash = $op->stash->NAME; } # Call all registered plugins my $m; $m = $_->can('match'), $op->$m( \%check ) for @plugins; return; } sub B::UNOP::lint { my ($op) = @_; my $opname = $op->name; CONTEXT: { # Check arrays and hashes in scalar or void context where # scalar() hasn't been used. next unless $check{context} and $opname =~ m/\Arv2[ah]v\z/xms and not gimme($op); my ( $parent, $gparent ) = @{ parents() }[ 0, 1 ]; my $pname = $parent->name; next if $implies_ok_context{$pname}; # Three special cases to deal with: "foreach (@foo)", "delete # $a{$b}", and "exists $a{$b}" null out the parent so we have to # check for a parent of pp_null and a grandparent of # pp_enteriter, pp_delete, pp_exists next if $pname eq "null" and $gparent->name =~ m/\A(?:delete|enteriter|exists)\z/xms; # our( @bar ); would also trigger this error so I exclude # that. next if $op->private & OPpOUR_INTRO and ( $op->flags & OPf_WANT ) == OPf_WANT_VOID; warning 'Implicit scalar context for %s in %s', $opname eq "rv2av" ? "array" : "hash", $parent->desc; } PRIVATE_NAMES: { # Looks for calls to methods with names that begin with _ and # that aren't visible within the current package. Maybe this # should look at @ISA. 
next unless $check{private_names} and $opname =~ m/\Amethod/xms; my $methop = $op->first; next unless $methop->name eq "const"; my $method = $methop->sv_harder->PV; next unless $method =~ m/\A_/xms and not defined &{"$curstash\::$method"}; warning q[Illegal reference to private method name '%s'], $method; } # Call all registered plugins my $m; $m = $_->can('match'), $op->$m( \%check ) for @plugins; return; } sub B::PMOP::lint { my ($op) = @_; IMPLICIT_READ: { # Look for /.../ that doesn't use =~ to bind to something. next unless $check{implicit_read} and $op->name eq "match" and not( $op->flags & OPf_STACKED or inside_grepmap() ); warning 'Implicit match on $_'; } IMPLICIT_WRITE: { # Look for s/.../.../ that doesn't use =~ to bind to # something. next unless $check{implicit_write} and $op->name eq "subst" and not $op->flags & OPf_STACKED; warning 'Implicit substitution on $_'; } # Call all registered plugins my $m; $m = $_->can('match'), $op->$m( \%check ) for @plugins; return; } sub B::LOOP::lint { my ($op) = @_; IMPLICIT_FOO: { # Look for C<for ( ... )>. next unless ( $check{implicit_read} or $check{implicit_write} ) and $op->name eq "enteriter"; my $last = $op->last; next unless $last->name eq "gv" and $last->gv_harder->NAME eq "_" and $op->redoop->name =~ m/\A(?:next|db|set)state\z/xms; warning 'Implicit use of $_ in foreach'; } # Call all registered plugins my $m; $m = $_->can('match'), $op->$m( \%check ) for @plugins; return; } # In threaded vs non-threaded perls you'll find that threaded perls # use PADOP in place of SVOPs so they can do lookups into the # scratchpad to find things. I suppose this is so a optree can be # shared between threads and all symbol table muckery will just get # written to a scratchpad. 
*B::METHOP::lint = *B::PADOP::lint = *B::PADOP::lint = \&B::SVOP::lint; sub B::SVOP::lint { my ($op) = @_; MAGIC_DIAMOND: { next unless $check{magic_diamond} and parents()->[0]->name eq 'readline' and $op->gv_harder->NAME eq 'ARGV'; warning 'Use of <>'; } BARE_SUBS: { next unless $check{bare_subs} and $op->name eq 'const' and $op->private & OPpCONST_BARE; my $sv = $op->sv_harder; next unless $sv->FLAGS & SVf_POK; my $sub = $sv->PV; my $subname = "$curstash\::$sub"; # I want to skip over things that were declared with the # constant pragma. Well... sometimes. Hmm. I want to ignore # C<<use constant FOO => ...>> but warn on C<<FOO => ...>> # later. The former is typical declaration syntax and the # latter would be an error. # # Skipping over both could be handled by looking if # $constant::declared{$subname} is true. # Check that it's a function. next unless exists &{"$curstash\::$sub"}; warning q[Bare sub name '%s' interpreted as string], $sub; } PRIVATE_NAMES: { next unless $check{private_names}; my $opname = $op->name; if ( $opname =~ m/\Agv(?:sv)?\z/xms ) { # Looks for uses of variables and stuff that are named # private and we're not in the same package. my $gv = $op->gv_harder; my $name = $gv->NAME; next unless $name =~ m/\A_./xms and $gv->STASH->NAME ne $curstash; warning q[Illegal reference to private name '%s'], $name; } elsif ( $opname eq "method_named" ) { my $method = $op->sv_harder->PV; next unless $method =~ m/\A_./xms; warning q[Illegal reference to private method name '%s'], $method; } } DOLLAR_UNDERSCORE: { # Warn on uses of $_ with a few exceptions. I'm not warning on # $_ inside grep, map, or statement modifier foreach because # they localize $_ and it'd be impossible to use these # features without getting warnings. next unless $check{dollar_underscore} and $op->name eq "gvsv" and $op->gv_harder->NAME eq "_" and not( inside_grepmap or inside_foreach_modifier ); warning 'Use of $_'; } REGEXP_VARIABLES: { # Look for any uses of $`, $&, or $'. 
next unless $check{regexp_variables} and $op->name eq "gvsv"; my $name = $op->gv_harder->NAME; next unless $name =~ m/\A[\&\'\`]\z/xms; warning 'Use of regexp variable $%s', $name; } UNDEFINED_SUBS: { # Look for calls to functions that either don't exist or don't # have a definition. next unless $check{undefined_subs} and $op->name eq "gv" and $op->next->name eq "entersub"; my $gv = $op->gv_harder; my $cv = $gv->FLAGS & SVf_ROK ? $gv->RV : undef; my $subname = ($cv || $gv)->STASH->NAME . "::" . ($cv ? $cv->NAME_HEK || $cv->GV->NAME : $gv->NAME); no strict 'refs'; ## no critic strict if ( not exists &$subname ) { $subname =~ s/\Amain:://; warning q[Nonexistent subroutine '%s' called], $subname; } elsif ( not defined &$subname ) { $subname =~ s/\A\&?main:://; warning q[Undefined subroutine '%s' called], $subname; } } # Call all registered plugins my $m; $m = $_->can('match'), $op->$m( \%check ) for @plugins; return; } sub B::GV::lintcv { # Example: B::svref_2object( \ *A::Glob )->lintcv my $gv = shift @_; my $cv = $gv->CV; return unless $cv->can('lintcv'); $cv->lintcv; return; } sub B::CV::lintcv { # Example: B::svref_2object( \ &foo )->lintcv # Write to the *global* $ $curcv = shift @_; #warn sprintf("lintcv: %s::%s (done=%d)\n", # $gv->STASH->NAME, $gv->NAME, $done_cv{$$curcv});#debug return unless ref($curcv) and $$curcv and not $done_cv{$$curcv}++; my $root = $curcv->ROOT; #warn " root = $root (0x$$root)\n";#debug walkoptree_slow( $root, "lint" ) if $$root; return; } sub do_lint { my %search_pack; # Copy to the global $curcv for use in pad lookups. $curcv = main_cv; walkoptree_slow( main_root, "lint" ) if ${ main_root() }; # Do all the miscellaneous non-sub blocks. 
for my $av ( begin_av, init_av, check_av, end_av ) { next unless eval { $av->isa('B::AV') }; for my $cv ( $av->ARRAY ) { next unless ref($cv) and $cv->FILE eq $0; $cv->lintcv; } } walksymtable( \%main::, sub { if ( $_[0]->FILE eq $0 ) { $_[0]->lintcv } }, sub {1} ); return; } sub compile { my @options = @_; # Turn on default lint checks for my $opt (@default_checks) { $check{$opt} = 1; } OPTION: while ( my $option = shift @options ) { my ( $opt, $arg ); unless ( ( $opt, $arg ) = $option =~ m/\A-(.)(.*)/xms ) { unshift @options, $option; last OPTION; } if ( $opt eq "-" && $arg eq "-" ) { shift @options; last OPTION; } elsif ( $opt eq "D" ) { $arg ||= shift @options; foreach my $arg ( split //, $arg ) { if ( $arg eq "o" ) { B->debug(1); } elsif ( $arg eq "O" ) { $debug_op = 1; } } } elsif ( $opt eq "u" ) { $arg ||= shift @options; push @extra_packages, $arg; } } foreach my $opt ( @default_checks, @options ) { $opt =~ tr/-/_/; if ( $opt eq "all" ) { %check = %valid_check; } elsif ( $opt eq "none" ) { %check = (); } else { if ( $opt =~ s/\Ano_//xms ) { $check{$opt} = 0; } else { $check{$opt} = 1; } carp "No such check: $opt" unless defined $valid_check{$opt}; } } # Remaining arguments are things to check. So why aren't I # capturing them or something? I don't know. return \&do_lint; } sub register_plugin { my ( undef, $plugin, $new_checks ) = @_; # Allow the user to be lazy and not give us a name. $plugin = caller unless defined $plugin; # Register the plugin's named checks, if any. for my $check ( eval {@$new_checks} ) { if ( not defined $check ) { carp 'Undefined value in checks.'; next; } if ( exists $valid_check{$check} ) { carp "$check is already registered as a $valid_check{$check} feature."; next; } $valid_check{$check} = $plugin; } # Register a non-Module::Pluggable loaded module. @plugins already # contains whatever M::P found on disk. The user might load a # plugin manually from some arbitrary namespace and ask for it to # be registered. 
if ( not any { $_ eq $plugin } @plugins ) { push @plugins, $plugin; } return; } 1;
I’m going to put together a few blog postings over the next few weeks that show off ways to use LINQ/DLINQ/XLINQ within ASP.NET projects. This first walkthrough below will help you get started and introduce some of the important LINQ concepts. You can follow along by downloading the May CTP LINQ preview above and typing in the code below (I list all of it below), or you can download and run the complete .zip file of my samples here (note: you still need to install the LINQ May CTP drop for the .zip file of samples to work).
I can then populate a collection of Location objects and databind it to the Grid in my code-behind like so:
using System.Collections.Generic;
public partial class Step2 : System.Web.UI.Page:
Step 3: Refactoring the City Collection Slightly
Since we’ll be re-using this collection of cities in several other samples, I decided to encapsulate my travels in a “TravelOrganizer” class like so:
public partial class Step3 : System.Web.UI.Page
TravelOrganizer travel = new TravelOrganizer();
GridView1.DataSource = from location in travel.PlacesVisited
select location;
public partial class Step4 : System.Web.UI.Page
GridView1.DataSource = (from location in travel.PlacesVisited
orderby location.Distance descending
select location).Skip(1).Take>
<b>Total Travel Distance (outside of US):</b>
<asp:Label</asp:Label>
</div>
<b>Average Distance:</b>
<asp:Label</asp:Label>
Step5.aspx.cs code-behind file:
public partial class Step5 : System.Web.UI.Page
//
//" %>
<h1>Anonymous Type</h1>
And within our code-behind file we’ll write a LINQ query that uses anonymous types like so:
public partial class Step6 : System.Web.UI.Page
orderby location.City
select new {
City = location.City,
Distance = location.Distance
};):
public partial class Step7 : System.Web.UI.Page
group location by location.Country into loc
Country = loc.Key,
Cities = loc,
TotalDistance = loc.Sum(dist => dist.Distance)
The GridView on our .aspx page is then defined like so:
<%@ Page Language="C#" AutoEventWireup="true" CodeFile="Step7.aspx.cs" Inherits="Step7" %>
).
In my next few LINQ-related blog postings I’ll show how you can go even further, and take advantage of the new DLINQ support to use the above techniques against relational databases as well as the new XLINQ support to work against XML files and structures. What is great about the LINQ project is that the syntax and concepts are the same across all of its uses – so once you learn how to use LINQ against an array or collection, you also know all the concepts needed to work against a database or even XML file.
public partial class Data_Data2 : System.Web.UI.Page
Northwind db = new Northwind();
GridView1.DataSource = from x in db.Suppliers
where x.Country == "USA"
orderby x.Country
select new {
x.CompanyName,
x.Country,
x.Products
};

You can download the samples I built above from this .ZIP file here.
Hope this helps,
Scott
Good Stuff, Scott!!!
Keeps me enumerating...
I had downloaded a two part asp.net WMV of yours from Channel 9...Nice Ones...
Cheers
--Subbu
simple and powerful application.
easy data access from array. very interesting
Over the last few days I’ve spent some spare time playing around with LINQ and LINQ for SQL (aka
Hi Jason,
Yep -- you can definitely run LINQ over a custom object model just fine. As long as it supports IEnumerable you can perform queries without having to modify your object model (which is pretty cool).
I read in ScottGu's post the other day that LINQ can also be used under Web Access Projects (aka
One of the highlights for me of my recent trip to TechEd NZ and Australia was the opportunity I had to
Hi Ranjith,
The big advantages of LINQ over using SQL statements directly are:
1) It is a cleaner, terser syntax
2) It is checked by the compiler instead of at runtime
3) It can be easily debugged
4) It can be used against any datatype - it isn't limited to relational databases, you can also use it against XML, regular objects, etc.
Hi John,
LINQ is very fast and uses pretty proven object relational mapping concepts, so I wouldn't say it is a research project.
LINQ is not released yet, but will be next year. It will be fully supported and tested, and designed for both large and small systems.
LINQ is not yet in Beta yet. It will have a full beta w/ an early adopter program behind it. There will be very, very large projects running on it before it is released.
The var keyword can optionally be used (or not). It is useful with anonymous types - where there is no type name that you can specify for a return result (since there is no type declaration).
If you want to avoid using var, you can just declare your types - in which case the result would be more like this:
IEnumerable<Product> products = // LINQ Query
Hi Kevin,
Unfortunately it is going to be early next year when we get LINQ ready for the next major release (which will be the productized version that is fully integrated into Visual Studio).
I know it is really addictive (I love it). We'll try and get it out to you as soon as possible though.
Thanks,
Scott Guthrie, "The Father of ASP", is speaking at the North Dallas .NET User Group on November 2 nd
Come join NDDNUG and meet the original creator of ASP.NET - Scott Guthrie! A few times a year, NDDNUG
Hi Jimbo,
That is a good scenario!
Can you send me email (scottgu@microsoft.com) and I'll loop you in with a few folks from the LINQ team to discuss? They'd probably have some good recommendations for you.
Earlier this week I presented at the ASP.NET Connections conference in Las Vegas. This is a great conference
great article....thanxxx
Great article. So easy to use by understanding your article.
Any suggested book to improve on C# 3.0?
Pathik
Hi Pathik,
Here is a pointer to an electronic book on LINQ and C# 3.0 where you can learn more about it:
David Hayden also has some good C# 3.0 articles you might want to read:
I have Microsoft .net linq preview (May 2006) installed and tried "Step 1: Creating your first ASP.NET page using LINQ", but the linq statement is unrecognized, "city", "in", "cities", and "ToUpper()" are all red underlined with "; expected"
from city in cities
where city.Length > 4
orderby city
select city.ToUpper();
Thanks.. :)
Excellent article for LINQ
LINQ will be fully integrated into the next version of Visual Studio, code-named Orcas, and it also includes some very cool framework and tool support, including full IntelliSense and visual designer support. You can download the LINQ May CTP released last week here. The highlight of this CTP is that it runs on VS 2005, so you can start digging into LINQ right away. It incorporates a lot of user feedback (for example, stored procedure support was added to DLINQ), and it includes a built-in ASP.NET web site project template to help you use it with ASP.NET (note: you can also use it in VS 2005 Web Application
Some study notes on LINQ (Language Integrated Query)
What is LINQ? When we need to query a database table, we always write
Should I stay or should I go … with Visual Studio 2005 or 2008 is the question in this particular case
[Original article] Using LINQ with ASP.NET (Part 1)
[Original publication date] Sunday, May 14, 2006 9:49 PM
One of the new things that has recently excited me is the LINQ...
Using LinqDataSource with ASP.NET data controls can serve as introductory reading for LINQ...
In Python, modules are source files that can be imported into a program. They can contain any Python structure, and their code runs when the module is imported. Modules are compiled when first imported, and the compiled form is cached in a file (with the extension ".pyc" or ".pyo"). They have their own namespaces and support doc strings. Modules are singleton objects: only one instance is loaded into memory, and it is available globally to the program.
Modules are located by the interpreter through the list of folders in PYTHONPATH (sys.path), which usually includes the current directory first.
Modules are loaded with the import statement. When using a structure from a module, you must qualify it with the module name; this is called absolute import.
import os
print os.name
posix
You can also import modules with relative form:
from os import name
print name
posix
To avoid problems such as name shadowing, absolute imports are considered better programming practice than relative imports.
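A small sketch of the shadowing risk (the assignment is deliberately artificial, and print() form is used so it runs under both Python 2 and 3):

```python
from os import name   # relative form: binds 'name' directly in this namespace

name = "my own value"  # silently shadows the imported os.name

import os              # absolute form keeps the namespace explicit

print(name)     # the shadowing local value
print(os.name)  # still the real value, e.g. 'posix'
```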
Example of module:
# File calc.py

# Function defined in module
def average(list):
    return float(sum(list)) / len(list)
Example of module usage:
# Imports calc module
import calc

l = [23, 54, 31, 77, 12, 34]

# Calls the function defined in calc
print calc.average(l)
38.5
The main module of a program has the variable __name__ equal to "__main__", so it is possible to test whether the current module is the main one:
if __name__ == "__main__":
    # Code here will only be run
    # if it is the main module,
    # and not when it is imported by another program
    pass
That way it is easy to turn a program into a module.
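A file written this way works both as a script and as an importable module (the file name and numbers below are illustrative; print() form is used so the sketch runs under Python 2 and 3):

```python
# File stats.py: usable as a script and as a module.

def average(values):
    # Same averaging idea as the calc module above.
    return float(sum(values)) / len(values)

if __name__ == "__main__":
    # Runs only when executed directly (python stats.py),
    # not when another program does "import stats".
    print(average([23, 54, 31, 77, 12, 34]))  # -> 38.5
```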
Another module example:
""" modutils => utility routines for modules """ import os.path import sys import glob def find(txt): """find modules with name containing the parameter """ resp = [] for path in sys.path: mods = glob.glob('%s/*.py' % path) for mod in mods: if txt in os.path.basename(mod): resp.append(mod) return resp
Example module use:
from os.path import getsize, getmtime
from time import localtime, asctime

import modutils

mods = modutils.find('xml')

for mod in mods:
    tm = asctime(localtime(getmtime(mod)))
    kb = getsize(mod) / 1024
    print '%s: (%d kbytes, %s)' % (mod, kb, tm)
/usr/lib/python2.7/xmlrpclib.py: (50 kbytes, Fri Apr 19 16:20:45 2013)
/usr/lib/python2.7/xmllib.py: (34 kbytes, Fri Apr 19 16:20:45 2013)
/usr/lib/python2.7/dist-packages/libxml2.py: (335 kbytes, Wed May 1 14:19:10 2013)
/usr/lib/python2.7/dist-packages/drv_libxml2.py: (14 kbytes, Wed May 1 14:19:10 2013)
Splitting programs into modules makes it easy to reuse and locate faults in the code.
XML and Transcoding - How Would You Do It? 139
morzel asks a doozy: "XML is one of these words everybody's talking about, yet no one really knows how to use it in specific applications or server technologies. At the Apache XML Project, some work is being done on integrating XML/XSL in the server itself, but personally I like IBM's idea of a transcoder in between a range of (XML) servers and a range of clients. But... how can it be done?"
"Suppose you have to develop an on-line application, and you'd want to go with XML on the server side, and everyday browsers on the client side. Portable platforms like Palm and WAP-enabled phones will probably be a client platform that is being used frequently.
What tools -open source or commercial- are available to accomplish this?
The elements of the system are:
- XML Enabled Database system: Data is retrieved by the transcoder using HTTP or your favorite protocol
- Transcoding gateway: should translate the XML data using XSL (or another way) to a form readable by the client. The exact translation or the XSL to use can be set by the server (included in the XML source), or be detected by the gateway.
- Browsers of all colours and kinds.
XML is the wave of the future, that's for sure... But what tools are available to actually incorporate XML in a system that can do all things we poor webdesigners dream of?
All suggestions welcome! "
Why XML is GOOD!!! (Score:1)
Standard formats needed... (Score:3)
Of course, that won't happen, we'll all make our own stripped-down, human-readable versions, with big gaping flaws, until someone either standardizes it, or hides something nasty and binary with a GUI and dominates the market (*hint* I wonder who wants to use XML and "open standards"....) So let's try to come up with a real open format now, instead.
---
pb Reply or e-mail; don't vaguely moderate [152.7.41.11].
Mino XML parser (Score:3)
I'm working on XSL support (so people can easily say what XML tags should become in HTML), so that should be done in the (hopefully) near future. For now, feel free to download the latest alpha and play with it.
In the near future, I plan to have support for databases, CSS, XSL (as mentioned above), and a few other XML-related technologies.
People familiar with C/C++ should easily be able to write custom modules for converting from XML to HTML using the library by looking at the examples in xmlhandlers/. Anyone want to help develop this?
XSLT is a Great Idea (Score:2)
In the OSS arena, the best example of XML on the server=>HTML (or for that matter anything else) on the client is Cocoon [apache.org]. I played around with Cocoon 1.x a little bit and it's very impressive architecturally, but even the principals agree that the performance isn't there yet. I am eagerly awaiting Cocoon 2, though
engineers never lie; we just approximate the truth.
Related: Client-side data on demand? (Score:2)
This was the theory anyway...has anybody heard of such an implementation, or does anybody know if it is in a future spec?
One application (which is badly needed on the web, I think) is a dynamic collapsable tree. Imagine if you will a SlashDot comments page (not to hard, as you are looking at one!). Now, instead of getting a page-full of comments that take a healthy amount of time downloading (depending on your threshold settings): imagine clicking on a message to expand more comments in the thread which are fetched dynamically. You could resort, change moderation thresholds, and lots of other nifty dynamic operations without having the server do all the work.
-AP
On the browser (Score:3)
In the meantime, there are some Java Servlets out there to do the transformation on the server side. The server will grab the XML and XSL file, do transformations, and output HTML (or whatever format) to the client. I haven't played with them enough to recommend one as being particularly better, but there's some handy stuff out there.
----
Uses of XML in the real world... (Score:2)
In our case, this meant we had to find a way to store that hierarchical information, which is vital to the front end, in an intermediate format that did not put load on the database itself.
The reason for that, of course, is that when you're running a distributed application to potentially thousands of clients, you want any database hit to be as few, fast and clean as possible.
That means we can't sustain connections to the DB.
That means we have to use disconnected record sets.
Disconnected recordsets don't hold hierarchy information, and that means that we have to find some other way of hitting the database once, getting enough data to build the hierarchy externally, then shutting down the DB link.
XML provides the functionality we need to parse a flat recordset back up to a hierarchical structure, without hitting the database again. It also has the added bonus that when it comes to presenting the front end in a browser, we can feed it directly to the browser if it's "XML compliant" (IE5, though there is a patch for IE4 [microsoft.com]).
B.
PS: You'll also find that XSL can do similar things to your XML as CSS does to HTML
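The single-hit, build-the-tree-in-memory approach described above can be sketched as follows (an editorial illustration with invented table and field names, written to run under Python 2 and 3; it assumes each parent row arrives before its children):

```python
import xml.etree.ElementTree as ET

# Flat rows as they might come back from one database hit:
# (parent_id, id, title) -- names and data are invented for the sketch.
rows = [
    (None, 1, "Products"),
    (1, 2, "Widgets"),
    (1, 3, "Gadgets"),
    (2, 4, "Blue Widget"),
]

# Fold the flat recordset back into a hierarchy, then emit it as XML
# without touching the database again.
nodes = {}
root = None
for parent_id, node_id, title in rows:
    elem = ET.Element("node", {"title": title})
    nodes[node_id] = elem
    if parent_id is None:
        root = elem
    else:
        nodes[parent_id].append(elem)

print(ET.tostring(root).decode("ascii"))
```

Once built, the same tree can be serialized and cached, so each client request is served without another database connection.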
server to server / business to business (Score:2)
The widget order fulfilment organization has a server that speaks XML over HTTP. We created a widget on our server to talk XML over HTTP to it. Instead of spending weeks to work out how to communicate with some proprietary server in a proprietary format, we spent a few days interfacing our servers.
XML = server to server / business to business killer technology
The consumer may someday directly use XML but I don't see that coming soon on a broad scale. HTML (with Java, Javascript, CSS, etc.) will (IMHO) be the way consumers work the web for the near future.
Of course, I could be wrong.
XML FAQ (Score:4)
Server side solutions: Exeter XML Server (Score:1)
Beware XSL (Score:2)
Looking at any non-trivial XSL stylesheets, you can see what a generally bad idea it is.
My advice would be to use a real programming language with DOM bindings.
XML.com has a good article regarding XSL: XSL considered harmful. [xml.com]
Note that XML.com also has some pro-XSL articles listed, but they aren't nearly as persuasive.
The bottom line is that the W3 "ordained" XSL to be part of the grand scheme of things, although the technology hasn't been developed in response to any particular problem.
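To make the "real programming language with DOM bindings" route concrete (an editorial sketch, not from the original poster; the tag names are invented), the same kind of transform can be written against Python's standard-library element tree:

```python
import xml.etree.ElementTree as ET

source = "<menu><item>Home</item><item>News</item><item>About</item></menu>"

# Programmatic "stylesheet": build the HTML tree directly in code
# instead of describing the transform in XSL.
ul = ET.Element("ul")
for item in ET.fromstring(source).iter("item"):
    li = ET.SubElement(ul, "li")
    li.text = item.text

print(ET.tostring(ul).decode("ascii"))
# <ul><li>Home</li><li>News</li><li>About</li></ul>
```

The transform logic lives in ordinary code, so it can be debugged and unit-tested like any other program, which is exactly the trade-off the comment above argues for.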
Seems fairly easy to me... (Score:1)
So then, if you intend to use XML to store the data, and XSL to format it, the only part of the equation left is determining which stylesheet applies to which requesting client. I have no experience with XSL (I use XML for machine data, not for human eyes) -- is it possible to determine in the document which stylesheet to use? If so, it's just a matter of writing all the stylesheets.
Of course, this all depends on everyone understanding XML and XSL. If people insist on using legacy clients (like non-XML compliant web-browsers *cough*Netscape*cough*), then there is a need for "transcoders" to do the XML/XSL interpretation and spit out HTML (|| HDML || whatever) that works with that client.
P.S. If you want applications of XML, look in the b2b e-commerce world. I'll avoid the direct plug and not name the company I work for, but the whole industry is based on XML.
XML is perfect.. (Score:1)
just in case you handt heard...
Yes but surely only for READ-ONLY resultsets. (Score:1)
Perhaps you should take a look at M$ ! (Score:2)
Believe it or not, the open-source bug has bitten M$ !
Look into M$'s sponsorship of the Schools Interoperability Framework () and maybe you can see how M$ plans to use XML (and its derivative) in real world application.
XML (Score:2)
The downside is that the solution will involve extra processing steps, extra stuff to be implemented, and impose on you a development model that might not always be convenient (not everything wants to be a document, or a conversion or transcoding between document formats).
However, there are many cases where XML is the only viable solution, and in those cases you're just glad you can solve the problem at all! A typical example is when you have documents coming from multiple sources, and you publish them to multiple targets. It's easy to see what the XML solution would look like--but the problem doesn't even fit into the other ways of doing things.
With WebMacro [webmacro.org] a common implementation strategy is to drop key XML objects into a template that is otherwise created through ordinary WebMacro HTML template gunk.
The advantage of this approach is that you can create the bread-and-butter stuff like shopping carts, authentication, login/logout, using ordinary Java servlet code and templates. (These things are nasty when you try and force them into a document model).
Then in the middle of your page somewhere you have your XML document, rendered using XSLT or something. You have other targets, besides your servlet, where you publish that same XML document, so the whole thing winds up being a rather pleasant mixture of two different programming paradigms.
Again, the key insight in this strategy is that you use XML for the parts of your problem where it is the only viable solution--and you do everything else the normal way (without the extra costs imposed by XML, since you don't need the extra power).
I worked in an SGML shop for a couple of years, and became smitten with SGML/XML. I set out to do absolutely everything I could in SGML/XML for awhile, before realizing that a traditional template tool (like WebMacro [webmacro.org]) was far more useful for typical bread and butter servlet programming.
I still use XML a lot, but now I use it intelligently, where it's needed!
XMLTP (Score:1)
Re:On the browser (Score:2)
The current situation however is that there is a plethora of browsers, which is growing rapidly, with big differences among them, between OSes and even between versions...
To develop a number of websites, one can simply not assume that users will have a specific browsers on a specific OS of a specific version...
We might evolve to a pure XML/XSL/CSS browser eventually, but until then, there has to be a different solution that can serve today... You would be amazed how much people still use Netscape 3, just because they don't have the urge to upgrade...
Java servlets are a technique, but again: it's built into the server. There are a number of servers out there, that don't have these servlets, so that another solution would come in reaallllyyyy handy.
Okay... I'll do the stupid things first, then you shy people follow.
Re:Yes but surely only for READ-ONLY resultsets. (Score:2)
I agree that that would be very very ugly, but then, I also don't think that a system should necessarily be trying to provide concurrency on the client side, especially if the client base is expected to be extensive.
In this case, as you say, record locking and concurrency handling problems would all but preclude the use of anything but the most 'beefy' RDBMS's.
In my case, perhaps I am lucky in that user interaction is not 'live', but transactional. I just present some output, and wait for the user to respond in whatever way. Once that response comes in, I have a heap of middle-tier business logic handling exactly what we should do with it.
Record locking and such issues are dealt with at that level, rather than in the backend.
And yes, I do believe that SQL Server could handle such a solution, coupled with MTS and perhaps using a little DCOM
In any case, transactions can do nothing but help the cause
Re:XML (Score:1)
There is indeed a lot of work to be done on a framework from which one can develop these applications, but its components should be recycled very easily, especially when your company's main business is developing on-line applications
;-)
Okay... I'll do the stupid things first, then you shy people follow.
Re:Related: Client-side data on demand? (Score:1)
XML and XSLT are the way to go (Score:3)
As far as IBM's product goes - once you drill down into the technical details, it looks very much like cocoon. Interestingly enough, some of the closed source components that IBM's product relies on were donated a few months back to jump start the xml.apache.org site (namely, the XML4J parser and the Lotus XSLT processor). The main thing that IBM seems to be offering here is its 'transcoder' technology - which may be interesting and certainly bears investigation, but for my money, you're better off checking out (and having a voice in the development of) the open source apache projects.
Re: more *very* useful uses of XML (Score:1)
Anyone who says "XML is one of these words everybody's talking about yet no-one really knows how to use it in specific applications or server technologies." has probably not noticed the whirlwind of activity (including many bona-fide commercial ventures) surrounding XML.
Hundreds of sites today buy syndicated news from central sources (iSyndicate.com and newsreal are two that come to mind) and receive their news feeds via XML. Also, check out webmethods.com -- here's a phenomenally successful company whose entire business model is based on XML-enabling businesses.
i'm workin' on it, dammit. (Score:3)
xml rocks. every piece of online information should be in xml. usability on the web is horrible right now. the fact that search engines and yahoo-style directories are the main entrances to the web is horrific. the fact that google can't find me a single page on gkrellm (a kick-ass system monitor for linux) pisses me off to no end when i'm bored with my current skin. with everything in xml the extraction of data would be much simpler and therefore the interfaces to the web would be much more effective.
the current problem is that
i'm working on a solution and need help...so it's actually pretty smooth that this article came out in ./ at this point.
in a huge blow to problems #1 and #2 above (as well as quite a few others), i am initiating the creation of Uberbia, the most open source of web sites. the backend is zope, which is a tres cool open source web application environment which can conveniently output its internal data as xml. what this allows is for information to be created in zope and stored in zope's native db format and served up as web pages (for instance) quickly, but then also output as xml. problem #2 solved. and when browsers can handle the xml...shove it out that way.
zope also allows for information to be very easily created and shared. this is one of the main goals of Uberbia.
the idea for Uberbia was born out of the fact that the Open Source community has been living in an environment of relatively closed content management on the internet. Sure, one could create a web page and post a HOWTO they just wrote. And then post a message to a relevant mailing list letting everyone know that resource is available. And then submit the HOWTO to the LDP and wait for it to be approved and posted on the LDP page. Uberbia will remove a lot of this hassle and allow the Open Source community to easily create and manage its content. and the data will go into an xml-aware application. problem #1 solved, at least for the Open Source community. well, okay...so i'm still workin' on it, but it'll get solved, dammit.
on trying to figure out what i was talking about, Ethan (a friend and to-be-developer of Uberbia) wrote:
sounds to me like you want to build an open-content information space. am I totally off-base? Bring "source" up to the next level of abstraction? Collaborative environments of information?
yup. he gets it. but the possibilities that arise from having such a body of contributors and open content in xml are insane. for example, imagine turning on a "newbie" feature in Uberbia that automagically inserted links to the proper entry in the jargon file for every word that was defined there. not difficult with zope and the data in xml
so, essentially i'm responding to this ask slashdot question by calling out for help with an open source project that wants to solve this problem and others. some work has been done, but there's a lot more to do. sourceforge is graciously both hosting the development of this and hosting the project itself. if you are interested at all in the development of something like this or have some really smooth-ass ideas, let me know [mailto] or join the mailing list [sourceforge.net].
i hope some of that made sense.
word, Uberdog
Re:Related: Client-side data on demand? (Score:1)
Maybe I'm just graitutuous waster of bandwidth but I'm starting to lean heavily towards writing proxy that _automatically_ grabs all links' content on a page when I hit the main page itself.
IE5 already does XML+XSL (Score:2)
It isn't too bad, either.
If no XSL stylesheet is applied then it displays the XML document using a "TreeView" default style sheet.
Also, because the XML parser & XSL thing is COM based you can use it in any language that supports COM - like Javascript/VBScript/ASP. I hate to be a MS lover, but unless you go to Java there isn't much that can do it better than that.
The new XML parser that comes with Win2000 is supposed to be 5 times faster, too. See MSDN [slashdot.org].
As far as I know there is no support in IE5 for XML+CSS. I may be wrong, there, though.
Re:Mino XML parser (Score:1)
But that is true for any web based system (Score:2)
You couldn't do it with HTML, either, could you?
Any server that uses stateful connections like that is going to have to be big & powerful.
You're looking at the problem the wrong way (Score:3)
XML really doesn't change any of the domains EXCEPT the presentation domain. You don't need an XML enabled DB, as you NEVER want to have the outside world talking directly to your DB. XML (combined with HTTP or whatever else) is one way of presenting your application. The various transforms that you would do using XSL are just "aspects" of the same presentation. So this doesn't completely change the way you build applications, just how you do your presentation.
I've written more than a few apps that were available both as GUI applications and web servers. Both versions shared the same code base up until the last layer.
As far what you need to do an XML system, I think it's a lot like an existing HTML system. With HTML, you need a database server, an app server, and a web server for an HTML system. The web server is normally scripting enabled so you can do handy transforms with the raw data.
With XML, it's basically the same concept, except your "XML server" needs to be using XSL to script transforms of the XML data. What we currently don't have is a very good way of doing this. Ideally you'd actually want the CLIENT to do the transforms as the XML data is usually much terser than whatever the XSL will generate. However, nobody trusts the clients to do this, so you might as well go with the XSL engine on the server.
Some examples... (Score:3)
There are many tools available to build such a system.
To mention only Open Source projects, I could suggest using Apache JSERV [apache.org] with Apache Cocoon [apache.org] as a framework, Castor [slashdot.org] or Quick [jxml.com] to bind XML data to Java objects and a OODBMS like ozone [ozone-db.org] or a RDBMS like PostgreSQL [postgresql.org].
These are my favorites
;)
They are very powerful and highly flexible, but the price to pay is that they are rather complex to use, that you need time to get up to speed with them, and that you lose focus on the core techniques behind them.
To try to get a good understanding of these core techniques, I have set up some simple examples showing how one can bind XML documents into Java objects, store these objects in an OODBMS, and use them in an XSLT sheet both in standalone mode and as a servlet.
These examples are available on our web at [dyomedea.com] and a mailing list [egroups.com] has been created to exchange and discuss such basic tips.
Hope this helps.
Eric van der Vlist
Re: more *very* useful uses of XML (Score:1)
Re:Mino XML parser (Score:1)
Re:Beware XSL (Score:3)
I wouldn't write off XSL on the strength of that article at xml.com...
When I first looked at XSL some months ago, I thought that it would be a messy and difficult language. I was wrong. XSL, IMHO, is the right solution for translating XML into pretty much anything. Yes, it does have a steep initial learning curve (much like our favourite OS
As for non-trivial XSL stylesheets? On our project, we have written XSL to transform our XML data into binary outputs. The stylesheets used ran into tens of thousands of lines! I think that qualifies for non-trivial in anyone's book. I admit that the XSL is difficult to read, but show me any source that is easy to read when >10k lines...
XSL as a complete solution? No. Even in a relatively simple XML to HTML documentation tool I wrote, I called the XSL from a JavaScript app that handled things like file access and other helper functions. This was under Win2k, using the built in script engine to call the XSL via COM. (yes, even MS gets things right sometimes) The point is that XSL is better for transforming XML than trying to use a DOM-manipulating language binding...
On another note, why does everyone assume that XML is solely for exchanging data on the web/net? I've used it for documentation, log files, test cases, application persistence and application exchange formats. It's a lot more useful and flexible than people think.
XML Script (Score:3)
You might like to check out this page [xmlscript.org]. One of the things they have is an interpreter (X-Tract) that reads a template (written in XML!) and performs pretty much arbitrary transformations on XML input data based on this template. Looks pretty cool and simple to use. X-Tract is free for download. Funny I didn't find any info on license terms though.
I tried doing some very simple stuff with the Linux version, and the only complaints I have are:
XML and MetaHTML (Score:3)
like programming designed to emit HTML (it was developed before XML was invented). It was developed by Brian Fox and myself when we had a company called Universal Access (ua.com). MetaHTML is superior in some ways to XSL, because it is more a general purpose programming language, yet its evaluator does a lot of the work of parsing XML syntax expressions. We used to use it to do many XML-ish things, such as generating the MetaHTML documentation automatically from a structured representation in the database.
MetaHTML has also been under GNU public license since about 1996.
Shameless plug (Score:1)
Grr.. (Score:1)
Re:Grr.. (Score:3)
Re:Shameless plug (Score:1)
Otherwise, nice work -- keep going!
Re:Mino XML parser (Score:1)
This is not a troll, I seriously don't know and want to.
10,000 line stylesheets (Score:2)
This is supposed to be good? Something is horribly broken. Perhaps a different tool would be more appropriate? How about a parser generator? (see Jikes [ibm.com])
Re:Shameless plug (Score:1)
Don't know why this comment got mangled the first time, it did look right in the preview...
Otherwise, nice work -- keep going!
Re:Grr.. (Score:1)
We already do this. Our website is live. (Score:4).
--
We already do this. Our website is live. (Score:2).
--
Standard formats are NOT required. (Score:1)
Joe
Re:Shameless plug (Score:1)
Licensing isn't mentioned on the site because it hasn't yet been fixed. It is currently not open-source, although X-Tract is and will remain free for non-commercial use. Other aspects of licensing very much depend on how we sell XML Script server apps to commercial companies, and that's yet to be decided.
As to the _data file="http://...", that's a bug - could you please send this to support@xmlscript.org so we can get to the bottom of it?
Re:XML Script (Score:1)
We're not sure why the URL retrieval isn't working (it's another library that does this for us, so we'll have to look into it further) but can you send us a bug report on the CGI thing? support@xmlscript.org [mailto] is your best bet.
We know the documentation isn't the best in the world, and we have been working on it lots for v1.1 (which you should expect to see later this week). Thanks for the feedback.
IE5 XSLT is not standard. (Score:2)
Do not assume this to be a case of embrace & extend. Microsoft just implemented XSL before the spec was finalised. They say they will bring out a compliant version soon.
Re:Mino XML parser (Score:2)
One use for XML is that you can develop entire sites using your own tag set instead of HTML. For example, if you want to represent a list of books in HTML, you would probably setup a list of items. In XML, you can do:
<book>
<name>Some Book</name>
<author>Some Author</author>
</book>
...
Which is much easier to understand. Using XSL (a stylesheet language for XML) or a parser built specifically for your tag set, that <book> tag and its subtags will actually mean something.
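As a present-day illustration of the tags "meaning something" (added in editing; the second book entry is invented for the example), a parser built for this tag set can be very short with Python's standard library:

```python
import xml.etree.ElementTree as ET

doc = """
<library>
  <book><name>Some Book</name><author>Some Author</author></book>
  <book><name>Another Book</name><author>Another Author</author></book>
</library>
"""

# The element names carry the semantics, so extraction is one loop.
for book in ET.fromstring(doc).iter("book"):
    print("%s -- %s" % (book.findtext("name"), book.findtext("author")))
# Some Book -- Some Author
# Another Book -- Another Author
```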
Writing your entire site in XML has other advantages. For example, let's say you have 100 pages on your site, all written in HTML. Now you want to change the layout of the entire site. You would have to modify the HTML of all 100 pages. If all those pages were written in XML, however, you would have to modify only one file, the XSL stylesheet.
XML also has support for namespaces. A namespace (in XML) is a group of tags. Each namespace has a URI. For example, the upcoming XHTML 1.0 namespace is (that link does not actually exist though). Namespaces are very useful. If you were writing a document in XHTML and wanted to include tags from your own tagset, you would call in your namespace, and you would suddenly be able to use your own tags.
My parser will have XSL support soon. For now, you can write the modules in C/C++ and the parser will load them automagically using the namespaces and parse the XML.
I have a few articles/tutorials I've written over at gelicon.com [gelicon.com] on XHTML, XML, DTDs, and namespaces. Hopefully they will offer a better understanding.
Re:Uses of XML in the real world... (Score:2)
XML is just a consistent way of presenting information, not some major enabling-technology.
There is XSLT and then there's XSL:FO (Score:1)
XSL Transformations:
Transforms any XML document type into another. This can include HTML if it is well formed e.g. XHTML. In reality, it really is not just for "Stylesheets" but can also be used for data to data transformation. The W3C have published a recommendation (their version of a standard) and there are many implementations.
XSL Formatting Objects.
Formats XML for print or screen display. Powerful, complex typesetting-style system, you could use the analogy "PDF/Postscript for XML". Not a standard yet, and only one partial implementation of an old working draft (FOP).
A lot of the guy's criticism in that article refers to the second part of XSL, which is not what people are using, or referring to, when they discuss XSL here.
I don't find the guy's article that persuasive; it is full of assertions without proving them. Most of his gripes are directed towards formatting objects, which is complex, but the momentum behind XSL relates to XSL Transformations.
Re:Yes but surely only for READ-ONLY resultsets. (Score:1)
InfoShark [infoshark.com]
Re:Related: Client-side data on demand? (Score:1)
Re:10,000 line stylesheets (Score:1)
Apache XML Project has many tools for this (Score:1)
By the way you're putting the problem, it seems that XSLT is the answer for your questions.
The Apache XML project has an XSLT processor called Xalan that can take care of much of that part (I haven't tested any other XSL processors yet). Just link your XML document / DOM tree to a style sheet and you have a document transformed to the format you like.
The only reason I see that this is needed is because nowadays only IE 5 and Mozilla can work natively with XML files and linked style sheets (and that locks you to CSS for Mozilla), so if you plan to use XML with any other device, AFAIK, you will have to use some kind of transformation processor. It can be used to transform an XML doc to another XML doc, but that escapes from the presentation field.
Just take a look at their page and make some tests. They're pretty nice tools, and quite easy to work with.
--
Marcelo Vanzin
Re:Standard formats needed... (Score:2)
What we do need are tools to manipulate XML. The tools for reading and writing XML are already there. What we need next is tools to transform XML documents (the standard to specify these transformation already exists: XSL).
I think there are several initiatives in this direction. (sorry I don't have any references).
Like many people I see a great future for XML but I think the coming few years will be characterized by a lot of redundant programming since everybody will individually attempt to implement more or less the same components. It would be nice to see some reusable components on the serverside.
Learning curve of XSL (Score:1)
You create templates to match the different kinds of elements, and work your way down the tree of the document. This approach allows the stylesheet to work with documents with different numbers of elements, or slightly different structure. Some problems are solved with recursion.
You can do a simple approach where you have a fixed structure document, and insert values from the XML at certain points. This works for a lot of problems.
The main problem I had when learning XSL is study material. The specifications don't function as a tutorial. I recommend ooks/bible/updates/14.html [unc.edu]. It is a version of the chapter on XSL from the XML Bible, updated for the W3C recommendation. I wish I had found it sooner (I have the book, by the way, very good).
Re:Related: Client-side data on demand? (Score:2)
A second reason you'll have less network traffic is that you don't have to put layout information in the XML files. Rather, you download a separate XSL file (which can be cached). Subsequent communication consists of data only.
Microsoft has some nice demos on their site (yes, I know it's proprietary and all, but it's there) and I think Mozilla also has a few nice demos.
An interesting way to connect a DB & XML (Score:1)
Instead of converting the entire database to an XML file, which consumes a lot of resources, and has synchronization issues, this approach places an XML API frontend on the JDBC system. This creates a "virtual" XML document that other XML tools can access via DOM or SAX.
For example, they create a SAX frontend for JDBC, and use it with a SAX-based XSL tool (XT) to transform the data to HTML. So, for example, where the database encounters a column for CustomerName, the template for a CustomerName entity in the XSL sheet is triggered. To the XSL tool and stylesheets it seems as if they are accessing an XML document.
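That idea can be sketched in a few lines of Python: walk the result rows and fire SAX-style events directly, so downstream XML tools see a "virtual" document that is never materialized as a file or tree. The rows and element names below are invented to mirror the CustomerName example; a real frontend would pull rows from a live database cursor.

```python
import xml.sax.handler

# Invented rows standing in for a JDBC/database cursor result.
rows = [("Alice", "alice@example.com"), ("Bob", "bob@example.com")]

class Collector(xml.sax.handler.ContentHandler):
    """Any SAX content handler works here; this one just records markup."""
    def __init__(self):
        self.out = []
    def startElement(self, name, attrs):
        self.out.append("<%s>" % name)
    def characters(self, content):
        self.out.append(content)
    def endElement(self, name):
        self.out.append("</%s>" % name)

def stream_rows(rows, handler):
    # Fire SAX events straight from the rows: the "document" is virtual
    # and never exists on disk or as a DOM tree.
    handler.startElement("customers", {})
    for name, _email in rows:
        handler.startElement("customer", {})
        handler.startElement("CustomerName", {})
        handler.characters(name)
        handler.endElement("CustomerName")
        handler.endElement("customer")
    handler.endElement("customers")

h = Collector()
stream_rows(rows, h)
print("".join(h.out))
```

Swap `Collector` for an XSLT engine's SAX input and you get the row-to-HTML transformation the comment describes, without ever building the intermediate XML file.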
A small warning... (Score:3)
Its slow. VERY slow.
Most XSL implementations have significant performance and scalability issues as compared to more common custom technology for producing dynamic web pages.
There's no argument that it's a better technology, but I've known several commercial web sites that have spent considerable resources developing XML/XSL implementations and had to roll back the technology when they discovered they needed four or five times the number of servers to be able to use it.
Anyone know of any top-tier sites that are actually using the technology?
NNTP? (Score:2)
XML should be used where its appropriate. I'm unconvinced that client-side transformations are the right thing.
do as the ORBs do (Score:1)
This has the advantage of reusing existing (ORB) technology for new purposes, and fits into an existing ideology that many already understand.
You would put a client of one of these XML ORBs into Apache or your browser client, and be able to exchange documents and DTDs freely just as with code objects and traditional ORBs.
Or so I would hope.
Any XML/SQL mapping ? (Score:1)
Re:Uses of XML in the real world... (Score:1)
I saw a briefing from the W3C at the last Builder.Com conference and he had some interesting things to say. Specifically, he stated that the best uses he could see for XML right now are long-term storage and data conversion. Since XQL is nothing but vapor right now, searching and parsing it on the fly is a nightmare.
Just seems to me that with an Oracle 8i backend database that burns out static HTML if you need speed would be simpler than trying to incorporate an XML solution unless you needed multiple systems with different architectures to work with the same data. Even in that case you could just parse the database data into XML pages instead of HTML.
Anyone formatting XML output for printing? (Score:1)
I'm thinking of something along the lines of Formscape which can format data for invoices and purchase orders and such.
Why? (Score:4)
Spend your time working with those tools (XML4C, expat, rxp to name a few) to create higher level tools. Don't re-implement an XML parser - I can guarantee you it will be full of obscure bugs where you didn't understand the spec, didn't understand how to cope with character encodings, or just did something wrong. This stuff, despite the XML spec suggesting that a graduate could write a parser in a matter of weeks, is hard, and experienced people (such as James Clark) have put out excellent products for all to use under non-restrictive licences. There's even an LGPL parser already out there called libxml (ships with gnome).
If you don't believe you'll create a broken parser, see the recent XML conformance tests on XML.com.
I'd also love to see you move from a non-working XML parser to something supporting XSL "in the near future". I appreciate your enthusiasm, but the XPath spec has some tough little nuts to crack (I know - I'm cracking them right now) and then implementing XSLT from an 80-odd page spec - wow - good luck to you!
(I'm not trying to poo-poo your project, but so many people start working on stuff that's already being worked on in the open-source community that it's just wasted effort).
XML Repositories (Score:1)
Fact is, XML is great for data interchange, plugging large amounts of standard information into standard forms (POs, RFQs and other business docs), as well as putting some muscle into search engines via context-based searching (via XML metadata), but there are way too many standards out there.
- BizTalk - This is the standard, open nonetheless, that MSFT [microsoft.com] is developing to standardize XML. It is an open standard, but the obvious benefit to MSFT is that they can plug Biztalk functionality right into all of their product lines for interoperability across a platform.
- OASIS's XML.org - OASIS, a non-affiliated standards body, much like W3C, set out to develop a standardized set of XML schemas and DTDs (document type definitions); however, MSFT beat them to the punch by launching their BizTalk site a day before OASIS. Ahhh, Microsoft finds a way to compete even in open standards.
- RosettaNet - These guys set out to "map" all common business processes and to make an open standard for XML in the business world, but, alas, mapping entire processes takes a long time. A lot of notoriety here, not as much substance.
These are just a few examples; there are others, but my guess is that you'll hear the most about these folks. To make things even more complicated, although these guys seem to be "competing", they are almost all members of each others' groups, in a sort of "coopetition" model. So, overall, it is no wonder the big push is for standards repositories, and related translation to and from various formats.
That's my $.02
Performance Issues, XSL and Available Tools (Score:1)
On performance, it really matters what kind of parser you use. There are two standard parser interfaces: the event-driven SAX and the tree-based DOM.
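A minimal Python sketch of the practical difference between the two standard interfaces, SAX (event-driven streaming) and DOM (an in-memory tree), using only standard-library parsers:

```python
import xml.sax
import xml.dom.minidom

doc = "<tasks><task>one</task><task>two</task></tasks>"

# DOM: load the whole document into an in-memory tree.
# Random access, but memory grows with document size.
dom = xml.dom.minidom.parseString(doc)
dom_count = len(dom.getElementsByTagName("task"))

# SAX: the parser streams events to a handler as it reads.
# Forward-only, but roughly constant memory -- usually the faster
# choice for large documents or server-side transformation.
class TaskCounter(xml.sax.ContentHandler):
    def __init__(self):
        self.count = 0
    def startElement(self, name, attrs):
        if name == "task":
            self.count += 1

counter = TaskCounter()
xml.sax.parseString(doc.encode("utf-8"), counter)

print(dom_count, counter.count)  # both see 2 task elements
```

Both interfaces yield the same information; the choice is a memory/convenience trade-off, which is why the performance complaints elsewhere in this thread usually come down to which parser interface a tool sits on.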
There has been a lot of argument this year over whether or not to use XSL to style XML documents. I think the jury is still out on this -- at least as far as pure display style is concerned. (There are a lot of CSS loyalists out there as well.) But XSLT as a transformation language for XML is a real winner. One of the reasons is simple but profound -- XSLT is XML and is parseable and transformable just like any other XML document. You can create a stylesheet by using another specialized XSLT sheet to transform an XML or XSL document into the stylesheet you want. This can be very powerful, but difficult to debug.
Finally, I am surprised that nobody on this site has mentioned the expat (stream based) parser by James Clark that is an almost standard part of the modules for Perl5. I am learning Perl using the ActiveState port on NT and am having a whale (camel?) of a time, and the expat parser is clean and fast and fun.
Oh, and one final note -- while there are some really useful books on XML, I suggest you keep to the basic reference type (Neil Bradley's The XML Companion is next to me on my desk right now, and there is a second edition out) and use the net as your basic resource, especially lists like XML-DEV. Things are moving way too fast.
SGML Word Processor (Score:1)
SGML or XML would seem to be perfect for an open source word processor. One of the biggest obstacles to exchanging information in business is the many proprietary document formats. It would seem that if such a program could become the standard (I know that's a big if), it could be a potential killer app for linux in the business world. Especially if it came out on linux first. But even if it didn't, the linux version could be free whereas a windows version would most likely be proprietary. And I would place far more trust in an open source application complying with standards than I would one which is closed.
I know word processing isn't fun or sexy, but its an extremely important part of computing and should receive more attention than it has.
Re:SGML Word Processor (Score:1)
Help solve the problem (Score:1)
This question is what the people on the Apache XML project [apache.org] spend more or less all their time on: not just talking about it, but building stuff. If you care, join up.
Having said that, XSLT may be magic, but "old-fashioned" solutions like PHP and Zope and plain old perl-backed CGIs (perl includes an excellent XML parser) ain't going away anytime soon.
Re:Why? (Score:1)
Re:XML (Score:2)
XML doesn't solve this problem either. Writing a different stylesheet for each browser winds up being just as much work. The key is to get all of that work out of your source code, so that it is independent of the application. You can do that by using a template system.
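As an illustration of that last point, here is a minimal Python sketch of a template system kept outside the application logic. The template strings and target names ("desktop", "wap") are made up for the example; the point is only that the source code supplies data while presentation lives elsewhere.

```python
from string import Template

# Illustrative per-target templates, kept out of the application code;
# in practice these would be loaded from files a designer can edit.
templates = {
    "desktop": Template("<h1>$title</h1><p>$body</p>"),
    "wap": Template('<card title="$title">$body</card>'),
}

def render(target, title, body):
    # The application only supplies data; switching browsers or media
    # means swapping a template, not touching this code.
    return templates[target].substitute(title=title, body=body)

print(render("wap", "News", "XML everywhere"))
```

Whether the templates are strings like these or XSL stylesheets, the separation is the same; the argument in this thread is over which templating mechanism scales better.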
The IBM example has multiple sources of documents feeding multiple target formats, where those targets are diverse--not just different forms of HTML, but different media altogether. In those cases XML is a big win.
XML applications *do* exist (Score:2)
XML is one of these words everybody's talking about yet no-one really knows how to use it in specific applications or server technologies
I disagree. Check out the W3C's SVG standard [w3.org]. This is for real.
If you've ever had to muck about with all of the different proprietary flavors of vector graphics formats, you know what a great thing this will be.
That said, I personally *don't* believe in across-the-board XML standardization panacea. Some things deserve standardization, others don't.
Accountants all adhere to accepted standard accounting practices. This is what makes it possible to encapsulate their work into shrink-wrapped database products that pretty much any accountant can use. But this only works because the process is so well known.
So I disagree vehemently that business-to-business transactions, for example, are ripe for XML standardization. Why? Because who the heck is such an expert on these kinds of transactions to be telling everyone else how to do it? There's a lot of trial-and-error to go through before anyone should start proposing standards.
And remember: "You can't vote for anarchy".
;~)
The Road Ahead --- and some pitfalls. (Score:1)
My reason for going on about multi-user, record-locking databases is this: assume you build a good web site, nice and fast and so on, used by many people. I would suspect that, as per the old adage 'No good deed goes unpunished', your boss would then ask you to build a more interactive site.
Then you realise to your horror that XML doesn't really help at all when it comes time to trying to re-mesh updated/changed XML 'data bursts' back in to the main DB.
Another thing that just occurred to me - Surely the queries needed to get the hierarchical data have to be expressed in SQL. If so, surely the cost in terms of logical/physical reads (i.e. the cost to the server of doing the queries) will be the same whether you do them all at once, to build your XML 'data burst' or whether you run them just as the user requests them.
In Oracle you can keep open connections to the server at all times (and even pre-start some at DB startup) i.e. the connection latency is very small. I think SQL Server would have to be configured to pool connections in some way. Does SQL Server 7 let you do this? Does MTS let you do this? I'm not sure.
BTW what are your feelings on MS having to delay the In-Memory Database and COM+ (component) load balancing. As I remember they had to drop them from basic W2000 Server and have said you'll get them in the W2000 Datacenter edition. It might be that without these features your DCOM and MTS architecture might run out of steam. (You might even have to tell your boss to splash out on W2000 datacenter edition as well!).
Just some thoughts.
Re:Related: Client-side data on demand? (Score:1)
Here's an interesting example:
XSL Sample [microsoft.com]
Which is from the following article:
Choosing between XSL and CSS [microsoft.com]
Of course, solutions like this aren't very appropriate yet for public websites, as they require IE 5.0. But the technology is very exciting.
There are several other examples on this site that utilize client-side XML processing to dynamically change the way data is displayed - sorting a baseball roster by name or batting averages, or even calculating and displaying statistics on the client.
Re:Related: Client-side data on demand? (Score:2)
Anyway, I don't think that the bandwidth problem is caused by either HTML or XML. The real problem is the objects that are referenced like for instance gif or jpg images and that won't change I'm afraid.
XSL can't match programming languages from the 60s (Score:1)
Does XSL encourage reuse through its syntax? No
Does XSL base its constructs on proven language design ideas picked up in the last twenty years? No
I have no idea why people are so ga-ga over a language that predates Algol-6x in its design.
10,000 line sheets necessary due to dumb syntax (Score:2)
For someone who uses a language like Python or Java, I can't imagine why they would find anything compelling about XSL. It really is a dog language. Most people are just too ga-ga over the fact that it is encoded in XML to see how lame it really is.
Thankfully, few people are rallying behind it.
Cocoon project? (Score:1)
So how do you convert XML->HTML or XML->WAP? (Score:1)
If you "already do this [convert XML->HTML or XML->WAP]", how does that work? Is it custom?
XML Summary and History -- Comments on Transcoding (Score:2)
I believe eventually we are going to get to a point where server-side transcoding will not be necessary. However, this will be several years, and we are going to have to learn how to do all of this efficiently.
I am even developing my own transcoding software process because I believe I have a better method of doing it than what is currently available. If and when I succeed, it will be closed-source because I want to make money off of my product, not just give away all my hard work.
Anyway, the next few years are going to be very interesting.
E
Well, you've got it sort of wrong. (Score:1)
This format in particular offers no modularity or reuse features, and there is nothing about XML that strictly forbids such features.
Re:Mino XML parser (Score:1)
Re:XML Summary and History -- Comments on Transcod (Score:1)
Call me pedantic, but I have some issues with the following statement:
HTML and XML are related formats; in fact, HTML can be defined as a subset of XML.
This is a bit of a peeve of mine. HTML is an application of SGML, not a subset of SGML, and definitely not a subset of XML.
A lot of stuff that's in HTML is not legal in XML, like the IMG tag and the OPTION tag, which don't require closing tags in HTML.
Which is why XHTML [w3.org] was created.
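The well-formedness point is easy to demonstrate with any strict XML parser: the classic HTML form of an empty element like IMG is rejected, while the self-closed XHTML form parses. A quick Python check:

```python
import xml.etree.ElementTree as ET

html_style = '<p><img src="a.png"></p>'     # legal HTML, not well-formed XML
xhtml_style = '<p><img src="a.png" /></p>'  # XHTML: empty element is closed

def well_formed(markup):
    # An XML parser either accepts the whole document or raises;
    # there is no HTML-style tag soup recovery.
    try:
        ET.fromstring(markup)
        return True
    except ET.ParseError:
        return False

print(well_formed(html_style), well_formed(xhtml_style))  # False True
```

This is exactly the gap XHTML closes: the same vocabulary as HTML, constrained to XML's well-formedness rules.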
Re:Performance Issues, XSL and Available Tools (Score:1)
I'd like to echo your shout out to James Clark's products. On the Java front, his XT library implements XSLT, and uses a SAX parser (which, as was pointed out, implies better performance than DOM). [jclark.com]
Yes, unfortunately XML is getting overhyped (Score:1)
XML only solves the problem of data formatting.
There are some doc-heads out there that are trying to wrap XSL, XQL, XPath, and some of the other proto-standards into one cohesive view of the world, but it really isn't there yet.
SQL databases are still the way to go for storage - more due to uptime and recoverability than anything else. Also, regular programming languages such as Python and Java, when used with DOM bindings are still a more powerful, efficient, and flexible solution than XSLT or XSL-FO.
Re:XML Summary and History -- Comments on Transcod (Score:1)
Re:Why? (Score:1)
I'm not saying everybody has to use this program, or that it will be the #1 XML parser. I'm just saying it's something useful I'm developing, which is also helping me learn a great many things about XML development.
Besides, it gives me something to do
Re:XML Summary and History -- Comments on Transcod (Score:1)
HTML has recently been slightly altered into the XHTML DTD.
A person can use any XHTML DTD in any XML document.
So saying that HTML is a subset of XML is not far from the truth. I am also willing to bet a person would have moderate success using a regular HTML DTD in an XML document, but it would not be worth it.
E
Re:FIRST JESUS POST (Score:1)
XML and Transcoding - How IBM would do it (Score:2)
As you hinted in your note, it can sometimes be a challenge to select the best stylesheet to apply to a given XML document. The gateway may want to choose a stylesheet based on the source document and the destination browser or device. In addition, different stylesheets may be better suited to specific user preferences or network connections. The IBM transcoding technology includes a way to select the "best" stylesheet to apply in a given situation.
The Transcoding technology can also adapt content other than XML for different clients. HTML requires special processing because you can't apply stylesheets to it directly, since it's not well formed. Images also require special handling to adapt them for the destination device. The whole transcoding gateway may be a separate component, installed as an HTTP proxy, or it may be configured as a servlet on the same server that is the content source.
Jabber is XML (Score:1)
Re:You're looking at the problem the wrong way (Score:1)
I only partially agree. In the presentation domain, XML can be used to isolate the logical structure of the data from the HTML/WML/etc. It's very useful for this, but beware of the slowness of XSLT (as others have commented). I found that using the fastest XSLT (the jclark version [jclark.com]) it still took around 300 ms to produce about 20K of HTML from XML.
In my situation, much of the XML was static information, so I decided to generate JSP output using XSLT instead, since JSP is compiled; the same could be done with another compiled scripting language. What was most interesting to me was the problem of isolating the static parts of the page, which could be compiled in JSP, from the dynamic parts, which had to come from the database / application layer. In this case, the tag extensions in the latest JSP (1.1) are very handy. They allow the JSP file to be a well-formed XML document, and therefore easily generated by XSLT, and the extended tags can be programmed to interact with the application layer in a very clean way. The tag extensions could be programmed to interact with either an application object or an XML DOM, although the latter is actually more cumbersome.
I agree that XML is not very valuable as a direct interface to the database -- there should always be a layer between the database server to enforce access control, implement rules, etc. However, XML is useful as an exchange format between loosely connected servers, such as in B2B interactions. In these cases it is better than using distributed objects, because the coupling is looser and easier to define. But I'm of the opinion that the XML should represent a high-level operation, not database rows.
Re: more *very* useful uses of XML (Score:1)
We have it in production (Score:2)
XML rocks. You don't need to stuff your head full of theoretical debates about namespaces, general entities, etc. All you need is vi (or Notepad) and Saxon. To learn XML syntax, just write XML files by hand and feed them to SAXON until it no longer reports XML errors. To learn XSL, just write XSL files until you get SAXON to actually spit out some HTML. Lots of examples are available to accelerate the trial and error process.
When you are finally ready to integrate the whole shebang into actual applications, there are tons of open-source tools to choose from. Look at the list above again - Apache,PHP,MySQL,SAXON - cost zero - this combo drives one of France's most popular Websites.
Re:Why? (Score:2)
If that doesn't bake yer noodles, download rxp which also does validation against a dtd.
Really, work on providing XPath and XSL support for expat - the community will thank you _much_ more for it.
Hummingbird & XML (Score:2)
Re:The Road Ahead --- and some pitfalls. (Score:2)
Making A Mac App Scriptable Tutorial
Allow users to write scripts to control your OS X app – giving it unprecedented usability. Discover how in this “Making a Mac App Scriptable Tutorial”.
Update 9/21/16: This tutorial has been updated for Xcode 8 and Swift 3.
As an app developer, it’s near impossible to think of all the ways people will want to use your app. Wouldn’t it be cool to let your users create scripts to customize your app to their own personal needs?
With AppleScript and JavaScript for Automation (JXA), you can! In this Making a Mac App Scriptable tutorial, you will discover how to add scripting capabilities to a sample application. You'll start by learning how to control existing apps with scripting, and then extend a sample app to allow custom script actions.
Getting Started
Download the sample project, open it in Xcode and build and run to see how it looks:
The app shows a list of tasks with due dates in the next few days and the tags associated with each task. It uses an outline view to group the tasks by due date.
You might have noticed that you can’t add, edit, or delete any tasks. That’s by design – these actions will be handled by your user automation scripts.
Take a look at the files in the project:
- There are 2 model class files: Task.swift and Tag.swift. These are the classes that you will be scripting.
- The ViewController group handles the display and watches for changes in the data.
- The Data group has a file with the sample tasks and a DataProvider that reads those tasks and handles any edits that arrive.
- The AppDelegate uses a DataProvider object to keep a record of the app's tasks.
- The ScriptableTasks.sdef file is a crucial file…which you will explore in detail later.
There are sample scripts for this tutorial as well; download them here. There are two folders in this package: one for AppleScript and one for JavaScript. Since this tutorial isn’t focused on how to write scripts, you’ll be using each of the downloaded scripts to test the functionality that you’ll add to Scriptable Tasks.
Enough with the talk – time to move on to the scripting! :]
Using the Script Editor
Open up the Script Editor app, found in Applications/Utilities, and open a new document:
You’ll see a set of four buttons in the top toolbar: Record, Stop, Run, and Compile. Compile checks that your scripting is syntactically correct, and Run does pretty much what what you’d expect.
At the bottom of the window, you’ll see three icons which switch between views. Description lets you add some information about your script, while Result shows the final result of running a script. The most useful option is the third button: Log.
The Log offers a further four options: Result, Messages, Events and Replies. Replies is the most informative, as it shows a log of every command and the return value of that command. When testing any scripts, I highly recommend the Log in Replies mode.
Note: If you ever open an AppleScript file and find it contains code like this:
«class TaSk» whose «class TrFa» is false and «class CrDa», click Compile and it will be translated to readable AppleScript, provided you have the target app installed.
There are two scripting languages you’ll cover in this tutorial. The first is AppleScript, introduced with Mac System 7 in 1991, which uses an English-like syntax to make it usable by coders and non-coders alike.
The second is JavaScript for Automation (JXA), introduced by OSX Yosemite, which lets coders use the familiar JavaScript syntax to build their automation tasks.
The scripts in this tutorial will be presented in both AppleScript and JXA, so you’re free to wander down the path of whichever language you’d like to explore. :]
Note: Throughout this tutorial, the scripting code snippets are presented in AppleScript first, and immediately followed by the equivalent JavaScript version.
Exploring App Scripting With TextEdit
There’s a great little app already installed on your Mac that supports scripting: TextEdit. In the Script Editor, select Window/Library and look for the TextEdit entry. If it’s not there, click the Plus button at the top, navigate to your Applications folder and add TextEdit. Then double-click the TextEdit entry to open the TextEdit dictionary:
Every scriptable app has a dictionary, stored in a scripting definition (SDEF) file. The dictionary tells you what objects the app has, what properties the objects have and what commands the app responds to. In the above screen shot, you can see that TextEdit has paragraphs, and paragraphs have color and font properties. You will use this information to style some text.
Open 1. TextEdit Write.scpt from either the AppleScript or the JavaScript folder. Run the script; you’ll see TextEdit create and save a document.
You now have a new document, but it needs a bit of styling. Open 2. TextEdit Read Edit.scpt, run this script and you’ll see the document re-opened and styled as per the script.
Although delving into the actual script is beyond the scope of this tutorial, feel free to read the scripts in detail to see how they act on the TextEdit document.
As mentioned in the introduction, all apps are scriptable to some extent. To see this in action, ensure Scriptable Tasks is running. Next, open a new script window in Script Editor and enter one of the following scripts, depending on which language you’re using:
-- AppleScript
tell application "Scriptable Tasks" to quit
or
// JavaScript
Application("Scriptable Tasks").quit();
Click Run and Scriptable Tasks should quit. Change the script to the following and click Run again:
tell application "Scriptable Tasks" to launch
or
Application("Scriptable Tasks").launch();
The app restarts, but doesn’t come to the foreground. To bring the app into focus, change
launch to
activate in the script above and click Run.
Now that you’ve seen that apps can respond to scripting commands, it’s time to add this ability to your app.
Making Your App Scriptable
The scripting definition file of your app defines what the app can do; it’s a little like an API. This file lives in your app project and specifies several things:
- Standard scripting objects and commands, such as window, make, delete, count, open, and quit.
- Your own scriptable objects, properties and custom commands.
In order to make classes in your app scriptable, there are a few changes you’ll need to make to the app.
First, the scripting interface uses Key-Value Coding to get and set the properties of objects. In Objective-C, all objects conformed to the KVC protocol automatically, but Swift objects don't do so unless you make them subclasses of NSObject.
Next, scriptable classes need an Objective-C name that the scripting interface can recognize. To avoid namespace conflicts, Swift object names are mangled to give a unique representation. By prefixing the class definitions with @objc(YourClassName), you give them a name that can be used by the scripting engine.
Scriptable classes need object specifiers to help locate a particular object within the application or parent object, and finally, the app delegate must have access to the data store so it can return the application’s data to the scripts.
You don’t necessarily have to start your own scripting definition file from scratch, as Apple provides a standard SDEF file that you can use. Look in the /System/Library/ScriptingDefinitions/ directory for CocoaStandard.sdef. Open this file in Xcode and have a look; it’s XML with specific headers, a dictionary and inside that, the Standard Suite.
This is a useful starting point, and you could copy and paste this XML into your own SDEF file. However, in the interest of clean code, it’s not a good idea to leave your SDEF file full of commands and objects that your app does not support. To this end, the sample project contains a starter SDEF file with all unnecessary entries removed.
Close CocoaStandard.sdef and open ScriptableTasks.sdef. Add the following code near the end at the
Insert Scriptable Tasks suite here comment:
<!-- 1 --> <suite name="Scriptable Tasks Suite" code="ScTa" description="Scriptable Tasks suite."> <!-- 2 --> <class name="application" code="capp" description="An application's top level scripting object."> <cocoa class="NSApplication"/> <!-- 3 --> <element type="task" access="r"> <cocoa key="tasks"/> </element> </class> <!-- Insert command here --> <!-- 4 --> <class name="task" code="TaSk" description="A task item" inherits="item" plural="tasks"> <cocoa class="Task"/> <!-- 5 --> <property name="id" code="ID " type="text" access="r" description="The unique identifier of the task."> <cocoa key="id"/> </property> <property name="name" code="pnam" type="text" access="rw" description="The title of the task."> <cocoa key="title"/> </property> <!-- 6 --> <property name="daysUntilDue" code="CrDa" type="number" access="rw" description="The number of days before this task is due."/> <property name="completed" code="TrFa" type="boolean" access="rw" description="Has the task been completed?"/> <!-- 7 --> <!-- Insert element of tags here --> <!-- Insert responds-to command here --> </class> <!-- Insert tag class here --> </suite>
This chunk of XML does a lot of work. Taking it bit by bit:
- The outermost element is a
suite, so your SDEF file now has two suites: Standard Suite and Scriptable Tasks Suite. Everything in the SDEF file needs a four-character code. Apple codes are nearly always in lower-case and you will use a few of them for specific purposes. For your own suites, classes and properties, it’s best to use a random mix of upper-case, lower-case and symbols to avoid conflicts.
- The next section defines the application and must use the code
"capp". You must specify the class of the application; if you had subclassed
NSApplication, you would use your subclass name here.
- The application contains
elements. In this app, the elements are stored in an array called
tasks in the app delegate. In scripting terms, elements are the objects that the app or other objects can contain.
- The last chunk defines the
Task class that the application contains. The plural name for accessing multiples is
tasks. The class in the app that backs this object type is
Task.
- The first two properties are special. Look at their codes:
"ID " and "pnam". "ID " (note the two spaces after the letters) identifies the unique identifier of the object. "pnam" specifies the name property of the object. You can access objects directly using either of these properties. "ID " is read-only, as scripts should not change a unique identifier, but "pnam" is read-write. Both of these are text properties. The "pnam" property maps to the title property of the Task object.
- The remaining two properties are a number property for
daysUntilDue and a Boolean for completed. They use the same name in the object and the script, so you don’t need to specify the cocoa key.
- The “Insert…” comments are placeholders for when you need to add more to this file.
Open Info.plist, right-click in the blank space below the entries and select Add Row. Type an upper-case S and the list of suggestions will scroll to Scriptable. Select it and change the setting to YES.
Repeat this process to select the next item down: Scripting definition file name. Set this to the name of your SDEF file: ScriptableTasks.sdef
If you prefer to edit the Info.plist as source code, you can alternatively add the following entries inside the main dict:
<key>NSAppleScriptEnabled</key> <true/> <key>OSAScriptingDefinition</key> <string>ScriptableTasks.sdef</string>
Now you have to modify the app delegate to handle requests that come via script.
Open AppDelegate.swift file and add the following to the end of the file:
extension AppDelegate { // 1 override func application(_ sender: NSApplication, delegateHandlesKey key: String) -> Bool { return key == "tasks" } // 2 func insertObject(_ object: Task, inTasksAtIndex index: Int) { tasks = dataProvider.insertNew(task: object, at: index) } func removeObjectFromTasksAtIndex(_ index: Int) { tasks = dataProvider.deleteTask(at: index) } }
Here’s what’s going on in the code above:
- When a script asks for
tasks data, this method will confirm that the app delegate can handle it.
- If a script tries to insert, edit or delete data, these methods will pass those requests along to
dataProvider.
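Outside Cocoa, the same delegation shape can be modeled in a few lines of plain JavaScript. Everything below (the provider, the method names) is hypothetical and only mirrors the structure of the Swift extension above:

```javascript
// Hypothetical stand-in for the app's data provider.
const dataProvider = {
  tasks: ["Feed the cat"],
  insertNew(task, index) {        // returns the updated array, like the Swift version
    this.tasks.splice(index, 0, task);
    return this.tasks;
  },
  deleteTask(index) {
    this.tasks.splice(index, 1);
    return this.tasks;
  },
};

const appDelegate = {
  tasks: dataProvider.tasks,
  // Mirrors application(_:delegateHandlesKey:) - only "tasks" is scriptable here.
  delegateHandlesKey(key) {
    return key === "tasks";
  },
  insertObjectInTasksAtIndex(task, index) {
    this.tasks = dataProvider.insertNew(task, index);
  },
  removeObjectFromTasksAtIndex(index) {
    this.tasks = dataProvider.deleteTask(index);
  },
};

appDelegate.insertObjectInTasksAtIndex("Water the plants", 0);
console.log(appDelegate.delegateHandlesKey("tasks")); // true
console.log(appDelegate.tasks.length);                // 2
```

The key point is that the delegate never owns the mutation logic; it only vets the key and forwards the call.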
To make the
Task model class available to the scripts, you have to do a bit more coding.
Open Task.swift and change the class definition line to the following:
@objc(Task) class Task: NSObject {
Xcode will immediately complain that
init requires the
override keyword, so let Fix-It do that. This is required as this class now has a superclass:
override init() {
Task.swift needs one more change: an object specifier. Insert the following method into the
Task class:
override var objectSpecifier: NSScriptObjectSpecifier { // 1 let appDescription = NSApplication.shared().classDescription as! NSScriptClassDescription // 2 let specifier = NSUniqueIDSpecifier(containerClassDescription: appDescription, containerSpecifier: nil, key: "tasks", uniqueID: id) return specifier }
Taking each numbered comment in turn:
- Get a description of the app’s class since the app is the container for tasks.
- Get a description of the task by id within the app. This is why the Task class has an
id property – so that each task can be correctly specified.
You’re finally ready to start scripting your app!
Scripting Your App
Before you start, make sure to quit any running instance of the app that Script Editor might have opened.
Build and run Scriptable Tasks; right-click on the icon in the Dock and select Options/Show in Finder. Quit the Script Editor app and restart it to let it pick up the changes to your app.
Open the Library window, and drag the Scriptable Tasks app from the Finder into the Library window.
If you get an error saying the app is not scriptable, try quitting Script Editor and starting it again as it sometimes doesn’t register a freshly built app. If it still fails to import, go back and double-check your changes to the SDEF file.
Double-click Scriptable Tasks in the Library to see the app’s dictionary:
You’ll see the Standard Suite and the Scriptable Tasks Suite. Click on the Scriptable Tasks suite, and you will see what you put into the SDEF file. The application contains tasks, and a task has four properties.
Change the scripting language in the dictionary to JavaScript using the Language popup in the toolbar. You will see the same information but with one important change. The cases of classes and properties have changed. I have no idea why this is, but it’s one of those “gotchas” you need to watch out for.
In Script Editor, make a new script file and set the editor to show Log/Replies. Test either of the following scripts, making sure to select the appropriate language in the language pop-up:
tell application "Scriptable Tasks" get every task end tell
or
app = Application("Scriptable Tasks"); app.tasks();
In the log, you will see a list of the tasks by ID. For more useful information, edit the scripts as follows:
tell application "Scriptable Tasks" get the name of every task end tell
or
app = Application("Scriptable Tasks"); app.tasks.name();
Try out a few more of the sample scripts you downloaded earlier. When running the scripts, make sure you set the Script Editor to show Log/Replies so that you can see the results along the way.
Each script quits the app before running it again; this is to reset the data after any edits so that the sample scripts work as expected. You wouldn’t normally do this in your own scripts.
Note: Script Editor can get very confused as you build updated versions of the app, because it tries to keep a version running at all times if you have an open script that is using the app. This often ends up as an older version of the app, so before every build, quit the app.
If you see two copies of the Scriptable Tasks app running at any time, or if there appears to be a script error in any of the samples, you can be sure that Script Editor has glommed on to the wrong version of the app. The easiest fix is to quit all copies of the app and quit Script Editor. Clean the Xcode build (Product/Clean), then build and run again.
Restart Script Editor and when it opens the script, click Compile and then click Run. And if THAT doesn’t work, delete Derived Data for the app in ~/Library/Developer/Xcode/DerivedData.
Try out the next two sample scripts:
3. Get Tasks.scpt
This script retrieves the number of tasks and the names of tasks using various filters. Make note of the following:
- JavaScript counts from 0, AppleScript counts from 1.
- Text searches are case-insensitive.
4. Add Edit Tasks.scpt
This script adds new tasks, toggles the
completed flag on the first task, and tries to create a task with the same name as another.
Hmmm… creating a task with the same name worked! Now you have two “Feed the cat” tasks. The cat will be thrilled, but for the purposes of this app, task names should be unique. Trying to add a task with a name that is already in use should have produced an error.
Back in Xcode, look in AppDelegate.swift and you can see that when the script wants to insert an object, the app delegate passes that call to its
dataProvider. In DataProvider.swift, look at
insertNew(task:at:), which inserts an existing task into the array or appends a new task to the end.
Time to add a check here. Replace the function with the following:
mutating func insertNew(task: Task, at index: Int) -> [Task] { // 1 if taskExists(withTitle: task.title) { // 2 let command = NSScriptCommand.current() command?.scriptErrorNumber = errOSACantAssign command?.scriptErrorString = "Task with the title '\(task.title)' already exists" } else { // 3 if index >= tasks.count { tasks.append(task) } else { tasks.insert(task, at: index) } postNotificationOfChanges() } return tasks }
Here’s what each commented section does:
- Use an existing function to check if a task with this name already exists.
- If the name is not unique:
- Get a reference to the scripting command that called this function.
- Set the command’s
scriptErrorNumber and scriptErrorString properties; errOSACantAssign is one of the standard AppleScript error codes. These will be sent back to the calling script.
- If the name is unique:
- Process the task as before.
- Post a notification of data changes. The ViewController will see this and update the display.
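Stripped of the NSScriptCommand machinery, the uniqueness check reduces to very little logic. Here is a hedged JavaScript model, where a thrown error stands in for setting the command's error number and string, and all names are hypothetical:

```javascript
function insertNewTask(tasks, task, index) {
  // 1. Reject duplicates by title, mirroring taskExists(withTitle:).
  if (tasks.some((t) => t.title === task.title)) {
    // 2. Stand-in for reporting the error back to the calling script.
    throw new Error(`Task with the title '${task.title}' already exists`);
  }
  // 3. Append past the end, insert otherwise - same branching as the Swift code.
  if (index >= tasks.length) {
    tasks.push(task);
  } else {
    tasks.splice(index, 0, task);
  }
  return tasks;
}

const tasks = [{ title: "Feed the cat" }];
insertNewTask(tasks, { title: "Water the plants" }, 99); // index past end: appended
// insertNewTask(tasks, { title: "Feed the cat" }, 0);   // would throw a duplicate error
```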
Quit the app if running, then build and run your app. Run the 4. Add Edit Tasks scripts again. This time you should get an error dialog and no duplicate tasks will be created. Sorry about that, cat…
5. Delete Tasks.scpt
This script deletes a task, checks if a particular task exists and deletes it if possible, and finally deletes all completed tasks.
Working With Nested Objects
In the sample app, the second column displays a list of tags assigned to each task. So far, you have no way of working with them via scripts – time to fix that!
Object specifiers can handle a hierarchy of objects. That’s what you have here, with the application owning the tasks and each task owning its tags.
As with the
Task class, you need to make the
Tag scriptable.
Open Tag.swift and make the following changes:
- Change the class definition line to
@objc(Tag) class Tag: NSObject {
- Add the
override keyword to
init.
- Add the object specifier method:
override var objectSpecifier: NSScriptObjectSpecifier { // 1 guard let task = task else { return NSScriptObjectSpecifier() } // 2 guard let taskClassDescription = task.classDescription as? NSScriptClassDescription else { return NSScriptObjectSpecifier() } // 3 let taskSpecifier = task.objectSpecifier // 4 let specifier = NSUniqueIDSpecifier(containerClassDescription: taskClassDescription, containerSpecifier: taskSpecifier, key: "tags", uniqueID: id) return specifier }
The above code is relatively straightforward:
- Check that the tag has an assigned task.
- Check that the task has a class description of the correct class.
- Get the object specifier for the parent task.
- Construct the object specifier for the tag contained inside the task and return it.
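Conceptually, a nested object specifier is just a chain of unique-ID lookups starting at the application. A JavaScript sketch of resolving such a chain, with made-up data and field names for illustration:

```javascript
const app = {
  tasks: [
    { id: "T1", title: "Feed the cat",
      tags: [{ id: "G1", name: "home" }, { id: "G2", name: "pets" }] },
  ],
};

// A specifier is { key, uniqueID, container }; a null container means "the app".
function resolve(spec) {
  const parent = spec.container ? resolve(spec.container) : app;
  return parent[spec.key].find((item) => item.id === spec.uniqueID);
}

const taskSpec = { key: "tasks", uniqueID: "T1", container: null };
const tagSpec = { key: "tags", uniqueID: "G2", container: taskSpec };
console.log(resolve(tagSpec).name); // "pets"
```

This is why each `Tag` needs a reference to its owning task: without the parent link, the chain cannot be built from the leaf upward.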
Add the following to the SDEF file at the
Insert tag class here comment:
<class name="tag" code="TaGg" description="A tag" inherits="item" plural="tags"> <cocoa class="Tag"/> <property name="id" code="ID " type="text" access="r" description="The unique identifier of the tag."> <cocoa key="uniqueID"/> </property> <property name="name" code="pnam" type="text" access="rw" description="The name of the tag."> <cocoa key="name"/> </property> </class>
This is very similar to the data for the
Task class, but a tag only has two exposed properties:
id and
name.
Now the
Task section has to be edited to indicate that it contains tag elements.
Add the following code to the Task class XML, at the
Insert element of tags here comment:
<element type="tag" access="rw"> <cocoa key="tags"/> </element>
Quit the app, then build and run the app again.
Go back to the Script Editor; if the Scriptable Tasks dictionary is open, close and re-open it. See if it contains information about tags.
If not, remove the Scriptable Tasks entry from the Library and add it again by dragging the app into the window:
Try one of the following scripts:
tell application "Scriptable Tasks" get the name of every tag of task 1 end tell
or
app = Application("Scriptable Tasks"); app.tasks[0].tags.name();
The app now lets you retrieve tags – but what about adding new ones?
You may have noticed in Tag.swift that each
Tag object has a weak reference to its owning task. That helps create the links when getting the object specifier, so this task property must be set when assigning a new tag to a task.
Open Task.swift and add the following method to the
Task class:
override func newScriptingObject(of objectClass: AnyClass, forValueForKey key: String, withContentsValue contentsValue: Any?, properties: [String: Any]) -> Any? { let tag: Tag = super.newScriptingObject(of: objectClass, forValueForKey: key, withContentsValue: contentsValue, properties: properties) as! Tag tag.task = self return tag }
This method is sent to the container of the new object, which is why you put it into the
Task class and not the
Tag class. The call is passed to
super to get the new tag, and then the task property is assigned.
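The pattern, where the container creates the child and immediately wires the back-reference, can be sketched independently of Cocoa:

```javascript
class Tag {
  constructor(name) {
    this.name = name;
    this.task = null; // weak back-reference in the Swift original
  }
}

class Task {
  constructor(title) {
    this.title = title;
    this.tags = [];
  }
  // Stand-in for newScriptingObject(of:forValueForKey:...): the *container*
  // builds the child and assigns the parent link before handing it back.
  newTag(name) {
    const tag = new Tag(name);
    tag.task = this;
    this.tags.push(tag);
    return tag;
  }
}

const task = new Task("Feed the cat");
const tag = task.newTag("pets");
console.log(tag.task === task); // true
```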
Quit and build and run your app. Now run the sample script 6. Tasks With Tags.scpt which lists tag names, lists the tasks with a specified tag, and deletes and create tags.
Adding Custom Commands
There is one more step you can take when making an app scriptable: adding custom commands. In earlier scripts, you toggled the
completed flag of a task directly. But wouldn’t it be better – and safer – if scripts didn’t change the property directly, but instead used a command to do this?
Consider the following script:
mark the first task as "done" mark task "Feed the cat" as "not done"
I’m sure you’re already reaching for the SDEF file and you would be correct: the command has to be defined there first.
There are two steps that need to happen here:
- Tell the application that this command exists and what its parameters will be.
- Tell the Task class that it responds to the command and what method to call to implement it.
Inside the Scriptable Tasks suite, but outside any class, add the following at the Insert command here comment:
<command name="mark" code="TaSktext"> <direct-parameter type="specifier" description="the task to mark"/> <parameter name="as" code="DFLG" description="'done' or 'not done'" type="text"> <cocoa key="doneFlag"/> </parameter> </command>
“Wait a minute!” you say. “Earlier you said that codes had to be four characters, and now I have one with eight? What’s going on here?”
When defining a method, you provide a two part code. This one combines the codes or types of the parameters – in this case a
Task object with some text.
Inside the
Task class definition, at the Insert responds-to command here comment, add the following code:
<responds-to command="mark"> <cocoa method="markAsDone:"/> </responds-to>
Now head back to Task.swift and add the following method:
func markAsDone(_ command: NSScriptCommand) { if let task = command.evaluatedReceivers as? Task, let doneFlag = command.evaluatedArguments?["doneFlag"] as? String { if self == task { if doneFlag == "done" { completed = true } else if doneFlag == "not done" { completed = false } // if doneFlag doesn't match either string, leave un-changed } } }
The parameter to
markAsDone(_:) is an
NSScriptCommand which has two properties of interest:
evaluatedReceivers and
evaluatedArguments. From them, you try to get the task and the string parameter and use them to adjust the task accordingly.
Quit and build and run your app again. Check the dictionary in the Script Editor, and delete and re-import it if the
mark command is not showing:
You should now be able to run the 7. Custom Command.scpt scripts and see your new command in operation.
Note: The mark command does not work in JavaScript. I have added manual toggling of the completed property to the JavaScript version of 7. Custom Command.scpt but left the original there too. Hopefully it will work after an update.
Where to Go From Here?
You can download the final version of the sample project here.
There wasn’t room to cover inter-app communication in this tutorial, but to see how to work between apps, check out 8. Inter-App Communication.scpt for some examples. This script gathers a list of incomplete tasks due today and tomorrow, inserts them into a new TextEdit file, styles the text and saves the file.
For more information about scriptable apps, the official Apple docs on Scriptable Applications are a good start, as is Apple’s Overview of Cocoa Support for Scriptable Applications.
Interested in learning more about JXA? Check out the Introduction to JavaScript for Automation Release Notes.
I hope you enjoyed this tutorial; if you have any questions or comments, please join the forum discussion below!
03-15-2011 05:06 PM
03-15-2011 05:43 PM
The file extension should be irrelevant. The mimetype may be more important.
A simple text embed as I showed earlier would work, with the results of toString() being passed to XML(). It can take numerous types and convert them.
If your data might contain any Unicode information, you'd want to pay particular attention to encoding issues. Generally the file would probably be UTF-8 encoded, so then you might need to use readUTF() on the object retrieved from the embed class. To avoid future problems, you should probably just start with that one, unless you know you will have a different encoding.
03-17-2011 09:54 AM - edited 03-17-2011 03:43 PM
So my code is pretty much as follows for searchers looking for solutions:
package {
    import flash.display.Sprite;
    import flash.text.TextFormat;
    import flash.utils.ByteArray;
    import qnx.ui.text.Label;

    public class MYXML extends Sprite {
        [Embed('MyXML.xml', mimeType='application/octet-stream')]
        private var MyXMLFILE:Class;

        private var E1:String = 'string';
        private var EleXML:XML;

        public function MYXML() {
            super();
            obtainXML();

            var idlabel:Label = new Label();
            idlabel.text = 'Informative information';
            idlabel.format = new TextFormat(null, 14, 0xFFFFFF);
            idlabel.x = 200;
            idlabel.y = 20;
            idlabel.width = idlabel.textWidth + 5;
            idlabel.height = idlabel.textHeight + 5;
            addChild(idlabel);

            DisplayInfo();
        }

        // Decode the embedded bytes as UTF-8, then parse them into an XML object.
        private function obtainXML():void {
            var ba:ByteArray = new MyXMLFILE();
            var st:String = ba.readUTFBytes(ba.length);
            EleXML = new XML(st);
        }

        private function DisplayInfo():void {
            var EN:Label = new Label();
            trace(EleXML.Element.(@symbol == E1));
            EN.text = EleXML.Element.(@symbol == E1).More.fact1.text();
            EN.width = EN.textWidth + 5;
            EN.x = 90;
            EN.y = 90;
            addChild(EN);
        }
    }
}
Sorry I forgot to add the xml...
<?xml version="1.0" encoding="utf-8"?>
<Elements>
    <Element symbol="string">
        <More>
            <fact1>This is an element in an element</fact1>
            <fact2>This is another element</fact2>
            <fact3>This is fact</fact3>
        </More>
    </Element>
    <Element symbol="notastring">
        <More>
            <fact1>This will not return true</fact1>
            <fact2>This is not fact</fact2>
            <fact3>It will come true if you search for @symbol == "notastring"</fact3>
        </More>
    </Element>
</Elements>
This should work perfectly with the above xml/as3 code.
03-17-2011 11:11 AM
Thanks tensioncore... for the sake of completeness..could you also post a sample xml file that goes with this code.
03-17-2011 12:00 PM
Tensioncore, kditty, and I did additional experimentation in the IRC chat (#playbook-dev on irc.freenode.net) and discovered a few related items.
For one thing, in spite of advice you can find on the web that mimeType of "text/xml" does something useful, it appears either to be unsupported, or broken. Certainly Adobe says it's not one of the select supported types.
Also, the ByteArray.readUTFBytes() approach doesn't appear to be the only way to do it. The following variation works fine as well:
[Embed('MyXML.xml', mimeType='application/octet-stream')] private var MyXMLFILE:Class; ... var EleXML:XML = XML(new MyXMLFILE());
Note that both approaches are creating (apparently.. the docs aren't exactly good on this) something called a ByteArrayAsset object when you do "new MyXMLFILE()", which supports pretty much all the ByteArray methods.
In this case, however, you're relying on the XML() call's ability to call toString() on the object passed (also undocumented, but it does take numerous types of input and converts them to strings to parse them). Note also that I'm using just "XML()" above instead of "new XML()"... there's no real difference in these cases (the function effectively calls new XML for you, and returns the result).
Both approaches appear to ignore any encoding="" attribute that you might have in your <?xml ?> tag, and treat it always as UTF-8 encoded data (at least in my experiments).
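In other words, the embed behaves as if the bytes were always decoded with a fixed UTF-8 codec, no matter what the prolog declares. A Node.js sketch of that decode step (this is not the AIR API, just the equivalent behavior):

```javascript
// Raw bytes of '<a>あ</a>' - the three-byte sequence E3 81 82 is 'あ' in UTF-8.
const bytes = Buffer.from([0x3c, 0x61, 0x3e, 0xe3, 0x81, 0x82, 0x3c, 0x2f, 0x61, 0x3e]);

// Equivalent of ByteArray.readUTFBytes(ba.length): decode everything as UTF-8.
const text = bytes.toString("utf8");
console.log(text); // '<a>あ</a>'
```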
Lastly, although I've seen blogs saying that hardcoding the XML directly into your source code will compress it better than using Embed, my experiments show that in fact the data in the embedded XML above will be compressed quite efficiently, so you should have no worries about doing this and in fact, it's a rather nice way to get data like this into an app. I took one file with a line of 99 characters of text, and cloned the line 999 times (for a data size of about 10K). The resulting SWF grew only about 50 bytes... (I tried again with random data and it was 10K bigger, so that's conclusive.)
03-17-2011 03:42 PM
Thanks for helping me teach! It helps the learning!
tags07 wrote:
Thanks tensioncore... for the sake of completeness..could you also post a sample xml file that goes with this code.
Created on 2008-03-18 05:22 by ocean-city, last changed 2011-12-21 07:04 by petri.lehtinen.
Hello. I found another problem related to issue2301.
The SyntaxError cursor "^" is shifted when multibyte characters are in the line (before the "^").
I think this is because err->text is stored as UTF-8, which requires 3 bytes per multibyte character, but cp932 (my console encoding) requires only 2 bytes for each.
So the "^" is shifted to the right by 5 bytes because there are 5 multibyte chars.
C:\Documents and Settings\WhiteRabbit>py3k x.py
push any key....
File "x.py", line 3
print "あいうえお"
^
SyntaxError: invalid syntax
[22567 refs]
Sorry, I didn't know what PyTokenizer_RestoreEncoding was really doing.
That function adjusted err_ret->offset for this encoding conversion.
So Python 2.5 can output the cursor in the right place. (Of course, if the source
encoding is not compatible with the console encoding, a broken string is printed.
Anyway, the cursor is right.)
C:\Documents and Settings\WhiteRabbit>py a.py
File "a.py", line 2
x "、「、、、ヲ、ィ、ェ"
^
SyntaxError: invalid syntax
[8728 refs]
I tried to fix this problem, but I'm not sure how to fix this.
> I tried to fix this problem, but I'm not sure how to fix this.
Quick observation...
///////////////////////////////////
// Possible Solution
1. Convert err->text to a console-compatible encoding (not to the source
encoding like in Python 2.x) at the point where PyTokenizer_RestoreEncoding was called.
2. err->text is UTF-8, actual output is done in
Python/pythonrun.c(print_error_text), so adjust offset there.
///////////////////////////////////
// Solution requires...
1.
- PyUnicode_DecodeUTF8 in Python/pythonrun.c(err_input) should
be changed to some kind of "bytes" API.
- The way to write "bytes" to File object directly is needed.
2.
- The way to know actual byte length of given unicode + encoding.
////////////////////////////////////////////////////
// Experimental patch
Attached is an experimental patch for solution 2. Looks ugly, but
seems to work in my environment.
(I assumed get_length_in_bytes(f, " ", 1) == 1 but I'm not sure
this is always true on other platforms. A nicer and more
general solution may exist.)
> (I assumed get_length_in_bytes(f, " ", 1) == 1 but I'm not sure
> this is always true in other platforms. Probably nicer and more
> general solution may exist)
This assumption still stands, but I cannot find a better solution.
I now think the attached patch is good enough.
Patch revised.
I think that your patch works only for terminals where one byte of the
encoded text is displayed as one character on the terminal. This is not
true for utf-8 terminals, for example.
In the attached patch, I tried to write some unit tests, (I had to adapt
the traceback module as well), and one test still fails because the
captured stderr has a utf-8 encoding.
I think that it's better to count unicode characters.
You are right, this issue is more difficult than I thought...
I found wcswidth(3); if this function is available we can use it,
but unfortunately there is no such function in VC6 and this
function is meaningless on cygwin, so I cannot test it. ;-(
Maybe we can use
import unicodedata
unicodedata.east_asian_width()
but I need to investigate more.
For the moment, I'd suggest that one unicode character has the same
width as the space character, assuming that stdout.encoding correctly
matches the terminal.
Then the C implementation could do something similar to the statements I
added in traceback.py:
offset = len(line.encode('utf-8')[:offset].decode('utf-8'))
Amaury, if doing so, the cursor will shift left by 5 columns on my
environment like this, no? ("あ" requires 2 columns for example)
print "あいうえお"
^
This seems to be a difficult problem. Doesn't the exact width depend on
the terminal capabilities? and fonts, and combining diacritics...
An easy way to put the caret at the same exact position is to repeat the
beginning of the line up to the offending offset:
print "あいうえお"
print "あいうえお^<------------------
But I don't know how to make it look less ugly.
At least my "one unicode char is one space" suggestion corrects the case
of Western languages, and all messages with single-width characters.
See also a related issue: issue3975.
>At least my "one unicode char is one space" suggestion corrects the case
>of Western languages, and all messages with single-width characters.
I'm not happy with this solution. ;-(
>Doesn't the exact width depend on
>the terminal capabilities? and fonts, and combining diacritics...
I have to admit you are right.
Nevertheless, I got coLinux(Debian) which has a locale-aware wcswidth(3), so I
created another experimental patch.
(py3k_adjust_cursor_at_syntax_error_v2.patch)
The strategy is ...
1. Try to convert to unicode. If fails, nothing changed to offset.
2. If system has wcswidth(3), try that function
3. If system is windows, try WideCharToMultibyte with CP_ACP
4. If above 2/3 fails or system is others, use unicode length as offset
(Amaury's suggestion)
This patch ignores file encoding. Again, this patch is experimental,
best effort, but maybe better than current state.
P.S.
I tested this patch on coLinux with ja_JP.UTF-8 locale and manual
#define HAVE_WCSWIDTH 1
because I don't know how to change configure script.
The experimental patch was indeed experimental: wcswidth(3) returns 1 for East
Asian Ambiguous characters.
debian:~/python-dev/py3k# ./python /mnt/windows/a.py
File "/mnt/windows/a.py", line 3
"♪xÅx" abc
^
should point 'c'. And another one
debian:~/python-dev/py3k# export LANG=C
debian:~/python-dev/py3k# ./python /mnt/windows/a.py
File "/mnt/windows/a.py", line 3
"\u266ax\u212bx" abc
^
SyntaxError: invalid syntax
Please forget my patch. :-(
This issue is a problem of units. The error text is a utf8 *byte*
string and offset is a number of *bytes*. The goal is to get the text
*width* of a *character* string. We have to:
1- convert offset from bytes number to character number
2- get the error message as (unicode) characters
3- get the width of text[:offset]
It's already possible to get (2) from the utf8 string, and code from
ocean-city's patch (py3k_adjust_cursor_at_syntax_error_v2.patch) can
be used for (3). The most difficult point is (1).
I will try to implement that.
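Point (1) is a pure UTF-8 exercise: encode, truncate at the byte offset, decode, count. Here is the same idea as the Python one-liner quoted earlier, sketched in JavaScript (it assumes the offset never lands inside a multibyte sequence and ignores characters outside the BMP):

```javascript
function utf8ToCharOffset(text, byteOffset) {
  // Encode to UTF-8 bytes, keep only byteOffset of them, decode, count chars.
  return Buffer.from(text, "utf8").slice(0, byteOffset).toString("utf8").length;
}

// Each 'あ' is 3 bytes in UTF-8, so byte offset 6 is character offset 2.
console.log(utf8ToCharOffset("ああx", 6)); // 2
console.log(utf8ToCharOffset("abc", 2));   // 2
```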
Resolution of this may be applicable to Issue3446 as well.
"center, ljust and rjust are inconsistent with unicode parameters"
Proof of concept of patch fixing this issue:
- parse_syntax_error() reads the text line into a PyUnicodeObject*
instead of a "const char**"
- create utf8_to_unicode_offset(): convert byte offset to a number of
characters. The Python version should be something like:
def utf8_to_unicode_offset(text, byte_offset):
    utf8 = text.encode("utf-8")
    utf8 = utf8[:byte_offset]
    text = str(utf8, "utf-8")
    return len(text)
- reuse adjust_offset() from
py3k_adjust_cursor_at_syntax_error_v2.patch, but force the use of
wcswidth() because HAVE_WCSWIDTH is not defined by configure
- print_error_text() works on unicode characters and not on bytes!
The patch should be refactored:
- move adjust_offset(), utf8_to_unicode_offset(), utf8_len() in
unicodeobject.c. You might create a new method "width()" for the
unicode type. This method can be used to fix center(), ljust() and
rjust() unicode methods (see issue #3446).
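A grossly simplified JavaScript sketch of such a width() helper, covering only a couple of East Asian Wide ranges (a real implementation like wcswidth() consults full Unicode tables and handles combining and control characters):

```javascript
// Toy subset: Hiragana/Katakana and CJK ideographs count as 2 columns.
function charWidth(ch) {
  const cp = ch.codePointAt(0);
  if ((cp >= 0x3040 && cp <= 0x30ff) ||   // Hiragana, Katakana
      (cp >= 0x4e00 && cp <= 0x9fff)) {   // CJK Unified Ideographs
    return 2;
  }
  return 1;
}

function stringWidth(text) {
  let width = 0;
  for (const ch of text) width += charWidth(ch);
  return width;
}

console.log(stringWidth("abc"));   // 3
console.log(stringWidth("あいx")); // 5
```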
For an easier review, I splitted my patch in multiple small patches:
- unicode_utf8size.patch: create _PyUnicode_UTF8Size() function:
Number of bytes needed to encode the unicode character as UTF-8
- unicode_width.patch: create PyUnicode_Width(): Number of columns
needed to represent the string in the current locale. -1 is returned
in case of an error.
- adjust_offset.patch: Change unit of SyntaxError.offset, convert
utf8 offset to unicode offset
- print_exception.patch: process error text as an unicode string
(instead of a byte string), convert offset from characters
to "columns"
Dependencies:
- adjust_offset.patch depends on unicode_utf8size.patch
- print_exception.patch depends on unicode_width.patch
Changes since issue2382.patch:
- PyUnicode_Width() doesn't change the locale
- PyUnicode_Width() uses WideCharToMultiByte() on MS_WINDOWS, and
wcswidth() otherwise (before: do nothing if HAVE_WCSWIDTH is not
defined)
- the offset was converted from utf8 index to unicode index only in
print_error_text(), not on SyntaxError creation
- _PyUnicode_UTF8Size() and PyUnicode_Width() are public
unicode_width.patch:
* error messages should be improved:
ValueError("Unable to compute string width") for Windows
IOError(strerror(errno)) otherwise
adjust_offset.patch:
* format_exception_only() from Lib/traceback.py may need a fix
* about the documentation: it looks like SyntaxError.offset unit is
not documented in exceptions.rst (should it be documented, or
left unchanged?)
print_exception.patch:
* i'm not sure of the reference counts (ref leak?)
* in case of PyUnicode_FromUnicode(text, textlen) error,
>>PyFile_WriteObject(textobj, f, Py_PRINT_RAW);
PyFile_WriteString("\n", f);<< is used to display the line but textobj
may already end with \n.
* format_exception_only() from Lib/traceback.py should do the same job
as the fixed print_exception(): get the string width (to fix this
issue!)
I just created the issue #12568 for unicode_width.patch.
What's the status of this issue?
FWIW, this is not only a problem with east asian characters:
>>> ä äää
File "<stdin>", line 1
ä äää
^
SyntaxError: invalid syntax
Source: http://bugs.python.org/issue2382
paystack_sdk 0.0.1
Flutter Paystack SDK
This plugin provides an easy way to receive payments on Android and iOS apps with Paystack. It uses the native Android and iOS libraries under the hood and provides a unified API for initializing payment in a platform-agnostic way. The flow surrounding how Paystack payments work is well written up in the Android library documentation, so we'll just skip all the formalities and demonstrate how to use this library.
Usage
Step 1 - Add this plugin as a dependency to your flutter project
The good folks at Flutter explain how here
Step 2 - Accept payment
This step assumes you've already built your UI for accepting card details from your application user. And of course, you have your Paystack public API key.
Import the plugin in the file where you want to accept payments.
import 'package:paystack_sdk/paystack_sdk.dart';
Next, initialize Paystack by providing your public API key. You should preferably do this once, when the page loads, and the public key value will remain set. You can subsequently use the SDK to receive payments multiple times in the page.
Future<void> initPaystack() async {
  String paystackKey = "pk_test_xxxxxxxxxxxxxxx";
  try {
    await PaystackSDK.initialize(paystackKey);
    // Paystack is ready for use in receiving payments
  } on PlatformException {
    // well, error, deal with it
  }
}
Receive payments already!
initPayment() {
  // pass card number, cvc, expiry month and year to the Card constructor function
  var card = PaymentCard("5060666666666666666", "123", 12, 2020);
  // create a transaction with the payer's email and amount (in kobo)
  var transaction = PaystackTransaction("wisdom.arerosuoghene@gmail.com", 100000);
  // debit the card (using Javascript style promises)
  transaction.chargeCard(card)
      .then((transactionReference) {
        // payment successful! You should send your transaction request to your server for validation
      })
      .catchError((e) {
        // oops, payment failed, a readable error message should be in e.message
      });
}
Contributing
Contributions are most welcome. You could improve this documentation, add methods to support more Paystack features, or just clean up the code. Just do your thing and create a PR. This started out as a quick piece of work to achieve Paystack payments on Android and iOS. A lot could have been done better.
0.0.1
Features
- Accept payment with email, amount and card details
paystack_sdk_example
Demonstrates how to use the paystack_sdk plugin.
Getting Started
For help getting started with Flutter, view our online documentation.
Use this package as a library
1. Depend on it
Add this to your package's pubspec.yaml file:
dependencies:
  paystack_sdk: ^0.0.1

Then run flutter pub get and import it in your Dart code:

import 'package:paystack_sdk/paystack_sdk.dart';
23 out of 23 API elements have no dartdoc comment. Providing good documentation for libraries, classes, functions, and other API elements improves code readability and helps developers find and use your API.
- Format lib/payment_card.dart. Run flutter format to format lib/payment_card.dart.
- Format lib/paystack_transaction.dart. Run flutter format to format lib/paystack_transaction.dart.
Maintenance suggestions
Package is getting outdated. (-15.89 points)
The package was last published 60 weeks ago.
Package is pre-v0.1 release. (-10 points)
While nothing is inherently wrong with versions of 0.0.*, it might mean that the author is still experimenting with the general direction of the API.
Source: https://pub.dev/packages/paystack_sdk
In this example I will show how to read data from Excel using C# and how to show it in a WPF DataGrid.
using System;                 // Environment
using System.Data;            // DataTable, DataView, DataRow
using System.Reflection;      // Missing
using Excel = Microsoft.Office.Interop.Excel;

public class ExcelData
{
    public DataView Data
    {
        get
        {
            Excel.Application excelApp = new Excel.Application();
            Excel.Workbook workbook;
            Excel.Worksheet worksheet;
            Excel.Range range;

            workbook = excelApp.Workbooks.Open(Environment.CurrentDirectory + "\\Excel.xlsx");
            worksheet = (Excel.Worksheet)workbook.Sheets["Test Sheet"];

            int column = 0;
            int row = 0;
            range = worksheet.UsedRange;

            DataTable dt = new DataTable();
            dt.Columns.Add("ID");
            dt.Columns.Add("Name");
            dt.Columns.Add("Position");
            dt.Columns.Add("Web Site");

            for (row = 2; row <= range.Rows.Count; row++)
            {
                DataRow dr = dt.NewRow();
                for (column = 1; column <= range.Columns.Count; column++)
                {
                    dr[column - 1] = (range.Cells[row, column] as Excel.Range).Value2.ToString();
                }
                dt.Rows.Add(dr);
                dt.AcceptChanges();
            }

            workbook.Close(true, Missing.Value, Missing.Value);
            excelApp.Quit();
            return dt.DefaultView;
        }
    }
}

As you can see, reading Excel in .NET Framework 4 is a pretty simple task. I read the Excel worksheet and put the data into a DataTable. The tricky part is binding my property to the DataGrid: you cannot bind a DataTable to a DataGrid directly, because DataTable does not implement the IEnumerable interface that ItemsSource expects.
So instead of DataTable I'm simply returning DefaultView.
Let's now see what we have in XAML:
<Window x:
    <Grid>
        <DataGrid Name="dataGrid1" ItemsSource="{Binding Data}">
        </DataGrid>
    </Grid>
</Window>

ItemsSource is bound to the Data property. In the code-behind I just specify the DataContext for the DataGrid:
ExcelData exceldata = new ExcelData();
this.dataGrid1.DataContext = exceldata;

You're welcome to download the source code of this example (Visual Studio 2010 Project)
Instead of hardcoding column names in the ExcelData class, get the column names from the worksheet (assuming headings are in row one).
Then any table can be imported!
// amend code as such
DataTable dt = new DataTable();
//dt.Columns.Add("ID");
//dt.Columns.Add("Name");
//dt.Columns.Add("Position");
//dt.Columns.Add("Web Site");
for (column = 1; column <= range.Columns.Count; column++)
{
//dr[column - 1] = (range.Cells[row, column] as Excel.Range).Value2.ToString();
dt.Columns.Add((range.Cells[1, column] as Excel.Range).Value2.ToString());
}
Good point!
Thank you.
Error 1 The name 'Environment' does not exist in the current context
Error 2 The name 'Missing' does not exist in the current context
Error 3 The name 'Missing' does not exist in the current context
How to clear this errors ?
Thanks for the article!
I can not open the Excel file. I got the error "Microsoft Excel cannot access the file 'C:\Excel.xlsx'" in this line....
workbook = excelApp.Workbooks.Open(Environment.CurrentDirectory + "\\Excel.xlsx");
Plz help me...I spent lot of time in this
Try using a direct path to the file:
workbook = excelApp.Workbooks.Open("c:\\temp\\book.xlsx");
Note that you will need to use \\ instead of singles ;)
There are several possible reasons:
• The file name or path does not exist.
• The file is being used by another program.
Thanks a lot! Very good example!
Nice Article ................
just a little advice you never create an instance of an object inside a "for"
it should be like this
DataRow dr=new DataRow();
for(stuff in here)
{
dr.stuff= morestuff;
}
If there's an empty cell, this line:
(range.Cells[row, column] as Excel.Range).Value2.ToString()
breaks. I could put it in a try catch where the catch sets that one cell to an empty string, but that doesn't seem like it would be best practice. Any thoughts on how to handle this scenario?
Just write this thing:
dr[column - 1] = (range.Cells[row, column] as Excel.Range).Value2!=null ? (range.Cells[row, column] as Excel.Range).Value2.ToString() : "";
An easier way to handle this row:
(range.Cells[row, column] as Excel.Range).Value2.ToString()
is to change it to:
Convert.ToString((range.Cells[row, column] as Excel.Range).Value2);
.ToString() can't handle nulls but Convert.ToString() can. It allows for less code too.
This is really nice article have a look of this article.
excel sheet is not binding to datagrid bt no error appears
You probably forgot to pass data context or type mismatch
nope i passed everything
Hi... Thanks a lot for the info. I tried the solution and it worked. There is only one limitation that I found. I used a button click to invoke the ExcelData class. The button and the datagrid should be part of the same xaml file window. Can someone please tell how to show the data in a different window than the one in which the button is placed? I tried this but it was not working, since the datagrid then is not a part of the xaml. The exact message is as under.
Error 1 'AutoCode1.AutoCode1home' does not contain a definition for 'dataGrid2' and no extension method 'dataGrid2' accepting a first argument of type 'AutoCode1.AutoCode1home' could be found (are you missing a using directive or an assembly reference?)
Thanks one again for sharing your replies
I wanted to add code but it doesn;t allow me, mistaking xaml for html
I use this Export2Excel.dll. It is simple and very fast.
<DataGrid is not coming using WPF :( please help
Display on this kind of error in runtime:
Retrieving the COM class factory for component with CLSID {00024500-0000-0000-C000-000000000046} failed due to the following error: 80040154 Class not registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG)).
Source: http://www.codearsenal.net/2012/06/c-sharp-read-excel-and-show-in-wpf.html
In this Python Tutorial, we will be discussing the ctypes library. Often when programming in Python, we may feel the need to use functions that may be written in another language. This is most commonly because Python is much slower in comparison to languages like C++ or C.
There are actually many libraries in Python that use functions written in C or C++, such as NumPy or OpenCV. Importing libraries like this, and using their functions is made possible with the ctypes library in Python.
Let’s explore how you can do this as well in this Python ctypes Tutorial!
Importing a C library with ctypes
Now let’s try to import a basic library with a single function over to our Python program using ctypes. You can’t link your regular
.c file though. You need to generate a shared library, which can be done with the following command.
gcc -fPIC -shared -o clibrary.so clibrary.c
clibrary is the name we gave to our C file. The name itself can be anything, just be mindful of the extensions when using this command.
Let’s first create our
.c file.
#include <stdio.h>

void prompt()
{
    printf("hello world\n");
}
Now we need to go over to our Python file, import the ctypes library and then use the CDLL function to load up the shared library file. Remember to include the full path to the shared library file if it's not in the same directory as the Python file.
import ctypes

libObject = ctypes.CDLL('clibrary.so')
The CDLL function returns a library object, which we can use to access functions within the library.
Calling C Functions from Python
Now that we have a library object, we can call the function from that library as a method of the library object.
libObject = ctypes.CDLL('clibrary.so')
libObject.prompt()
hello world
As you can see, the above function has executed successfully. Now let's take a look at some more advanced examples, where we call C functions from Python.
Let’s modify a prompt a bit, so that it prints out some variables that we pass into it.
#include <stdio.h>

void prompt(int num)
{
    printf("The number %d was entered", num);
}
Now let’s try calling this function from Python by passing an integer into it. Remember to generate a new
.so file after making any changes to the
.c library.
import ctypes

testlib = ctypes.CDLL('clibrary.so')
testlib.prompt(20)
The number 20 was entered
It works!
Using Function Signatures to Call Functions
There are alternate techniques that we can use to call functions that we will discuss in this Python ctypes tutorial.
Let’s create a function in our C Library, that adds two numbers together, and returns the result. A very basic addition function.
int add(int num1, int num2)
{
    return num1 + num2;
}
Now for our Python file. What you need to do first, is acquire the function signature of the C function. Like so.
addTwoNumbers = clibrary.add
You can even keep the same name if you want. So
add = clibrary.add is also valid (thanks to namespaces). Now we need to define the function parameters, and function return type.
addTwoNumbers.argtypes = [ctypes.c_int, ctypes.c_int]
addTwoNumbers.restype = ctypes.c_int
argtypes is for the parameters, and
restype is for the return type. Notice how the
argtypes takes a list, as there can be multiple parameters. On the other hand,
restype takes a single value, as there can only be one return type or none.
The complete code:
import ctypes

clibrary = ctypes.CDLL('clibrary.so')

addTwoNumbers = clibrary.add
addTwoNumbers.argtypes = [ctypes.c_int, ctypes.c_int]
addTwoNumbers.restype = ctypes.c_int

print("Sum of two numbers is :", addTwoNumbers(20, 10))
Sum of two numbers is : 30
Strings in ctypes
A common issue that people run into is dealing with strings in C and Python. The problem occurs because in Python, strings are immutable: they cannot be changed, only overwritten completely. In C and C++, by contrast, strings are mutable arrays of characters.
First let’s try passing a string to a function in C.
#include <stdio.h>

void display(char* str)
{
    printf("%s", str);
}
Here is our simple C function, which will print out the string we pass to it.
import ctypes

clibrary = ctypes.CDLL('clibrary.so')
clibrary.display(b"John")
John
Our Python code here simply passes a string to the C function. Notice that little "b" before the string? It marks the literal as a bytes object, which is necessary in order for it to be compatible with C.
Another rather important thing to mention is how to convert a variable containing a regular string (not a bytes string) to a C-acceptable string. We can't use that little "b" prefix unless we are dealing with string literals. But for variables, we have the encode() method which we can use.
import ctypes

clibrary = ctypes.CDLL('func.so')

cstring = "John"
clibrary.display(cstring.encode())
Attempting to do this without the encode() method will return an error!
Supported and Unsupported Datatypes
Now that we have this working, let’s move try and modify the string, within the C function.
void increment(char* str)
{
    for (int i = 0; str[i] != 0; i++)
    {
        str[i] = str[i] + 1;
    }
}
This C function will increment each character in the string by 1. So if there is an “A” character, it will become a “B” due to the ascii codes. Let’s try passing a string from Python, into this function.
import ctypes

clibrary = ctypes.CDLL('clibrary.so')

string = "Hello"
print("Before:", string)
clibrary.increment(string)
print("After: ", string)
Before: Hello
After:  Hello
As you can see from the output, there was no change. This is because Python strings are not compatible with C. Now let’s resolve this issue using some of ctypes special datatypes which are compatible with C.
Do try and use ctypes datatypes as much as possible instead of Python types when writing code which is meant to interact with C. Very few Python types can be used directly with C (without any errors), amongst which are integers and bytes (byte strings).
import ctypes

clibrary = ctypes.CDLL('func.so')

cstring = ctypes.c_char_p(b"Hello")
print("Before:", cstring.value)
clibrary.increment(cstring)
print("After: ", cstring.value)
Before: b'Hello'
After:  b'Ifmmp'
Instead of using a Python string, we use a char pointer (the string equivalent) from ctypes. Hence it worked! Every ctypes datatype has a value attribute, which returns a native Python object. Hence we were able to get a string back from the char pointer (which is basically C's version of a string).
Alternatively, we could just use cstring = b"Hello", as Python bytes are also supported by C. Be careful, though: bytes objects are immutable in Python, so they should not be handed to C functions that write into the buffer; the string buffers described below are the right tool for that.
Creating Mutable memory with ctypes String Buffers
When using character pointers, we aren’t exactly dealing with mutable memory. The memory in question was still immutable. If we try assigning a new value to it, a new memory location will be assigned to the string with that value. Instead of the old memory location being assigned a new value.
This can cause problems with functions which expect to receive mutable memory. The below example illustrates this issue.
import ctypes

clibrary = ctypes.CDLL('clibrary.so')

cstring = ctypes.c_char_p(b"Hello")
print("Before:", cstring, cstring.value)
cstring.value = b"World"
print("After: ", cstring, cstring.value)
Before: c_char_p(2186232967920) b'Hello'
After:  c_char_p(2186232970992) b'World'
Notice how the address is different after the assignment? The address is the value in the brackets.
To resolve this we create String buffers using ctypes. Let’s take a look at how to do this.
import ctypes

clibrary = ctypes.CDLL('clibrary.so')

cstring = ctypes.create_string_buffer(b"Hello")
print("Before:", cstring, cstring.value)
clibrary.increment(cstring)
print("After: ", cstring, cstring.value)
Before: c_char_p(2200599049040) b'Hello'
After:  c_char_p(2200599049040) b'Ifmmp'
As you can see, the string was properly modified. Creating a string buffer with ctypes, gives us mutable memory which we can use with C functions.
You can also choose to pass in an integer “n” instead of a string into the
create_string_buffer() function which will create an empty string buffer of “n” size.
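A minimal sketch of that variant (my own example, not from the tutorial): the buffer is mutable and zero-initialized until we write into it.

```python
import ctypes

# An empty, mutable 10-byte buffer, zero-initialized
buf = ctypes.create_string_buffer(10)
print(len(buf))        # 10
print(buf.value)       # b'' (empty until we write into it)

buf.value = b"Hi"      # writing mutates the buffer in place
print(buf.value)       # b'Hi'
```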
Memory Management with Pointers in ctypes
Memory management with Pointers in ctypes is the last topic of this tutorial.
C++ and C both make use of Pointers, but Pointers are not available as a datatype in Python. The ctypes library gives
ctypes.POINTER, which we can use to create a pointer in Python.
Let’s take a look at some examples using this ctypes pointer.
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

char* alloc_memory(void)
{
    char* str = strdup("Hello World");
    printf("Memory allocated...\n");
    return str;
}

void free_memory(char* ptr)
{
    printf("Freeing memory...\n");
    free(ptr);
}
First we declared two functions in our C library, one for allocating memory and one for freeing memory. The
alloc_memory() function returns a pointer to the string that we created. And the
free_memory() function frees it up.
The basic gist of it, that we will use the
alloc_memory() function to give us the string. We will then use this string in Python for whatever purpose we wish, then use the
free_memory() function to free up the memory.
import ctypes

clibrary = ctypes.CDLL('clibrary.so')

# Defining alloc function and its return type
alloc_func = clibrary.alloc_memory
alloc_func.restype = ctypes.POINTER(ctypes.c_char)

# Defining free function and its argument type
free_func = clibrary.free_memory
free_func.argtypes = [ctypes.POINTER(ctypes.c_char)]

# Using the function to return a string
cstring_pointer = alloc_func()
str = ctypes.c_char_p.from_buffer(cstring_pointer)
print(str.value)

# Freeing memory
free_func(cstring_pointer)
Memory allocated...
b'Hello World'
Freeing memory...
Creating Pointers with ctypes
There is an alternative way to create a pointer which you might find useful. It's generally a bit slower than using ctypes.POINTER, however, because ctypes caches and reuses the pointer types that ctypes.POINTER creates.
ctypes.POINTER() takes as its parameter the type of pointer you wish to create, an integer pointer for example. ctypes.pointer() takes as its parameter an object and returns a pointer to it. pointer(), however, only accepts datatypes from the ctypes module, so regular Python datatypes will not work.
import ctypes

num1 = ctypes.c_int(100)
num2 = ctypes.c_int(200)

ptr = ctypes.pointer(num1)
print(ptr.contents.value)

ptr.contents = num2
print(ptr.contents.value)
100
200
You can use the contents attribute to access the object to which the pointer points. You can further access the value attribute to print out its actual Python value, rather than a ctypes object.
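For comparison, here is the same idea using ctypes.POINTER, which builds a pointer type first and then instantiates it (a small sketch of mine, not from the original tutorial):

```python
import ctypes

# POINTER builds a pointer *type*; calling that type with a ctypes
# object yields a pointer to that object
IntPointer = ctypes.POINTER(ctypes.c_int)

num = ctypes.c_int(100)
ptr = IntPointer(num)

print(ptr.contents.value)   # 100
ptr.contents.value = 200    # writes through the pointer into num
print(num.value)            # 200
```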
Using C++ with ctypes
This may not be very obvious, but you can also use C++ with Python with the help of ctypes. It is a bit trickier and there are some limitations but it’s pretty simple to grasp.
The shared library is generated in the same manner, but with .cpp instead of .c as the extension and g++ instead of gcc.
g++ -fPIC -shared -o cppLibrary.so cppLibrary.cpp
Now, there are a few things to keep in mind. C++ supports function overloading, as a result of which C++ also performs name mangling on its function names. What this means is that additional information will be appended to the function name, which results in the function name being changed. This is done to give each function a unique name for the compiler to use.
Name mangling is performed on every function, regardless of whether it is an overloaded function or not, so you can't avoid it. If you try to call a C++ function from a C++ shared library in Python using ctypes, it will give an error that the function was not found.
#include <iostream>

void display()
{
    std::cout << "Hello world\n";
}
import ctypes

clibrary = ctypes.CDLL('cppLibrary.so')
clibrary.display()
AttributeError: function 'display' not found
As you can see, it’s not working. Luckily, it has an easy fix! All you need to do is wrap it inside extern “C” as shown below. This disables name mangling, and thus allows the code to run using ctypes.
#include <iostream>

extern "C"
{
    void display()
    {
        std::cout << "Hello world\n";
    }
}
import ctypes

clibrary = ctypes.CDLL('cppLibrary.so')
clibrary.display()
Hello world
Now it works!
Name mangling and its effects on using C++ with Python are a separate topic entirely, and worth researching. This section describes the main process of using C++ in Python, but there are several good practices that you also need to learn.
And that’s the end of our Python ctypes tutorial!
This marks the end of the Python ctypes Tutorial. Any suggestions or contributions for CodersLegacy are more than welcome. Questions regarding the article content can be asked in the comments section below.
Source: https://coderslegacy.com/python/ctypes-tutorial/
one at a time.
If we have to implement this manually then it will be a tedious and error-prone process. In Angular 1 we have the ng-repeat directive to work with a collection of items. It creates a new template for every item in the collection.
Angular 2 doesn’t have ng-repeat but has a new directive ngFor for working with a collection of items .If we use this directive then the template which we provide will be instantiated for every item in the collection.
We use it as
<li *ngFor="let item of items; let i = index">{{i}} {{item}}</li>
Above, items is the collection which contains the items that we want to display, and item represents a single element in the items collection.

We are displaying item here. So if items contains integer values from 1 to 5, the list will display the numbers from 1 to 5.
In the following example we have implemented a component which consists of a collection of employees.We are setting the employees array in the constructor.
In the template of the component we are just looping over the array of array and displaying the elements.
import { Component } from '@angular/core';

@Component({
  selector: 'example-component',
  template: `<ul><li *ngFor="let employee of employees">{{employee}}</li></ul>`
})
export class ExampleComponent {
  employees: string[] = [];

  constructor() {
    this.employees = ['Marc', 'Amit', 'Tony', 'Greg'];
  }
}
Above component will display the employees in a list.
ngFor provides several variables which we can use in the template. The following variables can be used in the template created using ngFor:
- index represents the current iteration.
- first true or false value which indicates whether this is first item in the collection.
- last true or false value which indicates whether this is last item in the collection.
- even true or false value which indicates whether the index is even.
- odd true or false value which indicates whether the index is odd.
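A small illustrative template (my own sketch, not from the original post) that exposes these variables with let bindings:

```html
<ul>
  <li *ngFor="let item of items; let i = index; let isFirst = first; let isLast = last; let isEven = even">
    {{i}}: {{item}} (first: {{isFirst}}, last: {{isLast}}, even: {{isEven}})
  </li>
</ul>
```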
Source: https://codecompiled.com/angularjs/page/3
How to: Create and Use C# DLLs (C# Programming Guide)

Add.cs: The source file that contains the method Add(long i, long j). It returns the sum of its parameters. The class AddClass that contains the method Add is a member of the namespace UtilityMethods.

Mult.cs: The source code that contains the method Multiply(long x, long y). It returns the product of its parameters. The class MultiplyClass that contains the method Multiply is also a member of the namespace UtilityMethods.

TestCode.cs: The file that contains the Main method. It uses the methods in the DLL file to calculate the sum and the product of the run-time arguments.
Example
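The code listing did not survive extraction; based on the file descriptions above (method names, signatures, namespace and class names are all stated there), it presumably resembled the following sketch:

```csharp
// File: Add.cs
namespace UtilityMethods
{
    public class AddClass
    {
        public static long Add(long i, long j)
        {
            return i + j;
        }
    }
}

// File: Mult.cs
namespace UtilityMethods
{
    public class MultiplyClass
    {
        public static long Multiply(long x, long y)
        {
            return x * y;
        }
    }
}

// File: TestCode.cs
using UtilityMethods;

class TestCode
{
    static void Main(string[] args)
    {
        // Sum and product of the two run-time arguments
        long num1 = long.Parse(args[0]);
        long num2 = long.Parse(args[1]);

        System.Console.WriteLine("{0} + {1} = {2}", num1, num2, AddClass.Add(num1, num2));
        System.Console.WriteLine("{0} * {1} = {2}", num1, num2, MultiplyClass.Multiply(num1, num2));
    }
}
```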
Otherwise, you have to use the fully qualified names, such as UtilityMethods.AddClass.Add.
Execution
To run the program, enter the name of the EXE file, followed by two numbers, as follows:
TestCode 1234 5678
Output
Compiling the Code
To build the file
MathLibrary.DLL, compile the two files
Add.cs and
Mult.cs using the following command line:
csc /target:library /out:MathLibrary.DLL Add.cs Mult.cs
The /target:library compiler option tells the compiler to output a DLL instead of an EXE file. The /out compiler option followed by a file name is used to specify the DLL file name. Otherwise, the compiler uses the first file (
Add.cs) as the name of the DLL.
To build the executable file,
TestCode.exe, use the following command line:.
Source: http://msdn.microsoft.com/en-US/library/3707x96z(v=vs.80).aspx
Write a C++ Program for Fedor’s Roadmap Navigation Problem
C++ Program for Fedor’s Roadmap Navigation Problem
- Fedor is a research scientist, who has recently found a roadmap of Ancient Berland.
- Ancient Berland consisted of N cities that were connected by M bidirectional roads. The road builders weren't knowledgeable. Hence, the start city and the end city for each road were always chosen randomly and independently. As a result, there was more than one road between some pairs of cities. Nevertheless, by luck, the country remained connected (i.e. you were able to get from one city to another via these M roads). And for any road, the start and the end city were not the same.
- Moreover, each road had its own value of importance. This value was assigned by the Road Minister of Ancient Berland. The Road Minister also was not knowledgeable, so these numbers were assigned to the roads randomly and independently from the other roads.
- When there was a war with the neighboring countries (usually it was with Ancient Herland), it was important to estimate separation number for some pairs of cities.
- The separation number for a pair of cities – let’s call these cities A and B – is explained below:
Consider a set of roads that were built. The subset of this set is good, if after removing all roads from this set, there’s no longer a way from A to B. The minimal possible sum of roads’ value of importance of any good subset is a separation number for the pair of cities (A, B).
For his research, Fedor would like to know the product of separation values over all unordered pairs of cities. Please find this number. It can be huge, so we ask you to output the product modulo 10^9 + 7.
Explanation
Input Format
The first line of input consist of two integers N and M, separated by a single space.
Then, M lines follow. Each of these lines consist of three integers Xi, Yi, Zi separated by a single space.
It means that there was a road between the city Xi and the city Yi with a value of importance equal to Zi.
Constraints
3 ≤ N ≤ 500
3 ≤ M ≤ 10^4
1 ≤ value of importance ≤ 10^5
The cities are indexed from 1 to N.
Output Format
An integer that represents the value Fedor needs, modulo 10^9 + 7.
Sample Input
3 3
1 2 3
2 3 1
1 3 2
Sample Output
36
Explanation
There are three unordered pairs of cities: (1, 2), (1, 3) and (2, 3). Let’s look at the separation numbers:
For (1, 2) we have to remove the first and the second roads. The sum of the importance values is 4.
For (1, 3) we have to remove the second and the third roads. The sum of the importance values is 3.
For (2, 3) we have to remove the second and the third roads. The sum of the importance values is 3. So, we get 4 * 3 * 3 = 36.
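The separation numbers above can be checked by brute force for the sample by trying every subset of roads (a sketch of mine, feasible only for tiny inputs, not the intended min-cut solution):

```python
from itertools import combinations

edges = [(1, 2, 3), (2, 3, 1), (1, 3, 2)]  # (city A, city B, importance)
MOD = 10**9 + 7

def reachable(a, b, kept):
    # depth-first search over the remaining roads
    seen, stack = {a}, [a]
    while stack:
        u = stack.pop()
        for x, y, _ in kept:
            if u in (x, y):
                v = y if u == x else x
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
    return b in seen

def separation(a, b):
    # minimal total importance of a road subset whose removal disconnects a and b
    best = None
    for r in range(len(edges) + 1):
        for removed in combinations(range(len(edges)), r):
            kept = [e for i, e in enumerate(edges) if i not in removed]
            if not reachable(a, b, kept):
                cost = sum(edges[i][2] for i in removed)
                best = cost if best is None else min(best, cost)
    return best

product = 1
for a, b in [(1, 2), (1, 3), (2, 3)]:
    product = product * separation(a, b) % MOD

print(product)  # 36, matching the sample output
```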
Scoring
- In the 25% of the test data N = 50 and M = 300.
- In another 25% of the test data N = 200 and M = 10000
- In the rest of the test data N = 500 and M = 10000
Time Limit: 3.0 sec(s) for each input file. Memory Limit: 256 MB. Source Limit: 1024 KB.
#include <map>
#include <set>
#include <list>
#include <queue>
#include <deque>
#include <stack>
#include <bitset>
#include <vector>
#include <ctime>
#include <cmath>
#include <cstdio>
#include <string>
#include <cstring>
#include <cassert>
#include <numeric>
#include <iomanip>
#include <sstream>
#include <fstream>
#include <iostream>
#include <algorithm>
using namespace std;

typedef long long LL;
typedef pair<int, int> PII;
typedef vector<int> VI;
typedef vector<LL> VL;
typedef vector<PII> VPII;

#define MM(a,x) memset(a,x,sizeof(a));
#define ALL(x) (x).begin(), (x).end()
#define P(x) cerr<<"["#x<<" = "<<(x)<<"]\n"
#define PP(x,i) cerr<<"["#x<<i<<" = "<<x[i]<<"]\n"
#define P2(x,y) cerr<<"["#x" = "<<(x)<<", "#y" = "<<(y)<<"]\n"
#define TM(a,b) cerr<<"["#a" -> "#b": "<<1e3*(b-a)/CLOCKS_PER_SEC<<"ms]\n";
#define UN(v) sort(ALL(v)), v.resize(unique(ALL(v))-v.begin())
#define mp make_pair
#define pb push_back
#define x first
#define y second

struct _ {_() {ios_base::sync_with_stdio(0);}} _;

template<class A, class B> ostream& operator<<(ostream &o, pair<A, B> t) {o << "(" << t.x << ", " << t.y << ")"; return o;}
template<class T> void PV(T a, T b) {while(a != b) cout << *a++, cout << (a != b ? " " : "\n");}
template<class T> inline bool chmin(T &a, T b) {return a > b ? a = b, 1 : 0;}
template<class T> inline bool chmax(T &a, T b) {return a < b ? a = b, 1 : 0;}
template<class T> string tostring(T x, int len = 0) {stringstream ss; ss << x; string r = ss.str(); if(r.length() < len) r = string(len - r.length(), '0') + r; return r;}
template<class T> void convert(string x, T& r) {stringstream ss(x); ss >> r;}

const int inf = 0x3f3f3f3f;
const long long linf = 0x3f3f3f3f3f3f3f3fLL;
const int mod = int(1e9) + 7;
const int N = 501;

#define FF int
const int maxE = 40010;
const int maxN = 10010;
const int INF = 0x3f3f3f3f;

struct Dinic {
    int ne;
    int hd[maxN], work[maxN], q[maxN], Level[maxN], from[maxE], to[maxE], next[maxE];
    FF cap[maxE], flow[maxE];

    Dinic() {init();}

    void init() {
        ne = 0;
        memset(hd, -1, sizeof(hd));
    }

    void add(int x, int y, FF c) {
        from[ne] = x, to[ne] = y, cap[ne] = c, flow[ne] = 0, next[ne] = hd[x], hd[x] = ne++;
        from[ne] = y, to[ne] = x, cap[ne] = 0, flow[ne] = 0, next[ne] = hd[y], hd[y] = ne++;
    }

    void addU(int x, int y, FF c) {
        from[ne] = x, to[ne] = y, cap[ne] = c, flow[ne] = 0, next[ne] = hd[x], hd[x] = ne++;
        from[ne] = y, to[ne] = x, cap[ne] = c, flow[ne] = 0, next[ne] = hd[y], hd[y] = ne++;
    }

    bool dinic_bfs(int S, int T) {
        int head = 0, tail = 0;
        memset(Level, 0, sizeof(Level));
        Level[S] = 1;
        q[tail++] = S;
        while(head < tail) {
            int u = q[head++];
            for(int i = hd[u]; i != -1; i = next[i]) {
                int v = to[i];
                if(flow[i] < cap[i] && !Level[v]) {
                    Level[v] = Level[u] + 1;
                    q[tail++] = v;
                    if(v == T) return 1;
                }
            }
        }
        return 0;
    }

    FF dinic_dfs(int u, int T, FF pMin) {
        if(u == T || !pMin) return pMin;
        FF ret = 0;
        for(int& i = work[u]; i != -1; i = next[i]) {
            int v = to[i];
            FF f;
            if(Level[v] == Level[u] + 1 && (f = dinic_dfs(v, T, min(pMin, cap[i] - flow[i])))) {
                flow[i] += f;
                flow[i ^ 1] -= f;
                ret += f;
                pMin -= f;
                if(pMin == 0) break;
            }
        }
        return ret;
    }

    FF dinic(int S, int T) {
        FF ret = 0;
        while(dinic_bfs(S, T)) {
            memcpy(work, hd, sizeof(hd));
            ret += dinic_dfs(S, T, INF);
        }
        return ret;
    }
} dn;

int n, m;
vector<pair<PII, int>> E;
int visited[N];

void dfs(int cur, vector<vector<int>>& G) {
    if(visited[cur]) return;
    visited[cur] = 1;
    for(auto i : G[cur]) dfs(i, G);
}

int parent[N];
int cut[N][N];

int main() {
    cin >> n >> m;
    assert(3 <= n && n <= 500);
    assert(3 <= m && m <= 1e4);
    for(int i = 1; i <= m; i++) {
        int u, v, w;
        cin >> u >> v >> w;
        assert(1 <= u && u <= n);
        assert(1 <= v && v <= n);
        assert(u != v);
        assert(1 <= w && w <= 1e5);
        u--, v--;
        E.pb(mp(mp(u, v), w));
    }
    MM(cut, 0x3f);
    for(int i = 1; i < n; i++) {
        dn.init();
        for(auto e : E) dn.addU(e.x.x, e.x.y, e.y);
        int S = i, T = parent[i];
        int x = dn.dinic(S, T);
        MM(visited, 0);
        vector<vector<int>> G(n + 1);
        for(int j = 0; j < dn.ne; j++)
            if(dn.cap[j] > dn.flow[j]) G[dn.from[j]].pb(dn.to[j]);
        dfs(S, G);
        for(int j = i + 1; j < n; j++)
            if(visited[j] && parent[j] == parent[i]) parent[j] = i;
        cut[i][parent[i]] = cut[parent[i]][i] = x;
        for(int j = 0; j < i; j++) {
            cut[i][j] = cut[j][i] = min(x, cut[parent[i]][j]);
        }
    }
    LL res = 1;
    for(int i = 0; i < n; i++)
        for(int j = i + 1; j < n; j++)
            res = res * cut[i][j] % mod;
    cout << res << endl;
    return 0;
}
INPUT:
3 3
1 2 3
2 3 1
1 3 2

OUTPUT:
36
Regular Expressions are patterns used to match and validate text, for example to validate a field like an Email Id or a telephone number.
2) java.util.regex Package
The java.util.regex package provides the necessary classes for using Regular Expressions in a java application. This package has been introduced in Java 1.4. It consists of three main classes namely,
- Pattern
- Matcher
- PatternSyntaxException
3) Pattern Class
A regular expression which is specified as a string, should be first compiled into an instance of
Pattern class. The resulting pattern can then be used to create an instance of
Matcher class which contains various in-built methods that helps in performing a match against the regular expression. Many
Matcher objects can share the same
Pattern object.
Let us now discuss about the important methods in
Pattern class.
a) compile() method
Before working with a regular expression, we need a compiled form of the pattern, obtained by calling the
Pattern.compile() method, which returns a new
Pattern object. Note that
compile() is a static method, so we don't need an instance of the
Pattern class.
There are two forms of
compile() method,
- compile(String regex)
- compile(String regex, int flags)
In the first form of
compile() method, we pass the regular expression that is to be compiled. In the second form of this method, we have an additional parameter which is used to specify the match flags that have to be applied. The flags can be either
CASE_INSENSITIVE,
MULTILINE,
DOTALL,
UNICODE_CASE or
CANON_EQ based on which matching would be done.
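As a small sketch of the second form of compile() (the class and method names here are ours, not from the article), the CASE_INSENSITIVE flag makes matching ignore case:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class FlagDemo {
    // Returns true if the entire input matches the regex, ignoring case.
    static boolean matchesIgnoringCase(String regex, String input) {
        Pattern p = Pattern.compile(regex, Pattern.CASE_INSENSITIVE);
        Matcher m = p.matcher(input);
        return m.matches();
    }

    public static void main(String[] args) {
        System.out.println(matchesIgnoringCase("java", "JAVA"));     // true
        System.out.println(matchesIgnoringCase("java", "JavaBeat")); // false: matches() needs the whole input
    }
}
```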
b) matcher() method
The
matcher() method is used to create a new
Matcher object for a given input and pattern, which can be used to perform matching operations. The syntax is as follows,
matcher(String input)
c) matches() method
The
Pattern class provides a
matches() method, which is a static method. This method returns true only if the entire input text matches the pattern. This method internally depends on the
compile() and
matcher() methods of the
Pattern object. The syntax for this static matches() method is,
Pattern.matches(pattern, inputSequence);
Let us see a simple example,
RegExpTest.java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegExpTest {
    public static void main(String[] args) {
        String inputStr = "Computer";
        String pattern = "Computer";
        boolean patternMatched = Pattern.matches(pattern, inputStr);
        System.out.println(patternMatched);
    }
}
The
matches() method returns true in this case and hence we get the output as true. If in case, we had given the input string as
"ComputerScience", then the
matches() method would have returned false.
In the static
matches() method, when we specify an input string and a pattern to the
Pattern.matches() method, the pattern gets compiled into a
Pattern object, which is used for the matching operation. This is inefficient because every time we specify an input string and pattern, the pattern is compiled again. Hence, it's better to use the non-static
matches() method in
Matcher class. (This
matches() method would be discussed when dealing with Matcher class in the forth-coming section).
d) pattern() method
Returns the regular expression as a string from which this pattern was compiled.
e) split() method
This method is used to split the given input text based on the given pattern. It returns a String array.
There are two forms of
split() method,
- split(String input)
- split(String input, int limit)
In the second form, we have an argument called
limit which is used to specify the limit i.e the number of resultant strings that have to be obtained by
split() method.
Let us see a simple example for the
split() method,
Pattern pattern = Pattern.compile("ing");
String input = "playingrowinglaughingsleepingweeping";
String[] str = pattern.split(input, 4);
for (String st : str) {
    System.out.println(st);
}
In the above code, we had specified the limit of number of Strings to be returned as 4. Hence we would get 4 strings as the result.
The output for the above code is,
play row laugh sleepingweeping
f) flag() method
This method returns this pattern’s match flags which would have been specified when the pattern was compiled.
4) Matcher Class
The
Matcher class which contains various in-built methods such as
matches(),
find(),
group(),
replaceFirst(),
replaceAll() etc., that help us to check whether the desired pattern occurs in the given text or search the desired pattern in the text or to replace the occurrence of the pattern in the text with some other set of characters as per the requirement.
Let us now discuss about the important methods in
Matcher class.
a) matches() method
The
matches() method available in the
Matcher class is used to match an input text against a pattern. This method returns true only if the entire input text sequence matches the pattern. Consider the following example,
String input = "Java1.4, Java1.5, Java1.6";
Pattern pattern = Pattern.compile("Java");
Matcher matcher = pattern.matcher(input);
boolean patternMatched = matcher.matches();
In this case the value of
patternMatched will be false since the entire input string
"Java1.4, Java1.5, Java1.6" does not match the regular expression pattern
"Java" and hence
matches() method returns false. Let us use the same
Pattern object in another
Matcher object and see how it works. Consider the following code,
String input = "Java";
Matcher matcher1 = pattern.matcher(input);
boolean patternMatched1 = matcher1.matches();
Here, the
matches() method returns true since the entire input sequence matches the pattern
"Java". The
matches() method finds appropriate use in searching for particular whole words in a given text.
Let us see another example to know about some other methods available in
Matcher class,
String input = "Java1.4, Java1.5, Java1.6";
Pattern pattern = Pattern.compile("Java");
Matcher matcher = pattern.matcher(input);
while (matcher.find()) {
    System.out.println(matcher.group() + ": " + matcher.start() + ": " + matcher.end());
}
The output of the above code is,
Java: 0: 4 Java: 9: 13 Java: 18: 22
In the above example code, we see the usage of
find(),
group(),
start() and
end() methods.
Now, let us know about the purpose of those methods.
b) find() method
The
find() method in
Matcher Class returns true if the pattern occurs anywhere in the input string.
It has two forms,
- find()
- find(int start)
In our example, we used the first form of
find() method. It searches for all occurrences of the pattern
"Java" in the given input String
"Java1.4, Java1.5, Java1.6" and then returns true if a subsequence in the input matches the desired pattern.
In the second form of
find() method, we have an argument that is used to specify the start index of find operation.
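The second form can be sketched as follows (class and helper names are ours): searching the same input as above, but starting the search at a given index.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class FindFromDemo {
    // Returns the start index of the first match at or after 'from', or -1 if none.
    static int findFrom(String regex, String input, int from) {
        Matcher m = Pattern.compile(regex).matcher(input);
        return m.find(from) ? m.start() : -1;
    }

    public static void main(String[] args) {
        String input = "Java1.4, Java1.5, Java1.6"; // "Java" occurs at indexes 0, 9, 18
        System.out.println(findFrom("Java", input, 0));  // 0
        System.out.println(findFrom("Java", input, 1));  // 9
        System.out.println(findFrom("Java", input, 19)); // -1
    }
}
```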
c) group() method
The
group() method in
Matcher Class returns the piece of input that has matched the pattern.
d) start() method and end() method
The
start() and
end() methods in
Matcher Class return the start and end indexes respectively, for each occurrence of the subsequences in the input text that has matched the defined regular expression pattern.
5) PatternSyntaxException Class
PatternSyntaxException is an unchecked exception which is thrown when there is any syntax error in a regular expression pattern. It has various methods like
getDescription(),
getIndex(),
getMessage() and
getPattern() which enable us to get the details of the error.
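A short sketch of catching this exception (the class name is ours): compiling a pattern with an unclosed character class fails, and getPattern() echoes the offending regular expression.

```java
import java.util.regex.Pattern;
import java.util.regex.PatternSyntaxException;

public class SyntaxErrorDemo {
    // Returns the erroneous pattern reported by PatternSyntaxException,
    // or null if the pattern compiled successfully.
    static String badPattern(String regex) {
        try {
            Pattern.compile(regex);
            return null;
        } catch (PatternSyntaxException e) {
            return e.getPattern();
        }
    }

    public static void main(String[] args) {
        System.out.println(badPattern("[a-z"));  // prints [a-z  (unclosed character class)
        System.out.println(badPattern("[a-z]")); // prints null (valid pattern)
    }
}
```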
We have just seen very simple examples to understand the basics of Regular Expressions in java, and about the purpose of few often used methods. With this basic knowledge, we shall discuss in depth about Regular Expression in the following sections.
6) Matching any single character
The
'.' character is used to match any single character. If suppose we use a pattern
'ca.' then this pattern would match string inputs like
'car', ‘cat’,
'cap' etc.. because they start with ca and then followed by another single character. Consider an example to understand this,
Pattern patternObj = Pattern.compile("ca.");
Matcher matcher = patternObj.matcher("cap");
if (matcher.matches()) {
    System.out.println("The given input matched the pattern");
}
The output of the above code is,
The given input matched the pattern
7) Matching Special characters
Suppose we need to specify
'.' character in our pattern to indicate that the string input should contain the
'.' character. But, in Regular Expression
'.' has a specific meaning. So we have to specify it to the compiler that we don’t mean the Regular expression
'.' by escaping it with a
'\' (backslash character) which is a metacharacter. Consider the following example,
Pattern patternObj = Pattern.compile(".est\\.java"); // compile() line reconstructed from context: '.' matches any character, "\\." matches a literal dot
Matcher matcher1 = patternObj.matcher("test.java");
Matcher matcher2 = patternObj.matcher("nest.java");
if (matcher1.matches() && matcher2.matches()) {
    System.out.println("Both the inputs matched the pattern");
}
The output of this code is,
Both the inputs matched the pattern
Hence, the use of
'.' means to match any character, and the use of ‘\.’ means the normal
'.' character. We can use this backslash character wherever we need to specify a special character for some other purpose.
(Note : In Java, the compiler expects a backslash character
'\' to be always prefixed with another backslash character
'\' when used within a String literal.)
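This doubling of backslashes in string literals can be sketched as follows (helper name is ours): the regex \.java is written "\\.java" in Java source.

```java
import java.util.regex.Pattern;

public class EscapeDemo {
    // In Java source, the regex .*\.java must be written ".*\\.java".
    static boolean endsInDotJava(String s) {
        return Pattern.matches(".*\\.java", s);
    }

    public static void main(String[] args) {
        System.out.println(endsInDotJava("test.java")); // true: the escaped dot matches a literal '.'
        System.out.println(endsInDotJava("testXjava")); // false: 'X' is not a literal dot
    }
}
```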
8) Matching particular characters
In a given text, we may need to match specific desired characters. We have already seen that the
'.' character will match any single character but now our requirement is to match only
'c' or
's' along with other desired set of characters. In such situations, we can enclose the desired characters in parenthesis
[] which is a metacharacter used to indicate a character set from which any one character should be available in the given text. Consider the following piece of code that illustrates the same,
Pattern patternObj = Pattern.compile("[CcSs]at");
Matcher matcher = patternObj.matcher("Sat");
if (matcher.matches()) {
    System.out.println("The given input matched the pattern");
}
The output of this code is,
The given input matched the pattern
In the above example, we have used the pattern
'[CcSs]at' wherein
Cc is used to match
C and
c, and
Ss matches
S and
s. Hence this would match inputs such as
Cat,
cat,
Sat and
sat.
9) Matching range of characters
In Regular expressions, we use the metacharacter
'-' i.e a hyphen symbol to specify a range of characters. For example, we can specify the range of lowercase alphabets as
'[a-z]' and
'[A-Z]' in case of uppercase alphabets.
Consider a situation where we need the input to start with a number from
0 to
3, then followed by any alphabet from
a to
z and then followed by another number that might range from
7 to
9. The following code can be used to validate the input against such a pattern,
Pattern patternObj = Pattern.compile("[0-3][a-z][7-9]");
Matcher matcher = patternObj.matcher("2a8");
if (matcher.matches()) {
    System.out.println("The given input matched the pattern");
}
10) Matching characters apart from a specific list
We can use the
'^' metacharacter to specify one or more characters that we don't expect to match. Let us achieve the same requirement in our previous example by using a different pattern that makes use of the metacharacter
'^' ,
Pattern patternObj = Pattern.compile("[^3-9][a-z][7-9]");
The use of
'^' character inside
[] indicates that those characters specified in it are not expected in the input.
This pattern will match the input
"2a8" as in our previous example.
11) Use of other Metacharacters in Regular Expression
We have already discussed about the use of
'.' and
'^' metacharacters. Let us see the purpose of other metacharacters.
\d
This matches a numeric digit. It is the same as using the character set
[0-9].
\D
This matches any character which is non-numeric. It is the same as the use of
[^0-9].
\s
This matches a single whitespace character.
\S
This matches any character which is not a whitespace character.
\w
This matches a word character. It is equivalent to the character class
[A-Za-z0-9_].
\W
This matches a character that is not a word character. It is equivalent to the
negated character class
[^A-Za-z0-9_].
[…]
This matches a single character present in between the square paranthesis.
[a-f[s-z]]
This specifies the union of two sets of characters, which is the same as
[a-fs-z], i.e it
matches any character that might be either from
a to
f and from
s to
z.
[a-m&&[f-z]]
This specifies the intersection of two sets of characters, which is the same as
[f-m], i.e. it matches any character from
f to
m.
[^…]
This matches any single character, except those characters that are specified inside the square parentheses
[].
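A few of these shorthand classes in action (a small sketch, class name ours; remember the doubled backslashes in string literals):

```java
import java.util.regex.Pattern;

public class ShorthandDemo {
    public static void main(String[] args) {
        System.out.println(Pattern.matches("\\d\\d\\d", "123"));  // true: three digits
        System.out.println(Pattern.matches("\\w+", "abc_9"));     // true: letters, digits, underscore
        System.out.println(Pattern.matches("\\S+", "has space")); // false: the input contains whitespace
    }
}
```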
12) POSIX Character Classes
The
java.util.regex package provides a set of
POSIX character classes, which are indeed shortcuts to be used in regular expressions that make it easier for us to use instead of specifying the entire pattern.
\p{Lower}
It can be used to match any single lowercase alphabetic character. It is the same as using
[a-z].
\p{Upper}
It can be used to match any single uppercase alphabet character. It is the same as using
[A-Z].
\p{Alpha}
It is used to match any alphabetic character. It serves the same purpose as
[A-Za-z].
\p{Digit}
It is used to match any single digit. It serves the same purpose as
[0-9].
\p{Punct}
It is the same as using
[!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~].
\p{Graph}
It is the same as using
[\p{Alpha}\p{Punct}].
\p{Print}
It is the same as using [\p{Graph}\x20].
\p{ASCII}
It can be used to match any of the ASCII characters. It serves the same purpose as
U+0000 through
U+007F.
\p{XDigit}
It matches a single hexadecimal digit. It is the same as using
[0-9a-fA-F].
\p{Space}
It is used to match a single whitespace character. It is the same as using
[ \t\n\x0B\f\r].
\p{Blank}
It matches a single space character or a tab character.
\p{Cntrl}
It matches a control character. It is the same as using
[\x00-\x1F\x7F].
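Combining a few of these POSIX classes (a sketch; class name ours): one uppercase letter, then lowercase letters, then digits.

```java
import java.util.regex.Pattern;

public class PosixDemo {
    public static void main(String[] args) {
        // \p{Upper}, \p{Lower}+ and \p{Digit}+ in sequence.
        String pattern = "\\p{Upper}\\p{Lower}+\\p{Digit}+";
        System.out.println(Pattern.matches(pattern, "Java15")); // true
        System.out.println(Pattern.matches(pattern, "java15")); // false: no uppercase start
    }
}
```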
13) Conclusion
This article is just an introduction to Regular expressions. We have seen the basics on how to use regular expressions to perform useful operations such as search, replace and validation for a given input. Regular Expressions can be effectively used to suit our application needs that involve text manipulation.
python foo.py
>>> a = "3 5 8"
>>> a.split()
['3', '5', '8']

split is a nice way to break input lines into fields.
>>> a = 'brad vander zanden, yifan tang, george brett'
>>> names = a.split(',')
>>> names
['brad vander zanden', ' yifan tang', ' george brett']
>>> names[1].strip()
'yifan tang'
>>> x = 1
>>> s = 'the value of x is ' + x
TypeError: cannot concatenate 'str' and 'int' objects
>>> s = 'the value of x is ' + str(x)
>>> s
'the value of x is 1'
a = [3, 4, 5]
b = a
b[2] = 8
a  # prints [3, 4, 8]
a, b = 0, 1
while b < 10:
    print b
    a, b = b, a+b

Comments
10 <= a <= 20   # true if a between 10 and 20
a < b == c      # true if a < b and b == c
if name in ['frank', 'george', 'ralph']:

is equivalent to
if (name == 'frank') or (name == 'george') or (name == 'ralph'):
if condition:
    statements
elif condition:
    statements
elif condition:
    statements
...
else:
    statements
for var in sequence:
    statements that operate on var

For example, the following code sums the elements of a list:
sum = 0
for value in data:
    sum = sum + value
range(4) # yields [0, 1, 2, 3]
range(0, 10, 3)   # yields [0, 3, 6, 9]
range(10, 0, -3)  # yields [10, 7, 4, 1]
sum = 0
for i in range(1, 11):
    sum = sum + i
>>> for i, v in enumerate(['tic', 'tac', 'toe']):
...     print i, v
...
0 tic
1 tac
2 toe
>>> vector1 = [10, 20, 30, 40, 50]
>>> vector2 = [5, 10, 15, 20, 25]
>>> product = 0
>>> for v1, v2 in zip(vector1, vector2):
...     product = product + v1 * v2
...
>>> product
2750
for v1 in reversed(vector1):
    print v1
>>> people = {'brad': 45, 'yifan': 36, 'smiley': 5}
>>> for k, v in people.iteritems():
...     print k, v
...
yifan 36
smiley 5
brad 45
for n in range(2, 10):
    for x in range(2, n):
        if n % x == 0:
            print n, 'equals', x, '*', n/x
            break
    else:
        # loop fell through without finding a factor
        print n, 'is a prime number'
def functionName(args):
    """ Documentation String """   # optional
    function body
    return value
def foo():
    global x
    ... statements that modify x ...
def dfs(x, y):
    """ Perform a depth first search starting at x and ending at y

    Parameters
        x: The node from which to start the search
        y: The node at which to terminate the search
    Side-Effects: None
    """
    ...
def makeFormula(formula):
    return lambda: formula

newFormula = makeFormula("a + b")

Here's another example that creates an incrementer:
def makeIncrementer(n):
    return lambda x: x + n

f = makeIncrementer(42)
f(1)  # returns 43
lambda x,y: x if x < y else y
>>> def fib():
...     yield 1
...     yield 1
...     current, prev = 1, 1
...     while True:
...         current, prev = current + prev, current
...         yield current
...
>>> for i in fib():
...     if i > 100:
...         break
...     print i
...
1
1
2
3
5
8
13
21
34
55
89
>>> a = fib()
>>> a.next()
1
>>> a.next()
1
>>> a.next()
2
>>> a.next()
3
>>> a.next()
5
def fact(n):
    if n == 1:
        yield 1
    else:
        yield n * fact(n-1)

This won't work like you think because you expect fact(n-1) to return a number, but it in fact returns a generator object and hence the multiplication fails with a type error.
There are sophisticated solutions beyond the scope of this course for doing recursive generator functions. An easier way is to declare a nested function that returns a list of the results, and then returns the results one at a time:
def factorial(n):
    def fact(n):
        if n == 1:
            return [1]
        else:
            result = fact(n-1)
            result.append(n*result[-1])
            return result
    results = fact(n)
    for i in results:
        yield i
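The same workaround, restated here with Python 3 syntax so it can be run directly: the nested helper builds the whole list of factorials recursively, and the enclosing generator yields them one at a time.

```python
def factorial(n):
    # Inner helper returns the full list 1!, 2!, ..., n!;
    # the enclosing generator then yields the values one at a time.
    def fact(n):
        if n == 1:
            return [1]
        result = fact(n - 1)
        result.append(n * result[-1])
        return result

    for value in fact(n):
        yield value

print(list(factorial(5)))  # [1, 2, 6, 24, 120]
```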
dlm_lock - acquire or convert a DLM lock
#include <libdlm.h>

int dlm_lock(uint32_t mode,
             struct dlm_lksb *lksb,
             uint32_t flags,
             const void *name,
             unsigned int namelen,
             uint32_t parent,              /* unused */
             void (*astaddr) (void *astarg),
             void *astarg,
             void (*bastaddr) (void *bastarg),
             void *range);                 /* unused */

int dlm_lock_wait(uint32_t mode,
                  struct dlm_lksb *lksb,
                  uint32_t flags,
                  const void *name,
                  unsigned int namelen,
                  uint32_t parent,         /* unused */
                  void *bastarg,
                  void (*bastaddr) (void *bastarg),
                  void *range);            /* unused */

int dlm_ls_lock(dlm_lshandle_t lockspace,
                uint32_t mode,
                struct dlm_lksb *lksb,
                uint32_t flags,
                const void *name,
                unsigned int namelen,
                uint32_t parent,           /* unused */
                void (*astaddr) (void *astarg),
                void *astarg,
                void (*bastaddr) (void *bastarg),
                void *range);              /* unused */

int dlm_ls_lock_wait(dlm_lshandle_t lockspace,
                     uint32_t mode,
                     struct dlm_lksb *lksb,
                     uint32_t flags,
                     const void *name,
                     unsigned int namelen,
                     uint32_t parent,      /* unused */
                     void *bastarg,
                     void (*bastaddr) (void *bastarg),
                     void *range);         /* unused */

int dlm_ls_lockx(dlm_lshandle_t lockspace,
                 uint32_t mode,
                 struct dlm_lksb *lksb,
                 uint32_t flags,
                 const void *name,
                 unsigned int namelen,
                 uint32_t parent,          /* unused */
                 void (*astaddr) (void *astarg),
                 void *astarg,
                 void (*bastaddr) (void *astarg),
                 uint64_t *xid,
                 uint64_t *timeout);
dlm_lock and its variants acquire and convert locks in the DLM.

dlm_lock() operations are asynchronous. If the call to dlm_lock returns an error then the operation has failed and the AST routine will not be called. If dlm_lock returns 0 it is still possible that the lock operation will fail. The AST routine will be called when the locking is complete or has failed, and the status is returned in the lksb.

dlm_lock_wait() will wait until the lock operation has completed and returns the final completion status.

dlm_ls_lock() is the same as dlm_lock() but takes a lockspace argument. This lockspace must have been previously opened by dlm_open_lockspace() or dlm_create_lockspace().

For conversion operations the name and namelen are ignored and the lock ID in the LKSB is used to identify the lock to be converted.

If a lock value block is specified then, in general, a grant or a conversion to an equal-level or higher-level lock mode reads the lock value from the resource into the caller's lock value block. When a lock conversion from EX or PW to an equal-level or lower-level lock mode occurs, the contents of the caller's lock value block are written into the resource. If the LVB is invalidated, the lksb.sb_flags member will be set to DLM_SBF_VALNOTVALID. Lock value blocks are always 32 bytes long.

If AST routines or parameters are passed to a conversion operation then they will overwrite those values that were passed to a previous dlm_lock call.

mode
    Lock mode to acquire or convert to.

    LKM_NLMODE   NULL Lock
    LKM_CRMODE   Concurrent read
    LKM_CWMODE   Concurrent write
    LKM_PRMODE   Protected read
    LKM_PWMODE   Protected write
    LKM_EXMODE   Exclusive

flags
    Affect the operation of the lock call:

    LKF_NOQUEUE      Don't queue the lock. If it cannot be granted return -EAGAIN.
    LKF_CONVERT      Convert an existing lock.
    LKF_VALBLK       Lock has a value block.
    LKF_QUECVT       Put conversion to the back of the queue.
    LKF_EXPEDITE     Grant a NL lock immediately regardless of other locks on the conversion queue.
    LKF_PERSISTENT   Specifies a lock that will not be unlocked when the process exits; it will become an orphan lock.
    LKF_CONVDEADLK   Enable internal conversion deadlock resolution, where the lock's granted mode may be set to NL and DLM_SBF_DEMOTED is returned in lksb.sb_flags.
    LKF_NODLCKWT     Do not consider this lock when trying to detect deadlock conditions.
    LKF_NODLCKBLK    Not implemented.
    LKF_NOQUEUEBAST  Send blocking ASTs even for NOQUEUE operations.
    LKF_HEADQUE      Add locks to the head of the convert or waiting queue.
    LKF_NOORDER      Avoid the VMS rules on grant order.
    LKF_ALTPR        If the requested mode can't be granted (generally CW), try to grant in PR and return DLM_SBF_ALTMODE.
    LKF_ALTCW        If the requested mode can't be granted (generally PR), try to grant in CW and return DLM_SBF_ALTMODE.
    LKF_TIMEOUT      The lock will time out per the timeout arg.

lksb
    Lock Status Block. This structure contains the returned lock ID, the actual status of the lock operation (all lock ops are asynchronous) and the value block if LKF_VALBLK is set.

name
    Name of the lock. Can be binary, max 64 bytes. Ignored for lock conversions. (Should be a string to work with debugging tools.)

namelen
    Length of the above name. Ignored for lock conversions.

parent
    ID of parent lock or NULL if this is a top-level lock. This is currently unused.

ast
    Address of AST routine to be called when the lock operation completes. The final completion status of the lock will be in the lksb. The AST routine must not be NULL.

astargs
    Argument to pass to the AST routine (most people pass the lksb in here, but it can be anything you like).

bast
    Blocking AST routine. Address of a function to call if this lock is blocking another. The function will be called with astargs.

range
    This is unused.

xid
    Optional transaction ID for deadlock detection.

timeout
    Timeout in centiseconds. If it takes longer than this to acquire the lock (usually because it is already blocked by another lock), then the AST will trigger with ETIMEDOUT as the status. If the lock operation is a conversion then the lock will remain at its current status. If this is a new lock then the lock will not exist and any LKB in the lksb will be invalid. This is ignored without the LKF_TIMEOUT flag.

Return values

0 is returned if the call completed successfully. If not, -1 is returned and errno is set to one of the following:

EINVAL   An invalid parameter was passed to the call (e.g. bad lock mode or flag)
ENOMEM   A (kernel) memory allocation failed
EAGAIN   LKF_NOQUEUE was requested and the lock could not be granted
EBUSY    The lock is currently being locked or converted
EFAULT   The userland buffer could not be read/written by the kernel (this indicates a library problem)

If an error is returned in the AST, then lksb.sb_status is set to one of the above values instead of zero.

Structures

struct dlm_lksb {
    int      sb_status;  /* Final status of lock operation */
    uint32_t sb_lkid;    /* ID of lock. Returned from dlm_lock() on first use.
                            Used as input to dlm_lock() for a conversion operation */
    char     sb_flags;   /* Completion flags, see above */
    char    *sb_lvbptr;  /* Optional pointer to lock value block */
};

SEE ALSO

dlm_unlock(3), dlm_open_lockspace(3), dlm_create_lockspace(3), dlm_close_lockspace(3), dlm_release_lockspace(3)
25 September 2013 17:11 [Source: ICIS news]
Correction: In the ICIS story headlined “US INVISTA develops new polyol to raise feedstock flexibility” dated 25 September 2013, the story reported that production will start in January 2014. However, production has already started at the company's plant in The Netherlands.
The new polyester polyol, Terate HT, was one of the finalists for the Polyurethanes Innovation Award, the winner of which will be announced on Wednesday.
At the same time, INVISTA wanted to improve on the qualities of the Terate line.
The Terate line is important for INVISTA, because it is an aromatic polyester polyol that is used to make rigid-foam polyurethanes, said Bob Francois, president of specialty materials for the company. He made his comments on the sidelines of the Polyurethanes Technical Conference, held by the Center for the Polyurethanes Industry (CPI).
Rigid-foam polyurethanes are used to make high-quality insulation. Such insulation is seeing increasing demand from the construction industry, which is eager to meet new building codes that call for better energy efficiency. At the same time, rigid foams made from aromatic polyester polyols have good flame-resistance qualities, which avoids the need to add halogenated flame retardants to the foam.
After a couple of years of development, INVISTA created Terate HT, which is based on a new chemistry and that builds on the qualities of the original Terate line, Francois said. At the same time, Terate HT has recycled content, which allows customers to meet sustainability goals.
INVISTA is already producing the polyol at its plant in The Netherlands, he said. It then made an initial investment in
For INVISTA's older generation of Terate polyols, its feedstock was tied to the production of dimethyl terephthalate (DMT), Francois said.
DMT, in turn, is a feedstock used to make polyethylene terephthalate (PET).
Over the years, however, PET production increasingly relied on another raw material, purified terephthalic acid (PTA).
With supplies of feedstock dwindling, INVISTA nonetheless chose to continue investing in the Terate line.
"We challenged the R&D team and the business team," he said. INVISTA wanted to retain the best properties of the original Terate line, as well as improve on other qualities of the product.
As INVISTA worked on developing the new line, it had cooperation from customers and co-suppliers, Francois said. "The customers were very willing to take risks in every step of the process."
The company chose its plant in The Netherlands to start initial production, Francois said.
INVISTA chose
Although Terate HT is a new product, it is already receiving interest from Asian customers, Francois said.
The polyurethanes
Python's exec statement can execute code that you read, generate, or otherwise obtain during a program's run. exec dynamically executes a statement or a suite of statements. exec is a simple keyword statement with the following syntax:
exec code [in globals [, locals]]
code can be a string, an open file-like object, or a code object. globals and locals are dictionaries (in Python 2.4, locals can be any mapping, but globals must be specifically a dict; in Python 2.5, either or both can be any mapping). If both are present, they are the global and local namespaces in which code executes. If only globals is present, exec uses globals in the role of both namespaces. If neither globals nor locals is present, code executes in the current scope. Running exec in the current scope is a bad idea, since it can bind, rebind, or unbind any name. To keep things under control, use exec only with specific, explicit dictionaries.
Use exec only when it's really indispensable. Most often, it's best to avoid exec and choose more specific, well-controlled mechanisms instead: exec pries loose your control on your code's namespace, damages your program's performance, and exposes you to numerous, hard-to-find bugs.
For example, a frequently asked question about Python is "How do I set a variable whose name I just read or built?" Strictly speaking, exec lets you do this. For example, if the name of the variable you want to set is in varname, you might use:
exec varname+'=23'
Don't do this. An exec statement without explicit dictionaries executes in, and can damage, the current namespace. If you really need this effect, confine the execution to a specific, explicit dictionary, such as mydict. You can then use the following variation:
exec varname+'=23' in mydict
While this use is not as terrible as the previous example, it is still a bad idea. Keeping such "variables" as dictionary entries is simple and effective, but it also means that you don't need to use exec at all to set them. Just code:
mydict[varname] = 23
With this approach, your program is clearer, direct, elegant, and faster. While there are valid uses of exec, they are extremely rare and should always use explicit dictionaries.
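The dictionary-based approach in action (a minimal sketch, using Python 3 print syntax):

```python
# Keeping dynamically named "variables" as dictionary entries -- no exec needed.
mydict = {}
varname = 'answer'       # a name computed or read at run time
mydict[varname] = 23
print(mydict['answer'])  # 23
```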
exec can execute an expression because any expression is also a valid statement (called an expression statement). However, Python ignores the value returned by an expression statement in this case. To evaluate an expression and obtain the expression's value, see built-in function eval, covered in eval on page 161.
To obtain a code object to use with exec, you normally call built-in function compile with the last argument set to 'exec' (as covered in "compile"). A code object c exposes many interesting read-only attributes whose names all start with 'co_', such as:
co_argcount
The number of parameters of the function of which c is the code (0 when c is not the code object of a function but rather is built directly by compile on a string)
co_code
A byte-string with c's bytecode
co_consts
The tuple of constants used in c
co_filename
The name of the file c was compiled from (the string that is the second argument to compile when c was built that way)
co_firstlineno
The initial line number (within the file named by co_filename) of the source code that was compiled to produce c if c was built by compilation from a file
co_name
The name of the function of which c is the code ('<module>' when c is not the code object of a function but rather is built directly by compile on a string)
co_names
The tuple of all identifiers used within c
co_varnames
The tuple of local variables' identifiers within c, starting with parameter names
Most of these attributes are useful only for debugging purposes, but some may help advanced introspection, as exemplified further on in this section.
If you start with a string that holds some statements, I recommend using compile on the string, then calling exec on the resulting code object rather than giving exec the string to compile and execute. This separation lets you check for syntax errors separately from evaluation-time errors. You can often arrange things so that the string is compiled once and the code object is executed repeatedly, which speeds things up. eval can also benefit from such separation. Moreover, the compile step is intrinsically safe (while both exec and eval are enormously risky if you execute them on code that you don't trust), and you may be able to perform some checks on the code object to lessen the risk.
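A minimal sketch of this separation (written with Python 3's exec() function, which replaces the exec statement shown in this book's examples; the compile-once pattern is the same):

```python
# Compile once: syntax errors surface here, before any execution.
code = compile("result = x * x", "<user code>", "exec")

# Execute many times, each time in an explicit, controlled namespace.
for x_value in (2, 3, 4):
    namespace = {"x": x_value}
    exec(code, namespace)
    print(namespace["result"])  # 4, then 9, then 16
```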
A code object has a read-only attribute co_names, which is the tuple of the names used in the code. For example, say that you want the user to enter an expression that contains only literal constants and operators, with no function calls nor other names. Before evaluating the expression, you can check that the string the user entered satisfies these constraints:
def safe_eval(s):
code = compile(s, '<user-entered string>', 'eval')
if code.co_names:
raise ValueError, ('No names (%r) allowed in expression (%r)' %
(code.co_names, s))
return eval(code)
This function safe_eval evaluates the expression passed in as argument s only if the string is a syntactically valid expression (otherwise, compile raises SyntaxError) and contains no names at all (otherwise, safe_eval explicitly raises ValueError).
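A short usage sketch of such a guard (the function is restated here with Python 3 raise syntax so the snippet is self-contained):

```python
def safe_eval(s):
    # Compile first: syntax errors surface here as SyntaxError
    code = compile(s, '<user-entered string>', 'eval')
    if code.co_names:
        # Any identifier at all (variables, functions) is rejected
        raise ValueError('No names (%r) allowed in expression (%r)'
                         % (code.co_names, s))
    return eval(code)

print(safe_eval('2 * (3 + 4)'))   # 14
try:
    safe_eval('__import__("os")')
except ValueError:
    print('rejected')             # the name __import__ is not allowed
```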
Knowing what names the code is about to access may sometimes help you optimize the preparation of the dictionary that you need to pass to exec or eval as the namespace. Since you need to provide values only for those names, you may save work by not preparing other entries. For example, say that your application dynamically accepts code from the user with the convention that variable names starting with data_ refer to files residing in subdirectory data that user-written code doesn't need to read explicitly. User-written code may in turn compute and leave results in global variables with names starting with result_, which your application writes back as files in subdirectory data. Thanks to this convention, you may later move the data elsewhere (e.g., to BLOBs in a database instead of to files in a subdirectory), and user-written code won't be affected. Here's how you might implement these conventions efficiently:
def exec_with_data(user_code_string):
user_code = compile(user_code_string, '<user code>', 'exec')
datadict = { }
for name in user_code.co_names:
if name.startswith('data_'):
datafile = open('data/%s' % name[5:], 'rb')
datadict[name] = datafile.read( )
datafile.close( )
exec user_code in datadict
for name in datadict:
if name.startswith('result_'):
datafile = open('data/%s' % name[7:], 'wb')
datafile.write(datadict[name])
datafile.close( )
Note that function exec_with_data is not at all safe against untrusted code: if you pass it as argument user_code_string, which is some string obtained in a way that you cannot entirely trust, there is essentially no limit to the amount of damage it might do. This is unfortunately true of just about any use of both exec and eval, except for those rare cases in which you can set very strict and checkable limits on the code you're about to execute or evaluate, as was the case for function safe_eval.
Old versions of Python tried to supply tools to ameliorate this situation, under the heading of "restricted execution," but those tools were never entirely proof against the ingenuity of able hackers, and current versions of Python have therefore dropped them. If you need to ward against such attacks, take advantage of your operating system's protection mechanisms: run untrusted code in a separate process, with privileges as restricted as you can possibly make them (study the mechanisms that your OS supplies for the purpose, such as chroot, setuid, and jail). To ward against "denial of service" attacks, have the main process monitor the separate one and terminate it if and when resource consumption becomes excessive. Processes are covered in "Running Other Programs" on page 354.
#include <wx/richtext/richtextbuffer.h>
A class representing a shadow.
Gets the shadow blur distance.
Gets the colour.
Gets the colour as a long.
Returns the border flags.
Gets the shadow horizontal offset.
Gets the shadow vertical offset.
Gets the shadow opacity.
Gets the shadow spread size.
True if the shadow has a valid colour.
True if the shadow has no attributes set.
Returns true if the dimension is valid.
Equality operator.
Removes a border flag.
Removes the specified attributes from this object.
Resets the shadow.
Sets the shadow blur distance.
Sets the shadow colour.
Sets the shadow colour.
Sets the border flags.
Sets the shadow horizontal offset.
Sets the shadow vertical offset.
Sets the shadow opacity.
Sets the shadow spread size.
Sets the valid flag.
Unleashing the Power of .NET Big Memory and Memory Mapped Files
Key Takeaways
- Web servers often have far more memory than the .NET GC can efficiently handle under normal circumstances.
- The performance benefits of a caching server are often lost due to increased network costs.
- Memory Mapped Files are often the fastest way to populate a cache after a restart.
- The goal of server-side tuning is to reach the point where your outbound network connection is saturated. This is obtained by minimizing CPU, disk, and internal network usage.
- By keeping object graphs in memory, you can obtain the performance benefits of a graph database without the complexity.
In continuation of the Big Memory topic on the .NET platform (part1, part2), this article describes the benefits of utilizing large data sets in-process in managed CLR server environments using Agnicore's Big Memory Pile.
Overview
RAM is very fast and affordable these days, yet is ephemeral. Every time the process restarts, memory is cleared out and everything has to be reloaded from scratch. To address this we have recently added Memory Mapped Files support to our solution - NFX Pile. With memory mapped files, the data can be quickly fetched from disk after a restart.
Overall, the Big Memory approach is beneficial for developers and businesses as it shifts the paradigm of high-performance computing on the .NET platform. Traditionally Big Memory systems were built in C/C++ style languages where you primarily dealt with strings and byte arrays. But it is hard to solve any real world business problems while focusing on low level data structures. So instead we are going to concentrate on CLR objects. Memory Pile allows developers to think in terms of object instances, and work with hundreds of millions of those instances that have properties, code, inheritance and other CLR-native functionality.
This is different from language-agnostic object models, as proposed by some vendors (i.e. ones that interoperate Java and .NET), which introduce extra transformations, and all of the out-of-process solutions that require extra traffic/context switching/serialization. Instead, we’re going to discuss in-process local heaps, or rather “Piles” of objects, which exist in managed code in large byte arrays. Individually, these objects are invisible to the GC.
Use Cases
Why would anyone use dozens or hundreds of gigabytes of RAM in a first place? Here are a few tested use-cases of the Big Memory Pile technology.
The first thing that comes to mind is cache. In an E-Commerce backend we store hundreds of thousands of products ready to be displayed as detailed catalog listings. Each may have dozens of variations. When you build a catalog view listing 30+ products on a single screen, you'd better get those objects pretty quickly even for a single user scrolling a page with progressive loading. Why not use Redis or Memcached? Because we do the same thing only in-process, saving on network traffic and serialization. Transforming data into network packets into objects can be a surprisingly expensive operation. Wouldn't you use a Dictionary<id, Product> (or IMemoryCache) if it were possible to hold all several hundred thousand products and their variations? Caching data alone provided enough motivation for using RAM, but there is much more...
In another cache use-case - a REST API server we were able to pre-serialize around 50 million rarely changing JSON vectors as UTF8-encoded byte arrays. The byte[], which was around 1024 bytes, could then be served directly into Http stream, making the network the bottleneck at around 80,000 req/sec.
Working with complex object graphs is another perfect case for Pile. In a social app, we needed to traverse the conversation threads on Twitter. When tracing who said what and when on a social media site, the ability to hold hundreds of millions of small vectors in memory is invaluable. We might as well have used a graph DB, however in our case we are the graph DB, right in the same process (it is a component hosted by our web MVC app). We’re now handling 100K+ REST API calls/sec, which is the limit of our network connection, while keeping the CPU usage low.
In this, and other use cases, background workers asynchronously update the social graph as changes come in. In many cases, such as the product catalog we talked about earlier, this can be done preemptively. You couldn’t do that with a normal cache that only holds a subset of the interesting data.
How it Works
Big Memory Pile solves the GC problems by using the transparent serialization of CLR object graphs into large byte arrays, effectively "hiding" the objects from GC's reach. Not all object types need to be fully serialized, though: string and byte[] objects are written into Pile as buffers, bypassing all serialization mechanisms and yielding over 6 M inserts/second for a 64-char string on a 6-core box.
The key benefit of this approach is its practicality. The real-life cases have shown the phenomenal overall performance while using the native CLR object model - this saves development time because you don't need to create special-purpose DTOs, and works faster, as there are no extra copies in-between that need to be made.
Overall, Pile has turned much of the I/O bound code into a CPU-bound code. What should have normally been a typical case for an async (with i/o bound) implementation, became 100% sync linear code, which is simpler and performs better as Tasks and other async/await goodies have a hidden cost (see here and here) when doing multi 100K ops/sec on a single server.
Big Memory Mapped Files
In-memory processing is fast and easy to implement, however when the process restarts you lose the dataset, which is large by definition (tens to hundreds of gigabytes). Pulling all of that data from its original source can be very time consuming, time that you can’t afford just after a restart.
To solve this we added Memory Mapped File (MMF) support using standard .NET classes: MemoryMappedFile and MemoryMappedViewAccessor. Now, instead of using byte[] as a backing store for memory segments, we use MemoryMappedViewAccessor instance and some low-level tricks to access data by pointers directly - all of this is still done using standard C#, no C++ is involved as we want to keep everything simple, especially the build chain.
Writing to memory via MemoryMappedViewAccessor (MMFMemory class) modifies virtual memory pages in the OS layer directly. The OS tries to fit those pages in physical RAM, if it can’t it swaps them out to disk. A nice feature of writing Pile into MMF is you don’t need to re-read everything from disk even after the process restarts soon after shutdown. The OS keeps the pages that have been mapped into process address space around even AFTER the process termination. Upon start, the MMFPile can access the pages already in RAM in a much quicker fashion than reading from disk anew.
Do note that MMFPile yields slower performance than DefaultPile (based on byte[]) due to the unmanaged code context switch done in the MMFMemory class.
Here are some test results:
Benchmark insert 200,000,000 string[32] 12 threads:
(Machine: Intel Core I7 3.2 Ghz, 6 Core, Win 7 64bit, VS2017, .NET 4.5)
DefaultPile
- 24 sec @ 8.3 M insert/sec = 8.5 GB memory; Full GC < 8 ms
MMFPile
- 41 sec @ 4.9 M insert/sec = 8.5 GB memory + disk; Full GC < 10 ms
- Flush all data to disk on Stop(): 10 sec
- Read all data back to RAM: 48 sec = ~177 MB/sec
As you can see, the MMF solution does have an extra cost; the throughput is lower due to unmanaged MMF transition, and once you mount the Pile back from disk, it takes time proportional to the amount of memory allocated to warm-up the RAM with data from disk. However you do not need to wait to load the whole working set back, as the MMFPile is instantly available for writes and reads after the Pile.Start(), the full load of all data is going to take time, in the example above the 8.5 GB dataset takes 48 sec to warm-up in RAM on a mid-grade SSD.
Benchmark insert 200,000,000 Person (class with 7 fields) objects 12 threads:
DefaultPile
- 85 sec @ 2.4 M insert/sec = 14.5 GB memory; Full GC < 10 ms
MMFPile
- 101 sec @ 1.9 M insert/sec = 14.5 GB memory + disk; Full GC < 10ms
- Flush all data to disk on Stop(): 30 sec
- Read all data back to RAM: 50 sec = ~290 MB/sec
Other Improvements
Since our previous post on InfoQ we have made a number of improvements to the NFX.Pile:
Raw Allocator / Layered Design
The Pile implementation is now better layered, allowing us to treat string and byte[] as directly writeable/readable from the large contiguous blocks of RAM. The whole serialization mechanism is bypassed for byte[] completely, making it possible to use Pile as just a raw byte[] allocator.

var ptr = pile.Put("abcdef"); //this will bypass all serializers
                              //and use UTF8Encoding instead
var original = pile.Get(ptr) as string;
Performance Boost
The segment allocation logic has been revised and yields 50%+ better performance during inserts from multiple threads, due to the introduction of a sliding-window optimization that tries to avoid multi-threading contention. Also, strings and byte[] now bypass the serializer completely, yielding 5M+ inserts/sec for most cases (200%+ improvement).
Enumeration
It is now possible to get the contents of the whole pile, as it implements the IEnumerable<PileEntry> interface; the PileEntry struct describes each stored entry:

foreach(var entry in pile)
{
  Console.WriteLine("{0} points to {1} bytes".Args(entry.Pointer, entry.Size));
  var data = pile.Get(entry.Pointer);
  ...
}
Durable Cache
For performance reasons, the default mode for the cache is “Speculative”. In this mode hash code collisions may cause lower priority items to be ejected from the cache even when there is otherwise enough memory.
The cache server can now store data in a “Durable” mode, which works more like a normal dictionary. Because durable mode needs to do rehashing in the bucket, it is 5-10% slower than speculative mode. This is hardly noticeable for most applications, but you’ll need to test to see what is best for your particular situation.
//Specify TableOptions for ALL tables, make tables DURABLE
cache.DefaultTableOptions = new TableOptions("*")
{
  CollisionMode = CollisionMode.Durable
};
In-Place Object Mutation and Pre-allocation
It is now possible to alter objects at the existing PilePointer address. The new API Put(PilePointer...) allows for placing a different payload at the existing location. If the new payload does not fit in the existing block, then Pile creates an internal link to the new location (a la file system link in *nix systems) effectively making the original pointer an alias to the new location. Deleting the original pointer deletes the link and what it points to. The aliases are completely transparent and yield the target payload on read.
You can also pre-allocate more RAM for the future payload by specifying the preallocateBlockSize in the Put() call.
//Implement linked list stored in Pile
public class ListNode
{
  public PilePointer Previous;
  public PilePointer Next;
  public PilePointer Value;
}
...
private IPile m_Pile;        //big memory pile
private PilePointer m_First; //list head
private PilePointer m_Last;  //list tail
...
//Append a person instance to a person linked list stored in a Pile
//returns the new last node
public PilePointer Append(Person person)
{
  var newLast = new ListNode
  {
    Previous = m_Last,
    Next = PilePointer.Invalid,
    Value = m_Pile.Put(person)
  };
  var newLastPtr = m_Pile.Put(newLast);            //add new node to the tail
  var existingLast = (ListNode)m_Pile.Get(m_Last);
  existingLast.Next = newLastPtr;
  m_Pile.Put(m_Last, existingLast);                //in-place edit at the existing ptr m_Last
  m_Last = newLastPtr;
  return m_Last;
}
For more information see our video: .NET Big Memory Object Pile - Use 100s of millions of objects in RAM
Links
- Source Code (GitHub Apache 2.0):
- NFX Documentation
- NFX Pile 1.5 Billion C#/NET Objects on a 24 CPU 150Gb RAM Cloud Instance
- InfoQ Articles Series on Big Memory .NET:
- About Agnicore.
Load From Disk
by
Forest Snyder
Load Persisted MMF?
by
Dan L
Re: Load From Disk
by
Dmitriy Khmaladze
No, you do not need to wait for 500 gigabytes to load as a whole.
What happens is: you Start() the pile and it mounts segments from disk in < 1 sec,
however it does not know the whole statistics as of yet - you can instantly read pointers pointing to those segments, you can instantly delete those pointers, but you can not write into those segments until they get crawled - analyzed by the async thread. This thread may take minutes to load your data. It's ok.
Until it does - the new writes will go towards the end of the MMFPile.
To summarize: you may use MMFPile 1 sec after start; IF you need the full statistics (which most likely you do not for operation) then you wait.
Statistics = total object count, bytes used etc...
Re: Load Persisted MMF?
by
Dmitriy Khmaladze
The answer is this: the pile is not empty - it gets "crawled" async by a separate worker, and the statistics (how many objects, bytes yadayada...) get built as the thread reads the stuff to memory, BUT that does not mean that you can dereference stuff right away. The MMF files are handled by the OS, so if you try to do a scattered read it will work just fine right after the load. See the PileForm, run this guy here to see how it works graphically using WinForms: github.com/aumcode/nfx/tree/master/Source/Testi...
dmitriyk [at] agnicore [com] shoot questions
Re: Load Persisted MMF?
by
Dan L
Re: Load Persisted MMF?
by
Dmitriy Khmaladze
On shutdown you will lose the index, but can keep the MMFPile intact.
What you can do is reconstruct the cache by enumerating through the pile after load, which is going to cause some delay. We have yet to release into open source our full cache server that stores keys in a balanced index in MMFPile using a version-tolerant serializer - the code is used in a proprietary system.
Re: Load From Disk
by
Jonathan Allen
Unless you try to iterate through the entire collection, the OS is going to pick up your data from disk one page at a time until it runs out of RAM.
Now lets say your application crashes and restarts. Since the OS already has the file mapped into memory, there is no delay. You're not "copying" the file into your application's memory. Rather, your application is using the file/file cache as memory.
Memory mapped files are often used for cross-process communication. If two applications map the same file to memory, they can see each other's changes. Again, this works because the file is kept in memory at the OS level. (I wonder how two Pile-based applications would handle this.)
Re: Load From Disk
by
Dmitriy Khmaladze
short answer: In the open-source NFX code, the MMFPile, the MMF mounted into pile are for exclusive use per process - this is purposely designed this way for simplicity and speed.
Besides, the IPC in NFX is done via Glue. There is no practical need to share the memory using Pile for IPC.
The long answer:
Pile is a memory manager, which is a thread safe state machine. As such, it needs to synchronize the access to segment buffers and free slot pool which are not stored in the MMF. MMF only stores the actual data kept in Pile, but not the freelists and other metadata. This is done on purpose as syncing this stuff between processes would have been either prohibitive performance-wise or very complex to implement. Now, we are ONLY talking about the PARTICULAR implementation of the IPile interface as provided by NFX.
Internally we do have a distributed "huge pile" which spans multiple machines, but it is not open as of yet as it is a part of cluster Agni OS.
Re: Durability
by
Dmitriy Khmaladze
Re: Load Persisted MMF?
by
Martin Strimpfl
I'm trying to reconstruct the cache however whenever I try to get the data from the Pile, I get an exception:
'Bad SLIM format header'
Here is the code, the line throwing it is the cache.Pile.Get(entry.Pointer)
var cache = new LocalCache();
cache.Pile = new MMFPile(cache) { DataDirectoryRoot = @"D:\Temp\MMF\" };
cache.Start();
var persons = cache.GetOrCreateTable<int>("Persons");
foreach (var entry in cache.Pile)
{
var data = cache.Pile.Get(entry.Pointer);
var person = data as Person;
if (person != null)
{
persons.PutPointer(person.Id, entry.Pointer);
}
}
Re: Load Persisted MMF?
by
Dmitriy Khmaladze
Add the filtering for entry.Type.
The enumerator returns all internal "guts", so the Where()
should help:
foreach(var entry in cache.Pile.Where( e => e.Type != PileEntry.DataType.Link))
{....}
Re: Load Persisted MMF?
by
Martin Strimpfl
the slim serializer is using the NFX.Serialization.Slim.TypeRegistry class to find out the type for deserialization. To deserialize the data, the SlimSerializer first tries to read the type's id from the memory stream and then uses this id to get the Type from the TypeRegistry. However there is no such Type at that time, so the exception is thrown.
To avoid it, I have to put the person object first to the Pile, so the TypeRegistry registers the Person Type (and if more types were stored previously, I need to do that in the exact order so the TypeRegistry is storing the type with the same id).
Is there a way to register the types before the TypeRegistry is used so I can be certain of the position?
Or am I missing something?
Re: Load Persisted MMF?
by
Dmitriy Khmaladze
you are not missing anything, and did a fantastic job!
This is me missing an improper merge issue which I did not even realize was there.
The MMFPile writes its type registry to a file (near the data files). On start it reads it back. This code was absent on GitHub and Nuget (we use internal company repository and I incorrectly merged older code)
I have just synced the internal repo and GitHub and also released a new NuGet, so this problem is solved.
Thanks for finding the problem!
MMF is a great idea !
by
Gabriel Rabhi
Is there a way to find out if setuptools is installed in python? I am following a tutorial in which setuptools is mentioned as pre-requisites but I don’t know if I have installed it or no.
You can use the Python interpreter to check it:
Enter python to get into the Python interpreter
In python interpreter, run the following code:
>>> import sys
>>> 'setuptools' in sys.modules.keys()
This will return True if setuptools has already been imported in the current session, and False otherwise (note that an installed but not-yet-imported module will also show False).
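A sketch of a check that works even if the module has not been imported yet, using the standard importlib machinery (Python 3.4+; is_installed is just an illustrative helper name):

```python
import importlib.util

def is_installed(name):
    # find_spec locates the module on sys.path without importing it
    return importlib.util.find_spec(name) is not None

print(is_installed("setuptools"))
print(is_installed("no_such_package_xyz"))   # False
```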
This is the mail archive of the libstdc++@gcc.gnu.org mailing list for the libstdc++ project.
Andreas Schwab <schwab@suse.de> writes:

| gcc@integrable-solutions.net writes:
|
|> --- 89,104 ----
|>       __ret = snprintf(__out, __size, __fmt, __v);
|> #else
|>       if (__prec >= 0)
|> !       __ret = std::sprintf(__out, __fmt, __prec, __v);
|>       else
|> !       __ret = std::sprintf(__out, __fmt, __v);
|> #endif
|>
|> #if __GLIBC__ > 2 || (__GLIBC__ == 2 && __GLIBC_MINOR__ > 2)
|>       __gnu_cxx::__uselocale(__old);
|> #else
|> !     std::setlocale(LC_ALL, __sav);
|> !     std::free(__sav);
|> #endif
|>       return __ret;
|>     }
|>
|
| What about snprintf (ifdef GLIBCXX_USE_C99)? Will it need a std:: prefix
| as well?

Good question. Firstly I was surprised to see it there at all. Secondly, snprintf is not declared in std:: -- which is no good because it means that we're counting on a non-<xxx.h> file to put something in the global space, which is clearly wrong. In the end, I decided to apply the small fix and raise the bigger issue later. Now that you ask the question, I'm tempted to reply: What is snprintf doing there without being properly scoped? Sorry, I don't know the answer. :-(

-- Gaby
I keep getting hit by this because my mind is programmed into believing that the edit cursor position will always be within view but... it isnt.
Add a switch allowing the cursor to always remain within the view port on mouse scroll. I.e. once I have scrolled down far enough that the cursor is on the line at the top of the view port, the cursor will remain on that line (or the bottom line if scrolling up).
Are you using ST3? If so you can leverage those command listeners If not, it's a bit more involved (but still doable) as you would have to rebind the movement commands. If you have auto pairing enabled and you insert a closing quote, it does issue a move command, so the cursor will move, this may or may not be what you want. Then again, why would you enter a closing quote/brace/etc if you don't know where the cursor is?
import sublime_plugin
import sublime


class MoveInterceptCommand(sublime_plugin.TextCommand):
    def run(self, edit, by, forward, extend=False):
        move_cursor_to_visible(self.view)
        self.view.run_command("move", {"by": by, "forward": forward, "extend": extend})


class MoveToInterceptCommand(sublime_plugin.TextCommand):
    def run(self, edit, to, extend=False):
        move_cursor_to_visible(self.view)
        self.view.run_command("move_to", {"to": to, "extend": extend})


class MoveInterceptEventListener(sublime_plugin.EventListener):
    def on_text_command(self, view, command_name, args):
        if len(view.sel()) == 1:
            move_by_list = ["characters", "lines", "words", "word_ends", "subwords", "subword_ends", "pages"]
            move_to_list = ["bol", "eol"]
            if command_name == "move":
                if "by" in args and args["by"] in move_by_list:
                    return ("move_intercept", args)
            elif command_name == "move_to":
                if "to" in args and args["to"] in move_to_list:
                    return ("move_to_intercept", args)


def move_cursor_to_visible(view):
    vis_region = view.visible_region()
    cursors = view.sel()
    move_cursor_top = False
    move_cursor_bottom = False
    for cursor in cursors:
        if cursor.b < vis_region.begin():
            move_cursor_top = True
            break
        if cursor.b > vis_region.end():
            move_cursor_bottom = True
            break
    if move_cursor_top:
        cursors.clear()
        cursors.add(sublime.Region(vis_region.begin(), vis_region.begin()))
    elif move_cursor_bottom:
        cursors.clear()
        cursors.add(sublime.Region(vis_region.end(), vis_region.end()))
erm im not quite following you here. what im doing is mouse wheel scrolling and having the cursor not remain within the view port. when i then hit cursor down the view port migrates away from where i specifically located it back to where the cursor is. What i would like to happen would be that when i mouse wheel down that the cursor remain within the view port. if i have mouse wheeled down 5 pages and hit cursor down to position the cursor on the piece of code im LOOKING at... i dont want the view port to suddenly automatically home itself to the cursor - if the cursor was always within the view port it would not have to do so.
Also, im not comfortable with creating some sort of plugin to achieve this as I dont program python or know the ST3 api enough to do it. Im also working 80 hour weeks for the past 6 weeks and will probably continue to do so for the foreseeable future or beyond.
This seems to me like the sort of thing that should be built in as a configurable switch, not some geeky plugin that only a ST3 geek would know how to use
I did create a syntax highlight module for my forth programming but that was all just regular expressions... none of that API stuff or python coding.
Also, the plugin code you pasted seems to be a "hit this magic keypress and the cursor will home itself to the viewport". This would not work for me, i need the cursor to ALWAYS be within the view port. IF this magic keypress would mentally work for me then all i would have to do is remember to MOUSE CLICK after doing a mouse scroll and before I hit any cursor keys.
Problem is, my 30+ years of coding has hard wired my brain into expecting the cursor to never go outside the view. Im too old and wrinkly to recode my thought patterns into doing this
This was more of a temporary work around. Hopefully it will be implemented as a setting but all this does is catch the "move" command (which is called when you press the navigation keys). So the only "magic key press" is the navigation keys (which it sounded like you were doing already?). If the cursor is outside of the view, it moves the cursor to the top/bottom (depending on where the cursor was relative to the view), then executes the move command. Thus, no magic jumping view away from page 7153148520/8246976239. I haven't been coding for 30+ years, but I completely understand the idea of the cursor staying within the view. At the end of the day, it's your choice if you want to try it. Just thought I would throw up a short term, temporary solution
lol what the heck i replied to this hours ago but i guess i musta been asleep at the keyboard and never hit send
anyway, far be it for me to look a gift horse in the mouth! If this does what im looking for ill give it a shot but im not sure where to put it or how to make it active
%*$&^!TI! press SEND this time lol (ty
From the menu select "Tools -> New Plugin". Copy and paste code contents into the file. Then save the file in your user directory (I believe it should default there, but if not I think you can find it ) as "view_navigation.py". That's it! You shouldn't have to restart the editor, but I've seen some odd loading things with plugins in general so you might. Anyways, place the cursor, scroll somewhere, then press a navigation key. See if it does what you expect (or at least something more like what you want). Just as a reminder, this will only work in ST3. Let me know if you run into issues.
!this solves it tyvm!!!
lol ok this does not quite work. it works for the case when i have mouse wheel scrolled off page but when i am holding shift and using the cursor keys to mark a block of code the cursor jumps back to the top of the page when it reaches the bottom
That'l learn me for using mouse AND keyboard methods for editing stuff
Odd it works for me, unless I'm misunderstanding what you are doing. I'll assume that's the case . I did just find a bug that will break the selections if the beginning if off the view, but I don't think that is what you are seeing. I also know that this doesn't handle multiple cursors elegantly. I don't think either of those are your particular case, so if you could clarify a bit more, I'll try to fix it
Nevermind, I neglected to go all the way to the bottom (having the entire view highlighted). I've updated the code so it (should) work better. Didn't anyone tell you you shouldn't highlight so much code!
lol that fixed it from what i can tell : )
well i discovered the flaw in this module today and i dont know if you can fix it or not but if there are multiple cursors and some of them are off screen moving the cursor removes all but one
this really needs a fix from the management i think
Hmm, I was wondering when that would come up. I hadn't decided the best way to go about managing multiple cursors. My first thought was to keep the cursor positions the same as you scroll. Of course, then I would have to decide how I want to manage multiple cursors that are more than 1 "view region" away from each other. In addition, where would I put cursors that had a column position beyond the new line. For this, I'd probably just put it at the end of that line. Now if only the API would just give the last position in that row rather than moving to the next row. Oh well though, I can do some checks for that. I'll try to post an update for that soon.
do you have a way to check if there are multiple cursors and if there are not do anything special? Im not really sure how multiple cursors should be handled but this might be the best plan. I.E. only move the cursor into the view if there are only one of them. otherwise let the editor do whatever it would do without this plugin. I dont even know if the api allows this either.
That's the easiest thing to do. I've updated the plugin so it should only move the cursor into the view if there is a single cursor. Otherwise, it should retain the default editor behavior.
tyvm
https://forum.sublimetext.com/t/edit-cursor-off-view/9728/15
OSEE/ATS/Users Guide
From Eclipsepedia
to <a href="./reference/report_a_bug.html">Report a Bug</a>.
ATS Configuration
denotes a team configured to do work in ATS
Purpose
Show and edit the workflows configured for use in ATS including Team Workflows, Tasks and Reviews.
How to do it
Double-click open any Action or Team Workflow from ATS World, Search results or ATS Results. The editor will be opened allowing view and edit of workflow.
Purpose
Shows the states of the workflow, allows entry in the current state and provides services to perform actions, see metrics and research information about the workflow.
How to do it
Default tab shown when any ATS object is opened in the ATS Editor.
Current State
Shown in the top status bar and in the main window, the current state is the state of the workflow state machine that is running for this ATS object. These workflows can be configured with 3 or 30 states depending on the needs of the program/team that is using them. <a href="./reference/workflow_editor/current_state.html">More</a>
Other States
Services
ATS Workflow Editor - Task Tab
<img src="ATS_files/ats_workflow_editor_task_tab.jpg">
Purpose
Shows tasks associated with states of workflow. Allows quick editing of task information and allows a quick jump (double-click) to open task in ATS Workflow Editor.
How to do it
Select task tab after ATS object is opened in the ATS Editor.
Open Task
Double-click on any task to open in ATS Editor.
Right-click edit
Selecting one or more tasks and right-click produces a menu with selections for editing multiple tasks at a single time.
Alt-Left-Click edit
A quick way to edit a single field in a task is by holding the Alt key down and selecting
the cell to edit. This pops up an editor associated with the type of cell selected.
Spell Checking
Purpose
Enable data entered in OSEE to be spell checked.
How to do it
As data is entered into OSEE spell-checked fields, a blue line will be displayed if the word is not recognized. Only lower-case words or words with only first character uppercase will be spell checked. Acronyms, words with special characters, numbers and single letter words will be ignored.
Main Dictionary
OSEE has a main dictionary included in its release. See below for its source, copyrights and credits.
Additional Released Dictionaries
Additional dictionaries can be added to OSEE via extension points. These can only be modified by hand and are thus included in the normal release cycle.
Run-time Global Dictionary
Each OSEE user is able to add words to a Global dictionary stored in the database by right-clicking on a word underlined in blue and selecting to save global. These words are stored in the "Global Preferences" artifact and will then be shown as a valid word in all users' spell checking.
Run-time Personal Dictionary
Each OSEE user is able to add words to their Personal dictionary stored in the database by right-clicking on the word
underlined in blue and selecting to save personal. These words are stored in the user's "User" artifact
and will then be shown as a valid word only for that user.
Purpose
Shows a graphical representation of the currently open Action or Team Workflow.
How to do it
Double-click open any Action or Team Workflow. The Action View will show the parent-child relationship between an Action and its child Team Workflows. A cyan outline shows the currently open editor.
ToolTip
Hover over any object to determine information about current state, assignees and work to be done.
Double-Click / Right-Click
Double-Click to open any object in the ATS Editor or right click for more options.
ATS World View
Peer To Peer Review Workflow
Purpose
The Peer To Peer Review is a lightweight review type that enables interactive one-on-one reviews where two people sit at a single computer and review, disposition and resolve the issues as they are found. This review type does not require (but does allow) defects to be logged. This review type can be created as a stand-alone review or attached to any workflow. When attached to a workflow, it is related to a state and can be set as a "blocking" review that will keep the workflow from continuing until the review is completed. <img src="ATS_files/peerToPeerReviewEditor.JPG">
State Machine
<img src="ATS_files/peerToPeerReviewStateMachine.JPG">
How to do it
Stand-Alone Peer To Peer Review - From ATS Navigator, filter on "peer" and select "New Peer To Peer Review". Enter
required fields and select transition to start the review.
Workflow Related Peer To Peer Review - From any ATS workflow editor, select "Create a Peer To Peer Review" in the left column of the workflow editor. This will create the review and attach it to the current state. Enter required fields and select transition to start the review.
Prepare State
This state allows the user to create the peer to peer review. Enter the required information and transition to Review to start the review. All review participants will be automatically assigned to the review state upon transition.
Review State
This state allows the users to review the materials, log any defects
and allows the author to resolve and close any defects.
Decision Review Workflow
Purpose
The Decision Review is a simple review that allows one or multiple users to review something and answer a question. This review can be created, and thus attached, to any reviewable state in ATS. In addition, it can be created automatically to perform simple "validation" type reviews during a workflow.
State Machine
<img src="ATS_files/decisionReview.JPG">
How to do it
From any active state, select "Create a Decision Review" in the left column of the workflow editor.
This will create the review and attach it to the current state. Then, proceed to "Prepare State"
to enter the necessary information required for this review.
Prepare State
This state allows the user to create the decision review. Enter the required information and transition to
Decision to start the review. All transitioned-to assignees will be required to perform the review.
Decision State
This state allows the user to review the description or materials and choose their decision.
Followup State
This state allows for followup action to be taken based on the decision.
Configure ATS for Change Tracking
Purpose
ATS is used to track any type of change throughout the lifecycle of a project. Below are the steps to configure ATS for tracking something new.
How to do it
- Review <a href="./reference/overview/ats_overview.html">ATS Overview</a> to understand ATS Concepts, Terms and Architecture. Pay special attention to ATS Terms
- Determine what Actionable Items (AIs) need to be available to the user to select from. This can be anything from a single AI for tracking something like a tool or even an activity that needs to be done to a hierarchical decomposition of an entire software product or engineering program.
- Considerations:
- Item should be in the context of what the user would recognize. eg: OSEE ATS World View versus something unknown to the user such as AtsWorldView.java.
- Decompose AI into children AI when it is desired to sort/filter/report by that decomposition.
- Actionable Item attributes to be configured:
- Name: Unique name that the user would identify with.
- Active: yes (converted to "no" when AI is no longer actionable)
- Actionable Item relations to be configured:
- TeamActionableItem: relate to Team Definition that is responsible for performing the tasks associated with this AI. NOTE: If this relation is not set, ATS will walk up the Default Hierarchy to find the first AI with this relation.
- Determine the teams that are going to perform the tasks that are associated with the AIs selected by the user.
- Considerations:
- Use separate teams if certain changes are to be managed by different leads.
- Use separate teams if one team's completion and releasing is independent of another's.
- Use separate teams if team members are separate.
- Use separate teams if different workflows are required for one set of AIs than another.
- Team attributes to be configured:
- Name: Unique team name that is distinguishable from other teams in a list.
- Description: Full description of the team and its scope.
- Active: yes (converted to "no" when the team is no longer active)
- Team Uses Versions: yes if team workflows are going to use the build management and release capabilities of ATS.
- Full Name: Extended name for the team; expansion of the acronym if applicable.
- Team relations to be configured:
- TeamActionableItem: relation to all AIs that this team is responsible for.
- Work Item.Child: WorkFlowDefinition artifact configures the state machine that this team works under. NOTE: If this relation is not set, ATS will walk up the Default Hierarchy to find the first AI with this relation.
- TeamLead: User(s) that are leading this team. These users will be assigned to the Endorse state of the Team Workflow upon creation of an Action by a user. Providing multiple leads reduces bottlenecks. First lead to handle the Team Workflow wins.
- TeamMember: User(s) that are members of the team. These users will be shown first as preferred assignees and have the ability to privileged edit a Team Workflow for the team they belong to.
- Choose existing WorkFlowDefinition or create new WorkFlowDefinition to be used by the team and relate it to Team Definition (as above). This can be done through File->New->Workflow Configuration. Enter a namespace and a default workflow will be created and can be edited.
- Create version artifacts necessary (if using versions) and relate them to Team Definition (as above)
- If branching of artifacts is going to be used (see below), configure versions with their appropriate parent branch id.
- Determine if Branching within one of the states in the workflow is desired/required and configure as appropriate.
- Considerations:
- Branching is necessary if objects to change are stored in OSEE as artifacts. If so, OSEE ATS can create a working branch off the parent branch, allow user to modify artifacts and then commit these changes when complete, reviewed and authorized (as necessary). If objects are stored outside OSEE (eg. code files checked into SVN), this option is not necessary.
- Configure ATS workflow for branching:
- Create AtsStateItem extension specifying which state the branching will occur. This is normally in the Implement state of a workflow.
- Create root branch and import documents that will be managed through define and tracked through ATS.
- Set all Version artifacts "Parent Branch Id" attribute to the branch id of the root branch (or child branches, if using multi-branching)
- If only a single branch is to be used OR versioning is NOT configured to be used, the "Parent Branch Id" should be s
Configure Team Definition
Purpose
The Team Definition artifact specifies leads and members that are assigned to work on related Actionable Items.
How to do it
- Team Definitions should match company organizational structure.
- Attributes
- Name:[uniquely recognizable team name]
- ats.Full Name:[optional full name]
- ats.Description:[desc]
- ats.Active:[yes]
- ats.Team Uses Version:[yes if want to use release/build planning]
- Relations
- DefaultHeirarchy: Relate to parent team or top level "Teams"
- TeamDefinitionToVersion: Relate to current and future VersionArtifacts
- TeamLead: Relate to one or more team leads. These individuals will have privileged edit and perform the Endorse state by default.
- TeamMember: Relate to one or more team members. These individuals will have the ability to privileged edit Workflows created by themselves against the team they belong to.
- Work Item.Child: Relate to a single "Work Flow Definition" artifact that defines the workflow that will be used for this team.
Configure Actionable Items (AI)
Purpose
Actionable Items provide the end user with a selection of things impacted by the Action. They are related to the <a href="./reference/configure/TeamDefinition.html">Team</a> that is responsible for performing the work.
How to do it
- AIs should not be deleted. Instead, use the ats.Active attribute to deactivate the AI. If an AI must be deleted, search for all "ats.Actionable Item" attributes that have the value of the AI's guid. These must be changed to another AI before deletion.
- Actionable Item tree can be created to the level at which actions are to be written. Usually a component decomposition. In the case of UIs, create one for each view or window.
- Attributes
- Name:[uniquely recognizable team name]
- ats.Active:[yes]
- Relations
- DefaultHeirarchy: Relate to parent team or top level "Actionable Items" artifact
- TeamActionableItem: Relate to team responsible for performing tasks. Team can be related to parent and all children will have team by default.
Workflow Configuration
Purpose
To create a new workflow configuration that ATS uses to move an Action through its specific workflow.
Ats Workflow Configuration artifacts.
ATS uses four main artifacts to configure a workflow for use by a Team.
- Configurations can also be created through Java code. An example of this can be seen by looking at the org.eclipse.osee.ats.config.demo plugin. This plugin, and the DemoDatabaseConfig.java class, shows how to programmatically generate workflows, pages, rules and widgets to configure ATS. This configuration will be generated during a database initialization.
ATS Workflow Configuration Editor
<img src="reference/configure/configEditor.JPG" border="1">
- <a href="./reference/configure/TeamDefinition.html">Configure the Team Definition</a> to use the new workflow
- Create a new Action and test the created workflow
Workflow Configuration - Validation
Validation of a workflow is provided by selecting the check icon and selecting a state,
transition, or the entire workflow (selecting the white background). This will pop up
whatever error occurs, or a "Validation Success" message if all is ok.
Configure ATS for Help
Purpose
To configure ATS workflows to use the integrated help system. ATS help uses a combination of widget tooltips, static help pages and dynamic help content configured through extended plugins.
How to do it
- Workflow Page Help
- Workflow Widget Help
- Declared tooltip is shown as tooltip when hover over label
- Double-Click label pops open html dialog if help contextId and pluginId are set
- Double-Click label pops open tooltip
- Top down order of obtaining help content
- Setting tooltip in IStateItem interface
- Work Widget Definitions in Work Data attribute value of XWidget=...tooltip="put help here"
- ATSAttributes.java declarations
Select <img src="ATS_files/refresh.gif"> to refresh the contents.
Select <img src="ATS_files/customize_002.gif"> to <a href="">Customize Table</a>.
Select <img src="ATS_files/bug.gif"> to <a href="">Report a Bug</a>.
http://wiki.eclipse.org/index.php?title=OSEE/ATS/Users_Guide&oldid=158063
The question is answered, right answer was accepted
So I'm trying to scale my background sprite(s) to fill the screen. There are two: a regular background image, and a sprite that lays over it with alpha transparency to fake shadows (since lighting doesn't apply to 2D sprites). The issue is I can't seem to find that magical float to multiply my sprite width/height by to get the desired size (maybe I just suck at math). My first attempt was:
percentW = (float)scrW/(float)imgW;
percentH = (float)scrH/(float)imgH;
which gives me 0.3125/0.3125
but in playing with the inspector variables I found that for this particular resolution I need 0.8/0.8, so I looked online and found this code:
float height = 2.0f * Mathf.Tan(0.5f * Camera.main.fieldOfView * Mathf.Deg2Rad);
float width=height * scrW/scrH;
which gives me 0.649 for height; closer, but still not perfect. I can hard code a close approximation by multiplying by 1.3 to get .84, but I want .8 almost exactly, and it's driving me nuts.
This is going to be an Android title, meaning tons and tons of varying resolutions, so of course resolution independence is of the utmost importance. So has anyone figured out how to do this yet?
Answer by robertbu
·
Jan 19, 2014 at 08:49 PM
I've not spent a lot of time with the 2D stuff yet, but it seems that you have to use the world size of the sprite to make this calculation. In particular, since you can specify Pixels To Units when you import a sprite texture, the world mapping needs to use the sprite.bounds. Here's some sample code. There may be a simpler solution. It assumes an Orthographic camera centered on the image, parallel to the camera plane:
function ResizeSpriteToScreen() {
var sr = GetComponent(SpriteRenderer);
if (sr == null) return;
transform.localScale = Vector3(1,1,1);
var width = sr.sprite.bounds.size.x;
var height = sr.sprite.bounds.size.y;
var worldScreenHeight = Camera.main.orthographicSize * 2.0;
var worldScreenWidth = worldScreenHeight / Screen.height * Screen.width;
transform.localScale.x = worldScreenWidth / width;
transform.localScale.y = worldScreenHeight / height;
}
P.S. I see you are trying to use fieldOfView. If you are using a perspective camera, this code will need to be modified.
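Expanding on that P.S.: for a perspective camera, the `orthographicSize` lines get replaced by the frustum height at the sprite's distance from the camera. With d the distance from the camera to the sprite's plane and fov the camera's vertical `fieldOfView` in degrees:

$$\text{worldScreenHeight} = 2\,d\,\tan\!\left(\frac{\text{fov}}{2}\cdot\frac{\pi}{180}\right), \qquad \text{worldScreenWidth} = \text{worldScreenHeight}\cdot\frac{\text{Screen.width}}{\text{Screen.height}}$$

Note the snippet in the question computed 2·tan(fov/2·Deg2Rad) but never multiplied by the distance d, which is why its result came out too small.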
This helped me MASSIVELY in my project, cheers :)
HI Robert, When I use this code, monodevelop says
Error: Assets/C#Scripts/MainMenu1.cs(45,35): error CS1061: Type `object' does not contain a definition for `sprite' and no extension method `sprite' of type `object' could be found (are you missing a using directive or an assembly reference?)
@rahulreddyv - The code above compiled fine for me. I just tested it again to make sure. What changes did you make from what is here?
I have copied it into a C# script, and it spat out these errors. I will rework it for C#. Thank you for the code. I think this transforms all the sprites, doesn't it?
The above code scales a sprite to fill the screen. And yes the code will need a bit of changing for C#.
Answer by Shubhra
·
Jun 14, 2014 at 10:23 AM
Many Many thanks to robert. It helped me a lot. This is the C# implementation of robert's code:
void Resize()
{
SpriteRenderer sr=GetComponent<SpriteRenderer>();
if(sr==null) return;
transform.localScale=new Vector3(1,1,1);
float width=sr.sprite.bounds.size.x;
float height=sr.sprite.bounds.size.y;
float worldScreenHeight=Camera.main.orthographicSize*2f;
float worldScreenWidth=worldScreenHeight/Screen.height*Screen.width;
Vector3 xWidth = transform.localScale;
xWidth.x=worldScreenWidth / width;
transform.localScale=xWidth;
//transform.localScale.x = worldScreenWidth / width;
Vector3 yHeight = transform.localScale;
yHeight.y=worldScreenHeight / height;
transform.localScale=yHeight;
//transform.localScale.y = worldScreenHeight / height;
}
A little dated post, but you can do that simpler with C#
SpriteRenderer sr = GetComponent<SpriteRenderer>();
float worldScreenHeight = Camera.main.orthographicSize * 2;
float worldScreenWidth = worldScreenHeight / Screen.height * Screen.width;
transform.localScale = new Vector3(
worldScreenWidth / sr.sprite.bounds.size.x,
worldScreenHeight / sr.sprite.bounds.size.y, 1);
Also, you may not want to do "if(sr==null) return;", because debugger would just let it run and you wouldn't see where the error is.
Use "public SpriteRenderer sr;" at the beginning of the script instead of "SpriteRenderer sr = GetComponent<SpriteRenderer>();" so that you can use it with any gameobject; however, this requires more work if the sprite is created by code...
Dated addition. Also, probably a little redundant and not the cleanest solution. However, I've modified the above to allow for keeping an aspect ratio (so if you try and put a 640x480 sprite on a 500x500 screen - it will create a 667x500 sprite).
In this case, I called it AutoStretchSprite.cs and attached it to a SpriteRender.
using UnityEngine;
using System.Collections;
[RequireComponent(typeof(SpriteRenderer))]
public class AutoStretchSprite : MonoBehaviour {
/// <summary> Do you want the sprite to maintain the aspect ratio? </summary>
public bool KeepAspectRatio = true;
/// <summary> Do you want it to continually check the screen size and update? </summary>
public bool ExecuteOnUpdate = true;
void Start () {
Resize(KeepAspectRatio);
}
void FixedUpdate () {
if (ExecuteOnUpdate)
Resize(KeepAspectRatio);
}
/// <summary>
/// Resize the attached sprite according to the camera view
/// </summary>
/// <param name="keepAspect">bool : if true, the image aspect ratio will be retained</param>
void Resize(bool keepAspect = false)
{
SpriteRenderer sr = GetComponent<SpriteRenderer>();
transform.localScale = new Vector3(1, 1, 1);
// example of a 640x480 sprite
float width = sr.sprite.bounds.size.x; // 4.80f
float height = sr.sprite.bounds.size.y; // 6.40f
// and a 2D camera at 0,0,-10
float worldScreenHeight = Camera.main.orthographicSize * 2f; // 10f
float worldScreenWidth = worldScreenHeight / Screen.height * Screen.width; // 10f
Vector3 imgScale = new Vector3(1f, 1f, 1f);
// do we scale according to the image, or do we stretch it?
if (keepAspect)
{
Vector2 ratio = new Vector2(width / height, height / width);
if ((worldScreenWidth / width) > (worldScreenHeight / height))
{
// wider than tall
imgScale.x = worldScreenWidth / width;
imgScale.y = imgScale.x * ratio.y;
}
else
{
// taller than wide
imgScale.y = worldScreenHeight / height;
imgScale.x = imgScale.y * ratio.x;
}
}
else
{
imgScale.x = worldScreenWidth / width;
imgScale.y = worldScreenHeight / height;
}
// apply change
transform.localScale = imgScale;
}
}
https://answers.unity.com/questions/620699/scaling-my-background-sprite-to-fill-screen-2d-1.html?sort=oldest
Microsoft has changed many things in its recent .NET Beta 2 release. Most of the code compiled in Beta 1 may not compile in Beta 2. So if you are working on Beta 1, you might want to upgrade it to Beta 2. .NET Beta 2 SDK and VS.NET Beta 2 is available for download on Microsoft's MSDN site for MSDN subscribers.
ADO.NET Namespaces
In my first article, I'll discuss some of the ADO.NET changes. If you remember Beta 1, there were two common namespaces - System.Data.ADO and System.Data.SQL. In Beta 2, these are replaced by System.Data.OleDb and System.Data.SqlClient, respectively.
Besides the namespace name changes, there are other changes such as Data Components. Most of the data components remain the same except DataSetCommand. In Beta 2, the DataSetCommand component is replaced with DataAdapters. A DataAdapter sits between a DataSet and a database and fills data from the data source into the DataSet. Our following articles and sample code will show you how to work with DataAdapters.
See my tutorial on DataAdapters Working with OleDb Data Adapters for how to write database applications using DataAdapters.
DataSet and DataView components remain same in Beta 2. So that's all good since most of the programming revolves around DataSet.
One more component changed in Beta 2 is DataSetView. DataSetView is now called DataViewManager. I didn't look into the details of DataViewManager, but I'd guess there are no changes in that class beyond the name.
Connection and DataAdapters
One big change in Beta 2 is how you work with DataConnections and DataAdapters. If you create a DataAdapter application (see Working with OleDb Data Adapters), there is no direct connection between a DataAdapter and a Connection object. In Beta 1, you could connect a DataSetCommand directly to a Connection object.
In Beta 2, you use command objects to connect a Connection object with a DataAdapter. There are four command objects, one each for the INSERT, DELETE, UPDATE, and SELECT SQL queries.
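As a rough sketch of that pattern - the connection string and table name below are placeholders, not from the article - wiring a SelectCommand between the connection and the adapter in Beta 2 looks like this:

```csharp
using System.Data;
using System.Data.OleDb;

public class DataAdapterSketch
{
    public static DataSet LoadCustomers()
    {
        // Hypothetical connection string and table, for illustration only.
        OleDbConnection conn = new OleDbConnection("Provider=...;Data Source=...");

        // The adapter no longer talks to the connection directly; it goes
        // through command objects, one per SQL verb. Only SelectCommand is
        // needed to read data into a DataSet.
        OleDbDataAdapter da = new OleDbDataAdapter();
        da.SelectCommand = new OleDbCommand("SELECT * FROM Customers", conn);

        DataSet ds = new DataSet();
        da.Fill(ds, "Customers"); // Fill opens and closes the connection as needed
        return ds;
    }
}
```

The InsertCommand, UpdateCommand and DeleteCommand properties are wired up the same way when you need the adapter to push changes back to the database.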
There are more changes, but I think that's enough for today. It's late and I'd better go get some sleep, otherwise I won't be able to work tomorrow ;). Maybe Mike or I will write more articles in more depth some other day. There are many other areas to be discussed. Tomorrow, I'll try to cover a few XML.NET areas. If you think you found something which is different in Beta 2 than Beta 1, just drop me a mail at mcb@mindcracker.com and help your fellow developers.
Happy .Net.
http://www.c-sharpcorner.com/UploadFile/mahesh/dotNETBeta211302005043635AM/dotNETBeta2.aspx
ilasm [options] filename [[options]filename...]
Parameters
Note
All options for Ilasm.exe are case-insensitive and recognized by the first three letters. For example, /lis is equivalent to /listing and /res:myresfile.res is equivalent to /resource:myresfile.res. Options that specify arguments accept either a colon (:) or an equal sign (=) as the separator between the option and the argument. For example, /output:file.ext is equivalent to /output=file.ext.
Remarks
Note
Compilation might fail if the last line of code in the .il source file does not have either trailing white space or an end-of-line character.
You can use Ilasm.exe in conjunction with its companion tool, Ildasm.exe. Ildasm.exe takes a PE file that contains intermediate language (IL) code and creates a text file suitable as input to Ilasm.exe.
Note
Currently, you cannot use this technique with PE files that contain embedded native code (for example, PE files produced by Visual C++).
using System; public class Hello { public static void Main(String[] args) { Console.WriteLine("Hello World!"); } }
The following IL code example corresponds to the previous C# code example. You can compile this code into an assembly using the IL Assembler tool.
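The IL listing itself appears to have been dropped during extraction; a minimal equivalent that Ilasm.exe will assemble into a working "Hello World!" executable is:

```
.assembly extern mscorlib {}
.assembly Hello {}

.method public static void Main() cil managed
{
    .entrypoint
    .maxstack 1
    ldstr "Hello World!"
    call void [mscorlib]System.Console::WriteLine(string)
    ret
}
```

Saving this as hello.il and running `ilasm hello.il` produces hello.exe; running `ildasm hello.exe` round-trips it back to IL text.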
See also
https://docs.microsoft.com/en-us/dotnet/framework/tools/ilasm-exe-il-assembler?view=netframework-4.7.2
Python library to manipulate ESC/POS Printers
Project Description
Python library to manipulate ESC/POS Printers.
1. Dependencies
In order to start getting access to your printer, you must ensure you have previously installed the following python modules:
- pyusb (python-usb)
- Pillow
2. Description
Python ESC/POS is a library which lets the user have access to all those printers handled by ESC/POS commands, as defined by Epson, from a Python application.
The standard usage is to send raw text to the printer, but it also helps the user to enhance the experience with those printers by facilitating bar code printing in many different standards, as well as manipulating images so they can be printed as a brand logo or for any other usage images might have.
Text can be aligned/justified and fonts can be changed by size, type and weight.
Also, this module handles some hardware functionalities like, cut paper, carrier return, printer reset and others concerned to the carriage alignment.
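Under the hood, these features are implemented by sending raw ESC/POS control sequences to the device. The byte values below come from Epson's ESC/POS command reference and are independent of this library (which may wrap them slightly differently):

```python
# A few raw ESC/POS command sequences that libraries like this one wrap.
# Byte values are from Epson's ESC/POS command reference.

ESC = b'\x1b'
GS = b'\x1d'

INIT = ESC + b'@'            # ESC @   - initialize printer
BOLD_ON = ESC + b'E\x01'     # ESC E n - emphasized (bold) mode on
BOLD_OFF = ESC + b'E\x00'    # ESC E n - emphasized (bold) mode off
CENTER = ESC + b'a\x01'      # ESC a n - justification: 0 left, 1 center, 2 right
PARTIAL_CUT = GS + b'V\x01'  # GS V m  - cut the paper

def simple_receipt(text):
    """Build a minimal raw byte stream: init, center, bold text, cut."""
    return (INIT + CENTER + BOLD_ON + text.encode('ascii')
            + BOLD_OFF + b'\n' + PARTIAL_CUT)
```

Writing such a byte stream to the printer's USB endpoint (or a network socket) is essentially what Escpos.text(), Escpos.set() and Escpos.cut() do for you.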
3. Define your printer
Before you create your Python ESC/POS printer instance, you must check your system for the printer parameters. This is done with the 'lsusb' command.
First run the command to look for the "Vendor ID" and "Product ID", then write down the values; they are displayed just before the name of the device with the following format:
xxxx:xxxx
Example:
Bus 002 Device 001: ID 1a2b:1a2b Device name
Write down the values in question, then issue the following commands so you can get the "Interface" number and "End Point":
lsusb -vvv -d xxxx:xxxx | grep iInterface lsusb -vvv -d xxxx:xxxx | grep bEndpointAddress | grep OUT
The first command yields the "Interface" number, which you will need, and the second yields the "Output Endpoint" address.
By default the “Interface” number is “0” and the “Output Endpoint” address is “0x82”, if you have other values then you can define with your instance.
4. Define your instance
The following example shows how to initialize the Epson TM-TI88IV
NOTE: Always finish the sequence with Epson.cut(), otherwise you will end up with weird chars being printed.
from escpos import *

""" Seiko Epson Corp. Receipt Printer M129 Definitions (EPSON TM-T88IV) """
Epson = escpos.Escpos(0x04b8, 0x0202, 0)
Epson.text("Hello World")
Epson.image("logo.gif")
Epson.barcode('1324354657687', 'EAN13', 64, 2, '', '')
Epson.cut()
or use with statement:
with EscposIO(printer.Network('192.168.1.87', port=9100)) as p:
    p.set(font='a', codepage='cp1251', size='normal', align='center', bold=True)
    p.printer.set(align='center')
    p.printer.image('logo.gif')
    p.writelines('Big line\n', font='b')
    p.writelines(u'Привет', color=2)
    p.writelines(u'BIG TEXT', size='2x')
# After exit of the with block, the printer will cut the paper
5. API
- Escpos() - main class
- Escpos.image(path_img) - Open image file
- Escpos.qr(text, *args, **kwargs) - Print QR Code for the provided string
- Escpos.barcode(code, bc, width, height, pos, font) - Print Barcode
- Escpos.text(text) - Print any text
- Escpos.set(codepage=None, **kwargs) - kwargs should be:
- bold: set bold font
- underline: underline text
- size: Text size
- font: Font type
- align: Text position
- inverted: White on black text
- color: Text color
- Escpos.cut() - Cut the paper
- Escpos.cashdraw(pin) - Send open cashdraw signal to printer pin.
- Escpos.control() and Escpos.hw() - Use these when you want to perform other operations.
- EscposIO(printer, autocut=True, autoclose=True) - class for use with the 'with' statement. When autocut=False the printer does not cut the paper after exiting the 'with' block.
- EscposIO.set(**kwargs) - set the params in printing stream
- bold: set bold font
- underline: underline text
- size: Text size
- font: Font type
- align: Text position
- inverted: White on black text
- color: Text color
- EscposIO.writelines(text, **params) - Accepts params like "set" and applies them to these lines only. You should use set() for setting common params.
6. Links
Please visit project homepage at:
- Manuel F Martinez <manpaz@bashlinux.com>
- Dmitry Orlov <me@mosquito.su>
https://pypi.org/project/python-escpos/1.0.9/
--------------------------------------------------------------------------------
-- |
-- Module      : XMonad.Actions.CycleWindows
-- Copyright   : (c) Wirt Wolff <wirtwolff@gmail.com>
-- License     : BSD3-style (see LICENSE)
--
-- Maintainer  : Wirt Wolff <wirtwolff@gmail.com>
-- Stability   : unstable
-- Portability : unportable
--
--.
-----------------------------------------------------------------------------
module XMonad.Actions.CycleWindows (
        -- * Usage
        -- $usage

        -- * Cycling nearby or nth window into current frame
        -- $cycle
        cycleRecentWindows,
        cycleStacks',
        -- * Cycling half the stack to get rid of a boring window
        -- $opposite
        rotOpposite', rotOpposite,
        -- * Cycling windows through the current frame
        -- $focused
        rotFocused', rotFocusedUp, rotFocusedDown,
        shiftToFocus',
        -- * Cycling windows through other frames
        -- $unfocused
        rotUnfocused', rotUnfocusedUp, rotUnfocusedDown,
        -- * Updating the mouse pointer
        -- $pointer

        -- * Generic list rotations
        -- $generic
        rotUp, rotDown
) where

import XMonad
import qualified XMonad.StackSet as W
import XMonad.Actions.RotSlaves

import Control.Arrow (second)

-- #Editing_key_bindings".

{- -}

-- $cycle
--.

cycleRecentWindows :: [KeySym]
                      -- ^ A list of modifier keys used when invoking this action.
                      --   As soon as one of them is released, the final switch is made.
                   -> KeySym
                      -- ^ Key used to shift windows from below the current choice into the current frame.
                   -> KeySym
                      -- ^ Key used to shift windows from above the current choice into the current frame.
                      --   If it's the same as the first key, it is effectively ignored.
                   -> X ()
cycleRecentWindows = cycleStacks' stacks where
    stacks s = map (shiftToFocus' `flip` s) (wins s)
    wins (W.Stack t l r) = t : r ++ reverse l

-- |.
cycleStacks' :: (W.Stack Window -> [W.Stack Window])
                      -- ^ A function to a finite list of permutations of a given stack.
             -> [KeySym]
                      -- ^ A list of modifier keys used to invoke 'cycleStacks''.
                      --   As soon as any is released, we're no longer cycling on the [Stack Window]
             -> KeySym -- ^ Key used to select a \"next\" stack.
             -> KeySym -- ^ Key used to select a \"previous\" stack.
             -> X ()
cycleStacks' filteredPerms mods keyNext keyPrev = do
    XConf {theRoot = root, display = d} <- ask
    stacks <- gets $ maybe [] filteredPerms
                   . W.stack . W.workspace . W.current . windowset

    let evt = allocaXEvent $ \p -> do
                  maskEvent d (keyPressMask .|. keyReleaseMask) p
                  KeyEvent {ev_event_type = t, ev_keycode = c} <- getEvent p
                  s <- keycodeToKeysym d c 0
                  return (t, s)

        choose n (t, s)
            | t == keyPress   && s == keyNext          = io evt >>= choose (n+1)
            | t == keyPress   && s == keyPrev          = io evt >>= choose (n-1)
            | t == keyPress   && s `elem` [xK_0..xK_9] = io evt >>= choose (numKeyToN s)
            | t == keyRelease && s `elem` mods         = return ()
            | otherwise                                = doStack n >> io evt >>= choose n

        doStack n = windows . W.modify' . const $ stacks `cycref` n

    io $ grabKeyboard d root False grabModeAsync grabModeAsync currentTime
    io evt >>= choose 1
    io $ ungrabKeyboard d currentTime
  where cycref l i = l !! (i `mod` length l) -- modify' ensures l is never [], but must also be finite
        numKeyToN = subtract 48 . read . show

-- | Given a stack element and a stack, shift or insert the element (window)
-- at the currently focused position.
shiftToFocus' :: (Eq a, Show a, Read a) => a -> W.Stack a -> W.Stack a
shiftToFocus' w s@(W.Stack _ ls _) = W.Stack w (reverse revls') rs'
  where (revls', rs') = splitAt (length ls) . filter (/= w) $ W.integrate s

-- $opposite
--

rotOpposite :: X()
rotOpposite = windows $ W.modify' rotOpposite'

-- | The opposite rotation on a Stack.
rotOpposite' :: W.Stack a -> W.Stack a
rotOpposite' (W.Stack t l r) = W.Stack t' l' r'
  where rrvl = r ++ reverse l
        part = (length rrvl + 1) `div` 2
        (l',t':r') = second reverse . splitAt (length l)
                        $ reverse (take part rrvl ++ t : drop part rrvl)

-- $focused
--.

rotFocusedUp :: X ()
rotFocusedUp = windows . W.modify' $ rotFocused' rotUp

rotFocusedDown :: X ()
rotFocusedDown = windows . W.modify' $ rotFocused' rotDown

-- | The focused rotation on a stack.
rotFocused' :: ([a] -> [a]) -> W.Stack a -> W.Stack a
rotFocused' _ s@(W.Stack _ [] []) = s
rotFocused' f (W.Stack t [] (r:rs)) = W.Stack t' [] (r:rs')   -- Master has focus
    where (t':rs') = f (t:rs)
rotFocused' f s@(W.Stack _ _ _) = rotSlaves' f s              -- otherwise

-- $unfocused
-- Rotate windows through the unfocused frames. This is similar to
-- @rotSlaves@, from "XMonad.Actions.RotSlaves", but excludes the current
-- frame rather than master.

rotUnfocusedUp :: X ()
rotUnfocusedUp = windows . W.modify' $ rotUnfocused' rotUp

rotUnfocusedDown :: X ()
rotUnfocusedDown = windows . W.modify' $ rotUnfocused' rotDown

-- | The unfocused rotation on a stack.
rotUnfocused' :: ([a] -> [a]) -> W.Stack a -> W.Stack a
rotUnfocused' _ s@(W.Stack _ [] []) = s
rotUnfocused' f s@(W.Stack _ [] _ ) = rotSlaves' f s                 -- Master has focus
rotUnfocused' f (W.Stack t ls rs) = W.Stack t (reverse revls') rs'   -- otherwise
    where (master:revls) = reverse ls
          (revls',rs') = splitAt (length ls) (f $ master:revls ++ rs)

-- $generic
-- Generic list rotations such that @rotUp [1..4]@ is equivalent to
-- @[2,3,4,1]@ and @rotDown [1..4]@ to @[4,1,2,3]@. They both are
-- @id@ for null or singleton lists.

rotUp :: [a] -> [a]
rotUp l = drop 1 l ++ take 1 l

rotDown :: [a] -> [a]
rotDown = reverse . rotUp . reverse
|
http://hackage.haskell.org/package/xmonad-contrib-bluetilebranch-0.9.1.4/docs/src/XMonad-Actions-CycleWindows.html
|
CC-MAIN-2015-06
|
refinedweb
| 743
| 60.92
|
Hello, I tried to use a Java custom function in Tibco to generate a 32-bit hex string. I got the following BW error while validating the resource.
Missing: Invalid Java custom function:Summary.
Code is below.
import java.util.UUID;
public class GenerateUUID
{
public static String GenerateId()
{
UUID id=UUID.randomUUID();
return String.valueOf(id);
}
}
I am able to compile the above code, and it successfully generates the class file. But when I try to validate it through Tibco BW, it reports the error: Missing: Invalid Java custom function:Summary.
I was even able to create a class file for the following simple Java class, but I still get the same kind of error while validating the class file in Tibco Designer.
sample java class example
public class Greeting
{
public static String CreateGreeting(String name)
{
return "Hello, " + name + "!";
}
}
Could anybody help me with this? I think I am missing something. Help will be highly appreciated.
Hi,
I tried the same thing and there is no problem; I was able to get the ID.
Once you have located the class file, kindly press "LOAD" and validate.
__________
Sriram
Hi,
Where did you create the class file for the Java Custom Function? You have
to ensure that the class is compiled with a javac version equal to or lower than
the JRE version used by BW.
Best Regards
Andre
@Andre, as you say, I compiled my Java file, created a class file in my local directory, and loaded it into BW's Java custom function activity under the Configuration tab, but the problem persists. Is there any problem with my Java class? Maybe I am missing something.
I created my Java class in Notepad and compiled it from the command prompt
with the following command (I don't have a NetBeans/Eclipse/DrJava GUI application):
c:>javac GenerateUUID.java
c:>
If an error occurs during compilation, javac reports it; with the above command, the file compiled successfully.
But while validating the class file after loading it into BW, it reports an error. What's the problem?
Error is:
Missing: Invalid Java custom function:Summary
I am using BW 5.6.2 and JDK 1.6.0.
Any ideas....please help me out.
Hi,
I tried to re-create the issue and got and solved.. and i followed the following steps..
If u have created the class file correctly...
1) Locate the class file ..
2)apply-->save and validate.. you ll get the error "Missing: Invalid Java custom function:Summary "
Steps to resolve:
1)same as setp1..
2)Press "LOAD" -- it will reflect with some bytes...
3) apply -->save.. and validate.. No errors..
Did u try that?
_______
Sriram
Yes Sriram, as you say, I did the same thing but there is the same problem. Actually, I used JDK version 6.0 and BusinessWorks 5.6.0. Is there any problem with the version?
Help me.
Regards
Sandesh
Compile the code with Java 1.5, and this problem will be resolved.
Yes, I got it fixed, Sriram; it was a version problem. When I compiled my Java class from the command prompt, BW did not report validation successful. But when I compiled it with the NetBeans IDE 6.8, it compiled successfully, and when I loaded the class file into BW, it said Validation Successful. So I learned that the code was accurate; the issue was the version of Java I had used earlier to compile the file into a class.
My NetBeans IDE uses the same javac version as the JRE that BW uses. Earlier, I had used a newer version of javac to compile my Java file through the command prompt, and that was the problem.
Thanks both Sriram and Andre for helping me to solve this problem
Please note that the Java class needs to implement the Serializable interface before it gets called in the Java function in BW:

import java.io.Serializable;

public class GenerateUUID implements Serializable
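Putting the thread's fixes together, a complete version of the class might look like this (the serialVersionUID value is an illustrative addition, not from the thread):

```java
import java.io.Serializable;
import java.util.UUID;

public class GenerateUUID implements Serializable {

    // Conventional for Serializable classes; the value chosen here is arbitrary.
    private static final long serialVersionUID = 1L;

    // Returns a random UUID as a 36-character string, e.g. "f47ac10b-58cc-4372-a567-0e02b2c3d479".
    public static String GenerateId() {
        UUID id = UUID.randomUUID();
        return String.valueOf(id);
    }
}
```

Compile it with a javac whose -source/-target settings match (or are lower than) the JRE that BW uses, then load the class file in the Java Custom Function resource.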
|
https://it.toolbox.com/question/custom-java-function-in-tibco-020411
|
CC-MAIN-2019-26
|
refinedweb
| 669
| 66.44
|
NAME
towupper - convert a wide character to uppercase
SYNOPSIS
#include <wctype.h>

wint_t towupper(wint_t wc);
DESCRIPTION
The towupper() function is the wide-character equivalent of the toupper(3) function. If wc is a wide character, it is converted to uppercase. Characters which do not have case are returned unchanged. If wc is WEOF, WEOF is returned.
RETURN VALUE
The towupper() function returns the uppercase equivalent of wc, or WEOF if wc is WEOF.
CONFORMING TO
C99.
NOTES
The behavior of towupper() depends on the LC_CTYPE category of the current locale. This function is not very appropriate for dealing with Unicode characters, because Unicode knows about three cases: upper, lower and title case.
SEE ALSO
iswupper(3), towctrans(3), towlower(3)
COLOPHON
This page is part of release 3.24 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
|
http://manpages.ubuntu.com/manpages/maverick/man3/towupper.3.html
|
CC-MAIN-2013-20
|
refinedweb
| 150
| 59.5
|
In Part 1 of this article, we created a control template for an Outlook 2010 task button in Expression Blend. It looked nice, but it wasn't terribly useful. For one thing, all of the colors were hard-coded into the template, which seriously limits the template's flexibility. For another thing, the image and text used by the template were hard-coded as well, which means that the template would have to be copied into each button on which it was used.
In this part of the article, we are going to wrap the control template in a custom control, which will eliminate the hard-coding that we used in Part 1. The result will be a flexible, general purpose task button that we can add to a project and use pretty much like any other button.
The current version of the demo project is Version 1.0.1. This version corrects an error spotted by reader freefeb in the <Image> declaration in Generic.xaml. Images should now display correctly in task buttons.
Before we get to the custom control itself, we have one more bit of housekeeping to do. The color values for our button are all shades of a base color. In Part 1, we grabbed various shades of blue from Outlook 2010, using the Expression Blend Eyedropper tool. Later in this part, we are going to bind these values to the Background property of our button. To do that, we are going to need to specify different shades of the background color. And, that task will require the use of an IValueConverter object.
We won't go into a whole lot of detail about IValueConverter here. If you aren't familiar with the interface, take another detour and learn the basics. We will assume you have a general understanding of IValueConverter from this point on.
Our value converter will take a parameter, namely, the percentage of the base color that we want to return from the converter. The converter takes a SolidColorBrush (passed in as a hex value from XAML) as its value, and converts the brush color to an HLS value. Then, it adjusts the luminance of the HLS value to make it lighter or darker, using a percentage passed in to the converter as the 'parameter' argument. The adjustment is done as a percentage of the base color's luminance. For example, if the base color has a luminance of 80%, and we pass in 85% as the adjustment factor (via the 'parameter' argument), then the HSL color's luminance will be adjusted to 68% (85% of 80%).
As a quick side note, there are two models for performing color adjustments, HLS and HSB. I prefer the HLS model, but many people prefer the HSB model. I have included conversion methods and IValueConverter classes for both models in my article, WPF Color Conversions, on CodeProject. So, if you prefer HSB, the IValueConverter in the demo project can easily be swapped out for the HSB converter from that article.
We will set the IValueConverter aside for now - we will use it later when we assemble the custom control. For now, let's turn our attention to creating the control itself.
At last, we have arrived at the point in our journey where we actually create the custom control. It feels like we have been hiking through the Grand Canyon for a couple of days, and we have finally reached the Colorado River. But before we create our custom control, let's look at the difference between user controls and custom controls.
A custom control is no more than a class that wraps a control template. Custom controls can be a bit confusing, because WPF does not give you a design surface to work with. That is one of the big differences between user controls and custom controls.
A user control is really a fragment of a view. Like a window, a user control has a surface onto which other controls can be dropped. The developer drops controls onto the design surface to compose the view that the user control will represent. For this reason, user controls are sometimes referred to as 'composite controls'.
An iconic example of a user control is a color picker. A color picker is made up of several controls, including sliders for RGB values, a Rectangle to preview the selected color, and buttons to submit or cancel the selection. User controls often use the Model-View-ViewModel pattern to communicate with the rest of the application. The properties of their constituent controls are bound directly to the view model, rather than to custom properties of the controls themselves.
A custom control is a very different creature. A custom control is not a composite of constituent controls. Instead, it is often derived from a single control. For example, we will derive our custom control from the RadioButton class. That approach allows us to inherit the behavior of a RadioButton (we specifically want to use the IsChecked property) and add our own custom properties to the control (we will be adding ImagePath and Text properties).
As we noted above, custom controls do not provide a design surface, as do user controls. Instead, custom controls rely on a bit of a gimmick. WPF contains built-in support for themes. Any control template in a resource dictionary named Generic.xaml, located in a root-level folder named Themes, will be considered part of the default theme for an application. WPF uses this mechanism to provide the resource dictionary for our custom control's template.
So, our custom control will consist of two elements:

- A class that implements the control's properties and behavior
- A control template, placed in a Themes\Generic.xaml resource dictionary, that defines the control's look
We created the control template in Part 1. Now, it is time to assemble the custom control itself.
In Visual Studio 2008, create a new WPF Custom Control Library called Outlook2010TaskButton. Visual Studio will create a solution with the following structure:
As you can see, Visual Studio has created a class for our custom control (currently named CustomControl1.cs), and a Themes subfolder which contains a Generic.xaml resource dictionary. We will start by filling out the custom control class.
We begin by renaming CustomControl1.cs to TaskButton.cs. We know that we will need two custom properties:

- An Image property (of type ImageSource) for the image the button displays
- A Text property for the text the button displays
Why not simply use a ContentPresenter instead of custom properties, and let the templated button decide what content to present? As we noted in Part 1, we are creating a special purpose button - one that emulates an Outlook task button. We lock the content in the control template so that we can enforce our standards for this type of button.
We will need to add the two custom properties we will need as dependency properties. We won't go into a long explanation of dependency properties here - take another detour if you aren't familiar with them. Suffice it to say that custom control properties have to be set up as dependency properties.
Dependency properties look strange to those of us who are used to plain old .NET properties, but they aren't really that different. They simply follow slightly different conventions:

- The property is backed by a static, read-only field of type DependencyProperty.
- The backing field is named with a Property suffix - for example, ImagePathProperty.
- The CLR property wrapper simply calls GetValue() and SetValue() on the backing field.
For our task button, all the custom control class has to do is implement the two dependency properties we need. So, it looks like this:
using System.ComponentModel;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;
namespace Outlook2010TaskButton
{
/// <summary>
/// An Outlook 2010 Task Button
/// </summary>
public class TaskButton : RadioButton
{
#region Fields
// Dependency property backing variables
public static readonly DependencyProperty ImageProperty;
public static readonly DependencyProperty TextProperty;
#endregion
#region Constructor
/// <summary>
/// Static constructor.
/// </summary>
static TaskButton()
{
// Initialize as lookless control
DefaultStyleKeyProperty.OverrideMetadata(typeof(TaskButton),
new FrameworkPropertyMetadata(typeof(TaskButton)));
// Initialize dependency properties
ImageProperty = DependencyProperty.Register("Image",
typeof(ImageSource), typeof(TaskButton), new UIPropertyMetadata(null));
TextProperty = DependencyProperty.Register("Text", typeof(string),
typeof(TaskButton), new UIPropertyMetadata(null));
}
#endregion
#region Custom Control Properties
/// <summary>
/// The image displayed by the button.
/// </summary>
/// <remarks>The image is specified in XAML as an absolute or relative path.
/// </remarks>
[Description("The image displayed by the button"), Category("Common Properties")]
public ImageSource Image
{
get { return (ImageSource)GetValue(ImageProperty); }
set { SetValue(ImageProperty, value); }
}
/// <summary>
/// The text displayed by the button.
/// </summary>
[Description("The text displayed by the button."), Category("Common Properties")]
public string Text
{
get { return (string)GetValue(TextProperty); }
set { SetValue(TextProperty, value); }
}
#endregion
}
}
Notice that the ImageProperty has a type of ImageSource, even though we pass it a relative path in XAML. WPF has a built-in ImageSourceConverter that loads the image from the relative path passed in and hands the image to the Image property. In the original version of this article, I used an ImagePath property, which took the relative path passed in from XAML. That turned out to be the wrong approach, and WPF was not always able to resolve the relative path to the button image. Changing the ImagePath property (a String type) to an Image property (of type ImageSource) resolved the problem.
Notice also that we can apply standard .NET property attributes to our custom control properties. For example, the Category attribute specifies the property category in which the property should appear in Expression Blend and Visual Studio, and the Description attribute specifies the text description that will appear for the property in Visual Studio.
When we created our custom control project, Visual Studio created a simple control template for our task button:
<ControlTemplate TargetType="{x:Type local:TaskButton}">
<Border Background="{TemplateBinding Background}"
BorderBrush="{TemplateBinding BorderBrush}"
BorderThickness="{TemplateBinding BorderThickness}">
</Border>
</ControlTemplate>
We are going to replace it with the control template we created in Part 1, but before we do that, notice the TemplateBinding objects in the default template. A TemplateBinding binds a property in a control template to a property of the templated control. When we create an instance of our task button and set its Background, BorderBrush, and BorderThickness properties, the TemplateBinding objects will pull these values into the control template. We will use this technique in our control template after we copy it in.
For now, we simply replace the default control template with the template from Part 1. First, we delete the default template, leaving the outer <ControlTemplate> element. Then, we copy the control template from Part 1, omitting its outer <ControlTemplate> element, and paste it into the outer <ControlTemplate> element of the default template.
As we will see a bit later, there are some quirks associated with property bindings. In some instances, you have to use regular Binding objects. A notable example is when you need to use one of WPF's built-in value converters, such as the converter that returns an ImageSource object from an image path. TemplateBinding objects don't have access to these converters, so as we will see below, we have to use a regular Binding object to get the image specified by the task button's Image property.
In other cases, a regular Binding object won't work. For example, I learned during the course of this project that a regular Binding object won't work inside a control template if the binding relies on a custom value converter, such as the HlsValueConverter we use for the task button. If a regular Binding object is used, the value converter does not get called. So, you have to use a TemplateBinding object.
In other cases, you can use either a regular Binding object or a TemplateBinding object. I recommend always starting with a TemplateBinding object. If that doesn't work, try changing the property binding to a regular Binding object and see if that change resolves the problem.
If you are actually performing these steps as we go along, the first thing you probably noticed after you pasted the control template is an exception that reads: The file calendar.png is not part of the project or its 'Build Action' property is not set to 'Resource'. Remember that in our control template, we hard-coded the image path in Part 1 - now, we need to change the hard-coded path to a property binding.
You can find the button's content markup around line 115 of Generic.xaml. Here is what the image path looks like before we modify it:
<Image Source="calendar.png" ... />
And, here is what it looks like after we modify it:
<Image Source="{Binding Path=Image,
RelativeSource={RelativeSource TemplatedParent}}" ... />
The important change is to the Source property. Rather than hard-coding the source, we have bound it to the Image property of the custom control.
There are a couple of points to note with respect to this binding:

- We use a regular Binding, rather than a TemplateBinding, because the binding relies on WPF's built-in converter to turn the specified path into an ImageSource.
- We use a RelativeSource object so that the binding resolves against the templated parent - the task button instance whose template is being applied.
The TemplatedParent value to which we set the RelativeSource object is actually part of a RelativeSourceMode enum, which lists the various modes a RelativeSource can assume. We will see another use for a System.Windows.Data.RelativeSource object later, when we set the Background property of our task button.
Our control template still hard-codes the button text:
<TextBlock Text="Calendar" ... />
We want to replace the hard-coded Calendar with a binding to the custom control's Text property. And, in this case, it's pretty simple:
<TextBlock Text="{TemplateBinding Text}" ... />
We can use a TemplateBinding object to do the binding, because we do not need access to built-in value converters or other features that require a full Binding object. And, we don't need a RelativeSource object, since we don't need to resolve anything relative to the location of an instance of our control. So, a simple TemplateBinding, of the sort we saw in the default template that Visual Studio created for us, does nicely.
At this point, we have a functioning task button. Let's see how it looks:
To add the task button, you will need to add an XML namespace declaration to the custom control assembly. Window1.xaml should now look like this:
<Window x:Class="TaskButtonDemo.Window1"
xmlns=""
xmlns:x=""
xmlns:
<Grid>
<custom:TaskButton
</Grid>
</Window>
Compile the application and run it. You should see a window that looks like this:
There is a quirk related to how the control is set up. Since both the background and text colors are bound to the Background property of the control, text will not appear on the control until its Background property is set; nor will the State effects. Once the Background property is set, all should appear.
The only shortcoming of our button is that it is still hard-coded to shades of blue. Before we refactor the XAML to data-bind the control's color properties, we will need to add an IValueConverter to perform color adjustments. We will add the file HlsValueConverter.cs from the WPF Color Conversions article discussed above.
We will need to add the converter to the control template, as well. It goes in the <ControlTemplate.Resources> section, and it looks like this:
<ControlTemplate.Resources>
    <local:HlsValueConverter x:Key="ColorConverter" />
</ControlTemplate.Resources>
Now, we are ready to use the value converter in the control template.
We need to link the button's color properties to the color properties of the custom control. And, if we are going to emulate the Outlook 2010 task button, we will want to be very specific about how we do that binding:
Originally, I had planned to set up the control template bindings to automatically bind the button background to the host window background, all from within the control template. I ultimately decided that approach was a bit too restrictive, so the control template binds to the custom control's Background property. When a task button is instantiated in a WPF window (or a user control), the developer can bind the control's Background property to the window's Background property. That way, if the window color is changed, the change will flow through to any task buttons on the window automatically.
For now, set the Window1.Background property to #FFB2C5DD in the demo project, which will color the window to match the button.
Now, we begin the process of refactoring the hard-coded color values to data-bound colors. We will base everything on the custom control's Background property. Let's start with the BorderGrid layer, which consists of a background, an outer border, and an inner border. Here is what the markup looks like before we begin:
<Grid x:
<Grid.Effect>
<DropShadowEffect ShadowDepth="4" Opacity="0.1"/>
</Grid.Effect>
<Rectangle x:
<Rectangle x:
</Grid>
First, we will bind the Grid's Background to the custom control's Background:
<Grid x:Name="BorderGrid" Background="{TemplateBinding Background}">
That one is pretty simple. Next, we set the outer border. This is a darker shade of the Background - let's try 80% of the background color:
<Rectangle x:Name="OuterStroke" Stroke="{TemplateBinding Background,
Converter={StaticResource ColorConverter}, ConverterParameter='0.8'}"
Margin="0"/>
As you can see, we have added the color converter to the binding, which will adjust the border color to a darker shade.
Next, let's set the inner border. This object is set the same way, except that it is a lighter shade of the background color. Let's try 120%:
<Rectangle x:Name="InnerStroke" Stroke="{TemplateBinding Background,
Converter={StaticResource ColorConverter}, ConverterParameter='1.2'}"
Margin="1" />
Note that we used TemplateBinding objects to perform the color bindings on these Rectangles. We are required to use TemplateBinding objects, because we make use of a value converter. If we use regular Binding objects, the value converter would never get called, and the outer and inner border would not appear in the MouseOver or Selected states.
We now have the MouseOver state data bound, rather than hard-coded. To see how it looks, let's switch back to Window1.xaml in the demo project. We created a task button there earlier; now, we need to bind the task button's Background property to the same property for the window:
<custom:TaskButton Image="calendar.png" Text="Calendar"
Background="{Binding Path=Background,
RelativeSource={RelativeSource FindAncestor,
AncestorType={x:Type Window}}}" />
Once again, we use a RelativeSource object to bind the Background property. But this time, we use the RelativeSourceMode that searches up the WPF element tree to find a particular ancestor of the control being set. In this case, it is a Window object - the window that hosts the task button.
Note that all of the color property values are derivatives of the background color value. So, when we instantiate a TaskButton in a project, we need to only set its Background property, and all of the State effects are generated automatically.
Compile the solution and run it. When you move your mouse over the task button, it should light up in the usual manner. And, if you change the background color of Window1, the task button should change to the same color.
There are other color properties in the control template that we need to refactor to property bindings. We won't go over those in detail here, since they are done the same way as the BorderGrid. You can examine Generic.xaml in the project in the attached solution to see the XAML for the control template.
Once you have completed the control, add a couple more task buttons to Window1. When you select one of them, you will see any other selected button deselect, just like the task buttons in Outlook 2010. And, the beauty of the arrangement is that each task button can be implemented with a single line of markup.
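For instance, a small stack of task buttons could be declared like this (image file names are illustrative; the Background binding mirrors the markup shown earlier):

```xml
<StackPanel>
    <custom:TaskButton Image="mail.png" Text="Mail"
        Background="{Binding Path=Background, RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type Window}}}" />
    <custom:TaskButton Image="calendar.png" Text="Calendar"
        Background="{Binding Path=Background, RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type Window}}}" />
    <custom:TaskButton Image="contacts.png" Text="Contacts"
        Background="{Binding Path=Background, RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type Window}}}" />
</StackPanel>
```

Because the buttons derive from RadioButton, selecting one deselects the others automatically.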
That brings us to the conclusion of the project. It may seem like a lot of work just to create a templated button, but look at what it has accomplished: the button is simple to implement in a project, it has a consistent look and feel, and it has fairly sophisticated effects. Plus, working out how to create the button teaches a lot of WPF skills that I know I have avoided for far too long. I hope you found the journey as worthwhile as I did - all in all, it was a very worthwhile way to stay somewhat productive while enjoying my holiday time with family and friends.
As always, comments and corrections are welcome. And, your vote is always appreciated!
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
|
http://www.codeproject.com/Articles/49802/Create-a-WPF-Custom-Control-Part-2?fid=1556016&df=90&mpp=10&noise=1&prof=True&sort=Position&view=Expanded&spc=None
|
CC-MAIN-2015-48
|
refinedweb
| 3,503
| 54.02
|
Writing programs is fun, but making them fast can be a pain. Python programs are no exception to that, but the basic profiling toolchain is actually not that complicated to use. Here, I would like to show you how you can quickly profile and analyze your Python code to find what part of the code you should optimize.
What's Profiling?
Profiling a Python program is doing a dynamic analysis that measures the execution time of the program and everything that composes it. That means measuring the time spent in each of its functions. This will give you data about where your program is spending time, and what area might be worth optimizing.
It's a very interesting exercise. Many people focus on local optimizations, such as determining e.g. which of the Python functions
range or
xrange is going to be faster. It turns out that knowing which one is faster may never be an issue in your program, and that the time gained by one of the functions above might not be worth the time you spend researching that, or arguing about it with your colleague.
Trying to blindly optimize a program without measuring where it is actually spending its time is a useless exercise. Following your guts alone is not always sufficient.
There are many types of profiling, as there are many things you can measure. In this exercise, we'll focus on CPU utilization profiling, meaning the time spent by each function executing instructions. Obviously, we could do many more kinds of profiling and optimizations, such as memory profiling which would measure the memory used by each piece of code—something I talk about in The Hacker's Guide to Python.
cProfile
Since Python 2.5, Python provides a C module called cProfile which has a reasonable overhead and offers a good enough feature set. The basic usage goes down to:
>>> import cProfile
>>> cProfile.run('2 + 2')
         2 function calls in 0.000 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000    0.000    0.000 <string>:1(<module>)
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
Though you can also run a script with it, which turns out to be handy:
$ python -m cProfile -s cumtime lwn2pocket.py
         72270 function calls (70640 primitive calls) in 4.481 seconds

   Ordered by: cumulative time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.004    0.004    4.481    4.481 lwn2pocket.py:2(<module>)
        1    0.001    0.001    4.296    4.296 lwn2pocket.py:51(main)
        3    0.000    0.000    4.286    1.429 api.py:17(request)
        3    0.000    0.000    4.268    1.423 sessions.py:386(request)
      4/3    0.000    0.000    3.816    1.272 sessions.py:539(send)
        4    0.000    0.000    2.965    0.741 adapters.py:323(send)
        4    0.000    0.000    2.962    0.740 connectionpool.py:421(urlopen)
        4    0.000    0.000    2.961    0.740 connectionpool.py:317(_make_request)
        2    0.000    0.000    2.675    1.338 api.py:98(post)
       30    0.000    0.000    1.621    0.054 ssl.py:727(recv)
       30    0.000    0.000    1.621    0.054 ssl.py:610(read)
       30    1.621    0.054    1.621    0.054 {method 'read' of '_ssl._SSLSocket' objects}
        1    0.000    0.000    1.611    1.611 api.py:58(get)
        4    0.000    0.000    1.572    0.393 httplib.py:1095(getresponse)
        4    0.000    0.000    1.572    0.393 httplib.py:446(begin)
       60    0.000    0.000    1.571    0.026 socket.py:410(readline)
        4    0.000    0.000    1.571    0.393 httplib.py:407(_read_status)
        1    0.000    0.000    1.462    1.462 pocket.py:44(wrapped)
        1    0.000    0.000    1.462    1.462 pocket.py:152(make_request)
        1    0.000    0.000    1.462    1.462 pocket.py:139(_make_request)
        1    0.000    0.000    1.459    1.459 pocket.py:134(_post_request)
[…]
This prints out all the functions called, with the time spent in each and the number of times they were called.
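If you want the same report from inside a program (for example, to sort it differently or keep only the top entries), the companion pstats module can consume a Profile object directly. A minimal sketch, assuming Python 3; the busy function is just a stand-in workload:

```python
import cProfile
import io
import pstats

def busy():
    # Stand-in workload so the profiler has something to measure.
    return sum(i * i for i in range(10000))

profiler = cProfile.Profile()
profiler.enable()
busy()
profiler.disable()

# Render the 5 most expensive entries, sorted by cumulative time.
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream)
stats.sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
```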
Advanced Visualization With KCacheGrind
While useful, the output format is very basic and does not make it easy to grasp what is going on in a complete program. For more advanced visualization, I leverage KCacheGrind. If you did any C programming and profiling these last years, you may have used it, as it is primarily designed as a front-end for Valgrind-generated call-graphs.
In order to use it, you need to generate a cProfile result file, then convert it to the KCacheGrind format. To do that, I use pyprof2calltree.
$ python -m cProfile -o myscript.cprof myscript.py
$ pyprof2calltree -k -i myscript.cprof
And, the KCacheGrind window magically appears!
Concrete Case: Carbonara Optimization
I was curious about the performance of Carbonara, the small timeseries library I wrote for Gnocchi. I decided to do some basic profiling to see if there was any obvious optimization to do.
In order to profile a program, you need to run it. But, running the whole program in profiling mode can generate a lot of data that you don't care about, and it adds noise to what you're trying to understand. Since Gnocchi has thousands of unit tests and a few for Carbonara itself, I decided to profile the code used by these unit tests, as it's a good reflection of basic features of the library.
Note that this is a good strategy for a curious and naive first-pass profiling. There's no way that you can make sure that the hotspots you will see in the unit tests are the actual hotspots you will encounter in production. Therefore, profiling in conditions and with a scenario that mimics what's seen in production is often a necessity if you need to push your program optimization further and want to achieve perceivable and valuable gain.
I activated cProfile using the method described above, creating a
cProfile.Profile object around my tests (I actually started to implement that in testtools). I then run KCacheGrind as described above. Using KCacheGrind, I generated the following figures.
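Wrapping just the code you care about in a Profile object and dumping the raw stats to disk keeps the noise down. A hypothetical helper for that (an illustration, not the actual testtools integration):

```python
import cProfile
import os
import tempfile

def run_profiled(func, output_path):
    # Profile a single callable and dump the raw cProfile data to a file,
    # ready to be converted with pyprof2calltree and opened in KCacheGrind.
    profiler = cProfile.Profile()
    profiler.enable()
    try:
        result = func()
    finally:
        profiler.disable()
    profiler.dump_stats(output_path)
    return result

# Example: profile a tiny workload into a temporary .cprof file.
stats_path = os.path.join(tempfile.mkdtemp(), "tests.cprof")
value = run_profiled(lambda: sum(range(100)), stats_path)
```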
The test I profiled here is called
test_fetch and is pretty easy to understand: it puts data in a timeserie object, and then fetch the aggregated result. The above list shows that 88 % of the ticks are spent in
set_values (44 ticks over 50). This function is used to insert values into the timeserie, not to fetch the values. That means that it's really slow to insert data, and pretty fast to actually retrieve data.
Reading the rest of the list indicates that several functions share the rest of the ticks,
update,
_first_block_timestamp,
_truncate,
_resample, etc. Some of the functions in the list are not part of Carbonara, so there's no point in looking to optimize them. The only thing that can be optimized is, sometimes, the number of times they're called.
The call graph gives me a bit more insight about what's going on here. Using my knowledge about how Carbonara works, I don't think that the whole stack on the left for _first_block_timestamp makes much sense. This function is supposed to find the first timestamp for an aggregate, e.g. with a timestamp of 13:34:45 and a period of 5 minutes, the function should return 13:30:00. The way it works currently is by calling the resample function from Pandas on a timeseries with only one element, but that seems to be very slow. Indeed, currently, this function represents 25% of the time spent by set_values (11 ticks out of 44).
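The rounding itself needs no Pandas at all. Assuming plain datetime objects, the idea behind a _round_timestamp-style helper can be sketched as follows (an illustration of the approach, not the actual Carbonara code):

```python
import datetime

def round_timestamp(ts, period):
    # Round ts down to the previous multiple of `period` since the epoch.
    epoch = datetime.datetime(1970, 1, 1)
    elapsed = (ts - epoch).total_seconds()
    offset = elapsed % period.total_seconds()
    return ts - datetime.timedelta(seconds=offset)

# 13:34:45 with a 5-minute period rounds down to 13:30:00.
rounded = round_timestamp(datetime.datetime(2015, 6, 1, 13, 34, 45),
                          datetime.timedelta(minutes=5))
```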
Fortunately, I recently added a small function called _round_timestamp that does exactly what _first_block_timestamp needs, without calling any Pandas function, so no resample. So, I ended up rewriting that function this way:
 def _first_block_timestamp(self):
-    ts = self.ts[-1:].resample(self.block_size)
-    return (ts.index[-1] - (self.block_size * self.back_window))
+    rounded = self._round_timestamp(self.ts.index[-1], self.block_size)
+    return rounded - (self.block_size * self.back_window)
And then, I re-run the exact same test to compare the output of cProfile.
The list of functions looks quite different this time. The share of time spent in
set_values dropped from 88% to 71%.
The call stack for
set_values shows that pretty well: we can't even see the
_first_block_timestamp function as it is so fast that it totally disappeared from the display. It's now being considered insignificant by the profiler.
So, we just sped up the whole insertion process of values into Carbonara by a nice 25 % in a few minutes. Not that bad for a first naive pass, right?
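A quick way to sanity-check a change like this outside the profiler is timeit, comparing the old and new code paths on the same input. The two functions below are generic stand-ins, not the Carbonara implementations:

```python
import timeit

def slow_minimum(values):
    # Deliberately wasteful: sorting is O(n log n) just to read one element.
    return sorted(values)[0]

def fast_minimum(values):
    # Equivalent result in a single O(n) pass.
    return min(values)

data = list(range(1000, 0, -1))
t_slow = timeit.timeit(lambda: slow_minimum(data), number=200)
t_fast = timeit.timeit(lambda: fast_minimum(data), number=200)
```

Both variants must of course return the same result; only then does comparing their timings mean anything.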
@STL: Your description of not being able to use & on the temporary created by std::string concatenation doesn't hold for C++ in Visual Studio 2010. For instance, when I compile the following code ...
#include <iostream>
using std::cout;
using std::endl;

#include <string>
using std::string;

int main()
{
    string x( "hello " );
    string y( "world" );

    cout << ( x + y ) << endl;
    cout << &( x + y ) << endl;

    return 0;
}

... there are no compiler warnings and no compiler errors. When I run that compiled code, I get the following output:
hello world
002FFD54

Aside: Using g++ 4.5.0 with -std=c++0x, there is a compiler warning, but no compiler errors. Here is the compiler warning:
warning: taking address of temporary
If I change std::string to int, then everything works as expected and I get the following compilation error for C++ in Visual Studio 2010:
error C2102: '&' requires l-value

In g++ 4.5.0, I get the following compilation error:
error: lvalue required as unary '&' operand
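For completeness: the portable way to keep such a temporary alive in standard C++, instead of taking its address, is to bind it to a reference to const, which extends the temporary's lifetime to the end of the reference's scope. A small sketch:

```cpp
#include <cassert>
#include <string>

// Returns the concatenation by value; the call site below binds the
// resulting temporary to a const reference, which is well-formed in
// standard C++ and extends the temporary's lifetime.
std::string concat(const std::string& a, const std::string& b)
{
    return a + b;
}

void lifetime_extension_demo()
{
    std::string x("hello ");
    std::string y("world");

    const std::string& r = x + y; // OK: the temporary lives as long as r
    assert(r == "hello world");

    // By contrast, `&(x + y)` is ill-formed in standard C++ ('&' needs an
    // lvalue); MSVC 2010 accepts it only as a non-conforming extension.
}
```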
Joshua Burkholder
Dockerless, Part 3: Moving Development Environment to Containers with Podman
In the last article in this series, we take an in-depth look at deploying a development environment using a Docker alternative, Podman.
In the introductory article of this series I wrote that one of the disadvantages of Podman and Buildah is that the technology is still pretty new and moves fast. This final article appeared with some delay because, between Podman 1.3.1 and 1.4.1, one of the key features that we will look at in this article was broken.
Luckily, Podman 1.4.1 and above not only fixes the features that were broken for a few weeks, but also finally covers them with tests. Hopefully, there will be no such dramatic loss of functionality in future releases. My original warning still applies, though: this new container toolchain is young and sometimes unstable. Keep that in mind.
Disclaimer #1: Depending on when you are reading this article, my warning might not apply. It's the state of things as of June 2019. If you are from the end of 2019 or 2020, chances are that Podman is mature and stable enough for you not to worry about broken features between minor versions.
Disclaimer #2: I will briefly mention how Podman works, but I won't go into details. If you are an infrastructure engineer or just curious, then follow all the links I put in the article to learn more. If you are a developer who doesn't care too much about internals, then skip them, as it might take you quite some time to dive into this topic without immediate benefit for your daily work.
What Is Podman and Does It Work?
Podman is a replacement for Docker for local development of containerized applications. Podman commands map 1 to 1 to Docker commands, including their arguments. You could alias
docker with
podman and never notice that there is a completely different tool for managing your local containers.
One of the core features of Podman is its focus on security. There is no daemon involved in using Podman. It uses a traditional fork-exec model instead and also heavily utilizes user namespaces and network namespaces. As a result, Podman is a bit more isolated and in general more secure to use than Docker. You can even be root in a container without granting container or Podman any root privileges on the host — and user in a container won't be able to do any root-level tasks on the host machine.
A good example of how Podman's model can lead to better security is covered in an article, "Podman: A more secure way to run containers." If you want to learn more about how Podman leverages Linux namespaces, start with, "Podman and user namespaces: A marriage made in heaven" article. Finally, if you want to read about possible obstacles that you might have with this approach, then read, "The shortcomings of rootless containers."
For most of the users, the internals of Podman should not matter too much in a day-to-day use. What does matter is that Podman provides the same developer experience as Docker while doing things in a slightly more secure way in the background. Let's see if that's true.
Local Development Environment of mkdev.me
The main web application behind mkdev.me is written in Ruby on Rails. For a developer to be able to run this application locally he or she needs:
- PostgreSQL server;
- Redis server;
- Mattermost instance (for our chat solution);
- Mattermost test instance (to be used during automated tests);
In total, that's five services to run locally (including web application itself). One can imagine that for any new developer to install and configure all of it by hand can take quite some time. And once it's done, there is no guarantee that resulting local environment is close to the production one: a developer could install different PostgreSQL or Mattermost versions, that were not yet tested to work with mkdev.
Wouldn't it be great to bootstrap a complete development environment with one command and get a production-like setup running in seconds? That's what Docker and Docker Compose provided developers with. That's what Podman can provide as well.
Podman's Pods and What They Are Good For
On top of the regular containers, Podman has
pods. If you have ever heard of Kubernetes, this concept will be familiar to you. In Kubernetes, a
pod is the smallest deployment unit that consists of one or more containers. Podman's pods are exactly the same. All containers inside the pod share the same network namespace, so they can easily talk to each other over
localhost without the need to export any extra ports.
There are three possible use cases for pods.
1. Prepare Your Application for Running on Kubernetes/Openshift
You could use pods in Podman as a preparation step before moving it to Kubernetes. In many cases, for real-world web applications, you probably will be better off using minikube, which will guarantee you the same APIs and functionality Kubernetes has. You would want to have Deployments, Services, and other resources, that would be a vital part of your setup in production. Just having a way to simulate pods with Podman won't be of much benefit for this.
2. Run Your Application with Podman in Production
You could decide that complete container orchestration is an overkill for you (and that would be a very good decision in many cases). Then it would make sense to still use containers for packaging and delivering your application. And in certain cases, you could benefit from not just running one container, but running multiple ones inside the pod on your production server. The question is what exactly will be the benefit of putting your containers inside the pod versus just running them as separate systemd managed services? I don't have a good answer here, but the feature is there and someone might find a use case for it in production.
3. Simplifying Your Development Environment
The final and the most attractive reason for developers is to use Podman pods to automate development environment. In this case, you would run all the services your application depends on inside the same pod. This is absolutely not something you should ever do in production environment on a real Kubernetes cluster, as your services should be running in different pods behind different replication controllers and service endpoints. But for local development doing it this way is convenient.
Podman Pods and Kubernetes Pods
Before we move to some real examples, we need to learn about one pod-related feature of Podman:
play kube. Podman doesn't have a replacement for Docker Compose. There is a third-party tool, podman-compose, that might bring this functionality, but we at mkdev haven't gotten around to testing it yet.
Instead of Docker Compose, Podman has pods and a way to run them out of YAML definition. This YAML definition is compatible with Kubernetes pods YAML, meaning that you can take this YAML, load it into your Kubernetes cluster and expect some pods running.
Out of scope: Supporting docker-compose. We believe that Kubernetes is the defacto standard for composing Pods and for orchestrating containers, making Kubernetes YAML a defacto standard file format. - Podman documentation
Now let's do a small example.
Basic Usage of Podman
We first need to create a new pod that will expose port 5432 (PostgreSQL) and port 9187 (for the metrics exporter we will add below):
podman pod create --name postgresql -p 5432 -p 9187
We can see running pods with
podman pod ps command:
POD ID         NAME         STATUS    CREATED          # OF CONTAINERS   INFRA ID
235164dd4137   postgresql   Created   26 seconds ago   1                 229b2a70b8c4
When you create a new pod, Podman automatically starts
infra container, which you can see by running
podman ps.
Let's start a PostgreSQL container inside this pod:
podman run -d --pod postgresql -e POSTGRES_PASSWORD=password postgres:latest
If you don't have
postgres:latest image yet, Podman will pull it automatically, from Docker Hub — the same experience you would have with Docker CLI.
Let's start another container inside the postgresql pod, this time a PostgreSQL Prometheus exporter:
podman run -d --pod postgresql -e DATA_SOURCE_NAME="postgresql://postgres:password@localhost:5432/postgres?sslmode=disable" wrouesnel/postgres_exporter
We can see the top processes inside the pod with the podman pod top postgresql command. And we can access PostgreSQL metrics if we curl localhost:9187/metrics.
If we want to create the same setup again without running imperative shell commands and to store this setup as a declarative code, we can run
podman generate kube postgresql > postgresql.yaml which will result in a Kubernetes-compatible pod definition. If you follow the link and examine this YAML file, you will see that Podman correctly configured all the ports and even exported all the environment variables, which you can cleanup if you want to rely on the image defaults.
Remove the pod with
podman pod rm postgresql -f. Then, instead of running all of the commands again, simply run
podman play kube postgresql.yaml to get the same result. You could also
kubectl apply -f postgresql.yaml and get this PostgreSQL running on your Kubernetes cluster.
Warning: if you happen to use Podman 1.4.2, then at this point you will hit a bug described in this GitHub Issue. Let's hope by the time you read it the issue is fixed. If it's not fixed, then follow steps to fix your YAML from the Issue description or simply copy contents of my gist, which already contains a fixed definition.
The YAML file generated by Podman should not be used "as is," because Podman dumps all environment variables, securityContext and other things that you could live without in your development environment and that might have better defaults in your Kubernetes cluster. Consider it convenient scaffolding, not a final result.
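As an illustration only (the container names, images and ports below are assumptions, not the actual mkdev pod.yaml), a cleaned-up definition could look roughly like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mkdev-dev
spec:
  containers:
    - name: postgres
      image: docker.io/library/postgres:latest
      env:
        - name: POSTGRES_PASSWORD
          value: password
      ports:
        - containerPort: 5432
          hostPort: 5432
    - name: redis
      image: docker.io/library/redis:latest
      ports:
        - containerPort: 6379
          hostPort: 6379
```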
Using Podman in a Real Ruby on Rails Application
At mkdev we completely automated our development environment with Podman. New developers (assuming they have a Linux machine running) can run a single script
./script/bootstrap.sh to get the application running. The script itself looks like this:
#!/bin/bash
set +e

if [ "$(podman pod ps | grep mkdev-dev | wc -l)" == "0" ] ; then
  echo "> > > Starting PostgreSQL, Redis and Mattermost"
  podman play kube pod.yaml
else
  echo "Development pod is already running. Re-create it? Y/N"
  read input
  if [ $input == "Y" ] ; then
    podman pod rm mkdev-dev -f
    podman play kube pod.yaml
  else
    echo "Leaving bootstrap process."
    exit 0
  fi
fi

echo "> > > Waiting for PostgreSQL to start"
until podman exec postgres psql -U postgres -c '\list'
do
  echo "> > > > > > PostgreSQL is not ready yet"
  sleep 1
done

podman exec -u postgres postgres psql -U postgres -d template1 -c 'create extension hstore;'

echo "> > > Creating development IM database"
until podman exec -u postgres postgres createdb mattermost; do sleep 1; done

echo "> > > Creating test IM database"
until podman exec -u postgres postgres createdb mattermost_test; do sleep 1; done

echo "> > > Creating and seeding the database"
./script/setup.sh
./script/exec.sh 'bundle exec rails db:create db:migrate db:test:prepare'
./script/seed.sh

echo "> > > Attempting to start the app"
./script/run.sh
We rely on
play kube feature to create all of the required services, the way you would normally run
docker-compose up. Our
pod.yaml defines 4 containers — PostgreSQL, development and test Mattermost instances and Redis server. We run a dedicated test instance of Mattermost because we need to reset its database after integration tests, and we don't want to reset development instance as it would certainly be unproductive.
Instead of running
rails and
rake commands directly, we hide them inside
scripts folder, in a setup similar to the one described in GitHub's "Scripts to Rule Them All" article about how they organize such scripts internally.
scripts/bootstrap.sh invokes a number of other scripts, like seeding the database, downloading some dependencies and triggering database migrations. One script that developers find useful is
scripts/exec.sh:
#!/bin/bash
set -e

echo "Running command in new container ..."
podman run --pod mkdev-dev -it --rm -v $(pwd):/app:Z docker.io/mkdevme/app:dev $1
It runs a command in a new application container and then removes this container. This is very useful to run one-off commands like database migrations or rake tasks.
scripts/run.sh simply starts the application container, if not started, and then spins up a Rails server inside:
#!/bin/bash
set -e

if [ "$(podman ps | grep app | grep mkdevme | wc -l)" == "0" ] ; then
  echo "> > > Starting new application container"
  podman run --pod mkdev-dev --name app -v $(pwd):/app:Z -d docker.io/mkdevme/app:dev tail -f /app/log/development.log
fi

n=0
until [ $n -ge 5 ]
do
  podman exec app /entrypoint.sh bundle exec rails s -b '0.0.0.0' -P /tmp/mkdev.pid
  n=$[$n+1]
  echo "Not all components are up. Sleeping for 10 seconds."
  sleep 10
done
Note that the command executed inside the application container is just a
tail -f, which results in a never-dying container. It's done this way mostly to allow developers to quickly enter the container and debug something inside in case Rails refuses to boot with a new error.
You might not fancy the number of bash scripts we had to write. It is definitely not as nice as a single docker-compose.yaml file. It is not too bad, though. These scripts need to be written once and are not overly complicated. In the end, it's more a matter of taste than a real technical drawback.
With this set of handy scripts, we cover most development tasks, like restarting the server, executing arbitrary commands and so on. There are certain things to be improved, like there always are, but in general we are pretty happy with the result. Developers have identical environments, with the same dependency versions, the same Ruby version, the same everything, and they can (re-)create the whole local setup in seconds. These are the same benefits you would get from Docker, but in this case without Docker at all.
Dockerless: Is It Worth It?
And that concludes this series. Hopefully, you've learned something new about container standards and new tools in the container world. One question you might still ask: was it worth it? I sure asked this question myself. It would certainly be easier to apply good old Docker skills and practices and we would probably end up with the same result, but faster.
But the point is that in the end we did end up with the same result. Container images are being built, containers are being used in development and test environments and we have the same benefits as with Docker. We didn't have to compromise much on features, even though we for sure struggled in the beginning with certain bugs in Podman. Even as I was writing this article I discovered yet another bug in Podman!
Now that mkdev has a working solution with Podman and Buildah, we will likely stick with it. There are ideas floating around for deploying our application in a Podman-spawned container managed as a systemd service (we are not at a scale that gives us any reason to introduce a container orchestration tool to the stack). The
pod.yaml we have can be used to deploy review apps for new Pull Requests, which is something that would improve our testing processes even more. And there are more features in Podman with every release.
As I already did in the first article of this series, I encourage you to learn more about what's happening in the container landscape. There are new things to learn and to try and there are some very good articles to read that I've linked in all three parts of this series.
Looking forward to your comments and proposals for new container-related articles. What would you like to know about them in-depth?
Feel free to ask any questions in the comments below, I will make sure to reply to them directly or extend this article! You can also hire our DevOps mentors to explain everything to you.
Published at DZone with permission of Kirill Shirinkin . See the original article here.
Opinions expressed by DZone contributors are their own.
31 October 2012 08:12 [Source: ICIS news]
SINGAPORE (ICIS)--French oil and gas major Total said on Wednesday its refining and chemicals business’ adjusted net operating income rose by 54% to €564m ($732m) in the third quarter on the back of a sharp increase in refining margins.
The European refining margin indicator averaged at $51/tonne in the third quarter of this year, surging from an average of $13.4/tonne in the same period of last year, the company said.
“In contrast, petrochemical margins further deteriorated in the third quarter because of weak demand in Europe and a slowdown in
The company’s overall adjusted net income rose by 20% year on year to €3.34bn in the third quarter, with sales up by 8% to €49.9bn, the company said.
For the first nine months of this year, Total’s adjusted net income rose by 7% year on year to €9.28bn, with sales up by 9% at €150.2bn.
Hello,
What do I need to do to get TAP tests running?
I misunderstood. You need to configure with "--enable-tap-tests".
> There is a spurious empty line added at the very end of "mainloop.h":
>
>     + #endif /* MAINLOOP_H */

Not in my diff, but that's been coming and going in your diff reviews.

Strange. Maybe this is linked to the warning displayed with "git apply" when I apply the diff.

> I would suggest to add a short one line comment before each test to explain what is being tested, like "-- test \elif execution", "-- test \else execution"...

Where are you suggesting this?

In "regress/sql/psql.sql", in front of each group which starts a test.

> Debatable suggestion about "psql_branch_empty":

The name isn't great. Maybe psql_branch_stack_empty()?

Yep, maybe, or "empty_stack" or "stack_is_empty" or IDK...

> "psql_branch_end_state": it is a pop, it could be named "psql_branch_pop"

Yeah, we either need to go fully with telling the programmer that it's a stack (push/pop/empty) or (begin_branch/end_branch/not_branching). I'm inclined to go full-stack, as it were.

Anything consistent is ok. I'm fine with calling a stack a stack :-)

--
Fabien.

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)