text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91
values | source stringclasses 1
value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
hey guys i've got a conversion here but it is not displaying correctly.
it is making the large hexadecimal numbers into negative decimal numbers.
also when displaying a hex number with a 0 at the front it isn't leaving displaying the 0, i need it to.
(its reading these numbers from a file.
this is the code.
thanks guys..
Code:#include <stdio.h> #include <stdlib.h> #include <math.h> FILE * inputdata; void openfile(void); int main() { openfile(); return (0); } void openfile(void) { unsigned int hex; int i=0; inputdata = fopen("numbers.txt","r"); if (inputdata==NULL) { printf("File cannot be opened for reading"); } printf("Line # Hex Decimal Binary\n"); while (fscanf(inputdata,"%x",&hex) != EOF) { printf("%i %x %d\n",i , hex , hex); i++; } } | http://cboard.cprogramming.com/c-programming/55541-converting-hex-dec.html | CC-MAIN-2014-52 | refinedweb | 126 | 60.92 |
So, you want to know what is Hadoop and how can Python be used to perform a simple task using Hadoop Streaming?
HADOOP is a software framework that mainly is used to leverage distributed systems in an intelligent manner and also, perform efficiently operations on big datasets without letting the user worry about nodal failures (Failure in one among the ‘n’ machines performing your task). Hadoop has various components :
-.
I will walk you through the Hadoop MapReduce component. For further information on Map Reduce you can either Google. For now I will present to you a brief introduction of it.
What is MapReduce
To know it truly you have to understand your problem first. So, what kind of problems can Map Reduce solve ?
- Counting occurrences of digits in a list of numbers
- Counting prime numbers in a list
- Counting the number of sentences in a text
- Counting the Average of 10 million numbers in a database
- Counting the name of all people belonging to a particular sales region
Do you think that these are trivial problems ? Yes, they appear as if they are but what if you have millions of records and time for processing the results is very important for you? Thinking again ? You’ll get your answer.
Not just time but multiple dimensions of a task are there and map reduce if implemented efficiently, can help you overcome the risks associated with processing that much data.
Okay, enough of what and why! Now ask me how !!!
A MapReduce ‘system’ consists of two major kinds of functions, namely the Map Function and the Reduce function (not necessarily with the same names but more often with the pre-decided intentions of acting as the Map and Reduce functions). So, how do we solve the simple problem of counting a million numbers from a list quickly and display their sum ? This is, let me tell you, a very long operation though simple (For a complex program in MapReduce using not Hadoop but mincemeat module please go throughthis.
In this particular example the Map Function(s) will go through the list of numbers and create a list(s) of key-value pairs of the format {m,1} for every number m that occur during the scan. The reduce function takes the list from the Map function(s) and forms a list of key-value pairs as {m,list(1+)}. 1+ means 1 or more occurrences of 1.
The complicated expression above is nothing but just the number m encountered in the scan(s) by the Map Function(s) and the 1’s in the value in the Reduce task appear as many times as the number was encountered in the Map Function(s). So, that basically means {m, number of times m was encountered in the Map Phase}.
The next step is to aggregate the 1’s in the value for every m. This means {m,sum(1s)}. The task is almost done now. All we got to do is just display the number and the corresponding sum of the 1s as the count of the number. But wait, still you don’t understand as why this is a big deal right? Anybody can do this. But hey! The Map Functions aren’t just there to take all your load and process alone all of it. Nope! there are in fact many instances of your Map Function working in parallel in different machines in your cluster (if it exists, else just multithread your program to create multiple instances of Map (but why should you when you have distributed systems). Every Map Function running simultaneously work on different chunks of your big list, hence, sharing the task and reducing processing time. See! What if you have a big cluster and many machines running multiple instances your Map Functions to process your list? It’s simple; your work gets done in no time!!! Similarly, the Reduce functions can also run in multiple machines but generally after sorting ( where your mincemeat or hadoop programs will first sort the say,m‘s and distribute distinct such m‘s to different reduce functions in different machines). So, even aggregation task gets quicker and you are ready with your output to impress your boss!
A brief outline of what happened to the list of numbers is as follows:
Map functions counted every occurrence of every number m
Map functions stored every number m in the form {m,1} - as many pairs for any number m
Reduce functions collected all such {m,1} pairs
Reduce functions converted all such pairs as {m,sum(1's)} - Only 1 pair for a number m
Reduce functions finally displayed the pairs or passed it to the main function to display or process
In the part two of the tutorial I will explain how to install Hadoop and do the same program in python using Hadoop Framework.
For a similar program in mincemeat please go through :
import mincemeat
import sys
file = open(sys.argv[1], “r”)
#The data source can be any dictionary-like object
data = list(file)
file.close()
datasource = dict(enumerate(data))
def mapfn(k, v):
import hashlib
total = 0
for num in v.split():
condition = num.isdigit()
if condition:
yield ‘sum’, int(num)
if condition:
yield ‘sumsquares’, int(num)**2
yield ‘count’, 1
def reducefn(k, vs):
result = sum(vs)
return result
s = mincemeat.Server()
s.datasource = datasource
s.mapfn = mapfn
s.reducefn = reducefn
results = s.run_server(password=”changeme”)
sumn = results[“sum”]
sumn2 = results[“sumsquares”]
n = results[“count”]
variance = (n*sumn2 – sumn**2)/float(n**2)
stdev = variance**0.5
print “Count is : %d”%n
print “Sum is : %s” %sumn
print “Stdev : %0.2f”%stdev | https://codeandsmile.wordpress.com/tag/mincemeat/ | CC-MAIN-2020-34 | refinedweb | 945 | 68.91 |
.
Moving and scaling an object according to hand position.
Hi,
I need some help with my college project. I have a cylinder and need it to act as a coil. For example, if I touched the cylinder's surface it's height will decrease (scaled in the y direction) as if pressing on a coil then when I remove my hand it returns back to its original size.
This is what I reached till now but I still have some problems that I can't solve.
public class Deformation : MonoBehaviour
{
Vector3 tempPos; private void InteractionManager_SourceUpdated(InteractionSourceUpdatedEventArgs hand) { if (hand.state.source.kind == InteractionSourceKind.Hand) { Vector3 handPosition; hand.state.sourcePose.TryGetPosition(out handPosition); float negXRange = transform.position.x - transform.localScale.x; float posXRange = transform.position.x + transform.localScale.x; float negYRange = transform.position.y - (transform.localScale.y / 2); float posYRange = transform.position.y + (transform.localScale.y / 2); float negZRange = transform.position.z - transform.localScale.z; float posZRange = transform.position.z + transform.localScale.z; float handX = handPosition.x; float handY = handPosition.y; float handZ = handPosition.z; if ((negXRange <= handX) && (handX <= posXRange) && (negYRange <= handY) && (handY <= posYRange) && (negZRange <= handZ) && (handZ <= posZRange)) { tempPos.y = handPosition.y; transform.localScale = tempPos; } else { tempPos.y = 0.3f; transform.localScale = tempPos; } } } // Use this for initialization void Start() { tempPos = transform.localScale; InteractionManager.InteractionSourceUpdated += InteractionManager_SourceUpdated; }
I attached two scripts to my object (cylinder) the TapToPlace script from the HoloToolKit and the deformation script stated above. The problem is when I deploy to my HoloLens to test, when I place the cylinder first to the needed place then try to deform it after that, it is placed but not deformed. If I tried it the other way around both work. Any ideas why does the deformation script does not work after the TapToPlace one?
The cylinder when viewed by my HoloLens is somehow transparent. I mean that I can see my hand through it. I need it to be more solid.
I wonder if there is something like a delay that I can use because when I use the deformation script stated above the cylinder is scaled to my hand position then scaled back to its default size very fast and appears as if blinking.
At first I place the cylinder on a setup (something as a table for example) then I begin to deform it. When I commented the else part in the deformation script stated above, it was scaled and left stable without returning to the original size. It is scaled symmetrically so its height is decreased from up and down resulting in the base of the cylinder becomes away from the table. I need the base of the cylinder to be always stable and touching the table under it.
Note: I am using Unity 2017.3.1f1 (64-bit) - HoloToolkit-Unity-2017.2.1.3
Thank you in advance.
Answers
@HoloSheep @Patrick @ahillier @james_ashley @DavidKlineMS @Jesse_McCulloch @stepan_stulov @Jimbohalo10 @mark_grossnickle @neerajwadhwa
@NourNabhan
Sorry, I don't use HoloToolkit, have no clue.
Cheers
Building the future of holographic navigation. We're hiring.
@stepan_stulov
Thank you very much.
Hi, you can try my full tutorial on how to move, resize or rotate objects in your HoloLens apps using MR-Toolkit-Unity 2017.2 version (as latest as it gets) | https://forums.hololens.com/discussion/10434/moving-and-scaling-an-object-according-to-hand-position | CC-MAIN-2019-22 | refinedweb | 537 | 59.09 |
SPAM
Discussion in 'Python' started by TBK, Sep 8, 2009.
- Similar Threads
Reducing Spam Associated with Posting to NewsgroupsMicrosoft Communities Team [MSFT], Oct 14, 2003, in forum: ASP .Net
- Replies:
- 0
- Views:
- 624
- Microsoft Communities Team [MSFT]
- Oct 14, 2003
Help Needed with Perl cgi script and spam problemKnute Johnson, Mar 18, 2006, in forum: Perl
- Replies:
- 11
- Views:
- 3,619
- Joe Smith
- Mar 22, 2006
from spam import eggs, spam at runtime, how?Rene Pijlman, Dec 8, 2003, in forum: Python
- Replies:
- 22
- Views:
- 1,050
- Fredrik Lundh
- Dec 10, 2003
Why 'class spam(object)' instead of class spam(Object)' ?Sergio Correia, Sep 7, 2007, in forum: Python
- Replies:
- 7
- Views:
- 557
- Ben Finney
- Sep 18, 2007
- Replies:
- 3
- Views:
- 720
- Lew
- Mar 25, 2008
What is Anti-Spam Filter.(thunderbird spam filter)zax75, Mar 27, 2008, in forum: Java
- Replies:
- 1
- Views:
- 1,336
- Lew
- Mar 28, 2008
SPAM: Why are we getting all the spam ??David Binnie, May 22, 2009, in forum: VHDL
- Replies:
- 2
- Views:
- 570
- Rich Webb
- May 22, 2009
Re: Updated Spam List 2011 - Largest SPAM Collection Over The Net !!!clamz, Jul 15, 2011, in forum: HTML
- Replies:
- 8
- Views:
- 1,048
- clamz
- Jul 16, 2011 | http://www.thecodingforums.com/threads/spam.697466/ | CC-MAIN-2016-44 | refinedweb | 202 | 72.5 |
I have a problem with my program and it has to do with my finding average function. The program goes like this:
The program will grade a series of exams and then print a grade report for students in a course. Input: An instructor has a class of students each of whom takes a multiple-choice exam with 10 questions. For each student in the class, there is one line in the input file. The line contains the answers that student gave for the exam. The input file named "grade_data.dat" will have the following format: line 1: the key for the exam (e.g.) = bccbbadbca
lines 2-n:a set of answers. You know you are done when you get to a line with no data. You will not know in advance how many exams you have to grade and you don't need to store the exam answers in your program. The program is to read the input file and grade each exam and print out the score for that exam. You will also keep track of how many students earned each score (0-10) and print a report after the grading. Output: Here is an example of how the output might appear. You will write the report to an output file named "grade_report.dat"
student 1 - 8
student 2 - 10
student 3 - 1
etc.
Final Report
------------
10 - 4
9 - 2
8 - 3
.
.
1 - 3
0 - 0
high score - 10
low score - 1
mean score - 6.25
So, when I try to calculate the mean of the scores the average gives me random numbers each time I open the output file. Here is the code:
#include <iostream> #include <fstream> #include <string> #include <iomanip> using namespace std; double avgScore (int []); int main(){ // open files ifstream inFile; inFile.open("grade_data.txt"); ofstream outputFile; outputFile.open("grade_report.txt"); // read the answer key string anskey; getline( inFile, anskey ); int scores[11]; // index range is 0..10 int thisScore; int count = 0; string line; while( getline( inFile, line ) && line != "" ) { ++count; // compares answer key to each students score thisScore =0; // initialize loop to 0 for( int i =0; i<10; ++i ) if( line[i] == anskey[i] ) ++thisScore; // updates score counter scores[thisScore]++; // outputs scores to "grade_report.txt" file outputFile << "student " << count << " - " << thisScore << endl; } inFile.close(); outputFile << "\nThe Average Score is: " << avgScore( scores) << endl; system("pause"); return 0; } double avgScore (int ary[] ) { double total =0; int numStudents =0; for (int i =0; i < 11; i++){ numStudents += ary[i]; total+= ary[i]*i; } return total / numStudents; }
Can someone help me? | https://www.daniweb.com/programming/software-development/threads/174005/exam-scores-program | CC-MAIN-2018-30 | refinedweb | 424 | 72.26 |
Files with '.plist' the extension is used by Mac OS X applications to store application properties. The plislib module provides an interface to read/write operations of these property list files.
The plist file format serializes basic object types, like dictionaries, lists, numbers, and strings. Usually, the top level object is a dictionary. To write out and to parse a plist file, use the dump() and load() functions. Serialized byte strings are handled by use dumps() and loads() functions. Values can be strings, integers, floats, booleans, tuples, lists, dictionaries (but only with string keys).
This module defines the following functions −
Following script stores serialized dictionary in plist file
import plistlib properties = { "name" : "Ramesh", "College":"ABC College", "Class":"FY", "marks" : {"phy":60, "che":60, "maths":60} } fileName=open('prpos.plist','wb') plistlib.dump(pl, fileName) fileName.close()
To read plist file use load() function
with open('marks.plist', 'rb') as fp: pl = plistlib.load(fp) print(pl) | https://www.tutorialspoint.com/generate-and-parse-mac-os-x-plist-files-using-python-plistlib | CC-MAIN-2021-43 | refinedweb | 156 | 58.79 |
As per the Jersey documentation,.
HEAD
OPTIONS
OPTIONS
HEAD
The quote you gave answers half of your question:
For HEAD the runtime will invoke the implemented GET method (if present) and ignore the response entity (if set).
So to enable the HEAD method on your enpoint you have two options:
The reason why POST method cannot be used to provide a default HEAD implementation is that POST method is neither safe nor idempotent (as defined in the HTTP standard). This means that if someone calls a POST method they must assume that it will have consequences on the application/resource state. GET and HEAD on the other side are both safe and idempotent so they must not change the state.
To answer the second part of your question - implementing HEAD doesn't differ from implementing other HTTP methods:
import javax.ws.rs.GET; import javax.ws.rs.HEAD; import javax.ws.rs.Path; import javax.ws.rs.Produces; import javax.ws.rs.core.MediaType; import javax.ws.rs.core.Response; @Path("api/ping") public class MyResource { @GET @Produces(MediaType.TEXT_PLAIN) public String ping() { return "pong!"; } @HEAD public Response getHeaders() { return Response.status(200). header("yourHeaderName", "yourHeaderValue").build(); } } | https://codedump.io/share/EcbLUvhcgFnK/1/jax-rs-post-api-not-supporting-head-request | CC-MAIN-2017-51 | refinedweb | 197 | 59.4 |
This is your resource to discuss support topics with your peers, and learn from each other.
07-23-2010 07:57 PM
Hello,
I'm trying to create a simple screen that has to images in it. To do this, I am creating a VerticalFieldManager (although to my understanding the screen itself acts as one) then adding bitmapfields to it on opposite corners. However, I find that only the first bitmap that I add to the screen is displayed. What's going on here? I know the images load properly because I have altered the order in which they appear, and it is always the case that only the first one added loads. Any help is appreciated. Here is my code:
public class MyScreen extends FullScreen {
public MyScreen() {
VerticalFieldManager _mainManager = new VerticalFieldManager();
_mainManager.setBackground(BackgroundFactory.crea
add(_mainManager);
BitmapField _bitmap;
Bitmap _mainmenutop;
Bitmap _cursor;
BitmapField _bitmap2;
_bitmap = null;
_bitmap2 = null;
_mainmenutop = Bitmap.getBitmapResource("mainmenutop.png");
_bitmap = new BitmapField(_mainmenutop, BitmapField.TOP
| BitmapField.RIGHT);
_cursor = Bitmap.getBitmapResource("cursor.png");
_bitmap2 = new BitmapField(_cursor, BitmapField.BOTTOM
| BitmapField.LEFT);
_mainManager.add(_bitmap);
_mainManager.add(new SeparatorField());
_mainManager.add(_bitmap2);
}
}
Solved! Go to Solution.
07-23-2010 08:13 PM
I just wonder if in fact the images are there, but because they are not focusable, the screen will not let you scroll to them. Add a NullField between the images and after the last one (or In fact any focusable Field not just a NullField) and see if that allows you to scroll and see you images.
07-23-2010 08:24 PM
Hi,
I tried your suggestion:
_mainManager.add(_bitmap);
_mainManager.add(new NullField());
_mainManager.add(_bitmap2);
_mainManager.add(new NullField());
But no luck. I also tried it with SeparatorFields and NullFields and no luck either.
-Juan
07-23-2010 09:54 PM
Can you try adding super(NO_VERTICAL_SCROLL) as the first line in your screen's constructor (right after public MyScreen() ) and tell us what you see?
07-23-2010 11:46 PM
Incredibly, the main problem is Field.TOP in your _bitmap!
I knew VerticalFieldManager did not work well with Field.TOP and Field.BOTTOM, but that was really unexpected. I wonder how many experts could predict that...
So remove Field.TOP from the style bits in your _bitmap and make sure you don't use VerticalFieldManager or Screen with VERTICAL_SCROLL (or Field.BOTTOM will send your field down to virtual infinity).
07-24-2010 03:18 PM
and if you were to make Peter's suggestion work for you, you'd want to try something like this
_mainManager.add(_bitmap);
_mainManager.add(new NullField(Field.Focusable));
_mainManager.add(_bitmap2);
_mainManager.add(new NullField(Field.Focusable));
07-24-2010 05:52 PM
Problem solved (removing TOP fixed it.) Thanks for all the help. | http://supportforums.blackberry.com/t5/Java-Development/Multiple-items-in-a-VerticalFieldManager/m-p/552208/highlight/true | CC-MAIN-2014-41 | refinedweb | 463 | 59.5 |
Locally Installed Modules
I want to use modules in my QtQuick application, but I have problems getting it to work.
I make a new project, selects Qt Quick Application, and call it TestProject.
Then I make a new directory in the same directory as the .pro file, calling it "MyComponents".
Add new files "Comp1.qml" and "Comp2.qml" under the "MyComponents" directory.
Following the example in , I make a new qmldir file in the "MyComponents" directory, with the following content:
module TestProject.MyComponents 1.0 Comp1 1.0 Comp1.qml Comp2 1.0 Comp2.qml
I also set the QML_IMPORT_PATH in the pro file to the full path to the directory containing my TestProject folder.
When I now set
import TestProject.MyComponents 1.0
in the main.qml file, it shows the import line fine without a red underline. But running the project always ends in
qrc:/main.qml:4 module "TestProject.MyComponents" is not installed
I am wondering what could be wrong, as I followed the instructions in the Qt Documentation. | https://forum.qt.io/topic/53203/locally-installed-modules | CC-MAIN-2017-51 | refinedweb | 172 | 62.14 |
How do you write to text files WITHOUT OVERWRITING?
Discussion in 'Java' started by javajavalink, Dec 14, 2004.
Want to reply to this thread or ask your own question?It takes just 2 minutes to sign up (and it's free!). Just click the sign up button to choose a username and then you can ask your own questions on the forum.
- Similar Threads
using ofstream to alter file contents WITHOUT overwriting themStewart, Nov 4, 2004, in forum: C++
- Replies:
- 8
- Views:
- 793
- Stewart
- Nov 5, 2004
Deploy without overwriting Web.Configdev648237923, Jan 17, 2007, in forum: ASP .Net
- Replies:
- 4
- Views:
- 525
- Walter Wang [MSFT]
- Jan 23, 2007
Safely renaming a file without overwritingSteven D'Aprano, Oct 28, 2006, in forum: Python
- Replies:
- 11
- Views:
- 474
- Wolfgang Draxinger
- Oct 29, 2006
SOLUTION: Deploying web apps without overwriting config valuesCowboy \(Gregory A. Beamer\), Oct 4, 2008, in forum: ASP .Net
- Replies:
- 0
- Views:
- 341
- Cowboy \(Gregory A. Beamer\)
- Oct 4, 2008
from package import * without overwriting similarly named functions?Reckoner, Oct 24, 2008, in forum: Python
- Replies:
- 6
- Views:
- 278
- Fernando H. Sanches
- Oct 25, 2008 | http://www.thecodingforums.com/threads/how-do-you-write-to-text-files-without-overwriting.139225/ | CC-MAIN-2014-41 | refinedweb | 186 | 65.83 |
The Importance of Covariates in Causal Inference: Shown in a Comparison of Two Methods
1. Introduction
In the last century, scholars across a variety of fields have explored the mathematics of decision-making, or learning to intervene within a system. A challenge that relies on decision-making is fundamentally different, and more complex, than a classification problem. For example, designing a machine to distinguish a photo of a dog from a cat (a classification problem) is different from teaching one to recommend a photo of a dog versus a photo of a cat in order to maximize a key parameter (a decision-making problem).
Decision-making problems become even more complex when we consider that historical observed data result from past decisions, which may have been biased and/or do not convey the whole picture. For example, assume we have observed that carrying an umbrella is highly correlated with the likelihood that a car accident will occur. Can we recommend the intervention “do not carry an umbrella” in order to reduce the average number of accidents? Of course not, considering that carrying an umbrella and the car accidents are both outcomes prompted or caused by the rain.
In principle, a randomized test that is expected to disentangle the independence between the treatment and all baseline variables can solve this issue with biases in observed data. To take up our earlier example, if we carry an umbrella randomly regardless of the weather and then measure the accident rate, we will remove the effect of rain on the outcome.
However, there are many cases in which a randomized test is not feasible—such tests can be very expensive, and, in some circumstances, even unethical. Furthermore, even in an experimental setup, tests will not always go smoothly and thus may require statistical adjustments. One of the biggest challenges with randomized tests is the issue of compliance; for example, people who are assigned to perform (or not perform) a task might not follow the specified procedures. Causal Inference offers a variety of techniques that can help in cases in which either a randomized test is not feasible or the outcome of the test is not reliable.
At Wayfair, there are many business cases which face these challenges, and thus benefit from the use of causal inference techniques. In customer service, for example, we would like to measure the effect of various agents’ behaviors on customer satisfaction, which we assess through a survey sent after each contact. Politeness is one of those behaviors. Clearly, we can’t conduct any randomized tests to assess the impact of this behavior, because it is unethical and impractical to randomly be impolite to customers. Thus we only have access to historical observations to inform our assessments. In observation data, correlation is easy to measure, the challenge here is to estimate how much of the correlation is because of causality.
For this purpose, we need a third variable that explains both control and target variable. Reichenbach’s common cause principle, the basis of causal inference, asserts that this variable always exists. This principle states:
If two random variables X and Y are statistically dependent, then there exists a third variable Z that causally influences both variables. (In a special case Z might coincide with either X or Y) [1]
Though we may not always observe variable Z (which I will refer to as a set of covariates), Reichenbach’s principle claims that it exists, and this claim has been tested by experimental studies in many fields [2]. As I intend to show, the biggest challenge in causal inference is finding this set of covariates Z which influences both the treatment and outcome.
To illustrate the principle discussed above, in this blog post I will compare two known approaches that have been used across multiple domains in academia and industry:
1) Neyman-Rubin’s Potential Outcome Theory (and its variations)
2) Pearlian do-calculus using the Bayesian Network framework
In this comparison, I will first evaluate the effectiveness of each approach and outline the particular challenges of using them. Then I will compare the two methods using synthetic data. The challenges encountered in employing each method should drive home the importance of finding the correct set of covariates to use when conducting causal inference.
2. Approach One: Neyman-Rubin’s Potential Outcome Theory [3]
Assume we have to choose between two decisions (for example, yes/no or on/off). In Neyman-Rubin’s Potential Outcome Theory (referred to hereafter as “potential outcomes”), we design separate models for each of those decisions. To put it in mathematical terms, assume we have a set of variables x which has two components {x0 , x1}, referred to as control and treatment, respectively. Applying this to the umbrella example, we would say that x0 represents “not carrying an umbrella” and x1 represents “carrying an umbrella.” We would then take our observed data and split it into two buckets: x0 those who did not carry an umbrella, and x1 , those who carried an umbrella.
One of the common methods used in conjunction with potential outcomes is called the two model approach. With the two model approach, we try to separate the outcome variable Y into two models; we model Y0 for the control group and Y1 for the treatment group. The objective will be to approximate the “Average Treatment Effect (ATE)” Δ = E [Y1 – Y0]. Applying this to our umbrella example, Y represents whether or not an individual got into an accident. Here, we are looking to see which Z variable (with our umbrella example, daytime versus nighttime, city versus country environment, or perhaps, presence or absence of rain) accounts for the effect of umbrella carrying and getting into an accident.
In order to find the causal impact of X on the variable Y, we need to determine whether the control and target variables are a function of a third variable. Thanks to Reichenbach’s common cause principle, we know a set of variables Z exists that explains both X and Y; in order to model Y1 and Y0 we need a set of variables Z, such that at a given Z, the changes in input won’t change the target variables or X⊥Y0 , Y1 | Z (X and Y0 , Y1 are independent given Z). Applying this fact of independence the joint distribution will change as follows:
The biggest challenge with the potential outcomes approach lies in this assumption: X⊥Y0 , Y1 | Z. Our estimation of causal inference heavily depends on exactly how well this assumption holds, but unfortunately because we never observe Y0 and Y1 for the same sample, we cannot test the assumption. We are further limited in our evaluation by our insufficient knowledge of the system. The fact that this method does not provide any tools or instruction for detecting covariates Z correctly makes it vulnerable to error. The one condition that Rubin mentions in his paper (see [2]) is that no variable in Z should be chosen after the treatment is applied. In practice this is difficult, as it is sometimes hard to evaluate the relative timing of each variable when using observational data. I will show in section 5 that even if we select a set of variables Z ‘ that contains the correct covariates as a subset (i.e. Z ⊂ Z ‘) there is a chance that we will still not be able to predict the intervention correctly.
When implementing the two model approach as a part of potential outcomes, one can use multiple methods for constructing Y0 and Y1 using Z. The simplest way is to assume Y0 and Y1 are linear models of Z and that X ∈ {0,1} will not appear in Y0 . Other methods, including matching techniques, are also commonly used to calculate causal inference using potential outcomes. This can be achieved using various distance metrics including propensity matching, Mahalanobis matching, and genetic matching. I recommend taking a look at this amazing blog [4] written by Iain Barr for a deeper dive and better detailed explanation of various techniques. I have used some of his code in my analysis below (section 5) as it is written very clearly and it perfectly serves the purpose of this blog.
While there are ways to implement other methods to compensate for this approach’s lack of detection methods for the correct covariates, the fact that this common and popular method is particularly vulnerable to this failing emphasizes the extreme importance of finding the right covariates for causal inference.
3. Approach Two: Pearlian Do-Calculus
Now let’s examine another approach to causal inference. At the core of the Pearlian causal inference method is the construction of a network between our variables (this is the “Bayesian network”) and the definition of a set of variables called “back-door” variables. In the following section, I will focus on introducing these concepts and then we’ll see how they are connected to causal inference.
3.1 Bayesian networks as an I-Map (Map of Independence)
A Bayesian network is a directed acyclic graph, meaning the connections between nodes have directions and there are no cycles within the graph. It is a map that represents independence across a set of variables. In a Graph G, with {xi : i = 1, 2, …, n} as the sets of variables, we can write the probability joint distribution as follows:
in which pai refers to the parent of xi.
Figure 1. An example of a Bayesian Network.
Let’s take a look at Fig. 1, in which the parents of node X1 includes pa1 = {X4 , X5}. The joint distribution based on the graph in Fig. 1 can be written as follows:
In other words, each child-parent family in a graph G represents a deterministic function:
in which ∈i (1 ≤ i ≤ n) are mutually independent, arbitrarily distributed random variables.
In a graph, a path that connects two variables is called a trail; a trail is active when X and Y are not independent given the connecting variables. If there is not a direct edge between X and Y, they are connected through a third variable Z by one of the following situations:
Figure 2: All possible combinations for three connected variables in a graph
A. Indirect Causal Effect (Fig. 2a): X → Z → Y The trail is active if and only if Z is not observed
B. Indirect evidential effect (Fig. 2b): X ← Z ← Y The trail is active if and only if Z is not observed
C. Common cause (Fig. 2c): X ← Z → Y The trail is active if and only if Z is not observed
D. Common effect (Fig. 2d): X → Z ← Y The trail is active if and only if either Z or one of Z’s descendants are observed.
As shown in Fig. 2, one can observe the independence and conditional independence between different variables by looking at a given Bayesian Network. In order to give greater form to concept, it is important to understand a few definitions along the way. First we must learn about “D-separation.” This concept will help us find more important variables which will make the target and the control variables independent.
D-Separation: Let X, Y, and Z be three sets of nodes in G. We say that X and Y are d-separated given Z if there is no active trail between any node x ∈ X and y ∈ Y given Z.
Next, we need to define “back-door variables.” This definition is so important, we will dedicate a whole section to it!
3.2 Back-door variables
We discussed the importance of the covariates in the previous sections. In Bayesian networks, we can detect the right covariates using the connections between variables. The following definition is one of the main types of variables that can be used for intervention calculation.
Back-door criterion: [5]: A set of variables Z satisfies the back-door criterion relative to an ordered pair of variables (Xi , Xj) in a directed acyclic graph G if:
(i) no node in Z is a descendant of Xi, and
(ii) Z blocks every active trail between Xi, and Xj via the parents of Xi
Figure 3. An example taken from [4] to explain the back-door criterion.
As an example, in Fig. 3, the following sets of variables are matching with back-door criteria for the sets of (Xi , Xj):
After identifying a set of back-door variables, we can easily calculate the intervention. The following theorem provides the mathematics of how everything comes together and shows the role of the back-door variables:
G-Formula Theorem: if a set of variables Z satisfies the back-door criterion relative to (X, Y) then:
Basically, when we identify the back-door variables, we can calculate the intervention through the equation above (hereafter referred to as Eq. 1). I will show later that we only need one set of the back-door variables in order to calculate the intervention. In the case that we have multiple sets of back-door variables, the answer will be the same if we use any of the sets.
So, let’s apply the Pearlian do-calculus on our umbrella example. The Bayesian network would probably look like this:
Figure 4. The graph for the umbrella problem.
The variable Rain, is our back-door variable for the setting of umbrella as the control and accident as the target variable. Looking at Eq. 1 (of course we could answer with certainty if we had the actual data), Rain will be the z variable, Umbrella will be the x, and the probability of being in an accident will be y. Intuitively we know that umbrella and accident are independent variables so p (accident | umbrella, rain) will be completely explained by rain i.e. p (accident | umbrella, rain) = p (accident | rain) because umbrella ⊥ accident | rain.
As you can see, the Pearlian do-calculus model does provide methods for detecting the correct covariates, improving on the potential outcomes approach. However, it still presents challenges of its own which we will explore in our tests of these approaches below.
4. Making Synthetic Data
Now that we understand the pros and cons of these two common approaches, let’s see how they play out in practice. In order to put these theories to the test, first, I need to create some synthetic data. This approach will help me to control the distribution of both input and their relationship to the output of the system. I can also generate randomized tests for the control variable that I would like to test. Consequently, I can produce the data in which the inference is very different from the observation; most importantly, I will be able to know exactly how much the causal inference of my control variable is on the target.
I have used the following python packages to make this tutorial; most of them are standard python packages, “causalgraphicalmodels” and “causalinference” are two specific ones that I use for finding the back-door variables and also calculating the various potential outcomes model.
import numpy as np import seaborn as sns import itertools import statsmodels.api as sm from scipy.stats import norm import causalgraphicalmodels as cgm import causalinference as ci
Note: the following function must be added to causalgraphicalmodels/example.py
cie_example = StructuralCausalModel({ "z1": lambda n_samples: np.random.normal(size=n_samples), "z2": lambda n_samples: np.random.normal(size=n_samples), "z8": lambda n_samples: np.random.normal(size=n_samples), "z9": lambda n_samples: np.random.binomial(1, p=0.01, size=n_samples) *\ np.random.normal(size=n_samples) +\ np.random.normal(loc=0.1, scale=0.01, size=n_samples), "x": logistic_model(["z1", "z2"], [-1, 1]), "z3": linear_model(["x"], [1]), "y": linear_model(["z3", "z5", "z9"], [3,-1, 30]), "z4": linear_model(["z2"], [-1]), "z5": linear_model(["z4"], [2]), "z6": linear_model(["y"], [0.7]), "z7": linear_model(["y", "z2"], [1.3, 2.1]) })
After updating example.py, we have our data generator ready:
cie = cgm.examples.cie_example cie_cgm = cie.cgm cie_cgm.draw()
Figure 5. The Bayesian network that is used for synthetic data.
data = cie.sample(n_samples=100000) data.head()
Table 1. A sample of the synthetic data.
Fig. 5 shows the Bayesian network that shows the connection of the variables. Table 1 just shows an example of the data. I sampled 100,000 times from the network shown in Fig. 4, equivalent to the number of rows of observation.
In order to simulate the randomized test, we can simply use the same network and remove all the arrows going into node x ; i.e. in our graph, the value of x is controlled by variable z2 but we would like to randomly change the value of x. Thus, by detaching the connection between z2 and x (removing the effect of z2), we can randomly assign values to our control variable x.
cie_cgm.do('x').draw()
Figure 6. The Bayesian Network for the randomized test.
As Fig. 6 illustrates above, the structure of a randomized test can be achieved by simply removing all of the connections from the variables (z1 and Z2) that control variable x.
randomized_test = (cie .do('x') .sample(n_samples=100000, set_values={"x": np.random.binomial(p=0.5, n=1, size=100000)}))
sns.distplot(randomized_test[randomized_test.x==1].y.clip(-25, 30), label='Treatment') sns.distplot(data[data.x==1].y.clip(-25,30), label='Observation for Treatment') sns.distplot(randomized_test[randomized_test.x==0].y.clip(-25, 30), label='Control') sns.distplot(data[data.x==0].y.clip(-25,30), label='Observation for Control') plt.ylabel('Probability Density') plt.legend()
Figure 7. Probability density function for y conditioned on x ∈ {Treatment=1, Control=0} for both randomized tests and observations.
As shown in Fig. 7, there is a slight difference in the uplift of variable in x within observational and the results of the randomized test. The actual difference can be calculated as follows:
ATE = randomized_test[randomized_test.x==1].y.mean() - randomized_test[randomized_test.x==0].y.mean() ATE_obs = data[data.x==1].y.mean() - data[data.x==0].y.mean() print(f"Actual ATE:{ATE}") print(f"Observed ATE: {ATE_obs}")
The observation estimates the value of the intervention almost 50% wrong (48%). In the following sections, I will use various techniques to estimate the “Actual ATE” using only observational data.
5. Testing the Potential Outcomes Method
So let’s put the potential outcomes theory to the test. We just generated the data and we are ready to see how far we can go with this approach. As outlined in the description of this theory above, potential outcomes can be a very powerful approach if one knows the correct covariates. I will estimate the ATE within the potential outcome framework mostly using two model approaches using various sets of covariates. The first set I will try with all available observed covariates in my data. This practice will show that even if the correct covariates are a subset of the selected variables, the estimation still will be unsuccessful!
For the second set, I will choose the right set of covariates. I will do this by borrowing from Pearlian do-calculus and choosing from the back-door variable sets. In this exercise, I am trying to show that when the covariates are correctly selected, all the variations of modeling in potential outcomes converge to the same solution. Although this cannot be generalized to every problem, I would like to emphasize the importance of choosing the right covariates compared to making a more complex model.
5.1. All variables as the covariates
zs = [c for c in data.columns if c.startswith("z")] cm = ci.CausalModel(Y=data.y.values, D=data.x.values, X=data[zs].values) cm.est_via_ols() cm.est_via_matching() cm.est_propensity_s() cm.est_via_weighting() cm.stratify_s() cm.est_via_blocking() print(cm.estimates)
y = [] yerr = [] x_label = [] for method, result in dict(cm.estimates).items(): y.append(result["ate"]) yerr.append(result["ate_se"]) x_label.append(method) x = np.arange(len(y)) plt.errorbar(x=x, y=y, yerr=yerr, linestyle="none", capsize=5, marker="o") plt.xticks(x, x_label) plt.title("Estimated Effect Size", fontsize=18) plt.hlines(ATE, -0.5, 3.5, linestyles="dashed") plt.xlim(-0.5,3.5);
Figure 8. The dashed line shows the actual ATE from a randomized test.
The estimation of the ATE using various techniques and all the variables at the covariates is shown in Fig. 8; in this figure the dash line is the actual value. The results clearly show how off the mark the estimation is, and how these results didn’t substantially improve when we introduced more complex modelling techniques to our calculations.
5.2. Z2 as the covariates
I will show in the next section how we can choose the right covariates using the back-door variable definition, but for now assume that I magically select Z2 as the covariates.
NOTE: the code is pretty much the same for generating the outputs the only change is that the variable “zs” must be replaced as follows:
zs = ['z2']
Figure 9. The dashed line shows the actual ATE from a randomized test.
The results shown in Fig. 9 confirm our hypothesis regarding the importance of the covariates versus the complexity of the models. The graph shows that the true value of the actual ATE is within the range of the estimations of the four methods with which we experiment. In this case, the complexity (nonlinearity) of the methods did not add any value to the accuracy of the final output.
This test suggests that the potential outcomes approach is an effective technique which provides different methods to estimate the causal inference. The only challenge is that potential outcomes does not provide a clear instruction for choosing the right the covariates.
6. Testing the Pearlian Do-Calculus Approach
6.1. List of back-door variables
In the potential outcomes section, we discovered that choosing the covariates correctly is a crucial step. Bayesian networks provide the tools and facilitate this process through the selection of so-called ‘back-door variables.’
Referring to section 3.2, the “causalgraphicalmodels” package in python provides the list of back-door variables for a set of control-target variables for a given graph.
cie_cgm.get_all_backdoor_adjustment_sets("x", "y")
I have presented all the back-door variable sets above; you can use these as an exercise to to test if you can find the same list or at least can find the rationale behind each of these sets.
If we choose any set from this list, potential outcomes can estimate the ATE correctly (I have shown this with only one set).
6.2. Using Eq. 1 for calculating the ATE
Eq. 1 (copied below for convenience) requires the joint distribution of a set of covariates Z. We also need the P (Y|X = x, Z = z) which is basically a function that maps Z and x to Y.
In our case, Z = {Z2} which simplifies the joint distribution to a single probability density function; I will make two OLS models with Z2 separately for X = 0 and X = 1.
Let’s make the first OLS for the observational data conditioned on X = 1.
# The X_ is the covariates while X==1 Z_ = data[data[X]==1]['z2'] Y_ = data[data[X]==1]['y'] # Add a constant in order to have an intercept in OLS Z_ = sm.add_constant(Z_) # Fitting the OLS model y_x1_z = sm.OLS(Y_, Z_) y_x1_z_model = y_x1_z.fit()
I will make the same OLS for X = 0
Z_ = data[data.x==0]['z2'] Y_ = data[data.x==0]['y'] Z_ = sm.add_constant(Z_) y_x0_z = sm.OLS(Y_, Z_) y_x0_z_model = y_x0_z.fit()
In order to estimate the P (Z2), I assume a normal distribution with the µ , σ calculated from the observation. Of course, in more general cases one should calculate P (Z) using more elaborate techniques.
mu = data[Z].mean() std = data[Z].std() z_ = data[Z].values p_z = lambda z: norm.pdf(z, mu, std)
So we have all the ingredients ready! Let’s calculate the P (Y |do (X = x))
# probably the better practice is to estimate the Y standard deviation as a # function of z2. For now I assume the std won't change from observation to # intervention (which is not the case) y_std = data[Y].std() def p_y_x_z(z, y, model): return norm.pdf(y, model.predict([1, z]), int(y_std)) * p_z(z) def p_y_dox(ys, x): if x==1: model = y_x1_z_model elif x==0: model = y_x0_z_model return np.sum(v_p_z(z_, ys, model)) # np.vectorize helps me to pass a numpy array to a function that receives scalar v_p_z= np.vectorize(p_y_x_z) v_p_y_dox = np.vectorize(p_y_dox) # Because I'd like to calculate the whole density function of P(Y|do(x)) I have to # calculate the Eq(1) for different y. For calculating the ATE I don't need # to save all of these values. yy = np.linspace(-20, 30, 60) py_dox1 = v_p_y_dox(yy, x=1) py_dox0 = v_p_y_dox(yy, x=0)
By using Z2 as the covariates:
Figure 10. The estimation of probability distribution of treatment and control effect using Z2 as the covariates.
By using Z5 as the covariates:
Figure 11. The estimation of probability distribution of treatment and control effect using Z5 as the covariates.
There are two important points illustrated in Fig. 10 and Fig. 11. First, choosing the right covariates can estimate not only the ATT or ATC, but also it can predict the whole probability distribution of the inference. Second, we have used two completely different sets of variables and as far as they fit the back-door variable criteria the results will be the same.
6.3. Learning the structure of Bayesian Networks
As we have seen so far, as opposed to the potential outcomes method, the do-calculus approach provides a powerful tool to find the relative covariates for estimating the intervention. The question remains as to how difficult it will be to estimate the network architecture, which is the crucial step for finding the “back-door” or “front-door” variables. Luckily, many structural learning algorithms have been created for this purpose. Providing a deep review of these important tools is unfortunately out of the scope of this blog. As such, I would like to show an example of an existing package in R called bnlearn that has implemented most of those algorithms efficiently, is open source, and it is relatively easy to use.
The equivalent in Python package is called pgmpy, however the network learning modules in this package are much slower than those in bnlearn. So, for now I will try to show that technically it is possible to get the network from the observed data using bnlearn.
# ryp2 helps to run R modules in python from rpy2.robjects.packages import importr import rpy2.robjects.packages as rpackages import rpy2 # importing bnlean bnlearn = rpackages.importr('bnlearn') readr = rpackages.importr('readr') # importing the data that we generated in the previous section as csv dtrain = readr.read_csv('data.csv') # Hill Climbing is the most common algorithm for structural learning dag = bnlearn.hc(dtrain) print(dag)
Bayesian network learned via Score-based methods:
Model:
[z9][z8][z2][z1][z4|z2][x|z2:z1][z5|z4][z3|x][y|z9:z5:z3][z6|y][z7|z2:y]
nodes: 12
arcs: 11
undirected arcs: 0
directed arcs: 11
average markov blanket size: 2.67
average neighbourhood size: 1.83
average branching factor: 0.92
learning algorithm: Hill-Climbing
score: BIC (Gauss.)
penalization coefficient: 5.756463
tests used in the learning procedure: 187
optimized: TRUE
In this example, bnlearn manages to learn the structure perfectly even though our data includes discrete and continuous variables which technically is called a hybrid network.
Conclusion
In this blog I have tried to show the challenges inherent in causal inference calculation, which ultimately relies on finding the right covariates. The potential outcomes approach provides better tools for calculating causal inference, but it does not provide a suitable path to find the right sets of covariates. On the other hand, the Pearlian do-calculus method can provide (at least in our example) the right tool for estimating the correct sets of variables, but presents a challenge in determining the graph structure.
One assumption I have made is that we have measured or observed the confounding variables within our data, which is not always the case. I will dedicate my next blog to the estimation of causal inference in situations where we have not observed the confounding variable and must approximate it first.
References
- Reichenbach, Hans. The direction of time. Vol. 65. Univ of California Press, 1956.
- Peters, Jonas, Dominik Janzing, and Bernhard Schölkopf. Elements of causal inference: foundations and learning algorithms. MIT press, 2017.
- Rubin, D. B. (2005). Causal inference using potential outcomes: Design, modeling, decisions. Journal of the American Statistical Association, 100(469), 322-331.
- Barr, Ian. Causal Inference With Python,
- Pearl, J. (1995). Causal diagrams for empirical research. Biometrika, 82(4), 669-688. | https://tech.wayfair.com/2020/03/the-importance-of-covariates-in-causal-inference/ | CC-MAIN-2020-29 | refinedweb | 4,815 | 54.52 |
Post your Comment
Access the User's node and Retrieve the Preference Information in Class
will learn how to access the user's node and retrieve the
stored information... of the
access user's preference node and and retrieving all information... that how a Preference node can
be access using a class object or a string
Creating a Preference Node
Node and System
Preference Node. Here we are going to create a user preference... preference node for the
calling user and node("/Roseindia Employees"... Creating a Preference Node
the Parent and Child Nodes of a Preference Node
are going to retrieve the Parent and Child nodes of a
Preference Node. You can see... Retrieving the Parent and Child Nodes of a Preference Node...(java.lang.String.class)
to get the Preference node. After that the method prefs.parent
Saving and Retrieving The Preferences
will learn about the way to retrieves the user preference node using the same
class and saves and retrieves a preference in the node.
Here an example... must be a string and the preference values are stored in a
node. The preference
Finding a Preference in a Preference Tree
the preference of Preference
node in a Preference Tree.
You can see in the given example that we are retrieving the keys and values
of the Preference node 'Java...
Finding a Preference in a Preference Tree
Determining If a Preference Node Exists
Determining If a Preference Node Exists
... Preference node
exists or not.
A preference node can be created automatically...(), or Preferences.systemNodeForPackage().
To avoid creating a node, you should first
Exporting the User Type Preferences in a Subtree of Preference Node
Exporting the User Type Preferences in a Subtree of Preference Node... will learn that how a user can access the User preferences.
Here we are going... of the user as root type="user" after that it display
the node name
Node class
Node class hii,
What is a node class?
hello,
A node class is a class that has added new services or functionality beyond the services inherited from its base class
Determining
Getting and Setting Java Type Values in a Preference
and get the Java Type values in a
Preference.
As you know that a preference node holds only string values. Therefore the
Preferences class provides some...
Getting and Setting Java Type Values in a Preference
How to retrieve URL information
How to retrieve URL information
Here we are going to explain a method for retrieve all
the network information for a given URL. In the given below example we use the
URL
Getting Methods Information of a class
to retrieve information of all methods of a class (that included
in the program...
Getting Methods Information of a class
... of the getMethods() method in
more detail.
Create an object of class. Now retrieve all
Preferences Overview
the user and system configuration data. In the
preference class it can call... the path for applications to store and
retrieve the preference and configured data.../
node in the user preferences tree. If that node does not exist
information updations
information updations I have created the following interface that allows updations to customer information:
public interface validateInfo
{
public void validate(String empcode, String password);
}
class updateInfo implements
Overview of Networking through JAVA,How to retrieve URL information
How to retrieve URL information
Here we are going to explain a method for retrieve all
the network information for a given URL. In the given below example we use the
URL
Retrieve information for Folder structure - Design concepts & design patterns
Retrieve information for Folder structure Hello Friends,
I want to write program which can retrieve information of Folders structures and Files... org.apache.commons.net.;
public class FTPLocation
Getting Fields Information of a class
Getting Fields Information of a class
...; you
will know how to retrieve the specific field of any class by using...;Rectangle" and then by
invoking getClass() method we will have a class object
Java--Xml Modify Node Value - Java Beginners
Java--Xml Modify Node Value I have the following xml.
test_final_1
2009-025T13:23:45
B2B
.
I want to retrieve the values... org.xml.sax.*;
import org.jdom.xpath.*;
public class DOMElements{
static public
Java Xml -Node retrieval - Java Beginners
Java Xml -Node retrieval I have the following xml
test_final_1
2009-025T13:23:45
B2B
In the above xml,I want to retrieve... java.text.SimpleDateFormat;
public class XMLParser{
Date date=new Date();
public
Using get method in preferences
. Every
value of the preference data stored using the Preferences API is stored in a
particular node in one of the two preference trees and has a String
key value with it which must be unique within that particular node of the
tree
List Information about folder
List Information about folder
This Example shows you how to show folder information.
In this code we have used method of Folder class to retrieve information about
Getting information about Constructor
Getting
information about Constructor
In this section you will learn,
how to
retrieve the information... that provides the
usage of the getConstructors() method.
Declare a class "
Xml append node problem
Xml append node problem print("code sample");Question:
I create...();
System.out.println("111 ---> "+stringWriter.toString());
Node nds... to Node.
Output:
111 ---> <?xml version="1.0" ?><cwMin>31<
Post your Comment | http://roseindia.net/discussion/20082-Access-the-Users-node-and-Retrieve-the-Preference-Information-in-Class.html | CC-MAIN-2013-20 | refinedweb | 881 | 54.12 |
SUA Deprecated In Windows 8? 226
An anonymous reader writes "I just tried to install Subsystem for UNIX-based Applications (SUA) on Windows 8 Preview and found that it's marked as DEPRECATED: 'Subsystem for UNIX-based Applications (SUA) is a source-compatibility subsystem for compiling and running custom UNIX-based applications and scripts on a computer running Windows operating system. WARNING: SUA is deprecated starting with this release and will be completely removed in the next release. You should begin planning now to employ alternate methods for any applications, code, or usage that depend on this feature.'"
Metro (Score:3, Funny)
Re:Metro (Score:4, Insightful)
It's console-only, actually. It's just something that runs on the POSIX subsystem that NT provides, but it really sucks. First, it's an ancient POSIX interface. Second, it's command line only - it's not like you get X or anything. Third, well, you lose access to Win32 (can't cross subsystems).
If's really a checkbox item, just like how NT had the ability to run OS/2 programs too.
If you're porting a Unix app to Windows, you don't use SUA. You use a Unix-to-Win32 porting library (of which Cygwin is just one), just like how Windows apps can be ported to Linux using WineLib or its commercial equivalents as well.
Hell, a Cygwin program at least can mix Win32 API calls with POSIX calls, because Cygwin maps POSIX calls to Win32.
Re: (Score:2)
Cygwin (Score:5, Insightful)
Re: (Score:2, Troll)
Commercial customers that require production-quality platforms with enterprise-level support probably will avoid Cygwin.
Re: (Score:2, Informative)
How about this?
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Mainly because SUA is faster, since there's no translation to Win32 - it sits directly on top of the kernel.
Re: (Score:3, Funny)
The installer for Cygwin is extremely simple and intuitive to use, andmakes remote, unattended installs a breeze,so you'll have no difficulties there.
Re: (Score:2)
Re:Cygwin (Score:4, Informative)
Look, I love Cygwin and have been using it since forever. But it's pretty slow at a lot of crucial operations, making it unsuitable for a large class of things folks use SUA for.
More importantly, it suffers from a serious lack of manpower and direction. For a project which is so vast and so important to open source, it has alarmingly few active maintainers. The lack of maintainers is made worse by the fact that a considerable amount of maintainer effort is duplicated between cygports and the official cygwin distribution.
Everybody uses cygwin but as far as I can tell very few people pay RH for cygwin support, and thus there are AFAIK only three people who are paid for their work on cygwin.
The lack of manpower really shows. Crucial packages go for long periods without important bugfixes, and new releases take a long time to get ported&integrated from upstream. Development on the cygwin core is fairly slow. NT-based versions of Windows offered quite considerable benefits over Win9x (lots of additional capabilities and much less of a mismatch with POSIX -> better security and performance), but the first version to really take advantage of these benefits was 1.7, released for Christmas 2009 [slashdot.org]- 7 years after the majority of users (much less the majority of technical users likely to use cygwin) had made the switch. The developers had their first serious discussion about the possibility of a 64-bit version of Cygwin in June of this year; it will likely be quite a while before a 64-bit version is released. A lot of cygwin's performance problems could be fixed if the core developers weren't already overburdened as it is.
Unless cygwin can attract a lot of new developers I don't think the project can stay up-to-date enough to continue to support the uses we all already rely on it for, much less be in a position to give SUA emigres a soft landing.
What packages are so slow to update? (Score:3)
All I really use Cygwin for is a bash script interpreter. It's done a fine job of that, though it does take an abominable amount of time to start a console window.
It lets me write cross-platform database installation scripts for *nix and Windows, but to be honest, that's about all the use I have for it at this time.
I haven't even bothered updating the install in over a year. Why bother? It works.
Who uses that anyway? (Score:4, Informative)
Cygwin or UnxUtils [sourceforge.net] work great.
Re: (Score:2)
UnxUtils hasn't been updated since 2003. I'm not sure when SUA has been updated but it can't be too far behind.
Re:Who uses that anyway? (Score:4, Informative)
Cygwin? (Score:2)
I think Cygnus Solutions solved your problem about 20 years ago.
Re:Cygwin? (Score:4, Informative)
Have you ever actually tried to use Cygwin as a *nix-compatibility layer in a production environment. The word "kludge" doesn't seem to begin describe it.
Re: (Score:2)
I agree with you entirely - but the author is stupid enough to be relying on SUA so rather than that statement that "it is a kludge" I'd say it's a question of "is it a better kludge?".
Re: (Score:2)
Yes. Where I work, we have a production load and test process for some of our hardware that depends on cygwin. We have had cygwin cause problems for us only once, and that happened to be on a non-production station. (Production stations had all their files on the local hard drive - this machine was performing some operations on LAN shares and the Cygwin issue was related to LAN shares.)
Now the Xilinx ISE toolchain that was being called from our scripts hosted in Cygwin... ugh that's a whole other story.
Re: (Score:2)
And cygwin is massively crippled compared to *nix. Wherever possible, I try to get native user land ports, and if I need more advanced things, well, there happen to be a number of open source *nix-like operating systems out there. It's one thing to want to run sed on your Windows box, quite another to basically try to port over the entire *nix environment.
Re: (Score:2)
Re:Cygwin? (Score:5, Interesting)
Re:Cygwin? (Score:4, Interesting)
100% agreed. The commercial alternative - MKS Toolkit [mkssoftware.com] - integrates seamlessly with Windows, and is both more complete and faster than Cygin. Yes, it costs money, and no, it is not open source - but if you need to do Unix-like stuff on Windows, it actually makes life tolerable.
But Unix-like stuff itself is not tolerable, which is why it has to be reimplemented with GNU, Linux, Cygwin and other free software.
For instance, how does the vi editor in MKS stack up to Vim? If the following link gives a more or less complete manual, it's freaking pitiful: [mkssoftware.com]
Why would I pay money for that stuff if I would end up compiling GNU coreutils, bash, and other packages?
projects depending on this (Score:3)
Well, I actually failed to find a project that would really depend on SUA. Anyone knows about anything that would be harmed by this change?
Now you have it, now you dont. (Score:2, Insightful)
Re: (Score:2)
I still rue the day they dropped proper MS-DOS from Windows. So much great software developed in that environment, you'd think they'd still be updating it today.
Re: (Score:2)
We relied on FoxPro from the 2.5 DOS version through the upgrade path to Visual FoxPro6.0. Then, MS announced on the UniversalThread VFP forum that they were dropping VFP and set up classes on that forum to teach
.NET, the "new" developer paradigm. They bestowed "MVP" badges to prominent VFP forum members who then played the .NET flutes that led the children out of the VFP villages. The outrage among a large number of the approximately 300,000 VFP developers, world wide, was palatable. MS was t
Re: (Score:3)
Hello, I am billions of dollars of enterprise backend software written in C# and
.net. Can you please explain to me how Microsoft is going to phase out C# and convince the millions of C# developers to rewrite their enterprise software in HTML5 and Javascript?
Can you explain to me how future versions of SQL Server, Exchange, Sharepoint, etc, are going to be written in HTML5?
Re: (Score:2)
He can't. It's just something he posts over and over, despite being unable to support it. He's been called out on it many times.
Re: (Score:2)
most recent was silverlight.
Silverlight wasn't really killed in the niche where it was actually being used (for intranet apps, rather than as a Flash killer) - it just got a facelift. If you actually look at Win8 UI framework, it's basically Silverlight reimplemented in native code with a bunch of namespaces renamed. You can port a Silverlight app to Win8 in a few hours at most.
.net people were already going amok in their community forum, all ablaze due to rumors tied to win 8.
"Rumors" being the key word here. Going from "you can now write apps in HTML/JS" to ".NET is dead" was quite a stretch, but there were enough people willing t
SSO (Score:2)
Windows itself seems close to being deprecated (Score:5, Interesting)
Is this a shock to anyone after The Week of Windows 8 Hype? If there was a theme running through all of the stories it was this: why would they have any interest in maintaining a UNIX command line layer?
Win32 (and UNIX more so) isn't going to lend itself to the sort of app store lockdown Microsoft is moving to. If you have a choice of buy Win32 apps/games at Walmart/Gamestop and Microsoft gets no taste of the action or buy everything at the App Store and give Microsoft 30%, which do you think they are going to 'nudge' you toward? And by 'nudge' I mean turn your PC into an iPhone with hard crypto locks and remove all options that do not let them rake off their 30 points.
Re: (Score:3, Interesting)
>
I think that's going to last about 6 months after Win8 release, and then they're going to realize that early adopters are putting keyboards and mice on their tablets and struggling to re-enable
Re: (Score:2)
That's like taking off in an airliner with holes in the wings and hoping that the stewardesses will pass out parachutes.
Re: (Score:2)
Some have stated that Win8 is stated to be a failure already as far as x86 machines go, based on the fact that companies waited 10 years on XP and skipped Vista, and are only now moving to W7. Companies won't be going to W8 anytime soon. Consumer PC purchases in the windows market are down, so who's actually going to go with W8? Tablets and phones seem about all that's left, and they're not running x86. (That means no W7 interface on those devices)
Re: (Score:2)
The old rule was that there was always two reasonable windows releases for every bad one (win95, win98; winME), (win2000, WinXP, Vista). Now the rule appears to be every other release is going to suck. So though Win7 is ok, win8 appears doomed. That metro interface isn't going to fly in the corporate world where people are trying to get real work done.
Re: (Score:2)
"I think that's going to last about 6 months after Win8 release, and then they're going to realize that early adopters are putting keyboards and mice on their tablets and struggling to re-enable the traditional desktop"
You do realize that Windows 8 is for both desktops/laptops and tablets? Tablet users use keyboards as well (witness the sale of keyboards for the iPad); the Metro interface is just a touch-friendly environment. It takes all of one registry edit to kill it permanently and switch back to all-tr
Re: (Score:3)
> Win 8 - it's gonna suck, right?
Yep. It's inevitable. I think the even/odd thing is that in release (a) we try new stuff, and in release (b) we fix/withdraw it, and then in release (c) we try new stuff again, so it tends to devolve into even=suck odd=less_suck. Or the other way around, depending on if they start at "1" or "0".
Re: (Score:3)
a traditional Windows desktop will be available (certainly on x86, perhaps on arm) for those who are determined enough to figure out how to reenable it
"Determined enough"? You mean, like locating a huge (2x1) tile labeled "Desktop", with Win7 wallpaper for the background picture, right at the home screen?
Re: (Score:2)
"for those who are determined enough to figure out how to reenable it"
Hint: Click on the tile marked "Desktop".
photo shop is a big windows and mac app that will (Score:2)
photo shop / the adobe CS pack is a big windows and mac app that will not work good with a touch based ui (maybe for a photo shop light), also it has lot's of 3rd party plug ins' (I hear that the MS app store is more open to that then the apple one). Also big screens and dual or more helps with it as well.
AutoCAD is a other high end app that needs a good input, good CPU + GPU, big screen / dual or more screens and there even high end mouses for it [chipchick.com]
Re: (Score:2)
"That may be Microsoft's plan, but it's a real loser for expensive specialty software. At my work, we have plenty of technical apps that cost more than the Windows machine they're running on, even though they require fairly hefty hardware. There's no way a company writing a $10K app is going to be willing to hand over $3K to Microsoft to get it on their appstore."
If you think that company is going to re-write their "$10k app" that requires "hefty hardware", in HTML5/Javascript, you've got a screw loose. Met
Re: (Score:2)
Metro apps can be written in native C++,
.NET, or HTML5/JS. So, porting the guts of just about any app to metro is trivial. I've been working on the developer preview with our application, and the integration with the old code is trivially easy. It's the writing of a new interface that poses difficulty. I could see photoshop moving its browser to Metro, or Autocad providing a metro-style viewer - a full screen touch-based Autocad viewer could be pretty cool on a tablet actually.
This is (Score:5, Insightful)
If you look at the kind of work Microsoft has put into the Linux kernel recently relating to Hyper-V... [lwn.net].
Re: (Score:2)
Re: (Score:3)
Microsoft, if you're reading this, provide a real note app too al-la
Re: (Score:2)
They didn't put that "work" into their code voluntarily. They were forced to do it because they were in violation of the GPL.
Re: (Score:2).
SUA is pants (Score:2)
I think this is yet another indication that SUA is pants and everyone should be using Cygwin.
Re: (Score:2)
I think it's another indication that anyone who trusted MS to support functionality that didn't directly benefit MS was a damned fool. When it comes to Microsoft, the only way to win is to not play. i.e. Don't buy their stuff, ever.
The problem, for those asking, is customers (Score:2)
Windows-only shops may tolerate you insisting on SUA, because it's a Microsoft product.
Start talking about CygWin or VMs and their eyes glaze over, they suck their thumbs, and moan "Wasn't on my MCSE, hippies will eat me, wasn't on my MCSE, hippies will eat me."
I know that there's not really any significant difference in support terms (other than not getting the flakey almost-POSIX and BSODs that continue to burden SUA), and that they'd be better off switching to a native POSIX environment anyway, but t
Re: (Score:2)
Those people are all going to die of acute boneitis when they first clap eyes on Metro anyway.
SUA is already dead in Windows 7 (Score:3)
They've killed it by only supporting the features necessary to re-share existing NFS services using SMB and AD. Integration of Windows with non-AD LDAP and Kerberos is virtually non-existent and requires a ton of work and 3rd party utilities to get it working. I don't think NFSv4 is even supported.
it does make sense. (Score:2)
why support something that your new os's 'made for' or 'works with' branding program is designed to kill?
Windows doesn't fully support SUA even now... (Score:2)
...Which can be seen by viewing SUA based process in Windows's Task Manager.
Do this:
1. Install SUA
2. Run KSH (the command line shell that SUA installs)
3. Open Task Manager
4. Change the columns so that 'command line' is showing.
You will notice that the SUA processes have _wrong_ (corrupted?) information displayed. This is based on the fact SUA is a different _subsystem_ and stores process based information (specifically, command line information) in memory in a _different_ format than the _Win32_ subsystem.
S
Oh, no, maybe 0.2 people affected (Score:3)
I have never seen one instance of this actually being used in any environment from small up to very large enterprise.
My two cents worth. (Score:2)
VMware workstation running one of the many "live" distributions as a mountable
.iso image.
Re: (Score:2)
I was wondering the same thing. What's the use of this SUA thingy?
Re: (Score:2)
Re:Cygwin (Score:5, Interesting)
It's not Cygwin. It's an implementation of the POSIX APIs that goes directly to the NT APIs instead of through Win32..)
Re:Cygwin (Score:4, Informative).)
Yep, fork() on Interix (SUA) works much more efficiently. The NT kernel has supported what's essentially fork() since at least NT 4.0. The problem until Interix - and the reason why Cygwin's fork() sucks - is that the Win32 DLLs don't react well to being fork()ed. kernel32.dll gets confused, and simple things like console output stop working. Interix doesn't use the Win32 API, instead using a custom POSIX API and the NT API directly. The NT API has been updated to work in the event of a fork().
The NT API function NtCreateProcess [ntinternals.net] spawns a new process. The SectionHandle parameter takes a handle to the image section (IE, CreateFileMapping with SEC_IMAGE) representing the EXE you want the new process to run. If you pass NULL for SectionHandle, you will instead be creating a copy of the parent process's address space, the main part of fork().
Re: (Score:2)
Licenses? cygwin is GPL last time I checked.
Re:Cygwin (Score:4, Informative)
Reaching back a bit, I think the use was that it meant that Windows NT and successors could tick the "Posix compliant" tick box that was required by some (mainly publice sector) contracts.
Perhaps Posix is no longer on so many checklists.
SUA vs Cygwin (Re:Cygwin) (Score:5, Informative)
Well, first off the basic thing is speed. SUA has kernel hooks for syscall translation. It's able to do many of the POSIX syscalls in a much quicker fashion than Cygwin. Cygwin, on the other hand, does *everything* for POSIX syscalls in userland, causing it to be slow (for example, a fork, at times can take *seconds* to complete).
So, SUA is much better this way... problem is, it's tricky to get things to compile for it, I never did get things building reliably for it. Cygwin has a full suite of programs already built, and it's much easier to build existing Linux/UNIX/POSIX programs for than SUA.
Being a Windows user who needs *NIX tools for many processing tasks, what do I use? Cygwin. Easier to set up and get running. The speed drives me insane, though. My login script, which runs many programs before bringing up my bash prompt will take 5-6 seconds.
Ideal solution: Hyper-V or some other VM software running a VM in the background that I can get a terminal to, that has filesystem access to my system drives too.
Re: (Score:3)
Fastest and most compatible way to run Linux programs on Windows (which doesn't even need any special hardware) ? [colinux.org]
Re: (Score:2)
SUA was once called SFU (Services For UNIX), and it replaces the built-in POSIX subsystem which has been an integral part of NT since NT 3.1.
The built-in POSIX subsystem alone was basically useless as shipped, since it came without many command line utilities, but SFU (now SUA) upgraded it to a more or less useful configuration, including a series of commands built to use the API; in some ways it accomplished the same thing as Cygwin, but in a different way.
In my opinion, Cygwin is vastly better since i.
Rumor has it that Windows 8 deprecates SUA in favor of its newer, sexier replacement, OMGWTFBBQ.
You heard it here first.
Re: (Score:3)
I thought it was being replaced with STFU... to truly tell the users how they feel.
;)
Of course, there's also the new service used for background downloading images called "NSFW" that was planned for Metro.
Re: (Score:2)
I doubt telling people to "just use Linux" is a reasonable solution though. If it were that simple they wouldn't be bothering with SUA in the first place.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
I don't think so. Even if that is attempted it isn't hardware, it's firmware. Firmware can be updated.
UEFI cannot be user-updated, that is its whole security model. You can bring it in to Dell and ask them to install Grub on it though, I'm sure.
Re:I feel like... (Score:5, Insightful)
Re:I feel like... (Score:4, Insightful)
a full blown unix VM
That's the key right there. With virtualization software in the state that it is now, why would you run POSIX applications shoe-horned into windows, when you can have a proper POSIX system running in a VM.
Re: (Score:2)
The problem is that VMs consequently bring an isolation level that doesn't allow you, for example, to work at the native filesystem. You cannot easily grep something in your "My Documents" folder, as far as I know and, even if you can, you'll be consuming way more resources than needed, which may bring consequences as bigger execution times. They're a great solution for a lot of problems, though, don't get me wrong.
Re: (Score:3)
You can map a VM drive to a user folder and tell Windows (quite easily now) to store your documents on that drive.
Re: (Score:2)
With software in the state that it is now, why would you run shoe-horned windows, when you can have a proper POSIX system running on your computer?
Re: (Score:3)
That's the key right there. With virtualization software in the state that it is now, why would you run POSIX applications shoe-horned into windows, when you can have a proper POSIX system running in a VM.
I agree that SUA is pretty bad, but running Cygwin allows me to run commands like:
sort -o
/dev/clipboard /dev/clipboard
This sorts the data on the Windows clipboard. Having the whole *nix user land plus access to Windows features/drives/data makes the command line in Windows much less painful than before. A VM won't really solve that.
Re: (Score:2)
Re: (Score:3)
What FUD? This is an actual story about actual facts involving MS. That is NOT FUD.
What Fear? What uncertainty? what doubt?
Re: (Score:2)
Are you implying that it actually matters or is in any way an "evil" activity?
This stuff isn't used that much anymore on Windows. If the usage isn't there, why continue to support it?
Re: (Score:2)
It's the constant drumbeat of uninformed speculation that gets me worn out. Pandering to leet k3wlness rather than fact-based criticism.
The FACTS are probably nothing like what's being asserted here.
Re: (Score:2)
If you're bitching about Outlook Web Access, try setting your cookies to a non-paranoid mode. Works perfectly fine in Firefox with normal cookies.
Re: (Score:2)
Yup. My guess is that the person who had issues with OWA and Firefox was using excessively aggressive privacy settings.
Re: (Score:2)
Re: (Score:2)
Y+Us e missed the point. They are cutting out something people use, and doing it rather quickly.
The fact that it may be a PoS is irrelevant. This is quick even for MS. Deprecated, and then removed in the next release candidate of the same version? Yeah, that's just crappy.
Re: (Score:2)
This is quick even for MS. Deprecated, and then removed in the next release candidate of the same version?
It is quick, but not that quick. Where does it say that it'll be removed in the next release candidate? TFS speaks of "release"; this normally actually means RTM (as in, not a prerelease).
Re: (Score:2)
OWA for Exchange 2010 works much better in Firefox than it does in IE6.
Re: (Score:2)
Re: (Score:2)
DR-DOS, Wordperfect, Ami pro count as direct sabotage in modifying their code to break competitors products.
What a load of rubbish! A single beta version of Windows didn't work under DR-DOS. To that sounds like they are just eliminating bugs in the was DR-DOS runs Windows from distracting the developers who are trying to find bugs in their own code. Any version of Windows (pre-Win95) that could be purchased would run under DR-DOS.
WordPerfect killed themselves by resisting the move to WYSIWYG and a Windows version. When they finally did it, it was incredibly buggy. How is Microsoft responsible for that?
As for Ami
Re: (Score:3)
"Should be a rule that you dont' create a model that depends on Microsoft."
If you make an application that runs on Windows, then it depends on MS. How many people had their Netscape installations disabled by MS updates of IE? How many application vendors were unable to compete because MS was the only one with access to undocumented APIs? You do remember that the DOJ eventually found them guilty of unfair trade practices because of these tactics.
Or what about workalike operating systems like DR-DOS. MS a [wikipedia.org]
1-800-what-model-is-that? (Score:3)
Troll much? But let's take a refresher course, ten years later.
1-800-what-model-is-that? I worked in the 1980s on Chinese, Japanese, and Korean input methods. CJK input was a time-limited product, completely dependent on Microsoft, so of course we got eaten when they folded CJK fonts and IME into the operating system.
Your sentiment really pissed me off, because a lot of people gave blood to bring Microsoft down a peg or two in the 1
Re: (Score:2)
Netscape committed suicide also. They released a buggy as hell browser as v3. It turned me into a MSIE user for a short while, that's how bad it was. The white window thing, where it would fail to paint, should bring back memories. Their server product quickly grew stale compared to Apache. They had to know the revenue stream for a commodity interface like a web browser wouldn't last. People sold command shells back in the old days too...that didn't last. Bad business plan. Blaming Microsoft is hard
Re: (Score:3)
Jesus, did you even read TFS? "WARNING: SUA is deprecated starting with this release and will be completely removed in the next release." Or do you not trust what Microsoft themselves tells you about their products (hmmm, can I get back to you on that one
...)?
Re: (Score:2)
If you read the summary, it says Deprecated in Win8 and scheduled to be removed in a future version. Of course, by "a future version", that could be Win10 for all we know. MS has deprecated features in the past that remained well beyond the "next" release.
Re: (Score:2) | https://tech.slashdot.org/story/11/09/22/1354258/sua-deprecated-in-windows-8 | CC-MAIN-2017-09 | refinedweb | 4,834 | 71.75 |
int.
The behaviour of
tolower() is undefined if the value of ch is not representable as unsigned char or is not equal to EOF.
It is defined in <cctype> header file.
ch: The character to convert
The
tolower() function returns a lowercase version of ch if it exists. Otherwise it returns ch.
#include <cctype> #include <iostream> #include <cstring> #include <cstdio> using namespace std; int main() { char str[] = "John is from USA."; cout << "The lowercase version of \"" << str << "\" is " << endl; for (int i=0; i<strlen(str); i++) putchar(tolower(str[i])); return 0; }
When you run the program, the output will be:
The lowercase version of "John is from USA." is john is from usa. | https://cdn.programiz.com/cpp-programming/library-function/cctype/tolower | CC-MAIN-2019-51 | refinedweb | 115 | 72.56 |
Reliability is a critical success factor for any version control system. We want Bazaar to be highly reliable across multiple platforms while evolving over time to meet the needs of its community.
In a nutshell, this is September 2009, Bazaar ships with a test suite containing over 23,000 tests and growing. We are proud of it and want to remain so. As community members, we all benefit from it. Would you trust version control on your project to a product without a test suite like Bazaar has?
As of Bazaar 2.1, you must have the testtools library installed to run the bzr test suite.
To test all of Bazaar, just run:
bzr selftest
With --verbose bzr will print the name of every test as it is run.
This should always pass, whether run from a source tree or an installed copy of Bazaar. Please investigate and/or report any failures.
Currently, bzr selftest is used to invoke tests. You can provide a pattern argument to run a subset. For example, to run just the blackbox tests, run:
./bzr selftest -v blackbox
To skip a particular test (or set of tests), use the –exclude option (shorthand -x) like so:
./bzr selftest -v -x blackbox.
To test only the bzr core, ignoring any plugins you may have installed, use:
./bzr --no-plugins selftest
By default Bazaar uses apport to report program crashes. In developing Bazaar it’s normal and expected to have it crash from time to time, at least because a test failed if for no other reason.
Therefore you should probably add debug_flags = no_apport to your bazaar.conf file (in ~/.bazaar/ on Unix), so that failures just print a traceback rather than writing a crash file.
Similar to the global -Dfoo debug options, bzr selftest accepts -E=foo debug flags. These flags are:
Bazaar can optionally produce output in the machine-readable subunit format, so that test output can be post-processed by various tools. To generate a subunit test stream:
$ ./bzr selftest --subunit
Processing such a stream can be done using a variety of tools including:
Bazaar ships with a config file for testrepository. This can be very useful for keeping track of failing tests and doing general workflow support. To run tests using testrepository:
$ testr run
To run only failing tests:
$ testr run --failing
To run only some tests, without plugins:
$ test run test_selftest -- --no-plugins
See the testrepository documentation for more details.
We have a Hudson continuous-integration system that automatically runs tests across various platforms. In the future we plan to add more combinations including testing plugins. See <>. (Babune = Bazaar Buildbot Network.)
Bazaar can use subunit to spawn multiple test processes. There is slightly more chance you will hit ordering or timing-dependent bugs but it’s much faster:
$ ./bzr selftest --parallel=fork
Note that you will need the Subunit library <> to use this, which is in python-subunit on Ubuntu.
The tests create and delete a lot of temporary files. In some cases you can make the test suite run much faster by running it on a ramdisk. For example:
$ sudo mkdir /ram $ sudo mount -t tmpfs none /ram $ TMPDIR=/ram ./bzr selftest ...
You could also change /tmp in /etc/fstab to have type tmpfs, if you don’t mind possibly losing other files in there when the machine restarts. Add this line (if there is none for /tmp already):
none /tmp tmpfs defaults 0 0
With a 6-core machine and --parallel=fork using a tmpfs doubles the test execution speed.
Normally you should add or update a test for all bug fixes or new features in Bazaar.
Bzrlib’s tests are organised by the type of test. Most of the tests in bzr’s test suite belong to one of these categories:
- Unit tests
- Blackbox (UI) tests
- Per-implementation tests
- Doctests
A quick description of these test types and where they belong in bzrlib’s source follows. Not all tests fall neatly into one of these categories; in those cases use your judgement.
Unit tests make up the bulk of our test suite. These are tests that are focused on exercising a single, specific unit of the code as directly as possible. Each unit test is generally fairly short and runs very quickly.
They are found in bzrlib/tests/test_*.py. So in general tests should be placed in a file named test_FOO.py where FOO is the logical thing under test.
For example, tests for merge3 in bzrlib belong in bzrlib/tests/test_merge3.py. See bzrlib/tests/test_sampler.py for a template test script.
Tests can be written for the UI or for individual areas of the library. Choose whichever is appropriate: if adding a new command, or a new command option, then you should be writing a UI test. If you are both adding UI functionality and library functionality, you will want to write tests for both the UI and the core behaviours. We call UI tests ‘blackbox’ tests and they belong in bzrlib/tests/blackbox/*.py.
When writing blackbox tests please honour the following conventions:
- Place the tests for the command ‘name’ in bzrlib/tests/blackbox/test_name.py. This makes it easy for developers to locate the test script for a faulty command.
- Use the ‘self.run_bzr(“name”)’ utility function to invoke the command rather than running bzr in a subprocess or invoking the cmd_object.run() method directly. This is a lot faster than subprocesses and generates the same logging output as running it in a subprocess (which invoking the method directly does not).
- Only test the one command in a single test script. Use the bzrlib library when setting up tests and when evaluating the side-effects of the command. We do this so that the library api has continual pressure on it to be as functional as the command line in a simple manner, and to isolate knock-on effects throughout the blackbox test suite when a command changes its name or signature. Ideally only the tests for a given command are affected when a given command is changed.
- If you have a test which does actually require running bzr in a subprocess you can use run_bzr_subprocess. By default the spawned process will not load plugins unless --allow-plugins is supplied.
Per-implementation tests are tests that are defined once and then run against multiple implementations of an interface. For example, per_transport.py defines tests that all Transport implementations (local filesystem, HTTP, and so on) must pass. They are found in bzrlib/tests/per_*/*.py, and bzrlib/tests/per_*.py.
These are really a sub-category of unit tests, but an important one.
Along the same lines are tests for extension modules. We generally have both a pure-python and a compiled implementation for each module. As such, we want to run the same tests against both implementations. These can generally be found in bzrlib/tests/*__*.py since extension modules are usually prefixed with an underscore. Since there are only two implementations, we have a helper function bzrlib.tests.permute_for_extension, which can simplify the load_tests implementation.
We make selective use of doctests. In general they should provide examples within the API documentation which can incidentally be tested. We don’t try to test every important case using doctests — regular Python tests are generally a better solution. That is, we just use doctests to make our documentation testable, rather than as a way to make tests. Be aware that doctests are not as well isolated as the unit tests, if you need more isolation, you’re likely want to write unit tests anyway if only to get a better control of the test environment.
Most of these are in bzrlib/doc/api. More additions are welcome.
There is an assertDoctestExampleMatches method in bzrlib.tests.TestCase that allows you to match against doctest-style string templates (including ... to skip sections) from regular Python tests.
bzrlib/tests/script.py allows users to write tests in a syntax very close to a shell session, using a restricted and limited set of commands that should be enough to mimic most of the behaviours.
A script is a set of commands, each command is composed of:
- one mandatory command line,
- one optional set of input lines to feed the command,
- one optional set of output expected lines,
- one optional set of error expected lines.
Input, output and error lines can be specified in any order.
Except for the expected output, all lines start with a special string (based on their origin when used under a Unix shell):
- ‘$ ‘ for the command,
- ‘<’ for input,
- nothing for output,
- ‘2>’ for errors,
Comments can be added anywhere, they start with ‘#’ and end with the line.
The execution stops as soon as an expected output or an expected error is not matched.
If output occurs and no output is expected, the execution stops and the test fails. If unexpected output occurs on the standard error, then execution stops and the test fails.
If an error occurs and no expected error is specified, the execution stops.
An error is defined by a returned status different from zero, not by the presence of text on the error stream.
The matching is done on a full string comparison basis unless ‘...’ is used, in which case expected output/errors can be less precise.
Examples:
The following will succeeds only if ‘bzr add’ outputs ‘adding file’:
$ bzr add file >adding file
If you want the command to succeed for any output, just use:
$ bzr add file ... 2>...
or use the --quiet option:
$ bzr add -q file
The following will stop with an error:
$ bzr not-a-command
If you want it to succeed, use:
$ bzr not-a-command 2> bzr: ERROR: unknown command "not-a-command"
You can use ellipsis (...) to replace any piece of text you don’t want to be matched exactly:
$ bzr branch not-a-branch 2>bzr: ERROR: Not a branch...not-a-branch/".
This can be used to ignore entire lines too:
$ cat <first line <second line <third line # And here we explain that surprising fourth line <fourth line <last line >first line >... >last line
You can check the content of a file with cat:
$ cat <file >expected content
You can also check the existence of a file with cat, the following will fail if the file doesn’t exist:
$ cat file
You can run files containing shell-like scripts with:
$ bzr test-script <script>
where <script> is the path to the file containing the shell-like script.
The actual use of ScriptRunner within a TestCase looks something like this:
from bzrlib.tests import script def test_unshelve_keep(self): # some setup here script.run_script(self, ''' $ bzr add -q file $ bzr shelve -q --all -m Foo $ bzr shelve --list 1: Foo $ bzr unshelve -q --keep $ bzr shelve --list 1: Foo $ cat file contents of file ''')
You can also test commands that read user interaction:
def test_confirm_action(self): """You can write tests that demonstrate user confirmation""" commands.builtin_command_registry.register(cmd_test_confirm) self.addCleanup(commands.builtin_command_registry.remove, 'test-confirm') self.run_script(""" $ bzr test-confirm 2>Really do it? [y/n]: <yes yes """)
To avoid having to specify “-q” for all commands whose output is irrelevant, the run_script() method may be passed the keyword argument null_output_matches_anything=True. For example:
def test_ignoring_null_output(self): self.run_script(""" $ bzr init $ bzr ci -m 'first revision' --unchanged $ bzr log --line 1: ... """, null_output_matches_anything=True)
bzrlib.tests.test_import_tariff has some tests that measure how many Python modules are loaded to run some representative commands.
We want to avoid loading code unnecessarily, for reasons including:
test_import_tariff allows us to check that removal of imports doesn’t regress.
This is done by running the command in a subprocess with PYTHON_VERBOSE=1. Starting a whole Python interpreter is pretty slow, so we don’t want exhaustive testing here, but just enough to guard against distinct fixed problems.
Assertions about precisely what is loaded tend to be brittle so we instead make assertions that particular things aren’t loaded.
Unless selftest is run with --no-plugins, modules will be loaded in the usual way and checks made on what they cause to be loaded. This is probably worth checking into, because many bzr users have at least some plugins installed (and they’re included in binary installers).
In theory, plugins might have a good reason to load almost anything: someone might write a plugin that opens a network connection or pops up a gui window every time you run ‘bzr status’. However, it’s more likely that the code to do these things is just being loaded accidentally. We might eventually need to have a way to make exceptions for particular plugins.
Some things to check:
In order to test the locking behaviour of commands, it is possible to install a hook that is called when a write lock is: acquired, released or broken. (Read locks also exist, they cannot be discovered in this way.)
A hook can be installed by calling bzrlib.lock.Lock.hooks.install_named_hook. The three valid hooks are: lock_acquired, lock_released and lock_broken.
Example:
locks_acquired = [] locks_released = [] lock.Lock.hooks.install_named_hook('lock_acquired', locks_acquired.append, None) lock.Lock.hooks.install_named_hook('lock_released', locks_released.append, None)
locks_acquired will now receive a LockResult instance for all locks acquired since the time the hook is installed.
The last part of the lock_url allows you to identify the type of object that is locked.
To test if a lock is a write lock on a working tree, one can do the following:
self.assertEndsWith(locks_acquired[0].lock_url, "/checkout/lock")
See bzrlib/tests/commands/test_revert.py for an example of how to use this for testing locks.
In our enhancements to unittest we allow for some addition results beyond just success or failure.
If a test can’t be run, it can say that it’s skipped by raising a special exception. can’t be run because a dependency (typically a Python library) is not available in the test environment. These are in general things that the person running the test could fix by installing the library. It’s OK if some of these occur when an end user runs the tests or if we’re specifically testing in a limited environment, but a full test should never see them.
See Test feature dependencies below.
The test exists but is known to fail, for example this might be appropriate to raise if you’ve committed a test for a bug but not the fix for it, or if something works on Unix but not on Windows.
Raising this allows you to distinguish these failures from the ones that are not expected to fail. If the test would fail because of something we don’t expect or intend to fix, KnownFailure is not appropriate, and TestNotApplicable might be better.
KnownFailure should be used with care as we don’t want a proliferation of quietly broken tests..)
For example:
class TestStrace(TestCaseWithTransport): _test_needs_features = [StraceFeature]
This means all tests in this class need the feature. If the feature is not available the test will be skipped using UnavailableFeature.
Individual tests can also require a feature using the requireFeature method:
self.requireFeature(StraceFeature)
The old naming style for features is CamelCase, but because they’re actually instances not classses they’re now given instance-style names like apport.
Features already defined in bzrlib.tests and bzrlib.tests.features include:
- apport
- paramiko
- SymlinkFeature
- HardlinkFeature
- OsFifoFeature
- UnicodeFilenameFeature
- FTPServerFeature
- CaseInsensitiveFilesystemFeature.
- chown_feature: The test can rely on OS being POSIX and python supporting os.chown.
- posix_permissions_feature: The test can use POSIX-style user/group/other permission bits.
New features for use with _test_needs_features or requireFeature are defined by subclassing bzrlib.tests.Feature and overriding the _probe and feature_name methods. For example:
class _SymlinkFeature(Feature): def _probe(self): return osutils.has_symlinks() def feature_name(self): return 'symlinks' SymlinkFeature = _SymlinkFeature()
A helper for handling running tests based on whether a python module is available. This can handle 3rd-party dependencies (is paramiko available?) as well as stdlib (termios) or extension modules (bzrlib._groupcompress_pyx). You create a new feature instance with:
# in bzrlib/tests/features.py apport = tests.ModuleAvailableFeature('apport') # then in bzrlib/tests/test_apport.py class TestApportReporting(TestCaseInTempDir): _test_needs_features = [features.apport]
Translations are disabled by default in tests. If you want to test that code is translated you can use the ZzzTranslations class from test_i18n:
self.overrideAttr(i18n, '_translations', ZzzTranslations())
And check the output strings look like u"zz\xe5{{output}}".
To test the gettext setup and usage you override i18n.installed back to self.i18nInstalled and _translations to None, see test_i18n.TestInstall.
When code is deprecated, it is still supported for some length of time, usually until the next major version. The applyDeprecated helper wraps calls to deprecated code to verify that it is correctly issuing the deprecation warning, and also prevents the warnings from being printed during test runs.
Typically patches that apply the @deprecated_function decorator should update the accompanying tests to use the applyDeprecated wrapper.
applyDeprecated is defined in bzrlib.tests.TestCase. See the API docs for more details. the tests that relate to that particular interface. Sometimes we discover a bug elsewhere that happens with only one particular transport. Once it’s isolated, we can consider whether a test should be added for that particular implementation, or for all implementations of the interface.
See also Per-implementation tests (above)..
A single scenario is defined by a (name, parameter_dict) tuple. The short string name is combined with the name of the test method to form the test instance name. The parameter dict is merged into the instance’s attributes.
For example:
load_tests = load_tests_apply_scenarios class TestCheckout(TestCase): scenarios = multiply_scenarios( VaryByRepositoryFormat(), VaryByTreeFormat(), )
The load_tests declaration or definition should be near the top of the file so its effect can be seen.
We have a rich collection of tools to support writing tests. Please use them in preference to ad-hoc solutions as they provide portability and performance benefits.
The bzrlib.tests module defines many TestCase classes to help you write your tests.
See the API docs for more details.
When writing a test for a feature, it is often necessary to set up a branch with a certain history. The BranchBuilder interface allows the creation of test branches in a quick and easy manner. Here’s a sample session:
builder = self.make_branch_builder('relpath') builder.build_commit() builder.build_commit() builder.build_commit() branch = builder.get_branch()
make_branch_builder is a method of TestCaseWithMemoryTransport.
Note that many current tests create test branches by inheriting from TestCaseWithTransport and using the make_branch_and_tree helper to give them a WorkingTree that they can commit to. However, using the newer make_branch_builder helper is preferred, because it can build the changes in memory, rather than on disk. Tests that are explictly testing how we work with disk objects should, of course, use a real WorkingTree.
Please see bzrlib.branchbuilder for more details.
If you’re going to examine the commit timestamps e.g. in a test for log output, you should set the timestamp on the tree, rather than using fuzzy matches in the test.()
Usually a test will create a tree using make_branch_and_memory_tree (a method of TestCaseWithMemoryTransport) or make_branch_and_tree (a method of TestCaseWithTransport).
Please see bzrlib.treebuilder for more details.
PreviewTrees are based on TreeTransforms. This means they can represent virtually any state that a WorkingTree can have, including unversioned files. They can be used to test the output of anything that produces TreeTransforms, such as merge algorithms and revert. They can also be used to test anything that takes arbitrary Trees as its input.
# Get an empty tree to base the transform on. b = self.make_branch('.') empty_tree = b.repository.revision_tree(_mod_revision.NULL_REVISION) tt = TransformPreview(empty_tree) self.addCleanup(tt.finalize) # Empty trees don't have a root, so add it first. root = tt.new_directory('', ROOT_PARENT, 'tree-root') # Set the contents of a file. tt.new_file('new-file', root, 'contents', 'file-id') preview = tt.get_preview_tree() # Test the contents. self.assertEqual('contents', preview.get_file_text('file-id'))
PreviewTrees can stack, with each tree falling back to the previous:
tt2 = TransformPreview(preview) self.addCleanup(tt2.finalize) tt2.new_file('new-file2', tt2.root, 'contents2', 'file-id2') preview2 = tt2.get_preview_tree() self.assertEqual('contents', preview2.get_file_text('file-id')) self.assertEqual('contents2', preview2.get_file_text('file-id2'))
If your test needs to temporarily mutate some global state, and you need it restored at the end, you can say for example:
self.overrideAttr(osutils, '_cached_user_encoding', 'latin-1')
This should be used with discretion; sometimes it’s better to make the underlying code more testable so that you don’t need to rely on monkey patching.
Sometimes it’s useful to observe how a function is called, typically when calling it has side effects but the side effects are not easy to observe from a test case. For instance the function may be expensive and we want to assert it is not called too many times, or it has effects on the machine that are safe to run during a test but not easy to measure. In these cases, you can use recordCalls which will monkey-patch in a wrapper that records when the function is called.
If yout test needs to temporarily change some environment variable value (which generally means you want it restored at the end), you can use:
self.overrideEnv('BZR_ENV_VAR', 'new_value')
If you want to remove a variable from the environment, you should use the special None value:
self.overrideEnv('PATH', None)
If you add a new feature which depends on a new environment variable, make sure it behaves properly when this variable is not defined (if applicable) and if you need to enforce a specific default value, check the TestCase._cleanEnvironment in bzrlib.tests.__init__.py which defines a proper set of values for all tests.
Our base TestCase class provides an addCleanup method, which should be used instead of tearDown. All the cleanups are run when the test finishes, regardless of whether it passes or fails. If one cleanup fails, later cleanups are still run.
(The same facility is available outside of tests through bzrlib.cleanup.)
Generally we prefer automated testing but sometimes a manual test is the right thing, especially for performance tests that want to measure elapsed time rather than effort.
To get realistically slow network performance for manually measuring performance, we can simulate 500ms latency (thus 1000ms round trips):
$ sudo tc qdisc add dev lo root netem delay 500ms
Normal system behaviour is restored with
$ sudo tc qdisc del dev lo root
A more precise version that only filters traffic to port 4155 is:
tc qdisc add dev lo root handle 1: prio tc qdisc add dev lo parent 1:3 handle 30: netem delay 500ms tc filter add dev lo protocol ip parent 1:0 prio 3 u32 match ip dport 4155 0xffff flowid 1:3 tc filter add dev lo protocol ip parent 1:0 prio 3 u32 match ip sport 4155 0xffff flowid 1:3
and to remove this:
tc filter del dev lo protocol ip parent 1: pref 3 u32 tc qdisc del dev lo root handle 1:
You can use similar code to add additional delay to a real network interface, perhaps only when talking to a particular server or pointing at a VM. For more information see <>. | http://doc.bazaar.canonical.com/bzr.dev/developers/testing.html | CC-MAIN-2020-50 | refinedweb | 3,862 | 55.74 |
Produces an inlined version of
call via its
inlined method.
ConstructorsConstructors
MembersMembers
A typer for inlined bodies. Beyond standard typing, an inline typer performs the following functions:
- Implement constant folding over inlined code
- Selectively expand ifs with constant conditions
- Inline arguments that are by-name closures
- Make sure inlined code is type-correct.
- Make sure that the tree's typing is idempotent (so that future -Ycheck passes succeed)
A buffer for bindings that define proxies for actual arguments
A map from parameter names of the inlineable method to references of the actual arguments.
For a type argument this is the full argument type.
For a value argument, it is a reference to either the argument value
(if the argument is a pure expression of singleton type), or to
val or
def acting
as a proxy (if the argument is something else).
A map from references to (type and value) parameters of the inlineable method
to their corresponding argument or proxy references, as given by
paramBinding.
A map from the classes of (direct and outer) this references in
rhsToInline
to references of their proxies.
Note that we can't index by the ThisType itself since there are several
possible forms to express what is logicaly the same ThisType. E.g.
ThisType(TypeRef(ThisType(p), cls))
vs
ThisType(TypeRef(TermRef(ThisType(
These are different (wrt ==) types but represent logically the same key
Populate
paramBinding and
bindingsBuf by matching parameters with
corresponding arguments.
bindingbuf will be further extended later by
proxies to this-references.
Drop any side-effect-free bindings that are unused in expansion or other reachable bindings. Inline def bindings that are used only once.
Make
tree part of inlined expansion. This means its owner has to be changed
from its
originalOwner, and, if it comes from outside the inlined method
itself, it has to be marked as an inlined argument.
A binding for the parameter of an inline method. This is a
val def for
by-value parameters and a
def def for by-name parameters.
val defs inherit
inline annotations from their parameters. The generated
def is appended
to
bindingsBuf.
Populate
thisProxy and
paramProxy as follows:
1a. If given type refers to a static this, thisProxy binds it to corresponding global reference,
1b. If given type refers to an instance this to a class that is not contained in the
inline method, create a proxy symbol and bind the thistype to refer to the proxy.
The proxy is not yet entered in
bindingsBuf; that will come later.
2. If given type refers to a parameter, make
paramProxy refer to the entry stored
in
paramNames under the parameter's name. This roundabout way to bind parameter
references to proxies is done because we don't know a priori what the parameter
references of a method are (we only know the method's type, but that contains TypeParamRefs
and MethodParams, not TypeRefs or TermRefs. | http://dotty.epfl.ch/api/dotty/tools/dotc/typer/Inliner.html | CC-MAIN-2019-04 | refinedweb | 483 | 52.6 |
RDF::NS::URIS - Popular RDF namespace prefixes from prefix.cc as URI objects
version 20140910
use RDF::NS::URIS; use constant NS => RDF::NS::URIS->new('20120905'); NS->foaf_Person; # an URI object NS->uri('foaf:Person); # same NS->foaf_Person->as_string; #
RDF::NS::URIS works like RDF::NS but it returns instances of URI instead of plain strings. You must have installed module URI to use this package.
Jakob Voß
This software is copyright (c) 2013 by Jakob Voß.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. | http://search.cpan.org/~voj/RDF-NS-20140910/lib/RDF/NS/URIS.pm | CC-MAIN-2014-52 | refinedweb | 102 | 63.09 |
I have four arrays of size 8 each with DNA bases assigned (A T G or C).
In a method I have to pass a base, in this case A and compare all the positions of each arrays and determine how many 'A' are in each position of each array. And then return an array with the totals, which would also be size 8.
I'm having trouble comparing each positions to the base... :S This is just part of my code without the other methods included...I just need help with that one.thanks! and i know it's wrong.I've tried it so many ways :/
Code :
import java.io.*; import java.util.*; public class Lab6 { public static void main(String[] args) throws FileNotFoundException { Scanner inFile = new Scanner(new File("seq.txt")); //text data file with DNA bases char base = 'A'; //base passed to countBase method int size = 8; //size of the arrays char[] array1 = new char[size]; //first array with 8 indexes char[] array2 = new char[size]; //second array with 8 indexes char[] array3 = new char[size]; //third array with 8 indexes char[] array4 = new char[size]; //fourth array with 8 indexes int i = 0; while(inFile.hasNext()) { array1[i] = inFile.next().charAt(0); array2[i] = inFile.next().charAt(0); array3[i] = inFile.next().charAt(0); array4[i] = inFile.next().charAt(0); i++; } System.out.println(array1); System.out.println(array2); System.out.println(array3); System.out.println(array4); int[] result = countBase(array1, array2, array3, array4, base); int mostIndex = findMost(result); double average = computeAvg(result); System.out.println(result); System.out.println("The largest value in the counter array is " + result[mostIndex]); System.out.println("The average for the counter array is " + average); } public static int[] countBase(char[] one, char[] two, char[] three, char[] four, char letter) //count number of times a base appears { int[] counter = new int[8]; int sum = 0; for(int i = 0; i < one.length; i++) { if(letter == one[i] || letter == two[i] || letter == three[i] || letter == four[i]) { sum++; } } return counter; | http://www.javaprogrammingforums.com/%20whats-wrong-my-code/8285-quick-method-array-question-printingthethread.html | CC-MAIN-2015-18 | refinedweb | 339 | 56.96 |
Isolated a 5 block memory leak to the use of "struct passwd". Tried several different free(user) calls to no avail. How is this struct meant to be freed? Several different SO questions on the topic, but I'm finding little documentation on how this particular struct should be handled. Program works without issue otherwise. Thanks.
#include <assert.h>
#include <pwd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
int main() {
pid_t pid = getpid();
uid_t uid = getuid();
struct passwd *user = getpwuid(uid);
unsigned int bufferMaxLen = strlen(".dir.") + strlen(user->pw_name) + 10;
char* dirName = malloc(bufferMaxLen * sizeof(char));
assert(dirName != NULL);
sprintf(dirName, "%s.dir.%d", user->pw_name, pid);
printf("bufferMaxLen is: %d\n", bufferMaxLen);
printf("Directory name is: %s\n", dirName);
free(dirName);
return 0;
}
With that code, it looks like implementation specific library feature. It's quite normal that libraries allocate memory when you first call a function, and never free it: There's no way or opportunity to free it during normal execution, and it would be totally pointless to free single memory allocations at exit handler, when they are about to be freed by program exit right after.
You can think of this kind of allocations as static data, except they're only static pointers to buffer/struct allocated only when needed. Benefit is, less memory is used if the relevant function is never called. Downside is slightly more complex code, runtime cost and memory use if the function does get called, not to mention memory analyzer confusion, demonstrated by your question :-).
Tools like Valgrind have ignore filters for hiding this kind of "leaks". | https://codedump.io/share/hrlLAqOAZ2HZ/1/struct-passwd-is-source-of-memory-leak---how-to-properly-free | CC-MAIN-2018-13 | refinedweb | 276 | 58.99 |
Question: Suppose the following high temperatures were recorded for major cities
Suppose the following high temperatures were recorded for major cities in the contiguous United States for a day in July.
.png)
Construct and interpret a stem-and-leafdiagram.
View Solution:
.png)
Construct and interpret a stem-and-leafdiagram.
View Solution:
Answer to relevant QuestionsA police officer is concerned with excessive speeds on a portion of Interstate 90 with a posted speed limit of 65 miles per hour. Using his radar gun, he records the following speeds for 25 cars and trucks:Construct a ...A statistics instructor wants to examine whether a relationship exists between the hours a student spends studying for the final exam (Hours) and a student’s grade on the final exam (Grade). She takes a sample of eight ...The one-year return (in %) for 24 mutual funds is as follows:a. Construct a frequency distribution using classes of 220 up to 210, 210 up to 0, etc.b. Construct the relative frequency, the cumulative frequency, and the ...The Wall Street Journal (August 28, 2006) asked its readers:“Ideally, how many days a week, if any, would you work from home?” The following relative frequency distribution summarizes the responses from 3,478 ...The following relative frequency histogram summarizes the median household income for the 50 states in the United States (U.S. Census, 2010).a. Is the distribution symmetric? If not, is it positively or negatively skewed?b. ...
Post your question | http://www.solutioninn.com/suppose-the-following-high-temperatures-were-recorded-for-major-cities | CC-MAIN-2017-30 | refinedweb | 244 | 56.86 |
Opened 5 years ago
Closed 5 years ago
Last modified 5 years ago
#18658 closed Cleanup/optimization (fixed)
Better support for exceptions / error messages in custom admin actions
Description
The docs don't mention any way to create error messages in custom admin actions. According to an answer on Stackoverflow, this can be achieved with:
from django.contrib import messages # Then, when you need to error the user: messages.error(request, "The message")
There are three issues with this:
Change History (9)
comment:1 Changed 5 years ago by
comment:2 Changed 5 years ago by
comment:3 Changed 5 years ago by
I'll tackle this one as part of my work at the Django sprint in Stockholm!
comment:4 Changed 5 years ago by
Added pull request:
comment:5 Changed 5 years ago by
comment:6 Changed 5 years ago by
Allowing a string for the level for convenience seems fine - but we should still accept messages.INFO style constants.
I've made that change and added some more docs and release note here:…4b4f7c5dbf795b3db379e2100936bba06c71c718
Thanks for your work on this
comment:7 Changed 5 years ago by
Looks good. Thanks for the improvements as well!
Thanks. Definitely something we should add. Probably something along the lines of adding a
levelkwarg to
message_user. | https://code.djangoproject.com/ticket/18658 | CC-MAIN-2017-51 | refinedweb | 215 | 68.2 |
Closed Bug 1251308 Opened 5 years ago Closed 5 years ago
JS::Describe
Scripted Caller return value is now a lie in some cases
Categories
(Core :: JavaScript Engine, defect)
Tracking
()
mozilla48
People
(Reporter: bzbarsky, Assigned: bbouvier)
References
Details
(Keywords: regression, sec-moderate, Whiteboard: [post-critsmash-triage][adv-main48+])
Attachments
(1 file, 6 obsolete files)
[Tracking Requested - why for this release]: Could cause security problems; needs careful audit. The API documentation for JS::DescribeScriptedCaller clearly says the return value indicates whether a scripted caller was found. The patch for bug 1229642 changed the implementation so that now a false value could mean no scripted caller _or_ could mean the caller asked for a filename and we OOMed trying to create a copy of the filename. Note that very long filenames are not hard to produce (e.g. inline <script> in a long data: URI), so OOM here doesn't necessarily mean we're really out of memory in any meaningful sense. Further, since pages get to control the filename, this allows pages to (probabilistically, at least) cause JS::DescribeScriptedCaller to lie about whether there was a scripted caller. I haven't audited all the codepaths that call this function, but it seems like at least the debugging stuff in nsGlobalWindow::ShowSlowScriptDialog will behave wonkily in this case, though not sure if it handles no filename right now. Also, it looks like some callers do NOT check the return value (because they assume that it represents what it's documented to represent and they know they have a scripted caller) and then use filename.get(). See for example ReportWrapperDenial. Will that work correctly (e.g. without crashing) in the OOM case in the new world? And, again, I did _not_ audit anything close to all the consumers. I'm pretty worried about what might be lurking in there.
Flags: needinfo?(luke)
Flags: needinfo?(bbouvier)
We don't have a sec rating yet but I'll track this anyway, may be a regression from 46
tracking-firefox46: ? → +
tracking-firefox47: ? → +
Sounds like we should change the OOM case to return a null filename (which, iiuc, is a valid filename) instead?
Flags: needinfo?(luke)
Per the previous implementation, AutoFilename::get() could return null: Assuming the callers already had to handle this case (which is not clear to me), a fix would be to set the outparam to a null filename, as suggested by Luke in comment 2.
Flags: needinfo?(bbouvier)
Per previous comments, assigning to nullptr if the copy didn't succeed, and carrying on with lineno/column. Boris, does this look like a right fix?
Comment on attachment 8724664 [details] [diff] [review] fix.patch I don't know whether this is correct, sorry. In the old impl, AutoFilename::get() would return null if either i.scriptSource() was null or ScriptSource::filename() returned null. Were those possible things that could happen?
They were infallible so they'd only return null if the originally-passed filename was null. So this would change observable behavior for oom, which I think is fine as long as the filename is not being used to make security judgements. I'd hope that was all based on the compartment's principle, not filename string. The "real" fix of course is to make DescribeScriptedCaller fallible like every other JSAPI (making the current bool return some "hasScriptedCaller" outparam); I'd been hoping to avoid that.
> so they'd only return null if the originally-passed filename was null Did Gecko ever pass null in practice? > which I think is fine as long as the filename is not being used to make security judgements Well, or if null would cause a crash when we didn't use to crash... Maybe we would have crashed on OOM anyway, so maybe it's alright, but if Gecko never uses null filenames, this part should be checked. > The "real" fix of course is to make DescribeScriptedCaller fallible like every other JSAPI There are places where it's called that have no sane way to handle failure, afaict. But they could at least pretend to, I guess (e.g. not log anything on OOM or something).
(In reply to Boris Zbarsky [:bz] from comment #7) > Did Gecko ever pass null in practice? I see crashes when we forget to handle null filenames inside SM, so I had assumed so, but it's possible that's only via shell methods and never from Gecko under normal conditions. Another alternative, if we knew the filename didn't have security meaning, would be to just return a static string "out of memory" (which would be better than null anyway).
I like that static string approach at least for backporting purposes. The filename here should not have security meaning in callers.
Benjamin, can you audit the call sites so we can figure out a security rating for this? Thanks.
Flags: needinfo?(bbouvier)
I tried looking at all the uses but a few go off into CSP and it's hard to convince myself that there isn't a case where we allow something we used to reject before. Assuming the easy fix above (just return a static string literal "out of memory"), this should be easy to backport. That being said, since the issue isn't publicly known, maybe it's ok to let it ride the trains?
Flags: needinfo?(bbouvier)
I was looking into that, but is the static string literal such an easy fix? It seems that the API of DescribeScriptedCaller should be changed, as it currently takes a UniqueChars ptr, and it could probably not with a static string... (as we don't want to have to duplicate the static string, of course)
Yeah, what we'd need to do is change 'filename' from a UniqueChars to a UniquePtr<char[], JS::MaybeFreePolicy> for some new JS::MaybeFreePolicy whose constructor took an enum indicating whether to actually call free.
At this point, wouldn't this just be easier and more pragmatic to add the out parameter and change the external API? It would involve more changes and won't ease backporting, but it should be pretty mechanical changes.
(In reply to Benjamin Bouvier [:bbouvier] from comment #14) Makes sense to me
Note that some of the consumers of this API cannot handle it throwing sanely right now, so you will either need to silently suppress exceptions or change a lot more infrastructure. Certainly the changes involved are NOT mechanical.
So passing the bool outparam indicating success is actually some trouble too, because then we need to add static const "out of memory" strings at more places in the embedder. That's bad. Reverting to a solution close to MaybeFreePolicy: - for non-wasm frames, just revert to the sourcescript solution - for wasm frames, try a copy and put out-of-memory if it failed That makes things a bit complicated in the custom deleter class... Can I safely push the patch to try, or should I assume unexpected watchers on try and just all test suites locally?
Comment on attachment 8727518 [details] [diff] [review] wip.patch Review of attachment 8727518 [details] [diff] [review]: ----------------------------------------------------------------- Thanks for writing this up! But Arg! I just noticed, digging through the impl of UniquePtr, that UniquePtr::reset() would be broken with this design since simply updates ptr(), but not del(). We don't really need to support reset() here, but it seems like we shouldn't leave that handgun lying on the table so I'm afraid we need to handroll our own class here (probably back to AutoFilename, since Unique implies it's actually a UniquePtr). I'm sorry for not realizing this earlier. ::: js/src/jsapi.cpp @@ +6059,5 @@ > + if (filename) { > + static char outOfMemory[] = "out of memory"; > + FilenameDeleter& deleter = filename->get_deleter(); > + if (!i.isWasm()) { > + // Non-wasm frames have a script source to read the filename from. How about reversing the order of then/else (handling special cases first) and then change this comment to "All other frames have a ..."?
Duh, I should have caught it. The good news is that we can have more control on the API and do something cleaner, imo.
Assignee: nobody → bbouvier
Attachment #8727518 - Attachment is obsolete: true
Status: NEW → ASSIGNED
Attachment #8727733 - Flags: review?(luke)
Comment on attachment 8727733 [details] [diff] [review] 1251308.patch Review of attachment 8727733 [details] [diff] [review]: ----------------------------------------------------------------- Excellent, thanks! ::: js/src/jsapi.cpp @@ +6012,5 @@ > } > > namespace JS { > > +void AutoFilename::reset() { { on newline (here and below) @@ . @@ +6033,5 @@ > + if (p) > + reinterpret_cast<ScriptSource*>(p)->incref(); > +} > + > +void AutoFilename::setOOM() { Instead of hardcoding to OOM, could you have a setUnowned(const char* msg)? ::: js/src/jsapi.h @@ +5389,5 @@ > + private: > + // Actually a ScriptSource, not put here to avoid including the world. > + void* ss_; > + > + char* msg_; Can this be "filename_" instead?
Attachment #8727733 - Flags: review?(luke) → review+
(In reply to Luke Wagner [:luke] from comment #20) > @@ . > ss->filename() returns a const char*. If we want to be able to delete in the owned case, we need char*. I agree that it is nicer to have a simple get() function. Then I've included two char* in AutoFilename, one const and one non-const (alternatives being using const_cast (ugly), tagged union {char*;const char*} (hackish)).
Updated patch (Waldo recommended to use a Variant on irc, which does a great job here, even if it's more wordy). Luke, I'd like you to have another quick look, just to make sure. There are cases where filename is nullptr (e.g. calls to EvalInContext return a nullptr filename, as basic/testBug1235874.js showed -- adding an assertion that we have a non-nullptr made it fail).
Attachment #8727733 - Attachment is obsolete: true
Attachment #8728356 - Flags: review?(luke)
Comment on attachment 8728356 [details] [diff] [review] 1251308.patch [Security approval request comment] How easily could an exploit be constructed based on the patch? Good question; see bz's comment 0. In the worst case, people can craft very long names and instantaneously crash Firefox, if my understanding is correct. According to comment 6, security judgments are based on compartments, not filenames, so security issues couldn't be provoked because of this. Do comments in the patch, the check-in comment, or tests included in the patch paint a bulls-eye on the security problem? Not clearly. However bz found the issue just by reading the code, so anybody digging into the code and knowing it well could too. Which older supported branches are affected by this flaw? If not all supported branches, which bug introduced the flaw? Bug 1229642 => mozilla 46+ (so 47 and 48 are affected too). Do you have backports for the affected branches? If not, how different, hard to create, and risky will they be? Hopefully, this patch could be backported everywhere. How likely is this patch to cause regressions; how much testing does it need? It shouldn't cause any regression. Although it's a pretty safe change, it would need full testing; I haven't done a try-build because I am not sure what our policies are with respect to that? Can we safely try-build sec patches? Or do we need to run locally all the test suites?
Comment on attachment 8728356 [details] [diff] [review] 1251308.patch Review of attachment 8728356 [details] [diff] [review]: ----------------------------------------------------------------- I would've just used a const_cast, but the Variant approach here is even nicer. Mmm, Variants; I may have to start using these. ::: js/src/jsapi.cpp @@ +6031,5 @@ > +} > + > +void AutoFilename::setScriptSource(void* p) > +{ > + ss_ = p; Can you MOZ_ASSERT(!ss_);? @@ +6041,5 @@ > +} > + > +void AutoFilename::setUnowned(const char* filename) > +{ > + filename_.as<const char*>() = filename; Can you MOZ_ASSERT(!filename_.as<const char*>());? @@ +6046,5 @@ > +} > + > +void AutoFilename::setOwned(char* filename) > +{ > + filename_.as<UniqueChars>() = UniqueChars(filename); I would think this would assert since filename_ will have been constructed with the <const char*> type by this point. Rather, what I think you're supposed to do is 'filename_ = AsVariant(UniqueChars(filename));'. The only way I think you can test DescribeScriptedCaller of wasm from within SM is to have a wasm module import 'evalcx' (a shell function) and pass 0 (which will eval the string "0" b/c of the magic of type coercion). Could you confirm and add such a test? @@ +6049,5 @@ > +{ > + filename_.as<UniqueChars>() = UniqueChars(filename); > +} > + > +const char* AutoFilename::get() const { { on newline @@ +6077,5 @@ > return false; > > + if (filename) { > + if (i.isWasm()) { > + static const char* outOfMemory = "out of memory"; I don't think you need a static var to hold this pointer into global memory. I'd just pass "out of memory" inline in the setUnowned() below.
[Security approval request comment] See comment 23.
Attachment #8728356 - Attachment is obsolete: true
Attachment #8728356 - Flags: review?(luke)
Attachment #8728458 - Flags: review?(luke)
Comment on attachment 8728458 [details] [diff] [review] 1251308.patch Review of attachment 8728458 [details] [diff] [review]: ----------------------------------------------------------------- ::: js/src/jsapi.cpp @@ +6031,5 @@ > +} > + > +void AutoFilename::setScriptSource(void* p) > +{ > + MOZ_ASSERT(!ss_); Probably also good to MOZ_ASSERT(!get()); @@ +6048,5 @@ > +} > + > +void AutoFilename::setOwned(char* filename) > +{ > + filename_ = AsVariant(UniqueChars(filename)); and here too
Attachment #8728458 - Flags: review?(luke) → review+
Comment on attachment 8728458 [details] [diff] [review] 1251308.patch This doesn't need sec-approval+ to land as it is a sec-moderate rated issue. We only require approval for sec-high and sec-critical rated issues. Land away!
Carrying forward r+, with suggested changes + have setOwned take a UniqueChars&& param, to avoid the silly sequence: UniqueChars copy = ...; filename->setOwned(copy.release()); // in AutoFilename::setOwned(char* filename): variant = AsVariant<UniqueChars>(UniqueChars(filename)); (this was also triggering a build issue on windows, which thought it had to use the deleted copy ctor of UniqueChars and not the move ctor). Posting to try and then to inbound.
Attachment #8728458 - Attachment is obsolete: true
Attachment #8729491 - Flags: review+
For what it's worth, the build error was due to the fact that the AutoFilename was annotated with JS_PUBLIC_API and the copy assignment operator not explicitly deleted. Updated the patch to reflect that, try running it right now.
Carrying forward r+. Updated patch with deleted copy assignment operator in AutoFilename to make msvc happy. Approval Request Comment [Feature/regressing bug #]: bug 1229642 [User impact if declined]: potential way to OOM / instacrash the browser. No possible exploits a priori. [Describe test coverage new/current, TreeHerder]: green on try, pushed on inbound today [Risks and why]: fairly low; this reverts and adapts code to former behavior [String/UUID change made/needed]: n/a
Attachment #8729491 - Attachment is obsolete: true
Attachment #8730186 - Flags: review+
Attachment #8730186 - Flags: approval-mozilla-beta?
Attachment #8730186 - Flags: approval-mozilla-aurora?
Status: ASSIGNED → RESOLVED
Closed: 5 years ago
status-firefox48: --- → fixed
Resolution: --- → FIXED
Target Milestone: --- → mozilla48
Group: javascript-core-security → core-security-release.
Flags: needinfo?(bbouvier)
(In reply to Ritu Kothari (:ritu) from comment #33) >. This is what Luke suggested in comment 11 too, yeah. If my understanding is correct, the worse that can happen is insta-crashing the browser, but bz's initial comment suggests it could be worse than that. That being said, the code has now landed on Nightly and has passed all tests. I am fine with not uplifting, if other people are fine too (bz, luke?).
Flags: needinfo?(bbouvier)
Not backporting is probably ok.
Comment on attachment 8730186 [details] [diff] [review] 1251308.patch Everybody agrees that given the size of the code change and possible risk to quality, it's ok to let this ride the trains.
Attachment #8730186 - Flags: approval-mozilla-beta?
Attachment #8730186 - Flags: approval-mozilla-beta-
Attachment #8730186 - Flags: approval-mozilla-aurora?
Attachment #8730186 - Flags: approval-mozilla-aurora-
Whiteboard: [post-critsmash-triage]
Whiteboard: [post-critsmash-triage] → [post-critsmash-triage][adv-main48+]
Group: core-security-release | https://bugzilla.mozilla.org/show_bug.cgi?id=1251308 | CC-MAIN-2020-45 | refinedweb | 2,590 | 63.7 |
In this chapter, we will cover the following recipes:
Configuring the Dart environment
Setting up the checked and production modes
Rapid Dart Editor troubleshooting
Hosting your own private pub mirror
Using Sublime Text 2 as an IDE
Compiling your app to JavaScript
Debugging your app in JavaScript for Chrome
Using the command-line tools
Solving problems when pub get fails
Shrinking the size of your app
Making a system call
Using snapshotting
Getting information from the operating system
This chapter is about increasing our mastery of the Dart platform. Dart is Google's new language for the modern web, web clients, as well as server applications. Compared to JavaScript, Dart is a higher-level language so it will yield better productivity. Moreover, it delivers increased performance. To tame all that power, we need a good working environment, which is precisely what Dart Editor provides. Dart Editor is quite a comprehensive environment in its own right and it is worthwhile to know the more advanced and hidden features it exposes. Some functionalities are only available in the command-line tools, so we must discuss these as well.
This recipe will help customize the Dart environment according to our requirements. Here, we configure the following:
Defining a
DART_SDKenvironment variable
Making
dart-sdk\binavailable for the execution of the Dart command-line tools
We assume that you have a working Dart environment installed on your machine. If not, go to and choose Option 1 for your platform, which is the complete bundle. Downloading and uncompressing it will produce a folder named
dart, which will contain everything you need. Put this in a directory of your choice. This could be anything, but for convenience keep it short, such as
d:\dart on Windows or
~/dart on Linux. On OS X, you can just drop the directory in the
App folder.
Create a
DART_SDKenvironment variable that contains the path to the
dart-sdkfolder. On Windows, create and set
DART_SDKto
d:\dart\dart-sdkor
<your-dart-sdk-path>\dart-sdkwhen using a dart from another folder (if you need more information on how to do this, refer to). On Linux, add this to your configuration file
.bashrcand/or
.profileusing the
export DART_SDK=~/dart/dart-sdkcode. On OS X, export
DART_SDK=/Applications/dart/dart-sdkor in general export
DART_SDK=/path/to/dart-sdk.
The installation directory has a subfolder
dart-sdk\bin, which contains the command-line tools. Add this subfolder to the path of your environment. On Windows, add
%DART_SDK%\bininstead to the front of the path (system environment) variable and click on OK. On Linux or OS X, add
export PATH=$PATH:$DART_SDK/binto your configuration file.
Reset your environment configuration file or reboot your machine afterwards for the changes to take effect.
Setting the
DART_SDK environment variable, for example, enables plugins such as dart-maven to search for the Dart SDK (dart-maven is a plugin that provides integration for Google Dart into a maven-build process). If the OS of your machine knows the path where the Dart tools reside, you can start any of them (such as the Dart VM or dartanalyzer) anywhere in a terminal or command-line session.
Test the environment variable by typing
dart in a terminal and press Enter. You should see the following help text:
Usage: dart [<vm-flags>] <dart-script-file> [<dart-options>]
Executes the Dart script passed as <dart-script-file>
When developing or maintaining, an app's execution speed is not so important, but information about the program's execution is. On the other hand, when the app is put in a customer environment to run, the requirements are nearly the opposite; speed is of utmost importance, and the less information the program reveals about itself, the better. That's why when an app runs in the Dart Virtual Machine (VM), it can do so in two runtime modes:
The Checked mode: This is also known as the debug mode. The checked mode is used during development and gives you warnings and errors of possible bugs in the code.
The Production mode: This is also known as the release mode. You deploy an app in the production mode when you want it to run as fast as possible, unhindered by code checks.
Open your app in Dart Editor and select the startup web page or Dart script, usually
web\index.html.
When working in Dart Editor, the checked mode is the default mode. If you want the production mode, open the Run menu and select Manage Launches (Ctrl + Shift + M). The Manage Launches window appears, as shown in the following screenshot:
The Manage Launches window
Under Dartium settings, you will see the checkbox Run in checked mode. (If you have selected a Dart script, it will be under the header VM settings.) Uncheck this to run the script in the production mode. Next, click on Apply and then on Close, or on Run immediately. This setting will remain in place until you change it again.
Scripts that are started on the command line (or in a batch file) with the
dart command run in the Dart VM and thus in the production mode. If you want to run the Dart VM in the checked mode, you have to explicitly state that with the following command:
dart –c script.dart or: dart --checked script.dart
You can start Dartium (this is Chromium with the Dart VM) directly by launching the Chrome executable from
dart\chromium; by default, it runs Dart Editor in the production mode. If you would like to start Dartium in the checked mode, you can do this as follows:
On Windows, in the
dart\chromiumfolder, click on the
chromefile
On Linux, in the
~/dart/chromiumfolder, open the
./chromefile
On OS X, open the
DART_FLAGSfolder and then open
path/Chromium.app
Verify this setting by going to the following address in the Chrome browser that you just started
chromium://version.
When a web app runs in the Dart VM in Chrome, it will run in the production mode, by default.
In the checked mode, types are checked by calling assertions of the form
assert
(var1 is T) to make sure that
var1 is of type
T. This happens whenever you perform assignments, pass parameters to a function, or return results from a function.
However, Dart is a dynamic language where types are optional. That's why the VM must, in the production mode, execute your code as if the
type annotations (such as
int n) do not exist; they are effectively thrown away. So at runtime, the following statement
int x = 1 is equivalent to
var x = 1.
A binding
x is created but the
type annotation is not used.
With the checked mode, Dart helps you catch type errors during development. This is in contrast to the other dynamic languages, such as Python, Ruby, and JavaScript, where these are only caught during testing, or much worse, they provoke runtime exceptions. You can easily check whether your Dart app runs in the checked mode or not by calling the function
isCheckedMode() from
main() (see the script
test_checked_mode\bin\ test_checked_mode.dart in the
Chapter 1 folder of the code bundle), as shown in the following code:
main() { isCheckedMode(); // your code starts here } void isCheckedMode() { try { int n = ''; throw new Exception("Checked Mode is disabled!"); } on TypeError { print("Checked Mode is enabled!"); } }
Tip
Downloading the example code
You can download the example code files for all Packt books you have purchased from your account at. If you purchased this book elsewhere, you can visit and register to have the files e-mailed directly to you.
The exception message will be shown in the browser console. Be sure to remove this call or comment it out before deploying it to the production mode; we don't want an exception at runtime!
Dart Editor is based upon the Eclipse Integrated Development Environment (IDE), so it needs the Java VM to run. Sometimes, problems can arise because of this; if this is the case, be sure to consult the Dart Editor Troubleshooting page on the Dart website at.
Some of the JVM settings used by Dart Editor are stored in the
DartEditor.ini file in the
dart installation directory. This typically contains the following settings (on a Windows system):
-data @user.home\DartEditor -vmargs -d64 -Dosgi.requiredJavaVersion=1.6 -Dfile.encoding=UTF-8 -XX:MaxPermSize=128m -Xms256m -Xmx2000m
If you notice strange or unwanted behavior in the editor, deleting the
settings folder pointed to by
–data and its subfolders can restore things to normal. This folder can be found at different locations depending on the OS; the locations are as follows:
On a Windows system,
C:\Users\{your username}\DartEditor
On a Linux system,
$HOME/.dartEditor
On an OS X system,
$HOME/Library/Application Support/DartEditor
Deleting the settings folder doesn't harm your system because a new settings folder is created as soon as you reopen Dart Editor. You will have to reload your projects though. If you want to save the old settings, you can rename the folder instead of just deleting it; this way, you can revert to the old settings if you ever want to.
The settings for data points to the
DartEditor folder are in the users home directory, which contains various settings (the metadata) for the editor. Clearing all the settings removes the metadata the editor uses.
The -d64 or –d32 value specifies the bit width necessary for the JVM. You can check these settings for your installation by issuing the command
java –version in a terminal session, whose output will be as follows:
java version "1.7.0_51"
Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)
If this does not correspond with the
–d setting, make sure that your downloaded Dart Editor and the installed JVM have the same bit width, by downloading a JVM for your bit width.
Tip
If you work with many Dart projects and/or large files, the memory consumption of the JVM will grow accordingly and your editor will become very slow and unresponsive.
Working within a 32-bit environment will pretty much limit you to 1GB memory consumption, so if you see this behavior, it is recommended to switch to a 64-bit system (Dart Editor and JVM). You can then also set the value of the
–Xmx parameter (which is by default set to 2000m = 2 GB) to a higher setting, according to the amount of memory you have installed. This will visibly improve the loading and working speed of your editor!
If your JVM is not installed in the default location, you can add the following line to the
.ini file in the line before
-vmargs:
-vm /full/path/to/java
If you face a problem, it might be solved by upgrading Dart SDK and the Dart Editor to the latest version. In the Dart Editor menu, select Help and then About Dart Editor. If a new version is available, this will automatically download, and when done, click on Apply the update.
Another possibility for when the pub repository is not reachable (because you have no Internet access or work behind a very strict firewall) is to host your own private pub mirror.
Follow these steps to host your own private pub mirror:
You need a server that speaks to the pub's HTTP API. Documentation on that standalone API does not yet exist, but the main pub server running at pub.dartlang.org is open source with its code living at. To run the server locally, go through these steps:
Install the App Engine SDK for Python.
Verify that its path is in
$PATH.
Install the pip installation file, beautifulsoup4, and pycrypto webtest packages.
From the top-level directory, run this command to start the pub server
dev_appserver.py app.
Verify that it works in your browser with.
You need to set a
PUB_HOSTED_URLenvironment variable to point to the URL of your mirror server, so that the pub will look there to download the hosted dependencies, for example,
PUB_HOSTED_URL =:[email protected]:8042.
Manually upload the packages you need to your server, visit(sign in as an administrator), go to the Private Key tab, and enter any string into the private key field.
The server from is written in Python and is made to run on Google App Engine, but it can be run from an Intranet as well.
Dart Editor is a great environment, but Sublime Text also has many functionalities and can be used with many other languages, making it the preferred editor for many developers.
You can download Sublime Text free of cost for evaluation, however, for continued use, a license must be purchased from.
Tim Armstrong from Google developed a Dart plugin for Sublime Text, which can be downloaded from GitHub at, or you can find it in the code download with this book. The easiest way to get started is to install the Package Control plugin first by following the instructions at.
In Sublime Text, press Ctrl + Shi ft + P (Windows or Linux) or Cmd + Shift + P (OS X; this goes for all the following commands), click on Install Package to choose that option, and then click and choose Dart to install the plugin. Any Dart file you then open shows the highlighted syntax, matching brackets, and so on.
Also, click on Menu Preferences, Settings, and then on User and add the path to your dart-sdk as the first line in this JSON file:
{ "dartsdk_path": "path\to\dart-sdk", … }
If you want to manually install this plugin, copy the contents of the
dart-sublime-bundle-master folder to a new directory named
Dart in the
Sublime packages directory. This directory has different locations on different OS. They are as follows:
On Windows, this will likely be found at
C:\Users\{your username}\AppData\Roaming\Sublime Text 2\Packages
On Linux, this will likely be found at
$HOME/Sublime Text 2/Pristine Packages
On OSX, this will likely be found at
~/Library/Application Support/Sublime Text 2/Packages
The plugin has a number of code snippets to facilitate working with Dart, for example, typing
lib expands the library statement. Other snippets include
imp for import,
class for a class template,
method for a method template, and
main for a
main() function. Typing a snippet in the pop-up window after pressing Ctrl + SHIFT + P lets you see a list of all the snippets. Use Ctrl + / to (un)comment the selected code text.
The plugin has also made a build system for you. Ctrl + B will invoke the dartanalyzer and then compile the Dart code to JavaScript with the dart2js compiler, as shown in the following screenshot. Editing and saving a
pubspec.yaml file will automatically invoke the
pub get command.
Working in Sublime Text 2
Deploying a Dart app in a browser means running it in a JavaScript engine, so the Dart code has to first be compiled to JavaScript. This is done through the
dart2js tool, which is itself written in Dart and lives in the
bin subfolder of
dart-sdk. The tool is also nicely integrated in Dart Editor.
Right-click on
.htmlor the
.dartfile and select Run as JavaScript.
Alternatively, you can right-click on the
pubspec.yamlfile and select Pub Build (generates JS) from the context menu. You can also click on the Tools menu while selecting the same file, and then on Pub Build.
The first option invokes the
pub serve command to start a local web server invoking dart2js along its way in the checked mode. However, the compiled
.dart.js file is served from the memory by the internal development web server on. This is only good for development testing.
In the second option, the generated files are written to disk in a subfolder
build/web of your app. In this way, you can copy this folder to a production web server and deploy your web app to run in all the modern web browsers (you only need to deploy the
.js file, not the
.precompiled.js file or the
.map file). However, Pub Build in Dart Editor enables the checked mode by default; use the
pub build command from a console for the production mode.
The
dart2js file can also be run from the command line, which is the preferred way to build non-web apps.
Tip
The command to compile the dart script to an output file
prorabbits.js using
-o <file> or
-out <file> is
dart2js -o prorabbits.js prorabbits.dart.
If you want to enable the checked mode in the JavaScript version, use the
–c or
- checked option such as
dart2js –c -o prorabbits.js prorabbits.dart. The command
dart2js –vh gives a detailed overview of all the options.
The
pub build command, issued on a command line in the folder where
pubspec.yaml is located, will do the same as in option 2 previously, but also apply the JavaScript shrinking step; the following is an example output for app
test_pub:
f:\code\test_pub>pub build
Building test_pub... (0.3s)
[Info from Dart2JS]:
Compiling test_pub|web/test.dart...
[Info from Dart2JS]: Took 0:00:01.770028 to compile test_pub|web/test.dart. Built 165 files to "build"
You can minify both the JavaScript version and the Dart version of your app.
To produce more readable JavaScript code (instead of the minified version of the production mode, refer to the Shrinking the size of your app recipe), use the command
pub build --mode=debug, which is the default command in Dart Editor.
Alternatively, you can add the following transformers section to your app's
pubspec.yaml file:
name: test_pub description: testing pub transformers: - $dart2js: minify: false checked: true dependencies: js: any dev_dependencies: unittest: any
Note
For more information, refer to.
The
dart2js tool can also be used as Dart to Dart to create a single
.dart file that contains everything you need for the app with this command:
dart2js --output-type=dart --minify -oapp.complete.dart app.dart
This takes the Dart app, tree shakes it, minifies it, and generates a single
.dart file to deploy. The advantage is that it pulls in dependencies like third-party libraries and tree shakes it to eliminate the unused parts.
In this recipe, we will examine how to debug your app in the Chrome browser.
From the menu in the upper right-hand corner, select Tools and then Developer Tools.
Verify via Settings (which is the wheel icon in the upper right corner of the Developer Tools section) that the Enable JavaScript source maps option is turned on. Make sure that debugging is enabled, either on all the exceptions or only on uncaught exceptions.
Choose Sources in the Developer Tools menu, then press Ctrl + O to open a file browser and select the Dart script you wish to debug.
Now reload the application and you will see that the execution stops at the breakpoint. On the right, you have a debug menu, which allows you to inspect scope variables, watch the call stack, and even create watch expressions, as shown in the following screenshot:
Debugging JS in Chrome
Chrome uses the source map file
<file>.js.map generated while compiling the JavaScript code to map the Dart code to the JavaScript code in order to be able to debug it.
In this recipe, we will examine how to debug your app in the Firefox browser.
In Firefox, the source maps feature is not yet implemented. Use Shift + F2 to get the developer toolbar and the command line. In the top menu, you will see Debugger. Place a breakpoint and reload the file. Code execution then stops and you can inspect the value of the variables, as shown in the following screenshot:
Debugging JS in Firefox
Some things can be done more easily on the command-line, or are simply not (yet) included in Dart Editor. These tools are found in
dart-sdk/bin. They consist of the following:
For every tool, it might be useful to know or check its version. This is done with the
-- versionoption such as
dart --versionwith a typical output of Dart VM version: 1.3.0 (Tue Apr 08 09:06:23 2014) on "windows_ia32".
The
dart –v –hoption lists and discusses all the possible options of the VM. Many tools also take the
--package_root=<path>or
–p=<path>option to indicate where the packages used in the imports reside on the filesystem.
dartanalyzer is written in Java and works in Dart Editor whenever a project is imported or Dart code is changed; it is started
dartanalyzer prorabbits.dartwith output:
Analyzing prorabbits.dart...
No issues found (or possibly errors and hints to improve the code)
The previous output verifies that the code conforms to the language specification, pub functionality is built into Dart Editor, but the tool can also be used from the command line (refer to
test_pub). To fetch packages (for example, for the
test_pubapp), use the following command in the folder where
pubspec.yamllives,
pub get, with a typical output as follows:
Resolving dependencies... (6.6s) Got dependencies!
A packages folder is created with symlinks to the central
packagecache on your machine. The latest versions are downloaded and the package versions are registered in the
pubspec.lockfile, so that your app can only use these versions.
If you want to get a newer version of a package, use the
pub upgradecommand. You can use the
–vand
-- traceoptions to produce a detailed output to verify its workings.
The
dartfmttool is also a built in Dart Editor. Right-click on any Dart file and choose Format from the context menu. This applies transformations to the code so that it conforms to the Dart Style Guide, which can be seen at can also use it from the command line, but then the default operation mode is cleaning up whitespace. Use the
–toption to apply code transforms such as
dartfmt -t –w bank_terminal.dart.
Solving problems when
pub getfails
Compiling your app to JavaScript (for
pub build)
Documenting your code from Chapter 2, Structuring, testing, and deploying an application
Publishing your app to a pub (for pub publishing)
Using snapshotting to start an app in Dart VM
For additional information, refer to
The pub package manager is a complex tool with many functionalities, so it is not surprising that occasionally something goes wrong. The
pub get command downloads all the libraries needed by your app, as specified in the
pubspec.yaml file. Running
pub get behind a proxy or firewall used to be a problem, but it was solved in the majority of cases. If this still haunts you, look at the corresponding section at.
This recipe is especially useful when you encounter the following error in your Dart console while trying to open a project in Dart Editor during the pub get phase:
Pub install fails with 'Deletion failed'
First try this; right-click on your project and select Close Folder. Then, restart the editor and open your project again. In many cases, your project will load fine. If this does not work, try the
pub
gun command:
Delete the pub cache folder from
C:\Users\{your username}\AppData\Roaming\Pub.
Delete all the
packagesfolders in your project (also in subfolders).
Delete the
pubspec.lockfile in your project.
Run
pub getagain from a command line or select Tools in the Dart Editor menu, and then select Pub Get.
The
Pub\Cache subfolder contains all the packages that have been downloaded in your Dart environment. Your project contains symlinks to the projects in this cache, which sometimes go wrong, mostly on Windows. The
pubspeck.lock file keeps the downloaded projects constrained to certain versions; removing this constraint can also be helpful.
Temporarily disabling the virus checker on your system can also help
pub get to succeed when it fails with the virus checker on.
The following script by Richard Schmidt that downloads packages from the pub repository and unpacks it into your Dart cache may also prove to be helpful for this error, which can be found at. Use it as
dart downloadFromPub.dart package m.n.l.
Here,
package is the package you want to install and
m.n.l is the version number such as 0.8.1. You will need to build this like any other dart package, and if during this process the
pub get command fails, you will have to download the package and unpack it manually; however, from then on, you should be able to use this script to work around this issue.
When
pub get fails in Dart Editor, try the following on the command line to get more information on the possible reasons for the
pub --trace 'upgrade' failure.
There is now also a way to condense these four steps into one command in a terminal as follows:
pub cache repair
On the web, the size of the JavaScript version of your app matters. For this reason,
dart2js is optimized to produce the smallest possible JavaScript files.
When you're ready to deploy, minify the size of the generated JavaScript with
–m or
-- minify, as shown in the following command:
dart2js –m -o prorabbits.js prorabbits.dart
Using
pub build on the command line minifies JavaScript by default because this command is meant for deployment.
The
dart2js file utilizes a tree-shaking feature; only code that is necessary during execution is retained, that is, functions, classes, and libraries that are not called are excluded from the produced
.js file. The minification process further reduces the size by replacing the names of variables, functions, and so on with shorter names and moving code around to use a few lines.
Be careful when you use reflection.
Using reflection in the Dart code prevents tree shaking. So only import the
dart:mirrors library when you really have to. In this case, include an
@MirrorsUsed annotation, as shown in the following code:
library mylib; @MirrorsUsed(targets: 'mylib') import 'dart:mirrors';
In the previous code, all the names and entities (classes, functions, and so on) inside of
mylib will be retained in the generated code to use reflection. So create a separate library to hold the class that is using mirrors.
A fairly common use case is that you need to call another program from your Dart app, or an operating system command. For this, the abstract class
Process in the
dart:io package is created.
Use the
run method to begin an external program as shown in the following code snippet, where we start Notepad on a Windows system, which shows the question to open a new file
tst.txt (refer to
make_system_call\bin\ make_system_call.dart):
import 'dart:io'; main() { // running an external program process without interaction: Process.run('notepad', ['tst.txt']).then((ProcessResult rs){ print(rs.exitCode); print(rs.stdout); print(rs.stderr); }); }
If the process is an OS command, use the
runInShell argument, as shown in the following code:
Process.run('dir',[], runInShell:true).then((ProcessResult rs) { … }
The
Run command returns a Future of type
ProcessResult, which you can interrogate for its exit code or any messages. The exit code is OS-specific, but usually a negative value indicates an execution problem.
Use the
start method if your Dart code has to interact with the process by writing to its
stdin stream or listening to its
stdout stream.
One, but before it starts executing. This enables a much faster startup because the work of tokenizing and parsing the app's code was already done in the snapshot.
This recipe is intended for server apps or command-line apps. A browser with a built-in Dart VM can snapshot your web app automatically and store that in the browser cache; the next time the app is requested, it starts up way faster from its snapshot. Because a snapshot is in fact a serialized form of an object(s), this is also the way the Dart VM uses to pass objects between isolates. The folder
dart/dart-sdk/bin/snapshots contains snapshots of the main Dart tools.
Occasionally, your app needs access to the operating system, for example, to get the value of an environment variable to know where you are in the filesystem, or to get the number of processors when working with isolates. Refer to the Using isolates in the Dart VM and Using isolates in web apps recipes, in Chapter 8, Working with Futures, Tasks, and Isolates, for more information on working with isolates.
In this recipe, you will see how to interact with the underlying operating system on which your app runs by making system calls and getting information from the system.
The
Platform class provides you with information about the OS and the computer the app is executing on. It lives in
dart:io, so we need to import this library.
The following script shows the use of some interesting options (refer to the code files
tools\
code\
platform\
bin\
platform.dart of this chapter):
import 'dart:io'; Map env = Platform.environment; void main() { print('We run from this VM: ${Platform.executable}'); // getting the OS and Dart version: print('Our OS is: ${Platform.operatingSystem}'); print('We are running Dart version: ${Platform.version}'); if (!Platform.isLinux) { print('We are not running on Linux here!'); } // getting the number of processors: int noProcs = Platform.numberOfProcessors; print('no of processors: $noProcs'); // getting the value of environment variables from the Map env: print('OS = ${env["OS"]}'); print('HOMEDRIVE = ${env["HOMEDRIVE"]}'); print('USERNAME = ${env["USERNAME"]}'); print('PATH = ${env["PATH"]}'); // getting the path to the executing Dart script: var path = Platform.script.path; print('We execute at $path'); // on this OS we use this path separator: print('path separator: ${Platform.pathSeparator}'); }
When run, the above code gives the following output:
Our OS is: windows We are running Dart version: 1.3.3 (Wed Apr 16 12:40:55 2014) on "windows_ia32" We are not running on Linux here! no of processors: 8 OS = Windows_NT HOMEDRIVE = C: USERNAME = CVO PATH = C:\mongodb\bin;C:\MinGW\bin;... We execute at /F:/Dartiverse/platform/bin/platform.dart path separator: \
Most of the options are straightforward. You can get the running VM from
Platform.executable. You can get the OS from
Platform.operatingSystem; this can also be tested on a Boolean property such as
Platform.isLinux. The Dart version can be tested with the
Platform.version property. The
Platform.environment option returns a nice map structure for the environment variables of your system, so you can access their values by name, for example, for a variable
envVar, use
var envVar = Platform.environment["envVar"].
To get the path of the executing Dart script, you can use the path property of
Platform.script because the latter returns the absolute URI of the script. When building file paths in your app, you need to know how the components in a path are separated;
Platform.pathSeparator gives you this information. | https://www.packtpub.com/product/dart-cookbook/9781783989621 | CC-MAIN-2022-27 | refinedweb | 5,161 | 61.36 |
umask(2) umask(2)
NAME [Toc] [Back]
umask - set and get file creation mask
SYNOPSIS [Toc] [Back]
#include <sys/stat.h>
mode_t umask(mode_t cmask);
DESCRIPTION [Toc] [Back]
umask() sets the process's file mode creation mask to umask() and
returns the previous value of the mask. Only the file access
permission bits of the masks are used.
The bits set in cmask specify which permission bits to turn off in the
mode of the created file, and should be specified using the symbolic
values defined in stat(5).
EXAMPLES [Toc] [Back]
The following creates a file named path in the current directory with
permissions S_IRWXU|S_IRGRP|S_IXGRP, so that the file can be written
only by its owner, and can be read or executed only by the owner or
processes with group permission, even though group write permission
and all permissions for others are passed in to creat().
#include <sys/types.h>
#include <sys/stat.h>
int fildes;
(void) umask(S_IWGRP|S_IRWXO);
fildes = creat("path", S_IRWXU|S_IRWXG|S_IRWXO);
RETURN VALUE [Toc] [Back]
The previous value of the file mode creation mask is returned.
SEE ALSO [Toc] [Back]
mkdir(1), sh(1), mknod(1M), chmod(2), creat(2), mknod(2), open(2).
STANDARDS CONFORMANCE [Toc] [Back]
umask(): AES, SVID2, SVID3, XPG2, XPG3, XPG4, FIPS 151-2, POSIX.1
Hewlett-Packard Company - 1 - HP-UX 11i Version 2: August 2003 | http://nixdoc.net/man-pages/HP-UX/man2/umask.2.html | CC-MAIN-2013-20 | refinedweb | 229 | 59.64 |
Answered by:
Display issue binding to itemTemplate in a fragment
When I try to bind a ListView to a template using WinJS.UI.ListLayout, it only displays raw JSON in the ListView.
It seems it does not work when the code below is part of a fragment.
So to duplicate just create a new project in VS11, which will use fragments (subpages), and the template will not render.
<div id="regularListIconTextTemplate" data- <h4 data-</h4> <h6 data-</h6> </div> <div id="listView1" data-</div>
The data used is from the sdk sample:
var myData = new WinJS.Binding.List([ { } ]);
Thanks -
Sunday, March 04, 2012 5:59 PM
- Edited by S - Lee Whitney Sunday, March 04, 2012 6:34 PM
- Edited by Jeff SandersMicrosoft employee, Moderator Thursday, March 08, 2012 2:44 PM Better description
Question
Answers
All replies
- Wondering if you have what I see...if you set your ListView to GridLayout does it come out fine? Not what you need of course, but are the templates rendered?Monday, March 05, 2012 7:14 PM
See the walktrough for the ListView:
WinJS.Binding.Template must have a single root element. Create a div element to serve as a parent for the template's contents.
That should get you going!
-Jeff
Jeff Sanders (MSFT)
Monday, March 05, 2012 8:31 PMModerator
- Proposed as answer by Jeff SandersMicrosoft employee, Moderator Monday, March 05, 2012 8:31 PM
- Unproposed as answer by S - Lee Whitney Tuesday, March 06, 2012 1:23 AM
That was helpful to know but did not fix the problem.
It still does not work in a subpage (fragment), but works fine when the template is in the main page (default.html):
<div id="regularListIconTextTemplate" data- <div style="width: 150px; height: 100px;"> <h4 data-</h4> <h6 data-</h6> </div> </div>
Again to duplicate the problem, you just create a new project in VS11 beta and add in this template into the .html page that is loaded into default.html at startup.
Any help appreciated-
Tuesday, March 06, 2012 1:22 AM
- Edited by S - Lee Whitney Tuesday, March 06, 2012 1:23 AM
hi harlequin,
i think i noticed what you are referring to in another situation, but in this specific case the template does not render with GridLayout nor ListLayout.
The data is displayed in raw JSON format, rather than the rendered template format.Tuesday, March 06, 2012 1:26 AM
- I will try and take some time to build a repro of this tomorrow. Can you post what you see?
Jeff Sanders (MSFT)Tuesday, March 06, 2012 8:59 PMModerator
Hi,
I have the same problem but I fix it temporary using a function as itemTemplate.
In HTML markup I only declare itemDatasource property in the listView element:
<div data- </div> <div data- <div class="row"> <div data-</div> <div data-</div> <div data-</div> </div> </div> </div>
... In my fragment ready function I write this code ...
var listViewControl = myListView.winControl; listViewControl.itemTemplate = itemTemplateFunction;
.. and this is my itemTemplateFunction...
function itemTemplateFunction(itemPromise) { return itemPromise.then(function (item) { var template = myItemTemplate.winControl; var data = template.render(item.data); return data; }); };
it works!!!
Anyway, i would like to know why template mechanism doesn't work in a Page Control Fragment.
Cesar Cruz (ilitia Technologies)
Wednesday, March 07, 2012 9:38 AM
- Proposed as answer by Jeff SandersMicrosoft employee, Moderator Wednesday, March 07, 2012 8:24 PM
I've taken the default navigation app (consumer preview) and tried to add a simple listview. The data shows, but it displays in raw json rather than formatted from the template. I've put the static data in default.js so it gets loaded right away, then in homePage.html I've put the template.
Below are the files
// default);(); })();
homepage.html:
<!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>homePage</homePage.css" rel="stylesheet"> <script src="/js/homePage.js"></script> </head> <body> <!-- The content that will be loaded and displayed. --> <div class="fragment homepage"> <header aria- <button class="win-backbutton" aria-<span class="pagetitle">Welcome to Application1! </span></h1> </header> <section aria- <div id="mediumListIconTextTemplate" data- <div style="width: 150px; height: 100px;"> <div> <!-- Displays the "title" field. --> <h4 data-</h4> <!-- Displays the "text" field. --> <h6 data-</h6> </div> </div> </div> <div id="basicListView" data- </div> </section> </div> </body> </html>
Peter Kellner Microsoft MVP • ASPInsider
Wednesday, March 07, 2012 3:31 PM
- Edited by Peter KellnerMVP Wednesday, March 07, 2012 3:32 PM mis spelling
- Merged by Jeff SandersMicrosoft employee, Moderator Thursday, March 08, 2012 2:42 PM Same issue
Hi Peter,
Can you try changing your options attribute slightly to read
itemTemplate: select('#mediumListIconTextTemplate')
instead of just the bare identifier? I believe the trouble is when processAll is being called, the fragment hasn't been parented to the DOM yet, and so the DOM id hasn't been promoted to a global.
Cheers,
-JeffWednesday, March 07, 2012 7:08 PM
If that doesn't work you can add this to homePage.js:
// This function is called whenever a user navigates to this page. It // populates the page elements with the app's data. function ready(element, options) { WinJS.UI.processAll(element).then(function elementsProcessed() { var lv = document.getElementById("basicListView").winControl; lv.itemTemplate = document.getElementById("mediumListIconTextTemplate"); }); }
Senior Dev for Windows Phone ServicesWednesday, March 07, 2012 7:43 PM
I'm doing a presentation tonight on win8 metro (meetup) and have it somewhat figured out but not enough detail to really post. The code I had to add looks something like this. I think it has to do with the bind missing
ui.setOptions(listView, { //itemTemplate: element.querySelector("#legislaturesTemplate"), itemDataSource: DataExampleLeg.itemList.dataSource, oniteminvoked: this.itemInvoked.bind(this) });
Peter Kellner Microsoft MVP • ASPInsiderWednesday, March 07, 2012 7:50 PM
Is your datasource visible in the global namespace or is it inside of a namepace and hidden?
Jeff Sanders (MSFT)Wednesday, March 07, 2012 8:25 PMModerator
My datasource is created inside an anonymous function but I create a namespace called "ApplicationData" to make the data publicily accessible, so my datasource its inside of a namespace and public.
The question is... can it affect to the problem? The JSON representation of the datasource is visible in the listview so the datasource is accessible outside.Thursday, March 08, 2012 12:36 PM
As long as it is visible you should be OK. I will try and build a repro of this.
Jeff Sanders (MSFT)Thursday, March 08, 2012 1:02 PMModerator
Wow, don't I think I could have guessed that in a 100 years, but hey, that's life in a beta right? :)
It's now working well, thanks for your help Jeff -
LeeSaturday, March 10, 2012 2:23 PM | https://social.msdn.microsoft.com/Forums/en-US/0964ab96-994f-4e9b-a85f-bccd07a69eeb/display-issue-binding-to-itemtemplate-in-a-fragment?forum=winappswithhtml5 | CC-MAIN-2015-35 | refinedweb | 1,114 | 56.45 |
Ecto, You Got Some ‘Splainin To Do
Ecto is a powerful tool for interacting with databases, and I thoroughly enjoy using it. But with great power comes great responsibility.
I was looking through a particular part of an app trying to find which of 5 or so queries being made had the highest execution cost. The client was reporting 500 errors in production and I was hoping for a simple change that could be made to a very complicated part of that app.
Manual process is tedious
My process for this was to copy the query logs and params, use a SQL formatter so it could be read, then go paste in the params in the appropriate places. This process takes a few minutes and for long queries, it’s easy to mess up and spend even longer trying to find the syntax errors.
All that to
EXPLAIN was driving me nuts, so I went to hexdocs looking for a better way. I found Ecto.Adapters.SQL.to_sql/3 would return a tuple that included the SQL string and the params to go with. I just needed a way to interpolate that into a complete SQL string.
Searching for a better way
As I searched for something to do that, I discovered something much better. I added
EXPLAIN to the SQL strings and used Ecto.Adapters.SQL.query/4. That way, I was able to quickly put the following in a module as I was troubleshooting to get the job done.
def explain(query) do {sql, params} = Ecto.Adapters.SQL.to_sql(:all, MyApp.Repo, query) results = Ecto.Adapters.SQL.query!(MyApp.Repo, "EXPLAIN " <> sql, params) IO.inspect(results, printable_limit: :infinity) query end
I love simple tools like this and had to share, so it’s been rolled into a package on hex that can be quickly added to any project’s repo module. Let me know if this helps you next time you’re using ecto.
Documentation is available on hexdocs.pm
Additional Ecto Resources
-
-
-
-
- | https://revelry.co/ecto/ | CC-MAIN-2020-29 | refinedweb | 335 | 72.66 |
I'm using PhantomJS page.evaluate() to do some scraping. My problem is that the code I pass to the webkit page is sandboxed, and so has no access to the variables of my main phantom script. This makes it hard make the scraping code generic.
page.open(url, function() {
var foo = 42;
page.evaluate(function() {
// this code has no access to foo
console.log(foo);
});
}
I've had that exact problem. It can be done with a little trickery, because
page.evaluate also can accept a string.
There are several ways to do it, but I use a wrapper called
evaluate, which accepts additional parameters to pass to the function that must be evaluated on the webkit side. You would use it like this:
page.open(url, function() { var foo = 42; evaluate(page, function(foo) { // this code has now has access to foo console.log(foo); }, foo); });
And here is the
evaluate() function:
/* * This function wraps WebPage.evaluate, and offers the possibility to pass * parameters into the webpage function. The PhantomJS issue is here: * * * * This is from comment #43. */ function evaluate(page, func) { var args = [].slice.call(arguments, 2); var fn = "function() { return (" + func.toString() + ").apply(this, " + JSON.stringify(args) + ");}"; return page.evaluate(fn); } | https://codedump.io/share/cFooiENLwNe3/1/pass-arguments-with-pageevaluate | CC-MAIN-2018-22 | refinedweb | 205 | 61.43 |
@neverdie for us - yes. For other folks, I think the advantage is the number of targets it supports and the license. Equivalent Segger costs hundreds of $
BTW, Sandeep added BMP support into his core after I raised the issue
@neverdie for us - yes. For other folks, I think the advantage is the number of targets it supports and the license. Equivalent Segger costs hundreds of $
BTW, Sandeep added BMP support into his core after I raised the issue
@neverdie said in nRF5 Bluetooth action!:
else go square
if you ask me, go this way given the BT module itself is already beyond the circular footprint
@alowhum just buy a real Blue Pill (around $2) and convert it into BMP.
Then you''ll get both a programmer and an USB-serial that you can use to get data from NRF52 UART
@nca78 BTW, HolyIoT makes a similar beacon but nrf52 based. Should be much more energy efficient. The price is about $7
Guys, pls. have in mind that Ebyte short NON pa+lna module uses pa+lna pinout.
Took me some time to figure it out
@rozpruwacz I would strongly encourage you do start with nrf52832 ebyte mosule coupled with Nevrdie's breakout board. It is a) proven b) cheap c) breadboard friendly
Got it, thx. Just to clarify: if I want eg 5 x esp8266 nodes, do I need to add them as 5 gateways, or I add one as a a gateway and other 4 connect to it?
@sancho119 said in GUIDE - NRF5 / NRF51 / NRF52 for beginners:
Hello,
Some news !
i buy a Jlink, and success in erase my nrf52832 with nrf prog, using method of berkseo.
Next i try to upload a basic sketch (blink) :
#define MY_RADIO_NRF5_ESB
#include <nrf.h>
#include <MySensors.h>
void setup() {
Serial.begin(9600);
hwPinMode(LED_BUILTIN,OUTPUT_H0H1);
}
void loop() {
digitalWrite(LED_BUILTIN, HIGH); // turn the LED on (HIGH is the voltage level)
delay(1000); // wait for a second
digitalWrite(LED_BUILTIN, LOW); // turn the LED off by making the voltage LOW
delay(1000); // wait for a second
}
I have this :
Open On-Chip Debugger 0.10.0-dev-00254-g696fc0a (2016-04-10-10:13)
Licensed under GNU GPL v2
For bug reports, read
debug_level: 2
0
adapter speed: 10000 kHz
cortex_m reset_config sysresetreq
Info : No device selected, using first device.
Info : J-Link ARM-OB STM32 compiled Aug 22 2012 19:52:04
Info : Hardware version: 7.00
Info : VTarget = 3.300 V
Info : clock speed 10000 kHz
Info : SWD IDCODE 0x2ba01477
Info : nrf52.cpu: hardware has 6 breakpoints, 4 watchpoints
nrf52.cpu: target state: halted
target halted due to debug-request, current mode: Thread
xPSR: 0x01000000 pc: 0x0000051c msp: 0x20010000
** Programming Started **
auto erase enabled
Warn : Unknown device (HWID 0x00000139)
Warn : not enough working area available(requested 32)
Warn : no working area available, falling back to slow memory writes
wrote 12288 bytes from file C:\Users\Sancho\AppData\Local\Temp\arduino_build_76789/NRF5_blink_led8.ino.hex in 4.155332s (2.888 KiB/s)
** Programming Finished **
** Verify Started **
Warn : not enough working area available(requested 52)
verified 10812 bytes in 0.165932s (63.632 KiB/s)
** Verified OK **
** Resetting Target **
shutdown command invoked
I think its good, but when i put a led with a resistance on P0.08, nothing
could you help me more ?
LED_BUILTIN corresponds to po17
I thought I mastered nrf5 platform bit it looks I am not.
I have a PCB where i2c sensor is hardwired to po30 and po31 of nrf52832.
What changes I have to make in MyBoardNRF5.cpp?
Shall I bring lines 28/29 that have "A4" and "A5" in their desciption to 30th and 31st position in the list?
@ileneken3 have you returned the jumpers back?
@omemanti said in [nRF5 action!]Maybe that's the €3 solution for unlocking.
BMP is enough to unlock. BluePill costs ca. €2 and it's easily convertable to BMP
Can you pls. upload the design files?
@rozpruwacz said in my first nrf5 ... NRF51/NRF52 which is better for MySensors ?:
So smd modules are no good for me
If so, your only choice is nrf52DK. Solid investment given you get jlink programmer with it
how do you guys solder nrf52840 modules without reflow oven?
they have many pads at the bottom
@hugob said in my first nrf5 ... NRF51/NRF52 which is better for MySensors ?:
- The programmer on the Arduino IDE is not working for me ("No J-Link" error, while there is a J-Link interface available), so I export a HEX fle form Arduino IDE and program the board with the nRFConnect tool from Nordic.
if read sandeep's notes on Githun, you'll find you have to replace the driver with Zaddig but then Jlink will stop functioning in Keil.
That's why I use BMP for Arduino-style programming as it has its own drivers and do not ruin jlink installation
@rozpruwacz I would strongly encourage you do start with nrf52832 ebyte mosule coupled with Nevrdie's breakout board. It is a) proven b) cheap c) breadboard friendly | https://forum.mysensors.org/user/toyman | CC-MAIN-2022-33 | refinedweb | 837 | 72.76 |
Create and publish your own components with react and gulp
Few months ago I tried to publish my own components to use in different react projects, during my search I found two tools that stood out, Bit and Storybook, at that time I decided to use bit which was easier to configure and use, to a certain extent like that it was, but the moment of truth arrived, to publish my components; certainly bit has documentation that explains how you should configure your project to publish your components according to the language you are using, but in my case, the problem was that bit used third-party libraries to transpile the code from typescript to javascript, for my fortune the library that was supposed to accomplish this process did not work, since the project was abandoned and therefore the code transpilation didn’t complete correctly. So I opted to transpile manually and publish that code, everything was fine, until I realized something, each user who wanted to install the components in their projects, had to log in with bit to be able to use them, this was a trouble, especially when you work under the principles of integration and continuous distribution, for the reason that you has to configure your pipelines (in my case Jenkins) so that they have this access, and on occasions this process for them to log in had problems.
So I decided to try Storybook but it did not convince me at all, it seemed to me that it complicated certain things, it was there that I decided to do the code transpilation process manually and use npm for the publication of the components. The advantage of doing this is that you can have full control of your project, without the need for extra configurations or access problems (unless you publish your component with restricted access).
The principle of the project is simple, you create your component using react, the code is transpiled into javascript and then you publish it in your npm account. Next I will explain in detail how to use the project so that you can publish your own components, in this example I will publish a component called modal.
Basically the project uses the base of any react project using the typescript template.
npx create-react-app reusable-components-react — template typescript
In this case I added some extra files to unit test the components (using Jest) and transpile the code. The structure is simple and consists of three folders components, container and demos.
The container folder is basically the initial screen where we are going to register by means of list items the path to redirect ourselves to each component that we are going to test. It’s important to record the paths in the App.tsx file, otherwise you won’t be able to access your workspace.
import React from "react";
import { BrowserRouter, Route, Switch } from "react-router-dom";
import { hot } from "react-hot-loader/root";
import ModalDemoLayout from "./demos/ModalDemo";
import { MainSection } from "./container/main";
const Root = () => {
return (
<React.StrictMode>
<BrowserRouter>
<Switch>
<Route path="/demo/modal" component={ModalDemoLayout} exact />
<Route path="/" component={MainSection} exact />
</Switch>
</BrowserRouter>
</React.StrictMode>
);
};
export default hot(Root);
Just as Storybook presents a graphical interface to be able to visualize the components, here we can organize them in a simpler way, we just have to add the name of the path in which we are going to test our component and select the name of our component to redirect us to our area work or we can enter the route directly in the main.tsx file.
Now comes the part in which we build our components, for this we must be clear that all the components that we want to publish must be in the components folder, as well as consist of the following:
* Index.tsx
* css file (in case you need it)
* Unit test file (not mandatory, but good practice)
* README.md (documentation of the component being created)
The index.tsx file is basically all the code of our component, the name should not be changed, since it is the file that is searched for in each component at the time of code transpilation, the rest of the files can be named as desired since contain css styles and unit tests.
Styles can be applied using two ways, the first is to create a css file to modify the styles of the component elements, or if Material UI is used, makeStyles function can be used, which allows you to create or modify styles but within the same component. Now, the unit test files will not be published together with the component, nor will they prevent it from being published, it is simply a good practice when developing and validating possible errors, thus avoiding having to launch multiple versions to correct something that could have detected with a unit test.
Once we have our components ready, we execute the following command:
npm run build
This command transpiles the code into javascript and generates a directory called lib, which contains all the components that we are going to publish.
Inside the directory we can see an index.js file, which exports all the components for later use in the project in which they are going to be used; you can also observe a package.json, this file basically contains the information to publish the components, including the dependencies necessary to work and the version.
If you wonder how it is possible to generate these files automatically, and not have to register the packages that should be required for your components, the answer is Gulp, if you are not familiar with this library, here is a brief explanation, basically gulp allows us to automate certain processes, by defining tasks, these tasks can execute console commands and code that you need. In this case, it helps us with three tasks: transpiling the code, creating the index.js, and generating the package.json.
To understand this better I am going to explain each task, the first and most important thing is the transpilation of code, this transformation requires a file tsconfig.json in this case it is called lib.tsconfig.json which contains the rules for transpilation, using the next command:
tsc -p lib.tsconfig.json
We can generate the javascript code from typescript, the next thing is to copy the css and md files to each directory, as a result we have the following:
tsc -p lib.tsconfig.json && copyfiles src/components/*/*.css lib/ && copyfiles src/components/*/*.md lib/ && copyfiles README.md lib/src/components
Once our code is ready, an index with all the components must be generated, for that a library called barrel-me is used, which automatically generates a file with all the indexes from a directory.
barrel-me -d lib/src/components/
Finally, the generation of the package.json file, this file is important since without it you cannot publish the components in npm, for this it uses two files, one is in assets/publish_package_template.json and the other is the package. json found in the root; the file in assets contains the information that indicates which is the main file and the tags with which the package to be published will be identified
Now, why do we need the root package.json, since it contains the version with which we are going to publish our package, as well as its name; but above all the dependencies that each component requires to function correctly, this is how the third task is in charge of generating a package.json using two base files, but more importantly it extracts the libraries that each component uses and compares them with the package.json located in the root to list all the necessary dependencies and attach them to the final file.
Understood this we proceed to publish the package in npm using
publish-cmp
If everything has gone well we should be able to see our package published in npm.
The only thing that remains is to install the package in the projects that we want.
You can see the code of project here:
Here is npm library published:
And here is a demo with library published in npm:
Note: You must be careful with libraries versions of you custom components and your projects where you want implement
Questions? Comments? Contact me at ridouku@gmail.com | https://ridouku.medium.com/create-and-publish-your-own-components-with-react-and-gulp-63a8f27e5571?source=post_internal_links---------1---------------------------- | CC-MAIN-2021-39 | refinedweb | 1,398 | 52.23 |
rename the default namespace in Windows Phone app project
This article explains how you can change the namespace in a Windows Phone app project.
Windows Phone 8
Windows Phone 7.5
Overview
Visual Studio creates phone projects with a default namespace matching the app's name. While sometimes that is adequate, often you would prefer to add your company name into the namespace or group the namespace name with some other project.
The error message you'll get if the namespaces in .cs and .xaml don't match doesn't directly hint to the problem:
D:\Sources\...\PanoramaApp1\PanoramaApp1\App.xaml.cs(11,18,11,27): error CS0234: The type or namespace name 'Resources' does not exist in the namespace 'MyFirstApp' (are you missing an assembly reference?)
Even worse is the situation if the startup object is not correctly set: the app compiles and deploys correctly, but closes immediately after starting. There is no obvious cause when debugging!
Solution
If you're just changing the namespace "name" (without adding a parent namespace) then:
- Change the namespace in an arbitrary code-behind file, say MainPage.xaml.cs. The hot tip that appears after the change proposes to rename the namespace in the whole project. Accept!
- Verify the startup object. Normally you would have to select the newly named App class here, say My.FooBar.App. It should be selectable in the combo box.
Note that the above order is important! If you first change the default namespace, you have to change all .cs and all .xaml files manually:
namespace MyFirstApp
<phone:PhoneApplicationPage
x:Class="MyFirstApp.MainPage"
Change to:
namespace MyLastApp
<phone:PhoneApplicationPage
x:Class="MyLastApp.MainPage"
If you have a parent namespace then there are some additional "intermediate" steps:
- First you change the namespace in an arbitrary code-behind file, say MainPage.xaml.cs. The hot tip that appears after the change proposes to rename the namespace in the whole project. Accept!
- Beware! If you also added a parent namespace (like the "My" in My.FooBar), the change affects only the last part of the name! In this case you have to check all .cs and .xaml files for the correct naming.
- Open the project's property page and see the default namespace entry changed. This has to be changed here manually to include the parent.
- The last step is to verify the startup object. Normally you would have to select the newly named App class here, say My.FooBar.App. It should be selectable in the combo box.
Summary
Visual Studio should automate the rename/re-factoring job, but since it doesn't, the above steps provide a clear process for manually handling renaming the namespace.
Hamishwillee - Thanks
Hi Thomas
Even as a beginner programmer I've already run into this problem, so it is very useful indeed. I've given this a basic subedit and added some categories.
Thoughts
In terms of "room for improvement" - at your discretion, this is still very useful as is.
RegardsHamish
hamishwillee (talk) 09:42, 12 July 2013 (EEST)
Hamishwillee - PS Why "overview"Because this is very much a "How to" like How to detect if an app is running in Kid’s Corner
hamishwillee (talk) 09:45, 12 July 2013 (EEST)
Influencer - Willdo
Thanks Hamish,
I'll look into it tonite. 'MS Visual Studio' was the category I found in category chooser that I thought fitted most.Is there a table of categories and their usage?Thomas
influencer (talk) 10:23, 12 July 2013 (EEST)
Influencer - Didit
Especially describing the effect of a wrong/undefined startup object showed me again how cruel life can be. Small misconduct and you might search for a long time!T
influencer (talk) 23:44, 12 July 2013 (EEST)
Hamishwillee - Excellent - further breakdown
Hi Thomas
Thank you. I've split the solution into separate sections for with parent and without and added numbered bullets. I think this is helpful - not sure if should actually have separate section headings for this. If you disagree feel free to revert that bit, but propose that you should have numbered list still because that makes more readable.
I am in the process of moving from tags to category tree, so still refining those. You can see the structure in progress, with most of the interesting categories under "General Programming on Windows Phone". The names should generally be obvious.
Going forward I will be updating the instructions on adding categories etc. Hopefully in the next couple of weeks when I've finished moving all articles to the new structure.
RegardsH
hamishwillee (talk) 08:30, 15 July 2013 (EEST) | http://developer.nokia.com/community/wiki/How_to_rename_the_default_namespace_in_Windows_Phone_app_project | CC-MAIN-2015-14 | refinedweb | 765 | 65.22 |
A new Flutter plugin for getting unique id based on device
String uniqueID = await UniqueId.getID;
example/README.md
Demonstrates how to use the unique_id plugin.
import 'dart:async'; import 'package:flutter/material.dart'; import 'package:flutter/services.dart'; import 'package:unique_id/unique_id.dart'; void main() => runApp(new MyApp()); class MyApp extends StatefulWidget { @override _MyAppState createState() => new _MyAppState(); } class _MyAppState extends State<MyApp> { String _uniqueID = 'Unknown'; @override void initState() { super.initState(); initPlatformState(); } // Platform messages are asynchronous, so we initialize in an async method. Future<void> initPlatformState() async { String uniqueID; // Platform messages may fail, so we use a try/catch PlatformException. try { uniqueID = await UniqueId.getID; } on PlatformException { uniqueID = 'Failed to get platform version.'; } // If the widget was removed from the tree while the asynchronous platform // message was in flight, we want to discard the reply rather than calling // setState to update our non-existent appearance. if (!mounted) return; setState(() { _uniqueID = uniqueID; }); } @override Widget build(BuildContext context) { return new MaterialApp( home: new Scaffold( appBar: new AppBar( title: const Text('Plugin example app'), ), body: new Center( child: new Text('Unique ID for this device: $_uniqueID\n'), ), ), ); } }
Add this to your package's pubspec.yaml file:
dependencies: unique_id: ^0.0.2
You can install packages from the command line:
with Flutter:
$ flutter packages get
Alternatively, your editor might support
flutter packages get.
Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:unique_id/unique_id.dart';
We analyzed this package on May 17,.. | https://pub.dev/packages/unique_id/versions/0.0.2 | CC-MAIN-2019-22 | refinedweb | 248 | 50.02 |
#include<iostream> #include<boost/tokenizer.hpp> #include<string> int main(){ using namespace std; using namespace boost; string s = "Field 1,\"putting quotes around fields, allows commas\",Field 3, 123, 34.5"; tokenizer<escaped_list_separator<char> > tok(s); for(tokenizer<escaped_list_separator<char> >::iterator beg=tok.begin(); beg!=tok.end();++beg){ cout << *beg << "\n"; } }
Are you are experiencing a similar issue? Get a personalized answer when you ask a related question.
Have a better answer? Share it in a comment.
From novice to tech pro — start learning today.
As you can see iterator itself is a pointer. In your case it will be a pointer to string, meaning:
Open in new windowBy doing *beg we dereference the variable which effectively becomes:
Open in new windowTherefore when you are doing this type of statements:
Open in new windowYou are actually doing
Open in new windowwhich in turn calls the operator=() for a string. As you probably know this operator copies content of the string, not the pointer, not the reference. In this example:
Open in new windowt will have a string "hello" inside it, and it will be located at a different address than s.
I need to stress, that tran.AccountNumber is a string object. It is not a pointer. It exists within the scope of the struct Transaction, and does not represent a pointer.
Please read here about dereference operator, and here about the iterators
Open in new window
Open in new windowIf you want to use the vector, just iterate through it the same way you have iterated through the tokenizer:
Open in new windowHowever, your question makes me think that you have a comma-separated string with the elements each at its own position, like a transaction string from a bank where the elements are defined like this:
Open in new windowIn this case it will be wise to define a structure and move the tokens to each elements. I recommend you define this structure: are right. It is what I am looking for.
When I assign *beg to a string in the following in the following statement. Do we need to create a new string to hold the content of *beg. I am confused when it is a copy and when it is a reference.
case PosAccountNumber: tran.AccountNumber = *beg; break;
case PosAccountName: tran.AccountName = *beg; break;
tran.AccountNumber is a string. When you use this statement:
Open in new windowyou copy the content of the current token to the string inside the structure. If you are confused about the structures and enums then you can quite simply use normal string variables, like this:
Open in new window
See the code sample I posted above (which chaau was 'kind' enough to ignore), it does exactly that:
Open in new window
This is actually a problem of EE. It does not have any indication of what is going on while you type the answer. If you have an experience with SO, you would know that there the answers from other users "magically" appear while you type your answer.
Send a note to the EE administrators with your concerns. Maybe they will implement some sort of AJAX functionality to show answers dynamically
where 'line' will need to be tokenized. Below is not complete code, but did illustrate what I need to accomplish
Open in new window
Open in new window
tran.AccountNumber = *beg;
My questions are
1. When tokenize, is *beg has a copy of the actual string. Or just a reference to part of the 'line that has the token.
2. If *beg has its own copy, when I do the above assignment, does tran.AccountNumber get a new copy or a reference to *beg?
3. tran.AccountNumber get a new copy and *beg also has it copy, will this cause memory leak since I am reading in all the lines from file?
2. No, when you do the assignment, you get a new copy of the token. If you hawever use it with 'atoi()', this function will evaluate the copy '*beg' has
3. No, no menory leaks, all variables are 'auto' and will go out of scope.
Chaau, thank for your educating explanation and reference articles. They are very helpful
if tran.AccountNumber is wchar_t *, how should I assigned *beg to it?
Open in new window
Sara
Open in new window
the code for assigning and converting a token would turn to
Open in new window
Sara | https://www.experts-exchange.com/questions/28306215/how-to-extracted-result-into-string-or-numeric-from-boost-tokenizer.html | CC-MAIN-2018-17 | refinedweb | 738 | 72.36 |
Transform Transform child by index.
Returns a transform child by index.
If the transform has no child, or the index argument has a value greater than the number of children then an error will be generated. In this case "Transform child out of bounds" error will be given. The number of children can be provided by childCount.
using UnityEngine; using System.Collections;
public class ExampleClass : MonoBehaviour { public Transform meeple; public GameObject grandChild;
public void Example() { //Assigns the transform of the first child of the Game Object this script is attached to. meeple = this.gameObject.transform.GetChild(0);
//Assigns the first child of the first child of the Game Object this script is attached to. grandChild = this.gameObject.transform.GetChild(0).GetChild(0).gameObject; } }
Did you find this page useful? Please give it a rating: | https://docs.unity3d.com/ScriptReference/Transform.GetChild.html | CC-MAIN-2019-39 | refinedweb | 134 | 67.55 |
We are in the process of adding more flexibility to our data partitioning on a working synchronization scheme. Our first step is converting our normal SQL scripts to run under the DirectRow handling for each table and populating the download stream through the .NET API. It all seems to be working fine until we try reading from tables that contain a long varchar or xml type column. For these we have receive either Out of Memory exceptions or Protected Memory Access exceptions.
Versions: SA 12.0.1.3436 (& 3519, latest EBF), .NET 4, Consolidated db running under SA 12.0.1, OS Win 7 x64
SQL is running under the iAnywhere reader, and the exception happens on the first call to NextRow() before we have any chance to access the data in our code. Code snippet with the key elements of the data access is below.
Are there known limitations when dealing with long data types? We have several long binary types in other tables which do download just fine. As long as we skip tables with the long character types everything else succeeds (with a couple workarounds for other small problems).
Thanks
using System.Data;
using iAnywhere.MobiLink.Script;
...
// called from handle_download: loop through download tables
// connection is the return from DBConnectionContext.GetConnection()
// downloadCommand is the return from a call to DownloadTableData.GetUpsertCommand()
private void executeDownload(string sqlStatement, DBConnection connection, IDbCommand downloadCommand)
{
var selectCommand = connection.CreateCommand();
selectCommand.CommandText = sqlStatement;
selectCommand.Prepare();
var tableUpsertParameters = downloadCommand.Parameters;
var reader = selectCommand.ExecuteReader();
object[] row;
// memory access exception happens in call to reader.NextRow()
while ((row = reader.NextRow()) != null)
{
//add row to download stream here
}
reader.Close();
selectCommand.Close();
}
asked
04 Jan '12, 15:45
Ron Emmert
336●5●11●18
accept rate:
12%
edited
04 Jan '12, 17:20
How long are your long columns?
Data type of the column is "long varchar". Actual contents vary from less than 100 characters to around 100k depending on what the column contains.
It's hard to know what order the rows were actually retrieved in, but on the first failure, there would not have been more than 60K in that particular row.
Note that the same script runs just fine with the same contents using a normal download SQL statement.
I'm asking because we allocate a single contiguous chunk to return the value, so if the column was 1GB long, I would expect out of memory errors. 100K or so is not unreasonable, and I would expect it to succeed.
The whole test consolidated db, indices and all, is well under 500MB . It's possible that an occasional single row could exceed 1MB, but I would be surprised to find that condition in this test case.
Some more information when running against other consolidated db platforms.
We don't get the exceptions when running against SQL Server (2008 R2), but then all the returned values for long type (varbinary(max) or varchar(max)) columns are truncated to zero length.
When the consolidated db is Oracle (11g) we get the same behavior. Mostly protected memory access exceptions.
In addition, on some tables when we close the reader, we also get protected memory access exceptions. In our tests, the tables that do that are consistent and repeatable, but there is no particular pattern of column types that we can identify yet causing this one.
The patterns suggest a couple questions to me:
Using SQL scripts we've always just included the long types in the SELECT clause and ML handles it. Do we need to treat long values differently in DirectRow handling?
Is the DirectRow handling really designed for handling the bulk of data from the consolidated database that SQL Script handling is? Most of the documentation refers to using DirectRow handling for interfacing with web services or some other non-db or external db data source.
The DBConnection and other related interfaces make a rather simple .NET-ODBC bridge, so that NextRow call is doing a bunch of ODBC work under the hood. Some ODBC drivers have restrictions on when you can fetch long columns, which explains the behaviour you're seeing with Microsoft SQL Server. It's probable that the server itself does something smarter against MSS to hide this.
Typically customers use the direct row API for only a few tables, but it's not unheard of for it to be used for everything.
I'll look into your crashes when closing the reader.
I've reproduced the exceptions you reported in the question and have a fix. I'll update this answer tomorrow with a CR# you can use to request an EBF.
answered
05 Jan '12, 17:11
Bill Somers
1.1k●8●19
accept rate:
37%
While updating a test for this change, I ran into what I think is your DataReader crash and some other issues. I'll update this post when I've got everything fixed up.
My fixes are submitted. The CR# is 695370. You'll have to get in touch with support to kick off the EBF process.
Thanks, we'll pick up the EBF & test.
Got an early release and this fixes it.
Once you sign in you will be able to subscribe for any updates here
Answers
Answers and Comments
Markdown Basics
learn more about Markdown
Question tags:
mobilink ×324
.net ×61
server ×26
question asked: 04 Jan '12, 15:45
question was seen: 1,665 times
last updated: 30 Jan '12, 17:22
Is there a trick to .NET sync scripts to inject the DBConnectionContext into the constructor?
Error "Specified database not found"
Multiple Relay Servers on a machine / multiple MobiLink Servers on a machine
Could verbose reporting of row counts on ML Server show actual counts for Direct Row API tables?
mobilink relay server not connecting
android: how to configure mobilink server
What does [-10206] 'System.SystemException: Parameter 3 13 is out of range for conversion mean?
is there a way to get the mobilink error on the remote application via C# ?
MobiLink .NET API - Binary Data in Sql Server
Cannot load DLL or shared object: 'ml.net12.dll' for Script Language: '.net'
SQL Anywhere Community Network
Forum problems?
Maintenance log
and OSQA
Disclaimer: Opinions expressed here are those of the poster and do not
necessarily reflect the views of the company.
First time here? Check out the FAQ! | https://sqlanywhere-forum.sap.com/questions/9211/mobilink-net-api-memory-access-exceptions-on-readernextrow | CC-MAIN-2019-18 | refinedweb | 1,058 | 57.16 |
vfwscanf()
Scan input from a file (varargs)
Synopsis:
#include <wchar.h> #include <stdarg.h> int vfwscanf( FILE * fp, const wchar_t *format, va_list arg );
Since:
BlackBerry 10.0.0
Arguments:
- fp
- The streamfwscanf() function scans input from the file designated by fp, under control of the argument format.
The vfwscanf() function is the wide-character version of vfscanf(), and is a "varargs" version of fwscanf().
The vfwscanf() function is generally considered unsafe for string handling; it's safer to use fgetws() to get a line of input and then use vswscanf() to process the input.
Returns:
The number of input arguments for which values were successfully scanned and stored, or EOF if the scanning reached the end of the input stream before storing any values.
Classification:
Last modified: 2014-06-24
Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus | http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/v/vfwscanf.html | CC-MAIN-2016-40 | refinedweb | 146 | 66.03 |
23 May 2012 18:11 [Source: ICIS news]
HOUSTON (ICIS)--?xml:namespace>
The American Chemistry Council (ACC) said April’s decline in its monthly US Chemical Production Regional Index came after a downward-revised 0.4% decline in March from February.
April’s chemical output slipped in several key segments, including pharmaceuticals, organic chemicals, plastic resins and synthetic rubber, the ACC said.
Many segments, however, saw rising production, with the largest gains recorded in man-made fibres, adhesives, industrial gases, chlor-alkali and inorganic chemicals.
Compared with April 2011, total
For the US Gulf coast, which is home of much of the country’s petrochemicals production, the index was down 0.5% in April from March, and it was down 0.5% year on year from April 2011 | http://www.icis.com/Articles/2012/05/23/9563074/us-chemical-production-slips-0.3-in-april-from-march.html | CC-MAIN-2014-41 | refinedweb | 128 | 56.76 |
Kenneth Porter wrote: > >I made some tweaks and discovered it won't work with a list with a dash in >the name, due to the regex used to extract the action suffix. Can someone >more nimble with regex let me know how to adjust it? > ><> > >I also changed the "printf STDERR" to syslog to my maillog. > >Here's the before/after regex: > >< if ($addr =~ /(.*)-(.*)\+.*$/) { >--- >> if ($addr =~ /(.*)-([^-]+)\+.*$/) { Actually, all the regexps are 'wrong' in that they will parse a local_part such as my-list into a listname of 'my' and a suffix of 'list' ($list and $cmd in the terminology of the module). The original contrib/mm_handler recovered from that by saying if the suffix didn't match something in the @validfields array, then the listname was the whole thing and the command was 'post' The revised version appears to do a similar thing, so I'm not sure what the problem is. Can you be more specific about the list name and the failure? I.e., what is the local_part of the address that fails and what does split_addr(local_part) return (or what error is logged)? BTW, the old regexps from contrib/mm-handler /(.*)-(.*)\+.*$/ and /(.*)-(.*)$/ and the new regexps /(.*)-([^-]+)\+.*$/ and /(.*)-([^-]+)$/ are virtually equivalent. The only difference is they require at least one character in the second group. The fact that they don't allow a '-' in the second group is irrelevant since the first group is greedy and will eat up all but the last '-' anyway. -- Mark Sapiro <mark at msapiro.net> The highway is for gamblers, San Francisco Bay Area, California better use your sense - B. Dylan | https://mail.python.org/pipermail/mailman-developers/2008-March/020009.html | CC-MAIN-2018-26 | refinedweb | 268 | 71.14 |
Feedback
Getting Started
Discussions
Site operation discussions
Recent Posts
(new topic)
Departments
Courses
Research Papers
Design Docs
Quotations
Genealogical Diagrams
Archives
The Lisp family has macros that allow you to generate new code by manipulating programs; S-expressions are used as the representation. For many uses, this is what makes it hard to work with. Manipulating program language in such a low-level form is difficult. Looking at Stratego/XT, the authors clearly indicate that performing program manipulation with ATERMS (Stratego's underlying, low-level term representation, analogous to s expression data) is quite difficult and undesirable. Stratego has mechanisms to deal with program manipulation in a more natural way.
My read of Stratego shows that they authors allow the direct use of program fragments in the language being manipulated as a short form for underlying aterms.
What this boils down to is this: What is the best representation of a program to use, when the program must be manipulated/transformed? Are different representations appropriate for runtime manipulation (a program manipulating itself) vs. build-time manipulation?
Lisp chooses S-Expressions. Java programs can be transformed at compilation or runtime by bytecode manipulation -- aspect-oriented programming can be used to solve problems similar to those macros are generally applied to. C++ performs build-time transformation via its template mechanisms. Scala provides flexible, extensible syntax that relies on efficient compilation to deliver transformation-like behavior. Boo (a .NET language) provides, as a part of its standard library/compiler the ability to modify the AST of the program; there is no inconvenient "detour" through syntax.
This leads me to these thoughts:
1. To the extent is can, the type system of a language should provide as much transformational power (templates/type parameters) as it can. The transformational power of a type system, it seems to me, is just as important as type-safety/correctness.
2. Syntax extensions can result in highly readable code, and as such provide oft-needed flexibility. A good test here is building syntax for array-language operators; does the code "look" as natural as that for atomic values? Does it perform well?
3. Programs increasingly operate in an environment of fluid metadata, and must be able to adapt. Therefore, transformation at runtime is highly desirable.
4. Completeness dictates that transformations should be able to construct other transformations. This necessitates runtime transformations.
5. Losing the transformative and corrective powers of type systems seems undesirable when constructing transformations in general -- are type-safe transformations at runtime any different from those performed by compilers?
Modern VMs perform just-in-time compilation; selective optimized compilation of programs is performed. When new code is dynamically generated, that code is folded into the same JIT framework as "normal" code. When existing code is modified, a VM will "de-optimize", then allow further optimization to occur when needed. If a program's type system is modified, is it possible to perform similar work? In general, it seems that a comprehensive model for the evolution of a program's full AST+Data (types/logic/data) is needed. Lisp does this without checks and balances. Is a given transformation of a program acceptable?
Perhaps there is a kind of bitemporal typed lambda system out there somewhere; something that understands what the type system used to be, what we think it is now, and allows both to be combined together in computation. :)
For many uses, this is what makes it hard to work with. Manipulating program language in such a low-level form is difficult.
In Lisp, S-expressions is how you specify programs as a programmer, so it's only natural that macros work at this level as well. In other words, it's not "low-level" compared to the level at which Lisp programmers work. I'd also argue that "manipulating program language in such a low-level form is difficult" is wrong. The difficulties in writing sophisticated Lisp macros is generally not due to the S-expresssion representation but more related to issues of hygiene or the lack thereof. Hygiene is an issue when working in any system in which macros can be composed in fairly arbitrary ways.
I take issue with the exact same statement. S-Expressions are nothing more than syntax, and may have arbitrary (i.e. high-level or low-level) semantics.
I largely agree with Per. Lack of hygene is asking for trouble, and creates difficulty for the programmer. A purely hygenic system totally avoids variable capture, which almost always avoids problems... many macros don't need to capture variables, even if they bind new variables.
In my limited experience, the issue with Scheme's syntax-rules is that it has a normal-order evaluation semantics and no way of forcing the evaluation of a given sub-expression. This leads to the requirement that macros be CPS'ed in order to be composable.
At the time the Scheme report R5RS was written, the research on macros and hygiene hadn't come to a conclusion. Now severel Scheme implementation support the syntax-case system, which makes it easy to write hygienic macros, but at the same time also allow arbitrary manipulation of the syntax.
A complete list of papers on macros is available at at the impressive readscheme.org
You seem to be suggesting that syntax-case is "the end", and that research on macros and hygiene has now come to an end... Is that a generally accepted idea, that syntax-case is, in some sense, an optimal solution to the problem, and now it's solved? I'm not being facetious, I honestly don't know much about recent developments in macrology. I guess I haven't seen much "new" since syntax-case, although there still seems to be a lot of really interesting work on macro/module interactions.
Never say never. I like syntax-case and in particular the version made by Flatt for PLT Scheme. There are other solutions though such as syntactic closures or explicit renaming.
One thing I know is, that syntax-rules macros aren't expressive enough -- for one they loose too much source location information in the process of macro expansion, which makes it difficult to write precise, custom made error messages.
The interaction between modules and macros are indeed very tricky to get right, and I am looking forward to see what R6RS will suggest. Presumably they use ideas from (among others) Dybvig and Flatt's papers.
A particular interesting one is "Composable and Compilable Macros: You want it When" by Flatt.
You'll find no disagreement from me, and I don't consider myself an expert macrologist, but my comments about syntax-rules applies to syntax-case as well. :)
... my comments about syntax-rules applies to syntax-case as well.
which were: can't get this to make sense.
In the syntax-case macro system macro transformers are normal Scheme functions. That means that they use the normal Scheme semantic, so syntax-case macros do not use normal-order semantics.
Expansion of a subexpression can be done with local-expand (I can't remember whether it is called the same in psyntax - but the important point is that the underlying macro model supports it).
And finally syntax-case is not purely hygienic. will assume that if you want to force an expansion of a sub-expression, then you want to look inside the output of the expansion for something interesting. I will try to explain why that may not be a good idea:
Applicative order evaluation for macros yields macros that are non-portable across implementations. For example, suppose you use two or more implementations that support syntax-rules. The first system supports letrec natively and provides no macro for it. Thus, an expansion of (letrec ([x 5]) x) will be the same expression. The other system does not support letrec natively but expands it to another form: (letrec-values (((x) (#%datum . 5))) x) because letrec-values is its native form. A third system may expands it into (let ([x (void)]) (let ([t 5]) (set! x t) x)) (or maybe it will even expand the let forms into direct application of a lambda expression, a let-values form, or to a call to call-with-values). Writing a portable macro across these implementations is just not realistic.
Aziz,,,
The language machine takes rather an extreme position on this, but one which works pretty well. It operates by doing substitutions - it recognises patterns and substitutes replacements. To that extent operates in much the same way as a macro processor.
But the patterns it recognises and the patterns it substitutes can include nonterminal symbols and variable bindings, and rules can be constructed that operate as 'backend' rules analysing material produced by 'frontend' rules. There is no explicit parse tree, but the 'ghost of a parse tree' determines the scope rules of variable references from within fragments of transformed representations.
So you can construct and operate on whatever internal representation suits your purposes as a means of communicating between 'frontend' and 'backend' rules.
As for type information, you can use associative arrays to associate information with symbols. One way of using this is to make the type information enter into the analysis, and again, it's up to you to devise the representation that suits you.
Stratego is something I outght to look into. But I have tried very hard to keep the grammar of the lmn metalanguage as simple and un-rebarbative as possible.
In the Scheme world a lot of research have been put into the question of how macros and modules interact. One solution is has been to drop S-expressions favoring syntax-objects, which are "S-expressions with annotations". To make the manipulation of syntax-objects convenient, a custom pattern matching on syntax-objects is used, that enables the programmer to use normal S-exp syntax most of the time.
Look for papers on syntax-case such as Syntactic Abstraction in Scheme" (Dybvig et al.) which contains the low-level details, and
"Writing Hygienic Macros in Scheme With syntax-case" (Dybvig) which contains a high level explanation of how to use them.
Slate has Smalltalk syntax and is a prototype based multiple dispatch language.
If you mark a message send to be a macro call then the message will be sent to the parsed AST form of the input parameters at macro expand time. There are labelled quotation and etc...
slate.tunes.org
Macros in Slate
Wow! That's a lot of material you cover in your posting. Books have been, could have, should be and will be written on these topics for years to come.
Let me fence myself into one tiny corner. The topic, Macros/Syntax vs AST manipulation, can be seen as to pit two complimentary program representations against eachother: the syntactical representation of a program (either in text format, or as a concrete syntax tree [CST]), and the AST.
(I also see another line very interesting of debate: full program transformation vs "macro-like" meta-programming, but I will not go there now.)
The first we probably should acknowledge, is that both have their uses. The CST is a faithful representation of what the programmer actually wrote, usually down to the indentation (whitespace) and comments. This information is necessary for doing some forms of program manipulation, most notably refactoring, but sometimes also reverse engineering. This is not to say that refactoring and reverse engineering should/must be done (exclusively) on the CST, however.
The AST, harking back to McCarthy, was intended as the essential representation for a program. It glosses over lexical elements, comments and formatting, containing only the syntactical essence. In a "pure" AST, recreating the original program faithfully is impossible. It is guaranteed to be semantically equivalent, but not syntactically. ASTs are the representation of choice in compiler front-ends.
The upside of the AST approach is that the AST is precise (if it is not, you have have done something wrong in your design, or your language semantics is very convoluted). The immediate downsides are that the AST usually has no syntax (could be serialized to XML, or ATerms). It is also rather verbose, because of its preciseness:
Consider the Java statement int foo;
This does not say anything about modifiers to the variable foo, such as final, static, etc, because the language semantics says that the absence of these markers means "non-final" and "non-static". Both by convention and for practical purposes, such information is always embedded explicly into ASTs, making the representation uniform. I.e., an AST node for the variable declaration would say that it is "non-static" and "non-final". This is allows us to have exactly one node type variable declarations, which is extremely handy when writing the transformation code, and results in more readable and compact code.
The price to pay for this level of uniformity, is verboseness, and this is exactly what concrete syntax attacks. In Stratego, concrete syntax allows you to write |[ int foo; ]|, and have the Stratego compiler expand it inline to the full AST representation. Used in this way, concrete syntax is merely a convenience for the meta-programmer. It does not allow Stratego to do anything it could not already do on the AST.
From experience, we know that this is extremely useful when generating AST new nodes, but it is not always that useful when matching patterns against existing code. Think of patterns as ASTs with holes, where we
can line up the ASTs against a tree and see if the non-holey parts match. If they do, we can pick out to the sub-ASTs visible through the holes.
Writing such templates on the AST is perfectly precise, but sometimes it turns out that writing equivalent patterns on concrete syntax form is tricky, and in Stratego, we do a lot of pattern matching. Therefore, concrete syntax cannot be said to be a panacea, but it is an extremely handy tool in the kit.
(Disclaimer: I've spent a year working with the Stratego team in Utrecht, and also implemented an aspect-language extension to Stratego [AspectStratego].)
Thanks in particular for providing the CST/AST terminology -- hadn't yet picked up on those words yet.
"Patterns with holes" is what we mostly see out there in the Lisp world, and perhaps within Stratego as well. Given that state of the art, most people think that macros are quite arcane. Some of that is due to lack of familiarity, and some of it is due to inherent complexity.
We use macros/transformations when we want to inject functionality into code that has no idea it is being transformed. If the code is aware that it will be transformed, conventional mechanisms (function refs, type parameters, etc) can be used to accomplish the "customization" that is required.
Aspect systems for Java replace "patterns with holes" with a series of specific scenarios -- pointcut, joins, and so forth. All of these can be seen as "patterns with holes" but it's a significantly easier when we give them names, as done with aspects. Vocabulary is important; concepts given agreed names are the benefit of pattern languages. Stratego provides for the composition of transformations; a pattern language of transformations is equally important.
Practical code usually has to be pretty smart about dealing with other code. For example, Java's reflection API is very commonly used to allow one library of code to adapt itself to other code. There are many libraries that use reflection to figure out what classes "look like" do persistence on that class's behalf. These libraries take on many forms -- some provide libraries that can perform the persistence, others generate classes that can do the task, and still others generate descriptions (such as SQL or XML) that can accomplish the task with the aid of other tools.
Should a reflection-style API or usage be any different than performing transformation tasks at compile time? In the Lisp world this is unified, but perhaps this unification is easy because of the untyped nature of references.
One of the key things brought to the table by static typing is the "view", where the compiler can transform one kind of entity to another. Views are one of the most important parts of "do what I mean".
What we're all really looking for is the language that gives us the most "do what I mean" for the characters we type, and couples it with "I get what you mean".
Object-orientation is useful primarily for its ability to allow complex namespaces to be constructed via ordering and scoping, from which we invoke functionality and/or access data.
Type systems to date provide a measure of control over this localized namespace assembly, and give us a little "magic" assignment to help the process along (super, self, outer objects, etc). The type system and declarations act as a set of rules that interact to create (hopefully) desired behavior. This is precisely a code transformation process. Languages today mostly lock up this set of rules in a running program, and do not allow them to be modified. Part of this is rooted in the difficulties of dealing with change in the ruleset.
That problem has been studied in rule engines, most of which do permit the rules to be changed "on the fly", and perhaps there's something to be learned there. I've often wondered if compilers might be better implemented as rule-based systems instead of deterministic transformations, where the order of transformations is not known, but is "worked out" during compilation by various means.
p.s. What's up with the Gentoo Java support? ;)
Again, you touch on a lot of difficult and extremely juicy topics. I'll only pick up some, to keep my reply reasonably short.
One part here is static typing vs dynamic typing in both your subject language (the language you are transforming) and the transformation language. In Lisp, the subject and transformation language is the same, and it's dynamically typed. In, say MetaML and MetaOcaml, the subject and object languages are still the same, but they are statically typed.
Both of these approaches (static vs dynamic again) have their uses, and I don't want to devolve into this debate, save to say that doing program transformation on a statically typed languge allows you to do a lot more transformation at compile-time (offline), because of the information provided by the additional typing information.
Your question, should a reflection-style API or usage be any different than performing transformation tasks at compile time, is a very good one. What I have observed in my experience with reflection APIs, is that the information you get through these APIs is not always complete enough and often rather low-level. Take Java as an example. Using java.lang.reflect, we cannot get the AST for a method. We must reconstruct it from the bytecodes. What if the bytecodes were generated by Jython? What to we reconstruct then? Even for Java, how can we reconstruct a foreach loop reliably? We cannot.
(Adding the AST as an extra section in the .class file is certainly possible, in the same way as debugging info and annotations are added already, see JSR-175, for example).
The reflection API gives us access to our subject program, but not in its original subject language. The Java->bytecode transformation is not an isomorphism: we do not know exactly where we came from, and bytecodes are at a significantly lower abstraction level. These issues alone make it impossible to use the exact same approaches for transformation at runtime (online) as we do at compile-time (offline).
Arguably, we could pass our software around in .ast files instead of .class files. In a way, this is what Lisp does, and we see that this gives us significantly better expressiveness in our transformations.
(As for Java on Gentoo, I have unfortunately not had the time to work on it significantly since before the summer, but I can assure you that we'll finally unleash Java 1.5 and a lot of new J2EE and server-oriented Java packages in the coming weeks.)
Template Haskell allows compile-time transformations of the AST, with the caveat that any transform can fail a typecheck. I think the Template Haskell paper also describes MetaML as allowing compile-time transformation, but only if provably type safe beforehand.
So it seems that both typesafe ahead of time and checked type safety per transformation are valid models.
MetaML/MetaOCaml do not directly allow you to transform pieces of code. MetaML/MetaOCaml introduce several explicit annotations for multi-stage programming, similar to binding-time annotations in partial-evaluation.
These operators give you control over the time at which parts of the code are evaluated, but do not actually allow you to modify quoted code. In fact, similar to binding-time annotations in partial-evaluation, if you drop the staging annotations from MetaML/MetaOCaml program, then you get a working, but usually very slow, program.
Static type checking of a multi-staged programming language is therefore not extraordinary difficult: on top of the type system of OCaml they basically have to check if the staging annotations are used correctly.
Static type checking of code generated by a meta-program written in language that allows you to construct (and maybe even modify) arbitrary abstract syntax trees is very difficult. At GPCE'05 there were two papers on this subject: "Statically Safe Program Generation with SafeGen" by Shan Shan Huang, David Zook, and Yannis Smaragdakis and "A Type System for Reflective Program Generators" by Dirk Draheim, Christof Lutteroth, Gerald Weber. The first system uses allows you to implement a code generator that generates Java programs, taking as input other Java code. SafeGen can now prove that generator produces only type-correct Java programs. SafeGen uses a theorem prover and is sound. The second system uses a more conventional type checker, but is not sound.
What about composition of transformations? It seems to me that working on the AST loses the ability to compose macros, or at least requires a very different system to that present in Scheme.
Since Stratego was mentioned in the original post, I feel allowed to use it as an example here, but the idea of composable AST transformations is in principle language-independent.
One of the design goals for Stratego was indeed composable transformations, and in my opinion, it tackles this problem very well.
I have also programmed Scheme, and developed DSLs in it, so I see where you are coming from. Scheme is a lovely language. Your intuition that the systems are rather different is correct. In my opinion, the most apparent difference is that in Scheme, the macros (meta program) is written inside Scheme, and is part of the program (subject program).
A macro in Scheme cannot (at least not to my knowledge) 'reify' arbitrary functions, that is, it cannot get the CST for an arbitrary function or other macro. It gets s-exps as arguments, and works on these, treating these s-exps as code, data or both. As a side effect, the macro may change global variables, etc, but the primary effect of a macro is to expand its computation at the invocation site. That is, in-place macro expansion is the primary mode of operation.
With Stratego, the meta program is entirely outside the subject program. It has access to the entire AST (or CST; actually the Stratego language works equally well on CSTs, but this is a lot more cumbersome and therefore never done), and can modify any part of the program. A transformation is expressed with rewrite rules, controlled by strategies. Strategies are composable by using control flow combinators (left-choice operator, composition operator, etc), to obtain new strategies. These strategies are in general never invoked directly from the subject program: There is no canonical way to directly call a Stratego transformation in the subject program.
That being said, you can come up with whatever scheme you want to apply the transformations. The most common approach is to run a Stratego transformation over your program as part of the compilation process, before handing code off to the compiler. The transformation acts as a preprocessor, and may pick up on whatever hooks you have put into your subject program.
An example of this is found in CodeBoost, a high-level C++ optimizer written in Stratego. Here, we use Stratego as sort of a macro expander/function generator that generates specialized map functions.
The specializer strategy will walk through the C++ program, look for invocations of the function map. When it finds one, it will use other strategies to infer the types of the arguments to map. After that, it will use yet other strategies to generate a specialized map function, with a unique name. The invocation will then be rewritten to call the specialized function. (The generated map may later be inlined.)
This is an example of strategy composition: we have strategies for C++ type inference (semantic analysis), traversals on C++ ASTs, and generation of specialized new maps.
I don't know very much at all about term rewriting, so I'm curious what people do to typecheck rewriting programs, and what the safety properties typechecking checks are. Can you tell me anything about this?
I can tell you how we do it in Stratego, but this is certainly not the only term rewriting language.
Stratego works on terms, and terms are constructed from signatures. Think of signatures as data type declarations, or type declarations.
A signature consists of a set of constructors. A constructor is on the form C(t1, ..., tn). C is the name of the constructor, and t1, ..., tn are the types of subterms allowable at a given place.
If: Expr * Block * Block
If: Expr * Block * Block
In current versions of Stratego, only the arity is checked inside a transformation. Therefore, you can create the term If(Expr(...),Expr(...),Expr(...)) without being caught.
This is not as stupid as it looks, because a lot of transformations go from one language (say a high-level AST) to another (say a compiler IL), and this happens by rewriting parts of the AST in multiple rewrite steps. It is okay that intermediate terms are invalid, as long as the final result of the transformation is a valid term in the output language (say, the IL).
The way we handle this in Stratego is to have format checkers (these are automatically generated from the signatures) applied after transformations. A format checker will verify that the output term from a transformation is correct.
Typically, a transformation system, say a compiler, is composed from multiple individual transformations in a pipeline. Putting the format checkers between each stage in the pipeline (which are usually quite small), we can always ensure type correctness, and catch errors at a reasonable granularity (though other techniques, such as unit testing, are more useful for catching and avoiding errors).
3. Programs increasingly operate in an environment of fluid metadata, and must be able to adapt. Therefore, transformation at runtime is highly desirable.
Do you really think so ? I've always had the impression that runtime transformations are way more perilous than they are interesting. Or perhaps are you thinking of some kind of type-safe run-time transformations ?
I guess it all depends on how you define "perilous". If you define it as "can't guarantee type-correctness post-transformation", then some space-age thinking is needed to ensure transformations stay consistent with type.
If you define it as "customer kicks my ass because they don't want to have to bring the server down to make metamodel changes", then that's something else again. ;)
If you define it as "customer kicks my ass because they don't want to have to bring the server down to make metamodel changes", then that's something else again. ;)
Of course, in that case :)
Still, you might be interested by the Kell project (and it's pre-prototype implementation Chalk), a distributed language of reconfigurable components. In particular, the notion of replacing a bit of a program by another is a primitive notion of the language.
C++ performs build-time transformation via its template mechanisms.
The C++ template mechanism works by substitution, not by evaluating expressions. A C++ template class/procedure can not do any kind of transformation like LISP or meta-languages.
C++ most certainly does evaluate expressions at template evaluation time. However, you are correct that its ability to evaluate expressions is severely limited compared to languages which support metaprogramming more naturally. For instance, the infamous compile-time factorial metafunction would not be possible if the template engine performed no evaluations. Also, I believe that it has been proven that the template engine is Turing-complete. This may come as a bit of a surprise, but consider that branching can be achieved via template specialization, which is yet another form of evaluation.
Todd Veldhuizen has a sketch of a proof for the Turing-completeness of the C++ template engine.
(Also, it may be useful to point out that term rewriting is equational reasoning. That is, it is purely about substitution. Term rewriting has been proven equivalent to lambda calculus, in computational power, and is therefore also Turing complete;)
It's surprising something so trivial makes it into a paper. And he's obviously not a Lisp hacker, or he'd have written a lambda calculus interpreter. :)
Here's my normal-order lambda calculus evaluator that works by term rewriting. It didn't take any more "brain cycles" to write than the equivalent ML or Haskell version although you spend time battling with compiler error messages.
// Term definitions.
template<char Name>
struct Var { };
template<typename Param, typename Body>
struct Lambda { };
template<typename Left, typename Right>
struct Apply { };
// Structural equality.
template<typename First, typename Second>
struct Eq
{
enum { Res = false };
};
template<char Name>
struct Eq< Var<Name>, Var<Name> >
{
enum { Res = true };
};
template<typename FirstParam, typename FirstBody, typename SecondParam, typename SecondBody>
struct Eq< Lambda<FirstParam, FirstBody>, Lambda<SecondParam, SecondBody> >
{
enum { Res = Eq<FirstParam, SecondParam>::Res && Eq<FirstBody, SecondBody>::Res };
};
template<typename FirstLeft, typename FirstRight, typename SecondLeft, typename SecondRight>
struct Eq< Apply<FirstLeft, FirstRight>, Apply<SecondLeft, SecondRight> >
{
enum { Res = Eq<FirstLeft, SecondLeft>::Res && Eq<FirstRight, SecondRight>::Res };
};
// Substitution.
template<typename Term, typename ReplaceTerm, typename ReplaceVar>
struct Substitute { };
template<typename Var, typename ReplaceTerm>
struct Substitute< Var, ReplaceTerm, Var >
{
typedef ReplaceTerm Res;
};
template<typename Param, typename Body, typename ReplaceTerm>
struct Substitute< Lambda<Param, Body>, ReplaceTerm, Param >
{
typedef Lambda<Param, Body> Res;
};
template<typename Param, typename Body, typename ReplaceTerm, typename ReplaceVar>
struct Substitute< Lambda< Param, Body >, ReplaceTerm, ReplaceVar >
{
typedef typename Substitute<Body, ReplaceTerm, ReplaceVar>::Res SubBody;
typedef Lambda<Param, SubBody> Res;
};
template<typename Left, typename Right, typename ReplaceTerm, typename ReplaceVar>
struct Substitute< Apply<Left, Right>, ReplaceTerm, ReplaceVar >
{
typedef typename Substitute<Left, ReplaceTerm, ReplaceVar>::Res SubLeft;
typedef typename Substitute<Right, ReplaceTerm, ReplaceVar>::Res SubRight;
typedef Apply<SubLeft, SubRight> Res;
};
// Reduction.
template<typename Term>
struct Reduce
{
typedef Term Res;
};
template<typename Param, typename Body>
struct Reduce< Lambda<Param, Body> >
{
typedef typename Reduce<Body>::Res ReducedBody;
typedef Lambda<Param, ReducedBody> Res;
};
template<typename Param, typename Body, typename Right>
struct Reduce< Apply< Lambda<Param, Body>, Right > >
{
typedef typename Substitute<Body, Right, Param>::Res Res;
};
template<typename Left, typename Right>
struct Reduce< Apply<Left, Right> >
{
typedef typename Reduce<Left>::Res ReducedLeft;
typedef typename Reduce<Right>::Res ReducedRight;
typedef Apply<ReducedLeft, ReducedRight> Res;
};
// Evaluation.
template<typename Term, bool IsDone = false>
struct Eval { };
template<typename Term>
struct Eval<Term, true>
{
typedef Term Res;
};
template<typename Term>
struct Eval<Term, false>
{
typedef typename Reduce<Term>::Res Reduced;
typedef typename Eval< Reduced, Eq<Term, Reduced>::Res >::Res Res;
};
Try evaluating the following term:
Lambda<Var<'y'>,
Apply<Lambda<Var<'x'>,
Lambda<Var<'y'>, Var<'x'> > >,
Var<'y'> > >
You got me there, Vesa. Darn alpha conversion! That's what I get for not using de Bruijn indices. In any case hopefully I got my point across: lambda calculus interpreters are intrinsically cooler than Turing machines in demonstrating the computational completeness of template metaprogramming. :)
By the way, I pretty much learned everything non-trivial I know about template metaprogramming from those old COTDs and TOTDs of yours on flipCode. It's interesting to see that you've left games and went into PLT. I hope it's treating you well.
For example:
#include
using namespace std;
template < int N > struct factorial {
static int eval() {
return N * factorial< N - 1 >::eval();
}
};
template < > struct factorial < 0 > {
static int eval() {
return 1;
}
};
int main()
{
cout << factorial< 5 >::eval();
getchar();
return 0;
}
but static expression evaluation predates templates, so it is not part of the template engine. It just happened that the compiler needed to evaluate the static expressions passed to a template in order to keep the substitutions going.
In other words, the Turing-completeness of the C++ template engine happened accidentally, whereas LISP was designed around the principle of code processing itself. | http://lambda-the-ultimate.org/node/1040 | crawl-002 | refinedweb | 5,497 | 52.49 |
Introduction to Prime Number in C++
What is the prime number? Any number which is greater than 1 and it should either be divided by 1 or the number itself is called a prime number. As prime numbers cannot be divided by any other number it should only be the same number or 1. For example here is the list of Prime Number in C++ that are divisible by either 1 or number itself.
List of some Prime Numbers
2 3 5 7 11 13 17 19 23 29 31 37 41…
You might be thinking why 2 is considered as a prime number? Well, it’s an exception therefore 2 is the only prime number in the list which is also even. Only two numbers are consecutive natural numbers which are prime too! Also, 2 is the smallest prime number.
The logic behind the prime number is that if you want to find prime numbers from a list of numbers then you have to apply the mentioned below logics:
If the given number is divisible by itself or 1, 2 is the only even prime number which is an exception so always remember. Divide the given number by 2, if you get a whole number then number can’t be prime!
Except 2 and 3 all prime numbers can be expressed in 6n+1 or 6n-1 form, n is a natural number.
There is not a single prime number that ends with 5 which is greater than 5. Because logically any number which is greater than 5 can be easily divided by 5.
For a more clear explanation that supports all the above-given logic here is the table of all the prime numbers up to 401:
Prime Numbers Using Various Methods
Now let’ see how to find prime numbers using various methods such as for loop, while loop, do-while loop. The output will be the same in all three loop cases because logic is the same only implementing way is different.
We will see that through a C ++ code separately for every loop.
Example #1
Finding a prime number using for loop
Code:
#include <iostream>
#include <math.h>
using namespace std;
int main() {
int x; // Declaring a variable x
cout << "Please enter the number : "; // cout to get the input value from user
cin >> x;
cout << "Here is the list of all the prime numbers Below "<< x << endl;
for ( int m=2; m<x; m++) //implementing for loop to find out prime numbers
for ( int n=2; n*n<=m; n++)
{
if ( m % n == 0)
break;
else if ( n+1 > sqrt (m)) {
cout << m << endl;
}
}
return 0;
}
Output:
As you can see in the above code we have taken two for loops because we need a list of prime numbers that will be below the given number in our program. We have included for loop within another for loop to make our calculation easier. A condition is added through if statement to break the loop once we reach our given number in code.
Example #2
Finding a prime number using for loop with if-else
Code:
#include <iostream>
using namespace std;
int main ()
{
int number, x, count = 0;
cout << "Please enter the number to check if it's prime or not : " << endl;
cin >> number;
if ( number == 0)
{
cout << "\n" << number << " This number is not prime";
exit(1);
}
else {
for ( x=2; x < number; x++)
if ( number % x == 0)
count++;
}
if ( count > 1)
cout << "\n" << number << " This number is not prime.";
else
cout << "\n" << number << " This is prime number.";
return 0;
}
Output:
Example #3
Finding a prime number using WHILE loop with if-else
Code:
#include <iostream>
using namespace std;
int main()
{
int lower, higher, flag, temporary;
cout << "Please enter the two numbers for finding prime numbers between them: "<< endl;
cin >> lower >> higher;
if ( lower > higher) { //It will swap the numbers if lower number is greater than higher number.
temporary = lower;
lower = higher;
higher = temporary;
}
cout << "Hence the Prime numbers between the number " << lower << " and " << higher << " are: "<< endl;
while ( lower < higher)
{
flag = 0;
for ( int x = 2; x <= lower/2; ++x)
{
if ( lower % x == 0)
{
flag = 1;
break;
}
}
if ( flag == 0)
cout << lower << " ";
++lower;
}
return 0;
}
Output:
In the above code, we have taken integers as a lower number, higher number, temporary variable, and a flag. Initially, we take two numbers as an input one is lower while the other is higher. In case the lower number is bigger than the higher number then these numbers will be swapped through a temporary variable first to move further in code. Now while loop will follow up until lower is less than higher and for loop, the condition will keep calculating prime numbers between them.
Conclusion
In, prime number logic can be used not only in C++ but in any programming language. From a small set of numbers to a big amount of numbers this logic can be used to find a set of prime numbers according to requirements within seconds without wasting any time in computer programming.
Recommended Articles
This is a guide to Prime Number in C++. Here we discuss the list of some prime numbers, and Various methods used in for Prime Numbers. You can also go through our other suggested articles to learn more– | https://www.educba.com/prime-number-in-c-plus-plus/ | CC-MAIN-2020-29 | refinedweb | 878 | 61.19 |
From: Javier Estrada (ljestrada_at_[hidden])
Date: 2004-03-08 12:40:27
A couple of weeks ago I posted in c++ moderated:
It seems natural that if there is a <iosfwd> to forward declare the I/O
related classes, there should be a header for the classes contained in the
std namespace.
In the trenches, often times I come across classes that can be simply
defined in terms of references and pointers to other classes. An example is
defining interfaces or "protocol classes." Granted, with a small project,
using an <stdfwd> header does not provide a big advantage, but I deal with
500+ files in a framework...
I believe that the rationale is so simple that I even feel lazy about
explaining the benefits, but oh well:
// needlessly including the definition of string (or basic_string) and
vector
#include <string>
#include <vector>
class ISeek{
public:
virtual void seek_knowledge(const std::string& token) = 0;
virtual void seek_advice(const std::vector<std::string>& advisers) = 0;
};
// using the proposed header...
#include <stdfwd>
class ISeek{
public:
virtual void seek_knowledge(const std::string& token) = 0;
virtual void seek_advice(const std::vector<std::string>& advisers) = 0;
};
Please ready your darts and fire at will. I'd like to know if there is
enough support that it could fly in a proposal to the std committee.
Javier
jestrada at developeer dot com
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2004/03/62449.php | CC-MAIN-2021-31 | refinedweb | 246 | 51.07 |
#include <gromacs/utility/textstream.h>
Interface for writing text.
Concrete implementations can write the text to, e.g., a file or an in-memory string. The main use is to allow unit tests to inject in-memory buffers instead of reading in files produced by the code under test, but there are also use cases outside the tests where it is useful to abstract out whether the output is into a real file or something else.
To use more advanced formatting than writing plain strings, use TextWriter.
The current implementation assumes text-only output in several places, but this interface could possibly be generalized also for binary files. However, since all binary files currently written by GROMACS are either XDR- or TNG-based, they may require a different approach. Also, it is worth keeping the distinction between text and binary files clear, since Windows does transparent
LF-
CRLF newline translation for text files, so mixing modes when reading and/or writing the same file can cause subtle issues.
Both methods in the interface can throw std::bad_alloc or other exceptions that indicate failures to write to the stream..
Implemented in gmx::TextOutputFile, and gmx::StringOutputStream. | https://manual.gromacs.org/documentation/2019-beta2/doxygen/html-full/classgmx_1_1TextOutputStream.xhtml | CC-MAIN-2021-17 | refinedweb | 195 | 52.9 |
We recommend using Visual Studio 2017
This documentation is archived and is not being maintained.
Port Operations in the .NET Framework with Visual Basic
Visual Studio 2010
You can access your computer's serial ports through the .NET Framework classes in the System.IO.Ports namespace. The most important class, SerialPort, provides a framework for synchronous and event-driven I/O, access to pin and break states, and access to serial driver properties. It can be wrapped in a Stream object, accessible through the BaseStream property. Wrapping SerialPort in a Stream object allows the serial port to be accessed by classes that use streams. The namespace includes enumerations that simplify the control of serial ports.
The simplest way to create a SerialPort object is through the OpenSerialPort method.
Reference
Other Resources
Show: | https://msdn.microsoft.com/en-us/library/ms172760(v=vs.100).aspx | CC-MAIN-2018-13 | refinedweb | 132 | 50.12 |
Refactoring to Patterns: Simplification.
Algorithms often become complex as they begin to support many variations. Replace Conditional Logic with Strategy (129) shows how to simplify algorithms by breaking them up into separate classes. Of course, if your algorithm isn’t sufficiently complicated to justify a Strategy [DP], refactoring to one will only complicate your design.
You probably won’t refactor to a Decorator [DP] frequently. Yet it is a great simplification tool for a certain situation: when you have too much special-case or embellishment logic in a class. Move Embellishment to Decorator (144) describes how to identify when you really need a Decorator and then shows how to separate embellishments from the core responsibility of a class.
Logic for controlling state transitions is notorious for becoming complex. This is especially true as you add more and more state transitions to a class. The refactoring Replace State-Altering Conditionals with State (166) describes how to drastically simplify complex state transition logic and helps you determine whether your logic is complex enough to require a State [DP] implementation.
Replace Implicit Tree with Composite (178) is a refactoring that targets the complexity of building and working with tree structures. It shows how a Composite [DP] can simplify a client’s creation and interaction with a tree structure.
The Command pattern [DP] is useful for simplifying certain types of code. The refactoring Replace Conditional Dispatcher with Command (191) shows how this pattern can completely simplify a switch statement that controls which chunk of behavior to execute.
Compose Method
You can’t rapidly understand a method’s logic.
Transform the logic into a small number of intention-revealing steps at the same level of detail.
Motivation
Kent Beck once said that some of his best patterns are those that he thought someone would laugh at him for writing. Composed Method [Beck, SBPP] may be such a pattern. A Composed Method is a small, simple method that you can understand in seconds. Do you write a lot of Composed Methods? I like to think I do, but I often find that I don’t, at first. So I have to go back and refactor to this pattern. When my code has many Composed Methods, it tends to be easy to use, read, and extend.
A Composed Method is composed of calls to other methods. Good Composed Methods have code at the same level of detail. For example, the code set in bold in the following listing is not at the same level of detail as the nonbold code:
private void paintCard(Graphics g) { Image image = null; if (card.getType().equals("Problem")) { image = explanations.getGameUI().problem; } else if (card.getType().equals("Solution")) { image = explanations.getGameUI().solution; } else if (card.getType().equals("Value")) { image = explanations.getGameUI().value; } g.drawImage(image,0,0,explanations.getGameUI()); if (shouldHighlight()) paintCardHighlight(g); paintCardText(g); }
By refactoring to a Composed Method, all of the methods called within the paintCard() method are now at the same level of detail:
private void paintCard(Graphics g) { paintCardImage(g); if (shouldHighlight()) paintCardHighlight(g); paintCardText(g); }
Most refactorings to Composed Method involve applying Extract Method [F] several times until the Composed Method does most (if not all) of its work via calls to other methods. The most difficult part is deciding what code to include or not include in an extracted method. If you extract too much code into a method, you’ll have a hard time naming the method to adequately describe what it does. In that case, just apply Inline Method [F] to get the code back into the original method, and then explore other ways to break it up.
Once you finish this refactoring, you will likely have numerous small, private methods that are called by your Composed Method. Some may consider such small methods to be a performance problem. They are only a performance problem when a profiler says they are. I rarely find that my worst performance problems relate to Composed Methods; they almost always relate to other coding problems.
If you apply this refactoring on numerous methods within the same class, you may find that the class has an overabundance of small, private methods. In that case, you may see an opportunity to apply Extract Class [F].
Another possible downside of this refactoring involves debugging. If you debug a Composed Method, it can become difficult to find where the actual work gets done because the logic is spread out across many small methods.
A Composed Method’s name communicates what it does, while its body communicates how it does what it does. This allows you to rapidly comprehend the code in a Composed Method. When you add up all the time you and your team spend trying to understand a system’s code, you can just imagine how much more efficient and effective you’ll be if the system is composed of many Composed Methods.
Mechanics
This is one of the most important refactorings I know. Conceptually, it is also one of the simplest—so you’d think that this refactoring would lead to a simple set of mechanics. In fact, it’s just the opposite. While the steps themselves aren’t complex, there is no simple, repeatable set of steps. Instead, there are guidelines for refactoring to Composed Method, some of which include the following.
- Think small: Composed Methods are rarely more than ten lines of code and are usually about five lines.
- Remove duplication and dead code: Reduce the amount of code in the method by getting rid of blatant and/or subtle code duplication or code that isn’t being used.
- Communicate intention: Name your variables, methods, and parameters clearly so they communicate their purposes (e.g., public void addChildTo(Node parent)).
- Simplify: Transform your code so it’s as simple as possible. Do this by questioning how you’ve coded something and by experimenting with alternatives.
- Use the same level of detail: When you break up one method into chunks of behavior, make the chunks operate at similar levels of detail. For example, if you have a piece of detailed conditional logic mixed in with some high-level method calls, you have code at different levels of detail. Push the detail down into a well-named method, at the same level of detail as the other methods in the Composed Method.
Example
This example comes from a custom-written collections library. A List class contains an add(ノ) method by which a user can add an object to a List instance:
public class List... public void add(Object element) { if (!readOnly) { int newSize = size + 1; if (newSize > elements.length) { Object[] newElements = new Object[elements.length + 10]; for (int i = 0; i < size; i++) newElements[i] = elements[i]; elements = newElements; } elements[size++] = element; } }
The first thing I want to change about this 11-line method is its first conditional statement. Instead of using a conditional to wrap all of the method’s code, I’d rather see the condition used as a guard clause, by which we can make an early exit from the method:
public class List... public void add(Object element) { if (readOnly) return; int newSize = size + 1; if (newSize > elements.length) { Object[] newElements = new Object[elements.length + 10]; for (int i = 0; i < size; i++) newElements[i] = elements[i]; elements = newElements; } elements[size++] = element; }
Next, I study the code in the middle of the method. This code checks to see whether the size of the elements array will exceed capacity if a new object is added. If capacity will be exceeded, the elements array is expanded by a factor of 10. The magic number 10 doesn’t communicate very well at all. I change it to be a constant:
public class List... private final static int GROWTH_INCREMENT = 10; public void add(Object element)... ... Object[] newElements = new Object[elements.length + GROWTH_INCREMENT]; ...
Next, I apply Extract Method [F] on the code that checks whether the elements array is at capacity and needs to grow. This leads to the following code:
public class List... public void add(Object element) { if (readOnly) return; if (atCapacity()) { Object[] newElements = new Object[elements.length + GROWTH_INCREMENT]; for (int i = 0; i < size; i++) newElements[i] = elements[i]; elements = newElements; } elements[size++] = element; } private boolean atCapacity() { return (size + 1) > elements.length; }
Next, I apply Extract Method [F] on the code that grows the size of the elements array:
public class List... public void add(Object element) { if (readOnly) return; if (atCapacity()) grow(); elements[size++] = element; } private void grow() { Object[] newElements = new Object[elements.length + GROWTH_INCREMENT]; for (int i = 0; i < size; i++) newElements[i] = elements[i]; elements = newElements; }
Finally, I focus on the last line of the method:
elements[size++] = element;
Although this is one line of code, it is not at the same level of detail as the rest of the method. I fix this by extracting this code into its own method:
public class List... public void add(Object element) { if (readOnly) return; if (atCapacity()) grow(); addElement(element); } private void addElement(Object element) { elements[size++] = element; }
The add(ノ) method now contains only five lines of code. Before this refactoring, it would take a little time to understand what the method was doing. After this refactoring, I can rapidly understand what the method does in one second. This is a typical result of applying Compose Method. | http://www.informit.com/articles/article.aspx?p=1398607&seqNum=6 | CC-MAIN-2018-43 | refinedweb | 1,554 | 54.63 |
{-# LANGUAGE NoImplicitPrelude #-} {- | Copyright : (c) Mikael Johansson 2006 Maintainer : mik@math.uni-jena.de Stability : provisional Portability : requires multi-parameter type classes Permutation of Integers represented by cycles. -} module MathObj.Permutation.CycleList where import Data.Set(Set) import qualified Data.Set as Set import Data.List (unfoldr) import Data.Array(Ix) import qualified Data.Array as Array import qualified Data.List.Match as Match import Data.Maybe.HT (toMaybe) import NumericPrelude (fromInteger) import PreludeBase type Cycle i = [i] type T i = [Cycle i] fromFunction :: (Ix i) => (i, i) -> (i -> i) -> T i fromFunction rng f = let extractCycle available = do el <- choose available let orb = orbit f el return (orb, Set.difference available (Set.fromList orb)) cycles = unfoldr extractCycle (Set.fromList (Array.range rng)) in keepEssentials cycles -- right action of a cycle cycleRightAction :: (Eq i) => i -> Cycle i -> i x `cycleRightAction` c = cycleAction c x -- left action of a cycle cycleLeftAction :: (Eq i) => Cycle i -> i -> i c `cycleLeftAction` x = cycleAction (reverse c) x cycleAction :: (Eq i) => [i] -> i -> i cycleAction cyc x = case dropWhile (x/=) (cyc ++ [head cyc]) of _:y:_ -> y _ -> x cycleOrbit :: (Ord i) => Cycle i -> i -> [i] cycleOrbit cyc = orbit (flip cycleRightAction cyc) {- | Right (left?) group action on the Integers. Close to, but not the same as the module action in Algebra.Module. -} (*>) :: (Eq i) => T i -> i -> i p *> x = foldr (flip cycleRightAction) x p cyclesOrbit ::(Ord i) => T i -> i -> [i] cyclesOrbit p = orbit (p *>) orbit :: (Ord i) => (i -> i) -> i -> [i] orbit op x0 = takeUntilRepetition (iterate op x0) -- | candidates for Utility ? takeUntilRepetition :: Ord a => [a] -> [a] takeUntilRepetition xs = let accs = scanl (flip Set.insert) Set.empty xs lenlist = takeWhile not (zipWith Set.member xs accs) in Match.take lenlist xs takeUntilRepetitionSlow :: Eq a => [a] -> [a] takeUntilRepetitionSlow xs = let accs = scanl (flip (:)) [] xs lenlist = takeWhile not (zipWith elem xs accs) in Match.take lenlist xs {- Alternative to Data.Set.minView in GHC-6.6. -} choose :: Set a -> Maybe a choose set = toMaybe (not (Set.null set)) (Set.findMin set) keepEssentials :: T i -> T i keepEssentials = filter isEssential -- is more lazy than (length cyc > 1) isEssential :: Cycle i -> Bool isEssential = not . null . drop 1 inverse :: T i -> T i inverse = map reverse | http://hackage.haskell.org/package/numeric-prelude-0.1/docs/src/MathObj-Permutation-CycleList.html | CC-MAIN-2016-30 | refinedweb | 367 | 50.12 |
With the current version of Raspios, and at least on a Pi 3B, I reckon the micro-AP unofficial feature can be made to work reliably. Wpa-supplicant can drive the AP, no need to depend on hostapd. The AP can operate over zeroconf/link-local, no need for a DHCP server (there isn’t one in busybox IIRC, adding one could be possible). The stock dhcpcd config can be used.
Enabling/disabling the micro-AP can be done at any time, for what I see, as long as the change is done in a temporary network namespace (similar to this.) All in all that should be enough to provide a rescue network that allows users to log in, examine the situation, run raspi-config, fix their configuration and reboot.
The AP is literally a backdoor to the machine, so control is a concern. Automated AP start in case network configuration has failed isn't great for various reasons. Adding a user-defined wireless password to protect the AP creates a usability problem. The only sane way of providing the feature is, I think, through positive, simple and local user input. Give me a dedicated button, and spawning an open AP wouldn't bother me at all.
Which brings me to the point. I've looked at the gpio-shutdown overlay and to me it's just what the doctor ordered: on demand, could be activated from a keyboard key combination or with a GPIO button.
However I suspect the power button to be a specific case. I'd like to know:
- Which magic keyboard key to choose, and how to make the system(d) react to it? I'm looking for something that would work in both Desktop and Lite versions of Raspios.
- Which GPIO pin for the physical button in absence of keyboard?
Thanks in advance. | https://www.raspberrypi.org/forums/viewtopic.php?f=66&p=1895148&sid=e5c33828ec97c0afc3c8622b3b289f2a | CC-MAIN-2021-39 | refinedweb | 308 | 63.49 |
Why you need Single Node Swarm for Development
tomwerneruk
Originally published at
fluffycloudsandlines.blog
on
・4 min read
Docker Swarm has matured enough that it's adoption is starting to pickup. Docker Captains are working with real clients rolling it out everyday. That's fine for clustered workloads, but what about locally?
Should you be using Swarm Mode even for local development? In my opinion, TL;DR - Yes.
Swarm Mode has many benefits which only make sense in production multi Docker host scenarios, but used locally it can add value and reduce differences between Development and Production (#win). What does Swarm actually do?
Swarm elevates a random collection of Docker hosts (or just one) into a cluster, which orchestrates (fancy word for automatic management) starting and stopping containers, managing cluster-wide information and providing transparent connectivity between hosts.
With one host, transparent connectivity isn't a win, but the same interface for cluster information management (Secrets, Configs, Labels) and the advanced service management is. In the same way that Compose is an evolution over plain Docker, Swarm is a step forward over Compose. Here are my top 3 reasons why you should be thinking about using Swarm locally.
Reduce Surprises - Simplification and Consistency
Let's face _ one of _ the elephants in the room. Docker Swarm (as at 18.09) doesn't currently have feature parity with
docker run. Excluding concepts that just plain don't make sense for the service level of abstraction (i.e container names, restart policy), there are some of the more 'fringe' options that are not supported (sysctl Kernel tuning, host device mapping) - although some are currently work-in-progress. There is an excellent tracker of gaps (and progress) here.
This means that a compose file that works when targeting
docker-compose isn't guaranteed to work 100% with Docker Swarm without tweaks. Sorry, but nothing is perfect.
If Swarm is still for you in production, then it makes sense to use Swarm locally to avoid you having to ;
- effectively learn two product 'versions' at once,
- have to maintain two or more compose files in parallel causing potential for more mistakes.
Secrets
Docker Secrets allow the secure publishing of sensitive details for an application to consume, taking the recommendations of the 12 Factor App to a new level by providing a much more granular approach compared to environment variables. They are great , but they do have limitations; nothing is entirely secure (the old adage, if you have host access, nothing is secure) and it needs to be supported by the application (or an appropriate entrypoint script).
Accepting these limitations, they are only available in Docker Swarm, as their implementation is tied to Swarm's internal cluster Raft database. That means, if you're using plain docker or Compose you can't access them (see below for an exception). But as we've just agreed above, your application will likely need changes to take advantage of Secrets, so do you really want to start adding if statements to code to handle Dev vs. Prod? (answer is no).
Yes, you can use Docker Secrets with Compose, but they get namespaced by Compose (<stack>_ is prepended) which means that you need to maintain two different references to secrets, depending on environment (again not good, as it causes differences between Dev and Prod) - it is possible, but it is down to you, your development flow and how your code is organised.
This means, in reality, if you want Secrets in your application, you should use Swarm locally. There are ways round it (simulating, injecting environment variables, branching in code), but they all point to the fact you are having to do extra work to get you to the same point.
Next hurdle, Secrets are immutable (they can't be viewed or amended once created). This is not development friendly, these values are likely to change as you iterate. Therefore, there is an option to source secrets from a file. Keeping them in files means it is much quicker and easier to tear down a stack and it's secrets and recreate them if you rebuild your environment or need to change a secret.
For example, on one of my current projects (which has separate development and production compose files - multi-stage builds planned!), I can cleanly reference the same secret in my code, but have a simple workflow to change it in Development.
development.yml
version: '3.6' services: appserver: image: appserver:latest secrets: - application_secret secrets: application_secret: file: ./compose/local/secrets/application_secret
version: '3.6' services: appserver: image: appserver:latest secrets: - application_secret secrets: application_secret: external: true
The only difference here is the 'source' of the secret.
Sourcing from a file kind of defeats the purpose of a secret, but it preserves the interface.
Marking it as external in production requires manual intervention to create the secret, however, in larger organisations the person with the contents of the secret might be very different to the person deploying code. When it comes to deployment where security matters, separation of responsibility keeps the clipboards at bay.
Keeping as much consistency between environments reduces the complexity of the code that handles our secrets and keeps things clean.
Test Scaling
Horizontal scaling of an application requires thought. Just because Docker can scale a service multiple times, doesn't automatically mean your application will handle being run in this manner.
Run your code at scale locally (just because you have one host, doesn't mean you can't run multiple instances of a container) and rattle these issues out early. This will start to weed out issues with concurrent access to databases, message queues and files on shared storage (to name a few). Swarm will still load balance between multiple instances automatically, even on one host.
From the background I come from, predictability, simplicity and ease of handover between Development to Operations, trump novelty. In my eyes adopting Swarm locally helps reduce surprises and make life easier throughout the whole Develop and Deploy lifecycle. That's all for now!
Being a Female Programmer: How is it For You?
This probably depends on where we live and work, but personally, I have not experienced anything negative for being a female programmer in my few years of career.
Full-text search with Node.js and ElasticSearch on Docker
Brian Neville-O'Neill -
Running Gatling in Azure Container Instances
Daniel Edwards -
Planning the technology stack for an Event Sourcing project
Keith Mifsud -
| https://practicaldev-herokuapp-com.global.ssl.fastly.net/tomwerneruk/why-you-need-single-node-swarm-for-development-i7k | CC-MAIN-2019-39 | refinedweb | 1,076 | 52.7 |
NAME
vm_page_bits, vm_page_set_validclean, vm_page_clear_dirty, vm_page_set_invalid, vm_page_zero_invalid, vm_page_is_valid, vm_page_test_dirty, vm_page_dirty, vm_page_undirty - manage page clean and dirty bits
SYNOPSIS
#include <sys/param.h> #include <vm/vm.h> #include <vm/vm_page.h> int vm_page_bits(int base, int size); void vm_page_set_validclean(vm_page_t m, int base, int size); void vm_page_clear_dirty(vm_page_t m, int base, int size); void vm_page_set_invalid(vm_page_t m, int base, int size); void vm_page_zero_invalid(vm_page_t m, boolean_t setvalid); int vm_page_is_valid(vm_page_t m, int base, int size); void vm_page_test_dirty(vm_page_t m); void vm_page_dirty(vm_page_t m); void vm_page_undirty(vm_page_t m);
DESCRIPTION
vm_page_bits() calculates the bits representing the DEV_BSIZE range of bytes between base and size. The byte range is expected to be within a single page, and if size is zero, no bits will be set. vm_page_set_validclean() flags the byte range between base and size as valid and clean. The range is expected to be DEV_BSIZE aligned and no larger than PAGE_SIZE. If it is not properly aligned, any unaligned chucks of the DEV_BSIZE blocks at the beginning and end of the range will be zeroed. If base is zero and size is one page, the modified bit in the page map is cleared; as well, the PG_NOSYNC flag is cleared. vm_page_clear_dirty() clears the dirty bits within a page in the range between base and size. The bits representing the range are calculated by calling vm_page_bits(). vm_page_set_invalid() clears the bits in both the valid and dirty flags representing the DEV_BSIZE blocks between base and size in the page. The bits are calculated by calling vm_page_bits(). As well as clearing the bits within the page, the generation number within the object holding the page is incremented. vm_page_zero_invalid() zeroes all of the blocks within the page that are currently flagged as invalid. If setvalid is TRUE, all of the valid bits within the page are set. In some cases, such as NFS, the valid bits cannot be set in order to maintain cache consistency. vm_page_is_valid() checks to determine if the all of the DEV_BSIZE blocks between base and size of the page are valid. If size is zero and the page is entirely invalid vm_page_is_valid() will return TRUE, in all other cases a size of zero will return FALSE. vm_page_test_dirty() checks if a page has been modified via any of its physical maps, and if so, flags the entire page as dirty. vm_page_dirty() is called to modify the dirty bits. vm_page_dirty() flags the entire page as dirty. It is expected that the page is not currently on the cache queue. vm_page_undirty() clears all of the dirty bits in a page.
NOTES
None of these functions are allowed to block.
AUTHORS
This manual page was written by Chad David 〈davidc@acns.ab.ca〉. | http://manpages.ubuntu.com/manpages/intrepid/man9/vm_page_zero_invalid.9freebsd.html | CC-MAIN-2014-15 | refinedweb | 445 | 71.75 |
The!
That said, for the first couple minutes I didn’t even know HOW to start. I knew that I wanted to start with a core class in my domain (an Indicator). Indicators do all sorts of interesting stuff, but since I’m writing tests first, I don’t even have an Indicator class! To solve that little problem, I wrote the only test that made sense:
[TestFixture]
public class IndicatorTest
{
[Test]
public void CanCreateIndicatorInstance()
{
Indicator indicator = new Indicator();
Assert.IsNotNull(indicator);
}
}
Of course the code won’t build. So I create a new class “Indicator” and make sure the default constructor is there. I know that in theory you’re supposed to make the test fail – but making my constructor throw a fake exception for the same of making it fail doesn’t seem worthwhile. It’s entirely possible that I’ll opt to go with only factories for my Indicator class, but for now, this is the quickest way to get started.
Next, I know that every Indicator will have an Name, so I write my next test:
[Test]
public void CanSetIndicatorName()
{
Indicator indicator = new Indicator();
Assert.IsNull(indicator.Name);
indicator.Name = “Test Indicator“;
Assert.AreEqual(“Test Indicator“, indicator.Name);
}
So I go into my Indicator class and create a straightforward private _name field with a public Name property. Two points here. I could have easily made the test fail, but figured the broken build was good enough. Also, I agree that a public field would be the simplest solution here, but that violates our coding standards.
The Indicator class has a number of similar properties, so I write similar tests for all the ones I know about. Admittedly it isn’t particularly fun and it doesn’t feel too productive, but it’s only a couple minutes of my time. There’s an urge inside of me pushing me to take a bigger leap, but I keep it in check.
That’s it
I know it isn’t much, but I had such a hard time accepting this way of starting, I figured others might also. I promise things will quickly get more interesting (though I think next post I might take a little break and go over nANT and CruiseControl.NET).
On a side note, as much as people talk about the importance of unit tests when it comes to refactoring, I honestly see them as even more useful when working in a team. The CanSetIndicatorName test tells the other developers on the project that someone expects the Name property to be null and NOT string.Empty when a new instance is created. If anyone changes my field declaration to _name = string.Empty, the test will fail and hopefully they’ll think twice about their change, else risk breaking code somewhere they weren’t away of.
[tags: Agile, TDD, .NET]
About the first test.
When doing new development it might be a bit overdone. But I’ve just started adding unit tests to existing code, in that case creating the first test can already be quite tricky. And especially when starting with TDD I can see the point of making it (just to get the feeling of having somewhere to start). Working with a testing framework is something else then reading the tutorials.
Tim:
I certainly agree with you for complicated tests, but isn’t it just overkill for these simple examples? Isn’t Jeffrey’s “common sense” approach better here (I’m asking sincerely).
When you say: “I could have easily made the test fail, but figured the broken build was good enough.”, I think you’re missing the point. The initial failure verifies that when your test actually fails, your test harness will report that failure. The broken build failure doesn’t catch this. Without using the “Red” step of “Red-Green-Refactor”, you risk having tests that appear to run successfully, even when they have actually failed. It may seem silly and trivial, but it’s useful nonetheless, especially as your tests get more complicated.
I know there is a lot of dogmatism around TDD for some folks. The key to remember is that automated tests as a whole are assets that can be used continually to verifiy code. Unit tests help to localize a failure. The more I do TDD, the more pragmatic I become about it. I tend to fudge the lines a bit and don’t keep religiously to Kent Beck’s guidelines. If a test creates a safety net under a bit of code, then I consider it beneficial. I’m using common sense as my guide more and more.
I agree. You want to keep your fine-grained tests because when a more complicated test blows up, if the fine grained test still passes, you can exclude that from the cause. But if it fails, then it is the likely culprit.
Removing the assert is probably wise as it just takes up space.
However, if your test is to make sure that your class has a default public constructor, I would probably name the test accordingly or put a comment there.
[Test]
public void CanInvokeIndicatorDefaultConstructor()
{
new Indicator();
}
or something like that.
Nat & Haacked:
Thanks for the input, as this is a learning experience, I appreciate it!
I agree that it’s a little silly, but not completely useless. You’ll always have some unit tests that are going to be more complex than others and as such they’ll implictly test other functionality. Should the simpler unit tests be elimited because the code is covered elsewhere?
The first test also tests that the Indicator class has a public default constructor. That very well my be a required behaviour. Again, other tests are bound to cover that functionality, but I must say I enjoy having a test devoted to it. It sends a pretty clear message about the need to have it.
Nat, you are right though, I could (and maybe should?) remove the Assert. If the code (a) compiles and (b) doesn’t throw an exception I know I’m ok.
The first test isn’t completely useless as it does test that the constructor doesn’t throw an exception. If it did, the test would fail. And yes, the second test would also fail if the constructor threw an exception, but the *granularity* of that test is less than the first test.
However, I do tend to agree that writing a test before you have a class to test might be a bit overkill. I usually at least create the class and a stub method or property that throws a NotImplementedException().
You can see my approach in this post (the topic is on something else, but I happen to demonstrate my TDD approach as a side effect).
There is no point in the first test. Firstly, you’re not testing anything that isn’t tested in the second test. Second, it’s testing the behaviour of the new operator and nothing else: new is *guaranteed* never to return null, so the assertion is unnecessary noise.
In fact, the first test *decreases* the quality of the code. Unnecessary tests increase the maintenance burden because they make it harder to find the tests that actually are specifying required behaviour. | http://codebetter.com/karlseguin/2006/06/23/tdd-lesson-1-dont-worry-about-starting-small/ | CC-MAIN-2014-15 | refinedweb | 1,210 | 63.8 |
.
So, if you need to persist and fetch the users in your system to and from a database, then you're in the right place.
Prerequisites¶
This version of the bundle requires Symfony 2.1+. For Symfony 3.0+ please use upcoming 2.0 release. If you are using Symfony 2.0.x, please use the 1.2 (the class to use depends of your storage)
-:
c) CouchDB User class¶
If you're persisting your users via the Doctrine CouchDB ODM, then your
User
class should live in the
CouchDocument namespace of your bundle and look
like this to start:
d) Propel 1.x User class¶
If you don't want to add your own logic in your user class, you can simply use
FOS\UserBundle\Propel\User as user class and you don't have to create
another class.
If you want to add your own fields, you can extend the model class by overriding the database schema.
Just copy the
Resources/config/propel/schema.xml file to
app/Resources/FOSUserBundle/config/propel/schema.xml,
and customize it to fit your needs. three configuration values are required to use the bundle:
- The type of datastore you are using (
orm,
mongodb,
couchdbor
propel).
- The firewall name which you configured in Step 4.
- The fully qualified class name (FQCN) of the
Userclass which you created in Step 3.
Caution
When using one of the Doctrine implementation, you need either to use
the
auto_mapping option of the corresponding bundle (done by default
for DoctrineBundle in the standard distribution) or to activate the mapping
for FOSUserBundle otherwise the base mapping will be ignored..
For Propel 1 users you have to install the TypehintableBehavior before to build your model. First, install it:
You now can run the following command to create the model:
Note
To create SQL, run the command
propel:build --insert-sql or use migration
commands if you have an existing schema in your database.
You now can login at!
Next Steps¶
Now that you have completed the basic installation and configuration of the FOSUserBundle, you are ready to learn about more advanced features and usages of the bundle.
The following documents are available:
- Overriding Default FOSUserBundle Templates
- Overriding Default FOSUserBundle
- FOSUserBundle Configuration Reference
- FOSUserBundle Invitation
This work, including the code samples, is licensed under a Creative Commons BY-SA 3.0 license. | http://symfony.com/doc/current/bundles/FOSUserBundle/index.html | CC-MAIN-2017-39 | refinedweb | 391 | 53.92 |
Storage
Discover more resources: Compute Data Services App Services Reference Samples Scenario-based tutorials
This tutorial shows how to create a multi-tier ASP.NET MVC application that uses the WebJobs SDK to work with Azure queues and Azure blobs in an Azure Website. The application also uses Azure SQL Database.
The sample application is an advertising bulletin board. Users create an ad by entering text and uploading an image. They can see a list of ads with thumbnail images, and they can see the full size image when they select an ad to see the details. Here's a screenshot:
You can download the Visual Studio project from the MSDN Code Gallery.
The tutorial assumes that you know how to work with ASP.NET MVC or Web Forms projects in Visual Studio. The sample application uses MVC, but most of the tutorial also applies to Web Forms.
The tutorial instructions work with the following products:
If you don't have one of these, Visual Studio 2013 Express for Web will be installed automatically when you install the Azure SDK.
The tutorial shows how to do the following tasks:
The sample application uses the queue-centric work pattern to off-load the CPU-intensive work of creating thumbnails to a backend process.end website stores the image in an Azure blob, and it stores the ad information in the database with a URL that points to the blob. At the same time, it writes a message to an Azure queue. A backend process running as an Azure WebJob uses the WebJobs SDK to poll the queue for new messages. When a new message appears, the WebJob creates a thumbnail for that image and updates the thumbnail URL database field for that ad. Here's a diagram that shows how the parts of the application interact:
WebJobs run in the context of a website and are not scalable separately. For example, if you have one Standard website instance, you only have 1 instance of your background process running, and it is using some of the server resources (CPU, memory, etc.) that otherwise would be available to serve web content.
If traffic varies by time of day or day of week, and if the backend processing you need to do can wait, you could schedule your WebJobs to run at low-traffic times. If the load is still too high for that solution, you can consider alternative environments for your backend program, such as the following:
This tutorial shows how to run the frontend in a website and the backend as a WebJob in the same website. For information about how to choose the best environment for your scenario, see Azure Websites, Cloud Services, and Virtual Machines Comparison.
To start, set up your development environment by installing the Azure SDK for Visual Studio 2013.
If you don't have Visual Studio installed, Visual Studio Express for Web will be installed along with the SDK.
Depending on how many of the SDK dependencies you already have on your machine, installing the SDK could take a long time, from several minutes to a half hour or more.
The tutorial instructions have been written using the next preview release of Visual Studio 2013 Update 4. The only difference for Visual Studio 2013 Update 3 is in the create-from-scratch section where you create the WebJob project: with Update 4 the WebJobs SDK packages are automatically included in the project; without Update 4 you have to install the packages manually.
An Azure storage account provides resources for storing queue and blob data in the cloud. It is also used by the WebJobs SDK to store logging data for the dashboard.
In a real-world application, you typically create separate accounts for application data versus logging data, and separate accounts for test data versus production data. For this tutorial you'll use just one account.
Open the Server Explorer window in Visual Studio.
Right-click the Azure node, and then click Connect to Microsoft Azure.
Right-click Storage under the Azure node, and then click Create Storage Account.
In the Create Storage Account dialog, enter a name for the storage account.
The name must be must be unique (no other Azure storage account can have the same name). If the name you enter is already in use you'll get a chance to change it.
The URL to access your storage account will be {name}.core.windows.net.
Set the Region or Affinity Group drop-down list to the region closest to you.
This setting specifies which Azure datacenter will host your storage account. For this tutorial your choice won't make a noticeable difference, but for a production site you want your web server and your storage account to be in the same region to minimize latency and data egress charges. The website (which you'll create later) should be as close as possible to the browsers accessing your site in order to minimize latency.
Set the Replication drop-down list to Locally redundant.
When geo-replication is enabled for a storage account, the stored content is replicated to a secondary datacenter to enable failover to that location in case of a major disaster in the primary location. Geo-replication can incur additional costs. For test and development accounts, you generally don't want to pay for geo-replication. For more information, see How To Manage Storage Accounts.
Click Create.
Download and unzip the completed solution.
Start Visual Studio.
From the File menu choose Open > Project/Solution, and clicking the Restore button at the top right.
In Solution Explorer, make sure that ContosoAdsWeb is selected as the startup project.
Open the application Web.config file in the ContosoAdsWeb project.
The file contains a SQL connection string and an Azure storage connection string for working with blobs and queues.
The SQL connection string points to a SQL Server Express LocalDB database.
The storage connection string is an example that has placeholders for the storage account name and access key. You'll replace this with a connection string that has the name and key of your storage account.
<connectionStrings>
<add name="ContosoAdsContext" connectionString="Data Source=(localdb)\v11.0; Initial Catalog=ContosoAds; Integrated Security=True; MultipleActiveResultSets=True;" providerName="System.Data.SqlClient" />
<add name="AzureWebJobsStorage" connectionString="DefaultEndpointsProtocol=https;AccountName=[accountname];AccountKey=[accesskey]"/>
</connectionStrings>
The storage connection string is named AzureWebJobsStorage because that's the name the WebJobs SDK uses by default. The same name is used here so you only have to set one connection string value in the Azure environment.
In Server Explorer, right-click your storage account under the Storage node, and then click Properties.
In the Properties window, click Storage Account Keys, and then click the ellipsis.
Copy the Connection String.
Replace the storage connection string in the Web.config file with the connection string you just copied. Make sure you select everything inside the quotation marks but not including the quotation marks before pasting.
Open the App.config file in the ContosoAdsWebJob project.
This file has two storage connection strings, one for application data and one for logging. For this tutorial you'll use the same account for both. The connection strings have placeholders for the storage account keys.
<configuration>
<connectionStrings>
<add name="AzureWebJobsDashboard" connectionString="DefaultEndpointsProtocol=https;AccountName=[accountname];AccountKey=[accesskey]"/>
<add name="AzureWebJobsStorage" connectionString="DefaultEndpointsProtocol=https;AccountName=[accountname];AccountKey=[accesskey]"/>
<add name="ContosoAdsContext" connectionString="Data Source=(localdb)\v11.0; Initial Catalog=ContosoAds; Integrated Security=True; MultipleActiveResultSets=True;"/>
</connectionStrings>
<startup>
<supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
</startup>
</configuration>
By default, the WebJobs SDK looks for connection strings named AzureWebJobsStorage and AzureWebJobsDashboard. As an alternative, you can store the connection string however you want and pass it in explicitly to the JobHost object.
JobHost
Replace both storage connection strings with the connection string you copied earlier.
Save your changes.
To start the web frontend of the application, press CTRL+F5.
The default browser opens to the home page. (The web project runs because you've made it the startup project.)
To start the WebJob backend of the application, right-click the ContosoAdsWebJob project in Solution Explorer, and then click Debug > Start new instance.
A console application window opens and displays logging messages indicating the WebJobs SDK JobHost object has started to run.
In your browser, click Create an Ad.
Enter some test data and select an image to upload, and then click Create.
The app goes to the Index page, but it doesn't show a thumbnail for the new ad because that processing hasn't happened yet.
Meanwhile, after a short wait a logging message in the console application window shows that a queue message was received and has been processed.
After you see the logging messages in the console application window, refresh the Index page to see the thumbnail.
Click Details for your ad to see the full-size image.
You've been running the application on your local computer, and it's using a SQL Server database located on your computer, but it's working with queues and blobs in the cloud. In the following section you'll run the application in the cloud, using a cloud database as well as cloud blobs and queues.
You'll do the following steps to run the application in the cloud:
After you've created some ads while running in the cloud, you'll view the WebJobs SDK dashboard to see the rich monitoring features it has to offer.
Close the browser and the console application window.
In Solution Explorer, right-click the ContosoAdsWeb project, and then click Publish.
In the Profile step of the Publish Web wizard, click Microsoft Azure Websites.
In the Select Existing Website box, click Sign In.
After you're signed in, click New.
In the Create site on Microsoft Azure dialog box, enter a unique name in the Site name box.
The complete URL will consist of what you enter here plus .azurewebsites.net (as shown next to the Site name text box). For example, if the site name is ContosoAds, the URL will be ContosoAds.azurewebsites.net.
In the Region drop-down list choose the same region you chose for your storage account.
This setting specifies which Azure datacenter your website will run in. Keeping the website and storage account in the same datacenter minimizes latency and data egress charges.
In the Database server drop-down list choose Create new server.
Alternatively, if your subscription already has a server, you can select that server from the drop-down list.
Enter an administrator Database username and Database.
Click Create.
Visual Studio creates the solution, the web project, the Azure Website, and the Azure SQL Database instance.
In the Connection step of the Publish Web wizard, click Next.
In the Settings step, clear the Use this connection string at runtime check box, and then click Next.
You don't need to use the publish dialog to set the SQL connection string because you'll set that value in the Azure environment later.
You can ignore the warnings on this page.
For this tutorial, the default values of the options under File Publish Options are fine.
In the Preview step, click Start Preview.
You can ignore the warning about no databases being published. Entity Framework Code First will create the database; it doesn't need to be published.
the preview window shows that binaries and configuration files from the WebJob project will be copied to the app_data\jobs\continuous folder of the website.
Click Publish.
Visual Studio deploys the application and opens the home page URL in the browser.
You won't be able to use the site until you set connection strings in the Azure environment in the next section. You'll see either an error page or the home page depending on site and database creation options you chose earlier.
It's a security best practice to avoid putting sensitive information such as connection strings in files that are stored in source code repositories. Azure provides a way to do that: you can set connection string and other setting values in the Azure environment, and ASP.NET configuration APIs automatically pick up those values when the app runs in Azure. In this section you'll set connection string values in Azure.
In Server Explorer, right-click your website under the Websites node, and then click View Settings.
The Azure Website window opens on the Configuration tab.
Change the name of the DefaultConnection connection string to ContosoAdsContext.
Azure automatically created this connection string when you created the site with an associated database, so it already has the right connection string value. You're just changing the name to what your code is looking for.
Add two new connection strings, named AzureWebJobsStorage and AzureWebJobsDashboard. Set type to Custom, and set the connection string value to the same value that you used earlier for the Web.config and App.config files. (Make sure you include the entire connection string, not just the access key, and don't include the quotation marks.)
These connection strings are used by the WebJobs SDK, one for application data and one for logging. As you saw earlier, the one for application data is also used by the web frontend code.
Click Save.
In Server Explorer, right-click the website, and then click Stop Website.
After the website stops, right-click the website again, and then click Start Website.
The WebJob automatically starts when you publish, but it stops when you make a configuration change. To restart it you can either restart the site or restart the WebJob in the Azure management portal. It's generally recommended to restart the site after a configuration change.
Refresh the browser window that has the site URL in its address bar.
The home page appears.
Create an ad, as you did when you ran the application locally.
The Index page shows without a thumbnail at first.
Refresh the page after a few seconds, and the thumbnail appears.
If the thumbnail doesn't appear, the WebJob may not have started automatically. In that case, go to the WebJobs tab in the
In the Azure management portal, select your website.
Click the WebJobs tab.
click the URL in the Logs column for your WebJob.
A new browser tab opens to the WebJobs SDK dashboard. The dashboard shows that the WebJob is running and shows a list of functions in your code that the WebJobs SDK triggered.
Click one of the functions to see details about its execution
The Replay Function button on this page causes the WebJobs SDK framework to call the function again, and it gives you a chance to change the data passed to the function first.
When you're finished testing, delete the website and the SQL Database instance. The website is free, but the SQL Database instance and storage account accrue charges (minimal due to small size). Also, if you leave the site running, anyone who finds your URL can create and view ads. In the Azure management portal, go to the Dashboard tab for your website, and then click the Delete button at the bottom of the page. You can then select a check box to delete the SQL Database instance at the same time. If you just want to temporarily prevent others from accessing the site, click Stop instead. In that case, charges will continue to accrue for the SQL Database and Storage account. You can follow a similar procedure to delete the SQL database and storage account when you no longer need them.
For this sample application, web site activity always precedes the creation of a queue message, so there is no problem if the website goes to sleep and terminates the WebJob due to a long period of inactivity. When a request comes in, the site wakes up and the WebJob is restarted.
For WebJobs that you want to keep running even when the website itself is inactive for a long period of time, you can use the AlwaysOn feature of Azure Websites.
In this section you'll do the following tasks:
In Visual Studio, choose New > Project from the File menu.
In the New Project dialog, choose Visual C# > Web > ASP.NET Web Application.
Name the project ContosoAdsWeb, name the solution ContosoAdsWebJobsSDK (change the solution name if you're putting it in the same folder as the downloaded solution), and then click OK.
In the New ASP.NET Project dialog, choose the MVC template, and clear the Host in the cloud check box under Microsoft Azure.
Selecting Host in the cloud enables Visual Studio to automatically create a new Azure Website and SQL Database. Since you already created these earlier, you don't need to do so now while creating the project. If you want to create a new one, select the check box. You can then configure the new website and SQL database the same way you did earlier when you were deploying the application.
Click Change Authentication.
In the Change Authentication dialog, choose No Authentication, and then click OK.
In the New ASP.NET Project dialog, click OK.
Visual Studio creates the solution and the web project.
In Solution Explorer, right-click the solution (not the project), and choose Add > New Project.
In the Add New Project dialog, choose Visual C# > Windows Desktop > Class Library template.
Name the project ContosoAdsCommon, and then click OK.
This project will contain the Entity Framework context and the data model which both the frontend and backend will use. As an alternative you could define the EF-related classes in the web project and reference that project from the WebJob project. But then your WebJob project would have a reference to web assemblies which it doesn't need.
Right-click the web project (not the solution or the class library project), and then click Add > New Azure WebJob Project.
In the Add Azure WebJob dialog, enter ContosoAdsWebJob as both the Project name and the WebJob name. Leave WebJob run mode set to Run Continuously.
Click OK.
Visual Studio creates a Console application that is configured to deploy as a WebJob whenever you deploy the web project. To do that it performed the following tasks after creating the project:
For more information about these changes, see How to Deploy WebJobs by using Visual Studio.
The new-project template for a WebJob project automatically installs the WebJobs SDK NuGet package Microsoft.Azure.WebJobs and its dependencies.
One of the WebJobs SDK dependencies that is installed automatically in the WebJob project is the Azure Storage Client Library (SCL). However, you need to add it to the web project to work with blobs and queues.
Open the Manage NuGet Packages dialog for the solution.
In the left pane, select Installed packages.
Find the Azure Storage package, and then click Manage.
In the Select Projects box, select the ContosoAdsWeb check box, and then click OK.
All three projects use the Entity Framework to work with data in SQL Database.
In the left pane, select Online.
Find the EntityFramework NuGet package, and install it in all three projects.
Both web and WebJob projects will work with the SQL database, so both need a reference to the ContosoAdsCommon project.
In the ContosoAdsWeb project, set a reference to the ContosoAdsCommon project. (Right-click the ContosoAdsWeb project, and then click Add > Reference. In the Reference Manager dialog box, select Solution > Projects > ContosoAdsCommon, and then click OK.)
In the ContosoAdsWebJob project, set a reference to the ContosAdsCommon project.
The WebJob project needs references for working with images and for accessing connection strings.
System.Drawing
System.Configuration
This tutorial does not show how to create MVC controllers and views using scaffolding, how to write Entity Framework code that works with SQL Server databases, or the basics of asynchronous programming in ASP.NET 4.5. So all that remains to do is copy code and configuration files from the downloaded solution into the new solution. After you do that, the following sections will show and explain key parts of the code.
To add files to a project or a folder, right-click the project or folder and click Add > Existing Item. Select the files you want and click Add. If asked whether you want to replace existing files, click Yes.
In the ContosoAdsCommon project, delete the Class1.cs file and add in its place the following files from the downloaded project.
In the ContosoAdsWeb project, add the following files from the downloaded project.
In the ContosoAdsWebJob project, add the following files from the downloaded project.
You can now build, run, and deploy the application as instructed earlier in the tutorial. Before you do that, however, stop the WebJob that is still running in the first website you deployed to. Otherwise that WebJob will process queue messages created locally or by the app running in a new website, since all are using the same storage account.
The following sections explain the code related to working with the WebJobs SDK and Azure Storage blobs and queues. For the code specific to the WebJobs SDK, see the Program.cs section. or the Azure runtime environment. The second constructor enables you to pass in the actual connection string. That is needed by the WebJob project, since it doesn't have a Web.config file. You saw earlier where this connection string was stored, and you'll see later how the code retrieves the connection string when it instantiates the DbContext class.
The BlobInformation class is used to store information about an image blob in a queue message.
BlobInformation
public class BlobInformation
{
public Uri BlobUri { get; set; }
public string BlobName
{
get
{
return BlobUri.Segments[BlobUri.Segments.Length - 1];
}
}
public string BlobNameWithoutExtension
{
get
{
return Path.GetFileNameWithoutExtension(BlobName);
}
}
public int AdId { get; set; }
}
Code that is called from the Application_Start method creates an images blob container and an images queue if they don't already exist. This ensures that whenever you start using a new storage account, the required blob container and queue will be created automatically.
Application_Start
The code gets access to the storage account by using the storage connection string from the Web.config file or Azure runtime environment.
var storageAccount = CloudStorageAccount.Parse
(ConfigurationManager.ConnectionStrings["AzureWebJobsStorage"].To.
});
}
Similar code gets a reference to the blobnamerequest queue and creates a new queue. In this case no permissions change is needed. The ResolveBlobName section later in the tutorial explains why the queue that the web application writes to is used just for getting blob names and not for generating thumbnails.
CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();
var imagesQueue = queueClient.GetQueueReference("blobnamerequest");
imagesQueue.CreateIfNotExists();
The _Layout.cshtml file sets the app name in the header and footer, and creates an "Ads" menu entry.
The Views\Home\Index.cshtml file displays category links on the home page. The links pass the integer value of the Category enum in a querystring variable to the Ads Index page.
Category
>
In the AdController.cs file the constructor calls the InitializeStorage method to create Azure Storage Client Library objects that provide an API for working with blobs and queues.
InitializeStorage 3 seconds after each try for up to 3("blobnamerequest");.
Create
.
UploadAndSaveBlobAsync the back-end process that an image is ready for conversion to a thumbnail.
BlobInformation blobInfo = new BlobInformation() { AdId = ad.AdId, BlobUri = new Uri(ad.ImageURL) };
var queueMessage = new CloudQueueMessage(JsonConvert.SerializeObject(blobInfo));
await thumbnailRequestQueue.AddMessageAsync(queueMessage);
The code for the HttpPost Edit method is similar except that if the user selects a new image file any blobs that already exist for this ad must be deleted.
Edit
if (imageFile != null && imageFile.ContentLength != 0)
{
await DeleteAdBlobsAsync(ad);
imageBlob = await UploadAndSaveBlobAsync(imageFile);
ad.ImageURL = imageBlob.Uri.ToString();
}
Here is();
}
The Index.cshtml file displays thumbnails with the other ad data:
<img src="@Html.Raw(item.ThumbnailURL)" />
The Details.cshtml file displays the full-size image:
<img src="@Html.Raw(Model.ImageURL)" />
The Create.cshtml and Edit.cshtml files specify form encoding that enables the controller to get the HttpPostedFileBase object.
HttpPostedFileBase
@using (Html.BeginForm("Create", "Ad", FormMethod.Post, new { enctype = "multipart/form-data" }))
An <input> element tells the browser to provide a file selection dialog.
<input>
<input type="file" name="imageFile" accept="image/*" class="form-control fileupload" />
When the WebJob starts, the Main method calls Initialize to instantiate the Entity Framework database context. Then it calls the WebJobs SDK JobHost.RunAndBlock method to begin single-threaded execution of triggered functions on the current thread.
Main
Initialize
JobHost.RunAndBlock
static void Main(string[] args)
{
Initialize();
JobHost host = new JobHost();
host.RunAndBlock();
}
private static void Initialize()
{
db = new ContosoAdsContext();
}
The WebJobs SDK calls this method when a queue message is received. The method creates a thumbnail and puts the thumbnail URL in the database.
public static void GenerateThumbnail(
[QueueTrigger("thumbnailrequest")] BlobInformation blobInfo,
[Blob("images/{BlobName}", FileAccess.Read)] Stream input,
[Blob("images/{BlobNameWithoutExtension}_thumbnail.jpg"), FileAccess.Write] CloudBlockBlob outputBlob)
{
using (Stream output = outputBlob.OpenWrite())
{
ConvertImageToThumbnailJPG(input, output);
outputBlob.Properties.ContentType = "image/jpeg";
}
var id = blobInfo.AdId;
Ad ad = Program.db.Ads.Find(id);
if (ad == null)
{
throw new Exception(String.Format("AdId {0} not found, can't create thumbnail", id.ToString()));
}
ad.ThumbnailURL = outputBlob.Uri.ToString();
Program.db.SaveChanges();
}
The QueueTrigger attribute directs the WebJobs SDK to call this method when a new message is received on the thumbnailrequest queue.
QueueTrigger
[QueueTrigger("thumbnailrequest")] BlobInformation blobInfo,
The BlobInformation object in the queue message is automatically deserialized into the blobInfo parameter. When the method completes, the queue message is deleted. If the method fails before completing, the queue message is not deleted; after a 10-minute lease expires, the message is released to be picked up again and processed. This sequence won't be repeated indefinitely if a message always causes an exception. After 5 unsuccessful attempts to process a message, the message is moved to a queue named {queuename}-poison. The maximum number of attempts is configurable.
blobInfo
The two Blob attributes provide objects that are bound to blobs: one to the existing image blob and one to a new thumbnail blob that the method creates.
Blob
[Blob("images/{BlobName}", FileAccess.Read)] Stream input,
[Blob("images/{BlobNameWithoutExtension}_thumbnail.jpg")] CloudBlockBlob outputBlob)
Blob names come from properties of the BlobInformation object received in the queue message (BlobName and BlobNameWithoutExtension). To get the full functionality of the Storage Client Library you can use the CloudBlockBlob class to work with blobs. If you want to reuse code that was written to work with Stream objects, you can use the Stream class.
BlobName
BlobNameWithoutExtension
CloudBlockBlob
Stream
* If your website runs on multiple VMs, this program will run on each machine, and each machine will wait for triggers and attempt to run functions. In some scenarios this can lead to some functions processing the same data twice, so functions should be idempotent (written so that calling them repeatedly with the same input data doesn't produce duplicate results). * For information about how to implement graceful shutdown, see Graceful Shutdown. * The code in the ConvertImageToThumbnailJPG method (not shown) uses classes in the System.Drawing namespace for simplicity. However, the classes in this namespace were designed for use with Windows Forms. They are not supported for use in a Windows or ASP.NET service.
ConvertImageToThumbnailJPG
If you compare the amount of code in the GenerateThumbnails method in this sample application with the worker role code in the Cloud Service version of the application, you can see how much work the WebJobs SDK is doing for you. The advantage is greater than it appears, because the Cloud Service sample application code doesn't do all of the things (such as poison message handling) that you would do in a production application, and which the WebJobs SDK does for you.
GenerateThumbnails
In the Cloud Service version of the application, the record ID is the only information in the queue message, and the background process gets the image URL from the database. In the WebJobs SDK version of the application, the queue message includes the image URL so that it can be provided to the Blob attributes. If the queue message didn't have the blob URL, you could use the Blob attribute in the body of the method instead of in the method signature.
A program that uses the WebJobs SDK doesn't have to run in Azure in a WebJob. It can run locally, and it can also run in other environments such as a Cloud Service worker role or a Windows service. However, you can only access the WebJobs SDK dashboard through an Azure Website. To use the dashboard you have to connect the website to the storage account you're using by setting the AzureWebJobsDashboard connection string on the Configure tab of the management portal. Then you can get to the Dashboard by using the URL https://{websitename}.scm.azurewebsites.net/azurejobs/#/functions. For more information, see Getting a dashboard for local development with the WebJobs SDK, but note that it shows an old connection string name.
In this tutorial you've seen a simple multi-tier application that uses the WebJobs SDK for backend processing. The application has.
For more information, see Azure Web Jobs Recommended Resources.
Want to edit or suggest changes to this content? You can edit and submit changes to this article using GitHub. | http://azure.microsoft.com/en-us/documentation/articles/websites-dotnet-webjobs-sdk-get-started/ | CC-MAIN-2014-42 | refinedweb | 4,866 | 56.66 |
Discover faster, more efficient performance monitoring with an enterprise APM product learning from your apps. Take the AppDynamics APM Guided Tour! In DevOps, Continuous Integration (CI) is increasingly the integration method of choice, in large part because of the speed at which it enables the release of new features, bug fixes, and product update. In a digital world that moves ...Read More »
Giveaway Alert: The iPhone X
Want to Level Up Your Smartphone Experience? Well, you have come to the right place! We are giving away The New iPhone X Enter Now to Win a $999 Smartphone! It’s the phone that 7 ate 9 to get to. The excitement around iPhone X was so high they literally just jumped past 9 to get there, and now that it’s ...Read More »
New Webinar: How Intuit Automates Root Cause Analysis at Scale
Hear ...Read More »
Top 20 Online Programming Courses to Boost your Career
Do ...Read More »
Java Interface – Journey Over the Years to Java 9 – Default and Private Methods
Introduction ...Read More »
Creating opportunities to deliver value (as an Independent Scrum Caretaker)
I ...Read More »
Continuous Delivery of ADF applications with WebLogic Shared Libraries
Introduction There is a pretty popular architecture pattern when ADF applications are built on top of shared libraries. So the main application is being deployed as an EAR and all subsystems are implemented within shared libraries that can be independently built and deployed to WebLogic as JARs in “hot” mode without downtime. The advantages of this approach seem to be ...Read More »
Measure Your Cost per Feature
As Mark Kilby and I work on the geographically distributed teams book, I realized this morning that we need to define cost per feature. I already wrote Wage Cost and Project Labor Cost and the management myth that it’s cheaper to hire people where the wages are less expensive. (It might be, but it might not be.) That’s because of ...Read More »
Going Serverless? Compare Your FaaS Options, ...Read More »
Exclusive Book Offers (75% off): Mastering Android Application Development, Expert Android Programming, Android High Performance Programming
Hello fellow Geeks! Today, we have some exciting news for you! Java Code Geeks and Packt have teamed up to offer you weekly discounts on their extensive library of books. This week, we are offering discounts on three books to help you understand and master Android. Check them out! Mastering Android Application Development Antonio Pachón Ruiz There are millions of ...Read More »
Neo ...Read More »
Using JAX-RS exceptions for status codes
One way to send specific HTTP (error) status codes from a JAX-RS resource is to use the javax.ws.rs.core.Response class with its Builder Pattern-like API. If you want to specify the return type according to the response body, you can still do so and send a different status on errors by throwing a WebApplicationException. @Path("test") public class TestResource { @GET public ...Read More » | https://www.javacodegeeks.com/ | CC-MAIN-2017-51 | refinedweb | 490 | 65.22 |
You can subscribe to this list here.
Showing
1
results of 1
Hallo,
Hancock, David (DHANCOCK) hat gesagt: // Hancock, David (DHANCOCK) wrote:
> I won't profess to understanding exactly how this works, but one of our
> developers did exactly what you're trying to do. We wanted to save and email
> the full HTML "red-bar" traceback for oncall engineers, but we only wanted
> users to see a short message with an identifying number so we could
> correlate it on the system when they called in.
...
> The errorPageFilename uses a number returned by createRandomCode as part of
> the filename. The date from createTimeStamp is also used for the filename,
> so an error in servlet SomeFunc makes the file:
>
> Error-SomeFunc.py-01-Nov-03_15:09Z--11081.html
>
> It uses getPublicHTML to construct the contents of that file.
>
> The getPrivateHTML method is probably the one you're most interested in. It
> constructs a much smaller, kinder, gentler page for the user to see (with
> words like "We've encountered an error processing your request. If you need
> immediate assistance call 1-800-BITEME. Please reference code: 12345 when
> you call. We're sorry for the inconvenience."
Late answer here, but I finally got around to figure all this out. As
was recommended by you and others, I first looked into customizing
"privateErrorPage" but somehow it felt wrong, that I and Webware were
sending out something called "private" to the public. Then after a
bit more digging I found out, that this depends on the setting in
Application.Config:
If 'ShowDebugInfoOnErrors' is True, then the html generated by
privateErrorPage is sent to the user. If it is False, then the
"publicErrorPage" is sent. So I finally customized the public part and
kept my private parts for myself, so to say. Basically now my custom
exceptions look like this:
### __init__.py
from WebKit.ExceptionHandler import ExceptionHandler as _ExceptionHandler
class ExceptionHandler(_ExceptionHandler):
def publicErrorPage(self):
html = open("errorPage.html").read()
return html
def contextInitialize(app, ctxPath):
app._exceptionHandlerClass = ExceptionHandler
### end __init__
And this is in Application.Config:
...
'ShowDebugInfoOnErrors': 0,
...
'SaveErrorMessages': 1,
'EmailErrors': 1,
...
Thanks again for your pointers.
ciao
--
Frank Barknecht _ ______footils.org__ | http://sourceforge.net/p/webware/mailman/webware-discuss/?viewmonth=200311&viewday=19 | CC-MAIN-2015-27 | refinedweb | 360 | 57.98 |
Node JS API Authentication using JWT
Posted 1 year ago by Ryan Dhungel
Category: Authentication API Mongo DB Express JS Node
Introduction
This tutorial will teach you how to build Token based Authentication system with Node JS, Express JS and Mongo DB. There will be 3 routes for Signup, Signin and Secret page. Only a user who signup or signin to the application, will get access to secret page.
We will use postman to test our API. You can use your own frontend framework like React, Angular or Vue to build the frontend and use this API as backend.
Prerequisite
- Basic understanding of JavaScript
- Basic understanding of Node JS
- Basic understanding of HTTP and API
- Be able to use postman
Lessons 13
Lets begin by creating a fresh node js project.
// make a directory mkdir nodeapi // get inside the project cd nodeapi // initialize with npm npm init // hit enter multiple times to accept all the default options
In this section, you will learn about modules in node js.
- Modules are basically a block of code. It can be a function, variable, object etc.
- NPM packages like express, mongoose can be installed and use in your node project. They are third party modules.
- You can use
require()to use these modules
- You can create your own modules too and use them throught your application.
// create a new file called test.js inside your project folder touch test.js // in test.js write the following code console.log(module) // save the file then in the terminal run node test.js // in the terminal you see the following Module { id: '.', exports: {}, parent: null, filename: '/Users/kaloraat/node/nuxtapi/app.js', loaded: false, children: [], paths: [ '/Users/kaloraat/node/nuxtapi/node_modules', '/Users/kaloraat/node/node_modules', '/Users/kaloraat/node_modules', '/Users/node_modules', '/node_modules' ] }
As you can see, the module object has exports property which is an empty object. Which means you can add your own functions, varialbes to this empty
exports : {} object. Whatever you add to this exports object will be available throughout your node project. Because it is a part of the node
process.
It's like a global object where you have access to all the properties and methods of node js.
To better understand process. Try
console log(process) in test.js and see the output in the terminal. It contains huge number of properties along with module export which we just saw.
// test.js console.log(process) // then run in terminal node test.js
Lets move from console logs and create a function called
sum inside test.js and export it using
module.exports so that it is usable in other parts of our application.
module.exports.sum = function(a, b) { return a + b; };
Lets use this
sum() in another file called app.js
// terminal touch app.js
// app.js const test = require("./test.js"); console.log("This is sum function's output: ", test.sum(100, 20));
// terminal node app.js // output This is sum function's output: 120
This is how you can export a function from one file to be used in another file. This is the basics of modules in node js. Lets move on to actually build an app using node express and mongodb.
Lets start building API using node express and mongodb along with necessary packaged. We will be adding more packages as we progress.
// terminal - inside nodeapi project npm install express morgan body-parser mongodb mongoose
// app.js // require express module to build express app const express = require("express"); // to see friendly notifications in the terminal such as GET '/signup' const morgan = require("morgan"); // to extract body info form HTTP headers const bodyParser = require("body-parser"); // execute express as a function const app = express(); // middlewares app.use(morgan(`dev`)); // use 'dev' format in console logs in terminal app.use(bodyParser.json()); // parse json // start the server on the given port const port = process.env.PORT || 3000; app.listen(port); // use backtics `` not comma '' console.log(`Server is listening on ${port}`);
Now try running this app in the terminal.
// terminal node app.js // output Server is listening on 3000
To stop the server press
control+C or
command+C
You might not like starting node server manually each time using
node app.js
The solution to this is using a npm package called nodemon. Lets install it globally.
// terminal npm install -g nodemon
Then in your node app, you must have noticed
package.json file. This file keeps track of all the packages you have installed. You can also write a script to run the node server using nodemon.
Modify the existing script to the following.
// package.json "scripts": { "start": "nodemon app.js" },
now in the terminal you can run this command
npm start
This will run your node app and keep track of the changes automatically. To stop you can always press
control+C or
command+C
Instead of writing all the code inside
app.js, we will be adopting MVC (model, view, controller) pattern. But obviously there will be no views folder because we are building API. Lets begin by creating two folders inside the nodeapi project folder.
mkdir routes mkdir controllers
Create
users.js file inside controllers folder
// controllers/users.js module.exports = { // req - contains incoming http request information // res - has methods available to respond to the incoming requests // next - proceed to the next stage signup: async (req, res, next) => { res.json("signup called"); }, signin: async (req, res, next) => { res.json("signin called"); }, secret: async (req, res, next) => { res.json("secret called"); } };
Create
users.js file inside routes folder
const express = require("express"); // user express router const router = express.Router(); const UsersController = require("../controllers/users"); router.route("/signup").post(UsersController.signup); router.route("/signup").post(UsersController.signin); router.route("/secret").get(UsersController.secret); module.exports = router;
Now lets use the routes in app.js
app.use(bodyParser.json()); // parse json // use routes app.use("/", require("./routes/users"));
Now we can test these routes we have created. for this lets use postman. If you dont have it yet, please install first.
First make sure you have node server running.
// terminal npm start
Then open postman and make post request to
The fields with green circle are required. Use dropdown and buttons to adjust the settings in postman. Here is the screenshot.
you get the following response.
It means we are doing good. Lets move on to validation. After validation we will be able to send user data like email and password to create a new user and save in the dabase.
Before saving user data(on user signup) to database, we obviously need to validate the data. We can use a package called Joi for validation which a part of a Hapi JS, a node js framework.
First lets install
joi from npm.
// terminal npm install joi
Then create a folder called helpers and inside, create a file called
routeHelpers.js
// helpers/routeHelpers.js const Joi = require("joi"); module.exports = { // using arrow function validateBody: schema => { return (req, res, next) => { // validate the incoming req.body using Joi.Validate() // passing arguments - req.body and schema(see below) const result = Joi.validate(req.body, schema); // on error if (result.error) { // respon with 400 status code and error in json format return res.status(400).json(result.error); } // attach value property to res object // our goal is to use validated data(res.value.body) instead of direct (res.body) if (!req.value) { req.value = {}; } req.value["body"] = result.value; next(); }; }, // define schemas object schemas: { authSchema: Joi.object().keys({ email: Joi.string() .email() .required(), password: Joi.string().required() }) } };
Now we can use this validation inside
routes/users.js
// routes/users.js // full code const express = require("express"); // user express router const router = express.Router(); // for validation - use object destructuring to bring in only the needed properties const { validateBody, schemas } = require("../helpers/routeHelpers"); const UsersController = require("../controllers/users"); router .route("/signup") .post(validateBody(schemas.authSchema), UsersController.signup); router.route("/signup").post(UsersController.signin); router.route("/secret").get(UsersController.secret); module.exports = router;
Is your server still running? If not first start the server using
npm start
Now try using postman to make
post request to with following data as body. Make sure the Headers > Content-Type is selected as
application/json
Then write the following data in body section in json format and send with the request.
You will see the json response of
The next step is to use mongo database so that we can save the user info and ulitmately create or signup a new user.
We can use mongo db to save data. We can also use a package called mongoose to easily query the mongo db. There is also a software available called Robo mongo or Robo 3T which is a GUI for mongo db. Using robo mongo you can visually see the mongo database.
If all this is new to you then you can also use online mongodb service called mlab.
We have already installed mongoose, if you haven't installed it yet. Please install.
npm install mongoose
Require mongoose in app.js
const express = require("express"); // mongoose const mongoose = require("mongoose"); // nodeapi will be the name of the database // if it doesnt exist, will be created automatically // if you are using mlab, pass the url mongoose.connect("mongodb://localhost/nodeapi");
Then create a models folder in the root of your node project. Then inside
models folder, create
user.js file. Models will act like a middleman for communicating with database.
mkdir models
// models/user.js const mongoose = require("mongoose"); const Schema = mongoose.Schema; // create a schema // define the type const userSchema = new Schema({ email: { type: String, required: true, unique: true, lowercase: true }, password: { type: String, required: true } }); // create a model - make the model name singular / mongoose will make it plural const User = mongoose.model("user", userSchema); // export the model module.exports = User;
In
controllers/user.js first require the user from
models/user so that we can create a new user. Mongoose will be in instant communication with given database. In our case we have
mongoose.connect("mongodb://localhost/nodeapi"); in the
app.js
// controllers/user.js // full code // require user model const User = require("../models/user"); module.exports = { signup: async (req, res, next) => { // use req.value.body not req.body // this will give use validated body of the request const { email, password } = req.value.body; // check if user exists // use await keyword because it takes time solve this query from database const foundUser = await User.findOne({ email }); if (foundUser) { // respond with 403 forbidden status code return res.status(403).json({ error: "Email already in use" }); } const newUser = new User({ email, password }); // await for a new user to be saved because it takes some time await newUser.save(); // once user is saved, respond with json res.json({ user: "created" }); }, signin: async (req, res, next) => { res.json("signin called"); }, secret: async (req, res, next) => { res.json("secret called"); } };
Now you can try using postman. You can make
post request to just like you did earlier with
Headers > Content-Type > application/json
And email and password as body.
It should successfully save the new user to database. This time instead of getting the response of "signup called" like earlier, you should see the following:
{ "user": "created" }
That's because we responded with user created json response in
controllers/user.js on signup. If you send the same email and password info twice, you will see the following response:
{ "error": "Email already in use" }
That means, our validation is working.
Since we are building an API, we need a way to respond to the frontend with a secure token which can be used to identify the user and allow access. For this we will be using json web token.
With our existing code, you will see the following deprecation warning in the console:
Server is listening on 3000 (node:6394) DeprecationWarning: current URL string parser is deprecated, and will be removed in a future version. To use the new parser, pass option { useNewUrlParser: true } to MongoClient.connect. (node:6394) DeprecationWarning: collection.ensureIndex is deprecated. Use createIndexes instead. POST /signup 403 29.781 ms - 32
To get rid of this error, you can make the following adjustments in your
app.js
const mongoose = require("mongoose"); // for console error about deprected issues mongoose.set("useCreateIndex", true); mongoose.connect( "mongodb://localhost/nuxtapi", { useNewUrlParser: true } );
Lets move on to responding with json web token on user signup.
Lets generate json web token and respond with token on user signup. First we need to install npm package jsonwebtoken. To learn more about jsonwebtoken visit their
github page.
In
controllers/user.js, we want to send token as a json response, once the user is saved.
Steps:
- require
jsonwebtoken
- create a
signTokenmethod to genereate a token, passing user as argument.
- use this
signTokenright after saving the user in signup method
- send token as json response
npm install jsonwebtoken
// controllers/user.js // full code // require user model const User = require("../models/user"); // require json web token const JWT = require("jsonwebtoken"); // create a signToken method to genereate a token, passing user as argument. signToken = user => { return JWT.sign( { iss: "NodeAPI", // issuer sub: user.id, // sub means subject which is mandatory iat: new Date().getTime() // issued at date }, "jkahfdskjhfalkdslads" // random secret ); }; module.exports = { signup: async (req, res, next) => { const { email, password } = req.value.body; const foundUser = await User.findOne({ email }); if (foundUser) { return res.status(403).json({ error: "Email already in use" }); } const newUser = new User({ email, password }); await newUser.save(); // use signToken method to generate token and respond on signup const token = signToken(newUser); res.status(200).json({ token }); }, signin: async (req, res, next) => { res.json("signin called"); }, secret: async (req, res, next) => { res.json("secret called"); } };
Now try making the same post request as earlier, this time instead of previously hard coded "created" text you get the following response. A token.
// make post request to // json response in postman { "token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJOb2RlQVBJIiwic3ViIjoiNWJhZjZhN2FhOTBjM2MxYzdlMWFhNmQzIiwiaWF0IjoxNTM4MjIyNzE0NTQ1fQ.AnYxeNimVOKW3TKmprNMcfi0YAout__8RCv2Q_kEOBo" }
Here is the screenshot:
Since frontend frameworks dont use sessions, this token will be used to authenticate the users and stored as cookies in localstorage to keep track of authenticated user. The next step is to signin user. We will be using passport.js for that.
But before we move to using passport, lets do a bit of refactor here. Lets extract the random string "jkahfdskjhfalkdslads" to a new location.
Create a folder called config and create
index.js inside that folder.
// terminal mkdir config
// config/index.js module.exports = { JWT_SECRET: "jkahfdskjhfalkdslads" };
Then we can use this secret key in
controllers/user.js
// require secret const { JWT_SECRET } = require("../config"); // create a signToken method to genereate a token, passing user as argument. signToken = user => { return JWT.sign( { iss: "NodeAPI", // issuer sub: user.id, // sub means subject which is mandatory iat: new Date().getTime() // issued at date }, JWT_SECRET ); };
Now lets move on to use passport.js.
So far we have been able to create a new user and respond with token. Now based on this token a user must be authenticated. A user with token should have access to
"/secret" route. Any other user who has not signup or signin into our application should not get access to
"/secret" route.
Lets begin by installing passport and passport-jwt (passport autnentication stragegy based on json web token) using npm.
// terminal npm install passport passport-jwt
Then create a file called
passport.js in the root of your project.
// terminal touch passport.js
// passport.js // full code const passport = require("passport"); const JwtStrategy = require("passport-jwt").Strategy; const { ExtractJwt } = require("passport-jwt"); const { JWT_SECRET } = require("./config"); const User = require("./models/user");); } } ) );
Then use passport in
routes/users.js
// routes/user.js const passport = require("passport"); const passportConf = require("../passport");
Then apply passport.authenticate() method to the route
'./secret'
router .route("/secret") .get( passport.authenticate("jwt", { session: false }), UsersController.secret );
Now in controller/user.js > secret method, make the following change so that we can see the response when we try accessing secret page.
// controllers/user.js secret: async (req, res, next) => { console.log("I managed to get here!"); res.json({ secret: "resource" }); }
Now its time to test if we get access to secret page. Go to postman and try making
get request to page.
You get the response of
Unauthorized
To get access, first create a new user like you did earlier. A
post request with Headers
Content-Type of
application/json and
body with email and password. For testing, you can either delete the old user that is already saved in the database or change the email to create a new user.
Then you should get the token as a response, just like you did previously.
Now copy that token and add it to the Headers in postman. This time along with Content-Type, we will also need to send Authorization key with token as value. Here is the screenshot.
Watch this screenshot closly. The method has been changed from
post to
get and the url is pointing to secret page. with this if you hit send button, you get access to the secret page.
But if you remove the token and try accessing the secret page, you get
unauthorized.
This is great. We now can create and save a new user. Then respond with JWT which can be used to authorize user to protected routes. Next setp is implementing signin. We will use
passport-local strategy to implement signin.
In the last section, we implement
passport-jwt strategy. This time we will use
passport-local stragety to authenticate user using username and password.
// terminal npm install passport-local
In the existing
passport.js file, add local stragegy.
// require const LocalStrategy = require("passport-local").Strategy;
Then add local stragety. Put the following code at the end of the file.
// this code should be at the end of // passport.js // local stragegy passport.use( new LocalStrategy( { usernameField: 'email', }, async (email, password, done) => { // find the user specified in the given email const user = await User.findOne({email}); // if user doesnt exist, handle it if (!user) { return done(null, false); } // check if password is correct // if not, handle it // otherwise return the user } ) );
As you can see, I have left last few lines with comment. We need has the password using bcrypt before saving to database which is a standard practice. Then we need to compare the saved password with the supplied password that we get on signin request.
Lets move on to
models/user.js to implement hashing and compare. Then we will come back to
passport.js local-strategy and finish up our API.
In the last section we were trying to implement passport local-strategy but we realized that we need to has and compare the password using bcrypt to implement signin method. Lets do it!
// terminal npm install bcryptjs
Steps:
- Require bcrypt
- Use
premethod so that we can run certain piece of code before saving data to database
- In this case we will hash the password and only then save to database
- Then use mongoose
schema.methods.isValidPasswordproperty to validate
Here is the full code of
models/user.js
// models/user.js const mongoose = require("mongoose"); // require bcrypt const bcrypt = require("bcryptjs"); const Schema = mongoose.Schema; // create a schema // define the type const userSchema = new Schema({ email: { type: String, required: true, unique: true, lowercase: true }, password: { type: String, required: true } }); // using pre method we can run certain piece of code before saving to database userSchema.pre("save", async function(next) { try { // generate a salt const salt = await bcrypt.genSalt(10); // generate a password hash (salt+hash) const passwordHash = await bcrypt.hash(this.password, salt); this.password = passwordHash; next(); } catch (error) { next(error); } }); // use mongoose schema.methods.isValidPassword property to validate userSchema.methods.isValidPassword = async function(newPassword) { try { return await bcrypt.compare(newPassword, this.password); } catch (error) { throw new Error(error); } }; const User = mongoose.model("user", userSchema); module.exports = User;
Lets move back to the passport.js local-strategy to complete the signin feature.
Lets finish off the passport local-strategy to signin user.
Here is the full code of
passport.js. Here we are using
user.isValidPassword(password) method that we added to mongoose
schema.methods in
models/user.js passing password as argument.
// passport.js const passport = require("passport"); const JwtStrategy = require("passport-jwt").Strategy; const { ExtractJwt } = require("passport-jwt"); const LocalStrategy = require("passport-local").Strategy; const { JWT_SECRET } = require("./config"); const User = require("./models/user"); // passport jwt strategy); } } ) ); // local stragegy passport.use( new LocalStrategy( { usernameField: "email" }, async (email, password, done) => { try { // find the user specified in the given email const user = await User.findOne({ email }); // if user doesnt exist, handle it if (!user) { return done(null, false); } // check if password is correct const isMatch = await user.isValidPassword(password); // if not, handle it if (!isMatch) { return done(null, false); } // otherwise return the user done(null, user); } catch (error) { done(error, false); } } ) );
We are almost done. Let's make some small change to
controllers/user.js so that we can test our API.
We have already done a lot of work so far. Lets work on signin method now in controllers/user.js. Similar to signup method, we need to send token as response so that the user can be authenticated.
// controllers/user.js signin: async (req, res, next) => { // respond with token const token = signToken(req.user); res.status(200).json({ token }); },
Now you can p
assport.authenticate("local") method to authenticate the signin route.
// routes/user.js // signin router .route("/signin") .post( validateBody(schemas.authSchema), passport.authenticate("local", { session: false }), UsersController.signin );
Now just like you created a new user by making post request to signup route, you can make post request to signin route with the existing user email and password. If your email and password matches with the one that is saved in the database, you will be authenticated. Then with the returned token, you can get access to secret route.
Please delete or do not use the user that your created earlier to signin. Because then we did not have bcrypt implemented. But now we have hashed password implementation, old password wont work. So create a new user and try accessing signin route as well as secret route.
Everything should work perfectly.
Here are the screenshots:
Create a new user with
post request to
User signin with
post request to
Now copy the token and make a get request to protected route. In our case its secret page. Paste the token as value to Headers > authorization key.
Congratulations. You have build a Node JS API with Authentication. You can extend this API by adding CRUD features. You can also build a web app using frontend frameworks like Angular, React or Vue that uses this API.
For any help or suggestion, leave a comment below. Cheers! | https://kaloraat.com/courses/node-api-auth-tutorial | CC-MAIN-2020-05 | refinedweb | 3,783 | 60.31 |
Ladies and Gentlemen.We are in the presence of one of the most important computer feature used in wherever u can think of,homes,offices, companies,projects e.tc.This feature without it.computer i wont say wud be useless but it will become 50% less efficient. The name given to this outstanding feature is "FILE".
"FILes", this tutorial im writing is a file, whichs is just a bunch of text im writing.A collection of text whether garbage or not is called a file.OKay, lets say you have created a file and saved it as "myFile.dat".How can we manipulate it ( e.g extract a part of it, delete some part of it, encrypt it, e.tc so many things)?With the help of computer prrograms like c/c++, we can easily manipulate tha given file(in this case, myFile.dat).Enough of that buch of text( whats a bunch of text called..... a File ).Im goint to indroduce how to open and manipulate files using c / c++.so put your seatbelts on cuz the Traffic light is Green.
C++/C I/O operations
C++ commands to open files
1) ifstream( class to open file for reading)
2) ofstremn( class to open file for writing )
3) all of this classes extend from their main super class or father called "fstream".
As i said classes, just as you create e.g your "Quadilateral class"(which is more of like the fstream class as in this example )
class Quadilateral { double getArea();//example }
Ractangle, square are all Quadilaterals,likewise ifstream and ofstream are all fstream(confusing?i knw, read a tutorial on classes) any Quadilateral has to know its area.in this example our Rectangle and shape are more of like our "ifstream" and ofstream" .get a tutorial or request one to be written.
Okay lets get back to businees.
we want to open our littile file "myFile.dat".How do we do it in c++?its simple,
1)we have to create an object of type ifstream(remember before you can use a class u have to instantiate it.This is exactly what we are doing);
ifstream in; */ you can put anything apart from "in" its ur choice but put something meaningfull, sumthing that has to do with reading data*/
2) we now use one of the methods of the class ifstream( which it may or may have not inherited from its father(fstream) ).I havnt had much time to check how fstreeam was created but i think its more of an abstract class in which all its subclasses can implement its methods.The designer of the class created many method of which we have "open", "close" e.t.c. lets see an example, of how to open out little file "myFile.dat" for reading.
in.open("myFile.dat"); */ remeber the object "in" we created from the class ifstream */ in.close(); //closes the file
The same thing happens when you re opening a file for writing just foloow the steps but this time we are using the ofstream class.
ofstream out; //" out" can be any name but meaningful out.open( "output.dat" ); * / open a file for wrintg and naming it "output.dat" */ out.close(); // closes the file
PHEW, That was a hell lot of "FILE"(if u knw what i mean,bunch of text) for simple statements.
okay so know we have opened our file "myFile.dat" for reading, automatically the cursor is at the begining of the file.note, we dont knw what "myFile.dat" contains but we knw it could be numbers, bunch of words, mixture of both, e.t.c.Let stick with numbers because its somewhat the same idea you will c .
okay we want to read a number ofrom the file.here is code
int num; //variable to store number we read from file in >> num; */ remeber how to read from keyboard "cin >> num".this is exactly it but this time we are not reading from the keyboard but from our file we opened using our object("in")*/
if our file consisted of strings we can read each word by substituting the "int" with "string". Back to numbers, u read your first number and store it in variable num, u can print it using "cout << num;"
OKay hold on now, remeber we said we dont knw how many data our file contains, so obviously it might have contained only one data which we have read and stored in number, or it could have many more data in the file.How can we know?
Okay, you...The guy at the front row typing this tutorial answer that question.LOL
so we want to read the file till the end of file because we do not how many numbers it contains or the size of the file. so to read here is the code, we use a loop, i prefer while loop cuz u will c, its easier.modifying our code we knw hav
int num; in >> num; while( num ) { cout << num << endl; in >> num; }
what the code does, is after declaring the variable num, it reads the first num from the file( remeber the cursor is automatically at the begining of the file). it then checks in the wile loop whether the operation "in >> num" was successfull. if successfull, it enters the loop.which then prints the value, and then reads another num, it goes back to the loop to test whether it is successfull.After all the data have been read the condition will be false and you are out the loop, and have printed all the data in the file. This is the complete code.
#include <iostream> #include <fstream> using namespace std; int main() { ifstream in; ofstream out; int num; // remeber it can be string in.open("myFle.dat"); in.open("output.dat"); in >> num; while ( num ) { cout << num << endl; in >> num; } return 0; }
Thats it for the basics, ill be continuing this tutorial.its very hard to get everything, so dats why im stopping here, to allow room for questions. so u should keep on checking for the update.Take care guys..Ask questions, thats the best way to learn. | http://www.dreamincode.net/forums/topic/13930-basic-c-file-io/ | CC-MAIN-2017-04 | refinedweb | 1,028 | 72.26 |
In this tutorial, I will provide the essential details about the air quality index, the pinout of the MQ-135 sensor, and how to measure air pollution using the MQ-135 sensor.
After this tutorial, you will be able to develop the air pollution monitoring and alert system using the Arduino Uno board with an MQ-135 sensor.
Hardware components
Software
Makerguides.com is a participant in the Amazon Services LLC Associates Program, an affiliate advertising program designed to provide a means for sites to earn advertising fees by advertising and linking to products on Amazon.com.
What is air pollution?
Air pollution is the presence of excessive amounts of undesirable and unsafe solid or gaseous substances such as Carbon Monoxide, Lead, Nitrogen Oxide, Ozone, Particulate Matter, Sulfur Dioxide, etc., atmosphere.
Air pollution cause and effect
Air pollution has become an increasingly hazardous problem over the past few years. This factor is directly related to human health.
Global warming has become a severe concern for many countries; one widely faced issue is air pollution.
Other effects of air pollution also include various diseases like lung cancer, ischemic heart disease, asthma attacks, etc.
What is an air pollution monitoring system?
The Air pollution monitoring system is a facility to measure air pollutants using sensors, processing using microcontrollers, and showing results using various displays.
How can air pollution be monitored?
Air quality is a measure of how clean or polluted the air is. Air pollution is usually measured as Air Quality Index (AQI) in the PPM unit.
The sensors are most suitable for identifying hotspots at roadsides and near point sources. This sensor gets data that can be continuously monitored via different displays.
These are portable monitoring tools that can continuously monitor a range of pollutants.
Data can be downloaded to your computer and analyzed.
Air Quality Index: Source
What sensor can detect air pollution?
Nowadays, progress in electronics parts and availability of various sensors like MQ-135, MQ-2, MQ-3, MQ-4, MQ-5, MQ-6, etc., to measure the air pollutant, different systems are developed to monitor the number of air pollutants in the air.
MQ135 is one of the most popular sensors for measuring AQI in PPM.
What is the MQ-135 sensor?
MQ-135 is a gas sensor that has lower conductivity in clean air. It is low cost and suitable for different applications.
This module operates at 5V, has 33Ω±5% resistance, and consumes around 150mA.
This sensor has four pins from four pins, Digital and Analog output pins, which are used to approximate these gasses levels in the atmosphere.
When these gasses go beyond a threshold limit in the air, the digital pin rises.
This threshold value can be set by using the onboard potentiometer.
Features of MQ-135 Sensor
- Wide detecting scope, Fast response, and High sensitivity
- Long lifespan
- Heater Voltage: 5.0V
- It contains analog output and high/low digital output
- The TTL output signal is a low level
- The operating Voltage is +5V
- Detected/Measure NH3, NOx, alcohol, Benzene, smoke, CO2, etc.
- Detection Range: 10 – 300 ppm NH3, 10 – 1000 ppm Benzene, 10 – 300 Alcohol
If you want more details about the MQ-135 sensor, refer to the MQ-135 datasheet and MQ-135 Schematic pdf.
Working Mechanism
Now, let’s understand the working mechanism of the MQ-135 gas sensor.
The MQ-135 Gas sensor consists of Tin Dioxide (SnO2). When target pollution gas exists, the sensor’s conductivity increases along with the gas concentration.
Users can convert the change of conductivity to correspond to the output signal of gas concentration through a simple circuit.
The MQ-135 gas sensor has a high sensitivity to NH3, S2, C6H6 series steam and can monitor smoke and other toxic gasses. It can detect kinds of toxic gasses.
Pinout of MQ135
The MQ-135 sensor module has four pins, with the most important part being an adjustable potentiometer.
Can air quality be measured with an Arduino device?
Arduino Uno consists of 14 digital input/output (I/O) pins and 6 analog input pins, which fulfill the requirement of the AQI monitoring system.
This system could be extended by various modules compatible with Arduino, such as Ethernet shield, GSM/GPRS shield, GPS logger shield, RTC shield, and built-in board with VCC, Ground, etc.
Now you know quite a bit about the MQ-135 sensor, let’s move ahead and learn how to interface the MQ-135 with Arduino.
-> Read our guide about What You Can Build with Adruino.
Wiring MQ-135 sensor with Arduino UNO
This circuit diagram is self-explanatory. The best way to interface this circuit is to start with an Arduino Uno board and LCD with the breadboard.
Step 1: First, place a 16×2 LCD on the breadboard, as shown in the figure.
Now, connect A to +5V with a 220-ohm resistor and K to Ground.
To vary the contrast of a 16×2 LCD, connect the VO to the middle pin of the potentiometer and VDD to +5V, VSS & RW to Ground.
Also, provide +5V and Ground to the potentiometer as depicted in the figure.
The summary of connections is mentioned in the below table.
Step 2: The MQ-135 module connects to the A0 pin of an Arduino Uno and connects GND to Ground, providing +5V to VCC.
Step 3: Connect Anode (+) of Green LED to digital pin 8 of Arduino; Blue LED to digital 9 pin of Arduino and Red LED to digital pin 10 of Arduino and all LEDs Cathode (-) Ground with 220-ohm resistor.
Step 4: Connect the buzzer’s positive terminal to digital pin 11 of Arduino and the negative terminal to the ground.
-> Learn more about How Easy Is It To Learn Arduino here.
Arduino Code For Air Pollution Monitoring
The following code allows you to measure the air pollutant in the air in a PPM unit using the MQ-135 sensor.
You can see the results on a serial monitor and a 16×2 LCD.
// Include library for LCD and define pins #include <LiquidCrystal.h> const int rs = 2, en = 3, d4 = 4, d5 = 5, d6 = 6, d7 = 7; LiquidCrystal lcd(rs, en, d4, d5, d6, d7); // Define pins and variable for input sensor and output led and buzzer const int mq135_aqi_sensor = A0; const int green_led = 8; const int blue_led = 9; const int red_led = 10; const int buzzer = 11; // Set threshold for AQI int aqi_ppm = 0; void setup() { // Set direction of input-output pins pinMode (mq135_aqi_sensor, INPUT); pinMode (green_led, OUTPUT); pinMode (blue_led, OUTPUT); pinMode (red_led, OUTPUT); pinMode (buzzer, OUTPUT); digitalWrite(green_led, LOW); digitalWrite(blue_led, LOW); digitalWrite(red_led, LOW); digitalWrite(buzzer, LOW); // Initiate serial and lcd communication Serial.begin (9600); lcd.clear(); lcd.begin (16, 2); Serial.println("AQI Alert System"); lcd.setCursor(0, 0); lcd.print("AQI Alert System"); delay(1000); } void loop() { aqi_ppm = analogRead(mq135_aqi_sensor); Serial.print("Air Quality: "); Serial.println(aqi_ppm); lcd.setCursor(0, 0); lcd.print("Air Quality: "); lcd.print(aqi_ppm); if ((aqi_ppm >= 0) && (aqi_ppm <= 50)) { lcd.setCursor(0, 1); lcd.print("AQI Good"); Serial.println("AQI Good"); digitalWrite(green_led, HIGH); digitalWrite(blue_led, LOW); digitalWrite(red_led, LOW); digitalWrite(buzzer, LOW); } else if ((aqi_ppm >= 51) && (aqi_ppm <= 100)) { lcd.setCursor(0, 1); lcd.print("AQI Moderate"); Serial.println("AQI Moderate"); tone(green_led, 1000, 200); digitalWrite(blue_led, HIGH); digitalWrite(red_led, LOW); digitalWrite(buzzer, LOW); } else if ((aqi_ppm >= 101) && (aqi_ppm <= 200)) { lcd.setCursor(0, 1); lcd.print("AQI Unhealthy"); Serial.println("AQI Unhealthy"); digitalWrite(green_led, LOW); digitalWrite(blue_led, HIGH); digitalWrite(red_led, LOW); digitalWrite(buzzer, LOW); } else if ((aqi_ppm >= 201) && (aqi_ppm <= 300)) { lcd.setCursor(0, 1); lcd.print("AQI V. Unhealthy"); Serial.println("AQI V. Unhealthy"); digitalWrite(green_led, LOW); tone(blue_led, 1000, 200); digitalWrite(red_led, HIGH); digitalWrite(buzzer, LOW); } else if (aqi_ppm >= 301) { lcd.setCursor(0, 1); lcd.print("AQI Hazardous"); Serial.println("AQI Hazardous"); digitalWrite(green_led, LOW); digitalWrite(blue_led, LOW); digitalWrite(red_led, HIGH); digitalWrite(buzzer, HIGH); } delay (700); }
How The Code Works
I have used an Arduino IDE to program an Arduino Uno board.
Step 1: First, I have included the necessary header file for 16×2 LCD,
#include <LiquidCrystal.h> // Header file for LCD
Step 2: Define the variables for different pins used in the Arduino board for LCD, Buzzer, LED, and MQ-135. Also, define the variable for analog value.
const int rs = 2, en = 3, d4 = 4, d5 = 5, d6 = 6, d7 = 7; // LCD to Arduino pins LiquidCrystal lcd(rs, en, d4, d5, d6, d7); // LiquidCrystal function const int mq135_aqi_sensor = A0; // Connect MQ-135 sensor to A0 pin of Arduino const int green_led = 8; // Connect green LED to digital 8 pin of Arduino const int blue_led = 9; // Connect blue LED to digital 9 pin of Arduino const int red_led = 10; // Connect red LED to digital 10 pin of Arduino const int buzzer = 11; // Connect buzzer to digital 11 pin of Arduino int aqi_ppm = 0; // Initialize the variable for catch up analog value
Step 3: In the void setup() function, I have initialized the Buzzer, LED as an output device, and MQ-135 as an input device to Arduino Uno.
lcd.begin(); function initialize 16×2 LCD, before initializing the 16×2 LCD you must clear.
Using lcd.clear(); function. Also, set the baud rate at 9600-speed rate using Serial.begin(); function.
Step 4: After Initialize,16×2 LCD in set the cursor home position.
lcd.setCursor(0, 0);
The below message will print on the serial terminal and 16×2 LCD.
Serial.println(“AQI Alert System”);
lcd.print(“AQI Alert System”);
Step 5: In void loop() function, initializing the variable to catch up analog value, using analogRead(); function. And print on message serial terminal and 16×2 LCD.
aqi_ppm = analogRead(mq135_aqi_sensor); Serial.print("Air Quality: "); Serial.println(aqi_ppm); lcd.setCursor(0, 0); lcd.print("Air Quality: "); lcd.print(aqi_ppm);
Step 6: If the Suppose Analog value is between 0 to 50, the green LED will blink. Print on message serial terminal. And on LCD will print a message in the second row and first position.
lcd.setCursor(0, 1); lcd.print("AQI Good"); Serial.println("AQI Good"); digitalWrite(green_led, HIGH);
Step 7: Same as, depending on AQI value, LED will blink and print on the message serial terminal and the 16×2 LCD following the table.
LCD Output:
Serial Monitor Output:
Video Preview & Explanation
If you need a guide from the video, follow this link Video Preview.
–> Check out our guide to the Top 12 Best Arduino Online Courses
Conclusion
After this tutorial, you can develop your own Air Monitoring and Alert System using the MQ-135 sensor and Arduino Uno board.
I hope you found this tutorial informative. If you did, please share it with a friend who likes electronics and making things!
I would love to know what project you plan on building or have already made with the Arduino.
If you have any questions or suggestions or think things are missing in this tutorial, please leave a comment below.
Note that comments are held for moderation to prevent spam. | https://www.makerguides.com/air-pollution-monitoring-and-alert-system-using-arduino-and-mq135/ | CC-MAIN-2022-21 | refinedweb | 1,834 | 56.25 |
39074/how-to-insert-record-in-dynamodb-stored-in-json-file-using-cli
The command is the same as that you use normally, all you need to do is replace the data to the file location. Its better to navigate to the location where your json file is.
C:\Users\priyj_kumar\Desktop>aws dynamodb put-item --table-name YourTableName --item --return-consumed-capacity TOTAL
The output that you will receive is as follows:-
{
"ConsumedCapacity": {
"TableName": "Employee",
"CapacityUnits": 1.0
}
}
Hope this helps.
You can use the below command
$ aws ...READ MORE
The code would be something like this:
import ...READ MORE
Hello @Jino,
The command for creating security group ...READ MORE
You can use method of creating object ...READ MORE
Follow the guide given here in aws ...READ MORE
Check if the FTP ports are enabled ...READ MORE
To connect to EC2 instance using Filezilla, ...READ MORE
AWS ElastiCache APIs don't expose any abstraction ...READ MORE
The best practice will be:
running this command ...READ MORE
Here is the table that I have ...READ MORE
OR | https://www.edureka.co/community/39074/how-to-insert-record-in-dynamodb-stored-in-json-file-using-cli?show=39076 | CC-MAIN-2019-30 | refinedweb | 177 | 69.48 |
Archive for March 2010
Polyglot Programming at the AIC
Thanks to everyone who attended my session on Polyglot Programming at AIC earlier today. The ability to combine languages to achieve simpler solutions is worth consideration, although the potential downsides of adopting multiple languages need to be borne in mind. In truth, many of us are already doing a form of polyglot programming – combining a programming language server side (such as C#) with Javascript on the client and SQL for data access. However, this form of polyglot programming arises passively and is done because we have to and not because we have deliberately and actively selected a set of languages. In order to be successful with polyglot programming, there are two crucial components: a platform and architecture. The platform should provide language interoperability and the architecture should provide guidance on which languages to use and where in the architecture they are appropriate (taking into account that this guidance will evolve over time as you gather information and feedback about what really works for you and your team.) In discussing the platform, I briefly touched on some of the features in .NET 4 and talked about the trends in language design. I’ll be expanding on these platform themes and diving into a little more detail about .NET 4.0 at TechDays in April.
Here are the links and references I used in my session:
Nick Watts’ post on polyglot programming (I used his definition)
The definition of polyglot in the Compact Oxford English Dictionary
Neal Ford’s post on polyglot programming
Bertrand Meyer’s article on polyglot programming in Dr.Dobb’s
Information about the Babel Project at the Lawrence Livermore National Laboratory
Ted Neward’s MSDN article The Polyglot Programmer: Mixing and Matching Languages
History of Programming Languages
Hans Christian Fjeldberg’s thesis on Polyglot Programming
Dean Wampler’s presentation on Polyglot and Poly-Paradigm Programming
I also referred to Ola Bini’s idea of fractal programming.
I suggested that there are two practical areas where experimentation with polyglot programming could be beneficial with existing applications and systems: extension and testing. By extension, I mean the ability to customise and add functionality – which is a great fit for a dynamic language like IronRuby or IronPython. Testing is also an area where dynamic languages have much to offer and I’d suggest taking a look at Ben Hall’s presentation that he gave at QCon earlier this year.
Batteries Included
Sometimes you’ll hear the Python standard library referred to as “batteries included” – a little more info here. IronPython can also use these included batteries. As an example, today I needed to list the files in a folder. So, a simple Python script seemed like a good way of doing that (I’m sure there are better and more ingenious ways.) Here it is:
import os
import os.path
import sys
from optparse import OptionParser
def list_files(path, indent=0):
for filename in sorted(os.listdir(path)):
print " " * indent + filename
full_path = os.path.join(path, filename)
if (options.recursive) and (os.path.isdir(full_path)):
list_files(full_path, indent + 2)
parser = OptionParser()
parser.add_option("-d", "--directory",
action="store", type="string", dest="path",
help="The directory to list.")
parser.add_option("-r", "--recursive",
action="store_true", default=False,
help="Whether to list subdirectories.")
parser.add_option("-f", "--output_file",
action="store", type="string", dest="output_file",
help="Directory contents will be listed to this file if specified.")
(options, args) = parser.parse_args()
if (options.path):
path = options.path
else:
path = sys.path[0]
out = sys.stdout
if (options.output_file):
output_file = open(options.output_file, 'w')
sys.stdout = output_file
list_files(path)
if (options.output_file):
output_file.flush()
output_file.close()
sys.stdout = out
As you can see, the script takes advantage of optparse to process the command line arguments, sys (to get the current folder and to get access to and redirect the output of the script), os (to list the contents of a folder) and os.path (to test if a given path is a folder.) And, being IronPython, you also get a second set of batteries in the form of the .NET framework.
Iron Python at QCon
I spent yesterday on the Microsoft stand at QCon 2010. I took a few Iron Python samples with me to show to those who are interested. I wanted to be able to show three things: .NET runs Python, Python extends .NET and Python runs .NET.
.NET runs Python
To show that .NET can run Python I used the Text Processing sample I’ve blogged about before. I’ve subsequently added optparse to it so that it can be driven from the command line. The point of this sample is that it uses standard Python libraries, the whole application is written in Python (there’s a little XAML to describe the UI) and runs on the DLR courtesy of IronPython.
Python extends .NET
For a simple demonstration of extending a .NET application with Python, I took the sample application described here. This application allows the user to write Python (at runtime) that interacts with the application.
Python runs .NET
The last sample was an adaptation of the code here that reads a Twitter feed. Rather than use Twitter (with all the shortened urls and abbreviations) I decided to use an RSS feed from the BBC to create an Iron Python newsreader. The code is remarkably simple:
import clr clr.AddReference('System.Speech') clr.AddReference('System.Xml') from System.Speech.Synthesis import SpeechSynthesizer from System.Xml import XmlDocument, XmlTextReader xmlDoc = XmlDocument() xmlDoc.Load("") spk = SpeechSynthesizer() itemsNode = xmlDoc.DocumentElement.SelectNodes("channel/item") for item in itemsNode: print item.SelectSingleNode("title").InnerText news = "<?xml version='1.0'?><speak version='1.0' xml:<break />" + item.SelectSingleNode("description").InnerText + "</speak>" spk.SpeakSsml(news)
This is Python using standard .NET libraries to show how a Python programmer has the .NET framework available to them through Iron Python.
Gestalt
The final thing I talked about is Gestalt, which allows you to run Python (and Ruby) in the browser. It does this by using the DLR, which is part of Silverlight – this is all encapsulated in javascript. | https://remark.wordpress.com/2010/03/ | CC-MAIN-2020-29 | refinedweb | 1,013 | 57.57 |
Flash Player 9 and later, Adobe AIR 1.0 and
later
Pausing and resuming a sound
Monitoring playback
Stopping streaming sounds
Playing a loaded sound can be as simple as calling the Sound.play() method for
a Sound object, as follows:
var snd:Sound = new Sound(new URLRequest("smallSound.mp3"));
snd.play();
When playing back sounds using ActionScript 3.0, you can perform
the following operations:
Play a sound from a specific starting position
Pause a sound and resume playback from the same position
later
Know exactly when a sound finishes playing
Track the playback progress of a sound
Change volume or panning while a sound plays
To perform these operations during playback, use the SoundChannel,
SoundMixer, and SoundTransform classes.
The SoundChannel class controls the playback of a single sound.
The SoundChannel.position property can be thought
of as a playhead, indicating the current point in the sound data
that’s being played.
When an application calls the Sound.play() method,
a new instance of the SoundChannel class is created to control the
playback.
Your application can play a sound from a specific starting position
by passing that position, in terms of milliseconds, as the startTime parameter
of the Sound.play() method. It can also specify
a fixed number of times to repeat the sound in rapid succession
by passing a numeric value in the loops parameter
of the Sound.play() method.
When the Sound.play() method is called with
both a startTime parameter and a loops parameter,
the sound is played back repeatedly from the same starting point
each time, as shown in the following code:
var snd:Sound = new Sound(new URLRequest("repeatingSound.mp3"));
snd.play(1000, 3);
In this example, the sound is played from a point one second
after the start of the sound, three times in succession.
If your application plays long sounds, like songs or podcasts,
you probably want to let users pause and resume the playback of
those sounds. A sound cannot literally be paused during playback
in ActionScript; it can only be stopped. However, a sound can be
played starting from any point. You can record the position of the
sound at the time it was stopped, and then replay the sound starting
at that position later.
For example, let’s say your code loads and plays a sound file
like this:
var snd:Sound = new Sound(new URLRequest("bigSound.mp3"));
var channel:SoundChannel = snd.play();
While the sound plays, the SoundChannel.position property
indicates the point in the sound file that is currently being played.
Your application can store the position value before stopping the
sound from playing, as follows:
var pausePosition:int = channel.position;
channel.stop();
To resume playing the sound, pass the previously stored position
value to restart the sound from the same point it stopped at before.
channel = snd.play(pausePosition);
Your application might want to know when a sound stops playing
so it can start playing another sound, or clean up some resources
used during the previous playback. The SoundChannel class dispatches
an Event.SOUND_COMPLETE event when its sound finishes
playing. Your application can listen for this event and take appropriate
action, as shown below:
import flash.events.Event;
import flash.media.Sound;
import flash.net.URLRequest;
var snd:Sound = new Sound();
var req:URLRequest = new URLRequest("smallSound.mp3");
snd.load(req);
var channel:SoundChannel = snd.play();
channel.addEventListener(Event.SOUND_COMPLETE, onPlaybackComplete);
public function onPlaybackComplete(event:Event)
{
trace("The sound has finished playing.");
}
The SoundChannel class does not dispatch progress events during
playback. To report on playback progress, your application can set
up its own timing mechanism and track the position of the sound
playhead.
To calculate what percentage of a sound has been played, you
can divide the value of the SoundChannel.position property
by the length of the sound data that’s being played:
var playbackPercent:uint = 100 * (channel.position / snd.length);
However, this code only reports accurate playback percentages
if the sound data was fully loaded before playback began. The Sound.length property
shows the size of the sound data that is currently loaded, not the
eventual size of the entire sound file. To track the playback progress
of a streaming sound that is still loading, your application should
estimate the eventual size of the full sound file and use that value
in its calculations. You can estimate the eventual length of the sound
data using the bytesLoaded and bytesTotal properties
of the Sound object, as follows:
var estimatedLength:int =
Math.ceil(snd.length / (snd.bytesLoaded / snd.bytesTotal));
var playbackPercent:uint = 100 * (channel.position / estimatedLength);
The following code loads a larger sound file and uses the Event.ENTER_FRAME event
as its timing mechanism for showing playback progress. It periodically
reports on the playback percentage, which is calculated as the current
position value divided by the total length of the sound data:
import flash.events.Event;
import flash.media.Sound;
import flash.net.URLRequest;
var snd:Sound = new Sound();
var req:URLRequest = new
URLRequest("");
snd.load(req);
var channel:SoundChannel;
channel = snd.play();
addEventListener(Event.ENTER_FRAME, onEnterFrame);
channel.addEventListener(Event.SOUND_COMPLETE, onPlaybackComplete);
function onEnterFrame(event:Event):void
{
var estimatedLength:int =
Math.ceil(snd.length / (snd.bytesLoaded / snd.bytesTotal));
var playbackPercent:uint =
Math.round(100 * (channel.position / estimatedLength));
trace("Sound playback is " + playbackPercent + "% complete.");
}
function onPlaybackComplete(event:Event)
{
trace("The sound has finished playing.");
removeEventListener(Event.ENTER_FRAME, onEnterFrame);
}
After the sound data starts loading, this code calls the snd.play() method and
stores the resulting SoundChannel object in the channel variable.
Then it adds an event listener to the main application for the Event.ENTER_FRAME event
and another event listener to the SoundChannel object for the Event.SOUND_COMPLETE event
that occurs when playback is complete.
Each time the application reaches a new frame in its animation,
the onEnterFrame() method is called. The onEnterFrame() method estimates
the total length of the sound file based on the amount of data that
has already been loaded, and then it calculates and displays the
current playback percentage.
When the entire sound has been played, the onPlaybackComplete() method
executes, removing the event listener for the Event.ENTER_FRAME event
so that it doesn’t try to display progress updates after playback
is done.
The Event.ENTER_FRAME event can be dispatched
many times per second. In some cases, you won’t want to display
playback progress that frequently. In those cases, your application
can set up its own timing mechanism using the flash.util.Timer class;
see Working with dates and times.
There is a quirk in the playback process for sounds that are
streaming—that is, for sounds that are still loading while they
are being played. When your application calls the SoundChannel.stop() method
on a SoundChannel instance that is playing back a streaming sound,
the sound playback stops for one frame, and then on the next frame,
it restarts from the beginning of the sound. This occurs because
the sound loading process is still underway. To stop both the loading and
the playback of a streaming sound, call the Sound.close() method.
Twitter™ and Facebook posts are not covered under the terms of Creative Commons. | http://help.adobe.com/en_US/as3/dev/WS5b3ccc516d4fbf351e63e3d118a9b90204-7d21.html | CC-MAIN-2015-06 | refinedweb | 1,185 | 56.66 |
Changing Proxy If proxy is dead
By
RyukShini, in AutoIt General Help and Support
Recommended Posts
This topic is now closed to further replies.
Similar Content
- By watchoverme
hi all, how can i move mouse to the place where pixel changes
While Sleep (3000)
$pix = PixelChecksum(0,0,55,55)
If IsArray($pix) = True Then
MouseMove($pix [0],$pix[1])
EndIf
WEnd
- Ignacio
Hello, and good day
Im trying to make a sentence autocompleter so that when you type certain words (or commands) the scripts, and im in need of help/pointers
That is what i have at the time and the issues i have currently is that:
- I cant find an easy way to reset the counter to 0 in case a different letter from those are pressed ( tried NOT _ispressed but i think i got it wrong)
-Is there another way to detect the key press that _ispressed? (i couldnt find it so far), since i feel like it is too clunky ( although maybe that is just me and my way to code)
- for some reason the hex code (6F) for the / (divide nume pad) isnt working for me
Im thinking of making a text file with some words to use them as variables/comparations (so that at least removes the need of a variable for the words in the script) and make the script make a temporal text file to save the input and then compare it with the other one. But i dont know if that is a good approach.
Thanks for your time and patience.
- By WiorDi37
Hello, Everyone!
I want when clicking the exit button the window will close. If content changes upon exit the program will automatically choose not save.
Look forward to the help, thanks.
#include <ButtonConstants.au3> #include <GUIConstantsEx.au3> #include <WindowsConstants.au3> #include <AutoItConstants.au3> $GUI = GUICreate("Form1", 220, 119, 192, 124, $WS_SYSMENU) GUISetFont(10, 400, 0, "Tahoma") GUICtrlCreateGroup("Chuẩn bị trình chiếu", 16, 16, 185, 65) $ok_Button = GUICtrlCreateButton("Ok", 32, 48, 75, 25) $exit_Button = GUICtrlCreateButton("Exit", 112, 48, 75, 25) GUICtrlCreateGroup("", -99, -99, 1, 1) GUISetState(@SW_SHOW) While 1 $nMsg = GUIGetMsg() Switch $nMsg Case $ok_Button ShellExecute(@MyDocumentsDir&'\Dich-thuat\Short-Document.pdf', "", "", Default, @SW_MAXIMIZE) WinWaitActive("Data and Computer Communications (Eighth Edition) - Google Chrome") ShellExecute(@MyDocumentsDir&'\Dich-thuat\Document.rtf', "", "", Default, @SW_MAXIMIZE) WinWaitActive("Document.rtf [Compatibility Mode] - Word") ShellExecute(@MyDocumentsDir&'\Dich-thuat\Presentation1.pptx', "", "", Default, @SW_MAXIMIZE) WinWaitActive("Presentation1.pptx - PowerPoint") MouseClick("left", 1381, 886, 1) Sleep(2000) MsgBox(64, "Thông báo", "Đã chuẩn bị xong") Case $exit_Button WinClose("Presentation1.pptx - PowerPoint") ;I need help handling this place Case $GUI_EVENT_CLOSE Exit EndSwitch WEnd
-? | https://www.autoitscript.com/forum/topic/180253-changing-proxy-if-proxy-is-dead/ | CC-MAIN-2018-17 | refinedweb | 432 | 50.87 |
Warning: You are browsing the documentation for Symfony 3.1, which is no longer maintained.
Read the updated version of this page for Symfony 5.3 (the current stable version).:
namespace AppBundle\Form; use AppBundle(array( 'data_class' => Post::class, )); } }
Best Practice
Put the form type classes in the
AppBundle\Form namespace, unless you
use other custom form classes like data transformers.
To use the class, use
createForm() and pass the fully qualified class name:
// ... use AppBundle\Form\PostType; // ... public function newAction(Request $request) { $post = new Post(); $form = $this->createForm(PostType::class, $post); // ... }
Registering Forms as Services¶
You can also register your form type as a service. This is only needed if your form type requires some dependencies to be injected by the container, otherwise it is unnecessary overhead and therefore not recommended to do this for all form type classes.).
This work, including the code samples, is licensed under a Creative Commons BY-SA 3.0 license. | https://symfony.com/doc/3.1/best_practices/forms.html | CC-MAIN-2021-43 | refinedweb | 157 | 56.96 |
Type: Posts; User: Symus
I finally managed to resolve the situation. And here it is the final code:
#include <iostream>
#include <cstdlib>
#include <iomanip>
using namespace std;
OK, why is this not working? :(
And how can I fix it?
Hello guys,
Thanks for your help and suggestions. I managed to solve this task without converting to string. Here is my code if someone is interested:
#include <iostream>
#include...
Hi all,
I am trying to solve the following task:
-Write a recursive function that prints an integer with decimal separators. For example, 12345678 should be printed as 12,345,678.
The problem...
Oops...yes, copy-paste!!! Sorry.
I just tried it and it works...even for the in-order traversal... I don't know what I did before but I guess that I was calling the wrong functions. I feel so...
I understand what you mean. Let me show you:
void TreeNode::postorder() const
{
if (left != NULL)
{
left->preorder();
}
I tried this the first time but when I run the program the output is wrong...I don't know why it happens. That's why I decided to do it with argument :)
Thank you for your help.
I found one more mistake in the same function, the cout should be a.data instead of data. The program now works accurately.
Thanks :)
Hi all,
I am trying to develop a function which prints a binary tree using post-order traversal (left, right, root) but I am experiencing some troubles. The code
is compiled but when I run the...
Thanks for your help! :)
I see. So is there a way to do this with seek/tell. Could also point me the easiest way of reversing a string?
Thank you in advance. :)
Thank you Paul,
But the assignment requires to use these pointers and I think that my problem is right there. I think my positions are not correct.
Hello guys,
I have an assignment that asks me to write a program which reads all lines from a text file (one by another), reverses them and write to the same file. I also have a pseudocode with...
Hello Paul,
First I would like to thank you for your reply - so thank you :)
It is a little hard for me to fully understand the code above because we did not learned all the things which I see...
Hello guys,
I am new to the forum and I kindly ask for help :) (excuse me for my bad language)
I have a task to write a program Simple Diary (in C++) which has to store notes in a text file but... | http://forums.codeguru.com/search.php?s=b257b80583c185bc7fdc2ec8e591fe1b&searchid=5373573 | CC-MAIN-2014-42 | refinedweb | 434 | 83.56 |
just released a new PyGObject, for GNOME 3.7.2 which is due on Wednesday.
In this version PyGObject went through some major refactoring: Some 5.000 lines of static bindings were removed and replaced with proper introspection and some overrides for backwards compatibility, and the static/GI/overrides code structure was simplified. For the developer this means that you can now use the full GLib API, a lot of which was previously hidden by old and incomplete static bindings; also you can and should now use the officially documented GLib API instead of PyGObject’s static one, which has been marked as deprecated. For PyGObject itself this change means that the code structure is now a lot simpler to understand, all the bugs in the static GLib bindings are gone, and the GLib bindings will not go out of sync any more.
Lots of new tests were written to ensure that the API is backwards compatible, but experience teaches that ther is always the odd corner case which we did not cover. So if your code does not work any more with 3.7.2, please do report bugs.
Another important change is that if you build pygobject from source, it now defaults to using Python 3 if installed. As before, you can build for Python 2 with
PYTHON=python2.7 or the new
--with-python=python2.7 configure option.
This release also brings several marshalling fixes, docstring improvements, support for code coverage, and other bug fixes.
Thanks to all contributors!
Summary of changes (see changelog for complete details):
- [API change] Drop almost all static GLib bindings and replace them with proper introspection. This gets rid of several cases where the PyGObject API was not matching the real GLib API, makes the full GLib API available through introspection, and makes the code smaller, easier to maintain. For backwards compatibility, overrides are provided to emulate the old static binding API, but this will throw a PyGIDeprecationWarning for the cases that diverge from the official API (in particular, GLib.io_add_watch() and GLib.child_watch_add() being called without a priority argument). (Martin Pitt, Simon Feltman)
- [API change] Deprecate calling GLib API through the GObject namespace. This has always been a misnomer with introspection, and will be removed in a later version; for now this throws a PyGIDeprecationWarning.
- [API change] Do not bind gobject_get_data() and gobject_set_data(). These have been deprecated for a cycle, now dropped entirely. (Steve Frécinaux) (#641944)
- [API change] Deprecate void pointer fields as general PyObject storage. (Simon Feltman) (#683599)
- Add support for GVariant properties (Martin Pitt)
- Add type checking to GVariant argument assignment (Martin Pitt)
- Fix marshalling of arrays of struct pointers to Python (Carlos Garnacho) (#678620)
- Fix Gdk.Atom to have a proper str() and repr() (Martin Pitt) (#678620)
- Make sure g_value_set_boxed does not cause a buffer overrun with GStrvs (Simon Feltman) (#688232)
- Fix leaks with GValues holding boxed and object types (Simon Feltman) (#688137)
- Add doc strings showing method signatures for gi methods (Simon Feltman) (#681967)
- Set Property instance doc string and blurb to getter doc string (Simon Feltman) (#688025)
- Add GObject.G_MINSSIZE (Martin Pitt)
- Fix marshalling of GByteArrays (Martin Pitt)
- Fix marshalling of ssize_t to smaller ints (Martin Pitt)
- Add support for lcov code coverage, and add a lot of missing GIMarshallingTests and g-i Regress tests. (Martin Pitt)
- pygi-convert: remove deprecated GLib ? GObject conversions (Jose Rostagno)
- Add support for overriding GObject.Object (Simon Feltman) (#672727)
- Add –with-python configure option (Martin Pitt)
- Do not prefer unversioned “python” when configuring, as some distros have “python” as Python 3. Use Python 3 by default if available. Add –with-python configure option as an alternative to setting $PYTHON, whic is more discoverable. (Martin Pitt)
- Fix property lookup in class hierarchy (Daniel Drake) (#686942)
- Move property and signal creation into _class_init() (Martin Pitt) (#686149)
- Fix duplicate symbols error on OSX (John Ralls)
- [API add] Add get_introspection_module for getting un-overridden modules (Simon Feltman) (#686828)
- Work around wrong 64 bit constants in GLib Gir (Martin Pitt) (#685022)
- Mark GLib.Source.get_current_time() as deprecated (Martin Pitt)
- Fix OverflowError in source_remove() (Martin Pitt) (#684526) | http://voices.canonical.com/user/29/tag/gnome/?page=1 | CC-MAIN-2018-13 | refinedweb | 681 | 52.49 |
User Name:
Published: 09 Aug 2010
By: Xianzhong Zhu.
As is well known, Shader Effect is a great thing supported with the release of WPF 3.0/ Silverlight 3.0. However, WPF only comes with five kinds of bitmap effects and two kinds of rendering effects while Silverlight only supports two kinds of rendering effects: BlurEffect (fuzzy rendering) and DropShadowEffect (projection rendering). Currently, because WPF/Silverlight is merely used by most developers to develop applications in a rare range of fields they are just looked upon as Winform/Webform alternatives. For this, the above types of ready-made rendering effects may be sufficient to meet the needs of the majority of occasions. But if you intend to use WPF/Silverlight in high-end development, such as animation and game design, then these cases requires dozens of custom rendering effects library to meet the needs of rendering colorful pictures..
The development environments we'll use in the sample application are:
1. Windows XP Professional (SP3);
2. .NET 4.0;
3. Visual Studio 2010;
4. Microsoft Silverlight Tools for Visual Studio 2010;
5. Microsoft Expression Blend 3;
6. DirectX SDK (Download:, name DXSDK_Aug09.exe, size 553.3 MB);
7. (Optional) a mini rendering editor Shazzam, whose download address is.
Explained by the previous article, you have already gained a better understanding with HLSL. In fact, HLSL rendering is not limited to static rendering, there also bears the other greater energy in it.
In the previous article you've known that a typical HLSL program generally contains several global variables of the type register. From another point of view, these global variables right corresponds to the changeable dependency properties of the Effect component. Readers familiar with Silverlight Storyboard are easy to realize that by updating the value of each global variable in the HLSL code according to certain rules we can also get cool animated form of rendering effect, so that another kind of animation is generated. This is just what really interests us.
In the subsequent sections in this article I will show you how to achieve HLSL based advanced animation effects on Silverlight platform.
First of all, let's follow the listed steps below to set up the demo project sketch.
1. Start Visual Studio 2010 to create a basic Silverlight 4 project and name it SL4CustomEffectAnimation.
2. Right click the solution SL4CustomEffectAnimation, select "Add|New Project...", and add a Silverlight 4 class library, naming it SL4CustomEffectLib.
As implied above and in the previous article, with the help of Shazzam, a good tool to help you quickly create and test HLSL script, we can more easily start HLSL script programming and also obtain semi-finished C# code.
Hence, we can create our interested HLSL shader effect using Shazzam and compile the related scripts into .ps files and finally encapsulate these files and associated C# classes into the above class library SL4CustomEffectLib to be used later on in the Silverlight sample application.
As is known, there are more than forty ready-to-use sample shader effects (refer to Figure 1) shipped with Shazzam. For simplicity, in this case we are going to use some of these built-in shader effects to create our custom shader effects library.
Since we will use the ready-to-use effects and code provided by Shazzam things become easier. Take the Ripple effect (which is one of our interested effects to be used in the sample application) for example. Just follow the steps below:
1. Start Shazzam and select the tab "Shader Loader".
2. Click the sub tab "Sample Shaders". From the preinstalled sample list box click the file 'Ripple.fx'. In this way, we can obtain the following HLSL scripts:
Since we've dissected the similar scripts in the previous article and there are already enough comments in the code, we do not need to give further explanations.
3. Click the tab "Generated Shader – C#" at the lower right corner in Shazzam, and you will see the following ready-to-use C# code:
Here, I still do not want to give further explanations about the above C# code since it's nearly all the same for you to create other Shader effects related C# code. However, all the above public static read-only typed dependency properties should cause your extreme attention. In fact, as you've guessed, these properties are just the ones we will change to create later animations. So, we cannot clone the right form of C# code in building our above effect library.
4. Select the menu "Tools" - "Compile Shader" or press the F7 key to compile your script. If all goes well, the system will give appropriate hints.
5. The current task is to find the related .ps file and copy the C# code. In my machine the path seems a bit peculiar - C:\Documents and Settings\Administrator\Local Settings\Application Data\Shazzam\GeneratedShaders. Anyway, copy the file Ripple.ps to the root path of the project SL4CustomEffectLib.
In the next section, we will use the above Shazzam generated files to set up our custom effecs library - SL4CustomEffectLib.
To facilitate the management of all the .ps files we put the above file Ripple.ps to a newly-created sub folder ShaderSource in the project SL4CustomEffectLib. Then, in the Properties dialog box of this file, do remember to change the type from "Build Action" to "Resource".
Afterwards, you can follow the below steps to create the ripple effect related class:
1. Right click the project SL4CustomEffectLib, select "Add|New Item...", and add a common C# class file, naming it Ripple.cs.
2. Copy the previously-generated file C# class file in Shazzam into the file Ripple.cs. And then, make a little modification with it, as follows:
In the above code, compared with the Shazzam version, we mainly made modification at three points: get rid of the readonly constraint of all the dependency properties; change the .ps file related Uri path; set all the public properties to virtual. That's all.
readonly
OK, according to the process creating the ripple effect class, we can easily create other interesting effect classes and related .ps files.
Still, go to the path C:\Documents and Settings\Administrator\Local Settings\Application Data\Shazzam\GeneratedShaders. Then, copy the rest files BandedSwirl.ps, GrowablePoissonDisk.ps, Magnify.ps, Pinch.ps, and ZoomBlur.ps to the sub folder ShaderSource. And also, in their Properties dialog box, change the type from "Build Action" to "Resource".
Next, inside the project SL4CustomEffectLib create other related classes files corresponding to the .ps files, i.e. BandedSwirl.cs, GrowablePoissonDisk.cs, Magnify.cs, Pinch.cs, and ZoomBlur.cs. Obviously, these classes related C# codes are nearly the same as their Shazzam versions except for a little modification like that done in the class Ripple. For details, you can refer to the source code yourselves. Here we are to neglect them.
Finally, build the library SL4CustomEffectLib. With nothing wrong, you will have gotten a custom effect library ready to be used in the subsequent Silverlight game application.
To use the above class library in the Silverlight application is very simple: just add a reference to the library related assembly, then refer to the appropriate namespace, and last in the behind codes (in the simple case you can also use in the XAML markups) use the effect components.
Since we are going to use the custom effects to create animations rather than enhance images statically we should, under most cases, handle the effect components dynamically in the behind code.
Please follow the below steps.
1. Add reference to the above generated assembly SL4CustomEffectLib.dll and add related using sentences:
2. Declare global variables:
3. Initialize the application:
Herein, the method changeEquip takes the responsibility of initializing the sprite. This method is a great thing. However, to save space, we are not to delve into it; you can study it by downloading the source code.
changeEquip
4. Achieve the animation effect. This functionality is achieved in the method BeginShaderAnimation.
BeginShaderAnimation
To fully comprehend the above code, you should first have a better understanding with the Storyboard class. Then by digging further, you can easily find that the crucial code lies in the following one sentence:
By invoking the method SetTarget of the story board, the animation is bound to the target - the specified shader effect component. Then, the animation begins.
SetTarget
5. Change the animation effect. This functionality is achieved in the event handler ComboBox_SelectionChanged.
ComboBox_SelectionChanged
In the running time, by clicking the ComboBox that contains various kinds of custom effect names we can change the custom animation effects that are applied to the sprite. Up to now, we've accomplished our final target – dynamically altering the effect by changing their properties and then playing related animations.
There is so much for the theory. Let's take a rest to enjoy the final snapshots.
Figure 2 illustrates the initial state of the sample game. In there, no shader animation has been applied upon the sprite. Note here you can move the sprite by clicking any place at the scene.
Figure 3 gives the ripple animation effect that is applied upon the sprite.
Figure 4 gives the magnify animation effect that is applied upon the sprite.
Figure 5 gives the banded swirl animation effect that is applied upon the sprite.
OK, for the rest animation effect, you can try them yourselves.
In this article, we've finally achieved our initial target of using custom effect to create splendid Silverlight 4 animations. In fact, because the Effect concept has been introduced since Silverlight 3 you can of course implement them in Silverlight 3 (you need to copy all the related code into your Silverlight 3 environment). Anyway, a true conclusion is the Effect component provides another miraculous kind of animation to enhance your Silverlight based game applications!
It is noteworthy that even the newest Silverlight 4.0 version supports only pixel rendering that is still limited to software-based algorithms rather than the hardware-supported GPU (video card) based ones. What's more, for the time being it can not achieve the HLSL render on vertices. Though, these capacities have been largely able to meet our current needs. We can implement HLSL-based animation rendering effects by simply coding. With these and together with the proper imaging we can easily achieve the results of commonly-seen environmental effects, such as halo, rain and snow, clouds, lightning, ice and other magic effects such as explosions, laser, crystal using no more than tens of KB of storage space. Till now, we should have enough reasons to say that the future of Silverlight will become better and better. | http://dotnetslackers.com/articles/silverlight/Using-Custom-Effect-to-Create-Silverlight-4-Animation.aspx | crawl-003 | refinedweb | 1,773 | 56.05 |
Intro: Tracking Acceleration Variations With Raspberry Pi and MMA7455 Using Python
I didn’t trip, I was testing gravity. It still works…
A representation of an accelerating space shuttle clarified that a clock at the highest point of the shuttle would pick speedier than one at the base because of gravitational time expansion. Some contended that accelerating on board the shuttle would be the same for both clocks, so they ought to tick at the same rate. Give some thought to it.
Thoughts, motivation, and even guideline can originate from anyplace—however when your attention is on innovation, it gets the contribution from the individuals who concentrate on that point. Raspberry Pi, the mini, single board Linux PC, offers unique undertakings and master counsel on arranging, programming, and electronics ventures. Close by being Raspberry Pi and devices tutorial makers, we get a kick out of the chance to program and tinker and make astonishing things with Computer Science and Electronics squash up. We as of late had the joy of taking a shot at a task utilizing an accelerometer and the thoughts behind what you could do with this gadget are truly cool. So in this task, we will incorporate MMA7455, a 3-axis Digital accelerometer sensor, to measure acceleration in 3 dimensions, X, Y, and Z, with the Raspberry Pi using Python.Let’s see if it pays off.
Step 1: Hardware We Require
We know how troublesome it can be to attempt and take after without knowing which parts to get, where to arrange from, and how much everything will cost in advance. So we've done all that work for you. Once you have the parts all squared away it ought to be a snap to do this task. Take after the going with to get a complete parts list.. 3-Axis accelerometer, MMA7455
Produced by Freescale Semiconductor, Inc., the MMA7455 3-Axis Digital Accelerometer is a low power, a smaller scale machined sensor fit for measuring acceleration up along its X, Y, and Z axis. We obtained this sensor from DCUBE Store
4. Connecting Cable
We acquired the I2C Connecting cable from DCUBE Store
5. Micro USB cable
The slightest entangled, however, most stringent regarding power necessity is the Raspberry Pi! The most prescribed and least demanding approach to managing the strategy is by the utilization of the Micro USB cable. A more advanced and specialized path is to give power specifically by means of GPIO or USB ports.
6. Networking Support
Get your Raspberry Pi associated with an Ethernet (LAN) cable and interface it to your home particularly to a Screen or TV with an HDMI cable. Elective, you can use SSH to establish with your Raspberry Pi from a Linux PC or Mac from the terminal. Likewise, PuTTY, a free and open-source terminal emulator sounds like a smart thought.
Step 2: Connecting the Hardware
Make the circuit as indicated by the schematic showed up. In the schematic, you will see the connections of various electronics components, connecting wires, power cables, and I2C sensor.
Raspberry Pi and I2C Shield Connection
As a matter of first importance take the Raspberry Pi and spot the I2C Shield on it. Press the Shield nicely over the GPIO pins of Pi and we are finished with this progression as easy recommend should reliably take after the Ground (GND) connection between the output of one device and the input of another device.
Internet Access is Key
To make our endeavor a win, we require an Internet connection for our Raspberry Pi. For this, you have alternatives like interfacing an Ethernet (LAN) join with the home network. Also, as an alternative, a satisfying course is to use a WiFi USB connector. By and large representing this, you require a driver to make it work. So incline toward the one with Linux in the delineation.
Power Supply
Plug in the Micro USB cable into the power jack of the Raspberry Pi. Punch up and we are ready.
Connection to Screen
We can have the HDMI cable connected to another Monitor/TV. see the Python Code for the Raspberry Pi and MMA7455 Sensor in our GithubRepository.
Before continuing to the code, guarantee you read the standards given in the Readme chronicle and Set up your Raspberry Pi as indicated by it. It will simply relief for a minute to do in light of current circumstances.. # MMA7455L # This code is designed to work with the MMA7455L_I2CS I2C Mini Module available from dcubestore.com #
import smbus import time
# Get I2C bus bus = smbus.SMBus(1)
# MMA7455L address, 0x1D(16) # Select mode control register, 0x16(22) # 0x01(01) Measurement Mode, +/- 8g bus.write_byte_data(0x1D, 0x16, 0x01)
time.sleep(0.5)
# MMA7455L address, 0x1D(16) # Read data back from 0x00(00), 6 bytes # X-Axis LSB, X-Axis MSB, Y-Axis LSB, Y-Axis MSB, Z-Axis LSB, Z-Axis MSB data=bus.read_i2c_block_data(0x1D, 0x00, 6)
# Convert the data to 10-bits xAccl = (data[1] & 0x03) * 256 + data [0] if xAccl > 511 : xAccl -= 1024 yAccl = (data[3] & 0x03) * 256 + data [2] if yAccl > 511 : yAccl -= 1024 zAccl = (data[5] & 0x03) * 256 + data [4] yield on Screen. Taking after a few minutes, it will display each one of the parameters. In the wake of ensuring that everything works easily, you can utilize this wander each day or make this wander a little part of a much more prominent task. Whatever your needs you now have one more contraption in your gathering.
Step 5: Applications and Features
The MMA7455, manufactured by Freescale Semiconductor, a low-power high-performance 3-axis digital accelerometer can be used for Sensor Data Changes, Product Orientation, and Gesture Detection. It's perfect for applications such as Mobile Phone/ PMP/PDA: Orientation Detection (Portrait/Landscape), Image Stability, Text Scroll, Motion Dialing, Tap to Mute, Laptop PC: Anti-Theft, Gaming: Motion Detection, Auto-Wake/Sleep For Low Power Consumption and Digital Still Camera: Image Stability.
Step 6: Conclusion
If you've been contemplating to explore the universe of the Raspberry Pi and I2C sensors, then you can shock yourself by making used of the hardware basics, coding, arranging, authoritative, etc. When you're attempting to be more creative in your little venture, it never damages to swing to outside sources. Gravimeter Prototype to measuring the Local Gravitational Field of the Earth with MMA7455 and Raspberry Pi using Python. In the above venture, we have utilized fundamental computations. The basic principle of design is to measure very tiny fractional changes within the Earth's gravity of 1 g. So you could utilize this sensor in various ways you can consider. The algorithm is to measure the rate of change of the vertical gravity vector in all three perpendicular directions giving rise to a gravity gradient tensor. It can be deduced by differencing the value of gravity at two points separated by a small vertical distance, l, and dividing by this distance. We will attempt to make a working rendition of this prototype sooner rather than later, the configuration, the code, and modeling works for structure borne noise and vibration analysis. We believe all of you like it!
For your solace, we have an enchanting video on YouTubewhich may help your examination. Trust this endeavor routes further investigation. If opportunity doesn’t knock, build a door.
Discussions | https://www.instructables.com/id/Tracking-Acceleration-Variations-With-Raspberry-Pi/ | CC-MAIN-2018-39 | refinedweb | 1,221 | 52.6 |
Issue 31 | Dec 2010
Blender learning made easy
Physics of Circular Motion A World of Rotations BioBlender: Blender for Biologists Computer Simulation and Modeling of Liquid Droplets COVERART Virus -by Adam Auksel
EDITOR Gaurav Nawani
gaurav@blenderart.org
CONTENTS
2
MANAGING EDITOR Sandra Gilbert sandra@blenderart.org WEBSITE Nam Pham
nam@blenderart.org
Physics of Circular Motion
5
A World of Rotations
7
Microorganismal Worlds
22
BioBlender: Blender for Biologists
27
The Transporters
24
Educational Science and Engineering videos
33
Computer Sim. and Modeling of Liquid Droplets
36
DESIGNER/LAYOUTING Gaurav Nawani/Nikhil Rawat"
EDITORIAL
3 The open nature of Blender, as well as the ease of creating python add-ons allows for complete customization of Blender to the needs of the project (e.g. Bio-Blender).
Blender provides an excellent tool set for producing simulations, visualizations and walkthroughs, as well as educational videos based on the various research projects.
Issue 31 | Dec 2010 - "Under the Microscope"
IZZY SPEAKS - Magical Electron Microscopes
I
have always been fascinated by electron microscope style images. They often display a fragile almost magical quality that is just beautiful. Over the years, blender users have come up with a number of creative methods for producing these lovely images.
“After learning how to create your own lovely new artworks, you might want to kick it up a notch and get a little motion going.”.
Creating a Microscopic Virus Effect Blendercookie.com has become a major educational resource since it's launch. Among the numerous video tutorials covering a dizzying array of topics, there is a beautiful tutorial by Jonathan Williamson on creating a Microscopic Virus.
4 us through how he created his ‘Microcosm’ scene!
Issue 31 | Dec 2010 - "Under the Microscope"
3D WORKSHOP -
Introduction
by- Adam Kalisz-
5.
Problems Fortunately, since Blender can save the sound into the final video due to its sequencer, there was only one big problem: The text animation.
Issue 31 | Dec 2010 - "Under the Microscope"
3D WORKSHOP -
6
Physics of Circular Motion
I decided to do it in After Effects, which turned out to be a very time-consuming process. After Effects doesn't support the audio playback in real time, at least not until CS3. So I had to switch on the display of seconds in the timeline in Blender and tweak all of the fade in and fade out animations in After Effects synchronously while scrubbing through the animation in Blender. Hopefully in future, Blender will get a sophisticated tool to add some text in the compositor to avoid third party software, which brings more trouble than it actually helps, at least as far as synchronisation is concerned.
You can watch the animation here:
Conclusion To conclude, the development of this video was a rather simple task. Blender was a great aid with its built-in animation features allowing for a dynamic, customisable animation process. I really appreciate Blender’s big arsenal of tools and the organised graphical user interface. It's an indispensable Open Source graphics suite that everybody should use and support. I'm very enthusiastic about it and founded the first Blender user group in Nuremberg with some other guys and I'm trying to establish a quality German video tutorial website to increase the number of Blender users in Germany. Thanks to the Blender Foundation and all of the developers!!!
Issue 31 | Dec 2010 - "Under the Microscope"
3D WORKSHOP -
Introduction What is it with rotations that makes them so frightening? Actually rotations are very useful, and sometimes absolutely necessary. Imagine a world without rotations... Tire manufacturers will agree with me... Indeed, of the three kinds of transformations (translation, rotation and scaling), rotations are by far the most complex. Let's see why.
by- Pep Ribal
Take the Blender default scene. Activate the Transform panel (hotkey N). Make sure the Translate manipulator is on, and Transform Orientation is set to Global. Now move the default cube along the X axis using the manipulator (red arrow). Take a look at the Transform panel as you drag the cube. You'll see the Location X value change as you do, while the other values remain unchanged. OK, drop the cube where you wish. Now do the same along the Y axis, and you'll see Location Y change as you drag. Once more, the rest of the values remain unchanged. Finally, you can see the same thing happens with Z axis. Then you can change the 3D manipulator to Scale. Keep trying with the three axes, and you'll realize that each modification affects only its own axis value (Scale values). It also changes the Dimensions values, but these are not relevant, as they refer to the final dimensions of the mesh, not to the transform properties of the object.
Rotating an object First of all, a brief description of the Transform Orientations available for the 3D manipulators in Blend-
7
A world of rotations er. View has a set of axes aligned with the viewport direction, Normal is aligned with the normal of the actual object data selection (like mesh faces) in Edit Mode, and it's equivalent to Local orientation in Object Mode, Local is aligned with the object local coordinate system, and Global is aligned to the world coordinate system. We will see later what Gimbal means. Once that’s said, let's start the show. First make sure XYZ Euler is selected in the Transform panel. Try now with the Rotate manipulator, with Global orientation. Drag around the Z axis (blue ring). You can also use hotkey R, and then Z for rotation around the Z global axis. You can see the Rotation Z value change as you rotate. Drop it at will. Now rotate around any of the other two axes... What happens? All three rotation values (X, Y and Z) change as you drag... We have just discovered that rotation around one axis affects the value of the other two. Let's go deeper into this. Open Figure 1. Initial state the provided file 'RotationsWorld.blend'. There you have three simple airplanes (fig 1). We will use the Rotate manipulator to perform three rotations on them: 120º around the X global axis, 60º around Y, and 45º around Z. But we will change the order of those rotations in each object.
Issue 31 | Dec 2010 - "Under the Microscope"
3D WORKSHOP -
8
A world of rotations
Bear in mind that positive angles mean counterclockwise rotations. Start with 'PlaneA'. Use the manipulator to rotate X axis first. Check the amount of rotation in the 3D view header, not in the Transform panel. Use the Ctrl key to round up the rotated value to 120º as you drag. If you use hotkeys R and then X, you can also enter 120 with the keyboard. Next, rotate 60º around Y axis and finally 45º around Z axis. Now perform the same rotations on 'PlaneB' but in this order: first 60º around Y, then 120º around X, and last, 45º around Z. When done, go for 'Plane C', using a new order: 45º around Z, 120º around X, and 60º around Y (figure 2). Remember to always check the rotation amount in the 3D view header only.
Figure 3. After rotations around local axes.
around X local axis for instance, you can also press R, X, X. In translation and scaling we can just enter the values we wish into the Transform panel manually, as there is only one way to interpret their meanings. But as we have just seen, with rotations, entering the values X=120º, Y=60º, Z=45º in the slider controls might not yield the desired result. If we were looking for the orientation of 'PlaneA', that would have been OK. But if we wanted for instance, the rotation of any of the other two, that wouldn't have done the trick.
We need a rotation system with a special set of axes that lets us forget about the order, so that we can type the three rotation angles directly in the Transform panFigure 2. After rotations around global axes. el, or use a manipulator so that each ring affects only one axis value. And that's exactly what Blender does. OK, what do we have now? Three airplanes with a com- It's not using the global or local axes, as you have seen pletely different orientation in space. If you take a look by the strange numbers you got in the rotation values at the rotation values of the three planes, only 'PlaneA' of the objects. So what is that wonderful system that keeps the values of the applied rotation (X=120º, Y=60º, Blender uses internally? Z=45º), while the others hold very strange numbers. You can see that the order of rotation is important. Even if we use Local mode for rotation manipulators, the problem doesn't improve (figure 3). For rotation
Issue 31 | Dec 2010 - "Under the Microscope"
3D WORKSHOP Euler rotations
We have previously set rotation mode to XYZ Euler. That is exactly what Blender is using internally. The best way to see these type of rotations in action is to set the Transform Orientation of the 3D manipulator to Gimbal. This widget lets you see the current state of the Euler rotation transform. A gimbal is a circular gadget that spins around an axis that goes through one of its diameters. If you mount three of these one inside the other, you have a 3-axes gimbal (figure 4). This kind of device is used in gyroscopes, for instance.
Figure 4. A gimbal gadget.
Figure 5. Physical gimbal rotation.
The Blender Gimbal rotation manipulator closely resembles one of these gadgets, as in a 3-axes gimbal these move in relation to the others. However, Figure 6. Blender gimbal rotation. Blender gimbal is a bit different from that, the main difference being the axis of rotation of the rings
9
A world of rotations as you can see in figures 5 and 6. While the physical gimbal rotates around one of its diameters (figure 5), each ring of the Blender gimbal rotates Figure 7. After Euler rotations. around an axis that goes through the centre of the ring and is perpendicular to all of its diameters (figure 6). So, let's start playing with the gimbal. Take any object with 0 rotation. Now activate the Gimbal manipulator. Set the rotation mode to XYZ Euler (though it would work with any other Euler type). And now start rotating the axes individually. You can repeat the experiment of the three airplanes, and you'll get the results of figure 7. See what happens in the Transform panel. Now, each gimbal axis is directly related to the corresponding rotation value of the object. What does it mean? That order of rotation doesn't matter. Maybe you have realized that all three airplanes end up in the same position using the Euler gimbal. If so, you will have seen that all three airplanes have the same rotation values in the Transform panel. In other words, you can enter the desired rotation angles numerically in the slider controls. So, what's the difference between a local or global rotation system and the gimbal system? And why are there six different types of Eulers? And why am I asking all this if I know the answer...?
Issue 31 | Dec 2010 - "Under the Microscope"
3D WORKSHOP -
As you can see in a physical 3-axis gimbal gadget, there are three axes configured in such a way that they form a hierarchy. When we rotate the outermost ring, the one on top of the hierarchy, we are actually rotating the entire system around the axis of that ring. Rotating the middle ring, we can see the innermost ring rotate as well. If we rotate the innermost ring, only that ring moves. With Blender gimbals the same thing happens. So we must choose one axis to be on top of the hierarchy, one to be in the middle, and the last one to be at the bottom. Let's say we want the Z axis to be on top; the X axis to be its child; and finally Y to be at the bottom. In other words, Z axis will be the parent of X, and X the parent of Y. In order from bottom to top, we have Y, X, and Z. That forms a YXZ gimbal, used for YXZ Euler rotations. There are six different combinations of hierarchies with the three axis X, Y and Z, and therefore, six different kinds of gimbals, each one of which is associated to its corresponding Euler rotation system. With Eulers, it's important to remember that the axis written first is the one at the bottom of the hierarchy, while the last one on the right is the one on top of it. Thus, in a XYZ Euler, the Z axis in on top, while X is at the bottom. Blender uses two things to calculate the Euler rotation of an object. First, the values of the three rotations around each of the three axes (X,Y and Z); and second, what type of Euler hierarchy are these values based on. For instance it's not the same to use a XYZ or a ZXY hierarchy. You can check this by taking the rotated airplane. Don't change the rotation values, just change to any of the other five Euler modes. You will immediately see the final rotation changes.
10
A world of rotations When Blender has calculated the rotation of the object (using the Eulers), it stores that rotation in the object matrix, which is basically a 4x4 matrix of numbers that keep track of the full transformation state of the object: its location, rotation and size. When you are just modelling (not animating), it doesn't make any difference which rotation mode you are using, as they will all end up in the same place internally, i.e. the matrix. No use will be made of the Euler values. However when animating, Blender actually uses those Euler values to interpolate rotations, as we will see later. Any of the Euler types has the advantages of isolating the effect of each axis, though they yield different rotations. Not a big deal. It's just a question of experimenting with them, and see how each type of gimbal behaves. OK, now we have found a magical rotation system that will make this world a better place for you and me... So, why do we need other rotation systems?
Euler rotation problems If we want to define any orientation, or we want to rotate a face or a group of vertices, we can get them to rotate wherever we want to use any of the Euler modes. But when it comes to animation, we can run into some trouble in certain circumstances.When you want to animate a rotating object you have to use the same system from one keyframe to the other. You cannot start defining a XYZ Euler orientation for one keyframe, and then a YZX Euler for the next one. Why? Because Blender interpolates between two rotations using the values of the specific system used (Euler or any other); it doesn't use the rotation stored in the object matrix. So if you use a different system in two consecutive keyframes, there is no way to calculate the interpolation values between them.
Issue 31 | Dec 2010 - "Under the Microscope"
3D WORKSHOP -
Let's go for another experiment. Open the file 'RotationsWorld.blend'. Make sure that the airplanes don't have any rotation applied, then go to frame 1 and select ZXY Euler rotation mode. We will focus on one of the airplanes as we are going to make it do some aerobatics. You can delete the other two if you want. The first thing to do will be to set the Transform Orientation to Local, so that we can manipulate our airplane easily. Bear in mind however, that even if the manipulator is set to Local, Blender is using ZXY Euler internally to compute rotations and to interpolateangles, so what Blender is actually using is the ZXY gimbal. Now insert a rotation keyframe in frame 1, with the airFigure 8. Keyframe 1 (local axes shown). plane in rest position (no rotations at all) as shown in figure 8. Now let's move to frame 25 using the arrow keys. In this frame, the pilot has astonished the audience of the airshow by setting the airplane in vertical Figure 9. Keyframe 2 (local axes shown). position. Rotate 90ยบ around the X local axis (remember that positive angles mean counterclockwise rotations), and set a new rotation key-
11
A world of rotations frame (figure 9). Now the nose of the plane is pointing up. You can switch from Local to Global mode to see how the local axis has change. The "up" side of the plane is not the same as the "up" side of the world. OK. But the pilot, who is a really bold guy, hasn't had enough. He wants to make a nice turn to his right while keeping the aircraft nose up. So now, get back to Local mode and go to frame 50. Then use the manipuFigure 10. Keyframe 3 (local axes shown). lator to rotate the airplane 90ยบ around the Y local axis. Set a new rotation keyframe (figure 10). Now rewind to frame 1, and check the full animation using the arrow keys back and forth. You'll see that from keyframe 1 (frame 1) to keyframe 2 (frame 25) everything works as expected. But something weird happens between keyframe 2 and keyframe 3 (frame 50). We expect a right turn, but the airplane nose actually makes a weird movement. To check what the problem was, set Gimbal orientation, rewind to frame 1, and check the animation again. As you approach frame 25, the Z rotation axis of the manipulator gets closer and closer to Y axis. At frame 25, the Z axis is completely aligned with the Y axis, as shown in figure 11.
Issue 31 | Dec 2010 - "Under the Microscope"
3D WORKSHOP We have just lost one axis of movement. This phenomenon is known as the Gimbal lock, and it's a very typical source of headache for animators.
Figure 11. Keyframe 2 (gimbal shown) You may have found that your Where is the Z axis...? Perfect gimbal lock animated rotations behave in a weird manner (it has indeed happened to me), and there is no way to fix them no matter what you do to avoid the problem. Well, it's more than likely that you have been a victim of the hideous gimbal lock (ock... ock... ock...).
So, back to our airplane. We are nose up, and we have lost an axis to perform a right turn. For doing that kind of turn, we would need to get our Z axis back. Well, if you check the interpolation values of the animation between keys 2 and 3, you will realize that this is exactly what Blender does. While the Y axis rotates the commanded 90º, it also undoes the initial 90º rotation in the X axis (which caused the gimbal lock), and rotates 90º around the Z axis so that the final position can be reached. The end result is the weird movement of the airplane. Unfortunately, that caused the audience to go back home, and the airshow resulted in a complete failure. Let's try to see when gimbal lock occurs, taking into account the type of Euler rotation we choose, so that we can avoid it.
12
A world of rotations We know there are three rotation axes in a gimbal. When all three axes are perpendicular between them, all is fine. However, as one of the axes starts to move towards another, they lose their relative perpendicularity, meaning that we are starting to lose some degree of freedom of movement. The problem reaches its maximum when two axes become completely aligned (parallel), that is, when we completely lose one of the three axes. Let's take for instance a XYZ Euler gimbal. What happens when the axis at the bottom of the hierarchy (in this case, X) rotates? Nothing important actually. All three axes keep perpendicular whatever rotation you apply to the X axis, which just keeps spinning around itself. What if we rotate the topmost axis in the hierarchy (Z)? Then all the axes in the system rotate with it, keeping their relative positions, without losing freedom of movement like before. The problem comes when we rotate the axis in the middle (Y). Its effect is to get its child axis (X) closer to its parent axis (Z). That said, one important thing to remember is that the middle axis in the Euler hierarchy is crucial, and we need to keep an eye on it most of all. Now that we know when gimbal lock is reached, we can see how to avoid it. So, if you need an object to perform an animation with a series of rotations in which its Z axis will reach angles close to 90º (or equivalent angles like -90º or 270º), we will avoid the use of Euler rotation systems XZY and YZX, as in these, Z axis lays in the middle of the hierarchy. However, we could still use XZY Euler even if the Z axis reaches 90º, but only if in those particular moments we don't need the X axis to rotate. We need to make sure that as soon as we need rotations around the X axis, Z rotation is far from 90º (and equivalents).
Issue 31 | Dec 2010 - "Under the Microscope"
3D WORKSHOP -
13
A world of rotations
If you want to perform the former aerobatics, you can choose a different Euler system. For instance, you can repeat the experiment with a XYZ Euler system, and you will see everything working fine. When you are done with it, you can take a look at the reFigure 12. Euler rotations: animation sulting animacurves. tion curves in the Graph editor (figure 12). See how intuitive those f-curves are. You can see a 90ยบ rotation around the X axis take place between frame 1 and 25, and another 90ยบ rotation around the Z axis between frames 25 and 50.
Axis Angle rotations If you set the rotation mode to Axis Angle, you will notice that you now have 4 values for defining rotations: X, Y, Z and W. With Euler we had 3 values representing a rotation angle around each axis. With axis-angle we define two things: one axis and one angle. The axis is defined by X, Y and Z; the rotation angle, by W. You can see that in figure 13.
Figure 13. Axis-angle rotation.
The effective rotation is done around the axis (X,Y,Z). This axis is an infinite line that goes through the centre of the object and the point defined by (X,Y,Z) in the local coordinate system of the object. There are many ways to define the same axis. The most important thing is the ratio between these three values. Thus, (1,0.5,3) is the same axis as (2,1,6).
This is one of the big advantages of the Euler rotations as you can directly manipulate the rotation f-curves easily, knowing that the curves are independent between them. Those three curves give you a clear picture of what's going on, even if you don't see the actual So once we have this rotation axis, all we have to do is object. to make the object rotate around it by the amount In this case all you have to do is keep an eye on the given in the W value. So if W=0, no rotation is applied, green curve (Y axis) and make sure it doesn't approach regardless of the values in X, Y and Z. Conversely, if X, Y 90ยบ when the red curve (X axis) is different than zero. and Z equal 0, no axis is defined, so once more, there will be no rotation regardless of the W value. OK, but is there any rotation system which doesn't suffer from gimbal lock...? You can easily see that the most obvious advantage of axis-angle is rotation around an arbitrary axis. This Sure there is. makes axis-angle very suitable for objects that spin constantly around the same axis. The rotation of Earth around its peculiar axis is a perfect example.
Issue 31 | Dec 2010 - "Under the Microscope"
3D WORKSHOP -
14
A world of rotations
Bear in mind that negative values matter in the axis definition, as axes have a direction. Two axes defined with opposite direction (simply changing the signs of X, Y, and Z) will have an opposite direction, and so, an inverse rotation. So changing the sign of all four values (axis and angle) has no effect on the final orientation (except for animation purposes). One thing to note is that angle values in W are expressed in radians, not degrees. In that case, if you want to perform a 90ยบ turn, you must rotate /2 in the W slider control. It's possible to type pi directly in the sliders as Blender knows its value (around 3.141592). You just need to know that 360 degrees are equivalent to 2x radians, so that you can calculate other angles. You can directly enter values like pi/2, 3*pi/2, 2*pi, pi, etc. Anyway, manipulators and hotkey R always use degrees. OK, now that we know what axis-angle is, it's time to play with it.
quite big, as it's going from one line to a perpendicular one (90ยบ away). And second, we went from angle 0 to angle /2 (90ยบ). So two things were moving the same amount: the axis and the angle. Once again, let's go back to frame 1. Instead of axis (0,1,0), let's enter (1,0,0). It makes no difference, as the rotation angle value W is 0. Now update the keyframe and see how it goes. Everything runs quite fine now. The second half is not perfect, but quite acceptable. Why isn't it perfect? It's important that between two consecutive keyframes most of the movement is taken by only one of the components: either the axis or the angle. In our initial airplane movement, both components were moving the same amount (90ยบ), and that created a turbulent movement that made the pilot sick. Then we completely fixed the problem by keeping the axis still between keyframes. In the second half of the animation, the angle moves more than the axis, which is good, but both of them move.
You can repeat the aerobatics experiment from scratch. Set rotation mode to Axis Angle, and redo the three key- If you want absolutely perfect movements, just move only one of the two components between two consecuframes using the 3D manipulator of choice, or hotkey tive keyframes (usually the angle). Sometimes this is R. Play the animation back. difficult, so the best thing in those situations is to start What happens? Nothing good really. considering any other rotation system.
Axis-angle rotation problems As mentioned before, axis-angle works well for rotations around a fixed axis. So our aerobatics is not the best example to use. Let's see why it didn't work out smoothly (by now the pilot is depressed and already thinking about retiring).
In axis-angle, rotation manipulators have to be used with special care, as they can lead to unwanted results. A simple rotation using the manipulator can lead, for instance, to the flipping (sign change) of the axis. If the initial axis is (0,1,0) and the final one results in (0,-1,0), that will produce, most probably, undesired effects, as we are changing its direction, which means a 180ยบ rotation of the axis (not around the axis).
What happened here is that from keyframe 1 to keyframe 2, two things were interpolated. First we went from axis (0,1,0) to axis (1,0,0). This axis movement is
Issue 31 | Dec 2010 - "Under the Microscope"
3D WORKSHOP -
Moreover, in axis-angle you can define axes and angles using any value, as big or small as you want. Rotations can consist of several spins around the axis, that is, Nx2 , where N is any number of revolutions, positive or negative. However, when using rotation manipulators you will always get axis values up to 1.0, and angles up to 2 . This, along with the possible axis flip, are good reasons to prefer editing rotation values directly on the Transform panel over using rotation manipulators or hotkey R. To summarize, axis-angle is good for rotations around an arbitrary axis, as long as that axis doesn't keep moving, or at least its movement is really controlled. Remember that you can quickly move the axis to any value when rotation angle is 0 so that moment can be used to switch from one axis to another. Now that you are done with the new airshow animation using axis-angle, take a look at the resulting animation curves (figure 14)... What can you see?
Wouldn't it be nice, however, to have a rotation system which, while keeping its immunity to gimbal lock, at the same time produced perfect and smooth rotation interpolations, and not just fixed-axis ones? Yeah, that would be awesome...!
Quaternion rotations Quaternions were discovered by the Irish mathematician Sir William Rowan Hamilton. According to the Wikipedia, "the breakthrough finally came on Monday 16 October 1843 in Dublin, when Hamilton was on his way to the Royal Irish Academy where he was going to preside at a council meeting. While walking along the towpath of the Royal Canal with his wife, the."
Figure 14. Axis-angle rotations: animation curves.
Yeah, right. Just curves. It's actually very difficult to know how they translate visually. While with Eulers we could grasp the meaning of the F-curves, now it's difficult to tell how the object is rotated.
15
A world of rotations
This reminds me of the day I was walking along the streets of my home town, and it came to my mind a recipe of beans with mushroom sauce. Immediately I took my chisel and hammer (I always bring them in my pockets, just in case). I couldn't resist carving the recipe into a stone of my neighbour's wall... Surprisingly, Wikipedia didn't mention that. My neighbour however, did mention it to his lawyer (he is allergic to mushrooms). Back to quaternions, you can just forget about the formulae that Hamilton carved. Actually, you can forget about most of the maths around quaternions (unless you are a mathematician, a 3D software developer, or just very interested in Algebra).
Issue 31 | Dec 2010 - "Under the Microscope"
3D WORKSHOP -
16
A world of rotations
A quaternion is a vector, i. e. a set of numbers, in a specific 4-dimensional space. In this case, this vector has four numbers. These four numbers are called X, Y, Z and W in Blender. Just check it in the Transform panel, setting Quaternion (WXYZ) rotation mode... Doesn't it remind you of something? Of course, axis-angle has the same component names. So are these values related somehow to the corresponding axis-angle values? Absolutely. Let's see the differences, though.
that axis didn't have to be normalized. Bear in mind though, that even in Axis Angle mode the rotation manipulator and the R hotkey also normalize the (X,Y,Z) vector. Another question arises here; what units is W using to describe an angle, as it can only range from -1.0 to 1.0? To understand the correspondence between the axisangle W value and the quaternion W value, we will call the first AW, and the second QW. Its relation is as follows:
QW = cos (AW / 2) In the first place, a quaternion can represent a rotation only if it is normalized, which means that the length If you know what a cosine function is, great. If not, (or modulus) of the vector must be 1 (this is called a unit vector). What does it mean in practice? Mathemat- don't worry the slightest bit. The only thing you should be aware of is how quaternion W behaves in relation to ically: axis-angle W (the actual angle of rotation around the axis). The following table has a few examples that W2 + X2 + Y2 + Z2 = 1 might help you: This formula is not too useful for 3D artists. However it might help to understand how these four values relate Quaternion W Angle in radians Angle in degrees to each other. In the first place, none can have an absolute value greater than 1. Second, when one value in1.000 0 0 creases (in absolute value), the rest decrease, and vice versa. Absolute value means to forget about the sign, 0.707 90 /2 i.e., the absolute value of -0.75 is 0.75. So, all four values range from -1.0 to 1.0. 0.000 180 Now we know how quaternion values relate and affect -0.707 270 3x /2 each other. But what do they actually mean? Do they have the same meaning as in axis-angle? Well, actually -1.000 360 2x they do. In a quaternion, X, Y and Z are still defining the same axis of rotation that axis-angle does, and W is defining an angle of rotation around that axis. You could think after seeing this table, that if a quaternion with W=1 is equivalent to a 0ยบ angle, and There is one unique (normalized) way to define a given with W=0 it represents 180ยบ, then 90ยบ should correrotation using a quaternion. On the other hand, in axis- spond to W=0.5. Actually it doesn't work like this, as angle you could define the same axis using many differ- you can see in the table, as the cosine doesn't behave ent combination of values, as the vector representing like a linear function.
Issue 31 | Dec 2010 - "Under the Microscope"
3D WORKSHOP -
It actually behaves in a more "circular" way, which is much more suitable for rotations. Observing the table, you can think there is no way to use a quaternion to define a rotation beyond 360º, or below 0º. You think well. This is a small drawback of quaternions, but later we will see how to overcome it. So let's try to see how a quaternion works. Open 'RotationsWorld.blend' once more and select one of the airplanes. Set rotation mode to Quaternion (WXYZ). Initially, W has value 1.0, which means a rotation of 0º, so we don't need any rotation axis. It doesn't matter then if X, Y and Z are all zero. Now increase the value of X slightly, clicking on the right triangle in the X slider of the Transform panel. See what happens. We have just defined an axis; a point in the direction of the X axis defines the X axis itself. If we keep increasing the X value, we are still defining the same axis, however W decreases. The bigger the value of X, the smaller the value of W. In other words, we are rotating around X axis, as W keeps decreasing towards 0, i. e. towards 180º (see the table). When X reaches 1, W is 0, which means a rotation of 180º around the X axis. So the effect of increasing the value of X is to bring the object to this position; upside down around the X axis. Now clear the rotation. You can repeat the same experiment with Y and Z values. As you will see, all of them try to bring the object upside down around their own axis. On the other hand, what is the effect of making W bigger? Obviously to take the object away from those upside down positions, and preserve the original position with no rotations at all. The balance between the four values is what defines the final rotation.
17
A world of rotations If you repeat the experiment using negative values, you will see the same effect but in the opposite rotation direction. Take for instance the experiment around the X axis, but this time taking it slowly towards -1.0. We are defining the same rotation values (W is still positive) but applied around an axis that runs along the X axis in the opposite direction. This is similar to what happened with axis-angle. In this case also, changing the sign of all four values has no effect on the final rotation. And with quaternions, it doesn't have an effect on animation interpolations either. Even if in theory W cannot hold a number corresponding to a negative angle, changing its sign works in a similar fashion. For instance, W=0.707 represents a 90º rotation, while W=-0.707 is 270º, which is in fact equivalent to -90º (270º=360º-90º). Now that we know what a quaternion is and how it works, we are ready to repeat the airshow. Set the three keyframes once more using Quaternion (WXYZ) mode. What happens now? An incredible aerobatic manoeuvre. The audience is shouting, jumping, hugging, laughing...! The best show ever! And all thanks to Sir Hamilton and his magic chisel... What about quaternion animation curves? Take a look at them (figure 15)... What do you think? Yeah, awful. Forget about animating those evil f-curves... And there is more. In quaternion f-curves, Linear Extrapolation doesn't work well. Since the quaternion must be normalized, its values can't keep growing forever. The relation between the four values ends up reaching a normalized balance, and so the rotation slowly stops at that point. You have seen the main advantage of quaternions: its absolute smoothness and perfection in interpolations,
Issue 31 | Dec 2010 - "Under the Microscope"
3D WORKSHOP -
18
A world of rotations
no gimbal lock, no weird movements, etc. However, we have seen a drawback: the inability to define more than one revolution, or negative angles. Let's see an example. Open a Blender scene and take any unrotated object (our good old airplane will do). Choose Quaternion (WXYZ) rotation mode. Set a keyframe in frame 1, without rotation. Go to frame 25. Rotate it 200º counterclockwise around Z axis. Set the rotation key. Check the animation. What happens? Blender actually interpolates using a clockwise rotation! Blender has chosen the shortest path between 0º and 200º, which is equivalent to -160º. Quaternions can't define revolutions (successive spins around an axis). They just define orientations in space; from 0º to 360º, where 0º is equivalent to 360º (value 1 is equivalent to -1 for X, Y, Z and W parameters). Blender always performs the shortest path rotation between one orientation and the next if you use the rotation manipulators or hotkey R. This means you can't rotate 180º or more between two keyframes using those. But you can get bigger angles by directly editing the Transform panel values. However you will never get a 360º or bigger rotation. If you want to overcome this, and make the object spin several times, you have to set intermediate keyframes between the initial and final state, so that turns between them are smaller than 360º (or -360º for clockwise rotation). If you use manipulators, turns must be less than 180º (or -180º).
Gimbals and locks
Provided that neither of these two systems use Euler gimbals, what is the meaning of the Gimbal orientation of the 3D manipulators? In Axis Angle, you Figure 15. Quaternion rotations: animawill see that Gimtion curves. bal aligns its Z component with the defined axis (X,Y,Z), so that if you rotate the manipulator blue ring (Z axis) you will be directly controlling the W value, and just the W value. However, if you want negative values, or values beyond 2 , you must edit the W value in the Transform panel. On the other hand, when using quaternions you can see that Gimbal currently has no special meaning and is equivalent to the Local orientation. Perhaps future releases of Blender will give it a special use. Regarding the lock buttons in the Transform panel, their use is to restrict rotations (and locations/scaling) to only the desired axes using the 3D manipulators or hotkey R. However, in the specific case of rotations, if you activate the 4L button, you can restrict rotations by axis-angle or quaternion component instead of axis. As they have 4 components (X, Y, Z, W), you get an extra lock. However, in the specific case of quaternions, remember that even changing just one of the components will affect the other three as the final vector must be normalized.
No, we are not going to talk about gimbal lock anymore. Just about gimbals, and component locks in either quaternion and axis-angle rotations.
Issue 31 | Dec 2010 - "Under the Microscope"
3D WORKSHOP -
19
A world of rotations
(or Yaw/Pitch/Roll) angles, used with the so called TaitBryan or cardan angles, which are a different kind of Euler rotations. But this stuff is out of the scope of this · Local or global rotation systems aren't valid for computing rotations as the order of rotation around article as Blender doesn't use them the three axes affects the final result. I hope not to have made your head rotate too much.
Summary
· Euler rotation systems use a hierarchy of rotation axes which is valid to compute rotations, as the three rotation components are independent. However they suffer from gimbal lock in certain circumstances.
Be good!
· Axis-angle doesn't suffer from gimbal lock, but its use is almost specific to revolving around a fixed axis. · Quaternion system doesn't suffer from gimbal lock, and interpolates perfectly any pair of orientations. However it can't define successive revolutions unless we insert intermediate keyframes in between. As it's a perfect way to define orientations in space, it is very suitable for bones animation. · Regarding animation curves, the Euler system is the only one that provides an easy and intuitive way to edit them.
And finally... There are a couple videos around there that might help you see rotations in action. Check the Guerrilla CG Project website (guerrillacg.org). Watch the following videos: The Rotation Problem, and Euler Rotations Explained. One warning, though: in the first of these videos there is a small mistake; whenever 'Quaternion' is mentioned, it should actually say 'Axis Angle'. There are other 3D software packages out there that use other rotation systems, like the Heading/Pitch/Bank
Issue 31 | Dec 2010 - "Under the Microscope"
MAKING OF -
Introduction
by- Sally Olle (Estelle Parnell Designs)
The most advanced online 3D virtual world to hit the market, Blue Mars features photo-realistic rendering with CryENGINE-2 by CryTek and motion-captured avatar animations. Blue Mars launched in open beta to players and developers in September 2009. In about a year, the number of completely terraformed Cities, Villages and Metropolises (the basic real-estate categories on Blue Mars), has increased tenfold and a dedicated community of users and developers has been established. As such, Blue Mars has a very attractive emerging economy. The BLU$, or Blue Dollar, is the Blue Mars currency, and it is easily redeemable via each developers’ PayPal account. I was drawn to Blue Mars by the superior graphics, and most of all by the versatility and realism of the mesh clothing, having previously been a clothing designer in Second Life. The good news is that content for Blue Mars is made in 3rd party software such as Blender, which will give existing Blender users a fantastic head start into creating content for this platform. Content is imported into Blue Mars using the Collada format. There is a freely available Blender plug-in for Collada. The Collada file is imported into the relevant Blue Mars editor
20
Blue Mars (editors exist for clothing, furniture, bodies, Cities etc) where textures, maps and specialised shaders are applied to the content, and it is packed for uploading to Blue Mars. Creators can register as developers and download the developer toolkit for free at. Creators can sell content in rented “shops”, with the shop editor enabling them to customise the shop interior to their liking. Developers can also rent vacant blocks in Cities where they can create external shop structures for their own use. These images depict the workflow for a simple retro dress from Blender to Blue Mars. For beginners, the Blue Mars editors include a number of cloth templates to help get you started, though you can create any mesh from scratch. In this case I trimmed the mesh to the desired shape, and sculpted it to fit the Blue Mars reference avatar better.
Issue 31 | Dec 2010 - "Under the Microscope"
MAKING OF -
21
Blue Mars
I exported the mesh from blender as a Collada file, and imported it into the Blue Mars cloth editor. Here I applied textures and normal maps to the different linked objects, and selected the cloth shader.
rewards are not yet great, but there is a tangible growth in the market and my personal enjoyment of creating cloth cannot be denied. I am sure there are great things to come for Blue Mars as it develops from its beta status and I certainly want to be there to watch it grow. Estelle Parnall is the avatar behind the Australian content creator Sally Olle. She has been an active developer in Blue Mars since April 2010 and owns the Blue Mars city Fashion Esplanade where she sells a variety of content.
I then packed the object for uploading and went to the developer web page to upload the packed file, and to set a description and price for the object. Once uploaded, the content enters the Blue Mars QA Process. When it is released, the designer may allocate the new item to one of their shop shelves for sale. With a small population, the financial
Issue 31 | Dec 2010 - "Under the Microscope"
3D WORKSHOP -
By - Robert J. Tiess
Microorganismal Worlds: A Quick Tutorial
Introduction
pha (transparency) setting and pursue more of a "volumetric" texture.
Blender has a virtually limitless reservoir of untapped potential as either an artistic or technical tool. Many exciting possibilities can arise when you experiment with Blender's capabilities and try different approaches to creating and rendering scenes.
How will we do this? Nested meshes: several meshes, each inside the other.
Many of you might remember my "Astrobiology" series of images posted on BlenderArtists.org. Those works were exciting and educational for me because they helped me push both my artistic and technical skills further while reaffirming to me how incredibly flexible Blender will always be as a tool for expression, imagination, and exploration. For the images I submitted to this issue of BlenderArt Magazine, I wanted to do something different. Rather than tapping the techniques I developed for those projects, my intent was to focus more on using Blender's internal texturing system to achieve interesting details and visual possibilities. At first glance Blender's procedural texture set might seem underwhelming and incapable of anything complex or intriguing. I tended to think that way in my earliest days learning Blender. Back then there were no texture nodes, no render nodes, just texture slots in materials. While the main mesh of this tutorial's image (Microlifeform 4) uses four texture slots, and while it is true we can achieve more complex and interesting results by stacking textures, we are not limiting ourselves to that very useful and flexible technique. We are also going to make use of the material's Al-
· Step one is to place a mesh in the scene. In this example I have opted for a Sphere. I reshaped, resized, and rotated the sphere in my project (defined at 1024x1024 pixels). · Next, I added an Empty object to the scene as we are going to use this Empty to help us resize the Sphere recursively (over a series of repetitions). We need to move the Empty to where the Sphere is. To do this we copy the Location and Rotation of the Sphere by first selecting the Empty, then SHIFT key + selecting the Sphere. · Having selected both objects, I pressed the CTRL and C keys to make the Copy Attributes menu appear, selecting Location. CTRL and C keys were pressed again to then copy the Rotation of the Sphere. · Next, I selected the Sphere and added an Array modifier to it. We are going to use the Object offset and use the Empty as the object the modifier uses to generate successive meshes. How many? For this example I specified a Fixed Count of 20 in the Array modifier. · After doing this, I selected the Empty object and resized it (S key) by manually entering a value of .995. If all has gone well up to this point, you should see multiple spheres within each other.
Issue 31 | Dec 2010 - "Under the Microscope"
22
3D WORKSHOP -
Microorganismal Worlds: A Quick Tutorial
The next phase involves material and textures. With the Sphere selected we add a material. The material needs ZTransp (transparency) activated. I selected an Alpha value of .500. I didn't want any specularity, so those values are minimized. In the texture slots are three Cloud textures. They have relatively small Noise sizes (ranging .300 to .500). A Stucci texture is also used in the second texture channel. Together three of these four textures are set to affect the material's Alpha setting. 路 Two textures use a Subtract blend mode, while one blend mode is set to Add. 路 Nor (bump) and Col (color) are also affected by the textures in various ways. 路 The scene is lit by three lamps: one Sun lamp (ray tracing), one Hemi (hemispheric) lamp, and one omnidirection Lamp. 路 Three World texture settings also contribute slightly to the ambiance of the scene. There is something else going on in the example: particles. They serve to establish the surrounding "cilia" of this imaginary microscopic life form. For this we only need a ring of vertices surrounding the sphere. These vertices are assigned a simple Lambert / Blinn material with a Blend texture used to fade out the tips of the emitted hair particles.
In the microlifeform4-example.blend file you will notice there is in fact only half a sphere. This was done to speed up render time. Textures and meshes used as they are here result in very long renders, so this is one way to achieve our desired result without forcing Blender to calculate mesh faces which would not, in this project, make any difference in the final outcome. You might also notice some Render Nodes. These help us use Blender to maximize the potential of the final output image. There's some defocusing for depth of field, RGB Curves, and some other nodes used to tweak the final result all within Blender. The example file is provided in hopes of encouraging you to experiment with different settings (material, textures, lighting, render nodes, etc.). Change values and see what happens as a result. In fact, I think it's not only useful but necessary to allow yourself some time to use Blender in exploratory and unpredictable ways. You can learn more about Blender's and your own creative capabilities and pleasantly surprise yourself!
Technical notes Although this project was created in Blender 2.49, it, as well as the techniques referenced in the tutorial, work in the latest Blender 2.55 beta. Render times for the project file will be lengthy even on fast computers, so patience is a must
The major particle settings are: Hair (particle system type), 222 particles, Normal value of .400, Random value of .200, Brown (brownian motion) value of 8.00, Damp (dampening) value of .800, B-Spline interpolation, and Strand Render.
Issue 31 | Dec 2010 - "Under the Microscope"
23
MAKING OF -
Introduction
By - Enrique SahagĂşn
One of the most amazing stories I heard when I studied biology was about the way the cell distributes material within itself. The same net of microtubules that sustains the structure of the animal cell serves as a railroad to transport food and building materials. But the best is yet to come. The ones responsible for this transport are a family of small, two-legged, funny proteins called Kinesins [1]. Recent studies show that they actually walk along the microtubules while carrying the materials inside big vesicles located near the top of the kinesin molecule.
First approach The goal was to fake a video in which a kinesin could be seen carrying a huge vesicle [2]. In principle I was concerned about the taste of the final images and not about being precise in a biological sense (shame on me!). It is possible to model the kinesin using the data from the PDB (Protein Databank) [3] using a script by Michael Gantenbrinker [4]. Instead, I decided to make a roughly similar model with a simple armature. The movements
24
Transporters of the kinesin were made slow to Node editor simulate both the absence of gravity and the erratic flows of fluids Once the kinesin was animated, it within the cell. was time to start working with the node editor. First of all I set a The microtubule was modelled to strong defocus filter with a variable be a kind of organic pipe and is depth of field. Then I added a noise not realistic either. The space was layer which was blurred with a blur filled with a bunch of moving bub- filter. To finish, something that in bles to simulate the cell environprinciple is not a typical microment which in reality happens to graph artifact but which works very be much denser. These bubbles well in the final scene: a lens diswere animated using Blender tortion filter with variable disperphysics. Finally, the camera was sion. animated using a shaky effect [5]. And that’s all. Although, as I have The light is made of two sun lights said, many elements in this conwith no shadows. It is important struction are not realistic, the final to realize that in micrographs, results gives a nice feeling of a livdarker areas can be mistaken for ing micrograph [6]. shadows. Depending on the technique, darkness depends on the [1] density of materials, or on the an[2] gle the faces of the objects are pointing. [3] There are mainly two kinds of textures in this project. The ones for [4] the kinesin and microtubules are [5] simple materials with no specular [6] or mirror properties. Remember that at this scale, mirroring or specularity makes no sense. Their textures are cloudy textures with slight normals. The bubbles that simulate the environment have a transparent texture with a high IOR.
Issue 31 | Dec 2010 - "Under the Microscope"
MAKING OF -
25
Transporters
Figure a) Kinesin model. b) Kinesins (the one shown is from PDB 3kin)
Issue 31 | Dec 2010 - "Under the Microscope"
ARTICLE -
R
od Cockcroft has extended the facilities at (from a Blender comic directory) to include graphic novels created with or partially with Blender and has started a new Sintel story.
By - Rod Cockcroft
26
In the forums people can collaborate with others to create or discuss stories. Links can be created from each scene in a story directly to the forum so each scene can be discussed easily. If you think you can create a better storyline than one that has already been created, a fork can be inserted to create a new storyline.
Anyone could include a story that they have completed themselves or could develop a story in a private forum with only invited members in their group. Rod has put the new Sintel story in the Mixed Copyright Stories section. If anyone would like to join in or start a new story visit
There are three forums for writing and discussing stories:-
Creative commons stories All the content in this section must be Creative Commons
Mixed copyright stories People can contribute to stories in this section and include copyright restrictions on their work while including content with a Creative Commons License.
Copyright protected stories This section is for stories that are completely copyright protected.
Issue 31 | Dec 2010 - "Under the Microscope"
ARTICLE -
Introduction
By Raluca Mihaela Andrei, Mike Pan and Monica Zoppè
27
BioBlender: Blender for Biologists
Biologists know that, if the information of life is stored and transmitted through nucleic acids (DNA and RNA), the processes that do the actual work are most of the times proteins. These are active in all aspects of life, and in the latest years we are starting to get a glimpse of how they work. Proteins are machines composed of amino acids, which are in turn small groups of atoms arranged in specific ways[1]. Scientists are obtaining more and more information on the 3D arrangement of such atoms, and are starting to understand their activity through motion.
BioBlender for Windows is available from (on Linux machines it can be used with Wine). Because of its specialized nature, it requires the installation of PyMOL[3.4] , Python 2.6 [5] and NumPy[6] , which are all provided in Installer folder from the downloaded package.
Using BioBlender to build an animation To start BioBlender, simply go to the Bin folder and launch blender.exe, then open the template.blend scene (stored in BioBlender folder).
Notice that the template file not only has an optimized user-interface layout for biologists, but the template scene also contains lights, camera and world settings that are ideal for visualizing molecules. This setup ensures that researchers who are On the basis of information obtained by experinot familiar with the 3D software can still effecments of nuclear magnetic resonance (NMR), 3D tively use BioBlender. Each interface element visualization tools provided by BioBlender allow bi(buttons, sliders, toggles) has help text associated ologists to build a reasonable sequence of movewith it. By placing the mouse over them a pop-up ment for proteins. It also includes a dedicated visual text describes the function. Errors and progresses code to represent important features of their surare displayed in the console. Critical errors will apface (Electric and lipophilic potential) on the protein pear in the main BioBlender as a pop-up under the itself, using photo realistic rendering and special mouse. The atoms size is of order of Ångström (Å), effects. therefore the scale used is 1 Blender Unit = 1 Å. BioBlender is a software extension of Blender 2.5[2], an interface for biological visualization that allows the user to import and interactively view and manipulate proteins. It was developed and is maintained by the Scientific Visualization Unit of the CNR of Italy in Pisa, with the help and contribution of several members of the Blender community. Material, scenes, publications and other relevant information can be found at and/or.
This tutorial assumes that you already have BioBlender downloaded on your computer, with the required programs installed.
1. Select and import a .pdb file PDB files contain a description of one or multiple conformations (positions) of a single molecule. Different conformations of the same protein are listed in one NMR file and are called MODEL 1, MODEL 2 etc.
Issue 31 | Dec 2010 - "Under the Microscope"
ARTICLE -
28
BioBlender: Blender for Biologists A list of options are available to be considered before importing the protein in the Blender scene (5 in figure): Verbose: enable to display in the console extra information for debugging; SpaceFill: enable or disable to display the atoms with Van der Waals or covalent radii in the 3D scene, respectively;
In the BioBlender Select PDB File panel: · Select the .pdb file by browsing from your data (1 in figure). The file included in sampleData folder contains the 25 models of Calmodulin [7]. Alternatively, simply type the 4-letter code for the .pdb file to be fetched from [8] (make sure to pick an NMR Change the name of the protein (by default it is named “protein0”) in the field on the right (2 in figure). Naming the proteins is just a good habit that will help keeping the scene organized. Once a file is selected, the number of models and the chains are detected and shown in the BioBlender Import field (3 in figure); · Choose 2 models to import in the scene (by default all models are listed) typing their number separated by comma; · In the Keyframe Interval slider (4 in figure) set the number of frames between the protein conformations (Min 1, Max 200).
Hydrogen: enable to import Hydrogens if they are present in the .pdb file. This option makes importing much slower and it is important only for visualization. If the .pdb file does not contain Hydrogens (or if you chose not to import them), they will be added during the Electrostatic Potential calculation using external software; Make Bonds: enable it to have atoms connected by chemical bonds. Despite being time consuming this operation is very important in motion calculation; High quality: displays high-quality atom and surface geometries; slow when enabled; Single User: enable to use shared mesh for atoms in Game Engine; slow when enabled; Upload Errors: enable to send us automatically and anonymously an email with the errors you generate. This makes us aware of the problems that arise and help us fix them. Finally, press Import PDB button to import the protein to the 3D scene of Blender. Blender displays the protein in motion (by linear interpolation between atoms in the conformations; Esc to stop the animation).
Issue 31 | Dec 2010 - "Under the Microscope"
ARTICLE -
2. Visualization in the 3D viewport
3. Protein motion using the physic engine
Once imported, the protein is displayed with all atoms, Hydrogens included (if the Hydrogens check-box was enabled). The first 4 buttons in the BioBlender View enable different views: only alpha Carbons, main chain (N, CA, C), main chain and side chains (no H), or all atoms.
To calculate the transition of the protein between the 2 conformations the Blender Physics Engine is used. Press Run in Game Engine button to see the transition. Press Esc to leave GE and then 0 on Numerical Board to see from the camera point of view.
If the Surface display mode is selected, BioBlender will compute the surface of the protein by invoking PyMOL software, an external application. It uses the Solvent Radius set by the user and returns the Connolly mesh [9], displayed on the BioBlender 3D view. The default radius (1.4 Å) is the standard probe sphere, equivalent to water molecules.
29
BioBlender: Blender for Biologists
To check the appearance of surface calculated with different solvent radii, change the solvent radius value and press refresh button. The current surface is deleted and a new one is created. When atoms are displayed, by selecting one atom in the 3D display, the protein information of the selected atom is printed in the area outlined below; in the 3D view the selection will extend to the other atoms of he corresponding aminoacid.
Hit Run in Game Engine button again for an interactively view. When inside the Game Engine, the mouse controls the rotation of the protein, allowing to inspect the protein from all angles. The also applies an ambient occlusion filter to the scene, giving the viewer a much better sense of depth.
Set the Collision mode to one of the following states: 0, 1 or 2. When set to 0 the transition between the conformations is done using linear interpolation; the atoms will simply move from one position to the other. When set to 1 the collisions between atoms are considered, resulting in a more physico-chemical accurate simulation[10]. When set to 2, the newly evaluated movement will be record to F-Curves. Go to the Timeline panel on Blender and see that the new conformations are recorded at different time (200 frames away from the last model imported) as shown in the figure below; in this way both sets of transitions are available for comparison. These conformations can be exported as described later in section 6.
Issue 31 | Dec 2010 - "Under the Microscope"
ARTICLE -
30
BioBlender: Blender for Biologists
process is also time consuming This visualization method is a novel way to see the MLP and it always refers to last values of a protein onto the surface. Normally this is a changes in the relatively time consuming and tedious process involvMLP grey-levels ing running different programs from the command visualization. line, but BioBlender simplifies the entire process by alWhen the calculalowing the user to do everything under one unified intion is done (the terface. button is reIn BioBlender MLP Visualization section: leased) press F12 on your keyboard. · Choose a Formula (1 in figure; Testa formula [11] is Note:This is the MLP set by default);
4. Molecular Lipophilic Potential Visualization
· Set the Grid Spacing (2 in figure; expressed in Å, lower is more accurate but slower) for MLP calculation; Press Show MLP on Surface. It may take some time as the MLP is calculated in every point of the grid in the protein space, then mapped on the surface of the protein and finally visualized as levels of grey (light areas for hydrophobic and dark areas for hydrophilic [12]). A typical protein has varying degrees of lipophilicity distributed on its surface, as shown here for CaM. Use Contrast and Brightness sliders to enhance the MLP representation of your protein. Once you are satisfied with the grey-levels visualization hit Render MLP to Surface button for the photorealistic render. This
representation using our novel code: a range of visual features that goes from shiny-smooth surfaces for hydrophobic areas to dull-rough surfaces for hydrophilic ones. The levels of grey are baked as image texture that is mapped on specular of the material. A second image is created by adding noise to the first one and map it on bump. The light areas become shiny and smooth while the dark ones dull and rough as shown in the figure.
Press Esc to go back to the Blender scene.
5. Electrostatic Potential Visualization EP is represented as a series of particles flowing along field lines calculated according to the potential field due to the charges on the protein surface. For this reason, it is necessary to perform a series of steps (as described in [12]), and to decide the physical parameters to be used in the calculation (2 in the figure).
Issue 31 | Dec 2010 - "Under the Microscope"
ARTICLE -
31
BioBlender: Blender for Biologists
In BioBlender EP Visualization section: · Choose a ForceField (1 in figure; amber force field is set by default); · Set the parameters for EP computation, using the options shown in the figure below: · Ion concentration – 0.15 Molar is the default, physiological value; · Grid Spacing – in Å, lower is more accurate but slower; · Minimum Potential – the minimum value for which the field lines are calculated – the default value is 0 which implies calculation of all possible lines; increase it if you want to enhance the representation of EP;
6. Output
Raluca Mihaela Andrei1,2, Mike Pan1,* and Monica Zoppè1§
· n EP lines*eV/Å2 – the number of field lines calculated for eV/Å2.
In the BioBlender Output panel set the output file path (by default it is set to tmp folder); choose the kind of representation you prefer to render from the Visualize curtain menu:
Now press Show EP button. The process is time consum· Atom – render only atoms; ing as Show EP button invokes a custom software that calculates the field lines and exports them in the BioBlender · Plain Surface – render only surface; 3D scene as NURBS curves. The positive end of each curve becomes an emitter. The particles flow along the curves · MLP – render surface with MLP; from positive to negative. · EP + Plain Surface – render surface (no MLP) and EP; Change the Particle Density (3 in figure) to modify the number of the particles visualized in the scene. Clear · EP + MLP – render surface with MLP and EP; EP to delete the curves and the emitters. set Start Frame – the first frame of the animation;
To see the protein movement with the surface properties you have to render a movie. Since the movement implies a change of the atomic coordinates, the surface properties must be recalculated frame by frame.
Issue 31 | Dec 2010 - "Under the Microscope"
ARTICLE -
set End Frame – the last frame of the animation; set Export Step – the number of frames to skip during export, mostly used for faster export of .pdb files; enable Information Overlay to print extra information on the final image; enable Ambient Light only for GE visualization; do not enable it for MLP representation as its effect is confusing for MLP visual
32
BioBlender: Blender for Biologists
Hit Export Movie to render every frame of the animation. The output is a sequence of still images, this ensures that the rendering is resumed if the rendering process is disrupted. During section 3 Blender GE calculated and recorded intermediate conformations as keyframes. To save these coordinates as .pdb files for further analysis using external software, press Export PDB. A .pdb file is saved for each frame in the selected output. To obtain the movie follow standard Blender procedures: open the Video Sequencer Editor: Add -> Image, select the sequence of images, go to Properties window and set the Output path and the File Format to AVI JPEG in the Output panel and Start and End frame in the Dimensions panel. Now press Animation button in the Render panel.
Now you have your protein moving with the surface properties visualized. An image of CaM with EP and MLP is shown in the image below 8 Connolly, M L (1983) Solventaccessible surfaces of proteins and nucleic acids. Science 221: 709-13
References
1 Zoppè, M; Porozov, Y; Andrei, R M; Cianchetta, S; Zini, M F; Loni, T; Caudai, C; Calli- 9 Zini, M F; Porozov, Y; Andrei, eri, M (2008) Using Blender R M; Loni, T; Caudai, C; Zopfor molecular animation and pè, M (2010) Fast and Effiscientific representation. Pro- cient All Atom Morphing of ceedings of the Blender ConProteins Using a Game Enference Blender gine. (under review) 2 DeLano, WL, The PyMOL Molecular Graphics System, 2002
10 Testa, B; Carrupt, P A; Gaillard, P; Billois, F; Weber, P (1996) Lipophilicity in molecular modeling. Pharm 3 The PyMOL Molecular Graph- Res 13: 335-43 ics System, Version 1.2r3pre, Schrödinger, LLC 11 Andrei R M, Callieri M, Zini M F, Loni T, Maraziti 4 Python G, Pan M C, Zoppè, M (2010) A New Visual Code for Intui5 NumPy tive Representation of Surface Properties of 6 Kuboniwa H, Tjandra N, Biomolecules. (under review) Grzesiek S, Ren H, Klee C B, Raluca Mihaela Andrei1,2, Bax A (1995) Solution strucMike Pan1,* and Monica ture of calcium-free calmodZoppè1§ ulin. Nat Struct Biol 2: 768-76 Scientific Visualiza7 Berman, H M; Westbrook, J; 12 tion Unit, Institute of Clinical Feng, Z; GilPhysiology, CNR of Italy, liland, G; Area della Ricerca, Bhat, T N; Weissig, H; 13 Pisa, Italy Shindyalov, I N; Bourne, P 14 2Scuola Normale E (2000) The Superiore, Pisa, Italy Protein Data Bank. Nu15 *Present address: cleic Acids University of British ColumRes 28: 235bia, Vancouver, Canada Cor42 responding author
Issue 31 | Dec 2010 - "Under the Microscope"
ARTICLE -
Introduction
by- Francisco M. GomezCampos
Three-dimensional (3D) technologies have always seemed to relate to futuristic applications. However, 3D animation has reached maturity in the last decade, 3D movies are in fashion in cinemas, 3D television is coming next … Let’s acknowledge it: this is the present. So, why not try 3D teaching? Today, educators in schools and universities can complement their teaching with 3D animation, going beyond chalk and blackboards, overhead projectors and Power Point presentations. This report is a summary of our developments in the Department of Electronics at Universidad de Granada (University of Granada) in the south of Spain. In the last few years we have produced educational material using Blender to do animations, to show our students some concepts in electronic physics and to help them use laboratory instruments. We briefly describe here some technical issues regarding our videos, such as the procedure we followed to represent three-dimensional wave functions, a very important issue in Quantum Physics, and how we approached the modeFigure 1: The mesh of a wave function in a silicon crystal.
33
Educational Science and Engineering Videos ling and depiction of the screen of an instrument widely used in the Electronics lab: the oscilloscope. Teaching science is a hard task today. Nevertheless, Blender gave us the chance improve communication with our students, to let them know how things work in the depths of a silicon crystal… or in the lab next to their classroom.
Blender in Quantum Physics: It is said that Richard Feynman (1918-1988), one of the most important American physicists of the 20th century, summarized the complexity of the quantum world in a sentence: “I think I can safely say that nobody understands quantum mechanics”. Nevertheless, sometimes university lecturers have to teach… quantum mechanics. From Feynman’s sentence we can figure out the difficulties for students in learning something “impossible to understand”… Well, the point is that the mathematical structure describing very small things such as molecules or atoms is very complex, especially because particles are not imagined as small dots, but as something called “wave functions”. These wave functions give us the probability of finding the particle at a certain point in the space. How could we draw those probabilities on a blackboard, at each point in the space? Anything that offers us support in clarifying this concept is really useful. Blender helped us with this task. First of all, we made a program to compute these probabilities for electrons in a piece of silicon and to record it in files. They were just clouds of points. After that, we wrote a very simple script in Python to create a mesh in Blender with the dots of the files. In this way we created something like in Figure1.
Issue 31 | Dec 2010 - "Under the Microscope"
ARTICLE -
In this representation we are “drawing” probabilities. Thus, in those places where there are more dots, it is more likely the electron will be found. You can see there are lobes in the wave function represented (there are a huge number of different wave functions in a piece of silicon), so there are some regions where the electron is more likely to be observed. By making a rotational movement, we can take a complete image of this piece of semiconductor and see the three-dimensional distribution of probabilities, and Figure 2: Drawing of the scene. this is where 3D teachFor the wave function object we ing starts! At least this used a halo-type material. is more enjoyable than a blackboard!
Blender in Electronics: When you buy a TV or a DVD player you always have a user guide. You are supposed to learn how it works using this guide, but you also have the device in front of you… Interaction with the instrument is crucial to learn how it works but, what happens if you just have the user guide and you can’t imagine the device? You’re in trouble. The oscilloscope is a very useful instrument in electrical and electronic engineering labs. It consists of a screen where you can monitor electrical signals in a circuit. Students normally have to learn how an oscilloscope works before seeing it. This is complicated and the practice sessions take time. With Blender, you can model an oscilloscope and show in a simple way how it
34
Educational Science and Engineering Videos works. The advantage of 3D animation is that you are able to control everything in the scene, focusing the attention of the students during the explanation on those details the teacher thinks are the most relevant. And, of course, this makes science seem more fun. We modeled a virtual lab with an oscilloscope. We tried to model the environment in a realistic way to give the impression of a serious workplace. Textures and lights resembled those found in most real laboratories in universities.
Figure 3: The virtual laboratory.
We alternated the 3D environment of the lab with the 2D scene on the screen. To carry out the modeling of the latter, we set the camera for an orthographic view. In world properties, we set a blank screen (with no signal on it) as the background texture (see Figure 4) and we added a mesh, this being the signal on the screen.
Figure 4: Blank screen of the oscilloscope
Issue 31 | Dec 2010 - "Under the Microscope"
ARTICLE -
However, we wanted to let the signal appear gradually, so we used a plane with Z-transparency and moved it from one side of the screen to the other. Thus, those parts behind the plane appeared as the background image, and only part of the signal was visible. For the signal we used a halo texture. Its appearance was very close to the image on a real oscilloscope. We thought this might be useful for students in other universities, so we decided to broadcast these videos Figure 5: Diagram of the arrangement on YouTube. The for simulating a 2D oscilloscope number of views screen and the user comments are encouraging; we found there was great interest, especially from Spanish-speaking countries (the videos are in Spanish). To watch our videos, just look for user fmgomezcampos on YouTube. The core of the working team is composed of several professors with wide experience in both research and university teaching: J. E. Carceller, J. A. Jiménez-Tejada, J. A. López-Villanueva, S. Rodríguez-Bolívar, A. Godoy and myself. Figure 6: View of the arrangement from the camera
35
Educational Science and Engineering Videos We thought they might also be of interest to other scientists so we submitted our works to some learning conferences, where they were a great success. And now we think it’s time to let the Blender community know what we’re doing in 3D teaching! Figure 7: Rendering of the oscilloscope screen. The blue dot acts as a tracer of the signal, and is a child of the plane. The dot is placed on the edge of the plane.
Acknowledgments: Special thanks to Monica Zoppè and her great team at the Scientific Visualization Unit in Pisa, Italy (Raluca, Stefano, Ilaria, Maria Antonietta, Claudia, Tiziana and others I did not meet but who also worked on the same project). I enjoyed meeting these great professionals and wonderful people
, Ph.D. in Physics and lecturer in Universidad de Granada, Spain, in the Department of Electronics. He has managed several innovative learning projects in the Universidad de Granada. Email: fmgomez@ugr.es
Issue 31 | Dec 2010 - "Under the Microscope"
ARTICLE -
Computer Simulation and Modeling of Water Droplets Deposition on Nanofibres
Introduction
By - Richard Charvat
A flow involving more than a single phase is classified as multiphase or non-homogeneous, such as liquid flows in porous fiber media. We are interested in the dynamics of the evolving interface between the distinct phases during such non-homogeneous flows in a fiber mass. The dynamics of such flow are dominated by surface tensions, porous media anisotropy and non-homogenity, fiber volume fraction, and fiber wetting behaviors. The uncertain structural conditions in fibrous media, including the susceptibility to even small loads, as well as the tortuous connectivity of their open pores and poorly defined boundaries, result in complex local non-homogeneous flows and interfacial evolution. This complexity, in many cases, becomes prohibitive for the development of analytical theories describing these phenomena. The wetting and wicking of fiber mass constitute a class of flows that have critical scientific and first of all practical significance.
Idea Richard CHARVÁT 1, Eva KO? TÁKOVÁ and David LUKÁ? 2 1 Technical University of Liberec, Faculty of Art and Architecture, Atelier of Virtual Reality, Czech Republic 2 Technical University of Liberec, Faculty of Textile Engineering, Department of Nonwovens, Czech Republic
Adapt Monte-Carlo simulation based on the Ising model for a description of the wetting and wicking phenomena in fibrous media. We introduce here a 3-D Ising model, incorporated with the stochastic dynamics and the method of importance sampling, which enables us to interpret the model outputs in terms of wicking dynamics. The essential principle of this model is based on the discretion of the whole system of a fibrous mass, a
liquid source, and a wetting configuration at any given moment. The continuous media in the system, including the solid, liquid, and gas, are all divided as assemblies of individual cells occupied by the respective medium so that such a discrete system of cells can be manipulated more easily in a computer. The liquid wicking simulations are then set up from the initial configuration of the liquid layer into which the fiber mass with a predefined fiber orientation is in part vertically dipped, absorbing the liquid.
Figure 1. 2-D Ising basic ferromagnetic model vs 3-D Ising model for liquid-fiber mass interaction. A cell in the center forms a supercube with its neighboring cells. On the front surface, we can see various kinds of media that occupy the cells. For example, the white color denotes the air, the grey color denotes the liquid, and fiber cells are black.
Statistical physics in general deals with systems with many degrees of freedom. These degrees of freedom, in our case, are represented by the so called Ising variables. We assume that we know the Hamiltonian (the total internal energy) of the system.
Issue 31 | Dec 2010 - "Under the Microscope"
36
ARTICLE -
Computer Simulation and Modeling of Water Droplets Deposition on Nanofibres
The problem is computing the average or equilibrium macroscopic parameters observable (energy and liquid mass uptake) for a given initial system configuration. Moreover, we will monitor the kinetics or even dynamics of the system so as to simulate the wicking behavior with time, for more detail see [3],[4].
Model The auto-model (particularly so called Ising model) and Monte-Carlo method were used especially for simulation of a liquid droplet in contact with fibrous material. The mechanism of this kind of simulation is fully described in [2].
Figure 3. An illustration of the initial state of a simulated system. A dropping ball is placed on top of non-woven textile with a particular fibre orientation. After the start of the simulation process, the ball infiltrates inside, then when we change angle of wetting = dropping ball does penetrate (0°) or dropping ball doesn’t penetrate (90°).
Figure 2. A simulation box with cells illustrate schematically a 3-D Ising model of droplets on single fibre in various configurations. In this case it was used for procedural modeling of the Rayleigh instability phenomena (top). Computer visualization of Rayleigh instability of liquid droplets on single fibre (bottom).
Methodology With the use of an optimized algorithm, the 3-D Ising model improves accuracy and efficiency in simulation. This approach is capable of realistically simulating the complicated mechanisms involved in the filtration and separation processes. The fibrous material is represented by non-woven textile material.
Figure 4..
Issue 31 | Dec 2010 - "Under the Microscope"
37
ARTICLE -
Computer Simulation and Modeling of Water Droplets Deposition on Nanofibres
Interface Occupied elements of 3D model are imported in layers as vertices after which a volume effect is applied on them (various color depending on fibre or liquid fluid element). It is further possible to render or animate structures made by fibres with liquid interaction or even cut samples in desired positions with orthographic camera point of view. This method works properly even for very large data sets. Figure 6. Reconstructed voxel slices (left) of fibrous non-woven material. Textured image slice of droplet with alpha channel (bottom right).
Figure 5. System of fibrous non-woven material in computer workspace. Rendered image or linear sequence of images (top right).
Eventually it will be possible to use textured slices in a voxel visualization manner, like a 3D virtual reconstruction of human body from cuts obtained by medicine computer tomography devices.
Visualization Furthermore it is also possible to present linear or even real time content in low cost anaglyph stereoscopy or
Figure 7. Perspective anaglyph of fibrous non-woven material (left). Top view of complete fibrous system with an alpha channel slice (right).
active virtual reality projection due to much better immersion. Due to GLSL offering support for real time shaders, it is possible to experiment with scientific computing and visualization using the real time interactive game engine.
Issue 31 | Dec 2010 - "Under the Microscope"
38
ARTICLE 1. R. Charvat: Blender Like a Nanoscope (procedural modeling), paper for 8th Annual Blender Conference in Amsterdam, 25th October 2009. 2. D. Lukas, N. Pan, A. Sarkar, M. Weng, J. Chaloupek, E. Kostakova, L. Ocheretna, P. Mikes, M. Pociute and E. Amler: AutoModel Based Computer Simulation of Plateau-Rayleigh Instability, Physica A: Statistical Mechanics and its Applications, Volume 389, Issue11, 1 June2010, Pages 2164-2176. 3. D. Lukas, V. Soukupova, N. Pan and D. V. Parikh: Computer Simulation of 3-D Liquid Transport in Fibrous Materials, Simulation, vol. 80, issue 11, pp. 547-557, DOI: 10.1177/0037549704047307. 4. D. Lukas, E. Kostakova and A. Sakar: Computer Simulation of Moisture Transport in Fibrous Materials, Thermal and Moisture Transport in Fibrous Materials, edited by N. Pan and P. Gibson, Woodhead Publishing Limited, Cambridge, pp. 469-541, ISBN-13: 978-184569-057-1.
Computer Simulation and Modeling of Water Droplets Deposition on Nanofibres
import GameLogic cont = GameLogic.getCurrentController() obj = cont.getOwner()
Figure 8. Real time interactive visualization via GPU. Perspective camera view (top) and side view of fibrous system with droplets (bottom).
Parallel computing architecture is a programming approach for the performing of scientific calculations on the GPU as a data parallel computing device. The programming interface allows us to implement algorithms using extensions to the standard Python language used inside Blender [1]. Authors also thank the companies Elmarco and Cummins Filtration for their support and interest in this work
FragmentShader = """ uniform sampler2D color; varying vec3 light_vec; varying vec3 normal_vec; void main() { vec3 l = normalize(light_vec); vec3 n = normalize(normal_vec); float ndotl = dot(n,l); gl_FragColor = texture2D(color, gl_TexCoord[0].st)*ndotl; } """ mesh_index = 0 mesh = obj.getMesh(mesh_index) shader = mat.getShader() shader.setSource( FragmentShader,1) shader.setSampler('colorMap',0) . A piece of code an in integrated interpreter to apply real time pixel shader visualization via GPU.
Issue 31 | Dec 2010 - "Under the Microscope"
39
REVIEW -
I
ts raining Blender books lately and Packet publishing have been on the front of this literary assault. Fortunately for us blender users it's nice time to fill up our book shelves since final re-lease of Blender 2.5 is upon us soon. Blender is a relatively new tool for most professionals now that we have an excellent interface to begin with. We still need lots of information about crucial features of blender3d such as lighting and the material system. Blender 2.5 Lighting and Rendering book fills that slot nicely. Although it cannot be called a complete beginners book, since you will mostly be lost on various steps if you do not have at least working knowledge of Blenders interface, it can't be called an one stop extensive book for Advance users. So for an beginner to intermediate user, this book is easy to pickup and allows you to quickly grow on towards much more advance usage, thus making it an excellent learning companion. The books starts off with the very basics on lighting terminologies such as color and a basic premise of color theory then gradually moving on to understanding lighting in real world settings. After the grounds up on real life knowledge, the reader is exposed to blender’s myriad controls and features available for various types of lighting solutions. From Ambient Occlusion to Indirect Lighting, then on to outdoor and Indoor-lighting solutions. The explanation seems justly nice for newbies, but the level of explanation at some points leaves more advance users wanting for more, so clearly this book is not for experienced users.
40
Blender 25 Lighting and Rendering The book offers a nice portion to Materials in Blender. even better, UV Mapping which in my opinion is a good move as it brings the user right up there with the main tool-set's of Blenders Rendering pipeline. This is supplemented with a pretty good from the grounds up of the material system and its various features namely, diffuse, spec mirrors, IOR etc.
Whats Good ·
Explanations of most features, it covers almost every part of lighting and rendering including the Material system.
·
Very practical with excellent exercises for practical understanding.
·
Pretty straight forward and concise.
·
Easy to Pick up and read.
Whats bad. Not much really.
Seems like BA recommended buy ;)
Blender 2.5 Lighting and Rendering Packt Publishing Pages 252 Approx. ISBN 978-1-847199-88-1
Issue 31 | Dec 2010 - "Under the Microscope"
GALLERIA
Micro lifeform - by Robert J Tiess
41
Issue 31 | Dec 2010 - "Under the Microscope"
GALLERIA
Micro lifeform - by Robert J Tiess
42
Issue 31 | Dec 2010 - "Under the Microscope"
GALLERIA
Micro lifeform - by Robert J Tiess
43
Issue 31 | Dec 2010 - "Under the Microscope"
GALLERIA
Quantum - by Sam Brubaker
44
Issue 31 | Dec 2010 - "Under the Microscope"
GALLERIA
EMBO_cove - by Hua Wong
45
Issue 31 | Dec 2010 - "Under the Microscope"
GALLERIA
Centriole - by Leonard Bosgraaf
46
Issue 31 | Dec 2010 - "Under the Microscope"
GALLERIA
Cellulose Insect -by Antoni Villacreces
47
Issue 31 | Dec 2010 - "Under the Microscope"
GALLERIA
Weapon Of Mass Creation -by Yo Roque
48
Issue 31 | Dec 2010 - "Under the Microscope"
GALLERIA
WiBee - by Manuel Geissinger
49
Issue 31 | Dec 2010 - "Under the Microscope"
GALLERIA
Under The Microscope -by Grzegorz Wereszko and Adam Auksel
50
Issue 31 | Dec 2010 - "Under the Microscope"
GALLERIA
BMW - by Pierlot Damien
51
Issue 31 | Dec 2010 - "Under the Microscope" · · · 31 | Dec 2010 - "Under the Microscope"
52
Upcoming Issue ‘Theme’ Issue 32
“Spring is Sprung” · Modeling and texturing of plants, flowers, trees; can be realistic, toony, exotic, alien or even steampunk · Ant Landscape Add-On · Ivy Generator or similar scripts
· Use of arrays, curves and other modifiers to create vegetation 31 | Dec 2010 - "Under the Microscope"
Blender provides an excellent tool set for producing simulations, visualizations and walk-throughs, as well as educational videos based on t... | https://issuu.com/blenderart_magazine/docs/blenderart_mag-31_eng | CC-MAIN-2017-26 | refinedweb | 15,159 | 61.97 |
DotNetStories
This is a post relevant to all the developers out there that use Web Forms as their main ASP.Net platform.
In this post I would like to talk about ViewState and how we can move it and store it in a session on the server's memory. We know that the default hidden client field for ViewState can become very large on pages.That can problematic in terms of SEO and performance.
Let's talk a little bit about ViewState since not everybody knows what it is and why we need it. We need those state preserving mechanisms due to the fact that the web and HTTP in particular, are stateless..
Have a look here and here to see two posts of mine talking about ViewState.If you want to see some other posts of mine that I talk about State mechanisms have a look here and here
I am going to create a simple website that retrieves data from a database using the well established Entity Framework ORM.
I will be using Entity Framework version 4.0 that ships with Visual
Studio 2010 and .Net 4.0.
We will create a simple website and use Entity Framework as our data access layer. a new empty website and choose a suitable name for it. Choose C# as the development language.
2) Add a new item to your site, a web form. Leave the default name, Default.aspx
3) Add a new item to your site, a ADO.Net Entity Data model. Choose a suitable name for it, e.g AdventureWorkLT.edmx.
8) Now the wizard will identify the database objects and let us choose which database objects we want to include in our model.I included all the database objects.Hit the Finish button. AdvWorks.Designer.cs.
10) Now we are ready to start querying our database. We will issue a Linq to Entities query against the conceptual schema.
Let's create a new query and bind the resultset to a GridView control. Add a GridView control to the default.aspx page.
In this query we want to get only the customers that have more than one order and the Title equals "Ms"
If you are not so familiar with Linq queries, have a look at my other posts in this blog regarding LINQ.
In the Page_Load event handling routine type:
AdventureWorksLTEntities ctx = new AdventureWorksLTEntities();
var query = from mycust in ctx.Customers
join sho in ctx.SalesOrderHeaders on mycust.CustomerID
equals sho.CustomerID
into myorders
where (mycust.Title=="Ms." && mycust.SalesOrderHeaders.Count >= 1)
select new { mycust.FirstName, mycust.LastName,OrdersCount =
myorders.Count() };
GridView1.DataSource = query;
GridView1.DataBind();
Do not forget to add this as well in .cs file
using AdventureWorksLTModel;
11) Run your application and you will see the results.
12) Have a look at the View Source of your page and see the ViewState hidden field.You will see a large hidden field with lots of encoded data in it. We need to move it from the client (the browser will not have to download it) and store it on the server's memory where RAM is not an issue.Have a look at the picture below
13) Now let's see how we can accomplish that.I am going to use an Adapter which is basically a class file. Add another class file in the App_Code special folder.Name it ServerSideViewStateAdapter.cs.
The code for the .cs file follows
public class ServerSideViewStateAdapter:PageAdapter { public override PageStatePersister GetStatePersister() { return new SessionPageStatePersister(this.Page); } }
Let me explain what the code above does.I have my newly created class that needs to implement the PageAdapter class.
There is a method in that class called GetStatePersister that returns the SessionPageStatePersister that stores ASP.Net page view state on the web server.
Do not to forget to use in your code the two namespaces
using System.Web.UI.Adapters; using System.Web.UI;
14) Now we need to configure the Adapter. We need to add a new special folder to our site, a App_Browsers folder. Add a new item to this special folder,a .browser file and name it ViewStateAdapter.browser.
The code inside this file follows
<browsers> <browser refID="Default"> <controlAdapters> <adapter controlType="System.Web.UI.Page" adapterType ="ServerSideViewStateAdapter"> </adapter> </controlAdapters> </browser> </browsers>
I define that this Adapter (that I have already created) will target all browsers ( <browser refID="Default">)
15) Now that you have everything configured, view the page on the browser again. You will have the same results. Look now in the Page --> View Source and you will see that the ViewState has been removed and there is only a simple key now present.If I had huge ViewState info in my page by using this technique my clients would download much less client code. My website's responsiveness and performance would significantly increase.
Have a look at the picture below to see what I actually mean.
Having said all that I would like to mention once more that you can disable ViewState at application,page and control level.
If you go to your web.config file you can disable it for the whole application
<pages enableViewState-"false" />
You can disable it on page level inside the Page directive
<%@ Page EnableViewState = "false" %>
and you can also disable it on each individual control
<asp:Button
Drop me an email if you need the source code.
Hope it helps!! | http://weblogs.asp.net/dotnetstories/how-to-move-the-viewstate-from-a-hidden-field-on-the-client-to-a-session-on-the-server | CC-MAIN-2015-27 | refinedweb | 899 | 66.84 |
Introduction: Controlling Led's Brightness by Raspberry Pi and Custom Webpage
Using an apache server on my pi with php, I found a way to control an led's brightness using a slider with a customized webpage that is accessible on any device connected to the same network as your pi.
There are a lot of ways in which this can be achieved, however they require advanced CSS, Javascript and HTML skills. My method tends to be a bit user friendly.
Supplies
What you will need:
A raspberry pi with a Linux based OS installed(a 3 b+ in my case with Raspbian Buster)
A raspberry pi case for your pi(optional)
An led of any color and a resistor to go along with it(Am using a red led with no resistor)
male to female jumper wires(x2)
OR
male to male jumper wires(x2) and female to female jumper wires(x2)
A breadboard
Basic HTML, PHP and Javascript skills(If you don't know, then don't worry. I shall explain the commands)
Step 1: Step 1: Setting Up the Server
- This step involves the setting up of your Apache server with PHP on the raspberry pi as well as the development of the web page to control led's brightness
Part 1: Downloading and setting up your Apache server
Go to the terminal and type the following command to install Apache server on your pi
sudo apt-get update sudo apt-get install apache2 -y
Go to your browser and type if you are using pi's browser or type pi's local ip* if you are using another device(ensure your device is connected to the same network as your pi)
You must get this webpage
If you get the above web page in your browser then your installation works fine, If not then here are some troubleshooting tips:
Ensure that your device and your pi are on the same network
If you are sure of the above step or you are viewing the webpage on your pi then try re-installing Apache server
After you have ensured that your Apache installation is working and the website is displaying correctly use the following code for installing the PHP module(in the terminal).
sudo apt-get install php libapache2-mod-php -y
now, in the terminal type the following code one by one
cd /var/www/html sudo chown pi: index.html
The folder html in /var/www is the file in which your webpage is saved in the form of an html file called index.html
By default the ownership of index.html belongs to root, the 2nd line of code changes the ownership of the file to the user pi, if your username is not pi then edit the pi in the 2nd line of code to your username
In html format, you cannot execute php code(which we need for our program) therefore we can convert the file index.html into index.php using the following code
sudo mv index.html index.php
Note that the file ownership does not change and still belongs to pi
go to the site again and you will find that nothing changes except the fact that your index.html file is now index.php
now we want a blank page to modify our page but if you open the file with nano(command line text editor), you will find that the code is very large to delete, what we can do is delete the file index.php and replace the file with a new blank index.php. To do that type the following code in the command line one by one
sudo rm index.php
the rm command deletes the file
To make a new file:
sudo nano index.php
press ctrl+o, then hit enter, then hit ctrl+x
You will find that it's a blank page, try editing it with whatever html skills you have(Don't worry they'll work in a .php file) in nano and see what you get. Use the above command to open the file in nano type in whatever you want.
Use this code in your file and see what happens;)
<?php echo "Hello World"; ?>
Now might be a good time to explain the working of our project
We will add a slider to our html page to control the led's brightness
The value of the slider will be saved in a text file called "output.txt"(for which purpose we require php)
We will add some python code which will scan the output.txt file and write it's value to our led
So basically we just need one more file "output.txt"
Type this command in the terminal in your /var/www/html directory
sudo nano output.txt
Now press ctrl+o, press enter, then press ctrl+x
now type in the following code one by one to change directory settings(you cannot write to the file using php unless you type the following code due to permission issues)
sudo chown www-data:www-data /var/www/html/output.txt sudo chmod 644 /var/html/output.txt
now using nano go the index.php file and type in the following code(The code will be explained later)
<?php $myfile = fopen("output.txt", "w"); fwrite($myfile, "hello world"); fclose($myfile); ?>
Now check the output.txt file using nano and see what's written ;)
Now our server has all the prerequisites and all that is required is the setting up of the webpage.
Step 2: Step2 : Setting Up the Webpage
Using nano edit the index.php(make sure it is blank) then paste the following code:
<html> <body bgcolor = "cyan"> <center> <h1>modify the given slider to adjust the brightness of the led</h1> <h2>the pwm value is given to the left of the slider</h2>
<?php $myfile = fopen("output.txt", "r"); $pwm = fread($myfile, filesize("output.txt")); echo $pwm; echo'<input type = "range" min = "0" max = "100" value = "'.$pwm.'" id ="pwm" onchange = "adjust_pwm()"/>'; fclose($myfile); ?> <script> function adjust_pwm(){ var x = document.getElementById("pwm").value; window.location.href = "main_php.php?name=" + x; }
</script> </body>
</html>
Now make a new file called main_php.php using the following code:
sudo nano main_php.php
Paste the following code inside main_php.php
<html> <body> <?php $myfile = fopen("output.txt", "w"); fwrite($myfile, $_GET["name"]; fclose($myfile); echo "<script>window.location.href = 'index.php'</script>"; ?> </body> </html>
Explanation of the code(index.php):
consider the index.php part of the code
the <html> part marks the beginning of the html file and is commonly used to do so.
note that </html> marks the end of the file
the < makes the background color of our webpage to be cyan.
the <center> makes whatever that is between <center> and </center> to be aligned in the center of the web page.
the lines <?php and ?> mark the beginning and the end of the php code.all php variables begin with $ symbol and each php line ends with ";"
we have opened the file output.txt in read mode and saved it in the variable $myfile using the code $myfile = fopen("output.txt", "r"); the "r" in the fopen() command(used for opening files) after the comma specifies that we have opened the file in read mode.
the fread() command is used to read the contents of any file that has been opened by the fopen() command.
the filesize("output.txt") command is used to specifiy to read the entire contents of "output.txt".
more on php read/write here:-file read/write using php
the echo commandis quite obvious.
In this case we have used the echo command to make a slider with minimum range 0 and maximum range 100 and initial value to be whatever the last value is of the file "output.txt" if we change the slider position, it executes the javascript function adjust_pwm
PWM or pulse width modulation is what is used for changing the brightness of the led.
the fclose() command closes the file.
The javascript code is defined between <script> and </script> tags.
The function command defines the function.
Inside the function we have saved whatever value our slider has in the current position inside a variable x.
Note that variables in javascript are defined by using var.
then we change the url of our php file to main_php.php?name=whatever value our slider has using
window.location.href = "main_php.php?name=" + x;
the window.location.href command is used to change the url of our webpage, here we change the url to
main_php.php and also we give an extra parameter name containing the slider value along with it.
Explanation of the code(main_php.php):
consider the main_php.php part of the code
I have named this file main_php.php as this file is what writes the slider value(pwm value) to the file output.txt which is then read by our python code(in charge of writing the pwm value to our led).
its fairly simple, we open the file output.txt using fopen() command and write the value of our variable name to it using the fwrite() command, then we close the file using fclose() command and we move back to our index.php
Make sure that both of these files along with output.txt are in the /var/www/html folder.
Try opening the webpage and play around with the slider and see what happens to the output.txt file.
Here's how the webpage should look like
Step 3: The Python Code
go to your /var/www/html folder and make a python file(any name of your choice, use nano)
paste the following code in your python file
import time import RPi.GPIO as gpio gpio.setmode(gpio.BCM) gpio.setup(21, gpio.OUT) pwm = gpio.PWM(21, 100) pwm.start(0) try: while 1:
time.sleep(0.5) f = open("output.txt", "r") led_brightness = float(f.read()) pwm.ChangeDutyCycle(led_brightness) except KeyboardInterrupt:
pass pwm.stop() gpio.cleanup()
the code is pretty self explanatory
we have imported the gpio library RPi.GPIO to control the led using the library's own pwm
we have connected our led to pin 40 or gpio 21 of my pi, you can use any pin you have on your pi.
in the try section, we have converted the slider value in the text to a float data type and then we have written the same to our led.
the code is only to stop if you press ctrl+c.
Step 4: The Hardware
If not using a resistor:
The positive leg(the longer leg) of the led goes to the gpio pin specified in our code earlier,
The negative part goes
If using a resistor:
The positive part of the led goes to the gpio pin specified in our code earlier
Connect one end of your resistor to the negative leg(the shorter leg) and the other one.
Step 5: Demo
The completed project should work like this
2 years ago
Nice, thanks for sharing : ) | https://www.instructables.com/Controlling-Leds-Brightness-by-Raspberry-Pi-and-Cu/ | CC-MAIN-2022-21 | refinedweb | 1,848 | 70.53 |
I've a job, and when the job is run, at the very bottom of it, and I want to
enqueue
class SimpleJob
@queue = :normal
def self.perform(start)
puts "Right now, start = #{start}"
start += 12
time = some_request_external_api
self.set(wait: time).perform_later(start)
end
end
QUEUE=* rake resque:work
Right now, start = 12
Rather than enqueuing the job again within itself, you could either use a scheduler, such as
clockwork.
Or, if what you are trying to accomplish is in response to some event on another service, maybe you could look into it's documentation and see if it provides
webhook functionality.
These would send
post requests to your desired action whenever any action occurs on their side. | https://codedump.io/share/97z4wKk6Ucvw/1/how-to-enqueue-a-job-within-a-job-dynamically-in-rails | CC-MAIN-2018-17 | refinedweb | 119 | 66.07 |
This is the third release transfer and features around versioning.
Release transfer
I like to create a new version of the Concur SWCV that I used in my previous blog where I showed the transport capabilities in ESR in Eclipse. Right mouse click within the ESR navigation pane, and select New –> Software Component Version from the context menu.
In the upcoming dialog, you can either import a SWCV from SLD or create a local one. Here, I choose local. In the name of the local SWCV I indicate the new version number. Click on button Finish.
A new SWCV has been created. Next we have to create a Namespace. On the Namespace tab, select button Edit.
This brings up the Namespace editor. Here, we add a new Namespace by selecting button Add and maintaining the Namespace name. The name needs to correspond to the Namespace name in the previous SWCV.
Confirm the upcoming pop-up. Then save and activate your changes.
From the main menu, select entry ESR –> Transfer Design Objects.
In the upcoming dialog, select source and target SWCVs. In the scope section you can either select all active objects or objects from a selected namespace. Here, I chose Selected namespaces. Click on button Next.
On the next screen you see the list of namespaces within the source SWCV. In my case, there is only one. Select the respective namespace, and click on button Preview.
On the Preview screen, you get all objects displayed which will be transferred. Click on button Finish.
Once done, you get a success message displayed as well as the number of objects which have been transferred.
When refreshing the navigation tree, the new SWCV including the transferred objects are displayed.
Version history
Besides supporting different Software Component versions, objects within a SWCV can have different versions themselves. For any ESR object, you can get its history information. Here, I like to know the version history of a message mapping. Select the message mapping, and select entry Version History from the context menu.
A new tab opens showing all versions including version number, the user that has created the version, the date of creation, etc. The last version is the active one.
I like to restore the version 2. Double click on the second version. By the way, as you can see below, for each version the version origin is also displayed, i.e., I can trace back the change list details through which the version came into the ESR.
In the upcoming message mapping editor, in the right upper corner, you find a button to restore the displayed version.
Confirm the upcoming pop-up.
Save and activate your changes.
When refreshing the history pane, you see that a new version has been created.
Version comparison
For messagemapping objects, besides the version history, a version comparison is supported. In our previous example, before restoring a previous version, I like to compare the current version with the previous version. In the History tab, you can select two versions, and select entry Compare with Each Other from the context menu.
A new pane opens where the differences and changes between both versions are highlighted.
In the upper right corner, buttons are provided that support you in browsing through the differences and changes, i.e., show next difference, show previous difference, show next change, show next change.
In the next blog Best practices ESR in Eclipse – Part 4: Content organization, I will cover new features around content organization within the ESR in Eclipse. A list of all blogs in the blog series can be accessed from here Best practices ESR in Eclipse – Part 1: Overview of new capabilities with 7.5 SP02. | https://blogs.sap.com/2016/03/21/best-practices-esr-in-eclipse-part-3-release-transfer-and-versioning/ | CC-MAIN-2018-05 | refinedweb | 614 | 67.35 |
In this section, you will learn how to display the list of files from a particular directory.
Description of code:
In the given example, we have created an object of File class and parse the directory through the constructor of the File class. Then we have called listFiles() method which actually returns the array of both the file and directory names. So in order to get only the files, we have created a condition that if the array of files contains file, then display its name.
getName(): This method of File class returns the name of file or directory.
listFiles(): This method of File class returns an array of all the files and directories of the specified directory.
Here is the code:
import java.io.*; class ListFiles { public static void main(String[] args) throws Exception { String text = ""; File f = new File("C:/"); File[] listOfFiles = f.listFiles(); for (int j = 0; j < listOfFiles.length; j++) { if (listOfFiles[j].isFile()) { text = listOfFiles[j].getName(); System.out.println(text); } } } }
Through the above code, you can display list of files from any directory.
Output: | http://www.roseindia.net/tutorial/java/core/files/filelistfiles.html | CC-MAIN-2014-10 | refinedweb | 180 | 65.73 |
How is the volumetric capacity calculated? It looks like the gravimetric capacities are calculated with the Faraday’s laws for electrolysis but applying the same intuition but with the unit cell volume doesn’t seem to recreate the volumetric capacities in the Battery Explorer.
Zhiwen,
Thanks for asking the question.
I would like to use an example to answer your question.
For electrochemical reaction 0.75Li + Li0.25FePO4 -> LiFePO4 + 0.75e, each LiFePO4 stores 0.75e.
The cell volume is 302.3Å^3 per LiFePO4.
Then it gives you the volumetric capacity = 0.75e/302.3 Å^3;
plug in some of the constants to convert the units:
1Ah = 3600 * 6.25 * 10^18 e and 1Å^3 = 10^-27 L
It will give you that volumetric capacity = 0.75e/302.3 Å^3=441Ah/L.
Fortunately, pymatgen has a insertion_electrode module to calculate the battery cathode properties for you, so that you do not need to worry about it. Here is the usage ( Na battery example) ,
from pymatgen import MPRester from pymatgen.apps.battery.insertion_battery import InsertionElectrode mpr = MPRester() entry1 = mpr.get_entries("mp-763898", inc_structure="final")[0] entry2 = mpr.get_entries("mp-32486", inc_structure="final")[0] na_entry = mpr.get_entries("mp-10172", inc_structure="final")[0] a = InsertionElectrode([entry1,entry2],na_entry) print(a)
The output will be:
InsertionElectrode with endpoints at VPO5 and NaVPO5 Avg. volt. = 3.319478280000003 V Grav. cap. = 144.9496318169905 mAh/g Vol. cap. = 468.3816011364095
I hope this answers you question, but feel free to let me know if there is a problem.
Best regards,
Miao | https://matsci.org/t/volumetric-capacity-in-battery-explorer/735 | CC-MAIN-2020-40 | refinedweb | 258 | 53.37 |
Servlets Debugging
It is always difficult to testing/debugging a servlets. Servlets tend to involve a large amount of client/server interaction, making errors likely but hard to reproduce.
Here are a few hints and suggestions that may aid you in your debugging.");
All the messages generated by above syntax would be logged in web server log file.
Message Logging:
It is always great idea to use proper logging method to log all the debug, warning and error messages using a standard logging method. I use log4J to log all the messages.
The Servlet API also provides a simple way of outputting information by using the log() method as follows:
// Import required java libraries import java.io.*; import javax.servlet.*; import javax.servlet.http.*; public class ContextLog extends HttpServlet { public void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, java.io.IOException { String par = request.getParameter("par1"); //Call the two ServletContext.log methods ServletContext context = getServletContext( ); if (par == null || par.equals("")) //log version with Throwable parameter context.log("No message received:", new IllegalStateException("Missing parameter")); else context.log("Here is the visitor's message: " + par); response.setContentType("text/html"); java.io.PrintWriter out = response.getWriter( ); String\n"; out.println(docType + "<html>\n" + "<head><title>" + title + "</title></head>\n" + "<body bgcolor=\"#f0f0f0\">\n" + "<h1 align=\"center\">" + title + "</h1>\n" + "<h2 align=\"center\">Messages sent</h2>\n" + "</body></html>"); } //doGet }
The ServletContext logs its text messages to the servlet container's log file. With Tomcat these logs are found in <Tomcat-installation-directory>/logs.
The log files do give an indication of new emerging bugs or the frequency of problems. For that reason it's good to use the log() function in the catch clause of exceptions which should normally not occur.
Using JDB Debugger:
You can debug servlets with the same jdb commands you use to debug an applet or an application.
To debug a servlet, we can debug sun.servlet.http.HttpServer, then watch as HttpServer executes servlets, you have to help your debugger by doing the following:
Set your debugger's classpath so that it can find sun.servlet.http.Http-Server and associated classes.
Set your debugger's classpath so that it can also find your servlets and support classes, typically server_root/servlets and server_root/classes.
You normally wouldn't want server_root/servlets in your classpath because it disables servlet reloading. This inclusion, however, is useful for debugging. It allows your debugger to set breakpoints in a servlet before the custom servlet loader in HttpServer loads the servlet.
Once you have set the proper classpath, start debugging sun.servlet.http.HttpServer. You can set breakpoints in whatever servlet you're interested in debugging, then use a web browser to make a request to the HttpServer for the given servlet (). You should see execution stop at your breakpoints.
Using Comments:
Comments in your code can help the debugging process in various ways. Comments can be used in lots of other ways in the debugging process.
The Servlet uses Java comments and single line (// ...) and multiple line (/* ... */) comments can be used to temporarily remove parts of your Java code. If the bug disappears, take a closer look at the code you just commented and find out the problem.
Client and Server Headers:
Sometimes when a servlet servlet debugging:
Be aware that server_root/classes doesn't reload and that server_root/servlets probably does..
Verify that your servlet's init() method takes a ServletConfig parameter and calls super.init(config) right away. | http://www.tutorialspoint.com/servlets/servlets-debugging.htm | CC-MAIN-2014-49 | refinedweb | 579 | 58.48 |
On Fri, 18 Mar 2005, Frank Everdij wrote: > Quoting Gary Setter <address@hidden>: > > > <snip> > > I'm not sure I can solve the problem, but since you made some > > changes to the source, would you post the definition of the > > QuoteChars please. > > Best regards, > > Gary Setter > > I have made no changes to the source, trust me. This is a plain vanilla > aspell-0.60.2. > But i have found a solution to the 4 errors in modules/filter/email.cpp: > > >From an old KDE-development discussion on porting to IRIX i remembered > >something > about code not belonging in c++ source code files, but in header files. > Aapparently the namespace section in email.cpp should belong in a header, > atleast according to the MIPSPro compiler, so i spliced the source file into a > header part and code part: That is very ugly and probably unnecessary. It may be just the anonymous namespace that is causing the problem. Try simply commenting out the "namespace {" and the closing "}" and see if that solved the problem. > --- common/string.hpp.save Mon Nov 29 18:50:05 2004 > +++ common/string.hpp Fri Mar 18 15:54:12 2005 > @@ -492,7 +492,7 @@ > > namespace std > { > - template<> static inline void swap(acommon::String & x, acommon::String & > y) > {return x.swap(y);} > + template<> inline void swap(acommon::String & x, acommon::String & y) > {return > x.swap(y);} > } That is fine. I believe I already fixed this. I will double check. > --- modules/speller/default/affix.hpp.save Fri Mar 18 09:58:10 2005 > +++ modules/speller/default/affix.hpp Fri Mar 18 09:58:39 2005 > @@ -109,7 +109,7 @@ > } > WordAff * expand_suffix(ParmString word, const unsigned char * new_aff, > ObjStack &, int limit = INT_MAX, > - unsigned char * new_aff = 0, WordAff * * * l = 0, > + unsigned char * new_aff_ = 0, WordAff * * * l = > 0, > ParmString orig_word = 0) const; Look at the file in the corresponding source file affix.hpp. The first new_add should just be "aff". > Last error i found is that when compiling i get an error in > gen/static-filters.src.cpp: > .. > cc-1070 CC: ERROR File = ./gen/static_filters.src.cpp, Line = 76 > The indicated type is incomplete. > > static KeyInfo nroff_options[] = { > ^ > > cc-1070 CC: ERROR File = ./gen/static_filters.src.cpp, Line = 103 > The indicated type is incomplete. > > static KeyInfo url_options[] = { > ^ > > 2 errors detected in the compilation of "lib/new_filter.cpp". > > Probably the perl scripts in directory gen got confused and decided to refrain > from putting an option in there, which the MIPSPro compiler rejects. modifying > them to: No those should be empty arrays. -- | http://lists.gnu.org/archive/html/aspell-devel/2005-03/msg00010.html | CC-MAIN-2016-18 | refinedweb | 416 | 57.67 |
lp:~bigdata-dev/charms/xenial/rsyslog-forwarder-ha/trunk
- Get this branch:
- bzr branch lp:~bigdata-dev/charms/xenial/rsyslog-forwarder-ha/trunk
Branch merges
Related bugs
Related blueprints
Branch information
- Owner:
- Juju Big Data Development
- Status:
- Development
Recent revisions
- 25. By Kevin W Monroe on 2016-10-27
remove sitepackages=True (it is not needed, and can cause conflicts if user has py2 flake8 installed)
- 24. By Kevin W Monroe on 2016-10-26
use explicit charm name in amulet test (bundletester confused rsyslog with rsyslog-fwrd without this)
- 23. By Kevin W Monroe on 2016-10-26
use our xenial rsyslog in the test; better metadata tags
- 22. By Kevin W Monroe on 2016-10-26
rework unit test for py3 and check that the rsyslogd process is actually running
- 21. By Kevin W Monroe on 2016-10-26
adjust test targets and setup for py3
- 20. By Kevin W Monroe on 2016-10-26
sync latest charmhelpers
- 19. By Kevin W Monroe on 2016-09-29
return (dont die) if a syslog aggregator relation already exists. this can happen if multiple principal charms are colocated on the same machine. in this case, rsyslogd will already be configured on the machine, so just log the event and move on.
- 18. By Kevin W Monroe on 2016-09-26
simplify deployment test and move to xenial
-
Branch metadata
- Branch format:
- Branch format 7
- Repository format:
- Bazaar repository format 2a (needs bzr 1.16 or later)
- Stacked on:
- lp:charms/rsyslog-forwarder-ha | https://code.launchpad.net/~bigdata-dev/charms/xenial/rsyslog-forwarder-ha/trunk | CC-MAIN-2019-26 | refinedweb | 253 | 53.41 |
As I have done nothing like this with Python before I am rather lost, and have a few questions which hopefully someone can help me with.
Code: Select all
import tkinter as tk from tkinter import * class Example(Frame): def __init__(self): super().__init__() self.initUI() def initUI(self): self.master.title("Dash") self.pack(fill=BOTH, expand=1) canvas = Canvas(self) canvas.create_rectangle(10, 10, 60, 110, outline="#000", fill="#fb0") canvas.create_rectangle(80, 10, 130, 110, outline="#000", fill="#f50") canvas.create_rectangle(150, 10, 200, 110, outline="#000", fill="#05f") canvas.create_text(18, 140, anchor="w", font="Arial", text="Volts Charge Temp ") Dim = tk.IntVar() canvas.pack(fill=BOTH, expand=1) tk.Checkbutton(self.master, text="Dim OK", variable = Dim) #, tk.grid(row = 0, sticky = W) def main(): root = Tk() ex = Example() root.geometry("210x250+580+150") root.mainloop() if __name__ == '__main__': main()
Am I starting in the right direction?
Can I combine Bar gauges and checkbuttons in onw window? How?
Can I then paste filled rectangles of varying height from my loop of code into the existing rectangles for my bar gauge?
Thank you for any help. | https://www.raspberrypi.org/forums/viewtopic.php?f=32&t=248480 | CC-MAIN-2019-39 | refinedweb | 190 | 53.27 |
GETHOSTID(3) BSD Programmer's Manual GETHOSTID(3)
gethostid, sethostid - get/set unique identifier of current host
#include <unistd.h> long gethostid(void); int sethostid(long hostid);
The sethostid() function establishes a 32-bit identifier for the current processor that is intended to be unique among all UNIX systems in ex- istence. This is normally a DARPA Internet address for the local machine. This call is allowed only to the superuser and is normally performed at boot-time. gethostid() returns the 32-bit identifier for the current processor. This function has been deprecated. The hostid should be set or retrieved by use of sysctl(3).
gethostname(3), sysctl(3), sysctl(8)
The gethostid() and sethostid() syscalls appeared in 4.2BSD and were dropped in 4.4BSD.
32 bits for the identifier is too small. MirOS BSD #10-current June 2,. | http://www.mirbsd.org/htman/sparc/man3/sethostid.htm | CC-MAIN-2014-52 | refinedweb | 140 | 60.11 |
SAP HANA Studio, Display your Application
09/29/2018
You will learn
Now that you have data it’s time to display it in some way as well as enable access to your data.
DEPRECATED: SAP HANA XS Classic is deprecated as of SPS02. Please use XS Advanced, and learn about how to get started with the new mission Get Started with XS Advanced Development.
The first step is to create an
xsodatafile that enables you to access the table via OData or from another application.
The first step is to create an
xsodatafile that enables you to access the table via OData or from another application.
service namespace "codejam.mylocation.services" { "codejam.mylocation.data::geolocation.history" key generate local "GEN_ID"; }
Once you have saved and activated this file you should be able to access it via the web browser and ensure that everything is working.
Now that you can access the data it is time to enhance your
index.htmlpage with a bit of magic in the way of Google Maps.
<!DOCTYPE html> <html> <head> <title>Map Locations</title> <meta name="viewport" content="initial-scale=1.0"> <meta charset="utf-8"> <style> #map { height: 80%; } html, body { height: 100%; margin: 0; padding: 0; } </style> <script src=""></script> </head> <body> <div id="map"></div> <script> var map; function initMap() { map = new google.maps.Map(document.getElementById('map'), { center: {lat: -34.397, lng: 150.644}, zoom: 8 }); $.ajax({ url: '/codejam/mylocation/services/geo.xsodata/geolocation.history?&format=json', async:false, dataType: 'json', success: function(data) { for(var i = 0; i < data['d']['results'].length; i++) { jsonDate = data['d']['results'][i]['date']; var date = new Date(parseInt(jsonDate.substr(6))); var latLng = new google.maps.LatLng(data['d']['results'][i]['lat'],data['d']['results'][i]['lon']); var marker = new google.maps.Marker({ position: latLng, map: map }); } }, failure: function(errMsg) { console.log(errMsg); } }); } </script> <script src=""></script> </body> </html>
- With a bit of magic you should now be able to see your data on a map once you have saved and activated it to the server. | https://developers.sap.com/westbalkans/tutorials/studio-display-project.html | CC-MAIN-2019-22 | refinedweb | 343 | 58.28 |
.
.
I found Ryan Tomayko's How I Explained REST to My Wife very clever and amusing, but one bit left me begging for more imagination..
Ay-ay-ay. I don't know any non-techie who would be satisfied with such an explanation. And I think the idea of alternative representations is one of the least geeky ideas Ryan is trying to communicate to his wife, so it seems a slam-dunk for a more interesting example.
My wife knows that if she goes to on her computer
she gets a listing of the TV channels in a table on the Web page. What
if our TV suddenly gained Web capabilities? When you went to on the TV you should probably get the live feed for
the TV Guide preview channel, the one with a scrolling list on what's on
each channel of the boob tube. Of course, since it's a Web-stylie TV,
forget channel numbers. Each item in the list should be a link you can
just actuate with your remote. Click to jump to, say when you saw that
the new John Legend video was playing. And you'd get the live channel
there, since it's a Web-stylie TV, rather than the Web page bragging
about the video. So there you go: live TV as an alternative
representation of the resource. And you know that wifey's going to
get any example that includes John Legend (OK, for some wives substitute
Justin Timberlake).
If your phone were a web-stylie phone, you could go to (better have that on speed dial) and have some
robot voice reciting the "what's on now" for the channels. And so the
audio message is an alternative representation of the resource. Oh, and
going a bit Jetsons with the whole thing, if the TV Guide site started
throwing up 404s on your web-stylie TV or phone, and you wanted to go to
the office to give them a piece of your mind in person, you could hail a
web-stylie taxi cab and tell it and, you guessed
it, you'll be whisked to headquarters. But pay attention, now, the
representation of the resource is probably not the destination building,
in this case, but rather the street address or directions to that
location as retrieved from the URL in the
web-stylie cab. Or something like that.
Now we're talking alternative representations. Rather than
remembering the channel number, hot-line phone number and physical
location for TV Guide, it's all available web-style from the URL. Of course, as long anyone needs to deal with
"aitch-tee-tee-pee-colon-slash-slashes" ain't no way this scenario is
playing out in real life. But then again neither is Ryan's example--Web
services. Oooooh!
I recently needed some code to quickly scrape the metadata from XHTML Web pages, so I kicked up the following code:
import amara XHTML1_NS = u'' PREFIXES = { u'xh': XHTML1_NS } def get_xhtml_metadata(source): md = {} for node in amara.pushbind(source, u'/xh:html/xh:head/*', prefixes=PREFIXES): if node.localName == u'title': md[u'title'] = unicode(node) if node.localName == u'link': linkinfo = dict([ (attr.name, unicode(attr)) for attr in node.xml_xpath(u'@*') ]) md.setdefault(u'links', []).append(linkinfo) elif node.xml_xpath(u'self::xh:meta[@name]'): md[node.name] = unicode(node.content) return md if __name__ == "__main__": import sys, pprint source = sys.argv[1] pprint.pprint(get_xhtml_metadata(source))
So, for example, scraping planet XML:
$ python xhtml-metadata.py {u'links': [{u'href': u'planet.css', u'media': u'screen', u'rel': u'stylesheet', u'title': u'Default', u'type': u'text/css'}, {u'href': u'/index.rdf', u'rel': u'alternate', u'title': u'RSS', u'type': u'application/rss+xml'}], u'title': u'Planet XMLhack: Aggregated weblogs from XML hackers and commentators'}
...simple - just don't use script in XSLT unless you really really really have to. Especially on the server side - XSLT script and ASP.NET should never meet. Use XSLT extension objects instead. As simple as it is.
—Oleg Tkachenko—"XSLT scripting (msxsl:script) in .NET - pure evil"""
Amen, f'real. When XSLT 1.1 first emerged the first thing that jumped out from the spec and punched me in the face was the embedded script facility. I made a fuss about it:.
I even worked with some like-minded folk to put together a petition. I have no idea whether that was instrumental in any way, but soon enough XSL 1.1 was dead and replaced with XSLT 2.0, which was built on XPath 2.0 and thus had other big problems, but at least no xsl:script.
xsl:script does live on in some implementations, and notably MSXML, as you can see from Oleg's post. You can also see some of the problems. XSLT and many more general-purpose languages make for uncomfortable fit and it can be hard for platform developers and users make things work smoothly and reliably. More important than memory leaks, script-in-xsl is a huge leak of XSLT's neat abstraction, and I think this makes XSLT much less effective. For one thing users are tempted to take XSLT to places where it does not fit. XSLT is not a general-purpose language. At the same time users tend not to learn good XSLT design and techniques because they scripting becomes an escape hatch. So an script user in XSLT generally cripples the language at the same time he is over-using it. An unfortunate combination indeed.
Oleg advocates XSLT extensions rather than scripting, which is correct, but I do want to mention that once you get used to writing extensions, it can be easy to slip into habits as bad as scripting. I've never been tempted to implement a Python scripting extension in 4XSLT, which would be easy, but that didn't stop me from going through a phase of overusing extensions. I think I've fully recovered, and the usage pattern I definitely recommend is to write the general-purpose code in a general-purpose language (Python, C#, whatever) and then call XSLT for the special and narrow purpose of transforming XML, usually for the last mile of presentation. It seems obvious, and yet it's a lesson that seems to require constant repetition.
Bruce D'Arcus commented on my entry "Creating JSON from XML using XSLT 1.0 + EXSLT", and following up on his reply put me on a bit of a journey. Enough so that the twists merit an entry of their own.
Bruce pointed out that libxslt2 does not support the
str:replace function. This recently came up in the EXSLT mailing list, but I'd forgotten. I went through this thread. Using Jim's suggestion for listing libxslt2 supported extensions (we should implement something like that in 4XSLT) I discovered that it doesn't support
regex:replace either. This is a serious pain, and I hope the libxslt guys can be persuaded to add implementations of these two very useful functions (and others I noticed missing).
That same thread led me to a workaround, though. EXSLT provides a bootstrap implementation of
str:replace, as it does for many functions. Since libxslt2 does support the EXSLT functions module, it's pretty easy to alter the EXSLT bootstrap implementation to take advantage of this, and I did so, creating an updated replace.xsl for processors that support the Functions module and
exsl:node-set. Therefore a version of the JSON converter that does work in libxslt2 (I checked) is:
<?xml version="1.0" encoding="UTF-8"?> <xsl:transform xmlns: <xsl:import >
One more thing I wanted to mention is that there was actually a bug in 4XSLT's
str:replace implementation. I missed that fact because I had actually tested a variation of the posted code that uses
regex:replace. Just before I posted the entry I decided that the Regex module was overkill since the String module version would do the trick just fine. I just neglected to test that final version. I have since fixed the bug in 4Suite CVS, and you can now use either
str:replace or
regex:replace just fine. Just for completeness, the following is a version of the code using the latter function:
<?xml version="1.0" encoding="UTF-8"?> <xsl:transform xmlns: <xsl:output <func:function <xsl:param <func:result <>
The article “Generate JSON from XML to use with Ajax”, by Jack D Herrington, is a useful guide to managing data in XML on the server side, and yet using JSON for AJAX transport for better performance, and other reasons. The main problem with the article is that it uses XSLT 2.0. Like most cases I've seen where people are using XSLT 2.0, there is no reason why XSLT 1.0 plus EXSLT doesn't do the trick just fine. One practical reason to prefer the EXSLT approach is that you get the support of many more XSLT processors than Saxon.
Anyway, it took me all of 10 minutes to cook up an EXSLT version of the code in the article. The following is listing 3, but the same technique works for all the XSLT examples.
<?xml version="1.0" encoding="UTF-8"?> <xsl:transform xmlns: >
I also converted the code to a cleaner, push style from what's in the article.
I updated my old overview page of XSLT processors APIs in Python with an MSXML 4.0 example I found on comp.lang.python today.
For the past few months in my day job (consulting for Sun Microsystems) I've been working on what you can call a really big (and hairy) enterprise mashup. I'm in charge of the kit that actually does the mashing-up. It's an XML pipeline that drives merging, processing and correction of data streams. There are a lot of very intricately intersecting business rules and without the ability to make very quick ad-hoc reports from arbitrary data streams, there is no way we could get it all sorted out given our aggressive deadlines.
This project benefits greatly from a side task I had sitting on my hard drive, and that I've since polished and worked into the Amara 1.1.9 release. It's a command-line tool called trimxml which is basically a reporting tool for XML. You just point it at some XML data source and give it an XSLT pattern for the bits of interest and optionally some XPath to tune the report and the display. It's designed to only read as much of the file as needed, which helps with performance. In the project I discussed above the XML files of interest range from 3-100MB.
Just to provide a taste using Ovidiu Predescu's old Docbook example, you could get the title as follows:
trimxml book/bookinfo/title
Since you know there's just one title you care about you can make sure trimxml stops looking after it finds it
trimxml -c 1 book/bookinfo/title
-c is a count of results and you can set it to other than 1, of course.
You can get all titles in the document, regardless of location:
trimxml title
Or just the titles that contain the string "DocBook":
trimxml title "contains(., 'DocBook')"
The second argument is an filtering XPath expression. Only nodes that satisfy that condition are reported.
By default each entire matching node is reported, so you get an output
such as "". You can specify
something different to display for each match using the
-d flag. For
example, to just print the first 10 characters of each title, and not
the
title tags themselves, use:
trimxml -d "substring(., 0, 10)" title
There are other options and features, and of course you can use the tool on local files as well as Web-based files.
In another useful development in the 4Suite/Amara world, we now have a Wiki.
With 4Suite, Amara, WSGI.xml, Bright Content and the day job I have no idea when I'll be able to get back to working on Akara, so I finally set up some Wikis for 4Suite.org. The main starting point is:
Some other useful starting points are
As a bit of an extra anti-vandalism measure I have set the above 3 entry pages for editing only by 4Suite developers. [...] Of course you can edit and add other pages in usual Wiki fashion. You might want to start with which is a collaborative addendum to the official FAQ.
Earlier this year I posted an off-hand entry about a scam call I received. I guess it soon got a plum Google spot for the query "Government grants scam" and it's been getting almost one comment a day ever since. Today I came across a comment whose author was requesting permission to use the posting and sibling comments in a book.
I have written a book on Winning Grants, titled "The Grant Authority," which includes a chapter on "Avoiding Grant Scams." It is in final stages of being (self)- published. I want to include comments and complaints about government grant scams on this Copia blog. I think the book's readers will learn alot from them.
How can I get permission to include written comments on this blog site in this book?
I'd never really thought about such a matter before. I e-mailed the correspondent permission, based on Copia's Creative Commons Attribution licensing, but considering he seemed especially interested in the comments, I started wondering. I don't have some warning on the comment form that submitted comments become copyright Copia's owners and all that, as I've seen on some sites. If I really start to think about things I also realize that our moderating comments (strictly to eliminate spam) might leave us liable for what others say. It all makes me wonder whether someone has come up with a helpful (and concise) guide to IP and tort concerns for Webloggers. Of course, I imagine such a read might leave my hair standing on end so starkly that I'd never venture near the 21st century diarist's pen again.
BTW, for a fun battle scene viewed in the cold, claret light of pedantry, inquire as to the correct plural of "conundrum". | http://copia.posthaven.com/tag/web?page=2 | CC-MAIN-2017-13 | refinedweb | 2,402 | 70.94 |
# $mol_func_sandbox: hack me if you might!.
Hello, I'm Jin, and I… want to play a game with you. Its rules are very simple, but breaking them… will lead you to victory. Feel like a hacker getting out of the JavaScript sandbox in order to read cookies, mine bitcoins, make a deface, or something else interesting.

<https://sandbox.js.hyoo.ru/>
And then I'll tell you how the sandbox works and give you some ideas for hacking.
How it works
============
The first thing we need to do is hide all the global variables. This is easy to do — just mask them with local variables of the same name:
```
for( let name in window ) {
context_default[ name ] = undefined
}
```
However, many properties (for example, `window.constructor`) are non-iterable. Therefore, it is necessary to iterate over all the properties of the object:
```
for( let name of Object.getOwnPropertyNames( window ) ) {
context_default[ name ] = undefined
}
```
But `Object.getOwnPropertyNames` returns only the object's own properties, ignoring everything it inherits from the prototype. So we need to go through the entire chain of prototypes in the same way and collect names of all possible properties of the global object:
```
function clean( obj : object ) {
for( let name of Object.getOwnPropertyNames( obj ) ) {
context_default[ name ] = undefined
}
const proto = Object.getPrototypeOf( obj )
if( proto ) clean( proto )
}
clean( win )
```
And everything would be fine, but this code falls because, in strict mode, you can not declare a local variable named `eval`:
```
'use strict'
var eval // SyntaxError: Unexpected eval or arguments in strict mode
```
But use it — allowed:
```
'use strict'
eval('document.cookie') // password=P@zzW0rd
```
Well, the global eval can simply be deleted:
```
'use strict'
delete window.eval
eval('document.cookie') // ReferenceError: eval is not defined
```
And for reliability, it is better to go through all its own properties and remove everything:
```
for( const key of Object.getOwnPropertyNames( window ) )
delete window[ key ]
```
Why do we need a strict mode? Because without it, you can use `arguments.callee.caller` to get any function higher up the stack and do things:
```
function unsafe(){ console.log( arguments.callee.caller ) }
function safe(){ unsafe() }
safe() // ƒ safe(){ unsafe() }
```
In addition, in non-strict mode, it is easy to get a global namespace just by taking `this` when calling a function not as a method:
```
function get_global() { return this }
get_global() // window
```
All right, we've masked all the global variables. But their values can still be obtained from the primitives of the language. For example:
```
var Function = ( ()=>{} ).constructor
var hack = new Function( 'return document.cookie' )
hack() // password=P@zzW0rd
```
What to do? Delete unsafe constructors:
```
Object.defineProperty( Function.prototype , 'constructor' , { value : undefined } )
```
This would be enough for some ancient JavaScript, but now we have different types of functions and each option should be secured:
```
var Function = Function || ( function() {} ).constructor
var AsyncFunction = AsyncFunction || ( async function() {} ).constructor
var GeneratorFunction = GeneratorFunction || ( function*() {} ).constructor
var AsyncGeneratorFunction = AsyncGeneratorFunction || ( async function*() {} ).constructor
```
Different scripts can run in the same sandbox, and it won't be good if they can affect each other's, so we freeze all objects that are available through the language primitives:
```
for( const Class of [
String , Number , BigInt , Boolean , Array , Object , Promise , Symbol , RegExp ,
Error , RangeError , ReferenceError , SyntaxError , TypeError ,
Function , AsyncFunction , GeneratorFunction ,
] ) {
Object.freeze( Class )
Object.freeze( Class.prototype )
}
```
OK, we have implemented total fencing, but the price for this is a severe abuse of runtime, which can also break our own application. That is, we need a separate runtime for the sandbox, where you can create any obscenities. There are two ways to get it: via a hidden frame or via a web worker.
Features of the worker:
* Full memory isolation. It is not possible to break the runtime of the main application from the worker.
* You can't pass your functions to the worker, which is often necessary. This restriction can be partially circumvented by implementing RPC.
* The worker can be killed by timeout if the villain writes an infinite loop there.
* All communication is strictly asynchronous, which is not very fast.
Frame features:
* You can pass any objects and functions to the frame, but you can accidentally grant access to something that you wouldn't.
* An infinite loop in the sandbox hangs the entire app.
* All communication is strictly synchronous.
Implementing RPC for a worker is not tricky, but its limitations are not always acceptable. So let's consider the option with a frame.
If you pass an object to the sandbox from which at least one changeable object is accessible via links, then you can change it from the sandbox and break our app:
```
numbers.toString = ()=> { throw 'lol' }
```
But this is still a flower. The transmission in the frame, any function will immediately open wide all doors to a cool-hacker:
```
var Function = random.constructor
var hack = new Function( 'return document.cookie' )
hack() // password=P@zzW0rd
```
Well, the proxy is coming to the rescue:
```
const safe_derived = ( val : any ) : any => {
const proxy = new Proxy( val , {
get( val , field : any ) {
return safe_value( val[field] )
},
set() { return false },
defineProperty() { return false },
deleteProperty() { return false },
preventExtensions() { return false },
apply( val , host , args ) {
return safe_value( val.call( host , ... args ) )
},
construct( val , args ) {
return safe_value( new val( ... args ) )
},
}
return proxy
})
```
In other words, we allow accessing properties, calling functions, and constructing objects, but we prohibit all invasive operations. It is tempting to wrap the returned values in such proxies, but then you can follow the links to an object that has a mutating method and use it:
```
config.__proto__.__defineGetter__( 'toString' , ()=> ()=> 'rofl' )
({}).toString() // rofl
```
Therefore, all values are forced to run through intermediate serialization in JSON:
```
const SafeJSON = frame.contentWindow.JSON
const safe_value = ( val : any ) : any => {
const str = JSON.stringify( val )
if( !str ) return str
val = SafeJSON.parse( str )
return val
}
```
This way only objects and functions that we passed there explicitly will be available from the sandbox. But sometimes you need to pass some objects implicitly. For them, we will create a `whitelist` in which we will automatically add all objects that are wrapped in a secure proxy, are neutralized, or come from the sandbox:
```
const whitelist = new WeakSet
const safe_derived = ( val : any ) : any => {
const proxy = ...
whitelist.add( proxy )
return proxy
}
const safe_value = ( val : any ) : any => {
if( whitelist.has( val ) ) return val
const str = JSON.stringify( val )
if( !str ) return str
val = SafeJSON.parse( str )
whitelist.add( val )
return val
}
```
And in case the developer inadvertently provides access to some function that allows you to interpret the string as code, we'll also create a `blacklist` listing what can't be passed to the sandbox under any circumstances:
```
const blacklist = new Set([
( function() {} ).constructor ,
( async function() {} ).constructor ,
( function*() {} ).constructor ,
eval ,
setTimeout ,
setInterval ,
])
```
Finally, there is such a nasty thing as `import()`, which is not a function, but a statement of the language, so you can not just delete it, but it allows you to do things:
```
import( "https://example.org/" + document.cookie )
```
We could use the `sandbox` attribute from the frame to prohibit executing scripts loaded from the left domain:
```
frame.setAttribute( 'sandbox' , `allow-same-origin` )
```
But the request to the server will still pass. Therefore, it is better to use a more reliable solution — to stop the event-loop by deleting the frame, after getting all the objects necessary for running scripts from it:
```
const SafeFunction = frame.contentWindow.Function
const SafeJSON = frame.contentWindow.JSON
frame.parentNode.removeChild( frame )
```
Accordingly, any asynchronous operations will produce an error, but synchronous operations will continue to work.
As a result, we have a fairly secure sandbox with the following characteristics:
* You can execute any JS code.
* The code is executed synchronously and does not require making all functions higher up the stack asynchronous.
* You can't read data that you haven't granted access to.
* You can't change the behavior of an application that uses the sandbox.
* You can't break the functionality of the sandbox itself.
* You can hang the app in an infinite loop.
But what about infinite loops? They are quite easy to detect. You can prevent this code from being passed at the stage when the attacker enters it. And even if such a code does get through, you can detect it after the fact and delete it automatically or manually.
If you have any ideas on how to improve it, [write a telegram](https://t.me/mam_mol).
Links
=====
* <https://sandbox.js.hyoo.ru/> — online sandbox with examples of potentially dangerous code.
* <https://calc.hyoo.ru/> — a spreadsheet that allows you to use custom JS code in cells.
* <https://showcase.hyoo.ru/> — other our apps. [Order a new one from us](https://t.me/nin_jin) if you want. | https://habr.com/ru/post/507648/ | null | null | 1,441 | 56.45 |
> > I have a function I would like to optimize (yes, I've done profiling). > > it transforms string like "3-7" into "3,4,5,6,7". > Try this: > function transform(str) > local _, _, from, to = string.find(str, "^(%d+)%-(%d+)$") > assert(from, "Transform needs a string in the format i-j") > local n1, n2 = tonumber(from), tonumber(to) > if n1 >= n2 then return from end > local template = from .. string.rep(", ", n2 - n1) > return (string.gsub(template, " ", function() n1 = n1 + 1; return n1 > end)) > end Hmm, interesting trick -- using closure in repl fn! I 'd never thought about this. > Philippe had a good idea about memoising results; the above can be > improved (slightly) by memoising the template string. (It only helps > on largish ranges.) > -- A useful tool: > function memoise(fn) > return setmetatable({}, { > __index = function(t, k) > local rv = fn(k) > t[k] = rv > return rv > end > }) > end [ ... mor cool stuff skipped ... ] > The test for n1 >= n2 is actually unnecessary, but that counts on a > particular behaviour of string.rep. In fact, the conversion of from > and to to numbers is also unnecessary (if the comparison is removed), > because it will happen automatically. So making those changes, we > could reduce the code to the following: (shorter, if not faster) At the end, I must say that I've learned a lot about Lua idioms and tips while solving this optimisation problem. Many thanks to you and others for their participation and very enlightening comments! -- Regards, max. | http://lua-users.org/lists/lua-l/2003-07/msg00053.html | crawl-001 | refinedweb | 245 | 64.91 |
Here is my code,don't ask about the stupid class names,it's from an exercise.
the header:
class Hen{ public: class Nest{ int go; public: friend Hen; class Egg{ int a; public: friend Nest; friend Hen; }; }; int test(Hen::Nest::Egg *xa); }
the cpp file
#include "chesire.h"
int test(Hen::Nest::Egg *xa){
return 1;
}
There are the errors:
error C2628: 'Hen' followed by 'int' is illegal (did you forget a ';'?)
error C3874: return type of 'main' should be 'int' instead of 'Hen'
1> chesire.cpp
error C2628: 'Hen' followed by 'int' is illegal (did you forget a ';'?)
error C2440: 'return' : cannot convert from 'int' to 'Hen'
What is wrong here? I can't find 1 single thing wrong with the function! | http://www.gamedev.net/topic/640369-weird-class-problems/ | CC-MAIN-2014-42 | refinedweb | 125 | 80.62 |
Introduction
This is a tutorial for calculating factorial of number using while loop in java. The program is given below that takes the number from user and calculates factorial and prints it. The program is not extendable. Go enjoy the program. Lets begin………..
What’s a factorial?
You may have studied factorial in your maths book. The factorial is usually denoted by n! .i.e 5! is factorial of 5. The factorial is calculated as:- 5! =1x2x3x4x5.
Program for calculating factorial of number in java.
//import Scanner as we require it. import java.util.Scanner; // the name of our class its public public class Factorial { //void main public static void main (String[] args) { //declare int int i=1,no,fact=1; //Declare input as scanner Scanner input = new Scanner(System.in); //Take input System.out.println("Enter Number :"); no = input.nextInt(); //while loops while(i<=no) { fact=fact*i; i++; } System.out.println("Factorial = "+fact); } }
Output
Enter Number :
5
Factorial = 120
How does it work
- You enter the number.
- The factorial is calculated using while loop
- The factorial is printed.
Extending it
The program cannot be extended.
Explanation.
- Import the Scanner.
- Declare the class as public
- Add the void main function
- Add system.out.println() function with the message to enter number.
- Declare input as Scanner.
- Take the inputs and save it in variables.
- Add a loop and calculate the factorial.
- Add system.out.println() function to print factorial.
At the end.
You learnt creating the Java program for Calculating factorial of number. So now enjoy the program.
Please comment on the post and share it.
And like it if you liked. | https://techtopz.com/java-programming-calculating-factorial-of-number/ | CC-MAIN-2019-43 | refinedweb | 272 | 54.69 |
Hi Niall, Niall Douglas wrote: >-----BEGIN PGP SIGNED MESSAGE----- >Hash: SHA1 > >I hate to be a bore but the patches I submitted here last time have >not been implemented in the latest CVS version. > > No bore: bug reports and suggestions are welcome. 8) >This means that quite simply pyste DOES NOT WORK on MSVC. > >Bug 1: Declaring an unnamed namespace in the pyste output causes >MSVC7.1 to gobble memory until infinity. > >My solution: Alter Save() in SingleCodeUnit.py: > > if declaration: > pyste_namespace = namespaces.pyste[:-2] > if pyste_namespace: > fout.write('namespace %s {\n\n' % >pyste_namespace) > fout.write(declaration) > if pyste_namespace: > fout.write('\n}// namespace %s\n' % >pyste_namespace) > fout.write(space) > >This simply doesn't write out a namespace declaration if it's >unnamed. > > Sorry, but I replied to your message, and suggested that you try out the --pyste-ns option. So you could run pyste with --pyste-ns=pyste or something, and the namespace wouldn't be empty anymore. I believe that MSVC does support namespaces, right? ;) If that doesn't work, I think your approach (not writing the declarations inside a namespace at all if no namespace is specified) is feasible. >Bug 2: Unnamed enums are still being called "$_xx". MSVC does not >like symbols starting with $. Furthermore the export_values() >facility is still not being employed. > >My solution: Alter Export() in EnumExporter.py: > > def Export(self, codeunit, exported_names): > if not self.info.exclude: > indent = self.INDENT > in_indent = self.INDENT*2 > rename = self.info.rename or self.enum.name > full_name = self.enum.FullName() > if rename[0:2] == "$_" or rename == '._0': > global uniqueTypeCount > codeunit.Write('module', code) > exported_names[self.enum.FullName()] = 1 > > I'm really sorry about that, this one passed through. I implemented it in CVS, but not verbatim as in here. First, only unnamed enums are exported with export_values() by default. If you want a normal enum to be exported, you have to call export_values on it: color = Enum('color',...) export_values(color) The reason is that if we made it so that all enums were exported with export_values, then it would break code that expected the old behaviour. But I must say that I prefer enums with export_values, since they mirror better in Python the same semathics of enums in C++. Thanks a lot for the feedback Niall! Nicodemus. Ps: I also implemented the change so that any \ will be translated to / before being passed to gccxml. | https://mail.python.org/pipermail/cplusplus-sig/2003-September/005259.html | CC-MAIN-2014-10 | refinedweb | 398 | 61.73 |
Working on a version of Conway's Game of Life using 2d arrays and when trying too calculate the sum of each cell's "neighbors", I keeping getting blocked by the nil values.
def neighbor_count
grid.each_with_index do |row, idx|
row.each_with_index do |column, idx2|
[grid[idx - 1][idx2 - 1], grid[idx - 1][idx2], grid[idx - 1][idx2 + 1],
grid[idx][idx2 - 1], grid[idx][idx2], grid[idx][idx2 + 1],
grid[idx + 1][idx2 - 1], grid[idx + 1][idx2], grid[idx + 1][idx2 + 1]
].compact.sum
end
end
end
block (2 levels) in neighbor_count': undefined method
nil values are only a symptom of the illness. Don't treat the symptoms, get rid of the problem! Which is that you are violating array bounds.
.each_with_index enumerates all indexes from the first to the last. And so
idx + 1 on the last index will trigger this out-of-bounds situation. And
idx - 1 on the first will produce an unexpected value instead of an error, which will impact your calculations. Good luck debugging that. :)
Put some guard checks in your code, to make sure you never go out of bounds.
Just to be absolutely clear, the problem is not that
grid[idx + 1][idx2] is nil and messes up your calculations. It is that
grid[idx + 1] is nil! And, naturally, you can't do
nil[idx2]. That's the error. | https://codedump.io/share/hHsw4m4hrbIi/1/how-do-i-convert-nil-to-0-in-an-array-to-get-the-sum | CC-MAIN-2017-43 | refinedweb | 229 | 67.45 |
While performing linear search we need to go through each and every node thus it takes more time and hence increases the complexity.
In this python program we are going to use binary search which is an effective way to search an item from the list. In binary search our work become easy as the half of the array is ignored in first comparison. All we have to do is to take out the middle index and compare it with element we need to find.
The main point is that the binary search will perform on sorted array.
Algorithm:
- The total number of items to enter in the array is asked from the user.
- Array is created.
- Input the number from the user which we need to search.
- A function is created naming “binsearch()”
- The function checks if the middle element is greater or smaller than the element that we need to find.
- if middle<element then binsearch(A,m,b,q) is called.
- if middle>element then binsearch(A,a,m,q) is called.
- else if element==middle
print “element found”
Else print” not found”
Exit
Code:
def binsearch(A, a, b, q): m=int((a+b)/2) if (q> A[m]): binsearch(A, m, b, q) elif (q<A[m]): binsearch(A, a, m, q) elif (q== A[m]): print("the element is found on index:" ) print(m+1) else: print("item not present in the list") n=int(input("enter the total no. of elements: "r)) print("enter the elements:") A=[] for i in range(n): a=int(input()) A.append(a) print("enter the no. to be searched:") q=int(input()) binsearch(A, 0, n-1, q)
Output:
enter the total no. of elements: 6 enter the elements: 1 2 3 4 5 6 enter the no. to be searched: 4 the element is found on index: 4
Report Error/ Suggestion | https://www.studymite.com/python/examples/binary-search-program-in-python/ | CC-MAIN-2021-39 | refinedweb | 315 | 71.95 |
>
I have a cage in my game hanging from the ceiling and I want it to dangle around. Therefor I wrote the following Script:
using UnityEngine; using System.Collections;
public class Cage : MonoBehaviour {
private float xrot;
private float zrot;
private float time;
void Start()
{
xrot = (Random.Range (0, 1000) / 1000.0f) * 10f - 5f;
zrot = (Random.Range (0, 1000) / 1000.0f) * 10f - 5f;
time = (Random.Range (0, 1000) / 1000.0f) * Mathf.PI;
}
void Update () {
if(Time.time - time >= 0){
transform.RotateAround (new Vector3 (transform.position.x, 12, transform.position.z), new Vector3 (1, 0, 0), xrot * Time.deltaTime * Mathf.Cos (Time.time - time));
transform.RotateAround (new Vector3 (transform.position.x, 12, transform.position.z), new Vector3 (0, 0, 1), zrot * Time.deltaTime * Mathf.Cos (Time.time - time));
}
}
}
The Code choses a random x and z rotation speed (positive or negative) and uses a cosine to rotate the cage. When the cage is instantiated it has a rotation of 270 and hangs straight down from the ceiling which is the reason why is has to have full speed at the beginning since it starts a the lowest and fastest part of the swing motion. The problem was all cages started swinging at the same time so I used "time" to start them at a different velocities. Now I do not know how to correct the start rotation of the cage. Having a random starting velocity demands the corresponding start rotation since those values are connected. I solved the poblem not very nice by letting them start to swing after the random delay but if you now look at the cage at the very beginning of the game they first do not move and then start abruptly which I do not want :D So is there any way how I can calculate the starting rotation for my cages with the value of "time"?
Thanks in advance! :D
Answer by NoseKills
·
Jun 07, 2015 at 07:55 PM
If all else fails, I think you could run a random amount of Updates for each cage when they are created to "fast forward" their position to a random spot.
Answer by Dave-Carlile
·
Jun 07, 2015 at 08:04 PM
How about using a Quaternion.Slerp? It takes from rotation, to rotation parameters, as well as t which you change from 0 to 1 over time. When t is 0 it returns from rotation, and when t is 1 it returns to rotation, and when t is in between it returns the appropriate value between from and to.
from rotation
to rotation
t
from
to
Your script would need to change t over time, and when it reaches 1 you would reverse the direction and move it back toward 0, and then reverse and back toward 1, and so on.
Once you have that set up, all you need to do for a random start position is to start t at a random value between 0 and 1. It would also be simple to have different swing speeds by varying how fast you modify t.
Edit: You could also use a tweening library from the asset store for changing the value of t, or mechanim, or it's fairly simple to handle yourself in Update or a Coroutine or something..
Making a bubble level (not a game but work tool)
1
Answer
Flip over an object (smooth transition)
3
Answers
How to move object based on rotation
1
Answer
How do I rotate my player on the Z axis to a set value?,How do I rotate player smoothly on "Z" axis?
0
Answers
GetKeyUp not registering
2
Answers | https://answers.unity.com/questions/982074/trigonometric-functions.html | CC-MAIN-2019-26 | refinedweb | 604 | 71.85 |
1 package org.tigris.scarab.util.build.l10nchecker;2 3 /* ================================================================4 * Copyright (c) 2005 import java.util.HashMap ;50 import java.util.Map ;51 52 /**53 * Static utility class to hold the issue message types as defined by the 54 * task55 */56 public class L10nIssueTemplates57 {58 private static Map issueMessageTypes = null;59 60 /**61 * Set the severity of an issue.62 * 63 * In case this function is called the first time, 64 * issueMessageTypes is initialized.65 * 66 * @param _clazz The class to set the severity for.67 * 68 * @param messageType The new severity69 */70 public static void setMessageType (Class _clazz, int messageType)71 {72 if (issueMessageTypes == null)73 {74 issueMessageTypes = new HashMap ();75 }76 issueMessageTypes.put (_clazz, new Integer (messageType)); 77 }78 79 /**80 * Retrive the message type of the class representing this issue.81 * 82 * @param _clazz The class representing the issue (usually one83 * of org.tigris.scarab.util.build.l10nchecker.issues.*Issue84 * 85 * @return The message type (see {@link L10nIssue} for details.86 * In case the class is not represented in issueMessageTypes, the 87 * function returns null. 88 */89 public static int getMessageType (Class _clazz)90 {91 if (issueMessageTypes == null || !issueMessageTypes.containsKey(_clazz))92 {93 return L10nIssue.MESSAGE_IGNORE;94 }95 return ((Integer )issueMessageTypes.get(_clazz)).intValue();96 }97 98 /**99 * Clear all definitions100 */101 public static void reset ()102 {103 issueMessageTypes = null;104 }105 }106
Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ | | http://kickjava.com/src/org/tigris/scarab/util/build/l10nchecker/L10nIssueTemplates.java.htm | CC-MAIN-2016-44 | refinedweb | 242 | 58.48 |
the adda-tagged posts
adda's project page
. adda is the programming language of addx,
a secure and friendly system
that promotes computing as a
universal 2nd language
.
having everyone -- including children
and seniors -- as its intended
client
requires that addx provide a language
that is not
straying far from current conventions
while at the same time improving
on natural language
with an elegant system that has
fewer rules for the
beginner to learn .
. a language should also be iconic like math,
never using a keyword when a symbol would be
more easily understood;
though neither should it be using
a morass of arcane symbols
just for
the sake of abbreviation .
. there should be a binary version of
the language,
-- a common intermediate language --
and tools that allow
humans to
work in the language of their choice .
-- because freedom,
foremost,
is the Americium Dream .
hist
12.10.5: page is up with abstract .
. I still haven't got the syntax completed yet;
this is what I have so far .
2012.10: adda/summary of the syntax
. this lang tries to use the more popular features
of the syntax found in math, internet url's, and english .
url's look like this:
protocol://subdomain.domain.domType/folder/subfolder/file.fileType#component .
. folders are actually pointers to records
whose fields are files or folders,
hence (/) becomes the pointer dereferencing operator .
. likewise,
the (.) specifies either a type identifier or a record component;
and the (#) specifies an array component
(an array is defined as a sequence of items,
and all these item share the same type;
so f.txt is a container for type txt,
or for a sequence of .txt .
. the internet uses (:) somewhat like english does,
to make an association, or pairing;
adda uses (:) for labels and mappings
( labeling means a symbol is declared,
and after the (:), it is defined;
a mapping means showing the names of items
rather than relying on list position for item identification);
examples include:
# constant declaration:
symbol: (some value the symbol will constantly have );
# variable declaration:
symbol: .aType`= (initial value);
# a case statement:
truthFunction ? (true: doWhenTrue; false: doWhenFalse);
# parameter association:
functionCall( color: red, mood: blue ) .
type"type:
. we need a special type identifier
for declaring that a symbol is a type identifier,
and that identifier is .type; for example:
integer.type: .< (list functions for accessing integers) >;
TrafficLight.type: .{ black, red, yellow, green};
RealPoint.type: .( x, y: .real ) -- this is a record .
. finally,
to help us distinguish the 2 uses of (.):
{ record component selection, type specification},
we need to ensure that record components are
never allowed to have the same name as type identifiers .
. one way to avoid that is by capitalizing type id's,
and never capitalizing component names,
but adda should not enforce this way,
because a less restrictive way to make this distinction
is to depend on prior declarations:
. when you find x.y, then either y should be a type id,
or x should have already been declared to be a record .
inline typing:
. to avoid having to declare symbols before use,
we have a syntax convention for
combining declaration with first use:
. only the first use of a symbol
need include its type specification;
other uses of it can still include its type,
but it's unnecessary, and if you spell the name wrong
that has the effect of declaring another symbol;
examples:
x.integer --. this declares x to be type integer;
x: .integer --. this does the same thing .
. here's a for-loop declaring a control variable:
for x.integer in 0..n: print x;
--. we could have said (print x.integer)
and it would have meant the same thing .);
eg, (^(x.symbol).list: for terminal in f(s):
s in grammar? be terminal else be '(s) )
-- (be x) means (return x)
but it's exiting the nearest enclosing expression
not the innermost function literal .
. a function is implicitly describing a set of points,
and sets can be generated:
( f(x:1..3) .
function/pre- and post-conditions:
(x.t).t | (conditions) .
constants:
. to define something as a constant,
we can use (symbol: value) or (symbol.type: value)
instead of (symbol.type`= value).
TypeId#value vs ValueType$valueLiteral:
. types can be thought of as arrays of values,
so t#x evals x as one of t's values .
. B$10 = the value 10 as parsed by the binary value type .
Color#green works only if green is not redefined;
because in the expression (aType#x),
x can be any expression, not just a literal;
Color$green is always unambiguous;
because the ($) says what follows is
one of a type Color's literals .
. RGB$(0,0,1) -- RGB color model for color literals .
12.3.23: 8.21: adda/oop/syntax/sections for use, is, has[as]:
<
use ,,,imports;
is ,,,supertypes; -- establishing type compatibility .
as ,,, subtypes;
type.( my class vars: t )
.( my public ivars: t )
#( more public ivars -- names from t1 ).t3
#( even more pub ivars --names from t2 ).t4
>;
the use.section:
. why would an interface need to import a module?
they may refer to types other than self
that are not in the top-level library .
the as.section:
. a supertype has or controls its subtypes;
eg, number has all of these types:
int, real, quotient, complex, modular .
review the syntax:
obj`= value -- ivar initialization;
type::value -- fully-qualified instance value
obj.component -- public ivar;
obj`body/local -- private ivars;
obj#(component expression),
type::.component -- access default initial value
type.component -- public class var;
type#(component expression), -- same;
type`body/local -- private class vars;
obj`message(...); instance message call;
type::`message -- access message's body
type`attribute -- class message call;
obj(obj callable? apply it to this expression),
function obj -- call with this obj
type::function(...) -- access function's body
obj.type(select a variant of this type)
-- declare obj to be of this type . | http://amerdreamdocs.blogspot.com/p/adda-adds-automatica-programming-lang.html | CC-MAIN-2018-30 | refinedweb | 974 | 57.37 |
Glyph Lefkowitz wrote: > Matt. Heh, strictly speaking the original setup.py was reverted and my setup.py was moved out of the way to setup_egg.py. But yes, a parallel setuptools setup.py would be the correct answer for now. I might start scanning through the Twisted code to see what would need to change to allow eggs. The obvious one is __file__ references but to use namespace packages we'd need to clear everything out of twisted.__init__. If I get a chance, I'll post my findings back here. > >? pkg_resources has a couple of functions for locating a resource (file or directory, i think) by name, from inside a package. Obviously, you can't guarentee you can write to those resources though ;-) - Matt -- __ / \__ Matt Goodall, Pollenation Internet Ltd \__/ \ w: __/ \__/ e: matt at pollenation.net / \__/ \ t: +44 (0)113 2252500 \__/ \__/ / \ Any views expressed are my own and do not necessarily \__/ reflect the views of my employer. | http://twistedmatrix.com/pipermail/twisted-python/2005-October/011665.html | CC-MAIN-2014-41 | refinedweb | 166 | 75.2 |
If you're new to Python
and VPython: Introduction
A VPython tutorial
Pictures of 3D objects
What's new in VPython 6
VPython web site
VPython license
Python web site
Math module (sqrt etc.)
Numpy module (arrays)
For basic examples of mouse handling,
see Click example or Drag
example.
The simplest mouse interaction is to wait for the user to click before proceeding in the program. Suppose the 3D display is in scene, the default window created by VPython. Here is a way to wait for a mouse click, which is defined as the mouse button being pressed and released without moving the mouse (the event occurs when the mouse button is released):
ev = scene.mouse.getclick()
You can use the package of information contained in the variable "ev":
sphere(pos=ev.pos, radius=0.1)
Starting with VPython 6, an alternative for waiting for a mouse click is to wait for various kinds of mouse or keyboard events:
scene.waitfor('keydown') # wait for keyboard key press
scene.waitfor('keyup') # wait for keyboard key release
scene.waitfor('click keydown') # click or keyboard
As with scene.mouse.getclick(), you can obtain a package of information about the event that caused the end of the wait, with the added information of whether it was a mouse or keyboard event:
from visual import *
box()
scene.waitfor('click')
print('You clicked.')
ev = scene.waitfor('click keydown')
if ev.event == 'click':
print('You clicked at', ev.pos)
else:
print('You pressed key '+ev.key)
The first statement, scene.waitfor('click'), makes the display be the focus of keyboard input.
The object scene.mouse contains lots of information about the current state of the mouse, which you can interrogate at any time:
pos The current 3D position
of the mouse cursor; scene.mouse.pos. VPython
always chooses a point in the plane parallel to the screen and passing through scene.center. (See Projecting
mouse information onto a given plane for other options.)
pick The nearest object
in the scene which falls under the cursor, or None. At present curve, label, helix, extrusion, and faces cannot be picked. The picked object is scene.mouse.pick.
pickpos The 3D point on
the surface of the picked object which falls under the cursor, or None; scene.mouse.pickpos.
camera The read-only current
position of the camera as positioned by the user, scene.mouse.camera.
For example, mag(scene.mouse.camera-scene.center) is the distance from the center of the scene to the current position of the
camera. If you want to set the camera position and direction by program, use scene.forward and scene.center,
described in Controlling Windows.
ray A unit vector pointing
from camera in the direction of the mouse cursor. The points under the mouse
cursor are exactly { camera + t*ray for t>0}.
The camera and ray attributes together define all of the
3D points under the mouse cursor.
project() Projects position
onto a plane. See Projecting mouse position onto a given
plane.
alt = True if the ALT key
is down, otherwise False
ctrl = True if the CTRL key
is down, otherwise False
shift = True if the SHIFT
key is down, otherwise False
Different kinds of mouse
The mouse routines can handle a three-button mouse,
with "left",
"right", and "middle" buttons. For systems with
a two-button mouse, the "middle" button consists of the
left and right buttons pressed together. For the Macintosh one-button
mouse, the right button is invoked by holding down the Command key (normally used for rotating the camera view),
and the middle button is invoked by holding down the Option key (normally used for zooming the camera view).
Design for left-button events if possible
VPython continues to provide the basic mouse event functionality.
Polling and callback
There are two different ways to get a mouse event, "polling" and "callback". In polling, you continually check scene.mouse.events to see whether any events are waiting to be processed, and you use scene.mouse.getevent() mouse and keyboard events, and we will discuss it first. Programs that use polling will continue to work, but you cannot mix polling and callback approaches: you must use one or the other in a program.
Handling events with callbacks
Here is a simple example of how to use callbacks to process click events:
from visual import *
s = sphere(color=color.cyan)
def
change():
if s.color == color.cyan:
s.color = color.red
else:
s.color = color.cyan
scene.bind('click', change)
We define a "function" named "change". Then we "bind" this function to click events occurring in the display named "scene". Whenever VPython detects that a click event has occurred, VPython calls the bound function, which in this case toggles the sphere's color between cyan and red.
This operation is called a "callback" because with scene.bind you register with VPython that you want to be called back any time there is a click event. Here are the built-in events that you can specify in a bind operation:
Mouse: click, mousedown, mousemove, mouseup
Keyboard: keydown, keyup
Other: redraw, draw_complete
The event 'mousedown' or 'mouseup' occurs when you press or release the left button on the mouse, and the 'mousemove' event occurs whenever the mouse moves, whether or not a button is depressed. The events 'keydown' and 'keyup' are discussed in the keyboard', change)
The example program eventHandlers.py illustrates the callback method for handling many kinds of events.
Details of the event
You can get detailed information about the event by writing the callback function like this (note the variable 'evt' in parentheses):
def info(evt):
print(evt.event, evt.pos, evt.button)
Here we specify an argument in the definition of the callback function ('evt' in this case). When the function is called due to a specified event happening, VPython sends the function the information contained in scene.mouse, plus 'event', which is the name of the event that triggered the callback, such as 'mousedown' or 'click'. The name of the argument need not be 'evt'; use whatever name you like. In addition to evt.event and evt.button, there is further event information in the form of evt.press, evt.click, evt.drag, evt.drop, and evt.release (see details in the section on polling), but this information is more relevant when using polling rather than callbacks to get events.
You can optionally have VPython send the callback function an additional argument. Here is a revised version of the color-change example, in which in the bind operation we specify an additional argument to be sent, in this case a list of objects whose colors should toggle:
from visual import *
s = sphere(pos=(-2,0,0), color=color.cyan)
b = box(pos=(2,0,0), color=color.red)
def
change(evt, objects):
for obj in objects:
if obj.color == color.cyan:
obj.color = color.red
else:
obj.color = color.cyan
scene.bind('click', change, [s, b])
Right or middle button mouse events
Normally, only the left mouse button will trigger an event, but if you specify scene.userspin = False, so the right button is no longer bound to camera rotation, clicking with the right mouse button will cause a callback. Similarly, if you specify scene.userzoom = False, you can click with the middle button (or left+right buttons).
Unbinding
Suppose you executed scene.bind('mousedown mousemove', Drag), but now you no longer want to send mousemove events to that function. Do this:
scene.unbind('mousemove', Drag)
You can also leave a function bound but start and stop having events sent to it:
D = scene.bind('mousemove', Drag)
...
D.stop() # temporarily stop events going to Drag
...
D.start() # start sending events to Drag again
You can check whether the callback is in start or stop mode with D.enabled, which is True if the callback has been started and False if it has been stopped.
Custom events: triggers
It is possible to create your own event type, and trigger a callback function to do something. Consider the following example, where the event type is 'color_the_ball':
def clickFunc():
s = sphere(pos=scene.mouse.pos, radius=0.1)
scene.trigger('color_the_ball', s)
def ballFunc(newball):
newball.color=color.cyan
scene.bind('click', clickFunc)
scene.bind('color_the_ball', ballFunc)
box(pos=(1,0,0))
We bind click events to the function clickFunc, and we bind our own special event type 'color_the_ball' to the function ballFunc. The function clickFunc is executed when the user clicks the mouse. This function creates a small sphere at the location of the mouse click, then triggers an event 'color_the_ball', with the effect of passing to the function ballFunc the sphere object. Finally ballFunc applies a color to the sphere. (Obviously one could color the sphere in clickFunc; the example is just for illustration of the basic concept.).
The simplest polling mouse interaction is to wait for a mouse click:
scene.mouse.getclick() Wait for a mouse click. If you say m = scene.mouse.getclick(), the variable m gives information about the event. For example, m.pos is the location of the mouse at the time of the click event.
It is a useful debugging technique to insert scene.mouse.getclick() into your program at a point where you would like to stop temporarily to examine
the scene. Then just click to proceed.
In the Drag
example you will see how to use event-handling functions to process mouse events continuously.
events The number of events
(press, click, drag, or drop) which have been queued; e.g. scene.mouse.events.
scene.mouse.events = 0 may be used to discard
all input. No value other than zero can be assigned.
getevent() Obtains the
earliest mouse event and removes it from the input queue. If no events are
waiting in the queue (that is, if scene.mouse.events is zero), getevent() waits until the user enters
a mouse event (press, click, drag, or drop). getevent() returns an object with attributes similar to a mouse object: pos, button, pick, pickpos, camera, ray, project(), alt, ctrl, and shift.
These attributes correspond to the state of the mouse when the event took
place. For example, after executing mm = scene.mouse.getevent() you can look at the various properties of this event, such as mm.pos, mm.pick, mm.drag (see below), etc.
The getevent() function provides additional information, in addition to the usual information such as pos or pick:
press = 'left'
for a press event, or 'right'
or 'middle', or None. That is, if you execute mm = scene.mouse.getevent(), mm.press will be 'left', 'right', 'middle', or None. A press event occurs when a mouse button
is depressed.
click = 'left' for
a click event, or 'right' or 'middle', or None. A click event occurs when all mouse buttons
are released with no movement of the mouse. (This is also a release event.) Note that a click event happens when the mouse button is released. See Click example.
drag = 'left' for
a drag event, or 'right' or 'middle', or None;
in this case pos and other attributes correspond to the state
of the mouse at the time of the original press event, so as not to lose initial
position information. A drag event occurs when the mouse is moved
slightly after a press event, with mouse buttons still down. This can be used to signal the beginning of dragging an
object. See Drag example.
drop = 'left' for
a drop event, or 'right' or 'middle', or None. A drop event occurs when the mouse buttons
are released after a drag event. (This is also a release event.)
release = 'left'
following click and drop events, indicating which button was released,
or 'right' or 'middle', or None. A release event occurs when the mouse buttons
are released after a click or drag event.
button = 'left', 'right', or 'middle'.
If you are interested in every type of event (press, click,
drag, and drop), you must use events and getevent().
If you are only interested in left click events (left button down and up without
significant mouse movement), you can use clicked and getclick():
clicked The number of left
clicks which have been queued; e.g. scene.mouse.clicked.
This does not include a count of nonclick events (press, drag, or drop).
getclick() Obtains the
earliest mouse left click event (pressing the left button and releasing it
in nearly the same position) and removes it from the input queue, discarding
any earlier press, drag, or drop events. If no clicks are waiting in the queue
(that is, if scene.mouse.clicked is zero), getclick() waits until the user clicks. Otherwise getclick() is just like getevent().
Between a drag event (start of dragging) and a drop event
(end of dragging), there are no mouse events but you can examine the continuously
updated position of the mouse indicated by scene.mouse.pos.
Normally, dragging with right or middle button represents
spin or zoom, and is handled automatically by VPython, so you can check for
left-button drag or drop events simply by checking whether drag or drop is true (in Python, a nonempty string
such as 'left' is true, None is false). Unless you disable user zoom (scene.userzoom
= False), press, click, drag, drop,
and release with the middle button are invisible
to your program. Unless you disable user spin (scene.userspin
= False), press, click, drag, drop,
and release with the right button are invisible
to your program.
Projecting
mouse position onto a given plane
Here is a way to get the mouse position relative to a particular
plane in space:
temp = scene.mouse.project(normal=(0,1,0), point=(0,3,0))
if temp: # temp is None if no intersection with plane
ball.pos = temp
This projects the mouse cursor onto a plane that is perpendicular
to the specified normal. If point is not
specified, the plane passes through the origin. It returns a 3D position,
or None if the projection of the mouse misses the plane.
In the example shown above, the user of your program will
be able to use the mouse to place balls in a plane parallel to the xy plane,
a height of 3 above the xy plane, no matter how the user has rotated the point
of view.
You can instead specify a perpendicular distance d from the origin to the plane that is perpendicular to the specified normal.
The example above is equivalent to
temp = scene.mouse.project(normal=(0,1,0), d=3)
Pausing for mouse or keyboard'). | http://vpython.org/contents/docs/mouse.html | CC-MAIN-2014-15 | refinedweb | 2,416 | 73.58 |
Can I use C# 5/6/7 in Unity?
Yes, you can.
Unity has been stuck with CLR 2.0 for a very long time, but almost all the latest C# features do not require the latest versions of CLR. Microsoft and Mono compilers can compile C# 5/6/7 code for CLR 2.0 if you explicitly ask them to do so.
Late binding (
dynamic) feature that came with C# 4.0 still won't be available in Unity.
Ok, what should I do?
For C# 6.0
Copy
CSharp60Supportfolder from this repository (or the downloads page) to your Unity project. It should be placed in the project's root, next to the
Assetsfolder.
Import
CSharp60Support.unitypackageinto your project. It's located inside
CSharp60Supportfolder.
Reimport Allor just restart the editor, whatever is faster in your case.
[Optional] On Windows, run
/CSharp60Support/ngen install.cmdonce with administrator privileges. It will precompile csc.exe, pdb2mdb.exe and mcs.exe using Ngen that will make compilation in Unity a bit faster.
For C# 7.0 preview
On MacOS download and install Mono 4.6+. On Windows download and install .Net Framework 4.6.2+.
Copy
CSharp70Supportfolder from this repository (or the downloads page) to your Unity project. It should be placed in the project's root, next to the
Assetsfolder.
Import
CSharp70Support.unitypackageinto your project. It's located inside
CSharp70Supportfolder.
Reimport Allor just restart the editor, whatever is faster in your case.
[Optional] On Windows, run
/CSharp70Support/ngen install.cmdonce with administrator privileges. It will precompile csc.exe, pdb2mdb.exe and mcs.exe using Ngen that will make compilation in Unity a bit faster.
Thus, the project folder is the only folder that changes. All the other projects will work as usual.
How does it work?
/Assets/CSharp vNext Support/Editor/CSharpVNextSupport.dllis an editor extension that modifies the editor's internal data via reflection, telling it to use the alternative C# compiler (
/CSharpXXSupport/CSharpCompilerWrapper.exe). If it doesn't exist, the stock compiler will be used.
CSharpCompilerWrapper.exereceives and redirects compilation requests from Unity to one of the actual C# compilers using the following rules:
If
CSharp70Supportfolder exists and contains
Roslynfolder, then Roslyn C# 7.0 preview compiler will be used;
If
CSharp60Supportfolder contains
Roslynfolder, then Roslyn C# 6.0 compiler will be used;
else if
CSharp60Supportfolder contains
mcs.exe, then this Mono C# 6.0 compiler will be used;
else the stock compiler will be used (
/Unity/Editor/Data/Mono/lib/mono/2.0/gmcs.exe).
To make sure that
CSharpCompilerWrapper.exe does actually work, check its log file:
UnityProject/CSharpXXSupport/compilation.log
Response (.rsp) files
If you want to use a response file to pass extra options to the compiler (e.g.
-unsafe), the file must be named
CSharpCompilerWrapper.rsp.
What platforms are "supported"?
This hack seems to work on the major platforms:
- Windows (editor and standalone)
- MacOS (editor and standalone)
- Android
- iOS
On MacOS the Roslyn compiler cannot create debug information files (.pdb) that Unity can consume. If you compile your code with Roslyn on MacOS you won't be able to debug it.
Since WebGL doesn't offer any multithreading support, AsyncBridge and Task Parallel Library are not available for this platform. Caller Info attributes are also not available, because their support comes with AsyncBridge library.
AsyncBridge/TPL stuff is also not compatible with Windows Store Application platform (and probably all the platforms that use .Net runtime instead of Mono runtime) due to API differences between the recent versions of .Net Framework and the ancient version of TPL (System.Threading.dll) that comes with AsyncBridge. Namely, you can't use async/await, Caller Info attributes and everything from System.Threading.dll (concurrent collections for example).
Making builds from command line
If you want to build your project from command line the simple way, it works as usual. For example,
unity.exe -buildWindows64Player <pathname>
However, if you use Build Player Pipeline, you'll have to take extra steps, because otherwise the old compiler will be used and the build will fail:
- Make a copy of
CSharpCompilerWrapper.exeand place it into
/Unity/Editor/Data/Mono/lib/mono/2.0on Windows or
/unity.app/contents/mono/lib/mono/2.0on Mac OS X.
- Rename this copy to
smcs.exe.
- Make sure that in the Player Settings the API Compatibility Level option is set to
NET 2.0.
Other known issues
C# 5.0/6.0 is not compatible with Unity Cloud Build service for obvious reason.
Using Mono C# 6.0 compiler may cause occasional Unity crashes while debugging in Visual Studio -
IL2CPP doesn't support exception filters added in C# 6.0 (ExceptionFiltersTest.cs).
If a MonoBehaviour is declared inside a namespace, the source file should not contain any C# 6.0-specific language constructions before the MonoBehaviour declaration. Otherwise, the editor won't recognize the script as a MonoBehaviour component.
Bad example:
using UnityEngine; using static System.Math; // C# 6.0 syntax! namespace Foo { class Baz { object Qux1 => null; // C# 6.0 syntax! object Qux2 { get; } = null; // C# 6.0 syntax! } class Bar : MonoBehaviour { } // "No MonoBehaviour scripts in the file, or their names do not match the file name." }
Good example:
using UnityEngine; namespace Foo { class Bar : MonoBehaviour { } // ok class Baz { object Qux1 => null; object Qux2 { get; } = null; } }
There's a bug in Mono C# 6.0 compiler, related to null-conditional operator support (NullConditionalTest.cs):
var foo = new[] { 1, 2, 3 }; var bar = foo?[0]; Debug.Log((foo?[0]).HasValue); // error CS1061: Type `int' does not // contain a definition for `HasValue' and no extension method // `HasValue' of type `int' could be found. Are you missing an // assembly reference?
Mono compiler thinks that
foo?[0]is
intwhile it's actually
Nullable<int>. However,
bar's type is deduced correctly -
Nullable<int>.
License
All the source code is published under WTFPL version 2.
Want to talk about it?
Random notes
Roslyn C# 6.0 compiler was taken from VS 2015 installation. C# 7.0 compiler was taken from VS 15 Preview 5 installation.
mcs.exe,
pdb2mdb.exeand its dependencies were taken from Mono 4.4.1.0 installation. pdb2mdb.exe that comes with Unity is not compatible with the assemblies generated with Roslyn compiler.
AsyncBridge library contains a set of types that makes it possible to use async/await in projects that target CLR 2.0. It also provides Caller Info attributes support. For more information, check this blog post.
If you use async/await inside Unity events (Awake, Start, Update etc) you may notice that continuations (the code below
awaitkeyword) are executed in background threads. Most likely, this is not what you would want. To force
awaitto return the execution to the main thread, you'll have to provide it with a synchronization context, like all WinForms and WPF applications do.
Check
UnityScheduler.cs,
UnitySynchronizationContext.csand
UnityTaskScheduler.csexample implementations located in the project. These classes create and register several synchronization contexts for the Unity's main thread, so async/await could work the way they do in regular WinForms or WPF applications.
For more information about what synchronization context is, what it is for and how to use it, see this set of articles by Stephen Toub: one, two, three. | https://bitbucket.org/alexzzzz/unity-c-5.0-and-6.0-integration/src | CC-MAIN-2017-51 | refinedweb | 1,202 | 61.12 |
LinkedHashMap is). In last few tutorials we have discussed about HashMap and TreeMap. This class is different from both of them:
HashMapdoesn’t maintain any order.
TreeMapsort the entries in ascending order of keys.
LinkedHashMapmaintains the insertion order.
Let’s understand the
LinkedHashMap with the help of an example:
import java.util.LinkedHashMap; import java.util.Set; import java.util.Iterator; import java.util.Map; public class LinkedHashMapDemo { public static void main(String args[]) { // HashMap Declaration LinkedHashMap<Integer, String> lhmap = new LinkedHashMap<Integer, String>(); //Adding elements to LinkedHashMap lhmap.put(22, "Abey"); lhmap.put(33, "Dawn"); lhmap.put(1, "Sherry"); lhmap.put(2, "Karon"); lhmap.put(100, "Jim"); // Generating a Set of entries Set set = lhmap.entrySet(); // Displaying elements of LinkedHashMap Iterator iterator = set.iterator(); while(iterator.hasNext()) { Map.Entry me = (Map.Entry)iterator.next(); System.out.print("Key is: "+ me.getKey() + "& Value is: "+me.getValue()+"\n"); } } }
Output:
Key is: 22& Value is: Abey Key is: 33& Value is: Dawn Key is: 1& Value is: Sherry Key is: 2& Value is: Karon Key is: 100& Value is: Jim
As you can see the values are returned in the same order in which they got inserted.
First of all this collection document is awesome !!
can you pls add more examples and methods for LinkedHashMap ? As you did for other collections ?
thanks !! | https://beginnersbook.com/2013/12/linkedhashmap-in-java/ | CC-MAIN-2017-34 | refinedweb | 220 | 54.59 |
Details
Description
Issue Links
- incorporates
HADOOP-8374 Improve support for hard link manipulation on Windows
- Resolved
HADOOP-8409 Fix TestCommandLineJobSubmission and TestGenericOptionsParser to work for windows
- Resolved
HADOOP-8411 TestStorageDirecotyFailure, TestTaskLogsTruncater, TestWebHdfsUrl and TestSecurityUtil fail on Windows
- Resolved
HADOOP-8412 TestModTime, TestDelegationToken and TestAuthenticationToken fail intermittently on Windows
- Resolved
HADOOP-8414 Address problems related to localhost resolving to 127.0.0.1 on Windows
- Resolved
HADOOP-8421 Verify and fix build of c++ targets in Hadoop on Windows
- Resolved
HADOOP-8424 Web UI broken on Windows because classpath not setup correctly
- Resolved
HADOOP-8486 Resource leak - Close the open resource handles (File handles) before throwing the exception from the SequenceFile constructor
- Resolved
HADOOP-8534 Some tests leave a config file open causing failure on windows
- Resolved
-
MAPREDUCE-4201 Getting PID not working on Windows. Termination of Task/TaskJVM's not working
- Resolved
MAPREDUCE-4263 Use taskkill /T to terminate tasks on Windows
- Resolved
MAPREDUCE-4321 DefaultTaskController fails to launch tasks on Windows
- Resolved
MAPREDUCE-4368 TaskRunner fails to start jars when the java.library.path contains a quoted path with embedded spaces
- Resolved
MAPREDUCE-4369 Fix streaming job failures with WindowsResourceCalculatorPlugin
- Resolved
HADOOP-8454 Fix the ‘chmod =[perm]’ bug in winutils
- Resolved
HADOOP-8544 Move an assertion location in 'winutils chmod'
- Resolved
HADOOP-8440 HarFileSystem.decodeHarURI fails for URIs whose host contains numbers
- Closed
HADOOP-8101 Access Control support for Non-secure deployment of Hadoop on Windows
- Resolved
MAPREDUCE-3898 Hadoop for Windows - Interfacing with Windows to manage MR tasks
- Resolved
MAPREDUCE-4203 Create equivalent of ProcfsBasedProcessTree for Windows
- Resolved
MAPREDUCE-4260 Use JobObject to spawn tasks on Windows
- Resolved
HADOOP-8645 Stabilize branch-1-win
- Resolved
- is related to
HADOOP-8139 Path does not allow metachars to be escaped
- Open
MAPREDUCE-4322 Fix command-line length abort issues on Windows
- Resolved
HBASE-6814 [WINDOWS] HBase on Windows
- Open
- relates to
HADOOP-8900 BuiltInGzipDecompressor throws IOException - stored gzip size doesn't match decompressed size
- Closed
-
-
-
Activity
- All
- Work Log
- History
- Activity
- Transitions
Hi Aaron, thank you for the fix-ups. That is appreciated.
It is totally makes sense to designate 1.1.0 the target version. We have been developing the patch against the 1.0.
To your other question - yes, the goal is to commit to both the branch and trunk once the community discussion, feedback incorporation, and review processes have been completed to the community's satisfaction.
--Alexander
Got it. Thanks for the explanation. I've also added a target version of 0.24.0, which corresponds to trunk.
+1 looking forward to seeing hadoop run in more places for more people!
This is great for Hadoop - it expands the set of platforms and market for Hadoop.
I also suggest that we break the work down into sub-jiras so that the community can review smaller chunks. If you post a full patch set I can suggest sub-jiras.).
+1 Looking forward to these patches, and getting rid of cygwin requirement.
The latest Windows Azure Java Client is available at:
We have uploaded the patch for Azure Storage (i.e. XStore) support within HDFS. Our goal has been to enable Azure Storage to be used in an analogous fashion to AWS's S3. This is designed to supplement intra-cluster "local" HDFS in our Hadoop on Azure service.
Looked at hadoop-8079.patch - quite small.
Did not see any windows commands corresponding for the bin/hadoop series of bash scripts.
Did you forget to attach those files or forgot to do "svn add" or "git add" before generating the patch?
After going through the patch here is a proposal for jira breakdown
- Security
src/core/org/apache/hadoop/io/SecureIOUtils.java
src/core/org/apache/hadoop/security/UserGroupInformation.java
src/core/org/apache/hadoop/security/ShellBasedUnixGroupsMapping.java
src/core/org/apache/hadoop/security/Credentials.java
src/core/org/apache/hadoop/fs/RawLocalFileSystem.java
src/core/org/apache/hadoop/util/ProcessTree.java
- General Utils - DU, DF, windows shell
src/core/org/apache/hadoop/fs/DU.java
src/core/org/apache/hadoop/fs/DUHelper.java
src/core/org/apache/hadoop/fs/DF.java
src/core/org/apache/hadoop/fs/FileUtil.java
src/core/org/apache/hadoop/util/Shell.java
- Interfacing with OS to make MR tasks
src/mapred/org/apache/hadoop/mapred/TaskController.java
src/mapred/org/apache/hadoop/mapred/Child.java
src/mapred/org/apache/hadoop/mapred/JvmManager.java
src/mapred/org/apache/hadoop/mapred/TaskRunner.java
src/mapred/org/apache/hadoop/mapred/TaskTracker.java
src/mapred/org/apache/hadoop/mapred/ReduceTask.java
src/mapred/org/apache/hadoop/mapred/TaskLog.java
src/mapred/org/apache/hadoop/mapred/DefaultTaskController.java
- Azure file system support
The azure patches will go here.
Sanjay, the taxonomy you propose is pragmatic and reasonable. I believe that it helps segment the changes in the right way to ease discussion and review.
@Sanjay,
Thanks for creating the sub jiras. I am going to create a branch 1.0-win off the 1.0 branch so that we can quickly iterate on the patches on a branch and then see if it all falls into place. It'll also help setup a windows build on that branch as well so that folks can take a look at it.
Add patches to match sub-JIRAs breakdown.
security.patch -> 1. Security
general-utils-windows.patch -> 2. General Utils - DU, DF, windows shell
mapred-tasks.patch -> 3. Interfacing with OS to make MR tasks
windows-cmd-scripts.patch -> cmd scripts
Thanks David.
Can you please upload the patches to respective jiras? Eg: windows-cmd-scripts.patch to.
Also note that you'll have to grant license to Apache for inclusion. You'll see this option when you try uploading a patch.
I just create a branch-1-win on svn. Ill try and create a windows build within the next couple of days.
Why not do this trunk first like we do with other new features? branch-1 is the sustaining branch..
Overall, it's a good initial start, though it could be made a bit more elegant and easier to test.
Testing is what worries me here, as even if the release process & Jenkins test on Windows, there's no guarantee anyone else will, which increases the likelihood of a regression sneaking in. The smaller amount of platform-specific code the better
- Incomplete full use of ASF guidelines; all if() clauses should be curly braced for better long-term maintenance esp. w/ patches.
- Some of the changes seem IDE-triggered, not OS-related; these should be removed as they complicate other patches and versions.
- I'm not sure about "temp hack to copy file" comment above a method in FileUtil; it's a bit worrying.
- Even when exceptions are swallowed, a log at debug level is wise. Just in case something really, really unexpected happens.
- The patches imply that cygwin will never be used again. Is this something everyone is happy with? I don't personally have any...
- I'm curious why the SymLink code opts to copy a file instead of using ::CreateSymbolicLink(); I assume that an extended org.apache.hadoop.fs.HardLink class will also avoid ::CreateHardLink(). I know these aren't exported via the Java runtime, but is there no way they could be invoked by executing something? If that's not possible, then this is a good time to add ln to the windows command line.
- stop-slave.cmd and its siblings use the phrase "Microsoft Hadoop Distribution"
This should not be in the ASF source, and will fall foul of the ASF trademark rules were it to be used in products not released by the ASF
This is a good opportunity to do better abstraction and so make it possible to test a lot of the abstraction behaviour (e.g. the file copying), even on Linux, so ensuring that test coverage is higher across all platforms. For example, there is a lot of snippets like
String[] shellCmd = {(Path.WINDOWS)?"cmd":"bash", (Path.WINDOWS)?"/c":"-c", untarCommand.toString() };
And
return (WINDOWS)? new String[]{"cmd", "/c", "df -k " + dirPath + " 2>nul"}: new String[] {"bash","-c","exec 'df' '-k' '" + dirPath + "' 2>/dev/null"}; }
I could imagine something to generate a bash command or a wincommand that takes a list of args
String bashCommand(String[] args) { String[] command = new String[args+2]; command[0] = "bash"; command[1] = "-c"; //array copy here return command; } String winCommand(String[] args) { String[] command = new String[args+2]; command[0] = "cmd"; command[1] = "/c"; //array copy here return command; } String command(String[] args) { return (!WINDOWS) bashCommand(args): winCommand(args); }
Similarly, quietBashCommand and quietWinCommand() would set up the null output. You could test at the low level bash/win command generation and very that what you got is what is expected; unit tests for all platforms.
While what you are suggesting helps, the code changes to
command( WINDOWS ? foo-windows : foo-bash)
Many (but not all) commands are inside Shell.java and fairly isolated from rest of code. An example is
public static String[] getGroupsForUserCommand(final String user) { //'groups username' command return is non-consistent across different unixes return (WINDOWS)? new String[] {"cmd", "/c", "id -Gn " + user}: new String [] {"bash", "-c", "id -Gn " + user}; }.
Regarding the org.apache.hadoop.fs.azurenative classes
- keys like "fs.azure.buffer.dir" need to be pulled out and made constants; the embedding of strings is something the main codebase is slowly moving away from. Some of the code does this, but not all.
- The code depends on microsoft-windowsazure-api 1.2.0 , which is in the maven repository. There's also a 0.2.0 version in there -any particular reason for not using the latest release?
- Testing? How is anyone working with this code going to use the fs. Is there S3-style remote access, or do you have to bring up a VM in the cluster?
- The catch of Exception and wrapping with AzureException is best set up so that IOException exceptions aren't caught and wrapped, as they match the signature. I don't know if the native API throws these, but adding an extra layer of nesting never helps with troubleshooting live systems.
It may be cleaner to keep the azure FS source tree outside the main hadoop code, and host it in a parallel hadoop-azurefs project with the extra dependency, and the extra output artifacts. Anyone who added a mvn or ivy dependency on hadoop-azurefs would get the -api JAR, and testing could be isolated. This could also be a good opportunity to do the same for KFS, which is under-tested in the current release process, and for any other DFS clients that people want in the codebase. Maybe the policy should be: if it is testable by anyone, put it in the hadoop source tree, but if not, the FS vendor has to do it. (I'm thinking of things like GPFS here and others, not just AzureFS)
Another nice feature would be for Win32/64 versions of the native libraries to be available alongside the Hadoop releases (and an OS/X version too). Integrating this into the main release process would be tricky, but a Windows VM with optimising 32 and 64 bit compilers could be used to do a simultaneous release
@Eli>Why not do this trunk first like we do with other new features? branch-1 is the sustaining branch.
Branch 1-win is just being used as a proof of concept for the patches. The trunk patches are expected to be provided and checked into trunk before this jira is completed.
I applied these patches and ran the tests. Looks like some tests are failing.
A fair number of tests are failing. I suggest that the team works on a "commit-then-review" in the branch-1-win and iterate to improve the solution and fix tests to get a working branch. Comments in the jiras will be addressed. Following that the team posts a set of small trunk-patches to make it convenient for review.
Add patch for the branch-1-win branch. The patch includes changes for all the sub-JIRAs. Note this is work in progress. Tests affected by these changes are still under review and test patches are forthcoming.
Close the open resource handles (File handles) before throwing the exception from the SequenceFile constructor
My suggestion is to use "System.nanoTime() / 1000000" instead of System.currentTimeMillis().
Changed Target Version to 1.3.0 upon release of 1.2.0. Please change to 1.2.1 if you intend to submit a fix for branch-1.2.
We've completed the intended scope of this issue, so I'm resolving it. Thank you to all contributors!
Hi Alexander, the "fix version" field should only be set once the change has been committed and the JIRA resolved. Thus, I've removed that field.
Since 1.0.0 has already been released, it's not an appropriate "target version." I've changed the target version to 1.1.0.
One question I have - you say in the description that you've developed the patch set against Hadoop 1.0, but that you'd like to refine the patch set until it can be committed to Hadoop trunk. Is the intention to commit this both to branch-1 and trunk? Or just trunk? | https://issues.apache.org/jira/browse/HADOOP-8079?focusedCommentId=13213873&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel | CC-MAIN-2015-35 | refinedweb | 2,219 | 55.64 |
Quickly Widgets Not in Dependencies
In the program I built, I use the line:
from quickly import prompts
The program works fine on my development machine. I can package it, and it works! But.. if I package it and move it to another machine, the program will not work. i run the program in debug mode using -vv and it tells me that it cannot import prompts. If I install quickly-widgets on the second machine, it works.
My question is: why doesn't it make quickly-widgets a dependency? and How can I solve this problem so quickly-widgets is imcluded in the package?
Thanks! Quickly is great.
Elmer
Question information
- Language:
- English Edit question
- Status:
- Solved
- For:
- Quickly Edit question
- Assignee:
- No assignee Edit question
- Solved by:
- E. E. Perry
- Solved:
- 2011-12-21
- Last query:
- 2011-12-21
- Last reply:
- 2011-12-10
I was able to force the dependency using
quickly configure dependency
but it almost seems like a bug. Quickly is great and it makes development very quick and easy, but I feel it should do better about including dependency written in the code.
Elmer
Whoops a typo there it should read
quickly configure dependencies
Elmer
Hi Elmer
My understanding of the design of quickly is that your project explicitly does *not* depend on quickly, and by extension quickly-widgets. Quickly is based on templates, it creates a stand alone source tree for you to build your project. Using this concept you would copy some modules from quickly-widgets into your source tree. The advantage of this is that you could create a debian package importable into a non-ubuntu distribution e.g. debian itself.
Of course, your source tree is not completely stand alone, it depends on python ,gtk but these are available in every compatible distribution.
The automatic dependency detection is in python-
Hey Tony,
Thanks for your answer. Yeah, I understand (and don't expect) quickly-widgets to be a dependency automatically, but as stated above, I imported quickly prompts in the code. So... somehow, the dependency was not made while packaging. I guess it is really no big deal since I can force the dependency. I'm wondering however, since the import doesn't mention quickly-widgets directly if that could be part of the problem. Perhaps, even, that the problem is not in quickly itself, but the disutils?
Anyway, this is not a nagging problem, since it is easily resolved, but think it needs to be documented so people are aware.
You make a good point about not using quickly-widgets if you plan for an app to be used in distributions other than Ubuntu.
Elmer
This sounds like a packaging bug to me ...
This question was expired because it remained in the 'Open' state without activity for the last 15 days. | https://answers.launchpad.net/quickly/+question/179904 | CC-MAIN-2017-51 | refinedweb | 472 | 62.27 |
When you use python 3.5 pickle library to save a python object to a file, you may encouter TypeError: file must have a ‘write’ attribute error. In this tutorial, we will introduce how to fix this error to help you save python object to a file.
Here is an example:
import pickle list = [1, 2, 3] pickle.dump(list, 'binary_list.bin')
Then you will get this error: TypeError: file must have a ‘write’ attribute
The function pickle.dump() is defined as:
pickle.dump(obj, file, protocol=None, *, fix_imports=True)
Here file is not the name of a file, it is a file object.
To fix this error, we should open a file then use pickle.dump().
The solution is here.
with open("binary_list.bin","wb") as f: pickle.dump(list, f)
Then you will find binary_list.bin file is created and python list is saved into this file. | https://www.tutorialexample.com/fix-python-pickle-typeerror-file-must-have-a-write-attribute-error-python-tutorial/ | CC-MAIN-2021-31 | refinedweb | 149 | 77.13 |
difference between stdio.h and cstdio
Discussion in 'C++' started by david wolf,
<cstdio> and fflush(stdout), Aug 10, 2004, in forum: C++
- Replies:
- 3
- Views:
- 15,834
- Ali Cehreli
- Aug 10, 2004
why is my iostream program so much slower than equivalent cstdio program?Mark, Dec 23, 2004, in forum: C++
- Replies:
- 20
- Views:
- 1,801
- Dave O'Hearn
- Dec 27, 2004
exec() and sending to STDIO and reading from STDIO, Oct 17, 2006, in forum: Java
- Replies:
- 18
- Views:
- 1,979
- crazzybugger
- Oct 22, 2006
#include <cstdio> and #include <stdio.h> in the same file!?, Jan 22, 2013, in forum: C++
- Replies:
- 2
- Views:
- 457 | http://www.thecodingforums.com/threads/difference-between-stdio-h-and-cstdio.459012/ | CC-MAIN-2015-22 | refinedweb | 107 | 76.76 |
Both python and ruby have a
yield which is thoroughly confusing to Java programmers and looks like the stuff that nightmares are made of. But eventually you realize that it isn't so bad after all. But before we deal with our nightmares, we to look at iterators.
Python for loops look so nifty thanks to iterators. Things like lists and tuples implement the iterator interface. Of course in python you don't need explicit interfaces like what we are used to in Java. However a class that implements
def __iter__(self): and
def next(self): can be iterated over. Here is a complete example from the python tutorial. | http://www.raditha.com/blog/archives/iterators-and-generators-and-yields.html | CC-MAIN-2017-47 | refinedweb | 109 | 66.64 |
Convert liquid templates to handlebars templates.
Follow this project's author, Jon Schlinkert, for updates on this project and others.
$ npm install --save liquid-to-handlebars
If you've ever seen a jekyll boilerplate, or another project that uses liquid templates and wished it was written in handlebars, this is your solution!
Please create an issue if you find a tag that doesn't convert correctly, and I'll add support. Thanks!
var convert = ;// Converts this liquidconsole;// To this handlebars//=> 'Price: ${{default product_price 2.99}}'
You will also need to include any missing handlebars helpers that provide similar functionality to the liquid filters that are being replaced. For example:
var hbs = ;hbs;var fn = hbs;console;//=> 'Price: $2.99'console;//=> 'Price: $4.99'
Once your liquid templates are converted to handlebars, if you attempt to render all of the templates with handlebars without any additional work, it's a good bet that you'll receive a bunch of errors from missing helpers.
Save yourself a bunch of time and frustration by following these steps:
Add the following to your app:
var hbs = ;// override handlebars' built-in `helperMissing` helper,// so that we can easily see which helpers are missing// and get them fixedhbs;
Now, when you run handlebars, if you see a message like this:
missing helper foo
You can either create the
foo helper from scratch, or use a helper library that already includes the helpers you need.
Any of the following libraries may be used, but the [liquid-filters][] library might be most useful (during migration, at least).
Examples
var hbs = ;var filters = ;var helpers = ;hbs;hbs;
There are many more examples in the docs folder, as well as test/fixtures and test/expected.
basic operators
From this liquid:
{% if product.type == "Shirt" or product.type == "Shoes" %}This is a shirt or a pair of shoes.{% endif %}
To this handlebars:
{{#if (or (is product.type "Shirt") (is product.type "Shoes"))}}This is a shirt or a pair of shoes.{{/if}}
boolean
From this liquid:
{% if settings.fp_heading %}<h1>{{ settings.fp_heading }}</h1>{% endif %}
To this handlebars:
{{#if settings.fp_heading}}<h1>{{ settings.fp_heading }}</h1>{{/if}}
case
From this liquid:
{% case handle %}{% when 'cake' %}This is a cake{% when 'cookie' %}This is a cookie{% else %}This is not a cookie/cake{% endcase %}
To this handlebars:
{{#is handle 'cake'}}This is a cake{{else is handle 'cookie'}}This is a cookie{{ else }}This is not a cookie/cake{{/is}}
Requires the "is" helper.
else
From this liquid:
{% if username and username.size > 10 %}Wow, {{ username }}, you have a long name!{% else %}Hello there!{% endif %}
To this handlebars:
{{#if (and username (gt username.size 10))}}Wow, {{ username }}, you have a long name!{{else}}Hello there!{{/if}}
contains
From this liquid:
{% if product.title contains 'Pack' %}This product's title contains the word Pack.{% endif %}
To this handlebars:
{{#if (contains product.title "Pack")}}This product's title contains the word Pack.{{/if}}
Basic loops
From this liquid:
<!-- if site.users = "Tobi", "Laura", "Tetsuro", "Adam" -->{% for user in site.users %}{{ user }}{% endfor %}
To this handlebars:
<!-- if site.users = "Tobi", "Laura", "Tetsuro", "Adam" -->{{#each site.users as |user|}}{{ user }}{{/each}}
Accessing specific items in arrays
From this liquid:
<!-- if site.users = "Tobi", "Laura", "Tetsuro", "Adam" -->{{ site.users[0] }}{{ site.users[1] }}{{ site.users[3] }}
To this handlebars:
<!-- if site.users = "Tobi", "Laura", "Tetsuro", "Adam" -->{{get site.users 0}}{{get site.users 1}}{{get site.users 3}}
From this liquid:
{{ "Ground control to Major Tom." | split: "" | reverse | join: "" }}
To this handlebars:
{{join (reverse (split "Ground control to Major Tom." "")) ""}}
Many more examples in the docs folder and unit tests.
This is a tool for converting projects that use liquid templates to use handlebars templates.
Why convert to handlebars?
A few reasons:
gh-pagesor other static sites using liquid. It would be nice if these were available in a templating language more friendly to javascript devs.
The tipping point was when I recently spent a few hours converting a liquid project over to handlebars by hand, and I realized I would need to repeat that process every time I found a liquid resource I wanted to use.
This converter took me a day to create, but I can now use any liquid resource with very little if any time spent on converting templates at all.
You might also be interested in these projects:
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
$ npm install && npm test
Jon Schlinkert
This file was generated by verb-generate-readme, v0.6.0, on September 21, 2017. | https://www.npmjs.com/package/liquid-to-handlebars | CC-MAIN-2017-43 | refinedweb | 790 | 66.64 |
Python is a versatile piece of kit straight out of the box. By itself it can do just about anything, from simple calculations, to automating a Twitter account, through to being a robot’s ‘brain’. However, we can make most jobs that we will do in Python a lot easier by using modules – sort of like Python add-on kits. These modules create groups of functions that make tasks quicker to write and perform. Without them, we’d have to write horribly lengthy code once we got beyond simple tasks.
To use a module, you must first install it onto your machine and then import it into any code that will use it. This article will take you through both steps:
Installing Modules
If you are using an Anaconda install of Python, the interface in Anaconda Navigator should have most modules that you are looking for available. Simply ensure that it is ticked and installed on your environment page.
If you are using another type of Python install, simply open up the terminal (your machine’s terminal, not Python) and run ‘pip install [MODULE NAME]’. Any issues that you run into at this point will be well-documented on Stack Overflow and Google, so give those a look if so.
Importing Modules
With our module installed on your machine and ready to go, we just need to import it. For this example, we’ll import the ‘math’ module, to give us access to the value for pi. Let’s see what happens without importing the module:
math.pi()
--------------------------------------------------------------------------- NameError Traceback (most recent call last) <ipython-input-1-9493078a23d9> in <module>() ----> 1 math.pi() NameError: name 'math' is not defined
import math print(math.pi)
3.141592653589793
Without importing the math module, Python obviously has no idea what it is. Once we import it, away we go!
With some modules, you will notice a convention to import them and give them different names. This is done with the ‘as’ keyword after our import:
import pandas as pd import numpy as np np.arange(0,10,2)
array([0, 2, 4, 6, 8])
Summary
Harvard’s Introduction to Computer Science course opens with the discussion that we are ‘standing on the shoulders of giants’, highlighting the work of programmers before us that have built languages, modules and tools that allow us to be more productive.The Python community is a perfect example of this, with thousands of modules available to us that make complex tasks a bit more manageable.
See some of these modules in action across data analysis, visualisation and web scraping. | https://fcpython.com/python-basics/python-modules | CC-MAIN-2021-43 | refinedweb | 431 | 69.52 |
Advertisement
A very good example illustrating adding swing com
A very good example illustrating adding swing compnents to an Applet qnd demonstrating event handling technique.I shall use it for teaching introductory Java.Thanks
bputhal
applet for windows
my applet is fonctionel apllet in the wodr fba
Applet - Applet
in details to visit....... the applet concept and HTML?
what is mean by swing? Hi friend
Good tutorials for beginners in Java
in details about good tutorials for beginners in Java with example?
Thanks.
Hi, want to be command over Java, you should go on the link and follow...Good tutorials for beginners in Java Hi, I am beginners in Java
swing to applet
swing to applet hi how i can isplay a java swing into applet java
thanks
Swing Button Example
Swing Button Example Hi,
How to create an example of Swing button in Java?
Thanks
Hi,
Check the example at How to Create Button on Frame?.
Thanks
applet - Applet
:
Thanks... in Java Applet.",40,20);
}
}
2) Call this applet with html code...applet i want a simple code of applet.give me a simple example
java - Applet
java what is applet? Hi Friend,
Please visit the following link:
Thanks
applet - Applet
information,visit the following link:
Thanks...*;
import java.awt.*;
public class CreateTextBox extends Applet implements
Java Swing Tutorials
in
Java Swing Applications.
Adding
an Icon to a JButton... how to create the
JTabbedPane container in Java Swing. The example... of GUI.
Chess Application In Java Swing
In the given example | http://roseindia.net/tutorialhelp/allcomments/46 | CC-MAIN-2015-27 | refinedweb | 252 | 58.48 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.